

Semi-Riemannian Geometry


Semi-Riemannian Geometry
The Mathematical Language of General Relativity

STEPHEN C. NEWMAN
University of Alberta
Edmonton, Alberta, Canada


This edition first published 2019. © 2019 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Stephen C. Newman to be identified as the author of this work has been asserted in accordance with law.

Registered Office: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office: 111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com. Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.


Limit of Liability/Disclaimer of Warranty: While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data
Names: Newman, Stephen C., 1952- author.
Title: Semi-Riemannian geometry : the mathematical language of general relativity / Stephen C. Newman (University of Alberta, Edmonton, Alberta, Canada).
Description: Hoboken, New Jersey : Wiley, [2019] | Includes bibliographical references and index.
Identifiers: LCCN 2019011644 (print) | LCCN 2019016822 (ebook) | ISBN 9781119517542 (Adobe PDF) | ISBN 9781119517559 (ePub) | ISBN 9781119517535 (hardcover)
Subjects: LCSH: Semi-Riemannian geometry. | Geometry, Riemannian. | Manifolds (Mathematics) | Geometry, Differential.
Classification: LCC QA671 (ebook) | LCC QA671 .N49 2019 (print) | DDC 516.3/73–dc23
LC record available at https://lccn.loc.gov/2019011644

Cover design: Wiley
Set in 10/12pt Computer Modern by SPi Global, Chennai, India
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1


To Sandra


Contents

Part I  Preliminaries

1  Vector Spaces
   1.1  Vector Spaces
   1.2  Dual Spaces
   1.3  Pullback of Covectors
   1.4  Annihilators

2  Matrices and Determinants
   2.1  Matrices
   2.2  Matrix Representations
   2.3  Rank of Matrices
   2.4  Determinant of Matrices
   2.5  Trace and Determinant of Linear Maps

3  Bilinear Functions
   3.1  Bilinear Functions
   3.2  Symmetric Bilinear Functions
   3.3  Flat Maps and Sharp Maps

4  Scalar Product Spaces
   4.1  Scalar Product Spaces
   4.2  Orthonormal Bases
   4.3  Adjoints
   4.4  Linear Isometries
   4.5  Dual Scalar Product Spaces
   4.6  Inner Product Spaces
   4.7  Eigenvalues and Eigenvectors
   4.8  Lorentz Vector Spaces
   4.9  Time Cones

5  Tensors on Vector Spaces
   5.1  Tensors
   5.2  Pullback of Covariant Tensors
   5.3  Representation of Tensors
   5.4  Contraction of Tensors

6  Tensors on Scalar Product Spaces
   6.1  Contraction of Tensors
   6.2  Flat Maps
   6.3  Sharp Maps
   6.4  Representation of Tensors
   6.5  Metric Contraction of Tensors
   6.6  Symmetries of (0, 4)-Tensors

7  Multicovectors
   7.1  Multicovectors
   7.2  Wedge Products
   7.3  Pullback of Multicovectors
   7.4  Interior Multiplication
   7.5  Multicovector Scalar Product Spaces

8  Orientation
   8.1  Orientation of R^m
   8.2  Orientation of Vector Spaces
   8.3  Orientation of Scalar Product Spaces
   8.4  Vector Products
   8.5  Hodge Star

9  Topology
   9.1  Topology
   9.2  Metric Spaces
   9.3  Normed Vector Spaces
   9.4  Euclidean Topology on R^m

10  Analysis in R^m
   10.1  Derivatives
   10.2  Immersions and Diffeomorphisms
   10.3  Euclidean Derivative and Vector Fields
   10.4  Lie Bracket
   10.5  Integrals
   10.6  Vector Calculus

Part II  Curves and Regular Surfaces

11  Curves and Regular Surfaces in R^3
   11.1  Curves in R^3
   11.2  Regular Surfaces in R^3
   11.3  Tangent Planes in R^3
   11.4  Types of Regular Surfaces in R^3
   11.5  Functions on Regular Surfaces in R^3
   11.6  Maps on Regular Surfaces in R^3
   11.7  Vector Fields along Regular Surfaces in R^3

12  Curves and Regular Surfaces in R^3_ν
   12.1  Curves in R^3_ν
   12.2  Regular Surfaces in R^3_ν
   12.3  Induced Euclidean Derivative in R^3_ν
   12.4  Covariant Derivative on Regular Surfaces in R^3_ν
   12.5  Covariant Derivative on Curves in R^3_ν
   12.6  Lie Bracket in R^3_ν
   12.7  Orientation in R^3_ν
   12.8  Gauss Curvature in R^3_ν
   12.9  Riemann Curvature Tensor in R^3_ν
   12.10  Computations for Regular Surfaces in R^3_ν

13  Examples of Regular Surfaces
   13.1  Plane in R^3_0
   13.2  Cylinder in R^3_0
   13.3  Cone in R^3_0
   13.4  Sphere in R^3_0
   13.5  Tractoid in R^3_0
   13.6  Hyperboloid of One Sheet in R^3_0
   13.7  Hyperboloid of Two Sheets in R^3_0
   13.8  Torus in R^3_0
   13.9  Pseudosphere in R^3_1
   13.10  Hyperbolic Space in R^3_1

Part III  Smooth Manifolds and Semi-Riemannian Manifolds

14  Smooth Manifolds
   14.1  Smooth Manifolds
   14.2  Functions and Maps
   14.3  Tangent Spaces
   14.4  Differential of Maps
   14.5  Differential of Functions
   14.6  Immersions and Diffeomorphisms
   14.7  Curves
   14.8  Submanifolds
   14.9  Parametrized Surfaces

15  Fields on Smooth Manifolds
   15.1  Vector Fields
   15.2  Representation of Vector Fields
   15.3  Lie Bracket
   15.4  Covector Fields
   15.5  Representation of Covector Fields
   15.6  Tensor Fields
   15.7  Representation of Tensor Fields
   15.8  Differential Forms
   15.9  Pushforward and Pullback of Functions
   15.10  Pushforward and Pullback of Vector Fields
   15.11  Pullback of Covector Fields
   15.12  Pullback of Covariant Tensor Fields
   15.13  Pullback of Differential Forms
   15.14  Contraction of Tensor Fields

16  Differentiation and Integration on Smooth Manifolds
   16.1  Exterior Derivatives
   16.2  Tensor Derivations
   16.3  Form Derivations
   16.4  Lie Derivative
   16.5  Interior Multiplication
   16.6  Orientation
   16.7  Integration of Differential Forms
   16.8  Line Integrals
   16.9  Closed and Exact Covector Fields
   16.10  Flows

17  Smooth Manifolds with Boundary
   17.1  Smooth Manifolds with Boundary
   17.2  Inward-Pointing and Outward-Pointing Vectors
   17.3  Orientation of Boundaries
   17.4  Stokes's Theorem

18  Smooth Manifolds with a Connection
   18.1  Covariant Derivatives
   18.2  Christoffel Symbols
   18.3  Covariant Derivative on Curves
   18.4  Total Covariant Derivatives
   18.5  Parallel Translation
   18.6  Torsion Tensors
   18.7  Curvature Tensors
   18.8  Geodesics
   18.9  Radial Geodesics and Exponential Maps
   18.10  Normal Coordinates
   18.11  Jacobi Fields

19  Semi-Riemannian Manifolds
   19.1  Semi-Riemannian Manifolds
   19.2  Curves
   19.3  Fundamental Theorem of Semi-Riemannian Manifolds
   19.4  Flat Maps and Sharp Maps
   19.5  Representation of Tensor Fields
   19.6  Contraction of Tensor Fields
   19.7  Isometries
   19.8  Riemann Curvature Tensor
   19.9  Geodesics
   19.10  Volume Forms
   19.11  Orientation of Hypersurfaces
   19.12  Induced Connections

20  Differential Operators on Semi-Riemannian Manifolds
   20.1  Hodge Star
   20.2  Codifferential
   20.3  Gradient
   20.4  Divergence of Vector Fields
   20.5  Curl
   20.6  Hesse Operator
   20.7  Laplace Operator
   20.8  Laplace–de Rham Operator
   20.9  Divergence of Symmetric 2-Covariant Tensor Fields

21  Riemannian Manifolds
   21.1  Geodesics and Curvature on Riemannian Manifolds
   21.2  Classical Vector Calculus Theorems

22  Applications to Physics
   22.1  Linear Isometries on Lorentz Vector Spaces
   22.2  Maxwell's Equations
   22.3  Einstein Tensor

Part IV  Appendices

A  Notation and Set Theory

B  Abstract Algebra
   B.1  Groups
   B.2  Permutation Groups
   B.3  Rings
   B.4  Fields
   B.5  Modules
   B.6  Vector Spaces
   B.7  Lie Algebras

Further Reading

Index

Preface

Physics texts on general relativity usually devote several chapters to an overview of semi-Riemannian geometry. Of necessity, the treatment is cursory, covering only the essential elements and typically omitting proofs of theorems. For physics students wanting greater mathematical rigor, there are surprisingly few options. Modern mathematical treatments of semi-Riemannian geometry require grounding in the theory of curves and surfaces, smooth manifolds, and Riemannian geometry. There are numerous books on these topics, several of which are included in Further Reading. Some of them provide a limited amount of material on semi-Riemannian geometry, but there is really only one mathematics text currently available that is devoted to semi-Riemannian geometry and geared toward general relativity, namely, Semi-Riemannian Geometry: With Applications to Relativity by Barrett O'Neill. This is a classic, but it is pitched at an advanced level, making it of limited value to the beginner. I wrote the present book with the aim of filling this void in the literature.

There are three parts to the book. Part I and the Appendices present background material on linear algebra, multilinear algebra, abstract algebra, topology, and real analysis. The aim is to make the book as self-contained as possible. Part II discusses aspects of the classical theory of curves and surfaces, but differs from most other expositions in that Lorentz as well as Euclidean signatures are discussed. Part III covers the basics of smooth manifolds, smooth manifolds with boundary, smooth manifolds with a connection, and semi-Riemannian manifolds. It concludes with applications to Lorentz vector spaces, Maxwell's equations, and the Einstein tensor. Not all theorems are provided with a proof; otherwise, an already lengthy volume would be even longer.

The manuscript was typed using the WYSIWYG scientific word processor EXP®, and formatted as a camera-ready PDF file using the open-source TeX/LaTeX typesetting system MiKTeX, available at https://miktex.org. Figure 19.5.1 was prepared using the TeX macro package diagrams.sty developed by Paul Taylor.

I am indebted to Professor John Lee of the University of Washington for reviewing portions of the manuscript. Any remaining errors or deficiencies are, of course, solely my responsibility. I am most interested in receiving your comments, which can be emailed to me at [email protected]. A list of corrections will be posted on the website https://sites.ualberta.ca/∼sn2/. Should the email address become unavailable, an alternative will be included with the list of corrections.


On the other hand, if the website becomes inaccessible, the list of corrections will be stored as a public file on Google Drive that can be searched using “Corrections to Semi-Riemannian Geometry by Stephen Newman”. Allow me to close by thanking my wife, Sandra, for her unwavering support and encouragement throughout the writing of the manuscript. It is to her, with love, that this book is dedicated.

Part I

Preliminaries

Differential geometry rests on the twin pillars of linear algebra–multilinear algebra and topology–analysis. Part I of the book provides an overview of selected topics from these areas of mathematics. Most of the linear algebra presented here is likely familiar to the reader, but the same may not be true of the multilinear algebra, with the exception of the material on determinants. Topology and analysis are vast subjects, and only the barest of essentials are touched on here. In order to keep the book to a manageable size, not all theorems are provided with a proof, a remark that also applies to Part II and Part III.


Chapter 1

Vector Spaces

1.1 Vector Spaces

The definition of a vector space over a field and that of a subspace of a vector space are given in Section B.6. Our focus in this book is exclusively on vector spaces over the real numbers (as opposed to the complex numbers or some other field).

Throughout, all vector spaces are over R, the field of real numbers.

For brevity, we will drop the reference to R whenever possible and write, for example, "linear" instead of "R-linear". Of particular importance is the vector space R^m, but many other examples of vector spaces will be encountered. It is easily shown that the intersection of any collection of subspaces of a vector space is itself a subspace. The zero vector of a vector space is denoted by 0, and the zero subspace of a vector space by {0}. The zero vector space, also denoted by {0}, is the vector space consisting only of the zero vector. We will generally avoid explicit consideration of the zero vector space. Most of the results on vector spaces either apply directly to the zero vector space or can be made applicable with a minor reworking of definitions and proofs. The details are usually left to the reader.

Example 1.1.1. Let V and W be vector spaces. Following Section B.5 and Section B.6, we denote by Lin(V, W) the vector space of linear maps from V to W, where addition and scalar multiplication are defined as follows: for all maps A, B in Lin(V, W) and all real numbers c,

(A + B)(v) = A(v) + B(v)   and   (cA)(v) = cA(v)


for all vectors v in V. The zero element of Lin(V, W), denoted by 0, is the zero map, that is, the map that sends all vectors in V to the zero vector 0 in W. When V = W, we make Lin(V, V) into a ring by defining multiplication to be composition of maps: for all maps A, B in Lin(V, V), let

(A ◦ B)(v) = A(B(v))

for all vectors v in V. The identity element of the ring Lin(V, V) is the identity map on V, denoted by idV. ♦

A linear combination of vectors in a vector space V is defined to be a finite sum of the form a1 v1 + · · · + ak vk, where a1, . . . , ak are real numbers and v1, . . . , vk are vectors in V. The possibility that some (or all) of a1, . . . , ak equal zero is not excluded. Let us pause here to comment on an aspect of notation. Following the usual convention in differential geometry, we index the scalars and vectors in a linear combination with superscripts and subscripts, respectively. This opens the door to the Einstein summation convention, according to which, for example, $a^1 v_1 + \cdots + a^k v_k$ and $\sum_{i=1}^{k} a^i v_i$ are abbreviated as $a^i v_i$. The logic is that when an expression has a superscript and subscript in common, it is understood that the index is being summed over. Despite the potential advantages of this notation, especially when multiple indices are involved, the Einstein summation convention will not be adopted here.

Let S be a (nonempty and not necessarily finite) subset of V. The span of S is denoted by span(S) and defined to be the set of linear combinations of vectors in S:

span(S) = {a1 v1 + · · · + ak vk : a1, . . . , ak ∈ R; v1, . . . , vk ∈ S; k = 1, 2, . . .}.

For a vector v in V, let us denote span({v}) = {av : a ∈ R} by Rv. For example, in R^2, we have

span({(1, 0), (0, 1)}) = R^2

and

span({(1, 0)}) = R(1, 0) = {(a, 0) ∈ R^2 : a ∈ R}.

It is easily shown that span(S) is a subspace of V. In fact, span(S) is the smallest subspace of V containing S, in the sense that any subspace of V containing S also contains span(S). When span(S) = V, it is said that S spans V or that the vectors in S span V, and that each vector in V is in the span of S.

We say that S is linearly independent or that the vectors in S are linearly independent if the only linear combination of distinct vectors in S that equals the zero vector is the one with all coefficients equal to 0. That is, if v1, . . . , vk are distinct vectors in S and a1, . . . , ak are real numbers such that a1 v1 + · · · + ak vk = 0, then a1 = · · · = ak = 0. Evidently, any subset of a linearly independent set is linearly independent. When S is not linearly independent, it is said to be linearly dependent. In particular, the zero vector in any vector space is linearly dependent. As further examples, the vectors (1, 0), (0, 1) in R^2 are linearly independent, whereas (0, 0), (1, 0) and (1, 0), (2, 0) are linearly dependent.
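Readers who like to experiment can test small examples such as these numerically. The following sketch is an addition of ours, not part of the original text; it assumes NumPy and uses the fact that vectors in R^m are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Vectors in R^m are linearly independent exactly when the matrix
    with the vectors as columns has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

# The examples from the text:
print(is_linearly_independent([np.array([1, 0]), np.array([0, 1])]))  # True
print(is_linearly_independent([np.array([0, 0]), np.array([1, 0])]))  # False
print(is_linearly_independent([np.array([1, 0]), np.array([2, 0])]))  # False
```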

The next result shows that when a linearly independent set does not span a vector space, it has a linearly independent extension.

Theorem 1.1.2. Let V be a vector space, let S be a nonempty subset of V such that span(S) ≠ V, and let v be a vector in V \ span(S). Then S is linearly independent if and only if S ∪ {v} is linearly independent.

Proof. (⇒): Suppose av + b1 s1 + · · · + bk sk = 0 for distinct vectors s1, . . . , sk in S and real numbers a, b1, . . . , bk. Then a = 0; for if not, then

v = −(b1/a)s1 − · · · − (bk/a)sk,

hence v is in span(S), which is a contradiction. Thus, b1 s1 + · · · + bk sk = 0, and since S is linearly independent, we have b1 = · · · = bk = 0.
(⇐): As remarked above, any subset of a linearly independent set is linearly independent.

A (not necessarily finite) subset H of a vector space V is said to be an unordered basis for V if it spans V and is linearly independent.

Theorem 1.1.3. If V is a vector space and H is an unordered basis for V, then each vector in V can be expressed uniquely (up to order of terms) as a linear combination of vectors in H.

Proof. Since H spans V, each vector in V can be expressed as a linear combination of vectors in H. Suppose a vector v in V can be expressed as a linear combination in two ways. Let h1, . . . , hk be the distinct vectors in the linear combinations. Then

v = a1 h1 + · · · + ak hk   and   v = b1 h1 + · · · + bk hk,

for some real numbers a1, . . . , ak, b1, . . . , bk, hence (a1 − b1)h1 + · · · + (ak − bk)hk = 0. Since H is linearly independent, ai − bi = 0 for i = 1, . . . , k.

Theorem 1.1.4. Let V be a vector space, and let S and T be nonempty subsets of V, where S is linearly independent, and T is finite and spans V. Then S is finite and card(S) ≤ card(T), where card denotes cardinality.


Proof. Since S is linearly independent, it does not contain the zero vector. Let card(T) = m and T = {t1, . . . , tm}. We proceed in steps. For the first step, let s1 be a vector in S. Since V = span(T), s1 is a linear combination of t1, . . . , tm. Because s1 is not the zero vector, at least one of the coefficients in the linear combination must be nonzero. Renumbering t1, . . . , tm if necessary, suppose it is the coefficient of t1, and let S1 = {s1, t2, . . . , tm}. Then t1 can be expressed as a linear combination of the vectors in S1, hence V = span(S1). For the second step, let s2 be a vector in S \ {s1}. Since V = span(S1), s2 is a linear combination of s1, t2, . . . , tm. Because s1, s2 are linearly independent, at least one of the coefficients of t2, . . . , tm in the linear combination is nonzero. Renumbering t2, . . . , tm if necessary, suppose it is the coefficient of t2, and let S2 = {s1, s2, t3, . . . , tm}. Then t2 can be expressed as a linear combination of the vectors in S2, hence V = span(S2). Proceeding in this way, after k ≤ m steps, we have a set Sk = {s1, . . . , sk, tk+1, . . . , tm}, with V = span(Sk). Then card(S) ≤ card(T); for if not, at the mth step, we would have Sm = {s1, . . . , sm}, with V = span(Sm) and S \ Sm nonempty. Then any vector in S \ Sm could be expressed as a linear combination of vectors in Sm, which contradicts the assumption that S is linearly independent.

We say that a vector space is finite-dimensional if it has a finite unordered basis. Finite-dimensional vector spaces have an associated invariant that, as we will see, largely characterizes them.

Theorem 1.1.5. If V is a finite-dimensional vector space, then every unordered basis for V has the same (finite) number of vectors. This invariant, denoted by dim(V), is called the dimension of V.

Proof. Let H and F be unordered bases for V, with F finite. By Theorem 1.1.4, H is finite and card(H) ≤ card(F). Since H is finite, we can apply Theorem 1.1.4 again and obtain card(F) ≤ card(H). Thus, card(H) = card(F).

For completeness, we assign the zero vector space the dimension 0: dim({0}) = 0.

Theorem 1.1.6. If V is a vector space of dimension m, then:
(a) Every subset of V that spans V contains at least m vectors.
(b) Every linearly independent subset of V contains at most m vectors.

Proof. (a): Let H be an unordered basis for V, and suppose T is a subset of V that spans V. The result is trivial if T is infinite, so assume otherwise. Then Theorem 1.1.4 and Theorem 1.1.5 give m = card(H) ≤ card(T).
(b): Suppose S is a linearly independent subset of V. Then Theorem 1.1.4 and Theorem 1.1.5 yield card(S) ≤ card(H) = m.

Theorem 1.1.7. Let V be a vector space of dimension m, and let U be a subspace of V. Then:
(a) U is finite-dimensional and dim(U) ≤ dim(V).


(b) If dim(U) = dim(V), then U = V.
(c) If dim(U) < dim(V), then any unordered basis for U can be extended to an unordered basis for V. That is, given an unordered basis {h1, . . . , hk} for U, there are vectors hk+1, . . . , hm in V such that {h1, . . . , hk, hk+1, . . . , hm} is an unordered basis for V.

Proof. (a): We proceed in steps. For the first step, let u1 be a vector in U. If span({u1}) = U, we are done. If not, for the second step, let u2 be a vector in U \ span({u1}). It follows from Theorem 1.1.2 that u1, u2 are linearly independent. If span({u1, u2}) = U, we are done, and so on. By Theorem 1.1.6(b), this process ends after k ≤ m steps. Then u1, . . . , uk are linearly independent and span U, which is to say that {u1, . . . , uk} is an unordered basis for U.
(b): Let H and F be unordered bases for U and V, respectively, and suppose U ≠ V. Since U = span(H), there is a vector v in V \ span(H). By Theorem 1.1.2, H ∪ {v} is linearly independent. We have from Theorem 1.1.5 that card(H ∪ {v}) > card(H) = dim(U) = dim(V) = card(F), which contradicts Theorem 1.1.6(b).
(c): Given the unordered basis {h1, . . . , hk} for U, the algorithm described in part (a) can be used to find vectors hk+1, . . . , hm in V such that {h1, . . . , hk, hk+1, . . . , hm} is an unordered basis for V.

Throughout the remainder of Part I, unless stated otherwise, all vector spaces are finite-dimensional.

Let V be a vector space, and let {h1, . . . , hm} be an unordered basis for V. The m-tuple (h1, . . . , hm) is said to be an ordered basis for V, as is any m-tuple derived from (h1, . . . , hm) by permuting h1, . . . , hm. For example, (h1, h2, . . . , hm) and (h2, h1, . . . , hm) are distinct ordered bases for V.

Example 1.1.8 (R^m). Let ei be the vector in R^m defined by ei = (0, . . . , 0, 1, 0, . . . , 0), where 1 is in the ith position and 0s are elsewhere, for i = 1, . . . , m. For real numbers a1, . . . , am, we have a1 e1 + · · · + am em = (a1, . . . , am), from which it follows that e1, . . . , em span R^m and are linearly independent. We refer to {e1, . . . , em} as the standard unordered basis for R^m, and to (e1, . . . , em) as the standard ordered basis for R^m. Thus, not surprisingly, R^m has dimension m. ♦

Throughout the remainder of Part I, unless stated otherwise, all bases are ordered.


Accordingly, we now refer to (e1, . . . , em) as the standard basis for R^m.

Let V and W be vector spaces. A map A : V −→ W is said to be linear if

A(cv + w) = cA(v) + A(w)

for all vectors v, w in V and all real numbers c. Thus, a linear map respects vector space structure. Suppose A is in fact a linear map. Given a basis H = (h1, . . . , hm) for V, let us denote (A(h1), . . . , A(hm)) by A(H). We say that A is a linear isomorphism, and that V and W are isomorphic, if A is bijective. To illustrate, let x be an indeterminate, and let Pm = {a0 + a1 x + · · · + am x^m : a0, . . . , am ∈ R} be the set of real polynomials of degree at most m. From the properties of polynomials, it is easily shown that Pm is a vector space of dimension m + 1, and that the map A : R^(m+1) −→ Pm given by A(a0, . . . , am) = a0 + a1 x + · · · + am x^m for all vectors (a0, . . . , am) in R^(m+1) is a linear isomorphism. Following Section B.5, we denote the existence of an isomorphism by R^(m+1) ≈ Pm.

Since a linear isomorphism is a bijective map, it has an inverse map. The next result shows that the inverse of a linear isomorphism is automatically a linear isomorphism.

Theorem 1.1.9. If V and W are vector spaces and A : V −→ W is a linear isomorphism, then A^(−1) : W −→ V is a linear isomorphism.

Proof. By assumption, A^(−1) is bijective. Let w1, w2 be vectors in W, and let c be a real number. Since A is bijective, there are unique vectors v1, v2 in V such that A(v1) = w1 and A(v2) = w2. Then

A^(−1)(cw1 + w2) = A^(−1)(cA(v1) + A(v2)) = A^(−1)(A(cv1 + v2)) = cv1 + v2 = cA^(−1)(w1) + A^(−1)(w2).

A linear map is completely determined by its values on a basis, as we now show.

Theorem 1.1.10. Let V and W be vector spaces, let H = (h1, . . . , hm) be a basis for V, and let w1, . . . , wm be vectors in W. Then there is a unique linear map A : V −→ W such that A(H) = (w1, . . . , wm).

Proof. Uniqueness. Since H is a basis for V, for each vector v in V, there is a unique m-tuple (a1, . . . , am) in R^m such that v = a1 h1 + · · · + am hm. Suppose A : V −→ W is a linear map such that A(H) = (w1, . . . , wm). Then

A(v) = A(a1 h1 + · · · + am hm) = a1 A(h1) + · · · + am A(hm) = a1 w1 + · · · + am wm,    (1.1.1)


from which it follows that A is unique. Existence. Let us define A : V −→ W using (1.1.1) for all vectors v in V. The uniqueness of the m-tuple (a1, . . . , am) ensures that A is well-defined. Clearly, A(H) = (w1, . . . , wm). Let u = b1 h1 + · · · + bm hm be a vector in V, and let c be a real number. Then cv + u = (ca1 + b1)h1 + · · · + (cam + bm)hm, hence

A(cv + u) = (ca1 + b1)A(h1) + · · · + (cam + bm)A(hm)
          = (ca1 + b1)w1 + · · · + (cam + bm)wm
          = c(a1 w1 + · · · + am wm) + (b1 w1 + · · · + bm wm)
          = cA(v) + A(u).

Thus, A is linear.

From the point of view of linear structure, isomorphic vector spaces are indistinguishable. In fact, it is easily shown using Theorem 1.1.10 that all m-dimensional vector spaces are isomorphic. More than that, they are all isomorphic to R^m. The isomorphism constructed with the help of Theorem 1.1.10 depends on the choice of bases for the vector spaces. However, we will see an instance in Section 1.2 where an isomorphism can be defined without having to resort to such an arbitrary choice.

(1.1.2)

12

1 Vector Spaces

Proof. By Theorem 1.1.7(c), any basis (h1 , . . . , hk ) for ker(A) can be extended to a basis (h1 , . . . , hk , hk+1 , . . . , hm ) for V . We claim that A(hk+1 ), . . . , A(hm ) is a basis for im(A). Let v be a vector in V . Since H spans V , we have v = a1 h1 + · · · + am hm for some real numbers a1 , . . . , am . Then A(v) = a1 A(h1 ) + · · · + ak A(hk ) + ak+1 A(hk+1 ) + am A(hm ) = ak+1 A(hk+1 ) + am A(hm ),

hence A(hk+1 ), . . . , A(hm ) span im(A). Suppose ck+1 A(hk+1 ) + · · · + cm A(hm ) = 0 for some real numbers ck+1 , . . . , cm . Then A(ck+1 hk+1 + · · · + cm hm ) = 0, so ck+1 hk+1 + · · · + cm hm is in ker(V ). Since h1 , . . . , hk span ker(A), there are real numbers b1 , . . . , bk such that b1 h1 + · · · + bk hk = ck+1 hk+1 + · · · + cm hm , hence b1 h1 + · · · + bk hk + (−ck+1 )hk+1 + · · · + (−cm )hm = 0. From the linear independence of h1 , . . . , hk , hk+1 , . . . , hm , we have ck+1 = · · · = cm = 0. Thus, A(hk+1 ), . . . , A(hm ) are linearly independent. This proves the claim. It follows that   rank(A) = dim im(A) = m − k = dim(V ) − dim ker(A) = dim(V ) − null(A).

As an example of the rank–nullity identity, consider the linear map A : R3 −→ R2 given by A(x, y, z) = (x + y, 0). Then ker(A) = {(x, y, z) ∈ R3 : x + y = 0} and im(A) = {(x, y) ∈ R2 : y = 0}.

In geometric terms, ker(A) is a plane in R3 and im(A) is a line in R2 . Thus, null(A) = 2 and rank(A) = 1, which agrees with Theorem 1.1.11. In the notation of Theorem 1.1.11, we observe from (1.1.2) that rank(A) ≤ dim(V ). Thus, a linear map at best “preserves” dimension, but never increases it. Theorem 1.1.12. If V and W are vector spaces and A : V −→ W is a linear map, then the following are equivalent: (a) rank(A) = dim(V ). (b) null(A) = 0. (c) ker(A) = {0}. (d) A is injective.

13

1.1 Vector Spaces Proof. (a) ⇔ (b) ⇔ (c): By Theorem 1.1.11, rank(A) = dim(V ) ⇔

null(A) = 0  dim ker(A) = 0



ker(A) = {0}.



(c) ⇒ (d): For vectors v, w in V , we have A(v) = A(w) ⇔

A(v − w) = 0



v − w = 0.



v − w is in ker(A)

(d) ⇒ (c): Clearly, 0 is in ker(V ). For a vector v in V , we have v is in ker(A) ⇔

A(v) = 0



v = 0.



A(v) = A(0)

Theorem 1.1.13. Let V and W be vector spaces, let H be a basis for V , and let A : V −→ W be a linear map. Then: (a) A is a linear isomorphism if and only if A(H) is a basis for W . (b) If A is a linear isomorphism, then dim(V ) = dim(W ). Proof. Let H = (h1 , . . . , hm ). (a)(⇒): Since A is surjective, for each vector w in W , there is a vector v in V such that A(v) = w. Let v = a1 h1 + · · · + am hm for some real numbers a1 , . . . , am . Then w = A(v) = a1 A(h1 ) + · · · + am A(hm ), so A(H) spans W . Suppose b1 A(h1 ) + · · · + bm A(hm ) = 0 for some real numbers b1 , . . . , bm . Then A(b1 h1 +· · ·+bm hm ) = 0, hence b1 h1 +· · ·+bm hm is in ker(A). Since A is injective, it follows from Theorem 1.1.12 that b1 h1 + · · · + bm hm = 0, hence b1 = · · · = bm = 0. Thus, A(H) is linearly independent. (a)(⇐): Let w be a vector in W . Since A(H) spans W , we have w = b1 A(h1 ) + · · · + bm A(hm ) for some real numbers b1 , . . . , bm . Then w = A(b1 h1 + · · · + bm hm ), so A is surjective. Let v = a1 h1 + · · · + am hm be a vector in ker(A). Then 0 = A(v) = a1 A(h1 )+· · ·+am A(hm ). Since A(H) is linearly independent, it follows that a1 = · · · = am = 0, so v = 0. Thus, ker(A) = {0}. By Theorem 1.1.12, A is injective. (b): This follows from part (a).

14

1 Vector Spaces

We pause here to comment on the way proofs are presented when there is an equation or other type of display that stretches over several lines of text. The necessary justification for logical steps in such displays, whether it be equation numbers, theorem numbers, example numbers, and so on, are often provided in brackets at the end of corresponding lines. In order to economize on space, “[Theorem x.y.z]” and “[Example x.y.z]” are abbreviated to “[Th x.y.z]” and “[Ex x.y.z]”. The proof of the next result illustrates these conventions. Theorem 1.1.14. If V and W are vector spaces of dimension m and A : V −→ W is a linear map, then the following are equivalent: (a) A is a linear isomorphism. (b) A is injective. (c) A is surjective. (d) rank(A) = m. Proof. (a) ⇒ (b): This is true by definition. (b) ⇔ (c): By Theorem 1.1.11,  dim(W ) = dim(V ) = rank(A) + null(A) = dim im(A) + null(A), hence

W = im(A) ⇔ ⇔

null(A) = 0

[Th 1.1.7(b)]

A is injective.

[Th 1.1.12]

(c) ⇒ (a): Since A is surjective, we have from (b) ⇔ (c) that A is also injective. (d) ⇔ (b): This follows from Theorem 1.1.12. Let V be a vector space, and let U1 , . . . , Uk be subspaces. The sum of U1 , . . . , Uk is denoted by U1 + · · · + Uk and defined by U1 + · · · + Uk = {u1 + · · · + uk : u1 ∈ U1 , . . . , uk ∈ Uk }. For example, R(1, 0) + R(0, 1) = R2 . It is easily shown that U1 + · · · + Uk = span(U1 ∪ · · · ∪ Uk ), from which it follows that U1 + · · · + Uk is the smallest subspace of V containing each of U1 , . . . , Uk , in the sense that any subspace containing each of U1 , . . . , Uk also contains U1 + · · · + Uk . We observe that U1 + · · · + Uk + {0} = U1 + · · · + Uk , which shows that adding the zero vector spaces does not change a sum. For vectors v1 , . . . , vk in V , we have the following connection between spans and sums: span({v1 , . . . , vk }) = Rv1 + · · · + Rvk .

15

1.1 Vector Spaces

Theorem 1.1.15. If V is a vector space, and U1 and U2 are subspaces of V , then dim(U1 + U2 ) = dim(U1 ) + dim(U2 ) − dim(U1 ∩ U2 ). Proof. Let H = (h1 , . . . , hk ) be a basis for U1 ∩ U2 . By Theorem 1.1.7(c), H can be extended to a basis (h1 , . . . , hk , f1 , . . . , fm ) for U1 , and also to a basis (h1 , . . . , hk , g1 , . . . , gn ) for U2 . Let V = {h1 , . . . , hk , f1 , . . . , fm , g1 , . . . , gn }. We claim that V is basis for U1 + U2 . Clearly, V spans U1 + U2 . To show that V is linearly independent, suppose (a1 h1 + · · · + ak hk ) + (b1 f1 + · · · + bm fm ) + (c1 g1 + · · · + cn gn ) = 0 for some real numbers a1 , . . . , ak , b1 , . . . , bm , c1 , . . . , cn . Then c1 g1 + · · · + cn gn = −(a1 h1 + · · · + ak hk ) − (b1 f1 + · · · + bm fm ).

(1.1.3)

Since h1 , . . . , hk , f1 , . . . , fm are in U1 , so is c1 g1 + · · · + cn gn , and because g1 , . . . , gn are in U2 , so is c1 g1 + · · · + cn gn . Thus, c1 g1 + · · · + cn gn is in U1 ∩ U2 , hence c1 g1 + · · · + cn gn = d1 h1 + · · · + dk hk for some real numbers d1 , . . . , dk , so

(−d1 )h1 + · · · + (−dk )hk + c1 g1 + · · · + cn gn = 0. Since h1 , . . . , hk , g1 , . . . , gn are linearly independent, c1 = · · · = cn = 0.

(1.1.4)

Then (1.1.3) gives (a1 h1 + · · · + ak hk ) + (b1 f1 + · · · + bm fm ) = 0. Because h1 , . . . , hk , f1 , . . . , fm are linearly independent, a1 = · · · = ak = 0

and

b1 = · · · = bm = 0.

(1.1.5)

It follows from (1.1.4) and (1.1.5) that V is linearly independent. This proves the claim. By Theorem 1.1.5, dim(U1 ) + dim(U2 ) = (k + m) + (k + n) = k + (k + m + n) = dim(U1 ∩ U2 ) + dim(U1 + U2 ). Let V be a vector space, and let U1 , . . . , Uk be subspaces of V . We say that the subspace U1 + · · · + Uk of V is a direct sum, and write U1 + · · · + Uk = U1 ⊕ · · · ⊕ Uk ,

16

1 Vector Spaces

if each vector v in U1 + · · · + Uk can be expressed uniquely (up to order of terms) in the form v = u1 + · · · + uk for some vectors ui in Ui for i = 1, . . . , k. As a matter of notation, writing V = U1 ⊕ · · · ⊕ Uk is shorthand for V = U1 + · · · + Uk

and

For example, R = R(1, 0) ⊕ R(0, 1).

U1 + · · · + Uk = U1 ⊕ · · · ⊕ Uk .

2

Theorem 1.1.16. Let V be a vector space and let v1, . . . , vk be nonzero vectors in V. Then v1, . . . , vk are linearly independent if and only if

Rv1 + · · · + Rvk = Rv1 ⊕ · · · ⊕ Rvk.

Proof. (⇒): If a1, . . . , ak, b1, . . . , bk are real numbers such that a1 v1 + · · · + ak vk = b1 v1 + · · · + bk vk, then linear independence gives ai = bi for i = 1, . . . , k.
(⇐): If a1, . . . , ak are real numbers such that

a1 v1 + · · · + ak vk = 0 = 0 + · · · + 0   [k terms],

then the uniqueness property of Rv1 ⊕ · · · ⊕ Rvk gives ai vi = 0 for i = 1, . . . , k. It follows that each ai = 0; for if not, then vi = (ai)^(−1)(ai vi) = 0 for some i, which is a contradiction.

Proof. (⇒): Evidently, {0} ⊆ U1 ∩ U2 . Let u be a vector in U1 ∩ U2 . Then 0 can be expressed as 0 = u + (−u), where u is in U1 and −u is in U2 . Since 0 = 0 + 0, it follows from the uniqueness property of U1 ⊕ U2 that u = 0. Thus, U1 ∩ U2 ⊆ {0}. (⇐): Let u be a vector in U1 + U2 such that u = v1 + v2 and u = w1 + w2 for some vectors vi , wi in Ui for 1 = 1, 2. Then v1 −w1 = w2 −v2 is in U1 ∩U2 = {0}, so vi − wi = 0 for i = 1, 2. Theorem 1.1.18. If V is a vector space and U1 , . . . , Uk are subspaces of V such that U1 + · · · + Uk = U1 ⊕ · · · ⊕ Uk , then dim(U1 + · · · + Uk ) = dim(U1 ) + · · · + dim(Uk ). Proof. The proof is by induction. For k = 2, the result follows from Theorem 1.1.15 and Theorem 1.1.17. Let k > 2, and suppose the assertion is true for all indices < k. Since U1 + · · · + Uk is a direct sum, so are U1 + · · · + Uk−1 and (U1 + · · · + Uk−1 ) + Uk , hence U1 + · · · + Uk = (U1 + · · · + Uk−1 ) + Uk

= (U1 ⊕ · · · ⊕ Uk−1 ) ⊕ Uk .

Using the induction hypothesis twice gives dim(U1 + · · · + Uk ) = dim(U1 ⊕ · · · ⊕ Uk−1 ) + dim(Uk )

= dim(U1 ) + · · · + dim(Uk−1 ) + dim(Uk ).

17

1.2 Dual Spaces

Let V1 , . . . , Vk be vector spaces. Following Section B.5, we make V1 ×· · ·×Vk into a vector space, called the product of V1 , . . . , Vk , as follows: for all vectors (v1 , . . . , vk ), (w1 , . . . , wk ) in V1 × · · · × Vk and all real numbers c, let (v1 , . . . , vk ) + (w1 , . . . , wk ) = (v1 + w1 , . . . , vk + wk ) and c(v1 , . . . , vk ) = (cv1 , . . . , cvk ). When V1 = · · · = Vk = V , we denote V × ··· × V

V k.

by

We close this section with two definitions that have obvious geometric content. Let V be a vector space. A subset S of V is said to be star-shaped if there is a vector v0 in S such that for all vectors v in S and all real numbers 0 ≤ t ≤ 1, the vector tv + (1 − t)v0 is in S. In that case, we say that S is star-shaped about v0 . Since tv + (1 − t)v0 = v0 + t(v − v0 ), we can think of {tv + (1 − t)v0 : 0 ≤ t ≤ 1} as the “line segment” joining v0 to v. A subset C of V is said to be cone-shaped if for all vectors v, v1 , v2 in C and all real numbers c > 0, the vectors cv and v1 +v2 are in C. It is easily shown that if C is cone-shaped, then it is star-shaped about any vector it contains. For example, the closed cell {(x, y) : x, y ∈ [−1, 1]} in R2 is star-shaped about (0, 0), but not cone-shaped; whereas the half-plane {(x, y) ∈ R2 : x > 0} in R2 is cone-shaped, hence star-shaped about (0, 0).

1.2

Dual Spaces

In this section, we define the dual (vector) space of a vector space. From this humble beginning, the theory of differential forms will eventually emerge (see Section 15.8). Let V be a vector space. Following Section B.5 and Section B.6, we denote by Lin(V, R) the vector space of linear maps from V to R. By definition, for all maps η, ζ in Lin(V, R) and all real numbers c, (η + ζ)(v) = η(v) + ζ(v) and (cη)(v) = cη(v) for all vectors v in V . For brevity, we henceforth denote Lin(V, R)

by

V ∗.

We say that V ∗ is the dual (vector) space of V and refer to each map in V ∗ as a covector. As an example, the map η : R2 −→ R given by η(x, y) = x + y is in (R2 )∗ . Let us denote (V ∗ )∗

by

and say that V ∗∗ is the double dual of V .

V ∗∗

18

1 Vector Spaces

Theorem 1.2.1. If V is a vector space and H = (h1 , . . . , hm ) is a basis for V , then: (a) There is a unique covector θi in V ∗ such that θi (hj ) = δji for i, j = 1, . . . , m, where δji is Kronecker’s delta. (b) (θ1 , . . . , θm ) is a basis for V ∗ , called the dual basis corresponding to H. (c) dim(V ∗ ) = dim(V ). (d) For all vectors v in V , X v= θi (v)hi . i

(e) For all covectors η in V ∗ , η=

X

η(hi )θi .

i

Proof. (a): This P follows from Theorem 1.1.10. (d): Let v = i ai hi . By part (a), X  X θi (v) = θi aj hj = aj θi (hj ) = ai . j

j

(e): For a vector v in V , we have from part (d) that X  X X  η(v) = η θi (v)hi = θi (v)η(hi ) = η(hi )θi (v). i

i

i

Since v was arbitrary, the result follows. P (b): It follows from part (e) that θ1 , . . . , θm span V ∗ . Suppose i ai θi = 0 for some real numbers a1 , . . . , am . By part (a), X  X j j 0= a θ (hi ) = aj θj (hi ) = ai j 1

j m

for 1 = 1, . . . , m, hence θ , . . . , θ are linearly independent. (c): This follows from part (b). Theorem 1.2.2. Let V be a vector space, and let ι : V −→ V ∗∗ be the map defined by ι(v)(η) = η(v) (1.2.1) for all vectors v in V and all covectors η in V ∗ . Then ι is a linear isomorphism: V ≈ V ∗∗ . Remark. We observe that ι is defined without choosing specific bases for V and V ∗ . If we denote ι(v) by v ∗∗ , then (1.2.1) can be expressed more “symmetrically” as v ∗∗ (η) = η(v). (1.2.2)

19

1.3 Pullback of Covectors

Proof. It is clear that ι is a linear map. Let (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. If v is a vector in V such that ι(v) = 0, then η(v) = 0 for all covectors η in V ∗ . In particular, θi (v) = 0 for i = 1, . . . , m. It follows from Theorem 1.2.1(d) that v = 0, hence ker(ι) = {0}. By Theorem 1.1.12, ι is injective. Using Theorem 1.2.1(c) twice yields dim(V ) = dim(V ∗ ) = dim(V ∗∗ ). The result now follows Theorem 1.1.14. In view of Theorem 1.2.2, and especially because ι was defined without choosing specific bases for V and V ∗ , we adopt the following convention: Throughout, we identify V ∗∗ with V , and write V ∗∗ = V . Let v be a vector in V , and let η be a covector in V ∗ . Having made the identification V ∗∗ = V , we henceforth denote ι(v)

by

v.

Thus, (1.2.1) and (1.2.2) both become v(η) = η(v).

(1.2.3)

In particular, we have hj (θi ) = θi (hj ) for i, j = 1, . . . , m.

1.3

Pullback of Covectors

In Section 1.2, we introduced the dual space of a vector space. Continuing with that theme, we now associate with a given linear map a corresponding linear map between their dual spaces. Let V and W be vector spaces, and let A : V −→ W be a linear map. Pullback by A is the linear map A∗ : W ∗ −→ V ∗ defined by A∗ (η) = η ◦ A

for all covectors η in W ∗ ; that is,

 A∗ (η)(v) = η A(v)

(1.3.1)

for all vectors v in V . We refer to A∗ (η) as the pullback of η by A. Note that the pullback “reverses the order” of vector spaces. Let us denote (A∗ )∗

by

A∗∗

20

1 Vector Spaces

and observe that with the identifications V ∗∗ = V and W ∗∗ = W , we have A∗∗ : V −→ W.

As an example, consider the map A : R3 −→ R2 defined by A(x, y, z) = (x + z, y + z) for all vectors (x, y, z) in R3 , and let η be the covector in (R2 )∗ given by η(x, y) = x + y. Then  A∗ (η)(x, y, z) = η A(x, y, z) = η(x + z, y + z) = x + y + 2z. Pullbacks behave well with respect to basic algebraic structure. Theorem 1.3.1. Let U , V , and W be vector spaces, and let A, B : U −→ V and C : V −→ W be linear maps. Then: (a) (A + B)∗ = A∗ + B ∗ . (b) (C ◦ B)∗ = B ∗ ◦ C ∗ . (c) A∗∗ = A. (d) If A is a linear isomorphism, then (A−1 )∗ = (A∗ )−1 . (e) A is a linear isomorphism if and only if A∗ is a linear isomorphism. Proof. (a), (b): Straightforward. (c): For a vector v in V and a covector η in W ∗ , we have  A∗∗ (v)(η) = v A∗ (η) [(1.3.1)] = A∗ (η)(v) [(1.2.3)]  = η A(v) [(1.3.1)] = A(v)(η). Since v and η were arbitrary, A (d): By part (b),

∗∗

[(1.2.3)]

= A.

idV ∗ = (idV )∗ = (A−1 ◦ A)∗ = A∗ ◦ (A−1 )∗ , from which the result follows. (e)(⇒): Since A is a linear isomorphism, we have from Theorem 1.1.13(b) that dim(V ) = dim(W ), and then from Theorem 1.2.1(c) that dim(V ∗ ) = dim(W ∗ ). If η is a covector in W ∗ such that A∗ (η) = 0, then (1.3.1) gives η A(v) = 0 for all vectors v in V . Since A is surjective, η(w) = 0 for all vectors w in W , hence η = 0. Thus, ker(A∗ ) = {0}. The result now follows from Theorem 1.1.12 and Theorem 1.1.14. (e)(⇐): Since A∗ : W ∗ −→ V ∗ is a linear isomorphism, we have from (e)(⇒) that so is A∗∗ : V ∗∗ −→ W ∗∗ . Then part (c) and the identifications V = V ∗∗ and W = W ∗∗ give the result.

1.4

Annihilators

Let V be a vector space, and let U be a subspace of V . The annihilator of U in V is denoted by U 0 and defined by U 0 = {η ∈ V ∗ : η(u) = 0 for all u ∈ U } = {η ∈ V ∗ : U ⊆ ker(η)}.

21

1.4 Annihilators It is easily shown that U 0 is a subspace of V ∗ . Let us denote (U 0 )0

U 00

by

and observe that with the identification V ∗∗ = V , U 00 is a subspace of V . Theorem 1.4.1. If V is a vector space and U is a subspace of V , then dim(V ) = dim(U ) + dim(U 0 ). Proof. If U is the zero subspace, the result is trivial, so assume otherwise. Let (h1 , . . . , hk ) be a basis for U . Using Theorem 1.1.7(c), we extend (h1 , . . . , hk ) to a basis (h1 , . . . , hk , hk+1 , . . . , hm ) for V . Let (θ1 , . . . , θk , θk+1 , . . . , θm ) be its dual basis, so that (θk+1 , . . . , θm ) is the dual basis of (hk+1 , . . . , hm ). It follows from θi (h1 ) = · · · = θi (hk ) = 0 for i = k + 1, . . . , m that θk+1 , . . . , θm are covectors in U 0 . We claim that (θk+1 , . . . , θm ) is a basis for U 0 . For a covector η in U 0 , we have from Theorem 1.2.1(e) that η=

m X

η(hi )θi =

i=i

m X

η(hi )θi ,

i=k+1

hence θk+1 , . . . , θm span U 0 . Since (θ1 , . . . , θm ) is a basis for V ∗ , it follows that θk+1 , . . . , θm are linearly independent. This proves the claim. By Theorem 1.1.5, dim(U 0 ) = m − k = dim(V ) − dim(U ). Theorem 1.4.2. If V is a vector space and U is a subspace of V , then U 00 = U. Proof. If u is in U , then u(η) = η(u) = 0 for all covectors η in U 0 , hence u is in U 00 . Thus, U ⊆ U 00 . We have dim(U ) + dim(U 0 ) = dim(V )

[Th 1.4.1]



= dim(V ) 0

[Th 1.2.1(c)] 00

= dim(U ) + dim(U ),

[Th 1.4.1]

so dim(U ) = dim(U 00 ). The result now follows from Theorem 1.1.7(b). Theorem 1.4.3. Let V and W be vector spaces, and let A : V −→ W be a linear map. Then: (a) rank(A∗ ) = rank(A). (b) If dim(V ) = dim(W ), then null(A∗ ) = null(A). (c) ker(A∗ ) = im(A)0 . (d) im(A∗ ) = ker(A)0 .

22

1 Vector Spaces

Proof. (c): We have η is in ker(A∗ ) ⇔

A∗ (η) = 0



A∗ (η)(v) = 0 for all v in V  η A(v) = 0 for all v in V



0



[(1.3.1)]

η is in im(A) .

(d): It follows from Theorem 1.3.1(c) and part (c) that ker(A) = ker(A∗∗ ) = im(A∗ )0 , and then from Theorem 1.4.2 that ker(A)0 = im(A∗ )00 = im(A∗ ). (a): We have rank(A∗ ) = dim im(A∗ )



= dim ker(A)0



 = dim(V ) − dim ker(A) = rank(A).

[part (d)] [Th 1.4.1] [Th 1.1.11]

(b): We have rank(A∗ ) + null(A∗ ) = dim(W ∗ )

[Th 1.1.11]

= dim(W ) = dim(V )

[Th 1.2.1(c)] [assumption]

= rank(A) + null(A).

[Th 1.1.11]

The result now follows from part (a).

Chapter 2

Matrices and Determinants In this chapter, we review some of the basic results from the theory of matrices and determinants.

2.1

Matrices

Let us denote by Matm×n the set of m × n matrices (that is, m rows and n columns) with real entries. When m = n, we say that the matrices are square. It is easily shown that with the usual matrix addition and scalar multiplication, Matm×n is a vector space, and that with the usual matrix multiplication, Matm×m is a ring. Let P be a matrix in Matm×n , with p1  i   .1 P = pj =  .. 

pm 1

··· .. . ···

 p1n .. . . 

pm n

The transpose of P is the matrix P T in Matn×m defined by p1  j   .1 = pi =  .. 

PT

p1n

··· .. . ···

 pm 1 .. . .  pm n

The row matrices of P are  1 p1

···

p1n



 2 p1

···

p2n



...



pm 1

···

 pm n ,


and the column matrices of P are

    \begin{pmatrix} p^1_1 \\ \vdots \\ p^m_1 \end{pmatrix},  \begin{pmatrix} p^1_2 \\ \vdots \\ p^m_2 \end{pmatrix},  ...,  \begin{pmatrix} p^1_n \\ \vdots \\ p^m_n \end{pmatrix}.

Example 2.1.1. For

    P = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix},

the transpose is

    P^T = \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{pmatrix}

and the column matrices are

    \begin{pmatrix} 1 \\ 4 \\ 7 \end{pmatrix},  \begin{pmatrix} 2 \\ 5 \\ 8 \end{pmatrix},  \begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix}.

Theorem 2.1.2. If P and Q are matrices in Mat_{l×m} and Mat_{m×n}, respectively, then:
(a) (P^T)^T = P.
(b) (PQ)^T = Q^T P^T.

Proof. Straightforward.

We say that a matrix Q = [q^i_j] in Mat_{m×m} is symmetric if Q = Q^T, and diagonal if q^i_j = 0 for all i ≠ j. Evidently, a diagonal matrix is symmetric. Given a vector (a^1, ..., a^m) in R^m, the corresponding diagonal matrix is defined by

    diag(a^1, ..., a^m) = \begin{pmatrix} a^1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & a^m \end{pmatrix},

where all the entries not on the (upper-left to lower-right) diagonal are equal to 0. For example,

    diag(1) = (1),   diag(1, 2) = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},   diag(1, 2, 3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.

The zero matrix in Mat_{m×n}, denoted by O_{m×n}, is the matrix that has all entries equal to 0. The identity matrix in Mat_{m×m} is defined by I_m = diag(1, ..., 1),


so that, for example,

    I_1 = (1),   I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},   I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

We say that a matrix Q in Mat_{m×m} is invertible if there is a matrix in Mat_{m×m}, denoted by Q⁻¹ and called the inverse of Q, such that QQ⁻¹ = Q⁻¹Q = I_m. It is easily shown that if the inverse of a matrix exists, then it is unique.

Theorem 2.1.3. If P and Q are invertible matrices in Mat_{m×m}, then:
(a) (P⁻¹)⁻¹ = P.
(b) (PQ)⁻¹ = Q⁻¹P⁻¹.
(c) (P⁻¹)^T = (P^T)⁻¹.

Proof. (a), (b): Straightforward. (c): By Theorem 2.1.2(b),

    I_m = (I_m)^T = (PP⁻¹)^T = (P⁻¹)^T P^T,

from which the result follows.

Multi-index notation, introduced in Appendix A, provides a convenient way to specify submatrices of matrices. Let 1 ≤ r ≤ m and 1 ≤ s ≤ n be integers, and let I = (i_1, ..., i_r) and J = (j_1, ..., j_s) be multi-indices in I_{r,m} and I_{s,n}, respectively. For a matrix P = [p^i_j] in Mat_{m×n}, we denote by

    P^{(i_1,...,i_r)}_{(j_1,...,j_s)},   P^I_J,   [p^i_j]^{(i_1,...,i_r)}_{(j_1,...,j_s)},   or   [p^i_j]^I_J

the r × s submatrix of P consisting of the overlap of rows i_1, i_2, ..., i_r and columns j_1, j_2, ..., j_s (in that order); that is,

    P^{(i_1,...,i_r)}_{(j_1,...,j_s)} = \begin{pmatrix} p^{i_1}_{j_1} & \cdots & p^{i_1}_{j_s} \\ \vdots & \ddots & \vdots \\ p^{i_r}_{j_1} & \cdots & p^{i_r}_{j_s} \end{pmatrix}.

When r = m, in which case (i_1, ..., i_m) = (1, ..., m), we denote P^{(1,...,m)}_{(j_1,...,j_s)} by P_{(j_1,...,j_s)}, and when s = n, in which case (j_1, ..., j_n) = (1, ..., n), we denote P^{(i_1,...,i_r)}_{(1,...,n)} by P^{(i_1,...,i_r)}. When r = 1, so that I = (i) for some 1 ≤ i ≤ m, we have

    P^{(i)} = [p^i_1 ··· p^i_n],


which is the ith row matrix of P. Similarly, when s = 1, so that J = (j) for some 1 ≤ j ≤ n, we have

    P_{(j)} = \begin{pmatrix} p^1_j \\ \vdots \\ p^m_j \end{pmatrix},

which is the jth column matrix of P.

Example 2.1.4. Continuing with Example 2.1.1, we have

    P^{(1)} = [1 2 3],   P_{(2)} = \begin{pmatrix} 2 \\ 5 \\ 8 \end{pmatrix},   P^{(1,2)}_{(2,3)} = \begin{pmatrix} 2 & 3 \\ 5 & 6 \end{pmatrix},

    P^{(1,2)} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix},   P_{(2,3)} = \begin{pmatrix} 2 & 3 \\ 5 & 6 \\ 8 & 9 \end{pmatrix}.

Theorem 2.1.5. If P, Q, and R are matrices in Mat_{k×l}, Mat_{l×m}, and Mat_{m×n}, respectively, then:
(a)
    PQ = \begin{pmatrix} P^{(1)}Q_{(1)} & \cdots & P^{(1)}Q_{(m)} \\ \vdots & \ddots & \vdots \\ P^{(k)}Q_{(1)} & \cdots & P^{(k)}Q_{(m)} \end{pmatrix}.
(b)
    PQR = \begin{pmatrix} P^{(1)}QR_{(1)} & \cdots & P^{(1)}QR_{(n)} \\ \vdots & \ddots & \vdots \\ P^{(k)}QR_{(1)} & \cdots & P^{(k)}QR_{(n)} \end{pmatrix}.

Proof. (a): By definition of matrix multiplication, the ij-th entry of PQ is P^{(i)}Q_{(j)}. (b): This follows from part (a) and the observation that (QR)_{(j)} = QR_{(j)}.

Theorem 2.1.6. Let P, Q, and R be matrices in Mat_{k×l}, Mat_{l×m}, and Mat_{m×n}, respectively, and let I, J, K, and L be multi-indices in I_{r,k}, I_{s,l}, I_{t,m}, and I_{u,n}, respectively. Then:
(a) (P^I)^T = (P^T)_I.
(b) (P_J)^T = (P^T)^J.
(c) (PQ)^I_K = P^I Q_K.
(d) (PQR)^I_L = P^I QR_L.

Proof. (a), (b), (c): Straightforward. (d): By part (c), (PQR)^I_L = (PQ)^I R_L = P^I QR_L.


For a matrix P = [p^i_j] in Mat_{m×m}, the trace of P is defined by

    tr(P) = \sum_i p^i_i.

Theorem 2.1.7. If P and Q are matrices in Mat_{m×m}, then:
(a) tr(P + Q) = tr(P) + tr(Q).
(b) tr(PQ) = tr(QP).
(c) If Q is invertible, then tr(Q⁻¹PQ) = tr(P).

Proof. (a): Straightforward.
(b): Let P = [p^i_j] and Q = [q^i_j]. By Theorem 2.1.5(a),

    tr(PQ) = \sum_i P^{(i)} Q_{(i)} = \sum_i \sum_j p^i_j q^j_i
           = \sum_j \sum_i q^j_i p^i_j = \sum_j Q^{(j)} P_{(j)} = tr(QP).

(c): This follows from part (b).

Matrix Representations

Matrices have many desirable computational properties. For this reason, when computing in vector spaces, it is often convenient to reformulate arguments in terms of matrices. We employ this device often. Let V be a vector space, let H = (h1 , . . . , hm ) be a basis for V , and let v be a vector in V , with X v= ai hi . i

  The matrix representation of v with respect to H is denoted by v H and defined by  1 a    ..  v H =  . . am

We refer to a1 , . . . , am as the components of v with respect to H. In particular,   0  ..  .      hi H =  (2.2.1) 1, . . . 0 where 1 is in the ith position and 0s are elsewhere for i = 1, . . . , m.

28

2 Matrices and Determinants

Theorem 2.2.1 (Representation of Vectors). If V is a vector space of dimension m and H is a basis for V , then the map LH : V −→ Matm×1 defined by   LH (v) = v H for all vectors v in V is a linear isomorphism: V ≈ Matm×1 . Proof. Straightforward. With V and H as above, let W be another vector space, and let F = (f1 , . . . , fn ) be a basis for W . Let A : V −→ W be a linear map, with A(hj ) =

X

aij fi ,

(2.2.2)

i

so that  a1j     A(hj ) F =  ...  anj 

for j = 1, . . . , m. The matrix representation of A with respect to H and F  F is denoted by A H and defined to be the n × m matrix a11  .. = . 

 F A H

an1

··· .. . ···

 a1m h ..  = A(h ) 1 F .  anm

···

  i A(hm ) F .

(2.2.3)

As an example, consider the linear map A : R2 −→ R3 given by A(x, y) = (y, 2x, 3x + 4y), and let E and F be the standard bases for R2 and R3 , respectively. Then   0 1  F A E = 2 0. 3 4 Theorem 2.2.2. Let U , V , and W be vector spaces, let H, F, and G be respective bases, let A, B : U −→ V and C : V −→ W be linear maps, and let c be a real number. Then:  F  F F (a) A + B H = A H + B H .  F  F (b) cA H = c A H .  G  G  F (c) C ◦ A H = C F A H .

29

2.2 Matrix Representations

Proof. (a), (b): Straightforward. (c): Let H = (h1 , . . . , hm ), F = (f1 , . . . , fn ), and G = (g1 , . . . , gr ), and let  F  i   G   A H = aj and C F = ckl . According to (2.2.2) and (2.2.3), X

A(hj ) =

aij fi

and

C(fl ) =

i

X

ckl gk .

k

Then 

C A(hj ) = C

X

akj fk

 =

k

=

akj C(fk )

k

XX i

X

k

cik akj



=

X k

akj

X



i

X G (i)  F  C F A H gi = i

cik gi

(j)

gi ,

from which the result follows. Theorem 2.2.3 (Representation of Linear Maps). Let V and W be vector spaces of dimensions m and n, respectively, and let H and F be respective bases. Define a map LF H : Lin(V, W ) −→ Matn×m by  F LF H (A) = A H for all maps A in Lin(V, W ), where Lin(V, W ) is defined in Example 1.1.1. Then: (a) LF H is a linear isomorphism with respect to the additive structure of Lin(V, W ): Lin(V, W ) ≈ Matn×m . (b) If V = W , then LF H is a ring isomorphism with respect to the multiplicative structure of Lin(V, V ). Remark. We showed in Example 1.1.1 that Lin(V, V ) is both a vector space and a ring, and remarked at the beginning of Secion 2.1 that the same is true of Matm×m , so the assertion in part (b) makes sense. Proof. (a): By parts (a) and (b) of Theorem 2.2.2, LF H is a linear map, and it is easily shown that LF is injective. We claim that Mat n×m and Lin(V, W ) H both have dimension mn. Let Eij be the matrix in Matn×m with 1 in the ijth position and 0s elsewhere for i = 1, . . . , n and j = 1, . . . , m. It is readily demonstrated that the Eij comprise a basis for Matn×m , which therefore has dimension mn. Let H = (h1 , . . . , hm ) and F = (f1 , . . . , fn ), and using Theorem 1.1.10, define linear maps Lij in Lin(V, W ) by ( fi if k = j Lij (hk ) = 0 if k 6= j

30

2 Matrices and Determinants

for i = 1, . . . , n and j = 1, . . . , m. Then X  X X i aj Lij (hk ) = aij Lij (hk ) = aik fi ij

ij

(2.2.4)

i

 F   for all real numbers aij . Let B be a map in Lin(V, W ), with B H = bij . We have from (2.2.4) that X  X i bj Lij (hk ) = bik fi = B(hk ) ij

i

for k = 1, . . . , m. By Theorem 1.1.10, ij bij Lij = B, so the Lij span Lin(V, W ). P i Suppose aij , where 0 denotes the zero ij aj Lij = 0 for some real numbers P map in Lin(V, W ). It follows from (2.2.4) that i aik fi = 0 for k = 1, . . . , m. Since F is a basis for W , aik = 0 for i = 1, . . . , n and k = 1, . . . , m, so the Lij are linearly independent. Thus, the Lij comprise a basis for Lin(V, W ), and therefore, Lin(V, W ) has dimension mn. This proves the claim. Suppose LF H (A) = On×m for some map A in Lin(V, W ). It follows from Theorem 1.1.10, (2.2.2), and (2.2.3) that A is the zero map, hence ker(LF H ) = {0}. The result now follows from Theorem 1.1.12 and Theorem 1.1.14. (b): This follows from Theorem 2.2.2(c) and part (a). P

Theorem 2.2.4. Let V and W be vector spaces, let H and F be respective bases, let A : V −→ W be a linear map, and let v be a vector in V . Then    F   A(v) F = A H v H . P i Proof. Let H = (h1 , . . . , hm ) and v = i a hi . It follows from (2.2.1) and (2.2.2) that    F   A(hi ) F = A H hi H . By parts (a) and (b) of Theorem 2.2.2, X     P i  P i   A(v) F = A ai A(hi ) F i a A(hi ) F = i a hi F = i

 1 a X  F     F X     F  .  i i = a A H hi H = A H a hi H = A H  ..  i

i

am

 F   = A H v H. Let V be a vector space, and let H and F be bases for V . Setting A = idV in Theorem 2.2.4 yields    F   v F = idV H v H . (2.2.5)  F This shows that idV H is the matrix that transforms components with respect  F to H into components with respect to F. For this reason, idV H is called the

31

2.2 Matrix Representations

 F   change of basis matrix from H to F. Let idV H = aij . Then (2.2.2) and (2.2.3) specialize to X hj = aij fi (2.2.6) i

for i = 1, . . . , m and  F h  idV H = h1 F

···

  i hm F .

(2.2.7)

Theorem 2.2.5. Let V and W be vector spaces, let H and F be respective bases, and let A : V −→ W be a linear isomorphism. Then: (a)  F  F A H = idW A(H) . (b)  −1 H  F −1 A F = A H . Remark. By Theorem 1.1.13(a), A(H) is a basis for W , so the assertion in part (a) makes sense. Proof. (a): Let H = (h1 , . . . , hm ), so that    i  F h A(hm ) F A H = A(h1 ) F · · ·  F = idW A(H) .

[(2.2.3)] [(2.2.7)]

(b): By Theorem 2.2.2(c),  H  H  H  F Im = idV H = A−1 ◦ A H = A−1 F A H , from which the result follows. Theorem 2.2.6 (Change of Basis). Let V be a vector space, let H and F be bases for V , and let A : V −→ V be a linear map. Then  F  H −1  H  H A F = idV F A H idV F . Proof. By Theorem 2.2.2(c),  F  F  F  H  H A F = idV ◦ A ◦ idV F = idV H A H idV F . The result now follows from Theorem 2.2.5(b). e be bases for V , let Θ Theorem 2.2.7. Let V be a vector space, let H and H e be the corresponding dual bases, and let A : V −→ V be a linear map. and Θ Then  ∗ Θ  He T A Θ A H , e = where A∗ is the pullback by A.

32

2 Matrices and Determinants

e = (e Proof. Let H = (h1 , . . . , hm ) and H h1 , . . . , e hm ), let Θ = (θ1 , . . . , θm ) and     e e = (θe1 , . . . , θem ), and let A H = aij . By (2.2.2) and (2.2.3), A(hj ) = P ai e Θ i j hi . H Then X A∗ (θej ) = A∗ (θej )(hi )θi [Th 1.2.1(e)] i

=

X

 θej A(hi ) θi

[(1.3.1)]

i

 X  X X j ke e = θ ai hk θi = aki θej (e hk )θi i

=

k

X

ik

aji θi .

[Th 1.2.1(a)]

i

 Θ  j   i T aj Again by (2.2.2) and (2.2.3), A∗ Θ . e = ai =

2.3

Rank of Matrices

Consider the n-dimensional vector space Mat1×n of row matrices and the mdimensional vector space Matm×1 of column matrices. Let P be a matrix in Matm×n . The row rank of P is defined to be the dimension of the subspace of Mat1×n spanned by the rows of P : rowrank(P ) = dim(span{P (1) , . . . , P (m) }). Similarly, the column rank of P is defined to be the dimension of the subspace of Matm×1 spanned by the columns of P : colrank(P ) = dim(span{P(1) , . . . , P(n) }). To illustrate, for P =

 1 0

0 1

 0 , 0

we have rowrank(P ) = colrank(P ) = 2. As shown below, it is not a coincidence that the row rank and column rank of P are equal. Theorem 2.3.1. Let V and W be vector spaces, let H and F be respective bases, and let A : V −→ W be a linear map. Then   F rank(A) = colrank [A]H . Proof. Let H = (h1 , . . . , hm ). It follows from Theorem 2.2.1 and (2.2.3) that      F [A]H = A(hj ) F = LF A(hj ) (j)

33

2.4 Determinant of Matrices

for j = 1, . . . , m. Since LF is an isomorphism and A(h1 ), . . . , A(hm ) span the image of A, we have      F colrank [A]H = dim span {A(h1 ), . . . , A(hm )} = dim im(A) = rank(A). Theorem 2.3.2. If P is a matrix in Matm×n , then rowrank(P ) = colrank(P ). m Proof. Let E and F be the standard bases for Rn and and let  i R , respectively, p Ξ and Φ be the corresponding dual bases. Let P = j , and let A : Rn −→ Rm be the linear map defined by X  X 1 n 1 j m j A(x , . . . , x ) = pj x , . . . , pj x . j

j

 F Then A E = P . By Theorem 2.3.1, rank(A) = colrank(P ).  Ξ  F T We have from Theorem 2.2.7 that A∗ Φ = A E = P T , so a similar argument gives rank(A∗ ) = colrank(P T ) = rowrank(P ). The result now follows from Theorem 1.4.3(a). In light of Theorem 2.3.2, the common value of the row rank and column rank of P is denoted by rank(P ) and called the rank of P . Thus, rank(P ) = rowrank(P ) = colrank(P ).

2.4

(2.3.1)

Determinant of Matrices

This section presents the basic results on the determinant of matrices. Consider the m-dimensional vector space Matm×1 of column matrices and the corresponding product vector space (Matm×1 )m . We denote by (E1 , . . . , Em ) the standard basis for Matm×1 , where Ej has 1 in the jth row and 0s elsewhere for j = 1, . . . , m. Let ∆ : (Matm×1 )m −→ R be an arbitrary function, and let σ be a permutation in Sm , the symmetric group on {1, 2, . . . , m}. We define a function σ(∆) : (Matm×1 )m −→ R by σ(∆)(P1 , . . . , Pm ) = ∆(Pσ(1) , . . . , Pσ(m) ) for all matrices P1 , . . . , Pm in Matm×1 .

34

2 Matrices and Determinants

Theorem 2.4.1. If ∆ : (Matm×1 )m −→ R is a function, and σ and ρ are permutations in Sm , then  (σρ)(∆) = σ ρ(∆) . Proof. Setting Pσ(1) = Q1 , . . . , Pσ(m) = Qm , we have (σρ)(∆)(P1 , . . . , Pm ) = ∆(P(σρ)(1) , . . . , P(σρ)(m) ) = ∆(Pσ(ρ(1)) , . . . , Pσ(ρ(m)) ) = ∆(Qρ(1) , . . . , Qρ(m) ) = ρ(∆)(Q1 , . . . , Qm )  = ρ(∆)(Pσ(1) , . . . , Pσ(m) ) = σ ρ(∆) (P1 , . . . , Pm ). Since P1 , . . . , Pm were arbitrary, the result follows. A function ∆ : (Matm×1 )m −→ R is said to be multilinear if for all matrices P1 , . . . , Pm , Q in Matm×1 and all real numbers c, ∆(P1 , . . . , cPi + Q, . . . , Pm ) = c ∆(P1 , . . . , Pi , . . . , Pm ) + ∆(P1 , . . . , Q, . . . , Pm ) for i = 1, . . . , m. We say that ∆ is alternating if for all matrices P1 , . . . , Pm in Matm×1 , ∆(P1 , . . . , Pi , . . . , Pj , . . . , Pm ) = −∆(P1 , . . . , Pj , . . . , Pi , . . . , Pm ) for all 1 ≤ i < j ≤ m. Equivalently, ∆ is alternating if τ (∆) = −∆ for all transpositions τ in Sm . Theorem 2.4.2. If ∆ : (Matm×1 )m −→ R is a multilinear function, then the following are equivalent: (a) ∆ is alternating. (b) σ(∆) = sgn(σ) ∆ for all permutations σ in Sm . (c) If P1 , . . . , Pm are matrices in Matm×1 and two (or more) of them are equal, then ∆(P1 , . . . , Pm ) = 0. (d) If P1 , . . . , Pm are matrices in Matm×1 and ∆(P1 , . . . , Pm ) 6= 0, then P1 , . . . , Pm are linearly independent. Proof. (a) ⇒ (b): Let σ = τ1 · · · τk be a decomposition of σ into transpositions. By Theorem 2.4.1,  σ(∆) = (τ1 · · · τk )(∆) = (τ1 · · · τk−1 ) τk (∆) = (τ1 · · · τk−1 )(−∆) = −(τ1 · · · τk−1 )(∆).

Repeating the process k − 1 more times gives σ(∆) = (−1)k ∆. By Theorem B.2.3, sgn(σ) = (−1)k . (b) ⇒ (a): If τ is a transposition in Sm , then, by Theorem B.2.3, τ (∆) = sgn(τ ) ∆ = −∆. (a) ⇒ (c): If Pi = Pj for some 1 ≤ i < j ≤ m, then ∆(P1 , . . . , Pi , . . . , Pj , . . . , Pm ) = ∆(P1 , . . . , Pj , . . . , Pi , . . . , Pm ).


On the other hand, since ∆ is alternating,

    ∆(P_1, ..., P_j, ..., P_i, ..., P_m) = −∆(P_1, ..., P_i, ..., P_j, ..., P_m).

Thus,

    ∆(P_1, ..., P_i, ..., P_j, ..., P_m) = −∆(P_1, ..., P_i, ..., P_j, ..., P_m),

from which the result follows.

(c) ⇒ (a): For 1 ≤ i < j ≤ m, we have

    0 = ∆(P_1, ..., P_i + P_j, ..., P_i + P_j, ..., P_m)
      = ∆(P_1, ..., P_i, ..., P_i, ..., P_m) + ∆(P_1, ..., P_i, ..., P_j, ..., P_m)
        + ∆(P_1, ..., P_j, ..., P_i, ..., P_m) + ∆(P_1, ..., P_j, ..., P_j, ..., P_m)
      = ∆(P_1, ..., P_i, ..., P_j, ..., P_m) + ∆(P_1, ..., P_j, ..., P_i, ..., P_m),

from which the result follows.

To prove (c) ⇔ (d), we replace the assertion in part (d) with the following logically equivalent assertion:

(d′): If P_1, ..., P_m are linearly dependent matrices in Mat_{m×1}, then ∆(P_1, ..., P_m) = 0.

(c) ⇒ (d′): Since P_1, ..., P_m are linearly dependent, one of them can be expressed as a linear combination of the others, say, P_1 = \sum_{i=2}^m a^i P_i. Then

    ∆(P_1, ..., P_m) = ∆(\sum_{i=2}^m a^i P_i, P_2, ..., P_m) = \sum_{i=2}^m a^i ∆(P_i, P_2, ..., P_m).

Since ∆(P_i, P_2, ..., P_m) has P_i in (at least) two positions, ∆(P_i, P_2, ..., P_m) = 0 for i = 2, ..., m. Thus, ∆(P_1, ..., P_m) = 0.

(d′) ⇒ (c): If two (or more) of P_1, ..., P_m are equal, then P_1, ..., P_m are linearly dependent, so ∆(P_1, ..., P_m) = 0.

A function ∆ : (Mat_{m×1})^m −→ R is said to be a determinant function (on Mat_{m×1}) if it is both multilinear and alternating.

Theorem 2.4.3. Let ∆ : (Mat_{m×1})^m −→ R be a determinant function, and let P_1, ..., P_m be matrices in Mat_{m×1}, with

    P_1 = \begin{pmatrix} p^1_1 \\ \vdots \\ p^m_1 \end{pmatrix},  ...,  P_m = \begin{pmatrix} p^1_m \\ \vdots \\ p^m_m \end{pmatrix}.

Then

    ∆(P_1, ..., P_m) = ∆(E_1, ..., E_m) \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_1 ··· p^{σ(m)}_m.


Proof. We have

    P_1 = \sum_{i_1=1}^m p^{i_1}_1 E_{i_1},  ...,  P_m = \sum_{i_m=1}^m p^{i_m}_m E_{i_m},

hence

    ∆(P_1, ..., P_m)
      = ∆(\sum_{i_1=1}^m p^{i_1}_1 E_{i_1}, ..., \sum_{i_m=1}^m p^{i_m}_m E_{i_m})
      = \sum_{1≤i_1,...,i_m≤m} p^{i_1}_1 ··· p^{i_m}_m ∆(E_{i_1}, ..., E_{i_m})
      = \sum_{1≤i_1,...,i_m≤m; i_1,...,i_m distinct} p^{i_1}_1 ··· p^{i_m}_m ∆(E_{i_1}, ..., E_{i_m})      [Th 2.4.2]
      = \sum_{σ∈S_m} p^{σ(1)}_1 ··· p^{σ(m)}_m ∆(E_{σ(1)}, ..., E_{σ(m)})
      = \sum_{σ∈S_m} p^{σ(1)}_1 ··· p^{σ(m)}_m σ(∆)(E_1, ..., E_m)
      = ∆(E_1, ..., E_m) \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_1 ··· p^{σ(m)}_m.      [Th 2.4.2]

Theorem 2.4.4 (Existence of Determinant Function). There is a unique determinant function det : (Mat_{m×1})^m −→ R on Mat_{m×1} such that det(E_1, ..., E_m) = 1. More specifically,

    det(P_1, ..., P_m) = \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_1 ··· p^{σ(m)}_m
                       = \sum_{σ∈S_m} sgn(σ) p^1_{σ(1)} ··· p^m_{σ(m)}        (2.4.1)

for all matrices P_1, ..., P_m in Mat_{m×1}, where

    P_1 = \begin{pmatrix} p^1_1 \\ \vdots \\ p^m_1 \end{pmatrix},  ...,  P_m = \begin{pmatrix} p^1_m \\ \vdots \\ p^m_m \end{pmatrix}.

Proof. Let us begin by showing that the right-hand sides of (2.4.1) are equal. We have

    \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_1 ··· p^{σ(m)}_m = \sum_{σ∈S_m} sgn(σ) p^1_{σ⁻¹(1)} ··· p^m_{σ⁻¹(m)}
                                                  = \sum_{σ∈S_m} sgn(σ⁻¹) p^1_{σ⁻¹(1)} ··· p^m_{σ⁻¹(m)}
                                                  = \sum_{σ∈S_m} sgn(σ) p^1_{σ(1)} ··· p^m_{σ(m)},


where the second equality follows from Theorem B.2.2(b), and the third equality from the observation that as σ varies over S_m, so does σ⁻¹.

Existence. With P_1, ..., P_m as above, define a function det : (Mat_{m×1})^m −→ R using the first equality in (2.4.1). It is easily shown that det is multilinear. For a given permutation ρ in S_m, we have

    P_{ρ(1)} = \begin{pmatrix} p^1_{ρ(1)} \\ \vdots \\ p^m_{ρ(1)} \end{pmatrix},  ...,  P_{ρ(m)} = \begin{pmatrix} p^1_{ρ(m)} \\ \vdots \\ p^m_{ρ(m)} \end{pmatrix},

hence

    ρ(det)(P_1, ..., P_m) = det(P_{ρ(1)}, ..., P_{ρ(m)})
                          = \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_{ρ(1)} ··· p^{σ(m)}_{ρ(m)}
                          = \sum_{σ∈S_m} sgn(σ) p^{σρ⁻¹(1)}_1 ··· p^{σρ⁻¹(m)}_m
                          = sgn(ρ) \sum_{σ∈S_m} sgn(σρ⁻¹) p^{σρ⁻¹(1)}_1 ··· p^{σρ⁻¹(m)}_m
                          = sgn(ρ) det(P_1, ..., P_m),

where the fourth equality follows from Theorem B.2.2, and the last equality follows from the observation that as σ varies over S_m, so does σρ⁻¹. Since P_1, ..., P_m were arbitrary, ρ(det) = sgn(ρ) det. By Theorem 2.4.2, det is alternating. Thus, det is a determinant function. A straightforward computation shows that det(E_1, ..., E_m) = 1.

Uniqueness. This follows from Theorem 2.4.3.

Let P be a matrix in Mat_{m×m}, and recall that in multi-index notation the column matrices of P are P_{(1)}, ..., P_{(m)}. Setting

    P = [P_{(1)} ··· P_{(m)}],

we henceforth view P as an m-tuple of column matrices. In this way, the vector spaces Mat_{m×m} and (Mat_{m×1})^m are identified. Accordingly, we now express the determinant function in Theorem 2.4.4 as det : Mat_{m×m} −→ R, so that det(P) = det(P_{(1)}, ..., P_{(m)}), where det(P) is referred to as the determinant of P. In particular, the condition det(E_1, ..., E_m) = 1 in Theorem 2.4.4 becomes

    det(I_m) = 1.        (2.4.2)
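The permutation formula (2.4.1) can be implemented directly, though at O(m!·m) cost it is practical only for small m; library routines use elimination instead. The following sketch is an illustration added here (it assumes Python with NumPy and itertools; the matrix is a made-up example) comparing the formula against numpy.linalg.det.

```python
import itertools
import numpy as np

def leibniz_det(P):
    """Determinant via (2.4.1): sum of signed products over permutations.

    For illustration only: O(m! * m) time, so use it only for small m.
    """
    m = P.shape[0]
    total = 0.0
    for sigma in itertools.permutations(range(m)):
        # sgn(sigma): parity of the number of inversions
        inversions = sum(1 for i in range(m) for j in range(i + 1, m)
                         if sigma[i] > sigma[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for j in range(m):
            prod *= P[sigma[j], j]   # p^{sigma(j)}_j
        total += sign * prod
    return total

P = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(np.isclose(leibniz_det(P), np.linalg.det(P)))   # True
```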


Theorem 2.4.5. If P and Q are matrices in Mat_{m×m}, then:
(a) det(PQ) = det(P) det(Q).
(b) If P is invertible, then det(P⁻¹) = det(P)⁻¹.
(c) det(P^T) = det(P).

Proof. (a): Let us define a function ∆ : (Mat_{m×1})^m −→ R by

    ∆(R_1, ..., R_m) = det([PR_1 ··· PR_m])

for all matrices R_1, ..., R_m in Mat_{m×1}. It is easily shown that ∆ is a determinant function. We have

    det([PQ_{(1)} ··· PQ_{(m)}]) = ∆(Q_{(1)}, ..., Q_{(m)})
                                 = ∆(E_1, ..., E_m) det([Q_{(1)} ··· Q_{(m)}])      [(2.4.1), Th 2.4.3]
                                 = det(P) det(Q).

(b): We have from (2.4.1) and part (a) that

    1 = det(I_m) = det(PP⁻¹) = det(P) det(P⁻¹),

from which the result follows.

(c): Let P^T = [p^i_j]^T = [q^j_i]. Then

    det(P^T) = \sum_{σ∈S_m} sgn(σ) q^1_{σ(1)} ··· q^m_{σ(m)} = \sum_{σ∈S_m} sgn(σ) p^{σ(1)}_1 ··· p^{σ(m)}_m = det(P),

where the first and last equalities follow from (2.4.1), and the second equality from the observation that q^i_{σ(i)} = p^{σ(i)}_i for all permutations σ in S_m.

Let P be a matrix in Mat_{m×m}. The ij-th cofactor of P is defined by

    c^i_j = (−1)^{i+j} det(P^{(1,...,î,...,m)}_{(1,...,ĵ,...,m)}),

where ^ indicates that an expression is omitted. Thus, P^{(1,...,î,...,m)}_{(1,...,ĵ,...,m)} is the matrix in Mat_{(m−1)×(m−1)} obtained by deleting the ith row and jth column of P. The adjugate of P is the matrix in Mat_{m×m} defined by

    adj(P) = [c^i_j]^T.

Theorem 2.4.6 (Column Expansion of Determinant). If P = [p^i_j] is a matrix in Mat_{m×m} and 1 ≤ j ≤ m, then

    det(P) = \sum_i c^i_j p^i_j,

which is called the expansion of det(P) along the jth column of P.


Proof. We have

    P = [P_{(1,...,j−1)}  \sum_i p^i_j E_i  P_{(j+1,...,m)}],

hence

    det(P) = \sum_i p^i_j det([P_{(1,...,j−1)}  E_i  P_{(j+1,...,m)}]).

We also have

    det([P_{(1,...,j−1)}  E_i  P_{(j+1,...,m)}])
      = det \begin{pmatrix} P^{(1,...,i−1)}_{(1,...,j−1)} & O_{(i−1)×1} & P^{(1,...,i−1)}_{(j+1,...,m)} \\ P^{(i)}_{(1,...,j−1)} & 1 & P^{(i)}_{(j+1,...,m)} \\ P^{(i+1,...,m)}_{(1,...,j−1)} & O_{(m−i)×1} & P^{(i+1,...,m)}_{(j+1,...,m)} \end{pmatrix}
      = (−1)^{(i−1)+(j−1)} det \begin{pmatrix} 1 & P^{(i)}_{(1,...,j−1)} & P^{(i)}_{(j+1,...,m)} \\ O_{(i−1)×1} & P^{(1,...,i−1)}_{(1,...,j−1)} & P^{(1,...,i−1)}_{(j+1,...,m)} \\ O_{(m−i)×1} & P^{(i+1,...,m)}_{(1,...,j−1)} & P^{(i+1,...,m)}_{(j+1,...,m)} \end{pmatrix}
      = (−1)^{i+j} det \begin{pmatrix} 1 & P^{(i)}_{(1,...,ĵ,...,m)} \\ O_{(m−1)×1} & P^{(1,...,î,...,m)}_{(1,...,ĵ,...,m)} \end{pmatrix}
      = (−1)^{i+j} det(P^{(1,...,î,...,m)}_{(1,...,ĵ,...,m)}) = c^i_j,

where the second equality is obtained by switching i − 1 adjacent rows, and then switching j − 1 adjacent columns, and where the fourth equality follows from (2.4.1). Combining the above identities gives the result.

Theorem 2.4.7. If P is a matrix in Mat_{m×m}, then P adj(P) = det(P) I_m.

Proof. Let P = [p^i_j]. For given integers 1 ≤ k, l ≤ m, let Q = [q^i_j] be the matrix in Mat_{m×m} obtained by replacing the kth column of P with the lth column of P; that is,

    Q = [P_{(1)} ··· P_{(k−1)} P_{(l)} P_{(k+1)} ··· P_{(m)}].

Let adj(P) = [c^i_j]^T and adj(Q) = [d^i_j]^T. By Theorem 2.4.6,

    \sum_i c^i_k p^i_k = det(P).


If k ≠ l, then two columns of Q are equal and

    Q^{(1,...,î,...,m)}_{(1,...,k̂,...,m)} = P^{(1,...,î,...,m)}_{(1,...,k̂,...,m)}.

It follows from Theorem 2.4.2 and Theorem 2.4.6 that

    \sum_i c^i_k p^i_l = \sum_i d^i_k q^i_k = det(Q) = 0.

Thus,

    \sum_i c^i_k p^i_l = det(P) δ_{kl}

for k, l = 1, ..., m, where δ_{kl} is Kronecker's delta. Then [c^i_j]^T [p^i_j] = det(P) I_m. Taking transposes gives the result.

Theorem 2.4.8. If P is a matrix in Mat_{m×m}, then the following are equivalent:
(a) P is invertible.
(b) det(P) ≠ 0.
(c) rank(P) = m.
If any of the above equivalent conditions is satisfied, then

    P⁻¹ = (1/det(P)) adj(P).

Proof. (a) ⇒ (b): By Theorem 2.4.5(b), det(P) det(P⁻¹) = 1, hence det(P) ≠ 0.

(b) ⇒ (c): Since det([P_{(1)} ··· P_{(m)}]) = det(P) ≠ 0, it follows from Theorem 2.4.2 that P_{(1)}, ..., P_{(m)} are linearly independent, so

    rank(P) = rank([P_{(1)} ··· P_{(m)}]) = m.

(c) ⇒ (a): Since rank(P) = m, P = (P_{(1)}, ..., P_{(m)}) is a basis for Mat_{m×1}. Let Q = [[E_1]_P ··· [E_m]_P]. By definition, E_j = P[E_j]_P for j = 1, ..., m, hence

    I_m = [E_1 ··· E_m] = PQ,

so Q = P⁻¹. The final assertion follows from Theorem 2.4.7.
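The cofactor construction and the inverse formula of Theorem 2.4.8 can be checked numerically. The following sketch is an illustration added here (it assumes Python with NumPy; the matrix is a made-up invertible example).

```python
import numpy as np

def adjugate(P):
    """Adjugate via cofactors: adj(P) = [c^i_j]^T (illustration only)."""
    m = P.shape[0]
    C = np.empty_like(P, dtype=float)
    for i in range(m):
        for j in range(m):
            # delete row i and column j, then take the signed minor
            minor = np.delete(np.delete(P, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

P = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
# Theorem 2.4.7 and Theorem 2.4.8: P adj(P) = det(P) I and P^{-1} = adj(P)/det(P)
print(np.allclose(P @ adjugate(P), np.linalg.det(P) * np.eye(3)))      # True
print(np.allclose(np.linalg.inv(P), adjugate(P) / np.linalg.det(P)))   # True
```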

Theorem 2.4.9. The matrices P_1, ..., P_m in Mat_{m×1} are linearly independent if and only if det([P_1 ··· P_m]) ≠ 0.

Proof. We have

    P_1, ..., P_m are linearly independent ⇔ rank([P_1 ··· P_m]) = m        [Th 2.4.8]
                                           ⇔ det([P_1 ··· P_m]) ≠ 0.


The next result is not usually included in an overview of determinants, but it will prove invaluable later on.

Theorem 2.4.10 (Cauchy–Binet Identity). Let 1 ≤ k ≤ m be integers.
(a) If P and Q are matrices in Mat_{m×k}, then

    det(P^T Q) = \sum_{I∈I_{k,m}} det(P^I) det(Q^I).

(b) If R and S are matrices in Mat_{k×m}, then

    det(RS^T) = \sum_{J∈I_{k,m}} det(R_J) det(S_J).
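Before turning to the proof, here is a quick numerical sanity check of part (a). The sketch is an illustration added here (it assumes Python with NumPy and itertools); P^I denotes the k × k submatrix formed by the rows indexed by the increasing multi-index I.

```python
import itertools
import numpy as np

# Verify Cauchy-Binet (a) for m = 4, k = 2 with random matrices,
# summing over all increasing multi-indices I in I_{k,m}.
rng = np.random.default_rng(1)
m, k = 4, 2
P = rng.standard_normal((m, k))
Q = rng.standard_normal((m, k))

lhs = np.linalg.det(P.T @ Q)
rhs = sum(np.linalg.det(P[list(I), :]) * np.linalg.det(Q[list(I), :])
          for I in itertools.combinations(range(m), k))
print(np.isclose(lhs, rhs))   # True
```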

Proof. (a): Let Q = [q^i_j], so that

    Q = [Q_{(1)} ··· Q_{(k)}],   Q_{(j)} = [q^1_j ··· q^m_j]^T,

and

    P^T Q_{(j)} = \sum_i q^i_j (P^T)_{(i)}

for j = 1, ..., k. We have

    det(P^T Q) = det([P^T Q_{(1)} ··· P^T Q_{(k)}])
               = det([\sum_{i_1} q^{i_1}_1 (P^T)_{(i_1)} ··· \sum_{i_k} q^{i_k}_k (P^T)_{(i_k)}])
               = \sum_{1≤i_1,...,i_k≤m} q^{i_1}_1 ··· q^{i_k}_k det([(P^T)_{(i_1)} ··· (P^T)_{(i_k)}])
               = \sum_{1≤i_1,...,i_k≤m; i_1,...,i_k distinct} q^{i_1}_1 ··· q^{i_k}_k det([(P^T)_{(i_1)} ··· (P^T)_{(i_k)}])

1≤i1 0; the argument when hv, vi < 0 is similar. Then Rv is a nonzero subspace of V on which g is positive definite. Let V + be a (not necessarily unique) subspace of V that has maximal dimension among subspaces on which g is positive definite, and let V − = (V + )⊥ . Then g is nondegenerate on V + , so Theorem 4.1.3 gives V =V+⊕V−

(4.1.3)

61

4.1 Scalar Product Spaces and V + ∩ V − = {0}.

(4.1.4)

We claim that g is negative semidefinite on V⁻. Suppose, for a contradiction, that there is a nonzero vector v⁻ in V⁻ such that ⟨v⁻, v⁻⟩ > 0, and consider the subspace V⁺ + Rv⁻ of V. Since Rv⁻ ⊆ V⁻, it follows from (4.1.3) that

    V⁺ + Rv⁻ = V⁺ ⊕ Rv⁻.        (4.1.5)

Let v⁺ + av⁻ be a vector in V⁺ + Rv⁻. Then

    ⟨v⁺ + av⁻, v⁺ + av⁻⟩ = ⟨v⁺, v⁺⟩ + a²⟨v⁻, v⁻⟩ ≥ 0,

with

    ⟨v⁺ + av⁻, v⁺ + av⁻⟩ = 0 ⇔ ⟨v⁺, v⁺⟩ = 0 and a²⟨v⁻, v⁻⟩ = 0
                             ⇔ v⁺ = 0 and a = 0        [(4.1.5)]
                             ⇔ v⁺ + av⁻ = 0.

This shows that g is positive definite on V⁺ + Rv⁻. It follows from the maximality property of V⁺ that V⁺ + Rv⁻ = V⁺, so v⁻ is in V⁺, which contradicts (4.1.4). This proves the claim.

We now claim that g is negative definite on V⁻. Let w⁻ be a vector in V⁻ such that ⟨w⁻, w⁻⟩ = 0. Since g is negative semidefinite on V⁻, Theorem 3.2.6 applies. Thus,

    ⟨w⁻, v⁻⟩² ≤ ⟨w⁻, w⁻⟩⟨v⁻, v⁻⟩ = 0

for all vectors v⁻ in V⁻, hence ⟨w⁻, v⁻⟩ = 0 for all vectors v⁻ in V⁻. Since w⁻ is in V⁻ = (V⁺)^⊥, ⟨w⁻, v⁺⟩ = 0 for all vectors v⁺ in V⁺. We have from (4.1.3) that V = V⁺ + V⁻, so ⟨w⁻, v⟩ = 0 for all vectors v in V. Since g is nondegenerate, w⁻ = 0. Thus, g is negative definite on V⁻. This proves the claim.

(b): Suppose there is a nonzero vector v in V⁺ ∩ W⁻. By assumption, g is positive definite on V⁺, so ⟨v, v⟩ > 0. Also by assumption, g is negative definite on W⁻, hence ⟨v, v⟩ < 0, which is a contradiction. Thus, V⁺ ∩ W⁻ = {0}. We have from Theorem 1.1.17 that V⁺ + W⁻ = V⁺ ⊕ W⁻, and then from Theorem 1.1.18 that

    dim(V⁺) + dim(W⁻) = dim(V⁺ + W⁻) ≤ dim(V).

By Theorem 1.1.18 and (4.1.3), dim(V⁺) + dim(V⁻) = dim(V). It follows that dim(W⁻) ≤ dim(V⁻). Similarly, since g is positive definite on W⁺ and negative definite on V⁻, we have dim(V⁻) ≤ dim(W⁻). Thus, dim(V⁻) = dim(W⁻). A corresponding argument shows that dim(V⁺) = dim(W⁺).

4.2 Orthonormal Bases

Let (V, g) be a scalar product space of dimension m, and let v, w be vectors in V. Recall from Section 3.1 that v is said to be orthogonal to w if ⟨v, w⟩ = 0. Since g is symmetric, v is orthogonal to w if and only if w is orthogonal to v. For an integer k ≥ 2, a k-tuple (v_1, ..., v_k) of distinct nonzero vectors in V is said to be an orthogonal k-tuple if v_i and v_j are orthogonal for i ≠ j = 1, ..., k. This allows the possibility that v_i is orthogonal to itself; that is, v_i might be a lightlike vector for one or more 1 ≤ i ≤ m. If an orthogonal k-tuple (v_1, ..., v_k) consists of unit vectors, it is said to be an orthonormal k-tuple, in which case lightlike vectors are excluded. When an orthogonal m-tuple is a basis for V, it is called an orthogonal basis. It follows from Theorem 4.2.3 that an orthonormal m-tuple is necessarily a basis for V, which is called an orthonormal basis. Thus, if (e_1, ..., e_m) is an orthonormal basis for V, then, by definition, each of e_1, ..., e_m is a unit vector, so ⟨e_i, e_j⟩ = ±δ_{ij} for i, j = 1, ..., m, where δ_{ij} is Kronecker's delta. In particular, the standard basis for R^m_ν is orthonormal.

Orthonormal bases are computationally convenient. It turns out that a scalar product space always has one, as we now show.

Theorem 4.2.1. Every nonzero scalar product space contains a unit vector.

Proof. This follows from Theorem 3.2.4(b) and Theorem 3.3.4.

Theorem 4.2.2. If (V, g) is a scalar product space of dimension m and 1 ≤ k ≤ m − 1 is an integer, then any orthonormal k-tuple of vectors in V can be extended to an orthonormal (k + 1)-tuple of vectors in V.

Proof. Let E = (e_1, ..., e_k) be an orthonormal k-tuple of vectors in V. Then E is an orthonormal basis for the subspace U = Re_1 + ··· + Re_k of V. By Theorem 1.1.16, U = Re_1 ⊕ ··· ⊕ Re_k. Since (g|_U)_E, the matrix of g|_U with respect to E, is a diagonal matrix with each diagonal entry either 1 or −1, (g|_U)_E is invertible. It follows from Theorem 3.3.3 that g is nondegenerate on U, and then from Theorem 4.1.3 that g is nondegenerate on U^⊥. Thus, (U^⊥, g|_{U^⊥}) is a nonzero scalar product space. By Theorem 4.2.1, U^⊥ contains a unit vector e_{k+1}. Then (e_1, ..., e_{k+1}) is the desired orthonormal (k + 1)-tuple of vectors in V.

Theorem 4.2.3. If (V, g) is a scalar product space and (e_1, ..., e_k) is an orthonormal k-tuple of vectors in V, then e_1, ..., e_k are linearly independent.

Proof. If \sum_i a^i e_i = 0, then

    0 = ⟨e_i, \sum_j a^j e_j⟩ = \sum_j a^j ⟨e_i, e_j⟩ = a^i ⟨e_i, e_i⟩ = ±a^i,

hence a^i = 0 for i = 1, ..., k.

Theorem 4.2.4. Every nonzero scalar product space has an orthonormal basis.


Proof. Let (V, g) be a scalar product space of dimension m ≥ 1. By Theorem 4.2.1, V contains a unit vector. Applying Theorem 4.2.2 inductively, we construct an orthonormal m-tuple of vectors in V. It follows from Theorem 4.2.3 that this is an orthonormal basis for V.

It is convenient to restrict attention to orthonormal bases satisfying a certain condition. Throughout, all orthonormal bases (e_1, ..., e_m) are assumed ordered such that any +1s among ⟨e_1, e_1⟩, ..., ⟨e_m, e_m⟩ precede any −1s. To illustrate, let E = (e_1, ..., e_{m−2}, e_{m−1}, e_m) be the standard basis for R^m_1, and let F = (e_1, ..., e_{m−2}, e_m, e_{m−1}). Then E and F are both orthonormal bases for R^m_1. Since ⟨e_1, e_1⟩ = ··· = ⟨e_{m−1}, e_{m−1}⟩ = 1 and ⟨e_m, e_m⟩ = −1, we see that E satisfies the preceding convention but F does not.

Let (V, g) be a scalar product space, and let V = V⁺ ⊕ V⁻ be a direct sum of the type given by Theorem 4.1.6(a). The index of g, also called the index of V, is defined by ind(g) = dim(V⁻). We often denote ind(g) by ν. According to Theorem 4.1.6(b), ind(g) is independent of the choice of direct sum. Taking R^m_ν = (R^m, s) as an example, we see from (4.1.2) that ind(s) = ν.

Let E = (e_1, ..., e_m) be an orthonormal basis for V. The signature of g, also called the signature of V, is defined to be the m-tuple (⟨e_1, e_1⟩, ..., ⟨e_m, e_m⟩). In light of the above convention on orthonormal bases, the signature of g has +1s in the first m − ν positions and −1s in the remaining ν positions. Let us denote ⟨e_i, e_i⟩ by ε_i for i = 1, ..., m. Then

    ⟨e_i, e_j⟩ = ε_i δ_{ij}        (4.2.1)

for i, j = 1, ..., m, where δ_{ij} is Kronecker's delta. In this notation, the signature of g is (ε_1, ..., ε_m). We note that

    g_E = diag(ε_1, ..., ε_m),        (4.2.2)

hence

    det(g_E) = ε_1 ··· ε_m = (−1)^ν.        (4.2.3)
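The identities (4.2.1)-(4.2.3) are immediate to check numerically. The following sketch is an illustration added here (it assumes Python with NumPy), using the Minkowski-type scalar product on R^4 with index ν = 1.

```python
import numpy as np

# For R^4_1 (m = 4, nu = 1), with an orthonormal basis ordered so that
# +1s precede -1s, the matrix of the scalar product is diag(1, 1, 1, -1).
eps = np.array([1.0, 1.0, 1.0, -1.0])     # the signature (eps_1, ..., eps_4)
g_E = np.diag(eps)                        # (4.2.2)

nu = np.sum(eps < 0)                      # the index nu
print(nu, np.isclose(np.linalg.det(g_E), (-1.0) ** nu))   # 1 True, cf. (4.2.3)
```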

Here is a generalization of the above identities.


Theorem 4.2.5. Let (V, g) be a scalar product space with signature (ε_1, ..., ε_m) and index ν. Let E and H be bases for V, with E orthonormal, and let A : V −→ V be the linear isomorphism defined by A(E) = H. Then

    det(g_H) = ε_1 ··· ε_m det(A)² = (−1)^ν det(A)².

Proof. We have

    g_H = ([id_V]^E_{A(E)})^T g_E [id_V]^E_{A(E)}      [Th 3.1.3]
        = ([A]^E_E)^T g_E [A]^E_E,                     [Th 2.2.5(a)]

hence det(g_H) = det(g_E) det(A)². The result now follows from (4.2.3).

Theorem 4.2.6. Let (V, g) be a scalar product space, and let U be a subspace of V. If g is nondegenerate on U, then

    ind(g) = ind(g|_U) + ind(g|_{U^⊥}).

Remark. By Theorem 4.1.3, g is nondegenerate on U^⊥. Thus, (U, g|_U) and (U^⊥, g|_{U^⊥}) are both scalar product spaces, so the assertion makes sense.

Proof. Let dim(V) = m and dim(U) = k. We have from Theorem 4.1.3 that V = U ⊕ U^⊥, and then from Theorem 1.1.18 that dim(U^⊥) = m − k. Let (f_1, ..., f_k) and (f_{k+1}, ..., f_m) be orthonormal bases for U and U^⊥, respectively, and let e_1, ..., e_m be a reordering of f_1, ..., f_m such that (e_1, ..., e_m) is an orthonormal basis for V satisfying the above convention on the ordering of signatures. Evidently, the number of −1s in (e_1, ..., e_m) equals the number of −1s in (f_1, ..., f_k) plus the number of −1s in (f_{k+1}, ..., f_m). The result follows.

Theorem 4.2.7. Let (V, g) be a scalar product space with signature (ε_1, ..., ε_m), let (e_1, ..., e_m) be an orthonormal basis for V, and let v be a vector in V. Then

    v = \sum_i ε_i ⟨v, e_i⟩ e_i.

Proof. Let v = \sum_j a^j e_j. Then

    ⟨v, e_i⟩ = ⟨\sum_j a^j e_j, e_i⟩ = \sum_j a^j ⟨e_j, e_i⟩ = ε_i a^i,

hence a^i = ε_i ⟨v, e_i⟩ for i = 1, ..., m.

Theorem 4.2.8. Let (V, g) be a scalar product space with signature (ε_1, ..., ε_m), let (e_1, ..., e_m) be an orthonormal basis for V, and let v, w be vectors in V, with v = \sum_i a^i e_i and w = \sum_j b^j e_j. Then

    ⟨v, w⟩ = \sum_i ε_i a^i b^i.


Proof. We have

    ⟨v, w⟩ = ⟨\sum_i a^i e_i, \sum_j b^j e_j⟩ = \sum_{ij} a^i b^j ⟨e_i, e_j⟩ = \sum_i ε_i a^i b^i.

i

Adjoints

We previously defined what it means for a matrix to be symmetric. In this section, we introduce a type of linear map that under certain circumstances has a related symmetry property (see Theorem 4.6.10). Let (V, g) and (W, h) be scalar product spaces, and let A : V −→ W be a linear map. We define a map A† : W −→ V , called the adjoint of A, by A† = S ◦ A∗ ◦ F,

(4.3.1)

where S and F are the sharp map and flat map corresponding to g and h, respectively, and A∗ is the pullback by A. Let us denote (A† )†

by

A†† .

Theorem 4.3.1. With the above setup: (a) A† is the unique linear map such that hA(v), wi = hv, A† (w)i for all vectors v in V and w in W . (b) If V = W , then det(A† ) = det(A). Proof. (a): We have A† = S ◦ A∗ ◦ F ⇔

A∗ ◦ F = F ◦ A†



(A∗ ◦ F)(w) = (F ◦ A† )(w) for all w in W   A∗ F(w) (v) = F A† (w) (v) for all v in V and w in W   F(w) A(v) = F A† (w) (v) for all v in V and w in W



hw, A(v)i = hA† (w), vi for all v in V and w in W

⇔ ⇔ ⇔

hA(v), wi = hv, A† (w)i for all v in V and w in W.

This shows that A† has the desired property, and at the same time that it is uniquely determined by this property. (b): With F = F, we have S = F−1 , so the result follows from Theorem 2.4.5 and (4.3.1).

66

4 Scalar Product Spaces Adjoints behave well with respect to basic algebraic structure.

Theorem 4.3.2. If (V, g) is a scalar product space and A, B : V −→ V are linear maps, then: (a) (A + B)† = A† + B † . (b) (A ◦ B)† = B † ◦ A† . (c) A†† = A. (d) If A is a linear isomorphism, then (A−1 )† = (A† )−1 . (e) A is a linear isomorphism if and only if A† is a linear isomorphism. Proof. We have A† = S ◦ A∗ ◦ F and B † = S ◦ B ∗ ◦ F. (a): By Theorem 1.3.1(a), (A + B)† = S ◦ (A + B)∗ ◦ F = S ◦ (A∗ + B ∗ ) ◦ F

= (S ◦ A∗ ◦ F) + (S ◦ B ∗ ◦ F) = A† + B † .

(b): By Theorem 1.3.1(b), (A ◦ B)† = S ◦ (A ◦ B)∗ ◦ F = S ◦ (B ∗ ◦ A∗ ) ◦ F

= (S ◦ B ∗ ◦ F) ◦ (S ◦ A∗ ◦ F) = B † ◦ A† .

(c): By Theorem 1.3.1(c) and Theorem 3.3.5, A†† = S ◦ (A† )∗ ◦ F = S ◦ (S ◦ A∗ ◦ F)∗ ◦ F = S ◦ (F∗ ◦ A∗∗ ◦ S∗ ) ◦ F = A.

(d): By Theorem 1.3.1(d), (A−1 )† = S ◦ (A−1 )∗ ◦ F = F−1 ◦ (A∗ )−1 ◦ S−1 = (S ◦ A∗ ◦ F)−1 = (A† )−1 .

(e): Since A† = S ◦ A∗ ◦ F, and F and S are isomorphisms, the result follows. For a scalar product space (V, g), we say that a linear map A : V −→ V is self-adjoint if A = A† . In view of Theorem 4.3.1(a), this condition is equivalent to hA(v), wi = hv, A(w)i for all vectors v, w in V . Theorem 4.3.3. Let (V, g) be a scalar product space, let A : V −→ V be a linear map, and let H be a basis for V . Then   T  † H H A H gH . A H = g−1 H Remark. It follows from Theorem 3.3.3 that the inverse of gH exists, so the assertion makes sense.

67

4.3 Adjoints

   H      H Proof. Let H = (h1 , . . . , hm ), and let A = A H = aij and A† = A† H =  i bj . We have     hA(hi ), hj i = hhi , A† (hj )i . By Theorem 2.1.6(b), (2.2.2), and (2.2.3), X  X X k hA(hi ), hj i = ai hk , hj = aki hhk , hj i = aki gkj k

k

k

  T  T (i) = A (i) (gH )(j) = A (gH )(j) , hence

   T hA(hi ), hj i = A gH .

On the other hand, 



hhi , A (hj )i =

hi ,

= (gH )

X

bkj hk

k  (i)

A†

 =

X k

 (j)

bkj hhi , hk i =

X

gik bkj

k

,

so     hhi , A† (hj )i = gH A† .  T   Combining the above identities gives A gH = gH A† , from which the result follows. Theorem 4.3.4. Let (V, g) and (W, h) be scalar product spaces, and let A : V −→ W be a linear map. Then: (a) rank(A† ) = rank(A). (b) If dim(V ) = dim(W ), then null(A† ) = null(A). (c) ker(A† ) = im(A)⊥ . (d) im(A† ) = ker(A)⊥ . Proof. The proof emulates that of Theorem 1.4.3. (c): We have w is in ker(A† ) ⇔

A† (w) = 0



hw, A(v)i = 0 for all v in V



hA† (w), vi = 0 for all v in V





[h is nondegenerate] [Th 4.3.1(a)]

w is in im(A) .

(d): It follows from Theorem 4.3.2(c) and part (c) that ker(A) = ker(A†† ) = im(A† )⊥ , and then from Theorem 4.1.2(c) that ker(A)⊥ = im(A† )⊥⊥ = im(A† ).

68

4 Scalar Product Spaces (a): We have  rank(A† ) = dim im(A† ) = dim ker(A)⊥



[part (d)]

= dim(V ) − dim ker(A)



[Th 4.1.2(b)]

= rank(A).

[Th 1.1.11]

(b): We have rank(A† ) + null(A† ) = dim(W )

[Th 1.1.11]

= dim(V ) = rank(A) + null(A).

[assumption] [Th 1.1.11]

The result now follows from part (a). Let (V, g) be a scalar product space, let A : V −→ V be a linear map, and let U be a subspace of V on which g is nondegenerate. The left-hand side of the following table summarizes selected results on annihilators and pullbacks, while the right-hand side of the table does the same for perps and adjoints. The correspondence between annihilators and perps on the one hand, and pullbacks and adjoints on the other, is evident and demonstrates the similar roles played by U 0 and U ⊥ . Annihilators and pullbacks

Perps and adjoints

dim(V ) = dim(U ) + dim(U 0 )

dim(V ) = dim(U ) + dim(U ⊥ )

=U

U ⊥⊥ = U

A∗∗ = A

A†† = A

rank(A∗ ) = rank(A)

rank(A† ) = rank(A)

null(A∗ ) = null(A)

null(A† ) = null(A)

U

00



ker(A ) = im(A)

0

im(A∗ ) = ker(A)0

4.4

ker(A† ) = im(A)⊥ im(A† ) = ker(A)⊥

Linear Isometries

A linear map is one that respects the linear structure of a vector space. In a scalar product space, there is the additional feature of a scalar product. This section introduces a type of linear map that also respects scalar products. Let (V, g) and (W, h) be scalar product spaces, and let A : V −→ W be a linear map. We say that A is a linear isometry (or orthogonal transformation) if hA(v), A(w)i = hv, wi for all vectors v, w in V .

69

4.4 Linear Isometries

Theorem 4.4.1. If (V, g) and (W, h) are scalar product spaces with the same dimension and A : V −→ W is a linear isometry, then: (a) A is a linear isomorphism. (b) A maps every orthonormal basis for V to an orthonormal basis for W . Proof. (a): Let v be a vector in ker(A). Then hv, wi = hA(v), A(w)i = 0 for all vectors w in V . Since g is nondegenerate, v = 0. Thus, ker(A) = {0}. By Theorem 1.1.12 and Theorem 1.1.14, A is a linear isomorphism. (b): Let E = (e1 , . . . , em ) be an orthonormal basis for V . By Theorem 1.1.13(a) and part (a), A(E) is basis for V . Since A is a linear isometry, hA(ei ), A(ej )i = hei , ej i for i, j = 1, . . . , m. Thus, A(E) is orthonormal. Theorem 4.4.2. Let (V, g) be a scalar product space, and let A : V −→ V be a linear map. Then the following are equivalent: (a) A is a linear isometry. (b) A is a linear isomorphism and A† = A−1 . (c) For some (hence every) basis H for V ,   T   H H gH = A H gH A H . Proof. (a) ⇒ (b): By Theorem 4.4.1(a), A is a linear isomorphism. Let v, w be vectors in V . Then

 hv, wi = hA(v), A(w)i = A† A(v) , w , hence



 0 = A† A(v) − v, w = (A† ◦ A) − idV (v), w .  Since g is nondegenerate and w was arbitrary, (A† ◦ A) − idV (v) = 0 for all v in V . That is, (A† ◦ A) − idV = 0, hence A† ◦ A = idV , so A† = A−1 . (b) ⇒ (c): Since A† = A−1 , by Theorem 2.2.5(b),  † H  −1 H  H −1 A H= A H= A H . The result now follows from Theorem 4.3.3. (c) ⇒ (a): For vectors v, w in V , we have   T   hA(v), A(w)i = A(v) H gH A(w) H     T     H H = A H v H gH A H w H     T  H T  H   A H gH A H w H = v H =

[Th 3.1.2(a)] [Th 2.2.4]

  T   v H gH w H

= hv, wi.

[Th 3.1.2(a)]

Theorem 4.4.3. If (V, g) is a scalar product space and A : V −→ V is a linear isometry, then det(A) = ±1.

70

4 Scalar Product Spaces

Proof. Let H be a basis for V . We have from Theorem 4.4.2 that det(gH ) = det(gH ) det(A)2 , and from Theorem 3.3.3 that det(gH ) 6= 0. Thus, det(A)2 = 1, from which the result follows. Linear isometries are of special interest because they “preserve norms”, as the next result shows. Theorem 4.4.4. Let (V, g) be a scalar product space, and let A : V −→ V be a linear map. Then: (a) If A is a linear isometry, then kA(v)k = kvk for all vectors v in V . (b) If kA(v)k = kvk for all vectors v in V , and A maps spacelike (resp., timelike, lightlike) vectors to spacelike (resp., timelike, lightlike) vectors, then A is a linear isometry. Proof. (a): We have kA(v)k =

p

|hA(v), A(v)i| =

p

|hv, vi| = kvk .

(b): Since kA(v)k = kvk is equivalent to |hA(v), A(v)i| = |hv, vi|, the assumption regarding the way A maps vectors yields hA(v), A(v)i = hv, vi. Theorem 4.4.5. Let (V, g) be a scalar product space, and let A : V −→ V be a linear map. Then A is a linear isometry if and only if A† is a linear isometry. Proof. (⇒): By Theorem 4.4.2, A is a linear isomorphism and A† = A−1 . For vectors v, w in V , we then have

 hA† (v), A† (w)i = v, A†† A† (w) [Th 4.3.1(a)] = hv, A ◦ A† (w)i

[Th 4.3.2(c)]

= hv, wi.

(⇐): This follows from Theorem 4.3.2(c) and (⇒). Let (V, g) be a scalar product space of dimension m and index ν, let E be an orthonormal basis for V , and let A : V −→ V be a linear isometry. To  E   simplify notation, we denote A E by A and adopt similar abbreviations for   other matrices. Consider the partition of A given by  1  1  A 1 A 2   A =     , (4.4.1) A 21 A 22 where the submatrices have the following dimensions:  1 A 1 (m − ν) × (m − ν)  1 A 2 (m − ν) × ν  2 A 1 ν × (m − ν)  2 A 2 ν × ν.

71

4.4 Linear Isometries Since gE = g−1 E , we have from Theorem 4.4.2 that  −1  T A = gE A gE , hence



 1 T A 1

 −1  A =   T − A 12

  T  − A 21  .  2 T A 2

(4.4.2)

Theorem 4.4.6. With the above setup: (a)  1  1 T     T A 1 A 1 = Im−ν + A 12 A 12 . (b)  1  2 T  1  2 T A 1 A 1 = A 2 A 2 . (c)  2  2 T     T A 2 A 2 = Iν + A 21 A 21 . (d)  1 T  1   T  2 A 1 A 1 = Im−ν + A 21 A 1. (e)  1 T  1   T  2 A 1 A 2 = A 21 A 2. (f)  2 T  2   T  1 A 2 A 2 = Iν + A 12 A 2.   −1 Proof. Since A A = Im , we have  1 A 1   2 A 1 hence

 1   A 2   2  A 2 −

 1 T A 1  1 T A 2

  T   − A 21 Im−ν  =  Oν×(m−ν)  2 T A 2

 O(m−ν)×ν , Iν

 1  1 T  1  1 T A 1 A 1 − A 2 A 2 = Im−ν     T     T − A 11 A 21 + A 12 A 22 = O(m−ν)×ν  2  1 T  2  1 T A 1 A 1 − A 2 A 2 = Oν×(m−ν)     T     T − A 21 A 21 + A 22 A 22 = Iν .

    Since A −1 A = Im , we have

(4.4.3)

72

4 Scalar Product Spaces 

 1 T A 1

    T − A 12 hence

  T   1 A 1 − A 21      2 T A 21 A 2

 1   A 2 Im−ν =  2 Oν×(m−ν) A 2

 O(m−ν)×ν , Iν

 1 T  1   T  2 A 1 A 1 − A 21 A 1 = Im−ν  1 T  1   T  2 A 1 A 2 − A 21 A 2 = O(m−ν)×ν

(4.4.4)

  T  1   T  2 A 1 + A 22 A 1 = Oν×(m−ν) − A 12   T  1   T  2 A 2 + A 22 A 2 = Iν . − A 12 (a): This follows from the first identity of (4.4.3). (b): This follows from either the second or third identity of (4.4.3). (c): This follows from the fourth identity of (4.4.3). (d): This follows from the first identity of (4.4.4). (e): This follows from either the second or third identity of (4.4.4). (f): This follows from the fourth identity of (4.4.4).

4.5

Dual Scalar Product Spaces

In this section, we show that a scalar product on a vector space induces a corresponding scalar product on its dual space. Let (V, g) be a scalar product space, let V ∗ be its dual space, and let F and S be the flat map and sharp map corresponding to g. We define a function g∗ = h·,·i∗ : V ∗ × V ∗ −→ R by hη, ζi∗ = hη S , ζ S i ∗

(4.5.1) S

S

S F

S

for all covectors η, ζ in V . According to (3.3.1), hη , ζ i = (η ) (ζ ) = η(ζ S ), hence hη, ζi∗ = η(ζ S ). (4.5.2) Theorem 4.5.1. With the above setup, (V ∗ , g∗ ) is a scalar product space, called the dual scalar product space corresponding to (V, g). Proof. Clearly, g∗ is bilinear and symmetric. Suppose η is a covector in V ∗ such that hη, ζi∗ = 0, that is, hη S , ζ S i = 0, for all covectors ζ in V ∗ . Since S is surjective, as ζ varies over V ∗ , ζ S varies over V . Because g is nondegenerate, η S = 0; and since S is injective, by Theorem 1.1.12, η = 0. Thus, g∗ is nondegenerate.

73

4.5 Dual Scalar Product Spaces

Let F and S be the flat map and sharp map corresponding to g∗ . We have from the identification V ∗∗ = V that F : V ∗ −→ V

S : V −→ V ∗ .

and

Perhaps the more obvious choice of notation would be F∗ instead of F, and S∗ instead of S. However, looking back at Theorem 3.3.5, this would confuse what we now denote by F and S with the pullback by F and the pullback by S. Theorem 4.5.2. If (V, g) is a scalar product space, and F and S are the flat map and sharp map corresponding to g, then: (a) F = S. (b) S = F. Proof. (a): For covectors η, ζ in V ∗ , we have η F (ζ) = hη, ζi∗ S

S

S

S

[(3.3.1)]

= hη , ζ i

[(4.5.1)]

= (ζ S )F (η S )

[(3.3.1)]

= hζ , η i S

= ζ(η ) = η S (ζ).

[(1.2.3)]

Since η and ζ were arbitrary, F = S. (b): We have S = F −1

[(3.3.5)]

=S

[part (a)]

= F.

[(3.3.5)]

−1

Theorem 4.5.3. Let (V, g) be a scalar product space, let H = (h1, . . ., hm ) be ij a basis for V , let Θ = (θ1 , . . . , θm ) be its dual basis, and let g−1 . Then: H = g (a) X θjS = gij hi i

for i = 1, . . . , m. (b)  H S Θ = g∗Θ = g−1 H . Proof. (b): Since a scalar product is symmetric, the same is true of its matrix with respect to a given basis. Accordingly, the transpose in Theorem 3.3.1(b)

74

4 Scalar Product Spaces

can be dropped. We then have  H g∗Θ = F Θ  H = S Θ  H = F−1 Θ   −1 Θ = F H

[Th 3.3.1(b)] [Th 4.5.2(a)]

[Th 2.2.5(b)]

= g−1 H .

[Th 3.3.1(b)]

(a): This follows from (2.2.2), (2.2.3), and part (b). Theorem 4.5.4 (Raising an Index). Let (V, g) be a scalar product space, let H = (h1 , . . . , hm ) be a basis for V , and let Θ = (θ1 , . . . , θm ) be its dual basis. ij Let g−1 , and let η be a covector in V ∗ , with H = g X η= ai θi . (4.5.3) i

Then ηS =

X

ai hi ,

i

where ai =

X

gij aj

(4.5.4)

j

for i = 1, . . . , m. Expressed more succinctly:  S   η H = g−1 H η Θ. Proof. This follows from Theorem 3.3.2 and Theorem 4.5.2(a), but it is instructive to work through separately. Let ζ be a covector in V ∗ , and  the details  observe that g∗Θ = hθi , θj i∗ . Then η S (ζ) = η F (ζ) = hη, ζi

[Th 4.5.2(a)]



[(3.3.1)]



= hζ, ηi X ∗ X = ζ(hi )θi , aj θj i

=

X

=

X

ij

ζ(hi )aj hθi , θj i∗ gij aj hi (ζ)

ij

=

XX i

=

X i

[Th 1.2.1(e)]

j

  g aj hi (ζ) ij

j

 a hi (ζ). i

[(1.2.3), Th 4.5.3(b)]

4.6 Inner Product Spaces

75

P Since ζ was arbitrary, η S = i ai hi . To prove the succinct version directly:  S   η H = S(η) H  H   = S Θ η Θ [Th 2.2.4]   −1 = gH η Θ . [Th 4.5.3] We see from (4.5.3) and (4.5.4) why taking the sharp of η to obtain η S is classically referred to as raising an index by g. The flat map F and sharp map S therefore lower and raise indices, respectively. In the mathematical literature, these maps are colorfully referred to as the musical isomorphisms. A scalar product space and its dual scalar product space are isomorphic under the flat map and sharp map. The remaining results of this section show that there is also a close connection between other familiar structures. Theorem 4.5.5. Let (V, g) be a scalar product space with signature (ε1 , . . . , εm ), let (e1 , . . . , em ) be an orthonormal basis for V , and let (ξ 1 , . . . , ξ m ) be its dual basis. Then, for i = 1, . . . , m: (a) eFi = εi ξ i . (b) ξ iS = εi ei . Proof. (a): This follows from Theorem 3.3.1(a). (b): This follows from Theorem 4.5.3(a) or from part (a). Theorem 4.5.6. Let (V, g) be a scalar product space, let E be an orthonormal basis for V , and let Ξ be its dual basis. Then: (a) Ξ is an orthonormal basis for (V ∗ , g∗ ) . (b) (V, g) and (V ∗ , g∗ ) have the same signature. Proof. Let V have signature (ε1 , . . . , εm ), and let E = (e1 , . . . , em ) and Ξ = (ξ 1 , . . . , ξ m ). By (4.5.1) and Theorem 4.5.5(b), ( εi if i = j i j ∗ iS jS hξ , ξ i = hξ , ξ i = εi εj hei , ej i = 0 if i 6= j, hence hξ i , ξ j i∗ = hei , ej i for i, j = 1, . . . , m. The result follows.

4.6

Inner Product Spaces

Having discussed in some detail the features of scalar product spaces, we now specialize to the type of scalar product space that is usually the focus in introductory linear algebra. Let V be a vector space, and let g be a bilinear function on V . We say that g is an inner product on V , and that the pair (V, g) is an inner product space, if g is symmetric and positive definite on V . Recall from Section 3.1 that g is symmetric on V if hv, wi = hw, vi for all vectors v, w in V , and that g is positive definite on V if hv, vi > 0 for all nonzero vectors v in V .

76

4 Scalar Product Spaces

Suppose (V, g) is in fact an inner product space. If v is a vector in V such that hv, wi = 0 for all vectors w in V , then hv, vi = 0, hence v = 0. Thus, g is nondegenerate on V , demonstrating that (V, g) is automatically a scalar product space. In the present context, the norm corresponding to g, given by (4.1.1), simplifies to the function k·k : V −→ R defined by kvk =

p

hv, vi =

p q(v)

(4.6.1)

for all vectors v in V . Since an inner product space is a type of scalar product space, the definitions of orthogonal, orthogonal k-tuple, orthogonal basis, orthonormal, orthonormal k-tuple, and orthonormal basis given in Section 4.2 also apply here. Let (e1 , . . . , em ) be an orthonormal basis for V . Since g is an inner product, hei , ei i = 1 for i = 1, . . . , m. Thus, ind(g) = 0 and the signature of g is an m-tuple of +1s. The fact that an inner product is positive definite distinguishes it from a scalar product space in ways that are fundamental. For example, in an inner product space, only the zero vector is orthogonal to itself; or equivalently, only the zero vector has zero norm. Perhaps most important of all, in an inner product space, the inner product is positive definite (hence nondegenerate) on every subspace. Consequently, pairing any subspace of an inner product with the corresponding restriction of the inner product yields an inner product space. Example 4.6.1 (Rm 0 ). Continuing with Example 4.1.1, let ν = 0. In this case, we denote s by e. Then e : Rm × Rm −→ R is given by  e (x1 , . . . , xm ), (y 1 , . . . , y m ) = x1 y 1 + · · · + xm y m . We refer to m Rm 0 = (R , e)

as Euclidean m-space and to e as the Euclidean inner product. It is easily shown that e is in fact an inner product on Rm . Following Example m 4.1.1, the standard basis for Rm 0 is simply the standard basis for R . The norm corresponding to e is given by

1

p

(x , . . . , xm ) = (x1 )2 + · · · + (xm )2 and called the Euclidean norm. The reader is likely familiar with k·k as a measure of “length” in Rm , a notion to be explored further in Section 9.3. ♦ Theorem 4.6.2. Let (V, g) be a scalar product space of index ν. Then g is an inner product if and only if ν = 0.

77

4.6 Inner Product Spaces

Proof. (⇒): This follows from above remarks. (⇐): Let (e1P , . . . , em ) be an orthonormal basis for V , and let v be a vector in V , with v = i ai ei . Since ν = 0, X  X X X hv, vi = ai ei , aj ej = hei , ej iai aj = (ai )2 , i

j

ij

i

hence hv, vi ≥ 0, with equality if and only if each ai = 0, that is, if and only if v = 0. Theorem 4.6.3. If (V, g) is an inner product space and H is a basis for V , then det(gH ) > 0. Proof. We have from Theorem 4.2.5 and Theorem 4.6.2 that det(gH ) = det(A)2 ≥ 0. By Theorem 3.3.3, det(gH ) 6= 0. We encountered a version of the Cauchy–Schwarz inequality in Theorem 3.2.6, the proof of which was somewhat involved. For an inner product space, we get a stronger result with much less effort. Theorem 4.6.4 (Cauchy–Schwarz Inequality). If (V, g) is an inner product space and v, w are vectors in V , then hv, wi2 ≤ hv, vihw, wi,

(4.6.2)

with equality if and only if v, w are linearly dependent. Proof. The result is trivial if either v = 0 or w = 0, so assume v, w 6= 0. Since hw, wi(hv, vihw, wi − hv, wi2 )

= hw, wiv − hv, wiw, hw, wiv − hv, wiw ≥ 0,

(4.6.3)

we have hv, vihw, wi − hv, wi2 ≥ 0, from which (4.6.2) follows. If equality holds in (4.6.2), then equality holds in (4.6.3), that is,

hw, wiv − hv, wiw, hw, wiv − hv, wiw = 0, hence hw, wiv − hv, wiw = 0, so v, w are linearly dependent. Conversely, if w = cv for some nonzero real number c, then hv, vihw, wi = hv, w/cihcv, wi = hv, wi2 . Theorem 4.6.5 (Triangle Inequality). If (V, g) is an inner product space and v, w are vectors in V , then kv + wk ≤ kvk+kwk, with equality if and only if v, w are linearly dependent.

78

4 Scalar Product Spaces

Proof. By Theorem 4.6.4, hv, wi ≤ |hv, wi| = hence

p p p hv, wi2 ≤ hv, vi hw, wi = kvk kwk ,

2

kv + wk = hv + w, v + wi = hv, vi + 2hv, wi + hw, wi 2

2

≤ kvk + 2 kvk kwk + kwk = (kvk + kwk)2 ,

so kv + wk ≤ kvk+kwk. There is equality if and only if hv, wi = kvk kwk, which is equivalent to hv, wi ≥ 0 and hv, wi2 = hv, vihw, wi, which in turn, by Theorem 4.6.4, is equivalent to v, w being linearly dependent. In geometric terms, the triangle inequality has the following intuitive interpretation: in an inner product space, it is shorter to walk diagonally across a rectangular field than to “go around the corner”. As we will see in Section 4.8, this commonplace observation depends crucially on the properties of the inner product. 1 m 1 m m Example 4.6.6 (Rm 0 ). For vectors (x , . . . , x ) and (y , . . . , y ) in R0 , the Cauchy–Schwarz inequality and triangle inequality become

X

xi y i

i

and

sX i

2 ≤

X

(xi + y i )2 ≤

(xi )2

X

i

(y i )2



i

sX

(xi )2 +

i

sX

(y i )2 ,

i

respectively.



Theorem 4.6.7 (Pythagora’s Theorem). If (V, g) is an inner product space and U is a subspace of V , then 2

2

2

kvk = kPU (v)k + kPU ⊥ (v)k , where PU and PU ⊥ are the orthogonal projection maps defined in Section 4.1. Proof. By Theorem 4.1.4, 2

kvk = hPU (v) + PU ⊥ (v), PU (v) + PU ⊥ (v)i = hPU (v), PU (v)i + hPU ⊥ (v), PU ⊥ (v)i 2

2

= kPU (v)k + kPU ⊥ (v)k .

The next result is the inner product space counterpart of Theorem 4.2.3. Theorem 4.6.8. If (V, g) is an inner product space and (e1 , . . . , ek ) is an orthogonal (but not necessarily orthonormal) k-tuple of vectors in V , then e1 , . . . , ek are linearly independent.

79

4.6 Inner Product Spaces Proof. If

P

i

ai ei = 0, then  X  X 0 = ei , aj ej = aj hei , ej i = ai hei , ei i. j

j

By definition, ei is nonzero, hence hei , ei i = 6 0, so ai = 0 for i = 1, . . . , m. Theorem 4.2.4 guarantees the existence of an orthonormal basis for every scalar product space, but does not provide an explicit mechanism for constructing one. In an inner product space, there is a step-by-step algorithm that produces the desired basis. Theorem 4.6.9 (Gram–Schmidt Orthogonalization Process). Let (V, g) be an inner product space, and let H = (h1 , . . . , hm ) be a basis for V . Computing in a sequential manner, let f1 = h1 hh2 , f1 i f1 hf1 , f1 i hh3 , f1 i hh3 , f2 i f3 = h3 − f1 − f2 hf1 , f1 i hf2 , f2 i .. . f2 = h2 −

fm = hm −

hhm , f1 i hhm , f2 i hhm , fm−1 i f1 − f2 − · · · − fm−1 . hf1 , f1 i hf2 , f2 i hfm−1 , fm−1 i

Then: (a) F = (f1 , . . . , fm ) is an orthogonal basis for V . (b) E = (f1 / kf1 k , . . . , fm / kfm k) is an orthonormal basis for V . Proof. The first three steps of the sequential process are as follows, where we make repeated use of Theorem 4.1.4. Step 1. Let f1 = h1 . By definition, f1 is nonzero. Step 2. Let f2 = P(Rf1 )⊥ (h2 ). Then h2 = a1 f1 + f2 for some real number 1 a , hence hh2 , f1 i = ha1 f1 + f2 , f1 i = a1 hf1 , f1 i, so

a1 = Thus,

hh2 , f1 i . hf1 , f1 i

f2 = h2 −

hh2 , f1 i f1 . hf1 , f1 i

Since h2 is not in Rf1 = Rh1 , it follows that f2 is nonzero. Step 3. Let f3 = P(Rf1 ⊕Rf2 )⊥ (h3 ). Then h3 = b1 f1 + b2 f2 + f3 for some real numbers b1 , b2 , hence hh3 , f1 i = hb1 f1 + b2 f2 + f3 , f1 i = b1 hf1 , f1 i

80

4 Scalar Product Spaces

and hh3 , f2 i = hb1 f1 + b2 f2 + f3 , f2 i = b2 hf2 , f2 i, so b1 = Thus,

hh3 , f1 i hf1 , f1 i

f3 = h3 −

and

b2 =

hh3 , f2 i . hf2 , f2 i

hh3 , f1 i hh3 , f2 i f1 − f2 . hf1 , f1 i hf2 , f2 i

Since h3 is not in Rf1 ⊕ Rf2 = Rh1 ⊕ Rh2 , it follows thatf3 is nonzero. Proceeding in this way, after m steps we arrive at F = (f1 , . . . , fm ). It is clear that, by construction, F is an orthogonal basis for V , and E is an orthonormal basis for V . With the aid of Theorem 4.6.9, we are now able to provide an alternative proof of Theorem 4.2.4. Let (V, g) be a scalar product space of dimension m and index ν 6= 0. It follows from Theorem 4.1.6(a) that there are subspaces V + and V − of V such that V = V + ⊕ V − , with g an inner product on V + , and −g an inner product on V − . By Theorem 4.6.9, V + has a basis (e1 , . . . , em−ν ) that is orthonormal with respect to the inner product g|V + . Similarly, V − has a basis (em−ν+1 , . . . , em ) that is orthonormal with respect to the inner product −g|V − . Then (e1 , . . . , em−ν , em−ν+1 , . . . , em ) is basis for V that is orthonormal with respect to the scalar product g (except that it needs to be reordered to conform with the convention on signs as they appear in signatures). It was remarked in Section 4.3 that adjoints of linear maps have a certain symmetry property that is akin to the symmetry of a matrix. Part (b) of the next result justifies that observation. Theorem 4.6.10. Let (V, g) be an inner product space, let A : V −→ V be a linear map, and let E be an orthonormal basis for V . Then:  E  E T (a) A† E = A E .  E (b) If A is self-adjoint, then A E is a symmetric matrix. Proof. (a): Since gE is the identity matrix, the result follows from Theorem 4.3.3. (b): This follows from part (a). Theorem 4.6.11. If (V, g) and (W, h) are inner product spaces with the same dimension and A : V −→ W is a linear map, then the following are equivalent: (a) A is a linear isometry. (b) A maps some orthonormal basis for V to an orthonormal basis for W . (c) A maps every orthonormal basis for V to an orthonormal basis for W . Proof. (a) ⇒ (c): This follows from Theorem 4.4.1(b). (c) ⇒ (b): Straightforward. (b) ⇒ (a): Let E = (e1 , . . . , em ) be an orthonormal basis for V such that A(E) is an orthonormal basis for W . Since V and W are inner product spaces,

81

4.7 Eigenvalues and Eigenvectors

their (+1, . . . , +1). Let w be vectors in VP , with v = P i signatures are P both P v, j i j a e and w = b e , hence A(v) = a A(e ) and A(w) = i j i i j i j b A(ej ). By Theorem 4.2.8, X hA(v), A(w)i = ai bi = hv, wi. i

Theorem 4.6.12. If (V, g) is an inner product space and A : V −→ V is a linear map, then the following are equivalent: (a) A is a linear isometry. (b) A maps some orthonormal basis for V to an orthonormal basis for V . (c) A maps every orthonormal basis for V to an orthonormal basis for V . (d) kA(v)k = kvk for all vectors v in V .  E (e) For every orthonormal basis E for V , the matrix A E is in O(m), the orthogonal group defined in Section 2.4. Proof. (a) ⇔ (b) ⇔ (c): This is just Theorem 4.6.11 with W = V . (a) ⇔ (d): This follows from Theorem 4.4.4. (c) ⇔ (e): Let E = (e1 , . . . , em ), and observe that gE = Im . Then   T   A(ej ) E hA(ei ), A(ej )i = A(ei ) E [Th 3.1.2(a)]     T      E E A E ej E = A E ei E [Th 2.2.4]     T    E E A E (i) A E = (j)

    (i)    E T E A E A E = hence

,

[Th 2.1.6(b)]

   E T  E hA(ei ), A(ej )i = A E A E.

It follows that

4.7

(j)

A(E) is orthonormal   hA(ei ), A(ej )i = Im ⇔   T   E E A E A E = Im ⇔  E A E is in O(m). ⇔

Eigenvalues and Eigenvectors

Let P be a matrix in Matm×m , and let x be an “indeterminate”. The characteristic polynomial of P is defined by charP (x) = det(xIm − P ). Theorem 4.7.1. If P and Q are matrices in Matm×m , with Q invertible, then charQ−1 P Q (x) = charP (x).

82

4 Scalar Product Spaces

Proof. We have xIm − Q−1 P Q = Q−1 (xIm − P )Q,  hence det xIm − Q−1 P Q = det(xIm − P ). Let V be a vector space of dimension m, and let A : V −→ V be a linear map. We say that a real number κ is an eigenvalue of A if there is a nonzero vector v in V such that A(v) = κv, or equivalently, (κ idV − A)(v) = 0. In that case, v is said to be an eigenvector of A corresponding to κ. Since A(cv) = cκv for all real numbers c, if there is one eigenvector of A corresponding to κ, then there are infinitely many. Let H be a basis for V . The characteristic polynomial of A is denoted  H by charA (x) and defined to be the characteristic polynomial of A H :   H  charA (x) = det xIm − A H . We observe that charA (0) = (−1)m det(A).

(4.7.1)

Theorem 4.7.2. With the above setup, charA (x) is independent of the choice of basis. Proof. Let F be another basis for V . By Theorem 2.2.6,  F  F xIm − A F = x idV − A F  H −1  H  H x idV − A H idV F . = idV F The result now follows from Theorem 4.7.1. Theorem 4.7.3. Let V be a vector space, let A : V −→ V be a linear map, and let κ be a real number. Then the following are equivalent: (a) κ is an eigenvalue of A. (b) det(κ idV − A) = 0. (c) charA (κ) = 0. Proof. Let H be a basis for V . We have κ is an eigenvalue of A ⇔

(κ idV − A)(v) = 0 for some nonzero vector v in V



κ idV − A is not a linear isomorphism

⇔ ⇔ ⇔ ⇔ ⇔

ker(κ idV − A) 6= {0}

det(κ idV − A) = 0  H  det κ idV − A H = 0   H  det κIm − A H = 0 charA (κ) = 0,

4.7 Eigenvalues and Eigenvectors

83

where the third equivalence follows from Theorem 1.1.12 and Theorem 1.1.14, and the fourth equivalence from Theorem 2.5.3. Let V be a vector space of dimension m, and let A : V −→ V be a linear map. It is easily shown that charA (x) is a polynomial of degree m with real coefficients. By the fundamental theorem of algebra, charA (x) has m (not necessarily distinct) roots in the field of complex numbers. Without further assumptions, there is no guarantee that any of them are real. In what follows, we focus on the case m = 2. Theorem 4.7.4. Let (V, g) be a 2-dimensional inner product space, and let A : V −→ V be a self-adjoint linear map. Then A has two real eigenvalues, and they are equal if and only if A = c idV for some real number c.  E Proof. Let E be an orthonormal basis for V . By Theorem 4.6.10(b), A E is symmetric, so    E a b A E= b c for some real numbers a, b, and c. Then   x−a −b charA (x) = det = x2 − (a + c)x + ac − b2 . −b x−c From the quadratic formula, the two roots of charA (x) are p a + c ± (a − c)2 + 4b2 . 2 Since (a − c)2 + 4b2 ≥ 0, both roots are real. They are equal if and only if  E (a−c)2 +4b2 = 0, which is equivalent to a = c and b = 0; that is, A E = cI2 . Theorem 4.7.5. Let (V, g) be a 2-dimensional inner product space, let A : V −→ V be a self-adjoint linear map, and let κ1 , κ2 be the (real but not necessarily distinct) eigenvalues of A. Then det(A) = κ1 κ2 . Proof. Since κ1 and κ2 are roots of charA (x), we have from the theory of polynomials that charA (x) = (x − κ1 )(x − κ2 ), hence charA (0) = κ1 κ2 . On the other hand, (4.7.1) gives charA (0) = det(A). Theorem 4.7.6 (Euler’s Rotation Theorem). Let (V, g) be an inner product space of odd dimension, and let A : V −→ V be a linear isometry. If det(A) = 1, then 1 is an eigenvalue of A. Remark. Recall from Theorem 4.4.3 that det(A) = ±1, so the assertion makes sense.

84

4 Scalar Product Spaces

Proof. Let dim(V ) = m, and let E be an orthonormal basis for V . By Theorem  E  E T  E A E = Im . Then 4.6.12, the matrix A E is in O(m), so A E  E T   E T   E T   E  idV − A E = Im − A E = − A E Im − A E    T  E E idV − A E , = − A E hence det(idV − A) = (−1)m det(A) det(idV − A) = −det(idV − A), so det(idV − A) = 0. By Theorem 4.7.3, κ = 1 is an eigenvalue of A. According to Theorem 4.6.12, a linear isometry on an inner product space “preserves norms”, and as we will see in Section 9.3, this means that it also “preserves distance”. Such a linear isometry is said to produce a “rigid motion” about the origin. Theorem 4.7.10 has an interpretation that fits nicely with these observations. Suppose 1 is an eigenvalue of A, with corresponding eigenvector v. Then A(cv) = cv for all real numbers c, so that A fixes the subspace Rv. Thus, the action of A is to rigidly rotate V about the origin, with Rv as the “axis of rotation”.

4.8

Lorentz Vector Spaces

Lorentz vector spaces play a central role in the special theory of relativity and, by extension, in general relativity. The geometry of Lorentz vector spaces is considerably more complicated than that of inner product spaces. As we will see, when translated into physical terms, the implications can be surprising to our Euclidean sensibilities. Let (V, g) be a nonzero scalar product space. We say that g is a Lorentz scalar product, and that the pair (V, g) is a Lorentz vector space, if ind(g) = 1. Based on our convention on signs, the signature of g is therefore (+1, . . . , +1, −1). Recall from Section 3.1 that a vector v in V is: spacelike timelike lightlike

if v = 0 or hv, vi > 0. if hv, vi < 0.

if v 6= 0 and hv, vi = 0.

Thus, by definition, e1 , . . . , em−1 are spacelike and em is timelike. The convention on signs adopted here is not the only one appearing in the literature. For instance, some authors take the signature to be (−1, +1, . . . , +1). The choice of one approach over another can lead to differences in the number and location of negative signs in expressions, but does not affect substance. Let U be a nonzero subspace of V . If g is nondegenerate on U , then according to Theorem 4.2.6, ind(g|U ) + ind(g|U ⊥ ) = 1.

85

4.8 Lorentz Vector Spaces Thus, ind(g|U ) equals 0 or 1. We say that U is: spacelike

if g is nondegenerate on U and ind(g|U ) = 0; that is, g is an inner product on U.

timelike

if g is nondegenerate on U and ind(g|U ) = 1;

(4.8.1)

that is, g is a Lorentz scalar product on U. lightlike

if g is degenerate on U.

Evidently, every subspace of V is either spacelike, timelike, or lightlike. By convention, the zero subspace of V is regarded as spacelike. We see from (4.8.1) that in contrast to the situation with an inner product, a Lorentz scalar product on V may be degenerate on U and therefore not a scalar product (not to mention a Lorentz scalar product) on U . Recall from Section 3.1 that the light set of V is Λ = {v ∈ V : hv, vi = 0}{0}. In the present context, we refer to Λ as the light cone of V , for reasons that will become apparent shortly. Example 4.8.1 (Rm 1 ). Continuing with Example 4.1.1, let ν = 1. In this case, we denote s by m. Then m : Rm × Rm −→ R is the function given by X  m−1 m (x1 , . . . , xm ), (y 1 , . . . , y m ) = xi y i − xm y m . i=1

We refer to m Rm 1 = (R , m)

as Minkowski m-space and to m as the Minkowski scalar product. The m m standard basis for Rm 1 is the same as the standard basis for R0 (and R ). The norm corresponding to m is given by v u m−1 u X

1



(x , . . . , xm ) = t (xi )2 − (xm )2 , i=1

3 and the light cone of Rm 1 is denoted by Λm . In R1 , for example, we have p k(x, y, z)k = |x2 + y 2 − z 2 |

and Λ3 = {(x, y, z) ∈ R31 : x2 + y 2 − z 2 = 0}{(0, 0, 0)}. See Figure 4.8.1. In geometric terms, Λ3 is the union of a pair of cones with their vertices removed. A bit of analytic geometry shows that the set of timelike

86

4 Scalar Product Spaces

z

y x Figure 4.8.1. Light cone: Diagram for Example 4.8.1

vectors in R31 is the “inside” of Λ3 , and the set of spacelike vectors in R31 is the “outside” of Λ3 . Any 2-dimensional subspace of R31 can be represented by a plane Π through the origin. This gives precisely three possibilities, as depicted in Figure 4.8.2, where dashed lines indicate intersections. Π1 does not intersect Λ3 : it is spacelike and contains only spacelike vectors. Π2 intersects Λ3 in a pair of lines (minus the origin): it is timelike and contains spacelike, timelike, and lightlike vectors. Π3 intersects Λ3 in a single line (minus the origin): it is lightlike and contains spacelike and lightlike vectors, but no timelike vectors. We observe that according to this analysis, every 2-dimensional subspace of R31 contains spacelike vectors. ♦

Π1

Π2

Π3 Figure 4.8.2. Diagram for Example 4.8.1

87

4.8 Lorentz Vector Spaces

In the following series of results, we show that the characteristics of 2dimensional subspaces of R31 delineated in Example 4.8.1 hold more generally. Theorem 4.8.2. Let (V, g) be a Lorentz vector space, and let v be a vector in V . Then v is a spacelike (resp., timelike, lightlike) vector if and only if Rv is a spacelike (resp., timelike, lightlike) subspace. Proof. Each vector in Rv is of the form cv for some real number c, so we have hcv, cvi = c2 hv, vi. The result follows. Theorem 4.8.3. Let (V, g) be a Lorentz vector space, and let U be a subspace of V . Then U is spacelike if and only if it consists entirely of spacelike vectors. Proof. We have U is spacelike ⇔ ⇔

g is an inner product on U g is an inner product on Ru for all u in U

[(4.8.1)]

Ru is spacelike for all u in U

[(4.8.1)]



u is spacelike for all u in U.

[Th 4.8.2]



Theorem 4.8.4. Let (V, g) be a Lorentz vector space, and let U be a subspace of V . Then: (a) U is spacelike if and only if U ⊥ timelike. (b) U is timelike if and only if U ⊥ spacelike. (c) U is lightlike if and only if U ⊥ is lightlike. Proof. (a)(⇒): Since U is spacelike, g is nondegenerate on U and ind(g|U ) = 0. It follows from Theorem 4.1.3 that g is nondegenerate on U ⊥ , and then from Theorem 4.2.6 that 1 = ind(g) = ind(g|U ⊥ ). Thus, U ⊥ is timelike. (b)(⇒): The proof is similar to that given for (a)(⇒). (c)(⇒): We observe that U ⊥ is not timelike; for if it were, then, by Theorem 4.1.2(c) and (b)(⇒), U would be spacelike, which contradicts the assumption on U . On the other hand, U ⊥ is not spacelike; for if it were, then, by Theorem 4.1.2(c) and (a)(⇒), U would be timelike, which again contradicts the assumption on U . Thus, U ⊥ is lightlike. (a)(⇐): This follows from Theorem 4.1.2(c) and (b)(⇒). (b)(⇐): This follows from Theorem 4.1.2(c) and (a)(⇒). (c)(⇐): This follows from Theorem 4.1.2(c) and (c)(⇒). Theorem 4.8.5. If (V, g) is a Lorentz vector space and v is a timelike vector in V , then V = (Rv)⊥ ⊕ Rv, where (Rv)⊥ is spacelike and Rv is timelike. Thus, each vector w in V can be expressed uniquely in the form w = w + cv, where w is a (spacelike) vector in (Rv)⊥ and c is a real number.

88

4 Scalar Product Spaces

Proof. Since v is timelike, it follows from Theorem 4.8.2 that Rv is timelike, and then from Theorem 4.8.4(b) that (Rv)⊥ is spacelike. Thus, g is nondegenerate on (Rv)⊥ . By Theorem 4.1.2(c) and Theorem 4.1.3, V = (Rv)⊥ ⊕ Rv. In the notation of Theorem 4.8.5, since (Rv)⊥ is spacelike and Rv is timelike, it follows that g is positive definite on (Rv)⊥ and negative definite on Rv. Thus, V = (Rv)⊥ ⊕ Rv is a direct sum of the type given by Theorem 4.1.6(a). Theorem 4.8.6. Let (V, g) be a Lorentz vector space, and let v, w be lightlike vectors in V . Then v and w are orthogonal if and only if they are linearly dependent. Proof. (⇒): Let (e1 , . . . , em ) be an orthonormal basis for V . Then em is timelike, so we have from Theorem 4.8.5 that v and w can be expressed as v = v + aem

and

w = w + bem ,

(4.8.2)

where v, w are (spacelike) vectors in (Rem )⊥ and a, b are real numbers. Then 0 = hv, vi = hv, vi − a2

0 = hw, wi = hw, wi − b2 0 = hv, wi = hv, wi − ab, hence a2 = hv, vi

b2 = hw, wi

ab = hv, wi,

(4.8.3)

so hv, wi2 = hv, vihw, wi.

(4.8.4)

v = cw

(4.8.5)

Since em is timelike, we have from Theorem 4.8.2 that Rem is timelike, and then from Theorem 4.8.4(b) that (Rem )⊥ is spacelike. Thus, g is an inner product on (Rem )⊥ . It follows from Theorem 4.6.4 and (4.8.4) that

for some real number c. Then (4.8.5) and the second and third identities in (4.8.3) give ab = b2 c. We have b 6= 0; for if not, then w = w, where w is lightlike and w is spacelike, which is a contradiction. Thus, a = bc.

(4.8.6)

Combining (4.8.2), (4.8.5), and (4.8.6) yields v = cw. (⇐): If v, w are linearly dependent, then w = cv for some real number c, hence hv, wi = chv, vi = 0. Theorem 4.8.7. If (V, g) is a Lorentz vector space and U is a subspace of V of dimension m ≥ 2, then the following are equivalent: (a) U is timelike. (b) U contains two linearly independent lightlike vectors.

89

4.8 Lorentz Vector Spaces (c) U contains a timelike vector.

Proof. (a) ⇒ (b): By assumption, U is timelike, so g is a (Lorentz) scalar product on U . By Theorem 4.2.4, there is an orthonormal basis (e1 , . . . , ek ) for U . Then e1 is a spacelike unit vector, ek is a timelike unit vector, and e1 and ek are orthogonal. It follows that he1 + ek , e1 + ek i = 0

and

he1 − ek , e1 − ek i = 0,

so e1 + ek and e1 − ek are lightlike. Suppose 0 = a(e1 + ek ) + b(e1 − ek ) = (a + b)e1 + (a − b)ek for some real numbers a, b. Since e1 , ek are linearly independent, we have a = b = 0. Thus, e1 + ek , e1 − ek are linearly independent. (b) ⇒ (c): By assumption, U contains two linearly independent lightlike vectors v and w. Then hv + w, v + wi = 2hv, wi

and

hv − w, v − wi = −2hv, wi.

It follows from Theorem 4.8.6 that hv, wi = 6 0, so either v +w or v −w is timelike. (c) ⇒ (a): By assumption, U contains a timelike vector v. It follows from Theorem 4.8.2 that Rv is timelike, and then from Theorem 4.8.4(b) that (Rv)⊥ is spacelike. Since Rv ⊆ U , we have U ⊥ ⊆ (Rv)⊥ , and then from Theorem 4.8.3 that U ⊥ is spacelike. By Theorem 4.1.2(c) and Theorem 4.8.4(a), U = U ⊥⊥ is timelike. Theorem 4.8.8. If (V, g) is a Lorentz vector space and U is a subspace of V , then the following are equivalent: (a) U is lightlike. (b) U contains a lightlike vector but not a timelike vector. (c) U ∩ Λ = Rv{0} for some lightlike vector v in U . Proof. If V is 1-dimensional, the result is trivial, so assume dim(V ) ≥ 2. (a) ⇒ (b): By assumption, U is lightlike. Then g is degenerate on U , so there is a nonzero vector v in U such that hv, wi = 0 for all vectors w in U . In particular, hv, vi = 0, hence v is lightlike. Furthermore, U does not contain a timelike vector; for if it did, then, by Theorem 4.8.7, U would be timelike, which contradicts the assumption on U . (b) ⇒ (c): By assumption, U contains a lightlike vector v. Then U ∩ Λ does not contain a lightlike vector that is linearly independent of v; for if it did, then, by Theorem 4.8.7, U would contain a timelike vector, which contradicts the assumption on U . Thus, U ∩ Λ = Rv{0}. (c) ⇒ (a): By assumption, U ∩ Λ = Rv{0} for some lightlike vector v in U . We have from Theorem 4.8.3 that U is not spacelike. Since U ∩ Λ is contained in the 1-dimensional subspace Rv of V , it does not contain two linearly independent (lightlike) vectors. By Theorem 4.8.7, U is not timelike. Therefore, U is lightlike.

90

4 Scalar Product Spaces

Theorem 4.8.9. If (V, g) is a Lorentz vector space and U is a subspace of V of dimension m ≥ 2, then U contains a nonzero spacelike vector. Proof. There are three cases to consider. Case 1. U is spacelike. By Theorem 4.8.3, any nonzero vector in U is spacelike. Case 2. U is timelike. Let dim(V ) = m. According to Theorem 4.8.7, U contains a timelike vector v. Since a timelike vector is nonzero, dim(Rv) = 1, so Theorem 4.1.2(b) gives dim (Rv)⊥ = m − 1. By assumption, dim(U ) ≥ 2. We have from Theorem 1.1.15 that  m ≥ dim (Rv)⊥ + U   = dim (Rv)⊥ + dim(U ) − dim (Rv)⊥ ∩ U  ≥ m + 1 − dim (Rv)⊥ ∩ U ,  hence dim (Rv)⊥ ∩ U ≥ 1. Thus, (Rv)⊥ ∩ U is not the zero subspace. It follows from Theorem 4.8.2 that Rv is timelike, and then from Theorem 4.8.4(b) that (Rv)⊥ is spacelike. By Theorem 4.8.3, any nonzero vector in (Rv)⊥ ∩ U is spacelike. Case 3. U is lightlike. By Theorem 4.8.8, U contains a lightlike vector v. Let w be a vector in U Rv. There are three cases to consider: w is either spacelike, timelike, or lightlike. If w is spacelike, then we are done. Suppose w is not spacelike. By Theorem 4.8.8, w cannot be timelike, so it is lightlike. It follows from Theorem 1.1.2 that v, w are linearly independent, and then from Theorem 4.8.6 that they are not orthogonal. Thus, hv + w, v + wi = 2hv, wi = 6 0, so the vector v + w in U is nonzero and not lightlike. By Theorem 4.8.8, v + w cannot be timelike, hence it is spacelike. The following table summarizes some of the above findings on Lorentz vector spaces. As can be seen, the observations for 2-dimensional subspaces of R31 made in Example 4.8.1 hold for Lorentz vector spaces in general. Type of subspace

Types of vectors in subspace

(dimension ≥ 2)

(excluding zero vector) Spacelike

Timelike

Lightlike

Spacelike

yes

no

no

Timelike

yes

yes

yes

Lightlike

yes

no

yes

91

4.9 Time Cones

Looking again at Figure 4.8.2, we confirm using Theorem 4.8.3, Theorem 4.8.7, and Theorem 4.8.8, respectively, that Π1 is spacelike, Π2 is timelike, and Π3 is lightlike. We conclude this section with yet another version of the Cauchy–Schwarz inequality. It is labeled “reversed” because of the direction of the inequality. This is a reminder of how differently behaved a Lorentz vector space is compared to an inner product space. Theorem 4.8.10 (Reversed Cauchy–Schwarz Inequality). If (V, g) is a Lorentz vector space and v, w are timelike vectors, then hv, wi2 ≥ hv, vihw, wi, with equality if and only if v, w are linearly dependent. Proof. Since v is timelike, it follows from Theorem 4.8.2 that Rv is timelike, and then from Theorem 4.8.4(b) that (Rv)⊥ is spacelike. Thus, g is an inner product on (Rv)⊥ . By Theorem 4.8.5, w = w + cv, where w is a (spacelike) vector in (Rv)⊥ and c is a real number. Then hv, wi = chv, vi hence

and

hw, wi = hw, wi + c2 hv, vi,

hv, wi2 = c2 hv, vi2 = hv, vi(hw, wi − hw, wi) = hv, vihw, wi − hv, vihw, wi.

Since v and w are timelike, hv, vihw, wi > 0, and because v is timelike and w is spacelike, hv, vihw, wi ≤ 0. It follows that hv, wi2 ≥ hv, vihw, wi, with equality if and only if hw, wi = 0. Since g is an inner product on (Rv)⊥ , hw, wi = 0 if and only if w = 0, which is equivalent to w = cv, which in turn is equivalent to v and w being linearly dependent.

4.9

Time Cones

Let (V, g) be a Lorentz vector space. The time cone of V is denoted by T and defined to be the set of timelike vectors in V : T = {v ∈ V : hv, vi < 0}. In Rm 1 , the time cone is denoted by Tm . For example, T3 = {(x, y, z) ∈ R31 : x2 + y 2 − z 2 < 0}. Recall from Example 4.8.1 that the light cone of R31 is Λ3 = {(x, y, z) ∈ R31 : x2 + y 2 − z 2 = 0}{(0, 0, 0)}. Thus, T3 is “inside” Λ3 .

92

4 Scalar Product Spaces

Theorem 4.9.1. Let (V, g) be a Lorentz vector space. If v is a timelike vector in V and w is a timelike or lightlike vector in V , then hv, wi = 6 0. Proof. Since v is timelike, it follows from Theorem 4.8.2 that Rv is timelike, and then from Theorem 4.8.4(b) that (Rv)⊥ is spacelike. Since w is not spacelike, by Theorem 4.8.3, it is not in (Rv)⊥ , hence hv, wi = 6 0.

Let (V, g) be a Lorentz vector space, and let v, w be timelike vectors in V . It follows from Theorem 4.9.1 that either hv, wi < 0 or hv, wi > 0. Let us write v∼w

hv, wi < 0.

if

Theorem 4.9.2. With the above setup, ∼ is an equivalence relation on T . Proof. Clearly, ∼ is reflexive and symmetric. It remains to show that ∼ is transitive; that is, if u, v, w are vectors in T such that hu, vi < 0 and hu, wi < 0, then hv, wi < 0. Suppose without loss of generality that u is a (timelike) unit vector. By Theorem 4.8.5, v = v + au

and

w = w + bu,

where v, w are (spacelike) vectors in (Ru)⊥ and a, b are real numbers. It follows that hu, vi = −a hu, wi = −b (4.9.1) hv, wi = hv, wi − ab 2

hv, vi = hv, vi − a

(4.9.2)

2

hw, wi = hw, wi − b .

(4.9.3)

By assumption, hv, vi < 0 and hw, wi < 0. Since v and w are spacelike, we have hv, vi ≥ 0 and hw, wi ≥ 0. It follows from (4.9.3) that 0 ≤ hv, vi < a2

and

0 ≤ hw, wi < b2 ,

hence (ab)2 > hv, vihw, wi ≥ 0.

(4.9.4)

By assumption, hu, vi < 0 and hu, wi < 0, so hu, vihu, wi > 0. Then (4.9.1) yields ab > 0. (4.9.5) Since u is timelike, we have from Theorem 4.8.2 that Ru is timelike, and then from Theorem 4.8.4(b) that (Ru)⊥ is spacelike. Thus, g is an inner product on (Ru)⊥ . By Theorem 4.6.4, hv, vihw, wi ≥ hv, wi2 .

(4.9.6)

It follows from (4.9.4) and (4.9.6) that (ab)2 > hv, wi2 , and then from (4.9.5) that (4.9.7) ab > |hv, wi|. Finally, (4.9.2) and (4.9.7) give

hv, wi = hv, wi − ab ≤ |hv, wi| − ab < 0.

93

4.9 Time Cones

Let (V, g) be a Lorentz vector space, and let T be its time cone. It is clear that ∼ partitions T into two equivalence classes, which we denote by T + and T − . We arbitrarily label T + the future time cone, and T − the past time cone. Vectors in T + are said to be future-directed, while those in T − are called past-directed. In Rm 1 , the future time cone and past time cone are denoted by Tm+ and Tm− , respectively. Continuing with the above example, we observe that (0, 0, 1) is a vector in T3 . For a vector (x, y, z) in T3 , we have (x, y, z) ∼ (0, 0, 1) ⇔

h(x, y, z), (0, 0, 1)i < 0



z > 0.



−z < 0

Similarly, (x, y, z) ∼ (0, 0, −1) if and only if z < 0. Thus, T3+ is either {(x, y, z) ∈ T3 : z > 0}

or

{(x, y, z) ∈ T3 : z < 0},

but which one is a matter of choice. Part (a) of the next result shows that time cones are aptly named. Theorem 4.9.3. If (V, g) is a Lorentz vector space, then: (a) T + and T − are cone-shaped. (b) T − = −T + . Proof. (a): Let v, w be vectors in T + , and let c > 0 be a real number. By definition, hv, vi, hw, wi, hv, wi < 0. We have hcv, cvi < 0 and hcv, vi < 0, hence cv is in T and cv ∼ v. By Theorem 4.9.2, cv is in T + . We also have hv + w, v + wi < 0 and hv + w, vi < 0, so v + w is in T and v + w ∼ v. Again by Theorem 4.9.2, v + w is in T + . Thus, T + is cone-shaped. A similar argument shows that so is T − . (b): Let v be a vector in T + . Then h−v, −vi < 0, hence −v is in T . Furthermore, −v is not in T + ; for if it were, then, since v is in T + , we would have v ∼ −v, that is, hv, −vi < 0, hence hv, vi > 0, which contradicts the assumption on v. It follows that −v is in T − , so v is in −T − . Then T + ⊆ −T − , hence −T + ⊆ T − . A similar argument shows that T − ⊆ −T + . Thus, T − = −T + . Let (V, g) be a Lorentz vector space, let v, w be timelike vectors in V , and let u be a lightlike vector in V . It follows from Theorem 4.9.1 that hu, vi, hu, wi = 6 0, so hu, vi and hu, wi either have the same sign or opposite signs. Theorem 4.9.4. With the above setup, hu, vi and hu, wi have the same sign if and only if v ∼ w. Proof. (⇐): By assumption, v ∼ w, so either v, w are in T + or v, w are in T − . Assume the former and suppose, for a contradiction, that hu, vi and hu, wi have opposite signs, say, hu, vi < 0 and hu, wi > 0. Observe that −hu, vi/hu, wi > 0,

94

4 Scalar Product Spaces

and let w e = −(hu, vi/hu, wi)w. By Theorem 4.9.3(a), v + w e is in T + . It follows from Theorem 4.8.2 that R(v + w) e is timelike, and then from Theorem 4.8.4(b) that R(v + w) e ⊥ is spacelike. Since hu, vi = −hu, wi, e hence hu, v + wi e = 0, we see that u is in R(v + w) e ⊥ . By Theorem 4.8.3, u is spacelike, which contradicts the assumption on u. Thus, hu, vi and hu, wi have the same sign. The proof when v, w are in T − is similar. (⇒): We prove the logically equivalent assertion: if v 6∼ w, then hu, vi and hu, wi have opposite signs. Since v 6∼ w, either v is in T + and w is in T − , or v is in T − and w is in T + . Suppose the former. By Theorem 4.9.3(b), −w is in T + , hence v ∼ −w. It follows from (⇐) that hu, vi and hu, −wi have the same sign, hence hu, vi and hu, wi have opposite signs. The proof when v is in T − and w is in T + is similar. Theorem 4.9.4 provides a mechanism for partitioning a light cone in a way that is consistent with the time structure of the time cone. Let (V, g) be a Lorentz vector space, and let u be a lightlike vector in V . We say that u is future-directed if hu, vi < 0 for some (hence every) vector v in T + , and past-directed if hu, wi < 0 for some (hence every) vector w in T − . The future light cone is denoted by Λ+ and defined to be the set of future-directed lightlike vectors: Λ+ = {u ∈ Λ : hu, vi < 0 for some (hence every) v ∈ T + }. Similarly, the past light cone is denoted by Λ− and defined to be the set of past-directed lightlike vectors: Λ− = {u ∈ Λ : hu, wi < 0 for some (hence every) w ∈ T − }. + − The future and past light cones in Rm 1 are denoted by Λm and Λm , respectively.

Theorem 4.9.5. Let (V, g) be a Lorentz vector space, let A : V −→ V be a linear isometry, and let v be a vector in V . Then v is spacelike (resp., timelike, lightlike) if and only if A(v) is spacelike (resp., timelike, lightlike). Proof. Straightforward. Theorem 4.9.6. Let (V, g) be a Lorentz vector space, and let A : V −→ V be a linear isometry. Then either A(T + ) = T + (equivalently, A(T − ) = T − ), in which case A is said to be orthochronous, or A(T + ) = T − (equivalently, A(T − ) = T + ), in which case A is said to be nonorthochronous. Proof. It follows from Theorem 4.9.5 that A(T ) = T . Let v, w be vectors in T + . Then hA(v), A(w)i = hv, wi < 0, so either A(v), A(w) are in T + or A(v), A(w) are T − . Since v, w were arbitrary, it follows that either A(T + ) ⊆ T + or A(T + ) ⊆ T − . Suppose A(T + ) ⊆ T + . Since A−1 is a linear isometry, the preceding argument shows that either A−1 (T + ) ⊆ T + or A−1 (T + ) ⊆ T − , or equivalently, either T + ⊆ A(T + ) or T + ⊆ A(T − ). If T + ⊆ A(T + ), then, since A(T + ) ⊆ T + , we have A(T + ) = T + . On the other

95

4.9 Time Cones

hand, if T + ⊆ A(T − ), then Theorem 4.9.3(b) gives T + ⊆ −A(T + ), hence T − = −T + ⊆ A(T + ) ⊆ T + , which is contradiction. Thus, A(T + ) = T + . Now suppose A(T + ) ⊆ T − . An argument similar to that just given shows that A(T + ) = T − . Let v, w be vectors in T − . An argument similar to that just given shows that either A(T − ) = T − or A(T − ) = T + . Let (V, g) be a Lorentz vector space, and let (e1 , . . . , em ) be an orthonormal basis for V . Since em is timelike, it is in either T + or T − . The labeling of time cones in V is arbitrary, so we are free to adopt the following convention. Given an orthonormal basis (e1 , . . . , em ) in a Lorentz vector space, T + is the time cone containing em . Thus, T + = {v ∈ T : hv, em i < 0}

(4.9.8)

and Λ+ = {u ∈ Λ : hu, em i < 0}, where we note that em is in T + . It needs to be emphasized that T + and Λ+ depend on the choice of orthonormal basis. In particular, if we decide to use the orthonormal basis (e1 , . . . , −em ) instead of (e1 , . . . , em ), then “future” and “past” are reversed. P Theorem 4.9.7. With the above setup, let v be a vector in T , with v = i ai ei . Then v is in T + if and only if am > 0. Proof. We have hv, em i =

X

ai ei , em



i

= hem , em iam = −am ,

from which the result follows. Continuing with the earlier example, let us choose the standard basis (e1 , e2 , e3 ) for R31 . According to the above convention, e3 = (0, 0, 1) is a future-directed vector, so T3+ = {(x, y, z) ∈ T3 : z > 0}

and

Λ+ 3 = {(x, y, z) ∈ Λ3 : z > 0}.

We close this section with what is perhaps the most striking illustration of the divide that separates the geometry of Lorentz vector spaces from that of inner product spaces. We observed in connection with Theorem 4.6.5 that in an inner product space, it is shorter to walk diagonally across a rectangular field than to go around the corner. The next result shows that, by contrast, in a Lorentz vector space, the opposite is true when the rectangular field is determined by equivalent timelike vectors.

96

4 Scalar Product Spaces

Theorem 4.9.8 (Reversed Triangle Inequality). If (V, g) is a Lorentz vector space and v, w are timelike vectors, with v ∼ w, then kv + wk ≥ kvk+kwk, with equality if and only if v, w are linearly dependent. Proof. It follows from hv, vi < 0 and hw, wi < 0 that 2

kvk = −hv, vi

and

2

kwk = −hw, wi,

(4.9.9)

and then from Theorem 4.8.10 that 2

2

kvk kwk = hv, vihw, wi ≤ hv, wi2 . Since v ∼ w, we have hv, wi < 0, hence kvk kwk ≤ −hv, wi.

(4.9.10)

Also, since v ∼ w, either v, w are in T + or v, w are in T − . Suppose the former. By Theorem 4.9.3(a), v + w is in T + , hence hv + w, v + wi < 0, so 2

kv + wk = −hv + w, v + wi.

(4.9.11)

Then 2

2

(kvk + kwk)2 = kvk + 2 kvk kwk + kwk

≤ −hv, vi − 2hv, wi − hw, wi

[(4.9.9), (4.9.10)]

= −hv + w, v + wi 2

= kv + wk ,

[(4.9.11)]

hence kvk+kwk ≤ kv + wk, with equality if and only if kvk kwk = −hv, wi. We have from (4.9.9) that the preceding identity is equivalent to hv, vihw, wi = hv, wi2 , which in turn, by Theorem 4.8.10, is equivalent to v, w being linearly dependent. The proof when v, w are in T − is similar.

Chapter 5

Tensors on Vector Spaces 5.1

Tensors

Let V be a vector space, and let r, s ≥ 1 be integers. Following Section B.5 and Section B.6, we denote by Mult(V ∗r × V s , R) the vector space of R-multilinear functions from V ∗r × V s to R, where addition and scalar multiplication are  defined as follows: for all functions A, B in Mult(V ∗r × V s , R and all real numbers c, (A + B)(η 1 , . . . , η r , v1 , . . . , vs )

= A(η 1 , . . . , η r , v1 , . . . , vs ) + B(η 1 , . . . , η r , v1 , . . . , vs )

and (cA)(η 1 , . . . , η r , v1 , . . . , vs ) = cA(η 1 , . . . , η r , v1 , . . . , vs ) for all covectors η 1 , . . . , η r in V ∗ and all vectors v1 , . . . , vs in V . For brevity, let us denote Mult(V ∗r × V s , R) by T rs (V ).

We refer to a function A in T rs (V ) as an (r, s)-tensor, an r-contravariants-covariant tensor, or simply a tensor on V , and we define the rank of A to be (r, s). When s = 0, A is said to be an r-contravariant tensor or just a contravariant tensor, and when r = 0, A is said to be an s-covariant tensor or simply a covariant tensor. In Section 1.2, we used the term covector to describe what we now call a 1-covariant tensor. Note that the sum of tensors is defined only when they have the same rank. The zero element of T rs (V ), called the zero tensor and denoted by 0, is the tensor that sends all elements of V ∗r × V s to the real number 0. A tensor in T rs (V ) is said to be nonzero if it is not the zero tensor. Let us observe that T 01 (V ) = V ∗

T 10 (V ) = V ∗∗ = V.

and

Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

97

(5.1.1)

98

5 Tensors on Vector Spaces

For completeness, we define T 00 (V ) = R.

(5.1.2)

It is important not to confuse the above vector space operations with the multilinear property. The latter means that a tensor is linear in each of its arguments. That is, for all covectors η 1 , . . . , η r , ζ in V ∗ , all vectors v1 , . . . , vs , w in V , and all real numbers c, we have A(η 1 , . . . , cη i + ζ, . . . , η r , v1 , . . . , vs )

= c A(η 1 , . . . , η i , . . . , η r , v1 , . . . , vs ) + A(η 1 , . . . , ζ, . . . , η r , v1 , . . . , vs )

for i = 1, . . . , r, and A(η 1 , . . . , η r , v1 , . . . , cvj + w, . . . , vs )

= c A(η 1 , . . . , η r , v1 , . . . , vj , . . . , vs ) + A(η 1 , . . . , η r , v1 , . . . , w, . . . , vs )

for j = 1, . . . , s. Example 5.1.1. The classic example of a tensor is the determinant function det, which is an m-covariant tensor on Matm×1 . Another obvious example is a bilinear function b on a vector space V , which is a 2-covariant tensor on V . Here are two more examples of tensors. For given vectors v, w in V , the function A : V ∗2 −→ R defined by A(η, ζ) = η(v)ζ(w) for all covector η, ζ in V ∗ is a 2-contravariant tensor on V ∗ . Similarly, for given covectors η, ζ in V ∗ , the function B : V 2 −→ R defined by B(v, w) = η(v)ζ(w) for all vector v, w in V is a 2-covariant tensor on V . ♦ Computations involving tensors rely in a crucial way on the identification V ∗∗ = V , which is included as part of (5.1.1). We repeat here the remarks made at the close of Section 1.2: for a vector v in V and a covector η in V ∗ , the element of V ∗∗ corresponding to v is also denoted by v, so that, by definition, v(η) = η(v).

(5.1.3)

Thus, B in Example 5.1.1 can be expressed as B(v, w) = v(η)w(ζ). Despite (5.1.3), we usually prefer the notation η(v) to v(η). We now define a type of multiplication of tensors. Tensor product is the family of linear maps 0

0

⊗ : T rs (V ) × T rs0 (V ) −→ T r+r s+s0 (V ) defined for r, s, r0 , s0 ≥ 0 by 0

(A ⊗ B)(η 1 , . . . , η r+r , v1 , . . . , vs+s0 )

(5.1.4)

0

= A(η 1 , . . . , η r , v1 , . . . , vs ) B(η r+1 , . . . , η r+r , vs+1 , . . . , vs+s0 ) 0

0

for all tensors A in T rs (V ) and B in T rs0 (V ), all covectors η 1 , . . . , η r+r in V ∗ , and all vectors v1 , . . . , vs+s0 in V . We refer to A ⊗ B as the tensor product

99

5.1 Tensors

of A and B. Unlike with addition, the product of tensors does not require A and B to have the same rank. Usually, A ⊗ B = 6 B ⊗ A, so the tensor product is generally not commutative. The next result gives the basic algebraic properties of tensors. Theorem 5.1.2. Let V be a vector space, let A, A1 , A2 and B, B1 , B2 and C 0 00 be tensors in T rs (V ) and T rs0 (V ) and T rs00 (V ), respectively, and let c be a real number. Then: (a) (A1 + A2 ) ⊗ B = A1 ⊗ B + A2 ⊗ B. (b) A ⊗ (B1 + B2 ) = A ⊗ B1 + A ⊗ B2 . (c) (cA) ⊗ B = c(A ⊗ B) = A ⊗ (cB). (d) (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C). Proof. Straightforward. In view of Theorem 5.1.2(d), we drop parentheses around tensor products and, for example, denote (A ⊗ B) ⊗ C and A ⊗ (B ⊗ C) by A ⊗ B ⊗ C, with corresponding notation for tensor products of more than three terms. By forming the tensor product of collections of vectors and covectors, we can construct tensors of arbitrary rank. To illustrate, from vectors w1 , . . . , wr in V and covectors ζ 1 , . . . , ζ s in V ∗ , we obtain the tensor w1 ⊗ · · · ⊗ wr ⊗ ζ 1 ⊗ · · · ⊗ ζ s in T rs (V ). Then (5.1.4) gives w1 ⊗ · · · ⊗ wr ⊗ ζ 1 ⊗ · · · ⊗ ζ s (η 1 , . . . , η r , v1 , . . . , vs ) = w1 (η 1 ) · · · wr (η r ) ζ 1 (v1 ) · · · ζ s (vs )

for all all covectors η 1 , . . . , η r in V ∗ and all vectors v1 , . . . , vs in V . Let V be a vector space, let A be a tensor in T rs (V ), let H = (h1 , . . . , hm ) be a basis for V , and let Θ = (θ1 , . . . , θm ) be its dual basis. The components of A with respect to H are defined by i1 ir r Aij11...i ...js = A(θ , . . . , θ , hj1 , . . . , hjs )

for all 1 ≤ i1 , . . . , ir ≤ m and 1 ≤ j1 , . . . , js ≤ m. Theorem 5.1.3 (Basis for T rs (V )). With the above setup: (a) X j1 js r A= Aij11...i ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ .

(5.1.5)

1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

(b) {hi1 ⊗ · · · ⊗ hir ⊗ θj1 ⊗ · · · ⊗ θjs ∈ T rs (V ) : 1 ≤ i1 , . . . , ir ≤ m; 1 ≤ j1 , . . . , js ≤ m}

is an unordered basis for T rs (V ). (c) T rs (V ) has dimension mr+s .

100

5 Tensors on Vector Spaces

Proof. (a), (b): In view of preceding remarks, each hi1 ⊗· · ·⊗hir ⊗θj1 ⊗· · ·⊗θjs is in T rs (V ). Let Ae be the tensor in T rs (V ) defined by X j1 js r Ae = Aij11...i ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ . 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

Then e i1 , . . . , θir , hj , . . . , hj ) A(θ 1 s X k1 ...kr = Al1 ...ls hk1 ⊗ · · · ⊗ hkr ⊗ θl1 ⊗ · · · ⊗ θls 1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

(θi1 , . . . , θir , hj1 , . . . , hjs ) X ...kr = Akl11...l hk1 (θi1 ) · · · hkr (θir ) θl1 (hj1 ) · · · θls (hjs ) s 1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

=

X 1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

...kr i1 Akl11...l δ k1 · · · δ ikrr δ lj11 · · · δ ljss s

i1 ir r = Aij11...i ...js = A(θ , . . . , θ , hj1 , . . . , hjs ).

Since H and Θ are bases for V and V ∗ , respectively, and A and B are multilinear, e Thus, the hi ⊗ · · · ⊗ hi ⊗ θj1 ⊗ · · · ⊗ θjs span T r (V ). If it follows that A = A. s 1 r X j1 js r aij11...i =0 ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

r for some real numbers aij11...i ...js , then applying both sides of the preceding identity ...kr k1 kr to (θ , . . . , θ , hl1 , . . . , hls ) yields akl11...l = 0. Thus, the hi1 ⊗ · · · ⊗ hir ⊗ θj1 ⊗ s · · · ⊗ θjs are linearly independent. (c): This follows from part (a).

We see from Theorem 5.1.3 that a tensor is completely determined by its components with respect to a given basis. In fact, this is the classical way of defining a tensor. Choosing another basis generally yields different values for the components. The next result shows how to convert components when changing from one basis to another. e be Theorem 5.1.4 (Change of Basis). Let V be a vector space, let H and H  H  i   H −1  i  bases for V , and let idV He = aj and idV He = bj . Let A be a tensor r ei1 ...ir be the components of A with respect to H in T rs (V ), and let Aij11...i and A ...js j1 ...js e respectively. Then and H, X ...kr r Aeij11...i bik11 · · · bikrr alj11 · · · aljss Akl11...l . ...js = s 1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

101

5.1 Tensors

e = (e Proof. Let H = (h1 , . . . , hm ) and H h1 , . . . , e hm ), and let Θ = (θ1 , . . . , θm ) e = (θe1 , . . . , θem ) be the corresponding dual bases. We have and Θ  Θ  Θ e −1 ∗ idV ∗ Θ id = [Th 2.2.5(b)] V e Θ   Θ e −1 = (idV )∗ Θ   H T −1 idV He = [Th 2.2.7]    H −1 T idV He = [Th 2.1.3(c)]  i T = bj . Using (2.2.6) and (2.2.7) gives r ei1 eir e e Aeij11...i ...js = A(θ , . . . , θ , hj1 , . . . , hjs ) X  X X X i1 k1 ir kr l1 ls =A bk1 θ , . . . , bkr θ , aj1 hl1 , . . . , ajs hls

k1

=

X

kr

bik11

1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

=

X 1≤k1 ,...,kr ≤m 1≤l1 ,...,ls ≤m

l1

· · · bikrr alj11

· · · aljss

ls

k1

kr

A(θ , . . . , θ , hl1 , . . . , hls )

...kr bik11 · · · bikrr alj11 · · · aljss Akl11...l . s

As the next result shows, the components of the product of tensors are given by the product of the respective components. Theorem 5.1.5 (Product of Tensors). Let V be a vector space, let A and 0 B be tensors in T rs (V ) and T rs0 (V ), respectively, let H = (h1 , . . . , hm ) be a basis k1 ...kr0 r for V , and let (θ1 , . . . , θm ) be its dual basis. Let Aij11...i ...js and B l1 ...ls0 be the components of A and B with respect to H, respectively, so that X j1 js r A= Aij11...i ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

and B=

X 1≤k1 ,...,kr0 ≤m 1≤l1 ,...,ls0 ≤m

k ...k

B l11...ls0r0 hk1 ⊗ · · · ⊗ hkr0 ⊗ θl1 ⊗ · · · ⊗ θls0 . k ...k

1 r r0 Then A ⊗ B has the components Aij11...i ...js B l1 ...ls0 with respect to H, so that

A⊗B =

X 1≤i1 ,...,ir ,k1 ,...,kr0 ≤m 1≤j1 ,...,js ,l1 ,...,ls0 ≤m

k ...k

1 r r0 Aij11...i ...js B l1 ...ls0 hi1 ⊗ · · · ⊗ hir ⊗ hk1 ⊗ · · · ⊗ hkr0

⊗ θj1 ⊗ · · · ⊗ θjs ⊗ θl1 ⊗ · · · ⊗ θls0 .

102

5 Tensors on Vector Spaces

Proof. It follows from Theorem 5.1.2 that A⊗B =

X 1≤i1 ,...,ir ,k1 ,...,kr0 ≤m 1≤j1 ,...,js ,l1 ,...,ls0 ≤m

k ...k

1 j1 js r r0 Aij11...i ...js B l1 ...ls0 (hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ )

⊗ (hk1 ⊗ · · · ⊗ hkr0 ⊗ θl1 ⊗ · · · ⊗ θls0 ). We need to check that (hi1 ⊗ · · · ⊗ hir ⊗ θj1 ⊗ · · · ⊗ θjs ) ⊗ (hk1 ⊗ · · · ⊗ hkr0 ⊗ θl1 ⊗ · · · ⊗ θls0 )

= (hi1 ⊗ · · · ⊗ hir ⊗ hk1 ⊗ · · · ⊗ hkr0 ) ⊗ (θj1 ⊗ · · · ⊗ θjs ⊗ θl1 ⊗ · · · ⊗ θls0 ). 0

Let η 1 , . . . , η r+r be covectors in V ∗ , and let v1 , . . . , vs+s0 be vectors in V . We have from (5.1.4) that applying the left-hand side of the above expression to 0 (η 1 , . . . , η r+r , v1 , . . . , vs+s0 ) gives [hi1 (η 1 ) · · · hir (η r )][θj1 (v1 ) · · · θjs (vs )] 0

· [hk1 (η r+1 ) · · · hkr0 (η r+r )][θl1 (vs+1 ) · · · θls0 (vs+s0 )],

while applying the right-hand side gives 0

[hi1 (η 1 ) · · · hir (η r )][hk1 (η r+1 ) · · · hkr0 (η r+r )]

· [θj1 (v1 ) · · · θjs (vs )][θl1 (vs+1 ) · · · θls0 (vs+s0 )].

The result follows.

Theorem 5.1.5 shows that the product of tensors, each of which is expressed in the standard format given by (5.1.5), can in turn be expressed in that same format. Example 5.1.6 (Kronecker’s Delta). Let V be a vector space, let H = (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. As remarked above, in the classical literature a tensor is specified by giving its components with respect to a chosen basis for V . From that perspective, perhaps the most basic of all tensors in T 11 (V ) is the one with components δji given by Kronecker’s delta. Accordingly, we consider the (1, 1)-tensor δ=

X ij

δji hi ⊗ θj .

103

5.2 Pullback of Covariant Tensors Let ηPbe a covector in V ∗ , and let v be a vector in V , with η = v = l bl hl . Then X X  X δ(η, v) = δji hi ⊗ θj ak θk , bl hl ij

=

X

=

X

k

δji

X

η(v) =

ak hi (θ )

=

ij

X

k

ai θi



l j

b θ (hl )

l

δji ai bj =

X

ai bi

i

X

i

X

l k

ij

and

bj hj

 =

j j i

ai b θ (hj ) =

X

X

 X  ai θi bj hj

i

j

i

ai b .

i

Thus, δ(η, v) = η(v).

5.2

ak θk and

l

k

X

ij

X

k

 X  X  δji hi ak θk θj bl hl

ij

=

P



Pullback of Covariant Tensors

In this brief section, we generalize to tensors the definition of pullback by a linear map given in Section 1.3. Let V and W be vector spaces, and let A : V −→ W be a linear map. Pullback by A (for covariant tensors) is the family of linear maps A∗ : T 0s (W ) −→ T 0s (V ) defined for s ≥ 1 by  A∗ (B)(v1 , . . . , vs ) = B A(v1 ), . . . , A(vs )

(5.2.1)

for all tensors B in T 0s (W ) and all vectors v1 , . . . , vs in V . We refer to A∗ (B) as the pullback of B by A. When s = 1, the earlier definition is recovered. As an example, let (V, g) be a scalar product space, and let A : V −→ V be a linear map. Then the tensor A∗ (g) in T 02 (V ) is given by A∗ (g)(v, w) = hA(v), A(w)i for all vectors v, w in V . Pullbacks of covariant tensors behave well with respect to basic algebraic structure. Theorem 5.2.1. Let U, V , and W be vector spaces, let A : U −→ V and B : V −→ W be linear maps, and let A, B and C be tensors in T 0s (V ) and T 0s0 (V ), respectively. Then: (a) A∗ (A + B) = A∗ (A) + A∗ (B). (b) A∗ (A ⊗ C) = A∗ (A) ⊗ A∗ (C).

104

5 Tensors on Vector Spaces

(c) (B ◦ A)∗ = A∗ ◦ B ∗ . Proof. Straightforward.

5.3

Representation of Tensors

Let V be a vector space, let H = (h1 , . . . , hm ) be a basis for V , let (θ1 , . . . , θm ) be its dual basis, and let s ≥ 1 be an integer. Following Section B.5 and Section B.6, we denote by Mult(V s , V ) the vector space of R-multilinear maps from V s to V . Let us define a map Rs : Mult(V s , V ) −→ T 1s (V ), called the representation map, by Rs (Ψ)(η, v1 , . . . , vs ) = η Ψ(v1 , . . . , vs )



(5.3.1)

for all maps Ψ in Mult(V s , V ), all covectors η in V ∗ , and all vectors v1 , . . . , vs in V . When s = 1, we denote R1

by

R.

Theorem 5.3.1 (Representation of Tensors). With the above setup: (a) Rs is a linear isomorphism: T 1s (V ) ≈ Mult(V s , V ). (b) For all maps Ψ in Mult(V s , V ), the components of the tensor Rs (Ψ) in T 1s (V ) with respect to H are  Rs (Ψ)ij1 ...js = θi Ψ(hj1 , . . . , hjs ) . (c) For all tensors A in T 1s (V ), R−1 s (A)(hj1 , . . . , hjs ) =

X i

Aij1 ...js hi ,

where the Aij1 ...js are the components of A with respect to H. Proof. (a): It is map. If Rs (Ψ) = 0, then  easily shown that Rs is a linear ∗ η Ψ(v1 , . . . , vs ) = 0 for all covectors η in V and vectors v1 , . . . , vs in V . In  particular, each θi Ψ(v1 , . . . , vs ) = 0. Because Ψ(v1 , . . . , vs ) is a vector in V , by Theorem 1.2.1(d), X  Ψ(v1 , . . . , vs ) = θi Ψ(v1 , . . . , vs ) hi , i

hence Ψ(v1 , . . . , vs ) = 0. Since v1 , . . . , vs were arbitrary, Ψ = 0. Thus, ker(Rs ) = {0}. By Theorem 5.1.3(c), T 1s (V ) has dimension ms+1 , and it can be shown

105

5.3 Representation of Tensors

that Mult(V s , V ) has the same dimension. It follows from Theorem 1.1.12 and Theorem 1.1.14 that Rs is a linear isomorphism. (b): We have from (5.3.1) that  Rs (Ψ)ij1 ...js = Rs (Ψ)(θi , hj1 , . . . , hjs ) = θi Ψ(hj1 , . . . , hjs ) . (c): Since R−1 s (A)(hj1 , . . . , hjs ) is a vector in V , by Theorem 1.2.1(d), X  R−1 θi R−1 s (A)(hj1 , . . . , hjs ) = s (A)(hj1 , . . . , hjs ) hi . i

We have from (5.3.1) and part (a) that  i Aij1 ...js = A(θi , hj1 , . . . , hjs ) = Rs R−1 s (A) (θ , hj1 , . . . , hjs )  = θi R−1 s (A)(hj1 , . . . , hjs ) . The result follows. It is instructive to specialize Theorem 5.3.1 to the case s = 1, which for us is the situation of most interest. We then have Mult(V, V ) = Lin(V, V ) and R : Lin(V, V ) −→ T 11 (V ) defined by  R(B)(η, v) = η B(v) = B ∗ (η)(v)

(5.3.2)



for all maps B in Lin(V, V ), all covectors η in V , and all vectors v in V . Theorem 5.3.2. With the above setup: (a) R is a linear isomorphism: T 11 (V ) ≈ Lin(V, V ).

(b) For all maps B in Lin(V, V ), the components of the tensor R(B) in T 11 (V ) with respect to H are  R(B)ij = θi B(hj ) . (c) For all tensors A in T 11 (V ), R−1 (A)(hj ) =

X i

hence

Aij hi ,

 −1 H   R (A) H = Aij ,

where the Aij are the components of A with respect to H.

Proof. This follows from Theorem 5.3.1, except for the second identity in part (c), which comes from (2.2.2) and (2.2.3). Theorem 5.3.2(c) shows that each tensor in T 11 (V ) has a representation as a matrix. Continuing with Example 5.1.6, the tensor δ in T 11 (V ) has the components δji with respect to H. By Theorem 5.3.2(c),  −1 H  i  R (δ) H = δ j = Im , so, not surprisingly, R−1 (δ) = idV .

106

5.4

5 Tensors on Vector Spaces

Contraction of Tensors

Contraction of tensors is closely related to the trace of matrices. Like dual vector spaces, it is another of those deceptively simple algebraic constructions that has far-reaching applications. Theorem 5.4.1. Let V be a vector space, and define a linear function C11 : T 11 (V ) −→ R, called (1, 1)-contraction, by C11 = tr ◦ R−1 , where tr is trace and R is given by (5.3.2). Then C11 is the unique linear function such that C11 (v ⊗ η) = v(η)

(5.4.1)

for all vectors v in V and all covectors η in V ∗ . Proof. Let H = (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Existence. By Theorem 5.3.2(c),  −1 H   R (v ⊗ η) H = (v ⊗ η)ij .

(5.4.2)

Since (v ⊗ η)ij = (v ⊗ η)(θi , hj ) = v(θi )η(hj ), it follows from Theorem 1.2.1(d) and (1.2.3) that X    X tr (v ⊗ η)ij = v(θi )η(hi ) = η θi (v)hi = η(v). i

(5.4.3)

i

Thus,   H  tr R−1 (v ⊗ η) = tr R−1 (v ⊗ η) H   = tr (v ⊗ η)ij

[(2.5.1)] [(5.4.2)]

= η(v)

[(5.4.3)]

= v(η),

[(1.2.3)]

which shows that tr ◦ R−1 satisfies (5.4.1). Uniqueness. Suppose ` : T 11 (V ) −→ R is a linear function satisfying (5.4.1), let A be a tensor in T 11 (V ), and let Aij be the components of A with respect to

107

5.4 Contraction of Tensors H. Then X  X `(A) = ` Aij hi ⊗ θj = Aij `(hi ⊗ θj ) ij

=

X ij

=

X i

ij

Aij hi (θj )

[(5.4.1)]

  Aii = tr Aij

 H  = tr R−1 (A) H  = tr R−1 (A) .

[Th 5.3.2(c)] [(2.5.1)]

Since A was arbitrary, ` = tr ◦ R−1 . Theorem 5.4.2. Let V be a vector space, let H be a basis for V , let A be a tensor in T 11 (V ), and let Aij be the components of A with respect to H. Then X C11 (A) = Aii . i

Proof. This was shown as part of the proof of Theorem 5.4.1. Theorem 5.4.2 explains why in the literature “contraction” is sometimes called “trace”. The next result is a generalization of Theorem 5.4.1. Theorem 5.4.3. If V is a vector space, and 1 ≤ k ≤ r and 1 ≤ l ≤ s are integers, then there is a unique linear map Ckl : T rs (V ) −→ T r−1 s−1 (V ), called (k, l)-contraction, such that Ckl (v1 ⊗ · · · ⊗ vr ⊗ η 1 ⊗ · · · ⊗ η s )

= vk (η l ) v1 ⊗ · · · ⊗ vbk ⊗ · · · ⊗ vr ⊗ η 1 ⊗ · · · ⊗ ηbl ⊗ · · · ⊗ η s

for all vectors v1 , . . . , vr in V and all covectors η 1 , . . . , η s in V ∗ , where b indicates that an expression is omitted. Remark. In light of (5.1.2), when r = s = 1, we recover (5.4.1). Proof. Existence. Let A be a tensor in T rs (V ), let ζ 1 , . . . , ζ k−1 , ζ k+1 , . . . , ζ r be covectors in V ∗ , and let w1 , . . . , wl−1 , wl+1 , . . . , ws be vectors in V . We temporarily treat the preceding covectors and vectors as fixed and define a tensor Ae in T 11 (V ) by the assignment (ζ, w) 7−→ A(ζ 1 , . . . , ζ k−1 , ζ, ζ k+1 , . . . , ζ r , w1 , . . . , wl−1 , w, wl+1 , . . . , ws ) for all covectors ζ in V ∗ and all vectors w in V . Using Theorem 5.4.1, we obtain e Allowing ζ 1 , . . . , ζ k−1 , ζ k+1 , . . . , ζ r the corresponding (1, 1)-contraction C11 (A).

108

5 Tensors on Vector Spaces

and w1 , . . . , wl−1 , wl+1 , . . . , ws to vary defines a tensor Ckl (A) in T r−1 s−1 (V ). It follows from v1 ⊗ · · · ⊗ vr ⊗ η 1 ⊗ · · · ⊗ η s

(ζ 1 , . . . , ζ k−1 , ζ, ζ k+1 , . . . , ζ r , w1 , . . . , wl−1 , w, wl+1 , . . . , ws )

= [v1 (ζ 1 ) · · · vk−1 (ζ k−1 )] vk (ζ) [vk+1 (ζ k+1 ) · · · vr (ζ r )]

· [η 1 (w1 ) · · · η l−1 (wl−1 )] η l (w) [η l+1 (wl+1 ) · · · η s (ws )]

= [v1 (ζ 1 ) · · · vk−1 (ζ k−1 )] [vk+1 (ζ k+1 ) · · · vr (ζ r )]

 · [η 1 (w1 ) · · · η l−1 (wl−1 )] [η l+1 (wl+1 ) · · · η s (ws )] (vk ⊗ η l ) (ζ, w)

and Theorem 5.4.1 that Ckl (v1 ⊗ · · · ⊗ vr ⊗ η 1 ⊗ · · · ⊗ η s )

(ζ 1 , . . . , ζ k−1 , ζ k+1 , . . . , ζ r , w1 , . . . , wl−1 , wl+1 , . . . , ws )

= [v1 (ζ 1 ) · · · vk−1 (ζ k−1 )] [vk+1 (ζ k+1 ) · · · vr (ζ r )]

· [η 1 (w1 ) · · · η l−1 (wl−1 )] [η l+1 (wl+1 ) · · · η s (ws )] C11 (vk ⊗ η l )

= [v1 (ζ 1 ) · · · vk−1 (ζ k−1 )] [vk+1 (ζ k+1 ) · · · vr (ζ r )]

· [η 1 (w1 ) · · · η l−1 (wl−1 )] [η l+1 (wl+1 ) · · · η s (ws )] vk (η l ) = vk (η l ) v1 ⊗ · · · ⊗ vbk ⊗ · · · ⊗ vr ⊗ η 1 ⊗ · · · ⊗ ηbl ⊗ · · · ⊗ η s (ζ 1 , . . . , ζ k−1 , ζ k+1 , . . . , ζ r , w1 , . . . , wl−1 , wl+1 , . . . , ws ).

Since ζ 1 , . . . , ζ k−1 , ζ k+1 , . . . , ζ r , w1 , . . . , wl−1 , wl+1 , . . . , ws were arbitrary, the result follows. Uniqueness. This follows from the uniqueness property of C11 described in Theorem 5.4.1.

We say that (k, l) is the rank of a (k, l)-contraction. The composite of contractions (of various ranks) is called simply a contraction. We sometimes refer to the type of contractions defined here as ordinary, to distinguish them from a variant to be described in Section 6.5. Example 5.4.4. Let V be a vector space, let H = (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Let A be a tensor in T 22 (V ), and let Aij kl be the components of A with respect to H, so that A=

X ijkl

k l Aij kl hi ⊗ hj ⊗ θ ⊗ θ .

109

5.4 Contraction of Tensors Then the tensor C12 (A) in T 11 (V ) is given by X ij C12 (A) = Akl C12 (hi ⊗ hj ⊗ θk ⊗ θl ) ijkl

=

X ijkl

=

l k Aij kl hi (θ ) hj ⊗ θ =

X X p

jk



k Apj kp hj ⊗ θ =

Thus, C12 (A) has the components C12 (A)ij =

X jkp

k Apj kp hj ⊗ θ

X X ij

P

p

p

 j Api jp hi ⊗ θ .

Api jp with respect to H.



Theorem 5.4.5. Let V be a vector space, let H be a basis for V , and let 1 ≤ k ≤ r and 1 ≤ l ≤ s be integers. Let A be a tensor in T rs (V ) with the r−1 k r components Aij11...i ...js with respect to H. Then the tensor Cl (A) in T s−1 (V ) has the components X i ...i p i ...i i ...i r−1 k Ckl (A)j11 ...jr−1 = Aj11 ...jk−1 s−1 l−1 p jl ...js−1 p

with respect to H. Proof. The proof is an elaboration of Example 5.4.4. Let H = (h1 , . . . , hm ), and let (θ1 , . . . , θm ) be its dual basis, so that X j1 js r A= Aij11...i ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ . 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

Then Ckl (A) =

X 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

=

X 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

=

k j1 js r Aij11...i ...js Cl (hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ ) jl r Aij11...i ...js hik (θ )

j1 js c j · hi1 ⊗ · · · ⊗ hc ik ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ l ⊗ · · · ⊗ θ   X X i ...i p i ...i r k+1 Aj11 ...jk−1 l−1 p jl+1 ...js

1≤i1 ,...,ibk ,...ir ≤m 1≤j1 ,...,jbl ,...,js ≤m

p

j1 js c j · hi1 ⊗ · · · ⊗ hc ik ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ l ⊗ · · · ⊗ θ ,

where b indicates that an expression is omitted. Relabeling indices (ik+1 , . . . , ir ) as (ik , . . . , ir−1 ), and (jl+1 , . . . , js ) as (jl , . . . , js−1 ), gives X  X i ...i p ik ...ir−1 Ckl (A) = Aj11 ...jk−1 l−1 p jl ...js−1 1≤i1 ,...ir−1 ≤m 1≤j1 ,...,js−1 ≤m

p

· hi1 ⊗ · · · ⊗ hir−1 ⊗ θj1 ⊗ · · · ⊗ θjs−1 ,

110

5 Tensors on Vector Spaces

from which the result follows. Example 5.4.6. Let V be a vector space, let A be a tensor in T 11 (V ), let η be a covector in V ∗ , and let v be a vector in V . We claim that A(η, v) = C11 ◦ C22 (v ⊗ A ⊗ η). Let H = (h1 , . . . , hm ) be a basis for V , let (θ1 , . . . , θm ) be its dual basis, and, in local coordinates, let X X X A= Aij hi ⊗ θj , η= ak θk , and v= bl hl . ij

k

Then A(η, v) =

X ij

=

X

=

X

=

X

ij

ij

ij

and v⊗A⊗η = =

X

Aij hi

ijkl

⊗θ

j

X

k

ak θ ,

X

k

l



b hl

l

 X  X  Aij hi ak θ k θ j bl hl k

Aij

X

l

(5.4.4)

X  ak hi (θk ) bl θj (hl )

k

l

Aij ai bj l

 ⊗

b hl

l

X

l

X ij

Aij hi

⊗θ

j



X



ak θ

k



k

bl Aij ak hl ⊗ hi ⊗ θj ⊗ θk ,

hence C22 (v ⊗ A ⊗ η) =

X

=

X

ijkl

ijkl

bl Aij ak C22 (hl ⊗ hi ⊗ θj ⊗ θk ) bl Aij ak hi (θk )hl ⊗ θj =

so C11 ◦ C22 (v ⊗ A ⊗ η) =

X

=

X

ijl

ijl

X ijl

bl Aij ai hl ⊗ θj ,

bl Aij ai C11 (hl ⊗ θj ) bl Aij ai hl (θj ) =

X ij

(5.4.5)

ai bj Aij .

The result follows from (5.4.4) and (5.4.5).

♦ r s (V

Theorem 5.4.7. Let V be a vector space, let A be a tensor in T ), let η 1 , . . . , η r be covectors in V ∗ , and let v1 , . . . , vs be vectors in V . Then there is a contraction C on V such that A(η 1 , . . . , η r , v1 , . . . , vs ) = C(v1 ⊗ · · · ⊗ vs ⊗ A ⊗ η 1 ⊗ · · · ⊗ η r ).

111

5.4 Contraction of Tensors

Proof. The proof is an elaboration of Example 5.4.6. Let H = (h1 , . . . , hm ) be a basis for V , let (θ1 , . . . , θm ) be its dual basis, and, in local coordinates, let X j1 js r A= Aij11...i ...js hi1 ⊗ · · · ⊗ hir ⊗ θ ⊗ · · · ⊗ θ , 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

η1 =

X

X

a1k1 θk1 , . . . , η r =

k1

and v1 =

X

arkr θkr

kr

b1l1 hl1 , . . . , vs =

X

l1

bsls hls .

ls

Then A(η 1 , . . . , η r , v1 , . . . , vs )   X j1 js r = Aij11...i h ⊗ · · · ⊗ h ⊗ θ ⊗ · · · ⊗ θ ir ...js i1 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

X

a1k1 θk1 , . . . ,

X

k1

kr

1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

· θj1

X

1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

· =

X l1

r Aij11...i ...js

b1l1 hl1 , . . . ,

X

X

a1k1 θk1

X k1

X



ls

 · · · hir

k1

· · · θ js

bsls hls

bsls hls

X

arkr θkr



kr

(5.4.6)



ls

 X  a1k1 hi1 (θk1 ) · · · arkr hir (θkr ) kr

 X  b1l1 θj1 (hl1 ) · · · bsls θjs (hls ) ls

X 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

and



l1

X

=

r Aij11...i ...js hi1

b1l1 hl1

X l1



X

=

arkr θkr ,

r Aij11...i ...js a1i1

· · · arir b1j1 · · · bsjs

v1 ⊗ · · · ⊗ vs ⊗ A ⊗ η 1 ⊗ · · · ⊗ η r X  X  1l1 sls = b hl1 ⊗ · · · ⊗ b hls l1

⊗ ⊗

ls

X 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

X k1

a1k1 θ

r Aij11...i ...js hi1

k1

 ⊗ ··· ⊗

⊗ · · · ⊗ hir ⊗ θj1 ⊗ · · · ⊗ θjs X kr

arkr θ

kr



112

5 Tensors on Vector Spaces =

X 1≤i1 ,...,ir ,k1 ,...,kr ≤m 1≤j1 ,...,js ,l1 ,...,ls ≤m

r b1l1 · · · bsls Aij11...i ...js a1k1 · · · arkr

· (hl1 ⊗ · · · ⊗ hls ) ⊗ (hi1 ⊗ · · · ⊗ hir ⊗ θj1 ⊗ · · · ⊗ θjs )

⊗ (θk1 ⊗ · · · ⊗ θkr ).

To complete the proof, it is now a matter of defining a sequence of (k, l)contractions to apply to v1 ⊗ · · · ⊗ vs ⊗ A ⊗ η 1 ⊗ · · · ⊗ η r so that hi1 is paired with θk1 to give hi1 (θk1 ), then .. . hir is paired with θkr to give hir (θkr ), then θj1 is paired with hl1 to give θj1 (hl1 ), then .. . θjs is paired with hls to give θjs (hls ). Denoting the composition of these contractions by C, we obtain C(v1 ⊗ · · · ⊗ vs ⊗ A ⊗ η 1 ⊗ · · · ⊗ η r ) X r = b1j1 · · · bsjs Aij11...i ...js a1i1 · · · arir . 1≤i1 ,...,ir ≤m 1≤j1 ,...,js ≤m

The result now follows from (5.4.6) and (5.4.7).

(5.4.7)

Chapter 6

Tensors on Scalar Product Spaces 6.1

Contraction of Tensors

In Section 5.4, we discussed contraction of tensors over vector spaces. In the setting of scalar product spaces, there is more to say. Theorem 6.1.1 (Orthonormal Basis Expression for Ordinary Contraction). Let (V, g) be a scalar product space with signature (ε1 , . . . , εm ), let (e1 , . . . , em ) be an orthonormal basis for V , and let A be a tensor in T 1s (V ). Then: (a) For integers s ≥ 2 and 1 ≤ l ≤ s − 1, the tensor C1l (A) in T 0s−1 (V ) is given by X C1l (A)(v1 , . . . , vs−1 ) = εi hR−1 s (A)(v1 , . . . , vl−1 , ei , vl , . . . , vs−1 ), ei i i

for all vectors v1 , . . . , vs−1 in V , where Rs is given by (5.3.1). (b) For s = 1, the real number C11 (A) is given by X C11 (A) = εi hR−1 (A)(ei ), ei i, i

where R is given by (5.3.2). Proof. (a): By Theorem 5.4.5, the components of C1l (A) with respect to (e1 , . . . , em ) are X p C1l (A)(ej1 , . . . , ejs−1 ) = C1l (A)j1 ...js−1 = Aj1 ...jl−1 p jl ...js−1 . (6.1.1) p

Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

113

114

6 Tensors on Scalar Product Spaces

We have from Theorem 5.3.1(c) that X

R−1 s (A)(ej1 , . . . , ejl−1 , ep , ejl , . . . , ejs−1 ) =

q

Aqj1 ...jl−1 p jl ...js−1 eq ,

hence hR−1 s (A)(ej1 , . . . , ejl−1 , ep , ejl , . . . , ejs−1 ), ep i = =

X q

X q

Aqj1 ...jl−1 p jl ...js−1 eq , ep



Aqj1 ...jl−1 p jl ...js−1 heq , ep i

= εp Apj1 ...jl−1 p jl ...js−1 , so Apj1 ...jl−1 p jl ...js−1 = εp hR−1 s (A)(ej1 , . . . , ejl−1 , ep , ejl , . . . , ejs−1 ), ep i.

(6.1.2)

Then (6.1.1) and (6.1.2) yield X C1l (A)(ej1 , . . . , ejs−1 ) = εp hR−1 s (A)(ej1 , . . . , ejl−1 , ep , ejl , . . . , ejs−1 ), ep i. p

The result for arbitrary vectors v1 , . . . , vs−1 in V follows from the multilinearity of C1l (A) and R−1 s (A). (b): We have  X X X j −1 εi hR (A)(ei ), ei i = εi Ai ej , ei [Th 5.3.2(c)] i

i

=

X ij

j

εi hej , ei iAji =

X i

= C11 (A).

6.2

Aii [Th 5.4.2]

Flat Maps

Let (V, g) be a scalar product space, let H = (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Recall from (5.1.1) that T 10 (V ) = V and T 01 (V ) = V ∗ . The flat map of Section 3.3 can therefore be expressed as F : T 10 (V ) −→ T 01 (V ). More generally, let 1 ≤ k ≤ r and 1 ≤ l ≤ s + 1 be integers (so that r ≥ 1 and s ≥ 0). The (k, l)-flat map is denoted by Fkl : T rs (V ) −→ T r−1 s+1 (V ) and defined for all tensors A in T rs (V ), all covectors η 1 , . . . , η r−1 in V ∗ , and all vectors v1 , . . . , vs+1 in V as follows, where b indicates that an expression is omitted:

115

6.2 Flat Maps [F1] For r = 1 (so that k = 1): F1l (A)(v1 , . . . , vs+1 ) = A(vlF , v1 , . . . , vbl , . . . , vs+1 ). [F2] For r ≥ 2 and 1 ≤ k ≤ r − 1: Fkl (A)(η 1 , . . . , η r−1 , v1 , . . . , vs+1 ) = A(η 1 , . . . , η k−1 , vlF , η k , . . . , η r−1 , v1 , . . . , vbl , . . . , vs+1 ). [F3] For r ≥ 2 and k = r: Frl (A)(η 1 , . . . , η r−1 , v1 , . . . , vs+1 ) = A(η 1 , . . . , η r−1 , vlF , v1 , . . . , vbl , . . . , vs+1 ).

The computational procedure is most transparent for [F2]: extract the vector in the lth covariant position, flat it, and insert the result in the kth contravariant position. Example 6.2.1. To illustrate [F1], let r = 1 (so that k = 1) and s = 0 (so that l = 1), and let w be a vector in T 10 (V ) = V . Then the covector F11 (w) in T 01 (V ) = V ∗ is given by F11 (w)(v) = w(v F ) for all vectors v in V . We have from (1.2.3) and (3.3.1) that w(v F ) = v F (w) = hv, wi = hw, vi = wF (v) = F(w)(v), hence F11 (w)(v) = F(w)(v). Since v and w were arbitrary, F11 = F. To illustrate [F2], let r = 2 (so that k = 1) and s = 0 (so that l = 1), and let A be a tensor in T 20 (V ). Then the tensor F11 (A) in T 11 (V ) is given by F11 (A)(η, v) = A(v F , η). To illustrate [F3], let r = 2 (so that k = 2) and s = 0 (so that l = 1), and let A be a tensor in T 20 (V ). Then the tensor F21 (A) in T 11 (V ) is given by F21 (A)(η, v) = A(η, v F ).



Example 6.2.2. Let A be a tensor in T 11 (V ). For r = s = 1, there are flat maps for k = 1 and l = 1, 2. For k = 1 and l = 1, the tensor F11 (A) in T 02 (V ) is given by F11 (A)(v1 , v2 ) = A(v1F , v2 ). For k = 1 and l = 2, the tensor F12 (A) in T 02 (V ) is given by F12 (A)(v1 , v2 ) = A(v2F , v1 ).



116

6 Tensors on Scalar Product Spaces

Example 6.2.3. For a tensor A in T 22 (V ), the tensor F21 (A) in T 13 (V ) is given by F21 (A)(η 1 , v1 , v2 , v3 ) = A(η 1 , v1F , v2 , v3 ). P By Theorem 3.3.1(a), hFj = p gjp θp , so F21 (A) has the components F21 (A)ijkl = F21 (A)(θi , hj , hk , hl ) = A(θi , hFj , hk , hl )  X  X = A θi , gjp θp , hk , hl = gjp A(θi , θp , hk , hl ) p

=

X p

p

gjp Aip kl

with respect to H.



Theorem Let (V, g) be a scalar product space, let H be a basis for V , and  6.2.4.  let gH = gij . Let A be a tensor in T rs (V ), and let 1 ≤ k ≤ r and 1 ≤ l ≤ s + 1 be integers. Then the tensor Fkl (A) in T r−1 s+1 (V ) has the components i ...i

Fkl (A)j11 ...jr−1 = s+1

X p

i ...i

gjl p Aj1 ...jbk−1 ...j 1

l

p ik ...ir−1

s+1

with respect to H. Proof. The proof is an elaboration of Example 6.2.3. We consider only the case [F2]. The tensor Fkl (A) in T r−1 s+1 (V ) is given by Fkl (A)(η 1 , . . . , η r−1 , v1 , . . . , vs+1 ) = A(η 1 , . . . , η k−1 , vlF , η k , . . . , η r−1 , v1 , . . . , vbl , . . . , vs+1 ) for all covectors η 1 , . . . , η r−1 in V ∗ and all vectors v1 , . . . , vs+1 in V . Let H = (h1 , . . . , hm ), and let (θ1 , . . . , θm ) be its dual basis. By Theorem 3.3.1(a), hFj = P p k p gjp θ , so Fl (A) has the components i ...i

Fkl (A)j11 ...jr−1 s+1 = Fkl (A)(θi1 , . . . , θik , . . . , θir−1 , hj1 , . . . , hjl , . . . , hjs+1 ) = A(θi1 , . . . , θik−1 , hFjl , θik , . . . , θir−1 , hj1 , . . . , hjl−1 , hjl+1 , . . . , hjs+1 )   X i1 ik−1 p ik ir−1 = A θ ,...,θ , gjl p θ , θ , . . . , θ , hj1 , . . . , hjl−1 , hjl+1 , . . . , hjs+1 p

=

X

=

X

p

p

i1

gjl p A(θ , . . . , θik−1 , θp , θik , . . . , θir−1 , hj1 , . . . , hjl−1 , hjl+1 , . . . , hjs+1 ) i ...i

gjl p Aj1 ...jbk−1 ...j 1

with respect to H.

l

p ik ...ir−1

s+1

117

6.2 Flat Maps

Theorem 6.2.5.  Let (V, g) be a scalar product space, let H be a basis for V , and let gH = gij . Let A be a tensor in T rs (V ), and let r ≥ 2 and s ≥ 0 be integers. Then: (a) For integers 1 ≤ k < l ≤ r and 1 ≤ n ≤ s, the components of the tensor Ckn ◦ Fln (A) in T r−2 (V ) with respect to H are s X i ...i p ik ...il−2 q il−1 ...ir−2 i ...i Ckn ◦ Fln (A)j11 ...jr−2 = gpq Aj11 ...jk−1 . (6.2.1) s s pq

(b) For integers 1 ≤ k ≤ r and 1 ≤ n ≤ s, the components of the tensor Ckn ◦ Fkn (A) in T r−2 (V ) with respect to H are s i ...i

i ...i

1 r−2 Ckn ◦ Fkn (A)j11 ...jr−2 = Ckn ◦ Fk+1 n (A)j1 ...js s X i ...i p q ik ...ir−2 = gpq Aj11 ...jk−1 . s

(6.2.2)

pq

Proof. By Theorem 6.2.4, the components of the tensor Fln (A) in T r−1 s+1 (V ) with respect to H are X i ...il−1 q il ...ir−1 i ...i Fln (A)j11 ...jr−1 = gjn q A 1 c . (6.2.3) s+1 j1 ...jn ...js+1

q

(a): We have i ...ib ...i

r−1 k Ckn ◦ Fln (A) 1 c j1 ...jn ...js+1 X i ...i p ik+1 ...ir−1 = Fln (A)j11 ...jk−1 n−1 p jn+1 ...js+1

[Th 5.4.5]

p

= =

X X

i ...ik−1 p ik+1 ...il−1 q il ...ir−1 j1 ...jc n ...js+1

p

q

X

gpq A 1

pq



gpq A 1

[(6.2.3)]

i ...ik−1 p ik+1 ...il−1 q il ...ir−1 j1 ...jc n ...js+1 .

Relabeling indices (ik+1 , . . . , il−1 , q, il , . . . , ir−1 ) as (ik , . . . , il−2 , q, il−1 , . . . , ir−2 ), and (jn+1 , . . . , js+1 ) as (jn , . . . , js ), gives the result. (b): We have i ...ibk ...ir−1 j1 ...jc n ...js+1

Ckn ◦ Fkn (A) 1

=

X

i ...i

pi

...i

r−1 k+1 Fkn (A)j11 ...jk−1 n−1 p jn+1 ...js+1

[Th 5.4.5]

p

=

XX

i ...ik−1 q p ik+1 ...ir−1 gpq A 1 c j1 ...jn ...js+1

p

q

=

X

gqp A 1

=

X

pq

pq

i ...ik−1 q p ik+1 ...ir−1 j1 ...jc n ...js+1 i ...ik−1 p q ik+1 ...ir−1 j1 ...jc n ...js+1

gpq A 1

 [(6.2.3)]

118

6 Tensors on Scalar Product Spaces

and i ...ibk ...ir−1 j1 ...jc n ...js+1

1 Ckn ◦ Fk+1 n (A)

=

X

i ...i

pi

...i

1 r−1 k−1 k+1 Fk+1 n (A)j1 ...jn−1 p jn+1 ...js+1

[Th 5.4.5]

p

= =

X X

i ...ik−1 p q ik+1 ...ir−1 gpq A 1 c j1 ...jn ...js+1

p

q

X

gpq A 1

pq

 [(6.2.3)]

i ...ik−1 p q ik+1 ...ir−1 , j1 ...jc n ...js+1

which gives the first equality. Relabeling indices (ik+1 , . . . , ir−1 ) as (ik , . . . , ir−2 ), and (jn+1 , . . . , js+1 ) as (jn , . . . , js ), gives the second equality. We observe that p and q in (6.2.1) are in the kth and lth contravariant positions of A, respectively, and that p and q in (6.2.2) are in the kth and (k + 1)th contravariant positions of A, respectively. We also note that the righthand side of (6.2.1) is independent of n. This makes sense because Fln lowers the lth contravariant index to the nth covariant position, and then Ckn contracts over the kth contravariant index and the nth covariant index. In a manner of speaking, the nth covariant position is simply a temporary location for the lowered lth contravariant index to reside prior to its being contracted with the kth contravariant index. Theorem 6.2.6. Let (V, g) be a scalar product space, let A be a tensor in T rs (V ), and let 1 ≤ k ≤ r be an integer. Then Fk1 (A) = Ck1 (g ⊗ A). Proof. All the components to follow are computed with respect to a basis H for V . By Theorem 6.2.4, the tensor Fk1 (A) in T r−1 s+1 (V ) has the components X i ...i p ik ...ir−1 i ...i Fk1 (A)j11 ...jr−1 = gj1 p Aj12 ...jk−1 . (6.2.4) s+1 s+1 p

It follows from Theorem 5.1.5 that the tensor g ⊗ A in T rs+2 (V ) has the components i1 ...ir r (g ⊗ A)ij11...i ...js+2 = gj1 j2 Aj3 ...js+1 , and then from Theorem 5.4.5 that the tensor Ck1 (g ⊗ A) in T r−1 s+1 (V ) has the components X i ...i p ik+1 ...ir ibk ...ir Ck1 (g ⊗ A)ij12... gpj2 Aj13 ...jk−1 . ...js+2 = s+2 p

Relabeling (ik+1 , . . . , ir ) as (ik , . . . , ir−1 ), and (j2 , . . . , js+2 ) as (j1 , . . . , js+1 ), gives X i ...i p ik ...ir−1 i ...i Ck1 (g ⊗ A)j11 ...jr−1 = gpj1 Aj12 ...jk−1 . (6.2.5) s+1 s+1 p

The result now follows from (6.2.4) and (6.2.5).

119

6.3 Sharp Maps

6.3

Sharp Maps

Let (V, g) be a scalar product space, let H = (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Recall from (5.1.1) that T 10 (V ) = V and T 01 (V ) = V ∗ . The sharp map in Section 3.3 can therefore be expressed as S : T 01 (V ) −→ T 10 (V ). More generally, let 1 ≤ k ≤ r + 1 and 1 ≤ l ≤ s be integers (so that r ≥ 0 and s ≥ 1). The (k, l)-sharp map is denoted by Skl : T rs (V ) −→ T r+1 s−1 (V ) and defined for all tensors A in T rs (V ), all covectors η 1 , . . . , η r+1 in V ∗ , and all vectors v1 , . . . , vs−1 in V as follows, where b indicates that an expression is omitted: [S1] For s = 1 (so that l = 1): Sk1 (A)(η 1 , . . . , η r+1 ) = A(η 1 , . . . , ηck , . . . , η r+1 , η kS ). [S2] For s ≥ 2 and 1 ≤ l ≤ s − 1: Skl (A)(η 1 , . . . , η r+1 , v1 , . . . vs−1 ) = A(η 1 , . . . , ηck , . . . , η r+1 , v1 , . . . , vl−1 , η kS , vl , . . . , vs−1 ). [S3] For s ≥ 2 and l = s: Sks (A)(η 1 , . . . , η r+1 , v1 , . . . , vs−1 ) = A(η 1 , . . . , ηck , . . . , η r+1 , v1 , . . . , vs−1 , η kS ). The computational procedure is most transparent for [S2]: extract the covector in the kth contravariant position, sharp it, and insert the result in the lth covariant position. Example 6.3.1. To illustrate [S1], let r = 0 (so that k = 1) and s = 1 (so that l = 1), and let ζ be a covector in T 01 (V ) = V ∗ . Then the tensor S11 (ζ) in T 10 (V ) = V is given by S11 (ζ)(η) = ζ(η S ) for all covectors η in V ∗ . We have from (1.2.3) and (4.5.2) that ζ(η S ) = hζ, ηi∗ = hη, ζi∗ = η(ζ S ) = ζ S (η) = S(ζ)(η), hence S11 (ζ)(η) = S(ζ)(η). Since η and ζ were arbitrary, S11 = S. To illustrate [S2], let r = 0 (so that k = 1) and s = 2 (so that l = 1), and let A be a tensor in T 02 (V ). Then the tensor S11 (A) in T 11 (V ) is given by S11 (A)(η, v) = A(η S , v).

120

6 Tensors on Scalar Product Spaces

To illustrate [S3], let r = 0 (so that k = 1) and s = 2 (so that l = 2), and let A be a tensor in T 02 (V ). Then the tensor S12 (A) in T 11 (V ) is given by S12 (A)(η, v) = A(v, η S ).



Example 6.3.2. Let A be a tensor in T 11 (V ). For r = s = 1, there are sharp maps for k = 1, 2 and l = 1. For k = 1 and l = 1, the tensor S11 (A) in T 20 (V ) is given by S11 (A)(η 1 , η 2 ) = A(η 2 , η 1S ). For k = 2 and l = 1, the tensor S21 (A) in T 20 (V ) is given by S21 (A)(η 1 , η 2 ) = A(η 1 , η 2S ).



Example 6.3.3. For a tensor B in T 13 (V ), the tensor S21 (B) in T 22 (V ) is given by S21 (B)(η 1 , η 2 , v1 , v2 ) = B(η 1 , η 2S , v1 , v2 ). P By Theorem 4.5.3(a), θjS = p gjp hp , so S21 (B) has the components 2 i j i jS S21 (B)ij kl = S1 (B)(θ , θ , hk , hl ) = B(θ , θ , hk , hl )  X  X = B θi , gjp hp , hk , hl = gjp B(θi , hp , hk , hl ) p

=

X p

p

(6.3.1)

gjp B ipkl

with respect to H.



Theorem 6.3.4. space, let H be a basis for V ,  ij Let  (V, g) be a scalar product r g and let g−1 = . Let A be a tensor in T (V ), and let 1 ≤ k ≤ r + 1 and s H k 1 ≤ l ≤ s be integers. Then the tensor Sl (A) in T r+1 s−1 (V ) has the components i ...i

Skl (A)j11 ...jr+1 = s−1

X p

i ...ib ...i

gik p Aj11 ...jkl−1 pr+1 jl ...js−1

with respect to H. Proof. The proof is an elaboration of Example 6.3.3. We consider only the case [S2]. The tensor Skl (A) in T r+1 s−1 (V ) is given by Skl (A)(η 1 , . . . , η r+1 , v1 , . . . , vs−1 ) = A(η 1 , . . . , ηck , . . . , η r+1 , v1 , . . . , vl−1 , η kS , vl+1 , . . . , vs−1 ) for all covectors η 1 , . . . , η r−1 in V ∗ and all vectors v1 , . . . , vs+1 in V . Let H = (h1 , . . . , hm ), and let (θ1 , . . . , θm ) be its dual basis. By Theorem 4.5.3(a), θiS =

121

6.3 Sharp Maps P

p

gip hp , so Skl (A) has the components i ...i

Skl (A)j11 ...jr+1 s−1 = Skl (A)(θi1 , . . . , θik , . . . , θir+1 , hj1 , . . . , hjl , . . . , hjs−1 ) = A(θi1 , . . . , θik−1 , θik+1 , . . . , θir+1 , hj1 , . . . , hjl−1 , θik S , hjl , . . . , hjs−1 )   X = A θi1 , . . . , θik−1 , θik+1 , . . . , θir+1 , hj1 , . . . , hjl−1 , gik p hp , hjl , . . . , hjs−1 p

=

X

ik p

g

p

=

X p

i1

A(θ , . . . , θ

ik−1



ik+1

,...,θ

ir+1

, hj1 , . . . , hjl−1 , hp , hjl , . . . , hjs−1 )

i...ib ...i

gik p Aj1 ...jk l−1r+1 p jl ...js−1

with respect to H. Theorem 6.3.5. space, let H be a basis for V ,  ij Let (V, g) be a scalar product r g and let g−1 = . Let A be a tensor in T (V ), and let r ≥ 0 and s ≥ 2 be s H integers. Then: (a) For integers 1 ≤ k < l ≤ s and 1 ≤ n ≤ r, the components of the tensor Cnk ◦ Snl (A) in T rs−2 (V ) with respect to H are X r r Cnk ◦ Snl (A)ij11...i gpq Aij11...i (6.3.2) ...js−2 = ...jk−1 p jk ...jl−2 q jl−1 ...js−2 . pq

(b) For integers 1 ≤ k ≤ s and 1 ≤ n ≤ r, the components of the tensor Cnk ◦ Snl (A) in T rs−2 (V ) with respect to H are i1 ...ir n n r Cnk ◦ Snk (A)ij11...i ...js−2 = Ck ◦ Sk+1 (A)j1 ...js−2 X r = gpq Aij11...i ...jk−1 p q jk ...js−2 .

(6.3.3)

pq

Proof. By Theorem 6.3.4, the components of the tensor Snl (A) in T r+1 s−1 (V ) with respect to H are X i ...i i ...ib ...i Snl (A)j11 ...jr+1 = gin q Aj11 ...jnl−1 qr+1 (6.3.4) jl ...js−1 . s−1 q

(a): We have i ...ib ...i

Cnk ◦ Snl (A)j1 ...jbn...j r+1 = 1

l

s−1

=

X

pi

...i

[Th 5.4.5]

p

X X p

=

i ...i

n+1 r+1 Snl (A)j11 ...jn−1 k−1 p jk+1 ...js−1

X pq

q

i ...ib ...i

gpq Aj11 ...jnk−1 r+1 p jk+1 ...jl−1 q jl ...js−1 i ...ib ...i

gpq Aj11 ...jnk−1 r+1 p jk+1 ...jl−1 q jl ...js−1 .

 [(6.3.4)]

122

6 Tensors on Scalar Product Spaces

Relabeling indices (in+1 , . . . , ir+1 ) as (in , . . . , ir ), and (jk+1 , . . . , jl−1 , q, jl , . . . , js−1 ) as (jk , . . . , jl−2 , q, jl−1 , . . . , js−2 ), gives the result. (b): We have X i ...ib ...i i ...i p in+1 ...ir+1 Cnk ◦ Snk (A)j1 ...jbn ...jr+1 = Snk (A)j11 ...jn−1 [Th 5.4.5] k−1 p jk+1 ...js−1 1

s−1

k

p

=

XX p

q

=

X

=

X

pq

pq

i ...ib ...i

gpq Aj11 ...jnk−1 r+1 q p jk+1 ...js−1

 [(6.3.4)]

i ...ib ...i

gqp Aj11 ...jnk−1 r+1 q p jk+1 ...js−1 i ...ib ...i

gpq Aj11 ...jnk−1 r+1 p q jk+1 ...js−1

and i ...ib ...i

Cnk ◦ Snk+1 (A)j1 ...jbn ...jr+1 = 1

k

X

s−1

=

i ...i

...i

[Th 5.4.5]

p

XX p

=

pi

n+1 r+1 Snk+1 (A)j11 ...jn−1 k−1 p jk+1 ...js−1

X pq

q

i ...ib ...i

gpq Aj11 ...jnk−1 r+1 p q jk+1 ...js−1

 [(6.3.4)]

i ...ib ...i

gpq Aj11 ...jnk−1 r+1 p q jk+1 ...js−1 ,

which gives the first equality. Relabeling indices (in+1 , . . . , ir+1 ) as (in , . . . , ir ), and (jk+1 , . . . , js−1 ) as (jk , . . . , js−2 ), gives the second equality. We observe that p and q in (6.3.2) are in the kth and lth covariant positions of A, respectively, and that p and q in (6.3.3) are in the kth and (k + 1)th covariant positions of A, respectively. We also note that the right hand side of (6.3.2) is independent of n. See the corresponding remarks following Theorem 6.2.5. Example 6.3.6. For a tensor A in T 22 (V ), we have from Example 6.2.3 that X F21 (A)ipkl = gpq Aiq kl . q

Substituting B = F21 (A), as given by the above expression, into (6.3.1) yields X  X X iq jp 2 i jp S21 ◦ F21 (A)ij = g F (A) = g g A pq kl 1 pkl kl p

=

p

XX q

p



gjp gpq Aiq kl =

q

X q

ij δqj Aiq kl = Akl .

Thus, S21 ◦ F21 (A) = A. Likewise, F21 ◦ S21 (A) = A. Since A was arbitrary, it follows from Theorem A.4 that F21 and S21 are inverses of each other, and are therefore linear isomorphisms. ♦

123

6.4 Representation of Tensors

Theorem 6.3.7. If (V, g) is a scalar product space, and 1 ≤ k ≤ r and 1 ≤ l ≤ s are integers, then Fkl and Skl are linear isomorphisms that are inverses. Proof. The proof is an elaboration of Example 6.3.6. Let H be a basis for  V, gij . let A and B be tensors in T rs (V ) and T r−1 (V ), respectively, and let g = H s+1 r−1 k By Theorem 6.2.4, Fl (A) is a tensor in T s+1 (V ) that has the components X i ...i q ik ...ir−1 i ...i Fkl (A)j11 ...jr−1 = gjl q Aj1 ...jbk−1 s+1 ...j 1

q

s+1

l

with respect to H. Relabeling indices (ik , . . . , ir−1 ) as (ik+1 , . . . , ir ), and (jl , . . . , js+1 ) as (p, jl , . . . , js ), yields X i ...i q ik+1 ...ir ibk ...ir Fkl (A)ij11... gpq Aj11 ...jk−1 . (6.3.5) ...jl−1 p jl ...js = s q

By Theorem 6.3.4, the tensor Skl (B) in T rs (V ) has the components X ibk ...ir r Skl (B)ij11...i gik p Bji11 ... ...js = ...jl−1 p jl ...js

(6.3.6)

p

with respect to H. Substituting B = Fkl (A), as given by (6.3.5), into (6.3.6) gives X ibk ...ir r Skl ◦ Fkl (A)ij11...i gik p Fkl (A)ij11... ...js = ...jl−1 p jl ...js p

=

X

gik p

X

p

=

q

XX q

=

X q

ik p

g

p i ...i

i ...i

gpq Aj11 ...jk−1 s 

gpq Aj11 ...jk−1 s

δqik Aj11 ...jk−1 s

i ...i

= Aj11 ...jk−1 s

i ...i

q ik+1 ...ir



q ik+1 ...ir

q ik+1 ...ir

ik ik+1 ...ir

r = Aij11...i ...js .

The rest of the proof uses the argument presented in Example 6.3.6.

6.4

Representation of Tensors

Section 5.3 was devoted to the topic of representation of tensors over vector spaces. We now extend that discussion to tensors over scalar product spaces. Let (V, g) be a scalar product space, let H = (h  1 , . . . , hm ) be a basis for V , let (θ1 , . . . θm ) be its dual basis, and let gH = gij . Following Section B.5 and Section B.6, we denote by Mult(V s , V ) the vector space of R-multilinear maps from V s to V . Let us define a map Ss : Mult(V s , V ) −→ T 0s+1 (V ),

124

6 Tensors on Scalar Product Spaces

called the scalar product map, by Ss (Ψ)(v1 , . . . , vs+1 ) = hv1 , Ψ(v2 , . . . , vs+1 )i

(6.4.1)

for all maps Ψ in Mult(V s , V ) and all vectors v1 , . . . , vs+1 in V . When s = 1, we denote S1 by S. Theorem 6.4.1 (Representation of Tensors). With the above setup: (a) Ss is a linear isomorphism: T 0s+1 (V ) ≈ Mult(V s , V ). (b) For all maps Ψ in Mult(V s , V ), the components of the tensor Ss (Ψ) in T 0s+1 (V ) with respect to H are Ss (Ψ)j1 ...js+1 =

X

 gij1 θi Ψ(hj2 , . . . , hjs+1 ) .

i

(c) For all tensors A in T 0s+1 (V ), S−1 s (A)(hj1 , . . . , hjs ) =

X

S11 (A)ij1 ...js hi ,

i

where S11 is defined in Section 6.3. (d) Rs = S11 ◦ Ss , where Rs is given by (5.3.1). Proof. (a): It is easily shown that Ss is a linear map. If Ss (Ψ) = 0, then hΨ(v2 , . . . , vs+1 ), v1 i = 0 for all vectors v1 , . . . , vs+1 in V . Since g is nondegenerate, we have Ψ(v2 , . . . , vs+1 ) = 0, and because v2 , . . . , vs+1 were arbitrary, Ψ = 0. Thus, ker(Ss ) = {0}. By Theorem 5.1.3(c), T 0s+1 (V ) has dimension ms+1 , and it can be shown that Mult(V s , V ) has the same dimension. It follows from Theorem 1.1.12 and Theorem 1.1.14 that Ss is a linear isomorphism. (b): Since Ψ(hj2 , . . . , hjs+1 ) is a vector in V , we have from Theorem 1.2.1(d) that X  Ψ(hj2 , . . . , hjs+1 ) = θi Ψ(hj2 , . . . , hjs+1 ) hi , i

and then from (6.4.1) that Ss (Ψ)j1 ...js+1 = Ss (Ψ)(hj1 , . . . , hjs+1 ) = hhj1 , Ψ(hj2 , . . . , hjs+1 )i   X  = hj1 , θi Ψ(hj2 , . . . , hjs+1 ) hi i

=

X i

 gij1 θi Ψ(hj2 , . . . , hjs+1 ) .

125

6.4 Representation of Tensors

s −1 (c): Since S−1 s (A) is a map in Mult(V , V ), Ss (A)(hj2 , . . . , hjs+1 ) is a vector in V . We have from Theorem 1.2.1(d) that

X

S−1 s (A)(hj2 , . . . , hjs+1 ) =

 θi S−1 s (A)(hj2 , . . . , hjs+1 ) hi .

(6.4.2)

i

Then Aj1 ...js+1 = A(hj1 , . . . , hjs+1 )  = Ss S−1 s (A) (hj1 , . . . , hjs+1 ) =

 =

[part(a)]

hhj1 , S−1 s (A)(hj2 , . . . , hjs+1 )i hj1 ,

X

[(6.4.1)]

θi S−1 s (A)(hj2 , . . . , hjs+1 ) hi 

 [(6.4.2)]

i

=

X

 gij1 θi S−1 s (A)(hj2 , . . . , hjs+1 ) ,

i

hence X

ij1

g

j1

Aj1 ...js+1 = =

X

g

X

j1

X

gkj1 θ

k

S−1 s (A)(hj2 , . . . , hjs+1 )





k

X X k

=

ij1

ij1

g

  gj1 k θk S−1 s (A)(hj2 , . . . , hjs+1 )

j1

δki θk S−1 s (A)(hj2 , . . . , hjs+1 )



k

 = θi S−1 s (A)(hj2 , . . . , hjs+1 ) , so X ij1

gij1 Aj1 ...js+1 hi =

X

 θi S−1 s (A)(hj2 , . . . , hjs+1 ) hi .

(6.4.3)

i

By Theorem 6.3.4, S11 (A)ij2 ...js+1 =

X j1

gij1 Aj1 ...js+1 ,

hence X

S11 (A)ij2 ...js+1 hi =

i

X ij1

gij1 Aj1 ...js+1 hi .

Combining (6.4.2)–(6.4.4) yields S−1 s (A)(hj2 , . . . , hjs+1 ) =

X

S11 (A)ij2 ...js+1 hi ,

i

and relabeling indices (j2 , . . . , js+1 ) as (j1 , . . . , js ) gives the result.

(6.4.4)

126

6 Tensors on Scalar Product Spaces

(d): Let Ψ be a multilinear function in Mult(V s , V ), let η be a covector in V , and let v1 , . . . , vs be vectors in V . Then  Rs (Ψ)(η, v1 , . . . , vs ) = η Ψ(v1 , . . . , vs ) [(5.3.1)]  S F = (η ) Ψ(v1 , . . . , vs ) [(3.3.5)] ∗

= hη S , Ψ(v1 , . . . , vs )i S

= Ss (Ψ)(η , v1 , . . . , vs )  = S11 Ss (Ψ) (η, v1 , . . . , vs ).

[(3.3.1)] [(6.4.1)] [[S2] in §6.3]

Since Ψ, η, and v1 , . . . , vs were arbitrary, the result follows. Theorem 6.4.2. With the above setup: T 0s+1 (V ) ≈ Mult(V s , V ) ≈ T 1s (V ). Proof. This follows from Theorem 5.3.1(a) and Theorem 6.4.1(a). It is instructive to specialize Theorem 6.4.1 to the case s = 1. We then have Mult(V, V ) = Lin(V, V ) and S : Lin(V, V ) −→ T 02 (V ) defined by S(B)(v, w) = hv, B(w)i

(6.4.5)

for all maps B in Lin(V, V ) and all vectors v, w in V . For example, S(idV )(v, w) = hv, wi = g(v, w), so S(idV ) = g. Theorem 6.4.3. With the above setup: (a) S is a linear isomorphism: T 02 (V ) ≈ Lin(V, V ). (b) For all maps B in Lin(V, V ), the components of the tensor S(B) in T 02 (V ) with respect to H are X  S(B)ij = gik θk B(hj ) . k

(c) For all tensors A in T 02 (V ),  −1 H    S (A) H = gij Aij ,   where gij = g−1 H and the Aij are the components of A with respect to H.

127

6.5 Metric Contraction of Tensors (d) R = S11 ◦ S, where R is given by (5.3.2).

Proof. (a), (b), (d): These follow from the corresponding parts of Theorem 6.4.1. (c): By Theorem 6.4.1(c), X S11 (A)ij hi , S−1 (A)(hj ) = i

so that (2.2.2) and (2.2.3) give  −1 H   S (A) H = S11 (A)ij . From Theorem 6.3.4, S11 (A)ij =

X p

gip Apj ,

hence  1     S1 (A)ij = gij Aij . The result follows.

6.5

Metric Contraction of Tensors

For tensors over a vector space, we are restricted to contracting over one contravariant index and one covariant index. However, for tensors over a scalar product space, the flat map and sharp map make it possible to contract over two contravariant or two covariant indices, yielding what are referred to as metric contractions. Let (V, g) be a scalar product space. The following definitions are motivated by Theorem 6.2.5 and Theorem 6.3.5. For integers r ≥ 2, s ≥ 0, and 1 ≤ k < l ≤ r, the (k, l)-contravariant metric contraction is the linear map Ckl : T rs (V ) −→ T r−2 (V ) s defined by Ckl = Ck1 ◦ Fl1 .

(6.5.1)

We have from Theorem 6.2.5(b) that Ck,k+1 = Ck1 ◦ Fk1 .

(6.5.2)

For integers r ≥ 0, s ≥ 2, and 1 ≤ k < l ≤ s, the (k, l)-covariant metric contraction is the linear map Ckl : T rs (V ) −→ T rs−2 (V )

128

6 Tensors on Scalar Product Spaces

defined by Ckl = C1k ◦ S1l .

(6.5.3)

We have from Theorem 6.3.5(b) that Ck,k+1 = C1k ◦ S1k .

(6.5.4)

In order to avoid confusion between metric contractions and the contractions defined in Section 5.4, we recall that the latter are sometimes referred to as ordinary contractions. Theorem 6.5.1 (Orthonormal Basis Expression for Metric Contraction). Let (V, g) be a scalar product space with signature (ε1 , . . . , εm ), let (e1 , . . . , em ) be an orthonormal basis for V , and let A be a tensor in T 0s (V ). Then: (a) For integers s ≥ 3 and 1 ≤ k < l ≤ s, the tensor Ckl (A) in T 0s−2 (V ) is given by Ckl (A)(v1 , . . . , vs−2 ) X = εi A(v1 , . . . , vk−1 , ei , vk , . . . , vl−2 , ei , vl−1 , . . . , vs−2 )

(6.5.5)

i

for all vectors v1 , . . . , vs−2 in V . (b) For s = 2, the real number C12 (A) is given by C12 (A) =

X i

εi A(ei , ei ).

  Proof. (a): Let E = (e1 , . . . , em ) and gE = gij , and observe that gij = gij = εi δij . We have from Theorem 6.3.5(a) and (6.5.3) that Ckl (A) has the components Ckl (A)j1 ...js−2 =

X

=

X

pq

p

=

X p

gpq Aj1 ...jk−1 p jk ...jl−2 q jl−1 ...js−2 εp Aj1 ...jk−1 p jk ...jl−2 p jl−1 ...js−2 εp A(ej1 , . . . , ejk−1 , ep , ejk . . . , ejl−2 , ep , ejl−1 , . . . , ejs−2 )

with respect to H. The result for arbitrary vectors v1 , . . . , vs−2 in V follows from the multilinearity of A and Ckl (A).

6.6 Symmetries of (0, 4)-Tensors

129

(b): We have C12 (A) = C11 ◦ S11 (A) X

= εi R−1 ◦ S11 (A)(ei ), ei

[(6.5.4)] [Th 6.1.1(b)]

i

=

X i

=

X

εi hei , S−1 (A)(ei )i

[Th 6.4.3(d)]

 εi S S−1 (A) (ei , ei )

[(6.4.5)]

i

=

X i

εi A(ei , ei ).

Observe that in (6.5.5) the ei appear in the kth and lth positions. We saw in Theorem 5.4.1 that “contraction” and “trace” are related. Here is another manifestation of that same phenomenon. Theorem 6.5.2. If (V, g) is a scalar product space, then C12 = tr ◦ S−1 , where tr is trace and S is given by (6.4.5). Proof. We have C12 = C11 ◦ S11

[(6.5.4)]

−1

= (tr ◦ R

−1

= tr ◦ S

6.6

.

−1

) ◦ (R ◦ S

)

[Th 5.4.1, Th 6.4.3(d)]

Symmetries of (0, 4)-Tensors

Let V be a vector space, and let A be a tensor in T 04 (V ). Consider the following symmetries that A might satisfy for all vectors v1 , v2 , v3 , v4 in V : [S1] A(v1 , v2 , v3 , v4 ) = −A(v2 , v1 , v3 , v4 ). [S2] A(v1 , v2 , v3 , v4 ) = −A(v1 , v2 , v4 , v3 ). [S3] A(v1 , v2 , v3 , v4 ) = A(v3 , v4 , v1 , v2 ). [S4] A(v1 , v2 , v3 , v4 ) + A(v2 , v3 , v1 , v4 ) + A(v3 , v1 , v2 , v4 ) = 0. Observe that the left-hand side of [S4] is obtained by cyclically permuting v1 , v2 , v3 while leaving v4 in place. It follows from [S1] that A(v1 , v1 , v2 , v3 ) = 0,

(6.6.1)

A(v1 , v2 , v3 , v3 ) = 0

(6.6.2)

and from [S2] that for all vectors v1 , v2 , v3 in V . The next result shows that [S1]–[S4] are not independent.

130

6 Tensors on Scalar Product Spaces

Theorem 6.6.1. Let V be a vector space, and let A be a tensor in T 04 (V ). If A satisfies [S1], [S2], and [S4], then it satisfies [S3]. Proof. Using [S4] four times gives A(v3 , v1 , v4 , v2 ) + A(v1 , v4 , v3 , v2 ) + A(v4 , v3 , v1 , v2 ) = 0 A(v1 , v4 , v2 , v3 ) + A(v4 , v2 , v1 , v3 ) + A(v2 , v1 , v4 , v3 ) = 0 A(v4 , v2 , v3 , v1 ) + A(v2 , v3 , v4 , v1 ) + A(v3 , v4 , v2 , v1 ) = 0

(6.6.3)

A(v2 , v3 , v1 , v4 ) + A(v3 , v1 , v2 , v4 ) + A(v1 , v2 , v3 , v4 ) = 0. Applying [S1] and [S2] to the second and third columns of (6.6.3) gives A(v3 , v1 , v4 , v2 ) − A(v1 , v4 , v2 , v3 ) − A(v3 , v4 , v1 , v2 ) = 0 A(v1 , v4 , v2 , v3 ) − A(v4 , v2 , v3 , v1 ) + A(v1 , v2 , v3 , v4 ) = 0 A(v4 , v2 , v3 , v1 ) − A(v2 , v3 , v1 , v4 ) − A(v3 , v4 , v1 , v2 ) = 0

(6.6.4)

A(v2 , v3 , v1 , v4 ) − A(v3 , v1 , v4 , v2 ) + A(v1 , v2 , v3 , v4 ) = 0. Summing both sides of (6.6.4) yields 2A(v1 , v2 , v3 , v4 ) − 2A(v3 , v4 , v1 , v2 ) = 0, from which the result follows. Let V be a vector space, and let S(V ) be the set of tensors in T 04 (V ) that satisfy [S1] and [S2]. It is easily shown that S(V ) is a subspace of T 04 (V ). Let A be a tensor in T 04 (V ), and consider the function A defined by A(v1 , v2 ) = A(v1 , v2 , v2 , v1 ) for all vectors v1 , v2 in V . Observe that if A is in S(V ), then A(v1 , v2 ) = A(v1 , v2 , v2 , v1 ) = A(v2 , v1 , v1 , v2 ) = A(v2 , v1 ), so A is symmetric. Theorem 6.6.2. Let V be a 2-dimensional vector space, and let A be a tensor in S(V ). Then the following are equivalent: (a) A is the zero tensor. (b) A(h1 , h2 ) = 0 for some basis (h1 , h2 ) for V . (c) A(h1 , h2 ) = 0 for every basis (h1 , h2 ) for V . Proof. Let H = (h1 , h2 ) be a basis for V . For the moment, we make no further assumptions about A and H. Let v1 , v2 , v3 , v4 be vectors in V , with  1  1  1  1         a b c d v1 H = 2 v2 H = 2 v3 H = 2 v4 H = 2 . a b c d

6.6 Symmetries of (0, 4)-Tensors

131

It follows from [S1] and (6.6.1) that A(v1 , v2 , v3 , v4 ) = A(a1 h1 + a2 h2 , b1 h1 + b2 h2 , v3 , v4 )

= a1 b1 A(h1 , h1 , v3 , v4 ) + a1 b2 A(h1 , h2 , v3 , v4 )

+ a2 b1 A(h2 , h1 , v3 , v4 ) + a2 b2 A(h2 , h2 , v3 , v4 )

= (a1 b2 − a2 b1 )A(h1 , h2 , v3 , v4 ), and from [S2] and (6.6.2) that

A(h1 , h2 , v3 , v4 ) = A(h1 , h2 , c1 h1 + c2 h2 , d1 h1 + d2 h2 )

= c1 d1 A(h1 , h2 , h1 , h1 ) + c1 d2 A(h1 , h2 , h1 , h2 )

+ c2 d1 A(h1 , h2 , h2 , h1 ) + c2 d2 A(h1 , h2 , h2 , h2 )

= −(c1 d2 − c2 d1 )A(h1 , h2 , h2 , h1 ) = −(c1 d2 − c2 d1 ) A(h1 , h2 ). Thus,

A(v1 , v2 , v3 , v4 ) = −(a1 b2 − a2 b1 )(c1 d2 − c2 d1 ) A(h1 , h2 ).

(6.6.5)

(a) ⇒ (b): Since A is the zero tensor, A(h1 , h2 ) = A(h1 , h2 , h2 , h1 ) = 0. (b) ⇒ (a): Since A(h1 , h2 ) = 0, it follows from (6.6.5) that A(v1 , v2 , v3 , v4 ) = 0. Since v1 , v2 , v3 , v4 were arbitrary, A is the zero tensor. (b) ⇔ (c): Let (f1 , f2 ) be another basis for V . Setting v1 = v4 = f1 and v2 = v3 = f2 , we find that ai = di and bi = ci for i = 1, 2. Then (6.6.5) gives A(f1 , f2 ) = (a1 b2 − a2 b1 )2 A(h1 , h2 ). From (2.2.6), (2.2.7), and Theorem 2.4.12,  1  H  a 0 6= det idV F = det a2

b1 b2



(6.6.6)

= a1 b2 − a2 b1 .

The result follows. Theorem 6.6.3. Let (V, g) be a 2-dimensional scalar product space, and let D be the function defined by   hv1 , v3 i hv1 , v4 i D(v1 , v2 , v3 , v4 ) = det (6.6.7) hv2 , v3 i hv2 , v4 i for all vectors v1 , v2 , v3 , v4 in V . Then: (a) D is a nonzero tensor in S(V ), and D is given by   hv1 , v1 i hv1 , v2 i D(v1 , v2 ) = −det hv2 , v1 i hv2 , v2 i for all vectors v1 , v2 in V .

132

6 Tensors on Scalar Product Spaces

(b) S(V ) is a 1-dimensional subspace of T 04 (V ) that is spanned by D. Proof. (a): That D is a tensor in S(V ) follows from the properties of determinants, or perhaps more easily from the observation that D(v1 , v2 , v3 , v4 ) = hv1 , v3 ihv2 , v4 i − hv2 , v3 ihv1 , v4 i. Let (e1 , e2 ) be an orthonormal basis for V . Then D(e1 , e2 , e2 , e1 ) = −he1 , e1 ihe2 , e2 i = 6 0, hence D is nonzero. (b): Let A be a tensor in S(V ). We have from (6.6.5) that A(v1 , v2 , v3 , v4 ) = −(a1 b2 − a2 b1 )(c1 d2 − c2 d1 ) A(h1 , h2 ) and D(v1 , v2 , v3 , v4 ) = −(a1 b2 − a2 b1 )(c1 d2 − c2 d1 ) D(h1 , h2 ), hence A(v1 , v2 , v3 , v4 ) =

A(h1 , h2 ) D(v1 , v2 , v3 , v4 ), D(h1 , h2 )

where Theorem 6.6.2 and part (a) ensure that the denominator is nonzero. Since v1 , v2 , v3 , v4 were arbitrary, A=

A(h1 , h2 ) D, D(h1 , h2 )

so D spans S(V ). Theorem 6.6.4. Let (V, g) be a 2-dimensional scalar product space, let A be a tensor in S(V ), and let (h1 , h2 ) and (f1 , f2 ) be bases for V . Then A(h1 , h2 ) A(f1 , f2 ) = . D(h1 , h2 ) D(f1 , f2 ) Proof. Arguing as in the proof of part (b) of Theorem 6.6.3, but with (6.6.6) in place of (6.6.5), gives the result.

Chapter 7

Multicovectors It was remarked in Section 5.1 that the determinant function is the classic example of a multilinear function. In addition to its multilinearity, the determinant function has another characteristic feature—it is alternating. This chapter is devoted to an examination of tensors that have a corresponding property.

7.1

Multicovectors

Let V be a vector space of dimension m, and let s ≥ 1 be an integer. Following Section B.2, we denote by Ss the group of permutations on {1, . . . , s}. For each permutation σ in Ss , consider the linear map σ : T 0s (V ) −→ T 0s (V ) defined by σ(A)(v1 , . . . , vs ) = A(vσ(1) , . . . , vσ(s) )

(7.1.1)

0 s (V

for all tensors A in T ) and all vectors v1 , . . . , vs in V . By saying that σ is linear, we mean that for all tensors A, B in T 0s (V ) and all real numbers c, σ(cA + B)(v1 , . . . , vs ) = cA(vσ(1) , . . . , vσ(s) ) + B(vσ(1) , . . . , vσ(s) ). There is potential confusion arising from (7.1.1), as a simple example illustrates. Let s = 3, let A be a tensor in T 03 (V ), and consider the permutation σ = (1 2 3) in S3 . According to (7.1.1), σ(A)(v1 , v2 , v3 ) = A(vσ(1) , vσ(2) , vσ(3) ) = A(v2 , v3 , v1 ) for all vectors v1 , v2 , v3 in V . To be consistent, it seems that σ(A)(v1 , v3 , v2 ) should be interpreted as A(vσ(1) , vσ(3) , vσ(2) ) = A(v2 , v1 , v3 ), but this is incorrect. The issue is that the indices in (v1 , v3 , v2 ) are not sequential, which is Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

133

134

7 Multicovectors

implicit in the way (7.1.1) is presented. Setting (w1 , w2 , w3 ) = (v1 , v3 , v2 ), we have from (7.1.1) that σ(A)(v1 , v3 , v2 ) = σ(A)(w1 , w2 , w3 ) = A(wσ(1) , wσ(2) , wσ(3) ) = A(w2 , w3 , w1 ) = A(v3 , v2 , v1 ).

In most of the computations that follow, the indices are sequential, thereby avoiding this issue. Theorem 7.1.1. Let V be a vector space, let σ, ρ be permutations in Ss , and let A be a tensor in T 0s (V ). Then  (σρ)(A) = σ ρ(A) . Proof. Let v1 , . . . , vs be vectors in V , and let (w1 , . . . , ws ) = (vσ(1) , . . . , vσ(s) ). Then (σρ)(A)(v1 , . . . , vs ) = A(v(σρ)(1) , . . . , v(σρ)(s) ) = A(vσ(ρ(1)) , . . . , vσ(ρ(s)) ) = A(wρ(1) , . . . , wρ(s) ) = ρ(A)(w1 , . . . , ws )  = ρ(A)(vσ(1) , . . . , vσ(s) ) = σ ρ(A) (v1 , . . . , vs ).

Since v1 , . . . , vs were arbitrary, the result follows. We say that a tensor A in T 0s (V ) is symmetric if σ(A) = A for all permutations σ in Ss ; that is, A(vσ(1) , . . . , vσ(s) ) = A(v1 , . . . , vs ) for all vectors v1 , . . . , vs in V . We denote the set of symmetric tensors in T 0s (V ) by Σs (V ). It is easily shown that Σs (V ) is subspace of T 0s (V ). A tensor B in T 0s (V ) is said to be alternating if τ (B) = −B for all transpositions τ in Ss , or equivalently, if B(v1 , . . . , vj , . . . , vi , . . . , vs ) = −B(v1 , . . . , vi , . . . , vj , . . . , vs ) for all vectors v1 , . . . , vs in V for all 1 ≤ i < j ≤ s. The set of alternating tensors in T 0s (V ) is denoted by Λs (V ). It is readily demonstrated that Λs (V ) is a subspace of T 0s (V ). An element of Λs (V ) is called an s-covector or multicovector. When s = 1, the alternating criterion is vacuous, and a 1-covector is simply a covector. Thus, Λ1 (V ) = V ∗ . (7.1.2) Since Λs (V ) is a subspace of T 0s (V ), the zero multicovector in Λs (V ) is precisely the zero tensor in T 0s (V ). A multicovector in Λs (V ) is nonzero when it is nonzero as a tensor in T 0s (V ). To be consistent with (5.1.2), we define Λ0 (V ) = R.

(7.1.3)

With these definitions, the determinant function det : (Matm×1 )m −→ R is seen to be a multicovector in Λm (Matm×1 ). There are several equivalent ways to characterize multicovectors in Λs (V ).

135

7.1 Multicovectors

Theorem 7.1.2. Let V be a vector space and let η be a tensor in T 0s (V ). Then the following are equivalent: (a) η is a multicovector in Λs (V ). (b) σ(η) = sgn(σ) η for all permutations σ in Ss . (c) If v1 , . . . , vs are vectors in V and two (or more) of them are equal, then η(v1 , . . . , vs ) = 0. (d) If v1 , . . . , vs are vectors in V and η(v1 , . . . , vs ) 6= 0, then v1 , . . . , vs are linearly independent. Proof. The proof is similar to that of Theorem 2.4.2. (a) ⇒ (b): Let σ = τ1 · · · τk be a decomposition of σ into transpositions. By Theorem 7.1.1,  σ(η) = (τ1 · · · τk )(η) = (τ1 · · · τk−1 ) τk (η) = (τ1 · · · τk−1 )(−η) = −(τ1 · · · τk−1 )(η).

Repeating the process k − 1 more times gives σ(η) = (−1)k η. By Theorem B.2.3, sgn(σ) = (−1)k . (b) ⇒ (a): If τ is a transposition in Ss , then, by Theorem B.2.3, τ (η) = sgn(τ ) η = −η. (a) ⇒ (c): If vi = vj for some 1 ≤ i < j ≤ s, then η(v1 , . . . , vi , . . . , vj , . . . , vs ) = η(v1 , . . . , vj , . . . , vi , . . . , vs ). On the other hand, since η is alternating, η(v1 , . . . , vj , . . . , vi , . . . , vs ) = −η(v1 , . . . , vi , . . . , vj , . . . , vs ). Thus, η(v1 , . . . , vi , . . . , vj , . . . , vs ) = −η(v1 , . . . , vi , . . . , vj , . . . , vs ), from which the result follows. (c) ⇒ (a): For all 1 ≤ i < j ≤ s, we have 0 = η(v1 , . . . , vi + vj , . . . , vi + vj , . . . , vs ) = η(v1 , . . . , vi , . . . , vi , . . . , vs ) + η(v1 , . . . , vi , . . . , vj , . . . , vs ) + η(v1 , . . . , vj , . . . , vi , . . . , vs ) + η(v1 , . . . , vj , . . . , vj , . . . , vs ) = η(v1 , . . . , vi , . . . , vj , . . . , vs ) + η(v1 , . . . , vj , . . . , vi , . . . , vs ), from which the result follows. To prove (c) ⇔ (d), we replace the assertion in part (d) with the following logically equivalent assertion: (d0 ): If v1 , . . . , vs are linearly dependent vectors in V , then η(v1 , . . . , vs ) = 0. (c) ⇒ (d0 ): Since v1 , . . . , vs are linearly dependent, P one of them can be s expressed as a linear combination of the others, say, v1 = i=2 ai vi . Then η(v1 , . . . , vs ) = η

X s i=2

i

a vi , v2 , . . . , vs

 =

s X i=2

ai η(vi , v2 , . . . , vs ).

136

7 Multicovectors

Since η(vi , v2 , . . . , vs ) has vi in (at least) two positions, η(vi , v2 , . . . , vs ) = 0 for i = 2, . . . , s. Thus, η(v1 , . . . , vs ) = 0. (d0 ) ⇒ (c): If two (or more) of v1 , . . . , vs are equal, then they are linearly dependent, so η(v1 , . . . , vs ) = 0. We now introduce a way of associating a multicovector to a given tensor. Let V be a vector space. Alternating map is the family of linear maps Alt : T 0s (V ) −→ T 0s (V ) defined for s ≥ 0 by

Alt(A) =

1 X sgn(σ) σ(A) s!

(7.1.4)

σ∈Ss

for all tensors A in T 0s (V ). Theorem 7.1.3. Let V be a vector space, let A be a tensor in T 0s (V ), let σ be a permutation in Ss , and let η be a multicovector in Λs (V ). Then:  (a) σ Alt(A) = sgn(σ) Alt(A) = Alt σ(A) . (b) Alt(A) is a multicovector in Λs (V ). (c) Alt(η) = η.  (d) Alt Alt(A) = Alt(A). (e) Alt T 0s (V ) = Λs (V ). Proof. (a): We have 

 1 X σ Alt(A) = σ sgn(ρ) ρ(A) s! 

[(7.1.4)]

ρ∈Ss

=

 1 X sgn(ρ) σ ρ(A) s! ρ∈Ss

=

1 X sgn(ρ) (σρ)(A) s!

[Th 7.1.1]

ρ∈Ss

= sgn(σ)

1 X sgn(σρ) (σρ)(A) s!

[Th B.2.2]

ρ∈Ss

= sgn(σ) Alt(A),

[(7.1.4)]

where the last equality follows from the observation that as ρ varies over Ss , so does σρ. We also have   1 X Alt σ(A) = sgn(ρ) ρ σ(A) s!

[(7.1.4)]

ρ∈Ss

= sgn(σ)

1 X sgn(ρσ) (ρσ)(A) s! ρ∈Ss

= sgn(σ) Alt(A),

[Th 7.1.1, Th B.2.2]

137

7.2 Wedge Products

where the last equality is justified as above.  (b): For all transpositions τ in Ss , it follows from part (a) that τ Alt(A) = −Alt(A), so Alt(A) is alternating. (c): We have 1 X sgn(σ) σ(η) s! σ∈Ss 1 X = η s!

Alt(η) =

[(7.1.4)] [Th 7.1.2(b)]

σ∈Ss

= η.

[card(Ss ) = s!]

(d): This follows from parts (b) and (c).  (e): We have from part (b) that Alt T 0s (V ) ⊆ Λs (V ). Since Λs (V ) ⊆ T 0s (V ), by part (c),   Λs (V ) = Alt Λs (V ) ⊆ Alt T 0s (V ) . In view of Theorem 7.1.3(b), we can replace the map in (7.1.4) with Alt : T 0s (V ) −→ Λs (V ).

7.2

Wedge Products

In Section 5.1, we introduced a type of multiplication of tensors called the tensor product. Our next task is to define a corresponding operation for multicovectors. Let V be a vector space. Wedge product is the family of linear maps 0

0

∧ : Λs (V ) × Λs (V ) −→ Λs+s (V ) defined for s, s0 ≥ 0 by η∧ζ =

(s + s0 )! Alt(η ⊗ ζ) s!s0 !

(7.2.1)

0

for all multicovectors η in Λs (V ) and ζ in Λs (V ). That is, (η ∧ ζ)(v1 , . . . , vs+s0 )  (s + s0 )! 1 = 0 s!s ! (s + s0 )! =

1 s!s0 !

X

X σ∈Ss+s0

 sgn(σ) σ(η ⊗ ζ)(vσ(1) , . . . , vσ(s+s0 ) )

sgn(σ) η(vσ(1) , . . . , vσ(s) ) ζ(vσ(s+1) , . . . , vσ(s+s0 ) )

σ∈Ss+s0

for all vectors v1 , . . . , vs+s0 in V , where the first equality follows from (7.2.1), and the second equality from (5.1.4) and (7.1.1).

138

7 Multicovectors

Example 7.2.1. Let η, ζ be covectors in Λ1 (V ) = V ∗ , and let v1 , v2 be vectors in V . With S2 = {id, (1 2)}, we have 1 X (η ∧ ζ)(v1 , v2 ) = sgn(σ) η(vσ(1) ) ζ(vσ(2) ) 1!1! σ∈S2

= η(v1 )ζ(v2 ) − η(v2 )ζ(v1 ).

Now let η be a multicovector in Λ2 (V ), let ζ be a covector in Λ1 (V ) = V ∗ , and let v1 , v2 be vectors in V . With S3 = {id, (1 2), (1 3), (2 3), (1 2 3), (1 3 2)}, we have (η ∧ ζ)(v1 , v2 , v3 ) =

1 X sgn(σ) η(vσ(1) , vσ(2) ) ζ(vσ(3) ) 2!1! σ∈S3

= [η(v1 , v2 )ζ(v3 ) − η(v2 , v1 )ζ(v3 ) − η(v3 , v2 )ζ(v1 )

− η(v1 , v3 )ζ(v2 ) + η(v2 , v3 )ζ(v1 ) + η(v3 , v1 )ζ(v2 )]/2

= [η(v1 , v2 )ζ(v3 ) + η(v1 , v2 )ζ(v3 ) + η(v2 , v3 )ζ(v1 )

+ η(v3 , v1 )ζ(v2 ) + η(v2 , v3 )ζ(v1 ) + η(v3 , v1 )ζ(v2 )]/2 = η(v1 , v2 )ζ(v3 ) + η(v2 , v3 )ζ(v1 ) + η(v3 , v1 )ζ(v2 ).



Wedge products behave well with respect to basic algebraic structure. Theorem 7.2.2. Let V be a vector space, let η, η 1 , η 2 and ζ, ζ 1 , ζ 2 be multicov0 ectors in Λs (V ) and Λs (V ), respectively, and let c be a real number. Then: (a) (η 1 + η 2 ) ∧ ζ = η 1 ∧ ζ + η 2 ∧ ζ. (b) η ∧ (ζ 1 + ζ 2 ) = η ∧ ζ 1 + η ∧ ζ 2 . (c) (cη) ∧ ζ = c(η ∧ ζ) = η ∧ (cζ). Proof. Straightforward.

Theorem 7.2.3. If V is a vector space, and η and ζ are multicovectors in 0 Λs (V ) and Λs (V ), respectively, then: 0 (a) η ∧ ζ = (−1)ss ζ ∧ η. (b) If s = s0 = 1, then η ∧ ζ = −ζ ∧ η. (c) If s = 1, then η ∧ η = 0.

Proof. (a): Let v1 , . . . , vs+s0 be vectors in V , and define a permutation σ in Ss+s0 by   1 ··· s s + 1 · · · s + s0 σ= 0 . s + 1 · · · s0 + s 1 ··· s0 We have σ(η ⊗ ζ)(v1 , . . . , vs+s0 ) = η ⊗ ζ(vσ(1) , . . . , vσ(s) , vσ(s+1) , . . . , vσ(s+s0 ) ) = η ⊗ ζ(vs0 +1 , . . . , vs0 +s , v1 , . . . , vs0 ) = η(vs0 +1 , . . . , vs0 +s ) ζ(v1 , . . . , vs0 ) = ζ(v1 , . . . , vs0 ) η(vs0 +1 , . . . , vs0 +s ) = ζ ⊗ η(v1 , . . . , vs0 , vs0 +1 , . . . , vs0 +s ).

139

7.2 Wedge Products Since v1 , . . . , vs+s0 were arbitrary, σ(η ⊗ ζ) = ζ ⊗ η. Then  Alt(ζ ⊗ η) = Alt σ(η ⊗ ζ) = sgn(σ) Alt(η ⊗ ζ) 0

= (−1)ss Alt(η ⊗ ζ), hence

[Th 7.1.3(a)]

[Th B.2.4]

0 (s + s0 )! ss0 (s + s)! Alt(η ⊗ ζ) = (−1) Alt(ζ ⊗ η) s!s0 ! s0 !s! 0 = (−1)ss ζ ∧ η.

η∧ζ =

(b): This follows from part (a). (c): This follows from part (b). Theorem 7.2.4. If V is a vector space, and A and B are multicovectors in 0 Λs (V ) and Λs (V ), respectively, then    Alt Alt(A) ⊗ B = Alt(A ⊗ B = Alt A ⊗ Alt(B) . Proof. Let us define a map ι : Ss −→ Ss+s0 as follows. For each permutation σ in Ss , let ι(σ) be the permutation in Ss+s0 that restricts to σ on {1, . . . , s} and fixes s + 1, . . . , s + s0 ; that is,   1 ··· s s + 1 · · · s + s0 ι(σ) = . σ(1) · · · σ(s) s + 1 · · · s + s0 Evidently, ι is a group isomorphism between Ss and the subgroup ι(Ss ) of Ss+s0 . It is easily shown that  sgn ι(σ) = sgn(σ) and ι(σ)(A ⊗ B) = σ(A) ⊗ B. (7.2.2) Then  X   1 Alt Alt(A) ⊗ B = Alt sgn(σ) σ(A) ⊗ B s! σ∈Ss  X   1 = Alt sgn(σ) σ(A) ⊗ B s! σ∈Ss  1 X = sgn(σ) Alt σ(A) ⊗ B s! σ∈Ss   1 X = sgn ι(σ) Alt ι(σ)(A ⊗ B) s! σ∈Ss 1 X = Alt(A ⊗ B) s! 

[(7.1.4)] [Th 5.1.2(a)]

[(7.2.2)] [Th 7.1.3(a)]

σ∈Ss

= Alt(A ⊗ B),

[card(Ss ) = s!]

which proves the first equality. The proof of the second equality is similar.

140

7 Multicovectors

Any operation that purports to be a type of “multiplication” should be associative. The wedge product meets this requirement. Theorem 7.2.5 (Associativity of Wedge Product). Let V be a vector 0 00 space, and let η and ζ and ξ be multicovectors in Λs (V ) and Λs (V ) and Λs (V ), respectively. Then (η ∧ ζ) ∧ ξ =

(s + s0 + s00 )! Alt(η ⊗ ζ ⊗ ξ) = η ∧ (ζ ∧ ξ). s!s0 !s00 !

Proof. We have  [(s + s0 ) + s00 ]! Alt (η ∧ ζ) ⊗ ξ) 0 00 (s + s )!s !    (s + s0 + s00 )! (s + s0 )! = Alt Alt(η ⊗ ζ) ⊗ ξ (s + s0 )!s00 ! s!s0 !  (s + s0 + s00 )! = Alt Alt(η ⊗ ζ) ⊗ ξ 0 00 s!s !s !  (s + s0 + s00 )! = Alt(η ⊗ ζ ⊗ ξ , 0 00 s!s !s !

(η ∧ ζ) ∧ ξ =

[(7.2.1)] [(7.2.1)]

[Th 7.2.4]

which proves the first equality. The proof of the second equality is similar. In light of the associativity of the wedge product, we drop parentheses and, for example, denote (η ∧ ζ) ∧ ξ and η ∧ (ζ ∧ ξ) by η ∧ ζ ∧ ξ, with corresponding notation for wedge products of more than three terms. Theorem 7.2.6. Let V be a vector space, and let η 1 , . . . , η s be covectors in V ∗ . If η i = η j for some integers 1 ≤ i < j ≤ s, then η 1 ∧ · · · ∧ η s = 0. Proof. We have η1 ∧ · · · ∧ ηi ∧ · · · ∧ ηj ∧ · · · ∧ ηk

= (−1)(i−1)+(j−2) (η i ∧ η j ) ∧ (η 1 ∧ · · · ∧ ηbi ∧ · · · ∧ ηbj ∧ · · · ∧ η k ) = 0,

where the first equality follows from repeated applications of Theorem 7.2.3(b), and the second equality from Theorem 7.2.3(c), and where b indicates that an expression is omitted. The next result is a generalization of Theorem 7.2.5. Theorem 7.2.7. If V is a vector space and η i is a multicovector in Λsi (V ) for i = 1, . . . , k, then η1 ∧ · · · ∧ ηk =

(s1 + · · · + sk )! Alt(η 1 ⊗ · · · ⊗ η k ). s1 ! · · · sk !

Proof. The result is trivial for k = 1. For k = 2, the result is simply the definition of Alt. For k = 3, the result is given by Theorem 7.2.5. For k ≥ 4,

141

7.2 Wedge Products

the proof is by induction. Let k ≥ 4, and suppose the assertion is true for all indices < k. Then (η 1 ∧ η 2 ) ∧ η 3 ∧ · · · ∧ η k

 [(s1 + s2 ) + s3 + · · · + sk ]! Alt (η 1 ∧ η 2 ) ⊗ η 3 ⊗ · · · ⊗ η k (s1 + s2 )!s3 ! · · · sk !   (s1 + s2 + s3 + · · · + sk )! (s1 + s2 )! 1 2 3 k = Alt Alt(η ⊗ η ) ⊗ (η ⊗ · · · ⊗ η ) (s1 + s2 )!s3 ! · · · sk ! s1 !s2 !  (s1 + · · · + sk )! = Alt Alt(η 1 ⊗ η 2 ) ⊗ (η 3 ⊗ · · · ⊗ η k ) s1 ! · · · sk ! (s1 + · · · + sk )! = Alt(η 1 ⊗ · · · ⊗ η k ), s1 ! · · · sk !

=

where the first equality follows from the induction hypothesis, the second equality from (7.2.1), and the last equality from Theorem 7.2.4. The next result shows that wedge products and determinants are closely related, which is not so surprising. Theorem 7.2.8. Let V be a vector space, let η 1 , . . . , η s be covectors in V ∗ , and let v1 , . . . , vs be vectors in V . Then  1  η (v1 ) · · · η 1 (vs )  .. . .. η 1 ∧ · · · ∧ η s (v1 , . . . , vs ) = det ... . .  η s (v1 )

···

η s (vs )

Proof. We have η 1 ∧ · · · ∧ η s (v1 , . . . , vs )

= s! Alt(η 1 ⊗ · · · ⊗ η s )(v1 , . . . , vs ) X = sgn(σ) σ(η 1 ⊗ · · · ⊗ η s )(v1 , . . . , vs )

[Th 7.2.7] [(7.1.4)]

σ∈Ss

=

X σ∈Ss

=

X σ∈Ss

sgn(σ) η 1 ⊗ · · · ⊗ η s (vσ(1) , . . . , vσ(s) )

[(7.1.1)]

sgn(σ) η 1 (vσ(1) ) · · · η s (vσ(s) )

  = det η i (vj ) .

[(2.4.1)]

Theorem 7.2.9. Let V be a vector space, let (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Then ( 1 if (i1 , . . . , is ) = (j1 , . . . , js ) j1 js θ ∧ · · · ∧ θ (hi1 , . . . , his ) = 0 if (i1 , . . . , is ) 6= (j1 , . . . , js ) for all 1 ≤ i1 < · · · < is ≤ m and 1 ≤ j1 < · · · < js ≤ m.

142

7 Multicovectors

Proof. By Theorem 7.2.8, θj1 ∧ · · · ∧ θjs (hi1 , . . . , his ) = det

   (j ,...,j )  (j ,...,j ) θi (hj ) (i 1,...,i s) = det (Im )(i11,...,iss) . 1

s

(j ,...,j )

If (i1 , . . . , is ) = (j1 , . . . , js ), then (Im )(i11,...,iss) = Is , and if (i1 , . . . , is ) 6= (j1 , . . . , (j ,...,j )

js ), then (Im )(i11,...,iss) has at least one row (and at least one column) of 0s. The result follows. Theorem 7.2.10. Let V be a vector space, let η 1 , . . . , η s be covectors in V ∗ , and let σ be a permutation in Ss . Then η σ(1) ∧ · · · ∧ η σ(s) = sgn(σ) η 1 ∧ · · · ∧ η s . Proof. For vectors v1 , . . . , vs in V , we have η σ(1) ∧ · · · ∧ η σ(s) (v1 , . . . , vs )   = det η σ(i) (vj )   = det η σ(j) (vi )   = sgn(σ) det η j (vi )   = sgn(σ) det η i (vj ) 1

s

= sgn(σ) η ∧ · · · ∧ η (v1 , . . . , vs ),

[Th 7.2.8] [Th 2.4.5(c)] [Th 2.4.2] [Th 2.4.5(c)] [Th 7.2.8]

   T where η σ(j) (vi ) = η σ(i) (vj ) . Since v1 , . . . , vs were arbitrary, the result follows. Theorem 7.2.11. Let V be a vector space, let H = (h1 , . . . , hm ) and F be bases for V , and let (ϕ1 , . . . , ϕm ) be the dual basis corresponding to F. Then  F  ϕ1 ∧ · · · ∧ ϕm (h1 , . . . , hm ) = det idV H . P Proof. By Theorem 1.2.1(d), hj = i ϕi (hj )fi for j = 1, . . . , m. Then (2.2.6)  F  i  and (2.2.7) give idV H = ϕ (hj ) . The result now follows from Theorem 7.2.8. Theorem 7.2.12 (Basis for Λs (V )). Let V be a vector space, let (h1 , . . . , hm ) be a basis for V , and let (θ1 , . . . , θm ) be its dual basis. Then: (a) If 1 ≤ s ≤ m, then {θi1 ∧ · · · ∧ θis : 1 ≤ i1 < · · · < is ≤ m} is an unordered basis for Λs (V ). (b) If s > m, then Λs (V ) = {0}.  (c) Λs (V ) has dimension m s . (d) Any nonzero multicovector in Λm (V ) spans Λm (V ).

143

7.2 Wedge Products (e) (η 1 , . . . , η m ) is a basis for Λm−1 (V ), where η i = (−1)i−1 θ1 ∧ · · · ∧ θbi ∧ · · · ∧ θm for i = 1, . . . , m, and where b indicates that an expression is omitted. Proof. Let Ωs = {θii ⊗ · · · ⊗ θis : 1 ≤ i1 , . . . , is ≤ m}

and Then

Θs = {θi1 ∧ · · · ∧ θis : 1 ≤ i1 < · · · < is ≤ m}.  span(Θs ) = span Alt(Ωs )  = Alt span(Ωs )  = Alt T 0s (V ) s

= Λ (V ),

so Θs spans Λs (V ) for all s ≥ 1. (a): If X 1≤i1 0. Then O is an orientation of V , called the orientation induced by $. Proof. Let H = (h1 , . . . , hm ) and F = (f1 , . . . , fm ) be bases for V , with $(h1 , . . . , hm ) > 0. Then F is in O ⇔

$(f1 , . . . , fm ) > 0



H and F are consistent,



$(h1 , . . . , hm ) and $(f1 , . . . , fm ) have the same sign

where the last equivalence follows from Theorem 8.2.2. Thus, O is the set of bases for V that are consistent with H; that is, O = [H]. Example 8.2.4 (Rm ). Let E = (e1 , . . . , em ) be the standard basis for Rm , let (ξ 1 , . . . , ξ m ) be its dual basis, and let Ω = ξ 1 ∧ · · · ∧ ξ m . It follows from Theorem 7.2.9 that Ω(e1 , . . . , em )=1, so Ω is an orientation multicovector on Rm , and E is in the orientation of Rm induced by Ω. Thus, Ω induces the standard orientation of Rm . ♦ Let V be a vector space, and let $ and ϑ be orientation multicovectors on V . By Theorem 7.2.12(d) and Theorem 8.2.1, $ spans Λm (M ) (as does ϑ). It follows that $ = cϑ for some nonzero real number c. We say that $ and ϑ are consistent, and write $ ∼ ϑ, if c > 0. It is easily shown that ∼ is an equivalence relation on the set of orientation multicovectors on V , and that there are precisely two equivalence classes. The equivalence class containing $ is denoted by [$]. By definition, [$] comprises all multicovectors on V that are consistent with $. Evidently, $ and −$ are not consistent. Thus, for any orientation multicovector $ on V , the equivalence classes of orientation multicovectors are [$] and [−$]. Let H be a basis for V . It is clear that orientation multicovectors in the same equivalence class induce the same orientation, and that orientation multicovectors in different equivalence classes induce opposite orientations. We therefore have a bijective map ι : {[$], [−$]} −→ {[H], −[H]} defined by assigning [$] and [−$] to the orientations induced by $ and −$, respectively. This shows that we are free to specify orientations using either (equivalence classes of) bases for V or (equivalence classes of) orientation multicovectors on V . For purposes of computation, the latter approach is generally more convenient. Theorem 8.2.5 (Induced Orientation of Subspace). Let (V, O) be an oriented vector space of dimension m ≥ 2, let U be an (m−1)-dimensional subspace of V , and let v be a vector in V U . Then:

8.2 Orientation of Vector Spaces

161

(a) There is a unique orientation O_U of U, called the orientation induced by v, such that (h_1, …, h_{m−1}) is a basis for U that is positively oriented with respect to O_U if and only if (v, h_1, …, h_{m−1}) is a basis for V that is positively oriented with respect to O.
(b) If ϖ is an orientation multicovector on V that induces O, then i_v(ϖ)|_U is an orientation multicovector on U that induces O_U.
(c) O_U is independent of the choice of orientation multicovector on V that induces O.

Proof. (a), (b): Let (h_1, …, h_{m−1}) be an (m−1)-tuple of vectors in U. Since v is in V∖U, by Theorem 1.1.2, (h_1, …, h_{m−1}) is a basis for U if and only if (v, h_1, …, h_{m−1}) is a basis for V. Suppose (h_1, …, h_{m−1}) is in fact a basis for U. It is clear that i_v(ϖ)|_U is a multicovector in Λ^{m−1}(U). By definition,

    i_v(ϖ)|_U(h_1, …, h_{m−1}) = ϖ(v, h_1, …, h_{m−1}).   (8.2.2)

We have from Theorem 8.2.1 that ϖ(v, h_1, …, h_{m−1}) ≠ 0, and then from (8.2.2) that i_v(ϖ)|_U(h_1, …, h_{m−1}) ≠ 0. This shows that i_v(ϖ)|_U is nonzero, and is therefore an orientation multicovector on U. Let O_U be the orientation of U induced by i_v(ϖ)|_U. It follows from (8.2.2) and Theorem 8.2.3 that (h_1, …, h_{m−1}) is positively oriented with respect to O_U if and only if (v, h_1, …, h_{m−1}) is positively oriented with respect to O. Thus, O_U has the desired property. Uniqueness follows from the (almost tautological) observation that an orientation of a vector space is determined by the bases it contains.
(c): This follows from part (a).

A remark is that, in contrast to part (c) of Theorem 8.2.5, the orientation induced on U is not independent of the choice of vector in V∖U. In particular, if v induces O_U, then −v induces −O_U.

Theorem 8.2.6. Let (V, O) and (Ṽ, Õ) be oriented vector spaces, let A : V −→ Ṽ be a linear isomorphism, and let A(O) = {A(H) : H ∈ O}. Then either A(O) = Õ, in which case A is said to be orientation-preserving, or A(O) = −Õ, in which case A is said to be orientation-reversing.

Proof. Let H and F be bases in O, and observe that, by Theorem 1.1.13(a), A(H) and A(F) are bases for Ṽ. Let [id_V]_H^F = [a_j^i]. We have from (2.2.6) and (2.2.7) that h_j = Σ_i a_j^i f_i, hence A(h_j) = Σ_i a_j^i A(f_i), so [id_Ṽ]_{A(H)}^{A(F)} = [id_V]_H^F. By definition, H and F are consistent, and therefore, so are A(H) and A(F). Thus, either A(H) and A(F) are in Õ, or A(H) and A(F) are in −Õ. Since H and F were arbitrary, either A(O) ⊆ Õ or A(O) ⊆ −Õ. By Theorem 1.1.9, A^{−1} is a linear isomorphism, so a similar argument shows that either A^{−1}(Õ) ⊆ O or A^{−1}(Õ) ⊆ −O, hence either Õ ⊆ A(O) or Õ ⊆ A(−O). It is easily shown using (8.2.1) that A(−H) = −A(H) for any basis H in O, so either Õ ⊆ A(O) or −Õ ⊆ A(O). The result follows.


Theorem 8.2.7. If (V, O) and (Ṽ, Õ) are oriented vector spaces and A : V −→ Ṽ is a linear isomorphism, then the following are equivalent:
(a) A is orientation-preserving.
(b) det([A]_H^H̃) > 0 for some (hence every) basis H in O and some (hence every) basis H̃ in Õ.
(c) A*(ϖ̃) is an orientation multicovector on V that induces O for some (hence every) orientation multicovector ϖ̃ on Ṽ that induces Õ.

Remark. It is easily shown that since A is a linear isomorphism, A*(ϖ̃) is an orientation multicovector on V, so the assertion in part (c) makes sense.

Proof. (a) ⇔ (b): We have

    A is orientation-preserving
    ⇔ A(H) and H̃ are consistent for some (hence every) H in O and H̃ in Õ
    ⇔ det([id_Ṽ]_{A(H)}^H̃) > 0 for some (hence every) H in O and H̃ in Õ
    ⇔ det([A]_H^H̃) > 0 for some (hence every) H in O and H̃ in Õ,

where the first equivalence follows from Theorem 8.2.6, and the last equivalence from Theorem 2.2.5(a).
(b) ⇔ (c): Let H = (h_1, …, h_m) and H̃ = (h̃_1, …, h̃_m) be bases in O and Õ, respectively, and let ϖ̃ be an orientation multicovector that induces Õ. We have from Theorem 7.3.3(b) that

    A*(ϖ̃)(h_1, …, h_m) = det([A]_H^H̃) ϖ̃(h̃_1, …, h̃_m).

By Theorem 8.2.3, ϖ̃(h̃_1, …, h̃_m) > 0, hence

    A*(ϖ̃) induces O
    ⇔ A*(ϖ̃)(h_1, …, h_m) > 0 for some (hence every) H in O
    ⇔ det([A]_H^H̃) ϖ̃(h̃_1, …, h̃_m) > 0 for some (hence every) H in O and H̃ in Õ
    ⇔ det([A]_H^H̃) > 0 for some (hence every) H in O and H̃ in Õ,

where the first equivalence follows from Theorem 8.2.3.

Theorem 8.2.8. Let (V, O) be an oriented vector space, and let A : V −→ V be a linear isomorphism. Then A is orientation-preserving if and only if det(A) > 0.

Proof. Setting (V, O) = (Ṽ, Õ) and H = H̃ in Theorem 8.2.7 gives the result.
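The determinant test of Theorem 8.2.8 is easy to exercise numerically. The following sketch is ours rather than the text's (the function name orientation_behavior is invented); it classifies an invertible linear map of R^m by the sign of its determinant:

    import numpy as np

    def orientation_behavior(A):
        """Classify a linear isomorphism A of R^m (an invertible m x m
        matrix) via Theorem 8.2.8: orientation-preserving iff det(A) > 0."""
        d = np.linalg.det(A)
        if np.isclose(d, 0.0):
            raise ValueError("A is not a linear isomorphism")
        return "orientation-preserving" if d > 0 else "orientation-reversing"

    # A rotation of R^2 preserves orientation; a reflection reverses it.
    theta = 0.7
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    reflection = np.array([[1.0, 0.0], [0.0, -1.0]])
    print(orientation_behavior(rotation))    # orientation-preserving
    print(orientation_behavior(reflection))  # orientation-reversing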

8.3 Orientation of Scalar Product Spaces

In Section 8.2, we considered orientation of vector spaces. We now expand our coverage to include scalar product spaces.

Theorem 8.3.1. Let (V, g) be a scalar product space of index ν, let E = (e_1, …, e_m) and H = (h_1, …, h_m) be bases for V, with E orthonormal, and let (ξ^1, …, ξ^m) and (θ^1, …, θ^m) be the corresponding dual bases. Then:
(a) det(g_H) = (−1)^ν det([id_V]_H^E)².
(b) ξ^1 ∧ ⋯ ∧ ξ^m = ±√|det(g_H)| θ^1 ∧ ⋯ ∧ θ^m, where the positive (negative) sign is chosen if E and H are consistent (not consistent).
(c) If H is orthonormal, then ξ^1 ∧ ⋯ ∧ ξ^m = ±θ^1 ∧ ⋯ ∧ θ^m, where the signs are chosen as in part (b).

Proof. (a): We have from Theorem 3.1.3 that

    g_H = ([id_V]_H^E)^T g_E [id_V]_H^E,

and from (4.2.3) that det(g_E) = (−1)^ν. The result follows.
(b): By Theorem 7.3.3(a),

    ξ^1 ∧ ⋯ ∧ ξ^m = det([id_V]_H^E) θ^1 ∧ ⋯ ∧ θ^m,

and by part (a), |det(g_H)| = det([id_V]_H^E)², hence

    det([id_V]_H^E) = ±√|det(g_H)|,

where the positive (negative) sign is chosen if H and E are consistent (not consistent). The result follows.
(c): This follows from (4.2.3) and part (b).

Theorem 8.3.2. Let (V, g) be a scalar product space, let E = (e_1, …, e_m) and F = (f_1, …, f_m) be orthonormal bases for V, and let ϖ be an orientation multicovector on V. Then ϖ(e_1, …, e_m) = ±ϖ(f_1, …, f_m), where the positive (negative) sign is chosen if E and F are consistent (not consistent).


Proof. Let (ξ^1, …, ξ^m) and (ϕ^1, …, ϕ^m) be the dual bases corresponding to E and F, respectively. We have

    ϖ(e_1, …, e_m) ξ^1 ∧ ⋯ ∧ ξ^m = ϖ(f_1, …, f_m) ϕ^1 ∧ ⋯ ∧ ϕ^m   [Th 7.2.13(b)]
    = ±ϖ(f_1, …, f_m) ξ^1 ∧ ⋯ ∧ ξ^m,   [Th 8.3.1(c)]

where the positive (negative) sign is chosen if E and F are consistent (not consistent). Then Theorem 7.2.9 gives the result.

Theorem 8.3.3. Let (V, g) be a scalar product space, let E = (e_1, …, e_m) and F be orthonormal bases for V, and let (ϕ^1, …, ϕ^m) be the dual basis corresponding to F. Then

    det([id_V]_E^F) = ϕ^1 ∧ ⋯ ∧ ϕ^m(e_1, …, e_m) = ±1,

where the positive (negative) sign is chosen if E and F are consistent (not consistent).

Proof. The first equality follows from Theorem 7.2.11. Setting ϖ = ϕ^1 ∧ ⋯ ∧ ϕ^m in Theorem 8.3.2 and using Theorem 7.2.9 gives the second equality.

Theorem 8.3.4 (Existence of Volume Multicovector). If (V, g, O) is an oriented scalar product space, then there is a unique orientation multicovector Ω_V on V, called the volume multicovector, such that: (i) Ω_V induces O, and (ii) if (e_1, …, e_m) is an orthonormal basis for V that is positively oriented with respect to O, then Ω_V(e_1, …, e_m) = 1. In fact, if (f_1, …, f_m) is any orthonormal basis for V that is positively oriented with respect to O and (ϕ^1, …, ϕ^m) is its dual basis, then Ω_V = ϕ^1 ∧ ⋯ ∧ ϕ^m.

Proof. Existence. By Theorem 7.2.9, ϕ^1 ∧ ⋯ ∧ ϕ^m(f_1, …, f_m) = 1, so ϕ^1 ∧ ⋯ ∧ ϕ^m is an orientation multicovector on V. Then ϕ^1 ∧ ⋯ ∧ ϕ^m induces an orientation of V, of which there are two possibilities, namely, O and −O. Since (f_1, …, f_m) is positively oriented with respect to O and ϕ^1 ∧ ⋯ ∧ ϕ^m(f_1, …, f_m) > 0, by Theorem 8.2.3, ϕ^1 ∧ ⋯ ∧ ϕ^m induces O. Since (e_1, …, e_m) and (f_1, …, f_m) are positively oriented with respect to O, they are consistent. It follows from Theorem 8.3.3 that ϕ^1 ∧ ⋯ ∧ ϕ^m(e_1, …, e_m) = 1.

Uniqueness. Suppose ϖ is an orientation multicovector on V such that ϖ(f_1, …, f_m) = 1. It follows from Theorem 7.2.13(b) that ϖ = ϖ(f_1, …, f_m) ϕ^1 ∧ ⋯ ∧ ϕ^m = ϕ^1 ∧ ⋯ ∧ ϕ^m.

Theorem 8.3.5 (Expression for Volume Multicovector). Let (V, g, O) be an oriented scalar product space, let H be a basis for V that is positively oriented with respect to O, and let (θ^1, …, θ^m) be its dual basis. Then

    Ω_V = √|det(g_H)| θ^1 ∧ ⋯ ∧ θ^m.


Proof. Let (e_1, …, e_m) be an orthonormal basis for V that is positively oriented with respect to O, and let (ξ^1, …, ξ^m) be its dual basis. Since both (e_1, …, e_m) and H are positively oriented, they are consistent. The result now follows from Theorem 8.3.1(b) and Theorem 8.3.4.

Let (V, g, O) be an oriented scalar product space of dimension m ≥ 2, and let Ω_V be its volume multicovector. By Theorem 8.3.4, Ω_V induces O. Let U be a subspace of V of dimension m − 1 on which g is nondegenerate. We have from Theorem 4.1.2(b) that U^⊥ is a 1-dimensional subspace of V, and from Theorem 4.1.3 that V = U ⊕ U^⊥. Let u be a unit vector in U^⊥, so that Ru = U^⊥, hence V = U ⊕ Ru. Since g is nondegenerate on U, g|_U is a scalar product on U. By parts (a) and (b) of Theorem 8.2.5, i_u(Ω_V)|_U is an orientation multicovector on U that induces a certain orientation of U denoted by O_U. Thus, (U, g|_U, O_U) is an oriented scalar product space of dimension m − 1. Let Ω_U be its volume multicovector. The question arises as to whether the orientation multicovectors i_u(Ω_V)|_U and Ω_U are related. As the next result shows, they are one and the same.

Theorem 8.3.6. With the above setup:
(a) i_u(Ω_V)|_U = Ω_U.
(b) More generally, if v is a vector in V and ε is the sign of ⟨u, u⟩, then i_v(Ω_V)|_U = ε⟨u, v⟩ Ω_U.

Proof. (a): Let (e_1, …, e_{m−1}) be an orthonormal basis for U that is positively oriented with respect to O_U. We have from Theorem 8.2.5(a) that (u, e_1, …, e_{m−1}) is an orthonormal basis for V that is positively oriented with respect to O, and then from Theorem 8.3.4 (applied to V) that Ω_V(u, e_1, …, e_{m−1}) = 1. By definition,

    i_u(Ω_V)|_U(e_1, …, e_{m−1}) = Ω_V(u, e_1, …, e_{m−1}),

so i_u(Ω_V)|_U(e_1, …, e_{m−1}) = 1. The result now follows from the uniqueness of Ω_U, as given by Theorem 8.3.4 (applied to U).
(b): Since V = U ⊕ Ru, we have v = (v − v̄) + v̄, where v − v̄ and v̄ are (unique) vectors in U and Ru, respectively. By Theorem 7.4.1(a),

    i_v(Ω_V)|_U = i_{v−v̄}(Ω_V)|_U + i_{v̄}(Ω_V)|_U.   (8.3.1)

We seek alternative expressions for the terms on the right-hand side of the preceding identity. For the first term, let w_1, …, w_{m−1} be vectors in U. Since v − v̄ is a vector in U, which has dimension m − 1, it follows that v − v̄, w_1, …, w_{m−1} are linearly dependent. Then Theorem 7.3.5 gives

    i_{v−v̄}(Ω_V)|_U(w_1, …, w_{m−1}) = Ω_V(v − v̄, w_1, …, w_{m−1}) = 0.

Since w_1, …, w_{m−1} were arbitrary,

    i_{v−v̄}(Ω_V)|_U = 0.   (8.3.2)

For the second term, we have from Theorem 4.1.5 that v̄ = P_{Ru}(v) = ε⟨v, u⟩u, and then from Theorem 7.4.1(b) and part (a) that

    i_{v̄}(Ω_V)|_U = ε⟨v, u⟩ i_u(Ω_V)|_U = ε⟨v, u⟩ Ω_U.   (8.3.3)

Substituting (8.3.2) and (8.3.3) into (8.3.1) gives the result.
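As a numerical aside of our own (not part of the text), the scale factor √|det(g_H)| in Theorem 8.3.5 can be checked directly in Euclidean R^2, where det(g_H) is the Gram determinant of the basis H and Ω_V(h_1, h_2) equals the determinant of the matrix with columns h_1, h_2:

    import numpy as np

    # A positively oriented but non-orthonormal basis H of Euclidean R^2,
    # written as columns. g_H has entries <h_i, h_j>.
    H = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    g_H = H.T @ H                       # Gram matrix of H
    scale = np.sqrt(abs(np.linalg.det(g_H)))

    # Omega_V(h_1, h_2) = scale * (theta^1 ^ theta^2)(h_1, h_2) = scale * 1,
    # which should match det[h_1 h_2] for the standard orientation.
    print(scale, np.linalg.det(H))      # both 6.0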

8.4 Vector Products

In this section, we generalize the well-known vector product in R^3 to an arbitrary scalar product space. Let (V, g) be a scalar product space with signature (ε_1, …, ε_m), let E = (e_1, …, e_m) be an orthonormal basis for V, and let v_1, …, v_{m−1} be vectors in V. The vector product (or cross product) of v_1, …, v_{m−1} (in that order) with respect to E is defined by

    v_1 × ⋯ × v_{m−1} = Σ_{i=1}^m ε_i (−1)^{i−1} det([[v_1]_E ⋯ [v_{m−1}]_E]^{(i)^c}) e_i,   (8.4.1)

where (i)^c is the multi-index (i)^c = (1, …, î, …, m). Clearly, the vector product of given vectors depends on the order in which they are taken and the choice of orthonormal basis. In this chapter, the vector product × on V is computed with respect to a given orthonormal basis E for V. Throughout, the vector product × on R^m_ν is computed with respect to the standard basis for R^m_ν.

We obtain a computationally convenient alternative to (8.4.1) as follows. Let

    v_j = Σ_i a_j^i e_i

for j = 1, …, m − 1, so that from (2.2.3),

    [[v_1]_E ⋯ [v_{m−1}]_E] =
        [ a_1^1  ⋯  a_{m−1}^1
          ⋮      ⋱  ⋮
          a_1^m  ⋯  a_{m−1}^m ].   (8.4.2)

Then (8.4.1) can be expressed as the formal identity

    v_1 × ⋯ × v_{m−1} = det
        [ ε_1 e_1  a_1^1  ⋯  a_{m−1}^1
          ⋮        ⋮      ⋱  ⋮
          ε_m e_m  a_1^m  ⋯  a_{m−1}^m ],   (8.4.3)

provided we expand the “determinant” along the first column. When m = 2, even though the notation for the left-hand sides of (8.4.1) and (8.4.3) simplifies to a single vector, the right-hand sides can still be computed. Let v = a^1 e_1 + a^2 e_2. Then the “vector product” of v is

    det [ ε_1 e_1  a^1
          ε_2 e_2  a^2 ] = ε_1 a^2 e_1 − ε_2 a^1 e_2.

When the context is meaningful, this quantity has the same properties as a vector product when m ≥ 3.
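Formula (8.4.1) is mechanical enough to implement directly. The following sketch is ours (the name vector_product is invented, and the signature is assumed to be supplied as a sequence of ±1 entries, with all coordinates taken with respect to the orthonormal basis E):

    import numpy as np

    def vector_product(signature, *vectors):
        """Vector product of m-1 coordinate vectors in an m-dimensional
        scalar product space, per (8.4.1). `signature` is a length-m
        sequence of +/-1; each vector is its coordinate column [v_j]_E."""
        m = len(signature)
        A = np.column_stack(vectors)          # the m x (m-1) matrix (8.4.2)
        assert A.shape == (m, m - 1)
        result = np.zeros(m)
        for i in range(m):
            minor = np.delete(A, i, axis=0)   # rows (i)^c = (1,...,i-hat,...,m)
            result[i] = signature[i] * (-1) ** i * np.linalg.det(minor)
        return result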

Example 8.4.1 (Vector Product on R^2_0 and R^3_0). Let (e_1, e_2) be the standard basis for R^2_0, and let (a^1, a^2) be a vector in R^2_0. The “vector product” of (a^1, a^2) is (a^2, −a^1). Now let (e_1, e_2, e_3) be the standard basis for R^3_0, and let v = (a^1, a^2, a^3) and w = (b^1, b^2, b^3) be vectors in R^3_0. Then

    [v]_E = [a^1 a^2 a^3]^T  and  [w]_E = [b^1 b^2 b^3]^T,

hence

    v × w = det [ e_1  a^1  b^1
                  e_2  a^2  b^2
                  e_3  a^3  b^3 ]
    = ( det[[a^2, b^2], [a^3, b^3]], −det[[a^1, b^1], [a^3, b^3]], det[[a^1, b^1], [a^2, b^2]] )
    = (a^2 b^3 − a^3 b^2, a^3 b^1 − a^1 b^3, a^1 b^2 − a^2 b^1).

In particular,

    e_1 × e_2 = e_3,  e_1 × e_3 = −e_2,  and  e_2 × e_3 = e_1.   ♦

Example 8.4.2 (Vector Product on R^2_1 and R^3_1). Let (e_1, e_2) be the standard basis for R^2_1, and let (a^1, a^2) be a vector in R^2_1. The “vector product” of (a^1, a^2) is (a^2, a^1). Now let (e_1, e_2, e_3) be the standard basis for R^3_1, and let v = (a^1, a^2, a^3) and w = (b^1, b^2, b^3) be vectors in R^3_1. Then

    [v]_E = [a^1 a^2 a^3]^T  and  [w]_E = [b^1 b^2 b^3]^T,

hence

    v × w = det [  e_1  a^1  b^1
                   e_2  a^2  b^2
                  −e_3  a^3  b^3 ]
    = ( det[[a^2, b^2], [a^3, b^3]], −det[[a^1, b^1], [a^3, b^3]], −det[[a^1, b^1], [a^2, b^2]] )
    = (a^2 b^3 − a^3 b^2, a^3 b^1 − a^1 b^3, a^2 b^1 − a^1 b^2).

In particular,

    e_1 × e_2 = −e_3,  e_1 × e_3 = −e_2,  and  e_2 × e_3 = e_1.   ♦

The vector product has a number of interesting algebraic properties, several of which, not surprisingly, are expressed in terms of determinants and wedge products.

Theorem 8.4.3. Let (V, g) be a scalar product space of dimension m ≥ 2, let E be an orthonormal basis for V, let v_1, …, v_{m−1}, w be vectors in V, and let c be a real number. Then, for i, j = 1, …, m − 1:
(a) ⟨v_i, v_1 × ⋯ × v_{m−1}⟩ = 0.
(b) v_1 × ⋯ × cv_i × ⋯ × v_{m−1} = c(v_1 × ⋯ × v_i × ⋯ × v_{m−1}).
(c) v_1 × ⋯ × (cv_i + w) × ⋯ × v_{m−1} = c(v_1 × ⋯ × v_i × ⋯ × v_{m−1}) + v_1 × ⋯ × w × ⋯ × v_{m−1}.
(d) v_1 × ⋯ × v_i × ⋯ × v_j × ⋯ × v_{m−1} = −(v_1 × ⋯ × v_j × ⋯ × v_i × ⋯ × v_{m−1}).
(e) If v_i = v_j, then v_1 × ⋯ × v_i × ⋯ × v_j × ⋯ × v_{m−1} = 0.

Proof. (a): From (8.4.1) and (8.4.2),

    ⟨v_i, v_1 × ⋯ × v_{m−1}⟩
    = ⟨Σ_j a_i^j e_j, Σ_k ε_k (−1)^{k−1} det([[v_1]_E ⋯ [v_{m−1}]_E]^{(k)^c}) e_k⟩
    = Σ_{j,k} a_i^j ⟨e_j, e_k⟩ ε_k (−1)^{k−1} det([[v_1]_E ⋯ [v_{m−1}]_E]^{(k)^c})
    = Σ_j (−1)^{j−1} a_i^j det([[v_1]_E ⋯ [v_{m−1}]_E]^{(j)^c})
    = det([v_i]_E [v_1]_E ⋯ [v_i]_E ⋯ [v_{m−1}]_E).


Since (at least) two columns of [[v_i]_E [v_1]_E ⋯ [v_i]_E ⋯ [v_{m−1}]_E] are equal, it follows from Theorem 2.4.2 that its determinant equals 0.
(b)–(d): Straightforward.

In light of Theorem 8.4.3(b), we drop parentheses and denote c(v_1 × ⋯ × v_i × ⋯ × v_{m−1}) by c v_1 × ⋯ × v_i × ⋯ × v_{m−1}.

The next result is the main reason for considering vector products.

Theorem 8.4.4. Let (V, g) be a scalar product space of dimension m ≥ 2, let E be an orthonormal basis for V, let v_1, …, v_{m−1} be vectors in V, and let U be the subspace of V spanned by v_1, …, v_{m−1}. Then v_1 × ⋯ × v_{m−1} is a vector in U^⊥.

Proof. This follows from Theorem 8.4.3(a).

At first glance, Theorem 8.4.4 appears to promise more than it actually delivers. To make the result truly informative, we need v_1 × ⋯ × v_{m−1} to be a nonzero vector and U^⊥ to be a 1-dimensional subspace of V. These conditions can be met with further assumptions: see Theorem 8.4.8 and Theorem 8.4.10(a).

It was remarked above that the vector product depends on the order in which vectors are taken and the choice of orthonormal basis. According to Theorem 8.4.3(d), changing the order of vectors at most affects the sign of the resulting vector product. In view of (8.4.1), it might be expected that computing with respect to a different orthonormal basis would have a significant impact on results. As Theorem 8.4.4 shows, this is not necessarily the case: regardless of the choice of orthonormal basis, the vector product is always in the perp of the subspace spanned by the constituent vectors. In fact, several of the results to follow exhibit this same feature: see Theorem 8.4.5, Theorem 8.4.6, Theorem 8.4.8, and Theorem 8.4.9.

Theorem 8.4.5. Let (V, g) be a scalar product space of dimension m ≥ 2 and index ν. Let E be an orthonormal basis for V, let (ξ^1, …, ξ^m) be its dual basis, and let u, v_1, …, v_{m−1} be vectors in V. Then:
(a) ⟨u, v_1 × ⋯ × v_{m−1}⟩ = det([u]_E [v_1]_E ⋯ [v_{m−1}]_E) = ξ^1 ∧ ⋯ ∧ ξ^m(u, v_1, …, v_{m−1}).
(b)

    ⟨u, v_1 × ⋯ × v_{m−1}⟩² = (−1)^ν det
        [ ⟨u, u⟩          ⟨u, v_1⟩          ⋯  ⟨u, v_{m−1}⟩
          ⟨v_1, u⟩        ⟨v_1, v_1⟩        ⋯  ⟨v_1, v_{m−1}⟩
          ⋮               ⋮                 ⋱  ⋮
          ⟨v_{m−1}, u⟩    ⟨v_{m−1}, v_1⟩    ⋯  ⟨v_{m−1}, v_{m−1}⟩ ].


Proof. Let E = (e_1, …, e_m), and let V have signature (ε_1, …, ε_m).
(a): Let u = Σ_i b^i e_i, so that [u]_E = [b^1 ⋯ b^m]^T. We have from Theorem 2.4.6 and (8.4.1) that

    ⟨u, v_1 × ⋯ × v_{m−1}⟩
    = ⟨Σ_i b^i e_i, Σ_j ε_j (−1)^{j−1} det([[v_1]_E ⋯ [v_{m−1}]_E]^{(j)^c}) e_j⟩
    = Σ_{i,j} b^i ⟨e_i, e_j⟩ ε_j (−1)^{j−1} det([[v_1]_E ⋯ [v_{m−1}]_E]^{(j)^c})
    = Σ_i (−1)^{i−1} b^i det([[v_1]_E ⋯ [v_{m−1}]_E]^{(i)^c})
    = det([u]_E [v_1]_E ⋯ [v_{m−1}]_E),

which proves the first equality. By Theorem 1.2.1(d), u = Σ_i ξ^i(u) e_i, hence

    [u]_E = [ξ^1(u) ⋯ ξ^m(u)]^T,

with corresponding expressions for [v_1]_E, …, [v_{m−1}]_E. Then Theorem 7.2.8 gives

    ξ^1 ∧ ⋯ ∧ ξ^m(u, v_1, …, v_{m−1}) = det
        [ ξ^1(u)  ξ^1(v_1)  ⋯  ξ^1(v_{m−1})
          ⋮       ⋮         ⋱  ⋮
          ξ^m(u)  ξ^m(v_1)  ⋯  ξ^m(v_{m−1}) ]
    = det([u]_E [v_1]_E ⋯ [v_{m−1}]_E),

which proves the second equality.
(b): Let

    P = [[u]_E [v_1]_E ⋯ [v_{m−1}]_E],

and note that g_E = diag(ε_1, …, ε_m). Then the matrix of scalar products displayed in part (b) satisfies

    det [⟨·, ·⟩ entries] = det(P^T g_E P)   [Th 3.1.2(b)]
    = det(g_E) det(P)² = (−1)^ν det(P)²   [(4.2.3)]
    = (−1)^ν ⟨u, v_1 × ⋯ × v_{m−1}⟩².   [part (a)]

Theorem 8.4.6. Let (V, g) be a scalar product space of dimension m ≥ 2 and index ν, let E be an orthonormal basis for V, and let v_1, …, v_{m−1}, w_1, …, w_{m−1} be vectors in V. Then:
(a)

    ⟨v_1 × ⋯ × v_{m−1}, w_1 × ⋯ × w_{m−1}⟩ = (−1)^ν det
        [ ⟨v_1, w_1⟩      ⋯  ⟨v_1, w_{m−1}⟩
          ⋮               ⋱  ⋮
          ⟨v_{m−1}, w_1⟩  ⋯  ⟨v_{m−1}, w_{m−1}⟩ ].

(b)

    ‖v_1 × ⋯ × v_{m−1}‖ = √( | det
        [ ⟨v_1, v_1⟩      ⋯  ⟨v_1, v_{m−1}⟩
          ⋮               ⋱  ⋮
          ⟨v_{m−1}, v_1⟩  ⋯  ⟨v_{m−1}, v_{m−1}⟩ ] | ).

Proof. Let E = (e_1, …, e_m), and let V have signature (ε_1, …, ε_m).
(a): Let

    P = [[v_1]_E ⋯ [v_{m−1}]_E]  and  Q = [[w_1]_E ⋯ [w_{m−1}]_E],

and note that g_E = diag(ε_1, …, ε_m). It is easily shown that

    det((g_E Q)^{(i)^c}) = (−1)^ν ε_i det(Q^{(i)^c}).   (8.4.4)

Then

    ⟨v_1 × ⋯ × v_{m−1}, w_1 × ⋯ × w_{m−1}⟩
    = ⟨Σ_i ε_i (−1)^{i−1} det(P^{(i)^c}) e_i, Σ_j ε_j (−1)^{j−1} det(Q^{(j)^c}) e_j⟩   [(8.4.1)]
    = Σ_{i,j} ε_i ε_j (−1)^{i+j} ⟨e_i, e_j⟩ det(P^{(i)^c}) det(Q^{(j)^c})
    = Σ_i ε_i det(P^{(i)^c}) det(Q^{(i)^c})
    = (−1)^ν Σ_i det(P^{(i)^c}) det((g_E Q)^{(i)^c})   [(8.4.4)]
    = (−1)^ν Σ_{I∈I_{m−1,m}} det(P^I) det((g_E Q)^I)
    = (−1)^ν det(P^T g_E Q)   [Th 2.4.10(a)]
    = (−1)^ν det
        [ ⟨v_1, w_1⟩      ⋯  ⟨v_1, w_{m−1}⟩
          ⋮               ⋱  ⋮
          ⟨v_{m−1}, w_1⟩  ⋯  ⟨v_{m−1}, w_{m−1}⟩ ].   [Th 3.1.2(b)]

(b): This follows from (4.1.1) and part (a).

Theorem 8.4.7. Let (V, g) be a scalar product space of dimension m ≥ 2, and let E = (e_1, …, e_m) be an orthonormal basis for V. Let U be a subspace of V of dimension m − 1, let u_1, …, u_{m−1} be vectors in U, and let A : U −→ U be a linear map. Then

    A(u_1) × ⋯ × A(u_{m−1}) = det(A) u_1 × ⋯ × u_{m−1}.

Proof. Let V have signature (ε_1, …, ε_m), and consider the function η^i : U^{m−1} −→ R defined by

    η^i(v_1, …, v_{m−1}) = ⟨v_1 × ⋯ × v_{m−1}, e_i⟩

for all vectors v_1, …, v_{m−1} in U for i = 1, …, m. It follows from parts (b) and (c) of Theorem 8.4.3 that η^i is a multicovector in Λ^{m−1}(U). Then

    ⟨A(u_1) × ⋯ × A(u_{m−1}), e_i⟩ = η^i(A(u_1), …, A(u_{m−1}))
    = A*(η^i)(u_1, …, u_{m−1})
    = det(A) η^i(u_1, …, u_{m−1})   [Th 7.3.4(a)]
    = det(A) ⟨u_1 × ⋯ × u_{m−1}, e_i⟩,

hence

    A(u_1) × ⋯ × A(u_{m−1})
    = Σ_i ε_i ⟨A(u_1) × ⋯ × A(u_{m−1}), e_i⟩ e_i   [Th 4.2.7]
    = det(A) Σ_i ε_i ⟨u_1 × ⋯ × u_{m−1}, e_i⟩ e_i
    = det(A) u_1 × ⋯ × u_{m−1}.   [Th 4.2.7]
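Theorem 8.4.6(b) lends itself to a quick numerical spot check. The following continuation of our vector_product sketch (Euclidean signature assumed, so ν = 0 and ⟨x, x⟩ is the ordinary squared length) compares the length of a vector product in R^4 with the square root of the corresponding Gram determinant:

    rng = np.random.default_rng(0)
    sig = [1, 1, 1, 1]
    vs = [rng.standard_normal(4) for _ in range(3)]

    cross = vector_product(sig, *vs)
    gram = np.array([[v @ w for w in vs] for v in vs])
    # Both quantities agree, per Theorem 8.4.6(b).
    print(np.linalg.norm(cross), np.sqrt(abs(np.linalg.det(gram))))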

Theorems 8.4.3–8.4.7 are of little interest when the vector product equals the zero vector. The next result gives a straightforward condition that avoids this situation.

Theorem 8.4.8. Let (V, g) be a scalar product space of dimension m ≥ 2, let E be an orthonormal basis for V, and let v_1, …, v_{m−1} be vectors in V. Then v_1, …, v_{m−1} are linearly independent if and only if v_1 × ⋯ × v_{m−1} ≠ 0.

Proof. We prove the logically equivalent assertion: v_1, …, v_{m−1} are linearly dependent if and only if v_1 × ⋯ × v_{m−1} = 0.
(⇒): Since v_1, …, v_{m−1} are linearly dependent, one of them can be expressed as a linear combination of the others, say, v_1 = Σ_{j=2}^{m−1} a^j v_j. Then

    det([[v_1]_E [v_2]_E ⋯ [v_{m−1}]_E]^{(i)^c})
    = det([[Σ_{j=2}^{m−1} a^j v_j]_E [v_2]_E ⋯ [v_{m−1}]_E]^{(i)^c})
    = Σ_{j=2}^{m−1} a^j det([[v_j]_E [v_2]_E ⋯ [v_{m−1}]_E]^{(i)^c}).

Since [[v_j]_E [v_2]_E ⋯ [v_{m−1}]_E] has [v_j]_E in (at least) two positions, it follows from Theorem 2.4.2 that det([[v_j]_E [v_2]_E ⋯ [v_{m−1}]_E]^{(i)^c}) = 0 for j = 2, …, m − 1. Thus,

    det([[v_1]_E [v_2]_E ⋯ [v_{m−1}]_E]^{(i)^c}) = 0

for i = 1, …, m. It follows from (8.4.1) that v_1 × ⋯ × v_{m−1} = 0.
(⇐): Since the dimension of span({v_1, …, v_{m−1}}) is at most m − 1, there is a vector v_0 in V∖span({v_1, …, v_{m−1}}). Then

    ⟨v_0, v_1 × ⋯ × v_{m−1}⟩ = 0
    ⇔ det([v_0]_E [v_1]_E ⋯ [v_{m−1}]_E) = 0   [Th 8.4.5(a)]
    ⇔ [v_0]_E, [v_1]_E, …, [v_{m−1}]_E are linearly dependent   [Th 2.4.9]
    ⇔ v_0, v_1, …, v_{m−1} are linearly dependent   [Th 2.2.1]
    ⇔ v_1, …, v_{m−1} are linearly dependent.   [Th 1.1.2]

Thus, if v_1 × ⋯ × v_{m−1} = 0, then v_1, …, v_{m−1} are linearly dependent.

Whether vectors in a vector space are linearly independent or linearly dependent is unrelated to the presence or absence of a scalar product. This means that in Theorem 8.4.8, linear independence can be checked using any convenient choice of scalar product and orthonormal basis.

Theorem 8.4.9. Let (V, g) be a scalar product space of dimension m ≥ 2, let E be an orthonormal basis for V, let U be a subspace of V of dimension m − 1, and let (u_1, …, u_{m−1}) be a basis for U. Then g is nondegenerate on U if and only if ‖u_1 × ⋯ × u_{m−1}‖ ≠ 0.

Proof. The matrix of g|_U with respect to (u_1, …, u_{m−1}) is [⟨u_i, u_j⟩], so

    g is nondegenerate on U
    ⇔ g|_U is nondegenerate on U   [by definition]
    ⇔ det[⟨u_i, u_j⟩] ≠ 0   [Th 3.3.3]
    ⇔ ⟨u_1 × ⋯ × u_{m−1}, u_1 × ⋯ × u_{m−1}⟩ ≠ 0   [Th 8.4.6(a)]
    ⇔ ‖u_1 × ⋯ × u_{m−1}‖ ≠ 0.
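Theorem 8.4.9 also gives a practical numerical test for nondegeneracy. In the following continuation of our sketch (Minkowski signature (+, +, −) assumed; the helper sq_norm is invented), a plane containing a null vector is detected as degenerate because the vector product there has vanishing scalar square:

    def sq_norm(signature, x):
        """Scalar product <x, x> for a diagonal signature."""
        return sum(s * c * c for s, c in zip(signature, x))

    sig = [1, 1, -1]
    # U spanned by a spacelike vector and a null vector: g is degenerate on U.
    u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])
    print(sq_norm(sig, vector_product(sig, u1, u2)))   # 0.0 -> degenerate

    # Replacing the null vector by a timelike one makes g nondegenerate on U.
    u3 = np.array([0.0, 0.0, 1.0])
    print(sq_norm(sig, vector_product(sig, u1, u3)))   # nonzero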

Theorem 8.4.10. Let (V, g) be a scalar product space of dimension m ≥ 2, and let E be an orthonormal basis for V. Let U be a subspace of V of dimension m − 1 on which g is nondegenerate, let (u_1, …, u_{m−1}) be a basis for U, and let ε be the sign of ⟨u_1 × ⋯ × u_{m−1}, u_1 × ⋯ × u_{m−1}⟩, so that

    ⟨ (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖ , (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖ ⟩ = ε.   (8.4.5)

Then:
(a) U^⊥ = R(u_1 × ⋯ × u_{m−1}) is a 1-dimensional subspace of V.
(b) (ε u_1 × ⋯ × u_{m−1}, u_1, …, u_{m−1}) is a basis for V that is consistent with E.
(c) If (u_1, …, u_{m−1}) is an orthonormal basis for U, then (u_1 × ⋯ × u_{m−1}, u_1, …, u_{m−1}) is an orthonormal basis for V.

Remark. By Theorem 8.4.9, ⟨u_1 × ⋯ × u_{m−1}, u_1 × ⋯ × u_{m−1}⟩ ≠ 0, so ε is defined and the denominators in (8.4.5) are nonzero.

Proof. (a): We have from Theorem 4.1.2(b) that U^⊥ is a 1-dimensional subspace of V, from Theorem 8.4.3(a) that u_1 × ⋯ × u_{m−1} is in U^⊥, and from Theorem 8.4.8 that u_1 × ⋯ × u_{m−1} ≠ 0. The result follows.
(b): By Theorem 4.1.3, V = U ⊕ U^⊥. Since (u_1, …, u_{m−1}) is a basis for U, it follows from Theorem 1.1.2 and part (a) that 𝒰 = (ε u_1 × ⋯ × u_{m−1}, u_1, …, u_{m−1}) is a basis for V. We have from (2.2.7) and Theorem 8.4.5(a) that

    det([id_V]_𝒰^E) = det([ε u_1 × ⋯ × u_{m−1}]_E [u_1]_E ⋯ [u_{m−1}]_E)
    = ε ⟨u_1 × ⋯ × u_{m−1}, u_1 × ⋯ × u_{m−1}⟩
    = |⟨u_1 × ⋯ × u_{m−1}, u_1 × ⋯ × u_{m−1}⟩| > 0,

so 𝒰 is consistent with E.
(c): Since (u_1, …, u_{m−1}) is orthonormal, we have from Theorem 8.4.6(b) that ‖u_1 × ⋯ × u_{m−1}‖ = 1, so u_1 × ⋯ × u_{m−1} is a unit vector. By Theorem 8.4.3(a), ⟨u_1 × ⋯ × u_{m−1}, u_i⟩ = 0 for i = 1, …, m − 1. The result follows.

Theorem 8.4.11. Let (V, g) be a scalar product space of dimension m ≥ 2, let E be an orthonormal basis for V, let U be a subspace of V of dimension m − 1 on which g is nondegenerate, and let 𝒰 = (u_1, …, u_{m−1}) and 𝒱 = (v_1, …, v_{m−1}) be bases for U. Then:
(a) u_1 × ⋯ × u_{m−1} = det([id_U]_𝒰^𝒱) v_1 × ⋯ × v_{m−1}.
(b) (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖ = ± (v_1 × ⋯ × v_{m−1})/‖v_1 × ⋯ × v_{m−1}‖, where the positive (negative) sign is chosen if 𝒰 and 𝒱 are consistent (not consistent).

Remark. By Theorem 8.4.9, the denominators in part (b) are nonzero.

Proof. (a): Let P = [id_U]_𝒰^𝒱, and let A : U −→ U be the linear isomorphism defined by A(𝒰) = 𝒱. By Theorem 8.4.7,

    v_1 × ⋯ × v_{m−1} = det(A) u_1 × ⋯ × u_{m−1},

and by Theorem 2.2.5,

    P = [id_U]_{A^{−1}(𝒱)}^𝒱 = [A^{−1}]_𝒱^𝒱 = ([A]_𝒱^𝒱)^{−1},

hence det(A) = det(P)^{−1}. The result follows.
(b): We have from Theorem 2.4.12 and Theorem 8.4.9 that det(P), ‖u_1 × ⋯ × u_{m−1}‖, and ‖v_1 × ⋯ × v_{m−1}‖ are each nonzero. Then part (a) gives

    (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖ = [det(P)/|det(P)|] (v_1 × ⋯ × v_{m−1})/‖v_1 × ⋯ × v_{m−1}‖
    = sgn(det(P)) (v_1 × ⋯ × v_{m−1})/‖v_1 × ⋯ × v_{m−1}‖.

Theorem 8.4.12. Let (V, g) be a scalar product space of dimension m ≥ 2, let E = (e_1, …, e_m) and F be orthonormal bases for V, and let ×′ denote the vector product operation corresponding to F. Then

    e_1 ×′ ⋯ ×′ ê_i ×′ ⋯ ×′ e_m = ε_i (−1)^{i−1} det([id_V]_E^F) e_i

for i = 1, …, m, where ˆ indicates that an expression is omitted.

Proof. Let U_i be the subspace of V spanned by e_1, …, ê_i, …, e_m. Since g is nondegenerate on Re_i, we have from Theorem 4.1.3 that V = Re_i ⊕ (Re_i)^⊥ and g is nondegenerate on (Re_i)^⊥. By definition, each of e_1, …, ê_i, …, e_m is orthogonal to e_i, so U_i ⊆ (Re_i)^⊥. From Theorem 1.1.18,

    m = dim(V) = dim(Re_i) + dim((Re_i)^⊥) = 1 + dim((Re_i)^⊥),

hence dim((Re_i)^⊥) = m − 1 = dim(U_i). By Theorem 1.1.7(b), U_i = (Re_i)^⊥, so g is nondegenerate on U_i. It follows from Theorem 4.1.2(c) and Theorem 8.4.10(a) that

    Re_i = U_i^⊥ = R(e_1 ×′ ⋯ ×′ ê_i ×′ ⋯ ×′ e_m),

so e_1 ×′ ⋯ ×′ ê_i ×′ ⋯ ×′ e_m = c_i e_i for some real number c_i. We have

    c_i ε_i = ⟨e_i, c_i e_i⟩ = ⟨e_i, e_1 ×′ ⋯ ×′ ê_i ×′ ⋯ ×′ e_m⟩
    = det([e_i]_F [e_1]_F ⋯ [ê_i]_F ⋯ [e_m]_F)   [Th 8.4.5(a)]
    = (−1)^{i−1} det([e_1]_F ⋯ [e_i]_F ⋯ [e_m]_F)   [Th 8.4.3(d)]
    = (−1)^{i−1} det([id_V]_E^F),   [(2.2.7)]

hence

    c_i = ε_i (−1)^{i−1} det([id_V]_E^F).

The result follows.


Theorem 8.4.13. Let (V, g) be a scalar product space of dimension m ≥ 2, let E and F be orthonormal bases for V, and let × and ×′ denote the corresponding vector product operations. Let U be a subspace of V of dimension m − 1 on which g is nondegenerate, and let (u_1, …, u_{m−1}) be a basis for U. Then:
(a) u_1 ×′ ⋯ ×′ u_{m−1} = det([id_V]_E^F) u_1 × ⋯ × u_{m−1}.
(b) (u_1 ×′ ⋯ ×′ u_{m−1})/‖u_1 ×′ ⋯ ×′ u_{m−1}‖ = ± (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖, where the positive (negative) sign is chosen if E and F are consistent (not consistent).

Remark. By Theorem 8.4.9, the denominators in part (b) are nonzero.

Proof. (a): Let E = (e_1, …, e_m) and F = (f_1, …, f_m), and let

    P = [id_V]_E^F = [[e_1]_F ⋯ [e_m]_F],

where the second equality comes from (2.2.7). Let

    Q = [[u_1]_E ⋯ [u_{m−1}]_E],

and let R_i = (P^{(i)^c})^T for i = 1, …, m. By Theorem 2.1.6(b),

    R_i^{(j)^c} = (P^{(i)^c}_{(j)^c})^T

for j = 1, …, m. It follows from (2.2.5) that

    [[u_1]_F ⋯ [u_{m−1}]_F] = [id_V]_E^F [[u_1]_E ⋯ [u_{m−1}]_E] = PQ,

and then from Theorem 2.1.6(c) that

    [[u_1]_F ⋯ [u_{m−1}]_F]^{(i)^c} = P^{(i)^c} Q.

Then (8.4.1) gives

    u_1 ×′ ⋯ ×′ u_{m−1} = Σ_i ε_i (−1)^{i−1} det([[u_1]_F ⋯ [u_{m−1}]_F]^{(i)^c}) f_i
    = Σ_i ε_i (−1)^{i−1} det(P^{(i)^c} Q) f_i.

By Theorem 2.4.10(a),

    det(P^{(i)^c} Q) = det(R_i^T Q) = Σ_j det(R_i^{(j)^c}) det(Q^{(j)^c}) = Σ_j det(P^{(i)^c}_{(j)^c}) det(Q^{(j)^c}).

We have

    P^{(i)^c}_{(j)^c} = [[e_1]_F ⋯ [e_m]_F]^{(i)^c}_{(j)^c} = [[e_1]_F ⋯ [ê_j]_F ⋯ [e_m]_F]^{(i)^c},

where ˆ indicates that an expression is omitted. The preceding three identities combine to give

    u_1 ×′ ⋯ ×′ u_{m−1}
    = Σ_i ε_i (−1)^{i−1} ( Σ_j det(P^{(i)^c}_{(j)^c}) det(Q^{(j)^c}) ) f_i
    = Σ_j det(Q^{(j)^c}) ( Σ_i ε_i (−1)^{i−1} det([[e_1]_F ⋯ [ê_j]_F ⋯ [e_m]_F]^{(i)^c}) f_i )
    = Σ_j det(Q^{(j)^c}) (e_1 ×′ ⋯ ×′ ê_j ×′ ⋯ ×′ e_m),

where the last equality follows from (8.4.1). By Theorem 8.4.12,

    e_1 ×′ ⋯ ×′ ê_j ×′ ⋯ ×′ e_m = ε_j (−1)^{j−1} det(P) e_j

for j = 1, …, m. Thus,

    u_1 ×′ ⋯ ×′ u_{m−1} = det(P) Σ_j ε_j (−1)^{j−1} det(Q^{(j)^c}) e_j
    = det(P) Σ_j ε_j (−1)^{j−1} det([[u_1]_E ⋯ [u_{m−1}]_E]^{(j)^c}) e_j
    = det(P) u_1 × ⋯ × u_{m−1},

where the last equality follows from (8.4.1).
(b): We have from Theorem 2.4.12 and Theorem 8.4.9 that det(P), ‖u_1 × ⋯ × u_{m−1}‖, and ‖u_1 ×′ ⋯ ×′ u_{m−1}‖ are each nonzero. Then Theorem 8.3.3 and part (a) give

    (u_1 ×′ ⋯ ×′ u_{m−1})/‖u_1 ×′ ⋯ ×′ u_{m−1}‖ = [det(P)/|det(P)|] (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖
    = sgn(det(P)) (u_1 × ⋯ × u_{m−1})/‖u_1 × ⋯ × u_{m−1}‖.
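As a final numerical check of our own on Theorem 8.4.13(a), take F to be a reflected orthonormal basis of Euclidean R^3, so that det([id_V]_E^F) = −1 and the two vector products differ by a sign (this reuses the vector_product sketch from earlier in the section):

    # A second orthonormal basis F of Euclidean R^3 (a reflection of E),
    # so det([id_V]_E^F) = -1 and Theorem 8.4.13(a) predicts a sign flip.
    F = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])   # columns are f_1, f_2, f_3
    u1, u2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 3.0])

    cross_E = vector_product([1, 1, 1], u1, u2)
    # Coordinates with respect to F, product per (8.4.1), then back to E.
    coords = vector_product([1, 1, 1], F.T @ u1, F.T @ u2)
    cross_F = F @ coords
    print(cross_F, np.linalg.det(F.T) * cross_E)   # equal: det(P) = -1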


8.5 Hodge Star

In this section, we define a map that assigns to a given multicovector another multicovector that “complements” the first. The methods that result add to our growing armamentarium of techniques for computing with multicovectors.

Theorem 8.5.1. Let (V, g, O) be an oriented scalar product space of dimension m, let Ω_V be its volume multicovector, and let η be a multicovector in Λ^s(V), where s ≤ m. Then there is a unique multicovector ⋆(η) in Λ^{m−s}(V) such that

    η ∧ ζ = ⟨⋆(η), ζ⟩_Λ Ω_V

for all multicovectors ζ in Λ^{m−s}(V), where ⟨·,·⟩_Λ is the scalar product defined in Section 7.5.

Proof. Existence. By Theorem 7.2.12(d), (Ω_V) is a basis for Λ^m(V), so η ∧ ζ = f_η(ζ) Ω_V for some real number f_η(ζ). The assignment ζ ↦ f_η(ζ) defines a function f_η : Λ^{m−s}(V) −→ R. Evidently, f_η is linear, so f_η is in Λ^{m−s}(V)*. Recall the flat map and sharp map introduced at the end of Section 7.5:

    F_Λ : Λ^{m−s}(V) −→ Λ^{m−s}(V)*  and  S_Λ : Λ^{m−s}(V)* −→ Λ^{m−s}(V).

Let us define ⋆(η) = (f_η)^{S_Λ}. Then ⋆(η) is a multicovector in Λ^{m−s}(V) and f_η = (⋆(η))^{F_Λ}. According to (7.5.12), f_η(ζ) = ⟨⋆(η), ζ⟩_Λ, so

    η ∧ ζ = ⟨⋆(η), ζ⟩_Λ Ω_V   (8.5.1)

for all multicovectors ζ in Λ^{m−s}(V). Thus, ⋆(η) satisfies the desired property.

Uniqueness. Suppose ξ is a multicovector in Λ^{m−s}(V) satisfying the specified property; that is,

    η ∧ ζ = ⟨ξ, ζ⟩_Λ Ω_V   (8.5.2)

for all multicovectors ζ in Λ^{m−s}(V). Since (Ω_V) is a basis for Λ^m(V), we have from (8.5.1) and (8.5.2) that ⟨ξ, ζ⟩_Λ = ⟨⋆(η), ζ⟩_Λ, hence ⟨ξ − ⋆(η), ζ⟩_Λ = 0. Because ζ was arbitrary and, by Theorem 7.5.3(a), ⟨·,·⟩_Λ is nondegenerate, ξ − ⋆(η) = 0.

Hodge star is the family of linear maps ⋆ : Λ^s(V) −→ Λ^{m−s}(V) defined for s ≤ m by the assignment η ↦ ⋆(η) for all multicovectors η in Λ^s(V). Let us denote ⋆ ∘ ⋆ by ⋆².


Theorem 8.5.2. Let (V, g, O) be an oriented scalar product space, let (e_1, …, e_m) be an orthonormal basis for V that is positively oriented with respect to O, let (ξ^1, …, ξ^m) be its dual basis, and let I be a multi-index in I_{s,m}. Then

    ⋆(ξ^I) = sgn(σ_{(I,I^c)}) ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ ξ^{I^c},

where σ_{(I,I^c)} is the permutation defined in Section B.2.

Proof. We have from Theorem 7.5.3(b) that (ξ^I : I ∈ I_{s,m}) is an orthonormal basis for (Λ^s(V), g^Λ), and then from Theorem 4.2.7 that

    ⋆(ξ^I) = Σ_{J∈I_{m−s,m}} ⟨ξ^J, ξ^J⟩_Λ ⟨⋆(ξ^I), ξ^J⟩_Λ ξ^J.   (8.5.3)

According to Theorem 8.5.1,

    ⟨⋆(ξ^I), ξ^J⟩_Λ Ω_V = ξ^I ∧ ξ^J.

For J = I^c, we have

    ξ^I ∧ ξ^{I^c} = sgn(σ_{(I,I^c)}) ξ^1 ∧ ⋯ ∧ ξ^m   [Th 7.2.10]
    = sgn(σ_{(I,I^c)}) Ω_V,   [Th 8.3.4]

hence

    ⟨⋆(ξ^I), ξ^{I^c}⟩_Λ Ω_V = sgn(σ_{(I,I^c)}) Ω_V.

Since (Ω_V) is a basis for Λ^m(V),

    ⟨⋆(ξ^I), ξ^{I^c}⟩_Λ = sgn(σ_{(I,I^c)}).

On the other hand, for J ≠ I^c, we have from Theorem 7.2.6 that ξ^I ∧ ξ^J = 0. In summary,

    ⟨⋆(ξ^I), ξ^J⟩_Λ = sgn(σ_{(I,I^c)}) if J = I^c, and 0 if J ≠ I^c.

Substituting into (8.5.3) gives the result.

Theorem 8.5.2 provides a way to compute with ⋆ on an orthonormal basis for Λ^s(V). Computations are then extended to all of Λ^s(V) by the linearity of ⋆.

Theorem 8.5.3. Let (V, g, O) be an oriented scalar product space of dimension m and index ν, let Ω_V be its volume multicovector, and let η, ζ be multicovectors in Λ^s(V). Then:
(a) ⋆(1) = (−1)^ν Ω_V.
(b) ⋆(Ω_V) = 1.
(c) ⋆²(η) = (−1)^{s(m−s)+ν} η.
(d) η ∧ ⋆(ζ) = (−1)^ν ⟨η, ζ⟩_Λ Ω_V.


Proof. (b): Recall from (7.1.3) that Λ^0(V) = R. Since 1 and ⋆(Ω_V) are in Λ^0(V), by Theorem 8.5.1,

    Ω_V = Ω_V ∧ 1 = ⟨⋆(Ω_V), 1⟩_Λ Ω_V = ⋆(Ω_V) Ω_V,

hence ⋆(Ω_V) = 1.
(c): We have

    ⋆²(ξ^I) = ⋆(⋆(ξ^I))
    = ⋆(sgn(σ_{(I,I^c)}) ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ ξ^{I^c})   [Th 8.5.2]
    = sgn(σ_{(I,I^c)}) ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ ⋆(ξ^{I^c})
    = sgn(σ_{(I,I^c)}) ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ sgn(σ_{(I^c,I)}) ⟨ξ^I, ξ^I⟩_Λ ξ^I   [Th 8.5.2]
    = (−1)^{s(m−s)+ν} ξ^I.   [Th 7.5.2, Th B.2.5]

Since the ξ^I span Λ^s(V), the result for η follows from the linearity of ⋆.
(a): We have

    ⋆(1) = ⋆²(Ω_V)   [part (b)]
    = (−1)^ν Ω_V.   [part (c), s = m]

(d): We have

    η ∧ ⋆(ζ) = (−1)^{s(m−s)} ⋆(ζ) ∧ η   [Th 7.2.3(a)]
    = (−1)^{s(m−s)} ⟨⋆²(ζ), η⟩_Λ Ω_V   [Th 8.5.1]
    = (−1)^{s(m−s)} (−1)^{s(m−s)+ν} ⟨ζ, η⟩_Λ Ω_V   [part (c)]
    = (−1)^ν ⟨η, ζ⟩_Λ Ω_V.

Example 8.5.4 (Hodge Star on R^3_0). Let E be the standard basis for R^3_0, and let (ξ^1, ξ^2, ξ^3) be its dual basis. Using Theorem 7.5.2 and Theorem 8.5.2, we obtain the following tables:

    I      ξ^I          ⋆(ξ^I)        I^c     sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1)    ξ^1          ξ^2 ∧ ξ^3     (2,3)    1                  1
    (2)    ξ^2          −ξ^1 ∧ ξ^3    (1,3)   −1                  1
    (3)    ξ^3          ξ^1 ∧ ξ^2     (1,2)    1                  1

    I      ξ^I          ⋆(ξ^I)   I^c    sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1,2)  ξ^1 ∧ ξ^2    ξ^3      (3)     1                  1
    (1,3)  ξ^1 ∧ ξ^3    −ξ^2     (2)    −1                  1
    (2,3)  ξ^2 ∧ ξ^3    ξ^1      (1)     1                  1

♦


Example 8.5.5 (Hodge Star on R^3_1). Let E be the standard basis for R^3_1, and let (ξ^1, ξ^2, ξ^3) be its dual basis. Using Theorem 7.5.2 and Theorem 8.5.2, we obtain the following tables:

    I      ξ^I          ⋆(ξ^I)        I^c     sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1)    ξ^1          −ξ^2 ∧ ξ^3    (2,3)    1                 −1
    (2)    ξ^2          ξ^1 ∧ ξ^3     (1,3)   −1                 −1
    (3)    ξ^3          ξ^1 ∧ ξ^2     (1,2)    1                  1

    I      ξ^I          ⋆(ξ^I)   I^c    sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1,2)  ξ^1 ∧ ξ^2    −ξ^3     (3)     1                 −1
    (1,3)  ξ^1 ∧ ξ^3    −ξ^2     (2)    −1                  1
    (2,3)  ξ^2 ∧ ξ^3    ξ^1      (1)     1                  1

♦

Example 8.5.6 (Hodge Star on R^4_1). Let E be the standard basis for R^4_1, and let (ξ^1, ξ^2, ξ^3, ξ^4) be its dual basis. Using Theorem 7.5.2 and Theorem 8.5.2, we obtain the following tables:

    I      ξ^I     ⋆(ξ^I)               I^c       sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1)    ξ^1     −ξ^2 ∧ ξ^3 ∧ ξ^4     (2,3,4)    1                 −1
    (2)    ξ^2     ξ^1 ∧ ξ^3 ∧ ξ^4      (1,3,4)   −1                 −1
    (3)    ξ^3     −ξ^1 ∧ ξ^2 ∧ ξ^4     (1,2,4)    1                 −1
    (4)    ξ^4     −ξ^1 ∧ ξ^2 ∧ ξ^3     (1,2,3)   −1                  1

    I      ξ^I          ⋆(ξ^I)        I^c     sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1,2)  ξ^1 ∧ ξ^2    −ξ^3 ∧ ξ^4    (3,4)    1                 −1
    (1,3)  ξ^1 ∧ ξ^3    ξ^2 ∧ ξ^4     (2,4)   −1                 −1
    (1,4)  ξ^1 ∧ ξ^4    ξ^2 ∧ ξ^3     (2,3)    1                  1
    (2,3)  ξ^2 ∧ ξ^3    −ξ^1 ∧ ξ^4    (1,4)    1                 −1
    (2,4)  ξ^2 ∧ ξ^4    −ξ^1 ∧ ξ^3    (1,3)   −1                  1
    (3,4)  ξ^3 ∧ ξ^4    ξ^1 ∧ ξ^2     (1,2)    1                  1

    I        ξ^I                ⋆(ξ^I)   I^c   sgn(σ_{(I,I^c)})   ⟨ξ^{I^c}, ξ^{I^c}⟩_Λ
    (1,2,3)  ξ^1 ∧ ξ^2 ∧ ξ^3    −ξ^4     (4)    1                 −1
    (1,2,4)  ξ^1 ∧ ξ^2 ∧ ξ^4    −ξ^3     (3)   −1                  1
    (1,3,4)  ξ^1 ∧ ξ^3 ∧ ξ^4    ξ^2      (2)    1                  1
    (2,3,4)  ξ^2 ∧ ξ^3 ∧ ξ^4    −ξ^1     (1)   −1                  1

♦
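Theorem 8.5.2 reduces the computation of ⋆ on basis multicovectors to a permutation sign and a product of metric signs, which is easy to automate. The following sketch is ours (the function name hodge_star is invented); it returns the coefficient and complementary multi-index such that ⋆(ξ^I) equals the coefficient times ξ^{I^c}, and it reproduces the rows of the tables above:

    import itertools
    import numpy as np

    def hodge_star(signature, I):
        """Return (coeff, I_complement) with star(xi^I) = coeff * xi^(I^c),
        per Theorem 8.5.2: coeff = sgn(sigma_(I,I^c)) * <xi^(I^c), xi^(I^c)>_Lambda."""
        m = len(signature)
        Ic = tuple(i for i in range(1, m + 1) if i not in I)
        perm = tuple(I) + Ic
        # Sign of the permutation sigma_(I,I^c), by counting inversions.
        inv = sum(1 for a, b in itertools.combinations(perm, 2) if a > b)
        sgn = -1 if inv % 2 else 1
        # <xi^(I^c), xi^(I^c)>_Lambda is the product of metric signs over I^c.
        metric = int(np.prod([signature[i - 1] for i in Ic]))
        return sgn * metric, Ic

    # Rows of Example 8.5.6 (R^4_1, signature (+, +, +, -)):
    print(hodge_star((1, 1, 1, -1), (1,)))       # (-1, (2, 3, 4))
    print(hodge_star((1, 1, 1, -1), (1, 2)))     # (-1, (3, 4))
    print(hodge_star((1, 1, 1, -1), (2, 3, 4)))  # (-1, (1,))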

Theorem 8.5.7. Let (V, g, O) be an oriented scalar product space of index ν, let Ω_V be its volume multicovector, and let w be a vector in V. Then

    i_w(Ω_V) = (−1)^ν ⋆(w^F).

Proof. Let V have signature (ε_1, …, ε_m), let (e_1, …, e_m) be an orthonormal basis for V that is positively oriented with respect to O, and let (ξ^1, …, ξ^m) be its dual basis. Since interior multiplication, Hodge star, and the flat map are all linear, it suffices to show that i_{e_j}(Ω_V) = (−1)^ν ⋆(e_j^F) for j = 1, …, m. By Theorem 8.3.4, Ω_V = ξ^1 ∧ ⋯ ∧ ξ^m, so Theorem 7.4.2 gives

    i_{e_j}(Ω_V) = Σ_k (−1)^{k−1} ξ^k(e_j) ξ^1 ∧ ⋯ ∧ ξ̂^k ∧ ⋯ ∧ ξ^m
    = (−1)^{j−1} ξ^1 ∧ ⋯ ∧ ξ̂^j ∧ ⋯ ∧ ξ^m,   (8.5.4)

where ˆ indicates that an expression is omitted. Since ξ^{(j)^c} = ξ^1 ∧ ⋯ ∧ ξ̂^j ∧ ⋯ ∧ ξ^m and ((j), (j)^c) = (j, 1, …, j−1, j+1, …, m), it is easily shown that

    ε_j ⟨ξ^{(j)^c}, ξ^{(j)^c}⟩_Λ = (−1)^ν  and  sgn(σ_{((j),(j)^c)}) = (−1)^{j−1}.

Then

    ⋆(e_j^F) = ε_j ⋆(ξ^j)   [Th 4.5.5(a)]
    = ε_j sgn(σ_{((j),(j)^c)}) ⟨ξ^{(j)^c}, ξ^{(j)^c}⟩_Λ ξ^{(j)^c}   [Th 8.5.2]
    = (−1)^{j−1+ν} ξ^1 ∧ ⋯ ∧ ξ̂^j ∧ ⋯ ∧ ξ^m.   (8.5.5)

The result now follows from (8.5.4) and (8.5.5).

Chapter 9

Topology

9.1 Topology

Having completed an overview of linear and multilinear algebra, we now turn our attention to topology. This is an extensive area of mathematics, and of necessity the coverage presented here is highly selective. To the uninitiated, topology can be dauntingly abstract. The best example of a topology (and the motivation for much of what follows) is the Euclidean topology on R^m, covered briefly in Section 9.4 and preceded by preparatory material in Section 9.2 and Section 9.3. Readers new to topology might find it helpful to peruse these sections early on to get a glimpse of where the discussion below is heading.

A topology on a set X is a collection T of subsets of X such that:
[T1] ∅ and X are in T.
[T2] The union of any subcollection of elements of T is in T.
[T3] The intersection of any finite subcollection of elements of T is in T.

The pair (X, T) is referred to as a topological space and each element of T is said to be an open set in T or simply open in T. Each element of X is called a point in X. Any open set in T containing a given point x in X is said to be a neighborhood of x in X. We say that a subset K of X is a closed set in T or simply closed in T if X∖K is an open set in X. It is often convenient to adopt the shorthand of referring to X as a topological space, with T understood from the context. Accordingly, if U is an open set in T and K is a closed set in T, we say that U is an open set in X or simply open in X, and that K is a closed set in X or simply closed in X.

Theorem 9.1.1. If X is a topological space, then the following are closed sets in X:
(a) ∅ and X.

(b) The intersection of any collection of closed sets in X.
(c) The union of any finite collection of closed sets in X.

Proof. (a): Straightforward.
(b): Let {K_α : α ∈ A} be a collection of closed sets in X, and let K = ∩_{α∈A} K_α. Since each X∖K_α is open in X and, by Theorem A.1(d),

    X∖K = ∪_{α∈A} (X∖K_α),

it follows that X∖K is open in X. Thus, K = X∖(X∖K) is closed in X.
(c): Let {K_i : i = 1, …, m} be a finite collection of closed sets in X, and let K = ∪_{i=1}^m K_i. Since each X∖K_i is open in X and, by Theorem A.1(c),

    X∖K = ∩_{i=1}^m (X∖K_i),

it follows that X∖K is open in X. Thus, K = X∖(X∖K) is closed in X.

Let X be a topological space, and let S be a subset of X. The interior of S in X is denoted by int_X(S) and defined to be the union of all open sets in X contained in S. The exterior of S in X is denoted by ext_X(S) and defined to be the union of all open sets in X contained in X∖S. The boundary of S in X is denoted by bd_X(S) and defined to be the set of all points in X that are neither in int_X(S) nor in ext_X(S). Thus, X is the disjoint union

    X = int_X(S) ∪ ext_X(S) ∪ bd_X(S).

The closure of S in X is denoted by cl_X(S) and defined to be the intersection of all closed sets in X containing S. Here are some of the basic facts about interiors, exteriors, boundaries, and closures.

Theorem 9.1.2. If X is a topological space and S is a subset of X, then:
(a) int_X(S) is the largest open set in X contained in S.
(b) ext_X(S) is the largest open set in X contained in X∖S.
(c) cl_X(S) is the smallest closed set in X containing S.
(d) bd_X(S) is a closed set in X.
(e) S is closed in X if and only if S = cl_X(S).
(f) bd_X(S) is the set of points x in X such that every neighborhood of x intersects S and X∖S.
(g) cl_X(S) is the set of points x in X such that every neighborhood of x intersects S.

Proof. (a)–(c): Straightforward.
(d): Since int_X(S) and ext_X(S) are open in X,

    bd_X(S) = X∖[int_X(S) ∪ ext_X(S)]

is closed in X.
(e)(⇒): Since S is closed in X, the intersection of all closed sets in X containing S is simply S.
(e)(⇐): This follows from part (c).
(f): We prove the logically equivalent assertion:
(f′) A point x in X is not in bd_X(S) if and only if it has a neighborhood in X that does not intersect either S or X∖S.
(f′)(⇒): Since x is not in bd_X(S), it is in either int_X(S) or ext_X(S). If x is in int_X(S), then it is contained in an open set U in X that is a subset of S. Thus, U is a neighborhood of x that does not intersect X∖S. The argument when x is in ext_X(S) is similar.
(f′)(⇐): Let U be a neighborhood of x in X. If U does not intersect S, then U is an open set in X∖S, hence U ⊆ ext_X(S). Thus, x is in ext_X(S), which is disjoint from bd_X(S). The argument when U does not intersect X∖S is similar.
(g): The proof is similar to that of part (f).

Rather than having to deal with all open sets in X, it is often convenient to work with a smaller collection that contains the essential information on “openness”. A basis for X is a collection B of subsets of X, each of which is called a basis element, such that:
[B1] X = ∪_{B∈B} B.
[B2] If B_1, B_2 are basis elements in B and x is a point in B_1 ∩ B_2, then there is a basis element B in B containing x such that B ⊆ B_1 ∩ B_2.

Theorem 9.1.3. If X is a set and B is a basis for X, then the collection T_B consisting of all unions of basis elements in B is a topology on X, called the topology generated by B.

Proof. We need to show that [T1]–[T3] are satisfied. It is vacuously true that ∅ is in T_B and, by [B1], so is X. Thus, [T1] is satisfied. Evidently, [T2] is satisfied. For [T3], let B_1, B_2 be basis elements in B. We claim that B_1 ∩ B_2 is in T_B. This is clearly the case if B_1 ∩ B_2 = ∅, so suppose otherwise. By [B2], for each x in B_1 ∩ B_2, there is a basis element B_x in B such that x is in B_x and B_x ⊆ B_1 ∩ B_2. It follows that B_1 ∩ B_2 = ∪_{x∈B_1∩B_2} B_x, hence B_1 ∩ B_2 is in T_B. This proves the claim. Let U_1 and U_2 be elements of T_B, with U_1 = ∪_{α_1∈A_1} B_{α_1} and U_2 = ∪_{α_2∈A_2} B_{α_2}. By Theorem A.1(a),

    U_1 ∩ U_2 = (∪_{α_1∈A_1} B_{α_1}) ∩ (∪_{α_2∈A_2} B_{α_2}) = ∪_{α_1∈A_1, α_2∈A_2} (B_{α_1} ∩ B_{α_2}).

Since each B_{α_1} ∩ B_{α_2} is in T_B, so is U_1 ∩ U_2. Using an inductive argument, it is easily shown that the intersection of any finite subcollection of elements of T_B is in T_B. Thus, [T3] is satisfied.

Theorem 9.1.4 (Subspace Topology). If (X, T) is a topological space and S is a subset of X, then

    T_S = {U ∩ S : U ∈ T}


is a topology on S, called the subspace topology (induced by T ). Proof. That [T1] is satisfied is obvious, and that [T2] and [T3] are satisfied is easily shown using Theorem A.1. In the notation of Theorem 9.1.4, we say that (S, TS ) is a topological subspace of (X, T ) or simply that S is a topological subspace of X. Throughout, any subset of a topological space is viewed as a topological subspace. Theorem 9.1.5. Let X be a topological space, let U be an open set in X, let S be a subset of U , and view U as a topological subspace of X. Then S is open in U if and only if S is open in X. Proof. (⇒): Since S is open in U , by definition, there is an open set V in X such that S = V ∩ U . Because V and U are open in X, so is S. (⇐): Since S is open in X, by definition, S = S ∩ U is open in U . Theorem 9.1.6 (Product Topology). If (X1 , T1 ), . . . , (Xm , Tm ) are topological spaces, then B× = {U1 × · · · × Um : Ui ∈ Ti for i = 1, . . . , m} is a basis for X1 × · · · × Xm , and the topology it generates is called the product topology. Proof. It is clear that X1 ×· · ·×Xm is in B× , so [B1] is satisfied. Let U1 ×· · ·×Um and V1 × · · · × Vm be sets in B× . By Theorem A.2, (U1 × · · · × Um ) ∩ (V1 × · · · × Vm ) = (U1 ∩ V1 ) × · · · × (Um ∩ Vm ). Since Ui ∩ Vi is in Ti for i = 1, . . . , m, it follows that (U1 × · · · × Um ) ∩ (V1 × · · · × Vm ) is in B× , so [B2] is satisfied. Let X and Y be topological spaces, let F : X −→ Y be a map, and let x be a point in X. We say that F is continuous at x if for every neighborhood V of F (x) in Y , there is a neighborhood U of x in X (possibly depending on V ) such that F (U ) ⊆ V . Equivalently, F is continuous at x if for every neighborhood V of F (x) in Y , F −1 (V ) contains a neighborhood of x in X. We say that F is continuous on X or simply continuous if it is continuous at every x in X. Theorem 9.1.7. Let X and Y be topological spaces, and let F : X −→ Y be a map. Then the following are equivalent: (a) F is continuous. (b) If V is an open set in Y , then F −1 (V ) is an open set in X. (c) If K is a closed set in Y , then F −1 (K) is a closed set in X.


Proof. (a) ⇒ (b): Let x be a point in F^{−1}(V). Since F is continuous at x and V is a neighborhood of F(x) in Y, F^{−1}(V) contains a neighborhood U_x of x in X. Then F^{−1}(V) = ∪_{x∈F^{−1}(V)} U_x is open in X.
(b) ⇒ (a): Let x be a point in X, and let V be a neighborhood of F(x) in Y. Since V is open in Y, F^{−1}(V) is open in X and is therefore a neighborhood of x in X.
(b) ⇒ (c): We have

    K is closed in Y
    ⇒ Y∖K is open in Y
    ⇒ F^{−1}(Y∖K) = X∖F^{−1}(K) is open in X
    ⇒ X∖[X∖F^{−1}(K)] = F^{−1}(K) is closed in X,

where the second implication follows from Theorem A.3(e).
(c) ⇒ (b): We have

    V is open in Y
    ⇒ Y∖V is closed in Y
    ⇒ F^{−1}(Y∖V) = X∖F^{−1}(V) is closed in X
    ⇒ X∖[X∖F^{−1}(V)] = F^{−1}(V) is open in X,

where the second implication follows from Theorem A.3(e).

Theorem 9.1.8. Let X, Y, and Z be topological spaces, and let F : X −→ Y and G : Y −→ Z be maps. If F and G are continuous, then so is G ∘ F.

Proof. Let V be an open set in Z. Since G is continuous, by Theorem 9.1.7, G^{−1}(V) is open in Y, and because F is continuous, again by Theorem 9.1.7, F^{−1}(G^{−1}(V)) = (G ∘ F)^{−1}(V) is open in X. The result now follows from Theorem 9.1.7.

Theorem 9.1.9. Let X and Y be topological spaces, let F : X −→ Y be a map, and let B be a basis that generates the topology on Y. If F^{−1}(B) is open in X for every basis element B in B, then F is continuous.

Proof. Let V be an open set in Y. According to Theorem 9.1.3, V can be expressed as a union V = ∪_{α∈A} B_α of basis elements, and then Theorem A.3(b) gives F^{−1}(V) = ∪_{α∈A} F^{−1}(B_α). By assumption, each F^{−1}(B_α) is open in X, and therefore, so is F^{−1}(V). The result now follows from Theorem 9.1.7.

Theorem 9.1.10. Let X_1, …, X_m be topological spaces, and suppose X_1 × ⋯ × X_m has the product topology. Define a map P_i : X_1 × ⋯ × X_m −→ X_i, called the ith projection map on X_1 × ⋯ × X_m, by

    P_i(x_1, …, x_i, …, x_m) = x_i

for all (x_1, …, x_m) in X_1 × ⋯ × X_m for i = 1, …, m. Then each P_i is continuous.


Proof. Let U_i be an open set in X_i. Since P_i^{−1}(U_i) = X_1 × ⋯ × U_i × ⋯ × X_m is in the basis B_× for the product topology on X_1 × ⋯ × X_m, it is open in X_1 × ⋯ × X_m. It follows from Theorem 9.1.7 that P_i is continuous.

Theorem 9.1.11. Let X, Y_1, …, Y_m be topological spaces, let F = (F_1, …, F_m) : X −→ Y_1 × ⋯ × Y_m be a map, and suppose Y_1 × ⋯ × Y_m has the product topology. Then F is continuous if and only if F_i : X −→ Y_i is continuous for i = 1, …, m.

Proof. (⇒): Let P_i be the ith projection map on Y_1 × ⋯ × Y_m. We have from Theorem 9.1.10 that P_i is continuous, and then from Theorem 9.1.8 that so is F_i = P_i ∘ F.
(⇐): Let V_1 × ⋯ × V_m be in the basis B_× for the product topology on Y_1 × ⋯ × Y_m. Then Theorem A.5 gives

    F^{−1}(V_1 × ⋯ × V_m) = ∩_{i=1}^m F_i^{−1}(V_i).

Since V_i is open in Y_i and F_i is continuous, by Theorem 9.1.7, F_i^{−1}(V_i) is open in X for i = 1, …, m. It follows that F^{−1}(V_1 × ⋯ × V_m) is open in X. By Theorem 9.1.9, F is continuous.

Theorem 9.1.12. Let X_i and Y_i be topological spaces, let F_i : X_i −→ Y_i be a continuous map for i = 1, …, m, and suppose X_1 × ⋯ × X_m and Y_1 × ⋯ × Y_m have the respective product topologies. Then the map F_1 × ⋯ × F_m : X_1 × ⋯ × X_m −→ Y_1 × ⋯ × Y_m defined by

    (F_1 × ⋯ × F_m)(x_1, …, x_m) = (F_1(x_1), …, F_m(x_m))

for all (x_1, …, x_m) in X_1 × ⋯ × X_m is continuous.

Proof. Let P_i be the ith projection map on X_1 × ⋯ × X_m, and consider the map F_i ∘ P_i : X_1 × ⋯ × X_m −→ Y_i for i = 1, …, m. Then F_1 × ⋯ × F_m = (F_1 ∘ P_1, …, F_m ∘ P_m). We have from Theorem 9.1.10 that P_i is continuous, and then from Theorem 9.1.8 that F_i ∘ P_i is continuous for i = 1, …, m. The result now follows from Theorem 9.1.11.
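For finite topological spaces, the open-set axioms and the preimage criterion of Theorem 9.1.7(b) can be checked exhaustively. The following sketch is ours (the names are invented); it tests continuity of a map between finite spaces by examining every open set:

    def is_continuous(F, topology_X, topology_Y):
        """Theorem 9.1.7(b) for maps of finite topological spaces:
        F is continuous iff the preimage of every open set is open.
        F is a dict point -> point; topologies are sets of frozensets."""
        for V in topology_Y:
            preimage = frozenset(x for x in F if F[x] in V)
            if preimage not in topology_X:
                return False
        return True

    # The Sierpinski space: points {0, 1}, open sets {}, {1}, {0, 1}.
    T = {frozenset(), frozenset({1}), frozenset({0, 1})}
    identity = {0: 0, 1: 1}
    swap = {0: 1, 1: 0}
    print(is_continuous(identity, T, T))  # True
    print(is_continuous(swap, T, T))      # False: preimage of {1} is {0}, not open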


Let X and Y be topological spaces, let F : X −→ Y be a map, and let S be a subset (topological subspace) of X. We say that F is continuous on S if the restriction map F|_S : S −→ Y is continuous on S.

Theorem 9.1.13. With the above setup, if F is continuous, then so is F|_S.

Proof. Let V be an open set in Y. Since F is continuous, by Theorem 9.1.7, F^{−1}(V) is open in X, hence F^{−1}(V) ∩ S = (F|_S)^{−1}(V) is open in S. The result now follows from Theorem 9.1.7.

Let X and Y be topological spaces, and let F : X −→ Y be a continuous map. We say that F is a homeomorphism, and that X and Y are homeomorphic, if F is bijective and F^{−1} is continuous. The next result shows that a homeomorphism is a map that preserves topological structure (in much the same way that a linear isomorphism preserves linear structure).

Theorem 9.1.14. Let X and Y be topological spaces, and let F : X −→ Y be a bijective map. Then the following are equivalent:
(a) F is a homeomorphism.
(b) U is an open set in X if and only if F(U) is an open set in Y.
(c) K is a closed set in X if and only if F(K) is a closed set in Y.

Proof. This follows from Theorem 9.1.7.

We say that a topological space X is disconnected if there are disjoint nonempty open sets U and V in X such that X = U ∪ V. In that case, U and V are said to disconnect X. If X is not disconnected, we say it is connected. A subset S of X is said to be disconnected (connected) in X or simply disconnected (connected) if it is disconnected (connected) as a topological subspace of X. We say that a subset C of X is a connected component of X if it is a connected set in X that is maximal, in the sense that C is not properly contained in any other connected set in X. The next result gives equivalent conditions for a subset of a topological space to be disconnected in the topological space.

Theorem 9.1.15. Let X be a topological space, and let S be a subset of X. Then:
(a) S is disconnected in X if and only if there are nonempty open sets U and V in X such that (i) (U ∩ V) ∩ S = ∅, (ii) S ⊆ (U ∪ V), and (iii) U ∩ S ≠ ∅ and V ∩ S ≠ ∅.
(b) S is connected in X if and only if for all nonempty open sets U and V in X such that (U ∩ V) ∩ S = ∅ and S ⊆ (U ∪ V), either S ⊆ U or S ⊆ V.

Proof. Let U and V be open sets in X. Then

    (U ∩ V) ∩ S = ∅ ⇔ (U ∩ S) ∩ (V ∩ S) = ∅,   (9.1.1)

and by Theorem A.1(a),

    S ⊆ (U ∪ V) ⇔ S = (U ∪ V) ∩ S = (U ∩ S) ∪ (V ∩ S).   (9.1.2)

(a): By definition, S is disconnected if and only if there are open sets U′ and V′ in S such that (i) U′ ∩ V′ = ∅, (ii) S = U′ ∪ V′, and (iii) U′ ≠ ∅ and V′ ≠ ∅. Since any such U′ and V′ are of the form U′ = U ∩ S and V′ = V ∩ S for some open sets U and V in X, the result follows from (9.1.1) and (9.1.2).
(b): We have from part (a) that S is connected in X if and only if for all nonempty open sets U and V in X such that (U ∩ V) ∩ S = ∅ and S ⊆ (U ∪ V), either U ∩ S = ∅ or V ∩ S = ∅, which in turn, from (9.1.1) and (9.1.2), is equivalent to the statement of part (b).

Theorem 9.1.16. Let X be a topological space, and let {S_α : α ∈ A} be a collection of subsets of X that are connected in X. If ∩_{α∈A} S_α is nonempty, then ∪_{α∈A} S_α is connected in X.

Proof. Let S = ∪_{α∈A} S_α, and let U and V be nonempty open sets in X such that (U ∩ V) ∩ S = ∅ and S ⊆ (U ∪ V). Then (U ∩ V) ∩ S_α = ∅ and S_α ⊆ (U ∪ V) for all α in A. Since each S_α is connected in X, we have from Theorem 9.1.15(b) that either S_α ⊆ U or S_α ⊆ V for all α in A. By assumption, there is a point x in ∩_{α∈A} S_α ⊆ (U ∪ V). Suppose without loss of generality that x is in U, hence U ∩ S_α ≠ ∅ for all α in A. If S_α ⊆ V for some α, then (U ∩ V) ∩ S_α = ∅ simplifies to U ∩ S_α = ∅, which is a contradiction. It follows that S_α ⊆ U for all α in A, hence S ⊆ U. Again by Theorem 9.1.15(b), S is connected in X.

Theorem 9.1.17. The distinct connected components of a topological space form a partition of the topological space.

Proof. Let X be a topological space, and let x be a point in X. Clearly, {x} is connected in X. Let C be the union of all connected subsets of X containing x. By Theorem 9.1.16, C is connected in X, and is evidently a connected component of X. Thus, X has one or more connected components, the union of which equals X. It remains to show that distinct connected components are disjoint. We prove the logically equivalent assertion: if connected components are not disjoint, then they are equal. Suppose C and D are connected components that are not disjoint. By Theorem 9.1.16, C ∪ D is connected in X. It follows from the maximality property that C = C ∪ D = D.

Theorem 9.1.18. Let X and Y be topological spaces, and let F : X −→ Y be a continuous map. If X is connected, then F(X) is connected in Y.

Proof. We prove the logically equivalent assertion: if F(X) is disconnected in Y, then X is disconnected. By Theorem 9.1.15(a), there are nonempty open sets U and V in Y such that (i) (U ∩ V) ∩ F(X) = ∅, (ii) F(X) ⊆ (U ∪ V), and (iii) U ∩ F(X) ≠ ∅ and V ∩ F(X) ≠ ∅. It follows from Theorem 9.1.7 and parts (b) and (c) of Theorem A.3 that F^{−1}(U) and F^{−1}(V) are disjoint nonempty open sets in X such that X = F^{−1}(U) ∪ F^{−1}(V). Thus, X is disconnected.

Theorem 9.1.19 (Intermediate Value Theorem). Let X be a connected topological space, and let f : X −→ R be a continuous function. If x_1, x_2 are points in X and c is a real number such that f(x_1) < c < f(x_2), then there is a point x_0 in X such that f(x_0) = c.


Proof. Consider the sets S_1 = f(X) ∩ (−∞, c) and S_2 = f(X) ∩ (c, +∞). It is clear that S_1 and S_2 are disjoint and nonempty. Since (−∞, c) and (c, +∞) are open in R, S_1 and S_2 are open in f(X). Suppose there is no point x_0 in X such that f(x_0) = c. Then f(X) = S_1 ∪ S_2. Thus, f(X) is disconnected in R, which contradicts Theorem 9.1.18.

Theorem 9.1.20. If X is a connected topological space and f : X −→ R is a nowhere-vanishing continuous function, then f is either strictly positive or strictly negative.

Proof. Let x_1 and x_2 be distinct points in X. Since f is nowhere-vanishing, f(x_1), f(x_2) ≠ 0. If f(x_1) > 0, then f(x_2) > 0; for if not, by Theorem 9.1.19, there is a point x_0 in X such that f(x_0) = 0, which contradicts the nowhere-vanishing assumption on f. Similarly, if f(x_1) < 0, then f(x_2) < 0. Since x_1 and x_2 were arbitrary, the result follows.

Let X be a topological space, and let U = {U_α : α ∈ A} be a collection of open sets in X. We say that U is an open cover of X if X = ∪_{α∈A} U_α. A subcollection of U that is also an open cover of X is said to be a subcover of U. A subcover that is a finite set is called a finite subcover. We say that X is a compact topological space if every open cover of X has a finite subcover. A subset S of X is said to be a compact set in X or simply compact in X if it is compact as a topological subspace.

Theorem 9.1.21. Let X and Y be topological spaces, and let F : X −→ Y be a continuous map. If X is compact, then F(X) is compact in Y.

Proof. Let V be an open cover of F(X). Then V = {V_α ∩ F(X) : α ∈ A}, where each V_α is an open set in Y. By Theorem A.1(a),

    F(X) = ∪_{α∈A} [V_α ∩ F(X)],

and by Theorem A.3(b),

    X = F^{−1}(F(X)) = ∪_{α∈A} F^{−1}(V_α ∩ F(X)) = ∪_{α∈A} F^{−1}(V_α).

It follows from Theorem 9.1.7 that {F^{−1}(V_α) : α ∈ A} is an open cover of X. Since X is compact, there is a finite subcover {F^{−1}(V_i) : i = 1, …, k}, so

    X = ∪_i F^{−1}(V_i).

By parts (a) and (d) of Theorem A.3,

    F(X) = ∪_i F(F^{−1}(V_i)) = ∪_i [V_i ∩ F(X)].

Thus, {V_i ∩ F(X) : i = 1, …, k} is a finite subcover of V.

192

9 Topology

Let X be a topological space, and let f : X −→ R be a function. We say that f is bounded on X or simply bounded if there is a real number c > 0 such that |f (x)| < c for all points x in X. Let S be a nonempty subset of R. The element s1 in S is said to be the smallest element in S if s1 ≤ s for all s in S. Similarly, the element s2 in S is said to be the largest element in S if s ≤ s2 for all s in S. If S has a smallest element s1 and a largest element s2 , then s1 ≤ s ≤ s2 for all s in S. Consider the collection U = {(−∞, s)S∩ S : s ∈ S} of open sets in S. If S has a largest element s2 , then s2 is not in s∈S (−∞, s), so U is not an open cover of S in R. On the other hand, if S does not have a largest element, then U is an open cover of S in R. Theorem 9.1.22 (Extreme Value Theorem). If X is a compact topological space and f : X −→ R is a continuous function, then: (a) There are points x1 and x2 in X such that f (x1 ) ≤ f (x) ≤ f (x2 ) for all x in X. (b) f is bounded. Proof. (a): The assertion is equivalent to: There are points x1 and x2 in X such that f (x1 ) is the smallest element in f (X), and f (x2 ) is the largest element in f (X). Suppose, for a contradiction, that   f (X) does not have a largest element. In light of above remarks, −∞, f (x) ∩ f (X) : x ∈ X is an open cover of f (X). Since X is compact and f is continuous, by Theorem 9.1.21, f (X) is compact in R. It follows that there is a finite subcover   −∞, f (xi ) ∩ f (X) : i = 1, . . . , m , where we assume without loss of generality that f (x1 ) < · · · < f (xm ). By Theorem A.1(a), f (X) =

m [  i=1





−∞, f (xi ) ∩ f (X) =

[ m i=1

−∞, f (xi )



 ∩ f (X)

 = −∞, f (xm ) ∩ f (X),   hence f (X) ⊆ −∞, f (xm ) . Since f (xm ) is in f (X) but not in −∞, f (xm ) , we have a contradiction. Thus, f (X) has a largest element. The proof that f (X) has a smallest element is similar. (b): This follows from part (a). Let X be a topological space, and let f : X −→ R be a function. The support of f is denoted by supp(f ) and defined to be the closure in X of the set of points at which f is nonvanishing:  supp(f ) = clX {x ∈ X : f (x) 6= 0} . Thus, supp(f ) is the smallest closed set in X containing those points at which f is nonvanishing. We say that f has compact support if supp(f ) is compact in X. Given a subset S of X, we say that f has support in S if supp(f ) ⊆ S.

193

9.2 Metric Spaces

9.2

Metric Spaces

Let X be a nonempty set X. A function d : X × X −→ R is said to be a distance function on X if for all x, y, z in X: [D1] d(x, y) ≥ 0, with d(x, y) = 0 if and only if x = y. [D2] d(x, y) = d(y, x). [D3] d(x, y) ≤ d(x, z) + d(z, y). (triangle inequality) A metric space is a pair (X, d) consisting of a nonempty set X and a distance function d on X. For a given point x in X and real number r > 0, the open ball of radius r centered at x is defined by Br (x) = {y ∈ X : d(x, y) < r}, and the closed ball of radius r centered at x by B r (x) = {y ∈ X : d(x, y) ≤ r}. Theorem 9.2.1. With the above setup, if y is a point in Br (x), then there is a real number s such that Bs (y) ⊆ Br (x). Proof. Let s = r − d(x, y). For a point z in Bs (y), we have d(y, z) < s. Then [D3] gives d(x, z) ≤ d(x, y) + d(y, z) < r, hence z is in Br (x). Thus, Bs (y) ⊆ Br (x). Theorem 9.2.2. If (X, d) is a metric space, then Bd = {Br (x) : x ∈ X, r > 0} is a basis for X, and the topology it generates is called the metric topology induced by d. Proof. We need to show that conditions [B1] and [B2] of Section 9.1 are satisfied. [B1] is trivial. For [B2], let Br1 (x1 ), Br2 (x2 ) be elements of Bd , and let x be a point in Br1 (x1 ) ∩ Br2 (x2 ). By Theorem 9.2.1, there are real numbers s1 , s2 > 0 such that Bs1 (x) ⊆ Br1 (x1 ) and Bs2 (x) ⊆ Br2 (x2 ). Setting s = min(s1 , s2 ), we have Bs (x) ⊆ Br1 (x1 ) ∩ Br2 (x2 ). The next result justifies our use of the terms “open” and “closed” to describe balls in a metric space. Theorem 9.2.3. If (X, d) is a metric space, then open (closed) balls in X are open (closed) with respect to the metric topology induced by d.

194

9 Topology

Proof. For open balls, this is true by definition. Let B r (x) be a closed ball in X, and let y be a point in XB r (x), so that d(x, y) > r. Let s = d(x, y) − r, and let z be a point in Bs (y), which means d(y, z) < s. See Figure 9.2.1. From [D2] and [D3], r + s = d(x, y) ≤ d(x, z) + d(z, y) < d(x, z) + s, hence d(x, z) > r, so z is in XB r (x). Thus, Bs (y) ⊆ XB r (x). It follows that XB r (x) is the union of a collection of basis elements of Bd . By Theorem 9.1.3 and Theorem 9.2.2, XB r (x) is open with respect to the metric topology induced by d, hence B r (x) = X[XB r (x)] is closed with respect to that topology.

– Bs(y)

– Br(x)

z

y

s

r

x

Figure 9.2.1. Diagram for Theorem 9.2.3 The next result expresses continuity in metric spaces in terms familiar from the differential calculus of one real variable. Theorem 9.2.4 (ε-δ Criterion). Let (X, d) and (Y, e) be metric spaces, each with its induced metric topology, let F : X −→ Y be a map, and let x be a point in X. Then F is continuous at x if and only if for every real number ε > 0, there is a real number δ > 0 (possibly depending on ε) such that:  d(x, y) < δ ⇒ e F (x), F (y) < ε, or equivalently, y is in Bδ (x)



 F (y) is in Bε F (x) ,

or equivalently,  Bδ (x) ⊆ F −1 Bε F (x) . Proof. The equivalence of the three conditions is clear. Let us work with the last one.

195

9.3 Normed Vector Spaces

 (⇒): Since F is continuous at x and Bε F (x) is a neighborhood of F (x) in Y , by definition, there is a neighborhood U of x in X such that F (U ) ⊆ Bε F (x) . By Theorem 9.1.3 and Theorem 9.2.2, there is an open ball Bδ (x) such that  Bδ (x) ⊆ U ⊆ F −1 Bε F (x) . (⇐): Let V be a neighborhood of F (x) in Y . We have from Theorem 9.1.3 and Theorem 9.2.2 that there is an open ball Bε F (x) such that Bε F (x) ⊆  V. By assumption, there is an open ball Bδ (x) with Bδ (x) ⊆ F −1 Bε F (x) . Thus, Bδ (x) is a neighborhood of x in X such that F Bδ (x) ⊆ V .

9.3

Normed Vector Spaces

Let V be a vector space. A function k·k : V −→ R is said to be a norm on V if for all vectors v, w in V and all real numbers c: [N1] kvk ≥ 0, with kvk = 0 if and only if v = 0. [N2] kcvk = |c| kvk. [N3] kv + wk ≤ kvk + kwk. (triangle equality) A normed vector space is a pair (V, k·k) consisting of a vector space V and a norm k·k on V . Theoremp9.3.1. If (V, g) is an inner product space and k·k is the norm defined by kvk = hv, vi for all v in V , then (V, k·k) is a normed vector space. Proof. Using Theorem 4.6.5 and the basic properties of inner products, it is easily shown that [N1]–[N3] are satisfied. Theorem 9.3.1 justifies our use of the term “norm” in Section 4.6 (and to a lesser extent its use in Section 4.1). Theorem 9.3.2. If (V, k·k) is a normed vector space and d : V × V −→ R is the function defined by d(v, w) = kv − wk for all vectors v, w in V , then (V, d) is a metric space. Proof. It is easily shown that conditions [D1]–[D3] of Section 9.2 are satisfied.

9.4

Euclidean Topology on Rm

Let us put the results of Sections 9.1–9.3 to work constructing a series of “spaces” m starting with the inner product space Rm 0 = (R , e), as defined in Example m 4.6.1. We first apply Theorem 9.3.1 to (R , e) and obtain a normed vector space (Rm , k·k), where k·k : Rm × Rm −→ R

196

9 Topology

is the norm introduced in Example 4.6.1 and defined by

1

p

(x , . . . , xm ) = (x1 )2 + · · · + (xm )2 . We next apply Theorem 9.3.2 to (Rm , k·k) and obtain a metric space (Rm , d), where d : Rm × Rm −→ R is the distance function defined by

 d (x1 , . . . , xm ), (y 1 , . . . , y m ) = (x1 , . . . , xm ) − (y 1 , . . . , y m ) p = (x1 − y 1 )2 + · · · + (xm − y m )2 . Lastly, we apply Theorem 9.2.2 to (Rm , d) and obtain a topological space (Rm , T). Recall from Example 4.6.1 that e is called the Euclidean inner product. In a corresponding fashion, we refer to k·k as the Euclidean norm on Rm , to d as the Euclidean distance function on Rm , and to T as the Euclidean topology on Rm . With the above constructions, there is now a direct path from (Rm , e) to (Rm , k·k) to (Rm , d) to (Rm , T). We generally simplify notation by using Rm to denoted any of the latter three spaces, allowing the context to make it clear whether Rm is being thought of as a normed vector space, a metric space, or a topological space. On occasion, we also adopt Rm as notation for (Rm , e). Let us discuss a few examples based on the Euclidean topologies on R and R2 to illustrate some of the material in Sections 9.1–9.3. We first consider R. An open (closed) “ball” in R is nothing other than a finite open (closed) interval. By Theorem 9.1.3 and Theorem 9.2.2, the collection of open intervals in R is a basis for the Euclidean topology of R. An open set in R is obtained by forming an arbitrary union of open intervals, and a closed set in R results from taking the complement in R of such an open set. For example, (0, 1) and (0, +∞) are open in R, whereas [0, 1] and {0} are closed in R. Working through the definitions, we find that (0, 1) is the interior in R of both (0, 1) and [0, 1], and that {0, 1} is the boundary in R of both (0, 1) and [0, 1]. Consider the functions f1 , f2 , f3 : R −→ R given by f1 (x) = x3 , f2 (x) = |x|, and ( 1 if x 6= 0 f3 (x) = 0 if x = 0. Then f1 and f2 are continuous, but f3 is not. In fact, f1 is a homeomorphism. We now turn our attention to R2 . The open “ball” (actually “disk”) in R2 of radius R centered at (x0 , y0 ) is  BR (x0 , y0 ) = {(x, y) ∈ R2 : (x − x0 )2 + (y − y0 )2 < R2 }. For example, D = {(x, y) ∈ R2 : x2 + y 2 < 1},

9.4 Euclidean Topology on Rm

197

is the unit open disk in R2 (centered at the origin). By Theorem 9.1.3 and Theorem 9.2.2, the collection of open disks in R2 is a basis for the Euclidean topology of R2 . An open set in R2 is obtained by forming an arbitrary union of open disks, and a closed set in R2 results from taking the complement in R2 of such an open set. For example, (0, 1) × (0, 1) = {(x, y) ∈ R2 : 0 < x, y < 1} is open in R2 , whereas [0, 1] and [0, 1] × [0, 1] = {(x, y) ∈ R2 : 0 ≤ x, y ≤ 1} are closed in R2 . Suppose D, as given above, has the subspace topology induced by R2 . Consider the map F : D −→ R2 (between topological spaces) defined by F (x, y) = p

1

1 − x2 − y 2

(x, y).

It can be shown that F is a homeomorphism, with inverse F −1 : R2 −→ D given by 1 F −1 (x, y) = p (x, y). 1 + x2 + y 2 Thus, D is homeomorphic to all of R2 . Now consider the subsets {(x, 0) : x ∈ R} and {(x, |x|) : x ∈ R} of R2 , and suppose each has the subspace topology induced by R2 . It can be shown that the map G : {(x, 0) : x ∈ R} −→ {(x, |x|) ∈ R2 : x ∈ R} (between topological spaces) given by G(x, 0) = (x, |x|) is a homeomorphism. Thus, a “straight” line is homeomorphic to a line with a “corner”. The next result shows that in the Euclidean setting a homeomorphism has something to say about “dimension”. Theorem 9.4.1 (Homeomorphism Invariance of Dimension). For an open set in Rm to be homeomorphic to an open set in Rn , it is necessary that m = n. We close this section with a few remarks on compactness, perhaps the least intuitive of the concepts introduced in Section 9.1. As it stands, determining whether a subset of a topological space is compact is a seemingly complicated task. However, in the Euclidean case, matters are more straightforward. A subset S of Rm is said to be bounded if it is contained in some open ball of finite radius. Theorem 9.4.2 (Compactness in Rm ). A subset of Rm is compact in Rm if and only if it is closed in Rm and bounded. Thus, [0, 1] × [0, 1] is compact in R2 , but (0, 1) × (0, 1) and R × {0} are not. It should be emphasized that the preceding theorem rests on unique features of Rm and does not extend to an arbitrary metric space.

198

9 Topology

Chapter 10

Analysis in Rm 10.1

Derivatives

In this section, we review some of the key results in the differential calculus of one or more real variables. For the most part, proofs are not provided. Let U be an open set in Rm , let F : U −→ Rn be a map, and let p be a point in U . We say that F is differentiable at p if there is a linear map Lp : Rm −→ Rn such that lim

kvk→0

kF (p + v) − F (p) − Lp (v)k = 0. kvk

It can be shown that if such a map exists, it is unique. We call this map the differential of F at p and henceforth denote it by dp (F ) : Rm −→ Rn .

(10.1.1)

In the literature, the differential of F at p is also called the derivative of F at p or the total derivative of F at p, and is denoted variously by dFp , dFp , DF (p), Dp (F ), or F 0 (p). We have chosen to include parentheses in the notation dp (F ) to set the stage for viewing dp as a type of map. It is usual to characterize dp (F ) as being a “linear approximation” to F in the vicinity of p. For a more geometric interpretation, let us define the graph of F by   graph(F ) = p, F (p) ∈ Rm+n : p ∈ U .

We can think of dp (F )(Rm ), which is a vector space, as being the “tangent space” to graph(F ) at F (p). This is a generalization of the tangent line and tangent plane familiar from the differential calculus of one or two real variables. We say that F is differentiable (on U ) if it is differentiable at every p in U . To illustrate the difference between continuity and differentiability, consider Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

199

10 Analysis in Rm

200

the function f : R −→ R given by f (x) = |x|. As was observed in Section 9.4, f is continuous; however, it is not differentiable because there are two candidates for “tangent line” at x = 0. We now specialize to the case n = 1 and consider differentiable functions, turning our attention to differentiable maps below. Let f : U −→ R be a function, and let p be a point in U . Consistent with (10.1.1), the differential of f at p is denoted by dp (f ) : Rm −→ R. Theorem 10.1.1. Let U be an open set in Rm , let f : U −→ R be a function, and let p be a point in U . If f is differentiable at p, then it is continuous at p. We need to establish notation for coordinates on Rm . This notation will serve for the present chapter, but will need to be revised later on. In this chapter, coordinates on Rm are denoted by (x1 , . . . , xm ) or (y 1 , . . . , y m ). Let U be an open set in Rm , let f : U −→ R be a function, and let p = (p , . . . , pm ) be a point in U . The partial derivative of f with respect to xi at p is defined by 1

∂f f (p1 , . . . , pi + h, . . . , pm ) − f (p1 , . . . , pm ) (p) = lim h→0 ∂xi h for i = 1, . . . , m, provided the limit exists. It is sometimes convenient to denote ∂f (p) ∂xi

∂ (f )(p). ∂xi

by

When m = 1, we denote ∂f (p) ∂x

df (p). dx

by

In this notation, the linear function dp (f ) : R −→ R is given by dp (f )(x) = x

df (p) dx

for all x in R. More generally, we have the following result. Theorem 10.1.2. Let U be an open set in Rm , let f : U −→ R be a function, let p be a point in U , and let v = (a1 , . . . , am ) be a vector in Rm . If f is differentiable at p, then (∂f /∂xi )(p) exists for i = 1, . . . , m and dp (f )(v) =

X i

ai

∂f (p). ∂xi

201

10.1 Derivatives

Thus, dp (f )(v) is nothing other than the directional derivative of f at p in the direction v, familiar from the differential calculus of two or more real variables. Theorem 10.1.3. Let U be an open set in Rm , let p be a point in U , and let c be a real number. If the functions f, g : U −→ R are differentiable at p, then: (a) dp (cf + g) = c dp (f ) + dp (g). (b) dp (f g) = f (p) dp (g) + g(p) dp (f ). Proof. For a vector v = (a1 , . . . , am ) in Rm , we have from Theorem 10.1.2 and the properties of partial derivatives that  X ∂(cf + g) X  ∂f ∂g i dp (cf + g)(v) = ai (p) = a c (p) + (p) ∂xi ∂xi ∂xi i i X ∂f X ∂g =c ai i (p) + ai i (p) = c dp (f )(v) + dp (g)(v) ∂x ∂x i i  = c dp (f ) + dp (g) (v) and  X  ∂(f g) ∂g ∂f i (p) = a f (p) (p) + g(p) (p) ∂xi ∂xi ∂xi i i X ∂g X ∂f = f (p) ai i (p) + g(p) ai i (p) ∂x ∂x i i

dp (f g)(v) =

X

ai

= f (p) dp (f )(v) + g(p) dp (f )(v)  = f (p) dp (f ) + g(p) dp (f ) (v). Since v was arbitrary, the result follows. Let U be an open set in Rm , let f : U −→ R be a function, and suppose (∂f /∂xi )(p) exists for all p in U for i = 1, . . . , m. The partial derivative of f with respect to xi is the function ∂f : U −→ R ∂xi defined by the assignment

∂f (p) ∂xi for all p in U . Now suppose (∂/∂xi )(∂f /∂xj )(p) exists for all p in U for i, j = 1, . . . , m. The second order partial derivative of f with respect to xi and xj is the function ∂2f : U −→ R ∂xi ∂xj defined by the assignment ∂2f p 7−→ (p) ∂xi ∂xj p 7−→

10 Analysis in Rm

202 for all p in U for i, j = 1, . . . , m, where we denote   ∂ ∂f ∂2f (p) by (p). i j ∂x ∂x ∂xi ∂xj

Iterating in an obvious way, higher-order partial derivatives of f are obtained. For k ≥ 2, a kth-order partial derivative of f is denoted by ∂kf , · · · ∂xik

∂xi1 ∂xi2

where the integers 1 ≤ i1 , i2 , . . . , ik ≤ m are not necessarily distinct. Let us denote by C 0 (U ) the set of functions f : U −→ R that are continuous on U . For k ≥ 1, we define C k (U ) to be the set of functions f : U −→ R such that all partial derivatives of f of order ≤ k exist and are continuous on U . Theorem 10.1.4 (Criterion for Differentiability of Functions). If U is an open set in Rm and f is a function in C 1 (U ), that is, if ∂f /∂xi exists and is continuous on U for i = 1, . . . , m, then f is differentiable on U . From Theorem 10.1.1, Theorem 10.1.2, and Theorem 10.1.4, we obtain the following result, which summarizes the relationship of continuity to differentiability for a function on an open set in Rm . Theorem 10.1.5. Let U be an open set in Rm , and let f : U −→ R be a function. Then: (a) If f is differentiable on U , then f is continuous on U , and ∂f /∂xi exists on U for i = 1, . . . , m. (b) If ∂f /∂xi exists and is continuous on U for i = 1, . . . , m, then f is differentiable on U . Theorem 10.1.6 (Equality of Mixed Partial Derivatives). Let U be an open set in Rm , let f be a function in C r (U ), where r ≥ 1, and let 1 ≤ k ≤ r be an integer. Then the mixed partial derivatives of f of order k are independent of the order of differentiation. That is, if 1 ≤ i1 , i2 , . . . , ik ≤ m are (not necessarily distinct) integers and j1 , j2 , . . . , jk are the same integers in some order, then ∂kf ∂kf = . i j 1 · · · ∂x k ∂x ∂xj2 · · · ∂xjk

∂xi1 ∂xi2

Let U be an open set in Rm , and let f : U −→ R be a function. We say that f is (Euclidean) smooth (on U ) if f is in C k (U ) for all k ≥ 0. The set of smooth functions on U is denoted by C ∞ (U ). In view of Theorem 10.1.5, C ∞ (U ) is the set of functions f on U with partial derivatives of all orders on U . It is sometimes said that smooth functions are “infinitely differentiable”. We make C ∞ (U ) into both a vector space and a ring by defining operations as follows: for all functions f, g in C ∞ (U ) and all real numbers c, let (f + g)(p) = f (p) + g(p),

203

10.1 Derivatives (f g)(p) = f (p)g(p), and (cf )(p) = cf (p)

for all p in U . The identity element of the ring is the constant function 1U that sends all points in U to the real number 1. We now turn our attention to differentiable maps. Theorem 10.1.7. Let U be an open set in Rm , let F : U −→ Rn be a map, and let p be a point in U . If F is differentiable at p, then it is continuous at p. Theorem 10.1.8. Let U be an open set in Rm , let F = (F 1 , . . . , F n ) : U −→ Rn be a map, and let p be a point in U . Then: (a) F is differentiable at p if and only if F i is differentiable at p for i = 1, . . . , m. (b) If F is differentiable at p and v is a vector in Rm , then  dp (F )(v) = dp (F 1 )(v), . . . , dp (F n )(v) . The next result is one of the workhorses of analysis and will be called upon frequently. Theorem 10.1.9 (Chain Rule). Let U and V be open sets in Rm and Rn , respectively, let F : U −→ Rn and G : V −→ Rk be maps such that F (U ) ⊆ V , and let p be a point in U . If F is differentiable at p and G is differentiable at F (p), then G ◦ F is differentiable at p and dp (G ◦ F ) = dF (p) (G) ◦ dp (F ). A remark on the above notation is in order. Instead of G ◦ F , it would be more precise, although somewhat cluttered, to write G|F (U ) ◦ F . When there is a possibility of confusion or if it improves exposition, notation will be modified in this way. Let U be an open set in Rm , let F = (F 1 , . . . , F n ) : U −→ Rn be a differentiable map, and let p be a point in U . The Jacobian matrix of F at p is the n × m matrix defined by   ∂F 1 ∂F 1  ∂x1 (p) · · · ∂xm (p)   .. .. .. . JF (p) =  (10.1.2) .   . .  ∂F n  n ∂F (p) · · · (p) ∂x1 ∂xm Denoting by E and F the standard bases for Rm and Rn , respectively, we have from (2.2.3), Theorem 10.1.2, and Theorem 10.1.8(b) that  F JF (p) = dp (F ) E .

(10.1.3)

10 Analysis in Rm

204

 When m = n, the determinant det JF (p) is called the Jacobian determinant of F at p. Let 1 ≤ i1 < · · · < is ≤ m and 1 ≤ j1 < · · · < js ≤ n be integers, where 1 ≤ s ≤ min(m, n). Using multi-index notation, the submatrix of JF (p) consisting of the intersection of rows i1 , . . . , is and columns j1 , . . . , js (in that order) is  i1  ∂F ∂F i1 (p) · · · (p)  ∂xj1  ∂xjs   (i1 ,...,is ) . .. . . . . JF (p)(j1 ,...,js ) =  . . .    is  is ∂F ∂F (p) · · · (p) ∂xj1 ∂xjs In the literature, the above matrix is commonly denoted by ∂(F i1 , . . . , F is ) (p). ∂(xj1 , . . . , xjs ) Theorem 10.1.10 (Classical Chain Rule). Let U and V be open sets in Rm and Rn , respectively, let F : U −→ Rn and G : V −→ Rk be maps such that F (U ) ⊆ V , and let p be a point in U . If F is differentiable at p and G is differentiable at F (p), then G ◦ F is differentiable at p and  JG◦F (p) = JG F (p) JF (p). Equivalently, let F = (F 1 , . . . , F n ) and G = (G1 , . . . , Gk ), and let (x1 , . . . , xm ) and (y 1 , . . . , y n ) be coordinates on Rm and Rn , respectively. Then n X  ∂F l ∂(Gi ◦ F ) ∂Gi (p) = F (p) (p) j l ∂x ∂y ∂xj

(10.1.4)

l=1

for i = 1, . . . , k and j = 1, . . . , m. Proof. Let E, F, and G be the standard bases for Rm , Rn , and Rk , respectively. The Jacobian matrix of G at F (p) is the k × n matrix     ∂G1 ∂G1  ∂y 1 F (p) · · · ∂y n F (p)       .. .. .. JG F (p) =  (10.1.5) . . . .   k  ∂Gk    ∂G F (p) · · · F (p) ∂y 1 ∂y n We have  G  G JG◦F (p) E = dp (G ◦ F ) E  G = dF (p) (G) ◦ dp (F ) E  G  F = dF (p) (G) F dp (F ) E  = JG F (p) JF (p)    ∂F l P ∂Gi = F (p) (p) . l ∂xj l ∂y

[(10.1.3)] [Th 10.1.9] [Th 2.2.2(c)] [(10.1.3)] [(10.1.2), (10.1.5)]

205

10.1 Derivatives

To give (10.1.4) a more traditional appearance, we continue with the above notation, let (z 1 , . . . , z k ) be standard coordinates on Rk , and replace  Gi ◦ F (p) = Gi F 1 (p), . . . , F n (p) with  z i = z i y 1 (x1 , . . . , xm ), . . . , y n (x1 , . . . , xm ) . Then (10.1.4) can be expressed as n

X ∂z i ∂y l ∂z i = . ∂xj ∂y l ∂xj l=1

Let U be an open set in R , and let F = (F 1 , . . . , F n ) : U −→ Rn be a map. We say that F is smooth (on U ) if the function F i is smooth for i = 1, . . . , m. Smooth maps will be our focus for the rest of the book. The next result, which says that smoothness is ultimately a local phenomenon, will be used frequently, but usually without attribution. m

Theorem 10.1.11. Let U be an open set in Rm , and let F : U −→ Rm be a map. Then: (a) If F is smooth and U 0 is an open set in U , then F |U 0 is smooth. (b) Conversely, if every point p in U has a neighborhood U 0 in U such that F |U 0 is smooth, then F is smooth. Theorem 10.1.12. Let U and V be open sets in Rm and Rn , respectively, and let F : U −→ Rn and G : V −→ Rk be maps such that F (U ) ⊆ V . If F and G are smooth, then so is G ◦ F .

A (parametrized) curve in Rm is a map λ : I −→ Rm , where I is an interval in R that is either open, closed, half-open, or half-closed, and where the possibility that I is infinite is not excluded. Our focus will be on the case where I is a finite open interval, usually denoted by (a, b). Rather than provide a separate statement identifying the independent variable for the curve, most often denoted by t, and sometimes by u, it is convenient to incorporate this into the notation for λ, as in λ(t) : (a, b) −→ R3 . Let λ = (λ1 , . . . , λm ). By definition, λ is smooth [on (a, b)] if and only if λi is smooth for i = 1, . . . , m. Suppose λ is in fact smooth. The (Euclidean) velocity of λ and the (Euclidean) acceleration of λ are the smooth curves dλ (t) : (a, b) −→ Rm dt respectively; that is,

and

for all t in (a, b).

and

dλ (t) = dt



d2 λ (t) = dt2



d2 λ (t) : (a, b) −→ Rm , dt2

 dλ1 dλm (t), . . . , (t) dt dt

 d2 λ1 d2 λ m (t), . . . , (t) dt2 dt2

10 Analysis in Rm

206

Theorem 10.1.13. Let U be an open set in Rm , let p be a point in U , and let v be a vector in Rm . Then there is a real number ε > 0 and a smooth curve λ(t) : (−ε, ε) −→ U such that λ(0) = p and (dλ/dt)(0) = v. Proof. Define a smooth curve λ(t) : (−ε, ε) −→ U by λ(t) = p + tv, where ε is chosen small enough that λ (−ε, ε) ⊂ U . Clearly, λ has the desired properties. The next result is the key to later discussions about “vectors” and “tangent spaces”. Theorem 10.1.14. Let U be an open set in Rm , let F : U −→ Rn be a map, let p be a point in U , and let v be a vector in Rm . Then: (a) If F is differentiable at p, then dp (F )(v) =

d(F ◦ λ) (t0 ), dt

where λ(t) : (a, b) −→ U is any smooth curve such that λ(t0 ) = p and (dλ/dt)(t0 ) = v for some t0 in (a, b) (as given by Theorem 10.1.13). (b) If ψ(t) : (a, b) −→ U is a smooth curve, then   dψ d(F ◦ ψ) dψ(t) (F ) (t) = (t) dt dt for all t in (a, b). Proof. (a): Let v = (a1 , . . . , am ), λ = (λ1 , . . . , λm ), and F = (F 1 , . . . , F n ). Then F ◦ λ = (F 1 ◦ λ, . . . , F n ◦ λ), so   d(F ◦ λ) d(F 1 ◦ λ) d(F n ◦ λ) (t0 ) = (t0 ), . . . , (t0 ) . dt dt dt We have X ∂F j  dλi d(F j ◦ λ) (t0 ) = λ(t ) (t0 ) 0 dt ∂xi dt i =

X i

ai

[Th 10.1.10]

∂F j (p) ∂xi

= dp (F j )(v)

[Th 10.1.2]

for j = 1, . . . , n. Thus,  d(F ◦ λ) (t0 ) = dp (F 1 )(v), . . . , dp (F n )(v) = dp (F )(v), dt where the last equality follows from Theorem 10.1.8(b). (b): This follows from part (a).

10.2 Immersions and Diffeomorphisms

207

Not all maps of interest have domains that are open sets. For this reason we need an “extended” definition of smoothness to handle maps defined on sets that are not necessarily open. Let S be an arbitrary subset of Rm , and let F : S −→ Rn be a map. We say that F is (extended) smooth (on S) if for every point p in S, there is a neighborhood U of p in Rm and a (Euclidean) smooth map Fe : U −→ Rn such that F and Fe agree on S ∩ U; that is, F |S∩U = Fe|S∩U . Although in the preceding definition both U and Fe might very well depend on p, the next result shows that this dependence can be avoided. Theorem 10.1.15. Let S be a subset of Rm , and let F : S −→ Rn be a map. Then F is (extended) smooth if and only if there is an open set U in Rm containing S and a (Euclidean) smooth map Fe : U −→ Rn such that F and Fe agree on S; that is, F = Fe|S . For example, a curve λ(t) : [a, b] −→ Rm is smooth if and only if there is e : (e an interval (e a, eb) containing [a, b] and a smooth curve λ(t) a, eb) −→ Rm such e [a,b] . that λ = λ| Theorem 10.1.16. Let S be a subset of Rm , and let F : S −→ Rn be a map. If F is (extended) smooth, then it is continuous. Theorem 10.1.17. Let S and T be subsets of Rm and Rn , respectively, and let F : S −→ Rn and G : T −→ Rk be maps such that F (S) ⊆ T . If F and G are (extended) smooth, then so is G ◦ F .

10.2

Immersions and Diffeomorphisms

Let U be an open set in Rm , let F : U −→ Rn be a smooth map, where m ≤ n, and let p be a point in U . Since F is smooth on U , hence differentiable at p, we have from remarks in Section 10.1 that dp (F )(Rm ) can be viewed as the “tangent space” to the graph of F at F (p). We say that F is an immersion at p if the differential map dp (F ) : Rm −→ Rn is injective, and that F is an immersion (on U ) if it is an immersion at every p in U . The next result gives alternative ways of characterizing an immersion. Theorem 10.2.1. Let U be an open set in Rm , let F : U −→ Rn be a smooth map, where m ≤ n, and let p be a point in U . Then the following are equivalent: (a) F is an immersion at p. (b) dp (F )(Rm ) is m-dimensional. (c) JF (p) has rank m. If m = n, then each of the following is equivalent to each of the above: (d) dp (F ) is a linear isomorphism. (e) det JF (p) 6= 0. Proof. The equivalence of (a) and (b) follows from Theorem 1.1.12, and the equivalence of (b) and (c) from (10.1.3). The equivalence of (d) and (e) follows

10 Analysis in Rm

208

from Theorem 2.5.3. When m = n, the equivalence of (c) and (e) follows from Theorem 2.4.8. Let U be an open set in Rm , let F, G : U −→ Rm be smooth maps, and let p be a point in U , where we note that both the domain and codomain of F and G are subsets of Rm . We say that F : U −→ F (U ) is a diffeomorphism, and that U and F (U ) are diffeomorphic, if F (U ) is open in Rm , F : U −→ F (U ) is bijective, and F −1 : F (U ) −→ Rm is smooth. We say that G is a local diffeomorphism at p if there is a neighborhood U 0 ⊆ U of p in Rm and a neighborhood V of G(p) in Rm such that G|U 0 : U 0 −→ V is a diffeomorphism. Then G is said to be a local diffeomorphism (on U ) if it is a local diffeomorphism at every p in U . Evidently, every diffeomorphism is a local diffeomorphism. It is straightforward to give “extended” versions of the preceding definitions using the extended definition of smoothness presented in Section 10.1. p Example 10.2.2. Let D = {(x, y) ∈ R2 : x2 + y 2 < 1} be the unit open disk in R2 , and recall the map F : D −→ R2 from Section 9.4 defined by F (x, y) = p

1 1 − x2 − y 2

(x, y).

It can be shown that F is a diffeomorphism. Thus, D is diffeomorphic to all of R2 . ♦ Theorem 10.2.3 (Inverse Map Theorem). Let U be an open set in Rm , let F : U −→ Rm be a smooth map, and let p be a point in U . Then F is a local diffeomorphism at p if and only if it is an immersion at p. Thus, F is a local diffeomorphism if and only if it is an immersion. The inverse map theorem (also called the inverse function theorem) underscores the close relationship between “smoothness” and “tangent space”. It is one of the most important results in analysis and will be called upon repeatedly. Theorem 10.2.4. Let f (t) : (a, b) −→ (c, d) be a smooth function. Then: (a) If f is a diffeomorphism, then df /dt is nowhere-vanishing. (b) If df /dt is nowhere-vanishing, then f is either strictly increasing or strictly decreasing. (c) If f is a diffeomorphism, then it is either strictly increasing or strictly decreasing. Proof. (a): Since f −1 ◦ f (t) = t, by Theorem 10.1.10,  df df −1 f (t) (t) = 1 dt dt for all t in (a, b). The result follows. (b): For a point t2 in (a, b), we have (df /dt)(t2 ) 6= 0. Suppose (df /dt)(t2 ) > 0. Since f is smooth, so is df /dt, and therefore, by Theorem 10.1.1, df /dt is

209

10.3 Euclidean Derivative and Vector Fields

continuous. If there is a point t1 in (a, b) such that (df /dt)(t1 ) < 0, then by Theorem 9.1.19, there is a point t0 in (c, d) such that (df /dt)(t0 ) = 0, which contradicts the nowhere-vanishing assumption on df /dt. Thus, (df /dt)(t) > 0 for all t in (a, b); that is, f is strictly increasing. Similarly, if (df /dt)(t2 ) < 0, then f is strictly decreasing. (c): This follows from parts (a) and (b).

10.3

Euclidean Derivative and Vector Fields

Let U be an open set in Rm , and let X : U −→ Rm be a map, where we note that both the domain and codomain of X are subsets of Rm . In the present context, we refer to X as a vector field (on U ). To highlight the appearance of vector fields and to distinguish them from other types of maps, let us denote X(p)

by

Xp

for all p in U . We say that X vanishes at p if Xp = (0, . . . , 0), is nonvanishing at p if Xp 6= (0, . . . , 0), and is nowhere-vanishing (on U ) if it is nonvanishing at every p in U . Let us denote the set of smooth vector fields on U by X(U ). We make X(U ) into both a vector space over R and a module over C ∞ (U ) by defining operations as follows: for all vector fields X, Y in X(U ), all functions f in C ∞ (U ), and all real numbers c, let (X + Y )p = Xp + Yp , cXp = cXp , and (f X)p = f (p)Xp for all p in U . Let X = (α1 , . . . , αm )

and

Y = (β 1 , . . . , β m ),

(10.3.1)

where αi , β j are functions in C ∞ (U ) for i, j = 1, . . . , m. Then (X + Y )p = α1 (p) + β 1 (p), . . . , αm (p) + β m (p)



and  (f X)p = f (p) α1 (p), . . . , f (p) αm (p) . The αi are called the components of X. Let Ei be the (constant) vector field in X(U ) defined by Ei = (0, . . . , 1, . . . , 0), where 1 is in the ith position and 0s are elsewhere for i = 1, . . . , m. The Euclidean derivative with respect to X consists of two maps, both denoted by DX . The first is DX : C ∞ (U ) −→ C ∞ (U )

10 Analysis in Rm

210 defined by DX (f )(p) = dp (f )(Xp )

(10.3.2)

for all functions f in C ∞ (U ) and all p in U . The second is DX : X(U ) −→ X(U ) defined by DX (Y )p = dp (Y )(Xp )

(10.3.3)

for all vector fields Y in X(U ) and all p in U . It follows from Theorem 10.1.14(a) that (10.3.2) and (10.3.3) can be expressed as DX (f )(p) =

d(f ◦ λ) (t0 ) dt

and DX (Y )p =

d(Y ◦ λ) (t0 ), dt

respectively, where λ(t) : (a, b) −→ U is any smooth curve such λ(t0 ) = p and (dλ/dt)(t0 ) = Xp for some t0 in (a, b). The existence of such a smooth curve is guaranteed by Theorem 10.1.13. Evidently, the Euclidean derivatives of f and Y with respect to X evaluated at p have the same mathematical content as the differentials at p of f and Y evaluated at Xp . Their difference, such as it is, amounts to a change of notation that emphasizes the role of X in the Euclidean derivative. For vector fields X, Y in X(U ), we define a function hX, Y i : U −→ R in C ∞ (U ) by the assignment p 7−→ hXp , Yp i for all p in U . The Euclidean derivative with respect to X satisfies fundamental algebraic properties, versions of which will reappear later in several settings. Theorem 10.3.1. Let U be an open set in Rm , let X, Y, Z be vector fields in X(U ), and let f be a function in C ∞ (U ). Then: (a) DX+Y (Z) = DX (Z) + DY (Z). (b) Df X (Y ) = f DX (Y ). (c) DX (Y + Z) = DX (Y ) + DX (Z). (d) DX (f Y ) = DX (f ) Y + f DX (Y ). (e) DX (hY, Zi) = hDX (Y ), Zi + hY, DX (Z)i. Proof. Using Theorem 10.1.3 and Theorem 10.1.8(b) gives the result.

211

10.3 Euclidean Derivative and Vector Fields

Theorem 10.3.2. Let U be an open set in Rm , let f be a function in C ∞ (U ), and let X, Y be vector fields in X(U ), with X = (α1 , . . . , αm )

Y = (β 1 , . . . , β m ).

and

Then: (a) DX (f ) =

X

αi

i

∂f . ∂xi

(b) DX

  2  X i ∂β j ∂f i j ∂ f . DY (f ) = α +α β ∂xi ∂xj ∂xi ∂xj ij

Proof. (a): For a point p in U , we have DX (f )(p) = dp (f )(Xp )

[(10.3.2)]

1

m

= dp (f ) α (p), . . . , α (p) X ∂f = αi (p) i (p). ∂x i

 [Th 10.1.2]

Since p was arbitrary, the result follows. (b): By part (a), X  X X   j ∂f i ∂ j ∂f DX DY (f ) = DX β = α β ∂xj ∂xi j ∂xj j i   X  2 X ∂ ∂f ∂β j ∂f i j ∂ f = αi i β j j = αi i + α β . ∂x ∂x ∂x ∂xj ∂xi ∂xj ij ij Theorem 10.3.3. Let U be an open set in Rm , and let X, Y, Z be vector fields in X(U ), with X = (α1 , . . . , αm ),

Y = (β 1 , . . . , β m ),

and

Z = (γ 1 , . . . , γ m ).

Then: (a) DX (Y ) =

X X j

i

αi

 ∂β j Ej . ∂xi

(b) DX

 2 k XX ∂β j ∂γ k i i j ∂ γ DY (Z) = α +α β Ek . ∂xi ∂xj ∂xi ∂xj ij 

k

(c) DDX (Y ) (Z) =

X X k

ij

 ∂γ k α Ek . ∂xi ∂xj i ∂β

j

10 Analysis in Rm

212

Proof. We make repeated use of Theorem 10.3.1 and Theorem 10.3.2. Let f be a function in C ∞ (U ), and observe that DX (Ei ) = 0

and

DEi (f ) =

∂f ∂xi

for i = 1, . . . , m. (a): We have DX (Y ) =

DP

=

X

i

α i Ei

X



j

β Ej

=

j i

X

αi DEi (β j Ej )

ij j

j

α [DEi (β )Ej + β DEi (Ej )]

ij

=

X

αi

ij

XX ∂β j  ∂β j E = αi i Ej . j ∂xi ∂x j i

(b): By part (a), DY (Z) =

X X k

j

 ∂γ k β Ek , ∂xj j

hence X  X    ∂γ k ∂γ k DX DY (Z) = DPi αi Ei β j j Ek = αi DEi β j j Ek ∂x ∂x jk ijk     k k X ∂γ k i j ∂γ j j ∂γ = α DEi (β ) j Ek + β DEi Ek + β DEi (Ek ) ∂x ∂xj ∂xj ijk  2 k X  ∂β j ∂γ k j ∂ γ = αi E + β E k k ∂xi ∂xj ∂xi ∂xj ijk  2 k XX ∂β j ∂γ k  X i i j ∂ γ Ek . = α + αβ ∂xi ∂xj ∂xi ∂xj ij ij k

(c): By part (a), DDX (Y ) (Z) =

DP

ij

αi (∂β j /∂xi )Ej

X

k



γ Ek

k

X ∂β j  = αi i DEj (γ k Ek ) ∂x ijk   X ∂β j = αi i [DEj (γ k )Ek + γ k DEi (Ek )] ∂x ijk X ∂β j ∂γ k XX ∂β j ∂γ k  i = α Ek = αi i Ek . ∂xi ∂xj ∂x ∂xj ij ijk

k

213

10.4 Lie Bracket

Let U be an open set in Rm , and let X, Y be vector fields in X(U ). The second order Euclidean derivative with respect to X and Y consists of 2 two maps, both denoted by DX,Y . The first is 2 DX,Y : C ∞ (U ) −→ C ∞ (U )

defined by  2 DX,Y (f ) = DX DY (f ) − DDX (Y ) (f )

for all functions f in C ∞ (U ). The second is

2 DX,Y (Z) : X(U ) −→ X(U )

defined by  2 DX,Y (Z) = DX DY (Z) − DDX (Y ) (Z)

for all vector fields Z in X(U ).

Theorem 10.3.4. Let U be an open set in Rm , and let X, Y, Z be vector fields in X(U ), with X = (α1 , . . . , αm ),

Y = (β 1 , . . . , β m ),

Z = (γ 1 , . . . , γ m ).

and

Then (a) 2 DX,Y

(Z) =

X X ij

k

 ∂2γk Ek . αβ ∂xi ∂xj i j

(b) 2 2 DY,X (Z) = DX,Y (Z).

Proof. (a): This follows from Theorem 10.3.3(b) and Theorem 10.3.3(c). (b): This follows from Theorem 10.1.6 and part (a).

10.4

Lie Bracket

Let U be an open set in Rm . Lie bracket is the map [·,·] : X(U ) × X(U ) −→ X(U ) defined by [X, Y ] = DX (Y ) − DY (X)

for all vector fields X, Y in X(U ). We refer to [X, Y ] as the Lie bracket of X and Y . Theorem 10.4.1. Let U be an open set in Rm , and let X, Y be vector fields in X(U ), with X = (α1 , . . . , αm ) Then [X, Y ] =

X X  j

Y = (β 1 , . . . , β m ).

and

i

α

i ∂β

j

∂xi

−β

i ∂α

j

∂xi

 Ej .

10 Analysis in Rm

214 Proof. By Theorem 10.3.3(a), DX (Y ) =

X X j

αi

i

 ∂β j Ej ∂xi

and

DY (X) =

XX j

i

βi

 ∂αj Ej , ∂xi

from which the result follows. Theorem 10.4.2. Let U be an open set in Rm , let X, Y, Z be vector fields in X(U ), and let f, g be functions in C ∞ (U ). Then: (a) [Y, X] = −[X, Y ]. (b) [X + Y, Z] = [X, Z] + [Y, Z]. (c) [X, Y + Z] = [X, Y ] + [X, Z]. (d) [f X, gY ] = f g[X, Y ] +f DX (g)Y − gD  Y (f )X. (e) X, [Y, Z] + Y, [Z, X] + Z, [X, Y ] = (0, . . . , 0). (Jacobi’s identity) (f) [Ei , Ej ] = (0, . . . , 0) for i, j = 1, . . . , m. Proof. (a): Straightforward. (b), (c): This follows from parts (a) and (c) of Theorem 10.3.1. (d): Let X = (α1 , . . . , αm ),

Y = (β 1 , . . . , β m ),

and

Z = (γ 1 , . . . , γ m ),

so that f X = (f α1 , . . . , f αm ) and gY = (gβ 1 , . . . , gβ m ). By Theorem 10.3.2 and Theorem 10.4.1, [f X, gY ] =

X X  j

f αi

i

∂(gβ j ) ∂(f αj ) − gβ i i ∂x ∂xi

 Ej

 j j ∂g j i ∂β i ∂f j i ∂α = fα β + f α g i − gβ α − gβ f i Ej ∂xi ∂x ∂xi ∂x j i  X X  j XX ∂β j i i ∂α i ∂g j = fg α −β Ej + f α β Ej ∂xi ∂xi ∂xi j i i j X X  i ∂f j −g β α Ej ∂xi i j X X 

i

= f g[X, Y ] + f DX (g)Y − gDY (f )X. (f): This follows from Theorem 10.4.1. (e): It follows from parts (d) and (f) that [β j Ej , γ k Ek ] = β j γ k [Ej , Ek ] + β j DEj (γ k )Ek − γ k DEk (β j )Ej = βj

∂γ k ∂β j Ek − γ k k Ej j ∂x ∂x

215

10.4 Lie Bracket for j, k = 1, . . . , m. We have " X # X X   i j k X, [Y, Z] = α Ei , β Ej , γ Ek i

=

X

j i

j

k k

 α Ei , [β Ej , γ Ek ]

ijk

=

X ijk

j ∂γ k k ∂β α Ei , β E − γ Ej k ∂xj ∂xk i

j



(10.4.1)

 X  (1)  (2)  k j X i j ∂γ i k ∂β = α Ei , β Ek − α Ei , γ Ej , ∂xj ∂xk ijk

ijk

where summations are numbered for later reference. We also have   k i j ∂γ α Ei , β Ek ∂xj   k k j ∂γ j ∂γ = α β [Ei , Ek ] + α DEi β E − β DEk (αi )Ei k ∂xj ∂xj   ∂ ∂γ k ∂γ k ∂αi = αi i β j j Ek − β j j Ei x ∂x ∂x ∂xk i j

= αi

i

∂β j ∂γ k ∂2γk ∂γ k ∂αi Ek + αi β j i j Ek − β j j Ei , i j ∂x ∂x ∂x ∂x ∂x ∂xk

hence  (1)  X ∂γ k αi Ei , β j j Ek ∂x ijk

=

X ijk

αi β j

X ∂β j ∂γ k X ∂γ k ∂αi ∂2γk i E + α E − βj j Ei k k ∂xi ∂xj ∂xi ∂xj ∂x ∂xk ijk

ijk

X ∂β j ∂γ k X ∂γ j ∂αk ∂2γk = α i β j i j Ek + αi i E − βi i Ek k ∂x ∂x ∂x ∂xj ∂x ∂xj ijk ijk ijk  X X  ∂2γk ∂β j ∂γ k ∂αk i ∂γ j = αi β j i j + αi i − β Ek . ∂x ∂x ∂x ∂xj ∂xj ∂xi ij

(10.4.2)

X

k

Replacing (α, β, γ) with (α, γ, β) in

P(1)

gives

 X X   (2)  k 2 k j k X ∂αk i ∂β j i j ∂β i j ∂ β i ∂γ ∂β α Ei , γ Ek = αγ +α − γ Ek , ∂xj ∂xi ∂xj ∂xi ∂xj ∂xj ∂xi ij ijk

k

10 Analysis in Rm

216 so

 (2)  X ∂β j αi Ei , γ k k Ej ∂x ijk   X X ∂2βk ∂γ j ∂β k ∂αk i ∂β j = αi γ j i j + αi i − γ Ek ∂x ∂x ∂x ∂xj ∂xj ∂xi ij k  2 k k j X X  ∂αk ∂β j i i ∂ β j i ∂β ∂γ = α γ +α − γ Ek . ∂xi ∂xj ∂xj ∂xi ∂xj ∂xi ij

(10.4.3)

k

  It follows from (10.4.1)–(10.4.3) that the kth component of X, [Y, Z] is (3)  j k  k X i j ∂ 2 γ k ∂αk i ∂γ j i ∂β ∂γ X, [Y, Z] = αβ + α − β ∂xi ∂xj ∂xi ∂xj ∂xj ∂xi ij  ∂2βk ∂β k ∂γ j ∂αk ∂β j i − αi i j γ j − αi j + γ . ∂x ∂x ∂x ∂xi ∂xj ∂xi

Replacing (α, β, γ) with (β, γ, α) in

P(3)

gives

  k X i j ∂ 2 αk ∂γ j ∂αk ∂β k i ∂αj Y, [Z, X] = βγ + βi i − γ i j j ∂x ∂x ∂x ∂x ∂xj ∂xi ij  ∂2γk ∂γ k ∂αj ∂β k ∂γ j i − β i i j αj − β i j + α ∂x ∂x ∂x ∂xi ∂xj ∂xi  X ∂ 2 αk ∂αk i ∂γ j ∂αj ∂β k i i j = β γ + β − γ ∂xi ∂xj ∂xj ∂xi ∂xi ∂xj ij  2 k k j ∂αj i ∂γ k j i ∂ γ i ∂β ∂γ −α β − β +α . ∂xi ∂xj ∂xi ∂xj ∂xj ∂xi Replacing (α, β, γ) with (γ, α, β) in

P(3)

(10.4.5)

yields

j k ∂2βk ∂γ k i ∂β j i ∂α ∂β + γ − α ∂xi ∂xj ∂xi ∂xj ∂xj ∂xi ij  ∂ 2 αk ∂αk ∂β j ∂γ k ∂αj i − γi i j βj − γi j + β ∂x ∂x ∂x ∂xi ∂xj ∂xi  X ∂2βk ∂αj ∂β k i ∂β j ∂γ k = αj i j γ i + γ − αi i i j ∂x ∂x ∂x ∂x ∂x ∂xj ij  ∂ 2 αk j i ∂αk ∂β j i ∂αj i ∂γ k β γ − − γ + β . ∂xi ∂xj ∂xj ∂xi ∂xi ∂xj

 k Z, [X, Y ] =

X

(10.4.4)

γ i αj

(10.4.6)

Combining (10.4.4)–(10.4.6) and using of Theorem 10.1.6 gives the result.

217

10.4 Lie Bracket

Theorem 10.4.3. If U is an open set in Rm and X, Y, Z are vector fields in X(U ), then: DDX (Y ) (Z) − DDY (X) (Z) = D[X,Y ] (Z)

  = DX DY (Z) − DY DX (Z) .

Proof. Let X = (α1 , . . . , αm ),

Y = (β 1 , . . . , β m ),

and

Z = (γ 1 , . . . , γ m ).

(a): By Theorem 10.3.3(c), DDX (Y ) (Z) =

X X k

αi

ij

∂β j ∂xi



j



∂γ k Ek ∂xj

and DDY (X) (Z) =

X X k

ij

β

i ∂α

∂xi

∂γ k Ek , ∂xj

hence DDX (Y ) (Z) − DDY (X) (Z)  k j XX ∂β j ∂γ i i ∂α = α −β Ek i i ∂x ∂x ∂xj i kj  k X X j ∂γ = [X, Y ] Ek ∂xj j

[Th 10.4.1]

k

= D[X,Y ] (Z),

[Th 10.3.3(a)]

which proves the first equality.  It follows from Theorem 10.3.3(b) that the kth component of DX DY (Z) is   2 k k X i ∂β j ∂γ k i j ∂ γ DX (DY (Z)) = α +α β . (10.4.7) ∂xi ∂xj ∂xi ∂xj ij Replacing (α, β, γ) with (β, α, γ) in the preceding identity gives DY (DX (Z))

k

 2 k X ∂αj ∂γ k i i j ∂ γ = β +β α . ∂xi ∂xj ∂xi ∂xj ij

(10.4.8)

10 Analysis in Rm

218

  The kth components of DX DY (Z) − DY DX (Z) and D[X,Y ] (Z) are equal: k DX (DY (Z)) − DY (DX (Z))  j k X ∂β j ∂γ k i ∂α ∂γ = αi i − β ∂x ∂xj ∂xi ∂xj ij  k j XX ∂β j ∂γ i i ∂α = α −β i i ∂x ∂x ∂xj j i ∂γ k ∂xj j k = D[X,Y ] (Z) , =

X

[X, Y ]j

[(10.4.7), (10.4.8)]

[Th 10.4.1] [Th 10.3.3(a)]

which proves the second equality.

10.5

Integrals

Having discussed derivatives in Sections 10.1–10.4, we now briefly turn our attention to integrals. Let [ai , bi ] be a closed interval in R, where ai ≤ bi for i = 1, . . . , m. The set C = [a1 , b1 ] × · · · × [am , bm ] is called a closed cell in Rm . The content of C is defined by cont(C) = (b1 − a1 ) · · · (bm − am ).

(10.5.1)

We observe that cont(C) = 0 if and only if ai = bi for some 1 ≤ i ≤ m. When m = 1, 2, 3, “content” is referred to as “length”,“area”, “volume”, respectively. To illustrate, consider the subset [0, 1] of R and the “geometrically equivalent” subset [0, 1] × [0, 0] of R2 . Then [0, 1] has a length of 1, while [0, 1] × [0, 0] has an area of 0. A subset S of Rm is said to have content zero if for every real number ε > 0, there is a finite collection {Ci : 1 = 1, . . . , k} of closed cells such Sk Pk that S ⊆ i=1 Ci and i=1 cont(Ci ) < ε. For example, [0, 1] × [0, 0] has content zero. Let C be a closed cell in Rm , and let f : C −→ R be a bounded function. We say that a finite collection {Ci : 1 = 1, . . . , k} of closed cells in Rm is a partition of C if each SkCi is a subset of C, the Ci intersect only along their boundaries, and C = i=1 Ci . For example, {[0, 1] × [0, 1], [1, 2] × [0, 1]} is a partition of [0, 2] × [0, 1]. Given a point pi in Ci for i = 1, . . . , k, the sum P f (p ) cont(C i i ) is an approximation to what we intuitively think of as the i “content under the graph of f ”. For each choice of partition of C and each choice of the pi , we get a corresponding sum. Using an approach analogous to that adopted in the integral calculus of one real variable, where inscribed and superscribed rectangles are used to approximate the area under the graph of a

219

10.5 Integrals

real-valued function ofR one real Rvariable, we define lower and upper integrals of f over C, denoted by f and C f , respectively, to be limits of the preceding C R approximating sums. It can be shown that since f is bounded, both f and R R RC f are finite. We say that f is (Riemann) integrable if f and C f are C C equal. In that case, their common value, called the (Riemann) integral of f over C, is denoted by Z Z f dx1 · · · dxm or f (x1 , . . . , xm ) dx1 · · · dxm . C

C

We need to be able to integrate functions over sets less restrictive than closed cells. Let S be a bounded set in Rm , and let f : S −→ R be a bounded function. Since S is bounded, there is a closed cell C containing S. Let us define a function fC : C −→ R by ( f (p) if p ∈ S fC (p) = 0 if p ∈ CS. We say that f is (Riemann) integrable if fC is integrable. In that case, the R (Riemann) integral of f over S is denoted by S f dx1 · · · dxm and defined by Z Z S

f dx1 · · · dxm =

C

fC dx1 · · · dxm .

(10.5.2)

R It can be shown that the integrability of f and the value of S f dx1 · · · dxm are independent of the choice of closed cell containing S. We now introduce a type of bounded set in Rm over which a certain type of function is always integrable. A subset D of Rm is called a domain of integration if it is bounded and its boundary in Rm has content zero. For example, a closed cell in Rm is a domain of integration. Theorem 10.5.1 (Criterion for Integrability). If D is a domain of integration in Rm and f : D −→ R is a bounded continuous function, then f is integrable. Theorem 10.5.2. Let D be a domain of integration in Rm , let f, g : D −→ R be bounded continuous functions, and let c be a real number. Then cf + g is integrable and Z Z Z (cf + g) dx1 · · · dxm = c f dx1 · · · dxm + g dx1 · · · dxm . D

D

D

Let U be an open set in Rm , and let f : U −→ R be a continuous function that has compact support; that is, supp(f ) is compact in U . By Theorem 9.2.2, U is the union of a collection of open balls in Rm . Using Theorem 9.1.5, it is easily shown that since supp(f ) is compact in U , it is contained in the union D of a finite subcollection of the open balls. Thus, supp(f ) ⊆ D ⊆ U . Furthermore, it can also be shown that D is a domain of integration. The

10 Analysis in Rm

220

R (Riemann) integral of f over U is denoted by U f dx1 · · · dxm and defined by Z Z f dx1 · · · dxm = f dx1 · · · dxm , (10.5.3) U

D

where the right-hand side is given by (10.5.2). By Theorem 9.1.13, f is continuous on D, and by Theorem 9.1.22(b), f is bounded on supp(f ), hence bounded on D. We have from Theorem 10.5.1 that the integral exists. It can be shown that the value of the integral is independent of the choice of domain of integration containing supp(f ). Theorem 10.5.3 (Change of Variables). Let U and V be open sets in Rm , let F : U −→ V be a diffeomorphism, and let f : V −→ R be a continuous function that has compact support. Then Z Z f dx1 · · · dxm = (f ◦ F ) |det(JF )| dx1 · · · dxm , V

U

where JF is the Jacobian matrix of F . Theorem 10.5.4 (Iterated Integral). If C = [a1 , b1 ] × · · · × [am , bm ] is a closed cell in Rm and f : C −→ R is a continuous function, then    Z Z bm   Z b2  Z b1 1 m 1 2 f= ··· f (x , . . . , x ) dx dx · · · dxm . C

am

Furthermore, the value of are integrated.

a2

R C

a1

f is independent of the order in which the variables

Theorem 10.5.5 (Differentiating Under Integral Sign). Let U be an open set in Rm , and let f (x1 , . . . , xm , t) : U × [a, b] −→ R be a continuous function such that ∂f /∂xi exists and is continuous on U × [a, b] for i = 1, . . . , m. Define a function F : U −→ R by Z b F (x1 , . . . , xm ) = f (x1 , . . . , xm , t) dt a 1

m

for all (x , . . . , x ) in U . Then F is in C 1 (U ) and Z b ∂f 1 ∂F 1 m (x , . . . , x ) = (x , . . . , xm , t) dt i ∂xi ∂x a for i = 1, . . . , m. Let D be a domain of integration in Rm . It is clear that the function 1D : D −→ R with constant value 1 is continuous and bounded, so Theorem 10.5.1 applies. The content of D is defined by Z cont(D) = 1D dx1 · · · dxm . (10.5.4) D

It can be shown that when D is a closed cell in Rm , the definitions of cont(D) given by (10.5.1) and (10.5.4) agree.

221

10.6 Vector Calculus

Theorem 10.5.6 (Mean Value Theorem for Integrals). If D is a connected domain of integration in Rm and f : D −→ R is a bounded continuous function, then there is a point p0 in D such that Z f dx1 · · · dxm = f (p0 ) cont(D). D

10.6

Vector Calculus

In this section, we present a brief overview of the basic definitions and results of vector calculus. Much of what follows will be replaced later on with more modern counterparts expressed in the language of differential geometry. Let U be an open set in R3 , let f, g be functions in C ∞ (U ), and let F = 1 (F , F 2 , F 3 ) : U −→ R3 be a smooth map. The definitions of the classical differential operators in R3 are given in Table 10.6.1, with the classical vector calculus notation shown alongside the notation to be used in this book. The use of “lap” for the Laplacian is not conventional, but it fits well with notation to be introduced later.

Operator

Definition 

Gradient Divergence Curl Laplacian Laplacian

∇f =

∂f ∂f ∂f , , ∂x ∂y ∂z

 = grad(f )

∂F 1 ∂F 2 ∂F 3 + + = div(F ) ∂x ∂y ∂z  3  ∂F ∂F 2 ∂F 1 ∂F 3 ∂F 2 ∂F 1 ∇×F = − , − , − = curl(F ) ∂y ∂z ∂z ∂x ∂x ∂y ∇·F =

∂2f ∂2f ∂2f + 2 + 2 = lap(f ) 2 ∂x ∂y ∂z  ∇2 F = lap(F 1 ), lap(F 2 ), lap(F 3 ) = lap(F ) ∇2 f =

Table 10.6.1. Classical differential operators

Theorem 10.6.1. With the above setup: (a) grad(f g) = f grad(g) + g grad(f ). (b) div(f F ) = fdiv(F ) + hgrad(f ), F i. (c) curl grad(f ) = 0. (d) div curl(F )  = 0.  (e) curl curl(F ) = grad div(F ) − lap(F ).

222

10 Analysis in Rm

Theorem 10.6.2. With the above setup, the following are equivalent: (a) curl(F ) = 0. (b) F = grad(f ) for some function f in C ∞ (U ). R (c) For any closed path P in U , the line integral P F · ds equals 0. Theorem 10.6.3. With the above setup, the following are equivalent: (a) div(F ) = 0. (b) F = curl(G) for some smooth map G : U −→ R3 .

Part II

Curves and Regular Surfaces

223

Chapter 11

Curves and Regular Surfaces in R3 In earlier discussions, the set Rm appeared in a variety of contexts: as a vector space (also denoted by Rm ), an inner product space (Rm , e), a normed vector space (Rm , k·k), a metric space (Rm , d), and a topological space (Rm , T). Section 9.4 outlines the logical connections between these spaces. Looking back at Chapter 10, it would have been more precise, although cumbersome, to use the notation (Rm , e, k·k , d, T), or at least (Rm , e), instead of simply Rm when discussing Euclidean derivatives and integrals. In this chapter, we are concerned with many of the same concepts considered in Chapter 10, but this time exclusively for m = 3. We use the notation R3 in the preceding generic manner, allowing the structures relevant to a particular discussion to be left implicit. Aside from notational convenience, this has the added virtue of reserving the notation R30 = (Rm , e), and later R31 = (Rm , m), for more specific purposes in Chapter 12.

11.1

Curves in R3

Recall from Section 10.1 the definition of a (parametrized) curve in R3 and what it means for such a curve to be smooth. A smooth curve λ(t) : (a, b) −→ R3 is said to be regular if its velocity (dλ/dt)(t) : (a, b) −→ R3 is nowhere-vanishing. Let g(u) : (c, d) −→ (a, b) be a diffeomorphism. Since λ and g are smooth, by Theorem 10.1.12, so is λ ◦ g. We say that the curve λ ◦ g(u) : (c, d) −→ R3 is a smooth reparametrization of λ. Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

225

11 Curves and Regular Surfaces in R3

226

Theorem 11.1.1. Let λ(t) : (a, b) −→ R3 be a smooth curve, and let g(u) : (c, d) −→ (a, b) be a diffeomorphism. Then λ is regular if and only if λ ◦ g is regular. Proof. By Theorem 10.1.10,  d(λ ◦ g) dg dλ (u) = (u) g(u) , du du dt and by Theorem 10.2.4(a), dg/du is nowhere-vanishing. Since g is bijective, as u varies over (c, d), g(u) varies over (a, b). It follows that dλ/dt is nowherevanishing if and only if d(λ ◦ g)/du is nowhere-vanishing. Theorem 11.1.1 can be used to define an equivalence relation on the collection of smooth curves as follows: for two such curves λ and ψ, we write λ ∼ ψ if ψ is a smooth reparametrization of λ. This idea will not be pursued further, but it makes the point that our focus should be on the intrinsic properties of a “curve”, for example, whether it is regular, and not the specifics of a particular parametrization.

11.2

Regular Surfaces in R3

Our immediate goal in this section is to define what we temporarily refer to as a “smooth surface”. We all have an intuitive idea of what it means for a geometric object to be “smooth”. For example, the sphere definitely has this property, but not the cube. The challenge is to translate such intuition into rigorous mathematical language. A feature of the sphere that gives it “smoothness” is our ability to attach to each of its points a unique “tangent plane”, something that is not possible for the cube. Let U be an open set in R2 , and let ϕ : U −→ R3 be a smooth map. In the present context, we refer to ϕ as a parametrized surface. The differential map at a point q in U is dq (ϕ) : R2 −→ R3 . It follows from Theorem 10.2.1 that if ϕ is an immersion at q, then dq (ϕ)(R2 ) is a 2-dimensional vector space, which we can view as a “tangent plane” to the graph of ϕ at ϕ(q). This suggests that a “smooth surface” might reasonably be defined to be the image of a parametrized surface when the latter has the added feature of being an immersion. Before exploring this concept, we need to establish the notation for coordinates in R2 and R3 . In this chapter and the next, coordinates on R2 are denoted by (r 1 , r 2 ) or (r, s), and those on R3 by (x1 , x2 , x3 ) or (x, y, z). We sometimes, especially in the examples, identify R2 with the xy-plane in R3 . In that setting, coordinates on R2 are denoted by (x, y). Let U be an open set in R2 , and let ϕ = (ϕ1 , ϕ2 , ϕ3 ) : U −→ R3 be a parametrized surface. Since ϕ is smooth, by definition, so are ϕ1 , ϕ2 , and ϕ3 . For each q in U , we have  ϕ(q) = ϕ1 (q), ϕ2 (q), ϕ3 (q) ,

11.2 Regular Surfaces in R3

227

hence

 1  ∂ϕ ∂ϕ2 ∂ϕ3 ∂ϕ (q) = (q), (q), (q) ∂ri ∂ri ∂ri ∂ri for i = 1, 2. For brevity, let us denote

(11.2.1)

∂ϕ (q) by Hi | q . ∂ri Theorem 11.2.1. Let ϕ : U −→ R3 be a parametrized surface, let q be a point in U , and let (e1 , e2 ) be the standard basis for R2 . Then: (a) dq (ϕ)(ei ) = Hi |q for i = 1, 2. (b) dq (ϕ)(R2 ) = span({H1 |q , H2 |q }). Proof. For a vector v = (a1 , a2 ) in R2 , we have  dq (ϕ)(v) = dq (ϕ1 )(v), dq (ϕ2 )(v), dq (ϕ3 )(v) X 1 X ∂ϕ2 X ∂ϕ3  i ∂ϕ i = a (q), a (q), ai i (q) i i ∂r ∂r ∂r i i i   X ∂ϕ1 ∂ϕ2 ∂ϕ3 = ai (q), i (q), i (q) i ∂r ∂r ∂r i X = ai Hi |q ,

[Th 10.1.8(b)] [Th 10.1.2]

i

from which the result follows. Theorem 11.2.2. If ϕ : U −→ R3 is a parametrized surface and q is a point in U , then the following are equivalent: (a) ϕ is an immersion at q. (b) dq (ϕ)(R2 ) is 2-dimensional. (c) Jϕ (q) has rank 2. (d) H1 |q and H2 |q are linearly independent. (e) H1 |q × H2 |q 6= (0, 0, 0).

Remark. As noted in connection with Theorem 8.4.8, we are free to compute the vector product in part (e) using any choice of scalar product and orthonormal basis. It is convenient in the present context to work with the Euclidean inner product and the standard basis. In other words, computations will be performed in R30 . Proof. (a) ⇔ (b) ⇔ (c): This follows from Theorem 10.2.1. (c) ⇔ (d): We have from (10.1.2) that  1  ∂ϕ ∂ϕ1 (q) (q)  ∂r1  ∂r2      2  2  ∂ϕ  ∂ϕ Jϕ (q) =  , (q) (q) 2  ∂r1  ∂r     3  ∂ϕ3  ∂ϕ (q) (q) 1 2 ∂r ∂r

11 Curves and Regular Surfaces in R3

228

from which the result follows. (d) ⇔ (e): This follows from Theorem 8.4.8. The vector product approach in part (e) of Theorem 11.2.2 is a computationally convenient way of determining whether a parametrized surface is an immersion, and we will use it often. For simplicity, the figures for the next two examples have been drawn in the xy-plane of R3 , leaving it to the reader to imagine the suppressed z-axis. Example 11.2.3. Consider the parametrized surface ϕ : R2 −→ R3 given by  ϕ(r, s) = (r3 , r2 , s . In the 3-dimensional version of Figure 11.2.1, ϕ(U ) is not “smooth” along the z-axis because of a cusp. The Jacobian matrix is  2  3r 0 Jϕ (r, s) =  2r 0, 0 1 and the corresponding vector product is (3r2 , 2r, 0) × (0, 0, 1) = (2r, −3r2 , 0), which equals (0, 0, 0) when r = 0. It follows from Theorem 11.2.2 that ϕ is not an immersion. ♦

[Figure 11.2.1. Diagram for Example 11.2.3]

Example 11.2.4. Consider the parametrized surface ϕ : R² → R³ given by ϕ(r, s) = (r³ − r, r² − 1, s). In the 3-dimensional version of Figure 11.2.2, ϕ(U) is not “smooth” along the z-axis because of self-intersection. The Jacobian matrix is

$$J_\varphi(r, s) = \begin{pmatrix} 3r^2 - 1 & 0\\ 2r & 0\\ 0 & 1 \end{pmatrix},$$


and the corresponding vector product is (3r² − 1, 2r, 0) × (0, 0, 1) = (2r, −3r² + 1, 0), which never equals (0, 0, 0). By Theorem 11.2.2, ϕ is an immersion. The self-intersection is parametrized by ϕ(1, s) = (0, 0, s) = ϕ(−1, s) for all real numbers s. We have from Theorem 11.2.1(b) that

d_(1,s)(ϕ)(R²) = span({(2, 2, 0), (0, 0, 1)}) and d_(−1,s)(ϕ)(R²) = span({(2, −2, 0), (0, 0, 1)}).

This shows that along the line of self-intersection, there are two candidates for “tangent plane” at each point. ♦
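The immersion tests in Examples 11.2.3 and 11.2.4 are easy to reproduce with a computer algebra system. The following Python/SymPy sketch (an illustration, not part of the development) computes H₁|_q × H₂|_q for both parametrized surfaces and solves for the points where it vanishes.

```python
import sympy as sp

r, s = sp.symbols('r s', real=True)

# Example 11.2.3 (cusp) and Example 11.2.4 (self-intersection)
for phi in (sp.Matrix([r**3, r**2, s]), sp.Matrix([r**3 - r, r**2 - 1, s])):
    H1 = phi.diff(r)          # first column of the Jacobian matrix
    H2 = phi.diff(s)          # second column of the Jacobian matrix
    n = H1.cross(H2)          # vanishes exactly where phi fails to be an immersion
    print(n.T, '  vanishes for r in', sp.solve(n.dot(n), r))
```

The first surface reports r = 0 (the cusp), while the second reports no real solutions, in agreement with the computations above.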

[Figure 11.2.2. Diagram for Example 11.2.4]

The upshot of the preceding examples is that ϕ being an immersion is necessary for ϕ(U) to be “smooth”, but not sufficient. At a minimum, we need to add the requirement that ϕ(U) does not self-intersect, or equivalently, that ϕ is injective. Further examples (that will not be presented) reveal additional deficiencies inherent in defining a “smooth surface” to be the image of some type of parametrized surface. We now take a different approach to the problem, which can be loosely described as follows: a “smooth surface” is defined to be a topological subspace of R³ that can be covered in a piecewise fashion by a collection of parametrized surfaces in such a way that the pieces “fit together nicely”. We need to make all this precise.

Let M be a topological subspace of R³. A chart (on M) is a pair (U, ϕ), where U is an open set in R² and ϕ : U → R³ is a parametrized surface such that:


[C1] ϕ : U → R³ is an immersion.
[C2] ϕ(U) is an open set in M.
[C3] ϕ : U → ϕ(U) is a homeomorphism.

Condition [C1] has been discussed in detail. Conditions [C2] and [C3] are far from intuitive, but we can at least say about [C3] that it ensures ϕ is injective, thereby avoiding the problem of self-intersection discussed above. When it is necessary to make the components of ϕ explicit in (U, ϕ), we use the notation (U, ϕ = (ϕ^i)) or (U, ϕ = (ϕ¹, ϕ², ϕ³)). We refer to U as the coordinate domain of the chart, and to ϕ as its coordinate map. For each point p in ϕ(U), (U, ϕ) is said to be a chart at p. When ϕ(U) = M, we say that (U, ϕ) is a covering chart on M, and that M is covered by (U, ϕ). Two charts, (U, ϕ) and (Ũ, ϕ̃), on M are said to be overlapping if V = ϕ(U) ∩ ϕ̃(Ũ) is nonempty. In that case, the map

ϕ̃⁻¹ ∘ ϕ|_{ϕ⁻¹(V)} : ϕ⁻¹(V) → ϕ̃⁻¹(V)

is called a transition map. For brevity, we usually denote ϕ̃⁻¹ ∘ ϕ|_{ϕ⁻¹(V)} by ϕ̃⁻¹ ∘ ϕ.

An atlas for M is a collection 𝒜 = {(U_α, ϕ_α) : α ∈ A} of charts on M such that the ϕ_α(U_α) form an open cover of M; that is,

$$M = \bigcup_{\alpha \in A} \varphi_\alpha(U_\alpha).$$

We are now in a position to replace our preliminary attempt at describing a “smooth surface” with something definitive. A regular surface (in R³) is a pair (M, 𝒜), where M is a topological subspace of R³ and 𝒜 is an atlas for M. A noteworthy feature of this definition is that it places no requirements on the choice of charts making up the atlas other than that their coordinate domains cover M. We usually adopt the shorthand of referring to M as a regular surface, with 𝒜 understood from the context.

Example 11.2.5 (Chart). Let M be a regular surface, and let (U, ϕ) be a chart on M. Then ϕ(U) is a regular surface and (U, ϕ) is a covering chart. ♦

Throughout, any chart on a regular surface is viewed as a regular surface.

Example 11.2.6 (S²). The unit sphere (centered at the origin) is

S² = {(x, y, z) ∈ R³ : x² + y² + z² = 1}.

Recall that

D = {(r, s) ∈ R² : r² + s² < 1}


is the unit open disk. In what follows, we identify R² with the xy-plane in R³. Let us define functions ϕ₁, …, ϕ₆ : D → R³ by:

$$\begin{aligned}
\varphi_1(x, y) &= \bigl(x, y, \sqrt{1 - x^2 - y^2}\bigr), & \varphi_2(x, y) &= \bigl(x, y, -\sqrt{1 - x^2 - y^2}\bigr),\\
\varphi_3(x, y) &= \bigl(x, \sqrt{1 - x^2 - y^2}, y\bigr), & \varphi_4(x, y) &= \bigl(x, -\sqrt{1 - x^2 - y^2}, y\bigr),\\
\varphi_5(x, y) &= \bigl(\sqrt{1 - x^2 - y^2}, x, y\bigr), & \varphi_6(x, y) &= \bigl(-\sqrt{1 - x^2 - y^2}, x, y\bigr).
\end{aligned}$$

It can be shown that (D, ϕ₁), …, (D, ϕ₆) are charts on S², with S² = ϕ₁(D) ∪ ⋯ ∪ ϕ₆(D). Thus, {(D, ϕ₁), …, (D, ϕ₆)} is an atlas for S², hence S² is a regular surface. ♦

Theorem 11.2.7. A regular surface is not an open set in R³.

Proof. Let M be a regular surface, and let (U, ϕ) be a chart on M. According to [C2], ϕ(U) is open in M. By definition, there is an open set 𝒰 in R³ such that ϕ(U) = M ∩ 𝒰. It follows that M is not open in R³; for if it were, then ϕ(U) would be open in R³, and then Theorem 9.4.1 and [C3] would give a contradiction.

An implication of Theorem 11.2.7 is that in order to investigate whether a function or map that has a regular surface as its domain is smooth, we need to rely on the extended version of smoothness described at the end of Section 10.1. The next result is an important case in point.

Theorem 11.2.8 (Smoothness of Inverse Coordinate Map). If M is a regular surface and (U, ϕ) is a chart on M, then ϕ⁻¹ : ϕ(U) → U is (extended) smooth.

Proof. Let ϕ = (ϕ¹, ϕ², ϕ³), and let q be a point in U. According to [C2], there is a neighborhood 𝒱 of ϕ(q) in R³ such that ϕ(U) = M ∩ 𝒱. It follows from [C1] and Theorem 11.2.2 that J_ϕ(q) has rank 2. Relabeling coordinates in R³ if necessary, we have from Theorem 2.3.2 and Theorem 2.4.8 that det(J_ϕ(q)^{(1,2)}) ≠ 0. Define a smooth map F : U × R → R³ by

F(r, s, t) = ϕ(r, s) + (0, 0, t) = (ϕ¹(r, s), ϕ²(r, s), ϕ³(r, s) + t),

so that F(r, s, 0) = ϕ(r, s) for all (r, s) in U. In geometric terms, F can be thought of as sending “horizontal slices” of an “infinite cylinder” over U to “horizontal slices” of an “infinite cylinder” over ϕ(U). In particular, F sends


U × {(0, 0, 0)} to ϕ(U). See Figure 11.2.3. Since ϕ is smooth, so is F. We have

$$J_F(q, 0) = \begin{pmatrix} \dfrac{\partial\varphi^1}{\partial r}(q) & \dfrac{\partial\varphi^1}{\partial s}(q) & 0\\[1.2ex] \dfrac{\partial\varphi^2}{\partial r}(q) & \dfrac{\partial\varphi^2}{\partial s}(q) & 0\\[1.2ex] \dfrac{\partial\varphi^3}{\partial r}(q) & \dfrac{\partial\varphi^3}{\partial s}(q) & 1 \end{pmatrix},$$

hence det(J_F(q, 0)) ≠ 0. By Theorem 10.2.1 and Theorem 10.2.3, there is a neighborhood 𝒲 ⊆ U × R of (q, 0) in R³ and a neighborhood V of ϕ(q) in R³ such that F|_𝒲 : 𝒲 → V is a diffeomorphism. Since V and 𝒱 are open sets in R³, so is V ∩ 𝒱, hence V ∩ 𝒱 is a neighborhood of ϕ(q) in R³. Replacing V with V ∩ 𝒱, and 𝒲 with (F|_𝒲)⁻¹(V ∩ 𝒱) if necessary, we assume without loss of generality that V ⊆ 𝒱. In a similar way, we assume without loss of generality that 𝒲 is a “finite cylinder” over a neighborhood U′ ⊆ U of q in the rs-plane. It follows that V ∩ M is a neighborhood of ϕ(q) in M and ϕ(U′) = V ∩ M. Let P : R³ → R² be the projection map defined by P(x, y, z) = (x, y). Since P and (F|_𝒲)⁻¹ are smooth, by Theorem 10.1.12, so is P ∘ (F|_𝒲)⁻¹ : V → U′. We have

P ∘ (F|_𝒲)⁻¹(ϕ(r, s)) = P ∘ (F|_𝒲)⁻¹(F(r, s, 0)) = P(r, s, 0) = (r, s)

for all (r, s) in U′, hence P ∘ (F|_𝒲)⁻¹|_{ϕ(U′)} = ϕ⁻¹|_{ϕ(U′)}. Thus, P ∘ (F|_𝒲)⁻¹ is a smooth map on a neighborhood of ϕ(q) in R³ that restricts to ϕ⁻¹ on a neighborhood of ϕ(q) in M. Since q was arbitrary, the result follows.

Theorem 11.2.9. If M is a regular surface, and (U, ϕ) and (Ũ, ϕ̃) are overlapping charts on M, then the transition map ϕ̃⁻¹ ∘ ϕ : ϕ⁻¹(V) → ϕ̃⁻¹(V) is a (Euclidean) diffeomorphism, where V = ϕ(U) ∩ ϕ̃(Ũ).

Proof. We have from [C3] that ϕ̃⁻¹ ∘ ϕ is bijective. By definition, ϕ is (Euclidean) smooth, and according to Theorem 11.2.8, ϕ̃⁻¹ is (extended) smooth. It follows from Theorem 10.1.17 that ϕ̃⁻¹ ∘ ϕ is (Euclidean) smooth. Similarly, so is ϕ⁻¹ ∘ ϕ̃ = (ϕ̃⁻¹ ∘ ϕ)⁻¹.

Theorem 11.2.10. Let M be a regular surface, and let (U, ϕ) be a chart on M. If U′ is an open set in U, then (U′, ϕ|_{U′}) is a chart on M.

Proof. Since (U, ϕ) is a chart on M, we have [C1]–[C3] at our disposal. With U′ open in U, and U open in R², by Theorem 9.1.5, U′ is open in R². According to Theorem 10.1.11(a), ϕ|_{U′} is smooth. Thus, ϕ|_{U′} : U′ → R³ is a parametrized surface. To show that (U′, ϕ|_{U′}) is a chart on M, we need to prove that:


[C1′] ϕ|_{U′} : U′ → R³ is an immersion.
[C2′] ϕ|_{U′}(U′) is an open set in M.
[C3′] ϕ|_{U′} : U′ → ϕ|_{U′}(U′) is a homeomorphism.

The proofs are as follows:
[C1′]: This follows from [C1].
[C2′]: Since U′ is open in U, it follows from [C3] and Theorem 9.1.14 that ϕ(U′) is open in ϕ(U). According to [C2], ϕ(U) is open in M. By Theorem 9.1.5, ϕ|_{U′}(U′) = ϕ(U′) is open in M.
[C3′]: Let U″ be a subset of U′. By assumption, U′ is open in U, and according to [C3], ϕ : U → ϕ(U) is a homeomorphism. We therefore have from Theorem 9.1.14 that U″ is open in U′ if and only if ϕ|_{U′}(U″) = ϕ(U″) is open in ϕ|_{U′}(U′) = ϕ(U′). The result now follows from Theorem 9.1.14.

[Figure 11.2.3. Diagram for Theorem 11.2.8]

By definition, a regular surface is a patchwork of images of parametrized surfaces. The next result shows that instead of images of parametrized surfaces, we can use graphs of smooth functions.

Theorem 11.2.11. If M is a regular surface and p is a point in M, then there is a chart (V, ψ) at p and a function f in C^∞(V) such that ψ(V) = graph(f),

where

graph(f) = {(x, y, f(x, y)) ∈ R³ : (x, y) ∈ V}

and R² is identified with the xy-plane in R³.

Proof. Let (U, ϕ = (ϕ¹, ϕ², ϕ³)) be a chart at p, and let q = ϕ⁻¹(p). It follows from [C1] and Theorem 11.2.2 that J_ϕ(q) has rank 2. Relabeling coordinates in R³ if necessary, we have from Theorem 2.3.2 and Theorem 2.4.8 that det(J_ϕ(q)^{(1,2)}) ≠ 0. Consider the projection map P : R³ → R² defined by P(x, y, z) = (x, y). See Figure 11.2.4. Since ϕ and P are smooth, by Theorem 10.1.12, so is P ∘ ϕ : U → R². The Jacobian matrix is

$$J_{P\circ\varphi}(q) = J_\varphi(q)^{(1,2)} = \begin{pmatrix} \dfrac{\partial\varphi^1}{\partial x}(q) & \dfrac{\partial\varphi^1}{\partial y}(q)\\[1.2ex] \dfrac{\partial\varphi^2}{\partial x}(q) & \dfrac{\partial\varphi^2}{\partial y}(q) \end{pmatrix},$$

hence det(J_{P∘ϕ}(q)) ≠ 0. By Theorem 10.2.1 and Theorem 10.2.3, there is a neighborhood U′ ⊆ U of q in R² and a neighborhood V of P ∘ ϕ(q) in R² such that P ∘ ϕ|_{U′} : U′ → V is a diffeomorphism. Since (P ∘ ϕ|_{U′})⁻¹ : V → U′ is a diffeomorphism, hence smooth, it follows from Theorem 10.1.12 that ψ = ϕ ∘ (P ∘ ϕ|_{U′})⁻¹ : V → M is smooth. Thus, (V, ψ) is a chart on M. Since ϕ is smooth, by definition, so is ϕ³, and therefore, by Theorem 10.1.12, so is f = ϕ³ ∘ (P ∘ ϕ|_{U′})⁻¹ : V → R. Then

ψ(V) = ϕ ∘ (P ∘ ϕ|_{U′})⁻¹(V)
  = {(ϕ¹ ∘ (P ∘ ϕ|_{U′})⁻¹(x, y), ϕ² ∘ (P ∘ ϕ|_{U′})⁻¹(x, y), ϕ³ ∘ (P ∘ ϕ|_{U′})⁻¹(x, y)) : (x, y) ∈ V}
  = {(x, y, f(x, y)) : (x, y) ∈ V}
  = graph(f).

Theorem 11.2.12. If M is a regular surface and F : M → Rⁿ is a map, where n ≥ 1, then the following are equivalent:
(a) F is (extended) smooth.
(b) For every point p in M, there is a chart (U, ϕ) on M at p such that the map F ∘ ϕ : U → Rⁿ is (Euclidean) smooth.
(c) For every chart (U, ϕ) on M, the map F ∘ ϕ : U → Rⁿ is (Euclidean) smooth.

Proof. (a) ⇒ (c): Let q be a point in U. By assumption, there is a neighborhood 𝒰 of ϕ(q) in R³ and a (Euclidean) smooth map F̃ : 𝒰 → Rⁿ such that F and F̃ agree on M ∩ 𝒰. According to [C2], ϕ(U) is open in M. Since M ∩ 𝒰 is open in M, we have that ϕ(U) ∩ 𝒰 = ϕ(U) ∩ (M ∩ 𝒰) is open in ϕ(U). Let U′ = ϕ⁻¹(ϕ(U) ∩ 𝒰) and observe that q is in U′. It follows from [C3] and Theorem 9.1.7 that U′ is open in U. Since ϕ and F̃ are (Euclidean) smooth, by Theorem 10.1.11(a) and Theorem 10.1.12, so is F ∘ ϕ|_{U′} = F̃ ∘ ϕ|_{U′} : U′ → Rⁿ. Because q was arbitrary, the result follows from Theorem 10.1.11(b).
(c) ⇒ (b): Straightforward.
(b) ⇒ (a): It follows from Theorem 10.1.17 and Theorem 11.2.8 that F|_{ϕ(U)} = (F ∘ ϕ) ∘ ϕ⁻¹ : ϕ(U) → Rⁿ is (extended) smooth. By definition, there is a neighborhood 𝒰 of p in R³ and a (Euclidean) smooth map (F|_{ϕ(U)})^∼ : 𝒰 → Rⁿ such that F|_{ϕ(U)} and (F|_{ϕ(U)})^∼ agree on ϕ(U) ∩ 𝒰. Then F and (F|_{ϕ(U)})^∼ agree on ϕ(U) ∩ 𝒰. Since p was arbitrary, the result follows.

[Figure 11.2.4. Diagram for Theorem 11.2.11]

Theorem 11.2.12 shows that the existence of charts on regular surfaces makes it possible to answer questions about extended smoothness of maps on regular surfaces using methods developed for Euclidean smoothness. We close this section with an example of a chart on the unit sphere that is strikingly different from the charts constructed in Example 11.2.6.

Example 11.2.13 (Stereographic Projection). Consider the set Σ² = S² \ {(0, 0, 1)}, which is the unit sphere with the “north pole” removed. We define a map σ : Σ² → R², called stereographic projection, as follows. For


each (x, y, z) in Σ², let σ(x, y, z) = (r, s) be the point where the straight line through (0, 0, 1) and (x, y, z) intersects the xy-plane, where we identify the latter with R². See Figure 11.2.5. Then

(x, y, z) − (0, 0, 1) = c[(r, s, 0) − (0, 0, 1)]

for some real number c, hence x = cr, y = cs, and z = 1 − c. This gives

1 = x² + y² + z² = c²r² + c²s² + (1 − c)²,

so

$$c = 1 - z = \frac{2}{r^2 + s^2 + 1}.$$

Thus,

$$\sigma(x, y, z) = (r, s) = \frac{1}{1 - z}(x, y) \tag{11.2.2}$$

for all (x, y, z) in Σ². It is evident from Figure 11.2.5 that σ is bijective. Let ϕ = σ⁻¹. Then ϕ(R²) = Σ² and

$$\varphi(r, s) = (x, y, z) = \frac{1}{r^2 + s^2 + 1}(2r, 2s, r^2 + s^2 - 1) \tag{11.2.3}$$

for all (r, s) in R². We claim that (R², ϕ) is a chart on S². It is easily shown that ϕ : R² → R³ is a parametrized surface. We need to prove that:

[C1] ϕ : R² → R³ is an immersion.
[C2] Σ² is an open set in S².
[C3] ϕ : R² → Σ² is a homeomorphism.

The proofs are as follows:
[C1]: The Jacobian matrix is

$$J_\varphi(r, s) = \frac{2}{(r^2 + s^2 + 1)^2} \begin{pmatrix} -r^2 + s^2 + 1 & -2rs\\ -2rs & r^2 - s^2 + 1\\ 2r & 2s \end{pmatrix},$$

and the corresponding vector product is

$$\frac{4}{(r^2 + s^2 + 1)^4}\,(-r^2 + s^2 + 1, -2rs, 2r) \times (-2rs, r^2 - s^2 + 1, 2s) = \frac{4}{(r^2 + s^2 + 1)^4}\bigl(-2r(r^2 + s^2 + 1), -2s(r^2 + s^2 + 1), -(r^2 + s^2)^2 + 1\bigr),$$

which never equals (0, 0, 0). By Theorem 11.2.2, ϕ is an immersion.
[C2]: Since R³ \ {(0, 0, 1)} is open in R³, it follows that Σ² = S² ∩ [R³ \ {(0, 0, 1)}] is open in S².
[C3]: We observed above that ϕ is bijective, and it is clear from the form of (11.2.2) and (11.2.3) that ϕ and ϕ⁻¹ are continuous. This proves the claim. ♦
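As a quick sanity check of this example, the following SymPy sketch (illustrative only) verifies symbolically that σ ∘ ϕ is the identity map of R², and that the vector product above has strictly positive squared Euclidean norm, so ϕ is an immersion.

```python
import sympy as sp

r, s = sp.symbols('r s', real=True)
w = r**2 + s**2 + 1
phi = sp.Matrix([2*r, 2*s, r**2 + s**2 - 1]) / w   # (11.2.3)
x, y, z = phi
sigma = sp.Matrix([x, y]) / (1 - z)                # (11.2.2) applied to phi(r, s)

print(sp.simplify(sigma - sp.Matrix([r, s])))      # zero matrix: sigma o phi = id
n = sp.simplify(phi.diff(r).cross(phi.diff(s)))    # H1 x H2
print(sp.factor(n.dot(n)))                         # 16/(r**2 + s**2 + 1)**4 > 0
```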


[Figure 11.2.5. Stereographic projection: Diagram for Example 11.2.13]

11.3 Tangent Planes in R³

Having defined a regular surface and established some of its basic properties, we are now in a position to present a rigorous definition of “tangent plane”. Let M be a regular surface. A curve (on M) is a curve λ(t) : I → M as defined in Section 11.1, with the additional feature that it takes values in M. Let p be a point in M. We say that a vector v in R³ is a tangent vector to M at p if there is a smooth curve λ(t) : (a, b) → M such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). The tangent plane of M at p is denoted by T_p(M) and defined to be the set of all such tangent vectors:

$$T_p(M) = \left\{\frac{d\lambda}{dt}(t_0) : \lambda(t) : (a, b) \to M \text{ is smooth},\ \lambda(t_0) = p,\ t_0 \in (a, b)\right\}.$$

Theorem 11.3.1. Let M be a regular surface, let p be a point in M, let (U, ϕ) be a chart at p, and let q = ϕ⁻¹(p). Then:
(a) T_p(M) is a 2-dimensional subspace of R³.
(b) (H₁|_q, H₂|_q) is a basis for T_p(M), called the coordinate basis at p corresponding to (U, ϕ).
(c) T_p(M) = d_q(ϕ)(R²).
(d) d_q(ϕ) : R² → T_p(M) is a linear isomorphism.

Proof. (a), (b): We claim that T_p(M) = span({H₁|_q, H₂|_q}).
(⊆): Let v be a vector in T_p(M). By definition, there is a smooth curve λ(t) : (a, b) → M such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Suppose without loss of generality that λ((a, b)) ⊂ ϕ(U). It follows from Theorem 10.1.17 and Theorem 11.2.8 that the map μ = (μ¹, μ²) = ϕ⁻¹ ∘ λ : (a, b) → U is smooth. By Theorem 10.1.10,

$$v = \frac{d\lambda}{dt}(t_0) = \frac{d(\varphi\circ\mu)}{dt}(t_0) = \sum_i \frac{d\mu^i}{dt}(t_0)\, H_i|_q,$$

so v is in the span of H₁|_q and H₂|_q. Thus, T_p(M) ⊆ span({H₁|_q, H₂|_q}).
(⊇): Let (e₁, e₂) be the standard basis for R². For given 1 ≤ i ≤ 2, we define a smooth map ζ = (ζ¹, ζ²) : (−ε, ε) → U by ζ(t) = q + te_i, where ε > 0 is chosen small enough that (q − εe_i, q + εe_i) ⊂ U. Consider the smooth curve λ(t) : (−ε, ε) → M defined by λ = ϕ ∘ ζ, and observe that λ(0) = ϕ(q). By Theorem 10.1.10,

$$\frac{d\lambda}{dt}(0) = \frac{d(\varphi\circ\zeta)}{dt}(0) = \sum_j \frac{d\zeta^j}{dt}(0)\, H_j|_q = H_i|_q, \tag{11.3.1}$$

so H_i|_q is in T_p(M) for i = 1, 2. Thus, span({H₁|_q, H₂|_q}) ⊆ T_p(M). This proves the claim. We have from [C1] of Section 11.2 and Theorem 11.2.2 that H₁|_q and H₂|_q are linearly independent. The result follows.
(c): This follows from Theorem 11.2.1(b) and part (b).
(d): We have from part (c) that d_q(ϕ) : R² → T_p(M) is well-defined, and from [C1] of Section 11.2 that it is injective. The result now follows from part (a) and Theorem 1.1.14.

In the notation of Theorem 11.3.1, let us denote (H₁, H₂) by H, and (H₁|_q, H₂|_q) by H_q.

We refer to H as the coordinate frame corresponding to (U, ϕ). Although there is a tendency to think of T_p(M) as literally “tangent” to M at the point p, by Theorem 11.3.1(a), T_p(M) is a subspace of R³. As such, T_p(M) passes through the origin (0, 0, 0) of R³. In geometric terms, it is T_p(M) + p, the translation of T_p(M) by p, that is tangent to M at p. That said, it is convenient in the figures to label tangent planes as T_p(M) rather than T_p(M) + p.

Example 11.3.2 (S²). Continuing with Example 11.2.6, consider the chart (D, ϕ₁), where ϕ₁(x, y) = (x, y, √(1 − x² − y²)). The corresponding coordinate frame is given by

$$\left(\frac{\partial\varphi_1}{\partial x}(x, y), \frac{\partial\varphi_1}{\partial y}(x, y)\right) = \left(\left(1, 0, -\frac{x}{\sqrt{1 - x^2 - y^2}}\right), \left(0, 1, -\frac{y}{\sqrt{1 - x^2 - y^2}}\right)\right).$$

For example, at ϕ₁(0, 0) = (0, 0, 1), the coordinate basis is ((1, 0, 0), (0, 1, 0)), which spans the xy-plane in R³. ♦
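The coordinate frame in Example 11.3.2 can be checked with a short SymPy sketch (again illustrative). It also confirms the familiar geometric fact that both basis vectors are Euclidean-orthogonal to the position vector ϕ₁(x, y), as they must be, since the tangent plane of the unit sphere at p is the plane orthogonal to p.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi1 = sp.Matrix([x, y, sp.sqrt(1 - x**2 - y**2)])
H1, H2 = phi1.diff(x), phi1.diff(y)           # the coordinate frame

print(H1.T, H2.T)
print(sp.simplify(H1.dot(phi1)), sp.simplify(H2.dot(phi1)))   # 0, 0
print(H1.subs({x: 0, y: 0}).T, H2.subs({x: 0, y: 0}).T)       # (1,0,0), (0,1,0)
```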


Theorem 11.3.3 (Change of Coordinate Basis). Let M be a regular surface, let p be a point in M, let (U, ϕ) and (Ũ, ϕ̃) be charts at p, and let H and H̃ be the corresponding coordinate frames. Then

$$\bigl[\mathrm{id}_{T_p(M)}\bigr]_{H_q}^{\tilde{H}_{\tilde{q}}} = J_{\tilde{\varphi}^{-1}\circ\varphi}(q),$$

where q = ϕ⁻¹(p) and q̃ = ϕ̃⁻¹(p).

Proof. Let H = (H₁, H₂) and H̃ = (H̃₁, H̃₂), and let (r¹, r²) and (r̃¹, r̃²) be coordinates on U and Ũ, respectively. By Theorem 11.2.9, the transition map F = (F¹, F²) = ϕ̃⁻¹ ∘ ϕ is a diffeomorphism, hence smooth. It follows from Theorem 10.1.10 that

$$H_j|_q = \frac{\partial\varphi}{\partial r^j}(q) = \frac{\partial(\tilde{\varphi}\circ F)}{\partial r^j}(q) = \sum_i \frac{\partial F^i}{\partial r^j}(q)\,\tilde{H}_i|_{\tilde{q}},$$

and then from (2.2.6), (2.2.7), and (10.1.2) that

$$\bigl[\mathrm{id}_{T_p(M)}\bigr]_{H_q}^{\tilde{H}_{\tilde{q}}} = \left[\frac{\partial F^i}{\partial r^j}(q)\right] = J_{\tilde{\varphi}^{-1}\circ\varphi}(q).$$

Example 11.3.4 (Polar Coordinates). The xy-plane in R³, denoted here by Pln, is clearly a regular surface. Consider the open set U = {(ρ, φ) ∈ R² : ρ > 0, 0 < φ < 2π} in R², and define a map ϕ : U → R³ by

ϕ(ρ, φ) = (ρ cos(φ), ρ sin(φ), 0)

for all (ρ, φ) in U. The image of ϕ is Pln with the nonnegative x-axis removed. It is easily shown that (U, ϕ) is a chart on Pln, which we call polar coordinates. A covering chart on Pln is given by (Ũ = R², ϕ̃), where ϕ̃ : R² → Pln is defined by ϕ̃(x, y) = (x, y, 0). The coordinate frames corresponding to (U, ϕ) and (Ũ, ϕ̃) are given by

H_(ρ,φ) = ((∂ϕ/∂ρ)(ρ, φ), (∂ϕ/∂φ)(ρ, φ)) = ((cos(φ), sin(φ), 0), (−ρ sin(φ), ρ cos(φ), 0))

and

H̃_(x,y) = ((∂ϕ̃/∂x)(x, y), (∂ϕ̃/∂y)(x, y)) = ((1, 0, 0), (0, 1, 0)),

respectively. The transition map ϕ̃⁻¹ ∘ ϕ : U → Ũ is defined by

ϕ̃⁻¹ ∘ ϕ(ρ, φ) = (ρ cos(φ), ρ sin(φ)).

Classically, x and y are viewed as functions of ρ and φ, and expressed as x(ρ, φ) = ρ cos(φ) and y(ρ, φ) = ρ sin(φ). By Theorem 11.3.3, the change of coordinates matrix is

$$J_{\tilde{\varphi}^{-1}\circ\varphi}(\rho, \phi) = \begin{pmatrix} \dfrac{\partial x}{\partial\rho}(\rho, \phi) & \dfrac{\partial x}{\partial\phi}(\rho, \phi)\\[1.2ex] \dfrac{\partial y}{\partial\rho}(\rho, \phi) & \dfrac{\partial y}{\partial\phi}(\rho, \phi) \end{pmatrix} = \begin{pmatrix} \cos(\phi) & -\rho\sin(\phi)\\ \sin(\phi) & \rho\cos(\phi) \end{pmatrix}.$$
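The change of coordinates matrix of Example 11.3.4 is just the Jacobian of the transition map, so it can be produced mechanically; here is a minimal SymPy sketch.

```python
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
F = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi)])   # the transition map of Example 11.3.4
J = F.jacobian([rho, phi])
print(J)                      # Matrix([[cos(phi), -rho*sin(phi)], [sin(phi), rho*cos(phi)]])
print(sp.simplify(J.det()))   # rho, nonzero on U, as Theorem 11.2.9 guarantees
```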


The next result is reminiscent of Theorem 10.1.13.

Theorem 11.3.5. Let M be a regular surface, let p be a point in M, and let v be a vector in T_p(M). Then there is a real number ε > 0 and a smooth curve λ(t) : (−ε, ε) → M such that λ(0) = p and (dλ/dt)(0) = v.

Proof. By definition, there is a smooth curve ψ(u) : (a, b) → M such that ψ(u₀) = p and (dψ/du)(u₀) = v for some u₀ in (a, b). Let g(t) : R → R be the function defined by g(t) = t + u₀. Then for sufficiently small ε > 0, by Theorem 10.1.12, λ(t) = ψ ∘ g(t) : (−ε, ε) → M is the desired smooth curve.

Here is an alternative argument that anticipates the proof of Theorem 14.7.2. Let (U, ϕ) be a chart at p such that ϕ(0, 0) = p, let (H₁|₍₀,₀₎, H₂|₍₀,₀₎) be the corresponding coordinate basis at p, and let v = Σ_i a^i H_i|₍₀,₀₎. Define a smooth curve λ(t) : (−ε, ε) → M by λ(t) = ϕ(ta¹, ta²), where ε is chosen small enough that λ((−ε, ε)) ⊂ ϕ(U). Clearly, λ(0) = p. It follows from Theorem 10.1.10 that

$$\frac{d\lambda}{dt}(0) = \sum_i a^i \frac{\partial\varphi}{\partial r^i}(0, 0) = \sum_i a^i H_i|_{(0,0)} = v.$$

11.4 Types of Regular Surfaces in R³

In this section, we define four types of regular surfaces: open sets in regular surfaces, graphs of functions, surfaces of revolution, and level sets of functions. The table below provides a list of the worked examples of graphs of functions and surfaces of revolution presented in Chapter 13.

Section   Geometric object             Parametrization
13.1      plane                        graph of function
13.2      cylinder                     surface of revolution
13.3      cone                         surface of revolution
13.4      sphere                       surface of revolution
13.5      tractoid                     surface of revolution
13.6      hyperboloid of one sheet     graph of function
13.7      hyperboloid of two sheets    graph of function
13.8      torus                        surface of revolution


Open set in a regular surface. As we now show, an open set in a regular surface is itself a regular surface.

Theorem 11.4.1 (Open Set). Let M be a regular surface, let V be an open set in M, and view V as a topological subspace of R³. Then:
(a) V is a regular surface.
(b) For all points p in V, T_p(V) = T_p(M).

Proof. (a): Let p be a point in V, and let (U, ϕ) be a chart on M at p. We have properties [C1]–[C3] of Section 11.2 at our disposal. According to [C2], ϕ(U) is open in M, and therefore, so is ϕ(U) ∩ V. It follows from [C3] and Theorem 9.1.14 that U′ = ϕ⁻¹(V ∩ ϕ(U)) is open in U. By Theorem 11.2.10, (U′, ϕ|_{U′}) is a chart at p such that ϕ|_{U′}(U′) ⊆ V. Since p was arbitrary, the result follows.
(b): Let v be a vector in T_p(V), and let λ(t) : (a, b) → V be a smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Since V ⊆ M, the same smooth curve shows that v is also a vector in T_p(M). Thus, T_p(V) ⊆ T_p(M). By Theorem 11.3.1(a), both T_p(V) and T_p(M) are 2-dimensional, and then Theorem 1.1.7(b) gives T_p(V) = T_p(M).

Throughout, any open set in a regular surface is viewed as a regular surface.

Graph of a function. According to Theorem 11.2.11, a regular surface is covered by graphs of functions. We now consider the graph of a function in isolation. Let U be an open set in R², and let f be a function in C^∞(U). Recall from Theorem 11.2.11 that

graph(f) = {(x, y, f(x, y)) ∈ R³ : (x, y) ∈ U},

where we identify R² with the xy-plane in R³. Defining a map ϕ : U → R³ by ϕ(x, y) = (x, y, f(x, y)) for all (x, y) in U, we see that graph(f) is the image of ϕ; that is, graph(f) = ϕ(U).

Theorem 11.4.2 (Graph of Function). With the above setup, graph(f), viewed as a topological subspace of R³, is a regular surface, and (U, ϕ) is a covering chart on graph(f).

Proof. Clearly, ϕ : U → R³ is a parametrized surface. We need to prove that:
[C1] ϕ : U → R³ is an immersion.
[C2] ϕ(U) is an open set in graph(f).
[C3] ϕ : U → ϕ(U) is a homeomorphism.
The proofs are as follows:

[C1]: The Jacobian matrix is

$$J_\varphi(x, y) = \begin{pmatrix} 1 & 0\\ 0 & 1\\ \dfrac{\partial f}{\partial x}(x, y) & \dfrac{\partial f}{\partial y}(x, y) \end{pmatrix},$$

and the corresponding vector product is

$$\left(1, 0, \frac{\partial f}{\partial x}(x, y)\right) \times \left(0, 1, \frac{\partial f}{\partial y}(x, y)\right) = \left(-\frac{\partial f}{\partial x}(x, y), -\frac{\partial f}{\partial y}(x, y), 1\right),$$

which never equals (0, 0, 0). By Theorem 11.2.2, ϕ is an immersion.
[C2]: This follows from ϕ(U) = graph(f).
[C3]: Clearly, ϕ is bijective. Since ϕ is smooth, by Theorem 10.1.7 it is continuous. Let P : R³ → R² be the projection map defined by P(x, y, z) = (x, y), where we identify R² with the xy-plane in R³. Since P is continuous and ϕ⁻¹ = P|_{graph(f)}, the result follows.

Thus, (U, ϕ) is a chart on graph(f), and since ϕ(U) = graph(f), it is a covering chart.

Surface of revolution. Let ρ(t), h(t) : (a, b) → R be smooth functions such that:
[R1] ρ is strictly positive on (a, b).
[R2] h is strictly increasing or strictly decreasing on (a, b).
We refer to ρ and h as the radius function and height function, respectively. Throughout, it is convenient to denote the derivatives of ρ and h with respect to t by an overdot. Consider the smooth curve σ(t) : (a, b) → R³ defined by

σ(t) = (ρ(t), 0, h(t))

for all t in (a, b). We observe that [R2] is equivalent to ḣ being strictly positive or strictly negative on (a, b), from which it follows that σ is a regular curve. Let U = (a, b) × (−π, π), and consider the smooth map ϕ : U → R³ defined by

ϕ(t, φ) = (ρ(t) cos(φ), ρ(t) sin(φ), h(t)).

The surface of revolution corresponding to σ is denoted by rev(σ) and defined to be the image of ϕ: rev(σ) = ϕ(U). Thus, rev(σ) is obtained by revolving the image of σ around the z-axis. A remark is that (−π, π) was chosen when defining U rather than, for example, [−π, π) or [0, 2π), to ensure that U is an open set in R². As a result, a surface of revolution does not quite make a complete circuit around the z-axis.
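Theorem 11.4.3 below shows that ϕ is always an immersion under [R1] and [R2]. As a concrete illustration (with assumed data), the SymPy sketch below takes the torus-style radius and height functions ρ(t) = 2 + cos(t) and h(t) = sin(t), which satisfy [R1] and [R2] on (−π/2, π/2), and verifies that H₁ × H₂ never vanishes there.

```python
import sympy as sp

t, phi = sp.symbols('t phi', real=True)
rho, h = 2 + sp.cos(t), sp.sin(t)     # assumed data; [R1], [R2] hold on (-pi/2, pi/2)
P = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), h])
n = sp.simplify(P.diff(t).cross(P.diff(phi)))   # H1 x H2
print(n.T)
print(sp.simplify(n.dot(n)))          # (cos(t) + 2)**2, strictly positive
```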


For a given point t in (a, b), we define a smooth curve ϕ_t(φ) : (−π, π) → R³, called the latitude curve corresponding to t, by ϕ_t(φ) = ϕ(t, φ). Similarly, for a given point φ in (−π, π), we define a smooth curve ϕ^φ(t) : (a, b) → R³, called the longitude curve corresponding to φ, by ϕ^φ(t) = ϕ(t, φ). From

ϕ_t(φ) = ρ(t)(cos(φ), sin(φ), 0) + (0, 0, h(t)),

we see that the image of ϕ_t is, except for a single missing point, a circle of radius ρ(t) centered on the z-axis and lying in the plane parallel to the xy-plane at a height h(t).

Theorem 11.4.3 (Surface of Revolution). With the above setup, rev(σ), viewed as a topological subspace of R³, is a regular surface and (U, ϕ) is a covering chart on rev(σ).

Proof. Since ρ and h are smooth, by definition, so is ϕ. Thus, ϕ : U → R³ is a parametrized surface. We need to prove that:
[C1] ϕ : U → R³ is an immersion.
[C2] ϕ(U) is an open set in rev(σ).
[C3] ϕ : U → ϕ(U) is a homeomorphism.
The proofs are as follows:
[C1]: The Jacobian matrix is

$$J_\varphi(t, \phi) = \begin{pmatrix} \dot\rho(t)\cos(\phi) & -\rho(t)\sin(\phi)\\ \dot\rho(t)\sin(\phi) & \rho(t)\cos(\phi)\\ \dot h(t) & 0 \end{pmatrix},$$

and the corresponding vector product is

$$\bigl(\dot\rho(t)\cos(\phi), \dot\rho(t)\sin(\phi), \dot h(t)\bigr) \times \bigl(-\rho(t)\sin(\phi), \rho(t)\cos(\phi), 0\bigr) = \rho(t)\bigl(-\dot h(t)\cos(\phi), -\dot h(t)\sin(\phi), \dot\rho(t)\bigr),$$

which never equals (0, 0, 0); for if it did, then taking the Euclidean inner product of the preceding vector with itself gives

$$0 = \rho(t)^2\bigl([-\dot h(t)\cos(\phi)]^2 + [-\dot h(t)\sin(\phi)]^2 + \dot\rho(t)^2\bigr) = \rho(t)^2\bigl[\dot h(t)^2 + \dot\rho(t)^2\bigr]$$

for some t in (a, b), which contradicts either [R1] or [R2]. By Theorem 11.2.2, ϕ is an immersion.
[C2]: This follows from ϕ(U) = rev(σ).
[C3]: We provide only a sketch of the proof. Let (x, y, z) = (ρ(t) cos(φ), ρ(t) sin(φ), h(t)), so that

$$\rho(t) = \sqrt{x^2 + y^2}. \tag{11.4.1}$$

Substituting into the trigonometric identity

$$\tan\left(\frac{\phi}{2}\right) = \frac{\sin(\phi)}{1 + \cos(\phi)}$$

yields

$$\tan\left(\frac{\phi}{2}\right) = \frac{y}{x + \sqrt{x^2 + y^2}}. \tag{11.4.2}$$

Using (11.4.1) and (11.4.2), it can be shown that ϕ⁻¹ : rev(σ) → U is continuous.

Thus, (U, ϕ) is a chart on rev(σ), and since ϕ(U) = rev(σ), it is a covering chart.

Level set of a function. Let 𝒰 be an open set in R³, and let f be a function in C^∞(𝒰). The gradient of f (in R³) is the map grad(f) : 𝒰 → R³ defined by

$$\mathrm{grad}(f)_p = \left(\frac{\partial f}{\partial x}(p), \frac{\partial f}{\partial y}(p), \frac{\partial f}{\partial z}(p)\right) \tag{11.4.3}$$

for all p in 𝒰. Given a real number c in f(𝒰), the corresponding level set of f is

f⁻¹(c) = {p ∈ 𝒰 : f(p) = c}.

Theorem 11.4.4 (Level Set of Function). With the above setup, if grad(f)_p ≠ (0, 0, 0) for all p in f⁻¹(c), then f⁻¹(c), viewed as a topological subspace of R³, is a regular surface.

Proof. Since grad(f)_p ≠ (0, 0, 0), relabeling coordinates in R³ if necessary, we have (∂f/∂z)(p) ≠ 0. Let us define a map F : 𝒰 → R³ by F(x, y, z) = (x, y, f(x, y, z)) for all (x, y, z) in 𝒰. The Jacobian matrix is

$$J_F(p) = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ \dfrac{\partial f}{\partial x}(p) & \dfrac{\partial f}{\partial y}(p) & \dfrac{\partial f}{\partial z}(p) \end{pmatrix},$$


so det(J_F(p)) ≠ 0. By Theorem 10.2.1 and Theorem 10.2.3, there is a neighborhood 𝒱 of p in R³ and a neighborhood 𝒲 of q = F(p) in R³ such that F|_𝒱 : 𝒱 → 𝒲 is a diffeomorphism. Then G = (F|_𝒱)⁻¹ : 𝒲 → 𝒱 is a diffeomorphism, hence smooth. See Figure 11.4.1, where we note that G takes points in {(x, y, z) ∈ R³ : z = c} ∩ 𝒲 to f⁻¹(c) ∩ 𝒱. Let P : R³ → R² be the projection map defined by P(x, y, z) = (x, y). Since 𝒲 is open in R³, there is an open ball B_ε(q) ⊆ 𝒲 for sufficiently small ε > 0. Then P(B_ε(q)) = B_ε(P(q)). Let U = B_ε(P(q)), and define a parametrized surface ϕ : U → R³ by ϕ(x, y) = G(x, y, c). In light of the preceding remarks, G(x, y, c) is in f⁻¹(c) ∩ 𝒱 for all (x, y) in U, hence ϕ(U) ⊆ f⁻¹(c) ∩ 𝒱. Let G = (G¹, G², G³), and consider the function g : U → R given by g(x, y) = G³(x, y, c). Since G is smooth, by definition, so is G³. Thus, g is in C^∞(U). We have

G(x, y, c) = (G¹(x, y, c), G²(x, y, c), G³(x, y, c)) = (x, y, g(x, y))

for all (x, y) in U, hence ϕ(U) = graph(g). By Theorem 11.4.2, (U, ϕ) is a chart at p. Since p was arbitrary, f⁻¹(c) is a regular surface.

[Figure 11.4.1. Diagram for Theorem 11.4.4]

Example 11.4.5 (S²). Continuing with Example 11.2.6, let f be the function in C^∞(R³) given by f(x, y, z) = x² + y² + z². Then S² = f⁻¹(1) and grad(f)_(x,y,z) = (2x, 2y, 2z), so that grad(f)_(x,y,z) ≠ (0, 0, 0) for all (x, y, z) in S². By Theorem 11.4.4, S² is a regular surface, something previously established in Example 11.2.6. ♦
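Here is a short SymPy sketch of Example 11.4.5 (illustrative only): it computes grad(f) and confirms that its squared Euclidean norm equals 4 at every point of the level set f⁻¹(1), so the gradient is nonvanishing there.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = x**2 + y**2 + z**2
grad = sp.Matrix([f.diff(v) for v in (x, y, z)])   # (11.4.3)
print(grad.T)                                      # (2x, 2y, 2z)
# |grad f|^2 = 4 f, which equals 4 on f^{-1}(1)
print(sp.simplify(grad.dot(grad) - 4*f))           # 0
```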


11.5 Functions on Regular Surfaces in R³

Theorem 11.5.1 (Smoothness Criteria for Functions). If M is a regular surface and f : M → R is a function, then the following are equivalent:
(a) f is (extended) smooth.
(b) For every point p in M, there is a chart (U, ϕ) on M at p such that the function f ∘ ϕ : U → R is (Euclidean) smooth.
(c) For every chart (U, ϕ) on M, the function f ∘ ϕ : U → R is (Euclidean) smooth.

Proof. Setting n = 1 and F = f in Theorem 11.2.12 gives the result.

Let M be a regular surface. The set of smooth functions on M is denoted by C^∞(M). We make C^∞(M) into both a vector space over R and a ring by defining operations as follows: for all functions f, g in C^∞(M) and all real numbers c, let

(f + g)(p) = f(p) + g(p),  (fg)(p) = f(p)g(p),  and  (cf)(p) = cf(p)

for all p in M. The identity element of the ring is the constant function 1_M that sends all points in M to the real number 1.

Let f be a function in C^∞(M), and let p be a point in M. The differential of f at p is the map d_p(f) : T_p(M) → R defined by

$$d_p(f)(v) = \frac{d(f\circ\lambda)}{dt}(t_0) \tag{11.5.1}$$

for all vectors v in T_p(M), where λ(t) : (a, b) → M is any smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b).

Theorem 11.5.2. Let M be a regular surface, let f be a function in C^∞(M), let p be a point in M, and let v be a vector in T_p(M). Then:
(a) d_p(f)(v) is independent of the choice of smooth curve used to express v.
(b) d_p(f) is linear.

Proof. Let (U, ϕ) be a chart at p, and let λ(t) : (a, b) → M be a smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Suppose without loss of generality that λ((a, b)) ⊂ ϕ(U), and let q = ϕ⁻¹(p). It follows from Theorem 10.1.17 and Theorem 11.2.8 that the map μ = (μ¹, μ²) = ϕ⁻¹ ∘ λ : (a, b) → U is (Euclidean) smooth, and from Theorem 10.1.9 that

$$v = \frac{d\lambda}{dt}(t_0) = \frac{d(\varphi\circ\mu)}{dt}(t_0) = d_q(\varphi)\left(\frac{d\mu}{dt}(t_0)\right).$$


By Theorem 11.3.1(d), d_q(ϕ) is invertible, so

$$\frac{d\mu}{dt}(t_0) = d_q(\varphi)^{-1}(v). \tag{11.5.2}$$

Then

$$\begin{aligned}
d_p(f)(v) &= \frac{d(f\circ\lambda)}{dt}(t_0) &&\text{[(11.5.1)]}\\
&= \frac{d(f\circ\varphi\circ\mu)}{dt}(t_0)\\
&= d_{\mu(t_0)}(f\circ\varphi)\left(\frac{d\mu}{dt}(t_0)\right) &&\text{[Th 10.1.9]}\\
&= d_q(f\circ\varphi)\circ d_q(\varphi)^{-1}(v). &&\text{[(11.5.2)]}
\end{aligned} \tag{11.5.3}$$

The preceding identity makes sense because we have the maps d_q(ϕ)⁻¹ : T_p(M) → R² and d_q(f ∘ ϕ) : R² → R, hence d_p(f) : T_p(M) → R.
(a): This follows from (11.5.3).
(b): Since v was arbitrary, we have from (11.5.3) that d_p(f) = d_q(f ∘ ϕ) ∘ d_q(ϕ)⁻¹. Thus, d_p(f) is the composition of linear maps, so it too is linear.

dp (cf + g)(v) =

[(11.5.1)]

[(11.5.1)

11 Curves and Regular Surfaces in R3

248 and

d(f g ◦ λ) (t0 ) dt  d (f ◦ λ)(g ◦ λ) = (t0 ) dt d(g ◦ λ) d(f ◦ λ) = [f ◦ λ(t0 )] (t0 ) + [g ◦ λ(t0 )] (t0 ) dt dt = f (p) dp (g)(v) + g(p) dp (f )(v)  = f (p) dp (g) + g(p) dp (f ) (v).

dp (f g)(v) =

[(11.5.1)]

[(11.5.1)]

Since v was arbitrary, the result follows.

11.6 Maps on Regular Surfaces in R³

Theorem 11.6.1 (Smoothness Criteria for Maps). If M and N are regular surfaces and F : M → N is a map, then the following are equivalent:
(a) F is (extended) smooth.
(b) For every point p in M, there is a chart (U, ϕ) on M at p and a chart (V, ψ) on N at F(p) such that the map ψ⁻¹ ∘ F ∘ ϕ : ϕ⁻¹(W) → R² is (Euclidean) smooth, where W = ϕ(U) ∩ F⁻¹(ψ(V)).
(c) F is continuous, and for every chart (U, ϕ) on M and every chart (V, ψ) on N such that F(ϕ(U)) ⊆ ψ(V), the map ψ⁻¹ ∘ F ∘ ϕ : U → R² is (Euclidean) smooth.

Proof. (a) ⇒ (c): By Theorem 10.1.16, F is continuous. It follows from Theorem 11.2.12 that F ∘ ϕ is (Euclidean) smooth and from Theorem 11.2.8 that ψ⁻¹ is (extended) smooth. By Theorem 10.1.17, ψ⁻¹ ∘ F ∘ ϕ is (Euclidean) smooth.
(c) ⇒ (b): Let (U′, ϕ′) be a chart on M at p, and let (V, ψ) be a chart on N at F(p). According to [C2] of Section 11.2, ϕ′(U′) is open in M, and ψ(V) is open in N. Since F is continuous, by Theorem 9.1.7, F⁻¹(ψ(V)) is open in M, and therefore, so is W = ϕ′(U′) ∩ F⁻¹(ψ(V)). Both ϕ′(U′) and F⁻¹(ψ(V)) contain p, so W is nonempty. It follows from [C3] of Section 11.2 and Theorem 9.1.14 that U = (ϕ′)⁻¹(W) is open in U′. Let ϕ = ϕ′|_U. We have from Theorem 11.2.10 that (U, ϕ) is a chart at p such that F(ϕ(U)) ⊆ ψ(V). The result now follows from part (c).
(b) ⇒ (a): By Theorem 11.2.8, ϕ⁻¹ is (extended) smooth, and by assumption, ψ and ψ⁻¹ ∘ F ∘ ϕ are (Euclidean) smooth. It follows from Theorem 10.1.17 that F|_{ϕ(U)} = ψ ∘ (ψ⁻¹ ∘ F ∘ ϕ) ∘ ϕ⁻¹ : ϕ(U) → R³ is (extended) smooth. By definition, there is a neighborhood 𝒰 of p in R³ and a (Euclidean) smooth map (F|_{ϕ(U)})^∼ : 𝒰 → R³ such that F|_{ϕ(U)} and (F|_{ϕ(U)})^∼ agree on ϕ(U) ∩ 𝒰. Then F and (F|_{ϕ(U)})^∼ agree on ϕ(U) ∩ 𝒰. According to [C2] of Section 11.2, ϕ(U) is open in M, so there is an open set 𝒱 in R³ such that ϕ(U) = M ∩ 𝒱. Then M ∩ (𝒰 ∩ 𝒱) = (M ∩ 𝒱) ∩ 𝒰 = ϕ(U) ∩ 𝒰, so F and (F|_{ϕ(U)})^∼ agree on M ∩ (𝒰 ∩ 𝒱), where we observe that 𝒰 ∩ 𝒱 is open in R³. Since p was arbitrary, the result follows.

Theorem 11.6.2. Let M, N, and P be regular surfaces, and let F : M → N and G : N → P be maps. If F and G are (extended) smooth, then so is G ∘ F.

Proof. Let p be a point in M. By Theorem 11.6.1, there is a chart (U, ϕ) on M at p and a chart (V₁, ψ₁) on N at F(p) such that ψ₁⁻¹ ∘ F ∘ ϕ is (Euclidean) smooth. For the same reason, there is a chart (V₂, ψ₂) on N at F(p) and a chart (W, μ) at G(F(p)) such that μ⁻¹ ∘ G ∘ ψ₂ is (Euclidean) smooth. By Theorem 11.2.9, ψ₂⁻¹ ∘ ψ₁ is (Euclidean) smooth. It follows from Theorem 10.1.12 that

μ⁻¹ ∘ (G ∘ F) ∘ ϕ = (μ⁻¹ ∘ G ∘ ψ₂) ∘ (ψ₂⁻¹ ∘ ψ₁) ∘ (ψ₁⁻¹ ∘ F ∘ ϕ)

is (Euclidean) smooth. Since p was arbitrary, the result follows from Theorem 11.6.1.

Let M and N be regular surfaces, let F : M → N be a smooth map, and let p be a point in M. The differential of F at p is the map d_p(F) : T_p(M) → T_{F(p)}(N) defined by

$$d_p(F)(v) = \frac{d(F\circ\lambda)}{dt}(t_0) \tag{11.6.1}$$

for all vectors v in T_p(M), where λ(t) : (a, b) → M is any smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). See Figure 11.6.1.

Theorem 11.6.3. With the above setup:
(a) d_p(F)(v) is independent of the choice of smooth curve used to express v.
(b) d_p(F) is linear.

Proof. The proof is similar to that given for Theorem 11.5.2. Let (U, ϕ) be a chart on M at p, let (V, ψ) be a chart on N at F(p), and let λ(t) : (a, b) → M be a smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Suppose without loss of generality that λ((a, b)) ⊂ ϕ(U), and let q = ϕ⁻¹(p). It follows from Theorem 10.1.17 and Theorem 11.2.8 that the map μ = (μ¹, μ²) = ϕ⁻¹ ∘ λ : (a, b) → U is (Euclidean) smooth, and from Theorem 10.1.9 that

$$v = \frac{d\lambda}{dt}(t_0) = \frac{d(\varphi\circ\mu)}{dt}(t_0) = d_q(\varphi)\left(\frac{d\mu}{dt}(t_0)\right).$$


[Figure 11.6.1. Differential map]

By Theorem 11.3.1(d), d_q(ϕ) is invertible, so

$$\frac{d\mu}{dt}(t_0) = d_q(\varphi)^{-1}(v). \tag{11.6.2}$$

We have from Theorem 11.6.1 that G = ψ⁻¹ ∘ F ∘ ϕ is (Euclidean) smooth, and then from Theorem 10.1.12 that so is F ∘ λ = F ∘ ϕ ∘ μ = ψ ∘ G ∘ μ. Then

$$\begin{aligned}
d_p(F)(v) &= \frac{d(F\circ\lambda)}{dt}(t_0) &&\text{[(11.6.1)]}\\
&= \frac{d(\psi\circ G\circ\mu)}{dt}(t_0)\\
&= d_{\mu(t_0)}(\psi\circ G)\left(\frac{d\mu}{dt}(t_0)\right) &&\text{[Th 10.1.9]}\\
&= d_q(\psi\circ G)\circ d_q(\varphi)^{-1}(v). &&\text{[(11.6.2)]}
\end{aligned} \tag{11.6.3}$$

(a): This follows from (11.6.3).
(b): Since v was arbitrary, we have from (11.6.3) that d_p(F) = d_q(ψ ∘ G) ∘ d_q(ϕ)⁻¹. Thus, d_p(F) is the composition of linear maps, so it too is linear.

Theorem 11.6.4 (Chain Rule). Let M, N, and P be regular surfaces, let F : M → N and G : N → P be smooth maps, and let p be a point in M. Then

d_p(G ∘ F) = d_{F(p)}(G) ∘ d_p(F).


Proof. Let v be a vector in T_p(M), and let λ(t) : (a, b) → M be a smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Then

$$\begin{aligned}
d_p(G\circ F)(v) &= \frac{d(G\circ F\circ\lambda)}{dt}(t_0) &&\text{[(11.6.1)]}\\
&= d_{F\circ\lambda(t_0)}(G)\left(\frac{d(F\circ\lambda)}{dt}(t_0)\right) &&\text{[(11.6.1)]}\\
&= d_{F(\lambda(t_0))}(G)\left(d_{\lambda(t_0)}(F)\left(\frac{d\lambda}{dt}(t_0)\right)\right) &&\text{[(11.6.1)]}\\
&= d_{F(p)}(G)\circ d_p(F)(v).
\end{aligned}$$

Since v was arbitrary, the result follows.

The next result is a generalization of Theorem 11.3.3.

Theorem 11.6.5. Let M and M̃ be regular surfaces, let F : M → M̃ be a smooth map, and let p be a point in M. Let (U, ϕ) be a chart on M at p, let (Ũ, ϕ̃) be a chart on M̃ at F(p), and let H and H̃ be the corresponding coordinate frames. Then

$$\bigl[d_p(F)\bigr]_{H_q}^{\tilde{H}_{\tilde{q}}} = J_{\tilde{\varphi}^{-1}\circ F\circ\varphi}(q),$$

where q = ϕ⁻¹(p) and q̃ = ϕ̃⁻¹ ∘ F(p).

Proof. Let H = (H₁, H₂) and H̃ = (H̃₁, H̃₂), let (r¹, r²) and (r̃¹, r̃²) be coordinates on U and Ũ, respectively, and let (e₁, e₂) be the standard basis for R². For given 1 ≤ j ≤ 2, we define a (Euclidean) smooth map ζ = (ζ¹, ζ²) : (−ε, ε) → U by ζ(t) = q + te_j, where ε > 0 is chosen small enough that (q − εe_j, q + εe_j) ⊂ U. Consider the smooth curve λ(t) : (−ε, ε) → M defined by λ = ϕ ∘ ζ, and observe that λ(0) = ϕ(q). By Theorem 10.1.10,

$$\frac{d\lambda}{dt}(0) = \frac{d(\varphi\circ\zeta)}{dt}(0) = \sum_k \frac{d\zeta^k}{dt}(0)\, H_k|_q = H_j|_q \tag{11.6.4}$$

and

$$\frac{d(F\circ\lambda)}{dt}(0) = \frac{d(F\circ\varphi\circ\zeta)}{dt}(0) = \sum_k \frac{d\zeta^k}{dt}(0)\,\frac{\partial(F\circ\varphi)}{\partial r^k}(q) = \frac{\partial(F\circ\varphi)}{\partial r^j}(q). \tag{11.6.5}$$

It follows from Theorem 11.6.1 that the map

G = (G¹, G²) = ϕ̃⁻¹ ∘ F ∘ ϕ

is smooth. By Theorem 10.1.10,

$$\frac{\partial(F\circ\varphi)}{\partial r^j}(q) = \frac{\partial(\tilde{\varphi}\circ G)}{\partial r^j}(q) = \sum_i \frac{\partial G^i}{\partial r^j}(q)\,\tilde{H}_i|_{\tilde{q}}. \tag{11.6.6}$$

We have

$$\begin{aligned}
d_p(F)(H_j|_q) &= \frac{d(F\circ\lambda)}{dt}(0) &&\text{[(11.6.1), (11.6.4)]}\\
&= \frac{\partial(F\circ\varphi)}{\partial r^j}(q) &&\text{[(11.6.5)]}\\
&= \sum_i \frac{\partial G^i}{\partial r^j}(q)\,\tilde{H}_i|_{\tilde{q}}. &&\text{[(11.6.6)]}
\end{aligned}$$

Then (2.2.2), (2.2.3), and (10.1.2) give

$$\bigl[d_p(F)\bigr]_{H_q}^{\tilde{H}_{\tilde{q}}} = \left[\frac{\partial G^i}{\partial r^j}(q)\right] = J_{\tilde{\varphi}^{-1}\circ F\circ\varphi}(q).$$

Then (2.2.2), (2.2.3), and (10.1.2) give    He ∂Gi dp (F ) Hqe = (q) = Jϕe−1 ◦F ◦ϕ (q). q ∂rj

11.7 Vector Fields Along Regular Surfaces in R³

Let M be a regular surface, let V : M → R³ be a map, and let p be a point in M. In the present context, we refer to V as a vector field along M. We say that V vanishes at p if V_p = (0, 0, 0), is nonvanishing at p if V_p ≠ (0, 0, 0), and is nowhere-vanishing (on M) if it is nonvanishing at every p in M.

Let us denote by 𝔛_{R³}(M) the set of smooth vector fields along M. Then 𝔛_{R³}(M) is nothing more than the set of (extended) smooth maps from M to R³. With operations on 𝔛_{R³}(M) defined in a manner analogous to those in Section 10.3, 𝔛_{R³}(M) is a vector space over R and a module over C^∞(M).

We say that a vector field X : M → R³ along M is a (tangent) vector field on M if X_p is in T_p(M) for all p in M. The set of smooth vector fields on M is denoted by 𝔛(M). Clearly, 𝔛(M) ⊂ 𝔛_{R³}(M). In fact, 𝔛(M) is a vector subspace and a C^∞(M)-submodule of 𝔛_{R³}(M). As an example, for a regular surface that has a covering chart, each of the components of the corresponding coordinate frame is a tangent vector field.

Theorem 11.7.1 (Smoothness Criteria for Vector Fields). If M is a regular surface and V : M → R³ is a vector field along M, then the following are equivalent:
(a) V is (extended) smooth.
(b) For every point p in M, there is a chart (U, ϕ) on M at p such that the map V ∘ ϕ : U → R³ is (Euclidean) smooth.
(c) For every chart (U, ϕ) on M, the map V ∘ ϕ : U → R³ is (Euclidean) smooth.

Proof. Setting n = 3 and F = V in Theorem 11.2.12 gives the result.


Let M be a regular surface, let V be a vector field in 𝔛_{R³}(M), and let p be a point in M. The differential of V at p is the map d_p(V) : T_p(M) → R³ defined by

$$d_p(V)(v) = \frac{d(V\circ\lambda)}{dt}(t_0) \tag{11.7.1}$$

for all vectors v in T_p(M), where λ(t) : (a, b) → M is any smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Observe that this is not a special case of (11.6.1) because R³ is not a regular surface.

Theorem 11.7.2. With the above setup, if V = (α¹, α², α³), then

d_p(V)(v) = (d_p(α¹)(v), d_p(α²)(v), d_p(α³)(v)).

Proof. Since V ∘ λ = (α¹ ∘ λ, α² ∘ λ, α³ ∘ λ), we have

$$\begin{aligned}
d_p(V)(v) &= \frac{d(V\circ\lambda)}{dt}(t_0) &&\text{[(11.7.1)]}\\
&= \left(\frac{d(\alpha^1\circ\lambda)}{dt}(t_0), \frac{d(\alpha^2\circ\lambda)}{dt}(t_0), \frac{d(\alpha^3\circ\lambda)}{dt}(t_0)\right)\\
&= \bigl(d_p(\alpha^1)(v), d_p(\alpha^2)(v), d_p(\alpha^3)(v)\bigr). &&\text{[(11.5.1)]}
\end{aligned}$$

Theorem 11.7.3. With the above setup:
(a) d_p(V)(v) is independent of the choice of smooth curve used to express v.
(b) d_p(V) is linear.

Proof. In light of Theorem 11.7.2, the result follows from applying Theorem 11.5.2 to each component of d_p(V)(v).


Chapter 12

Curves and Regular Surfaces in R³ν

Chapter 11 was devoted to a discussion of curves and regular surfaces in R³. A regular surface was defined to be a subset of R³ with certain properties specified in terms of the subspace topology, smooth maps, immersions, homeomorphisms, and so on. The fact that R³ has an inner product (which gives rise to a norm, which in turn gives rise to a distance function, which in turn gives rise to a topology) was relegated to the background, present but largely unacknowledged. The topological and metric aspects of R³ were central to our discussion of what it means for a regular surface to be “smooth”, and in that way the inner product (through the distance function) was involved.

In this chapter, we continue our discussion of regular surfaces, but this time endow each tangent plane with additional linear structure induced by the linear structure on R³. Specifically, we view R³ as either Euclidean 3-space, that is, R³₀ = (R³, e), or Minkowski 3-space, that is, R³₁ = (R³, m), and give each tangent plane the corresponding inner product or Lorentz scalar product obtained by restriction. It must be stressed that this additional linear structure changes nothing regarding the underlying regular surface. The definitions introduced in Chapter 11 remain in force, but we now express them somewhat differently. To that end, let us denote R³₀ and R³₁ collectively by R³ν, with the understanding that ν = 0 or 1 depending on the context. After introducing a series of definitions, we will speak of a regular surface as being a “regular surface in R³ν”. Again it must be emphasized that aside from the additional structure given to tangent planes, a regular surface in R³ν is the same underlying regular surface considered in Chapter 11.


12.1 Curves in R³ν

Let λ = (λ¹, λ², λ³) : (a, b) → R³ν be a smooth curve, and recall from Section 10.1 that the velocity of λ is the smooth curve dλ/dt : (a, b) → R³ν. When ν = 1, we say that λ is spacelike (resp., timelike, lightlike) if (dλ/dt)(t) is spacelike (resp., timelike, lightlike) for all t in (a, b). According to (4.1.1), the norm of (dλ/dt)(t) is

$$\left\|\frac{d\lambda}{dt}(t)\right\| = \sqrt{\left|\left(\frac{d\lambda^1}{dt}(t)\right)^2 + \left(\frac{d\lambda^2}{dt}(t)\right)^2 + (-1)^\nu\left(\frac{d\lambda^3}{dt}(t)\right)^2\right|},$$

where we note the presence of (−1)^ν and the absolute value bars. The function ‖dλ/dt‖ : (a, b) → R is called the speed of λ. Recall that λ is said to be regular if its velocity is nowhere-vanishing. When ν = 0, this is equivalent to its speed being nowhere-vanishing. We say that λ has constant speed if there is a real number c such that ‖(dλ/dt)(t)‖ = c for all t in (a, b).

Let λ(t) : [a, b] → R³ν be an (extended) smooth curve. The length of λ (more precisely, the length of the image of λ) is defined by

$$L(\lambda) = \int_a^b \left\|\frac{d\lambda}{dt}(t)\right\| dt.$$

Other than their role in defining the above integral, we have little interest in the endpoints of [a, b]. In order to avoid having to consider one-sided limits, we continue to frame the discussion in terms of λ(t) : [a, b] → R³ν but compute with λ(t) : (a, b) → R³ν. In short, we systematically confuse the distinction between [a, b] and (a, b). As the next result shows, the length of a smooth curve does not depend on the choice of parametrization.

Theorem 12.1.1 (Diffeomorphism Invariance of Length). If λ(t) : [a, b] → R³ν is a smooth curve and g(u) : [c, d] → [a, b] is a diffeomorphism, then L(λ) = L(λ ∘ g).

Proof. It follows from Theorem 10.1.10 that

$$\left\|\frac{d(\lambda\circ g)}{du}(u)\right\| = \left\|\frac{d\lambda}{dt}\bigl(g(u)\bigr)\right\|\left|\frac{dg}{du}(u)\right|$$

for all u in [c, d]. By the change of variables theorem from the differential calculus of one real variable,

$$L(\lambda) = \int_a^b \left\|\frac{d\lambda}{dt}(t)\right\| dt = \int_{g^{-1}(a)}^{g^{-1}(b)} \left\|\frac{d\lambda}{dt}\bigl(g(u)\bigr)\right\|\frac{dg}{du}(u)\, du.$$

According to Theorem 10.2.4(c), g is either strictly increasing or strictly decreasing. In the former case,

g⁻¹(a) = c,  g⁻¹(b) = d,  and  (dg/du)(u) = |(dg/du)(u)|;

and in the latter case,

g⁻¹(a) = d,  g⁻¹(b) = c,  and  (dg/du)(u) = −|(dg/du)(u)|.

Either way,

$$\int_{g^{-1}(a)}^{g^{-1}(b)} \left\|\frac{d\lambda}{dt}\bigl(g(u)\bigr)\right\|\frac{dg}{du}(u)\, du = \int_c^d \left\|\frac{d\lambda}{dt}\bigl(g(u)\bigr)\right\|\left|\frac{dg}{du}(u)\right| du = \int_c^d \left\|\frac{d(\lambda\circ g)}{du}(u)\right\| du = L(\lambda\circ g).$$

The result follows.
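Theorem 12.1.1 is easy to see numerically in the Euclidean case ν = 0. The sketch below (illustrative; the curve and reparametrization are assumptions) compares the length of λ(t) = (t, t², 0) on [0, 1] with the length of λ ∘ g, where g(u) = (u³ + u)/2 is a diffeomorphism of [0, 1] onto itself.

```python
import numpy as np
from scipy.integrate import quad

g  = lambda u: (u**3 + u) / 2            # strictly increasing, g(0) = 0, g(1) = 1
dg = lambda u: (3*u**2 + 1) / 2          # dg/du > 0
speed   = lambda t: np.sqrt(1 + 4*t**2)  # ||dlam/dt|| for lam(t) = (t, t^2, 0) in R^3_0
speed_g = lambda u: speed(g(u)) * dg(u)  # ||d(lam o g)/du||, by the chain rule

print(quad(speed, 0, 1)[0])    # ~ 1.47894
print(quad(speed_g, 0, 1)[0])  # same value: L(lam) = L(lam o g)
```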

12.2 Regular Surfaces in R³ν

A regular surface is by definition a subset of R³. We now view a regular surface as a subset of R³ν, where ν is left unspecified. The scalar product on R³ν is given by

$$\langle\cdot,\cdot\rangle = \begin{cases} e & \text{if } \nu = 0\\ m & \text{if } \nu = 1, \end{cases}$$

where e and m are the Euclidean inner product and Minkowski scalar product, respectively. Let M be a regular surface, and let p be a point in M. We obtain a symmetric tensor g_p in T⁰₂(T_p(M)) by restricting the scalar product on R³ν to T_p(M) × T_p(M):

$$g_p = \begin{cases} e|_{T_p(M)} & \text{if } \nu = 0\\ m|_{T_p(M)} & \text{if } \nu = 1. \end{cases} \tag{12.2.1}$$

For brevity, we usually denote g_p(·,·) by ⟨·,·⟩. Whether the notation ⟨·,·⟩ refers to the scalar product on R³ν or the tensor g_p will be clear from the context. The first fundamental form on M is the map denoted by g and defined by the assignment p ↦ g_p for all p in M. In the literature, g is often denoted by I. For vector fields X, Y in 𝔛(M), we define a function g(X, Y) = ⟨X, Y⟩ : M → R in C^∞(M) by the assignment

p ↦ g_p(X_p, Y_p) = ⟨X_p, Y_p⟩


for all p in M. Since a subspace of an inner product space is itself an inner product space, when ν = 0, g_p is an inner product on T_p(M) for all p in M. On the other hand, when ν = 1, g_p is bilinear and symmetric on T_p(M), but there is no guarantee that it is nondegenerate. Furthermore, even if g_p is nondegenerate on each T_p(M), it might be an inner product for some p and a Lorentz scalar product for others. In other words, g_p might not have the same index for all p in M. For these reasons, we make the following definition. We say that g is a metric (on M) if:

[G1] g_p is nondegenerate on T_p(M) for all p in M.
[G2] ind(g_p) is independent of p in M.

When [G1] is satisfied, g_p is a scalar product on T_p(M) for all p in M. [G1] and [G2] are automatically satisfied when ν = 0.

We say that a vector v in R³ν is normal at p if v is in T_p(M)^⊥, where ⊥ is computed using the scalar product in R³ν. If v is also a unit vector, it is said to be unit normal at p.

Let V be a vector field along M. Recall that this means nothing more than V is a map from M to R³ν. Looked at another way, V is effectively a collection of vectors in R³ν, one for each p in M. Without further assumptions, there is no reason to expect V to be smooth; that is, V is not necessarily a vector field in 𝔛_{R³ν}(M). We say that V is a unit vector field if V_p is a unit vector for all p in M, and that V is a normal vector field if V_p is normal at p for all p in M. Clearly, a unit vector field is nowhere-vanishing. When V is both a unit vector field and a normal vector field, it is said to be a unit normal vector field. For vector fields V, W along M, let us define the function ⟨V, W⟩ : M → R by the assignment

$$p \mapsto g_p(V_p, W_p) = \langle V_p, W_p\rangle \tag{12.2.2}$$

for all p in M. Let us also define the function ‖V‖ : M → R by the assignment

$$p \mapsto \|V_p\| = \sqrt{|\langle V_p, V_p\rangle|}$$

for all p in M. When ‖V‖ is nowhere-vanishing, we define V/‖V‖ to be the vector field along M given by the assignment p ↦ V_p/‖V_p‖ for all p in M. Here are two properties that a vector field V along M might satisfy:

[V1] T_p(M)^⊥ = RV_p for all p in M.
[V2] ⟨V_p, V_p⟩ is positive for all p in M, or negative for all p in M.

We observe that [V2] is equivalent to V_p being either nonzero spacelike for all p in M, or timelike for all p in M.


Theorem 12.2.1. With the above setup, if V satisfies [V2], then:
(a) V is nowhere-vanishing on M; that is, V_p ≠ (0, 0, 0) for all p in M.
(b) ‖V‖ is nowhere-vanishing on M; that is, ‖V_p‖ ≠ 0 for all p in M.
(c) V/‖V‖ is a unit vector field along M, and if V is smooth on M, then so is V/‖V‖.
(d) If ν = 0, then the converse of part (a) holds: V satisfies [V2] if and only if V is nowhere-vanishing.

Proof. (a), (b), (d): Straightforward.
(c): The first assertion follows from part (b). For the second assertion, since V satisfies [V2], either |⟨V, V⟩| = ⟨V, V⟩ on M, or |⟨V, V⟩| = −⟨V, V⟩ on M. Since V is smooth on M, so is |⟨V, V⟩|, and therefore, so is ‖V‖ = √|⟨V, V⟩|. It follows that V/‖V‖ is smooth.

We now show that properties [G1]–[G2] and [V1]–[V2] are closely related. For convenience of exposition, most of the results to follow are presented for arbitrary ν. However, the findings for ν = 0 are essentially trivial; it is the case ν = 1 that is of primary interest.

Theorem 12.2.2. Let M be a regular surface, let g be the first fundamental form on M, and let p be a point in M such that g_p is nondegenerate. Then:
(a) There are precisely two unit normal vectors at p.
(b) If v is a unit normal vector at p, then

T_p(M)^⊥ = Rv  and  ⟨v, v⟩ = (−1)^{ν − ind(g_p)}.

Proof. It follows from Theorem 4.1.3 that R³ν = T_p(M) ⊕ T_p(M)^⊥, and then from Theorem 1.1.18 that T_p(M)^⊥ is 1-dimensional. Thus, T_p(M)^⊥ = Rv, where v is one of the two unit vectors in T_p(M)^⊥, the other being −v. Let (e₁, e₂) be an orthonormal basis for T_p(M). Then (e₁, e₂, v) is an orthonormal basis for R³ν. Using (4.2.3) twice yields

(−1)^ν = ⟨e₁, e₁⟩⟨e₂, e₂⟩⟨v, v⟩ = (−1)^{ind(g_p)}⟨v, v⟩.

Theorem 12.2.3. Let M be a regular surface, and let g be the first fundamental form on M. Then:
(a) g satisfies [G1] if and only if there is a unit normal vector field V along M satisfying [V1].
(b) If V and Ṽ are unit normal vector fields along M satisfying [V1], then ⟨V_p, V_p⟩ = ⟨Ṽ_p, Ṽ_p⟩ for all p in M.

Proof. (a)(⇒): This follows from parts (a) and (b) of Theorem 12.2.2.
(a)(⇐): Let p be a point in M. If v is a vector in T_p(M) ∩ T_p(M)^⊥, then ⟨v, v⟩ = 0. Since T_p(M)^⊥ = RV_p, we have v = cV_p for some real number c. Then

0 = ⟨cV_p, cV_p⟩ = c²⟨V_p, V_p⟩ = ±c²,

hence c = 0. Thus, T_p(M) ∩ T_p(M)^⊥ = {0}. By Theorem 4.1.3, g_p is nondegenerate on T_p(M). Since p was arbitrary, the result follows.


(b): Let p be a point in M. Since RV_p = T_p(M)^⊥ = RṼ_p, we have Ṽ_p = cV_p for some real number c. Then

±1 = ⟨Ṽ_p, Ṽ_p⟩ = c²⟨V_p, V_p⟩ = ±c²,

hence c = ±1. Thus, ⟨Ṽ_p, Ṽ_p⟩ = ⟨±V_p, ±V_p⟩ = ⟨V_p, V_p⟩. Since p was arbitrary, the result follows.

Theorem 12.2.4. Let M be a regular surface, let V be a unit normal vector field along M, and suppose g, the first fundamental form on M, satisfies [G1]. Then g satisfies [G2] if and only if V satisfies [V2].

Proof. Since V is a unit vector field along M, [V2] is equivalent to: ⟨V_p, V_p⟩ = 1 for all p in M, or ⟨V_p, V_p⟩ = −1 for all p in M. By Theorem 12.2.2(b), (−1)^{ν − ind(g_p)} = ⟨V_p, V_p⟩. The result follows.

Let M be a regular surface, and let g be the first fundamental form on M. When g is a metric, the pair (M, g) is called a regular surface in R³ν. In that case, we ascribe to g those properties of g_p that are independent of p. Accordingly, g is said to be bilinear, symmetric, nondegenerate, and so on. The common value of the ind(g_p) is denoted by ind(g) and called the index of g or the index of M. The next result shows that we could have defined a regular surface in R³ν using properties [V1] and [V2] instead of [G1] and [G2].

Theorem 12.2.5. Let M be a regular surface. Then (M, g) is a regular surface in R³ν if and only if there is a unit normal vector field along M satisfying [V1] and [V2].

Proof. This follows from Theorem 12.2.3(a) and Theorem 12.2.4.

Theorem 12.2.6. Let M be a regular surface, and let g be the first fundamental form on M. Let (U, ϕ) be a chart on M, let (H₁, H₂) be the corresponding coordinate frame, and, using Example 11.2.5, view ϕ(U) as a regular surface. Then (ϕ(U), g|_{ϕ(U)}) is a regular surface in R³ν if and only if ⟨H₁|_q × H₂|_q, H₁|_q × H₂|_q⟩ is positive for all q in U, or negative for all q in U, where the vector product is computed with respect to the standard basis for R³ν.

Remark. Since g|_{ϕ(U)} is the first fundamental form on ϕ(U), the assertion makes sense. In light of earlier remarks, if ν = 0, then (ϕ(U), g|_{ϕ(U)}) is a regular surface in R³₀ without any assumptions on the behavior of H₁ and H₂.

Proof. For brevity, we refer to the statement “⟨H₁|_q × H₂|_q, H₁|_q × H₂|_q⟩ is positive for all q in U, or negative for all q in U” as property (∗).
(⇒): By Theorem 12.2.5, there is a unit normal vector field W along ϕ(U) satisfying [V1] and [V2]. Since W satisfies [V1], we have from Theorem 8.4.4,


Theorem 8.4.8, and Theorem 11.4.1(b) that H₁|_{ϕ⁻¹(p)} × H₂|_{ϕ⁻¹(p)} = c_p W_p for some nonzero real number c_p, hence

⟨H₁|_{ϕ⁻¹(p)} × H₂|_{ϕ⁻¹(p)}, H₁|_{ϕ⁻¹(p)} × H₂|_{ϕ⁻¹(p)}⟩ = c_p²⟨W_p, W_p⟩

for all p in ϕ(U). Since W satisfies [V2], property (∗) is satisfied.
(⇐): We have from property (∗) that H₁|_{ϕ⁻¹(p)} × H₂|_{ϕ⁻¹(p)} ≠ 0 for all p in ϕ(U). Setting (V, g) = R³ν, U = T_p(M), and (u₁, u₂) = (H₁|_{ϕ⁻¹(p)}, H₂|_{ϕ⁻¹(p)}) in Theorem 8.4.9, and using Theorem 11.4.1(b), shows that for all p in ϕ(U), (g|_{ϕ(U)})_p is nondegenerate on T_p(ϕ(U)) if and only if H₁|_{ϕ⁻¹(p)} × H₂|_{ϕ⁻¹(p)} ≠ 0. Thus, g|_{ϕ(U)} satisfies [G1]. For a given point p in ϕ(U), consider the vector

$$V_p = \frac{H_1|_{\varphi^{-1}(p)} \times H_2|_{\varphi^{-1}(p)}}{\bigl\|H_1|_{\varphi^{-1}(p)} \times H_2|_{\varphi^{-1}(p)}\bigr\|}.$$

We have from Theorem 8.4.8 that V_p ≠ (0, 0, 0), and from Theorem 8.4.4 and Theorem 11.4.1(b) that V_p is in T_p(ϕ(U))^⊥. Thus, the assignment p ↦ V_p defines a unit normal vector field V along ϕ(U). It follows from property (∗) that V satisfies [V2]. By Theorem 12.2.4, g|_{ϕ(U)} satisfies [G2].

∂f (x, y) ∂x

2

 +

∂f (x, y) ∂y

2 −1

is positive for all (x, y) in U , or negative for all (x, y) in U . Proof. As remarked in connection with Theorem 12.2.6, the case ν = 0 is straightforward, so assume ν = 1. We have from Example 11.2.5 that graph(f ) is a regular surface, from Theorem 11.4.2 that (U, ϕ) is a covering chart, and from the proof of part (b) of Theorem 12.10.2 that  hH1 |(x,y) × H2 |(x,y) , H1 |(x,y) × H2 |(x,y) i = The result now follows from Theorem 12.2.6.

∂f (x, y) ∂x

2

 +

∂f (x, y) ∂y

2 − 1.

12 Curves and Regular Surfaces in R3ν

262

Theorem 12.2.9 (Surface of Revolution). In the notation of Theorem ˙ 2 − ρ(t) 11.4.3, rev(σ) is a regular surface in R3ν if and only if h(t) ˙ 2 is positive for all t in (a, b), or negative for all t in (a, b). Proof. As remarked in connection with Theorem 12.2.6, the case ν = 0 is straightforward, so assume ν = 1. We have from Example 11.2.5 that rev(σ) is a regular surface, from Theorem 11.4.3 that (U, ϕ) is a covering chart, and from the proof of part (b) of Theorem 12.10.4 that ˙ 2 − ρ(t) hH1 |(t,φ) × H2 |(t,φ) , H1 |(t,φ) × H2 |(t,φ) i = ρ(t)2 [h(t) ˙ 2 ]. By definition, ρ(t) is positive, hence nonzero, for all t in (a, b). The result now follows from Theorem 12.2.6. Let U be an open set in R3ν , and let f be a function in C ∞ (U). The gradient of f (in R3ν ) is the map Grad(f ) : U −→ R3 defined by  Grad(f )p =

∂f ∂f ∂f (p), (p), (−1)ν (p) ∂x ∂y ∂z



for all p in U, where we note the presence of (−1)ν . When ν = 0, the above identity simplifies to (11.4.3), in which case, Grad(f )p = grad(f )p . Theorem 12.2.10 (Level Set of Function). With the above setup, let c be a real number in f (U), and let M = f −1 (c). Then: (a) If Grad(f ) is nowhere-vanishing on M , then M is a regular surface. (b) If Grad(f ) satisfies [V2], then (M, g) is a regular surface in R3ν and Grad(f ) /kGrad(f )k is a unit normal vector field in XR3ν (M ). Remark. We have from Theorem 12.2.1(d) that when ν = 0, the assumptions in parts (a) and (b) are equivalent. Proof. For brevity, let V = Grad(f ). (a): Since V is nowhere-vanishing on M , so is grad(f ). By Theorem 11.4.4, M is a regular surface. (b): Let p be a point in M , and let v be a vector in Tp (M ). By definition, there is a smooth curve λ = (λ1 , λ2 , λ3 ) : (a, b) −→ M such that λ(t0 ) = p and (dλ/dt)(t0 ) = v for some t0 in (a, b). Since f ◦ λ(t) = c for all t in (a, b), by Theorem 10.1.10, X ∂f d(f ◦ λ) dλi (t0 ) = (p) (t0 ) dt ∂xi dt i    1  ∂f ∂f dλ dλ2 dλ3 ν ∂f = (p), 2 (p), (−1) (p) , (t0 ), (t0 ), (t0 ) ∂x1 ∂x ∂x3 dt dt dt

0=

= hVp , vi.

12.2 Regular Surfaces in R3ν

263

Because p and v were arbitrary, it follows that V is a normal vector field along M and Tp (M ) ⊆ (RVp )⊥ for all p in M . We have from parts (a) and (c) of Theorem 12.2.1 that V is nowhere-vanishing on M and W = V /kV k is a unit normal vector field along M . Since f is smooth, so is V , and therefore, by Theorem 12.2.1(c), so is W ; that is, W is a vector field in XR3ν (M ). Clearly, W satisfies [V2]. Since Tp (M ) ⊆ (RVp )⊥ = (RWp )⊥ , and by Theorem 11.3.1(a), Tp (M ) is 2-dimensional, it follows that (RWp )⊥ is either 2-dimensional or 3dimensional. The latter possibility is excluded because we would then have (RWp )⊥ = R3ν , hence hWp , Wp i = 0, which contradicts the fact that W satisfies [V2]. Thus, (RWp )⊥ is 2-dimensional, so Tp (M ) = (RWp )⊥ . By Theorem 4.1.2(c), Tp (M )⊥ = (RWp ) for all p in M ; that is, W satisfies [V1]. It follows from part (a) and Theorem 12.2.5 that (M, g) is a regular surface in R3ν . Let (M, g) be a regular surface in R3ν . We have from Theorem 12.2.5 that there is a (not necessarily smooth) unit normal vector field V along M satisfying [V1] and [V2]. As pointed out in the proof of Theorem 12.2.4, since V is a unit vector field, [V2] is equivalent to: hVp , Vp i = 1 for all p in M , or hVp , Vp i = −1 for all p in M . The common value of the hVp , Vp i is denoted by M and called the sign of M . Thus, M = hVp , Vp i (12.2.3) for all p in M . By Theorem 12.2.3(b), M is independent of the choice of unit normal vector field along M satisfying [V1] and [V2]. We have from Theorem 12.2.2(b) that M = (−1)ν−ind(g) . (12.2.4)

A convenient way to determine ind(g) that avoids having to construct an orthonormal basis is to find M using (12.2.3) and then compute ind(g) using (12.2.4). The values of ν, ind(g), and M are related to each other as follows: ν

ind(g)

M

0

0

1

1

1

1

1

0

−1

(12.2.5)

Theorem 12.2.11. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , let (H1 , H2 ) be the corresponding coordinate frame, and, using Example 11.2.5, view ϕ(U ) as a regular surface. Define a map G : U −→ R3ν , called the coordinate unit normal vector field corresponding to (U, ϕ), by Gq = M

H1 |q × H2 |q kH1 |q × H2 |q k

for all q in U , where the vector product is computed with respect to the standard basis for R3ν . Then:

12 Curves and Regular Surfaces in R3ν

264

(a) G is smooth. (b) M = hGq , Gq i for all q in U .  (c) G ◦ ϕ−1 is a unit normal vector field in XR3ν ϕ(U ) .  Proof. (a), (c): By Theorem 12.2.7, ϕ(U ), g|ϕ(U ) is a regular surface in R3ν , so g|ϕ(U ) satisfies [G1] and [G2]. Setting (V, g) = R3ν , U = Tp (M ), and (u1 , u2 ) = −1 (p) ) in Theorem 8.4.9, and using Theorem 11.4.1(b), it follows (H1 |ϕ−1 (p) , H2 |ϕ

from [G1] that H1 |ϕ−1 (p) × H2 |ϕ−1 (p) 6= 0 for all p in ϕ(U ). Using similar reasoning, by Theorem 8.4.8, H1 |ϕ−1 (p) × H2 |ϕ−1 (p) 6= (0, 0, 0), and by Theorem ⊥ 8.4.4 and Theorem 11.4.1(b), H1 |ϕ−1 (p) × H2 |ϕ−1 (p) is in Tp ϕ(U ) for all p in ϕ(U ). It follows that G ◦ ϕ−1 is a unit normal vector field along ϕ(U ). We have from [G2] and Theorem 12.2.4 that G ◦ ϕ−1 satisfies [V2]. Since H1 and H2 are smooth on U , by Theorem 10.1.17 and Theorem 11.2.8, H1 ◦ ϕ−1 × H2 | ◦ ϕ−1 is smooth on ϕ(U ), and therefore, by Theorem 12.2.1(c), so is G ◦ ϕ−1 . We then have from Theorem 10.1.17 that G = (G ◦ ϕ−1 ) ◦ ϕ is smooth on U . (b): This follows from (12.2.3) and part (c). Continuing with the setup of Theorem 12.2.11, we note that the existence of a unit normal smooth vector field corresponding to each chart on M does not guarantee the existence of a unit normal smooth vector field along M . The reason is that the unit normal vector fields corresponding to different charts may not agree on the overlaps of images of their coordinate domains. In Section 12.7, we place additional structure on M that resolves this problem. Let us now turn our attention to a special class of regular surfaces in R3ν . Recall from Section 3.1 that the quadratic function q corresponding to R3ν is given by q(·) = h·,·i. We consider three level sets of q, the first of which we have seen previously. For ν = 0, the unit sphere is S 2 = {p ∈ R30 : q(p) = 1} = q−1 (1).

(12.2.6)

For ν = 1, we define the pseudosphere by P 2 = {p ∈ R31 : q(p) = 1} = q−1 (1),

(12.2.7)

and hyperbolic space by H2 = {p ∈ R31 : q(p) = −1} = q−1 (−1).

(12.2.8)

Thus, S 2 is the set of (spacelike) unit vectors in R30 , P 2 is the set of spacelike unit vectors in R31 , and H2 is the set of timelike unit vectors in R31 . Taken together, S 2 , P 2 , and H2 are called the hyperquadrics in R3ν and are denoted collectively by Q2 . We have the following table: Q2

ν

Type of vectors

S2

0

spacelike

2

1

spacelike

H2

1

timelike

P

(12.2.9)

12.2 Regular Surfaces in R3ν

265

Theorem 12.2.12 (Hyperquadrics). Let q be the quadratic function corresponding to R3ν , let Q2 be the hyperquadrics in R3ν , and let p be a point in Q2 . Then: (a) Q2 is a regular surface in R3ν ; that is, S 2 is a regular surface in R30 , and P 2 and H2 are regular surfaces in R31 . (b) Grad(q)p = p. kGrad(q)p k (c) p is a unit vector in Tp (Q2 )⊥ , where the first p is viewed as a vector in R3ν and the second p is viewed as a point in Q2 . (d) The hyperquadrics have the following features: Q2

ν

Type of vectors

ind(g)

S2

0

spacelike

0

1

2

1

spacelike

1

1

2

1

timelike

0

−1

P

H

Q2

 Proof. Let p = (x, y, z), and, for brevity, let V = Grad(q). Since 1, 1, (−1)ν is the signature of R3ν , by Theorem 4.2.8, q(x, y, z) = x2 + y 2 + (−1)ν z 2 , hence V(x,y,z) = 2(x, y, z); that is, Vp = 2p, (12.2.10) so hVp , Vp i = 4hp, pi = 4 q(p).

(12.2.11)

(b): As remarked in connection with (12.2.6)–(12.2.8), each p in Q2 is a unit vector. Then (12.2.11) gives p kVp k = |4hp, pi| = 2. The result now follows from (12.2.10). (a), (c): We have from (12.2.6)–(12.2.8) that Q2 is a level set of q, and from (12.2.11) that V satisfies [V2]. It follows from Theorem 12.2.10(b) and part (b) that Q2 is a regular surface in R3ν and p is a unit normal vector at p. (d): The entries in columns two and three come directly from (12.2.9). It follows from (12.2.3) and part (c) that Q2 = hp, pi, so the entries in column five follow from those in column three. Using (12.2.4), the entries in column four follow from those in columns two and five [see (12.2.5)]. It is interesting to observe that according to the table in part (d) of Theorem 12.2.12, the index of H2 is 0. Thus, the tangent plane Tp (H2 ) for each p in H2 is an inner product space, despite the fact that Tp (H2 ) is a subspace of the Lorentz vector space R31 .

12 Curves and Regular Surfaces in R3ν

266

Example 12.2.13 (S 2 ). It was demonstrated in Example 11.2.6 and again in Example 11.4.5 that S 2 a regular surface. By Theorem 12.2.12(a), S 2 is a regular surface in R30 . ♦ We close this section with some definitions that will be used later on. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , and let H = (H1 , H2 ) be the corresponding coordinate frame. We define functions gij in C ∞ (U ) by gij (q) = hHi |q , Hj |q i for all q in U for i, j = 1, 2. The matrix of g with respect to H is denoted by gH and defined by   g (q) g12 (q) gH (q) = 11 g21 (q) g22 (q) for all q in U . Setting p = ϕ(q), we recall from Section 3.1 that the matrix of gp with respect to Hq is (gp )Hq = [gij (q)]. Thus, as a matter of notation, gH (q) = (gp )Hq . The inverse matrix of g with respect to H is denoted by g−1 H and defined by −1 g−1 H (q) = gH (q)

for all q in U . It is usual to express the entries of gH (q)−1 with superscripts: g−1 H (q) =

 −1  11  g11 (q) g12 (q) g (q) g12 (q) = 21 . g21 (q) g22 (q) g (q) g22 (q)

ij ij ∞ The  assignment   q 7−→ g (q) defines functions g in C (U ) for i, j = 1, 2. Since gij and gij are symmetric matrices, the functions gij and gij are symmetric in i, j.

12.3

Induced Euclidean Derivative in R3ν

Let M be a regular surface, and let X be a vector field in X(M ). The induced Euclidean derivative with respect to X consists of two maps, both denoted by DX . The first is DX : C ∞ (M ) −→ C ∞ (M ) defined by DX (f )(p) = dp (f )(Xp )

(12.3.1)



for all functions f in C (M ) and all p in M . The second is DX : XR3 (M ) −→ XR3ν (M ) defined by DX (V )p = dp (V )(Xp )

(12.3.2)

12.3 Induced Euclidean Derivative in R3ν

267

for all vector fields V in XR3ν (M ) and all p in M . (It will be clear from the context when the notation DX denotes the induced Euclidean derivative with respect to X as opposed to the Euclidean derivative with respect to X discussed in Section 10.3.) We have from (11.5.1) and (11.7.1) that DX (f )(p) and DX (V )p can be expressed as d(f ◦ λ) DX (f )(p) = (t0 ) (12.3.3) dt and DX (V )p =

d(V ◦ λ) (t0 ), dt

(12.3.4)

where λ(t) : (a, b) −→ M is any smooth curve such λ(t0 ) = p and (dλ/dt)(t0 ) = Xp . Let V = (α1 , α2 , α3 ). It follows from Theorem 11.7.2 and (12.3.2) that DX (V )p can also be expressed as  DX (V )p = dp (α1 )(Xp ), dp (α2 )(Xp ), dp (α3 )(Xp ) .

(12.3.5)

Following (12.2.2), for vector fields V, W in XR3ν (M ), we define a function hV, W i : M −→ R in C ∞ (M ) by the assignment p 7−→ hVp , Wp i for all p in M . The next result is a counterpart of Theorem 10.3.1. Theorem 12.3.1. Let M be a regular surface, let X, Y and V, W be vector fields in X(M ) and XR3ν (M ), respectively, and let f be a function in C ∞ (M ). Then: (a) DX+Y (V ) = DX (V ) + DY (V ). (b) Df X (V ) = f DX (V ). (c) DX (V + W ) = DX (V ) + DX (W ). (d) DX (f V ) = DX (f ) V + f DX (V ). (e) DX (hV, W i) = hDX (V ), W i + hV, DX (W )i. Proof. (a)–(d): Using Theorem 11.5.3 and Theorem 11.7.2 gives the result. (e): Let p be a point in M , and let λ(t) : (a, b) −→ M be a smooth curve such that λ(t0 ) = p and  (dλ/dt)(t0 ) = Xp for some t0 in (a, b). Also, let (ε1 , ε2 , ε3 ) = 1, 1, (−1)ν be the signature of R3ν , and let V = (α1 , α2 , α3 ) and W = (β 1 , β 2 , β 3 ). By Theorem 4.2.8, hV, W i ◦ λ = hV ◦ λ, W ◦ λi =

X i

εi (αi ◦ λ)(β i ◦ λ),

12 Curves and Regular Surfaces in R3ν

268 hence

d(hV, W i ◦ λ) (t0 ) dt   X d(αi ◦ λ) d(β i ◦ λ) i i = εi (t0 ) (β ◦ λ)(t0 ) + (α ◦ λ)(t0 ) (t0 ) dt dt i     d(V ◦ λ) d(W ◦ λ) = (t0 ), W ◦ λ(t0 ) + V ◦ λ(t0 ), (t0 ) dt dt

DX (hV, W i)p =

= hDX (V )p , Wp i + hVp , DX (W )p i, where the first equality follows from (12.3.3), the third equality from Theorem 4.2.8, and the last equality from (12.3.4). Since p was arbitrary, the result follows. By definition, if V is a vector field in XR3ν (M ), then DX (V ) is a vector field in XR3ν (M ). In particular, if Y is a vector field in X(M ) ⊂ XR3ν (M ), then DX (Y ) is a vector field in XR3ν (M ). However, as the following example shows, DX (Y ) might not be a vector field in X(M ). In other words, even though Yp is a vector in Tp (M ) for all p in M , the same might not be true of DX (Y )p . Example 12.3.2 (Hemisphere). Continuing with Example 11.2.6 and Example 11.3.2, it follows from Example 11.2.5 that p ϕ1 (D) = {(x, y, 1 − x2 − y 2 ) : (x, y) ∈ D} is a regular surface. Then (D, ϕ1 ) is a covering chart on ϕ1 (D), and   x X = 1, 0, − p , 1 − x2 − y 2 which is the first vector field in the coordinate frame corresponding to (D, ϕ1 ), is a tangent vector field on ϕ1 (D). Thus, X(x,y) is in Tp(x,y) ϕ1 (D) , where p p(x, y) = (x, y, 1 − x2 − y 2√). Consider the smooth curve λ(t) : (−1, 1) −→ ϕ1 (D) given by λ(t) = (t, 0, 1 − t2 ), the image of which is the intersection of ϕ1 (D) with the xz-plane in R3ν . We have   t Xλ(t) = 1, 0, − √ . 1 − t2 Let us examine the behavior of DX (X) on the image of λ. According to (12.3.5),   1 DX (X)λ(t) = 0, 0, − . (1 − t2 )3/2 When t = 0, for example, λ(0) = (0, 0, 1),

Xλ(0) = (1, 0, 0),

and

DX (X)λ(0) = (0, 0, −1).

This shows that at ϕ1 (0, 0) = (0, 0, 1), Xλ(0) is tangent to ϕ1 (D), but DX (X)λ(0) is not. ♦

12.3 Induced Euclidean Derivative in R3ν

269

Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , and let H = (H1 , H2 ) and G be the corresponding coordinate frame and coordinate unit normal vector field. In keeping with earlier notation for a vector-valued map, we denote ∂Hi ∂Hi (q) by ∂rj ∂rj q for all q in U for i, j = 1, 2. It follows from Theorem 8.4.10(b) and Theorem 12.2.11 that (Gq , H1 |q , H2 |q ) is a basis for R3ν . Then (∂Hi /∂rj ) can be expressed as X ∂Hi = Γkij Hk + ϑij G, (12.3.6) j ∂r k

Γkij ,

where the called the Christoffel symbols, and the ϑij are uniquely determined functions on U for i, j, k = 1, 2. Theorem 12.3.3. With the above setup, for i, j, k = 1, 2: (a) Γkij and ϑij are functions in C ∞ (U ). (b)   1 X kl ∂gjl ∂gil ∂gij Γkij = g + − . 2 ∂ri ∂rj ∂rl l

(c)  ϑij = M

 ∂Hi , G . ∂rj

(d) Γkij and ϑij are symmetric in i, j; that is, Γkij = Γkji

and

ϑij = ϑji .

Proof. (b): It follows from (12.3.6) that   X  X X ∂Hi n n , H = Γ H + ϑ G, H = Γ hH , H i = gkn Γnij , k n ij k k n ij ij ∂rj n n n and then from gij = hHi , Hj i that     X ∂gij ∂Hi ∂Hj = , Hj + Hi , k = (gjn Γnik + gin Γnjk ). ∂rk ∂rk ∂r n Thus, X ∂gjl = (gln Γnij + gjn Γnil ) i ∂r n X ∂gil = (gln Γnij + gin Γnjl ) ∂rj n

(12.3.7)

12 Curves and Regular Surfaces in R3ν

270

X ∂gij = (gjn Γnil + gin Γnjl ), l ∂r n hence

  X 1 ∂gjl ∂gil ∂gij + − = gln Γnij . 2 ∂ri ∂rj ∂rl n

Multiplying both sides of the preceding identity by gkl and summing over l gives   X X  X X  1 X kl ∂gjl ∂gil ∂gij kl n kl n g + − = g g Γ = g g ln ij ln Γij 2 ∂ri ∂rj ∂rl n n l l l X k n k = δn Γij = Γij . n

(c): We have from Theorem 12.2.11(b) and (12.3.6) that   X  X ∂Hi k ,G = Γij Hk + ϑij G, G = Γkij hHk , Gi + ϑij hG, Gi ∂rj k

k

= M ϑij . (a): This follows from parts (b) and (c). (d): The symmetry of the Γkij follows from part (b) and the symmetry of the gij . We have from Theorem 10.1.6 and (11.2.1) that ∂Hi ∂Hj = . j ∂r ∂ri The symmetry of the ϑij now follows from part (c). We will make frequent use of the symmetry of the Christoffel symbols given by Theorem 12.3.3(d), usually without attribution. A quantity is said to be intrinsic to the geometry of a regular surface in R3ν if its definition depends only on the metric. Accordingly, Theorem 12.3.3(b) demonstrates that the Christoffel symbols are intrinsic. We will see later that the Christoffel symbols are closely related to the “curvature” of a regular surface in R3ν . In particular, when all Christoffel symbols have constant value 0, the surface is “flat”. For example, consider Pln, the xyplane in R30 discussed in Section 13.1. Since gH = I2 , it follows from Theorem 12.3.3(b) that each Γkij = 0. Thus, not surprisingly, Pln is “flat”. Let (M, g) be a regular surface in R3ν , and let X be a (not necessarily smooth) vector field on M . Let (U, ϕ) be a chart on M , and let (H1 , H2 ) be the corresponding coordinate frame. Then X ◦ ϕ can be expressed as X X ◦ϕ= αi Hi , (12.3.8) i

where the αi are uniquely determined functions on U , called the components of X with respect to (U, ϕ). The right-hand side of (12.3.8) is said to express

12.3 Induced Euclidean Derivative in R3ν

271

X in local coordinates with respect to (U, ϕ). Let us introduce the notation i α;j =

∂αi X k i + α Γjk ∂rj

(12.3.9)

k

for i, j = 1, 2. Theorem 12.3.4. With the above setup, if X is in X(M ), then αi is a function in C ∞ (U ) for i = 1, 2. Proof. We have hX ◦ ϕ, Hk i =

X



j

α Hj , Hk

=

X

j

j

αj hHj , Hk i =

X

αj gjk ,

j

hence X k

gik hX ◦ ϕ, Hk i =

X

=

X

gik

X

αj gjk

 =

X

j

k

αj δji

αj

X

j

gik gkj



k

i

=α.

j

By assumption, X is smooth. It follows from Theorem 11.7.1 that X ◦ ϕ is smooth, and therefore, so is αi . Theorem 12.3.5. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , and let (H1 , H2 ) be the corresponding coordinate frame. Let X, Y be vector fields in X(M ), and, in local coordinates, let X X X ◦ϕ= αi Hi and Y ◦ϕ= β j Hj . i

j

Then, for i, j = 1, 2: (a) X DHi ◦ϕ−1 (Hj ◦ ϕ−1 ) ◦ ϕ = Γkij Hk + ϑij G. k

(b) DHi ◦ϕ−1 (Y ) ◦ ϕ =

X

β;ij Hj +

X

j

 β j ϑij G.

j

(c) DX (Y ) ◦ ϕ =

X X j

αi β;ij

 Hj +

X

i

i j



α β ϑij G.

ij

Proof. Theorem 12.3.1 is used repeatedly in what follows. (a): Let p be a point in ϕ(U ), let q = ϕ−1 (p), and let (e1 , e2 ) be the standard basis for R2 . For given 1 ≤ i ≤ 2, we define a smooth map ζ = (ζ 1 , ζ 2 ) : (−ε, ε) −→ U

12 Curves and Regular Surfaces in R3ν

272

by ζ(t) = q + tei , where ε > 0 is chosen small enough that (q − εei , q + εei ) ⊂ U . Consider the smooth curve λ(t) : (−ε, ε) −→ M defined by λ = ϕ ◦ ζ, and observe that λ(0) = ϕ(q) = p. By Theorem 10.1.10, X dζ k dλ d(ϕ ◦ ζ) (0) = (0) = (0)Hk |q = Hi |q dt dt dt

(12.3.10)

k

and X dζ k d(Hj ◦ ζ) ∂Hj ∂Hj (0) = (0) k = . dt dt ∂r q ∂ri q

(12.3.11)

k

It follows from Theorem 10.1.17 and Theorem 11.2.8 that Hj ◦ ϕ−1 is smooth. We have DHi ◦ϕ−1 (Hj ◦ ϕ−1 )|p

d(Hj ◦ ϕ−1 ◦ λ) (0) dt d(Hj ◦ ζ) = (0) dt ∂Hj = ∂ri q X = Γkij (q)Hk |q + ϑij (q)Gq =

[(12.3.4), (12.3.10)]

[(12.3.11)] [(12.3.6), Th 12.3.3(d)]

k

=

X

Γkij Hk

k

=

X k

 + ϑij G

ϕ−1 (p)

  k −1 Γij Hk + ϑij G ◦ ϕ . p

Since p was arbitrary, DHi ◦ϕ−1 (Hj ◦ ϕ−1 ) =

X k

 Γkij Hk + ϑij G ◦ ϕ−1 ,

from which the result follows. (b): Arguing as in part (a) gives d(β j ◦ ζ) ∂β j (0) = (q). dt ∂ri

(12.3.12)

12.3 Induced Euclidean Derivative in R3ν

273

It follows from Theorem 10.1.17 and Theorem 11.2.8 that β j ◦ ϕ−1 is smooth. We have

d(β j ◦ ϕ−1 ◦ λ) (0) dt d(β j ◦ ζ) = (0) dt ∂β j = (q) ∂ri  j  ∂β j −1  ∂β −1 = ϕ (p) = ◦ ϕ (p). ∂ri ∂ri

DHi ◦ϕ−1 (β j ◦ ϕ−1 )(p) =

[(12.3.3)]

[(12.3.12)]

Since p was arbitrary,

DHi ◦ϕ−1 (β j ◦ ϕ−1 ) =

∂β j ◦ ϕ−1 . ∂ri

(12.3.13)

Then

DHi ◦ϕ−1 (Y ) = DHi ◦ϕ−1

X j

= DHi ◦ϕ−1

X j

=

X

=

X

j

j

  β j Hj ◦ ϕ−1 −1

j

(β ◦ ϕ

−1

)(Hj ◦ ϕ

 )

 DHi ◦ϕ−1 (β j ◦ ϕ−1 )(Hj ◦ ϕ−1 ) [DHi ◦ϕ−1 (β j ◦ ϕ−1 ) (Hj ◦ ϕ−1 )

+ (β j ◦ ϕ−1 ) DHi ◦ϕ−1 (Hj ◦ ϕ−1 )]  X ∂β j −1 = ◦ ϕ (Hj ◦ ϕ−1 ) i ∂r j X   X + (β j ◦ ϕ−1 ) Γkij Hk + ϑij G ◦ ϕ−1 j

=

k

X j ∂β j

∂ri

Hj +

X j

β

j

X k

Γkij Hk

 + ϑij G ◦ ϕ−1 ,

12 Curves and Regular Surfaces in R3ν

274

where the fifth equality follows from (12.3.13) and part (a). Thus, DHi ◦ϕ−1 (Y ) ◦ ϕ = =

X ∂β j ∂ri

j

Hj +

X

β

j

X

j

X ∂β j

Γkij Hk

 + ϑij G

Γjik Hj

 + ϑik G

k

X

k

X

Hj + β ∂ri j k  X  X ∂β j X k j k = + β Γ H + β ϑ j ik G ik ∂ri j k k X  X j = β;i Hj + β j ϑij G. j

j

[(12.3.9)]

j

(c): We have DX (Y ) = DPi (αi ◦ϕ−1 )(Hi ◦ϕ−1 ) (Y ) =

X i

=

X i

(αi ◦ ϕ−1 ) DHi ◦ϕ−1 (Y )



αi [DHi ◦ϕ−1 (Y ) ◦ ϕ] ◦ ϕ−1 ,

hence DX (Y ) ◦ ϕ =

X

=

X

i

αi [DHi ◦ϕ−1 (Y ) ◦ ϕ] α

i

X

i

=

X X j

12.4

j

i

β;ij Hj

+

X

  β ϑij G j

[part (b)]

j

 X  αi β;ij Hj + αi β j ϑij G. ij

Covariant Derivative on Regular Surfaces in R3ν

Let (M, g) be a regular surface in R3ν , and let X, Y be vector fields in X(M ). As remarked in conjunction with Example 12.3.2, although the vector field DX (Y ) is in XR3ν (M ), it may not be in X(M ). In other words, even though Yp is a vector in Tp (M ) for all p in M , the same might not be true of DX (Y )p . We need a definition of “derivative” that sends vector fields in X(M ) to vector fields in X(M ), thereby avoiding this problem. Our approach is pragmatic: we modify the induced Euclidean derivative, discussed in Section 12.3, by eliminating the part that is not tangential to M . For each point p in M , we have by definition that gp is nondegenerate on the subspace Tp (M ) of R3ν . It follows from Theorem 4.1.3 that R3ν is the direct sum R3ν = Tp (M ) ⊕ Tp (M )⊥ . For brevity, let us denote the projection maps PTp (M )

12.4 Covariant Derivative on Regular Surfaces in R3ν

275

and PTp (M )⊥ by tanp and norp , respectively, so that tanp : R3ν −→ Tp (M )

and

norp : R3ν −→ Tp (M )⊥ .

The covariant derivative with respect to X consists of two maps, both denoted by ∇X . The first is ∇X : C ∞ (M ) −→ C ∞ (M ) defined by ∇X (f )(p) = dp (f )(Xp )

(12.4.1)

for all functions f in C ∞ (M ) and all p in M . The second is ∇X : X(M ) −→ X(M ) defined by ∇X (Y )p = tanp DX (Y )p



(12.4.2)

for all vector fields Y in X(M ) and all p in M , where DX (Y )p is given by (12.3.2). Observe that in the definition of the covariant derivative, all vector fields reside in X(M ). This is in contrast to the definition in Section 12.3 of the induced Euclidean derivative where vector fields in XR3ν (M ) also appear. For vector fields X, Y in X(M ), we define a function hX, Y i : M −→ R in C ∞ (M ) by the assignment p 7−→ hXp , Yp i for all p in M . Theorem 12.4.1. Let (M, g) be a regular surface in R3ν , let X, Y, Z be vector fields in X(M ), and let f be a function in C ∞ (M ). Then: (a) ∇X+Y (Z) = ∇X (Z) + ∇Y (Z). (b) ∇f X (Y ) = f ∇X (Y ). (c) ∇X (Y + Z) = ∇X (Y ) + ∇X (Z). (d) ∇X (f Y ) = ∇X (f ) Y + f ∇X (Z). (e) ∇X (hY, Zi) = h∇X (Y ), Zi + hY, ∇X (Z)i. Proof. (a)–(d): This follows from parts (a)–(d) of Theorem 12.3.1. (e): Let p be a point in M . We have ∇X (hY, Zi)(p) = DX (hY, Zi)(p)

= hDX (Y )p , Zp i + hYp , DX (Z)p i.

[(12.3.1), (12.4.1)] [Th 12.3.1(e)]

We also have   DX (Y )p = tanp DX (Y )p + norp DX (Y )p  = ∇X (Y )p + norp DX (Y )p ,

[Th 4.1.4] [(12.4.2)]

12 Curves and Regular Surfaces in R3ν

276 so

 hDX (Y )p , Zp i = h∇X (Y )p , Zp i + norp DX (Y )p , Zp = h∇X (Y )p , Zp i. Similarly, hYp , DX (Z)p i = hYp , ∇X (Z)p i. Thus, ∇X (hY, Zi)(p) = h∇X (Y )p , Zp i + hYp , ∇X (Z)p i. Since p was arbitrary, the result follows. Here are the basic formulas for computing with covariant derivatives. Theorem 12.4.2. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , and let (H1 , H2 ) be the corresponding coordinate frame. Let X, Y be vector fields in X(M ), and, in local coordinates, let X X X ◦ϕ= αi Hi and Y ◦ϕ= β j Hj . i

j

Then, for i, j = 1, 2: (a) ∇Hi ◦ϕ−1 (Hj ◦ ϕ−1 ) ◦ ϕ =

X

Γkij Hk .

k

(b) ∇Hi ◦ϕ−1 (Y ) ◦ ϕ =

X

β;ij Hj .

j

(c) ∇X (Y ) ◦ ϕ =

X X j

 αi β;ij Hj .

i

Proof. This follows from Theorem 12.3.5. Let (M, g) be a regular surface in R3ν , and let X, Y be vector fields in X(M ). The second order covariant derivative with respect to X and Y consists of two maps, both denoted by ∇2X,Y . The first is ∇2X,Y : C ∞ (M ) −→ C ∞ (M ) defined by  ∇2X,Y (f ) = ∇X ∇Y (f ) − ∇∇X (Y ) (f )

(12.4.3)



for all functions f in C (M ). The second is ∇2X,Y : X(M ) −→ X(M ) defined by  ∇2X,Y (Z) = ∇X ∇Y (Z) − ∇∇X (Y ) (Z)

(12.4.4)

for all vector fields Z in X(M ). These definitions are counterparts of the Euclidean versions given in Section 10.3.

12.4 Covariant Derivative on Regular Surfaces in R3ν

277

Theorem 12.4.3. Let (M, g) be a regular surface in R3ν , let X, Y, Z be vector fields in X(M ), and, in local coordinates, let X ◦ϕ=

X

αi Hi ,

Y ◦ϕ=

i

X

β j Hj ,

Z ◦ϕ=

and

j

X

γ k Hk .

k

Then: (a) ∇2X,Y (Z) ◦ ϕ XX X X ∂2γl ∂γ k ∂γ l = αi β j i j + (αi β j + αj β i ) i Γljk − αi β j k Γkij ∂r ∂r ∂r ∂r ij l ijk ijk   X ∂Γljk X l n + αi β j γ k + (Γin Γjk − Γlkn Γnij ) Hl . ∂ri n ijk

(b) If Γijk = 0 for i, j, k = 1, . . . , m, then ∇2X,Y (Z) ◦ ϕ = =

X X

αi β j

ij

l

∇2Y,X (Z)

 ∂2γl Hl ∂ri ∂rj

◦ ϕ.

Proof. (a): The proof is a lengthy computation that uses Theorem 12.4.1 repeatedly.  Step 1. Compute ∇X ∇Y (Z) . Let X l µl = β j γ;j . (12.4.5) j

Using Theorem 12.4.2(c) twice gives ∇Y (Z) ◦ ϕ = and 

∇X ∇Y (Z) ◦ ϕ =

X

µl Hl

l

X X

αi µl;i

 Hl .

(12.4.6)

i

l

We have from (12.3.9) and (12.4.5) that µl;i =

  ∂µl X k l ∂µl X X j k l + µ Γ = + β γ ik ;j Γik ∂ri ∂ri j k

l

∂µ = + ∂ri

X jk

k

k l β j γ;j Γik ,

(12.4.7)

12 Curves and Regular Surfaces in R3ν

278 where l γ;j =

∂γ l X n l + γ Γjn . ∂rj n

(12.4.8)

We seek alternative expressions for the two terms in the second row of (12.4.7). It follows from (12.4.5) that l X ∂β j X ∂γ;j ∂µl l j = γ + β , ∂ri ∂ri ;j ∂ri j j

(12.4.9)

and from (12.4.8) that X ∂β j j

∂r

γl = i ;j

X ∂β j ∂γ l X ∂β j + γ n Γljn i ∂r j i ∂r ∂r j jn

(12.4.10)

and l X ∂γ n X ∂Γljn ∂γ;j ∂2γl l = + Γ + γn . i jn ∂ri ∂ri ∂rj ∂r ∂ri n n

(12.4.11)

Then (12.4.11) gives X j

βj

l 2 l X X ∂γ n X ∂γ;j ∂Γljn j ∂ γ j l j n = β + β Γ + β γ . ∂ri ∂ri ∂rj ∂ri jn ∂ri j jn jn

(12.4.12)

Combining (12.4.9), (12.4.10), and (12.4.12) yields 2 l X ∂β j ∂γ l X ∂β j X ∂µl n l j ∂ γ = + γ Γ + β jn ∂ri ∂ri ∂rj ∂ri ∂ri ∂rj j jn j

+

X jn

βj

X ∂Γljn ∂γ n l Γjn + βj γn , i ∂r ∂ri jn

(12.4.13)

which is the desired expression for the first term in the second row of (12.4.7). We have from (12.4.8) that X

k l β j γ;j Γik =

X

=

X

jk

βj

jk

jk



 ∂γ k X n k l + γ Γ jn Γik ∂rj n

X ∂γ k l β Γ + β j γ n Γkjn Γlik , ik ∂rj j

jkn

(12.4.14)

12.4 Covariant Derivative on Regular Surfaces in R3ν

279

which is the desired expression for the second term in the second row of (12.4.7). Substituting (12.4.13) and (12.4.14) into (12.4.7) gives 2 l X ∂β j ∂γ l X ∂β j X n l j ∂ γ + γ Γ + β jn ∂ri ∂rj ∂ri ∂ri ∂rj j jn j

µl;i =

X

+

βj

X ∂Γljn ∂γ n l j n Γ + β γ jn ∂ri ∂ri jn

βj

X ∂γ k l Γ + β j γ n Γkjn Γlik . ik ∂rj

jn

X

+

jk

(12.4.15)

jkn

 It follows from (12.4.6) and (12.4.15) that the lth component of ∇X ∇Y (Z) ◦ ϕ is l ∇X (∇Y (Z)) ◦ ϕ X = αi µl;i i

=

X

αi

ij

+

X

+

X

X ∂β j ∂γ l X i ∂β j n l ∂2γl + α γ Γjn + αi β j i j i j i ∂r ∂r ∂r ∂r ∂r ijn ij

αi β j

X ∂Γljn ∂γ n l i j n Γ + α β γ ∂ri jn ijn ∂ri

αi β j

X ∂γ k l Γik + αi β j γ n Γkjn Γlik j ∂r

ijn

ijk

=

(1) X ij

j

(3)

ijk

X

(5)

αi β j

ijk (6)

+

(2)

2 l X ∂γ l X i ∂β j k l i j ∂ γ α + α γ Γ + α β jk ∂ri ∂rj ∂ri ∂ri ∂rj ij i ∂β

(4)

+

(12.4.16)

ijkn

X ijk

X ∂Γljk ∂γ k l i j k Γ + α β γ ∂ri jk ∂ri ijk (7)

X ∂γ k l αβ Γ + αi β j γ k Γnjk Γlin , ik ∂rj i j

ijkn

where the summations have been numbered for easy reference. Step 2. Compute ∇∇X (Y ) (Z). By Theorem 12.4.2(c), ∇X (Y ) =

X X j

i

i

−1

(α ◦ ϕ

)(β;ij

−1

◦ϕ

 ) Hj ◦ ϕ−1 ,

12 Curves and Regular Surfaces in R3ν

280 so

∇∇X (Y ) (Z) = ∇P P i −1 j −1  (Z) )(β;i ◦ϕ ) Hj ◦ϕ−1 j i (α ◦ϕ X = (αi ◦ ϕ−1 )(β;ij ◦ ϕ−1 )∇Hj ◦ϕ−1 (Z) ij

=

X ij

=

X  l (αi ◦ ϕ−1 )(β;ij ◦ ϕ−1 ) (γ;j ◦ ϕ−1 )Hl ◦ ϕ−1 l

XX ij

l

=

 l (αi ◦ ϕ−1 )(β;ij ◦ ϕ−1 )(γ;j ◦ ϕ−1 ) Hl ◦ ϕ−1

XX

l αi β;ij γ;j

ij

l

  Hl ◦ ϕ−1 ,

where the third equality follows from Theorem 12.4.2(b). Then ∇∇X (Y ) (Z) ◦ ϕ =

XX l

l αi β;ij γ;j

 Hl ,

(12.4.17)

ij

where β;ij =

∂β j X k j + β Γik ∂ri

and

k

l γ;j =

∂γ l X n l + γ Γjn . ∂rj n

(12.4.18)

From (12.4.17) and (12.4.18), the lth component of ∇∇X (Y ) (Z) ◦ ϕ is l ∇∇X (Y ) (Z) ◦ ϕ X l = αi β;ij γ;j ij

 l X  X  ∂β j X ∂γ k j n l = αi + β Γ + γ Γ jn ik ∂ri ∂rj n ij k

=

X

j

αi

ij

+

X ijk

X

X ∂β j ∂β ∂γ + αi i γ n Γljn ∂ri ∂rj ∂r ijn

∂γ l αi β k j Γjik ∂r

(1)

=

+

X

αi β k γ n Γjik Γljn

ijkn

(2)

αi

ij

∂β j ∂γ l X i ∂β j k l + α γ Γjk ∂ri ∂rj ∂ri ijk

(8)

+

l

X ijk

l

αi β j

(9)

X ∂γ k Γij + αi β j γ k Γnij Γlkn . k ∂r ijkn

(12.4.19)

12.4 Covariant Derivative on Regular Surfaces in R3ν

281

Step 3. Compute ∇2X,Y (Z). By definition,  ∇2X,Y (Z) = ∇X ∇Y (Z) − ∇∇X (Y ) (Z). We have from (12.4.16) and (12.4.19) that the lth component of ∇2X,Y (Z) ◦ ϕ is l ∇2X,Y (Z) ◦ ϕ  l   = ∇X ∇Y (Z) − ∇∇X (Y ) (Z) ◦ ϕ =

X (1)

− =

+

X (1)

(2) X

+

(2) X

+

+

(8) X

(4) X

+

(5) X

+

(6) X

+

(7)  X

(9)  X

+

(5)

k X X ∂Γljk ∂2γl i j ∂γ l i j k αβ + α β Γ + α β γ ∂ri ∂rj ∂ri jk ∂ri

(12.4.20)

i j

ij

ijk

(6)

X (8) X

ijk

(7)

αi β j

ijk



(3) X

(4)

(3) X

+

+

X ∂γ k l Γik + αi β j γ k Γnjk Γlin j ∂r ijkn (9)

αi β j

ijk

X ∂γ l k Γ − αi β j γ k Γnij Γlnk . ij ∂rk ijkn

Since (4) X

αi β j

ijk

(6) (4)+(6) k X X ∂γ k l ∂γ k j i ∂γ l Γ + α β Γ = (αi β j + αj β i ) i Γljk jk jk i i ∂r ∂r ∂r ijk

(12.4.21)

ijk

and (5) X

(7)

αi β j γ k

ijk

ijkn

(5)

=

X

αi β j γ k

ijk



(9)  X

∂Γljk ∂ri

αi β j γ k

ijk

(5)+(7)−(9)

=

(9)

X ∂Γljk X i j k n l + α β γ Γjk Γin − αi β j γ k Γnij Γlkn i ∂r

X ijk

ijkn

(7)

 X X i j k n l + αβ γ Γjk Γin

X

n

ijk

Γnij Γlkn



n i j k

αβ γ



 ∂Γljk X l n l n + (Γin Γjk − Γkn Γij ) , ∂ri n

(12.4.22)

12 Curves and Regular Surfaces in R3ν

282 we have from (12.4.20)–(12.4.22) that l ∇2X,Y (Z) ◦ ϕ =

(3) X

αi β j

ij

(4)+(6) (8) k X X ∂2γl ∂γ l i j j i ∂γ l + (α β + α β ) Γ − αi β j k Γkij jk i j i ∂r ∂r ∂r ∂r ijk

(5)+(7)−(9)

+

X ijk

i j k

αβ γ



ijk

 ∂Γljk X l n l n + (Γin Γjk − Γkn Γij ) . ∂ri n

(b): This follows from Theorem 10.1.6 and part (a). It was remarked following Theorem 12.3.3 that the Christoffel symbols corresponding to Pln have constant value 0 and this is related to Pln being “flat”. We see from Theorem 12.4.3(b) that in the context of Pln, the order of vector fields is immaterial when computing the second order covariant derivative. This is reminiscent of the Euclidean situation in Rm [see (Theorem 10.3.4(b)]. The following example shows that for the sphere S 2 , order is important. Example 12.4.4 (S 2 ). In what follows, we use the Christoffel symbol results of Section 13.4. Setting X = H1 ◦ ϕ−1 and Y = H2 ◦ ϕ−1 in Theorem 12.4.3(a) (so that α1 = β 2 = 1 and α2 = β 1 = 0), and also setting r1 = θ and r2 = φ, a lengthy but straightforward computation yields  ∇2H1 ◦ϕ−1 ,H2 ◦ϕ−1 (Z) − ∇2H2 ◦ϕ−1 ,H1 ◦ϕ−1 (Z) ◦ ϕ = sin2 (θ)γ 2 H1 − γ 1 H2 . Due to the symmetry of the sphere, it is not surprising that the above expression is independent of φ. ♦

12.5

Covariant Derivative on Curves in R3ν

Let (M, g) be a regular surface in R3ν , let λ(t) : (a, b) −→ M be a smooth curve, and let J(t) : (a, b) −→ R3ν be a map. In the present context, we refer to J as a vector field along λ. The set of smooth vector fields along λ is denoted by XR3ν (λ). As an example, if V is a vector field in XR3ν (M ), then V ◦ λ is a vector field in XR3ν (λ). We say that J is a (tangent) vector field on λ if J(t) is in Tλ(t) (M ) for all t in (a, b). Let us denote the set of smooth vector fields on λ by XM (λ). For example, dλ/dt, the velocity of λ, is in XM (λ). As another example, if X is a vector field in X(M ), then X ◦ λ is a vector field in XM (λ). For a vector field J in XM (λ), we have by definition that J(t) is a vector in Tλ(t) (M ) for all t in (a, b). But this is not necessarily so for (dJ/dt)(t). In particular, although the velocity of λ is in XM (λ), its (Euclidean) acceleration may not be. We need a definition of “derivative” that avoids this problem. Our response is similar to the approach taken in Section 12.4. The covariant derivative on λ consists of two maps, both denoted by ∇/dt. The first is   ∇ : C ∞ (a, b) −→ C ∞ (a, b) dt

12.5 Covariant Derivative on Curves in R3ν defined by

283

∇f df (t) = (t) dt dt

 for all functions f in C ∞ (a, b) and all t in (a, b). The second is ∇ : XM (λ) −→ XM (λ) dt defined by   ∇J dJ (t) = tanλ(t) (t) dt dt

(12.5.1)

for all vector fields J in XM (λ) and all t in (a, b), where, following Section 12.4, tanλ(t) denotes the projection map PTλ(t) (M ) . The (covariant) acceleration of λ is defined to be the smooth curve   ∇ dλ (t) : (a, b) −→ M. dt dt For vector fields J, K in XM (λ), we define a function hJ, Ki : (a, b) −→ R  in C ∞ (a, b) by the assignment t 7−→ hJ(t), K(t)i and all t in (a, b). The definition of covariant derivative on a curve has an appealing physical interpretation. Imagine a “bug” that is confined to the 2-dimensional world of a given regular surface in R3ν . For this creature, there is no “up” or “down”, only movements “on” the surface. Suppose the bug is scurrying along, tracing a smooth curve as it goes. From our vantage point in R3ν , and knowing something about Newtonian physics, we determine that the bug has a certain velocity and nonzero (Euclidean) acceleration. For both us and the bug, velocity is entirely a tangential phenomenon. On the other hand, we observe the acceleration to have both tangential and normal components. But not so for the bug, which is oblivious to any such normal phenomena. This suggests that in order to quantify what we presume to be the acceleration felt by the bug, we should confine our attention to the tangential component. This is accomplished by taking the projection onto the tangent plane. Theorem 12.5.1. Let (M, g) be a regular surface in R3ν , and let λ(t) : (a, b) −→ M be a smooth curve. Let J, K be vector fields in XM (λ), let f be a function in  C ∞ (a, b) , and let g(u) : (c, d) −→ (a, b) be a diffeomorphism. Then, for all t in (a, b) and all u in (c, d): (a) ∇(J + K) ∇J ∇K (t) = (t) + (t). dt dt dt

12 Curves and Regular Surfaces in R3ν

284 (b)

∇(f J) df ∇J (t) = (t) J(t) + f (t) (t). dt dt dt (c) d(hJ, Ki) (t) = dt



   ∇J ∇K (t), K(t) + J(t), (t) . dt dt

(d)  ∇(J ◦ g) dg ∇J (u) = (u) g(u) . du du dt Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , and let (H1 , H2 ) and G be the corresponding coordinate frame and coordinate unit  normal vector field. Let λ : (a, b) −→ M be a smooth curve such that λ (a, b) ⊂ U , and let J be a vector field in XM (λ). By Theorem 10.1.17 and Theorem 11.2.8, the map µ = ϕ−1 ◦ λ = (µ1 , µ1 ) : (a, b) −→ U (12.5.2) is smooth. Then J(t) can be expressed as X J(t) = αi (t)Hi |µ(t) ,

(12.5.3)

i

where the αi are uniquely determined functions in C ∞ (U ), called the components of J with respect to (U, ϕ). The right-hand side of (12.5.3) is said to express J in local coordinates with respect to (U, ϕ). Theorem 12.5.2. With the above setup, for all t in (a, b): (a)  X dαk X  dJ dµj (t) = (t) + αi (t) (t) Γkij µ(t) Hk |µ(t) dt dt dt ij k X   dµj i + α (t) (t) ϑij µ(t) Gµ(t) . dt ij (b)  X dαk X  ∇J dµj (t) = (t) + αi (t) (t) Γkij µ(t) Hk |µ(t) . dt dt dt ij k

(c)  X d2 µk X dµi  d2 λ dµj k (t) = (t) + (t) (t) Γij µ(t) Hk |µ(t) dt2 dt dt dt ij k X i   dµj dµ + (t) (t) ϑij µ(t) Gµ(t) . dt dt ij (d) 

 X i  dλ dλ dµ dµj (t), (t) = (t) (t) gij µ(t) . dt dt dt dt ij

12.6 Lie Bracket in R3ν

285

Proof. (a): We have from (12.5.3) and Theorem 10.1.10 that  X dαi dJ d(Hi ◦ µ) i (t) = (t) Hi |µ(t) + α (t) (t) dt dt dt i  X j X dαi X dµ ∂Hi i = (t) Hi |µ(t) + α (t) (t) j dt dt ∂r µ(t) i i j X dαk X dµj ∂Hi = (t) Hk |µ(t) + αi (t) (t) j . dt dt ∂r µ(t) ij

(12.5.4)

k

Using (12.3.6), the second term in the last row of (12.5.4) can be expressed as dµj ∂Hi (t) j dt ∂r µ(t) ij X  X   dµj i k = α (t) (t) Γij µ(t) Hk |µ(t) + ϑij µ(t) Gµ(t) dt ij k  XX  dµj i k = α (t) (t) Γij µ(t) Hk |µ(t) dt ij k X   dµj + αi (t) (t) ϑij µ(t) Gµ(t) . dt ij

X

αi (t)

(12.5.5)

Combining (12.5.4) and (12.5.5) gives the result. (b): This follows from (12.5.1) and part (a). (c): Setting J = dλ/dt and αi = dµi /dt in part (a) gives the result. (d): By Theorem 10.1.10, dλ d(ϕ ◦ µ) X dµi (t) = = (t) Hi |µ(t) , dt dt dt i from which the result follows.

12.6

Lie Bracket in R3ν

Let (M, g) be a regular surface in R3ν . Lie bracket is the map [·,·] : X(M ) × X(M ) −→ X(M ) defined by [X, Y ] = ∇X (Y ) − ∇Y (X) for all vector fields X, Y in X(M ).

(12.6.1)

12 Curves and Regular Surfaces in R3ν

286

Theorem 12.6.1. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , let X, Y be vector fields in X(M ), and, in local coordinates, let X ◦ϕ=

X

αi Hi

Y ◦ϕ=

and

i

X

β j Hj .

j

Then  XX ∂β j ∂αj αi i − β i i Hj ∂r ∂r j i  X X j = (αi β;ij − β i α;i ) Hj .

[X, Y ] ◦ ϕ =

j

i

Proof. By Theorem 12.4.2(c), ∇X (Y ) ◦ ϕ = and ∇Y (X) ◦ ϕ =

XX j

i

XX j

 αi β;ij Hj

j β i α;i

 Hj ,

i

hence [X, Y ] ◦ ϕ = ∇X (Y ) ◦ ϕ − ∇Y (X) ◦ ϕ =

X X j

i

(αi β;ij



j β i α;i )

 Hj .

We have from (12.3.9) that X i

  j X  X  ∂β j X j k j i ∂α k j αi β;ij − β i α;i = αi + β Γ − β + α Γ ik ik ∂ri ∂ri i k k  X X ∂β j X ∂αj = αi i − β i i + αi β k Γjik − αk β i Γjik ∂r ∂r i ik ik  j X ∂β j ∂α = αi i − β i i , ∂r ∂r i

where the last equality relies on Theorem 12.3.3(d). The result follows. The next result shows that the Lie bracket on a regular surface in R3ν , formulated above in terms of the covariant derivative, can also be expressed in terms of the induced Euclidean derivative. Theorem 12.6.2. If (M, g) is a regular surface in R3ν and X, Y are vector fields in X(M ), then ∇X (Y ) − ∇Y (X) = DX (Y ) − DY (X).

12.6 Lie Bracket in R3ν

287

Proof. We have from Theorem 12.3.5(c) that  X  X X i j i j DX (Y ) ◦ ϕ = α β;i Hj + α β ϑij G j

and DY (X) ◦ ϕ =

i

XX j

ij

 X  j β i α;i Hj + β i αj ϑij G.

i

ij

Then Theorem 12.3.3(d) gives  X X  j DX (Y ) − DY (X) ◦ ϕ = αi β;ij − β i α;i Hj . j

i

The result now follows from Theorem 12.6.1. Here is a counterpart of Theorem 10.4.2. Theorem 12.6.3. Let (M, g) be a regular surface in R3ν , let X, Y, Z be vector fields in X(M ), and let f, g be functions in C ∞ (M ). Let (U, ϕ) be a chart on M and let (H1 , H2 ) be the corresponding coordinate frame. Then: (a) [Y, X] = −[X, Y ]. (b) [X + Y, Z] = [X, Z] + [Y, Z]. (c) [X, Y + Z] = [X, Y ] + [X, Z]. (d) [f X, gY ] = f g[X, Y ] +f ∇X (g)Y − g∇Y (f )X. (e) X, [Y, Z] + Y, [Z, X] + Z, [X, Y ] = (0, 0, 0). (Jacobi’s identity) (f) [Hi ◦ ϕ−1 , Hj ◦ ϕ−1 ] = (0, 0, 0) for i, j = 1, 2. Proof. (a)–(d): Parts (a)–(d) of Theorem 12.4.1 and (12.6.1) give the result. (e): We have from parts (a) and (c) of Theorem 12.4.1 and (12.6.1) that   X, [Y, Z] = ∇X ([Y, Z]) − ∇[Y,Z] (X)  = ∇X ∇Y (Z) − ∇Z (Y ) − ∇∇Y (Z)−∇Z (Y ) (X)   = ∇X ∇Y (Z) − ∇X ∇Z (Y ) − ∇∇Y (Z) (X) + ∇∇Z (Y ) (X). Likewise,     Y, [Z, X] = ∇Y ∇Z (X) − ∇Y ∇X (Z) − ∇∇Z (X) (Y ) + ∇∇X (Z) (Y )     Z, [X, Y ] = ∇Z ∇X (Y ) − ∇Z ∇Y (X) − ∇∇X (Y ) (Z) + ∇∇Y (X) (Z). Summing the preceding identities and using (12.4.4) yields       X, [Y, Z] + Y, [Z, X] + Z, [X, Y ] = [∇2X,Y (Z) − ∇2Y,X (Z)] + [∇2Y,Z (X) − ∇2Z,Y (X)] + [∇2Z,X (Y ) − ∇2X,Z (Y )] = R(X, Y )Z + R(Y, Z)X + R(Z, X)Y = (0, 0, 0),

where the second identity follows from Theorem 12.9.1, and the last identity from Theorem 12.9.13. (f): This follows from Theorem 12.6.1.

12 Curves and Regular Surfaces in R3ν

288

12.7

Orientation in R3ν

In Section 12.2, we defined a regular surface to be a regular surface in R3ν provided its first fundamental form satisfies certain properties. We then proceeded to demonstrate an equivalent formulation based on the existence of a particular type of unit normal vector field. Aside from an increase in geometric intuition, the latter approach offers computational advantages. For example, as remarked in connection with (12.2.4), it is usually more convenient to compute the index of a regular surface in R3ν indirectly using its sign. In this section, we explore orientation in the context of regular surfaces in R3ν . The basic definition is given in terms of atlases, but once again unit normal vector fields play a prominent role. In what follows, we rely heavily on the discussion of orientation of vector spaces given in Section 8.2. Theorem 12.7.1. Let (M, g) be a regular surface in R3ν , let (U, ϕ) be a chart on M , let H = (H1 , H2 ) and G be the corresponding coordinate frame and coordinate unit normal vector field, and let q be a point in U . Then (Gq , H1 |q , H2 |q ) is a basis for R3ν that has the standard orientation. Proof. This follows from Theorem 8.4.10(b) and Theorem 12.2.11. e , ϕ) Let (M, g) be a regular surface in R3ν , and let (U, ϕ) and (U e be overlapping e be the corresponding coordinate frames, and let charts on M . Let H and H e be the corresponding coordinate unit normal vector fields. Let W = G and G e ), and let p be a point in W . Recall from Section 8.2 that the ϕ(U ) ∩ ϕ( eU eϕe−1 (p) are said to be consistent if coordinate bases Hϕ−1 (p) and H  He −1  det idTp (M ) Hϕe (p) > 0. ϕ−1 (p)

e , ϕ) eϕe−1 (p) are conWe say that (U, ϕ) and (U e are consistent if Hϕ−1 (p) and H sistent for all p in W . e , ϕ) Theorem 12.7.2. With the above setup, (U, ϕ) and (U e are consistent if and e only if Gϕ−1 (p) = Gϕe−1 (p) for all p in V . Proof. This follows from Theorem 8.4.11(b). Let (M, g) be a regular surface in R3ν . An atlas for M is said to be consistent if every pair of overlapping charts in the atlas is consistent. We say that M is orientable if it has a consistent atlas. Suppose M is in fact orientable, and let A be a consistent atlas for M . The triple (M, g, A) is called an oriented regular surface in R3ν . Let p be a point in M , let (U, ϕ) be a chart in A at p, and let H be the corresponding coordinate frame. Let O(p) = [Hϕ−1 (p) ], where we recall from Section 8.2 that [Hϕ−1 (p) ] is the equivalence class of all bases for Tp (M ) (not just coordinate bases) that are consistent with Hϕ−1 (p) .

12.7 Orientation in R3ν

289

e , ϕ) e be the corresponding coordinate Let (U e be another chart in A at p, and let H e frame. Since A is consistent, (U, ϕ) and (U , ϕ) e are consistent, hence [Hϕ−1 (p) ] = eϕe−1 (p) ]. This shows that the definition of O(p) is independent of the choice [H of representative chart at p. We call the set of equivalence classes O = {O(p) : p ∈ M } the orientation induced by A and say that M is oriented by A. The notation (M, g, O), and sometimes (M, g, A, O), is used as an alternative to (M, g, A). Consider the map ι : R2 −→ R2 given by ι(r1 , r2 ) = (−r1 , r2 ). Since ι is a diffeomorphism and ι−1 = ι, ι(U ), ϕ ◦ ι is a chart on M , where, for brevity, we denote ι|ι(U ) by ι. Because  ϕ ◦ ι(r1 , r2 ) = ϕ1 (−r1 , r2 ), ϕ2 (−r1 , r2 ), ϕ3 (−r1 , r2 ) , the corresponding coordinate frame and coordinate unit normal vector field are −H = (−H1 , H2 ) and −G. It is easily shown using Theorem 11.3.3 that   −A = ι(U ), ϕ ◦ ι : (U, ϕ) ∈ A is a consistent atlas for M . The orientation of M induced by −A is −O = {−O(p) : p ∈ M }, where −O(p) = [−Hϕ−1 (p) ] = [(−H1 |ϕ−1 (p) , H2 |ϕ−1 (p) )]. We say that the orientation −O is the opposite of O. Theorem 12.7.3. If (M, g, A) is an oriented regular surface in R3ν , then: (a) There is a unit normal vector field N in XR3ν (M ), called the Gauss map, such that for every chart (U, ϕ) in A, N ◦ ϕ is the corresponding coordinate unit normal vector field. (b) N satisfies properties [V1] and [V2] of Section 12.2. Remark. Calling N a “map” is reasonable because XR3ν (M ) is, by definition, a set of maps. Proof. (a): Since A is a consistent atlas for M , it follows from Theorem 12.7.2 that whenever two charts in A overlap, the corresponding coordinate unit normal vector fields agree on the overlap. We can therefore assign to each point p in M a vector Np that is unit normal at p, and do so in such a way that Np is independent of the choice of chart at p selected from A. More specifically, given the chart (U, ϕ) in A and the corresponding coordinate unit normal vector field G, we define Np = Gϕ−1 (p) for all p in ϕ(U ). Since G is smooth on U , it follows

290

12 Curves and Regular Surfaces in R3ν

from Theorem 10.1.17 and Theorem 11.2.8 that N is smooth on ϕ(U ). Because coordinate unit normal vector fields agree on overlaps, we can combine across charts to obtain a unit normal vector field N in XR3ν (M ). (b): This follows from Theorem 12.2.2(b) and Theorem 12.2.4. Theorem 12.7.4. Every regular surface M has an atlas B such that for each chart (U, ϕ) in B, ϕ(U ) is a connected set in M . Proof. Let A be an atlas for M , let p be a point in M , and let (W, ψ) be chart in A at p. By definition, W is an open set in R2 . Let U be an open connected set in W containing ψ −1 (p), for example, an open disk of sufficiently small radius centered at ψ −1 (p). According to [C3] of Section 11.2, ψ is a homeomorphism. It follows from Theorem 9.1.18 that ψ(U ) is a connected set in M . Let ϕ = ψ|U . By Theorem 9.1.5 and Theorem 11.2.10, (U, ϕ) is a chart at p with the desired property. Since p was arbitrary, the result follows. Theorem 12.7.5. If M is a regular surface that is connected as a topological space, then XR3ν (M ) contains either no unit vector fields or precisely two unit vector fields. In the latter case, if V is one of them, then −V is the other. Proof. Let V and W be unit normal vector fields in XR3ν (M ), and let p be a point in M . Since Vp and Wp are both unit normal vectors at p, they differ by at most a sign. Define a function φ : M −→ {1, −1} by Wp = φ(p)Vp , and define nowhere-vanishing functions f, g in C ∞ (M ) by f (p) = hVp , Wp i and g(p) = hVp , Vp i for all p in M . Then f (p) = φ(p)g(p) for all p in M , hence f = φg. Since f and g are smooth, by Theorem 10.1.1, they are continuous. Thus, φ = f /g is a nowhere-vanishing continuous function. It follows from Theorem 9.1.20 that either φ = 1 or φ = −1, so W equals V or −V . Theorem 12.7.6. Let (M, g) be a regular surface in R3ν . Then M is orientable if and only if there is a unit normal vector field in XR3ν (M ). Proof. (⇒): This follows from Theorem 12.7.3(a). (⇐): We construct a consistent atlas A for M as follows. Let V be a unit normal vector field in XR3ν (M ), and let B be an atlas for M of the type given by Theorem 12.7.4. Let (U, ϕ) be a chart in B, and let G be the corresponding coordinate unit normal vector field. Using Example 11.2.5, we view ϕ(U ) as a −1 regular surface. It follows from Theorem  12.2.11(c) that G ◦ ϕ and V |ϕ(U ) are unit normal vector fields in XR3ν ϕ(U ) . Since ϕ(U ) is a connected set in M , we have from Theorem 12.7.5 that either G ◦ ϕ−1 = V |ϕ(U ) or −G ◦ ϕ−1 = V |ϕ(U ) . In the former  case, we include (U, ϕ) in A, and in the latter case, we include ι(U ), ϕ ◦ ι , where the map ι is defined above and we note that −G ◦ ϕ−1 is the corresponding coordinate unit normal vector field. Thus, for each chart in A, the corresponding coordinate unit normal vector field is a restriction of V to the coordinate domain of the chart. The result now follows from Theorem 12.7.2. Example 12.7.7 (S 2 ). It follows from parts (a)–(c) of Theorem 12.2.12 and Theorem 12.7.6 that S 2 is an orientable regular surface in R30 . ♦

12.7 Orientation in R3ν

291

Reviewing the proof of Theorem 12.2.12, we see that the preceding example rests on the gradient in question satisfying property [V2] of Section 12.2. More generally, we have the following extension of Theorem 12.2.10(b). Theorem 12.7.8 (Level Set of Function). Let U be an open set in R3ν , let f be a function in C ∞ (U), let c be a real number in f (U), and let M = f −1 (c). If Grad(f ) satisfies property [V2] of Section 12.2, then (M, g) is an orientable regular surface in R3ν . Proof. This follows from Theorem 12.2.10(b) and Theorem 12.7.6. Example 12.7.9 (M¨ obius Band). The M¨obius band is the subset of R3 defined by M¨ ob = {ϕ(t, φ) ∈ R3 : (t, φ) ∈ U }, where ϕ(t, φ) = [1 − t sin(φ/2)] cos(φ), [1 − t sin(φ/2)] sin(φ), t cos(φ/2)



and U = (−1/2, 1/2) × (−π, π). A model of M¨ ob can be made by giving a strip of paper a half twist and then pasting the ends together. See Figure 12.7.1. It can be shown that M¨ob is a regular surface in R30 , something that is intuitively clear from the diagram. However, XR30 (M¨ ob) does not contain a unit normal vector field. This can be seen by choosing a starting point on the curve that bisects M¨ob longitudinally (dotted line in Figure 12.7.1), and then sliding a given unit normal vector (viewed as an arrow) along the curve on its base while keeping the shaft normal to the surface. After making a complete circuit, the unit normal vector is at the starting point, but now projects from the “opposite” side of the band. This shows that any unit normal vector field along M¨ob is not continuous, let alone smooth. However, completing a second circuit produces the original vector. In a manner of speaking, M¨ ob has only one “side”. ♦

Figure 12.7.1. M¨ obius band: Diagram for Example 12.7.9

12 Curves and Regular Surfaces in R3ν

292

12.8

Gauss Curvature in R3ν

In this section, we describe a way of measuring the “curvature” of a regular surface in R3ν . As part of the discussion of hyperquadrics Q2 in R3ν in Section 12.2, we observed that S 2 is the set of (spacelike) unit vectors in R30 , P 2 is the set of spacelike unit vectors in R31 , and H2 is the set of timelike unit vectors in R31 . In fact, more than just being sets, according to Theorem 12.2.12(a), S 2 is a regular surface in R30 , and P 2 and H2 are regular surfaces in R31 . Let (M, g, A, O) be an oriented regular surface in R3ν , let N : M −→ R3ν be the Gauss map, and let p be a point in M . Since Np is a unit normal vector at p, it follows from (12.2.3) that q(Np ) = hNp , Np i = M , where q is the quadratic function corresponding to g. Thus, Np is in the same hyperquadric for all p in M . Denoting the hyperquadric by Q2 , we can now say that Np is in Q2 for all p in M . Thus, N can be expressed more precisely as N : M −→ Q2 . The situation for S 2 is depicted in Figure 12.8.1, where Ni stands for Npi for i = 1, 2, 3.

N

S2

M N1

N2

p2 N1

p1

N2

N3

N3

p3

Figure 12.8.1. Gauss map The differential of N at p is dp (N ) : Tp (M ) −→ TNp (Q2 ). By definition, Np is in Tp (M )⊥ . On the other hand, since Np is in Q2 , we have from Theorem 12.2.12(c) that Np is also in TNp (Q2 )⊥ . Since Tp (M )⊥ and TNp (Q2 )⊥ are both 1-dimensional, it follows that Tp (M )⊥ = TNp (Q2 )⊥ , and then from Theorem 4.1.2(c) that Tp (M ) = TNp (Q2 ). We can therefore express the differential of N at p as dp (N ) : Tp (M ) −→ Tp (M ).

12.8 Gauss Curvature in R3ν

293

Thus, dp (N ) is a linear map from Tp (M ) to itself. For each point p in M , the Weingarten map at p is denoted by Wp : Tp (M ) −→ Tp (M ) and defined by Wp = −dp (N ).

(12.8.1)

For all vectors v in Tp (M ), we have from (11.6.1) that Wp (v) = −

d(N ◦ λ) (t0 ), dt

(12.8.2)

where λ(t) : (a, b) −→ M is any smooth curve such that λ(t0 ) = p and (dλ/dt)(t0 ) = v for some t0 in (a, b). The Weingarten map is the linear map W : X(M ) −→ X(M ) defined by W(X)p = Wp (Xp )

(12.8.3)

for all vector fields X in X(M ) and all p in M . Let (U, ϕ) be a chart in A, let H = (H1 , H2 ) be the corresponding coordinate frame, and let q be a point in U . The vector Wp (Hj |q ) can be expressed as Wp (Hj |q ) =

X i

wij (q)Hi |q ,

(12.8.4)

where the wij are uniquely determined functions in C ∞ (U ). We then have from (2.2.2) and (2.2.3) that  1   Hq w1 (q) w12 (q) Wp H = . (12.8.5) q w21 (q) w22 (q) Theorem 12.8.1. If (M, g, A, O) is an oriented regular surface in R3ν and X is a vector field in X(M ), then W(X) = −DX (N ). Proof. Let p be a point in M , and let λ(t) : (a, b) −→ M be a smooth curve such that λ(t0 ) = p and (dλ/dt)(t0 ) = Xp for some t0 in (a, b). Then d(N ◦ λ) (t0 ) dt = Wp (Xp )

−DX (N )p = −

= W(X)p .

Since p was arbitrary, the result follows.

[(12.3.4)] [(12.8.2)] [(12.8.3)]

12 Curves and Regular Surfaces in R3ν

294

Let (M, g, A, O) be an oriented regular surface in R3ν , and let p be a point in M . Since gp is bilinear and Wp is linear, we have the tensor hp in T 02 Tp (M ) defined by hp (v, w) = hWp (v), wi (12.8.6) for all vectors v, w in Tp (M ). The second fundamental form on M is the map denoted by h and defined by the assignment p 7−→ hp for all p in M . In the literature, h is often denoted by II. For vector fields X, Y in X(M ), we define a function h(X, Y ) = hW(X), Y i : M −→ R in C ∞ (M ) by the assignment p 7−→ hp (Xp , Yp ) = hW(X)p , Yp i

(12.8.7)

for all p in M . Let (U, ϕ) be a chart in A, and let H = (H1 , H2 ) be the corresponding coordinate frame. We define functions hij in C ∞ (U ) by hij (q) = hp (Hi |q , Hj |q )

(12.8.8)

for all q in U for i, j = 1, 2, where p = ϕ(q). The matrix of h with respect to H is denoted by hH and defined by   h (q) h12 (q) hH (q) = 11 h21 (q) h22 (q) for all q in U . Theorem 12.8.2. Let (M, g, A, O) be an oriented regular surface in R3ν , let N be the Gauss map, and let p be a point in M . Let (U, ϕ) be a chart in A at p, let (H1 , H2 ) be the corresponding coordinate frame, and let q = ϕ−1 (p). Then: (a)   ∂Hi hij (q) = , N = M ϑij (q), p ∂rj q where ϑij is given by (12.3.6). (b) Wp is self-adjoint with respect to gp ; that is, hWp (v), wi = hv, Wp (w)i for all vectors v, w in Tp (M ). (c) hp is a symmetric bilinear function. Proof. (a): Let (e1 , e2 ) be the standard basis for R2 . For given 1 ≤ i ≤ 2, we define a smooth map ζ = (ζ 1 , ζ 2 ) : (−ε, ε) −→ U

12.8 Gauss Curvature in R3ν

295

by ζ(t) = q + tei , where ε > 0 is chosen small enough that (q − εei , q + εei ) ⊂ U . Consider the smooth curve λ(t) : (−ε, ε) −→ M defined by λ = ϕ ◦ ζ, and observe that λ(0) = ϕ(q) = p. By Theorem 10.1.10, X dζ j dλ d(ϕ ◦ ζ) (0) = = (0)Hj |q = Hi |q . dt dt dt 0 j It also follows from Theorem 10.1.10 that X dζ j d(N ◦ λ) d(N ◦ ϕ ◦ ζ) ∂(N ◦ ϕ) (0) = = dt dt dt ∂rj q 0 0 j ∂(N ◦ ϕ) = , ∂ri q

(12.8.9)

and then from (12.8.2) and (12.8.9) that d(N ◦ λ) ∂(N ◦ ϕ) Wp (Hi |q ) = − = − ∂ri . dt 0 q

(12.8.10)

Taking the partial derivative with respect to ri of both sides of hNp , Hj |q i = 0 yields     ∂(N ◦ ϕ) ∂Hj 0= , H | + N , . j q p ∂ri q ∂ri q We have from Theorem 10.1.6 and (11.2.1) that ∂Hi ∂Hj = , ∂rj ∂ri hence

Then



   ∂(N ◦ ϕ) ∂Hi − , Hj |q = Np , j . ∂ri q ∂r q

hij (q) = hWp (Hi |q ), Hj |q i   ∂(N ◦ ϕ) = − , Hj |q ∂ri q   ∂Hi = , N p , ∂rj

[(12.8.6), (12.8.8)] [(12.8.10)] [(12.8.11)]

q

which proves the first equality. We also have     ∂Hi ∂Hi , Np = , Gq ∂rj q ∂rj q = M ϑij (q), which proves the second equality.

[Th 12.7.3(a)] [Th 12.3.3(c)]

(12.8.11)

12 Curves and Regular Surfaces in R3ν

296 (b): We have

hWp (Hi |q ), Hj |q i = hp (Hi |q , Hj |q )

[(12.8.6)]

= hij (q)

[(12.8.8)]

= M ϑij (q)

[part (a)]

= M ϑji (q)

[Th 12.3.3(d)]

= hji (q)

[part (a)]

= hp (Hj |q , Hi |q )

[(12.8.8)]

= hWp (Hj |q ), Hi |q i

[(12.8.6)]

= hHi |q , Wp (Hj |q )i. The result now follows from the observations that (H1 |q , H2 |q ) is a basis for Tp (M ), gp is bilinear, and Wp is linear. (c): It was established in connection with (12.8.6) that hp is bilinear. For vectors v, w in Tp (M ), we have hp (v, w) = hWp (v), wi

[(12.8.6)]

= hv, Wp (w)i

[part (b)]

= hp (w, v).

[(12.8.6)]

= hWp (w), vi

Since v and w were arbitrary, the result follows. Theorem 12.8.3. Let (M, g, A, O) be an oriented regular surface in R3ν , and let p be a point in M . Let (U, ϕ) be a chart in A at p, let H be the corresponding coordinate frame, and let q = ϕ−1 (p). Then: (a)      h11 (q) h12 (q) g (q) g12 (q) w11 (q) w12 (q) = 11 . h21 (q) h22 (q) g21 (q) g22 (q) w21 (q) w22 (q) (b) 

Wp

Hq Hq

 11   g (q) g12 (q) h11 (q) h12 (q) = 21 . g (q) g22 (q) h21 (q) h22 (q)

Proof. (a): Let H = (H1 , H2 ). We have hij (q) = hWp (Hi |q ), Hj |q i

= hHi |q , Wp (Hj |q )i   X = Hi | q , wkj (q)Hk |q k

=

X

gik (q) wkj (q),

k

from which the result follows. (b): This follows from (12.8.5) and part (a).

[(12.8.6), (12.8.8)] [Th 12.8.2(b)] [(12.8.4)]

12.8 Gauss Curvature in R3ν

297

Let (M, g, A, O) be an oriented regular surface in R3ν . The Gauss curvature is the smooth function K : M −→ R defined by K(p) = M det Wp



(12.8.12)

for all p in M . An intuitively appealing justification for this definition is provided below. For the moment, we simply observe that from (12.8.1), Wp is defined in terms of dp (N ), which is related to the “rate of change” of the unit normal vector field N at p. In geometric terms, the greater the rate of change of N , the greater the “curvature” we expect M to have at p. It follows from Theorem 4.7.4 and Theorem 12.8.2(b) that Wp has two (not necessarily distinct) real eigenvalues, which we denote by κ1 (p) and κ2 (p). Theorem 12.8.4. Let (M, g, A, O) be an oriented regular surface in R3ν , and let p be a point in M . Let (U, ϕ) be a chart in A at p, let H be the corresponding coordinate frame, and let q = ϕ−1 (p). Then  det hH (q)  = M κ1 (p) κ2 (p). K(p) = M det gH (q) Proof. By Theorem 4.7.5,  det Wp = κ1 (p) κ2 (p), and by Theorem 12.8.3(b),   det hH (q) . det Wp = det gH (q) The result now follows from (12.8.12). The next result uses material on “local diffeomorphisms” from Section 14.6 and “area” from Section 19.10. It is included here because it provides a rationale for the definition of Gauss curvature when ν = 0. Theorem 12.8.5. Let (M, g, A, O) be an oriented regular surface in R30 , let N be the Gauss map, and let p be a point in M . Let (U, ϕ) be a chart in A at p, let q = ϕ−1 (p), and let ε > 0 be a real number small enough that the open disk Bε = Bε (q) is contained in U . If K(p) 6= 0, then  K(p) = lim area N ◦ ϕ(Bε ) . ε→0 area ϕ(Bε ) Proof. By Theorem 11.2.10, (Bε , ϕ|Bε ) is a chart on M . Let (H1 , H2 ) be the coordinate frame corresponding  to (U, ϕ). Since K(p) 6= 0, we have from (12.8.1) and (12.8.12) that det dp (N ) 6= 0, and then from Theorem 2.5.3 that dp (N ) is a linear isomorphism. Thus, N is an immersion at p. Since M and S 2 are

12 Curves and Regular Surfaces in R3ν

298

2-dimensional, by Theorem 14.6.2, N is a local diffeomorphism at p. Taking ε to be smaller if necessary, it follows that (Bε , N ◦ ϕ|Bε ) is a chart on S 2 . See Figure 12.8.2. We have from Example 19.10.3 that ZZ  area ϕ(Bε ) = kH1 × H2 k dr1 dr2 Bε

and  area N ◦ ϕ(Bε ) =

ZZ



∂(N ◦ ϕ) ∂(N ◦ ϕ) 1 2

dr dr ×

∂r1 ∂r2 Bε

ZZ

kW(H1 ) × W(H2 )k dr1 dr2

[(12.8.10)]



[Th 8.4.7]



|det(W)| kH1 × H2 k dr1 dr2 .

= ZZ =

It can be shown using Theorem 10.5.6 that RR  |det(W)| kH1 × H2 k dr1 dr2 area N ◦ ϕ(Bε )  = lim Bε RR lim ε→0 ε→0 kH1 × H2 k dr1 dr2 area ϕ(Bε ) B  ε = det Wp = |K(p)|.

ϕ(Bε) p

M

N

N ◦ ϕ(Bε)

S2

Figure 12.8.2. Diagram for Theorem 12.8.5 Figure 12.8.2 provides the geometric intuition for Theorem 12.8.5. Since M as depicted is highly curved at p, the area of N ◦ ϕ(Bε ) is correspondingly greater than the area of ϕ(Bε ), leading to a larger value of |K(p)|. Example 12.8.6. The tables below present a summary of the Gauss curvatures of the regular surfaces in R30 and R31 detailed in Chapter 13. The notation is explained in the relevant sections of that chapter. A few observations are in order. In R30 , the plane, cylinder, and cone all have a constant Gauss curvature of 0. This is not surprising for the plane, but is perhaps counterintuitive for the

12.9 Riemann Curvature Tensor in R3ν

299

cylinder and cone. The explanation is that the cylinder and cone can be obtained from (portions of) the plane by smooth deformations that involve bending but not stretching. This keeps the “intrinsic” geometry of the deformed plane intact, thereby preserving the Gauss curvature at each point. The sphere has constant positive Gauss curvature, while the tractoid, which is shaped like a bugle, has constant negative Gauss curvature. The Gauss curvature of the hyperboloid of one sheet (two sheets) is negative (positive) but nonconstant. The torus has a region where the Gauss curvature is positive, and one where it is negative, with a transition zone in between where the Gauss curvature is 0. Geometric object in R30

Section

Gauss curvature

13.1

plane

0

13.2

cylinder

0

13.2

cone

0

13.4

sphere

1/R2

13.5

tractoid

13.6

hyperboloid of one sheet

13.7

hyperboloid of two sheets

13.8

torus

−1

−1/(2x2 + 2y 2 − 1)2 1/(2x2 + 2y 2 + 1)2

cos(φ)/[cos(φ) + R]

In R31 , the pseudosphere has constant positive Gauss curvature, while hyperbolic space has constant negative Gauss curvature. It is interesting to observe that the hyperboloid of one sheet and the pseudosphere are defined in terms of the same underlying surface. The difference in their Gauss curvatures is due entirely to the fact that one resides in the inner product space R30 , and the other in the Lorentz vector space R31 . A similar remark applies to the hyperboloid of two sheets and hyperbolic space. Section

Geometric object in R31

13.9

pseudosphere

13.10

hyperbolic space

Gauss curvature 1 −1 ♦

Riemann Curvature Tensor in R3ν

12.9

The Riemann curvature tensor for a regular surface (M, g) in R3ν is the map R : X(M )3 −→ X(M ) defined by   R(X, Y )Z = ∇X ∇Y (Z) − ∇Y ∇X (Z) − ∇[X,Y ] (Z)

(12.9.1)

12 Curves and Regular Surfaces in R3ν

300

for all vector fields X, Y, Z in X(M ); that is,     R(X, Y )Z p = ∇X (∇Y (Z)) p − ∇Y (∇X (Z)) p − ∇[X,Y ] (Z) p for all p in M . The large parentheses are included to make it clear that each of the four terms in the preceding identity is a vector field in X(M ) evaluated at the point p, and as such is a vector in Tp (M ). Since (R(X, Y )Z)p is not a real number, using the term “tensor” to describe R is something of a misnomer. This conflict is resolved in Theorem 19.5.5. The expression Rp (Xp , Yp )Zp has no meaning—at least not yet. We presented an instance in Example 12.4.4 where the second order covariant derivatives ∇2X,Y (Z) and ∇2Y,X (Z) are not equal. As the next result shows, the difference between these two vector fields is precisely R(X, Y )Z. Theorem 12.9.1. If (M, g) is a regular surface in R3ν , then R(X, Y )Z = ∇2X,Y (Z) − ∇2Y,X (Z) for all vector fields X, Y, Z in X(M ). Proof. We have from Theorem 12.4.1(a) and (12.6.1) that ∇[X,Y ] (Z) = ∇∇X (Y )−∇Y (X) (Z) = ∇∇X (Y ) (Z) − ∇∇Y (X) (Z). Then   R(X, Y )Z = ∇X ∇Y (Z) − ∇Y ∇X (Z) − ∇[X,Y ] (Z)       = ∇X ∇Y (Z) − ∇∇X (Y ) (Z) − ∇Y ∇X (Z) − ∇∇Y (X) (Z) = ∇2X,Y (Z) − ∇2Y,X (Z),

where the last equality follows from (12.4.4). For computational purposes, it is helpful to have a local coordinate expression for R. Theorem 12.9.2. Let (M, g) be a regular surface in R3ν , let X, Y, Z be vector fields in X(M ), and, in local coordinates, let X X X X ◦ϕ= α i Hi , Y ◦ϕ= β j Hj , and Z ◦ϕ= γ k Hk . i

j

Then R(X, Y )Z ◦ ϕ = where l Rijk =

for i, j, k, l = 1, 2.

k

XX l

 l αi β j γ k Rijk Hl ,

(12.9.2)

ijk

∂Γljk ∂Γlik X l n − + (Γin Γjk − Γljn Γnik ) i ∂r ∂rj n

(12.9.3)

12.9 Riemann Curvature Tensor in R3ν

301

Proof. We have from (12.4.20) that (3)

(4)

(5)

ijk

ijk

k X X l X i j ∂ 2 γ l ∂Γljk i j ∂γ l i j k ∇2X,Y (Z) ◦ ϕ = αβ + α β Γ + α β γ ∂ri ∂rj ∂ri jk ∂ri ij

+

(6) X

(7)

αi β j

ijk

ijkn

(8)



X ∂γ k l Γik + αi β j γ k Γnjk Γlin j ∂r (9)

X

αi β j

ijk

X ∂γ l k Γij − αi β j γ k Γnij Γlkn . k ∂r ijkn

Reversing the roles of X and Y yields k X X l X j i ∂ 2 γ l ∂Γljk j i ∂γ l j i k ∇2Y,X (Z) ◦ ϕ = α β + α β Γ + α β γ ∂ri ∂rj ∂ri jk ∂ri ij ijk

+

X



X

k

αj β i

ijk

ijk

(3)

=

X ij

αj β i γ k Γnjk Γlin

ijkn

ijkn

(6)

(10)

ijk

ijk

k l X X ∂2γl i j ∂γ l i j k ∂Γik αβ + α β Γ + α β γ ik ∂ri ∂rj ∂rj ∂rj i j

X

αi β j

X ∂γ l Γjk + αi β j γ k Γnik Γljn i ∂r ijkn

(8)

(9)

X ijk

(11)

k

ijk



ijk

X ∂γ l αj β i k Γkij − αj β i γ k Γnij Γlkn ∂r

(4)

+

∂γ l Γ + ∂rj ik

X

X ∂γ l k αβ Γ − αi β j γ k Γnij Γlkn , ij ∂rk i j

ijkn

where the numbering of sums that was initiated in the proof of Theorem 12.4.3 is continued. It follows from Theorem 12.9.1 that the lth component of R(X, Y )Z is  2  l ∇X,Y (Z) − ∇2Y,X (Z) ◦ ϕ =

(5) X

(7)

αi β j γ k

ijk

ijkn

(10)



X ijk

=

X

=

X

ijk

ijk

∂Γljk X i j k n l + α β γ Γjk Γin ∂ri

∂Γl αi β j γ k ik ∂rj

αi β j γ k





(11) X

αi β j γ k Γnik Γljn

ijkn

∂Γljk ∂Γlik X l n − + (Γin Γjk − Γljn Γnik ) i ∂r ∂rj n

l αi β j γ k Rijk .



12 Curves and Regular Surfaces in R3ν

302

We observed in Section 12.3 that the Christoffel symbols are intrinsic. It follows from (12.9.2) and (12.9.3) that the same is true of the Riemann curvature tensor. Example 12.9.3 (S 2 ). Substituting into (12.9.3) the values of the Christoffel symbols for S 2 given in Section 13.4 yields 1 R122 = sin2 (θ)

1 R212 = −sin2 (θ)

2 R121 = −1

2 R211 = 1,

l l with the remaining Rijk equal to 0. Observe that the Rijk are independent of φ, as would be expected due to the symmetry of the sphere. ♦

It is a remarkable feature of (12.9.2) that no partial derivatives of the component functions appear in the expression. This crucial observation underlies the next two results. Theorem 12.9.4. If (M, g) is a regular surface in R3ν , then R is determined pointwise on M in the following sense: for all points p in M , if X, Y, Z and e Ye , Ze are vector fields in X(M ) such that X,  ep , Yep , Z ep , (Xp , Yp , Zp ) = X then R(X, Y )Z Proof. In local coordinates, let X X ◦ϕ= αi Hi i

 p

 e Ye )Ze . = R(X, p

and

e ◦ϕ= X

X

α ei Hi .

i

ep translates into αi (p) = α Then Xp = X ei (p) for i = 1, 2, with corresponding identities for the other two vector fields. The result now follows from (12.9.2). Theorem 12.9.5. If (M, g) is a regular surface in R3ν , then R is C ∞ (M )multilinear in the following sense: for all vectors fields X, Y, Z, W in X(M ) and all functions f in C ∞ (M ), R(f X + W, Y )Z = f R(X, Y )Z + R(W, Y )Z, with corresponding identities for the other two arguments of R. Proof. This follows from reasoning similar to that used in the proof of Theorem 12.9.4. Theorem 12.9.6. Let (M, g, O) be an oriented regular surface in R3ν , let N be the Gauss map, and let X, Y be vector fields in X(M ). Then: (a) hDX (Y ), N i = hW(X), Y i.

12.9 Riemann Curvature Tensor in R3ν

303

(b) DX (Y ) = ∇X (Y ) + M hW(X), Y iN .

(12.9.4)

Proof. (a): Applying DX to both sides of hY, N i = 0 and using Theorem 12.3.1(e) yields hDX (Y ), N i = h−DX (N ), Y i. The result now follows from Theorem 12.8.1. (b): Let p be a point in M , and recall the definitions of tanp and norp given in Section 12.4. By Theorem 4.1.4,   DX (Y )p = tanp DX (Y )p + norp DX (Y )p , and by (12.4.2),  tanp DX (Y )p = ∇X (Y )p .

It follows from Theorem 12.7.3(b) that Tp (M )⊥ = RNp , and then from Theorem 4.1.5 and (12.2.3) that  hNp , DX (Y )p i norp DX (Y )p = Np = M hDX (Y )p , Np iNp . hNp , Np i Thus, DX (Y )p = ∇X (Y )p + M hDX (Y )p , Np iNp . Since p was arbitrary, DX (Y ) = ∇X (Y ) + M hDX (Y ), N iN . The result now follows from part (a). Theorem 12.9.7. If (M, g, O) is an oriented regular surface in R3ν and X, Y, Z are vector fields in X(M ), then: (a) R(X, Y )Z = M [h(Y, Z) W(X) − h(X, Z) W(Y )]. (b)   ∇X W(Y ) − ∇Y W(X) = W([X, Y ]). Proof. By Theorem 12.9.6(b), 

 DX DY (Z) = DX ∇Y (Z) + M W(Y ), Z N 

 = DX ∇Y (Z) + M DX W(Y ), Z N

(12.9.5)

and  

DX ∇Y (Z) = ∇X ∇Y (Z) + M W(X), ∇Y (Z) N , and by parts (d) and (e) of Theorem 12.3.1 and Theorem 12.8.1,





DX W(Y ), Z N = DX W(Y ), Z N + W(Y ), Z DX (N ) 



 = DX W(Y ) , Z + W(Y ), DX (Z) N

− W(Y ), Z W(X).

(12.9.6)

(12.9.7)

12 Curves and Regular Surfaces in R3ν

304

Substituting (12.9.6) and (12.9.7) into (12.9.5) gives  

DX DY (Z) = ∇X ∇Y (Z) − M W(Y ), Z W(X) 



+ M DX W(Y ) , Z + W(Y ), DX (Z)

 + W(X), ∇Y (Z) N . Also by Theorem 12.9.6(b),



 DX W(Y ) , Z = ∇X W(Y ) + M hW(X), W(Y )iN , Z





= ∇X W(Y ) , Z + M W(X), W(Y ) N , Z

 = ∇X W(Y ) , Z and



W(Y ), DX (Z) = W(Y ), ∇X (Z) + M hW(X), ZiN





= W(Y ), ∇X (Z) + M W(X), Z W(Y ), N

= W(Y ), ∇X (Z) .

Substituting (12.9.9) and (12.9.10) into (12.9.8) yields  

DX DY (Z) = ∇X ∇Y (Z) − M W(Y ), Z W(X) 



+ M ∇X W(Y ) , Z + W(Y ), ∇X (Z)

 + W(X), ∇Y (Z) N . Interchanging the roles of X and Y in (12.9.11) gives  

DY DX (Z) = ∇Y ∇X (Z) − M W(X), Z W(Y ) 



+ M ∇Y W(X) , Z + W(X), ∇Y (Z)

 + W(Y ), ∇X (Z) N .

(12.9.8)

(12.9.9)

(12.9.10)

(12.9.11)

(12.9.12)

Yet again by Theorem 12.9.6(b),

D[X,Y ] (Z) = ∇[X,Y ] (Z) + M W([X, Y ]), Z N .

(12.9.13)

We note that in the present context, [X, Y ] denotes ∇X (Y ) − ∇Y (X), not DX (Y ) − DY (X). But according to Theorem 12.6.2, the latter two vector fields are equal. By Theorem 10.1.15, there is an open set U in R3 containing M , and e Ye , Ze in X(U ) that restrict to X, Y, Z on M , respectively. By vector fields X, Theorem 10.4.3,   ee D e e (Z) e −D ee D e e (Z) e −D e e e e e (Z) e = (0, 0, 0) D X Y Y X D f(Y )−D e (X) X

Y

e denotes the Euclidean derivative (Section 10.3). It follows that on U , where D   (0, 0, 0) = DX DY (Z) − DY DX (Z) − DDX (Y )−DY (X) (Z)   (12.9.14) = DX DY (Z) − DY DX (Z) − D[X,Y ] (Z)

12.9 Riemann Curvature Tensor in R3ν

305

on M . Substituting (12.9.11)–(12.9.13) into (12.9.14) and using (12.9.1) gives 



 R(X, Y )Z − M W(Y ), Z W(X) − W(X), Z W(Y )

  + M ∇X W(Y ) − ∇Y W(X) − W([X, Y ]), Z N = (0, 0, 0). It follows that both the tangential and normal components of the left-hand side of the preceding expression equal (0, 0, 0), hence 



 R(X, Y )Z = M W(Y ), Z W(X) − W(X), Z W(Y ) (12.9.15) and

  ∇X W(Y ) − ∇Y W(X) − W([X, Y ]), Z = 0.

(12.9.16)

From (12.8.7) and (12.9.15), we obtain R(X, Y )Z = M [h(Y, Z) W(X) − h(X, Z) W(Y )], which proves part (a). Since g is nondegenerate and (12.9.16) holds for all vector fields Z in X(M ), we have   ∇X W(Y ) − ∇Y W(X) − W([X, Y ]) = (0, 0, 0), which proves part (b). Let (M, g) be a regular surface in R3ν , and define a map R : X(M )4 −→ C ∞ (M ), also called the Riemann curvature tensor, by R(X, Y, Z, W ) = hR(X, Y )Z, W i for all vector fields X, Y, Z, W in X(M ); that is,

 R(X, Y, Z, W )(p) = R(X, Y )Z p , Wp

(12.9.17)

for all p in M . By definition, R(X, Y, Z, W ) is a function in C ∞ (M ). Since R(X, Y, Z, W )(p) is a real number, calling R a “tensor” is perhaps justified. We return to this issue below. Theorem 12.9.8. Let (M, g) be a regular surface in R3ν , let X, Y, Z, W be vector fields in X(M ), and, in local coordinates, let X X X ◦ϕ= α i Hi Y ◦ϕ= β j Hj i

Z ◦ϕ=

X

j

k

γ Hk

k

W ◦ϕ=

X

δ l Hl .

l

Then R(X, Y, Z, W ) ◦ ϕ =

X ijkln

n αi β j γ k δ l gln Rijk .

(12.9.18)

12 Curves and Regular Surfaces in R3ν

306 Proof. We have from (12.9.2) that

R(X, Y, Z, W ) ◦ ϕ = hR(X, Y )Z ◦ ϕ, W ◦ ϕi XX   X n = αi β j γ k Rijk Hn , δ l Hl n

=

X

ijk

l

n αi β j γ k δ l gln Rijk .

ijkln

We noted in conjunction with (12.9.2) and (12.9.3) that the Riemann curvature tensor R is intrinsic. In view of (12.9.18), the same can be said of the Riemann curvature tensor R. Just as was the case for (12.9.2), there are no partial derivatives of the component functions in (12.9.18). This observation underlies the next two results, which are counterparts of Theorem 12.9.4 and Theorem 12.9.5, and are proved similarly. Theorem 12.9.9. If (M, g) is a regular surface in R3ν , then R is determined pointwise in the following sense: for all points p in M , if X, Y, Z, W and e Ye , Z, e W f are vector fields in X(M ) such that X, ep , Yep , Z ep , W fp ), (Xp , Yp , Zp , Wp ) = (X then e Ye , Z, e W f )(p). R(X, Y, Z, W )(p) = R(X, Theorem 12.9.10. If (M, g) is a regular surface in R3ν , then R is C ∞ (M )multilinear in the following sense: for all vector fields X, Y, Z, V, W in X(M ) and all functions f in C ∞ (M ), R(f X + W, Y, Z, V ) = f R(X, Y, Z, V ) + R(W, Y, Z, V ), with corresponding identities for the other three arguments of R. Let (M, g) be a regular surface in R3ν , let p be a point in M , and let v be a vector in Tp (M ). According to Theorem 15.1.2, there is a vector field X in X(M ) such that Xp = v. Taken in conjunction with Theorem 12.9.4, Theorem 12.9.5, Theorem 12.9.9, and Theorem 12.9.10, this allows us to give R and R interesting interpretations. We define a map Rp : Tp (M )3 −→ Tp (M ) by  Rp (v1 , v2 )v3 = R(X1 , X2 )X3 p , and a map Rp : Tp (M )4 −→ R

(12.9.19)

12.9 Riemann Curvature Tensor in R3ν

307

by Rp (v1 , v2 , v3 , v4 ) = R(X1 , X2 , X3 , X4 )(p) for all vectors v1 , v2 , v3 , v4 in Tp (M ), where X1 , X2 , X3 , X4 are any vector fields in X(M ) such that (X1 |p , X2 |p , X3 |p , X4 |p ) = (v1 , v2 , v3 , v4 ). By Theorem 12.9.4 and Theorem 12.9.9, respectively, Rp and Rp are independent of the choice of vector fields, so the definitions makes sense. It follows from (12.9.17) and the above identities that Rp (v1 , v2 , v3 , v4 ) = R(X1 , X2 , X3 , X4 )(p)

 = R(X1 , X2 )X3 p , X4 |p = hRp (v1 , v2 )v3 , v4 i.  By Theorem 12.9.10, Rp is in T 04 Tp (M ) , and by Theorem 12.9.5, Rp is in Mult Tp (M )3 , Tp (M ) . This provides a justification for calling R a “tensor”, and to a lesser extenta rationale for doing the same with R. Another tensor of interest in T 04 Tp (M ) is Dp , as defined by (6.6.7):  hv1 , v3 i Dp (v1 , v2 , v3 , v4 ) = det hv2 , v3 i

 hv1 , v4 i . hv2 , v4 i

(12.9.20)

Theorem 12.9.11 (Gauss’s Equation). If (M, g, O) is an oriented regular surface in R3ν and X, Y, Z, W are vector fields in X(M ), then   h(X, Z) h(X, W ) R(X, Y, Z, W ) = −M det . h(Y, Z) h(Y, W ) Proof. It follows from (12.8.7) and Theorem 12.9.7(a) that R(X, Y, Z, W ) = M hh(Y, Z) W(X) − h(X, Z) W(Y ), W i

= M [h(Y, Z) hW(X), W i − h(X, Z) hW(Y ), W i] = M [h(Y, Z) h(X, W ) − h(X, Z) h(Y, W )]   h(X, Z) h(X, W ) = −M det . h(Y, Z) h(Y, W )

Theorem 12.9.12 (Symmetries of Riemann Curvature Tensor). If (M, g, O) is an oriented regular surface in R3ν and X, Y, Z, W are vector fields in X(M ), then R satisfies the following symmetries: [S1] R(X, Y, Z, W ) = −R(Y, X, Z, W ). [S2] R(X, Y, Z, W ) = −R(X, Y, W, Z). [S3] R(X, Y, Z, W ) = R(Z, W, X, Y ). Proof. This follows from Theorem 12.9.11.

12 Curves and Regular Surfaces in R3ν

308

Theorem 12.9.13 (First Bianchi Identity). If (M, g) is a regular surface in R3ν and X, Y, Z are vector fields in X(M ), then R(X, Y )Z + R(Y, Z)X + R(Z, X)Y = (0, 0, 0). Proof. In local coordinates, let X X X ◦ϕ= αi Hi , Y ◦ϕ= β j Hj , i

and

j

Z ◦ϕ=

X

γ k Hk .

k

It follows from (12.9.2) that  R(X, Y )Z + R(Y, Z)X + R(Z, X)Y ◦ ϕ  X X l l l = αi β j γ k (Rijk + Rjki + Rkij ) Hl , l

ijk

and from (12.9.3) that l Rijk =

∂Γljk ∂Γlik X l n − + (Γin Γjk − Γljn Γnik ) ∂ri ∂rj n

l Rjki =

∂Γlji X l n ∂Γlki − + (Γjn Γki − Γlkn Γnji ) ∂rj ∂rk n

l Rkij =

∂Γlkj X l n ∂Γlij − + (Γkn Γij − Γlin Γnkj ), ∂rk ∂ri n

l l l hence Rijk + Rjki + Rkij = 0 for i, j, k, l = 1, 2. The result follows.

The name traditionally given to the next result is “Theorema Egregium”, which is Latin for “remarkable theorem”. The rationale for this impressive title is given below. Theorem 12.9.14 (Theorema Egregium). Let (M, g, O) be an oriented regular surface in R3ν , and let p be a point in M . (a) If (h1 , h2 ) is a basis for Tp (M ), then K(p) = −

Rp (h1 , h2 , h2 , h1 ) . Dp (h1 , h2 , h2 , h1 )

(b) If (e1 , e2 ) is an orthonormal basis for Tp (M ), then K(p) = (−1)ind(g) Rp (e1 , e2 , e2 , e1 ). Proof. Let q = ϕ−1 (p) and (f1 , f2 ) = (H1 |q , H2 |q ). (a): It follows from Theorem 6.6.4 and Theorem 12.9.12 that Rp (h1 , h2 , h2 , h1 ) Rp (f1 , f2 , f2 , f1 ) = , Dp (h1 , h2 , h2 , h1 ) Dp (f1 , f2 , f2 , f1 )

12.9 Riemann Curvature Tensor in R3ν

309

and from (12.8.8) and Theorem 12.9.11 that   hp (f1 , f2 ) hp (f1 , f1 ) Rp (f1 , f2 , f2 , f1 ) = −M det hp (f2 , f2 ) hp (f2 , f1 )    h11 (q) h12 (q) = M det = M det hH (q) . h21 (q) h22 (q) We also have    g11 (q) g12 (q) Dp (f1 , f2 , f2 , f1 ) = −det = −det gH (q) . g21 (q) g22 (q) Combining the above identities yields  det hH (q) Rp (h1 , h2 , h2 , h1 ) . = −M Dp (h1 , h2 , h2 , h1 ) det gH (q) The result now follows from Theorem 12.8.4. (b): By (4.2.3) and (12.9.20), Dp (e1 , e2 , e2 , e1 ) = −he1 , e1 ihe2 , e2 i = (−1)ind(g)+1 . The result now follows from part (a). As remarked earlier, the Riemann curvature is intrinsic, whether we are dealing with R or R. The Gauss curvature is defined using the Gauss map, which in turn is defined using the second fundamental form. For this reason, it would appear that the Gauss curvature depends on factors that are “external”. However, part (b) of the Theorema Egregium shows that the Gauss curvature is in fact intrinsic, something that is unexpected and indeed “remarkable”. The next result makes the same point using local coordinates. Theorem 12.9.15. Let (M, g, A, O) be an oriented regular surface in R3ν , and let p be a point in M . Then: (a) Rp (v1 , v2 )v3 = K(p) [hv2 , v3 iv1 − hv1 , v3 iv2 ] for all vectors v1 , v2 , v3 in Tp (M ). (b) If (U, ϕ) is a chart in A at p and q = ϕ−1 (p), then, in local coordinates, l Rijk (p) = K(p) [gjk (q)δil − gik (q)δjl ]

for i, j, k, l = 1, 2.  Proof. (a): We recall from Section 6.6 that S Tp (M ) is the subspace of T 04 Tp (M ) consisting of tensors satisfying [S1] and [S2]. By Theorem 6.6.3(a), Dp is such a tensor, and by parts (a) and (b) of Theorem 12.9.12, so is Rp .

12 Curves and Regular Surfaces in R3ν

310

It follows from Theorem 6.6.3(b) that Rp = cDp for some real number c. Let (e1 , e2 ) be an orthonormal basis for Tp (M ). We have (−1)ind(g) K(p) = Rp (e1 , e2 , e2 , e1 )

[Th 12.9.14(b)]

= cDp (e1 , e2 , e2 , e1 ) = −che1 , e1 ihe2 , e2 i = c(−1)ind(g)+1 ,

[(12.9.20)] [(4.2.3)]

hence c = −K(p), so Rp = −K(p)Dp . Then (12.9.20) gives

hRp (v1 , v2 )v3 , v4 i = K(p) hv2 , v3 iv1 − hv1 , v3 iv2 , v4 for all vectors v1 , v2 , v3 , v4 in Tp (M ). Since g is nondegenerate, Rp (v1 , v2 )v3 = K(p) [hv2 , v3 iv1 − hv1 , v3 iv2 ] for all vectors v1 , v2 , v3 in Tp (M ). (b): Let (H1 , H2 ) be the coordinate frame corresponding to (U, ϕ). We have from part (a) that Rp (Hi |q , Hj |q )Hk |q = K(p) [hHj |q , Hk |q iHi |q − hHi |q , Hk |q iHj |q ] = K(p) [gjk (q)Hi |q − gik (q)Hj |q ].

On the other hand, (12.9.2) and (12.9.19) give X l Rp (Hi |q , Hj |q )Hk |q = Rijk (p)Hl |q . l

It follows that

   K(p) gjk (q) if l = i l Rijk (p) = −K(p) gik (q) if l = j   0 otherwise = K(p) [gjk (q)δil − gik (q)δjl ]

for i, j, k, l = 1, 2.

12.10

Computations for Regular Surfaces in R3ν

We showed in Theorem 11.4.2 and Theorem 11.4.3 that graphs of surfaces and surfaces of revolution are regular surfaces. In this section, we view them as regular surfaces in R3ν and develop specific formulas for computing the coordinate frame, Gauss map, first and second fundamental forms, Gauss curvature, and sign. For surfaces of revolution in R30 , formulas for the Christoffel symbols and eigenvalues are also provided. Theorem 12.10.1 (Graph of Function in R30 ). Let ν = 0 and continue with the setup of Theorem 11.4.2. Then:

12.10 Computations for Regular Surfaces in R3ν

311

(a) H1 =

  ∂f 1, 0, ∂x

 H2 =

0, 1,

 ∂f . ∂y

(b) N ◦ϕ= s

 1+

∂f ∂x

1 2

  ∂f ∂f − , − , 1 .  2 ∂x ∂y ∂f + ∂y

(c)

gH

  2 ∂f 1 + ∂x  =   ∂f ∂f ∂x ∂y

∂f ∂f ∂x ∂y 1+



  .  2  ∂f  ∂y

(d) ∂2f  ∂x2  1  =s  2  2  2  ∂ f ∂f ∂f 1+ + ∂x ∂y ∂x∂y

 ∂2f ∂x∂y   .  2 ∂ f  ∂y 2



hH

(e)  2 2 ∂2f ∂2f ∂ f − ∂x2 ∂y 2 ∂x∂y K◦ϕ=   2  2 2 . ∂f ∂f 1+ + ∂x ∂y (f) graph(f ) = 1. Proof. (a): Straightforward. (b): This follows from Theorem 12.2.11(c), Theorem 12.7.3(a), part (f), and the computations  e1 e2 H1 × H2 = det e3

1 0 ∂f ∂x

 0   ∂f ∂f 1  = − ,− ,1 ∂f  ∂x ∂y ∂y

and s kH1 × H2 k = (c): Straightforward.

 1+

∂f ∂x

2

 +

∂f ∂y

2 .

12 Curves and Regular Surfaces in R3ν

312 (d): We have, for example, ∂H1 = ∂y



 ∂2f 0, 0, . ∂x∂y

By Theorem 12.8.2(a) and part (b),  h12 =

 ∂H1 ,N ◦ ϕ = s ∂y

∂2f ∂x∂y  2  2 . ∂f ∂f 1+ + ∂x ∂y

(e): By part (c),  det(gH ) = 1 +

∂f ∂x

2

 +

∂f ∂y

2 ,

and by part (d),  2 2 ∂2f ∂2f ∂ f − 2 2 ∂x ∂y ∂x∂y det(hH ) =  2  2 . ∂f ∂f 1+ + ∂x ∂y The result now follows from Theorem 12.8.4 and part (f). (f): This follows from ν = 0 and (12.2.5). Theorem 12.10.2 (Graph of Function in R31 ). Let ν = 1, continue with the setup of Theorem 11.4.2, and suppose graph(f ) is a regular surface in R31 . Then: (a)     ∂f ∂f H1 = 1, 0, H2 = 0, 1, . ∂x ∂y (b) N ◦ ϕ = −graph(f )

1 v u  2  2 u ∂f t 1 − ∂f − ∂x ∂y



where graph(f ) is given by part (f ). (c)

gH

  2 ∂f 1 −  ∂x  =   ∂f ∂f − ∂x ∂y

 ∂f ∂f ∂x ∂y   .  2  ∂f  1− ∂y −

 ∂f ∂f , ,1 , ∂x ∂y

12.10 Computations for Regular Surfaces in R3ν

313

(d) ∂2f  ∂x2  1  v u  2  2  2 u ∂f  ∂ f t 1 − f − ∂x∂y ∂x ∂y 

hH = graph(f )

 ∂2f ∂x∂y   .  ∂2f  ∂y 2

(e)  2 2 ∂2f ∂2f ∂ f − 2 2 ∂x ∂y ∂x∂y K ◦ ϕ = −"  2  2 # 2 . ∂f ∂f 1− − ∂x ∂y (f)  graph(f ) = −sgn 1 −

∂f ∂x

2

 −

∂f ∂y

2 ! .

Proof. (a): Straightforward. (b): By assumption, graph(f ) is a regular surface in R31 . The result follows from Theorem 12.2.11(c), Theorem 12.7.3(a), part (f), and the computations e1   e2 H1 × H2 = det −e3 

1 0 ∂f ∂x

 1   ∂f ∂f 1  =− , ,1 ∂f  ∂x ∂y ∂y

and s  2  2 ∂f ∂f − . kH1 × H2 k = 1 − ∂x ∂y (c): Straightforward. (d): We have, for example, ∂H1 = ∂y



 ∂2f 0, 0, . ∂x∂y

12 Curves and Regular Surfaces in R3ν

314

By Theorem 12.8.2(a) and part (b),   ∂H1 h12 = ,N ◦ ϕ ∂y     1 ∂2f ∂f ∂f = graph(f ) v 0, 0, , − , , 1 u  2  2 ∂x∂y ∂x ∂y u ∂f t 1 − ∂f − ∂x ∂y

= graph(f )

∂2f ∂x∂y v . u  2  2 u ∂f ∂f t 1 − − ∂x ∂y

(e): By part (c),  det(gH ) = 1 −

∂f ∂x

2

 −

∂f ∂y

2 ,

and by part (d),  2 2 ∂2f ∂2f ∂ f − ∂x2 ∂y 2 ∂x∂y det(hH ) =  2  2 . ∂f ∂f − 1 − ∂x ∂y The result now follows from Theorem 12.8.4 and part (f). (f): We have  2  2 ∂f ∂f hH1 × H2 , H1 × H2 i = + − 1. ∂x ∂y Then Theorem 12.2.11(b) gives  graph(f ) = sgn

∂f ∂x

2

 +

∂f ∂y

2

! −1

 = −sgn 1 −

∂f ∂x

2

 −

∂f ∂y

2 ! .

Theorem 12.10.3 (Surface of Revolution in R30 ). Let ν = 0 and continue with the setup of Theorem 11.4.3. Also, denote derivatives with respect to t by an overdot; let the subscripts 1 and 2 refer to t and φ, respectively; and for brevity, drop t and φ from the notation (except for φ in the case of the trigonometric functions). Then: (a)   H1 = ρ˙ cos(φ), ρ˙ sin(φ), h˙ H2 = −ρ sin(φ), ρ cos(φ), 0 .

12.10 Computations for Regular Surfaces in R3ν

315

(b) N ◦ϕ= q

1 ρ˙ 2 + h˙ 2

 −h˙ cos(φ), −h˙ sin(φ), ρ˙ .

(c) gH

 2 ρ˙ + h˙ 2 = 0

 0 . ρ2

(d)   ¨ − ρ¨h˙ 0 ρ˙ h =q . 0 ρh˙ ρ˙ 2 + h˙ 2 1

hH (e)

K◦ϕ=

˙ ρ˙ h ¨ − ρ¨h) ˙ h( . ρ(ρ˙ 2 + h˙ 2 )2

(f) rev(σ) = 1. (g) Γ111 =

¨ ρ¨ ˙ ρ + h˙ h 2 ˙ ρ˙ + h2

Γ211 = 0

Γ112 = Γ121 = 0

Γ122 = −

ρ˙ ρ

Γ222 = 0.

Γ212 = Γ221 =

ρ˙ 2

ρρ˙ + h˙ 2

(h) The eigenvalues of W are κ1 =

¨ − ρ¨h˙ ρ˙ h (ρ˙ 2 + h˙ 2 )3/2

h˙ κ2 = q , ρ ρ˙ 2 + h˙ 2

and H1 and H2 are eigenvectors corresponding to κ1 and κ2 , respectively. If ρ˙ 2 + h˙ 2 = 1 for all t in (a, b), then: (b0 )  N ◦ ϕ = −h˙ cos(φ), −h˙ sin(φ), ρ˙ . (c0 ) gH =

 1 0

 0 . ρ2

(d0 ) hH =

  ¨ − ρ¨h˙ 0 ρ˙ h . 0 ρh˙

(e0 ) ρ¨ K◦ϕ=− . ρ

12 Curves and Regular Surfaces in R3ν

316 (g0 ) ¨ Γ111 = ρ¨ ˙ ρ + h˙ h Γ211 = 0

Γ112 = Γ121 = 0 ρ˙ Γ212 = Γ221 = ρ

Γ122 = −ρρ˙ Γ222 = 0.

(h0 ) The eigenvalues of W are ¨ − ρ¨h˙ κ1 ◦ ϕ = ρ˙ h

κ2 ◦ ϕ =

h˙ . ρ

Proof. (a): Straightforward. (b): This follows from Theorem 12.2.11(c), Theorem 12.7.3(a), part (f), and the computations   e1 ρ˙ cos(φ) −ρ sin(φ)  ρ cos(φ) = ρ −h˙ cos(φ), −h˙ sin(φ), ρ˙ H1 × H2 = dete2 ρ˙ sin(φ) e3 h˙ 0 and (c): Straightforward. (d): We have

kH1 × H2 k = ρ

q

ρ˙ 2 + h˙ 2 .

 ∂H1 ¨ = ρ¨ cos(φ), ρ¨ sin(φ), h ∂t  ∂H2 ∂H1 = −ρ˙ sin(φ), ρ˙ cos(φ), 0 = ∂φ ∂t  ∂H2 = −ρ cos(φ), −ρ sin(φ), 0 . ∂φ By Theorem 12.8.2(a) and part (b), ¨ − ρ¨h˙ ρ˙ h h11 = q ρ˙ 2 + h˙ 2 (e): By part (c),

h12 = h12 = 0

ρh˙ h22 = q . ρ˙ 2 + h˙ 2

det(gH ) = ρ2 (ρ˙ 2 + h˙ 2 ),

and by part (d), det(hH ) =

˙ ρ˙ h ¨ − ρ¨h) ˙ ρh( . 2 2 ρ˙ + h˙

The result now follows from Theorem 12.8.4 and part (f). (f): This follows from ν = 0 and (12.2.5). (g): We make repeated use of part (c). From (12.3.7), ∂gij = gj1 Γ1i1 + gi1 Γ1j1 + gj2 Γ2i1 + gi2 Γ2j1 ∂t ∂gij = gj1 Γ1i2 + gi1 Γ1j2 + gj2 Γ2i2 + gi2 Γ2j2 ∂φ

12.10 Computations for Regular Surfaces in R3ν

317

for i, j = 1, 2. For (i, j; k) = (1, 1; 1), ¨ = ∂g11 = g11 Γ1 + g11 Γ1 + g12 Γ2 + g12 Γ2 2(ρ¨ ˙ ρ + h˙ h) 11 11 11 11 ∂t 2 2 1 ˙ = 2(ρ˙ + h )Γ11 , hence Γ111 =

¨ ρ¨ ˙ ρ + h˙ h . ρ˙ 2 + h˙ 2

For (i, j; k) = (1, 2; 1), ∂g12 = g21 Γ111 + g11 Γ121 + g22 Γ211 + g12 Γ221 ∂t = (ρ˙ 2 + h˙ 2 )Γ121 + ρ2 Γ211 .

0=

(12.10.1)

For (i, j; k) = (2, 2; 1), ∂g22 = g21 Γ121 + g21 Γ121 + g22 Γ221 + g22 Γ221 ∂t = 2ρ2 Γ221 ,

2ρρ˙ =

hence Γ221 =

ρ˙ = Γ212 . ρ

(12.10.2)

For (i, j; k) = (1, 1; 2), ∂g11 = g11 Γ112 + g11 Γ112 + g12 Γ212 + g12 Γ212 ∂φ = 2(ρ˙ 2 + h˙ 2 )Γ1 ,

0=

12

hence Γ112 = 0 = Γ121 .

(12.10.3)

For (i, j; k) = (1, 2; 2), ∂g12 = g21 Γ112 + g11 Γ122 + g22 Γ212 + g12 Γ222 ∂φ = (ρ˙ 2 + h˙ 2 )Γ1 + ρ2 Γ2 .

0=

22

12

For (i, j; k) = (2, 2; 2), ∂g22 = g21 Γ122 + g21 Γ122 + g22 Γ222 + g22 Γ222 ∂φ = 2ρ2 Γ222 ,

0=

hence Γ222 = 0.

(12.10.4)

12 Curves and Regular Surfaces in R3ν

318

It follows from (12.10.1) and (12.10.3) that Γ211 = 0, and from (12.10.2) and (12.10.4) that Γ122 = −

ρ˙ 2

ρρ˙ . + h˙ 2

(h): We have from part (c) that g−1 H =

 2 1 ρ ρ2 (ρ˙ 2 + h˙ 2 ) 0

 0 , ρ˙ 2 + h˙ 2

and then from part (d) and Theorem 12.8.3(b) that   ¨ − ρ¨h)  H 1 ρ(ρ˙ h 0 −1 W H = gH hH = ˙ ρ˙ 2 + h˙ 2 ) . 0 h( ρ(ρ˙ 2 + h˙ 2 )3/2 Thus, W(H1 ) =

¨ − ρ¨h˙ ρ˙ h H1 (ρ˙ 2 + h˙ 2 )3/2

h˙ W(H2 ) = q H2 , 2 2 ˙ ρ ρ˙ + h

from which the result follows. (b0 )–(g0 ): Parts (b0 )–(d0 ), (g0 ), and (h0 ) follow from ρ˙ 2 + h˙ 2 = 1. For part 0 (e ), differentiating both sides of the preceding identity with respect to t gives ¨ hence h( ˙ ρ˙ h ¨ − ρ¨h) ˙ = −¨ ρ¨ ˙ ρ = −h˙ h, ρ. Theorem 12.10.4 (Surface of Revolution in R31 ). Let ν = 1, continue with the setup of Theorem 11.4.3, and suppose rev(σ) is a regular surface in R31 . Also, denote derivatives with respect to t by an overdot; let the subscripts 1 and 2 refer to t and φ, respectively; and for brevity, drop t and φ from the notation (except for φ in the case of trigonometric functions). Then: (a)   H1 = ρ˙ cos(φ), ρ˙ sin(φ), h˙ H2 = −ρ sin(φ), ρ cos(φ), 0 . (b)

 1 ˙ ˙ N ◦ ϕ = −rev(σ) q h cos(φ), h sin(φ), ρ˙ , ρ˙ 2 − h˙ 2 where rev(σ) is given by part (f ).

(c) gH =

 2 ρ˙ − h˙ 2 0

 0 . ρ2

(d) hH = rev(σ)

  ¨ − ρ¨h˙ 0 ρ˙ h q . 0 ρh˙ ρ˙ 2 − h˙ 2 1

12.10 Computations for Regular Surfaces in R3ν

319

(e) K◦ϕ=−

˙ ρ˙ h ¨ − ρ¨h) ˙ h( . ρ(ρ˙ 2 − h˙ 2 )2

(f) rev(σ) = −sgn(ρ˙ 2 − h˙ 2 ). If ρ˙ 2 − h˙ 2 = 1 for all t in (a, b), then: (b0 )  N ◦ ϕ = h˙ cos(φ), h˙ sin(φ), ρ˙ . (c0 ) gH

 1 = 0

 0 . ρ2

(d0 )   ¨ − ρ¨h˙ 0 ρ˙ h hH = − . 0 ρh˙ (e0 ) ρ¨ K◦ϕ=− . ρ (f0 ) rev(σ) = −1. Proof. (a): Straightforward. (b): By assumption, rev(σ) is a regular surface in R31 . The result follows from Theorem 12.2.11(c), Theorem 12.7.3(a), part (f), and the computations 

e1 H1 × H2 = det e2 −e3

ρ˙ cos(φ) ρ˙ sin(φ) h˙

 −ρ sin(φ)  ρ cos(φ) = −ρ h˙ cos(φ), h˙ sin(φ), ρ˙ 0

and q kH1 × H2 k = ρ ρ˙ 2 − h˙ 2 . (c): Straightforward. (d): We have  ∂H1 ¨ = ρ¨ cos(φ), ρ¨ sin(φ), h ∂t  ∂H2 ∂H1 = −ρ˙ sin(φ), ρ˙ cos(φ), 0 = ∂φ ∂t  ∂H2 = −ρ cos(φ), −ρ sin(φ), 0 . ∂φ

12 Curves and Regular Surfaces in R3ν

320 By Theorem 12.8.2(a) and part (b), ¨ − ρ¨h˙ ρ˙ h h11 = rev(σ) q ρ˙ 2 − h˙ 2 (e): By part (c),

h12 = h12 = 0

ρh˙ h22 = rev(σ) q . ρ˙ 2 − h˙ 2

det(gH ) = ρ2 (ρ˙ 2 − h˙ 2 ),

and by part (d),  ¨ − ρ¨h˙ ρh˙ ρ˙ h . det(hH ) = ρ˙ 2 − h˙ 2 The result now follows from Theorem 12.8.4 and part (f). (f): We have hH1 × H2 , H1 × H2 i = ρ2 (h˙ 2 − ρ˙ 2 ). Since ρ has only positive values, Theorem 12.2.11(b) gives rev(σ) = sgn(h˙ 2 − ρ˙ 2 ) = −sgn(ρ˙ 2 − h˙ 2 ). (b0 )–(e0 ): Parts (b0 )–(d0 ) and (f0 ) follow from ρ˙ 2 − h˙ 2 = 1. For part (e0 ), differentiating both sides of the preceding identity with respect to t yields ρ¨ ˙ρ = ¨ hence h( ˙ ρ˙ h ¨ − ρ¨h) ˙ = ρ¨. h˙ h,

Chapter 13

Examples of Regular Surfaces This chapter provides worked examples of graphs of functions in R30 and R31 , and surfaces of revolution in R31 . The details of computations, which are not included, are based on formulas appearing in Theorems 12.10.1–12.10.4. In this chapter, we identify R2 with the xy-plane in R3 . See Example 12.8.6 for a summary of Gauss curvatures as well as related comments. Each of the regular surfaces to be considered can be parametrized as either the graph of a function or a surface of revolution. The former approach has the advantage that the regular surface can be depicted literally as a graph in R3 . On the other hand, when symmetries are present, the surface of revolution parametrization can be quite revealing and computationally convenient. The choice of parametrization made here is somewhat arbitrary. There is a small issue that differentiates the two computational methods. Parameterizing a regular surface as a surface of revolution leaves out certain points compared with the corresponding parametrization as the graph of a function; more specifically, with the former approach, part of a longitude curve is “missing”. Since we are interested exclusively in local aspects of regular surfaces, in particular, the Gauss curvature, this is not a concern and will not be discussed further.

13.1

Plane in R30

The set Pln = {(x, y, z) ∈ R3 : z = 0} Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

321

322

13 Examples of Regular Surfaces

is the xy-plane in R3 . In the notation of Theorem 11.4.2, let f (x, y) = 0 ϕ(x, y) = (x, y, 0) U = R2 . By Theorem 11.4.2, Pln is a regular surface and (U, ϕ) is a chart. Viewing Pln as a regular surface in R30 , Theorem 12.10.1 gives: H1 = (1, 0, 0)

H2 = (0, 1, 0)

N ◦ ϕ = (0, 0, 1)   1 0 gH = 0 1   0 0 hH = 0 0 K ◦ ϕ = 0.

13.2

Cylinder in R30

The set Cyl = {(x, y, z) ∈ R3 : x2 + y 2 = 1, z > 0}

is an infinite cylinder standing on the xy-plane in R3 . In the notation of Theorem 11.4.3, let ρ(t) = 1 h(t) = t  ϕ(t, φ) = cos(φ), sin(φ), t U = (0, +∞) × (−π, π).

By Theorem 11.4.3, Cyl is a regular surface and (U, ϕ) is a chart. Viewing Cyl as a regular surface in R30 , Theorem 12.10.3 gives:  H1 = (0, 0, 1) H2 = −sin(φ), cos(φ), 0  N ◦ ϕ = −cos(φ), −sin(φ), 0   1 0 gH = 0 1   0 0 hH = 0 1 K◦ϕ=0 Γ111 = 0 Γ112 = Γ121 = 0 Γ122 = 0 Γ211 = 0

Γ212 = Γ221 = 0

κ1 ◦ ϕ = 0

Γ222 = 0

κ2 ◦ ϕ = 1.

An intuitive explanation for why the Gauss curvature of Cyl equals 0 is given in Example 12.8.6.

13.3 Cone in R30

13.3

323

Cone in R30

The set Con = {(x, y, z) ∈ R3 : x2 + y 2 − z 2 = 0, z > 0}

is an inverted infinite cone (minus its vertex) standing on the xy-plane in R3 . See Figure 13.3.1.

z

y

x Figure 13.3.1. Con In the notation of Theorem 11.4.3, let ρ(t) = t h(t) = t  ϕ(t, φ) = t cos(φ), t sin(φ), t U = (0, +∞) × (−π, π). By Theorem 11.4.3, Con is a regular surface and (U, ϕ) is a chart. Had the vertex (0, 0, 0) been included as part of Con, the resulting set would not be a regular surface because there is more than one tangent plane at (0, 0, 0). Viewing Con as a regular surface in R30 , Theorem 12.10.3 gives:   H1 = cos(φ), sin(φ), 1 H2 = −t sin(φ), t cos(φ), 0  1 N ◦ ϕ = √ −cos(φ), −sin(φ), 1 2   2 0 gH = 0 t2   1 0 0 hH = √ 2 0 t K◦ϕ=0

324

13 Examples of Regular Surfaces Γ111 = 0

Γ112 = Γ121 = 0

Γ122 = −

Γ211 = 0

Γ212 = Γ221 =

1 t

Γ222 = 0

t 2

1 κ2 ◦ ϕ = √ . 2t An intuitive explanation for why the Gauss curvature of Con equals 0 is given in Example 12.8.6. κ1 ◦ ϕ = 0

13.4

Sphere in R30

For a real number R > 0, the set 2 SR = {(x, y, z) ∈ R3 : x2 + y 2 + z 2 = R2 }

is a sphere of radius R centered at the origin. See Figure 13.4.1.

z

θ y φ

x 2 Figure 13.4.1. SR

When R = 1, we write S 2 in place of S12 . In the notation of Theorem 11.4.3, let ρ(θ) = R sin(θ) h(θ) = R cos(θ) ϕ(θ, φ) = R sin(θ) cos(φ), sin(θ) sin(φ), cos(θ)



U = (0, π) × (−π, π). 2 2 By Theorem 11.4.3, SR is a regular surface and (U, ϕ) is a chart. Viewing SR 3 as a regular surface in R0 , Theorem 12.10.3 gives:  H1 = R cos(θ) cos(φ), cos(θ) sin(φ), −sin(θ)

13.5 Tractoid in R30

325

 H2 = R −sin(θ) sin(φ), sin(θ) cos(φ), 0  1 N ◦ ϕ = sin(θ) cos(φ), sin(θ) sin(φ), cos(θ) = ϕ R   0 2 1 gH = R 0 sin2 (θ) 1 gH R 1 K◦ϕ= 2 R

hH = −

Γ111 = 0

Γ112 = Γ121 = 0

Γ211 = 0

Γ212 = Γ221 = cot(θ)

Γ122 = −cos(θ) sin(θ) Γ222 = 0

1 . R When R = 1, we have N ◦ ϕ = ϕ; that is, Nϕ(θ,φ) = ϕ(θ, φ) for all (θ, φ) in U . This is consistent with Theorem 12.2.12(c). κ1 ◦ ϕ = κ2 ◦ ϕ = −

Tractoid in R30

13.5 The set

p   p Trc = (x, y, z) ∈ R3 : arsech x2 + y 2 − 1 − x2 − y 2 − z = 0, x2 + y 2 < 1, z > 0 is the upper portion of the tractoid, also known as the tractricoid. It is better understood as the surface obtained by revolving around the z-axis the smooth curve σ(t) : (0, +∞) −→ R3 given  σ(t) = sech(t), 0, t − tanh(t) . See Figure 13.5.1. In the notation of Theorem 11.4.3, let ρ(φ) = sech(t) h(t) = t − tanh(t)

ϕ(t, φ) = sech(t) cos(φ), sech(t) sin(φ), t − tanh(t)



U = (0, +∞) × (−π, π).

By Theorem 11.4.3, Trc is a regular surface and (U, ϕ) is a chart. Viewing Trc as a regular surface in R30 , Theorem 12.10.3 gives:  H1 = −sech(t) tanh(t) cos(φ), −sech(t) tanh(t) sin(φ), tanh2 (t) H2 = −sech(t) sin(φ), sech(t) cos(φ), 0



326

13 Examples of Regular Surfaces

z

y

x Figure 13.5.1. Trc  N ◦ ϕ = − tanh(t) cos(φ), tanh(t) sin(φ), sech(t) gH =

hH

 tanh2 (t) 0

 −sech(t) tanh(t) = 0

0 sech2 (t)



0 sech(t) tanh(t)



K ◦ ϕ = −1 Γ111 =

sech2 (t) tanh(t)

Γ211 = 0

sech2 (t) tanh(t)

Γ112 = Γ121 = 0

Γ122 =

Γ212 = Γ221 = −tanh(t)

Γ222 = 0

κ1 ◦ ϕ = −

sech(t) tanh(t)

κ2 ◦ ϕ =

tanh(t) . sech(t)

Together with Pln and S 2 , Trc makes a trio of regular surfaces in R30 with constant Gauss curvatures of 0, 1, and −1, respectively.

13.6

Hyperboloid of One Sheet in R30

The set One = {(x, y, z) ∈ R3 : x2 + y 2 − z 2 = 1, z > 0} is the upper half of a hyperboloid of one sheet. See Figure 13.6.1.

13.7 Hyperboloid of Two Sheets in R30

327

z

y

x Figure 13.6.1. One, P 2 In the notation of Theorem 11.4.2, let p f (x, y) = x2 + y 2 − 1 p  ϕ(x, y) = x, y, x2 + y 2 − 1

U = {(x, y) ∈ R2 : x2 + y 2 > 1}.

By Theorem 11.4.2, One is a regular surface and (U, ϕ) is a chart. Viewing One as a regular surface in R30 , Theorem 12.10.1 gives: ! ! x y H1 = 1, 0, p H2 = 0, 1, p x2 + y 2 − 1 x2 + y 2 − 1 N ◦ϕ= p

1

−x, −y,

−1  2 1 2x + y 2 − 1 = 2 xy x + y2 − 1 2x2

+

2y 2

p

 x2 + y 2 − 1

 xy gH x2 + 2y 2 − 1  2  1 y − 1 −xy p hH = 2 (x2 + y 2 − 1) 2x2 + 2y 2 − 1 −xy x − 1 1 K◦ϕ=− . 2 (2x + 2y 2 − 1)2

We observe that the Gauss curvature is nonconstant and strictly negative.

13.7

Hyperboloid of Two Sheets in R30

The set Two = {(x, y, z) ∈ R3 : x2 + y 2 − z 2 = −1, z > 0}

328

13 Examples of Regular Surfaces

z

y

x Figure 13.7.1. Two, H2 is the upper sheet of a hyperboloid of two sheets. See Figure 13.7.1. In the notation of Theorem 11.4.2, let p x2 + y 2 + 1 p  ϕ(x, y) = x, y, x2 + y 2 + 1 f (x, y) =

U = R2 . By Theorem 11.4.2, Two is a regular surface and (U, ϕ) is a chart. Viewing Two as a regular surface in R30 , Theorem 12.10.1 gives: H1 =

1, 0, p

hH =

H2 =

x2 + y 2 + 1

N ◦ϕ= p gH =

!

x

1 2x2

+

2y 2

+1

0, 1, p

−x, −y,

 2 1 2x + y 2 + 1 2 2 xy x +y +1

x2 + y 2 + 1

p  x2 + y 2 + 1 xy x2 + 2y 2 + 1

 2 1 y +1 p 2 2 2 2 (x + y + 1) 2x + 2y + 1 −xy K◦ϕ=

!

y



−xy x2 + 1



1 . (2x2 + 2y 2 + 1)2

We observe that the Gauss curvature is nonconstant and strictly positive.

13.8 Torus in R30

13.8

329

Torus in R30

For a real number R > 1, the set p Tor = {(x, y, z) ∈ R3 : ( x2 + y 2 − R)2 + z 2 = 1, x2 + y 2 ≥ 0} is the torus obtained by rotating about the z-axis the unit circle in the xz-plane centered at (R, 0, 0). See Figure 13.8.1. z

y

θ

K= 0 x

φ K 0 K= 0

Figure 13.8.1. Tor In the notation of Theorem 11.4.3, let ρ(θ) = cos(θ) + R h(θ) = sin(θ)  ϕ(θ, φ) = [cos(θ) + R] cos(φ), [cos(θ) + R] sin(φ), sin(θ) U = (−π/2, π/2) × (−π, π). The domain for h was chosen to be (−π/2, π/2) instead of (0, 2π), for example, to ensure that property [R2] of Section 11.4 is satisfied. This parametrizes the “outer” half of the torus; a separate parametrization gives the “inner” half. It follows from Theorem 11.4.3 that Tor is a regular surface and (U, ϕ) is a chart. Viewing Tor as a regular surface in R30 , Theorem 12.10.3 gives:  H1 = −sin(θ) cos(φ), −sin(θ) sin(φ), cos(θ)  H2 = −[cos(θ) + R] sin(φ), [cos(θ) + R] cos(φ), 0

330

13 Examples of Regular Surfaces  N ◦ ϕ = − cos(θ) cos(φ), cos(θ) sin(φ), sin(θ)   1 0 gH = 0 [cos(θ) + R]2   1 0 hH = 0 [cos(θ) + R] cos(θ) K◦ϕ= Γ111 = 0

Γ112 = Γ121 = 0

Γ211 = 0

Γ212 = Γ221 = − κ1 ◦ ϕ = 1

cos(θ) cos(θ) + R Γ122 = [cos(θ) + R] sin(θ)

sin(θ) cos(θ) + R

Γ222 = 0

κ2 ◦ ϕ =

cos(θ) . cos(θ) + R

As depicted in Figure 13.8.1, the Gauss curvature takes positive, negative, and zero values at various points of Tor.

13.9

Pseudosphere in R31

The set P 2 = {(x, y, z) ∈ R3 : x2 + y 2 − z 2 = 1, z > 0} is the same upper half of the hyperboloid of one sheet described in Section 13.6, and just as before, it is a regular surface. Since 

∂f (x, y) ∂x

2

 +

∂f (x, y) ∂y

2 −1=

1 , x2 + y 2 + 1

the condition of Theorem 12.2.8 is satisfied. Thus, P 2 is a regular surface in R31 . Alternatively, this comes directly from Theorem 12.2.12(a). Theorem 12.10.2 gives: ! ! x y H1 = 1, 0, p H2 = 0, 1, p x2 + y 2 − 1 x2 + y 2 − 1 p  N ◦ ϕ = −x, −y, − x2 + y 2 − 1 = −ϕ  2  1 y − 1 −xy gH = hH = 2 x + y 2 − 1 −xy x2 − 1 K◦ϕ=1 P 2 = 1. We note that P 2 = 1 agrees with Theorem 12.2.12(d). In the context of R31 , we refer to P 2 as the pseudosphere.

13.10 Hyperbolic Space in R31

13.10

331

Hyperbolic Space in R31

The set H2 = {(x, y, z) ∈ R3 : x2 + y 2 − z 2 = −1, z > 0} is the same upper sheet of the hyperboloid of two sheets described in Section 13.7, and just as before, it is a regular surface. Since 

∂f (x, y) ∂x

2

 +

∂f (x, y) ∂y

2 −1=−

1 x2 + y 2 − 1

and, by definition, x2 + y 2 > 1, the condition of Theorem 12.2.8 is satisfied. Thus, H2 is a regular surface in R31 . Alternatively, this comes from directly from Theorem 12.2.12(a). Theorem 12.10.2 gives: ! ! x y H1 = 1, 0, p H2 = 0, 1 p x2 + y 2 + 1 x2 + y 2 + 1

gH

p

 x2 + y 2 + 1 = ϕ  2  1 y + 1 −xy = 2 x + y 2 + 1 −xy x2 + 1

N ◦ ϕ = x, y,

hH = −gH K ◦ ϕ = −1 H2 = −1.

We note that H2 = −1 agrees with Theorem 12.2.12(d). In the context of R31 , we refer to H2 as hyperbolic space. We can also parametrize H2 as a surface of revolution. In the notation of Theorem 11.4.3, let ρ(t) = sinh(t) h(t) = cosh(t)  ϕ(t, φ) = sinh(t) cos(φ), sinh(t) sin(φ), cosh(t) U = (0, +∞) × (−π, π). ˙ 2= From the well-known identity sinh2 (t)−cosh2 (t) = −1, we obtain ρ(t) ˙ 2 − h(t) 1. Evidently, the condition of Theorem 12.2.9 is satisfied, so once again we see that H2 is a regular surface in R31 . Theorem 12.10.4 gives:  H1 = cosh(t) cos(φ), cosh(t) sin(φ), sinh(t)  H2 = −sinh(t) sin(φ), sinh(t) cos(φ), 0  N ◦ ϕ = sinh(t) cos(φ), sinh(t) sin(φ), cosh(t) = ϕ

332

13 Examples of Regular Surfaces gH

 1 = 0

0 sinh2 (t)



hH = −gH K ◦ ϕ = −1 H2 = −1. We observe from gH that the coordinate frame H is orthogonal, giving this parametrization certain computational advantages.

Part III

Smooth Manifolds and Semi-Riemannian Manifolds

333

335 In Part II, we covered a range of topics on curves and surfaces. Despite the sometimes abstract nature of the mathematics, the fact that we were dealing with 1- and 2-dimensional geometric objects residing in 3-dimensional space made the undertaking relatively concrete. The restrictions on dimensions imposed in Part II reflect the classical nature of the study of curves and surfaces. With relatively little effort, it is possible to generalize results to (m − 1)dimensional “surfaces” in m-dimensional space. However, this does not resolve an important inherent limitation of this approach—the need for an ambient space. Perhaps the most important application of semi-Riemannian geometry, and the one that historically motivated this area of mathematics, is Einstein’s general theory of relativity. According to this cosmological construct, the universe we inhabit has precisely four dimensions; there is no 5-dimensional ambient space in which our universe resides. Of course, it is possible to fashion one mathematically, but that would not reflect physical reality. In order to model the general theory of relativity, we need a mathematical description of “surfaces” that dispenses with ambient space altogether. Let us consider where R3 entered (and did not enter) into our discussion of surfaces in an effort to see whether we can circumvent the role of an ambient space. Recall that in the first instance, a surface is defined to be a topological space, something that does not depend on an ambient space. However, after that we quickly run into roadblocks. Any of the concepts that rely on tangent vectors, and especially normal vectors, seem inextricably linked to an ambient space. We came to view Christoffel symbols, covariant derivatives, and even the Gauss curvature as “intrinsic” quantities, but their initial formulations relied on ambient space. It seems that as a first step, we need to find a way to define a “tangent vector” without resorting to this externality. Once that has been accomplished, we can then proceed to “tangent space”, “differential map”, and other fundamental notions of differential geometry. This is the goal of Chapters 14–17. One of the “problems” with R3 is its abundance of mathematical structure. As has been observed previously, R3 can be viewed as an inner product space, a Lorentz vector space, a topological space, a metric space, and more. In our study of curves and surfaces, this forced us to clarify what assumptions about R3 were being made at each stage of the discussion. We are about to embark on the development of a mathematical theory that builds on what we know about curves and surfaces, but is in many respects something quite new. This gives us the opportunity to assume only what is essential at each juncture, an approach that is greatly illuminating. Accordingly, we delay introducing the concept of “connection” until Chapter 18, and that of “metric” until Chapters 19 and 20. In Chapter 21, some of the classical results of vector calculus are presented using the new language at our disposal, and in Chapter 22, we conclude with some applications to areas of physics closely related to the special and general theories of relativity.

336

Chapter 14

Smooth Manifolds 14.1

Smooth Manifolds

Many of the definitions presented in this and subsequent chapters are adaptations of ones we encountered in the study of curves and surfaces. Let M be a topological space. A chart on M is a pair (U, ϕ), where U is an open set in M and ϕ : U −→ Rm is a map such that: [C1] ϕ(U ) is an open set in Rm . [C2] ϕ : U −→ ϕ(U ) is a homeomorphism. We refer to U as the coordinate domain of the chart, to ϕ as its coordinate map, and to m as the dimension of the chart. For each point p in U , we say that (U, ϕ) is a chart at p. (Note that in the definition of regular surfaces, coordinate maps went from open sets in R2 to open sets in the regular surface, whereas now traffic is in the opposite direction.) When U = M , we say that (M, ϕ) is a covering chart on M , and that M is covered by (M, ϕ). In the present context, the component functions of ϕ are denoted by ϕ = (x1 , . . . , xm ) rather than ϕ = (ϕ1 , . . . , ϕm ), and are said to be local coordinates on U . This choice of notation is adopted specifically to encourage the informal identification of the point p in M with its local coordinate counterpart ϕ(p) = x1 (p), . . . , xm (p) in Rm . We often denote (U, ϕ)

by

 U, (xi )

or

 U, ϕ = (xi ) ,

e , ϕ) where (xi ) = (x1 , . . . , xm ). The charts (U, ϕ) and (U e on M are said to be e overlapping if U ∩ U is nonempty. In that case, the map e ) −→ ϕ(U e) ϕ e ◦ ϕ−1 |ϕ(U ∩Ue ) : ϕ(U ∩ U e ∩U Semi-Riemannian Geometry, First Edition. Stephen C. Newman. c 2019 John Wiley & Sons, Inc. Published 2019 by John Wiley & Sons, Inc.

337

338

14 Smooth Manifolds

is called a transition map. For brevity, we usually denote ϕ e ◦ ϕ−1 |ϕ(U ∩Ue )

by

ϕ e ◦ ϕ−1 ,

and in a similar manner often (but not always) drop the “restriction” subscript from other notation when the situation is clear from the context. An atlas for M is a collection A = {(Uα , ϕα ) : α ∈ A} S of charts on M such that the Uα form an open cover of M ; that is, M = α∈A Uα . At this point, there is no requirement that charts have the same dimension. A topological m-manifold is a pair (M, A), where M is a topological space and A is an atlas for M such that: [T1] Each chart in A has dimension m. [T2] The topology of M has a countable basis. [T3] For every pair of distinct points p1 , p2 in M , there are disjoint open sets U1 and U2 in M such that p1 is in U1 and p2 is in U2 . Observe that [T1] refers exclusively to the atlas A, whereas [T2] and [T3] have to do with the topological structure of M . Properties [T2] and [T3] are technical requirements needed for certain constructions and will not be further elaborated upon. Example 14.1.1 (Rm ). Rm endowed with the Euclidean topology and paired  with the atlas consisting of the single chart Rm , idRm = (r1 , . . . , rm ) is a topological m-manifold. ♦ Theorem 14.1.2. Let M be a topological space. If A and B are atlases for M such that (M, A) is a topological m-manifold and (M, B) is a topological n-manifold, then m = n. Proof. Let (U, ϕ) and (V, ψ) be charts in A and B, respectively, such that U ∩V is nonempty. Since U ∩V is open in M , we have from [C2] and Theorem 9.1.14 that ϕ(U ∩ V ) is open in ϕ(U ), and then from [C1] and Theorem 9.1.5 that ϕ(U ∩ V ) is open in Rm . Similar reasoning shows that ϕ|U ∩V : U ∩ V −→ ϕ(U ∩ V ) is a homeomorphism. Likewise, ψ(U ∩V ) is open in Rn and ψ|U ∩V : U ∩V −→ ψ(U ∩ V ) is a homeomorphism. It follows that ψ −1 ◦ϕ|ϕ(U ∩V ) : ϕ(U ∩V ) −→ ψ(U ∩V ) is a homeomorphism. By Theorem 9.4.1, m = n. Let (M, A) be a topological m-manifold. We have from Theorem 14.1.2 that m is an invariant of (M, A), which we refer to as the dimension of M and denote by dim(M ). It can be shown that the connected components of a topological space are closed sets in the topological space. In the case of a topological manifold, the connected components have additional properties. Theorem 14.1.3. If (M, A) is a topological manifold, then M has countably many connected components, each of which is both an open and a closed set in M.

14.1 Smooth Manifolds

339

e , ϕ) Let (M, A) be a topological m-manifold, and let (U, ϕ) and (U e be overlape ping charts on M . We say that (U, ϕ) and (U , ϕ) e are smoothly compatible if the transition maps ϕ e ◦ ϕ−1 and ϕ ◦ ϕ e−1 are (Euclidean) smooth. Since (ϕ ◦ ϕ e−1 )−1 = ϕ e ◦ ϕ−1 , this is equivalent to either ϕ e ◦ ϕ−1 or ϕ ◦ ϕ e−1 being a (Euclidean) diffeomorphism. We say that the atlas A is smooth if any two overlapping charts in A are smoothly compatible. A smooth atlas for M is also called a smooth structure on M . Recall that in our study of charts on regular surfaces, coordinate maps were assumed to be smooth, where smoothness was defined using relevant properties of the coordinate domain in R2 and the ambient space R3 . Since M does not have such inherent properties, we have turned Theorem 11.2.9, a result on the smoothness of transition maps for regular surfaces, into a definition of smoothness for topological manifolds. We will see further examples of this approach as we proceed, where a theorem about “surfaces” becomes a definition for “manifolds”. We say that a topological m-manifold (M, A) is a smooth m-manifold if A is a smooth atlas. It is often convenient to adopt the shorthand of referring to M as a smooth m-manifold, with A understood from the context. Furthermore, when it is not important to specify the dimension of M , we refer to (M, A) or simply M as a smooth manifold. A smooth atlas A for M is said to be maximal if it is not properly contained in any larger smooth atlas for M . This means that any chart on M that is smoothly compatible with every chart in A is already in A. It can be shown that every smooth atlas for M is contained in a unique maximal smooth atlas. A given topological space can have distinct smooth atlases that generate the same maximal smooth atlas. On the other hand, it is also possible for a topological space to have distinct smooth atlases that give rise to distinct maximal smooth atlases. Accordingly, we adopt the following convention. Throughout, any chart on a smooth manifold comes from the underlying smooth atlas or its corresponding maximal atlas. Example 14.1.4 (Rm ). Continuing with Example 14.1.1, it is clear that the atlas consisting of the single chart Rm , idRm = (r1 , . . . , rm ) is smooth. Thus, Rm is not just a topological m-manifold,it is a smooth m-manifold. In this context, the chart Rm , idRm = (r1 , . . . , rm ) is referred to as standard coordinates on Rm . It is convenient to adopt the shorthand of saying that (r1 , . . . , rm ) are standard coordinates on Rm . ♦ Example 14.1.5 (Regular Surface). It can be shown with the help of Theorem 11.2.9 that a regular surface is a smooth 2-manifold. ♦ We noted in connection with Theorem 1.1.10 that all m-dimensional vector spaces are isomorphic to Rm (viewed as a vector space). In light of Example 14.1.4, it should come as no surprise that any m-dimensional vector space has a smooth structure induced by Rm (now viewed as a smooth m-manifold). Theorem 14.1.6 (Standard Smooth Structure). If V is an m-dimensional vector space, then there is a unique topology and maximal smooth atlas on V ,

340

14 Smooth Manifolds

called the standard smooth structure on V , making V into a smooth mmanifold such that for every linear isomorphism A : V −→ Rm , the pair (V, A) is a (covering) chart on V . Recall the discussion on product topologies in Theorem 9.1.6 and Theorem 9.1.12. Theorem 14.1.7 (Product Manifold). Let (Mi , Ai ) be a smooth mi manifold, let (Ui , ϕi ) be a chart in Ai for i = 1, . . . , k, and suppose M1 ×· · ·×Mk has the product topology. Then: (a) (U1 × · · · × Uk , ϕ1 × · · · × ϕk ) is a chart on M1 × · · · × Mk . (b) The topological space M1 × · · · × Mk paired with the smooth atlas consisting of the collection of charts as defined in part (a) is a smooth (m1 + · · · + mk )manifold, called the product manifold of M1 , . . . , Mk . For example, it can be shown that the product manifold of m copies of R is precisely Rm with the standard smooth structure.

14.2

Functions and Maps

Let M be a smooth manifold, and let f : M −→ R be a function. Since M and R are topological spaces, we know what it means for f to be continuous (on M ). Motivated by Theorem 11.5.1, we say that f is smooth (on M ) if for every point p in M , there is a chart (U, ϕ) at p such that the function f ◦ ϕ−1 : ϕ(U ) −→ R is (Euclidean) smooth. The set of smooth functions on M is denoted by C ∞ (M ). We make C ∞ (M ) into both a vector space and a ring by defining operations as follows: for all functions f, g in C ∞ (M ) and all real numbers c, let (f + g)(p) = f (p) + g(p), (f g)(p) = f (p)g(p), and (cf )(p) = cf (p) for all p in M . The next result is reminiscent of Theorem 11.5.1. Theorem 14.2.1 (Smoothness Criterion for Functions). Let M be a smooth manifold, and let f : M −→ R be a function. Then f is smooth if and only if the function f ◦ ϕ−1 : ϕ(U ) −→ R is (Euclidean) smooth for every chart (U, ϕ) on M . e , ϕ) Proof. (⇒): Let p be a point in M . By assumption, there is a chart (U e −1 e ) −→ R is (Euclidean) smooth. Since ϕ at p such that f ◦ ϕ e : ϕ(U e ◦ ϕ−1 : e ) −→ Rm is a transition map, it too is (Euclidean) smooth. It follows ϕ(U ∩ U from Theorem 10.1.12 that e ) −→ R f ◦ ϕ−1 = (f ◦ ϕ e−1 ) ◦ (ϕ e ◦ ϕ−1 ) : ϕ(U ∩ U

14.2 Functions and Maps

341

is (Euclidean) smooth. Since p was arbitrary, the result follows. (⇐): Straightforward. We now turn our attention to maps. Let M and N be smooth manifolds, and let F : M −→ N be a map. Since M and N are topological spaces, we understand what is meant by F being continuous (on M ). With Theorem 11.6.1 as motivation, we say that F is smooth (on M ) if for every point p in M , there is a chart (U, ϕ) at p and a chart (V, ψ) at F (p) such that F (U ) ⊆ V and the map ψ ◦ F ◦ ϕ−1 : ϕ(U ) −→ Rn is (Euclidean) smooth. The condition F (U ) ⊆ V is included as part of the definition of smoothness to ensure that the next result holds. Theorem 14.2.2. Let M and N be smooth manifolds, and let F : M −→ N be a map. If F is smooth, then it is continuous. Proof. Let p be a point in M . Since F is smooth, there are charts (U, ϕ) at p and (V, ψ) at F (p) such that F (U ) ⊆ V and ψ ◦ F ◦ ϕ−1 : ϕ(U ) −→ Rn is (Euclidean) smooth. By Theorem 10.1.7, ψ◦F ◦ϕ−1 is continuous, and according to property [C2] of Section 14.1, ϕ : U −→ ϕ(U ) and ψ −1 : ψ(V ) −→ V are homeomorphisms, hence continuous. It follows from Theorem 9.1.8 that F = ψ −1 ◦ (ψ ◦ F ◦ ϕ−1 ) ◦ ϕ : U −→ V is continuous. Since p was arbitrary, the result follows. Theorem 14.2.3 (Smoothness Criterion for Maps). Let M and N be smooth manifolds, and let F : M −→ N be a map. Then F is smooth if and only if (i) F is continuous, and (ii) for every chart (U, ϕ) on M and every chart (V, ψ) on N such that F (U ) ⊆ V , the map ψ ◦ F ◦ ϕ−1 : ϕ(U ) −→ Rn is (Euclidean) smooth. Proof. (⇒): By Theorem 14.2.2, F is continuous. Let (U, ϕ) and (V, ψ) be charts on M and N , respectively, such that F (U ) ⊆ V , and let p be a point in e at F (p) such that e , ϕ) U . By assumption, there are charts (U e at p and (Ve , ψ) −1 n e e e e F (U ) ⊆ V and ψ ◦ F ◦ ϕ e : ϕ( e U ) −→ R is (Euclidean) smooth. Since U and e are open in M , so is U ∩ U e , and by Theorem A.3(c), (U ∩ U e ) ⊆ F −1 (V ∩ Ve ). U −1 m −1 e ∩ Ve ) −→ Rn are transition e ) −→ R and ψ ◦ ψe : ψ(V Because ϕ◦ϕ e : ϕ(U ∩ U maps, they are (Euclidean) smooth. It follows from Theorem 10.1.12 from e ) −→ Rn ψ ◦ F ◦ ϕ−1 = (ψ ◦ ψe−1 ) ◦ (ψe ◦ F ◦ ϕ e−1 ) ◦ (ϕ e ◦ ϕ−1 ) : ϕ(U ∩ U is (Euclidean) smooth. Since p was arbitrary, the result follows. (⇐): Let p be a point in M , and let (U, ϕ) and (V, ψ) be charts at p and F (p), respectively. Since F is continuous, by Theorem 9.1.7, F −1 (V ) is open in M , and therefore, so is U 0 = U ∩ F −1 (V ). Then (ϕ|U 0 , U 0 ) is a chart at p such that F (U 0 ) ⊆ V . The argument used in the proof of (⇒) shows that the map ψ ◦ F ◦ ϕ−1 : ϕ(U 0 ) −→ Rn is (Euclidean) smooth.


Theorem 14.2.4. Let M, N, and P be smooth manifolds, and let F : M −→ N and G : N −→ P be maps. If F and G are smooth, then so is G ∘ F.

Theorem 14.2.5. Let V and W be vector spaces, let A : V −→ W be a linear map, and suppose V and W have standard smooth structures. Then A is a smooth map.

We close this section with a brief look at two important methods of construction on smooth manifolds—bump functions and partitions of unity.

Theorem 14.2.6 (Bump Function). Let M be a smooth manifold, let p be a point in M, and let U be a neighborhood of p in M. Then there is a function β in C∞(M), called a bump function at p, such that:
(a) 0 ≤ β(q) ≤ 1 for all q in M.
(b) supp(β) ⊂ U.
(c) β = 1 on some neighborhood Ũ ⊆ U of p in M.
See Figure 14.2.1.


Figure 14.2.1. Bump function

Proof. We consider only the case M = R and p = 0. Let a and b be real numbers with 0 < a < b, and consider the smooth functions f(t), g(t), β(t) : R −→ R given by

    f(t) = 0 if t ≤ 0,        f(t) = e^(−1/t) if t > 0,

    g(t) = f(t) / (f(t) + f(1 − t)),

and

    β(t) = 1 − g((t² − a²)/(b² − a²)).

As depicted in Figure 14.2.2(a),

    g(t) = 0 if t ≤ 0,        0 < g(t) < 1 if t ∈ (0, 1),        g(t) = 1 if t ≥ 1,

and

    β(t) = 0 if t ≤ −b,
    0 < β(t) < 1 if t ∈ (−b, −a),
    β(t) = 1 if t ∈ [−a, a],
    0 < β(t) < 1 if t ∈ (a, b),
    β(t) = 0 if t ≥ b,

so that supp(β) = [−b, b]. Taking U = R and Ũ = (−a, a), for example, we find that β is a bump function at 0.
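As a concrete check of this construction, the sketch below (our illustration, not part of the text's development) evaluates f, g, and β numerically, with the hypothetical choice a = 1, b = 2.

```python
# A minimal numerical sketch of the bump function above, assuming a = 1, b = 2.
import math

def f(t):
    # Smooth but non-analytic: all derivatives vanish at t = 0.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def g(t):
    # Smooth step: 0 for t <= 0, 1 for t >= 1.
    return f(t) / (f(t) + f(1.0 - t))

def beta(t, a=1.0, b=2.0):
    # Bump: 1 on [-a, a], 0 outside (-b, b).
    return 1.0 - g((t * t - a * a) / (b * b - a * a))

for t in [-2.5, -1.5, 0.0, 0.5, 1.5, 2.5]:
    print(f"beta({t:+.1f}) = {beta(t):.4f}")
# beta(0.0) = 1.0, beta(+/-2.5) = 0.0, with intermediate values in between.
```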


Figure 14.2.2. Diagram for Theorem 14.2.6

A glance at Figure 14.2.2(b) explains why a function such as β is called a bump function. Bump functions are often called upon to extend the domain of a smooth map, as in the proof of the next result.

Theorem 14.2.7 (Extension of Function). Let M be a smooth manifold, let p be a point in M, let U be a neighborhood of p in M, and let f be a function in C∞(U). Then there is a function f̃ in C∞(M) and a neighborhood Ũ ⊆ U of p in M such that f̃ and f agree on Ũ; that is, f̃|Ũ = f|Ũ.

Proof. Let β be a bump function in C∞(M) that has support in U and constant value 1 on a neighborhood Ũ ⊆ U of p in M. We define a function f̃ : M −→ R by

    f̃(q) = β(q)f(q) if q ∈ U        and        f̃(q) = 0 if q ∈ M \ U.

Because β and f are smooth on U, so is f̃ on U. Let q be a point in M \ U. Since β has support in U, q is in M \ supp(β), and because supp(β) is closed in M, M \ supp(β) is open in M. Thus, f̃ = 0 on the neighborhood M \ supp(β) of q in M. Since q was arbitrary, f̃ is smooth on M \ U. This shows that f̃ is smooth on M. Lastly, because β = 1 on Ũ, f and f̃ agree on Ũ.


Theorem 14.2.8 (Partition of Unity). If M is a smooth manifold and U = {Uα : α ∈ A} is an open cover of M, then there is a family {πα ∈ C∞(M) : α ∈ A} of smooth functions on M, called a partition of unity on M subordinate to U, such that:
(a) 0 ≤ πα(p) ≤ 1 for all α in A and all p in M.
(b) supp(πα) ⊂ Uα for all α in A.
(c) Every point p in M has a neighborhood U in M that intersects supp(πα) for only finitely many values of α.
(d) Σ_{α∈A} πα(p) = 1 for all p in M.

Although far from being intuitive, partitions of unity are indispensable for certain constructions in differential geometry, and we will see several such applications. The basic idea is to define the mathematical object of interest (for example, a function, vector field, or integral) on each set in the given open cover, and then form a weighted average using the πα as weights to combine the individual contributions into a mathematical object defined on all of M. Because of part (c), there are no issues of convergence of infinite series.
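As an illustration (our sketch, not the book's construction), the following snippet builds a two-function partition of unity on R subordinate to the hypothetical open cover U₁ = (−∞, 3), U₂ = (−2, ∞), reusing the smooth step g from the proof of Theorem 14.2.6.

```python
# A hedged numerical sketch of a partition of unity on R subordinate to the
# (hypothetical) cover U1 = (-inf, 3), U2 = (-2, inf).
import math

def f(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def g(t):
    return f(t) / (f(t) + f(1.0 - t))

def pi2(t):
    # 0 for t <= -1 and 1 for t >= 2, so supp(pi2) = [-1, inf) lies in U2.
    return g((t + 1.0) / 3.0)

def pi1(t):
    # supp(pi1) = (-inf, 2] lies in U1, and pi1 + pi2 = 1 on all of R.
    return 1.0 - pi2(t)

for t in [-3.0, 0.0, 1.5, 4.0]:
    print(t, round(pi1(t), 4), round(pi2(t), 4), pi1(t) + pi2(t))
```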

14.3    Tangent Spaces

In the introduction to Part III, it was remarked that a crucial step in developing the theory of what we now call smooth manifolds is to devise a way of defining "tangent vector" when there is no ambient space. The definition created by differential geometers and provided here meets this challenge in an ingenious fashion. Framed in algebraic terms, it is both mathematically elegant and computationally convenient. However, it unfortunately lacks intuitive appeal compared to the methods adopted for surfaces, where tangent vectors were defined in terms of derivatives of smooth curves and could be thought of as "arrows". Later on we will see that the algebraic approach leads to a theory closely resembling that developed for surfaces, thereby lending the algebraic theory a certain geometric flavor.

Before proceeding, we need to establish some notation. In Chapter 10, coordinates on Rᵐ were denoted by (x¹, …, xᵐ) or (y¹, …, yᵐ). In Chapter 11 and Chapter 12, it was necessary to clearly distinguish between coordinates on R² and R³. For R², the notation used was (r¹, r²) or (r, s), and for R³, it was (x¹, x², x³) or (x, y, z). In Chapter 13, coordinates on R² and R³ were denoted by (x, y) and (x, y, z), respectively. The former choice was made because in the setting of graphs of functions, R² was identified with the xy-plane in R³. That brings us to the present chapter, and beyond. Henceforth, coordinate maps will be denoted by (x¹, …, xᵐ) or (y¹, …, yᵐ), except for standard coordinates on Rᵐ, which will be denoted by (r¹, …, rᵐ) or (s¹, …, sᵐ).


Let M be a smooth manifold, and let p be a point in M. A (tangent) vector at p is defined to be a linear function v : C∞(M) −→ R that satisfies the following product rule:

    v(fg) = f(p) v(g) + g(p) v(f)        (14.3.1)

for all functions f, g in C∞(M). The set of tangent vectors at p is denoted by Tp(M) and called the tangent space of M at p. The zero vector in Tp(M), denoted by 0, is the tangent vector that sends all functions in C∞(M) to the real number 0. We make Tp(M) into a vector space by defining operations as follows: for all vectors v, w in Tp(M) and all real numbers c, let (v + w)(f) = v(f) + w(f) and (cv)(f) = c v(f) for all functions f in C∞(M).

Let (U, ϕ = (xⁱ)) be a chart at p. The partial derivative with respect to xⁱ at p is the map

    (∂/∂xⁱ)|p : C∞(M) −→ R

defined by

    (∂f/∂xⁱ)(p) = (∂(f ∘ ϕ⁻¹)/∂rⁱ)(ϕ(p))        (14.3.2)

for all functions f in C∞(M), where we denote

    (∂/∂xⁱ)|p(f)   by   (∂f/∂xⁱ)(p).        (14.3.3)

The right-hand side of (14.3.2) is simply the ordinary (Euclidean) partial derivative of f ∘ ϕ⁻¹ with respect to rⁱ at ϕ(p). When m = 1, we denote (∂f/∂x)(p) by (df/dx)(p). Note that although the xⁱ have the domain U, the (∂/∂xⁱ)|p have the domain C∞(M), as opposed to C∞(U).

Theorem 14.3.1. With the above setup:
(a) (∂/∂xⁱ)|p is a tangent vector at p, hence (∂/∂xⁱ)|p is in Tp(M) for i = 1, …, m.
(b) (∂xⁱ/∂xʲ)(p) = δⁱⱼ for i, j = 1, …, m, where δⁱⱼ is Kronecker's delta.


Proof. (a): This follows from the properties of Euclidean partial derivatives.
(b): Straightforward.

Theorem 14.3.2. Let M be a smooth m-manifold, let p be a point in M, and let v be a vector in Tp(M). Then:
(a) If f and g are functions in C∞(M) that are equal on a neighborhood of p in M, then v(f) = v(g).
(b) If h is a function in C∞(M) that is constant on a neighborhood of p in M, then v(h) = 0.

Proof. (a): Since v is linear, it suffices to show that if f = 0 on a neighborhood U of p in M, then v(f) = 0. A first observation is that v(0_M) = v(0_M + 0_M) = v(0_M) + v(0_M), hence v(0_M) = 0. Let β be a bump function at p that has support in U. Since f = 0 on U and supp(β) ⊂ U, we have βf = 0_M. It follows from β(p) = 1, f(p) = 0, and (14.3.1) that

    0 = v(βf) = β(p) v(f) + f(p) v(β) = v(f).

(b): By part (a), it suffices to consider the case where h is constant on M. We have from (14.3.1) that

    v(1_M) = v(1_M 1_M) = 1_M(p) v(1_M) + 1_M(p) v(1_M) = 2 v(1_M),

hence v(1_M) = 0. Since h = c 1_M for some real number c, v(h) = v(c 1_M) = c v(1_M) = 0.

The next result is reminiscent of Theorem 11.4.1.

Theorem 14.3.3 (Open Set). Let (M, A) be a smooth m-manifold, let U be an open set in M, and let A_U = {(V ∩ U, ψ|V∩U) : (V, ψ) ∈ A}. Then:
(a) A_U is a smooth atlas on U, hence (U, A_U) is a smooth m-manifold, where U has the subspace topology induced by M.
(b) For all points p in U, Tp(U) can be identified with Tp(M), written Tp(U) = Tp(M).

Proof. (a): Straightforward.
(b): Using Theorem 14.3.4(c) and Theorem 14.7.3, an argument similar to that employed in the proof of Theorem 11.4.1(b) gives the result. Alternatively, consider the map ι : Tp(U) −→ Tp(M) defined by ι(v)(f) = v(f|U) for all vectors v in Tp(U) and all functions f in C∞(M). It is easily shown that ι(v) is a vector in Tp(M), so ι is well-defined. Furthermore, it can also be shown that ι is a linear isomorphism.

Throughout, any open set in a smooth m-manifold is viewed as a smooth m-manifold.

Theorem 14.3.2(a) and Theorem 14.3.3(b) show that tangent vectors operate locally.

Let V be an m-dimensional vector space that we suppose has the standard smooth structure, so that V is a smooth m-manifold. For each vector v in V, the tangent space Tv(V) is an m-dimensional vector space. In an obvious way, we identify Tv(V) with V (viewed as a vector space), and write

    Tv(V) = V.        (14.3.4)

Theorem 14.3.4 (Existence of Coordinate Basis). Let M be a smooth m-manifold, let p be a point in M, and let (U, ϕ = (xⁱ)) be a chart at p. Then:
(a) ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) is a basis for Tp(M), called the coordinate basis at p corresponding to (U, ϕ).
(b) For all vectors v in Tp(M),

    v = Σᵢ v(xⁱ) (∂/∂xⁱ)|p.

(c) Tp(M) has dimension m.

Remark. Theorem 14.3.3(b) justifies having Tp(M) appear in part (b) rather than Tp(U), even though the xⁱ are functions in C∞(U).

Proof. (b): We consider only the case U = M. Suppose without loss of generality that ϕ(p) = (0, …, 0) and that ϕ(M) is star-shaped about (0, …, 0). Let g : ϕ(M) −→ R be a smooth function, and define a corresponding smooth function gᵢ : ϕ(M) −→ R by

    gᵢ(r¹, …, rᵐ) = ∫₀¹ (∂g/∂rⁱ)(tr¹, …, trᵐ) dt

for all (r¹, …, rᵐ) in ϕ(M) for i = 1, …, m. For a given point (r¹, …, rᵐ) in ϕ(M), define a smooth function h(t) : [0, 1] −→ R by h(t) = g(tr¹, …, trᵐ). This definition makes sense because ϕ(M) is star-shaped about (0, …, 0). It follows from the fundamental theorem of calculus that

    ∫₀¹ (dh/dt)(t) dt = h(1) − h(0) = g(r¹, …, rᵐ) − g(0, …, 0).        (14.3.5)


On the other hand, by Theorem 10.1.10,

    (dh/dt)(t) = Σᵢ rⁱ (∂g/∂rⁱ)(tr¹, …, trᵐ),

so

    ∫₀¹ (dh/dt)(t) dt = Σᵢ rⁱ ∫₀¹ (∂g/∂rⁱ)(tr¹, …, trᵐ) dt = Σᵢ rⁱ gᵢ(r¹, …, rᵐ).        (14.3.6)

Combining (14.3.5) and (14.3.6) yields

    g(r¹, …, rᵐ) = g(0, …, 0) + Σᵢ rⁱ gᵢ(r¹, …, rᵐ).        (14.3.7)

By assumption,

    (x¹(p), …, xᵐ(p)) = ϕ(p) = (0, …, 0).        (14.3.8)

Let q = ϕ⁻¹(r¹, …, rᵐ), so that

    (x¹(q), …, xᵐ(q)) = ϕ(q) = (r¹, …, rᵐ).        (14.3.9)

From (14.3.7)–(14.3.9), we obtain

    g ∘ ϕ(q) = g ∘ ϕ(p) + Σᵢ xⁱ(q) (gᵢ ∘ ϕ)(q).        (14.3.10)

Let f be a function in C∞(M). Setting g = f ∘ ϕ⁻¹ and fᵢ = gᵢ ∘ ϕ in (14.3.10) gives

    f(q) = f(p) + Σᵢ xⁱ(q) fᵢ(q).

Since (r¹, …, rᵐ), hence q, was arbitrary,

    f = f(p) + Σᵢ xⁱ fᵢ.

For a vector v in Tp(M), we have

    v(f) = v(f(p) + Σᵢ xⁱ fᵢ)
         = Σᵢ v(xⁱ fᵢ)                                    [Th 14.3.2(b)]
         = Σᵢ xⁱ(p) v(fᵢ) + Σᵢ fᵢ(p) v(xⁱ)                [(14.3.1)]
         = Σᵢ fᵢ(p) v(xⁱ).                                [(14.3.8)]        (14.3.11)

As a special case,

    (∂/∂xⁱ)|p(f) = Σⱼ (∂xʲ/∂xⁱ)(p) fⱼ(p)                  [Th 14.3.1(a), (14.3.3)]
                 = fᵢ(p)                                  [Th 14.3.1(b)]        (14.3.12)

for i = 1, …, m. It follows from (14.3.11) and (14.3.12) that

    v(f) = Σᵢ v(xⁱ) (∂/∂xⁱ)|p(f).

Since f was arbitrary,

    v = Σᵢ v(xⁱ) (∂/∂xⁱ)|p.

(a): We have from part (b) that the tangent vectors (∂/∂xⁱ)|p span Tp(M). Suppose

    Σᵢ aⁱ (∂/∂xⁱ)|p = 0

for some real numbers aⁱ. Applying both sides of the preceding identity to xʲ and using Theorem 14.3.1(b) gives

    0 = Σᵢ aⁱ (∂xʲ/∂xⁱ)(p) = aʲ

for j = 1, …, m. Thus, the (∂/∂xⁱ)|p are linearly independent.

(c): This follows from part (a).

Theorem 14.3.5 (Change of Coordinate Basis). Let M be a smooth m-manifold, let p be a point in M, let (U, ϕ = (xⁱ)) and (Ũ, ϕ̃ = (x̃ʲ)) be charts at p, and let Xp = ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) and X̃p = ((∂/∂x̃¹)|p, …, (∂/∂x̃ᵐ)|p) be the corresponding coordinate bases at p. Then:
(a)

    (∂/∂xʲ)|p = Σᵢ (∂x̃ⁱ/∂xʲ)(p) (∂/∂x̃ⁱ)|p
              = Σᵢ (∂(x̃ⁱ ∘ ϕ⁻¹)/∂rʲ)(ϕ(p)) (∂/∂x̃ⁱ)|p.


(b) [id_{Tp(M)}]^{X̃p}_{Xp} is the m × m matrix whose (i, j)th entry is

    (∂x̃ⁱ/∂xʲ)(p) = (∂(x̃ⁱ ∘ ϕ⁻¹)/∂rʲ)(ϕ(p)).

Remark. The second equalities in parts (a) and (b), both of which follow from (14.3.2), are included as a reminder that partial derivatives are ultimately computed using Euclidean methods.

Proof. (a): Let aⁱⱼ = ([id_{Tp(M)}]^{X̃p}_{Xp})ⁱⱼ, so that from (2.2.6) and (2.2.7),

    (∂/∂xʲ)|p = Σₖ aᵏⱼ (∂/∂x̃ᵏ)|p.

Applying both sides of the preceding identity to x̃ⁱ and using Theorem 14.3.1(b) and (14.3.3) yields

    (∂x̃ⁱ/∂xʲ)(p) = Σₖ aᵏⱼ (∂x̃ⁱ/∂x̃ᵏ)(p) = aⁱⱼ.

(b): This follows from part (a).

Let M be a smooth m-manifold, let p be a point in M, and let (U, ϕ = (xⁱ)) be a chart at p. The second-order partial derivative with respect to xⁱ and xʲ at p is the map

    (∂²/∂xⁱ∂xʲ)|p : C∞(M) −→ R

defined by

    (∂²f/∂xⁱ∂xʲ)(p) = (∂/∂xⁱ)(∂f/∂xʲ)(p)

for all functions f in C∞(M), where we denote (∂²/∂xⁱ∂xʲ)|p(f) by (∂²f/∂xⁱ∂xʲ)(p).

Theorem 14.3.6 (Equality of Mixed Partial Derivatives). With the above setup,

    (∂²f/∂xⁱ∂xʲ)(p) = (∂²f/∂xʲ∂xⁱ)(p)

for i, j = 1, …, m.

Proof. This follows from Theorem 10.1.6.
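The change-of-basis matrix of Theorem 14.3.5(b) can be computed mechanically. The sketch below (ours, with an assumed pair of charts on a suitable open subset of R²) uses sympy to compute the matrix (∂x̃ⁱ/∂xʲ) for Cartesian coordinates (x¹, x²) and polar coordinates (x̃¹, x̃²) = (r, θ).

```python
# Hedged sympy illustration of Theorem 14.3.5(b), assuming the Cartesian
# chart (x1, x2) and the polar chart (r, theta) on an open subset of R^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
r = sp.sqrt(x1**2 + x2**2)      # x~^1 expressed in terms of (x1, x2)
theta = sp.atan2(x2, x1)        # x~^2 expressed in terms of (x1, x2)

# Matrix whose (i, j) entry is d x~^i / d x^j:
J = sp.Matrix([[sp.diff(r, x1), sp.diff(r, x2)],
               [sp.diff(theta, x1), sp.diff(theta, x2)]])
print(J.applyfunc(sp.simplify))
print(J.subs({x1: 1, x2: 1}))   # the change-of-basis matrix at p = (1, 1)
```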


14.4    Differential of Maps

To define the differential map between two manifolds, we need a way to send vectors in one tangent space to vectors in another tangent space, and in a linear fashion. With the algebraic approach to vectors, this turns out to be surprisingly straightforward.

Let M and N be smooth manifolds, let F : M −→ N be a smooth map, and let p be a point in M. For each vector v in Tp(M), define a map dp(F)(v) : C∞(N) −→ R by

    dp(F)(v)(g) = v(g ∘ F)        (14.4.1)

for all functions g in C∞(N). Since g ∘ F is in C∞(M), the definition makes sense.

Theorem 14.4.1. With the above setup, dp(F)(v) is a vector in TF(p)(N).

Proof. Clearly, dp(F)(v) is linear. To show that dp(F)(v) satisfies (14.3.1), let g, h be functions in C∞(N). Then

    dp(F)(v)(gh) = v(gh ∘ F)                                       [(14.4.1)]
                 = v((g ∘ F)(h ∘ F))
                 = (g ∘ F)(p) · v(h ∘ F) + (h ∘ F)(p) · v(g ∘ F)    [(14.3.1)]
                 = g(F(p)) · dp(F)(v)(h) + h(F(p)) · dp(F)(v)(g).   [(14.4.1)]

Thus, dp(F)(v) is a tangent vector at F(p).

Continuing with the above notation, the differential of F at p is the map dp(F) : Tp(M) −→ TF(p)(N) defined by the assignment v ↦ dp(F)(v) for all vectors v in Tp(M).

Theorem 14.4.2. With the above setup, dp(F) is linear.

Proof. Let v, w be vectors in Tp(M), let g be a function in C∞(N), and let c be a real number. Then

    dp(F)(cv + w)(g) = (cv + w)(g ∘ F)                    [(14.4.1)]
                     = c v(g ∘ F) + w(g ∘ F)
                     = c dp(F)(v)(g) + dp(F)(w)(g)         [(14.4.1)]
                     = (c dp(F)(v) + dp(F)(w))(g).

Since g was arbitrary, the result follows.


The remaining results of this section give the basic properties of differential maps.

Theorem 14.4.3. Let M be a smooth manifold, let idM : M −→ M be the identity map, and let p be a point in M. Then dp(idM) = idTp(M).

Proof. For a vector v in Tp(M) and a function f in C∞(M), we have from (14.4.1) that

    dp(idM)(v)(f) = v(f ∘ idM) = v(f).

Since v and f were arbitrary, the result follows.

Theorem 14.4.4. Let V and W be vector spaces, let A : V −→ W be a linear map, and let v be a vector in V. Suppose V and W have standard smooth structures, and make the identifications Tv(V) = V and TA(v)(W) = W. Then dv(A) = A.

Proof. Straightforward.

Theorem 14.4.5 (Chain Rule). Let M, N, and P be smooth manifolds, let F : M −→ N and G : N −→ P be smooth maps, and let p be a point in M. Then

    dp(G ∘ F) = dF(p)(G) ∘ dp(F).

Proof. For a vector v in Tp(M) and a function f in C∞(P), we have

    dp(G ∘ F)(v)(f) = v(f ∘ G ∘ F)                       [(14.4.1)]
                    = dp(F)(v)(f ∘ G)                     [(14.4.1)]
                    = dF(p)(G)(dp(F)(v))(f)               [(14.4.1)]
                    = (dF(p)(G) ∘ dp(F))(v)(f).

Since v and f were arbitrary, the result follows.

The next result is a generalization of Theorem 14.3.5.

Theorem 14.4.6 (Matrix of Differential Map). Let M be a smooth m-manifold, let N be a smooth n-manifold, and let F : M −→ N be a smooth map. Let p be a point in M, let (U, ϕ = (xⁱ)) and (V, (yʲ)) be charts at p and F(p), respectively, and let Xp = ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) and YF(p) = ((∂/∂y¹)|F(p), …, (∂/∂yⁿ)|F(p)) be the corresponding coordinate bases at p and F(p), respectively. Then:
(a)

    dp(F)((∂/∂xʲ)|p) = Σᵢ (∂(yⁱ ∘ F)/∂xʲ)(p) (∂/∂yⁱ)|F(p)
                     = Σᵢ (∂(yⁱ ∘ F ∘ ϕ⁻¹)/∂rʲ)(ϕ(p)) (∂/∂yⁱ)|F(p).

(b) [dp(F)]^{YF(p)}_{Xp} is the n × m matrix whose (i, j)th entry is

    (∂(yⁱ ∘ F)/∂xʲ)(p) = (∂(yⁱ ∘ F ∘ ϕ⁻¹)/∂rʲ)(ϕ(p)).

Remark. The second equalities in parts (a) and (b), both of which follow from (14.3.2), are included as a reminder that partial derivatives are ultimately computed using Euclidean methods.

Proof. (a): Let aᵏⱼ = ([dp(F)]^{YF(p)}_{Xp})ᵏⱼ, so that from (2.2.6) and (2.2.7),

    dp(F)((∂/∂xʲ)|p) = Σₖ aᵏⱼ (∂/∂yᵏ)|F(p).

Applying both sides of the preceding identity to yⁱ and using Theorem 14.3.1(b) and (14.3.3) yields

    dp(F)((∂/∂xʲ)|p)(yⁱ) = Σₖ aᵏⱼ (∂yⁱ/∂yᵏ)(F(p)) = aⁱⱼ.

On the other hand, we have from (14.3.3) and (14.4.1) that

    dp(F)((∂/∂xʲ)|p)(yⁱ) = (∂/∂xʲ)|p(yⁱ ∘ F) = (∂(yⁱ ∘ F)/∂xʲ)(p).

Combining the above identities gives the result.

(b): This follows from (2.2.6), (2.2.7), and part (a).
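In standard coordinates on Euclidean spaces, Theorem 14.4.6(b) says the matrix of dp(F) is the ordinary Jacobian. The sketch below (our example, not the text's) computes it with sympy for a hypothetical smooth map F : R² −→ R².

```python
# Hedged sketch of Theorem 14.4.6(b) for the (hypothetical) map
# F(x1, x2) = (x1**2 - x2**2, 2*x1*x2) in standard coordinates on R^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1**2 - x2**2, 2*x1*x2])
J = F.jacobian([x1, x2])
print(J)                        # Matrix([[2*x1, -2*x2], [2*x2, 2*x1]])
print(J.subs({x1: 1, x2: 2}))   # the matrix of d_p(F) at p = (1, 2)
```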

14.5    Differential of Functions

Let M be a smooth manifold, let f be a function in C∞(M), let p be a point in M, and let (U, ϕ = (xⁱ)) be a chart at p. Viewing R as a smooth 1-manifold, we have the differential map

    dp(f) : Tp(M) −→ Tf(p)(R).        (14.5.1)

A covering chart for R is (R, idR = r), where r is the standard coordinate on R. The corresponding coordinate basis at f(p) is ((d/dr)|f(p)). Then Theorem 14.4.6(a) and idR = r give

    dp(f)((∂/∂xⁱ)|p) = (∂(r ∘ f)/∂xⁱ)(p) (d/dr)|f(p) = (∂f/∂xⁱ)(p) (d/dr)|f(p).        (14.5.2)

Using (14.3.4), we identify Tf(p)(R) with R, and write Tf(p)(R) = R. To be consistent, we also identify the basis ((d/dr)|f(p)) for Tf(p)(R) with the basis (1) for R. Then (14.5.1) and (14.5.2) become

    dp(f) : Tp(M) −→ R        (14.5.3)

and

    dp(f)((∂/∂xⁱ)|p) = (∂f/∂xⁱ)(p)        (14.5.4)

for i = 1, …, m. Let us denote the dual space Tp(M)* by Tp*(M).

The usual identification of a vector space with its double dual gives Tp**(M) = Tp(M).

Theorem 14.5.1. With the above setup, dp(f) is a covector in Tp*(M).

Proof. This follows from Theorem 14.4.2 and (14.5.3).

Theorem 14.5.2 (Existence of Dual Coordinate Basis). Let M be a smooth m-manifold, let p be a point in M, let (U, (xⁱ)) be a chart at p, and let Xp = ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) be the corresponding coordinate basis at p. Then (dp(x¹), …, dp(xᵐ)) is the dual basis of Xp, called the dual coordinate basis at p corresponding to (U, (xⁱ)).

Proof. Setting f = xʲ in (14.5.4) and using Theorem 14.3.1(b) gives the result.

With the next result, we recover an identity that is familiar from the differential calculus of several real variables, except that here "differentials" replace "infinitesimals".

Theorem 14.5.3. Let M be a smooth m-manifold, and let f be a function in C∞(M). Let p be a point in M, let (U, (xⁱ)) be a chart at p, and let ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) and (dp(x¹), …, dp(xᵐ)) be the corresponding coordinate and dual coordinate bases at p. Then, in local coordinates,

    dp(f) = Σᵢ (∂f/∂xⁱ)(p) dp(xⁱ).

Proof. By Theorem 14.5.2, dp(f) = Σⱼ aⱼ dp(xʲ) for some real numbers aⱼ. We have

    (∂f/∂xⁱ)(p) = dp(f)((∂/∂xⁱ)|p)                [(14.5.4)]
                = (Σⱼ aⱼ dp(xʲ))((∂/∂xⁱ)|p)
                = Σⱼ aⱼ dp(xʲ)((∂/∂xⁱ)|p)
                = aᵢ                               [Th 14.5.2]

for i = 1, …, m.

Theorem 14.5.4 (Change of Dual Coordinate Basis). Let M be a smooth m-manifold, and let p be a point in M. Let (U, (xⁱ)) and (Ũ, (x̃ʲ)) be charts at p, let Xp = ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) and X̃p = ((∂/∂x̃¹)|p, …, (∂/∂x̃ᵐ)|p) be the corresponding coordinate bases at p, and let Xp* = (dp(x¹), …, dp(xᵐ)) and X̃p* = (dp(x̃¹), …, dp(x̃ᵐ)) be the corresponding dual coordinate bases at p. Then:
(a)

    dp(xʲ) = Σᵢ (∂xʲ/∂x̃ⁱ)(p) dp(x̃ⁱ)

for j = 1, …, m.
(b) [id_{Tp*(M)}]^{X̃p*}_{Xp*} is the m × m matrix whose (i, j)th entry is (∂xʲ/∂x̃ⁱ)(p); that is,

    [id_{Tp*(M)}]^{X̃p*}_{Xp*} = ([id_{Tp(M)}]^{Xp}_{X̃p})ᵀ.

Proof. (a): Setting f = xʲ and xⁱ = x̃ⁱ in Theorem 14.5.3 gives the result.
(b): The first equality follows from (2.2.6), (2.2.7), and part (a). The second equality follows from Theorem 14.3.5(b).

Theorem 14.5.5. Let M be a smooth manifold, let f be a function in C∞(M), let p be a point in M, and let v be a vector in Tp(M). Then dp(f)(v) = v(f).

Proof. By Theorem 1.2.1(d) and Theorem 14.5.2,

    v = Σᵢ dp(xⁱ)(v) (∂/∂xⁱ)|p,


hence

    v(f) = Σᵢ dp(xⁱ)(v) (∂/∂xⁱ)|p(f)
         = Σᵢ dp(xⁱ)(v) (∂f/∂xⁱ)(p)                [(14.3.3)]
         = (Σᵢ (∂f/∂xⁱ)(p) dp(xⁱ))(v)              [Th 14.5.3]
         = dp(f)(v).

Theorem 14.5.6. Let M be a smooth manifold, let p be a point in M, let f, g be functions in C∞(M), and let c be a real number. Then:
(a) dp(cf + g) = c dp(f) + dp(g).
(b) dp(fg) = f(p) dp(g) + g(p) dp(f).

Proof. For a vector v in Tp(M), we have

    dp(cf + g)(v) = v(cf + g)                        [Th 14.5.5]
                  = c v(f) + v(g)
                  = c dp(f)(v) + dp(g)(v)             [Th 14.5.5]
                  = (c dp(f) + dp(g))(v)

and

    dp(fg)(v) = v(fg)                                 [Th 14.5.5]
              = f(p) v(g) + g(p) v(f)                 [(14.3.1)]
              = f(p) dp(g)(v) + g(p) dp(f)(v)         [Th 14.5.5]
              = (f(p) dp(g) + g(p) dp(f))(v).

Since v was arbitrary, the result follows.

Theorem 14.5.7. Let (e₁, …, eₘ) be the standard basis for Rᵐ, and let (ξ¹, …, ξᵐ) be its dual basis. Let (r¹, …, rᵐ) be standard coordinates on Rᵐ, let p be a point in Rᵐ, let ((∂/∂r¹)|p, …, (∂/∂rᵐ)|p) be the corresponding coordinate basis at p, called the standard coordinate basis at p, and let (dp(r¹), …, dp(rᵐ)) be the corresponding dual coordinate basis at p, called the standard dual coordinate basis at p. Then ((∂/∂r¹)|p, …, (∂/∂rᵐ)|p) can be identified with (e₁, …, eₘ), and (dp(r¹), …, dp(rᵐ)) can be identified with (ξ¹, …, ξᵐ), written

    ((∂/∂r¹)|p, …, (∂/∂rᵐ)|p) = (e₁, …, eₘ)

and

    (dp(r¹), …, dp(rᵐ)) = (ξ¹, …, ξᵐ).

Proof. It is sufficient to prove either of the above identities, the other following by taking the dual. In keeping with (14.3.4), we make the identification


Tp(Rᵐ) = Rᵐ. By definition, (r¹, …, rᵐ) is the coordinate map of the chart (Rᵐ, idRᵐ = (r¹, …, rᵐ)). Since rⁱ is linear, by Theorem 14.4.4, dp(rⁱ) = rⁱ for i = 1, …, m. For a point q in Rᵐ,

    Σᵢ dp(rⁱ)(q) eᵢ = Σᵢ rⁱ(q) eᵢ = (r¹(q), …, rᵐ(q)) = (r¹, …, rᵐ)(q) = idRᵐ(q) = q = Σᵢ ξⁱ(q) eᵢ,

where the last equality follows from Theorem 1.2.1(d). Since q was arbitrary, dp(rⁱ) = ξⁱ for i = 1, …, m.
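By Theorem 14.5.3, the components of dp(f) in the dual coordinate basis are the partial derivatives of f at p. The following short sympy check (our example, with a hypothetical function on R² in standard coordinates) makes this concrete.

```python
# Hedged sympy check of Theorem 14.5.3 on M = R^2 with standard coordinates:
# the components of d_p(f) are the partial derivatives of f at p.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sin(y)                   # an illustrative f in C^inf(R^2)
df = [sp.diff(f, x), sp.diff(f, y)]    # components w.r.t. (d_p(x), d_p(y))
p = {x: 1, y: sp.pi / 2}
print([c.subs(p) for c in df])         # [2, 0] = components of d_p(f)
```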

14.6    Immersions and Diffeomorphisms

In this brief section, we generalize the discussion of immersions and diffeomorphisms in Section 10.2 to the setting of smooth manifolds.

Let M and N be smooth manifolds, where dim(M) ≤ dim(N), let F : M −→ N be a smooth map, and let p be a point in M. We say that F is an immersion at p if the differential map dp(F) : Tp(M) −→ TF(p)(N) is injective, and that F is an immersion (on M) if it is an immersion at every p in M.

Now suppose M and N have the same dimension, and let G, H : M −→ N be smooth maps. We say that G is a diffeomorphism, and that M and N are diffeomorphic, if G is bijective and G⁻¹ : N −→ M is smooth. We say that H is a local diffeomorphism at p if there is a neighborhood U of p in M and a neighborhood V of H(p) in N such that H|U : U −→ V is a diffeomorphism. Then H is said to be a local diffeomorphism (on M) if it is a local diffeomorphism at every p in M.

Theorem 14.6.1. Let M and N be smooth manifolds with the same dimension, let F : M −→ N be a diffeomorphism, and let p be a point in M. Then the differential map dp(F) : Tp(M) −→ TF(p)(N) is a linear isomorphism, with inverse dp(F)⁻¹ = dF(p)(F⁻¹).

Proof. By Theorem 14.4.2, dp(F) is a linear map. We have from Theorem 14.4.3 and Theorem 14.4.5 that

    idTp(M) = dp(idM) = dp(F⁻¹ ∘ F) = dF(p)(F⁻¹) ∘ dp(F).

Similarly, idTF(p)(N) = dp(F) ∘ dF(p)(F⁻¹). By Theorem A.4, dp(F) is bijective.

Theorem 14.6.2 (Inverse Map Theorem). Let M and N be smooth manifolds with the same dimension, let F : M −→ N be a smooth map, and let p be a point in M. Then F is a local diffeomorphism at p if and only if it is an immersion at p. Thus, F is a local diffeomorphism if and only if it is an immersion.
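For equal dimensions, being an immersion at p amounts to the coordinate Jacobian of Theorem 14.4.6(b) being invertible at p. The sketch below (our example, not the text's) checks this for a hypothetical map that is a local diffeomorphism everywhere yet not injective globally (it is 2π-periodic in y).

```python
# Hedged sketch of Theorem 14.6.2 in coordinates: the (hypothetical) map
# F(x, y) = (e^x cos y, e^x sin y) has everywhere-invertible differential,
# hence is a local diffeomorphism, though it is not globally injective.
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([sp.exp(x) * sp.cos(y), sp.exp(x) * sp.sin(y)])
J = F.jacobian([x, y])
print(sp.simplify(J.det()))   # exp(2*x), nonzero at every point
```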

14.7    Curves

The following definitions are borrowed more or less directly from Section 10.1 and Section 11.1. A (parametrized) curve on a smooth manifold M is a map λ : I −→ M, where I is an interval in R that is either open, closed, half-open, or half-closed, and where the possibility that I is infinite is not excluded. Our focus will be on the case where I is a finite open interval, usually denoted by (a, b). Rather than provide a separate statement identifying the independent variable for the curve, most often denoted by t, and sometimes by u, it is helpful to incorporate this into the notation for λ, as in λ(t) : I −→ M. When I is a closed interval [a, b] and λ is continuous, we say that λ joins λ(a) to λ(b). It is convenient to adopt the following convention. Henceforth, when required by the context, the interval I is assumed to contain 0.

Consider the curve λ(t) : (a, b) −→ M. Viewing (a, b) as a smooth 1-manifold, we say that λ is smooth [on (a, b)] if it is smooth as a map between smooth manifolds. Suppose λ is in fact smooth. For a given point t in (a, b), we have the differential map

    dt(λ) : Tt((a, b)) −→ Tλ(t)(M).

A covering chart for (a, b) is ((a, b), id(a,b) = r), where r is the standard coordinate on (a, b). The corresponding coordinate basis at t is ((d/dr)|t). It follows that

    dt(λ)((d/dr)|t) is a vector in Tλ(t)(M)        (14.7.1)

for all t in (a, b). Thus, for all functions f in C∞(M),

    dt(λ)((d/dr)|t)(f) = (d/dr)|t(f ∘ λ)           [(14.4.1)]
                       = (d(f ∘ λ)/dr)(t).          [(14.3.3)]        (14.7.2)

In an effort to ensure that the notation adopted for smooth manifolds resembles as much as possible the notation from differential calculus, let us denote

    dt(λ)((d/dr)|t)   by   (dλ/dt)(t).        (14.7.3)

We continue to indulge in the usual (and sometimes confusing) practice of obscuring the difference between a variable and its value. With this understanding, (14.7.1) and (14.7.2) become:

    (dλ/dt)(t) is a vector in Tλ(t)(M)        (14.7.4)

and

    (dλ/dt)(t)(f) = (d(f ∘ λ)/dt)(t).        (14.7.5)

We refer to (dλ/dt)(t) as the velocity of λ at t.

Theorem 14.7.1. Let M be a smooth m-manifold, let λ(t) : (a, b) −→ M be a smooth curve, and let (U, ϕ = (xⁱ)) be a chart on M such that λ((a, b)) ⊂ U. Then:
(a) The smooth curve ϕ ∘ λ(t) : (a, b) −→ Rᵐ is given by

    ϕ ∘ λ(t) = (x¹ ∘ λ(t), …, xᵐ ∘ λ(t)).

(b)

    (dλ/dt)(t) = Σᵢ (d(xⁱ ∘ λ)/dt)(t) (∂/∂xⁱ)|λ(t)

for all t in (a, b). Thus, the components of (dλ/dt)(t) with respect to the coordinate basis at λ(t) are the components of (d(ϕ ∘ λ)/dt)(t) with respect to the standard basis for Rᵐ.

Proof. (a): We have

    ϕ ∘ λ(t) = (x¹, …, xᵐ) ∘ λ(t) = (x¹ ∘ λ(t), …, xᵐ ∘ λ(t)).

(b): This follows from Theorem 14.4.6(a), (14.7.3), and part (a).

It was remarked in the introduction to Part III that the algebraic approach to defining tangent vectors gives rise to results that have something of the geometric flavor found in the theory of surfaces. We close this section with several such instances.

Theorem 14.7.2. Let M be a smooth m-manifold, let p be a point in M, and let v be a vector in Tp(M). Then there is a real number ε > 0 and a smooth curve λ(t) : (−ε, ε) −→ M such that λ(0) = p and (dλ/dt)(0) = v.

Proof. Let (U, ϕ = (xⁱ)) be a chart at p such that ϕ(p) = (0, …, 0), and, in local coordinates, let

    v = Σᵢ aⁱ (∂/∂xⁱ)|p.

Define a smooth curve λ(t) : (−ε, ε) −→ M by λ(t) = ϕ⁻¹(ta¹, …, taᵐ), where ε is chosen small enough that λ((−ε, ε)) ⊂ U. Clearly, λ(0) = p. We have from Theorem 14.7.1(a) that xⁱ ∘ λ(t) = taⁱ, hence

    aⁱ = (d(xⁱ ∘ λ)/dt)(0)

for i = 1, …, m. It follows from the preceding identities and Theorem 14.7.1(b) that

    (dλ/dt)(0) = Σᵢ (d(xⁱ ∘ λ)/dt)(0) (∂/∂xⁱ)|λ(0) = v.
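Theorem 14.7.1(b) reduces velocity computations to ordinary calculus in a chart. Here is a small sympy sketch (ours, with a hypothetical curve in standard coordinates on R²).

```python
# Hedged coordinate computation of velocity (Theorem 14.7.1(b)) for the
# (hypothetical) curve lambda(t) = (cos t, sin t) in standard coordinates on R^2.
import sympy as sp

t = sp.symbols('t')
lam = (sp.cos(t), sp.sin(t))
velocity = [sp.diff(c, t) for c in lam]   # components of (d lambda / dt)(t)
print(velocity)                           # [-sin(t), cos(t)]
print([c.subs(t, 0) for c in velocity])   # velocity at t = 0: (0, 1)
```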


Theorem 14.7.3. If M is a smooth manifold and p is a point in M, then

    Tp(M) = { (dλ/dt)(t₀) : λ(t) : (a, b) −→ M is smooth, λ(t₀) = p, t₀ ∈ (a, b) }.

Proof. This follows from Theorem 14.7.2 and (14.7.4).

Let M be a smooth manifold, let p be a point in M, and let v be a vector in Tp(M). We have from the preceding theorem that there is a smooth curve λ(t) : (a, b) −→ M such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b). Let X be a vector field in X(M). Then Xp is a vector in Tp(M), hence there corresponds such a smooth curve. We will make use of these observations frequently.

Theorem 14.7.4. Let M and N be smooth manifolds, let F : M −→ N be a smooth map, let p be a point in M, and let v be a vector in Tp(M). Then:
(a)

    dp(F)(v) = (d(F ∘ λ)/dt)(t₀),

where λ(t) : (a, b) −→ M is any smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = v for some t₀ in (a, b).
(b) If ψ(t) : (a, b) −→ M is a smooth curve, then

    dψ(t)(F)((dψ/dt)(t)) = (d(F ∘ ψ)/dt)(t)

for all t in (a, b).

Remark. Since F ∘ λ, F ∘ ψ : (a, b) −→ N are smooth curves, the assertions make sense.

Proof. (a): For a function g in C∞(N), we have

    dp(F)(v)(g) = v(g ∘ F)                          [(14.4.1)]
                = (dλ/dt)(t₀)(g ∘ F)
                = (d(g ∘ F ∘ λ)/dt)(t₀)              [(14.7.5)]
                = (d(F ∘ λ)/dt)(t₀)(g).              [(14.7.5)]

Since g was arbitrary, the result follows.

(b): This follows from part (a).

14.8    Submanifolds

Having defined smooth manifolds, it is natural to consider subsets with corresponding properties. Let M̄ be a smooth m̄-manifold, and let M be a subset of


M̄ that is a smooth m-manifold in its own right. Without further assumptions, there is no reason to expect a connection between the topologies on M̄ and M, and likewise for their smooth structures. We say that M is an m-submanifold of M̄ if:
[S1] M has the subspace topology.
[S2] The inclusion map ι : M −→ M̄ is an immersion.

Suppose M is in fact an m-submanifold of M̄. It follows from [S2] that m ≤ m̄. We say that M is a hypersurface of M̄ if dim(M) = dim(M̄) − 1. Since ι : M −→ M̄ is an immersion, for each point p in M, the differential map dp(ι) : Tp(M) −→ Tp(M̄) is injective. Given a vector v in Tp(M), the image vector dp(ι)(v) behaves as follows: for all functions f in C∞(M̄),

    dp(ι)(v)(f) = v(f ∘ ι) = v(f|M).

We adopt the established convention of identifying Tp(M) with its image under dp(ι). Thus, Tp(M) is viewed as a vector subspace of Tp(M̄), and we write

    Tp(M) ⊆ Tp(M̄).        (14.8.1)

Theorem 14.8.1. A subset of a smooth manifold can be made into a submanifold in at most one way. Thus, a subset of a smooth manifold is either a submanifold or not.

Theorem 14.8.2 (Regular Surface). Any regular surface is a hypersurface of R³.

Remark. According to Example 14.1.4 and Example 14.1.5, a regular surface is a smooth 2-manifold, and R³ is a smooth 3-manifold, so the assertion makes sense.

Proof. We need to verify that [S1] and [S2] are satisfied.
[S1]: By definition, M has the subspace topology.
[S2]: To prove that the inclusion map ι : M −→ R³ is an immersion, we have to show that it is smooth and the differential map dp(ι) : Tp(M) −→ Tp(R³) is injective for all p in M.

Smoothness. Let (U, ϕ) and (V, ψ) be charts on M and R³, respectively, such that ι(U) = U ⊂ V. By Theorem 11.2.9, the map ψ ∘ ι ∘ ϕ⁻¹ = ψ ∘ ϕ⁻¹ : ϕ(U) −→ R³ is smooth. Since (U, ϕ) and (V, ψ) were arbitrary, it follows from Theorem 14.2.3 that ι is smooth.

Injectivity. Let p be a point in M, and let (U, ϕ = (ϕ¹, ϕ², ϕ³)) be a chart at p in the sense of Section 11.2. Then (ϕ(U), ϕ⁻¹) is a chart at p in the sense of Section 14.1. Let Xp be the corresponding coordinate basis at p, and let (r¹, r²) and (s¹, s², s³) be standard coordinates on R² and R³, respectively.


Then (R³, (s¹, s², s³)) is a chart on R³. Let Yp be the corresponding coordinate basis at p = ι(p). It follows from Theorem 14.4.6(b) that

    [dp(ι)]^{Yp}_{Xp} = [ (∂(sⁱ ∘ ι ∘ ϕ)/∂rʲ)(ϕ⁻¹(p)) ] = [ (∂ϕⁱ/∂rʲ)(ϕ⁻¹(p)) ],

and then from property [C1] of Section 11.2 and Theorem 11.2.2 that dp(ι) is injective.

Reflecting on the above definition of a submanifold, [S1] seems like an obvious requirement, but the same cannot be said of [S2]. Theorem 14.8.2 gives some insight into [S2], but its rationale is far from transparent. Rather than search for a deeper understanding of [S2], we change course and provide an alternative perspective on submanifolds.

For a given integer 1 ≤ k ≤ m, we define a type of projection map Pk : Rᵐ −→ Rᵏ × {0}ᵐ⁻ᵏ by

    Pk(r¹, …, rᵏ, rᵏ⁺¹, …, rᵐ) = (r¹, …, rᵏ, 0, …, 0).

Let U be an open set in Rᵐ, and let S be a subset of U. We say that S is a k-slice of U if S ⊆ Pk(U).

Theorem 14.8.3 (Slice Criterion for Submanifolds). Let M̄ be a smooth m̄-manifold, let M be a subset of M̄ with the subspace topology, and let 1 ≤ m ≤ m̄. Then:
(a) M is an m-submanifold of M̄ if and only if for every point p in M, there is a chart (U, ϕ = (x¹, …, x^m̄)) on M̄ at p, called a slice chart for M in M̄, such that ϕ(M ∩ U) is an m-slice of ϕ(U).
(b) If M is an m-submanifold of M̄, then corresponding to the slice chart (U, ϕ = (x¹, …, x^m̄)) on M̄ at p, there is the chart (M ∩ U, (x¹|M∩U, …, xᵐ|M∩U)) on M at p such that

    ((∂/∂(x¹|M∩U))|q, …, (∂/∂(xᵐ|M∩U))|q)

can be identified with

    ((∂/∂x¹)|q, …, (∂/∂xᵐ)|q)

for all q in M ∩ U. Thus, Tq(M) can be identified with the subspace of Tq(M̄) spanned by ((∂/∂x¹)|q, …, (∂/∂xᵐ)|q) for all q in M ∩ U.

The above notation is rather cumbersome. Henceforth, xⁱ|M∩U will be abbreviated to xⁱ for i = 1, …, m. In this revised notation, we denote (M ∩ U, (x¹|M∩U, …, xᵐ|M∩U)) by (M ∩ U, (x¹, …, xᵐ)), and ((∂/∂(x¹|M∩U))|q, …, (∂/∂(xᵐ|M∩U))|q) by ((∂/∂x¹)|q, …, (∂/∂xᵐ)|q).

We require two further results on submanifolds, both of which are straightforward consequences of Theorem 14.8.3. We saw in Theorem 14.3.3(a) that an open set in a smooth manifold is itself a smooth manifold. The next result says that it is also a submanifold.

Theorem 14.8.4 (Open Submanifold). If M is a smooth m-manifold and U is an open set in M, then U is an m-submanifold of M, called the open submanifold of M corresponding to U.

Throughout, any open set in a smooth manifold is viewed as an open submanifold.

Theorem 14.8.5 (Chart). Let M be a smooth m-manifold, let p be a point in M, let (U, ϕ) be a chart at p, and let Xp = ((∂/∂x¹)|p, …, (∂/∂xᵐ)|p) be the corresponding coordinate basis at p. Let (r¹, …, rᵐ) be standard coordinates on Rᵐ, and let ((∂/∂r¹)|ϕ(p), …, (∂/∂rᵐ)|ϕ(p)) be the standard coordinate basis at ϕ(p). Viewing U and ϕ(U) as open m-submanifolds of M and Rᵐ, respectively, make the identifications Tp(U) = Tp(M) and Tϕ(p)(ϕ(U)) = Rᵐ. Let E = (e₁, …, eₘ) be the standard basis for Rᵐ, and make the identification ((∂/∂r¹)|ϕ(p), …, (∂/∂rᵐ)|ϕ(p)) = E (as given by Theorem 14.5.7). Then:
(a) ϕ : U −→ ϕ(U) is a diffeomorphism.
(b) Xp is a basis for Tp(U).
(c) [dp(ϕ)]^{E}_{Xp} = Iₘ = [dϕ(p)(ϕ⁻¹)]^{Xp}_{E}.
(d) dϕ(p)(ϕ⁻¹)(eᵢ) = (∂/∂xⁱ)|p for i = 1, …, m.

Proof. (a): Straightforward.
(b): This follows from Theorem 14.3.3(b) and Theorem 14.3.4(a). Alternatively, setting M = U and M̄ = M in Theorem 14.8.3(b) gives the result.
(c): We have

    (∂(rⁱ ∘ ϕ)/∂xʲ)(p) = (∂(rⁱ ∘ ϕ ∘ ϕ⁻¹)/∂rʲ)(ϕ(p))        [(14.3.2)]
                       = (∂rⁱ/∂rʲ)(ϕ(p))
                       = δⁱⱼ                                  [Th 14.3.1(b)]

for i, j = 1, …, m. Then Theorem 14.4.6(b) yields [dp(ϕ)]^{E}_{Xp} = Iₘ, which proves the first equality. We also have

    [dϕ(p)(ϕ⁻¹)]^{Xp}_{E} = [dp(ϕ)⁻¹]^{Xp}_{E}                [Th 14.6.1, part (a)]
                          = ([dp(ϕ)]^{E}_{Xp})⁻¹              [Th 2.2.5(b)]
                          = Iₘ,                                [first equality]

which proves the second equality.

(d): This follows from (2.2.2), (2.2.3), and part (c).

14.9    Parametrized Surfaces

A parametrized surface on a smooth manifold M is a smooth map of the form σ(r, s) : (a, b) × (−ε, ε) −→ M, where ε > 0 is a real number. For a given point r in (a, b), we define a smooth curve σr(s) : (−ε, ε) −→ M by σr(s) = σ(r, s) for all s in (−ε, ε). Similarly, for a given point s in (−ε, ε), we define a smooth curve σs(r) : (a, b) −→ M by σs(r) = σ(r, s) for all r in (a, b). In keeping with the terminology introduced in the context of surfaces of revolution, we refer to σr as the latitude curve (or transverse curve) corresponding to r, and to σs as the longitude curve corresponding to s. In most applications, we tend to think of σ as a family of longitude curves indexed by s.

Let us now consider the smooth curve σs from the perspective of Section 14.7. For a given point s in (−ε, ε), and using (14.7.3), we denote

    (dσs/dr)(r)   by   (∂σ/∂r)(r, s).        (14.9.1)

According to (14.7.4),

    (∂σ/∂r)(r, s) is a vector in Tσ(r,s)(M).

We define (∂σ/∂s)(r, s) similarly.

Example 14.9.1. By Theorem 11.4.3, a surface of revolution is a regular surface with a chart of the form ((a, b) × (−π, π), ϕ = (t, φ)), where

    ϕ(t, φ) = (ρ(t) cos(φ), ρ(t) sin(φ), h(t))

for all (t, φ) in (a, b) × (−π, π). Thus, ϕ is a parametrized surface in the above sense, and ϕt and ϕφ are the latitude curve and longitude curve, respectively, as defined in Section 11.4.  ♦


Theorem 14.9.2. Let M be a smooth m-manifold, let σ(r, s) : (a, b) × (−ε, ε) −→ M be a parametrized surface, and let (U, (xⁱ)) be a chart on M that intersects σ((a, b) × (−ε, ε)). Then, in local coordinates,

    (∂σ/∂r)(r, s) = Σᵢ (∂(xⁱ ∘ σ)/∂r)(r, s) (∂/∂xⁱ)|σ(r,s)

and

    (∂σ/∂s)(r, s) = Σᵢ (∂(xⁱ ∘ σ)/∂s)(r, s) (∂/∂xⁱ)|σ(r,s)

for all (r, s) in the intersection.

Remark. We observe that the ∂(xⁱ ∘ σ)/∂r and ∂(xⁱ ∘ σ)/∂s are usual (Euclidean) partial derivatives.

Proof. It follows from Theorem 14.7.1(b) and (14.9.1) that

    (∂σ/∂r)(r, s) = (dσs/dr)(r)
                  = Σᵢ (d(xⁱ ∘ σs)/dr)(r) (∂/∂xⁱ)|σs(r)
                  = Σᵢ (∂(xⁱ ∘ σ)/∂r)(r, s) (∂/∂xⁱ)|σ(r,s),

which gives the first identity. The second identity is demonstrated similarly.
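The components in Theorem 14.9.2 are ordinary partial derivatives, so they can be computed symbolically. Here is a hedged sympy sketch for a hypothetical parametrized piece of the unit sphere in standard coordinates on R³.

```python
# Hedged illustration of Theorem 14.9.2 for the (hypothetical) parametrized
# surface sigma(r, s) = (cos r cos s, sin r cos s, sin s) in R^3.
import sympy as sp

r, s = sp.symbols('r s')
sigma = sp.Matrix([sp.cos(r) * sp.cos(s), sp.sin(r) * sp.cos(s), sp.sin(s)])
print(sigma.diff(r).T)   # components of (d sigma / d r): velocity of sigma_s
print(sigma.diff(s).T)   # components of (d sigma / d s): velocity of sigma_r
```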


Chapter 15

Fields on Smooth Manifolds

In this chapter, we provide a generalization of vector fields to smooth manifolds and define a range of other types of "fields".

15.1    Vector Fields

Vector fields arise in a variety of contexts. In this section, we discuss vector fields on smooth manifolds, curves, parametrized surfaces, and submanifolds.

Smooth manifolds. Let M be a smooth manifold. A vector field on M is a map X that assigns to each point p in M a vector Xp in Tp(M). As was the case for vector fields on regular surfaces, we sometimes use "|p" notation as an alternative to "subscript p" notation, especially when other subscripts are involved. According to (14.3.1), Xp satisfies the product rule

    Xp(fg) = f(p) Xp(g) + g(p) Xp(f)        (15.1.1)

for all functions f, g in C∞(M). Let f be a function in C∞(M), and let X(f) : M −→ R be the function defined by

    X(f)(p) = Xp(f)        (15.1.2)

for all p in M. It follows from (15.1.1) that

    X(fg) = f X(g) + g X(f).        (15.1.3)

We say that X is smooth (on M) if X(f) is in C∞(M) for all functions f in C∞(M). The set of smooth vector fields on M is denoted by X(M). We


make X(M) into both a vector space over R and a module over C∞(M) by defining operations as follows: for all vector fields X, Y in X(M), all functions f in C∞(M), and all real numbers c, let (X + Y)p = Xp + Yp, (fX)p = f(p)Xp, and (cX)p = cXp for all p in M.

Looking back at the definition of a smooth vector field X : U −→ Rᵐ between Euclidean spaces as presented in Section 10.3, we observe that for each point p in U, the vector Xp was taken to be in Rᵐ. With hindsight, it appears that we were implicitly identifying the tangent space Tp(Rᵐ) with Rᵐ.

Theorem 15.1.1. Let M be a smooth manifold, let X be a vector field in X(M), and let p be a point in M. If λ(t) : (a, b) −→ M is a smooth curve such that λ(t₀) = p and (dλ/dt)(t₀) = Xp for some t₀ in (a, b), then

    X(f)(p) = (d(f ∘ λ)/dt)(t₀)

for all functions f in C∞(M).

Proof. We have

    X(f)(p) = Xp(f)                      [(15.1.2)]
            = (dλ/dt)(t₀)(f)
            = (d(f ∘ λ)/dt)(t₀).          [(14.7.5)]

The next result guarantees that for a given vector in a tangent space, there is always a smooth vector field with that vector as a value. Its proof (not given) relies on bump functions.

Theorem 15.1.2 (Smooth Extension of Vector). Let M be a smooth manifold, let p be a point in M, and let v be a vector in Tp(M). Then there is a vector field X in X(M) such that Xp = v.

Let M be a smooth m-manifold, and let U be an open set in M. Viewing U as an m-manifold, let X₁, …, Xₘ be vector fields in X(U). The m-tuple X = (X₁, …, Xₘ) is said to be a frame on U if Xp = (X₁|p, …, Xₘ|p) is a basis for Tp(U) for all p in U. We will see later in this section that for each point p in M, there is always a neighborhood of p on which there is a frame. However, there may not be a frame on all of M.

Curves. Let M be a smooth manifold, and let λ : (a, b) −→ M be a smooth curve. A vector field on λ is a map J that assigns to each point t in (a, b)


a vector J(t) in Tλ(t)(M). We observe that there is no requirement that λ be injective, so the image of λ might self-intersect. As a consequence, there could be two (or more) distinct vectors assigned to a given point in λ((a, b)). This represents a distinct difference between a vector field on a curve and a vector field on a smooth manifold. Let f be a function in C∞(M), and consider the function J(f) : (a, b) −→ R defined by

    J(f)(t) = J(t)(f)        (15.1.4)

for all t in (a, b). We say that J is smooth (on λ) if J(f) is in C∞((a, b)) for all functions f in C∞(M). The set of smooth vector fields on λ is denoted by X_M(λ).

Recall from Section 14.7 that the velocity of λ at t is (dλ/dt)(t). The velocity of λ is the vector field on λ defined by the assignment t ↦ (dλ/dt)(t) for all t in (a, b). We say that λ is regular if its velocity is nowhere-vanishing; that is, (dλ/dt)(t) is not the zero vector in Tλ(t)(M) for any t in (a, b).

Theorem 15.1.3. In the above notation, dλ/dt is a vector field in X_M(λ).

Proof. Setting J = dλ/dt in (15.1.4), we have from (14.7.5) that

    (dλ/dt)(f)(t) = (dλ/dt)(t)(f) = (d(f ∘ λ)/dt)(t)

for all t in (a, b). Since f and λ are smooth, by Theorem 14.2.4, so is f ∘ λ. It follows that (dλ/dt)(f) is in C∞((a, b)) for all functions f in C∞(M).

Theorem 15.1.4. Continuing with the above notation, if X is a vector field in X(M), then X ∘ λ is a vector field in X_M(λ).

Remark. Clearly, X ∘ λ is a vector field on λ, so the assertion makes sense.

Proof. Setting J = X ∘ λ in (15.1.4), we have from (15.1.2) that

    (X ∘ λ)(f)(t) = (X ∘ λ)(t)(f) = Xλ(t)(f) = X(f)(λ(t)) = (X(f) ∘ λ)(t)

for all t in (a, b). Since X and f are smooth, by definition, so is X(f), and because λ is smooth, by Theorem 14.2.4, so is X(f) ∘ λ. Thus, (X ∘ λ)(f) is in C∞((a, b)) for all functions f in C∞(M).

Depending on λ, not every vector field J in X_M(λ) arises as the composition of λ with some vector field in X(M). For example, suppose the image of λ self-intersects at the points t₁, t₂ in (a, b) and that J(t₁) ≠ J(t₂). Since every vector field in X(M) assigns to each point p in M a single vector, there is no vector field X in X(M) such that J = X ∘ λ.


Let J₁, …, Jₘ be vector fields in X_M(λ). The m-tuple J = (J₁, …, Jₘ) is said to be a frame on λ if J(t) = (J₁(t), …, Jₘ(t)) is a basis for Tλ(t)(M) for all t in (a, b).

Parametrized surfaces. Let M be a smooth manifold, and let σ(r, s) : (a, b) × (−ε, ε) −→ M be a parametrized surface. A vector field on σ is a map V that assigns to each point (r, s) in (a, b) × (−ε, ε) a vector V(r, s) in Tσ(r,s)(M). Once again, there is no requirement that σ be injective. Let f be a function in C∞(M), and consider the function V(f) : (a, b) × (−ε, ε) −→ R defined by

    V(f)(r, s) = V(r, s)(f)        (15.1.5)

for all (r, s) in (a, b) × (−ε, ε). We say that V is smooth (on σ) if V(f) is in C∞((a, b) × (−ε, ε)) for all functions f in C∞(M). The set of smooth vector fields on σ is denoted by X_M(σ). Recalling the notation in (14.9.1), we define a vector field ∂σ/∂r on σ by the assignment (r, s) ↦ (∂σ/∂r)(r, s), and likewise for ∂σ/∂s.

Theorem 15.1.5. With the above setup, ∂σ/∂r and ∂σ/∂s are vector fields in X_M(σ).

Proof. We have

    (∂σ/∂r)(f)(r, s) = (∂σ/∂r)(r, s)(f)             [(15.1.5)]
                     = (dσs/dr)(r)(f)                [(14.9.1)]
                     = (d(f ∘ σs)/dr)(r)             [(14.7.5)]
                     = (∂(f ∘ σ)/∂r)(r, s)

for all (r, s) in (a, b) × (−ε, ε). Since f and σ are smooth, by Theorem 14.2.4, so is f ∘ σ, and therefore, so is ∂(f ∘ σ)/∂r. Thus, (∂σ/∂r)(f) is in C∞((a, b) × (−ε, ε)) for all functions f in C∞(M), and likewise for (∂σ/∂s)(f).

Submanifolds. Let M̄ be a smooth manifold, and let M be a submanifold. A vector field along M is a map V that assigns to each point p in M a vector Vp in Tp(M̄). We note that Vp is required to be in Tp(M̄) but not necessarily in Tp(M). This explains the change in terminology to "along M" from "on M". Let f be a function in C∞(M̄), and consider the function V(f) : M −→ R defined by

    V(f)(p) = Vp(f)


for all p in M. We say that V is smooth (along M) if V(f) is in C∞(M) for all functions f in C∞(M̄). The set of smooth vector fields along M is denoted by X_M̄(M). In particular, for each vector field X in X(M̄), the restriction X|M is in X_M̄(M). With the usual definitions of addition and scalar multiplication, X_M̄(M) is a vector space over R and a module over C∞(M). Furthermore, after making the appropriate identifications, X(M) is a C∞(M)-submodule of X_M̄(M). According to (14.8.1), for each point p in M, Tp(M) is a subspace of Tp(M̄). We say that a vector field V in X_M̄(M) is nowhere-tangent to M if Vp is not in Tp(M) for all p in M, or equivalently, if Vp is in Tp(M̄) \ Tp(M) for all p in M.

Let M be a smooth m-manifold, and let U be an open set in M. Recall from Theorem 14.8.4 that U is an open m-submanifold of M. Suppose U is the coordinate domain of a chart (U, (xⁱ)) on M. The ith coordinate vector field of (U, (xⁱ)) is the vector field

    ∂/∂xⁱ   in   X(U)

defined by the assignment

    p ↦ (∂/∂xⁱ)|p

for all p in U for i = 1, …, m, where we denote (∂/∂xⁱ)(p) by (∂/∂xⁱ)|p. Then (∂/∂x¹, …, ∂/∂xᵐ) is a frame on U, called the coordinate frame corresponding to (U, (xⁱ)). Let X be a (not necessarily smooth) vector field on M. Then X|U can be expressed as

    X|U = Σᵢ αⁱ (∂/∂xⁱ),        (15.1.6)

where the αⁱ are uniquely determined functions on U, called the components of X with respect to (U, (xⁱ)). For brevity, we denote X|U by X. The right-hand side of (15.1.6) is said to express X in local coordinates with respect to (U, (xⁱ)). We often give the local coordinate expression of a vector field without mentioning the underlying chart. This should not introduce any confusion because the notation for the coordinate frame is imbedded in the notation used in (15.1.6), and the specifics of the coordinate domain are usually of no immediate interest.

Example 15.1.6 (Rᵐ). Let (r¹, …, rᵐ) be standard coordinates on Rᵐ, let (∂/∂r¹, …, ∂/∂rᵐ) be the corresponding coordinate frame, called the standard coordinate frame, and let (e₁, …, eₘ) be the standard basis for Rᵐ.


It follows from Theorem 14.5.7 that (∂/∂r¹, …, ∂/∂rᵐ) can be identified with (e₁, …, eₘ), written

    (∂/∂r¹, …, ∂/∂rᵐ) = (e₁, …, eₘ).  ♦

Theorem 15.1.7 (Smoothness Criterion for Vector Fields). Let M be a smooth manifold, and let X be a (not necessarily smooth) vector field on M. Then X is in X(M) if and only if for every chart (U, (xⁱ)) on M, the components of X are in C∞(U).

Proof. (⇒): Let X = Σᵢ αⁱ (∂/∂xⁱ) be the local coordinate expression of X with respect to (U, (xⁱ)), and let p be a point in U. Since xⁱ is a function in C∞(U), by Theorem 14.2.7, there is a function x̃ⁱ in C∞(M) and a neighborhood Ũ ⊆ U of p in M such that xⁱ and x̃ⁱ agree on Ũ. By Theorem 14.3.1(b),

    X(x̃ⁱ) = Σⱼ αʲ (∂x̃ⁱ/∂xʲ) = Σⱼ αʲ (∂xⁱ/∂xʲ) = αⁱ

on Ũ. Since X is in X(M) and x̃ⁱ is in C∞(M), by definition, X(x̃ⁱ) is in C∞(M). It follows that αⁱ is smooth on Ũ. Since p was arbitrary, αⁱ is smooth on U for i = 1, …, m.

(⇐): Let X = Σᵢ αⁱ (∂/∂xⁱ) be the local coordinate expression of X with respect to (U, (xⁱ)). For a function f in C∞(M), we have

    X(f) = Σᵢ αⁱ (∂f/∂xⁱ)

on U. By assumption, the αⁱ are smooth on U, and therefore, so is X(f). Since M is covered by the coordinate domains of its atlas, X(f) is smooth on M.

Theorem 15.1.8 (Change of Coordinate Frame). Let M be a smooth m-manifold, let (U, (xⁱ)) and (Ũ, (x̃ʲ)) be overlapping charts on M, and let (∂/∂x¹, …, ∂/∂xᵐ) and (∂/∂x̃¹, …, ∂/∂x̃ᵐ) be the corresponding coordinate frames. Then

    ∂/∂xⁱ = Σⱼ (∂x̃ʲ/∂xⁱ) (∂/∂x̃ʲ)

on U ∩ Ũ for i = 1, …, m.

Proof. This follows from Theorem 14.3.5(a).

15.2    Representation of Vector Fields

Let M be a smooth manifold. A linear map

    D : C∞(M) −→ C∞(M)

is said to be a derivation [on C∞(M)] if it satisfies the following product rule:

    D(fg) = f D(g) + g D(f)        (15.2.1)

for all functions f, g in C∞(M); that is,

    D(fg)(p) = f(p) D(g)(p) + g(p) D(f)(p)

for all p in M. The set of derivations on C∞(M) is denoted by Der(M). The zero derivation in Der(M), denoted by 0, is the derivation that sends all functions in C∞(M) to the zero function in C∞(M). We make Der(M) into both a vector space over R and a module over C∞(M) by defining operations as follows: for all derivations D, E in Der(M), all functions f, g in C∞(M), and all real numbers c, let (D + E)(f)(p) = D(f)(p) + E(f)(p), (fD)(g)(p) = f(p) D(g)(p), and (cD)(f)(p) = c D(f)(p) for all p in M.

We see from (15.1.3) that a vector field on M can be thought of as a derivation on C∞(M). Pursuing this line of reasoning, let us consider the map F : X(M) −→ Der(M) defined by

    F(X)(f) = X(f)

for all vector fields X in X(M) and all functions f in C∞(M); that is, F(X)(f)(p) = X(f)(p) for all p in M, where the right-hand side of the above identity is given by (15.1.2).

Theorem 15.2.1 (Representation of Vector Fields). If M is a smooth manifold, then F is a C∞(M)-module isomorphism: X(M) ≈ Der(M).

Proof. It is easily shown that F is a C∞(M)-module homomorphism. It remains to show that F is bijective.

Injectivity. Suppose X is a vector field in X(M) such that F(X) = 0; that is, X(f) = 0 for all functions f in C∞(M). By definition, X = 0, so ker(F) = {0}. It follows from Theorem B.5.3 that F is injective.

Surjectivity. Let D be a derivation on C∞(M). For each point p in M, define a function

    X_D|p : C∞(M) −→ R


by

    X_D|p(f) = D(f)(p)        (15.2.2)

for all functions f in C∞(M). Since D is linear, so is X_D|p. For functions f, g in C∞(M), we have from (15.2.1) that

    X_D|p(fg) = D(fg)(p)
              = f(p) D(g)(p) + g(p) D(f)(p)
              = f(p) X_D|p(g) + g(p) X_D|p(f),

hence X_D|p satisfies (14.3.1). Thus, X_D|p is a tangent vector at p. Let us define a vector field X_D on M by the assignment p ↦ X_D|p for all p in M. Then

    X_D(f)(p) = X_D|p(f)        [(15.1.2)]
              = D(f)(p)          [(15.2.2)]

for all p in M, so X_D(f) = D(f). By definition, D(f) is in C∞(M) for all functions f in C∞(M), and therefore, so is X_D(f). Thus, by definition, X_D is a smooth vector field on M; that is, X_D is in X(M). Since F(X_D) = D and D was arbitrary, F is surjective. This shows that F is a C∞(M)-module isomorphism.

From now on, we often (but not always) identify X(M) with Der(M). However, we will continue to use the previous terminology and notation, and say, for example, that "X is a vector field in X(M)" rather than "X is a derivation in Der(M)". It will usually be clear from the context whether the identification is being made, but sometimes, for emphasis, we make it explicit.
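The derivation picture is easy to experiment with symbolically. The sketch below (our example, not the text's) takes a hypothetical vector field on R², realizes it as an operator on functions, and checks the product rule (15.2.1).

```python
# Hedged sympy sketch of a vector field acting as a derivation on C^inf(R^2):
# the (hypothetical) field X = y d/dx - x d/dy satisfies the product rule.
import sympy as sp

x, y = sp.symbols('x y')

def X(h):
    # X(h) = y * dh/dx - x * dh/dy
    return y * sp.diff(h, x) - x * sp.diff(h, y)

f, g = x**2, sp.sin(y)
lhs = X(f * g)
rhs = f * X(g) + g * X(f)
print(sp.simplify(lhs - rhs))   # 0, confirming X(fg) = f X(g) + g X(f)
```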

15.3    Lie Bracket

Let M be a smooth manifold, and let X and Y be vector fields in X(M). The Lie bracket of X and Y is the map [X, Y] : C∞(M) −→ C∞(M) defined by

    [X, Y](f) = X(Y(f)) − Y(X(f))        (15.3.1)

for all functions f in C∞(M). Observe that this definition employs the representation of vector fields given by Theorem 15.2.1. Reverting for the moment to the vector field formulation, we have from (15.1.2) that

    X(Y(f))(p) = Xp(Y(f))   and   Y(X(f))(p) = Yp(X(f)),

so that

    [X, Y](f)(p) = Xp(Y(f)) − Yp(X(f))

for all p in M.


Theorem 15.3.1. With the above setup, [X, Y] is a derivation on C∞(M).

Proof. Since Y is in X(M) and f is in C∞(M), by definition, Y(f) is in C∞(M). Furthermore, since X is in X(M) and Y(f) is in C∞(M), by definition, X(Y(f)) is in C∞(M). Similarly, Y(X(f)) is in C∞(M). It follows that [X, Y](f) is in C∞(M). The remainder of the verification is straightforward.

Lie bracket is the map [·, ·] : X(M) × X(M) −→ X(M) defined by the assignment

    (X, Y) ↦ [X, Y] = X ∘ Y − Y ∘ X

for all vector fields X, Y in X(M).

Theorem 15.3.2. Let M be a smooth manifold, let X, Y, Z be vector fields in X(M), and let f, g be functions in C∞(M). Then:
(a) [Y, X] = −[X, Y].
(b) [X + Y, Z] = [X, Z] + [Y, Z].
(c) [X, Y + Z] = [X, Y] + [X, Z].
(d) [fX, gY] = fg[X, Y] + f X(g)Y − g Y(f)X.
(e) [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0. (Jacobi's identity)

Proof. (a)–(d): Straightforward.
(e): We have

    [X, [Y, Z]] = X ∘ [Y, Z] − [Y, Z] ∘ X
                = X ∘ (Y ∘ Z − Z ∘ Y) − (Y ∘ Z − Z ∘ Y) ∘ X
                = X ∘ Y ∘ Z − X ∘ Z ∘ Y − Y ∘ Z ∘ X + Z ∘ Y ∘ X.

Likewise,

    [Y, [Z, X]] = Y ∘ Z ∘ X − Y ∘ X ∘ Z − Z ∘ X ∘ Y + X ∘ Z ∘ Y

and

    [Z, [X, Y]] = Z ∘ X ∘ Y − Z ∘ Y ∘ X − X ∘ Y ∘ Z + Y ∘ X ∘ Z.

Summing the identities gives the result.

It was observed in Section 15.1 that X(M) is a module over C∞(M). It follows from Theorem 15.3.2 that X(M) is also a Lie algebra over R.

Theorem 15.3.3. Let M be a smooth manifold, let X, Y be vector fields in X(M), and, in local coordinates, let

    X = Σᵢ αⁱ (∂/∂xⁱ)   and   Y = Σⱼ βʲ (∂/∂xʲ).

Then:
(a)

    [X, Y] = Σⱼ Σᵢ (αⁱ (∂βʲ/∂xⁱ) − βⁱ (∂αʲ/∂xⁱ)) (∂/∂xʲ).

(b)

    [∂/∂xⁱ, ∂/∂xʲ] = 0

for i, j = 1, …, m.

Proof. (a): For a function f in C∞(M), we have

    (X ∘ Y)(f) = X(Y(f)) = Σᵢ αⁱ (∂/∂xⁱ)(Σⱼ βʲ (∂f/∂xʲ))
               = Σᵢⱼ αⁱ ((∂βʲ/∂xⁱ)(∂f/∂xʲ) + βʲ (∂²f/∂xⁱ∂xʲ))
               = (Σᵢⱼ αⁱ (∂βʲ/∂xⁱ)(∂/∂xʲ) + Σᵢⱼ αⁱ βʲ (∂²/∂xⁱ∂xʲ))(f).

Since f was arbitrary,

    X ∘ Y = Σᵢⱼ αⁱ (∂βʲ/∂xⁱ)(∂/∂xʲ) + Σᵢⱼ αⁱ βʲ (∂²/∂xⁱ∂xʲ).

Likewise,

    Y ∘ X = Σᵢⱼ βⁱ (∂αʲ/∂xⁱ)(∂/∂xʲ) + Σᵢⱼ βⁱ αʲ (∂²/∂xⁱ∂xʲ).

The preceding two identities and Theorem 14.3.6 give the result.

(b): This follows from part (a).
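The component formula of Theorem 15.3.3(a) is directly computable. The sketch below (our example, with a hypothetical pair of fields on R² in standard coordinates) evaluates it with sympy.

```python
# Hedged sympy computation of Theorem 15.3.3(a) on R^2 for the (hypothetical)
# fields X = x d/dx and Y = (x + y) d/dy.
import sympy as sp

x, y = sp.symbols('x y')
alpha = [x, sp.Integer(0)]        # components of X
beta = [sp.Integer(0), x + y]     # components of Y
coords = [x, y]

bracket = [
    sum(alpha[i] * sp.diff(beta[j], coords[i])
        - beta[i] * sp.diff(alpha[j], coords[i]) for i in range(2))
    for j in range(2)
]
print(bracket)   # [0, x], so [X, Y] = x d/dy
```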

15.4    Covector Fields

Let M be a smooth m-manifold. A covector field on M is a map ω that assigns to each point p in M a covector ωp in Tp*(M). We say that ω vanishes at p if ωp = 0, is nonvanishing at p if ωp ≠ 0, and is nowhere-vanishing (on M) if it is nonvanishing at every p in M. Let X be a vector field in X(M), and let ω(X) : M −→ R be the function defined by

    ω(X)(p) = ωp(Xp)        (15.4.1)

for all p in M. We say that ω is smooth (on M) if the function ω(X) is in C∞(M) for all vector fields X in X(M). The set of smooth covector fields on

377

15.4 Covector Fields

M is denoted by X∗ (M ). We make X∗ (M ) into both a vector space over R and a module over C ∞ (M ) by defining operations as follows: for all covector fields ω, ξ in X∗ (M ), all functions f in C ∞ (M ), and all real numbers c, let (ω + ξ)p = ωp + ξp , (f ω)p = f (p)ωp , and (cω)p = cωp for all p in M . With the identification Tp∗∗ (M ) = Tp (M ), we have from (5.1.3) that  ωp (Xp ) = Xp ωp for all p in M , which we express as ω(X) = X(ω).

(15.4.2)

Let U be an open set in M . Viewing U as a smooth m-manifold, let ω 1 , . . . , ω m be covector fields in X∗ (U ). The m-tuple Υ = (ω 1 , . . . , ω m ) is said to be a dual frame on U if Υ(p) = (ω 1 |p , . . . , ω m |p ) is a basis for Tp∗ (U ) for all p in U . Given a frame (X1 , . . . , Xm ) on U , there is a uniquely determined dual frame (ω 1 , . . . , ω m ) on U defined as follows: (ω 1 |p , . . . , ω m |p ) is the dual basis corresponding to (X1 |p , . . . , Xm |p ) for all p in U . Conversely, given a dual frame on U , there is a uniquely determined frame on U defined in the obvious way.  i Suppose U is the coordinate domain of a chart U, (x ) on M . The ith  i coordinate covector field of U, (x ) is the covector field d(xi )

in

X∗ (U )

defined by the assignment p 7−→ dp (xi )  for all p in U for i = 1, . . . , m. Then d(x1 ), . . . , d(xm ) is a dual frame on U , called the dual coordinate frame corresponding to U, (xi ) . Let ω be a (not necessarily smooth) covector field on M . Then ω|U can be expressed as X ω|U = αi d(xi ), (15.4.3) i

where the αi are uniquely determined functions on U , called the components  of ω with respect to U, (xi ) . For brevity, we denote ω|U

by

ω.

The right-hand side  of (15.4.3) is said to express ω in local coordinates with respect to U, (xi ) .


Example 15.4.1 ($\mathbb{R}^m$). Let $(r^1, \dots, r^m)$ be standard coordinates on $\mathbb{R}^m$, let $(d(r^1), \dots, d(r^m))$ be the corresponding dual coordinate frame, called the standard dual coordinate frame, and let $(\xi^1, \dots, \xi^m)$ be the dual basis corresponding to the standard basis for $\mathbb{R}^m$. It follows from Theorem 14.5.7 that $(d(r^1), \dots, d(r^m))$ can be identified with $(\xi^1, \dots, \xi^m)$, written

$$(d(r^1), \dots, d(r^m)) = (\xi^1, \dots, \xi^m).$$

The next three results are the covector field counterparts to Theorem 15.1.2, Theorem 15.1.7, and Theorem 15.1.8.

Theorem 15.4.2 (Smoothness Criterion for Covector Fields). Let $M$ be a smooth manifold, and let $\omega$ be a (not necessarily smooth) covector field on $M$. Then $\omega$ is in $X^*(M)$ if and only if for every chart $(U, (x^i))$ on $M$, the components of $\omega$ are in $C^\infty(U)$.

Theorem 15.4.3 (Change of Dual Coordinate Frame). Let $M$ be a smooth $m$-manifold, let $(U, (x^i))$ and $(\widetilde{U}, (\tilde{x}^j))$ be overlapping charts on $M$, and let $\mathcal{X}^* = (d(x^1), \dots, d(x^m))$ and $\widetilde{\mathcal{X}}^* = (d(\tilde{x}^1), \dots, d(\tilde{x}^m))$ be the corresponding dual coordinate frames. Then

$$d(x^j) = \sum_i \frac{\partial x^j}{\partial \tilde{x}^i} \, d(\tilde{x}^i)$$

on $U \cap \widetilde{U}$ for $j = 1, \dots, m$.

Proof. This follows from Theorem 14.5.4(a).

Theorem 15.4.4 (Smooth Extension of Covector). Let $M$ be a smooth manifold, let $p$ be a point in $M$, and let $\eta$ be a covector in $T_p^*(M)$. Then there is a covector field $\omega$ in $X^*(M)$ such that $\omega_p = \eta$.

Let $M$ be a smooth manifold, and define a map $d : C^\infty(M) \to X^*(M)$, called the exterior derivative, by

$$d(f)_p(v) = d_p(f)(v) = v(f) \qquad (15.4.4)$$

for all functions $f$ in $C^\infty(M)$, all points $p$ in $M$, and all vectors $v$ in $T_p(M)$, where the second equality follows from Theorem 14.5.5. Part (a) of the next result shows that this definition makes sense.

Theorem 15.4.5. Let $M$ be a smooth manifold, let $f$ be a function in $C^\infty(M)$, and let $X$ be a vector field in $X(M)$. Then:
(a) $d(f)$ is a covector field in $X^*(M)$.
(b) $d(f)(X) = X(f)$.


Proof. (b): By Theorem 14.5.1, $d(f)_p$ is a covector in $T_p^*(M)$, hence $d(f)$ is a covector field. For a point $p$ in $M$, we have

$$\begin{aligned}
d(f)(X)(p) &= d(f)_p(X_p) && [(15.4.1)] \\
&= X_p(f) && [(15.4.4)] \\
&= X(f)(p). && [(15.1.2)]
\end{aligned}$$

Since $p$ was arbitrary, the result follows.
(a): Since $X$ and $f$ are smooth, by definition, so is $X(f)$. Because $X$ was arbitrary, it follows from part (b) that, by definition, $d(f)$ is smooth.

Theorem 15.4.6. Let $M$ be a smooth manifold, let $f, g$ be functions in $C^\infty(M)$, and let $c$ be a real number. Then:
(a) $d(cf + g) = c\, d(f) + d(g)$.
(b) $d(fg) = f\, d(g) + g\, d(f)$.

Proof. This follows from Theorem 14.5.6 and (15.4.4).

Theorem 15.4.7. If $M$ is a smooth manifold and $f$ is a function in $C^\infty(M)$, then, in local coordinates,

$$d(f) = \sum_i \frac{\partial f}{\partial x^i} \, d(x^i).$$

Proof. For a point $p$ in $M$ and a vector $v$ in $T_p(M)$, we have

$$\begin{aligned}
d(f)_p(v) &= d_p(f)(v) && [(15.4.4)] \\
&= \sum_i \frac{\partial f}{\partial x^i}(p) \, d_p(x^i)(v) && [\text{Th 14.5.3}] \\
&= \sum_i \frac{\partial f}{\partial x^i}(p) \, d(x^i)_p(v) && [(15.4.4)] \\
&= \left( \sum_i \frac{\partial f}{\partial x^i} \, d(x^i) \right)_{\!p} (v).
\end{aligned}$$

Since $p$ and $v$ were arbitrary, the result follows.
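In computational terms, Theorem 15.4.7 says that, in a chart, the components of $d(f)$ are simply the partial derivatives of $f$. A minimal sympy sketch (ours; the chart $(x^1, x^2, x^3)$ and the function $f$ are arbitrary choices):

```python
# A sympy sketch (ours) of Theorem 15.4.7: the components of d(f) with
# respect to the dual coordinate frame (d(x^1), d(x^2), d(x^3)).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = x1 * sp.exp(x2) + x3**2

# d(f) = sum_i (df/dx^i) d(x^i)
components = [sp.diff(f, v) for v in (x1, x2, x3)]
print(components)   # [exp(x2), x1*exp(x2), 2*x3]
```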

15.5 Representation of Covector Fields

Theorem 15.5.1. If $M$ is a smooth manifold and $\omega$ is a covector field in $X^*(M)$, then $\omega$ is $C^\infty(M)$-linear. That is, for all vector fields $X, Y$ in $X(M)$ and all functions $f$ in $C^\infty(M)$:
(a) $\omega(X + Y) = \omega(X) + \omega(Y)$.
(b) $\omega(fX) = f\,\omega(X)$.


Proof. For a point $p$ in $M$, we have

$$\begin{aligned}
\omega(X + Y)(p) &= \omega_p((X + Y)_p) && [(15.4.1)] \\
&= \omega_p(X_p + Y_p) \\
&= \omega_p(X_p) + \omega_p(Y_p) \\
&= \omega(X)(p) + \omega(Y)(p) && [(15.4.1)] \\
&= (\omega(X) + \omega(Y))(p)
\end{aligned}$$

and

$$\begin{aligned}
\omega(fX)(p) &= \omega_p((fX)_p) && [(15.4.1)] \\
&= \omega_p(f(p)\,X_p) \\
&= f(p)\,\omega_p(X_p) \\
&= (f\omega)_p(X_p) \\
&= (f\,\omega(X))(p). && [(15.4.1)]
\end{aligned}$$

Since $p$ was arbitrary, the result follows.

Following Section B.5, we denote by $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$ the $C^\infty(M)$-module of $C^\infty(M)$-linear maps from $X(M)$ to $C^\infty(M)$. Let us define a map

$$\mathcal{C} : X^*(M) \to \mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M)),$$

called the characterization map, by

$$\mathcal{C}(\omega)(X) = \omega(X) \qquad (15.5.1)$$

for all covector fields $\omega$ in $X^*(M)$ and all vector fields $X$ in $X(M)$, where the right-hand side of (15.5.1) is given by (15.4.1). It follows from Theorem 15.5.1 that $\mathcal{C}(\omega)$ is a map in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$, so the definition makes sense. At this point, $\mathcal{C}(\omega)$ amounts to little more than notational shorthand for viewing the covector field $\omega$ in $X^*(M)$ from the perspective of (15.4.1): as a mechanism for turning vector fields into functions. The purpose of this formalism will become clear as we proceed.

We say that an $\mathbb{R}$-linear map $\mathcal{F} : X(M) \to C^\infty(M)$ is determined pointwise if for all points $p$ in $M$, we have $\mathcal{F}(X)(p) = \mathcal{F}(\widetilde{X})(p)$ whenever $X, \widetilde{X}$ are vector fields in $X(M)$ such that $X_p = \widetilde{X}_p$. Since $\mathcal{F}(X)(p) = \mathcal{F}(\widetilde{X})(p)$ is equivalent to $\mathcal{F}(X - \widetilde{X})(p) = 0$, and $X_p = \widetilde{X}_p$ is equivalent to $(X - \widetilde{X})_p = 0$, $\mathcal{F}$ is determined pointwise if and only if for every point $p$ in $M$, $\mathcal{F}(Y)(p) = 0$ whenever $Y$ is a vector field in $X(M)$ such that $Y_p = 0$.

Let $\omega$ be a covector field in $X^*(M)$, let $p$ be a point in $M$, and let $X$ be a vector field in $X(M)$ such that $X_p = 0$. It follows from (15.4.1) and (15.5.1) that $\mathcal{C}(\omega)(X)(p) = 0$. Thus, $\mathcal{C}(\omega)$ is a map in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$ that is determined pointwise. Remarkably, as the next result shows, all maps in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$ have this property.


Theorem 15.5.2. If $M$ is a smooth manifold and $\Upsilon$ is a map in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$, then $\Upsilon$ is determined pointwise.

Proof. Let $p$ be a point in $M$, and let $X$ be a vector field in $X(M)$ such that $X_p = 0$. We need to show that $\Upsilon(X)(p) = 0$. Only the case where $M$ is covered by a single chart $(M, (x^i))$ is considered. In local coordinates, let $X = \sum_i \alpha^i (\partial/\partial x^i)$. This gives an expression for $X$ on all of $M$, so we can apply $\Upsilon$ to obtain

$$\Upsilon(X) = \sum_i \alpha^i \, \Upsilon\!\left( \frac{\partial}{\partial x^i} \right),$$

hence

$$\Upsilon(X)(p) = \sum_i \alpha^i(p) \, \Upsilon\!\left( \frac{\partial}{\partial x^i} \right)(p).$$

Since $X_p = 0$, it follows that each $\alpha^i(p) = 0$, so $\Upsilon(X)(p) = 0$.

Theorem 15.5.3 (Covector Field Characterization Theorem). If $M$ is a smooth manifold, then $\mathcal{C}$ is a $C^\infty(M)$-module isomorphism:

$$X^*(M) \approx \mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M)).$$

Proof. It is easily shown that $\mathcal{C}$ is a $C^\infty(M)$-module homomorphism. We need to show that $\mathcal{C}$ is bijective.

Injectivity. Suppose $\mathcal{C}(\omega) = 0$ for some covector field $\omega$ in $X^*(M)$. It follows from (15.5.1) that $\omega(X) = 0$ for all vector fields $X$ in $X(M)$. Let $p$ be a point in $M$, and let $v$ be a vector in $T_p(M)$. By Theorem 15.1.2, there is a vector field $Y$ in $X(M)$ such that $Y_p = v$. Then $\omega(Y) = 0$, so (15.4.1) gives $\omega_p(v) = \omega_p(Y_p) = \omega(Y)(p) = 0$. Since $p$ and $v$ were arbitrary, $\omega = 0$. Thus, $\ker(\mathcal{C}) = \{0\}$. By Theorem B.5.3, $\mathcal{C}$ is injective.

Surjectivity. Let $\Upsilon$ be a map in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$. We need to find a covector field $\widetilde{\Upsilon}$ in $X^*(M)$ such that $\mathcal{C}(\widetilde{\Upsilon}) = \Upsilon$. Let $p$ be a point in $M$, and define a map $\widetilde{\Upsilon}_p : T_p(M) \to \mathbb{R}$ by

$$\widetilde{\Upsilon}_p(v) = \Upsilon(X)(p) \qquad (15.5.2)$$

for all vectors $v$ in $T_p(M)$, where $X$ is any vector field in $X(M)$ such that $X_p = v$. By Theorem 15.1.2, such a vector field always exists. We have from Theorem 15.5.2 that $\widetilde{\Upsilon}_p$ is independent of the choice of $X$, so $\widetilde{\Upsilon}_p$ is well-defined. Let $w$ be a vector in $T_p(M)$, let $Y$ be a vector field in $X(M)$ such that $Y_p = w$, and let $c$ be a real number. Then $cX + Y$ is a vector field in $X(M)$ and

$$(cX + Y)_p = cX_p + Y_p = cv + w,$$


hence

$$\begin{aligned}
\widetilde{\Upsilon}_p(cv + w) &= \Upsilon(cX + Y)(p) && [(15.5.2)] \\
&= c\,\Upsilon(X)(p) + \Upsilon(Y)(p) \\
&= c\,\widetilde{\Upsilon}_p(v) + \widetilde{\Upsilon}_p(w), && [(15.5.2)]
\end{aligned}$$

so $\widetilde{\Upsilon}_p$ is linear. Thus, $\widetilde{\Upsilon}_p$ is a covector in $T_p^*(M)$. Let us define a covector field $\widetilde{\Upsilon}$ on $M$ by the assignment $p \mapsto \widetilde{\Upsilon}_p$. We claim that $\widetilde{\Upsilon}$ is smooth; that is, the function $\widetilde{\Upsilon}(Z)$ is smooth for all vector fields $Z$ in $X(M)$. From (15.4.1) and (15.5.2),

$$\widetilde{\Upsilon}(Z)(p) = \widetilde{\Upsilon}_p(Z_p) = \Upsilon(Z)(p)$$

for all $p$ in $M$, hence

$$\widetilde{\Upsilon}(Z) = \Upsilon(Z). \qquad (15.5.3)$$

Since $\Upsilon$ is in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$ and $Z$ is in $X(M)$, by definition, $\Upsilon(Z)$ is a smooth function, and therefore, so is $\widetilde{\Upsilon}(Z)$. This proves the claim. It follows from (15.5.1) and (15.5.3) that $\mathcal{C}(\widetilde{\Upsilon})(Z) = \Upsilon(Z)$. Since $Z$ was arbitrary, $\mathcal{C}(\widetilde{\Upsilon}) = \Upsilon$. Thus, $\mathcal{C}$ is surjective.

It is useful to isolate an aspect of the proof of Theorem 15.5.3. Let $\Upsilon$ be a map in $\mathrm{Lin}_{C^\infty(M)}(X(M), C^\infty(M))$. We showed that $\mathcal{C}^{-1}(\Upsilon)$ is the covector field in $X^*(M)$ defined by

$$\mathcal{C}^{-1}(\Upsilon)_p(v) = \Upsilon(X)(p) \qquad (15.5.4)$$

for all points $p$ in $M$ and all vectors $v$ in $T_p(M)$, where $X$ is any vector field in $X(M)$ such that $X_p = v$, the existence of which is guaranteed by Theorem 15.1.2.

Now that we have Theorem 15.5.3, we usually (but not always) view $X^*(M)$ as the vector space over $\mathbb{R}$ and module over $C^\infty(M)$ consisting of all $C^\infty(M)$-linear maps from $X(M)$ to $C^\infty(M)$. We will see a significant generalization of Theorem 15.5.3 in Section 15.7.

15.6 Tensor Fields

In this section, we generalize some of the material in Section 15.4.

Let $M$ be a smooth $m$-manifold, and let $r, s \geq 0$ be integers. An $(r,s)$-tensor field on $M$ is a map $A$ that assigns to each point $p$ in $M$ an $(r,s)$-tensor $A_p$ in $T^r_s(T_p(M))$. We also refer to $A$ as an $r$-contravariant-$s$-covariant tensor field or simply a tensor field, and we define the rank of $A$ to be $(r,s)$. When $s = 0$, $A$ is said to be an $r$-contravariant tensor field or just a contravariant tensor field; and when $r = 0$, $A$ is said to be an $s$-covariant tensor field or simply a covariant tensor field.

Let $\omega^1, \dots, \omega^r$ be covector fields in $X^*(M)$, let $X_1, \dots, X_s$ be vector fields in $X(M)$, and consider the function $A(\omega^1, \dots, \omega^r, X_1, \dots, X_s) : M \to \mathbb{R}$ defined by

$$A(\omega^1, \dots, \omega^r, X_1, \dots, X_s)(p) = A_p(\omega^1|_p, \dots, \omega^r|_p, X_1|_p, \dots, X_s|_p) \qquad (15.6.1)$$

for all $p$ in $M$. We say that $A$ is smooth (on $M$) if the function $A(\omega^1, \dots, \omega^r, X_1, \dots, X_s)$ is in $C^\infty(M)$ for all covector fields $\omega^1, \dots, \omega^r$ in $X^*(M)$ and all vector fields $X_1, \dots, X_s$ in $X(M)$. The set of smooth $(r,s)$-tensor fields on $M$ is denoted by $T^r_s(M)$. In particular,

$$T^0_1(M) = X^*(M) \quad\text{and}\quad T^1_0(M) = X(M). \qquad (15.6.2)$$

For completeness, we define

$$T^0_0(M) = C^\infty(M). \qquad (15.6.3)$$

It is instructive to compare identities (15.6.2) and (15.6.3) to identities (5.1.1) and (5.1.2). From now on, we avoid the following trivial case.

Throughout, unless stated otherwise, $(r,s) \neq (0,0)$.

Defining operations on $T^r_s(M)$ in a manner analogous to that described for vector fields and covector fields, we make $T^r_s(M)$ into both a vector space over $\mathbb{R}$ and a module over $C^\infty(M)$.

Many of the definitions presented for smooth manifolds are expressed in a pointwise fashion (not to be confused with "determined pointwise") and ultimately rest on earlier definitions given in the context of vector spaces. For example, a tensor field on a smooth manifold is essentially a collection of tensors, one for each point in the smooth manifold. An important consequence of the pointwise approach is that earlier theorems presented for vector spaces generalize immediately to smooth manifolds. We will say that the resulting smooth manifold theorem is the manifold version (abbreviated mv) of the earlier vector space theorem. Here is an example.

Theorem 15.6.1. Let $M$ be a smooth manifold, let $A, A_1, A_2$ and $B, B_1, B_2$ and $C$ be tensor fields in $T^r_s(M)$ and $T^{r'}_{s'}(M)$ and $T^{r''}_{s''}(M)$, respectively, and let $f$ be a function in $C^\infty(M)$. Then:
(a) $(A_1 + A_2) \otimes B = A_1 \otimes B + A_2 \otimes B$.
(b) $A \otimes (B_1 + B_2) = A \otimes B_1 + A \otimes B_2$.
(c) $(fA) \otimes B = f(A \otimes B) = A \otimes (fB)$.
(d) $(A \otimes B) \otimes C = A \otimes (B \otimes C)$.

Proof. This is the manifold version of Theorem 5.1.2.

Let $M$ be a smooth $m$-manifold, and let $(U, (x^i))$ be a chart on $M$. Let $1 \leq i_1, \dots, i_r \leq m$ and $1 \leq j_1, \dots, j_s \leq m$ be integers, and consider the tensor field

$$\frac{\partial}{\partial x^{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{i_r}} \otimes d(x^{j_1}) \otimes \cdots \otimes d(x^{j_s}) \quad\text{in } T^r_s(U)$$


defined by the assignment

$$p \mapsto \frac{\partial}{\partial x^{i_1}}\bigg|_p \otimes \cdots \otimes \frac{\partial}{\partial x^{i_r}}\bigg|_p \otimes d_p(x^{j_1}) \otimes \cdots \otimes d_p(x^{j_s})$$

for all $p$ in $U$. Suppose $A$ is a (not necessarily smooth) $(r,s)$-tensor field on $M$. Then $A|_U$ can be expressed as

$$A|_U = \sum_{\substack{1 \leq i_1, \dots, i_r \leq m \\ 1 \leq j_1, \dots, j_s \leq m}} A^{i_1 \dots i_r}_{j_1 \dots j_s} \, \frac{\partial}{\partial x^{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{i_r}} \otimes d(x^{j_1}) \otimes \cdots \otimes d(x^{j_s}), \qquad (15.6.4)$$

where the $A^{i_1 \dots i_r}_{j_1 \dots j_s}$ are uniquely determined functions on $U$, called the components of $A$ with respect to $(U, (x^i))$. For brevity, we denote $A|_U$ by $A$. The right-hand side of (15.6.4) is said to express $A$ in local coordinates with respect to $(U, (x^i))$.

Theorem 15.6.2 (Smoothness Criterion for (r, s)-Tensor Fields). With the above setup, $A$ is in $T^r_s(M)$ if and only if for every chart $(U, (x^i))$ on $M$, the components of $A$ are in $C^\infty(U)$.

Theorem 15.6.3 (Change of Coordinate Frame). Let $M$ be a smooth $m$-manifold, let $A$ be a tensor field in $T^r_s(M)$, let $(U, (x^i))$ and $(\widetilde{U}, (\tilde{x}^i))$ be overlapping charts on $M$, and let $A^{i_1 \dots i_r}_{j_1 \dots j_s}$ and $\widetilde{A}^{i_1 \dots i_r}_{j_1 \dots j_s}$ be the components of $A$ with respect to $(U, (x^i))$ and $(\widetilde{U}, (\tilde{x}^i))$, respectively. Then

$$\widetilde{A}^{i_1 \dots i_r}_{j_1 \dots j_s} = \sum_{\substack{1 \leq k_1, \dots, k_r \leq m \\ 1 \leq l_1, \dots, l_s \leq m}} \frac{\partial \tilde{x}^{i_1}}{\partial x^{k_1}} \cdots \frac{\partial \tilde{x}^{i_r}}{\partial x^{k_r}} \frac{\partial x^{l_1}}{\partial \tilde{x}^{j_1}} \cdots \frac{\partial x^{l_s}}{\partial \tilde{x}^{j_s}} \, A^{k_1 \dots k_r}_{l_1 \dots l_s}$$

on $U \cap \widetilde{U}$.

Proof. This is the manifold version of Theorem 5.1.4, but it is instructive to work through the details. We have from Theorem 15.1.8 and Theorem 15.4.3 that

$$\begin{aligned}
\widetilde{A}^{i_1 \dots i_r}_{j_1 \dots j_s}
&= A\!\left( d(\tilde{x}^{i_1}), \dots, d(\tilde{x}^{i_r}), \frac{\partial}{\partial \tilde{x}^{j_1}}, \dots, \frac{\partial}{\partial \tilde{x}^{j_s}} \right) \\
&= A\!\left( \sum_{k_1} \frac{\partial \tilde{x}^{i_1}}{\partial x^{k_1}} d(x^{k_1}), \dots, \sum_{k_r} \frac{\partial \tilde{x}^{i_r}}{\partial x^{k_r}} d(x^{k_r}),
\sum_{l_1} \frac{\partial x^{l_1}}{\partial \tilde{x}^{j_1}} \frac{\partial}{\partial x^{l_1}}, \dots, \sum_{l_s} \frac{\partial x^{l_s}}{\partial \tilde{x}^{j_s}} \frac{\partial}{\partial x^{l_s}} \right) \\
&= \sum_{\substack{1 \leq k_1, \dots, k_r \leq m \\ 1 \leq l_1, \dots, l_s \leq m}} \frac{\partial \tilde{x}^{i_1}}{\partial x^{k_1}} \cdots \frac{\partial \tilde{x}^{i_r}}{\partial x^{k_r}} \frac{\partial x^{l_1}}{\partial \tilde{x}^{j_1}} \cdots \frac{\partial x^{l_s}}{\partial \tilde{x}^{j_s}} \, A\!\left( d(x^{k_1}), \dots, d(x^{k_r}), \frac{\partial}{\partial x^{l_1}}, \dots, \frac{\partial}{\partial x^{l_s}} \right) \\
&= \sum_{\substack{1 \leq k_1, \dots, k_r \leq m \\ 1 \leq l_1, \dots, l_s \leq m}} \frac{\partial \tilde{x}^{i_1}}{\partial x^{k_1}} \cdots \frac{\partial \tilde{x}^{i_r}}{\partial x^{k_r}} \frac{\partial x^{l_1}}{\partial \tilde{x}^{j_1}} \cdots \frac{\partial x^{l_s}}{\partial \tilde{x}^{j_s}} \, A^{k_1 \dots k_r}_{l_1 \dots l_s}.
\end{aligned}$$
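To see Theorem 15.6.3 in action, the following sympy sketch (ours, not from the text) transforms the components of the $(0,2)$-tensor field $A = d(r) \otimes d(s)$ on $\mathbb{R}^2$ from Cartesian coordinates $(x^1, x^2) = (r, s)$ to polar coordinates $(\tilde{x}^1, \tilde{x}^2) = (\rho, \phi)$, where $r = \rho\cos(\phi)$ and $s = \rho\sin(\phi)$.

```python
# A sympy sketch (ours) of Theorem 15.6.3 for a (0,2)-tensor field:
# A~_ij = sum_{k,l} (dx^k / dx~^i)(dx^l / dx~^j) A_kl.
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
x = [rho * sp.cos(phi), rho * sp.sin(phi)]   # (r, s) in terms of (rho, phi)
xt = [rho, phi]
A = [[0, 1], [0, 0]]                         # components of d(r) (x) d(s)

At = [[sp.expand(sum(sp.diff(x[k], xt[i]) * sp.diff(x[l], xt[j]) * A[k][l]
                     for k in range(2) for l in range(2)))
       for j in range(2)] for i in range(2)]
print(At)
# [[sin(phi)*cos(phi), rho*cos(phi)**2],
#  [-rho*sin(phi)**2, -rho**2*sin(phi)*cos(phi)]]
```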

15.7 Representation of Tensor Fields

In this section, we present generalizations of the definitions and results of Section 15.5. Let $M$ be a smooth manifold, and let $r, s \geq 0$ be integers. Following Section B.5, we denote by $\mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M))$ the $C^\infty(M)$-module of $C^\infty(M)$-multilinear maps from $X^*(M)^r \times X(M)^s$ to $C^\infty(M)$. Let us define a map

$$\mathcal{C}^r_s : T^r_s(M) \to \mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M)),$$

called the characterization map, by

$$\mathcal{C}^r_s(A)(\omega^1, \dots, \omega^r, X_1, \dots, X_s) = A(\omega^1, \dots, \omega^r, X_1, \dots, X_s) \qquad (15.7.1)$$

for all tensor fields $A$ in $T^r_s(M)$, all covector fields $\omega^1, \dots, \omega^r$ in $X^*(M)$, and all vector fields $X_1, \dots, X_s$ in $X(M)$, where the right-hand side of (15.7.1) is given by (15.6.1). It follows from a generalization of Theorem 15.5.1 that $A$ is $C^\infty(M)$-multilinear. Thus, $\mathcal{C}^r_s(A)$ is in $\mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M))$, so the definition makes sense.

We say that an $\mathbb{R}$-linear map $\mathcal{F} : X^*(M)^r \times X(M)^s \to C^\infty(M)$ is determined pointwise if for all points $p$ in $M$,

$$\mathcal{F}(\omega^1, \dots, \omega^r, X_1, \dots, X_s)(p) = \mathcal{F}(\tilde{\omega}^1, \dots, \tilde{\omega}^r, \widetilde{X}_1, \dots, \widetilde{X}_s)(p)$$

whenever $\omega^i, \tilde{\omega}^i$ are covector fields in $X^*(M)$ such that $\omega^i|_p = \tilde{\omega}^i|_p$ for $i = 1, \dots, r$, and $X_j, \widetilde{X}_j$ are vector fields in $X(M)$ such that $X_j|_p = \widetilde{X}_j|_p$ for $j = 1, \dots, s$.


Theorem 15.7.1. If $M$ is a smooth manifold and $\Upsilon$ is a map in $\mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M))$, then $\Upsilon$ is determined pointwise.

Theorem 15.7.2 (Tensor Field Characterization Theorem). If $M$ is a smooth manifold, then $\mathcal{C}^r_s$ is a $C^\infty(M)$-module isomorphism:

$$T^r_s(M) \approx \mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M)).$$

Let $\Upsilon$ be a map in $\mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M))$. Analogous to (15.5.4), $(\mathcal{C}^r_s)^{-1}(\Upsilon)$ is the tensor field in $T^r_s(M)$ defined by

$$(\mathcal{C}^r_s)^{-1}(\Upsilon)_p(\eta^1, \dots, \eta^r, v_1, \dots, v_s) = \Upsilon(\omega^1, \dots, \omega^r, X_1, \dots, X_s)(p) \qquad (15.7.2)$$

for all points $p$ in $M$, all covectors $\eta^1, \dots, \eta^r$ in $T_p^*(M)$, and all vectors $v_1, \dots, v_s$ in $T_p(M)$, where $\omega^1, \dots, \omega^r$ are any covector fields in $X^*(M)$ such that $\omega^i|_p = \eta^i$ for $i = 1, \dots, r$, and $X_1, \dots, X_s$ are any vector fields in $X(M)$ such that $X_j|_p = v_j$ for $j = 1, \dots, s$.

For the remainder of this section, we attempt to place the above technical material in a larger context.

Let $A$ be a tensor field in $T^r_s(M)$. For a given point $p$ in $M$, $A_p$ is a tensor in $T^r_s(T_p(M))$, and for given covectors $\eta^1, \dots, \eta^r$ in $T_p^*(M)$ and vectors $v_1, \dots, v_s$ in $T_p(M)$, $A_p(\eta^1, \dots, \eta^r, v_1, \dots, v_s)$ is its value in $\mathbb{R}$. Making the identification given by the isomorphism in Theorem 15.7.2, $A$ can now be viewed as a map in $\mathrm{Mult}_{C^\infty(M)}(X^*(M)^r \times X(M)^s, C^\infty(M))$. For given covector fields $\omega^1, \dots, \omega^r$ in $X^*(M)$ and vector fields $X_1, \dots, X_s$ in $X(M)$, $A(\omega^1, \dots, \omega^r, X_1, \dots, X_s)$ is a function in $C^\infty(M)$, and for a given point $p$ in $M$, $A(\omega^1, \dots, \omega^r, X_1, \dots, X_s)(p)$ is its value in $\mathbb{R}$. The innovation introduced by Theorem 15.7.2 is that we have gone from evaluating the tensor $A_p$ at covectors and vectors to evaluating the function $A$ at forms and vector fields.

Now that we have Theorem 15.7.2 at our disposal, we often (but not always) view $T^r_s(M)$ as the vector space over $\mathbb{R}$ and module over $C^\infty(M)$ consisting of all $C^\infty(M)$-multilinear maps from $X^*(M)^r \times X(M)^s$ to $C^\infty(M)$. We will not be fastidious about whether "$\mathcal{C}^r_s$" is included in the notation, allowing the context to make the situation clear and thereby providing a welcome simplification of notation.

An advantage of our new approach to tensor fields is the mechanism it provides for deciding whether a given map $\mathcal{F} : X^*(M)^r \times X(M)^s \to C^\infty(M)$ is (or at least can be identified with) a tensor field in $T^r_s(M)$. According to Theorem 15.7.2, this identification can be made as long as $\mathcal{F}$ can be shown to be $C^\infty(M)$-multilinear. In practice, deciding whether $\mathcal{F}$ is additive is usually straightforward. The challenge typically resides in determining whether functions in $C^\infty(M)$ can be "factored out" of $\mathcal{F}$; that is, whether for all covector fields $\omega^1, \dots, \omega^r$ in $X^*(M)$, all vector fields $X_1, \dots, X_s$ in $X(M)$, and all functions $f$ in $C^\infty(M)$, we have

$$\mathcal{F}(\omega^1, \dots, f\omega^i, \dots, \omega^r, X_1, \dots, X_s) = f\,\mathcal{F}(\omega^1, \dots, \omega^i, \dots, \omega^r, X_1, \dots, X_s)$$

for $i = 1, \dots, r$, and

$$\mathcal{F}(\omega^1, \dots, \omega^r, X_1, \dots, fX_j, \dots, X_s) = f\,\mathcal{F}(\omega^1, \dots, \omega^r, X_1, \dots, X_j, \dots, X_s)$$

for $j = 1, \dots, s$. We will encounter several instances of such computations in subsequent chapters.

Let us close this section with a few remarks on "representations". In Section 15.2, we showed that a vector field in $X(M)$ is equivalent to a type of map from $C^\infty(M)$ to $C^\infty(M)$. In Section 15.5, it was demonstrated that a covector field in $X^*(M)$ is equivalent to a type of map from $X(M)$ to $C^\infty(M)$. In this section, we showed (or at least asserted) that a tensor field in $T^r_s(M)$ is equivalent to a type of map from $X^*(M)^r \times X(M)^s$ to $C^\infty(M)$. Loosely speaking, we have been involved in a campaign to represent "fields" as maps that produce "functions".
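Returning to the factoring criterion just described, a standard map that fails it (our illustration, built from Theorem 15.3.2(d)) is furnished by the Lie bracket. Define $\mathcal{F} : X^*(M) \times X(M) \times X(M) \to C^\infty(M)$ by $\mathcal{F}(\omega, X, Y) = \omega([X, Y])$. Taking $g = 1$ in Theorem 15.3.2(d) gives $[fX, Y] = f[X, Y] - Y(f)X$, hence

$$\mathcal{F}(\omega, fX, Y) = \omega(f[X, Y] - Y(f)X) = f\,\mathcal{F}(\omega, X, Y) - Y(f)\,\omega(X),$$

so $f$ cannot in general be factored out of the second slot. Thus $\mathcal{F}$ is $\mathbb{R}$-multilinear but not $C^\infty(M)$-multilinear, and the Lie bracket cannot be identified with a tensor field in $T^1_2(M)$.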

15.8 Differential Forms

Let $M$ be a smooth $m$-manifold, and let $0 \leq s \leq m$ be an integer. A differential $s$-form on $M$ is a map $\omega$ that assigns to each point $p$ in $M$ an $s$-covector $\omega_p$ in $\Lambda^s(T_p(M))$. In the literature, a differential $s$-form is usually referred to as an $s$-form or simply a form. Observe that 1-forms and covector fields are the same thing. Let $X_1, \dots, X_s$ be vector fields in $X(M)$, and define a function $\omega(X_1, \dots, X_s) : M \to \mathbb{R}$ by

$$\omega(X_1, \dots, X_s)(p) = \omega_p(X_1|_p, \dots, X_s|_p)$$

for all $p$ in $M$. We say that $\omega$ is smooth (on $M$) if the function $\omega(X_1, \dots, X_s)$ is in $C^\infty(M)$ for all vector fields $X_1, \dots, X_s$ in $X(M)$. The set of smooth $s$-forms on $M$ is denoted by $\Lambda^s(M)$. Clearly, $\Lambda^s(M)$ is an $\mathbb{R}$-subspace and $C^\infty(M)$-submodule of $T^0_s(M)$, and

$$\Lambda^1(M) = X^*(M) = T^0_1(M). \qquad (15.8.1)$$

For completeness, and to be consistent with (15.6.3), let us define

$$\Lambda^0(M) = C^\infty(M). \qquad (15.8.2)$$

In view of Theorem 7.2.12(b), we set $\Lambda^s(M) = \{0\}$ for $s > m$.

Let $\omega$ and $\xi$ be forms in $\Lambda^s(M)$ and $\Lambda^{s'}(M)$, respectively. We define a form $\omega \wedge \xi$ in $\Lambda^{s+s'}(M)$, called the wedge product of $\omega$ and $\xi$, by $(\omega \wedge \xi)_p = \omega_p \wedge \xi_p$ for all $p$ in $M$. In particular, for a function $f$ in $C^\infty(M) = \Lambda^0(M)$, we have

$$f \wedge \omega = f\omega. \qquad (15.8.3)$$


Let $(U, (x^i))$ be a chart on $M$, let $1 \leq i_1 < \cdots < i_s \leq m$ be integers, and let

$$d(x^{i_1}) \wedge \cdots \wedge d(x^{i_s}) : U \to \Lambda^s(U)$$

be the map defined by the assignment $p \mapsto d_p(x^{i_1}) \wedge \cdots \wedge d_p(x^{i_s})$ for all $p$ in $U$. Suppose $\omega$ is a form in $\Lambda^s(M)$. Then $\omega|_U$ can be expressed as

$$\omega|_U = \sum_{1 \leq i_1 < \cdots < i_s \leq m} \alpha_{i_1, \dots, i_s} \, d(x^{i_1}) \wedge \cdots \wedge d(x^{i_s}). \qquad (15.8.4)$$

15.11 Pullback of Covector Fields

Example (Polar Coordinates). Let $(r, s)$ be standard coordinates on $\mathbb{R}^2$, and let $(\rho, \phi)$ be polar coordinates on

$$U = \{(\rho, \phi) \in \mathbb{R}^2 : \rho > 0,\ 0 \leq \phi < 2\pi\}.$$

Let $F : U \to \mathbb{R}^2$ be the smooth map given by

$$F(\rho, \phi) = (\rho \cos(\phi), \rho \sin(\phi)),$$

and consider the covector field

$$\omega_{(r,s)} = -\frac{s}{r^2 + s^2}\, d(r) + \frac{r}{r^2 + s^2}\, d(s)$$

in $X^*(\mathbb{R}^2 \smallsetminus \{(0,0)\})$. Setting $(U, (x^i)) = (U, (\rho, \phi))$ and $(V, (y^j)) = (\mathbb{R}^2 \smallsetminus \{(0,0)\}, (r, s))$ in Theorem 15.11.3 yields

$$\begin{aligned}
F^*(\omega)_{(\rho,\phi)}
&= \left[ \left( -\frac{s}{r^2+s^2} \circ F \right) \frac{\partial(r \circ F)}{\partial \rho}
        + \left( \frac{r}{r^2+s^2} \circ F \right) \frac{\partial(s \circ F)}{\partial \rho} \right] d(\rho) \\
&\quad + \left[ \left( -\frac{s}{r^2+s^2} \circ F \right) \frac{\partial(r \circ F)}{\partial \phi}
        + \left( \frac{r}{r^2+s^2} \circ F \right) \frac{\partial(s \circ F)}{\partial \phi} \right] d(\phi) \\
&= \left[ -\frac{\rho \sin(\phi)}{[\rho\cos(\phi)]^2 + [\rho\sin(\phi)]^2} \frac{\partial(\rho\cos(\phi))}{\partial \rho}
        + \frac{\rho \cos(\phi)}{[\rho\cos(\phi)]^2 + [\rho\sin(\phi)]^2} \frac{\partial(\rho\sin(\phi))}{\partial \rho} \right] d(\rho) \\
&\quad + \left[ -\frac{\rho \sin(\phi)}{[\rho\cos(\phi)]^2 + [\rho\sin(\phi)]^2} \frac{\partial(\rho\cos(\phi))}{\partial \phi}
        + \frac{\rho \cos(\phi)}{[\rho\cos(\phi)]^2 + [\rho\sin(\phi)]^2} \frac{\partial(\rho\sin(\phi))}{\partial \phi} \right] d(\phi) \\
&= d(\phi).
\end{aligned}$$

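The computation above is easy to confirm with a computer algebra system. A sympy sketch (ours) reproducing the coefficients of $d(\rho)$ and $d(\phi)$ in $F^*(\omega)$ from the formula of Theorem 15.11.3:

```python
# A sympy check (ours): pulling back omega = (-s d(r) + r d(s))/(r^2 + s^2)
# through F(rho, phi) = (rho cos phi, rho sin phi) yields d(phi).
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
r = rho * sp.cos(phi)        # r o F
s = rho * sp.sin(phi)        # s o F

a_r = -s / (r**2 + s**2)     # d(r)-component of omega, composed with F
a_s = r / (r**2 + s**2)      # d(s)-component of omega, composed with F

c_rho = sp.simplify(a_r * sp.diff(r, rho) + a_s * sp.diff(s, rho))
c_phi = sp.simplify(a_r * sp.diff(r, phi) + a_s * sp.diff(s, phi))
print(c_rho, c_phi)          # 0 1, so F*(omega) = d(phi)
```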

Theorem 15.11.8. Let $M$ be a smooth manifold, let $\lambda(t) : (a, b) \to M$ be a smooth curve, let $\omega$ be a covector field in $X^*(M)$, and view $(a, b)$ as a smooth 1-manifold. Then

$$\lambda^*(\omega)_t = \omega_{\lambda(t)}\!\left( \frac{d\lambda}{dt}(t) \right) d(t)$$

for all $t$ in $(a, b)$.

Proof. Let $p$ be a point in $M$, let $(U, (x^i))$ be a chart at $p$, and, in local coordinates, let

$$\omega = \sum_i \alpha_i \, d(x^i).$$

By Theorem 15.11.3,

$$\lambda^*(\omega)_t = \sum_i \alpha_i(\lambda(t)) \, \frac{d(x^i \circ \lambda)}{dt}(t) \, d(t). \qquad (15.11.5)$$

On the other hand, (15.4.4) gives

$$\omega_{\lambda(t)} = \sum_j \alpha_j(\lambda(t)) \, d(x^j)_{\lambda(t)} = \sum_j \alpha_j(\lambda(t)) \, d_{\lambda(t)}(x^j),$$

hence

$$\omega_{\lambda(t)}\!\left( \frac{\partial}{\partial x^i}\bigg|_{\lambda(t)} \right)
= \sum_j \alpha_j(\lambda(t)) \, d_{\lambda(t)}(x^j)\!\left( \frac{\partial}{\partial x^i}\bigg|_{\lambda(t)} \right)
= \alpha_i(\lambda(t))$$

for $i = 1, \dots, m$. By Theorem 14.7.1(b),

$$\frac{d\lambda}{dt}(t) = \sum_i \frac{d(x^i \circ \lambda)}{dt}(t) \, \frac{\partial}{\partial x^i}\bigg|_{\lambda(t)},$$

so

$$\omega_{\lambda(t)}\!\left( \frac{d\lambda}{dt}(t) \right)
= \sum_i \frac{d(x^i \circ \lambda)}{dt}(t) \, \omega_{\lambda(t)}\!\left( \frac{\partial}{\partial x^i}\bigg|_{\lambda(t)} \right)
= \sum_i \alpha_i(\lambda(t)) \, \frac{d(x^i \circ \lambda)}{dt}(t). \qquad (15.11.6)$$

Substituting (15.11.6) into (15.11.5) gives the result.
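As an illustration of Theorem 15.11.8 (our example, not from the text), take the curve $\lambda(t) = (\cos t, \sin t)$ in $\mathbb{R}^2 \smallsetminus \{(0,0)\}$ and the covector field $\omega = -s\,d(r) + r\,d(s)$. The sketch below computes the coefficient of $d(t)$ in $\lambda^*(\omega)$ via (15.11.5), assuming sympy.

```python
# A sympy sketch (ours) of Theorem 15.11.8 along the unit circle.
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.cos(t)    # r o lambda
s = sp.sin(t)    # s o lambda

# components of omega = -s d(r) + r d(s), evaluated along the curve
a_r, a_s = -s, r

# (15.11.5): lambda*(omega)_t = sum_i alpha_i(lambda(t)) d(x^i o lambda)/dt d(t)
coeff = sp.simplify(a_r * sp.diff(r, t) + a_s * sp.diff(s, t))
print(coeff)     # 1, i.e., lambda*(omega) = d(t)
```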

15.12 Pullback of Covariant Tensor Fields

Let $M$ and $N$ be smooth manifolds, let $F : M \to N$ be a smooth map, and let $p$ be a point in $M$. The corresponding differential map is $d_p(F) : T_p(M) \to T_{F(p)}(N)$. According to (5.2.1), the pullback by $d_p(F)$ for covariant tensors is the family of linear maps

$$d_p(F)^* : T^0_s(T_{F(p)}(N)) \to T^0_s(T_p(M))$$

defined for $s \geq 1$ by

$$d_p(F)^*(B)(v_1, \dots, v_s) = B(d_p(F)(v_1), \dots, d_p(F)(v_s)) \qquad (15.12.1)$$

for all tensors $B$ in $T^0_s(T_{F(p)}(N))$ and all vectors $v_1, \dots, v_s$ in $T_p(M)$. Pullback by $F$ (for covariant tensor fields) is the family of linear maps

$$F^* : T^0_s(N) \to T^0_s(M)$$

defined for $s \geq 1$ by

$$F^*(A)_p(v_1, \dots, v_s) = d_p(F)^*(A_{F(p)})(v_1, \dots, v_s) = A_{F(p)}(d_p(F)(v_1), \dots, d_p(F)(v_s)) \qquad (15.12.2)$$

for all tensor fields $A$ in $T^0_s(N)$, all points $p$ in $M$, and all vectors $v_1, \dots, v_s$ in $T_p(M)$, where the second equality follows from setting $B = A_{F(p)}$ in (15.12.1). We refer to $F^*(A)$ as the pullback of $A$ by $F$.

To give meaning to $F^*$ when $s = 0$, recall from (15.6.3) that $T^0_0(N) = C^\infty(N)$. We therefore define

$$F^*(g)(p) = g(F(p)) = F^\bullet(g)(p)$$

for all functions $g$ in $C^\infty(N)$ and all $p$ in $M$; that is, we define

$$F^* = F^\bullet. \qquad (15.12.3)$$

Theorem 15.12.1. Let $M$, $N$, and $P$ be smooth manifolds, let $F : M \to N$ and $G : N \to P$ be smooth maps, let $A, B$ and $C$ be tensor fields in $T^0_s(N)$ and $T^0_{s'}(N)$, respectively, and let $g$ be a function in $C^\infty(N)$. Then:
(a) $F^*(A + B) = F^*(A) + F^*(B)$.
(b) $F^*(gA) = F^*(g)\,F^*(A)$.
(c) $F^*(A \otimes C) = F^*(A) \otimes F^*(C)$.
(d) $(G \circ F)^* = F^* \circ G^*$.

Since $g \otimes A = gA$, part (b) of Theorem 15.12.1 follows from part (c). Identity (15.12.3) has several implications: Theorem 15.9.2 follows from Theorem 15.12.1(d); the identity in Theorem 15.10.4 can be expressed as

$$F^*(Y(g)) = F^*(Y)(F^*(g))$$

for all vector fields $Y$ in $X(N)$ and all functions $g$ in $C^\infty(N)$; Theorem 15.11.1(b) follows from Theorem 15.12.1(b); and (15.11.4) can be expressed as

$$F^*(d(g)) = d(F^*(g))$$

for all functions $g$ in $C^\infty(N)$.


Theorem 15.12.2 (Pullback of Covariant Tensor Field). Let $M$ be a smooth $m$-manifold, let $N$ be a smooth $n$-manifold, let $F : M \to N$ be a smooth map, and let $(U, (x^j))$ and $(V, (y^i))$ be charts on $M$ and $N$, respectively, such that $U \cap F^{-1}(V)$ is nonempty. Let $A$ be a tensor field in $T^0_s(N)$, and, in local coordinates, let

$$A = \sum_{1 \leq i_1, \dots, i_s \leq n} A_{i_1 \dots i_s} \, d(y^{i_1}) \otimes \cdots \otimes d(y^{i_s}).$$

Then

$$F^*(A) = \sum_{1 \leq j_1, \dots, j_s \leq m} \left( \sum_{1 \leq i_1, \dots, i_s \leq n} (A_{i_1 \dots i_s} \circ F) \, \frac{\partial(y^{i_1} \circ F)}{\partial x^{j_1}} \cdots \frac{\partial(y^{i_s} \circ F)}{\partial x^{j_s}} \right) d(x^{j_1}) \otimes \cdots \otimes d(x^{j_s}).$$

Proof. We have from Theorem 15.12.1 that

$$F^*(A) = F^*\!\left( \sum_{1 \leq i_1, \dots, i_s \leq n} A_{i_1 \dots i_s} \, d(y^{i_1}) \otimes \cdots \otimes d(y^{i_s}) \right)
= \sum_{1 \leq i_1, \dots, i_s \leq n} F^*(A_{i_1 \dots i_s}) \, F^*(d(y^{i_1}) \otimes \cdots \otimes d(y^{i_s})), \qquad (15.12.4)$$

and from (15.9.2) and (15.12.3) that

$$F^*(A_{i_1 \dots i_s}) = A_{i_1 \dots i_s} \circ F. \qquad (15.12.5)$$

We also have

$$\begin{aligned}
F^*(d(y^{i_1}) \otimes \cdots \otimes d(y^{i_s}))
&= F^*(d(y^{i_1})) \otimes \cdots \otimes F^*(d(y^{i_s})) && [\text{Th 15.12.1(c)}] \\
&= d(F^*(y^{i_1})) \otimes \cdots \otimes d(F^*(y^{i_s})) && [\text{Th 15.11.2, (15.12.3)}] \\
&= d(y^{i_1} \circ F) \otimes \cdots \otimes d(y^{i_s} \circ F), && [(15.9.2), (15.12.3)]
\end{aligned}$$

and from Theorem 15.4.7 that

$$d(y^{i_k} \circ F) = \sum_{j_k} \frac{\partial(y^{i_k} \circ F)}{\partial x^{j_k}} \, d(x^{j_k})$$

for $k = 1, \dots, s$. Thus,

$$\begin{aligned}
F^*(d(y^{i_1}) \otimes \cdots \otimes d(y^{i_s}))
&= \left( \sum_{j_1} \frac{\partial(y^{i_1} \circ F)}{\partial x^{j_1}} \, d(x^{j_1}) \right) \otimes \cdots \otimes \left( \sum_{j_s} \frac{\partial(y^{i_s} \circ F)}{\partial x^{j_s}} \, d(x^{j_s}) \right) \\
&= \sum_{1 \leq j_1, \dots, j_s \leq m} \frac{\partial(y^{i_1} \circ F)}{\partial x^{j_1}} \cdots \frac{\partial(y^{i_s} \circ F)}{\partial x^{j_s}} \, d(x^{j_1}) \otimes \cdots \otimes d(x^{j_s}). \qquad (15.12.6)
\end{aligned}$$

Substituting (15.12.5) and (15.12.6) into (15.12.4) and reversing summations gives the result.


Example 15.12.3 (Polar Coordinates). Let $(r, s)$ be standard coordinates on $\mathbb{R}^2$, and let $(\rho, \phi)$ be polar coordinates on

$$U = \{(\rho, \phi) \in \mathbb{R}^2 : \rho > 0,\ 0 \leq \phi < 2\pi\}.$$

Let $F : U \to \mathbb{R}^2$ be the smooth map given by

$$F(\rho, \phi) = (\rho \cos(\phi), \rho \sin(\phi)),$$

and consider the tensor field

$$e_{(r,s)} = d(r) \otimes d(r) + d(s) \otimes d(s)$$

in $T^0_2(\mathbb{R}^2)$. Setting $(U, (x^i)) = (U, (\rho, \phi))$ and $(V, (y^j)) = (\mathbb{R}^2, (r, s))$ in Theorem 15.12.2, and observing that $e_{11} \circ F = e_{22} \circ F = 1$ and $e_{12} \circ F = e_{21} \circ F = 0$, yields

$$\begin{aligned}
F^*(e)_{(\rho,\phi)}
&= \left[ \frac{\partial(r \circ F)}{\partial \rho} \frac{\partial(r \circ F)}{\partial \rho} + \frac{\partial(s \circ F)}{\partial \rho} \frac{\partial(s \circ F)}{\partial \rho} \right] d(\rho) \otimes d(\rho) \\
&\quad + \left[ \frac{\partial(r \circ F)}{\partial \rho} \frac{\partial(r \circ F)}{\partial \phi} + \frac{\partial(s \circ F)}{\partial \rho} \frac{\partial(s \circ F)}{\partial \phi} \right] d(\rho) \otimes d(\phi) \\
&\quad + \left[ \frac{\partial(r \circ F)}{\partial \phi} \frac{\partial(r \circ F)}{\partial \rho} + \frac{\partial(s \circ F)}{\partial \phi} \frac{\partial(s \circ F)}{\partial \rho} \right] d(\phi) \otimes d(\rho) \\
&\quad + \left[ \frac{\partial(r \circ F)}{\partial \phi} \frac{\partial(r \circ F)}{\partial \phi} + \frac{\partial(s \circ F)}{\partial \phi} \frac{\partial(s \circ F)}{\partial \phi} \right] d(\phi) \otimes d(\phi) \\
&= [\cos^2(\phi) + \sin^2(\phi)] \, d(\rho) \otimes d(\rho) \\
&\quad + [-\rho \cos(\phi) \sin(\phi) + \rho \sin(\phi) \cos(\phi)] \, d(\rho) \otimes d(\phi) \\
&\quad + [-\rho \sin(\phi) \cos(\phi) + \rho \cos(\phi) \sin(\phi)] \, d(\phi) \otimes d(\rho) \\
&\quad + [\rho^2 \sin^2(\phi) + \rho^2 \cos^2(\phi)] \, d(\phi) \otimes d(\phi) \\
&= d(\rho) \otimes d(\rho) + \rho^2 \, d(\phi) \otimes d(\phi).
\end{aligned}$$
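A compact way to confirm Example 15.12.3 (our observation, assuming sympy): since the component matrix of $e$ with respect to $(r, s)$ is the identity, Theorem 15.12.2 expresses the component matrix of $F^*(e)$ as $J^T J$, where $J$ is the Jacobian matrix of $F$.

```python
# A sympy check (ours) of Example 15.12.3 via the Jacobian of F.
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
F = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi)])
J = F.jacobian([rho, phi])

G = (J.T * J).applyfunc(sp.simplify)   # components of F*(e) in (rho, phi)
print(G)   # Matrix([[1, 0], [0, rho**2]])
```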

15.13 Pullback of Differential Forms

Let $M$ and $N$ be smooth manifolds, and let $F : M \to N$ be a smooth map. In Section 15.12, we defined $F^* : T^0_s(N) \to T^0_s(M)$, the pullback by $F$ for covariant tensor fields. We seek a corresponding pullback for differential forms. An observation is that $\Lambda^s(M)$ is a subspace of $T^0_s(M)$, $\Lambda^s(N)$ is a subspace of $T^0_s(N)$, and $F^*(\Lambda^s(N))$ is a subspace of $\Lambda^s(M)$, so we can proceed by restricting the maps defined in (15.12.1)-(15.12.3). Pullback by $F$ (for differential forms) is the family of linear maps

$$F^* : \Lambda^s(N) \to \Lambda^s(M)$$

defined for $s \geq 1$ by

$$F^*(\omega)_p(v_1, \dots, v_s) = d_p(F)^*(\omega_{F(p)})(v_1, \dots, v_s) = \omega_{F(p)}(d_p(F)(v_1), \dots, d_p(F)(v_s))$$

for all differential forms $\omega$ in $\Lambda^s(N)$, all points $p$ in $M$, and all vectors $v_1, \dots, v_s$ in $T_p(M)$. We refer to $F^*(\omega)$ as the pullback of $\omega$ by $F$. As before, when $s = 0$, we define $F^* = F^\bullet$.

Theorem 15.13.1. Let $M$, $N$, and $P$ be smooth manifolds, let $F : M \to N$ and $G : N \to P$ be smooth maps, let $\omega$, $\xi$, and $\zeta$ be forms in $\Lambda^s(N)$ and $\Lambda^{s'}(N)$, respectively, and let $g$ be a function in $C^\infty(N)$. Then:
(a) $F^*(\omega + \xi) = F^*(\omega) + F^*(\xi)$.
(b) $F^*(g\omega) = F^*(g)\,F^*(\omega)$.
(c) $F^*(\omega \wedge \zeta) = F^*(\omega) \wedge F^*(\zeta)$.
(d) $(G \circ F)^* = F^* \circ G^*$.

Theorem 15.13.2. Let $M$ be a smooth $m$-manifold, and let $f^1, \dots, f^s$ be functions in $C^\infty(M)$, where $s \leq m$. Let $(U, (x^j))$ be a chart on $M$, and let $(\partial/\partial x^1, \dots, \partial/\partial x^m)$ and $(d(x^1), \dots, d(x^m))$ be the corresponding coordinate and dual coordinate frames. Then, in local coordinates,

$$d(f^1) \wedge \cdots \wedge d(f^s)
= \sum_{1 \leq j_1 < \cdots < j_s \leq m}
\det\!\begin{pmatrix}
\dfrac{\partial f^1}{\partial x^{j_1}} & \cdots & \dfrac{\partial f^1}{\partial x^{j_s}} \\
\vdots & & \vdots \\
\dfrac{\partial f^s}{\partial x^{j_1}} & \cdots & \dfrac{\partial f^s}{\partial x^{j_s}}
\end{pmatrix}
d(x^{j_1}) \wedge \cdots \wedge d(x^{j_s}).$$
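Specializing the determinant formula (our check, assuming sympy) to $m = s = 2$ with $f^1 = \rho\cos(\phi)$ and $f^2 = \rho\sin(\phi)$, the single coefficient is the Jacobian determinant, recovering the familiar identity $d(f^1) \wedge d(f^2) = \rho\, d(\rho) \wedge d(\phi)$.

```python
# A sympy check (ours) of the determinant formula for s = m = 2.
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
f1 = rho * sp.cos(phi)
f2 = rho * sp.sin(phi)

coeff = sp.simplify(sp.Matrix([[sp.diff(f1, rho), sp.diff(f1, phi)],
                               [sp.diff(f2, rho), sp.diff(f2, phi)]]).det())
print(coeff)   # rho
```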