Ordinary Differential Equations With Applications (third Edition) (Series On Applied Mathematics) [3 ed.] 981125074X, 9789811250743

Written in a straightforward and easily accessible style, this volume is suitable as a textbook for advanced undergraduates.


English · 378 pages · 2022



ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS Third Edition

SERIES ON APPLIED MATHEMATICS
Editor-in-Chief: Zhong-ci Shi

Published*

Vol. 9   Finite Element Methods for Integrodifferential Equations, by C. M. Chen and T. M. Shih
Vol. 10  Statistical Quality Control — A Loss Minimization Approach, by D. Trietsch
Vol. 11  The Mathematical Theory of Nonblocking Switching Networks, by F. K. Hwang
Vol. 12  Combinatorial Group Testing and Its Applications (2nd Edition), by D.-Z. Du and F. K. Hwang
Vol. 13  Inverse Problems for Electrical Networks, by E. B. Curtis and J. A. Morrow
Vol. 14  Combinatorial and Global Optimization, eds. P. M. Pardalos, A. Migdalas and R. E. Burkard
Vol. 15  The Mathematical Theory of Nonblocking Switching Networks (Second Edition), by F. K. Hwang
Vol. 16  Ordinary Differential Equations with Applications, by S.-B. Hsu
Vol. 17  Block Designs: Analysis, Combinatorics and Applications, by D. Raghavarao and L. V. Padgett
Vol. 18  Pooling Designs and Nonadaptive Group Testing — Important Tools for DNA Sequencing, by D.-Z. Du and F. K. Hwang
Vol. 19  Partitions: Optimality and Clustering, Vol. I: Single-parameter, by F. K. Hwang and U. G. Rothblum
Vol. 20  Partitions: Optimality and Clustering, Vol. II: Multi-Parameter, by F. K. Hwang, U. G. Rothblum and H. B. Chen
Vol. 21  Ordinary Differential Equations with Applications (Second Edition), by S.-B. Hsu
Vol. 22  Inverse Problems: Tikhonov Theory and Algorithms, by K. Ito and B. Jin
Vol. 23  Ordinary Differential Equations with Applications (Third Edition), by S.-B. Hsu and K.-C. Chen

*For the complete list of the volumes in this series, please visit http://www.worldscientific.com/series/sam

Series on Applied Mathematics Volume 23

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS Third Edition

Sze-Bi Hsu Kuo-Chang Chen National Tsing Hua University, Taiwan

World Scientific
NEW JERSEY · LONDON · SINGAPORE · BEIJING · SHANGHAI · HONG KONG · TAIPEI · CHENNAI

Published by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224 USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data
Names: Hsu, Sze-Bi, 1948– author. | Chen, Kuo-Chang, author.
Title: Ordinary differential equations with applications / Sze-Bi Hsu, Kuo-Chang Chen, National Tsing Hua University, Taiwan.
Description: Third edition. | New Jersey : World Scientific, [2022] | Series: Series on applied mathematics, 1793-0928 ; vol. 23 | Includes bibliographical references and index.
Identifiers: LCCN 2022013691 | ISBN 9789811250743 (hardcover) | ISBN 9789811250750 (ebook for institutions) | ISBN 9789811250767 (ebook for individuals)
Subjects: LCSH: Differential equations.
Classification: LCC QA372 .H85 2022 | DDC 515/.352--dc23/eng20220621
LC record available at https://lccn.loc.gov/2022013691

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

Copyright © 2022 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

For any available supplementary material, please visit https://www.worldscientific.com/worldscibooks/10.1142/12682#t=suppl

Printed in Singapore

Preface to the First Edition

Ordinary Differential Equations (ODE) is a well-developed discipline. It is a classical subject for graduate students who want to learn analysis and applied mathematics. There are many standard texts on ODE, for instance Coddington and Levinson [CL], Hartman [Ha], J. Hale [H1], and Miller and Michel [MM]. Based on my past twenty years of experience teaching the graduate course in ODE, I find that for most graduate students these books are either too difficult to understand or too abstract to follow. This is why I decided to write my own ODE lecture notes. This book aims to present the classical ODE material in an accessible way with rigorous proofs, and many examples from mathematical biology and physical science are given to interpret the meaning and the applications of the theorems. Besides the classical material, Chapter 8 discusses the Brouwer degree in finite-dimensional space, following the index theory. The Brouwer degree is a powerful topological tool in nonlinear analysis, and introducing it paves the way to the Leray-Schauder degree in infinite-dimensional space. In Chapter 9 we introduce the regular and singular perturbation methods, which are very important in applied mathematics. This book is designed for a one-year graduate course. Let us briefly state the contents of each chapter.

Chapter 1: Introduction. In this chapter important examples of nonlinear systems of ODEs from physical science and mathematical biology are introduced.

Chapter 2: Fundamental Theory. In this chapter I first prove the local existence and uniqueness of solutions of the initial value problem of an ODE. Then


the continuation of solutions and the global existence of solutions are studied. The continuous dependence of solutions on initial conditions and parameters is proved, and applying it yields results on scalar differential inequalities. Differential inequalities for systems are also discussed.

Chapter 3: Linear Systems. In this chapter we discuss the fundamental matrices of general linear systems x' = A(t)x. For the linear system with constant coefficients x' = Ax we introduce the exponential matrix e^{At} as a fundamental matrix. For the periodic linear system x' = A(t)x, A(t + ω) = A(t), ω > 0, we study the structure of the solutions by proving Floquet's Theorem. For the two-dimensional linear autonomous system x' = ax + by, y' = cx + dy, we introduce the important notions of node, focus, center and saddle. Similar to linear algebra, we introduce the adjoint system of x' = Ax and obtain results analogous to the fundamental theorem of linear algebra.

Chapter 4: Stability of Nonlinear Systems. In this chapter we introduce the concepts of stability, asymptotic stability and instability of an equilibrium solution. To verify these stability properties, we introduce the method of linearization, which checks the eigenvalues of the variational matrix. When the equilibrium is a saddle, we prove the existence of stable and unstable manifolds. For periodic orbits, we introduce the concept of orbital stability, which differs from asymptotic stability; the results related to Floquet multipliers are discussed in Chapter 3. In Section 4.5 we study the existence of traveling wave solutions of two well-known partial differential equations, Fisher's equation and the bistable equation, by applying the Stable Manifold Theorem.

Chapter 5: Method of Lyapunov Functions. In this chapter we first introduce the concept of a dynamical system, the α-limit set, the ω-limit set and their properties in an abstract setting; in fact, an autonomous ODE system is an important example of a dynamical system. Then we discuss Lyapunov functions and their use in determining the global asymptotic stability or the domain of attraction of a locally stable equilibrium. We prove LaSalle's invariance principle, which explains how to apply a Lyapunov function to locate the ω-limit set.

Chapter 6: Two-Dimensional Systems. In this chapter we prove one of the most important theorems in nonlinear dynamics, the Poincaré-Bendixson Theorem, which states that a bounded trajectory of a two-dimensional system either converges to an equilibrium or approaches a limit cycle. We also


discuss the Dulac Criterion, which provides a method to rule out the existence of periodic solutions. In Section 6.2 we discuss the Levinson-Smith Theorem, which establishes the uniqueness of the limit cycle for the Lienard equation. An important example of a Lienard equation is the van der Pol equation, and we discuss the relaxation property of the van der Pol oscillator. In Section 6.3 we introduce the Hopf Bifurcation Theorem, an important tool for detecting the existence of periodic solutions of autonomous systems x' = f(λ, x), where λ is a bifurcation parameter.

Chapter 7: Second Order Linear Equations. In this chapter we first discuss Sturm's Comparison Theorem, an important tool for studying the oscillation properties of solutions of second order linear equations. Similar to the eigenvalue problem Ax = λx, we consider the Sturm-Liouville boundary value problem, prove the existence of infinitely many discrete eigenvalues, and show that the corresponding eigenfunctions satisfy the "node" properties. For the second order nonlinear boundary value problem x'' = f(t, x, x'), x(a) = x0, x(b) = x1, we introduce Green's function to convert the boundary value problem into an integral equation.

Chapter 8: The Index Theory and Brouwer Degree. In this chapter we first introduce the index I_f(C) for a vector field f in the plane and a simple closed curve C. Then we prove various properties of the index and apply index theory to prove the fundamental theorem of algebra and the Brouwer fixed point theorem. In Section 8.2 we generalize the index I_f(C) to the topological degree in finite-dimensional space, the Brouwer degree in R^n. Topological degree is an important tool in the study of nonlinear problems.

Chapter 9: Introduction to Regular and Singular Perturbation Methods. Perturbation methods are very important in applied mathematics. In this chapter we explain why the regular perturbation method works by using the Implicit Function Theorem and the Fredholm Alternative. Then we discuss singular perturbation methods for both boundary value problems and initial value problems.

Chapter 10: Introduction to Monotone Dynamical Systems. In this chapter we state without proof the main results on monotone dynamical systems, which were developed by Morris Hirsch in the 1980s. In the first section we apply the theory to the dynamics of cooperative systems of n species. Then we prove that the Poincaré-Bendixson Theorem holds for three-species competitive systems. In the second section we discuss uniform persistence for n-species population interaction models. We prove the Butler-McGehee Lemma, which is a basic ingredient in the proof of the Theorem of Uniform Persistence. In the third section we introduce my joint work with Paul Waltman on the competition of two


species for a single nutrient with inhibition. In this model we illustrate the application of the Poincaré-Bendixson Theorem and the Theorem of Persistence. In the fourth section we state three abstract theorems, in the framework of ordered Banach spaces, for two species competition models. The theorems can be applied to the cases of competitive exclusion, stable coexistence and bistability. In summary, monotone dynamics was developed in the late 20th century; it is a powerful tool for proving the global asymptotic behavior of some important models in population biology.

Chapter 11: Introduction to Hamiltonian Systems. We begin with classic examples from classical mechanics. Then we discuss fundamental matrices and the stability of linear Hamiltonian systems. As preparation, we discuss properties of skew-symmetric matrices and symplectic matrices. For the investigation of nonlinear Hamiltonian systems, we introduce first integrals, the Poisson bracket and Poisson's theorem, together with several applications to mechanical systems. Next we introduce symplectic transformations and their connections with Hamiltonian systems. We show that a system is locally Hamiltonian if and only if the induced local flow is symplectic. Action-angle variables are introduced, and brief remarks on perturbations of integrable systems are made. The last section introduces generating functions and the Hamilton-Jacobi method. We use simple examples to illustrate how the Hamilton-Jacobi method can be used to find suitable generating functions so as to solve nonlinear Hamiltonian systems.

Acknowledgments: I would like to express my gratitude to my wife Taily for her support over the past ten years, and to my former Ph.D. thesis adviser, Professor Paul Waltman, for his encouragement to write this book. I want to thank my colleagues Professor Wen-Wei Lin, Professor Shin-Hwa Wang and Professor Shuh-Jye Chern of National Tsing Hua University, and Professor Chin-An Wang and Professor Dong-Ho Tsai of National Chung-Cheng University, for using this text in their graduate ODE courses and for giving me suggestions to improve it. I also want to thank my postdoctoral fellow Dr. Cheng-Che Li and my Ph.D. student Yun-Huei Tzeng for their patience in proofreading the draft. Finally, I especially want to thank Miss Alice Feng for typing this text over the past three years; without her help it would have been impossible to publish this book.

S. B. Hsu
National Tsing Hua University
Hsinchu, Taiwan
Aug. 30, 2005

Preface to the Second Edition

This new Second Edition contains corrections, additional material and suggestions from various readers and users. I have added several new exercises to each chapter. A new chapter, Chapter 10, on Monotone Dynamical Systems, has been added to take into account new developments in ordinary differential equations and dynamical systems. The description of Chapter 10 is presented in the preface of the first edition.

I would like to take this opportunity to thank my colleague Professor Shin-Hwa Wang and my Ph.D. student Chiu-Ju Lin, who pointed out errors in the first edition and made useful suggestions. The author is indebted to the Publisher as well as the readers for their invaluable assistance in the publication of the new edition.

S. B. Hsu
National Tsing Hua University
Hsinchu, Taiwan
Feb. 27, 2013


Preface to the Third Edition

This new Third Edition contains corrections, additional material and suggestions from various readers and users. We have added several new exercises to each chapter and moved several complicated proofs to the Appendices. A new chapter, Chapter 11, "Introduction to Hamiltonian Systems", written by Professor Kuo-Chang Chen, has been added to bring celestial mechanics into the material of ordinary differential equations and dynamical systems. The description of Chapter 11 is presented in the preface of the first edition. The authors are indebted to the Publisher as well as the readers for their invaluable assistance in the publication of the new edition.

S. B. Hsu and K. C. Chen
National Tsing Hua University
Hsinchu, Taiwan
Mar. 31, 2021


Contents

Preface to the First Edition  v
Preface to the Second Edition  ix
Preface to the Third Edition  xi

1. INTRODUCTION  1
   1.1 Where do ODEs arise  1
2. FUNDAMENTAL THEORY  11
   2.1 Introduction and Preliminaries  11
   2.2 Local Existence and Uniqueness of Solutions of I.V.P.  14
   2.3 Continuation of Solutions  21
   2.4 Continuous Dependence Properties  24
   2.5 Differentiability of I.C. and Parameters  25
   2.6 Differential Inequalities  28
   2.7 Exercises  32
3. LINEAR SYSTEMS  39
   3.1 Introduction  39
   3.2 Fundamental Matrices  40
   3.3 Linear Systems with Constant Coefficients  45
   3.4 Two-Dimensional Linear Autonomous Systems  54
   3.5 Linear Systems with Periodic Coefficients  58
   3.6 Adjoint Systems  66
   3.7 Exercises  70
4. STABILITY OF NONLINEAR SYSTEMS  77
   4.1 Definitions  77
   4.2 Linearization  79
   4.3 Saddle Point Property  88
   4.4 Orbital Stability  94
   4.5 Traveling Wave Solutions  102
   4.6 Exercises  109
5. METHOD OF LYAPUNOV FUNCTIONS  115
   5.1 An Introduction to Dynamical Systems  115
   5.2 Lyapunov Functions  121
   5.3 Simple Oscillatory Phenomena  134
   5.4 Gradient Vector Fields  137
   5.5 Exercises  140
6. TWO-DIMENSIONAL SYSTEMS  149
   6.1 Poincaré-Bendixson Theorem  149
   6.2 Levinson-Smith Theorem  160
   6.3 Hopf Bifurcation  177
   6.4 Exercises  186
7. SECOND ORDER LINEAR EQUATIONS  191
   7.1 Sturm's Comparison Theorem and Sturm-Liouville Boundary Value Problem  191
   7.2 Distributions  199
   7.3 Green's Function  201
   7.4 Fredholm Alternative  208
   7.5 Exercises  211
8. THE INDEX THEORY AND BROUWER DEGREE  215
   8.1 Index Theory in the Plane  215
   8.2 Introduction to the Brouwer Degree in Rn  223
   8.3 Lienard Equation with Periodic Forcing  230
   8.4 Exercises  234
9. PERTURBATION METHODS  237
   9.1 Regular Perturbation Methods  237
   9.2 Singular Perturbation: Boundary Value Problem  243
   9.3 Singular Perturbation: Initial Value Problem  249
   9.4 Exercises  261
10. INTRODUCTION TO MONOTONE DYNAMICAL SYSTEMS  263
   10.1 Monotone Dynamical System with Applications to Cooperative Systems and Competitive Systems  263
   10.2 Uniform Persistence  269
   10.3 Application: Competition of Two Species in a Chemostat with Inhibition  278
   10.4 Two Species Competition Models  291
   10.5 Exercises  295
11. INTRODUCTION TO HAMILTONIAN SYSTEMS  299
   11.1 Definitions and Classic Examples  299
   11.2 Linear Hamiltonian Systems  302
   11.3 First Integrals and Poisson Bracket  310
   11.4 Symplectic Transformations  316
   11.5 Generating Functions and Hamilton-Jacobi's Method  322
   11.6 Exercises  333
APPENDIX A  339
   A.1  339
   A.2  342
   A.3  345
APPENDIX B  347
Bibliography  355
Index  359


Chapter 1

INTRODUCTION

1.1 Where do ODEs arise

The theory of ordinary differential equations deals with the large-time behavior of the solution x(t, x0) of the initial value problem (I.V.P.) of the first order system of differential equations:

  dx1/dt = f1(t, x1, x2, ..., xn)
  ...
  dxn/dt = fn(t, x1, x2, ..., xn)
  xi(0) = xi0,  i = 1, 2, ..., n,

or, in vector form,

  dx/dt = f(t, x),  x(0) = x0,   (1.1)

where x = (x1, ..., xn), f = (f1, ..., fn), f : D → R^n, and D is open in R × R^n. If the right-hand side of (1.1) is independent of time t, i.e.,

  dx/dt = f(x),  x ∈ Ω ⊆ R^n,   (1.2)

then we say that (1.2) is an autonomous system. In this case, we call f a vector field on its domain Ω. If the right-hand side depends on time t, then we say that (1.1) is a nonautonomous system. The most important nonautonomous system is the periodic system, i.e., f(t, x) satisfies f(t + w, x) = f(t, x)


for some w > 0 (w is called the period). If f(t, x) = A(t)x, where A(t) ∈ R^{n×n}, then we say that

  dx/dt = A(t)x   (1.3)

is a linear system of differential equations. It is easy to verify that if ϕ(t), ψ(t) are solutions of (1.3), then αϕ(t) + βψ(t) is also a solution of the linear system (1.3) for α, β ∈ R. The system

  dx/dt = A(t)x + g(t)   (1.4)

is called a linear system with nonhomogeneous part g(t). If A(t) ≡ A, then

  dx/dt = Ax   (1.5)

is a linear system with constant coefficients. We say that system (1.1) is nonlinear if it is not linear. It is usually much harder to analyze nonlinear systems than linear ones. The main difference between linear and nonlinear systems is the superposition principle: for linear systems, a linear combination of solutions is also a solution. For the linear systems (1.3), (1.4), (1.5), as we shall see in Chapter 3, we have nice solution structures. However, nonlinear systems arise in many areas of science and engineering, and it is still a great challenge to understand nonlinear phenomena. In the following we present some important examples of differential equations from physics, chemistry and biology.

Example 1.1.1 m x'' + c x' + k x = 0. The equation describes the motion of a spring with damping and restoring forces. Applying Newton's law, F = ma, we have

  ma = m x'' = F = −c x' − k x = friction + restoring force.

Let y = x'. Then we convert the equation into a first order system of two equations

  x' = y,
  y' = −(c/m) y − (k/m) x.

The analysis of the model can be found in Example 5.2.3, Chapter 5.
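As a quick numerical sketch (not from the text), one can integrate the damped spring of Example 1.1.1 and watch the mechanical energy E = (1/2)m y^2 + (1/2)k x^2 decay; the parameter and step-size values below are illustrative choices.

```python
# Damped spring of Example 1.1.1:  x' = y,  y' = -(c/m) y - (k/m) x,
# integrated with a fixed-step classical RK4 scheme (illustrative values).
M, C, K = 1.0, 0.5, 4.0

def rhs(x, y):
    return y, -(C / M) * y - (K / M) * x

def rk4_step(x, y, h):
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + h/2*k1x, y + h/2*k1y)
    k3x, k3y = rhs(x + h/2*k2x, y + h/2*k2y)
    k4x, k4y = rhs(x + h*k3x, y + h*k3y)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            y + h/6*(k1y + 2*k2y + 2*k3y + k4y))

def energy(x, y):                  # E = (1/2) M y^2 + (1/2) K x^2
    return 0.5 * M * y * y + 0.5 * K * x * x

x, y, h = 1.0, 0.0, 0.01
E0 = energy(x, y)
for _ in range(2000):              # integrate up to t = 20
    x, y = rk4_step(x, y, h)
Efinal = energy(x, y)
```

Along solutions dE/dt = −c y'^2... more precisely dE/dt = −c y^2 ≤ 0 for c > 0, which is exactly the Lyapunov-function computation used later in Chapter 5.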


Example 1.1.2 m x'' + c x' + k x = F cos wt. The equation describes the motion of a spring with an external periodic force. It can be rewritten as

  x' = y,
  y' = −(k/m) x − (c/m) y + (1/m) F cos wt.

If c = 0 and w = sqrt(k/m), then we have "resonance" ([BDiP], p. 184). Resonance is discussed in Example 3.6.2 and Exercise 3.25, Chapter 3.

Example 1.1.3 Electrical Networks ([BDiP], p. 184). Let Q(t) be the charge in the RLC circuit at time t. We use Kirchhoff's second law: in a closed circuit, the impressed voltage equals the sum of the voltage drops in the rest of the circuit.

Fig. 1.1

(1) The voltage drop across a resistance of R (ohms) equals RI (Ohm's law).
(2) The voltage drop across an inductance of L (henrys) equals L dI/dt.
(3) The voltage drop across a capacitance of C (farads) equals Q/C.

Hence

  E(t) = L dI/dt + RI + Q/C.

Since I(t) = dQ/dt, it follows that

  L d^2Q/dt^2 + R dQ/dt + (1/C) Q = E(t).
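The resonance claim of Example 1.1.2 can be checked numerically. A hedged sketch (all values illustrative): take m = k = F = 1 and c = 0, so w = sqrt(k/m) = 1, and compare the computed solution of x'' + x = cos t, x(0) = x'(0) = 0, with the exact resonant solution x(t) = (t/2) sin t, whose amplitude grows without bound.

```python
import math

def rhs(t, x, y):                  # x'' + x = cos t  =>  x' = y, y' = -x + cos t
    return y, -x + math.cos(t)

def rk4_step(t, x, y, h):
    k1x, k1y = rhs(t, x, y)
    k2x, k2y = rhs(t + h/2, x + h/2*k1x, y + h/2*k1y)
    k3x, k3y = rhs(t + h/2, x + h/2*k2x, y + h/2*k2y)
    k4x, k4y = rhs(t + h, x + h*k3x, y + h*k3y)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            y + h/6*(k1y + 2*k2y + 2*k3y + k4y))

t, x, y, h = 0.0, 0.0, 0.0, 0.01
peak = 0.0
for _ in range(10000):             # integrate up to t = 100
    x, y = rk4_step(t, x, y, h)
    t += h
    peak = max(peak, abs(x))       # the envelope grows like t/2
```

Off resonance (w != sqrt(k/m)) the same computation stays bounded, which is the contrast Example 3.6.2 makes precise.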


The equation of the electric network is similar to that of mechanical vibration, with L → m, R → c and 1/C → k.

Example 1.1.4 Van der Pol Oscillator ([Kee1], p. 481, [HK], p. 172)

  u'' + εu'(u^2 − 1) + u = 0,  0 < ε ≪ 1.

Let E(t) = u'^2/2 + u^2/2 be the energy. Then

  E'(t) = u'u'' + uu' = u'(−εu'(u^2 − 1) − u) + uu'
        = −ε(u')^2(u^2 − 1),

which is < 0 for |u| > 1 and > 0 for |u| < 1. Hence the oscillator is "self-excited".

Example 1.1.5 Van der Pol Oscillator with periodic forcing

  u'' + εu'(u^2 − 1) + u = A cos wt.

This is the equation Cartwright and Littlewood studied in 1945, and it led Smale to construct the horseshoe in 1960. It is one of the model equations in chaotic dynamical systems [Sma]. The analysis related to this model can be found in Section 8.3, Chapter 8.

Example 1.1.6 Second order conservative system

  x'' + g(x) = 0,

or equivalently

  x1' = x2,
  x2' = −g(x1).

The energy is E(x1, x2) = (1/2)x2^2 + V(x1), where V(x1) = ∫_0^{x1} g(s) ds is the potential. Then the energy E satisfies dE/dt = 0. The analysis of the equation can be found in Example 5.2.2, Chapter 5.

Example 1.1.7 Duffing's equation x'' + (x^3 − x) = 0. The potential V(x) = −(1/2)x^2 + (1/4)x^4 is a double-well potential (see Fig. 1.2).
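For the conservative systems of Examples 1.1.6–1.1.7 the identity dE/dt = 0 can be tested numerically. A small sketch (not from the text) with the Duffing choice g(x) = x^3 − x, so that V(x) = −x^2/2 + x^4/4, and illustrative initial data:

```python
# Conservative system x1' = x2, x2' = -g(x1) with g(x) = x^3 - x (Duffing);
# the energy E = x2^2/2 + V(x1) should be (numerically) constant under RK4.
def g(x):
    return x**3 - x

def rhs(a, b):
    return b, -g(a)

def rk4_step(x1, x2, h):
    k1a, k1b = rhs(x1, x2)
    k2a, k2b = rhs(x1 + h/2*k1a, x2 + h/2*k1b)
    k3a, k3b = rhs(x1 + h/2*k2a, x2 + h/2*k2b)
    k4a, k4b = rhs(x1 + h*k3a, x2 + h*k3b)
    return (x1 + h/6*(k1a + 2*k2a + 2*k3a + k4a),
            x2 + h/6*(k1b + 2*k2b + 2*k3b + k4b))

def energy(x1, x2):                # E = x2^2/2 - x1^2/2 + x1^4/4
    return 0.5*x2**2 - 0.5*x1**2 + 0.25*x1**4

x1, x2, h = 1.5, 0.0, 0.001
E0 = energy(x1, x2)
for _ in range(20000):             # integrate up to t = 20
    x1, x2 = rk4_step(x1, x2, h)
drift = abs(energy(x1, x2) - E0)
```

RK4 is not exactly energy-conserving, but over a fixed time interval the drift shrinks like h^4, so it serves as a practical check of dE/dt = 0.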


Fig. 1.2

Example 1.1.8 Duffing's equation with damping and periodic forcing

  x'' + βx' + (x^3 − x) = A cos wt.

This is also a typical model equation in chaotic dynamical systems.

Example 1.1.9 Simple pendulum equation (see Fig. 1.3)

  d^2θ/dt^2 + (g/ℓ) sin θ = 0,

which follows from F = ma:  −mg sin θ = mℓ · d^2θ/dt^2.

We analyze the simple pendulum equation in Example 5.3.1, Chapter 5.

Fig. 1.3


Example 1.1.10 Gradient Vector Fields

  x' = −∇U(x),

where U : R^n → R is a C^2 function. In classical mechanics, U is the potential energy. We note that U satisfies

  (d/dt) U(x(t)) = −|∇U(x(t))|^2 ≤ 0.

We analyze gradient vector fields in Section 5.4, Chapter 5.

Example 1.1.11 N-Body Problem. By Newton's law, the equations are

  m_i q_i'' = −G ∑_{j≠i} m_i m_j (q_i − q_j) / r_ij^3,   r_ij := |q_i − q_j|.
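The monotonicity in Example 1.1.10 is easy to see numerically: along any discrete gradient-descent trajectory (explicit Euler steps on x' = −∇U, with a small enough step), U never increases. A minimal sketch with a hypothetical two-dimensional double-well potential:

```python
# Explicit Euler on x' = -grad U for U(x, y) = x^4/4 - x^2/2 + y^2/2
# (an illustrative double-well potential, not one from the text).
def U(x, y):
    return 0.25*x**4 - 0.5*x**2 + 0.5*y**2

def grad_U(x, y):
    return x**3 - x, y

x, y, h = 2.0, 1.5, 0.01
vals = [U(x, y)]
for _ in range(1000):
    gx, gy = grad_U(x, y)
    x, y = x - h*gx, y - h*gy          # one Euler step down the gradient
    vals.append(U(x, y))

# U is non-increasing along the discrete trajectory, and the state
# approaches the local minimizer (1, 0) of this particular U.
monotone = all(a >= b for a, b in zip(vals, vals[1:]))
```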

Example 1.1.12 Lorenz equations ([Str], p. 301, [V]) (also see Example 6.3.4, Chapter 6)

  x' = σ(y − x),
  y' = rx − y − xz,
  z' = xy − bz,

where σ, r, b > 0. When σ = 10, b = 8/3, r = 28, (x(0), y(0), z(0)) ≈ (0, 0, 0), we have the "butterfly effect" phenomenon.

Example 1.1.13 Michaelis-Menten Enzyme Kinetics ([Kee1] p. 511, [LS] p. 302). Consider the conversion of a chemical substrate S to a product P by enzyme catalysis. The reaction scheme

  E + S ⇌ ES → E + P,

with rate constants k1, k−1 for the reversible first step and k2 for the second, was proposed by Michaelis and Menten in 1913. The law of mass action states that the rate of a reaction is proportional to the concentrations of the reactants. Then by the law of mass action, we have the following equations:

  d[E]/dt = −k1[E][S] + k−1[ES] + k2[ES],
  d[S]/dt = −k1[E][S] + k−1[ES],
  d[ES]/dt = k1[E][S] − k−1[ES] − k2[ES],
  d[P]/dt = k2[ES],


with initial concentrations [E](0) = E0, [S](0) = S0, [ES](0) = [P](0) = 0. Since

  d[ES]/dt + d[E]/dt = 0,

we have [ES] + [E] ≡ E0. Let u = [ES]/C0, v = [S]/S0, τ = k1 E0 t, and

  κ = (k−1 + k2)/(k1 S0),  ε = C0/S0,  λ = k−1/(k−1 + k2),  C0 = E0/(1 + κ).

Then we have the following equations:

  ε du/dτ = v − (v + κ)u/(1 + κ),  0 < ε ≪ 1,
  dv/dτ = −v + (v + κλ)u/(1 + κ),
  u(0) = 0, v(0) = 1.

We shall introduce the method of singular perturbation to study this system in Chapter 9.

Example 1.1.14 Belousov-Zhabotinskii Reaction [Murr]

  ε dx/dt = qy − xy + x(1 − x),
  δ dy/dt = −qy − xy + 2fz,
  dz/dt = x − z,

where ε, δ, q are small and f ≈ 0.5. This is an important model of a chemical oscillator. The analytic work can be found in Section 6, Chapter 3 of [Smi].

Example 1.1.15 Logistic equation. Let x(t) be the population of a species. Then x(t) satisfies

  dx/dt = rx − bx^2 = rx(1 − x/K),  r, b > 0,


where K and r are the carrying capacity and the intrinsic growth rate of the species, respectively.

Example 1.1.16 The Lotka-Volterra model for Predator-Prey interaction. Let x(t), y(t) be the population sizes of prey and predator at time t, respectively ([Murr] pp. 124 and 62). Then we have the following two predator-prey models of Lotka-Volterra type:

  dx/dt = ax − bxy,
  dy/dt = cxy − dy,   a, b, c, d > 0

(for the analysis of this model, see Example 5.2.4, Chapter 5) and

  dx/dt = rx(1 − x/K) − bxy,
  dy/dt = cxy − dy

(for the analysis of this model, see Example 5.2.7, Chapter 5). The first model assumes the prey species grows exponentially in the absence of predation, while the second assumes the prey species grows logistically with carrying capacity K.

Example 1.1.17 The Lotka-Volterra two species competition model ([Murr]). Let xi(t), i = 1, 2, be the population of the i-th competing species. We assume that the i-th species grows logistically in the absence of competition with intrinsic rate ri and carrying capacity Ki. In the following model, α, β > 0 are called the competition coefficients. The model takes the form:

  dx1/dt = r1 x1 (1 − x1/K1) − α x1 x2,
  dx2/dt = r2 x2 (1 − x2/K2) − β x1 x2,
  x1(0) > 0, x2(0) > 0.

We analyze the model in Example 4.2.4, Chapter 4 and Example 5.2.8, Chapter 5.
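The first Lotka-Volterra predator-prey model in Example 1.1.16 has a well-known first integral, V(x, y) = cx − d ln x + by − a ln y, obtained by separating variables in dy/dx; one checks directly that dV/dt = 0 along solutions. A numerical sketch with purely illustrative coefficients confirms that V stays (numerically) constant along an orbit:

```python
import math

a, b, c, d = 1.0, 0.5, 0.5, 1.0        # illustrative coefficients

def V(x, y):                            # first integral of the LV system
    return c*x - d*math.log(x) + b*y - a*math.log(y)

def rhs(x, y):
    return a*x - b*x*y, c*x*y - d*y

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h/2*k1[0], y + h/2*k1[1])
    k3 = rhs(x + h/2*k2[0], y + h/2*k2[1])
    k4 = rhs(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, h = 3.0, 1.0, 0.001
V0 = V(x, y)
for _ in range(20000):                  # integrate up to t = 20
    x, y = rk4_step(x, y, h)
drift = abs(V(x, y) - V0)
```

Since V has a strict minimum at the interior equilibrium (d/c, a/b), its level sets are the closed orbits surrounding it; this is the Lyapunov-function viewpoint taken up in Example 5.2.4.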


Example 1.1.18 Rosenzweig-McArthur Predator-Prey model [M-S]. Let x(t), y(t) be the population densities of prey and predator at time t, respectively. Assume that x(t) grows logistically in the absence of predation with intrinsic growth rate r and carrying capacity K. The predator's growth is of Holling-type II functional response mx/(a + x), where m is the maximal growth rate and a is the half-saturation constant. The positive constants d, c represent the predator's death rate and conversion constant. The model takes the form

  dx/dt = rx(1 − x/K) − c (mx/(a + x)) y,
  dy/dt = (mx/(a + x) − d) y,
  x(0) > 0, y(0) > 0.

The analysis of the Rosenzweig-McArthur model can be found in Example 4.2.3, Chapter 4.

Example 1.1.19 Food chain model with Holling-type II functional responses [KH]. Let x(t), y(t), z(t) be the population densities of prey, predator and top predator, respectively. The model takes the form

  dx/dt = rx(1 − x/K) − c1 (m1 x/(a1 + x)) y,
  dy/dt = (m1 x/(a1 + x) − d1) y − c2 (m2 y/(a2 + y)) z,
  dz/dt = (m2 y/(a2 + y) − d2) z,
  x(0) > 0, y(0) > 0, z(0) > 0.

This is a well-known model in mathematical ecology with chaotic dynamics.

Example 1.1.20 SIR model in Epidemics ([Murr] p. 612). Let S(t), I(t), R(t) be the population densities of susceptible, infective and removed individuals, respectively. The model takes the form

  dS/dt = −rSI,
  dI/dt = rSI − aI,
  dR/dt = aI,


ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

S(0) = S0 > 0, I(0) = I0 > 0, R(0) = 0, where r > 0 is the infection rate and a > 0 is the removal rate of infectives. For analytic results, see Exercise 4.23.

Example 1.1.21 Model of two predators competing for a prey [HHW].

dS/dt = rS(1 − S/K) − (1/y1)(m1 S/(a1 + S)) x1 − (1/y2)(m2 S/(a2 + S)) x2,
dx1/dt = (m1 S/(a1 + S) − d1) x1,
dx2/dt = (m2 S/(a2 + S) − d2) x2,
S(0) > 0, x1(0) > 0, x2(0) > 0.

Chapter 2

FUNDAMENTAL THEORY

2.1 Introduction and Preliminaries

In this chapter we shall study the fundamental properties of the initial value problem (I.V.P.)

dx/dt = f(t, x), x(t0) = x0, f : D ⊆ R × Rn → Rn,    (2.1)

where D is an open set of R × Rn containing (t0, x0). We shall answer the following questions:

(Q1) What is the least condition on f(t, x) to ensure the local existence of a solution of I.V.P. (2.1)?
(Q2) When is the solution of (2.1) unique?
(Q3) When does the solution of (2.1) exist globally? That is, when is the solution's maximal interval of existence the whole real line R?
(Q4) Is the initial value problem well-posed? That is, do the solutions of (2.1) depend continuously on the initial conditions and parameters?

Before we study these questions, we need the following preliminaries. Let x ∈ Rn. The function | · | : Rn → R+ is a norm if it satisfies:

(i) |x| ≥ 0 and |x| = 0 iff x = 0;
(ii) |αx| = |α||x|, α ∈ R, x ∈ Rn;
(iii) |x + y| ≤ |x| + |y|.

The three most commonly used norms of a vector x = (x1, · · · , xn) are (from [IK])

|x|∞ = sup_{1≤i≤n} |xi|,
|x|1 = Σ_{i=1}^n |xi|,
|x|2 = (Σ_{i=1}^n |xi|²)^{1/2}.


It is easy to verify that these norms satisfy

|x|2 ≤ |x|1 ≤ √n |x|2,
|x|∞ ≤ |x|2 ≤ √n |x|∞,
|x|∞ ≤ |x|1 ≤ n |x|∞,

i.e., these norms are equivalent. For A ∈ Rn×n and a given norm | · |, the induced norm of A is defined as follows:

kAk = sup_{x≠0} |Ax| / |x|.

It is well known from linear algebra ([IK] p. 9) that for A, B ∈ Rn×n, α ∈ R,

|Ax| ≤ kAk |x|, kABk ≤ kAk kBk, kA + Bk ≤ kAk + kBk, kαAk = |α| kAk.

From [IK], it follows that

kAk2 = (max_{1≤i≤n} { λi : λi is an eigenvalue of A^T A })^{1/2},
kAk1 = sup_k Σ_{i=1}^n |a_{ik}|,
kAk∞ = sup_i Σ_{k=1}^n |a_{ik}|.

From (iii) we obtain by induction

|x1 + x2 + · · · + xm| ≤ |x1| + |x2| + · · · + |xm|.

From this inequality and (ii) we can deduce the inequality

|∫_a^b x(t) dt| ≤ ∫_a^b |x(t)| dt

for any vector function x(t) which is continuous on [a, b]. In fact, let δ = (b − a)/m and tk = a + kδ, k = 1, · · · , m. Then

|∫_a^b x(t) dt| = |lim_{m→∞} Σ_{k=1}^m x(tk) δ| ≤ lim_{m→∞} Σ_{k=1}^m |x(tk)| δ = ∫_a^b |x(t)| dt.
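The equivalence inequalities and the column-sum/row-sum formulas for kAk1 and kAk∞ can be checked directly in code. The sketch below is an illustration (not from the text) with a sample vector and matrix:

```python
import math

def norm_inf(x):
    # |x|_inf = max_i |x_i|
    return max(abs(xi) for xi in x)

def norm_1(x):
    # |x|_1 = sum_i |x_i|
    return sum(abs(xi) for xi in x)

def norm_2(x):
    # |x|_2 = (sum_i |x_i|^2)^(1/2)
    return math.sqrt(sum(xi * xi for xi in x))

def matrix_norm_1(A):
    # ||A||_1 = max over columns k of sum_i |a_ik|
    n = len(A)
    return max(sum(abs(A[i][k]) for i in range(n)) for k in range(n))

def matrix_norm_inf(A):
    # ||A||_inf = max over rows i of sum_k |a_ik|
    n = len(A)
    return max(sum(abs(A[i][k]) for k in range(n)) for i in range(n))

x = [3.0, -4.0, 1.0]
n = len(x)
# the three equivalence inequalities from the text
assert norm_2(x) <= norm_1(x) <= math.sqrt(n) * norm_2(x)
assert norm_inf(x) <= norm_2(x) <= math.sqrt(n) * norm_inf(x)
assert norm_inf(x) <= norm_1(x) <= n * norm_inf(x)
```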

A vector space X, not necessarily a finite dimensional vector space, is called a normed space if a real-valued function |x| is defined for all x ∈ X having


properties (i)–(iii). A Banach space is a complete normed space, i.e., one in which every Cauchy sequence {xn} is convergent. Let I = [a, b] be a bounded, closed interval and C(I) = {f | f : I → Rn is continuous} with the norm

kf k∞ = sup_{a≤t≤b} |f(t)|;

then k · k∞ is a norm on C(I). We note that fm → f in C(I) means kfm − f k∞ → 0, i.e., fm → f uniformly on I.

Theorem 2.1.1 ([Apo] p. 222). C(I) is a Banach space.

In the following, we need the notion of equicontinuity. A family of functions F = {f} defined on an interval I is said to be equicontinuous on I if given ε > 0 there exists a δ = δ(ε) > 0, independent of f ∈ F, such that for any t, t̄ ∈ I,

|f(t) − f(t̄)| < ε whenever |t − t̄| < δ, f ∈ F.

We say that F is uniformly bounded if there exists M > 0 such that kf k∞ < M for all f ∈ F. The following Ascoli-Arzela Theorem is a generalization of the Bolzano-Weierstrass Theorem in Rn; it deals with the property of compactness in the Banach space C(I).

Theorem 2.1.2 (Ascoli-Arzela) [Cop]. Let fm ∈ C(I), I = [a, b]. If {fm}∞m=1 is equicontinuous and uniformly bounded, then there exists a convergent subsequence {fmk}.

Proof. Let {γk} be the set of rational numbers in I. Since {fm(γ1)} is a bounded sequence, we can extract a subsequence {f1m}∞m=1 of {fm} such that f1m(γ1) converges to a point denoted as f(γ1). Similarly {f1m(γ2)}∞m=1 is a bounded sequence, and we can extract a subsequence {f2m} of {f1m} such that f2m(γ2) converges to f(γ2). Continuing this process inductively, we extract a subsequence {fkm} of {fk−1,m} such that fkm(γk) converges to a point, denoted as f(γk). By the diagonal process applied to the array

f11, f12, f13, f14, · · ·
f21, f22, f23, f24, · · ·
 ⋮
fk1, fk2, · · · , fkk, · · ·

we have fmm(γk) → f(γk) as m → ∞ for all k = 1, 2, · · · . Let gm := fmm. Claim: {gm} is a Cauchy sequence; then {gm} is the desired convergent subsequence. Since {gm(γj)} converges for each j, given ε > 0 there exists Mj(ε) > 0 such that |gm(γj) − gi(γj)| < ε for m, i ≥ Mj(ε). By the equicontinuity of {gm}, there exists δ > 0 such that for each i, |gi(x) − gi(y)| < ε for |x − y| < δ. From the Heine-Borel Theorem, the open covering {B(γj, δ)}∞j=1 of [a, b] has a finite subcovering {B(γjk, δ)}, k = 1, · · · , L. Let M(ε) = max{Mj1(ε), · · · , MjL(ε)}. Then x ∈ [a, b] implies x ∈ B(γjℓ, δ) for some ℓ, 1 ≤ ℓ ≤ L. For m, i ≥ M(ε),

|gm(x) − gi(x)| ≤ |gm(x) − gm(γjℓ)| + |gm(γjℓ) − gi(γjℓ)| + |gi(γjℓ) − gi(x)| < ε + ε + ε = 3ε.

Thus {gm} is a Cauchy sequence and we have completed the proof of the theorem by the completeness of C(I).

2.2 Local Existence and Uniqueness of Solutions of I.V.P.

Before we prove the theorems for the local existence of solutions of I.V.P. (2.1), we present some examples to illustrate local existence.

Example 2.2.1 The system

dx/dt = 1 + x², x(0) = 0,

has a unique solution x(t) = tan t defined on (−π/2, π/2). The solution x(t) is not defined outside the interval (−π/2, π/2).
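The finite-time blow-up can also be observed numerically. The sketch below is an illustration (not from the text): it integrates dx/dt = 1 + x² with an explicit Euler scheme, tracks tan t on [0, 1], and grows very large as t approaches π/2:

```python
import math

def euler(f, t0, x0, h, steps):
    # explicit Euler polygon for x' = f(t, x)
    t, x = t0, x0
    for _ in range(steps):
        x += h * f(t, x)
        t += h
    return x

f = lambda t, x: 1.0 + x * x

# integrate to t = 1.0, well inside (-pi/2, pi/2): should be close to tan(1)
approx = euler(f, 0.0, 0.0, 1e-4, 10000)
assert abs(approx - math.tan(1.0)) < 1e-2

# approaching t = pi/2 (about 1.5708) the solution grows without bound
near_blowup = euler(f, 0.0, 0.0, 1e-4, 15700)  # t = 1.57
assert near_blowup > 100.0
```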


Example 2.2.2 Consider the system

dx/dt = x², x(0) = x0.

By separation of variables, the solution is

x(t) = 1 / (1/x0 − t).

If x0 > 0 then x(t) is defined on the interval (−∞, 1/x0). If x0 < 0 then x(t) is defined on the interval (1/x0, ∞). If x0 = 0 then x(t) = 0 for all t.

Example 2.2.3 Consider the I.V.P.

dx/dt = f(t, x) = { √x, x ≥ 0; 0, x < 0 },  x(0) = 0.

Then there are infinitely many solutions

x(t) = { (t − c)²/4, t ≥ c ≥ 0; 0, t ≤ c },

and x(t) ≡ 0 is also a solution. We note that f(t, x) is not Lipschitz at x = 0. (Prove it!)

Remark 2.2.1 From the application's viewpoint, most I.V.P.'s from the physical world should have existence and uniqueness properties, and the solution should be defined globally, i.e., the interval of existence of the solution is (−∞, ∞). A nice feature of the solution of an ODE system is that time can be "reversed", i.e., x(t) is also defined for t < t0, where t0 is the initial time.

Lemma 2.2.1 If the functions f(t, x) and x(t) are continuous, then the initial value problem (2.1) is equivalent to the integral equation

x(t) = x0 + ∫_{t0}^t f(s, x(s)) ds.    (2.2)

Proof. Obviously, a solution x(t) of (2.1) satisfies (2.2). Conversely, let x(t) be a solution of (2.2). Substituting t = t0 in (2.2), we obtain x(t0) = x0. Moreover, from the assumptions we have that f(t, x(t)) is continuous. From (2.2) and the fundamental theorem of Calculus it follows that x(t) is differentiable and x′(t) = f(t, x(t)).
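As a numerical illustration of the integral-equation form (2.2) (a sketch, not from the text): both members of the solution family in Example 2.2.3 satisfy x(t) = x(0) + ∫₀ᵗ √(x(s)) ds up to quadrature error, which confirms the non-uniqueness through x(0) = 0:

```python
def solution_family(c, t):
    # x(t) = (t - c)^2 / 4 for t >= c, and 0 for t <= c (with c >= 0)
    return (t - c) ** 2 / 4.0 if t >= c else 0.0

def f(t, x):
    # right-hand side of Example 2.2.3
    return x ** 0.5 if x >= 0 else 0.0

def residual(c, t_end, n=20000):
    # residual of the integral equation x(t) = x(0) + int_0^t f(s, x(s)) ds
    # using the midpoint quadrature rule
    h = t_end / n
    integral = sum(f((k + 0.5) * h, solution_family(c, (k + 0.5) * h)) * h
                   for k in range(n))
    return abs(solution_family(c, t_end) - (0.0 + integral))

# both c = 0 and c = 1 give solutions through x(0) = 0
assert residual(0.0, 2.0) < 1e-6
assert residual(1.0, 2.0) < 1e-6
```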


Next we state a theorem for the local existence of initial value problem (2.1) and defer its proof to Appendix A.1.

Theorem 2.2.1 For any ε > 0, there exists an ε-approximate solution of I.V.P. (2.1) on the interval I = {t : |t − t0| ≤ c}.

Theorem 2.2.2 Let f ∈ C(D), (t0, x0) ∈ D. Then the I.V.P. (2.1) has a solution on the interval I = [t0 − c, t0 + c] for some c > 0.

To prove the uniqueness of the solutions of I.V.P., we need the following Gronwall inequality.

Theorem 2.2.3 Let λ(t) be a real-valued continuous function and μ(t) a nonnegative continuous function on [a, b]. If a continuous function y(t) has the property that

y(t) ≤ λ(t) + ∫_a^t μ(s)y(s) ds,  a ≤ t ≤ b,    (2.3)

then on [a, b] we have

y(t) ≤ λ(t) + ∫_a^t λ(s)μ(s) exp(∫_s^t μ(τ) dτ) ds.

In particular, if λ(t) ≡ λ is a constant,

y(t) ≤ λ exp(∫_a^t μ(s) ds).

Proof. Let

z(t) = ∫_a^t μ(s)y(s) ds.

Then z(t) is differentiable and from (2.3) we have

z′(t) − μ(t)z(t) ≤ λ(t)μ(t).    (2.4)

Let

w(t) = z(t) exp(−∫_a^t μ(τ) dτ).

Then from (2.4), it follows that

w′(t) ≤ λ(t)μ(t) exp(−∫_a^t μ(τ) dτ).

Since w(a) = 0, integrating the above from a to t yields

w(t) ≤ ∫_a^t λ(s)μ(s) exp(−∫_a^s μ(τ) dτ) ds.

By the definition of w(t), it follows that

z(t) ≤ ∫_a^t λ(s)μ(s) exp(∫_s^t μ(τ) dτ) ds.

Then from (2.3) and the above inequality we complete the proof.

Remark 2.2.2 Gronwall's inequality is frequently used to estimate a bound on the solutions of an ordinary differential equation.

The next theorem concerns the uniqueness of the solutions of I.V.P.

Theorem 2.2.4 Let x1(t), x2(t) be differentiable functions such that |x1(a) − x2(a)| ≤ δ and |x′i(t) − f(t, xi(t))| ≤ εi, i = 1, 2, for a ≤ t ≤ b. If the function f(t, x) satisfies a Lipschitz condition in x,

|f(t, x1) − f(t, x2)| ≤ L|x1 − x2|,

then

|x1(t) − x2(t)| ≤ δ e^{L(t−a)} + (ε1 + ε2) [e^{L(t−a)} − 1] / L  for a ≤ t ≤ b.

Proof. Put ε = ε1 + ε2 and γ(t) = x1(t) − x2(t). Then

|γ′(t)| ≤ |f(t, x1(t)) − f(t, x2(t))| + ε ≤ L|γ(t)| + ε.

Since

∫_a^t γ′(s) ds = γ(t) − γ(a),

it follows that

|γ(t)| ≤ |γ(a)| + |∫_a^t γ′(s) ds|
 ≤ δ + ∫_a^t |γ′(s)| ds
 ≤ δ + ∫_a^t (L|γ(s)| + ε) ds
 = δ + ε(t − a) + ∫_a^t L|γ(s)| ds.

Therefore, by Gronwall's inequality,

|γ(t)| ≤ δ + ε(t − a) + ∫_a^t L {δ + ε(s − a)} e^{L(t−s)} ds

or, after integrating the right-hand side by parts,

|γ(t)| ≤ δ e^{L(t−a)} + ε [e^{L(t−a)} − 1] / L.
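A quick numerical sanity check of the Gronwall estimate (an illustration, not from the text): building y from the equality case y′ = μ(t)y, y(a) = λ by Euler steps, the computed y never exceeds the bound λ exp(∫ₐᵗ μ(s) ds):

```python
import math

def gronwall_bound(lam, mu, a, t, n=1000):
    # constant-lambda form of the bound: lam * exp( int_a^t mu(s) ds ),
    # with the integral computed by the midpoint rule
    h = (t - a) / n
    integral = sum(mu(a + (k + 0.5) * h) for k in range(n)) * h
    return lam * math.exp(integral)

# build y satisfying y(t) = lam + int_a^t mu(s) y(s) ds (equality case)
# by forward Euler steps of y' = mu(t) y; Euler under-estimates exp,
# so y must stay at or below the Gronwall bound
lam, a, b, n = 2.0, 0.0, 1.0, 1000
mu = lambda s: 1.0 + s  # any nonnegative continuous mu
h = (b - a) / n
y = [lam]
for k in range(n):
    s = a + k * h
    y.append(y[-1] + h * mu(s) * y[-1])

for k in range(0, n + 1, 100):
    t = a + k * h
    assert y[k] <= gronwall_bound(lam, mu, a, t) + 1e-9
```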

Corollary 2.2.1 Let ε1 = ε2 = δ = 0 in Theorem 2.2.4. Then the uniqueness of solutions of I.V.P. follows if f(t, x) satisfies a Lipschitz condition in x.

In the following Corollary 2.2.2, the definition of continuous dependence of solutions on initial conditions is stated as follows: For a fixed compact interval [t0, t0 + T], given ε > 0, there exists δ = δ(ε, T) > 0 such that

|x1(t0) − x2(t0)| < δ  implies  |x1(t) − x2(t)| < ε

for all t0 ≤ t ≤ t0 + T.

Corollary 2.2.2 The continuous dependence of the solutions on the initial conditions holds when f satisfies a global Lipschitz condition.

Proof. Let x1(t), x2(t) be two solutions of I.V.P. (2.1) with |x1(t0) − x2(t0)| ≤ δ; then applying Theorem 2.2.4 with ε1 = ε2 = 0 yields

|x1(t) − x2(t)| ≤ δ e^{L(t−t0)}  for t ≥ t0.    (2.5)

Choose δ > 0 satisfying δ < ε e^{−LT}. Then from (2.5) we obtain the continuous dependence on initial conditions.
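The choice δ < εe^{−LT} can be illustrated on the scalar equation x′ = Lx, whose Lipschitz constant is L and for which the gap between two solutions is exactly δe^{Lt} (a sketch, not from the text):

```python
import math

# x' = L*x has solution x(t) = x(0) * exp(L*t); its Lipschitz constant is L
L, T, eps = 1.0, 2.0, 1e-3
delta = 0.5 * eps * math.exp(-L * T)  # delta < eps * e^{-L T}

x1_0, x2_0 = 1.0, 1.0 + delta  # two delta-close initial conditions
gap = lambda t: abs(x2_0 - x1_0) * math.exp(L * t)  # |x1(t) - x2(t)|

# the gap obeys the bound delta * e^{L t} and stays below eps on [0, T]
for k in range(21):
    t = T * k / 20
    assert gap(t) <= delta * math.exp(L * t) + 1e-15
    assert gap(t) < eps
```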


Corollary 2.2.3 If f ∈ C¹(D), then we have local existence and uniqueness of the solutions of I.V.P.

Proof. It suffices to show that if f ∈ C¹(D) then f is locally Lipschitz. Since f ∈ C¹(D) and (t0, x0) ∈ D, Dx f(t, x) is continuous on R = {(t, x) : |x − x0| ≤ δ1, |t − t0| ≤ δ2} for some δ1, δ2 > 0. Claim: f(t, x) satisfies a Lipschitz condition in x on the rectangle R. Since

f(t, x1) − f(t, x2) = ∫_0^1 (d/ds) f(t, sx1 + (1 − s)x2) ds
 = ∫_0^1 Dx f(t, sx1 + (1 − s)x2) · (x1 − x2) ds,

we have |f(t, x1) − f(t, x2)| ≤ M |x1 − x2|, where

M = sup { kDx f(t, sx1 + (1 − s)x2)k : 0 ≤ s ≤ 1, |t − t0| ≤ δ2 }.

Let X be a Banach space and F ⊆ X a closed subset of X. We say T : F → F is a contraction if |T x1 − T x2| ≤ θ|x1 − x2| for some 0 < θ < 1 and for any x1, x2 ∈ F. It is easy to verify that a contraction map has a unique fixed point.

Theorem 2.2.5 (Contraction mapping principle). There exists a unique fixed point for a contraction map T : F → F.

Proof. Given any x0 ∈ F, define a sequence {xn}∞n=0 by xn+1 = T xn. Then

|xn+1 − xn| ≤ θ|xn − xn−1| ≤ · · · ≤ θⁿ |x1 − x0|.

Claim: {xn} is a Cauchy sequence. Let m > n. Then

|xm − xn| ≤ |xm − xm−1| + |xm−1 − xm−2| + · · · + |xn+1 − xn|
 ≤ θⁿ (1 + θ + · · · + θ^{m−n−1}) |x1 − x0|
 ≤ (θⁿ / (1 − θ)) |x1 − x0| → 0 as n → ∞.


Since F is a closed set, xn → x ∈ F as n → ∞. Claim: x is the desired fixed point of T. Since

|T x − x| ≤ |T x − xn+1| + |xn+1 − x| = |T x − T xn| + |xn+1 − x| ≤ θ|x − xn| + |xn+1 − x| → 0 as n → ∞,

it follows that T x = x.

Now we shall apply the contraction principle to show the existence and uniqueness of solutions of I.V.P. (2.1).

Theorem 2.2.6 Let f(t, x) be continuous on S = {(t, x) : |t − t0| ≤ a, |x − x0| ≤ b} and let f(t, x) satisfy a Lipschitz condition in x with Lipschitz constant L. Let M = max{|f(t, x)| : (t, x) ∈ S}. Then there exists a unique solution of the I.V.P.

dx/dt = f(t, x), x(t0) = x0,

on I = {t : |t − t0| ≤ α}, where α < min{a, b/M, 1/L}.

Proof. Let B be the closed subset of C(I) defined by

B = {ϕ ∈ C(I) : |ϕ(t) − x0| ≤ b, t ∈ I}.

Define a mapping on B by

(T ϕ)(t) = x0 + ∫_{t0}^t f(s, ϕ(s)) ds.    (2.6)

First we show that T maps B into B. Since f(t, ϕ(t)) is continuous, T certainly maps B into C(I). If ϕ ∈ B, then

|(T ϕ)(t) − x0| = |∫_{t0}^t f(s, ϕ(s)) ds| ≤ ∫_{t0}^t |f(s, ϕ(s))| ds ≤ M|t − t0| ≤ Mα < b.

Hence T ϕ ∈ B. Next we prove that T is a contraction mapping. Let kxk be the sup norm of x. Then

|T x(t) − T y(t)| = |∫_{t0}^t (f(s, x(s)) − f(s, y(s))) ds|
 ≤ ∫_{t0}^t |f(s, x(s)) − f(s, y(s))| ds
 ≤ L ∫_{t0}^t |x(s) − y(s)| ds ≤ L kx − yk |t − t0|
 ≤ Lα kx − yk = θ kx − yk,

where θ = Lα < 1. Hence kT x − T yk ≤ θ kx − yk. Thus we complete the proof by the contraction mapping principle.

Remark 2.2.3 From (2.6), {xn+1(t)} is a successive approximation of x(t). It is used for theoretical purposes, not for numerical computation. If

xn+1(t) = x0 + ∫_{t0}^t f(s, xn(s)) ds,  x0(t) ≡ x0,  |t − t0| ≤ α,

then {xn(t)} are called the Picard iterations of I.V.P. (2.1).
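For a concrete sketch of Picard iteration (my example, not from the text): for dx/dt = x, x(0) = 1, the n-th Picard iterate is the Taylor partial sum 1 + t + · · · + tⁿ/n!, which converges to eᵗ. The code below computes the iterates on a grid with a trapezoid quadrature:

```python
import math

def picard_iterates(f, t0, x0, t_end, n_iter, n_grid=2000):
    # successive approximations x_{n+1}(t) = x0 + int_{t0}^{t} f(s, x_n(s)) ds
    h = (t_end - t0) / n_grid
    ts = [t0 + k * h for k in range(n_grid + 1)]
    x = [x0] * (n_grid + 1)  # x_0(t) = x0
    for _ in range(n_iter):
        vals = [f(ts[k], x[k]) for k in range(n_grid + 1)]
        new = [x0]
        for k in range(n_grid):  # accumulated trapezoid rule
            new.append(new[-1] + 0.5 * h * (vals[k] + vals[k + 1]))
        x = new
    return ts, x

ts, x = picard_iterates(lambda t, x: x, 0.0, 1.0, 1.0, n_iter=15)
# after enough iterations the iterates approximate x(t) = e^t
assert abs(x[-1] - math.e) < 1e-4
```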

2.3 Continuation of Solutions

In this section we discuss the properties of the solution on the maximal interval of existence and the global existence of the solution of the I.V.P. (2.1).

Theorem 2.3.1 Let f ∈ C(D) and |f| ≤ M on D. Suppose ϕ is a solution of (2.1) on the interval J = (a, b). Then

(i) the two limits lim_{t→a+} ϕ(t) = ϕ(a+) and lim_{t→b−} ϕ(t) = ϕ(b−) exist;
(ii) if (a, ϕ(a+)) (respectively, (b, ϕ(b−))) is in D, then the solution ϕ can be continued to the left past the point t = a (respectively, to the right past the point t = b).

Proof. Consider the right endpoint b. The proof for the left endpoint a is similar. Fix τ ∈ J and set ξ = ϕ(τ). Then for τ < t < u < b, we have

|ϕ(u) − ϕ(t)| = |∫_t^u f(s, ϕ(s)) ds| ≤ M(u − t).    (2.7)

Given any sequence {tm} ↑ b, from (2.7) we see that {ϕ(tm)} is a Cauchy sequence. Thus, the limit ϕ(b−) exists. If (b, ϕ(b−)) is in D, then by the local existence theorem we can extend the solution ϕ to b + δ for some δ > 0 as follows: Let ϕ̃(t) be the solution of the I.V.P., for b ≤ t ≤ b + δ,

dx/dt = f(t, x), x(b) = ϕ(b−).

Then we verify that the function

ϕ*(t) = { ϕ(t), a ≤ t ≤ b; ϕ̃(t), b ≤ t ≤ b + δ }

is a solution of the I.V.P. (2.1).

Remark 2.3.1 The assumption |f| ≤ M on D is stronger than the theorem needs. In fact, from (2.7) we only need |f(t, ϕ(t))| ≤ M for t close to b−.

Definition 2.3.1 We say (t, ϕ(t)) → ∂D as t → b− if for any compact set K ⊆ D, there exists t* ∈ (a, b) such that (t, ϕ(t)) ∉ K for all t ∈ (t*, b).

Example 2.3.1 dx/dt = 1 + x², x(0) = 0. Then the solution is x(t) = tan(t), −π/2 < t < π/2. In this case D = R × R and ∂D is an empty set.

Corollary 2.3.1 If f ∈ C(D) and ϕ is a solution of (2.1) on an interval J, then ϕ can be continued to a maximal interval J* ⊇ J in such a way that (t, ϕ(t)) → ∂D as t → ∂J* (and |t| + |ϕ(t)| → ∞ if ∂D is empty). The extended solution ϕ* on J* is noncontinuable.

Proof. By Zorn's Lemma, there exists a noncontinuable solution ϕ* on a maximal interval J*. By Theorem 2.3.1, J* = (a, b) must be open. If b = ∞, then obviously from Definition 2.3.1 (t, ϕ*(t)) → ∂D as t → ∞. Assume b < ∞. Let c be any number in (a, b). Claim: For any compact set K ⊆ D, {(t, ϕ*(t)) : t ∈ [c, b)} is not contained in K. If not, then f(t, ϕ*(t)) is bounded on [c, b) and (b, ϕ*(b−)) ∈ K ⊆ D. By Theorem 2.3.1, ϕ* can be continued past b. This leads to a contradiction. Now we prove (t, ϕ*(t)) → ∂D as t → b−. Let K be any compact set in D; we claim that there exists δ > 0 such that (t, ϕ*(t)) ∉ K for all t ∈ (b − δ, b).


If not, then there exists a sequence {tm} ↑ b such that (tm, ϕ*(tm)) → (b, ξ) ∈ K ⊆ D. Choose r > 0 such that B((b, ξ), r) ⊆ D and set ε = r/3, B ≡ B((b, ξ), 2ε) ⊆ D. Let M = sup{|f(t, x)| : (t, x) ∈ B} < ∞. Since B is a compact set, from the above claim, {(t, ϕ*(t)) : t ∈ [tm, b)} is not contained in B. Hence there exists τm ∈ (tm, b) such that (τm, ϕ*(τm)) ∉ B. Without loss of generality, we assume tm < τm < tm+1. Let m* be sufficiently large such that for all m ≥ m* we have |tm − b| + |ϕ*(tm) − ξ| < ε. Obviously |τm − b| + |ϕ*(τm) − ξ| > 2ε. Then there exists t̄m ∈ (tm, τm) such that |t̄m − b| + |ϕ*(t̄m) − ξ| = 2ε. Then

ε < |t̄m − b| + |ϕ*(t̄m) − ξ| − |tm − b| − |ϕ*(tm) − ξ|
 ≤ (b − t̄m) − (b − tm) + |ϕ*(t̄m) − ϕ*(tm)|
 ≤ |∫_{tm}^{t̄m} f(s, ϕ*(s)) ds| ≤ M |t̄m − tm|
 ≤ M |τm − tm| → 0 as m → ∞.

This is a contradiction. Thus the proof is complete.

Corollary 2.3.2 If the solution x(t) of (2.1) has an a priori bound M, i.e., |x(t)| ≤ M whenever x(t) exists, then x(t) exists for all t ∈ R.

Proof. Let T > t0 be arbitrary. Define the rectangle D1 = {(t, x) : t0 ≤ t ≤ T, |x| ≤ M}. Then f(t, x) is bounded on D1, and the solution x(t) can be continued to the boundary of D1. Hence x(t) exists for t0 ≤ t ≤ T with T arbitrary. Similarly x(t) exists for T1 ≤ t ≤ t0, T1 arbitrary. Thus we have global existence with J* = (−∞, ∞).

Remark 2.3.2 To get an a priori bound M for the solution x(t) of (2.1), we apply differential inequalities (see §2.6) or the Lyapunov method (see Chapter 5).

Example 2.3.2 There exists a unique solution ϕ(t) of the linear system

dx/dt = A(t)x + h(t) = f(t, x), x(t0) = x0,

for all t ∈ R, where A(t) ∈ Rn×n and h(t) ∈ Rn are continuous functions on R.


Proof. Let ϕ(t) be the solution for t0 ≤ t < t0 + c. Then

ϕ(t) − x0 = ∫_{t0}^t f(s, ϕ(s)) ds = ∫_{t0}^t [f(s, ϕ(s)) − f(s, x0)] ds + ∫_{t0}^t f(s, x0) ds,

and it follows that

|ϕ(t) − x0| ≤ ∫_{t0}^t kA(s)k |ϕ(s) − x0| ds + ∫_{t0}^t |A(s)x0 + h(s)| ds
 ≤ L ∫_{t0}^t |ϕ(s) − x0| ds + δ,

where

L = sup_{t0 ≤ t ≤ t0+c} kA(t)k  and  δ = c · max{|A(s)x0 + h(s)| : t0 ≤ s ≤ t0 + c}.

From Gronwall's inequality, we have

|ϕ(t) − x0| ≤ δ exp(Lc).

Hence ϕ(t) is bounded on [t0, t0 + c). Then by Theorem 2.3.1, we can extend ϕ beyond t0 + c.

2.4 Continuous Dependence Properties

In the theory of differential equations, well-posedness is an important property. We say a problem is well-posed if (1) a solution exists locally; (2) the solution is unique; (3) the solution's behavior changes continuously with the initial conditions and parameters. Hadamard was the first to give examples of well-posedness in partial differential equations [E]. For the initial value problem in ordinary differential equations, we have proved Theorem 2.2.6, which states that if f(t, x) is locally Lipschitz then the solution exists in a small interval I = {t : |t − t0| ≤ α} for some α > 0 and the solution is unique. In this section we shall prove, in Theorem 2.4.1 and Theorem 2.4.2 below, that the solution depends continuously


on the initial conditions and parameters. Hence the initial value problem (2.1) is well-posed. In the following we shall prove the continuous dependence properties of I.V.P. (2.1). First we state the discrete version of the continuous dependence properties.

Theorem 2.4.1 Let {fn(t, x)} be a sequence of continuous functions on D with lim_{n→∞} fn = f uniformly on any compact set in D. Let

(τn, ξn) ∈ D, (τn, ξn) → (τ, ξ), and let ϕn(t) be any noncontinuable solution of x′ = fn(t, x), x(τn) = ξn. If ϕ(t) is the unique solution of x′ = f(t, x), x(τ) = ξ, defined on [a, b], then ϕn(t) is defined on [a, b] for n large and ϕn → ϕ uniformly on [a, b].

Next we state the continuous version of the continuous dependence properties.

Theorem 2.4.2 Let f(t, x, λ) be a continuous function of (t, x, λ) for all (t, x) in an open set D and for all λ near λ0, and let ϕ(t; τ, ξ, λ) be any noncontinuable solution of x′ = f(t, x, λ), x(τ) = ξ. If ϕ(t; t0, ξ0, λ0) is defined on [a, b] and is unique, then ϕ(t; τ, ξ, λ) is defined on [a, b] for all (τ, ξ, λ) near (t0, ξ0, λ0) and is a continuous function of (t, τ, ξ, λ).

The proofs will be deferred to Appendix A.2.

2.5 Differentiability of I.C. and Parameters

For fixed τ, ξ, if f(t, x, λ) is C¹ in x and λ, then we shall show that the unique solution ϕ(t, λ) of the I.V.P.

dx/dt = f(t, x, λ), x(τ, λ) = ξ,

is differentiable with respect to λ. Furthermore, from

(d/dλ)((d/dt) ϕ(t, λ)) = (d/dλ)(f(t, ϕ(t, λ), λ)),

we have

(d/dt)((d/dλ) ϕ(t, λ)) = fx(t, ϕ(t, λ), λ) (d/dλ) ϕ(t, λ) + (∂f/∂λ)(t, ϕ(t, λ), λ).

Hence (d/dλ) ϕ(t, λ) ≡ ψ(t, λ) satisfies the variational equation

dy/dt = fx(t, ϕ(t, λ), λ) y + fλ(t, ϕ(t, λ), λ),
y(τ) = 0.
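As a concrete check of the variational equation (an illustrative sketch, not from the text): for dx/dt = λx, x(0) = ξ, we have ϕ(t, λ) = ξe^{λt} and ∂ϕ/∂λ = ξte^{λt}. Integrating y′ = fₓ y + f_λ = λy + ϕ(t) with y(0) = 0 reproduces this derivative:

```python
import math

def rk4(f, t0, y0, t_end, n):
    # classical 4th-order Runge-Kutta for y' = f(t, y)
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

lam, xi, T = 0.7, 2.0, 1.5
phi = lambda t: xi * math.exp(lam * t)        # solution of x' = lam*x, x(0) = xi

# variational equation: y' = f_x y + f_lam = lam*y + phi(t), y(0) = 0
psi = rk4(lambda t, y: lam * y + phi(t), 0.0, 0.0, T, 2000)

exact = xi * T * math.exp(lam * T)            # d(phi)/d(lam) = xi * t * e^{lam t}
assert abs(psi - exact) < 1e-6
```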

Similarly, for fixed τ, if f(t, x) is C¹ in x, then we shall show that the unique solution ϕ(t, ξ) of the I.V.P.

dx/dt = f(t, x), x(τ) = ξ,

is differentiable with respect to ξ. Furthermore, from

(d/dξ)((d/dt) ϕ(t, ξ)) = (d/dξ)(f(t, ϕ(t, ξ))),

we obtain

(d/dt)((d/dξ) ϕ(t, ξ)) = fx(t, ϕ(t, ξ)) (d/dξ) ϕ(t, ξ),
(d/dξ) ϕ(τ, ξ) = I, where I is the n × n identity matrix.

Then the n × n matrix ψ(t, ξ) ≡ (d/dξ) ϕ(t, ξ) satisfies the linear variational equation

dY/dt = fx(t, ϕ(t, ξ)) Y,
Y(τ) = I.

In the following we state Theorem 2.5.1 and defer its proof to Appendix A.3.

Theorem 2.5.1 [Cop]. Let ϕ(t, λ0) be the unique solution of

dx/dt = f(t, x, λ0), x(t0) = ξ0,

on the compact interval J = [a, b]. Assume f ∈ C¹ in x and λ at all points (t, ϕ(t, λ0), λ0), t ∈ J. Then for λ sufficiently near λ0, the system Eλ: dx/dt = f(t, x, λ), x(t0) = ξ0, has a unique solution ϕ(t, λ) defined on J. Moreover, ϕλ(t, λ0) exists and satisfies

y′ = fx(t, ϕ(t, λ0), λ0) y + fλ(t, ϕ(t, λ0), λ0), y(t0) = 0.

Theorem 2.5.2 (Peano) [Cop]. Let ϕ(t, t0, ξ0) be the unique solution of

dx/dt = f(t, x), x(t0) = ξ0,

on J = [a, b]. Assume fx exists and is continuous at all points (t, ϕ(t, t0, ξ0)), t ∈ J. Then the system Eτ,ξ: dx/dt = f(t, x), x(τ) = ξ, has a unique solution ϕ(t, τ, ξ) for (τ, ξ) near (t0, ξ0). Moreover ϕξ(t, t0, ξ0) exists (t ∈ J) and satisfies the linear homogeneous equation

dY/dt = fx(t, ϕ(t, t0, ξ0)) Y, Y(t0) = I.

Proof. Put x = u + ξ. Then the I.V.P.

dx/dt = f(t, x), x(t0) = ξ,

is transformed into the I.V.P.

du/dt = f(t, u + ξ), u(t0) = 0,

with ξ as a parameter. Then the first part of the theorem follows from Theorem 2.5.1. From Theorem 2.5.1, we have

(d/dt)((du/dξ)(t, ξ0)) = fx(t, u(t, ξ0) + ξ0)((du/dξ)(t, ξ0) + I),

or

(d/dt)((dx/dξ)(t, ξ0)) = fx(t, x(t, ξ0))(dx/dξ)(t, ξ0).


2.6 Differential Inequalities

We shall apply differential inequalities to estimate bounds on a solution of I.V.P.; these are usually in scalar form. In the following, we shall apply the property of continuous dependence on initial conditions and parameters to obtain differential inequalities.

Theorem 2.6.1 Let x(t) be a scalar, differentiable function. Let ϕ(t) be the unique solution of the I.V.P. x′ = f(t, x), x(t0) = x0.

(i) If

x′(t) ≥ f(t, x(t)), t0 ≤ t ≤ b, x(t0) ≥ x0,    (2.8)

then ϕ(t) ≤ x(t) for t0 ≤ t ≤ b.

(ii) If

x′(t) ≤ f(t, x(t)), t0 ≤ t ≤ b, x(t0) ≤ x0,    (2.9)

then ϕ(t) ≥ x(t) for t0 ≤ t ≤ b.

Proof. Consider the following I.V.P.:

x′n = f(t, xn) + 1/n,
xn(t0) = x0 + 1/n.

Then, from the continuous dependence on initial conditions and parameters, we have xn(t) → ϕ(t) uniformly on [t0, b]. We shall only consider the case (2.9); the other case (2.8) can be handled by similar arguments. It suffices to show that x(t) ≤ xn(t) on [t0, b] for n sufficiently large. If not, then there exist a large n and t1, t2 with t0 < t1 < t2 < b satisfying (see Fig. 2.1)

x(t) > xn(t) on (t1, t2), x(t1) = xn(t1).

Then for t > t1, t near t1, we have

(x(t) − x(t1)) / (t − t1) > (xn(t) − xn(t1)) / (t − t1).    (2.10)


Fig. 2.1

Let t → t1 in (2.10); then it follows that

x′(t1) ≥ x′n(t1) = f(t1, xn(t1)) + 1/n = f(t1, x(t1)) + 1/n > f(t1, x(t1)).

This contradicts the assumption (2.9). Hence we have completed the proof of Theorem 2.6.1.

Corollary 2.6.1 Let Dr x(t) = lim_{h→0+} (x(t + h) − x(t))/h. Then the conclusions of Theorem 2.6.1 also hold if we replace x′(t) in (2.9) by Dr x(t).

Example 2.6.1 Consider the Lotka-Volterra two species competition model

dx1/dt = r1 x1 (1 − x1/K1) − α x1 x2,
dx2/dt = r2 x2 (1 − x2/K2) − β x1 x2,
x1(0) > 0, x2(0) > 0.

It is easy to verify that x1(t) > 0, x2(t) > 0 for all t > 0. Then we have

dxi/dt ≤ ri xi (1 − xi/Ki), xi(0) > 0.

By Theorem 2.6.1, for any ε > 0, xi(t) ≤ Ki + ε for t ≥ T for some T sufficiently large.
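The bound xi(t) ≤ Ki + ε can be checked by simulation. The sketch below uses illustrative parameter values (my choices, not from the text): starting above both carrying capacities, the solutions decay below them and remain positive:

```python
def rk4_system(f, t0, y0, t_end, n):
    # classical RK4 for a system y' = f(t, y), y given as a list
    h = (t_end - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# illustrative parameters (not from the text)
r1, r2, K1, K2, alpha, beta = 1.0, 0.8, 2.0, 3.0, 0.1, 0.2

def comp(t, v):
    x1, x2 = v
    return [r1*x1*(1 - x1/K1) - alpha*x1*x2,
            r2*x2*(1 - x2/K2) - beta*x1*x2]

# start above the carrying capacities; by the logistic comparison the
# solutions should satisfy 0 < x_i(t) <= K_i + eps for large t
x1, x2 = rk4_system(comp, 0.0, [5.0, 6.0], 50.0, 5000)
eps = 1e-6
assert 0.0 < x1 <= K1 + eps and 0.0 < x2 <= K2 + eps
```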


For systems of differential equations, we have the following comparison theorem, Theorem 2.6.2. First we prove the following lemma.

Lemma 2.6.1 Let x(t) = (x1(t), · · · , xn(t)) ∈ Rn be differentiable. Then Dr |x(t)| ≤ |x′(t)|.

Proof. For h > 0, from the triangle inequality, we have

(|x(t + h)| − |x(t)|) / h ≤ |x(t + h) − x(t)| / h.

Then the lemma follows directly by letting h → 0+ in the above inequality.

Theorem 2.6.2 (Comparison theorem). Let F(t, v) be continuous and |f(t, x)| ≤ F(t, |x|). Let ϕ(t) be a solution of the system dx/dt = f(t, x) and v(t) the unique solution of the scalar equation dv/dt = F(t, v), v(t0) = η, with |ϕ(t0)| ≤ η. Then |ϕ(t)| ≤ v(t) for all t ≥ t0.

Proof. Let u(t) = |ϕ(t)|. From Lemma 2.6.1, we have

Dr u(t) = Dr |ϕ(t)| ≤ |ϕ′(t)| = |f(t, ϕ(t))| ≤ F(t, |ϕ(t)|) = F(t, u(t)),
u(t0) = |ϕ(t0)| ≤ η.

Then, by Theorem 2.6.1, |ϕ(t)| ≤ v(t) for all t ≥ t0.

Corollary 2.6.2 Let f(t, x) = A(t)x + h(t), where A(t) ∈ Rn×n and h(t) ∈ Rn are continuous on R. Then we have global existence for the solution of the I.V.P. dx/dt = f(t, x), x(t0) = x0.

Proof. For t ≥ t0,

|f(t, x)| ≤ kA(t)k |x| + |h(t)| ≤ max_{t0≤τ≤t} {kA(τ)k, |h(τ)|} (|x| + 1) = g(t)ψ(|x|),

where ψ(u) = u + 1.


Claim: The solution of

du/dt = g(t)ψ(u), u(t0) = |x0| = u0,

exists globally on R. If not, we may assume that the maximal interval of existence is [t0, b). Then

∫_{u0}^{+∞} du/ψ(u) = ∫_{t0}^{b} g(t) dt.

In the above identity, the left-hand side is infinite while the right-hand side is finite. Thus we obtain a contradiction. Hence u(t) exists on R. From Theorem 2.6.2, we have

|x(t)| ≤ u(t) for t ≥ t0.

Thus x(t) exists for all t ≥ t0 by Theorem 2.3.1. For t ≤ t0, we reverse time by introducing the new time scale τ = t0 − t; then the system becomes

dx/dτ = −f(t0 − τ, x), x(0) = x0.

Then using the above argument, we can show that x(τ) exists for τ ≥ 0, i.e., x(t) exists for t ≤ t0.

Now we consider differential inequalities for a certain special type of system. Let R^n_+ = {x = (x1, · · · , xn) ∈ Rn : xi ≥ 0, i = 1, · · · , n} be the nonnegative cone. Define the following partial orders:

x ≤ y if y − x ∈ R^n_+, i.e., xi ≤ yi for all i;
x < y if x ≤ y and x ≠ y;
x ≪ y if xi < yi for all i.

Definition 2.6.1 We say f = (f1, · · · , fn) : D ⊆ Rn → Rn is of type K on D if for each i, fi(a) ≤ fi(b) for any a, b ∈ D satisfying a ≤ b and ai = bi.

Theorem 2.6.3 (Kamke Theorem [Cop]). Let f(t, x) be of type K for each fixed t and let x(t) be a solution of dx/dt = f(t, x) on [a, b]. If y(t) is continuous on [a, b] and satisfies

Dr y(t) ≥ f(t, y)


and y(a) ≥ x(a), then y(t) ≥ x(t) for a ≤ t ≤ b. If z(t) is continuous on [a, b] and satisfies Dℓ z(t) ≤ f(t, z) and z(a) ≤ x(a), then z(t) ≤ x(t) for a ≤ t ≤ b.

Proof. We only prove the second case. First we prove that if Dℓ z(t) ≪ f(t, z(t)) and z(a) ≪ x(a), then z(t) ≪ x(t) for a ≤ t ≤ b. If not, we let c be the least number in [a, b] such that z(t) ≪ x(t) for a ≤ t < c and z(c) ≤ x(c), zi(c) = xi(c) for some 1 ≤ i ≤ n. Then

Dℓ zi(c) < fi(c, z(c)) ≤ fi(c, x(c)) = x′i(c).

Since zi(c) = xi(c), it follows that zi(t) > xi(t) for certain values of t less than and near c. This contradicts the definition of c.

Now we are in a position to complete the proof of Theorem 2.6.3. Assume Dℓ z(t) ≤ f(t, z(t)), z(a) ≤ x(a). We want to show that z(t) ≤ x(t) for a ≤ t ≤ b. Let c be the largest value of t such that z(s) ≤ x(s) for a ≤ s ≤ t. Suppose, contrary to the theorem, that c < b. Let the vector ε = (1, . . . , 1)^T and let ϕn(t) be the solution of the following I.V.P.:

dw/dt = f(t, w) + ε/n, c ≤ t ≤ c + δ,
w(c) = x(c) + ε/n.

Then ϕn(t) → x(t) on [c, c + δ] as n → ∞ and z(t) < ϕn(t) on [c, c + δ]. Letting n → ∞, we get z(t) ≤ x(t) on [c, c + δ]. This contradicts the definition of c. Hence c = b and z(t) ≤ x(t) on [a, b].

Remark 2.6.1 Kamke's Theorem is an essential tool to study the behavior of the solutions of cooperative systems, competitive systems [H1] and monotone flows [Str]. See Chapter 10.
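Definition 2.6.1 can be probed numerically. The sketch below is my example (not from the text): the linear cooperative field f(x) = (x2 − x1, x1 − 2x2) is of type K because each fi is nondecreasing in the other variable; the code samples the defining condition at random points:

```python
import random

def f(x):
    # a planar cooperative vector field: each f_i is nondecreasing
    # in the off-diagonal variable, so f is of type K
    return (x[1] - x[0], x[0] - 2 * x[1])

def is_type_K_sample(f, trials=1000, seed=0):
    # sample pairs a <= b with a_i = b_i and check f_i(a) <= f_i(b)
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
        b = [a[j] + rng.uniform(0, 5) for j in range(2)]  # b >= a
        for i in range(2):
            bi = list(b)
            bi[i] = a[i]  # enforce a_i = b_i as in Definition 2.6.1
            if f(a)[i] > f(bi)[i] + 1e-12:
                return False
    return True

assert is_type_K_sample(f)
```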

2.7 Exercises

Exercise 2.1 Find a bounded sequence {fm }∞ m=1 ⊆ C(I), I = [a, b] such that there is no convergent subsequence. Exercise 2.2 Let J = [a, b], a, b < ∞ and F be a subset of C(J). Show that if each sequence in F contains a uniformly convergent subsequence, then F is both equicontinuous and uniformly bounded.


Exercise 2.3 Consider the initial value problem

dy/dt = f(t, y) = { 4t³y / (t⁴ + y²), (t, y) ≠ (0, 0); 0, (t, y) = (0, 0) },

with y(0) = 0. Verify that f(t, y) is continuous at (t, y) = (0, 0) but does not satisfy the Lipschitz condition. Show that the initial value problem has infinitely many solutions

y(t) = c² − √(t⁴ + c⁴) for c ∈ R.

a b

vn+1 (x) =

K(x, y)vn (y)dy + 1, a

converge uniformly on [a, b] to a unique solution u ∈ X = C(a, b) of the integral equation Z b u(x) = K(x, y)u(y)dy + 1, x ∈ [a, b] a

where u0 ≤ u1 (x) ≤ ... ≤ v1 (x) ≤ v0 (x).


Exercise 2.8 Consider the pendulum equation with constant torque θ̈ = a − sin θ, θ(0, a) = 0, θ̇(0, a) = 0. Compute (d/da) θ(t, a)|a=0.

Exercise 2.9 Let f : R → R be C^k with f(0) = 0. Then f(x) = xg(x), g ∈ C^{k−1}.

Exercise 2.10 Prove the second part of Theorem 2.6.1.

Exercise 2.11 Let the continuous function f(t, x) be nondecreasing in x for each fixed value of t and let x(t) be a unique solution of the differential equation dx/dt = f(t, x) on an interval [a, b]. If the continuous function y(t) satisfies the integral inequality

y(t) ≤ x(a) + ∫_a^t f(s, y(s)) ds

for a ≤ t ≤ b, then y(t) ≤ x(t) on [a, b]. A solution x(t) is said to be a unique solution of the differential equation dx/dt = f(t, x) on an interval [a, b] if for every solution y on [a, b] of this equation with y(a) = x(a), we have that y(b) ≤ x(b).

Exercise 2.12 Consider ẋ = f(x), where f : D ⊆ Rn → Rn is of K-type on D and x0, y0 ∈ D. Let 0      x(0) > 0, y(0) > 0. Show that the solutions x(t), y(t) are defined for all t > 0 and the solutions are positive and bounded for all t > 0.

Exercise 2.18 Consider the equation ẋ = f(t, x), where f is continuous on R × Rn. Suppose |f(t, x)| ≤ φ(t)|x| for all t, x, where φ is a non-negative continuous function defined on R satisfying ∫_c^∞ φ(t) dt < ∞ for every c ∈ R.

(1) Prove that every solution approaches a constant as t → ∞.

(2) If, in addition,

|f(t, x) − f(t, y)| ≤ φ(t)|x − y| for all x, y ∈ Rⁿ,

prove that there is a one-to-one correspondence between the initial values and the limit values of the solutions; that is, the map which sends the value of a solution x at a fixed t₀ to the limit of x at infinity is a bijection on Rⁿ. (Hint: First take the initial time sufficiently large to obtain the desired correspondence.)

(3) Does the above result imply anything for the equation ẋ = −x + a(t)x, where a is a continuous function satisfying ∫_c^∞ |a(t)|dt < ∞ for every c ∈ R? (Hint: Consider the transformation x = e^{−t}y.)

(4) Does this imply anything about the system ẋ₁ = x₂, ẋ₂ = −x₁ + a(t)x₁, where x₁ and x₂ are scalar functions, and a is a continuous function satisfying ∫_c^∞ |a(t)|dt < ∞ for every c ∈ R?

Exercise 2.19 Consider the population model

dx/dt = rx(1 − x/k) − [mx/(a + x)]y,
dy/dt = sy(1 − y/(hx)),
x(0) > 0, y(0) > 0, a, h, k, m, r, s > 0.

Show that the solutions x(t), y(t) are positive and bounded.

Exercise 2.20 Consider the initial value problem

z̈ + α(z, ż)ż + β(z) = u(t), z(0) = ξ, ż(0) = η,

where α, β together with all of their first partial derivatives are continuous, and u is a bounded, continuous function defined on R. Suppose α ≥ 0

on R² and zβ(z) ≥ 0 for all z ∈ R. Show that there is one and only one solution to this problem and that the solution can be defined on [0, ∞). (Hint: Write the equation as a system by letting z = x, ż = y, define V(x, y) = y²/2 + ∫₀^x β(s)ds, and study the rate of change of V(x(t), y(t)) along the solutions of the two-dimensional system.)

Exercise 2.21 Let g : R → R be Lipschitz and f : R → R continuous. Show that the system x' = g(x), y' = f(x)y has at most one solution on any interval for a given initial value. (Hint: Use Gronwall's inequality.)

Exercise 2.22 Consider the differential equation x' = x^{2/3}.

(1) Show that there are infinitely many solutions satisfying x(0) = 0 on every interval [0, β].
(2) For what values of α are there infinitely many solutions on [0, α] satisfying x(0) = −1?

Exercise 2.23 Let f ∈ C¹ on the (t, x, y) set given by 0 ≤ t ≤ 1 and all x, y. Let ϕ be a solution of the second-order equation x'' = f(t, x, x') on [0, 1], and let ϕ(0) = a, ϕ(1) = b. Suppose ∂f/∂x > 0 for t ∈ [0, 1] and for all x, y. Prove that if β is near b, then there exists a solution ψ of x'' = f(t, x, x') such that ψ(0) = a, ψ(1) = β.

Hint: Consider the solution θ (as a function of (t, α)) with initial values θ(0, α) = a, θ'(0, α) = α. Let ϕ'(0) = α₀. Then for |α − α₀| small, θ exists for t ∈ [0, 1]. Let

u(t) = (∂θ/∂α)(t, α₀).

Then

u'' − (∂f/∂y)(t, ϕ(t), ϕ'(t))u' − (∂f/∂x)(t, ϕ(t), ϕ'(t))u = 0,

where u(0) = 0, u'(0) = 1. Since ∂f/∂x > 0, u is monotone nondecreasing, and thus u(1) = (∂θ/∂α)(1, α₀) > 0. Thus the equation θ(1, α) − β = 0 can be solved for α as a function of β for (α, β) in a neighborhood of (α₀, b).

Exercise 2.24 Let Lip(ϕ) be defined as

Lip(ϕ) = sup_{x,y∈E, x≠y} ‖ϕ(x) − ϕ(y)‖ / ‖x − y‖,

where ϕ : E → E is Lipschitz continuous and E is a Banach space. Assume L : E → E is a linear, invertible operator. Show that if Lip(ϕ) < 1/‖L⁻¹‖, then L + ϕ is invertible with

Lip((L + ϕ)⁻¹) < 1/(‖L⁻¹‖⁻¹ − Lip(ϕ)).

Hint: (i) Show that L + ϕ is one-to-one via

‖(L + ϕ)x − (L + ϕ)y‖ ≥ (1/‖L⁻¹‖ − Lip(ϕ))‖x − y‖.

(ii) Prove that L + ϕ is onto by solving (L + ϕ)(x) = y for a given y ∈ E; apply the contraction mapping principle.

Exercise 2.25 Prove that the initial value problem

dx/dt = f(t, x), x(t₀) = x₀,

has a unique solution defined on the interval [t₀, t₁] if f(t, x) is continuous on the strip t₀ ≤ t ≤ t₁, |x| < ∞, and satisfies |f(t, x₁) − f(t, x₂)| ≤ L|x₁ − x₂| for some L > 0.

Hint: Let X be C(I), I = [t₀, t₁], choose K > L, and define the norm

‖x‖_K = sup_{t₀≤t≤t₁} e^{−K(t−t₀)}|x(t)|.

Then (X, ‖·‖_K) is a Banach space; apply the contraction mapping principle.
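The contraction underlying Exercise 2.25 is the Picard iteration x_{n+1}(t) = x₀ + ∫_{t₀}^t f(s, x_n(s))ds, and for a globally Lipschitz f it converges on the whole interval. A minimal numerical sketch (the right-hand side below is an illustrative choice with Lipschitz constant L = 2, not from the text):

```python
import numpy as np

# Picard iteration for x' = f(t, x) = -2x + sin(t), x(0) = 1, on [0, 2].
t0, t1, x0 = 0.0, 2.0, 1.0
N = 2000
t = np.linspace(t0, t1, N)
dt = t[1] - t[0]

def f(t, x):
    return -2.0 * x + np.sin(t)

x = np.full(N, x0)
for _ in range(40):
    g = f(t, x)
    # cumulative trapezoid rule for x0 + ∫_{t0}^t f(s, x_n(s)) ds
    x = x0 + np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))

# exact solution: x(t) = (6/5) e^{-2t} + (2 sin t - cos t)/5
exact = 1.2 * np.exp(-2 * t) + (2 * np.sin(t) - np.cos(t)) / 5
print(np.max(np.abs(x - exact)))  # small: the iterates have converged
```

The iteration error after n steps is bounded by (L(t₁ − t₀))ⁿ/n!, which is why convergence holds on the whole strip even though L(t₁ − t₀) > 1.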

Chapter 3

LINEAR SYSTEMS

3.1 Introduction

In this chapter we first study the general properties of the linear homogeneous system

dx/dt = A(t)x,    (LH)

and the linear nonhomogeneous system

dx/dt = A(t)x + g(t),    (LN)

where A(t) = (a_ij(t)) ∈ R^{n×n} and g(t) ∈ Rⁿ are continuous on R. There are two important special cases, namely (i) A(t) ≡ A is an n × n constant matrix; (ii) A(t + ω) = A(t), i.e., A(t) is a periodic matrix.

Example 3.1.1 From elementary ordinary differential equations, we know that the solutions of the scalar equations

x' = a(t)x, x(0) = x₀,

and

x' = ax, x(0) = x₀, a ∈ R,

are x(t) = x₀ exp(∫₀ᵗ a(s)ds) and x(t) = x₀e^{at} respectively. For the systems of linear equations

dx/dt = Ax, x(0) = x₀,  and  dx/dt = A(t)x, x(0) = x₀,

is it true that x(t) = e^{At}x₀ and x(t) = exp(∫₀ᵗ A(s)ds)x₀ (where e^{At} will

be defined in Section 3.3)?

Remark 3.1.1 The theory of linear systems has many applications in engineering, especially in linear control theory. It is also very important to understand linear systems in order to study nonlinear systems. For instance, if we linearize a nonlinear system dx/dt = f(x) about an equilibrium x*, then the behavior of the solutions of the linear system x' = Ax, A = Df(x*), gives us the local behavior of the solutions of the nonlinear system dx/dt = f(x) in a neighborhood of x*. To understand the local behavior of the solutions near a periodic orbit {p(t)}_{0≤t≤T} of the nonlinear system dx/dt = f(x), the linearization yields the linear periodic system dx/dt = A(t)x, where A(t) = Df(p(t)). This is the orbital stability we shall study later in Chapter 4. There are also some well-known equations arising from physics, such as Mathieu's equation and Hill's equation, which are second order linear periodic equations.
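The question in Example 3.1.1 can be probed numerically: the guess exp(∫₀ᵗ A(s)ds)x₀ fails in general because A(t) need not commute with its integral. A small illustration (the matrix A(t) below is an illustrative choice, not from the text; scipy is assumed available):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# A(t) chosen so that A(t) does not commute with ∫_0^t A(s) ds.
def A(t):
    return np.array([[0.0, 1.0], [t, 0.0]])

x0 = np.array([1.0, 0.0])
T = 1.0

sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), x0, rtol=1e-10, atol=1e-12)
x_true = sol.y[:, -1]

# exp(∫_0^T A(s)ds) x0 with the integral computed exactly: ∫_0^1 A = [[0,1],[1/2,0]]
x_guess = expm(np.array([[0.0, 1.0], [0.5, 0.0]])) @ x0

print(x_true, x_guess)  # the two disagree
```

Here x' = A(t)x is equivalent to the Airy-type equation x'' = tx, whose true solution at t = 1 differs visibly from the exponential guess.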

3.2 Fundamental Matrices

In this section we shall study the structure of the solutions of (LH) and (LN). Theorem 3.2.1 The set V of all solutions of (LH) on J = (−∞, ∞) is an n-dimensional vector space.

Proof. It is easy to verify that V is a vector space over C. We shall show dim V = n. Let ϕᵢ(t), i = 1, …, n, be the unique solution of dx/dt = A(t)x, x(0) = eᵢ = (0, …, 1, …, 0)ᵀ.

Claim: {ϕᵢ}_{i=1}^n are linearly independent. Let Σ_{i=1}^n αᵢϕᵢ = 0. It follows that Σ_{i=1}^n αᵢϕᵢ(t) = 0 for all t. Setting t = 0 yields Σ_{i=1}^n αᵢeᵢ = 0, i.e., αᵢ = 0 for all i = 1, 2, …, n. Next we show that any solution ϕ of (LH) can be generated by ϕ₁, …, ϕₙ. Let ϕ(0) = ξ = (ξ₁, …, ξₙ)ᵀ. Then y(t) = Σ_{i=1}^n ξᵢϕᵢ(t) is a solution of (LH) with y(0) = ξ. From uniqueness of solutions, we have y(t) ≡ ϕ(t) for all t, i.e., ϕ = Σ_{i=1}^n ξᵢϕᵢ.

Next we introduce fundamental matrices.

Definition 3.2.1 Let ϕ₁, …, ϕₙ be n linearly independent solutions of (LH) on R. We call the matrix Φ = [ϕ₁, …, ϕₙ] ∈ R^{n×n} a fundamental matrix of (LH). There are infinitely many fundamental matrices.

Consider the following matrix equation

X' = A(t)X,    (3.1)

where X = (x_ij(t)) and X' = (x'_ij(t)).

Theorem 3.2.2 A fundamental matrix Φ(t) of (LH) satisfies (3.1).

Proof. Φ'(t) = [ϕ₁'(t), …, ϕₙ'(t)] = [A(t)ϕ₁, …, A(t)ϕₙ] = A(t)[ϕ₁, …, ϕₙ] = A(t)Φ(t).

Obviously for any ξ ∈ Rⁿ, Φ(t)ξ is a solution of (LH), since (Φ(t)ξ)' = Φ'(t)ξ = A(t)(Φ(t)ξ). In the following we prove Abel's formula, in which det Φ(t) is a generalization of the Wronskian for n-th order scalar linear equations.

Theorem 3.2.3 (Liouville's formula or Abel's formula). Let Φ(t) be a solution of (3.1) on J = (−∞, ∞). Then

det Φ(t) = det Φ(τ) exp(∫_τ^t tr A(s)ds).    (3.2)

Proof. Let Φ(t) = (ϕ_ij(t)), A(t) = (a_ij(t)). From Φ' = A(t)Φ, we have ϕ'_ij = Σ_{k=1}^n a_ik ϕ_kj. By induction (Exercise!), we have

d/dt (det Φ(t)) = Σ_{i=1}^n det Φ_i(t),

where Φ_i(t) is the matrix Φ(t) with its i-th row replaced by the derivatives (ϕ'_{i1}, …, ϕ'_{in}). Substituting ϕ'_ij = Σ_{k=1}^n a_ik ϕ_kj into the i-th row and subtracting a_ik times the k-th row for each k ≠ i leaves a_ii times the original i-th row, so det Φ_i(t) = a_ii(t) det Φ(t). Hence

d/dt (det Φ(t)) = a₁₁(t) det Φ(t) + ⋯ + a_nn(t) det Φ(t) = trace(A(t)) det Φ(t),

and therefore

det Φ(t) = det Φ(τ) exp(∫_τ^t tr(A(s))ds).

Remark 3.2.1 Abel's formula can be interpreted geometrically as follows. From (3.1) and Theorem 3.2.2, we have Φ' = A(t)Φ, so (3.1) describes the evolution of the initial vectors ϕ₁(τ), …, ϕₙ(τ) under the dynamics of (LH). Liouville's formula then describes the evolution of the volume of the parallelepiped spanned by ϕ₁(t), …, ϕₙ(t).
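Abel's formula (3.2) is easy to confirm numerically: integrate the matrix equation (3.1) and compare det Φ(t) with the exponential of the integrated trace. The matrix A(t) below is an illustrative choice; scipy is assumed available.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Verify det Φ(T) = det Φ(0) exp(∫_0^T tr A(s) ds) for an illustrative A(t).
def A(t):
    return np.array([[np.sin(t), 1.0], [0.0, -0.5]])

def rhs(t, X):
    return (A(t) @ X.reshape(2, 2)).ravel()

T = 2.0
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_T = sol.y[:, -1].reshape(2, 2)

# tr A(t) = sin t - 1/2, so ∫_0^T tr A = (1 - cos T) - T/2
abel = np.exp((1.0 - np.cos(T)) - T / 2.0)
print(np.linalg.det(Phi_T), abel)  # agree to integration tolerance
```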

Theorem 3.2.4 A solution Φ(t) of (3.1) is a fundamental matrix of (LH) iff det Φ(t) ≠ 0 for all t.

Proof. From (3.2), we have det Φ(t) ≠ 0 for all t iff det Φ(t) ≠ 0 for some t. If Φ(t) is a fundamental matrix, then ϕ₁, …, ϕₙ are linearly independent and hence det Φ(t) ≠ 0 for all t. Conversely, if Φ(t) satisfies (3.1) and det Φ(t) ≠ 0 for all t, then ϕ₁, …, ϕₙ are linearly independent and Φ(t) is a fundamental matrix of (LH).

Example 3.2.1 Consider the n-th order linear equation

x^(n)(t) + a₁(t)x^(n−1)(t) + ⋯ + aₙ(t)x(t) = 0.    (3.3)

It can be reduced to a first order linear system. Let x₁ = x, x₂ = x', …, xₙ = x^(n−1). Then we have

x₁' = x₂,  x₂' = x₃,  …,  x_{n−1}' = xₙ,
xₙ' = −aₙ(t)x₁ − a_{n−1}(t)x₂ − ⋯ − a₁(t)xₙ,    (3.4)

i.e., x' = C(t)x, where C(t) is the companion matrix with 1's on the superdiagonal, last row (−aₙ(t), −a_{n−1}(t), …, −a₁(t)), and 0's elsewhere. If ϕ₁(t), ϕ₂(t), …, ϕₙ(t) are n linearly independent solutions of (3.3), then

(ϕᵢ, ϕᵢ', …, ϕᵢ^(n−1))ᵀ, i = 1, …, n,

are n linearly independent solutions of (3.4). Let Φ(t) be the matrix whose j-th column is (ϕⱼ, ϕⱼ', …, ϕⱼ^(n−1))ᵀ; then W(ϕ₁, …, ϕₙ) = det Φ(t) is the Wronskian of (3.3).

From (3.2) we have

W(ϕ₁, …, ϕₙ)(t) = W(ϕ₁, …, ϕₙ)(t₀) · exp(−∫_{t₀}^t a₁(s)ds).

Remark 3.2.2 It is very difficult to compute a fundamental matrix Φ(t) explicitly for general A(t). In the case of constant coefficients, A(t) ≡ A, we shall show in the following section that the exponential matrix e^{At} is a fundamental matrix.

Theorem 3.2.5 Let Φ(t) be a fundamental matrix of (LH) and C ∈ R^{n×n} be nonsingular. Then Φ(t)C is also a fundamental matrix. Conversely, if Ψ(t) is also a fundamental matrix, then there exists a nonsingular matrix P such that Ψ(t) = Φ(t)P for all t.

Proof. Since

(Φ(t)C)' = Φ'(t)C = A(t)(Φ(t)C)  and  det(Φ(t)C) = det Φ(t) · det C ≠ 0,

Φ(t)C is a fundamental matrix. Now we consider Φ⁻¹(t)Ψ(t) and we want to show (d/dt)(Φ⁻¹(t)Ψ(t)) ≡ 0; then we complete the proof by setting P ≡ Φ⁻¹(t)Ψ(t). We have

(d/dt)(Φ⁻¹Ψ) = (Φ⁻¹)'Ψ + Φ⁻¹Ψ'.

Since ΦΦ⁻¹ = I, we have Φ'Φ⁻¹ + Φ(Φ⁻¹)' = 0 and hence (Φ⁻¹)' = −Φ⁻¹Φ'Φ⁻¹. Then

(d/dt)(Φ⁻¹Ψ) = −Φ⁻¹Φ'Φ⁻¹Ψ + Φ⁻¹Ψ' = −Φ⁻¹AΦΦ⁻¹Ψ + Φ⁻¹AΨ = 0.

Example 3.2.2 The solution ϕ(t) of the scalar equation x'(t) = ax(t) + g(t), x(τ) = ξ, can be found by the method of integrating factors:

ϕ(t) = e^{a(t−τ)}ξ + ∫_τ^t e^{a(t−η)}g(η)dη.

In the following, we shall derive the "variation of constants formula", which is very important in linearized stability theory.

Theorem 3.2.6 (Variation of constants formula). Let Φ(t) be a fundamental matrix of (LH) and ϕ(t, τ, ξ) be the solution of (LN),

dx/dt = A(t)x + g(t), x(τ) = ξ.    (LN)

Then

ϕ(t, τ, ξ) = Φ(t)Φ⁻¹(τ)ξ + ∫_τ^t Φ(t)Φ⁻¹(η)g(η)dη.    (3.5)

Proof. Let ψ(t) be the right-hand side of (3.5). Then

ψ'(t) = Φ'(t)Φ⁻¹(τ)ξ + g(t) + ∫_τ^t Φ'(t)Φ⁻¹(η)g(η)dη
      = A(t)[Φ(t)Φ⁻¹(τ)ξ + ∫_τ^t Φ(t)Φ⁻¹(η)g(η)dη] + g(t)
      = A(t)ψ(t) + g(t).

Since ψ(τ) = ξ, by the uniqueness of solutions it follows that ψ(t) ≡ ϕ(t, τ, ξ) for all t. Thus we complete the proof.

Remark 3.2.3 Setting ξ = 0, ϕ_p(t) = ∫_τ^t Φ(t)Φ⁻¹(η)g(η)dη is a particular solution of (LN). Obviously Φ(t)Φ⁻¹(τ)ξ is a general solution of (LH). Thus we have: general solution of (LN) = general solution of (LH) + particular solution of (LN).

Remark 3.2.4 If A(t) ≡ A, then Φ(t) = e^{At} and ϕ(t, τ, ξ) = e^{A(t−τ)}ξ + ∫_τ^t e^{A(t−η)}g(η)dη, which will be used in the theory of linearization.

3.3 Linear Systems with Constant Coefficients

In this section we shall study the linear system with constant coefficients,

x' = Ax, A = (a_ij) ∈ R^{n×n}, x(0) = x₀.    (LC)

It is easy to guess that the solution of the above should be x(t) = e^{At}x₀. We need to define the exponential matrix e^A.

Definition 3.3.1 For A ∈ R^{n×n},

e^A = I + A + A²/2! + ⋯ + Aⁿ/n! + ⋯ = Σ_{n=0}^∞ Aⁿ/n!.

In order to show that the above series of matrices is well-defined, we need to define norms of matrices A ∈ R^{m×n} for any m, n ∈ Z⁺.

Definition 3.3.2 Let A ∈ R^{m×n}, A = (a_ij). Define

‖A‖ = sup_{x≠0} ‖Ax‖ / ‖x‖.

Remark 3.3.1 Obviously we have ‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = sup{‖Ax‖ : ‖x‖ ≤ 1}. The following facts are well known from linear algebra ([IK], p. 9). Let x = (x₁, …, xₙ) and A = (a_ij). If ‖x‖_∞ = sup_i |x_i|, then ‖A‖_∞ = max_i Σ_k |a_ik|; if ‖x‖₁ = Σ_i |x_i|, then ‖A‖₁ = max_k Σ_i |a_ik|; if ‖x‖₂ = (Σ_i |x_i|²)^{1/2}, then ‖A‖₂ = √(ρ(AᵀA)), where

ρ(AᵀA) = spectral radius of AᵀA = max{|λ| : λ ∈ σ(AᵀA)},

and σ(AᵀA) = spectrum of AᵀA = set of eigenvalues of AᵀA. From the definition of ‖A‖, it follows that

‖Ax‖ ≤ ‖A‖‖x‖, A ∈ R^{m×n}, x ∈ Rⁿ;
‖A + B‖ ≤ ‖A‖ + ‖B‖, A, B ∈ R^{m×n};
‖cA‖ = |c|‖A‖, c ∈ R;
‖AB‖ ≤ ‖A‖‖B‖, A ∈ R^{m×n}, B ∈ R^{n×p};
‖Aⁿ‖ ≤ ‖A‖ⁿ, A ∈ R^{m×m}.

Theorem 3.3.1 e^A is well-defined and ‖e^A‖ ≤ e^{‖A‖}.
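The closed-form expressions for the induced norms in Remark 3.3.1 can be checked against numpy's built-in norms on any example matrix (the matrix below is arbitrary):

```python
import numpy as np

# Check max-row-sum, max-column-sum, and spectral formulas for the induced norms.
A = np.array([[1.0, -2.0, 3.0],
              [0.5,  4.0, -1.0]])

norm_inf = np.max(np.sum(np.abs(A), axis=1))            # ||A||_inf: max row sum
norm_1   = np.max(np.sum(np.abs(A), axis=0))            # ||A||_1: max column sum
norm_2   = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))) # ||A||_2: sqrt of rho(AᵀA)

print(np.isclose(norm_inf, np.linalg.norm(A, np.inf)),
      np.isclose(norm_1, np.linalg.norm(A, 1)),
      np.isclose(norm_2, np.linalg.norm(A, 2)))
```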

Proof. Since for any a ∈ R the series e^a = Σ_{n=0}^∞ aⁿ/n! converges, and

‖Σ_{n=m}^{m+p} Aⁿ/n!‖ ≤ Σ_{n=m}^{m+p} ‖A‖ⁿ/n!,

the partial sums of the series defining e^A form a Cauchy sequence, and we complete the proof.

Next we state and prove the properties of e^A.

Theorem 3.3.2 (i) e^O = I; (ii) if AB = BA, then e^{A+B} = e^A e^B; (iii) e^A is invertible and (e^A)⁻¹ = e^{−A}; (iv) if B = PAP⁻¹, then e^B = Pe^AP⁻¹.

Proof. (i) follows directly from the definition of e^A.

(ii) Since AB = BA, we have (A + B)ⁿ = Σ_{m=0}^n C(n, m) A^m B^{n−m} and

e^{A+B} = Σ_{n=0}^∞ (A + B)ⁿ/n!
        = Σ_{n=0}^∞ (1/n!) Σ_{m=0}^n [n!/(m!(n − m)!)] A^m B^{n−m}
        = Σ_{n=0}^∞ Σ_{j+k=n} (A^j/j!)(B^k/k!)
        = (Σ_{j=0}^∞ A^j/j!)(Σ_{k=0}^∞ B^k/k!) = e^A e^B.

The second-to-last equality holds because the two series converge absolutely.

(iii) Since A(−A) = (−A)A, we have I = e^O = e^{A+(−A)} = e^A e^{−A}, and (iii) follows.

(iv) Since B = PAP⁻¹, we have B^k = PA^kP⁻¹ and Σ_{k=0}^n B^k/k! = P(Σ_{k=0}^n A^k/k!)P⁻¹. Let n → ∞, and we complete the proof of (iv).

Theorem 3.3.3 e^{At} is a fundamental matrix of x' = Ax.

Proof. Let Y(t) = e^{At}. Then

Y(t + h) − Y(t) = e^{A(t+h)} − e^{At} = (e^{Ah} − I)e^{At} = (hA + O(h²))e^{At}.

Thus we have

Y'(t) = lim_{h→0} [Y(t + h) − Y(t)]/h = Ae^{At} = AY(t).
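Theorem 3.3.3 can be checked numerically: integrating the matrix equation X' = AX from X(0) = I should reproduce scipy's matrix exponential. The matrix A below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# For constant A, e^{At} solves X' = AX, X(0) = I.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative matrix
t1 = 1.5

sol = solve_ivp(lambda t, X: (A @ X.reshape(2, 2)).ravel(),
                (0.0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi = sol.y[:, -1].reshape(2, 2)

print(np.max(np.abs(Phi - expm(A * t1))))  # agree to solver tolerance
```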

Consider the linear nonhomogeneous system

x' = Ax + g(t), x(τ) = ξ.    (3.6)

From (3.5) and Theorem 3.3.3, we have the following important variation of constants formula: the solution of (3.6) is

ϕ(t, τ, ξ) = e^{A(t−τ)}ξ + ∫_τ^t e^{A(t−η)}g(η)dη.    (3.7)

Equation (3.7) is very important in showing that linearized stability implies nonlinear stability for an equilibrium solution of the nonlinear system x' = f(x). Since the solution of the I.V.P. of (LC) is x(t) = e^{At}x₀, it is important to estimate |x(t)| by |x(t)| ≤ ‖e^{At}‖|x₀|. Thus we need to estimate ‖e^{At}‖ and compute e^{At}. In the following we compute e^{At}.

Example 3.3.1 If A = D = diag(λ₁, …, λₙ), then by the definition of e^{At} we have

e^{At} = I + At + A²t²/2! + ⋯ = diag(e^{λ₁t}, …, e^{λₙt}).

Example 3.3.2 If A is diagonalizable, then we have D = P⁻¹AP = diag(λ₁, …, λₙ) for some nonsingular P. Since AP = PD, if P = [x₁, …, xₙ],

then we have Ax_k = λ_k x_k, k = 1, …, n, i.e., x_k is an eigenvector corresponding to the eigenvalue λ_k of A. From Theorem 3.3.2(iv) it follows that e^{Dt} = P⁻¹e^{At}P, or e^{At} = Pe^{Dt}P⁻¹.

Example 3.3.3 Let J be the Jordan form of A, J = P⁻¹AP, where

J = diag(J₀, J₁, …, J_s), J₀ = diag(λ₁, …, λ_k),

and each Jᵢ is a Jordan block with λᵢ on the diagonal, 1 on the superdiagonal, and 0 elsewhere. Then e^{Jt} = P⁻¹e^{At}P. In order to evaluate ‖e^{Jt}‖ we need to review Jordan forms.

Review of Jordan forms: Let A ∈ R^{n×n} and let the characteristic polynomial of A be f(λ) = det(λI − A) = (λ − λ₁)^{n₁} ⋯ (λ − λ_s)^{n_s}, where λ₁, …, λ_s are the distinct eigenvalues of A. We call nᵢ the algebraic multiplicity of λᵢ. Let Vᵢ = {v ∈ Cⁿ : Av = λᵢv}. We call Vᵢ the eigenspace of λᵢ and mᵢ = dim Vᵢ the geometric multiplicity of λᵢ. It is a well-known fact that mᵢ ≤ nᵢ. Obviously Vᵢ = null space of (A − λᵢI) = N(A − λᵢI), and N(A − λᵢI) ⊆ N(A − λᵢI)² ⊆ N(A − λᵢI)³ ⊆ ⋯. From linear algebra it is a well-known result that N(A − λᵢI)^{nᵢ} = N(A − λᵢI)^{nᵢ+1} ⊆ Cⁿ. Let M(λᵢ, A) = N(A − λᵢI)^{nᵢ}. We call M(λᵢ, A) the generalized eigenspace of λᵢ. We say that an eigenvalue λ has simple elementary divisors if M(λ, A) = N(A − λI), i.e., the algebraic multiplicity of λ equals the geometric multiplicity. The following is a theorem from linear algebra.

Theorem 3.3.4 ([HS1]). Cⁿ = M(λ₁, A) ⊕ ⋯ ⊕ M(λ_s, A).

To understand the structure of the Jordan blocks, for simplicity we may assume that the n × n matrix A has only one eigenvalue λ. Let v₁, …, v_k be a basis of the eigenspace of λ. Let vᵢ^(1) be a solution of (A − λI)v = vᵢ and vᵢ^(j) be a solution of

(A − λI)v = vᵢ^(j−1), j = 2, …, ℓᵢ.

We call {vᵢ^(j)}_{j=1}^{ℓᵢ} the generalized eigenvectors of the eigenvector vᵢ. It is easy to show that {vᵢ, vᵢ^(1), …, vᵢ^(ℓᵢ)}_{i=1}^k is a basis of Cⁿ, with Σ_{i=1}^k (ℓᵢ + 1) = n. Let

P = [v₁, v₁^(1), …, v₁^(ℓ₁), v₂, v₂^(1), …, v₂^(ℓ₂), …, v_k, v_k^(1), …, v_k^(ℓ_k)],

and write the rows of P⁻¹ as

P⁻¹ = [w₁, w₁^(1), …, w₁^(ℓ₁), …, w_k, w_k^(1), …, w_k^(ℓ_k)]ᵀ.

Then it follows that

AP = [Av₁, Av₁^(1), …, Av₁^(ℓ₁), …, Av_k, Av_k^(1), …, Av_k^(ℓ_k)]
   = [λv₁, λv₁^(1) + v₁, λv₁^(2) + v₁^(1), …, λv₁^(ℓ₁) + v₁^(ℓ₁−1), …, λv_k, λv_k^(1) + v_k, …, λv_k^(ℓ_k) + v_k^(ℓ_k−1)]
   = λ[v₁, v₁^(1), …, v₁^(ℓ₁), …, v_k, v_k^(1), …, v_k^(ℓ_k)]
     + [0, v₁, v₁^(1), …, v₁^(ℓ₁−1), 0, v₂, …, v₂^(ℓ₂−1), …, 0, v_k, …, v_k^(ℓ_k−1)].

Multiplying on the left by P⁻¹, we obtain

P⁻¹AP = λI + N,

where N = diag(N₁, …, N_k) and each Nᵢ is the (ℓᵢ + 1) × (ℓᵢ + 1) nilpotent matrix with 1's on the superdiagonal and 0's elsewhere. In other words, P⁻¹AP = diag(J₁, …, J_k), where each block Jᵢ = λI + Nᵢ has λ on the diagonal and 1 on the superdiagonal.

Thus, for the general case we have

P⁻¹AP = diag(J₀, J₁, …, J_s),

where J₀ = diag(λ₁, …, λ_I), with I being the number of eigenvalues λ (counted with multiplicity) for which n_λ = m_λ,

Jᵢ = diag(J_{i1}, …, J_{iℓ}),

and each sub-block J_{ij} has λᵢ on the diagonal, 1 on the superdiagonal, and 0 elsewhere.

Lemma 3.3.1 Let J = λI + N be an n_p × n_p Jordan block, where N has 1's on the superdiagonal and 0's elsewhere. Then

e^{Jt} = e^{λt} [ 1   t   t²/2!  ⋯  t^{n_p−1}/(n_p−1)!
                  0   1   t      ⋯  t^{n_p−2}/(n_p−2)!
                  ⋮            ⋱                 ⋮
                  0   0   0      ⋯  1 ],    (3.8)

i.e., the (i, j) entry of e^{Jt} is e^{λt} t^{j−i}/(j − i)! for j ≥ i and 0 for j < i.

Proof. e^{Jt} = e^{(λI+N)t} = e^{λIt + Nt} = e^{λt}e^{Nt}. Since N^{n_p} = O, we have e^{Jt} = e^{λt} Σ_{k=0}^{n_p−1} t^k N^k/k!. Then Lemma 3.3.1 follows directly from routine computations.

Let Re λ(A) = max{Re λ : λ is an eigenvalue of A}.
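Formula (3.8) can be verified directly against scipy's matrix exponential for a single Jordan block (the values of λ, t, and the block size below are illustrative):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# Verify (3.8): for J = λI + N, (e^{Jt})_{ij} = e^{λt} t^{j-i}/(j-i)! for j >= i.
lam, t, n = -0.3, 1.7, 4   # illustrative eigenvalue, time, block size
J = lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

analytic = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        analytic[i, j] = np.exp(lam * t) * t**(j - i) / factorial(j - i)

print(np.max(np.abs(expm(J * t) - analytic)))  # near machine precision
```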

At 3.3.5−αt

that e ≤ Ke , t ≥ 0. If Re λ(A) ≤ 0 and assume that those with zero real parts

eigenvalues

have simple elementary divisors, then eAt ≤ M for t ≥ 0 and some M > 0.

Proof. Let Re λ(A) < 0. From (3.8) and Theorem 3.3.4, it follows that e^{At} = Pe^{Jt}P⁻¹, and

‖e^{At}‖ ≤ ‖P‖ ‖e^{Jt}‖ ‖P⁻¹‖ ≤ Ke^{−αt},

where 0 < α < −Re λ(A) and K > 0. When Re λ(A) ≤ 0 and the eigenvalues with zero real parts have simple elementary divisors, then

‖e^{At}‖ ≤ ‖P‖ ‖e^{Jt}‖ ‖P⁻¹‖ ≤ M.

Theorem 3.3.6 Let λ₁, …, λ_s be the distinct eigenvalues of A and M(λ₁, A), …, M(λ_s, A) be the corresponding generalized eigenspaces. Let x⁰ = Σ_{j=1}^s x^{0,j}, where x^{0,j} ∈ M(λ_j, A). Then the solution of (LC) can be written as

x(t) = e^{At}x⁰ = Σ_{j=1}^s [Σ_{k=0}^{n_j−1} (A − λ_jI)^k (t^k/k!)] x^{0,j} e^{λ_jt}.    (3.9)

Proof. We have

e^{At}x⁰ = e^{(A−λI)t}e^{λt}x⁰ = [Σ_{k=0}^∞ (A − λI)^k (t^k/k!)] x⁰ e^{λt}.

If x⁰ ∈ M(λ, A), then

e^{(A−λI)t}x⁰ = Σ_{k=0}^{n_λ−1} (A − λI)^k (t^k/k!) x⁰.

Hence for any x⁰ ∈ Cⁿ,

x(t) = Σ_{j=1}^s [Σ_{k=0}^{n_j−1} (A − λ_jI)^k (t^k/k!)] x^{0,j} e^{λ_jt}.

Remark 3.3.2 Given A ∈ R^{n×n}, how do we verify analytically that A is a stable matrix, i.e., Re λ(A) < 0? Suppose the characteristic polynomial of A is g(z) = det(zI − A) = a₀zⁿ + a₁z^{n−1} + ⋯ + aₙ, a₀ > 0. The Routh-Hurwitz criterion provides a necessary and sufficient condition for a real polynomial g(z) = a₀zⁿ + ⋯ + aₙ, a₀ > 0, to have all roots with negative real parts (see [Cop], p. 158). In the following we list the conditions for n = 2, 3, 4, which are frequently used in applications.

n = 2:  g(z) = a₀z² + a₁z + a₂;  R-H criterion: a₁ > 0, a₂ > 0.
n = 3:  g(z) = a₀z³ + a₁z² + a₂z + a₃;  R-H criterion: a₁ > 0, a₃ > 0, a₁a₂ > a₀a₃.
n = 4:  g(z) = a₀z⁴ + a₁z³ + a₂z² + a₃z + a₄;  R-H criterion: a₁ > 0, a₂ > 0, a₄ > 0, a₃(a₁a₂ − a₀a₃) > a₁²a₄.

R–H criterion a2 > 0, a1 > 0 a3 > 0, a1 > 0 a1 a2 > a0 a3 a4 > 0, a2 > 0, a1 > 0, a3 (a1 a2 − a0 a3 ) > a21 a4

Two-Dimensional Linear Autonomous Systems

In this section we shall apply Theorem 3.3.6 to classify the behavior of the solutions of the two-dimensional linear system [H1]

ẋ = Ax, A = [a b; c d], det A ≠ 0,    (3.10)

where a, b, c, d are real constants. Then (0, 0) is the unique rest point of (3.10). Let λ₁, λ₂ be the eigenvalues of A, and consider the following cases:

Case 1: λ₁, λ₂ are real and λ₂ < λ₁. Let v¹, v² be unit eigenvectors of A associated with λ₁, λ₂ respectively. Then from (3.9), the general real solution of (3.10) is

x(t) = c₁e^{λ₁t}v¹ + c₂e^{λ₂t}v².

Case 1a (Stable node) λ₂ < λ₁ < 0. Let L₁, L₂ be the lines generated by v¹, v² respectively. Since λ₂ < λ₁ < 0, x(t) ≈ c₁e^{λ₁t}v¹ as t → ∞ and the trajectories are tangent to L₁. The origin is a stable node. (See Fig. 3.1.)

Fig. 3.1

Case 1b (Unstable node) 0 < λ2 < λ1 . Then x(t) ≈ c1 eλ1 t v 1 as t → ∞. The origin is an unstable node. (See Fig. 3.2.)

Fig. 3.2

Case 1c (Saddle point) λ₂ < 0 < λ₁. In this case, the origin is called a saddle point, and L₁, L₂ are called the unstable manifold and stable manifold of the rest point (0, 0) respectively. (See Fig. 3.3.)

Case 2: λ₁, λ₂ are complex. Let λ₁ = α + iβ, λ₂ = α − iβ, and let v¹ = u + iv, v² = u − iv be complex eigenvectors. Then

x(t) = ce^{(α+iβ)t}v¹ + c̄e^{(α−iβ)t}v̄¹ = 2Re(ce^{(α+iβ)t}v¹).

Let c = ae^{iδ}. Then

x(t) = 2ae^{αt}(u cos(βt + δ) − v sin(βt + δ)).

Fig. 3.3

Let U and V be the lines generated by u, v respectively. Case 2a (Center) α = 0, β 6= 0. The origin is called a center. (See Fig. 3.4.)

Fig. 3.4

Case 2b (Stable focus, spiral) α < 0, β 6= 0. The origin is called a stable focus or stable spiral. (See Fig. 3.5.) Case 2c (Unstable focus, spiral) α > 0, β 6= 0. The origin is called an unstable focus or unstable spiral. (See Fig. 3.6.) Case 3: (Improper nodes) λ1 = λ2 = λ. Case 3a There are two linearly independent eigenvectors v 1 and v 2 of

Fig. 3.5

Fig. 3.6

the eigenvalue λ. Then

x(t) = (c₁v¹ + c₂v²)e^{λt}.

If λ > 0 (λ < 0), then the origin 0 is called an unstable (stable) improper node. (See Fig. 3.7.)

Case 3b There is only one eigenvector v¹ associated with the eigenvalue λ. Then from (3.9),

x(t) = (c₁ + c₂t)e^{λt}v¹ + c₂e^{λt}v²,

where v² is any vector independent of v¹. (See Fig. 3.8.)
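The case analysis above translates directly into a classification routine. The sketch below is a coarse illustration of Cases 1-3 (tolerances and the handling of borderline cases are my own choices, not from the text):

```python
import numpy as np

# Classify the rest point of x' = Ax (2x2, det A != 0) from the eigenvalues.
def classify(A):
    lam = np.linalg.eigvals(A)
    if np.all(np.abs(lam.imag) < 1e-12):         # Case 1 and Case 3: real eigenvalues
        l1, l2 = sorted(lam.real)
        if np.isclose(l1, l2):
            return "improper node"               # Case 3
        if l2 < 0:
            return "stable node"                 # Case 1a
        if l1 > 0:
            return "unstable node"               # Case 1b
        return "saddle point"                    # Case 1c
    a = lam[0].real                              # Case 2: complex pair alpha ± i beta
    if np.isclose(a, 0.0):
        return "center"                          # Case 2a
    return "stable focus" if a < 0 else "unstable focus"  # Cases 2b, 2c

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
print(classify(np.array([[-2.0, 0.0], [0.0, 1.0]])))   # saddle point
```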

Fig. 3.7

Fig. 3.8

3.5 Linear Systems with Periodic Coefficients

In this section, we shall study the linear periodic system

x' = A(t)x, A(t) = (a_ij(t)) ∈ R^{n×n},    (LP)

where A(t) is continuous on R and is periodic with period T, i.e., A(t) = A(t + T) for all t. We shall analyze the structure of the solutions x(t) of (LP). Before we prove the main results, we need the following theorem concerning the logarithm of a nonsingular matrix.

Theorem 3.5.1 Let B ∈ R^{n×n} be nonsingular. Then there exists A ∈ C^{n×n}, called a logarithm of B, satisfying e^A = B.

Proof. Let B = PJP⁻¹, where J is a Jordan form of B, J = diag(J₀, J₁, …, J_s), with J₀ = diag(λ₁, …, λ_k) and

Jᵢ = λᵢI + Nᵢ ∈ C^{nᵢ×nᵢ}, i = 1, …, s,

where Nᵢ has 1's on the superdiagonal and 0's elsewhere. Since B is nonsingular, λᵢ ≠ 0 for all i. If J = e^{Ã} for some Ã ∈ C^{n×n}, then it follows that B = Pe^{Ã}P⁻¹ = e^{PÃP⁻¹} = e^A with A := PÃP⁻¹. Hence it suffices to prove the theorem for the Jordan blocks Jᵢ, i = 1, …, s. Note that Nᵢ^{nᵢ} = O. From the identity

log(1 + x) = Σ_{p=1}^∞ ((−1)^{p+1}/p) x^p, |x| < 1,

and

e^{log(1+x)} = 1 + x,    (3.11)

we formally write

log Jᵢ = (log λᵢ)I + log(I + Nᵢ/λᵢ) = (log λᵢ)I + Σ_{p=1}^∞ ((−1)^{p+1}/p)(Nᵢ/λᵢ)^p.    (3.12)

From (3.12) we define

Aᵢ = (log λᵢ)I + Σ_{p=1}^{nᵢ−1} ((−1)^{p+1}/p)(Nᵢ/λᵢ)^p.

Then from (3.11) we have

e^{Aᵢ} = exp((log λᵢ)I) exp(Σ_{p=1}^{nᵢ−1} ((−1)^{p+1}/p)(Nᵢ/λᵢ)^p) = λᵢ(I + Nᵢ/λᵢ) = Jᵢ.

Now we state our main results about the structure of the solutions of (LP).

Theorem 3.5.2 (Floquet Theorem). If Φ(t) is a fundamental matrix of (LP), then so is Φ(t + T). Moreover, there exists a nonsingular P(t) ∈ C^{n×n} satisfying P(t) = P(t + T), and there exists R ∈ C^{n×n} such that Φ(t) = P(t)e^{tR}.

Proof. Let Ψ(t) = Φ(t + T). Then

Ψ'(t) = Φ'(t + T) = A(t + T)Φ(t + T) = A(t)Ψ(t).

From Theorem 3.2.5, there exists a nonsingular C ∈ R^{n×n} such that Φ(t + T) = Φ(t)C. By Theorem 3.5.1, C = e^{TR} for some R ∈ C^{n×n}. Then we have

Φ(t + T) = Φ(t)e^{TR}.    (3.13)

Define P(t) = Φ(t)e^{−tR}. Then Φ(t) = P(t)e^{tR}, and we have

P(t + T) = Φ(t + T)e^{−(t+T)R} = Φ(t)e^{TR}e^{−(t+T)R} = Φ(t)e^{−tR} = P(t).

Thus we complete the proof of the theorem.

Definition 3.5.1 The eigenvalues of C = e^{TR} are called the characteristic multipliers (Floquet multipliers) of (LP). The eigenvalues of R are called the characteristic exponents (Floquet exponents) of (LP).

Next we shall establish the relationship between Floquet multipliers and Floquet exponents. We shall also show that the Floquet multipliers are uniquely determined by the system (LP).

Theorem 3.5.3 Let ρ₁, …, ρₙ be the characteristic exponents of (LP). Then the characteristic multipliers are e^{Tρ₁}, …, e^{Tρₙ}. If ρᵢ appears in a Jordan block of R, then e^{Tρᵢ} appears in a Jordan block of C = e^{TR} of the same size.

Proof. Since ρ₁, …, ρₙ are eigenvalues of R, Tρ₁, …, Tρₙ are eigenvalues of TR. Let P⁻¹RP = J = diag(J₀, J₁, …, J_s). Then C = e^{TR} = Pe^{TJ}P⁻¹, where e^{TJ} = diag(e^{TJ₀}, e^{TJ₁}, …, e^{TJ_s}).

Theorem 3.5.4 The characteristic multipliers λ₁, …, λₙ are uniquely determined by A(t), and all characteristic multipliers are nonzero.

Proof. Let Φ₁(t) be another fundamental matrix of x' = A(t)x. Then there exists a nonsingular matrix C₁ such that Φ(t) = Φ₁(t)C₁. It follows that

Φ₁(t + T)C₁ = Φ(t + T) = Φ(t)C = Φ(t)e^{TR} = Φ₁(t)C₁e^{TR},

or

Φ₁(t + T) = Φ₁(t)C₁e^{TR}C₁⁻¹ = Φ₁(t)e^{TR'},

where e^{TR'} := C₁e^{TR}C₁⁻¹. Then e^{TR} and e^{TR'} have the same eigenvalues. Hence the characteristic multipliers λ₁, …, λₙ are uniquely determined by the system (LP). That the λᵢ are nonzero follows directly from λ₁ ⋯ λₙ = det e^{TR} ≠ 0.

Now we are in a position to study the asymptotic behavior of the solutions x(t) of (LP).

Theorem 3.5.5 (i) Each solution x(t) of (LP) approaches zero as t → ∞ iff |λᵢ| < 1 for each characteristic multiplier λᵢ or, equivalently, Re ρᵢ < 0 for each characteristic exponent ρᵢ. (ii) Each solution of (LP) is bounded iff (1) |λᵢ| ≤ 1, and (2) if |λᵢ| = 1, then the corresponding Jordan block e^{TJᵢ} is [λᵢ].

Proof. Consider the change of variable x = P(t)y. Then we have

x' = P'(t)y + P(t)y' = A(t)x = A(t)P(t)y,

or

P'(t)y + P(t)y' = A(t)P(t)y.    (3.14)

To compute P'(t), we differentiate both sides of Φ(t) = P(t)e^{tR}:

A(t)Φ(t) = P'(t)e^{tR} + P(t)Re^{tR},

or

P'(t) = A(t)P(t) − P(t)R.    (3.15)

From (3.14) and (3.15) it follows that

y' = Ry.    (3.16)

Since x(t) = P(t)y(t) and P(t) is periodic and continuous, the theorem follows directly from Theorem 3.3.5.

Remark 3.5.1 If the fundamental matrix Φ(t) satisfies Φ(0) = I, then from (3.13) the characteristic multipliers are the eigenvalues of Φ(T). Hence we may compute the Floquet multipliers numerically by solving the I.V.P. x' = A(t)x, x(0) = eᵢ, and denoting the solution by yᵢ(t), i = 1, 2, …, n. Then

Φ(T) = [y₁(T), …, yₙ(T)].

Example 3.5.1 Hill's and Mathieu's equations ([JS] p. 237): Consider a pendulum of length a with a bob of mass m, suspended from a support which is constrained to move vertically with displacement ξ(t). (See Fig. 3.9.) The kinetic energy T and potential energy V are given by

T = (1/2)m[(ξ̇ − aθ̇ sin θ)² + a²θ̇² cos²θ],
V = −mg(ξ + a cos θ).

The Lagrange equation for L = T − V is

d/dt(∂L/∂θ̇) = ∂L/∂θ.

From routine computations, it follows that

aθ̈ + (g − ξ̈) sin θ = 0.

Fig. 3.9

For small oscillations, sin θ ≈ θ, and we have

aθ̈ + (g − ξ̈)θ = 0.

As a standard form for the above equation, we consider

x'' + (α + p(t))x = 0.

When p(t) is periodic, the above equation is known as Hill's equation. If p(t) = β cos t, then the equation

ẍ + (α + β cos t)x = 0

is called Mathieu's equation. Now we consider Mathieu's equation. Write it as a linear periodic system

(x, y)' = A(t)(x, y)ᵀ,  A(t) = [0, 1; −α − β cos t, 0].

Let Φ(t) be the fundamental matrix with Φ(0) = I. From trace(A(t)) = 0 and Abel's formula, the characteristic multipliers μ₁, μ₂ satisfy

μ₁μ₂ = det Φ(T) = det Φ(0) exp(∫₀ᵀ tr(A(t))dt) = 1,

and μ₁, μ₂ are the solutions of

μ² − μ trace(Φ(T)) + det Φ(T) = 0.

Let ϕ = ϕ(α, β) = trace(Φ(T)). Then

μ₁,₂ = (1/2)[ϕ ± √(ϕ² − 4)].

From the Floquet Theorem, (3.16) and (3.9), we have five cases:

and µ1 , µ2 are the solutions of µ2 − µ (T race(Φ(T ))) + det Φ(T ) = 0. Let ϕ = ϕ(α, β) = T race(Φ(T )). Then i p 1h µ1,2 = ϕ ± ϕ2 − 4 . 2 From Floquet Theorem, (3.16) and (3.9), we have five cases:

(i) ϕ > 2. Then μ₁, μ₂ > 0 with μ₁ > 1 and μ₂ = 1/μ₁ < 1. The characteristic exponents satisfy ρ₁ > 0 and ρ₂ < 0, and

x(t) = c₁e^{ρ₁t}p₁(t) + c₂e^{ρ₂t}p₂(t),

where pᵢ(t) is periodic with period 2π, i = 1, 2. Hence {(α, β) : ϕ(α, β) > 2} is an unstable region.

(ii) ϕ = 2. Then μ₁ = μ₂ = 1, ρ₁ = ρ₂ = 0; one solution is of period 2π, and the other is unbounded.

(iii) −2 < ϕ < 2. The characteristic multipliers are complex; in fact |μ₁| = |μ₂| = 1, ρ₁ = iν, ρ₂ = −iν, and

x(t) = c₁e^{iνt}p₁(t) + c₂e^{−iνt}p₂(t).

Obviously the solutions are bounded, and {(α, β) : −2 < ϕ(α, β) < 2} is a stable region. We note that the solutions are oscillatory but in general not periodic.

(iv) ϕ = −2. Then μ₁ = μ₂ = −1. Set T = 2π and let x₀ be an eigenvector of Φ(T) corresponding to the eigenvalue μ₁ = −1. Consider x(t) = Φ(t)x₀. Then

x(t + T) = Φ(t + T)x₀ = Φ(t)Φ(T)x₀ (Exercise!) = Φ(t)(−x₀) = −Φ(t)x₀.

Hence

x(t + 2T) = −Φ(t + T)x₀ = −Φ(t)Φ(T)x₀ = Φ(t)x₀ = x(t),

and there is one periodic solution with period 2T = 4π.

(v) ϕ < −2. Then μ₁, μ₂ are real and negative with μ₁μ₂ = 1. Let −1 < μ₁ < 0 and μ₂ < −1. Write

μ₁ = e^{Tρ₁} = e^{2π(−σ + i/2)} = −e^{−2πσ},
μ₂ = e^{2π(σ + i/2)} = −e^{2πσ}.

The general solutions take the form

x(t) = c₁e^{(σ + i/2)t}p₁(t) + c₂e^{(−σ + i/2)t}p₂(t),

where p₁, p₂ are periodic with period 2π, or equivalently

x(t) = c₁e^{σt}q₁(t) + c₂e^{−σt}q₂(t),

where q₁(t), q₂(t) are periodic with period 4π.

LINEAR SYSTEMS

65

The following theorem of Lyapunov gives a sufficient condition for stable solutions. Theorem 3.5.6 (Lyapunov [H1] p. R π130). Rπ Let p(t + π) = p(t) 6≡ 0 for all t, 0 p(t)dt ≥ 0 and 0 |p(t)| dt ≤ π4 , p(t) is continuous on R. Then, all solutions of the equation u00 + p(t)u = 0 are bounded.

Proof. It suffices to show that no characteristic multiplier is real. If µ1 , µ2 are complex numbers, then µ1 µ2 = 1 implies µ1 = eiν and µ2 = e−iν and u(t) = c1 eiνt p1 (t) + c2 e−iνt p2 (t) are bounded. If µ1 is real, then there exists a real solution u(t) with u(t + π) = ρu(t) for all t for some ρ 6= 0. Then, either u(t) 6= 0 for all t or u(t) has infinitely many zeros with two consecutive zeros a and b, 0 ≤ b − a ≤ π. Case I: u(t) 6= 0, for all t.  Then, we have u(π) = ρu(0), u0 (π) = ρu0 (0) and u0 (π) u(π) = u0 (0) u(0). Since

u00 (t) u(t)

Write Z

+ p(t) = 0, it follows that Z π 00 Z π u (t) dt + p(t)dt = 0. u(t) 0 0

π Z π 0 Z π  0 2 (u (t))2 u0 u (t) u00 (t) dt = + dt = dt > 0. 2 (t) u(t) u u u(t) 0 0 0 0 Rπ Thus, we obtain a contradiction to the assumption 0 p(t)dt ≥ 0. Case II: Assume u(t) > 0 for a < t < b, u(a) = u(b) = 0. Let u(c) = max |u(t)|. Then, we have π

a≤t≤b

4 ≥ π ≥

Z

π

Z

b

|p(t)|dt ≥ 0

1 u(c)

Z |p(t)|dt =

a

Z

b

|u00 (t)|dt ≥

a

a

00 u (t) u(t) dt

1 |u0 (α) − u0 (β)| u(c)

for any α < β in (a, b). We note that Z β Z |u0 (α) − u0 (β)| = u00 (t)dt ≤ α

b

β

α

|u00 (t)|dt ≤

Z a

b

|u00 (t)|dt.

66

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

By Mean Value Theorem we have u(c) − u(b) u(c) − u(a) = u0 (α), = −u0 (β), c−a b−c for some α ∈ (a, c), β ∈ (c, b). Hence it follows that 1 u(c) 1 0 u(c) 4 0 u (α) − u (β) = > + π u(c) u(c) c − a b − c 1 1 b−a 4 4 = + = > ≥ c−a b−c (c − a)(b − c) b−a π here we apply the inequality 4xy ≤ (x + y)2 with x = c − a, y = b − c. Thus we obtain a contradiction.

3.6

Adjoint Systems

Let Φ(t) be a fundamental matrix of x0 = A(t)x. Then we have 0 Φ−1 = −Φ−1 Φ0 Φ−1 = −Φ−1 A(t). Taking conjugate transpose yields  −1 0 −1 Φ∗ = −A∗ (t) (Φ∗ ) . −1

Hence (Φ∗ )

is a fundamental matrix of y 0 = −A∗ (t)y

and we call Y 0 = −A∗ (t)Y the adjoint of Y 0 = A(t)Y . Theorem 3.6.1 Let Φ(t) be a fundamental matrix of x0 = A(t)x. Then Ψ(t) is a fundamental matrix of Y 0 = −A∗ (t)Y iff Ψ∗ Φ = C for some nonsignular matrix C.

Proof. If Ψ(t) is a fundamental matrix of Y 0 = −A∗ (t)Y , then Ψ = def

(Φ∗ )−1 P for some nonsingular matrix P and Ψ∗ Φ = P ∗ Φ−1 Φ = P ∗ = C. On the other hand if Ψ∗ Φ = C, then Ψ = (Φ∗ )−1 C ∗ and hence Ψ is a fundamental matrix of the adjoint system.

LINEAR SYSTEMS

67

Example 3.6.1 Consider n-th order scalar linear equation Dn y + a1 (t)Dn−1 y + · · · + an (t)y = 0. Then we rewrite it in the form of first order system with x1 = y, x2 = y 0 , · · · , xn = y (n−1) and we have    0   x1 x1 0 1 ··· ··· 0  ..   ..      .  0 1 ··· 0    .    = 0  .   .     ..   ..  −an −an−1 · · · · · · −a1 xn xn or x0 = A(t)x. Then the adjoint equation is y 0 = −A(t)T y, or 0   y1 0, · · · · 0 an  ..   −1 0 · · · 0 an−1   .     =  . .   .  . .   . . .  .  0 · · · · −1 a1 yn 



 y1  ..   .   .  .   ..  yn

Hence we have y10 = an yn , y20 = −y1 + an−1 yn , y30 = −y2 + an−2 yn , .. . yn0 = −yn−1 + a1 yn . Let z = yn then we obtain the adjoint equation Dn z − Dn−1 (a1 z) + Dn−2 (a2 z) + ... + (−1)n an z = 0.

Fredholm Alternatives We recall that in Linear Algebra the solvability of the linear system Ax = b, A ∈ Rm×n , b ∈ Rm is stated as follows:

68

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

(Uniqueness): The solution of Ax = b is unique iff Ax = 0 has only trivial solution x = 0. (Existence): The equation Ax = b has a solution iff hb, vi = 0 for every vector v satisfying A∗ v = 0, where A∗ is the complex conjugate of A, i.e., the adjoint of A. Theorem 3.6.2 ([H1] p. 145). Let A(t) ∈ Rn×n be a continuous periodic matrix with period T and f ∈ PT , where PT = {f |f : R → Rn is continuous with period T }. Then, x0 = A(t)x + f (t)

(3.17)

has a solution in PT iff def

Z

hy, f i =

T

y ∗ (t)f (t)dt = 0,

0

for all y ∈ PT satisfying y 0 = −A∗ (t)y. Proof. Let x(t) be a periodic solution of (3.17) in PT . Then x(0) = x(T ). Let Φ(t) be the fundamental matrix of x0 = A(t)x with Φ(0) = I. From the variation of constant formula (3.5), we have Z T x(0) = x(T ) = Φ(T )x(0) + Φ(T )Φ−1 (s)f (s)ds. 0

Then, it follows that Φ−1 (T )x(0) = x(0) +

T

Z

Φ−1 (s)f (s)ds

0

or 

 Φ−1 (T ) − I x(0) =

Z

T

Φ−1 (s)f (s)ds

0

or Bx(0) = b. From Fredholm’s Alternative of linear algebra, the necessary and sufficient condition of the solvability of Bx(0) = b is hv, bi = 0 for all v satisfying B ∗ v = 0. We note that B ∗ v = 0 iff

∗ Φ−1 (T ) v = v,

LINEAR SYSTEMS

69

∗ and y(t) = Φ−1 (t) v is a solution of the adjoint equation y 0 = −A∗ y, where v is an initial value of a T -periodic solution of y 0 = −A∗ y. Then,   Z T −1 hv, bi = 0 iff v, Φ (s)f (s)ds = 0. 0 −1

For y ∈ PT , y(t) = (Φ



(t)) v, Z

T

∗ f ∗ (t) Φ−1 (t) vdt = 0, 0   Z T Φ−1 (s)f (s)ds = 0. iff v,

hy(t), f (t)i = 0 iff

0

Thus we complete the proof of Theorem 3.6.2. Example 3.6.2 u00 + u = cos wt. Rewrite the equation in the form  0      x1 01 x1 0 = + . x2 −1 0 x2 cos wt

(3.18)

The adjoint equation is 

y1 y2

0

 =

0 1 −1 0



y1 y2

 ,

(3.19)

or y10 = y2 , y20 = −y1 . Then, y1 (t) = a cos t + b sin t, y2 (t) = −a sin t + b cos t.

(3.20)

If w 6= 1, there are two cases: 1 n,

n ∈ N, n ≥ 2. Then the adjoint system (3.19) has nontrivial   y1 periodic solutions of period 2π = 2nπ. For all y = ∈ P2nπ w y2   Z 2nπ    t y1 (t) 0 = (−a sin t + b cos t) cos dt , y2 (t) cos nt n 0     2(sin2 nπ) n sin nπ = −a +b = 0. 1 − ( n1 )2 1 − ( n1 )2

(i) w =

70

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Then (3.19) has no “nontrivial” periodic solution of period 2π .       w   0 y1 (t) 0 y1 ,y = , = Then for all y = ∈ P 2π w 0 y2 (t) cos wt y     2 0 0 , = 0. 0 cos wt

(ii) w 6=

1 n.

Hence (3.18) has a unique periodic solution of period 2π/w. If w = 1 then every solution of the adjoint system is of period 2π. From (3.19), we have T   Z T Z T  y1 (t) 0 dt = −a sin t cos t + b cos2 t dt = πb 6= 0, y2 (t) cos t 0 0 for some solutions of (3.18). In fact, from elementary differential equations, t u(t) = a sin t + b cos t + t sin and we have the “resonance” phenomena. The 2 reader can find the generalization of “resonance” in Exercise 3.25. 3.7

Exercises

Exercise 3.1 (i) Show that det eA = etrA . (ii) If A is skew symmetric, i.e., AT = −A, then eA is orthogonal. (iii) Is it true that the solution of I.V.P. dx dt = A(t)x. x(0) = x0 , is x(t) = Z t  A(s)ds x0 ? If not, give a counterexample. Show that if exp 0 Z t  A(t)A(s) = A(s)A(t) for all s, t > 0, then x(t) = exp A(s)ds x0 . 0  A m , m ∈ Z+ . (iv) eA = limm→∞ I + m Exercise 3.2 Find the general solution of x˙ = Ax with     21 3 1 A =  0 2 −1  , x(0) =  2  . 00 2 1

Exercise 3.3 Prove that BeAt = eAt B for all t if and only if BA = AB.

LINEAR SYSTEMS

71

Exercise 3.4 Let A be an invertible matrix. Show that the only invariant lines for the linear system dx/dt = Ax, x ∈ R2 , are the lines ax1 + bx2 = 0, where v = (−b, a)T is an eigenvector of A. A line L ⊂ R2 is called an invariant line for a linear system x0 = f (t, x), x ∈ R2 if for every solution x of this system, we have x(t) ∈ L for all t ∈ R if x(t0 ) ∈ L for some t0 ∈ R. Exercise 3.5 Consider x0 = Ax where A ∈ Rn×n with Re λ(A) ≤ 0. (1) Show that all the solutions are bounded in positive time, when all the eigenvalues are simple. (2) Give two examples demonstrating that when the eigenvalues are not all simple, the solutions may or may not all be bounded in positive time. (3) Prove that all the solutions are bounded in positive time when the matrix A is symmetric. Exercise 3.6 Consider the differential equation x˙ = Ax, where   a −b 0 0 b a 0 0   A=  0 0 2 −5  . 1 0 1 −2 Find all real (a, b) such that every solution is bounded on [0, ∞). Exercise 3.7 Consider the two-dimensional system: X 0 = (A + B(t)) X, where  A=

−2 1 1 −2 2

B(t) =

e−t

1 1+t2002

 , 1 1+t2 −t

e

! .

Prove that limt→∞ X(t) = 0 for every solution X(t). Exercise 3.8 Find a fundamental matrix solution of the system   1 −1/t x˙ = x, t > 0. 1 + t −1   1 Hint: x(t) = is a solution. t

72

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Exercise 3.9 Suppose that A is a continuous matrix and every solution of x˙ = A(t)x is bounded for t ≥ 0 and let Φ(t) be a fundamental matrix −1 solution. R t Prove that Φ (t) is bounded for t ≥ 0 if and only if the function t 7→ 0 tr A(s)ds is bounded below. Hint: The inverse of a matrix is the adjugate of the matrix divided by its determinant. Exercise 3.10 Suppose that A is continuous, the linear system x˙ = A(t)x is defined on an open interval containing the origin whose right-hand end point is w ≤ ∞, and the norm of every solution has a finite limit as t → w. Show that R w there is a non-trivial solution converging to zero as t → w if and only if 0 tr A(s)ds = −∞. Hint: A matrix has a nontrivial kernel if and only if its determinant is zero. Exercise 3.11 Let A be a continuous n × n matrix such that the system x˙ = A(t)x has a bounded fundamental matrix Φ(t) over 0 ≤ t < ∞. (1) Show that all fundamental matrices are bounded on [0, ∞). (2) Show that if Z t  lim inf Re trA(s)ds > −∞, t→∞

0

then Φ−1 (t) is also bounded on [0, ∞). Exercise 3.12 Assume a(t) is a bounded continuous function, on [0, ∞) and φ(t) is a nontrivial solution of y 00 + a(t)y = 0,

(∗)

satisfying φ(t) → 0 as t → ∞. Show that (∗) has a solution which is not bounded over [0, ∞). Exercise 3.13 Let g be a bounded continuous function on (−∞, ∞) and let B and −C be matrices of dimensions k and n − k. All eigenvalues of B and −C have negative real parts. Let     B 0 g (t) A= and g(t) = 1 , 0 C g2 (t) so that the system x0 = Ax + g(t),

(∗∗)

LINEAR SYSTEMS

73

is equivalent to x01 = Bx1 + g1 (t), Show that the functions Z t φ1 (t) = eB(t−s) g1 (s)ds, −∞

x02 = Cx2 + g2 (t). Z φ2 (t) = −



eC(t−s) g2 (s)ds,

t

are defined for all t ∈ R and determine a solution of (∗∗). Exercise 3.14 Show that if g is a bounded continuous function on R1 and if A has no eigenvalues with zero real part, then (∗∗) has at least one bounded solution. (Hint: Use Exercise 3.13.) Exercise 3.15 Let A = λI + N consist of a single Jordan block. Show that for any α > 0, A is similar to a matrix B = λI + αN . (Hint: Let P = [αi−1 δij ] and compute P −1 AP .) Exercise 3.16 Let A be a real n × n matrix. Show that there exists a real nonsingular matrix P such that P −1 AP = B has the real Jordan canonical form J = diag(J0 , J1 , · · · , Js ) where Jk is given as before for real eigenvalues λj while for complex eigenvalue λ = α ± iβ the corresponding Jk has the form   Λ I2 · · · 02 02  02 Λ I2 02 02    Jk =  . . . . . .  .. .. . . .. ..  02 02 · · · 02 Λ Here 02 is the 2 × 2 zero matrix, I2 the 2 × 2 identity matrix, and   α −β Λ= . β α Exercise 3.17 Use the Jordan form to prove that all eigenvalues of A2 have the form λ2 , where λ is an eigenvalue of A. Exercise 3.18 If A = C 2 , where C is a real nonsingular n×n matrix, show that there is a real matrix L such that eL = A. Hint: Use Exercise 3.16 and the fact that if λ = α + iβ = reiθ , then   log r −θ Λ = exp . θ log r

74

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Exercise 3.19 Let A(t) be a 2 × 2 continuous, periodic matrix of period T and let Y (t) be the fundamental matrix of y 0 = A(t)y satisfying Y (0) = I. Let B(t) be a 3 × 3 continuous periodic matrix of period T of the form   A(t), b1 (t) B(t) =  b2 (t)  . 0, 0, b3 (t) Show that the fundamental matrix Φ(t) of z 0 = B(t)z is given by 

 Y (t), z1 Φ(t) =  z2  0, 0, z3 where z3 (t) = e

Rt 0

b3 (s)ds

and z = (z1 , z2 ) is given by Z z(t) =

t

Y (t)Y −1 (s)b(s)z3 (s)ds

0

where b(s) = col(b1 (s), b2 (s)). In particular, show that if ρ1 , ρ2 are the Floquet multipliers associated with the 2 × 2 system eigenvalues of Y (t) then the Floquet multipliers of the 3 × 3 system are ρ1 , ρ2 and ρ3 = z3 (T ). Exercise 3.20 Let a0 (t) and a1 (t) be continuous and T -periodic functions and let φ1 and φ2 be solutions of y 00 + a1 (t)y 0 + a0 (t)y = 0 such that     φ1 (0) φ2 (0) 10 Φ(0) = = = I2 . φ01 (0) φ02 (0) 01 Show that the Floquet multipliers λ satisfy λ2 + αλ + β = 0, where " Z # T 0 α = − [φ1 (T ) + φ2 (T )] , β = exp − a1 (t)dt . 0

Exercise 3.21 In Exercise 3.20, we assume a1 (t) = 0. (1) Show that if −2 < α < 2, then all solutions y are bounded over −∞ < t < ∞.

LINEAR SYSTEMS

75

(2) Show that if α > 2 or α < −2, then for every non-trivial solution y, y(t)2 + y 0 (t)2 must be unbounded over R. (3) Show that if α = −2, then there is at least one solution y of period T . (4) Show that if α = 2, then there is at least one solution y of period 2T .

Exercise 3.22 Show that Φ(t + T ) = Φ(t)Φ(T ) where Φ(t) is the fundamental matrix of linear T -periodic system X 0 = A(t)X with Φ(0) = I. Exercise 3.23 Let A be a continuous T -periodic n × n matrix. (1) Show that the linear system x0 = A(t)x has at least one nontrivial solution x = x(t) such that x(t + T ) = µx(t), µ is a characteristic multiplier. (2) Suppose µ1 , ..., µk are k distinct characteristic multipliers of x0 = A(t)x. Show that x0 = A(t)x has k linearly independent solutions xi (t) = pi (t)eρi t , pi (t) is periodic with period T , i = 1, 2, ..., k. Exercise 3.24 and Yamabe [MY]).  (Markus −1 + 23 cos2 t 1 − 32 cos t sin t If A(t) = then the eigenvalue λ1 (t), −1 − 32 sin t cos t −1 + 32 sin2 t √ λ2 (t) of A(t) are λ1 (t) = [−1 + i 7]/4, λ2 (t) = λ1 (t) and, in particular, the real parts of the eigenvalues have negative real parts. On the other hand, one can verify directly that the vector (− cos t, sin t) exp( 2t ) is a solution of x0 = A(t)x and this solution is unbounded as t → ∞. Show that one π of the characteristic multipliers is −e 2 and the other multiplier is −e−π . Exercise 3.25 (1) Consider linear inhomogeneous system x0 = Ax + f (t) (∗ ∗ ∗) where f (t) is a continuous 2π-periodic function. If there is a 2π-periodic solution y(t) of adjoint equation y 0 = −AT y such that Z 2π

y T (t)f (t)dt 6= 0,

0

show that every solution x(t) of (∗ ∗ ∗) is unbounded. d Hint: Compute dt y T (t)x(t) and integrate from 0 to ∞.

76

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

(2) Show that the resonance occurs for the second order linear equation x00 + w02 x = F cos wt, when w = w0 . Exercise 3.26 In x00 + a1 (t)x0R+ a2 (t)x = 0, make the change of variable t s = F (t), where F 0 (t) = exp[− a1 (s)ds] and let t = G(s). Show that this loads to d2 x + g(s)x = 0 where ds2

g(s) is

a2 (t) (F 0 (t))2

evaluated at t = G(s). Exercise 3.27 (1) Consider the system x˙ = Ax where  1 0 A= 1 0 1 0

 0 −1 . 0

Let x(t) be a solution of the above system. We are saying that x(t) = c > 0, and x(t) is is growing linearly if there exists limx→∞ |x(t)| t log |x(t)| = χ > 0. growing exponentially if limx→∞ t Find all initial conditions x(0) such that the respective solution x(t) are a) bounded; b) growing linearly; c) growing exponentially. In case c) find the respective constant χ. (2) Let f : R → R be an odd continuous function such that f (0) = R 2012 1 0, f (x) > 0 for x > 0 and 0 f (x) dx = 1. Determine all values of a for which there is a solution x(t) of the equation x˙ = f (x) with x(0) = 0, x(1) = a.

Chapter 4

STABILITY OF NONLINEAR SYSTEMS

4.1

Definitions

Let x(t) be a solution of the system dx = f (t, x). dt

(4.1)

In this section, we shall introduce three types of stability for a solution x(t) of (4.1), namely stability, instability and asymptotic stability in the sense of Lyapunov. Definition 4.1.1 We say that a solution x(t) of (4.1) is stable, more precisely, stable over the interval [t0 , ∞) if (i) for each  > 0 there exists δ = δ(ε) > 0 such that for any solution x ¯(t) of (4.1) with |¯ x(t0 ) − x(t0 )| < δ the inequality |¯ x(t) − x(t)| <  holds for all t ≥ t0 .

x(t) is said to be asymptotically stable if it is stable and

(ii) |¯ x(t) − x(t)| → 0 as t → ∞ whenever |¯ x(t0 ) − x(t0 )| is sufficiently small.

A solution x(t) is said to be unstable if it is not stable, i.e., there exist ε > 0 and sequences {tm } and {ξm }, ξm → ξ0 , as m → ∞ where ξ0 = x(t0 ), such that |ϕ(t0 + tm , t0 , ξm ) − x(t0 + tm )| ≥ ε.

77

78

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Example 4.1.1 The constant solution x(t) ≡ ξ of the equation x0 = 0 is stable, but it is not asymptotically stable. Example 4.1.2 The solutions of the equation x0 = a(t)x are  Z t a(s)ds . x(t) = ξ1 exp 0

The solution x(t) ≡ 0 is asymptotically stable if and only if Z t a(s)ds → −∞ as t → ∞. 0



0





 x1 . x2     x1 (t) 0 Then, (0, 0) is a saddle point. Hence ≡ is unstable, even x2 (t) 0 though there exist points, namely, the points (ξ1 , ξ2 ) on stable manifold such that ϕ (t, (ξ1 , ξ2 )) → (0, 0) as t → ∞.

Example 4.1.3

x1 x2

=

0 1 1 0

  v(t) Example 4.1.4 Consider the equation v 00 (t)+t sin v(t) = 0. Then, v 0 (t)       π v(t) 0 ≡ is unstable and ≡ is asymptotically stable. (Exer0 0 v (t) 0 cise 4.1.) Example 4.1.5 For the equation x00 + x = 0, the null solution   0 is stable, but not asymptotically stable. 0



x(t) x0 (t)

 ≡

Remark 4.1.1 For applications, we mostly deal with autonomous systems x0 = f (x) or periodic systems x0 = f (t, x) where f is periodic in t. For autonomous systems x0 = f (x), f : Ω ⊆ Rn → Rn , we consider the stability of the following specific solutions, namely (i) Equilibrium solutions x(t) ≡ x∗ , f (x∗ ) = 0. (ii) Periodic solutions whose existence usually follows from Poincar´eBendixson theorem for n = 2 (see Chapter 6) or Brouwer fixed point Theorem for n ≥ 3 (see Chapter 8). We shall discuss the “orbital stability” for a periodic orbit of x0 = f (x) in Section 4.4.

STABILITY OF NONLINEAR SYSTEMS

4.2

79

Linearization

Stability of linear systems: The stability of a solution x(t) of the linear system x0 = A(t)x is equivalent to the stability of the null solution x(t) ≡ 0. Consider the linear system x0 = Ax, where A is an n × n constant matrix. From Chapter 3, we have the following: (i) If all eigenvalues of A have negative real parts, then x(t) ≡ 0 is asymptotically stable. Moreover, there are positive constants K and α such that k eAt x0 k≤ Ke−αt k x0 k for all t ≥ 0, x0 ∈ Rn . (ii) x(t) ≡ 0 is stable iff all eigenvalues of A have nonpositive real parts and those with zero real parts have simple elementary divisors. (iii) x(t) ≡ 0 is unstable iff there exists an eigenvalue with positive real part or those with zero real part have no simple elementary divisors. There are two ways to justify the stability of equilibrium solutions of nonlinear autonomous system x0 = f (x), namely, linearization method and Lyapunov method. In this chapter we discuss the method of linearization and we shall discuss the Lyapunov method in the next chapter. Given a nonlinear autonomous system x0 = f (x), with initial condition x(0) = x0 , it is very difficult to study the global asymptotic behavior of the solution ϕ(t, x0 ). As a first step, we find all equilibria of the system, i.e., solving the nonlinear system f (x) = 0. Then, we check the stability property of each equilibrium x∗ . With these stability information, we may predict the behavior of the solution ϕ(t, x0 ). Let x∗ be an equilibrium of autonomous system x0 = f (x). (4.2) ∗ The Taylor series expansion of f (x) around x = x has the form f (x) = f (x∗ ) + A(x − x∗ ) + g(x − x∗ ) = A(x − x∗ ) + g(x − x∗ ), where     A = Dx f (x ) =    ∗

∂f1 ∂x1

, ··· ,

.. . .. .

∂fn ∂x1

, ··· ,

∂f1 ∂xn

∂fn ∂xn

       x=x∗

,

80

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

and g(y) = o(|y|) as y → 0, i.e. lim

y→0

|g(y)| = 0. |y|

The Jacobian Dx f (x∗ ) is called the variational matrix of (4.2) evaluated at x = x∗ . Let y = x − x∗ , the linear system y 0 = Dx f (x∗ )y

(4.3)

is said to be the linearized system of (4.2) about the equilibrium x∗ . The stability property of the linear system (4.3) is called the linearized stability of the equilibrium x∗ . The stability property of the equilibrium x∗ of (4.2) is called the nonlinear stability of x∗ . Does the linearized stability of x∗ imply the nonlinear stability of x∗ ? From the following examples, in general, the answer is “No”. Example 4.2.1 Consider the system   dx1 2 2 dt = x2 − x1 x1 + x2 , dx2 2 2 dt = −x1 − x2 x1 + x2 . (0, 0) is an equilibrium. The variational matrix A evaluated at (0, 0) is      − x21 + x22 − 2x21 1 − 2x1x2 0 1 = −1 − 2x1 x2 − x21 + x22 − 2x22 x1 = 0 −1 0 x2 = 0

whose eigenvalues are ±i. Hence (0, 0) is a center for the linearized system x0 = Ax. However 2 dx2 dx1 + x2 = − x21 + x22 x1 dt dt or 2 1 d 2  r (t) = − r2 (t) , r2 (t) = x21 (t) + x22 (t). 2 dt Then r2 (t) =

c , 1 + 2ct

c = x21 (0) + x22 (0)

and r(t) → 0 as t → ∞. It follows that (0, 0) is asymptotically stable.

STABILITY OF NONLINEAR SYSTEMS

81

Example 4.2.2 For the system   dx1 2 2 dt = x2 + x1 + x2 x 1 , dx2 2 2 = −x + x + x 1 1 2 x2 , dt and the equilibrium (0, 0), the linearized system is   0 1 x0 = x, with (0, 0) being a center. −1 0 However 2 d 2  r (t) = 2 r2 (t) , dt or r2 (t) =

r2 (0) . 1 − 2r2 (0)t

The equilibrium (0, 0) is unstable and the solution blows up in finite time. In the following Theorem 4.2.1 we show that if all eigenvalues of Dx f (x∗ ) have negative real parts, then linearized stability implies nonlinear stability. Theorem 4.2.1 Let x∗ be an equilibrium of the autonomous system (4.2). If the variational matrix Dx f (x∗ ) has all eigenvalues with negative real parts, then the equilibrium x∗ is asymptotically stable. Proof. Let A = Dx f (x∗ ) and y = x − x∗ , then we have y 0 = f (x) = f (y + x∗ ) = Ay + g(y), where g(y) = o(|y|) as y → 0, g(0) = 0.

(4.4)

Since Reλ(A) < 0, there exists M > 0, σ > 0 such that |eAt | ≤ M e−σt as t ≥ 0.

(4.5)

From (4.5) and the variation of constant formula Z t A(t−t0 ) y(t) = e y(t0 ) + eA(t−s) g(y(s))ds, t0

it follows that |y(t)| ≤ M |y(t0 )|e−σ(t−t0 ) + M

Z

t

t0

e−σ(t−s) |g(y(s))|ds.

(4.6)

82

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

From (4.4), given ε, 0 < ε < σ, there exists δ > 0 such that ε |y| for |y| < δ. (4.7) |g(y)| < M  δ δ Let 0 < γ < min 2M , 2 . We claim that if |y(t0 )| < γ then |y(t)| < δ and |y(t)| ≤ 2δ e−(σ−)(t−t0 ) for all t ≥ t0 . If |y(t)| < 2δ for t0 ≤ t < t∗ and |y(t∗ )| = 2δ , then from (4.7), for t0 ≤ t < t∗ we have Z t e−σ(t−s) |y(s)|ds. |y(t)| ≤ M |y(t0 )|e−σ(t−t0 ) +  t0 σt

Applying Gronwall’s inequality to e |y(t)| yields |y(t)| ≤ M |y(t0 )|e−(σ−)(t−t0 )
0, y(0) > 0. We may assume c = 1 if we do the scaling y → cy. Let m > d. Then we have three equilibria: (0, 0), (K, 0) and (x∗ , y ∗ ) where x∗ = ma −1 > 0, y ∗ > 0. The interior equilibrium (x∗ , y ∗ ) exists if x∗ < K. (d) Assume 0 < x∗ < K. Then the variational matrix is  # " ma mx γ 1 − 2x K − (a+x)2 y  − a+x  . A(x, y) = ma mx (a+x)2 y a+x − d At E0 = (0, 0) 

 γ 0 A(0, 0) = . 0 −d E0 is a saddle point for γ > 0, −d < 0.

84

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

At E1 = (K, 0), 

mK −γ − a+K

 .

A(K, 0) =  0

mK a+K

−d



E1 is a saddle point since x < K. At E ∗ = (x∗ , y ∗ ),   γ 1−  A(x∗ , y ∗ ) = 

2x∗ K





ma ∗ (a+x∗ )2 y

ma ∗ (a+x∗ )2 y ∗ ∗



mx − a+x ∗

  .

0

The characteristic equation of A(x , y ) is     ma mx∗ ma 2x∗ ∗ − y + · y ∗ = 0. λ2 − λ γ 1 − ∗ 2 K (a + x ) a + x∗ (a + x∗ )2 E ∗ is asymptotically stable iff   2x∗ ma γ 1− y ∗ < 0, − K (a + x∗ )2 or     2x∗ x∗ a γ 1− γ 1− − < 0, K a + x∗ K or K −a < x∗ . 2 If K−a > x∗ , then E ∗ is an unstable spiral. 2 If K−a = x∗ , then the eigenvalues of A(x∗ , y ∗ ) are ±ωi for some ω 6= 0. 2 This is “Hopf Bifurcation” which will be discussed in Section 6.3. < x∗ < K, the interior equilibrium E ∗ is a stable spiral, E0 , For K−a 2 E1 are saddles. From isocline analysis, we predict (x(t), y(t)) → E ∗ as ∗ t → ∞ (see Fig. 4.2). For x∗ < K−a 2 , E is an unstable spiral, E0 and E1 are saddles. It can be shown that the trajectory (x(t), y(t)) approaches a unique limit cycle [Cop] (see Fig. 4.3). Example 4.2.4 Lotka-Volterra two species competition model [Wal]:    x1 dx1  = γ x 1 − − α1 x1 x2 ,  1 1 dt K  1  dx2 x2 dt = γ2 x2 1 − K2 − α2 x1 x2 ,    x1 (0) > 0, x2 (0) > 0,

STABILITY OF NONLINEAR SYSTEMS

Fig. 4.2 K−a < x∗ < K, E ∗ is a sta2 ble spiral.

Fig. 4.3 0 < x∗ < stable spiral.

85

K−a , 2

E ∗ is an un-

where γ1 , γ2 , K1 , K2 , α1 , α2 > 0. Three equilibria, E0 = (0, 0), E1 = (K1 , 0) and E2 = (0, K2 ) always exist. The interior equilibrium E ∗ = (x∗ , y ∗ ) exists in the following cases (iii) and (iv). The variational matrix at E(x, y) is     γ1 x1 − α x − x −α x γ1 1 − K 1 2 1 1 1 K1 1   . A(x, y) =      γ2 x2 −α2 x2 γ2 1 − K2 − α2 x1 − K2 x2 At E0 , 

 γ1 0 A(0, 0) = . 0 γ2 E0 is an unstable node or a repeller for γ1 > 0, γ2 > 0. At E1 = (K1 , 0)  A(K1 , 0) =

 −γ1 −α1 K1 . 0 γ2 − α2 K1

At E2 = (0, K2 ) 

 γ1 − α1 K2 0 A(0, K2 ) = . −α2 K2 −γ2 There are  four cases according to of isoclines L1   the positions  x1 x2 γ1 1 − K1 − α1 x2 = 0 and L2 : γ2 1 − K2 − α2 x1 = 0.

:

(i) αγ22 > K1 , αγ11 < K2 (see Fig. 4.4). In this case, E2 = (0, K2 ) is a stable node, E1 = (K1 , 0) is a saddle point and E0 = (0, 0) is a unstable node. We may predict that (x1 (t), x2 (t)) → (0, K2 ) as t → ∞.

86

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Fig. 4.4

(ii)

γ2 α2

< K1 ,

γ1 α1

> K2 (see Fig. 4.5).

Fig. 4.5

In this case E1 = (K1 , 0) is a stable node, E2 = (0, K2 ) is a saddle point, and E0 = (0, 0) is an unstable node. We may predict that (x1 (t), x2 (t)) → (K1 , 0) as t → ∞. (iii) αγ11 > K2 , αγ22 > K1 (see Fig. 4.6). In this case, E1 = (K1 , 0) and E2 = (0, K2 ) are saddle points, and E0 = (0, 0) is an unstable node. The variational matrix at E ∗ is   A(x∗ , y ∗ ) = 

γ1 ∗ −K x −α1 x∗1 1 1

−α2 x∗2

γ2 ∗ −K x 2 2

  .

STABILITY OF NONLINEAR SYSTEMS

87

Fig. 4.6

The characteristic polynomial of A(x∗ , y ∗ ) is λ2 +



   γ2 ∗ γ1 γ2 γ1 ∗ x + x2 λ + x∗1 x∗2 − α1 α2 = 0. K1 K2 K1 K2

Since αγ22 > K1 , αγ11 > K2 , it follows that E ∗ = (x∗ , y ∗ ) is a stable node. We may predict the solution (x1 (t), x2 (t)) → (x∗1 , x∗2 ) as t → ∞. (iv)

γ1 α1

< K2 ,

γ2 α2

< K1 (see Fig. 4.7).

Fig. 4.7

In this case E1 = (K1 , 0) and E2 = (0, K2 ) are stable nodes. E0 = (0, 0) is an unstable node. From K1 > αγ22 , K2 > αγ11 , it follows that E ∗ = (x∗ , y ∗ ) is a saddle point. This is a well-known example of “bistability”.

88

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

4.3

Saddle Point Property

In this section, we discuss the existence of stable and unstable manifolds for a hyperbolic equilibrium of an autonomous system and prove that they are submanifolds with the same smoothness properties as the vector field. Consider (4.2) with x∗ as a hyperbolic equilibrium and A = Dx f (x∗ ). Let y = x − x∗ then we have y 0 = Ay + g(y), where g(0) = 0, Dg(0) = 0,

g(y) = o(|y|) as y → 0.

Consider the differential equation x0 = Ax + g(x)

(4.8)

where the equilibrium 0 is hyperbolic and g : Rn → Rn is a Lipschitz continuous function satisfying g(0) = 0 |g(x) − g(y)| ≤ ρ(δ)|x − y| if |x|, |y| ≤ δ

(4.9)

where ρ : [0, ∞] → [0, ∞) is continuous with ρ(0) = 0. We note that (4.9) implies g(x) = o(|x|) as x → 0. For any x ∈ Rn , let ϕt (x) be the solution of (4.8) through x. The unstable set W u (0) and the stable set W s (0) of 0 are defined as W u (0) = {x ∈ Rn : ϕt (x) is defined for t ≤ 0 and ϕt (x) → 0 as t → −∞}, W s (0) = {x ∈ Rn : ϕt (x) is defined for t ≥ 0 and ϕt (x) → 0 as t → ∞}. u s The local unstable set Wloc (0) and the local stable set Wloc (0) of 0 corresponding to a neighborhood U of 0 are defined by u Wloc (0) ≡ W u (0, U ) = {x ∈ W u (0) : ϕt (x) ∈ U, t ≤ 0}, s Wloc (0) ≡ W s (0, U ) = {x ∈ W s (0) : ϕt (x) ∈ U, t ≥ 0}.

Example 4.3.1 Consider case (iv) of Lotka-Volterra two species competition model in Example 4.2.4   x1 0 − α1 x1 x2 , x1 = γ1 x1 1 − K1   x2 x02 = γ2 x2 1 − − α2 x1 x2 . K2

STABILITY OF NONLINEAR SYSTEMS

89

Fig. 4.8

If αγ11 < K2 , αγ22 < K1 then x∗ = (x∗1 , x∗2 ) is saddle point with onedimensional stable set W s (x∗ ) (see Fig. 4.8) and one-dimensional unstable set W u (x∗ ). Example 4.3.2 Consider the following nonlinear equation  0 x1 = −x1 , x02 = x2 + x21 . The equilibrium (0, 0) is a saddle since the linearized equation is  0 x1 = −x1 , x02 = x2 .

Fig. 4.9

90

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Integrating the above nonlinear equations, we obtain x1 (t) = e−t x01 ,   2 1 1 0 2 t 0 x2 (t) = e x2 + (x1 ) − e−2t x01 . 3 3 From these formulas, we see that   W u (0) = x0 = x01 , x02 : x01 = 0 ,    1 0 2 s 0 0 0 0 W (0) = x = x1 , x2 : x2 = − x1 , 3 (see Fig. 4.9). Consider the linear system x0 = Ax

(4.10)

where A ∈ Rn×n has k eigenvalues with negative real parts and n − k eigenvalues with positive real parts. From the real Jordan form (see Exercise 3.16), there exists a nonsingular matrix U = [u1 · · · uk , uk+1 · · · un ] ∈ Rn×n such that   A− 0 , (4.11) U −1 AU = 0 A+ where A− ∈ Rk×k with Reλ(A− ) < 0 and A+ ∈ R(n−k)×(n−k) with Reλ(−A+ ) < 0. Then, there are constants K1 > 0, α > 0 such that |eA+ t | ≤ K1 eαt , t ≤ 0, |eA− t | ≤ K1 e−αt , t ≥ 0.

(4.12)

From (4.11), it follows that  A[u1 · · · uk , uk+1 · · · un ] = U

A− 0

0 A+

 .

Hence, for the linear system (4.10), 0 is a saddle point with E s = spanhu1 · · · uk i, E u = spanhuk+1 · · · un i,

STABILITY OF NONLINEAR SYSTEMS

91

where u1 · · · uk (or uk+1 · · · un ) are eigenvectors or generalized eigenvectors associated with eigenvalues with negative (or positive) real parts. The stable set E s and the unstable set E u are invariant under A. Define the projections P : Rn → E u

and Q : Rn → E s

with n X

P

! αi ui

i=1

Q

n X

n X

=

αi ui ,

i=k+1

! αi ui

=

i=1 s

k X

αi ui .

i=1

Then P Rn = E u , QRn = E and P Ax = P A

n X

! αi ui

n X

=P

i=1

=

n X

αi Aui = A

i=k+1

! αi Aui

i=1 n X

! αi ui

= AP x

k+1

i.e. P A = AP . L Similarly we have QA = AQ. Obviously Rn = P Rn QRn . For any  0 x ∈ Rn , P x = U y for some y = , v ∈ Rn−k . From (4.11), we have v  A− t    e 0 0 At At e Px = e Uy = U y=U . eA + t v 0 eA + t Then, from (4.12) it follows that |eAt P x| ≤ k U k |eA+ t ||v| = k U k |eA+ t ||U −1 P x| ≤ k U k K1 eαt k U −1 k |P x|, t ≤ 0. Hence we have |eAt P | ≤ Keαt ,

t ≤ 0.

(4.13)

Similarly we have |eAt Q| ≤ Ke−αt ,

t ≥ 0.

(4.14) Q.E.D.

92

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

The following is a basic lemma. Lemma 4.3.1 If x(t), t ≤ 0 is a bounded solution of (4.8), then x(t) satisfies the integral equation Rt Rt y(t) = eAt P y(0) + 0 eA(t−s) P g(y(s))ds + −∞ eA(t−s) Qg(y(s))ds. (4.15) If x(t), t ≥ 0 is a bounded solution of (4.8) then x(t) satisfies the integral equation Z t Z ∞ At A(t−s) y(t) = e Qy(0) + e Qg(y(s))ds − eA(t−s) P g(y(s))ds. (4.16) 0

t

Conversely, if y(t), t ≤ 0 (or t ≥ 0) is a bounded solution of (4.15) (or (4.16)), then y(t) satisfies (4.8). Proof. Let y(t) = x(t), t ≤ 0, be a bounded solution of (4.8). Since P A = AP, QA = AQ, from the variation of constant formula, for any τ ∈ (−∞, 0], we have Z t Qy(t) = eA(t−τ ) Qy(τ ) + eA(t−s) Qg(y(s))ds. τ

From (4.14) and the assumption that y(s) is bounded for s ≤ 0, let τ → −∞, we obtain Z t Qy(t) = eA(t−s) Qg(y(s))ds. −∞

Since P y(t) = eAt P y(0) +

Z

t

eA(t−s) P g(y(s))ds

0

then from y(t) = P y(t) + Qy(t), we obtain (4.15). The proof for the case that x(t) ≥ 0, t ≥ 0, is bounded is similar. The converse statement is proved by direct computation (Exercise 4.2).

Theorem 4.3.1 (Stable Manifold Theorem). If g satisfies (4.9) and Re(σ(A)) ≠ 0, then there is a neighborhood U of 0 in Rⁿ such that Wᵘ(0, U) (or Wˢ(0, U)) is a Lipschitz graph over Eᵘ := P Rⁿ (or Eˢ := Q Rⁿ) which is tangent to Eᵘ (or Eˢ) at 0. Also, there are positive constants K₁, α₁ such that if x ∈ Wᵘ(0, U) (resp. Wˢ(0, U)), then the solution ϕₜ(x) of (4.8) satisfies

  |ϕₜ(x)| ≤ K₁ e^{−α₁|t|} |x|,  t ≤ 0 (resp. t ≥ 0).

STABILITY OF NONLINEAR SYSTEMS

93

Furthermore, if g is a Cᵏ function (or an analytic function) in a neighborhood U of 0, then so are Wᵘ(0, U) and Wˢ(0, U).

The local stable and unstable manifolds can be extended to global stable and unstable manifolds. Define

  Wˢ(0) = the global stable manifold of 0 = ⋃_{t≤0} {ϕₜ(x) : x ∈ Wˢ_loc(0)}

and

  Wᵘ(0) = the global unstable manifold of 0 = ⋃_{t≥0} {ϕₜ(x) : x ∈ Wᵘ_loc(0)}.

We defer the proof of the Stable Manifold Theorem to Appendix B. We note that Eᵘ := P Rⁿ and Eˢ := Q Rⁿ are the unstable and stable manifolds of the equilibrium 0 of the linear system x′ = Ax.
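As a concrete illustration of these manifolds (a toy system of my own choosing, not from the text), consider the planar system x′ = x, y′ = −y + x². Here Eᵘ is the x-axis, Eˢ is the y-axis, the global stable manifold is the y-axis itself, and the global unstable manifold is the graph y = x²/3, tangent to Eᵘ at 0. The sketch below checks the invariance of this graph and the backward decay of a point on Wᵘ(0):

```python
# Toy system (not from the text): x' = x, y' = -y + x^2.
# W^u(0) is the parabola y = x^2/3, tangent to E^u (the x-axis) at 0.

def f(x, y):
    return x, -y + x*x

def h(x):          # candidate graph of W^u over E^u
    return x*x/3.0

# Invariance check: along y = h(x) the vector field is tangent to the
# graph, i.e. h'(x) * f1 = f2.
for x in [-1.0, -0.3, 0.2, 0.7]:
    f1, f2 = f(x, h(x))
    assert abs((2*x/3.0)*f1 - f2) < 1e-12

# A point on W^u(0) decays to the equilibrium backward in time (RK4).
x, y, dt = 0.5, h(0.5), -1e-3
for _ in range(10000):          # integrate down to t = -10
    k1 = f(x, y)
    k2 = f(x + dt/2*k1[0], y + dt/2*k1[1])
    k3 = f(x + dt/2*k2[0], y + dt/2*k2[1])
    k4 = f(x + dt*k3[0], y + dt*k3[1])
    x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    y += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

assert abs(x) < 1e-4 and abs(y) < 1e-4   # |phi_t| ~ e^{-|t|} -> 0
```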

Fig. 4.10


4.4 Orbital Stability

Let p(t) be a periodic solution of period T of the autonomous system

  x′ = f(x),  f : Ω ⊆ Rⁿ → Rⁿ.   (4.17)

It is inappropriate to consider the notion of asymptotic stability for the periodic solution p(t). For example, consider ϕ(t) = p(t + τ) with τ > 0 small. Then ϕ(t) is also a solution of (4.17), but |ϕ(t) − p(t)| cannot tend to 0 as t → ∞, no matter how small τ is. Hence we consider instead the periodic orbit γ = {p(t) ∈ Rⁿ : 0 ≤ t < T} and the concept of "orbital stability" of the periodic orbit γ.

Definition 4.4.1 We say that a periodic orbit γ is orbitally stable if given ε > 0 there exists δ > 0 such that dist(ϕₜ(x₀), γ) < ε for t ≥ 0, provided dist(x₀, γ) < δ. If in addition dist(ϕₜ(x₀), γ) → 0 as t → ∞ for dist(x₀, γ) sufficiently small, then we say that γ is orbitally asymptotically stable. γ is orbitally unstable if γ is not orbitally stable.

The next question is how to verify the orbital stability of γ other than by the definition.

Linearization about the periodic orbit: Let y = x − p(t). Then

  y′(t) = x′(t) − p′(t) = f(x(t)) − f(p(t)) = f(y + p(t)) − f(p(t))   (4.18)
        = D_x f(p(t)) y + h(t, y),

where h(t, y) = o(|y|) as y → 0, for all t. The linearized equation of (4.17) about p(t) is

  y′ = D_x f(p(t)) y = A(t) y,  A(t + T) = A(t).   (4.19)


We shall relate the characteristic multipliers of the periodic linear system (4.19) to the orbital stability of γ. Let Φ(t) be the fundamental matrix of (4.19) with Φ(0) = I. Since p(t) is a solution of (4.17), we have p′(t) = f(p(t)) and

  (d/dt) p′(t) = (d/dt) f(p(t)) = D_x f(p(t)) p′(t) = A(t) p′(t).

Hence p′(t) is a solution of (4.19), and it follows that p′(t) = Φ(t) p′(0). Setting t = T yields p′(0) = p′(T) = Φ(T) p′(0). Thus 1 is an eigenvalue of Φ(T), or equivalently 1 is a characteristic multiplier of (4.19). Assume µ₁, µ₂, ..., µₙ are the characteristic multipliers of the T-periodic system (4.19) with µ₁ = 1. Then from Liouville's formula

  µ₁ µ₂ ⋯ µₙ = µ₂ ⋯ µₙ = det Φ(T) = det Φ(0) · exp( ∫₀ᵀ tr(A(s)) ds ) = exp( ∫₀ᵀ tr(A(s)) ds ).

We note that

  A(t) = D_x f(p(t)) = ( ∂fᵢ/∂xⱼ )|_{x=p(t)}.

Hence

  µ₂ ⋯ µₙ = exp( ∫₀ᵀ [ ∂f₁/∂x₁(p(t)) + ⋯ + ∂fₙ/∂xₙ(p(t)) ] dt ) = exp( ∫₀ᵀ div(f(p(t))) dt ).
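This formula is easy to verify numerically. For the system of Exercise 4.13, x₁′ = −x₂ + x₁(1 − r²), x₂′ = x₁ + x₂(1 − r²), the explicit periodic orbit is p(t) = (cos t, sin t) with T = 2π, and div f = 2 − 4r² = −2 on the orbit, so the nontrivial multiplier should be e^{−4π}. The check below (my own illustration, not part of the text) integrates the variational equation Φ′ = A(t)Φ with RK4 and compares det Φ(T) and tr Φ(T) with the predicted multipliers {1, e^{−4π}}:

```python
import math

# Exercise 4.13 system has the periodic solution p(t) = (cos t, sin t).
# Along p(t), div f = -2, so by Liouville mu_1 * mu_2 = exp(-4*pi).

def A(t):
    c, s = math.cos(t), math.sin(t)
    # Jacobian of f at (x1, x2) = (c, s), using c^2 + s^2 = 1
    return [[1 - 3*c*c - s*s, -1 - 2*c*s],
            [1 - 2*c*s,        1 - c*c - 3*s*s]]

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rhs(t, Phi):                      # variational equation Phi' = A(t) Phi
    return matmul(A(t), Phi)

add = lambda M, N, a: [[M[i][j] + a*N[i][j] for j in range(2)] for i in range(2)]

Phi = [[1.0, 0.0], [0.0, 1.0]]        # Phi(0) = I
n = 20000
h = 2*math.pi / n
t = 0.0
for _ in range(n):                    # RK4 over one period
    k1 = rhs(t, Phi)
    k2 = rhs(t + h/2, add(Phi, k1, h/2))
    k3 = rhs(t + h/2, add(Phi, k2, h/2))
    k4 = rhs(t + h, add(Phi, k3, h))
    Phi = add(add(add(add(Phi, k1, h/6), k2, h/3), k3, h/3), k4, h/6)
    t += h

det = Phi[0][0]*Phi[1][1] - Phi[0][1]*Phi[1][0]
tr = Phi[0][0] + Phi[1][1]
mu2 = math.exp(-4*math.pi)
# the multipliers (eigenvalues of Phi(T)) are 1 and e^{-4*pi}
assert abs(det - mu2) < 1e-8
assert abs(tr - (1 + mu2)) < 1e-6
```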

Definition 4.4.2 We say that γ is orbitally stable with asymptotic phase if for any nearby solution ϕ(t) we have |ϕ(t) − p(t + t₀)| → 0 as t → ∞ for some 0 ≤ t₀ < T.


Now we state the main result of this section.

Theorem 4.4.1 If the (n − 1) characteristic multipliers µ₂, ..., µₙ satisfy |µᵢ| < 1, i = 2, ..., n, then γ is orbitally asymptotically stable, and every nearby solution ϕ(t) of (4.17) possesses an asymptotic phase with |ϕ(t) − p(t + t₀)| ≤ L e^{−αt}, L, α > 0, for some t₀, 0 ≤ t₀ < T.

Corollary 4.4.1 For n = 2, if the periodic solution p(t) satisfies

  ∫₀ᵀ [ ∂f₁/∂x₁(p(t)) + ∂f₂/∂x₂(p(t)) ] dt < 0,   (4.20)

then γ is orbitally stable with asymptotic phase.

Remark 4.4.1 Even for the case n = 2 it is difficult to verify (4.20), since the integral depends on how much we know about the location of the periodic orbit.

Before we prove Theorem 4.4.1, where the Poincaré map will be introduced, we study several properties of a nonlinear map f : R^N → R^N and consider the difference equation

  x_{n+1} = f(xₙ).   (4.21)

Definition 4.4.3 We say that x̄ is an equilibrium or a fixed point of (4.21) if x̄ = f(x̄).

Definition 4.4.4 We say that x̄ is stable if for any ε > 0 there exists δ > 0 such that for any x₀ with |x₀ − x̄| < δ we have |xₙ − x̄| < ε for all n ≥ 0. If in addition xₙ → x̄ as n → ∞ for x₀ sufficiently close to x̄, then we say that x̄ is asymptotically stable. We say that x̄ is unstable if x̄ is not stable.

Linearization of the map:

  x_{n+1} − x̄ = f(xₙ) − f(x̄) = D_x f(x̄)(xₙ − x̄) + g(xₙ − x̄),   (4.22)

where g(y) satisfies

  |g(y)|/|y| → 0 as y → 0.   (4.23)


Let yₙ = xₙ − x̄. The linearized equation of (4.21) about x̄ is

  y_{n+1} = A yₙ,  A = D_x f(x̄).   (4.24)

From (4.24) we have yₙ = Aⁿ y₀. If ‖A‖ < 1 for some norm ‖·‖, then

  |yₙ| ≤ ‖Aⁿ‖ |y₀| ≤ (‖A‖)ⁿ |y₀| → 0;

equivalently, if |λ| < 1 for all λ ∈ σ(A), then yₙ → 0 as n → ∞.

Lemma 4.4.1 If all eigenvalues λ of D_x f(x̄) satisfy |λ| < 1, then the fixed point x̄ is asymptotically stable. Furthermore, if ‖x₀ − x̄‖ is sufficiently small, then

  ‖xₙ − x̄‖ ≤ c e^{−αn} ‖x₀ − x̄‖  for some c, α > 0, for all n ≥ 0.
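A quick numerical illustration of Lemma 4.4.1 (a made-up planar map, not from the text): the Jacobian at the fixed point 0 has eigenvalues 0.5 and 0.3, both of modulus less than 1, and the iterates indeed decay geometrically at a rate close to the spectral radius.

```python
import math

# Illustrative map (not from the text) with fixed point (0, 0):
# f(x, y) = (0.5 x + 0.1 y + x^2, 0.3 y - x y).
# D_x f(0, 0) = [[0.5, 0.1], [0, 0.3]] has eigenvalues 0.5 and 0.3.

def f(x, y):
    return 0.5*x + 0.1*y + x*x, 0.3*y - x*y

x, y = 0.05, -0.04
norms = []
for n in range(30):
    norms.append(math.hypot(x, y))
    x, y = f(x, y)

# geometric decay, consistent with ||x_n|| <= c e^{-alpha n} ||x_0||
assert norms[-1] < 1e-8
assert all(norms[k+1] < 0.7 * norms[k] for k in range(len(norms)-1))
```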

Proof. (Exercise 4.14.)

Now let us consider (4.17) and its periodic solution p(t) with period T and periodic orbit γ. Take a local cross section Σ ⊆ Rⁿ of dimension n − 1 intersecting γ. The hypersurface Σ need not be planar, but it must be chosen in such a way that the flow of (4.17) is transversal to Σ, i.e. f(x)·n(x) ≠ 0 for all x ∈ Σ, where n(x) is the normal vector to Σ at x; equivalently, the vector field f(x) is not tangent to Σ at x. Without loss of generality, we assume the periodic orbit γ intersects Σ at the point p* where p* = p(0). Let U ⊆ Σ be a small neighborhood of p*. Define the first return map (also called the Poincaré map) P : U → Σ by

  P(ξ) = ϕ_τ(ξ) ∈ Σ,

where τ = τ(ξ) is the first time the trajectory {ϕₜ(ξ)}_{t≥0} returns to Σ. Obviously, p* is a fixed point of P. The stability of the fixed point p* will reflect the orbital stability of the periodic orbit γ. We shall show that D_x P(p*) has eigenvalues λ₁, ..., λ_{n−1} which are exactly the characteristic multipliers µ₂, ..., µₙ of (4.19). Without loss of generality, we assume p* = 0. Then p(0) = 0 and p′(0) = f(0). Let Π be the hyperplane {x ∈ Rⁿ : x·f(0) = 0} and ϕ(t, ξ) be the solution of (4.17) with ϕ(0, ξ) = ξ.

Lemma 4.4.2 For ξ ∈ Rⁿ with |ξ| sufficiently small, there exists a unique real-valued C¹ function t = τ(ξ), the first return time, with τ(0) = T and ϕ(τ(ξ), ξ) ∈ Π.


Proof. Let F(t, ξ) = ϕ(t, ξ)·f(0) and consider the equation F(t, ξ) = 0. Since F(T, 0) = ϕ(T, 0)·f(0) = 0 and

  ∂F/∂t |_{t=T, ξ=0} = f(ϕ(T, 0))·f(0) = |f(0)|² > 0,

from the Implicit Function Theorem there exists a neighborhood of ξ = 0 in which t can be expressed as a C¹ function of ξ, t = τ(ξ), with T = τ(0) and F(τ(ξ), ξ) = 0. Then ϕ(τ(ξ), ξ)·f(0) = 0 and it follows that ϕ(τ(ξ), ξ) ∈ Π.

Lemma 4.4.3 The eigenvalues λ₁, ..., λ_{n−1} of DP(p*) are exactly the characteristic multipliers µ₂, ..., µₙ of (4.19).

Proof. The first return map P : U ⊆ Π → Π satisfies

  P(ξ) = ϕ(τ(ξ), ξ),  P(0) = ϕ(T, 0) = p(T) = 0.   (4.25)

Without loss of generality we may assume f(0) = (0, ..., 0, 1)ᵀ (why?) and hence Π = {x = (x₁, ..., xₙ) : xₙ = 0}. Since

  (d/dt) ϕ(t, ξ) = f(ϕ(t, ξ)),  ϕ(0, ξ) = ξ,   (4.26)

differentiating (4.26) with respect to ξ yields

  Φ′(t, ξ) = D_x f(ϕ(t, ξ)) Φ(t, ξ),  Φ(0, ξ) = I,   (4.27)

where Φ(t, ξ) = ϕ_ξ(t, ξ). Setting ξ = 0 in (4.27) yields

  Φ′(t, 0) = D_x f(p(t)) Φ(t, 0),  Φ(0, 0) = I.

Hence Φ(t, 0) is the fundamental matrix of (4.19) with Φ(0, 0) = I, and the characteristic multipliers µ₁ = 1, µ₂, ..., µₙ of (4.19) are the eigenvalues of Φ(T, 0). Since p′(0) = f(0) and p′(0) = Φ(T, 0) p′(0), it follows that f(0) = Φ(T, 0) f(0). From f(0) = (0, ..., 0, 1)ᵀ, it follows that the last


column of Φ(T, 0) is (0, ..., 0, 1)ᵀ. Now we consider the Poincaré map P in (4.25) and compute DP(0). Differentiating (4.25) with respect to ξ, we obtain

  DP(ξ) = ϕ′(τ(ξ), ξ) (d/dξ)τ(ξ) + ϕ_ξ(τ(ξ), ξ).   (4.28)

Setting ξ = 0 in (4.28), we have

  DP(0) = ϕ′(T, 0) (dτ/dξ)|_{ξ=0} + Φ(T, 0)
        = f(0) (dτ/dξ)|_{ξ=0} + Φ(T, 0)
        = (0, ..., 0, 1)ᵀ ( ∂τ/∂ξ₁(0), ..., ∂τ/∂ξₙ(0) ) + Φ(T, 0).

The outer product f(0)(dτ/dξ)|_{ξ=0} has all rows zero except the last, which is (∂τ/∂ξ₁(0), ..., ∂τ/∂ξₙ(0)), while Φ(T, 0) has last column (0, ..., 0, 1)ᵀ. In the sum the last column vanishes (shifting ξ along the flow direction f(0) does not change the return point), so DP(0) has the block form

  DP(0) = [ B, 0 ; *, 0 ],

where B is the upper-left (n−1)×(n−1) block of Φ(T, 0). Since the eigenvalues of Φ(T, 0) are those of B together with 1, the eigenvalues of DP(0) restricted to Π are exactly the characteristic multipliers µ₂, ..., µₙ of (4.19).

Proof of Theorem 4.4.1: [Ha] Let p(0) = 0 and P : Π ∩ B(0, δ) → Π be the Poincaré map. From the assumption |µᵢ| < 1, i = 2, ..., n, and Lemma 4.4.3, it follows that 0 is an asymptotically stable fixed point of the first return map P on U ⊆ Π. Thus from Lemma 4.4.1, if ‖ξ₀‖ is sufficiently small, then ‖ξₙ‖ ≤ e^{−nα} ‖ξ₀‖ → 0 as n → ∞ for some α > 0, where ξₙ = Pⁿ ξ₀. From the continuous dependence on initial data, given ε > 0 there exists δ = δ(ε) > 0 such that if dist(x₀, γ) < δ then there exists τ⁰ = τ⁰(x₀), where ϕ(t, x₀) exists for 0 ≤ t ≤ τ⁰, ϕ(τ⁰, x₀) ∈ Π and |ϕ(τ⁰, x₀)| < ε.


Fig. 4.11

Let ξ₀ = ϕ(τ⁰, x₀) ∈ Π and

  τ¹ = τ⁰ + τ(ξ₀) such that ξ₁ = ϕ(τ(ξ₀), ξ₀) ∈ Π,
  ⋮
  τⁿ = τⁿ⁻¹ + τ(ξ_{n−1}) such that ξₙ = ϕ(τ(ξ_{n−1}), ξ_{n−1}) ∈ Π,

i.e., τᵏ is the travelling time from x₀ to ξₖ (see Fig. 4.11). We claim that lim_{n→∞} τⁿ/(nT) = 1 and there exists t₀ ∈ R such that t₀ = lim_{n→∞}(τⁿ − nT) and

  |τⁿ − (nT + t₀)| ≤ L₁ e^{−αn} ‖ξ₀‖.   (4.29)

We first show that {τⁿ − nT} is a Cauchy sequence. Consider (note that τ is C¹)

  |(τⁿ − nT) − (τⁿ⁻¹ − (n−1)T)| = |τⁿ − τⁿ⁻¹ − T| = |τ(ξ_{n−1}) − T| = |τ(ξ_{n−1}) − τ(0)|
    ≤ sup_{θ∈B(0,δ)} |dτ/dξ(θ)| ‖ξ_{n−1}‖ = L₀ ‖ξ_{n−1}‖ ≤ L₀ e^{−α(n−1)} ‖ξ₀‖.

Hence for m > n we have

  |(τᵐ − mT) − (τⁿ − nT)| ≤ |(τᵐ − mT) − (τᵐ⁻¹ − (m−1)T)| + ⋯ + |(τⁿ⁺¹ − (n+1)T) − (τⁿ − nT)|
    ≤ L₀ ‖ξ₀‖ ( e^{−α(m−1)} + ⋯ + e^{−αn} ) ≤ L₀ ‖ξ₀‖ e^{−αn}/(1 − e^{−α}) < ε if n ≥ N.


Then lim_{n→∞}(τⁿ − nT) exists and equals a number, say t₀, and

  |τⁿ − (nT + t₀)| = |t₀ − (τⁿ − nT)|
    ≤ Σ_{k=n}^{∞} |(τᵏ⁺¹ − (k+1)T) − (τᵏ − kT)| ≤ Σ_{k=n}^{∞} L₀ e^{−α(k−1)} ‖ξ₀‖ = L₁ e^{−αn} ‖ξ₀‖,

with L₁ = L₀ e^{α}/(1 − e^{−α}). Thus we prove the claim (4.29). Next we want to show that |ϕ(t + t₀, x₀) − p(t)| ≤ L e^{−(α/T)t} ‖ξ₀‖. Let 0 ≤ t ≤ T. Then

  |ϕ(t + τⁿ, x₀) − p(t)| = |ϕ(t, ξₙ) − ϕ(t, 0)| ≤ sup_{0≤t≤T, θ∈B(0,δ)} ‖ϕ_ξ(t, θ)‖ |ξₙ| ≤ L₂ |ξₙ| ≤ L₂ e^{−nα} |ξ₀|,

and

  |ϕ(t + τⁿ, x₀) − ϕ(t + nT + t₀, x₀)| ≤ |ϕ′(t̃, x₀)| |(t + τⁿ) − (t + nT + t₀)| ≤ L₃ |τⁿ − (nT + t₀)| ≤ L₃ L₁ e^{−αn} |ξ₀|.

Hence for 0 ≤ t ≤ T, we have

  |ϕ(t + nT + t₀, x₀) − p(t)| ≤ |ϕ(t + nT + t₀, x₀) − ϕ(t + τⁿ, x₀)| + |ϕ(t + τⁿ, x₀) − p(t)|
    ≤ (L₂ + L₃L₁) e^{−αn} ‖ξ₀‖.   (4.30)

For any t, we assume nT ≤ t ≤ (n+1)T for some n. Then 0 ≤ t − nT ≤ T and t/T − 1 ≤ n ≤ t/T. Substituting t in (4.30) by t − nT and using p(t − nT) = p(t), we have

  |ϕ(t + t₀, x₀) − p(t − nT)| = |ϕ(t + t₀, x₀) − p(t)| ≤ (L₂ + L₃L₁) e^{−αn} ‖ξ₀‖ ≤ (L₂ + L₃L₁) e^{−α(t/T − 1)} ‖ξ₀‖.

Hence |ϕ(t + t₀, x₀) − p(t)| ≤ L e^{−(α/T)t} ‖ξ₀‖ for some L > 0. Q.E.D.
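For the planar system of Exercise 4.13 the Poincaré map can be computed almost explicitly: in polar coordinates θ′ = 1, so the return time to the section {θ = 0} is exactly 2π, and P(r₀) = r(2π), where r solves the scalar equation r′ = r(1 − r²). The sketch below (my own numerical check, not part of the text) estimates P′(1) by a finite difference and compares it with the nontrivial multiplier e^{−4π} found above; the step size ε and tolerance are illustrative choices.

```python
import math

# Exercise 4.13 system in polar form: r' = r(1 - r^2), theta' = 1.
# First return map on {theta = 0}: P(r0) = r(2*pi); fixed point r = 1,
# and Lemma 4.4.3 predicts P'(1) = mu_2 = exp(-4*pi).

def P(r0, n=20000):
    h = 2*math.pi / n
    g = lambda r: r*(1 - r*r)
    r = r0
    for _ in range(n):                   # RK4 for r' = r(1 - r^2)
        k1 = g(r)
        k2 = g(r + h/2*k1)
        k3 = g(r + h/2*k2)
        k4 = g(r + h*k3)
        r += h/6*(k1 + 2*k2 + 2*k3 + k4)
    return r

eps = 0.05
deriv = (P(1 + eps) - P(1)) / eps        # finite-difference estimate of P'(1)
mu2 = math.exp(-4*math.pi)

assert abs(P(1) - 1) < 1e-12             # r = 1 is a fixed point of P
assert abs(deriv - mu2) / mu2 < 0.2      # matches the multiplier to ~eps
```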

4.5 Traveling Wave Solutions

In this section we shall apply the stable manifold theorem to obtain the existence of traveling wave solutions for the scalar diffusion-reaction partial differential equation

  u_t = K u_xx + f(u),  K > 0.   (4.31)

A traveling wave solution u(x, t) of the PDE (4.31) is a special solution of the form

  u(x, t) = U(x + ct),  c ∈ R,   (4.32)

where c is called the traveling speed. Let ξ = x + ct and substitute (4.32) into (4.31); we obtain

  K d²U/dξ² − c dU/dξ + f(U) = 0.   (4.33)

In (4.33) there are two unknowns, namely, the wave speed c and U(ξ). We shall study two well-known nonlinear diffusion-reaction equations. One is Fisher's equation, and the other is the "bistable" equation.

Fisher's equation (or KPP equation): In 1936 the famous geneticist and statistician R. A. Fisher proposed a model to describe the spatial distribution of an advantageous gene. Consider a one-locus, two-allele genetic trait in a large randomly mating population. Let u(x, t) denote the frequency of one gene, say A, in the gene pool at position x. If A is dominant and different sites are connected by random migration of the organism, then the gene pool can be described by a diffusion model (for the details of the derivation of the model see [AW]):

  u_t = K u_xx + s u(1 − u),  s > 0.  (Fisher's equation)

We shall consider the following general model:

  u_t = K u_xx + f(u),  f(u) > 0 for 0 < u < 1,  f(0) = f(1) = 0,  f′(0) > f(u)/u for 0 < u < 1.   (4.34)

In 1936, Kolmogoroff, Petrovsky and Piscounov analyzed Fisher's equation by looking for steady progressing waves (4.32) with boundary conditions u(−∞, t) = 0, u(+∞, t) = 1. We shall look for a solution U(ξ) of (4.33) with some speed c and boundary conditions U(−∞) = 0, U(+∞) = 1 (see

Fig. 4.12). Let W = dU/dξ; then (4.33) can be rewritten as the following first order system:

  dU/dξ = W,
  dW/dξ = (c/K) W − f(U)/K,   (4.35)
  (U(−∞), W(−∞)) = (0, 0),  (U(+∞), W(+∞)) = (1, 0).

In (4.35), the system has two equilibria, (0, 0) and (1, 0). The variational matrix of (4.35) is

  M(U, W) = [ 0, 1 ; −f′(U)/K, c/K ].

For the equilibrium (0, 0),

  M(0, 0) = [ 0, 1 ; −f′(0)/K, c/K ].

The eigenvalues λ of M(0, 0) satisfy

  λ² − (c/K) λ + f′(0)/K = 0.

Hence there are two eigenvalues

  λ± = [ c/K ± √( (c/K)² − 4 f′(0)/K ) ] / 2.

Multiplying both sides of (4.33) by dU/dξ and integrating from −∞ to +∞ yields

  c = ∫₀¹ f(u) du / ∫₋∞^{+∞} (U′(ξ))² dξ > 0.

Hence if c ≥ √(4Kf′(0)) then (0, 0) is an unstable node; if 0 < c < √(4Kf′(0)) then (0, 0) is an unstable spiral. For the equilibrium (1, 0),

  M(1, 0) = [ 0, 1 ; −f′(1)/K, c/K ].

The eigenvalues λ satisfy

  λ² − (c/K) λ + f′(1)/K = 0.

Hence (1, 0) is a saddle point, for f′(1) < 0.
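A numerical companion to this phase-plane analysis (my own illustration, not part of the text): take Fisher's nonlinearity f(u) = u(1 − u) with K = 1, so c₀ = √(4Kf′(0)) = 2, pick any c ≥ c₀ (here c = 2.5), and integrate (4.35) in reversed time ξ → −ξ, leaving the saddle (1, 0) along the appropriate eigendirection. The orbit runs down to (0, 0) inside the strip {0 < U < 1, W > 0}, which is exactly the heteroclinic (traveling-wave) connection discussed below.

```python
import math

# Fisher case f(u) = u(1-u), K = 1: f'(0) = 1, f'(1) = -1, c0 = 2.
f = lambda u: u*(1 - u)
c = 2.5                                  # any speed c >= c0

# Time-reversed system: U' = -W, W' = -c W + f(U).
def rhs(U, W):
    return -W, -c*W + f(U)

# Leave (1, 0) along the reversed-time unstable eigendirection:
# in forward time the stable eigenvalue is lam = (c - sqrt(c^2 - 4 f'(1)))/2,
# with eigenvector (1, lam); we start on the W > 0 side.
lam = (c - math.sqrt(c*c + 4)) / 2       # lam < 0
eps = 1e-3
U, W = 1 - eps, -lam*eps

h = 1e-3
ok = True
for _ in range(100000):                  # RK4 until the orbit reaches (0,0)
    if U < 1e-4:
        break
    k1 = rhs(U, W)
    k2 = rhs(U + h/2*k1[0], W + h/2*k1[1])
    k3 = rhs(U + h/2*k2[0], W + h/2*k2[1])
    k4 = rhs(U + h*k3[0], W + h*k3[1])
    U += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    W += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    ok = ok and (0 < U < 1) and (W > 0)  # stays in the positive strip

assert ok and U < 1e-4 and 0 < W < 1e-3  # connected (1,0) to (0,0)
```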


Let c₀ = √(4Kf′(0)). If 0 < c < c₀ then there are no heteroclinic orbits connecting (0, 0) and (1, 0), because (0, 0) is an unstable spiral while the solution U(ξ) of (4.35) must be positive (a biological restriction!). We shall show that for each c ≥ c₀ there is a heteroclinic orbit connecting the unstable node (0, 0) and the saddle point (1, 0) (see Fig. 4.13). First we reverse time, ξ → −ξ; then (4.35) becomes

  dU/dξ = −W,
  dW/dξ = −(c/K) W + f(U)/K,   (4.36)
  (U(+∞), W(+∞)) = (0, 0),  (U(−∞), W(−∞)) = (1, 0).

Choose m, ρ > 0 such that

  m > f(U)/c for all 0 ≤ U ≤ 1,   (4.37)

and 0 < ρ₁ < ρ < ρ₂, where ρ₁, ρ₂ are the positive roots of Kρ² − cρ + A = 0, A = sup_{0<U<1} f(U)/U.

If

  ∫₀¹ f(u) du > 0,   (4.45)

then c > 0. Consider the unstable manifold Γ_c of (0, 0). Obviously when c = 0, Γ_c cannot reach U = 1, otherwise from (4.44) we obtain

  ∫₀¹ f(u) du = 0,

which is a contradiction to the assumption (4.45) (see Fig. 4.16(a)). Since the slope of the unstable manifold Γ_c at (0, 0) is the positive eigenvalue λ₊ of M(0, 0),

  λ₊ = [ c + √(c² − 4f′(0)) ] / 2 > c.


Let K = sup_{0<U<1} f(U)/U. We claim that W(U) > σU for c large. If not, there exists U₀ such that W(U₀) = σU₀, W′(U₀) < σ, and W(U) > σU > 0 for 0 < U < U₀. Then for c large

  dW/dU |_{U=U₀} = c − f(U₀)/W(U₀) = c − f(U₀)/(σU₀) ≥ c − K/σ > σ.

This is a contradiction.

Fig. 4.15

Then for large c the unstable manifold Γ_c is above the line W = σU. By continuous dependence on the parameter c, there exists c₀ such that the unstable manifold Γ_{c₀} of (0, 0) reaches the saddle point (1, 0) (see Fig. 4.16(a)).

Next we show the uniqueness of traveling waves. Suppose we have two traveling waves connecting (0, 0) and (1, 0) with two different speeds c₁ and c₀. Assume c₁ > c₀. From (4.41) we have

  dW/dU = c − f(U)/W.   (4.46)

It follows that W(U, c₁) ≥ W(U, c₀). From (4.43), the slope of the stable manifold of (1, 0) is

  λ₋ = [ c − √(c² − 4f′(1)) ] / 2 < 0.

Hence

  |λ₋| = [ √(c² − 4f′(1)) − c ] / 2

and

  d|λ₋|/dc = (1/2) [ c/√(c² − 4f′(1)) − 1 ] < 0.   (4.47)

However, from Fig. 4.16(b), |λ₋(c₁)| > |λ₋(c₀)|, and we obtain a contradiction.

Fig. 4.16

4.6 Exercises

Exercise 4.1 For the equation v″(t) + t sin v(t) = 0, show that v(t) ≡ π, v′(t) ≡ 0 is an unstable solution and v(t) ≡ 0, v′(t) ≡ 0 is asymptotically stable.

Exercise 4.2 Consider the equation ẏ = (a cos t + b)y − y³ in R, where a, b > 0.
(1) Study the stability of the zero solution.
(2) Suppose it is known that there is a unique 2π-periodic solution y_p(t; ξ₀) satisfying y_p(0; ξ₀) = ξ₀ ∈ [c, a + b + 1], c > 0. Show that y_p(t; ξ₀) is stable.

Exercise 4.3 Consider the spiral system ẋ₁ = −x₂ + x₁(1 − r²), ẋ₂ = x₁ + x₂(1 − r²), where r² = x₁² + x₂². Let x(t, τ, ξ) denote the solution which passes through ξ at time τ, and note that x(t, τ, (1, 0)) = (cos(t − τ), sin(t − τ)).
(1) Explicitly calculate the linear system of differential equations for which the Jacobian matrix (∂x/∂ξ)(t, 0, (1, 0)) is a fundamental matrix solution.


(2) Use the given information to exhibit a periodic solution of period 2π for the linear system in part (1).
(3) Find the characteristic multipliers of the linear system in part (1).

Exercise 4.4
(1) Check that (x = 3 cos 3t, y = sin 3t) is a periodic solution of the system
  ẋ = −9y + x(1 − x²/9 − y²),
  ẏ = x + y(1 − x²/9 − y²).
(2) Find the derivative (eigenvalue) of the Poincaré first return map to the real axis at (3, 0).

Exercise 4.5 Prove the converse statement in Lemma 4.3.1.

Exercise 4.6
(1) Let B ∈ R²ˣ². Show that for the linear difference equation x_{n+1} = B xₙ, the fixed point 0 is asymptotically stable if and only if |det B| < 1 and |Trace B| < 1 + det B.
(2) Do the stability analysis of the map
  x_{n+1} = a yₙ/(1 + xₙ²),
  y_{n+1} = b xₙ/(1 + yₙ²),  a, b ∈ R.

Exercise 4.7 Analyze the following host-parasitoid system:
  x_{n+1} = σ xₙ exp(−a yₙ),
  y_{n+1} = σ xₙ (1 − exp(−a yₙ)),  a > 0, σ > 0, x₀ > 0, y₀ > 0.

Exercise 4.8 Analyze the following predator-prey system with refuge:
  dx/dt = a x − c₁ y(x − k),
  dy/dt = −e y + c₂ y(x − k),  a, c₁, c₂, e, k > 0, x(0) > 0, y(0) > 0.


Exercise 4.9 Let p(λ) = λ³ + a₁λ² + a₂λ + a₃ and p(λᵢ) = 0, i = 1, 2, 3. Show that |λᵢ| < 1, i = 1, 2, 3, if and only if
(1) p(1) = 1 + a₁ + a₂ + a₃ > 0,
(2) (−1)³ p(−1) = 1 − a₁ + a₂ − a₃ > 0,
(3) 1 − a₃² > |a₂ − a₁a₃|.

Exercise 4.10 Do the stability analysis of the Brusselator equations
  dx/dt = a − (b + 1)x + x²y,
  dy/dt = bx − x²y,  a, b > 0.

Exercise 4.11 Suppose a, b, c are nonnegative continuous functions on [0, ∞) and u is a nonnegative bounded continuous solution of the inequality
  u(t) ≤ a(t) + ∫₀ᵗ b(t − s)u(s) ds + ∫₀^∞ c(s)u(t + s) ds,  t ≥ 0,
and a(t) → 0, b(t) → 0 as t → ∞, ∫₀^∞ b(s) ds < ∞, ∫₀^∞ c(s) ds < ∞. Prove that u(t) → 0 as t → ∞ if
  ∫₀^∞ b(s) ds + ∫₀^∞ c(s) ds < 1.

Exercise 4.12 Let x′ = f(x), x ∈ Rⁿ, with flow ϕₜ(x) defined for all t ∈ R, x ∈ Rⁿ. Show that if Trace Df(x) = 0 for all x ∈ Rⁿ, then the flow ϕₜ(x) preserves volume, i.e., vol(Ω) = vol(ϕₜ(Ω)) for all t > 0.

Exercise 4.13 Consider
  x₁′ = −x₂ + x₁(1 − x₁² − x₂²),
  x₂′ = x₁ + x₂(1 − x₁² − x₂²).
Show that (cos t, sin t) is a periodic solution. Compute the characteristic multipliers of the linearized system and discuss the orbital stability.

Exercise 4.14 Prove Lemma 4.4.1.

Exercise 4.15 Consider the Fibonacci sequence {aₙ} satisfying a₀ = 0, a₁ = 1, aₙ = a_{n−1} + a_{n−2}, n ≥ 2. Find a formula for aₙ. Show that aₙ increases like a geometric progression and find lim_{n→∞} (ln aₙ)/n.


Exercise 4.16 Do the stability analysis for the following Hénon map:
  x_{n+1} = A − B yₙ − xₙ²,  A, B ∈ R,
  y_{n+1} = xₙ.

Exercise 4.17 Consider the following predator-prey system:
  dx/dt = x(γ(1 − x/K) − αy),
  dy/dt = (βx − d − δy)y,  γ, K, α, β, d, δ > 0, x(0) > 0, y(0) > 0.
For the various possible cases, do the following:
(1) Find all equilibria with nonnegative components.
(2) Do a stability analysis for each equilibrium.
(3) Find the stable manifold of each saddle point.
(4) Predict the global asymptotic behavior.

Exercise 4.18 Read pp. 273–275 of [Kee1], where the explicit form of U(ξ) for (4.40) is shown to be
  U(ξ) = (1/2) [ 1 + tanh( ξ/(2√2) ) ],
  c = (1/√2)(1 − 2α).

Exercise 4.19 Let K be a closed and bounded subset of Rⁿ and let F : K → K be a continuous map, not necessarily either one-to-one or onto. A point p in K is said to be a wandering point for F if there exists a neighborhood U of p for which {F⁻ᵐ(U) : m ≥ 0} is a pairwise disjoint family of sets. Let p be a point of K.
(1) Given M > 0, prove that if p ∉ Fᴹ(K), then {F⁻ᵐ({p}) : 0 ≤ m < M} is a pairwise disjoint family of sets. (It is possible that some of the sets in this family are in fact empty.)
(2) Prove that if there exists M > 0 such that p ∉ Fᴹ(K), then p is a wandering point for F.


Exercise 4.20 Consider a mathematical model describing the pollen coupling of forest trees. Let Yᵢ(t) be the non-dimensionalized energy,
  Yᵢ(t + 1) = Yᵢ(t) + 1 if Yᵢ(t) ≤ 0,
  Yᵢ(t + 1) = −k Pᵢ(t) Yᵢ(t) + 1 if Yᵢ(t) > 0,  i = 1, 2, ..., N,
where Pᵢ(t) is given by
  Pᵢ(t) = ( (1/(N − 1)) Σ_{j≠i} [Yⱼ(t)]₊ )^β,
N is the number of trees, β > 0, k > 0, and [Y]₊ = Y if Y > 0, [Y]₊ = 0 if Y ≤ 0. Find the synchronized equilibrium (Y*, ..., Y*) with Y* > 0, and the condition that (Y*, ..., Y*) is locally stable.

Exercise 4.21 Consider the system x′ = a + x² − xy, y′ = y² − x² − 1, where a is a parameter.

(1) Sketch the phase portrait for a = 0. Show that there is a trajectory connecting two saddle points. (2) Sketch the phase portrait for a < 0 and a > 0 by computer simulations.

Exercise 4.22 The system x′ = xy − x²y + y³, y′ = y² + x³ − xy² has a nasty higher-order fixed point at the origin. Use polar coordinates to sketch the phase portrait.

Exercise 4.23 Consider the equation ẋ = Ax + h(t) in Rⁿ, where all eigenvalues of A have negative real parts and h is continuous and bounded on R.

(1) Prove that solutions are unique and defined for all t ∈ (−∞, ∞). (2) Prove that if in addition ||h(t)|| → 0 as t → ∞, then every solution tends to 0 as t → ∞.


Exercise 4.24 (Kermack and McKendrick's epidemic model) Consider the SIR model, S → I → R:
  dS/dt = −rSI,
  dI/dt = rSI − aI,
  dR/dt = aI,
  S(0) = S₀ > 0, I(0) = I₀ > 0, R(0) = 0,
where S(t), I(t), R(t) are the populations of susceptibles, infectives and removed, respectively; r > 0 and a > 0 are the infection rate and removal rate, respectively. Let ρ = a/r be the relative removal rate and N = S₀ + I₀. Show that
(1) S(t) + I(t) + R(t) = N for all t > 0.
(2) If S₀ < ρ then I(t) goes monotonically to zero. No epidemic can occur.
(3) If S₀ > ρ, then I(t) increases as t increases and then tends monotonically to zero. This is a threshold phenomenon: there is a critical value which the initial susceptible population must exceed for there to be an epidemic.
(4) lim_{t→∞} S(t) = S(∞) exists and is the unique root of the transcendental equation
  S₀ exp( −(1/ρ)(N − z) ) − z = 0.

Chapter 5

METHOD OF LYAPUNOV FUNCTIONS

5.1 An Introduction to Dynamical Systems

Example 5.1.1 Consider the autonomous system

  x′ = f(x),  f : D ⊆ Rⁿ → Rⁿ.   (5.1)

Let ϕ(t, x₀) be the solution of (5.1) with initial condition x(0) = x₀. Then it follows that
(i) ϕ(0, x₀) = x₀;
(ii) ϕ(t, x₀) is a continuous function of t and x₀;
(iii) ϕ(t + s, x₀) = ϕ(s, ϕ(t, x₀)).
Property (i) is obvious; property (ii) follows directly from the property of continuous dependence on initial conditions. Property (iii), which is called the semigroup property, follows from the uniqueness of solutions of the ODE and the fact that (5.1) is an autonomous system. We call ϕ : R × Rⁿ → Rⁿ the flow induced by (5.1).

Remark 5.1.1 Property (ii) need not hold in general. We shall consider flows defined on a metric space M. In Example 5.1.1, M = Rⁿ. In fact, in many applications of dynamical systems to partial differential equations and functional differential equations, the metric space M is a Banach space and the flow ϕ is a semiflow, i.e., ϕ : R⁺ × M → M satisfying (i), (ii), (iii). Now let (M, ρ) be a metric space. In the rest of Section 5.1 we refer to the book [NS].

Definition 5.1.1 We call a map π : R × M → M a continuous dynamical system if


(i) π(0, x) = x;
(ii) π(t, x) is continuous in x and t;
(iii) π(t + s, x) = π(t, π(s, x)), x ∈ M, t, s ∈ R.

We may interpret π(t, x) as the position of a particle at time t when the initial (i.e. time = 0) position is x.

Remark 5.1.2 A discrete dynamical system is defined as a continuous map π : Z × M → M satisfying (i), (ii), (iii), where Z = {n : n is an integer}. Typical examples of discrete dynamical systems are x_{n+1} = f(xₙ), xₙ ∈ R^d, and the Poincaré map x₀ ↦ ϕ(ω, x₀), where ϕ(t, x₀) is the solution of a periodic system x′ = f(t, x), x(0) = x₀, with f(t, x) = f(t + ω, x).

The next lemma says that property (ii) implies the property of continuous dependence on initial data.

Lemma 5.1.1 Given T > 0 and p ∈ M. For any ε > 0, there exists δ > 0 such that ρ(π(t, p), π(t, q)) < ε for 0 ≤ t ≤ T whenever ρ(p, q) < δ.

Proof. If not, then there exist {qₙ}, qₙ → p, and {tₙ}, |tₙ| ≤ T, and α > 0 with ρ(π(tₙ, p), π(tₙ, qₙ)) ≥ α. Without loss of generality, we may assume tₙ → t₀. Then

  0 < α ≤ ρ(π(tₙ, p), π(tₙ, qₙ)) ≤ ρ(π(tₙ, p), π(t₀, p)) + ρ(π(t₀, p), π(tₙ, qₙ)).   (5.2)

From (ii), the right-hand side of (5.2) approaches zero as n → ∞. This is a contradiction.

Notations:
  γ⁺(x) = {π(t, x) : t ≥ 0} is the positive orbit through x;
  γ⁻(x) = {π(t, x) : t ≤ 0} is the negative orbit through x;
  γ(x) = {π(t, x) : −∞ < t < ∞} is the orbit through x.
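Properties (i) and (iii) of a flow are easy to observe numerically. The sketch below (my own illustration; the scalar equation x′ = −x + sin x is an arbitrary choice, not from the text) approximates the flow ϕ by RK4 and checks ϕ(0, x₀) = x₀ and the semigroup identity ϕ(t + s, x₀) = ϕ(s, ϕ(t, x₀)) to within discretization error:

```python
import math

# Numerically approximate the flow of x' = -x + sin(x) and check the
# flow properties (i) and (iii) up to RK4 discretization error.

def step(x, h):                           # one RK4 step
    g = lambda x: -x + math.sin(x)
    k1 = g(x)
    k2 = g(x + h/2*k1)
    k3 = g(x + h/2*k2)
    k4 = g(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

def phi(t, x0, n=4000):
    h = t / n
    for _ in range(n):
        x0 = step(x0, h)
    return x0

x0, t, s = 1.3, 0.7, 1.1
lhs = phi(t + s, x0)
rhs = phi(s, phi(t, x0))
assert abs(phi(0.0, x0) - x0) < 1e-12     # property (i)
assert abs(lhs - rhs) < 1e-9              # property (iii), semigroup
```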


Definition 5.1.2 We say that a set S ⊆ M is positively (negatively) invariant under the flow π if for any x ∈ S, π(t, x) ∈ S for all t ≥ 0 (t ≤ 0), i.e., π(t, S) ⊆ S for all t ≥ 0 (π(t, S) ⊆ S for all t ≤ 0). S is invariant if S is both positively and negatively invariant, i.e., π(t, S) = S for −∞ < t < ∞.

Lemma 5.1.2 The closure of an invariant set is invariant.

Proof. Let S be an invariant set and S̄ be its closure. If p ∈ S, then π(t, p) ∈ S ⊆ S̄. If p ∈ S̄ \ S, then there exists {pₙ} ⊂ S such that pₙ → p. Then for each t ∈ R we have lim_{n→∞} π(t, pₙ) = π(t, p). Since π(t, pₙ) ∈ S, it follows that π(t, p) ∈ S̄. Hence π(t, S̄) ⊂ S̄.

Definition 5.1.3 We say a point p ∈ M is an equilibrium or a rest point of the flow π if π(t, p) ≡ p for all t. If π(T, p) = p for some T > 0 and π(t, p) ≠ p for all 0 < t < T, then we say {π(t, p) : 0 ≤ t ≤ T} is a periodic orbit.

Example 5.1.2 Rest points and periodic orbits are (fully) invariant. We note that positive invariance does not necessarily imply negative invariance. For instance, if an autonomous system x′ = f(x) satisfies f(x)·n(x) < 0 for all x ∈ ∂Ω, where Ω is a bounded domain in Rⁿ and n(x) is an outward normal vector at x ∈ ∂Ω, then Ω is positively invariant, but not negatively invariant.

Lemma 5.1.3 (i) The set of rest points is a closed set. (ii) No trajectory enters a rest point in finite time.

Proof. (i) Trivial. (ii) Suppose π(T, p) = p*, where p* is a rest point and p ≠ p*. Then p = π(−T, p*) = p*. This is a contradiction.

Lemma 5.1.4 (i) If for any δ > 0 there exists p ∈ B(q, δ) such that γ⁺(p) ⊂ B(q, δ), then q is a rest point. (ii) If lim_{t→∞} π(t, p) = q, then q is a rest point.


Proof. We shall prove (i); (ii) follows directly from (i). Suppose q is not a rest point. Then π(t₀, q) ≠ q for some t₀ > 0. Let ρ(q, π(t₀, q)) = d > 0. From continuity of the flow π, there exists δ > 0, δ < d/2, such that if ρ(p, q) < δ then ρ(π(t, q), π(t, p)) < d/2 for all t, |t| ≤ t₀. By the hypothesis of (i) there exists p ∈ B(q, δ) such that γ⁺(p) ⊂ B(q, δ). Then

  d = ρ(q, π(t₀, q)) ≤ ρ(q, π(t₀, p)) + ρ(π(t₀, p), π(t₀, q)) < δ + d/2 < d/2 + d/2 = d.

This is a contradiction.

Next we introduce the notion of limit sets.

Definition 5.1.4 We say that p is an omega limit point of x if there exists a sequence {tₙ}, tₙ → +∞, such that π(tₙ, x) → p. The set ω(x) = {p : p is an omega limit point of x} is called the ω-limit set of x. Similarly, we say that p is an alpha limit point of x if there exists a sequence {tₙ}, tₙ → −∞, such that π(tₙ, x) → p. The set α(x) = {p : p is an alpha limit point of x} is called the α-limit set of x.

Remark 5.1.3 Note that ω(x) represents where the positive orbit γ⁺(x) ends up, while α(x) represents where the negative orbit γ⁻(x) started. We note that α and ω are the first and last letters of the Greek alphabet.

Remark 5.1.4

  ω(x) = ⋂_{t≥0} cl( ⋃_{s≥t} π(s, x) ),  α(x) = ⋂_{t≤0} cl( ⋃_{s≤t} π(s, x) ).

Theorem 5.1.1 ω(x) and α(x) are closed, invariant sets.


Proof. We shall prove the case of ω(x) only. First we show that ω(x) is invariant. Let q ∈ ω(x) and τ ∈ R. We want to show π(τ, q) ∈ ω(x). Let π(tₙ, x) → q as tₙ → +∞. By continuity of π, it follows that π(tₙ + τ, x) = π(τ, π(tₙ, x)) → π(τ, q) as tₙ → +∞. Thus π(τ, q) ∈ ω(x), and it follows that ω(x) is invariant. Next we show ω(x) is a closed set. Let qₙ ∈ ω(x) and qₙ → q as n → ∞. We want to show q ∈ ω(x). Since qₙ ∈ ω(x), there exists τₙ ≥ n such that ρ(π(τₙ, x), qₙ) < 1/n. From qₙ → q, it follows that for any ε > 0 there exists N = N(ε) such that ρ(qₙ, q) < ε/2 for n ≥ N. Therefore, for n ≥ max(N, 2/ε),

  ρ(π(τₙ, x), q) ≤ ρ(π(τₙ, x), qₙ) + ρ(qₙ, q) < ε/2 + ε/2 = ε.

Thus we have lim_{n→∞} π(τₙ, x) = q and q ∈ ω(x).

Theorem 5.1.2 If the closure Cl(γ⁺(p)) of γ⁺(p) is compact, then ω(p) is nonempty, compact and connected. Furthermore, lim_{t→∞} ρ(π(t, p), ω(p)) = 0.

Proof. Let pₖ = π(k, p). Since Cl(γ⁺(p)) is compact, there exists a subsequence {p_{k_j}}, p_{k_j} → q ∈ Cl(γ⁺(p)). Then q ∈ ω(p) and hence ω(p) is nonempty. The compactness of ω(p) follows directly from the facts that Cl(γ⁺(p)) = γ⁺(p) ∪ ω(p), ω(p) is closed, and Cl(γ⁺(p)) is compact. We shall prove that ω(p) is connected by contradiction. Suppose on the contrary that ω(p) is disconnected. Then ω(p) = A ∪ B, a disjoint union of two closed subsets A, B of ω(p). Since ω(p) is compact, A, B are compact. Then d = ρ(A, B) > 0, where ρ(A, B) = inf_{x∈A, y∈B} ρ(x, y). Consider the neighborhoods N(A, d/3) and N(B, d/3) of A, B respectively. Then there exist tₙ → +∞ and τₙ → +∞, τₙ > tₙ, such that π(tₙ, p) ∈ N(A, d/3) and π(τₙ, p) ∈ N(B, d/3). Since ρ(π(t, p), A) is a continuous function of t, from the inequalities

  ρ(π(τₙ, p), A) ≥ ρ(A, B) − ρ(π(τₙ, p), B) > d − d/3 = 2d/3 and ρ(π(tₙ, p), A) < d/3,

we have ρ(π(tₙ*, p), A) = d/2 for some tₙ* ∈ (tₙ, τₙ). {π(tₙ*, p)} contains a convergent subsequence π(t*_{n_k}, p) → q ∈ ω(p). However q ∉ A and q ∉ B, and we obtain the desired contradiction.


Next we shall prove that a trajectory is asymptotic to its omega limit set. If not, then there exist a sequence {tₙ}, tₙ → +∞, and α > 0 such that ρ(π(tₙ, p), ω(p)) ≥ α. Then there exists a subsequence of {π(tₙ, p)}, π(t_{n_k}, p) → q ∈ ω(p), and we obtain the following contradiction:

  0 = ρ(q, ω(p)) = lim_{n_k→∞} ρ(π(t_{n_k}, p), ω(p)) ≥ α.

Example 5.1.3 Consider

  x′ = x + y − x(x² + y²),
  y′ = −x + y − y(x² + y²).

Using polar coordinates (r, θ), we have

  r′ = r(1 − r²),  θ′ = −1.

If r(0) = 0, then ω((0, 0)) = {(0, 0)}. If 0 < r(0) < 1, then ω(x₀, y₀) = the unit circle and α(x₀, y₀) = {(0, 0)}. If r(0) = 1, then ω(x₀, y₀) = α(x₀, y₀) = the unit circle. If r(0) > 1, then ω(x₀, y₀) = the unit circle and α(x₀, y₀) = ∅.

Example 5.1.4 Consider

  x′ = (x − y)/(1 + (x² + y²)^{1/2}),
  y′ = (x + y)/(1 + (x² + y²)^{1/2}).   (5.3)

In terms of polar coordinates, we have

  dr/dt = r/(1 + r),  dθ/dt = 1/(1 + r).

From Fig. 5.1 it follows that ω(x, y) = ∅ for (x, y) ≠ (0, 0) and α(x, y) = {(0, 0)} for any (x, y). The following examples show that an unbounded trajectory may have a disconnected or noncompact ω-limit set.

Example 5.1.5 Let X = x/(1 − x²), Y = y, where (X(t), Y(t)) satisfies (5.3). Then
x′ = [x(1 − x²) − y(1 − x²)²] / {(1 + x²)[1 + ((x/(1 − x²))² + y²)^{1/2}]} = f(x, y),
y′ = [y + x/(1 − x²)] / [1 + ((x/(1 − x²))² + y²)^{1/2}] = g(x, y).

METHOD OF LYAPUNOV FUNCTIONS


Fig. 5.1

Then lim_{x→±1} f(x, y) = 0, and ω((x0, y0)) = {x = 1} ∪ {x = −1} is not connected (see Fig. 5.2).

Fig. 5.2

Example 5.1.6 Let X = log(1 + x), Y = y, −1 < x < +∞, y ∈ R, where (X(t), Y(t)) satisfies (5.3). Then
x′ = f(x, y) = (1 + x)[log(1 + x) − y] / [1 + {(log(1 + x))² + y²}^{1/2}],
y′ = g(x, y) = [y + log(1 + x)] / [1 + {(log(1 + x))² + y²}^{1/2}].
Here f(x, y), g(x, y) satisfy lim_{x→−1} f(x, y) = 0, lim_{x→−1} g(x, y) = 1. The ω-limit set ω(x) = {(x, y) : x = −1} is not compact (see Fig. 5.3).

5.2 Lyapunov Functions

Let V : Ω ⊆ Rn → R, where Ω is an open set in Rn and 0 ∈ Ω.


Fig. 5.3

Definition 5.2.1 V : Ω ⊆ Rn → R is said to be positive definite (negative definite) on Ω if V(x) > 0 for all x ≠ 0 (V(x) < 0 for all x ≠ 0) and V(0) = 0. V is said to be semi-positive definite (semi-negative definite) on Ω if V(x) ≥ 0 (V(x) ≤ 0) for all x ∈ Ω.

Remark 5.2.1 In applications, we consider the autonomous system x′ = f(x) with x* as an equilibrium. Let y = x − x*; then y′ = g(y) = f(y + x*) and 0 is an equilibrium of the system y′ = g(y). In general, V(x) satisfies V(x*) = 0 and V(x) > 0 for all x ∈ Ω, x ≠ x*. For the initial-value problem

x′ = f(x), x(0) = x0,    (5.4)

we assume that x = 0 is the unique equilibrium of (5.4) and the solution ϕ(t, x0) exists for all t ≥ 0. We introduce a "Lyapunov function" V(x) to locate the ω-limit set ω(x0). The function V(x) satisfies the following: V(0) = 0, V(x) > 0 for all x ∈ Ω, x ≠ 0, and V(x) → +∞ as |x| → ∞. Each level set {x ∈ Ω : V(x) = c} is a closed surface, and the surface {x ∈ Ω : V(x) = c1} encloses the surface {x ∈ Ω : V(x) = c2} if c1 > c2. We want to prove lim_{t→∞} ϕ(t, x0) = 0, even if we do not know the exact location of the solution ϕ(t, x0). If we are able to construct a suitable Lyapunov function V(x) such that (d/dt)V(ϕ(t, x0)) < 0, as Fig. 5.4 shows, then lim_{t→∞} ϕ(t, x0) = 0.

Example 5.2.1 mẍ + kx = 0, x(0) = x0, x′(0) = x′0, describes the motion of a spring without friction according to Hooke's law.


Fig. 5.4

Consider the total energy
V(t) = kinetic energy + potential energy = (1/2)m(v(t))² + ∫₀^{x(t)} ks ds = (1/2)m(x′(t))² + (k/2)x²(t).
Then it follows that
(d/dt)V(t) = mx′(t)x″(t) + kx(t)x′(t) ≡ 0,
and hence the energy is conserved. In this case the Lyapunov function is V(x, x′) = (m/2)(x′)² + (k/2)x². It is easy to verify that V(0, 0) = 0, V(x, x′) > 0 for all (x, x′) ≠ (0, 0), and V(x, x′) → +∞ as |(x, x′)| → +∞.

Example 5.2.2 mx″ + g(x) = 0, where xg(x) > 0 for x ≠ 0 and ∫₀^x g(s)ds → +∞ as |x| → ∞. This describes the motion of a spring whose restoring force is a nonlinear function g(x). The energy
V(t) = (m/2)(x′(t))² + ∫₀^{x(t)} g(s) ds    (5.5)
satisfies dV/dt ≡ 0, i.e., the energy is conserved. Hence the solution (x(t), x′(t)) is periodic. See Fig. 5.5.

Example 5.2.3 mx″ + k(x)x′ + g(x) = 0, with k(x) ≥ δ > 0 for all x. Using the energy function in (5.5), it follows that
dV/dt = x′(t)[−k(x(t))x′(t) − g(x(t))] + x′(t)g(x(t)) = −k(x(t))(x′(t))² ≤ −δ(x′(t))².


Fig. 5.5

Then we expect lim_{t→∞} x(t) = 0 and lim_{t→∞} x′(t) = 0.

Let x(t) be the solution of (5.4) and let V(x) be positive definite on Ω satisfying V(x) → ∞ as |x| → ∞. Compute the derivative of V along the trajectory of the solution x(t):
(d/dt)V(x(t)) = grad V(x(t)) · x′(t) = Σ_{i=1}^n (∂V/∂xi)(x(t)) fi(x(t)) =: V̇(x(t)).

Definition 5.2.2 A function V : Ω → R, V ∈ C¹, is said to be a Lyapunov function for (5.4) if V̇(x) = grad V(x) · f(x) ≤ 0 for all x ∈ Ω.

Remark 5.2.2 If V is merely continuous on Ω, we replace (d/dt)V(x(t)) by
lim_{h→0} (1/h)[V(x(t + h)) − V(x(t))].
Let ξ = x(t); then x(t + h) = ϕ(h, ξ), and we define
V̇(ξ) = lim_{h→0} (1/h)[V(ϕ(h, ξ)) − V(ξ)].

Remark 5.2.3 In applications to many physical systems, V(x) is the total energy. However, for a mathematical problem we may take V as a quadratic form, namely V(x) = x^T Bx, where B is some suitable positive definite matrix.


Example 5.2.4 Consider the Lotka-Volterra predator-prey system
dx/dt = ax − bxy,
dy/dt = cxy − dy,   a, b, c, d > 0,    (5.6)
x(0) = x0 > 0, y(0) = y0 > 0,
where x = x(t), y = y(t) are the densities of prey and predator species respectively. Let x* = d/c, y* = a/b. Then (5.6) can be rewritten as
dx/dt = −bx(y − y*),
dy/dt = cy(x − x*).
Considering the trajectory in the phase plane, we have
dy/dx = cy(x − x*) / [−bx(y − y*)].    (5.7)
Using separation of variables, from (5.7) it follows that
[(y − y*)/y] dy + (c/b)[(x − x*)/x] dx = 0
and
∫_{y(0)}^{y(t)} (η − y*)/η dη + (c/b) ∫_{x(0)}^{x(t)} (η − x*)/η dη ≡ const.
Hence we define a Lyapunov function
V(x, y) = ∫_{y*}^y (η − y*)/η dη + (c/b) ∫_{x*}^x (η − x*)/η dη    (5.8)
= [y − y* − y* ln(y/y*)] + (c/b)[x − x* − x* ln(x/x*)].
Then it is straightforward to verify that V̇(x, y) ≡ 0, and hence (5.6) is a conservative system. Each solution (x(t, x0, y0), y(t, x0, y0)) is a periodic solution. From Fig. 5.6, the system (5.6) has a family of "neutrally stable" periodic orbits.

Consider the system (5.4) and, for simplicity, assume 0 is an equilibrium. To verify the stability property of the equilibrium 0, from the previous chapter we can obtain the information regarding stability from the eigenvalues of the variational matrix Dx f(0). In the following, we present another method, using Lyapunov functions.

Theorem 5.2.1 (Stability, Asymptotic Stability [H1]) If there exists a positive definite function V(x) on a neighborhood Ω of 0 such that V̇ ≤ 0 on Ω, then the equilibrium 0 of (5.4) is stable. If, in addition, V̇ < 0 for all x ∈ Ω \ {0}, then 0 is asymptotically stable.


Fig. 5.6

Proof. Let r > 0 be such that B(0, r) ⊆ Ω. Given ε, 0 < ε < r, let k = min_{|x|=ε} V(x). Then k > 0. From the continuity of V at 0, we choose δ, 0 < δ ≤ ε, such that V(x) < k when |x| < δ. Then the solution ϕ(t, x0) satisfies V(ϕ(t, x0)) ≤ V(x0) because V̇ ≤ 0 on Ω. Hence for x0 ∈ B(0, δ), ϕ(t, x0) stays in B(0, ε). Therefore 0 is stable.

Assume V̇ < 0 for all x ∈ Ω \ {0}. To establish the asymptotic stability of the equilibrium 0, we need to show ϕ(t, x0) → 0 as t → ∞ if x0 is sufficiently close to 0. Let r > 0 be such that B(0, r) ⊂ Ω, and let H = min{V(x) : |x| = r}. Then H > 0. It suffices to show that lim_{t→∞} V(ϕ(t, x0)) = 0 if x0 ∈ B(0, d), where 0 < d < r is a number such that V(x) ≤ H/2 for every x ∈ B(0, d). If not, since V̇ < 0, V(ϕ(t, x0)) decreases in t to some η > 0, so V(ϕ(t, x0)) ≥ η for all t ≥ 0. By the continuity of V at 0, there is a δ > 0 such that 0 ≤ V(x) < η for every x with |x| < δ. Hence |ϕ(t, x0)| ≥ δ for all t ≥ 0. Set S = {x : δ ≤ |x| ≤ r, V(x) ≤ H/2}. Then ϕ(t, x0) ∈ S for all t ≥ 0. Let γ = min{−V̇(x) : x ∈ S}. By the compactness of S and the continuity of V̇, γ > 0 because 0 ∉ S, and it follows that −(d/dt)V(ϕ(t, x0)) ≥ γ. Integrating both sides from 0 to t yields
−[V(ϕ(t, x0)) − V(x0)] ≥ γt, i.e., 0 < V(ϕ(t, x0)) ≤ V(x0) − γt.

Letting t → +∞, we obtain a contradiction.

Theorem 5.2.2 If there exist a neighborhood U of 0 and V : Ω ⊆ Rn → R, 0 ∈ Ω, such that V and V̇ are positive definite on U ∩ Ω \ {0}, then the equilibrium 0 is completely unstable, i.e., 0 is a repeller. More specifically, if Ω0 is any neighborhood of 0 with Ω0 ⊆ Ω, then any solution ϕ(t, x0) of (5.4) with x0 ∈ U ∩ Ω0 \ {0} leaves Ω0 in finite time.


Proof. If not, there exist a neighborhood Ω0 and x0 ∈ U ∩ Ω0 \ {0} such that ϕ(t, x0) stays in U ∩ Ω0 \ {0} for all t ≥ 0. Then V(ϕ(t, x0)) ≥ V(x0) > 0 for all t ≥ 0. Let α = inf{V̇(x) : x ∈ U ∩ Ω0, V(x) ≥ V(x0)} > 0. Then
V(ϕ(t, x0)) = V(x0) + ∫₀^t V̇(ϕ(s, x0)) ds ≥ V(x0) + αt.
Since ϕ(t, x0) remains in U ∩ Ω0 \ {0} for all t ≥ 0, V(ϕ(t, x0)) is bounded for t ≥ 0. Thus for t sufficiently large we obtain a contradiction from the above inequality.

Example 5.2.5 Consider

x′ = −x³ + 2y³,
y′ = −2xy².    (5.9)
(0, 0) is an equilibrium. The variational matrix of (5.9) at (0, 0) is
[ 0 0 ]
[ 0 0 ],
whose eigenvalues λ1 = λ2 = 0 are not hyperbolic. However, (0, 0) is indeed asymptotically stable. Using the Lyapunov function V(x, y) = (1/2)(x² + y²), we have
V̇ = x(−x³ + 2y³) + y(−2xy²) = −x⁴ ≤ 0.
From Theorem 5.2.1, (0, 0) is stable. In fact, from the invariance principle (see Theorem 5.2.4), (0, 0) is globally asymptotically stable.
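Example 5.2.5 can be checked numerically. The sketch below (my own illustration; the Runge-Kutta integrator, initial point and horizon are arbitrary choices) integrates (5.9) and verifies that V(x, y) = (x² + y²)/2 never increases along the trajectory, as V̇ = −x⁴ ≤ 0 predicts, even though the linearization gives no information.

```python
def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def vf(state):
    # system (5.9): x' = -x^3 + 2y^3,  y' = -2xy^2
    x, y = state
    return [-x ** 3 + 2 * y ** 3, -2 * x * y * y]

V = lambda s: 0.5 * (s[0] ** 2 + s[1] ** 2)

state = [1.0, 1.0]
values = [V(state)]
for _ in range(20000):        # integrate up to t = 200
    state = rk4_step(vf, state, 0.01)
    values.append(V(state))

monotone = all(b <= a + 1e-9 for a, b in zip(values, values[1:]))
```

Since V̇ = −x⁴ vanishes to fourth order, the decay is only algebraic: `values[-1]` ends up small but far from machine zero, which is typical for non-hyperbolic equilibria.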

Linear Stability by Lyapunov Method

Consider the linear system
x′ = Ax, A ∈ R^{n×n}.    (5.10)
From Chapter 3, the equilibrium x = 0 is asymptotically stable if Re λ(A) < 0. We recall that to verify Re λ(A) < 0 analytically, we first calculate the characteristic polynomial f(λ) = det(λI − A) and then apply the Routh-Hurwitz criteria. In the following we present another criterion for Re λ(A) < 0.

Theorem 5.2.3 Let A ∈ R^{n×n}. The matrix equation
A^T B + BA = −C    (5.11)
has a positive definite solution B for every positive definite matrix C if and only if Re λ(A) < 0.


Proof. Consider the linear system (5.10) and let V(x) = x^T Bx, where B is a positive definite matrix to be determined. Then
V̇(x) = ẋ^T Bx + x^T Bẋ = (Ax)^T Bx + x^T BAx = x^T (A^T B + BA)x.
If for any positive definite matrix C there exists a positive definite matrix B satisfying (5.11), then V̇(x) = −x^T Cx < 0 for x ≠ 0. The asymptotic stability of x = 0, i.e., Re λ(A) < 0, follows directly from Theorem 5.2.1.

Conversely, assume Re λ(A) < 0. Let C be any positive definite matrix and define
B = ∫₀^∞ e^{A^T t} C e^{At} dt.
First we claim that B is well-defined. From Re λ(A) < 0, there exist α, K > 0 such that ‖e^{At}‖ ≤ Ke^{−αt} for t ≥ 0. For any s > 0,
∫₀^s ‖e^{A^T t} C e^{At}‖ dt ≤ ∫₀^s ‖e^{A^T t}‖ ‖C‖ ‖e^{At}‖ dt ≤ K² ‖C‖ ∫₀^s e^{−2αt} dt < ∞.
B is positive definite since x^T Bx = ∫₀^∞ (e^{At}x)^T C (e^{At}x) dt > 0 for x ≠ 0. Compute
A^T B + BA = ∫₀^∞ [A^T e^{A^T t} C e^{At} + e^{A^T t} C e^{At} A] dt = ∫₀^∞ (d/dt)[e^{A^T t} C e^{At}] dt = 0 − C = −C.
Hence the proof is complete.
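For a concrete instance of Theorem 5.2.3, the sketch below (my own illustration, plain Python, not from the text) solves A^T B + BA = −C for a symmetric 2 × 2 unknown B by writing the equation as a 3 × 3 linear system in (b11, b12, b22) and then checks that B is positive definite. The matrix A used here has eigenvalues −1 and −2, so Re λ(A) < 0.

```python
def lyapunov_2x2(A, C):
    """Solve A^T B + B A = -C for symmetric B = [[b11, b12], [b12, b22]]."""
    idx = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}
    M = [[0.0] * 3 for _ in range(3)]
    rhs = []
    for row, (i, j) in enumerate([(0, 0), (0, 1), (1, 1)]):
        for k in range(2):
            M[row][idx[(k, j)]] += A[k][i]   # contribution of (A^T B)[i][j]
            M[row][idx[(i, k)]] += A[k][j]   # contribution of (B A)[i][j]
        rhs.append(-C[i][j])
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= fac * M[col][c]
            rhs[r] -= fac * rhs[col]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        b[r] = (rhs[r] - sum(M[r][c] * b[c] for c in range(r + 1, 3))) / M[r][r]
    return [[b[0], b[1]], [b[1], b[2]]]

A = [[0.0, 1.0], [-2.0, -3.0]]   # eigenvalues -1, -2: Re(lambda) < 0
C = [[1.0, 0.0], [0.0, 1.0]]     # C = I
B = lyapunov_2x2(A, C)
det_B = B[0][0] * B[1][1] - B[0][1] ** 2
```

Here B = [[1.25, 0.25], [0.25, 0.25]]; the leading minors B[0][0] > 0 and det B > 0 confirm positive definiteness, as the theorem predicts.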

Linearization via Lyapunov Method

Consider the nonlinear system
x′ = Ax + f(x), f(0) = 0, f(x) = o(|x|) as x → 0.    (5.12)
Now we shall apply Theorem 5.2.3 to obtain the stability property of the equilibrium x = 0 of (5.12).


Claim: If Re λ(A) < 0, then x = 0 is asymptotically stable.
From Theorem 5.2.3, there exists a positive definite matrix B satisfying A^T B + BA = −I. Let V(x) = x^T Bx. Then
V̇ = (Ax + f(x))^T Bx + x^T B(Ax + f(x)) = x^T (A^T B + BA)x + 2x^T Bf(x) = −x^T x + 2x^T Bf(x).
Since f(x) = o(|x|) as x → 0, for any ε > 0 there exists δ > 0 such that |f(x)| < ε|x| for |x| < δ. Then for |x| < δ it follows that
V̇ ≤ −x^T x + 2ε‖B‖ x^T x = (−1 + 2ε‖B‖) x^T x < 0.
Hence V̇ is negative definite for |x| < δ if ε is sufficiently small, and thus x = 0 is asymptotically stable.

If Re λ ≠ 0 for every eigenvalue λ of A and some eigenvalue of A has positive real part, then x = 0 is an unstable equilibrium of (5.12). Without loss of generality we may assume that A = diag(A−, A+), where Re λ(A−) < 0 and Re λ(A+) > 0. Let B1 be a positive definite matrix satisfying A−^T B1 + B1 A− = −I and B2 be a positive definite matrix satisfying (−A+)^T B2 + B2(−A+) = −I. Let x = (u, v), where u, v have the same dimensions as the matrices B1, B2 respectively. Rewrite (5.12) as
u′ = A− u + f1(u, v),
v′ = A+ v + f2(u, v).
Introduce V(x) = −u^T B1 u + v^T B2 v. Then
V̇ = −u^T (A−^T B1 + B1 A−)u + v^T (A+^T B2 + B2 A+)v + o(|x|²) = x^T x + o(|x|²) ≥ c|x|² > 0
for some c > 0 and for 0 < |x| < δ, δ > 0. On the region where V is positive definite, the conditions of Theorem 5.2.2 hold. Hence x = 0 is unstable.
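The stable case of the claim can be observed numerically. In the sketch below (my own example, not from the text) A is the stable 2 × 2 matrix with eigenvalues −1, −2, the perturbation f(x) = (x1 x2, x1²) is quadratic and hence o(|x|), and B is hard-coded as the matrix one can check solves A^T B + BA = −I for this particular A. The quadratic form V(x) = x^T Bx decays to zero along the computed solution of (5.12).

```python
def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

A = [[0.0, 1.0], [-2.0, -3.0]]     # Re(lambda) = -1, -2 < 0
B = [[1.25, 0.25], [0.25, 0.25]]   # solves A^T B + B A = -I for this A

def vf(v):
    x, y = v
    fx, fy = x * y, x * x          # f(x) = o(|x|) near 0 (quadratic terms)
    return [A[0][0] * x + A[0][1] * y + fx,
            A[1][0] * x + A[1][1] * y + fy]

def V(v):
    x, y = v
    return B[0][0] * x * x + 2 * B[0][1] * x * y + B[1][1] * y * y

state = [0.3, -0.3]                # small initial point near the origin
v0 = V(state)
for _ in range(20000):             # integrate up to t = 20
    state = rk4_step(vf, state, 1e-3)
v_end = V(state)
```

The slowest linear mode decays like e^{−t}, so by t = 20 both components are far below 10⁻⁵ and V has collapsed accordingly.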


Invariance Principle

The following invariance principle provides a method to locate the ω-limit set ω(x0) of the solution of the I.V.P. of the autonomous system x′ = f(x), x(0) = x0. It is also a tool to estimate the domain of attraction of an asymptotically stable equilibrium x*.

Definition 5.2.3 We say that a scalar function V is a Lyapunov function on an open set G ⊆ Rn if V is continuous on G and V̇(x) = grad V(x) · f(x) ≤ 0 for all x ∈ G.

We note that V need not be positive definite on G. Let S = {x ∈ G : V̇(x) = 0} and let M be the largest invariant set in S with respect to the flow of ẋ = f(x).

Theorem 5.2.4 (LaSalle's invariance principle) If V is a Lyapunov function on G and γ+(x0) is a bounded orbit of ẋ = f(x) with γ+(x0) ⊆ G, then ω(x0) ⊆ M and lim_{t→∞} dist(ϕ(t, x0), M) = 0.

Proof. Since γ+(x0) ⊆ G is bounded and V is continuous on G, V(t) = V(ϕ(t, x0)) is bounded for all t ≥ 0. From V̇ ≤ 0 on G, it follows that V(ϕ(t, x0)) is nonincreasing in t and hence lim_{t→+∞} V(ϕ(t, x0)) = c. Let y ∈ ω(x0); then there exists a sequence {tn} ↑ +∞ such that ϕ(tn, x0) → y as n → ∞, so V(y) = c. From the invariance of ω(x0), we have V(ϕ(t, y)) = c for all t ∈ R+. Differentiating V(ϕ(t, y)) = c with respect to t yields V̇(ϕ(t, y)) = 0. Hence ϕ(t, y) ∈ S for all t ≥ 0 and ω(x0) ⊆ S. By the definition of M, it follows that ω(x0) ⊆ M. Thus the proof is complete.

The next corollary provides a way to estimate the domain of attraction of the asymptotically stable equilibrium 0.

Corollary 5.2.1 If G = {x ∈ Rn : V(x) < ρ}, γ+(x0) ⊆ G and G is bounded, then ϕ(t, x0) → M as t → ∞.

Corollary 5.2.2 If V(x) → ∞ as |x| → ∞ and V̇ ≤ 0 on Rn, then every solution of x′ = f(x) is bounded and approaches M. In particular, if M = {0}, then 0 is globally asymptotically stable.
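As a small sanity check of the invariance principle (a sketch of mine, not from the text), take the damped oscillator x″ + x′ + x = 0, i.e. x′ = y, y′ = −x − y, with V(x, y) = (x² + y²)/2. Then V̇ = −y² ≤ 0, S = {y = 0}, and the only invariant set inside S is M = {(0, 0)}, so Corollary 5.2.2 predicts global asymptotic stability of the origin.

```python
import math

def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def vf(state):
    x, y = state
    return [y, -x - y]             # x'' + x' + x = 0 as a first-order system

V = lambda s: 0.5 * (s[0] ** 2 + s[1] ** 2)

state = [1.0, 1.0]
vals = [V(state)]
for _ in range(30000):             # integrate up to t = 30
    state = rk4_step(vf, state, 1e-3)
    vals.append(V(state))

nonincreasing = all(b <= a + 1e-9 for a, b in zip(vals, vals[1:]))
final_norm = math.hypot(state[0], state[1])
```

Note that V itself is not strictly decreasing pointwise in y = 0 crossings (V̇ = −y² vanishes there), yet the trajectory still approaches M = {(0, 0)}; that is precisely what LaSalle's principle adds over Theorem 5.2.1.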


Example 5.2.6 Consider the nonlinear equation x″ + f(x)x′ + g(x) = 0, where xg(x) > 0 for x ≠ 0, f(x) > 0 for x ≠ 0, and G(x) = ∫₀^x g(s)ds → ∞ as |x| → ∞. The equation describes a spring motion with friction. It is equivalent to the system
x′ = y,
y′ = −g(x) − f(x)y.    (5.13)
Let V(x, y) be the total energy of the system, that is, V(x, y) = y²/2 + G(x). Then V̇(x, y) = −f(x)y² ≤ 0. For any ρ > 0, the function V is a Lyapunov function on the bounded set Ω = {(x, y) : V(x, y) < ρ}. Also, the set S where V̇ = 0 belongs to the union of the x-axis and the y-axis. From (5.13), it is easy to check that M = {(0, 0)}, and Corollary 5.2.2 implies that (0, 0) is globally asymptotically stable.

Example 5.2.7 Consider the Lotka-Volterra predator-prey system
dx/dt = γx(1 − x/K) − αxy,
dy/dt = βxy − dy,   γ, α, β, d, K > 0,
x(0) = x0 > 0, y(0) = y0 > 0.

Fig. 5.7

Case 1: x* = d/β < K. Let
V(x, y) = ∫_{x*}^x (η − x*)/η dη + c ∫_{y*}^y (η − y*)/η dη.
With the choice of c = α/β, we have
V̇ = −(γ/K)(x − x*)².


Then S = {(x, y) : x = x*}, and it is easy to verify that M = {(x*, y*)} (see Fig. 5.7). Hence lim_{t→∞} x(t) = x*, lim_{t→∞} y(t) = y*.

Case 2: x* > K. Then with the Lyapunov function
V(x, y) = ∫_K^x (η − K)/η dη + cy,   c = α/β,
it follows that
V̇ = −(γ/K)(x − K)² + cβy(K − x*) ≤ 0.
Then S = {(K, 0)} = M, and hence lim_{t→∞} x(t) = K, lim_{t→∞} y(t) = 0.

We note that in this example the Lyapunov function V does not satisfy the hypotheses of Theorem 5.2.4. We leave it as an exercise to show that the invariance principle holds.

Example 5.2.8 Consider the Lotka-Volterra two-species competition model (Example 4.2.4):
dx1/dt = [r1(1 − x1/K1) − α1 x2] x1,
dx2/dt = [r2(1 − x2/K2) − α2 x1] x2.
If α1 < r2/K2 and α2 < r1/K1, then the positive equilibrium (x1*, x2*) is globally asymptotically stable.

Proof. Let the Lyapunov function be
V(x1, x2) = ∫_{x1*}^{x1} (ξ − x1*)/ξ dξ + c ∫_{x2*}^{x2} (ξ − x2*)/ξ dξ,
for some c > 0 to be determined. Then
V̇(x1, x2) = (x1 − x1*)[−(r1/K1)(x1 − x1*) − α1(x2 − x2*)] + c(x2 − x2*)[−(r2/K2)(x2 − x2*) − α2(x1 − x1*)]
= −(r1/K1)(x1 − x1*)² − (α1 + α2 c)(x1 − x1*)(x2 − x2*) − c(r2/K2)(x2 − x2*)².
The discriminant of this quadratic form in (x1 − x1*), (x2 − x2*) is
P(c) = (α1 + α2 c)² − 4c (r1/K1)(r2/K2) = α2² c² + 2c[α1 α2 − 2(r1 r2)/(K1 K2)] + α1².
Since P(0) > 0, we examine the discriminant of the quadratic polynomial P(c):
D = [α1 α2 − 2(r1 r2)/(K1 K2)]² − α1² α2² = 4 (r1 r2)/(K1 K2) [(r1 r2)/(K1 K2) − α1 α2] > 0.
Hence there are two positive roots c1*, c2* of P(c) = 0, and there exists c, c1* < c < c2*, such that P(c) < 0. With this choice of c, we have V̇(x1, x2) < 0 for (x1, x2) ≠ (x1*, x2*). Thus we complete the proof that (x1*, x2*) is globally asymptotically stable.
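A numerical illustration of Example 5.2.8 (my own parameter choices, not from the text): with r1 = r2 = K1 = K2 = 1 and α1 = α2 = 1/2, the weak-competition condition α1 α2 < r1 r2/(K1 K2) holds, the positive equilibrium solves x1 + x2/2 = 1, x1/2 + x2 = 1, i.e. (2/3, 2/3), and trajectories from unequal initial densities converge to it.

```python
def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

r1 = r2 = K1 = K2 = 1.0
a1 = a2 = 0.5      # alpha_1 * alpha_2 < r1 r2 / (K1 K2): weak competition

def vf(state):
    x1, x2 = state
    return [(r1 * (1 - x1 / K1) - a1 * x2) * x1,
            (r2 * (1 - x2 / K2) - a2 * x1) * x2]

state = [0.1, 1.2]             # one species rare, the other above capacity
for _ in range(10000):         # integrate up to t = 100
    state = rk4_step(vf, state, 0.01)
```

By t = 100 both components agree with 2/3 to within roundoff-level tolerance, consistent with global asymptotic stability of the coexistence equilibrium.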


The discriminant r1 r2 P (c) = (α1 + α2 c)2 − 4 c K1 K2   r1 r2 2 = α2 c + 2c α1 α2 − 2 + α12 . K1 K2 Since P (0) > 0, the discriminant of quadratic polynomial P (c) is  2   r1 r2 r1 r2 r1 r2 D = α1 α2 − 2 − α12 α22 = 4 − α1 α2 > 0. K1 K2 K1 K2 K1 K2 Hence there are two positive roots c∗1 , c∗2 of P (c) = 0 and there exists c, c∗1 < c < c∗2 such that P (c) < 0. With this choice of c, we have V (x1 , x2 ) < 0 for (x1 , x2 ) 6= (x∗1 , x∗2 ). Thus we complete the proof that (x∗1 , x∗2 ) is globally asymptotically stable. Example 5.2.9 Consider the Van der Pol equation x00 + ε(x2 − 1)x0 + x = 0 and its equivalent “Lienard form” (see Chapter 6)   3 ( x0 = y − ε x3 − x , y 0 = −x.

(5.14)

In the next chapter we shall show that (0, 0) is an unstable focus and the equation has a unique asymptotically stable limit cycle for every ε > 0. The exact location of this cycle in (x, y) plane is extremely difficult to obtain but the above theory allows one to determine a region near (0, 0) in which the limit cycle cannot lie. Such a region can be found by determining the domain of repulsion, i.e., the domain of attraction of (0, 0) with t replaced by −t. This has the same effect of ε < 0. Suppose ε < 0 and let V (x, y) = x2 +y 2 . Then 2   2 x 2 ˙ −1 V (x, y) = −εx 3 and V˙ (x, y) ≤ 0 if x2 < 3. Consider the region G = {(x, y) : V (x, y) < 3/2}. Then G is bounded and V is a Lyapunov function on G. √ √ Furthermore S = {(x, y) : V˙ = 0} = {(x, 0) : x = 3, − 3, 0}. From (5.14), M = {(0, 0)}. Then every solution starting in the circle x2 + y 2 < 3 approaches zero as t → ∞. Finally, the limit cycle of (5.14) with ε > 0 must be outside this circle.
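The last conclusion can be checked numerically (a sketch of mine; ε = 1, the starting point and horizon are arbitrary choices): starting near the unstable focus (0, 0) of (5.14) with ε > 0, the trajectory spirals outward and must leave the disk x² + y² < 3, since the limit cycle lies outside it.

```python
def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

eps = 1.0

def vf(state):
    # Lienard form (5.14) of the Van der Pol equation
    x, y = state
    return [y - eps * (x ** 3 / 3.0 - x), -x]

state = [0.01, 0.01]               # start very close to the origin
max_r2 = state[0] ** 2 + state[1] ** 2
for _ in range(50000):             # integrate up to t = 50
    state = rk4_step(vf, state, 1e-3)
    max_r2 = max(max_r2, state[0] ** 2 + state[1] ** 2)

escaped = max_r2 > 3.0
```

The well-known Van der Pol cycle reaches |x| ≈ 2, so x² alone already exceeds 3 on part of the cycle, confirming that the cycle cannot fit inside the circle x² + y² = 3.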

5.3 Simple Oscillatory Phenomena

Consider the second order scalar equation
u″ + g(u) = 0    (5.15)
or the equivalent system
u′ = v,
v′ = −g(u),    (5.16)
where g(u) is continuous on R. System (5.16) is a Hamiltonian system with the Hamiltonian function, or total energy, given by E(u, v) = v²/2 + G(u), where G(u) = ∫₀^u g(s)ds. As in Example 5.2.2, it is easy to show that dE/dt ≡ 0. Hence the orbits of solutions of (5.16) in the (u, v)-plane must lie on the level curves E(u, v) = h, h a constant, and we have v = ±√(2(h − G(u))). We note that g(u) need not satisfy ug(u) > 0 for u ≠ 0 and G(u) → +∞ as |u| → ∞, as it did in Example 5.2.2.

Example 5.3.1 Simple Pendulum (for the derivation see Example 1.1.9)
θ″(t) + (g/l) sin θ(t) = 0.    (5.17)
Let k² = g/l, g(θ) = k² sin θ, G(θ) = k²(1 − cos θ). G(θ) has the graph shown in Fig. 5.8(a). The curves E(θ, ψ) = h, ψ = θ′, are shown in Fig. 5.8(b). We note that (5.17) can be written as
θ′ = ψ,
ψ′ = −k² sin θ.    (5.18)
The equilibria of (5.18) are (nπ, 0), n = 0, ±1, ±2, .... It is easy to verify from the variational matrix
[ 0 1 ]
[ −k² cos θ 0 ]   at θ = nπ
that the equilibria (2nπ, 0) are linear centers and ((2n + 1)π, 0) are saddles. For 0 < h < 2k², the level curves E(θ, ψ) = h are periodic orbits. For h = 2k², we obtain heteroclinic orbits connecting the saddles ((2n − 1)π, 0) and ((2n + 1)π, 0). If h > 2k², then we obtain level curves lying above and below the heteroclinic orbits. Physically this means "whirling": the pendulum rotates around the axis θ = 0 infinitely many times provided the initial velocity θ′ = ψ is sufficiently large.
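The two regimes of Example 5.3.1 can be seen numerically (a sketch of mine with k = 1; step sizes and horizons are arbitrary choices): energy E(θ, ψ) = ψ²/2 + (1 − cos θ) is conserved along (5.18); with h = 1/2 < 2k² the pendulum oscillates and |θ| stays below π, while with h = 9/2 > 2k² it whirls and θ grows past 2π.

```python
import math

def rk4_step(f, state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def vf(state):
    theta, psi = state
    return [psi, -math.sin(theta)]     # (5.18) with k^2 = 1

E = lambda s: 0.5 * s[1] ** 2 + (1.0 - math.cos(s[0]))

results = {}
for label, psi0 in [("oscillation", 1.0), ("whirling", 3.0)]:
    state = [0.0, psi0]                # energy h = psi0^2 / 2
    e0 = E(state)
    max_theta = 0.0
    for _ in range(20000):             # integrate up to t = 20
        state = rk4_step(vf, state, 1e-3)
        max_theta = max(max_theta, abs(state[0]))
    results[label] = (max_theta, abs(E(state) - e0))
```

For h = 1/2 the exact turning point is θ = arccos(1 − h) = π/3, well inside (−π, π); for h = 9/2 the angular velocity never drops below √(2(h − 2k²)), so θ increases without bound.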


Fig. 5.8

Example 5.3.2 Suppose the potential G(u) of (5.16) has the graph shown in Fig. 5.9(a), with A, B, C, D being extreme points of G. The orbits of the solution curves are sketched in Fig. 5.9(b), all curves of course being symmetric with respect to the u-axis. The equilibria corresponding to A, B, C, D are also labeled A, B, C, D in the phase plane. The points A, C are centers, B is a saddle point, and D is like the coalescence of a saddle and a center. The curves connecting B to B are homoclinic orbits.

Example 5.3.3 Simple Pendulum with Damping. The equation of motion is
θ″ + βθ′ + k² sin θ = 0,   β > 0.    (5.19)
The equivalent system is
θ′ = ψ,
ψ′ = −k² sin θ − βψ.    (5.20)
If E(θ, ψ) = ψ²/2 + k²(1 − cos θ) is the energy of the system, then the derivative of E along the solution of (5.20) satisfies dE/dt = −βψ² ≤ 0.


Fig. 5.9

Since dE/dt ≤ 0, ψ(t) is bounded for all t ≥ 0, and from the second equation of (5.20), ψ′(t) is also bounded for t ≥ 0. We claim lim_{t→∞} ψ(t) = 0. If ψ(t) does not approach zero as t → ∞, then there exist ε > 0, δ > 0 and a sequence {tn}, tn → ∞, such that ψ²(t) > ε on nonoverlapping intervals In = [tn − δ, tn + δ]. Let t > 0 be sufficiently large and let p(t) = max{n : tn < t}. Then
E(θ(t), ψ(t)) − E(θ(0), ψ(0)) = ∫₀^t (dE/dt) ds ≤ −∫₀^t βψ²(s) ds ≤ −Σ_{tn < t} ∫_{tn−δ}^{tn+δ} βψ²(s) ds ≤ −2βδε p(t).
Since p(t) → ∞ as t → ∞ while E ≥ 0, this is a contradiction, and hence lim_{t→∞} ψ(t) = 0.

a > 0, b > 0.

Determine the maximal region of asymptotic stability of the zero solution which can be obtained by using the total energy of the system as a Lyapunov function.

Exercise 5.3 Verify that the origin is asymptotically stable and its basin of attraction contains the unit disk for the system
ẋ = y,
ẏ = −x − (1 − x²)y.
(Basin of attraction for the origin := the set of all initial points that flow to the origin as t → ∞.)

Exercise 5.4 Let f ∈ C¹(Rn), n ∈ N, and let x(t) be a solution of
ẋ = f(x).    (5.22)
(1) Suppose lim_{t→∞} x(t) = x̄ ∈ Rn. Show that x̄ must be an equilibrium of (5.22).
(2) Find an example of f so that there exists a solution x(t) satisfying lim_{t→∞} ẋ(t) = 0, yet lim_{t→∞} x(t) ≠ x̄ for any equilibrium x̄ of (5.22). (Hint: Try n = 1.)
(3) Suppose every equilibrium of (5.22) is isolated, lim_{|x|→∞} f(x) ≠ 0 and lim_{t→∞} ẋ(t) = 0. Show that lim_{t→∞} x(t) = x̄ for some equilibrium x̄ of (5.22).

Exercise 5.5 Consider the system
ẋ = y, ẏ = z − ay, ż = −cx − F(y),
where F(0) = 0, a > 0, c > 0, aF(y)/y > c for y ≠ 0, and ∫₀^y [F(ξ) − cξ/a] dξ → ∞ as |y| → ∞. If F(y) = ky where k > c/a, verify that the characteristic


roots of the linear system have negative real parts. Show that the origin is asymptotically stable even when F is nonlinear.
Hint: Choose V as a quadratic form plus the term ∫₀^y F(s) ds.

Exercise 5.6 Suppose there is a positive definite matrix Q such that J^T(x)Q + QJ(x) is negative definite for all x ≠ 0, where J(x) is the Jacobian matrix of f(x). Prove that the solution x = 0 of ẋ = f(x), f(0) = 0, is globally asymptotically stable.
Hint: Prove and make use of the fact that f(x) = ∫₀¹ J(sx)x ds.

Exercise 5.7 Suppose h(x, y) is a positive definite function such that for any constant c, 0 < c < ∞, the curve defined by h(x, y) = c is a Jordan curve, and h(x, y) → ∞ as x² + y² → ∞. Discuss the behavior in the phase plane of the solutions of the equations
ẋ = εx + y − xh(x, y),
ẏ = εy − x − yh(x, y),
for all values of ε in (−∞, ∞).

Exercise 5.8 Consider the system
x′ = x³ + yx²,
y′ = −y + x³.
Show that (0, 0) is an unstable equilibrium by using the Lyapunov function V(x, y) = x²/2 − y²/2. We note that the variational matrix at (0, 0) is
[ 0 0 ]
[ 0 −1 ]
and the eigenvalue λ1 = 0 is not hyperbolic.

Exercise 5.9 Consider the n-dimensional system ẋ = f(x) + g(t), where x^T f(x) ≤ −k|x|², k > 0, for all x, and |g(t)| ≤ M for all t. Find a sphere of sufficiently large radius so that all trajectories enter this sphere. Show that this equation has an ω-periodic solution if g is ω-periodic. If, in addition, (x − y)^T [f(x) − f(y)] < 0 for all x ≠ y, show that there is a unique ω-periodic solution.
Hint: Use Brouwer's fixed point theorem (see p. 217).


Exercise 5.10 We say an autonomous system x′ = f(x), f : D ⊆ Rn → Rn, D open, is dissipative if there exists a bounded domain Ω ⊆ Rn such that any trajectory ϕ(t, x0), x0 ∈ Rn, of x′ = f(x) will enter Ω and stay there, i.e., there exists T = T(x0) such that ϕ(t, x0) ∈ Ω for all t ≥ T. Show that the Lorenz system
x′ = σ(y − x),
y′ = ρx − y − xz,
z′ = xy − βz,   σ, ρ, β > 0,
is dissipative.
Hint: Consider the function
V(x, y, z) = (1/2)(x² + σy² + σz²) − σρz.
Prove that the region enclosed by the ellipsoid V(x, y, z) = c, with c > 0 sufficiently large, is the desired bounded domain Ω.

Exercise 5.11 Consider the predator-prey system with Holling type II functional response
dx/dt = rx(1 − x/K) − [mx/(a + x)] y,
dy/dt = [mx/(a + x) − d] y,
x(0) > 0, y(0) > 0.
Prove that the equilibrium (x*, y*) is globally asymptotically stable if x* > (K − a)/2, by using the following Lyapunov function:
V(x, y) = ∫_{x*}^x (ξ − x*)/ξ dξ + c ∫_{y*}^y s^{θ−1}(s − y*) ds,
for some c > 0, θ > 0.

Exercise 5.12 Consider the predator-prey system of Gause type:
dx/dt = xg(x) − yp(x),    (*)
dy/dt = y[cp(x) − q(x)],
where x and y represent the prey density and predator density respectively, satisfying


(H1) g(0) = 0. There exists K > 0 such that g(K) = 0 and (x − K)g(x) < 0 for x ≠ K.
(H2) p(0) = 0, p′(x) > 0.
(H3) q(0) > 0, q′(x) ≤ 0 for x ≥ 0, lim_{x→∞} q(x) = q∞ > 0.

Let x* > 0 satisfy cp(x*) = q(x*) and x* < K. Let y* = x*g(x*)/p(x*) and
H(x*) = x*g(x*) (d/dx)[ln(xg(x)/p(x))] |_{x=x*}.
Show that if H(x*) < 0 then (x*, y*) is asymptotically stable; if H(x*) > 0 then (x*, y*) is an unstable focus. Hence the stability condition can be stated graphically as follows: if the prey isocline y = xg(x)/p(x) is decreasing (increasing) at x*, then (x*, y*) is asymptotically stable (an unstable focus).

Exercise 5.13 Show that for the predator-prey system (*), if [xg(x)/p(x) − y*](x − x*) ≤ 0 for all x ≥ 0, then (x*, y*) is globally stable in R²₊.
(Hint: Use the Lyapunov function V(x, y) = ∫_{x*}^x [cp(ξ) − q(ξ)]/p(ξ) dξ + ∫_{y*}^y (ξ − y*)/ξ dξ.)

Exercise 5.14 Let A^i = (a^i_{jk}) be an n × n symmetric matrix for i = 1, 2, ..., n. Assume a^i_{jk} = a^k_{ij} = a^j_{ik} for all i, j, k = 1, 2, ..., n. For the system dx_i/dt = x^T A^i x, i = 1, ..., n (so that dx/dt = (1/3)∇V(x)), with V(x) = Σ_{i=1}^n x_i x^T A^i x, prove that V̇(x) = 3 Σ_{i=1}^n (x^T A^i x)² ≥ 0.

Exercise 5.15 Prove Lemma 5.4.1.

Exercise 5.16 Sketch the orbits of the gradient system in Example 5.4.2 for λ = 0, λ < 0 and λ > 0.

Exercise 5.17 Consider the system of differential equations
dx/dt = f(x),    (*)
where f : Ω ⊆ Rn → Rn is assumed to be continuous. We call V a Lyapunov function on G ⊆ Ω for (*) if
(1) V is continuous on G;
(2) if V is not continuous at x̄ ∈ cl(G) \ G, then lim_{x∈G, x→x̄} V(x) = +∞;
(3) V̇ = grad V · f ≤ 0 on G.


Prove the following invariance principle:
Assume that V is a Lyapunov function for (*) on G. Define S = {x ∈ cl(G) ∩ Ω : V̇(x) = 0} and let M be the largest invariant set in S. Then every bounded trajectory (for t ≥ 0) of (*) that remains in G for t ≥ 0 approaches the set M as t → +∞.
Remark: In the example of the Lotka-Volterra predator-prey system, we did apply the above modified invariance principle.

Exercise 5.18 Consider the mathematical model of n microorganisms competing for a single limited nutrient in the chemostat [SW; Hsu]:
S′(t) = (S^(0) − S)D − Σ_{i=1}^n (1/yi) [mi S/(ai + S)] xi,
xi′ = [mi S/(ai + S) − di] xi,
S(0) ≥ 0, xi(0) > 0, i = 1, 2, ..., n,
where S(t) is the concentration of the single limiting nutrient, xi(t) is the concentration of the i-th microorganism, S^(0) is the input concentration, D is the dilution rate, yi is the yield constant of the i-th microorganism, and mi, ai, di are the maximal birth rate, Michaelis-Menten constant and death rate of the i-th species respectively. Show that if
0 < λ1 < λ2 ≤ ... ≤ λn ≤ S^(0),   λi = ai / (mi/di − 1) > 0, i = 1, ..., n,
then
lim_{t→∞} S(t) = λ1, lim_{t→∞} x1(t) = x1* > 0, and lim_{t→∞} xi(t) = 0, i = 2, ..., n, where x1* = (S^(0) − λ1) D y1 / d1.
Hint: Construct the Lyapunov function
V(S, x1, ..., xn) = ∫_{λ1}^S (ξ − λ1)/ξ dξ + c1 ∫_{x1*}^{x1} (η − x1*)/η dη + Σ_{i=2}^n ci xi,
where ci > 0, i = 1, 2, ..., n, are to be determined, and apply Exercise 5.17.

t→∞

t→∞

lim xi (t) = 0, i = 2, . . . , n, x∗1 =

t→∞

and  S (0) − λ1 Dy1 . d1

Hint: Construct Lyapunov function Z S Z x1 n X ξ − λ1 η − x∗1 V (S, x1 , . . . , xn ) = dξ + c1 dη + ci xi , ξ η λ1 x∗ 1 i=2 where ci > 0 to be determined, i = 1, 2, . . . , n and apply Exercise 5.9. Exercise 5.19 Consider x00 + g(x) = 0 or equivalent system x0 = y, y 0 = −g(x).


Verify that this is a conservative system with the Hamiltonian function H(x, y) = y²/2 + G(x), where G(x) = ∫₀^x g(s)ds. Show that any periodic orbit of this system must intersect the x-axis at two points (a, 0), (b, 0), a < b. Using the symmetry of the periodic orbit with respect to the x-axis, show that the minimal period T of the periodic orbit passing through two such points is given by
T = 2 ∫_a^b du / √(2[G(b) − G(u)]).

Exercise 5.20 Sketch the phase portrait for the unforced, undamped Duffing oscillator
x″ + x³ − x = 0
by considering the energy E = y²/2 + V(x), where V(x) = x⁴/4 − x²/2 is a double-well potential. Show that there is a homoclinic orbit. Also consider the same problem for the unforced, damped oscillator x″ + δx′ + x³ − x = 0, δ > 0.

Exercise 5.21 Sketch the phase portraits of x″ − x + x² = 0 and x″ − (1 − x)e^{−x} = 0.

Exercise 5.22 (Transport Theorem) Let φt denote the flow of the system ẋ = f(x), x ∈ Rn, and let Ω be a bounded region in Rn. Define
V(t) = ∫_{φt(Ω)} dx1 dx2 ⋯ dxn

and recall that the divergence of a vector field f = (f1, f2, ..., fn) on Rn with the usual Euclidean structure is
div f = Σ_{i=1}^n ∂fi/∂xi.
(1) Use Liouville's theorem and the change of variables formula for multiple integrals to prove that
V̇(t) = ∫_{φt(Ω)} div f(x) dx1 dx2 ⋯ dxn.

(2) Prove: The flow of a vector field whose divergence is everywhere negative contracts volume.


(3) Suppose that g : Rn × R → R and, for notational convenience, let dx = dx1 dx2 ⋯ dxn. Prove the transport theorem:
(d/dt) ∫_{φt(Ω)} g(x, t) dx = ∫_{φt(Ω)} [gt(x, t) + div(gf)(x, t)] dx.
(4) Suppose that the mass in every open set remains unchanged as it is moved by the flow (that is, mass is conserved), and let ρ denote the corresponding mass-density. Prove that the density satisfies the equation of continuity
∂ρ/∂t + div(ρf) = 0.
(5) The flow of the system ẋ = y, ẏ = x is area preserving. Show directly that the area of the unit disk is unchanged when it is moved forward two time units by the flow.

Exercise 5.23 Consider ẋ = Ax + h(t, x), where A is a constant symmetric matrix with all of its eigenvalues positive, h is continuously differentiable and has period 1 in t, and ‖h(t, x)‖ ≤ √‖x‖ + 2004. Prove that there is a periodic solution.

Exercise 5.24 Hamiltonian flows preserve area: Given a C¹ function H : R² → R, the planar system
q̇ = ∂H(q, p)/∂p,
ṗ = −∂H(q, p)/∂q
is called a Hamiltonian system with the Hamiltonian function H.
(1) Show that the total energy of a second-order conservative system is a Hamiltonian function.
(2) Let D0 be a region, say, with a smooth boundary, in the plane and consider the image of D0 under the flow of a planar differential equation. That is, consider the set D(t) = {ϕ(t, x0) : x0 ∈ D0}, where ϕ(t, x0) is the solution of the equation ẋ = f(x) through x0. If A(t) is the area of D(t), prove that
Ȧ(t) = ∫_{D(t)} div f(x) dx.

Suggestion: Use the fact that
A(t) = ∫_{D0} det [∂ϕ(t, x)/∂x] dx.


(3) Prove now that the flow of a Hamiltonian system preserves area. Exercise 5.25 Find a Hamiltonian for the system x˙1 = − cos x1 sin x2 ,

x˙2 = sin x1 cos x2

and draw the flow.

Exercise 5.26 Rotating pendulum: Consider a pendulum of mass m and length l constrained to oscillate in a plane rotating with angular velocity w about a vertical line. If u denotes the angular deviation of the pendulum from the vertical and I is the moment of inertia, then
I ü − m w² l² sin u cos u + m g l sin u = 0.
By changing the time scale, this is equivalent to
ü − (cos u − λ) sin u = 0,
where λ = g/(w² l). Discuss the flows for each λ > 0, paying particular attention to the bifurcations in the flow.

Exercise 5.27 Show that each of the systems below is a gradient system, determine the values of the parameters for which the vector field is generic, and discuss the bifurcations in the flows:
(1) ẋ1 = x1 + βx2, ẋ2 = x2 + βx1, for β ∈ R;
(2) ẋ1 = μ(x2 − x1) + x1(1 − x1²), ẋ2 = −μ(x2 − x1) + x2(1 − x2²), for μ > 0.

Exercise 5.28 Analyze the flow of the gradient system x˙1 = −x31 − bx1 x22 + x1 ,

x˙2 = −x32 − bx21 x2 + x2

coming from vibrating membranes for the parameter values b > 1. Exercise 5.29 Find the critical (= equilibrium) points for the scalar equation x ¨ + 2bx˙ + sin(x3 ) = 0 where b is a positive constant. The linearization of this equation about a stable critical point (ξ, 0) in the (x, x)-plane ˙ with ξ > 0 has three qualitatively different kinds of phase portraits, one for small positive b, one for large positive b, and one for a single positive value of b, with these ranges depending on ξ. Give a careful sketch of such a phase portrait for a large value of b. Be sure to identify important directions with their slopes and include arrows.
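For Exercise 5.29, writing the equation as the system ẋ = v, v̇ = −2bv − sin(x³), the critical points are (ξ, 0) with ξ³ = kπ, and the linearization at a stable one has eigenvalues −b ± √(b² − 3ξ² cos(ξ³)). The transition between the "small b" and "large b" portraits can be seen numerically; a minimal sketch in plain Python (the function names and the sampled values of b are ours, not the book's):

```python
import cmath
import math

def eigenvalues(xi, b):
    # Linearization of x'' + 2 b x' + sin(x^3) = 0 at the critical point (xi, 0):
    # Jacobian [[0, 1], [-c, -2b]] with c = 3 xi^2 cos(xi^3); eigenvalues -b ± sqrt(b^2 - c).
    c = 3.0 * xi**2 * math.cos(xi**3)
    s = cmath.sqrt(b * b - c)
    return (-b + s, -b - s)

# First positive critical point with cos(xi^3) > 0 (stable): xi^3 = 2*pi.
xi = (2.0 * math.pi) ** (1.0 / 3.0)
b_star = math.sqrt(3.0) * xi          # discriminant vanishes at b = b*

lam_small = eigenvalues(xi, 0.5 * b_star)   # b < b*: complex pair (stable focus)
lam_large = eigenvalues(xi, 2.0 * b_star)   # b > b*: two negative real roots (stable node)
```

For b below b* the eigenvalues form a complex-conjugate pair (spiraling portrait); above b* they are two distinct negative reals (node), matching the three qualitative cases in the exercise.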


Chapter 6

TWO-DIMENSIONAL SYSTEMS

6.1

Poincaré-Bendixson Theorem

Assume f : Ω ⊆ R² → R², where Ω is open in R². Consider the two-dimensional autonomous system

x′ = f(x).    (6.1)

Let ϕ(t) be a solution of (6.1) for t ≥ 0 and ω(ϕ) be the ω-limit set of ϕ(t). We recall that if ϕ(t) is bounded for t ≥ 0, then ω(ϕ) is nonempty, compact, connected and invariant. The following Poincaré-Bendixson Theorem characterizes the ω-limit set ω(ϕ) of a solution ϕ(t) of the two-dimensional autonomous system (6.1).

Theorem 6.1.1 (Poincaré-Bendixson Theorem) If the solution ϕ(t) of (6.1) is bounded for all t ≥ 0, then either (i) ω(ϕ) contains an equilibrium, or (ii) (a) ϕ(t) is periodic, or (b) ω(ϕ) is a periodic orbit.

Remark 6.1.1 In case (ii), we assume that ω(ϕ) contains no equilibria, and in case (b) we call ω(ϕ) a limit cycle. There is a difference between a periodic orbit and a limit cycle: a limit cycle must be a periodic orbit, but not vice versa. In Example 5.2.4 the periodic orbits are not limit cycles.

Remark 6.1.2 The Poincaré-Bendixson Theorem is one of the most important results on nonlinear dynamical systems. It says that the dynamical possibilities in the phase plane are very limited: if a trajectory is confined to a closed, bounded region such that its ω-limit set contains no fixed points, then the trajectory must eventually approach a closed orbit. Nothing else is possible. This result depends crucially on the Jordan Curve Theorem in the plane. In higher-dimensional systems (n ≥ 3; for example, the Lorenz system), the Poincaré-Bendixson Theorem is no longer true and we may have "strange attractors". However, for three-dimensional competitive systems [Hir1; SW] and n-dimensional feedback systems [M-PS], the Poincaré-Bendixson Theorem holds. For the Poincaré-Bendixson Theorem on two-dimensional manifolds, the interested reader may consult the book [Ha].

Remark 6.1.3 To show the existence of a limit cycle by the Poincaré-Bendixson Theorem, we usually apply the standard trick of constructing a trapping region R which contains no equilibrium. Then there exists a closed orbit in R (see Fig. 6.1).

Fig. 6.1

Example 6.1.1 Consider the system

r′ = r(1 − r²) + µr cos θ,
θ′ = 1.

Show that a closed orbit exists for µ ∈ (0, 1). We construct an annulus 0 < r_min ≤ r ≤ r_max as the desired trapping region. To find r_min, we require r′ = r(1 − r²) + µr cos θ > 0 for all θ. Since cos θ ≥ −1,

r′ ≥ r(1 − r²) − µr = r[(1 − r²) − µ].

Hence any r_min < √(1 − µ) will work as long as µ < 1. By a similar argument, the flow is inward on the outer circle if r_max > √(1 + µ). Therefore a closed orbit exists for all µ ∈ (0, 1), and it lies somewhere in the annulus

0.999 √(1 − µ) < r < 1.001 √(1 + µ).
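The trapping estimates above can be checked numerically; here is a minimal sketch in plain Python with a hand-rolled RK4 step (µ = 0.5 and the 0.9/1.1 margins are our illustrative choices):

```python
import math

mu = 0.5  # any mu in (0, 1)

def rhs(y):
    r, th = y
    return (r * (1.0 - r * r) + mu * r * math.cos(th), 1.0)

def rk4_step(y, dt):
    # one classical Runge-Kutta step for the autonomous system above
    k1 = rhs(y)
    k2 = rhs((y[0] + 0.5 * dt * k1[0], y[1] + 0.5 * dt * k1[1]))
    k3 = rhs((y[0] + 0.5 * dt * k2[0], y[1] + 0.5 * dt * k2[1]))
    k4 = rhs((y[0] + dt * k3[0], y[1] + dt * k3[1]))
    return (y[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

r_min = 0.9 * math.sqrt(1.0 - mu)   # strictly inside sqrt(1 - mu): flow points outward
r_max = 1.1 * math.sqrt(1.0 + mu)   # strictly outside sqrt(1 + mu): flow points inward

def radial_range(r0, t_end=40.0, dt=0.001):
    # integrate from (r0, 0) and record the extreme radii visited
    y, lo, hi = (r0, 0.0), r0, r0
    for _ in range(int(t_end / dt)):
        y = rk4_step(y, dt)
        lo, hi = min(lo, y[0]), max(hi, y[0])
    return lo, hi

lo_in, hi_in = radial_range(r_min)     # trajectory started on the inner circle
lo_out, hi_out = radial_range(r_max)   # trajectory started on the outer circle
```

Both trajectories remain inside the annulus r_min ≤ r ≤ r_max for all time computed, consistent with the existence of a closed orbit there.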


Example 6.1.2 Consider the following system for glycolytic oscillation arising from biochemistry:

x′ = −x + ay + x²y,
y′ = b − ay − x²y.

From Fig. 6.2,

Fig. 6.2

(1) On S₁, n⃗ = (−1, 0):
n⃗ · f(x) = −(−x + ay + x²y)|_{x=0, 0≤y≤b/a} = −ay ≤ 0.
(2) On S₂, n⃗ = (0, 1):
n⃗ · f(x) = (b − ay − x²y)|_{y=b/a, 0≤x≤b} = −x² · b/a ≤ 0.
(3) On S₃, n⃗ = (1, 1):
n⃗ · f(x) = (b − x)|_{b≤x≤x̃} ≤ 0.
(4) On S₄, n⃗ = (1, 0):
n⃗ · f(x) = (−x + ay + x²y)|_{x=x̃, 0≤y≤ỹ},
where x̃ + ỹ = b + b/a and x̃ is close to b + b/a (i.e. ỹ is close to 0); then
n⃗ · f(x) ≤ ỹ(x̃² + a) − x̃ ≤ 0.
(5) On S₅, n⃗ = (0, −1):
n⃗ · f(x) = −(b − ay − x²y)|_{0≤x≤x̃, y=0} = −b < 0.

Thus the region is a trapping region. Next we verify that the equilibrium (x*, y*), where x* = b, y* = b/(a + b²), is an unstable focus if

τ = −(b⁴ + (2a − 1)b² + (a + a²))/(a + b²) > 0.

Thus by the Poincaré-Bendixson Theorem, there exists a limit cycle for τ > 0.
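A quick numerical experiment confirms the limit cycle; a sketch in plain Python with RK4 (a = 0.08, b = 0.6 are illustrative values for which τ > 0, not values from the text):

```python
a, b = 0.08, 0.6   # illustrative parameters with tau > 0

# trace condition at the equilibrium (x*, y*) = (b, b/(a + b^2))
tau = -(b**4 + (2 * a - 1) * b**2 + (a + a * a)) / (a + b * b)

def rhs(p):
    x, y = p
    return (-x + a * y + x * x * y, b - a * y - x * x * y)

def rk4_step(p, dt):
    k1 = rhs(p)
    k2 = rhs((p[0] + 0.5 * dt * k1[0], p[1] + 0.5 * dt * k1[1]))
    k3 = rhs((p[0] + 0.5 * dt * k2[0], p[1] + 0.5 * dt * k2[1]))
    k4 = rhs((p[0] + dt * k3[0], p[1] + dt * k3[1]))
    return (p[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

p, dt = (0.0, 2.0), 0.01
xs = []
for i in range(20000):            # integrate to t = 200
    p = rk4_step(p, dt)
    if i >= 14000:                # record x(t) after transients
        xs.append(p[0])

x_span = max(xs) - min(xs)        # sustained oscillation amplitude in x
```

With τ > 0 the equilibrium repels nearby orbits, and the recorded x(t) keeps oscillating with a fixed amplitude instead of settling down, as the Poincaré-Bendixson argument predicts.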

Example 6.1.3 Consider the Lotka-Volterra predator-prey system

x′ = ax − bxy,
y′ = cxy − dy.

Each periodic orbit is neutrally stable. There is no limit cycle. We note that the system is a conservative system.

Example 6.1.4 Consider the predator-prey system with Holling type II functional response

dx/dt = γx(1 − x/K) − (mx/(a + x))y = f(x, y),
dy/dt = (mx/(a + x) − d)y = g(x, y),
x(0) > 0, y(0) > 0.

Assume (K − a)/2 > λ, where λ = x* and (x*, y*) is the equilibrium. We note that in Example 4.2.3 we proved that (x*, y*) is an unstable focus. First we show that the solution (x(t), y(t)) is positive and bounded. It is easy to verify that x(t) > 0, y(t) > 0 for all t ≥ 0. Then from the differential inequality

dx/dt ≤ γx(1 − x/K)

it follows that for small ε > 0, x(t) ≤ K + ε for t ≥ T(ε). Consider

dx/dt + dy/dt ≤ γx − dy ≤ (γ + d)K − d(x + y);

then x(t) + y(t) ≤ ((γ + d)/d)K for t large. Hence x(t), y(t) are bounded. From the Poincaré-Bendixson Theorem, there exists a limit cycle in the first quadrant of the x-y plane (see Remark 6.1.4 and Fig. 6.3).
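The conclusion of Example 6.1.4 can be observed numerically. A minimal sketch in plain Python (γ = 1, K = 6, a = 1, m = 2, d = 1 are our illustrative values, chosen so that λ = ad/(m − d) = 1 < (K − a)/2):

```python
gam, K, a, m, d = 1.0, 6.0, 1.0, 2.0, 1.0
lam = a * d / (m - d)             # x* = lambda; here (K - a)/2 > lambda

def rhs(p):
    x, y = p
    return (gam * x * (1 - x / K) - m * x * y / (a + x),
            (m * x / (a + x) - d) * y)

def rk4_step(p, dt):
    k1 = rhs(p)
    k2 = rhs((p[0] + 0.5 * dt * k1[0], p[1] + 0.5 * dt * k1[1]))
    k3 = rhs((p[0] + 0.5 * dt * k2[0], p[1] + 0.5 * dt * k2[1]))
    k4 = rhs((p[0] + dt * k3[0], p[1] + dt * k3[1]))
    return (p[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

p, dt = (1.2, 0.8), 0.01          # start near the unstable focus
positive, xs = True, []
for i in range(30000):            # integrate to t = 300
    p = rk4_step(p, dt)
    positive = positive and p[0] > 0 and p[1] > 0
    if i >= 20000:                # record x(t) after transients
        xs.append(p[0])
```

The orbit stays positive and bounded (x below roughly K), and after the transient it oscillates with sustained amplitude: a limit cycle in the first quadrant.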


Fig. 6.3

To establish the global asymptotic stability of the equilibrium by using the Poincaré-Bendixson Theorem, it suffices to eliminate the possibility of the existence of periodic solutions.

Theorem 6.1.2 (Bendixson Negative Criterion) Consider the system

dx/dt = f(x, y),
dy/dt = g(x, y).    (6.2)

If ∂f/∂x + ∂g/∂y is of the same sign in a simply connected domain D in R², then there are no periodic orbits in D.

Proof. Suppose, on the contrary, there is a periodic orbit C = {(x(t), y(t))}, 0 ≤ t ≤ T. By simple connectedness, the region int C enclosed by C lies in D. Then, by Green's Theorem,

∬_{int C} (∂f/∂x + ∂g/∂y) dx dy = ∮_C (−g(x, y)) dx + f(x, y) dy
= ∫₀ᵀ [−g(x(t), y(t))x′(t) + f(x(t), y(t))y′(t)] dt
= ∫₀ᵀ [−g f + f g] dt = 0.

But ∂f/∂x + ∂g/∂y is of the same sign, so ∬_{int C} (∂f/∂x + ∂g/∂y) dx dy is either positive or negative. This is a contradiction.

Example 6.1.5 Show that x″ + f(x)x′ + g(x) = 0 cannot have any periodic solution in a region where f(x) is of the same sign (i.e. a region with only "positive damping" or only "negative damping"). Write the equation as the system

x′ = y = F(x, y),
y′ = −f(x)y − g(x) = G(x, y).


Then

∂F/∂x + ∂G/∂y = −f(x) is of the same sign.

We leave it as an exercise to generalize the Bendixson Negative Criterion to the following Dulac's criterion, which is much more powerful.

Theorem 6.1.3 (Dulac's criterion) Let h(x, y) be continuously differentiable in a simply connected region D. For the system (6.2), if ∂(fh)/∂x + ∂(gh)/∂y is of the same sign in D, then (6.2) has no periodic orbit in D.

Example 6.1.6 Consider the Lotka-Volterra two-species competition model

x′ = γ₁x(1 − x/K₁) − αxy = f(x, y),
y′ = γ₂y(1 − y/K₂) − βxy = g(x, y).

Show that there is no periodic orbit in the first quadrant. If we try the Bendixson Negative Criterion, then

∂f/∂x + ∂g/∂y = (γ₁ − 2(γ₁/K₁)x − αy) + (γ₂ − 2(γ₂/K₂)y − βx),

which is not of the same sign. Choose h(x, y) = x^ξ y^η, where ξ, η ∈ R are to be determined. Then a routine computation gives

Δ = ∂(fh)/∂x + ∂(gh)/∂y
= x^ξ y^η [(γ₁ + γ₂ + ξγ₁ + ηγ₂) + ((−2 − ξ)(γ₁/K₁) − β(1 + η))x + ((−2 − η)(γ₂/K₂) − α(1 + ξ))y].

Choose ξ = η = −1; then Δ = x^{−1}y^{−1}(−(γ₁/K₁)x − (γ₂/K₂)y) < 0 for x, y > 0. Thus, from Dulac's criterion, we complete the proof.

Now we return to the proof of the Poincaré-Bendixson Theorem [MM] for the system

x′ = f(x),    f : Ω ⊆ R² → R².
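As an aside, the sign computation of Example 6.1.6 can be sanity-checked numerically: for ξ = η = −1 the Dulac expression reduces to Δ = −γ₁/(K₁y) − γ₂/(K₂x). The sketch below (plain Python; the parameter values are illustrative, not from the text) compares a central finite-difference divergence of (fh, gh) against this closed form:

```python
import random

g1, g2, K1, K2, alpha, beta = 1.0, 1.2, 2.0, 3.0, 0.5, 0.4  # illustrative parameters

def f(x, y): return g1 * x * (1 - x / K1) - alpha * x * y
def g(x, y): return g2 * y * (1 - y / K2) - beta * x * y
def h(x, y): return 1.0 / (x * y)        # Dulac function x^(-1) y^(-1)

def delta(x, y, eps=1e-6):
    # central finite differences for d(fh)/dx + d(gh)/dy
    dfh = (f(x + eps, y) * h(x + eps, y) - f(x - eps, y) * h(x - eps, y)) / (2 * eps)
    dgh = (g(x, y + eps) * h(x, y + eps) - g(x, y - eps) * h(x, y - eps)) / (2 * eps)
    return dfh + dgh

random.seed(1)
points = [(random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)) for _ in range(500)]
all_negative = all(delta(x, y) < 0 for x, y in points)
closed_form_ok = abs(delta(1.0, 1.0) - (-(g1 / K1 + g2 / K2))) < 1e-5
```

Every sampled point in the open first quadrant gives Δ < 0, in agreement with the Dulac argument.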


The proof needs the following Jordan Curve Theorem.

Jordan Curve Theorem: A simple closed curve separates the plane into two parts, Ω_i and Ω_e. The interior component Ω_i is bounded while the exterior component Ω_e is unbounded.

Given a closed line segment L = ξ₁ξ₂, let b⃗ = ξ₂ − ξ₁ and let a⃗ be a normal vector to L (see Fig. 6.4).

Fig. 6.4

We say that a continuous curve ϕ : (α, β) → R² crosses L at time t₀ if ϕ(t₀) ∈ L and there exists δ > 0 such that (ϕ(t) − ξ₁, a⃗) > 0 for t₀ − δ < t < t₀ and (ϕ(t) − ξ₁, a⃗) < 0 for t₀ < t < t₀ + δ, or the same with the two time intervals interchanged.

Definition 6.1.1 We say that a closed segment L in R² is a transversal with respect to a continuous vector field f : R² → R² if
(i) each ξ ∈ L is a regular point of the vector field f, i.e. f(ξ) ≠ 0;
(ii) the vector f(ξ) is not parallel to the direction b⃗.

Assume that L is a transversal with respect to vector field f (x). Lemma 6.1.1 Let ξ0 be an interior point of a transversal L, then for any ε > 0 there exists δ > 0 such that, for any x0 ∈ B(ξ0 , δ), ϕ(t, x0 ) crosses L at some time t ∈ (−ε, ε).

Proof. Let the equation of L be g(x) = a1 x1 + a2 x2 − c = 0.


Consider G(t, ξ) = g(ϕ(t, ξ)), where ϕ(t, ξ) is the solution of the I.V.P. x′ = f(x), x(0) = ξ. Then G(0, ξ₀) = g(ξ₀) = 0,

∂G/∂t (t, ξ) = g′(ϕ(t, ξ))f(ϕ(t, ξ)) = a⃗ · f(ϕ(t, ξ)),
∂G/∂t (0, ξ₀) = a⃗ · f(ξ₀) ≠ 0.

By the Implicit Function Theorem, we may express t as a function of ξ; i.e., there exist δ > 0 and a map t : B(ξ₀, δ) → R such that G(t(ξ), ξ) = 0, t(ξ₀) = 0. Hence g(ϕ(t(ξ), ξ)) = 0 and the proof of Lemma 6.1.1 is complete.

Lemma 6.1.2 Let ϕ(t) be an orbit of (6.1) and S = {ϕ(t) : α ≤ t ≤ β}. If S intersects L, then L ∩ S consists of only finitely many points, whose order is monotone with respect to t. If ϕ(t) is periodic, then L ∩ S is a singleton.

Proof. First we show that L ∩ S is finite. If not, then there exists {t_m} ⊂ [α, β] such that ϕ(t_m) ∈ L. We may assume t_m → t₀ ∈ [α, β]. Then ϕ(t_m) → ϕ(t₀) and

lim_{m→∞} (ϕ(t_m) − ϕ(t₀))/(t_m − t₀) = ϕ′(t₀) = f(ϕ(t₀)).

Thus the vector f(ϕ(t₀)) is parallel to L and we obtain a contradiction to the transversality of L. Next we prove the monotonicity. We may have two possibilities, as Fig. 6.5 shows:

Fig. 6.5


We consider case (a) only. From the uniqueness of solutions of the O.D.E., it is impossible for the trajectory to cross the arc P̂Q. Let Σ be the simple closed curve made up of the arc P̂Q and the segment T between P and Q, and let D be the closed region bounded by Σ. Since T is transverse to the flow, at each point of T the trajectory either leaves or enters D. We assert that at every point of T the trajectory leaves D. Let T₋ be the set of points whose trajectories leave D and T₊ be the set of points whose trajectories enter D. From the continuity of the flow, T₋ and T₊ are open sets in T. Since T is the disjoint union of T₊ and T₋ and T is connected, we have that T₊ is empty and T = T₋. The monotonicity of L ∩ S follows. It is obvious from the monotonicity that L ∩ S is a singleton if ϕ(t) is periodic.

Lemma 6.1.3 A transversal L cannot intersect ω(ϕ), the ω-limit set of a bounded trajectory, in more than one point.

Proof. Let ω(ϕ) intersect L at ξ′; then there exists {t′_m} ↑ +∞, t′_{m+1} > t′_m + 2, with ϕ(t′_m) → ξ′. By Lemma 6.1.1, there exists M ≥ 1 sufficiently large such that ϕ must cross L at some time t_m with |t_m − t′_m| < 1 for m ≥ M. From Lemma 6.1.2, ϕ(t_m) ↓ ξ ∈ L ∩ ω(ϕ). From Lemma 6.1.1, we may assume |t_m − t′_m| → 0 as m → ∞. Then from the Mean Value Theorem we have

ϕ(t_m) = ϕ(t′_m) + (ϕ(t_m) − ϕ(t′_m)) = ϕ(t′_m) + (t_m − t′_m)f(ϕ(η_m)).

Since ϕ(t) is bounded, lim_{m→∞} ϕ(t_m) = ξ′ and ξ = ξ′. Hence there exists {t_m} ↑ ∞ such that ϕ(t_m) ↓ ξ. If η is a second point in L ∩ ω(ϕ), then there exists {s_m} ↑ ∞ such that ϕ(s_m) ∈ L and ϕ(s_m) ↓ η or ϕ(s_m) ↑ η. Assume the sequences {s_m}, {t_m} interlace, i.e., t₁ < s₁ < t₂ < s₂ < ···; then {ϕ(t₁), ϕ(s₁), ϕ(t₂), ...} is a monotone sequence. Thus ξ = η.

Lemma 6.1.4 Let ϕ(t) be the solution of (6.1) and ω(ϕ) be the ω-limit set of γ⁺(ϕ).
(i) If ω(ϕ) ∩ γ⁺(ϕ) ≠ ∅, then ϕ(t) is a periodic solution.
(ii) If ω(ϕ) contains a nonconstant periodic orbit Γ, then ω(ϕ) = Γ.

Proof. (i) Let η ∈ ω(ϕ) ∩ γ⁺(ϕ); then η is a regular point and there exists a transversal L through η. Let η = ϕ(τ); then from the invariance of ω(ϕ), γ(η) ⊂ ω(ϕ).


Fig. 6.6

Since η ∈ ω(ϕ), there exists {t′_m} ↑ +∞ such that ϕ(t′_m) → η (see Fig. 6.6). By Lemma 6.1.1, there exists t_m near t′_m with ϕ(t_m) ∈ L, |t_m − t′_m| → 0, and ϕ(t_m) → η as m → ∞, as in the proof of Lemma 6.1.3. Since η = ϕ(τ) ∈ ω(ϕ) and ω(ϕ) is invariant, it follows that ϕ(t_m) ∈ ω(ϕ) for m sufficiently large. From Lemma 6.1.3 and ϕ(t_m) ∈ L ∩ ω(ϕ), we have ϕ(t_m) = η for m sufficiently large. This implies that ϕ(t) is a periodic solution.

(ii) Assume a periodic orbit Γ is contained in ω(ϕ). We claim that Γ = ω(ϕ). If not, Γ ≠ ω(ϕ). Since ω(ϕ) is connected, there exist {ξ_m} ⊆ ω(ϕ) \ Γ and ξ₀ ∈ Γ such that ξ_m → ξ₀ (see Fig. 6.7).

Fig. 6.7

Let L be a transversal through ξ₀. By Lemma 6.1.1, for m sufficiently large, the orbit through ξ_m must intersect L. Let ϕ(τ_m, ξ_m) = ξ′_m ∈ L ∩ ω(ϕ). From Lemma 6.1.3, {ξ′_m} is a constant sequence and hence ξ′_m = ξ₀. Then ξ_m = ϕ(−τ_m, ξ′_m) = ϕ(−τ_m, ξ₀) ∈ Γ. This is a contradiction.

Proof of the Poincaré-Bendixson Theorem. Assume ω(ϕ) contains no equilibria and ϕ is not a periodic solution. Let y₀ ∈ ω(ϕ) and C⁺ = γ⁺(y₀); then C⁺ ⊆ ω(ϕ). Let ω(y₀) be the ω-limit set of C⁺. Obviously ω(y₀) ⊆ ω(ϕ), and hence ω(y₀) contains no equilibria. Let η ∈ ω(y₀) and introduce a transversal L through η.


Fig. 6.8

Since η ∈ ω(y₀), there exists a sequence {t_n} ↑ ∞ such that ϕ(t_n, y₀) → η (see Fig. 6.8). By Lemma 6.1.1, there exists {t′_n}, |t_n − t′_n| → 0, with ϕ(t′_n, y₀) ∈ L ∩ ω(ϕ). But C⁺ ⊆ ω(ϕ), so by Lemma 6.1.3, C⁺ meets L only once. Then η ∈ C⁺. From Lemma 6.1.4(i), C⁺ is a periodic orbit. Since C⁺ ⊆ ω(ϕ), from Lemma 6.1.4(ii), ω(ϕ) = C⁺. Thus we complete the proof.

In the following we present a result related to the Poincaré-Bendixson Theorem: a closed orbit in the plane must enclose an equilibrium. We shall prove it in Chapter 8 by using the index theory. Before that, we present a result in Rⁿ using Brouwer's fixed point theorem.

Theorem 6.1.4 If K is a positively invariant set of the system x′ = f(x), f : Ω ⊆ Rⁿ → Rⁿ, and K is homeomorphic to the closed unit ball in Rⁿ, then there is at least one equilibrium in K.

Proof. For any fixed τ₁ > 0, consider the map ϕ(τ₁, ·) : K → K. From Brouwer's fixed point theorem, there is a p₁ ∈ K such that ϕ(τ₁, p₁) = p₁, and thus a periodic orbit of period τ₁. Choose τ_m → 0, τ_m > 0 as m → ∞ and corresponding points p_m such that ϕ(τ_m, p_m) = p_m. Without loss of generality, we assume p_m → p* ∈ K. For any t and any integer m, there is an integer k_m(t) such that k_m(t)τ_m ≤ t < k_m(t)τ_m + τ_m and ϕ(k_m(t)τ_m, p_m) = p_m for all t. Furthermore,

|ϕ(t, p*) − p*| ≤ |ϕ(t, p*) − ϕ(t, p_m)| + |ϕ(t, p_m) − p_m| + |p_m − p*|
= |ϕ(t, p*) − ϕ(t, p_m)| + |ϕ(t − k_m(t)τ_m, p_m) − p_m| + |p_m − p*| → 0 as m → ∞.

Therefore, p* is an equilibrium.

Remark 6.1.4 In the book [H1], or [ASY] p. 341, there are further results on case (i) of the Poincaré-Bendixson Theorem. If ω(ϕ) contains only a finite number of equilibria of (6.1), then either
(a) ω(ϕ) is a single equilibrium of (6.1)


or (b) ω(ϕ) consists of a finite number of equilibria q₁, ..., qₙ together with a finite set of orbits γ₁, ..., γₙ such that α(γⱼ) and ω(γⱼ) are equilibria, i.e. ω(ϕ) = {q₁, ..., qₙ} ∪ γ₁ ∪ ··· ∪ γₙ, which is a closed contour.

6.2 Levinson-Smith Theorem

Consider the following Lienard equation

x″ + f(x)x′ + g(x) = 0    (6.3)

where f(x), g(x) satisfy
(H1) f : R → R is continuous and f(x) = f(−x), i.e., f is even.
(H2) g : R → R is continuous, xg(x) > 0 for all x ≠ 0, and g(x) = −g(−x), i.e., g is odd.
(H3) F(x) = ∫₀ˣ f(s) ds satisfies F(x) < 0 for 0 < x < a, and F(x) > 0, f(x) > 0 for x > a.
(H4) G(x) = ∫₀ˣ g(s) ds → ∞ as |x| → ∞; F(x) → +∞ as x → +∞.

Example 6.2.1 One of the most important examples of Lienard equations is the van der Pol equation,

x″ + ε(x² − 1)x′ + x = 0,  ε > 0.

The derivation of the model can be consulted in [Kee1] p. 481. In this example, f(x) = ε(x² − 1), F(x) = ε(x³/3 − x), g(x) = x, G(x) = x²/2, with a = √3 in (H3).

Theorem 6.2.1 (Levinson-Smith) There exists a unique limit cycle of (6.3), which is globally orbitally stable.

Proof. Rewrite (6.3) in the following Lienard form:

x′ = y − F(x),
y′ = −g(x).    (6.4)

Consider the energy function

v(x, y) = y²/2 + G(x).    (6.5)


Then

dv/dt = y y′ + g(x)x′ = −g(x)F(x).    (6.6)

We note that F is odd, i.e., F(−x) = −F(x), and the trajectory (x(t), y(t)) is symmetric with respect to the origin because (x(t), y(t)) is a solution of (6.3) if and only if (−x(t), −y(t)) is a solution of (6.3). Let α > 0, and let A = (0, α) and D = (0, y(t(α))) be the points of the trajectory (x(t), y(t)) on the y-axis with (x(0), y(0)) = A and (x(t(α)), y(t(α))) = D (see Fig. 6.9). From (6.5) we have

α² − y²(t(α)) = 2(v(A) − v(D)).    (6.7)

Fig. 6.9

Due to the symmetry of (x(t), y(t)) with respect to the origin, we prove the theorem in three steps.

Step 1: Show that v(A) − v(D) is a monotonically increasing function of α (see Fig. 6.12). From (6.4), (6.6), we have

dv/dx = (dv/dt)/(dx/dt) = −g(x)F(x)/(y − F(x)),    (6.8)
dv/dy = (dv/dt)/(dy/dt) = F(x).    (6.9)

From (6.8), (6.9), we have the following inequalities (see Fig. 6.10):

v(B) − v(A) = ∫₀ᵃ (dv/dx) dx = ∫₀ᵃ −g(x)F(x)/(y(x) − F(x)) dx
< ∫₀ᵃ −g(x)F(x)/(ỹ(x) − F(x)) dx = v(B′) − v(A′);    (6.10)


Fig. 6.10

v(E) − v(B) = −∫_{y₁}^{y₂} (dv/dy) dy = −∫_{y₁}^{y₂} F(x(y)) dy < 0;    (6.11)

v(G) − v(E) = −∫_{y₃}^{y₁} (dv/dy) dy = −∫_{y₃}^{y₁} F(x(y)) dy
< −∫_{y₃}^{y₁} F(x̃(y)) dy = v(C′) − v(B′)    (6.12)

(here we use the fact that f(x) > 0 for x > a to deduce F(x(y)) > F(x̃(y)));

v(C) − v(G) = ∫_{y₃}^{y₄} (dv/dy) dy = −∫_{y₄}^{y₃} (dv/dy) dy = −∫_{y₄}^{y₃} F(x(y)) dy < 0;    (6.13)

v(D) − v(C) = −∫₀ᵃ (dv/dx) dx = −∫₀ᵃ −g(x)F(x)/(y_L(x) − F(x)) dx
= ∫₀ᵃ −g(x)F(x)/(F(x) − y_L(x)) dx < ∫₀ᵃ −g(x)F(x)/(F(x) − ỹ_L(x)) dx = v(D′) − v(C′).    (6.14)

Summing the inequalities (6.10)-(6.14) yields v(D) − v(A) < v(D′) − v(A′), or v(A′) − v(D′) < v(A) − v(D). Thus we complete the proof of Step 1.

Step 2: For α > 0 sufficiently small, v(A) − v(D) < 0. Since dv/dt = −g(x)F(x) > 0 along the trajectory for α small, Step 2 follows.

Step 3: For α sufficiently large, v(A) − v(D) > 0 (see Fig. 6.11).

Fig. 6.11

v(A) − v(D) = (v(A) − v(B)) + (v(B) − v(C)) + (v(C) − v(D))
= ∫₀ᵃ g(x)F(x)/(y_u(x) − F(x)) dx + ∫₀ᵃ g(x)F(x)/(F(x) − y_L(x)) dx + ∫_{y₂}^{y₁} F(x) dy.    (6.15)

Case 1: x(α) is bounded for all α. Since y(t(α)) depends continuously on α, y(t(α)) is bounded. Hence v(A) − v(D) = (α² − y²(t(α)))/2 > 0 for α sufficiently large.

Case 2: x(α) → +∞ and y(t(α)) → +∞ as α → +∞. From (6.4), we have

dy/dx = −g(x)/(y − F(x)).    (6.16)

For 0 < x < a, considering the I.V.P. for (6.16) with the initial values y(0) = α and y(0) = y(t(α)) respectively, it follows that for 0 ≤ x ≤ a, y_u(x) → +∞ and y_L(x) → −∞ as α → ∞. Hence the first and second integrals in (6.15) go to zero as α → ∞. For the third integral in (6.15), from Green's theorem we have

∫_{y₂}^{y₁} F(x) dy = ∫_{B̂HC} F(x) dy + ∫_{CB} F(x) dy = ∮ F(x) dy = ∬_R (∂F/∂x) dx dy = ∬_R f(x) dx dy → +∞ as α → +∞,

where R is the region bounded by the segment BC and the arc B̂HC (note that F(a) = 0, so the segment contributes nothing). Hence v(A) − v(D) > 0 for α sufficiently large.

Combining Steps 1-3 and the symmetry of the trajectory with respect to the origin, there exists a unique zero α* of v(A) − v(D) (see Fig. 6.12).

Fig. 6.12

And obviously the corresponding unique limit cycle attracts every point (x, y) ≠ (0, 0).

Remark 6.2.1 For a two-dimensional autonomous, dissipative system, we can prove the existence of a limit cycle by the Poincaré-Bendixson Theorem. To prove the uniqueness of limit cycles is a very difficult job. If we are able to show that every periodic solution is orbitally asymptotically stable, then uniqueness of limit cycles follows. Hence if we are able to prove that

∮_Γ (∂f/∂x + ∂g/∂y) dt < 0    (6.17)

for system (6.2), then from Corollary 4.4.1 the periodic orbit Γ is orbitally asymptotically stable. Since the exact position of Γ is unknown, it is difficult to evaluate the integral (6.17). However, for some systems, like the predator-prey system with Holling type II functional response (Example 6.1.4), K. S. Cheng [Che] proved the uniqueness of the limit cycle by the following method of reflection.


Consider the predator-prey system with Holling type II functional response:

S′ = rS(1 − S/K) − (m/y)(S/(a + S))x = (m/y)(S/(a + S))(f(S) − x) = g(S, x),
x′ = (mS/(a + S) − d)x = h(S, x) = (m − d)(S − λ)x/(a + S),    (6.18)

where y denotes the yield constant and x = f(S) = (ry/m)(a + S)(1 − S/K) is the prey isocline. Let Γ = {(S(t), x(t)), 0 ≤ t ≤ T} be a periodic orbit of the predator-prey system (6.18).

To show the uniqueness of the limit cycle, it suffices to prove that the nontrivial Floquet multiplier ρ₂ satisfies ρ₂ < 1, i.e., to show

∮_Γ (∂g/∂S + ∂h/∂x) dt = ∮_Γ (∂g/∂S) dt = ∮_Γ [(m/y)(Sf′(S))/(a + S) + (m/y)(a(f(S) − x))/((a + S)²)] dt < 0,    (6.19)

where we used ∮_Γ (∂h/∂x) dt = ∮_Γ (mS/(a + S) − d) dt = ∮_Γ (x′/x) dt = 0 over a period.

From the symmetry of the graph of x = f(S) with respect to the line S = (K − a)/2 (see Fig. 6.13),

Fig. 6.13

we find that for (K − a)/2 < S < K − a − λ,

f(S) = f(K − a − S),    (6.20)
f′(S) = −f′(K − a − S).    (6.21)

Since

0 = ∫₀ᵀ aS′/(S(a + S)) dt = ∫₀ᵀ (m/y)(a(f(S) − x))/((a + S)²) dt

(the integrand is d/dt ln(S/(a + S)), so its integral over a period vanishes), it follows from (6.19) that

∮_Γ (∂g/∂S + ∂h/∂x) dt = ∮_Γ (∂g/∂S) dt = ∮_Γ (m/y)(Sf′(S))/(a + S) dt.

Since S′(t) = (m/y)(S/(a + S))(f(S) − x), we have S′(t)/(f(S) − x) = (m/y)(S/(a + S)). It remains to show

∮_Γ (m/y)(Sf′(S))/(a + S) dt < 0.

In Fig. 6.14, we reflect the arc B̂HQ′LP′JA with respect to the line S = (K − a)/2; then

∮_Γ (m/y)(Sf′(S))/(a + S) dt = (∫_{ÂP} + ∫_{P̂RQ} + ∫_{Q̂B} + ∫_{B̂Q′} + ∫_{Q̂′LP′} + ∫_{P̂′A}) (m/y)(Sf′(S))/(a + S) dt.

Since S′(t)/(f(S) − x) = (m/y)(S/(a + S)), on the arcs ÂP and P̂′A we may use dt = dS/S′; substituting S ↦ K − a − S in the reflected arc and using (6.20), (6.21), the two integrals combine:

(∫_{ÂP} + ∫_{P̂′A}) (m/y)(Sf′(S))/(a + S) dt
= ∫_{S_A}^{S_P} f′(S)/(f(S) − x₂(S)) dS + ∫_{S_A}^{S_P} f′(K − a − S)/(f(K − a − S) − x₁(K − a − S)) dS
= ∫_{S_A}^{S_P} f′(S) [(x₂(S) − x₁(K − a − S)) / ((f(S) − x₂(S))(f(S) − x₁(K − a − S)))] dS < 0,    (6.22)

where x₂(S) > x₁(K − a − S) for S_A < S < S_P, and f′(S) < 0 for S_A < S < S_P.


Fig. 6.14

Similarly,

(∫_{Q̂B} + ∫_{B̂Q′}) (m/y)(Sf′(S))/(a + S) dt
= ∫_{S_B}^{S_Q} f′(S) [(x₃(K − a − S) − x₄(S)) / ((x₄(S) − f(S))(x₃(K − a − S) − f(S)))] dS < 0.    (6.23)

Since x′(t) = x(m − d)(S − λ)/(a + S), it follows that

dt = (1/(m − d)) ((a + S)/((S − λ)x)) dx.

Let Ω be the region bounded by the arc Q̂′LP′ and the line segment P′Q′, and let Ω̄ be the region bounded by the arc P̂RQ and the line segment QP.


Then from Green's Theorem, we have

∫_{Q̂′LP′} (m/y)(Sf′(S))/(a + S) dt = (m/y)(1/(m − d)) ∫_{Q̂′LP′} Sf′(S)/(x(S − λ)) dx
= (m/y)(1/(m − d)) [∮_{Q̂′LP′ ∪ P′Q′} + ∫_{Q′P′}] Sf′(S)/(x(S − λ)) dx
= −(1/(m − d)) ∬_Ω (r/K) ([2(S − λ)² + λ(K − a − 2λ)]/(S − λ)²) (1/x) dS dx
  + (m/y)(1/(m − d)) ∫_{Q′P′} Sf′(S)/(x(S − λ)) dx
< (m/y)(1/(m − d)) ∫_{x_{P′}}^{x_{Q′}} S₁(x)f′(S₁(x))/(x(λ − S₁(x))) dx.

Similarly, we have

∫_{P̂RQ} (m/y)(Sf′(S))/(a + S) dt = −(1/(m − d)) ∬_{Ω̄} (r/K) ([2(S − λ)² + λ(K − a − 2λ)]/(S − λ)²) (1/x) dS dx
  + (m/y)(1/(m − d)) ∫_{PQ} Sf′(S)/(x(S − λ)) dx
< (m/y)(1/(m − d)) ∫_{x_P}^{x_Q} S₂(x)f′(S₂(x))/(x(S₂(x) − λ)) dx.

Then

(∫_{Q̂′LP′} + ∫_{P̂RQ}) (m/y)(Sf′(S))/(a + S) dt
< (m/y)(1/(m − d)) ∫_{x_{P′}}^{x_{Q′}} S₁(x)f′(S₁(x))/(x(λ − S₁(x))) dx
  − (m/y)(1/(m − d)) ∫_{x_{P′}}^{x_{Q′}} (K − a − S₁(x))f′(S₁(x))/(x(K − a − λ − S₁(x))) dx
= (m/y)(1/(m − d)) ∫_{x_{P′}}^{x_{Q′}} (f′(S₁(x))/x) · G(S₁(x))/((λ − S₁(x))(K − a − λ − S₁(x))) dx,    (6.24)

where

G(S′) = (S − λ)(λ − S′) [S′/(λ − S′) − S/(S − λ)],  S = K − a − S′.

Let S = K − a − S′ and S′ = S₁(x); from Lemma 6.2.1 and Lemma 6.2.2 (see below),

S₁(x) ≤ max{S_{P′}, S_{Q′}} ≤ S₋ for x_{P′} ≤ x ≤ x_{Q′}, and G(S₁(x)) ≤ 0 for x_{P′} ≤ x ≤ x_{Q′};    (6.25)

then

(∫_{Q̂′LP′} + ∫_{P̂RQ}) (m/y)(Sf′(S))/(a + S) dt < 0.    (6.26)

From (6.22), (6.23) and (6.26) it follows that ∮_Γ (m/y)(Sf′(S))/(a + S) dt < 0, and (6.19) follows.

0

_

dx 0 0 Proof. Let ( dx ds )Q and ( ds )Q be the slopes of BH QL and BRA at point Q respectively. It is obvious from Fig. 6.14 that    0 dx dx 0> > . (6.29) ds Q ds Q Since   mS   xQ a+SQQ − d dx      = xQ SQ S ds Q rSQ 1 − KQ − m y a+SQ  y  x (S − λ) Q Q = −(m − d) m SQ (xQ − f (SQ )) and  0  y  x 0 (λ − S 0 ) dx Q Q = −(m − d) ds m SQ0 (xQ0 − f (SQ0 ))  y  x (λ − S 0 ) Q Q . = −(m − d) m SQ0 (xQ − f (SQ )) (Recall that xQ = xQ0 , f (SQ ) = f (SQ0 ).) Thus from (6.27), (6.28), we have SQ0 S SP 0 0 < λ−S 6 SQQ−λ . Similarly we have 0 < λ−S 6 SPSP−λ . 0 0 Q

P

Lemma 6.2.2 S1 (x) 6 max {SP 0 , SQ0 } 6 S− , xP 0 6 x 6 xQ0 and G(S1 (x)) 6 0 for xP 0 6 x 6 xQ0 . 0

Proof. Consider the quadratic function G(S ) in (6.24). A straight calculation shows that 0 0 0 G(S ) = −2(S )2 + 2S (K − a) − λ(K − a). (6.30) 0 The two positive roots of G(S ) = 0 are s 2   K −a K −a K −a ± −λ . S± = 2 2 2

170

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Hence G(S 0 ) < 0 if S 0 < S− or S 0 > S+ and G(S 0 ) > 0 if S− < S 0 < S+ . < S+ , from (6.25), (6.27) we conclude that Since SQ0 < K−a 2 SQ0 6 S−

or

SQ > S+ .

(6.31)

SP 0 6 S−

or

SP > S+ .

(6.32)

Similarly

Then from (6.27), (6.28) we prove (6.26). Remark 6.2.2 In the following we present another method to prove the uniqueness of limit cycle for van der Pol equation ([HK] p. 379) x00 + (x2 − 1)x0 + x = 0,  > 0. Let x0 = y = f (x, y) y 0 = (1 − x2 )y − x = g(x, y), and (x(t), y(t)), 0 ≤ t ≤ T , be a periodic solution. We want to show the periodic orbit Γ = {(x(t), y(t)) : 0 ≤ t ≤ T } is orbitally asymptotically stable. It suffices to show Z T Z T  ∂f ∂g (x(t), y(t)) + (x(t), y(t)) dt =  1 − x2 (t) dt < 0. ∂y 0 ∂x 0 Consider V (x, y) =

 1 2 x + y2 . 2

Then    2  x2 0 0 2 2 ˙ V (x, y) = xx + yy =  1 − x y = 2 1 − x . V (x, y) − 2 Assume V (x(t), y(t)) attains the minimum on the periodic orbit Γ at some point, say, (x(t¯), y(t¯)). Then V˙ (x(t¯), y(t¯)) = 0 and it follows that either y(t¯) = 0 or x2 (t¯) = 1. First we prove that it is impossible that y(t¯) = 0. Suppose y(t¯) = 0 then from uniqueness of ODE, y 0 (t¯) 6= 0, otherwise y(t¯) = 0 and y 0 (t¯) = 0 imply (x(t¯), y(t¯)) ≡ (0, 0). Hence y 0 (t¯) 6= 0 i.e. x00 (t¯) = y 0 (t¯) 6= 0. This says that x(t) has either a maximum or a minimum 2 at t = t¯. Then 1 − (x(t)) has a fixed sign for t near t¯ and V˙ (x(t), y(t)) has a constant sign for t near t¯. It follows that V (x(t), y(t)) is strictly monotone for t near t¯. This leads to a contradiction that V (x(t), y(t)) attains a minimum at t¯.

TWO-DIMENSIONAL SYSTEMS

171

Hence we have y(t¯) = x0 (t¯) 6= 0 and x2 (t¯) = 1. Then V (x(t), y(t)) ≥ V (x(t¯), y(t¯)) = 12 1 + y 2 (t¯) > 21 for all t and " # 2   V − x2 V˙ (x(t), y(t)) 2 2 − + 2 1 − x (t) = 2 1 − x (t) − +1 V (x(t), y(t)) − 12 V − 12   2 2 x − 12   1 − x2 (t) 2 2 =− . = 2 1 − x (t) V − 12 V (x(t), y(t)) − 12 Integrating both sides from 0 to T yields Z T Z  2 0 + 2 1 − x (t) dt = − 0

Hence

RT 0

∂f ∂x

+

∂g ∂y dt

T

0

=

RT 0

2 1 − x2 dt < 0. V − 12



1 − x2 (t) dt < 0 and we complete the proof.

Relaxation of the van der Pol oscillator: Consider van der Pol equation with k  1 u00 − k(1 − u2 )u0 + u = 0. Let ε = k1 . Then 0 < ε  1 and εu0 = v − F (u), v 0 = −εu,

(6.33)

3

where F (u) = −u + u3 . Consider the Jordan curve J as follows (see Fig. 6.15).

Fig. 6.15

Fig. 6.16

172

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

By Theorem 6.2.1, there is a unique limit cycle Γ(ε) of (6.33). If the orbit is away from the isocline v = F (u), then, from (6.24), u0 = ε−1 (v − F (u)), v 0 = −εu. It follows that |u0 |  1 and |v 0 | ≈ 0, and the orbit has a tendency to jump except when it is closed to the curve v = F (u). Theorem 6.2.2 As ε ↓ 0, the limit cycle Γ(ε) approaches the Jordan curve J [H1].

Proof. We shall construct closed region U containing J such that the distance dist(U, J) is any preassigned constant and for any ε > 0 sufficiently small, the vector field of (6.33) is directed inward U at the boundary ∂U , i.e. U is a trapped region. By Poincar´e-Bendixson Theorem, U contains the limit cycle Γ(ε). Consider the following Fig. 6.17, due to the symmetry, we only need to check the flow on the following curves.

Fig. 6.17

(1) On the segment 12, the flow is inward since ~n · (u, ˙ v) ˙ = (0, 1) · (u, ˙ v) ˙ = −εu < 0;

TWO-DIMENSIONAL SYSTEMS

173

(2) On the segment 23 ~n · (u, ˙ v) ˙ = (1, 0) · (u, ˙ v) ˙ =

1 (v − F (u)) < 0; ε

(3) On the segment 13, 14 ~n · (u, ˙ v) ˙ = (0, 1) · (u, ˙ v) ˙ = −εu < 0; (4) On the segment 11, 12 1 ~n · (u, ˙ v) ˙ = (−1, 0) · (u, ˙ v) ˙ = − (v − F (u)) < 0; ε (5) On the arc 1\ 2, 13, ~n · (u, ˙ v) ˙ = (−F 0 (u), 1) · (0, v) ˙ = −εu < 0; c4, (6) On the arc 3, ~n · (u, ˙ v) ˙ = (F 0 (u), −1) · (u, ˙ v) ˙    −h 2 , −εu = u − 1, −1 · ε  −h = u2 − 1 + εu < 0, if 0 < ε  1; ε c n1 > 0, n2 < 0 (7) On the arc 45,   1 (v − F (u)), −εu ~n · (u, ˙ v) ˙ = (n1 , n2 ) · ε   1 = n1 (v − F (u)) − εun2 < 0 ε if 0 < ε  1; (8) On the arc 1\ 0, 11, n1 < 0, n2 < 0,   1 (v − F (u)) − εun2 < 0, if 0 < ε  1. ~n · (u, ˙ v) ˙ = n1 ε

Remark 6.2.3 As ε  1, we may evaluate the period of the periodic solution of the relaxation. Since the periodic trajectory spends most of the time traveling on the isocline (see Fig. 6.16, Fig. 6.18), we need to consider the equation dv dv = −εu or dt = − . dt εu

174

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Fig. 6.18

Then Z T (ε) ≈ 2

Z

u=1

dt = 2 u=2

Z = −2 1

2

dv −εu

u2 − 1 3 − 2 log 2 du = . −εu ε

Generalized Lienard Equations Consider generalized Lienard equation [Zh]:  dx   = − ϕ(y) − F (x) dt   dy =g(x). dt

(6.34)

Theorem 6.2.3 [Zh] Assume (i) xg(x) > 0, x 6= 0, G(+∞) = G(−∞) = +∞. G(x) = (ii)

F 0 (x) g(x)

is non-decreasing on (−∞, 0), (0, +∞), neighborhood of x = 0.

F 0 (x) g(x)

Rx 0

g(ξ)dξ.

6= constant in

(iii) yϕ(y) > 0, y 6= 0, ϕ(y) is non-decreasing. ϕ(−∞) = −∞, ϕ(+∞) = +∞. ϕ(y) has right and left derivative at y = 0 which are non-zero in case F 0 (0) = 0.

TWO-DIMENSIONAL SYSTEMS

175

Then the system (6.34) has at most one limit cycle. We state the following Theorem 6.2.3 without proof. In 1988, Kuang Yang and Freedman prove uniqueness of limit cycle for Gauss-type Predator-Prey System by reducing it to generalized Lienard equation [KF]. Consider Predator-Prey System:  dx   = φ(x)(F (x) − π(y)) dt   dy = ρ(y)ψ(x). dt For example, Gauss-type predator-prey system satisfies  π(y) = y, φ(x) = p(x).     xg(x) = prey isocline F (x) =  p(x)    ρ(y) = y, ψ(x) = p(x) − p(x∗ ).

(6.35)

Theorem 6.2.4 Assume the system (6.35) satisfies (i) ϕ(0) = π(0) = ρ(0) = 0. φ0 (x) > 0, ψ 0 (x) > 0 for x > 0, ρ0 (y) > 0, π 0 (y) > 0 for y > 0, π(+∞) = +∞. (ii) ψ(x∗ ) = 0, k > x∗ , F (K) = 0, (x − K)F (x) < 0, ∀x 6= K. (iii)

−F 0 (x)φ(x) ψ(x)

is non-decreasing for −∞ < x < x∗ , x∗ < x < +∞.

Then there is at most one limit cycle for the system (6.35). Proof. Let (x∗ , y ∗ ) be the equilibrium. Consider change of variable (x, y) → (u, v). x = ξ(u) + x∗ y = η(v) + y ∗ where ξ(u), η(v) are to be determined. Then  du 1   = 0 φ(ξ(u) + x∗ )(F (ξ(u) + x∗ ) − π(η(v) + y ∗ )  dt ξ (u) dv 1    = 0 ρ(η(v) + y ∗ )ψ(ξ(u) + x∗ ). dt η (v)


Let
    ξ′(u) = φ(ξ(u) + x∗),  ξ(0) = 0,
and
    η′(v) = ρ(η(v) + y∗),  η(0) = 0.
Write
    du/dt = −[π(η(v) + y∗) − y∗] − [−F(ξ(u) + x∗) + y∗] = −Φ(v) − A(u),
    dv/dt = ψ(ξ(u) + x∗) = g(u).
We check the conditions in Theorem 6.2.3. Since ξ(0) = 0 and ξ′(u) = φ(ξ(u) + x∗) > 0, we have ξ(u) > 0 for u > 0; thus u and ξ(u) have the same sign, and ug(u) > 0 if and only if ξ(u)g(u) > 0. Since (x − x∗)ψ(x) > 0 for all x,
    ξ(u)g(u) = ξ(u)ψ(ξ(u) + x∗) = ((ξ(u) + x∗) − x∗)ψ(ξ(u) + x∗) > 0,
and consequently ug(u) > 0. Also
    G(u) = ∫_0^u g(s)ds → +∞ as |u| → +∞,
and
    vΦ(v) = v(π(η(v) + y∗) − y∗) > 0 for v ≠ 0.
Next we check that A′(u)/g(u) is non-decreasing on (−∞, 0) and (0, +∞). We have
    A′(u) = −F′(ξ(u) + x∗)ξ′(u) = −F′(ξ(u) + x∗)φ(ξ(u) + x∗),
so
    A′(u)/g(u) = −F′(ξ(u) + x∗)φ(ξ(u) + x∗)/ψ(ξ(u) + x∗),
which is non-decreasing for −∞ < u < 0 and 0 < u < +∞, since by (iii) the function −F′(x)φ(x)/ψ(x) is nondecreasing in x on −∞ < x < x∗ and x∗ < x < +∞. Hence the monotonicity condition of Theorem 6.2.3 holds.


Example 6.2.2 Prove uniqueness of the limit cycle for the Rosenzweig-MacArthur model (Example 6.1.4). Here
    F(x) = (a + x)(r/m)(1 − x/K) is the prey isocline,
    φ(x) = p(x) = mx/(a + x),
    ψ(x) = mx/(a + x) − d = (m − d)(x − x∗)/(a + x).
We show that −F′(x)φ(x)/ψ(x) is nondecreasing on (−∞, x∗) and (x∗, +∞).

Proof. Compute the following:
    F′(x) = (r/m)(2/K)((K − a)/2 − x),   φ(x)/ψ(x) = (m/(m − d)) · x/(x − x∗).
Then
    (−F′(x)φ(x)/ψ(x))′ > 0
⇔ ( ((K − a)/2 − x) · x/(x − x∗) )′ < 0 for x ≠ x∗
⇔ ((K − a)/2 − x)(−x∗)/(x − x∗)² + (−1) · x/(x − x∗) < 0 for x ≠ x∗
⇔ ((K − a)/2 − x)(−x∗) − (x − x∗)x < 0 for x ≠ x∗
⇔ ((K − a)/2)(−x∗) − x² + 2xx∗ < 0 for x ≠ x∗
⇔ ((K − a)/2)(−x∗) + (x∗)² < (x − x∗)² for x ≠ x∗
⇔ −((K − a)/2 − x∗)x∗ < (x − x∗)² for x ≠ x∗,
which holds since 0 < x∗ < (K − a)/2.
6.3 Hopf Bifurcation

Consider a one-parameter family of planar systems
    x′ = f(x, µ),  x ∈ R²,                            (Hµ)
with an equilibrium x(µ) whose linearization has eigenvalues α(µ) ± iβ(µ) satisfying α(µ) < 0 (α(µ) > 0) for µ < µ0, α(µ0) = 0, β(µ0) ≠ 0, and α(µ) > 0 (α(µ) < 0) for µ > µ0. The Hopf Bifurcation Theorem says that the equation (Hµ) has a one-parameter family of nontrivial periodic solutions x(t, ε) with period T(ε) = 2π/β(µ0) + O(ε), where µ = µ(ε) and µ0 = µ(0). Let x̃ = x − x(µ), µ̃ = µ − µ0; then x̃′ = f(x̃ + x(µ̃ + µ0), µ̃ + µ0) = F(x̃, µ̃). Hence we may assume (Hµ) satisfies
(H1) f(0, µ) = 0 for all µ;
(H2) the eigenvalues of Dfµ(0) are α(µ) ± iβ(µ) with α(0) = 0, β(0) = β0 ≠ 0 and α′(0) ≠ 0, so the eigenvalues cross the imaginary axis.

Example 6.3.1 Consider the predator-prey system
    x′ = γx(1 − x/K) − (mx/(a + x))y,
    y′ = ((mx/(a + x)) − d)y.
A Hopf bifurcation occurs at λ0 = (K − a)/2, where λ = a/((m/d) − 1) is the parameter. In order to prove the Hopf Bifurcation Theorem, we need to introduce the concept of “normal form”.

Normal form for Hopf Bifurcation
Consider (Hµ) under the assumptions (H1), (H2), in the form
    (x′, y′)ᵀ = [ α(µ)  −β(µ) ; β(µ)  α(µ) ] (x, y)ᵀ + (f¹(x, y, µ), f²(x, y, µ))ᵀ.
With the scaling τ = t/β(µ) and µ̃ = α(µ)/β(µ), we may assume α(µ) = µ, β(µ) = 1. Then we have
    x′ = µx − y + Σ a_mn xᵐyⁿ,
    y′ = x + µy + Σ b_mn xᵐyⁿ,   m + n ≥ 2.          (6.36)

Since µ ± i are the eigenvalues of the matrix [ µ  −1 ; 1  µ ], we set
    u = x + iy,  v = x − iy.
Then we have
    u′ = (µ + i)u + Σ A_mn uᵐvⁿ,
    v′ = (µ − i)v + Σ B_mn uᵐvⁿ.                     (6.37)

Claim: B_mn = Ā_nm. Since v = ū, we have
    v′ = ū′ = (µ − i)v + Σ Ā_mn ūᵐv̄ⁿ = (µ − i)v + Σ Ā_mn vᵐuⁿ.
Hence B_mn = Ā_nm. Now we look for a change of variables of the form
    u = ξ + Σ α_mn ξᵐηⁿ,
    v = η + Σ β_mn ξᵐηⁿ.                             (6.38)

Write
    ξ′ = (µ + i)ξ + Σ A′_mn ξᵐηⁿ,
    η′ = (µ − i)η + Σ B′_mn ξᵐηⁿ.                    (6.39)
Now we substitute (6.38) into (6.37) and use (6.39):
    u′ = ξ′ + Σ α_mn (m ξ^{m−1}ηⁿ ξ′ + n ξᵐη^{n−1} η′)
       = (µ + i)(ξ + Σ α_mn ξᵐηⁿ) + Σ A_{ℓk} (ξ + Σ α_mn ξᵐηⁿ)^ℓ (η + Σ β_mn ξᵐηⁿ)^k,   (6.40)
and, on the other hand,
    u′ = (1 + Σ α_mn m ξ^{m−1}ηⁿ) ξ′ + (Σ α_mn n ξᵐη^{n−1}) η′
       = (1 + Σ α_mn m ξ^{m−1}ηⁿ)((µ + i)ξ + Σ A′_mn ξᵐηⁿ) + (Σ α_mn n ξᵐη^{n−1})((µ − i)η + Σ B′_mn ξᵐηⁿ).   (6.41)

Compare the coefficients of ξᵐηⁿ, m + n = 1, 2, 3, in (6.40) and (6.41). For m + n = 1, we have (µ + i)ξ = (µ + i)ξ.


For m + n = 2, 3,
    (µ + i)α_mn ξᵐηⁿ + A_mn ξᵐηⁿ = α_mn m(µ + i)ξᵐηⁿ + A′_mn ξᵐηⁿ + α_mn n(µ − i)ξᵐηⁿ.   (6.42)
We want A′_mn = 0 for as many terms as possible. From (6.42),
    A′_mn = A_mn − α_mn ((µ + i)(m − 1) + (µ − i)n).   (6.43)
If A′_mn = 0, then
    α_mn = A_mn / ((µ + i)(m − 1) + (µ − i)n).         (6.44)
At µ = 0 the denominator equals i(m − n − 1). For m + n = 2 we obviously have m − n − 1 ≠ 0, so for µ = 0 or µ near 0 we may choose α_mn as in (6.44) and kill all quadratic terms in (6.39). Now for m + n = 3 and µ = 0 or µ near 0, we check whether m − n − 1 ≠ 0; we find that except for m = 2, n = 1, we are able to kill all other cubic terms. For m = 2, n = 1 the denominator is (µ + i) + (µ − i) = 2µ, so
    α_21 = A_21/(2µ),
which blows up as µ → 0; the resonant term ξ²η cannot be removed.

Then (6.39) becomes
    ξ′ = (µ + i)ξ + A′_21 ξ²η + 4th-order terms,
    η′ = (µ − i)η + Ā′_21 ξη² + 4th-order terms.
The last change of variables is
    ξ = ρe^{iϕ},  η = ρe^{−iϕ}.

Then  or 

ρ0 ϕ0



ξ0 η0



 =

eiϕ iρeiϕ e−iϕ −iρe−iϕ



ρ0 ϕ0



−1  0  eiϕ iρeiϕ ξ = e−iϕ −iρe−iϕ η0    (µ + i)ρeiϕ + A021 ρ3 eiϕ + · · · −iρe−iϕ , −iρeiϕ 1    = (−2iρ) −iϕ iϕ −iϕ 3 −iϕ 0 −e , e (µ − i)ρe + A21 ρ e + ···   µρ + Gρ3 + · · ·  . = 2 0 1 + (ImA21 )ρ + · · · 

TWO-DIMENSIONAL SYSTEMS

181

Hence we have ρ˙ = µρ + Gρ3 + O(ρ4 ), ϕ˙ = 1 + Kρ2 + O(ρ3 ).

(6.45)

Neglecting the higher order terms in (6.45) we have ρ˙ = µρ + Gρ3 , (6.46) ϕ˙ = 1 + O(ρ2 ). p p − −µ/G). If G < 0, Let G 6= 0. If µ/G < 0, then ρ˙ = ρG(ρ + −µ/G)(ρ p µ > 0, then (0, 0) is an unstable spiral and ρ = −µ/G is asymptotically stable (see Fig. 6.19).

Fig. 6.19

This is the case of supercritical Hopf Bifurcation. IfpG > 0, µ < 0 then (0, 0) is asymptotically stable and the limit cycle ρ = −µ/G is unstable (see Fig. 6.20).

Fig. 6.20

This is the case of subcritical Hopf Bifurcation. When G = 0, we call the Hopf Bifurcation degenerated.

182

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Now we returnpto (6.46). Consider the annulus A = {(ρ, ϕ) : ρp 1 ≤ ρ ≤ ρ2 } where ρ1 < −µ/G < ρ2 , ρ1 , ρ2 are sufficiently close to −µ/G. Then, for G < 0, the annulus is a trapped region for µ sufficiently small. For G > 0, the annulus region A is a trapped region in reverse time. Apply Poincar´e-Bendixson Theorem, we obtain a closed orbit. Remark 6.3.1 Return to original parameter, ρ˙ = α(µ)ρ + Gρ3 + O(ρ4 ), θ˙ = β(µ) + O(ρ2 ). Hence, for µ sufficiently small, from the scaling τ = closed orbit is 2π/β0 + O(µ).

t β(µ) ,

the period of the

Remark 6.3.2 To check the supercritical or subcritical Hopf Bifurcation, we need to evaluate G. It is a complicated matter. According to [WHK], if we have        x˙ 0 −β0 x f (x, y, 0) = + y˙ β0 0 y g(x, y, 0) at µ = 0, then the formula is G=

1 (fxxx + fxyy + gxxy + gyyy ) 16 1 + (fxy (fxy + fyy ) − gxy (gxx + gyy ) − fxx gxx + fyy · gyy ) 16β0

where all partial derivatives are evaluated at the bifurcation point (0, 0, 0). Example 6.3.2 x00 + µx0 + sin x = 0 has degenerated Hopf bifurcation at µ = 0. Example 6.3.3 Consider the following system x01 = x2 + F (λ, r2 )x1 , x2 = −x1 + F (λ, r2 )x2 , where r2 = x21 + x22 . Then the polar equations are dr = F (λ, r2 )r, dt dθ = 1. dt

TWO-DIMENSIONAL SYSTEMS

183

Case 1: If F (λ, r2 ) ≡ λ, then r(t) = aeλt . There is no nontrivial periodic orbit except λ = 0. If λ < 0, then (0, 0) is stable equilibrium. If λ > 0 then (0, 0) is a repeller. If λ = 0 then r(t) ≡ a > 0 is a periodic orbit for any a > 0. This is degenerated Hopf bifurcation. Case 2: If F (λ, r2 ) = λ − r2 then dr = r(λ − r2 ). dt If λ < 0 then (0, 0) is√stable equilibrium. If λ > 0 then r(t) ≡ λ is a stable limit cycle. This is supercritical Hopf bifurcation. The Hopf bifurcation occurs at λ = 0. Case 3: If F (λ, r2 ) = −(r2 − c)2 + c2 + λ where c > 0, then (i) λ < −c2 Then F (λ, r2 ) < 0 and it follows that (0, 0) is a stable equilibrium and r(t) → 0 as t → ∞. (ii) −c2 < λ < 0 Then λ + c2 > 0, λ < 0.  qp  qp dr c2 + λ + c + r c2 + λ + c − r = dt    q q p p · r − c − c2 + λ r + c − c2 + λ r. p √ It follows that (0, 0) is stable; r = c − c2 + λ is a unstable limit p√ cycle; r = c2 + λ + c is a stable limit cycle. This is a subcritical Hopf bifurcation. The bifurcation occurs at λ = −c2 . (iii) λ > 0 qp qp h i p dr = c2 + λ + c − r c2 + λ + c + r r2 + ( c2 + λ − c) r dt p√ c2 + λ + c is a stable limit cycle. r= Hopf Bifurcation in Rn . Consider the autonomous system x0 = f (x, µ),

x ∈ Rn

(6.47)

184

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

under the assumption that f ∈ C 2 (Rn × R, Rn ) and f (0, µ) ≡ 0.

(6.48)

From (6.48), the system (6.47) possesses for any µ ∈ R the trivial solution x(t) ≡ 0 and we want to find the possible values of µ where there is a branching off nonconstant periodic solution of (6.47). Let Aµ := Dfx (0, µ). We shall assume there exists µ0 such that, Aµ0 , satisfies (H1) Aµ0 is non-singular and has a pair of simple purely imaginary eigenvalues ±iω0 , ω0 > 0. (H2) Aµ0 has no other eigenvalues of the form ±ikω0 , k ∈ N, k 6= 1. (H3) α0 (µ0 ) 6= 0 where the eigenvalues of Aµ for µ near µ0 are α(µ)±iβ(µ), β(µ0 ) = ω0 . In the following, we state without proof the Hopf Bifurcation Theorem in Rn ([AP], p. 144). Theorem 6.3.1 Let f ∈ C 2 (Rn × R, Rn ) be such that f (0, µ) = 0 for all µ ∈ R and suppose that for µ = µ0 (H1)–(H3) hold. Then from µ0 there 2π bifurcates a branch of periodic solutions of (6.47) with period close to ω . 0 More precisely, there exist a neighborhood I of s = 0, function ω(s), µ(s) ∈ C 0 (I) and a family xs of non-constant, periodic solutions of (6.47) such that (i) ω(s) → ω0 , µ(s) → µ0 as s → s0 , (ii) xs has period Ts =

2π w(s) ,

(iii) the amplitude of the orbit xs tends to 0 as s → 0. Example 6.3.4 Lorenz equation ([Str] p. 136) x0 = σ(y − x), y 0 = rx − y − xz, σ, r, b > 0

(6.49)

0

z = xy − bz or the system (6.49), the equilibrium E0 = (0, 0, 0) always exists. If r > 1 then we have two equilibria, p p E+ = ( b(r − 1), b(r − 1), r − 1) = (x∗ , y ∗ , z ∗ ), p p E− = (− b(r − 1), − b(r − 1), r − 1) = (−x∗ , −y ∗ , z ∗ ).

TWO-DIMENSIONAL SYSTEMS

185

It is easy to show E0 is a stable node if 0 < r < 1. When  r >1, E0 is a σ+b+3 provided saddle point and E+ , E− are stable if 1 < r < rH = σ σ−b−1 σ > b + 1. Let r be the bifurcation parameter and B + , B − be the bifurcation branches of E + , E − respectively. We note that it can be shown that Hopf bifurcation occurs at r = rH and the bifurcation is subcritical, i.e., for r slightly less than rH , the limit cycles are unstable and exist only for r < rH . For r < rH . the phase portrait near B + is shown schematically in Fig. 6.21.

Fig. 6.21

The fixed point is stable. It is encircled by a saddle cycle, a new type of unstable limit cycle that is possible only in the phase space of three or more dimensions. The cycle has a two-dimensional unstable manifold (the sheet in Fig. 6.21), and a two-dimensional stable manifold. As r → rH from below, the cycle shrinks down around the fixed point. At the Hopf bifurcation, the fixed point absorbs the saddle cycle and changes into a saddle point. For r > rH there are no attractors in the neighborhood (see Fig. 6.22).

Fig. 6.22

186

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

For the details related to chaotic behavior of the solutions of Lorenz equations (6.49), the reader may consult Section 9.3 in [Str]. 6.4

Exercises

Exercise 6.1 Write the van der Pol equation x00 + α(x2 − 1)x0 + x = 0 in the form  0 x = y, (1) y 0 = −x − α(x2 − 1)y. Compare (1) with auxiliary system  0 x =y √ y 0 = −x − α sgn(|x| − 2)y.

(2)

Show that system (2) has a unique limit cycle C and the vector field of (1) is directed inside C. Exercise 6.2 Is it possible to have a two-dimensional system such that each orbit in an annulus is a periodic orbit and yet the boundaries of the annulus are limit cycles? Exercise 6.3 Show that the system of equations x0 = x − xy 2 + y 3 , y 0 = 3y − yx2 + x3 , has no nontrivial periodic orbit in the region x2 + y 2 ≤ 4. Exercise 6.4 Show that there exist µ 6= 0 for which the system x˙ = µx + y + xy − xy 2 , y˙ = −x + µy − x2 − y 3 has periodic solutions. Exercise 6.5 The second-order equation y 00 + (y 0 )3 − 2λy 0 + y = 0 where λ is a small scalar parameter, arises in the theory of sound and is known as Rayleigh’s equation. Convert this into a first-order system and investigate the Hopf bifurcation.

TWO-DIMENSIONAL SYSTEMS

187

Exercise 6.6 Investigate the Hopf bifurcation for the following predatorprey system  mx x − y, x0 = rx 1 − K a+x   y , r, K, m, a, s, ν > 0. y 0 = sy 1 − νx Exercise 6.7 Consider the two-dimensional system x0 = Ax − r2 x where A is a 2 × 2 constant real matrix with complex eigenvalues α ± iw and x ∈ R2 , r =k x k. Prove that there exists at least one limit cycle for α > 0 and that none for α < 0. Exercise 6.8 Show that the planar system x0 = x − y − x3 , y0 = x + y − y3 , has a periodic orbit in some annulus region centered at the origin. Exercise 6.9 Is it true that the solution (x(t), y(t)) of x0 = −x + ay + x2 y, y 0 = b − ay − x2 y, is bounded for any initial condition x(0) > 0, y(0) > 0? K−a Exercise 6.10 Consider Example 6.1.4. Show that,  if 2 < λ, then ∗ ∗ 2 (x , y ) is globally asymptotically stable in Int R+ . Hint: Apply Dulac’s criterion with h(S, x) = (a + S)α xβ for α, β ∈ R to be determined.

Exercise 6.11 Consider the differential equations 4xy dx =a−x− , dt 1 + x2   dy y = bx 1 − , dt 1 + x2 for a, b > 0. (1) Show that x∗ = a5 , y ∗ = 1 + (x∗ )2 is the only equilibrium point. 25 (2) Show that the equilibrium is repelling for b < 3a 5 − a , a > 0. (3) Let x1 be the value of x where the isocline {x˙ = 0} crosses the xaxis. Let y1 = 1 + x21 . Prove that the rectangle {(x, y) : 0 ≤ x ≤ x1 , 0 ≤ y ≤ y1 } is positively invariant.

188

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

(4) Prove that there is a periodic orbit in the first quadrant for a > 0, 25 0 < b < 3a 5 − a . Exercise 6.12 Show that subcritical Hopf bifurcation occurs at µ = 0 for the equations r˙ = µr + r3 − r5 , θ˙ = w + br2 . Exercise 6.13 Consider a model of the cell division cycle du = b(v − u)(α + u2 ) − u, dt dv = c − u, dt where b  1 and α  1 are fixed and satisfy 8αb < 1, c > 0. Show that the system exhibits relaxations for c1 < c < c2 , where c1 and c2 are to be determined approximately. Exercise 6.14 Consider the equation x00 + µ(|x| − 1)x0 + x = 0. Find the approximate period of the limit cycle for µ  1. Exercise 6.15 Let Ω be an annulus in R2 . Assume the hypothesis of Dulac’s criterion. Using Green’s theorem, show that there exists at most one closed orbit in Ω. Exercise 6.16 (Predator-prey system with Beddington-DeAngles type [Hw1; Hw2]) Let x(t), y(t) be population of prey and predator  x mxy dx = rx 1 − − , dt K a + x + by  dy  emx = − d y, dt a + x + by x(0) > 0, y(0) > 0, where the parameters r, K, a, b, m, e, d > 0 (1) Find the conditions for the existence of limit cycle. (2) Find the conditions for the global stability of interior equilibrium (x∗ , y ∗ ).

TWO-DIMENSIONAL SYSTEMS

189

Exercise 6.17 Show that the system x0 = y, y 0 = −x + y(1 − x2 − 2y 2 ), is positively invariant in the annulus 21 < x2 + y 2 < 1 and there exists at least one periodic solution in the annulus. Exercise 6.18 Show that there is no periodic orbit for the quadratic system x0 = −y + x2 − xy, y 0 = x + xy, by Dulac criterion with function h(x, y) = (1 + y)−3 (−1 − x)−1 . Exercise 6.19 Verify the stability properties of E0 , E+ , E− in Example 6.3.4. Exercise 6.20 Verify that there exists a periodic orbit for the system  2 2   x˙ = −z + (1 − x − z )x, y˙   z˙

= x − z + y, = x.

Exercise 6.21 Given a smooth autonomous ordinary differential equation on the plane, prove that for a non-periodic point when the intersection of its alpha-limit set and omega-limit set is not empty, it contains only fixed points. Exercise 6.22 Consider the two-dimensional system: x˙ = px + y − 2x + x2 + y 2 , y˙ = py + x − 2y + 2xy. (1) Find those real values of p such that the origin is a hyperbolic fixed point. For each such p, determine the type and stability of the origin. (2) For those values of p such that the origin is not a hyperbolic fixed point, sketch the global phase portrait of the system. Exercise 6.23 Consider the system: p p x˙ = x − x x2 + y 2 − y x2 + y 2 + xy, p p y˙ = y − y x2 + y 2 + x x2 + y 2 − x2 .

190

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Observe that the positive x-axis is invariant. Find the w-limit set w(p) for every point p in the plane. Exercise 6.24 Prove the uniqueness of limit cycle for the predator-prey system [Hxc]: K−a x

If 0 < λ
b + 1. (3) Find the third eigenvalue. Exercise 6.26 Consider a C 1 system x˙ = f (x) in R2 . Suppose Γ is a periodic orbit. Let us denote ω(x) the ω−limit set of x. Prove that the set of points x which do not belong to Γ and satisfy ω(x) = Γ is open.

Chapter 7

SECOND ORDER LINEAR EQUATIONS

7.1

Sturm’s Comparison Theorem and Sturm-Liouville Boundary Value Problem

Consider the general second order linear equation u00 + g(t)u0 + f (t)u = h(t), a ≤ t ≤ b. Rt Multiplying exp( 0 g(s)ds) ≡ p(t) on both sides of (7.1) yields (p(t)u0 )0 + q(t)u = H(t),

(7.1) (7.2)

with p(t) > 0, p(t) and q(t) are continuous on [a, b]. (7.3) The advantage of the form (7.2) over (7.1) is that the linear operator Lu = (p(t)u0 )0 + q(t)u is self-adjoint (see Lemma 7.1.2). Pr¨ ufer Transformation: Let u(t) 6≡ 0 be a real-valued solution of (p(t)u0 )0 + q(t)u = 0. (7.4) 0 Since u(t) and u (t) cannot vanish simultaneously at any point t0 in J = [a, b], we introduce the following “polar” coordinate (see Fig. 7.1) h i1/2 2 (7.5) ρ = u2 + (pu0 ) , ϕ = tan−1

u pu0 .

Then we have pu0 = ρ cos ϕ, u = ρ sin ϕ. From (7.4), (7.5), it follows that ϕ0 =

=

pu0 ·u0 −u(pu0 )0 (pu0 )2  2 u 1 + pu 0

1 p(t)



(pu0 )2 ρ2

=

(pu0 )u0 + q(t)u2 ρ2

 + q(t)

191

(7.6)

 2 u , ρ

192

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Fig. 7.1

i.e., ϕ0 =

1 cos2 ϕ + q(t) sin2 ϕ. p(t)

(7.7)

We note that, in (7.6), the R.H.S. is independent of ρ. A straightforward computation shows   1 0 ρ = − q(t) − ρ sin ϕ cos ϕ. p(t)

(7.8)

From (7.5), we have ϕ(t0 ) = 0 (mod π) if and only if u(t0 ) = 0. Lemma 7.1.1 Let u(t) ≡ 0 be a real-valued solution of (7.3) with p(t) > 0 and q(t) continuous on J = [a, b]. Assume that u(t) has exactly n zeros, n ≥ 1, t1 < t2 < · · · < tn in [a, b]. Let ϕ(t) be a continuous solution of (7.6), (7.7) with 0 ≤ ϕ(a) < π. Then ϕ(tk ) = kπ and ϕ(t) > kπ for tk < t ≤ b. Proof. Note that u(t0 ) = 0 iff ϕ(t0 ) = 0 (mod π). From (7.6) it follows that ϕ0 (tk ) = p(t1k ) > 0, if ϕ(tk ) = 0. Hence Lemma 7.1.1 follows. Definition 7.1.1 Consider (p1 (t)u0 )0 + q1 (t)u = 0,

t ∈ J = [a, b];

(I)

(p2 (t)u0 )0 + q2 (t)u = 0,

t ∈ J = [a, b],

(II)

we say that equation (II) is a Sturm majorant of (I) on J if p1 (t) ≥ p2 (t), q1 (t) ≤ q2 (t), t ∈ J. In addition, if q1 (t) < q2 (t) or p1 (t) > p2 (t) > 0 at some point t ∈ J, we say that (II) is a strict Sturm majorant of (I) on J.

SECOND ORDER LINEAR EQUATIONS

193

Theorem 7.1.1 (Sturm’s 1st Comparison Theorem) Let pi (t), qi (t) be continuous on J = [a, b], i = 1, 2, and (II) be a Strum majorant of (I). Assume that u1 (t) 6≡ 0 is a solution of (I) and u1 (t) has exactly n zeros t1 < t2 < · · · < tn on J and u2 (t) 6≡ 0 is a solution of (II) satisfying p2 (t)u02 (t) p1 (t)u01 (t) ≥ , at t = a u1 (t) u2 (t) (if ui (a) = 0, set

(7.9)

pi (a)u0i (a) = +∞). ui (a)

Then u2 (t) has at least n zeros on (a, tn ]. Furthermore, if either the inequality holds in (7.8) or (II) is a strict Sturm majorant of (I), then u2 (t) has at least n zeros in (a, tn ). Proof. Let ϕi (t) = tan−1

ui (t) pi (t)u0i (t) , i = 1, 2. Then ϕi (t) satisfies qi (t) sin2 ϕi (t) = fi (t, ϕi ). From (7.8),

(7.6), i.e.

ϕ0i (t) = pi1(t) cos2 ϕi (t) + it follows that 0 ≤ ϕ1 (a) ≤ ϕ2 (a) < π. Since (II) is a Sturm majorant of (I), it follows that f1 (t, ϕ) ≤ f2 (t, ϕ) on J. From differential inequality, we have ϕ1 (t) ≤ ϕ2 (t) on J. Since ϕ1 (tn ) = nπ ≤ ϕ2 (tn ), u2 (t) has at least n zeros on (a, tn ]. If ϕ1 (a) < ϕ2 (a) or f1 (t, ϕ) < f2 (t, ϕ) on J, then we have ϕ1 (t) < ϕ2 (t) on J. Hence u2 (t) has at least n zeros on (a, tn ). Corollary 7.1.1 (Sturm’s Separation Theorem) Let (II) be a Sturm majorant of (I) on J and u1 (t), u2 (t) be nonzero solutions of (I) and (II) respectively. Let u1 (t) vanish at t1 , t2 ∈ J, t1 < t2 . Then u2 (t) has at least one zero in [t1 , t2 ]. In particular if p1 (t) ≡ p2 (t) ≡ p(t), q1 (t) ≡ q2 (t) ≡ q(t) and u1 (t), u2 (t) are two linearly independent solutions of (7.3), then the zeros of u1 separate and are separated by those zeros of u2 .

Proof. We may assume ϕ1 (t1 ) = 0, ϕ1 (t2 ) = π and 0 < ϕ2 (t1 ) < π. From the above proof it follows that ϕ1 (t) < ϕ2 (t) for t1 ≤ t ≤ t2 . Then there exists t∗1 , t1 < t∗1 < t2 , such that ϕ2 (t∗1 ) = π, i.e., there exists at least one zero of u2 (t) in [t1 , t2 ]. Hence we finish the proof of first part of the corollary. If p1 (t) ≡ p2 (t) ≡ p(t), q1 (t) ≡ q2 (t) ≡ q(t), then, from the first part of the corollary, the zeros of two linearly independent solutions u1 (t) and u2 (t) are separated by each other.

194

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Example 7.1.1 sin t and cos t are two linearly independent solutions of u00 + u = 0. Their zeros are separated by each other. Example 7.1.2 Consider Airy’s equation u00 (t) + tu(t) = 0. Comparing with u00 + u = 0, it is easy to show that Airy’s equation is oscillatory. Remark 7.1.1 Sturm’s Comparison Theorem is used to show that a second order linear equation (sometimes nonlinear) is oscillatory. For example in [HH] we consider the equation arising from the deformation of heavy cantilever v 00 (x) + x sin v = 0, v 0 (0) = 0, v(K) = 0,

(7.10)

where K is a parameter. Let v(x, a) be the solution of v 00 (x) + x sin v = 0, v 0 (0) = 0, v(0) = a.

(7.11)

From the uniqueness of solution of ordinary differential equations, we have v(x, 2π + a) = 2π + v(x, a), v(x, a) = −v(x, −a), v(x, 0) ≡ 0, v(x, π) ≡ π. Hence we only need to consider the solution v(x, a) with 0 < a < π. We next show that v(x, a) is oscillatory over [0, ∞). Let V (x) = (1 − cos v(x)) +

1 (v 0 (x))2 . 2 x

Then V 0 (x) = −

1 2



v 0 (x) x

2 ≤0

and 1 − cos v(x) ≤ V (x) ≤ V (0) = 1 − cos a.

SECOND ORDER LINEAR EQUATIONS

195

Since |v(x)| ≤ π, then |v(x)| ≤ a for all x ≥ 0. Rewrite (7.9) as   sin v(x) 00 v (x) + x v(x) = 0. v(x)  Let 0 < δ < min sinv v . Apply Sturm’s Comparison Theorem to compare 0≤v≤a

the above differential equation with v 00 + δv = 0 which is oscillatory over [0, ∞), the solution v(x, a) of (7.9) is oscillatory over [0, ∞) with zeros y1 (a) < y2 (a) < . . . (see Fig. 7.2).

Fig. 7.2

To see how the number of solutions of (7.9) varies with the parameter K, we plot the curves z = yi (a), i = 1, 2, . . . , n and find the numbers of intersections of these curves with z = K (see Fig. 7.3).

Fig. 7.3

Sturm-Liouville boundary value problem Let Lu = (p(t)u0 )0 + q(t)u, p(t) > 0, q(t) continuous on [a, b]. Consider

196

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

the following eigenvalue problem: Lu + λu = 0, u(a) cos α − p(a)u0 (a) sin α = 0

(Pλ )

0

u(b) cos β − p(b)u (b) sin β = 0. Obviously the boundary conditions in (Pλ ) are ϕ(a) = α, ϕ(b) = β, where ϕ is the angle in the “polar” coordinate (7.4) or (7.5). Let y, z : [a, b] → C and define the inner product Z b y(t)z(t)dt. hy, zi = a

Then we have the following (i) (ii) (iii) (iv)

hy + z, wi = hy, wi + hz, wi, hαy, zi = αhy, zi, α ∈ C, hz, yi = hy, zi, hy, yi > 0 when y 6= 0.

Lemma 7.1.2 The linear operator L is self-adjoint, i.e., hLy, zi = hy, Lzi for all y, z satisfy the boundary condition in (Pλ ). Proof. hLy, zi − hy, Lzi Z Z b 0 0 = [(p(t)y ) + q(t)y] z(t)dt −

b

y(t) [(p(t)z 0 (t))0 + q(t)z(t)] dt

a

a

b Z = p(t)y (t)z(t) − 0

a

a

b

b Z z (t)p(t)y (t)dt − p(t)z (t)y(t) + 0

0

0

a

b

y 0 (t)p(t)z 0 (t)dt

a

= p(b)y 0 (b)z(b) − p(a)y 0 (a)z(a) − p(b)z 0 (b)y(b) + p(a)z 0 (a)y(a) = y(b) cot β z¯(b) − y(a) cot α¯ z (a) − z¯(b) cot βy(b) + z¯(a) cot αy(a) = 0.

Lemma 7.1.3 (i) All eigenvalues of (Pλ ) are real. (ii) The eigenfunctions φm and φn corresponding to distinct eigenvalues λm , λn are orthogonal, i.e. hφn , φm i = 0. (iii) All eigenvalues are simple.

SECOND ORDER LINEAR EQUATIONS

197

Proof. Let Lϕm = −λm ϕm , Lϕn = −λn ϕn , then we have λm hϕm , ϕn i = hλm ϕm , ϕn i = −hLϕm , ϕn i = −hϕm , Lϕn i = hϕm , λn ϕn i = λn hϕm , ϕn i. If m = n then λm = λm . Hence (i) follows. If m 6= n, then (λm − λn )hϕm , ϕn i = 0 and it follows that hϕm , ϕn i = 0 and we complete the proof of (ii). To show that each eigenvalue is simple, we note that the eigenfunction φ(t) satisfies the boundary condition φ(a) cos α − p(a)φ0 (a) sin α = 0. φ(a) If cos α = 0 then φ0 (a) = 0. If cos α 6= 0, then p(a)φ 0 (a) = tan α or φ(a) 0 φ0 (a) = p(a) tan α . In either case φ (a) is determined uniquely by φ(a). Hence the eigenvalues are simple by the uniqueness theorem of ODE.

Now we return to Sturm-Liouville boundary value problem.  Example 7.1.3

u00 + λu = 0 has eigenvalues λn = (n + 1)2 with u(0) = 0, u(π) = 0

eigenfunctions un (x) = sin(n + 1)x,

n = 0, 1, 2, . . . .

Theorem 7.1.2 [Ha] There is a sequence of eigenvalues λ0 , λ1 , · · · , forming a monotone increasing sequence with λn → ∞ as n → ∞. Moreover, the eigenfunction corresponding to λn has exactly n zeros on (a, b).

Proof. Let u(t, λ) be the solution of initial value problem (p(t)u0 )0 + q(t)u + λu = 0,

(7.12)

u(a) = sin α, cos α . u0 (a) = p(a) We note that (Pλ ) has a nontrivial solution if and only if u(t, λ) satisfies the second boundary condition. For fixed λ, we define ϕ(t, λ) = tan−1

u(t, λ) . p(t)u0 (t, λ)

198

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Then ϕ(a, λ) = α and ϕ(t, λ) satisfies 1 cos2 ϕ(t, λ) + (q(t) + λ) sin2 ϕ(t, λ), p(t) ϕ(a, λ) = α.

ϕ0 (t, λ) =

The Sturm comparison theorem or the differential inequality shows that ϕ(b, λ) is strictly increasing in λ. Without loss of generality, we may assume that 0 ≤ α < π. We shall show that lim ϕ(b, λ) = +∞,

(7.13)

lim ϕ(b, λ) = 0.

(7.14)

λ→∞

and λ→−∞

We note that the second boundary condition in (Pλ ) is equivalent to ϕ(b, λ) = β + nπ for some n ≥ 0. Then if (7.12), (7.13), hold then there exist eigenvalues λ0 , λ1 , · · · , λn , λn → +∞ such at ϕ(b, λn ) = β + nπ, n ≥ 0. From Lemma 7.1.1, the corresponding eigenfunction R t u1n (x) has exactly n zeros. Now we prove (7.12). Introduce scaling, s = a p(τ ) dτ and set U (s) = u(t). Then du dt du U˙ (s) = = p(t) dt ds dt 0  du ¨ (s) = p(t) p(t) . U dt And (7.12) becomes ¨ (s) + p(t)(q(t) + λ)U = 0. U

(7.15)

From arbitrary fixed n > 0 and for any fixed M > 0, choose λ sufficiently large such that p(t)(q(t) + λ) ≥ M 2 ,

a ≤ t ≤ b.

Compare (7.15) with u ¨ + M 2 u = 0.

(7.16)

Then the solution u(s) of (7.16) has at least n zeros provided M is sufficiently large. RBy Sturm Comparison Theorem, U (s) has at least n zeros b dt on 0 ≤ s ≤ a p(t) , or equivalently, u(t) has at least n zeros on [a, b]. Then ϕ(b, λ) ≥ nπ if λ is sufficiently large. Hence we complete the proof of (7.13). Next we show that (7.14) holds. Obviously ϕ(b, λ) ≥ 0 for all λ ∈ R.

SECOND ORDER LINEAR EQUATIONS

199

Choose λ < 0 with |λ| sufficiently large and p(t) · (q(t) + λ) ≤ −M 2 < 0. Compare (7.15) with u ¨ − M 2 u = 0, u(0) = sin α, u0 (0) = cos α. Then 1 cos α sin h(M s). M u(s) , ψ(0, M ) = α. Since it can be verified that u(s) ˙

u(s) = sin α cos h(M s) + Let ψ(s, M ) = tan−1

u(s) → 0 as M → ∞, u(s) ˙ R b dt it follows that ψ(b0 , M ) → 0 as M → ∞ where b0 = a p(t) . By Sturm Comparison Theorem, we have 0 ≤ ϕ(b, λ) ≤ ψ(b0 , M ) → 0. Thus (7.14) holds. 7.2

Distributions

In 1930 famous physicist Paul Dirac introduced the concept of δ-function, δξ (x) which satisfies δRξ (x) = 0, for x 6= ξ, ∞ δξ (x)dx = 1, R−∞ ∞ δ (x)ϕ(x)dx = ϕ(ξ), −∞ ξ

ϕ ∈ C ∞.

Physically δξ (x) represents that a unit force is applied at the position x = ξ. In 1950 L. Schwartz introduced the definition of distribution to interpret the mathematical meaning of δ-function. Definition 7.2.1 A test function is a function ϕ ∈ C ∞ (R) with compact support, i.e. there exists a finite interval [a, b] such that ϕ(x) = 0 outside [a, b]. We denote ϕ ∈ C0∞ (R).   ( exp x21−1 , |x| < 1; Then it is easy to verExample 7.2.1 Let ϕ(x) = 0 , |x| ≥ 1. ify that ϕ ∈ C0∞ (R). Definition 7.2.2 We say that t : C0∞ (R) → R is a linear functional if t(αϕ1 + βϕ2 ) = αt(ϕ1 ) + βt(ϕ2 ), α, β ∈ R, ϕ1 , ϕ2 ∈ C0∞ (R). We denote t(ϕ) = ht, ϕi.

200

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Definition 7.2.3 Let {ϕn } ⊆ C0∞ (R). We say that {ϕn } is a zero sequence if

(i) ∪n supp ϕn is bounded,  k (ii) limn→∞ maxx ddxϕkn (x) = 0, k = 0, 1, 2, · · ·. Definition 7.2.4 A linear functional t : C0∞ (R) → R is continuous if ht, ϕn i → 0 as n → ∞ for any zero sequence {ϕn }. A distribution is a continuous linear functional. R Example 7.2.2 Let f ∈ C0∞ (R) be locally integrable, i.e., I |f (x)| dx exists and bounded for any finiteRinterval I. Then we define a distribution tf ∞ associated with f by htf , ϕi = −∞ f (x)ϕ(x)dx. It is an exercise to verify that tf is a distribution. Example 7.2.3 (i) Heavisidedistribution tH . Let H(x) be the Heaviside function: 1, x ≥ 0; H(x) = 0, x < 0. Then R∞ R∞ htH , ϕi = −∞ H(x)ϕ(x)dx = 0 ϕ(x)dx. (ii) Delta distribution δξ def

hδξ , ϕi = ϕ(ξ) R Symbolically we write δξ (x)ϕ(x)dx = ϕ(ξ). (iii) Dipole distribution ∆ def

h∆, ϕi = ϕ0 (0). Definition 7.2.5 We say that a distribution t is regular if t = tf for some locally integrable function f . Otherwise t is singular. Remark 7.2.1 δξ is a singular distribution.

SECOND ORDER LINEAR EQUATIONS

201

Proof. If not, δξ = tf ( for some  locally  integrable f (x). 2 exp x2a−a2 , |x| < a, Consider ϕa (x) = 0 , |x| ≥ a. 1 ϕa (0) = e =max |ϕa (x)|. x

Then htf , ϕa i → 0 as a → 0 for R ∞ |htf , ϕa i| = −∞ f (x)ϕa (x)dx ≤

1 e

Ra −a

|f (x)| dx → 0.

However, if δξ = tf , then htf , ϕa i = ϕa (0) = 1e . This is a contradiction. Next we introduce some algebraic operations of distributions. Let f ∈ C ∞ (R) and t be a distribution. Then we define f t to be a distribution defined as hf t, ϕi = ht, f ϕi , ϕ ∈ C0∞ (R). We note that if t is a regular distribution, then Z ∞ Z ∞ hf t, ϕi = f (x)t(x)ϕ(x)dx = t(x)f (x)ϕ(x)dx. −∞

−∞

To define the derivative Z ∞of a distribution t, we let t be a Zregular distribution. Then ht0 , ϕi = t0 (x)ϕ(x)dx = ϕ(x)t(x) |∞ ϕ0 (x)t(x)dx = −∞ − −∞

− ht, ϕ0 i. Hence we have the following definition: Definition 7.2.6 ht0 , ϕi = − ht, ϕ0 i. It is an easy exercise to verify that t0 is a distribution. Example 7.2.4 H 0 = δ0Z= δ in distribution sense. Z ∞ ∞ hH 0 , ϕi = − hH, ϕ0 i = − H(x)ϕ0 (x)dx = − ϕ0 (x)dx −∞

0

= ϕ(0) = hδ, ϕi, ϕ ∈ C0∞ (R).

Example 7.2.5 δ 0 = −∆ in distribution sense. hδ 0 , ϕi = − hδ, ϕ0 i = −ϕ0 (0) = h−∆, ϕi. 7.3

Green’s Function n

n−1

Let Lu = an (x) ddxnu + an−1 (x) ddxn−1u + · · · + a1 (x) du dx + a0 (x)u be a linear differential operator.

202

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Consider the boundary value problem: (Lu)(x) = f (x), 0 ≤ x ≤ 1 with boundary conditions at x = 0 and x = 1. We say that u is a distribution solution of Lu = f if hLu, ϕi = hf, ϕi for all test functions ϕ. To motivate the concept of Green’s function, we consider the following example. Example 7.3.1 Let Lu = u00 . Consider two-point boundary value problem Lu = f , u00 (x) = f (x), u(0) = 0,

u(1) = 0.

From direct integrations and the boundary conditions, we obtain Z 1 Z x u(x) = − x(1 − ξ)f (ξ)dξ + (x − ξ)f (ξ)dξ 0

Z =

0

1

g(x, ξ)f (ξ)dξ,

(7.17)

0

where 

x(ξ − 1) , 0 ≤ x ≤ ξ ≤ 1; ξ(x − 1) , 0 ≤ ξ ≤ x ≤ 1. R1 Write u(x) = (L−1 f )(x) = 0 g(x, ξ)f (ξ)dξ. Applying L on both sides of the above identity, we have Z 1  Z 1 f (x) = L g(x, ξ)f (ξ)dξ = Lg(x, ξ)f (ξ)dξ. g(x, ξ) =

0

0

Then Lg(x, ξ) = δx (ξ) = δ(x − ξ) and g(x, ξ) is called the Green’s function of differential operator L subject to homogeneous boundary condition u(0) = 0, u(1) = 0. Physical Interpretation of Green’s Function Suppose a string is stretched between x = 0 and x = 1 in equilibrium state. Let u(x) be the vertical displacement from zero position and u(0) = u(1) = 0, i.e., two ends of the string are fixed. Since the string is in equilibrium state, the horizontal and vertical forces must be in balance. Then (see Fig. 7.4)

SECOND ORDER LINEAR EQUATIONS

203

Fig. 7.4

$$T_0 = T(x)\cos\theta(x) = T(x+dx)\cos\theta(x+dx),$$
$$T(x+dx)\sin\theta(x+dx) - T(x)\sin\theta(x) = \rho(x)g\,ds,$$
where $g$ is the gravitational constant, $\rho(x)$ is the density per unit length of the string at $x$, and $ds$ is the arclength between $x$ and $x+dx$. Then
$$T_0\,\frac{\tan\theta(x+dx) - \tan\theta(x)}{dx} = \rho(x)g\,\frac{ds}{dx}.$$
Since $\frac{du}{dx} = \tan\theta(x)$, letting $dx \to 0$ it follows that $T_0\frac{d^2u}{dx^2} = \rho(x)g\left(1 + \left(\frac{du}{dx}\right)^2\right)^{1/2}$.

Assume that $\frac{du}{dx}$ is sufficiently small; then it follows that
$$\frac{d^2u}{dx^2} = f(x), \quad f(x) = \frac{\rho(x)g}{T_0}, \quad u(0) = u(1) = 0.$$
From Example 7.3.1, $u(x) = \int_0^1 g(x,\xi)f(\xi)\,d\xi$, where $g(x,\xi)$ satisfies
$$\frac{d^2}{dx^2}g(x,\xi) = \delta_\xi(x) = \delta(x-\xi), \quad g(0,\xi) = 0, \quad g(1,\xi) = 0.$$
Hence $g(x,\xi)$ is the vertical displacement of a string fixed at $x = 0$ and $x = 1$ subject to a unit force at $\xi$. If we partition $[0,1]$ at points $\{\xi_k\}_{k=0}^n$ and apply a force of magnitude $f(\xi_k)$ at the point $\xi_k$, $k = 1,\cdots,n$, then, by the superposition principle, we have $u(x) = \sum_{k=1}^n f(\xi_k)g(x,\xi_k)\Delta\xi_k$. Letting $n \to \infty$, we obtain (7.17).
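As a numerical illustration (not part of the text), the representation (7.17) can be checked for the constant load $f \equiv 1$, for which direct integration of $u'' = 1$, $u(0) = u(1) = 0$, gives $u(x) = x(x-1)/2$:

```python
import numpy as np

def g(x, xi):
    # Green's function of u'' with boundary conditions u(0) = u(1) = 0.
    return x * (xi - 1.0) if x <= xi else xi * (x - 1.0)

def u_green(x, f, n=4000):
    # Evaluate u(x) = int_0^1 g(x, xi) f(xi) d(xi) by the trapezoidal rule.
    xi = np.linspace(0.0, 1.0, n + 1)
    vals = np.array([g(x, s) for s in xi]) * f(xi)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(xi)) / 2.0)

f = lambda xi: np.ones_like(xi)            # constant load f = 1
for x in (0.25, 0.5, 0.75):
    exact = x * (x - 1.0) / 2.0            # direct solution of u'' = 1
    assert abs(u_green(x, f) - exact) < 1e-6
```

The same quadrature works for any continuous load, which is the practical content of the superposition argument above.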


Now we consider the following two-point boundary value problem: $Lu = f$ on $a < x < b$, where
$$Lu = a_n(x)\frac{d^n u}{dx^n} + a_{n-1}(x)\frac{d^{n-1}u}{dx^{n-1}} + \cdots + a_1(x)\frac{du}{dx} + a_0(x)u, \qquad (7.18)$$
with homogeneous boundary conditions $B_1u(a) = 0$, $B_2u(b) = 0$. To solve (7.18), we introduce the Green's function $g(x,\xi)$, which satisfies
$$Lg(x,\xi) = \delta_\xi(x) \ \text{ for all } x \in (a,b), \quad B_1g(a,\xi) = 0, \quad B_2g(b,\xi) = 0. \qquad (7.19)$$
Then we expect the solution $u(x)$ of (7.18) to be written as $u(x) = \int_a^b g(x,\xi)f(\xi)\,d\xi$.

We shall find the Green's function $g(x,\xi)$ by the following:

(i) $Lg(x,\xi) = 0$ for $x \ne \xi$, with $B_1g(a,\xi) = 0$, $B_2g(b,\xi) = 0$;

(ii) $\frac{d^kg}{dx^k}(x,\xi)$ is continuous at $x = \xi$ for $k = 0, 1, \cdots, n-2$;

(iii) Jump condition: $\frac{d^{n-1}g}{dx^{n-1}}(\xi^+,\xi) - \frac{d^{n-1}g}{dx^{n-1}}(\xi^-,\xi) = \frac{1}{a_n(\xi)}$.

To see why the jump condition (iii) holds, we integrate (7.19) from $\xi^-$ to $\xi^+$. Then
$$\int_{\xi^-}^{\xi^+}\Big[a_n(x)\frac{d^ng}{dx^n}(x,\xi) + a_{n-1}(x)\frac{d^{n-1}g}{dx^{n-1}}(x,\xi) + \cdots + a_1(x)\frac{dg}{dx}(x,\xi) + a_0(x)g(x,\xi)\Big]dx = 1.$$
From condition (ii), the above identity becomes
$$a_n(\xi)\,\frac{d^{n-1}g}{dx^{n-1}}(x,\xi)\Big|_{x=\xi^-}^{x=\xi^+} = 1.$$
Hence (iii) holds.

Example 7.3.2 Solve
$$u''(x) = f(x),\ 0 < x < 1, \quad u'(0) = a,\ u(1) = b.$$

Let $u_1(x)$, $u_2(x)$ be the solutions of
$$u''(x) = f(x),\ 0 < x < 1, \quad u'(0) = 0,\ u(1) = 0, \qquad (7.20)$$
and
$$u''(x) = 0,\ 0 < x < 1, \quad u'(0) = a,\ u(1) = b, \qquad (7.21)$$
respectively.

Then $u(x) = u_1(x) + u_2(x)$, where $u_1(x) = \int_0^1 g(x,\xi)f(\xi)\,d\xi$ and $u_2(x) = ax + (b-a)$. Find the Green's function $g(x,\xi)$ of (7.20), which satisfies

(i) $\frac{d^2}{dx^2}g(x,\xi) = 0$ for $x \ne \xi$, with $\frac{dg}{dx}(0,\xi) = 0$, $g(1,\xi) = 0$;

(ii) $g(x,\xi)$ is continuous at $x = \xi$;

(iii) $\frac{dg}{dx}(x,\xi)\big|_{x=\xi^-}^{x=\xi^+} = 1$.

From (i),
$$g(x,\xi) = \begin{cases} Ax + B, & 0 \le x < \xi \le 1;\\ Cx + D, & 0 < \xi < x \le 1.\end{cases}$$
$\frac{dg}{dx}(0,\xi) = 0$ implies $A = 0$, and $g(1,\xi) = 0$ implies $D = -C$. From (ii), we have $A\xi + B = C\xi + D$. Hence
$$g(x,\xi) = \begin{cases} C(\xi - 1), & 0 \le x \le \xi \le 1;\\ C(x - 1), & 0 < \xi \le x \le 1.\end{cases}$$

From the jump condition (iii), $\frac{dg}{dx}(\xi^+,\xi) - \frac{dg}{dx}(\xi^-,\xi) = C - 0 = 1$, so $C = 1$ and
$$g(x,\xi) = \begin{cases} \xi - 1, & 0 \le x \le \xi \le 1;\\ x - 1, & 0 \le \xi \le x \le 1.\end{cases}$$

7.5 Exercises

Exercise 7.1 Consider the equation
$$u'' + q(t)u = 0. \qquad (*)$$
(1) Let $q(t)$ satisfy $0 < m \le q(t) \le M$. Show that if $t_1$, $t_2$ are successive zeros of a nontrivial real-valued solution $u(t)$ of $(*)$ ($t_2 > t_1$), then $\pi/m^{1/2} \ge t_2 - t_1 \ge \pi/M^{1/2}$.
(2) Let $q(t)$ be continuous for $t \ge 0$ and $q(t) \to 1$ as $t \to \infty$. Show that if $u = u(t) \ne 0$ is a real-valued solution of $(*)$, then the zeros of $u(t)$ form a sequence $0 \le t_1 < t_2 < \cdots$ such that $t_n - t_{n-1} \to \pi$ as $n \to \infty$.
(3) Consider the Bessel equation
$$v'' + \frac{v'}{t} + \left(1 - \frac{\mu^2}{t^2}\right)v = 0, \qquad (**)$$
where $\mu$ is a real parameter. The change of variable $u = t^{1/2}v$ transforms $(**)$ into
$$u'' + \left(1 - \frac{\alpha}{t^2}\right)u = 0, \quad \text{where } \alpha = \mu^2 - \frac14.$$
Show that the zeros of a real-valued solution $v(t)$ of $(**)$ on $t > 0$ form a sequence $t_1 < t_2 < \cdots$ such that $t_n - t_{n-1} \to \pi$ as $n \to \infty$.

Exercise 7.2 Show that all solutions of $y'' + \left|t + \frac1t\right|y + y^3 = 0$ are oscillatory on $[1,\infty)$. (I.e., they have infinitely many zeros in $[1,\infty)$.)

Exercise 7.3 Suppose $a(t) > 0$ increases to $\infty$ as $t \to \infty$. Show that all solutions of $u'' + a(t)u = 0$ are bounded as $t \to \infty$. (Hint: multiply the equation by $u'$ and integrate, then try to apply some inequality.)

Exercise 7.4 Consider the second order equation $\ddot y + p(t)\dot y + q(t)y = 0$, where $p(t)$ and $q(t)$ are smooth positive functions on the real line and $q(t)$ is decreasing. Rewrite this equation as a linear system and show that the function $E(t,y,\dot y) = q(t)y^2 + (\dot y)^2$ is decreasing along the trajectories of the system.
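As a numerical illustration of Exercise 7.1(2) (not part of the text), one can integrate $u'' + q(t)u = 0$ with a coefficient tending to $1$ and watch the gaps between consecutive zeros approach $\pi$; the particular $q$ below is an arbitrary choice:

```python
import math

def q(t):
    # Arbitrary coefficient with q(t) -> 1 as t -> infinity.
    return 1.0 + 1.0 / (1.0 + t) ** 2

def rk4_step(t, u, v, h):
    # One RK4 step for the system u' = v, v' = -q(t) u.
    def f(t, u, v):
        return v, -q(t) * u
    k1 = f(t, u, v)
    k2 = f(t + h / 2, u + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = f(t + h / 2, u + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = f(t + h, u + h * k3[0], v + h * k3[1])
    u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return u, v

t, u, v, h = 0.0, 0.0, 1.0, 2e-3
zeros = []
for _ in range(30000):                         # integrate up to t = 60
    u_new, v_new = rk4_step(t, u, v, h)
    if u != 0.0 and u * u_new < 0.0:           # sign change: a zero of u
        zeros.append(t + h * u / (u - u_new))  # linear interpolation
    t, u, v = t + h, u_new, v_new

last_gap = zeros[-1] - zeros[-2]
assert abs(last_gap - math.pi) < 1e-2          # gaps approach pi
```

Early gaps are shorter than $\pi$ (where $q > 1$ the solution oscillates faster, consistent with part (1)).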


Exercise 7.5 Consider, for $t > 0$, the equation
$$\ddot x + \frac{\lambda}{t^{2002}}x = 0.$$
(1) Prove that for any interval $(a,b) \subset (0,\infty)$ there exists $\lambda$ such that if $\varphi(t)$ is a solution of the above equation, then there exists $t_0 \in (a,b)$ satisfying $\varphi(t_0) = 0$.
(2) Prove that if $\lambda < 0$, then any solution $\varphi(t)$ which is not identically zero has no more than one root.

Exercise 7.6 Find the eigenvalues and eigenfunctions of $y'' + \lambda y = 0$, $y(0) + y'(0) = 0$, $y(1) = 0$.

Exercise 7.7 Show that the solution $v(x,a)$ of the initial value problem (7.10) with $0 < a < \pi$ satisfies $\lim_{x\to\infty} v(x,a) = 0$.

Exercise 7.8 Consider the following boundary value problem
$$(P)_\alpha = \begin{cases} v''(x) = K^3x\sin v, & 0 \le x \le 1,\ K > 0,\\ v'(0) = 0,\ v(1) = \alpha, & 0 \le \alpha \le \pi,\end{cases}$$
show that
(1) If $\alpha = 0$, then $(P)_0$ has a unique solution $v(x) \equiv 0$, $0 \le x \le 1$, for any $K > 0$;
(2) If $K^3$
If $\Delta = ad - bc > 0$, then $(0,0)$ is a source, a sink, or a center. On the other hand, if $\Delta < 0$, then $(0,0)$ is a saddle point. Let $C = \{(x,y) : x = \cos s,\ y = \sin s,\ 0 \le s \le 2\pi\}$ be the unit circle. Then the index for the linear vector field $f = (P,Q)$ is
$$I_f(C) = \frac{1}{2\pi}\int_0^{2\pi}\frac{d\theta}{ds}\,ds,$$
where
$$\theta(s) = \tan^{-1}\frac{Q(x(s),y(s))}{P(x(s),y(s))} = \tan^{-1}\frac{cx+dy}{ax+by}.$$
Since $\frac{dx}{ds} = -y$ and $\frac{dy}{ds} = x$, we have
$$\frac{d\theta}{ds} = \frac{\frac{d}{ds}\left(\frac{cx+dy}{ax+by}\right)}{1 + \left(\frac{cx+dy}{ax+by}\right)^2} = \frac{(ax+by)\left(c\frac{dx}{ds} + d\frac{dy}{ds}\right) - (cx+dy)\left(a\frac{dx}{ds} + b\frac{dy}{ds}\right)}{(ax+by)^2 + (cx+dy)^2} = \frac{ad-bc}{(ax+by)^2 + (cx+dy)^2}.$$
Hence we have
$$I_f(C) = \frac{ad-bc}{2\pi}\int_0^{2\pi}\frac{dt}{(a\cos t + b\sin t)^2 + (c\cos t + d\sin t)^2}. \qquad (8.3)$$

We note that the R.H.S. of (8.3) is an integer and is continuous in $a$, $b$, $c$, $d$ for $ad - bc \ne 0$. If $ad - bc > 0$, we have two cases:

(i) $ad > 0$: Let $b, c \to 0$ and $d \to a$; then we have $I_f(C) = \frac{1}{2\pi}\int_0^{2\pi}d\theta = 1$ by the homotopy invariance property.

THE INDEX THEORY AND BROUWER DEGREE

221

(ii) $ad \le 0$: Then $bc < 0$. Let $a, d \to 0$ and $b \to -c$; it follows that $I_f(C) = 1$. If $ad - bc < 0$, then the same argument with $d \to -a$, $b, c \to 0$ yields $I_f(C) = -1$.

Theorem 8.1.4 Let $f$ and $g$ be vector fields which never have opposite directions at any point of a Jordan curve $C$. Then $I_f(C) = I_g(C)$.

Proof. Define a homotopy between $f$ and $g$:
$$h_s = (1-s)f + sg, \quad 0 \le s \le 1.$$
Then $h_0 = f$ and $h_1 = g$. We claim that $h_s \ne 0$ at every point of $C$, for all $0 < s < 1$. If not, then for some $0 < s < 1$ there exists a point $p$ on $C$ such that $h_s(p) = 0$, i.e., $f(p) = -\frac{s}{1-s}g(p)$, which contradicts the assumption. Hence the claim holds, and by the homotopy invariance property $I_f(C) = I_{h_s}(C) = I_g(C)$.

Theorem 8.1.5 Let $ad - bc \ne 0$ and
$$f(x,y) = \begin{pmatrix} f_1(x,y)\\ f_2(x,y)\end{pmatrix} = \begin{pmatrix} a & b\\ c & d\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix} + \begin{pmatrix} g_1(x,y)\\ g_2(x,y)\end{pmatrix} = \begin{pmatrix} a & b\\ c & d\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix} + g(x,y), \qquad (8.4)$$
where $g_1(x,y) = o(\sqrt{x^2+y^2})$, $g_2(x,y) = o(\sqrt{x^2+y^2})$ as $(x,y) \to (0,0)$. Then $I_f(0) = I_v(0)$, where $v$ is the linear vector field
$$v(x,y) = D_xf(0,0)\begin{pmatrix} x\\ y\end{pmatrix} = \begin{pmatrix} a & b\\ c & d\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix}.$$

Proof. Let $C_r = \{(x,y) : x = r\cos\theta,\ y = r\sin\theta\}$ be the circle with center $(0,0)$ and radius $r$ sufficiently small. We show that the vector fields $f$ and $v$ never point in opposite directions on $C_r$. Then, from Theorem 8.1.4, it follows that $I_f(C_r) = I_v(C_r)$ and hence $I_f(0) = I_v(0)$. Suppose, on the contrary, that $f$ and $v$ point in opposite directions at some point of $C_r$. Then $f(x_0,y_0) + sv(x_0,y_0) = 0$ for some $s > 0$


and for some $(x_0,y_0) \in C_r$. From (8.4), we have $(1+s)v(x_0,y_0) = -g(x_0,y_0)$ and hence
$$(1+s)^2\|v(x_0,y_0)\|^2 = \|g(x_0,y_0)\|^2. \qquad (8.5)$$
Since $ad - bc \ne 0$, $v$ vanishes only at $(0,0)$. Let $m = \inf\{\|v(x,y)\| : x^2 + y^2 = 1\}$. Then $m > 0$. Since
$$v\left(\frac{x}{\sqrt{x^2+y^2}}, \frac{y}{\sqrt{x^2+y^2}}\right) = \frac{1}{\sqrt{x^2+y^2}}\,v(x,y),$$
it follows that $\frac1r\|v(x,y)\| \ge m$ for $(x,y) \in C_r$. From (8.5), we have $m^2(1+s)^2r^2 \le \|g(x_0,y_0)\|^2$, or
$$m^2 < m^2(1+s)^2 \le \frac{\|g(x_0,y_0)\|^2}{r^2} \to 0 \quad \text{as } r \to 0.$$
This is a contradiction and we complete the proof.

From Theorems 8.1.1, 8.1.3, 8.1.5 and the indices of linear vector fields, we have the following:

Theorem 8.1.6
(i) The index of a sink, a source, or a center is $+1$;
(ii) The index of a saddle is $-1$;
(iii) The index of a closed orbit is $+1$;
(iv) The index of a closed curve not containing any fixed points is $0$;
(v) The index of a closed curve is equal to the sum of the indices of the fixed points within it.
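Theorem 8.1.6(i)-(ii) can be illustrated numerically (this example is not from the text): traverse a circle around the fixed point, follow a continuous branch of the angle of the vector field, and read off the winding number.

```python
import numpy as np

def index_at_origin(f, r=1.0, n=4000):
    # Winding number of the planar vector field f around a circle of radius r.
    s = np.linspace(0.0, 2.0 * np.pi, n)
    P, Q = f(r * np.cos(s), r * np.sin(s))
    theta = np.unwrap(np.arctan2(Q, P))       # continuous branch of the angle
    return round((theta[-1] - theta[0]) / (2.0 * np.pi))

sink   = lambda x, y: (-x, -y)                # linear sink
saddle = lambda x, y: (x, -y)                 # linear saddle
assert index_at_origin(sink) == 1
assert index_at_origin(saddle) == -1
```

The same routine applied to nonlinear fields approximates the sum of the indices of all fixed points enclosed by the circle, in accordance with (v).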

Corollary 8.1.2 Inside any periodic orbit $\Gamma$ there exists at least one fixed point. If there is only one, then it must be a sink, a source, or a center. If all fixed points within $\Gamma$ are hyperbolic, there must be an odd number $2n+1$ of them, of which $n$ are saddles and $n+1$ are sinks or sources.

Now we shall prove two classical theorems using index theory. Let $z = x + iy$ and $f(z) = z^n$, $n$ a positive integer. Then, from Example 8.1.2, $I_f(C) = n$ for $C = \{z : |z| = \rho\}$.

Theorem 8.1.7 (Fundamental Theorem of Algebra) Let $p(z) = a_nz^n + \cdots + a_0$, $a_n \ne 0$, be a polynomial in the complex variable $z = x + iy$, $a_i \in \mathbb{C}$, $i = 0, 1, \cdots, n$. Then $p(z) = 0$ has at least one zero.


Proof. Assume $a_n = 1$. Define a homotopy of vector fields
$$f_t(z) = z^n + t\left(a_{n-1}z^{n-1} + \cdots + a_0\right).$$
Since
$$\frac{|a_{n-1}z^{n-1} + \cdots + a_0|}{|z^n|} \to 0 \quad \text{as } |z| \to \infty,$$
there exists $\rho > 0$ such that $f_t(z) \ne 0$ for $|z| \ge \rho$ and $0 \le t \le 1$. We note that $f_0(z) = z^n$ and $f_1(z) = p(z)$. Let $C = \{z : |z| = \rho\}$. Then $n = I_{f_0}(C) = I_{f_1}(C) = I_p(C) \ne 0$. If $p(z)$ had no zeros inside $C$, then $I_p(C) = 0$. This is a contradiction. Hence $p(z)$ has at least one zero inside $C$.

Theorem 8.1.8 (Brouwer Fixed Point Theorem for $\mathbb{R}^2$) Let $D = \{z \in \mathbb{R}^2 : |z| \le 1\}$ and let $g : D \to D$ be continuous. Then $g$ has a fixed point.

Proof. Let $C = \partial D = \{z : |z| = 1\}$ be the unit circle in $\mathbb{R}^2$. We shall prove the theorem by contradiction. Suppose $g(z) \ne z$ for all $z \in D$, i.e., there is no fixed point of $g$ in $D$. Define a continuous family of vector fields $f_t(z) = tg(z) - z$. We verify that $f_t(z) \ne 0$ for all $z \in C$ and $0 \le t \le 1$. If not, there exist some $0 < t < 1$ and some $z \in C$ such that $f_t(z) = 0$, or $tg(z) = z$. Then, since $|g(z)| \le 1$ and $|z| = 1$, we have $1 > t = \frac{|z|}{|g(z)|} \ge 1$, a contradiction. Hence $I_{f_1}(C) = I_{f_0}(C)$. Claim: $I_{f_0}(C) = 1$. Indeed, $f_0(z) = -z$ is the linear vector field $f_0(z) = Az$ with $A = \begin{pmatrix} -1 & 0\\ 0 & -1\end{pmatrix}$, which has $(0,0)$ as a sink; thus $I_{f_0}(C) = 1$, and hence $I_{f_1}(C) = 1$. But $f_1(z) = g(z) - z \ne 0$ on $D$, which implies $I_{f_1}(C) = 0$. This is a contradiction.

8.2 Introduction to the Brouwer Degree in $\mathbb{R}^n$

Brouwer degree d(f, p, D) is a useful tool to study the existence of solutions of the nonlinear equation f (x) = p, f : D ⊆ Rn → Rn , where D is a bounded domain with boundary ∂D = C and f (x) 6= p for all x ∈ C. To

224

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

motivate the definition of d(f, p, D), we consider the special case n = 2, p = 0. From Theorem 8.1.6(v), (i), (ii), we have If (C) =

N X i=1

=

N X

If (zi ) =

N X

Ivi (0)

i=1

sgn(det Jf (zi ))

i=1

where Jf (x) = Df (x) is the Jacobian matrix of vi (x) = Df (xi )x and {zi }N i=1 is the set of all fixed points inside C. We now define the Brouwer degree d(f, p, D) as follows: Let f : D ⊆ Rn → Rn , p ∈ Rn and f (x) 6= p for all x ∈ ∂D. Step 1: Let f ∈ C 1 (D) and assume det Jf (x) 6= 0 for all x ∈ f −1 (p). P We define d(f, p, D) = x∈f −1 (p) sgn(det Jf (x)). From the assumption det Jf (x) 6= 0 for all x ∈ f −1 (p) and the Inverse Function Theorem, it follows that f −1 (p) is a discrete subset in the compact set D. Thus f −1 (p) is a finite set and d(f, p, D) is well-defined. Step 2: Let f ∈ C 1 (D) and assume that there exists x ∈ f −1 (p) such that det Jf (x) = 0. From Sard’s Theorem (see Theorem 8.2.1) there exists a sequence {pm }, pm → p as m → ∞ such that det Jf (x) 6= 0 for all x ∈ f −1 (pm ). Then we define d(f, p, D) = lim d(f, pm , D). We note that m→∞

it can be shown that the definition is independent of the choices of {pm }. Step 3: Let f ∈ C(D). Since C 1 (D) is dense in C(D), there exists {fm } ⊆ C 1 (D) such that fm → f uniformly on D. Then we define d(f, p, D) = lim d(fm , p, D). We note that it can be shown that the defim→∞

nition is independent of the choices of {fm }. In the following we state Sard’s Theorem without proof and explain the theorem. The proof can be found in [BB]. Theorem 8.2.1 (Special case of Sard’s Theorem) Let f : D ⊆ Rn → Rn be C 1 and B = {x ∈ D : det Jf (x) = 0}. Then f (B) has empty interior.

THE INDEX THEORY AND BROUWER DEGREE

225

Remark 8.2.1 If p ∈ f (B), then p = f (x) for some x ∈ B. Hence x ∈ f −1 (p) and det Jf (x) = 0. Since p is not an interior point, there exists {pm }, pm → p, m → ∞ such that pm ∈ / f (B). Then for all x ∈ f −1 (pm ), we have det Jf (x) 6= 0. Before we state Sard’s Theorem, we introduce the following definition. Definition 8.2.1 Let f : D ⊆ Rn → Rm be C 1 , m ≤ n. We say that x ∈ Rn is a regular point if RankJf (x) = m. (I.e., the m × n matrix Jf (x) is of full rank.) We say that x ∈ Rn is a critical point if x is not a regular point. We say that p ∈ Rn is a regular value if for each x ∈ f −1 (p), x is a regular point. We say that p ∈ Rm is a critical value if p is not a regular value. Theorem 8.2.2 (Sard’s Theorem) Let f : D ⊆ Rn → Rm be C 1 and C = {x ∈ D : RankJf (x) < m}, i.e., C is the set of critical points. Then f (C) is of measure zero. Remark 8.2.2 Sard’s Theorem implies that for almost all p ∈ Rm , p is a regular value. Let E = f (C). Then E is of measure zero. Let p ∈ Rm \E, then we have f −1 (p) ∩ C is empty, i.e., for each x ∈ f −1 (p), x is a regular point. Hence p is a regular value. Remark 8.2.3 If m = n then C = {x ∈ Rn : det Jf (x) = 0}. By Sard’s Theorem, f (C) is of measure zero and thus f (C) has empty interior. Hence we complete the proof of Theorem 8.2.1. Properties of d(f, p, D) (i) Homotopy invariance Let H : D ×[0, 1] → Rn , D ⊆ Rn be continuous in x and t and H(x, t) 6= p for all x ∈ ∂D, 0 ≤ t ≤ 1. Then d(H(·, t), p, D) = constant. Proof. Let h(t) = d(H(·, t), p, D). Then h(t) is continuous in t and h(t) is integer-valued. Hence h(t) ≡ constant for all t. Remark 8.2.4 Consider linear homotopy: H(x, t) = tf (x) + (1 − t)g(x). Then H(x, 0) = g(x), H(x, 1) = f (x) and d(f, p, D) = d(g, p, D) provided tg(x) + (1 − t)f (x) 6= 0 for all x ∈ ∂D and 0 ≤ t ≤ 1.

226

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

(ii) d(f, p, D) is uniquely determined by f |∂D .

Proof. Let fe(x) satisfies fe(x) = f (x) for x ∈ ∂D. Define H(x, t) = tfe(x) + (1 − t)f (x), x ∈ D, 0 ≤ t ≤ 1. Then for x ∈ ∂D, H(x, t) = f (x) 6= p. By (i) we have deg(f, p, D) = deg(fe, p, D). (iii) d(f, p, D) is a continuous function of f and p.

Proof. (iii) follows directly from the definition. (iv) (Poincar´ e-Bohl) Let f (x) 6= p, g(x) 6= p for all x ∈ ∂D. If f (x) − p and g(x) − p never point opposite on each x ∈ ∂D then d(f, p, D) = d(g, p, D).

Proof. Define a homotopy H(x, t) = t(f (x) − p) + (1 − t)(g(x) − p). We claim that H(x, t) 6= 0 for all x ∈ ∂D, 0 ≤ t ≤ 1. If not, then there exists t, 0 < t < 1 and, x ∈ ∂D such that H(x, t) = 0 and it follows that f (x)−p = − 1−t t (g(x)−p). This contradicts that f (x)−p and g(x)−p never point opposite for each x ∈ ∂D. By (i), deg(f, p, D) = deg(g, p, D). Corollary 8.2.1 If f (x) · x < 0 for all x ∈ ∂D or f (x) · x > 0 for all x ∈ ∂D, then there exists x0 ∈ D such that f (x0 ) = 0 or there exists an equilibrium of x0 = f (x).

P Proof. Let g(x) = x, p = 0. Then d(g, 0, D) = x∈g−1 (0) sgn(det Jg (x)) = 1. The assumption f (x) · x > 0 for all x ∈ ∂D says that f (x) and g(x) never point opposite on ∂D. From (iv) d(f, 0, D) = d(g, 0, D) = 1 6= 0 and by the following (v) there exists x0 ∈ D such that f (x0 ) = 0. If f (x) · x < 0 for x ∈ ∂D, then we replace f (x) by −f (x). (v) If d(f, p, D) 6= 0 then there exists x ∈ D such that f (x) = p.

THE INDEX THEORY AND BROUWER DEGREE

227

Proof. We prove (v) by contradiction. Suppose f (x) 6= p for all x ∈ D. Then by definition of d(f, p, D), it follows that d(f, p, D) = 0. This contradicts the assumption d(f, p, D) 6= 0. Next, we prove Brouwer Fixed Point Theorem Let Dn = {x ∈ Rn : |x| ≤ 1} and f : Dn → Dn be continuous. Then f has a fixed point, i.e., there exists x ∈ Dn such that f (x) = x. Proof. We shall show that f (x) − x = 0 has a solution in Dn . Consider the following homotopy H(x, t) = x − tf (x), x ∈ Dn , 0 ≤ t ≤ 1. Then H(x, 0) = x = g(x), H(x, 1) = x − f (x). We claim that H(x, t) 6= 0 for all x ∈ ∂Dn , 0 ≤ t ≤ 1. Obviously H(x, 0) = g(x) 6= 0 on ∂Dn . If H(x, t) = 0 for some 0 < t < 1 and some x ∈ ∂Dn , then x = tf (x). Since 1 = |x| = t |f (x)| ≤ t, we obtain a contradiction. Hence d(x−f (x), 0, Dn ) = d(g, 0, Dn ) = 1 6= 0 and it follows that there exists x ∈ Dn such that x = f (x). Remark 8.2.5 Brouwer Fixed Point Theorem has many applications in Economics. In 1976, Kellogg, Li and Yorke gave a constructive proof of Brouwer Fixed Point Theorem. Interested readers may consult their work [KLY] in computing numerically Brouwer fixed points. Generalized Brouwer Fixed Point Theorem Let K ⊂ Rn be a compact convex set. If f : K → K is a continuous function then f admits a fixed point. Proof. By translation we may assume Int K 6= φ and 0 ∈ Int K. Consider a map JK : Rn → R+ defined by JK (x) = inf {λ ≥ 0|x ∈ λK}. Define h(0) = 0, h(x) = JK (x) x if x ∈ K\{0}. Then h : K → Dn is a homeomorphism |x| with h(∂K) ⊂ ∂Dn . Let g = h ◦ f ◦ h−1 : Dn → Dn . Then g is continuous, and by Brouwer Fixed Theorem, there exists x0 ∈ Dn such that x0 = g(x0 ) = h ◦ f ◦ h−1 (x0 ) or h−1 (x0 ) = f (h−1 (x0 )). Hence h−1 (x0 ) ∈ K is a fixed point of f . (vi) Domain decomposition If {Di }N sets in D and f (x) 6= p for all i=1 is a finite disjoint open PN N x ∈ (D − ∪i=1 Di ), then d(f, p, D) = i=1 d(f, p, Di ).

228

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Proof. Assume f ∈ C 1 (D) and detJf (x) 6= 0 for all x ∈ f −1 (p). Then X d(f, p, D) = sgn(det Jf (x)) x∈f −1 (p)

=

N X

X

sgn(det Jf (x))

i=1 x∈Di ∩f −1 (p)

=

N X

d(f, p, Di ).

i=1

(vii) (Odd mapping) Let D ⊆ Rn symmetric domain about origin 0 and f (−x) = −f (x) on D with f : D → Rn , f (x) 6= 0 on ∂D, then d(f, 0, D) is an odd integer. Proof. We may assume f ∈ C 1 and det(Jf (x)) 6= 0 for all x ∈ f −1 (0) in D. The solutions of f (x) = 0 can be group in the pair (¯ x, −¯ x) together with x ¯ = 0. Since f (x) is an odd mapping, we have det(Jf (¯ x)) = (−1)n det(Jf (−¯ x)) 6= 0 and X d(f, p, D) = sgn(det Jf (¯ x)) x ¯∈f −1 (0)

= sgn det Jf (0) + even integer = odd integer.

Borsuk-Ulam Theorem Let f : S n → Rn be a continuous map where S n = {x ∈ Rn+1 | k x k2 = 1}. Then there exists x0 ∈ S n with f (x0 ) = f (−x0 ). Proof. Define f˜ : S n → Rn+1 , f˜ = (f1 (x), −f1 (x), f2 (x) − f2 (−x), . . ., fn (x) − fn (−x), 0). We prove by contradiction. Assume for all x ∈ S n , fi (x) 6= fi (−x) for some i ∈ {1, 2, . . . , n}. P P Let = {x ∈ Rn+1 | k x k2 ≤ 1}. Then d(f˜, 0, ) is defined since f˜(x) 6= 0 P for all x ∈ ∂ = Sn. P ˜ By (vii) and f is an odd mapping, d(f˜, 0, ) = odd 6= 0. P P Let g˜ : → Rn+1 . Then d(˜ g , 0, ) = 0.

THE INDEX THEORY AND BROUWER DEGREE

229

Consider homotopy H(x, t) = t˜ g (x) + (1 − t)f˜(x). Since H(x, t) 6= 0 on S n P P P for all 0 < t < 1, then d(f˜, 0, ) = d(H(·, t); 0, ) = d(˜ g , 0, ) = 0. We obtain a contradiction. Example 8.2.1 Let n be odd. Let f : S n−1 → Rn be continuous and f (x) 6= 0 for all x ∈ S n−1 . Then the direction of some normal vector is left unchanged under f, i.e., there exist λ 6= 0 and x0 ∈ S n−1 such that f (x0 ) = λx0 . Proof. Let Σ = {x ∈ Rn : |x| ≤ 1}. Then ∂Σ = S n−1 and by hypothesis, d(f, 0, Σ) is defined. Consider the following homotopies, H1 (x, t) = tf (x) + (1 − t)x, H2 (x, t) = tf (x) + (1 − t)(−x). Claim: One of the homotopies must vanish for some t0 ∈ (0, 1], some x0 ∈ S n−1 . Otherwise if both of H1 (x, t), H2 (x, t) do not vanish, then d(f, 0, Σ) = d(x, 0, Σ) = 1 d(f, 0, Σ) = d(−x, 0, Σ) = −1

(∵ n is odd).

We obtain a contradiction. Hence there exists x0 ∈ S n−1 such that f (x0 ) = λx0 for some λ 6= 0. Example 8.2.2 A nonvanishing tangent vector field exists on S n−1 if and only if n is even. Proof. (⇐) If n is even, say n = 2m, x = (x1 , x2 , . . . , x2m−1 , x2m ) ∈ S n−1 set V (x) = (x1 , x2 , . . . , x2m−1 , x2m ). Then V (x) · x = 0 and V (x) 6= 0 for all x ∈ S n−1 . Hence V (x) is a nonvanishing tangent vector field on S n−1 . (⇒) Suppose a tangent vector field V (x) exists on S n−1 . We want to show n is even. Consider H(x, t) = x cos πt −

V (x) sin πt. |V (x)|

Then H(x, 0) = x H(x, 1) = −x.

230

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Claim |H(x, t)| = 1 6= 0 for all x ∈ ∂Σ = S n−1 ⊆ Rn !2 n X (V (x))i 2 |H(x, t)| = xi cos πt − 2 sin πt |V (x)| i=1 =

n X

x2i cos2 πt +

i=1

(V (x))2i |V (x)|

2

sin2 πt − 2xi

(V (x))i cos πt sin πt |V (x)|

n

= cos2 πt + sin2 πt − x ·

V (x) X sin 2πt |V (x)| i=1

= 1. Then d(f, 0, Σ) = d(x, 0, Σ) = 1. Since d(−x, 0, Σ) = (−1)n , we obtain that n is even. Next we introduce the notion of index. Definition 8.2.2 Assume f (x) − p has an isolated zero x0 . Then the index of f at x0 is defined as i(x0 , f (x) − p) = d(f (x) − p, 0, B ) = d(f, p, B ), where B is the ball B(x0 , ε), 0 < ε  1, B ⊂ D. As we did in Theorem 8.1.6(vi), we have the following result. Theorem 8.2.3 d(f, p, D) =

P

xj ∈f −1 (p)

i(xj , f (x) − p).

Remark 8.2.6 For the infinite dimensional space, the Brouwer degree and Brouwer fixed point Theorem can be extended to Leray-Schauder degree and Schauder fixed point theorem, respectively. Interested reader may consult [BB]. 8.3

Lienard Equation with Periodic Forcing

Consider the Lienard equation (see Section 6.2) with a periodic forcing term: x00 + f (x)x0 + g(x) = p(t). Let F (x) =

Rx 0

f (s) ds. Assume f , g, p satisfy

(8.6)

THE INDEX THEORY AND BROUWER DEGREE

231

(A1) f ∈ C(R, R) is even, g, p ∈ C 1 (R, R) and p is T -periodic. (A2) g(x)/x → ∞ as |x| → ∞. (A3) There exist b, B > 0 such that |F (x) − bg(x)| ≤ B|x| for any x ∈ R. Example 8.3.1 Given ε > 0. By the Levinson-Smith theorem (Theorem 6.2.1), the Lienard equation x00 + ε(x2 − 1)x0 + x3 = 0 has unique limit cycle, and it is globally orbital stable. Clearly f (x) = ε(x2 − 1) and g(x) = x3 satisfy assumptions (A1), (A2), and (A3) with b = ε/3, B = ε. If a T -periodic forcing term p were added, where T > 0 is arbitrary, x00 + ε(x2 − 1)x0 + x3 = p(t), it is not apparent whether T -periodic solutions always exist. Theorem 8.3.1 There exists a T -periodic solution for (8.6). Proof. [Lef] The equation (8.6) can be written  0 x = y − F (x), y 0 = p(t) − g(x). Assumption (A1) ensures existence and uniqueness of solution for (8.6) with arbitrary initial condition (x(0), y(0)) = (x0 , y0 ). Denote this solution by φt (x0 , y0 ). The mapping P : R2 → R2 defined by P (x, y) = φT (x, y) is well-defined. If P has a fixed point (x0 , y0 ), then clearly φt (x0 , y0 ) is a T -periodic solution for (8.6). Thus, all we need to show is the existence of fixed point for P . Consider the quadratic form on R2 : u(x, y) =

x2 by 2 − xy + . b 2

It can be alternatively written      x x A , , y y

 A=

1 b − 21

− 12 b 2

 .

The quadratic form u is positive definite since A is positive definite, so its level sets are ellipses.

232

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Along any solution (x(t), y(t)), we have   2x − y x0 + (by − x)y 0 u0 = b   2x = − y (y − F (x)) + (by − x)(p(t) − g(x)) b   2F (x) 2 = −y 2 − − g(x) x + [F (x) − bg(x)] y + xy + p(t)(by − x). b b It follows from assumptions (A2) and (A3) that g(x) , x

F (x) → ∞ as |x| → ∞. x

Since we have assumed that f is even, F (x)/x is nonnegative for any x ∈ R. Using this observation and assumption (A3), we can compare the two brackets [· · · ] in the above equation for u0 : 2F (x) b

− g(x) F (x) − bg(x) B ≥ ≥ − x bx b

for any x ∈ R.

The difference F (x)/(bx) between the first and second terms is unbounded, so the first term tends to infinity as |x| → ∞. By (A3), the second term in above is bounded as |x| → ∞. Our next goal is to show that u0 < 0 outside a sufficiently large circular disc. We begin by choosing a cropped double cone CR and then consider its complement. See Fig. 8.6.

Fig. 8.6

Regions CR and DR .

THE INDEX THEORY AND BROUWER DEGREE

233

Use polar coordinates x = r cos θ, y = r sin θ. Given α ∈ (0, π/2). Consider the double cone C given by |θ − π/2| or |θ − 3π/2| ≤ α. Let CR = C ∩ {r ≥ R}. In CR , ! 2F (x) − g(x) u0 2 b = − sin θ − cos2 θ r2 x   F (x) − bg(x) 2 p(t)(b sin θ − cos θ) + + sin θ cos θ + x b r   B 2 |p(t)|(b + 1) ≤ − sin2 θ + cos2 θ + B + | sin θ|| cos θ| + . b b r By choosing α > 0 sufficiently small and r1 > 0 sufficiently large, in Cr1 the last equation can be as close to −1 as possible. In particular, this implies the existence of α ∈ (0, π/2), r1 > 0 such that u0 < 0 on Cr1 . With α ∈ (0, π/2) fixed as above. Consider (x, y) in the complement of C in R2 \ {r < R}, denoted by DR ; i.e. DR = C c ∩ {r ≥ R}. Since |x| is away from zero and y/x is bounded for (x, y) ∈ DR , the function h defined by p(t)(by − x) h(x, y, t) = x2 is bounded on DR , for any R > 0. Thus   2F (x) 2 u0 = −y 2 − − g(x) x + [F (x) − bg(x)] y + xy + p(t)(by − x) b b ( 2F (x) )   − g(x) F (x) − bg(x) 2 b − h(x, y, t) x2 + + xy. = −y 2 − x x b As |x| → ∞, the bracket {· · · } in the last line tends to ∞ but the bracket [· · · ] in the last line is bounded. Thus we may choose r2 ≥ r1 such that u0 < 0 on Dr for r ≥ r2 . Together with our discussions for u0 on the cropped double cone Cr1 , we proved the existence of r2 > 0 such that u0 < 0 whenever r ≥ r2 . Choose u0 > maxr≤r2 u(x, y). Then on the ellipse {u = u0 }, we must have r > r2 , and so u0 < 0 on this ellipse. This implies that the filled ellipse E = {(x, y) : u(x, y) ≤ u0 } is positively invariant for the flow φt . Therefore, the mapping P : R2 → R2 maps E into E. By the Brouwer fixed point theorem, there exists a fixed point of P in E.

234

8.4

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

Exercises

Exercise 8.1 Prove that the system of differential equations dx = x + P (x, y) dt dy = y + Q(x, y), dt where P (x, y) and Q(x, y) are bounded functions on the entire plane, has at least one equilibrium. Exercise 8.2 Let f = Ax + b be a linear mapping of Rn into Rn , where A is an n × n matrix. Prove, by definition (ii) of degree, that if f (x0 ) = p then  1 if det(A) > 0 d(f, p, B ) = −1 if det(A) < 0 where B is a sphere of any radius ε centered about x0 in Rn . What happens if det(A) = 0? Exercise 8.3 (a) Let x0 be an isolated zero of f (x) = 0 where f : D ⊆ Rn → Rn . Suppose x0 is a regular point, show that i(x0 ) = (−1)β where β is the number of real negative eigenvalues of the Jacobian Df (x0 ). P (b) Let F (x1 , · · · , xn ) = aij xi xj be a real nonsingular quadratic form with aij = aji . Prove that i(grad, F, 0) = (−1)λ where λ is the number of negative eigenvalues of the symmetric matrix (aij ). Exercise 8.4 Let f be a continuous mapping of a bounded domain D of Rn into Rn with the property that, for all x ∈ ∂D, f (x) never points in the direction q(6= 0), that is, f (x) 6= kq for all real nonzero k. Then d(f, 0, D) = 0. Exercise 8.5 Let f (z) = u(x, y) + iv(x, y) be a complex analytic function defined on a bounded domain D and its closure D in R2 , where z = x + iy and i2 = −1. Suppose f (z) 6= p for z ∈ ∂D. Prove d(f, p, D) ≥ 0, where f denote the mapping (x, y) → (u, v).

THE INDEX THEORY AND BROUWER DEGREE

235

Exercise 8.6 (Frobenius Theorem) Suppose A is an n × n matrix (aij ) with aij > 0 for i, j = 1, 2, . . . , n. Prove by using Brouwer Fixed Point Theorem, that A has a positive eigenvalue and a corresponding eigenvector x = (x1 , . . . , xn ), with all xi ≥ 0 and some xj > 0. Ax ˜ (Hint: Let |x| = |x1 | + ... + |xn | and consider the mapping A(x) = |Ax| on the closed convex set ∂Σ+ 1 = {x : |x| = 1, x = (x1 , ..., xn ), xi ≥ 0, i = 1, 2, ..., n}.) Exercise 8.7 Use the Poincar´e-Bohl theorem to give an alternative proof for the Brouwer Fixed Point Theorem. Exercise 8.8 Assume f : R2 → R2 is continuous and lim

kxk→∞

f (x) · x = ∞. kxk

Consider the system x˙ = f (x). Show that there is an equilibrium point. Furthermore, if the equilibrium point is attractive, then either there is another equilibrium point, or there is periodic orbit. What happens if the space dimension is greater than or equal to 3? Exercise 8.9 In the proof for Theorem 8.2.5 we assumed (A1)–(A3) hold. Show that in (A1), the assumption that f is even can be removed.

This page intentionally left blank

Chapter 9

PERTURBATION METHODS

9.1

Regular Perturbation Methods

Suppose we have a problem P (ε) with small parameter 0 < ε  1 and the solution x(ε), ε ≥ 0. If lim x(ε) = x(0), then we say that the problem P (ε) ε→0

is regular. In the following, we give several examples to show how to solve the regular perturbation problem P (ε) by the perturbation methods. Then we explain how to verify that a problem is regular by the Implicit Function Theorem in Banach space. Example 9.1.1 Solve the quadratic equation P (ε) : x2 − x + ε = 0, 0 < ε  1.

(9.1)

When ε = 0, the problem P (0) has two solutions x = 0 and x = 1. Let x(ε) =

∞ X

an εn = a0 + a1 ε + a2 ε2 + · · · .

(9.2)

n=0

Substitute (9.2) into (9.1) x2 (ε) − x(ε) + ε = 0

(9.3)

or (a0 + a1 ε + a2 ε2 + · · · )2 − (a0 + a1 ε + a2 ε2 + · · · ) + ε = 0.

(9.4)

The next step is to find the coefficients a0 , a1 , a2 , a3 , . . . , an , · · · by comparing the coefficients εn in (9.4): O(1): Set ε = 0 into (9.3) or (9.4), we obtain x2 (0) − x(0) = 0

or

a20 − a0 = 0. Then a0 = 0 or a0 = 1. 237

238

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

O(ε): Differentiating (9.3) with respect to ε yields 2x(ε)

dx dx (ε) − (ε) + 1 = 0. dε dε

From (9.2), note that x(0) = a0 , then 2a0 a1 − a1 + 1 = 0

dx dε (0)

or

= a1 . If we set ε = 0 in (9.5), a1 =

−1 . 2a0 − 1

Thus a1 = 1 if a0 = 0 and a1 = −1 if a0 = 1. O(ε2 ): Differentiating (9.5) with respect to ε yields  2 d2 x d2 x dx − 2 = 0. 2x(ε) 2 (ε) + 2 dε dε dε From (9.2),

d2 x dε2 (0)

(9.5)

(9.6)

= 2a2 . If we set ε = 0 in (9.6), then we have

4a0 a2 + 2a21 − 2a2 = 0 or

a2 =

−a21 . 2a0 − 1

Thus a2 = −1 if a0 = 0, a1 = 1 and a2 = −1 if a0 = 1, a1 = −1. For O(εn ), n ≥ 3, we continue this process to find coefficients an , n ≥ 3. Thus we obtain two solutions of P (ε), namely, x1 (ε) = ε + ε2 + 2ε3 x2 (ε) = 1 − ε − ε2 − 2ε3 + . . . .

Remark 9.1.1 From the above, we observe (2a0 − 1)ak = fk (a0 , a1 , · · · , ak−1 ), k ≥ 1. d To determine ak , we need L = dx (x2 − x) |x=a0 = 2a0 − 1 to be an invertible linear operator (see Theorem 9.1.5). Example 9.1.2 Consider two-point boundary value problem  00 u + εu2 = 0, 0 < x < 1; P (ε) : u(0) = 1, u(1) = 1, 0 < ε  1. Let u(ε, x) =

∞ X n=0

εn un (x) = u0 (x) + εu1 (x) + ε2 u2 (x) + · · ·

(9.7)

PERTURBATION METHODS

239

be a solution of P (ε). Substitute (9.7) into P (ε). Then we have u00 (ε, x) + ε(u(ε, x))2 = 0, u(ε, 0) = 1, u(ε, 1) = 1,

(9.8)

or u000 (x) + εu001 (x) + ε2 u002 (x) + . . . + ε(u0 (x) + εu1 (x) + · · · )2 = 0, u(ε, 0) = 1 = u0 (0) + εu1 (0) + ε2 u2 (0) + · · · , u(ε, 1) = 1 = u0 (1) + εu1 (1) + ε2 u2 (1) + · · · .

(9.9)

In the following, we obtain u0 (x), u1 (x), · · · by comparing the coefficients of εn . O(1): Set ε = 0 in (9.8) and (9.9) we obtain u000 (x) = 0, 0 < x < 1, u0 (0) = u0 (1) = 1.

(9.10)

Hence u0 (x) = 1 for all x. O(ε): Differentiating (9.8) with respect to ε yields   d 00 du u (x, ε) + (u(x, ε))2 + ε 2u(x, ε) (x, ε) = 0. dε dε

(9.11)

Set ε = 0 in (9.11) and (9.9), we have  00 u1 (x) + (u0 (x))2 = 0, u1 (0) = 0, u1 (1) = 0.

(9.12)

From u0 ≡ 1, we solve (9.12) and obtain u1 (x) = −

x x2 + . 2 2

(9.13)

$O(\varepsilon^2)$: Differentiating (9.11) with respect to $\varepsilon$ and setting $\varepsilon = 0$ yield
$$u_2''(x) + 2u_0(x)u_1(x) = 0, \qquad u_2(0) = 0, \; u_2(1) = 0. \qquad (9.14)$$
From $u_0 \equiv 1$ and (9.13), we solve (9.14) and obtain
$$u_2(x) = \frac{x^4}{12} - \frac{x^3}{6} + \frac{x}{12}.$$
Continuing this process in step $O(\varepsilon^n)$, $n \ge 3$, we obtain
$$u(\varepsilon, x) = 1 + \varepsilon\left(-\frac{x^2}{2} + \frac{x}{2}\right) + \varepsilon^2\left(\frac{x^4}{12} - \frac{x^3}{6} + \frac{x}{12}\right) + O(\varepsilon^3).$$
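A quick way to test the two-term expansion is to plug it back into the equation $u'' + \varepsilon u^2 = 0$ and verify that the residual is $O(\varepsilon^3)$; a minimal sketch (derivatives coded by hand):

```python
# Residual check for u = 1 + eps*u1 + eps^2*u2 with u'' + eps*u^2 = 0.
def u1(x): return -x**2 / 2 + x / 2
def u2(x): return x**4 / 12 - x**3 / 6 + x / 12

def residual(x, eps):
    # Analytic second derivatives: u1'' = -1, u2'' = x^2 - x.
    u = 1 + eps * u1(x) + eps**2 * u2(x)
    upp = eps * (-1) + eps**2 * (x**2 - x)
    return upp + eps * u**2

eps = 0.1
worst = max(abs(residual(i / 100, eps)) for i in range(101))
assert worst < eps**3        # leading error term is O(eps^3)
```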

240

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

The regular perturbation method works in Examples 9.1.1 and 9.1.2 because of the following Implicit Function Theorem (IFT). Before we state the IFT, we introduce the following definitions.

Definition 9.1.1 Let $X, Y$ be Banach spaces and $B(X,Y) = \{L \mid L: X \to Y \text{ is a bounded linear operator}\}$. We say $f: X \to Y$ is Fréchet-differentiable at $x_0 \in X$ if there exists a bounded linear operator $L \in B(X,Y)$ such that
$$\lim_{\|u\| \to 0} \frac{\|f(x_0 + u) - f(x_0) - Lu\|}{\|u\|} = 0.$$

Remark 9.1.2 It is easy to show that $L$ is unique, and we denote $L = f_x(x_0)$.

Definition 9.1.2 We say $f \in C^1$ if $f_x: X \to B(X,Y)$ is continuous.

Theorem 9.1.1 (Implicit Function Theorem) Let $X, Y, Z$ be Banach spaces, $U$ be an open set in $X \times Y$, and $f: U \subseteq X \times Y \to Z$ be continuous. Assume $f$ is Fréchet differentiable with respect to $x$ and $f_x(x,y)$ is continuous in $U$. Suppose $(x_0, y_0) \in U$ and $f(x_0, y_0) = 0$. If $A = f_x(x_0, y_0): X \to Z$ is an isomorphism of $X$ onto $Z$, i.e., $f_x(x_0, y_0)$ is onto and has a bounded inverse $f_x^{-1}(x_0, y_0)$, then

(i) there exist a ball $B_r(y_0) \subseteq Y$ and a unique map $u: B_r(y_0) \to X$ satisfying $u(y_0) = x_0$, $f(u(y), y) = 0$;

(ii) if $f \in C^1$, then $u(y) \in C^1$ and $u_y(y) = -[f_x(u(y), y)]^{-1} \circ f_y(u(y), y)$.

Proof. See the book [Kee1] or [Mur].

Now we want to explain why the perturbation method works in Example 9.1.2. First we decompose the problem $P(\varepsilon)$ into two parts,
$$-u'' = \varepsilon u^2, \quad u(0) = 0, \; u(1) = 0,$$
and
$$-u'' = 0, \quad u(0) = 1, \; u(1) = 1.$$
Then, as we did in §7.3, the solution $u(x)$ of $P(\varepsilon)$ satisfies
$$u(x) = 1 + \varepsilon \int_0^1 g(x, \xi) u^2(\xi)\, d\xi.$$

Define $F: C[0,1] \times \mathbb{R} \to C[0,1]$ by
$$F(u, \varepsilon) = u - 1 - \varepsilon \int_0^1 g(x, \xi) u^2(\xi)\, d\xi. \qquad (9.15)$$

Obviously, $F(u_0, 0) = 0$, where $u_0(x) \equiv 1$. We want to solve $F(u, \varepsilon) = 0$ and express the solution as $u = u(\varepsilon)$ satisfying $F(u(\varepsilon), \varepsilon) = 0$. In order to apply the Implicit Function Theorem, we need to verify that $F_u(u_0, 0): C[0,1] \to C[0,1]$ is an isomorphism. We note that for $f: X \to Y$, where $X, Y$ are Banach spaces, we may use the Gateaux derivative (directional derivative) to evaluate the linear operator $f'(x_0): X \to Y$, which satisfies
$$f'(x_0)v = \frac{d}{dt} f(x_0 + tv)\Big|_{t=0}.$$

Now we compute $F_u(u_0, 0)v$, $v \in C[0,1]$, from
$$\frac{d}{dt} F(u_0 + tv, \varepsilon) = \frac{d}{dt}\left[u_0 + tv - 1 - \varepsilon \int_0^1 g(x,\xi)(u_0(\xi) + tv(\xi))^2\, d\xi\right] = v - 2\varepsilon \int_0^1 g(x,\xi)(u_0(\xi) + tv(\xi))\, v(\xi)\, d\xi.$$

Setting $t = 0$ in the above identity, we have, for $\varepsilon = 0$, $F_u(u_0, 0)v = v$. Obviously $F_u(u_0, 0)$ is an isomorphism. Differentiating $F(u(\varepsilon), \varepsilon) = 0$ with respect to $\varepsilon$, we have
$$F_u(u(\varepsilon), \varepsilon)\frac{du}{d\varepsilon} + \frac{\partial F}{\partial \varepsilon}(u(\varepsilon), \varepsilon) = 0. \qquad (9.16)$$
Letting $\varepsilon = 0$ in (9.16) gives $F_u(u_0, 0)u_1 - \int_0^1 g(x,\xi)(u_0(\xi))^2\, d\xi = 0$, i.e., $u_1(x) = \int_0^1 g(x,\xi)(u_0(\xi))^2\, d\xi$, so $u_1(x)$ satisfies
$$-u_1'' = (u_0(x))^2, \qquad u_1(0) = 0, \; u_1(1) = 0.$$
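The integral representation of $u_1$ can be verified numerically. Assuming the standard Green's function of $-u'' = f$, $u(0) = u(1) = 0$, namely $g(x,\xi) = x(1-\xi)$ for $x \le \xi$ and $\xi(1-x)$ for $x \ge \xi$ (the form one would obtain in §7.3), the quadrature $\int_0^1 g(x,\xi)\,d\xi$ should reproduce $u_1(x) = x/2 - x^2/2$ from (9.13):

```python
# Sketch: reproduce (9.13) from the Green's function representation.
def g(x, xi):
    return x * (1 - xi) if x <= xi else xi * (1 - x)

def u1_from_green(x, n=2000):
    # Midpoint rule for \int_0^1 g(x, xi) * u0(xi)^2 dxi with u0 == 1.
    h = 1.0 / n
    return sum(g(x, (k + 0.5) * h) for k in range(n)) * h

for x in (0.25, 0.5, 0.75):
    assert abs(u1_from_green(x) - (x / 2 - x**2 / 2)) < 1e-6
```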

Example 9.1.3 Find a periodic solution of the forced logistic equation $u' = au(1 + \varepsilon \cos t - u)$. Let the periodic solution $u(t, \varepsilon)$ be of the form $u(t, \varepsilon) = u_0(t) + \varepsilon u_1(t) + \varepsilon^2 u_2(t) + \cdots$.


Then
$$u_0'(t) + \varepsilon u_1'(t) + \varepsilon^2 u_2'(t) + \cdots = a(u_0(t) + \varepsilon u_1(t) + \varepsilon^2 u_2(t) + \cdots)\,(1 + \varepsilon \cos t - u_0(t) - \varepsilon u_1(t) - \varepsilon^2 u_2(t) - \cdots). \qquad (9.17)$$

$O(1)$: Setting $\varepsilon = 0$ in (9.17), we obtain $u_0'(t) = a u_0(t)(1 - u_0(t))$. We choose $u_0(t) \equiv 1$.

$O(\varepsilon)$: Comparing the coefficients of $\varepsilon$, we obtain $u_1'(t) = a[u_0(t)(\cos t - u_1(t)) + u_1(t)(1 - u_0(t))]$, or
$$u_1'(t) + a u_1(t) = a \cos t. \qquad (9.18)$$
Solving (9.18) by an integrating factor, we have $(e^{at} u_1)' = a e^{at} \cos t$, and the periodic solution is $u_1(t) = \frac{a}{\sqrt{a^2+1}} \cos(t - \varphi)$ for some $\varphi$. Continuing this process, we have the periodic solution
$$u(t) = 1 + \frac{\varepsilon a}{\sqrt{a^2+1}} \cos(t - \varphi) + O(\varepsilon^2).$$
To explain why the perturbation method works, we define $F: C^1 \times \mathbb{R} \to C$, $F(u, \varepsilon) = u' - au(1 + \varepsilon \cos t - u)$, where $C^1$, $C$ are the Banach spaces of $2\pi$-periodic differentiable and continuous functions, respectively. Obviously, $F(u_0, 0) = 0$, where $u_0(t) \equiv 1$. In order to apply the Implicit Function Theorem, we need to verify that $F_u(u_0, 0): C^1 \to C$ is an isomorphism. It is easy to show that $Lv = F_u(u_0, 0)v = v' + av$. To show that $L$ has a bounded inverse, we need to solve the equation $Lv = f$, where $f \in C$. Then
$$v(t) = v(0)e^{-at} + e^{-at}\int_0^t e^{a\xi} f(\xi)\, d\xi.$$

In order that $v(t)$ be $2\pi$-periodic, we need $v(0) = v(2\pi)$. We choose
$$v(0) = \frac{e^{-2\pi a} \int_0^{2\pi} e^{a\xi} f(\xi)\, d\xi}{1 - e^{-2\pi a}}.$$

It is easy to verify that $\|v\|_{C^1} = \|L^{-1} f\|_{C^1} \le M \|f\|_C$, where $\|v\|_{C^1} = \|v\|_C + \|v'\|_C$.
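A direct integration supports the first-order prediction: after the transient dies out, the oscillation of $u$ about $1$ should have amplitude close to $\varepsilon a/\sqrt{a^2+1}$. A minimal RK4 sketch (the values $a = 1$, $\varepsilon = 0.05$ are arbitrary choices for illustration):

```python
import math

def rk4(f, u, t, h):
    k1 = f(t, u); k2 = f(t + h/2, u + h*k1/2)
    k3 = f(t + h/2, u + h*k2/2); k4 = f(t + h, u + h*k3)
    return u + h * (k1 + 2*k2 + 2*k3 + k4) / 6

a, eps = 1.0, 0.05
f = lambda t, u: a * u * (1 + eps * math.cos(t) - u)
u, t, h = 1.0, 0.0, 0.01
vals = []
while t < 60.0:
    u = rk4(f, u, t, h); t += h
    if t > 60.0 - 2 * math.pi:      # sample one final period
        vals.append(u)
amp = (max(vals) - min(vals)) / 2
# O(eps^2) corrections are expected, hence the loose tolerance.
assert abs(amp - eps * a / math.sqrt(a**2 + 1)) < 5 * eps**2
```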


Example 9.1.4 (Perturbation of eigenvalues) Let $A \in \mathbb{R}^{n \times n}$ and let $Ax = \lambda x$ have an eigenpair $(\lambda_0, x_0)$. Consider the perturbed eigenproblem $(A + \varepsilon B)x = \lambda x$. Let
$$\lambda(\varepsilon) = \lambda_0 + \varepsilon \lambda_1 + \varepsilon^2 \lambda_2 + \cdots, \qquad x(\varepsilon) = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots.$$
Then we have
$$(A + \varepsilon B)x(\varepsilon) = \lambda(\varepsilon) x(\varepsilon). \qquad (9.19)$$

$O(1)$: Setting $\varepsilon = 0$, we have $Ax_0 = \lambda_0 x_0$.

$O(\varepsilon)$: Differentiate (9.19) with respect to $\varepsilon$, then set $\varepsilon = 0$. We obtain $Ax_1 + Bx_0 = \lambda_1 x_0 + \lambda_0 x_1$, or
$$(A - \lambda_0 I)x_1 = \lambda_1 x_0 - Bx_0. \qquad (9.20)$$
By the Fredholm Alternative Theorem, (9.20) is solvable for $x_1$ iff $\lambda_1 x_0 - Bx_0 \perp N(A^* - \lambda_0 I)$. If we assume $N(A^* - \lambda_0 I) = \langle y_0 \rangle$ is one-dimensional, then we find $\lambda_1$ from $\langle \lambda_1 x_0 - Bx_0,\, y_0 \rangle = 0$, or
$$\lambda_1 = \frac{\langle Bx_0, y_0 \rangle}{\langle x_0, y_0 \rangle}, \qquad \text{if } \langle x_0, y_0 \rangle \ne 0.$$
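The first-order shift formula can be checked on a small concrete case. The following sketch uses a hypothetical symmetric $2 \times 2$ pair $A$, $B$ (so $y_0 = x_0$) and compares $\lambda_0 + \varepsilon \lambda_1$ with the exact largest eigenvalue of $A + \varepsilon B$:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues 1 and 3
B = [[0.0, 1.0], [1.0, 0.0]]
lam0 = 3.0
x0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]
y0 = x0                              # A symmetric => N(A^T - lam0 I) = <x0>

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

lam1 = dot(matvec(B, x0), y0) / dot(x0, y0)     # predicted shift
eps = 1e-3
# Largest eigenvalue of the 2x2 symmetric matrix A + eps*B via trace/det.
M = [[A[i][j] + eps * B[i][j] for j in range(2)] for i in range(2)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam_exact = tr / 2 + math.sqrt(tr * tr / 4 - det)
assert abs(lam_exact - (lam0 + eps * lam1)) < 10 * eps**2
```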

Then we solve $x_1$ from $(A - \lambda_0 I)x_1 = \lambda_1 x_0 - Bx_0$.

Remark 9.1.3 Regular perturbation is very useful in the theory of bifurcation. The interested reader can consult [Kee1].

9.2 Singular Perturbation: Boundary Value Problem

We say a problem $P(\varepsilon)$ is singular if its solution $x(\varepsilon) \not\to x(0)$ as $\varepsilon \to 0$. In the following example, we introduce the notions of outer solution, inner solution, boundary layer, and the matching procedure.


Example 9.2.1 Consider the boundary value problem
$$\varepsilon u'' + u' = 0, \quad 0 < x < 1, \qquad u(0) = u_0, \; u(1) = u_1, \qquad (9.21)$$
where $0 < \varepsilon \ll 1$ is a small parameter.

The exact solution of (9.21) is
$$u(x, \varepsilon) = \frac{u_0\left[e^{-x/\varepsilon} - e^{-1/\varepsilon}\right]}{1 - e^{-1/\varepsilon}} + \frac{u_1\left[1 - e^{-x/\varepsilon}\right]}{1 - e^{-1/\varepsilon}}.$$
Then, for $\varepsilon > 0$ sufficiently small, $u(x, \varepsilon) \approx u_0 e^{-x/\varepsilon} + u_1(1 - e^{-x/\varepsilon})$. In Fig. 9.1 we sketch the graph of $u(x, \varepsilon)$.

Fig. 9.1

In the region near $x = 0$ with thickness $O(\varepsilon)$, the solution $u(x, \varepsilon)$ changes rapidly, i.e., $\varepsilon u''(x)$ is not small in this region. This region is called a boundary layer, since it is near the boundary $x = 0$. If such a region lies in the interior, we call it an interior layer. Given a singular perturbation problem, it is nontrivial to find where the boundary layer or interior layer is located; this requires intuition, experience, and trial and error. Now, for the problem (9.21), we pretend that we know nothing about the solution $u(x, \varepsilon)$. In the following steps, we use the perturbation method to obtain an approximation to the true solution $u(x, \varepsilon)$.

Step 1: Blindly use regular perturbation. Let $u(x, \varepsilon) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots$ and substitute it into (9.21); we obtain
$$\begin{aligned} &\varepsilon(u_0'' + \varepsilon u_1'' + \varepsilon^2 u_2'' + \cdots) + (u_0' + \varepsilon u_1' + \varepsilon^2 u_2' + \cdots) = 0, \\ &u(0) = u_0 = u_0(0) + \varepsilon u_1(0) + \varepsilon^2 u_2(0) + \cdots, \\ &u(1) = u_1 = u_0(1) + \varepsilon u_1(1) + \varepsilon^2 u_2(1) + \cdots. \end{aligned} \qquad (9.22)$$


Then we compare the coefficients of $\varepsilon^n$ ($n \ge 0$):

$O(1)$: $u_0' = 0$, $u_0(0) = u_0$, $u_0(1) = u_1$; hence $u_0(x) = $ constant.

$O(\varepsilon)$: $u_0'' + u_1' = 0$, $u_1(0) = 0$, $u_1(1) = 0$; hence $u_1(x) = $ constant.

Continuing this process, we obtain $u_n(x) = $ const. for $n \ge 0$. Thus, for $u_0 \ne u_1$, we recognize that this is a singular problem: the regular perturbation method does not work for the solution $u(x, \varepsilon)$ of (9.21), and the expansion (9.22) is not valid for all $0 \le x \le 1$.

Step 2: Guess the location of the boundary layer or interior layer. Here we assume that the boundary layer is at $x = 0$. (If we assume the boundary layer is at $x = 1$, we shall find that the outer and inner solutions below cannot be matched.)

(1) Outer expansion: Since we assume the boundary layer is at $x = 0$, we obtain the outer solution $U(x, \varepsilon)$ by the regular perturbation method under the condition $U(1, \varepsilon) = u_1$. Let $U(x, \varepsilon) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots$. Then the regular perturbation method yields $u_0(x) = u_1$, $u_1(x) = 0$, $u_2(x) = 0, \cdots$.

(2) Inner expansion: Assume the inner solution in the boundary layer is $W(\tau, \varepsilon) = w_0(\tau) + \varepsilon w_1(\tau) + \varepsilon^2 w_2(\tau) + \cdots$, where $\tau = x/\varepsilon^{\alpha}$, $\alpha > 0$ is to be determined, and $\varepsilon^{\alpha}$ is the thickness of the boundary layer. The introduction of the new variable $\tau$ means that we stretch the boundary layer.

(3) Matching: Let the solution be
$$u(x, \varepsilon) = U(x, \varepsilon) + W(\tau, \varepsilon). \qquad (9.23)$$


Substituting (9.23) into equation (9.21), we obtain
$$\varepsilon\left[U''(x, \varepsilon) + \varepsilon^{-2\alpha}\frac{d^2 W}{d\tau^2}(\tau, \varepsilon)\right] + \left[U'(x, \varepsilon) + \varepsilon^{-\alpha}\frac{dW}{d\tau}(\tau, \varepsilon)\right] = 0,$$
or
$$\varepsilon(u_0'' + \varepsilon u_1'' + \varepsilon^2 u_2'' + \cdots) + \varepsilon^{1-2\alpha}\left(\frac{d^2 w_0}{d\tau^2} + \varepsilon\frac{d^2 w_1}{d\tau^2} + \cdots\right) + (u_0' + \varepsilon u_1' + \varepsilon^2 u_2' + \cdots) + \varepsilon^{-\alpha}\left(\frac{dw_0}{d\tau} + \varepsilon\frac{dw_1}{d\tau} + \cdots\right) = 0. \qquad (9.24)$$

For the boundary conditions in (9.21), it follows that
$$u(0) = u_0 = U(0, \varepsilon) + W(0, \varepsilon) = (u_0(0) + \varepsilon u_1(0) + \cdots) + (w_0(0) + \varepsilon w_1(0) + \cdots).$$
Here we note that $\tau = 0$ if $x = 0$. Since $u_0(0) = u_1$, we have $w_0(0) = u_0 - u_1$. As $x$ lies in the boundary layer region, we stretch the boundary layer and consider $W(\tau, \varepsilon)$. Here we need $\varepsilon^{1-2\alpha} = \varepsilon^{-\alpha}$, or $\alpha = 1$. Multiplying both sides of (9.24) by $\varepsilon$ yields
$$\varepsilon^2(u_0'' + \varepsilon u_1'' + \cdots) + \left(\frac{d^2 w_0}{d\tau^2} + \varepsilon\frac{d^2 w_1}{d\tau^2} + \cdots\right) + \varepsilon(u_0' + \varepsilon u_1' + \cdots) + \left(\frac{dw_0}{d\tau} + \varepsilon\frac{dw_1}{d\tau} + \cdots\right) = 0. \qquad (9.25)$$
Comparing the coefficients of $\varepsilon^n$, $n \ge 0$, we obtain

$O(1)$:
$$\frac{d^2 w_0}{d\tau^2} + \frac{dw_0}{d\tau} = 0, \qquad w_0(0) = u_0 - u_1, \quad \lim_{\tau \to \infty} w_0(\tau) = 0, \quad \lim_{\tau \to \infty} \frac{dw_0}{d\tau}(\tau) = 0. \qquad (9.26)$$
Integrating (9.26) from $\tau$ to $\infty$ yields $\frac{dw_0}{d\tau} + w_0(\tau) = 0$, $w_0(0) = u_0 - u_1$, and hence $w_0(\tau) = (u_0 - u_1)e^{-\tau}$.

$O(\varepsilon)$:
$$\frac{d^2 w_1}{d\tau^2} + \frac{dw_1}{d\tau} = 0, \qquad w_1(0) = 0, \quad w_1(\infty) = 0, \quad \frac{dw_1}{d\tau}(\infty) = 0.$$
Then $w_1(\tau) \equiv 0$.

If we compute the solution $u(x, \varepsilon)$ up to $O(\varepsilon)$, then
$$u(x, \varepsilon) = u_1 + (u_0 - u_1)e^{-x/\varepsilon} + O(\varepsilon^2).$$

Remark 9.2.1 If we assume the boundary layer is near $x = 1$, then we set
$$u(x, \varepsilon) = U(x, \varepsilon) + W(\eta, \varepsilon), \quad \eta = \frac{1 - x}{\varepsilon}, \qquad (9.27)$$
$$U(x, \varepsilon) = u_0(x) + \varepsilon u_1(x) + \cdots, \qquad W(\eta, \varepsilon) = w_0(\eta) + \varepsilon w_1(\eta) + \cdots. \qquad (9.28)$$


Substituting (9.27) into (9.21), we have
$$\varepsilon^2 U''(x, \varepsilon) + \frac{d^2 W}{d\eta^2} + \varepsilon U'(x, \varepsilon) - \frac{dW}{d\eta} = 0. \qquad (9.29)$$
Then, from (9.28) and (9.29), it follows that
$$\varepsilon^2(u_0'' + \varepsilon u_1'' + \varepsilon^2 u_2'' + \cdots) + \left(\frac{d^2 w_0}{d\eta^2} + \varepsilon\frac{d^2 w_1}{d\eta^2} + \cdots\right) + \varepsilon(u_0' + \varepsilon u_1' + \varepsilon^2 u_2' + \cdots) - \left(\frac{dw_0}{d\eta} + \varepsilon\frac{dw_1}{d\eta} + \cdots\right) = 0.$$

$O(1)$:
$$\frac{d^2 w_0}{d\eta^2} - \frac{dw_0}{d\eta} = 0, \qquad w_0(0) = u_1 - u_0, \quad w_0(\infty) = 0, \quad \frac{dw_0}{d\eta}(\infty) = 0. \qquad (9.30)$$
Integrating (9.30) from $\eta$ to $\infty$ yields
$$\frac{dw_0}{d\eta} = w_0(\eta), \qquad w_0(0) = u_1 - u_0.$$
Then $w_0(\eta) = (u_1 - u_0)e^{\eta} \to \infty$ as $\eta \to \infty$. This contradicts (9.30). Thus the assumption that the boundary layer is near $x = 1$ is not valid.

Nonlinear boundary value problem: Consider
$$\varepsilon y'' + f(x, y)y' + g(x, y) = 0, \qquad y(0) = y_0, \; y(1) = y_1. \qquad (9.31)$$

Assume the boundary layer is at $x = 0$. Let
$$y(x, \varepsilon) = U(x, \varepsilon) + W(\tau, \varepsilon), \qquad \tau = \frac{x}{\varepsilon}. \qquad (9.32)$$

(1) Find the outer solution. Let $U(x, \varepsilon) = u_0(x) + \varepsilon u_1(x) + \cdots$, with $U(1, \varepsilon) = y_1$. Substituting $U(x, \varepsilon)$ into (9.31), we obtain
$$F(x, \varepsilon) = \varepsilon U''(x, \varepsilon) + f(x, U(x, \varepsilon))U'(x, \varepsilon) + g(x, U(x, \varepsilon)) = 0. \qquad (9.33)$$

$O(1)$: Setting $\varepsilon = 0$ in (9.33), we have
$$f(x, u_0(x))u_0'(x) + g(x, u_0(x)) = 0, \qquad u_0(1) = y_1. \qquad (9.34)$$


Thus, from (9.34), we obtain the solution $u_0(x)$.

$O(\varepsilon)$: Differentiating (9.33) with respect to $\varepsilon$ and then setting $\varepsilon = 0$, we obtain
$$u_0''(x) + f(x, u_0(x))u_1'(x) + \frac{\partial f}{\partial y}(x, u_0(x))u_1(x)u_0'(x) + \frac{\partial g}{\partial y}(x, u_0(x))u_1(x) = 0, \qquad u_1(1) = 0. \qquad (9.35)$$
Then from (9.35) we obtain the solution $u_1(x)$.

(2) Find the inner solution. Let $W(\tau, \varepsilon) = w_0(\tau) + \varepsilon w_1(\tau) + \varepsilon^2 w_2(\tau) + \cdots$. Substituting (9.32) into (9.31), we have

$$\varepsilon\left[U''(x, \varepsilon) + \varepsilon^{-2}\frac{d^2 W}{d\tau^2}(\tau, \varepsilon)\right] + f(x, U + W)\left(U'(x, \varepsilon) + \varepsilon^{-1}\frac{dW}{d\tau}(\tau, \varepsilon)\right) + g(x, U + W) = 0. \qquad (9.36)$$
Multiplying (9.36) by $\varepsilon$, we get
$$G(x, \tau, \varepsilon) = \varepsilon^2 U''(x, \varepsilon) + \frac{d^2 W}{d\tau^2}(\tau, \varepsilon) + f(x, U + W)\left(\varepsilon U'(x, \varepsilon) + \frac{dW}{d\tau}(\tau, \varepsilon)\right) + \varepsilon g(x, U + W) = 0, \qquad (9.37)$$
with $y(0, \varepsilon) = y_0 = U(0, \varepsilon) + W(0, \varepsilon)$. Comparing the coefficients of $\varepsilon^n$, $n \ge 0$, we have:

$O(1)$: Setting $\varepsilon = 0$ in (9.37), we obtain
$$\frac{d^2 w_0}{d\tau^2} + f(0, u_0(0) + w_0(\tau))\frac{dw_0}{d\tau} = 0, \qquad w_0(0) = y_0 - u_0(0), \quad w_0(\infty) = 0, \quad \frac{dw_0}{d\tau}(\infty) = 0. \qquad (9.38)$$

Case 1: If $y_0 < u_0(0)$, then $w_0(0) < 0$. Integrating (9.38) from $\tau$ to $\infty$, we have
$$-\frac{dw_0}{d\tau}(\tau) + \int_{\tau}^{\infty} f(0, u_0(0) + w_0(\tau))\frac{dw_0}{d\tau}\, d\tau = 0.$$


Then
$$\frac{dw_0}{d\tau}(\tau) = \int_{w_0(\tau) + u_0(0)}^{u_0(0)} f(0, t)\, dt.$$
We need
$$\int_{y + u_0(0)}^{u_0(0)} f(0, t)\, dt > 0 \quad \text{for} \quad y_0 - u_0(0) < y < 0.$$

Fig. 9.2

Case 2: If $y_0 > u_0(0)$, then we need $\int_{y + u_0(0)}^{u_0(0)} f(0, t)\, dt < 0$ for $0 < y < y_0 - u_0(0)$.

9.3 Singular Perturbation: Initial Value Problem

We first consider two weakly nonlinear oscillators in Examples 9.3.1 and 9.3.2 and present the method of two-timing (or method of multiple time scales).

Example 9.3.1 [Str] Consider the weakly damped linear oscillator
$$\frac{d^2x}{dt^2} + 2\varepsilon\frac{dx}{dt} + x = 0, \qquad (9.39)$$
$$x(0) = 0, \qquad \frac{dx}{dt}(0) = 1. \qquad (9.40)$$
The exact solution of (9.39), (9.40) is
$$x(t, \varepsilon) = (1 - \varepsilon^2)^{-1/2}\, e^{-\varepsilon t} \sin\left((1 - \varepsilon^2)^{1/2}\, t\right). \qquad (9.41)$$


Now let us solve the problem (9.39), (9.40) by the regular perturbation method. Suppose that
$$x(t, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n x_n(t).$$
Substituting this into (9.39), (9.40) and requiring that the coefficients of all powers of $\varepsilon$ vanish, we find that
$$O(1): \quad x_0'' + x_0 = 0, \quad x_0(0) = 0, \; x_0'(0) = 1 \qquad \left({}' = \frac{d}{dt}\right), \qquad (9.42)$$
$$O(\varepsilon): \quad x_1'' + 2x_0' + x_1 = 0, \quad x_1(0) = 0, \; x_1'(0) = 0. \qquad (9.43)$$
Then the solution of (9.42) is $x_0(t) = \sin t$. Plugging this solution into (9.43) gives
$$x_1'' + x_1 = -2\cos t. \qquad (9.44)$$

We find that the right-hand side of (9.44) is a resonant forcing. The solution of (9.44) subject to $x_1(0) = 0$, $x_1'(0) = 0$ is
$$x_1(t) = -t \sin t, \qquad (9.45)$$
which is a secular term, i.e., a term that grows without bound as $t \to \infty$. Hence, by the regular perturbation method, the solution of (9.39), (9.40) is
$$x(t, \varepsilon) = \sin t - \varepsilon t \sin t + O(\varepsilon^2). \qquad (9.46)$$
How does this compare with the exact solution (9.41)? It is easy to verify that the first two terms in (9.46) are exactly the first two terms of (9.41) expanded as a power series in $\varepsilon$. In fact, (9.46) is the beginning of a convergent series expansion for the true solution. Hence, for fixed $t$ and small $\varepsilon$, (9.46) is a good approximation to the true solution (9.41). Fig. 9.3 shows the difference between the true solution (9.41) and the approximate solution (9.46) when $\varepsilon = 0.1$.

The reasons why the regular perturbation method fails are:

1. The true solution (9.41) exhibits two time scales: a fast time $t \sim O(1)$ for the sinusoidal oscillations $\sin\left((1 - \varepsilon^2)^{1/2} t\right)$, and a slow time $t \sim O(1/\varepsilon)$ over which the amplitude $(1 - \varepsilon^2)^{-1/2} e^{-\varepsilon t}$ decays. The approximate solution (9.46) completely misrepresents the slow time scale behavior. In particular, because of the secular term $\varepsilon t \sin t$, (9.46) falsely suggests that the solution grows with time, whereas we know from (9.41) that the amplitude $A = (1 - \varepsilon^2)^{-1/2} e^{-\varepsilon t}$ decays exponentially as $t \to \infty$.


Fig. 9.3 1

2. The frequency of the oscillation in (9.41) is $\omega = (1 - \varepsilon^2)^{1/2} \approx 1 - \frac{1}{2}\varepsilon^2$, which is shifted slightly from the frequency $\omega = 1$ in (9.46). After a very long time $t \sim O(1/\varepsilon^2)$, this frequency error has a significant cumulative effect.
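Both failure modes are easy to see numerically: the regular-perturbation approximation $\sin t - \varepsilon t\sin t$ tracks the exact solution (9.41) for $t = O(1)$ but is ruined by the secular term for large $t$. A minimal sketch with $\varepsilon = 0.1$:

```python
import math

eps = 0.1
w = math.sqrt(1 - eps**2)
exact = lambda t: math.exp(-eps * t) * math.sin(w * t) / w     # (9.41)
approx = lambda t: math.sin(t) - eps * t * math.sin(t)         # (9.46)

err = lambda t: abs(exact(t) - approx(t))
assert err(1.0) < 0.01      # accurate for t = O(1)
assert err(50.0) > 0.5      # secular growth dominates by t ~ 5/eps
```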

Two-Timing Method. We guess that $x$ is a function of two variables: a "fast" variable $s = t$ and a "slow" variable $\tau = \varepsilon t$. Suppose $x = x(s, \tau, \varepsilon)$ and seek a power series solution of the form
$$x(t) = x(s, \tau, \varepsilon) = \sum_{n=0}^{\infty} x_n(s, \tau)\varepsilon^n. \qquad (9.47)$$

First we treat $s$ and $\tau$ as two new independent variables; then
$$\frac{dx}{dt} = \frac{\partial x}{\partial s}\frac{ds}{dt} + \frac{\partial x}{\partial \tau}\frac{d\tau}{dt} = \frac{\partial x}{\partial s} + \varepsilon\frac{\partial x}{\partial \tau}, \qquad (9.48)$$
$$\frac{d^2x}{dt^2} = \frac{\partial^2 x}{\partial s^2} + 2\varepsilon\frac{\partial^2 x}{\partial s \partial \tau} + \varepsilon^2\frac{\partial^2 x}{\partial \tau^2}. \qquad (9.49)$$
Now we substitute (9.47)-(9.49) into the governing equation (9.39) and obtain
$$\frac{\partial^2 x}{\partial s^2} + 2\varepsilon\frac{\partial^2 x}{\partial s \partial \tau} + \varepsilon^2\frac{\partial^2 x}{\partial \tau^2} + 2\varepsilon\left(\frac{\partial x}{\partial s} + \varepsilon\frac{\partial x}{\partial \tau}\right) + x(s, \tau, \varepsilon) = 0, \qquad (9.50)$$
$$x(0, 0, \varepsilon) = 0, \qquad \frac{\partial x}{\partial s}(0, 0, \varepsilon) + \varepsilon\frac{\partial x}{\partial \tau}(0, 0, \varepsilon) = 1. \qquad (9.51)$$

Setting $\varepsilon = 0$ in (9.50), (9.51), we obtain
$$O(1): \quad \frac{\partial^2 x_0}{\partial s^2} + x_0(s, \tau) = 0, \qquad x_0(0, 0) = 0, \quad \frac{\partial x_0}{\partial s}(0, 0) = 1. \qquad (9.52)$$


Differentiating (9.50), (9.51) with respect to $\varepsilon$ and then setting $\varepsilon = 0$, we obtain
$$O(\varepsilon): \quad \frac{\partial^2 x_1}{\partial s^2} + 2\frac{\partial^2 x_0}{\partial s \partial \tau} + 2\frac{\partial x_0}{\partial s} + x_1(s, \tau) = 0, \qquad x_1(0, 0) = 0, \quad \frac{\partial x_1}{\partial s}(0, 0) + \frac{\partial x_0}{\partial \tau}(0, 0) = 0. \qquad (9.53)$$

Solving (9.52), we have
$$x_0(s, \tau) = A(\tau)\sin s + B(\tau)\cos s, \qquad (9.54)$$
and from the initial conditions in (9.52) we obtain
$$A(0) = 1, \qquad B(0) = 0. \qquad (9.55)$$

Next we solve (9.53). By (9.54), we rewrite (9.53) as
$$\frac{\partial^2 x_1}{\partial s^2} + x_1(s, \tau) = -2\cos s\,\big(A(\tau) + A'(\tau)\big) + 2\sin s\,\big(B(\tau) + B'(\tau)\big). \qquad (9.56)$$

The right-hand side of (9.56) is a resonant forcing that would produce secular terms like $s\cos s$, $s\sin s$. Hence we set the coefficients of the resonant terms to zero; here this yields
$$A'(\tau) = -A(\tau), \qquad B'(\tau) = -B(\tau).$$
With the initial conditions (9.55) we obtain $A(\tau) = e^{-\tau}$ and $B(\tau) \equiv 0$. Hence
$$x(t) = e^{-\tau}\sin s + O(\varepsilon) = e^{-\varepsilon t}\sin t + O(\varepsilon) \qquad (9.57)$$
is the approximate solution predicted by two-timing. Fig. 9.4 compares the two-timing solution (9.57) and the exact solution (9.41) for $\varepsilon = 0.1$; the two curves are almost indistinguishable.

Example 9.3.2 [Str] We shall use the two-timing method to show that the weakly nonlinear van der Pol oscillator
$$x'' + x + \varepsilon(x^2 - 1)x' = 0$$

(9.58)

has a stable limit cycle that is nearly circular, with radius $2 + O(\varepsilon)$ and frequency $1 + O(\varepsilon^2)$. Substituting (9.48), (9.49) into (9.58), we have
$$\frac{\partial^2 x}{\partial s^2} + 2\varepsilon\frac{\partial^2 x}{\partial s \partial \tau} + \varepsilon^2\frac{\partial^2 x}{\partial \tau^2} + \varepsilon\left(x^2(s, \tau, \varepsilon) - 1\right)\left(\frac{\partial x}{\partial s} + \varepsilon\frac{\partial x}{\partial \tau}\right) + x(s, \tau, \varepsilon) = 0. \qquad (9.59)$$


Fig. 9.4

Setting $\varepsilon = 0$ in (9.59), we obtain
$$O(1): \quad \frac{\partial^2 x_0}{\partial s^2} + x_0(s, \tau) = 0. \qquad (9.60)$$
Differentiating (9.59) with respect to $\varepsilon$ and then setting $\varepsilon = 0$, we obtain
$$O(\varepsilon): \quad \frac{\partial^2 x_1}{\partial s^2} + 2\frac{\partial^2 x_0}{\partial s \partial \tau} + \left(x_0^2(s, \tau) - 1\right)\frac{\partial x_0}{\partial s} + x_1(s, \tau) = 0. \qquad (9.61)$$

The general solution of (9.60) is
$$x_0(s, \tau) = r(\tau)\cos(s + \varphi(\tau)), \qquad (9.62)$$
where $r(\tau)$ and $\varphi(\tau)$ are the slowly varying amplitude and phase of $x_0(s, \tau)$. To find the equations governing $r(\tau)$ and $\varphi(\tau)$, we insert (9.62) into (9.61). This yields
$$\begin{aligned} \frac{\partial^2 x_1}{\partial s^2} + x_1(s, \tau) &= -2\frac{\partial^2 x_0}{\partial s \partial \tau} - \left(x_0^2(s, \tau) - 1\right)\frac{\partial x_0}{\partial s} \\ &= 2\left(r'(\tau)\sin(s + \varphi(\tau)) + r(\tau)\varphi'(\tau)\cos(s + \varphi(\tau))\right) + r(\tau)\sin(s + \varphi(\tau))\left(r^2(\tau)\cos^2(s + \varphi(\tau)) - 1\right). \end{aligned} \qquad (9.63)$$
Using the trigonometric identity
$$\sin(s + \varphi(\tau))\cos^2(s + \varphi(\tau)) = \frac{1}{4}\left[\sin(s + \varphi(\tau)) + \sin 3(s + \varphi(\tau))\right],$$
substituting it into (9.63) yields
$$\frac{\partial^2 x_1}{\partial s^2} + x_1(s, \tau) = \left(2r'(\tau) - r(\tau) + \frac{1}{4}r^3(\tau)\right)\sin(s + \varphi(\tau)) + 2r(\tau)\varphi'(\tau)\cos(s + \varphi(\tau)) + \frac{1}{4}r^3(\tau)\sin 3(s + \varphi(\tau)).$$


To avoid secular terms, we require
$$2r'(\tau) - r(\tau) + \frac{1}{4}r^3(\tau) = 0, \qquad (9.64)$$
$$2r(\tau)\varphi'(\tau) = 0. \qquad (9.65)$$
First we consider (9.64); it can be rewritten as
$$r'(\tau) = \frac{1}{8}r(4 - r^2). \qquad (9.66)$$
Obviously, from (9.66), $r(\tau) \to 2$ as $\tau \to \infty$. Secondly, from (9.65), we have $\varphi'(\tau) \equiv 0$, i.e., $\varphi(\tau) \equiv \varphi_0$ for some $\varphi_0$. Hence, from (9.62), $x_0(s, \tau) \to 2\cos(s + \varphi_0)$ as $\tau \to \infty$, and therefore
$$x(t) \to 2\cos(t + \varphi_0) + O(\varepsilon) \quad \text{as } t \to \infty. \qquad (9.67)$$
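A direct numerical integration supports (9.67): starting well inside the limit cycle, the settled oscillation of the van der Pol equation should have amplitude close to $2$. A minimal RK4 sketch (step size and integration time are arbitrary choices):

```python
# Check that x'' + x + eps*(x^2 - 1)*x' = 0 settles onto an oscillation
# of amplitude 2 + O(eps), as predicted by two-timing.
eps = 0.05

def deriv(state):
    x, v = state
    return (v, -x - eps * (x * x - 1) * v)

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv((state[0] + h*k1[0]/2, state[1] + h*k1[1]/2))
    k3 = deriv((state[0] + h*k2[0]/2, state[1] + h*k2[1]/2))
    k4 = deriv((state[0] + h*k3[0], state[1] + h*k3[1]))
    return (state[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            state[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

state, h = (0.5, 0.0), 0.01
xs = []
for i in range(40000):          # integrate to t = 400 >> 1/eps
    state = rk4_step(state, h)
    if i > 39000:               # record the settled oscillation
        xs.append(state[0])
assert abs(max(xs) - 2.0) < 10 * eps
```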

Thus $(x(t), x'(t))$ approaches a stable limit cycle of radius $2 + O(\varepsilon)$. To find the frequency implied by (9.67), let $\theta = t + \varphi(\tau)$ denote the argument of the cosine. Then the angular frequency $\omega$ is given by
$$\omega = \frac{d\theta}{dt} = 1 + \frac{d\varphi}{d\tau}\frac{d\tau}{dt} = 1 + \varepsilon\varphi'(\tau) \equiv 1.$$

Hence $\omega = 1 + O(\varepsilon^2)$.

Next we consider singular perturbation for the following first-order system:
$$\frac{dx}{dt} = f(x, y), \qquad \varepsilon\frac{dy}{dt} = g(x, y), \qquad x(0, \varepsilon) = x_0, \quad y(0, \varepsilon) = y_0.$$

Example 9.3.3 Consider the enzymatic reactions proposed by Michaelis-Menten [Mur], involving a substrate (molecule) $S$ reacting with an enzyme $E$ to form a complex $SE$, which in turn is converted into a product $P$. Schematically we have
$$S + E \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; SE, \qquad SE \;\overset{k_2}{\to}\; P + E.$$
Let
$$s = [S], \quad e = [E], \quad c = [SE], \quad p = [P],$$


where $[\,\cdot\,]$ denotes concentration. By the law of mass action, we have the system of nonlinear equations
$$\begin{cases} \frac{ds}{dt} = -k_1 e s + k_{-1} c, \\ \frac{de}{dt} = -k_1 e s + (k_{-1} + k_2)c, \\ \frac{dc}{dt} = k_1 e s - (k_{-1} + k_2)c, \\ \frac{dp}{dt} = k_2 c, \\ s(0) = s_0, \; e(0) = e_0, \; c(0) = 0, \; p(0) = p_0. \end{cases} \qquad (9.68)$$
From (9.68), we have
$$\frac{de}{dt} + \frac{dc}{dt} = 0 \quad\text{or}\quad e(t) + c(t) \equiv e_0. \qquad (9.69)$$
By (9.69), we substitute $e(t) = e_0 - c(t)$ into (9.68) and obtain
$$\begin{cases} \frac{ds}{dt} = -k_1 e_0 s + (k_1 s + k_{-1})c, \\ \frac{dc}{dt} = k_1 e_0 s - (k_1 s + k_{-1} + k_2)c, \\ s(0) = s_0, \; c(0) = 0. \end{cases} \qquad (9.70)$$

With the scaling
$$\tau = k_1 e_0 t, \quad u(\tau) = \frac{s(t)}{s_0}, \quad v(\tau) = \frac{c(t)}{e_0}, \quad \lambda = \frac{k_2}{k_1 s_0}, \quad K = \frac{k_{-1} + k_2}{k_1 s_0}, \quad \varepsilon = \frac{e_0}{s_0}, \qquad (9.71)$$
the system (9.70) takes the following non-dimensional form:
$$\begin{cases} \frac{du}{d\tau} = -u + (u + K - \lambda)v, \\ \varepsilon\frac{dv}{d\tau} = u - (u + K)v, \\ u(0) = 1, \quad v(0) = 0, \end{cases} \qquad (9.72)$$
where $0 < \varepsilon \ll 1$ and, from (9.71), $K > \lambda$.

Fig. 9.5


From Fig. 9.5, the solution $v(\tau)$ changes rapidly over a dimensionless time $\tau = O(\varepsilon)$. After that, $v(\tau)$ is essentially in a steady state, i.e., $\varepsilon\frac{dv}{d\tau} \approx 0$; the $v$-reaction is so fast that it is more or less in equilibrium at all times. This is the so-called Michaelis-Menten pseudo-steady state hypothesis. In the following, we introduce the method of singular perturbation for the system (9.72).

Singular Perturbation: Initial Value Problem. Consider the following system:
$$\begin{cases} \frac{dx}{dt} = f(x, y), \\ \varepsilon\frac{dy}{dt} = g(x, y), & 0 < |\varepsilon| \ll 1, \\ x(0, \varepsilon) = x_0, \quad y(0, \varepsilon) = y_0. \end{cases} \qquad (9.73)$$

If we set $\varepsilon = 0$ in (9.73), then
$$\frac{dx}{dt} = f(x, y), \quad x(0) = x_0, \qquad 0 = g(x, y). \qquad (9.74)$$
Assume $g(x, y) = 0$ can be solved as
$$y = \varphi(x). \qquad (9.75)$$
Substituting (9.75) into (9.74), we have
$$\frac{dx}{dt} = f(x, \varphi(x)), \qquad x(0) = x_0. \qquad (9.76)$$

Let $X_0(t)$, $0 \le t \le 1$, be the unique solution of (9.76) and $Y_0(t) = \varphi(X_0(t))$. In general $Y_0(0) \ne y_0$. Assume the following hypothesis (H): there exists $K > 0$ such that for $0 \le t \le 1$,
$$\frac{\partial g}{\partial y}\Big|_{x = X_0(t),\, y = Y_0(t)} \le -K,$$
and
$$\frac{\partial g}{\partial y}\Big|_{x = X_0(0),\, y = \lambda} \le -K$$
for all $\lambda$ lying between $Y_0(0)$ and $y_0$. We shall prove that
$$\lim_{\varepsilon \downarrow 0} x(t, \varepsilon) = X_0(t), \qquad \lim_{\varepsilon \downarrow 0} y(t, \varepsilon) = Y_0(t)$$


uniformly on $0 < t \le 1$. Since $Y_0(0) \ne y_0$, we expect $Y_0(t)$ to be non-uniformly valid near $t = 0$. Introduce a new variable, the stretched variable $\xi = t/\varepsilon$, and write
$$x(t, \varepsilon) = X(t, \varepsilon) + u(\xi, \varepsilon), \qquad y(t, \varepsilon) = Y(t, \varepsilon) + v(\xi, \varepsilon), \qquad (9.77)$$
where $X(t, \varepsilon)$, $Y(t, \varepsilon)$ are called "outer solutions" and $u(\xi, \varepsilon)$, $v(\xi, \varepsilon)$ are called "inner solutions". There is a matching condition between inner and outer solutions (see Fig. 9.6):
$$\lim_{\xi \to \infty} u(\xi, \varepsilon) = 0, \qquad \lim_{\xi \to \infty} v(\xi, \varepsilon) = 0, \qquad \text{while } v(0, \varepsilon) = y_0 - Y(0). \qquad (9.78)$$

Fig. 9.6

Step 1: Find the outer solutions $X(t, \varepsilon)$ and $Y(t, \varepsilon)$. Let
$$X(t, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n X_n(t), \qquad Y(t, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n Y_n(t). \qquad (9.79)$$
Do the regular perturbation for the system (9.73), i.e., substitute (9.79) into (9.73). Then $X(t, \varepsilon)$, $Y(t, \varepsilon)$ satisfy
$$\frac{dX}{dt} = f(X, Y), \qquad \varepsilon\frac{dY}{dt} = g(X, Y), \qquad X(0, \varepsilon) = x_0, \quad Y(0, \varepsilon) = y_0. \qquad (9.80)$$
Compare the $\varepsilon^n$-terms for $n = 0, 1, 2, \ldots$. Compute
$$f(X(t,\varepsilon), Y(t,\varepsilon)) = f\Big(\sum \varepsilon^n X_n, \sum \varepsilon^n Y_n\Big) = f(X_0, Y_0) + \varepsilon\left[\frac{\partial f}{\partial x}\Big|_{X_0, Y_0} X_1 + \frac{\partial f}{\partial y}\Big|_{X_0, Y_0} Y_1\right] + O(\varepsilon^2),$$
and
$$g(X(t,\varepsilon), Y(t,\varepsilon)) = g(X_0, Y_0) + \varepsilon\left[\frac{\partial g}{\partial x}\Big|_{X_0, Y_0} X_1 + \frac{\partial g}{\partial y}\Big|_{X_0, Y_0} Y_1\right] + O(\varepsilon^2).$$


The comparison of $\varepsilon^n$-terms by substituting (9.79) into (9.80) yields

$O(1)$:
$$\frac{dX_0}{dt} = f(X_0, Y_0), \qquad 0 = g(X_0, Y_0); \qquad (9.81)$$

$O(\varepsilon)$:
$$\frac{dX_1}{dt} = \frac{\partial f}{\partial x}\Big|_{X_0, Y_0} X_1 + \frac{\partial f}{\partial y}\Big|_{X_0, Y_0} Y_1, \qquad \frac{dY_0}{dt} = \frac{\partial g}{\partial x}\Big|_{X_0, Y_0} X_1 + \frac{\partial g}{\partial y}\Big|_{X_0, Y_0} Y_1. \qquad (9.82)$$

In (9.81), $X_0(t)$, $Y_0(t)$ satisfy
$$Y_0(t) = \varphi(X_0(t)), \qquad \frac{dX_0}{dt} = f(X_0, \varphi(X_0)), \quad X_0(0) = x_0. \qquad (9.83)$$

From (9.82), we obtain
$$Y_1(t) = \left[\frac{dY_0}{dt} - \frac{\partial g}{\partial x}\Big|_{X_0, Y_0} X_1\right] \Big/ \frac{\partial g}{\partial y}\Big|_{X_0, Y_0}. \qquad (9.84)$$

From (9.82) and (9.84), $X_1(t)$ satisfies
$$\frac{dX_1}{dt} = \psi_1(t)X_1 + \mu_1(t), \qquad X_1(0) = 0, \qquad (9.85)$$
where
$$\psi_1(t) = \frac{\partial f}{\partial x}\Big|_{X_0, Y_0} - \frac{\partial f}{\partial y}\Big|_{X_0, Y_0}\left(\frac{\partial g}{\partial x}\Big|_{X_0, Y_0} \Big/ \frac{\partial g}{\partial y}\Big|_{X_0, Y_0}\right), \qquad \mu_1(t) = \frac{\partial f}{\partial y}\Big|_{X_0, Y_0}\left(\frac{dY_0}{dt} \Big/ \frac{\partial g}{\partial y}\Big|_{X_0, Y_0}\right).$$
Inductively we shall have, for $i = 2, 3, \ldots$,
$$Y_i(t) = \alpha_i(t) + \beta_i(t)X_i(t), \qquad \frac{dX_i}{dt} = \psi_i(t)X_i + \mu_i(t), \quad X_i(0) = 0. \qquad (9.86)$$
For $x(0, \varepsilon) = X(0, \varepsilon) = x_0 = \sum_{i=0}^{\infty} X_i(0)\varepsilon^i$, it follows that $X_0(0) = x_0$ and $X_i(0) = 0$ for $i = 1, 2, \ldots$.


Step 2: Inner expansion at the singular layer near $t = 0$. From (9.73), (9.77), (9.80), and $\xi = t/\varepsilon$, we have
$$\frac{du}{d\xi} = \varepsilon\Big[f\big(X + u,\; Y + v\big) - f\big(X,\; Y\big)\Big], \qquad \frac{dv}{d\xi} = g\big(X + u,\; Y + v\big) - g\big(X,\; Y\big),$$
where $X$, $Y$ are evaluated at $t = \varepsilon\xi$ and $u$, $v$ at $(\xi, \varepsilon)$, with
$$u(0, \varepsilon) = x(0, \varepsilon) - X(0, \varepsilon) = x_0 - x_0 = 0, \qquad v(0, \varepsilon) = y_0 - Y(0, \varepsilon) = y_0 - Y(0) \ne 0. \qquad (9.87)$$

Let
$$u(\xi, \varepsilon) = \sum_{n=0}^{\infty} u_n(\xi)\varepsilon^n, \qquad v(\xi, \varepsilon) = \sum_{n=0}^{\infty} v_n(\xi)\varepsilon^n. \qquad (9.88)$$

Expand (9.87) in a power series in $\varepsilon$ by (9.88) and compare the coefficients on both sides of (9.87). Setting $\varepsilon = 0$, we obtain

$O(1)$:
$$\frac{du_0}{d\xi} = 0, \quad u_0(0) = 0 \quad \Rightarrow \quad u_0(\xi) \equiv 0. \qquad (9.89)$$
By the mean value theorem, we have
$$\frac{dv_0}{d\xi} = g(X_0(0), Y_0(0) + v_0(\xi)) - g(X_0(0), Y_0(0)) \overset{\text{M.V.T.}}{=} v_0(\xi)\, G(v_0(\xi)), \qquad v_0(0) = y_0 - Y_0(0) \;\;\text{(boundary layer jump)}. \qquad (9.90)$$

From hypothesis (H), $G(v_0(\xi)) \le -K < 0$; hence $|v_0(\xi)|$ decreases initially, $G(v_0(\xi))$ remains negative, and $|v_0(\xi)|$ decreases monotonically to zero as $\xi \to \infty$, with $|v_0(\xi)| \le |v_0(0)|e^{-K\xi}$.

du1 dξ

= f (X0 (0), Y0 (0) + v0 (ξ)) − f (X0 (0), Y0 (0)) ≡ v0 (ξ)F (v0 (ξ)) .

Once v0 (ξ) is solved by (9.90), we solve (9.91) and obtain Z ξ u1 (ξ) = v0 (s)F (v0 (s)) ds ∞

by the matching condition (9.78) u1 (∞) = 0.

(9.91)


Hence
$$x(t, \varepsilon) \sim X_0(t) + \varepsilon\left[X_1(t) + u_1(t/\varepsilon)\right] + O(\varepsilon^2), \qquad y(t, \varepsilon) \sim Y_0(t) + v_0(t/\varepsilon) + O(\varepsilon).$$
Now we go back to the Michaelis-Menten kinetics. Consider (9.72):
$$\begin{cases} \frac{dx}{dt} = f(x, y) = -x + (x + K - \lambda)y, \\ \varepsilon\frac{dy}{dt} = g(x, y) = x - (x + K)y, \end{cases} \qquad K > 0, \; \lambda > 0, \qquad (9.92)$$
with $x(0) = 1$, $y(0) = 0$. Let
$$x(t, \varepsilon) = X(t, \varepsilon) + u(\xi, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n X_n(t) + \sum_{n=0}^{\infty} \varepsilon^n u_n(\xi),$$
$$y(t, \varepsilon) = Y(t, \varepsilon) + v(\xi, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n Y_n(t) + \sum_{n=0}^{\infty} \varepsilon^n v_n(\xi).$$

Then, from (9.83),
$$Y_0(t) = \varphi(X_0(t)) = \frac{X_0(t)}{X_0(t) + K}, \qquad (9.93)$$
where $X_0(t)$ satisfies
$$\frac{dx}{dt} = -x + (x + K - \lambda)\frac{x}{x + K} = \frac{-\lambda x}{x + K}, \qquad x(0) = x_0 = 1.$$

(9.94)


Hence
$$y(t, \varepsilon) \sim \frac{X_0(t)}{X_0(t) + K} + \left(\frac{-1}{1+K}\right)e^{-(1+K)(t/\varepsilon)}.$$
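The quasi-steady-state approximation $y \approx X_0/(X_0 + K)$ can be checked by integrating the full stiff system (9.72) directly; a minimal sketch (the parameter values $K = 1$, $\lambda = 0.5$, $\varepsilon = 0.01$ are arbitrary illustrative choices, and $X_0$ is recovered from $X_0 + K\ln X_0 = 1 - \lambda t$ by bisection):

```python
import math

K, lam, eps = 1.0, 0.5, 0.01

def rhs(u, v):
    return (-u + (u + K - lam) * v, (u - (u + K) * v) / eps)

# RK4 on the full stiff system; the step is kept well below eps.
u, v, h, t = 1.0, 0.0, eps / 50, 0.0
while t < 0.5 - 1e-12:
    k1 = rhs(u, v)
    k2 = rhs(u + h*k1[0]/2, v + h*k1[1]/2)
    k3 = rhs(u + h*k2[0]/2, v + h*k2[1]/2)
    k4 = rhs(u + h*k3[0], v + h*k3[1])
    u += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
    v += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    t += h

# Leading-order outer solution: solve X0 + K*log(X0) = 1 - lam*t.
lo, hi = 1e-9, 1.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid + K * math.log(mid) < 1 - lam * t else (lo, mid)
X0 = (lo + hi) / 2
assert abs(u - X0) < 5 * eps              # x ~ X0 + O(eps)
assert abs(v - X0 / (X0 + K)) < 5 * eps   # pseudo-steady state for y
```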

From (9.91) we have
$$\frac{du_1}{d\xi} = f(X_0(0), Y_0(0) + v_0(\xi)) - f(X_0(0), Y_0(0)) = (1 + K - \lambda)v_0(\xi) = \frac{\lambda - (1+K)}{1+K}\, e^{-(1+K)\xi}, \qquad u_1(\infty) = 0,$$
and it follows that
$$u_1(\xi) = \frac{(1+K) - \lambda}{(1+K)^2}\, e^{-(1+K)\xi}.$$
Hence
$$x(t, \varepsilon) \sim X_0(t) + \varepsilon\, \frac{(1+K) - \lambda}{(1+K)^2}\, e^{-(1+K)(t/\varepsilon)}.$$

9.4 Exercises

Exercise 9.1 Approximate all roots of the equation $x^3 + x^2 + x + \varepsilon = 0$, $0 < \varepsilon \ll 1$.

Exercise 9.2 Approximate all roots of $\varepsilon x^3 + x - 1 = 0$. (Hint: Set $x = \frac{1}{y}$ and find $y$ as a function of $\varepsilon$.)

Exercise 9.3 Estimate the eigenvalues and eigenvectors of the matrix
$$\begin{pmatrix} 1 & 1 - \varepsilon \\ \varepsilon & 1 \end{pmatrix}.$$

Exercise 9.4 Approximate the eigenvalues and eigenfunctions of $y'' + \lambda(1 + \varepsilon x)y = 0$, $y(0) = y(\pi) = 0$, for all $\varepsilon \ne 0$ small.

Exercise 9.5 Assume
$$\varepsilon y'' + y' + y = 0, \qquad y(0) = \alpha_0, \; y(1) = \beta_0,$$
has a boundary layer at $x = 0$. Find the solution $u(x, \varepsilon)$ by singular perturbation.


Exercise 9.6 Consider the linear system
$$x' = ax + by, \qquad \varepsilon y' = cx + dy,$$
where $d < 0$. Rewrite the linear system as $z' = A(\varepsilon)z$ for $z = \begin{pmatrix} x \\ y \end{pmatrix}$ and
$$A(\varepsilon) = \begin{pmatrix} a & b \\ c/\varepsilon & d/\varepsilon \end{pmatrix}.$$
(1) Determine two linearly independent solutions of $z' = A(\varepsilon)z$ in the form $z(t) = e^{\lambda(\varepsilon)t} p(\varepsilon)$.

(2) Show that the solutions $z(t)$ obtained in (1) imply an asymptotic solution of the form
$$x(t, \varepsilon) = X_0(t) + O(\varepsilon), \qquad y(t, \varepsilon) = Y_0(t) + \eta_0(\tau) + O(\varepsilon)$$
on any bounded interval $[0, T]$, where $\begin{pmatrix} X_0 \\ Y_0 \end{pmatrix}$ satisfies the reduced problem and $\eta_0 \to 0$ as $\tau \equiv t/\varepsilon \to \infty$.

Exercise 9.8 Analyze the following singular perturbation problem similar to Michaelis-Menten kinetics du = −u + (u − a3 u + a1 )v1 + (a4 + u)v2 = f (u, v1 , v2 ), dt dv1 ε = u − (u + a3 u + a1 + a2 )v1 + (a4 + a5 − u)v2 = g1 (u, v1 , v2 ), dt dv2 ε = a3 uv1 − (a4 + a5 )v2 = g2 (u, v1 , v2 ), dt u(0) = 1, v1 (0) = v2 (0) = 0.

Chapter 10

INTRODUCTION TO MONOTONE DYNAMICAL SYSTEMS

10.1 Monotone Dynamical System with Applications to Cooperative Systems and Competitive Systems

The theory of monotone systems was discovered by Morris Hirsch in 1980 [Hir1; Hir2]. During the past thirty years, it has been successfully applied to analyzing $n$-species cooperative systems, two-species and three-species competitive systems, and their variants. The nice part of the theory is that it is a non-Lyapunov approach to the global behavior of the solutions of the above-mentioned systems. There are also discrete-version monotone dynamical systems [HS2; Z]. In this chapter we state without proof the main theorems of monotone dynamical systems; the proofs are deferred to the book [Smi]. We shall emphasize the application of the theory to mathematical models in population biology.

We shall use the same notation as in Section 5.1 of Chapter 5. Let $\pi(x, t)$ denote the dynamical system generated by the autonomous system of differential equations
$$\frac{dx}{dt} = f(x), \qquad (10.1)$$
where $f$ is continuously differentiable on an open subset $D \subseteq \mathbb{R}^n$. We recall that $\pi(x, t)$ is the solution of (10.1) which starts at position $x$ at time $t = 0$. $\pi(x, t)$ satisfies the following properties:

(i) $\pi(x, 0) = x$;
(ii) $\pi(x, t + s) = \pi(\pi(x, s), t)$;
(iii) $\pi$ is continuous in $t$ and $x$;


where (i) states that the trajectory $\pi(x, t)$ starts at position $x$ at initial time $t = 0$, (ii) is the semigroup property, and (iii) is the property of continuous dependence on initial data.

Definition 10.1.1 Let $X$ be a Banach space. We say that a set $K \subseteq X$ is a cone if the following conditions are satisfied:

(i) the set $K$ is closed;
(ii) if $x, y \in K$ then $\alpha x + \beta y \in K$ for all $\alpha, \beta \ge 0$;
(iii) for each pair of vectors $x, -x$, at least one does not belong to $K$, provided $x \ne 0$, where $0$ is the origin of $X$.

Definition 10.1.2 Let $x, y \in X$. We say that (i) $x \le_K y$ if and only if $y - x \in K$; (ii) $x \ll_K y$ if and only if $y - x \in \mathrm{Int}(K)$; (iii) $x <_K y$ if and only if $y - x \in K$ and $x \ne y$.

Definition 10.1.3 We say that $\pi$ is a monotone dynamical system with respect to $\le_K$ if $x \le_K y$ implies $\pi(x, t) \le_K \pi(y, t)$ for all $t \ge 0$, and strongly monotone if $x \le_K y$, $x \ne y$, implies $\pi(x, t) \ll_K \pi(y, t)$ for all $t > 0$.

Definition 10.1.4 Let $\frac{dx}{dt} = f(x)$, $x \in D \subseteq \mathbb{R}^n$, where $D$ is a convex, open subset of $\mathbb{R}^n$ and $f = (f_1, \cdots, f_n): D \to \mathbb{R}^n$ is $C^1$. We say that (10.1) is a


cooperative system if the Jacobian $Df(x) = \left(\frac{\partial f_i}{\partial x_j}(x)\right)$ satisfies $\frac{\partial f_i}{\partial x_j} \ge 0$ for $i \ne j$. We say that (10.1) is irreducible if $Df(x)$ is irreducible for all $x \in D$, i.e., the matrix $Df(x)$ cannot be put into the form $\begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$, where $A$, $C$ are square matrices, by reordering $(x_1, \ldots, x_n)$.

In the following, we may use the language of graph theory to verify the irreducibility of an $n \times n$ matrix $A$ [SW].

Definition 10.1.5 Let $A = (a_{ij})$ be an $n \times n$ matrix. Let $P_1, P_2, \cdots, P_n$ be $n$ vertices. If $a_{ij} \ne 0$, we draw a directed line segment $\overline{P_i P_j}$ connecting $P_i$ to $P_j$. The resulting graph is said to be strongly connected if, for each pair $(P_i, P_j)$, there is a directed path $\overline{P_i P_{k_1}}, \overline{P_{k_1} P_{k_2}}, \cdots, \overline{P_{k_r} P_j}$. A square matrix is irreducible if and only if its directed graph is strongly connected.

In the following, a sufficient condition is given for the system (10.1) to generate a monotone (strongly monotone) dynamical system.

Theorem 10.1.1 If the system (10.1) is cooperative in $D$, then $\pi$ is a monotone dynamical system with respect to $\le_K$ in $D$, where $K = \mathbb{R}^n_+$. If (10.1) is cooperative and irreducible in $D$, then $\pi$ is a strongly monotone system with respect to $\ll_K$.

Proof. From Theorem 2.6.3 of [Smi], $\pi$ is a monotone dynamical system. The proof of strong monotonicity follows from Theorem 4.1.1 in [Smi].

Theorem 10.1.2 (Convergence Criterion) Let $\gamma^+(x)$ be an orbit of the monotone dynamical system (10.1) which has compact closure in $D$. If $x \le_K \pi(x, T)$ or $\pi(x, T) \le_K x$ for some $T > 0$, then $\lim_{t \to \infty} \pi(x, t) = e$ for some equilibrium point $e$.

Proof. If $x \le_K \pi(x, T)$, then monotonicity implies that $\pi(x, nT) \le_K \pi(x, (n+1)T)$ for $n = 1, 2, \ldots$; from the assumption that $\gamma^+(x)$ has compact closure in $D$, it follows that $\lim_{n \to \infty} \pi(x, nT) = e$ for some $e$.


From continuity of π,   π(e, T ) = π lim π(x, nT ), T = lim π(x, (n + 1)T ) = e. t→∞

t→∞

It follows that the omega limit set of x, ω(x) is a T -periodic orbit {π(e, t) : t ∈ R}. However, T may not be the minimal period of π(e, t). Let P = {τ : π(e, t + τ ) = π(e, t)} be the set of all periods of the solution π(e, t). It is easy to verify that P is a closed set which is closed under addition and substraction and which contains nT for every positive integer n. Since the assumption xK π(x, T ) holds, xK π(x, T + s) for all s satisfying |s| < ε for some ε > 0 small. Repeating the same argument as above, we have that ω(x) is a periodic orbit of the point e with period T + s. Hence P contains the interval (T − ε, T + ε). Since P is closed under addition, it must contain an open interval of length 2ε centered on each of its points. It follows that P = R which implies that e is a rest point and limt→∞ π(x, t) = p. Theorem 10.1.3 A monotone dynamical system cannot have a nontrivial attracting periodic orbit.

Proof. If there were an attracting periodic orbit, then there would exist a point x in its domain of attraction such that x ≪_K p for some point p on the periodic orbit. As p is a limit point of the positive orbit γ^+(x), there exists T > 0 such that x ≪_K π(x, T). Then π(x, t) converges to a rest point by Theorem 10.1.2, contradicting the fact that it converges to a nontrivial periodic orbit.

Remark 10.1.1 In [Sel], the author considered a simple model of positive feedback control in a biochemical pathway,

x_1' = f(x_5) − αx_1,
x_i' = x_{i−1} − αx_i,  2 ≤ i ≤ 5,    (10.2)

where f(x) = x^p/(1 + x^p) with p > 0 and α > 0. The author derives a necessary condition for a Hopf bifurcation to a periodic solution: if p > cos^{−5}(2π/5) ≈ 355, then a Hopf bifurcation occurs at α = (f(c)/c)^{1/5}, where c = (p cos^5(2π/5) − 1)^{1/p}. The periodic orbits that arise at the Hopf bifurcation point are necessarily unstable, by Theorem 10.1.3 and the fact that (10.2) is a cooperative system.


Let x(t) be a solution of the monotone dynamical system (10.1) on an interval I. A subinterval [a, b] of I is called a rising interval if x(a) ≤_K x(b) and x(a) ≠ x(b); it is called a falling interval if x(b) ≤_K x(a) and x(a) ≠ x(b).

Lemma 10.1.1 A solution x(t) cannot have a rising interval and a falling interval that are disjoint.

Proof. See [SW] p. 272.

Theorem 10.1.4 A compact limit set of a monotone dynamical system cannot contain two points related by ≪_K. If the system is strongly monotone, then the limit set is unordered.

Proof. First we consider the case that the limit set L is the omega limit set of γ^+(x_0). Suppose that L contains distinct points x_1 and x_2 satisfying x_1 ≪_K x_2. Then there exists t_1 > 0 such that π(x_0, t_1) ≪_K x_2. Similarly, there exists t_2 > t_1 such that π(x_0, t_1) ≪_K π(x_0, t_2) = π(π(x_0, t_1), t_2 − t_1). By the convergence criterion (Theorem 10.1.2), L is a rest point, and we obtain a contradiction. Consider the case that L is the alpha limit set of γ^−(x_0), and let x(t) = π(x_0, t), t ≤ 0. Arguing as before, one can produce a rising interval and a disjoint falling interval of x(t), contradicting Lemma 10.1.1; see [SW] for the details. If the system is strongly monotone, two distinct points of L related by <_K would become related by ≪_K under the flow, reducing to the case above; hence L is unordered.

Theorem 10.1.5 Let L be a compact limit set of the cooperative or competitive system (10.1). Then L is mapped homeomorphically by an orthogonal projection onto a compact invariant set of a Lipschitz vector field in n − 1 variables.

Proof. By Theorem 10.1.4, L is unordered. Let v ≫ 0, let H_v = {x ∈ R^n : ⟨x, v⟩ = 0} be the hyperplane orthogonal to v, and let Q : R^n → H_v be the orthogonal projection onto H_v. Since L is unordered, the restriction Q_L of Q to L is one-to-one; indeed, one can show that there exists a constant c > 0 such that |Q_L(x_1) − Q_L(x_2)| ≥ c|x_1 − x_2| > 0 whenever x_1 ≠ x_2 are points in L. Hence Q_L^{−1} is Lipschitz on Q(L). We can construct a dynamical system on Q(L) ⊂ H_v. Let y ∈ Q(L); then y = Q_L(x) for a unique x ∈ L. Define the flow Φ(y, t) ≡ Q_L(π(x, t)). It is easy to verify that Φ(y, t) is a dynamical system generated by the vector field F(y) = Q_L(f(Q_L^{−1}(y))) on Q(L), and that Q_L sends the trajectories of ẋ = f(x) on L onto the trajectories of ẏ = F(y) on Q(L) (see Fig. 10.1).

Fig. 10.1

Remark 10.1.2 Let f : D ⊆ R^n → R^n and let

dx/dt = f(x),  x(0) = x_0,    (10.3)

be a competitive system, i.e., ∂f_i/∂x_j ≤ 0 for i ≠ j. Then with the reversed time scale τ = −t we obtain a cooperative system

dx/dτ = −f(x),  x(0) = x_0.    (10.4)

Thus the omega limit set of γ^+(x_0) of system (10.3) is exactly the alpha limit set of γ^−(x_0) of the system (10.4). Similarly, the alpha limit set of γ^−(x_0) of (10.3) is the omega limit set of γ^+(x_0) of (10.4).


In the following, we shall consider a special case of the system (10.3) with n = 3. Let (10.5) be an irreducible competitive system in R^3:

x_1' = f_1(x_1, x_2, x_3),
x_2' = f_2(x_1, x_2, x_3),    (10.5)
x_3' = f_3(x_1, x_2, x_3),

where ∂f_i/∂x_j ≤ 0 for i ≠ j, i, j = 1, 2, 3.

Theorem 10.1.6 (Poincaré–Bendixson-like theorem for 3-dimensional competitive systems) (i) A compact limit set L of the system (10.5) which contains no equilibrium is a periodic orbit. (ii) Let γ be a periodic orbit of system (10.5). Then there exists at least one equilibrium in the "interior" of γ.

Proof. From Theorem 10.1.5, the limit set L can be deformed to a compact invariant set A of a planar vector field. By the Poincaré–Bendixson Theorem, A either contains equilibrium points or contains a periodic orbit. This completes the proof of (i). The proof of (ii) can be found in [Smi], where the author applies the Brouwer Fixed Point Theorem to show that there is at least one equilibrium in the "interior" of γ.

Remark 10.1.3 The term "interior" needs to be interpreted. It is the bounded component of the set J = (γ + R^3_+)^c ∩ (γ − R^3_+)^c, where R^3_+ is the positive cone in R^3 and the superscript c denotes complement. See [Smi] for more details.

10.2 Uniform Persistence

Consider a population model of n-species interaction which takes the form

x_i' = x_i f_i(x_1, ..., x_n),  x_i(0) = x_{i0} ≥ 0,  i = 1, 2, ..., n.    (10.6)


Definition 10.2.1 The system (10.6) is said to be persistent if

lim inf_{t→∞} x_i(t) > 0,  i = 1, 2, ..., n,

for every trajectory with positive initial conditions. The system (10.6) is said to be uniformly persistent if there exists a positive number δ such that

lim inf_{t→∞} x_i(t) ≥ δ,  i = 1, 2, ..., n,

for every trajectory with positive initial conditions.

To prove uniform persistence of (10.6), we need the following lemma.

Theorem 10.2.1 (Butler–McGehee Lemma [SW] p. 12) Let P be a hyperbolic equilibrium of the system (10.6). Suppose P ∈ ω(x) and {P} ≠ ω(x), the omega limit set of γ^+(x), x ∈ R^n_+. Then there exist points q ∈ W^s(P) ∩ ω(x) and q̂ ∈ W^u(P) ∩ ω(x), where W^s(P), W^u(P) are the stable and unstable manifolds of the equilibrium P, respectively.

Fig. 10.2

Proof. Since P is a hyperbolic equilibrium, from the definition of the stable and unstable manifolds there exists a bounded open set U ⊆ R^n containing P, but not x, with the property that if ϕ_t(y) ∈ U for all t > 0 (t < 0), then y ∈ W^s(P) (W^u(P)). By taking a smaller open set V, P ∈ V ⊆ cl(V) ⊂ U, we have that ϕ_t(y) ∈ cl(V) for all t > 0 (t < 0) implies y ∈ W^s(P) (W^u(P)). Since P ∈ ω(x), there exists a sequence {t_n}, lim_{n→∞} t_n = ∞, such that lim_{n→∞} x_n = lim_{n→∞} ϕ_{t_n}(x) = P. It follows that x_n ∈ V for all large n. Since x ∉ W^s(P) and ω(x) ≠ {P}, from the property of the neighborhood V there exist r_n, s_n > 0 such that r_n < t_n, ϕ_t(x_n) ∈ V for −r_n < t < s_n, and q_n = ϕ_{−r_n}(x_n), q̂_n = ϕ_{s_n}(x_n) ∈ ∂V (see Fig. 10.2). By continuity of ϕ_t(x), solutions that start near P must remain near P; hence it follows that lim_{n→∞} r_n = lim_{n→∞} s_n = ∞. However, cl(V) is compact, so (passing to a subsequence if necessary) we have that lim_{n→∞} ϕ_{−r_n}(x_n) = q ∈ cl(V) and lim_{n→∞} ϕ_{s_n}(x_n) = q̂ ∈ cl(V). We continue the proof for q; the case of q̂ is similar. We claim that ϕ_t(q) ∈ cl(V) for all t > 0. Recall that lim_{n→∞} q_n = q, where q_n = ϕ_{−r_n}(x_n). Fix t > 0. By the continuity of ϕ_t(x), lim_{n→∞} ϕ_t(q_n) = ϕ_t(q). Since −r_n < t − r_n < 0 for all large n, ϕ_t(q_n) = ϕ_{t−r_n}(x_n) ∈ V for all large n. It follows that ϕ_t(q) ∈ cl(V). Since t > 0 was arbitrary, the claim is established. Since ϕ_t(q) ∈ cl(V) for all t > 0, we have q ∈ W^s(P) by the property of the neighborhood V. However, q ∈ cl(γ^+(x)) = γ^+(x) ∪ ω(x). Since q ∈ W^s(P), q ∉ γ^+(x), and hence q ∈ ω(x), which establishes the lemma.

Example 10.2.1 Consider the predator-prey model in Example 4.2.3:

dx/dt = rx(1 − x/K) − (mx/(a + x)) y,
dy/dt = (mx/(a + x) − d) y,    (10.7)
x(0) = x_0 > 0,  y(0) = y_0 > 0.

If 0 < λ < (K − a)/2, then the positive equilibrium (x^∗, y^∗) is an unstable focus. By the Poincaré–Bendixson Theorem, if the omega limit set ω(x_0, y_0) of the trajectory γ^+(x_0, y_0) does not contain any equilibrium, then ω(x_0, y_0) contains a periodic orbit; thus the existence of a periodic solution follows. Since (x^∗, y^∗) is an unstable focus, (x^∗, y^∗) ∉ ω(x_0, y_0).
We claim that E_1 = (K, 0), E_0 = (0, 0) ∉ ω(x_0, y_0). If E_1 ∈ ω(x_0, y_0), then from the Butler–McGehee Lemma there exists (x̂, 0) ∈ ω(x_0, y_0) with x̂ > 0, x̂ ≠ K, since the stable manifold of E_1 is the positive x-axis. If x̂ > K, then by the invariance of the omega limit set, γ^−(x̂, 0) ⊆ ω(x_0, y_0), where γ^−(x̂, 0) = {(x, 0) : x > x̂}. This contradicts the fact that the solution (x(t), y(t)) is bounded.


If x̂ < K, then (0, 0) ∈ cl(γ^−(x̂, 0)) ⊆ ω(x_0, y_0). Apply the Butler–McGehee Lemma again to P = (0, 0): since the stable manifold of (0, 0) is the y-axis, there exists ŷ > 0 such that (0, ŷ) ∈ W^s(0, 0) ∩ ω(x_0, y_0). Then γ^−(0, ŷ) ⊆ ω(x_0, y_0), where γ^−(0, ŷ) = {(0, y) : y > ŷ}. This contradicts the boundedness of the solution (x(t), y(t)). Hence we have proved the existence of periodic solutions for the system (10.7).

Next we state without proof a theorem on uniform persistence for the system (10.6).

Definition 10.2.2 (i) The flow F = ϕ_t(x) generated by (10.6) is called dissipative if for each x ∈ R^n_+, ω(x) ≠ ∅ and the invariant set Ω(F) = ∪_{x∈R^n_+} ω(x) has compact closure.
(ii) We say that M ⊆ R^n_+ is an isolated invariant set for the flow F generated by (10.6) if M is a nonempty invariant set which is the maximal invariant set in some neighborhood of itself. Note that if M is a compact, isolated invariant set, one may always choose a compact isolating neighborhood. Typical examples of isolated invariant sets are equilibria and periodic orbits.
(iii) The stable set W^s(M) of an isolated invariant set M is defined to be {x ∈ R^n_+ : ω(x) ≠ ∅, ω(x) ⊂ M}, and the unstable set W^u(M) is defined similarly in terms of the alpha limit set α(x).
(iv) If M, N are isolated invariant sets for the flow ϕ_t(x), we shall say that M is chained to N, written M → N, if there exists x ∉ M ∪ N such that x ∈ W^u(M) ∩ W^s(N).
(v) A chain of isolated invariant sets is a finite sequence M_1, M_2, ..., M_k with M_1 → M_2 → M_3 → ... → M_k (M_1 → M_1 if k = 1). The chain is called a cycle if M_k = M_1.
(vi) Let ∂F = ϕ_t(x)|_{∂(R^n_+)} : ∂(R^n_+) → ∂(R^n_+) be the boundary flow of (10.6). ∂F will be called acyclic if, for some isolated covering M = ∪_{i=1}^k M_i of Ω(∂F), no subset of {M_i} forms a cycle.

Theorem 10.2.2 (Uniform Persistence [SW] p. 280) Let F = ϕ_t(x) : R^n_+ → R^n_+ be the flow generated by (10.6) and ∂F : ∂(R^n_+) → ∂(R^n_+) its boundary flow. Assume that


(i) F is dissipative;
(ii) the boundary flow ∂F is acyclic with acyclic covering M = {M_1, M_2, ..., M_k}.

Then F is uniformly persistent if and only if

W^s(M_i) ∩ Int(R^n_+) = ∅ for each M_i ∈ M.    (10.8)

Furthermore, if (10.8) holds, then there is an equilibrium in Int(R^n_+).

Example 10.2.2 Food Web Model [FW]

x' = xg(x) − yp(x),
y' = y(−d_1 + c_1 p(x)) − zq(y),    (10.9)
z' = z(−d_2 + c_2 q(y)),
x(0) = x_0 > 0,  y(0) = y_0 ≥ 0,  z(0) = z_0 ≥ 0,

where

g(x) = r(1 − x/K),  p(x) = m_1 x/(a_1 + x),  q(y) = a_2 y/(a_2 + y).

The equilibrium E^∗ = (x^∗, y^∗, 0) in the xy-plane exists if there is a point x^∗ < K such that p(x^∗) = d_1/c_1, in which case y^∗ = x^∗ g(x^∗)/p(x^∗). The origin E_0 = (0, 0, 0) and the equilibrium point E_1 = (K, 0, 0) are each unstable with two-dimensional unstable manifolds. E^∗ may be stable or unstable in the plane. Assume E^∗ is globally stable in the x–y plane. From Theorem 10.2.2, the system (10.9) is uniformly persistent if and only if −d_2 + c_2 q(y^∗) > 0.

Example 10.2.3 (Rock–Scissors–Paper Model [ML; CHW]) Consider the system of three competing species

x_1' = x_1(1 − x_1 − α_1 x_2 − β_1 x_3),
x_2' = x_2(1 − β_2 x_1 − x_2 − α_2 x_3),    (10.10)
x_3' = x_3(1 − α_3 x_1 − β_3 x_2 − x_3),
x_1(0) > 0,  x_2(0) > 0,  x_3(0) > 0,

subject to

0 < α_i < 1 < β_i,  α_i + β_i > 2,  i = 1, 2, 3.    (10.11)

The condition (10.11) says that x_1 out-competes x_2 (with x_3 = 0, orbits in the x_1–x_2 plane tend to a point with x_2 = 0), x_2 out-competes x_3, and x_3 out-competes x_1. In [CHW], the authors proved that if A_1A_2A_3 ≠ B_1B_2B_3,


where A_i = 1 − α_i > 0, B_i = β_i − 1 > 0, i = 1, 2, 3, then there are no periodic orbits in the interior of R^3_+, by an application of Stokes' Theorem. Thus, by Theorem 10.1.6, the ω-limit set of each orbit contains equilibrium points. If A_1A_2A_3 > B_1B_2B_3, then the interior equilibrium P = (x^∗, y^∗, z^∗) exists and is globally asymptotically stable with respect to the interior of R^3_+. If A_1A_2A_3 < B_1B_2B_3, then P = (x^∗, y^∗, z^∗) is a saddle point with one-dimensional stable manifold Γ. If (x_0, y_0, z_0) ∉ Γ, then the ω-limit set ω(x_0, y_0, z_0) of γ^+(x_0, y_0, z_0) is ω(x_0, y_0, z_0) = O_1 ∪ O_2 ∪ O_3, where O_1 is the orbit in the x_2x_3 plane connecting the equilibrium e_3 = (0, 0, 1) to the equilibrium e_2 = (0, 1, 0); O_2 is the orbit in the x_1x_3 plane connecting e_1 = (1, 0, 0) to e_3; and O_3 is the orbit in the x_1x_2 plane connecting e_2 to e_1. In this case, the system (10.10) is not persistent and the solutions of (10.10) exhibit aperiodic oscillation.

Proof of nonexistence of periodic orbits of (10.10) by Stokes' Theorem: We shall prove that if A_1A_2A_3 ≠ B_1B_2B_3, then the system (10.10) has no nontrivial periodic solutions. Consider the system (10.10) with the assumptions (10.11):

ẋ_1 = f_1(x_1, x_2, x_3) = x_1(1 − x_1 − α_1 x_2 − β_1 x_3),
ẋ_2 = f_2(x_1, x_2, x_3) = x_2(1 − β_2 x_1 − x_2 − α_2 x_3),    (10.12)
ẋ_3 = f_3(x_1, x_2, x_3) = x_3(1 − α_3 x_1 − β_3 x_2 − x_3),
x_i(0) > 0,  i = 1, 2, 3.

Define a new vector field (M_1, M_2, M_3) = (x_1, x_2, x_3) × (f_1, f_2, f_3). Then routine computations yield

M_1 = x_2 x_3 [(β_2 − α_3)x_1 + (1 − β_3)x_2 + (α_2 − 1)x_3],
M_2 = x_1 x_3 [(α_3 − 1)x_1 + (β_3 − α_1)x_2 + (1 − β_1)x_3],    (10.13)
M_3 = x_1 x_2 [(1 − β_2)x_1 + (α_1 − 1)x_2 + (β_1 − α_2)x_3],

and

curl(M_1, M_2, M_3)
= (∂M_3/∂x_2 − ∂M_2/∂x_3, ∂M_1/∂x_3 − ∂M_3/∂x_1, ∂M_2/∂x_1 − ∂M_1/∂x_2)
= ( x_1[(A_3 − B_2)x_1 − (3A_1 + B_3)x_2 + (3B_1 + A_2)x_3],
    x_2[(3B_2 + A_3)x_1 + (A_1 − B_3)x_2 − (3A_2 + B_1)x_3],    (10.14)
    x_3[−(3A_3 + B_2)x_1 + (A_1 + 3B_3)x_2 + (A_2 − B_1)x_3] ).


Let P = (p_1, p_2, p_3) denote the interior equilibrium, and let

Γ = {(p_1 t, p_2 t, p_3 t) | t > 0}.    (10.15)

Lemma 10.2.1 Γ is a positively invariant set under (10.12), and the solution ψ(t) of (10.12) with initial condition in Γ satisfies lim_{t→∞} ψ(t) = P.

Proof. If x(0) ∈ Γ, then x(0) = (p_1 ξ, p_2 ξ, p_3 ξ) for some ξ > 0. Let φ(t) satisfy φ'(t) = φ(t)(1 − φ(t)), φ(0) = ξ. Then it is easy to verify that ψ(t) = (p_1 φ(t), p_2 φ(t), p_3 φ(t)) satisfies (10.12). Hence Γ is positively invariant and lim_{t→∞} ψ(t) = P.

Lemma 10.2.2 Let (x_1, x_2, x_3) ∈ R^3_+ with x_i > 0, i = 1, 2, 3. If (x_1, x_2, x_3) ∉ Γ, then (M_1, M_2, M_3) ≠ 0 at (x_1, x_2, x_3).

Proof. Since (M_1, M_2, M_3) = (x_1, x_2, x_3) × (f_1, f_2, f_3), if (M_1, M_2, M_3) = 0, then either (f_1, f_2, f_3) = 0 or (f_1, f_2, f_3) = t(x_1, x_2, x_3) for some t ∈ R. If (f_1, f_2, f_3) = 0, then (x_1, x_2, x_3) = P. If (f_1, f_2, f_3) = t(x_1, x_2, x_3), then

(1 − x_1 − α_1 x_2 − β_1 x_3) = (1 − β_2 x_1 − x_2 − α_2 x_3) = (1 − α_3 x_1 − β_3 x_2 − x_3) = t.

It follows that (x_1, x_2, x_3) = (1 − t)(p_1, p_2, p_3) ∈ Γ. Hence either of the above two cases leads to a contradiction to the assumption (x_1, x_2, x_3) ∉ Γ.

Lemma 10.2.3 The solutions of (10.12) are positive and bounded; furthermore, for any ε > 0 there exists T ≥ 0 such that, for each i = 1, 2, 3, x_i(t) < 1 + ε for all t ≥ T. We omit the proof of Lemma 10.2.3 because it is routine.

Theorem 10.2.3 If A_1A_2A_3 ≠ B_1B_2B_3, then the system (10.12) has no periodic solutions in the interior of R^3_+.


Proof. Suppose there exists a periodic solution x(t) = (x_1(t), x_2(t), x_3(t)), with period w, in the interior of R^3_+. Let C = {(x_1(t), x_2(t), x_3(t)) | 0 ≤ t ≤ w}. We claim that the periodic orbit C is disjoint from the set Γ. From Lemma 10.2.1 it follows that if C ∩ Γ ≠ ∅, then x(t) → P as t → ∞, contradicting the fact that x(t) is a periodic solution.

Next, we construct the following conical surface S:

S = {λ(x_1(t), x_2(t), x_3(t)) | λ ∈ [0, 1] and t ∈ [0, w]}.

Since (10.12) is a competitive system, from Theorem 10.1.4 any two points x, y ∈ C, x ≠ y, are unrelated, i.e., x − y ∉ Int(R^3_+) and y − x ∉ Int(R^3_+). Hence the surface S does not cross itself. Given a point (x_1(t_0), x_2(t_0), x_3(t_0)) ∈ C, consider the segment from 0 to x(t_0). Then from Lemma 10.2.2,

N = (x_1(t_0), x_2(t_0), x_3(t_0)) × (f_1, f_2, f_3)|_{x=x(t_0)} = (M_1, M_2, M_3)|_{x=x(t_0)} ≠ 0

is a normal vector of the surface S at each point of the segment (0, x(t_0)). Normalizing N, we obtain the unit normal vector

n = (1/K_1)(M_1, M_2, M_3)|_{x=x(t_0)},

where K_1 = |N| ≠ 0. For each point on the segment (0, x(t_0)), we compute curl(M_1, M_2, M_3) · n at the point x = s(x_1(t_0), x_2(t_0), x_3(t_0)), s ∈ [0, 1]. From (10.13) and (10.14) it follows that

curl(M_1, M_2, M_3) · n = s^2 curl(M_1, M_2, M_3)|_{x=x(t_0)} · (1/K_1)(M_1, M_2, M_3)|_{x=x(t_0)}
= s^2 (1/K_1) x_1 x_2 x_3 G(x_1, x_2, x_3)|_{x=x(t_0)},


where, writing (a, b, c)^T for a column vector,

G(x_1, x_2, x_3)
= (x_1, x_2, x_3) [ (B_2 + A_3, −B_3, −A_2)^T (A_3 − B_2, −3A_1 − B_3, 3B_1 + A_2)
+ (−A_3, B_3 + A_1, −B_1)^T (3B_2 + A_3, A_1 − B_3, −3A_2 − B_1)
+ (−B_2, −A_1, B_1 + A_2)^T (−3A_3 − B_2, 3B_3 + A_1, A_2 − B_1) ] (x_1, x_2, x_3)^T.

A routine computation shows G(x_1, x_2, x_3) ≡ 0. Hence curl(M_1, M_2, M_3) · n = 0 on the segment (0, x(t_0)) for all t_0 ∈ [0, w], and therefore

curl(M_1, M_2, M_3) · n = 0 on the surface S.    (10.16)
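The identity G ≡ 0, on which (10.16) rests, says that curl(M) · M vanishes identically when the A_i, B_i are treated as independent symbols. A sketch of a symbolic check with sympy (not from the text):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
A1, A2, A3 = sp.symbols('A1 A2 A3', positive=True)
B1, B2, B3 = sp.symbols('B1 B2 B3', positive=True)

# alpha_i = 1 - A_i, beta_i = 1 + B_i, as in the text.
a1_, a2_, a3_ = 1 - A1, 1 - A2, 1 - A3
b1_, b2_, b3_ = 1 + B1, 1 + B2, 1 + B3

x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([
    x1 * (1 - x1 - a1_ * x2 - b1_ * x3),
    x2 * (1 - b2_ * x1 - x2 - a2_ * x3),
    x3 * (1 - a3_ * x1 - b3_ * x2 - x3),
])

M = x.cross(f)                        # (M1, M2, M3) = x × f, as in (10.13)
curlM = sp.Matrix([
    sp.diff(M[2], x2) - sp.diff(M[1], x3),
    sp.diff(M[0], x3) - sp.diff(M[2], x1),
    sp.diff(M[1], x1) - sp.diff(M[0], x2),
])

print(sp.expand(curlM.dot(M)))        # 0, i.e. x1*x2*x3*G vanishes identically
```

Geometrically, this is exactly why the flux of curl(M) through the cone S, whose normal is parallel to M, is zero.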

Let C' be the surface C' = {(x_1, x_2, x_3) | x_1^{δ_1} x_2^{δ_2} x_3^{δ_3} = c}, where the positive numbers δ_1, δ_2, δ_3 will be selected below and c > 0 is sufficiently small so that C' is disjoint from the periodic orbit C. Let Y be the intersection of the surface C' and the cone (bounded by S). Then C' divides the surface S into two parts S_1 and S_2 such that C ⊂ S_1 and (0, 0, 0) ∈ S_2.

Let S' = Y ∪ S_1. Then S' is a surface with ∂S' = C. On the surface Y, the outward normal vector is N = −∇(x_1^{δ_1} x_2^{δ_2} x_3^{δ_3}) = −c(δ_1/x_1, δ_2/x_2, δ_3/x_3). Thus the outward unit normal vector n on Y is n = −(c/K_2)(δ_1/x_1, δ_2/x_2, δ_3/x_3), where K_2 = |N|. From (10.14), it follows that on the surface Y we have

curl(M_1, M_2, M_3) · n
= −(c/K_2)[ x_1((δ_1 + δ_2 − 3δ_3)A_3 − (δ_1 − 3δ_2 + δ_3)B_2)
+ x_2(−(3δ_1 − δ_2 − δ_3)A_1 − (δ_1 + δ_2 − 3δ_3)B_3)
+ x_3((δ_1 − 3δ_2 + δ_3)A_2 + (3δ_1 − δ_2 − δ_3)B_1) ].

Choose δ_1, δ_2, δ_3 satisfying

δ_1 + δ_2 − 3δ_3 = −A_1B_2,
δ_1 − 3δ_2 + δ_3 = −A_1A_3,
3δ_1 − δ_2 − δ_3 = B_2B_3,

that is,

δ_1 = (1/4)(A_1B_2 + A_1A_3 + 2B_2B_3) > 0,
δ_2 = (1/4)(A_1B_2 + 2A_1A_3 + B_2B_3) > 0,
δ_3 = (1/4)(2A_1B_2 + B_2B_3 + A_1A_3) > 0.
Then we have

curl(M_1, M_2, M_3) · n = −(c/K_2) x_3 (B_1B_2B_3 − A_1A_2A_3) < 0 or > 0 for all x ∈ Y.    (10.17)

Now we are in a position to prove the nonexistence of periodic solutions by Stokes' Theorem. Since S_1 and Y are smooth enough for the application of Stokes' Theorem,

∮_C M_1 dx_1 + M_2 dx_2 + M_3 dx_3 = ∬_{S_1 ∪ Y} curl(M_1, M_2, M_3) · n dA.    (10.18)

From the fact that (M_1, M_2, M_3) = (x_1, x_2, x_3) × (f_1, f_2, f_3) is orthogonal to (f_1, f_2, f_3), it follows that

∮_C M_1 dx_1 + M_2 dx_2 + M_3 dx_3 = ∫_0^w (M_1 f_1 + M_2 f_2 + M_3 f_3) dt = 0.    (10.19)

From (10.16) and (10.17),

∬_{S_1 ∪ Y} curl(M_1, M_2, M_3) · n dA
= ∬_{S_1} curl(M_1, M_2, M_3) · n dA + ∬_Y curl(M_1, M_2, M_3) · n dA    (10.20)
= 0 − ∬_Y (c/K_2) x_3 (B_1B_2B_3 − A_1A_2A_3) dA ≠ 0.

Thus (10.18), (10.19), (10.20) lead to the desired contradiction.

10.3 Application: Competition of Two Species in a Chemostat with Inhibition

Let S(t) denote the nutrient concentration at time t in the culture vessel; x_1(t), x_2(t) the concentrations of the competitors; and p(t) the concentration of the inhibitor (or toxicant or pollutant). The equations of the model


take the form [HW]

S' = (S^{(0)} − S)D − (m_1 x_1 S)/(a_1 + S) f(p) − (m_2 x_2 S)/(a_2 + S),
x_1' = x_1 ((m_1 S)/(a_1 + S) f(p) − D),
x_2' = x_2 ((m_2 S)/(a_2 + S) − D),    (10.21)
p' = (p^{(0)} − p)D − (δ x_2 p)/(K + p),
S(0) ≥ 0,  x_i(0) > 0,  i = 1, 2,  p(0) ≥ 0.

S^{(0)} is the input concentration of the nutrient, and p^{(0)} is the input concentration of the inhibitor, both of which are assumed to be constant. D is the dilution rate of the chemostat. S^{(0)}, p^{(0)}, and D are under the control of the experimenter. m_i and a_i, i = 1, 2, are the maximal growth rates of the competitors (without an inhibitor) and the Michaelis-Menten (or half-saturation) constants, respectively. These parameters, inherent properties of the organism, are measurable in the laboratory. δ and K play similar roles for the pollutant, δ being the uptake by x_2 and K being a half-saturation parameter. The function f(p) represents the degree of inhibition of p on the growth rate (or uptake rate) of x_1. By suitable scaling, for example S → S/S^{(0)}, p → p/p^{(0)}, t → Dt, we may assume S^{(0)} = 1, D = 1, p^{(0)} = 1, and system (10.21) takes the form

S' = 1 − S − (m_1 x_1 S)/(a_1 + S) f(p) − (m_2 x_2 S)/(a_2 + S),
x_1' = x_1 ((m_1 S)/(a_1 + S) f(p) − 1),
x_2' = x_2 ((m_2 S)/(a_2 + S) − 1),    (10.22)
p' = 1 − p − (δ x_2 p)/(K + p),
S(0) ≥ 0,  x_i(0) > 0,  i = 1, 2,  p(0) ≥ 0.

Concerning the function f(p), we assume that

(i) f(p) ≥ 0, f(0) = 1;  (ii) f'(p) < 0 for p > 0.    (10.23)

The function f(p) = e^{−ηp}, η > 0, has these properties.
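The scaled model (10.22) is easy to integrate numerically, and the quantity Σ = 1 − S − x_1 − x_2 introduced below, which decays to zero, gives a convenient correctness check. A sketch with scipy, taking f(p) = e^{−ηp}; the parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the text), with f(p) = exp(-eta*p).
m1, a1, m2, a2 = 4.0, 0.3, 3.0, 0.4
delta, K, eta = 2.0, 1.0, 2.0
f = lambda p: np.exp(-eta * p)

def rhs(t, u):
    S, x1, x2, p = u
    return [1 - S - m1*x1*S/(a1 + S)*f(p) - m2*x2*S/(a2 + S),
            x1*(m1*S/(a1 + S)*f(p) - 1),
            x2*(m2*S/(a2 + S) - 1),
            1 - p - delta*x2*p/(K + p)]

sol = solve_ivp(rhs, (0, 40), [0.5, 0.1, 0.1, 0.0], rtol=1e-8, atol=1e-10)
S, x1, x2, p = sol.y[:, -1]
print(S + x1 + x2)     # ~1: Sigma = 1 - S - x1 - x2 obeys Sigma' = -Sigma
```

Since Σ' = −Σ, the numerical trajectory should satisfy S + x_1 + x_2 → 1, which is a sharper check on the implementation than eyeballing the solution curves.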


Let Σ = 1 − S − x_1 − x_2. Then Σ' = −S' − x_1' − x_2' = −(1 − S − x_1 − x_2) = −Σ. System (10.22) may then be replaced by

Σ' = −Σ,
x_1' = x_1 ( m_1(1 − Σ − x_1 − x_2)/(a_1 + 1 − Σ − x_1 − x_2) f(p) − 1 ),
x_2' = x_2 ( m_2(1 − Σ − x_1 − x_2)/(a_2 + 1 − Σ − x_1 − x_2) − 1 ),    (10.24)
p' = 1 − p − δ x_2 p/(K + p).

Clearly, lim_{t→∞} Σ(t) = 0. Hence the solutions in the omega limit set of (10.24) must satisfy

x_1' = x_1 ( m_1(1 − x_1 − x_2)/(1 + a_1 − x_1 − x_2) f(p) − 1 ),
x_2' = x_2 ( m_2(1 − x_1 − x_2)/(1 + a_2 − x_1 − x_2) − 1 ),    (10.25)
p' = 1 − p − δ x_2 p/(K + p),
x_i(0) > 0,  i = 1, 2,  p(0) ≥ 0,  x_1(0) + x_2(0) < 1.

More directly, we could also apply the theory of asymptotically autonomous systems (see [SW] p. 294). System (10.25) is competitive. Let

λ_1 = a_1/(m_1 − 1),  λ_2 = a_2/(m_2 − 1).    (10.26)

These are the break-even concentrations for species 1 and 2, respectively, for the chemostat, and they would determine the outcome if the inhibitor p were not present. The form of (10.25) guarantees that if x_i(0) > 0, i = 1, 2, then x_i(t) > 0 for t > 0. Moreover, p'|_{p=0} = 1 > 0; so, if p(0) ≥ 0, then p(t) > 0 for t > 0. x_1(t) and x_2(t) satisfy

x_1' ≤ x_1 ( m_1(1 − x_1 − x_2)/(1 + a_1 − x_1 − x_2) − 1 ),
x_2' ≤ x_2 ( m_2(1 − x_1 − x_2)/(1 + a_2 − x_1 − x_2) − 1 ),    (10.27)

so that an application of Kamke's theorem (Theorem 2.6.3) and elementary knowledge of the behavior of trajectories of (10.27) with equalities establishes the following proposition.

Proposition 10.3.1 If m_i ≤ 1 or if λ_i ≥ 1, then lim_{t→∞} x_i(t) = 0, i = 1 or 2.


This simply states the biologically intuitive fact that if one of the competitors could not survive in the simple chemostat, that competitor will not survive in the chemostat with an inhibitor. Thus we may assume that m_i > 1 and 0 < λ_i < 1, i = 1, 2.

Lemma 10.3.1 There exists a number γ > 0 such that p(t) ≥ γ for t sufficiently large.

Proof. Suppose lim inf_{t→∞} p(t) = 0. If p(t) decreased to zero monotonically, then there would be a point t_0 such that for t > t_0, p(t) + δx_2(t)p(t)/(K + p(t)) < 1. For such values, p'(t) > 0, which contradicts p(t) decreasing. Hence there exists a sequence of points t_n, t_n → ∞, such that p'(t_n) = 0 and p(t_n) → 0 as t_n → ∞. For such values of t_n,

0 = 1 − p(t_n) − δ p(t_n) x_2(t_n)/(K + p(t_n))
> 1 − p(t_n) − δ p(t_n)/(K + p(t_n))
> 0

for n large, since x_2(t) < 1. This contradiction establishes the lemma.

Theorem 10.3.1 If 0 < λ_2 ≤ λ_1 < 1, then

lim_{t→∞} x_1(t) = 0,  lim_{t→∞} x_2(t) = 1 − λ_2 = x_2^∗,  lim_{t→∞} p(t) = p_2^∗ < 1,

where p_2^∗ is the positive root of the quadratic

(1 − p)(K + p) − δ(1 − λ_2)p = 0.    (10.28)

Proof. The reason for labeling it p_2^∗ will become clear below. p_2^∗ < 1 follows from the fact that p(t) satisfies

p' < 1 − p    (10.29)

and the basic comparison theorem for differential inequalities. In view of Lemma 10.3.1, the inequalities (10.27) can be replaced by

x_1' ≤ x_1 ( m_1(1 − x_1 − x_2) f(γ)/(1 + a_1 − x_1 − x_2) − 1 ),
x_2' ≤ x_2 ( m_2(1 − x_1 − x_2)/(1 + a_2 − x_1 − x_2) − 1 ),


for t sufficiently large. This system of inequalities can be compared to the equations for the chemostat with λ_2 and λ_γ as parameters, where

λ_2 ≤ λ_1 < a_1/(m_1 f(γ) − 1) = λ_γ,

so that the first component of the comparison system tends to zero as t tends to infinity. Hence so does x_1(t).

Thus, for the remainder of this section, we may assume that

m_i > 1,  i = 1, 2;  0 < λ_1 < λ_2 < 1    (10.30)

to make the problem interesting. Note also that this provides the boundedness of solutions. The results below provide conditions for one or both of the competitors to wash out of the chemostat. To avoid "unlikely" cases, we tacitly assume that all rest points and periodic orbits are hyperbolic, i.e., that their stability is determined by their linearization.

The rest point set. As noted above, system (10.25) is a competitive system. From Theorem 10.1.6, a type of Poincaré–Bendixson theory holds. The only possible omega limit sets are those of a two-dimensional system, specifically a rest point, a periodic orbit, or a finite set of rest points connected by trajectories. Moreover, if there is a periodic orbit, it must have a rest point "inside," where "inside" is defined in terms of an order. This has the consequence that when there is no interior rest point, there cannot be a periodic orbit in the open positive octant, and hence the limit is on the boundary. Thus the existence of an interior rest point is crucial for coexistence.

There are three potential rest points on the boundary, which we label E_0 = (0, 0, 1), E_1 = (x_1^∗, 0, 1), and E_2 = (0, x_2^∗, p_2^∗). These correspond to one or both competitors becoming extinct. E_0 always exists. E_2 exists, with x_2^∗ = 1 − λ_2 and p_2^∗ the root of (10.28), if 0 < λ_2 < 1, which is contained in our basic assumption (10.30). The existence of E_1 is a bit more delicate. In keeping with the definitions in (10.26), define λ_0 = a_1/(m_1 f(1) − 1). The inequality 0 < λ_0 < 1 corresponds to the survivability of the first population in a chemostat under maximal levels of the inhibitor. Easy computations show that E_1 = (1 − λ_0, 0, 1) will exist if λ_0 > 0, and will have positive coordinates and be asymptotically stable in the x_1–p plane if 0 < λ_0 < 1. If 1 − λ_0 is negative, E_1 is not meaningful, nor is it accessible from the given initial conditions, since the x_2–p plane is an invariant set. The stability of either E_1 or E_2 will depend on comparisons between the subscripted λ's. The local stability of each rest point depends on the eigenvalues of the linearization around those points. The Jacobian matrix for the linearization of (10.25) takes the form

J = [ m_11  m_12  m_13
      m_21  m_22  0
      0     m_32  m_33 ].    (10.31)

At E_0,

J = [ m_1 f(1)/(1 + a_1) − 1    0                   0
      0                         m_2/(1 + a_2) − 1   0
      0                         −δ/(1 + K)          −1 ].

The eigenvalues are the diagonal elements. One eigenvalue is −1, and the corresponding eigenvector lies along the p-axis. This corresponds to the growth of the inhibitor to its limiting value in the absence of a consumer. The set {(0, 0, p) | p > 0} is invariant and is part of the stable manifold of E_0. m_22 = m_2/(1 + a_2) − 1 is positive since λ_2 < 1. Similarly, the remaining diagonal term m_11 is positive if 0 < λ_0 < 1, and negative otherwise. When this eigenvalue is negative, the stable manifold of E_0 is the entire x_1–p plane.

Remark 10.3.1 When λ_0 > 0, no trajectory of (10.25) has E_0 as an omega limit point.

At E_1, m_21 = 0; since m_23 = m_31 = 0, the eigenvalues are just the diagonal elements of J. Thus

μ_1 = − m_1 a_1 (1 − λ_0) f(1)/(a_1 + λ_0)^2,  μ_2 = (m_2 − 1)(λ_0 − λ_2)/(a_2 + λ_0),  μ_3 = −1.

If 0 < λ_0 < λ_2 < 1, then E_1 is asymptotically stable. This reflects the fact that x_1, in the presence of the maximal inhibitor concentration, is still a better competitor than x_2. If λ_0 > λ_2, E_1 is unstable, and, of course, if λ_0 > 1, E_1 does not exist.

Lemma 10.3.2 If λ_0 > λ_2, then any solution of (10.25) satisfies lim inf_{t→∞} x_2(t) > 0.

Table 10.1

       Exists         Stability
E_0    always         1- or 2-dimensional stable manifold
E_1    0 < λ_0 < 1    asymptotically stable if 0 < λ_0 < λ_2
E_2    λ_2 < 1        asymptotically stable if 0 < λ_2 < λ^∗

Proof. Suppose that Lemma 10.3.2 is not true. Then some trajectory Γ has an omega limit point in the x_1–p plane. Moreover, the initial conditions preclude that Γ is on the stable manifold of E_1. Thus, by the Butler–McGehee Lemma (Theorem 10.2.1), the omega limit set of Γ must contain a point of the stable manifold of E_1, and hence the entire trajectory through that point. To remain bounded, such a trajectory must connect to E_0. We have already noted in Remark 10.3.1 that this is not possible.

At E_2, m_12 = m_13 = m_23 = 0, so again the eigenvalues are just the diagonal elements:

μ_1 = m_1 λ_2 f(p_2^∗)/(a_1 + λ_2) − 1,
μ_2 = − m_2 a_2 (1 − λ_2)/(a_2 + λ_2)^2,
μ_3 = −1 − δK(1 − λ_2)/(K + p_2^∗)^2.

Clearly, μ_2 and μ_3 are negative, so E_2 always has a two-dimensional stable manifold. μ_1 < 0 is equivalent to

λ_2 < a_1/(m_1 f(p_2^∗) − 1) ≡ λ^∗.

The local behavior of the rest point set on the boundary is summarized in Table 10.1, where 0 < λ_1 < λ_2 < 1 is assumed.

The more interesting case is that of an interior rest point. As noted above, the competitiveness of the system and a type of Poincaré–Bendixson theorem require its existence for coexistence to be possible. Let E_c = (x_{1c}^∗, x_{2c}^∗, p_c^∗) denote the coordinates of a possible interior rest point. First, it must be the case that

1 − x_{1c}^∗ − x_{2c}^∗ = λ_2,    (10.32)

for this is the only nontrivial zero of the derivative of x_2. Using this, we set the derivative of x_1 equal to zero to find that

m_1 λ_2 f(p)/(a_1 + λ_2) = 1,


or that we need (a_1 + λ_2)/(m_1 λ_2) to be in the range of f(p). If it is, then

p_c^∗ = f^{−1}((a_1 + λ_2)/(m_1 λ_2)).    (10.33)

Since f is monotone, this number is unique. Given p_c^∗, x_{2c}^∗ can be determined from setting p'(t) equal to zero, yielding

1 − p_c^∗ − δ x_{2c}^∗ p_c^∗/(K + p_c^∗) = 0,

or

x_{2c}^∗ = (1 − p_c^∗)(K + p_c^∗)/(δ p_c^∗).    (10.34)

This number is unique since p_c^∗ is unique. If x_{2c}^∗ < 1 − λ_2, then x_{1c}^∗ is uniquely determined from (10.32) as

x_{1c}^∗ = 1 − x_{2c}^∗ − λ_2.    (10.35)

Since 1 − λ_2 = x_2^∗, it follows that if x_{2c}^∗ exists, then x_{2c}^∗ < x_2^∗. This is the biologically expected statement that x_2 will do less well in the coexistent steady state than in the steady state where it is the sole survivor. This is true if and only if

x_{2c}^∗ = (1 − p_c^∗)(K + p_c^∗)/(δ p_c^∗) < (1 − p_2^∗)(K + p_2^∗)/(δ p_2^∗) = x_2^∗,

and hence, in view of the monotonicity of the expression in p, if and only if p_2^∗ < p_c^∗. From (10.23) we have that this is equivalent to

f(p_2^∗) > (a_1 + λ_2)/(m_1 λ_2),    (10.36)

or to the instability of E_2. (See Table 10.1, where the value of λ^∗ has been substituted to obtain (10.36).) Thus we have the following result.

Proposition 10.3.2 If (a_1 + λ_2)/(m_1 λ_2) is in the range of f(p), then a necessary condition for the existence of an interior equilibrium of (10.25) is that E_2 exist and be unstable.

We will see below that the interior equilibrium may exist even if E_1 does not. Before considering the stability of E_c, it remains to investigate whether (10.33) is feasible, i.e., whether (a_1 + λ_2)/(m_1 λ_2) is in the range of f(p). If 0 < λ_0 < λ_2, then x_1 is a better competitor than x_2 even at the maximal level of the inhibitor. A simple consequence of the definition of


ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

λ0 is that f(1) > (a1 + λ2)/m1λ2 in this case, or that there is no value of p∗c, 0 ≤ p∗c ≤ 1, which satisfies (10.33). Hence λ0 ≥ λ2 is a necessary condition for (a1 + λ2)/m1λ2 to be in the range of f(p), 0 ≤ p ≤ 1. It is also sufficient since f(0) = 1. Hence Proposition 10.3.2 can be improved to: λ0 ≥ λ2 > λ∗ is necessary and sufficient for the existence of Ec. There remains the question of the stability of Ec. The matrix J in (10.31) takes the form

J =
[ −(m1a1/(a1+λ2)²) f(p∗c) x∗1c,   −(m1a1/(a1+λ2)²) f(p∗c) x∗1c,   (m1λ2/(a1+λ2)) x∗1c f′(p∗c) ]
[ −(m2a2/(a2+λ2)²) x∗2c,          −(m2a2/(a2+λ2)²) x∗2c,          0                            ]
[ 0,                              −δp∗c/(K+p∗c),                  −1 − δKx∗2c/(K+p∗c)²         ]

By expanding the determinant of J along the last row, we see that it is negative, so the dimension of the stable manifold is one or three. If δ < 1, the Gersgorin theory immediately gives two roots with negative real parts, and hence three such roots, so we easily have that Ec is asymptotically stable if δ < 1. The characteristic roots of J satisfy

µ³ + µ²[1 + δKx∗2c/(K+p∗c)² + a1x∗1c/((a1+λ2)λ2) + a2x∗2c/((a2+λ2)λ2)]
   + µ[1 + δKx∗2c/(K+p∗c)²][a1x∗1c/((a1+λ2)λ2) + a2x∗2c/((a2+λ2)λ2)]         (10.37)
   − (f′(p∗c)/f(p∗c)) · (a2/((a2+λ2)λ2)) · (δp∗c/(K+p∗c)) x∗1c x∗2c = 0.

Since f′(p) < 0, the constant term is positive, so the Routh-Hurwitz criterion states that Ec will be asymptotically stable if and only if

[1 + δKx∗2c/(K+p∗c)² + a1x∗1c/((a1+λ2)λ2) + a2x∗2c/((a2+λ2)λ2)]
   × [1 + δKx∗2c/(K+p∗c)²] × [a1x∗1c/((a1+λ2)λ2) + a2x∗2c/((a2+λ2)λ2)]
   > −(f′(p∗c)/f(p∗c)) · (a2/((a2+λ2)λ2)) · (δp∗c/(K+p∗c)) x∗1c x∗2c.        (10.38)

Dynamics without an interior rest point. If Ec is not to exist, the inequality

λ0 ≥ λ2 > λ∗                                                                 (10.39)

must be violated. Recall that we are assuming that 0 < λ1 < λ2 < 1 and, moreover, that λ0 > λ∗ holds by definition of these quantities and the monotonicity of f(p). There are two possible outcomes depending on the way inequality (10.39) is violated.
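As a numerical cross-check of formulas (10.32)-(10.35) and of the Routh-Hurwitz condition (10.38), the following Python sketch evaluates the candidate rest point Ec and compares the sign test against the roots of the characteristic polynomial. The explicit form of (10.25) used here is an assumption (the system is stated earlier in the chapter), with f(p) = e^(−ηp), and all parameter values are illustrative.

```python
import math
import numpy as np

# Assumed form of (10.25); illustrative parameters, f(p) = exp(-eta*p).
m1, a1, m2, a2 = 2.0, 0.3, 2.0, 0.5
eta, delta, K = 1.0, 4.0, 0.1
lam2 = a2 / (m2 - 1.0)                        # = 0.5, so x2* = 1 - lam2

# (10.33)-(10.35): the candidate interior rest point Ec
p_c = -math.log((a1 + lam2) / (m1 * lam2)) / eta
x2c = (1.0 - p_c) * (K + p_c) / (delta * p_c)
x1c = 1.0 - x2c - lam2

def rhs(x1, x2, p):
    s = 1.0 - x1 - x2
    return (x1 * (m1 * s / (a1 + s) * math.exp(-eta * p) - 1.0),
            x2 * (m2 * s / (a2 + s) - 1.0),
            1.0 - p - delta * x2 * p / (K + p))

res = rhs(x1c, x2c, p_c)                      # should vanish at Ec

# Routh-Hurwitz test for (10.37): mu^3 + A mu^2 + B mu + C = 0
g = delta * K * x2c / (K + p_c) ** 2
A1 = a1 * x1c / ((a1 + lam2) * lam2)
A2 = a2 * x2c / ((a2 + lam2) * lam2)
A, B = 1 + g + A1 + A2, (1 + g) * (A1 + A2)
C = eta * (a2 / ((a2 + lam2) * lam2)) * (delta * p_c / (K + p_c)) * x1c * x2c
stable_rh = A > 0 and C > 0 and A * B > C     # condition (10.38)
stable_eig = max(r.real for r in np.roots([1, A, B, C])) < 0
print(res, stable_rh, stable_eig)
```

The two stability verdicts agree, as the Routh-Hurwitz criterion guarantees.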


Since system (10.25) is competitive, the possible dynamics are limited. Two results are of interest here.

Remark 10.3.2 If Ec does not exist, from Theorem 10.1.6(ii) all omega limit sets lie on the boundary of R^3_+.

Theorem 10.3.2 If 0 < λ1 < λ2 < λ∗, then

lim_{t→∞} x1(t) = 0,  lim_{t→∞} x2(t) = x∗2,  lim_{t→∞} p(t) = p∗2.

Proof. Since λ0 > λ∗, the hypothesis λ2 < λ∗ implies λ2 < λ0. If λ0 > 1, then the only viable equilibrium is E2 (Remark 10.3.1, Table 10.1), and it is locally asymptotically stable. If λ0 < 1, then E1 exists but is unstable and is not an omega limit point of a trajectory of (10.25). In either case, Remark 10.3.2 completes the proof.

Theorem 10.3.3 If 0 < λ0 < λ2 and λ∗ < λ2, then

lim_{t→∞} x1(t) = x∗1,  lim_{t→∞} x2(t) = 0,  lim_{t→∞} p(t) = 1.

Proof. E1 is locally asymptotically stable and E2 is unstable. Remarks 10.3.1 and 10.3.2 complete the proof.

Since λ0 > λ∗ always holds, the above two theorems complete the asymptotic description of the dynamics under the basic hypothesis 0 < λ1 < λ2 < 1 when there is no interior rest point.

Dynamics with an interior rest point. It was shown that a necessary condition for the existence of the interior equilibrium point Ec was that E2 be unstable. Ostensibly, there are three cases depending on E1: (i) E1 exists and is asymptotically stable, (ii) E1 exists and is unstable, (iii) E1 does not exist. Case (i) does not occur, however, since (see Table 10.1) E1 being asymptotically stable requires that λ2 > λ0, and E2 being unstable requires λ2 ≥ λ∗. However, from (10.33),

f(p∗c) = (a1 + λ2)/(m1λ2)


or

λ2 = a1/(m1 f(p∗c) − 1) < a1/(m1 f(1) − 1) = λ0,

since 0 < p∗2 < p∗c < 1 and f is decreasing. This contradicts λ2 > λ0. Hence, we need only consider cases (ii) and (iii).

Theorem 10.3.4 If case (ii) or (iii) holds, then there exists a γ > 0 such that every solution of (10.25) satisfies

lim inf_{t→∞} x1(t) ≥ γ,  lim inf_{t→∞} x2(t) ≥ γ.

Proof. To use Theorem 10.2.2 on uniform persistence, it is first necessary to have a system of equations defined on an open region with boundary. We use, instead of (10.25),

x′1 = x1(g1(x1 + x2)f(p) − 1),
x′2 = x2(g2(x1 + x2) − 1),                                      (10.40)
p′ = 1 − p − δx2p/(K + |p|),

where

gi(u) = mi(1 − u)/(1 + ai − u) for 0 ≤ u ≤ 1, and gi(u) = 0 for u > 1, i = 1, 2.

Since the only initial conditions of interest have p(0) ≥ 0, xi(0) > 0, x1(0) + x2(0) < 1, solutions with these initial conditions are, for t ≥ 0, solutions of (10.40). The open region is the wedge xi > 0, i = 1, 2, whose boundaries are the x1-p and x2-p planes in R³, given by x2 = 0, x1 ≥ 0 and x1 = 0, x2 ≥ 0. The system is dissipative since in the extended region x1 + x2 > 1, (x1(t) + x2(t))′ = −(x1(t) + x2(t)), and in the extended region p < 0, p′(t) ≥ 1. In case (ii) or (iii), no portion of the stable manifolds of E0, E1, and E2 intersects the interior of the wedge. Moreover, since there is only one equilibrium point interior to each of the x1-p and x2-p faces and an unstable rest point on the p axis whose stable manifold is the p axis, there are no connecting orbits to form a cycle. Hence, by Theorem 10.2.2, (10.40) is uniformly persistent, and the theorem is established.

Theorem 10.3.4 guarantees the coexistence of both the x1 and x2 populations. However, it does not give the global asymptotic behavior. The further analysis of the system is complicated by the possibility of multiple


limit cycles. Since this is a common difficulty in general two-dimensional systems, it is not surprising that it presents difficulties in the analysis of three-dimensional competitive systems.

Theorem 10.3.5 Suppose that system (10.25) has no limit cycles. Then Ec is globally asymptotically stable.
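The stable case can also be illustrated numerically. The sketch below integrates the assumed form of (10.25) with f(p) = e^(−ηp) and δ < 1, for which the Gersgorin argument above gives asymptotic stability of Ec; all parameter values are illustrative.

```python
import math

# Assumed form of (10.25); delta < 1, so Ec should be asymptotically stable.
m1, a1, m2, a2 = 3.0, 0.32, 2.0, 0.5
eta, delta, K = 1.0, 0.9, 0.01
lam2 = a2 / (m2 - 1.0)                     # = 0.5

p_c = -math.log((a1 + lam2) / (m1 * lam2)) / eta
x2c = (1.0 - p_c) * (K + p_c) / (delta * p_c)
x1c = 1.0 - lam2 - x2c                     # interior rest point Ec

def rhs(x1, x2, p):
    s = 1.0 - x1 - x2
    return (x1 * (m1 * s / (a1 + s) * math.exp(-eta * p) - 1.0),
            x2 * (m2 * s / (a2 + s) - 1.0),
            1.0 - p - delta * x2 * p / (K + p))

def rk4(state, h, n):
    for _ in range(n):
        k1 = rhs(*state)
        k2 = rhs(*(state[i] + h/2*k1[i] for i in range(3)))
        k3 = rhs(*(state[i] + h/2*k2[i] for i in range(3)))
        k4 = rhs(*(state[i] + h*k3[i] for i in range(3)))
        state = tuple(state[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                      for i in range(3))
    return state

start = (1.2 * x1c, 0.9 * x2c, 1.05 * p_c)
end = rk4(start, 0.02, 15000)              # integrate to t = 300
d0 = max(abs(start[0] - x1c), abs(start[1] - x2c), abs(start[2] - p_c))
d1 = max(abs(end[0] - x1c), abs(end[1] - x2c), abs(end[2] - p_c))
print(d0, d1)   # the perturbed trajectory approaches Ec
```

The final distance to Ec is far smaller than the initial perturbation, consistent with local asymptotic stability for δ < 1.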

Fig. 10.3  Time courses of the concentrations x1, x2, and p.

Proof. In view of Theorem 10.3.4, the omega limit set of any trajectory cannot be on the boundary x1 = 0 or x2 = 0. Away from the boundary, the system is irreducible. Since there are no limit cycles, all trajectories must tend to Ec by Theorem 10.1.6(i). Conjecture In case (ii), system (10.25) has no limit cycles. We note that since we are assuming hyperbolicity, there must be at least two limit cycles for the conjecture to fail. Because of the assumed stability of Ec , there must be an unstable limit cycle with Ec in its “interior”. However, since the system is dissipative, there must be an asymptotically stable limit cycle as well.


Theorem 10.3.6 Let f(p) = e^(−ηp) in (10.25) and let case (iii) hold. Then for η sufficiently large, there exist δ0 > 0 and K0 such that for δ > δ0 and K < K0, (10.25) has an attracting limit cycle. Figure 10.3 shows the time course for an example of this type of behavior. Figure 10.4 shows the limit cycle plotted in phase space.

Proof. The theorem follows if Ec is unstable. To show this, we must show that (10.38) is violated. In the case under consideration, −f′(p)/f(p) = η. Define c by c = ln(m1λ2/(a1 + λ2)), and note that c = ηp∗c. It follows that

1 − p∗c = 1 − c/η = (η − c)/η.

Note that when η is fixed, p∗c is fixed. From the definition of x∗2c (see (10.34)), it follows that

δKx∗2c/(K + p∗c)² = (K/(K + p∗c)) · (1 − p∗c)/p∗c = (K/(K + p∗c))(η/c − 1)   (10.41)

for any choice of η and the corresponding p∗c. Fix η satisfying

η > c + 2[3 + a1(1 − λ2)/((a1 + λ2)λ2)][a1(1 − λ2)/((a1 + λ2)λ2) + 1] · 2(a2 + λ2)λ2/((1 − λ2)a2)

and (m1/(a1 + 1))e^(−η) < 1. Let K0 be so small that the expression in (10.41) is less than 1. To show that (10.38) is violated, we estimate both sides. Note that, since x∗1c = 1 − λ2 − x∗2c, lim_{δ→∞} x∗2c = 0 and lim_{δ→∞} x∗1c = 1 − λ2. Hence, for δ sufficiently large, a2x∗2c/((a2 + λ2)λ2) < 1 and 1 − λ2 > x∗1c > (1 − λ2)/2. The right-hand side of (10.38) is bounded below by

a2(1 − λ2)(η − c)/(2(a2 + λ2)λ2).

The left-hand side of (10.38) has three factors, which we denote by F1, F2, F3, respectively. By the discussion above, for δ sufficiently large,

F1 < 3 + a1(1 − λ2)/((a1 + λ2)λ2),   F2 < 2,   F3 < a1(1 − λ2)/((a1 + λ2)λ2) + 1.

It follows that

F1F2F3 < (η − c)a2(1 − λ2)/(2(a2 + λ2)λ2),

which contradicts (10.38). Hence Ec is unstable (with a two-dimensional unstable manifold). Choose a trajectory not on the stable manifold of Ec. By Theorem 10.1.6(i), its omega limit set must be a periodic orbit. By hyperbolicity, there must be an attracting orbit.
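The algebraic identity (10.41) used in the proof can be checked directly; the sketch below evaluates both sides for illustrative parameter values, using c = ηp∗c and the definition (10.34) of x∗2c.

```python
import math

# Illustrative parameters; f(p) = exp(-eta*p) as in Theorem 10.3.6.
m1, a1, lam2 = 3.0, 0.4, 0.5
eta, delta, K = 8.0, 50.0, 0.05

c = math.log(m1 * lam2 / (a1 + lam2))            # c = ln(m1*lam2/(a1+lam2))
p_c = c / eta                                    # since c = eta * p_c
x2c = (1.0 - p_c) * (K + p_c) / (delta * p_c)    # (10.34)

lhs = delta * K * x2c / (K + p_c) ** 2
rhs = (K / (K + p_c)) * (eta / c - 1.0)          # right side of (10.41)
print(lhs, rhs)   # the two sides agree
```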


Fig. 10.4

10.4

Two Species Competition Models

In this section, we shall introduce an abstract model for two species competition. The abstract model has wide applications to many two species competition models in the form of ODEs and PDEs. The basic setup is as follows. For i = 1, 2, let Xi be ordered Banach spaces with positive cones Xi+ such that Int Xi+ ≠ ∅. We use the same symbol for the partial orders generated by the cones Xi+. If xi, x̄i ∈ Xi, then we write xi ≤ x̄i if x̄i − xi ∈ Xi+, xi < x̄i if xi ≤ x̄i and xi ≠ x̄i, and xi ≪ x̄i if x̄i − xi ∈ Int Xi+. If xi, yi ∈ Xi satisfy xi < yi, then the order interval [xi, yi] is defined by [xi, yi] = {u ∈ Xi : xi ≤ u ≤ yi}. If xi ≪ yi, then [[xi, yi]] = {u ∈ Xi : xi ≪ u ≪ yi} is called an open order interval. Let X = X1 × X2, X+ = X1+ × X2+, and K = X1+ × (−X2+). X+ is a cone in X with nonempty interior given by Int X+ = Int X1+ × Int X2+. It generates the order relations ≤, <, ≪, and K generates the competitive order relations ≤K, <K, ≪K.

As a finite-dimensional example, consider two competing species x = (x1, x2) and y = (y1, y2) living on two patches, with migration rate δ > 0 for species y. Let Km = {z = (x, y) ∈ R⁴ : x ≥ 0, y ≤ 0}, and write z = (x, y) ≤m z̄ = (x̄, ȳ) if x ≤ x̄ and y ≥ ȳ. Then the flow ϕt(z) is monotone with respect to the order relation ≤m and is strongly monotone in the interior of R^4_+. Consider the single population model for species x with two patches:

x′1 = (x2 − x1) + r1x1(1 − x1/k1),
x′2 = (x1 − x2) + r2x2(1 − x2/k2).                              (10.43)

It is easy to show that there is a unique non-zero equilibrium x̂, x̂ ≫ 0, and it attracts all nontrivial solutions of (10.43). Similarly, for the single population model for species y:

y′1 = δ(y2 − y1) + s1y1(1 − y1/L1),
y′2 = δ(y1 − y2) + s2y2(1 − y2/L2),                             (10.44)

there is a unique non-zero equilibrium ŷ, ŷ ≫ 0, and it attracts all nontrivial solutions of (10.44). Let Ex = (x̂, 0), Ey = (0, ŷ). Since ϕt((0, y)) → Ey and ϕt((x, 0)) → Ex as t → ∞, it follows that all positive orbits are attracted to the set R ≡ [0, x̂] × [0, ŷ] = {z : Ey ≤m z ≤m Ex}. Then we can apply Theorem 10.4.1, Theorem 10.4.2 and Theorem 10.4.3 to obtain the results of competitive exclusion, stable coexistence and bistability, depending on the stability properties of Ex, Ey and Ec (Ec is a positive equilibrium).
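A minimal numerical sketch of the single-species patch model (10.43), with the illustrative choice r1 = r2 = k1 = k2 = 1 (for which x̂ = (1, 1) is the positive equilibrium):

```python
# RK4 integration of (10.43) with r1 = r2 = k1 = k2 = 1; every nontrivial
# nonnegative initial condition should be attracted to x_hat = (1, 1).
def rhs(x1, x2):
    return ((x2 - x1) + x1 * (1 - x1),
            (x1 - x2) + x2 * (1 - x2))

def rk4(x, h, n):
    for _ in range(n):
        k1 = rhs(*x)
        k2 = rhs(x[0] + h/2*k1[0], x[1] + h/2*k1[1])
        k3 = rhs(x[0] + h/2*k2[0], x[1] + h/2*k2[1])
        k4 = rhs(x[0] + h*k3[0], x[1] + h*k3[1])
        x = tuple(x[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2))
    return x

x = rk4((0.1, 0.3), 0.01, 5000)   # integrate to t = 50
print(x)  # close to (1, 1)
```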


10.5


Exercises

Exercise 10.1 Discuss the uniform persistence for the system of two competing predators for one prey [HHW]:

x′ = rx(1 − x/k) − m1x/(a1 + x) y1 − m2x/(a2 + x) y2,
y′1 = (m1x/(a1 + x) − d1) y1,
y′2 = (m2x/(a2 + x) − d2) y2,
x(0) > 0,  y1(0) > 0,  y2(0) > 0.

Exercise 10.2 Discuss the following system of two species N1, N2 competing for two complementary resources S and R in a chemostat [HCH]:

S′ = (S⁽⁰⁾ − S)D − (1/y1s) f1(S, R)N1 − (1/y2s) f2(S, R)N2,
R′ = (R⁽⁰⁾ − R)D − (1/y1r) f1(S, R)N1 − (1/y2r) f2(S, R)N2,
N′1 = (f1(S, R) − D)N1,
N′2 = (f2(S, R) − D)N2,
S(0) ≥ 0,  R(0) ≥ 0,  N1(0) > 0,  N2(0) > 0,

where

f1(S, R) = min( m1sS/(a1s + S), m1rR/(a1r + R) ),
f2(S, R) = min( m2sS/(a2s + S), m2rR/(a2r + R) ).

(1) Show that

S + (1/y1s)N1 + (1/y2s)N2 = S⁽⁰⁾ + O(e^(−Dt)) as t → ∞,
R + (1/y1r)N1 + (1/y2r)N2 = R⁽⁰⁾ + O(e^(−Dt)) as t → ∞.

(2) Analyze the limiting system

N′1 = (f1(S, R) − D)N1,
N′2 = (f2(S, R) − D)N2,

where

S = S⁽⁰⁾ − (1/y1s)N1 − (1/y2s)N2 ≥ 0,
R = R⁽⁰⁾ − (1/y1r)N1 − (1/y2r)N2 ≥ 0.


Exercise 10.3 Consider the following simple gradostat model [SW] with two vessels:

S′1 = 1 − 2S1 + S2 − fu(S1)u1 − fv(S1)v1,
S′2 = S1 − 2S2 − fu(S2)u2 − fv(S2)v2,
u′1 = −2u1 + u2 + fu(S1)u1,
u′2 = u1 − 2u2 + fu(S2)u2,
v′1 = −2v1 + v2 + fv(S1)v1,
v′2 = v1 − 2v2 + fv(S2)v2,
Si(0) ≥ 0,  ui(0) ≥ 0,  vi(0) ≥ 0,  i = 1, 2,

where

fu(S) = m1S/(a1 + S),
fv(S) = m2S/(a2 + S).

(1) Show that

Σ1(t) = 2/3 − S1(t) − u1(t) − v1(t) → 0 as t → ∞,
Σ2(t) = 1/3 − S2(t) − u2(t) − v2(t) → 0 as t → ∞.

(2) Analyze the single species system

u′1 = −2u1 + u2 + fu(2/3 − u1)u1,
u′2 = u1 − 2u2 + fu(1/3 − u2)u2,
ui(0) ≥ 0,  i = 1, 2.

(3) Analyze the two species competition system

u′1 = −2u1 + u2 + fu(2/3 − u1 − v1)u1,
u′2 = u1 − 2u2 + fu(1/3 − u2 − v2)u2,
v′1 = −2v1 + v2 + fv(2/3 − u1 − v1)v1,
v′2 = v1 − 2v2 + fv(1/3 − u2 − v2)v2.


Exercise 10.4 [SW] The following is a mathematical model for two species competing for a single nutrient with internal storage in a chemostat. Let x1, x2 be two populations competing for a single nutrient of concentration S in the chemostat. The average amount of stored nutrient per individual of population x1 is denoted by Q1, and for population x2 by Q2. Then the model takes the form:

x′1 = x1(µ1(Q1) − D),
Q′1 = ρ1(S, Q1) − µ1(Q1)Q1,
x′2 = x2(µ2(Q2) − D),
Q′2 = ρ2(S, Q2) − µ2(Q2)Q2,
S′ = (S⁽⁰⁾ − S)D − x1ρ1(S, Q1) − x2ρ2(S, Q2).

The functions µi(Qi) and ρi(S, Qi) satisfy µi(Qi) ≥ 0, µ′i(Qi) > 0, and µi(ρi) = 0 for Qi ≥ ρi ≥ 0, where ρi is the minimal cell quota; ρi(0, Qi) = 0, ∂ρi/∂S > 0, ∂ρi/∂Qi ≤ 0.

(1) Analyze the single population system

x′ = x(µ(Q) − D),
Q′ = ρ(S, Q) − µ(Q)Q,
S′ = (S⁽⁰⁾ − S)D − xρ(S, Q).

(2) Prove the conservation property

S + x1Q1 + x2Q2 = S⁽⁰⁾ + O(e^(−Dt)) as t → ∞.

(3) Analyze the limiting system

x′1 = x1(µ1(Q1) − D),
Q′1 = ρ1(S, Q1) − µ1(Q1)Q1,
x′2 = x2(µ2(Q2) − D),
Q′2 = ρ2(S, Q2) − µ2(Q2)Q2,

where S = S⁽⁰⁾ − x1Q1 − x2Q2 ≥ 0, by converting to the equivalent system in the variables x1, U1 = x1Q1, x2, U2 = x2Q2:


x′1 = x1(µ1(U1/x1) − D),
U′1 = x1ρ1(S⁽⁰⁾ − U1 − U2, U1/x1) − DU1,
x′2 = x2(µ2(U2/x2) − D),                                        (10.45)
U′2 = x2ρ2(S⁽⁰⁾ − U1 − U2, U2/x2) − DU2,

with appropriate domain ∆ = {(x1, U1, x2, U2) ∈ R^4_+ : xi > 0, U1 + U2 ≤ 1, i = 1, 2}. Define the partial order ≤K by (x1, U1, x2, U2) ≤K (x̄1, Ū1, x̄2, Ū2) if and only if x1 ≤ x̄1, U1 ≤ Ū1, x2 ≥ x̄2, U2 ≥ Ū2. Prove that the system (10.45) preserves ≤K and apply Theorems 10.4.1, 10.4.2, 10.4.3 to obtain the global asymptotic behavior of the solutions of (10.45).

Chapter 11

INTRODUCTION TO HAMILTONIAN SYSTEMS

11.1

Definitions and Classic Examples

In this chapter we consider systems of the form

ẋ = ∂H/∂y (t, x, y),
ẏ = −∂H/∂x (t, x, y).                                          (H)

Here x = (x1, ..., xn) ∈ D ⊂ Rⁿ, D is open, y = (y1, ..., yn) ∈ Rⁿ, and H ∈ C²(R × D × Rⁿ). Systems of the form (H) are called Hamiltonian systems, and the function H is called the Hamiltonian. We say the system has n degrees of freedom if it is autonomous (i.e. H = H(x, y)), and n + 1/2 degrees of freedom otherwise. We call D × Rⁿ the phase space of (H). The flow induced by (H) is called a Hamiltonian flow, a terminology frequently abused even when solutions of (H) do not exist for all time. Let ∇ be the gradient with respect to z = (x, y), and let I = In be the n × n identity matrix. Then (H) can be written

ż = J∇H(t, z),   J = [ 0  I ; −I  0 ].                         (H')

Sometimes we denote ∇ by ∇z to emphasize with respect to which variables the gradient is taken. Alternatively, we may also write ∇H as (∂H/∂z)ᵀ, regarding ∂H/∂z as a row vector. Hamiltonian systems are of special importance in physics because most models in classical mechanics can be formulated as Hamiltonian systems, and the Schrödinger equation in quantum mechanics can be formally obtained from the Hamiltonian formulation by replacing the variable y with certain differential operators.


General theories for Hamiltonian systems are formulated in terms of flows of vector fields on symplectic manifolds (i.e. manifolds equipped with a closed nondegenerate 2-form, with the skew-symmetric bilinear form induced by J in (H') as a special case). As a brief introduction, we will present some important features and methods for Hamiltonian systems on Euclidean spaces, and illustrate the theory through some classic examples. Proofs presented herein avoid the usage of differential forms. We refer to [Arnold; HLW; MHO; Moser] for further topics and examples.

Example 11.1.1 Any second order equation of the form

ẍ + g(x) = 0                                                   (11.1)

is a Hamiltonian system. Here g is assumed to be continuous on an open interval I ⊂ R. To see this, pick x0 ∈ I, let y = ẋ, G(x) = ∫ from x0 to x of g(t) dt, and define

H(x, y) = y²/2 + G(x),  (x, y) ∈ I × R.                        (11.2)

It is easy to see that the Hamiltonian system with this Hamiltonian H is equivalent to (11.1). This is a Hamiltonian system with one degree of freedom. We have encountered many examples of this type; see Examples 1.1.6, 1.1.7, 1.1.9, 5.2.1, 5.2.2, 5.3.1. In such examples, x and y represent position and momentum (in rectangular or polar coordinates), and H represents the total energy. With the presence of an external forcing term f(t),

ẍ + g(x) = f(t),                                               (11.3)

the equation becomes a non-autonomous Hamiltonian system with Hamiltonian

H(t, x, y) = y²/2 + G(x) − xf(t),  (t, x, y) ∈ R × I × R.

It is a Hamiltonian system with 3/2 degrees of freedom.

Example 11.1.2 Consider a mechanical system consisting of identical masses m1 = m2 = m and three springs, all aligned on the real axis. One spring connects the two masses, and the other two springs connect the masses to fixed positions. Let κ12 be the spring constant of the spring linking m1 and m2, let κ > 0 be the spring constant of the other two, and let xi be the displacement from the equilibrium position of mi, i = 1, 2. See Fig. 11.1. Equations of motion are

m ẍ1 = −κx1 + κ12(x2 − x1),
m ẍ2 = −κx2 − κ12(x2 − x1).


This system is called linearly coupled oscillation. Let yi = mẋi be the linear momentum of mi. The system can be expressed as a Hamiltonian system with two degrees of freedom:

ẋ1 = y1/m,
ẋ2 = y2/m,                                                     (CO)
ẏ1 = −(κ + κ12)x1 + κ12x2,
ẏ2 = −(κ + κ12)x2 + κ12x1.

The Hamiltonian H is the sum of kinetic energy K and potential energy V:

K(y1, y2) = (y1² + y2²)/(2m),   V(x1, x2) = (κ/2)(x1² + x2²) + (κ12/2)(x1 − x2)².

System (CO) can be used to describe small oscillations of various mechanical systems, such as vibrations of molecules or connected pendulums.

Fig. 11.1  Linearly coupled oscillation.

Example 11.1.3 Consider the Hénon-Heiles Hamiltonian given by

H(x1, x2, y1, y2) = (x1² + x2² + y1² + y2²)/2 + µx1²x2 + (λ/3)x2³.

This corresponds to a Hamiltonian system with two degrees of freedom:

ẋ1 = y1,
ẋ2 = y2,                                                       (HH)
ẏ1 = −x1 − 2µx1x2,
ẏ2 = −x2 − µx1² − λx2².

Here µ and λ are constants. The system was studied by Hénon and Heiles in 1964 to model the nonlinear motion of a star in a planar galactic disc. The Hamiltonian is the sum of the kinetic energy K and potential energy V, where

K(y1, y2) = (y1² + y2²)/2,   V(x1, x2) = (x1² + x2²)/2 + µx1²x2 + (λ/3)x2³.
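Since (HH) is autonomous, H is a first integral along solutions. The following RK4 sketch, with the illustrative choice µ = λ = 1 and a moderate-energy initial condition, checks energy conservation numerically.

```python
# RK4 energy check for (HH) with mu = lambda = 1 (illustrative values).
def F(s):
    x1, x2, y1, y2 = s
    return (y1, y2, -x1 - 2*x1*x2, -x2 - x1**2 - x2**2)

def H(s):
    x1, x2, y1, y2 = s
    return 0.5*(x1**2 + x2**2 + y1**2 + y2**2) + x1**2*x2 + x2**3/3.0

def rk4_step(s, h):
    k1 = F(s)
    k2 = F(tuple(s[i] + h/2*k1[i] for i in range(4)))
    k3 = F(tuple(s[i] + h/2*k2[i] for i in range(4)))
    k4 = F(tuple(s[i] + h*k3[i] for i in range(4)))
    return tuple(s[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4))

s = (0.1, 0.0, 0.35, 0.1)
E0 = H(s)
for _ in range(10000):          # integrate to t = 10 with h = 0.001
    s = rk4_step(s, 0.001)
print(abs(H(s) - E0))           # O(h^4) drift, essentially zero
```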


Example 11.1.4 The N-body problem (Example 1.1.11) concerns the motion of N mass points m1, ..., mN moving in R^d (d ∈ {1, 2, 3}) under the influence of gravitational attraction. Let xk be the position of mass mk. According to Newton's law of universal gravitation, the equations of motion are

mk ẍk = Σ_{i≠k} mi mk (xi − xk)/|xi − xk|³,   k = 1, ..., N.   (11.4)

Let K be the kinetic energy and V be the Newtonian potential defined by

K(ẋ) = (1/2) Σ_{k=1}^{N} mk|ẋk|²,   V(x) = −Σ_{1≤i<j≤N} mi mj/|xi − xj|.

In particular, the two-body problem reduces to the Kepler problem

q̈ = −µ q/|q|³,                                                 (11.14)

where µ > 0 is a constant, and q ∈ R³ is the vector from one mass to the other. This formula fixes the mass center at the origin and has taken conservation of linear momentum into consideration. Kepler's first law of planetary motions states that the trajectory of any solution q of (11.14) is a conic section. We will show it by identifications of first integrals.


Equation (11.14) can be expressed as a Hamiltonian system with 3 degrees of freedom. By setting p = q̇, the Hamiltonian H is

H(q, p) = |p|²/2 − µ/|q|.

It is straightforward to verify the conservation of the angular momentum L = q × p:

d/dt L = q̇ × p + q × ṗ = (∂H/∂p) × p − q × (∂H/∂q) = p × p + µ q × q/|q|³ = 0.

This provides 3 first integrals since L is a vector in R³. Together with the energy integral H, we have a total of 4 first integrals. Another set of first integrals is given by the eccentricity vector:

A = (1/µ) p × L − q/|q|.                                       (11.15)

The eccentricity vector is more commonly, but improperly, known as the Laplace-Runge-Lenz vector. Its conservation can also be verified directly, but here we appeal to Theorem 11.3.1. Recall the notation ∇x for the gradient with respect to x and Dx for the total derivative with respect to x. Gradient vectors are written as column vectors, so the total derivative Dxf of f : Rⁿ → Rᵐ, as an m × n matrix, has ∇xf1ᵀ, ..., ∇xfmᵀ as its row vectors. To show that A = (A1, A2, A3) is conserved, by Theorem 11.3.1(a) and (11.12) (Remark 11.3.1), all we need is to show {A, H} = DA J∇H = 0. Here DA = D_{q,p}A is a 3 × 6 matrix whose rows are ∇Aiᵀ, i ∈ {1, 2, 3}. Three elementary identities from vector analysis, whose proofs are easy exercises left to readers (Exercise 11.7), will be convenient (q, p, u, v are in R³):

Dq(q/|q|) p = (1/|q|)(p − (q·p/|q|²) q),                       (i)

q × (q × p) = (q·p) q − |q|² p,                                (ii)

Dp(p × u) v = v × u.                                           (iii)
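The three identities can also be spot-checked numerically at arbitrary vectors: (i) is verified with a central finite difference of q ↦ q/|q| in direction p, while (ii) and (iii) are exact vector-algebra identities (in (iii), p ↦ p × u is linear, so a difference quotient is exact).

```python
import numpy as np

q = np.array([1.0, -2.0, 0.5])
p = np.array([0.3, 0.7, -1.1])
u = np.array([-0.4, 0.2, 0.9])
v = np.array([1.2, 0.1, -0.6])

# (i): D_q(q/|q|) p = (p - (q.p/|q|^2) q)/|q|, via central finite difference
h = 1e-6
fd = ((q + h*p)/np.linalg.norm(q + h*p) - (q - h*p)/np.linalg.norm(q - h*p)) / (2*h)
rhs_i = (p - (q @ p)/(q @ q) * q) / np.linalg.norm(q)

# (ii): q x (q x p) = (q.p) q - |q|^2 p
lhs_ii = np.cross(q, np.cross(q, p))
rhs_ii = (q @ p) * q - (q @ q) * p

# (iii): D_p(p x u) v = v x u (exact, since p -> p x u is linear)
lhs_iii = np.cross(p + v, u) - np.cross(p, u)
rhs_iii = np.cross(v, u)

print(np.abs(fd - rhs_i).max(), np.abs(lhs_ii - rhs_ii).max(), np.abs(lhs_iii - rhs_iii).max())
```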


Knowing that L is a constant vector, the eccentricity vector A is also conserved because

DA J∇H = [ −Dq(q/|q|)   (1/µ)Dp(p × L) ] (p, ∇q(µ/|q|))
       = −Dq(q/|q|) p − Dp(p × L) (q/|q|³)
       = −(1/|q|)(p − (q·p/|q|²) q) − (1/|q|³) q × (q × p) = 0,

using (i), (iii), and then (ii). Clearly A is perpendicular to L, thus q, p, and A remain coplanar for all time. Let θ be the angle between A and q. Then

|A||q| cos θ = A · q = (1/µ)(p × L) · q − |q| = |L|²/µ − |q|.

Therefore,

|q| = (|L|²/µ)/(1 + |A| cos θ).

This implies Kepler's first law of planetary motions: the trajectory of q is a conic section with a focus at the origin, eccentricity |A|, and semi-latus rectum |L|²/µ. It also justifies the name "eccentricity vector" for A. The proof of Kepler's first law via conservation of the eccentricity vector is due to Hermann and Bernoulli in the 1710s, long before the eras of Laplace, Runge, and Lenz.

11.4

Symplectic Transformations

We have seen the close relation between linear Hamiltonian systems and symplectic matrices. In this section, we explore similar connections for general Hamiltonian systems. Definition 11.4.1 Given an open set U ⊂ R2n , we say F ∈ C 1 (U, R2n ) is symplectic (or canonical) if DF (z) ∈ Sp(2n, R) for any z ∈ U . This definition includes symplectic matrices as special cases of symplectic transformations. By the inverse function theorem, a symplectic transformation F is locally a C 1 diffeomorphism and its local inverse is also symplectic. Clearly compositions of symplectic transformations are symplectic.


When dealing with time-dependent functions F(t, z), being "symplectic" means symplectic as a function of z. In this case, we still use the notation DF = DzF for the total derivative of F with respect to the space variable z. We shall follow this conventional notation and denote partial differentiation with respect to the first (time) variable by ∂/∂t, and the total derivative with respect to time by a dot " ˙ " or d/dt. It is also convenient to use ∂F/∂z for the total derivative of F with respect to z. To clarify such notations, consider for instance a flow φ(t, τ, ξ) induced by ż = f(t, z). Then

(∂/∂ξ) F(t, φ(t, τ, ξ)) = DF(t, φ(t, τ, ξ)) (∂/∂ξ)φ(t, τ, ξ),
(d/dt) F(t, φ(t, τ, ξ)) = (∂/∂t)F(t, φ(t, τ, ξ)) + DF(t, φ(t, τ, ξ)) f(t, φ(t, τ, ξ)).

Assume the time-dependent symplectic transformation F(t, z) is of class C¹ with respect to t. By the inverse function theorem again, the mapping (t, z) ↦ (t, F(t, z)) is locally a C¹ diffeomorphism. Let ζ = F(t, z); then locally z is a function of (t, ζ), so it makes sense to write ∂ζ/∂z and ∂z/∂ζ, which are inverses of each other. The theorem below states that such symplectic changes of variables send Hamiltonian systems to locally Hamiltonian systems, i.e. systems which can be locally written as Hamiltonian systems. It is not hard to find locally Hamiltonian systems which are not Hamiltonian systems (Exercise 11.6).

Theorem 11.4.1 Suppose ζ := F(t, z) is symplectic for z in an open set U ⊂ R^{2n} and is C¹ with respect to t. Then the Hamiltonian system (H') in terms of the new variable ζ is a locally Hamiltonian system.

Proof. Since ζ = F is symplectic, we have the identity

(∂ζ/∂z)ᵀ J (∂ζ/∂z) = J.

Differentiating this identity with respect to t, we find

(∂²ζ/∂t∂z)ᵀ J (∂ζ/∂z) + (∂ζ/∂z)ᵀ J (∂²ζ/∂t∂z) = 0.


Let G = −J ∂ζ/∂t. Then

DζG(t, z(t, ζ)) = −J (∂²ζ/∂t∂z)(∂z/∂ζ)
               = (∂z/∂ζ)ᵀ [(∂²ζ/∂t∂z)ᵀ J (∂ζ/∂z)] (∂z/∂ζ)
               = −(∂z/∂ζ)ᵀ (∂²ζ/∂t∂z)ᵀ Jᵀ
               = (DζG(t, z(t, ζ)))ᵀ,

where the second equality uses the differentiated identity above together with (∂ζ/∂z)(∂z/∂ζ) = I. Therefore, DζG is symmetric and, by Theorem 5.4.2, for each ζ0 in its domain of the space variable, there is a neighborhood of ζ0 and a function K(t, ζ), with ζ in this neighborhood, such that G as a function of (t, ζ) is equal to ∇ζK(t, ζ). Let z be a solution of the Hamiltonian system (H'). Then

ζ̇ = (∂F/∂t)(t, z) + DF(t, z) J (∂H/∂z)(t, z)ᵀ
  = ∂ζ/∂t + (∂ζ/∂z) J (∂ζ/∂z)ᵀ ((∂/∂ζ) H(t, z(t, ζ)))ᵀ
  = JG + J ((∂/∂ζ) H(t, z(t, ζ)))ᵀ
  = J∇ζK(t, ζ) + J∇ζH(t, z(t, ζ)).

Therefore, in terms of ζ, the system is a locally Hamiltonian system with Hamiltonian K(t, ζ) + H(t, z(t, ζ)).

Recall the Poisson bracket defined in (11.12). To emphasize the space variable in use, we add the space variable to the subscript of {·, ·}. Then

{F, G}ζ = ∇ζFᵀ J ∇ζG = ∇zFᵀ (∂z/∂ζ) J (∂z/∂ζ)ᵀ ∇zG = ∇zFᵀ J ∇zG = {F, G}z.

This simple observation is summarized in the theorem below.

Theorem 11.4.2 Under the assumptions of Theorem 11.4.1, the Poisson bracket is preserved by the symplectic change of variable ζ = F(t, z); i.e. {F, G}ζ = {F, G}z.


Coordinate functions obtained from symplectic changes of variables are called symplectic coordinates. By the above theorem and the Poisson theorem, first integrals F, G, {F, G} of the Hamiltonian system (H) are still first integrals of the locally Hamiltonian system in terms of any other symplectic coordinates. The next theorem can be regarded as a nonlinear and local version of Theorem 11.2.6. The idea of the proof has appeared in the proofs of Theorem 11.2.6 and Theorem 11.4.1.

Theorem 11.4.3 Given an open set D ⊂ R^{2n} and f ∈ C¹(R × D, R^{2n}), consider

ż = f(t, z),  (t, z) ∈ R × D.

Let ϕᵗ be the flow induced by the system. Then the system is a locally Hamiltonian system if and only if ϕᵗ is symplectic for sufficiently small |t|.

Proof. Assume the system is a locally Hamiltonian system. For any z0 ∈ D, there exist a neighborhood U of z0, a neighborhood (−τ, τ) of 0, and a scalar function H on (−τ, τ) × U such that f = J∇zH on (−τ, τ) × U. Moreover, H is of class C² with respect to z, so that D²H is symmetric. Fix ξ ∈ U and let Z(t) = (∂/∂ξ)ϕᵗ(ξ); then Z(0) = I_{2n} and Z solves the variational equation

Ż = J D²H(t, ϕᵗ(ξ)) Z.

This is valid for |t| sufficiently small and ξ sufficiently close to z0. Therefore,

(d/dt)(ZᵀJZ) = ŻᵀJZ + ZᵀJŻ = Zᵀ D²H(t, ϕᵗ(ξ)) Jᵀ J Z + Zᵀ J J D²H(t, ϕᵗ(ξ)) Z = 0.

Again, we have used Jᵀ = J⁻¹ = −J. Since ZᵀJZ = J at t = 0, we conclude that the identity ZᵀJZ = J holds for |t| sufficiently small. Thus ϕᵗ is symplectic for small |t|.

To prove the converse, assume ϕᵗ is symplectic for t ∈ (−τ, τ). Let Z(t) = (∂/∂ξ)ϕᵗ(ξ) be as above. Then Z(t) ∈ Sp(2n, R) for small |t|, and it solves the variational equation

Ż = Df(t, ϕᵗ(ξ)) Z.

By differentiating the identity ZᵀJZ = J


with respect to t, we find ŻᵀJZ + ZᵀJŻ = 0, and thus

Zᵀ Df(t, ϕᵗ(ξ))ᵀ J Z + Zᵀ J Df(t, ϕᵗ(ξ)) Z = 0.

This implies J Df(t, ϕᵗ(ξ)) is symmetric. Since ξ ∈ D is arbitrary, we conclude that J Df(t, z) is symmetric for arbitrary z ∈ D and small |t|. By Theorem 5.4.2, for any z0 ∈ D, there is a neighborhood of z0 and a function H(t, z), with z in this neighborhood, such that −Jf(t, z) = ∇H(t, z). This shows that the system ż = f(t, z) = J∇H(t, z) is a locally Hamiltonian system.

Example 11.4.1 Consider the harmonic oscillator in Example 5.2.1 and Example 11.3.1 with Hamiltonian H(x, y) = kx²/2 + y²/(2m), k, m > 0:

ẋ = y/m,
ẏ = −kx.

Define Φ : R² \ {0} → R₊ × R/2πZ by Φ(x, y) = (I, φ), where

I = (1/2)(√(km) x² + y²/√(km)),   φ = tan⁻¹(y/(√(km) x)).

This transformation is symplectic because

∂(I, φ)/∂(x, y) = [ √(km) x,                       y/√(km)
                    −√(k/m) y/(kx² + y²/m),        √(k/m) x/(kx² + y²/m) ] ∈ SL(2, R) = Sp(2, R).

According to Theorem 11.4.1, the system in terms of the new variables (I, φ) is a locally Hamiltonian system. Let H̃ be the Hamiltonian H in terms of I, φ; then

H̃ = H̃(I) = √(k/m) I

depends only on I. Thus the original system becomes

İ = ∂H̃/∂φ = 0,
φ̇ = −∂H̃/∂I = −√(k/m).

Therefore solutions are I = constant and φ(t) = φ(0) − √(k/m) t.
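Two quick numerical checks for this example, with the illustrative choice k = m = 1: the fundamental matrix of the flow is symplectic (as Theorem 11.4.3 predicts), and the action I is constant along solutions.

```python
import math

k = m = 1.0
w = math.sqrt(k / m)

def flow(x0, y0, t):
    # closed-form solution of x' = y/m, y' = -k x
    x = x0*math.cos(w*t) + y0/(m*w)*math.sin(w*t)
    y = -m*w*x0*math.sin(w*t) + y0*math.cos(w*t)
    return x, y

t = 0.7
Z = [[math.cos(w*t), math.sin(w*t)/(m*w)],
     [-m*w*math.sin(w*t), math.cos(w*t)]]        # D(flow) with respect to (x0, y0)
detZ = Z[0][0]*Z[1][1] - Z[0][1]*Z[1][0]         # for 2x2: symplectic <=> det = 1

I = lambda x, y: 0.5*(math.sqrt(k*m)*x*x + y*y/math.sqrt(k*m))
x0, y0 = 0.3, -1.2
I0 = I(x0, y0)
I1 = I(*flow(x0, y0, t))
print(detZ, I0, I1)   # det = 1 and I is conserved
```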


The symplectic change of variables (x, y) → (I, φ) is the composition of a linear symplectic transformation Φ1 and the symplectic polar coordinates Φ2:

(x, y) →Φ1 (X, Y) = ((km)^(1/4) x, (km)^(−1/4) y) →Φ2 (I, φ) = ((X² + Y²)/2, tan⁻¹(Y/X)).

In general, if symplectic coordinates (I1, ..., In, φ1, ..., φn) can be selected for the Hamiltonian system (H) such that I1, ..., In are first integrals, then solutions can be obtained by direct integration. We call such coordinates action-angle variables. Coordinates I1, ..., In are action variables and φ1, ..., φn are angle variables. Let Tⁿ = (R/2πZ)ⁿ be the n-dimensional torus. If in terms of the action-angle variables (I, φ) = (I1, ..., In, φ1, ..., φn) ∈ Rⁿ × Tⁿ the Hamiltonian is a smooth function of the form H0(I), then as in Example 11.4.1 we have equations of motion

İi = 0,   φ̇i = −∂H0/∂Ii =: ωi(I).

The solution with initial data (I⁰, φ⁰) is then given by

I = I⁰,   φ(t) = φ⁰ + ω(I⁰)t.

Tori {I⁰} × Tⁿ are invariant and the dynamics is completely determined by the frequency vector ω(I) = (ω1(I), ..., ωn(I)). Adding a smooth small perturbation term Hε(I, φ) to the Hamiltonian would typically destroy many invariant tori, resulting in chaotic regions in the phase space. A classic result by Kolmogorov (1954), Arnold (1963), and Moser (1962) states that tori with "strongly non-resonant" frequency ω persist. More precisely, this means that for some α > 0, τ > n − 1, the frequency vector ω satisfies

|⟨k, ω⟩| ≥ α/|k|^τ   for all k ∈ Zⁿ \ {0}.

This is also known as the Diophantine condition. Figure 11.2 exhibits how invariant tori ("islands" in the figure) coexist with chaotic regions. The study of this phenomenon has developed into an important branch of dynamical systems known as KAM theory. Interested readers are referred to [L; P].


11.5


Generating Functions and Hamilton-Jacobi’s Method

We have seen how a Hamiltonian system can be simplified by choosing proper symplectic coordinates. However, it is in general a nontrivial task to find ideal symplectic coordinates. In this section we introduce the Hamilton-Jacobi method for finding such coordinates.

Let us begin with the simplest case, n = 1. By writing R² = R_x × R_y we mean that the first coordinate is x and the second is y. Let D ⊂ R² be a simply connected region with positively oriented boundary ∂D. By Green's theorem,

\[
\mathrm{Area}(D)=\oint_{\partial D} x\,dy=-\oint_{\partial D} y\,dx=\iint_{D} dx\,dy.
\]

The sign of the area changes when the orientation is reversed. As differentials of the coordinate functions, dx is the linear map sending (ξ, η) to ξ, and dy is the linear map sending (ξ, η) to η. More generally, for any smooth function f : R² → R, its differential df at (x0, y0) is the linear map

\[
df_{(x_0,y_0)}(\xi,\eta)
=\frac{\partial f}{\partial x}(x_0,y_0)\,\xi+\frac{\partial f}{\partial y}(x_0,y_0)\,\eta
=\frac{\partial f}{\partial x}(x_0,y_0)\,dx(\xi,\eta)+\frac{\partial f}{\partial y}(x_0,y_0)\,dy(\xi,\eta).
\]

For brevity, the point (x0, y0) is often suppressed and the above equation is written

\[
df=\frac{\partial f}{\partial x}\,dx+\frac{\partial f}{\partial y}\,dy.
\]

We may also write the linear map df at (x0, y0) as Df(x0, y0) (the row vector (∂f/∂x, ∂f/∂y)), or as the product of ∇f(x0, y0) (the column vector (∂f/∂x, ∂f/∂y)ᵀ) with (dx, dy)ᵀ. For any continuous real-valued function g and smooth path γ : [a, b] → R², we have the Riemann-Stieltjes integral over γ:

\[
\int_{\gamma} g\,df
=\lim_{\max_i|t_{i+1}-t_i|\to 0}\ \sum_{k} g(\gamma(t_k))\,df_{\gamma(t_k)}\bigl(\gamma(t_{k+1})-\gamma(t_k)\bigr)
=\lim_{\max_i|t_{i+1}-t_i|\to 0}\ \sum_{k} g(\gamma(t_k))\,df_{\gamma(t_k)}(\gamma'(t_k))\,(t_{k+1}-t_k).
\]

Here a = t0 < t1 < · · · < tn = b is a partition of [a, b]. Linear combinations of mappings of the form g df are called (differential) 1-forms. The Riemann-Stieltjes integral above defines the integration of 1-forms over paths.
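The Riemann-Stieltjes sums above are directly computable. A minimal numerical sketch (not from the book): approximating the integral of the 1-form x dy over the positively oriented unit circle, which by Green's theorem equals the enclosed area π. The uniform partition with n points is an arbitrary choice.

```python
import math

def integrate_x_dy(gamma, n):
    """Riemann-Stieltjes sum for the 1-form x dy along the closed path gamma,
    using the uniform partition t_k = 2*pi*k/n as in the definition above."""
    total = 0.0
    for k in range(n):
        x0, y0 = gamma(2 * math.pi * k / n)
        _, y1 = gamma(2 * math.pi * (k + 1) / n)
        total += x0 * (y1 - y0)  # g(gamma(t_k)) * dy(increment), with g = x
    return total

circle = lambda t: (math.cos(t), math.sin(t))  # positively oriented unit circle
print(abs(integrate_x_dy(circle, 100_000) - math.pi))  # small discretization error
```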

INTRODUCTION TO HAMILTONIAN SYSTEMS

If Φ : R² = R_x × R_y → R² = R_X × R_Y is symplectic, written Φ(x, y) = (X, Y), then the area-preserving property of DΦ implies

\[
\mathrm{Area}(\Phi(D))
=\oint_{\partial\Phi(D)} X\,dY
=-\oint_{\partial\Phi(D)} Y\,dX
=\iint_{\Phi(D)} dX\,dY
=\iint_{D}\det(D\Phi)\,dx\,dy
=\iint_{D} dx\,dy
=\mathrm{Area}(D).
\]

Take ∮_{∂D} x dy and ∮_{∂Φ(D)} X dY, for example. We may rewrite the latter in terms of x, y:

\[
\oint_{\partial\Phi(D)} X\,dY
=\oint_{\partial D} X(x,y)\,\frac{\partial Y}{\partial x}\,dx+X(x,y)\,\frac{\partial Y}{\partial y}\,dy.
\]

Combining with ∮_{∂D} x dy, we find that the line integral

\[
\int_{\gamma} x\,dy-X\,dY
\]

is independent of path, which implies the local existence of a function S(y, Y) such that

\[
dS=x\,dy-X\,dY.
\]

Symplectic coordinates Φ = (X, Y) can be generated by various choices of S. For example,

\[
S(y,Y)=yY \;\Rightarrow\; (x,-X)=(Y,y) \;\Rightarrow\; \Phi(x,y)=(-y,x),
\]

\[
S(y,Y)=\frac12 y^2\cot Y
\;\Rightarrow\;
(x,-X)=\left(y\cot Y,\ -\frac{y^2}{2}\csc^2 Y\right)
\;\Rightarrow\;
\Phi(x,y)=\left(\frac{x^2+y^2}{2},\ \tan^{-1}\frac{y}{x}\right).
\]

The second is exactly the symplectic polar coordinates.

We may take other area integrals in the discussion above, say ∮_{∂D} x dy and ∮_{∂Φ(D)} Y dX, in which case we have

\[
\frac{\partial S}{\partial X}\,dX+\frac{\partial S}{\partial y}\,dy=dS=Y\,dX+x\,dy.
\]

Symplectic coordinates Φ = (X, Y) can also be generated by such an S. For

example,

\[
S(X,y)=Xy \;\Rightarrow\; (Y,x)=(y,X) \;\Rightarrow\; \Phi(x,y)=(x,y),
\]

\[
S(X,y)=\frac{y}{2}\sqrt{2X-y^2}+X\tan^{-1}\!\frac{y}{\sqrt{2X-y^2}}
\;\Rightarrow\;
(Y,x)=\left(\tan^{-1}\!\frac{y}{\sqrt{2X-y^2}},\ \sqrt{2X-y^2}\right)
\;\Rightarrow\;
\Phi(x,y)=\left(\frac{x^2+y^2}{2},\ \tan^{-1}\frac{y}{x}\right).
\]

The first is simply the identity map, and the second is again the symplectic polar coordinates. It is not apparent how the second S was obtained, nor what advantage we gain by finding such an S. Below we introduce the Hamilton-Jacobi method and answer these questions.

For general n, write R^{2n} = R_{x_1} × · · · × R_{x_n} × R_{y_1} × · · · × R_{y_n}. We can fix k and apply the discussion above to the x_k y_k-plane R² = R_{x_k} × R_{y_k}: the coordinates (x, y) and (X, Y) above are replaced with (x_k, y_k) and (X_k, Y_k), and Area(D) is replaced with the projection to the x_k y_k-plane of a parallelogram in R^{2n}. We can then generalize the arguments above to surfaces in R^{2n} that can be approximated by disjoint unions of parallelograms.

The coordinate functions x = (x1, ..., xn)ᵀ, y = (y1, ..., yn)ᵀ in (H) are written as column vectors, and so are their differentials dx = (dx1, ..., dxn)ᵀ, dy = (dy1, ..., dyn)ᵀ. The 1-form x1 dy1 + · · · + xn dyn should be written xᵀdy as a matrix product, but it is conventionally written x dy; the same applies to y dx.

Theorem 11.5.1 A C² transformation Φ(x, y) = (X, Y) is symplectic if and only if any one of the following holds:

(a) There exists locally a C² function S1(x, X) such that dS1 = y dx − Y dX.
(b) There exists locally a C² function S2(x, Y) such that dS2 = y dx + X dY.
(c) There exists locally a C² function S3(X, y) such that dS3 = Y dX + x dy.
(d) There exists locally a C² function S4(y, Y) such that dS4 = x dy − X dY.
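In the plane, symplecticity of a C² transformation reduces to det DΦ = 1, so statements like Theorem 11.5.1 are easy to spot-check numerically. A small sketch (not from the book), testing the symplectic polar coordinates with a finite-difference Jacobian; the step size and test point are arbitrary choices.

```python
import math

def Phi(x, y):
    """Symplectic polar coordinates from the text: ((x^2+y^2)/2, atan(y/x))."""
    return 0.5 * (x * x + y * y), math.atan2(y, x)

def jacobian(f, x, y, h=1e-6):
    """Central finite-difference Jacobian of f : R^2 -> R^2."""
    fx = [(a - b) / (2 * h) for a, b in zip(f(x + h, y), f(x - h, y))]
    fy = [(a - b) / (2 * h) for a, b in zip(f(x, y + h), f(x, y - h))]
    return [[fx[0], fy[0]], [fx[1], fy[1]]]

def symplectic_defect(f, x, y):
    """For a 2x2 Jacobian M, the condition M^T J M = J is |det M - 1| = 0."""
    (a, b), (c, d) = jacobian(f, x, y)
    return abs(a * d - b * c - 1.0)

print(symplectic_defect(Phi, 1.3, 0.7))               # small (finite-difference error)
print(symplectic_defect(lambda x, y: (2 * x, y), 1.0, 1.0))  # large: not symplectic
```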

Real-valued functions S1, ..., S4 in this theorem are called generating functions for the symplectic transformation Φ.

Proof. We only prove the equivalence of symplecticity of Φ and (a), since the other cases are similar. Write the Jacobian matrix of Φ in four n × n blocks:

\[
\frac{\partial(X,Y)}{\partial(x,y)}
=\begin{pmatrix}
\dfrac{\partial X}{\partial x} & \dfrac{\partial X}{\partial y}\\[6pt]
\dfrac{\partial Y}{\partial x} & \dfrac{\partial Y}{\partial y}
\end{pmatrix}.
\]

It is symplectic if and only if

\[
\left(\frac{\partial(X,Y)}{\partial(x,y)}\right)^{T} J\,\frac{\partial(X,Y)}{\partial(x,y)}=J,
\]

or equivalently,

\[
\begin{pmatrix}
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial x}
-\left(\dfrac{\partial Y}{\partial x}\right)^{T}\dfrac{\partial X}{\partial x}
&
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial y}
-\left(\dfrac{\partial Y}{\partial x}\right)^{T}\dfrac{\partial X}{\partial y}
\\[8pt]
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial x}
-\left(\dfrac{\partial Y}{\partial y}\right)^{T}\dfrac{\partial X}{\partial x}
&
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial y}
-\left(\dfrac{\partial Y}{\partial y}\right)^{T}\dfrac{\partial X}{\partial y}
\end{pmatrix}
=\begin{pmatrix} 0 & I_n\\ -I_n & 0\end{pmatrix}.
\]

It is easy to see that this matrix identity holds if and only if the 2n × 2n matrix

\[
\begin{pmatrix}
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial x}
&
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial y}-I_n
\\[8pt]
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial x}
&
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial y}
\end{pmatrix}
\]

is symmetric. Since Φ is of class C², the Hessian ∂²Xᵢ/∂(x, y)² of each Xᵢ is a 2n × 2n symmetric matrix. Therefore, the matrix above is symmetric if and only if

\[
D\!\begin{pmatrix}
\left(\dfrac{\partial X}{\partial x}\right)^{T}Y-y\\[6pt]
\left(\dfrac{\partial X}{\partial y}\right)^{T}Y
\end{pmatrix}
=\begin{pmatrix}
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial x}
&
\left(\dfrac{\partial X}{\partial x}\right)^{T}\dfrac{\partial Y}{\partial y}-I_n
\\[8pt]
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial x}
&
\left(\dfrac{\partial X}{\partial y}\right)^{T}\dfrac{\partial Y}{\partial y}
\end{pmatrix}
+\sum_{i=1}^{n} Y_i\,\frac{\partial^{2}X_i}{\partial(x,y)^{2}}
\qquad (11.16)
\]

is symmetric. We leave the verification of this identity as an exercise (Exercise 11.11). By Theorem 5.4.2, the matrix above is symmetric if and only if there exists locally a C² function S1(x, y) such that

\[
\left(\frac{\partial S_1}{\partial x}\right)^{T}
=-\left[\left(\frac{\partial X}{\partial x}\right)^{T}Y-y\right],
\qquad
\left(\frac{\partial S_1}{\partial y}\right)^{T}
=-\left(\frac{\partial X}{\partial y}\right)^{T}Y.
\]

This can be alternatively written

\[
dS_1=\left(y^{T}-Y^{T}\frac{\partial X}{\partial x}\right)dx-Y^{T}\frac{\partial X}{\partial y}\,dy
=y\,dx-Y^{T}\left(\frac{\partial X}{\partial x}\,dx+\frac{\partial X}{\partial y}\,dy\right)
=y\,dx-Y\,dX.
\]

This completes the proof.

As observed in the previous section, if the symplectic coordinates (X, Y) are action-angle variables, where X1, ..., Xn are the action variables, then the new Hamiltonian H̃ is a function of X alone, and solutions can be obtained by direct integration. Locally, (x, y) are functions of (X, Y) and vice versa. The idea is to find a generating function of the form S(x, X) so that

\[
H\left(t,\,x,\,\frac{\partial S}{\partial x}\right)=\tilde H(X). \qquad (11.17)
\]

This partial differential equation is called a Hamilton-Jacobi equation. On the left-hand side, the variable x is expressed as x(X, Y) after differentiating S. If the generating function is of the form S(X, y), then

\[
H\left(t,\,\frac{\partial S}{\partial y},\,y\right)=\tilde H(X). \qquad (11.18)
\]

If we succeed, the Hamiltonian in the new coordinates does not depend on Y (such coordinates are called cyclic), and the Hamiltonian system becomes

\[
\dot X=0,\qquad \dot Y=-\frac{\partial\tilde H}{\partial X}.
\]

Then X = (X1, ..., Xn) are first integrals, and we can solve the system as in Section 11.4. The Hamilton-Jacobi method finds action-angle variables for the Hamiltonian system (H) by finding suitable generating functions through solving the Hamilton-Jacobi equation. This approach may seem surprising, because solving the Hamilton-Jacobi equation does not sound easier than solving the original system directly. In many practical applications, however, solving the Hamilton-Jacobi equation is indeed easier. We illustrate the Hamilton-Jacobi method in the following examples.
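Before the examples, the cyclic-coordinate mechanism can be checked numerically (a sketch, not from the book): for the harmonic oscillator H = (x² + y²)/2, the action X = (x² + y²)/2 is a first integral of the flow, and the angle Y = tan⁻¹(y/x) satisfies Ẏ = −∂H̃/∂X = −1, so Y decreases linearly in t. The Runge-Kutta integrator and step size below are arbitrary choices.

```python
import math

def rk4_step(f, z, h):
    """One classical fourth-order Runge-Kutta step for z' = f(z)."""
    k1 = f(z)
    k2 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k1)])
    k3 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k2)])
    k4 = f([zi + h * ki for zi, ki in zip(z, k3)])
    return [zi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

# Harmonic oscillator in the book's sign convention: x' = dH/dy = y, y' = -dH/dx = -x.
f = lambda z: [z[1], -z[0]]

z, h = [1.0, 0.0], 0.01
X0 = 0.5 * (z[0] ** 2 + z[1] ** 2)    # action at t = 0
for _ in range(100):                   # integrate up to t = 1
    z = rk4_step(f, z, h)

X1 = 0.5 * (z[0] ** 2 + z[1] ** 2)
Y1 = math.atan2(z[1], z[0])            # angle at t = 1
print(abs(X1 - X0))                    # ~0: the action is conserved
print(abs(Y1 - (-1.0)))                # ~0: Y(t) = Y(0) - t, so Y(1) = -1
```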

Example 11.5.1 Consider a Hamiltonian system with one degree of freedom and Hamiltonian

\[
H(x,y)=\frac12 y^{2}+V(x).
\]

The Hamiltonian is assumed to be of class C² on an open domain D ⊂ R². We seek action-angle variables (X, Y), with action X and angle Y. Suppose the Hamiltonian expressed in terms of (X, Y) is H̃ = H̃(X). Consider a generating function of the form S(x, X), case (a) in Theorem 11.5.1. From y = ∂S/∂x and the Hamilton-Jacobi equation (11.17), we have

\[
\frac12\left(\frac{\partial S}{\partial x}\right)^{2}+V(x)=\tilde H(X),
\qquad
\dot x=\frac{\partial H}{\partial y}=y=\pm\sqrt2\,\sqrt{\tilde H(X)-V(x)}.
\]

Thus, with X0 fixed and a suitable x0 depending on X0,

\[
S(x,X_0)=\pm\sqrt2\int_{x_0}^{x}\sqrt{\tilde H(X_0)-V(s)}\;ds, \qquad (11.19)
\]

\[
dt=\frac{dx}{\pm\sqrt2\,\sqrt{\tilde H(X)-V(x)}}. \qquad (11.20)
\]

Relate X0 and x0 by H̃(X0) = h0 = H(x0, 0) = V(x0), and assume (∂H/∂x)(x0, 0) ≠ 0. By the implicit function theorem, the x-intercept (x̄, 0) of the level curve of H is locally a function of X satisfying x̄(X0) = x0. Then the integral (11.19) yields a locally well-defined generating function S:

\[
S(x,X)=\pm\sqrt2\int_{\bar x}^{x}\sqrt{\tilde H(X)-V(s)}\;ds. \qquad (11.21)
\]

The angle variable Y is then obtained by

\[
Y=-\frac{\partial S}{\partial X}
=\pm\sqrt2\,\sqrt{\tilde H(X)-V(\bar x)}\;\bar x'(X)
\mp\frac{1}{\sqrt2}\int_{\bar x}^{x}\frac{\tilde H'(X)}{\sqrt{\tilde H(X)-V(s)}}\,ds.
\]

The first term on the right-hand side is zero by the definition of x̄. Thus

\[
Y=\mp\frac{1}{\sqrt2}\int_{\bar x}^{x}\frac{\tilde H'(X)}{\sqrt{\tilde H(X)-V(s)}}\,ds
=-\tilde H'(X)\,t. \qquad (11.22)
\]

The last identity uses (11.20); here t is the transfer time from x̄ to x.

Consider the case of the harmonic oscillator, with V(x) = x²/2. Define X = (x² + y²)/2 (the action variable); then H̃ is simply the identity map

and x̄ = ±√(2X). Note that (∂H/∂x)(x0, 0) ≠ 0 whenever x0 ≠ 0. Consider (x, y) in the first quadrant, for instance; then S in (11.21) is nonnegative and x̄ = √(2X). A closed form for the generating function S(x, X) in (11.21) can be found:

\[
S(x,X)=\int_{\sqrt{2X}}^{x}\sqrt{2X-s^{2}}\,ds
=\frac{x}{2}\sqrt{2X-x^{2}}+X\tan^{-1}\!\frac{x}{\sqrt{2X-x^{2}}}-\frac{\pi}{2}X.
\]

The integral (11.22) becomes

\[
Y=-\frac{1}{\sqrt2}\int_{\sqrt{2X}}^{x}\frac{ds}{\sqrt{X-s^{2}/2}}
=\frac{\pi}{2}-\tan^{-1}\!\frac{x}{\sqrt{2X-x^{2}}}
=\tan^{-1}\frac{y}{x},
\]

as expected. We have seen in Example 11.4.1 that the angle variable Y(t) is linear in t, which also follows from (11.22). For more general potentials V, the integrals (11.21) and (11.22) may not have closed forms, but we can still describe generating functions and action-angle variables in integral form as outlined in this example.

Example 11.5.2 Consider a Hamiltonian system with two degrees of freedom and Hamiltonian

\[
H(x_1,x_2,y_1,y_2)=\frac{x_2(x_1^{2}+y_1^{2})}{x_1 y_1 y_2},
\qquad x_1, y_1, y_2\neq 0.
\]

Observing the symmetry of the Hamiltonian, it is natural to use new variables u1 = x1/y1 and u2 = x2/y2 in place of either (x1, x2) or (y1, y2). Using the Hamilton-Jacobi method is a less direct approach, but we shall use this simple example to illustrate the method for systems with higher degrees of freedom. As in the previous example, we seek action-angle variables (X, Y), with action variables X = (X1, X2) and angle variables Y = (Y1, Y2). In terms of these new variables, the Hamiltonian becomes H̃ = H̃(X1, X2). Consider a generating function of the form S(X1, X2, y1, y2), case (c) in Theorem 11.5.1. From xi = ∂S/∂yi, i ∈ {1, 2}, and the Hamilton-Jacobi equation (11.18), we have

\[
\frac{\partial S}{\partial y_2}\left[\left(\frac{\partial S}{\partial y_1}\right)^{2}+y_1^{2}\right]
=\tilde H(X_1,X_2)\,y_1 y_2\,\frac{\partial S}{\partial y_1}.
\]

A trick for finding S is separation of variables:

\[
\frac{1}{y_2}\frac{\partial S}{\partial y_2}
=\tilde H(X_1,X_2)\,
\frac{y_1\,\partial S/\partial y_1}{\left(\partial S/\partial y_1\right)^{2}+y_1^{2}}.
\]

The equations

\[
\frac{1}{y_2}\frac{\partial S}{\partial y_2}=c_2,
\qquad
\left(\frac{\partial S}{\partial y_1}\right)^{2}+y_1^{2}=c_1\,y_1\,\frac{\partial S}{\partial y_1}
\]

suggest setting S quadratic in both y1 and y2. Setting S(X1, X2, y1, y2) = y1²X1 + y2²X2, we obtain

\[
x_1=\frac{\partial S}{\partial y_1}=2y_1X_1,\qquad
x_2=\frac{\partial S}{\partial y_2}=2y_2X_2,\qquad
Y_1=\frac{\partial S}{\partial X_1}=y_1^{2},\qquad
Y_2=\frac{\partial S}{\partial X_2}=y_2^{2}.
\]

Therefore, the symplectic transformation Φ = (X1, X2, Y1, Y2) is given by

\[
(X_1,X_2,Y_1,Y_2)=\left(\frac{x_1}{2y_1},\ \frac{x_2}{2y_2},\ y_1^{2},\ y_2^{2}\right).
\]

In terms of these symplectic coordinates, we find

\[
\tilde H=\tilde H(X_1,X_2)=2X_2\left(2X_1+\frac{1}{2X_1}\right).
\]

From the Hamiltonian system given by H̃, we find that X1 and X2 are constants, say X1 = c1 and X2 = c2, and that x1 = 2c1y1, x2 = 2c2y2. Also, y1 and y2 are given by y1² = Y1, y2² = Y2 and, from the equations for Ẏ1, Ẏ2,

\[
y_1^{2}=-2c_2\left(2-\frac{1}{2c_1^{2}}\right)t+d_1,
\qquad
y_2^{2}=-2\left(2c_1+\frac{1}{2c_1}\right)t+d_2
\]

for some constants d1 and d2.

Example 11.5.3 We finish the chapter by considering the two-center problem, also known as Euler's three-body problem. It concerns the motion of a point mass moving in space under the Newtonian gravitational attraction of two fixed bodies, or of a point charge moving under the electrostatic force according to Coulomb's law. In celestial mechanics, this

problem has been used to model the two-body problem with one oblate primary.

For simplicity, we consider the planar problem, with fixed bodies (called centers) of masses (or charges) µ1 and µ2 placed at c1 = (−c, 0) and c2 = (c, 0), respectively, where c > 0 is a constant. Let (x1, x2) be the position of the moving unit mass (or charge). The equations of motion for the free particle form the Hamiltonian system with Hamiltonian

\[
H(x_1,x_2,y_1,y_2)=\frac12(y_1^{2}+y_2^{2})-\frac{\mu_1}{r_1}-\frac{\mu_2}{r_2},
\]

where

\[
r_1=\sqrt{(x_1+c)^{2}+x_2^{2}},
\qquad
r_2=\sqrt{(x_1-c)^{2}+x_2^{2}}
\]

are the distances from the free particle to the fixed centers. When one of µ1, µ2 is zero, the problem becomes the Kepler problem, and we have seen in Example 11.3.7 that conic sections with a focus at the nonzero µi are trajectories of solutions. Both branches of a hyperbola are trajectories if µi is allowed to be negative, which occurs when charged particles are considered.

We begin by choosing the "correct" coordinates for this problem, motivated by the following curious observation: for arbitrary µ1, µ2 ∈ R, conic sections with foci (±c, 0) are trajectories of solutions of the two-center problem. We divide this observation into two parts. Let q1(t) be a solution for the force field F1 due to µ1 and q2(t) a solution for the force field F2 due to µ2, and assume that they parametrize the same curve γ with the same orientation. Consider a new parametrization q of γ whose speed |q̇| at time t is given by

\[
|\dot q(t)|^{2}=|\dot q_1(t_1)|^{2}+|\dot q_2(t_2)|^{2},
\]

where t1 and t2 are chosen so that q(t) = q1(t1) = q2(t2).

(I) Assume α(t) is a regular parametrization of γ and the parameter s denotes arc length, so that s(t) = ∫_{t0}^{t} |α̇(τ)| dτ for some t0, and hence

\[
\frac12\frac{d}{ds}|\dot\alpha(t)|^{2}=\ddot\alpha(t)\cdot\vec\tau,
\]

where τ⃗ = α̇/|α̇| is the unit tangent vector in the direction of α̇(t). Therefore, since q1 and q2 have the same orientation, with q(t) = q1(t1) = q2(t2) we have

\[
\ddot q(t)\cdot\vec\tau
=\bigl(\ddot q_1(t_1)+\ddot q_2(t_2)\bigr)\cdot\vec\tau
=-\left(\frac{\mu_1(q_1(t_1)-c_1)}{|q_1(t_1)-c_1|^{3}}
+\frac{\mu_2(q_2(t_2)-c_2)}{|q_2(t_2)-c_2|^{3}}\right)\cdot\vec\tau
=(F_1+F_2)(q(t))\cdot\vec\tau.
\]

(II) Let α be as above and denote by R the radius of curvature; then R is given by |α̇|³/|α̈ × α̇|, which is independent of the parametrization. Thus at q(t) = q1(t1) = q2(t2) we have

\[
|\ddot q(t)\times\vec\tau|
=\frac1R|\dot q(t)|^{2}
=\frac1R\bigl(|\dot q_1(t_1)|^{2}+|\dot q_2(t_2)|^{2}\bigr)
=\frac{|\ddot q_1(t_1)\times\dot q_1(t_1)|}{|\dot q_1(t_1)|}
+\frac{|\ddot q_2(t_2)\times\dot q_2(t_2)|}{|\dot q_2(t_2)|}
=|\ddot q_1(t_1)\times\vec\tau|+|\ddot q_2(t_2)\times\vec\tau|
\]

|¨ q (t) × ~τ | =

\[
=|(F_1+F_2)(q(t))\times\vec\tau|.
\]

The last line follows since q1 and q2 have the same orientation. Putting together (I) and (II), we conclude that q(t) is a solution for the combined force field F1 + F2. In case one center is attractive and the other repulsive, one may modify the discussion above by considering q1 and q2 with opposite orientations.

Fig. 11.3 The two-center problem.

The observation above is a special case of Bonnet's theorem in classical mechanics (see Exercise 11.17). For the two-center problem, it suggests that we rewrite the Hamiltonian in terms of the elliptic coordinates (see Fig. 11.3)

\[
\xi_1=\frac{r_1+r_2}{2c},\qquad \xi_2=\frac{r_1-r_2}{2c}.
\]

It is routine to check the following (Exercise 11.16):

(1) |ξ2| ≤ 1 ≤ ξ1.
(2) \( x_1=c\xi_1\xi_2,\quad x_2=\pm c\sqrt{(\xi_1^{2}-1)(1-\xi_2^{2})} \).
(3) \( \dot x_1=c(\dot\xi_1\xi_2+\xi_1\dot\xi_2),\quad
\dot x_2=\pm c\,\dfrac{\xi_1\dot\xi_1(1-\xi_2^{2})-\xi_2\dot\xi_2(\xi_1^{2}-1)}{\sqrt{(\xi_1^{2}-1)(1-\xi_2^{2})}} \).

By (3), we can write the kinetic energy in terms of the elliptic coordinates ξ1, ξ2:

(4) \( \dfrac12(\dot x_1^{2}+\dot x_2^{2})
=\dfrac{c^{2}}{2}\left(\dfrac{(\xi_1^{2}-\xi_2^{2})\dot\xi_1^{2}}{\xi_1^{2}-1}
+\dfrac{(\xi_1^{2}-\xi_2^{2})\dot\xi_2^{2}}{1-\xi_2^{2}}\right) \).

Let L be the Lagrangian, i.e., kinetic energy minus potential energy (see Exercise 11.2):

\[
L(\xi_1,\xi_2,\dot\xi_1,\dot\xi_2)
=\frac{c^{2}}{2}\left(\frac{(\xi_1^{2}-\xi_2^{2})\dot\xi_1^{2}}{\xi_1^{2}-1}
+\frac{(\xi_1^{2}-\xi_2^{2})\dot\xi_2^{2}}{1-\xi_2^{2}}\right)
+\frac{\mu_1}{c(\xi_1+\xi_2)}+\frac{\mu_2}{c(\xi_1-\xi_2)}.
\]

The corresponding Hamiltonian is obtained by the Legendre transformation (see also Exercise 11.2) ηᵢ = ∂L/∂ξ̇ᵢ. Then

\[
\eta_1=c^{2}\left(\frac{\xi_1^{2}-\xi_2^{2}}{\xi_1^{2}-1}\right)\dot\xi_1,
\qquad
\eta_2=c^{2}\left(\frac{\xi_1^{2}-\xi_2^{2}}{1-\xi_2^{2}}\right)\dot\xi_2,
\]

and the Hamiltonian function H(ξ1, ξ2, η1, η2) is

\[
H=\frac{1}{2c^{2}(\xi_1^{2}-\xi_2^{2})}
\left[(\xi_1^{2}-1)\eta_1^{2}+(1-\xi_2^{2})\eta_2^{2}\right]
-\frac{\mu_1}{c(\xi_1+\xi_2)}-\frac{\mu_2}{c(\xi_1-\xi_2)}.
\]
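The identities behind these items, namely r1 = c(ξ1 + ξ2) and r2 = c(ξ1 − ξ2), can be spot-checked numerically (a sketch, not from the book; the values of c, x1, x2 are arbitrary test data):

```python
import math

# Spot-check items (1)-(2): with r1, r2 the distances to the centers (-c, 0)
# and (c, 0), the elliptic coordinates xi1 = (r1+r2)/(2c), xi2 = (r1-r2)/(2c)
# satisfy |xi2| <= 1 <= xi1, x1 = c*xi1*xi2, x2^2 = c^2 (xi1^2-1)(1-xi2^2).
c = 1.5
x1, x2 = 0.8, -1.1
r1 = math.hypot(x1 + c, x2)
r2 = math.hypot(x1 - c, x2)
xi1, xi2 = (r1 + r2) / (2 * c), (r1 - r2) / (2 * c)

print(abs(xi2) <= 1 <= xi1)                                      # True
print(abs(x1 - c * xi1 * xi2))                                   # ~0
print(abs(x2 ** 2 - c ** 2 * (xi1 ** 2 - 1) * (1 - xi2 ** 2)))   # ~0
```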

In order to find a symplectic transformation Φ(ξ1, ξ2, η1, η2) = (X1, X2, Y1, Y2) such that H = H̃(X1, X2), we may search for a generating function of the form S(ξ1, ξ2, X1, X2). From the Hamilton-Jacobi equation,

\[
2c^{2}(\xi_1^{2}-\xi_2^{2})\tilde H(X_1,X_2)
=(\xi_1^{2}-1)\left(\frac{\partial S}{\partial\xi_1}\right)^{2}
+(1-\xi_2^{2})\left(\frac{\partial S}{\partial\xi_2}\right)^{2}
-2c\mu_1(\xi_1-\xi_2)-2c\mu_2(\xi_1+\xi_2).
\]

As in Example 11.5.2, separating variables gives

\[
(\xi_1^{2}-1)\left(\frac{\partial S}{\partial\xi_1}\right)^{2}
-2c(\mu_1+\mu_2)\xi_1-2c^{2}\tilde H\xi_1^{2}
=-(1-\xi_2^{2})\left(\frac{\partial S}{\partial\xi_2}\right)^{2}
-2c(\mu_1-\mu_2)\xi_2-2c^{2}\tilde H\xi_2^{2}.
\]

The first equation is independent of ξ2 and the second is independent of ξ1, so both are equal to the same constant. Setting both sides equal to c1 and H̃ = c2, we obtain S expressed in terms of elliptic integrals:

\[
S(\xi_1,\xi_2,c_1,c_2)
=\int_{\bar\xi_1}^{\xi_1}\sqrt{\frac{c_1+2c(\mu_1+\mu_2)s+2c^{2}c_2 s^{2}}{s^{2}-1}}\,ds
+\int_{\bar\xi_2}^{\xi_2}\sqrt{\frac{-c_1-2c(\mu_1-\mu_2)s-2c^{2}c_2 s^{2}}{1-s^{2}}}\,ds.
\]

Taking partial derivatives with respect to ξ1 and ξ2 and substituting back into the Hamilton-Jacobi equation, one checks that

\[
H\left(\xi_1,\xi_2,\frac{\partial S}{\partial\xi_1},\frac{\partial S}{\partial\xi_2}\right)
\]

is indeed independent of ξ1 and ξ2, as desired in (11.17).

As a final remark, the two-center problem can also be solved using another set of elliptic coordinates (ξ, η), defined by

\[
x_1=c\cosh\xi\cos\eta,\qquad x_2=c\sinh\xi\sin\eta.
\]

Using the corresponding Lagrangian and the same trick of separating variables, ξ and η can be expressed as functions of a parameter given by elliptic integrals. See [Wh] for details.

11.6 Exercises

Exercise 11.1 The double pendulum consists of a pendulum attached to the end of another pendulum. Assume for simplicity that both pendulums have unit length and a unit mass is placed at the end of each. Consider planar motions, and let (x1, y1) be the position of the first pendulum and (x2, y2) the position of the second pendulum, attached to the first. The kinetic and potential energies are

\[
K=\frac12(\dot x_1^{2}+\dot y_1^{2})+\frac12(\dot x_2^{2}+\dot y_2^{2}),
\qquad
V=y_1+y_2.
\]

Let θ1 be the angle of the first pendulum from its equilibrium position, and θ2 the angle of the second pendulum from its equilibrium position.

(1) Write the equations of motion as a Hamiltonian system using the variables θ1, θ2.
(2) Show that the system can be written as linearly coupled oscillators when the oscillations are small (i.e., θ1, θ2 are close to 0).

Exercise 11.2 Let L(q, q̇, t) be a smooth real-valued function on D × Rⁿ × R, where D ⊂ Rⁿ is open. The Euler-Lagrange equation for L is

\[
\frac{d}{dt}\frac{\partial L}{\partial\dot q}=\frac{\partial L}{\partial q}.
\]

We call L the Lagrangian for this equation.

(1) Find suitable Lagrangians to rewrite the examples in Section 11.1 as Euler-Lagrange equations.
(2) Assume the second derivative ∂²L/∂q̇² is positive definite. Transform the Euler-Lagrange equation into a Hamiltonian system by using the new variables (q, p) and the Hamiltonian function H = p · q̇ − L, where p = ∂L/∂q̇ is called the generalized momentum. This change of variables is called the Legendre transformation.
(3) The condition on ∂²L/∂q̇² in (2) implies L is strictly convex in q̇. Extend the definition of the Legendre transformation to general convex functions.

Exercise 11.3 The motion z = (x, y) of a small asteroid or satellite in the gravitational field generated by two celestial bodies moving along circular orbits can be written

\[
\ddot x-2\dot y=-\frac{\partial V}{\partial x},
\qquad
\ddot y+2\dot x=-\frac{\partial V}{\partial y},
\]

where

\[
V(x,y)=\frac{x^{2}+y^{2}}{2}
+\frac{1-\mu}{\sqrt{(x+\mu)^{2}+y^{2}}}
+\frac{\mu}{\sqrt{(x-1+\mu)^{2}+y^{2}}}.
\]

Here µ and 1 − µ > 0 are the masses of the two celestial bodies (e.g. Sun and Jupiter).

(1) Write the system as an Euler-Lagrange equation by finding a suitable Lagrangian, and write it as a Hamiltonian system by finding a suitable Hamiltonian.
(2) Verify that the following is a first integral (known as the Jacobi integral):

\[
h=\frac12(\dot x^{2}+\dot y^{2})-V(x,y).
\]

Then use it to sketch the regions in R² on which h ≥ −V(x, y) for different values of h. These are known as Hill regions. What is the physical meaning of this region?

(3) Find the critical points of the system, and show that the critical points located on the x-axis are unstable.
Hint: Check that Vxx Vyy < Vxy², and show that this condition implies the eigenvalues of the linearized matrix at these critical points cannot be purely imaginary.

Exercise 11.4 Prove Theorem 11.2.1.

Exercise 11.5 Verify the details of Example 11.2.1.

Exercise 11.6 Find a system of the form ż = f(z), z ∈ D, where D ⊂ R^{2n} is open, such that the system is not a Hamiltonian system but is a locally Hamiltonian system.

Exercise 11.7 Verify identities (i), (ii), (iii) in Example 11.3.7.

Exercise 11.8 In Example 11.3.6 we considered a mechanical system consisting of N particles q1, ..., qN ∈ R³, and showed that conservation of two components of the angular momentum implies conservation of the angular momentum. Assume m1, ..., mN > 0 and the Hamiltonian is

\[
H(q_1,\dots,q_N,p_1,\dots,p_N)
=\frac12\sum_{k=1}^{N}\frac{p_k^{2}}{m_k}+V(q_1,\dots,q_N).
\]

Here the potential function V is arbitrary, and pk = mk q̇k is the linear momentum of the particle qk. Consider the linear momentum P and angular momentum L of the system:

\[
P=(P_1,P_2,P_3)=\sum_{k=1}^{N}p_k,
\qquad
L=(L_1,L_2,L_3)=\sum_{k=1}^{N}q_k\times p_k.
\]

Generalize the result in Example 11.3.6 by finding minimal subsets of {P1, P2, P3, L1, L2, L3} such that conservation of the quantities in the subset implies conservation of all the others. Hint: Try {P2, P3, L1, L2}.

Exercise 11.9 Consider the symplectic group Sp(2n, F), F = R or C.

(1) Show that M ∈ Sp(2n, F) if and only if Mᵀ ∈ Sp(2n, F).
(2) Show that Sp(2n, F) is a proper subgroup of SL(2n, F) if and only if n ≥ 2.

(3) Consider n = 2. Let M ∈ C^{4×4} consist of four 2 × 2 blocks A, B, C, D; namely,

\[
M=\begin{pmatrix} A & B\\ C & D\end{pmatrix}.
\]

Determine necessary and sufficient conditions on A, B, C, D for M ∈ Sp(4, C).
(4) Assume that all eigenvalues of M ∈ Sp(4, R) have nonzero real parts. Determine the possible Jordan forms of M.
(5) Consider the orthogonal matrices in Sp(2n, R). Discuss their relation with the group of unitary matrices in C^{n×n}.

Exercise 11.10 An alternative proof of the fact that symplectic matrices have determinant 1 (Theorem 11.2.4), without using the Pfaffian, can be obtained as follows. Given M ∈ Sp(2n, R):

(1) Find an orthogonal matrix U and a positive definite matrix P such that M = UP. Show that both U and P must be symplectic.
(2) Write the orthogonal symplectic matrix U in four n × n blocks. Show that U must be of the form

\[
U=\begin{pmatrix} A & B\\ -B & A\end{pmatrix}.
\]

(3) Show that real matrices of the form in (2) have nonnegative determinants. Use this to conclude that det(M) = 1.

Exercise 11.11 Verify the identity (11.16), and prove the equivalence of symplecticity of Φ and (b), (c), (d) of Theorem 11.5.1.

Exercise 11.12 We have seen generating functions S(x, X), S(y, Y), and S(X, y) for the symplectic polar coordinates

\[
X=\frac{x^{2}+y^{2}}{2},\qquad Y=\tan^{-1}\frac{y}{x}.
\]

Determine the generating function S(x, Y) for the symplectic polar coordinates.

Exercise 11.13 Find action-angle variables for the coupled oscillators (CO) given in Example 11.1.2. Hint: Adjust the variables in Example 11.3.2.

Exercise 11.14 Denote a symplectic transformation Φ : R⁴ → R⁴ by Φ(x1, x2, y1, y2) = (X1, X2, Y1, Y2). Determine the symplectic transformations Φ generated by the following generating functions:

(1) S(x1, x2, X1, X2) = x1X2² + x2²X1.
(2) S(X1, X2, y1, y2) = X1 y1 cos y2 + X2 y1 sin y2.

Exercise 11.15 Consider the Hamiltonian system with Hamiltonian

\[
H(x_1,x_2,y_1,y_2)=y_1^{2}(1+x_1^{2})+y_2^{2}(1+x_2^{2})+x_1.
\]

Use the Hamilton-Jacobi method to find a generating function of the form S(x1, x2, Y1, Y2), and determine the corresponding symplectic coordinates (X1, X2, Y1, Y2).

Exercise 11.16 Verify (1)-(4) of Example 11.5.3.

Exercise 11.17 (Bonnet's theorem) In Example 11.5.3 we proved that, for arbitrary µ1, µ2 ∈ R, given a solution q1 for the force field F1 generated by µ1 and a solution q2 for the force field F2 generated by µ2, if q1 and q2 parametrize the same geometric orbit, then a solution q for the combined force field F1 + F2 parametrizes the same geometric orbit. Generalize this to an arbitrary number of centers and to more general force laws (not necessarily the inverse-square laws of Newton and Coulomb).

APPENDIX A

A.1

The proof of the local existence for the I.V.P. (2.1) [CL] has two steps. First we construct a sequence of "approximate" solutions to the I.V.P. (2.1). Then we apply the Ascoli-Arzela theorem to extract a subsequence which converges to a solution of (2.1).

Definition A.1.1 We say a piecewise C¹ function ϕ(t) defined on an interval I is an ε-approximate solution of I.V.P. (2.1) if

\[
|\varphi'(t)-f(t,\varphi(t))|<\varepsilon \quad\text{for all } t\in I
\]

whenever ϕ'(t) exists.

Since (t0, x0) ∈ D and D is open in R × Rⁿ, there exist a, b > 0 such that

\[
S=\{(t,x): |t-t_0|\le a,\ |x-x_0|\le b\}\subseteq D.
\]

From the assumption f ∈ C(D), we obtain |f(t, x)| ≤ M on the compact set S. Let c = min{a, b/M}.

Theorem A.1.1 For any ε > 0, there exists an ε-approximate solution of I.V.P. (2.1) on the interval I = {t : |t − t0| ≤ c}.

Proof. Since f(t, x) is continuous on the compact set S, it is uniformly continuous on S. Then, given ε > 0, there exists δ = δ(ε) > 0 such that

\[
|f(t,x)-f(s,y)|<\varepsilon \quad\text{whenever } |(t,x)-(s,y)|<\delta. \qquad (A.1)
\]

Partition the interval [t0, t0 + c] into m subintervals t0 < t1 < · · · < tm = t0 + c with tj+1 − tj < min{δ/2, δ/(2M)} for j = 0, ..., m − 1. Construct an ε-approximate solution ϕ(t) by Euler's method: let ϕ(t0) = x0, and define ϕ(t) by

\[
\begin{aligned}
\varphi(t)&=\varphi(t_0)+f(t_0,\varphi(t_0))(t-t_0) &&\text{on } [t_0,t_1],\\
\varphi(t)&=\varphi(t_j)+f(t_j,\varphi(t_j))(t-t_j) &&\text{on } [t_j,t_{j+1}],\ 1\le j\le m-1.
\end{aligned}
\qquad (A.2)
\]

First we check that (t, ϕ(t)) ∈ S for all t0 ≤ t ≤ t0 + c. Obviously |t − t0| ≤ c ≤ a. For t0 ≤ t ≤ t1,

\[
|\varphi(t)-x_0|=|f(t_0,x_0)|(t-t_0)\le Mc\le M\cdot b/M=b.
\]

By induction, assume (t, ϕ(t)) ∈ S for t0 ≤ t ≤ tj. Then for tj ≤ t ≤ tj+1,

\[
\begin{aligned}
|\varphi(t)-x_0|
&\le|\varphi(t)-\varphi(t_j)|+|\varphi(t_j)-\varphi(t_{j-1})|+\cdots+|\varphi(t_1)-\varphi(t_0)|+|\varphi(t_0)-x_0|\\
&\le M\bigl((t-t_j)+(t_j-t_{j-1})+\cdots+(t_1-t_0)\bigr)
=M(t-t_0)\le Mc\le M\cdot b/M=b.
\end{aligned}
\qquad (A.3)
\]

Now we verify that ϕ(t) is an ε-approximate solution. Let t ∈ (tj, tj+1). Then |t − tj| ≤ |tj+1 − tj| < δ/2, and from (A.2),

\[
|\varphi(t)-\varphi(t_j)|=|f(t_j,\varphi(t_j))(t-t_j)|\le M|t-t_j|\le M\cdot\delta/(2M)=\delta/2.
\]

Hence |(t, ϕ(t)) − (tj, ϕ(tj))| < δ, and from (A.1) we have

\[
|\varphi'(t)-f(t,\varphi(t))|=|f(t_j,\varphi(t_j))-f(t,\varphi(t))|<\varepsilon.
\]
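The construction (A.2) is exactly Euler's method. A minimal computational sketch (illustrative, not from [CL]), using a uniform partition and the test problem x' = x whose exact solution is eᵗ:

```python
import math

def euler_polygon(f, t0, x0, c, m):
    """Return the nodes (t_j, phi(t_j)) of the Euler polygon from (A.2)
    on [t0, t0 + c], using m equal subintervals."""
    h = c / m
    ts, xs = [t0], [x0]
    for _ in range(m):
        xs.append(xs[-1] + h * f(ts[-1], xs[-1]))  # phi(t_{j+1}), as in (A.2)
        ts.append(ts[-1] + h)
    return ts, xs

# x' = x, x(0) = 1 has solution e^t; refining the partition (larger m,
# hence smaller epsilon) drives the polygon toward the true solution.
ts, xs = euler_polygon(lambda t, x: x, 0.0, 1.0, 1.0, 1000)
print(abs(xs[-1] - math.e))  # small: the polygon approximates e at t = 1
```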

Fig. A.1

Theorem A.1.2 Let f ∈ C(D), (t0, x0) ∈ D. Then the I.V.P. (2.1) has a solution on the interval I = [t0 − c, t0 + c] for some c > 0.

Proof. Let εm = 1/m ↓ 0 as m → +∞, and let ϕm(t) be an εm-approximate solution on I.

Claim: {ϕm} is uniformly bounded and equicontinuous.

Uniform boundedness: From (A.3),

\[
|\varphi_m(t)|\le|\varphi_m(t_0)|+|\varphi_m(t)-\varphi_m(t_0)|\le|x_0|+M(t-t_0)\le|x_0|+Mc
\]

for all t and m.

Equicontinuity of {ϕm}: For any s < t with s < tj < · · · < tk < t for some j and k,

\[
\begin{aligned}
|\varphi_m(t)-\varphi_m(s)|
&\le|\varphi_m(t)-\varphi_m(t_k)|+|\varphi_m(t_k)-\varphi_m(t_{k-1})|+\cdots
+|\varphi_m(t_{j+1})-\varphi_m(t_j)|+|\varphi_m(t_j)-\varphi_m(s)|\\
&\le M(t-t_k)+M(t_k-t_{k-1})+\cdots+M(t_j-s)=M|t-s|.
\end{aligned}
\]

Hence, for any ε > 0 we may choose δ = ε/M; then |ϕm(t) − ϕm(s)| < ε whenever |t − s| < δ.

By the Ascoli-Arzela theorem, we can extract a convergent subsequence {ϕmk}. Let ϕmk → ϕ ∈ C(I). Then f(t, ϕmk(t)) → f(t, ϕ(t)) uniformly on I. Let

\[
E_m(t)=\varphi_m'(t)-f(t,\varphi_m(t)).
\]

Then Em is piecewise continuous on I and |Em(t)| ≤ εm on I. It follows that

\[
\varphi_m(t)=x_0+\int_{t_0}^{t}\bigl[f(s,\varphi_m(s))+E_m(s)\bigr]\,ds,
\]

and letting m → ∞ along the subsequence,

\[
\varphi(t)=x_0+\int_{t_0}^{t}f(s,\varphi(s))\,ds.
\]

From Lemma 2.2.1, this completes the proof of Theorem A.1.2.

A.2

Let f : D ⊆ R × Rⁿ × Rᵏ → Rⁿ be a C¹ function. In this appendix we prove the continuous dependence of the solution ϕ(t; τ, ξ, λ) of

\[
x'=f(t,x,\lambda),\qquad x(\tau)=\xi \qquad (A.4)
\]

on (τ, ξ, λ), where τ is the initial time, ξ the initial value, and λ the parameter. First we consider a discrete version of the continuous dependence property.

Theorem A.2.1 Let {fn(t, x)} be a sequence of continuous functions on D with lim fn = f uniformly on any compact subset of D. Let (τn, ξn) ∈ D with (τn, ξn) → (τ, ξ), and let ϕn(t) be any noncontinuable solution of x' = fn(t, x), x(τn) = ξn. If ϕ(t) is the unique solution of x' = f(t, x), x(τ) = ξ, defined on [a, b], then ϕn(t) is defined on [a, b] for n large, and ϕn → ϕ uniformly on [a, b].

Proof. It suffices to show that ϕn → ϕ uniformly on [τ, b]. The proof is split into two parts.

Part 1: We show that there exists t1 > τ such that ϕn → ϕ uniformly on [τ, t1].

Since f ∈ C(D) and (τ, ξ) ∈ D, we let E be a compact subset of D such that Int E ⊇ graph ϕ = {(t, ϕ(t)) : a ≤ t ≤ b}. Let |f| < M in E. Then |fn| < M for n ≥ n0 for some large n0. Choose δ > 0 sufficiently small such that

\[
R=\{(t,x): |t-\tau|\le\delta,\ |x-\xi|\le 3M\delta\}\subseteq E.
\]

Fig. A.2

Choose n1 ≥ n0 large enough that

\[
|\tau_n-\tau|<\delta,\quad |\xi_n-\xi|<M\delta \qquad\text{for } n\ge n_1.
\]

Let n ≥ n1; then (τn, ξn) ∈ R. For |t − τ| ≤ δ, as long as (t, ϕn(t)) stays in E, we have

\[
|\varphi_n(t)-\xi_n|=\left|\int_{\tau_n}^{t}f_n(s,\varphi_n(s))\,ds\right|\le M|t-\tau_n|
\]

and

\[
|\varphi_n(t)-\xi|\le|\xi-\xi_n|+|\varphi_n(t)-\xi_n|
\le M\delta+M|t-\tau_n|
\le M\delta+M(|t-\tau|+|\tau-\tau_n|)
\le M\delta+M\delta+M\delta=3M\delta.
\]

Hence ϕn(t) is defined and {(t, ϕn(t)) : |t − τ| ≤ δ} ⊆ R. Obviously {ϕn} is uniformly bounded on Iδ = {t : |t − τ| ≤ δ}. From

\[
|\varphi_n(t)-\varphi_n(s)|=\left|\int_{s}^{t}f_n(s',\varphi_n(s'))\,ds'\right|\le M|t-s|,
\qquad s,t\in I_\delta,
\]

it follows that {ϕn} is equicontinuous on Iδ. By the Ascoli-Arzela theorem, we can extract a convergent subsequence {ϕnk} such that ϕnk → ϕ̄ uniformly on Iδ. Since

\[
\varphi_n(t)=\xi_n+\int_{\tau_n}^{t}f_n(s,\varphi_n(s))\,ds,
\]

letting nk → ∞ yields

\[
\bar\varphi(t)=\xi+\int_{\tau}^{t}f(s,\bar\varphi(s))\,ds.
\]

By uniqueness of ϕ(t), we have ϕ̄(t) ≡ ϕ(t) on Iδ; indeed, by the uniqueness property every convergent subsequence of {ϕn} converges to ϕ. Now we claim that ϕn → ϕ on Iδ. If not, there exist ε > 0 and a subsequence {ϕnℓ} such that ‖ϕnℓ − ϕ‖ > ε for each ℓ = 1, 2, .... Applying the Ascoli-Arzela theorem to {ϕnℓ}, we obtain a subsequence ϕnℓk → ϕ, which is the desired contradiction. Hence ϕn → ϕ uniformly on [τ, t1], t1 = τ + δ.

Part 2: We show that ϕn → ϕ uniformly on [τ, b]. Let

\[
t^{*}=\sup\{t_1\le b : \varphi_n\to\varphi \text{ uniformly on } [\tau,t_1]\}.
\]

We claim that t* = b. If not, t* < b. Choose δ¹ > 0 small such that

\[
R_1=\{(t,x): |t-t^{*}|\le 2\delta^{1},\ |x-\varphi(t^{*})|\le 4M\delta^{1}\}\subseteq E.
\]

Fig. A.3

Choose t¹ with t* − δ¹ < t¹ < t*. Then

\[
|\varphi(t^{*})-\varphi(t^{1})|\le\int_{t^{1}}^{t^{*}}|f(s,\varphi(s))|\,ds\le M\delta^{1}.
\]

Hence

\[
R''=\{(t,x): |t-t^{1}|\le\delta^{1},\ |x-\varphi(t^{1})|\le 3M\delta^{1}\}\subseteq R_1.
\]

Let ξn¹ = ϕn(t¹) and ξ¹ = ϕ(t¹). Then (t¹, ξn¹) → (t¹, ξ¹). Applying Part 1 with δ replaced by δ¹, we obtain that ϕn(t) → ϕ(t) uniformly on [t¹, t¹ + δ¹]. But t¹ + δ¹ > t*, which contradicts the definition of t*.

Now we state the continuous version of the continuous dependence property.

Theorem A.2.2 Let f(t, x, λ) be a continuous function of (t, x, λ) for all (t, x) in an open set D and all λ near λ0, and let ϕ(t; τ, ξ, λ) be any noncontinuable solution of x' = f(t, x, λ), x(τ) = ξ. If ϕ(t; t0, ξ0, λ0) is defined on [a, b] and is unique, then ϕ(t; τ, ξ, λ) is defined on [a, b] for all (τ, ξ, λ) near (t0, ξ0, λ0) and is a continuous function of (t, τ, ξ, λ).

Proof. By Theorem A.2.1 we have

\[
\begin{aligned}
|\varphi(t;\tau,\xi,\lambda)-\varphi(t^{1};t_0,\xi_0,\lambda_0)|
&\le|\varphi(t;\tau,\xi,\lambda)-\varphi(t;t_0,\xi_0,\lambda_0)|
+|\varphi(t;t_0,\xi_0,\lambda_0)-\varphi(t^{1};t_0,\xi_0,\lambda_0)|\\
&\le\varepsilon+|\varphi(t;t_0,\xi_0,\lambda_0)-\varphi(t^{1};t_0,\xi_0,\lambda_0)|<2\varepsilon
\end{aligned}
\]

for a ≤ t ≤ b, provided |(τ, ξ, λ) − (t0, ξ0, λ0)| < δ and |t − t¹| < δ1.
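Continuous dependence is easy to visualize on an example with a closed-form solution (a sketch, not from the text; the test values are arbitrary): for x' = λx, x(τ) = ξ, the solution ϕ(t; τ, ξ, λ) = ξ e^{λ(t−τ)} is an explicitly continuous function of the data (τ, ξ, λ).

```python
import math

def phi(t, tau, xi, lam):
    """Closed-form solution of x' = lam * x, x(tau) = xi."""
    return xi * math.exp(lam * (t - tau))

base = phi(1.0, 0.0, 1.0, 0.5)
near = phi(1.0, 0.01, 1.005, 0.502)       # slightly perturbed (tau, xi, lambda)
nearer = phi(1.0, 0.001, 1.0005, 0.5002)  # an even smaller perturbation
print(abs(near - base))    # small
print(abs(nearer - base))  # smaller still: nearby data give nearby solutions
```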

APPENDIX A


A.3

In this Appendix A.3 we prove Theorem A.3.1.

Theorem A.3.1 [Cop] Let ϕ(t, λ₀) be the unique solution of

$$\frac{dx}{dt} = f(t, x, \lambda_0), \qquad x(t_0) = \xi_0,$$

on the compact interval J = [a, b]. Assume f ∈ C¹ in x and λ at all points (t, ϕ(t, λ₀), λ₀), t ∈ J. Then for λ sufficiently near λ₀, the system

$$(E_\lambda): \quad \frac{dx}{dt} = f(t, x, \lambda), \qquad x(t_0) = \xi_0,$$

has a unique solution ϕ(t, λ) defined on J. Moreover, ϕ_λ(t, λ₀) exists and satisfies

$$y' = f_x(t, \varphi(t, \lambda_0), \lambda_0)\,y + f_\lambda(t, \varphi(t, \lambda_0), \lambda_0), \qquad y(t_0) = 0.$$

Proof. Since f_x, f_λ are continuous in (t, x, λ) and ϕ(t, λ₀) is continuous in t, given any ε > 0 and for each s ∈ J there exists δ = δ(s, ε) > 0 such that

$$|f_x(t, x, \lambda) - f_x(t, \varphi(t, \lambda_0), \lambda_0)| \le \varepsilon, \tag{A.5}$$

and

$$|f_\lambda(t, x, \lambda) - f_\lambda(t, \varphi(t, \lambda_0), \lambda_0)| \le \varepsilon,$$

if |t − s| ≤ δ, |x − ϕ(t, λ₀)| ≤ δ and |λ − λ₀| ≤ δ.

Since $\bigcup_{s \in [a,b]} B(s, \delta(s, \varepsilon)) \supseteq [a, b]$, by the Heine–Borel Theorem there exists N > 0 such that $\bigcup_{i=1}^{N} B(s_i, \delta(s_i, \varepsilon)) \supseteq [a, b]$. Let δ¹ = δ¹(ε) = min(δ(s₁, ε), …, δ(s_N, ε)). For any t ∈ J, if

$$|x - \varphi(t, \lambda_0)| \le \delta^1, \qquad |\lambda - \lambda_0| \le \delta^1, \tag{A.6}$$

then |t − s_i| ≤ δ(s_i, ε) = δ_i for some i, and then |x − ϕ(t, λ₀)| ≤ δ¹ ≤ δ_i, |λ − λ₀| ≤ δ¹ ≤ δ_i. Hence (A.5) holds for any t ∈ J satisfying (A.6).

Let R = {(t, x, λ) : |x − ϕ(t, λ₀)| ≤ δ¹, |λ − λ₀| ≤ δ¹, t ∈ J}. Then f_x, f_λ are bounded on R, say |f_x| ≤ A and |f_λ| ≤ B. Hence f satisfies a Lipschitz condition in x and, if |λ − λ₀| ≤ δ¹, the I.V.P. (E_λ) has a unique solution passing through any point in the region Ω : |x − ϕ(t, λ₀)| ≤ δ¹, t ∈ J. Moreover, the solution ϕ(t, λ) passing through the point (t₀, ξ₀) satisfies

$$|\varphi'(t, \lambda) - f(t, \varphi(t, \lambda), \lambda_0)| \le B|\lambda - \lambda_0|,$$

346

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

as long as (t, ϕ(t, λ)) stays in Ω. Therefore, by Theorem 2.2.4, we have

$$|\varphi(t, \lambda) - \varphi(t, \lambda_0)| \le C|\lambda - \lambda_0|, \tag{A.7}$$

where C = B[e^{Ah} − 1]/A and h = b − a. It follows that ϕ(t, λ) is defined on J for all λ sufficiently near λ₀. Furthermore, by (A.5), (A.7) and the mean value theorem, if λ is sufficiently near λ₀, then for all t ∈ J,

$$|f(t, \varphi(t, \lambda), \lambda) - f(t, \varphi(t, \lambda_0), \lambda_0) - f_x(t, \varphi(t, \lambda_0), \lambda_0)[\varphi(t, \lambda) - \varphi(t, \lambda_0)] - f_\lambda(t, \varphi(t, \lambda_0), \lambda_0)(\lambda - \lambda_0)| \le \varepsilon|\lambda - \lambda_0|. \tag{A.8}$$

Put ψ(t, λ) = ϕ(t, λ) − ϕ(t, λ₀) − y(t)(λ − λ₀). Then ψ(t₀, λ) = 0, and (A.8) can be rewritten as

$$|\psi'(t, \lambda) - f_x(t, \varphi(t, \lambda_0), \lambda_0)\psi(t, \lambda)| \le \varepsilon|\lambda - \lambda_0|.$$

Since y(t) ≡ 0 is the unique solution of the differential equation

$$y' = f_x(t, \varphi(t, \lambda_0), \lambda_0)\,y, \qquad y(t_0) = 0,$$

it follows from Theorem 2.2.4 that

$$|\psi(t, \lambda)| \le \varepsilon|\lambda - \lambda_0|\left(\frac{e^{Ah} - 1}{A}\right).$$

Therefore ϕ_λ(t, λ₀) exists and is equal to y(t).
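To make the conclusion of Theorem A.3.1 concrete (an illustration of ours, not part of the proof): for x′ = λx, x(0) = 1, the solution is ϕ(t, λ) = e^{λt}, and ϕ_λ(t, λ₀) = t e^{λ₀ t} is exactly the solution of the variational problem y′ = λ₀y + ϕ(t, λ₀), y(0) = 0. The sketch below checks this with a crude forward-Euler scheme against a finite-difference derivative in λ.

```python
import math

lam0, t_end, n = 0.5, 1.0, 10000
h = t_end / n

def phi(lam):
    """Euler approximation of the solution of x' = lam*x, x(0) = 1, at t_end."""
    x = 1.0
    for _ in range(n):
        x += h * lam * x
    return x

# Integrate the variational equation y' = lam0*y + phi(t, lam0), y(0) = 0,
# alongside the solution x itself (x plays the role of phi(t, lam0)).
x, y = 1.0, 0.0
for _ in range(n):
    x, y = x + h * lam0 * x, y + h * (lam0 * y + x)

finite_diff = (phi(lam0 + 1e-6) - phi(lam0)) / 1e-6

assert abs(y - finite_diff) < 1e-2                      # y approximates d(phi)/d(lam)
assert abs(y - t_end * math.exp(lam0 * t_end)) < 1e-2   # exact value t*e^{lam0*t}
```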

APPENDIX B

We say that W^u(0, U) is a Lipschitz graph over PR^n if there is a neighborhood V ⊆ PR^n of 0 and a Lipschitz continuous function h : V → QR^n such that

$$W^u(0, U) = \{ (\xi, \eta) \in \mathbb{R}^n : \eta = h(\xi),\; \xi \in V \}.$$

The set W^u(0, U) is said to be tangent to PR^n at 0 if |h(ξ)|/|ξ| → 0 as ξ → 0 in W^u(0, U). We say that W^u(0, U) is a C^k (or analytic) graph over PR^n if the above function h is C^k (or analytic). Similar definitions hold for W^s(0, U).

Theorem B.1 (Stable Manifold Theorem) If Re(σ(A)) ≠ 0, then there is a neighborhood U of 0 in R^n such that W^u(0, U) (resp. W^s(0, U)) is a Lipschitz graph over E^u := PR^n (resp. E^s := QR^n) which is tangent to E^u (resp. E^s) at 0. Also, there are positive constants K₁, α₁ such that if x ∈ W^u(0, U) (resp. W^s(0, U)), then the solution ϕ_t(x) of (4.8) satisfies

$$|\varphi_t(x)| \le K_1 e^{-\alpha_1 |t|} |x|, \qquad t \le 0 \;(\text{resp. } t \ge 0).$$

Furthermore, if g is a C^k function (or an analytic function) in a neighborhood U of 0, then so are W^u(0, U) and W^s(0, U). The local stable and unstable manifolds can then be extended to global stable and unstable manifolds. Define

$$W^s(0) = \text{the global stable manifold of } 0 = \bigcup_{t \le 0} \{ \varphi_t(x) : x \in W^s_{loc}(0) \},$$


and

$$W^u(0) = \text{the global unstable manifold of } 0 = \bigcup_{t \ge 0} \{ \varphi_t(x) : x \in W^u_{loc}(0) \}.$$
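Before the proof, a toy computation (our illustration, not the book's): for the planar system ẋ = x, ẏ = −y + x², the origin is hyperbolic with E^u the x-axis, and substituting y = x²/3 into both equations shows that this parabola is an invariant graph tangent to E^u at 0, i.e. the local unstable manifold has h(ξ) = ξ²/3. The sketch below starts on the graph near 0 and checks that the (forward-Euler) flow stays on it.

```python
def flow(x, y, t_end, n=100000):
    """Crude forward-Euler flow of x' = x, y' = -y + x**2."""
    h = t_end / n
    for _ in range(n):
        x, y = x + h * x, y + h * (-y + x * x)
    return x, y

x0 = 0.01
# Start on the candidate unstable manifold y = x**2/3 and flow forward.
x1, y1 = flow(x0, x0 * x0 / 3.0, 4.0)

# The trajectory remains (numerically) on the invariant graph y = x**2/3.
assert abs(y1 - x1 * x1 / 3.0) < 1e-4
```

Since the transverse direction is contracting, numerical errors are damped toward the graph, which is why a first-order scheme suffices here.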

Proof. We shall apply the contraction mapping principle. With the function ρ in (4.9) and K, α in (4.14), we choose δ > 0 so that

$$4K\rho(\delta) < \alpha, \qquad 8K^2\rho(\delta) < \alpha. \tag{B.1}$$

Let S(δ) be the set of continuous functions x : (−∞, 0] → R^n such that ‖x‖ = sup_{t ≤ 0} |x(t)| ≤ δ. Then the set S(δ) is a complete metric space with the metric induced by the uniform topology. For any y(·) ∈ S(δ) and any ξ ∈ PR^n with |ξ| ≤ δ/2K, we define, for t ≤ 0,

$$(T(y, \xi))(t) = e^{At}\xi + \int_0^t e^{A(t-s)} P g(y(s))\,ds + \int_{-\infty}^t e^{A(t-s)} Q g(y(s))\,ds. \tag{B.2}$$

Next we want to show that T : S(δ) × {ξ ∈ PR^n : |ξ| ≤ δ/2K} → S(δ) is Lipschitz continuous and that T(·, ξ) is a contraction mapping with contraction constant 1/2 for all ξ ∈ PR^n with |ξ| ≤ δ/2K. From (4.9), (4.13), (4.14), (B.1) and (B.2), for t ≤ 0 we have

$$\begin{aligned}
|T(y, \xi)(t)| &\le K e^{\alpha t} |\xi| + \int_t^0 K e^{\alpha(t-s)} \rho(\delta) |y(s)|\,ds + \int_{-\infty}^t K e^{-\alpha(t-s)} \rho(\delta) |y(s)|\,ds \\
&\le K|\xi| + \rho(\delta)\,\delta\,\frac{K}{\alpha} + \frac{K}{\alpha}\,\rho(\delta)\,\delta < \delta/2 + \delta/2 = \delta.
\end{aligned}$$

Thus T(·, ξ) : S(δ) → S(δ). Furthermore, from (4.9), (4.13), (4.14), (B.1)


and (B.2), we have

$$\begin{aligned}
|T(y_1, \xi)(t) - T(y_2, \xi)(t)|
&\le \int_t^0 K e^{\alpha(t-s)} |g(y_1(s)) - g(y_2(s))|\,ds + \int_{-\infty}^t K e^{-\alpha(t-s)} |g(y_1(s)) - g(y_2(s))|\,ds \\
&\le \int_t^0 K e^{\alpha(t-s)} \rho(\delta) |y_1(s) - y_2(s)|\,ds + \int_{-\infty}^t K e^{-\alpha(t-s)} \rho(\delta) |y_1(s) - y_2(s)|\,ds \\
&\le \frac{1}{2}\,\|y_1 - y_2\|.
\end{aligned}$$

Therefore T(·, ξ) has a unique fixed point x*(·, ξ) in S(δ). The fixed point satisfies (4.15) and thus is a solution of (4.8) by Lemma 4.3.1. The function x*(·, ξ) is continuous in ξ. Also

$$|x^*(t, \xi)| \le K e^{\alpha t} |\xi| + K\rho(\delta) \int_t^0 e^{\alpha(t-s)} |x^*(s, \xi)|\,ds + K\rho(\delta) \int_{-\infty}^t e^{-\alpha(t-s)} |x^*(s, \xi)|\,ds. \tag{B.3}$$

From inequality (B.3) and the following Lemma B.1, one can prove that

$$|x^*(t, \xi)| \le 2K e^{\alpha t/2} |\xi|, \qquad t \le 0. \tag{B.4}$$

This estimate shows that x*(·, 0) = 0 and x*(0, ξ) ∈ W^u(0). A similar estimate shows that

$$|x^*(t, \xi) - x^*(t, \tilde\xi)| \le 2K e^{\alpha t/2} |\xi - \tilde\xi|, \qquad t \le 0. \tag{B.5}$$

In particular, x*(·, ξ) is Lipschitz in ξ. The fixed point x*(t, ξ) satisfies

$$x^*(t, \xi) = e^{At}\xi + \int_0^t e^{A(t-s)} P g(x^*(s, \xi))\,ds + \int_{-\infty}^t e^{A(t-s)} Q g(x^*(s, \xi))\,ds. \tag{B.6}$$

Setting t = 0 in (B.6), we have

$$x^*(0, \xi) = \xi + \int_{-\infty}^0 e^{-As} Q g(x^*(s, \xi))\,ds. \tag{B.7}$$

From (B.7) and ξ ∈ PR^n we have P x*(0, ξ) = ξ (see Fig. B.1). Thus we define

$$h(\xi) = Q x^*(0, \xi) = \int_{-\infty}^0 e^{-As} Q g(x^*(s, \xi))\,ds. \tag{B.8}$$
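The formulas (B.2)–(B.8) can be exercised on the toy system ẋ = x, ẏ = −y + x², with A = diag(1, −1), P and Q the coordinate projections, and g(x, y) = (0, x²); this example, the truncation length T, and the grid size are our choices, not the book's. Iterating the operator (B.2) on a grid for t ∈ [−T, 0] and reading off h(ξ) from (B.8) reproduces the known unstable-manifold graph h(ξ) = ξ²/3.

```python
import numpy as np

T, m = 12.0, 24001
t = np.linspace(-T, 0.0, m)
dt = t[1] - t[0]

def lyapunov_perron_h(xi, iters=5):
    """Iterate the Lyapunov-Perron operator (B.2) and return h(xi) from (B.8)."""
    ux = np.zeros(m)   # P-component of the candidate solution on the grid
    uy = np.zeros(m)   # Q-component
    for _ in range(iters):
        # P g = 0 here, so the P-part is just e^{At} xi.
        ux = np.exp(t) * xi
        # Q-part: integral_{-inf}^{t} e^{-(t-s)} ux(s)**2 ds, truncated at -T,
        # computed with a cumulative trapezoid rule.
        integrand = np.exp(t) * ux**2     # e^{s} * g_Q(u(s)) on the grid
        segs = (integrand[1:] + integrand[:-1]) * dt / 2
        cumulative = np.concatenate(([0.0], np.cumsum(segs)))
        uy = np.exp(-t) * cumulative
    return uy[-1]                          # h(xi) = Q x*(0, xi)

xi = 0.1
assert abs(lyapunov_perron_h(xi) - xi**2 / 3.0) < 1e-5
```

Note that h(ξ)/|ξ| = ξ/3 → 0 as ξ → 0, consistent with tangency to E^u.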


Fig. B.1

Now we prove that W^u(0, U) is tangent to PR^n at 0. From (4.9), applied with δ replaced by |x*(s, ξ)|, and from x*(t, 0) ≡ 0, it follows that

$$|g(x^*(s, \xi))| = |g(x^*(s, \xi)) - g(x^*(s, 0))| \le \rho(|x^*(s, \xi)|)\,|x^*(s, \xi)|.$$

From (B.4) and (B.8) we have

$$\begin{aligned}
|h(\xi)| &= \left| \int_{-\infty}^{0} e^{-As} Q\, g(x^*(s, \xi))\,ds \right|
\le \int_{-\infty}^{0} \left| e^{-As} Q \right| \cdot |g(x^*(s, \xi))|\,ds \\
&\le \int_{-\infty}^{0} K e^{\alpha s} \rho(|x^*(s, \xi)|)\,|x^*(s, \xi)|\,ds
\qquad \left( |x^*(s, \xi)| \le 2K e^{\alpha s/2}|\xi|,\; s \le 0 \right) \\
&\le \int_{-\infty}^{0} K e^{\alpha s} \rho\!\left(2K e^{\alpha s/2}|\xi|\right) \cdot 2K e^{\alpha s/2}|\xi|\,ds \\
&\le 2K^2 \rho(2K|\xi|) \left( \int_{-\infty}^{0} e^{\frac{3}{2}\alpha s}\,ds \right) |\xi|
= \frac{4K^2}{3\alpha}\,\rho(2K|\xi|)\,|\xi|.
\end{aligned}$$

Therefore, from (B.1),

$$\frac{|h(\xi)|}{|\xi|} \le \frac{4K^2}{3\alpha}\,\rho(2K|\xi|) \to 0 \quad \text{as } |\xi| \to 0.$$


Lemma B.1 Suppose that α > 0, γ > 0, and K, L, M are nonnegative constants, and that u is a nonnegative bounded continuous function satisfying one of the inequalities

$$u(t) \le K e^{-\alpha t} + L \int_0^t e^{-\alpha(t-s)} u(s)\,ds + M \int_0^{\infty} e^{-\gamma s} u(t+s)\,ds, \quad t \ge 0, \tag{B.9}$$

$$u(t) \le K e^{\alpha t} + L \int_t^0 e^{\alpha(t-s)} u(s)\,ds + M \int_{-\infty}^0 e^{\gamma s} u(t+s)\,ds, \quad t \le 0. \tag{B.10}$$

If $\beta \equiv \dfrac{L}{\alpha} + \dfrac{M}{\gamma} < 1$, then

$$u(t) \le (1 - \beta)^{-1} K\, e^{-[\alpha - (1-\beta)^{-1} L]\,|t|}. \tag{B.11}$$
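A numerical sanity check of Lemma B.1 (our illustration; all constants are our choices): fix constants with β = L/α + M/γ = 1/2, build a function u satisfying (B.9) by iterating the corresponding integral operator, truncated to [0, T] (truncation only decreases the right-hand side, so the fixed point still satisfies the inequality), and verify the bound (B.11).

```python
import numpy as np

K, L, M, alpha, gamma = 1.0, 0.3, 0.2, 1.0, 1.0
beta = L / alpha + M / gamma            # 0.5 < 1, so (B.11) applies
T, m = 30.0, 30001
t = np.linspace(0.0, T, m)
dt = t[1] - t[0]

u = K * np.exp(-alpha * t)
for _ in range(60):                     # contraction factor <= beta per sweep
    # L-term: e^{-alpha t} * cumulative trapezoid integral of e^{alpha s} u(s).
    f = np.exp(alpha * t) * u
    cum = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * dt / 2)))
    term_L = L * np.exp(-alpha * t) * cum
    # M-term: integral_0^inf e^{-gamma s} u(t+s) ds, with u taken as 0 past T.
    g = np.exp(-gamma * t) * u
    segs = (g[1:] + g[:-1]) * dt / 2
    tail = np.concatenate((np.cumsum(segs[::-1])[::-1], [0.0]))
    term_M = M * np.exp(gamma * t) * tail
    u = K * np.exp(-alpha * t) + term_L + term_M

# The bound (B.11): u(t) <= (1-beta)^{-1} K exp(-[alpha - (1-beta)^{-1} L] t).
bound = (1 - beta) ** -1 * K * np.exp(-(alpha - L / (1 - beta)) * t)
assert np.all(u <= bound + 1e-9)
```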

Proof. It suffices to prove the result for (B.9); (B.10) is obtained from (B.9) by the change of variables t → −t, s → −s. Let δ = lim sup_{t→∞} u(t). Since u is nonnegative and bounded, δ < ∞. We claim that δ = 0, i.e. lim_{t→∞} u(t) = 0. If not, then δ > 0. Choose θ with β < θ < 1; then there exists t₁ > 0 such that u(t) ≤ θ⁻¹δ for t ≥ t₁. For t ≥ t₁,

$$\begin{aligned}
u(t) &\le K e^{-\alpha t} + L \int_0^{t_1} e^{-\alpha(t-s)} u(s)\,ds + L \int_{t_1}^{t} e^{-\alpha(t-s)} u(s)\,ds + M\theta^{-1}\delta \int_0^{\infty} e^{-\gamma s}\,ds \\
&\le K e^{-\alpha t} + L e^{-\alpha t} \int_0^{t_1} e^{\alpha s} u(s)\,ds + \left( \frac{L}{\alpha} + \frac{M}{\gamma} \right) \theta^{-1}\delta.
\end{aligned}$$

Letting t → ∞, we obtain a contradiction:

$$\limsup_{t\to\infty} u(t) \le \left( \frac{L}{\alpha} + \frac{M}{\gamma} \right) \theta^{-1}\delta = \beta\theta^{-1}\delta < \delta.$$

Let v(t) = sup_{s ≥ t} u(s). Since u(t) → 0 as t → ∞, for any t ∈ [0, ∞) there exists t₁ ≥ t such that

$$v(t) = v(s) = u(t_1) \;\text{ for } t \le s \le t_1, \qquad v(s) < v(t_1) \;\text{ for } s > t_1.$$


Fig. B.2

Then

$$\begin{aligned}
v(t) = u(t_1) &\le K e^{-\alpha t_1} + L \int_0^{t_1} e^{-\alpha(t_1-s)} u(s)\,ds + M \int_0^{\infty} e^{-\gamma s} u(t_1+s)\,ds \\
&\le K e^{-\alpha t_1} + L \int_0^{t} e^{-\alpha(t_1-s)} u(s)\,ds + L \int_t^{t_1} e^{-\alpha(t_1-s)} u(s)\,ds + M \int_0^{\infty} e^{-\gamma s} v(t+s)\,ds \\
&\le K e^{-\alpha t} + L \int_0^t e^{-\alpha(t-s)} v(s)\,ds + L\,v(t) \int_t^{t_1} e^{-\alpha(t_1-s)}\,ds + v(t)\,M \int_0^{\infty} e^{-\gamma s}\,ds \\
&\le K e^{-\alpha t} + L e^{-\alpha t} \int_0^t e^{\alpha s} v(s)\,ds + \beta v(t).
\end{aligned}$$

Let z(t) = e^{αt} v(t). Then

$$z(t) \le K + L \int_0^t z(s)\,ds + \beta z(t),$$

or

$$z(t) \le (1-\beta)^{-1} K + (1-\beta)^{-1} L \int_0^t z(s)\,ds.$$

Then from Gronwall's inequality it follows that

$$z(t) \le (1-\beta)^{-1} K \exp\left( (1-\beta)^{-1} L t \right).$$

Thus the estimate (B.11) follows.

For the regularity of W^u(0, U), we observe from (B.7), (4.14) and (4.9) that

$$|x^*(0, \xi) - x^*(0, \bar\xi)| \ge |\xi - \bar\xi| - \int_{-\infty}^{0} K\rho(\delta) e^{\alpha s}\, |x^*(s, \xi) - x^*(s, \bar\xi)|\,ds \ge \left( 1 - \frac{4K^2\rho(\delta)}{3\alpha} \right) |\xi - \bar\xi| \ge \frac{1}{2}\,|\xi - \bar\xi|.$$


Thus the mapping ξ ↦ x*(0, ξ) is one-to-one with continuous inverse. Hence W^u(0, U) is a Lipschitz manifold. It is not easy to prove that W^u(0, U) is a C^k manifold when f ∈ C^k; the reader may consult [Hen]. The global stable and unstable manifolds are defined as follows:

$$W^s(0) = \bigcup_{t \le 0} \varphi_t\!\left( W^s_{loc}(0, U) \right), \qquad W^u(0) = \bigcup_{t \ge 0} \varphi_t\!\left( W^u_{loc}(0, U) \right).$$


Bibliography

[ASY] Alligood, K., Sauer, T. and Yorke, J. (1996). Chaos: An Introduction to Dynamical Systems, Springer-Verlag.
[AP] Ambrosetti, A. and Prodi, G. A Primer of Nonlinear Analysis, Cambridge University Press.
[Apo] Apostol, T. M. (1975). Mathematical Analysis, Addison-Wesley.
[Arnold] Arnold, V. (1989). Mathematical Methods of Classical Mechanics, translated by K. Vogtmann and A. Weinstein, 2nd edition, Springer.
[AW] Aronson, D. and Weinberger, H. (1975). Nonlinear diffusion in population genetics, combustion, and nerve conduction, in Partial Differential Equations and Related Topics, ed. J. A. Goldstein, Lecture Notes in Mathematics 446, pp. 5–49, New York: Springer.
[BB] Berger, M. and Berger, M. (1968). Perspectives in Nonlinearity: An Introduction to Nonlinear Analysis, W. A. Benjamin, Inc., New York, Amsterdam.
[BDiP] Boyce, W. E. and DiPrima, R. C. (1986). Elementary Differential Equations and Boundary Value Problems, John Wiley & Sons.
[Che] Cheng, K. S. (1981). Uniqueness of a limit cycle for a predator-prey system, SIAM J. Math. Anal. 12, pp. 541–548.
[CHW] Chi, C. W., Hsu, S. B. and Wu, L. I. (1998). On the asymmetric May-Leonard model of three competing species, SIAM J. Appl. Math., pp. 211–226.
[CL] Coddington, E. A. and Levinson, N. (1955). Ordinary Differential Equations, New York, McGraw-Hill.
[Cole] Cole, J. D. (1968). Perturbation Methods in Applied Mathematics, Waltham, Mass., Blaisdell Pub. Co.
[Cop] Coppel, W. A. (1965). Stability and Asymptotic Behavior of Differential Equations, Heath, Boston.
[E] Evans, L. C. (1998). Partial Differential Equations, Graduate Studies in Mathematics, Vol. 19, p. 234.
[FMcL] Fife, P. C. and McLeod, J. B. (1977). The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Rat. Mech. Anal. 65, pp. 335–361.


[FW] Freedman, H. I. and Waltman, P. (1977). Mathematical analysis of some three-species food-chain models, Math. Biosci. 33(3-4), pp.
[HLW] Hairer, E., Lubich, C. and Wanner, G. (2002). Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, Springer Series in Comput. Mathematics, Vol. 31, Springer-Verlag.
[H1] Hale, J. (1969). Ordinary Differential Equations, New York, Wiley-Interscience.
[H2] Hale, J. (1988). Asymptotic Behavior of Dissipative Systems, American Mathematical Society.
[HK] Hale, J. and Kocak, H. (1991). Dynamics and Bifurcations, Springer-Verlag.
[Ha] Hartman, P. (1964). Ordinary Differential Equations, Wiley, New York.
[Hen] Henry, D. (1981). Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, Vol. 840, Springer-Verlag.
[Hir1] Hirsch, M. (1982). Systems of differential equations which are competitive or cooperative I: limit sets, SIAM J. Math. Anal. 13, pp. 167–179.
[Hir2] Hirsch, M. (1984). The dynamical systems approach to differential equations, Bull. A.M.S. 11, pp. 1–64.
[HS1] Hirsch, M. and Smale, S. (1974). Differential Equations, Dynamical Systems and Linear Algebra, New York, Academic Press.
[HS2] Hirsch, M. and Smith, H. (2005). Monotone dynamical systems, in Handbook of Differential Equations: Ordinary Differential Equations (Second Volume), eds. A. Canada, P. Drabek, A. Fonda, Elsevier, pp. 239–357.
[Hsu] Hsu, S. B. (1978). Limiting behavior of competing species, SIAM J. Appl. Math. 34, pp. 760–763.
[HCH] Hsu, S. B., Cheng, K. S. and Hubbell, S. P. (1981). Exploitative competition of microorganisms for two complementary nutrients in continuous cultures, SIAM J. Appl. Math. 41, pp. 422–444.
[HHW] Hsu, S. B., Hubbell, S. and Waltman, P. (1978). Competing predators, SIAM J. Appl. Math. 35(4), pp. 617–625.
[HH] Hsu, S. B. and Hwang, S. F. (1988). Analysis of large deformation of a heavy cantilever, SIAM J. Math. Anal. 19(4), pp. 854–866.
[HN] Hsu, S. B. and Ni, W. M. (1988). On the asymptotic behavior of solutions of v''(x) + x sin v(x) = 0, Bulletin of Institute of Math. Academia Sinica, pp. 109–114.
[HSW] Hsu, S. B., Smith, H. L. and Waltman, P. (1996). Competitive exclusion and coexistence for competitive systems on ordered Banach space, Trans. A.M.S. 348, pp. 4083–4094.
[HW] Hsu, S. B. and Waltman, P. (1991). Analysis of a model of two competitors in a chemostat with an external inhibitor, SIAM J. Appl. Math. 52, pp. 528–540.


[Hw1] Hwang, T. W. (2003). Global analysis of the predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl. 281(1), pp. 395–401.
[Hw2] Hwang, T. W. (2004). Uniqueness of limit cycles of the predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl. 290(1), pp. 113–122.
[Hxc] Huang, X. C. (1988). Uniqueness of limit cycles of generalised Lienard systems and predator-prey systems, J. Phys. A: Math. Gen. 21, pp. L685–L691.
[IK] Isaacson, E. and Keller, H. (1966). Analysis of Numerical Methods, New York, Wiley.
[JLZ] Jiang, J. F., Liang, X. and Zhao, X. Q. (2004). Saddle-point behavior for monotone semiflows and reaction-diffusion models, J. Differential Equations 203, pp. 313–330.
[JS] Jordan, D. W. and Smith, P. (1977). Nonlinear Ordinary Differential Equations, Oxford, New York, Clarendon Press.
[KF] Kuang, Y. and Freedman, H. I. (1988). Uniqueness of limit cycles in Gause-type models of predator-prey systems, Math. Biosci. 88, pp. 67–84.
[Kee1] Keener, J. (1988). Principles of Applied Mathematics, Addison-Wesley.
[Kee2] Keener, J. P. (1998). Mathematical Physiology, Springer-Verlag.
[KH] Klebanoff, A. and Hastings, A. (1994). Chaos in three species food chains, J. Math. Biol. 32, pp. 427–451.
[KLY] Kellogg, R. B., Li, T. Y. and Yorke, J. A. (1976). A constructive proof of the Brouwer fixed point theorem and computational results, SIAM J. Numerical Analysis 13, pp. 473–483.
[L] De La Llave, R. (2001). A tutorial on KAM theory, Proc. Symp. Pure Math. 69, pp. 175–292.
[Lef] Lefschetz, S. (1943). Existence of periodic solutions for certain differential equations, Proc. Nat. Acad. Sci. 29(1), pp. 29–32.
[LS] Lin, C. C. and Segel, L. A. (1974). Mathematics Applied to Deterministic Problems in the Natural Sciences, Macmillan, NY.
[ML] May, R. and Leonard, W. J. (1975). Nonlinear aspects of competition between three species, SIAM J. Appl. Math. 29, pp. 243–253.
[M-S] Maynard-Smith, J. (1974). Models in Ecology, Cambridge University Press.
[M-PS] Mallet-Paret, J. and Smith, H. (1990). The Poincaré-Bendixson theorem for monotone cyclic feedback systems, J. Dyn. Diff. Eq. 2, pp. 367–421.
[MY] Markus, L. and Yamabe, H. (1960). Global stability criteria for differential systems, Osaka Math. J. 12, pp. 305–317.
[MHO] Meyer, K., Hall, G. and Offin, D. (2009). Introduction to Hamiltonian Dynamical Systems and the N-Body Problem, 2nd edition, Springer.
[MM] Miller, R. K. and Michel, A. (1982). Ordinary Differential Equations, New York, Academic Press.

[Moser] Moser, J. (1968). Lectures on Hamiltonian Systems, Memoirs of the American Mathematical Society, No. 81, American Mathematical Society.
[Mur] Murdock, J. A. (1991). Perturbation Theory and Methods, John Wiley & Sons, Inc., New York.
[Murr] Murray, J. (1989). Mathematical Biology, Springer-Verlag.
[NS] Nemytskii, V. and Stepanov, V. (1960). Qualitative Theory of Differential Equations, Princeton University Press.
[P] Pöschel, J. (2001). A lecture on the classical KAM theorem, Proc. Symp. Pure Math. 69, pp. 707–732.
[O'M] O'Malley, R. (1974). Introduction to Singular Perturbations, New York, Academic Press.
[R] Robinson, C. (1994). Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, CRC Press.
[Sel] Selgrade, J. F. (1982/83). A Hopf bifurcation in single-loop positive-feedback systems, Quart. Appl. Math. 40(3), pp. 347–351.
[Sma] Smale, S. (1980). The Mathematics of Time: Essays on Dynamical Systems, Economic Processes and Related Topics, Springer-Verlag.
[Smi] Smith, H. (1995). Monotone Dynamical Systems, AMS Monographs, Vol. 41.
[ST] Smith, H. L. and Thieme, H. R. (2001). Stable coexistence and bistability for competitive systems on ordered Banach space, J. Differential Equations 176, pp. 195–222.
[SW] Smith, H. and Waltman, P. (1995). The Theory of the Chemostat, Cambridge University Press.
[Stak] Stakgold, I. (1972). Boundary Value Problems of Mathematical Physics, Vol. I, The Macmillan Company.
[Str] Strogatz, S. H. (1994). Nonlinear Dynamics and Chaos, Addison-Wesley.
[V] Viana, M. (2000). What's new on Lorenz strange attractors?, The Mathematical Intelligencer 22(3), pp. 6–19.
[Wal] Waltman, P. (1983). Competition Models in Population Biology, CBMS Vol. 45, SIAM.
[WHK] Wan, Y. H., Hassard, B. D. and Kazarinoff, N. D. (1981). Theory and Applications of Hopf Bifurcation, Cambridge University Press.
[Wh] Whittaker, E. T. (1937). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, with an Introduction to the Problem of Three Bodies, 4th ed., New York: Dover Publications.
[YS] Yakubovich, V. A. and Starzhinskii, V. M. (1975). Linear Differential Equations with Periodic Coefficients, Vols. 1 & 2, John Wiley & Sons.
[Z] Zhao, X. Q. (2003). Dynamical Systems in Population Biology, Springer.
[Zh] Zhang, Z. (1986). Proof of the uniqueness theorem of limit cycle of generalized Lienard equations, Applicable Analysis 23(1-2), pp. 63–76.

Index

ε-approximate solution, 339

Abel's formula, 41, 42, 63
action-angle variables, 321
Adjoint systems, 66
Airy's equation, 194
alpha limit set, 118
Ascoli-Arzela Theorem, 341, 343
asymptotic phase, 95, 96
asymptotic stability, 94, 126, 140, 153
autonomous system, 54, 78, 79, 115

Banach space, 13
Belousov-Zhabotinskii Reaction, 7
Bendixson negative criterion, 153, 154
bistability, 293
bistable equation, 106
Bolzano-Weierstrass Theorem, 13
Bonnet theorem, 337
Brouwer degree, 223
Brouwer fixed point Theorem, 78, 230
Butler-McGehee Lemma, 270

center, 56, 134
characteristic multipliers, 60, 95, 96
comparison theorem, 30
competition model, 8, 29
competitive exclusion, 293
competitive system, 32, 34, 263
completeness of C(I), 14
conservative system, 4, 125, 145, 152
constant of motion, 310
continuation of solutions, 21
contraction mapping principle, 19, 348
cooperative system, 32, 34, 263
cyclic coordinates, 326

delta function, 199
differential form, 322
differential inequality, 28
Diophantine condition, 321
distribution, 199
Duffing's equations, 4
Dulac's criterion, 154, 187
Dynamical system, 4, 149, 263

eccentricity vector, 315
electric network, 4
equicontinuity, 13
equilibrium, 40, 48
Euler's three-body problem, 329
Euler-Lagrange equation, 334
exponential matrix, 44

first integral, 310
Fisher's equation, 102
Floquet exponents, 60
Floquet multipliers, 60
Floquet Theorem, 60
Fredholm alternatives, 67, 208
fundamental matrix, 41, 95

360

ORDINARY DIFFERENTIAL EQUATIONS WITH APPLICATIONS

general linear group, 306
generalized eigenvectors, 50
generalized Lienard equation, 174
generalized momentum, 334
generating function, 325
global existence of the solution, 21
Green's function, 201
Gronwall inequality, 16, 17

Hamilton-Jacobi equation, 326
Hamiltonian flow, 299
Hamiltonian matrix, 303
Hamiltonian system, 299
Hartman-Grobman Theorem, 83
Hill region, 334
Hill's equation, 40, 62
Homotopy invariance, 217, 219, 220
Hopf Bifurcation, 177

improper node, 57
index theory, 215, 222
initial value problem (I.V.P.), 11
Inner expansion, 245, 259
integrable system, 310
integral of motion, 310
invariance principle, 127, 130
invariant set, 117, 130

Jacobi identity, 311
Jacobi integral, 334
Jordan block, 49
Jordan Curve Theorem, 150, 155

KAM theory, 321
Kamke Theorem, 31
KPP Equation, 102

Lagrangian, 332, 334
LaSalle's invariance principle, 130
law of mass action, 6, 255
Legendre transformation, 332, 334
Levinson-Smith Theorem, 160
Lie algebra, 311
Lienard equation, 160
linear stability by Lyapunov method, 127
linear system, 39
  with constant coefficients, 45
  with constant coefficients, 2
  with nonhomogeneous part, 2
linearization, 40, 79, 219
linearly coupled oscillation, 301
Liouville's formula, 41, 95
Lipschitz condition, 18, 207
local existence, 11, 14
locally Hamiltonian systems, 317
logistic equation, 241
Lorenz equation, 6
Lotka-Volterra model, 8
Lyapunov function, 115

matching, 245
Mathieu's equation, 62
maximal interval of existence, 11, 21
Michaelis-Menten Enzyme Kinetics, 6
monotone dynamical system, 263
motion of a spring with damping and restoring forces, 2

n-th order linear equation, 43
node, 54, 56, 85, 104, 219
normal form, 178
norms, 11, 46

omega limit set, 120
orbital stability, 78, 94
ordered Banach space, 291
outer expansion, 245

periodic linear system, 95
periodic system, 1, 40, 58, 63
Pfaffian, 303
Picard iteration, 21
Poincaré-Bendixson Theorem, 149, 269
Poisson bracket, 312
Poisson structure, 310

regular perturbation, 237
relaxation of the van der Pol oscillator, 171
resonance, 3, 70, 76, 251


Rock-Scissor-Paper model, 273
Routh-Hurwitz criterion, 54

saddle point, 55, 78
  property, 88
Sard's Theorem, 224
Second order linear equation, 191
Simple elementary divisors, 52, 79
Simple pendulum equation, 5
Singular perturbation, 243, 249
special linear group, 306
stability, 40, 77, 79
stable coexistence, 293
stable focus, spiral, 56
stable manifold, 55
stable manifold theorem, 92, 102, 347
stable node, 54
Sturm majorant, 192
Sturm's Comparison Theorem, 191
Sturm-Liouville boundary value problem, 191
Subcritical Hopf Bifurcation, 181
Supercritical Hopf Bifurcation, 181
symplectic coordinates, 319
symplectic group, 306
symplectic matrix, 305
symplectic polar coordinates, 321
symplectic transformation, 316


test function, 199, 202
transversal, 97
Traveling wave solution, 102
two timing, 249
two-center problem, 329
two-dimensional linear autonomous system, 54

uniform persistence, 270
uniqueness of limit cycles, 160, 164, 165, 175
uniqueness of solutions of I.V.P., 14, 18, 20
unstable focus, spiral, 56
unstable manifold, 55, 88, 105, 353
unstable node, 55, 85, 86

van der Pol oscillator, 4, 252
variation of constant formula, 45, 48, 68, 81, 92
variational equation, 26
variational matrix, 80

winding number, 216
Wronskian, 41, 43

Zorn's Lemma, 22