Contents
Introduction
1. Asymptotic expansions and series
1.1 Definitions of Asymptotic Series and Examples
1.1.1 An Example of Divergent Series
1.1.2 Order Operators
1.1.3 Calibration Sequence. Asymptotic Series
1.1.4 Problems
1.2 Summation of Asymptotic Series
1.2.1 Asymptotic Representation of Functions
1.2.2 Theorem on the Uniqueness of Asymptotic Expansion
1.2.3 Theorem on Existence of a Function with the Given Asymptotic Expansion
1.2.4 Problems
1.3 Laplace Method and Gamma Function
1.3.1 Asymptotic Expansion of Integral When Integrand Exponent Does Not Contain Extrema
1.3.2 Asymptotic Expansion of Integral, When Integrand Exponent Contains Extrema
1.3.3 Derivation of Integral Formula for Gamma Function
1.3.4 Moivre–Stirling Formula
1.3.5 Problems
1.4 Fresnel Integral and Stationary Phase Method
1.4.1 Riemann Lemma
1.4.2 Fresnel Integral Formulae
1.4.3 Large Values of Argument
1.4.4 Method of Stationary Phase
1.4.5 Problems
1.5 Airy Function and Its Asymptotic Expansion
1.5.1 Airy’s Equation
1.5.2 An Integral Representation of General Solution for Airy Equation
1.5.3 Asymptotic Expansion for the Airy Function as z → –∞
1.5.4 Saddle-Point Method and the Airy Function Asymptotic Expansion as z → ∞
1.5.5 Problem
1.6 Functions of Parabolic Cylinder
1.6.1 Parabolic Cylinder Equation
1.6.2 Integral Representation
1.6.3 Connection Formulae at Different Values of Parameter
1.6.4 Values at Origin of Coordinates
1.6.5 Problems
1.7 WKB Method
1.7.1 Application of WKB Method for Ordinary Differential Equation of the Second Order
1.7.2 Justification of Constructed Asymptotic Expansion
1.7.3 Quasi-Classical Asymptotic Expansion and Transition Points
1.7.4 Essentially Singular Points of Differential Equation and the Stokes Phenomenon
1.7.5 Problems
2. Asymptotic methods for solving nonlinear equations
2.1 Fast Oscillating Asymptotic Expansions for Weakly Nonlinear Case
2.1.1 Asymptotic Substitution
2.1.2 Equations for Leading-Order Term and First-Order Correction Term
2.1.3 Equation for the nth Term of Formal Series and Domain of Validity
2.1.4 Justification of Asymptotic Series
2.1.5 Problems
2.2 Boundary Layer Method
2.2.1 Asymptotic Solution
2.2.2 Boundary Layer
2.2.3 Justification of Asymptotic Solution
2.2.4 Problems
2.3 Catastrophes and Regular Expansions
2.3.1 A Formal Series with Respect to Powers of Small Parameter
2.3.2 Justification of Asymptotic Series
2.3.3 Non-analytic Perturbation
2.3.4 Matching of the Asymptotic Series for the Root of Cubic Equation
2.3.5 Compound Asymptotic Expansion for Root of Cubic Equation
2.3.6 Problems
2.4 Weierstrass Function
2.4.1 Differential Equation
2.4.2 Double Periodicity
2.4.3 A Representation by Series
2.4.4 Evenness
2.4.5 Liouville’s Theorem
2.4.6 Problem
2.5 Jacobi Elliptic Functions
2.5.1 Sin-Amplitude Function
2.5.2 Periodicity
2.5.3 Jacobi Elliptic Functions
2.5.4 Regular Expansion in the Neighbourhood of Zero Value of Argument
2.5.5 Regular Expansion in the Neighbourhood of Zero Value of Parameter
2.5.6 Regular Expansion in the Neighbourhood of k = 1
2.5.7 Problems
2.6 Uniform Asymptotic Behaviour of Jacobi-sn Near a Singular Point. The Lost Formula from Handbooks for Elliptic Functions
2.6.1 The Asymptotic Behaviour of the Period
2.6.2 Asymptotic Behaviour on a Regular Part of Trajectory
2.6.3 Asymptotic Behaviour Near Turning Point
2.6.4 Uniform Asymptotic Expansion
2.7 Mathieu’s and Lamé’s Functions
2.7.1 Hill’s Equation and Floquet’s Theory
2.7.2 Examples
2.7.3 Mathieu Functions
2.7.4 Construction of Mathieu’s Functions
2.7.5 Lindemann Form of Mathieu’s Equation
2.7.6 Special Case of Lamé’s Equation
2.7.7 Degenerate Case
2.7.8 Problems
3. Perturbation of nonlinear oscillations
3.1 Regular Perturbation Theory for Nonlinear Oscillations
3.1.1 Properties of Solutions for Unperturbed Equation
3.1.2 Formal Asymptotic Expansion for Solution
3.1.3 Solution of Nonlinear Equation for Primary Term
3.1.4 Homogeneous Linearized Equation
3.1.5 Non-homogeneous Linearized Equation
3.1.6 First Correction for Perturbation Theory
3.1.7 Second Correction in Formula (3.8)
3.1.8 Causes Leading to the Growing of Corrections
3.1.9 Problems
3.2 Fast and Slow Variables
3.2.1 Two-Scaling Method
3.2.2 Isochronous Oscillations
3.2.3 An Averaging of Isochronous Oscillations
3.2.4 Transcendental Equation for Parameter
3.2.5 Equations for Parameters of Averaging
3.2.6 Problems
3.3 Krylov–Bogolyubov Method
3.3.1 Asymptotic Substitution
3.3.2 Formula for Leading-Order Term of Asymptotic Expansion
3.3.3 Solution of Linearized Equation
3.3.4 Periodic Solution for the First Correction
3.3.5 Problems
3.4 Higher-Order Terms in Krylov–Bogolyubov Method
3.4.1 Second Correction of Perturbation Theory
3.4.2 Periodic Solution of Equation for nth Correction
3.5 Interval of Validity for Krylov–Bogolyubov’s Ansatz
3.5.1 Small Neighbourhoods of a Centre
3.5.2 Neighbourhood of Separatrix and Saddle
3.5.3 Asymptotic Solution of a Cauchy Problem
4. Nonlinear oscillator in potential well
4.1 Nonlinear Oscillator Near Separatrix
4.1.1 Change to Simple Form
4.1.2 Qualitative Behaviour and Numerical Analysis
4.2 Asymptotic Solution Close to Separatrix
4.2.1 Construction of Germ Asymptotic Expansion
4.2.2 Behaviour of Correction Terms in the Neighbourhood of the Separatrix
4.2.3 Asymptotic Expansion Near Saddle Point
4.2.4 Asymptotic Solution in the Neighbourhood of Lower Separatrix
4.2.5 Parameters of Equation and Cantor Set
4.3 Oscillations with External Force into Potential Well
4.3.1 An Asymptotic Problem of a Capture
4.4 Non-resonant Regions of Parameter
4.4.1 An Equation for Averaged Action
4.4.2 The Substitution of Krylov–Bogolyubov
4.4.3 Linearized Equation
4.4.4 Construction of the First Correction Term
4.4.5 Construction of the Second Correction Term
4.4.6 Resonances in Higher-Order Correction Terms
4.5 Asymptotics in Resonant Regions
4.5.1 Formal Derivation of Nonlinear Resonance Equation
4.5.2 Inner Asymptotic Expansion
4.5.3 Capture into Resonance
4.5.4 Asymptotic Solutions of the Equation of Nonlinear Resonance
4.5.5 Matching of Asymptotic Expansions
4.5.6 Asymptotic Solution of the Capture Problem
5. Autoresonances in nonlinear systems
5.1 Problems of Autoresonance
5.1.1 The Arising of Autoresonance
5.1.2 Autoresonant Asymptotic Expansions and Scattering Problem
5.1.3 Cut-Off of a Resonant Growth
5.2 Threshold of Amplitude for Autoresonant Pumping
5.2.1 Autoresonant Solution
5.2.2 Asymptotic Substitution
5.2.3 Stability of Autoresonant Solution
5.3 Capture Into Autoresonance
5.3.1 Setting of the Problem for Trajectories of Large Amplitude
5.3.2 Numeric Results and Instability
5.3.3 Oscillations Far from the Capture
5.4 A Search for a Suitable Asymptotic Expansion
5.4.1 Asymptotic Expansion Towards Bifurcation
5.4.2 Asymptotic Expansion in Bottleneck
5.4.3 Connection Formulas for Perturbed System
5.5 A Thin Manifold of Captured Trajectories
5.5.1 Slowly Varying Equilibrium Points
5.5.2 A Rough Conservation Law
5.5.3 Breaking Up of Separatrix
5.6 Asymptotic Solution of the Capture Problem
5.6.1 Matching to Bottleneck Asymptotic Expansion
5.6.2 Numerical Investigations
5.6.3 Asymptotic and Numerical Points of View
5.7 Capture into Parametric Resonance
5.7.1 Numeric Analysis
5.7.2 Qualitative Analysis
5.8 WKB Solution Before the Capture
5.8.1 The WKB Solution Close to Zero
5.8.2 Construction of the WKB Solution for Nonlinear Equation (5.40)
5.9 The Painlevé Layer
5.9.1 The Asymptotic Expansion in the Painlevé Layer
5.9.2 Matching with the WKB Asymptotic Expansion
5.10 The Captured WKB Asymptotic Solution
5.10.1 Slowly Varying Solutions
5.10.2 WKB Asymptotic Expansion Close to the Slowly Varying Centres
6. Asymptotics for loss of stability
6.1 Hard Loss of Stability in Painlevé-2 Equation
6.1.1 Naive Statement of the Problem
6.1.2 Matched Asymptotics for the Solution
6.1.3 The Outer Algebraic Asymptotics
6.1.4 The Domain of Validity of the Algebraic Asymptotic Solution
6.2 The Inner Asymptotics
6.2.1 First Inner Expansion
6.2.2 Second Inner Expansion
6.2.3 Dynamics in the Internal Layer
6.2.4 The Asymptotics of the Inner Expansions as 4 → ∞
6.3 Fast Oscillating Asymptotics
6.3.1 The Krylov–Bogolyubov Approximation
6.3.2 Degeneration of the Fast Oscillating Asymptotics
6.3.3 The Domain of Validity of the Fast Oscillating Asymptotics
6.3.4 The Matching of the Fast Oscillating Asymptotic Solution and the Inner Asymptotics
6.4 An Asymptotic Solution Slowly Crossing the Separatrix Near a Saddle-Centre Bifurcation Point
6.4.1 Typical Problems for the Autoresonance
6.4.2 Three Types of Algebraic Solutions
6.5 Expansions in Bifurcation Layer
6.5.1 Initial Interval
6.5.2 The Bifurcation Layer in the Case of Bounded k
6.5.3 The Intermediate Expansion for Large k
6.6 Fast Oscillating Asymptotic Expansion
6.6.1 Family of the Fast Oscillating Solutions
6.6.2 The Confluent Asymptotic Solution
6.6.3 The Domain of Validity of Confluent Asymptotic Solution as t → t* – 0
6.6.4 Matching of the Asymptotics
6.6.5 Asymptotic Behaviours
6.7 Dissipation Is Cause for Halt of Resonant Growth
6.7.1 Setting of the Problem
6.7.2 Asymptotics of Autoresonant Growth Under Dissipation
6.7.3 Stability of Autoresonant Growth
6.7.4 Vicinity of the Break of Autoresonant Growth
6.8 Break of Autoresonant Growth
6.8.1 Fast Motion
6.8.2 Formal Approach to Answer
6.9 Open Problems
6.9.1 Hierarchy of Equations in Transition and Painlevé Equations
7. Systems of coupled oscillators
7.1 The Autoresonance Threshold in a System of Weakly Coupled Oscillators
7.1.1 Statement of the Problem and Result
7.1.2 Asymptotic Reduction to the System of Primary Resonance Equations
7.1.3 Algebraic Asymptotic Solutions
7.1.4 Neighbourhoods of Equilibrium Positions
7.2 Forced Nonlinear Resonance in a System of Coupled Oscillators
7.2.1 Results
7.2.2 Formal Constructions for ε ≪ 1
7.2.3 Analysis of Asymptotic Solution for λ ≫ 1
Bibliography
Index

Sergey G. Glebov, Oleg M. Kiselev, Nikolai N. Tarkhanov
Nonlinear Equations with Small Parameter

De Gruyter Series in Nonlinear Analysis and Applications

Editor in Chief
Jürgen Appell, Würzburg, Germany

Editors
Catherine Bandle, Basel, Switzerland
Alain Bensoussan, Richardson, Texas, USA
Avner Friedman, Columbus, Ohio, USA
Mikio Kato, Nagano, Japan
Wojciech Kryszewski, Torun, Poland
Umberto Mosco, Worcester, Massachusetts, USA
Louis Nirenberg, New York, USA
Simeon Reich, Haifa, Israel
Alfonso Vignoli, Rome, Italy
Katrin Wendland, Freiburg, Germany

Volume 23/1

Sergey G. Glebov
Oleg M. Kiselev
Nikolai N. Tarkhanov

Nonlinear Equations with Small Parameter
Volume 1: Oscillations and Resonances

Mathematics Subject Classification 2010
34-02, 34A34, 34C15, 34D05, 34D20, 34E05, 34E10, 34E15, 34E20, 70K30

Authors
Prof. Dr. Sergey Glebov
Chair of Mathematics
Ufa State Petroleum Technological University
Faculty of General Scientific Discipline
Kosmonavtov St. 1
Ufa 450062
Russian Federation
[email protected]

Prof. Dr. Nikolai Tarkhanov
Universität Potsdam
Institut für Mathematik
Am Neuen Palais 10
14469 Potsdam
Germany
[email protected]

Prof. Dr. Oleg M. Kiselev
Russian Academy of Sciences
Ufa Scientific Centre
Institute of Mathematics
Chernyshevsky str. 112
Ufa 450008
Russian Federation
[email protected]

ISBN 978-3-11-033554-5
e-ISBN (PDF) 978-3-11-033568-2
e-ISBN (EPUB) 978-3-11-038272-3
Set-ISBN 978-3-11-033569-9
ISSN 0941-813X

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2017 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: Integra Software Services Pvt. Ltd.
Printing and binding: CPI books GmbH, Leck
Printed on acid-free paper
Printed in Germany
www.degruyter.com

Preface

The book presents new methods for constructing global asymptotics of solutions to nonlinear equations with a small parameter. These methods make it possible to match asymptotics of various types with each other in transition regions and to obtain unified formulas connecting the characteristic parameters of approximate solutions. This approach underlies modern asymptotic methods and gives deep insight into crucial nonlinear phenomena of the natural sciences: the onset of chaos in dynamical systems, incipient solitary and shock waves, and oscillatory processes in crystals, engineering constructions and quantum systems. Apart from their independent interest, the approximate solutions serve as a reliable basis for testing numerical algorithms.

The book is naturally divided into two volumes. In the first volume we discuss asymptotic methods mostly by the example of ordinary differential equations. The second volume is devoted to applications of asymptotic methods to partial differential equations. Prerequisites for reading this book are some acquaintance with the elementary theory of ordinary differential equations and a good knowledge of classical analysis. The main concepts and methods should be comprehensible to graduate students of physics and engineering sciences.

We use and present many ideas going back to diverse mathematicians, and we have tried to describe the historical development to the best of our knowledge. However, we have probably failed at many instances.

This book grew out of the results of the authors’ joint project “Nonlinear differential equations with small parameter”, supported by the German Research Foundation (DFG) and the Russian Foundation for Basic Research (RFBR) in 2007–2014 and partially by the Russian Science Foundation (grant 14-11-00078) in 2014–2016. O.M. Kiselev is also grateful to his wife Irina for her support while working on this book. The help of all these institutions, friends and colleagues was invaluable.

Introduction

Mathematical equations arising from the natural sciences contain parameters, e.g., the Planck constant ℏ. This is a small parameter, while using the Fourier transform in a group of variables leads to equations including covariables that are to be treated as large parameters. Analysis on manifolds with singularities motivates the study of problems containing both small and large parameters: on resolving a singularity and freezing the “coefficients” one arrives at characteristic equations that involve both the distance to the singularity and the covariables along its smooth strata. On substituting λ = 1/ε for a large parameter λ one passes to a small parameter ε, and conversely. However, this formal substitution often leads to confusion in interpreting the nature of the original problem. Unless otherwise stated, we focus our attention on problems with a small parameter.

Generally speaking, one gains from introducing a small parameter, for its presence gives rise to a special ansatz for the solution. Even if an equation contains no small parameter, it is often worthwhile to create one. In mathematics, asymptotic methods are used where it is impossible to solve a problem explicitly by exact methods and where no numerical methods are applicable. The purpose of the book is to guide the reader from the foundations of asymptotic methods up to modern investigations using asymptotics and perturbation theory in nonlinear equations. The exposition is based on inductive reasoning, i.e., a simple progression from particular instances to broader generalizations. The authors believe that a comprehensive example elaborated in detail is often worth several theorems.

We study a nonlinear equation f(x, ε) = 0 with a small parameter ε > 0, where f : X × [0, ε₀) → Y is a continuous mapping of metric spaces. Continuous solutions to this equation are curves x : (0, ε₀) → X in the metric space X. If a solution x = x(ε) has a limit x₀ ∈ X as ε → 0, then f(x₀, 0) = 0.
The equation f(x, 0) = 0 is referred to as unperturbed, while f(x, ε) = 0 is called its perturbation. The perturbation f(x, ε) = 0 is said to be regular if every solution x = x(ε) converges in X as ε → 0; otherwise the perturbation is called singular. For instance, the equation x² = ε is regularly perturbed, while εx² = 1 is obviously a singular perturbation. For regular perturbations, the formula x(ε) = x₀ + o(1) gives an approximate solution to f(x, ε) = 0 for small ε > 0, where o(1) → 0 in the metric of X as ε → 0. In applications, X is often a function space. Then the regularity of the perturbation automatically means that every solution x = x(ε) of f(x, ε) = 0 converges in the norm of X as ε → 0, e.g., uniformly in the corresponding variable. By the very definition, a perturbation f(x, ε) = 0 is regular if there is no curve x = x(ε) satisfying this equation.

If both X and Y are normed spaces, a more advanced analysis is possible, which invokes differential calculus. Assume f is differentiable on X × {0}. One looks for a solution to f(x, ε) = 0 of the form x(ε) = x₀ + x₁ε + o(ε) for small ε > 0, where x₀ and x₁ are fixed elements of X. On substituting this ansatz into the equation one obtains

    f(x₀, 0) + (f_x′(x₀, 0)x₁ + f_ε′(x₀, 0))ε + o(ε) = 0

for all ε ∈ (0, ε₀). Equating the coefficients of the same powers of ε on both sides yields

    f(x₀, 0) = 0,
    f_x′(x₀, 0)x₁ = –f_ε′(x₀, 0).                                  (0.1)

By the implicit function theorem, if the unperturbed equation f(x₀, 0) = 0 has a solution x₀ ∈ X such that the linear map f_x′(x₀, 0) : X → Y is invertible, then there is precisely one curve x = x(ε), defined in a small interval I around ε = 0, satisfying f(x(ε), ε) = 0 for ε ∈ I and x(0) = x₀. Moreover, the curve is differentiable at ε = 0. Hence it follows that the perturbation is regular, and the formula x(ε) = x₀ + x′(0)ε + o(ε) provides an approximate solution to f(x, ε) = 0 for small ε > 0.

If the unperturbed equation f(x₀, 0) = 0 has no solutions while the perturbed equation f(x, ε) = 0 has at least one, then the perturbation is singular. As one example we mention the solutions x(ε) = ±1/√ε of εx² = 1 for ε > 0, both diverging to ±∞ as ε → 0. Since a singularly perturbed problem possesses a solution for all sufficiently small ε > 0, one can choose any sufficiently small ε′ > 0 and an approximate solution of the form x′(ε) = x′₀ + o(1) valid for ε′ ≤ ε < ε₀, where o(1) → 0 in X as ε → ε′. On the other hand, to construct an approximate solution x″(ε) of the perturbed equation on a small interval (0, ε″) with ε′ < ε″ < ε₀, one applies a family T(ε) of transformations Z → X such that f(T(ε)z, ε) = 0 reduces to a regularly perturbed equation for z ∈ Z, with the parameter ε running over the interval (0, ε″). If z(ε) = z₀ + o(1) is an approximate solution to the equation f(T(ε)z, ε) = 0 for small ε > 0, then the curve x″(ε) = T(ε)(z₀ + o(1)) provides an approximate solution to f(x, ε) = 0 for ε ∈ (0, ε″). Even if the equation f(x, ε) = 0 has a unique solution for each parameter value in (ε′, ε″), this is no longer true for approximate solutions. Therefore, the asymptotic solutions x′(ε) and x″(ε) should be matched on the common interval (ε′, ε″), which gives a deeper insight into asymptotic analysis. Basically this means rearranging the Taylor expansion of x″(ε) around ε = ε′.
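System (0.1) is easy to check numerically on a scalar example. The sketch below uses the equation f(x, ε) = x³ + x − ε, which is our own illustration (it does not come from the book): the unperturbed root is x₀ = 0, f_x′(0, 0) = 1 is invertible, and (0.1) gives x₁ = 1, so x(ε) ≈ ε with an error of order ε³.

```python
# Numerical sanity check of system (0.1) on a concrete regular
# perturbation.  The example f(x, eps) = x**3 + x - eps is a hypothetical
# illustration chosen here, not an example from the text.

def f(x, eps):
    return x**3 + x - eps

def newton_root(eps, x=0.0, steps=50):
    """Solve f(x, eps) = 0 by Newton's method, starting from the
    unperturbed root x0 = 0."""
    for _ in range(steps):
        x -= f(x, eps) / (3 * x**2 + 1)
    return x

# Unperturbed equation f(x0, 0) = x0**3 + x0 = 0 gives x0 = 0.
x0 = 0.0
# System (0.1): f_x'(x0, 0) * x1 = -f_eps'(x0, 0), i.e. 1 * x1 = 1.
x1 = 1.0

eps = 1e-3
exact = newton_root(eps)
approx = x0 + x1 * eps
# The discrepancy is o(eps); for this cubic it is of order eps**3.
print(abs(exact - approx))
```

Running this for ε = 10⁻³ shows a discrepancy of about 10⁻⁹, confirming that the two-term ansatz resolves the solution up to o(ε).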
In the applied sciences it often suffices to find only one or at most two terms of the approximate solution x(ε) = x₀ + x₁ε + o(ε) to the equation f(x, ε) = 0 for small ε. However, one encounters familiar counterevidence, too. When one fails to construct an explicit solution to f(x, ε) = 0, it makes sense to search for an element

    x(ε) = ∑_{n=0}^{∞} xₙ εⁿ                                       (0.2)

of X which satisfies the equation approximately. Under certain hypotheses on f one obtains

    f(∑_{n=0}^{∞} xₙ εⁿ, ε) = ∑_{n=0}^{∞} fₙ εⁿ,                   (0.3)

where the coefficients f₀ = f(x₀, 0) and fₙ = f_x′(x₀, 0)xₙ + yₙ(x₀, …, xₙ₋₁) with n ≥ 1 are explicitly determined by f. Series (0.2) is called a formal asymptotic solution of the equation f(x, ε) = 0 if all coefficients fₙ vanish. This reduces to the system

    f(x₀, 0) = 0,
    f_x′(x₀, 0)xₙ = –yₙ(x₀, …, xₙ₋₁)

for all n = 1, 2, …, which generalizes eq. (0.1). In singular perturbation theory, series (0.2) is said to be an outer asymptotic expansion. This designation stems from hydrodynamics, where one considers the steady flow of a fluid of small viscosity about a streamlined body. One might expect that the partial sums of series (0.2) approximate the true solution x(ε) well for small ε > 0; in other words, that series (0.2) is an asymptotic series of x(ε) as ε → 0. It should be noted that no convergence of the series is assumed, let alone for ε bounded away from zero.

Most asymptotic methods are very flexible, and it is usually impossible to formulate a single theorem covering all possible applications of a given method. Any attempt at such a generalization would only restrict the method’s potential. In this book, we try to say as much as possible in each concrete situation. Hence the choice of material is quite arbitrary and subject to the experience of the authors.

Chapter 1 contains a traditional introduction to asymptotic analysis based on simple, preferably exactly solvable, examples. Chapter 2 presents a short discussion of asymptotic methods for nonlinear ordinary differential equations. This chapter also gives asymptotics of elliptic, Lamé and Mathieu functions, which are often used as leading-order terms of asymptotic solutions to more general nonlinear differential equations. Chapter 3 focuses on asymptotic solutions to nonlinear nonautonomous equations of order 2. The methods of perturbation theory are well motivated by celestial mechanics, not to mention Newton’s second law of motion.
Basically, the approach consists of two steps: an appropriate choice of slow and fast variables, and the study of oscillations first in the slow variable and then in the fast one. While the material of the first three chapters is traditional, Chapter 4 and the subsequent chapters are based on the authors' own research. Chapter 4 develops perturbation theory to describe the behaviour of trajectories in a small neighbourhood of a separatrix, which applies to the problem of capture into resonance in a potential well. Chapters 5, 6 and 7 are about autoresonance in nonlinear systems; autoresonance is nowadays thought of as a universal phenomenon which occurs in a broad range of oscillating physical systems. Series (0.2) need not converge even for analytic data of the problem unless the perturbation is regular. In case the data are analytic, one can use analytic continuation in the parameter $\varepsilon$ to obtain a solution on the whole interval $[0, \varepsilon_0)$. Otherwise one has to establish the asymptotic character of the series in order to make use of the formal solution given by series (0.2). There is no efficient way to do this in general, but there are some tricky recipes


for very particular classes of problems. On using eq. (0.3), one tries to minimize the discrepancy $f(S_N(\varepsilon), \varepsilon)$ of the $N$-th partial sum $S_N(\varepsilon)$ of series (0.2). If $x = x(\varepsilon)$ is a true solution of the equation $f(x, \varepsilon) = 0$ for all nonzero $\varepsilon < \varepsilon_0$ and $r_N(\varepsilon) = x(\varepsilon) - S_N(\varepsilon)$ is the error term for $N = 0, 1, \ldots$, then the asymptoticity of series (0.2) means that $r_N = O(\varepsilon^{N+1})$ as $\varepsilon \to 0$. Even if $\|f(S_N(\varepsilon), \varepsilon)\|_Y \le c\,\varepsilon^{N+1}$ with $c$ a constant independent of $\varepsilon$ and $N$, it is not possible to deduce from this that $r_N = O(\varepsilon^{N+1})$. To this end, one needs a certain continuity of the inverse mapping $f^{-1}: Y \to X$ for each fixed $\varepsilon \in (0, \varepsilon_0)$. Generically, such stability theorems fail. As eq. (0.3) suggests, perturbation theory reduces nonlinear equations with a small parameter to linear ones, provided that one succeeds in finding a good substitution of the solution of the corresponding unperturbed problem.

1 Asymptotic expansions and series

In this chapter we explain the basic ideas behind the construction of asymptotic approximations. The framework of this chapter is the study of examples and the definitions of asymptotic notations. The main aims of this chapter are as follows:
– show the naturalness of the appearance of asymptotic series;
– give definitions of the basic notions of asymptotic analysis;
– explain the regularity property of asymptotic series and of combined asymptotic series.

This chapter is based on lectures for students of applied mathematics. We consider examples in detail, although some technical calculations are not presented; the reader can perform such evaluations with the help of computer algebra systems. The problems given at the end of the sections are directed at the comprehension and deepening of the obtained knowledge. Below we discuss asymptotic expansions of integrals by the Laplace method and by the stationary phase method; these methods are often used to study the asymptotic behaviour of solutions of linear differential equations. Finally, we explain the Wentzel–Kramers–Brillouin (WKB) approximation for solutions of ordinary differential equations. In this chapter we study:
– asymptotic expansions of the Fresnel integral;
– asymptotic expansions of solutions of the Airy equation; and
– asymptotic expansions of solutions of the parabolic cylinder equation.

These solutions often appear in studies of weakly nonlinear equations, where they play the role of the leading-order term of asymptotic expansions. Here we show how the asymptotic methods work in well-known cases; this helps students to understand how to apply asymptotic methods to special functions.

1.1 Definitions of Asymptotic Series and Examples
In this section we present an example of a divergent series and show that this series is nevertheless convenient for the study of an integral at large values of a parameter. We give the definitions of a calibration sequence and of an asymptotic series.

1.1.1 An Example of Divergent Series
Consider an example connected with the properties of the Fresnel sine integral at large values of the argument:
DOI 10.1515/9783110335682-001


\[ I(x) = \int_x^{\infty} \sin(y^2)\,dy. \tag{1.1} \]

This integral often appears in applications and is a miracle of asymptotic theory. Integral (1.1) converges for all real values of $x$: split the interval of integration into intervals where $\sin(y^2)$ has fixed sign, and present the integral as a sum of integrals over these intervals. This gives an alternating series with decaying terms, and the Leibniz criterion yields convergence of the series. The integrand is simple, but the integral cannot be evaluated in terms of elementary functions, and direct numerical evaluation is difficult due to the fast oscillations of the integrand as $x \to \infty$. To estimate the integral for large values of $x$ it is convenient to use integration by parts:
\[
\int_x^{\infty} \sin(y^2)\,dy = -\int_x^{\infty} \frac{d\cos(y^2)}{2y}
= \frac{\cos(x^2)}{2x} - \int_x^{\infty} \frac{\cos(y^2)}{2y^2}\,dy
= \frac{\cos(x^2)}{2x} + \frac{\sin(x^2)}{4x^3} + \int_x^{\infty} \frac{3\,d\cos(y^2)}{8y^5}.
\]

The order of the last term is estimated as follows:
\[
\Big|\int_x^{\infty} \frac{3\,d\cos(y^2)}{8y^5}\Big| = \frac{3}{4}\Big|\int_x^{\infty} \frac{\sin(y^2)}{y^4}\,dy\Big| \le \frac{C}{x^4},
\]
where $C = \mathrm{const} > 0$ (one more integration by parts shows that the left-hand side is in fact $O(x^{-5})$). It gives the formula
\[
\Big|\int_x^{\infty} \sin(y^2)\,dy - \frac{\cos(x^2)}{2x} - \frac{\sin(x^2)}{4x^3}\Big| \le \frac{C}{x^4}
\]
as $x \to \infty$. We can improve this formula: for this purpose we integrate by parts step by step. The consecutive integration of eq. (1.1) by parts gives a sum of two series:

\[
\int_x^{\infty} \sin(y^2)\,dy \sim \frac{\cos(x^2)}{2x}\sum_{n=0}^{\infty}(-1)^n\,\frac{3\times5\times7\times\cdots\times(4n-1)}{2^{2n}\,x^{4n}}
+ \frac{\sin(x^2)}{4x^3}\sum_{n=0}^{\infty}(-1)^n\,\frac{3\times5\times7\times\cdots\times(4n+1)}{2^{2n}\,x^{4n}}.
\tag{1.2}
\]
(Here and below the product corresponding to $n = 0$ is understood to equal 1.)

The reader will note that these series are not suitable for computing the integral: both series diverge for all real values of $x$. This is the reason for the equivalence sign $\sim$.
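The two-term approximation obtained above is easy to check numerically. The sketch below (the quadrature step and the test point $x = 10$ are arbitrary choices) computes the tail integral through the value $\int_0^\infty \sin(y^2)\,dy = \sqrt{\pi/8}$ derived later in Section 1.4:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

x = 10.0
# I(x) = int_x^inf sin(y^2) dy = int_0^inf - int_0^x, with int_0^inf sin(y^2) dy = sqrt(pi/8)
tail = math.sqrt(math.pi / 8) - simpson(lambda y: math.sin(y * y), 0.0, x, 200_000)
two_terms = math.cos(x * x) / (2 * x) + math.sin(x * x) / (4 * x**3)
print(tail, two_terms)  # the two values agree to better than 1e-4
```

Despite the divergence of the full series, its first two terms already reproduce the integral with the error $O(x^{-4})$ predicted above.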


Mathematicians of the eighteenth century such as Leonhard Euler, Isaac Newton and Gottfried Leibniz, as well as mechanicians, astronomers and astrologers, used segments of divergent series in calculations. Examples of divergent series were known: the divergence of the series in Stirling's formula for the approximation of the factorial was mentioned in a letter of Bayes [12]. Using divergent series became impossible after the reconsideration of the foundations of analysis by Carl Gauss and Augustin Cauchy [24]. It was found that many established results required re-thinking, for example the Moivre–Stirling formula for the approximation of the factorial [29, 143] and Euler's formula for the harmonic sequence [33]. At the end of the nineteenth century, Henri Poincaré, trying to understand celestial mechanics and to explain it mathematically, found that segments of divergent series give good approximations that agree well with observations. An example of this kind of application was the discovery of Neptune from Le Verrier's calculations. Poincaré's efforts gave rise to the idea of an asymptotic series and allowed him to formulate the basis of the new celestial mechanics [138].

The series in eq. (1.2) have two important properties. First, the ratio of each term to the preceding one tends to zero for any $n$ as $x \to \infty$. Denote the terms of the first series by $a_n$ and the terms of the second series by $b_n$. Then:
\[
\frac{a_{n+1}}{a_n} = -\frac{(4n+1)(4n+3)}{4x^4} \to 0, \quad x \to \infty;
\qquad
\frac{b_{n+1}}{b_n} = -\frac{(4n+5)(4n+7)}{4x^4} \to 0, \quad x \to \infty.
\]

Second, truncating both series at $n = N$, for any $N \in \mathbb{N}$, gives the following estimate:
\[
\Big|\int_x^{\infty} \sin(y^2)\,dy
- \frac{\cos(x^2)}{2x}\sum_{n=0}^{N}(-1)^n\frac{3\times5\times7\times\cdots\times(4n-1)}{2^{2n}x^{4n}}
- \frac{\sin(x^2)}{4x^3}\sum_{n=0}^{N}(-1)^n\frac{3\times5\times7\times\cdots\times(4n+1)}{2^{2n}x^{4n}}\Big|
< \frac{c_N}{x^{4N+2}},
\tag{1.3}
\]

where $c_N > 0$. The index $N$ indicates that the value $c_N$ depends on the length of the segment; what is more, $c_N \to \infty$ as $N \to \infty$. Nevertheless, the order with respect to $x$ of the removed terms is less than the order of the highest term in the segment of the series. The series of perturbation theory in celestial mechanics have the same properties:
– these series are divergent for any small value of the perturbation parameter;
– each subsequent term is essentially less than the preceding one.
Poincaré used these two properties to define an asymptotic series.


1.1.2 Order Operators
The example from the previous section contains all the main ideas of asymptotic analysis. Here we discuss the basic definitions.

Definition 1. A function $f(x)$ is big-O with respect to $\phi(x)$ as $x \to x_0$ if there exist $c, \epsilon \in \mathbb{R}$ such that $|f(x)| < c\,|\phi(x)|$ for all $x$ with $|x - x_0| < \epsilon$. The notation is
\[ f(x) = O(\phi(x)), \quad x \to x_0. \]
It means that the function $f(x)$ has the order of $\phi(x)$ as $x \to x_0$.

Example 1.
\[ \cos(x) = O(1), \quad \forall x \in \mathbb{R}; \qquad \log(x) = O(x-1), \quad x \to 1; \qquad \frac{\sin(x)}{x^2} = O(x^{-2}), \quad x \to \infty. \]

It is important to note that the relation is not symmetric:
\[ \frac{1}{x^2} \neq O\Big(\frac{\sin(x)}{x^2}\Big), \quad x \to \infty, \qquad \text{although} \quad \frac{\sin(x)}{x^2} = O(x^{-2}), \quad x \to \infty. \]
However, away from the zeros of the function $\sin(x)$ the symmetric relation does hold:
\[ \frac{1}{x^2} = O\Big(\frac{\sin(x)}{x^2}\Big), \quad x \to x_0, \quad \forall x_0 \neq \pi n, \; n \in \mathbb{Z}. \]

Another definition is convenient for comparing functions of different order.

Definition 2. A function $f(x)$ is small-o with respect to $\phi(x)$ as $x \to x_0$ if
\[ \lim_{x \to x_0} \frac{f(x)}{\phi(x)} = 0. \]
The notation is
\[ f(x) = o(\phi(x)), \quad x \to x_0. \]

1.1 Deﬁnitions of Asymptotic Series and Examples

5

Example 2.
\[ \sin(x^2) = o(x), \quad x \to 0; \qquad x = o(\sqrt{x}), \quad x \to 0. \]

Finally, if one function can be approximated by another one, it is convenient to use asymptotic equivalence.

Definition 3. Functions $f(x)$ and $\phi(x)$ are asymptotically equivalent as $x \to x_0$ if
\[ \lim_{x \to x_0} \frac{f(x)}{\phi(x)} = 1. \]
The notation is
\[ f(x) \sim \phi(x), \quad x \to x_0. \]

Example 3.
\[ \int_x^{\infty} \sin(y^3)\,dy \sim \frac{\cos(x^3)}{3x^2}, \quad x \to \infty. \]

1.1.3 Calibration Sequence. Asymptotic Series
The definition of asymptotic equivalence allows us to rewrite the formulae of Section 1.1.1 in a more elegant form:
\[ \int_x^{\infty} \sin(y^2)\,dy \sim \frac{\cos(x^2)}{2x} + \frac{\sin(x^2)}{4x^3}, \quad x \to \infty. \]

It is important to know the order of the approximation. In this case we can use a more informative formula:
\[ \int_x^{\infty} \sin(y^2)\,dy = \frac{\cos(x^2)}{2x} + \frac{\sin(x^2)}{4x^3} + O(x^{-4}) \]
as $x \to \infty$.

as x → ∞. To obtain an approximation that is accurate to any order x–N we use: ∞

∫ sin(y2 )dy = x

3 × 5 × 7 × ⋅ ⋅ ⋅ × (4n – 1) cos(x2 ) N ∑ (–1)N 2x n=0 22n x4n +

3 × 5 × 7 × ⋅ ⋅ ⋅ × (4n + 1) sin(x2 ) N + O(x–4N–2 ). ∑ (–1)N 4x3 n=0 22n x4n


In these formulae we used the inverse powers of $x$. It is easy to note that
\[ x^{-(n+1)} = o(x^{-n}), \quad x \to \infty. \]

Definition 4. A sequence of functions $\phi_n(x)$ is called a calibration sequence as $x \to x_0$ when the relation
\[ \phi_{n+1}(x) = o(\phi_n(x)), \quad x \to x_0, \]
is valid.

We can use the calibration sequence $\phi_n(x) = x^{-n}$ for the series in formula (1.2): each subsequent term has an order less than that of the preceding term. Series of this type are often used for the approximation of functions.

Definition 5. A series of the form
\[ \sum_{n=1}^{\infty} a_n f_n(x) \]
is called a formal asymptotic series as $x \to x_0$ if $f_n(x) = O(\phi_n(x))$, where $\phi_n(x)$ is a calibration sequence.

The asymptotic series as $x \to \infty$ for function (1.1) has the form (1.4). It allows us to write:
\[
\int_x^{\infty} \sin(y^2)\,dy \sim \frac{\cos(x^2)}{2x}\sum_{n=0}^{\infty}(-1)^n\frac{3\times5\times7\times\cdots\times(4n-1)}{2^{2n}x^{4n}}
+ \frac{\sin(x^2)}{4x^3}\sum_{n=0}^{\infty}(-1)^n\frac{3\times5\times7\times\cdots\times(4n+1)}{2^{2n}x^{4n}}.
\tag{1.4}
\]

It is important to understand the difference between an asymptotic series and a convergent series. An asymptotic series describes a function only near the single point $x_0$, whereas a convergent series represents the function on a whole domain. Consider an example: the Maclaurin series
\[ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \]
converges for all $x \in \mathbb{R}$, but this series has the asymptotic property only as $x \to 0$. We can also write a convergent series for function (1.1), valid for all values of $x$. The well-known formula of analysis
\[ \int_{-\infty}^{\infty} \sin(y^2)\,dy = \sqrt{\frac{\pi}{2}} \]

1.1 Deﬁnitions of Asymptotic Series and Examples

7

gives
\[ \int_x^{\infty} \sin(y^2)\,dy = \frac{1}{2}\sqrt{\frac{\pi}{2}} - \int_0^x \sin(y^2)\,dy. \]
The integrand on the right-hand side can be expanded into a Maclaurin series and integrated term by term. This gives a series convergent for all values of $x$:
\[ \int_x^{\infty} \sin(y^2)\,dy = \frac{1}{2}\sqrt{\frac{\pi}{2}} - \sum_{n=0}^{\infty} \frac{(-1)^n\, x^{4n+3}}{(4n+3)\times(2n+1)!}. \tag{1.5} \]

To use this formula for large values of $x$ it is necessary to use many terms. Let us estimate the number of terms in a segment of the series needed to evaluate the integral with accuracy $10^{-2}$ at $x = 10$. The series is alternating; if the absolute values of its terms decay, the remainder is estimated by the first neglected term. To estimate the remainder we find a number $N$ such that, first, the absolute values of the terms decay monotonically and, second, each term beyond the $N$-th is less than 0.01. For $x = 10$ the number $N$ satisfies
\[ \frac{10^{4N+3}}{(4N+3)\times(2N+1)!} \le 10^{-2}, \qquad \text{or} \qquad 10^{4N+5} \le (4N+3)\times(2N+1)!. \]
Taking the logarithm and simple evaluations give
\[ \log(4N+3) + \sum_{k=1}^{2N+1}\log(k) - (4N+5)\log(10) \ge 0. \]
Direct computation shows that this inequality first holds at $N = 134$.
– To obtain an accuracy better than 0.01 at $x = 10$ one should sum about 135 terms of series (1.5).
– To obtain the same accuracy at $x = 10$, one term of the asymptotic series is sufficient (see eq. (1.2)).
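The count of terms can be reproduced with exact integer arithmetic (a sketch; the accuracy 0.01 and the point $x = 10$ are those of the text):

```python
from math import factorial

# Smallest N with 10**(4N+5) <= (4N+3) * (2N+1)!  (exact integer comparison):
# from this index on, the terms of series (1.5) at x = 10 stay below 0.01.
n_min = next(n for n in range(1000)
             if 10**(4*n + 5) <= (4*n + 3) * factorial(2*n + 1))
print(n_min)  # about 135 terms, versus a single term of the asymptotic series
```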

However, asymptotic series are not suitable for evaluating the integral to arbitrary accuracy. Let us evaluate the minimum error that can be obtained by using a segment of the asymptotic series. The accuracy improves only as long as the absolute value of each subsequent term of the asymptotic series is less than or equal to the absolute value of the previous term. For the second series in eq. (1.2), at $x = 10$ we have
\[ \frac{3\times5\times\cdots\times(4n+5)}{2^{2n+2}\,10^{4n+4}} \le \frac{3\times5\times\cdots\times(4n+1)}{2^{2n}\,10^{4n}}, \]
or
\[ (4n+5)(4n+7) \le 4\times10^{4}, \qquad n \le 48. \]


(Plot: "Approximations for the integral of fast-oscillating function"; the sum of 200 terms of the Maclaurin series is compared with the sum of 4 terms of the asymptotic series for $1 \le x \le 10$.)
Figure 1.1. Approximations of the integral by Maclaurin and asymptotic series.

Let $x = 10$. A segment of asymptotic series (1.4) with about 49 terms gives the maximum accuracy for integral (1.1). The error is then of order $10^{-43}$. For larger values of $x$, the maximum accuracy is attained with a larger number of terms of the asymptotic series. This is illustrated in Fig. 1.1.
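The optimal truncation index can also be found by direct computation of the term magnitudes (a sketch reproducing the bound above; the upper bound $1/(4x^3)$ replaces the factor $|\sin(x^2)|/(4x^3)$):

```python
x = 10.0
# Magnitude bounds of the terms of the second series in eq. (1.2) at x = 10,
# including the prefactor |sin(x^2)|/(4x^3) <= 1/(4x^3).
mags = []
t = 1.0 / (4 * x**3)  # n = 0 term
for n in range(80):
    mags.append(t)
    t *= (4*n + 5) * (4*n + 7) / (4 * x**4)  # ratio |b_{n+1}/b_n|
n_opt = mags.index(min(mags))
print(n_opt, min(mags))  # smallest term near n = 49, of order 1e-43
```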

1.1.4 Problems
1. Construct an asymptotic series and exhibit a calibration sequence for the integral
\[ I_3(x) = \int_x^{\infty} \sin(y^3/3)\,dy, \quad x \to \infty. \]
2. Obtain a convergent series for $I_3(x)$. Compare the numbers of terms of the asymptotic and of the convergent series that are sufficient for the evaluation of $I_3(10)$ with an error less than or equal to 0.01.
3. Evaluate the minimum error that can be obtained for $I_3(10)$ by the asymptotic series.

4. Solve problems 1–3 for the integral
\[ I_n(x) = \int_x^{\infty} \sin(y^n/n)\,dy, \quad x \to \infty. \]
5. Is the sequence $x^k \sin(1/x)$, $x \to 0$, a calibration one?

1.2 Summation of Asymptotic Series
Asymptotic series are not convergent, but they are useful for the representation of smooth functions: an asymptotic series can be the asymptotic expansion of a function of a certain class. This is the basic idea behind the application of asymptotic series to the solution of equations.

1.2.1 Asymptotic Representation of Functions
In this section we show that an asymptotic expansion does not determine a unique function. This statement is based on the fact that a calibration sequence does not describe the whole set of continuous functions: the calibration sequence is not a basis. To a calibration sequence $\phi_k(x)$ we can add a function $\psi(x)$ satisfying
\[ \psi(x) = o(\phi_k(x)), \quad \forall k \in \mathbb{N}, \quad x \to x_0. \]

Below we consider the simplest cases.

Example 4. Take the calibration sequence $x^k$ on the interval $x \in (0, x_0)$, $x_0 > 0$. The Taylor series of the function $\exp(x)$ as $x \to 0$ is an asymptotic series. But the same series is also the asymptotic expansion of $\exp(x) + \exp(-1/x)$:
\[ \exp(x) + \exp(-1/x) \sim \sum_{k=0}^{\infty} \frac{x^k}{k!}, \quad x \to 0. \]
The reason for this phenomenon is as follows: the function $\exp(-1/x)$, $x > 0$, $x \to 0$, decays faster than any function of the calibration sequence $x^k$, so its asymptotic expansion with respect to the sequence $x^k$ has only zero coefficients. Hence the sum $\exp(x) + \exp(-1/x)$ has the same coefficients as $\exp(x)$ with respect to the sequence $x^k$.

Consider any calibration sequence $\phi_k(x)$ as $x > 0$ and $x \to 0$, and suppose that a certain function is represented by
\[ f(x) \sim \sum_{k=0}^{\infty} a_k \phi_k(x), \quad x \to 0. \]

10

1 Asymptotic expansions and series

Definition 6. The set of functions with a given asymptotic expansion is called the asymptotic sum of the series.

Example 5. Let $\log^{-k}(x)$, $k \in \mathbb{N}$, be a calibration sequence as $x \to 0$. Then all functions analytic in a neighbourhood of $x = 0$ with the same value at the origin are equivalent with respect to this calibration sequence: their difference is $O(x) = o(\log^{-k}(x))$ for every $k$.

1.2.2 Theorem on the Uniqueness of Asymptotic Expansion
Let a function $f(x)$ have an asymptotic expansion with respect to a calibration sequence $\phi_k(x)$:
\[ f(x) \sim \sum_{k=0}^{\infty} a_k \phi_k(x), \quad x \to x_0. \]
Let us show that this asymptotic expansion is unique for the function. Suppose another asymptotic expansion of $f(x)$ exists:
\[ f(x) \sim \sum_{k=0}^{\infty} b_k \phi_k(x), \quad x \to x_0. \]
Subtracting, the relation
\[ \sum_{k=0}^{n} a_k \phi_k(x) - \sum_{k=0}^{n} b_k \phi_k(x) = o(\phi_n(x)), \quad x \to x_0, \]
is valid for all $n \in \mathbb{N}$. For $n = 0$ we have
\[ a_0 \phi_0(x) - b_0 \phi_0(x) = o(\phi_0(x)), \quad x \to x_0. \]
Since $a_0$ and $b_0$ are constants, $a_0 = b_0$. Induction over $n = 1, 2, \ldots$ gives $a_n = b_n$ for all $n \in \mathbb{N}$. Thus the following statement is valid.

Theorem 1. The asymptotic expansion of a function with respect to a given calibration sequence is unique.


1.2.3 Theorem on Existence of a Function with the Given Asymptotic Expansion
Asymptotic series are not convergent, so one can hardly expect that there exists a function with a given asymptotic series. Consider the calibration sequence $x^k$, $k \in \mathbb{N}$, in the neighbourhood of $x = 0$, and let the following formal asymptotic series be given:
\[ \sum_{k=0}^{\infty} a_k x^k, \quad x > 0, \quad x \to 0. \tag{1.6} \]
It is possible to construct a smooth function for the given sequence $\{a_k\}_{k=0}^{\infty}$ and asymptotic series (see, e.g. [65]). It means that for all $N \in \mathbb{N}$
\[ f(x) - \sum_{k=0}^{N} a_k x^k = o(x^N), \quad x > 0, \quad x \to 0. \]

This function can be constructed because the function $\exp(-1/x)$ decays faster than any power of $x$ as $x > 0$, $x \to 0$. This property allows one to construct a function with the given asymptotic series; moreover, the function will be represented in the form of a convergent series. Consider the series
\[ a_0 + a_1 x + \sum_{k=2}^{\infty} a_k x^k \Big(1 - \exp\Big(-\frac{1}{2^k |a_k| x}\Big)\Big). \tag{1.7} \]

It is easy to see that this series is asymptotically equivalent to eq. (1.6): each term of eq. (1.7) differs from the corresponding term of eq. (1.6) by $a_k x^k \exp(-1/(2^k|a_k|x))$, and this value is $o(x^m)$ for every $m \in \mathbb{N}$, $x > 0$, $x \to 0$. Now let us show that series (1.7) converges. The relation $1 - \exp(-1/x) < 1/x$ is valid for $0 < x < 1$: the left-hand side is less than unity and the right-hand side is greater than unity. To obtain the convergence of the series we use the convergence of a majorizing series:
\[
\Big|\sum_{k=2}^{\infty} a_k x^k \Big(1 - \exp\Big(-\frac{1}{2^k|a_k|x}\Big)\Big)\Big|
< \sum_{k=2}^{\infty} |a_k| x^k \,\frac{1}{2^k |a_k| x} = \sum_{k=2}^{\infty} \frac{x^{k-1}}{2^k}.
\]
The majorizing series converges for all $|x| < 2$; hence the series on the left-hand side converges for $0 < x < 2$.

Then the function
\[ f(x) = a_0 + a_1 x + \sum_{k=2}^{\infty} a_k x^k \Big(1 - \exp\Big(-\frac{1}{2^k |a_k| x}\Big)\Big) \]
has the asymptotic series, as $x > 0$, $x \to 0$, of the form
\[ f(x) \sim \sum_{k=0}^{\infty} a_k x^k. \]

Theorem 2. There exists a function with the given asymptotic series (1.6).

Example 1.2.1. Let us consider the asymptotic series
\[ \sum_{n=1}^{\infty} \frac{n!}{\log^n(x)}, \quad x \to \infty, \]
and the functions
\[ f(x,k) = \sum_{n=1}^{[\log(\log(x))]+k} \frac{n!}{\log^n(x)}, \qquad (k+1) \in \mathbb{N}, \]
where $[\log(\log(x))]$ is the integer part of $\log(\log(x))$. It is easy to see that
\[ f(x,0) \sim \sum_{n=1}^{\infty} \frac{n!}{\log^n(x)}, \quad x \to \infty, \qquad f(x,1) \sim \sum_{n=1}^{\infty} \frac{n!}{\log^n(x)}, \quad x \to \infty, \]
and
\[ f(x,0) + g\Big(\frac{1}{x}\Big) \sim f(x,1), \quad x \to \infty, \]
where $g(y)$ is a function analytic in a small neighbourhood of $y = 0$ with $g(0) = 0$.
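The construction of Theorem 2 can be explored numerically (a sketch; the divergent model series with $a_k = k!$ is an arbitrary choice, and `expm1` is used to evaluate $1 - e^{-u}$ without cancellation):

```python
import math

def f(x, kmax=60):
    # convergent regularized series (1.7) with the model choice a_k = k!
    s = 1.0 + x  # a_0 + a_1 x
    for k in range(2, kmax):
        ak = math.factorial(k)
        # a_k x^k (1 - exp(-1/(2^k a_k x))), majorized by x^(k-1)/2^k
        s += ak * x**k * (-math.expm1(-1.0 / (2**k * ak * x)))
    return s

def partial(x, N):
    # partial sum of the divergent series sum_k k! x^k
    return sum(math.factorial(k) * x**k for k in range(N + 1))

r1 = abs(f(0.01) - partial(0.01, 3)) / 0.01**3
r2 = abs(f(0.001) - partial(0.001, 3)) / 0.001**3
print(r1, r2)  # the scaled remainder shrinks as x -> 0, i.e. f(x) - S_3(x) = o(x^3)
```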

1.2.4 Problems
1. Find equivalent functions for asymptotic series with the calibration sequence $x^{1-1/k}$, $k \in \mathbb{N}$, $x \to 0$.
2. Find equivalent functions for asymptotic series with the calibration sequence $x^{-k}$, $k \in \mathbb{N}$, $x \to \infty$.
3. Find equivalent functions for asymptotic series with the calibration sequence $\log^{-k}(x)$, $k \in \mathbb{N}$, $x \to \infty$.

1.3 Laplace Method and Gamma Function
The Laplace method is one of the fundamental techniques of asymptotic analysis. This method is used for calculating asymptotic expansions of integrals whose integrand contains an exponential function:
\[ F(\lambda) = \int_a^b f(x)\,e^{-\lambda S(x)}\,dx, \quad \lambda \to \infty. \tag{1.8} \]

Here $f(x), S(x) \in C^{\infty}[a,b]$. Integrals of this type often appear in studies of solutions of differential equations.

1.3.1 Asymptotic Expansion of Integral when Subintegral Function Exponent Does Not Contain Extrema
The simplest case is when the exponent of the integrand in (1.8) does not contain extrema, i.e. $S'(x) \neq 0$, $x \in [a,b]$. We can perform integration by parts:
\[
F(\lambda) = \int_a^b f(x)\,e^{-\lambda S(x)}\,dx
= -\frac{1}{\lambda}\,\frac{f(x)}{S'(x)}\,e^{-\lambda S(x)}\Big|_a^b
+ \frac{1}{\lambda}\int_a^b \frac{d}{dx}\Big(\frac{f(x)}{S'(x)}\Big)\,e^{-\lambda S(x)}\,dx.
\]
If $S(a) < S(b)$, then
\[ F(\lambda) \sim \frac{1}{\lambda}\,\frac{f(a)}{S'(a)}\,e^{-\lambda S(a)}. \]
In the other case,
\[ F(\lambda) \sim -\frac{1}{\lambda}\,\frac{f(b)}{S'(b)}\,e^{-\lambda S(b)}. \]
One can integrate the residual integral by parts once more; this gives the correction term in the asymptotic expansion of $F(\lambda)$.
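A numerical sanity check of the leading term (a sketch; the choices $f(x) = 1/(1+x)$ and $S(x) = x$ on $[0,1]$, for which $S(a) < S(b)$, are arbitrary):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

lam = 200.0
f = lambda x: 1.0 / (1.0 + x)  # arbitrary smooth amplitude
F = simpson(lambda x: f(x) * math.exp(-lam * x), 0.0, 1.0, 20_000)  # S(x) = x
leading = f(0.0) / lam  # f(a) e^{-lam S(a)} / (lam S'(a)) with S(a) = 0, S'(a) = 1
print(F / leading)  # close to 1; the deviation is the O(1/lam) correction
```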

1.3.2 Asymptotic Expansion of Integral, When Integrand Exponent Contains Extrema
Integration by parts cannot be applied when the integration interval contains a stationary point of $S(x)$.


We consider the simplest case, when the maximum point of the integrand, i.e. the minimum point of $S(x)$, lies inside the interval of integration. Denote this point by $x_m \in (a,b)$, and suppose $S''(x_m) \neq 0$. Select the integral over the small neighbourhood of $x_m$, $x \in [x_m - \delta, x_m + \delta]$:
\[
F(\lambda) = \int_{x_m-\delta}^{x_m+\delta} f(x)\,e^{-\lambda S(x)}\,dx + \int_{x \in L} f(x)\,e^{-\lambda S(x)}\,dx,
\qquad L = [a, x_m-\delta] \cup [x_m+\delta, b].
\]
The asymptotic expansion of the integral over the domain outside the neighbourhood of the stationary point $x_m$ can be evaluated by integration by parts. Consider the method for obtaining the asymptotic expansion of the integral over the small neighbourhood of the stationary point $x_m$:
\[ I(\lambda) = \int_{x_m-\delta}^{x_m+\delta} f(x)\,e^{-\lambda S(x)}\,dx. \]

It is convenient to reduce the exponent to the form $-(y^2 + \lambda S(x_m))$. Denote $y^2 = \lambda(S(x) - S(x_m))$. The transformation $x \to y$ is a one-to-one correspondence in the small neighbourhood of $x_m$ because $x_m$ is a non-degenerate minimum of $S$, so $S''(x_m) > 0$. This change of variables leads to
\[
y = \sqrt{\lambda}\,\sqrt{\frac{S''(x_m)}{2}}\,(x - x_m)\sqrt{1 + \frac{S'''(x_m)}{3S''(x_m)}(x - x_m) + O((x - x_m)^2)},
\]
\[
I(\lambda) = \frac{1}{\sqrt{\lambda}}\sqrt{\frac{2}{S''(x_m)}}\,e^{-\lambda S(x_m)}
\int_{-D}^{D} f\Big(x_m + \frac{y}{\sqrt{\lambda}}\sqrt{\frac{2}{S''(x_m)}} + O\Big(\frac{y^2}{\lambda}\Big)\Big)\,
e^{-y^2}\Big(1 + O\Big(\frac{y}{\sqrt{\lambda}}\Big)\Big)\,dy,
\]
where $D \sim \delta\sqrt{\lambda}\,\sqrt{S''(x_m)/2}$. Expanding the function $f$ into a Taylor series and substituting a segment of it gives
\[
I(\lambda) \sim \frac{1}{\sqrt{\lambda}}\sqrt{\frac{2}{S''(x_m)}}\,f(x_m)\,e^{-\lambda S(x_m)}\int_{-\infty}^{\infty} e^{-y^2}\,dy.
\]
Using the well-known formula
\[ \int_{-\infty}^{\infty} e^{-y^2}\,dy = \sqrt{\pi}, \]
we obtain the following theorem on the asymptotics of integrals by the Laplace method.


Theorem 3. Let $f(x)$ and $S(x)$ be from $C^{\infty}[a,b]$ and $S'(x_m) = 0$, $S''(x_m) > 0$, $x_m \in (a,b)$; then
\[
\int_a^b f(x)\,e^{-\lambda S(x)}\,dx \sim \frac{1}{\sqrt{\lambda}}\sqrt{\frac{2\pi}{S''(x_m)}}\,f(x_m)\,e^{-\lambda S(x_m)}.
\tag{1.9}
\]
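Formula (1.9) is easy to test numerically (a sketch; $f(x) = \cos(x)$ and $S(x) = x^2$ on $[-1,1]$, so that $x_m = 0$ and $S''(x_m) = 2$, are arbitrary choices):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

lam = 50.0
F = simpson(lambda x: math.cos(x) * math.exp(-lam * x * x), -1.0, 1.0, 20_000)
# eq. (1.9): x_m = 0, S''(x_m) = 2, f(x_m) = 1, S(x_m) = 0
pred = math.sqrt(2 * math.pi / (lam * 2.0))
print(F, pred)  # agree to about half a per cent at lam = 50
```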

1.3.3 Derivation of Integral Formula for Gamma Function
Let us construct an integral formula for the gamma function. Denote the independent variable by $z$ and the unknown function by $\Gamma(z)$. The equation for $\Gamma(z)$ looks like the formula for the factorial:
\[ \Gamma(z+1) = z\,\Gamma(z). \tag{1.10} \]

For natural argument $n \in \mathbb{N}$ the solution of eq. (1.10) has the property $\Gamma(n+1) \equiv n!$. Our aim is to find a solution of eq. (1.10) for real values of $z$. This equation is not solvable in terms of elementary functions. Below we derive the well-known integral formula for the gamma function and study its asymptotic behaviour by the Laplace method. We construct a solution of eq. (1.10) in the form of a Laplace integral:
\[ \Gamma(z) = \int_{-\infty}^{\infty} \tilde{\Gamma}(p)\,e^{-pz}\,dp. \]
The right-hand side of eq. (1.10) can be written in the form
\[
z\,\Gamma(z) = z\int_{-\infty}^{\infty}\tilde{\Gamma}(p)\,e^{-pz}\,dp
= -\tilde{\Gamma}(p)\,e^{-pz}\Big|_{p=-\infty}^{p=\infty} + \int_{-\infty}^{\infty}\frac{d\tilde{\Gamma}(p)}{dp}\,e^{-pz}\,dp.
\]
This formula is valid when the limits of the boundary term exist. Suppose
\[ \lim_{p\to-\infty}\tilde{\Gamma}(p)\,e^{-pz} - \lim_{p\to\infty}\tilde{\Gamma}(p)\,e^{-pz} = 0. \]
We can check this property once the solution is obtained. The left-hand side of eq. (1.10) can be written in the form
\[ \Gamma(z+1) = \int_{-\infty}^{\infty}\tilde{\Gamma}(p)\,e^{-pz}\,e^{-p}\,dp. \tag{1.11} \]


Equation (1.10) for the image of the gamma function is
\[ \tilde{\Gamma}(p)\,e^{-p} = \frac{d\tilde{\Gamma}(p)}{dp}. \]
This equation can be solved easily by means of separating variables:
\[ \frac{d\tilde{\Gamma}}{\tilde{\Gamma}} = e^{-p}\,dp, \]
or
\[ \tilde{\Gamma} = C\exp(-\exp(-p)), \]
where the parameter $C$ does not depend on $p$. Let us check the property of the boundary terms in eq. (1.11) when $z > 0$:
\[ \lim_{p\to-\infty}\exp(-\exp(-p))\,e^{-pz} = 0; \qquad \lim_{p\to+\infty}\exp(-\exp(-p))\,e^{-pz} = 0. \]
With the image constructed, it is easy to obtain an expression for the original:
\[ \Gamma(z) = C\int_{-\infty}^{\infty}\exp(-\exp(-p))\,e^{-pz}\,dp. \]
This formula can be rewritten in a shorter form. Change the variable of integration, $t = e^{-p}$. It leads to
\[ \Gamma(z) = C\int_0^{\infty} e^{-t}\,t^{z-1}\,dt. \]
The special solution with the constant $C$ chosen so that
\[ \Gamma(1) = C\int_0^{\infty} e^{-t}\,dt = 1, \]
i.e. $C = 1$, is called the gamma function. Now we can write the well-known integral representation of the gamma function:
\[ \Gamma(z) = \int_0^{\infty} e^{-t}\,t^{z-1}\,dt. \tag{1.12} \]
To check that function (1.12) satisfies eq. (1.10), one can evaluate the integral on the right-hand side of eq. (1.12) through integration by parts:
\[
\Gamma(z) = \frac{t^z}{z}\,e^{-t}\Big|_{t=0}^{t=\infty} + \frac{1}{z}\int_0^{\infty} e^{-t}\,t^{z}\,dt = \frac{1}{z}\,\Gamma(z+1).
\]
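Representation (1.12) and the functional equation (1.10) can be checked by direct quadrature (a sketch; $z = 2.5$ and the truncation of the integral at $t = 50$ are arbitrary choices):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

def gamma_int(z, T=50.0, n=50_000):
    # eq. (1.12) truncated at t = T; the discarded tail is of order e^{-T} T^{z-1}
    return simpson(lambda t: math.exp(-t) * t**(z - 1.0), 0.0, T, n)

G25 = gamma_int(2.5)
G35 = gamma_int(3.5)
print(G25, math.gamma(2.5), G35 / G25)  # the ratio reproduces eq. (1.10): about 2.5
```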


The integral representation of the gamma function allows us to evaluate its asymptotic expansion as $z \to \infty$. It is clear that this asymptotic expansion at $z + 1 = n$ is equivalent to the Moivre–Stirling formula for the asymptotic expansion of the factorial.

1.3.4 Moivre–Stirling Formula
Let us evaluate the asymptotic expansion of the gamma function as $z \to \infty$. It is convenient to use the Laplace method. Rewrite the integrand in exponential form:
\[
\Gamma(z+1) = \int_0^{\infty} e^{z\log(t) - t}\,dt = |\,\zeta = t/z\,| = z\,e^{z\log(z)}\int_0^{\infty} e^{-z(\zeta - \log(\zeta))}\,d\zeta.
\]
The maximum point of the exponent of the integrand is determined by
\[ (\zeta - \log(\zeta))' = 0, \qquad 1 - \frac{1}{\zeta} = 0, \qquad \zeta_s = 1. \]
The asymptotic expansion of the Laplace integral gives the Moivre–Stirling formula, or, as one says more often, Stirling's approximation:
\[ \Gamma(z+1) \sim z^{1+z}\,e^{-z}\,\frac{\sqrt{2\pi}}{\sqrt{z}}, \]

or
\[ \Gamma(z+1) \sim \sqrt{2\pi z}\,\Big(\frac{z}{e}\Big)^{z}. \]

1.3.5 Problems
1. Obtain an asymptotic expansion of $\Gamma(z)$ in the neighbourhood of $z = 0$ and of $-z \in \mathbb{N}$.
2. Calculate an asymptotic expansion of the integral
\[ \int_{-2}^{2} e^{-\lambda(x^3/3 + ax)}\,dx, \quad \lambda \to \infty, \quad a \in [-1,1]. \]
Find the domain of validity with respect to $a$.
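Returning to the Moivre–Stirling formula of Section 1.3.4, its leading term is easy to compare with exact factorials (a minimal sketch):

```python
import math

rels = []
for n in (5, 10, 20):
    # leading term of the Moivre-Stirling formula: sqrt(2 pi n) (n/e)^n
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    rels.append(abs(stirling - math.factorial(n)) / math.factorial(n))
print(rels)  # relative error decays roughly like 1/(12 n)
```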

1.4 Fresnel Integral and Stationary Phase Method
In this section we obtain formulae for the cosine and sine Fresnel integrals. It is shown that the Fresnel integrals satisfy first-order differential equations. We obtain formulae for the Fresnel integrals at large values of the argument, and show that the Fresnel integral is the basis for the formulae of the stationary phase method.


1.4.1 Riemann Lemma
Asymptotic expansions of integrals with a parameter play an important role in studies of asymptotic expansions of solutions of differential equations. The Fourier-type integral is one of them:
\[ F(\lambda) = \int_a^b f(x)\,e^{i\lambda x}\,dx. \]
The statement on the asymptotic behaviour of such integrals for any function $f(x)$ integrable over $[a,b]$ is called the Riemann lemma.

Lemma 1 (Riemann lemma). Suppose $f(x) \in L^1(a,b)$; then
\[ \int_a^b f(x)\,e^{i\lambda x}\,dx = o(1), \quad \lambda \to \infty. \]
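A small numerical illustration of the lemma (a sketch; the integrand $f(x) = e^x$ on $[0,1]$ is an arbitrary integrable choice, for which the integral has the closed form $(e^{1+i\lambda} - 1)/(1 + i\lambda)$):

```python
import cmath

def F(lam):
    # closed form of int_0^1 e^x e^{i lam x} dx for the test choice f(x) = e^x
    return (cmath.exp(1 + 1j * lam) - 1) / (1 + 1j * lam)

vals = [abs(F(lam)) for lam in (20, 200, 2000)]
print(vals)  # decays to 0 as lam grows, as the lemma asserts
```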

The proof of the Riemann lemma is based on the approximation of integrable functions by piecewise constant functions. The integral of a piecewise constant function can be replaced by a sum of integrals over the segments where the function is constant; each integral in the sum is an integral of an exponential and, after integration, has order $\lambda^{-1}$. The required estimate is obtained in the limit as the number of terms tends to infinity.

1.4.2 Fresnel Integral Formulae
The following functions are called the Fresnel cosine and sine integrals:
\[ C(x) = \int_0^x \cos(t^2)\,dt, \qquad S(x) = \int_0^x \sin(t^2)\,dt. \]

These integrals play an important role in mathematical physics, with applications ranging from diffraction theory to classical mechanics. The importance of these integrals owes much to the simplicity of their formulae; as a general rule, the simpler a formula, the more important it is for applications. The Fresnel integrals can be represented by Taylor series: expand the integrands into Taylor series and integrate them term by term. It yields
\[
C(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{4n+1}}{(2n)!\,(4n+1)}, \qquad
S(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{4n+3}}{(2n+1)!\,(4n+3)}.
\]
These series converge for all $x \in \mathbb{C}$; the Fresnel integrals are entire odd functions.
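The series can be cross-checked against direct quadrature (a sketch; the point $x = 1.5$ and the truncation at 40 terms are arbitrary choices):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

def C_series(x, terms=40):
    return sum((-1)**n * x**(4*n + 1) / (math.factorial(2*n) * (4*n + 1))
               for n in range(terms))

def S_series(x, terms=40):
    return sum((-1)**n * x**(4*n + 3) / (math.factorial(2*n + 1) * (4*n + 3))
               for n in range(terms))

x = 1.5
c_diff = C_series(x) - simpson(lambda t: math.cos(t * t), 0.0, x, 2000)
s_diff = S_series(x) - simpson(lambda t: math.sin(t * t), 0.0, x, 2000)
print(c_diff, s_diff)  # both tiny: series and quadrature agree
```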


1.4.3 Large Values of Argument
First of all, consider the properties of the Fresnel integrals for large real values of the argument. The integrand alternates in sign, so the integral is the sum of an alternating series of areas, each proportional to the period of the integrand. The period of the oscillations tends to zero as $t \to \infty$, so the terms of the series tend to zero, and the Leibniz criterion gives convergence of the series. Hence the integrals $C(x)$ and $S(x)$ are bounded as $x \to \infty$. Let us evaluate the limit
\[ C = \int_0^{\infty} \cos(t^2)\,dt. \]
Changing the variable $z = t^2$, we get
\[ C = \int_0^{\infty} \cos(t^2)\,dt = \int_0^{\infty} \frac{\cos(z)}{2\sqrt{z}}\,dz. \]

It is convenient to write $\cos(z) = (e^{iz} + e^{-iz})/2$. This formula allows one to represent the integral as a sum of integrals of exponentials, treated separately. Consider the integral over the closed contour $\gamma = \gamma_1 \cup \gamma_2 \cup \gamma_3$, where $\gamma_2$ is a quarter of a circle of radius $R$ (these contours are shown in Fig. 1.2):
\[
\lim_{R\to\infty}\Big(\int_0^{R}\frac{e^{iz}}{2\sqrt{z}}\,dz
+ \frac{i\sqrt{R}}{2}\int_0^{\pi/2} e^{iR e^{i\phi}}\,e^{i\phi/2}\,d\phi
- \int_0^{iR}\frac{e^{iz}}{2\sqrt{z}}\,dz\Big) = 0.
\]
The limit of each integral exists as $R \to \infty$. It yields:
\[
\int_0^{\infty}\frac{e^{iz}}{2\sqrt{z}}\,dz - \int_0^{i\infty}\frac{e^{iz}}{2\sqrt{z}}\,dz
= -\lim_{R\to\infty}\frac{i\sqrt{R}}{2}\int_0^{\pi/2} e^{iR e^{i\phi}}\,e^{i\phi/2}\,d\phi.
\]

(Diagram: the quarter-disc contour in the complex $p$-plane, $\gamma_1$ along the real axis from $0$ to $R$, $\gamma_2$ the quarter circle of radius $R$, $\gamma_3$ along the imaginary axis back to $0$.)
Figure 1.2. The integral over the contour $\gamma = \gamma_1 \cup \gamma_2 \cup \gamma_3$ equals zero because the integrand is holomorphic inside $\gamma$.


Let us change the variable in the latter integral, $iz = -y$:
\[
\int_0^{i\infty}\frac{e^{iz}}{2\sqrt{z}}\,dz
= \frac{i}{\sqrt{i}}\int_0^{\infty}\frac{e^{-y}}{2\sqrt{y}}\,dy
= \frac{1+i}{\sqrt{2}}\int_0^{\infty}e^{-t^2}\,dt
= \frac{\sqrt{2}}{4}(1+i)\sqrt{\pi},
\]
where $y = t^2$ and $i/\sqrt{i} = \sqrt{i} = (1+i)/\sqrt{2}$.

Consider the integral over the arc:
\[
\lim_{R\to\infty}\sqrt{R}\int_0^{\pi/2} e^{iRe^{i\phi}}\,\frac{e^{i\phi/2}}{2}\,d\phi
= \lim_{R\to\infty}\frac{\sqrt{R}}{2}\int_0^{\pi/2} e^{i(\phi/2 + R\cos(\phi))}\,e^{-R\sin(\phi)}\,d\phi.
\]
For every $\phi \in (0, \pi/2]$ the integrand is exponentially small with respect to $R$ as $R \to \infty$. Let us show that the limit equals zero. We present the integral as a sum of two integrals, the first over $\phi \in (0, R^{-3/4}]$ and the second over $\phi \in (R^{-3/4}, \pi/2]$. Substituting these into the limit yields
\[
\Big|\lim_{R\to\infty}\sqrt{R}\int_0^{\pi/2} e^{iRe^{i\phi}}\,\frac{e^{i\phi/2}}{2}\,d\phi\Big|
\le \mathrm{const}\,\lim_{R\to\infty}\frac{1}{\sqrt[4]{R}}
+ \mathrm{const}\,\lim_{R\to\infty}\sqrt{R}\,\exp\big(-R\sin(R^{-3/4})\big) = 0.
\]

It gives
\[
\int_0^{\infty}\frac{e^{iz}}{2\sqrt{z}}\,dz = \frac{\sqrt{2}}{4}(1+i)\sqrt{\pi}, \qquad
\int_0^{\infty}\frac{e^{-iz}}{2\sqrt{z}}\,dz = \frac{\sqrt{2}}{4}(1-i)\sqrt{\pi}.
\]
Then
\[ \lim_{x\to\infty} C(x) = \frac{\sqrt{2\pi}}{4}. \]


A similar formula is valid for the Fresnel sine integral:
\[ \lim_{x\to\infty} S(x) = \frac{\sqrt{2\pi}}{4}. \]
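The limit can be observed numerically; since $C(x)$ approaches the limit only at the rate $1/x$, the sketch below also includes the first correction terms of the tail, obtained by integration by parts as for the sine integral in Section 1.1.1 (the point $X = 10$ is an arbitrary choice):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2)))

X = 10.0
C_X = simpson(lambda t: math.cos(t * t), 0.0, X, 200_000)
limit = math.sqrt(2 * math.pi) / 4
# first correction terms of the tail int_X^inf cos(t^2) dt
corrected = limit + math.sin(X * X) / (2 * X) - math.cos(X * X) / (4 * X**3)
print(C_X, corrected)  # agree far better than C_X agrees with the bare limit
```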

1.4.4 Method of Stationary Phase
Here we show the application of Fresnel integrals to the study of Fourier-type integrals of fast oscillating functions. Consider an integral
\[ F(\lambda) = \int_a^b f(x)\cos(\lambda K(x))\,dx. \]
We suppose that $f(x)$ is a smooth function on the segment $[a,b]$ and that $K(x)$ has one non-degenerate stationary point $x_s$, which means $K'(x_s) = 0$ and $K''(x_s) \neq 0$. Let us discuss the properties of $F(\lambda)$ as $\lambda \to \infty$. We consider the case $K(x) \equiv x^2$, $f(x) \equiv 1/\cosh(x+1)$. Figure 1.3 shows that the largest contribution to the integral comes from the neighbourhood of the stationary point; the cause is the lower frequency of oscillations in the neighbourhood of the stationary point. Represent the integral as a sum of three integrals:
\[
F(\lambda) = \int_a^{x_s-\delta} f(x)\cos(\lambda K(x))\,dx
+ \int_{x_s+\delta}^{b} f(x)\cos(\lambda K(x))\,dx
+ \int_{x_s-\delta}^{x_s+\delta} f(x)\cos(\lambda K(x))\,dx.
\]

(Plot: the integrand $\cos(20x^2)/\cosh(x+1)$ for $-3 \le x \le 3$, with the neighbourhood of the stationary point marked.)
Figure 1.3. The integral is an alternating series of areas for the integrand $\cos(20x^2)/\cosh(x+1)$; here $\lambda = 20$ and $x_s = 0$. It is easy to see that the largest contribution to the integral comes from the neighbourhood of the stationary point, although the maximum of the integrand's amplitude is near $x = -1$.


The first and the second integrals do not contain the stationary point of the function $K(x)$. Integration by parts gives the following estimates as $\lambda \to \infty$:
\[
\int_a^{x_s-\delta} f(x)\cos(\lambda K(x))\,dx
= \frac{f(x)}{\lambda K'(x)}\sin(\lambda K(x))\bigg|_{x=a}^{x=x_s-\delta}
- \frac{1}{\lambda}\int_a^{x_s-\delta} \left(\frac{f(x)}{K'(x)}\right)' \sin(\lambda K(x))\,dx.
\]
The boundary term at $x = a$ has order $\lambda^{-1}$. To estimate the boundary term at $x = x_s - \delta$ and the remaining integral we use the condition $K'(x_s) = 0$. The order of the boundary term depends on the parameter $\delta$. Suppose $\delta = \lambda^{-\alpha}$, where $\alpha > 0$. Then
\[
\int_a^{x_s-\delta} f(x)\cos(\lambda K(x))\,dx
= O(\lambda^{-1}) + O(\lambda^{\alpha-1}) + O(\lambda^{2\alpha-1}).
\]

The same evaluations allow one to estimate the integral over $x \in [x_s+\delta, b]$. Consider the integral over the small interval in the neighbourhood of the stationary point. Change the variable in the integral by $\lambda K''(x_s)(x - x_s)^2/2 = \operatorname{sgn}(K''(x_s))\,y^2$. Suppose $\operatorname{sgn}(K''(x_s)) = 1$; then $dy = \sqrt{\lambda|K''(x_s)|/2}\,dx$. Represent the integrand in the form of a Taylor series with a residual term in Lagrange form:
\[
\int_{x_s-\delta}^{x_s+\delta} f(x)\cos(\lambda K(x))\,dx
= \int_{-Y}^{Y}
\left(f(x_s) + \frac{y\,f'(\zeta_f)}{\sqrt{\lambda|K''(x_s)|/2}}\right)
\cos\!\left(\lambda K(x_s) + y^2 + \frac{\sqrt{2}\,y^3 K'''(\zeta_K)}{3\sqrt{\lambda|K''(x_s)|^3}}\right)
\frac{dy}{\sqrt{\lambda|K''(x_s)|/2}},
\]
where $Y = \lambda^{1/2-\alpha}\sqrt{|K''(x_s)|/2}$. Here $\zeta_f$ and $\zeta_K$ are points from $[x_s-\delta, x_s+\delta]$. Direct calculations give
\[
\int_{x_s-\delta}^{x_s+\delta} f(x)\cos(\lambda K(x))\,dx
= \int_{-Y}^{Y} f(x_s)\cos(\lambda K(x_s)+y^2)\,\frac{dy}{\sqrt{\lambda|K''(x_s)|/2}}
+ O(\lambda^{-1/2-\alpha}).
\]
Suppose $\alpha = 1/8$; it yields
\[
F(\lambda) \sim \frac{f(x_s)}{\sqrt{\lambda|K''(x_s)|/2}}
\int_{-\infty}^{\infty}\cos(\lambda K(x_s)+y^2)\,dy,
\]


or
\[
\int_a^b f(x)\cos(\lambda K(x))\,dx \sim
\frac{1}{\sqrt{\lambda}}\,f(x_s)\sqrt{\frac{\pi}{|K''(x_s)|}}\,
\bigl(\cos(\lambda K(x_s)) - \sin(\lambda K(x_s))\bigr),
\]
as $\operatorname{sgn}(K''(x_s)) = 1$. In the case $\operatorname{sgn}(K''(x_s)) = -1$, the same evaluations give
\[
\int_a^b f(x)\cos(\lambda K(x))\,dx \sim
\frac{1}{\sqrt{\lambda}}\,f(x_s)\sqrt{\frac{\pi}{|K''(x_s)|}}\,
\bigl(\cos(\lambda K(x_s)) + \sin(\lambda K(x_s))\bigr).
\]

Formulae for integrals with the sine function are easy to obtain by means of reduction formulas. There are two formulae for integrals with an oscillating exponent:
\[
\int_a^b f(x)\exp(i\lambda K(x))\,dx \sim
\frac{1}{\sqrt{\lambda}}\,f(x_s)\sqrt{\frac{2\pi}{|K''(x_s)|}}\,
\exp\bigl(i\lambda K(x_s) + i\pi/4\bigr),
\]
as $\operatorname{sgn}(K''(x_s)) = 1$. In the case $\operatorname{sgn}(K''(x_s)) = -1$, similar manipulations give
\[
\int_a^b f(x)\exp(i\lambda K(x))\,dx \sim
\frac{1}{\sqrt{\lambda}}\,f(x_s)\sqrt{\frac{2\pi}{|K''(x_s)|}}\,
\exp\bigl(i\lambda K(x_s) - i\pi/4\bigr).
\]
We obtain a formula for the asymptotics of the integral by the method of stationary phase:
\[
\int_a^b f(x)\exp(i\lambda K(x))\,dx \sim
\frac{1}{\sqrt{\lambda}}\,f(x_s)\sqrt{\frac{2\pi}{|K''(x_s)|}}\,
\exp\bigl(i\lambda K(x_s) + i\operatorname{sgn}(K''(x_s))\pi/4\bigr). \tag{1.13}
\]
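The cosine form of the stationary-phase formula can be tested on the example from Figure 1.3, $f(x) = 1/\cosh(x+1)$, $K(x) = x^2$, $x_s = 0$. The sketch below is an added numerical illustration, not part of the original text; the value $\lambda = 200$ and the quadrature settings are arbitrary choices.

```python
import math

lam = 200.0
f = lambda x: 1.0 / math.cosh(x + 1.0)

# direct quadrature of the integral of f(x) cos(lam x^2) over [-3, 3]
# (composite Simpson rule with a step fine enough for the oscillations)
a, b, n = -3.0, 3.0, 60000
h = (b - a) / n
s = f(a) * math.cos(lam * a * a) + f(b) * math.cos(lam * b * b)
for i in range(1, n):
    x = a + i * h
    s += (4 if i % 2 else 2) * f(x) * math.cos(lam * x * x)
direct = h * s / 3

# stationary phase: x_s = 0, K(x_s) = 0, K''(x_s) = 2, sgn K'' = +1
approx = f(0.0) * math.sqrt(math.pi / (lam * 2.0)) \
         * (math.cos(0.0) - math.sin(0.0))
```

The two numbers agree to about $10^{-3}$, consistent with the error terms $O(\lambda^{\alpha-1})$ and $O(\lambda^{-1/2-\alpha})$ collected above.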

1.4.5 Problems

1. Evaluate an asymptotic expansion of the Bessel function:
\[
J_\nu(z) = \frac{\exp(i(\nu+1)\pi/4)}{2i\pi}
\int_M \exp\left(\frac{iz}{2}\left(x - \frac{1}{x}\right)\right)\frac{dx}{x^{\nu+1}},
\]
where the contour $M$ lies on the real axis and goes around the point $x = 0$ via a half-circle of small radius in the upper half-plane.

2. Evaluate an asymptotic expansion of the Pearcey integral [133]:
\[
\int_{\mathbb{R}} \exp\bigl(i(\xi^4/8 - x\xi^2/2 + y\xi)\bigr)\,d\xi,
\qquad x \to \infty,\quad y \to \infty.
\]


1.5 Airy Function and Its Asymptotic Expansion

Here we discuss the Airy function as a special solution of a differential equation. We derive an integral formula for the function and study its properties for large values of the argument.

1.5.1 Airy's Equation

Airy's equation is the following ordinary differential equation of the second order:
\[
u'' - zu = 0. \tag{1.14}
\]
The general solution of this equation oscillates as $z < 0$ and grows exponentially as $z > 0$. It is easy to explain such properties if one studies an equation with a fixed parameter instead of the varying coefficient $z$. Solutions of the equation
\[
v'' - Zv = 0, \qquad Z = \mathrm{const}, \tag{1.15}
\]
depend essentially on the sign of the parameter $Z$. The solutions oscillate with frequency $\sqrt{-Z}$ when $Z < 0$. Otherwise, when $Z > 0$, there are two linearly independent solutions: one grows exponentially and the other decays exponentially. It is reasonable to expect the same behaviour for a solution of the Airy equation on the intervals $z < 0$ and $z > 0$. The Airy equation is often used as a model in mathematical simulation to describe the transition from the light zone to the shadow zone in wave optics and other problems.

1.5.2 An Integral Representation of General Solution for Airy Equation

A solution of the Airy equation can be represented as an integral of Laplace type. Let us consider
\[
u(z) = \int_{\gamma} \tilde{u}(p)\exp(pz)\,dp.
\]
Here $\gamma$ is an unknown contour of integration. We choose this contour according to two conditions: first, the integral should converge; second, the integral should not vanish identically. The contour will be determined after obtaining an explicit form for $\tilde{u}(p)$. Substitute the formula for $u(z)$ into Airy's equation. We suppose that differentiation under the integral sign is possible:
\[
\int_{\gamma} p^2\tilde{u}\exp(pz)\,dp - z\int_{\gamma} \tilde{u}\exp(pz)\,dp = 0.
\]
It is convenient to remove the factor $z$ via integration by parts:
\[
z\int_{\gamma} \tilde{u}\exp(pz)\,dp = \tilde{u}\exp(pz)\Big|_{\gamma_-}^{\gamma_+} - \int_{\gamma} \tilde{u}'\exp(pz)\,dp.
\]


The points $\gamma_{\pm}$ are the initial and end points of the integration contour. We suppose that the sum of the boundary terms equals zero. This is possible when the contour is closed or when the function $\tilde{u}(p)\exp(pz)$ vanishes at these points. It yields
\[
\int_{\gamma} \bigl(p^2\tilde{u} + \tilde{u}'\bigr)\exp(pz)\,dp = 0.
\]
The integral equals zero when the expression in brackets equals zero. It follows that $p^2\tilde{u} + \tilde{u}' = 0$. This equation is easily integrated:
\[
\frac{d\tilde{u}}{\tilde{u}} = -p^2\,dp,
\quad\text{or}\quad
\tilde{u} = C\exp(-p^3/3).
\]
It leads to an integral expression:
\[
u(z) = C\int_{\gamma} \exp(pz - p^3/3)\,dp.
\]
Now we know an explicit form of the integrand. Let us choose a contour of integration. The integrand is holomorphic in the complex plane. By Cauchy's theorem, an integral over a closed contour inside a bounded domain of the complex plane equals zero. Therefore, to obtain a non-trivial solution of Airy's equation we choose a contour going through infinity. It is important to find a direction to infinity in the complex plane. The integral is convergent when $\operatorname{Re}(p^3) > 0$ as $p \to \infty$. It means the contour should go to infinity inside the sectors $-\pi/6 < \arg(p) < \pi/6$, $\pi/2 < \arg(p) < 5\pi/6$ and $7\pi/6 < \arg(p) < 3\pi/2$. A contour of integration can be shrunk to a point when both its branches go to infinity through one sector; such integrals equal zero. To represent a non-trivial solution we choose contours that go to infinity through two different sectors (see Figure 1.4). Consider the integral over the contour $\gamma_3$. This contour is equivalent to the imaginary axis. Then
\[
u(z) = C\int_{-i\infty}^{i\infty} \exp(pz - p^3/3)\,dp.
\]
Change the variable of integration $p = ik$. It yields
\[
u(z) = iC\int_{-\infty}^{\infty} \exp\bigl(i(kz + k^3/3)\bigr)\,dk.
\]


[Figure 1.4: contours $\gamma_1$, $\gamma_2$, $\gamma_3$ in the complex $p$-plane.]

Figure 1.4. Curves $\gamma_1$, $\gamma_2$, $\gamma_3$ are contours of integration that lead to non-trivial Laplace integrals.

The imaginary part $\sin(kz + k^3/3)$ of the integrand is an odd function with respect to $k$; its integral equals zero. The real part $\cos(kz + k^3/3)$ of the integrand is even with respect to $k$. It yields
\[
u(z) = 2iC\int_0^{\infty} \cos(kz + k^3/3)\,dk.
\]

Show that the integral is convergent. Represent it as a sum of integrals: √|z|+1

∞ 3

u(z) = 2iC ∫ cos(kz + k /3)dk + 2iC ∫ cos(kz + k3 /3)dk. 0

√|z|+1

The first integral is bounded for z ∈ ℝ and z ≠ 0. Consider the second one: ∞

I(z) = 2iC ∫ cos(kz + k3 /3)dk. √|z|+1

Integrate it by parts: ∞

I(z) = 2iC ∫ cos(kz + k3 /3)dk = 2iC √|z|+1 ∞

+ 2iC ∫ √|z|+1

󵄨k=∞ cos(kz + k3 /3) 󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 √ z + k2 󵄨k= |z|+1

2k sin(kz + k3 /3)dk. (z + k2 )2


The last integral is absolutely convergent. The boundary term equals zero at $k = \infty$. It follows that the integral $I(z)$ is bounded, and the integral $u(z)$ is bounded too. Let us choose $C = 1/(2i\pi)$. As a result we obtain a formula for the Airy function:
\[
\operatorname{Ai}(z) = \frac{1}{\pi}\int_0^{\infty} \cos(kz + k^3/3)\,dk.
\]
This function is the solution of the Cauchy problem for Airy's equation with the initial conditions
\[
u(0) = \frac{1}{\pi}\int_0^{\infty} \cos(k^3/3)\,dk,
\qquad
u'(0) = -\frac{1}{\pi}\int_0^{\infty} k\sin(k^3/3)\,dk.
\]

The value of the Airy function at $z = 0$ is expressed through the gamma function:
\[
u(0) = \frac{1}{\pi}\int_0^{\infty} \cos(k^3/3)\,dk
= \frac{1}{\pi}\operatorname{Re}\int_0^{\infty} \exp(ik^3/3)\,dk.
\]
Denote $k^3/3 = t$. It gives $k^2\,dk = dt$, or $dk = dt/(3t)^{2/3}$; then
\[
\frac{1}{\pi}\int_0^{\infty} \exp(ik^3/3)\,dk
= \frac{1}{\pi\,3^{2/3}}\int_0^{\infty} \frac{\exp(it)}{t^{2/3}}\,dt.
\]
Now the integral with respect to $t$ can be turned into an integral over the imaginary axis, $t = i\zeta$:
\[
\int_0^{\infty} \frac{\exp(it)}{t^{2/3}}\,dt
= \int_0^{\infty} \frac{\exp(-\zeta)}{(i\zeta)^{2/3}}\,i\,d\zeta
= e^{i\pi/6}\int_0^{\infty} \frac{\exp(-\zeta)}{\zeta^{2/3}}\,d\zeta
= e^{i\pi/6}\,\Gamma(1/3).
\]
Taking the real part, we obtain
\[
u(0) = \frac{\cos(\pi/6)}{\pi\,3^{2/3}}\,\Gamma(1/3)
= \frac{\sqrt{3}\,\Gamma(1/3)}{2\pi\,3^{2/3}}
= \frac{1}{3^{2/3}\,\Gamma(2/3)}.
\]
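The value $\operatorname{Ai}(0) = 1/(3^{2/3}\Gamma(2/3)) \approx 0.3550$ can be confirmed directly from the cosine integral. This numerical sketch is an added illustration; the truncation point $k = 20$ is an arbitrary choice and, by the integration-by-parts estimate above, contributes a tail of order $1/K^2$.

```python
import math

# (1/pi) * integral of cos(k^3/3) from 0 to K, composite Simpson rule
K, n = 20.0, 80000
h = K / n
s = math.cos(0.0) + math.cos(K**3 / 3)
for i in range(1, n):
    k = i * h
    s += (4 if i % 2 else 2) * math.cos(k**3 / 3)
ai0_num = (h * s / 3) / math.pi

ai0_exact = 1.0 / (3**(2/3) * math.gamma(2/3))
```

The truncated quadrature reproduces the exact value within a few times $10^{-3}$, as expected from the $1/K^2$ tail.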

Another solution, linearly independent of $\operatorname{Ai}(z)$, is obtained from the sum of integrals over the contours $\gamma_1$ and $\gamma_2$. Each of these contours is equivalent to the union of a ray along the imaginary axis and the ray $(0, +\infty)$, so
\[
u(z) = C\int_{\gamma_1} \exp(pz - p^3/3)\,dp + C\int_{\gamma_2} \exp(pz - p^3/3)\,dp
= 2C\int_0^{\infty} \exp(pz - p^3/3)\,dp
+ C\int_{i\infty}^{0} \exp(pz - p^3/3)\,dp
+ C\int_{-i\infty}^{0} \exp(pz - p^3/3)\,dp.
\]
Let us change the variable of integration in the second and third integrals on the right-hand side, $p = ik$. As a result we get
\[
u(z) = 2C\int_0^{\infty} \exp(pz - p^3/3)\,dp + 2C\int_0^{\infty} \sin(kz + k^3/3)\,dk.
\]
The Airy integral $\operatorname{Bi}(z)$ is
\[
\operatorname{Bi}(z) = \frac{1}{\pi}\int_0^{\infty} \bigl(\exp(pz - p^3/3) + \sin(pz + p^3/3)\bigr)\,dp.
\]

1.5.3 Asymptotic Expansion for the Airy Function as z → −∞

The asymptotic expansion of the Airy function as $z \to -\infty$ can be evaluated by the method of stationary phase. To obtain the expansion one should extract a fast oscillating exponent from the integrand; substituting $k = \sqrt{-z}\,\xi$ gives
\[
\operatorname{Ai}(z) = \frac{1}{\pi}\int_0^{\infty}\cos(kz + k^3/3)\,dk
= \frac{\sqrt{-z}}{\pi}\int_0^{\infty}\cos\bigl((-z)^{3/2}(-\xi + \xi^3/3)\bigr)\,d\xi.
\]
We write the cosine via oscillating exponents:
\[
\operatorname{Ai}(z) = \frac{\sqrt{-z}}{2\pi}\int_0^{\infty}\exp\bigl(i(-z)^{3/2}(\xi - \xi^3/3)\bigr)\,d\xi
+ \frac{\sqrt{-z}}{2\pi}\int_0^{\infty}\exp\bigl(-i(-z)^{3/2}(\xi - \xi^3/3)\bigr)\,d\xi.
\]
The stationary points of the phase satisfy
\[
(-\xi + \xi^3/3)' = 0, \qquad -1 + \xi^2 = 0, \qquad \xi_s = \pm 1.
\]
The point $\xi_s = 1$ lies in the domain of integration. The formula for the asymptotic expansion of such integrals gives
\[
\operatorname{Ai}(z) \sim \frac{\sqrt{-z}}{2\pi}\,\frac{1}{\sqrt{(-z)^{3/2}}}\,\sqrt{\frac{2\pi}{2}}\,\exp\bigl(2i(-z)^{3/2}/3 - i\pi/4\bigr)
+ \frac{\sqrt{-z}}{2\pi}\,\frac{1}{\sqrt{(-z)^{3/2}}}\,\sqrt{\frac{2\pi}{2}}\,\exp\bigl(-2i(-z)^{3/2}/3 + i\pi/4\bigr),
\]
or
\[
\operatorname{Ai}(z) \sim \frac{1}{\sqrt{\pi}\,\sqrt[4]{-z}}\cos\bigl(2(-z)^{3/2}/3 - \pi/4\bigr),
\qquad z \to -\infty. \tag{1.16}
\]
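Formula (1.16) can be compared against a direct quadrature of the integral representation of $\operatorname{Ai}$. The check below is an added illustration: the point $z = -5$ and the truncation $k = 20$ are arbitrary choices, and truncating the oscillatory tail costs about $1/(z + K^2) \approx 0.003$ by the integration-by-parts bound above.

```python
import math

z = -5.0
K, n = 20.0, 80000          # truncation point and Simpson subdivisions
h = K / n
phase = lambda k: k * z + k**3 / 3
s = math.cos(phase(0.0)) + math.cos(phase(K))
for i in range(1, n):
    s += (4 if i % 2 else 2) * math.cos(phase(i * h))
ai_num = (h * s / 3) / math.pi   # numerical Ai(-5)

# asymptotic formula (1.16)
ai_asym = math.cos(2 * (-z)**1.5 / 3 - math.pi / 4) \
          / (math.sqrt(math.pi) * (-z)**0.25)
```

Already at $z = -5$ the asymptotic value agrees with the quadrature to better than one per cent.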

When $z \to +\infty$ the integral for the Airy function contains no stationary points. Therefore, one can integrate by parts repeatedly:
\[
\operatorname{Ai}(z) = \frac{\sqrt{z}}{\pi}\int_0^{\infty}\cos\bigl(z^{3/2}(\xi + \xi^3/3)\bigr)\,d\xi
= \frac{\sqrt{z}}{\pi}\,\frac{\sin\bigl(z^{3/2}(\xi + \xi^3/3)\bigr)}{z^{3/2}(1 + \xi^2)}\bigg|_0^{\infty}
+ \frac{1}{\pi z}\int_0^{\infty}\frac{2\xi\sin\bigl(z^{3/2}(\xi + \xi^3/3)\bigr)}{(1 + \xi^2)^2}\,d\xi
= \cdots = o(z^{-n/2}), \qquad \forall n \in \mathbb{N},\quad z \to \infty.
\]

It means the Airy function decays faster than any negative power of $z$ as $z \to \infty$.

1.5.4 Saddle-Point Method and the Airy Function Asymptotic Expansion as z → ∞

The asymptotic expansion of the Airy function can be evaluated using the Cauchy theorem on integrals of holomorphic functions together with the Laplace method. This approach is called the saddle-point method. It can be applied to integrals of the following type:
\[
F(\lambda) = \int_L f(z)e^{-\lambda S(z)}\,dz, \qquad \lambda \to \infty.
\]
The contour of integration can be deformed when the integrand is holomorphic in a neighbourhood of $L$; due to the Cauchy theorem the value of the integral does not change. Let us construct a contour of integration so that the exponential function decays as fast as possible; this simplifies the calculations. It is sufficient to deform the integration path so that it goes through a saddle point, i.e. an extremum point of the function $S(z)$ in the complex plane. We use this approach to obtain the Airy function expansion as $z \to \infty$. It is convenient to rewrite the integral representation of the Airy function as an integral of an oscillating exponent:
\[
\operatorname{Ai}(z) = \frac{1}{\pi}\int_0^{\infty}\cos(kz + k^3/3)\,dk
= \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i(kz + k^3/3)}\,dk.
\]
We extract a large parameter in the exponent, $k = \sqrt{z}\,\xi$:
\[
\operatorname{Ai}(z) = \frac{\sqrt{z}}{2\pi}\int_{-\infty}^{\infty}e^{iz^{3/2}(\xi + \xi^3/3)}\,d\xi.
\]
The extremum points of the exponent satisfy
\[
\frac{dS}{d\xi} = 0, \qquad 1 + \xi^2 = 0, \qquad \xi_s = \pm i,
\]
that is, $k = \pm i\sqrt{z}$. The real part of the exponent of the integrand is positive when the contour of integration moves into the lower half of the complex plane: the exponent $iz^{3/2}S(-i) = 2z^{3/2}/3$ is positive at $\xi_s = -i$. The real part of the exponent is negative, $iz^{3/2}S(i) = -2z^{3/2}/3$, when the contour of integration moves into the upper half-plane. To evaluate the integral we shift the integration variable, $k = i\sqrt{z} + \eta$. It yields
\[
\operatorname{Ai}(z) = \frac{e^{-2z^{3/2}/3}}{2\pi}\int_{-\infty}^{\infty}e^{-\sqrt{z}\,\eta^2 + i\eta^3/3}\,d\eta.
\]
Formula (1.9) yields
\[
\operatorname{Ai}(z) \sim \frac{e^{-2z^{3/2}/3}}{2\sqrt{\pi}\,\sqrt[4]{z}}, \qquad z \to \infty. \tag{1.17}
\]
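Formula (1.17) can be checked against the Taylor series of $\operatorname{Ai}$, which follows from $u'' = zu$ with the initial data $\operatorname{Ai}(0) = 1/(3^{2/3}\Gamma(2/3))$ and $\operatorname{Ai}'(0) = -1/(3^{1/3}\Gamma(1/3))$: the Taylor coefficients satisfy $a_{n+3} = a_n/((n+3)(n+2))$. The sketch below is an added illustration; the evaluation point $z = 5$ and the number of terms are arbitrary choices.

```python
import math

def ai_series(z, nterms=120):
    """Taylor series of Ai at 0, using a_{n+3} = a_n / ((n+3)(n+2))."""
    a = {0: 1.0 / (3**(2/3) * math.gamma(2/3)),    # Ai(0)
         1: -1.0 / (3**(1/3) * math.gamma(1/3)),   # Ai'(0)
         2: 0.0}
    total = a[0] + a[1] * z
    for n in range(nterms):
        a[n + 3] = a[n] / ((n + 3) * (n + 2))
        total += a[n + 3] * z**(n + 3)
    return total

z = 5.0
ai_exact = ai_series(z)
ai_asym = math.exp(-2 * z**1.5 / 3) / (2 * math.sqrt(math.pi) * z**0.25)
rel_err = abs(ai_asym / ai_exact - 1)
```

The relative discrepancy at $z = 5$ is below one per cent, in agreement with the size of the first omitted correction of the full asymptotic series.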

1.5.5 Problem

– Construct an asymptotic expansion for the Airy function in the complex plane along the half-lines $z = Re^{i\alpha}$, $R \to \infty$, $\alpha \in [0, 2\pi)$.

1.6 Functions of Parabolic Cylinder

In this section we derive an integral representation for solutions of the parabolic cylinder equation and present connection formulae for parabolic cylinder functions with different values of the parameter.

1.6.1 Parabolic Cylinder Equation

A canonical form of the parabolic cylinder equation is
\[
y'' - \Bigl(\frac{x^2}{4} + a\Bigr)y = 0. \tag{1.18}
\]
Here $x$ is an independent variable and $a$ is a parameter. The equation with the large coefficient frozen, $x \to X = \mathrm{const}$, has two independent solutions: the first grows and the second decays for large values of $x$. The parabolic cylinder equation can also be written in another form:
\[
y'' + \Bigl(\frac{x^2}{4} - a\Bigr)y = 0.
\]
Solutions of this equation oscillate at large real values of $x$. The equation of the first form transfers into the second form through the substitution
\[
x \to x\exp(i\pi/4), \qquad a \to -ia.
\]
Below we study solutions of the parabolic cylinder equation written in the first form; solutions of the equation of the second form can be obtained by the given substitution. The parabolic cylinder equation is obtained by the method of separation of variables; this form of the equation is related to a system of cylindrical coordinates with a parabola in the base. The parabolic cylinder equation plays an important role in quantum mechanics, oscillation theory and studies of the Painlevé equations.

1.6.2 Integral Representation

To obtain an integral representation for the solution of the parabolic cylinder equation, it is convenient to change the unknown function: $y = \exp(-x^2/4)u$. Substituting this formula into equation (1.18) gives an equation for $u$:
\[
u'' - xu' - \Bigl(\frac{1}{2} + a\Bigr)u = 0.
\]
This equation is more convenient because it contains the independent variable in the first power only. It allows one to obtain a first-order equation for the Laplace image and to integrate it. We construct a solution $u$ in the form
\[
u(x) = \int_{\gamma}\exp(kx)\tilde{u}(k)\,dk.
\]
This formula contains an unknown Laplace image $\tilde{u}(k)$ and an unknown contour of integration in the complex $k$-plane. We carry out formal calculations and suppose that they are valid; all manipulations are justified once the Laplace image $\tilde{u}$ and the contour $\gamma$ are determined. Evaluate the first and second derivatives with respect to $x$:
\[
\frac{d}{dx}\int_{\gamma}\exp(kx)\tilde{u}(k)\,dk = \int_{\gamma}k\exp(kx)\tilde{u}(k)\,dk,
\qquad
\frac{d^2}{dx^2}\int_{\gamma}\exp(kx)\tilde{u}(k)\,dk = \int_{\gamma}k^2\exp(kx)\tilde{u}(k)\,dk.
\]
Substitute these formulae into the equation and bring all terms under the integral sign:
\[
\int_{\gamma}\Bigl(k^2\tilde{u}(k) - xk\tilde{u}(k) - \Bigl(\frac{1}{2} + a\Bigr)\tilde{u}(k)\Bigr)\exp(kx)\,dk = 0.
\]
All terms in round brackets besides $-xk\tilde{u}(k)$ depend only on the variable $k$. Transform the integral of this term by parts to remove the variable $x$:
\[
-\int_{\gamma}xk\tilde{u}(k)\exp(kx)\,dk
= -k\tilde{u}(k)\exp(kx)\Big|_{\gamma_-}^{\gamma_+}
+ \int_{\gamma}\bigl(\tilde{u}(k) + k\tilde{u}'(k)\bigr)\exp(kx)\,dk,
\]
where $\gamma_{\pm}$ are the initial and final points of the contour $\gamma$. We suppose that the contour $\gamma$ is such that the sum of the boundary terms equals zero. It yields
\[
\int_{\gamma}\Bigl(k^2\tilde{u}(k) + \tilde{u}(k) + k\tilde{u}'(k) - \Bigl(\frac{1}{2} + a\Bigr)\tilde{u}(k)\Bigr)\exp(kx)\,dk = 0.
\]
The integral equals zero when the integrand equals zero. This condition gives a differential equation:
\[
k^2\tilde{u}(k) + k\tilde{u}'(k) + \Bigl(\frac{1}{2} - a\Bigr)\tilde{u}(k) = 0.
\]
The solution of this equation is
\[
\int\frac{d\tilde{u}}{\tilde{u}} = \int\frac{-k^2 + (a - \frac{1}{2})}{k}\,dk,
\quad\text{or}\quad
\tilde{u} = C\exp\Bigl(-\frac{k^2}{2}\Bigr)k^{a-1/2}.
\]
The solution can be represented in the form
\[
y = C\exp(-x^2/4)\int_{\gamma}\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{a-1/2}\,dk.
\]
Let us choose the contour of integration. The integrand has a singular point at $k = 0$ when $a < 1/2$. Determine the domain where the exponent has a negative real part. Let $k = \sigma + i\tau$; then $\operatorname{Re}(-k^2) = \tau^2 - \sigma^2$, so $\operatorname{Re}(-k^2) < 0$ as $\tau^2 < \sigma^2$. Any contour that contains the point at infinity, comes from infinity through the sector $3\pi/4 < \arg(k) < 5\pi/4$ and goes back through the sector $-\pi/4 < \arg(k) < \pi/4$ can be taken as $\gamma$. This contour does not include the point $k = 0$. By the Cauchy theorem all such contours are equivalent, up to deformation, to a loop around $k = 0$ that comes from the point at infinity under the axis $\operatorname{Im}(k) = 0$ and goes back over it. Denote the loop in Figure 1.5 by $\gamma$.

[Figure 1.5: the loop $\gamma$ in the complex plane.]

Figure 1.5. An integration over the loop $\gamma$. The contour indents the branch point $z = 0$ and goes under and over a cross-cut $\operatorname{Re}(z) < 0$ that connects the branch point $z = 0$ and the point at infinity.

The point $k = 0$ is a pole of order $n+1$ when $a = -n - 1/2$, $n \in \mathbb{N}$, and
\[
y(x) = C\exp(-x^2/4)\int_{\gamma}\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{-n-1}\,dk
= C\exp(x^2/4)\int_{\gamma}\exp\Bigl(-\frac{x^2}{2} + kx - \frac{k^2}{2}\Bigr)k^{-n-1}\,dk
\]
\[
= 2\pi i\,C\exp(x^2/4)\operatorname*{res}_{k=0}\Bigl(\exp\Bigl(-\frac{1}{2}(x - k)^2\Bigr)k^{-n-1}\Bigr)
= 2\pi i\,C\,(-1)^n\,\frac{\exp(x^2/4)}{n!}\,\frac{d^n}{dx^n}\exp(-x^2/2).
\]
It is convenient to multiply the solution by $\Gamma(1/2 - a)/(2\pi i)$ to absorb the $n!$ in the last formula. So we define the parabolic cylinder function in the classical notation [1]:
\[
U(a, x) = D_{-a-1/2}(x)
= \frac{\Gamma(1/2 - a)}{2\pi i}\exp(-x^2/4)\int_{\gamma}\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{a-1/2}\,dk
\]
and
\[
D_n(x) = (-1)^n\exp(x^2/4)\frac{d^n}{dx^n}\exp(-x^2/2).
\]

1.6.3 Connection Formulae at Different Values of Parameter

Let us consider properties of the functions when $(a + 1/2) \in \mathbb{N}$. For such values of the parameter the function is represented by two factors: one of them is the integral over the loop and the other is $\Gamma(1/2 - a)$. By Cauchy's theorem the integral over such a loop equals zero, while the gamma function has a pole of the first order. Therefore we get an indeterminate form, infinity times zero, and additional study is needed. It is convenient to use connection formulae for parabolic cylinder functions with different values of the parameter $a$. The following formula is valid:
\[
\int_{\gamma}\frac{d}{dk}\Bigl(\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{a-1/2}\Bigr)\,dk = 0.
\]
Then
\[
\int_{\gamma}\Bigl(x\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{a-1/2}
- \exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{(a+1)-1/2}
+ \Bigl(a - \frac{1}{2}\Bigr)\exp\Bigl(kx - \frac{k^2}{2}\Bigr)k^{(a-1)-1/2}\Bigr)\,dk = 0.
\]
Represent the integral as a sum of three integrals and multiply it by the factor $\Gamma(1/2 - a)\exp(-x^2/4)/(2\pi i)$. It yields [162]
\[
xU(a, x) - \frac{\Gamma(1/2 - a)}{\Gamma(1/2 - (a+1))}\,U(a+1, x)
+ \Bigl(a - \frac{1}{2}\Bigr)\frac{\Gamma(1/2 - a)}{\Gamma(1/2 - (a-1))}\,U(a-1, x) = 0.
\]
Using the property $\Gamma(z+1) = z\Gamma(z)$ of the gamma function we obtain
\[
xU(a, x) + \Bigl(a + \frac{1}{2}\Bigr)U(a+1, x) - U(a-1, x) = 0.
\]
This formula allows one to calculate values of parabolic cylinder functions when $(a + 1/2) \in \mathbb{N}$.
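At $a = -n - 1/2$ the connection formula reduces, via $U(-n-1/2, x) = D_n(x)$, to the classical recurrence $D_{n+1}(x) = xD_n(x) - nD_{n-1}(x)$. The sketch below is an added illustration: it builds $D_n$ independently from the derivative formula $D_n(x) = (-1)^n e^{x^2/4}\,d^n/dx^n\,e^{-x^2/2}$, representing $d^n/dx^n e^{-x^2/2} = p_n(x)e^{-x^2/2}$ with polynomials $p_{n+1} = p_n' - xp_n$, and then verifies the recurrence numerically.

```python
import math

# d^n/dx^n exp(-x^2/2) = p_n(x) exp(-x^2/2), p_0 = 1, p_{n+1} = p_n' - x p_n
def next_p(p):
    dp = [i * p[i] for i in range(1, len(p))]     # coefficients of p'
    xp = [0.0] + list(p)                          # coefficients of x * p
    m = max(len(dp), len(xp))
    dp += [0.0] * (m - len(dp))
    xp += [0.0] * (m - len(xp))
    return [dp[i] - xp[i] for i in range(m)]

ps = [[1.0]]
for _ in range(7):
    ps.append(next_p(ps[-1]))

def D(n, x):
    """D_n(x) = (-1)^n exp(x^2/4) d^n/dx^n exp(-x^2/2)."""
    pn = sum(c * x**i for i, c in enumerate(ps[n]))
    return (-1)**n * pn * math.exp(-x * x / 4)

# residuals of the connection formula x D_n - n D_{n-1} - D_{n+1} = 0
residuals = [abs(x * D(n, x) - n * D(n - 1, x) - D(n + 1, x))
             for n in range(1, 6) for x in (-2.2, 0.3, 1.7)]
```

The residuals vanish to rounding accuracy, confirming that the connection formula is consistent with the derivative definition of $D_n$.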

A(1/2 – a) k2 ∫ exp ( – )ka–1/2 dk 20i 2 𝛾

󵄨󵄨 󵄨󵄨 󵄨 󵄨 = 󵄨󵄨󵄨󵄨k = |k| exp(i arg(k)), |k| = 1, arg(k) = ±i0 󵄨󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 ∞ 󵄨󵄨 󵄨󵄨 󵄨 󵄨 A(1/2 – a) 2i sin(0(a – 1/2)) ∫ exp(–12 /2)1a–1/2 d1 = 󵄨󵄨󵄨󵄨12 /2 = +󵄨󵄨󵄨󵄨 = 20i 󵄨󵄨 󵄨󵄨 0

a/2–1/4 A(1/2

=2

0

– a)

sin(0(a – 1/2)) ∫ exp(–+)+a/2–3/4 d+ 0

a 3 A(1/2 – a) sin(0(a – 1/2))A( + ) = 2a/2–1/4 = 2a/2–1/4 0 2 4

A( a2 + 43 ) ; A(a – 21 )

1.7 WKB Method

35

Here we use A(1 – z)A(z) sin(0z) = 0. Similar evaluations for derivatives give U 󸀠 (a, 0) =

A(1/2 – a) k2 ∫ exp ( – )k(a+1)–1/2 dk 20i 2 𝛾

= 2a/2+1/4

A( a2 + 45 ) . A(a – 21 )

1.6.5 Problems

1. Obtain an asymptotic expansion of $U(a, x)$ as $x \to \infty$ and $a \in \mathbb{R}$.
2. Define an interval of validity of the asymptotic expansion from the previous problem for large $a$.
3. Discuss the asymptotic expansion for $x \in \mathbb{C}$ as $|x| \to \infty$ and $a \to \infty$.

1.7 WKB Method

This method was created for the construction of fast oscillating asymptotic expansions for linear ordinary differential equations. Later it was modified for linear partial differential equations. It allows one to construct the quasi-classical approximation for solutions of the Schrödinger equation [111]. It was also developed in studies of diffraction of short waves. The WKB method is named in honour of Wentzel, Kramers and Brillouin, who used a special method for the construction of an asymptotic expansion of a fast oscillating solution of the Schrödinger equation [21, 101, 159]. A similar approach was applied earlier by Liouville and Green in 1837 [56, 106]. Sometimes mathematicians call this method the Liouville–Green method [132].

1.7.1 Application of WKB Method for Ordinary Differential Equation of the Second Order

The WKB method can be used for the construction of fast oscillating solutions for differential equations of arbitrary order; an application to systems of equations is developed in Ref. [158]. In this section we demonstrate the main ideas of the WKB method on an equation of the second order. Consider a differential equation of the second order with a small parameter $\varepsilon^2$ at the second derivative:
\[
\varepsilon^2 u'' + q(x)u = 0. \tag{1.19}
\]
We use the square of the small parameter for convenience only. Change the variable $x$ to the fast variable $\xi = (x - X)/\varepsilon$, where $X$ is a fixed real point. The equation takes the form
\[
\frac{d^2u}{d\xi^2} + q(X + \varepsilon\xi)u = 0.
\]
Let $\varepsilon = 0$. In this case the equation has two linearly independent oscillating solutions when $q(X) > 0$, and exponentially growing and decaying solutions when $q(X) < 0$. For $q(X) > 0$ it is easy to obtain
\[
u \sim c_1\exp\bigl(i\sqrt{q(X)}\,\xi\bigr) + c_2\exp\bigl(-i\sqrt{q(X)}\,\xi\bigr),
\qquad c_{1,2} \in \mathbb{C}.
\]
This formula is an approximation in a small neighbourhood of $X$ with respect to $x$. To construct approximations that are uniformly suitable on a bounded interval of the independent variable $x$ we suppose:
– first, $q(x) > 0$; this allows us to consider only oscillating solutions of eq. (1.19);
– second, the function $q(x)$ is infinitely smooth.

The infinite smoothness is important for the construction of asymptotic series for linearly independent solutions of eq. (1.19). To construct a segment of the series, finite smoothness of the function is sufficient. We construct an asymptotic approximation of a solution of eq. (1.19) in the form $u = \exp(iS(x)/\varepsilon)A(x)$. Substituting this formula into the equation gives
\[
-(S')^2\exp(iS/\varepsilon)A + i\varepsilon S''\exp(iS/\varepsilon)A + 2i\varepsilon S'A'\exp(iS/\varepsilon)
+ \varepsilon^2\exp(iS/\varepsilon)A'' + q(x)\exp(iS/\varepsilon)A = 0.
\]
We can cancel the exponent. It yields
\[
-(S')^2A + q(x)A + \varepsilon\bigl(iS''A + 2iS'A'\bigr) + \varepsilon^2A'' = 0.
\]
Relations at $\varepsilon^0$ and $\varepsilon^1$ give two equations:
\[
S' = \pm\sqrt{q(x)} \qquad\text{and}\qquad S''A + 2S'A' = 0.
\]
These equations are easily integrated:
\[
S = \pm\int\sqrt{q(x)}\,dx, \qquad A = \frac{1}{\sqrt[4]{q(x)}}.
\]

The following function is called the WKB approximation:
\[
u_0(x, \varepsilon) = \frac{\exp\bigl(\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\bigr)}{\sqrt[4]{q(x)}};
\]
it satisfies eq. (1.19) up to terms of order $\varepsilon^2$:
\[
\varepsilon^2u_0'' + q(x)u_0
= \varepsilon^2\,\frac{5(q')^2 - 4qq''}{16\sqrt[4]{q^9}}\,\exp\Bigl(\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\Bigr).
\]
The formula for $u_0$ is the WKB approximation. Let us construct the whole asymptotic expansion of a solution:
\[
u \sim \frac{\exp\bigl(\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\bigr)}{\sqrt[4]{q(x)}}\Bigl(1 + \sum_{n=1}^{\infty}\varepsilon^nA_n(x)\Bigr). \tag{1.20}
\]
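The accuracy of the leading WKB term can be observed numerically. For a solution of eq. (1.19) with $q > 0$, the amplitude law $A = q^{-1/4}$ implies that $u^2 + (\varepsilon u'/\sqrt{q})^2 \approx 1/\sqrt{q}$ up to $O(\varepsilon)$. The sketch below is an added illustration, not part of the original text; the choices $q(x) = 1 + x^2$ and $\varepsilon = 1/50$ are arbitrary. It integrates the equation with the classical Runge–Kutta scheme and checks the amplitude law at $x = 1$.

```python
import math

lam = 50.0                       # lam = 1/eps
q = lambda x: 1.0 + x * x

def deriv(x, y):
    """First-order system for eps^2 u'' + q u = 0, i.e. u'' = -lam^2 q u."""
    u, v = y
    return (v, -lam * lam * q(x) * u)

# RK4 from x = 0 with u = 1, u' = 0, matching q^{-1/4} cos(lam S(x)) at x = 0
x, y, h = 0.0, (1.0, 0.0), 1e-3
for _ in range(1000):            # integrate up to x = 1
    k1 = deriv(x, y)
    k2 = deriv(x + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = deriv(x + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = deriv(x + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
    y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    x += h

u, v = y
invariant = u * u + (v / (lam * math.sqrt(q(x))))**2   # should be near q(1)^{-1/2}
```

At $x = 1$ the quantity `invariant` stays within $O(\varepsilon)$ of $q(1)^{-1/2} = 2^{-1/2}$, as the WKB amplitude law predicts.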

Substitute the series into the equation and collect terms at equal powers of the parameter $\varepsilon$. It leads to a recurrent system of equations for the coefficients $A_n$:
\[
iA_1' = \frac{4qq'' - 5(q')^2}{32\sqrt{q^5}};
\]
\[
iA_n' = \frac{4A_{n-1}qq'' - 5(q')^2A_{n-1} + 8qq'A_{n-1}' - 16q^2A_{n-1}''}{32\sqrt{q^5}},
\qquad n > 1.
\]
The equation for $A_n$ is easily integrated. It is possible to construct a series in powers of $\varepsilon$. The coefficients of this series have singularities at points $x_0$ where $q(x_0) = 0$. The order of the singularities grows with the number of the correction term. For this reason the constructed series is not valid in a small neighbourhood of the point $x_0$. Suppose the function $q(x)$ has a zero of the $k$th order at $x_0$, i.e. $q(x) = O\bigl((x - x_0)^k\bigr)$. Then the coefficient $A_1$ has a singularity of order $1 + k/2$ at this point. Integration gives the order of the $n$th correction term of the asymptotic series:
\[
A_n = O\bigl((x - x_0)^{-(nk)/2 - n}\bigr).
\]
The condition of validity of series (1.20) is
\[
\frac{\varepsilon A_n(x)}{A_{n-1}(x)} = o(1), \qquad \varepsilon \to 0.
\]

The constructed asymptotic series is valid as $\varepsilon^{-\frac{2}{k+2}}(x - x_0) \gg 1$. Similar calculations give an asymptotic expansion for the term with the complex conjugate exponent. The general solution has the asymptotic expansion
\[
u \sim C_1\,\frac{\exp\bigl(\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\bigr)}{\sqrt[4]{q(x)}}\Bigl(1 + \sum_{n=1}^{\infty}\varepsilon^nA_n(x)\Bigr)
+ C_2\,\frac{\exp\bigl(-\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\bigr)}{\sqrt[4]{q(x)}}\Bigl(1 + \sum_{n=1}^{\infty}\varepsilon^nA_n(x)\Bigr). \tag{1.21}
\]

Remark 1. WKB formulae can be used for constructing non-oscillating asymptotic series when $q(x) < 0$. An asymptotic formula for the decaying solution is essentially smaller than the measure of inaccuracy of the asymptotic formula for the exponentially growing solution. Therefore, in this case, the formula with the growing asymptotic expansion is used for asymptotic approximations. The formula for the general solution (1.21) makes no sense without additional arguments when $q(x) < 0$.

1.7.2 Justification of Constructed Asymptotic Expansion

The whole asymptotic series (1.20) is not usually used; instead, a segment of the series is taken as the approximation. Denote
\[
u_N = \frac{\exp\bigl(\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx\bigr)}{\sqrt[4]{q(x)}}\Bigl(1 + \sum_{n=1}^{N}\varepsilon^nA_n(x)\Bigr).
\]
To show that the formula gives an asymptotic approximation for the solutions, we construct a solution in the form $u = u_N + \varepsilon^{N+1}v$. Substituting it into the original differential equation gives an equation for the residue term $v$:
\[
\varepsilon^2v'' + q(x)v = \varepsilon^2F_N.
\]
Here
\[
F_N = -\frac{A_N''}{\sqrt[4]{q(x)}}\,e^{\frac{i}{\varepsilon}\int\sqrt{q(x)}\,dx}.
\]

Rewrite the second-order differential equation as a system of first-order equations:
\[
\begin{pmatrix} \varepsilon v_1' \\ \varepsilon v_2' \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -q(x) & 0 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+ \varepsilon^2\begin{pmatrix} 0 \\ F_N(x) \end{pmatrix}.
\]

The asymptotic solution of the differential equation gives a fundamental solution of the homogeneous system up to order $\varepsilon^2$:
\[
R(x, \varepsilon) = \begin{pmatrix}
\dfrac{\exp(iS/\varepsilon)}{\sqrt[4]{q}} & \dfrac{\exp(-iS/\varepsilon)}{\sqrt[4]{q}} \\[3mm]
\dfrac{(4iS'q - \varepsilon q')\exp(iS/\varepsilon)}{4\sqrt[4]{q^5}} & \dfrac{(-4iS'q - \varepsilon q')\exp(-iS/\varepsilon)}{4\sqrt[4]{q^5}}
\end{pmatrix}.
\]
The determinant of the matrix is $\det(R) = -2i$. We construct a solution in the form
\[
v = RB, \qquad B = (B_1(x, \varepsilon), B_2(x, \varepsilon))^T,
\]
where $B$ is an unknown vector function. Substituting $v$ into the system of equations yields
\[
\varepsilon RB' = \varepsilon^2IB + \varepsilon^2F_N,
\qquad\text{where}\quad \varepsilon^2I = QR - \varepsilon R'.
\]
Here
\[
Q = \begin{pmatrix} 0 & 1 \\ -q(x) & 0 \end{pmatrix}.
\]

It is easy to obtain a Volterra integral equation for the vector $B$:
\[
B = \varepsilon\int_{x_1}^{x}R^{-1}IB\,dy + \varepsilon\int_{x_1}^{x}R^{-1}F_N\,dy,
\]
where $x_1$ is a constant such that $q(x_1) > 0$. The inverse matrix $R^{-1}$ is easily constructed:
\[
R^{-1}(x, \varepsilon) = \begin{pmatrix}
-\dfrac{i(4iS'q + \varepsilon q')\exp(-iS/\varepsilon)}{8\sqrt[4]{q^5}} & -\dfrac{i\exp(-iS/\varepsilon)}{2\sqrt[4]{q}} \\[3mm]
-\dfrac{i(4iS'q - \varepsilon q')\exp(iS/\varepsilon)}{8\sqrt[4]{q^5}} & \dfrac{i\exp(iS/\varepsilon)}{2\sqrt[4]{q}}
\end{pmatrix}.
\]
Denote by $\|\bullet\|$ the norm in the space of continuous functions. Let $q(x_0) = 0$ and suppose that $q(x) > 0$ as $x > x_0$. Suppose that the integral equation has a solution in the space of continuous vector functions. Then it is easy to obtain an inequality for all $x > x_0$:
\[
\|B\| \le \varepsilon\int_{x_1}^{x}D\|B\|\,dy + \varepsilon P_N,
\]
where $D, P_N = \mathrm{const}$, such that $D > \|R^{-1}(x)I(x)\|$ and $P_N > \|F_N(x)\|$ for all $x > x_0$.

The well-known Grönwall lemma gives that $\|B\|$ is bounded on the interval $x \in (x_0, c\varepsilon^{-1})$ for any $c > 0$. The residue term satisfies $\varepsilon^{N+1}v = O(\varepsilon^{N+1})$ when $x$ changes on the considered interval. Let us show that the WKB asymptotic expansion is an approximation for the solution on the interval $\varepsilon^{-1/(k+2)}|x - x_0| \gg 1$. Estimate the order of $\|F_N\|$ in a small neighbourhood of $x_0$:
\[
\|F_N\| \le O\bigl((x - x_0)^{-k/4}\bigr)\|A_N''\|
\le O\bigl((x - x_0)^{-\frac{k}{4} - \frac{Nk}{2} - N - 2}\bigr).
\]
The upper estimate for $\|R^{-1}\|$ is
\[
\|R^{-1}\| = O\bigl((x - x_0)^{-k/4}\bigr) + O\bigl(\varepsilon(x - x_0)^{-\frac{k}{4} - 1}\bigr).
\]
An estimate for $\varepsilon^2I = QR - \varepsilon R'$ is
\[
\|\varepsilon^2I\| \le \varepsilon^2O\bigl((x - x_0)^{-k/4 - 1}\bigr) + \varepsilon^3O\bigl((x - x_0)^{-k/4 - 2}\bigr).
\]
Consider the sequence

\[
B_0 = \varepsilon\int_{x_1}^{x}R^{-1}F_N\,dy,
\qquad
B_{n+1} = \varepsilon\int_{x_1}^{x}R^{-1}IB_n\,dy + \varepsilon\int_{x_1}^{x}R^{-1}F_N\,dy.
\]

The sequence $\{B_n\}_{n=0}^{\infty}$ belongs to the space of bounded differentiable vector functions as $\varepsilon^{-\frac{2}{k+2}}|x - x_0| \gg 1$. Estimate the norm of the difference between two neighbouring elements of the sequence:
\[
\|B_{n+1} - B_n\| \le \varepsilon\int_{x_1}^{x}\|R^{-1}I(B_n - B_{n-1})\|\,dy
\le \varepsilon\|B_n - B_{n-1}\|\int_{x_1}^{x}\|R^{-1}\|\,\|I\|\,dy
\]
\[
\le \varepsilon\|B_n - B_{n-1}\|\,C\int_{x_1}^{x}\Bigl((y - x_0)^{-k/4} + \varepsilon(y - x_0)^{-\frac{k}{4}-1}\Bigr)\Bigl((y - x_0)^{-k/4-1} + \varepsilon(y - x_0)^{-k/4-2}\Bigr)\,dy
\]
\[
\le \varepsilon\|B_n - B_{n-1}\|\,C_1\Bigl((x - x_0)^{-k/2} + \varepsilon(x - x_0)^{-\frac{k}{2}-1} + \varepsilon^2(x - x_0)^{-k/2-2}\Bigr)
\]
\[
\ll \Bigl(\varepsilon^{1 + \frac{2}{k+2}\left(-\frac{k}{2}\right)} + \varepsilon^{2 + \frac{2}{k+2}\left(-\frac{k}{2}-1\right)} + \varepsilon^{3 + \frac{2}{k+2}\left(-\frac{k}{2}-2\right)}\Bigr)\,C_2\|B_n - B_{n-1}\|.
\]


It yields
\[
\|B_{n+1} - B_n\| \ll C_2\|B_n - B_{n-1}\|,
\qquad \varepsilon^{-\frac{2}{k+2}}|x - x_0| \gg 1, \quad \varepsilon \to 0.
\]
The sequence converges on the given interval of $x$. The main result of this section is
\[
v = o\Bigl(\varepsilon^N\Bigl|\frac{A_N}{\sqrt[4]{q}}\Bigr|\Bigr),
\qquad \varepsilon^{-\frac{2}{k+2}}|x - x_0| \gg 1, \quad \varepsilon \to 0.
\]
It means that eq. (1.20) is the asymptotic expansion of the solution of eq. (1.19).

1.7.3 Quasi-Classical Asymptotic Expansion and Transition Points

The WKB method is related to the quasi-classical asymptotic expansion in physics, which is a formal interpretation of the connection between classical and quantum mechanics. The one-dimensional Schrödinger equation for a quantum particle in a potential well is
\[
i\hbar\psi_t + \hbar^2\psi_{xx} + U(x)\psi = 0, \tag{1.22}
\]
where $\hbar$ is Planck's constant, which we consider as a small parameter, and $U(x)$ is a potential function. We construct a quasi-steady solution $\psi = \exp(-i\omega t/\hbar)u(x)$. The function $u$ satisfies an ordinary differential equation of the second order with a small parameter at the second derivative:
\[
\hbar^2u'' + (\omega + U(x))u = 0.
\]
We showed above that the constructed WKB asymptotic series is not valid in the neighbourhood of points $x_0$ that are solutions of the equation $\omega + U(x_0) = 0$. Let us show that the points $x_0$ are connected with transition points in classical mechanics. The Schrödinger equation (1.22) is connected with the Hamiltonian
\[
H = \frac{(x')^2}{2} + P(x).
\]
The value of $\omega$ determines the value of the energy of the classical particle in the potential well. It follows that the points $x_0$ such that $\omega + U(x_0) = 0$ are transition points for trajectories of the particle in Hamiltonian mechanics. To construct an asymptotic series in the neighbourhood of the transition points, a new scaled variable is used. Quantum mechanics describes the passage through a transition point as tunnelling through a potential barrier. These problems were also studied in the diffraction theory of short waves [157], where this phenomenon is called the transition to the shadow zone.


1.7.4 Essentially Singular Points of Differential Equation and the Stokes Phenomenon

Transition points are related to essentially singular points in the theory of differential equations. The transition through these points is connected with crossing the Stokes lines. Essentially singular points of an ordinary differential equation of the second order
\[
u'' + a(x)u' + b(x)u = 0
\]
are poles of the second order and higher of the function $a(x)$ and poles of the third order and higher of the function $b(x)$. It is possible to construct WKB asymptotic series for solutions in the neighbourhood of an essentially singular point. We consider the Airy equation and show a connection between WKB asymptotic series and asymptotic series in the neighbourhood of the essentially singular point of the second-order differential equation
\[
u'' - xu = 0.
\]
The point $x = \infty$ is an essentially singular point, as can be seen from the change of variable $z = 1/x$. The construction of an asymptotic series for the solution in the neighbourhood of $x = \infty$ is well known [132], but here we use the WKB method with a small parameter. Determine a new slow variable $\xi = x\varepsilon^{2/3}$, where $\varepsilon$ is a small parameter. The given equation in terms of $\xi$ looks like
\[
\varepsilon^2\frac{d^2u}{d\xi^2} - \xi u = 0.
\]
In the domain $\xi < 0$ the leading-order term of the WKB series as $\varepsilon \to 0$ is
\[
u \sim \frac{e^{2i(-\xi)^{3/2}/(3\varepsilon)}}{\sqrt[4]{-\xi}}.
\]
It was shown that this series is valid as $-\xi \gg \varepsilon^{2/3}$. Return to the original variable $x = \varepsilon^{-2/3}\xi$:
\[
u \sim \varepsilon^{-1/6}\,\frac{e^{2i(-x)^{3/2}/3}}{\sqrt[4]{-x}}.
\]
This asymptotic series is valid as $-x \gg 1$. Multiplying the obtained asymptotic series by the constant $\varepsilon^{1/6}$ gives the standard formula for the asymptotic expansion of a solution of the Airy equation for large values of $-x$:
\[
u \sim \frac{e^{2i(-x)^{3/2}/3}}{\sqrt[4]{-x}}.
\]


This formal introduction of a parameter allows one to construct WKB asymptotic expansions for various second-order differential equations.
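The phase 2(−x)^{3/2}/3 and the amplitude (−x)^{−1/4} above are exactly the ones appearing in the classical large-argument asymptotics of the Airy function Ai. This can be checked numerically; the following is a sketch assuming SciPy is available.

```python
import numpy as np
from scipy.special import airy

# Real form of the large-argument asymptotics of Ai, built from the two
# complex WKB solutions exp(+-2i(-x)^{3/2}/3)/(-x)^{1/4}:
#   Ai(-x) ~ sin((2/3) x^{3/2} + pi/4) / (sqrt(pi) x^{1/4}),  x -> +infinity
def ai_asymptotic(x):
    zeta = (2.0 / 3.0) * x ** 1.5
    return np.sin(zeta + np.pi / 4) / (np.sqrt(np.pi) * x ** 0.25)

xs = np.array([10.0, 50.0, 200.0])
exact = airy(-xs)[0]                          # airy() returns (Ai, Ai', Bi, Bi')
scale = 1.0 / (np.sqrt(np.pi) * xs ** 0.25)   # local oscillation amplitude
errors = np.abs(exact - ai_asymptotic(xs)) / scale
print(errors)                                 # decreases as x grows
```

The error, measured relative to the local amplitude, decays like x^{−3/2}, in agreement with the next term of the WKB series.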

1.7.5 Problems

1. Construct an asymptotic series for the solution of the Mathieu equation
   ε²u″ + (a + 16q cos(2x))u = 0,   ε → 0.
   Determine a domain of validity that depends on the parameters a, q.
2. Determine a value of the parameter α such that it is possible to construct a WKB asymptotic series for a solution of the equation
   u″ + (x² + cos(x^α))u = 0,   x → ∞.
3. Construct an asymptotic series for a solution of the Airy equation
   ε²u″ − zu = 0,   z < 0,   ε → 0.
4. Construct an asymptotic series for a solution of the parabolic cylinder equation
   ε²u″ + (z² − a²)u = 0.
   Determine a domain of validity for the constructed asymptotic series.

2 Asymptotic methods for solving nonlinear equations

In this chapter we present some examples of using asymptotic methods for solving nonlinear equations. We will discuss the following asymptotic approaches in detail:
– Construction of solutions of weakly nonlinear equations by the Wentzel–Kramers–Brillouin (WKB) method
– Boundary layer method
– Matching method

Below in this chapter we will consider elliptic functions as solutions of nonlinear differential equations:
– The Weierstrass ℘-function, its periodicity and poles
– Jacobi elliptic functions and their properties for small oscillations and for oscillations near a separatrix
– Mathieu and Lamé functions and their properties from the point of view of perturbation theory for nonlinear oscillations

Most of the subjects are well known, but one subject looks new. We present a uniform asymptotic expansion of the Jacobi function sn(t|m) as m → 1 − 0. The constructed approximation is valid over more than half of a period, so a turning point is included in the interval of validity. In addition, we obtain an asymptotic formula for the elliptic integral of the first kind and discuss how it differs from the corresponding handbook formula.

2.1 Fast Oscillating Asymptotic Expansions for Weak Nonlinear Case

We study fast oscillating solutions with small amplitude for the nonlinear equation:

ε²u″ + q(t)u = εu³,   ε → 0.  (2.1)

We construct an asymptotic solution for values of t where q(t) > 0. The form of the equation is related to the given problem. We choose the dependence on the small parameter ε so that the constructed series does not contain fractional powers of the small parameter. It is important that the order with respect to ε of the nonlinear term is less than the order of the linear terms; it is supposed that the linear term q(t)u is of order unity. Note that the equation contains a cubic nonlinear term. Such a nonlinearity leads to a resonance at the eigenfrequency of the linear oscillations; we demonstrate this phenomenon later. Quadratic nonlinear terms give resonances for higher-order correction terms, and the equations become more complicated.

DOI 10.1515/9783110335682-002


2.1.1 Asymptotic Substitution

We construct solutions in the form of a formal asymptotic series with respect to powers of the small parameter ε:

u(t, ε) ∼ ∑_{n=0}^{∞} εⁿ uₙ(t, ε).  (2.2)

This formal series is essentially different from the WKB series: here the correction terms uₙ(t, ε) depend on the parameter ε. Formula (2.2) shows a general representation of the asymptotic expansion; our goal is to obtain the coefficients uₙ(t, ε). We construct the leading-order term of the asymptotic expansion in the form of a fast oscillating function:

u₀(t, ε) = exp(iK/ε)A_{0,1}(t) + c.c.,

where c.c. means the complex conjugate term and

K = S(t) + φ(t, ε),   φ(t, ε) = ∑_{k=1}^{∞} εᵏ φₖ(t).

Equation (2.1) is non-autonomous. Its general solution contains three parameters: an initial value of time t₀ and two real arbitrary constants. We introduce one real constant in the complex exponent and another in the amplitude of the solution; this allows the function A_{0,1} to be real. The functions S(t) and φₖ(t), k ∈ ℕ, are also real.

For weakly nonlinear oscillations the period depends on the amplitude. This leads to the necessity of correcting the exponent at all orders k, and the amplitude of the oscillations enters the correction terms of the formal asymptotic expansion. The nonlinearity of the equation produces powers of the oscillating exponent in the correction terms. Substituting the formal series into the right-hand side and collecting terms at ε and at the exponents exp(±3iK/ε) gives

u₁(t, ε) = exp(iK/ε)A_{1,1}(t) + exp(3iK/ε)A_{1,3}(t) + c.c.

The function u₁ satisfies a non-homogeneous linear equation of the second order. The special form of the correction term φ in the phase of oscillation allows us to construct A_{1,1}(t) as a real function; the function A_{1,3}(t), however, is complex in general. The same calculations yield the form of the second-order correction term:

u₂(t, ε) = exp(iK/ε)A_{2,1}(t) + exp(3iK/ε)A_{2,3}(t) + exp(5iK/ε)A_{2,5}(t) + c.c.
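The statement that the cubic term feeds exactly the first and third harmonics is the elementary identity (A e^{iθ} + c.c.)³ = 3A³e^{iθ} + A³e^{3iθ} + c.c. for real A. A quick numerical confirmation (a sketch, assuming NumPy):

```python
import numpy as np

# For real A, (A e^{i th} + c.c.)^3 = 3 A^3 e^{i th} + A^3 e^{3 i th} + c.c.,
# i.e. (2A cos th)^3 = 6 A^3 cos th + 2 A^3 cos 3th: only harmonics 1 and 3.
A = 0.7
th = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
cube = (2.0 * A * np.cos(th)) ** 3
# Fourier cosine coefficients c_k of the cube, exact on a uniform grid
coeffs = {k: 2.0 * np.mean(cube * np.cos(k * th)) for k in range(6)}
print(coeffs)
```

Only the coefficients at cos θ and cos 3θ survive, which is why u₁ contains exactly the harmonics exp(±iK/ε) and exp(±3iK/ε).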


Here A_{2,1}(t) is a real function, A_{2,3}(t) and A_{2,5}(t) are complex. The nth correction term is

uₙ(t, ε) = ∑_{k=1}^{n} exp((2k − 1)iK/ε) A_{n,2k−1}(t) + c.c.

All functions A_{n,1}(t) are real. All the other coefficients at the oscillating exponents are complex.

2.1.2 Equations for Leading-Order Term and First-Order Correction Term

To obtain equations for the coefficients of the asymptotic expansion, we substitute eq. (2.2) into eq. (2.1) and collect terms at the same powers of ε. This gives a sequence of equations for S, φₙ, uₙ, n ∈ ℕ. To obtain equations for the Fourier coefficients we collect terms at the same exponents, which leads to a recurrent sequence of equations for S, φₙ, A_{n,k}. The terms at ε⁰ give

−(S′)² A_{0,1} + q(t)A_{0,1} = 0.

It leads to

S = ∫^t √q(τ) dτ.

The terms at ε¹ lead to the equation

(−2S′φ′₁A_{0,1} + i(2S′A′_{0,1} + S″A_{0,1})) exp(iK/ε) + (−9(S′)² + q(t))A_{1,3} exp(3iK/ε) + c.c.
= 3A³_{0,1} exp(iK/ε) + A³_{0,1} exp(3iK/ε) + c.c.

Collect terms at equal Fourier exponents. The relations for the real and imaginary parts of the coefficients at the first harmonic lead to

2S′A′_{0,1} + S″A_{0,1} = 0,   −2S′φ′₁ = 3A²_{0,1},   (−9(S′)² + q(t))A_{1,3} = A³_{0,1}.

The first equation is solved easily:

A_{0,1} = C_{0,1}/q(t)^{1/4},   ∀C_{0,1} ∈ ℝ.

The functions φ₁ and A_{1,3} can be expressed through q(t):

φ₁ = −(3C²_{0,1}/2) ∫^t dτ/q(τ),   A_{1,3} = −C³_{0,1}/(8q(t)^{7/4}).  (2.3)
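The relations behind eq. (2.3) can be verified symbolically. The following sketch (assuming SymPy) checks the transport equation for A_{0,1} and the resonance relation fixing φ₁ for an arbitrary positive q(t):

```python
import sympy as sp

# With S' = sqrt(q), the amplitude A01 = C*q**(-1/4) solves the transport
# equation 2*S'*A01' + S''*A01 = 0, and phi1' = -3*C**2/(2*q) satisfies the
# resonance condition -2*S'*phi1' = 3*A01**2.
t, C = sp.symbols('t C', positive=True)
q = sp.Function('q')(t)
Sp = sp.sqrt(q)                                  # S'(t)
A01 = C * q ** sp.Rational(-1, 4)
transport = 2 * Sp * sp.diff(A01, t) + sp.diff(Sp, t) * A01
resonance = -2 * Sp * (-sp.Rational(3, 2) * C**2 / q) - 3 * A01**2
print(sp.simplify(transport), sp.simplify(resonance))
```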


The relation at ε² gives

(−2S′φ′₂A_{0,1} − 2S′φ′₁A_{1,1} + A″_{0,1} − A_{0,1}(φ′₁)² + i(A_{0,1}φ″₁ + 2A′_{0,1}φ′₁ + A_{1,1}S″ + 2A′_{1,1}S′)) exp(iK/ε)
+ ((−9(S′)² + q(t))A_{2,3} − 18S′φ′₁A_{1,3} + i(6S′A′_{1,3} + 3S″A_{1,3})) exp(3iK/ε)
+ (−25(S′)² + q(t))A_{2,5} exp(5iK/ε) + c.c.
= (9A²_{0,1}A_{1,1} + 3A²_{0,1}A_{1,3}) exp(iK/ε) + (3A²_{0,1}A_{1,1} + 6A²_{0,1}A_{1,3}) exp(3iK/ε) + 3A²_{0,1}A_{1,3} exp(5iK/ε) + c.c.

Collect the coefficients at equal exponents. The relations at the first exponent give the following equations:

A_{0,1}φ″₁ + 2A′_{0,1}φ′₁ + A_{1,1}S″ + 2A′_{1,1}S′ = 0,
−2S′φ′₂A_{0,1} − 2S′φ′₁A_{1,1} + A″_{0,1} − A_{0,1}(φ′₁)² = 9A²_{0,1}A_{1,1} + 3A²_{0,1}A_{1,3},
(−9(S′)² + q(t))A_{2,3} = 3A²_{0,1}A_{1,1} + 6A²_{0,1}A_{1,3} + 18S′φ′₁A_{1,3} − i(6S′A′_{1,3} + 3S″A_{1,3}),
(−25(S′)² + q(t))A_{2,5} = 3A²_{0,1}A_{1,3}.  (2.4)

The first equation allows us to determine A_{1,1}. The function φ₂ is a solution of the second equation. The other equations are algebraic; they determine the functions A_{2,3} and A_{2,5}. It is important to determine the order of the singularity at q(t) = 0, because it gives the domain of validity of the asymptotic solution. The equation for A_{1,1} can be written in the form

2√q(t) A′_{1,1} + (q′(t)/(2√q(t))) A_{1,1} = −9C³_{0,1}q′(t)/(4q(t)^{9/4}).

A partial solution of this equation is

A_{1,1} = 3C³_{0,1}/(4q(t)^{7/4}).

The equation for φ₂ can be integrated:

φ′₂ = −51C⁴_{0,1}/(16q(t)^{5/2}) − q″(t)/(8q(t)^{3/2}) + 5(q′(t))²/(32q(t)^{5/2}).

The equations for A_{2,3} and A_{2,5} are algebraic. Their solutions are

A_{2,3} = −39C⁵_{0,1}/(64q(t)^{13/4}) + 9iq′(t)C³_{0,1}/(64q(t)^{13/4}),   A_{2,5} = C⁵_{0,1}/(64q(t)^{13/4}).


The leading-order term of the nonlinear WKB approximation (2.2) is

u ∼ C_{0,1} exp((i/ε) ∫^t √q(τ) dτ − i(3C²_{0,1}/2) ∫^t dτ/q(τ)) / q(t)^{1/4} + c.c.
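The envelope q(t)^{−1/4} of this leading-order term can be checked against a direct numerical integration of eq. (2.1). In the following sketch q(t) = 1 + t and the values of ε and the constant are arbitrary illustrative choices, not taken from the text (SciPy assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# eps^2 u'' + q(t) u = eps u^3 with q(t) = 1 + t; start on the envelope:
# u(0) = 2*C (a crest), u'(0) = 0, and compare local maxima of |u| with
# the predicted envelope 2*C*q(t)**(-1/4).
eps, C = 0.02, 0.5
q = lambda t: 1.0 + t
rhs = lambda t, y: [y[1], (eps * y[0] ** 3 - q(t) * y[0]) / eps ** 2]
sol = solve_ivp(rhs, (0.0, 3.0), [2.0 * C, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, 3.0, 60001)
u = sol.sol(ts)[0]
rel_errs = []
for a in np.arange(0.0, 2.7, 0.3):            # windows of a few periods each
    mask = (ts >= a) & (ts <= a + 0.3)
    env = 2.0 * C * q(ts[mask]) ** -0.25
    rel_errs.append(abs(np.abs(u[mask]).max() - env.max()) / env.max())
print(max(rel_errs))                          # small: the envelope is tracked
```

The phase correction φ₁ shifts the frequency but not the amplitude, so only the envelope is compared here.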

The leading-order term contains an additional term in the oscillating exponent. Such a term does not appear in WKB approximations of solutions of linear equations.

2.1.3 Equation for the nth Term of Formal Series and Domain of Validity

To construct the formal asymptotic series (2.2) and determine a domain of validity, it is convenient to study the nth correction term. It has the following form:

uₙ(t, ε) = ∑_{k=1}^{n} exp(i(2k − 1)Kₙ/ε) A_{n,2k−1}(t) + c.c.

We use a segment of the formal series in the phase function K:

Kₙ = S(t) + ∑_{m=1}^{n} εᵐ φₘ(t).

The equations for the Fourier coefficients at εⁿ for k ∈ {1, . . . , n} are

−((2k − 1)S′)² A_{n,k} + q(t)A_{n,k} + A″_{n−2,k} − 2(2k − 1)² S′ ∑_{l+m=n} φ′ₗ A_{m,k} − (2k − 1)² ∑_{l+m+j=n} φ′ₗφ′ⱼ A_{m,k}
+ i(2k − 1)(S″A_{n−1,k} + 2S′A′_{n−1,k} + ∑_{l=1}^{n−1} (φ″ₗ A_{n−l−1,k} + 2φ′ₗ A′_{n−l−1,k})) = F_{n,k},  (2.5)

F_{n,k} = ∑_{|ν|=n−1, |α|=k} A_{ν₁,α₁} A_{ν₂,α₂} A_{ν₃,α₃} + c.c.,   νᵢ ∈ ℤ₊,   αᵢ ∈ ℤ.  (2.6)

(Here the second subscript of A_{n,k} labels the harmonic.) Obtaining the coefficients for k = 1 and for k ≠ 1 differs. Consider first the equation for k = 1. The real part of the equation gives a formula for the correction term of the phase function:

−2S′φ′ₙ A_{0,1} = 2S′ ∑_{l+m=n, m≠0} φ′ₗ A_{m,1} + ∑_{l+m+j=n} φ′ₗφ′ⱼ A_{m,1} + Re(F_{n,1}) − Re(A″_{n−2,1}),  (2.7)

and the imaginary part gives

S″A_{n−1,1} + 2S′A′_{n−1,1} = −∑_{l=1}^{n−1} (φ″ₗ A_{n−l−1,1} + 2φ′ₗ A′_{n−l−1,1}) + Im(F_{n,1}) + Im(A″_{n−2,1}).  (2.8)


The coefficients for k > 1 are determined from a linear algebraic equation:

(−((2k − 1)S′)² + q(t)) A_{n,k} = −A″_{n−2,k} + 2(2k − 1)² S′ ∑_{l+m=n} φ′ₗ A_{m,k} + (2k − 1)² ∑_{l+m+j=n} φ′ₗφ′ⱼ A_{m,k}
− i(2k − 1)(S″A_{n−1,k} + 2S′A′_{n−1,k} + ∑_{l=1}^{n−1} (φ″ₗ A_{n−l−1,k} + 2φ′ₗ A′_{n−l−1,k})) + F_{n,k}.  (2.9)

Let us calculate the order of the singularity at q = 0. The maximum order of the singularity among the terms at εⁿ is related to the non-integrated terms; the coefficient A_{n,2n−1} contains terms of this type. Formula (2.9) gives

A_{n,2n−1} = O(q^{−(1+6n)/4}),   q → 0.

The singularity at q = 0 of the coefficients of the series for φₖ is weaker; this is a result of integration. The domain of validity is determined by

εⁿ⁺¹A_{n+1,2(n+1)−1} / (εⁿA_{n,2n−1}) ≪ 1,   that is,   εq^{−3/2} ≪ 1.

2.1.4 Justification of Asymptotic Series

In this section we prove a theorem on the existence of a solution of eq. (2.1). The proof is based on the existence theorem for the solution of the Cauchy problem for a system of nonlinear equations. The peculiarity of this proof is that we prove the existence of a solution of a problem with a parameter and use a long interval with respect to the fast variable S/ε. Suppose a segment of the asymptotic series is constructed:

U_N = ∑_{n=0}^{N} εⁿ uₙ(t, ε),

where K is also represented by a segment of the series:

K_N = S + ∑_{n=1}^{N} εⁿ φₙ.

Substitution of U_N into the equation gives a residual term:

ε²U″_N + q(t)U_N − εU³_N = ε^{N+1}L_N(t, ε),

where L_N(t, ε) is an algebraic polynomial of order 3N with respect to ε and a trigonometric polynomial of order 3(2N − 1) with respect to K_N/ε.


We construct the solution of eq. (2.1) in the form

u = U_N + εᴺV.  (2.10)

Here V is the residual term of the asymptotic expansion. In this section we obtain an equation for V and estimate the order of the partial solution with respect to ε. Substituting eq. (2.10) into eq. (2.1) gives the equation for the residual term V:

ε²V″ + q(t)V = 3εU²_N V + 3ε^{N+1}U_N V² + ε^{2N+1}V³ − εL_N.  (2.11)

It is convenient to denote the right-hand side of this equation by εF_N(V, t, ε). Let us rewrite the second-order equation as a system of first-order equations with v₁ = V, v₂ = εV′:

ε(v₁′, v₂′)ᵀ = ( 0   1 ; −q(t)   0 ) (v₁, v₂)ᵀ + (0, F_N(v₁, t, ε))ᵀ.

We use an asymptotic solution of the linearized system:

R(θ, t, ε) = ( exp(iθ)/q^{1/4}(t)   exp(−iθ)/q^{1/4}(t) ;
              (iS′q − εq′/4) exp(iθ)/q^{5/4}(t)   (−iS′q − εq′/4) exp(−iθ)/q^{5/4}(t) ),   θ = S/ε.

The function R is a solution of the linear system with respect to the fast variable. The determinant of the matrix is det(R) = −2i. We construct a solution of the nonlinear system in the form

(v₁, v₂)ᵀ = RB,   B = (b₁, b₂)ᵀ.

Substitution of this representation into the system of equations gives

B′ = −εR⁻¹(∂R/∂t)B + R⁻¹(0, F_N)ᵀ.

The right-hand side of this system of nonlinear differential equations is bounded with respect to B; it satisfies a Lipschitz condition and is uniformly continuous on a long interval of S/ε where q(t) ≠ 0. To estimate the right-hand side of the system in the neighbourhood of q(t) = 0, we study its asymptotic behaviour as q(t) → 0. The right-hand side is a matrix operator acting on the vector B. Suppose the function q(t) has a zero of the first order at t = t₀. We estimate the maximum value of the coefficients of this operator. The first term gives

εR⁻¹∂ₜR = O(ε(t − t₀)^{−3/2}) + O(ε²(t − t₀)^{−5/2}).

The second term contains a nonlinear operator that is cubic with respect to the components of B. The coefficients of this operator can be estimated separately. Introducing the Euclidean norm in the two-dimensional vector space yields

R⁻¹(0, F_N)ᵀ = (O(q^{−1/4}) + O(εq^{−5/4})) ∑_{j=1}^{3} ε^{j(N−1)} ‖B‖ʲ (∑_{n=0}^{N} O(εⁿq^{−(1+6n)/4}))^{3−j},   q → 0.

We rewrite the system of differential equations for B as a system of integral equations, with an initial condition at a point t₁ to the left of the point t₀ at which q(t₀) = 0:

B|_{t=t₁} = B₀,   t₁ < t₀.

The system of integral equations has the form

B = B₀ + ∫_{t₁}^{t} (−εR⁻¹(∂R/∂t)B + R⁻¹(0, F_N)ᵀ) dt.

We study the iterations

B_{m+1} = B₀ + ∫_{t₁}^{t} (−εR⁻¹(∂R/∂t)B_m + R⁻¹(0, F_N(B_m))ᵀ) dt.

The difference between two consecutive iterations equals

B_{m+1} − B_m = ∫_{t₁}^{t} (−εR⁻¹(∂R/∂t)(B_m − B_{m−1}) + (R⁻¹(0, F_N(B_m))ᵀ − R⁻¹(0, F_N(B_{m−1}))ᵀ)) dt.

The Lipschitz condition and the estimates of the coefficients give

‖B_{m+1} − B_m‖ ≤ ‖B_m − B_{m−1}‖ C(1 + C₁ε|t − t₀|^{−3/2})|t − t₁|,   C, C₁ = const > 0,   ε|t − t₀|^{−3/2} ≪ 1.

This estimate shows that the sequence of Euclidean norms of the differences of consecutive iterations tends to zero when |t − t₁| is small and ε|t − t₀|^{−3/2} ≪ 1. Consequently, there exists an interval (t₁, t) with ε|t − t₀|^{−3/2} ≪ 1 on which the formal asymptotic solution is the asymptotic expansion of a solution of the differential equation (2.1).


2.1.5 Problems

1. Construct an asymptotic expansion of a solution of the Painlevé-2 equation
   εu″ − tu + εu³ = 0,   ε → 0,
   when t > 0.
2. Determine a domain of validity for the asymptotic expansion.

2.2 Boundary Layer Method

Hydrodynamics is one of the main fields where asymptotic methods are applied. Investigations in hydrodynamics give new statements of problems and a new phenomenon, called the boundary layer. One of the simplest examples of a boundary layer arises in the study of the flow of a viscous fluid around a disk. The idea of the boundary layer appeared in the papers [139] and [14] by Prandtl and Blasius. These papers contain two important ideas: stretching of variables and construction of boundary layer correction terms. Later, Friedrichs and Wasow studied singularly perturbed problems in a similar manner [47, 156, 157]. Vishik and Lyusternik studied problems with a regular degeneration [154, 155]; they used an exponential boundary layer method. Vasilieva and Butuzov studied singularly perturbed systems of ordinary differential equations [151, 152]. The boundary layer theory also appeared in studies of physical problems [100, 108]. An extensive review of boundary layer theory can be found in Ref. [150].

It is important that the boundary layer method is simple and can be applied to various problems related to ordinary differential equations. It also applies to partial differential equations with a small parameter at the highest derivative. When the small parameter ε at the highest derivative equals zero, the order of the non-perturbed differential equation is less than the order of the original equation, and not all boundary conditions can be satisfied. To satisfy all boundary conditions one introduces boundary layer functions, which tend to zero as ε → 0. A typical exponential boundary layer function in the neighbourhood of x = 0 is exp(−λx/ε), λx > 0. Non-exponential boundary layers also exist; examples of such boundary and initial problems were considered in a paper by Lomov. The simplest example of a non-exponential boundary layer function is ε/(x + ε). We do not study this type of problem here.

A simple problem with a boundary layer is the Cauchy problem for a first-order differential equation:

εy′(x) + y²(x) = x,   y(1) = 0,   x ∈ (1, 2],  (2.12)

where 0 < ε ≪ 1 is a small positive parameter.


We construct a solution of eq. (2.12) in the form of a series with respect to the small parameter ε. Equation (2.12) is algebraic when ε = 0, and its solution does not satisfy the initial condition. We therefore construct another representation of the solution in the neighbourhood of x = 1.

2.2.1 Asymptotic Solution

We construct a solution in the form

y ∼ ∑_{k=0}^{∞} εᵏyₖ(x).  (2.13)

Substitute eq. (2.13) into problem (2.12) and collect terms at the linearly independent functions εᵏ. This leads to a recurrent sequence of problems for the coefficients of representation (2.13). The relation at ε⁰ gives

y₀² = x.

This equation has the solutions y₀(x) = ±√x. The relation at ε¹ leads to the equation

2y₀y₁ = −y₀′.

Substituting the expressions for y₀ and y₀′ into this equation yields

y₁ = −1/(4x).

The relation at ε² gives an equation for y₂:

2y₀y₂ = −y₁² − y₁′.

Substituting the functions y₀, y₁ into this equation yields

y₂ = ∓5/(32x^{5/2}).

Similar calculations allow us to determine all coefficients of eq. (2.13). The equation for yₙ has the form

2y₀yₙ = −∑_{k+m=n, k,m≥1} cₖₘ yₖyₘ − y′ₙ₋₁.
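The first coefficients can be confirmed symbolically. The check below uses the branch y₀ = +√x (a sketch, assuming SymPy):

```python
import sympy as sp

# Substitute y = sqrt(x) + eps*y1 + eps**2*y2 into eps*y' + y**2 - x and
# verify that y1 = -1/(4x), y2 = -5/(32 x^{5/2}) kill the coefficients of
# eps**0, eps**1 and eps**2.
x, eps = sp.symbols('x eps', positive=True)
y1 = -1 / (4 * x)
y2 = -5 / (32 * x ** sp.Rational(5, 2))
y = sp.sqrt(x) + eps * y1 + eps**2 * y2
residual = sp.expand(eps * sp.diff(y, x) + y**2 - x)
low_orders = [sp.simplify(residual.coeff(eps, n)) for n in range(3)]
print(low_orders)   # [0, 0, 0]
```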


These equations lead to

yₙ = O(x^{(1−3n)/2}).

We obtain two formal asymptotic solutions of eq. (2.12), with leading-order terms y₀ = √x and y₀ = −√x. A formal asymptotic solution means a solution in the form of series (2.13) in which all coefficients can be determined recurrently. The asymptotic series (2.13) is a formal asymptotic solution for x ∈ [1, 2] but does not satisfy the initial condition at x = 1. The difference between the formal asymptotics (2.13), the asymptotics with boundary layer and the numerical solution is shown in Fig. 2.1.

2.2.2 Boundary Layer

To satisfy the initial condition at x = 1 we introduce a new scaled variable ξ = (x − 1)/ε. The original equation written in this variable has the form

dy/dξ + y² = 1 + εξ.  (2.14)

A solution of eq. (2.14) is constructed as a sum of the constructed series (2.13) and a series of boundary layer functions Yₖ(ξ):

y ∼ ∑_{k=0}^{∞} εᵏyₖ(x) + ∑_{k=0}^{∞} εᵏYₖ(ξ).

Figure 2.1. Numeric solution and boundary layer approximation for the solution of εy′ + y² = x, y|ₓ₌₁ = 0 (curves shown: numeric solution, outer approximation, uniform approximation).
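A comparison of the kind shown in Fig. 2.1 can be reproduced with a short script. The sketch below (assuming SciPy) uses the uniform approximation derived later in Corollary 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate eps*y' + y**2 = x with y(1) = 0 and compare with the uniform
# approximation z(x) = sqrt(x) - 2/(1 + exp(2(x-1)/eps)).
eps = 0.01
sol = solve_ivp(lambda x, y: [(x - y[0] ** 2) / eps], (1.0, 2.0), [0.0],
                method='Radau', rtol=1e-10, atol=1e-12, dense_output=True)
xs = np.linspace(1.0, 2.0, 201)
y_num = sol.sol(xs)[0]
e = np.exp(-2.0 * (xs - 1.0) / eps)   # overflow-free form of the layer term
z = np.sqrt(xs) - 2.0 * e / (1.0 + e)
err = float(np.max(np.abs(y_num - z)))
print(err)   # O(eps) on the whole interval (1, 2]
```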


We change the variable from x to the scaled variable ξ in the expressions for the coefficients yₖ of series (2.13) by substituting x = 1 + εξ. We expand yₖ(1 + εξ) in Taylor series as ε → 0 and collect terms with equal powers of the small parameter ε. This yields a series in powers of the small parameter with coefficients ŷₖ(ξ):

y₀ = ±√(1 + εξ) = ±(1 + εξ/2 − ε²ξ²/8 + ⋯),
y₁ = −1/(4(1 + εξ)) = −1/4 + εξ/4 − ⋯,
y₂ = ∓5/(32(1 + εξ)^{5/2}) = ∓5/32 ± 25εξ/64 + ⋯.

It gives

ŷ₀(ξ) = ±1,   ŷ₁(ξ) = ±ξ/2 − 1/4,   ŷ₂(ξ) = ∓ξ²/8 + ξ/4 ∓ 5/32.

We obtain the formal asymptotic series

∑_{k=0}^{∞} εᵏyₖ(x) ∼ ∑_{k=0}^{∞} εᵏyₖ(1 + εξ) ∼ ∑_{k=0}^{∞} εᵏŷₖ(ξ).

The coefficients of this series have the order ŷₖ(ξ) = O(ξᵏ), so the constructed series is asymptotic in the domain εξ ≪ 1. We construct a solution in the form

y(ξ, ε) = ∑_{k=0}^{∞} εᵏYₖ(ξ) + ∑_{k=0}^{∞} εᵏŷₖ(ξ).  (2.15)

Substitute eq. (2.15) into the original equation and collect terms at εᵏ, k = 0, 1, . . . . The relation at ε⁰ gives the equation

Y₀′ ± 2Y₀ + Y₀² = 0.

A general solution of this equation is

Y₀(ξ) = ((1 ∓ 1)C₀ exp(2ξ) − (1 ± 1)) / (1 + C₀ exp(2ξ)).

Consider the different boundary layer solutions related to the different leading-order terms ±√x of the external solution.


Let y₀ = √x. It yields

Y₀⁺ = −2/(1 + C₀ exp(2ξ)).

The initial condition is

(Y₀⁺ + ŷ₀)|_{ξ=0} = 0,   Y₀⁺|_{ξ=0} = −1.

It gives C₀ = 1 and

Y₀⁺ = −2/(1 + exp(2ξ)).

When y₀ = −√x we obtain

Y₀⁻ = 2C₀ exp(2ξ)/(1 + C₀ exp(2ξ)).

Substitution into the initial condition gives

Y₀⁻|_{ξ=0} = 2C₀/(1 + C₀) = 1,   C₀ = 1.

In this case we obtain

Y₀⁻ = 2 exp(2ξ)/(1 + exp(2ξ)).

The constructed functions have the following behaviour as ξ → +∞: Y₀⁺ → 0 and Y₀⁻ → 2. It means that Y₀⁻ is not a boundary layer function. Moreover, the sum Y₀⁻ + ŷ₀ → 1 as ξ → +∞.

Corollary 1. The study of the boundary layer shows that only the leading-order term y₀ = √x corresponds to problem (2.12).

The relation at ε¹ gives the Cauchy problem

Y₁′ + 2(Y₀ + 1)Y₁ = (1/2 − ξ)Y₀,   Y₁|_{ξ=0} = −ŷ₁(0).

Its solution is

Y₁(ξ) = ((ξ² − ξ + 1) exp(2ξ) − ξ)/(1 + exp(2ξ))².


The relation at ε² leads to

Y₂′ + 2(Y₀ + 1)Y₂ = −Y₁² − ξY₁ + (1/2)Y₁ + (1/4)ξ²Y₀ − (1/2)ξY₀ + (5/16)Y₀.

Calculations of the higher-order correction terms are cumbersome. The correction terms Yₖ, k = 3, . . . , are obtained from a recurrent sequence of Cauchy problems. The equation for Yₙ is

Yₙ′ + 2(Y₀ + 1)Yₙ = P₂(Y₀, . . . , Yₙ₋₁),  (2.16)

where P₂ is a polynomial of the second order with respect to the previous correction terms. It is important that the function Yₙ has the form

Yₙ(ξ) = O(ξ²ⁿ exp(−2ξ)).  (2.17)

It means that Yₙ is a boundary layer function for all n ∈ ℕ. This leads to

Theorem 4. The asymptotic series

y ∼ ∑_{k=0}^{∞} εᵏYₖ((x − 1)/ε) + ∑_{k=0}^{∞} εᵏyₖ(x),   y₀ = √x,

satisfies the Cauchy problem (2.12) up to O(εᴺ) for all N ∈ ℕ as x ∈ (1, 2].

Corollary 2. The function

z = −2/(1 + exp(2(x − 1)/ε)) + √x,

substituted into eq. (2.12) and the initial condition, gives an error o(1) as ε → 0 for x ∈ (1, 2].

2.2.3 Justification of Asymptotic Solution

Change the variables

y = εz′/z,   x = ε^{2/3}s.  (2.18)

The original eq. (2.12) transforms into the Airy equation

z″ − sz = 0.  (2.19)
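The change of variables (2.18) can be verified symbolically (a sketch, assuming SymPy):

```python
import sympy as sp

# With y = eps*z'/z, the combination eps*y' + y**2 collapses to eps^2*z''/z,
# so y solves eps*y' + y**2 = x exactly when eps^2*z'' = x*z; in the scaled
# variable s = eps**(-2/3)*x this is the Airy equation z'' - s*z = 0.
x, eps = sp.symbols('x eps', positive=True)
z = sp.Function('z')(x)
y = eps * sp.diff(z, x) / z
lhs = sp.simplify(eps * sp.diff(y, x) + y**2)
print(lhs)   # eps**2 * z''(x) / z(x)
```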


The solution of eq. (2.12) can thus be represented through a solution of the Airy equation. The new variable s ranges over the interval [ε^{−2/3}, 2ε^{−2/3}], which corresponds to large positive values, so we can use the asymptotic expansion of a solution of the Airy equation to control the constructed asymptotic solution of eq. (2.12).

To justify the constructed asymptotic solution we seek an exact solution in the form of a sum of N-segments of the asymptotic series:

y = ∑_{k=0}^{N} εᵏyₖ(x) + ∑_{k=0}^{N} εᵏYₖ((x − 1)/ε) + εᴺR(x, ε),  (2.20)

where R is a residual term. We obtain the Cauchy problem for the residual term and show that it is bounded for x ∈ (1, 2]. Substitution of the expression

ỹ_N = ∑_{k=0}^{N} εᵏyₖ(x) + ∑_{k=0}^{N} εᵏYₖ((x − 1)/ε)

into eq. (2.12) gives a function of order ε^{N+1}:

ε^{N+1}F_N(x, ε) = ε dỹ_N/dx + ỹ_N² − x.

The residual term satisfies the Cauchy problem

εR′ + 2ỹ_N R = εF_N(x, ε) − εᴺR²,   R|_{x=1} = εC(ε),

where C(ε) is a function of ε > 0 that is bounded in a neighbourhood of ε = 0. This Cauchy problem can be rewritten in the form of an integral equation:

R(x, ε) = εC(ε) exp(−(1/ε)∫₁ˣ 2ỹ_N dy)
+ exp(−(1/ε)∫₁ˣ 2ỹ_N dy) ∫₁ˣ exp((1/ε)∫₁ʸ 2ỹ_N dz) F_N(y, ε) dy
− ε^{N−1} exp(−(1/ε)∫₁ˣ 2ỹ_N dy) ∫₁ˣ exp((1/ε)∫₁ʸ 2ỹ_N dz) R²(y, ε) dy.

We have ỹ_N ∼ √x. Hence there exists ε₀ > 0 such that the exponential functions in the integral operator are uniformly bounded for x ∈ (1, 2], ε ∈ (0, ε₀). Denote the operator on the right-hand side by I(R). This integral operator maps C¹ functions on (1, 2] to C¹ functions on (1, 2] when ε ∈ (0, ε₀), and it satisfies a Lipschitz condition.


Consider the iterations Rₙ₊₁ = I(Rₙ). It gives

|Rₙ₊₁ − Rₙ| = |I(Rₙ) − I(Rₙ₋₁)| ≤ ε^{N−1} A ∫₁ˣ |Rₙ − Rₙ₋₁| dy,   A = const > 0.

We obtain

|Rₙ₊₁ − Rₙ| ≤ q|Rₙ − Rₙ₋₁|,   0 < q < 1,   for |x − 1| = o(ε^{1−N}).

It means this transformation is contractive in C¹ on (1, 2] for ε ∈ (0, ε₀). It yields

Theorem 5. The solution of the Cauchy problem (2.12) has the asymptotic expansion

y ∼ ∑_{k=0}^{∞} εᵏyₖ(x) + ∑_{k=0}^{∞} εᵏYₖ((x − 1)/ε),   ε → 0,   ∀x ∈ (1, 2].

2.2.4 Problems

1. Construct an asymptotic expansion for problem (2.12) with the cubic nonlinear term y³. The initial condition and the interval for the independent variable are the same.
2. Obtain an equation for a residual term when an exact solution contains terms up to O(ε).

2.3 Catastrophes and Regular Expansions

Here we discuss the main ideas concerning asymptotic series with respect to a small parameter. The small parameter, as a basis for the calibration sequence, is very often used in investigations of problems of mathematical physics. The coefficients of the expansions depend on the independent variable; therefore, the coefficients can vary in such a way that the expansion becomes invalid in some domains of the independent variable. In such cases one should construct an additional asymptotic series and match the two series with each other. In this section we introduce asymptotic series in different domains of the values of the parameter and show an approach to the justification of the constructed asymptotic series. Such asymptotic expansions are constructed in intersecting domains and matched. The matching method was presented in Refs. [32, 46]. Later, Kaplun constructed a compound uniform asymptotic solution [82]. A justification of uniform asymptotic solutions was presented in Ref. [64].


2.3.1 A Formal Series with Respect to Powers of Small Parameter

Consider a nonlinear equation:

(y + 1)(y − t)(y + t) − ε = 0.  (2.21)

The equation contains two real parameters t and ε. It is easy to find all roots of this equation when ε = 0. When ε ≠ 0, the solutions can be obtained by the Cardano formulae, but these formulae are complicated and it is convenient to use approximate methods. We will consider ε as a small parameter and t as an independent variable. The chosen form of the equation is convenient for showing how to construct an asymptotic solution and for studying a solution near a confluent point of the roots at t = 0.

Let us suppose that the roots change little when the equation is perturbed by ε ≠ 0. We will find the approximation of the root y = t as ε → 0 in the form of a formal series with respect to the parameter ε:

y ∼ ∑_{k=0}^{∞} εᵏYₖ,   Y₀ = t.  (2.22)

To determine the coefficients Yₖ, k ∈ ℕ, we substitute the formal series into eq. (2.21) and collect terms at the linearly independent functions εᵏ. The relations at ε give

2t²Y₁ + 2tY₁ − 1 = 0,   whence   Y₁ = 1/(2t(t + 1)).

The relations at ε² give

2t(t + 1)Y₂ = −(3t + 1)Y₁²,   whence   Y₂ = −(3t + 1)/(8t³(t + 1)³).

The equations for the third and the fourth correction terms are obtained from the relations at ε³ and ε⁴:

2t(t + 1)Y₃ = −2(3t + 1)Y₁Y₂ − Y₁³,
2t(t + 1)Y₄ = −(3t + 1)(2Y₁Y₃ + Y₂²) − 3Y₁²Y₂.

It is easy to obtain that

Yₙ = Pₙ(Y₀, Y₁, . . . , Yₙ₋₁)/(t(t + 1)).


Figure 2.2. The left-hand graph shows a numerical solution and two terms of the outer asymptotic expansion (2.22) for ε = 0.1; the asymptotic series is not valid in the neighbourhood of t = 0. The middle graph shows three terms of the inner asymptotic expansion obtained using eq. (2.28); this asymptotic series is not valid outside the neighbourhood of t = 0. The right-hand graph contains the compound asymptotic expansion (2.29); this asymptotic expansion is uniformly valid in the neighbourhood of t = 0 and in the domain t > 0.

The numerator of the fraction contains Pₙ(Y₀, Y₁, . . . , Yₙ₋₁), a polynomial of the third order with respect to all variables. The non-zero coefficients stand at the terms (3t + 1)YₐY_b with a + b = n and at the terms YₐY_bY_c with a + b + c = n. It is important to determine the order of the singularity of Yₙ in the neighbourhoods of t = 0 and t = −1. The difference between the outer approximation by two terms of (2.22) and the numerical solution of (2.21) is shown in the first picture of Fig. 2.2. A segment of series (2.22) with five correction terms is

y ∼ t + ε/(2t(t + 1)) − ε²(3t + 1)/(8(t + 1)³t³) + ε³(8t² + 5t + 1)/(16(t + 1)⁵t⁵)
− 5ε⁴(3t + 1)(7t² + 4t + 1)/(128(t + 1)⁷t⁷) + ε⁵(384t⁴ + 453t³ + 237t² + 63t + 7)/(256(t + 1)⁹t⁹).
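Both the recurrence for the Yₖ and the numerical accuracy of such a segment are easy to check by machine (a sketch, assuming NumPy and SymPy):

```python
import numpy as np
import sympy as sp

# Recompute the outer coefficients Y_k of series (2.22) recursively and
# compare the segment with a numerical root of (y+1)(y-t)(y+t) - eps = 0.
t, eps, c = sp.symbols('t eps c')
N = 5
Y = [t]                                           # Y_0 = t
for n in range(1, N + 1):
    y = sum(Y[k] * eps**k for k in range(n)) + c * eps**n
    expr = sp.expand((y + 1) * (y - t) * (y + t) - eps)
    # the coefficient of eps**n is linear in the unknown c = Y_n
    Y.append(sp.simplify(sp.solve(expr.coeff(eps, n), c)[0]))
tv, ev = 0.5, 1e-3
root = min(np.roots([1.0, 1.0, -tv**2, -tv**2 - ev]),
           key=lambda r: abs(r - tv))             # the root near y = t
segment = float(sum((Y[k] * ev**k).subs(t, tv) for k in range(N + 1)))
print(abs(root - segment))                        # of order eps**6
```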


Consider the recurrent system of equations for Yₖ, k = 1, . . . , n − 1. Suppose that the estimates Yₖ = O(t^{−2k+1}) and Yₖ = O((t + 1)^{−2k+1}), k < n, are valid in the neighbourhoods of t = 0 and t = −1. The order of the singularity of Yₙ in the neighbourhood of t = 0 is

Yₙ = O(t^{−(2a−1)−(2b−1)−1}) + O(t^{−(2c−1)−(2d−1)−(2e−1)−1}),   a + b = n,   c + d + e = n.

It gives Yₙ = O(t^{−2n+1}), i.e. the order of the singularity of Yₙ at t = 0 equals 2n − 1. In the same way one obtains that the order of the singularity of Yₙ at t = −1 also equals 2n − 1. The orders of the singularities at t = 0 and t = −1 grow with n.

Series (2.22) is asymptotic when εᵏ⁺¹Yₖ₊₁ = o(εᵏYₖ). This condition gives the relation

ε/(t²(t + 1)²) = o(1).

It follows that series (2.22) loses the asymptotic property when t ∼ √ε or t ∼ −1 + √ε. The reason is that eq. (2.21) has a multiple root y = 0 at t = 0, ε = 0; a multiple root also appears at t = −1, ε = 0. The analytic dependence of the roots on the parameter ε is lost in the neighbourhoods of the multiple roots. It means that other representations of the roots should be constructed in the neighbourhoods of t = 0 and t = −1.

2.3.2 Justification of Asymptotic Series

We have constructed the formal asymptotic series (2.22). In this section we discuss whether the constructed series is an appropriate approximation of solutions of eq. (2.21); we prove that a segment of the asymptotic series approximates an exact solution. Consider the N-segment of the series:

Ỹ_N = t + ∑_{k=1}^{N} εᵏYₖ.

Construct a solution of eq. (2.21) in the form

y = Ỹ_N + ε^{N+1}ỹ.  (2.23)


The function ỹ is the difference between the exact solution and the subsum of the asymptotic series (2.22), scaled by ε^{N+1}. We prove that ỹ exists and is bounded. Substituting eq. (2.23) into eq. (2.21) yields an equation for ỹ:

ε^{N+1}(3Ỹ²_N + 2Ỹ_N − t²)ỹ + (Ỹ³_N + Ỹ²_N − t²Ỹ_N − t² − ε) + ε^{2(N+1)}ỹ²(3Ỹ_N + 1) + ε^{3(N+1)}ỹ³ = 0.  (2.24)

We obtain

Ỹ³_N + Ỹ²_N − t²Ỹ_N − t² − ε = ε^{N+1}F(t),

where F(t) is a function with singularities of order 2N − 2 at t = 0 and t = −1. Simple calculations allow us to rewrite eq. (2.24) in the form

ỹ = −F(t)/(3Ỹ²_N + 2Ỹ_N − t²) − ε^{N+1}(3Ỹ_N + 1)ỹ²/(3Ỹ²_N + 2Ỹ_N − t²) − ε^{2(N+1)}ỹ³/(3Ỹ²_N + 2Ỹ_N − t²).  (2.25)

The leading-order term of the denominator on the right-hand side of eq. (2.25) can be estimated from the exact formulae for Yₖ, k = 0, 1, . . . , N:

3Ỹ²_N + 2Ỹ_N − t² = O(t(t + 1)).

Change the variable ỹ = Az̃, where

A = −F(t)/(3Ỹ²_N + 2Ỹ_N − t²).

The equation for z̃ is

z̃ = 1 − εBz̃² − ε²Cz̃³,  (2.26)

where

B = εᴺA(3Ỹ_N + 1)/(3Ỹ²_N + 2Ỹ_N − t²),   C = ε²ᴺA²/(3Ỹ²_N + 2Ỹ_N − t²).

In the neighbourhood of t = 0 the coefficients on the right-hand side of the equation for z̃ can be represented as

A = O(t^{−2N+1}),   B = εᴺO(t^{−2N}),   C = ε²ᴺO(t^{−4N+1}),   t → 0.

The coefficients B and C are bounded in the domain of validity. We use an iteration method for solving eq. (2.26):

z̃₀ = 1,   z̃ₖ₊₁ = 1 − εBz̃ₖ² − ε²Cz̃ₖ³.


It is easy to see that the solution stays within a distance of order ε from unity. It follows that ỹ ∼ −F/(3Ỹ_N² + 2Ỹ_N − t²). This proves that series (2.22) is an asymptotic series for a root of eq. (2.21).
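The contraction argument can be illustrated with a toy iteration in which the bounded coefficients B and C are replaced by sample constants (hypothetical values, chosen only for illustration):

```python
# Model iteration z_{k+1} = 1 - eps*B*z_k^2 - eps^2*C*z_k^3 with sample
# bounded coefficients B, C standing in for the bounded functions above.
eps, B, C = 1e-3, 0.7, 0.4  # illustrative values, not taken from the text

z = 1.0
history = [z]
for _ in range(30):
    z = 1.0 - eps * B * z**2 - eps**2 * C * z**3
    history.append(z)

# Successive iterates converge geometrically and the fixed point
# stays within O(eps) of 1.
print(z, abs(history[-1] - history[-2]))
```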

2.3.3 Non-analytic Perturbation

In this section we construct an asymptotic series for the root of eq. (2.21) when the parameters ε and t are small and t = O(√ε). Asymptotic series (2.22) is not valid in this domain. Change the variables

t = √ε ξ,  y = √ε z.

Equation (2.21) becomes

z² − ξ² − 1 + √ε (z³ − ξ²z) = 0.    (2.27)

In this form the parameter δ = √ε is small. We construct an asymptotic series with respect to this parameter δ. A solution is constructed in the form

z = Σ_{n=0}^{∞} δⁿ Zₙ.    (2.28)

Coefficients of this series can be obtained by substituting the series into eq. (2.27) and collecting terms at the same powers of the small parameter δ. This gives a recurrent system of equations for the Zₙ. The equation for Z₀ is Z₀² = ξ² + 1. We choose Z₀ = √(ξ² + 1). The remaining equations are linear and can be solved successively. The equation for Zₙ is

Zₙ = (ξ² Z_{n−1})/(2√(ξ² + 1)) − (1/(2√(ξ² + 1))) Σ_{k₁+k₂+k₃=n−1} Z_{k₁}Z_{k₂}Z_{k₃} − (1/(2√(ξ² + 1))) Σ_{k₁+k₂=n, 1≤k₁≤n−1} Z_{k₁}Z_{k₂}.

The sum of the first terms of the series is

z ∼ √(ξ² + 1) − δ/2 + δ² (4ξ² + 5)/(8√(ξ² + 1)) − δ³ (ξ² + 2)/2 + δ⁴ (64ξ⁶ + 336ξ⁴ + 504ξ² + 231)/(128(ξ² + 1)^{3/2}) − δ⁵ (ξ⁴ + 6ξ² + 7)/2.


It is easy to see that the general term of the series grows as ξ → ∞: Zₙ = O(ξ^{n−1}). Series (2.28) is asymptotic when δZₙ/Z_{n−1} = o(1), that is, when δξ = o(1). Series (2.28) is an asymptotic expansion for a root of eq. (2.27). The proof of this statement can be obtained by an iteration method: one derives an equation for the residual term and estimates it by iterations. We omit the detailed calculations here.
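A minimal numerical check of the internal expansion, under the sign convention of eq. (2.27) reconstructed above:

```python
import numpy as np

# Inner equation z^2 - xi^2 - 1 + delta*(z^3 - xi^2*z) = 0, delta = sqrt(eps).
def inner_root(xi, delta):
    roots = np.roots([delta, 1.0, -delta * xi**2, -(xi**2 + 1.0)])
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    target = np.sqrt(xi**2 + 1.0)
    return real_roots[np.argmin(np.abs(real_roots - target))]

xi, delta = 1.0, 1e-2
z0 = np.sqrt(xi**2 + 1.0)
three_terms = z0 - delta / 2 + delta**2 * (4 * xi**2 + 5) / (8 * z0)
print(abs(inner_root(xi, delta) - three_terms))  # expected to be O(delta^3)
```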

2.3.4 Matching of the Asymptotic Series for the Root of Cubic Equation

Consider the domains of validity of the constructed asymptotic series. Series (2.22) is asymptotic outside small neighbourhoods of t = 0 and t = −1. We call this series the external asymptotic series. Series (2.28) is asymptotic when δξ = o(1), that is, when t = o(1). It means that series (2.28) is asymptotic in a small neighbourhood of t = 0. We call this series the internal asymptotic series. The domains of validity of the external and internal series intersect: both expansions are valid in the domain √ε ≪ t ≪ 1. It is clear that these series should represent the same root of eq. (2.21). To compare the series we change the variable in the internal expansion and rewrite it in terms of ε and t:

√ε z ∼ √ε ( √((t/√ε)² + 1) − √ε/2 + ε (4(t/√ε)² + 5)/(8√((t/√ε)² + 1)) − ε^{3/2} ((t/√ε)² + 2)/2 + ε² (64(t/√ε)⁶ + 336(t/√ε)⁴ + 504(t/√ε)² + 231)/(128((t/√ε)² + 1)^{3/2}) − ε^{5/2} ((t/√ε)⁴ + 6(t/√ε)² + 7)/2 ).

Construct an asymptotic expansion of this expression as ε → 0:

√ε z ∼ t + ε (1 − t + t² − t³ + t⁴ − t⁵)/(2t) − ε² (1 − 3t² + 8t³ − 15t⁴ + 24t⁵)/(8t³).

The asymptotic expansion as t → 0 is

√ε z ∼ t + ε/(2t) + ε² (−1/(8t³) + 3/(8t)).

Calculate the asymptotic expansion as t → 0 for the external representation:

y ∼ t + ε/(2t) + ε² (−1/(8t³) + 3/(8t)).


It is clear that both asymptotic representations coincide in the domain where both of them are valid. It is important that this matching does not take place for the other branch of the internal expansion, the one with leading-order term −√(ξ² + 1).

2.3.5 Compound Asymptotic Expansion for Root of Cubic Equation

It is possible to obtain one asymptotic formula that is uniformly suitable both inside and outside the neighbourhood of t = 0. To obtain this formula we sum the external and internal asymptotic expansions and subtract their common part. The difference between the compound asymptotic approximation and the numerical solution is shown in the last picture of Fig. 2.2. To simplify the compound asymptotic formula we use two terms of the external representation and three terms of the internal one. The formula for the external representation is

y_out = t + ε/(2t(t + 1)).

The formula for the internal representation is

y_inn = √ε √(t²/ε + 1) − ε/2.

The common part of these asymptotic expansions can be calculated by rewriting the external one as t → 0,

y_out ∼ t + ε/(2t) − ε/2 + εt/2,  t → 0,

and the internal expansion as ε → 0:

y_inn ∼ t − ε/2 + ε/(2t),  ε → 0.

It follows that the common part is

y_int = t − ε/2 + ε/(2t).

Then the compound asymptotic expansion is

y ∼ y_out + y_inn − y_int = t + ε/(2t(t + 1)) + √ε √(t²/ε + 1) − ε/2 − t + ε/2 − ε/(2t).


Simple calculations give

y ∼ √(t² + ε) − ε/(2(t + 1)).    (2.29)

Asymptotic formula (2.29) is uniformly valid both as t = o(1) and as t > 0. The justification of compound asymptotic expansions can be obtained by the iteration method for the residual term equation.
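A numerical sketch of the uniform validity of formula (2.29), again assuming that eq. (2.21) is the cubic (y − t)(y + t)(y + 1) = ε:

```python
import numpy as np

def root_and_approx(t, eps):
    # exact root of the assumed cubic near y = t, and the compound formula (2.29)
    roots = np.roots([1.0, 1.0, -t**2, -t**2 - eps])
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    approx = np.sqrt(t**2 + eps) - eps / (2 * (t + 1))
    exact = real_roots[np.argmin(np.abs(real_roots - approx))]
    return exact, approx

eps = 1e-2
errors = []
for t in np.linspace(0.0, 2.0, 41):
    exact, approx = root_and_approx(t, eps)
    errors.append(abs(exact - approx))
print(max(errors))  # uniformly small, including t = 0
```

The error stays of order ε^{3/2} over the whole range, including the degenerate point t = 0.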

2.3.6 Problems

1. Construct a uniformly valid asymptotic solution of the equation (y − t)(y + t) − ε = 0.
2. Construct an asymptotic expansion for the root of cubic equation (2.21) that is uniformly valid for t ∈ ℝ.
3. Obtain an equation for the residual term of the uniformly valid asymptotic solution and estimate it using the iteration method.
4. Study the domain of validity with respect to μ, as ε → 0, of the uniformly valid asymptotic solution of the equation (y − μ)(y − t)(y + t) − ε = 0.

2.4 Weierstrass Function

Here we introduce the Weierstrass function ℘ as a solution of a first-order differential equation on a Riemann surface of genus one. We show that this function is doubly periodic. We present Liouville's theorem for elliptic functions.

2.4.1 Differential Equation

In this section we study solutions of the first-order differential equation

(u′)² = 4u³ − g₂u − g₃.    (2.30)

This equation appears when one integrates the second-order equation with quadratic nonlinearity:

u″ = 6u² − g₂/2.


Separate variables in eq. (2.30):

dz = ±du/√(4u³ − g₂u − g₃).

This equation can be easily integrated. To determine a unique solution of a nonlinear first-order equation we should impose an additional condition. We should also choose the sign + or −; this means choosing one of the two sheets of the Riemann surface. This gives the Cauchy conditions

z = z₀,  u|_{z=z₀} = u₀.

We choose the branch point u = ∞ as the initial one. In this case we obtain

z = −∫_u^∞ dζ/√(4ζ³ − g₂ζ − g₃).

The function u = ℘(z) that is inverse to this elliptic integral is called the Weierstrass ℘-function.

2.4.2 Doubly Periodicity

To obtain a solution of the equation it does not matter which contour of integration on the Riemann surface is taken (Fig. 2.3), but not all contours are equivalent. Denote the roots of the cubic polynomial 4ζ³ − g₂ζ − g₃ = 0 by e₁, e₂, e₃. We suppose that e₃ is a real root. Connect the points e₁ and e₂, and the points e₃ and ∞, of the Riemann surface by crosscuts. We denote the cycle on the upper sheet of the surface as


Figure 2.3. The Riemann surface of the square root as a double covering of the complex plane with crosscuts [e₁, e₂] and [e₃, ∞). Cycle a goes around the crosscut [e₁, e₂]. Cycle b goes over the upper sheet from the crosscut [e₃, ∞) and passes to the lower sheet through the crosscut [e₁, e₂].


the a-period. This cycle goes around the segment [e₁, e₂]. We denote the cycle that connects the two crosscuts and goes over the upper and lower sheets of the surface as the b-period. Fig. 2.3 illustrates this structure. Any contour γ on the Riemann surface can be represented as a path γ′ that does not wind around the cycles plus a finite number of circuits of the a and b cycles:

γ = γ′ + am + bn,  m, n ∈ ℤ.

It means that the Weierstrass function is doubly periodic. The numbers

T₁ = ∮_a dζ/√(4ζ³ − g₂ζ − g₃),  T₂ = ∮_b dζ/√(4ζ³ − g₂ζ − g₃)

are periods of this function:

℘(z + mT₁ + nT₂) = ℘(z).

If the ratio of these two numbers were real, double periodicity would not take place: a function can have only one independent real period. In fact the ratio of the periods is not real. Let us illustrate this. First, the integral over the a-cycle can be deformed into an integral around the crosscut [e₃, ∞) on the real axis. An integral over a large circle on the upper sheet of the Riemann surface remains, but this integral equals zero due to the decay of the integrand as |ζ| → ∞. The integral over the upper and lower banks of the cut [e₃, ∞) is real because the integrand is real there. It is easy to show that the integral over the b-cycle is imaginary. It yields that T₁/T₂ is not real. Double periodicity allows us to study this function in the rectangle 0 ≤ Re(z) < T₁, 0 ≤ Im(z) < |T₂|. This rectangle is called the Jacobi manifold.

2.4.3 A Representation by Series

The value of the Weierstrass function u → ∞ as z → 0. By means of the integral representation it is easy to obtain

z ∼ 1/√u,  or  u ∼ 1/z².

This germ of a series near z = 0 allows one to find a segment of any length of the series for this function:

℘(z) ∼ 1/z² + (g₂/20)z² + (g₃/28)z⁴ + (g₂²/1200)z⁶ + (3g₂g₃/6160)z⁸ + O(z¹⁰).    (2.31)

The Weierstrass function is periodic, so together with the pole at z = 0 it has poles at all points −mT₁ − nT₂. This allows us to represent ℘(z) in the form

Σ_{m,n} 1/(z + mT₁ + nT₂)² + regular part.


The regular part is a periodic and bounded function on the complex plane; hence it is a constant. Formula (2.31) shows that this constant is zero. To make the series convergent one should subtract from each term its value at z = 0. It yields the Weierstrass ℘-function as a series:

℘(z) = 1/z² + Σ_{(m,n)≠(0,0)} [ 1/(z + mT₁ + nT₂)² − 1/(mT₁ + nT₂)² ].    (2.32)

This series is convergent and its sum is the Weierstrass function ℘(z).

2.4.4 Evenness

A series for ℘′(z) can be obtained by term-by-term differentiation of (2.32):

℘′(z) = −2 Σ_{m,n} 1/(z + mT₁ + nT₂)³.

Change z to −z in this formula. Simple calculations give

℘′(−z) = −2 Σ_{m,n} 1/(−z + mT₁ + nT₂)³ = 2 Σ_{m,n} 1/(z + mT₁ + nT₂)³ = −℘′(z).

It yields that the derivative of the Weierstrass function is an odd function. Using the series representation for ℘(z) it is easy to see that ℘(−z) = ℘(z). It means that the Weierstrass function is an even function.
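The series (2.32), its term-by-term derivative and eq. (2.30) can be checked numerically with a truncated lattice sum; the periods T₁ = 2, T₂ = 2i below are sample values chosen for illustration:

```python
import numpy as np

# Truncated lattice sums for the Weierstrass function, sample periods.
T1, T2, N = 2.0, 2.0j, 40
ms, ns = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
omega = ms * T1 + ns * T2           # lattice points m*T1 + n*T2
mask = (ms != 0) | (ns != 0)        # exclude omega = 0

def wp(z):
    # series (2.32): 1/z^2 + sum' [1/(z + w)^2 - 1/w^2]
    terms = 1.0 / (z + omega[mask])**2 - 1.0 / omega[mask]**2
    return 1.0 / z**2 + terms.sum()

def wp_prime(z):
    # term-by-term derivative: -2 * sum over the full lattice of 1/(z + w)^3
    return -2.0 * (1.0 / (z + omega)**3).sum()

g2 = 60.0 * (1.0 / omega[mask]**4).sum()
g3 = 140.0 * (1.0 / omega[mask]**6).sum()

z = 0.31 + 0.23j
lhs = wp_prime(z)**2
rhs = 4 * wp(z)**3 - g2 * wp(z) - g3
print(abs(lhs - rhs) / abs(lhs))   # small residual: eq. (2.30) holds
print(abs(wp(z) - wp(-z)))         # evenness of the Weierstrass function
```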

2.4.5 Liouville's Theorem

Liouville's theorem is in many respects an analogue of the Cauchy theorem for elliptic functions.

Theorem 6. Let F(z) be an elliptic function. The sum of residues of this function over all its poles in the parallelogram of periods equals zero.

Let us prove this statement. Consider F(z) in the parallelogram Q of periods T₁ and T₂, with corner c chosen so that F(z) is regular on the sides of this parallelogram. The integral over the boundary of the parallelogram is

∮_{∂Q} F(z)dz = ∫_c^{c+T₁} F(z)dz + ∫_{c+T₁}^{c+T₁+T₂} F(z)dz + ∫_{c+T₁+T₂}^{c+T₂} F(z)dz + ∫_{c+T₂}^{c} F(z)dz.


The periodicity of the integrand allows us to rewrite the second and the third integrals as integrals over the segments [c, c + T₂] and [c + T₂, c]. It yields

∮_{∂Q} F(z)dz = ∫_c^{c+T₁} F(z)dz + ∫_c^{c+T₂} F(z)dz + ∫_{c+T₁}^{c} F(z)dz + ∫_{c+T₂}^{c} F(z)dz.

The integrands are equal, but the directions of integration are opposite. It gives

∮_{∂Q} F(z)dz = 0.

The theorem is proved. This theorem gives some important conclusions.

Corollary 3. Taking into account multiplicity, the number of poles of a non-constant elliptic function in the parallelogram of periods cannot be less than two.

Corollary 4. Consider φ(z) = f′(z)/(f(z) − a) instead of f(z), where a = const. Then, for a non-constant elliptic function f, the number of poles of f in the parallelogram equals the number of points where f(z) = a, for any a.

Corollary 5. There is no non-constant elliptic function that is regular in the parallelogram of periods.

2.4.6 Problem

– Determine the zeros of the Weierstrass function in the parallelogram of periods.

2.5 Jacobi Elliptic Functions

In this section we investigate the sine-amplitude function of a complex argument. We obtain approximations of this function when the parameter is close to zero and to unity.

2.5.1 Sine-Amplitude Function

Apparently the name of the sine-amplitude function is connected with the solution of the mathematical pendulum equation

φ″ + sin(φ) = 0.    (2.33)

Multiply this equation by φ′ and integrate with respect to t. It gives

(φ′)² − 2cos(φ) = 2E.    (2.34)


We study only oscillations of the mathematical pendulum and do not consider rotations. The relation |E| ≤ 1 is valid for this kind of motion. It is convenient to write the parameter E as E = −cos(φ₀), where φ₀ is the amplitude of the oscillations. Using the formula cos(α) = 1 − 2sin²(α/2) we write the solution of the mathematical pendulum equation in the form

t + t₀ = (1/2) ∫_{φ₀}^{φ} ds/√(sin²(φ₀/2) − sin²(s/2)).

This formula is more convenient when the integrand contains a polynomial instead of the sine function under the square root. To obtain this we change the variables

ku = sin(s/2),  k = sin(φ₀/2).

Then

k du = (1/2)cos(s/2) ds,  or  ds = 2k du/√(1 − k²u²).

The solution of the mathematical pendulum equation is expressed by the integral

t = ∫_0^{sin(φ/2)/k} du/√((1 − u²)(1 − k²u²)).    (2.35)

Now it is convenient to study the inverse function of this integral as a new special function, the Jacobi sn:

u = sn(t|k),  where  sin(φ/2) = u sin(φ₀/2).
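The relation sin(φ/2) = k sn(t|k) between the pendulum and the sine amplitude can be verified numerically. The sketch below integrates both the pendulum equation (2.33) and the system (2.37) for sn (introduced in Section 2.5.3) with a Runge-Kutta scheme:

```python
import math

def rk4(f, y, t, dt, n):
    # classical fourth-order Runge-Kutta integrator
    out = [list(y)]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
        out.append(list(y))
    return out

phi0 = 1.0                      # amplitude of the pendulum oscillations
k = math.sin(phi0 / 2)          # modulus

# pendulum phi'' = -sin(phi), started at phi = 0 with phi' = 2k
pend = rk4(lambda t, y: [y[1], -math.sin(y[0])], [0.0, 2 * k], 0.0, 1e-3, 4000)

# sn via the system (2.37) with initial data (2.38)
sn = rk4(lambda t, y: [y[1] * y[2], -y[0] * y[2], -k**2 * y[0] * y[1]],
         [0.0, 1.0, 1.0], 0.0, 1e-3, 4000)

# sin(phi/2) = k * sn(t|k) along the whole trajectory
err = max(abs(math.sin(p[0] / 2) - k * s[0]) for p, s in zip(pend, sn))
print(err)
```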

This new function is called the sine of amplitude or the Jacobi sn-function.

2.5.2 Periodicity

Function sn(z|k) is the inverse function of the integral

z = ∫_0^y du/√((1 − u²)(1 − k²u²)).    (2.36)

The parameter k (the modulus of the elliptic function) is a positive number less than unity. The integration is carried out over a contour on the two sheets of the Riemann surface determined by

w² = (1 − u²)(1 − k²u²).


Make two crosscuts on the upper sheet of the surface. Let the first crosscut go along the segment [−1, 1]. Let the second one connect the points −1/k and 1/k through infinity along the real axis. We connect the opposite borders of the crosscuts with the borders on the lower sheet. Make contours on this Riemann surface. Let the a-cycle lie on the upper sheet and go around [−1, 1]. The b-cycle goes over the upper sheet from the branch point 1 to the branch point 1/k, then passes to the lower sheet, goes from 1/k back to 1 and returns to the upper sheet. The contour of integration in integral (2.36) over the Riemann surface is determined up to an integer number of a and b cycles. It means that the inverse function of integral (2.36) is periodic. The periods equal the integrals over the a and b cycles:

T = ∮_a du/√((1 − u²)(1 − k²u²)) = 2 ∫_{−1}^{1} du/√((1 − u²)(1 − k²u²));

T′ = ∮_b du/√((1 − u²)(1 − k²u²)) = 2 ∫_1^{1/k} du/√((1 − u²)(1 − k²u²)).

The number T is real because the integrand is real on the contour of integration. The number T′ is purely imaginary because the integrand is imaginary on the contour of integration. The ratio of the periods of sn(z|k) is imaginary, so this function is doubly periodic. The usual notations are

K(k) = ∫_0^1 du/√((1 − u²)(1 − k²u²));  K′(k) = ∫_1^{1/k} du/√((1 − u²)(1 − k²u²)).

It gives

T = 4K(k),  T′ = 2K′(k).

2.5.3 Jacobi Elliptic Functions

Elliptic functions appeared as doubly periodic meromorphic functions of a complex argument, or as inverse functions of elliptic integrals. In this section we use the definition of the Jacobi elliptic functions as solutions of differential equations. Differentiation of eq. (2.35) with respect to t gives that the sine-amplitude function satisfies the equation

y′ = √((1 − k²y²)(1 − y²)).

Denote cn²(t|k) = 1 − sn²(t|k), dn²(t|k) = 1 − k²sn²(t|k). Function cn(t|k) is called the cosine of amplitude or the Jacobi cn-function. Function dn(t|k) is called the delta of amplitude. Let us obtain formulae for the derivatives of these functions:

2cn′(t|k)cn(t|k) = −2sn(t|k)√((1 − sn²(t|k))(1 − k²sn²(t|k))),
2dn′(t|k)dn(t|k) = −2k²sn(t|k)√((1 − sn²(t|k))(1 − k²sn²(t|k))).

Or

sn′(t|k) = cn(t|k)dn(t|k),  cn′(t|k) = −sn(t|k)dn(t|k),  dn′(t|k) = −k²sn(t|k)cn(t|k).

We obtain that the three functions sn(t|k), cn(t|k) and dn(t|k) satisfy the system of equations

y₁′ = y₂y₃,  y₂′ = −y₁y₃,  y₃′ = −k²y₁y₂,    (2.37)

with initial data obtained from the integral representation of sn(t|k):

y₁|_{t=0} = 0,  y₂|_{t=0} = 1,  y₃|_{t=0} = 1.    (2.38)

Elliptic functions can be introduced as the solution of the Cauchy problem (2.37), (2.38). One of the first integrals can be obtained by multiplying the first equation by 2y₁ and the second one by 2y₂ and adding them:

2y₁y₁′ + 2y₂y₂′ = (y₁² + y₂²)′ = 0.

Another first integral can be obtained by multiplying the first equation by 2k²y₁ and the third one by 2y₃ and adding them:

2k²y₁′y₁ + 2y₃′y₃ = (k²y₁² + y₃²)′ = 0.

It gives two first integrals

y₁² + y₂² = c₁,  k²y₁² + y₃² = c₂.

The constants c₁ and c₂ are determined from the initial conditions (2.38). It leads to the relations

sn²(t|k) + cn²(t|k) = 1,  k²sn²(t|k) + dn²(t|k) = 1.    (2.39)
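The first integrals (2.39) give a practical accuracy check for any numerical integration of system (2.37); a short sketch:

```python
# RK4 check that the first integrals (2.39) are conserved by system (2.37).
k, dt, n = 0.8, 1e-3, 10000
y = [0.0, 1.0, 1.0]             # initial data (2.38)
worst1 = worst2 = 0.0
for _ in range(n):
    def f(y):
        return [y[1] * y[2], -y[0] * y[2], -k**2 * y[0] * y[1]]
    k1 = f(y)
    k2 = f([yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + dt * ki for yi, ki in zip(y, k3)])
    y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
         for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    worst1 = max(worst1, abs(y[0]**2 + y[1]**2 - 1.0))          # sn^2 + cn^2 = 1
    worst2 = max(worst2, abs(k**2 * y[0]**2 + y[2]**2 - 1.0))   # k^2 sn^2 + dn^2 = 1
print(worst1, worst2)
```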


2.5.4 Regular Expansion in the Neighbourhood of Zero Value of Argument

In the following sections we present formulae for the expansion of the sine-amplitude function for small values of the argument and for values of the parameter k close to zero and to unity. These expansions are valid in the neighbourhood of the real axis Im(z) = 0. To emphasize that the argument is real we denote it by t. Let the value of t be sufficiently small. Then we can find sn(t|k) in the form of a series with respect to t:

sn(t|k) ∼ f₁(k)t + f₂(k)t² + f₃(k)t³ + ⋯.

Substitute this expression into the differential equation for sn(t|k): (y′)² = (1 − y²)(1 − k²y²). It gives

(f₁(k) + 2f₂(k)t + 3f₃(k)t² + ⋯)² ∼ (1 − (f₁(k)t + f₂(k)t² + f₃(k)t³ + ⋯)²)(1 − k²(f₁(k)t + f₂(k)t² + f₃(k)t³ + ⋯)²).

Collect terms at the same powers of t. Then

f₁²(k) = 1,  4f₁(k)f₂(k) = 0,  6f₁(k)f₃(k) + 4f₂²(k) = −f₁²(k) − k²f₁²(k).

The final expansion in the neighbourhood of the zero value of the argument is

sn(t|k) ∼ t − (1/6)(1 + k²)t³ + ⋯.

No closed-form expression for the general term of this series is known.

2.5.5 Regular Expansion in the Neighbourhood of Zero Value of Parameter

Let the parameter k be close to zero. In this case the function sn(t|k) is close to the trigonometric sine. It is easy to see this if one finds a representation for sn(t|k) of the form

sn(t|k) ∼ φ₀(t) + φ₁(t)k + φ₂(t)k² + φ₃(t)k³ + ⋯.

Substitute this expression into the differential equation for the function sn(t|k):

(φ₀′ + φ₁′k + φ₂′k² + φ₃′k³ + ⋯)² ∼ (1 − (φ₀ + φ₁k + φ₂k² + φ₃k³ + ⋯)²)(1 − k²(φ₀ + φ₁k + φ₂k² + φ₃k³ + ⋯)²).

Collect terms at the same powers of k. It gives differential equations for the functions φⱼ:

(φ₀′)² = 1 − φ₀²,  2φ₀′φ₁′ = −2φ₀φ₁,  2φ₀′φ₂′ + (φ₁′)² = −2φ₀φ₂ − φ₁² − φ₀²(1 − φ₀²),  ….

Initial conditions for the functions φⱼ are

φ₀|_{t=0} = 0,  φⱼ|_{t=0} = 0, j = 1, 2, …,

together with φ₀′(0) = 1, since sn′(0|k) = 1. Solutions of this recurrent system of equations are

φ₀ = sin(t),  φ₁ ≡ 0,  φ₂ = (−t cos(t) + sin(t) cos²(t))/4,  ….

It gives the asymptotics of sn as k → 0:

sn(t|k) ∼ sin(t) + (k²/4)(−t cos(t) + sin(t) cos²(t)) + ⋯.
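The coefficient 1/4 here agrees with the handbook expansion (Ref. [1], formula (16.13.1)), and the whole formula can be compared with a direct numerical solution:

```python
import math

def sn(t_end, k, dt=1e-3):
    # reference sn(t|k) by RK4 integration of system (2.37) with data (2.38)
    y = [0.0, 1.0, 1.0]
    f = lambda y: [y[1]*y[2], -y[0]*y[2], -k**2*y[0]*y[1]]
    for _ in range(int(round(t_end / dt))):
        k1 = f(y); k2 = f([a + dt/2*b for a, b in zip(y, k1)])
        k3 = f([a + dt/2*b for a, b in zip(y, k2)])
        k4 = f([a + dt*b for a, b in zip(y, k3)])
        y = [a + dt/6*(p + 2*q + 2*r + s) for a, p, q, r, s in zip(y, k1, k2, k3, k4)]
    return y[0]

k = 0.1
errs = []
for t in (0.5, 1.0, 2.0, 3.0):
    approx = math.sin(t) + k**2 / 4 * (-t * math.cos(t) + math.sin(t) * math.cos(t)**2)
    errs.append(abs(sn(t, k) - approx))
print(max(errs))  # O(k^4)
```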

2.5.6 Regular Expansion in the Neighbourhood of k = 1

Consider the case when the parameter k is close to 1. In this case it is convenient to use the complementary modulus k′ defined by k² = 1 − (k′)². We seek a solution of the sine-amplitude equation as a series in powers of the complementary modulus k′:

sn(t|k) ∼ ψ₀(t) + k′ψ₁(t) + (k′)²ψ₂(t) + ⋯.

Substitute this expansion into the differential equation for sn(t|k):

(ψ₀′ + ψ₁′k′ + ψ₂′(k′)² + ⋯)² ∼ (1 − (ψ₀ + ψ₁k′ + ψ₂(k′)² + ⋯)²)(1 − (1 − (k′)²)(ψ₀ + ψ₁k′ + ψ₂(k′)² + ⋯)²).

Collect terms at the same powers of k′. It gives a recurrent sequence of equations:

(ψ₀′)² = (1 − ψ₀²)²,  2ψ₁′ψ₀′ = −4ψ₀ψ₁(1 − ψ₀²),  2ψ₀′ψ₂′ + (ψ₁′)² = −2ψ₀ψ₂(1 − ψ₀²) − (2ψ₀ψ₂ − ψ₀²)(1 − ψ₀²),  ….

Initial conditions for the functions ψⱼ are the same as for the φⱼ in the previous section. Solving the equations for the first three coefficients of the expansion gives the asymptotics of sn as k → 1 − 0:

sn(t|k) ∼ tanh(t) + ((k′)²/4) (sinh(t)cosh(t) − t)/cosh²(t) + ⋯.

2.5.7 Problems

1. Find the coordinates of the zeros of the function sn(t|k).
2. Find the coordinates of the poles of the function sn(t|k).
3. Write an asymptotic expansion for the solution of the equation u″ + g₂u + g₃u² + εu³ = 0 as ε → 0.

2.6 Uniform Asymptotic Behaviour of Jacobi-sn Near a Singular Point. The Lost Formula from Handbooks for Elliptic Functions

The goal of this section is to construct an asymptotic formula for the Jacobi elliptic function sn(t|m), when m → 1 − 0, which is uniform over the whole period of the function. Asymptotic expansions for the elliptic functions as m → 1 − 0 are given in numerous handbooks (see, for example, Ref. [1]). However, those expansions are not uniform for very large t, when t = O(T(m)), where T(m) is the period of the function. An obstacle to uniformity is the special behaviour of the elliptic functions in the neighbourhoods of the turning points t = T/4 + nT/2, n ∈ ℤ, compared with their behaviour far from them. Let us consider the equation

(u′)² = (1 − u²)(1 − (1 − ε)u²),  0 < ε ≪ 1,    (2.40)

with the initial condition u(0) = 0. The solution of this Cauchy problem is the Jacobi elliptic function

u(t, ε) = sn(t|m),  m = 1 − ε.

The handbook gives the following approximation (see Ref. [1], formula (16.15.1)):

sn(t|1 − ε) ∼ tanh(t) + (ε/4)(sinh(t)cosh(t) − t) sech²(t).


This approximation is non-periodic, but sn(t|1 − ε) is a periodic function with period

T(ε) = 4 ∫_0^1 dy/√((1 − y²)(1 − (1 − ε)y²)) ≡ 4K(1 − ε).

The integral on the right-hand side of the formula is the complete elliptic integral of the first kind, which is typically denoted by K(m). The handbook [1] gives a polynomial approximation of the integral (formula (17.3.34)), with ε = 1 − m:

K(m) = (1.3862944 + 0.09666344259ε + 0.03590092383ε² + 0.03742563713ε³ + 0.01451196212ε⁴)
+ (0.5 + 0.12498593597ε + 0.06880248576ε² + 0.03328355346ε³ + 0.00441787012ε⁴) log(1/ε) + e(m),
|e(m)| < 2 × 10⁻⁸.    (2.41)

Here we clarify a formula for the asymptotics of the elliptic integral of the first kind and obtain an asymptotic and a uniform asymptotic approximation for sn(t|1 − ε). Due to the symmetry

sn(t|m) = −sn(−t|m),  sn(t + T/2|m) = −sn(t|m),

it is sufficient to obtain the asymptotic approximation over half of the period.

2.6.1 The Asymptotic Behaviour of the Period

For small values of ε the elliptic integral can be represented as an integral with a weak singularity at y = 1. Factor the integrand:

∫_0^1 dy/√((1 − y²)(1 − (1 − ε)y²)) = ∫_0^1 dy/( √((1 + y)(1 + √(1 − ε) y)) √((1 − y)(1 − √(1 − ε) y)) ).

Denote 1 − √(1 − ε) = μ; then

K(1 − ε) = ∫_0^1 dy/( √((1 + y)(1 + y − μy)) √((1 − y)(1 − y + μy)) ).

Now it is convenient to expand the first factor into a series in μ:

1/√((1 + y)(1 + y − μy)) = 1/(y + 1) + μy/(2(y + 1)²) + 3μ²y²/(8(y + 1)³) + 5μ³y³/(16(y + 1)⁴) + 35μ⁴y⁴/(128(y + 1)⁵) + O(μ⁵).


The next step is the substitution of this expansion into the integral for K(1 − ε). The integral is then represented as a sum of integrals, which can be evaluated by a computer algebra system such as Maxima [1]. For example:

I₀ = ∫_0^1 dy/((y + 1)√((1 − y)(1 − (1 − μ)y))) = (√(4 − 2μ)/(2μ − 4)) log(μ) − (√(4 − 2μ)/(2μ − 4)) log(−μ + 2√(4 − 2μ) + 4).

Similar formulas are obtained for the next integrals:

I_k = μ^k a_k ∫_0^1 y^k dy/((y + 1)^{k+1}√((1 − y)(1 − (1 − μ)y))),  k = 1, 2, 3, 4.

Here a₁ = 1/2, a₂ = 3/8, a₃ = 5/16, a₄ = 35/128. As a result we obtain

K(1 − ε) ∼ I₀ + I₁ + I₂ + I₃ + I₄,  μ → 0.

Expressed in terms of ε (with m = 1 − ε), this formula for the elliptic integral takes the following form:

K(1 − ε) ∼ −log(ε)/2 + 2log(2) + ((−1 + 2log(2))/4 − log(ε)/8)ε
+ ((−21 + 36log(2))/128 − (9/128)log(ε))ε² + ((−185 + 300log(2))/1536 − (25/512)log(ε))ε³
+ ((−18655 + 29400log(2))/196608 − (1225/32768)log(ε))ε⁴,  ε → 0.    (2.42)

The same formula in numeric form:

K(1 − ε) ∼ −0.5log(ε) + 1.386294361119891 + ε(0.09657359027997264 − 0.125log(ε))
+ ε²(0.03088514453248459 − 0.0703125log(ε)) + ε³(0.01493760036978098 − 0.048828125log(ε))
+ ε⁴(0.00876631219717606 − 0.037384033203125log(ε)).    (2.43)

The difference between eqs. (2.43) and (2.41) is explained by the different nature of these formulas: formula (2.43) is asymptotic as ε → 0, while eq. (2.41) is a polynomial fit intended for the numerical evaluation of the elliptic integral.
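Formula (2.43) can be compared directly with a numerical evaluation of the elliptic integral, computed here in the equivalent trigonometric form K(m) = ∫₀^{π/2} dθ/√(1 − m sin²θ):

```python
import math

def K(m, n=200000):
    # complete elliptic integral K(m) by the midpoint rule
    # (the integrand is smooth for m < 1)
    h = (math.pi / 2) / n
    return h * sum(1.0 / math.sqrt(1.0 - m * math.sin((j + 0.5) * h)**2)
                   for j in range(n))

def K_asymptotic(eps):
    # numeric form (2.43) of the asymptotic expansion of K(1 - eps)
    L = math.log(eps)
    return (-0.5 * L + 1.386294361119891
            + eps * (0.09657359027997264 - 0.125 * L)
            + eps**2 * (0.03088514453248459 - 0.0703125 * L)
            + eps**3 * (0.01493760036978098 - 0.048828125 * L)
            + eps**4 * (0.00876631219717606 - 0.037384033203125 * L))

for eps in (1e-2, 1e-3):
    print(eps, abs(K(1 - eps) - K_asymptotic(eps)))
```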


2.6.2 Asymptotic Behaviour on a Regular Part of Trajectory

Let us construct a solution in the form of an expansion in ε:

u = tanh(t) + Σ_{n=1}^{∞} εⁿ uₙ(t).    (2.44)

Here the leading-order term of the asymptotic expansion is the separatrix of eq. (2.40) at ε = 0. The equations for the higher-order terms can be obtained by substituting eq. (2.44) into eq. (2.40) and collecting the coefficients of ε^k for k ∈ ℕ. These equations combine into a recurrent system. For example, the equations for u₁ and u₂ are

(2/cosh²(t)) u₁′ + (4tanh(t)/cosh²(t)) u₁ + tanh⁴(t) − tanh²(t) = 0,

(2/cosh²(t)) u₂′ + (4tanh(t)/cosh²(t)) u₂ + (u₁′)² + (−6tanh²(t) + 2)u₁² + (4tanh³(t) − 2tanh(t))u₁ = 0.

The equation for the nth-order term has the form

(2/cosh²(t)) uₙ′ + (4tanh(t)/cosh²(t)) uₙ + Σ_{|α|=n} A_α tanh^{α₀}(t) u₁^{α₁} ⋯ u_{n−1}^{α_{n−1}} = 0.

Initial conditions for all corrections are uₙ|_{t=0} = 0. The equations for the higher-order terms have solutions of the form

uₙ = aₙ(t)/cosh²(t).

In particular, for a₁ we obtain

a₁′ = sinh²(t)/2.

It yields

a₁ = (1/8)sinh(2t) − t/4.

The equation for a₂ is

a₂′ = −(1/64)cosh(4t) + (5/32)cosh(2t) − (1/8)t tanh(t) − (1/16) t²/cosh²(t) − 9/64.

It yields

a₂ = −(t²/16)tanh(t) − (1/256)sinh(4t) + (5/64)sinh(2t) − (9/64)t.

(2.45)


Higher-order terms can be obtained in the same way by using eq. (2.45). In particular, for the nth-order term we obtain

aₙ′ = −(1/(2cosh^{2n−6}(t))) Σ_{|α|=n} A_α tanh^{α₀}(t) a₁^{α₁} ⋯ a_{n−1}^{α_{n−1}}.

The explicit form of the solutions is cumbersome. Here the asymptotic behaviour as t → ±∞ is more important: aₙ = O(e^{±2nt}). The interval of validity of the constructed expansion is therefore

εe^{±2t} ≪ 1,  i.e.  ±t ≪ −(1/2)log(ε).

As a result, the asymptotic expansion of the elliptic function is valid when log(ε)/2 ≪ t ≪ −log(ε)/2. The constructed expansion is valid over less than half of the period of the elliptic function sn; the sum of the leading term and the first correction is shown in the left-hand graph of Figure 2.4. This expansion is not valid in the neighbourhoods of the turning points u ∼ 1, u′ = 0. To match this expansion with another one, constructed near the turning points, we need the asymptotic properties of this expansion near the border of validity. Let us change the variable: t = T(ε)/4 + τ.


Figure 2.4. The right graph shows the divergence between the outer asymptotic curve and the function sn(t|1 − ε) near the turning points. The left graph shows the neighbourhood of the turning point u = 1 for the inner asymptotic expansion: the asymptotic curve and the function sn(t − T/4 | √(1 − ε)) for ε = 0.1 and τ = t − T/4.


Here τ is a new independent variable. After the substitution we obtain an asymptotic expansion for τ ≪ −1:

u ∼ 1 + ε(−e^{2τ}/8 − e^{−2τ}/8 + 1/4) + ε²(τe^{2τ}/16 − 3e^{2τ}/32 − τe^{−2τ}/16 − 3e^{−2τ}/32 + e^{−4τ}/128 + 11/64).    (2.46)

The same asymptotic expansion can be obtained in the neighbourhood of the lower separatrix, if one uses the formula u(t + T/2, ε) = −u(t, ε).

2.6.3 Asymptotic Behaviour Near Turning Point

The elliptic function sn has an asymptotic behaviour of another type near the turning points. Here we construct an asymptotic expansion of sn near the saddle point (1, 0):

u(t, ε) = 1 + Σ_{n=1}^{∞} εⁿ vₙ(τ).    (2.47)

Here vₙ = vₙ(τ). Equations for the coefficients of the expansion are obtained in the ordinary way: one collects the terms of the same order in ε. As a result, the following recurrent system of equations is obtained:

(v₁′)² = 4v₁² − 2v₁,
2v₁′v₂′ = 8v₁v₂ − 2v₂ + 4v₁³ − 5v₁²,
2v₁′vₙ′ = 8v₁vₙ − 2vₙ + Pₙ(v₁, …, v_{n−1}).

Here Pₙ is a polynomial of degree four composed of products v_{k₁}v_{k₂}v_{k₃}v_{k₄} with k₁ + k₂ + k₃ + k₄ = n. The solution for v₁ has the form

v₁ = c₁e^{2τ}/16 + e^{−2τ}/(4c₁) + 1/4.

Here c₁ is a parameter of the solution. The matching with eq. (2.46) yields c₁ = −2. The solution for the second-order term is

v₂ = c₂e^{2τ} − c₂e^{−2τ} + e^{4τ}/128 + τe^{2τ}/16 − τe^{−2τ}/16 − 3e^{−2τ}/16 + e^{−4τ}/128 + 11/64.

Here c₂ is also a parameter of the solution. The matching with eq. (2.46) yields c₂ = −3/32. The higher terms are solutions of linear equations of the first order. Their solutions can be represented in the form

vₙ = cₙe^{2τ} − cₙe^{−2τ} + O(e^{±2nτ}),  τ → ±∞.


Here cₙ is a parameter. It is defined by matching with the asymptotic expansions that are valid outside the small neighbourhoods of the turning points. The validity of this expansion is defined by the condition εⁿ⁺¹vₙ₊₁ = o(εⁿvₙ). Using the estimates for the growing terms we obtain |τ| ≪ −(1/2)log(ε). The intervals of validity of eq. (2.44) and of the expansion near the turning point intersect when t ≫ 1 and τ ≪ −1. In the overlap region one can match the parameters of the asymptotic expansions. As a result:

u(t, ε) ∼ 1 + ε(−e^{2τ}/8 − e^{−2τ}/8 + 1/4) + ε²(τe^{2τ}/16 − 3e^{2τ}/32 − τe^{−2τ}/16 − 3e^{−2τ}/32 + e^{4τ}/128 + e^{−4τ}/128 + 11/64).

This asymptotics can be seen on the right-hand side of Figure 2.4.

2.6.4 Uniform Asymptotic Expansion

Now we are ready to construct a combined approximation of the function u(t, ε) ≡ sn(t|1 − ε) which is uniform over more than half of the period, t ∈ (log(√ε), −3log(√ε)), as ε → 0. For this purpose we use the asymptotic device offered by Kaplun [82] for combined approximations: we sum the constructed asymptotic expansions and subtract their common part. As a result we obtain the following formula (see the left-hand side of Figure 2.5):

u(t, ε) ∼ tanh(t) + (ε/cosh²(t)) ((1/8)sinh(2t) − (1/4)t) − ε² e^{2t}/128.    (2.48)

This formula is a uniform asymptotics of sn on an interval larger than T/2. In the neighbourhood of the left saddle point u = −1, u′ = 0, we could construct the same asymptotic expansion. But it is easier to use the formula u(t + T/2, ε) = −u(t, ε); the asymptotic expansion is then obtained automatically from expansion (2.48) (see the right-hand side of Figure 2.5):

u(t, ε) ∼ −tanh(t − T/2) − (ε/cosh²(t − T/2)) ((1/8)sinh(2(t − T/2)) − (1/4)(t − T/2)) + ε² e^{2(t−T/2)}/128.    (2.49)
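A numerical sketch comparing the compound formula (2.48) with the elliptic function itself (the reference sn is computed by integrating system (2.37)):

```python
import math

def sn(t_end, m, dt=5e-4):
    # reference sn(t|m) via RK4 for system (2.37) with k^2 = m
    y = [0.0, 1.0, 1.0]
    f = lambda y: [y[1]*y[2], -y[0]*y[2], -m*y[0]*y[1]]
    for _ in range(int(round(t_end / dt))):
        k1 = f(y); k2 = f([a + dt/2*b for a, b in zip(y, k1)])
        k3 = f([a + dt/2*b for a, b in zip(y, k2)])
        k4 = f([a + dt*b for a, b in zip(y, k3)])
        y = [a + dt/6*(p + 2*q + 2*r + s) for a, p, q, r, s in zip(y, k1, k2, k3, k4)]
    return y[0]

eps = 0.01

def composite(t):
    # formula (2.48)
    return (math.tanh(t)
            + eps / math.cosh(t)**2 * (math.sinh(2*t) / 8 - t / 4)
            - eps**2 * math.exp(2*t) / 128)

# K(1 - eps) is about 3.6956 here, so t = 5.0 lies beyond the turning point
errs = [abs(sn(t, 1 - eps) - composite(t)) for t in (0.5, 2.0, 3.5, 5.0)]
print(max(errs))
```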



Figure 2.5. On the left-hand side one can see the combined asymptotic approximation, which is valid over more than half of the period of the elliptic function. On the right-hand side, the asymptotic and analytic curves are prolonged; ε = 0.01.

The combined asymptotic approximation which is valid over the whole period of the elliptic function can be constructed in the same way, by summing formulas (2.48) and (2.49) and subtracting their common part. But such a combined asymptotic formula is lengthy and we do not write it here.

2.7 Mathieu’s and Lame’s Functions

In this section we discuss Floquet’s theory for equations with periodic coefficients. We give examples from mathematical physics and mechanics where Mathieu’s equation appears. We also consider a definition of Mathieu’s functions and an algorithm for their construction.

2.7.1 Hill’s Equation and Floquet’s Theory

In this section we study an equation of the second order with a periodic coefficient:

u″ + (θ0 + 2 ∑_{n=1}^{∞} θn cos(nx)) u = 0.   (2.50)

We suppose that the Fourier coefficients θn are such that the Fourier series converges. Equation (2.50) is called Hill’s equation [162]. Hill investigated this equation in 1877 in his study of the theory of the motion of the Moon. The coefficient of the equation is a periodic function with period 2π. It is easy to see that if a function u(x) is a solution of eq. (2.50), then the function u(x + 2π) is also a solution of the equation. In addition, the function u(x) is not periodic with respect to x


in general. Indeed, when the argument is shifted by 2π, the solution can be multiplied by a constant, or another linearly independent solution of eq. (2.50) can be added to it; it is easy to understand that these possibilities exhaust the matter. This limited set of possibilities allows one to investigate the properties of solutions of eq. (2.50) in detail without constructing an exact solution. Let the functions g(x) and h(x) be a fundamental system of solutions of eq. (2.50) whose Wronskian equals unity. Such functions can be found due to the linearity of the equation. We suppose g(0) = 1, g′(0) = 0,

h(0) = 0, h′(0) = 1.

The general solution of eq. (2.50) can be expressed as F(x) = Ag(x) + Bh(x), where A and B are constants. The functions g(x + 2π) and h(x + 2π) can be expressed through g(x) and h(x):

g(x + 2π) = α1 g(x) + α2 h(x),

h(x + 2π) = β1 g(x) + β2 h(x).

These formulae allow one to obtain relations for αj, βj. The constancy of the Wronskian for eq. (2.50) gives α1β2 − α2β1 = 1. It is easy to express the general solution F(x + 2π):

F(x + 2π) = (Aα1 + Bβ1)g(x) + (Aα2 + Bβ2)h(x).

Among the various linear combinations of g(x) and h(x) that give a solution of eq. (2.50), we study a special one such that

F(x + 2π) = kF(x).   (2.51)

A solution of eq. (2.50) with this property is called a Bloch function. This function is useful because v(x) = F(−x) is also a solution of Hill’s equation. If k ≠ 1, then v(x) and F(x) are linearly independent solutions of eq. (2.50).


If k = exp(2πμ), then the function φ(x) = exp(−xμ)F(x) is periodic. Really, φ(x + 2π) = exp(−(x + 2π)μ)F(x + 2π) = exp(−xμ)F(x) = φ(x). Relation (2.51) gives a system of equations for k, A, B:

Aα1 + Bβ1 = kA,

Aα2 + Bβ2 = kB.

This system is homogeneous, so for the existence of a non-trivial solution it is necessary that the determinant equals zero:

(α1 − k)(β2 − k) − α2β1 = 0,  or  k² − (α1 + β2)k + 1 = 0.

This equation has two solutions:

k_{1,2} = ((α1 + β2) ± √((α1 + β2)² − 4)) / 2.

There exist real solutions of this equation when |α1 + β2| ≥ 2. If α1 + β2 = ±2, then k = ±1 and the Bloch function is periodic with period 2π or 4π, respectively. If |α1 + β2| > 2, then for one of the values |k| > 1 and the corresponding Bloch function is not bounded on the x-axis. When |α1 + β2| < 2, k is complex. To obtain the behaviour of the Bloch function we evaluate |k|. The numbers k1 and k2 are complex conjugate, and Vieta’s theorem gives k1k2 = 1. It means |k1| = |k2| = 1. Consequently, the solutions of Hill’s equation are bounded on the real axis.
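The stability criterion |α1 + β2| ⋚ 2 can be checked numerically: α1 and β2 are just g(2π) and h′(2π), so the trace of the monodromy matrix is obtained by integrating the equation over one period. A minimal sketch for the Mathieu special case u″ + (a + 16q cos 2z)u = 0 discussed below (the test values of a and q are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(a, q):
    """Trace alpha1 + beta2 for u'' + (a + 16 q cos 2z) u = 0 over [0, 2*pi]."""
    def rhs(z, y):
        g, gp, h, hp = y                     # the two fundamental solutions stacked
        c = -(a + 16.0*q*np.cos(2.0*z))
        return [gp, c*g, hp, c*h]
    sol = solve_ivp(rhs, (0.0, 2.0*np.pi), [1.0, 0.0, 0.0, 1.0],
                    rtol=1e-10, atol=1e-12)
    g, gp, h, hp = sol.y[:, -1]
    return g + hp                            # alpha1 + beta2

assert abs(monodromy_trace(1.0, 0.0) - 2.0) < 1e-6  # q = 0, a = 1: trace = 2 cos(2*pi)
assert abs(monodromy_trace(0.5, 0.01)) < 2          # between resonance tongues: bounded
assert abs(monodromy_trace(1.0, 0.05)) > 2          # inside the first tongue: growing
```

The last two checks reproduce the alternatives |α1 + β2| < 2 (bounded Bloch solutions) and |α1 + β2| > 2 (an unbounded solution) described above.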

2.7.2 Examples

Mathieu’s equation is usually written in the form

u″ + (a + 16q cos 2z)u = 0.   (2.52)

Mathieu’s equation appeared in studies of stationary oscillations of an elliptic membrane. The frequency of the oscillations equals p:

∂x²v + ∂y²v + (p²/c²)v = 0.


The derivation of Mathieu’s equation is complicated (see, for example, Ref. [162]), but the main idea is evident. It is necessary to pass to elliptic coordinates:

x = h cosh ξ cos η,  y = h sinh ξ sin η,

where (±h, 0) are the foci of the elliptic membrane. A solution is a periodic function with respect to η. It is necessary to determine the values of the parameter a for which 2π-periodic solutions of the Mathieu equation exist. Another example leading to Mathieu’s equation is a mathematical pendulum with a periodically oscillating centre of suspension. The centre of suspension oscillates vertically as cos Kt with an amplitude A. The mathematical pendulum equation is written as

(mg + A cos Kt) sin φ + mlφ″ = 0,  or  φ″ + (g/l + (a/l) cos Kt) sin φ = 0.

Considering small oscillations in the neighbourhood of the lower equilibrium state gives

u″ + (1 + q cos Kt)u = 0.

In the neighbourhood of the upper equilibrium state this equation is

u″ − (1 + q cos Kt)u = 0.

Mathieu’s equation determines the stability of the equilibrium states of the parametrically driven mathematical pendulum. We are interested in the existence of growing solutions of Mathieu’s equation for given values of the parameters of the equation.

2.7.3 Mathieu Functions

The Mathieu equation u″ + (a + 16q cos 2z)u = 0 for q = 0 and a = n² has 2π-periodic solutions of the form [162]

1, cos z, cos 2z, . . . , sin z, sin 2z, . . . .


Solutions of Mathieu’s equation tending to these sines and cosines as q → 0 are called Mathieu’s functions of order n = 0, 1, 2, . . . . These functions are usually denoted by

ce0(z, q), ce1(z, q), ce2(z, q), . . . , se1(z, q), se2(z, q), . . . .

The given definition does not determine Mathieu’s functions uniquely. To obtain a unique solution it is necessary to require that the Fourier coefficient of order n, at cos(nz) or sin(nz), of the Mathieu function equals unity; in particular, it does not depend on q. It is necessary to note that a depends on q, and a = n² when q = 0. The dependence of a on q is determined in the course of the construction of the Mathieu function of the corresponding order. This algorithm is presented in the next section. The Floquet theory presented above shows that Mathieu’s functions are special periodic solutions existing for special values of the parameters. For q ≠ 0, the second linearly independent solution of Mathieu’s equation is a linearly growing oscillating solution.

2.7.4 Construction of Mathieu’s Functions

We seek the function ce0(z, q) in the form

ce0(z, q) = 1 + qv1(z) + q²v2(z) + ⋅ ⋅ ⋅ .   (2.53)

The parameter a is

a = a1q + a2q² + a3q³ + ⋅ ⋅ ⋅ .   (2.54)

Substitute eqs. (2.53) and (2.54) into the Mathieu equation and collect the terms at equal powers of the parameter q. This gives a recurrent sequence of equations. The equation for v1 has the form

v1″ + a1 + 16 cos(2z) = 0.

A solution for v1 is

v1 = −(a1/2)z² + 4 cos(2z) + c1z + c0.

We are interested in a periodic solution. It yields a1 = 0, c1 = 0. Besides, this solution should not contain terms that contribute to the zeroth Fourier coefficient of ce0(z, q). It gives c0 = 0. Finally, v1 = 4 cos(2z).


The equation for v2 is simplified in this case:

v2″ = −a2 − 64 cos²(2z),  or  v2″ = −a2 − 32 − 32 cos(4z).

The desired solution has the form

v2 = −((a2 + 32)/2)z² + 2 cos(4z).

The periodicity condition leads to the relation a2 = −32. The equation for v3 is

v3″ = 128 cos(2z) − 32 cos(4z) cos(2z) − a3.

The trigonometric formula

cos(2z) cos(4z) = (1/2)(cos(2z) + cos(6z))

and solving the equation give

v3 = −28 cos(2z) + (4/9) cos(6z) − (a3/2)z².

The parameter a3 = 0. The given algorithm allows one to construct the Mathieu function in the form of a series in powers of q and also to determine a as a series in powers of q. We do not prove it here, but these series converge and Mathieu’s functions are entire functions of the variable z. Some terms of the constructed series for ce0(z, q) are

ce0(z, q) ∼ 1 + (4q − 28q³) cos(2z) + 2q² cos(4z) + (4/9)q³ cos(6z),

a ∼ −32q² + 224q⁴.

A representation in the form of the entire series is published in Ref. [1].
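The eigenvalue series a ∼ −32q² + 224q⁴ can be compared against SciPy's Mathieu routines. A sketch, assuming SciPy's convention y″ + (a − 2q̃ cos 2x)y = 0, so that the book's coefficient 16q corresponds to q̃ = −8q (the eigenvalue a0 is even in q̃, so the sign of q̃ may be dropped):

```python
from scipy.special import mathieu_a

qb = 0.01                       # q in the book's normalization u'' + (a + 16 q cos 2z) u = 0
qs = 8.0*qb                     # SciPy's q; a_0(q) is an even function of q
a_series = -32.0*qb**2 + 224.0*qb**4

# the truncation error is O(q^6), far below the tolerance used here
assert abs(mathieu_a(0, qs) - a_series) < 1e-7
```

With q̃ = 8q the standard expansion a0(q̃) = −q̃²/2 + 7q̃⁴/128 + ⋅ ⋅ ⋅ reproduces exactly the coefficients −32 and 224 above.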


2.7.5 Lindemann Form of Mathieu’s Equation

Make a trigonometric replacement of the independent variable [162], ξ = cos²(z):

d/dz = (dξ/dz) d/dξ = −2 cos(z) sin(z) d/dξ = −sin(2z) d/dξ,

d²/dz² = sin²(2z) d²/dξ² − 2 cos(2z) d/dξ,

sin²(2z) = 4ξ(1 − ξ);  cos(2z) = ξ − (1 − ξ) = 2ξ − 1.

Mathieu’s equation takes the form

4ξ(1 − ξ) d²u/dξ² + 2(1 − 2ξ) du/dξ + (a − 16q + 32qξ)u = 0.
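This change of variables can be verified symbolically. The sketch below is a minimal SymPy check: it substitutes u(z) = U(cos² z) for monomials U(s) = sⁿ and confirms that the Mathieu operator coincides with the transformed operator evaluated at ξ = cos² z.

```python
import sympy as sp

z, s, a, q = sp.symbols('z s a q')

for n in range(5):
    U = s**n                                    # test function U(s) = s^n
    u = U.subs(s, sp.cos(z)**2)                 # u(z) = U(cos^2 z)
    mathieu = sp.diff(u, z, 2) + (a + 16*q*sp.cos(2*z))*u
    lindemann = (4*s*(1 - s)*sp.diff(U, s, 2)
                 + 2*(1 - 2*s)*sp.diff(U, s)
                 + (a - 16*q + 32*q*s)*U).subs(s, sp.cos(z)**2)
    assert sp.simplify(mathieu - lindemann) == 0
```

Since the identity is linear in U, checking it on a family of monomials demonstrates (though does not fully prove) the coefficient identities derived above.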

This equation has two regular singular points, ξ = 0 and ξ = 1. The point ξ = ∞ is an essentially singular point of this equation. Two linearly independent solutions of the equation can be constructed in the neighbourhood of a regular singular point. It is possible to develop an analogue of Floquet’s theory for these solutions in order to study them on the Riemann surface.

2.7.6 Special Case of Lame’s Equation

Lame’s equation appears under separation of variables in the wave equation in elliptic coordinates [162]. Lame’s equation can be written as a differential equation of the second order with a periodic coefficient; this coefficient is an elliptic function. Consider Lame’s equation in the Jacobi form:

d²Λ/dα² = (n(n + 1)k² sn²(α|k) + A)Λ.

We are interested in the special case when Lame’s equation is obtained as a linearization of the equation for sn(t). Let us show the derivation of the linearized equation. The equation of the second order for sn(t|k) is

u″ = −(1 + k²)u + 2k²u³.

We seek a solution of this equation in the form

u(t) = sn(t|k) + εv(t).


Substituting this formula into the equation for u, we derive equations for the coefficients at εʲ, j = 0, 1, 2, 3. The equation at ε⁰ is satisfied identically. The relation at ε gives

v″ = (6k² sn²(t|k) − (1 + k²))v.   (2.55)

This equation is called the linearized equation. It is a special case of Lame’s equation with n = 2. One of the solutions of eq. (2.55) is easily obtained by differentiating the solution of the nonlinear equation:

v1(t|k) = (d/dt) sn(t|k) = cn(t|k)dn(t|k).   (2.56)

Another linearly independent solution of eq. (2.55) is obtained from the constancy of the Wronskian of two solutions of eq. (2.55):

W(v1, v2) = v1v2′ − v2v1′ = const.

We seek the second solution such that W(v1, v2) = 1. Then the formula for the Wronskian can be considered as a differential equation for v2(t). The method of variation of parameters gives a formula for v2:

v2(t) = v1(t) ∫₀ᵗ dτ / v1²(τ),

or

v2(t) = cn(t|k)dn(t|k) ∫₀ᵗ dτ / ((1 − sn²(τ|k))(1 − k² sn²(τ|k))).   (2.57)

This formula for v2(t) is valid as long as the denominator in the integrand does not vanish, which is the case for −K(k) < t < K(k). The function cn(t|k) = √(1 − sn²(t|k)) has a zero of the first order at the points tm = (2m + 1)K. Consequently, the integrand has a pole of the second order at such points, and the integral over a segment containing a point tm diverges. Nevertheless, formula (2.57) can be extended to the whole t-axis. To see this, it is sufficient to note that the function v2 can be evaluated at tm exactly from the Wronskian:

v2(tm) = −1 / v1′(tm).


2.7.7 Degenerate Case

The most interesting case takes place when k = 1. In this case the elliptic sine degenerates to tanh(t). The linearized equation (Lame’s equation, eq. (2.55) at k = 1) takes the form

v″ = (6 tanh²(t) − 2)v.

One solution is

v1(t) = 1 / cosh²(t).

The second solution of the linearized equation is obtained from the formula for v2(t):

v2(t) = (1/(16 cosh²(t))) ((1/2) sinh(4t) + 4 sinh(2t) + 6t).

Remark 2. The solution of the elliptic sine equation at k = 1 separates the domain of periodic solutions of the mathematical pendulum equation from the domain of non-periodic solutions. This special solution is called a separatrix. The equation linearized on the separatrix has the exponentially growing solution v2(t). It means that the separatrix solution is Lyapunov unstable.
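Both formulas of the degenerate case can be verified symbolically. A minimal SymPy sketch checks that v1 and v2 solve v″ = (6 tanh²(t) − 2)v and that their Wronskian equals unity:

```python
import sympy as sp

t = sp.symbols('t')
v1 = 1/sp.cosh(t)**2
v2 = (sp.sinh(4*t)/2 + 4*sp.sinh(2*t) + 6*t) / (16*sp.cosh(t)**2)

# the linearized (Lame) operator at k = 1
lam = lambda v: sp.diff(v, t, 2) - (6*sp.tanh(t)**2 - 2)*v

assert sp.simplify(lam(v1).rewrite(sp.exp)) == 0
assert sp.simplify(lam(v2).rewrite(sp.exp)) == 0

wronskian = v1*sp.diff(v2, t) - v2*sp.diff(v1, t)
assert sp.simplify(wronskian.rewrite(sp.exp)) == 1   # normalization W(v1, v2) = 1
```

The check also makes the exponential growth of v2 visible: the term sinh(4t)/cosh²(t) grows like e^{2t}, which is the source of the Lyapunov instability noted in Remark 2.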

2.7.8 Problems

1. Construct an asymptotic expansion for se2(z, q) and ce2(z, q) and find the borders of stability.
2. Construct an asymptotic expansion for the linearly independent solution to se2(z, q).
3. Construct an asymptotic expansion for the linearly independent solution to ce0(z, q).

3 Perturbation of nonlinear oscillations

There is no doubt that analytical solutions of ordinary differential equations of the second order form one of the most important fields of mathematics. To confirm this, it is sufficient to mention Newton’s second law. However, apart from linear equations with constant coefficients, only a thin set of equations can be solved analytically. Other equations have to be treated numerically, asymptotically, or by other methods. Methods of perturbation theory are natural in celestial mechanics. A small parameter for such perturbations is the ratio between the masses of Jupiter and the Sun; this ratio is approximately 1/1000. Other physical problems contain different small parameters. Here we discuss nonlinear oscillations defined by nonlinear and non-autonomous equations. The basic features of these solutions are:
– The period depends on the energy of the oscillations.
– The phase space has three dimensions.
– The perturbed equations are non-integrable.

Approximations of solutions for nonlinear oscillators go back to S. Newcomb and Lindstedt [105, 137]. Their works contain constructions of the coefficients of solutions as series in a small parameter for perturbed equations. Such an approach allows one to study periodic solutions whose period depends on the small parameter. Later Krylov and Bogolyubov developed a general theory of averaging for oscillating dynamical systems. This approach treats the parameters of solutions as slowly varying functions, and the Lindstedt–Poincaré method is a particular case of this theory. The averaging method allows one to study quasi-periodic solutions of perturbed equations (see, for example, [84, 124]). The approach presented in this chapter contains two important steps, which reduce the problem. The first step is dividing the variables into two kinds: fast ones and slow ones. This division allows us to change the fast variable in such a way that the period of oscillations does not depend on this variable. This important property of the approximation gives an asymptotic solution with bounded derivatives with respect to the fast variable. Due to these properties, the equations for the correction terms of perturbation theory have bounded terms. The second step is representing the parameters of the leading-order term of the asymptotic expansion as series in the small parameter with coefficients depending on the slow variable. Such a representation yields a set of functions with which to remove secular terms at all orders of perturbation theory. These two steps replace the non-autonomous ordinary differential equation by partial differential equations that are autonomous with respect to the fast variable. This gives an essential reduction of the problem. DOI 10.1515/9783110335682-003


For nonlinear non-autonomous equations such an asymptotic approach was used by Kuzmak [102]. He studied the equations for the leading-order term and the first correction term of perturbation theory for eq. (3.25). He obtained an asymptotic solution in the form

u(t, ε) = u0(S, τ) + εu1(S, τ),   (3.1)

where S = K(τ)/ε and τ = εt. Such an asymptotic formula, being substituted into eq. (3.25), gives a residual O(ε²). It means that the asymptotic formula is a formal asymptotic solution of eq. (3.25) modulo O(ε²). In Ref. [102] the second correction term was not studied. However, it is well known that solutions of the equations for the corrections can grow quadratically in the fast variable. Therefore this formal solution can only be used on a small interval of the slow variable, (t − t0) = o(ε⁻¹). The next step in constructing asymptotic expansions was made by Luke. In Ref. [109] he discussed the second-order term and the higher correction terms of perturbation theory for the nonlinear Klein–Gordon partial differential equation. However, his results may be used for ordinary differential equations as well. Using perturbation theory, Luke obtained equations for the leading-order term of the asymptotic expansion and constructed an asymptotic solution which is valid on a not small interval of the slow variable. It yields a first correction term for S: K(τ, ε) = K(τ) + εφ0(τ). In this case

S = K(τ)/ε + φ0(τ).   (3.2)

One more Luke’s result is a formal justification of principle of the least action in perturbation theory. This principle was used for perturbation theory by Witham [160]. It obviously allows to derive the equation for modulation of parameters without solving the equations for corrections. But such general approach has some internal obstacles. First, this equation contains small parameter itself. Second, this equation has the same difficulties while solving as the basic equation (3.25). Later Luke’s results were generalized for some non-dissipative equations by Dobrokhotov and Maslov [31]. Dissipative equations were studied by Bourland and Haberman [17]. They derived equations for modulating action and phase shift in the elegant form through frequency and energy of a system. All constructions listed earlier are formal. Here we should mention well-known results for perturbed triangle system of equations. Following systems are often considered [5]: dI = :h1 (I, 8, :), d4

d8 = 9(I) + :h2 (I, 8, :), d4

I|4=0 = I0 , 8|4=0 = 80

where functions h1,2 (I, 8, :) depend on 8 periodically and the period should be constant, for example equal to 1. The asymptotic expansions for solutions of such

3.1 Regular Perturbation Theory for Nonlinear Oscillations

95

systems are constructed for any order :N and N = O(exp(1/:)) [128]. The justification for a large time 4 = O(:–1 ) was obtained in Ref. [8]. For Hamiltonian systems the action is an adiabatic invariant and this invariant oscillates with an amplitude of order : on large time 4 = 1/: [5]. If the solution belongs to some bounded area for large time exp(1/:), then the changing of the invariant is not more than : [128].

3.1 Regular Perturbation Theory for Nonlinear Oscillations Here we show obstacles for constructing oscillating solutions of perturbed nonlinear equation as a series of small parameter over large time. Behaviour of nonlinear oscillations should be kept in mind. – The period of oscillations depends on amplitudes of the oscillations or, that’s just the same, the period depends on an energy. For generality we study solutions of the following perturbed equation: u󸀠󸀠 + f (u) = :g(u, u󸀠 ).

(3.3)

Here f (u) = F 󸀠 (u) and F(u) is called potential or potential function and g(u, u󸀠 ) is perturbation. The motion of a particle in potential well with a dissipation as an example can be kept in mind. 3.1.1 Properties of Solutions for Unperturbed Equation Let us consider an unperturbed equation: U 󸀠󸀠 + f (U) = 0.

(3.4)

This equation has a conservation law: U 󸀠2 + F(U) = E. 2

(3.5)

The conservation law is an equation of the first order and we study it instead of the second-order equation (3.4). Here E is a parameter of eq. (3.5). This equation is integrable in quadratures: U

1 dy = t + t0 . ∫ √2 ±√E – F(y)

(3.6)

Y0

Here Y0 , t0 and sign before the square root are parameters of the solution for eq. (3.5). An explicit formula for solution of eq. (3.4) can be obtained by using eq. (3.6). This solution has tree parameters E, t0 and Y0 . Parameters t0 and Y0 define the point on phase plane for given E. These parameters are connected by explicit formula. Therefore, Y0 will be omitted.

96

3 Perturbation of nonlinear oscillations

Properties of solution are defined by zeros of square root in the integrand (3.6). Let us define yk as the zero of the first order for function E – F(y). The solution of eq. (3.4) oscillates between neighbour zeros yk and yk+1 , if E – F(U) > 0, U ∈ [yk , yk+1 ]. The period of the oscillations is T=∫ L

dU , U󸀠

L = {U :

(U 󸀠 )2 + F(U) = E} . 2

Zeros of the first order correspond to U 󸀠 = 0 and therefore zeros define turning points of the oscillations in the interval [yk , yk+1 ]. Zeros of the second order E – F(y) = 0,

f (y) = 0

define equilibriums for eq. (3.4) like centres and saddles. Define Ej , Yj solutions for this system. If f 󸀠 (Yj ) > 0, then U = Yj ; U 󸀠 = 0 is a saddle on phase manifold (U, U 󸀠 ), if f 󸀠 (Yj ) < 0 then U = Yj , U 󸀠 = 0 is a centre. Saddles and centres are staying alternately on the axis U 󸀠 = 0 and therefore they can be enumerated. Let us denote the saddles by odd and the centres by even numbers. Below we discuss solutions that are separated from equilibriums, it means E2j < E < E2j±1 . Therefore we will consider a system with only one equilibrium which is a centre without loss of generality. Assume U ∈ [yk , yk+1 ], where yk , yk+1 such that E – F(yk ) = 0, Uk < Uk+1 and E – F(U) > 0, U ∈ (yk , yk+1 ). Denote U(x, E) such that U(0, E) = yk+1 . Curve E = const has a symmetry and U(x, E) is even: U(x, E) = U(–x, E). 3.1.2 Formal Asymptotic Expansion for Solution Let us consider the equation with cubic nonlinearity and small dissipation as an example: u󸀠󸀠 + 2u + 2u3 = –:u󸀠 ,

: → 0.

(3.7)

Let us construct a solution for eq. (3.7) as a series of small parameter :: ∞

u ∼ ∑ :k uk (t).

(3.8)

k=0

Substitute this series into eq. (3.7) and collect coefficients for powers of :. It yields a recurrent system of equations for coefficients of eq. (3.8). Namely, primary term is a nonlinear equation: 3 u󸀠󸀠 0 + 2u0 + 2u0 = 0.

(3.9)

3.1 Regular Perturbation Theory for Nonlinear Oscillations

97

For uk , k ∈ N, linear equations can be obtained: 2 u󸀠󸀠 k + 2uk + 6u0 uk = Fk ,

Fk = –u󸀠k–1 + Pk,3 (u0 , . . . , uk–1 ).

Here Pk,3 is a polynomial of the third order. 3.1.3 Solution of Nonlinear Equation for Primary Term Solution of eq. (3.9) depends on two parameters, which are a time shift t0 and full energy for eq. (3.9): E=

(u󸀠 )2 u4 + u2 + . 2 2

The value of E defines the behaviour of solution for eq. (3.9). Denote a potential: F = u2 +

u4 . 2

The solution of equation dF ≡ 2u(1 + u2 ) = 0 du defines equilibrium for eq. (3.9). This equilibrium u = 0 is a centre on the phase plane. For periodic solutions 0 < E period of oscillations can be obtained by formula: T(E) = ∫ L

du , u󸀠

L :E=

(u󸀠 )2 u4 + u2 + . 2 2

The solution has a form of u0 (0)

u0 (t) = U(t – t0 , E),

t0 = ∫ 0

dy ±√2E – 2F(y)

.

Here t0 , E, and signs plus or minus in a integrand are parameters of the solution. 3.1.4 Homogeneous Linearized Equation When examining perturbations of nonlinear equations an important point is the solution of linearized equation. One can obtain the linearized equation by changing u0 by u0 + :v in a nonlinear equation then differentiating on : the result and, at last, equating : = 0: v󸀠󸀠 + 2v + 6u20 v = 0.

98

3 Perturbation of nonlinear oscillations

To obtain two solutions of this equation one should differentiate a solution of nonlinear equation u0 = U(t + t0 , E) on their parameters t0 and E. Let us differentiate eq. (3.9) on t0 : 𝜕 󸀠󸀠 (u + 2u0 + 2u30 ) = 0. 𝜕t0 0 As a result we obtain 𝜕u

d2 𝜕t 0 0

dt2

+2

𝜕u 𝜕u0 + 6u20 0 = 0. 𝜕t0 𝜕t0

𝜕u

Hence 𝜕t 0 is a solution of the linearized equation. In the same way we can see that 0 is a solution of the same linearized equation. Denote these solutions v1 and v2 : v1 (t) =

𝜕u0 , 𝜕t0

v2 (t) =

𝜕u0 𝜕E

𝜕u0 . 𝜕E

Let us show that v1 (t) and v2 (t) are linear independent. To achieve this we calculate their Wronskian: 𝜕u0 󵄨󵄨 𝜕u 𝜕u 󵄨󵄨 󵄨󵄨󵄨󵄨 √2E – 2u20 – u40 0 0 󵄨󵄨 ± 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨󵄨󵄨 𝜕E 󵄨󵄨 𝜕t0 𝜕E 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 = 󵄨󵄨 3 𝜕u0 2 – (4u 󵄨󵄨 2 󵄨 0 + 4u0 ) 󵄨󵄨 𝜕 u0 𝜕2 u0 󵄨󵄨󵄨 󵄨󵄨󵄨󵄨 𝜕E 3 󵄨󵄨 󵄨 –2u0 – 2u0 󵄨󵄨 𝜕t2 𝜕t𝜕E 󵄨󵄨󵄨 󵄨󵄨󵄨󵄨 2 √ 󵄨 0 󵄨 󵄨󵄨 ±2 2E – 2u0 – u40

󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 = 2. 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨

(3.10)

Functions v1 and v2 have following properties: – v1 (t) is periodic on t; –

v2 (t) is a sum of periodic and oscillating function with linear growing amplitude.

In fact, a Fourier series for u0 : u0 (t, E) = ∑ An (E) cos(20in9(E)t + t0 ),

9(E) =

n

1 . T(E)

Differentiating on E one gets 𝜕u0 𝜕A 𝜕9 = ∑ n cos(20in9(E)t + t0 ) – 20it ∑ nAn sin(20in9(E)t + t0 ). 𝜕E 𝜕E 𝜕E n n We can see that the last sum is v1 : v1 (t) = 20i9(E) ∑ nAn cos(20in9(E)t + t0 ). n

3.1 Regular Perturbation Theory for Nonlinear Oscillations

99

Finally we get v2 (t) = t

𝜕9 v (t) + w(t), 9𝜕E 1

w(t) = – ∑ n

𝜕An sin(20in9(E)t + t0 ). 𝜕E

Here w(t) is periodic. 3.1.5 Non-homogeneous Linearized Equation Method of variation of parameters can be used for solving non-homogeneous linearized equation v󸀠󸀠 + 2v + 6u20 v = f (t). It yields t

t

v (t) v (t) v(t) = cv1 (t) + bv2 (t) + 1 ∫ f (4)v2 (4)d4 – 2 ∫ f (4)v1 (4)d4. 2 2 This formula should be rewritten. Denote t

J(t) = ∫ f (4)v1 (4)d4. Integrate by parts: t

t

∫ f (4)v2 (4)d4 = ∫ f (4) (4

𝜕9 v (4) + w(4)) d4 9𝜕E 1 t

t

𝜕9 𝜕9 tJ(t) – ∫ J(4)d4 + ∫ f (4)w(4)d4. 9𝜕E 9𝜕E

= Now the solution has a form of v(t) = cv1 (t) + bt

𝜕9 v (t) + c2 w(t) 9𝜕E 1

t

t

v (t) 𝜕9 v (t) w(t) + 1 ∫ f (4)w(4)d4 – 1 J(t). ∫ J(4)d4 – 2 2 9𝜕E 2 It is convenient to define t

t

𝜕9 𝜕9 t+ ∫ J(4)d4 + ∫ f (4)w(4)d4. 6(t) = b 9𝜕E 9𝜕E As a result we obtain general solution of the non-homogeneous equation: v(t) = cv1 (t) + bw(t) +

v1 (t) w(t) 6(t) – J(t), 2 2

(3.11)

100

3 Perturbation of nonlinear oscillations

where c, b ∈ ℝ, J 󸀠 = f (t)v1 (t), 6󸀠 =

𝜕9 𝜕9 J(t) + f (t)w(t) + b. 9𝜕E 9𝜕E

(3.12)

Here J(t) and 6(t) should be defined by using triangle system of equation. It is easy to see that if 6(t) and J(t) are bounded then the solution is bounded too. – Solution of the non-homogeneous equation is represented in eq. (3.11), where J and 6 are solutions of triangle system of equations (3.12). 3.1.6 First Correction for Perturbation Theory The equation for the first correction has a form of 2 󸀠 u󸀠󸀠 1 + 2u1 + 6u0 u1 = –u0 .

Using eq. (3.11) for non-homogeneous equation we get u1 (t) = c1 v1 (t) + b1 w(t) + v1 (t)61 (t) + J1 (t)w(t), J1󸀠 = –u󸀠0 v1 , 6󸀠1 =

𝜕9 𝜕9 J1 + u󸀠0 w(t) + b. 9𝜕E 9𝜕E 1

The right-hand side in the equation for J1 is non-positive and periodic: J1󸀠 = –(u󸀠0 )2 , hence, J1 can be represented as a sum of linearly decreasing function and periodic J1 (t) with mean zero: T(E)

J1 = At + J1̃ (t),

A=–

1 ∫ (u󸀠0 )2 (4)d4, T(E) 0

t

J1̃ (t) = ∫(u20 (4) – A)d4. Coefficient A is proportional by square inside the trajectory of u0 on phase plane over a period: A=–

1 ∫ u󸀠0 du0 , T(E)

u0 ∈ L : E = (u󸀠0 )2 /2 + u2 + u40 /2.

L

Integrating the equation for 61 , it can be obtained that 61 grows as t2 : 61 = O(t2 ), t → ∞. It yields u1 = O(t2 ), t → ∞.

3.1 Regular Perturbation Theory for Nonlinear Oscillations

101

3.1.7 Second Correction in Formula (3.8) The right-hand side of the equation for the second correction in eq. (3.8) has a form of F2 = –6u0 u21 – u󸀠1 . General solution for the equation for the second correction is as follows: u2 (t) = c2 v1 (t) + b2 w(t) + v1 (t)62 (t) + J2 (t)w(t), J2󸀠 = F2 v1 , 𝜕9 𝜕9 J + F2 w(t) + b. 9𝜕E 2 9𝜕E 2

6󸀠2 = Evaluate t

t

J2 = ∫(–6u0 u21 – u󸀠1 )u󸀠0 d4 = –3u20 u21 – ∫(–6u20 u1 + u󸀠0 )u󸀠1 d4 t

=

–3u20 u21

t

∫(u󸀠󸀠 1

+ 2u1 +

2u󸀠0 )u󸀠1 d4

=

–3u20 u21

(u󸀠 )2 – 1 – u21 – 2 ∫ u󸀠1 u󸀠0 d4. 2

For large t, the correction u1 has an order t2 . It yields J2 = O(t4 ). Integrating equation for 62 gives 4

t

(u󸀠 )2 𝜕9 62 = ∫ (–3u20 u21 – 1 – u21 – 2 ∫ u󸀠1 u󸀠0 d41 ) d4; 9𝜕E 2 –

hence, 62 = O(t5 ) as t → ∞. As a result we get u2 = O(t5 ),

t → ∞.

3.1.8 Causes Leading to the Growing of Corrections J1 and J2 are causes of the growth of the first and the second corrections. The formula for nth term of eq. (3.8) is un (t) = cn v1 (t) + bn w(t) + v1 (t)6n (t) + Jn (t)w(t), Jn󸀠 = Fn v1 , 6󸀠n =

𝜕9 𝜕9 J + Fn w(t) + b . 9𝜕E n 9𝜕E n

If Jn is bounded, one can construct bounded 6n using an equation for bn : T(E)

∫ ( 0

𝜕9 𝜕9 Jn + Fn w(t)) dt + T(E) bn = 0. 𝜕E 𝜕E

102

3 Perturbation of nonlinear oscillations

This means that the boundedness of Jn is sufficient for boundedness of un . – In regular perturbation theory, high-order corrections grow faster than low-order corrections. –

The expansion of the regular perturbation theory lost is non-valid for large values of t.

3.1.9 Problems 1.

Construct regular perturbation expansion for i8󸀠 + (|8|2 – :t)8 = 1,

2.

: → 0.

Show an interval of validity for constructed asymptotic series. Construct primary term and two corrections of regular perturbation theory for solution of equation u󸀠󸀠 + sin(u) = : sin(Kt), Show value for K and E = bounded for all t.

(u󸀠0 )2 2

: → 0.

– cos(u0 ), when first and second corrections are

3.2 Fast and Slow Variables Fast oscillations and slow changing of parameters of solution are two basic properties of constructing above asymptotic expansions. Therefore, it is convenient to use fast and slow variables instead of one. As a result an ordinary differential equation is rewritten as a partial differential equation. – Fast and slow variables occur in natural fashion in solutions of equations with small parameter. –

The using of slow variable gives opportunity to rewrite the equation in such a form that period of solution is a constant with respect to fast variable.

In general, partial differential equations are more complicated than ordinary differential equations but for the equations with small parameter such change allows to rewrite non-autonomous equations into the form of autonomous equations with respect to fast variable. Equations for fast and slow variables are connected together by averaging over the period of fast variable. Averaging equations give equations for slow changing of parameters. 3.2.1 Two-Scaling Method Below we discuss a two-scaling method of constructing asymptotic expansions. This method is used when a derivative on independent variable can be written as a sum of

3.2 Fast and Slow Variables

103

terms of different orders with respect to perturbation parameter. One of the first works for this approach was Ref. [118]. Let us consider u(t, :) = sin(:t2 ),

0 < : ≪ 1.

Its derivative du = 2:t cos(:t2 ) dt is small for bounded t, but is not small for large when t = O(:–1 ). Obviously, here two variables can be convenient. One of them is fast t and another one is slow variable 4 = :t. It yields u(t, :) = sin(4t). In this case a sum of partial derivations should be used: d 𝜕 𝜕 = +: dt 𝜕t 𝜕4 In some cases the dependency on slow variable can be easily seen. Let us consider an equation for pendulum with a small dissipation: u󸀠󸀠 + sin(u) = –:u󸀠 .

(3.13)

An energy of the pendulum has a form of E=

(u󸀠 )2 – cos(u), 2

Differentiate energy of the pendulum: dE = –:(u󸀠 )2 . dt

(3.14)

This formula shows that the solution of the perturbed pendulum depends on slow variable: u = U(t, 4, :),

4 = :t.

Substitute this formula into equation and we get: 𝜕t2 U + 2:𝜕t 𝜕4 U + :2 𝜕42 U + sin(U) = –:𝜕t U – :2 𝜕4 U. –

(3.15)

The period of oscillations changes slowly; therefore, the slow dependency of parameters is natural for the perturbed equation.

104

3 Perturbation of nonlinear oscillations

3.2.2 Isochronous Oscillations Evaluations of slow changes for parameters of oscillating solutions are based on averaging over a period. The dependency of parameters on slow variable appears as conditions of boundedness and periodicity with respect to fast variable for corrections of perturbation theory. This approach allows to exclude the growing parts of the corrections of regular perturbation theory (Section 3.1) and include the growing parts as a slow changes of low order corrections. As a result the equations for corrections do not contain parts that cause a growth with respect to fast variable. An averaging is integrating over a period of oscillations. But this period changes slowly for perturbed equation like eq. (3.13). Let us change the fast variable in such a way that the period is a constant for new fast variable S: u = U(S, 4, :),

1 S = K(4). :

Substituting into eq. (3.15) yields (K󸀠 )2 𝜕S2 U + sin(U) = –:K󸀠 𝜕S U – :2 𝜕4 U – 2:K󸀠 𝜕S 𝜕4 U – :K󸀠󸀠 𝜕S U – :2 𝜕42 U.

(3.16)

This equation looks more complicated than eq. (3.13). But here the new function K(τ) can be chosen so that the period of oscillations is constant. Let us consider the leading-order part of this equation:
\[
(K')^2\partial_S^2 U + \sin(U) = 0 . \tag{3.17}
\]

This equation has a conservation law:
\[
\frac{1}{2}(K')^2(\partial_S U)^2 - \cos(U) = E . \tag{3.18}
\]

It means that eq. (3.18) can be integrated with respect to S. As a result one gets a formula for the solutions as the inversion of an elliptic integral. This solution has two parameters: E and the phase shift φ. The period of the oscillations is
\[
T = K'\oint_{L}\frac{dy}{\sqrt{2E + 2\cos(y)}} .
\]
The curve L lies in the phase plane of the variables U, ∂_S U for eq. (3.17) with condition (3.18). Let us fix the solution of eq. (3.17) as the even function of S with U(0, E) = arccos(−E). Then
\[
U = U(S + \varphi_0, E).
\]
Here φ₀ and E are the parameters of the solution.

3.2 Fast and Slow Variables


In this formula the integrand depends on τ. This dependency will be fixed by averaging the first correction. K′ defines the value of the period of oscillations. It is convenient to choose T ≡ 2π. As a result we get an equation for K:
\[
K'\oint_{L}\frac{dy}{\sqrt{2E + 2\cos(y)}} = 2\pi . \tag{3.19}
\]
For every E, −1 < E < 1, the solutions of eqs. (3.17) and (3.19) have period T = 2π. Equations (3.17) and (3.19) define isochronous oscillations with respect to the fast variable S. The function K(τ) is defined through E(τ).
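For the pendulum, eq. (3.19) can be checked numerically: the closed-orbit integral equals the natural period T(E), so K′(E) = 2π/T(E). A small sketch, assuming the standard reduction of the pendulum period to the complete elliptic integral, T = 4K(k) with k² = (1 + E)/2, evaluated through the arithmetic-geometric mean:

```python
import math

def agm(a, b):
    # arithmetic-geometric mean; K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def natural_period(E):
    """T(E) = closed-orbit integral of dy / sqrt(2E + 2 cos y), -1 < E < 1;
    for the pendulum it reduces to 4 K(k) with k^2 = (1 + E) / 2."""
    k2 = (1 + E) / 2
    return 2 * math.pi / agm(1.0, math.sqrt(1 - k2))

def K_prime(E):
    # eq. (3.19): K'(E) * T(E) = 2 * pi
    return 2 * math.pi / natural_period(E)

print(K_prime(-0.999), K_prime(0.9))   # ~1 near the centre, smaller for large orbits
```

Near the centre E = −1 the oscillations are almost harmonic and K′ → 1; K′ decreases monotonically as E grows toward the separatrix.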

3.2.3 An Averaging of Isochronous Oscillations

Let us consider the solution of eqs. (3.16)–(3.19) in the form
\[
u(t, \varepsilon) = U(S + \varphi, E) + \varepsilon\tilde u(S + \varphi, E, \varphi, \varepsilon). \tag{3.20}
\]
Then ũ is a solution of the following equation:
\[
\begin{aligned}
(K')^2\partial_S^2\tilde u + \tilde u\cos(U) ={}& -(K' + \varepsilon\varphi')\,\partial_S U - \varepsilon^2\partial_E U\,E' - 2K'\varphi'\,\partial_S^2 U - \varepsilon(\varphi')^2\partial_S^2 U\\
&- 2(K' + \varepsilon\varphi')E'\,\partial_S\partial_E U - \varepsilon^2(E')^2\partial_E^2 U\\
&- \bigl(2\varepsilon K'\varphi' + \varepsilon^2(\varphi')^2\bigr)\partial_S^2\tilde u - \varepsilon(K' + \varepsilon\varphi')\partial_S\tilde u\\
&- \varepsilon^2\partial_\tau\tilde u - 2\varepsilon\,\partial_S\partial_\tau\tilde u - \varepsilon^2\partial_\tau^2\tilde u\\
&- \frac{1}{\varepsilon}\bigl(\sin(U + \varepsilon\tilde u) - \sin(U) - \varepsilon\tilde u\cos(U)\bigr),
\end{aligned}\tag{3.21}
\]

where ∂_τ ũ ≡ ∂_E ũ · E′.

An equation for E, which follows from eq. (3.14), has the fast variable on its right-hand side. But here E depends on the slow variable only. Therefore, to obtain the equation for E(τ) we use an averaging, which was suggested by Krylov and Bogolyubov [104]. Denote the right-hand side of eq. (3.21) by F(S, ũ, τ, E, φ). Suppose U(S, E) and ũ are periodic in S. Multiply eq. (3.21) by ∂_S U and evaluate the integral over the period. As a result we get an equation that contains the dependency on the slow variable τ only:
\[
\int_0^{2\pi}\bigl((K')^2\partial_S^2\tilde u + \tilde u\cos(U)\bigr)\partial_S U\,dS = \int_0^{2\pi}F(S, \tilde u, \tau, E, \varphi)\,\partial_S U\,dS .
\]


Let us integrate the first term on the left-hand side by parts and use eq. (3.17):
\[
\int_0^{2\pi}(K')^2\partial_S^2\tilde u\,\partial_S U\,dS = -\int_0^{2\pi}(K')^2\partial_S^2 U\,\partial_S\tilde u\,dS = \int_0^{2\pi}\partial_S\tilde u\,\sin(U)\,dS = -\int_0^{2\pi}\tilde u\cos(U)\,\partial_S U\,dS .
\]

As a result we get
\[
\int_0^{2\pi}F(S, \tilde u, \tau, E, \varphi)\,\partial_S U\,dS = 0 .
\]

This equation should be reduced. The integrals that contain ∂_S²U ∂_S U can be integrated by parts; due to periodicity such integrals are equal to zero. Here it is convenient to use the action:
\[
I = \int_0^{2\pi}(\partial_S U)^2\,dS \equiv \oint_{L}\partial_S U\,dU .
\]

Note that
\[
2\int_0^{2\pi}E'\,\partial_E\partial_S U\,\partial_S U\,dS = \partial_\tau\int_0^{2\pi}(\partial_S U)^2\,dS = \partial_\tau I .
\]

One can see that I is in one-to-one correspondence with E. Indeed, for small U the curve L is close to a circle of radius √(2(E + 1)); hence the enclosed area is close to 2π(E + 1). It means that I ∼ 2π(E + 1) for small U. Let us show that
\[
\int_0^{2\pi}\partial_E U\,\partial_S U\,dS = 0
\]
and
\[
\int_0^{2\pi}\partial_E^2 U\,\partial_S U\,dS = 0 .
\]

Both formulas can be proved by the same method; therefore, we show the proof for the first one. Expand U(S + φ, E) into a Fourier series in S + φ. Such a series exists because U is smooth:
\[
U(S + \varphi, E) = \sum_{n=0}^{\infty}A_n(E)\cos\bigl(n(S + \varphi)\bigr).
\]


Then ∂_E U is an even function of S + φ₀ and ∂_S U is an odd function, so the integral of their product over the period is equal to zero. Also,
\[
\int_0^{2\pi}\sin(U)\,\partial_S U\,dS = 0 .
\]

This formula can be proved by integration by parts, using the periodicity of the integrands. As a result the averaging allows us to get the following equation:
\[
\partial_\tau I + I + \varepsilon\Phi(I, \varphi) = 0 , \tag{3.22}
\]
\[
\begin{aligned}
\Phi(I, \varphi) = \frac{1}{K'}\int_0^{2\pi}\Bigl(&-\bigl(2\varepsilon K'\varphi' + \varepsilon^2(\varphi')^2\bigr)\partial_S^2\tilde u - \varepsilon(K' + \varepsilon\varphi')\partial_S\tilde u - \varepsilon^2\partial_\tau\tilde u - 2\varepsilon\,\partial_S\partial_\tau\tilde u - \varepsilon^2\partial_\tau^2\tilde u\\
&- \frac{1}{\varepsilon}\bigl(\sin(U + \varepsilon\tilde u) - \sin(U) - \varepsilon\tilde u\cos(U)\bigr)\Bigr)\partial_S U\,dS .
\end{aligned}
\]
Equation (3.22) is the full averaged equation. This equation is a necessary condition for the periodicity of ũ with respect to the fast variable S.

3.2.4 Transcendent Equation for Parameter

The second condition for the periodicity of the correction can be obtained from the solutions of the linearized equation
\[
(K')^2\partial_S^2 v + v\cos(U) = 0 .
\]
Solutions of this equation are
\[
v_1 = \partial_S U, \qquad v_2 = \frac{\partial_E K'}{K'}S\,\partial_S U + \partial_E U .
\]

The Wronskian of these solutions is
\[
v_1\partial_S v_2 - v_2\partial_S v_1 = \frac{\partial_E K'}{K'}(\partial_S U)^2 + \partial_S U\,\partial_{SE}U - \partial_E U\,\partial_S^2 U .
\]
From the conservation law (3.18), (∂_S U)² = (2E + 2 cos U)/(K′)²; differentiating this identity with respect to E gives
\[
\partial_S U\,\partial_{SE}U = \frac{1}{(K')^2} - \frac{\sin(U)}{(K')^2}\partial_E U - \frac{\partial_E K'}{(K')^3}\bigl(2E + 2\cos(U)\bigr),
\]
while eq. (3.17) gives ∂_S²U = −sin(U)/(K′)². Substituting these expressions, we obtain
\[
v_1\partial_S v_2 - v_2\partial_S v_1 = \frac{1}{(K')^2} .
\]

Let ũ be
\[
\tilde u = \partial_E U + \tilde w, \qquad \tilde w(S + 2\pi, E) = \tilde w(S, E).
\]


Then ũ is a solution of the integral equation
\[
\begin{aligned}
\tilde u ={}& c\,\partial_S U + b\Bigl(\frac{\partial_E K'}{K'}S\,\partial_S U + \partial_E U\Bigr) + \partial_S U\int^{S}\Bigl(\frac{\partial_E K'}{K'}\xi\,\partial_\xi U + \partial_E U\Bigr)F(U, \tilde u, \xi, \tau, \varepsilon)\,d\xi\\
&- \Bigl(\frac{\partial_E K'}{K'}S\,\partial_S U + \partial_E U\Bigr)\int^{S}\partial_\xi U\,F(U, \tilde u, \xi, \tau, \varepsilon)\,d\xi .
\end{aligned}
\]
Here we denote
\[
J(S, \tau, \tilde u, \varepsilon) = \int^{S}\partial_\xi U\,F(U, \tilde u, \xi, \tau, \varepsilon)\,d\xi .
\]
Because of the averaged equation for eq. (3.21), the increment of J over a period is equal to zero; assuming the periodicity of U and ũ in S, one obtains that J is periodic too. The integral containing ξ∂_ξU F can be integrated by parts:
\[
\partial_S U\int^{S}\frac{\partial_E K'}{K'}\xi\,\partial_\xi U\,F\,d\xi = \frac{\partial_E K'}{K'}S\,\partial_S U\,J(S, \tau, \tilde u, \varepsilon) - \frac{\partial_E K'}{K'}\partial_S U\int^{S}J\,d\xi .
\]
Now ũ can be rewritten in the form

\[
\tilde u = c\,\partial_S U + b\,\partial_E U + \partial_S U\Bigl(b\frac{\partial_E K'}{K'}S + \int^{S}\partial_E U\,F(U, \tilde u, \xi, \tau, \varepsilon)\,d\xi\Bigr) - \frac{\partial_E K'}{K'}\partial_S U\int^{S}J\,d\xi - \partial_E U\,J(S, \tau, \tilde u, \varepsilon). \tag{3.23}
\]

The brackets contain terms that can grow linearly in S. Therefore, for the periodicity of ũ we obtain the condition
\[
b\,\frac{\partial_E K'}{K'}2\pi + \int_0^{2\pi}\partial_E U\,F(U, \tilde u, S, \tau, \varepsilon)\,dS = 0 . \tag{3.24}
\]
This formula is a transcendental equation for b, which is a parameter of the solution in eq. (3.23).

3.2.5 Equations for Parameters of Averaging

Constructing the solution as a sum of two functions, periodic in S and slowly varying in τ as in eq. (3.20), leads to a system of three nonlinear equations: (3.19), (3.22) and (3.24). Two equations of this system are first-order differential equations. Therefore, if this system is solvable, the solution contains two real parameters. These parameters are the parameters of periodic solutions for the second-order equation (3.13). One more


parameter c, which occurs in the formula for ũ and is not yet defined, can be interpreted as an initial correction for φ. Therefore this parameter can be set to zero, c ≡ 0, without loss of generality.

The equations derived here are not applicable in the neighbourhoods of the singular points of the parameter E. First, there is the neighbourhood of E = −1, where the frequency of oscillations K′ does not depend on E. Second, there is the neighbourhood of the separatrix E = 1, where the period of oscillations tends to infinity and K′ → 0.

3.2.6 Problems

1. Separate the dependencies on the slow and fast variables for the decreasing asymptotic solution of the Painlevé-2 equation
\[
u'' + tu - u^3 = 0
\]
as t → ∞.

2. Find the fast and slow variables for the asymptotic solution of the Painlevé-2 equation of the form u = √t·v(t) as t → ∞.

3.3 Krylov–Bogolyubov Method

In the general case, the period of oscillations for nonlinear equations depends on the parameters of the solution. On a large interval of time, the parameters of the leading-order term of the asymptotic expansion change slowly. Therefore, to construct an asymptotic solution suitable for a large interval of time, one should find differential equations for the slowly changing parameters.

Here we discuss the method of constructing an oscillating asymptotic expansion for the solution of a nonlinear second-order equation. The basic steps of the Krylov–Bogolyubov method are:
– to choose two scales of the independent variable;
– to transform the equation to isochronous form;
– to use parameters of the solution which depend on the slow variable;
– to derive equations for the parameters by averaging over the period of the fast variable.

We discuss the basics of the method by the example of an asymptotic solution with respect to the small parameter ε for the Cauchy problem
\[
u'' + f(\varepsilon t, u) = \varepsilon g(u, u', \varepsilon t), \qquad u|_{t=t_0} = v, \qquad u'|_{t=t_0} = w \tag{3.25}
\]
for t ∈ [t₀, ε⁻¹t′], t′ = const > 0. In eq. (3.25), f(x, y) and g(x, y, t) are smooth real functions. Assume that
\[
f(\tau, u) = \frac{dF(\tau, u)}{du}.
\]


Here F(τ, u) is such that for all τ ∈ [τ₀, τ₁] and u ∈ [−a, a], a = const > 0, the function F(τ, u) has exactly one minimum; for simplicity assume min F(τ, u) = F(τ, 0). Such a form of F corresponds to a potential well with slowly changing parameters. Below we discuss oscillations in such a potential well under the perturbation g.

The equation depends on t; therefore, the phase space is three-dimensional. The equation is non-Hamiltonian, and action-angle variables do not exist for it. Let us study a foliation of the three-dimensional phase space with respect to the parameter τ = εt. For every value of τ we consider the phase plane (u, u′). The oscillating solutions of eq. (3.25) look like cycles on this plane. This geometric simplification can be formalized by the two-scale method, where τ = εt is the slow time. It yields
\[
\partial_t^2 u + f(u, \tau) = \varepsilon g(u, \partial_t u + \varepsilon\partial_\tau u, \tau) - 2\varepsilon\,\partial_t\partial_\tau u - \varepsilon^2\partial_\tau^2 u . \tag{3.26}
\]

The Krylov–Bogolyubov method is based on two rules. The first rule is the separation of motions into fast and slow ones. The second rule is the averaging of the fast motion over a period. In Section 3.2.2 we showed that the period depends on the slow variable even for autonomous perturbed equations; of course, the same is true for non-autonomous equations like eq. (3.26).

It is convenient to apply the averaging method to solutions with a constant period of oscillations. Therefore, we replace the fast variable t by a new variable S and pass from eq. (3.26) to an equation with isochronous oscillations:
\[
S(\tau, \varepsilon) = \frac{K(\tau)}{\varepsilon} + \Theta(\tau, \varepsilon).
\]
In this case the solution has the form u = U(S, τ, ε). Substitution of this formula into eq. (3.26) yields
\[
\begin{aligned}
(K')^2\partial_S^2 U + f(U, \tau) ={}& \varepsilon\bigl(g(U, (K' + \varepsilon\Theta')\partial_S U + \varepsilon\partial_\tau U, \tau) - K''\partial_S U - 2K'\Theta'\partial_S^2 U - 2K'\partial_S\partial_\tau U\bigr)\\
&- \varepsilon^2\bigl(\Theta''\partial_S U + (\Theta')^2\partial_S^2 U + 2\Theta'\partial_S\partial_\tau U + \partial_\tau^2 U\bigr).
\end{aligned}\tag{3.27}
\]

Let us set ε = 0 in eq. (3.27) and treat the slow variable as a parameter τ ∈ ℝ. Then we obtain
\[
(K')^2\partial_S^2 u + f(u, \tau) = 0 \tag{3.28}
\]
and
\[
E(\tau) = (K')^2\frac{(\partial_S u)^2}{2} + F(u, \tau),
\]
which does not depend on S; hence E is a conservation law for eq. (3.28).


The trajectory of an oscillating solution of eq. (3.28) is a cycle on the plane (u, ∂_S u). Let us denote this cycle by L. The period of the motion over this cycle is constant, and for simplicity
\[
T \equiv \oint_{L}\frac{du}{\partial_S u} = 2\pi , \tag{3.29}
\]
where
\[
K'\partial_S u = \pm\sqrt{2E - 2F(u, \tau)};
\]
the sign is "+" on the upper half of the phase plane (U, ∂_S U) and "−" otherwise. Equation (3.29) defines K′ through the parameter E. As a result, the formula for the solution of the second-order equation contains three parameters: K, E, φ.
– The parameters E and φ are constants of integration for the autonomous equation.
– The parameter K allows us to obtain isochronous oscillations with respect to S.

3.3.1 Asymptotical Substitution

– The formal asymptotic solution has the form of a series of perturbation theory.

The Krylov–Bogolyubov asymptotic ansatz is
\[
U_N(t, \varepsilon) = \sum_{n=0}^{N}\varepsilon^n u_n(S, \tau). \tag{3.30}
\]

Let us construct the solution of eq. (3.25) in the form of eq. (3.30), where
\[
S = \frac{K(\tau)}{\varepsilon} + \Theta_N(\tau, \varepsilon), \qquad \Theta_N(\tau, \varepsilon) = \sum_{n=0}^{N}\varepsilon^n\varphi_n(\tau).
\]
The functions K(τ) and φₙ(τ) will be defined below. The new variables S and τ turn the ordinary differential equation into a partial differential equation with respect to S and τ. Here
\[
\frac{d^2}{dt^2} = \bigl[(K' + \varepsilon\Theta_N')\partial_S + \varepsilon\partial_\tau\bigr]^2 = (K' + \varepsilon\Theta_N')^2\partial_S^2 + \varepsilon(K'' + \varepsilon\Theta_N'')\partial_S + 2\varepsilon(K' + \varepsilon\Theta_N')\partial_\tau\partial_S + \varepsilon^2\partial_\tau^2 .
\]
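The operator identity above can be verified on a concrete test profile; in the sketch below K(τ), φ₀(τ) and U(S, τ) are arbitrary smooth choices (truncation N = 0, all illustrative), and the analytically expanded second derivative is compared with a finite-difference derivative of u(t) = U(S(t), τ):

```python
import math

eps = 0.05
K    = lambda tau: tau + 0.05 * tau ** 3       # illustrative K(tau)
Kp   = lambda tau: 1 + 0.15 * tau ** 2
Kpp  = lambda tau: 0.3 * tau
p0   = lambda tau: 0.2 * tau ** 2              # illustrative phi_0(tau), N = 0
p0p  = lambda tau: 0.4 * tau
p0pp = lambda tau: 0.4

# test profile U(S, tau) = (1 + tau) sin S, with exact partial derivatives
def u(t):
    tau = eps * t
    S = K(tau) / eps + p0(tau)
    return (1 + tau) * math.sin(S)

t = 1.7
tau = eps * t
S = K(tau) / eps + p0(tau)
# two-scale expansion of d^2u/dt^2:
# (K'+eps p0')^2 U_SS + eps (K''+eps p0'') U_S + 2 eps (K'+eps p0') U_{S tau} + eps^2 U_{tau tau}
a = Kp(tau) + eps * p0p(tau)
formula = (a ** 2 * (-(1 + tau) * math.sin(S))            # U_SS = -(1+tau) sin S
           + eps * (Kpp(tau) + eps * p0pp(tau)) * (1 + tau) * math.cos(S)
           + 2 * eps * a * math.cos(S))                   # U_{S tau} = cos S; U_{tau tau} = 0
h = 1e-4
numeric = (u(t + h) - 2 * u(t) + u(t - h)) / h ** 2
print(formula, numeric)                                   # the two values agree
```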

Substitute eq. (3.30) into eq. (3.25) and equate the coefficients of equal powers of ε. As a result we obtain a recurrent system of equations for the correction terms.


The dependency of the leading-order term on the fast variable is defined by the equation
\[
(K')^2\partial_S^2 u_0 + f(u_0, \tau) = 0 . \tag{3.31}
\]
The first correction term is defined by a non-autonomous linear equation:
\[
(K')^2\partial_S^2 u_1 + f_u'(u_0, \tau)u_1 = g(u_0, K'\partial_S u_0, \tau) - K''\partial_S u_0 - 2K'\partial_\tau\partial_S u_0 - 2K'\varphi_0'\partial_S^2 u_0 . \tag{3.32}
\]

The right-hand side of eq. (3.32) contains terms that depend on the slow variable τ. This dependency is fixed by an additional condition: the boundedness of the first correction with respect to the fast variable S. The equation for the second correction has the form
\[
\begin{aligned}
(K')^2\partial_S^2 u_2 + f_u'(u_0, \tau)u_2 ={}& g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)\bigl(K'\partial_S u_1 + \varphi_0'\partial_S u_0 + \partial_\tau u_0\bigr) - \frac{1}{2}f_{uu}(u_0, \tau)u_1^2\\
&- K''\partial_S u_1 - 2K'\varphi_0'\partial_S^2 u_1 - 2K'\partial_\tau\partial_S u_1\\
&- 2K'\varphi_1'\partial_S^2 u_0 - \partial_\tau^2 u_0 - (\varphi_0')^2\partial_S^2 u_0 - \varphi_0''\partial_S u_0 - 2\varphi_0'\partial_\tau\partial_S u_0 .
\end{aligned}\tag{3.33}
\]
The equation for the nth correction term is too cumbersome and is not presented here.
– The coefficients of eq. (3.30) are solutions of a recurrent system of equations.

3.3.2 Formula for Leading-Order Term of Asymptotic Expansion

– The leading-order term of the asymptotic expansion is a solution of a nonlinear autonomous second-order equation with respect to the fast variable S.

Let us construct the solution of the recurrent system of equations. The value
\[
\frac{1}{2}(K')^2(\partial_S u_0)^2 + F(\tau, u_0) = E(\tau) \tag{3.34}
\]
does not depend on S. Equation (3.34) can be integrated:
\[
S = K'\int_{y_0}^{u_0(S, \tau)}\frac{dy}{\sqrt{2E - 2F(y, \tau)}} . \tag{3.35}
\]
The solution can be obtained as the inversion of the integral in formula (3.35).
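Formula (3.35) can be inverted numerically. For the harmonic potential F(u) = u²/2, with K′ = 1 and E fixed, the inversion is known in closed form, u₀ = √(2E) sin S, which makes it a convenient test case; the quadrature and bisection below are illustrative choices:

```python
import math

E = 0.5
amp = math.sqrt(2 * E)                 # turning point for F(u) = u^2 / 2

def S_of_u(u0, n=2000):
    # S(u0) = int_0^{u0} dy / sqrt(2E - 2F(y)), F(y) = y^2 / 2, K' = 1 (midpoint rule)
    h = u0 / n
    return sum(h / math.sqrt(2 * E - ((i + 0.5) * h) ** 2) for i in range(n))

def u_of_S(S, tol=1e-10):
    # invert S(u0) = S by bisection on [0, amp); valid for 0 <= S < pi/2
    lo, hi = 0.0, amp * (1 - 1e-12)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if S_of_u(mid) < S:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

S = 0.7
print(u_of_S(S), amp * math.sin(S))    # the two values agree
```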


In the general case the parameter E(τ) defines a trajectory in the phase space (u₀, ∂_S u₀). The position of the initial point on the trajectory is defined by two parameters: u₀ = y₀ and the sign of ∂_S u₀. The third parameter is the initial value of the slow variable τ = τ₀.

Equations (3.31)–(3.33) are written under the assumption that the coefficients of eq. (3.30) and the right-hand sides of eqs. (3.31)–(3.33) are bounded as ε → 0 and τ ∈ [τ₀, τ′].

Lemma 2 (Kuzmak [102]). If the period of the smooth periodic function U(S, τ) does not depend on τ, then the derivative ∂_τ U is bounded.

Proof 2. See [102] and [38, 109]. Let the period T depend on τ. Differentiating the identity U(S + mT(τ), τ) = U(S, τ) with respect to τ gives
\[
\partial_\tau U(S + mT, \tau) = \partial_\tau U(S, \tau) - m\,T'(\tau)\,\partial_S U(S + mT, \tau).
\]
It is easy to see that the right-hand side is unbounded as m ∈ ℤ grows. Now assume that T does not depend on τ. Let us expand U(S, τ) in a Fourier series
\[
U(S, \tau) = \sum_{n=-\infty}^{\infty}U_n(\tau)\exp(2\pi i n S/T),
\]
where T is the period of U(S, τ) in S. Differentiating in τ we obtain
\[
\partial_\tau U(S, \tau) = \sum_{n=-\infty}^{\infty}\partial_\tau U_n(\tau)\exp(2\pi i n S/T).
\]
Hence the derivative is periodic in S, and therefore bounded.

3.3.3 Solution of Linearized Equation

– To construct bounded corrections we obtain boundedness conditions for the solutions of the linearized equation.

The corrections are solutions of the linearized equation
\[
(K')^2\partial_S^2 v + \partial_U f(U, \tau)v = h(S, \tau). \tag{3.36}
\]
Here h(S, τ) is periodic with respect to S with period 2π. Two linearly independent solutions of the homogeneous part of eq. (3.36) are
\[
v_1 = \partial_S U(S, E), \qquad v_2 = \frac{\partial_E K'}{K'}S\,\partial_S U + \partial_E U . \tag{3.37}
\]


The Wronskian of these solutions is
\[
v_1\partial_S v_2 - v_2\partial_S v_1 = \frac{1}{(K')^2} .
\]
The general solution of eq. (3.36) is
\[
\begin{aligned}
v(S, \tau) ={}& a\,\partial_S U + b\Bigl(\frac{\partial_E K'}{K'}S\,\partial_S U + \partial_E U\Bigr) + \partial_S U\int^{S}\Bigl(\frac{\partial_E K'}{K'}\xi\,\partial_\xi U(\xi, \tau) + \partial_E U(\xi, E)\Bigr)h(\xi, \tau)\,d\xi\\
&- \Bigl(\frac{\partial_E K'}{K'}S\,\partial_S U(S, \tau) + \partial_E U(S, E)\Bigr)\int^{S}\partial_\xi U(\xi, E)\,h(\xi, \tau)\,d\xi .
\end{aligned}
\]
Denote
\[
J(S, E) = \int^{S}\partial_\xi U(\xi, E)\,h(\xi)\,d\xi , \qquad \mathcal{J}(S, E) = \int^{S}\partial_E U(\xi, E)\,h(\xi)\,d\xi .
\]
Integrating by parts, we obtain
\[
v(S, \tau) = a\,\partial_S U + b\,\partial_E U - J(S, E)\,\partial_E U + \Bigl(b\frac{\partial_E K'}{K'}S + \frac{\partial_E K'}{K'}\int^{S}J(\xi, E)\,d\xi + \mathcal{J}\Bigr)\partial_S U .
\]

The functions ∂_S U and ∂_E U are linearly independent and periodic in S. The solution is bounded if the right-hand side satisfies two conditions. The first one is
\[
\int_0^{2\pi}h(S)\,\partial_S U(S, E)\,dS = 0 . \tag{3.38}
\]

If eq. (3.38) is true, then J(S, τ) is periodic with respect to S. The second condition reads
\[
b\,\frac{\partial_E K'}{K'}2\pi + \frac{\partial_E K'}{K'}\int_0^{2\pi}J(\xi, E)\,d\xi + \int_0^{2\pi}h(S)\,\partial_E U(S, E)\,dS = 0 . \tag{3.39}
\]
If J has zero mean, then eq. (3.39) is an equation for b:
\[
b(\tau) = -\frac{K'\bigl(\mathcal{J}(2\pi, \tau) - \mathcal{J}(0, \tau)\bigr)}{2\pi\,\partial_E K'} .
\]
Suppose eq. (3.38) is true and b is defined by eq. (3.39). Denote
\[
\Psi(S, \tau) = -\frac{S}{2\pi}\bigl(\mathcal{J}(2\pi, \tau) - \mathcal{J}(0, \tau)\bigr) + \frac{\partial_E K'}{K'}\int^{S}J(\xi, E)\,d\xi + \mathcal{J}(S, E);
\]

3.3 Krylov–Bogolyubov Method

115

then the bounded solution of the linearized equation has the form
\[
v(S, \tau) = a\,\partial_S U + b\,\partial_E U + \Psi(S, \tau)\,\partial_S U - J(S, E)\,\partial_E U . \tag{3.40}
\]
– The solution of eq. (3.36) has the form of eq. (3.40).
– The solution of eq. (3.36) is bounded if eqs. (3.38) and (3.39) are true.
– The bounded solution has only one arbitrary constant a.
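The role of condition (3.38) can be seen in the simplest model situation, with the linearized operator taken harmonic (K′ = 1, cos U replaced by 1): a forcing that is not orthogonal to ∂_S U produces secular growth, while an orthogonal one gives a bounded solution. The forcings cos S and cos 2S below are illustrative choices:

```python
import math

def integrate(omega, S_end=100.0, h=0.001):
    # v'' + v = cos(omega * S), v(0) = v'(0) = 0, integrated by RK4;
    # omega = 1 violates the orthogonality condition (3.38) (resonance),
    # omega = 2 satisfies it
    v, w, S, vmax = 0.0, 0.0, 0.0, 0.0
    def f(v, w, S):
        return (w, -v + math.cos(omega * S))
    for _ in range(int(S_end / h)):
        k1 = f(v, w, S)
        k2 = f(v + h / 2 * k1[0], w + h / 2 * k1[1], S + h / 2)
        k3 = f(v + h / 2 * k2[0], w + h / 2 * k2[1], S + h / 2)
        k4 = f(v + h * k3[0], w + h * k3[1], S + h)
        v += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        S += h
        vmax = max(vmax, abs(v))
    return vmax

print(integrate(1.0), integrate(2.0))  # secular growth vs. bounded oscillation
```

The exact resonant solution is v = (S/2) sin S, so the first amplitude grows linearly in S while the second stays bounded.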

3.3.4 Periodic Solution for the First Correction

– The right-hand side of the equation for the first correction contains an arbitrary function of the slow variable. Formula (3.38) is the equation that defines this function and allows one to construct a bounded correction.

Denote the right-hand side of the equation for the first correction by F₁(S, τ):
\[
F_1(S, \tau) = g(u_0, K'\partial_S u_0, \tau) - K''\partial_S u_0 - 2K'\partial_\tau\partial_S u_0 - 2K'\varphi_0'\partial_S^2 u_0 .
\]
The correction has the form
\[
u_1(S, \tau) = a_1\partial_S u_0 + b_1\partial_E u_0 + \Psi_1(S, \tau)\partial_S u_0 - J_1(S, E)\partial_E u_0 . \tag{3.41}
\]

Here the terms with the index 1 correspond to the same terms without indices in eq. (3.40), with h(S) ≡ F₁. The conditions of periodicity in S are
\[
\int_0^{2\pi}\partial_S u_0(S, \tau)\,F_1(S, \tau)\,dS = 0 , \tag{3.42}
\]
\[
b_1(\tau) = -\frac{K'}{2\pi\,\partial_E K'}\int_0^{2\pi}\partial_E u_0(S, \tau)\,F_1(S, \tau)\,dS . \tag{3.43}
\]

Equations (3.42) and (3.43) define the functions E(τ) and φ(τ). Let us reduce these equations using the explicit form of F₁. Let us transform the integral in eq. (3.42):
\[
\int_0^{2\pi}\partial_S u_0\,F_1\,dS = \int_0^{2\pi}g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_0\,dS - \partial_\tau\Bigl(K'\int_0^{2\pi}(\partial_S u_0)^2\,dS\Bigr) - 2K'\varphi_0'\int_0^{2\pi}\partial_S^2 u_0\,\partial_S u_0\,dS .
\]
The last integral is equal to zero because of periodicity.


Denote the first term by
\[
D_0(\tau) = -\int_0^{2\pi}g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_0\,dS .
\]
This term describes the dissipation. The second term can be rewritten in the form
\[
I_0(\tau) = K'\int_0^{2\pi}(\partial_S u_0)^2\,dS = 2\int_{x_-}^{x_+}\sqrt{2E(\tau) - 2F(x, \tau)}\,dx .
\]

This function equals the area enclosed by the trajectory on the phase plane; it defines the action for u₀(S, τ). The action is convenient for the formula for the period of oscillations:
\[
\partial_E I = 2\,\partial_E\int_{x_-}^{x_+}\sqrt{2E - 2F(x, \tau)}\,dx = 2\,\partial_E x_+\sqrt{2E - 2F(x, \tau)}\Bigr|_{x=x_+} - 2\,\partial_E x_-\sqrt{2E - 2F(x, \tau)}\Bigr|_{x=x_-} + 2\int_{x_-}^{x_+}\frac{dx}{\sqrt{2E - 2F(x, \tau)}} .
\]
The limits of integration x± are solutions of the transcendental equation F(x±, τ) = E. Therefore,
\[
\partial_E x_\pm = \frac{1}{f(x_\pm, \tau)} .
\]
The value f(x±, τ) ≠ 0, and the radicand at x± is equal to zero. It yields
\[
\partial_E I = \frac{2\pi}{K'} . \tag{3.44}
\]
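The action-period relation (3.44), ∂_E I = 2π/K′, i.e. dI/dE = T(E), can be checked numerically for the pendulum (F(u) = −cos u); the quadrature, the elliptic-integral formula for the period and the step sizes below are illustrative choices:

```python
import math

def action(E, n=2000):
    # I(E) = 2 * int_{-x+}^{x+} sqrt(2E + 2 cos x) dx, potential F(u) = -cos u;
    # the substitution x = x+ * sin(theta) removes the endpoint singularity
    xp = math.acos(-E)                 # turning point: cos(x+) = -E
    h = math.pi / n
    s = 0.0
    for i in range(n):
        th = -math.pi / 2 + (i + 0.5) * h
        x = xp * math.sin(th)
        s += math.sqrt(max(2 * E + 2 * math.cos(x), 0.0)) * xp * math.cos(th) * h
    return 2 * s

def natural_period(E):
    # pendulum period T(E) = 4 K(k), k^2 = (1 + E) / 2, via the AGM
    a, b = 1.0, math.sqrt(1 - (1 + E) / 2)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return 2 * math.pi / a

E, dE = -0.3, 1e-4
dIdE = (action(E + dE) - action(E - dE)) / (2 * dE)
print(dIdE, natural_period(E))         # dI/dE = 2*pi/K' = T(E)
```

Near the centre the same routine reproduces the small-oscillation estimate I ≈ 2π(E + 1).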

As a result we obtain an equation [102]:
\[
I_0' + D_0(\tau) = 0 . \tag{3.45}
\]
Remark 3. If D₀(τ) ≡ 0, then the leading-order action I₀ does not change; therefore I₀ is an adiabatic invariant.

To avoid cumbersome formulas, let us rewrite eq. (3.43) as a sum of the terms of F₁ and transform each of them. Denote
\[
I_1(\tau) = \int_0^{2\pi}g(u_0, K'\partial_S u_0, \tau)\,\partial_E u_0(S, \tau)\,dS .
\]


Integrating by parts and using ∂_E I = 2π/K′, we get
\[
-2K'\varphi_0'\int_0^{2\pi}\partial_S^2 u_0\,\partial_E u_0\,dS = 2K'\varphi_0'\int_0^{2\pi}\partial_S u_0\,\partial_E\partial_S u_0\,dS = K'\varphi_0'\,\partial_E\Bigl(\frac{I}{K'}\Bigr) = \frac{K'\varphi_0'}{2\pi}\partial_E\bigl(I\,\partial_E I\bigr) = \frac{K'\varphi_0'}{4\pi}\partial_E^2 I^2 .
\]

To evaluate the remaining integrals we use the Fourier series for u₀(S, τ) and the evenness of u₀(S, τ) in S:
\[
u_0(S, \tau) = \sum_{k=0}^{\infty}A_k(\tau, E)\cos(kS).
\]
Then
\[
\int_0^{2\pi}\partial_S u_0\,\partial_E u_0\,dS = 0
\]
due to the oddness of the integrand. In the same way we get
\[
\int_0^{2\pi}\partial_\tau\partial_S u_0\,\partial_E u_0\,dS = 0 .
\]

As a result we obtain
\[
\int_0^{2\pi}F_1(S, \tau)\,\partial_E u_0(S, \tau)\,dS = \varphi_0'\frac{K'}{4\pi}\partial_E^2 I^2 + I_1(\tau).
\]
Then eq. (3.43) takes the form
\[
b_1(\tau) = \varphi_0'\frac{\partial_E^2(I^2)}{4\pi\,\partial_E^2 I} - \frac{K'}{2\pi\,\partial_E K'}I_1(\tau). \tag{3.46}
\]

This formula defines the dependency of the first correction on φ₀′. As a result we construct a periodic solution for the first correction term:
\[
u_1(S, \tau) = \Bigl(\varphi_0'\frac{\partial_E^2(I^2)}{4\pi\,\partial_E^2 I} - \frac{K'}{2\pi\,\partial_E K'}I_1(\tau)\Bigr)\partial_E u_0 + \Psi_1(S, \tau)\,\partial_S u_0 - J_1(S, E)\,\partial_E u_0 . \tag{3.47}
\]
Theorem 7 (Kuzmak [102]). Let E(τ), K(τ) and φ₀(τ) be solutions of eqs. (3.45), (3.19) and (3.46); then the periodic solution of eq. (3.32) has the form (3.41).


3.3.5 Problems

1. Find a formula for the leading-order term of the asymptotic solution of the equation
\[
u'' + 2u - 2u^3 = -\varepsilon u' .
\]
2. Construct an asymptotic term of order √t for the solution of the equation u″ + tu − u³ = 0 as t → ∞ in terms of an elliptic function.

3.4 Higher-Order Terms in Krylov–Bogolyubov Method

In Section 3.3 we constructed the leading-order term and the first correction term of the perturbation series for the solution of eq. (3.25). But this is not enough to approximate the solution of the perturbed equation with precision O(1) as ε → 0: the cause is the uncertainty of φ(τ, ε). In this section we construct the second correction for eq. (3.25) and show the approach to the construction of higher corrections of perturbation theory. Higher-order terms are important:
– to obtain a more precise approximation, and
– to clarify the interval of validity of the asymptotic expansion.

3.4.1 Second Correction of Perturbation Theory

– The boundedness of the second correction defines the leading-order term of the asymptotic expansion with accuracy o(1) as t = O(ε⁻¹).

The second correction is a solution of eq. (3.33). Denote the right-hand side of eq. (3.33) by F₂(S, τ):
\[
\begin{aligned}
F_2(S, \tau) ={}& g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)\bigl(K'\partial_S u_1 + \varphi_0'\partial_S u_0 + \partial_\tau u_0\bigr) - \frac{1}{2}f_{uu}(u_0, \tau)u_1^2\\
&- K''\partial_S u_1 - 2K'\varphi_0'\partial_S^2 u_1 - 2K'\partial_\tau\partial_S u_1\\
&- 2K'\varphi_1'\partial_S^2 u_0 - \partial_\tau^2 u_0 - (\varphi_0')^2\partial_S^2 u_0 - \varphi_0''\partial_S u_0 - 2\varphi_0'\partial_\tau\partial_S u_0 .
\end{aligned}
\]
It is convenient to highlight the terms with the coefficients φ₀″ and φ₁′: these coefficients help to define the boundedness conditions for the correction. Denote
\[
F_2(S, \tau) = -2K'\varphi_1'\partial_S^2 u_0 - \varphi_0''\partial_S u_0 + P_2 .
\]
Here P₂ contains the remaining terms of F₂. The solution of the equation for u₂(S, τ) can be represented in the same way as u₁(S, τ) after changing the index 1 to 2. As a result one can see that the second correction


depends on φ₁(τ) and φ₂(τ). These functions are defined when studying the boundedness of the third and the fourth corrections. But the boundedness of the second correction gives us a second-order equation for φ₀. The conditions for the periodicity of the second correction are
\[
\int_0^{2\pi}\partial_S u_0(S, \tau)\,F_2(S, \tau)\,dS = 0 , \tag{3.48}
\]
\[
b_2(\tau) = -\frac{K'}{2\pi\,\partial_E K'}\int_0^{2\pi}\partial_E u_0(S, \tau)\,F_2(S, \tau)\,dS . \tag{3.49}
\]
Formulas (3.48) and (3.49) contain the undefined functions φ₀ and φ₁. Formula (3.48) defines φ₀:
\[
\int_0^{2\pi}\partial_S u_0\bigl(-2K'\varphi_1'\partial_S^2 u_0 - \varphi_0''\partial_S u_0 + P_2\bigr)\,dS = 0
\]
or
\[
I\varphi_0'' - K'\int_0^{2\pi}P_2\,\partial_S u_0\,dS = 0 . \tag{3.50}
\]

The second equation defines b₂:
\[
b_2 = -\frac{K'}{2\pi\,\partial_E K'}\int_0^{2\pi}\bigl(-2K'\varphi_1'\partial_S^2 u_0 - \varphi_0''\partial_S u_0 + P_2\bigr)\partial_E u_0\,dS = -\frac{K'}{2\pi\,\partial_E K'}\Bigl(K'\varphi_1'\,\partial_E\Bigl(\frac{I}{K'}\Bigr) + \int_0^{2\pi}P_2\,\partial_E u_0\,dS\Bigr).
\]

To obtain an explicit form, eq. (3.48) should be transformed; the transformations are shown below, with F₂ represented as a sum of integrals. First of all, let us transform the integral that contains u₁², using eq. (3.32) in the second step:
\[
\begin{aligned}
-\frac{1}{2}\int_0^{2\pi}f_{uu}(u_0, \tau)\,u_1^2\,\partial_S u_0\,dS &= -\frac{1}{2}\bigl(f_u(u_0, \tau)u_1^2\bigr)\Bigr|_{S=0}^{S=2\pi} + \int_0^{2\pi}f_u(u_0, \tau)\,u_1\,\partial_S u_1\,dS\\
&= \int_0^{2\pi}\bigl(-(K')^2\partial_S^2 u_1 + g(u_0, K'\partial_S u_0, \tau) - K''\partial_S u_0 - 2K'\partial_\tau\partial_S u_0 - 2K'\varphi_0'\partial_S^2 u_0\bigr)\partial_S u_1\,dS .
\end{aligned}
\]


The integral of the first term vanishes after integration by parts, because of periodicity. As a result we obtain
\[
-\frac{1}{2}\int_0^{2\pi}f_{uu}(u_0, \tau)u_1^2\,\partial_S u_0\,dS = \int_0^{2\pi}\bigl(g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_1 - K''\partial_S u_0\,\partial_S u_1 - 2K'\partial_\tau\partial_S u_0\,\partial_S u_1 - 2K'\varphi_0'\partial_S^2 u_0\,\partial_S u_1\bigr)\,dS .
\]
Next, consider the terms of F₂ that are linear in u₁:
\[
\int_0^{2\pi}\bigl(g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)K'\partial_S u_1 - K''\partial_S u_1 - 2K'\partial_\tau\partial_S u_1 - 2K'\varphi_0'\partial_S^2 u_1\bigr)\partial_S u_0\,dS .
\]
Note that integrating the last term of this formula by parts cancels the last term of the previous formula. As a result the two groups combine into
\[
-2\Bigl(K'\int_0^{2\pi}\partial_S u_1\,\partial_S u_0\,dS\Bigr)' + \int_0^{2\pi}\Bigl(g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_1 + \bigl(g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)K'\partial_S u_1\bigr)\partial_S u_0\Bigr)\,dS .
\]
Now let us evaluate the integrals that depend on u₀ linearly:

\[
D_2 = \int_0^{2\pi}\bigl(-2K'\varphi_1'\partial_S^2 u_0 - \partial_\tau^2 u_0 - (\varphi_0')^2\partial_S^2 u_0 - \varphi_0''\partial_S u_0 - 2\varphi_0'\partial_\tau\partial_S u_0\bigr)\partial_S u_0\,dS .
\]
Using evenness we get
\[
D_2 = -\Bigl(\varphi_0'\int_0^{2\pi}(\partial_S u_0)^2\,dS\Bigr)' .
\]
The last group of terms of F₂ depends on u₀ nonlinearly:
\[
D_3 = \int_0^{2\pi}\bigl(\varphi_0'\partial_S u_0 + \partial_\tau u_0\bigr)\,g_{u'}(u_0, K'\partial_S u_0, \tau)\,\partial_S u_0\,dS .
\]


As a result, eq. (3.48) takes the form
\[
\begin{aligned}
&2\Bigl(K'\int_0^{2\pi}\partial_S u_1\,\partial_S u_0\,dS\Bigr)' + \Bigl(\varphi_0'\int_0^{2\pi}(\partial_S u_0)^2\,dS\Bigr)'\\
&\quad - \int_0^{2\pi}\Bigl[\bigl((\varphi_0'\partial_S u_0 + \partial_\tau u_0)\,g_{u'}(u_0, K'\partial_S u_0, \tau) + g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)K'\partial_S u_1\bigr)\partial_S u_0\\
&\qquad + g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_1\Bigr]\,dS = 0 .
\end{aligned}\tag{3.51}
\]

Denote
\[
\mathcal{I}_1 = 2K'\int_0^{2\pi}\partial_S u_1\,\partial_S u_0\,dS + \varphi_0'\int_0^{2\pi}(\partial_S u_0)^2\,dS ,
\]
\[
\mathcal{D}_1 = -\int_0^{2\pi}\Bigl[\bigl((\varphi_0'\partial_S u_0 + \partial_\tau u_0)\,g_{u'}(u_0, K'\partial_S u_0, \tau) + g_u(u_0, K'\partial_S u_0, \tau)u_1 + g_{u'}(u_0, K'\partial_S u_0, \tau)K'\partial_S u_1\bigr)\partial_S u_0 + g(u_0, K'\partial_S u_0, \tau)\,\partial_S u_1\Bigr]\,dS .
\]
Then eq. (3.51) takes the form
\[
\mathcal{I}_1' + \mathcal{D}_1 = 0 .
\]
Finally,
\[
u_2(S, \tau) = b_2(\tau)\,\partial_E u_0(S, \tau) + \frac{\partial_E K'}{K'}\,\partial_S u_0(S, \tau)\,\mathcal{J}_2(S, \tau) + \partial_E u_0(S, \tau)\,J_2(S, \tau).
\]
Here
\[
\frac{dJ_2}{dS} = F_2(S, \tau)\,\partial_S u_0(S, \tau), \qquad \frac{d\mathcal{J}_2}{dS} = \frac{\partial_E K'}{K'}J_2(S, \tau) + F_2(S, \tau)\,\partial_E u_0(S, \tau).
\]

Remark 4. If g ≡ 0, then eq. (3.51) can be integrated once:
\[
2K'\int_0^{2\pi}\partial_S u_1\,\partial_S u_0\,dS + \varphi_0'\int_0^{2\pi}(\partial_S u_0)^2\,dS = \mathrm{const}. \tag{3.52}
\]


Theorem 8 (Luke [109], Dobrokhotov–Maslov [31], Bourland–Haberman [17]). Let b₁(τ) and φ₀(τ) be solutions of eqs. (3.51) and (3.46), and let b₂(τ) and φ₁(τ) be connected by eq. (3.49); then the formal asymptotic solution of eq. (3.25), modulo O(ε³), for t ∈ (t₀, t₀ + const), const > 0, has the form
\[
u(t, \varepsilon) = u_0(S, \tau) + \varepsilon u_1(S, \tau) + \varepsilon^2 u_2(S, \tau).
\]
– Here b₂(τ) and φ₁(τ) are still not defined. But E(τ), K(τ) and φ₀(τ) define the leading-order term u₀(S, τ) of the Krylov–Bogolyubov asymptotic ansatz with accuracy o(ε) on a bounded interval of t.

3.4.2 Periodic Solution of Equation for nth Correction

– The general form of the nth correction gives recurrent formulas for the differential system of equations for the parameters depending on the slow variable.
– The conditions for the periodicity of the nth correction lead to an equation for φₙ₋₂.

Let us consider the first boundedness condition for the correction:
\[
\int_0^{2\pi}F_n\,\partial_S u_0\,dS = 0 . \tag{3.53}
\]
This condition leads to equations for φ″ₙ₋₂ and φ′ₙ₋₁; therefore, it is convenient to highlight the terms containing φ″ₙ₋₂ and φ′ₙ₋₁. It yields
\[
F_n = -2K'\varphi_{n-1}'\partial_S^2 u_0 - \varphi_{n-2}''\partial_S u_0 + P_n .
\]
Then

\[
\int_0^{2\pi}\bigl(-2K'\varphi_{n-1}'\partial_S^2 u_0 - \varphi_{n-2}''\partial_S u_0 + P_n\bigr)\partial_S u_0(S, \tau)\,dS = 0 , \tag{3.54}
\]
\[
b_n = -\frac{K'}{2\pi\,\partial_E K'}\int_0^{2\pi}\bigl(-2K'\varphi_{n-1}'\partial_S^2 u_0 - \varphi_{n-2}''\partial_S u_0 + P_n\bigr)\partial_E u_0(S, \tau)\,dS . \tag{3.55}
\]

After transformations we get
\[
I\varphi_{n-2}'' - K'\int_0^{2\pi}P_n\,\partial_S u_0(S, \tau)\,dS = 0 , \tag{3.56}
\]
\[
b_n = -\frac{K'}{2\pi\,\partial_E K'}\Bigl(K'\varphi_{n-1}'\,\partial_E\Bigl(\frac{I}{K'}\Bigr) + \int_0^{2\pi}P_n\,\partial_E u_0(S, \tau)\,dS\Bigr). \tag{3.57}
\]


Equations (3.56) and (3.57) can be reduced further, but the calculations are too lengthy. Finally, the solution of the equation for the nth correction has the form
\[
u_n(S, \tau) = b_n(\tau)\,\partial_E u_0(S, \tau) + \frac{\partial_E K'}{K'}\,\partial_S u_0(S, \tau)\,\mathcal{J}_n(S, \tau) + \partial_E u_0(S, \tau)\,J_n(S, \tau).
\]
Here
\[
\frac{dJ_n}{dS} = F_n(S, \tau)\,\partial_S u_0(S, \tau), \qquad \frac{d\mathcal{J}_n}{dS} = \frac{\partial_E K'}{K'}J_n(S, \tau) + F_n(S, \tau)\,\partial_E u_0(S, \tau).
\]
Theorem 9 (Luke [109]). Let φₙ₋₂(τ) be the solution of eq. (3.56) and let bₙ(τ) and φₙ₋₁(τ) be connected by eq. (3.57); then the formal asymptotic solution of eq. (3.25), modulo O(ε^{n+1}), for t ∈ (t₀, t₀ + const), const > 0, has the form
\[
u(t, \varepsilon) = u_0(S, \tau) + \varepsilon u_1(S, \tau) + \varepsilon^2 u_2(S, \tau) + \dots + \varepsilon^n u_n(S, \tau).
\]
– Thus we can construct an arbitrary number of terms of the perturbation series, which is the asymptotic solution of eq. (3.25).

3.5 Interval of Validity for Krylov–Bogolyubov's Ansatz

We discussed earlier the approach of Krylov–Bogolyubov–Kuzmak for constructing an asymptotic expansion. This approach is used for approximations of solutions of equations in which the period of oscillations depends on a parameter; the full energy of a mechanical system is an example of such a parameter. The value of the energy changes slowly under a perturbation. In the general case, such a change drives the trajectory of the perturbed system into the neighbourhood of stationary points. Centres and saddles are examples of such stationary points.

3.5.1 Small Neighbourhoods of a Centre

In a small neighbourhood of a centre Uⱼ the conservation law can be written as
\[
\frac{(Z')^2}{2} + \frac{Z^2}{2} + O(|Z|^3) = E - E_j .
\]
Here Z = U − Uⱼ. At the turning points Z′ = 0, and it is possible to evaluate the amplitude of the oscillations: Z ∼ √(2(E − Eⱼ)). Hence
\[
U = O\bigl(\sqrt{E - E_j}\bigr).
\]
This dependency of the amplitude on E leads to the loss of differentiability in E of the leading-order term, and therefore to the non-validity of the Krylov–Bogolyubov–Kuzmak asymptotic expansion close to the centre. Indeed,
\[
\partial_E U = O\bigl(1/\sqrt{E - E_j}\bigr).
\]
Then u₁ = O((E − Eⱼ)^{−1/2}), u₂ = O((E − Eⱼ)^{−3/2}) and uₙ = O((E − Eⱼ)^{−(2n−1)/2}).


The condition for the validity of the expansion is
\[
\varepsilon(E - E_j)^{-1} \ll 1, \qquad \text{i.e.}\quad E - E_j \gg \varepsilon .
\]
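The loss of smoothness in E near the centre can be illustrated with the pendulum: the amplitude of oscillations is x₊(E) = arccos(−E), and its E-derivative blows up like (2(E + 1))^{−1/2} as E approaches the centre value Eⱼ = −1 (an illustrative numerical check):

```python
import math

def amplitude(E):
    # turning point of u'' + sin u = 0 at energy E: cos(x+) = -E
    return math.acos(-E)

def damp_dE(E, dE=1e-7):
    # central finite difference for d x+ / dE; exact value is 1 / sqrt(1 - E^2)
    return (amplitude(E + dE) - amplitude(E - dE)) / (2 * dE)

# near the centre E = -1 + delta the derivative behaves like (2*delta)^(-1/2)
for delta in (1e-2, 1e-4):
    E = -1 + delta
    print(delta, damp_dE(E) * math.sqrt(2 * delta))   # tends to 1 as delta -> 0
```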

3.5.2 Neighbourhood of Separatrix and Saddle

The Krylov–Bogolyubov ansatz is convenient for solutions with a constant period of oscillations. Passing to the equation with isochronous oscillations gives additional terms with derivatives, in the slow variable, of the frequency of oscillations K′(τ). Let us consider E → E*, where E* is defined by the system of equations
\[
2E_* - 2F(\tau, Y_*) = 0, \qquad f(\tau, Y_*) = 0 .
\]
In the general case
\[
f(Y) \sim c\,(Y - Y_*), \qquad c = \mathrm{const} \neq 0 .
\]

The formula (3.29) for K′ gives K′ = O(1/|log|E − E*||). Evaluating the right-hand side of the equation for the first correction gives F₁ = O(|E − E*|⁻¹), which yields u₁ = O(|E − E*|⁻²). Due to the nonlinearity, the right-hand side of the equation for the second correction is F₂ = O(|E − E*|⁻⁴), which yields u₂ = O(|E − E*|⁻⁵). One can see that the right-hand side of the equation for every next correction grows by two orders of 1/|E − E*|, and the integration brings one more multiplier of order 1/|E − E*| into the value of the correction. As a result we get the interval of validity of the Krylov–Bogolyubov ansatz:
\[
\varepsilon|E - E_*|^{-3} \ll 1 .
\]

3.5.3 Asymptotic Solution of a Cauchy Problem

We suppose that K = 0 at τ = 0. Therefore, the fast variable at τ = 0 can be represented as a segment of the formal series:
\[
S|_{\tau=\tau_0} = S_N^0 , \qquad S_N^0 \equiv \sum_{k=0}^{N}\varepsilon^k\varphi_k(0).
\]


The values of φₖ(τ) at τ = 0 should be defined by using the initial conditions of the Cauchy problem (3.25). To construct every correction, a non-homogeneous linear equation should be solved. According to the general theory, the solution of such an equation can be represented as a sum of a particular solution of the non-homogeneous equation and a solution of the homogeneous equation with an undetermined coefficient. The corrections of the Krylov–Bogolyubov asymptotic ansatz have no undetermined coefficients; instead, the undetermined parameters are contained in the phase shift φ₀ and its corrections φₖ. The values of φₖ(0) and of the derivatives φₖ′(0) should be defined in order to construct the asymptotic expansion for the given Cauchy problem. These parameters can be defined from the initial data of the Cauchy problem (3.25). The initial value at t = t₀, u(t, ε)|_{t=t₀} = v₀, can be rewritten for the formal asymptotics in the form
\[
u_0(S_N^0, 0) + \sum_{k=0}^{N}\varepsilon^k b_k(0)\Bigl(\frac{\partial_E K'}{K'}S_N^0\,\partial_S U_0(S_N^0, 0) + \partial_E U_0(S_N^0, 0)\Bigr) \sim v_0 .
\]

The formula for the derivative at t = t₀ is ∂_t u|_{t=t₀} = w₀. It can be rewritten as
\[
\begin{aligned}
&(K'(0) + \varepsilon S_N^{0\prime})\,\partial_S u_0(S_N^0, 0) + \varepsilon\,\partial_\tau u_0(S_N^0, 0)\\
&\quad + \sum_{k=1}^{N}\varepsilon^k(K'(0) + \varepsilon S_N^{0\prime})\,b_k(0)\Bigl(\frac{\partial_E K'}{K'}S_N^0\,\partial_S^2 U_0(S_N^0, 0) + \frac{\partial_E K'}{K'}\partial_S U_0(S_N^0, 0) + \partial_S\partial_E U_0(S_N^0, 0)\Bigr)\\
&\quad + \varepsilon\sum_{k=1}^{N}\varepsilon^k\Bigl(\partial_\tau\Bigl(b_k\frac{\partial_E K'}{K'}\Bigr)S_N^0\,\partial_S U_0(S_N^0, 0) + b_k\frac{\partial_E K'}{K'}S_N^0\,\partial_\tau\partial_S U_0(S_N^0, 0)\\
&\qquad + \partial_\tau b_k\,\partial_E U_0(S_N^0, 0) + b_k\,\partial_\tau\partial_E U_0(S_N^0, 0)\Bigr) \sim w_0 .
\end{aligned}
\]
Formulas for the corrections can be obtained by expansion in powers of ε. This approach was discussed for the leading-order term and the first correction in Ref. [17]. Let us define the initial values of I(0), φ₀(0) and φ₀′(0) using E(0). In the order ε⁰ one obtains
\[
U_0(\varphi_0(\tau_0), \tau_0) = v_0 , \qquad K'(\tau_0)\,\partial_S u_0(\varphi_0(\tau_0), \tau_0) = w_0 .
\]
These formulas allow us to define the values of E(0) and φ₀(0).


The value of φ′_0(0) can be defined by using b_1(0) at the order ε. The formula for the initial value of the solution is

φ_1(0) ∂_S u_0(φ_0(0), 0) + b_1(0) ( (∂_E K′/K′) φ_0(0) ∂_S U_0(φ_0(0), 0) + ∂_E U_0(φ_0(0), 0) ) = 0,

and the formula for the initial value of the derivative is

φ′_0 ∂_S U_0(φ_0(0), 0) + K′ φ_1(0) ∂_S^2 u_0(φ_0(τ_0), 0) + ∂_τ u_0(φ_0(0), 0)
+ K′(t_0) b_1(0) ( (∂_E K′/K′) (φ_0 ∂_S^2 u_0(φ_0(0), 0) + ∂_S U_0(φ_0(0), 0)) + ∂_S ∂_E U_0(φ_0(0), 0) ) = 0.

The parameter b_1(τ) can be expressed through φ′_0 and eq. (3.46). Then we obtain a system of algebraic equations for φ′_0(0) and φ_1(0):

φ_1(0) ∂_S U_0(φ_0(0), 0) − φ′_0 (∂_E^2(I^2)/(4 ∂_E I)) ( (∂_E K′/K′) φ_0(0) ∂_S U_0(φ_0(0), 0) + ∂_E U_0(φ_0(0), 0) ) = Q_1,   (3.58)

K′ φ_1(0) ∂_S^2 u_0(φ_0(τ_0), 0) + φ′_0 ∂_S U_0(φ_0(0), 0) − K′(t_0) φ′_0 (∂_E^2(I^2)/(4 ∂_E I)) ( (∂_E K′/K′) (φ_0 ∂_S^2 u_0(φ_0(0), 0) + ∂_S U_0(φ_0(0), 0)) + ∂_S ∂_E U_0(φ_0(0), 0) ) = Q_2.   (3.59)

The right-hand sides of the system are

Q_1 = (K′/(2 ∂_E K′)) I_1(τ) ( (∂_E K′/K′) φ_0(0) ∂_S U_0(φ_0(0), 0) + ∂_E U_0(φ_0(0), 0) ),

Q_2 = (K′/(2 ∂_E K′)) I_1(τ) ( (∂_E K′/K′) (φ_0 ∂_S^2 u_0(φ_0(0), 0) + ∂_S U_0(φ_0(0), 0)) + ∂_S ∂_E U_0(φ_0(0), 0) ) − ∂_τ U_0(φ_0(0), 0).

Let us evaluate the determinant of the matrix of this system for φ_1 and φ′_0. The matrix is composed of two linearly independent solutions of the linearized equation (3.37):


W = | v_1   v_2 |
    | ∂_S v_1   ∂_S v_2 |
  = (∂_S u_0)^2 − (∂_E^2(I^2)/(4 ∂_E I)) (1/K′).

Using eq. (3.44) we get

W = (∂_S u_0)^2 − (1/2) ∂_E^2(I^2).   (3.60)

In physical examples such as the pendulum or Duffing's oscillator one can see that ∂_E I grows monotonically with respect to E; hence ∂_E^2 I > 0. Using this general formula for the initial data we obtain the same kind of system for φ_n(t_0) and φ′_{n−1}(t_0). In this way a formal asymptotic solution of the Cauchy problem (3.25) can be obtained.

Theorem 10 (Bourland and Haberman, 1988). The problem of defining the initial values of the parameters of the asymptotic expansion of the solution of the Cauchy problem (3.25) is reduced to a solvable system of equations (3.58)–(3.59).

Remark 5. Bourland and Haberman give an approach to defining the initial values of E(t_0), φ_0(t_0) and φ′_0(t_0). The values for the nth corrections can be defined in the same way.

Theorem 11. The formal asymptotic solution of the Cauchy problem (3.25) with accuracy O(ε^{n+1}) is defined by a system of linear algebraic equations and Theorem 6.

4 Nonlinear oscillator in potential well

In this chapter, we discuss two problems for nonlinear equations with one and a half degrees of freedom.

First, we consider the approach of perturbation theory for trajectories in a small neighbourhood of separatrices of perturbed equations. This means that the particle oscillates near the edge of the well; if the energy of the particle exceeds the border value, then the particle leaves the well.
– In this approach we construct a perturbation series with the separatrix as the leading-order approximation.
– Then we show ascending and descending discrete dynamics for the parameters of the perturbed solution.
– We also show an irregular subset of the small parameter which is the support of the oscillating asymptotic behaviour of the solution.

Second, we discuss the problem of capture into resonance as a problem of matching asymptotic series with different asymptotic behaviours:
– We show that when a trajectory passes through a resonance it is close to the neighbourhood of the separatrix of the pendulum equation with a small external torque and a small additional quasi-dissipative term.
– Finally, we give formulas for the parameters of trajectories which will be captured into the resonance.

Second-order differential equations are of fundamental importance for applications in physics. This is an obvious consequence of Newton's second law. The case of one and a half degrees of freedom is a recognized testing area for nonlinear equations. At the end of the nineteenth century Poincaré called the study of such systems the most important problem of dynamics. In spite of progress both in the qualitative comprehension of processes in systems with one and a half degrees of freedom and in their quantitative investigation, the analytic theory is by no means complete. The equation of motion of a particle in a potential well with a small almost periodic external force and dissipation,

u″ + g(u) = ε f(t) − ε A u′,   (4.1)

is an example of a dissipative perturbed system with one and a half degrees of freedom. Here ε is a small parameter, f(t) a smooth almost periodic function and A > 0 the dissipation parameter. This equation is a model for the mathematical investigation of nonlinear oscillatory systems with dissipation. Dissipation leads to a decrease of energy and so results in a change of the oscillation period. Under an external force the period of oscillations of the system passes through resonance values. Locally, in the neighbourhood of a resonance, the solution is determined by the equation of nonlinear resonance. DOI 10.1515/9783110335682-004
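A minimal numerical sketch of the energy decay described above; all concrete choices here, g(u) = u, f(t) = cos(2t), ε = 0.05, A = 1 and the final time, are illustrative assumptions, not values taken from the text:

```python
import math

def rk4_step(state, t, h, eps, A):
    """One RK4 step for u'' + u = eps*cos(2t) - eps*A*u' (assumed model)."""
    def rhs(s, tt):
        u, v = s
        return (v, -u + eps * math.cos(2.0 * tt) - eps * A * v)
    u, v = state
    k1 = rhs((u, v), t)
    k2 = rhs((u + 0.5 * h * k1[0], v + 0.5 * h * k1[1]), t + 0.5 * h)
    k3 = rhs((u + 0.5 * h * k2[0], v + 0.5 * h * k2[1]), t + 0.5 * h)
    k4 = rhs((u + h * k3[0], v + h * k3[1]), t + h)
    return (u + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            v + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def energy(state):
    u, v = state
    return 0.5 * v * v + 0.5 * u * u   # E = (u')^2/2 + F(u), F(u) = u^2/2

state, t, h = (1.0, 0.0), 0.0, 0.01
E0 = energy(state)
for _ in range(5000):                  # integrate up to t = 50
    state = rk4_step(state, t, h, eps=0.05, A=1.0)
    t += h
print(E0, energy(state))               # dissipation drives the energy down
```

The forcing frequency 2 is deliberately non-resonant, so the transient decays at the rate εA/2 and the remaining forced oscillation carries much less energy than E0.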


In the dissipationless case, if moreover f(t) = cos(ωt), then eq. (4.1) is the equation of a mathematical pendulum with external torque [26]. When passing through a resonance without capture, the solution is known to undergo a change as large as the square root of the perturbation parameter [6]. Because of dissipation, the phase portrait of the equation of nonlinear resonance changes essentially and there appears a region of trajectories that are captured into the resonance. Estimates for the measure of the region of captured solutions are obtained in Ref. [127]. Papers [120–122] give a complete qualitative investigation of solutions of Duffing's equation with dissipation in the neighbourhood of the resonance level. In Ref. [59], a qualitative approach to the study of resonances in nonlinear systems is presented.

In spite of the achievements in qualitative theory and the understanding of local analysis, the capture into resonance is interpreted from the viewpoint of probability theory [23, 125, 126] and symbolic dynamics [3, 134]. In problems on perturbations near a separatrix there appear asymptotics on Cantor sets [89]. Hence a change to a mathematical tool which allows one to investigate the measure of trajectory regions might be of use. However, this does not cancel the importance of investigating the behaviour of single trajectories with characteristic properties.

From the viewpoint of perturbation theory, three time intervals with distinctive behaviour of the solution can be distinguished in the problem of capture into resonance. First, there is a time interval far off the resonance. Second, there is a time interval near the resonance but outside the capture region. Third, there is the capture region itself. It turns out that for each of these intervals there are specific small parameters that enable one to construct the asymptotic solution suitable for that region. The formulas for the asymptotic solutions in these regions represent intermediate asymptotics [11].

The regions of applicability of the intermediate asymptotics overlap. Hence it is possible to match the parameters of the constructed asymptotics in much the same way as in the method of matching of asymptotic expansions [64]. As a result one manages to construct an asymptotic solution of the original problem which fits all the regions concerned. Away from the resonance one can use the Krylov–Bogolyubov method to construct asymptotics of solutions (see [15, 17, 102, 119]). To find the connection of the parameters, the Whitham method can be exploited (see Refs. [17, 109, 161]). However, the results of Refs. [17, 109, 161] cannot be applied directly to our case, since we deal with non-periodic perturbations. Thus we had to develop this approach by averaging over the whole fast time rather than averaging over a period.

Here we consider two important elements of the general theory. First, we derive a connection formula between the asymptotics away from the region of capture into nonlinear resonance and the parameters of trajectories captured into resonance in the capture region. In this formula the value of the phase shift of oscillations is of crucial importance; it turns out that this value is singular in the perturbation parameter. Second, in order to compute the connection formulas we have to develop perturbation theory for trajectories that are similar to the trajectories of angular motion of the asymptotic pendulum away from the separatrix.


The capture into resonance corresponds to crossing the separatrix of the perturbed equation of nonlinear resonance. When the dynamical system crosses the separatrix, parameters such as the action and the phase shift of the leading-order term of the asymptotic expansion change [23, 126]. The equations and their solutions when crossing the separatrix were studied, for example, in Ref. [62]. The value of the separatrix splitting is defined by Melnikov's integral [117]. Formulae for the solution which are uniformly suitable for all intervals allow us to study the asymptotic behaviour of particular trajectories. Such formulae for problems of auto-resonance were derived in Refs. [87, 92, 93, 95] by matching of asymptotic expansions. We consider separatrices of two types. The first is homoclinic: such trajectories begin at a singular point of the equation for the leading-order term of the asymptotic expansion and finish at the same point as t → ∞. The second is heteroclinic: such trajectories begin as t → −∞ and finish as t → ∞ at different singular points of the same equation. It is the bifurcations of the asymptotic expansions near homoclinic and heteroclinic trajectories that are the objects of study in this chapter. In the simplest case the transition occurs near a singular point of a trajectory of the unperturbed equation. Such a process is usually considered as motion through a region with chaotic dynamics. Such dynamical processes are usually studied through the dynamics of classes of trajectories and their statistical behaviour, instead of a single trajectory and its behaviour. The difference between the statistical approach and the analytic approach for one trajectory is explained by the complicated and unstable behaviour of a single trajectory near the singular point. Here we study the long-time dynamics of a single trajectory close to the separatrices.

It turns out that asymptotic methods allow us not only to consider the complicated dynamics in detail but also to construct discrete dynamic maps and to obtain the asymptotic behaviour on a fractal set of the asymptotic parameter. The study of the perturbed separatrix solution is a long story. Such problems in the dynamics of three bodies were considered by Poincaré [137]. Later Melnikov [117] calculated the gap between perturbed separatrices. Results concerning chaotic behaviour for three bodies were obtained by Alekseev [3]. Filonenko et al. [39] introduced the definition of the separatrix map in canonical variables for Hamiltonian systems. The separatrix map is derived using the first-order approximation of perturbation theory; see, for example, the review article [134]. The problem of perturbation of the separatrix is important for the capture into a resonance. Neishtadt [126] obtained the value of the measure of trajectories captured into the resonance after crossing the separatrix. Timofeev [148] studied the behaviour of separatrix solutions near the saddle point. Changes of action-angle variables when crossing a separatrix were considered in several works [23]. Diminnie and Haberman [30] studied separatrix crossing near the saddle-centre and pitchfork bifurcation points. The full asymptotic expansions for separatrix crossing near the saddle-centre bifurcation were obtained in Refs. [87, 92]. Here we develop a method that differs from the works [23, 134]. The first question is the behaviour of the separatrix curves. The curves have two threads tending to infinity


and two separatrices that begin at one saddle and tend to another one, see eq. (4.1), instead of the figure-eight separatrices considered in the works [23, 134]. This leads to a complicated set of perturbation parameters of Cantor type [89]. Second, we construct the asymptotic approximation as a series in powers of ε. As a result we obtain the long-time dynamics of perturbation theory near the separatrix. The basis of this section is a generalization of the results of work [89] to a nonlinear oscillator of general form. The main technique used in this chapter is the method of matching of asymptotic expansions. We construct asymptotic expansions with different behaviour in overlapping intervals and then match the expansions to each other. One of the expansions is valid near the separatrices and the other is used near the singular point. The neighbourhood of the singular point contains the germs of four separatrices. Two of these are germs of bounded separatrices and the other two are unbounded and tend to infinity. When we match the asymptotic expansions for trajectories we find that some of them match to the bounded separatrices and others match to the unbounded separatrices. Therefore gaps appear in the intervals of parameters for oscillating asymptotics.

4.1 Nonlinear Oscillator Near Separatrix

To study the long-time dynamics in a small neighbourhood of the separatrix we consider the equation of a nonlinear oscillator without dissipation:

u″ + f(u) = ε cos(ωt + φ_0).   (4.2)

Here t is the independent variable, f(u) = F′(u), ε is a small positive parameter, ω and φ_0 are real constants. The unperturbed equation has a conservation law:

(u′)^2/2 + F(u) = E.

As usual, the value E corresponds to the full energy of the unperturbed oscillator. We suppose that the phase plane of the unperturbed equation contains a bounded or unbounded sequence of saddles S_k = (u_k, 0), where k is an integer and F(u_k) = E_s for all k. The function f(u) has a zero of first order at the saddle points and the saddles are not confluent: f′(u_k) ≠ 0. We suppose that there exists a separatrix for each pair of neighbouring saddles with the same value of energy. Such a separatrix is a parametric curve L = {(U(t), U′(t)), t ∈ (−∞, ∞)}, where U(t) is a solution of eq. (4.2) with ε = 0 and (U(t), U′(t)) → (u_k, 0) as t → −∞, (U(t), U′(t)) → (u_{k+1}, 0) as t → ∞. Such trajectories are called heteroclinic. The results for heteroclinic trajectories carry over to homoclinic curves if all saddles S_k coincide. Equations with such behaviour are typical for mathematical models of mechanics and physics. If all saddles coincide, then a typical equation is one with quadratic nonlinearity:

u″ + u − u^2 = ε cos(ωt + φ_0)


or with cubic nonlinearity and two minima of the potential function F(u):

u″ − u + au + u^3 = ε cos(ωt + φ_0),   a ∈ ℝ.

The case of two saddles corresponds to Duffing's oscillator:

u″ + 2u − 2u^3 = ε cos(ωt + φ_0).   (4.3)

An unbounded set of saddles on the phase plane corresponds to the pendulum equation:

u″ + sin(u) = ε cos(ωt + φ_0).   (4.4)

For the asymptotic theory the typical regions are the neighbourhoods of the separatrix far from the saddles and the small neighbourhoods of the saddles. The constructions for the separatrices and for the saddles are universal in the general case. Therefore a nonlinear oscillator of general form is used as the basic example of the theory. This model is widely used, and the obstacles to studying the behaviour of its solutions are clear. Moreover, this model is defined by a nonlinear equation with a perturbation of general form. The set of solutions of the nonlinear oscillator is layered into thin resonant and non-resonant manifolds. Such intervals are everywhere dense in the set of parameters near the separatrix. As a result the nonlinear oscillator exhibits complex dynamics, which is called deterministic chaos.

4.1.1 Change to Simple Form

Here we construct an asymptotic solution of eq. (4.2). Our aim is special solutions that oscillate for a long time near the separatrices of the unperturbed equation. We define the asymptotic solution as an asymptotic series of the form

U(t, ε) = ∑_{n=0}^{∞} ε^n U_n(t),   (4.5)

which gives a remainder term o(ε^m), ∀m ∈ ℕ, for all t ∈ (−T_0(ε), T_1(ε)) after substitution into eq. (4.2). Here T_i(ε) → ∞, i = 0, 1, as ε → 0. Zero is an accumulation point of the set of values of the parameter ε. In this problem the set of values of ε depends on the parameters of the solution and has a complicated structure. Below we consider the behaviour of the solution between two saddles of the sequence S_n, (u_k, 0) and (u_{k+1}, 0). For simplification it is convenient to make the change

u → −1 + 2(u − u_k)/(u_{k+1} − u_k),   F(u) → F(−1 + 2(u − u_k)/(u_{k+1} − u_k)).

As a result of this change the saddles (u_k, 0) and (u_{k+1}, 0) are placed at the points (−1, 0) and (1, 0), respectively.
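A one-line sketch of this normalization; the saddle positions u_k = 0.3, u_{k+1} = 2.1 are made-up numbers used only to check that the endpoints map to ∓1:

```python
def normalize(u, uk, uk1):
    """Affine change mapping the saddles u_k, u_{k+1} to -1 and 1."""
    return -1.0 + 2.0 * (u - uk) / (uk1 - uk)

uk, uk1 = 0.3, 2.1                       # illustrative saddle positions
print(normalize(uk, uk, uk1), normalize(uk1, uk, uk1))
```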


The leading-order term of the asymptotic solution is a separatrix of the unperturbed equation. This separatrix is defined by inversion of the integral

t + t_0 = ∫_0^{u_+(t)} dy/√(2(E_s − F(y))),   t ∈ ℝ.   (4.6)

Due to the symmetry of the unperturbed equation with respect to the change t → −t, one more separatrix can be found. This separatrix lies in the lower half-plane:

t + t_0 = ∫_0^{u_−(t)} dy/(−√(2(E_s − F(y)))),   t ∈ ℝ.   (4.7)

The derivative of this solution is negative and u_− → ±1 as t → ∓∞. The separatrix has only one parameter, t_0, which is a shift along the trajectory; the trajectory itself is defined by the value E = E_s. The derivative of the solution defined by formula (4.6) is positive and u_+ → ±1 as t → ±∞.

The time of motion along the separatrix is unbounded. However, a finite time is necessary to cross from the ε-neighbourhood of the saddle (−1, 0) to the ε-neighbourhood of the saddle (1, 0). This time is

T(ε) = ∫_{−1+ε}^{1−ε} dy/√(2(E_s − F(y))).

To evaluate T(ε) it is convenient to rewrite the radicand in the neighbourhood of y = −1. Since F(y) has a local maximum at y = −1, the radicand can be represented in the form

E_s − F(y) = (λ_−^2/2)(y + 1)^2 − f_2^−(y + 1)^3 + ⋯
  = (λ_−^2/2)(y + 1)^2 (1 − (2f_2^−/λ_−^2)(y + 1) + O((y + 1)^2)).

Here λ_− > 0. The same formula holds in the neighbourhood of y = 1:

E_s − F(y) = (λ_+^2/2)(y − 1)^2 − f_2^+(y − 1)^3 + ⋯
  = (λ_+^2/2)(y − 1)^2 (1 − (2f_2^+/λ_+^2)(y − 1) + O((y − 1)^2)).

Here λ_+ > 0. An integration yields

T(ε) ∼ −(1/λ_− + 1/λ_+) log(ε) + T̃(ε),

where T̃(ε) is a regular function as ε → 0.
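This logarithmic law can be checked numerically. A sketch for Duffing's oscillator (4.3), where F(u) = u^2 − u^4/2, E_s = 1/2 and λ_− = λ_+ = 2, so that 1/√(2(E_s − F(y))) = 1/(1 − y^2); the concrete model, step counts and sample values of ε are assumptions made for illustration:

```python
import math

def transit_time(eps, n=200000):
    """Time to cross from the eps-neighbourhood of (-1,0) to that of (1,0)
    for the unperturbed Duffing oscillator (midpoint quadrature)."""
    a, b = -1.0 + eps, 1.0 - eps
    h = (b - a) / n
    return sum(h / (1.0 - (a + (i + 0.5) * h) ** 2) for i in range(n))

lam = 2.0
for eps in (1e-2, 1e-3, 1e-4):
    leading = -(1.0 / lam + 1.0 / lam) * math.log(eps)   # -(1/l- + 1/l+) log(eps)
    print(eps, transit_time(eps), leading)
```

For this model the difference between T(ε) and the leading term tends to the constant log 2, the regular part T̃.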


4.1.2 Qualitative Behaviour and Numerical Analysis

In the general case there is an alternative for continuing a solution which is close to the separatrix (Figure 4.1) to large times. When t = O(−ln(ε)) the trajectory of the solution approaches the saddle point (1, 0). Here the trajectory has a bifurcation: it can turn to the lower separatrix and follow along it to the other saddle point (−1, 0), or it can cross into the neighbourhood of the separatrix of the unperturbed equation which goes from (1, 0) to (+∞, +∞). The same alternative exists near the saddle (−1, 0). One can see this in Figure 4.1. Here we consider the behaviour of asymptotic solutions whose trajectories do not tend to infinity but oscillate near the separatrices between the saddles (−1, 0) and (1, 0). For this purpose, a set of parameters of the solution and a set for the perturbation parameter ε should be found. These sets have a complicated structure. For example, one can consider the set of values of ε for which a numerical solution oscillates between

Figure 4.1. Solutions of Duffing's oscillator (4.3) with initial condition (u, u′)|_{t=0} = (−1, 0) for different values of the parameter: ε = 0.01, ε = 0.005, ε = 0.0015. These solutions diverge near the saddle points on the phase plane (u, u′). The left panel shows the trajectories near (−1, 0) and the right panel shows the neighbourhood of (1, 0).

Figure 4.2. The left panel shows the dependence on ε of the lifetime of oscillatory solutions of the perturbed Duffing equation (4.3), where ω = 1, φ_0 = 0 and the initial conditions are u(0) = 0, u′(0) = 1. These solutions were calculated for t ∈ [0, 100] and values of the perturbation parameter ε ∈ [0.001, 0.1] with step Δε = 0.000099. The right panel shows a scaled neighbourhood of the peak at ε = 0.08, for ε ∈ [0.08, 0.0802] with step Δε = 0.0000002. The figure was obtained by interpolating with horizontal intervals for values of ε between the calculated grid values.


the saddles at least for a pre-defined time. The results of these calculations can be seen in Figure 4.2. For moderate times, (1/λ_−) log(ε) < t < −(1/λ_+) log(ε), a numerical solution is close to the separatrix of the unperturbed equation for any small ε. For larger times, some of the solutions leave the neighbourhood of the bounded separatrix. There exist values of ε for which the solution remains bounded; such values can be seen in the left panel as peaks. For some of these values of ε the trajectories leave the neighbourhood of the bounded separatrices at large times. In the right panel of Figure 4.2 a thin set near the peak at ε = 0.08 can be seen.
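The lifetime experiment of Figure 4.2 can be sketched as follows (plain RK4 integration). The values ω = 1, φ_0 = 0 and the initial data u(0) = 0, u′(0) = 1 follow the caption of Figure 4.2; the escape threshold |u| > 1.2 and the sample value ε = 0.05 are illustrative assumptions:

```python
import math

def lifetime(eps, tmax=100.0, h=0.001):
    """First time at which the trajectory of (4.3) leaves |u| <= 1.2,
    or tmax if it is still oscillating in the well."""
    def rhs(u, v, t):
        return v, -2.0 * u + 2.0 * u ** 3 + eps * math.cos(t)
    u, v, t = 0.0, 1.0, 0.0           # starts on the unperturbed separatrix
    while t < tmax:
        k1u, k1v = rhs(u, v, t)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v, t + 0.5 * h)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v, t + 0.5 * h)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v, t + h)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += h
        if abs(u) > 1.2:              # trajectory has left the well
            return t
    return tmax

print(lifetime(0.05))
```

Scanning eps over a grid, as in the caption of Figure 4.2, reproduces the peak structure of the lifetime plot.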

4.2 Asymptotic Solution Close to Separatrix

In this section we construct an asymptotic solution which is close to the separatrix of the unperturbed equation, u = u_±(t). We also construct an asymptotic solution which is valid near the saddle points u = ±1. After that we match these asymptotic solutions and obtain a uniform asymptotic solution covering both the separatrix and the saddle points.

4.2.1 Construction of Germ Asymptotic Expansion

Let us construct the asymptotic solution as a series:

u(t, ε) = ∑_{n=0}^{∞} ε^n U_n(t).   (4.8)

The leading-order term is the separatrix U_0(t), defined by the formula

t − t_0 = ∫_0^{U_0(t)} dy/√(2(E_s − F(y))).

The function f(y) has a Taylor expansion at y = ±1. Let us consider the neighbourhood of y = 1 as an example:

f(y) = −λ_+^2 (y − 1) + ∑_{k=2}^{∞} f_k^+ (y − 1)^k,   λ_+ > 0.

The solution can be represented as a series of exponents in the neighbourhood of U_0 = 1:

U_0(t) ∼ 1 + ∑_{k=1}^{∞} c_k^+ e^{−kλ_+ t},   t → ∞.   (4.9)

The exponent λ_+ is obtained by solving the linearized equation in the small neighbourhood of U_0 = 1. The constant c_1^+ is the parameter of the separatrix. The coefficients c_k^+ for k > 1 depend on c_1^+ and can be defined explicitly.


Let the following formula be valid in the neighbourhood of y = −1:

f(y) = −λ_−^2 (y + 1) + ∑_{k=2}^{∞} f_k^− (y + 1)^k,   λ_− > 0.

The asymptotic expansion of the separatrix solution near u = −1 can be constructed in the same way:

U_0(t) ∼ −1 + ∑_{k=1}^{∞} c_k^− e^{kλ_− t},   t → −∞.   (4.10)

One can obtain connection formulae for the parameters c_1^− and c_1^+ of the separatrix expansion by using the regular part of the integral T(ε). Let us consider the asymptotic behaviour of the integral as u → 1 − 0:

∫_0^{u} [1/√(2E_s − 2F(y)) − 1/(λ_+(1 − y))] dy + ∫_0^{u} dy/(λ_+(1 − y)) = t − t_0,

−(1/λ_+) log(1 − u) + T_+ ∼ t − t_0,

u ∼ 1 − e^{λ_+(T_+ + t_0)} e^{−λ_+ t},   c_1^+ = −e^{λ_+(T_+ + t_0)}.

Here

T_+ = ∫_0^{1} [1/√(2E_s − 2F(y)) − 1/(λ_+(1 − y))] dy.

The same asymptotic behaviour holds as u → −1 + 0:

(1/λ_−) log(1 + u) + T_− ∼ t − t_0,

u ∼ −1 + e^{−λ_−(T_− + t_0)} e^{λ_− t},   c_1^− = e^{−λ_−(T_− + t_0)},

where

T_− = ∫_0^{−1} [1/√(2E_s − 2F(y)) − 1/(λ_−(1 + y))] dy.

Therefore we obtain the connection formulae:

U_0(t) ∼ −1 + c_1^− exp(λ_− t),   t → −∞,
U_0(t) ∼ 1 + c_1^+ exp(−λ_+ t),   t → ∞.
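The connection formula can be checked on Duffing's oscillator (4.3), for which the separatrix is U_0(t) = tanh(t) (so t_0 = 0, λ_+ = 2 and the expansion tanh t = 1 − 2e^{−2t} + ⋯ gives c_1^+ = −2). This concrete model is an assumption used only for illustration; here 1/√(2E_s − 2F(y)) = 1/(1 − y^2):

```python
import math

def T_plus(n=200000):
    """Midpoint quadrature of the regularized integral T_+ for Duffing (4.3)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        s += h * (1.0 / (1.0 - y * y) - 1.0 / (2.0 * (1.0 - y)))
    return s

Tp = T_plus()                        # analytically equal to (1/2) ln 2 here
c1 = -math.exp(2.0 * (Tp + 0.0))     # c_1^+ = -exp(lambda_+ (T_+ + t_0))
print(Tp, c1)
```

The computed c_1^+ agrees with the coefficient −2 read off from tanh t ∼ 1 − 2e^{−2t}.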

4.2.2 Behaviour of Correction Terms in the Neighbourhood of the Separatrix

The Cauchy problems for the higher-order terms of the asymptotic expansion can be written in the following form:

U_n″ + f′(U_0) U_n = f_n,   (4.11)
U_1|_{t=0} = y_0,   U_1′|_{t=0} = y_1,   U_n|_{t=0} = U_n′|_{t=0} = 0, n > 1,


where f_1 = cos(ωt + φ_0). If n > 1, then f_n has nth order with respect to the u_{j_k} with |j| = n, where j = {j_1, …, j_{n−1}} is a multi-index. The homogeneous part of eq. (4.11),

v″ + f′(U_0) v = 0,

has two linearly independent solutions:

v_1 = dU_0/dt,   v_2 = −(dU_0/dt) ∫^{U_0(t)} dy/(2E_s − 2F(y))^{3/2}.

The Wronskian of these solutions equals 1. This can be shown by direct calculation, using the formula

v_1 ≡ dU_0/dt = √(2E_s − 2F(U_0)).

Let us consider the behaviour of v_1 and v_2 as t → ±∞. The asymptotic behaviour of v_1 is obtained by differentiating in t the asymptotic formula for U_0 as t → ±∞:

v_1 ∼ ∓λ_± ∑_{k=1}^{∞} k c_k^± e^{∓kλ_± t}.

It is important to note that the asymptotic series for v_1 contains decreasing exponents; therefore v_1 is exponentially small as t → ±∞. The asymptotic behaviour of v_2 as t → ±∞ is

v_2 ∼ b_1^± e^{±λ_± t} + ∑_{k=0}^{∞} P_k^±(t) e^{∓kλ_± t},   b_1^± = −1/(2λ_±^2 c_1^±).

Here P_k^±(t) is a polynomial of degree less than k. The function v_2(t) is exponentially growing as t → ±∞. Solutions of eq. (4.11) can be represented as

U_1 = v_1(t) ∫_0^t f_1(t̃) v_2(t̃) dt̃ − v_2(t) ∫_0^t f_1(t̃) v_1(t̃) dt̃ + y_0 v_1(t) + y_1 v_2(t),   (4.12)

U_n = v_1(t) ∫_0^t f_n(t̃) v_2(t̃) dt̃ − v_2(t) ∫_0^t f_n(t̃) v_1(t̃) dt̃,   n > 1.   (4.13)
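The variation-of-constants structure of eqs (4.12)–(4.13) can be verified on a model equation. Here U″ − U = cos(t) is an illustrative stand-in for U_n″ + f′(U_0)U_n = f_n, with v_1 = e^t and v_2 = e^{−t}/2 normalized so that v_1′v_2 − v_1v_2′ = 1; the closed-form solution with zero initial data is U(t) = −cos(t)/2 + cosh(t)/2:

```python
import math

def U(t, n=20000):
    """U = v1 * int_0^t f v2 - v2 * int_0^t f v1 for f(t) = cos(t),
    v1 = e^t, v2 = e^{-t}/2 (midpoint quadrature)."""
    v1 = math.exp
    v2 = lambda s: 0.5 * math.exp(-s)
    h = t / n
    i1 = sum(h * math.cos((k + 0.5) * h) * v2((k + 0.5) * h) for k in range(n))
    i2 = sum(h * math.cos((k + 0.5) * h) * v1((k + 0.5) * h) for k in range(n))
    return v1(t) * i1 - v2(t) * i2

exact = -0.5 * math.cos(1.0) + 0.5 * math.cosh(1.0)
print(U(1.0), exact)
```

The quadrature reproduces the closed-form value, confirming that the formula yields the solution with U(0) = U′(0) = 0.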

In the neighbourhood of the saddle (1, 0) it is useful to write the explicit formulas for the correction terms as asymptotic expansions with respect to t as t → ∞. It is easy to see that the asymptotic expansion of the first-order term can be represented as a series in powers of exp(λ_+ t) with coefficients of the form t^m cos(nωt) and t^k sin(lωt), where k, l, m, n ∈ ℕ.


Lemma 3. Let, as t → ±∞,

f_n ∼ ∑_{k=∓∞}^{±n} e^{±λ_± k t} ( ∑_{l=0}^{2(n−1)} t^l [ ∑_{m=0}^{n} (F_{k,l,m}^± cos(mωt) + H_{k,l,m}^± sin(mωt)) ] );

then, as t → ±∞,

U_n(t) ∼ ∑_{k=∓∞}^{±n} e^{±λ_± k t} ( ∑_{l=0}^{2n} t^l [ ∑_{m=0}^{n+1} (Ũ_{k,l,m}^± cos(mωt) + Ṽ_{k,l,m}^± sin(mωt)) ] ).

To prove this one substitutes the asymptotic expansions of v_{1,2} and the formula for f_n into eq. (4.13) and integrates.

These asymptotic formulae are useful for finding the parameters of the nth correction term. Each correction term is a solution of a second-order differential equation and therefore contains two parameters. These parameters can be obtained from the coefficients of the asymptotics of the linearly independent solutions of the homogeneous ordinary differential equation. These coefficients are Ũ_{−1,0,0}^+ and Ũ_{1,0,0}^+ in the asymptotic expansion as t → +∞, and Ũ_{1,0,0}^− and Ũ_{−1,0,0}^− as t → −∞.

Define A_n^± = Ũ_{∓1,0,0}^±/(∓λ_± c_1^±) and B_n^± = Ũ_{±1,0,0}^± · 2λ_±^2 c_1^±. These parameters define the solution as t → ±∞. The parameters change during the transition from the neighbourhood of one saddle point (−1, 0) to the neighbourhood of the other saddle point (1, 0):

ΔA_n = A_n^+ − A_n^−,   ΔB_n = B_n^+ − B_n^−.

The change of the parameters can be transferred to changes of the corrections of the leading-order term of the asymptotic expansion. The parameter A_n defines the shift along the trajectory, and the parameter B_n defines the shift of the energy of the leading-order term, that is, the shift in the direction transversal to the separatrix. In the first-order correction such a shift defines the change of energy during one half-period and the properties of the motion during the next half-period; the formula for its calculation is therefore well known. This shift ΔB_1 is defined by Melnikov's integral:

ΔB_1 = B_1^+ − B_1^− = ∫_{−∞}^{∞} cos(ωt + φ_0) v_1(t) dt
  = cos(φ_0) ∫_{−∞}^{∞} v_1(t) cos(ωt) dt − sin(φ_0) ∫_{−∞}^{∞} v_1(t) sin(ωt) dt.

Here it is useful to introduce the sine and cosine Fourier transforms of v_1(t):

v̂_1^c(ω) = ∫_{−∞}^{∞} v_1(t) cos(ωt) dt,   v̂_1^s(ω) = ∫_{−∞}^{∞} v_1(t) sin(ωt) dt.
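For the Duffing case (4.3), where v_1(t) = U_0′(t) = sech^2(t) is even, the sine transform vanishes, so ΔB̃_1 = |v̂_1^c(ω)|, and the cosine transform has the classical closed form v̂_1^c(ω) = πω/sinh(πω/2). A numerical sketch (the concrete model and truncation of the integral are assumptions for illustration):

```python
import math

def v1c(omega, T=20.0, n=200000):
    """Cosine Fourier transform of sech^2(t), midpoint rule on [-T, T]."""
    h = 2.0 * T / n
    s = 0.0
    for k in range(n):
        t = -T + (k + 0.5) * h
        s += h * math.cos(omega * t) / math.cosh(t) ** 2
    return s

omega = 1.0
print(v1c(omega), math.pi * omega / math.sinh(math.pi * omega / 2.0))
```

The truncation at T = 20 is harmless because sech^2(t) decays like e^{−2|t|}.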


Define γ = arctan(v̂_1^s(ω)/v̂_1^c(ω)), so that

cos(γ(ω)) = v̂_1^c(ω)/√((v̂_1^c(ω))^2 + (v̂_1^s(ω))^2),   sin(γ(ω)) = v̂_1^s(ω)/√((v̂_1^c(ω))^2 + (v̂_1^s(ω))^2).

Then

ΔB_1 = ΔB̃_1 cos(φ_0 + γ(ω)),   ΔB̃_1 = √((v̂_1^c(ω))^2 + (v̂_1^s(ω))^2).

A formula for ΔA_1 is not usually given. To calculate it one should regularize the integral

I_T = ∫_{−T}^{T} cos(ωt + φ_0) v_2(t) dt = cos(φ_0) ∫_{−T}^{T} v_2(t) cos(ωt) dt − sin(φ_0) ∫_{−T}^{T} v_2(t) sin(ωt) dt.

The limit as T → ∞ does not exist, because the function v_2 grows exponentially in |t|. To regularize the integral, its asymptotic behaviour as T → ∞ should be constructed and the term that does not depend on T should be extracted. For the integral in the first correction term this regularization is equivalent to subtracting the growing terms of the integrand:

ΔA_1 = Reg_{T→∞}[I_T] = cos(φ_0) ∫_{−T}^{T} cos(ωt) (v_2(t) − b_1^− e^{−λ_− t} − b_1^+ e^{λ_+ t}) dt
  − sin(φ_0) ∫_{−T}^{T} sin(ωt) (v_2(t) − b_1^− e^{−λ_− t} − b_1^+ e^{λ_+ t}) dt.

Denote

Reg_{T→∞} ∫_{−T}^{T} cos(ωt) v_2(t) dt = w_1,   Reg_{T→∞} ∫_{−T}^{T} sin(ωt) v_2(t) dt = w_2,

ψ = arccos(w_1/√(w_1^2 + w_2^2)),   ΔÃ_1 = √(w_1^2 + w_2^2),

ΔA_1 = ΔÃ_1 cos(φ_0 + ψ).

The series (4.8) is asymptotic as long as ε U_{n+1}/U_n ≪ 1. This condition bounds the independent variable: ε exp(λ_+ t) ≪ 1 and ε exp(−λ_− t) ≪ 1, that is,

ε^{1/λ_−} ≪ e^t ≪ ε^{−1/λ_+},

and the parameters of the solution satisfy A_n^+ ≪ ε^{−1}, B_n^+ ≪ ε^{−1} for all n ∈ ℕ. This result can be formulated as a theorem:

Theorem 12. The asymptotic solution in the form (4.8) is valid as ε^{1/λ_−} ≪ e^t ≪ ε^{−1/λ_+}.

4.2.3 Asymptotic Expansion Near Saddle Point

Let us construct the solution in small neighbourhoods of the saddle points in the following form:

u(t, ε) = ±1 + ∑_{n=1}^{∞} ε^{n/2} u_n^±(τ),   τ = t + τ_0^±.   (4.14)

The coefficients of the expansion are solutions of the equations

u_n^±″ − λ_±^2 u_n^± = f_n^±,

where f_1^± ≡ 0, f_2^± = cos(ωτ − ωτ_0 + φ_0) ± 3f_2 (u_1^±)^2, and f_n^± is a polynomial of nth power defined by the coefficients of the asymptotic expansion with j = j_1, …, j_{n−1} such that |j| = n. In the general case f_n^± is a sum of powers of e^τ, sines, cosines and the independent variable τ. The general formula for the nth correction term is

u_n^± = α_n^± exp(−λ_± τ) + β_n^± exp(λ_± τ) + w_n^±(τ).   (4.15)

Here w_n^±(τ) does not contain terms of the form C_1 e^{λ_± τ} and C_2 e^{−λ_± τ} for any constants C_1, C_2. The first and second correction terms are

u_1^± = α_1^± exp(−λ_± τ) + β_1^± exp(λ_± τ);

u_2^± = α_2^± exp(−λ_± τ) + β_2^± exp(λ_± τ) + ((α_1^±)^2/(3λ_±^2)) f_2^± exp(−2λ_± τ) + ((β_1^±)^2/(3λ_±^2)) f_2^± exp(2λ_± τ)
  − (2/λ_±^2) α_1^± β_1^± f_2^± − (1/(λ_±^2 + ω^2)) cos(ωτ + φ_0 − ωτ_0).
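The non-resonant forced term can be checked directly: for w″ − λ^2 w = cos(ωτ) a particular solution is w = −cos(ωτ)/(λ^2 + ω^2), which is the form of the last term of u_2^± above. The values λ = 2 and ω = 1 are illustrative assumptions:

```python
import math

lam, omega = 2.0, 1.0
w = lambda tau: -math.cos(omega * tau) / (lam ** 2 + omega ** 2)

# residual of w'' - lam^2 * w - cos(omega*tau) via central differences
tau, d = 0.7, 1e-3
wpp = (w(tau + d) - 2.0 * w(tau) + w(tau - d)) / d ** 2
res = wpp - lam ** 2 * w(tau) - math.cos(omega * tau)
print(res)
```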

For large τ the nth correction terms can be estimated as

u_n^± = O(exp(λ_± n |τ|)),   n > 1,   τ → ±∞.

The interval of usability of the asymptotic expansion is defined by the inequality ε^{1/2} u_{n+1}^±/u_n^± ≪ 1 as τ → ±∞. It yields ε^{1/2} e^{±λ_± τ} ≪ 1, or e^{|τ|} ≪ ε^{−1/(2λ_+)}. The result of this section is as follows.

Lemma 4. There exists an asymptotic solution in the form (4.14) as ε → 0, where u_n(τ) is given by eq. (4.15) and e^{|τ|} ≪ ε^{−1/(2λ_+)}.


4.2.3.1 Matching of the Asymptotic Expansions Near the Saddle Point (1, 0)
Here we match the parameters of the asymptotic expansions U(t, ε) and u(τ, ε). For the parameters with even indices we obtain α_{2n}^+ = 0, β_{2n}^+ = 0. The recurrence formulae for the correction terms with odd indices are

−λ_+ c_1^+ exp(−λ_+ t) = ε^{1/2} α_1^+ exp(−λ_+ τ), α_1^+ = −λ_+ c_1^+,
τ = t + τ_0^+, τ_0^+ = (1/(2λ_+)) ln(ε).

It yields

B_1^+ exp(λ_+ t) = β_1^+ ε^{−1/2} exp(λ_+ τ), β_1^+ = (1/(2λ_+² c_1^+)) B_1^+.

For higher-order terms, as t → ∞ we get

−λ_+ c_1^+ A_n^+ exp(−λ_+ t) = ε^{1/2} α_{2n+1}^+ exp(−λ_+ τ), α_{2n+1}^+ = −λ_+ c_1^+ A_n^+;
B_n^+ exp(λ_+ t) = β_{2n−1}^+ ε^{−1/2} exp(λ_+ τ), β_{2n−1}^+ = (1/(2λ_+² c_1^+)) B_n^+, n ∈ ℕ.

The sign of β_1^+ is determined by φ_0:

β_1^+ = (B_1^− + ΔB̃_1 cos(φ_0 + γ))/(2λ_+² c_1^+).

If β_1^+ > 0, that is, cos(φ_0 + γ(ω)) > −sgn(c_1^+) B_1^−/ΔB̃_1, then the asymptotic solution (4.3) is unbounded; in the other case, if cos(φ_0 + γ(ω)) < −sgn(c_1^+) B_1^−/ΔB̃_1, then the asymptotic solution (4.3) is close to the separatrix which goes from the right saddle to the left saddle. It yields:

Theorem 13. There exists an asymptotic solution of eq. (4.2): in the form (4.8) when ε^{−1/(2λ_+)} ≪ e^t ≪ ε^{−1/λ_+}, and in the form (4.14) as ε^{−1/(2λ_+)} ≪ e^t ≪ ε^{−3/(2λ_+)}.

4.2.4 Asymptotic Solution in the Neighbourhood of the Lower Separatrix
Let β_1^+ < 0. Then the leading-order term of the asymptotic expansion is the lower separatrix, which goes from the right saddle (1, 0) to the left saddle (−1, 0). A formula for the leading-order term is

t_1 = ∫_0^{u_0(t_1)} dy/√(E_s − F(y)).

An asymptotic expansion that is valid close to the lower separatrix has the form:

u(t, ε) = u_0(t_1) + ∑_{n=1}^{∞} ε^n u_n(t_1). (4.16)

4 Nonlinear oscillator in potential well

It is easy to see that the asymptotic solution (4.16) has the same form as the asymptotic solution near the upper separatrix, with the variable t_1 instead of t:

u_0 ∼ ±1 + ∑_{k=1}^{∞} c_k^± e^{∓kλ_± t_1}, t_1 → ±∞.

Then the pair of linearly independent solutions of the linearized equation has the following asymptotic expansions; their Wronskian equals 1:

v_1 ∼ ∓λ_± ∑_{k=1}^{∞} k c_k^± e^{∓kλ_± t_1}, t_1 → ±∞,
v_2 ∼ −b_1^± e^{±λ_± t_1} − ∑_{k=0}^{∞} P_k(t_1) e^{∓kλ_± t_1}, b_1^± = ±1/(2λ_±² c_1^±), t_1 → ±∞.

The equation for the nth correction term has the form:

u_n'' + f'(u_0) u_n = f_n. (4.17)

When t_1 → ±∞:

f_n ∼ ∑_{k=∓∞}^{±(n−1)} e^{kλ_∓ t_1} ( ∑_{l=0}^{2(n−1)} t_1^l [ ∑_{m=0}^{n} (F_{k,l,m}^± cos(mωt_1) + H_{k,l,m}^± sin(mωt_1)) ] ). (4.18)

Here f_n is a polynomial of degree n with respect to the u_k, with ∑ k_j = n. The general formula for solutions of eq. (4.17) as t_1 → ±∞ looks as follows:

u_n(t_1) ∼ ∑_{k=∓∞}^{±n} e^{kλ_∓ t_1} ( ∑_{l=0}^{2n} t_1^l [ ∑_{m=0}^{n+1} (ũ_{k,l,m}^± cos(mωt_1) + ṽ_{k,l,m}^± sin(mωt_1)) ] ).

Let us denote a_n^± = ũ_{∓1,0,0}^±/(λ_± c_1^±) and b_n^± = 2λ_±² c_1^± ũ_{±1,0,0}^±. These parameters define the solution. The change of these parameters along the separatrix is

Δa_n = a_n^− − a_n^+, Δb_n = b_n^− − b_n^+.

The asymptotic solution near the lower separatrix is valid when ε u_{n+1}/u_n ≪ 1, or, equivalently, when ε^{1/λ_+} ≪ e^{t_1} ≪ ε^{−1/λ_−}. As a result we obtain the following statement.

Lemma 5. There exists an asymptotic solution of (4.3) in the form of eq. (4.16) as ε^{1/λ_+} ≪ e^{t_1} ≪ ε^{−1/λ_−}.

4.2.4.1 Matching of the Asymptotic Expansions Near the Saddle Point (1, 0)
The matching procedure near the saddle point and the lower separatrix (4.16) yields:

ε^{1/2} β_1^+ exp(λ_+ τ) = λ_+ c_1^+ exp(λ_+ t_1) and ε^{1/2} α_1^+ exp(−λ_+ τ) = ε a_1^+ exp(−λ_+ t_1)/(2λ_+² c_1^+),


as τ → ∞ and t_1 → −∞. Here β_1^+ < 0. As a result we get

t_1 = τ + (1/(2λ_+)) ln(ε) + (1/λ_+) ln(−β_1^+/(λ_+ c_1^+)).

The following formula can be obtained after substituting t_1 and simplifying: a_1^+ = −ε^{−1/2} α_1^+ 2c_1^+ λ_+² e^{−λ_+ τ} e^{λ_+ t_1}, or a_1^+ = 2λ_+(−β_1^+) α_1^+. Similar steps give formulae for β_3^+ and b_1^+:

b_1^+ = ε^{1/2} β_3^+ exp(λ_+ τ − λ_+ t_1)/(λ_+ c_1^+).

As a result we get

b_1^+ = −β_3^+/β_1^+ and a_n^+ = 2λ_+(−β_1^+) α_{2n+1}^+, b_n^+ = −β_{2n+1}^+/β_1^+.

It is convenient to write the formulas for the asymptotic solution near the upper and lower separatrices together:

a_1^+ = B_1^− + ΔB_1, a_{n+1}^+ = (A_n^− + ΔA_n)(B_1^− + ΔB_1), b_n^+ = (B_{n+1}^− + ΔB_{n+1})/(B_1^− + ΔB_1), n ∈ ℕ;

t_1 = t + (1/λ_+) ln(ε) + (1/λ_+) ln(−(B_1^− + ΔB_1)/(2λ_+³ (c_1^+)²)),

φ = −(ω/λ_+)[ln(ε) + ln(−(B_1^− + ΔB_1)/(2λ_+³ (c_1^+)²))] + ψ.

4.2.4.2 Matching of the Asymptotic Expansions Near the Lower Separatrix and the Saddle Point (−1, 0)
The matching procedure near the saddle point (−1, 0) is the same as above at the point (1, 0). Therefore here we only give the formulas with brief hints:

λ_− c_1^− exp(−λ_− t_1) = ε^{1/2} α_1^− exp(−λ_− τ_1),
(ε/(2λ_−² c_1^−)) b_1^− exp(λ_− t_1) = ε^{1/2} β_1^− exp(λ_− τ_1),
ε λ_− c_1^− a_1^− exp(−λ_− t_1) = ε^{3/2} α_3^− exp(−λ_− τ_1).


It yields formulas for the parameters of the asymptotic expansion near the saddle:

t_1 = −(1/(2λ_−)) ln(ε) + τ_1, τ_0^− = (3/(2λ_−)) ln(ε),
α_1^− = λ_− c_1^−, α_3^− = a_1^−/(λ_− c_1^−), β_1^− = 2λ_−² c_1^− b_1^−.

The matching formulas for higher-order correction terms are

ε^{(2n+1)/2} (α_{2n+1}^− exp(−λ_− τ_1) + β_{2n+1}^− exp(λ_− τ_1)) = ε^n (λ_− c_1^− a_n^− exp(−λ_− t_1) + (1/(2λ_−² c_1^−)) b_n^− exp(λ_− t_1)).

Using these formulas we get formulas for the parameters:

α_{2n+1}^− = a_n^−/(λ_− c_1^−), β_{2n−1}^− = 2λ_−² c_1^− b_n^−.

The change of the coefficient b_1 can be obtained from the following formula:

Δb_1 = b_1^− − b_1^+ = ΔB̃_1 cos(−(ω/λ_+)[ln(ε) + ln(−(B_1^− + ΔB_1)/(2λ_+³ (c_1^+)²))] + ψ).

Let us formulate the statement:

Theorem 14. If β_1^+ < 0, then there exists an asymptotic solution of (4.3): in the form of eq. (4.5) as ε^{−1/λ_−} ≪ e^t ≪ ε^{−1/λ_+}; in the form of eq. (4.14) with lower index “plus” as ε^{−1/(2λ_+)} ≪ e^t ≪ ε^{−1/λ_+}; in the form of eq. (4.16) as 1 ≪ t ≪ −ln(ε); and in the form of eq. (4.14) with lower index “minus” as −(3/4) ln(ε) ≪ t ≪ −(5/4) ln(ε).

4.2.5 Parameters of the Equation and the Cantor Set
The parameter ε plays a central role. It turns out that, by changing the dependence of the parameters of the asymptotic germ on ε, one can construct a formal expansion with an arbitrary number of oscillations near the separatrices.

Corollary 6. The leading-order term of the asymptotic expansion for u(t, ε) depends on B_n^−, n = 1, …, k, when t_k = O(1).

This means that the number of oscillations of the asymptotic solution depends on small corrections of the germ asymptotic expansion.

Corollary 7. There exist sequences {A_n^−}_{n=1}^{∞} and {B_n^−}_{n=1}^{∞} such that the asymptotic expansion oscillates between the saddle points if σ = ω ln(ε) belongs to a set with Cantor structure.

Let us consider asymptotic solutions that oscillate near the bounded separatrices of the unperturbed equation. Each oscillation can be described by using four asymptotic expansions, as follows: the asymptotic expansion near the upper separatrix, the asymptotic expansion near the saddle point (1, 0), the asymptotic expansion near the lower separatrix and the asymptotic expansion near the saddle point (−1, 0). The parameters of these expansions are matched. As a result we get a discrete dynamical system for the parameters of the oscillations. The phase shift ψ_0 and the sets of A^− and B^− are the initial parameters of the discrete system. The values of the parameters of the asymptotic expansions and of the germ of the asymptotics can be obtained from this discrete dynamical system.

Using the matching of the asymptotic expansions one can see that the coefficients B_n are defined by higher-order terms of the previous oscillation. This means that the nth correction term of the (N+1)th oscillation depends on the (n+2)th correction term of the Nth oscillation. As a result, accuracy is lost if only finitely many terms of the asymptotic expansion are used. The set of parameters ε which defines oscillating asymptotic solutions is complicated: every next oscillation cuts a gap out of the interval of values of ε. As a result the set looks like a Cantor set [89].

4.3 Oscillations with External Force into Potential Well

The equation of motion of a particle in a potential well with a small almost periodic external force and dissipation,

u'' + g(u) = ε f(t) − μ u', (4.19)

is an example of a dissipative perturbed system with one and a half degrees of freedom. Here g(u) is the potential force, ε and μ are small parameters, f(t) is a smooth almost periodic function and μ > 0 is the coefficient of dissipation. This equation is a model for the mathematical investigation of nonlinear oscillatory systems with dissipation. Dissipation leads to a decrease of energy and so results in a change of the oscillation period. Under an external force, the period of oscillations of the system passes through resonant values. Locally, in a neighbourhood of a resonance, the solution is determined by the equation of nonlinear resonance. In the dissipationless case with f(t) = cos(ωt), eq. (4.1) is the equation of the mathematical pendulum with external torque [25].

From the mathematical point of view, the solution of eq. (4.19) depends on two parameters, ε ∈ (0, ε_0) and μ ∈ (0, μ_0). On the right-hand side, f(t) is an external force bounded for all t ∈ ℝ: there exists C > 0 such that |f(t)| < C. We suppose that f(t) is an oscillating force: there exists a sequence {t_j}_{j=−∞}^{∞} such that

∀N ∈ ℕ ∃ j_+ and j_− : t_{j_−} < −N and t_{j_+} > N, where f'(t_j) = 0 and sgn(f'(t_j − 0)) ≠ sgn(f'(t_j + 0)).

The average of f(t) is zero:

∫_{−∞}^{∞} f(t) dt = 0.


The full energy of the particle can be found by the following formula:

E(t) = (1/2)(u')² + G(u).

Here the function G(u) is the potential, G'(u) = g(u), and G''(u) exists. Let us take a minimum of the potential as the origin for u: G(0) = min_{u∈ℝ} G(u) = 0. If the particle oscillates, then there exist moments when the speed of the particle equals zero. Such points are called turning points. The sequence of coordinates of the turning points is denoted by {u_j}. This sequence is convenient for tracking the energy of the particle; the energy at the jth point is denoted by E_j. We say that the particle is in resonance with the external force if there exists k ∈ ℕ such that u_{j+k} = u_j for all j ∈ ℤ. The time of motion from u_j to u_{j+k} is

T(E) = ∫_{u_j}^{u_{j+k}} du/u'. (4.20)

When the particle moves between two turning points, the energy changes due to the dissipation and the external force f(t). The dependence of the energy is defined by the differential equation:

dE/dt = ε f(t) u' − μ (u')².

On the right-hand side, the dissipative term −μ(u')² decreases the energy, while the term εf(t)u' can either increase or decrease it. The kinetic energy equals zero at the turning points. The change of energy from one turning point to the next can be obtained from the formula:

E_{j+1} − E_j = G(u_{j+1}) − G(u_j) + ε ∫_{u_j}^{u_{j+1}} f(t) du − μ ∫_{u_j}^{u_{j+1}} u' du.

This formula shows that the following equation holds for resonant particles:

ε ∫_{u_j}^{u_{j+k}} f(t) du = μ ∫_{u_j}^{u_{j+k}} u' du. (4.21)

It is convenient to integrate over the variable t on the left-hand side of eq. (4.21):

ε ∫_{T(E)} f(t) u' dt = μ ∫_{u_j}^{u_{j+k}} u' du.


Let us consider the phase variable over the circle in the phase space (u, u', t):

φ ≡ ω ∫_{t_j}^{t} dt = ω ∫_{u_j}^{u} du/u', ω = 1/T(E). (4.22)

The variable φ is defined by the equation

dφ/dt ≡ (∂φ/∂u) u' + (∂φ/∂E) E',

or

dφ/dt = ω + (E'/ω)(∂ω/∂E) φ + ω ( (∂u/∂E)/u' − (∂u_j/∂E)/u_j' + ∫_{u_j}^{u} E' (∂/∂E)(du/u') ).

Let us substitute E':

dφ/dt = ω + (ε f(t) u' − μ(u')²)(∂ω/∂E)(φ/ω) + ω ( (∂u/∂E)(ε f(t) − μ u') − ε f(t_j)(∂u_j/∂E) + ∫_{u_j}^{u} E' (∂/∂E)(du/u') ).

To obtain the derivative φ' one should differentiate the integral with respect to E. The denominator of the integrand, u' = ±√(2E − 2G(u)), equals zero at the points u_i, i = j, …, j + k. The interval of integration depends on E. Therefore, it is convenient to rewrite the integrand in such a form that the limits of integration coincide with the singular points of the integrand and do not depend on E:

2E − 2G(u) = (1 + y)(1 − y)/G_i²(y, E), u = (u_{i+1} + u_i)/2 + ((u_{i+1} − u_i)/2) y, i = j, …, j + k − 1.

The formula for φ takes the form

∫_{u_j}^{u} du/u' = ∑_{i=j}^{l−1} ∫_{−1}^{1} G_i(y, E) dy/√((1 + y)(1 − y)) + ∫_{−1}^{y(u)} G_l(y, E) dy/√((1 + y)(1 − y)),

where u_l < u < u_{l+1}, j ≤ l < j + k. The function G_i(y, E) has no zeros on the interval (u_i, u_{i+1}). A formula for the change of the phase is

dφ/dt = ω + (E'/ω)(dω/dE) φ + ω (∂u/∂E)(ε f(t) − μ u') − ω ( ∑_{i=j}^{l−1} ∫_{−1}^{1} ∂_E G_i(y, E) E' dy/√((1 + y)(1 − y)) + ∫_{−1}^{y(u)} ∂_E G_l(y, E) E' dy/√((1 + y)(1 − y)) ).
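The substitution above pins the square-root singularities to the fixed endpoints y = ±1, which is exactly what makes Gauss–Chebyshev quadrature applicable. As an illustration in the pendulum case (the closed form T = 4K_ell(m), m = (1+E)/2, is a standard elliptic-integral identity, not a formula from this section):

```python
import math
from scipy.special import ellipk

def period_regularized(E, N=200):
    """Period of u'' + sin(u) = 0 at energy E = (u')^2/2 - cos(u), -1 < E < 1.

    T = 2 * Integral_{-a}^{a} du / sqrt(2E + 2cos u), with a = arccos(-E).
    After u = a*y the integrand has pure 1/sqrt(1-y^2) singularities, so
    Gauss-Chebyshev nodes (which avoid the endpoints) apply:
    Integral_{-1}^{1} h(y)/sqrt(1-y^2) dy ~ (pi/N) * sum h(y_i).
    """
    a = math.acos(-E)
    def h(y):
        # h(y) = a*sqrt((1-y^2)/(2E + 2cos(a*y))), regular on [-1, 1]
        return a*math.sqrt((1 - y*y)/(2*E + 2*math.cos(a*y)))
    s = sum(h(math.cos((2*i - 1)*math.pi/(2*N))) for i in range(1, N + 1))
    return 2*math.pi*s/N

E = 0.3
T_quad = period_regularized(E)
T_exact = 4*ellipk((1 + E)/2)     # scipy uses the parameter m = k^2
print(T_quad, T_exact)
```

Because the regularized integrand is analytic on [−1, 1], the quadrature converges geometrically in N.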


4.3.1 An Asymptotic Problem of a Capture
Let us define the properties of the solution of the unperturbed equation and the properties of the perturbation. The unperturbed equation is

U'' + g(U) = 0. (4.23)

Let g(U) be a smooth function defining a potential well. Define

G(U) = ∫_{U_*}^{U} g(y) dy, min_U G(U) = G(U_*).

Denote by U(t − t_0, E) a general periodic solution of the unperturbed equation. Here t_0 ∈ ℝ and

E = (1/2)(U')² + G(U)

are the parameters of the solution. We confine our consideration to those g(U) for which the period of the solution of eq. (4.23) depends monotonically on the parameter E, that is,

T(E) ≡ ∮_L dU/U' < ∞, L : (1/2)(U')² + G(U) = E, dT/dE ≠ 0. (4.24)

In this section we consider an interval of those values of E for which there are periodic solutions of the form U(t − t_0, E) real analytic in t ∈ ℝ. As a typical example one can consider the equation of the mathematical pendulum. In this latter case we have g(U) = sin(U), U_* = −π/2, G(U) = −cos(U), E ∈ (−1, 1).

An external perturbation is an almost periodic smooth real-valued function [16] with Fourier series

f(t) = ∑_k f_k e^{iω_k t}. (4.25)

We arrange a special decomposition of f(t) into summands of different orders in ε. For n = 1, 2, …, denote by K_n(ε) the set of those indices k for which ε^{n+1} < |f_k| ≤ ε^n. For n = 0, we define k ∈ K_0(ε) if |f_k| > ε. Set f̃_k = ε^{−n} f_k for k ∈ K_n(ε). Then

f(t) = ∑_{K_0(ε)} f_k e^{iω_k t} + ∑_{n=1}^{∞} ε^n ∑_{K_n(ε)} f̃_k e^{iω_k t}.

We introduce

f_n(t) = ∑_{k∈K_n(ε)} f̃_k e^{iω_k t}.
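The decomposition of f(t) by coefficient magnitude can be sketched as follows (the sample coefficients are invented for illustration):

```python
eps = 0.1

# hypothetical Fourier data {k: f_k} of an almost periodic forcing
fourier = {1: 0.5, 2: 0.04, 3: 0.007, 5: 3e-4, 8: 2e-6}

def order_class(fk, eps):
    """Return n such that eps**(n+1) < |fk| <= eps**n; n = 0 collects |fk| > eps."""
    n = 0
    while abs(fk) <= eps**(n + 1):
        n += 1
    return n

K = {}
for k, fk in fourier.items():
    K.setdefault(order_class(fk, eps), []).append(k)

print(K)   # {0: [1], 1: [2], 2: [3], 3: [5], 5: [8]}
```

Note that some classes K_n(ε) may be empty (n = 4 above), and that the classification of each index depends on ε.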


Denote by K(E) the frequency of oscillations of the solution of the unperturbed equation. A parameter value E = E_{m,k}^n satisfying

m K(E_{m,k}^n) = ω_k,

for m ∈ ℤ and k ∈ K_n(ε), is called a resonant level of order n. Since T(E) is monotonic, one can order the resonant levels according to increasing E. We consider functions g(U) and f(t) with the property that neighbouring resonant levels of E of order 0 differ from each other by a quantity much larger than √ε, that is,

min |E_{m_1,k_1}^0 − E_{m_2,k_2}^0| ≫ √ε, (4.26)

provided (m_1, k_1) ≠ (m_2, k_2). This condition is called the asymptotic condition of non-overlap of nonlinear resonances.

Remark 6. Condition (4.26) is deliberately weaker than the well-known non-overlap condition for resonances due to Chirikov.

Remark 7. Condition (4.26) may be violated close to separatrices. Consider, e.g., g(U) ≡ sin U in a neighbourhood of E = 1.
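For the pendulum example the resonant levels of order 0 can be computed explicitly: the frequency is K(E) = π/(2 K_ell((1+E)/2)), with K_ell the complete elliptic integral, so m K(E) = ω_k reduces to scalar root finding. A sketch (the test frequency ω = 0.9 is arbitrary; it also illustrates Remark 7, since the levels accumulate at the separatrix E = 1):

```python
import math
from scipy.special import ellipk
from scipy.optimize import brentq

def freq(E):
    """Oscillation frequency of u'' + sin u = 0 at energy E in (-1, 1)."""
    return math.pi/(2*ellipk((1 + E)/2))   # decreases from 1 to 0

def resonant_level(m, omega):
    """Solve m*freq(E) = omega for E, if a root exists."""
    if not 0 < omega/m < 1:
        return None
    return brentq(lambda E: m*freq(E) - omega, -1 + 1e-12, 1 - 1e-12)

omega = 0.9
levels = [resonant_level(m, omega) for m in (1, 2, 3, 4)]
gaps = [b - a for a, b in zip(levels, levels[1:])]
print("levels:", levels)
print("min gap:", min(gaps))   # shrinks below sqrt(eps) near E = 1 (Remark 7)
```

For any fixed interval of E bounded away from the separatrix only finitely many levels remain, and condition (4.26) can then hold.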

4.4 Non-resonant Regions of Parameter

In this section we construct an asymptotic solution of eq. (4.1) far away from the resonant levels E_{m,k}^0. The solution is built by the method of two scales combined with the Whitham averaging method and Krylov–Bogolyubov expansions for the parameters of the solution. However, in contrast to the standard approach, we accurately trace the onset of singularities in a neighbourhood of the resonant levels. Detailed knowledge of the singularities allows one to determine the applicability region of the Krylov–Bogolyubov asymptotics when approaching resonant levels of the parameter E.

4.4.1 An Equation for Averaged Action
Here we introduce slow and fast variables and derive an equation for the averaged action. Outside the resonant levels of the parameter E the asymptotic solution of eq. (4.1) is built by the method of two scales. We change the independent variable by introducing the fast time variable

S = ψ(θ)/ε + α(θ, ε)

and the slow time variable θ = εt. In the new variables the original variable t is given by

t = (θ/ψ(θ)) S − (θ/ψ(θ)) α(θ, ε).


The differential equation (4.1) takes the form

(ψ')² ∂_S² u + g(u) = ε f(t) − ε(2ψ'α' ∂_S² u + ψ'' ∂_S u + 2ψ' ∂_θ ∂_S u + μψ' ∂_S u) − ε²((α')² ∂_S² u + α'' ∂_S u + 2α' ∂_S ∂_θ u + ∂_θ² u + μα' ∂_S u + μ ∂_θ u), (4.27)

where we denote

D_1 = 2ψ'α' ∂_S² + ψ'' ∂_S + 2ψ' ∂_θ ∂_S + μψ' ∂_S,
D_2 = (α')² ∂_S² + α'' ∂_S + 2α' ∂_S ∂_θ + ∂_θ² + μα' ∂_S + μ ∂_θ,

so that the right-hand side of eq. (4.27) is ε f(t) − ε D_1 u − ε² D_2 u. Let G(y) be a primitive of g(y). On multiplying eq. (4.27) by ∂_S u and integrating in S we get

(ψ')² (∂_S u)²/2 + G(u) = E + ε ∫_{S_0}^{S} (f(t) − D_1 u − ε D_2 u) ∂_S u dS,

where E = E(θ) is a parameter of the solution. Suppose

lim_{S→∞} (1/S) ∫_{S_0}^{S} (f(t) − D_1 u − ε D_2 u) ∂_S u dS = O(ε²); (4.28)

then the principal part in ε of the solution u of eq. (4.27) is given by a function u_0 satisfying

(ψ')² (∂u_0/∂S)² = 2E − 2G(u_0).

This equation for u_0 can be integrated in S by quadratures, namely

S = ψ' ∫_{y_0}^{u_0} dy/√(2E − 2G(y)), y_0 = const,

where y_0 is a constant in the interval u_− ≤ y_0 ≤ u_+ whose bounds u_+ and u_− are roots of the equation 2E − 2G(y) = 0, such that 2E − 2G(y) > 0 for all u_− < y < u_+. Following Ref. [17], we assume for definiteness that u_0(S, E) is an even function of S with zero mean value over the period. We require the period of the function u_0 in the variable S to be fixed, that is,

ψ' ∮ du_0/∂_S u_0 = 2π. (4.29)

In the Krylov–Bogolyubov method, formula (4.29) is regarded as a differential equation for the unknown function ψ(θ). The left-hand side of this equation is not yet fully defined, for the function E(θ) has remained undetermined.


Define the averaged action by the formula

I(θ) = (ψ' + εα') lim_{S→∞} (1/S) ∫_{S_0}^{S} (∂_S u)² dS. (4.30)

Then eq. (4.28) can be essentially simplified to

I' + μI + F(θ, I) = 0, (4.31)

where

F(θ, I) ≡ lim_{S→∞} (1/S) ∫_{S_0}^{S} f(t) ∂_S u dS.

The function E(θ) is determined uniquely through I(θ). We have thus proved the following lemma.

Lemma 6. Assume that E(θ), ψ(θ) and α(θ, ε) satisfy eq. (4.31). Then u(t, ε) ∼ u_0(S, E).

This assertion is a generalization of the well-known Whitham method for periodic solutions to the non-periodic case. Equation (4.31) is neither linear nor autonomous. To study this equation we develop a perturbation theory later. The properties of solutions of this equation differ essentially from those of known solutions obtained by the averaging method. This distinction is due to the non-periodicity in S of the solution of eq. (4.27).

4.4.2 The Substitution of Krylov–Bogolyubov
We look for a solution of eq. (4.27) by the Krylov–Bogolyubov method in the form of a series in powers of the small parameter ε. That is,

The function E(() is determined uniquely through I((). We have thus proved the following lemma. Lemma 6. Assume that E((), 3(() and !((, :) satisfy eq. (4.31). Then u(t, :) ∼ u0 (S, E). This assertion is a generalization of the well-known Whitham method for periodical solutions to the non-periodical case. Equation (4.28) is neither linear nor autonomic. To study this equation we develop a perturbation theory later. The properties of solutions to this equation differ essentially from those of known solutions obtained by averaging method. This distinction is due to the non-periodicity in S of the solution of eq. (4.27). 4.4.2 The Substitution of Krylov–Bogolyubov We look for a solution to eq. (4.27) by the Krylov–Bogolyubov method in the form of series in the powers of small parameter :. That is, ∞

u(t, :) = ∑ :n un (S, (, :),

(4.32)

n=0 ∞

!((, :) = ∑ :n !n (().

(4.33)

n=0

Substitute eq. (4.32) into eq. (4.27) and equate the coefficients of the same powers of :. As a result we get a recurrent system of equation for determining the coefficients un . The equation for u0 is (3󸀠 )2 𝜕S2 u0 + g(u0 ) = 0,

(4.34)

(3󸀠 )2 𝜕S2 u1 + g 󸀠 (u0 )u1 = f0 (t) – D1 u0

(4.35)

the equation for u1 is


and the equation for u_2 is

(ψ')² ∂_S² u_2 + g'(u_0) u_2 = f_1(t) − (g''(u_0)/2) u_1² − D_1 u_1 − D_2 u_0. (4.36)

The technique of solving these equations for the correction terms is well understood. The case f(t) ≡ 0 is treated in detail, for example, in the paper [17]. The problem under study does not fit into this theory, for the function f(t) on the right-hand side of eq. (4.1) need not be zero. By the parameters of the asymptotic solution (4.32) we mean an “initial” value of the slow time θ = θ_0 and the values E_0 = E(θ_0) and a = α(θ_0) at θ_0. For definiteness we also assume that u'(θ_0) > 0.

4.4.3 Linearized Equation
The equations for the correction terms u_n, with n > 0, are linear. Their solutions can be obtained by the variation of constants method, starting from two linearly independent solutions

v_1 = ∂_S u_0, v_2 = (1/K(E))(dK(E)/dE) S ∂_S u_0 + ∂_E u_0

of the corresponding homogeneous linear equation; here and below we write K(E) for ψ', the frequency of the oscillations. To check that both v_1 and v_2 satisfy the homogeneous linear equation, one differentiates the nonlinear equation (4.34) in S (for v_1) and in E (for v_2). We proceed by evaluating the Wronskian of v_1, v_2, namely

W = v_1 ∂_S v_2 − v_2 ∂_S v_1
= ∂_S u_0 ((1/K)(dK/dE)(∂_S u_0 + S ∂_S² u_0) + ∂_S ∂_E u_0) − ∂_S² u_0 ((1/K)(dK/dE) S ∂_S u_0 + ∂_E u_0)
= (1/K)(dK/dE)(∂_S u_0)² + ∂_S u_0 ∂_S ∂_E u_0 − ∂_S² u_0 ∂_E u_0.

Using the equation for u_0 yields

(K(E))² W = ∂_E ( (K(E))² (∂_S u_0)²/2 + G(u_0) ) = ∂_E E = 1,

whence, in its final shape, the Wronskian is

W = 1/(K(E))².
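The identity (K(E))²W = ∂_E((K(E))²(∂_Su_0)²/2 + G(u_0)) = 1 can be verified symbolically without solving anything. A sketch for the pendulum case g(u) = sin u, with u_0 left as an undetermined function of (S, E) constrained only by its equation:

```python
import sympy as sp

S, E = sp.symbols('S E')
K = sp.Function('K')(E)          # frequency, an arbitrary function of E
u = sp.Function('u')(S, E)       # u_0(S, E), constrained only by the ODE

v1 = u.diff(S)
v2 = (K.diff(E)/K)*S*u.diff(S) + u.diff(E)
W = v1*v2.diff(S) - v2*v1.diff(S)

energy = K**2*u.diff(S)**2/2 - sp.cos(u)     # equals E along the solution
expr = K**2*W - energy.diff(E)
# impose the equation K^2 * u_SS + sin(u) = 0
expr = expr.subs(u.diff(S, 2), -sp.sin(u)/K**2)
assert sp.simplify(expr) == 0
print("K^2 * W == d(energy)/dE, hence W = 1/K^2")
```

The same computation goes through verbatim for any smooth g with G' = g.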


The general solution of the linearized equation

(K(E))² ∂_S² u_n + g'(u_0) u_n = F_n (4.37)

for u_n can be written in the form

u_n(S, θ) = c_1(θ) v_1(S, θ) + c_2(θ) v_2(S, θ) + v_1(S, θ) ∫_0^S F_n v_2(s, θ) ds − v_2(S, θ) ∫_0^S F_n v_1(s, θ) ds. (4.38)

Set

∂_S J_n = F_n ∂_S u_0(S, θ). (4.39)

Consider separately

∫_0^S F_n v_2(s, θ) ds = ∫_0^S F_n(s, θ) ((1/K)(dK/dE) s ∂_s u_0 + ∂_E u_0) ds
= (1/K)(dK/dE) ∫_0^S F_n(s, θ) s ∂_s u_0 ds + ∫_0^S F_n(s, θ) ∂_E u_0 ds
= S (1/K)(dK/dE) J_n(S, θ) − (1/K)(dK/dE) ∫_0^S J_n(s, θ) ds + ∫_0^S F_n(s, θ) ∂_E u_0 ds.

Introduce

∂_S a_n(S, θ) = (1/K)(dK/dE) J_n(S, θ) − F_n(S, θ) ∂_E u_0. (4.40)

Using the formulas for the solutions v_1 and v_2 we can now transform equality (4.38) to the form

u_n(S, θ) = c_1(θ) v_1 + c_2(θ) v_2 − ∂_S u_0 a_n(S, θ) + ∂_S u_0 S (1/K)(dK/dE) J_n(S, θ) − ((1/K)(dK/dE) S ∂_S u_0 + ∂_E u_0) J_n(S, θ).

The terms containing the factor S cancel and we arrive at

u_n(S, θ) = c_1(θ) v_1 + c_2(θ) v_2 − ∂_S u_0 a_n(S, θ) − ∂_E u_0 J_n(S, θ), (4.41)


where a_n and J_n are the solutions of eqs. (4.40) and (4.39). The coefficients c_1(θ) and c_2(θ) are so far not determined. We have thus proved the following assertion.

Lemma 7. The solution of eq. (4.37) can be represented in the form (4.41), where a_n(S, θ) and J_n(S, θ) are solutions of the system (4.40), (4.39), and c_1(θ) and c_2(θ) are arbitrary functions of θ.

Our next goal is to derive a boundedness condition for the solution of eq. (4.37) for a right-hand side F_n(S, θ) almost periodic in S. If

lim_{S→∞} (1/S) ∫_0^S F_n(S, θ) ∂_S u_0(S, θ) dS = 0, (4.42)

then J_n(S, θ) is almost periodic in S. Under condition (4.42) one can always choose c_2(θ) in such a way that u_n is bounded. Namely,

c_2(θ) = lim_{S→∞} (1/S) ∫_0^S F_n(S, θ) ∂_E u_0(S, θ) dS. (4.43)

As a result we deduce that u_n is an almost periodic function of S. We have thus proved:

Theorem 15. Let eq. (4.42) be fulfilled. Then the solution of eq. (4.37) is an almost periodic function, where a_n(S, θ) and J_n(S, θ) are solutions of the triangular system (4.40), (4.39) and c_2(θ) is given by equality (4.43).

Corollary 8. Suppose

F_n(S, θ) = ∑_{k=0}^{∞} F_{n,k} e^{iν_k S}

is an almost periodic function and there are integers m and k such that mψ/θ → ν_k as θ → θ_{m,k}. Then

u_n = O( F_{n,k}/(m − ν_k θ/ψ)² ), θ → θ_{m,k}.

To prove this assertion it suffices to represent ∂_S u_0 and ∂_E u_0 as Fourier series and to integrate eqs. (4.39) and (4.40) explicitly. Then the result should be substituted into eq. (4.41).

4.4.4 Construction of the First Correction Term
In this section we compute the first correction term and derive an equation for the leading-order term of the averaged action.


Write the solution of eq. (4.35) in the form

u_1(S, E) = U_{1,f}(S, E) + U_{1,μ}(S, E), (4.44)

where U_{1,f}(S, E) and U_{1,μ}(S, E) are the solutions of the equations

(ψ')² ∂_S² U_{1,f} + g'(u_0) U_{1,f} = f_0(t), (4.45)
(ψ')² ∂_S² U_{1,μ} + g'(u_0) U_{1,μ} = −D_1 u_0, (4.46)

respectively. Here we study the non-resonant case of the external force f_0(t). To pass to the fast variable S, set t = (S − α)θ/ψ. The independent variable θ is considered in an interval of the real axis where m ≠ ω_k θ/ψ for all m ∈ ℤ and k ∈ K_0. In this case the boundedness condition for the solution of eq. (4.45) is fulfilled identically in θ. To construct an explicit formula for U_{1,f} one can exploit Corollary 8. Let there be integers m and k with the property that mψ/θ → ω_k as θ → θ_{m,k}. Then

U_{1,f} = O( 1/(m − ω_k θ/ψ)² ). (4.47)

The right-hand side of eq. (4.46) is periodic in S, hence the averaging over an unbounded interval can be replaced by averaging over the period. In this way the equality

lim_{s→∞} (1/s) ∫_0^s (−μψ' ∂_S u_0 − ψ'' ∂_S u_0 − 2α_0'ψ' ∂_S² u_0 − 2ψ' ∂_S ∂_θ u_0) ∂_S u_0 dS = 0 (4.48)

can be rewritten as an ordinary differential equation for the function

I_0 = ψ' ∫_0^{2π} (∂_S u_0(S, E))² dS

of θ. More precisely, we get μI_0 + I_0' = 0. Hence it follows that

I_0 = I e^{−μθ}, I = I_0|_{θ=θ_0} = const, (4.49)

where I = I_0(θ_0) is an arbitrary constant. Assuming the function E(I) to be invertible, we determine in this way the dependence of the parameter E on θ. Let us sum up what we have obtained in this section.
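The decay law (4.49) is easy to observe numerically in the simplest setting: for a harmonic well with damping of order ε, the action (which equals the energy for a harmonic well with unit frequency) decays like e^{−μθ}. A sketch, with the harmonic potential chosen purely for illustration:

```python
import math

eps, mu = 0.05, 0.8
h = 1e-3

def rk4_step(u, v):
    # u'' + u = -eps*mu*u': dissipation enters at order eps, as in eq. (4.27)
    def rhs(u, v):
        return v, -u - eps*mu*v
    k1 = rhs(u, v)
    k2 = rhs(u + h/2*k1[0], v + h/2*k1[1])
    k3 = rhs(u + h/2*k2[0], v + h/2*k2[1])
    k4 = rhs(u + h*k3[0], v + h*k3[1])
    return (u + h*(k1[0]+2*k2[0]+2*k3[0]+k4[0])/6,
            v + h*(k1[1]+2*k2[1]+2*k3[1]+k4[1])/6)

t, u, v = 0.0, 1.0, 0.0
I0 = 0.5*(u*u + v*v)               # initial action/energy
while t < 200.0:
    u, v = rk4_step(u, v)
    t += h
theta = eps*t                      # slow time
I_pred = I0*math.exp(-mu*theta)    # eq. (4.49)
I_num = 0.5*(u*u + v*v)
print(I_num, I_pred)
```

The instantaneous energy oscillates around the averaged action by O(εμ), so the agreement is up to a few per cent.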


Lemma 8. Equation (4.35) has a bounded solution if the parameter E evolves in accordance with eq. (4.49), for E ≢ E_{m,k}.

Obviously, the solution of the equation for the first correction term fails at the zeros of the denominator in eq. (4.47), which are the resonant values E_{m,k}^0 of the parameter E. For each ε, the set of values of k is finite. Hence the resonant values are bounded away from each other. The distance between two neighbouring resonances depends on the behaviour of the Fourier series of f(t) and is determined by the asymptotic condition of non-overlap of resonances (see eq. (4.26)). We now ascertain the applicability domain of the asymptotic expansion (4.32). To this end we consider higher-order correction terms.

4.4.5 Construction of the Second Correction Term
In this section we derive an evolution equation for the phase shift and construct a two-parameter asymptotic solution which works well outside the resonant levels of the parameter E. Let us discuss the influence of the external force on solutions of the equation for the second correction term. In general, the formula for f_1(t) may contain resonant summands in the fast variable S at some point θ. Since the dependence ψ(θ) has been determined at the previous step when constructing u_1, the resonances in the second correction term are local with respect to the slow variable. For passing through a local resonance it is necessary to change the averaging operator. It is well understood that passing through local resonances leads, in the generic situation, to a change in the correction terms as large as ε^{−1/2} (see Ref. [6]). From the formal viewpoint this means that in the generic situation one can consider the averaging operator of the form

lim_{S→∞} (1/S) ∫_{S_0}^{S} f_1(t) ∂_S u_0(y, θ) dy ∼ (1/(ψ + εα)) ∫_{θ_0}^{θ} f_1(t) u_0'(α + ψ/ε)(ψ' + εα') dθ = O(√ε).

In order to establish this formula one substitutes Fourier series for the integrands, which converge absolutely, passes to termwise integration and applies the stationary phase method for evaluating the rapidly oscillating integrals. Let us now compute the action of the averaging operator on the remaining summands of the right-hand side of eq. (4.36),

lim_{S→∞} (1/S) ∫^S ( −(g''(u_0)/2) u_1² − D_1 u_1 − D_2 u_0 ) ∂_S u_0(y, θ) dy.

It is convenient to break the computation of the averaging operator down into single terms. The integral of the first summand is

∫^S −(g''(u_0)/2) u_1² ∂_S u_0(S, θ) dS = −g'(u_0) u_1²/2 |^S + ∫^S g'(u_0) u_1 u_1' dS.


On evaluating g'(u_0)u_1 from the equation for the first correction term we get

∫^S g'(u_0) u_1 u_1' dS = ∫^S ( −(ψ')² ∂_S² u_1 + f_0(t) − D_1 u_0 ) ∂_S u_1 dS.

The first summand under the integral is explicitly integrable, and its average just amounts to zero. As a result the average reduces to

B = lim_{S→∞} (1/S) ∫^S ( (f_0(t) − D_1 u_0) ∂_S u_1 − D_2 u_0 ∂_S u_0(S, θ) ) dS.

The last summand is evaluated immediately, namely

lim_{S→∞} (1/S) ∫^S (−D_2 u_0 ∂_S u_0(S, θ)) dS = (α'' + μα')(I_0/ψ') + α'(I_0/ψ')' = ((α'/ψ') I_0)' + μ(α'/ψ') I_0.

For evaluating the other summand we make use of representation (4.44) for the first correction term, obtaining

lim_{S→∞} (1/S) ∫^S (f_0(t) − D_1 u_0) ∂_S u_1 dS = lim_{S→∞} (1/S) ∫^S (f_0(t) − D_1 u_0)(∂_S U_{1,f} + ∂_S U_{1,μ}) dS
= lim_{S→∞} (1/S) ∫^S (f_0(t) − D_1 u_0) ∂_S U_{1,f} dS + lim_{S→∞} (1/S) ∫^S (f_0(t) − D_1 u_0) ∂_S U_{1,μ} dS.

We claim that

lim_{S→∞} (1/S) ∫^S (f_0(t) − D_1 u_0) ∂_S U_{1,μ} dS = lim_{S→∞} (1/S) ∫^S f_0(t) ∂_S U_{1,μ} dS − ∫_0^{2π} D_1 u_0 ∂_S U_{1,μ} dS ≡ 0.

Indeed, the integrand in the first summand is a conditionally periodic function of zero mean value, hence the average vanishes. The second summand coincides with the integral treated in Ref. [17]. This integral vanishes, for its integrand is the product of an even function and an odd function of S; integrating it over the entire period yields zero. It remains to consider the average of two summands that do not occur in Ref. [17]. These are

lim_{S→∞} (1/S) ∫^S f_0(t) ∂_S U_{1,f} dS − lim_{S→∞} (1/S) ∫^S D_1 u_0 ∂_S U_{1,f} dS.

Since the function D_1 u_0 ∂_S U_{1,f} is conditionally periodic and of mean value zero, we deduce readily that the second limit vanishes.


To compute the remaining average we write

u_0 = ∑_{l=0}^{∞} u_{0,l}(E) cos(lS),

whence

∂_S u_0 = −∑_{l=1}^{∞} l u_{0,l}(E) sin(lS), ∂_E u_0 = ∑_{l=0}^{∞} v_{0,l}(E) cos(lS).

Besides, represent f_0 in the form

f_0(t) = ∑_{k∈K_0} |f_k| cos(ω_k (θ/ψ) S + σ_k), σ_k = arg(f_k) − ω_k (θ/ψ) α.

Further computations may be done in explicit form, for example with a program of analytic computations. One substitutes the series for u_0 and f_0 into eqs. (4.39) and (4.40), interchanges the integrals and the infinite sums and then integrates termwise, thus arriving at an explicit expression for U_{1,f}. Then it is straightforward to compute the integral and the limit in the averaging operator. As a result we obtain

lim_{S→∞} (1/S) ∫^S f_0(t) ∂_S U_{1,f} dS = 0.

Thus the averaging procedure leads to the equation

((α'/ψ') I_0)' + μ (α'/ψ') I_0 = 0. (4.50)

Write a particular solution of this equation in the form

α = ψ(θ) + a, (4.51)

where a is a parameter of the solution.

Theorem 16. Suppose that I_0 and α_0 are determined by eqs. (4.49) and (4.51), respectively. Then u ∼ u_0(S, E) is a two-parameter asymptotic solution of the form (4.32) defined up to o(1) for θ ≢ θ_{m,k}. The quantities I and a are the parameters of the solution.

Denote

J_2(S, θ) = ∫_0^S F_2(s, θ) ∂_s u_0 ds.

By the properties of averaging and of integration of summands with a local resonance it follows that J_2(S, θ) = o(S) + O(1/√ε). Letting θ → θ_{m,k} we deduce from Theorem 15 that

u_2 = O((m − ω_k θ/ψ)^{−6}).


In much the same way we construct higher-order correction terms. One can show that the singularity of u_n as θ → θ_{m,k} increases with n, namely

u_n = O( (1/(m − ω_k θ/ψ))^{4(n−1)+2} ), k ∈ K_0(ε).

Then the applicability domain of the asymptotics u ∼ u_0(S, E) is described by

ε/(m − ω_k θ/ψ)⁴ ≪ 1.

This just amounts to saying that in the generic situation the condition of applicability of the asymptotics (4.32) is

|θ − θ_{m,k}| ≫ ε^{1/4}. (4.52)

4.4.6 Resonances in Higher-Order Correction Terms
By the above, the constructed asymptotics no longer work close to the resonant values of θ. Such resonant values of θ are determined starting from the set K_0(ε) and an integer k. For passing through the resonances related to the set K_0(ε) one ought to change the structure of the asymptotic expansion at the order of smallness √ε (see Ref. [6]). For higher-order correction terms similar resonances appear also for those values of θ such that K(E(θ))/ω_k ∈ ℤ with some k belonging to the union of the sets K_0(ε), …, K_{n−1}(ε), where n is the number of the correction term. The passage through a resonant value of θ in higher-order correction terms leads to a structure change of the asymptotics at the order of smallness ε^{n−1/2}. Asymptotic expansions in neighbourhoods of resonant values of θ are studied in the next section.

4.5 Asymptotics in Resonant Regions

4.5.1 Formal Derivation of the Nonlinear Resonance Equation
In this section we build an asymptotic solution in a neighbourhood of the resonant value E_{m,k}^0. Particular attention is paid to the applicability intervals of the asymptotics. We also compute an intermediate asymptotics when approaching the capture region, and derive a formula connecting this intermediate asymptotics with the parameters of captured solutions. The equation of nonlinear resonance was derived by Chirikov in the dissipationless case for f(t) ≅ cos(ωt) in Ref. [25]. This is the equation of a mathematical pendulum with external torque. Since then the derivation of this equation has been reproduced by diverse methods in a great number of papers. Here the nonlinear resonance equation is obtained by the Whitham method for eq. (4.1) with periodic function f(t) in the presence of the dissipative summand.


Let us look for an asymptotic (in :) solution of eq. (4.1) in a neighbourhood of 0 resonant level E = Em,k by the Whitham method. For this purpose we substitute a function u of the form u = u(t + >(4)),

4 = √:t,

with 4 = √:t into eq. (4.1). After differentiation we get u󸀠󸀠 + g(u) = –√:>󸀠 u󸀠󸀠 + :(f (t) – >󸀠󸀠 u󸀠 – Au󸀠 – (>󸀠 )2 u󸀠󸀠 ) – :√:A>󸀠 u󸀠 .

(4.53)

We shall build a solution which is periodic in the fast variable with period T = 20/K(E^0_{m,k} ). Let us multiply the equation by u󸀠 and average it in the fast variable S. The necessary condition of boundedness of the solution is

(–>󸀠󸀠 – A – √: A>󸀠 ) lim_{s→∞} (1/s) ∫_0^s (u󸀠 )² dt + lim_{s→∞} (1/s) ∫_0^s f (t) u󸀠 (t + >) dt = 0.    (4.54)

Denote by

I ≡ I(T) = lim_{s→∞} (1/s) ∫_0^s (u󸀠 )² dt

the action for a given T, to be treated as a parameter independent of 4. Represent u = u(t + >) and f (t) by their Fourier series

u = ∑_{l=0}^{∞} (a_l cos(lK(t + >)) + b_l sin(lK(t + >))),
f (t) = ∑_{j=0}^{∞} (g_j cos(9_j t) + h_j sin(9_j t)).

Differentiate the series for u termwise in t, multiply the series for u󸀠 by the series for f (t) and transform the products of trigonometric functions into trigonometric functions of the sum and difference of the arguments. Then apply the averaging operator to the series obtained in this way. As a result we conclude that the action of the averaging operator is different from zero only for those l and j such that lK = 9_j . Hence

lim_{S→∞} (1/S) ∫_0^S f (t) u󸀠 (t + >) dt = ∑_l (A_l cos(lK>) + B_l sin(lK>)).

Denote

Q(>) = ∑_l (A_l cos(lK>) + B_l sin(lK>)).
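The selection rule behind this computation, that long-time averages of products of harmonics vanish unless the frequencies match, is easy to see numerically. A minimal sketch; the frequencies and the averaging window are assumed values chosen for illustration:

```python
import math

def time_average(func, T=4000.0, n=400000):
    """Average (1/T) * integral_0^T func(t) dt by the midpoint rule."""
    h = T / n
    return sum(func((i + 0.5) * h) for i in range(n)) * h / T

K = 0.7  # frequency of the nonlinear oscillation (an assumed value)
# resonant pair: 3*K == 2.1, so the average of the product is 1/2
res = time_average(lambda t: math.cos(3 * K * t) * math.cos(2.1 * t))
# non-resonant pair: the average tends to 0 as the window grows
non = time_average(lambda t: math.cos(3 * K * t) * math.cos(2.0 * t))
print(round(res, 2), round(non, 2))  # 0.5 0.0
```

Only the resonant combinations survive the averaging, which is why the averaged forcing reduces to the single series Q(>).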


We rewrite eq. (4.54) as an equation for >,

(–>󸀠󸀠 – A – √: A>󸀠 ) I + (K/2) Q(>) = 0.

The change of variables

6 = K>,    𝛾 = A/K,    % = A√:,    q(6) = (1/(IK)) Q(6/K)

reduces the above equation to the general equation for nonlinear resonance with dissipation

6󸀠󸀠 + q(6) + 𝛾 = –%6󸀠 .    (4.55)
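Equation (4.55) is the pendulum with constant torque 𝛾 and weak damping %. Its two regimes, capture (the phase stays bounded near an equilibrium) and passage (the phase leaves the starting well), show up in a minimal numerical sketch; q(6) = sin 6 and all parameter values below are assumptions for illustration, not data from the text.

```python
import math

def simulate(v0, gamma=0.2, eps=0.05, dt=1e-3, T=400.0):
    """RK4 for phi'' + sin(phi) + gamma = -eps*phi', started at phi(0)=0, phi'(0)=v0."""
    def rhs(phi, v):
        return v, -math.sin(phi) - gamma - eps * v
    phi, v = 0.0, v0
    for _ in range(int(T / dt)):
        k1 = rhs(phi, v)
        k2 = rhs(phi + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
        k3 = rhs(phi + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
        k4 = rhs(phi + dt * k3[0], v + dt * k3[1])
        phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return phi

captured = simulate(v0=0.1)   # small initial velocity: stays in the starting well
passing = simulate(v0=5.0)    # large initial velocity: leaves the starting well
print(abs(captured) < 2, abs(passing) > 5)  # True True
```

Since |𝛾| < max |q(6)| here, equilibria exist and the low-energy trajectory remains trapped, while the fast one crosses many wells before the torque and damping take over.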

In eq. (4.55), q(6) is a periodic function of period 20/K with zero mean value. From the convergence of the Fourier series for the functions 𝜕_S u_0 and f (t) it follows that q(6) → 0 as m, k → ∞.

Consider, for instance, g(u) ≡ sin u and f (t) ≡ A cos(9t). Then, for 9 < 1, there may occur resonances on subharmonics, that is, mK(E) = 9. As a result we get an equation of the form

6󸀠󸀠 + h sin(6 + a) + 𝛾 = –%6󸀠 ,

where 𝛾 = A/(mK), h = (A/(mKI))√(a_m² + b_m²), a = arctan(a_m /b_m ), with a_m and b_m the Fourier coefficients of the elliptic function 𝜕_S u_0 .

4.5.2 Inner Asymptotic Expansion

Equations (4.53) and (4.55) constitute a system. The equations of this system separate asymptotically provided that √: >󸀠 ≪ 1. We look for a solution of eq. (4.53) in the form of a power series in √:, namely

u(t, :) = ∑_{k=0}^{∞} :^{k/2} u_k (t + >).    (4.56)

To obtain equations for the u_k one substitutes eq. (4.56) into eq. (4.53) and equates the coefficients of the same powers of √:. The equation for the main term u_0 looks like

u_0󸀠󸀠 + g(u_0 ) = 0,

the equation for the first correction term is

u_1󸀠󸀠 + g󸀠 (u_0 )u_1 = –>󸀠 u_0󸀠󸀠 ,

and the equation for the second correction term is

u_2󸀠󸀠 + g󸀠 (u_0 )u_2 = –g󸀠󸀠 (u_0 ) u_1²/2 – >󸀠 u_1󸀠󸀠 + f (t) – >󸀠󸀠 u_0󸀠 – Au_0󸀠 – (>󸀠 )² u_0󸀠󸀠


and similarly for higher-order correction terms. Notice that the equation for the correction term u_n contains a term proportional to (>󸀠 )ⁿ. One can prove that, if secular terms cancel in all correction terms, expansion (4.56) is asymptotic for √: >󸀠 ≪ 1. Equation (4.54) is a necessary condition for the expansion to be uniform.

4.5.3 Capture into Resonance

This section is aimed at evaluating the measure of resonance solutions in the space of parameters of solutions. A solution of eq. (4.55) is said to be captured into resonance if there is a constant C such that |6| < C as 4 → ∞. The solutions of eq. (4.55) can be parametrized by three real parameters: an initial value 4 = 4_0 and the “initial data” 6(4_0 ) = v_0 and 6󸀠 (4_0 ) = v_1 . In the phase plane the trajectories of solutions are characterized by the two parameters 6 and 6󸀠 .

The existence of equilibrium states of eq. (4.55) is determined by the inequality |𝛾| ≤ max |q(6)|. The equilibrium states are saddle and focal points. They are solutions of the transcendental equation 𝛾 + q(6_j ) = 0. It is easily seen that the saddles and foci are situated on the axis 6󸀠 = 0 and alternate with each other. We enumerate these solutions by an index j. Then the saddles are at the points (0, 6_{2j+1} ), where q󸀠 (6_{2j+1} ) < 0, and the foci are at the points (0, 6_{2j} ), where q󸀠 (6_{2j} ) > 0.

Capture into resonance is not possible at all resonant levels E^0_{m,k} . From the asymptotic property q(6) → 0 as m, k → ∞ it follows that, for large m and k, eq. (4.55) has no stationary solutions, and so there can be no capture into resonance. Although one ought to take proper account of eq. (4.55) when passing through such resonant levels, no capture into resonance occurs. When passing through a resonant level for which |𝛾| > max |q(6)|, the solution changes by an amount O(√:).

Write p = 6󸀠 ; then the equation for phase trajectories is

p dp/d6 = –q(6) – 𝛾 + %p.

For % = 0 this equation is integrated explicitly, giving

E = (6󸀠 )² + Q(6) + 𝛾6.    (4.57)

Here Q(6) is a primitive of q(6), that is, Q󸀠 (6) = q(6). The parameter E varies slowly on solutions of eq. (4.55), for

dE/dt = % (6󸀠 )².


Hence it is convenient to use E for parametrizing the trajectories. Along a phase curve we have

dE/d6 = %6󸀠 .

An equivalent formulation of this equation is

dE/d> = sgn(6󸀠 ) % √(E – Q(6) – 𝛾6).    (4.58)

The trajectories captured into resonance are located between separatrices that tend to a saddle as 4 → ∞. At the saddle point

E(6_{2j+1} ) = E^-_{2j+1} := Q(6_{2j+1} ) + 𝛾6_{2j+1} .

It should be noted that the separatrix which loops round the focus 6_{2j} actually takes two different values at 6 = 6_{2j+1} as 4 → ∞. The corresponding values E^±_{2j+1} of E differ by the value of Mel’nikov’s integral over the loop L of the separatrix, that is,

E^+_{2j+1} = E^-_{2j+1} + % ∫_L 6󸀠 d6.

The trajectories of resonance solutions lie between the separatrices with values E(6_{2j+1} ) = E^+_{2j+1} and E(6_{2j+1} ) = E^-_{2j+1} . One uses these values to approximate the bounds of the region of capture into resonance in the phase plane.

Our next objective is to construct an asymptotic solution of the Cauchy problem for the parameter E with data E(6_{2j+1} ) = E_0 . Let

E = E_0 + ∑_{k=1}^{∞} E_k (>) %^k .    (4.59)

Substitute this expression into differential equation (4.58) and equate the coefficients of the same powers of %. As a result we get a series of Cauchy problems

dE_1/d6 = √(E_0 – Q(6) – 𝛾6),    E_1 (6_{2j+1} ) = 0;
dE_2/d6 = E_1 (6)/(2√(E_0 – Q(6) – 𝛾6)),    E_2 (6_{2j+1} ) = 0;
dE_3/d6 = E_2 (6)/(2√(E_0 – Q(6) – 𝛾6)) – E_1 (6)²/(8(E_0 – Q(6) – 𝛾6)^{3/2}),    E_3 (6_{2j+1} ) = 0,

etc., provided that 6󸀠 > 0. The measure of trajectories in the phase plane which are captured into resonance amounts to the sum of the area of the separatrix loop

S_j = ∫_{E^-_{2j+1}}^{E^+_{2j+1}} 6󸀠 d6


and the area of the region between the separatrices towards the saddle point 6_{2j+1} ,

B_j (6) = ∫_{6_{2j+1}}^{6} (6󸀠 (E^+_{2j+1} ) – 6󸀠 (E^-_{2j+1} )) d6.

It follows that the measure of trajectories captured into resonance of level E^0_{m,k} near the focus 6_j , for |6| ≪ %^{-1} , is

,_{m,k} ∼ S_j + B_j = O(1).

In this way we arrive at

Theorem 17. The measure of trajectories captured into resonance in a neighbourhood of the 2jth focus has smallness order ,_{m,k} .

If 6 → –∞ then E_1 = O(|6|^{3/2} ), E_2 = O(6² ) and E_3 = O(|6|^{5/2} ). Moreover, one can show that E_k = O(|6|^{k/2+1} ) for all integers k ≥ 4. This estimate allows one to directly evaluate the region of applicability of asymptotic expansion (4.59). More precisely,

%^{k+1} E_{k+1} / (%^k E_k ) ≪ 1,    (4.60)

and so %(–6)^{1/2} ≪ 1, whence –6 ≪ %^{-2} .

4.5.4 Asymptotic Solutions of the Equation of Nonlinear Resonance

In this section we build an unbounded asymptotic solution of eq. (4.55) for 4 → ∞. Besides, we derive a connection formula between the parameters of the asymptotic expansion and the parameters of the solution in a small neighbourhood of capture into resonance. The solution is searched for in the form

6 = s(4, %) + 8(s, %),    (4.61)

where s󸀠󸀠 + +(s, %) = –%s󸀠 . The summand +(s, %) will be determined below in the course of constructing an asymptotic expansion for the function 8. Substituting expression (4.61) into eq. (4.55), we obtain

(–+ – %s󸀠 ) + (s󸀠 )² 8󸀠󸀠 + (–+ – %s󸀠 ) 8󸀠 + q(s + 8) + 𝛾 = –%s󸀠 – %s󸀠 8󸀠 .

Elementary transformations yield an equation for 8, namely

(s󸀠 )² 8󸀠󸀠 – + 8󸀠 + q(s + 8) + (𝛾 – +) = 0.


Our concern is the behaviour of 8 as s󸀠 → ∞. To this end it is convenient to rewrite the equation in the form

8󸀠󸀠 = (1/(s󸀠 )²) (+ 8󸀠 – q(s + 8) – (𝛾 – +)).

Set z(s) = 1/s󸀠 . We look for a solution of the form 8(s, %) = 8(s, z) by the method of two scales. Here s is thought of as the fast variable and z as the slow variable. The total derivative in s is

d/ds = 𝜕_s + (dz/ds) 𝜕_z .

To evaluate z󸀠 , we make use of the equation for s in the form

s󸀠 ds󸀠/ds = –+ – %s󸀠 ,

implying

dz/ds = (–1/(s󸀠 )²) ds󸀠/ds = (–1/(s󸀠 )²) (–+/s󸀠 – %) = +z³ + %z² .

Hence the total derivative in s amounts to

d8/ds = 𝜕_s 8 + (+z³ + %z²) 𝜕_z 8

and the second derivative is

d²8/ds² = 𝜕_s²8 + (+z³ + %z²) (2𝜕_s 𝜕_z 8 + (3+z² + z³ 𝜕_z + + 2%z) 𝜕_z 8 + (+z³ + %z²) 𝜕_z² 8).

As a result the equation of the method of two scales for 8 takes the form

𝜕_s²8 = (+z³ + %z²) (–2𝜕_s 𝜕_z 8 – (3+z² + z³ 𝜕_z + + 2%z)𝜕_z 8 – (+z³ + %z²)𝜕_z² 8) + +z² 𝜕_s 8 + +(+z³ + %z²)𝜕_z 8 – q(s + 8) – (𝛾 – +).    (4.62)

We look for a solution of this equation of the form

8(s, z) = ∑_{k=2}^{∞} 8_k (s) z^k    (4.63)

for small values of z. Assume that the parameter + is represented by a similar series in powers of z, that is,

+(z, %) = ∑_{k=0}^{∞} +_k z^k ,    (4.64)

the +_k being undetermined constant coefficients. As usual, we substitute expressions (4.63) and (4.64) into eq. (4.62), equate the coefficients of the same powers of z and


arrive at a recurrent system of second-order ordinary differential equations for the unknown functions 8_k (s). In particular,

8_2󸀠󸀠 = –q(s) – (𝛾 – +_0 ).

This equation possesses a (bounded) periodic solution if +_0 = 𝛾. We take such a periodic solution as 8_2 (s). The equation for 8_3 (s) is

8_3󸀠󸀠 = –4%8_2󸀠 + +_1 .

This equation has a periodic solution if +_1 = 0. The equation for 8_4 is in turn

8_4󸀠󸀠 = –6%² 8_2 – 5𝛾8_2󸀠 – 6%8_3󸀠 – 3%² 8_2󸀠 – q󸀠 (s)8_2 + +_2 .

A periodic solution of this equation exists if

(1/T) ∫_0^T q󸀠 (s) 8_2 (s) ds + +_2 = 0,

which is the case for +_2 = 0, as is easy to check. In this way we determine the periodic coefficients 8_k (s) and +_{k–2} one after another. The coefficient 8_k (s) is evaluated from an equation

8_k󸀠󸀠 = F_k (s, 8_2 , . . . , 8_{k–2} , 8_2󸀠 , . . . , 8_{k–1}󸀠 , +_0 , . . . , +_{k–4} ) + +_{k–2} ,

where k > 2. The parameter +_{k–2} is determined from the condition that the mean value over the period of the right-hand side of the equation for 8_k vanishes. To wit,

+_{k–2} = –(1/T) ∫_0^T F_k (s, 8_2 , . . . , 8_{k–2} , 8_2󸀠 , . . . , 8_{k–1}󸀠 , +_0 , . . . , +_{k–4} ) ds.

On having constructed +(z, %) we pass to the study of the equation for s. Up to terms of smallness order o(z²) the equation has the form s󸀠󸀠 + 𝛾 ∼ –%s󸀠 . We are now in a position to formulate the result of this section.

Theorem 18. The solution of eq. (4.55) behaves like

s ∼ (𝛾/%² – 𝛾(4 – 4_0 )/%) + s_0 – (𝛾/%²) e^{–%(4–4_0 )} ∼ s_0 – (𝛾(4 – 4_0 )²/2) (1 – %(4 – 4_0 )/3 + O(%²(4 – 4_0 )²))

as s󸀠 → ∞. Here s_0 and 4_0 are constants of integration.
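The closed form in Theorem 18 and its expansion can be checked numerically. A minimal sketch, assuming the averaged equation s󸀠󸀠 + 𝛾 = −%s󸀠 with initial data s(4_0) = s_0, s󸀠(4_0) = 0 and illustrative parameter values (all of these are assumptions here):

```python
import math

def s_exact(tau, s0=1.0, tau0=0.0, gamma=0.3, eps=0.05):
    """Closed-form solution of s'' + gamma = -eps*s', s(tau0)=s0, s'(tau0)=0."""
    d = tau - tau0
    return gamma / eps**2 - gamma * d / eps + s0 - (gamma / eps**2) * math.exp(-eps * d)

def s_rk4(tau, s0=1.0, tau0=0.0, gamma=0.3, eps=0.05, dt=1e-4):
    """Reference RK4 integration of the same Cauchy problem."""
    def acc(v):
        return -gamma - eps * v
    s, v = s0, 0.0
    for _ in range(int(round((tau - tau0) / dt))):
        k1s, k1v = v, acc(v)
        k2s, k2v = v + 0.5 * dt * k1v, acc(v + 0.5 * dt * k1v)
        k3s, k3v = v + 0.5 * dt * k2v, acc(v + 0.5 * dt * k2v)
        k4s, k4v = v + dt * k3v, acc(v + dt * k3v)
        s += dt * (k1s + 2 * k2s + 2 * k3s + k4s) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return s

tau = 5.0
print(abs(s_exact(tau) - s_rk4(tau)) < 1e-8)                 # closed form solves the ODE
quadratic = 1.0 - 0.3 * tau**2 / 2 * (1 - 0.05 * tau / 3)    # s0 - (gamma*d^2/2)(1 - eps*d/3)
print(abs(s_exact(tau) - quadratic) < 0.05)                  # agrees with the expansion
```

The quadratic-in-(4 − 4_0) behaviour is what drives the solution out of the applicability region %(4 − 4_0) ≪ 1 discussed next.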


The theorem implies that unbounded solutions of eq. (4.55) behave like

6 ∼ (1/2) 𝛾_{m,k} (4 – 4_0 )²

for 1 ≪ (4 – 4_0 ) ≪ %^{-1}. Substituting this asymptotics into estimate (4.60) yields an estimate for the region of appropriateness of expansion (4.59), namely

%(4 – 4_0 ) ≪ 1,    (4.65)

that is, (4 – 4_0 ) ≪ %^{-1}. The connection between the parameters of the asymptotics of 6 in eqs. (4.61), (4.63) and the parameter E is obtained by substituting asymptotics (4.61), (4.63) into formula (4.57). We get s_0 ∼ E as (4 – 4_0 ) → ∞ and % → 0. Asymptotic solution (4.61), (4.63) will be captured into the resonance in a neighbourhood of the 2jth equilibrium state if

E^-_{2j+1} < s_0 < E^+_{2j+1}

for 4 → ∞.

4.5.5 Matching of Asymptotic Expansions

In this section we match the parameters of the asymptotic solutions outside of nonlinear resonances and in neighbourhoods of them. We first rewrite the applicability region of the inner asymptotic expansion in terms of t and :. Namely, since 4 = √: t, from (4 – 4_0 ) ≪ %^{-1} it follows that √:(t – 4_0 /√:) ≪ :^{-1}, which is equivalent to (t – 4_0 /√:) ≪ :^{-3/2}. The outer asymptotics is valid in the region (( – (_{m,k} ) ≫ :^{1/4}, which is equivalent to (t – (_{m,k}/:) ≫ :^{-3/4}. Hence, if we choose 4_0 = (_{m,k}/√:, the appropriateness regions of the constructed asymptotics meet each other.

Our next goal is to match the formulas for the asymptotics in the non-resonant region (4.32) and asymptotics (4.56) in a neighbourhood of each resonant level E = E^0_{m,k} . Suppose the parameter E(() evaluated from formula (4.49) takes on the value E^0_{m,k} at some point ( = (_{m,k} . We determine the function 3(() starting from eq. (4.29) and the condition 3((_{m,k} ) = 0. Let !((_{m,k} ) = A ; then asymptotic solution (4.32) has three independent parameters (_0 , I_0 and A .

The asymptotic solution in a neighbourhood of the resonance level E = E^0_{m,k} has parameters s_0 and 4_0 . From the formulas for the applicability intervals of asymptotics (4.52) and (4.65) one sees readily that the regions of appropriateness of these asymptotics overlap. Set 4_0 = (_{m,k}/√:. Since E(() → E^0_{m,k} as ( → (_{m,k} , we conclude that the principal terms


of the asymptotics, which are the function u_0 (S/3󸀠 , E) of eq. (4.32) and u_0 (t + >) of eq. (4.56), respectively, coincide. We now match the arguments of these asymptotics that contain fast variables. In these arguments one ought to single out and equate the parameters that are independent of the variables t, (, 4, for these parameters are precisely the parameters of the asymptotic solution. Let us compute the asymptotics of the fast argument S/3󸀠 for ( → (_{m,k} . We have

S ∼ 3󸀠 ((_{m,k} )(t – (_{m,k}/:) + : (3󸀠󸀠 ((_{m,k} )/2)(t – (_{m,k}/:)² + 3󸀠 ((_{m,k} ) A + : !󸀠 ((_{m,k} )(t – (_{m,k}/:),
1/3󸀠 ∼ (1/3󸀠 ((_{m,k} )) (1 – : (3󸀠󸀠 ((_{m,k} )/3󸀠 ((_{m,k} ))(t – (_{m,k}/:)),

whence

S/3󸀠 ∼ A + (t – (_{m,k}/:),
t + >(4) ∼ t + s_0 /K(E^0_{m,k} ) = (_{m,k}/: + s_0 /K(E^0_{m,k} ) + (t – (_{m,k}/:).

The matching condition S/3󸀠 ∼ t + > leads to the following assertion.

Theorem 19. The asymptotic solution, which applies uniformly in the parameter ( in each interval including a neighbourhood of the nonlinear resonance point (_{m,k} , has parameters

A = (_{m,k}/: + s_0 /K(E^0_{m,k} ),    4_0 = (_{m,k}/√: .    (4.66)

The principal significance of formula (4.66) is that it provides the matching of the phase shifts of the fast variables.

Corollary 9. Assume that in a resonant layer at E^0_{m,j} capture into resonance is possible, that is, the condition |𝛾| ≤ max |q(6)| is fulfilled. Then a trajectory with parameter A will be captured into resonance if this parameter satisfies

E^-_{2j+1}/K(E^0_{m,k} ) + (_{m,k}/: < A < E^+_{2j+1}/K(E^0_{m,k} ) + (_{m,k}/: .

for a > 0. The change of variables y = x – !̄ reduces it to

I(E, %) = ∫_0^∞ ( 1/√(4f cos(y + !̄) – 4f cos !̄ + 2%y) – 1/√(2%y) ) dy.

From the implicit formula for the general solution we deduce, as above, that for ; → +∞ the equality

(2/%)(! – !̄) = (; + const – I(E, %))²

holds, whence

!_0 = !̄ + (%/2) (;² + 2;(const – I(E, %)) + (const – I(E, %))² ).

5.4 A Searching of Suitable Asymptotic Expansion


Using the known values of the parameters for ; → –∞ we derive the parameters of the asymptotics for ; → +∞, namely

!^+_0 = !^-_0 + 2 1_{0,0} I(E, %) + (%/2) I(E, %)² ,
1^+_0 = 1^-_0 + (%/2) I .    (5.34)

Formulas (5.34) give explicit connections between the asymptotics of the solution to eq. (5.29) for ; → ±∞. Changing the variable in the integral I(E, %) by z = 2%y yields

I(E, %) = (1/(2%)) I ,

where

I = ∫_0^∞ ( 1/√(8f sin(!̄ + z/4%) sin(z/4%) + z) – 1/√z ) dz,

which shows that I(E, %) = O(%^{-1} ) as % → 0. It would be desirable to derive the asymptotics of this integral up to O(1) as % → 0, but we have not been able to do this.

5.4.3 Connection Formulas for the Perturbed System

In this section we make the connection formulas for the asymptotics at ; → ±∞ more precise by using correction terms. The main result on capture into resonance is given in Theorem 25. We now consider the perturbed system, see eq. (5.28). Our concern will be to construct a solution to this system for large values of ;. Assume that the parameters !^± and 1^± of the solution depend slowly on ;. Substitute the asymptotics of the solution into the perturbed system and average the system over the fast time ;. As a result of integration of the averaged system we obtain

!^± = %^{3/2} ((1^±_0 )² ; + O(;^{-1} )) ,
1^± = 1^±_0 + %^{3/2} O(;^{-1/2} ),

where 1^-_0 = 1_{0,0} for uniformity. These formulas determine the modulation of the parameters of the solution to eq. (5.28) for large values of ;.

Let us derive an equation for the evolution of the parameter E for the perturbed system (5.28). To this end we differentiate the expression for E according to system (5.28). This gives

dE/d; ∼ %^{3/2} (f 1² sin ! + f² sin ! cos !) + (%^{5/2}/2)(f cos ! + 1² ) .    (5.35)

5 Autoresonances in nonlinear systems

The derivative of E is small; hence the parameter E changes little when ; runs over a bounded interval. One should study the behaviour of E for large ;, since the changes of E may be essential on a large interval. For large values of ; it is convenient to use the asymptotics of !_0 and 1_0 evaluated earlier. Substituting the asymptotics into eq. (5.35), gathering similar terms and averaging over the fast variable S = –%;²/2 + 1_{0,0} ; – !_{0,0} lead to an equation for the slow modulation Ẽ of the parameter E. More precisely, we get

dẼ/d; ∼ (%^{5/2}/2) (1²_{0,0} + f²/(2%²;²)) .

The change s = %^{5/2} ; reduces this equation to

dẼ/ds ∼ (1/2) 1²_{0,0} + %³ f²/(4s²) .    (5.36)

Integration gives

Ẽ ∼ const + (1²_{0,0}/2) s – %³ f²/(4s) .

For ; → –∞ we take into account the value 1_{0,0} = 0 from the outer asymptotic expansion. Adjusting the formulas with each other we get

Ẽ ∼ (1/2) %> – %³ f²/(4s) ,

where > is the phase shift to be treated as a parameter of the solution. At the separatrices and saddle points we have E = E_k . Hence it follows that on a separatrix > satisfies

(1/2) %> ∼ –f – %0k ,

k being an integer. Then

>_k ∼ –2f/% – 20k

holds on the kth separatrix. The constructed outer asymptotics suits if –%^{-1/2} 3 ≫ 1. In terms of the inner variable ; this inequality amounts to –; ≫ %^{-1}. Matching of the outer and inner asymptotics leads to the following assertion.

Theorem 25. Given any const > 0, assume that %^{-3/2} |E – E_k | > const for 3 → –0. Then the trajectories of solutions are not captured into resonance.

The formula for E_k implies E_{k+1} – E_k = %0. Hence, within strips of width %0 in the parameter E, the asymptotic solutions with parameter values in any strip E_{k–1} + C_1 %^{3/2} < E < E_k – C_2 %^{3/2}, where C_1 and C_2 are arbitrary positive constants, are not captured into resonance close to the centres of nonperturbed system (5.29).

The domain of validity of the inner asymptotics. From the viewpoint of perturbation theory the solution of system (5.28) can be sought in the form of a solution to eq. (5.29) with slowly varying parameters. From general considerations of the dependence of the solution to eq. (5.29) on the parameter E we derive the estimate %^{5/2} ;² ≪ 1, or |;| ≪ %^{-5/4} .


5.5 A Thin Manifold of Captured Trajectories

In this section we carry out an analysis of solutions of perturbed system (5.28) in neighbourhoods of saddle points and determine the width of the domain of those values of E for which the solutions are captured into oscillations about the centre. The rebuilding of solutions takes place in neighbourhoods of saddle points of nonperturbed system (5.29). Earlier we obtained an estimate for the domain of those values of E for which no capture happens. Capture into resonance for the perturbed system occurs within the domain E – E_k = o(%^{3/2} ). To describe the capture domain more precisely it is necessary to conduct a more delicate investigation of the trajectories of perturbed system (5.28) near the saddle points.

5.5.1 Slowly Varying Equilibrium Points

For perturbed system (5.28), the slowly varying solutions are analogues of equilibrium points. We look for such solutions in the form of formal series in powers of %^{1/2}, the so-called Puiseux series. Substitute such series into system (5.28) rewritten in terms of 4 = %;. As a result of the standard procedure of equating the coefficients of the same powers of %^{1/2} we get a recurrent system of algebraic equations for determining the coefficients !̃_k and 1̃_k of these expansions. Solving the system yields

!̃_k ∼ 0k + % (–1)^k 4/(2f) + %^{5/2} (–1)^{k+1}/(4f) + %³ (–1)^{k+1}/(48f³),
1̃_k ∼ 4/2 – %^{3/2} (4²/8 + (–1)^k f/2) + %³ (4³/16 + (–1)^k f 4/2)    (5.37)

for k = 0, 1, . . . . Asymptotics (5.37) are applicable for 4 ≪ %^{-3/2}. On introducing the new dependent variables !̃ = ! – !̃_k and 1̃ = 1 – 1̃_k and letting % → 0 in such a way that %|;| = O(1), we obtain

d!̃/d; ∼ 21̃ – %^{3/2} (1̃² + f (cos !̃ – 1)) – %^{5/2} (sin !̃/2 + ;1̃)
  + (%³/4) (4f 1̃ (cos !̃ + 1) + 2%;f (cos !̃ – 1) + (%;)² 1̃) ,
d1̃/d; ∼ –f sin !̃ + % (cos !̃ – 1)/2 + %² sin !̃/8 – %^{7/2} ; (cos !̃ – 1)/4 .    (5.38)

One can show that, for k = 2m, the slowly varying solutions are saddle points. A thorough analysis using the Wentzel–Kramers–Brillouin (WKB) method, in much the same way as in Ref. [92], actually shows that the points !̃ = 0, 1̃ = 0 for k = 2m + 1 are stable focal points.


5.5.2 A Rough Conservation Law

Expression (5.31) changes little on solutions of system (5.28) when ; → ∞. This follows immediately from eq. (5.36), and is easily shown by direct substitution of the constructed asymptotics for ; → ∞. However, this expression oscillates rapidly with amplitude of order %^{3/2}, see eq. (5.35). In order to study solutions in small neighbourhoods of turning points it is convenient to use a modified form of eq. (5.36), which oscillates in ; with amplitude much less than %^{5/2} and is valid for |;| ≪ %^{-5/2}, just as system (5.38). Set

Ẽ = 1̃² + f (cos !̃ – 1) – (%/2)(sin !̃ – !̃) + %^{3/2} (f 1̃ cos !̃ + 1̃³ – f 1̃) + %^{5/2} ;1̃²/2 + %² 1̃²/(8f) .

Differentiating in ; according to system (5.38) gives

dẼ/d; ∼ (%^{5/2}/2) (21̃² cos !̃ + f sin² !̃ – 1̃² )    (5.39)

for |;| ≪ %^{-5/2}.

5.5.3 Breaking Up of the Separatrix

Consider the separatrices arriving at a point (!_{2m} , 1_{2m} ). To be specific, assume that m = 0. At the saddle point two separatrices arrive when ; → ∞. For both separatrices the limit value of Ẽ as ; → ∞ is equal to %0m. However, one of these separatrices loops round the point (!_{2m+1} , 1_{2m+1} ). From eq. (5.39) it follows that the values of Ẽ on the separatrices on the left of the line !̃ = 20m differ by the value of Mel’nikov’s integral over the loop ℓ of the separatrix of non-perturbed system (5.29). Namely,

BẼ ∼ (%^{5/2}/2) ∫_ℓ (21̃² cos !̃ + f sin² !̃ – 1̃² ) d; .

The integral over the loop of the separatrix of the equation of the mathematical pendulum with outer momentum % tends, as % → 0, to the sum of two integrals over the upper and lower separatrices of the mathematical pendulum without outer momentum. The loop of the separatrix for the mathematical pendulum with outer momentum begins and ends at the same saddle point; for the mathematical pendulum without outer momentum the upper and lower separatrices begin and end at different saddle points. Write the integral over the loop of the separatrix as the sum of the integrals of the integrand’s terms and consider the integrals obtained in this way separately. The principal term of the first integral, after an obvious integration by parts, transforms to the form

2 ∫_ℓ 1̃² cos !̃ d; ∼ ∫_ℓ 1̃ cos !̃ d!̃ ∼ – ∫_ℓ sin !̃ d1̃ ∼ f ∫_ℓ sin² !̃ d;,

Figure 5.4. Trajectory scheme of system (5.28): a captured trajectory winding onto the focal point between the separatrix with Ẽ ∼ Ẽ_{2m} + BẼ and the separatrix with Ẽ ∼ Ẽ_{2m} .

which is due to eq. (5.29). Hence it follows that the sum of the first and second integrals amounts to

2f ∫ sin² !̃ d; ∼ (2/f) ∫_{–∞}^{∞} (d1̃/d;)² d; = (2/f) ∫_{–∞}^{∞} (d/d; (2√(2f)/cosh(√(2f);)))² d;
= (2/f) ∫_{–∞}^{∞} (4f sinh(√(2f);)/cosh²(√(2f);))² d; = 16√(2f) ∫_{–1}^{1} tanh²(√(2f);) d tanh(√(2f);) = 32√(2f)/3 .

The principal term of the third integral is evaluated explicitly, namely

∫_ℓ 1̃² d; ∼ ∫_{–∞}^{∞} 8f/cosh²(√(2f);) d; = 4√(2f) ∫_{–∞}^{∞} cosh^{-2}(√(2f);) d(√(2f);) = 8√(2f) ,

and so the formula for Mel’nikov’s integral takes the form

BẼ ∼ –%^{5/2} 8√(2f)/3 .

For Ẽ_{2m} < Ẽ < Ẽ_{2m} + BẼ the trajectories prove to be captured into the neighbourhood of the focal point !̃ = ! – !_k , 1̃ = 1 – 1_k with k = 2m + 1, see Figure 5.4.
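The hyperbolic-function integrals used above reduce to ∫ sech² u du = 2 and ∫_{−1}^{1} u² du = 2/3, and the bookkeeping is easy to confirm numerically. A sketch with an assumed value f = 1 (the quadrature routine and the factor accounting are illustrative, not the book's computation):

```python
import math

def integrate(func, a, b, n=200000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

f_par = 1.0                      # assumed value of the parameter f
c = math.sqrt(2 * f_par)         # sqrt(2f)

# third integral: int (2*sqrt(2f)/cosh(c*t))^2 dt = 8*sqrt(2f)
third = integrate(lambda t: (2 * c / math.cosh(c * t)) ** 2, -40, 40)

# first + second: (2/f) * int (d/dt [2*sqrt(2f)/cosh(c*t)])^2 dt = 32*sqrt(2f)/3
first_second = integrate(
    lambda t: (2 / f_par) * (2 * c * c * math.sinh(c * t) / math.cosh(c * t) ** 2) ** 2,
    -40, 40)

# |Melnikov integral| / eps^(5/2): half of the loop value, the loop doubling the line integrals
melnikov = 0.5 * 2 * (first_second - third)
print(abs(third - 8 * c) < 1e-3, abs(first_second - 32 * c / 3) < 1e-3,
      abs(melnikov - 8 * c / 3) < 1e-3)
```

The combination (32/3 − 8)√(2f) = 8√(2f)/3 is precisely the magnitude appearing in the formula for BẼ.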

5.6 Asymptotic Solution of the Capture Problem In this section we match the parameters of asymptotics constructed for the outer and inner expansions. As a result we derive connection formulas for non-captured asymptotic solutions and describe the domain of parameters containing those asymptotic solutions which are captured into resonance.


5.6.1 Matching to the Bottleneck Asymptotic Expansion

The parameters of the outer asymptotics are % and >. Matching of asymptotics (5.21) and (5.25) in the domain 1 ≪ %^{-1/2} s ≪ %^{-1/2} yields

A_{00} = >,    R_{01} = 0.

Matching of asymptotics (5.25) and (5.32) in the domain 1 ≪ –%^{-1/2} 3 ≪ %^{-1/2} leads to the formulas

!_{0,0} = >,    1_{0,0} = 0.

The expressions for E and Ẽ coincide in the principal term of the asymptotics for |;| ≪ %^{-5/2}. The split separatrices differ by the quantity BẼ ∼ –%^{5/2} 8√(2f)/3, which does not depend on ;. An equivalent shift in the parameter E also causes splitting of nearby separatrices. The separatrices of system (5.38) lie in the domain K of the parameter > given by

|> – 20k + (1/√f) I(E_k /(4f ), %/(4f ))| = o(%^{1/2} )

for k = 0, ±1, . . . . For >_1 , >_2 ∈ K such that >_2 – >_1 = 2BẼ, the trajectories with parameter > satisfying >_1 < > < >_2 are captured into resonance, that is, J ∼ √t as t → ∞. On setting E ∼ Ẽ one evaluates the length of the interval of those > for which the trajectories are captured into resonance by

B> ∼ 2%^{-1} BE ∼ 2%^{-1} BẼ ∼ %^{3/2} 16√(2f)/3 .

In the plane of the variable J the area of trajectories that are captured by the time t = O(%^{-2} ) has order B> %^{-2} = O(%^{-1/2} ). For non-captured solutions, the matching of the parameters of the asymptotics after passing the inner domain gives

r3_0 ∼ (1/4) I ,    >^+ ∼ > + (1/(8%)) I ² .

Without loss of generality one can make the change %^+ = % – %^{3/2} I /4. As a result we deduce that the new value of the parameter % in the outer expansion for s > 0 is actually O(%^{3/2} ) less than the value of % before passing the inner domain.

5.6.2 Numerical Investigations

In this section we present the results of two numerical simulations concerning the main analytic results for solutions of eq. (5.19). On investigating the family of trajectories of solutions to system (5.28) by the Runge–Kutta method of order 4 with step 0.001 we get a numerical value for the width BẼ of the interval of captured trajectories. For each value of the parameter % in [0.01, 0.3] with step 0.001 we construct 2024 trajectories with initial data for ; = –1,

Figure 5.5. The relative difference between the value of BẼ obtained from the asymptotic formula and the value obtained by numerical calculations, as a function of R = 1/%.

! = !_{s,0} , in the thickening set of initial data 1 = 1_0 %^{5/4} √(20) (N/2 – j)/N, where N = 2023 and j = 0, 1, . . . , 2023. A trajectory is considered to be captured if –20 < ! < 0 holds at ; = 70. The formula for the relative error \$ reads

\$ = (BẼ_num – BẼ)/BẼ ,

where BẼ_num is the numerical value for the width of the interval of values Ẽ corresponding to captured solutions for a given %. The graph of the relative error depending on the quantity R = 1/% is given in Figure 5.5. This graph shows that the relative difference changes by less than 0.1 in the large interval [3.3, 100] of R. Such a discrepancy between the asymptotic and numerical values of BẼ may be thought of as satisfactory.

To justify the analytical calculations we have conducted numerical simulations. Instead of the problem on the connection of asymptotics for ; → –∞ and ; → ∞, we study the family of solutions to the Cauchy problem for eq. (5.19) in the form (5.28) with initial data at ; = –%^{-5/2}. In such a setting there are two essential hindrances to the study of solutions with small values of the parameter %. First, the interval of numerical integration is large, L = O(%^{-5/2} ). Second, as the asymptotic analysis shows, the solutions oscillate rapidly, with frequency K = O(%|;|), far away from the origin. In this case the familiar error formula for integration by the Runge–Kutta method of order 4 with step h gives

B = O(h⁴ K⁴ L) = O(h⁴ %^{-17/2} ).

For computations with double-precision floating point the value of h is chosen to be greater than 10^{-3}, since for smaller steps the discreteness of the set of double-precision numbers affects the error adversely. This inequality and the error formula B for the Runge–Kutta method yield a restriction on the numerical values of the parameter %. One of the purposes of the numerical integration is to evaluate B>_num , the width of the interval of those values of > which correspond to captured trajectories, and compare it with the asymptotic formula for B>. By the above, B> = O(%^{3/2} ). Hence the error of numerical integration must satisfy B ≪ %^{3/2}. Combining this estimate with the minimal


step value in the Runge–Kutta method, one obtains the minimal value of the parameter %: h⁴ %^{-17/2} ≪ %^{3/2}, that is, % ≫ 10^{-6/5} ∼ 0.063, or R ≪ 10^{6/5} ∼ 15.85, where R = %^{-1}. Arguing in this way, we consider the family of Cauchy problems for % ∈ [0.09, 0.3] with step in % equal to 0.001. For these % we evaluated 2024 solutions on the interval t ∈ [–2%^{-5/2} , 3%^{-5/2} ] by the Runge–Kutta method of order 4 with integration step h = 0.001. The initial conditions for the family of Cauchy problems, parametrized by the numbers N = 0, 1, . . . , 2023, are chosen according to the constructed asymptotic solution for ; → –∞. The net of trajectories thickens according to the law %^{3/2}. More precisely,

!(s, %)|_{;_0} = S + %² ( f²(2s+1)/s² – % f(s+2) sin S/s² ),
1(S, %)|_{;_0} = %^{3/2} ( f cos S/s – %² f³(2s+1) sin S/s³ + %³ ( f²/(4s) + f²(4 – (s+2) cos 2S)/(4s³) )) ,

where ;_0 = –2%^{-5/2} , s = %^{5/2} ;_0 , S = s²/(2%⁴) + >_N and >_N = 5%√%(𝚤 – N/2)/N – 2f/% + 0.77. In Figure 5.6 the domain of the parameters R = 1/% and > which corresponds to captured solutions of the Cauchy problems discussed earlier is painted.
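The step-size restriction derived above is elementary arithmetic and can be checked directly; a sketch, with the exponents taken from the error estimate in the text:

```python
# RK4 error budget: Delta = O(h^4 * K^4 * L) with K = O(eps^(-3/2)) and L = O(eps^(-5/2)),
# i.e. Delta = O(h^4 * eps^(-17/2)); demanding Delta << eps^(3/2) gives eps >> h^(2/5)
h = 1e-3                      # the minimal reliable double-precision step
eps_min = h ** (2 / 5)        # smallest admissible eps
R_max = 1 / eps_min           # largest admissible R = 1/eps
print(round(eps_min, 3), round(R_max, 2))  # 0.063 15.85
```

This is why the simulations are restricted to % ≥ 0.09 (R ≤ 11), comfortably inside the admissible range.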

5.6.3 Asymptotics and Numerics: Points of View

Earlier we showed the parameters of the asymptotics in which the domain of solutions captured into resonance looks regular. An important technical result is the formula for the exponent in the asymptotic solution of eq. (5.19), namely

𝚤 ( –%^{-2} t + t²/2 – 2f/% + > + %² a ) .

Figure 5.6. Parameters of captured solutions (horizontal axis R = 1/%, vertical axis >).


In this formula > is the parameter of the solution; all the remaining summands except for the third one can be obtained from the analysis of the outer asymptotic expansion. The third term is determined by the construction of the inner asymptotic solution and the matching of the inner and outer asymptotics. This summand is indeed of crucial importance in order that Figure 5.1, with a thin spiral irregular in %, might change into Figure 5.6, which shows a regular domain of trajectories coming from the dark region and captured into resonance.

We establish an asymptotic formula for the connection between the parameters of those solutions that are not captured into autoresonance. Moreover, we get an estimate for the parameters, before the resonance, of those solutions which may be captured into autoresonance. The content of this section also makes clear which technical problems remain open. Computations with the principal terms of the asymptotics yield the width of the domain in the plane of the variables % and > corresponding to those solutions which are captured into resonance. To wit,

B> ∼ %^{3/2} 16√(2f)/3 .

In Figure 5.7 we demonstrate the graph of the dependence on R of the width of the interval of captured trajectories obtained by the asymptotic formula and by numerical simulation. On the same figure we show the relative difference between the widths of the intervals of > for trajectories captured into resonance, evaluated numerically and analytically for diverse values of R.


Figure 5.7. Left: the graph of B>, the width of the interval of parameters > corresponding to captured solutions, as a function of the quantity R = %–1. The dotted line corresponds to the constructed asymptotic formula; the continuous line is a piecewise approximation of B> obtained numerically from the constructed family of numerical solutions of the Cauchy problems. Right: the graph of the relative difference (B> – B>num)/B>num, where B>num is the length of the interval of the parameter > for trajectories captured into resonance, evaluated numerically for diverse values of the parameter R = 1/%.


5 Autoresonances in nonlinear systems

One can see on the right-hand side of Figure 5.7 that the relative difference does not exceed 0.22 in the given interval of R = 1/%. Moreover, it remains small and does not change essentially over the given values of the small parameter %. In order to obtain numerical results for eq. (5.19) with smaller values of %, further computations are required, for example, in arbitrary-precision arithmetic. On the other hand, to make the obtained asymptotic formulas, in particular the one for B>, more exact, one has to construct higher-order correction terms. However, this requires essential additional analytic investigation of the rotary trajectories of the perturbed mathematical pendulum with external torque.

5.7 Capture into Parametric Resonance

We study the primary parametric resonance equation:

i%6󸀠 + (3(() + |6|2)6 – 6∗ = 0.    (5.40)

Our goal is to develop an asymptotic theory of the capture into the parametric autoresonance with 3(() ≡ –(. We give the asymptotic analysis of the capture into the parametric autoresonance and connect the asymptotic formulas for the solution of eq. (5.40) before and after the capture. The capture into a nonlinear (not parametric) resonance was studied in Ref. [25]. The capture into the nonlinear resonance is associated with loss of stability and slow crossing through the separatrix (see [67, 126]). The autoresonance phenomenon was proposed for accelerators of relativistic particles in Refs. [115, 153]. Later the autoresonance was found in many different oscillatory and wave processes (see review). A mathematical approach to the capture into the autoresonance was considered in Ref. [77]. The capture into autoresonance is accompanied by a bifurcation, or separatrix crossing, in the primary resonance equation (see [92]). It is well known that the parametric resonance leads to exponential growth of the solutions of the linear equation (see [41]). However, the nonlinear terms unbalance the frequency of the external force and the frequency of the oscillations of the solution of the nonlinear equation. This changes the amplitude of the oscillations by a value of the order of the square root of the perturbation. This phenomenon was shown in Ref. [15] by analysis of the solution of the primary parametric resonance equation in the form:

R󸀠 + R sin(2-) = 0,  -󸀠 – 3 – R2 + cos(2-) = 0,    (5.41)

where 3 = const. Later eq. (5.41) was studied in Refs. [36, 85], when 3 ≢ const. It was shown that the autoresonance phenomenon takes place for the primary parametric resonance equation when 3 < 0 and the modulus of the coefficient, |3|, grows with respect to (. We consider the primary parametric resonance equation in the form (5.40). One can obtain this equation after the substitution 6 = R exp(i-). There are two reasons to investigate eq. (5.40) with 3(() ≡ –(. On the one hand, the case 3(() ≡ –( is


the simplest one that contains the autoresonance phenomenon for parametrically driven systems. On the other hand, this case preserves most of the essential features of the solution of eq. (5.40). Our analysis offers another point of view on the subject. The capture into the parametric autoresonance may be considered as a loss of stability in the pitchfork bifurcation. The dynamic theory of the pitchfork bifurcation for ordinary second-order differential equations with a slowly varying coefficient was considered in Refs. [60, 62, 110]. The bifurcation in which two centres and a saddle coalesce is called the supercritical pitchfork. The solution in the bifurcation layer is approximated by solutions of the Painlevé-2 equation (see Ref. [60]). Later the connection formulas were obtained for the solution before and after the supercritical pitchfork bifurcation for an asymptotic solution of a special form of the perturbed Painlevé-2 equation with dissipation and slowly varying bifurcation parameter (see Ref. [110]). The supercritical pitchfork bifurcation for the equation A󸀠 = i(+A – |A|2 – \$A∗) was considered in Ref. [58]. Here + and \$ are two parameters. We study the same equation, but with the parameter + varying slowly. We show that the capture into parametric resonance may be explained as the pitchfork bifurcation in the primary parametric resonance equation. We prove that the solution close to the moment of the capture is described by the Painlevé-2 equation, and we obtain the connection formulas for the solution of the primary parametric resonance equation before and after the capture using the matching of asymptotic expansions. An asymptotic theory of such capture was developed in Ref. [93]. In this section we discuss the capture into the parametric resonance for the solution of eq. (5.40) and present the numeric and qualitative analysis of the capture. We formulate the main result at the end of this section.

5.7.1 Numeric Analysis

Let us consider two numeric solutions of eq. (5.40) (see Figure 5.8). These solutions differ only at the initial moment. The first solution (left) corresponds to the initial moment t = –2 and the second one (right) corresponds to t = –2.01. We see that the solution of the equation is very sensitive with respect to the initial data. The illustrated phenomenon is the scattering on the parametric autoresonance. Our goal is to explain the solution of the scattering problem in terms of the asymptotic theory.

5.7.2 Qualitative Analysis

Here we present the qualitative analysis for the equation with the “frozen” coefficient 3 ≡ –T:

i%6󸀠 + (–T + |6|2)6 – 6∗ = 0.    (5.42)


Figure 5.8. The figure shows two solutions of eq. (5.40). The left curve shows the solution of the Cauchy problem for eq. (5.40) with t = –2.01, R(6) = 0.02, I(6) = 0, % = 0.01. The right curve shows the solution of the Cauchy problem for eq. (5.40) with t = –2, R(6) = 0.02, I(6) = 0, % = 0.01. (Axes: Re 6, Im 6, and 8.)
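The sensitivity shown in Figure 5.8 can be reproduced with a short numerical experiment. The sketch below is our illustration, not part of the original text: eps stands for %, psi for 6, and a larger value % = 0.05 is used (instead of the 0.01 of the figure) so that the integration stays fast.

```python
# Integrate the primary parametric resonance equation (5.40) with lambda = -t,
#   i*eps*psi' + (-t + |psi|^2)*psi - conj(psi) = 0,
# i.e. psi' = (1j/eps)*((-t + |psi|^2)*psi - conj(psi)),
# for two Cauchy problems that differ only in the initial moment.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05  # illustrative; Figure 5.8 uses eps = 0.01

def rhs(t, y):
    psi = y[0] + 1j * y[1]
    dpsi = (1j / eps) * ((-t + abs(psi) ** 2) * psi - np.conj(psi))
    return [dpsi.real, dpsi.imag]

def final_state(t0):
    sol = solve_ivp(rhs, (t0, 0.0), [0.02, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] + 1j * sol.y[1, -1]

a, b = final_state(-2.0), final_state(-2.01)
print(abs(a), abs(b), abs(a - b))  # the final states may differ strongly
```

Both trajectories stay bounded, but their final states depend strongly on the initial moment, in agreement with the scattering picture described in the text.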

The trajectory of the solution of the equation with the varying coefficient 3(() = –( is locally close to the trajectories of eq. (5.42) with T = (. Therefore, the solution of the equation with the frozen coefficient gives the local behaviour of the solution of eq. (5.40). The Hamiltonian of the equation is

H(6, 6∗) = –(1/2)|6|4 + T|6|2 + (1/2)((6∗)2 + 62).

It is easy to see that there exists only one centre when T < –1. There exist two centres and one saddle when –1 < T < 1, and three centres and two saddles when T > 1. We show the phase portraits of the equation for different values of the parameter T in Figure 5.9. These three pictures explain the numeric solutions shown in Figure 5.8. In Figure 5.8 one can see the non-resonant solution when t ≤ –1, the capture when –1.1 ≤ t ≤ –0.8 and the captured solution when –1 < t. Before the capture the solution oscillates around the unique centre present for T < –1. When T > –1 we see the captured solution that oscillates around one of the two centres from the middle picture in Figure 5.9. The left trajectory in Figure 5.8 is captured by the left centre of the middle picture in Figure 5.9, and the right trajectory in Figure 5.8 is captured by the right centre. The bifurcation at T = 1 does not affect the captured solutions; these solutions remain close to the same centres. Below we obtain asymptotic solutions that illustrate the qualitative and numeric analysis.
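The count of centres and saddles stated above can be checked directly: the equilibria of the frozen equation are the critical points of H. A small SymPy sketch (our illustration; it writes 6 = x + iy, so that ((6∗)2 + 62)/2 = x2 – y2):

```python
# Critical points of H(x, y) = -(x^2 + y^2)^2/2 + T*(x^2 + y^2) + (x^2 - y^2)
# for sample values of the frozen parameter T.
import sympy as sp

x, y, T = sp.symbols('x y T', real=True)
H = -sp.Rational(1, 2) * (x**2 + y**2)**2 + T * (x**2 + y**2) + x**2 - y**2

def count_critical_points(Tval):
    eqs = [sp.diff(H, v).subs(T, Tval) for v in (x, y)]
    sols = sp.solve(eqs, [x, y], dict=True)
    return len([s for s in sols if all(v.is_real for v in s.values())])

print(count_critical_points(-2))  # 1: a single centre for T < -1
print(count_critical_points(0))   # 3: two centres and one saddle for -1 < T < 1
print(count_critical_points(2))   # 5: three centres and two saddles for T > 1
```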


Figure 5.9. Phase portraits of eq. (5.42): on the left picture T < –1, on the middle picture T = 0, and on the right picture T > 1.

Moreover, we obtain the formula that connects the asymptotic solutions before the capture and the solutions captured by the left or the right centre. The formula is obtained by the matching method [64]. Let us formulate the result.

Theorem 26. Let the asymptotic solution of the primary resonance equation be

6 ∼ %1/2 !1,0 [((( – 1)/(( + 1))1/4 sin(s) + i((( + 1)/(( – 1))1/4 cos(s)],  as ( < –1,

where

s = 9/% + >1,0 – !21,0 (( + ln|(( – 1)/(( + 1)|),

and

9 = (1/2)(√((2 – 1) – (1/2) ln(( + √((2 – 1)).

The constants !1,0 and >1,0 are the parameters of the solution. If

>1,0 ≠ (3/2)!21,0 ln(2) – 0/4 – arg(A(i!21,0/2)) + 70 (mod 20),  7 = 0, 1,

then the asymptotic solution has the form:

I(j)(S, (, %) ∼ (–1)j [√(1 + () + %1/2 A0,0 ((1 + ()–1/4 cos(S) + (i/2)√(1 + () sin(S))],

S ∼ K/% + >0,0 + (A20,0/2)(5( + (3/2)(2 + (1/3)(112 – 8()√(1 + () + 3 ln(1 + ())    (5.43)

in the domain ( > –1. Here

K ∼ (4/3)(( + 1)3/2,


j = 2, 3, and the constants >0,0 and A0,0 are the parameters of the asymptotic solution. The parameters are defined by

A20,0 = (1/0) ln((1 + |p|2)/(3|I(p)|)),

>0,0 = –0/4 + (7/2)A20,0 ln(2) – arg(A(iA20,0)) – arg(1 + p2),

where

p = √(exp(0!21,0) – 1) exp(i(3/2)!21,0 ln(2) – i0/4 – i arg(A(i!21,0/2)) – i>1,0).

In formula (5.43), j = 2 if I(p) > 0 and j = 3 if I(p) < 0.

5.8 WKB Solution Before the Capture

Let us construct the solution of the WKB type before the capture. The qualitative analysis shows that the capture into the parametric resonance occurs at ( = –1. When ( < –1 the solution of eq. (5.42) has a unique centre at 6 = 0. This suggests that eq. (5.40) has oscillating solutions which remain close to 6 = 0. We construct the asymptotic expansion for solutions of this type in this section.

5.8.1 The WKB Solution Close to Zero

Let us consider the WKB solution in the form:

6 = %1/2 ∑n=1∞ %n–1 6n(s, ().    (5.44)

The leading-order term of the asymptotic expansion is the solution of the equation:

i9󸀠 𝜕s 61 – (61 – 6∗1 = 0.    (5.45)

The higher-order terms of eq. (5.44) are solutions of the equations:

i9󸀠 𝜕s 6n – (6n – 6∗n = fn,  n = 2, 3, . . . ,    (5.46)

where fn = –i𝜕( 6n–1 – i𝜕( >n–1 𝜕s 61 + hn. Here hn is a polynomial of the third order with respect to 6m, m < n.


5.8.1.1 The Basis of the Solutions for the Linear Equation

To solve eq. (5.45) and the equations for the higher-order terms we construct a basis of two linearly independent solutions of eq. (5.45) in the WKB approximation. Denote the basis solutions of eq. (5.45) by v1 and v2. Let the first solution be

v1(4, () = 9󸀠 cos(4) – i(( + 1) sin(4),

where 4 = 9/%, (9󸀠)2 = (2 – 1. The Wronskian of two linearly independent solutions of eq. (5.45) is constant. Therefore we construct the second solution as

v2(4, () = –(1/(( + 1)) sin(4) – (i/9󸀠) cos(4).

The Wronskian of these solutions is

w(v1, v2) = v1 v2∗ – v1∗ v2 = 2i.

The general solution of eq. (5.45) has the form

v(4, () = C1(()v1(4, () + C2(()v2(4, ().

Using the WKB approximation we specify C1(() and C2(() such that

C1 = C1,0/√((–( – 1)9󸀠),  C2 = C2,0 √((–( – 1)9󸀠).

Here C1,0 and C2,0 are arbitrary constants. We use the functions

w1(4, () = ((( – 1)/(( + 1))1/4 cos(4) + i((( + 1)/(( – 1))1/4 sin(4)

and

w2(4, () = ((( – 1)/(( + 1))1/4 sin(4) – i((( + 1)/(( – 1))1/4 cos(4)

as the basis of linearly independent solutions of eq. (5.45).
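One can check by direct substitution that v1 indeed solves eq. (5.45). The following SymPy sketch is our illustration (omega_p stands for 9󸀠 and zeta for (; the relation (9󸀠)2 = (2 – 1 is imposed by substitution):

```python
# Verify that v1(xi, zeta) = omega' * cos(xi) - i*(zeta + 1)*sin(xi) solves
#   i*omega' * d_xi v - zeta*v - conj(v) = 0,  with (omega')^2 = zeta^2 - 1.
import sympy as sp

xi, zeta, w = sp.symbols('xi zeta omega_p', real=True)  # w plays the role of omega'
v1 = w * sp.cos(xi) - sp.I * (zeta + 1) * sp.sin(xi)

res = sp.I * w * sp.diff(v1, xi) - zeta * v1 - sp.conjugate(v1)
res = sp.expand(res).subs(w**2, zeta**2 - 1)  # impose (omega')^2 = zeta^2 - 1
print(sp.simplify(res))  # 0
```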


5.8.2 Construction of the WKB Solution for the Nonlinear Equation (5.40)

We construct the main term of the WKB asymptotic solution of eq. (5.40) in the form:

61(s, () = ![((( – 1)/(( + 1))1/4 sin(s) – i((( + 1)/(( – 1))1/4 cos(s)],    (5.47)

where s = 9/% + > and the arbitrary constants ! and > are parameters of the solution. These parameters are represented by asymptotic series:

! = ∑k=1∞ %k–1 !k((),  > = ∑k=1∞ %k–1 >k(().

It is easy to see that the solution of eq. (5.46) is bounded if the right-hand side is orthogonal to the two linearly independent solutions of the linearized equation:

∫020 [fn(z, ()w1∗(z, () + fn∗(z, ()w1(z, ()] dz = 0,

∫020 [fn(z, ()w2∗(z, () + fn∗(z, ()w2(z, ()] dz = 0.    (5.48)

Let us construct the bounded solution of eq. (5.46) at n = 2. The right-hand side is

f2 = –i d61/d( – |61|2 61 ≡ –i𝜕( !1 𝜕! 61 – i!1 𝜕s 61 – |61|2 61.

Formulas (5.48) give the equations for the functions !1(() and >1(():

𝜕( !1 = 0,  2𝜕( >1 + !21 |61|2 = 0.

Using the formula for 61 we obtain

2𝜕( >1 + !21 ((( + 1)2 + (( – 1)2)/((( + 1)(( – 1)) = 0.

Since ((( + 1)2 + (( – 1)2)/((2 – 1) = 2 + 2/(( – 1) – 2/(( + 1), integration gives the solution of this equation in the form

>1 = >1,0 – (!01)2 (( + ln|(( – 1)/(( + 1)|).

Analogously, the next terms of the WKB asymptotic expansion are constructed. The equation for the nth correction term of the WKB asymptotic solution has the form:

i9󸀠 6󸀠n – (6n – 6∗n = fn,    (5.49)


where fn = 𝜕( !n–1 𝜕! 61 + !𝜕( >n–1 𝜕s 61 + hn. Here hn depends on the lower-order terms of the WKB asymptotic expansion. These equations allow us to determine the coefficients !n and >n.

5.8.2.1 Asymptotic Behaviour Close to the Turning Point

The WKB asymptotic expansion is not valid at the turning point ( = –1. As ( → –1 – 0 the asymptotic behaviour of the coefficients of asymptotic series (5.44) is given by the following formulas. For the leading-order term we obtain

61 ∼ !1,0 (–2/(( + 1))–1/4 sin((2√2/3)(1 – ()3/2 – (!1,0)2 ln(1 + () + >1,0),  ( → –1 – 0,

where !1,0 = const and >1,0 = const are the parameters of the solution. The second term of asymptotic series (5.44) behaves as

62 = O((–1 – ()–7/4),  ( → –1 – 0.

The recurrent calculations give

6n+1 = O(6n (–1 – ()–3/2),  ( → –1 – 0.

5.8.2.2 The Domain of Validity

Expansion (5.44) retains its asymptotic property with respect to % when

%6n+1/6n ≪ 1  as  ( → –1 – 0.

This yields the domain of validity of eq. (5.44):

%–2/3 (–1 – () ≫ 1.

5.9 The Painlevé Layer

To construct the asymptotic solution when ( is close to –1 we use the scaled variables:

( + 1 = %2/3 ',  6 = %1/3 x(', %) + i%2/3 y(', %).

This change of variables leads to the system of equations for the real and imaginary parts of the function 6:

x󸀠 + 2y = %2/3 (' – x2)y – %4/3 y3,
y󸀠 + (' – x2)x = %2/3 y2 x.    (5.50)


5.9.1 The Asymptotic Expansion in the Painlevé Layer

We construct the solution of system (5.50) in the form:

x(', %) = ∑n=0∞ %2n/3 xn('),  y(', %) = ∑n=0∞ %2n/3 yn(').    (5.51)

The leading-order terms of eq. (5.51) are solutions of the system of equations:

x0󸀠 + 2y0 = 0,  y0󸀠 + (' – x02)x0 = 0.    (5.52)

The higher-order terms are determined from the system:

xn󸀠 + 2yn = Hn(1),  yn󸀠 + (' – 3x02)xn = Hn(2).    (5.53)

Here the functions Hn(1) and Hn(2) depend on ' and the lower-order terms of asymptotic expansion (5.51). For example,

H1(1) = (' – x02)y0,  H1(2) = y02 x0;
H2(1) = (' – x02)y1 – 2x0 y0 x1 – y03,  H2(2) = 2y0 y1 x0 + y02 x1.

The leading-order term x0 of the asymptotic expansion is a solution of the second-order equation:

x0󸀠󸀠 + 2(–' + x02)x0 = 0.    (5.54)

The higher-order terms of eq. (5.51) satisfy linear differential equations of the second order:

xn󸀠󸀠 + 2(–' + 3x02)xn = hn,  hn = Hn(2) + 2 dHn(1)/d'.    (5.55)

The right-hand side of eq. (5.55) has the following structure:

hn = Hn(2) + 2𝜕' Hn(1) + 2 ∑k=0n–1 𝜕xk Hn(1) xk󸀠 + 2 ∑k=0n–1 𝜕yk Hn(1) yk󸀠.

The substitution xk󸀠 = –2yk + Hk(1) and yk󸀠 = (3x02 – ')xk + Hk(2) gives

hn = Hn(2) + 2𝜕' Hn(1) + 2 ∑k=0n–1 𝜕xk Hn(1) (–2yk + Hk(1)) + 2 ∑k=0n–1 𝜕yk Hn(1) ((3x02 – ')xk + Hk(2)).    (5.56)


The substitutions

' = 21/3 z,  x0(') = 21/3 i u(z)    (5.57)

reduce eq. (5.54) to the Painlevé-2 equation:

u󸀠󸀠 – zu – 2u3 = 0.    (5.58)

Using the substitution xn(') = 21/3 i un(z) we obtain the perturbed linearized Painlevé-2 equation:

un󸀠󸀠 – zun – 6u2 un = Hn.    (5.59)

The Painlevé-2 equation is integrable by the isomonodromic deformation method. The solution u(z; !̃, >̃) of the equation is called the Painlevé transcendent; it depends on the two parameters !̃ and >̃. It is known that the real solution of eq. (5.58) has no singularities for z ∈ ℝ [69]. Therefore, the main term of the asymptotic expansion is represented by the Painlevé transcendent u(z, !̃, >̃). The homogeneous linearized Painlevé-2 equation

w󸀠󸀠 – zw – 6u2 w = 0

has two linearly independent solutions:

u1 = 𝜕!̃ u(z, !̃, >̃),  u2 = 𝜕>̃ u(z, !̃, >̃).

These solutions allow us to represent the higher-order terms of expansion (5.51) in the form:

xn = Bn(1) X1(') + Bn(2) X2(') + X1(') ∫'0' (Hn(&)X2(&)/W) d& – X2(') ∫'0' (Hn(&)X1(&)/W) d&.    (5.60)

Here W ≡ const ≠ 0 is the Wronskian of the solutions X1 = i u1(2–1/3 ') and X2 = i u2(2–1/3 ') of the linearized equation; Bn(1), Bn(2) and '0 are arbitrary constants.

5.9.1.1 The Asymptotic Behaviour on the Left-Hand Side of the Validity Interval

In this paragraph we determine the domain of validity of asymptotic expansion (5.51) and match the expansion with the WKB asymptotic expansion obtained in Section 5.8. For this we need the asymptotic behaviour of the Painlevé-2 transcendent u(z, !̃, >̃) as z → –∞. The asymptotic expansion of the Painlevé-2 solution as z → –∞ has the form ([13, 68]):

u(z) = i!̃(–z)–1/4 sin((2/3)(–z)3/2 + (3/4)!̃2 ln(–z) + >̃) + o((–z)–1/4).    (5.61)

Here !̃ and >̃ are the parameters of the solution of the Painlevé-2 equation.
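Formula (5.61) can be illustrated numerically (our sketch, not from the original text): for a real-valued solution of eq. (5.58), written as u󸀠󸀠 = zu + 2u3, a solution with small data oscillates as z → –∞ with an envelope decaying like (–z)–1/4.

```python
# Integrate the real Painlevé-2 equation u'' = z*u + 2*u^3 (eq. (5.58) as a
# first-order system) towards z -> -infinity and compare the oscillation
# envelope at two windows with the (-z)^(-1/4) law of formula (5.61).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(z, w):
    u, up = w
    return [up, z * u + 2.0 * u**3]

sol = solve_ivp(rhs, (0.0, -45.0), [0.1, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

def envelope(center, half=2.0):
    zz = np.linspace(center - half, center + half, 2000)
    return np.max(np.abs(sol.sol(zz)[0]))

ratio = envelope(-40.0) / envelope(-10.0)
print(ratio)  # roughly (40/10)**(-1/4) ≈ 0.71
```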


Formula (5.61) gives the asymptotic behaviour of the solutions U1 and U2 of the linearized equation as z → –∞:

U1 ≡ 𝜕!̃ u ∼ (3i/2)!̃2 ln(–z)(–z)–1/4 cos((2/3)(–z)3/2 + (3/4)!̃2 ln(–z) + >̃),
U2 ≡ 𝜕>̃ u ∼ i!̃(–z)–1/4 cos((2/3)(–z)3/2 + (3/4)!̃2 ln(–z) + >̃).

Using these asymptotic behaviours we obtain the asymptotic formulas for the higher-order terms as z → –∞:

xn = An– X1(') + Bn– X2(') + Xn–(z).    (5.62)

Here An– and Bn– are constants; they are obtained below by the matching procedure with the external asymptotic solution, and Xn–(z) is the particular solution of the equation for the nth term xn of the asymptotic expansion.

5.9.1.2 The Left Border of the Interval of Validity for the Painlevé Layer

Using the asymptotic behaviour of x0 as ' → –∞ and the asymptotic behaviour of X1 and X2, we obtain the asymptotic behaviour of the higher-order terms. The equation for x1 is

x1󸀠󸀠 + (–' + x02)x1 = O((–')7/4),  ' → –∞.

The particular solution has the order

X1– = O((–')3/4),  ' → –∞.

The higher-order terms satisfy equations of the form

xn󸀠󸀠 + (–' + x02)xn = O((–')2 xn–1).

By sequential calculations we obtain Xn– = O((–')n–1/4). This estimate for the higher-order terms gives the left border of the interval of validity of eq. (5.51):

%2(n+1)/3 xn+1/(%2n/3 xn) ∼ %2/3 Xn+1–/Xn– ∼ %2/3 (–') ≪ 1,

that is, –%2/3 ' ≪ 1,  ' < 0, % → 0.


5.9.2 Matching with the WKB Asymptotic Expansion

The domains of validity of asymptotic expansions (5.44) and (5.51) intersect. Matching with the WKB asymptotic expansion constructed in Section 5.8 allows us to obtain the parameters !̃ and >̃ of the solution of the Painlevé-2 equation; for example:

!̃ = !1,0,  >̃ = >1,0.    (5.63)

5.9.2.1 Asymptotic Behaviour on the Right-Hand Side

The theorem by Its–Kapaev–Belogrudov [13, 68] gives the behaviour as z → +∞ of the solution of the Painlevé-2 equation with asymptotics (5.61).

Theorem 27 (Its–Kapaev–Belogrudov). If a solution of the Painlevé-2 equation has asymptotic behaviour (5.61) as z → –∞ and

>̃ = (3/2)!̃2 ln(2) – 0/4 – arg(A(i!̃2/2)) + *0 (mod 20),  * = 0, 1,

then the solution of the Painlevé-2 equation is

u(z) = (iq/(2√0)) z–1/4 exp(–(2/3)z3/2)(1 + o(1)),  z → ∞,

where q2 = exp(0!̃2) – 1 and sgn(q) = 1 – 2*. Otherwise the absolute value of the solution increases:

u(z) = ±i√(z/2) ± i(2z)–1/4 1 cos((2√2/3)z3/2 – (3/2)12 ln(z) + 5) + o(z–1/4),    (5.64)

where 1 > 0 and 0 ≤ 5 < 20. The parameters 1 and 5 are defined by !̃ and >̃:

12 = (1/0) ln((1 + |p|2)/(3|I(p)|)),

5 = –0/4 + (7/2)12 ln(2) – arg(A(i12)) – arg(1 + p2),

where

p = (exp(0!̃2) – 1)1/2 exp(i(3/2)!̃2 ln(2) – i0/4 – i arg(A(i!̃2/2)) – i>̃),

and the sign “+” corresponds to I(p) < 0. If 1 = 0, then

u(z) = ±i√(z/2) ± i z–5/2/(8√2) + O(z–11/2).    (5.65)


These formulas give the asymptotic solution of system (5.50) as ' → ∞. The solution of the scattering problem for system (5.50) allows us to solve the problem of the capture into the resonance for the small-amplitude solution of the primary resonance equation.

5.9.2.2 The Right Border of the Interval of Validity for the Asymptotic Expansions in the Painlevé Layer

Formulas (5.64), (5.65) and substitutions (5.57) give the leading-order term of the asymptotic solution captured into the resonance. The solution has the behaviour:

x0(') ∼ ∓21/3 √',  ' → ∞.

The nonlinear terms in the perturbed Painlevé-2 equation (5.50) give xn(') = O('n+1/2). Then asymptotic expansion (5.51) is valid in the domain %2/3 ' ≪ 1.

5.10 The Captured WKB Asymptotic Solution

The previous section shows that the asymptotic solution of eq. (5.40) increases as ±√(. When ( → –1 + 0 the leading-order term of the asymptotic expansion in the Painlevé layer corresponds to one of the centres of the equation with the frozen coefficient. Equation (5.40) has two slowly varying solutions for –1 < ( < 1. In this section we construct the formal asymptotic expansion for these solutions, and also for the slowly varying solutions as ( > 1.

5.10.1 Slowly Varying Solutions

Let us construct the solution of eq. (5.40) as follows:

U((, %) = ∑n=0∞ %n Un(().    (5.66)

The main term of the asymptotic expansion is defined by an algebraic equation:

–(U0 + |U0|2 U0 – U0∗ = 0.    (5.67)

Using the complex conjugate equation we obtain [U0 + U0∗][U0 – U0∗] = 0. This shows that the leading-order term of the asymptotic expansion is either purely real or purely imaginary. The real terms are

U0(1) = 0,  U0(2,3) = ±√(1 + (),    (5.68)


and the imaginary terms are

U0(4,5) = ±i√(( – 1).    (5.69)
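The branches (5.68) and (5.69) are easy to verify by substitution into eq. (5.67) (our SymPy sketch; a denotes a real U0, and b denotes the imaginary part of a purely imaginary U0 = ib):

```python
# For U real, eq. (5.67) reads -zeta*a + a^3 - a = 0, so a^2 = zeta + 1.
# For U = i*b, it reads i*b*(-zeta + b^2 + 1) = 0, so b^2 = zeta - 1.
import sympy as sp

a, b, zeta = sp.symbols('a b zeta', real=True, positive=True)

real_eq = -zeta * a + a**3 - a
imag_eq = -zeta + b**2 + 1

print(sp.simplify(real_eq.subs(a, sp.sqrt(zeta + 1))))  # 0
print(sp.simplify(imag_eq.subs(b, sp.sqrt(zeta - 1))))  # 0
```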

The algebraic equations for the higher-order terms Un(j), j = 2, 3, 4, 5, are as follows:

[–( + 2|U0(j)|2]Un(j) + [(U0(j))2 – 1](Un(j))∗ = Qn.

Here

Qn = –i(Un–1(j))󸀠 + ∑k+l+m=n Cklm Uk(j) Ul(j) (Um(j))∗,

where the Cklm are constants. Let us represent this linear equation as a system of equations for the real and imaginary parts. The determinant of the system of equations is

B(j) = 4(1 + (),  j = 2, 3,

and

B(j) = 4(1 – (),  j = 4, 5.

The system for the higher-order terms is therefore solvable if j = 2, 3 and ( > –1; if j = 4, 5, it is solvable when ( > 1. When ( → ∞ the order of the nth term of the asymptotics is O((–1/2) for n = 1, 2, . . . . Therefore, asymptotic expansion (5.66) is uniform with respect to ( when ( > –1 for j = 2, 3 and when ( > 1 for j = 4, 5.

5.10.2 WKB Asymptotic Expansion Close to the Slowly Varying Centres

Here we construct the WKB asymptotic expansion that oscillates close to the slowly varying centres U(j). The WKB asymptotic expansion has the following form:

I((, %) = U(j)((, %) + ∑k=1∞ %k/2 Ik(j)(S, (, %).    (5.70)

Here

S = K(()/% + ∑k=0∞ %k/2 >k(().

The equations for the higher-order terms are

iK󸀠 𝜕S Ik(j) + [–( + 2|U0(j)|2]Ik(j) + [(U0(j))2 – 1](Ik(j))∗ = Gn,    (5.71)

Gn = –i ∑l+k=n–2 >l󸀠 𝜕S Ik(j) – i𝜕( In–2(j) – ∑k+l+m=n Cklm(U0(j)) Ik(j) Il(j) (Im(j))∗,

where the Cklm(U0(j)) are polynomials.


5.10.2.1 WKB Solutions for the Equation in Variations

The homogeneous linearized equation (5.71) is called the equation in variations:

iK󸀠 𝜕(1 V + (–( + 2|U(j)|2)V + [(U(j))2 – 1]V∗ = 0.

Two linearly independent solutions of this equation have the following form. The first asymptotic solution is

V1((1, () = K󸀠 cos((1) + i((–( + 2|U(j)|2) + [(U(j))2 – 1]) sin((1).

Here (1 = K/% and

K󸀠 = √([–( + 2|U(j)|2]2 – |(U(j))2 – 1|2).

The second asymptotic solution is

V2((1, () = (1/([–( + 2|U(j)|2] + R((U(j))2 – 1))) sin((1) – i(((U(j))2 – 1) + (–( + 2|U(j)|2))/(K󸀠([–( + 2|U(j)|2] + R((U(j))2 – 1))) cos((1).

The Wronskian of these solutions is

W(V1, V2) = V1 V2∗ – V1∗ V2 = 2i.

Using the explicit formulas for the asymptotic expansion of U(j) for j = 2, 3 as % → 0 one can obtain:

K󸀠 ∼ 2√(1 + (),
V1 ∼ 2√(1 + () cos((1) + 2i(( + 1) sin((1),
V2 ∼ (1/(2(1 + ())) sin((1) – (i/(2√(( + 1))) cos((1).

5.10.2.2 The First Correction Term of the WKB Asymptotic Expansion

Let the first correction term I1(j)(S, (, %) of the WKB asymptotic solution be

I1(j)(S, (, %) = A((, %)V1(S, (),

where

A((, %) = ∑k=0∞ %k/2 Ak(().

The first correction term of asymptotic expansion (5.70) has two series of parameters, Ak(() and >k((), k ∈ ℕ. These parameters are defined in the usual way of the WKB theory: they are solutions of the anti-resonance equations

∫020 [Gk(z, ()V1∗(z, () + Gk∗(z, ()V1(z, ()] dz = 0,

∫020 [Gk(z, ()V2∗(z, () + Gk∗(z, ()V2(z, ()] dz = 0.    (5.72)

This pair of equations defines the parameters Ak–1(() and >k–3((), where k > 2. Below we show explicitly the process of determining the parameters A0(() and >0(().

The particular solution of the inhomogeneous equation has a form S

3 (G (3, ()V2∗ (3, () + G∗2 (3, ()V2 (3, ()) Ĩ (j) 2 = V1 (S, () ∫ d –2 2 S

+ V2 (S, () ∫ d

3 (G (3, ()V1∗ (3, () + G∗2 (3, ()V1 (3, ()). –2 2

It is easy to see that the solution of the equation is bounded with respect to the fast variable S.

5.10.2.4 The Third-Order Correction Term of Asymptotic Expansion (5.70)

The equation for the third-order term of the WKB solution is as follows:

iK󸀠 𝜕S I3(j) + (–( + 2|U(j)|2)I3(j) + ((U(j))2 – 1)(I3(j))∗ = G3,
G3 = –i𝜕( I1(j) – i𝜕S I1(j) >0󸀠 – |I1(j)|2 I1(j) – I1(j) I2(j) (U(j))∗ – I1(j) (I2(j))∗ U(j) – (I1(j))∗ I2(j) U(j).

Equations (5.72) give the differential equations for the parameters A0(() and >0((). One can obtain the equations for the main terms of the asymptotics of A0(() and >0((), using the explicit form of the asymptotic expansion of U(j) as % → 0:

3A0 + 4(1 + ()A0󸀠 + O(%) = 0,
–2(1 + ()3/2 >0󸀠 + A20 (16 + 12( – 4(2 + (8 + 8( + 3(2)√(( + 1)) + O(%) = 0.


Using the regular asymptotic expansion with respect to % we obtain

A0(() ∼ A0,0/(1 + ()3/4,
>0(() ∼ >0,0 + (A20,0/2)(5( + (3/2)(2 + (1/3)(112 – 8()√(1 + () + 3 ln(1 + ()).

Here A0,0 and >0,0 are parameters of the asymptotic solution of eq. (5.40). The values of the parameters will be obtained by matching of the constructed WKB expansion and the asymptotic expansion in the Painlevé layer. As a result we write out the asymptotic formula for the solution as % → 0:

I(j)(S, (, %) ∼ (–1)j [√(1 + () + %1/2 A0,0 ((1 + ()–1/4 cos(S) + (i/2)√(1 + () sin(S))],

S ∼ K/% + >0,0 + (A20,0/2)(5( + (3/2)(2 + (1/3)(112 – 8()√(1 + () + 3 ln(1 + ()).

Here A0,0 and >0,0 are the parameters of the constructed WKB solution.

5.10.2.5 The Domain of Validity for the WKB Asymptotic Solution Close to the Slowly Varying Equilibrium

The constructed WKB asymptotic solution is not valid at the turning points, where the higher-order terms of the asymptotic expansion become infinite. The turning points are ( = –1 and ( = ∞. Using the explicit formulas for the higher-order terms we deduce

In(j) = O((( + 1)–3/2 In–1(j)),  ( → –1 + 0.

Then the domain of validity of the WKB solution is

%–2/3(1 + () ≫ 1,  ( → –1 + 0.

When ( → ∞ the domain of validity of the WKB solution is obtained analogously:

In(j) = O((5/4 In–1(j)),  ( → ∞,

and therefore

( ≪ %–4/5,  ( → ∞.

5.10.2.6 Matching the Expansion in the Painlevé Layer and the Captured WKB Asymptotic Expansion

Asymptotic expansion (5.51) is valid for –%2/3 ' ≪ 1, that is, for (( + 1) ≪ 1 when ( is close to –1. The captured asymptotic expansion is valid for %–2/3(1 + () ≫ 1. Therefore, captured asymptotic solution (5.70) and asymptotic expansion (5.51) are both valid in the domain %2/3 ≪ (1 + () ≪ 1. By the uniqueness theorem for asymptotic expansions these expansions coincide in this domain. The usual matching procedure [64] gives us the connection formulas between the parameters Ak and >k of expansion (5.70) and the parameters of expansion (5.51). For example, the parameters of the first correction term of (5.70) are

>0,0 = 5,  A0,0 = 1.    (5.73)

Formulas (5.63), (5.73) and Theorem 27 give the statement of Theorem 26.

6 Asymptotics for loss of stability

Qualitative behaviour of solutions of second-order ordinary differential equations with respect to an additional parameter was explained, for example, in the book [4]. In this chapter we study the loss of stability of equilibriums. These phenomena are convenient for showing asymptotic methods at work. The equations discussed here have a small parameter at the derivative and varying coefficients, so slowly changing solutions exist. These solutions look like real equilibriums, but due to the slow changes these equilibriums can lose their stability. In such a case solutions begin to oscillate with a fast frequency. Therefore, we construct here three asymptotic expansions with different properties. These are as follows:
– slowly varying equilibriums;
– transitional expansion;
– fast oscillating expansion.

The bifurcations of slowly varying equilibrium positions of a second-order equation with algebraic nonlinearity and slowly varying parameters were considered in Ref. [60] only in a preliminary fashion. A uniform asymptotic construction for a solution which loses stability was published in Ref. [86]. Later the work [30] was published; in that work, the change of energy and the phase jump were studied for a solution in a very narrow layer near a saddle-centre bifurcation point in the general case. Among other works in which the asymptotic solutions of nonlinear equations with varying coefficients were studied we should note the work [126]. In that work, the change of an adiabatic invariant was studied in a problem where a solution passes through a separatrix in the non-degenerate case. One more work where the passage through the separatrix in the non-degenerate case was studied is [62]. Below we study these expansions and match them to each other. We find that the first and second Painlevé transcendents play an important role in the process. Therefore, we begin by studying the Painlevé-2 equation by the same approach that was developed in the works [86, 87].

6.1 Hard Loss of Stability in Painlevé-2 Equation

Here we consider a special asymptotic solution of the Painlevé-2 equation

%2 u󸀠󸀠 + 2u3 + tu = 1    (6.1)

as % → 0. The behaviours of the solution differ on the left- and right-hand sides of Figure 6.1. Equation (6.1) is non-autonomous; however, one can separate the dependence on the slow variable t and the fast variables (4, () in the asymptotic solution of this equation. We may speak about the phase plane and the phase trajectory of this equation with respect to the


[Plot: loss of stability of the solution of the Painlevé-2 equation; u(t) shown for t from −10 to 10, vertical scale from −2.5 to 2.]

Figure 6.1. On the left-hand side the curve is smooth and close to the least root of the cubic equation 2u³ + tu = 1. This corresponds to a slowly varying equilibrium. Near t = t* = −3/2^(1/3) the equilibrium loses its stability and the curve tends to a special solution of the Painlevé-1 equation with respect to the new scaling variable τ = (t − t*)ε^(−4/5). This special solution of the Painlevé-1 equation has poles at τ = τ_k, k = 0, 1, …. In the neighbourhoods of these poles one more scaling is done, with new variable ζ = (τ − τ_k)ε^(−1/5); in this case the asymptotics is defined by a separatrix solution of a nonlinear autonomous equation. This combined asymptotic structure becomes invalid as τ → ∞, because the poles of the solution of the Painlevé-1 equation approach each other. For large τ the Painlevé-1 solution tends to the Weierstrass ℘-function. Finally, as τ → ∞ a fast oscillating asymptotics is valid on the right-hand side of the curve.
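The scenario of Figure 6.1 is easy to reproduce numerically. The following is a sketch of ours (not the book's computation), integrating eq. (6.1) with ε = 0.1 by a classical Runge–Kutta scheme: the trajectory starts on the slowly varying equilibrium u1(t) (the least root of 2u³ + tu = 1) and, after t passes t* = −3/2^(1/3) ≈ −2.38, jumps to order-one fast oscillations.

```python
# Sketch (ours): hard loss of stability for eps^2 u'' + 2u^3 + t*u = 1.
eps = 0.1

def least_root(t):
    """Least real root of 2u^3 + t*u - 1 = 0 (valid for t < t*), by bisection."""
    lo, hi = -3.0, -1.0            # bracket containing only the least root
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid**3 + t * mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f(t, u, v):                    # first-order system, v = eps * u'
    return v / eps, (1.0 - 2.0 * u**3 - t * u) / eps

t, dt, h = -10.0, 1.0e-3, 1.0e-5
u = least_root(t)
v = eps * (least_root(t + h) - least_root(t - h)) / (2.0 * h)
quiet, osc = [], []
while t < 10.0:                    # classical RK4
    k1 = f(t, u, v)
    k2 = f(t + dt/2, u + dt/2*k1[0], v + dt/2*k1[1])
    k3 = f(t + dt/2, u + dt/2*k2[0], v + dt/2*k2[1])
    k4 = f(t + dt, u + dt*k3[0], v + dt*k3[1])
    u += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    if -9.5 < t < -9.0:
        quiet.append(u)
    elif 5.0 < t < 9.0:
        osc.append(u)

print(max(quiet) - min(quiet))     # small: still glued to u1(t)
print(max(osc) - min(osc))         # order one: fast oscillations
```

The window boundaries and tolerances here are illustrative choices, not values from the text.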

fast variable t/ε. The equilibrium positions on the phase plane depend on t. When t < t* there are three equilibrium positions. At the critical point t* the saddle-centre bifurcation occurs: a stable and an unstable equilibrium position coalesce. When t > t* only one equilibrium position exists. Such a bifurcation leads to instability. A more complicated bifurcation, the pitchfork, occurs in the Painlevé-2 equation with zero on the right-hand side of eq. (6.1); the small-parameter asymptotics of its solutions was investigated in works [62, 83]. In this case a solution in an interior layer near the bifurcation value t* is determined by the Painlevé-2 equation itself, but without a small parameter, so the problem, generally speaking, does not become simpler. The asymptotics of solutions of the Painlevé equations whose leading term is an elliptic function with modulated parameters was studied, for example, in work [79]. We must also mention some works about scaling limits, or double asymptotics, for the Painlevé-2 equation. A general approach to scaling limits of the Painlevé equations based on the Bäcklund transformations was studied in Ref. [98]. The scaling-limit passage from the Painlevé-2 equation to Painlevé-1 was studied at the level of classical solutions in work [81]. In work [70] different approaches to the double asymptotics were developed; the qualitative analysis of the relation between the algebraic and fast oscillating asymptotic solutions of eq. (6.1) was also done there. However, the asymptotic solutions constructed in this way are non-uniform with respect to the two variables t and ε. The major difference of the present work from those cited is the construction of an asymptotic solution that is uniform with respect to the two parameters t and ε as ε → 0 on an interval of t where the main term of the asymptotics (an elliptic function) degenerates. This uniform asymptotic solution is valid on a segment containing the saddle-centre bifurcation point. We also specify the different types of asymptotic approximations of the solution under study, their intervals of validity, and the orders of the neglected terms that result when these asymptotics are substituted into the equation being solved.

6.1.1 Naive Statement of the Problem

Let us consider the cubic equation

2u³ + tu = 1,

(6.2)

which is obtained by rejecting the term with the small parameter in eq. (6.1). There exist a point t* and a value u* such that for t = t* the value u* is a double root of eq. (6.2). The values u* and t* are easy to obtain by solving the equations

2u*³ + t*u* = 1,

6u*² + t* = 0.

These give t* = −3/2^(1/3), u* = −4^(−1/3). The discriminant of eq. (6.2) has the form

D = (t/6)³ + (1/4)².

The discriminant D < 0 when t < t*, and hence the cubic equation (6.2) has three real roots u1(t) < u2(t) < u3(t). If t > t* then D > 0 and the cubic equation (6.2) has one real root and two complex conjugate roots. At t = t* the roots u1(t) and u2(t) coalesce: u1(t*) = u2(t*) = u*. When t < t* it is possible to construct a real formal solution of eq. (6.1):

u(t, ε) = ∑_{k=0}^{∞} ε^(2k) u_{2k}(t)          (6.3)

by taking any of the roots u_j(t) as the leading term u0(t). These three formal solutions are slowly varying equilibrium positions for eq. (6.1). Consider the equation

ε²v″ + (6u_j²(t) + t)v = 0,

j = 1, 2, 3,

(6.4)


which describes small perturbations of the leading term u_j(t) of the asymptotic expansion (6.3). If j = 2, then 6u2²(t) + t < 0, so that eq. (6.4) has one exponentially growing and one exponentially decreasing solution; hence the corresponding asymptotic expansion is unstable with respect to small perturbations of the leading term. If instead j = 1, 3, then the coefficient in eq. (6.4) is positive and the formal solution (6.3) with u_j(t) as the leading term is stable. When t = t* the roots u1(t) and u2(t) coalesce, u1(t*) = u2(t*) = u*, and when t > t* there exists only one equilibrium, u3(t). Therefore t* is the bifurcation point for the asymptotic solution (6.3) in the case when the leading term is u0(t) ≡ u1(t). Our purpose is to construct a smooth asymptotic solution of eq. (6.1) with the leading term u1(t) for t < t* on a segment [t* − a, t* + a], a = const > 0.

6.1.2 Matched Asymptotics for the Solution

Here we describe the smooth asymptotic solution constructed in this work. Following Maslov [112] we use the words “asymptotic solution mod O(ε^α)”: a function is said to be an asymptotic solution mod O(ε^α) of eq. (6.1) if after its substitution into this equation the latter is satisfied up to terms of order O(ε^α). When t* − a ≤ t < t* (a = const > 0) and (t* − t)ε^(−4/5) ≫ 1, the asymptotic solution mod (O(ε⁶) + O(ε⁶(t − t*)^(−13/2))) has the form

u(t, ε) = u1(t) − ε² · 2t u1(t)/(6u1²(t) + t)⁴ + ε⁴ u2(t).          (6.5)

The last term of the formal asymptotic solution as t → t* − 0 can be written as u2(t) = O((t − t*)^(−9/2)). When |t − t*| ≪ 1 the asymptotic solution is defined by two different types of formal asymptotic expansions. The first one has the form

u(t, ε) = u* + ε^(2/5) v0(τ) + ε^(4/5) v1(τ).

(6.6)

Here the variable τ is defined by the formula τ = (t − t*)ε^(−4/5), and the function v0(τ) is the solution of the Painlevé-1 equation

d²v0/dτ² + 6u*v0² + u*τ = 0,

with purely algebraic asymptotic behaviour as τ → −∞:

v0(τ) = −√(−τ/6) + O(τ⁻²).


Formula (6.6) is an asymptotic solution mod (O(ε^(8/5)τ²) + O(ε^(8/5))) for 1 ≪ −τ ≪ ε^(−4/5). The function v0(τ) has poles of second order at some points τ_k, k = 1, 2, … (see, e.g., Ref.):

v0(τ) = −1/(u*(τ − τ_k)²) + O(τ_k(τ − τ_k)²).

Near the poles the last term of the asymptotic solution can be written as

v1(τ) = O((τ − τ_k)⁻⁴)   as   τ → τ_k.

Expansion (6.6) is suitable at ε^(−1/5)|τ − τ_k| ≫ 1; there it is an asymptotic solution mod (O(ε^(8/5)) + O(ε^(8/5)τ²) + O(ε^(8/5)τ_k(τ − τ_k)⁻⁸)). As τ → ∞ the main term of the asymptotics (6.6) is

v0(τ, ε) = √τ ℘(s, g2, g3) + O(τ^(−γ)),   γ = const > 0.          (6.7)

Here

s = (4/5)τ^(5/4) + θ0(η),   where η = (5/7)ε^(2/5)τ^(7/4);

the phase shift θ0 is defined in Section 5.4.1 by formula (6.32). The parameter of the Weierstrass elliptic function is g2 = −2u*. The second parameter g3 is defined by a solution of the equation (see, e.g., Ref. [98])

Re ∫_A w dλ = 0,

where A is any cycle on the algebraic curve w² = λ³ + λ/2 − g3/4. The last term of the asymptotics (6.6) has the form

v1(τ) = O(τ) + O(τ/(τ − τ_k)⁴)   as   τ → ∞ and τ ≠ τ_k.

Formula (6.6) is an asymptotic solution mod (O(ε^(8/5)) + O(ε^(8/5)τ^(3/2)) + O(ε^(8/5)τ_k^(3/2)(τ − τ_k)⁻⁸)). Expansion (6.6) is suitable for τ ≪ ε^(−4/5) and ε^(−1/5)|τ − τ_k|τ_k^(−1/4) ≫ 1. The second expansion, valid in the neighbourhoods |τ − τ_k||τ_k|^(1/5) ≪ 1 of the poles τ_k of the function v0(τ), reads

u(t, ε) = u* + w0(ζ_k) + ε^(4/5) w1(ζ_k),          (6.8)

where ζ_k = (τ − τ_k)ε^(−1/5) + ε^(1/5)ζ_{1,k}. The phase shift ζ_{1,k} is defined by formula (6.21). The function w0(ζ_k) is defined by the formula

w0(ζ_k) = −16u*/(4 + 16u*²ζ_k²).


The last term of the formal asymptotics (6.8) at |ζ| → ∞ can be written as w1(ζ_k) = O(ζ_k²|τ_k|). Formula (6.8) is an asymptotic solution mod (O(ε^(8/5)) + O(ε^(8/5)ζ_k⁴τ_k²)). When (t − t*)ε^(−2/3) ≫ 1 and t < t* + a, the asymptotic solution mod (O(ε²) + O(ε²(t − t*)⁻³)) has fast oscillating behaviour:

u(t, ε) = U0(t1, t) + εU1(t1, t).

(6.9)

Here the last term of the asymptotics (6.9) at t → t* + 0 can be written as U1(t1, t) = O((t − t*)^(−3/2)). The leading term of the asymptotic solution satisfies the Cauchy problem

(S′)² (∂_{t1}U0)² = −U0⁴ − tU0² + 2U0 + E(t),   U0|_{t1=0} = u*.

Here t1 = S(t)/ε + φ(t). The function E(t) is defined by the equation

I0 ≡ 2 ∫_{β(t)}^{α(t)} √(−x⁴ − tx² + 2x + E(t)) dx = 2π,

where α(t) and β(t) (α(t) > β(t)) are the two real roots of the equation −x⁴ − tx² + 2x + E(t) = 0, and the other roots of this equation are complex. The phase function S(t) is the solution of the Cauchy problem

T = S′ √2 ∫_{β(t)}^{α(t)} dx / √(−x⁴ − tx² + 2x + E(t)),   S|_{t=t*} = 0,

where T is the constant defined by the formula

T = (√2 C*(k) / (2|u*|^(1/2))) (3/(6 − 2k²))^(1/4),

where k ≈ 0.463 is the unique solution of the equation

∫_0^∞ (−ky + k² + 1) y^(5/2) / [(y − k)² + 1]^(5/2) dy = 0,


and

C*(k) = ∫_0^∞ dy / (√y [(y − k)² + 1]).

The phase shift φ(t) is defined by the equation (see [18])

(∂_E I0 / ∂_E S′) φ′ = a = const.
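The normalization I0 = 2π can be checked directly at the bifurcation point. The following is our sanity check, not the book's: at t = t* the quartic −x⁴ − t*x² + 2x + E* degenerates as (−3u* − x)(x − u*)³, which fixes E* = 3u*⁴ (an identity we assume from the stated degeneration m(t*) = β(t*) = u*, n(t*) = 0), and the action integral then evaluates to 2π.

```python
import math
# Check of I0 = 2*pi at t = t*, where the quartic degenerates as
# (-3u* - x)(x - u*)^3 with E* = 3 u*^4 (assumed factorization).
u = -4.0 ** (-1.0 / 3.0)            # u* = -4^(-1/3)
t_star = -3.0 / 2.0 ** (1.0 / 3.0)  # t* = -3/2^(1/3)
E_star = 3.0 * u ** 4

def P(x):                            # -x^4 - t*x^2 + 2x + E*
    return -x**4 - t_star * x * x + 2.0 * x + E_star

a, b, N = -3.0 * u, u, 200_000       # alpha = -3u*, beta = u*
h = (a - b) / N
I0 = 2.0 * sum(math.sqrt(max(P(b + (i + 0.5) * h), 0.0)) * h
               for i in range(N))   # midpoint rule; integrand vanishes at ends
print(I0 / math.pi)                  # close to 2
```

Analytically, the substitution x = u* + (a − b)t reduces the integral to a Beta function, (a − b)³ B(5/2, 3/2) = π(a − b)³/16 with (a − b)³ = 16, so I0 = 2π exactly.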

Remark 8. Here the constant a remains outside our analysis. Its value may be determined by using the monodromy-preserving method for the Painlevé-2 equation [40].

Remark 9. The domains of validity of the asymptotic solutions (6.5) and (6.6) intersect, so these expansions match. The solution of the Painlevé-1 equation which defines the asymptotics (6.6) has an infinite sequence of poles τ_k, k = 1, 2, …. Near each of these poles we match the asymptotic solutions (6.6) and (6.8). As the pole number k → ∞, the domain of validity of the combined asymptotics (6.6) and (6.8) intersects with the domain of validity of the fast oscillating asymptotic solution (6.9). This allows us to match the sandwiched asymptotics with the fast oscillating asymptotic solution.

6.1.3 The Outer Algebraic Asymptotics

The algebraic asymptotic solution (6.5) of eq. (6.1) is constructed here. This asymptotic solution is suitable when t < t*, and its asymptotic behaviour is investigated as t → t* − 0. We construct the asymptotic solution of eq. (6.1) as

u(t, ε) = u0(t) + ε²u1(t) + ε⁴u2(t) + ⋯.

(6.10)

Let us formulate the result of this section. The asymptotic solution (6.5) mod O(ε⁶(t − t*)^(−13/2)), where u0(t) ≡ u1(t) is the least of the solutions of eq. (6.2), is suitable when (t* − t)ε^(−4/5) ≫ 1 and t > t* − a, where a = const > 0.

6.1.3.1 Constructing the Algebraic Asymptotic Solution

Let us obtain the coefficients of the asymptotics (6.10). Substituting the ansatz (6.10) into eq. (6.1) and equating coefficients at identical powers of ε, we find the sequence of formulas for u_k(t), k = 0, 1, 2, …:

2u0³(t) + tu0(t) = 1,

(6u0²(t) + t) u1(t) = −u0″(t),

(6u0²(t) + t) u2(t) = −6u0(t)u1²(t) − u1″(t).
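The recurrence can be verified numerically. Below is a quick check of ours (not from the book): we compute the least root u0(t) of the cubic by bisection, differentiate it by central differences, and compare the recurrence value u1 = −u0″/(6u0² + t) with the closed form u1 = −2t u0/(6u0² + t)⁴ derived below.

```python
# Numerical check of the first correction of expansion (6.10).
def u0(t):
    """Least real root of 2u^3 + t*u - 1 = 0 for t < t*, by bisection."""
    lo, hi = -3.0, -1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid**3 + t * mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t, h = -6.0, 1e-4                                  # a point well left of t*
u0pp = (u0(t + h) - 2.0 * u0(t) + u0(t - h)) / h**2  # central difference u0''
u1_closed = -2.0 * t * u0(t) / (6.0 * u0(t)**2 + t) ** 4
u1_recur = -u0pp / (6.0 * u0(t)**2 + t)
print(abs(u1_closed - u1_recur))                    # tiny: the two agree
```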


The cubic equation for u0(t) when t < t* has three real roots u1(t) < u2(t) < u3(t). As the leading term of the asymptotic expansion (6.10) we choose u1(t). The second derivative of u0(t) has the form

u0″ = (−u0/(6u0² + t))′ = 2t u0(t)/(6u0²(t) + t)³.

This allows us to obtain the formula for u1(t). It is easy to get the expressions for the subsequent terms of the asymptotic solution (6.10). We do not write them out explicitly here; however, it is important to note that the power of the denominator (6u0²(t) + t) in the coefficients of the asymptotics grows with each step. The nth term of the asymptotic expansion as (6u0²(t) + t) → 0 has the form

u_n(t) = O((6u0²(t) + t)^(−5n+1)).          (6.11)

Let us write the asymptotic behaviour of the asymptotic expansion (6.10) as t → t*. For this purpose we calculate the asymptotics of the expression (6u0²(t) + t):

(6u0²(t) + t)|_{t→t*} = −2√6 u* √(t* − t) + (2/3)(t* − t) − (5/(9√6 u*))(t* − t)^(3/2) + O((t* − t)²).

Using this formula and u0(t), u1(t) we obtain

u(t, ε) = u* − (1/√6)√(t* − t) + (1/(18u*))(t* − t) + ε²[−(1/(3·2^(10/3)))(t* − t)⁻² − O((t* − t)^(−3/2))] + O(ε⁴(t* − t)^(−9/2)) + O((t* − t)^(3/2)).

6.1.4 The Domain of Validity of the Algebraic Asymptotic Solution

The domain of validity of this expansion as t → t* − 0 is determined from the relation ε²u_{n+1}(t)/u_n(t) ≪ 1. It follows from formula (6.11) that the expansion (6.10) is suitable when (t* − t)ε^(−4/5) ≫ 1. Let us evaluate the residual obtained when the asymptotic solution (6.5) is substituted into eq. (6.1):

F(t, ε) = −ε⁶(u2″ + 12u0u1u2 + 2u1³) − ε⁸(6u0u2² + 6u1²u2) − ε¹⁰·6u1u2² − ε¹²·2u2³.

Using the asymptotic behaviour of u_k, k = 0, 1, 2, as t → t* − 0 one can obtain

F(t, ε) = O(ε⁶(t − t*)^(−13/2)).


6.2 The Inner Asymptotics

In this section the asymptotic expansions of the solution of eq. (6.1) which are suitable in a small neighbourhood of the point t* are constructed. Following the terminology of the matching method, they are called “the inner asymptotic expansions”.

6.2.1 First Inner Expansion

It follows from the analysis of the validity of the outer expansion, made in the previous section, that it is natural to make the following scaling of variables:

(u − u*) = ε^(2/5) v,   (t − t*) = ε^(4/5) τ.

As a result we write eq. (6.1) as

d²v/dτ² + 6u*v² + u*τ = −ε^(2/5)(τv + 2v³).

(6.12)

In the limit as ε → 0 we obtain the Painlevé-1 equation. This asymptotic reduction is known as one of the scaling limits for the Painlevé-2 equation [66] (see also [61]). A solution of this equation has the asymptotic expansion as τ → −∞:

v(τ, ε) = (−√(−τ/6) + 1/(48u*τ²) + 49/(768√6 u*²(−τ)^(9/2)) + ⋯) + ε^(2/5)(−τ/(18u*) + 1/(144√6(−τ)^(3/2)) + ⋯) + O(ε^(4/5)τ^(3/2)).

The asymptotic solution of eq. (6.12) is built as

v(τ, ε) = v0(τ) + ∑_{n=1}^{∞} ε^(2n/5) v_n(τ),          (6.13)

where the function v0(τ) is the solution of the Painlevé-1 equation. It is shown here that the asymptotic solution (6.13) is suitable in the neighbourhood of infinity (for −τ ≪ ε^(−4/5)) and away from the poles of the function v0(τ): (τ − τ_k)ε^(−1/5) ≫ 1.

6.2.1.1 Asymptotic Behaviour as τ → −∞

The coefficients of the asymptotics are calculated from the matching condition for the asymptotic expansion (6.10) as t → t* and the expansion (6.13) as τ → −∞. In particular, v0(τ) has the algebraic asymptotics

v0(τ)|_{τ→−∞} = −√(−τ/6) + 1/(48u*τ²) + 49/(768√6 u*²(−τ)^(9/2)) + O(τ⁻⁷).

(6.14)
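The pole structure of this particular Painlevé-1 solution is easy to see numerically. The following sketch of ours (not the book's computation) starts on the algebraic branch (6.14) at large negative τ and integrates forward with a fixed-step Runge–Kutta scheme until the solution blows up, which signals a nearby pole τ_0; the blow-up threshold and step size are arbitrary choices.

```python
# The algebraic solution of v'' + 6 u* v^2 + u* tau = 0 develops a pole.
u_star = -4.0 ** (-1.0 / 3.0)

def rhs(tau, v, dv):
    return dv, -6.0 * u_star * v * v - u_star * tau

def asymptotic_v(tau):          # two terms of (6.14)
    return -(-tau / 6.0) ** 0.5 + 1.0 / (48.0 * u_star * tau * tau)

tau, dt = -20.0, 1.0e-3
v = asymptotic_v(tau)
dv = (asymptotic_v(tau + 1e-6) - asymptotic_v(tau - 1e-6)) / 2e-6
tau_pole = None
while tau < 30.0:               # classical RK4 in (v, v')
    k1 = rhs(tau, v, dv)
    k2 = rhs(tau + dt/2, v + dt/2*k1[0], dv + dt/2*k1[1])
    k3 = rhs(tau + dt/2, v + dt/2*k2[0], dv + dt/2*k2[1])
    k4 = rhs(tau + dt, v + dt*k3[0], dv + dt*k3[1])
    v += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    dv += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    tau += dt
    if abs(v) > 1.0e4:          # blow-up: a second-order pole is nearby
        tau_pole = tau
        break
print(tau_pole)                 # finite: the first pole is reached
```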


In the book it is shown that there exists a solution of the Painlevé-1 equation with the asymptotics (6.14). The monodromy data for the solution of the Painlevé-1 equation with the asymptotics (6.14) are calculated in work [79]. The first correction in the asymptotics (6.13) satisfies the equation

d²v1/dτ² + 12u*v0v1 = −τv0 − 2v0³.          (6.15)

The asymptotics of the solution of this equation as τ → −∞ has the form

v1(τ) = −τ/(18u*) + 1/(144√6(−τ)^(3/2)) + O(τ⁻⁴).

The asymptotics of the higher corrections is constructed in the usual way. The nth correction as τ → −∞ has the order

v_n(τ) = O((−τ)^((n+1)/2)).

6.2.1.2 Validity of the Asymptotic Solution as τ → −∞

The requirement of validity of the asymptotics is ε^(2/5)v1/v0 ≪ 1. It reduces to the condition (−τ) ≪ ε^(−4/5). The residual of the asymptotic solution (6.6) has the form

F(τ, ε) = −ε^(8/5)(6u*v1² + τv1 + 6v1v0²) − ε^(10/5)·6v0v1² − ε^(12/5)·2v1³.

Using the asymptotic behaviour of v_k, k = 0, 1, for (−τ) ≪ ε^(−4/5) one can obtain F(τ, ε) = O(ε^(8/5)τ²).

6.2.1.3 Asymptotic Behaviour Near the Poles

The function v0(τ) has poles for τ ∈ (−∞, ∞). Let us denote these poles by τ_k. In the neighbourhood of a pole, τ → τ_k ± 0, the function v0(τ) is defined by the converging power series (see, e.g.,)

v0(τ) = −1/(u*(τ − τ_k)²) + (u*τ_k/10)(τ − τ_k)² + (u*/6)(τ − τ_k)³ + c_k(τ − τ_k)⁴ + O((τ − τ_k)⁵).   (6.16)

The constants τ_k and c_k are the parameters of this solution. In the review [98] it is noted that the problem of the connection between the asymptotics of this solution at infinity and the constants τ_k and c_k has not yet been investigated. The pole locations τ_k and the corresponding constants c_k can be obtained by numerical calculation using the given asymptotics at infinity (6.14).


The asymptotics of v1 as τ → τ_k ± 0 may be written as a sum of a particular solution v1c(τ) of the non-homogeneous linearized Painlevé-1 equation,

v1c(τ) = 1/(4u*³(τ − τ_k)⁴) − τ_k/(120u*) + (τ − τ_k)/(24u*) + (9c_k/(10u*²))(τ − τ_k)² + O((τ − τ_k)⁵),

and two solutions V1(τ), V2(τ) of the homogeneous linearized equation:

V1(τ) = 1/(τ − τ_k)³ + (u*²τ_k/10)(τ − τ_k) + (u*²/5)(τ − τ_k)² + 2c_k u*(τ − τ_k)³ + O((τ − τ_k)⁵),

V2(τ) = (τ − τ_k)⁴ + O((τ − τ_k)⁸).

Thus

v1 = v1c(τ) + a^±_{1,k} V1(τ) + b^±_{1,k} V2(τ).

(6.17)

Here a^±_{1,k} and b^±_{1,k} are constants. The higher corrections have the same form:

v_n(τ) = v_{nc}(τ) + a^±_{n,k} V1(τ) + b^±_{n,k} V2(τ),   where   v_{nc}(τ) = O((τ − τ_k)^(−2(n+1)))   as τ → τ_k.

6.2.1.4 Validity of the Asymptotic Solution as τ → τ_k

By using the asymptotics (6.16) and (6.17), it is easy to see that the asymptotic expansion (6.13) is suitable for ε^(−1/5)|τ − τ_k| ≫ 1. The residual of the asymptotic solution as τ → τ_k with ε^(−1/5)|τ − τ_k| ≫ 1 is

F(τ, ε) = ε^(8/5) O(τ_k/(τ − τ_k)⁴) + ε^(8/5) O((τ − τ_k)⁻⁸).

6.2.2 Second Inner Expansion

For the construction of the uniform asymptotics in the neighbourhood of a pole of the function v0 it is necessary to make one more scaling of the independent variable and the function (see [61]):

(τ − τ_k) = ε^(1/5) ζ,   v = ε^(−2/5) w.

For the function w we obtain the equation

d²w/dζ² + 6u*w² + 2w³ = −ε^(4/5)τ_k(u* + w) − εζ(u* + w).

(6.18)


The solution of this equation has the following asymptotic expansion as ζ → −∞:

w = −1/(u*ζ²) + 1/(4u*³ζ⁴) + O(ζ⁻⁶) + ε^(1/5)(a⁻_{1,k}ζ⁻³ + O(ζ⁻⁴)) + ε^(2/5)(−(3u*/4)(a⁻_{1,k})²ζ⁻⁴ + O(ζ⁻⁶)) + ε^(3/5)(a⁻_{2,k}ζ⁻³ + O(ζ⁻⁴)) + ε^(4/5)((u*τ_k/10)ζ² − τ_k/(120u*) + O(ζ⁻¹)) + ε((u*/6)ζ³ + (u*²τ_k/10)a⁻_{1,k}ζ + ζ/(24u*) + O(1)) + ε^(6/5)(c_kζ⁴ + ((u*²/5)a⁻_{1,k} + 9c_k/(10u*²))ζ² + O(ζ)) + O(ε^(7/5)ζ⁵) + ε^(8/5)(b⁻_{1,k}ζ⁴ − (u*³τ_k²/300)ζ⁶ + O(ζ²)) + O(ε^(9/5)).

This long asymptotic formula shows that the constant b⁻_{1,k} appears only in the correction of order ε^(8/5). If we want to construct the first correction of the asymptotics of the first inner expansion after the pole τ_k, we must construct the correction of order ε^(8/5) for the second inner expansion. It is convenient to include a time shift depending on ε into the main term and to construct the asymptotic expansion in a new time variable

ζ_k = ζ + ε^(1/5)ζ_{1,k} + ε^(3/5)ζ_{2,k},   ζ_{n,k} = const.

We seek the asymptotic expansion of the solution of this equation as a segment of an asymptotic series

w(ζ_k, ε) = w0(ζ_k) + ε^(4/5)w1(ζ_k) + εw2(ζ_k) + ε^(6/5)w3(ζ_k) + ε^(8/5)w4(ζ_k).

(6.19)

In this case the equation for w(ζ_k, ε) looks like

d²w/dζ_k² + 6u*w² + 2w³ = −ε^(4/5)τ_k(u* + w) − εζ_k(u* + w) + ε^(6/5)ζ_{1,k}(u* + w) + ε^(8/5)ζ_{2,k}(u* + w) + ⋯.

It is shown here that the asymptotic solution (6.8) is a formal asymptotic solution of eq. (6.18) mod (O(ε^(8/5)τ_k²ζ⁴) + O(ε^(9/5)τ_kζ⁵) + O(ε²ζ⁶)) when |ζτ_k^(1/5)| ≪ ε^(−1/5). The solution of the equation for the leading term of the asymptotics (6.19) is defined by matching with the asymptotics as τ → τ_k of the asymptotic expansion (6.13), which is outer with respect to eq. (6.19). This solution has the form

w0(ζ_k) = −16u*/(4 + 16u*²ζ_k²).

(6.20)
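That w0 indeed solves the leading-order autonomous equation can be confirmed by a one-screen check (ours, not from the book), comparing a finite-difference second derivative against the nonlinear terms:

```python
# Check that w0(z) = -16 u* / (4 + 16 u*^2 z^2), eq. (6.20), solves
# the leading-order autonomous equation  w'' + 6 u* w^2 + 2 w^3 = 0.
u_star = -4.0 ** (-1.0 / 3.0)

def w0(z):
    return -16.0 * u_star / (4.0 + 16.0 * u_star**2 * z * z)

h = 1e-4
for z in (-3.0, -0.7, 0.0, 1.3, 5.0):
    w0pp = (w0(z + h) - 2.0 * w0(z) + w0(z - h)) / h**2
    residual = w0pp + 6.0 * u_star * w0(z)**2 + 2.0 * w0(z)**3
    assert abs(residual) < 1e-5, residual
print("w0 satisfies the autonomous equation")
```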


The constants ζ_{n,k} are defined by the asymptotics of the function w(ζ, ε). Using formula (6.19) we obtain

ζ_{n,k} = (u*/2) a⁻_{n,k},   n = 1, 2, ….

(6.21)

The corrections in the expansion (6.19) satisfy the linearized equations

d²w1/dζ_k² + (12u*w0 + 6w0²)w1 = −τ_k(u* + w0),

d²w2/dζ_k² + (12u*w0 + 6w0²)w2 = −ζ_k(u* + w0),

d²w3/dζ_k² + (12u*w0 + 6w0²)w3 = ζ_{1,k}(u* + w0),

d²w4/dζ_k² + (12u*w0 + 6w0²)w4 = −6w1²(u* + w0) − τ_k w1 + ζ_{2,k}(u* + w0).

The expression for w0 can be used to obtain two linearly independent solutions of the homogeneous equation for the corrections:

W1 = 8ζ_k/(1 + 4u*²ζ_k²)²,

W2 = [−1/8 + 2u*²ζ_k² + 4u*⁴ζ_k⁴ + (32/5)u*⁶ζ_k⁶ + (32/7)u*⁸ζ_k⁸]/(1 + 4u*²ζ_k²)².
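Since W1 is proportional to w0′, it must solve the homogeneous linearized equation; the following small check of ours (not from the book) confirms this numerically:

```python
# W1 = 8z/(1 + 4 u*^2 z^2)^2 solves  W'' + (12 u* w0 + 6 w0^2) W = 0,
# with w0 from eq. (6.20).
u_star = -4.0 ** (-1.0 / 3.0)
w0 = lambda z: -16.0 * u_star / (4.0 + 16.0 * u_star**2 * z * z)
W1 = lambda z: 8.0 * z / (1.0 + 4.0 * u_star**2 * z * z) ** 2

h = 1e-4
for z in (-2.0, -0.5, 0.3, 1.7):
    W1pp = (W1(z + h) - 2.0 * W1(z) + W1(z - h)) / h**2
    res = W1pp + (12.0 * u_star * w0(z) + 6.0 * w0(z) ** 2) * W1(z)
    assert abs(res) < 1e-5, res
print("W1 solves the linearized equation")
```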

By using these solutions of the homogeneous equation it is easy to obtain the solutions of the non-homogeneous equations for the corrections. The asymptotics of the corrections as ζ_k → ∞ has the form

w1 = (u*τ_k/10)ζ_k² − τ_k/(120u*) + (τ_k/120)ζ_k⁻² − (1/(160u*²))ζ_k⁻⁴ + O(ζ_k⁻⁶),

w2 = (u*/6)ζ_k³ + ζ_k/(24u*) + O(ζ_k⁻⁵),

w3 = (c_k/(56u*²))ζ_k⁴ + O(ζ_k²),

w4 = −(u*³τ_k²/300)ζ_k⁶ + (b⁻_{1,k}/(56u*²) − 11u*τ_k²/2100)ζ_k⁴ + O(ζ_k²).

It is important to note that the leading term of the asymptotics of w3 as ζ_k → ±∞ is the same. This term defines the constant c_k and hence the solution of the Painlevé-1 equation before and after the pole. We obtain the same value of the constant c_k as ζ_k → −∞ and ζ_k → ∞, and therefore the same asymptotic solution of the Painlevé-1 equation before and after the pole τ_k.


An opposite result holds for the coefficient of ζ_k⁴ in w4 as ζ_k → ∞: this coefficient changes. It is equal to b⁻_{1,k}/(56u*²) as ζ_k → −∞ and to b⁻_{1,k}/(56u*²) − 11u*τ_k²/2100 as ζ_k → ∞.

6.2.2.1 Validity of the Second Inner Asymptotic Solution

Using the asymptotics for the corrections and the leading term, we find that expansion (6.19) is suitable when |ζτ_k^(1/5)| ≪ ε^(−1/5). On the other hand, expansion (6.13) is suitable for |ζ| ≫ 1. Hence the domains of applicability of expansions (6.13) and (6.19) intersect when the condition |τ_k| ≪ ε^(−1) is satisfied. If we also take into account the requirement of validity of the asymptotic expansion (6.19), we get the restriction |τ_k| ≪ ε^(−4/5). From this inequality it follows that the formal asymptotic expansions constructed in this section are suitable when |t − t*| ≪ 1. We calculate the residual of the second inner asymptotic expansion using the asymptotic behaviour of w_k, k = 0, 1, 2, 3, 4:

F(ζ, τ_k, ε) = ε^(9/5) O(τ_k ζ⁵),

(6.22)

when |ζ||τ_k|^(1/5) ≪ ε^(−1/5).

6.2.3 Dynamics in the Internal Layer

Using the asymptotic expansion of the second inner expansion as ζ → ∞ and of the first inner expansion as τ → τ_k + 0, we find that the first inner expansion after the pole has the form (6.13), where

v_n(τ) = v_{nc}(τ) + a⁺_{n,k} V1(τ) + b⁺_{n,k} V2(τ).

Here

a⁺_{n,k} = a⁻_{n,k},   b⁺_{n,k} = b⁻_{n,k} + B_{n,k},   n = 1, 2.

The shift B_{n,k} may be calculated from the asymptotics of the second inner expansion as ζ → ∞; for n = 1 we have obtained

B_{1,k} = −22u*³τ_k²/75.
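The numerical coefficient here is consistent with the jump of the ζ_k⁴-coefficient of w4 found above; a one-line arithmetic check of ours (assuming the shift of b⁻_{1,k} is that jump times 56u*²):

```python
from fractions import Fraction
# The zeta_k^4 coefficient of w4 jumps by -11 u* tau_k^2 / 2100 in units
# of 1/(56 u*^2), so the shift of b_{1,k} must be
# -(11*56/2100) u*^3 tau_k^2 = -22 u*^3 tau_k^2 / 75.
assert Fraction(11 * 56, 2100) == Fraction(22, 75)
print("B_{1,k} = -22 u*^3 tau_k^2 / 75 is arithmetically consistent")
```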

Thus the behaviour of the asymptotic solution in the internal layer is combined from the first and the second inner asymptotic expansions.

6.2.4 The Asymptotics of the Inner Expansions as τ → ∞

In the sections above we demonstrated the asymptotic behaviour of the asymptotic solution as τ → −∞ and near the poles of the solution of the Painlevé-1 equation. Below we study the asymptotic behaviour of the solution as τ → ∞.


The first inner asymptotic expansion constructed in the previous section is a regular expansion in ε. This asymptotics is not valid for large τ. To use the first inner asymptotic expansion as τ → ∞ we must modulate the parameters of the solution of the Painlevé-1 equation and cancel secular terms in the first and second corrections of the asymptotic expansion (6.13) using singular perturbation theory. Instead of modulating the parameters of the solution of the Painlevé-1 equation, it is more convenient to study the modulation equation for the parameters of the main term of its asymptotics as τ → ∞. This study is developed in this section.

6.2.4.1 Asymptotic Behaviour of the First Inner Expansion

The elliptic asymptotics of the solution of the Painlevé-1 equation as τ → ∞ was obtained by Boutroux [19, 20]. Here we are interested in a connection formula for the solution of the Painlevé-1 equation: we know the asymptotic behaviour of the solution as τ → −∞ and we need the asymptotic behaviour of the same solution as τ → ∞. The Painlevé-1 equation is integrable by the monodromy-preserving method [40]; if the monodromy data are known, then the solution of the Painlevé-1 equation is uniquely defined. In correspondence with Ref. [79], in our case the monodromy data are the constants s2 and s3, and these constants are equal to zero. The asymptotics of the function v0(τ) away from the poles has the form (see, e.g., [98])

v0 = √τ ϕ0(θ) + O(τ^(−γ)),

(6.23)

where γ > 0 is some constant, and the function ϕ0(θ) is expressed through the Weierstrass elliptic function:

ϕ0(θ) = −℘(θ, g2, g3)/u*.          (6.24)

The phase function is θ = (4/5)τ^(5/4). It is important to note that in formula (6.23) the shift of the phase function θ is equal to zero. One parameter is g2 = −2u*, and the parameter g3 is defined as a solution of the equation

Re ∫_γ w dλ = 0,

where γ is any cycle on the algebraic curve w² = λ³ + λ/2 − g3/4. We are interested in the asymptotic solution of the perturbed Painlevé-1 equation (6.12); therefore the asymptotics of works [79, 98] for the unperturbed Painlevé-1 equation gives the main term of our asymptotics in ε. Substitute into eq. (6.12):

v(τ, ε) = √τ ϕ(θ, ε),

(6.25)

where θ = (4/5)τ^(5/4). As a result we obtain the equation

ϕ″ + 6u*ϕ² + u* = −(5/4)(5θ/4)⁻¹ϕ′ + (1/4)(5θ/4)⁻²ϕ − ε^(2/5)(5θ/4)^(2/5)(ϕ + 2ϕ³).

(6.26)


Let us construct the asymptotics of the perturbed equation as a segment of the asymptotic series:

ϕ(θ, ε) = ϕ0(s) + ε^(2/5)(5θ/4)^(2/5)ϕ1(s) + ε^(4/5)(5θ/4)^(4/5)ϕ2(s),          (6.27)

where

s = θ + θ0(η),   η = (5/7)ε^(2/5)(5θ/4)^(7/5),

θ0 is the modulated phase shift and η is one more slow variable. Substituting formula (6.27) into eq. (6.26) and equating the coefficients at identical powers of ε, we obtain the sequence of equations:

ϕ0″ + 6u*ϕ0² + u* = [−(5/4)(5θ/4)⁻¹ϕ0′ + (1/4)(5θ/4)⁻²ϕ0],          (6.28)

ϕ1″ + 12u*ϕ0ϕ1 = −(ϕ0 + 2ϕ0³) − θ0′ϕ0″ + [−(1/2)(5θ/4)⁻¹θ0′ϕ0′ − 2(5θ/4)⁻¹ϕ1′ + (7/8)(5θ/4)⁻²ϕ1],

ϕ2″ + 12u*ϕ0ϕ2 = −6u*ϕ1² − ϕ1(1 + 6ϕ0²) − (θ0′)²ϕ0″ − θ0″ϕ0′ − θ0′ϕ1″ + (5θ/4)⁻¹L1(ϕ0, ϕ1, ϕ2) + (5θ/4)⁻²L2(ϕ0, ϕ1, ϕ2).

Here L1,2 are linear operators. The solutions of these equations may be represented as asymptotic expansions as θ → ∞. We will assume that the corrections in these expansions are small as θ → ∞; therefore we neglect the correction terms and consider only the main terms of these asymptotics with respect to θ. The equations for the main terms ϕ0,0, ϕ1,0, ϕ2,0 have the form:

ϕ0,0″ + 6u*ϕ0,0² + u* = 0,

(6.29)

ϕ1,0″ + 12u*ϕ0,0ϕ1,0 = −(ϕ0,0 + 2ϕ0,0³) − θ0′ϕ0,0″,          (6.30)

ϕ2,0″ + 12u*ϕ0,0ϕ2,0 = −6u*ϕ1,0² − ϕ1,0(1 + 6ϕ0,0²) − (θ0′)²ϕ0,0″ − θ0″ϕ0,0′ − θ0′ϕ1,0″.

(6.31)

The requirement of validity of the first inner asymptotics as τ → ∞ is ε^(2n/5)θ^(2n/5)ϕ_n/ϕ0 ≪ 1, n = 1, 2. We find the modulation equation for θ0 such that the first inner expansion is valid for large τ.

The solution of the equation for the main term is the function ϕ0,0(s), which is defined by eq. (6.24). The equations for the corrections are Lamé equations with an external


force. To write the solutions of these equations we use two linearly independent solutions of the Lamé equation. Denote one of these solutions by P1(s) = ∂_s ϕ0,0 and the second by P2(s). The solutions P1(s) and P2(s) are chosen so that their Wronskian equals 1. The second solution is aperiodic:

P2(s + K) = CP1(s) + P2(s),   where C = const ≠ 0.

The solution of eq. (6.30) can be written as

ϕ1,0(s) = A_{1,k}P1(s) + B_{1,k}P2(s) + P1(s) ∫_{s0}^{s} dz (−ϕ0,0(z) − 2ϕ0,0³(z)) P2(z) − P2(s) ∫_{s0}^{s} dz (−ϕ0,0(z) − 2ϕ0,0³(z)) P1(z) − θ0′ s P1(s).

Here s0 = s_k + K/2, where s_k is a pole of the function ϕ0(s) and K is the real period of ϕ0,0. The A_{1,k} and B_{1,k} are constants. The first correction ϕ1 is bounded for s ∈ ℝ if

B_{1,k} = θ0′/C + (1/C) R.P. ∫_0^K dz (ϕ0,0(z) + 2ϕ0,0³(z)) P1(z).

In this formula the integral must be regularized, namely

R.P. ∫_0^K dz (ϕ0,0(z) + 2ϕ0,0³(z)) P1(z) = res_{r=0} [ (1/r) ∫_r^{K−r} dz (ϕ0,0(z) + 2ϕ0,0³(z)) P1(z) ].

Using the perturbation theory for second-order equations developed in works [17, 109], one can show that the solution for ϕ2 has order O(θ), θ ≠ θ_k, k ∈ ℤ, if the function θ0 is a solution of the differential equation θ0″ = 0. This equation and the equation for B_{1,k} allow us to write the Cauchy problem for θ0 in the form:

θ0′ = CB_{1,k} + R.P. ∫_0^K dz (ϕ0,0(z) + 2ϕ0,0³(z)) P2(z),   θ0|_{η=0} = 0,

for η ∈ (η_k, η_{k+1}), where

η_k = (5/7)ε^(2/5)(5θ_k/4)^(7/5).

235

The constants A1,k and B1,k are defined by matching conditions of the asymptotics (6.25), (6.27) and (6.13) as 4k → 4k + 0: A1,k =

B1,k =

a+1,k 4k

K/2

– R.P. ∫ dz (10,0 (z) + 2130,0 (z)) P2 (z),

u∗ b+1,k 144k

0 K/2

– R.P. ∫ dz (10,0 (z) + 2130,0 (z)) P1 (z). 0

As a result we obtain formula (6.7) for the main term of the asymptotics (6.27). 6.2.4.2 Validity of the First Inner Expansion as 4 → ∞ Let us denote by vn (4) = 4(n+1)/2 1n (3),

n = 0, 1, 2.

Then the condition of the validity %2/5 vn /v0 ≪ 1, n = 1, 2, fulfills as %2/5 4/√4 ≪ 1 or as the same: 4 ≪ %–4/5 . Near the pole 4n we obtain the asymptotics: v0 = O (

√4 ) (4 – 4k )2

and

v1 = O (

4 ). (4 – 4k )4

Hence the first inner asymptotic expansion is suitable near the poles 4k as ≫ 1. %–1/5 (4 – 4k )4–1/4 k The residual of the first inner expansion is F(4, %) = – %2 (12u∗ v1 v2 – 6v0 v1 2 + 6v02 v2 + 4v2 ) – %12/5 (2v13 + 12v0 v1 v2 + 6u∗ v22 ) + %14/5 (6v12 v2 + 6v0 v22 ) + %16/5 6v1 v22 + %18/5 2v23 . Using the results of the above section outside of the poles of v0 one can obtain: F = O (%2 45/2 ) + O (

%2 45/2 ). (4 – 4k )10

6.2.4.3 Validity of the Second Inner Expansion as 4k → ∞ In the second inner expansion, the asymptotics as 4 → ∞ corresponds to the asymptotics at 4k → ∞. It is easy to see that in this case the first correction grows. This growth limits the value 4k , at which the asymptotics (6.19) is correct: |4k | ≪ %–4/5 . One can use formula (6.22) to obtain the residual of the second inner asymptotics as 4k → ∞.

236

6 Asymptotics for loss of stability

6.3 Fast Oscillating Asymptotics 6.3.1 The Krylov–Bogolyubov Approximation In this section we apply formulas obtained in Refs. [17, 31, 102] to the fast oscillating formal asymptotic solution of the Painlevé-2 equation. These formulas are usable when t > t∗ . The fast oscillating asymptotics is constructed as u(t, %) = U0 (t1 , t) + %U1 (t1 , t) + %2 U2 (t1 , t) + ⋅ ⋅ ⋅ .

(6.32)

As the argument t1 we use expression t1 = S(t)/% + 6(t), where S(t) and 6(t) are unknown functions. The equations for the leading term and the corrections of the asymptotics (6.32) look like: (S󸀠 )2 𝜕t21 U0 + 2U0 3 + U0 t = 1,

(6.33)

(S󸀠 )2 𝜕t21 U1 + (6U0 2 + t) U1 = –2S󸀠 𝜕tt2 1 U0 – S󸀠󸀠 𝜕t1 U0 – 2S󸀠 6󸀠 𝜕t21 U0 ,

(6.34)

(S󸀠 )2 𝜕t21 U2 + (6U0 2 + t) U2 = – 6U0 U12 – 2S󸀠 𝜕tt2 1 U1 – S󸀠󸀠 𝜕t1 U1 – 2S󸀠 6󸀠 𝜕t21 U1 – 𝜕t2 U0 – (6󸀠 )2 𝜕t1 U0 – 6󸀠󸀠 𝜕t1 U0 – 26󸀠 𝜕t 𝜕t1 U0 .

(6.35)

Integrating once with respect to t1 the equation for U0 we obtain: 2

(S󸀠 )2 (𝜕t1 U0 ) = –U0 4 – tU0 2 + 2U0 + E(t),

(6.36)

where E(t) is the “constant of integration”. We study the equation when the right-hand side has two real roots "(t) < !(t). We define the initial data as U0 |t1 =0 = "(t). Solution of eq. (6.36) is an elliptic function. The left-hand side of formula (6.36) is positive for real functions S󸀠 and U0 . The term of highest order on the right-hand side is –U0 4 . Therefore, the real solution of eq. (6.36) has no poles when t1 ∈ ℝ and |E(t)| < ∞. To construct an uniform asymptotics (6.32) we choose the unknown functions S(t), 6(t) and the “constant of integration” E(t) by a special way. They must satisfy to antiresonant conditions for eqs. (6.33)–(6.35). The first condition is the boundedness of

6.3 Fast Oscillating Asymptotics

237

the right-hand side of eq. (6.34) as t1 ∈ ℝ. It is satisfied if a period of the oscillation of the function U0 (t1 , t) on t1 is a constant (see, e.g., [102]): !(t) 󸀠

T = √2S ∫ "(t)

dx . √–x4 – tx2 + 2x + E(t)

(6.37)

The next condition is a boundedness of the first correction U1 (t1 , t) when t1 ∈ ℝ. It gives equation defining a main term of an action I0 (see, Ref. [102]): T 2

󸀠

I0 = S ∫ [𝜕t1 U0 (t1 , t)] dt1 = const. 0

Using the explicit expression for the derivative with respect to t1 we present this formula in a some other form: !(t)

I0 = 2 ∫ √–x4 – tx2 + 2x + E(t) dx = const,

(6.38)

"(t)

where !(t) and "(t) are the solutions of the equation –x4 – tx2 + 2x + E(t) = 0. Equation (6.38) means that the action is a constant. On the other hand, eq. (6.38) defines the energy E(t) as a function with respect to slow time t. At last the necessary condition of boundedness of the second correction U2 with respect to t1 is the equation (see Ref. [17]): 𝜕E I0 󸀠 6 = a = const. 𝜕E S󸀠

(6.39)

Equations (6.37)–(6.39) define the parameters of the main term of the asymptotics as functions with respect to t. To find these functions one should define corresponding constants. Notice that to define the value of the action I0 we construct the asymptotic solution of eq. (6.1) when t > t∗ . Thus the polynomial of the fourth power on U0 on the right-hand side of eq. (6.36) can have no more than two various real roots !(t) and "(t). Hence this polynomial can be submitted as F(x, t) = (!(t) – x)(x – "(t)) ((x – m(t))2 + n2 (t)) . The degeneration of the elliptic integral at t = t∗ corresponds to the case m(t∗ ) = "(t∗ ) = u∗ and n(t∗ ) = 0. For this case, it is easy to calculate the constant on the righthand side of eq. (6.38), which is equal to 20 (i.e. I0 = 20) and the value of the parameter E(t∗ ) = E∗ =

4 3

( 21 )

2/3

.

238

6 Asymptotics for loss of stability

6.3.2 Degeneration of the Fast Oscillating Asymptotics In this section we calculate the asymptotic behaviour of the phase functions S(t) as t → t∗ + 0. The oscillating solution is degenerated as t → t∗ + 0. Let’s construct the asymptotics of this solution in the neighbourhood of the degeneration point. For this purpose we calculate the asymptotics of the phase function S(t) and the function E(t). Let’s write eq. (6.38) as !

∫ √(! – x)(x – ") [(x – m)2 + n2 ] dx = 0,

(6.40)

"

where !, ", m, n are real functions when t ≥ t∗ . These functions satisfy the Vieta equations: ! + " + 2m = 0,

(6.41)

m2 + n2 + !" + 2m(! + ") = t,

(6.42)

(! + ") (m2 + n2 ) + 2m!" = 2,

(6.43)

!" (m2 + n2 ) = –E.

(6.44)

Equation (6.37) and three equation from eq. (6.44) define the dependency !, ", m, n on the parameter t. The last equation in eq. (6.44) defines the function E(t). Let’s make changes of variables: E = E∗ +g1 , t = t∗ +', 4m = m∗ +m1 . After simple transformations of eq. (6.44) we obtain: 2m∗ [6m21 – 2n2 + '] + [2m21 – 2n2 + '] 2m1 = 0, m2∗ (12m21 – 4n2 + ') + 2m∗ m1 (6m21 – 2n2 + ') + (3m21 – n2 + ') (m21 + n2 ) = –g1 .

(6.45)

Construct the solution of this system as t → t∗ + 0 as m1 = ,√' + O('),

n = -1 √' + O('),

g1 = 𝛾1 ' + O ('3/2 ) .

Let’s substitute these expressions in eq. (6.45), and equate the coefficients at the identical powers of '. As a result we obtain 6,21 – 2-12 = –1,

𝛾1 = m2∗ .

6.3 Fast Oscillating Asymptotics

239

To define the constants ,1 and -1 it is necessary to construct the asymptotics as ' → +0 of the left-hand side of eq. (6.40). The asymptotics of the outside integral coefficient in eq. (6.40) has the form (! – ")3 = 64|m|3 [1 –

2 2 3 ,1 √–' 3 -1 – ,1 – 1 + ' + O ('3/2 )] . 2 m∗ 2 4m2∗

(6.46)

The integral in eq. (6.40) is presented as 1

I(k, \$) = ∫ dz √(1 – z)z√(z – k\$)2 + \$2 , 0

where z=

x–" , !–"

m–" = k\$, !–"

\$2 =

n2 . (! – ")2

(6.47)

The value of the constant k will be defined from an asymptotics below. The asymptotics of an integral I(k, \$) as \$ → 0 has the form I(k, \$) =

0 0 0 – k\$ + \$2 + c(k)\$5/2 + O (\$3 ) , 16 8 4

(6.48)

where ∞

c(k) = –

–ky + k2 + 1 5/2 8 y . ∫ dy 5/2 5 [(y – k)2 + 1] 0

First three terms in this formula are calculated by standard way. Let’s show as we can obtain the function c(k). For this purpose the following trick [38] is applicable. Let’s calculate third derivative with respect to \$ of the function I(k, \$): 1

𝜕3 I –kz + k2 \$ + \$ = –3 ∫ dz √(1 – z)z . 3 5/2 𝜕\$ [(z – k\$)2 + \$] 0

On the right-hand side we replace z by \$y and we present the integral as ∞

2 𝜕3 I –1/2 5/2 –ky + k + 1 = –3\$ dy y + O(1). ∫ 5/2 𝜕\$3 [(y – k)2 + 1]

(6.49)

0

Solving the ordinary differential equation (6.49) in the neighbourhood of \$ = 0, we get: I(k, \$) = c0 + \$c1 + \$2 c2 + \$5/2

8 c (k) + O (\$3 ) , 15 3

240

6 Asymptotics for loss of stability

where ∞

C3 (k) = –3 ∫ du 0

–ky + k2 + 1 [(y –

k)2

+ 1]

5/2

y5/2 .

After that it is easy to obtain the asymptotics (6.48). To define the value of k we substitute the asymptotics (6.46) and (6.48) in eq. (6.40) and equate to zero the coefficients at identical powers of '. As a result we get at '5/4 the equation c(k) = 0. This is the transcendental equation for the definition of the parameter k. The numerical solution gives k ∼ 0.463. Using formula (6.47), we get k |-|, 3

,1 =

-1 = √

3 . 2 (3 – k2 )

To construct the asymptotics of S(t) as t → t∗ + 0 we use the equation connecting the period of fast oscillations with its phase [102]: !

dx

󸀠

T = √2S ∫ "

√(! – x)(x – ") [(x – m)2 + n2 ]

.

(6.50)

Present the integral on the right-hand side as 1

J=

dz 1 . ∫ ! – " √(1 – z)z [(z – k\$)2 + \$2 ] 0

After the same replacements, as at the construction of the asymptotics we get

𝜕3 I , 𝜕\$3

as \$ → 0

J=

\$–1/2 dy + O(1). ∫ ! – " √y [(y + k)2 + 1] 0

We substitute this expression into eq. (6.50), use the asymptotics \$ and (!–") as ' → +0 and in the result we get: S󸀠 = (t – t∗ )1/4 S∗ (k) + O ((t – t∗ )1/2 ) , where S∗ (k) =

1/4 3 T 2|m∗ |1/2 ( ) , 2 √2 C∗ (k) 6 – 2k

C∗ (k) = ∫ 0

dy √y [(y – k)2 + 1]

.

6.3 Fast Oscillating Asymptotics

241

The period of oscillations for the function U0 (t1 , t) with respect to the variable t1 in the Krylov–Bogolyubov’s method is an arbitrary constant. Let’s choose it such that S∗ (k) = 1: T=

1/4 S∗ (k)√2C∗ (k) 3 ( ) . 2 1/2 6 – 2k 2|u∗ |

(6.51)

As a result the phase of oscillations as t → t∗ has a form S(t) =

4 (t – t∗ )5/4 + O ((t – t∗ )3/2 ) + S0 , 5

(6.52)

where S0 is some constant. Its value will be defined below at the matching of the asymptotics (6.32) and inner asymptotics (6.13), (6.19) as t → t∗ + 0.

6.3.3 The Domain of Validity of the Fast Oscillating Asymptotics In this section we establish the domain of validity of the fast oscillating asymptotics and compute the residual of this asymptotic solution. The validity of the asymptotics is defined by the formula %U1 ≪ U0 . Let us check this requirement. For this we must obtain the order of the first correction as t → t∗ + 0. Evaluate the order on the right-hand side of the equation for the first correction: F1 (t1 , t, %) = –2S󸀠 𝜕tt2 1 U0 – S󸀠󸀠 𝜕t1 U0 . From the equation for U0 one can evaluate the second term in F1 (t1 , t, %) as t → t∗ + 0: S󸀠󸀠 𝜕t1 U0 = O ((t – t∗ )–1 ) . One must reduce formula for the derivative of U0 with respect to t to evaluate the first term in the formula for F1 (t1 , t, %). The function U0 is the inverse function with respect to the elliptic integral U0

dy

󸀠

t1 + t0 = S ∫ "(t)

√–y4 – ty2 + 2y + E(t)

.

Both limits of the integration are functions with respect to t, and it is not convenient for us. Make the substitution y = (! – ")z + ". Then we obtain 󸀠

t1 + t0 =

U0 –" !–"

dz S . ∫ (! – ") √z(1 – z)√(z – 𝛾)2 + \$2 0

242

6 Asymptotics for loss of stability

Now we differentiate this formula with respect to t, as a result we obtain the formula U0 –" for 𝜕t [ !–" ]: 𝜕t [

U0 – " 1 √ (! – U0 ) (U0 – ") ((U0 – m)2 + n2 ) ]= !–" (! – ")S󸀠

U0 – " [ S󸀠 [ × [–𝜕t ( )∫ ! – " (! – ") 0 [

U0 – " ] dz \$\$󸀠 – (z – 𝛾)𝛾󸀠 S󸀠 ], + ∫ !–" 3/2 ] (! – ") 2 2 2 2 0 √z(1 – z) ((z – 𝛾) + \$ ) √z(1 – z) ((z – 𝛾) + \$ ) ] dz

m . !–b In the same way we can obtain the formula:

where 𝛾 =

𝜕t (

U0 – ! 1 √ (! – U0 ) (U0 – ") ((U0 – m)2 + n2 ) ]= !–" (! – ")S󸀠

[ × [–𝜕t ( [

U0 – ! U0 – ! dz dz \$\$󸀠 – (z – A)A󸀠 S󸀠 S󸀠 ] ! – " + )∫ ∫ !–" ], (! – ") (! – ") 0 –1 √z(1 – z) ((z – A)2 + \$2 )3/2 √z(1 – z) ((z – A)2 + \$2 ) ]

!–m . !–" These formulas will be useful when we will reduce the formula for the second derivative of U0 with respect to t. The first derivative of U0 with respect to t has the form: where A =

𝜕t U0 = (! – ") [

!󸀠 – "󸀠 –" 1 √ (! – U0 ) (U0 – ") ((U0 – m)2 + n2 ) + 𝜕t ( )] + !–" (! – ")S󸀠 (! – ")2

U0 – " [ S󸀠 !–" ×[ –𝜕 ( ) ∫ [ t (! – ") 0 [

dz √z(1 – z) ((z – 𝛾)2 + \$2 )

+

U0 – " ] dz (z – 𝛾)𝛾󸀠 + \$\$󸀠 S󸀠 ]. ∫ !–" ] (! – ") 0 √z(1 – z) ((z – 𝛾)2 + \$2 )3/2 ]

Now we can evaluate the second derivative 𝜕tt2 1 U0 : 𝜕tt2 1 U0 = 𝜕t [

1 √(! – U0 ) (U0 – ") ((U0 – m)2 + n2 )] . S󸀠

Using the formula for 𝜕t U0 one can obtain as t → t∗ + 0: 𝜕tt2 1 U0 = O ((t – t∗ )–5/4 ) . This formula allows to evaluate the right-hand side in eq. (6.34) as t → t∗ + 0: F1 (t1 , t, %) = O ((t – t∗ )–5/4 ) .

6.3 Fast Oscillating Asymptotics

243

The first correction is periodical function with respect to t1 . One can derive the solution of the equation for the first correction using two linear independent solutions of the equation (S󸀠 )2 𝜕t21 V + (6U0 2 + t) V = 0. Here our goal is to write these solutions in terms of U0 because then we evaluate the order of derivatives of the first correction of asymptotic solution (6.32) using the formula for 𝜕t U0 . The first one is U1 (t1 , t, %) ≡ 𝜕t1 U0 = ±

1 √–U0 4 – tU0 2 + 2U0 + E(t). S󸀠

Here the sign before the root is +, when 𝜕t1 U0 > 0 and vice versa. The second solution of the homogeneous linearized equation for the first correction is t1

U2 (t1 , t, %) = ±

d3 1 √–U0 4 – tU0 2 + 2U0 + E(t) ∫ . S󸀠 –U0 4 – tU0 2 + 2U0 + E(t) t0

Integral in this formula must be regularized because integrand has second-order poles at points U0 = ! and U0 = ". One of the possible ways of the regularization is done in Ref. [38]. Here we will follow Ref. [38]. Near the singular points 3 = t1j j ∈ ℤ one must represent the integral as j

t1 ++

j

t1 ++

d3 1 1 d3 ( ) S󸀠 ∫ 2 = –S󸀠 ∫ 𝜕 U (3, t, %) U (3, t, %) U1 (3, t, %) 3 1 1 j

t1 –+

j

t1 –+ j

t ++

1 󵄨󵄨t1j ++ d3 𝜕32 U1 (3, t, %) S 󵄨󵄨 󸀠 󵄨󵄨 =– – S . ∫ 2 󵄨 U1 (3, t, %)𝜕3 U1 (3, t, %) 󵄨󵄨tj –+ U (3, t, %) (𝜕 U (3, t, %)) 1 3 1 1 j

󸀠

t1 –+

The parameter + may be, for example, + = T/4, where T is the period of oscillations of the function U0 with respect to t1 . Change the second derivative of U1 in the last integrand as 𝜕t21 U1 = – (

1 2 ) (6U0 2 + t) U1 , S󸀠

and the first derivative of U1 as 𝜕t1 U1 = S󸀠 𝜕t21 U0 =

1 (1 – tU0 – 2U0 3 ) . S󸀠

244

6 Asymptotics for loss of stability

As a result we obtain formula for regularization of the integral: j

t1 ++

j

j

1

󵄨󵄨3=t1 ++ d3 (S󸀠 )3 󵄨󵄨 󵄨󵄨 S ∫ 2 =– U1 (3, t, %) (1 – tU0 (3, t, %) – 2U0 3 (3, 4, %)) 󵄨󵄨󵄨3=tj –+ U1 (3, t, %) 󸀠

t1 –+

j

t1 ++

+ S󸀠 ∫ j t1 –+

d3 (6U0 2 (3, t, %) + t) (1 – tU0 (3, t, %) – 2U03 (3, t, %))

2

.

Using the functions U1 and U2 one can solve eq. (6.34) for the first correction of the asymptotic solution (6.32) and obtain the solution in terms of U0 . One can see the first correction U1 has the order of the right-hand side of the equation for the first correction multiplied on S󸀠 . It means that U1 = O ((t – t∗ )–3/2 ) as t → t∗ + 0. This formula allows to obtain the restriction for the validity of the formal asymptotic solution (6.32): (t – t∗ )%–2/3 ≫ 1. Evaluate the residual of the asymptotic solution (6.32). For this we must evaluate the function F(t1 , t, %) = –%2 𝜕t2 U0 – %2 𝜕t (

1 1 𝜕 U ) – %2 󸀠 𝜕t1 𝜕t U1 – %3 𝜕t2 U1 . S󸀠 t1 1 S

For all t ∈ (t∗ , t∗ + a] the order of F(t1 , t, %) = O (%2 ). But the order of F grows as t → t∗ + 0, because the derivatives with respect to t have singularity at t = t∗ . On the right-hand side of this formula there is only functions on the U0 , then we can differentiate the right-hand side for deriving the second derivation of U0 with respect to t. The formula for 𝜕t2 U0 will be very large if we will write it in these definitions but now one can evaluate the order of 𝜕t2 U0 as t → t∗ + 0 using the formulas S󸀠 = O ((t – t∗ )1/4 ) ,

O(𝛾) = O(\$) = O ((t – t∗ )1/2 ) ,

as

t → t∗ + 0.

As a result one obtains: 𝜕t2 U0 (t1 , t, %) = O ((t – t∗ )–10/4 ) . Using the same formulas one can evaluate the order of F(t1 , t, %) as t → t∗ + 0: F(t1 , t, %) = O (%2 (t – t∗ )–11/4 ) + O (%3 (t – t∗ )–17/4 ) .

6.3 Fast Oscillating Asymptotics

245

6.3.4 The Matching of the Fast Oscillating Asymptotic Solution and the Inner Asymptotics The matching of this asymptotics with the inner asymptotics (6.13) and (6.19) is carried out. From the matching condition for the phase function we obtain the initial condition S(t)|t=t∗ = 0. Now we turn to the evaluation of the asymptotics for the function U0 as t → t∗ + 0. The function U0 may be written in the implicit form: !(t)

dx

󸀠

t1 = –S ∫ U0

√–x4

– tx2 + 2x + E(t)

.

(6.53)

We remind the parameter t1 is equal to %–1 S(t) + 6(t) and the additional term S0 is undefined in the function S(t). This term we will define in this section. Formula (6.53) allows us to obtain the main term of asymptotics for the U0 at t → t∗ + 0. Denote U0 = u∗ + W(t1 , t) and x = u∗ + y. Using asymptotics of E(t) and !(t) as t → t∗ + 0 one can get t1 = –S󸀠 ∫

u∗ +!(t)

W 4u∗

= S󸀠 (∫

W

dy √–y4 – 4u∗ y3 + O(t – t∗ ) dy

√–y4 – 4u∗ y3

+ O ((t – t∗ )1/2 ) + O (

(t – t∗ ) )) . W 5/2

This formula allows to write the asymptotic expansion of U0 as t → t∗ + 0 in the form: U0 (t1 , t) = W0 (t1 /S󸀠 ) + O ((t – t∗ )1/2 ) + O (

(t – t∗ ) ). W 5/2

(6.54)

The main term of the asymptotics is defined by the formula: W0 (t1 /S󸀠 ) = –

4u∗ . 1 + 4u2∗ (t1 /S󸀠 )2

This asymptotics is applicable as W05/2 ≫ (t – t∗ ). The function U0 (t1 , t) is periodic with respect to t1 . It means the asymptotic (6.54) is applicable on some segments of the

246

6 Asymptotics for loss of stability

interval %4/5 ≪ (t – t∗ ) ≪ 1. The argument of the function W0 in the neighbourhood of some point tk as t → t∗ + 0 is (

󵄨 S(tk ) + S󸀠 (tk )(t – tk ) + O (S󸀠󸀠 (tk )(t – tk )2 ) t – tk t1 󵄨󵄨󵄨 (t–t )2 󵄨 + Sk + O ( %tk ) , ) ∼ ∼ 󵄨 󸀠 󸀠 󸀠󸀠 󵄨 k %S 󵄨󵄨t=t % S (tk ) + O (S (tk )(t – tk )) k

where Sk = %–1 S(tk )/S󸀠 (tk ). It is easy to see the argument of the function W0 may be represented as t1 ∼ ( + Sk , %S󸀠 where Sk is some constants depending on S0 and number k. One can see the main term of the asymptotics (6.54) coincides up to shift Sk with the main term of the 0

second inner asymptotic expansion which is the function w((k ). It is easy to see for full definition of the function U0 (t1 , t) one must find the phase shift S0 . We defined the additional constant S0 by matching the functions U0 (t1 , t) and the first inner asymptotic expansion. Formula (6.54) is suitable when |W0 (()|5/2 ≫ |t – t∗ |. When W0 (() is small, we consider other asymptotic formula for the function U0 (t1 , t): U0 (t1 , t) = u∗ + √t – t∗ P (

S(t) + 6(t), t) . %

(6.55)

Substitute this formula to the second-order equation for the function U0 (t1 , t) (6.33). Expand the function P (S(t)/% + 6(t), t) with respect to the small parameter (t – t∗ ). As a result the equation for the main term of the asymptotic expansion is p󸀠󸀠 + 6u∗ p2 + u∗ = 0. This equation coincides with the equation for the asymptotics of the first correction of the first inner asymptotic expansion. The boundary conditions for the function p(S(t)/% + 6(t)) is obtained from the condition of matching (6.55) with the asymptotics of expansion (6.54) as |(| → ∞. The additional constant S0 in formula (6.52) is finally defined at the matching of the asymptotic expansions (6.54) and (6.55) with asymptotics of the inner asymptotic expansions. From this we get S0 = 0.

6.4 An Asymptotic Solution Slowly Crossing the Separatrix Near a Saddle-Centre Bifurcation Point Beginning from this section we take care of bifurcations for the primary resonance 3 2 3 4 3 equation. Let us change variables 8 = √ f J, 4 = √ f T, % = √ f . It yields i%

dJ + (T – |J|2 )J = 1, dT

0 < % ≪ 1.

(6.56)

6.4 An Asymptotic Solution Slowly Crossing the Separatrix Near a Saddle-Centre

247

We will show that the same sequence of equations arises in the bifurcation layer for dynamical saddle-centre bifurcation for the primary autoresonance equation.

6.4.1 Typical Problems for the Autoresonance We seek an asymptotic solution of eq. (6.56) in the form ∞

J(T, %) = ∑ %n 8n (t).

(6.57)

n=0

We obtain a nonlinear equation for the leading-order term in eq. (6.57) (T – |80 |2 )80 = 1.

(6.58)

A number of roots for this algebraic equation depend on the parameter T. When T > T∗ eq. (6.58) has three roots. Let us denote them by u1 (T) < u2 (T) < u3 (T). At T = T∗ = 3(1/2)2/3 there are one simple root and one multiple root u2,3 = (1/2)1/3 . When T < T∗ eq. (6.58) has a single root. The roots of eq. (6.58) are the first approximations for slowly varying equilibrium positions. The roots u1 (t), u2 (t) correspond to the centres and u3 (t) corresponds to the saddle. At T = T∗ the roots u2 (T) and u3 (T) merge as a saddle-centre bifurcation occurs. Choosing one of uk , k = 1, 2, 3, as the leading-order term for eq. (6.57) one can construct three different asymptotic expansions for the solution of eq. (6.56) in domain T > T∗ . The asymptotic solution with primary term u1 (T) have no bifurcations. This was discussed in Refs. [53, 91]. This solution is stable for all values of T [88]. We will construct the asymptotic solution with the leading-order term u2 (T) and extension of this solution inside and after the neighbourhood of bifurcation on the interval T ∈ [T∗ – C, T∗ + C], where C = const > 0. 6.4.2 Three Types of Algebraic Solutions In this section we construct the asymptotic expansion of the solution of eq. (6.56) as (T – T∗ ) ≥ %4/5–𝛾1 , ∀𝛾1 ∈ (0, 2/5). Let us construct the expansion in the form: ∞

m=0

m=0

J(t, %) = ∑ %2m 82m (t) + i ∑ %2m+1 82m+1 (t)

(6.59)

Substituting this series in eq. (6.56) and collecting the terms under the same powers of the small parameter we obtain the recurrent sequence of the algebraic equations for coefficients of expansion (6.59).

248

6 Asymptotics for loss of stability

The leading-order term in the expansion is the solution of the equation (T – |80 |2 )80 = 1. This equation has three roots u1 (T) < u2 (T) < u3 (T) as T > T∗ . Further we use u2 (t) as the leading-order term in the expansion. The higher-order terms in the expansion are determined from linear equation [T – 3820 ] 8n = –8󸀠n–1 + Pn (80 , . . . , 8n–1 ),

n = 2m,

(6.60)

[T – 820 ] 8n = –8󸀠n–1 + Pn (80 , . . . , 8n–1 ),

n = 2m + 1,

(6.61)

where Pn is a cubic form with respect to arguments. The expression [T – 3820 ] as T → T∗ has the asymptotic behaviour [T – 3820 ] = O ((T – T∗ )1/2 ). Simple calculations give us the explicit form of the first correction term 81 = –

80 . [T – 3820 ][T – 820 ]

(6.62)

This expression has a singularity at T = T∗ . The structure of the nth correction term as T → T∗ can be obtained by the analysis of eqs. (6.60) and (6.61). In the case of n = 0, 1: ∞

8n (T) = (T – T∗ )–n/2 ∑ 8n,k (T – T∗ )k/2 ,

T → T∗ ,

(6.63)

k=0

where 80,0 = u∗ , 80,1 = –1/√3. In the case of n = 2m, m ∈ ℕ: ∞

8n (T) = (T – T∗ )(1–5m)/2 ∑ 8n,k (T – T∗ )k/2 ,

T → T∗ .

(6.64)

t → T∗ .

(6.65)

k=0

In the case of n = 2m + 1, m ∈ ℕ: ∞

8n (t) = (T – T∗ )(–1–5m)/2 ∑ 8n,k (T – T∗ )k/2 , k=0

Expansion (6.59) is valid when the following conditions %2

82m+2 ≪ 1, 82m

take place. This yields (T – T∗ )%–4/5 ≫ 1.

%2

82m+1 ≪1 82m–1

6.5 Expansions in Bifurcation Layer

249

In order to obtain the error of the order of %𝛾(N) , 𝛾(N) > 0 after substituting N terms of the asymptotic series into eq. (6.56) it is sufficient to claim %2

82m+2 ≤ %, , 82m

%2

82m+1 ≤ %, , , > 0. 82m–1

It yields (T – T ∗ ) ≥ %4/5–𝛾1 , 𝛾1 > 0.

6.5 Expansions in Bifurcation Layer In this section we construct the asymptotic expansion of the solution in the domain |t – t∗ | ≪ 1. Usually the asymptotics of this type are called internal ones [64]. 6.5.1 Initial Interval 6.5.1.1 The Painlevé-1 Interval In this section we construct the expansion of the solution in the neighbourhood of the singularity T = T∗ . In the neighbourhood of the singularity we use a scaling variable 4 = (T – T∗ )%–4/5 . This new variable is defined by the structure of the external asymptotic expansion (6.59) as T → T∗ . We construct the asymptotic solution in the form J = U∗ + %2/5 !(4, %) + i%3/5 "(4, %).

(6.66)

The equations for ! and " are !󸀠 + (U∗2 – t∗ )" = –%2/5 2U∗ !" – %4/5 (!2 – 4)" – %6/5 "3 , "󸀠 – 3U∗ !2 + U∗ 4 = %2/5 (–!3 – U∗ "2 + !4) + %4/5 !"2 .

(6.67)

The asymptotic behaviour of the functions ! and " as 4 → ∞ is known. It is obtained by substituting eqs. (6.63), (6.64) and (6.65) in eq. (6.59), decomposing this expression with respect to variable 4 and collecting the coefficients under %2n/5 , n = 0, 1, . . . , ∞ ∞

m=1

k=1

m=1

!(4, %) = 41/2 80,1 + ∑ 82m,0 4(1–5m)/2 + ∑ %2k/5 4(k+1)/2 (80,k+1 + ∑ 82m,k 4(1–5m)/2 ) , (6.68) ∞

k=0

m=0

"(4, %) = ∑ %2k/5 4k/2 ∑ 82m+1,k 4(–1–5m)/2 .

(6.69)

250

6 Asymptotics for loss of stability

It is convenient to construct the asymptotic solution of eqs. (6.67) in the following form: ∞

!(4, %) = ∑ %2n/5 !n (4),

"(4, %) = ∑ %2n/5 "n (4).

n=0

(6.70)

n=0

The leading-order terms are defined by !󸀠0 + (U∗2 – T∗ )"0 = 0,

"󸀠0 – 3U∗ !20 + 4U∗ = 0.

(6.71)

Let us differentiate the first equation and take into account that U∗ (U∗2 – T∗ ) = 1, then we obtain !0 󸀠󸀠 + 3!20 – 4 = 0.

(6.72)

Substitution of y = –!0 /2, x = –4/2 lead us to the Painlevé-1 equation in a standard form y󸀠󸀠 = 6y2 + x. We obtain the asymptotics of !0 (4) as 4 → +∞ from eq. (6.68): ∞

!0 (4) = √4 [80,1 + ∑ 82m,0 4–

5m 2 ].

(6.73)

m=1

The solution of the Painlevé-1 equation with the asymptotic behaviour of such type was investigated in Refs. [63, 79]. The representation of the solution by monodromy data s2 , s3 was obtained in Ref. [79]. Using these results we obtain !0 = –y(–4/2, 0, 0)/2, where y(x, s2 , s3 ) is the Painlevé-1 transcendent. The coefficient "0 is obtained from the first equation of system (6.71) "0 =

!󸀠0 . 2U∗2

The higher-order terms !n and "n are obtained from the following equations: !󸀠n – 2U∗2 "n = –2U∗

n

"󸀠n – 3U∗ ∑ !i !j = –

n–1

n–2

∑ !i "j –

∑ !i !j "k + 4"n–2 –

n–3

∑ "i "j "k ,

i+j=n–1

i+j+k=n–2

i+j+k=n–3

i,j≥0

i,k,j≥0

i,k,j≥0

n–1

n–1

!i !j !k – U ∗

(6.74)

n–2

"i "j + 4!n–1 +

i+j=n

i+j+k=n–1

i+j=n–1

i+j+k=n–2

i,j≥0

i,j,k≥0

i,j≥0

i,j,k≥0

!i "j "k .

(6.75)

251

6.5 Expansions in Bifurcation Layer

Instead of system (6.74), (6.75) we investigate the second-order differential equation for !n n–1

!󸀠󸀠 n + 6!0 !n = –6

+ 2U∗2

n–1

n–1

!i !j – 2U∗2

!i !j !k –

i+j=n–1

i+j+k=n–1

i+j=n–1

i,j≥1

i,j,k≥0

i,j≥0

n–1

n–2

!i !j "k – 2U∗

"i "j + 2U∗2 4!n–1

n–2

(!󸀠i "j + !i "󸀠j ) –

(2!󸀠i !j "k + !i !j "󸀠k )

i+j+k=n–2

i+j=n–1

i+j+k=n–2

i,j,k≥0

i,j≥0

i,j,k≥0

n–3

+ "n–2 + 4"󸀠n–2 – 3

"󸀠i "j "k .

(6.76)

i+j+k=n–3 i,j,k≥0

Asymptotic behaviour of !n as 4 → ∞ is obtained from eq. (6.68) ∞

!n = 4(n+1)/2 (80,n+1 + ∑ 82m,n 4(1–5m)/2 ) ,

4 → ∞.

(6.77)

m=1

The coefficient "n is expressed by !n from eq. (6.74). Denote by a1 (4) and a2 (4) two linear independent solutions of the homogeneous equation (6.76): 1 a1 = – 𝜕s2 (y(–4/2, s2 , s3 ))|s2 =s3 =0 , 2

1 a2 = – 𝜕s3 (y(–4/2, s2 , s3 ))|s2 =s3 =0 . 2

The asymptotic property ai = O(4–5/8 ), 4 → ∞, is obtained from Ref. [79]. Let us construct the solution of eq. (6.76) with asymptotics (6.77) in the form !n (4) = !n,N + !n,res ,

(6.78)

where N

!n,N = 4(n+1)/2 (80,n+1 + ∑ 82m,n 4(1–5m)/2 ) . m=1

Then !n,res satisfies the equation !󸀠󸀠 n,res + 6!0 !n,res = fn,N ,

where fn,N = O (4

n–5N–2 2 ),

4→∞

(6.79)

and asymptotic condition ∞

!n,res = 4(n+1)/2 ∑ 82m,n 4(1–5m)/2 , m=N+1

4 → ∞.

(6.80)

252

6 Asymptotics for loss of stability

Let N be such that N > 4n–5 . Then the solution for eq. (6.79) with the asymptotics (6.80) 20 can be represented in the form !n,res (4) =

4

4

1 (a1 ∫ a2 fn,N d4󸀠 – a2 ∫ a1 fn,N d4󸀠 ) , W!

(6.81)

where Wa = const is the Wronskian of the linear independent solutions a1 and a2 . A simple calculation gives us the domain of validity of this asymptotic expansion as 4 → ∞. Representation (6.70) of the solution is valid as 4 ≤ %–4/5+𝛾1 , 𝛾1 > 0. The domains of validity for this asymptotic expansion and for the expansion from previous section are intersected when 𝛾1 ∈ (0, 2/5). It is known that the solution of the Painlevé-1 equation with asymptotics (6.73) has poles at points 4k [97]. Let us denote the first pole by 40 and call it the bifurcation point. The solution of the Painlevé-1 equation is represented by converging series in the neighbourhood of 40 : !0 (4) = –

∞ 40 2 2 4 (4 – 4 – ) + a (4 – 4 ) + !0,k (4 – 40 )k , ∑ 0 4 0 (4 – 40 )2 10 k=6

where the constants 40 and a4 are parameters of the solution. These parameters are uniquely defined by the monodromy data s2 , s3 , (see Ref. [99]). In the neighbourhood of 40 there exist two linear independent solutions of equation !̃ 󸀠󸀠 + 6!0 !̃ = 0 such that !̃ 1 (4) = (4 – 40 )–3 + c1 (4 – 40 ) + c2 (4 – 40 )2 + c3 (4 – 40 )3 + O((4 – 40 )5 )

(6.82)

and !̃ 2 (4) = (4 – 40 )4 + O((4 – 40 )8 ).

(6.83)

The pairs of the linear independent solutions (a1 , a2 ) and (!̃ 1 , !̃ 2 ) are connected by the following formula: (

a1 !̃ ) = T( 1 ), !̃ 2 a2

(6.84)

where T is the fixed non-singular matrix. Formulas (6.78)–(6.84) yield the formula for the representation of !n in the neighbourhood of the bifurcation point 40 as 4 > 40 !n = !̃ n,c (4) + a+n !̃ 1 (4) + b+n !̃ 2 (4).

(6.85)

6.5 Expansions in Bifurcation Layer

253

Here a+n and b+n are constants. The term !̃ n,c (4) is the partial solution of the nonhomogeneous equation for !n (4) with the asymptotics: !̃ n,c (4) =

!̃ n,k (4 – 40 )k ,

where

!̃ n,–3 = 0,

!̃ n,4 = 0.

k=–2n–2

The partial solution with such property is constructed by adding the solutions of homogeneous equation. By analysis of written asymptotics we obtain that the asymptotic expansions (6.70) are valid as (4 – 40 ) ≥ %1/5–𝛾2 , 𝛾2 ∈ (0, 1/10). This proves Proposition 2.

6.5.1.2 The Neighbourhood of the Bifurcation Point In this section we construct an asymptotic expansion which is valid in the small neighbourhood |4 – 40 | ≪ 1 of the bifurcation point. We use a new scaling variable ( = (4 – 40 )%–1/5 in the small neighbourhood of the point 40 . Denote by w = J – U∗ . It yields the equation for w((, %) iw󸀠 + 2U∗2 w + 2U∗ |w|2 + |w|2 w + U∗2 w∗ + U∗ w2 – %4/5 40 (U∗ + w) – %((U∗ + w) = 0.

(6.86)

To obtain the asymptotic behaviour of w((, %) as ( → +∞ we change the independent variables in the expression J = U∗ + %2/5 !(4, %) + i%3/5 "(4, %) and expand this one with respect to variable ( as ( → ∞. It yields ∞

w((, %) =U∗ + ∑ %k/5 (k [!0,k–2 (–2 + i"0,k–3 (–3 + %3/5 !1,k–1 (–1 k=0 ∞

n=2

n=1

+ ∑ !n,k–2n–2 (–2n–2 + i ∑ "n,k–3–2n (–3–2n ],

( → ∞.

(6.87)

We seek the expansion in the form ∞

w((, %) = w0 + %4/5 ∑ %(n–1)/5 wn ,

% → 0.

(6.88)

n=1

The first correction term in eq. (6.88) has an order %4/5 , but first correction term in the asymptotics (6.87) with respect to % has an order %1/5 . To match eqs. (6.87) and (6.88) we modify the independent variable: ∞

(0 = ( + ∑ %(2n–1)/5 (0,n , n=1

% → 0.

254

6 Asymptotics for loss of stability

The correction terms (n are defined in eq. (6.87). Using formulas (6.82), (6.83) and (6.85) we obtain (0,n =

a+n . 4

(6.89)

It is convenient to pass from eq. (6.86) to the equation with respect to the variable (0 : iw󸀠 + (2U∗2 – t∗ )w + 2U∗ |w|2 + |w|2 w + U∗2 w∗ + U∗ w2 ∞

= (%4/5 40 + %(0 – % ∑ %(2n–1)/5 (0,n )(U∗ + w),

% → 0.

(6.90)

n=1

To construct the formal solution w((0 , %) we substitute the formal series (6.88) in eq. (6.90) and gather the terms of the same order. Then we obtain the recurrent sequence of the equations for the leading-order and higher-order terms in eq. (6.88). The equation for the leading-order term is w0󸀠 + U∗ (2|w0 |2 + w02 ) + U∗2 (w0 – w0 ) + |w0 |2 w0 = 0.

(6.91)

The asymptotics of w0 as (0 → ∞ can be obtained from the asympotics of the function w((0 , %) ∞

w0 ((0 ) = ∑ w0,k (–k ,

w0,2 = –2.

(6.92)

k=2

To construct the leading-order term w0 ((0 ) we use the conservation law for eq. (6.91): 1 1 1 U∗ [|w0 |2 w0 + |w0 |2 w0 ] + U∗2 [ w0 2 + w02 – |w0 |2 ] + |w0 |4 = H. 2 2 2 Here the constant H is defined by asymptotics (6.92): H = 0. Let us solve this algebraic equation with respect to w and substitute the obtained expression into eq. (6.91). It yields iw0󸀠 – 2U∗ w0 √–U∗ w0 = 0.

(6.93)

This equation can be obviously solved. The constant of an integration is defined by eq. (6.92). The leading-order term in eq. (6.88) is w0 ((0 ) =

–2 . ((0 – iU∗ )2

This formula defines the separatrix solution of eq. (6.91).

(6.94)

6.5 Expansions in Bifurcation Layer

255

The higher-order terms wm , m = 1, 2, . . . , satisfy the equations 󸀠 + [2U∗ w0 + 2U∗ w0 – U∗2 + 2w0 w0 ] wm + (U∗ + w0 ) wm = Fm . iwm 2

(6.95)

Here Fm = –

∑󸀠 wi wj wk – U∗ i+j+k=m–6

+ 40 wm–7 + (0 wm–8 –

wi wj – 2U∗

i+j=m–3;i,j>0

wi wj

i+j=m–3;i,j>0

(i,0 wj ,

(6.96)

2i+j=m–7

where G󸀠 does not contain terms with pair of zero indexes. In particular, F1 = 40 (U∗ + w0 ),

F2 = (0 (U∗ + w0 )

F3 = –(1,0 (U∗ + w0 ),

F4 = 0,

F5 = –(2,0 (U∗ + w0 ) – 2U∗ |w1 |2 – U∗ w12 . The asymptotics of wm ((0 ) as ( → ∞ is defined by eq. (6.87). To construct the higher-order terms we need the solutions of the homogeneous equation (6.95). The first solution is obtained by differentiation of w0 : W1 ((0 ) =

1 d 1 w = . 4 d(0 0 ((0 – iU∗ )3

(6.97)

To obtain second solution we consider homogeneous equation (6.95) and its complex conjugation as a system of first-order differential equations. The Wronskian of linear independent solutions for this system is a constant. This allows us to determine the second solution. A formula for the second solution is complicated and we do not give it here. The second solution grows as |(0 | → ∞: W2 ((0 ) = (04 + i

1 3 ( + ⋅⋅⋅. 2U∗2 0

(6.98)

The Wronskian of W1 , W 1 and W2 , W 2 is W = W1 W 2 – W2 W 1 = –

7i . U∗2

By using W1 and W2 we write the higher-order terms in the asymptotic expansion with the given asymptotic behaviour as (0 → +∞ in an explicit form. The asymptotic behaviour as (0 → –∞ of these corrections evaluates below. It is convenient for us to represent the higher-order terms in eq. (6.88) in the form – – – ((0 ) + Xn,0 W1 ((0 ) + Yn,0 W2 ((0 ), wn ((0 ) = wn,c

(6.99)

256

6 Asymptotics for loss of stability

where w_{n,c}^-(\zeta_0) is a partial solution for the nth correction term. Here we choose a special partial solution, one that does not contain terms of the order of \zeta_0^4 and \zeta_0^{-3} in the asymptotics as \zeta_0 \to -\infty. This special partial solution is obtained from any partial solution by adding the solutions W_1 and W_2 with suitable coefficients. The constants X_{n,0}^- and Y_{n,0}^- are

X_{n,0}^- = - \mathop{\mathrm{res}}_{r=\infty} \Big[ \frac{1}{r} \int_{-r}^{r} (F_n \bar{W}_2 + \bar{F}_n W_2) \frac{d\zeta}{W} \Big],

Y_{n,0}^- = b_m^+ - \mathop{\mathrm{res}}_{r=\infty} \Big[ \frac{1}{r} \int_{-r}^{r} (F_n \bar{W}_1 + \bar{F}_n W_1) \frac{d\zeta}{W} \Big], \quad \text{for } n = 2m+3,\ m = 1, 2, \dots,

and

Y_{n,0}^- = - \mathop{\mathrm{res}}_{r=\infty} \Big[ \frac{1}{r} \int_{-r}^{r} (F_n \bar{W}_1 + \bar{F}_n W_1) \frac{d\zeta}{W} \Big], \quad \text{for } n \neq 2m+3,\ m = 1, 2, \dots.

The asymptotic behaviour as \zeta_0 \to -\infty of the partial solution in formula (6.99) is obtained by analysis of eq. (6.96). The cubic and quadratic terms in eq. (6.96) give the dominant growth of w_{n,c}^-(\zeta_0); the other terms are weaker.

Let us analyse the asymptotic behaviour of the partial solution w_{n,c}^-(\zeta_0) as \zeta_0 \to -\infty. Denote the contribution of the quadratic terms by w_{sq}^-. Then

w_{n_1,sq}^-(\zeta_0) = O(\zeta_0^{\beta_1}), \quad \text{where } n_1(i,j) = 4i + 5j - 3, \ \beta_1(i,j) = 4i + 6j - 2.

Denote by w_{n,cub}^- the contribution of the cubic terms. Then we obtain

w_{n_2,cub}^-(\zeta_0) = O(\zeta_0^{\beta_2}), \quad \text{where } n_2(i,j) = 8i + 5j - 6, \ \beta_2(i,j) = 8i + 6j - 8;

w_{n_3,cub}^-(\zeta_0) = O(\zeta_0^{\beta_3}), \quad \text{where } n_3(i,j) = 4i + 10j - 6, \ \beta_3(i,j) = 4i + 12j - 4;

w_{n_4,cub}^-(\zeta_0) = O(\zeta_0^{\beta_4}), \quad \text{where } n_4(i,j) = 4i + 5j - 6, \ \beta_4(i,j) = 4i + 6j - 4.

In order to estimate how w_{n,c}^- grows as \zeta_0 \to -\infty we have to solve the systems of two equations

n_k(i,j) = n, \quad n_m(\hat{i},\hat{j}) = n,

(6.100)

with respect to (i, j, \hat{i}, \hat{j}) and calculate \beta_{km} = \max\{\beta_k, \beta_m\} (k \neq m, \ k, m = 1, 2, 3, 4). The value \beta(n) = \max_{k,m} \beta_{km} defines the asymptotic behaviour of the partial solution w_{n,c}^-(\zeta_0) as \zeta_0 \to -\infty.
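The maximization above is elementary integer arithmetic and can be sketched numerically; the index laws n_k and growth exponents β_k below are the ones stated in the text, while the search bound `imax` is an arbitrary illustration parameter:

```python
# Enumerate nonnegative integer pairs (i, j) satisfying an index law
# n_k(i, j) = n and collect the corresponding growth exponents beta_k(i, j);
# the largest exponent governs the growth of the partial solution.
def exponents(n, n_law, b_law, imax=50):
    return [b_law(i, j)
            for i in range(imax) for j in range(imax)
            if n_law(i, j) == n]

# quadratic-term laws from the text: n1 = 4i + 5j - 3, beta1 = 4i + 6j - 2
e = exponents(13, lambda i, j: 4*i + 5*j - 3, lambda i, j: 4*i + 6*j - 2)
beta = max(e)   # for n = 13 the only admissible pair is (i, j) = (4, 0), so beta = 14
```

The same enumeration with the cubic-term laws, and taking the overall maximum, gives the exponent β(n) used below.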

257

6.5 Expansions in Bifurcation Layer

If the corresponding systems (6.100) have no solutions, then we investigate the weaker terms in eq. (6.96). This investigation is carried out in a similar way. In particular, the segment of the Laurent series for the function w_{1,c}^-(\zeta_0) has the form

w_{1,c}^-(\zeta_0) = \frac{\Delta_0 U_*^3}{5} \zeta_0^2 + i \frac{\Delta_0 U_*}{5} \zeta_0 - \frac{\Delta_0}{30U_*} + i \frac{3\Delta_0}{5U_*^3} \zeta_0^{-1} + O(\zeta_0^{-2}), \quad \zeta_0 \to \infty.

The coefficients of W_1 and W_2 are X_{1,0}^- = 0, Y_{1,0}^- = 0.

The function w_2(\zeta_0) as \zeta_0 \to -\infty is also represented as the sum of a partial solution of the non-homogeneous equation and solutions of the homogeneous equation:

w_2(\zeta_0) = w_{2,c}^-(\zeta_0) + X_{2,0}^- W_1(\zeta_0) + Y_{2,0}^- W_2(\zeta_0).

(6.101)

The partial solution of the non-homogeneous equation is represented as \zeta_0 \to -\infty in the form:

w_{2,c}^-(\zeta_0) = \frac{\zeta_0^3}{6} + i \frac{U_* \zeta_0^2}{2} + \frac{5\zeta_0}{12U_*} + \frac{3i}{4} - \frac{15U_*}{16\zeta_0} + i\Big(\frac{16U_*^2}{15} + \frac{2}{3U_*}\Big)\zeta_0^{-2} + O(\zeta_0^{-4}).

The coefficients of W_1 and W_2 are X_{2,0}^- = 0, Y_{2,0}^- = \Delta_0/2.

To obtain a uniform asymptotic expansion (6.88) we need \varepsilon^{1/5} w_{n+1}/w_n \le \varepsilon^{\gamma}, where \gamma > 0. This yields the domain -\varepsilon^{-1/10+\gamma_3} \le \zeta_0 \le \varepsilon^{-1/5+\gamma_2}, \forall \gamma_3 > 0. The domains of validity of eqs. (6.88) and (6.70) intersect when \gamma_2 \in (0, 1/10). It is easy to see that the asymptotics of eq. (6.88) as \zeta_0 \to \infty and as \zeta_0 \to -\infty are different. The difference consists of terms of the order of \varepsilon \zeta_0^4 and weaker. This means that these additional terms appear through the passage through the neighbourhood of the bifurcation point. The leading contribution of these additional terms is \varepsilon \Delta_0 \zeta_0^4 / 2.

6.5.1.3 Intermediate Expansion. Initial Interval
We investigate the behaviour of the solution after a passage through the small neighbourhood of the bifurcation point.


Introduce a new scaled variable T_1 = \zeta_0 \varepsilon^{1/6}. We construct the expansion of our special solution of the primary resonance equation in the form:

U = U_* + \varepsilon^{2/6} A(T_1, \varepsilon) + i \varepsilon^{3/6} B(T_1, \varepsilon),

(6.102)

where A(T_1, \varepsilon) and B(T_1, \varepsilon) are real functions. The primary resonance equation can be written in terms of these new variables in the form:

A' - 2U_*^2 B = -\varepsilon^{1/3} 2U_* A B - \varepsilon^{2/3} A^2 B - \varepsilon B^3 + \Big(\varepsilon^{4/5}\Delta_0 + \varepsilon^{5/6} T_1 - \varepsilon^{4/5}\sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0}\Big) B,

B' + 3U_* A^2 = -\varepsilon^{1/3}(A^3 - B^2 U_*) - \varepsilon^{2/3} A B^2 + \Big(\varepsilon^{1/6} T_1 + \varepsilon^{2/15}\Delta_0 - \varepsilon^{2/15}\sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0}\Big)(U_* + \varepsilon^{1/3} A), \quad \varepsilon \to 0. \quad (6.103)

Rewriting the asymptotics of U as \zeta_0 \to -\infty in terms of the variable T_1 gives

U = U_* + \sum_{n=2}^{\infty} \varepsilon^{n/6} \Big[ w_{0,n} T_1^{-n} + \sum_{k=1}^{\infty} \varepsilon^{(6k+14-5\beta(k))/30} w_{k,n-2-\beta(k)} T_1^{2+\beta(k)-n} \Big],

(6.104)

where w_{n,m} are the coefficients of the Laurent series for w_n(\zeta_0) as \zeta_0 \to -\infty. System (6.103) and asymptotics (6.104) contain terms of different orders with respect to \varepsilon. It is convenient to represent the formal asymptotic solution in the form:

A(T_1, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/30} A_n(T_1), \quad B(T_1, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/30} B_n(T_1), \quad \varepsilon \to 0. \quad (6.105)

By substituting the representations for A and B into the equations we obtain a recurrent sequence of equations for A_n(T_1), B_n(T_1):

A_n' - 2U_*^2 B_n = H_n, \quad B_n' + 3U_* \sum_{i+j=n} A_i A_j = G_n, \quad (6.106)

where for n = 1, 2, 3, 6, 7, H_n = G_n \equiv 0, and

H_4 = \Delta_0 B_0, \quad G_4 = \Delta_0 U_*, \quad H_5 = T_1 B_0, \quad G_5 = T_1 U_*


and for n \ge 8

H_n = -2U_* \sum_{i+j=n-10} A_i B_j - \sum_{i+j+k=n-20} A_i A_j B_k - \sum_{i+j+k=n-30} B_i B_j B_k + \Delta_0 B_{n-24} + T_1 B_{n-25} - \sum_{12i+j=n-18} t_{i,0} B_j,

G_n = -\sum_{i+j+k=n-10} A_i A_j A_k + U_* \sum_{i+j=n-10} B_i B_j - \sum_{i+j+k=n-20} A_i B_j B_k - U_* t_{l,0} + T_1 A_{n-15} + \Delta_0 A_{n-16} - \sum_{12i+j=n-8} t_{i,0} A_j, \quad (6.107)

where in the term U_* t_{l,0} the index l is such that 12l - 2 = n.

Let us investigate the second-order differential equation for the function A_n(T_1) instead of the system of first-order differential equations. The equation for the leading-order term is

A_0'' + 3A_0^2 = 0.

(6.108)

The equations for the higher-order terms in eq. (6.105) are A_m'' + 6A_0 A_m = F_m, where

F_m = 2U_*^2 \Big[ -3U_* \sum_{i+j=m;\,i,j\neq 0} A_i A_j + \sum_{i+j+k=m-10} A_i A_j A_k + U_* \sum_{i+j=m-10} B_i B_j - \sum_{i+j+k=m-20} A_i B_j B_k - U_* t_{l,0} + T_1 A_{m-15} + \Delta_0 A_{m-16} - \sum_{12i+j=m-8} t_{i,0} A_j - 2U_* \Big( \sum_{i+j=m-10} A_i' B_j + \sum_{i+j=m-10} A_i B_j' \Big) \Big]

- 2 \sum_{i+j+k=m-20} A_i' A_j B_k - \sum_{i+j+k=m-20} A_i A_j B_k' - 3 \sum_{i+j+k=m-30} B_i' B_j B_k + \Delta_0 B_{m-24}' + B_{m-25} + T_1 B_{m-25}' - \sum_{12i+j=m-18} t_{i,0} B_j'. \quad (6.109)

In particular,

F_m \equiv 0, \ m = 1, 2, 3, 6, 7; \quad F_4 = -2U_*^2 \Delta_0; \quad F_5 = -2U_*^2 T_1;

F_8 = -3A_4^2(T_1); \quad F_9 = -6A_4(T_1) A_5(T_1);

F_{10} = -t_{1,0} U_* - 2U_* \Big( \frac{(A_0')^2}{2U_*^2} + \frac{3A_0 (A_0')^2}{4U_*^3} \Big) - 2U_* \Big( A_0^3 - \frac{3A_0}{2U_*^2} \Big) + 3(A_5)^2.


The asymptotics as T_1 \to -0 of the coefficients of the formal series for A(T_1, \varepsilon) can be obtained from formulas (6.99), (6.104) and (6.105):

A_0 = \frac{-2}{T_1^2} + Y_{2,0}^- T_1^4 + O(T_1^{10}); \quad A_m \equiv 0, \ m = 1, 2, 3, 6, 7;

A_4 = \frac{\Delta_0 U_*^3}{5} T_1^2 + O(T_1^3); \quad A_5 = -\frac{1}{6} T_1^3 + O(T_1^4); \quad (6.110)

A_8(T_1) = O(T_1^6); \quad A_9(T_1) = O(T_1^7); \quad A_{10}(T_1) = O(T_1^{-4}).

Let us construct the coefficients of eq. (6.105). The solution A_0(T_1) has the asymptotics

A_0 = -\frac{2}{T_1^2} + a_4(1) T_1^4 + O(T_1^{10}), \quad T_1 \to -0,

where a_4(1) = Y_{2,0}^-. Using this asymptotics it is easy to see that the function A_0 is expressed via the Weierstrass \wp-function:

A_0(T_1) = -2\wp(T_1; 0, g_3(1)), \quad \text{where } g_3(1) = a_4(1)/56.
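The ℘-representation can be verified symbolically. The sketch below (using sympy) checks that the local behaviour -2/T^2 satisfies eq. (6.108) exactly, and that A_0 = -2\wp satisfies it for any g_3, using only the Weierstrass relation \wp'' = 6\wp^2 (valid when g_2 = 0):

```python
import sympy as sp

T = sp.symbols('T')

# Local check at the pole: A0 = -2/T^2 solves A0'' + 3*A0^2 = 0
A0_local = -2 / T**2
assert sp.simplify(sp.diff(A0_local, T, 2) + 3 * A0_local**2) == 0

# General check via the Weierstrass ODE: for g2 = 0, wp'' = 6*wp^2,
# hence A0 = -2*wp(T; 0, g3) satisfies A0'' + 3*A0^2 = 0
wp = sp.Function('wp')
A0 = -2 * wp(T)
expr = sp.diff(A0, T, 2) + 3 * A0**2            # = -2*wp'' + 12*wp^2
expr = expr.subs(sp.Derivative(wp(T), T, 2), 6 * wp(T)**2)
assert sp.simplify(expr) == 0
```

The second check uses only the defining differential equation of ℘, so it is independent of the normalization of g_3.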

The coefficient A_n(T_1) can be represented in the neighbourhood of the point T_1 = 0 in the form

A_n(T_1) = A_{n,0}^-(T_1) + x_{n,0}^- A_1(T_1) + y_{n,0}^- A_2(T_1).

The function A_{n,0}^-(T_1) is the partial solution of the non-homogeneous equation for the nth correction. This solution does not contain terms of the order of T_1^{-3} and T_1^4. The functions A_1(T_1) and A_2(T_1) are linearly independent solutions of the linear homogeneous equation in variations. These solutions are uniquely defined by their asymptotics as T_1 \to -0,

A_1(T_1) = T_1^{-3} + \dots, \quad A_2(T_1) = T_1^4 + \dots,

and by the condition that the expansion of the function A_1(T_1) does not contain a term of the order of T_1^4. Both of these solutions are expressed via the Weierstrass function. In the case g_2 = 0 we obtain [1]

A_1(T_1) = \frac{1}{4} \wp'(T_1, 0, g_3), \quad A_2(T_1) = -14 \partial_{g_3} \wp(T_1, 0, g_3) \equiv \frac{7}{3g_3} T_1 \wp'(T_1, 0, g_3) + \frac{14}{3g_3} \wp(T_1, 0, g_3).

The Wronskian of these solutions is W = -4/7.


The coefficients x_{n,0}^- and y_{n,0}^- are defined by matching eq. (6.105) as T_1 \to -0 with eq. (6.88) as \zeta_0 \to -\infty:

x_{n,0}^- = X_{m,0}^-, \quad n = 6m + 23; \qquad y_{n,0}^- = Y_{m,0}^-, \quad n = 6m - 12.

The terms x_{n,0}^- A_1(T_1) have a singularity as T_1 \to 0. These singularities are eliminated by shifting the pole T_1 = 0 to the point

T_1 = \sum_{n=29}^{\infty} \varepsilon^{n/30} x_{n,0}^-.

The increase of the order of singularities in the asymptotic expansion occurs due to the nonlinear terms on the right-hand side of eq. (6.109). Formulas (6.109), (6.110) show that the major singularities appear in corrections with numbers divisible by 10. This allows us to represent the asymptotic expansion (6.105) in the form

A(T_1, \varepsilon) = \sum_{l=0}^{\infty} \varepsilon^{l/3} \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10l+j}(T_1) \Big), \quad B(T_1, \varepsilon) = \sum_{l=0}^{\infty} \varepsilon^{l/3} \Big( \sum_{j=0}^{9} \varepsilon^{j/30} B_{10l+j}(T_1) \Big), \quad \varepsilon \to 0. \quad (6.111)

This representation gives us the domain of validity for expansion (6.111) as (T_1 - \sum_{n=29}^{\infty} \varepsilon^{n/30} x_{n,0}^-) \to -0:

\varepsilon^{1/3} \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10(l+1)+j}(T_1) \Big) \Big/ \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10l+j}(T_1) \Big) \le \varepsilon^{\gamma}, \quad \gamma > 0.

It yields

T_1 - \sum_{n=29}^{\infty} \varepsilon^{n/30} x_{n,0}^- \ge \varepsilon^{1/6-\gamma_3}, \quad \gamma_3 \in (0, 1/15).

If \gamma_3 belongs to this interval, then the domains of validity of the expansions from this and the previous section intersect.

Remark: One can construct an expansion in the studied domain using the asymptotic sequence \varepsilon^{l/3} instead of \varepsilon^{l/30}. In that case the coefficients of the asymptotic expansion depend on \varepsilon. We represent this dependence in explicit form by using expansion (6.111).


The function A_0 has poles. One of them is T_1 = 0. The second pole of this function is determined by the elliptic integral

K_1 = 2 \int_{A_{min}}^{\infty} \frac{dy}{\sqrt{4y^3 - g_3(1)}}, \quad A_{min} = \Big( \frac{g_3(1)}{4} \Big)^{1/3}.

Expansion (6.111) loses its asymptotic property in the neighbourhoods of the poles. To determine the coefficients of eq. (6.111) as T_1 \to -K_1 + 0 we solve the equations for the higher-order terms and write their expansions as T_1 \to -K_1 + 0. It is convenient to represent A_n as T_1 \to -K_1 + 0 in the form

A_n(T_1) = A_{n,1}^+(T_1) + x_{n,1}^+ A_1(T_1 + K_1) + y_{n,1}^+ A_2(T_1 + K_1).

(6.112)
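The half-period integral K_1 above can be evaluated numerically. A sketch with scipy (the substitution y = A_min + s^2 removes the square-root singularity at the lower limit); the scaling K(g_3) = g_3^{-1/6} K(1), which follows from the substitution y = g_3^{1/3} z, serves as a consistency check:

```python
import numpy as np
from scipy.integrate import quad

def period(g3):
    """2 * integral_{A_min}^{inf} dy / sqrt(4*y**3 - g3), via y = a + s**2."""
    a = (g3 / 4.0) ** (1.0 / 3.0)          # real zero of 4*y**3 - g3
    f = lambda s: 2.0 * s / np.sqrt(4.0 * (a + s * s) ** 3 - g3)
    val, _ = quad(f, 0.0, np.inf)
    return 2.0 * val

K1 = period(1.0)
# scaling law K(g3) = g3**(-1/6) * K(1)
assert abs(period(2.0) - 2.0 ** (-1.0 / 6.0) * K1) < 1e-6
```

The scaling law also makes explicit how the pole spacing shrinks as g_3(k) grows with k in the later sections.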

The function A_{n,1}^+(T_1) is the solution of the non-homogeneous equation for the nth correction. Its asymptotics as T_1 \to -K_1 + 0 does not contain terms of the order of (T_1 + K_1)^{-3} and (T_1 + K_1)^4. The constants in the formula for the nth correction are evaluated by

y_{n,1}^+ = y_{n,0}^- + \mathop{\mathrm{res}}_{r=0} \Big[ \frac{1}{r} \int_{-r}^{-K_1+r} \frac{F_n(z) A_1(z)}{W} dz \Big],

x_{n,1}^+ = x_{n,0}^- + C_1 y_{n,1}^+ + \mathop{\mathrm{res}}_{r=0} \Big[ \frac{1}{r} \int_{-r}^{-K_1+r} \frac{F_n(z) A_2(z)}{W} dz \Big],

where the constant C_1 is defined by A_2(T_1 + K_1) = C_1 A_1(T_1) + A_2(T_1). To obtain the domain of validity of eq. (6.111) as T_1 \to -K_1 + 0 it is necessary to determine the order of the singularities of the correction terms. The first non-zero correction term is A_4(T_1). Expression (6.112) for n = 4 contains the partial solution A_{4,1}^+(T_1). This function is smooth and has identical asymptotics as T_1 \to -K_1 + 0 and as T_1 \to -0. But the asymptotics of A_4 as T_1 \to -K_1 + 0 contains terms of the order of (T_1 + K_1)^{-3} with the coefficient x_{4,1}^+ \neq 0. Namely,

y_{4,1}^+ = 0, \quad x_{4,1}^+ = \frac{28\Delta_0}{3g_3} \zeta(K_1/2; 0, g_3),

where \zeta(\,\cdot\,; g_2, g_3) is the Weierstrass \zeta-function.


The terms x_{n,1}^+ A_1(T_1) for n = 4, 5, \dots have a singularity as T_1 \to -K_1 + 0. These singularities are eliminated by shifting the pole T_1 = -K_1 to the point

T_1 = -K_1 + \sum_{n=4}^{\infty} \varepsilon^{n/30} x_{n,1}^+.

This gives us the domain of validity for eq. (6.111) as (T_1 + K_1 - \sum_{n=4}^{\infty} \varepsilon^{n/30} x_{n,1}^+) \to +0:

\varepsilon^{1/3} \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10(l+1)+j}(T_1) \Big) \Big/ \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10l+j}(T_1) \Big) \le \varepsilon^{\gamma}, \quad \gamma > 0.

It yields

T_1 + K_1 - \sum_{n=4}^{\infty} \varepsilon^{n/30} x_{n,1}^+ \ge \varepsilon^{1/6-\gamma_4}, \quad \gamma_4 \in (0, 1/6).

6.5.2 The Bifurcation Layer in the Case of Bounded k

6.5.2.1 The Neighbourhoods of the Poles
In this section we construct the asymptotic expansion of our solution in the neighbourhoods of the poles and show how the expansion changes upon the passage from one pole to another. In the neighbourhood of the poles we use a new scaling variable

\zeta_k = \Big( T_k + K_k - \frac{1}{4} \sum_{n=4}^{\infty} \varepsilon^{n/30} x_{n,k}^+ \Big) \varepsilon^{-1/6}, \quad \varepsilon \to 0.

(6.113)

It yields

i J' + |J|^2 J - \Big( t_* + \varepsilon^{4/5} \Big( \Delta_0 + \sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0} \Big) + \varepsilon^{5/6} \Big( \sum_{j=1}^{k} K_j + \frac{1}{4} \sum_{n=4}^{\infty} \varepsilon^{n/30} \sum_{j=1}^{k} x_{n,j}^+ + \varepsilon^{1/6} \zeta_k \Big) \Big) J = 1, \quad \varepsilon \to 0. \quad (6.114)

We construct the expansion in the form

J = U_* + W_0(\zeta_k) + \varepsilon^{4/5} \sum_{n=1}^{\infty} \varepsilon^{(n-1)/30} W_n(\zeta_k), \quad \varepsilon \to 0. \quad (6.115)

The coefficients of eq. (6.115) satisfy a recurrent system of equations. In particular, the function W_0(\zeta_k) satisfies eq. (6.91). The equations for the higher-order terms are

i W_n' + [2U_* \bar{W}_0 + 2U_* W_0 - U_*^2 + 2 W_0 \bar{W}_0] W_n + (U_* + W_0)^2 \bar{W}_n = F_n.

(6.116)


Here

F_n = \sum_{j+l+m+46=n} W_j W_l \bar{W}_m + \sum_{j+l+23=n} (U_* + W_0) W_j \bar{W}_l + \frac{1}{4} \sum_{m+j+25=n} W_m \sum_{l=1}^{k} x_{j,l}^+ + \zeta_k W_{n-30} + \frac{1}{4} (U_* + W_0) \sum_{j=1}^{k} x_{n-2,j}^+ + \sum_{12m+l+18=n} t_{m,0} W_l + t_{s,0} (U_* + W_0), \quad (6.117)

where the last term contains the number s such that 12s - 5 = n. For example,

F_1 = \Delta_0 (U_* + W_0); \quad F_2 = (U_* + W_0) \sum_{j=1}^{k} K_j; \quad F_n \equiv 0, \ n = 3, 4, 5;

F_6 = (U_* + W_0) \Big( \zeta_k + \frac{1}{4} \sum_{j=1}^{k} x_{4,j}^+ \Big), \quad F_7 = (U_* + W_0) \Big( \zeta_k + \frac{1}{4} \sum_{j=1}^{k} x_{5,j}^+ \Big).

The solutions of the equations for the higher-order terms in the asymptotics can be represented in the form

W_n(\zeta_k) = W_{n,k}^+(\zeta_k) + X_{n,k}^+ W_1(\zeta_k) + Y_{n,k}^+ W_2(\zeta_k).

(6.118)

Here the function W_{n,k}^+(\zeta_k) is the solution of the non-homogeneous equation for the nth correction term. This solution does not contain terms of the order of \zeta_k^4 and \zeta_k^{-3} in the asymptotics as \zeta_k \to \infty. The linearly independent solutions W_1(\zeta_k) and W_2(\zeta_k) of the homogeneous equation were defined earlier. The constants X_{n,k}^+ and Y_{n,k}^+ will be determined below. The asymptotics of the function W_0(\zeta_k) as \zeta_k \to \infty is given. It is obtained by rewriting U_* + \varepsilon^{2/6} A(T_k, \varepsilon) + i\varepsilon^{3/6} B(T_k, \varepsilon) as T_k \to -K_k + 0 in terms of the scaling variable \zeta_k. This asymptotics uniquely determines the leading-order term in the expansion

W_0(\zeta_k) = \frac{-2}{(\zeta_k - iU_*)^2}.

(6.119)

To evaluate the constants X_{n,k}^+ and Y_{n,k}^+ in the formula for the nth correction we match the asymptotics as \zeta_k \to \infty with the asymptotics obtained by rewriting U_* + \varepsilon^{2/6} A(T_k, \varepsilon) + i\varepsilon^{3/6} B(T_k, \varepsilon) as T_k \to -K_k + 0 in terms of the scaling variable \zeta_k. We obtain

X_{n,k}^+ \equiv 0. \quad (6.120)

This takes place because all terms connected with solutions of the type x_{n,k}^+ A_1(T_k) have already been matched by the shift of the independent variable \zeta_k.


The matching procedure gives formulas that connect the constants Y_{n,k}^+ and y_{m,k}^+:

Y_{n,k}^+ = y_{m,k}^+, \quad n = m + 7; \qquad Y_{n,k}^+ = 0, \quad n = 0, 1, \dots, 6; \qquad Y_{7,k}^+ = 56 g_3(k). \quad (6.121)

Let us write the expansions of two corrections of eq. (6.115) as \zeta_k \to \infty:

W_1(\zeta_k) = W_{1,k}^+(\zeta_k), \quad (6.122)

where

W_{1,k}^+(\zeta_k) = \frac{\Delta_0 U_*^3}{5} \zeta_k^2 + i \frac{\Delta_0 U_*}{5} \zeta_k - \frac{\Delta_0}{30U_*} + i \frac{3\Delta_0}{5U_*^3} \zeta_k^{-1} + O(\zeta_k^{-2}),

and

W_2(\zeta_k) = W_{2,k}^+(\zeta_k), \quad (6.123)

where

W_{2,k}^+(\zeta_k) = \frac{P_k U_*^3}{5} \zeta_k^2 + i \frac{P_k U_*}{5} \zeta_k - \frac{P_k}{30U_*} + i \frac{3P_k}{5U_*^3} \zeta_k^{-1} + O(\zeta_k^{-2}), \quad \zeta_k \to \infty, \quad P_k = \sum_{j=1}^{k} K_j.

From the formulas for the right-hand sides of the equations for the lower-order terms we see that the major singularities in the asymptotic expansion appear in corrections of the order of \varepsilon^n, n = 1, 2, \dots. This allows us to represent the asymptotic expansion (6.115) in the form

U = U_* + \big( W_0(\zeta_k) + \varepsilon^{24/30} W_1 + \cdots + \varepsilon^{29/30} W_6 \big) + \sum_{n=1}^{\infty} \varepsilon^n \Big( \sum_{j=0}^{29} \varepsilon^{j/30} W_{30(n-1)+j+7} \Big), \quad \varepsilon \to 0. \quad (6.124)

This representation gives us the domain of validity for expansion (6.115) as \zeta_k \to \infty:

\varepsilon \Big( \sum_{j=0}^{29} \varepsilon^{j/30} W_{30n+j+7}(\zeta_k) \Big) \Big/ \Big( \sum_{j=0}^{29} \varepsilon^{j/30} W_{30(n-1)+j+7}(\zeta_k) \Big) \le \varepsilon^{\gamma}, \quad \gamma > 0.

It is easy to obtain that \sum_{j=0}^{29} \varepsilon^{j/30} W_{30n+j+7}(\zeta_k) = O(\zeta_k^{6n-2}) as \zeta_k \to \infty. It yields \zeta_k \le \varepsilon^{-1/6+\gamma_4}, \gamma_4 \in (0, 1/15). The constant \gamma_4 is such that the domains of validity of the kth separatrix expansion and the kth intermediate expansion intersect.


Remark: One can construct an expansion in the studied domain using the asymptotic sequence \varepsilon^n instead of \varepsilon^{n/30}. In that case the coefficients of the asymptotic expansion depend on \varepsilon. We represent this dependence in explicit form by using expansion (6.115).

Let us write the representation of the correction terms in expansion (6.115) as \zeta_k \to -\infty:

W_n(\zeta_k) = W_{n,k}^-(\zeta_k) + X_{n,k}^- W_1(\zeta_k) + Y_{n,k}^- W_2(\zeta_k). \quad (6.125)

Here the function W_{n,k}^-(\zeta_k) is the solution of the non-homogeneous equation for the nth correction term. This solution does not contain terms of the order of \zeta_k^4 and \zeta_k^{-3} in the asymptotics as \zeta_k \to -\infty. The constants X_{n,k}^- and Y_{n,k}^- are

X_{n,k}^- = BX_{n,k}, \quad Y_{n,k}^- = Y_{n,k}^+ + BY_{n,k}, \quad (6.126)

BX_{n,k} = - \mathop{\mathrm{res}}_{r=\infty} \Big[ \frac{1}{r} \int_{-r}^{r} (F_n \bar{W}_2 + \bar{F}_n W_2) \frac{d\zeta_k}{W} \Big], \quad BY_{n,k} = - \mathop{\mathrm{res}}_{r=\infty} \Big[ \frac{1}{r} \int_{-r}^{r} (F_n \bar{W}_1 + \bar{F}_n W_1) \frac{d\zeta_k}{W} \Big]. \quad (6.127)

It is important to note that

Y_{n,k}^- = 0, \quad n = 1, \dots, 6; \quad (6.128)

Y_{7,k}^- = Y_{7,k}^+ + \frac{\Delta_0}{2}. \quad (6.129)

Remark: Formula (6.129) in the case of a second-order differential equation was obtained in the work [30]. The domain of validity of the asymptotic expansion (6.115) as \zeta_k \to -\infty is determined in the same way as for \zeta_k \to \infty. We obtain that for any value of k this domain is determined by -\varepsilon^{-1/6+\gamma_3} \le \zeta_k \le \varepsilon^{-1/6+\gamma_4}.

6.5.2.2 Interval of the Weierstrass ℘-Function
In this section we study the asymptotic expansion of our solution in the domain between the kth and (k+1)th poles of the Weierstrass function. We use the slow variable T_k = \zeta_k \varepsilon^{1/6} and

U = U_* + \varepsilon^{2/6} A_k(T_k, \varepsilon) + i \varepsilon^{3/6} B_k(T_k, \varepsilon). \quad (6.130)


It yields

A_k' - 2U_*^2 B_k = -\varepsilon^{1/3} 2U_* A_k B_k - \varepsilon^{2/3} A_k^2 B_k - \varepsilon B_k^3 + \Big( \varepsilon^{4/5} \Delta_0 - \varepsilon^{4/5} \sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0} + \varepsilon^{5/6} \Big( T_k - \sum_{j=1}^{k} K_j \Big) - \varepsilon^{5/6} \sum_{n=4}^{\infty} \varepsilon^{n/30} \sum_{j=1}^{k} x_{n,j}^+ \Big) B_k,

B_k' + 3U_* A_k^2 = -\varepsilon^{1/3} (A_k^3 - B_k^2 U_*) - \varepsilon^{2/3} A_k B_k^2 + \Big( \varepsilon^{2/15} \Delta_0 + \varepsilon^{2/15} \sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0} + \varepsilon^{1/6} \Big( T_k - \sum_{j=1}^{k} K_j \Big) - \varepsilon^{1/6} \sum_{n=4}^{\infty} \varepsilon^{n/30} \sum_{j=1}^{k} x_{n,j}^+ \Big) (U_* + \varepsilon^{1/3} A_k), \quad \varepsilon \to 0. \quad (6.131)

Asymptotics (6.115) as \zeta_k \to -\infty, rewritten in terms of the new variable T_k, gives us the asymptotics of the functions A and B as T_k \to -0, as was shown in eq. (6.104). We construct the asymptotic expansion in the form

A_k(T_k, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/30} A_{n,k}(T_k), \quad B_k(T_k, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/30} B_{n,k}(T_k), \quad \varepsilon \to 0. \quad (6.132)

Using eq. (6.132) we obtain the recurrent sequence of equations for A_{n,k}(T_k) and B_{n,k}(T_k):

A_{n,k}' - 2U_*^2 B_{n,k} = H_{n,k}, \quad B_{n,k}' + 3U_* \sum_{i+j=n} A_{i,k} A_{j,k} = G_{n,k}, \quad (6.133)

where for n = 1, 2, 3, 6, 7, H_{n,k} = G_{n,k} \equiv 0, and

H_{4,k} = \Delta_0 B_{0,k}, \quad G_{4,k} = \Delta_0 U_*, \quad H_{5,k} = (T_k - P_k) B_{0,k}, \quad G_{5,k} = (T_k - P_k) U_*

and for n \ge 8

H_{n,k} = -2U_* \sum_{i+j=n-10} A_{i,k} B_{j,k} - \sum_{i+j+l=n-20} A_{i,k} A_{j,k} B_{l,k} - \sum_{i+j+l=n-30} B_{i,k} B_{j,k} B_{l,k} + \Delta_0 B_{n-24,k} + (T_k - P_k) B_{n-25,k} - \sum_{12i+j=n-18} t_{i,0} B_{j,k} - \sum_{m+l=n-25} B_{l,k} \sum_{j=1}^{k} x_{m,j}^+,

G_{n,k} = -\sum_{i+j+l=n-10} A_{i,k} A_{j,k} A_{l,k} + U_* \sum_{i+j=n-10} B_{i,k} B_{j,k} - \sum_{i+j+l=n-20} A_{i,k} B_{j,k} B_{l,k} - U_* t_{l,0} + (T_k - P_k) A_{n-15,k} + \Delta_0 A_{n-16,k} - \sum_{12i+j=n-8} t_{i,0} A_{j,k} - \sum_{m+l=n-15} A_{l,k} \sum_{j=1}^{k} x_{m,j}^+, \quad (6.134)

where in the term U_* t_{l,0} the index l is such that 12l - 2 = n.


As mentioned earlier, it is convenient to investigate the second-order differential equation for A_{n,k}(T_k) instead of the first-order system for A_{n,k}(T_k), B_{n,k}(T_k). The equation for the leading-order term in asymptotics (6.132) is

A_{0,k}'' + 3A_{0,k}^2 = 0. \quad (6.135)

The equations for the next correction terms are A_{m,k}'' + 6A_{0,k} A_{m,k} = F_{m,k}, where

F_{m,k} = 2U_*^2 \Big[ -3U_* \sum_{i+j=m} A_{i,k} A_{j,k} + \sum_{i+j+l=m-10} A_{i,k} A_{j,k} A_{l,k} + U_* \sum_{i+j=m-10} B_{i,k} B_{j,k} - \sum_{i+j+l=m-20} A_{i,k} B_{j,k} B_{l,k} - U_* t_{l,0} + (T_k - P_k) A_{m-15,k} + \Delta_0 A_{m-16,k} - \sum_{12i+j=m-8} t_{i,0} A_{j,k} + \sum_{i+j=m-15} A_i \sum_{l=1}^{k} x_{j,l}^+ - 2U_* \Big( \sum_{i+j=m-10} A_{i,k}' B_{j,k} + \sum_{i+j=m-10} A_{i,k} B_{j,k}' \Big) \Big]

- 2 \sum_{i+j+l=m-20} A_{i,k}' A_{j,k} B_{l,k} - \sum_{i+j+l=m-20} A_{i,k} A_{j,k} B_{l,k}' - 3 \sum_{i+j+l=m-30} B_{i,k}' B_{j,k} B_{l,k} + \Delta_0 B_{m-24,k}' + B_{m-25,k} + (T_k - P_k) B_{m-25,k}' - \sum_{12i+j=m-18} t_{i,0} B_{j,k}' + \sum_{i+j=m-25} B_i' \sum_{l=1}^{k} x_{j,l}^+. \quad (6.136)

In particular,

F_m \equiv 0, \ m = 1, 2, 3, 6, 7; \quad F_4 = -2U_*^2 \Delta_0; \quad F_5 = -2U_*^2 (T_k - P_k);

F_8 = -3A_{4,k}^2(T_k); \quad F_9 = -6A_{4,k}(T_k) A_{5,k}(T_k);

F_{10} = -t_{1,0} U_* - 2U_* \Big( \frac{(A_{0,k}')^2}{2U_*^2} + \frac{3A_{0,k} (A_{0,k}')^2}{4U_*^3} \Big) - 2U_* \Big( A_{0,k}^3 - \frac{3A_{0,k}}{2U_*^2} \Big).

The asymptotic behaviour as T_k \to -0 of the coefficients of the asymptotic expansions is obtained by matching:

A_{0,k} = \frac{-2}{T_k^2} + Y_{2,k}^- T_k^4 + O(T_k^{10}); \quad A_{m,k} \equiv 0, \ m = 1, 2, 3, 6, 7;

A_{4,k} = \frac{\Delta_0 U_*^3}{5} T_k^2 + O(T_k^3); \quad A_{5,k} = \frac{P_k U_*^3}{5} T_k^2 - \frac{1}{6} T_k^3 + O(T_k^4);

A_{8,k}(T_k) = O(T_k^6); \quad A_{9,k} = O(T_k^6); \quad A_{10,k} = O(T_k^{-4}).


Let us construct the formal solution in form (6.132). The solution for the leading-order term of the asymptotic expansion is

A_{0,k} = -\frac{2}{T_k^2} + Y_{2,k-1}^- T_k^4 + O(T_k^{10}), \quad T_k \to -0.

Using this formula it is easy to see that the leading-order term A_{0,k} of the asymptotics is expressed via the Weierstrass \wp-function:

A_{0,k}(T_k) = -2\wp(T_k; 0, g_3(k)), \quad \text{where } g_3(k) = Y_{2,k-1}^-/56.

The period of the function A_{0,k}(T_k) is defined by the elliptic integral

K_k = 2 \int_{A_{min}}^{\infty} \frac{dy}{\sqrt{4y^3 - g_3(k)}}, \quad A_{min} = \Big( \frac{g_3(k)}{4} \Big)^{1/3}.

The solution A_{n,k}(T_k) can be represented in the neighbourhood of the point T_k = 0 in the form

A_{n,k}(T_k) = A_{n,k}^-(T_k) + x_{n,k}^- A_1(T_k) + y_{n,k}^- A_2(T_k). \quad (6.137)

The function A_{n,k}^-(T_k) is the partial solution of the equation for the nth correction term. This solution does not contain terms of the order of T_k^{-3} and T_k^4 in the asymptotics as T_k \to 0.

The coefficients x_{n,k}^- and y_{n,k}^- are defined by matching asymptotics (6.132) as T_k \to -0 with asymptotics (6.115) as \zeta_k \to -\infty:

x_{n,k}^- = X_{m,k}^-, \quad n = m + 28; \quad (6.138)

y_{n,k}^- = Y_{m,k}^-, \quad n = m - 7. \quad (6.139)

The domain of validity for expansion (6.132) is defined in the standard way:

\varepsilon^{1/3} \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10(l+1)+j}(T_k) \Big) \Big/ \Big( \sum_{j=0}^{9} \varepsilon^{j/30} A_{10l+j}(T_k) \Big) \le \varepsilon^{\gamma}, \quad \gamma > 0.

It yields

T_k - \sum_{l=29}^{\infty} \varepsilon^{l/30} x_{l,k}^- \ge \varepsilon^{1/6-\gamma_3}. \quad (6.140)


Represent the correction term A_{n,k} as T_k \to -K_k + 0 in the form

A_{n,k}(T_k) = A_{n,k}^+(T_k) + x_{n,k+1}^+ A_1(T_k + K_k) + y_{n,k+1}^+ A_2(T_k + K_k). \quad (6.141)

Here the function A_{n,k}^+(T_k) is the solution of the non-homogeneous equation for the nth correction, and its expansion does not contain terms of the order of (T_k + K_k)^{-3} and (T_k + K_k)^4 as T_k \to -K_k. The constants in the formulas for the nth corrections are evaluated by

y_{n,k+1}^+ = y_{n,k}^- + \delta_{n,k}^y, \quad \delta_{n,k}^y = \mathop{\mathrm{res}}_{r=0} \Big[ \frac{1}{r} \int_{-r}^{-K_k+r} \frac{F_n(z) A_1(z)}{W} dz \Big], \quad (6.142)

x_{n,k+1}^+ = x_{n,k}^- + C_k y_{n,k+1}^+ + \delta_{n,k}^x, \quad \delta_{n,k}^x = \mathop{\mathrm{res}}_{r=0} \Big[ \frac{1}{r} \int_{-r}^{-K_k+r} \frac{F_n(z) A_2(z)}{W} dz \Big], \quad (6.143)

where the constant C_k is defined by

A_2(T_k + K_k) = C_k A_1(T_k) + A_2(T_k), \quad C_k = \frac{28K_k}{3g_3(k)}.

For example, x_{n,k+1}^+ = y_{n,k+1}^+ = 0 for n = 1, 2, 3, and

x_{4,k+1}^+ = \frac{28\Delta_0}{3g_3} \zeta(K_k/2; 0, g_3), \quad y_{4,k+1}^+ = 0,

where \zeta(\,\cdot\,; g_2, g_3) is the Weierstrass \zeta-function.

The values of the coefficients y_{n,k+1}^+ and x_{n,k+1}^+ for n > 4 can also be evaluated. In a similar way as in Section 6.5.1.3 we obtain the domain of validity for expansion (6.132):

T_k + K_k - \sum_{l=4}^{\infty} \varepsilon^{l/30} x_{l,k+1}^+ \ge \varepsilon^{1/6-\gamma_4}.

6.5.2.3 The Discrete Dynamical System
The coefficients that determine the intermediate and the separatrix expansions change upon the passage from k to (k+1). The parameter g_3(k) of the Weierstrass \wp-function for the leading-order term of the intermediate expansion is represented by

g_3(k) = \frac{1}{56} \Big( a_4 + \frac{\Delta_0}{2} (k - 1) \Big).

Let us investigate the discrete dynamical system for the coefficients X_{n,k}^\pm, Y_{n,k}^\pm and \zeta_k, x_{n,k}^\pm, y_{n,k}^\pm.


From eqs. (6.121), (6.127), (6.139) and (6.142) we obtain

y_{n,k+1}^+ = y_{n,k}^+ + \delta_{n,k}^y + BY_{m,k}, \quad m = n + 7, \ n \ge 1, \quad (6.144)

and from eqs. (6.120), (6.126), (6.138) and (6.143)

x_{n,k}^+ = C_k y_{n,k}^+ + \delta_{n,k-1}^x + BX_{m,k-1}, \quad m = n - 28, \ n \ge 1, \quad (6.145)

where BX_{m,k-1} \equiv 0 for m \le 0. From eqs. (6.126), (6.127), (6.117) and (6.121) one can see that the terms BY_{m,k} and BX_{m,k} contain Y_{j,k}^+ for 7 \le j \le m - 29. This discrete dynamical system allows us to evaluate

\zeta_{k+1} = \zeta_k + \varepsilon^{-1/6} \Big( K_{k+1} + \frac{1}{4} \sum_{n=1}^{\infty} \varepsilon^{n/30} x_{n,k+1}^+ \Big), \quad \varepsilon \to 0.

Let us evaluate the internal separatrix variable to within o(1). To obtain this result it is necessary to evaluate the terms x_{n,k+1}^+ for n \le 5 and Y_{m,k}^- for m \le 12. We have

BY_{m,k} \equiv 0, \quad m \neq 7; \qquad \delta_{n,k}^y \equiv 0, \quad 0 < n < 5; \qquad \delta_{5,k}^y = 2\zeta(K_k/2, 0, g_3(k)),

\delta_{4,k}^x = \frac{28\Delta_0}{3g_3(k)} \zeta(K_k/2; 0, g_3(k)), \quad \delta_{5,k}^x = \frac{28P_k}{3g_3(k)} \zeta(K_k/2; 0, g_3(k)).

We also have that BX_{l,k} \equiv 0 for l < 1. Then

y_{n,k}^+ \equiv 0, \quad x_{n,k}^+ \equiv 0, \quad n = 1, 2, 3; \qquad y_{4,k}^+ \equiv 0, \quad y_{5,k}^+ = y_{5,k-1}^+ + \delta_{5,k-1}^y.

Finally we obtain

x_{4,k}^+ = \delta_{4,k}^x = \frac{28\Delta_0}{3g_3(k)} \zeta(K_k/2; 0, g_3(k)),

x_{5,k}^+ = \frac{56K_k}{3g_3(k)} \sum_{j=1}^{k-1} \zeta(K_j/2, 0, g_3(j)) + \frac{28P_k}{3g_3(k)} \zeta(K_k/2; 0, g_3(k)).


6.5.2.4 Validity of the Intermediate Expansion
Here we explain how to obtain the domain of validity of the intermediate expansion as k \to \infty. Recall that the domain of validity of the intermediate expansion was defined by segments of asymptotic series, see (6.140). When k \to \infty, secular terms appear inside these segments. For example, the function A_{5,k} contains the term y_{5,k}^+ A_1(T_k), which grows with respect to k:

y_{5,k}^+ = \sum_{j=1}^{k} \delta_{5,j}^y = O(k^{7/6}), \quad k \to \infty.

Due to the nonlinearity of the right-hand side of eq. (6.131) for A_n, B_n, the major secular terms appear in terms with numbers divisible by 5. This leads to new conditions of validity:

\varepsilon^{1/6} A_{5(m+1),k} / A_{5m,k} \le \varepsilon^{\gamma}, \quad \gamma > 0.

This formula gives us the inequality \varepsilon^{1/6} k^{7/6} \le \varepsilon^{\kappa}, \kappa > 0, that is, k \le \varepsilon^{-1/7+\gamma}, \gamma > 0.
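The k^{7/6} estimate is just the growth rate of the partial sums of j^{1/6}. A small numerical illustration (assuming, for this sketch only, that \delta_{5,j}^y grows like j^{1/6}, which reflects the linear growth of g_3(j) in j):

```python
# Partial sums of j**(1/6) grow like (6/7) * k**(7/6), which is the
# O(k**(7/6)) secular growth of y_{5,k}^+ quoted in the text.
k = 10 ** 5
s = sum(j ** (1.0 / 6.0) for j in range(1, k + 1))
ratio = s / k ** (7.0 / 6.0)
assert abs(ratio - 6.0 / 7.0) < 1e-3
```

Comparing \varepsilon^{1/6} k^{7/6} with \varepsilon^{\kappa} then gives the stated bound k \le \varepsilon^{-1/7+\gamma}.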

6.5.3 The Intermediate Expansion for Large k
In this section, to take into account the secular terms for large values of k, we modify the intermediate expansion.

6.5.3.1 The Domain of Moderately Large Values of k
As was shown above, for large values of k the correction terms in the asymptotics contain secular terms with respect to k. These secular terms are of two kinds. First, there are secular terms connected with the coefficients y_{n,k}^+ of the solutions of the homogeneous equations. Second, the secular terms P_k and \sum_{j=1}^{k} x_{n,j}^+ are contained in the right-hand sides of eq. (6.131). To suppress the secular terms of the first kind we include the dependence on \varepsilon in the parameter g_3:

g_3(k, \varepsilon) = g_{0,3}(k) + \sum_{n=1}^{\infty} \varepsilon^{n/30} g_{n,3}(k).

Here

g_{n,3}(k) = g_{n,3}(k-1) + \frac{1}{56} Y_{m,k-1}^-, \quad m = n + 5. \quad (6.146)


To suppress the secular terms of the second kind we include the following term in the equation for the leading-order term of the intermediate expansion:

2g_2(k, \varepsilon) = \varepsilon^{1/6} \Big( \varepsilon^{-1/30} \Delta_0 + \varepsilon^{-1/30} \sum_{n=1}^{\infty} \varepsilon^{(2n-1)/5} t_{n,0} + P_k + \frac{1}{4} \sum_{n=4}^{\infty} \varepsilon^{(n-1)/30} \sum_{j=1}^{k} x_{n,j}^+ \Big).

After this substitution the system for A, B becomes

A_k' - 2U_*^2 B_k = -\varepsilon^{1/3} 2U_* A_k B_k - \varepsilon^{2/3} A_k^2 B_k - \varepsilon B_k^3 + \varepsilon^{2/3} 2g_2(k, \varepsilon) B_k - \varepsilon^{5/6} T_k B_k,

B_k' + 3U_* A_k^2 = 2g_2(k, \varepsilon)(U_* + \varepsilon^{1/3} A_k) - \varepsilon^{1/3}(A_k^3 - B_k^2 U_*) - \varepsilon^{2/3} A_k B_k^2 + \varepsilon^{1/6} T_k (U_* + \varepsilon^{1/3} A_k). \quad (6.147)

This leads us to change the asymptotic sequence. Namely,

A_k(T_k, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/6} A_{n,k}(T_k, \varepsilon), \quad B_k(T_k, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^{n/6} B_{n,k}(T_k, \varepsilon). \quad (6.148)

The equation for the leading-order term is

A_{0,k}'' + 3A_{0,k}^2 = 2g_2,

and the solution is A_{0,k}(T_k, \varepsilon) = -2\wp(T_k, g_2(k, \varepsilon), g_3(k, \varepsilon)). The terms of the order of \varepsilon^{m/30} with m \neq 5l, l \in \mathbb{Z}, are included in the leading-order term of the asymptotics and in the corrections through g_2. To obtain the intermediate expansion with respect to \varepsilon^{m/30} one should expand this representation with respect to \varepsilon. The functions A_1 and A_2 are solutions of the linearized equation:

A_1(T_k, \varepsilon) = \frac{d}{dT_k} \wp(T_k, g_2(k, \varepsilon), g_3(k, \varepsilon)), \quad A_2(T_k, \varepsilon) = \frac{d}{dg_3} \wp(T_k, g_2(k, \varepsilon), g_3(k, \varepsilon)).
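That ℘' solves the linearized (variational) equation v'' + 6A_0 v = 0 with A_0 = -2\wp follows from differentiating the Weierstrass relation \wp'' = 6\wp^2 - g_2/2, which gives \wp''' = 12\wp\wp'. A symbolic sketch of this check (sympy); the same argument, differentiated with respect to g_3 instead of T, applies to A_2:

```python
import sympy as sp

T = sp.symbols('T')
wp = sp.Function('wp')

# A1 = wp'(T) solves v'' + 6*A0*v = 0 with A0 = -2*wp, because
# differentiating wp'' = 6*wp**2 - g2/2 gives wp''' = 12*wp*wp'
v = sp.diff(wp(T), T)
residual = sp.diff(v, T, 2) + 6 * (-2 * wp(T)) * v     # = wp''' - 12*wp*wp'
residual = residual.subs(sp.Derivative(wp(T), T, 3),
                         12 * wp(T) * sp.diff(wp(T), T))
assert sp.simplify(residual) == 0
```

Note that g_2 drops out of the variational equation, so the check holds for the \varepsilon-dependent invariants g_2(k, \varepsilon), g_3(k, \varepsilon) as well.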


The second-order differential equation for the higher-order terms in the expansion is

A_{n,k}'' + 6A_{0,k} A_{n,k} = 2U_*^2 \Big[ -3U_* \sum_{i+j=n} A_{i,k} A_{j,k} - \sum_{i+j+l=n-2} A_{i,k} A_{j,k} A_{l,k} + U_* \sum_{i+j=n-2} B_{i,k} B_{j,k} - \sum_{i+j+l=n-4} A_{i,k} B_{j,k} B_{l,k} + (2g_2) A_{n-2,k} + T_k A_{n-3,k} \Big]

- 2U_* \sum_{i+j=n-2} A_{i,k}' B_{j,k} - 2U_* \sum_{i+j=n-2} A_{i,k} B_{j,k}' - 2 \sum_{i+j+l=n-4} A_{i,k}' A_{j,k} B_{l,k} - \sum_{i+j+l=n-4} A_{i,k} A_{j,k} B_{l,k}' - 3 \sum_{i+j+l=n-6} B_{i,k} B_{j,k} B_{l,k}' + (2g_2) B_{n-4,k} - B_{n-5,k} - T_k B_{n-5,k}'. \quad (6.149)

The solutions of eq. (6.149) are defined by their asymptotics as T_k \to -0:

A_{n,k}(T_k, \varepsilon) = A_{n,k}^-(T_k, \varepsilon) + \chi_{n,k}^-(\varepsilon) A_1(T_k, \varepsilon), \quad (6.150)

where \chi_{n,k}^-(\varepsilon) = \sum_{l=5n-28}^{5n-24} \varepsilon^{l/30} X_{l,k}^-. The leading-order term of the asymptotics with respect to T_k is the term of the order of T_k^4. The function A_{n,k}^- is the partial solution of the non-homogeneous equation for the higher-order terms and does not contain terms of the order of T_k^4 and T_k^{-3} in the asymptotics as T_k \to -0. In the neighbourhood of the point T_k = -K_k + 0 the corrections are represented by the formula

A_{n,k}(T_k, \varepsilon) = A_{n,k}^+(T_k, \varepsilon) + \chi_{n,k+1}^+ A_1(T_k, \varepsilon) + \delta_{n,k}^y A_2(T_k, \varepsilon),

where

\chi_{n,k+1}^+(\varepsilon) = C_k \delta_{n,k}^y(\varepsilon) + \chi_{n,k}^-, \quad C_k(\varepsilon) = \frac{6g_2 \zeta(K_k, g_2, g_3) - 9g_3 K_k}{2(g_2^3 - 27g_3^2)}.

The constructed solution is valid between the poles of the Weierstrass function \wp(T_k, g_2, g_3) in domains similar to those in the case of the intermediate expansion for finite values of k. To use the formulas from Section 6.5.2.1 we expand \chi_{n,k+1}^+(\varepsilon) and \delta_{n,k}^y(\varepsilon) with respect to \varepsilon^{l/30}:

\chi_{n,k+1}^+(\varepsilon) = \sum_{m=0}^{\infty} \varepsilon^{m/30} \chi_{n,k+1}^{+,m}, \quad \delta_{n,k}^y(\varepsilon) = \sum_{m=0}^{\infty} \varepsilon^{m/30} \delta_{n,k}^{y,m}.


Now we obtain the expressions for x_{n,k+1}^+ and y_{n,k+1}^+:

x_{n,k+1}^+ = 0 \ \text{for } n = 1, 2, 3, 4; \quad x_{n,k+1}^+ = \sum_{5l+m=n} \chi_{l,k+1}^{+,m}, \quad l \ge 1, \ n \ge 5; \quad (6.151)

y_{n,k+1}^+ = 0 \ \text{for } n = 1, 2, 3, 4; \quad y_{n,k+1}^+ = \sum_{5l+m=n} \delta_{l,k}^{y,m}, \quad l \ge 1, \ n \ge 5. \quad (6.152)

6.5.3.2 Asymptotics for Large Values of k
Large values of k lead to growth of the term g_2(k, \varepsilon). The structure of the expansion in this case becomes more complicated. Two parameters appear, and the expansion of our solution uses both of them: the small parameter \varepsilon and the large parameter g_2. In order to estimate the growth of the coefficients in eq. (6.148) we display the secular multiplier of the coefficients in explicit form. This leads to the following representation of the intermediate expansion:

A_k = \sum_{l=0}^{\infty} \varepsilon^{l/3} (2g_2)^{(l+1)/2} (A_{2l,k} + \varepsilon^{1/6} A_{2l+1,k}), \quad B_k = \sum_{l=0}^{\infty} \varepsilon^{l/3} (2g_2)^{(3+2l)/4} (B_{2l,k} + \varepsilon^{1/6} B_{2l+1,k}). \quad (6.153)

The coefficients of the asymptotics depend on the new independent variable \theta_k = (2g_2)^{1/4} T_k and do not grow with respect to k. The singularities in this representation appear due to the parameter g_2. Therefore the intermediate expansion is valid while g_2 \le \varepsilon^{-2/3+\gamma}, \gamma > 0. The constructed sequence of the intermediate and the separatrix expansions is valid in the domain (t_* - t) \le \varepsilon^{1/6+\gamma_5}, \gamma_5 \in (0, 1/6). This proves the statement.

6.6 Fast Oscillating Asymptotic Expansion
In this section we construct a special fast oscillating asymptotic expansion for our solution of the primary resonance equation by the Krylov–Bogolyubov method [102, 104]. This expansion is valid for (t_* - t) \ge \varepsilon^{1/2-\gamma_5}. In the neighbourhood of t_* this special solution is matched with the asymptotics from the previous section.


6.6.1 Family of the Fast Oscillating Solutions
Here we construct the asymptotic expansion for a two-parameter family of fast oscillating solutions:

J(t, \varepsilon) = \sum_{n=0}^{\infty} \varepsilon^n J_n(t_1, t), \quad (6.154)

where t_1 = S(t)/\varepsilon + \sum_{n=0}^{\infty} \varepsilon^n \phi_n(t) is the fast variable. The problem of constructing fast oscillating solutions for second-order differential equations has been investigated by many authors. In this section we construct the expansion for the solution in the manner of Bourland and Haberman [17]. They investigated a second-order differential equation, so their results cannot be applied to eq. (6.56) directly. In this section we present the evaluations that lead to the elegant formulas of the work [17]. Let us substitute eq. (6.154) into eq. (6.56). As a result we obtain the equations for the leading-order and higher-order terms in eq. (6.154):

i S' \partial_{t_1} J_0 + |J_0|^2 J_0 - t J_0 = 1, \quad (6.155)

i S' \partial_{t_1} J_1 + (2|J_0|^2 - t) J_1 + J_0^2 \bar{J}_1 = -i \partial_t J_0 - i \phi_0' \partial_{t_1} J_0, \quad (6.156)

i S' \partial_{t_1} J_n + (2|J_0|^2 - t) J_n + J_0^2 \bar{J}_n = -i \partial_t J_{n-1} - i \sum_{i+j=n-1} \phi_i' \partial_{t_1} J_j + F_n(J_0, \dots, J_{n-1}), \quad n \ge 2, \quad (6.157)

where Fₙ(J₀, …, J_{n−1}) is a cubic form in the lower-order terms. The variables t₁ and t are treated as independent. Equation (6.155) has a first integral with respect to the fast variable t₁:
\[
\frac{1}{2}|J_0|^4 - t|J_0|^2 - \bigl(J_0 + \overline{J_0}\bigr) = E(t), \tag{6.158}
\]
where E(t) is a "constant" of integration. This expression for the first integral can be considered as an equation for the function J₀. Expressing the complex conjugate function from eq. (6.155) and substituting it into eq. (6.158), we obtain an equation for J₀ alone:
\[
iS'\,\partial_{t_1}J_0 = \pm\sqrt{2J_0^3 + \bigl(2E(t) + t^2\bigr)J_0^2 + 2tJ_0 + 1}. \tag{6.159}
\]
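The conservation of E along the t₁-flow can be checked numerically. The following minimal sketch integrates eq. (6.155) with the slow time t frozen and S′ set to an arbitrary real constant (both choices are assumptions made only for the illustration), and verifies that the quantity in eq. (6.158) stays constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check of the first integral (6.158): along solutions of the leading-order
# equation (6.155), i*S'*dJ0/dt1 = 1 + t*J0 - |J0|^2*J0 (t frozen, S' real),
# the quantity E = |J0|^4/2 - t*|J0|^2 - 2*Re(J0) is conserved in t1.
t, Sp = 0.3, 1.0          # sample frozen slow time and a sample value of S'

def rhs(t1, y):
    J = y[0]
    return [-1j * (1.0 + t * J - abs(J) ** 2 * J) / Sp]

def E(J):
    return 0.5 * abs(J) ** 4 - t * abs(J) ** 2 - 2.0 * J.real

sol = solve_ivp(rhs, (0.0, 20.0), [0.5 + 0.2j], rtol=1e-10, atol=1e-12)
E_vals = np.array([E(J) for J in sol.y[0]])
print(E_vals.max() - E_vals.min())   # ~0: E is conserved along the t1-flow
```

The drift of E over the integration interval is at the level of the solver tolerances, which is the numerical signature of a first integral.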

This equation is easily integrated:
\[
iS' \int_{u_0}^{J_0} \frac{dy}{\pm\sqrt{2y^3 + (2E + t^2)y^2 + 2ty + 1}} = t_1 + S_0. \tag{6.160}
\]


Here u₀ and S₀ are constants, and we integrate along the curve A(t) in the complex plane. This curve is defined by
\[
\frac{1}{2}|y|^4 - t|y|^2 - \bigl(y + \overline{y}\bigr) = E(t).
\]
The sign + or − fixes the sheet of the Riemann surface on which the initial point u₀ is situated; we choose the sign "+". The motion defined by eq. (6.160) is periodic with respect to the fast variable t₁. By integrating formula (6.160) over the period we obtain
\[
iS' \oint_{A(t)} \frac{dy}{\sqrt{2y^3 + (2E + t^2)y^2 + 2ty + 1}} = T. \tag{6.161}
\]

The constant T is the period of the function J₀ with respect to the fast variable t₁. Formula (6.161) is a differential equation for the function S(t).

It is convenient to represent the leading-order term J₀(t₁, t) as the sum of its real and imaginary parts: J₀ = J_{0,R} + iJ_{0,I}. By a shift of the variable t₁ one can make J_{0,R}(t₁, t) an even function of t₁ and J_{0,I}(t₁, t) an odd one. This parity property of J_{0,R} and J_{0,I} is convenient for evaluating integrals with respect to the fast variable.

To determine the leading-order term in eq. (6.154) completely it is necessary to obtain an equation for E(t). This equation follows from the condition of boundedness of J₁. The general solution of the equation for the first correction is represented using two linearly independent solutions of the homogeneous equation:
\[
u_1(t_1, t) = \partial_{t_1}J_0, \qquad u_2(t_1, t) = \frac{\partial_E S}{S'}\, t_1 u_1 + \partial_E J_0. \tag{6.162}
\]
Here u₁ is a periodic bounded function, while u₂ grows linearly with respect to t₁. The Wronskian of these solutions is
\[
W = u_1\overline{u_2} - \overline{u_1}u_2
= \frac{1}{iS'}\bigl(-(|J_0|^2 - t)J_0 + 1\bigr)\partial_E \overline{J_0} + \frac{1}{iS'}\bigl(-(|J_0|^2 - t)\overline{J_0} + 1\bigr)\partial_E J_0
= \frac{1}{iS'}\,\partial_E\Bigl(-\frac{1}{2}|J_0|^4 + t|J_0|^2 + J_0 + \overline{J_0}\Bigr) = \frac{-1}{iS'}.
\]

To obtain a necessary condition for the boundedness of J₁ we multiply eq. (6.156) by ∂_{t₁}J̄₀ and integrate with respect to t₁ over the period. Then we integrate the first term by parts:
\[
iS'\int_0^T dt_1\,\partial_{t_1}J_1\,\partial_{t_1}\overline{J_0}
= -\int_0^T dt_1\,\Bigl(\bigl(2|J_0|^2 - t\bigr)\partial_{t_1}\overline{J_0} + \overline{J_0}^{\,2}\,\partial_{t_1}J_0\Bigr)J_1.
\]


As a result we obtain
\[
\int_0^T dt_1\,\Bigl[J_0^2\,\overline{J_1}\,\partial_{t_1}\overline{J_0} - \overline{J_0}^{\,2}\,\partial_{t_1}J_0\,J_1\Bigr]
= -i\int_0^T dt_1\,\partial_t J_0\,\partial_{t_1}\overline{J_0} - i\varphi_0'\int_0^T dt_1\,\partial_{t_1}J_0\,\partial_{t_1}\overline{J_0}.
\]

Let us consider the equation for the complex conjugate function:
\[
-iS'\,\partial_{t_1}\overline{J_1} + \bigl(2|J_0|^2 - t\bigr)\overline{J_1} + \overline{J_0}^{\,2} J_1 = i\,\partial_t\overline{J_0} + i\varphi_0'\,\partial_{t_1}\overline{J_0}.
\]
Multiplying this equation by ∂_{t₁}J₀ and integrating with respect to t₁ over the period, we obtain
\[
\int_0^T dt_1\,\Bigl[\overline{J_0}^{\,2} J_1\,\partial_{t_1}J_0 - J_0^2\,\partial_{t_1}\overline{J_0}\,\overline{J_1}\Bigr]
= i\int_0^T dt_1\,\partial_t\overline{J_0}\,\partial_{t_1}J_0 + i\varphi_0'\int_0^T dt_1\,\partial_{t_1}\overline{J_0}\,\partial_{t_1}J_0.
\]

Combining the obtained expressions we get
\[
-i\int_0^T dt_1\,\partial_t J_0\,\partial_{t_1}\overline{J_0} + i\int_0^T dt_1\,\partial_t\overline{J_0}\,\partial_{t_1}J_0 = 0.
\]
After integrating by parts we get
\[
i\int_0^T dt_1\,\overline{J_0}\,\partial_{t_1}\partial_t J_0 + i\int_0^T dt_1\,\partial_t\overline{J_0}\,\partial_{t_1}J_0 = 0,
\]
or
\[
i\,\partial_t\int_0^T dt_1\,\overline{J_0}\,\partial_{t_1}J_0 = 0,
\]
or
\[
I \equiv i\oint_{A(t)}\overline{J}\,dJ = \mathcal{I} = \text{const}. \tag{6.163}
\]

Geometrically, the integral in this formula is twice the area of the domain bounded by the curve A(t). The necessary condition for the boundedness of the first correction term is the invariance of this area; it is the classical adiabatic invariant. The solution of the equation for the first correction term can be written in the form
\[
J_1(t_1, t) = c_1\,\partial_{t_1}J_0 + \frac{-i\varphi_0'}{\partial_E S}\,\partial_E J_0 + R_1(t_1, t). \tag{6.164}
\]


Here
\[
R_1(t_1, t) = u_1(t_1, t)\int_0^{t_1} dt'\,\bigl(\partial_t J_0(t', t)\,\overline{u_2}(t', t) + \partial_t \overline{J_0}(t', t)\,u_2(t', t)\bigr)
- u_2(t_1, t)\int_0^{t_1} dt'\,\bigl(\partial_t J_0(t', t)\,\overline{u_1}(t', t) + \partial_t \overline{J_0}(t', t)\,u_1(t', t)\bigr).
\]
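The geometric meaning of the invariant (6.163), namely that i∮ J̄ dJ equals (up to the orientation sign) twice the enclosed area, can be checked on any closed contour. The sketch below uses a circle of radius 2 as an assumed test contour and a midpoint rule for the line integral:

```python
import numpy as np

# Discrete check that I = i * contour-integral of conj(J) dJ equals, up to the
# orientation sign, twice the area enclosed by the contour (cf. eq. (6.163)).
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
R = 2.0
z = R * np.exp(1j * theta)               # counterclockwise circle, area = pi*R^2
dz = np.diff(z)
zmid = 0.5 * (z[:-1] + z[1:])            # midpoint rule for the line integral
I = 1j * np.sum(np.conj(zmid) * dz)
area = np.pi * R**2
print(I.real, -2.0 * area)               # I is real; equals -2*area for this orientation
```

For the counterclockwise orientation the value is −2·(area); traversing the curve in the opposite direction flips the sign, which is why only the invariance of the area matters in eq. (6.163).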

To evaluate the second correction we use the parity properties of the real and imaginary parts of the first correction term. For the first two terms in formula (6.164) these properties follow from the analysis of the leading-order term of the asymptotics. The last term of the sum contains the even function Re(R₁) and the odd function Im(R₁) with respect to the variable t₁.

Construction of a periodic second correction allows us to determine the function φ₀(t). The necessary condition for periodicity of the second correction is
\[
\int_0^T dt_1\,\bigl(F_2\,\partial_{t_1}\overline{J_0} + \overline{F_2}\,\partial_{t_1}J_0\bigr) = 0,
\]
where F₂ = −i∂_tJ₁ − iφ₀′∂_{t₁}J₁ − iφ₁′∂_{t₁}J₀ + F̃₂ is the right-hand side of eq. (6.157) for n = 2, with F̃₂ denoting its cubic part. Let us separately evaluate the part of the integral that is quadratic in J₁:
\[
\mathcal{J} = \int_0^T dt_1\,\Bigl[-\bigl(2|J_1|^2 J_0 + J_1^2\,\overline{J_0}\bigr)\partial_{t_1}\overline{J_0} - \bigl(2|J_1|^2\,\overline{J_0} + \overline{J_1}^{\,2} J_0\bigr)\partial_{t_1}J_0\Bigr].
\]

Integrating by parts and substituting the expressions for ∂_{t₁}J₁ and ∂_{t₁}J̄₁ lead to the following formula:
\[
\mathcal{J} = \int_0^T dt_1\,\Bigl[\bigl(-i\partial_t J_0 - i\varphi_1'\partial_{t_1}J_0 - i\varphi_0'\partial_{t_1}J_1\bigr)\partial_{t_1}\overline{J_1}
+ \bigl(i\partial_t\overline{J_0} + i\varphi_1'\partial_{t_1}\overline{J_0} + i\varphi_0'\partial_{t_1}\overline{J_1}\bigr)\partial_{t_1}J_1\Bigr].
\]
Then we obtain
\[
\mathcal{J} = \partial_t\int_0^T dt_1\,\bigl[i\overline{J_1}\,\partial_{t_1}J_0 - iJ_1\,\partial_{t_1}\overline{J_0}\bigr].
\]

Substituting J₁ into this formula and using the parity properties we obtain
\[
\partial_t\int_0^T dt_1\,\frac{\varphi_0'}{\partial_E S}\,\bigl(\partial_E J_0\,\partial_{t_1}\overline{J_0}\bigr) = 0,
\]
or
\[
\frac{\varphi_0'}{\partial_E S}\,\partial_E I \equiv \varphi_{0,1} = \text{const}. \tag{6.165}
\]


This elegant form of the equation for the phase shift was obtained by Bourland and Haberman [17] for solutions of a second-order differential equation. In this way we determine the bounded corrections Jₙ and write out the equations for φ_{n−2}, n ≥ 3. These formulas determine the asymptotics for the family of fast oscillating solutions of eq. (6.56).

6.6.2 The Confluent Asymptotic Solution

In this section we choose the values of the parameters at which the constructed fast oscillating expansion degenerates at t = t∗. The algebraic expansion constructed above degenerates at t = t∗ at the point U∗. The internal asymptotic expansion has the constant U∗ as its leading-order term. Hence the fast oscillating expansion degenerates at the point U∗, and this allows us to match the constructed expansions in the different domains. It yields
\[
E_* = \frac{1}{2}|U_*|^4 - t_*|U_*|^2 - \bigl(U_* + \overline{U_*}\bigr) = \frac{3}{4}\Bigl(\frac{1}{2}\Bigr)^{1/3}.
\]
At the point U∗ all three roots of the equation
\[
2y^3 + \bigl(2E + t_*^2\bigr)y^2 + 2t_* y + 1 = 0
\]
coalesce. This means that the integrand in eq. (6.160) has a singularity of order 3/2 and the elliptic integral in eq. (6.160) degenerates. The obtained values of the constants E∗ and t∗ allow us to evaluate the adiabatic invariant ℐ. To obtain this value it is necessary to evaluate the integral
\[
I_* = i\oint_{A(t_*)}\overline{u}\,du = \mathcal{I}_*. \tag{6.166}
\]

As a result, eq. (6.163) with ℐ = ℐ∗ defines the function E(t) for this confluent solution. Thus, to obtain the leading-order term in eq. (6.154) completely, it remains to determine S₀ in formula (6.160). It can be evaluated by matching asymptotics (6.154) with the internal asymptotics from the previous section.

6.6.3 The Domain of Validity of the Confluent Asymptotic Solution as t → t∗ − 0

To match the internal asymptotic expansions with the fast oscillating expansion it is necessary to know where the latter is valid as t → t∗. From formula (6.161) it is easy to see that the function S′ is determined by the value inverse to the integral
\[
K = \oint_{A(t)}\frac{dy}{\sqrt{2y^3 + (2E + t^2)y^2 + 2ty + 1}}.
\]


So, to determine the domain of validity of the fast oscillating solution, it is necessary to determine the order of the singularity of the leading-order term in the asymptotics of this integral as t → t∗ − 0. The integral K depends on the two parameters t and E; the function E = E(t) is defined by eq. (6.163). In the neighbourhood of the point t∗ we use the notation
\[
t = t_* + \mu, \qquad E = E_* + \nu.
\]
Let us change the variable in the integral:
\[
y = -\Bigl(\frac{1}{2}\Bigr)^{1/3} + \zeta.
\]
Then the integral K is
\[
K = \oint_{A(t)} d\zeta\,\Bigl(2\zeta^3 + \zeta^2\Bigl[2\nu + 6\Bigl(\tfrac{1}{2}\Bigr)^{2/3}\mu + \mu^2\Bigr]
+ \zeta\Bigl[-2^{5/3}\nu - 4\mu - 2^{2/3}\mu^2\Bigr]
+ \Bigl[2^{1/3}\nu + 3\Bigl(\tfrac{1}{2}\Bigr)^{1/3}\mu - 2^{2/3}\mu + \Bigl(\tfrac{1}{2}\Bigr)^{2/3}\mu^2\Bigr]\Bigr)^{-1/2}.
\]

The curve A(t) at t = t∗ passes through the point ζ = 0, where the integrand has a non-integrable singularity, so at t = t∗ the integral diverges. The structure of the integrand and of the curve A(t) shows that, to evaluate the leading-order term of the asymptotics as t → t∗ − 0, it is enough to evaluate the integral over a small segment of the curve A(t) close to the point ζ = 0. For these evaluations it is convenient to pass to an integral over a small segment of the real axis and to write the complex variable ζ in the form
\[
\zeta = \xi + i\eta, \qquad \xi, \eta \in \mathbb{R}.
\]
In a small neighbourhood of ζ = 0 the imaginary part of ζ on the curve is expressed by the formula
\[
\eta_\pm(\xi) = \pm\sqrt{\,2^{1/3} + 2^{2/3}\xi - \xi^2 + \mu - 2^{1/3}\sqrt{1 + 3\bigl(\tfrac{1}{2}\bigr)^{1/3}\mu + 2^{1/3}\nu + 2^{-2/3}\mu^2 + 2^{4/3}\xi}\,}.
\]
If we assume that ξ, μ and ν are small, then we obtain
\[
\eta_\pm^2 = -2^{-2/3}\xi^3 + 2^{-1/3}\Bigl[3\bigl(\tfrac{1}{2}\bigr)^{1/3}\mu + 2^{1/3}\nu\Bigr]\xi + \bigl[-\mu/2 - 2^{1/3}\nu\bigr]
+ O(\xi^4) + O(\xi^2\mu) + O(\xi^2\nu) + O(\xi\nu) + O(\xi\mu) + O(\mu^2) + O(\nu^2) + O(\mu\nu). \tag{6.167}
\]
The function ν(μ) remains undetermined in this formula.


E(t) and ν(μ) are obtained from eq. (6.163). Let us write this equation as integrals over segments:
\[
\int_{x_l}^{x_+} dx\,\sqrt{t - x^2 + \sqrt{t^2 + 2E + 4x}}
\;-\; \int_{x_l}^{x_-} dx\,\sqrt{t - x^2 - \sqrt{t^2 + 2E + 4x}} = \mathcal{I}_*, \tag{6.168}
\]
where x_l = −(t² + 2E)/4, x₊ is the right real root of the equation
\[
x^4 - 2tx^2 - 4x - 2E = 0, \tag{6.169}
\]
and x₋ is the left one. To construct the asymptotics of the function E(t) as t → t∗ − 0 it is necessary to write the asymptotics of the roots of eq. (6.169). In the domain under investigation this equation has two real roots and two complex conjugate roots, so we write
\[
(x - x_+)(x - x_-)\bigl((x - m)^2 + n^2\bigr) = 0,
\]
where x₋, x₊, m and n are real functions of t satisfying Vieta's equations
\[
x_- x_+ \bigl(m^2 + n^2\bigr) = -2E, \qquad
-(x_- + x_+)\bigl(m^2 + n^2\bigr) - 2m\,x_- x_+ = -4,
\]
\[
m^2 + n^2 + x_- x_+ + 2m(x_- + x_+) = -2t, \qquad
x_- + x_+ + 2m = 0.
\]
We seek a solution of this system in the form
\[
x_- = U_* + \alpha, \qquad x_+ = -3U_* + \beta, \qquad m = U_* + m_1.
\]
Substituting U∗, E∗ and t∗, we get
\[
2^{2/3}\bigl[3m_1^2 - n^2 - \mu\bigr] - 2m_1\bigl[-m_1^2 + n^2 + \mu\bigr] = 0,
\]
\[
2^{1/3}\bigl[6m_1^2 - 2n^2 - \mu\bigr] + 2^{5/3}m_1\bigl[\mu - 3m_1^2 - 2n^2\bigr] + \bigl[m_1^2 + n^2\bigr]\bigl[-2\mu + 3m_1^2 - n^2\bigr] = -2\nu.
\]
The asymptotics of the solution of this system is sought in the form
\[
m_1 = \mu_1\sqrt{-\mu} + \mu_2\mu + \cdots, \qquad
n = n_1\sqrt{-\mu} + n_2\mu + \cdots, \qquad
\nu = \nu_1\mu + \nu_2(-\mu)^{3/2} + \cdots.
\]


By substituting into the system we get
\[
3\mu_1^2 - n_1^2 = 1, \qquad \nu_1 = -\Bigl(\frac{1}{2}\Bigr)^{2/3}.
\]

Thus, at first order the relation between μ₁ and n₁ is not yet determined. This relation can be found from integral equation (6.168).

Let us construct the leading-order term in the asymptotics of the integral K. In a small neighbourhood of ζ = 0 the curve A(t) intersects the real axis, so the integral over a small neighbourhood of ζ = 0 can be represented in the form
\[
K_0 \sim \int_{\alpha}^{\xi_{\max}}\frac{\partial_\xi\zeta_+(\xi)\,d\xi}{\sqrt{2\zeta_+^3 - 2\bigl[-2^{4/3}+2\bigr]\mu\,\zeta_+ + \alpha_+(-\mu)^{3/2}}}
+ \int_{\alpha}^{\xi_{\max}}\frac{\partial_\xi\zeta_-(\xi)\,d\xi}{\sqrt{2\zeta_-^3 - 2\bigl[-2^{4/3}+2\bigr]\mu\,\zeta_- + \alpha_-(-\mu)^{3/2}}},
\]
where ξ_max is the zero of the radicand, ζ± = ξ + iη±(ξ), and α± are constants. The parameter α is chosen such that α(−μ)^{−1/2} ≫ 1 and α ≪ 1 as μ → −0. The asymptotics of the functions η±, the explicit formula for the leading-order term of the integral K₀ and the change of variable ξ = (−μ)^{1/2}λ in the integral lead us to the formula
\[
K_0 = O\bigl((-\mu)^{-1/4}\bigr).
\]
Therefore
\[
S' = O\bigl((-\mu)^{1/4}\bigr), \quad \mu \to -0.
\]
By similar evaluations one can show that
\[
\partial_E I = O\bigl((-\mu)^{-1/4}\bigr), \qquad \partial_E S = O\bigl((-\mu)^{1/4}\bigr), \quad \mu \to -0.
\]
Therefore φ₀′ = O((−μ)^{1/2}). In this way we obtain
\[
\partial_t J_0 = O(\mu^{-1}) \quad \text{as } \mu \to -0.
\]
From eqs. (6.159) and (6.162) we find that the solutions u₁ and u₂ of the homogeneous equation are O(μ^{−1/4}) as μ → 0. From eq. (6.164) we obtain J₁ = O((−μ)^{−3/2}). Analysis of eq. (6.157) shows that the right-hand side is Fₙ = O((−μ)^{(−4n+2)/2}); therefore,
\[
J_n = O\bigl((-\mu)^{(-4n+1)/2}\bigr), \quad \mu \to 0.
\]


It follows that the fast oscillating asymptotics is valid for (t∗ − t) ≥ ε^{1/2−γ₅}. It is easy to see that the domains of validity intersect when γ₅ ∈ (0, 1/6).

6.6.4 Matching of the Asymptotics

Let us match the fast oscillating asymptotics and the internal asymptotics (6.66) and (6.115) in the transition layer. The leading-order term of the fast oscillating asymptotics is defined as an implicit function by the formula
\[
t_1 = -iS' \int_{y \in A(t)}^{\,U_* + W} \frac{dy}{-|y|^2 y - ty + 1}.
\]

This integral is represented in the form
\[
t_1 = -iS' \int_{A(t_*)}^{\,U_* + W} \frac{dy}{-|y|^2 y - t_* y + 1 + O(t - t_*)\,y}\,.
\]
Here G_A denotes the figure bounded by the curves A(t) and A(t∗), and S_A its area. The function U∗ + W is periodic with respect to t₁. In the case when the inequalities
\[
\frac{S_A}{W^2} \ll 1, \qquad \frac{t - t_*}{W^{7/2}} \ll 1
\]
are valid, we obtain the following representation in some neighbourhoods of the points t_{(k)}:
\[
J_0 \sim U_* + W(t_{(k)}), \qquad W(t_{(k)}) = \frac{-2}{\bigl(t_{(k)} - iU_*\bigr)^2}.
\]
The argument of the function W can be represented in the form
\[
t_1 + \frac{\varphi_0(t_{(k)})}{S'} \sim \frac{S(t_{(k)}) + S'(t_{(k)})(t - t_{(k)}) + o(t - t_{(k)})}{\varepsilon\bigl(S'(t_{(k)}) + O(S''(t_{(k)})(t - t_{(k)}))\bigr)} + \frac{\varphi_0(t_{(k)})}{S'(t_{(k)})}, \quad t \to t_{(k)}.
\]
By matching with the leading-order term of the internal asymptotics we obtain
\[
\frac{S(t_{(k)})}{\varepsilon S'(t_{(k)})} + \frac{t - t_{(k)}}{\varepsilon} + \frac{\varphi_0(t_{(k)})}{S'(t_{(k)})} \sim \zeta_{(k)}.
\]


This matching with the second internal asymptotics shows that the fast oscillating expansion can also be matched with the intermediate expansion. The constants φ₀,₀ and φ₀,₁ are evaluated in this way.

6.6.5 Asymptotic Behaviours

The asymptotic solution has different asymptotic expansions on different intervals of the segment [t∗ − C, t∗ + C]. Below we present the asymptotic expansions on the corresponding intervals. All asymptotic expansions are formal: if one substitutes the first N terms of the asymptotic series into eq. (6.56), then one obtains an error O(ε^{N₁}), where lim_{N→∞}(N₁/N) = const > 0.

Proposition 1. The asymptotic expansion of the special solution has the form
\[
U(t,\varepsilon) = \sum_{m=0}^{\infty}\varepsilon^{2m}u_{2m}(t) + i\sum_{m=0}^{\infty}\varepsilon^{2m+1}u_{2m+1}(t) \tag{6.170}
\]
as (t − t∗) ≥ ε^{4/5−γ₁}, ∀γ₁ ∈ (0, 2/5). The leading-order term in this asymptotic expansion is u₂(t). The higher-order terms are algebraic functions of t and satisfy eqs. (6.60) and (6.61).

Define σ = (t − t∗)ε^{−4/5}. Denote by y(x, s₂, s₃) the Painlevé-1 transcendent, where s₂, s₃ are monodromy data (see Ref. [79]). Let x₀ be the first real pole of the Painlevé-1 transcendent with monodromy data s₂ = s₃ = 0.

Proposition 2. When σ ≤ ε^{−4/5+γ₁} and (σ − σ₀) ≥ ε^{1/5−γ₂}, ∀γ₂ ∈ (0, 1/10), the asymptotic expansion of the solution has the form
\[
U(t,\varepsilon) = U_* + \varepsilon^{2/5}\sum_{n=0}^{\infty}\varepsilon^{2n/5}\alpha_n(\sigma) + i\,\varepsilon^{3/5}\sum_{n=0}^{\infty}\varepsilon^{2n/5}\beta_n(\sigma). \tag{6.171}
\]
Here σ₀ = −x₀/2, α₀ = −y(−2σ, 0, 0)/2 and β₀ = α₀′/(2U∗²). The higher-order terms in this expansion are defined by solutions of eqs. (6.74) and (6.75) with the asymptotic behaviour as σ → ∞ obtained from eqs. (6.68) and (6.69).

In the neighbourhood of σ₀ the coefficients of the asymptotic expansion of the solution depend on the fast variable ζ = (σ − σ₀)ε^{−1/5}.

Proposition 3. When −ε^{−1/10+γ₃} ≤ ζ₀ ≤ ε^{−1/5+γ₂}, ∀γ₃ ∈ (0, 1/15), the asymptotic expansion of the solution has the form
\[
U(t,\varepsilon) \equiv W(\zeta_0,\varepsilon) = U_* + w_0(\zeta_0) + \varepsilon^{4/5}\sum_{n=1}^{\infty}\varepsilon^{(n-1)/5}w_n(\zeta_0), \tag{6.172}
\]


where ζ₀ = ζ + Σ_{n=1}^{∞} ε^{n/5} ζ_{n,0}. The constants ζ_{n,0} are defined by formula (6.89). The leading-order term of the asymptotic expansion has the form
\[
w_0(\zeta_0) = \frac{-2}{(\zeta_0 - iU_*)^2}.
\]
The higher-order terms in eq. (6.172) are defined by eq. (6.99).

For −ζ₀ ≥ ε^{−1/10+γ₃} the asymptotic solution is described by a sequence of two asymptotic expansions, which are called the intermediate and the separatrix asymptotic expansions. Denote T_k = ζ_{k−1}ε^{1/6}, k = 1, 2, …. The variable ζ_{k−1} for k > 1 is defined by eq. (6.174).

Proposition 4. For k ≪ ε^{−1/7}, in the domain
\[
\Bigl(T_k - \sum_{l=4}^{\infty}\varepsilon^{l/30}x_{l,k}^{-}\Bigr) \ge \varepsilon^{1/6-\gamma_3}, \qquad
\Bigl(T_k + K_k - \sum_{l=4}^{\infty}\varepsilon^{l/30}x_{l,k+1}^{+}\Bigr) \ge \varepsilon^{1/6-\gamma_4},
\]
γ₄ ∈ (0, 1/15), the intermediate asymptotic expansion has the form
\[
U(t,\varepsilon) = U_* + \varepsilon^{1/3}\sum_{n=0}^{\infty}\varepsilon^{n/30}A_{n,k}(T_k) + i\,\varepsilon^{1/2}\sum_{n=0}^{\infty}\varepsilon^{n/30}B_{n,k}(T_k), \quad \varepsilon \to 0,
\]
where
\[
K_k = 2\int_{A_{\min}}^{\infty}\frac{dy}{\sqrt{4y^3 - g_3(k)}}, \qquad A_{\min} = \Bigl(\frac{g_3(k)}{4}\Bigr)^{1/3},
\]
g₃(k) = g₃(k − 1) + π/112, g₃(0) = a₄/56; the constant a₄ is the coefficient of (σ − σ₀)⁴ in the Laurent expansion of α₀(σ). The leading-order term of the expansion is A₀,k = −2℘(T_k, 0, g₃(k)). The higher-order terms are defined by eq. (6.137).

Proposition 5. The intermediate expansion, for 1 ≪ k ≪ ε^{−1} and
\[
\Bigl(T_k - \sum_{l=4}^{\infty}\varepsilon^{l/30}x_{l,k}^{+}\Bigr) \ge \varepsilon^{1/6-\gamma_3}, \qquad
\Bigl(T_k + K_k - \sum_{l=4}^{\infty}\varepsilon^{l/30}x_{l,k+1}^{+}\Bigr) \ge \varepsilon^{1/6-\gamma_4},
\]
has the form
\[
U(t,\varepsilon) = U_* + \varepsilon^{1/3}\sum_{n=0}^{\infty}\varepsilon^{n/6}A_{n,k}(T_k,\varepsilon) + i\,\varepsilon^{1/2}\sum_{n=0}^{\infty}\varepsilon^{n/6}B_{n,k}(T_k,\varepsilon), \tag{6.173}
\]


where K_k is the real period of ℘(T_k, g₂(k,ε), g₃(k,ε));
\[
g_2(k,\varepsilon) = g_2(k-1,\varepsilon) + \varepsilon^{1/6}\Bigl(K_{k-1} + \frac{1}{4}\sum_{n=4}^{\infty}\varepsilon^{(n-1)/30}x_{n,k-1}^{+}\Bigr), \qquad
g_2(0,\varepsilon) = \varepsilon^{2/15}\Bigl(\sigma_0 + \sum_{n=1}^{\infty}\varepsilon^{(2n-1)/5}\zeta_{n,0}\Bigr);
\]
the constants x^+_{n,l} for l = 1, 2, …, k − 1 are defined by eqs. (6.145), (6.151) and (6.152);
\[
g_3(k,\varepsilon) = g_{0,3}(k) + \sum_{n=1}^{\infty}\varepsilon^{n/30}g_{n,3}(k), \qquad
g_3(0,\varepsilon) = \Bigl(a_4 + \varepsilon^{-1/15}\sum_{n=1}^{\infty}\varepsilon^{2n/5}b_n^{+}\Bigr)\Big/56,
\]
where g_{n,3}(k) is evaluated by eq. (6.146). The leading-order term is A₀,k(T_k,ε) = −2℘(T_k, g₂(k,ε), g₃(k,ε)). The higher-order terms in this expansion are defined by eq. (6.150).

The separatrix expansions are valid in small neighbourhoods of the poles of the Weierstrass function. Denote by
\[
\zeta_k = \Bigl(T_k + K_k - \sum_{n=4}^{\infty}\varepsilon^{n/30}x_{n,k}^{+}\Bigr)\varepsilon^{-1/6}, \qquad k = 1, 2, \dots. \tag{6.174}
\]

Proposition 6. The separatrix expansion of the solution, for all k and for −ε^{−1/6+γ₃} ≤ ζ_k ≤ ε^{−1/6+γ₄}, has the form
\[
U(t,\varepsilon) \equiv W(\zeta_k,\varepsilon) = U_* + w_0(\zeta_k) + \varepsilon^{4/5}\sum_{n=1}^{\infty}\varepsilon^{(n-1)/30}W_n(\zeta_k). \tag{6.175}
\]
The higher-order terms in the expansion are defined by eq. (6.125).

Statement. The sequence of the intermediate and separatrix asymptotic expansions is valid when t < t∗ and (t∗ − t) ≤ ε^{1/6+γ₅}, γ₅ ∈ (0, 1/6). The proof is contained in Section 5.3.2.

Proposition 7. When (t∗ − t) ≥ ε^{1/2−γ₅} the fast oscillating asymptotics has the form
\[
U(t,\varepsilon) = \sum_{n=0}^{\infty}\varepsilon^{n}J_n(t_1, t, \varepsilon), \tag{6.176}
\]


where t₁ = S(t)/ε + Σ_{n=0}^{∞} εⁿφₙ(t); S(t) is the solution of the Cauchy problem
\[
iS'\oint_{A(t)}\frac{dy}{\sqrt{2y^3 + (2E + t^2)y^2 + 2ty + 1}} = T, \qquad S|_{t=0} = 0,
\]
T = const > 0, where the curve A(t) is defined by
\[
\frac{1}{2}|U|^4 - t|U|^2 - \bigl(U + \overline{U}\bigr) = E(t).
\]
The function E(t) is a solution of
\[
I \equiv i\oint_{A(t)}\overline{u}\,du = \mathcal{I}_*.
\]
The phase shift φ₀ is defined by the Cauchy problem
\[
\frac{\varphi_0'}{\partial_E S}\,\partial_E I = \varphi_{0,1}, \qquad \varphi_0(t_*) = \varphi_{0,0}.
\]
The leading-order term of expansion (6.176) lies on the curve A(t) and satisfies the equation
\[
iS'\,\partial_{t_1}J_0 + |J_0|^2 J_0 - tJ_0 = 1.
\]
The higher-order terms of the oscillating expansion are periodic functions of t₁ defined from eq. (6.157). The intervals for the constants γ₁, …, γ₅ are such that the domains of validity of the asymptotic expansions in Propositions 1–7 pairwise intersect.

6.7 Dissipation Is Cause for Halt of Resonant Growth

The autoresonance phenomenon in systems with dissipation was earlier studied both by means of mathematical models and in physical experiments. In particular, the existence of an autoresonant solution for a system of three coupled oscillators with small dissipation was established in Ref. [164], and for a system with parametric autoresonance in Ref. [85]. In the papers [37, 123] the threshold of capture into autoresonance in the presence of dissipation was discussed. The resonant phase-locking phenomenon in the van der Pol–Duffing equation with an external driver of slowly varying frequency was studied in Ref. Here we treat two problems for autoresonance in dissipative systems. First, we prove the existence of an attracting set for solution trajectories captured into


autoresonance. Such an attracting set was observed numerically in a number of papers [85, 142, 164]. The attractor in these systems is a slowly varying steady-state solution. The solutions captured into autoresonance oscillate around such a solution and lose the energy of these oscillations because of dissipation. Therefore all captured solutions tend to the steady-state solution; mathematically this means that the slowly varying steady-state solution is Lyapunov stable. The second problem we deal with consists in evaluating the bound of the autoresonant growth of the solution in the presence of small dissipation in the system. It was observed earlier that the amplitude growth of nonlinear oscillations in systems with dissipation is bounded (see Refs. [141, 142, 163]). From the physical viewpoint, the boundedness of autoresonant growth can be easily explained. The work of the driver is proportional to the length of the trajectory in the phase space. If the dissipation depends linearly on the velocity, then its work is proportional to the area swept by the phase trajectory. As the energy grows, the area of the phase curve grows faster than its length. It follows that, even if the dissipation is small, its work exceeds the work of the external force at some moment, and the autoresonant growth of the solution stops. Mathematically this looks like the impossibility of extending the solution under phase capture. What happens is a hard loss of stability and a passage to fast oscillations.

This section is aimed at finding an asymptotic expansion for the slowly varying steady-state solution of the primary resonance equation and at showing that it is an attracting set for those solutions that are captured into resonance. Moreover, we derive asymptotics for the maximal amplitude of oscillations under autoresonance with small dissipation and calculate the period of the autoresonant mode in the solution.

6.7.1 Setting of the Problem

In Section 5.1.3 we have mentioned autoresonance and dissipation. Here we study the influence of dissipation on the autoresonance in detail. We study the solution of the primary resonance equation
\[
i\Psi' + \bigl(T - |\Psi|^2\bigr)\Psi + i\delta\Psi = f, \tag{6.177}
\]
where T is the independent variable, δ is a dissipation parameter and f is a parameter related to the amplitude of the external force. The primary resonance equation is often written as a system of equations for the amplitude R(T) = |Ψ(T)| and the phase φ(T) = arg Ψ(T). More precisely,
\[
R' = -\delta R - f\sin\varphi, \qquad \varphi' = \bigl(T - R^2\bigr) - \frac{f}{R}\cos\varphi. \tag{6.178}
\]

The autoresonance, or phase-locking, for the solution of system (6.178) means that φ′ = o(1) as T → ∞. This condition along with the second equation of system (6.178)


determines the behaviour of the amplitude growth, which reads R = √T + o(1). The first equation of (6.178) gives a sufficient condition for the instant at which the phase-locking is destroyed, namely T∗ = f²/δ². Analysis of the equation and the phase-locking condition actually yields an estimate of autoresonant growth in a dissipative system with small dissipation, δ ≪ 1. Here we discuss asymptotics for the slowly varying steady-state solution of eq. (6.178) with δ ≪ 1. We prove that this solution is an attracting set for the captured solutions. Moreover, we give the asymptotics of the maximal value of R and evaluate the instant at which the phase-locking is destroyed. In order to better motivate the problem, we demonstrate results of numerical simulations for eq. (6.177) with δ > 0. In Figure 6.2 one can observe three stages of evolution of the solution of eq. (6.177). At the first stage the oscillations are close to some smooth curve. Then, at the second stage, the solution varies slowly. Finally, at the third stage, the solution loses its stability and the amplitude of fast oscillations tends to zero.

6.7.2 Asymptotics of Autoresonant Growth Under Dissipation

Let us change the variables and functions:
\[
\zeta = T\delta^2, \qquad \delta\,\Psi(T) = \psi(\zeta,\delta). \tag{6.179}
\]

Figure 6.2. The solution of eq. (6.177) with parameters f = 1, δ = 0.1 and the initial condition Ψ|_{T=0} = 0, shown as a trajectory in the space (Re Ψ, Im Ψ, T). The modulus of the solution increases slowly; then at T = 100 the solution passes from slow variation to fast oscillations with decaying amplitude.
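The behaviour shown in Figure 6.2 can be reproduced qualitatively by direct integration of eq. (6.177) rewritten as Ψ′ = i(T − |Ψ|²)Ψ − δΨ − if. The following sketch uses the figure's parameters; the specific solver settings are assumptions for the illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Qualitative reproduction of Figure 6.2: primary resonance equation (6.177),
# i*Psi' + (T - |Psi|^2)*Psi + i*delta*Psi = f, rewritten for the solver as
# Psi' = i*(T - |Psi|^2)*Psi - delta*Psi - i*f.
f, delta = 1.0, 0.1

def rhs(T, y):
    psi = y[0]
    return [1j * (T - abs(psi) ** 2) * psi - delta * psi - 1j * f]

sol = solve_ivp(rhs, (0.0, 140.0), [0.0 + 0.0j], max_step=0.01,
                rtol=1e-9, atol=1e-9, dense_output=True)

T = np.linspace(1.0, 140.0, 2000)
amp = np.abs(sol.sol(T)[0])
# For the captured solution |Psi| should track sqrt(T) until the break
# near T* = f^2/delta^2 = 100, and decay afterwards.
print(amp[T.searchsorted(80.0)])
```

The amplitude stays near √T through the capture stage and drops after T ≈ 100, in agreement with the three stages described above.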


It yields an equation for ψ:
\[
i\delta^4\psi' + \bigl(\zeta - |\psi|^2\bigr)\psi + i\delta^3\psi = \delta^3 f. \tag{6.180}
\]
Let us construct an asymptotic solution of eq. (6.180) in the domain (f² − ζ)δ^{−1} ≫ 1, ζ > 0. To this end we introduce new unknown functions ρ(ζ,δ) and α(ζ,δ), related to the amplitude R and phase φ of the unknown function ψ by
\[
R(\zeta,\delta) = \sqrt{\zeta} + \delta^3\rho(\zeta,\delta), \qquad \varphi(\zeta,\delta) = \alpha(\zeta,\delta).
\]
On substituting these formulas into eq. (6.180) and separating the real and imaginary parts of the equation we get
\[
\delta^4\rho' + \sqrt{\zeta} + f\sin\alpha + \frac{\delta}{2\sqrt{\zeta}} + \delta^3\rho = 0,
\qquad
\delta\bigl(\sqrt{\zeta} + \delta^3\rho\bigr)\alpha' + 2\zeta\rho - f\cos\alpha + 3\delta^3\sqrt{\zeta}\,\rho^2 + \delta^6\rho^3 = 0. \tag{6.181}
\]
Assuming δ to be small, we look for a solution ρ, α in the form of asymptotic series
\[
\rho(\zeta,\delta) \sim \sum_{k=0}^{\infty}\delta^k\rho_k(\zeta), \qquad
\alpha(\zeta,\delta) \sim \sum_{k=0}^{\infty}\delta^k\alpha_k(\zeta). \tag{6.182}
\]
We first derive equations for the coefficients of these asymptotic series. For this purpose we substitute eq. (6.182) into eqs. (6.181). The trigonometric functions in these equations are expanded as Taylor series in a neighbourhood of the point α₀. Then we equate the coefficients of the same powers of the parameter δ. As a result we get a recurrent sequence of triangular systems of linear equations for the unknown coefficients of eq. (6.182). In particular, for the leading-order terms of the series we obtain
\[
\sqrt{\zeta} + f\sin\alpha_0 = 0, \qquad 2\zeta\rho_0 - f\cos\alpha_0 = 0,
\]
which gives ρ₀ and α₀. On equating the coefficients of δ we arrive at the system
\[
2\alpha_1 f\sqrt{\zeta}\cos\alpha_0 + 1 = 0, \qquad
2\zeta\rho_1 + \sqrt{\zeta}\,\alpha_0' + \alpha_1 f\sin\alpha_0 = 0,
\]


which readily yields α₁ and ρ₁:
\[
\alpha_1 = -\frac{1}{\sqrt{\zeta}\sqrt{f^2-\zeta}}, \qquad
\rho_1 = \frac{1}{2\sqrt{\zeta}}\Bigl(\alpha_1 + \frac{1}{2\sqrt{\zeta}\sqrt{f^2-\zeta}}\Bigr).
\]
On equating the coefficients of δ² we get the system
\[
-2\alpha_2\cos\alpha_0 + \sin\alpha_0\,\alpha_1^2 = 0, \qquad
4\zeta\rho_2 + 2\sqrt{\zeta}\,\alpha_1' + \bigl(\cos\alpha_0\,\alpha_1^2 + 2\sin\alpha_0\,\alpha_2\bigr)f = 0,
\]
implying
\[
\alpha_2 = \frac{\sqrt{\zeta}\,\alpha_1^2}{2\sqrt{f^2-\zeta}}, \qquad
\rho_2 = -\frac{1}{2\sqrt{\zeta}}\bigl(\alpha_1' - \alpha_2\bigr) - \frac{\sqrt{f^2-\zeta}}{4\zeta}\,\alpha_1^2.
\]
On equating the coefficients of δ³ one still obtains a transparent system for the two unknown functions α₃, ρ₃:
\[
f\cos\alpha_0\,\alpha_3 = -\rho_0 + \frac{f}{6}\bigl(\cos\alpha_0\,\alpha_1^3 + 6\sin\alpha_0\,\alpha_2\alpha_1\bigr),
\]
\[
2\sqrt{\zeta}\,\rho_3 - \alpha_3 = -\alpha_2' - \rho_0^2 - \frac{f\cos\alpha_0}{\zeta}\,\rho_0 - \frac{f\cos\alpha_0}{\sqrt{\zeta}}\,\alpha_2\alpha_1 - \frac{\alpha_1^3}{6},
\]
whose solution is
\[
\alpha_3 = \frac{1}{6}\Bigl(\alpha_1^3 - \frac{3}{\zeta}\Bigr) - \frac{\sqrt{\zeta}}{\sqrt{f^2-\zeta}}\,\alpha_1\alpha_2,
\]
\[
\rho_3 = -\frac{\sqrt{f^2-\zeta}}{2\zeta}\,\alpha_1\alpha_2 + \frac{1}{2\sqrt{\zeta}}\bigl(\alpha_3 - \alpha_2'\bigr) - \frac{\alpha_1^3}{12\sqrt{\zeta}} - \frac{3(f^2-\zeta)}{8\zeta^2\sqrt{\zeta}},
\]
and so on. Careful analysis of the formulas for α_k and ρ_k obtained in this way shows that α_k = O((f² − ζ)^{(1−2k)/2}) and ρ_k = O((f² − ζ)^{(1−2k)/2}) as ζ → f² − 0. From these equalities it follows that the constructed asymptotic expansion is valid for δ(f² − ζ)^{−1} ≪ 1.


6.7.3 Stability of Autoresonant Growth

We will look for a solution in the form of a partial sum of the asymptotic series constructed above, with remainders ρ̃(ξ, ζ, δ) and α̃(ξ, ζ, δ). Namely, we consider
\[
\psi(\zeta,\delta) = \bigl(\sqrt{\zeta} + \rho_1(\zeta)\delta^3 + \tilde\rho\,\delta^3\bigr)\exp\bigl(i\bigl(\alpha_0(\zeta) + \alpha_1(\zeta)\delta + \tilde\alpha\,\delta^2\bigr)\bigr), \tag{6.183}
\]
where 0 < δ ≪ 1 and ξ = δ^{−3}ζ is the fast variable. We substitute eq. (6.183) into eq. (6.180). The task is now to write down the linear part of the system for ρ̃ and α̃. An easy computation yields the system of equations
\[
\tilde\rho'_\xi = \Bigl(\sqrt{f^2-\zeta}\,\delta - \frac{\delta^2}{2\sqrt{f^2-\zeta}}\Bigr)\tilde\alpha + f_1(\zeta) + O(\delta^3),
\qquad
\tilde\alpha'_\xi = -2\sqrt{\zeta}\,\tilde\rho - \delta\tilde\alpha + f_2(\zeta) + O(\delta^2). \tag{6.184}
\]
The right-hand side of the system has slowly varying coefficients in the fast variable ξ. Solutions of such systems are usually constructed by the WKB method; see for instance [158]. The eigenvalues of the matrix on the right-hand side of eq. (6.184) are
\[
\lambda_{1,2} = \pm i\sqrt{2}\,\sqrt[4]{(f^2-\zeta)\,\zeta}\;\delta^{1/2} - \frac{\delta}{2} + O(\delta^{3/2}),
\]
and so the real part of the eigenvalues is negative. Hence it follows that the asymptotic solution constructed earlier is stable in the linear approximation. Figure 6.3 illustrates this result.

Figure 6.3. The graph displays the exponential decay of oscillations with zero initial data in a neighbourhood of the slowly varying asymptotic solution (bold curve), with f = 1 and δ = 0.05. Axes: Re Ψ and Im Ψ.


6.7.4 Vicinity of the Break of Autoresonant Growth

We change the variables by ζ = f² − δη. The new independent variable η is stretched with respect to ζ. We will look for a solution of the form
\[
\rho(\zeta,\delta) = f + \delta\,r(\eta,\delta), \qquad
\alpha(\zeta,\delta) = \frac{3\pi}{2} + \sqrt{\delta}\,a(\eta,\delta).
\]
This substitution leads to the system for the two unknown functions r(η, δ) and a(η, δ):
\[
r' = -r + f\,\frac{\cos(\sqrt{\delta}\,a) - 1}{\delta},
\qquad
a' = \delta^{-5/2}\bigl(\eta - 2fr - \delta r^2\bigr) - \delta^{1/2}\,\frac{f\sin(\sqrt{\delta}\,a)}{f + \delta r}.
\]
This system can be rewritten in a slightly different form:
\[
f a^2 = 2\Bigl(-r' - r - f\,\frac{\cos(\sqrt{\delta}\,a) - 1 + \delta a^2/2}{\delta}\Bigr),
\qquad
2fr = \eta - \delta r^2 - \delta^{5/2}\Bigl(a' - \delta^{-1/2}\,\frac{f\sin(\sqrt{\delta}\,a)}{f + \delta r}\Bigr).
\]
To find the asymptotics we substitute formal series in powers of √δ for a(η, δ) and r(η, δ), namely
\[
r(\eta,\delta) = \sum_{k=0}^{\infty} r_k(\eta)\,\delta^{k/2}, \qquad
a(\eta,\delta) = \sum_{k=0}^{\infty} a_k(\eta)\,\delta^{k/2}.
\]
On substituting these series into the system we expand both the left-hand side and the right-hand side of the equalities as formal series in powers of √δ. Then we equate the coefficients of the same powers of √δ in both series. As a result we arrive at a recurrent system of equations for determining the coefficients of the formal series. For k = 0 it reads
\[
2fr_0 = \eta, \qquad f a_0^2 = -1 - \eta.
\]
For k = 1 we get r₁ = 0 and a₁ = 0. For k = 2 the system is
\[
2fr_2 = -r_0^2, \qquad f a_0 a_2 = -r_2' - r_2 + f\,\frac{a_0^4}{24},
\]


implying
\[
r_2 = -\frac{\eta^2}{(2f)^3}, \qquad
a_2 = -\frac{2\eta^2 + 4\eta - 1}{24f^3(\eta + 1)}\,\sqrt{-1-\eta},
\]
and so on. The formulas for the coefficients r_k and a_k are cumbersome. However, using the recurrence relations one can see that the coefficients have a singularity at η = −1; the greater k, the stronger the singularity. This is caused by differentiation of the square root √(−1−η) and by the increasing nonlinear dependence on the lower-order terms of the asymptotics at each step of the iteration. More precisely, we get
\[
a_k = O\bigl((-1-\eta)^{(1-k)/2}\bigr), \qquad r_k = O\bigl((-1-\eta)^{(4-k)/2}\bigr)
\]
as η → −1, provided k − 4 ∈ ℕ. Hence it follows that the constructed series is asymptotic for δ(−1−η)^{−1} ≪ 1.

6.8 Break of Autoresonant Growth

In a neighbourhood of the point η = −1 we change the variables by the formula η = −1 + σδ. The new independent variable σ is fast with respect to the original variable η. The solution of the primary resonance equation is written in the form
\[
a = \delta^{1/2}\,u(\sigma,\delta), \qquad
r = -\frac{1}{2f} + \frac{4f^2\sigma - 1}{8f^3}\,\delta + \delta^2\,v(\sigma,\delta).
\]
Substituting these formulas into the system of equations for a and r, we immediately obtain
\[
v' - \frac{1}{\delta^2}\bigl(f\cos(\delta u) - f\bigr) + \frac{4f^2\sigma - 1}{8f^3} + \delta v = 0,
\]
\[
u' + 2fv - \frac{4f^2\sigma - 1}{8f^4}
+ \delta\,\frac{f\sin(\delta u)}{f + \delta\Bigl(-\dfrac{1}{2f} + \dfrac{(4f^2\sigma - 1)\delta}{8f^3} + \delta^2 v\Bigr)}
- \delta\Bigl(\frac{v}{f} - \frac{(4f^2\sigma - 1)^2}{64f^6}\Bigr)
+ \delta^2\,\frac{4f^2\sigma - 1}{4f^3}\,v + \delta^3 v^2 = 0.
\]


We will look for a formal solution of this system in the form of power series in δ:
\[
u(\sigma,\delta) = \sum_{k=0}^{\infty} u_k(\sigma)\,\delta^k, \qquad
v(\sigma,\delta) = \sum_{k=0}^{\infty} v_k(\sigma)\,\delta^k. \tag{6.185}
\]
Substituting these series into the equations and expanding the left-hand sides as power series in δ, we equate the coefficients of the same powers of δ. This leads to a recurrent sequence of differential equations for u_k and v_k. In particular, for u₀ and v₀ we get
\[
u_0' + 2fv_0 - \frac{1}{2f^2}\Bigl(\sigma - \frac{1}{4f^2}\Bigr) = 0,
\qquad
v_0' + \frac{f}{2}u_0^2 + \frac{1}{2f}\Bigl(\sigma - \frac{1}{4f^2}\Bigr) = 0.
\]
For u₁ and v₁ the system looks like
\[
u_1' + 2fv_1 = \frac{1}{f}v_0 - \frac{1}{4f^2}\Bigl(\sigma - \frac{1}{4f^2}\Bigr)^2,
\qquad
v_1' + f u_0 u_1 = -v_0.
\]
For u₂ and v₂ the system is
\[
u_2' + 2fv_2 = \frac{1}{f}v_1 + \frac{1}{4f^3}\bigl(1 - 4f^2\sigma\bigr)v_0 - u_0,
\qquad
v_2' + f u_0 u_2 = -v_1 + \frac{f}{24}u_0^4 - \frac{f}{2}u_1^2,
\]

and so on. The system of equations for the leading-order terms reduces to the Painlevé-1 equation. To see this, let
\[
u_0 = \Bigl(\frac{6}{f^2}\Bigr)^{3/5} y(z, c_1, c_2), \qquad
v_0 = \frac{1}{4f^3}\Bigl(\sigma - \frac{1}{4f^2}\Bigr) - \frac{1}{2f}\Bigl(\frac{6}{f^2}\Bigr)^{2/5}\frac{d}{dz}\,y(z, c_1, c_2), \qquad
\sigma + \frac{1}{4f^2} = \Bigl(\frac{6}{f^2}\Bigr)^{1/5} z.
\]


Then differentiation of the equation for u0 leads, by the second equation, to the Painlevé-1 equation in the standard form y″ = 6y² + z. Here y = y(z, c1, c2) is the first Painlevé transcendental, c1 and c2 are real parameters of the transcendental, which are monodromy data, cf. [79], and z is an independent variable. The solution of the system of equations for the leading-order term is thus expressed through the first Painlevé transcendental. The parameters of the transcendental are defined by making the asymptotic expansions consistent. To this end we re-expand asymptotic series (6.185) in terms of the variable τ and equate the coefficients of the same powers of ε. Then we get the asymptotics of the coefficients for τ → −∞, namely

u0 = −(1/f)√(−τ) + O(1/√(−τ)),
v0 = τ/(4f³) − 1/(16f⁵) + O(1/√(−τ));

u1 = O(1/√(−τ)),
v1 = τ²/(8f³) + O(τ);

and

u2 = (−τ)^{3/2}/(12f³) + O(√(−τ)),
v2 = 3τ²/(16f⁵) + O(τ).

The asymptotics of the kth correction is

u_{2n} = O((−τ)^{(2n+1)/2}),  u_{2n+1} = O((−τ)^{(2n−1)/2}),
v_{2n} = O(τ^{n+1}),  v_{2n+1} = O(τ^{n+2})

as τ → −∞. The solutions u0 and v0 with the given asymptotics as τ → −∞ can be expressed through the first Painlevé transcendental. The asymptotics of the first Painlevé transcendental were investigated in Refs. [63, 79]. Here it is convenient to make use of the connection between the asymptotics and the monodromy data:

u0 = (6/f²)^{3/5} y(z, c1, c2) |_{c1=0, c2=0},

see Ref. [79]. Starting with the formula for u0, one easily obtains an expression for v0(τ) from the first equation of the system for u0 and v0.
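The pole structure of the Painlevé-1 transcendental, which below terminates the validity of the expansion, is easy to observe numerically. The sketch below (plain Python with a hand-rolled RK4; the initial data y(0) = y′(0) = 0 are an arbitrary illustrative choice and do not single out the special solution y1(z, 0, 0), whose first real pole the text quotes as z0 ∼ 2.38) integrates y″ = 6y² + z until the solution blows up:

```python
def p1_step(z, y, p, h):
    """One classical RK4 step for the Painleve-1 equation y'' = 6 y^2 + z."""
    def f(z, y, p):
        return p, 6.0 * y * y + z
    k1y, k1p = f(z, y, p)
    k2y, k2p = f(z + h / 2, y + h / 2 * k1y, p + h / 2 * k1p)
    k3y, k3p = f(z + h / 2, y + h / 2 * k2y, p + h / 2 * k2p)
    k4y, k4p = f(z + h, y + h * k3y, p + h * k3p)
    y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return z + h, y, p

def first_pole(z, y, p, h=1e-3, z_max=15.0, blowup=1e6):
    """Integrate until |y| exceeds `blowup`; the returned z roughly
    locates a (second-order) pole of the solution."""
    while z < z_max and abs(y) < blowup:
        z, y, p = p1_step(z, y, p, h)
    return z

# every real solution of Painleve-1 develops a pole at finite z:
z_pole = first_pole(0.0, 0.0, 0.0)
```

For z > 0 one has y″ = 6y² + z > 0, so the solution is eventually convex and escapes to +∞ at a finite point; the crude fixed-step detection above is enough to see this.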


We now turn to the construction of the solutions u_k and v_k. The corresponding homogeneous system is

U′ + 2f V = 0,
V′ + f u0 U = 0.   (6.186)

Set U(τ) = (6/f²)^{3/5} w(z). On differentiating the first equation and substituting V′ into the second equation we arrive at the linearized Painlevé-1 equation w″ = 12 y w. The general solution of this equation is known to be a linear combination of the partial derivatives of the first Painlevé transcendental with respect to the parameters, that is, w = A1 ∂_{c1} y(z, c1, c2) + A2 ∂_{c2} y(z, c1, c2), where A1 and A2 are arbitrary constants. The asymptotics of y(z, c1, c2) as z → −∞ implies

∂_{c1} y(z, c1, c2) |_{c1=0, c2=0} = O(z^{−5/8}),
∂_{c2} y(z, c1, c2) |_{c1=0, c2=0} = O(z^{3/8}),

see Ref. [79]. The formulas for the corrections u_k and v_k can now be obtained by the method of variation of constants:

(u_k, v_k)ᵀ = I(τ)(A_k, B_k)ᵀ + I(τ) ∫_a^τ I(τ′)^{−1} (f_k, g_k)ᵀ dτ′.

Here I(τ) stands for the fundamental matrix of linearized system (6.186) with Wronskian equal to 1, and (f_k, g_k) is the right-hand side of the system for u_k and v_k. By a is meant an arbitrary real constant satisfying a < τ0 := (6/f²)^{1/5} z0 − 1/(4f²), where z0 ∼ 2.38 is the least real pole of the Painlevé transcendental. The constants A_k and B_k are uniquely determined from the matching condition for asymptotic solutions. The first Painlevé transcendental has second-order poles on the real axis, see Ref. [79]. In a neighbourhood of τ0 the constructed asymptotic expansion no longer holds. Indeed, we get

u0 ∼ 6/(f²(τ − τ0)²),
v0 ∼ 6/(f³(τ − τ0)³)

for τ close to τ0. The general solution for the first corrections u1 and v1 can be represented in the form

u1 = a1/(τ − τ0)³ + 3/(f⁴(τ − τ0)²) + O((τ − τ0)^{−1}),
v1 = 3a1/(2f(τ − τ0)⁴) + 6/(f⁵(τ − τ0)³) + O((τ − τ0)^{−2}).

Here a1 is one of the solution parameters. The second independent parameter is contained in the smooth part of the remainder of the asymptotics. The parameters of the solution are uniquely determined when one constructs it by the method of variation of constants. However, in the expansion in a neighbourhood of the pole τ0 the parameter a1 can be absorbed into a shift of the pole of the leading-order term of order ε, namely τ1 = τ0 − εf²a1/12. Computations show that a1 ∼ −38.25. As a result the value τ0 in the expansion of the leading-order terms should be replaced by τ1, and the expansions for u1 and v1 become

u1 = 3/(f⁴(τ − τ1)²) + O((τ − τ1)^{−1}),
v1 = 6/(f⁵(τ − τ1)³) + O((τ − τ1)^{−2}).

Thus, the pole of the leading-order term of the asymptotics is defined uniquely up to O(ε²). More precisely, the pole asymptotics of the perturbed problem is determined by singling out summands of order (τ − τ1)^{−3} in the asymptotics of u_k for k > 1. The order of the singularity at the point τ = τ1 increases, for the higher-order corrections depend on the lower-order corrections in a nonlinear way. For u2 and v2 we have

u2 ∼ −18/(5f⁶(τ − τ1)⁶),
v2 ∼ −54/(5f⁷(τ − τ1)⁷).

One can show that

u_{2n−1} = O((τ − τ1)^{−2n}),  u_{2n} = O((τ − τ1)^{−4n−2}),
v_{2n−1} = O((τ − τ1)^{−2n−1}),  v_{2n} = O((τ − τ1)^{−4n−3})

as τ → τ1.


From the behaviour of u_k and v_k in a neighbourhood of the singular point we deduce that the constructed asymptotics is valid in the domain √ε/|τ − τ1| ≪ 1.

6.8.1 Fast Motion

In a neighbourhood of the singular point τ1 the behaviour of the solution changes drastically: the solution begins to vary faster. The new scale of the independent variable is now

ξ = (τ − τ1)/√ε.

One introduces new dependent variables p(ξ, ε) and s(ξ, ε) by

ρ = f − ε/(2f) + ε√ε p(ξ, ε),
ψ = (3/2)π + s(ξ, ε).   (6.187)

The genuine independent variable θ is related to the new independent variable ξ by the formula θ = f² − ε + ε²τ1 + ε^{5/2} ξ. Substituting the expressions for ρ, ψ and θ into original system (6.181) yields a system of equations for p(ξ, ε) and s(ξ, ε). This system is cumbersome and we need not write it here in explicit form. Using the standard procedure of perturbation theory we look for the leading-order term of the asymptotics in ε in the form p(ξ, ε) ∼ p0(ξ), s(ξ, ε) ∼ s0(ξ). For p0(ξ) and s0(ξ) we obtain the system

p0′ + f(1 − cos s0) = 0,
s0′ + 2f p0 = 0.

This system admits the conservation law E0 = p0² + (sin s0 − s0).
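The conservation law is immediate to check by hand: dE0/dξ = 2p0p0′ + (cos s0 − 1)s0′ = −2fp0(1 − cos s0) + 2fp0(1 − cos s0) = 0. It can also be confirmed numerically; a minimal sketch (plain Python RK4; the value f = 1 and the initial data are illustrative and are not taken from the matching conditions):

```python
import math

F = 1.0  # illustrative value of the parameter f

def rhs(s, p):
    # s0' = -2 f p0,  p0' = -f (1 - cos s0)
    return -2.0 * F * p, -F * (1.0 - math.cos(s))

def energy(s, p):
    # conserved quantity E0 = p0^2 + (sin s0 - s0)
    return p * p + math.sin(s) - s

def integrate(s, p, h=1e-4, steps=60000):
    for _ in range(steps):
        k1s, k1p = rhs(s, p)
        k2s, k2p = rhs(s + h / 2 * k1s, p + h / 2 * k1p)
        k3s, k3p = rhs(s + h / 2 * k2s, p + h / 2 * k2p)
        k4s, k4p = rhs(s + h * k3s, p + h * k3p)
        s += h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return s, p

s_init, p_init = -1.0, 0.0
E_start = energy(s_init, p_init)
s_end, p_end = integrate(s_init, p_init)
E_end = energy(s_end, p_end)
```

Along the trajectory p decreases and s grows without bound, which is exactly the loss of phase locking described in the text.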


Making (6.187) and the asymptotics in a neighbourhood of the pole consistent gives a condition for p0 and s0, namely both E0 and s0 vanish as ξ → −∞. The solution of the system for p0 and s0 varies rapidly, and s0 grows without bound. The variable s0 is in fact the argument of the solution in complex form,

(f − ε/(2f) + ε√ε p(ξ, ε)) exp(i((3/2)π + s(ξ, ε))).

Thus, in this mode the phase-locking condition fails to hold and the solution is not autoresonant.

6.8.2 Formal Approach to the Answer

To formulate the results it is convenient to change both the independent and dependent variables by

θ = ε²T,  εJ(T) = Ψ(θ, ε).   (6.188)

The equation for Ψ takes the form

iε⁴Ψ′ + (θ − |Ψ|²)Ψ + iε³Ψ = ε³f.   (6.189)

Denote R = |Ψ| and φ = arg Ψ for θ > 0 and 0 < ε ≪ 1. In the sequel we discuss the primary resonance equation in the form (6.180) and construct asymptotics of the solution to this equation for finite values of the parameter θ. The existence time for the autoresonant mode in the solution of eq. (6.177) is evaluated by

θ∗ = f² − ε + ε²((6/f²)^{1/5} z0 − 1/(4f²)) + O(ε^{5/2}),

where z0 ∼ 2.38 is the first real pole of the Painlevé-1 transcendental with zero monodromy data, y1(z, 0, 0). Furthermore, the maximal amplitude is estimated by

R∗ ∼ f − ε/(2f) + ε²((1/(2f))(6/f²)^{1/5} z0 − 1/(4f³)) + O(ε^{5/2}).

The comparison of these asymptotics with numerical results is given in Figure 6.4. We are now able to give an explicit description of the asymptotics of the autoresonant solution, which is an attracting set for the solutions captured into autoresonance.


Figure 6.4. The modulus of the numerical solution to eq. (6.180) with parameters f = 1, ε = 0.1 and the initial condition Ψ|_{θ=0} = 0, together with the predicted maximum R∗ and the predicted time of existence θ∗.

If (f² − θ)ε^{−1} ≫ 1, then R and φ behave like

R(θ, ε) ∼ √θ + ε³ √(f² − θ)/(2θ) − ε⁴/(2θ√(f² − θ)) + ε⁵ (f² − 4θ)/(16θ²(f² − θ)^{3/2})
    + ε⁶ [(θ − 3f²)(θ − f²)³ + θ^{3/2}√(f² − θ)]/(8θ^{5/2}(f² − θ)³),   (6.190)

φ(θ, ε) ∼ −arctan(√θ/√(f² − θ)) + ε/(2√θ√(f² − θ)) + ε²/(8√θ(f² − θ)^{3/2})
    + ε³ [(f² + 2θ)√(f² − θ) − 24√θ(θ − f²)³]/(48(θ − f²)³θ^{3/2}).   (6.191)

To write the asymptotics in a neighbourhood of θ = f², we change the variables by η = (θ − f²)ε^{−1}, r(η, ε) = (R − f)ε^{−1}, a(η, ε) = (φ − (3/2)π)ε^{−1/2}. The functions r(η, ε) and a(η, ε) have the form

r ∼ η/(2f) − ε η²/(8f³) + ε² η³(2η + 3)/(16f⁵) − ε^{5/2}/(4f²√(−η − 1)),
a ∼ √(−η − 1)/f − ε (2η² + 4η − 1)/(24f³√(−η − 1)),

the representations being valid if ε(−1 − η)^{−1} ≪ 1.


Close to η = −1 it is convenient to represent the asymptotics in the form

τ = (η + 1)/ε,
a = ε^{1/2} u(τ, ε),
r = −1/(2f) + ε (4f²τ − 1)/(8f³) + ε² v(τ, ε).

In this domain the autoresonant mode of the solution loses its stability. The leading-order term of u relative to ε admits the representation

u(τ) ∼ (6/f²)^{3/5} y1(z, 0, 0),
τ = (6/f²)^{1/5} z − 1/(4f²),

where y1(z, 0, 0) is the Painlevé-1 transcendental, see [79], that is, a special solution of the Painlevé-1 equation y1″ = 6y1² + z with asymptotics

y1(z) = √(−z/6) + O((−z)^{−1/2}).

The asymptotic formula for v looks like v(τ) ∼ (τ − u′)/(2f)

as ε → 0. The Painlevé-1 transcendental y1(z, 0, 0) has poles on the real axis. The approximate solution of eq. (6.177) by means of y1(z, 0, 0) is valid up to a small neighbourhood of the first of these poles, z0 ∼ 2.38, or, what is the same, up to τ = τ0 := (6/f²)^{1/5} z0 − 1/(4f²). Near the pole the validity domain is determined by the inequality √ε/(τ − τ0) ≪ 1. The asymptotics of the solution of eq. (6.177) in a neighbourhood of the pole is represented by fast non-autoresonant oscillations in the new scale of variable θ = f² − ε + ε²(τ0 + 3.27ε) + ε^{5/2} ξ. It is convenient to write the unknown functions in the form

R ∼ f − ε/(2f) + ε√ε p(ξ),
φ ∼ (3/2)π + s(ξ).


The function s(ξ) is a special solution of the equation

s′ = 2f √(E0 + s − sin s),

such that s → −0 as ξ → −∞. The function p(ξ) is determined from the equation p = −s′/(2f). Note that the function s(ξ) depends on the parameter E0 = p² + (sin s − s), which tends to 0 as ξ → −∞. The results obtained are formulated for those solutions that are captured into autoresonance. In general, the problem of separating the domains of initial data for the solutions which are captured into autoresonance from those which are not remains open. However, numerical simulations show that for T = 0 there is a disk of finite radius in a neighbourhood of the origin from which all solutions are captured into autoresonance, cf. Ref. [77]. In the limit case ε → 0 the form (6.180) of eq. (6.177) no longer makes sense. However, after the inverse transformations T = θ/ε² and J = ε^{−1}Ψ the asymptotic formulas (6.190) and (6.191) give a familiar asymptotics [77] for the solution as T → ∞. The important role of Painlevé's transcendentals in the passage from slow changes to fast oscillations in the solutions of second-order equations with slowly varying coefficients was first observed in Ref. [60]. A detailed study of the rearrangement from slowly varying modes to fast oscillations for the primary resonance equation without dissipation was given in Refs. [91, 92]. In those works the rearrangements are ultimately caused by the non-autonomy of the primary resonance equation. In this section we deal with rearrangements that are caused by the presence of dissipation in the system. However, in this case as well, the behaviour of the solution in a neighbourhood of the rearrangement is completely determined by Painlevé's transcendentals. We now compare the asymptotics obtained with the numerical solution. For clarity we plot the amplitude of the numerical solution with zero initial data at θ = 0 for ε = 0.1 together with the amplitude of the asymptotic solution. The asymptotic solution is composite. We show, for example, the region of passage to fast oscillations in Figure 6.5. On different intervals of θ the mentioned asymptotics approximates the solution with different precision. Hence it is reasonable to consider the difference between the numerical solution and the asymptotics on the corresponding intervals. The graphs given in Figure 6.6 demonstrate rather strikingly that the difference between the numerical and asymptotic solutions increases in a neighbourhood of the passage to fast oscillations. This is explained by the fact that the correction to the leading-order term in the asymptotic solution for fast oscillations has a lower order in ε, namely ε².


Figure 6.5. The interval of matching of the asymptotics in the passage from the Painlevé layer to the layer of fast oscillations. The numerical solution of eq. (6.180) corresponds to f = 1, ε = 0.1 and Ψ|_{θ=0} = 0.

6.9 Open Problems

The bifurcation of the slowly varying equilibrium of the Painlevé-2 equation was studied above by the matching method within a formal asymptotic approach. However, it is necessary to note two important problems that remain outside our analysis. 1. The phase shift of the oscillating asymptotic solution is left undetermined in our approach. Its determination demands much more delicate calculations of the corrections to the asymptotic formulas. 2. The problem of justifying the remainder estimate for the constructed asymptotic solution remains open as well.

6.9.1 Hierarchy of Equations in Transition and Painlevé Equations

Above we have considered three examples of loss of stability in nonlinear equations in general position. One can see that the loss of stability involves the appearance of a thin transitional layer with a complicated and yet simple internal hierarchy. The first step of the hierarchy is defined by the Painlevé-1 transcendent. Near the poles of the transcendent the solution is approximated by a special solution of an autonomous equation with cubic nonlinearity. Then the Painlevé-1 layer transforms into a Weierstrass layer, where the primary role belongs to the Weierstrass ℘-function. At last the thin transitional layer transforms into a wide layer with fast nonlinear oscillations. So we obtain a full asymptotic description of the saddle-centre dynamical bifurcation. For confluent cases more sophisticated bifurcations exist, and in this work a connection was suggested between higher-order bifurcations and the six Painlevé equations.


Figure 6.6. The absolute value of the difference between the modulus of the numerical solution to eq. (6.180) for f = 1, ε = 0.1, Ψ|_{θ=0} = 0 and the modulus of the asymptotic solution on the distinctive intervals (outer layer, inner layer, Painlevé layer, fast-oscillation layer). All the intervals are pairwise disjoint.

The six Painlevé equations, with parameters a, b, c and d, are:

P1: w_xx = 6w² + x,
P2: w_xx = 2w³ + xw + a,
P3: w_xx = w_x²/w − w_x/x + (aw² + b)/x + cw³ + d/w,
P4: w_xx = w_x²/(2w) + 3w³/2 + 4xw² + 2(x² − a)w + b/w,
P5: w_xx = (1/(2w) + 1/(w − 1)) w_x² − w_x/x + (w − 1)²(aw + b/w)/x² + cw/x + dw(w + 1)/(w − 1),
P6: w_xx = (1/2)(1/w + 1/(w − 1) + 1/(w − x)) w_x² − (1/x + 1/(x − 1) + 1/(w − x)) w_x
      + [w(w − 1)(w − x)/(x²(x − 1)²)] [a + bx/w² + c(x − 1)/(w − 1)² + dx(x − 1)/(w − x)²].


The first five equations can be obtained from the sixth by the following passages to the limit [66]:
– substituting in P6 1 + εx for x, d/ε² for d and c/ε − d/ε² for c, and letting ε tend to zero, gives equation P5;
– in equation P5 the substitution of 1 + εw for w, −b/ε² for b, b/ε² + a/ε for a, c/ε for c and d/ε for d and the passage to the limit ε → 0 gives equation P3;
– in the same fifth Painlevé equation the substitution of εw√2 for w, 1 + εx√2 for x, 1/(2ε⁴) for a, −1/ε⁴ for c and −1/(2ε⁴) + d/ε for d yields, in the limit, equation P4;
– in turn, the substitution in equation P3 of 1 + ε²x for x, 1 + 2εw for w, 1/(4ε⁶) for c, −1/(2ε⁶) for a and 1/(2ε⁶) + 2b/ε³ for b gives equation P2 in the limit;
– equation P2 can also be obtained by a passage to the limit from equation P4, using the substitution of 2^{−1/3}εx − 1/ε³ for x, 2^{2/3}εw + 1/ε³ for w, −1/(2ε⁶) − a for a and −1/(2ε^{12}) for b;
– and, at last, P1 is obtained as a limit from equation P2 under the substitution of ε²x − 6/ε^{10} for x, εw + 1/ε⁵ for w and 4/ε^{15} for a.
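The last passage to the limit can be verified by direct substitution: with x ↦ ε²x − 6/ε^{10}, w ↦ εw + 1/ε⁵ and a = 4/ε^{15}, the right-hand side of P2 is identically equal to ε^{−3}(6w² + x) + ε³(2w³ + xw), so after division by ε^{−3} (which also rescales w_xx) only the P1 right-hand side survives as ε → 0. A small numerical check of this identity (plain Python; ε, W, X are arbitrary test values):

```python
eps = 0.5
W, X = 0.3, 0.7  # arbitrary test point

# substitution from the text
w = eps * W + eps**-5
x = eps**2 * X - 6 * eps**-10
a = 4 * eps**-15

p2 = 2 * w**3 + x * w + a                  # right-hand side of P2
p1 = 6 * W**2 + X                          # right-hand side of P1
remainder = eps**3 * (2 * W**3 + X * W)    # relatively O(eps^6) correction

mismatch = abs(p2 - (eps**-3 * p1 + remainder))
```

The large terms of orders ε^{−15} and ε^{−10} cancel exactly, which is the mechanism of the confluence.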

Thus, the Painlevé equations form the following hierarchy:

P6 → P5 → P4
      ↓     ↓
      P3 → P2 → P1

The Hamiltonians H1–H6 of the Painlevé equations and the linear ordinary differential equations of the second order L1–L6 in an auxiliary parameter λ, found in Refs. [48, 52], form analogous hierarchies [131]; they permit one to investigate the asymptotic behaviour of the equations PJ with the help of the monodromy-preserving deformation method.


In the present work we call attention to the natural and useful relation of the Painlevé equations to the analogous hierarchy of degenerate equations P6⁰ → P5⁰ → P4⁰, P3⁰ → P2⁰ → P1⁰. The following algebraic equations are obtained from equations P1–P6 by neglecting the terms containing derivatives:

P1⁰: 6w² + x = 0,
P2⁰: 2w³ + xw + a = 0,
P3⁰: (aw² + b)/x + cw³ + d/w = 0,
P4⁰: 3w³/2 + 4xw² + 2(x² − a)w + b/w = 0,
P5⁰: (w − 1)²(aw + b/w)/x² + cw/x + dw(w + 1)/(w − 1) = 0,
P6⁰: [w(w − 1)(w − x)/(x²(x − 1)²)][a + bx/w² + c(x − 1)/(w − 1)² + dx(x − 1)/(w − x)²] = 0.

The leftmost arrow of the hierarchy corresponds to a confluence of solutions of equation P6⁰, from which equation P5⁰ results by a passage to the limit. The derivatives of the solutions of this hierarchy of algebraic equations tend to infinity at the points where the equations have multiple roots. Therefore the neglect of the terms with derivatives becomes invalid in small neighbourhoods of such points in the description of the Painlevé transcendents assigned at leading order to the corresponding solutions. Precisely for this reason a hierarchy of degenerations of the Painlevé transcendents arises in small neighbourhoods of such points. Some formal steps in this direction were done in Ref. Uniform asymptotics for this hierarchy are interesting for both mathematics and applications.
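For P2⁰ the points with multiple roots, where the neglect of derivatives breaks down, can be located explicitly: 2w³ + xw + a = 0 is the depressed cubic w³ + (x/2)w + a/2 = 0, whose discriminant −4(x/2)³ − 27(a/2)² vanishes at x* = −(27a²/2)^{1/3}, with double root w* = (a/4)^{1/3}. A small numerical confirmation (plain Python; a = 1 is an illustrative parameter value):

```python
def p2_0(w, x, a):
    # degenerate equation P2^0: 2 w^3 + x w + a
    return 2 * w**3 + x * w + a

def disc(x, a):
    # discriminant of the depressed cubic w^3 + (x/2) w + (a/2)
    p, q = x / 2.0, a / 2.0
    return -4 * p**3 - 27 * q**2

a = 1.0
# the discriminant is positive to the left of the multiple-root point
# and negative to the right; bisection locates the sign change
lo, hi = -10.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if disc(mid, a) > 0:
        lo = mid
    else:
        hi = mid
x_found = 0.5 * (lo + hi)

x_star = -(27 * a**2 / 2.0) ** (1.0 / 3.0)  # predicted multiple-root point
w_star = (a / 4.0) ** (1.0 / 3.0)           # the double root itself
```

At x = x* both P2⁰ and its w-derivative 6w² + x vanish at w = w*, which is exactly the situation where dw/dx of the algebraic root blows up.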

7 Systems of coupled oscillators

In this chapter we present our results related to systems of coupled oscillators. The main aims of the chapter are:
– to study a system of two weakly coupled oscillators and show that an external periodic perturbation can lead to capture into resonance;
– to study a resonantly perturbed system of coupled nonlinear oscillators with small dissipation and an external periodic perturbation, and to show that for large time t ∼ ε^{−2} one component of the system is described for the most part by the inhomogeneous Mathieu equation, while the other component represents a pulsation of large amplitude.

7.1 The Autoresonance Threshold in a System of Weakly Coupled Oscillators

A system of nonlinear oscillators is often used as a standard model for oscillating processes. The oscillators have eigenfrequencies and can interact resonantly with each other or with an external perturbation. When the frequencies are constant and connected by a resonance relation, this leads to growth of the amplitudes of the oscillators. A slow variation of the frequencies can give an essentially new effect. The frequency of the perturbation passes through the resonant value and then evolves near this value; this leads to growth of the solution amplitude. This phenomenon is usually called capture into autoresonance [115, 153]. Later a similar effect was observed in various fields of modern physics [28, 42, 116]. In this section we investigate a system of two weakly coupled nonlinear oscillators under a small perturbation. The eigenfrequencies are constant and relate to each other as 1 : 2. This relation corresponds to parametric resonance. The question is how to obtain a solution of large amplitude by means of a small oscillating perturbation. There is a standard answer: for linear systems one should use a resonant perturbation. To obtain a similar result for nonlinear systems one should use a slowly varying frequency of the perturbation and the effect of autoresonance. Not long ago it was found that this simple recipe is not sufficient for capture into autoresonance: it was observed numerically that there is a threshold value of the perturbation below which capture into resonance does not occur [43, 44]. Later similar results were obtained analytically for a series of one-dimensional autoresonance models [72, 77]. In this section we construct asymptotics of algebraic solutions for a two-dimensional system of primary resonance equations. It is shown that the autoresonant phenomenon appears when the amplitude of the perturbation is greater than a threshold value. This threshold value of the perturbation is found explicitly. DOI 10.1515/9783110335682-007


7.1.1 Statement of the Problem and Result

We consider the system of primary resonance equations:

A′(t) = −i (2tA + (1/2) A*B + f),
B′(t) = −i (4tB + (1/4) A²).   (7.1)

This system is the asymptotic reduction of a system of weakly coupled oscillators perturbed by an external oscillating force. The goal of our study is the behaviour of the solutions of eq. (7.1) as t approaches infinity. It is shown that there are growing and bounded solutions. The bounded solutions are studied in detail. It is found that the periodic perturbation of the system of oscillators leads to capture into resonance. An asymptotic description and numerical simulations of the phenomenon are presented. We obtain an explicit formula for the threshold value of the perturbation: there are solutions related to the autoresonance phenomenon when |f| ≥ 12.

7.1.2 Asymptotic Reduction to the System of Primary Resonance Equations

The system of primary resonance equations is reduced from the system of weakly coupled oscillators

x″ + ω²x = εα1 xy + ε(γ exp{iφ} + c.c.),
y″ + (2ω)²y = εα2 x²,   (7.2)

here φ = (ω + εατ)t, τ = εt, and ε is a small positive parameter. The constants in the system have the following sense: ω = const is the frequency of the oscillator with amplitude x, α1 and α2 are parameters of the nonlinear coupling, γ is the amplitude of the external perturbation, and α is the derivative of the frequency detuning of the external perturbation with respect to the slow time τ. We construct the solution of eq. (7.2) in the complex form:

x = A(τ) exp{iωt} + c.c.,  y = B(τ) exp{2iωt} + c.c.   (7.3)

Substituting eq. (7.3) into eq. (7.2) yields

A′(τ) = −(iα1/(2ω)) A*B − (iγ/(2ω)) exp{iατ²} + iεA″,
B′(τ) = −(iα2/(4ω)) A² + iεB″.

Neglecting the terms of order ε and substituting

A = a(τ) exp{iατ²},  B = b(τ) exp{2iατ²}   (7.4)


we obtain

a′(τ) = −2iατ a − (iα1/(2ω)) a*b − iγ/(2ω),
b′(τ) = −4iατ b − (iα2/(4ω)) a².   (7.5)

Some of the parameters in system (7.5) can be normalized to unity by scaling τ and the amplitudes a and b. Let us make the substitution

a(τ) = μA(t),  b(τ) = νB(t),  τ = λt,

where

ν = ω√α/α1,  μ = ω√(α/(α1α2)),  λ² = 1/α.

It yields system (7.1) with f = √(α1α2) γ/(αω²). Below we analyse this system, which is the asymptotic reduction of system (7.2).

7.1.3 Algebraic Asymptotic Solutions

In this section we construct algebraic asymptotic solutions of system (7.1) in the form of series in integer powers of t as t → ∞.

Theorem 28. As t → ∞ there exists a solution of system (7.1) of the form

A2(t) = −(f/2) t^{−1} + (if/4) t^{−3} + (3f/8 − f³/512) t^{−5} + O(t^{−7}),
B2(t) = −(f²/64) t^{−3} + (7if²/256) t^{−5} + O(t^{−7}).

When |f| ≥ 12 and t → ∞ there exist solutions of eq. (7.1) of the form

A1(t) = −8(cos J + i sin J) t + (f/4) t^{−1} + O(t^{−3}),
B1(t) = −4(cos 2J + i sin 2J) t + (−f/4 − 2i) t^{−1} + O(t^{−3}),

here sin J = 12/f; and

A3(t) = 8(cos J + i sin J) t + (f/4) t^{−1} + O(t^{−3}),
B3(t) = −4(cos 2J + i sin 2J) t + (−cos J [f/4 + 24/f] + 2i[1 + sin²J]) t^{−1} + O(t^{−3}),

here sin J = −12/f.


The general approach to the construction of power asymptotics can be found in Ref. [22]. The proof of the theorem consists in the construction of the asymptotic solution of eq. (7.1) and the use of Kuznetsov's theorem [103]. We construct the asymptotic solution of the form

A(t) = Σ_{k=−1}^{∞} a_k t^{−k},  B(t) = Σ_{k=−1}^{∞} b_k t^{−k}.   (7.6)

Let us substitute representation (7.6) into (7.1) and gather the terms of the same order in t. This gives a recurrent system of equations for the coefficients a_k, b_k of asymptotic expansion (7.6):

2i a_k + (i/2)(a*_k b_{−1} + a*_{−1} b_k) = (k − 2) a_{k−2} − (i/2) Σ_{m,l} a*_m b_l,
4i b_k + (i/2) a_k a_{−1} = (k − 2) b_{k−2} − (i/4) Σ_{m,l} a_m a_l,   (7.7)

here m + l = k − 1 and m, l ≠ k. The relations at t² give

2a_{−1} + (1/2) a*_{−1} b_{−1} = 0,  4b_{−1} + (1/4) a²_{−1} = 0.

The coefficient a_{−1} is a solution of the equation

(1/32)|a_{−1}|² a_{−1} − 2a_{−1} = 0.

It shows that there are two growing solutions, with |a_{−1}|² = 64, and one bounded solution, with a_{−1} = 0. The leading-order term b_{−1} = −(1/16) a²_{−1}. Here we give the procedure of construction of the growing solution that goes to +∞ as t increases; the other growing solution, which goes to −∞, is discussed below. We suppose that

a_{−1} = 8 exp{iJ},  b_{−1} = −4 exp{2iJ}.

(7.8)

7.1 The Autoresonance Threshold into System of Weakly Coupled Oscillators

313

The range of the matrix equals three and solution depends on a parameter. The solution is Y = ,0 Y0 = ,0 ( sin(J), – cos(J), – sin(2J), cos(2J)). It yields a0 = ,0 [ sin(J) – i cos(J)],

b0 = ,0 [ – sin(2J) + i cos(2J)].

Relations of order of t0 give a non-homogeneous system for real and imaginary parts of a1 and b1 with the matrix (7.8). To obtain non-trivial solution, the right-hand side 1 F = ( – 8 cos(J) – ,20 sin(J), 2 1 –f + ,20 cos(J) – 8 sin(J), 2 1 4 cos(J)2 – ,20 cos(J) sin(J) – 4 sin(J)2 , 2 1 2 1 , cos(J)2 + 8 cos(J) sin(J) – ,20 sin(J)2 ). 4 0 4

(7.9)

should be orthogonal to solutions of union system. The solution of union system is Z = ( – cos(J), – sin(J), cos(2J), sin(2J)).

(7.10)

The solvability condition for the system for real and imaginary parts of a1 and b1 has the form sin(J) = –

12 . f

(7.11)

The variable J that determines a turning of the leading-order term of asymptotic expansion can be determined when |f | ≥ 12. Real and imaginary parts of a1 and b1 are represented as a sum of general solution for homogeneous equation and partial solution of non-homogeneous equation: 4 cos(J) – ,20 , 0, 16 cos(J) cos(J)(–192 – 2f 2 + f ,20 cos(J)) , 8f

(R[a1 ], I[a1 ], R[b1 ], I[b1 ]) = ,1 Y + (

1152f cos(J) + 8f 3 cos(J) + 864,20 – 9f 2 ,20 T ) . 4f 3 cos(J)

(7.12)

Coefficients for higher-order terms of asymptotics are determined by the similar way. The solutions for these coefficients depend on a parameter ,k . This parameter is determined from the solvability condition for the non-homogeneous system of algebraic equations with degenerate matrix. The solvability condition looks as follows: Z ⋅ (F1 , F2 , F3 , F4 ) = 0,

(7.13)

314

7 Systems of coupled oscillators

where Z is a solution of the union system, (F1 , F2 , F3 , F4 ) is the right-hand side of equation for real and imaginary part of ak+2 , bk+2 . Vector (F1 , F2 , F3 , F4 ) contains a product of coefficients for higher-order correction terms of eq. (7.6). The algebraic system is written for (R[ak ], I[ak ], R[bk ], I[bk ]). It yields the following rule for right-hand side: F1 = –R[ak–1 ] + R[ ∑ a∗m bl ], F2 = –I[ak–1 ] + I[ ∑ a∗m bl ], F3 = –R[bk–1 ] + R[ ∑ am al ], F4 = –I[bk–1 ] + I[ ∑ am al ].

(7.14)

The parameter ,k is determined from relation (7.13) for all values of k. The equation for ,k looks as follows: Z ⋅ (Y0 ⊙ Xk ) – Z ⋅ Xk–1 = 0,

(7.15)

where Z is a solution of homogeneous union system, Y0 is a solution of homogeneous system and Xk is a partial solution of non-homogeneous system for kth correction term. An operation ⊙ is determined by y3 x1 + y4 x2 + y1 x3 + y2 x4 Y ⊙X =(

–y3 x2 + y4 x1 – y4 x3 + y1 x4 2y1 x1 – 2y2 x2

).

2y2 x1 + 2y1 x2 We determine the parameter ,k and relation (7.15) becomes valid. The solution Xk can be expanded on a basis Y0 , Y1 , Y2 , Y3 , where Y0 = ( sin(J), – cos(J), – sin(2J), cos(2J)), Y1 = ( cos(J), sin(J), 0, 0), Y2 = (0, 0, cos(2J), sin(2J)), Y3 = (0, 0, – sin(2J) cos(2J), cos(2J) sin(2J)). Direct calculations give Z ⋅ (Y0 ⊙ Yi ) = 0, for i = 0, 1 and

Z ⋅ (Y0 ⊙ Yi ) ≠ 0, for i = 2, 3,

(7.16)

7.1 The Autoresonance Threshold into System of Weakly Coupled Oscillators

315

and the vector Z is not orthogonal to vectors Y1 , Y2 , Y3 . It yields a non-trivial equation for ,k ,k Z ⋅ (Y0 ⊙ [Ck,2 Y2 + Ck,3 Y3 ]) = Z ⋅ (Ck–1,1 Y1 + Ck–1,2 Y2 + Ck–1,3 Y3 ). Note. We can construct another solution with the leading-order term a1 = –8 in a similar way. The change of sign for a1 does not lead to essential change of procedure of asymptotic construction. Above asymptotic constructions give the following algebraic asymptotic expansions for increasing solutions: A1 (t) = –8( cos(J) + i sin(J))t +

f –1 t + O(t–3 ), 4

B1 (t) = –4( cos(2J) + i sin(2J))t + (– where sin(J) =

f – 2i) t–1 + ⋅ ⋅ ⋅ , 4

(7.17)

12 : f

A3 (t) = 8( cos(J) + i sin(J))t +

f –1 t + O(t–3 ), 4

(7.18)

B3 (t) = –4( cos(2J) + i sin(2J))t + (– cos(J) [

f 24 + ] + 2i[1 + sin2 (J)])t–1 + O(t–3 ), 4 f

12 . f We construct the finite solution of the form

where sin(J) = –

A(t) = ∑ ak t–k , k=1 ∞

B(t) = ∑ bk t–k .

(7.19)

k=1

Substitution of eq. (7.19) into equation gives the recurrent sequence of problems (7.7) for coefficients (7.19). The coefficients ak and bk are determined from a system of algebraic equations with a non-degenerate matrix. Relations of order of t0 give the following system for real and imaginary parts of the leading-order terms: 2ia1r – 2a1i = –if ,

4ib1r – 4b1i=0 .

These equations allow us to determine the leading-order term of asymptotic expansion f a1 = – , 2

b1 = 0.

316

7 Systems of coupled oscillators

The real and imaginary parts of a2 , b2 should be equal to zero. Relations of order of t–2 give non-trivial equations: f 2ia3r – 2a3i = – , 2

4ib3r – 4b3i = –

if 2 . 16

It yields a3 =

if , 4

b3 = –

f2 . 64

The recurrent procedure gives f if 3f f 3 –5 )t + O(t–7 ), A2 (t) = – t–1 + t–3 + ( – 2 4 8 512 B2 (t) = –

f 2 –3 7if 2 –5 t + t + O(t–7 ). 64 256

(7.20)

7.1.4 Neighbourhoods of Equilibrium Positions 7.1.4.1 Stability by a Linear Approximation In this section we present the analysis of stability for constructed algebraic asymptotic solutions by the linear approximation. Let us consider the system of equations that are linearized on the increasing asymptotic solution (A1 (t), B1 (t)). Substitute a(t) = A1 (t) + !(t),

b(t) = B1 (t) + "(t),

into eq. (7.1). It gives the system of linear differential equations for !(t) and "(t). It is convenient to rewrite this system as a system for real and imaginary parts !(t) and "(t). The eigenvalues for the system are +1 = –4i√3t + O(t–1 ),

+3 = –

4 √ (f 2 – 144) , √6

+2 = 4i√3t + O(t–1 ),

+4 =

4 √ ( f 2 – 144) . √6

The number +4 has a positive real part. It yields that the constructed above algebraic solution is not stable with respect to small perturbations. The similar linearization on the algebraic solution A2 (t), B2 (t) leads to the matrix with eigenvalues of form +1 = –4it + O(t–1 ),

+2 = 4it + O(t–1 ),

+3 = –2it,

+4 = 2it.

The leading-order terms of the given asymptotic expansions are imaginary and question on stability of A2 (t), B2 (t) requires additional investigations.

7.1 The Autoresonance Threshold in a System of Weakly Coupled Oscillators

The matrix of the system of linear differential equations linearized on A₃(t), B₃(t) has eigenvalues of the form
$$\lambda_1 = -4it + O(t^{-1}), \qquad \lambda_2 = 4it + O(t^{-1}),$$
$$\lambda_3 = -\frac{4i}{\sqrt{6}}\sqrt{f^2 - 144}, \qquad \lambda_4 = \frac{4i}{\sqrt{6}}\sqrt{f^2 - 144}.$$

Hence the stability of A₃(t), B₃(t) also requires additional investigation.

7.1.4.2 Oscillating Asymptotic Solution in the Neighbourhood of the Finite Solution
In this section we present an oscillating asymptotic solution in the neighbourhood of the finite solution. The constructed solution of eq. (7.1) has the form
$$A(t) = a(t)\exp\{-it^2\}, \qquad B(t) = b(t)\exp\{-2it^2\}. \qquad (7.21)$$

Substitution gives
$$ia'(t) = \frac{1}{2}a^*b + f\exp\{it^2\}, \qquad ib'(t) = \frac{1}{4}a^2. \qquad (7.22)$$

Here we study the behaviour of solutions of the system in the neighbourhood of the finite asymptotic solution (A₂, B₂). The substitution
$$a = A_2\exp\{it^2\} + \alpha, \qquad b = B_2\exp\{2it^2\} + \beta \qquad (7.23)$$
gives a system for α, β:
$$i\alpha' = \frac{1}{2}\alpha^*\beta + \frac{1}{2}\left(A_2^*\beta\exp\{-it^2\} + \alpha^* B_2\exp\{2it^2\}\right),$$
$$i\beta' = \frac{1}{4}\alpha^2 + \frac{1}{2}A_2\alpha\exp\{it^2\}. \qquad (7.24)$$

Theorem 29. There exists a formal asymptotic solution of eq. (7.1) of the form
$$\alpha = \sum_{k=0}^{\infty}\alpha_k t^{-k}, \qquad \beta = \sum_{k=0}^{\infty}\beta_k t^{-k} \qquad (7.25)$$
that depends on four real parameters. The leading-order terms of the expansion are determined in terms of elliptic functions.

The theorem is proved by the formal asymptotic construction of eq. (7.25). Substitute eqs. (7.23) and (7.25) into eq. (7.24) and take into account eq. (7.20). Gathering the terms of the same order of t gives a recurrent system of equations for the coefficients of the asymptotic expansion. The leading-order terms satisfy


$$i\alpha_0'(t) = \frac{1}{2}\alpha_0^*\beta_0, \qquad i\beta_0'(t) = \frac{1}{4}\alpha_0^2. \qquad (7.26)$$

This system can be solved in terms of elliptic functions. The system has two conservation laws:
$$|\alpha_0|^2 + 2|\beta_0|^2 = E^2, \qquad (\alpha_0^*)^2\beta_0 + (\alpha_0)^2\beta_0^* = H.$$
The function −iH/4 is a Hamiltonian of eq. (7.26). The first conservation law allows us to obtain the following representation for α₀, β₀:
$$\alpha_0 = E\exp\{i\varphi\}\cos\theta, \qquad \beta_0 = \frac{E}{\sqrt{2}}\exp\{i\psi\}\sin\theta.$$
After this substitution the second conservation law becomes
$$H = \sqrt{2}\,E^3\cos^2\theta\,\sin\theta\,\cos\chi, \quad \text{where } \chi = 2\varphi - \psi. \qquad (7.27)$$
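Both conservation laws are easy to confirm numerically. The following sketch (an illustration with arbitrary initial data, not taken from the original text) integrates eq. (7.26) by the classical fourth-order Runge–Kutta scheme and checks that E² = |α₀|² + 2|β₀|² and H stay constant up to the integration error:

```python
# RK4 integration of eq. (7.26):  i a' = conj(a)*b/2,  i b' = a**2/4,
# i.e.  a' = -0.5j*conj(a)*b,  b' = -0.25j*a**2.
def rhs(a, b):
    return -0.5j * a.conjugate() * b, -0.25j * a * a

def step(a, b, h):
    k1 = rhs(a, b)
    k2 = rhs(a + h/2 * k1[0], b + h/2 * k1[1])
    k3 = rhs(a + h/2 * k2[0], b + h/2 * k2[1])
    k4 = rhs(a + h * k3[0], b + h * k3[1])
    return (a + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            b + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def invariants(a, b):
    E2 = abs(a)**2 + 2 * abs(b)**2
    H = (a.conjugate()**2 * b + a**2 * b.conjugate()).real
    return E2, H

a, b = 1.0 + 0.3j, 0.2 - 0.1j      # arbitrary illustrative initial data
E2_0, H_0 = invariants(a, b)
h = 0.01
for _ in range(2000):              # integrate up to t = 20
    a, b = step(a, b, h)
E2_1, H_1 = invariants(a, b)

assert abs(E2_1 - E2_0) < 1e-6     # |a|^2 + 2|b|^2 is conserved
assert abs(H_1 - H_0) < 1e-6       # H is conserved
```

The drift of both quantities is governed by the O(h⁴) global error of the scheme, so the tolerances above are generous for this step size.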

System (7.26) can be rewritten as
$$\varphi' = -\frac{E}{2\sqrt{2}}\cos\chi\,\sin\theta, \qquad \psi' = -\frac{E}{2\sqrt{2}}\cos\chi\,\cos^2\theta\,\sin^{-1}\theta, \qquad \theta' = \frac{E}{2\sqrt{2}}\sin\chi\,\cos\theta. \qquad (7.28)$$
Using expression (7.27) allows us to separate system (7.28). It yields a separate equation for θ:
$$\theta' = \frac{\sqrt{2E^6\cos^4\theta\,\sin^2\theta - H^2}}{4E^2\cos\theta\,\sin\theta}.$$

The solution of this equation is
$$\int_{u_0}^{\cos 2\theta}\frac{du}{\sqrt{G - u^3 - u^2 + u}} = -\frac{Et}{2}, \qquad (7.29)$$
where G = (E⁶ − 4H²)/E⁶. The function cos 2θ is a bounded periodic function of t. We determine the function χ from eq. (7.27) and integrate the first equation of (7.28):
$$\varphi = \varphi_0 - \frac{H}{2E^2}\int_0^t\frac{dt'}{1 + \cos 2\theta(t')}.$$


The integrand is a periodic function of t with non-zero mean value. It yields the behaviour φ = O(t) as t → ∞. The function ψ is determined by ψ = 2φ − χ. Thus we have constructed a family of solutions α₀, β₀ that depends on the four parameters E, G, u₀, φ₀.

The first correction terms of expansion (7.25) are determined from the linearized system
$$i\alpha_1' - \frac{1}{2}(\alpha_1^*\beta_0 + \alpha_0^*\beta_1) = -\frac{f}{4}\beta_0\exp\{-it^2\},$$
$$i\beta_1' - \frac{1}{2}\alpha_0\alpha_1 = -\frac{f}{4}\alpha_0\exp\{it^2\}. \qquad (7.30)$$

The fundamental matrix W of the homogeneous system corresponding to eq. (7.30) is formed by the first-order derivatives of the solutions of eq. (7.26) with respect to the parameters. As t → ∞ we obtain the following asymptotic behaviour:
$$\partial_E\alpha_0 = O(t), \qquad \partial_G\alpha_0 = O(t), \qquad \partial_{u_0}\alpha_0 = O(1), \qquad \partial_{\varphi_0}\alpha_0 = O(1).$$

Similar formulas are valid for the functions α₀*, β₀, β₀*. Thus the fundamental matrix contains two columns of order t and its determinant is of the order of a constant. The solution of the non-homogeneous system grows due to the fast oscillations of the right-hand sides. It is convenient to rewrite the right-hand side of the system as a sum of two vectors g⁺exp{it²} + g⁻exp{−it²}, where the non-zero components of g⁺ are −fβ₀*/4 and −fα₀/4, and the non-zero components of g⁻ are fβ₀/4 and fα₀*/4.

Integrating by parts twice we obtain
$$W\int_t^{\infty}W^{-1}g^+\exp\{i{t'}^2\}\,dt' = -W\,\frac{W^{-1}g^+}{2it}\exp\{it^2\} - W\int_t^{\infty}\left(\frac{W^{-1}g^+}{2it'}\right)'\exp\{i{t'}^2\}\,dt'$$
$$= -\frac{g^+}{2it}\exp\{it^2\} + W\,\frac{1}{2it}\left(\frac{W^{-1}g^+}{2it}\right)'\exp\{it^2\} + W\int_t^{\infty}\left[\left(\frac{W^{-1}g^+}{2it'}\right)'\frac{1}{2it'}\right]'\exp\{i{t'}^2\}\,dt'.$$

The rank of the O(t) part of the matrix W equals 2, and no minor of the matrix contains terms of the order of t². Taking into account that the determinant of the matrix W is of the order of a constant, we conclude that the entries of the inverse matrix W⁻¹ are at most of the order of t. Thus the term W(W⁻¹)′g⁺/t is of the order of a constant. The remaining integral can be integrated by parts once more and is estimated by O(t⁻¹). Similar estimates are valid for the part of the solution with g⁻. It follows that the first correction terms α₁, β₁ are bounded.

The next-order correction terms are determined from systems of the type (7.30). The right-hand sides of these systems are quadratic forms of the previous corrections α_k, β_k and of the coefficients of eq. (7.20); the solutions are bounded. The theorem is proved.

7.2 Forced Nonlinear Resonance in a System of Coupled Oscillators

In this section we consider a resonantly perturbed system of coupled nonlinear oscillators with small dissipation and an outer periodic perturbation. We show that for large time t ∼ ε⁻² one component of the system is described for the most part by the inhomogeneous Mathieu equation, while the other component represents a pulsation of large amplitude. A Hamiltonian system is obtained which describes, in a special case, the behaviour of the envelope for the most part. The analytic results agree with numerical simulations.

We consider the system
$$x''_{\theta\theta} + \gamma x'_\theta + \omega^2 x = \varepsilon xy + f\cos\frac{\theta}{2}, \qquad y''_{\theta\theta} + y = \varepsilon\delta x^2. \qquad (7.31)$$
The system of coupled oscillators (7.31) arises in the description of the propagation of surface and internal gravity waves (see Ref. [10]). Many applications are presented in Ref. [135]. The simplest analogue of system (7.31) is a nonlinear oscillator with resonant pumping. The behaviour of such an oscillator at large times is characterized by the onset of permanent periodic nonlinear oscillations of the envelope, whose amplitude is large compared with the pumping. This process is described by the primary resonance equation; see, for instance, [15, 124].

System (7.31) belongs to another class of resonantly perturbed problems. Pumping occurs here not directly, that is, not by including the resonant perturbation on the right-hand side of the equation, but by means of the nonlinear interaction of the oscillators. In this situation the Mathieu equation plays a crucial role. In this section an analogue of the nonlinear resonance equation for the envelope of the resonant component is derived in the case of a special nonlinear coupling in system (7.31).

At the initial stage the solution amplitude grows linearly. This growth corresponds to the linear resonance in the second equation of the system. The envelope is well approximated by the straight line y = 4εf²t/(4ω² − 1)². For large values of y the nonlinear effects of interaction between the oscillators become essential.
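The linear-resonance mechanism behind this initial growth can be illustrated on a stripped-down model (a sketch, not taken from the original text): at the initial stage the first equation is approximately linear, so x is of order f cos(θ/2)/(ω² − 1/4), and since cos²(θ/2) = (1 + cos θ)/2, the term x² feeds the second oscillator exactly at its natural frequency. The toy equation y″ + y = cos θ with zero Cauchy data has the exact solution (θ/2) sin θ, i.e. a linearly growing envelope:

```python
import math

# Toy model of the linear-resonance stage: forcing at the natural frequency.
# With y(0) = y'(0) = 0 the exact solution of  y'' + y = cos(t)  is
# (t/2)*sin(t), so the envelope grows linearly in t.
def f(t, y):
    return [y[1], math.cos(t) - y[0]]

def rk4(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2 * k1[i] for i in (0, 1)])
    k3 = f(t + h/2, [y[i] + h/2 * k2[i] for i in (0, 1)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in (0, 1)])
    return [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in (0, 1)]

y, t, h = [0.0, 0.0], 0.0, 0.005
for _ in range(4000):          # integrate up to t = 20
    y = rk4(t, y, h)
    t += h

assert abs(y[0] - (t / 2) * math.sin(t)) < 1e-6
```

Carrying the amplitude f/(ω² − 1/4) and the coupling εδ through this computation reproduces the slope 4εf²/(4ω² − 1)² of the envelope quoted above (with δ absorbed into f²).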


The main result of this section consists in describing the dynamics of the envelope of oscillations for large times. It turns out that the envelope oscillates as well. We give formulas for the amplitude and period of these oscillations.

7.2.1 Results

The asymptotic solution of system (7.31) for large time t = O(ε⁻²) has the form x(θ, ε) ∼ x₀(θ, ε²θ), y(θ, ε) ∼ ε⁻¹Y(θ, ε²θ) for ε → 0, with Y(θ, ε²θ) = 2 Re[k(ε²θ) exp(iθ)], provided that Q = 4ω² − γ², R = 2|k| and (Q, R) ∈ G. Here G denotes the domain of the parameters a and q for which the general solution of the Mathieu equation u″ + (a − 2q cos 2s)u = 0 is bounded. The function x₀ satisfies the inhomogeneous Mathieu equation
$$\partial^2_{ss}x_0 + 4\gamma\,\partial_s x_0 + 4(\omega^2 - |k|\cos 2s)x_0 = 4f\cos\left(s - \frac{1}{2}\arg k\right), \qquad (7.32)$$
where
$$s = \frac{\theta + \arg k}{2}.$$
Write k = k₁ + ik₂, where k₁ = Re k and k₂ = Im k are the real and imaginary parts of the function k, respectively. They are solutions of the averaged system

$$\partial_\tau k_1 = -\lim_{T\to\infty}\frac{\delta}{T}\int_0^T x^2(\xi,\tau)\sin\xi\,d\xi, \qquad \partial_\tau k_2 = \lim_{T\to\infty}\frac{\delta}{T}\int_0^T x^2(\xi,\tau)\cos\xi\,d\xi,$$
where τ = ε²θ. For the particular solution corresponding to forced oscillations in the case ω = Ω/κ ≫ 1, which is important for applications, the last system simplifies and reduces to a Hamiltonian system with Hamiltonian of the form
$$H = \frac{f^2}{2\pi}\,\frac{|K|^2 - K_1}{K_1(1 - |K|^2) - (|K|^2 - K_1)\sqrt{1 - |K|^2}},$$
where
$$K_1 = \frac{k_1}{\omega^2}, \qquad K_2 = \frac{k_2}{\omega^2}.$$


The general solution of system (7.31) depends on five arbitrary parameters. For the Cauchy problem these are θ₀ and the values of x, x′, y and y′ at the moment θ₀. The special solution determined by the Hamiltonian system contains merely three arbitrary parameters θ₀ ∈ [0, 2π], r ∈ [0, 1] and a ∈ [0, 2π]. The Cauchy data are expressed in terms of these parameters by
$$\theta \sim \theta_0, \qquad x \sim \frac{f\cos(\theta_0/2)}{\omega^2\left(1 - r\cos(a + \theta_0)\right)},$$
$$x' \sim -\frac{f\left(3r\sin(a + \theta_0/2) + 2\sin(\theta_0/2) + r\sin(a + 3\theta_0/2)\right)}{4\omega^2\left(1 - r\cos(a + \theta_0)\right)^2},$$
$$y \sim \varepsilon^{-1}r\cos(a + \theta_0), \qquad y' \sim \varepsilon^{-1}r\sin(a + \theta_0).$$
The maximal value of the amplitude of y for H = H₀ and for t = O(ε⁻²) with ε → 0 is estimated by
$$\max|y| \sim \frac{\omega^2}{\varepsilon}\max\{|r_-|, |r_+|\},$$
where ω ≫ 1 and
$$r_\pm = \frac{(f^2 - 2\pi H_0) \pm \sqrt{-f^4 + 4\pi f^2 H_0 + 4\pi^2 H_0^2}}{4\pi H_0}.$$
The period is in turn determined by the formula
$$T(H_0) \sim \frac{\omega^8}{\varepsilon^2}\oint_c \frac{dK_1}{\partial H/\partial K_2},$$

the integration being over the cycle c in the complex plane of the variable K = K₁ + iK₂ given by H(K₁, K₂) = H₀.

7.2.2 Formal Constructions for ε ≪ 1

In this section we write down the equations determining the coefficients of the asymptotic solution to system (7.31). The solution of eq. (7.31) is constructed by the two-scale method in the form
$$x(\theta, \tau, \varepsilon) = x_0(\theta, \tau), \qquad y(\theta, \tau, \varepsilon) = \varepsilon^{-1}Y(\theta, \tau) + \varepsilon\,y_1(\theta, \tau), \qquad (7.33)$$
where τ = ε²θ is a slow variable.


Let us substitute eq. (7.33) into system (7.31) and group the coefficients of the same powers of the small parameter ε. The equation for Y takes the form
$$\partial^2_{\theta\theta}Y + Y = 0.$$
The general solution of this equation is of the form
$$Y(\theta, \tau) = k(\tau)\exp(i\theta) + \bar{k}(\tau)\exp(-i\theta)$$
with an arbitrary function k(τ) to be chosen later. The main term x₀ is determined from the equation
$$\partial^2_{\theta\theta}x_0 + \gamma\,\partial_\theta x_0 + (\omega^2 - Y)x_0 = f\cos\frac{\theta}{2},$$
which can be rewritten in the form
$$\partial^2_{ss}x_0 + 4\gamma\,\partial_s x_0 + (q - 2r\cos 2s)x_0 = 4f\cos\left(s - \frac{a}{2}\right) \qquad (7.34)$$
with
$$s = \frac{\theta + a}{2}, \qquad a = \arg k, \qquad r = 2|k|, \qquad q = 4\omega^2,$$
cf. eq. (7.32). Let φ₁, φ₂ be a fundamental system of solutions of the homogeneous equation corresponding to eq. (7.34). Then any solution of eq. (7.34) can be represented in the form
$$x_0 = c_1\varphi_1 + c_2\varphi_2 + \int_{s_0}^s \frac{\varphi_1(s)\varphi_2(s') - \varphi_2(s)\varphi_1(s')}{\varphi_1(s')\varphi_2'(s') - \varphi_1'(s')\varphi_2(s')}\,4f\cos\left(s' - \frac{a}{2}\right)ds',$$
the denominator being the Wronskian of the linearly independent system {φ₁, φ₂}. The change of dependent variable x₀ = u exp(−2γs) reduces the homogeneous equation corresponding to eq. (7.34),
$$\partial^2_{ss}x_0 + 4\gamma\,\partial_s x_0 + (q - 2r\cos 2s)x_0 = 0, \qquad (7.35)$$
to the Mathieu equation with slowly varying coefficient r. More precisely, we get
$$u'' + (Q - 2R\cos 2s)u = 0, \qquad (7.36)$$
where Q = q − γ² and R = R(ε²s).


It is known that the general solution of the Mathieu equation changes within a period as u_{1,2}(s + 2π) = exp(2πλ_{1,2})u_{1,2}(s), where λ_{1,2}(q, r) are the characteristic indices of the (homogeneous) Mathieu equation (see Ref. [41]). For q and r such that both indices are purely imaginary, the solution of the Mathieu equation is bounded. If either of the characteristic indices is real and positive, then the solution grows exponentially. Depending on the values of q and r the solution can grow exponentially or remain bounded (see Ref. [162]). In Figure 7.1 the dependence of the real part Re λ₁ of the characteristic index on the parameters Q and R is shown.

In the case under consideration the coefficient r = r(τ) changes slowly. On each 2π-interval of the variable s the multiplicator of the solution of eq. (7.35) has the form
$$k_{1,2} = \exp\left(2\pi\left(\lambda_{1,2}(q, r(\tau)) - 2\gamma\right)\right).$$
Consider the properties of solutions to eq. (7.35) for the fixed value q = 36. For Re λ − 2γ < 0 the multiplicators are less than 1 in modulus; hence the modulus of the solution decreases on a sequence of intervals. If Re λ − 2γ > 0, then the solution grows exponentially on a sequence of intervals. The integral index is
$$D(\tau) = \int_0^\tau\left(\mathrm{Re}\,\lambda_1(Q, R(\tau')) - 2\gamma\right)d\tau'.$$

Figure 7.1. The dependence of the real part Re λ₁ of the characteristic index of the Mathieu equation (7.36) upon the parameters Q and R. The figure is obtained by numerical simulations for diverse values of the parameters, with Q ∈ [25, 49] and R ∈ [0, 64]. The grid step of the numerical simulations equals 0.01 for both parameters.


The principal component of the solution to eq. (7.34) does not depend on the initial data as long as D ≤ 0 and θ ≫ 1. For eq. (7.32) this means that, when studying the general solution at large times, one can restrict oneself to the particular solution of the inhomogeneous equation with zero initial data and neglect the solution of the homogeneous equation as long as the condition D < 0 is fulfilled.

The correction y₁ is determined from the differential equation
$$\partial^2_{\theta\theta}y_1 + y_1 = -2\,\partial^2_{\theta\tau}Y + \delta x_0^2.$$
We look for a solution y₁ of the form y₁(θ, τ) = ℓ(θ)exp(iθ) + ℓ̄(θ)exp(−iθ), ℓ satisfying the equation
$$\partial_\theta\ell = \partial_\tau\bar{k}\exp(-2i\theta) - \partial_\tau k + \frac{1}{2i}\delta x_0^2\exp(-i\theta).$$
The function ℓ̄ satisfies the complex conjugate equation. By means of averaging over the variable θ we single out the growing summands on the right-hand side of the equation for ℓ. This allows us to describe the dependence of k on the slow variable τ. The average value of the first summand is equal to zero. The averaging of the second and third summands leads to the equation
$$\partial_\tau k = -\lim_{T\to\infty}\frac{i\delta}{T}\int_0^T x_0^2\exp(-i\xi)\,d\xi. \qquad (7.37)$$
The averaging of the equation for ℓ̄ gives an equation for k̄ which is complex conjugate to eq. (7.37). Averaging over the fast variable on the right-hand side of eq. (7.37) determines the derivative of k as a function of slow time. The evolution of x₀ in the fast variable is governed by the inhomogeneous eq. (7.32); the dependence of x₀ upon the slow variable is explained by the slow perturbation of the coefficients of the Mathieu equation. If q and r lie in the domain where Re λ₁ < 2γ, then y₁ = o(θ) for θ → ∞.

From the mathematical viewpoint the question arises whether the limit on the right-hand side of eq. (7.37) exists. One can verify eq. (7.37) by means of numerical simulation of the initial system (7.31). In the domain where Re λ₁ > 2γ, the asymptotic representation (7.33) does not work. However, numerical computations show that if x₀ in eq. (7.37) is replaced by the entire expression for x, then eq. (7.37) modified in this way remains valid also for Re λ₁ > 2γ.
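The extraction of the slow envelope k(τ) from an oscillating signal, the key step in any numerical test of eq. (7.37), can be sketched as follows. The envelope k(τ) below is synthetic and hypothetical, chosen only for the demonstration: on each window of length 2π the Fourier coefficient of exp(iθ) in y recovers k near the centre of the window.

```python
import cmath, math

# Windowed-Fourier extraction of the slow envelope: if
#   y(theta) = 2 Re[ k(eps**2 * theta) * exp(1j*theta) ],
# then (1/2pi) * integral of y * exp(-1j*theta) over a 2*pi window
# approximates k at the centre of the window.
eps = 0.1
k = lambda tau: 0.3 + 0.2 * tau + 0.1j * tau   # synthetic slow envelope
y = lambda th: (2 * k(eps**2 * th) * cmath.exp(1j * th)).real

def window_coeff(th0, n=4000):
    h = 2 * math.pi / n
    s = 0j
    for j in range(n):
        th = th0 + j * h
        s += y(th) * cmath.exp(-1j * th) * h
    return s / (2 * math.pi)

for th0 in (0.0, 200.0, 400.0):
    tau_mid = eps**2 * (th0 + math.pi)         # tau at the window centre
    assert abs(window_coeff(th0) - k(tau_mid)) < 5e-3
```

The residual comes from the drift of k over the window (of order ε²) and from the oscillating term k̄ exp(−2iθ), which averages out over the period up to the same order.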


The numerical solution of system (7.31) allows one to evaluate separately the left-hand side and the right-hand side of eq. (7.37) and to determine the relative residual. We construct a numerical solution for the parameter values ε = 0.2, ω = 3, f = 1 and γ = 0.1. System (7.31) is solved by the Runge–Kutta method of fourth order. As a result we get a numerical solution x_num, y_num. We then divide the entire integration interval into subintervals of length 2π, and on each subinterval we compute the Fourier coefficients of sin θ and cos θ for the function εy_num. From the data obtained in this fashion we evaluate the difference-quotient approximation to ∂_τk.

The averaging operator on the right-hand side of eq. (7.37) is written in a form convenient for analytical computations. However, in numerical data the dependences upon the fast and slow variables are not separated from each other. Hence it is not possible to apply formula (7.37) directly. Instead, the averaging over the fast variable in eq. (7.37) is replaced by averaging over an interval of length O(ε⁻²) with centre at the point τᵢ = ε²tᵢ. In this way we get the values of the right-hand side at the points τᵢ. Figure 7.2 compares the derivative in τ on the left-hand side of eq. (7.37), evaluated numerically, with the integral on the right-hand side. We thus conclude that the substitution of the numerical solution for the genuine solution in eq. (7.37) leads to an inessential residual.

7.2.3 Analysis of the Asymptotic Solution for ω ≫ 1

In this section we analyse the behaviour of the main terms of representation (7.33) under the assumption that ω = Ω/κ ≫ 1. This assumption corresponds to the case of forced combinative scattering, where ω attains values of order 10² (see Refs. [2, 136]).

Figure 7.2. The relative error |S_l − S_r|/|S_l| of eq. (7.37). Here S_l and S_r are numerical evaluations of the left-hand and right-hand sides of eq. (7.37), respectively.


Equation (7.32) for the main term x₀ can be rewritten in the form
$$\frac{1}{\omega^2}\partial^2_{ss}x_0 + \frac{4\gamma}{\omega^2}\partial_s x_0 + \left(4 - \frac{2r}{\omega^2}\cos 2s\right)x_0 = \frac{4f}{\omega^2}\cos\left(s + \frac{a}{2}\right).$$
The asymptotics of the particular solution for large values of ω is
$$x_0 = \frac{2f\cos(s + a/2)}{2 - \omega^{-2}r\cos 2s}\,\omega^{-2} + O(\omega^{-4}). \qquad (7.38)$$
Our next objective is to treat eq. (7.37) for ω ≫ 1. To this end we substitute eq. (7.38) into eq. (7.37) and evaluate the integral explicitly. As a result we get
$$\frac{dK_1}{d\tau'} = \frac{\partial H}{\partial K_2}, \qquad \frac{dK_2}{d\tau'} = -\frac{\partial H}{\partial K_1}, \qquad (7.39)$$
where τ′ = ω⁻⁸τ. This is a Hamiltonian system with Hamiltonian
$$H = \frac{f^2}{2\pi}\,\frac{|K|^2 - K_1}{K_1(1 - |K|^2) - (|K|^2 - K_1)\sqrt{1 - |K|^2}}.$$
System (7.39) is easily seen to have a stable equilibrium point of type “centre”:
$$K_1 = \frac{1}{\sqrt{2}}, \qquad K_2 = 0.$$
The amplitude of the oscillations can be found from the system
$$H(K_1, K_2) = H_0, \qquad \frac{d}{d\tau'}\left(K_1^2 + K_2^2\right) = 0. \qquad (7.40)$$
This allows one to evaluate the maximal value of the amplitude of the oscillation envelope as
$$\frac{\omega^2}{\varepsilon}\max\{|r_-|, |r_+|\},$$
where
$$r_\pm = \frac{(f^2 - 2\pi H_0) \pm \sqrt{-f^4 + 4\pi f^2 H_0 + 4\pi^2 H_0^2}}{4\pi H_0}. \qquad (7.41)$$
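For the parameter values of the numerical experiment in this section (f = 1, ω = 3, ε = 0.2) and the Hamiltonian level H₀ = 1/(4π) reported for the simulation, the formula for r± can be evaluated directly; a sketch:

```python
import math

# Evaluate r_plus, r_minus of eq. (7.41) at H0 = 1/(4*pi) with f = 1,
# omega = 3, eps = 0.2 (the parameter values of the numerical experiment).
f, omega, eps = 1.0, 3.0, 0.2
H0 = 1.0 / (4 * math.pi)

disc = -f**4 + 4 * math.pi * f**2 * H0 + 4 * math.pi**2 * H0**2
r_plus = ((f**2 - 2 * math.pi * H0) + math.sqrt(disc)) / (4 * math.pi * H0)
r_minus = ((f**2 - 2 * math.pi * H0) - math.sqrt(disc)) / (4 * math.pi * H0)

assert abs(r_plus - 1.0) < 1e-10   # r_+ = 1 at this Hamiltonian level
assert abs(r_minus) < 1e-10        # r_- = 0

# max |y| ~ (omega**2 / eps) * max(|r_-|, |r_+|) = 45 for these values,
# which reproduces the omega**2 * eps**(-1) estimate of the envelope.
amp = omega**2 / eps * max(abs(r_plus), abs(r_minus))
assert abs(amp - 45.0) < 1e-8
```

Since max{|r₋|, |r₊|} = 1 at this level, the amplitude estimate collapses to ω²ε⁻¹, the value used for the horizontal line in Figure 7.3.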

The period is determined by
$$T(H_0) \sim \frac{\omega^8}{\varepsilon^2}\oint_c \frac{dK_1}{\partial H/\partial K_2}, \qquad (7.42)$$


Figure 7.3. Numerically evaluated envelope function of the y-component of the solution (solid curve) and the approximation of the solution envelope as a solution of system (7.39) (dashed curve); the horizontal axis is τ. The vertical line corresponds to the value of the period T = 5264.76, determined numerically by (7.42). The horizontal line gives an estimate of the maximal value ω²ε⁻¹ of the envelope amplitude.

where the integration is over the cycle c in the complex plane of the variable K = K₁ + iK₂ given by
$$\frac{|K|^2 - K_1}{K_1(1 - |K|^2) - (|K|^2 - K_1)\sqrt{1 - |K|^2}} = \frac{2\pi H_0}{f^2}.$$
The analytical results obtained in this section agree well with the numerical simulation. For the values of the parameters K₁ and K₂ used in the numerical simulation, the Hamiltonian value proves to be H₀ = 1/(4π). In this case the amplitude of the pulsations is estimated by ω²ε⁻¹. This is demonstrated in Figure 7.3, where the horizontal line corresponds to ω²ε⁻¹.

The explicit value of the period of the envelope oscillations can be evaluated from eq. (7.42). Our numerical simulations with parameters f = 1, ω = 3, γ = 0.1, ε = 0.2 give the value T = 5264.76 in the variable τ. System (7.39) provides a suitable approximation for the behaviour of the functions K₁ and K₂. In Figure 7.3 the solid curve corresponds to the envelope of the y-component of the solution; this curve is found from a numerical solution of system (7.31). The dashed curve √(K₁² + K₂²) presents an approximation of the solution envelope as a solution of system (7.39). The point K₁ = 0, K₂ = 0 is an unstable equilibrium point of node type. Initial data for system (7.39) correspond to the linear resonance of the y-component of the solution to eq. (7.31) at the initial stage. In Figure 7.3 the vertical line shows the value of the period T = 5264.76, determined numerically by expression (7.42). The horizontal line presents an estimate of the maximal value ω²ε⁻¹ of the envelope amplitude.

Bibliography

[1] M. Abramowitz and I. Stegun. Handbook of Mathematical Functions, volume 55 of Applied Mathematics Series. National Bureau of Standards, Washington, 1964.
[2] S.A. Akhmanov and S.Yu. Nikitin. Physical Optics. Nauka, Moscow, 2004.
[3] V.M. Alekseev. Quasirandom dynamical systems. I, II, III. Mat. Sb. (N.S.), 76, 77, 78 (1, 4, 1):72–134, 545–601, 3–50, 1968, 1968, 1969.
[4] A.A. Andronov and S.E. Chaikin. Theory of Oscillations. Princeton University Press, Princeton, 1949.
[5] V.I. Arnold. Mathematical Methods of Classical Mechanics, volume 60 of Graduate Texts in Mathematics. Springer, Berlin, 1989.
[6] V.I. Arnold. Applicability conditions and an error bound for the averaging method for systems in the process of evolution through a resonance. Doklady AN SSSR, 161:9–12, 1965.
[7] M. Asaf and B. Meerson. Parametric autoresonance of Faraday waves. Phys. Rev. E, 72(1):016310, 2005.
[8] V.D. Azhotkin and V.M. Babich. An application of the method of two-scale expansions to a single-frequency problem in the theory of nonlinear vibrations. Prikladnaya Matematika i Mekhanika, 49:377–383, 1985.
[9] V.M. Babich and V.S. Buldyrev. Asymptotic Methods in Short-Wavelength Diffraction Theory. Alpha Science Intl Ltd, 2009.
[10] K. Ball. Energy transfer between external and internal gravity waves. J. Fluid Mech., 19(3):465–478, 1964.
[11] G.I. Barenblatt. Similarity, Self-Similarity, and Intermediate Asymptotics. Theory and Applications to Geophysical Hydrodynamics. Gidrometeoizdat, Leningrad, 1982.
[12] T. Bayes. Letter to John Canton. 1763.
[13] A.N. Belogrudov. Ob asimptotike vyrozhdennogo resheniya vtorogo uravneniya Painleve. Differencial'nye Uravneniya, 33(5):587–594, 1997.
[14] H. Blasius. Grenzschichten in Flüssigkeiten mit kleiner Reibung. Zeit. für Math. Phys., 56:1–37, 1908.
[15] N.N. Bogolyubov and Yu.A. Mitropolskii. Asymptotic Methods in the Theory of Non-Linear Oscillations. Gordon and Breach Science Publishers, New York, 1961.
[16] H. Bohr. Almost Periodic Functions. Mir, Moscow, 1934.
[17] F.J. Bourland and R. Haberman. The modulated phase shift for strongly nonlinear, slowly varying and weakly damped oscillators. SIAM J. Appl. Math., 48(4):737–748, 1988.
[18] F.J. Bourland and R. Haberman. Connection across a separatrix with dissipation. Stud. Appl. Math., 91:95–124, 1994.
[19] P. Boutroux. Recherches sur les transcendantes de M. Painlevé et l'étude asymptotique des équations différentielles du second ordre. Ann. Sci. École Norm. Supér., 30:255–376, 1913.
[20] P. Boutroux. Recherches sur les transcendantes de M. Painlevé et l'étude asymptotique des équations différentielles du second ordre. Ann. Sci. École Norm. Supér., 31:99–159, 1914.
[21] L. Brillouin. La mécanique ondulatoire de Schrödinger: une méthode générale de résolution par approximations successives. C. R. Acad. Sci., 183:24–26, 1926.
[22] A.D. Bruno. Power Geometry in Algebraic and Differential Equations (Russian). Nauka Fizmatlit, Moscow, 1998.
[23] J.R. Cary and R.T. Skodje. Phase change between separatrix crossing. Physica D, 36:287–316, 1989.
[24] A.L. Cauchy. Résumé des leçons sur le calcul infinitésimal. Paris, 1823 (Russian transl. by V. Bunyakovskii: Differencial'noe i integral'noe ischislenie, Imperatorskaia Akademiia Nauk, Sankt-Peterburg, 1831).

[25] B.V. Chirikov. Resonance processes in magnetic traps. Atomnaya Energiya, 6(1):630–638, 1959.
[26] B.V. Chirikov. Passage of nonlinear oscillatory system through resonance. Sov. Phys. Dokl., 4:390–394, 1959.
[27] B.V. Chirikov. A universal instability of many-dimensional oscillator systems. Phys. Rep., 52(5):264–379, 1979.
[28] G. Cohen and B. Meerson. Dynamic autoresonance and global chaos in a slowly evolving system of two coupled oscillators. Phys. Rev. E, 47(2):967–975, 1993.
[29] A. de Moivre. Miscellanea Analytica de Seriebus et Quadraturis. J. Tonson & J. Watts, London, 1730.
[30] D.C. Diminnie and R. Haberman. Slow passage through a saddle-center bifurcation. J. Nonlinear Sci., 10:197–221, 2000.
[31] S.Yu. Dobrokhotov and V.P. Maslov. Finite-zone, almost-periodic solutions in WKB approximations. J. Sov. Math., 16(6):1433–1487, 1981.
[32] A.A. Dorodnitsin. Asymptotic solution of van der Pol equation. Prikladnaya Matematika i Mekhanika, 11:313–328, 1947.
[33] L. Euler. Institutiones Calculi Differentialis. Academia Scientiarum Imperialis Petropolitanae, St. Petersburg, 1755.
[34] J. Fajans and L. Friedland. Autoresonant (non-stationary) excitation of pendulums, Plutinos, plasmas, and other nonlinear oscillators. Amer. J. Phys., 69(10):1096–1102, 2001.
[35] J. Fajans, E. Gilson, and L. Friedland. Second harmonic autoresonance control of the l = 1 diocotron mode in pure electron plasmas. Phys. Rev. E, 62:4131–4136, 2000.
[36] J. Fajans, E. Gilson, and L. Friedland. Autoresonant excitation of the diocotron mode in non-neutral plasmas. Phys. Rev. Lett., 82:4444–4447, 1999.
[37] J. Fajans, E. Gilson, and L. Friedland. The effect of damping on autoresonant (nonstationary) excitation. Phys. Plasmas, 8:243–247, 2001.
[38] M.V. Fedoryuk. The WKB method for a non-linear equation of the second order. USSR Comput. Math. Math. Phys., 26(1):121–128, 1986.
[39] N.N. Filonenko, R.Z. Sagdeev, and G.M. Zaslavskii. Destruction of magnetic surfaces by magnetic field irregularities. Nucl. Fusion, 7:253, 1967.
[40] H. Flaschka and A.C. Newell. Monodromy- and spectrum-preserving deformations. Comm. Math. Phys., 76:65–116, 1980.
[41] G. Floquet. Sur les équations différentielles linéaires à coefficients périodiques. Annales Scientifiques de l'École Normale Supérieure, 12:47–88, 1883.
[42] L. Friedland. Autoresonant solutions of nonlinear Schrödinger equation. Phys. Rev. E, 58:3865–3875, 1998.
[43] L. Friedland. Subharmonic autoresonance. Phys. Rev. E, 61(4):3732–3735, 2000.
[44] L. Friedland. Subharmonic autoresonance of the diocotron mode. Phys. Plasmas, 7(5):1712–1718, 2000.
[45] L. Friedland. From the pendulum to Rydberg accelerator and planetary dynamics: Autoresonant formation and control of nonlinear states. Proc. of the Symposium PhysCon2005, St. Petersburg, Russia, 2005.
[46] K.O. Friedrichs and R.F. Dressler. A boundary-layer theory for elastic plates. Commun. Pure Appl. Math., 14:1–33, 1961.
[47] K.O. Friedrichs and W. Wasow. Singular perturbations of nonlinear oscillations. Duke Math. J., 13:367–381, 1946.
[48] R. Fuchs. Über lineare homogene Differentialgleichungen zweiter Ordnung mit drei im Endlichen gelegenen wesentlich singulären Stellen. Math. Ann., (3):301–321, 1907.
[49] R.N. Garifullin. Asymptotic analysis of a subharmonic autoresonance model. Proc. Steklov Inst. Math., Asymptotic Expansions. Approximation Theory. Topology, suppl. 1:S75–S83, 2003.

[50] R.N. Garifullin. Asymptotic solution to the problem of autoresonance: Outer expansion. Comput. Math. Math. Phys., 46(9):1526–1538, 2006.
[51] R.N. Garifullin. Construction of asymptotic solutions of the autoresonance problem. Inner decomposition. J. Math. Sci., 151(1):2651–2663, 2008.
[52] R. Garnier. Sur les équations différentielles du troisième ordre dont l'intégrale est uniforme et sur une classe d'équations nouvelles d'ordre supérieur dont l'intégrale générale a les points critiques fixes. Annales Scientifiques de l'École Normale Supérieure, 29:1–126, 1912.
[53] S.G. Glebov and O.M. Kiselev. Asymptotics of a rigid generation of natural oscillations. I. Transactions of the conference “Complex Analysis, Differential Equations and Applications”, Ufa, 2000, pp. 49–52.
[54] S.G. Glebov, O.M. Kiselev, and V.A. Lazarev. The autoresonance threshold in a system of weakly coupled oscillators. Proc. Steklov Inst. Math., (1):S111–S123, 2007.
[55] S.G. Glebov, O.M. Kiselev, and N. Tarkhanov. Autoresonance in a dissipative system. J. Phys. A: Math. Theor., 43:215203, 2010.
[56] G. Green. On the motion of waves in a variable canal of small depth and width. Trans. Cambridge Philos. Soc., 6:457–462, 1837.
[57] V.I. Gromak and N.A. Lukashevich. Analytical Properties of Solutions of the Painlevé Equations. Universitetskoe, Minsk, 1990.
[58] J. Guckenheimer and A. Mahalov. Instability induced by symmetry reduction. Phys. Rev. Lett., 68:2257–2260, 1992.
[59] J. Guckenheimer. Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields. IKI, Moscow–Izhevsk, 2002.
[60] R. Haberman. Nonlinear transition layers – the second Painlevé transcendent. Stud. Appl. Math., 57:247–270, 1977.
[61] R. Haberman. Slowly varying jump and transition phenomena associated with algebraic bifurcation problems. SIAM J. Appl. Math., 37:69–109, 1979.
[62] R. Haberman. Slow passage through the nonhyperbolic homoclinic orbit associated with a subcritical pitchfork bifurcation for Hamiltonian systems and the change in action. SIAM J. Appl. Math., 62(2):488–513, 2001.
[63] P. Holmes and D. Spence. On a Painlevé-type boundary-value problem. Q. J. Mech. Appl. Math., 37(4):525–538, 1984.
[64] A.M. Il'in. Matching of Asymptotic Expansions of Solutions of Boundary Value Problems. AMS, Providence, 1992.
[65] A.M. Il'in and A.R. Danilin. Asymptotic Methods in Analysis. Fizmatlit, Moscow, 2009.
[66] E.L. Ince. The Ordinary Differential Equations. Dover Publications, New York, 1944.
[67] A.P. Itin, A.I. Neishtadt, and A.A. Vasiliev. Captures into resonance and scattering on resonance in dynamics of a charged relativistic particle in magnetic field and electrostatic wave. Physica D, 141:281–296, 2000.
[68] A.R. Its and A.A. Kapaev. Metod izomonodromnykh deformacii i formuly svyazi dlya vtorogo transcendenta Painleve. Izv. AN SSSR, ser. Matematicheskaya, 51:878–892, 1987.
[69] A.R. Its and V.Yu. Novokshenov. The Isomonodromic Deformation Method in the Theory of Painlevé Equations, volume 1191 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1986.
[70] N. Joshi and M.D. Kruskal. On asymptotic approach to the connection problem for the first and the second Painlevé equations. Phys. Lett. A, 130:129–137, 1988.
[71] L.A. Kalyakin. Asymptotic solution of the autoresonance problem. Theoret. Math. Phys., 133(3):1684–1691, 2002.
[72] L.A. Kalyakin. Asymptotic solution of the threshold phenomenon problem for the principal resonance equations. Differ. Equ., 40(6):780–788, 2004.

[73] L.A. Kalyakin. Resonance capture in a nonlinear system. Theor. Math. Phys., 144(1):944–951, 2005.
[74] L.A. Kalyakin. Intermediate asymptotics for solutions to the degenerate principal resonance equations. Comput. Math. Math. Phys., 46(1):79–89, 2006.
[75] L.A. Kalyakin. Lyapunov functions in justification theorems for asymptotics. Math. Notes, 98(5):752–764, 2015.
[76] L.A. Kalyakin. Stability of the autoresonance in a dissipative system. Russ. J. Math. Phys., 23(1):77–87, 2016.
[77] L.A. Kalyakin. Asymptotic analysis of autoresonance models. Russ. Math. Surv., 65(5):791–857, 2008.
[78] L.A. Kalyakin and O.A. Sultanov. Stability of autoresonance models. Differ. Equ., 49(3):267–281, 2013.
[79] A.A. Kapaev. Asymptotic behaviour of solutions for Painlevé equation of first kind. Differentsial'nye Uravneniya, 24:1684, 1988.
[80] A.A. Kapaev. Painlevé transcendents as nonlinear special functions. Diss. of Doct. of Sci., 1997.
[81] A.A. Kapaev and A.V. Kitaev. Limit transition P2 → P1. J. Math. Sci., 73(4):460–467, 1995.
[82] S. Kaplun. Fluid Mechanics and Singular Perturbation. Academic Press, New York, 1967.
[83] M.V. Karasyev and A.V. Pereskokov. One-dimensional equations of a self-consistent field with cubic nonlinearity in a semiclassical approximation. Math. Notes, 52(2):66–82, 1992.
[84] J. Kevorkian and J.D. Cole. Multiple Scale and Singular Perturbation Methods, volume 114 of Applied Mathematical Sciences. Springer, Berlin, 1996.
[85] E. Khain and B. Meerson. Parametric autoresonance. arXiv:physics/0101100, 2001.
[86] O.M. Kiselev. Asymptotic approach for the rigid condition of appearance of the oscillations in the solution of the Painlevé-2 equation. arXiv:solv-int/9902007, 1999.
[87] O.M. Kiselev. Hard loss of stability in Painlevé-2 equation. J. Nonlinear Math. Phys., 8(1):65–95, 2001.
[88] O.M. Kiselev. Introduction into Nonlinear Oscillations. Bashkir State University, Ufa, 2006, 140 p.
[89] O.M. Kiselev. Oscillations near a separatrix in the Duffing equation. Proc. Steklov Inst. Math., 281:82–94, 2013.
[90] O.M. Kiselev. Threshold values of autoresonant pumping. arXiv:1303.4691, 2013.
[91] O.M. Kiselev and S.G. Glebov. Asymptotics of a rigid generation of natural oscillations. II. Transactions of the conference “Complex Analysis, Differential Equations and Applications”, Ufa, 2000, pp. 95–97.
[92] O.M. Kiselev and S.G. Glebov. An asymptotic solution slowly crossing the separatrix near a saddle-center bifurcation point. Nonlinearity, 16:327–362, 2003.
[93] O.M. Kiselev and S.G. Glebov. The capture into parametric autoresonance. Nonlinear Dynamics, 48(1–2):217–230, 2006.
[94] O.M. Kiselev and B.I. Suleimanov. The solution of the Painlevé equations as special functions of catastrophes, defined by a rejection in these equations of terms with derivative. arXiv:solv-int/9902004, 1999.
[95] O.M. Kiselev and N. Tarkhanov. The capture of a particle into resonance at potential hole with dissipative perturbation. Chaos, Solitons Fractals, 58:27–39, 2014.
[96] O.M. Kiselev and N. Tarkhanov. Scattering of trajectories at a separatrix under autoresonance. J. Math. Phys., 55:063502, 2014.
[97] A.V. Kitaev. Elliptic asymptotics of the first and the second Painlevé transcendents. Russ. Math. Surv., 49(1):81–150, 1994.
[98] A.V. Kitaev. Elliptic asymptotics of the first and second Painlevé transcendents. Usp. Matem. Nauk, 49(1):77–140, 1994.

[99] A.V. Kitaev and N. Joshi. On Boutroux's tritronquée solutions of the first Painlevé equation. Stud. Appl. Math., 107:253–291, 2001.
[100] N.E. Kochin, I.A. Kibel', and N.V. Roze. Theoretical Hydromechanics. John Wiley & Sons, Hoboken, New Jersey, 1964.
[101] H.A. Kramers. Wellenmechanik und halbzählige Quantisierung. Z. Phys., 39(10–11):828–840, 1926.
[102] G.E. Kuzmak. Asymptotic solutions of nonlinear second order differential equations with variable coefficients. J. Appl. Math. Mech., 23(3):730–744, 1959.
[103] A.N. Kuznetsov. Differentiable solutions to degenerate systems of ordinary equations. Funct. Anal. Appl., 6(2):119–127, 1972.
[104] N. Kryloff and N. Bogoliubov. Introduction to Non-linear Mechanics. Princeton University Press, London, 1949.
[105] A. Lindstedt. Beitrag zur Integration der Differentialgleichungen der Störungstheorie. Mem. Acad. Imperiale Sciences St. Petersburg, 31(4):20, 1883.
[106] J. Liouville. Sur le développement des fonctions et séries. J. Math. Pures Appl., 1:16–35, 1837.
[107] S.A. Lomov. A power boundary layer in problems with a singular perturbation. Izv. Akad. Nauk SSSR Ser. Mat., 30(3):525–572, 1966.
[108] L.G. Loitsyanskii. Laminarnyi pogranichnyi sloy. Nauka, Moscow, 1962.
[109] J.C. Luke. A perturbation method for nonlinear dispersive wave problems. Proc. R. Soc. Ser. A, 292:403–412, 1966.
[110] G.J.M. Maree. Slow passage through a pitchfork bifurcation. SIAM J. Appl. Math., 56:889–918, 1996.
[111] V.P. Maslov and M.V. Fedoryuk. Quasiclassical Approximations for Equations of Quantum Mechanics. Nauka, Moscow, 1976.
[112] V.P. Maslov and G.A. Omel'yanov. Asymptotic soliton-like solutions with small dispersion. Usp. Mat. Nauk, 36(3):63–126, 1981.
[113] Maxima, a computer algebra system.
[114] L.F. McGoldrick. Resonant interactions among capillary-gravity waves. J. Fluid Mech., 21(2):305–331, 1965.
[115] E.M. McMillan. The synchrotron: a proposed high energy particle accelerator. Phys. Rev., 68:143–144, 1945.
[116] B. Meerson and S. Yariv. A rigid rotator under slowly-varying kicks: dynamic autoresonance and time-varying chaos. Phys. Rev. A, 44:3570–3582, 1991.
[117] V.K. Mel'nikov. On the stability of a center for time-periodic perturbations. Trudy Moskov. Mat. Obshch., 12:3–52, 1963.
[118] Yu.A. Mitropolskii. Problems on Asymptotic Methods of Non-stationary Oscillations. Nauka, Moscow, 1964.
[119] Yu.A. Mitropol'skii and G.P. Khoma. Mathematical Justification of Asymptotic Methods of Nonlinear Mechanics. Naukova Dumka, Kiev, 1983.
[120] A.D. Morozov. On the question of the complete qualitative investigation of Duffing's equation. Zh. Vychisl. Mat. Mat. Fiz., 13(5):1134–1152, 1973.
[121] A.D. Morozov. The complete qualitative investigation of Duffing's equation. Differencial'nye Uravnenija, 12(2):241–255, 1976.
[122] A.D. Morozov. Quasi-Conservative Systems. Cycles, Resonances and Chaos. World Scientific Publishing Co., River Edge, 1998.
[123] O. Naaman, J. Aumentado, L. Friedland, J.S. Wurtele, and I. Siddiqi. Phase-locking transition in a chirped superconducting Josephson resonator. Phys. Rev. Lett., 101:117005, 2008.
[124] A.H. Nayfeh. Perturbation Methods. Wiley Interscience, New York, 1981.
[125] A. Neishtadt. Probability phenomena in perturbed dynamical systems. In Mechanics of the 21st Century, pages 241–261. Springer, Berlin, 2005.


[150] V.A. Trenogin. Development and applications of the asymptotic method of Lyusternik–Vishik. Usp. Mat. Nauk, 25(4):123–156, 1970.
[151] A.B. Vasil'eva. Asymptotic behavior of solutions to certain problems involving nonlinear differential equations containing a small parameter multiplying the highest derivatives. Russ. Math. Surv., 18:13–84, 1963.
[152] A.B. Vasil'eva and V.F. Butuzov. Asymptotic Expansions of Solutions of Singularly Perturbed Equations. Nauka, Moscow, 1973.
[153] V.I. Veksler. A new method of acceleration of relativistic particles. J. Phys. USSR, 9:153–158, 1945.
[154] M.I. Vishik and L.A. Lyusternik. Regular degeneration and boundary layer for linear differential equations with small parameter. Usp. Mat. Nauk, 12(5):3–122, 1957.
[155] M.I. Vishik and L.A. Lyusternik. The asymptotic behaviour of solutions of linear differential equations with large or quickly changing coefficients and boundary conditions. Russ. Math. Surv., 15(4):3–91, 1960.
[156] W. Wasow. On boundary layer problems in the theory of ordinary differential equations. PhD thesis, New York University, 1941.
[157] W. Wasow. On the asymptotic solution of boundary value problems for ordinary differential equations containing a parameter. J. Math. Phys., 23:173–183, 1944.
[158] W. Wasow. Asymptotic Expansions for Ordinary Differential Equations, volume XIV of Pure and Applied Mathematics. Interscience, New York, 1965.
[159] G. Wentzel. Eine Verallgemeinerung der Quantenbedingungen für die Zwecke der Wellenmechanik. Z. Phys., 38(6–7):518–529, 1926.
[160] G.B. Whitham. A general approach to linear and nonlinear dispersive waves using a Lagrangian. J. Fluid Mech., 27:273–283, 1965.
[161] G.B. Whitham. Linear and Nonlinear Waves. Wiley, New York, 1974.
[162] E.T. Whittaker and G.N. Watson. A Course of Modern Analysis. University Press, Cambridge, 1920.
[163] O. Yaakobi, L. Friedland, and Z. Henis. Driven, autoresonant three-oscillator interaction. Phys. Rev. E, 76:026205, 2007.
[164] S. Yariv and L. Friedland. Autoresonance interaction of three nonlinear adiabatic oscillators. Phys. Rev. E, 48:3072–3076, 1993.

Index

sn as k → 0 76
sn as k → 1 − 0 77
action 116
adiabatic invariant 116
Airy function 27
asymptotic sum 10
autoresonant equation with dissipation 172
Bloch function 85
calibration sequence 6
chirp-rate 170
delta of amplitude 74
double periodic 73
Duffing's oscillator 132
elliptic integral as k → 1 − 0 79
formal asymptotic series 6
Fresnel integrals 18
gamma-function, integral 16
Hill equation 84
Jacobi cn 74
Jacobi manifold 69
Jacobi sn 72
Krylov–Bogolyubov's ansatz 111
Lamé equation 90
Laplace method 14
Mathieu equation 86
Mathieu functions 88
Melnikov's integral 138
modulus of elliptic function 72
non-linear resonance equation with dissipation 161
non-linear WKB-approximation 48
Painlevé-1 equation 221
Painlevé-2 equation 209
parabolic cylinder function 33
primary parametric resonance equation 200
primary resonance equation 171
pumping 170
Riemann lemma 18
Riemann surface 68, 72
sin of amplitude 72
slowly varying equilibrium 171, 172
stationary phase method 23
Stirling's approximation 17
uniform asymptotics of sn 83
Weierstrass ℘-function 68, 70
WKB-approximation 37

De Gruyter Series in Nonlinear Analysis and Applications

Volume 22
Miroslav Bačák
Convex Analysis and Optimization in Hadamard Spaces, 2014
ISBN 978-3-11-036103-2, e-ISBN (PDF) 978-3-11-036162-9, e-ISBN (EPUB) 978-3-11-039108-4, Set-ISBN 978-3-11-036163-6

Volume 21
Moshe Marcus, Laurent Véron
Nonlinear Second Order Elliptic Equations Involving Measures, 2013
ISBN 978-3-11-030515-9, e-ISBN (PDF) 978-3-11-030531-9, Set-ISBN 978-3-11-030532-6

Volume 20
John R. Graef, Johnny Henderson, Abdelghani Ouahab
Impulsive Differential Inclusions: A Fixed Point Approach, 2013
ISBN 978-3-11-029361-6, e-ISBN (PDF) 978-3-11-029531-3, Set-ISBN 978-3-11-029532-0

Volume 19
Petr Hájek, Michal Johanis
Smooth Analysis in Banach Spaces, 2014
ISBN 978-3-11-025898-1, e-ISBN (PDF) 978-3-11-025899-8, e-ISBN (EPUB) 978-3-11-039199-2, Set-ISBN 978-3-11-220385-9

Volume 18
Smaïl Djebali, Lech Górniewicz, Abdelghani Ouahab
Solution Sets for Differential Equations and Inclusions, 2012
ISBN 978-3-11-029344-9, e-ISBN (PDF) 978-3-11-029356-2, Set-ISBN 978-3-11-029357-9

Volume 17
Jürgen Appell, Józef Banaś, Nelson José Merentes Díaz
Bounded Variation and Around, 2013
ISBN 978-3-11-026507-1, e-ISBN (PDF) 978-3-11-026511-8, Set-ISBN 978-3-11-026624-5

www.degruyter.com