Computational Physics: With Worked Out Examples in FORTRAN and MATLAB, 2nd edition



Michael Bestehorn
Computational Physics

Also of Interest

Computational Methods for Data Analysis, Yeliz Karaca and Carlo Cattani, 2018. ISBN 9783110496352, e-ISBN (PDF) 9783110496369

MATLAB® Kompakt, Wolfgang Schweizer, 2022. ISBN 9783110741704, e-ISBN (PDF) 9783110741780

Scientific Computing For Scientists and Engineers, Timo Heister and Leo G. Rebholz, 2023. ISBN 9783110999617, e-ISBN (PDF) 9783110988451

Analog Computing, Bernd Ulmann, 2022. ISBN 9783110787610, e-ISBN (PDF) 9783110787740

Numerical Methods with Python for the Sciences, William Miles, 2023. ISBN 9783110776454, e-ISBN (PDF) 9783110776645

Michael Bestehorn

Computational Physics

With Worked Out Examples in FORTRAN® and MATLAB® 2nd edition

Author
Prof. Dr. Michael Bestehorn
Brandenburg University of Technology Cottbus–Senftenberg
Dep. of Statistical Physics and Nonlinear Dynamics
Erich-Weinert-Str. 1
03046 Cottbus
Germany
[email protected]

MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. The MathWorks Publisher Logo identifies books that contain MATLAB and Simulink content. Used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB and Simulink software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular use of the MATLAB and Simulink software or related products.

For MATLAB® and Simulink® product information, or information on other related products, please contact:
The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA, 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7001
E-mail: [email protected]
Web: www.mathworks.com

ISBN 978-3-11-078236-3
e-ISBN (PDF) 978-3-11-078252-3
e-ISBN (EPUB) 978-3-11-078266-0
Library of Congress Control Number: 2023930636

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2023 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Michael Bestehorn
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com

Contents

1 Introduction
1.1 Goal, contents, and outline
1.2 The environment required for program development
1.2.1 Operating system
1.2.2 Software packages
1.2.3 Graphics
1.2.4 Program development and a simple script
1.3 A first example – the logistic map
1.3.1 Map
1.3.2 FORTRAN
1.3.3 Problems
Bibliography

2 Nonlinear maps
2.1 Frenkel–Kontorova model
2.1.1 Classical formulation
2.1.2 Equilibrium solutions
2.1.3 The standard map
2.1.4 Problems
2.2 Chaos and Lyapunov exponents
2.2.1 Stability, butterfly effect, and chaos
2.2.2 Lyapunov exponent of the logistic map
2.2.3 Lyapunov exponents for multidimensional maps
2.3 Affine maps and fractals
2.3.1 Sierpinski triangle
2.3.2 About ferns and other plants
2.3.3 Problems
2.4 Fractal dimension
2.4.1 Box-counting
2.4.2 Application: Sierpinski triangle
2.4.3 Problem
2.5 Neural networks
2.5.1 Perceptron
2.5.2 Self-organized maps: Kohonen's model
2.5.3 Problems
Bibliography

3 Dynamical systems
3.1 Quasilinear differential equations
3.1.1 Example: logistic map and logistic ODE
3.1.2 Problems
3.2 Fixed points and instabilities
3.2.1 Fixed points
3.2.2 Stability
3.2.3 Trajectories
3.2.4 Gradient dynamics
3.2.5 Special case N = 1
3.2.6 Special case N = 2
3.2.7 Special case N = 3
3.3 Hamiltonian systems
3.3.1 Hamilton function and canonical equations
3.3.2 Symplectic integrators
3.3.3 Poincaré section
Bibliography

4 Ordinary differential equations I, initial value problems
4.1 Newton's mechanics
4.1.1 Equations of motion
4.1.2 The mathematical pendulum
4.2 Numerical methods of the lowest order
4.2.1 Euler method
4.2.2 Numerical stability of the Euler method
4.2.3 Implicit and explicit methods
4.3 Higher order methods
4.3.1 Heun's method
4.3.2 Problems
4.3.3 Runge–Kutta method
4.4 RK4 applications: celestial mechanics
4.4.1 Kepler problem: closed orbits
4.4.2 Quasiperiodic orbits and apsidal precession
4.4.3 Multiple planets: is our solar system stable?
4.4.4 The reduced three-body problem
4.4.5 Problems
4.5 Molecular dynamics (MD)
4.5.1 Classical formulation
4.5.2 Boundary conditions
4.5.3 Microcanonical and canonical ensemble
4.5.4 A symplectic algorithm
4.5.5 Evaluation
4.5.6 Problems
4.6 Chaos
4.6.1 Harmonically driven pendulum
4.6.2 Poincaré section and bifurcation diagrams
4.6.3 Lyapunov exponents
4.6.4 Fractal dimension
4.6.5 Reconstruction of attractors
4.7 Differential equations with periodic coefficients
4.7.1 Floquet theorem
4.7.2 Stability of limit cycles
4.7.3 Parametric instability: pendulum with an oscillating support
4.7.4 Mathieu equation
4.7.5 Problems
Bibliography

5 Ordinary differential equations II, boundary value problems
5.1 Preliminary remarks
5.1.1 Boundary conditions
5.1.2 Example: ballistic flight
5.2 Finite differences
5.2.1 Discretization
5.2.2 Example: Schrödinger equation
5.3 Weighted residual methods
5.3.1 Weight and base functions
5.3.2 Example: Stark effect
5.4 Nonlinear boundary value problems
5.4.1 Nonlinear systems
5.4.2 Newton–Raphson
5.4.3 Example: the nonlinear Schrödinger equation
5.4.4 Example: a moonshot
5.5 Shooting
5.5.1 The method
5.5.2 Example: free fall with quadratic friction
5.5.3 Systems of equations
5.6 Problems
Bibliography

6 Ordinary differential equations III, memory, delay and noise
6.1 Why memory? Two examples and numerical methods
6.1.1 Logistic equation – the Hutchinson–Wright equation
6.1.2 Pointer method
6.1.3 Mackey–Glass equation
6.2 Linearized memory equations and oscillatory instability
6.2.1 Stability of a fixed point
6.2.2 One equation, one δ-kernel
6.2.3 One equation, one arbitrary kernel
6.3 Chaos and Lyapunov exponents for memory systems
6.3.1 Projection onto a finite dimensional system of ODEs
6.3.2 Determination of the largest Lyapunov exponent
6.3.3 The complete Lyapunov spectrum
6.3.4 Problems
6.4 Epidemic models
6.4.1 Predator-prey systems and SIR model
6.4.2 SIRS model
6.4.3 SIRS model with infection rate control
6.4.4 Delayed infection rate control
6.4.5 Problems
6.5 Noise and stochastic differential equations
6.5.1 Brownian motion
6.5.2 Stochastic differential equation
6.5.3 Stochastic delay differential equation
6.5.4 SIRS model with delayed and noisy infection rate control
6.6 A microscopic traffic flow model
6.6.1 The model
6.6.2 The nondimensional basic system
6.6.3 Stationary solution and linear stability analysis
6.6.4 The fully nonlinear system – numerical solutions
Bibliography

7 Partial differential equations I, basics
7.1 Classification
7.1.1 PDEs of the first order
7.1.2 PDEs of the second order
7.1.3 Boundary and initial conditions
7.2 Finite differences
7.2.1 Discretization
7.2.2 Elliptic PDEs, example: Poisson equation
7.2.3 Parabolic PDEs, example: heat equation
7.2.4 Hyperbolic PDEs, example: convection equation, wave equation
7.3 Alternative discretization methods
7.3.1 Chebyshev spectral method
7.3.2 Spectral method by Fourier transformation
7.3.3 Finite-element method
7.4 Nonlinear PDEs
7.4.1 Real Ginzburg–Landau equation
7.4.2 Numerical solution, explicit method
7.4.3 Numerical solution, semi-implicit method
7.4.4 Problems
Bibliography

8 Partial differential equations II, applications
8.1 Quantum mechanics in one dimension
8.1.1 Stationary two-particle equation
8.1.2 Time-dependent Schrödinger equation
8.2 Quantum mechanics in two dimensions
8.2.1 Schrödinger equation
8.2.2 Algorithm
8.2.3 Evaluation
8.3 Fluid mechanics: flow of an incompressible liquid
8.3.1 Hydrodynamic basic equations
8.3.2 Example: driven cavity
8.3.3 Thermal convection: (A) square geometry
8.3.4 Thermal convection: (B) Rayleigh–Bénard convection
8.4 Pattern formation out of equilibrium
8.4.1 Reaction-diffusion systems
8.4.2 Swift–Hohenberg equation
8.4.3 Problems
Bibliography

9 Monte Carlo methods
9.1 Random numbers and distributions
9.1.1 Random number generator
9.1.2 Distribution function, probability density, mean values
9.1.3 Other distribution functions
9.2 Monte Carlo integration
9.2.1 Integrals in one dimension
9.2.2 Integrals in higher dimensions
9.3 Applications from statistical physics
9.3.1 Two-dimensional classical gas
9.3.2 The Ising model
9.4 Differential equations derived from variational problems
9.4.1 Diffusion equation
9.4.2 Swift–Hohenberg equation
Bibliography

A Matrices and systems of linear equations
A.1 Real-valued matrices
A.1.1 Eigenvalues and eigenvectors
A.1.2 Characteristic polynomial
A.1.3 Notations
A.1.4 Normal matrices
A.2 Complex-valued matrices
A.2.1 Notations
A.2.2 Jordan canonical form
A.3 Inhomogeneous systems of linear equations
A.3.1 Regular and singular system matrices
A.3.2 Fredholm alternative
A.3.3 Regular matrices
A.3.4 LU decomposition
A.3.5 Thomas algorithm
A.4 Homogeneous systems of linear equations
A.4.1 Eigenvalue problems
A.4.2 Diagonalization
A.4.3 Application: zeros of a polynomial
Bibliography

B Program library
B.1 Routines
B.2 Graphics
B.2.1 init
B.2.2 contur
B.2.3 contur1
B.2.4 ccontu
B.2.5 image
B.2.6 ccircl
B.3 Runge–Kutta
B.3.1 rkg
B.3.2 drkg
B.3.3 drkadt
B.3.4 rkg_del
B.4 Miscellaneous
B.4.1 tridag – Thomas algorithm
B.4.2 ctrida
B.4.3 dlyap_exp – Lyapunov exponents
B.4.4 dlyap_del – Largest Lyapunov exponent of delay-system
B.4.5 dlyap_exp_del – Lyapunov exponents of delay-system
B.4.6 schmid – orthogonalization
B.4.7 FUNCTION volum – volume in n dimensions
B.4.8 FUNCTION deter – determinant
B.4.9 random_init – random numbers

C Solutions of the problems
C.1 Chapter 1
C.2 Chapter 2
C.3 Chapter 3
C.4 Chapter 4
C.5 Chapter 5
C.6 Chapter 6
C.7 Chapter 7
C.8 Chapter 8

D README and a short guide to FE-tools
D.1 README
D.2 Short guide to finite-element tools from Chapter 7
D.2.1 mesh_generator
D.2.2 laplace_solver
D.2.3 grid_contur

Index

1 Introduction

1.1 Goal, contents, and outline

Nowadays, (theoretical) physicists are mainly concerned with two subjects: one is the devising of models, normally in the form of differential equations, ordinary or partial, while the other is their approximate solution, normally with the help of larger or smaller computers. In a certain way, computational physics can be seen as the link between theory and experiment. The equations involved often need a numerical treatment, usually in connection with certain simplifications and approximations. Once the code is written, the experimental part begins. A computer on which, for instance, a turbulent pipe flow is simulated can to some extent be regarded as a test station with easy access to output values such as flow rates, velocities, and temperatures, and also to input parameters like material constants or geometric boundary conditions.

The present text tries to show, by means of examples coming from different corners of physics, how physical and mathematical questions can be answered using a computer. Problems from classical mechanics normally rely on ordinary differential equations, while those of electrodynamics, quantum mechanics, continuum mechanics or hydrodynamics are formulated by partial differential equations, which are often nonlinear.

This is neither a numerics book, nor a tutorial for a special computer language, for which we refer to the literature [1]. On the other hand, most of the discussed algorithms are realized in FORTRAN. FORTRAN is an acronym derived from FORmula TRANslator. This already indicates that it is more appropriate and convenient for solving problems originating from theoretical physics or applied mathematics, like managing complex numbers or solving differential equations.

During the past 20 years or so, the software package MATLAB, also based on FORTRAN routines, has enjoyed more and more popularity in research as well as in teaching [2]. The name is another acronym, formed from MATrix LABoratory. Thus, MATLAB shows its strengths in the interactive treatment of problems resulting from vector or matrix calculus, but other problems related to differential equations can also be tackled. Programming is similar to FORTRAN, and for beginners many things are easier to implement and handle. In addition, MATLAB captivates with its comfortable and convenient graphic output features. MATLAB is devised as a standalone system; it runs under Windows and Linux. No additional text editor, compiler or software libraries are needed. However, contrary to FORTRAN, once a MATLAB code is written, it is not compiled to an effective machine code, but rather interpreted line by line. Processes may thus become incredibly slow. Almost everything said here about MATLAB is also valid for PYTHON, which came into vogue during the last 10 years. MATLAB (and PYTHON) may have some advantages in program development and testing, but if in the end a fast and effective code is desired (which should usually be

the case), a compiler language is inevitably preferred. Nevertheless, we shall give some examples also in MATLAB, mainly those based on matrix calculation and linear algebra. In the meantime, highly developed computer algebra systems exist [3] and can be used to check the formulas derived in the book, but we do not go into this matter.

The motto of the book could read ‘Learning by Doing’. Abstract formulations are avoided as much as possible. Sections where theory dominates are followed by an example or an application. The chapters, of course, can be read in sequence, albeit not necessarily. If someone is interested, for instance, in partial differential equations, it is possible to start with Chapter 7 and consult Chapters 3–6 for potential issues. The same is true for the statistical part, Chapter 9; it should be understandable also without knowledge of the preceding chapters. Depending on the subject, certain basic knowledge in (theoretical) physics is required. For Chapters 4, 5, and 8, classical mechanics and a little bit of quantum mechanics should be at hand, while for Chapter 8 additional basics in hydrodynamics won't hurt. In Chapter 9 we presume a certain familiarity with the foundations of statistical mechanics. In any case, it is recommended to have a computer or notebook running nearby and, according to the motto, to experiment with and check the presented routines immediately while reading.

The topics up to Chapter 5, and to some extent also in Chapter 6, can be categorized under the notion of low-dimensional systems: problems that can be described with only a few variables. In Chapter 2, we start with iterative maps as a pre-stage to differential equations. Here, notions like stability, chaos, or fractals are introduced and related to each other. Neural networks are considered as special maps, mapping input neurons to output neurons by dynamic connections (Figure 1.1). Although going back to the 50s, neural nets have nowadays regained huge attention because they provide the basic principle of artificial intelligence devices. Besides iterative maps, applications from Newton's classical mechanics come to mind, for instance, the computation of planetary orbits or the motion of a pendulum (Figure 1.2). Naturally for this case, ordinary differential equations play a central role.

Figure 1.1: A simple neuronal network where five input neurons are connected to three output neurons by dynamic connections (from Chapter 2).


Figure 1.2: Time evolution of low-dimensional systems from Chapter 4. Left: two-planet problem with gravitational interaction. The orbit of the inner planet (red) is unstable and the planet is thrown out of the system. Right: deterministic chaos for the driven pendulum, Poincaré section.

For three dimensions and more (or better: degrees of freedom), chaotic behavior becomes possible, if not the rule. Large parts of Chapters 2–4 are dedicated to so-called deterministic chaos. Chapter 6 leaves physical subjects for a while and turns to models where time-delay, memory, and stochastic terms are the main focus. Two systems are studied in detail. The first is an epidemic model, also known as SIR or compartment model (Figure 1.3), based on ordinary differential equations but including memory terms; the second treats road traffic flow by means of microscopic considerations that allow the prediction of self-organized congestion (Figure 1.4).

Figure 1.3: Epidemic compartment models consist of normally three or more compartments where the individuals are located according to their state of health, see Chapter 6.

Figure 1.4: A microscopic traffic flow model describes a number of interacting cars by their positions and velocities, see Chapter 6.


Figure 1.5: Numerical solutions of field equations, see Chapter 8. Left: time-dependent Schrödinger equation, probability density and center of mass of an electron in a constant magnetic field. Right: Hydrodynamic vortex for the driven cavity.

Figure 1.6: Macroscopic pattern formation for the example of a reaction-diffusion system, time series. Out of a large-scale oscillating solution, nuclei emerge spontaneously, followed by the generation of small-scale Turing structures. The numerical integration of the Brusselator equations is shown in Chapter 8.

Starting with Chapter 6, high-dimensional (or infinite-dimensional) systems come into focus, normally described by field theories in the form of partial differential equations (Chapter 7), but also by ordinary differential equations with memory or time-delay terms (Chapter 6). These can be linear, as is very often seen in quantum mechanics or electrodynamics, but also nonlinear, as in hydrodynamics (Figure 1.5) or in diffusion-driven macroscopic pattern formation (Figure 1.6). Caused by nonlinearities, patterns may emerge on the most diverse spatio-temporal scales (Chapter 8), the dynamics becoming chaotic or turbulent (Figure 1.7). Finally, Chapter 9 is devoted to Monte Carlo methods. Here, randomness plays a key role in the determination of equilibrium states. Using Monte Carlo simulations, phase


Figure 1.7: Two-dimensional flow field showing turbulence, time series. Due to a vertical temperature gradient, an initially stable layer becomes unstable (top). Warmer air (red) rises in the form of ‘plumes’ in the colder air (blue) (from Chapter 8).

Figure 1.8: Monte Carlo simulation of a gas comprised of 1000 particles with hardcore rejection and long-range attraction from Chapter 9. The temperature decreases from left to right, and a phase transition to a condensed state can be detected.

transitions can be studied without resorting to differential equations (Figure 1.8). The application to the Ising model of spatially arranged spins is a prominent example.

1.2 The environment required for program development

1.2.1 Operating system

We shall discuss programs mainly written in FORTRAN, but also in MATLAB. MATLAB (and PYTHON) is conceived as its own environment, where codes can be written, developed, and run without leaving the MATLAB application. The operating system of the performing computer plays a minor role, so Windows can be used as well. On the other hand, writing and developing FORTRAN codes is quite different. One needs a compiler, a text editor and a convenient file system. For more ambitious applications, software libraries like LAPACK should be at hand. Also, a graphics library is important, at least to have the possibility to plot results in a pleasant and appealing way. If one wants to have

all these possibilities preferably free of charge, the only reasonable possibility is to do it under Linux. Thus, we assume the reader has a Linux operating system running where GNU FORTRAN 95 is installed, together with the important libraries. A certain code written with a text editor, e.g., emacs or xemacs, can be compiled via

$ gfortran program.f

to a binary file named a.out. That can be executed with

$ ./a.out

This is, of course, the most simple case, where no further libraries are involved.

1.2.2 Software packages

Besides a compiler, e.g., gfortran from GNU, we shall need external routines to solve standard problems, mainly from the LAPACK library (Linear Algebra PACKage). LAPACK can also be installed for free via the internet [4]. If the source code resorts to LAPACK routines, one should use the compiler options -l and -L:

$ gfortran program.f -L/usr/lib -llapack

The option -L/… points to the path where the following library is located on the computer.

1.2.3 Graphics

For graphic output we employ the package PGPLOT [5]. All the figures in this first chapter and most of the book are generated with PGPLOT. Of course, the reader can use other packages if available, but PGPLOT provides all necessary basic tools and is also free of charge. The compiler options for a source that relies on both LAPACK and PGPLOT should look like this:

$ gfortran program.f -L/usr/lib -llapack -lpgplot

if the PGPLOT library is installed in /usr/lib.

1.2.4 Program development and a simple script

Program development can be considered as an iterative process. The desired result is approached slowly and reached, if at all, after N iterations. One iteration step is built up of the following points:


(1) editing,
(2) compiling,
(3) executing,
(4) go back to (1).

In (1), an ASCII file is generated using a text editor. This file is the source and usually has the extension .f, say, name.f. The editor is invoked, e.g., emacs by

$ emacs name.f

After finishing the first round of editing the source file, compiling and possible linking with libraries (2) can be achieved by a script file

gfortran -O1 $1.f -o $1 -L/usr/lib -llapack -lpgplot

Let us call the script containing the above line 'make_f95'. Before executing the file, it must receive the attribute executable, which is done by

$ chmod u+x make_f95

Running the script with

$ ./make_f95 name

initializes the variable $1 with name. Further options (optimization, etc.) of the FORTRAN compiler can be found in the built-in manual by

$ man gfortran

The option -o transfers the name in $1 to the created binary executable, instead of calling it a.out (default). (3) If the compiler passes without error messages, one finds in the working directory a binary file, here name, that can simply be executed with

$ ./name

1.3 A first example – the logistic map

1.3.1 Map

To inspire the reader a bit, we conclude this chapter with a numerically very simple example. We study the map

\[ x_{n+1} = a\,x_n (1 - x_n), \qquad n = 0, 1, 2, \ldots, \qquad 0 \le a \le 4,\; 0 \le x_0 \le 1, \tag{1.1} \]

which is known as the logistic map. In this way we can also demonstrate the usage of the graphic package PGPLOT. The recursion rule (1.1) can be considered a simple model of the temporal evolution of a certain population if $x_n$ denotes the population density at time (in the year) $n$. The term $a x_n$ alone leads to an exponential increase if $a > 1$, otherwise leading to the extinction of the species. For $a > 1$, the nonlinearity $-a x_n^2$ restricts the growth due to lack of resources (limited food) and leads to a saturation. The constraint $0 \le a \le 4$ originates from the condition $0 \le x_n \le 1$ for all $x_n$.

Depending on $a$, we have one or two stationary solutions, namely

\[ x_s^{(1)} = 0 \quad \text{for all } a, \qquad x_s^{(2)} = 1 - 1/a \quad \text{for } 1 < a \le 4, \tag{1.2} \]

which are computed from

\[ x_s^{(i)} = f(x_s^{(i)}) \tag{1.3} \]

with the abbreviation

\[ f(x) = a\,x(1 - x). \tag{1.4} \]

However, as shown by a linear stability analysis, the nontrivial solution $x_s^{(2)}$ becomes unstable if $a > 3$. To this end, we examine the behavior of an infinitesimal deviation $|\epsilon_0| \ll 1$ of $x_s^{(2)}$,

\[ x_n = x_s^{(2)} + \epsilon_n, \tag{1.5} \]

under iteration. Inserting (1.5) into (1.1) yields (Taylor expansion)

\[ x_s^{(2)} + \epsilon_{n+1} = f(x_s^{(2)} + \epsilon_n) = f(x_s^{(2)}) + f'(x_s^{(2)})\,\epsilon_n + O(\epsilon_n^2) \tag{1.6} \]

or, considering only linear terms in $\epsilon_n$,

\[ \epsilon_{n+1} = f'(x_s^{(2)})\,\epsilon_n. \tag{1.7} \]

The small deviation $\epsilon_0$ will grow during iteration if $|f'(x_s^{(2)})| > 1$, which leads with (1.4), i.e., $f'(x_s^{(2)}) = a(1 - 2x_s^{(2)}) = 2 - a$, to the two regions of instability

\[ a < 1, \qquad a > 3. \]

The first one is excluded by (1.2). What happens for $a > 3$? The computer solutions introduced below show a periodic behavior (Figure 1.9),

\[ x_n = x_{n-2} = x_{p1}, \qquad x_{n+1} = x_{n-1} = x_{p2}, \]

or

\[ x_{p1} = f(f(x_{p1})), \qquad x_{p2} = f(f(x_{p2})). \]

Applying again a linear stability analysis shows that the periodic solution becomes unstable for $a > 1 + \sqrt{6}$, and a more complex periodic cycle where $x_n$ now alternates between four values arises (Problems).
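As a quick numerical illustration of the stable regime (our own example, not from the text): for $a = 2$ the iteration converges to the nontrivial fixed point,

\[ x_0 = 0.1 \;\to\; x_1 = 0.18 \;\to\; x_2 = 0.2952 \;\to\; x_3 \approx 0.4161 \;\to\; \ldots \;\to\; x_s^{(2)} = 1 - \tfrac{1}{2} = 0.5, \]

in accordance with $f'(x_s^{(2)}) = 2 - a = 0$, which even implies very fast (quadratic) convergence at this particular $a$.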

1.3.2 FORTRAN

Numerically we can study the logistic map with the following code:

      PROGRAM logist
      REAL*8 :: a,x,amin,amax   ! double precision (8 bytes)
c region for a
      amin=2.8; amax=4.
c initialize graphics (pgplot)
      CALL INIT(SNGL(amin),SNGL(amax),0.,1.,7.,1.,1)
      itmax=200    ! number of iterations for each a
      ivor=1000    ! number of hidden iterations before plotting
      tiny=1.E-6   ! initial value for x
      da=(amax-amin)/999.   ! step width for a
c loop for 1000 a values
      DO ia=1,1000
c logistic map
        a=da*FLOAT(ia-1)+amin
        x=tiny; ic=0   ! x0
        DO it=1,itmax+ivor
          x=a*x*(1.-x)
          IF(it > ivor) THEN
c after pre-iter. plot a pixel at (a,x) with color ic
            ic=MOD(ic,15)+1; CALL PGSCI(ic)
            CALL PGPT(1,SNGL(a),SNGL(x),-1)
          ENDIF
        ENDDO
      ENDDO
c labels for axes
      CALL PGSCI(1); CALL PGSCH(2.5)
      CALL PGTEXT(3.5,0.02,'a'); CALL PGTEXT(1.2,0.9,'x')
c steady solution xs(a)
      CALL PGSLW(5)   ! in bold
      CALL PGMOVE(SNGL(amin),1.-1./amin)
      da=(amax-amin)/99.
      DO ia=1,100
        as=da*FLOAT(ia-1)+amin
        xs=1.-1./as
        CALL PGDRAW(as,xs)
      ENDDO
c terminate graphics
      CALL PGEND
      END

Compiling and executing creates a figure like Figure 1.9 on the screen.
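Using the script from Section 1.2.4, the program can, for instance, be built and started like this (assuming the source was saved as logist.f):

$ ./make_f95 logist
$ ./logist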

Figure 1.9: Logistic map, period doublings, and chaos. The stationary solution $x_s^{(2)}$ (bold) becomes unstable for $a > 3$.

The subroutine init comprises PGPLOT calls for initializing the graphics screen. It is described in more detail in Appendix B.

A quick glance at the program shows that the implicit type declarations of FORTRAN have been utilized. This means that it is not necessary to declare variables in the header of each routine if they are real or integer scalar quantities. Each variable beginning with a letter from a to h and then from o to z is defined as real (single precision) by default, while all variables beginning with i to n are integers (4 bytes). On the other hand, you can often read in FORTRAN guides (normally written by mathematicians or computer scientists) sentences like we advise strongly against using type conventions in FORTRAN. Of course, the usage of default declarations seems to be a bit sloppy, but it saves a lot of typing and going back and forth while writing a program. Those who do not like this way can put the statement

      IMPLICIT NONE


on the first line of each routine and switch off the implicit declarations (there are also compiler options that have the same effect). However, each variable then has to be defined explicitly.

1.3.2.1 MATLAB

The same code can be written in MATLAB. The syntax is very similar to FORTRAN, but easier to write. The graphics calls are different. A code that produces the same Figure 1.9 is shown here:

clear; amin=2.8; amax=4.0;
itmax=200; ivor=1000; tiny=1.e-6;
y=zeros(1,itmax); xx=zeros(1,itmax);
da=(amax-amin)/1000;
for a=amin:da:amax
   x=tiny;
   for i=1:itmax+ivor
      x=a*x*(1-x);
      if i>ivor
         y(i-ivor)=x;    % to save CPU time it is essential to
         xx(i-ivor)=a;   % store the x-values for each a and
      end                % then plot them all together
   end
   plot(xx,y,'.'); hold on;
end

However, execution time in MATLAB is normally much longer than in a compiler language. To demonstrate this, we remove all graphic output and compare the two codes, first in MATLAB:

clear; t=cputime;
itmax=100000000; ivor=itmax;
y=zeros(1,itmax);
a=3.95; x=0.01;
for i=1:ivor+itmax
   x=a*x*(1-x);
   if i>ivor
      y(i-ivor)=x;
   end
end
cputime-t   % prints the CPU time used

then in FORTRAN:

      REAL*8 :: a,x,y(100000000)
      common y   ! outwit the FORTRAN optimizer!
      itmax=100000000; ivor=itmax
      a=3.95; x=0.01
      DO it=1,itmax+ivor
        x=a*x*(1.-x)
        IF(it.gt.ivor) THEN
          y(it-ivor)=x
        ENDIF
      ENDDO
      CALL CPU_TIME(s)
      WRITE(6,*) s
      END

Both codes are run on an HP ProDesk 400 G3 MT, the MATLAB code under Windows 7 Professional, the FORTRAN code under Linux using GNU-FORTRAN 95. MATLAB needs about 3 seconds for the $2 \times 10^8$ iterations. If the optimizer in FORTRAN is turned off (-O0), FORTRAN takes 0.95 seconds. At the first optimization level (-O1), we get 0.65 seconds; using the second up to the fourth, we find 0.6 seconds. Starting with -O2, it is necessary to outflank the FORTRAN optimizer and to write the y-array into a dummy COMMON block. Otherwise gfortran would recognize that the computed quantities inside the iteration loop are not used further and would just skip the loop completely.

1.3.3 Problems

With pencil and paper:
1. Compute the period two cycle $x_{p1}$, $x_{p2}$ of the logistic map.
2. Show that this cycle becomes unstable if $a > 1 + \sqrt{6}$.

And to code: Change the code for the logistic map to scale up certain regions in the x-a plane. The regions could be defined using the mouse cursor.

Bibliography

[1] S. J. Chapman, FORTRAN 95/2003 for Scientists and Engineers, McGraw Hill Book Co. (2007).
[2] MATLAB – MathWorks, http://mathworks.com/products/matlab/, February 5, 2023.
[3] Maplesoft, http://www.maplesoft.com/solutions/education/, February 5, 2023.
[4] LAPACK – Linear Algebra PACKage – Netlib, www.netlib.org/lapack/, February 5, 2023.
[5] T. J. Pearson, Guide to the graphics package PGPLOT, Caltech, USA: http://www.astro.caltech.edu/~tjp/pgplot/, February 5, 2023.

2 Nonlinear maps

In this chapter we continue with recursion formulas where, starting from an initial (set of) number(s), all subsequent values of one (or more) variable(s) follow in a unique and deterministic way.

2.1 Frenkel–Kontorova model

2.1.1 Classical formulation

As a simple model of a solid body in an externally given, periodic potential $V(x)$ with

\[ V(x) = V(x + 1) \tag{2.1} \]

we examine a one-dimensional chain of $N$ mass points, each with mass $m$. The mass points are connected by springs having the spring rate $D = 1$ and the equilibrium length zero (Figure 2.1). They are located at $x_n$ and have the momentum $p_n$. The Hamilton function reads:

\[ H(x_n, p_n) = \sum_{n=1}^{N} \left[ \frac{p_n^2}{2m} + V(x_n) + \frac{1}{2}(x_n - x_{n-1})^2 \right]. \tag{2.2} \]

To state the dynamic problem one may write up the canonical equations

\[ \dot{p}_n = -\frac{\partial H}{\partial x_n}, \qquad \dot{x}_n = \frac{\partial H}{\partial p_n} \tag{2.3} \]

and have to solve $2N$ coupled ordinary differential equations.

Figure 2.1: Sketch of a one-dimensional chain of springs in an external periodic potential.
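Written out with (2.2), the canonical equations (2.3) take the explicit form (a small added step, spelled out here for clarity):

\[ \dot{x}_n = \frac{p_n}{m}, \qquad \dot{p}_n = -V'(x_n) + (x_{n+1} - x_n) - (x_n - x_{n-1}), \]

whose right-hand side already contains the force balance that reappears in the equilibrium condition below.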

2.1.2 Equilibrium solutions

We shall come back in detail to problems like (2.3) in the next chapters. Here we concentrate only on steady configurations of the chain where $\dot{p}_n = 0$, $\dot{x}_n = 0$, $p_n = 0$. From (2.3) follows the equilibrium condition

\[ \frac{\partial H}{\partial x_n} = 0 \tag{2.4} \]

or

\[ V'(x_n) + (x_n - x_{n-1}) - (x_{n+1} - x_n) = 0, \tag{2.5} \]

where the prime denotes the derivative. We define the reciprocal density at location $x_n$

\[ y_n \equiv x_n - x_{n-1} \tag{2.6} \]

and obtain from (2.5) and (2.6) the two-dimensional map

\[ y_{n+1} = y_n + V'(x_n), \qquad x_{n+1} = x_n + y_{n+1} = x_n + y_n + V'(x_n). \tag{2.7} \]

From a certain initial pair $(x_0, y_0)$ the whole series $(x_n, y_n)$ is uniquely determined. Of course nothing is said about the stability of these solutions. At this stage we only know that they are at equilibrium where the net force on each mass point vanishes. Due to (2.4), the potential energy has an extremum, but this might be a maximum as well.

2.1.3 The standard map

To go on we must specify $V(x)$. With (2.1) we assume

\[ V(x) = \frac{K}{(2\pi)^2} \cos(2\pi x) \tag{2.8} \]

with the control parameter $K$. Then (2.7) takes the form

\[ y_{n+1} = y_n - \frac{K}{2\pi} \sin(2\pi x_n), \qquad x_{n+1} = x_n + y_n - \frac{K}{2\pi} \sin(2\pi x_n). \tag{2.9} \]

The recursion relation (2.9) is called Standard map, Chirikov map or Circle map and is discussed in detail in the literature (e.g., [1]).



15

The two-dimensional map (2.9) is easy to code: PROGRAM circle_map REAL*8 :: x,y,pi2,vor CHARACTER*1 :: c INTEGER :: pgcurs pi2=4.*ASIN(1.D0) ! 2*Pi amp=1. ! value of K imax=10000 ! number of iterations for each initial value CALL INIT(0.,1.,0.,0.5,10.,0.5,1) ncol=15 ! use 15 colors DO k=pgcurs(x0,y0,c) ! mouse query x=x0; y=y0 ! initial values ic=mod(ic,ncol)+1; CALL PGSCI(ic) DO i=1,imax y=y-amp/pi2*SIN(pi2*x); x=x+y ! circle map xp=MOD(x,1.); yp=MOD(y,1.) ! plot modulo 1 CALL PGPNTS(1,xp,yp,-1,1) ENDDO if(ICHAR(c).eq.32) EXIT ENDDO CALL PGEND END For each initial value, 10 000 points are plotted. The initial pair is determined using the mouse. The map is periodic in y → y±m, x → x±m′ with integers m, m′ and is symmetric with respect to the origin. Therefore it is sufficient to take the initial values from the region 0 ≤ x ≤ 1,

0 ≤ y ≤ 1/2.

Two fixed points xn+1 = xn , yn+1 = yn exist. At x, y = 0 we have a center, and x = 1/2, y = 0 is called saddle point (see Chapter 3). Figure 2.2 shows several series for K = 1. 2.1.4 Problems 1. 2.

Examine (2.9) for different K. For K = 0 you will not need your computer. Plot the chain length for fixed x0 as a function of y0 . Try different values for x0 .

16 � 2 Nonlinear maps

Figure 2.2: Standard map for K = 1.

2.2 Chaos and Lyapunov exponents As already known from the discussion of the logistic map in Section 1.3, there exist regular (periodic or quasiperiodic) and chaotic regimes if the control parameters or, at least for the Standard map, the initial conditions are changed. Instead of only looking qualitatively at the patterns of the iteration maps, we shall now derive a more quantitative measure for chaos.

2.2.1 Stability, butterfly effect, and chaos Let us turn again to the logistic map (1.1). Prevenient to the chaotic regime exists a cascade of period doublings where the relative distance of doubling or bifurcation points decreases with increasing a. The first few bifurcation points can be computed analytically, but the effort required increases quickly. Therefore it is desirable to find a quantity that reflects the stability of a series for any arbitrary a. Butterfly effect: What do we mean by stability? A descriptive definition of chaos is given by the development of some tiny disturbances of variables and/or parameters: if an (infinitesimally) small cause, namely a perturbation of i. e., the initial value y0 = x0 + ϵ after a certain time (here: number of iterations) is amplified to a finite and measurable outcome, a completely different series yn may result. Then the series starting at x0 is called unstable. A butterfly flaps its wings and perturbs the atmosphere very slightly.

2.2 Chaos and Lyapunov exponents

� 17

Provided sufficiently unstable weather conditions, this ‘butterfly effect’ may completely change the further development of the weather after a certain amount of time. 2.2.2 Lyapunov exponent of the logistic map Thus we are looking for a criterion that describes the divergence of two initially infinitesimally close series xn and yn if n ≫ 1. Let us start the first iteration with x0 . Then we have for xn xn = f (f (f (. . . f (x0 ) . . . ))) ≡ f (n) (x0 ),

(2.10)

where f (n) denotes the nth iterate of f . In parallel, a second series y1 , y2 . . . , yn is computed for a closely nearby initial value x0 + ϵ, |ϵ| ≪ 1 and reads yn = f (n) (x0 + ϵ).

(2.11)

To see if the two series are receding from each other, we compute their distance at each n and find with ϵ → 0 󵄨󵄨 f (n) (x + ϵ) − f (n) (x ) 󵄨󵄨 󵄨󵄨󵄨 d (n) 󵄨󵄨󵄨 󵄨 0 0 󵄨󵄨 dn = |xn − yn | = 󵄨󵄨󵄨ϵ 󵄨󵄨 = lim󵄨󵄨󵄨ϵ f (x)󵄨󵄨󵄨 . 󵄨󵄨 󵄨󵄨 ϵ→0󵄨󵄨 dx 󵄨󵄨x0 ϵ

(2.12)

If we assume that this distance increases (unstable) or decreases (stable) exponentially with n, dn = d0 eλn = |ϵ| eλn ,

(2.13)

the series is unstable if λ > 0, stable if λ < 0. With (2.12) we obtain from (2.13) 1 󵄨󵄨󵄨󵄨 d (n) 󵄨󵄨󵄨󵄨 ln󵄨󵄨 f (x)󵄨󵄨 , n→∞ n 󵄨󵄨 dx 󵄨󵄨x0

λ = lim

(2.14)

where λ is called the Lyapunov exponent.1 For the evaluation of (2.14) the chain rule must be applied d (n) 󵄨󵄨󵄨 f (x)󵄨󵄨󵄨 = dx f (x0 )dx f (x1 ) . . . dx f (xn ) dx 󵄨 x0 leading finally to 󵄨󵄨 1 n 󵄨󵄨󵄨󵄨 d 󵄨 . ∑ ln󵄨󵄨 f (x)󵄨󵄨󵄨 n→∞ n 󵄨󵄨 dx 󵄨󵄨x=xk k=0

λ = lim

1 Alexander M. Lyapunov, Russian mathematician and physicist, 1857–1918.

(2.15)

18 � 2 Nonlinear maps This relation is easy to code. Of course it is not possible to take infinite n, one should choose a sufficiently large n, say 1000, to find a convergent result. Based on the code logist from Chapter 1, the program could look like this: PROGRAM lyapunov REAL*8 :: a,x,flyap .... DO it=1,itmax+ivor ! iteration loop x=a*x*(1.-x) ! Logistic map IF(it.GT.ivor.AND.ABS(x-.5).GT.1.E-30) * fly=fly+LOG(ABS(a*(1.-2.*x))) ! sum for Lyap. exp. ENDDO flyap=flyap/FLOAT(itmax) ! Lyapunov exponent .... plot flyap as function of a ... Figure 2.3 shows the Lyapunov exponent over the control parameter a. One finds negative λ for the fixed points and for the periodic series, they are stable. But in the chaotic regions, the Lyapunov exponent turns out to be positive. At the bifurcation points a = ak , λ vanishes, since the 2k−1 -periodic cycle gets unstable at ak and gives way to the 2k -periodic solution, which then becomes stable from a > ak through a = ak+1 .

Figure 2.3: Lyapunov exponent of the logistic map. The chaotic regime commences at about a ≈ 3.55 and λ > 0. Inside the chaotic regime, periodic windows of different size persist.

2.2 Chaos and Lyapunov exponents �

19

2.2.3 Lyapunov exponents for multidimensional maps The method can be generalized to N-dimensional maps. Let x⃗n+1 = f ⃗(x⃗n ) with x,⃗ f ⃗ in RN . The distance dn now turns into the length of a vector d⃗n and instead of (2.12) one has d⃗n = L(n) (x⃗0 ) d⃗0 ,

|d⃗0 | = ϵ ≪ 0

with the Jacobi matrix L(n) (x⃗0 ) = ij

󵄨 𝜕fi(n) 󵄨󵄨󵄨 󵄨󵄨 𝜕xj 󵄨󵄨󵄨󵄨x⃗ 0

of the nth iterate. With d⃗n = d⃗0 eλn ≡ d⃗0 σ the eigenvalue problem (L(n) − σi ) d⃗0 = 0 results. Knowing the N eigenvalues σi we may finally compute the N Lyapunov exponents λi = lim

n→∞

1 ln σi . n

(2.16)

The Jacobi matrix of the nth iterate can be written as a product of the simple Jacobi matrices: L(n) (x⃗0 ) = L(x⃗n ) L(x⃗n−1 ) . . . L(x⃗0 ).

(2.17)

As an example we take the Standard map (2.9). Because of L(x)⃗ = (

1 − K cos(2πx) −K cos(2πx)

1 ), 1

x⃗ = (x, y)

we have here the special situation det L = 1 independent on x.⃗ From (2.17) it follows det L(n) = 1 and as a consequence σ1 σ2 = 1. Since L(n) is a real valued matrix, we may distinguish between the two cases

20 � 2 Nonlinear maps (i) σi real valued, one σk ≥ 1 and λk ≥ 0, (ii) σ1 = σ2∗ complex valued, |σi | = 1, and Re λi = 0, Case (i) belongs to an unstable series (saddle point, see Chapter 3), case (ii) denotes a stable one (center). How can we compute the Lyapunov exponents in practice? To achieve a convergent outcome, one should do at least a few thousand iterations and the product (2.17) will accordingly include as many matrix multiplications. In the chaotic regime where the series diverge, an overflow error is unavoidable due to the exponential growth with n. On the other hand, the relation (2.17) is linear in x⃗0 and L can be normalized after each iteration. If we choose the maximum norm 1 L(̄ x⃗i ) = L(x⃗i ) si

󵄨 󵄨 with si = max(󵄨󵄨󵄨Ljk (x⃗i )󵄨󵄨󵄨), jk

the product (n) L̄ (x⃗0 ) = L(̄ x⃗n ) L(̄ x⃗n−1 ) . . . L(̄ x⃗0 )

(2.18)

will not diverge anymore but differs from (2.17) in a possibly rather large factor n

s = ∏ si . i=0

The eigenvalues will then differ in the same factor and instead of (2.16) we obtain n 1 1 (ln σ̄i + ln s) = lim (ln σ̄i + ∑ ln sk ). n→∞ n n→∞ n k

λi = lim

(2.19)

All we have to do is to sum up ln sk during the iteration which will not lead to an overflow due to the logarithm. The code section computing λ in connection with the program circle_map could look like this: REAL*8 :: x,y,bs,s,sp,det,amp,fm1(2,2) complex xl1,xl2 .... x=x0; y=y0 ! init. values for iteration fm1=fm(pi2*x,amp) ! Jacobi matrix, defined as function below bs=MAXVAL(ABS(fm1)) ! maximum norm fm1=fm1/bs ! normalization s=LOG(bs) ! norm to s DO i=1,imax ! iteration loop y=y-amp/pi2*SIN(pi2*x) x=x+y

2.2 Chaos and Lyapunov exponents

� 21

c recursive matrix multiplication fm1=MATMUL(fm(pi2*x,amp),fm1) bs=MAXVAL(ABS(fm1)) fm1=fm1/bs s=s+LOG(bs) ENDDO c compute eigenvalues of fm1 det=(fm1(1,1)*fm1(2,2)-fm1(1,2)*fm1(2,1)) sp=fm1(1,1)+fm1(2,2) xl1=(log(.5*(sp+csqrt(cmplx(sp**2-4.*det,0.))))+s) xl2=(log(.5*(sp-csqrt(cmplx(sp**2-4.*det,0.))))+s) .. plot result c Jacobi matrix CONTAINS ! makes function fm visible in MAIN FUNCTION fm(x,amp) REAL*8 :: fm(2,2),x,amp fm(1,1)=1.-amp*COS(x) fm(1,2)=1. fm(2,1)=-amp*COS(x) fm(2,2)=1. END FUNCTION fm END If this section is integrated into a double loop over the initial values x0 , y0 , we can scan the whole plane with a certain resolution. The largest Lyapunov exponent, i. e., the maximum of the real parts of xl1, xl2 are then stored in a two-dimensional array and finally contour lines or a colored bitmap can be plotted. This can be done using the routine image, which is based on the PGPLOT routine PGPIXL and explained in detail in Appendix B. The colors can be initialized using the routine ccircl (Appendix B): CALL CCIRCL(2,100) ! initializes the color circle for the ! color indices 2..100 Figure 2.4 shows the color-coded largest Lyapunov exponent for three different values of K with a resolution of 500 × 500. Blue regions correspond to a regular, nonchaotic series, and all other colors reveal a positive Lyapunov exponent and denote chaotic behavior. The complete code can be found on the book’s website [2].

22 � 2 Nonlinear maps

Figure 2.4: Largest Lyapunov exponent of the Standard map for K = 0.5 (top), K = 1 (middle), K = 1.5 (bottom). In the blue regions, the series are not chaotic.

2.3 Affine maps and fractals Next we turn to linear, bijective maps composed of the three operations translation: rotation: scaling and shear:

q⃗ ′ = q⃗ + a⃗ q⃗ ′ = LR q⃗ q⃗ ′ = LS q.⃗

2.3 Affine maps and fractals

Next we turn to linear, bijective maps composed of the three operations

translation: $\vec{q}\,' = \vec{q} + \vec{a}$
rotation: $\vec{q}\,' = L_R\, \vec{q}$
scaling and shear: $\vec{q}\,' = L_S\, \vec{q}$.

23

We restrict ourselves to two dimensions, q⃗ = (x, y), where LR is the rotation matrix LR = (

cos θ sin θ

− sin θ ) cos θ

and Ls is the scaling-shear matrix s LS = ( x 0

b ). sy

A compound transformation may read q⃗ ′ = LR LS q⃗ + a,⃗

(2.20)

in which the different mappings do not commute, i. e., the sequence is important (in fact rotation and scaling commute if sx = sy , b = 0). If a triangle with area A is transformed, one finds with Det(LR ) = 1 A′ = Det(LS ) A = sx sy A. If the mapping (2.20) is performed recursively, q⃗ n+1 = LR LS q⃗ n + a,⃗

(2.21)

and sx sy < 1, a self-similar structure is obtained. 2.3.1 Sierpinski triangle As an application we construct the Sierpinski triangle. With the FORTRAN call CALL RANDOM_NUMBER(x) we activate the random number generator. If x is a scalar it contains after the call an equally distributed random number in the interval 0≤xplot ‘fd.dat’ with lines Then the slope can be determined at a convenient position where the curve is more or less straight. If we take, e. g., points 3 and 8 to construct the slope triangle, we find df ≈ 1.6. The value of the fractal dimension can also be computed analytically using another construction for the Sierpinski gasket, see [3]. One then obtains df = log(3)/ log(2) = 1.5849625 . . . 2.4.3 Problem Determine the fractal dimension of the fern and the tree of Figure 2.6 numerically.

2.5 Neural networks



29

Figure 2.8: Number of squares over side length, double log scale. For the ideal case one observes a straight line, and its slope corresponds to the fractal dimension of the covered object.

2.5 Neural networks In about 1990 the discipline neuro-informatics emerged. It comprises certain subjects from physics, mathematics, chemistry, and medicine, and aims to study and to understand the way the (human) brain works. One approach is the modeling of biological intelligence such as memory, learning, and logical relations using neural networks. Here we can only take a short look at this huge and still developing area. We have to concentrate on two examples, the perceptron and the so-called Kohonen maps. For further details we refer to the literature [6]. 2.5.1 Perceptron The human brain is built of about 80 billion neurons that are linked among each other with more than 1014 contacts, or synapses. These connections are dynamic and can be intensified or attenuated, depending on their usage. This time dependent adaption process is considered the key to learning. The perceptron is a very simple system realizing this concept. It possesses an input layer of N neurons S1 . . . SN which are not connected to each other. One processing layer, namely the adjustable weighted connections (synapses) wj , links the input neurons to a single output neuron (Figure 2.9). Each neuron has two states, active (Sj = 1) or passive (Sj = −1). The coupling of the input layer to the output neuron is given by the simple relation N

So = sign(∑ wj Sj ) j

where 1 for x ≥ 0 sign(x) = { −1 for x < 0

(2.31)

30 � 2 Nonlinear maps

Figure 2.9: Perceptron with N = 5 input neurons. Since there is only one processing layer, it is also called single-layer perceptron (SLP).

is the slightly modified sign function. The strength of the connections is described by the synaptic weights wj . If wj < 0, the connection is inhibitive; otherwise it is activating. During the learning process, the values of the wj are changed accordingly. 2.5.1.1 Learning rule What do we mean by learning process? A neural network is not coded and does not contain any inherent rules or connections. One offers input/output pairs that are assigned to each other (learned). Assume we have M such pairs denoted by (Sj = xj(n) , So = y(n) ),

n = 1 . . . M.

(2.32)

The network learns the assignment xj(n) → y(n) by dynamically changing the synaptic weights wi . For the optimal case, as many pairs as possible out of (2.32) should fulfill the relation (2.31): N

y(n) = sign(∑ wj xj(n) ). !

j

(2.33)

In 1949, D. Hebb2 formulated what is nowadays called Hebb’s learning rule or Hebbian learning, which says that the synaptic weights are adjusted to the activity of the input and output neurons they do connect. Let Δwi denote the modification of wi at each learning step, then Hebb’s rule can be written up as Δwj =

1 (n) (n) y xj . N

(2.34)

If there was only one pair (M = 1) to learn and initially all wj = 0, we find after one step wj = N1 y xj . Inserting this into (2.33) yields 2 Donald Hebb, Canadian psychologist, 1904–1985.

2.5 Neural networks

y = sign(y



31

1 N 2 ∑ x ) = sign(y), N j j

which is fulfilled for all y = ±1. In practice, the network should learn of course more than just one pair. Then one has to modify the rule (2.34): each pair changes the weights only if it is not yet learned correctly (Rosenblatt rule3 ): 1 (n)

Δwj = { N 0

y

xj(n)

if y(n) (∑Nj wj xj(n) ) ≤ 0

otherwise.

(2.35)

In the ideal case when all pairs have been learned correctly, the algorithm stops by itself. However, as we shall see below, the capability to learn depends on the input/output pairs (2.32), and the complete learning of all pairs is rather an exception. But let us first continue with an example. 2.5.1.2 Prime numbers We wish to teach our perceptron to distinguish between prime and not prime numbers. The input of an integer K > 0 is realized by activating input neuron no. K to +1. All other neurons are in the rest state −1. Obviously, the range of input integers is restricted to K ≤ N. The output neuron should be active (+1) if K is prime. To gain a clear and flexible code we adjust the three subtasks 1. Generation of the input/output pairs 2. Evaluation 3. Learning step, adjustment of the synaptic weights to the three subroutines 1. SUBROUTINE inout 2. SUBROUTINE eval 3. SUBROUTINE learn. The main program is then a simple loop over the repeated learning of integers as long as a stop-bit (here istop) is set: PARAMETER (n=100, m=1)

! number of neurons, input(n) and ! output(m) layer

C INTEGER :: ie(n),ia(m),ial(m) REAL :: w(n,m)

3 Frank Rosenblatt, American psychologist, 1928–1971.

32 � 2 Nonlinear maps

w=-1. ! initiate all weights with -1 imax=n ! max. number of different input/output pairs k=1; istop=0 DO WHILE(istop.EQ.0) CALL inout(ie,ial,n,m,k) CALL eval(ie,ia,w,n,m) CALL learn(ie,ia,ial,w,n,m,istop,imax) wskal=MAXVAL(ABS(w)) w=w/wskal ! scaling of weights k=MOD(k,imax+1)+1 k1=k1+1 CALL synplo(w,n,m,k1) ! plot of the synaptic weights ENDDO write(6,*)'learning steps needed:',k1 END The array ie contains the value of the input neurons, ia those of the outputs (here only one). For K, the natural numbers 1, 2, 3, . . . , N are successively offered. The routine learn sets the stop-bit if imax numbers are recognized correctly one after the other without changing w. The subroutines read in detail: SUBROUTINE inout(ie,ia,n,m,k) INTEGER :: ie(n),ia(m) c provides input/output pairs ie=-1 ie(k)=1 ! input layer ia(1)=iprim(k) ! output neuron END The integer-function iprim(k) returns +1 if k is prime, otherwise −1. We leave the coding of iprim to the reader. SUBROUTINE eval(ie,ia,w,n,m) INTEGER :: ie(n),ia(m) REAL :: w(n,m) DO j=1,m s=SUM(w(1:n,j)*FLOAT(ie)) ia(j)=SIGN(1,FLOOR(s)) ENDDO END

2.5 Neural networks



33

This is the coding of formula (2.31). The subroutine learn finally realizes Rosenblatt’s rule (2.35): SUBROUTINE learn(ie,ia,ial,w,n,m,istop,imax) INTEGER :: ie(n),ia(m),ial(m) REAL :: w(n,m) c modify weights according to Rosenblatt istop=0 ll=ll+1 DO j=1,m IF(ia(j).NE.ial(j)) THEN ll=0 w(1:n,j)=w(1:n,j)+float(ial(j)*ie)/FLOAT(n) ENDIF ENDDO IF(ll.EQ.imax) istop=1 ! set stop-bit END If the computed result ia differs from the one that should be learned, ial, the weights are adjusted accordingly. The stop-bit istop is set if imax numbers are learned correctly in a row. In Figure 2.10 we show the learning steps needed as a function of N for different initializations of the weights wi .

Figure 2.10: The number K1 of the learning steps needed depends strongly on the number of neurons N of the input layer, but also on the initialization of the synaptic weights wi .

2.5.1.3 Logic gates and linear separability We extend the switching condition (2.31) according to N

So = sign(∑ wj Sj − ϑ). j

(2.36)

Now the output neuron becomes active if the sum of its inputs exceeds the threshold value ϑ, which can be different for each neuron. To indicate the threshold, we write

34 � 2 Nonlinear maps

Figure 2.11: The logic functions NOT, AND, and OR can be realized by an SLP.

its value in the respective output neuron. To train the perceptron as a ‘normal’ binary computer it should at least be able to learn the basic logic functions NOT, AND, OR, and XOR. Taking an appropriate value for ϑ, this is possible for the first three functions, but not for XOR (Figure 2.11), as already proven by M. Minsky4 in the 60s. The truth table of an XOR gate looks like S�

S�

So

−� � −� �

−� � � −�

−� −� � �

Let a1 = −1 (passive) and a2 = 1 (active). Then we must have w1 a1 + w2 a1 < ϑ

w1 a2 + w2 a2 < ϑ w1 a1 + w2 a2 > ϑ

(2.37)

w1 a2 + w2 a1 > ϑ. Subtracting the third from the first inequality yields w2 (a1 − a2 ) < 0, subtracting the fourth from the second leads to w2 (a2 − a1 ) < 0, which is obviously a contradiction. In N-dimensional vector space, all possible input patterns of N neurons lay on the corners of an N-dimensional hypercube. Only these patterns that can be separated by an N − 1-dimensional hyperplane can be distinguished 4 Marvin Minsky, American artificial intelligence researcher, 1927–2016.

2.5 Neural networks



35

Figure 2.12: For AND and OR the input states assigned to the same output state can be separated by a straight line; for XOR this is impossible.

by the (single-layer) perceptron, Figure 2.12. If we try to teach our perceptron the XOR function, the learning process would never come to an end. Problems of this kind are called not linearly separable. Unfortunately most of the problems are not linearly separable, at least for larger N. A solution can be the extension to more than one processing layer (multilayer perceptron), for which we refer to the literature. 2.5.1.4 Decimal to binary converter Taking M perceptrons having one output neuron each in parallel, one arrives at a perceptron with M neurons in the output layer, Figure 2.13. The synaptic weights can now be cast in a rectangular matrix wij ,

i = 1 . . . N, j = 1 . . . M

and Hebb’s rule (2.34) turns into Δwij =

1 (n) (n) y x . N j i

(2.38)

Such an extended perceptron can not only distinguish between two properties like prime or not prime, but between 2M states.

Figure 2.13: An SLP having more than one output neuron. Here, N = 5 input neurons are connected with M = 3 output neurons via the rectangular weight matrix wij .

36 � 2 Nonlinear maps As an example, we wish to teach the perceptron how to convert decimals into binary numbers. During the learning process we offer again integer values K running from 0 to N. The routine inout creates the input ie according to ie=-1 IF(k.ne.0) ie(k)=1 The related output is the binary image of K, which can be obtained in FORTRAN using the BTEST statement: ie=-1 DO j=0,m-1 IF(BTEST(k,j)) ie(m-j)=1 ENDDO (the most significant bit is ie(1)). The complete listing is provided in [2]. Figure 2.14 shows the synaptic connections for N = 15, M = 4 when finally all numbers 1…15 have been learned after 80 learning cycles (wij = 1 initialized). Negative (inhibitive) connections are drawn dashed.

Figure 2.14: The SLP as a decimal to binary converter. Inhibiting connections w < 0 are shown dashed. In the output layer the most significant bit is on the left-hand side.

2.5.2 Self-organized maps: Kohonen’s model Since there exist no horizontal connections within each layer of the perceptron, the spatial arrangement of the neurons plays no role. But the brain works differently: similar stimuli are processed in nearby regions of the cortex. This constitutes the basic idea of the self-organized maps developed by the Finnish engineer Teuvo Kohonen in 1982.

2.5 Neural networks



37

2.5.2.1 The model We resort to the case where the neurons are located in a plane layer representing a two-dimensional network. To each neuron we may assign a vector I r⃗ = ( ) J with (I, J) = (0, 0) . . . (M − 1, N − 1) (Figure 2.15). Each neuron is connected with an input layer consisting of L neurons by synapses with the weights wrℓ⃗ ,

ℓ = 1...L

(the upper index ℓ denotes the neurons of the input layer). The Kohonen algorithm modifies Hebb’s learning rule in the following way: 1. Initializing. Take an appropriate initial value for the weights wrℓ⃗ , e. g., randomly distributed or all zero or ±1. 2. Input signals. The signals or patterns to learn are offered as vℓ , either one by one or in a random sequence. 3. BMU. Identify the Best Matching Unit (BMU). The BMU is the neuron that has the smallest (euclidean) distance to the learning signal. If L

2

L

2

∑(vℓ − wrℓ⃗′ ) ≤ ∑(vℓ − wrℓ⃗ ) ℓ

4.

for all r,⃗

(2.39)



the BMU is located at r⃗ ′ . Then r⃗ ′ is called the center of excitation. Dynamics. During the adjustment step all synaptic weights are adapted according to Δwrℓ⃗ = ϵ h(d) (vℓ − wrℓ⃗ ),

󵄨 󵄨 with d ≡ 󵄨󵄨󵄨r⃗ − r⃗ ′ 󵄨󵄨󵄨

(2.40)

for all r.⃗ Except for the new function h(d) this is exactly Hebb’s rule (2.34). The function h(d) is unimodal and has its maximum at d = 0. It could be for instance a

Figure 2.15: 25 neurons arranged in a quadratic processing layer.

38 � 2 Nonlinear maps Gaussian of width σ h(d) = exp(−d 2 /2σ 2 ).

5. 6. 7.

(2.41)

Note that in this step not only the BMU is changed but also the neurons lying in its neighborhood. Refinement. To refine the spatial structure, σ is decreased after each learning step. If σ ≥ σmin go to 2. The algorithm stops when only the BMU is changed.

2.5.2.2 Color maps To be more clear we wish to present an application: a Kohonen map sorting colors. Colors can be classified by a three-component vector measuring their red, green, and blue composition, called the RGB value. An RGB value of (1/0/0) denotes red, (0/1/0) green, (1/1/0) yellow, (0.5,0.5,0.5) gray and (1/1/1) white. Each neuron bears three synaptic values, namely wr(1) ⃗ for red,

wr(2) ⃗ for green,

wr(3) ⃗ for blue,

where 0 ≤ wr(ℓ) ⃗ ≤ 1. The code that implements the Kohonen algorithm can be found in [2] (the array R is assigned to w(1) , G to w(2) and B to w(3) using an EQUIVALENCE statement). Figure 2.16 shows color maps formed by a layer of 15×15 neurons after learning 100, 500, and 3000 randomly generated RGB patterns. For a monotonically decreasing σ we

Figure 2.16: Evolution of a color map consisting of 15 × 15 neurons applying Kohonen’s algorithm. Upper left: initial state; upper right: after t = 100 learned random patterns; bottom left: after t = 500; bottom right: after t = tmax = 3000.

2.5 Neural networks



39

use the relation σ = σ0 ⋅ (0.05)t/tmax ,

σ0 = 5,

tmax = 3000,

in addition we put ϵ = 0.03. It turns out that the completely random color pattern from the beginning is eventually organized in such a way that similar colors are assigned to neurons located close to each other. 2.5.2.3 Traveling Salesman Problem Next we wish to study a one-dimensional configuration of N neurons in a row. Each neuron in the chain bears a two-dimensional vector x w⃗ i = ( i ) yi pointing at a certain location in 2D real space, e. g., of a road map. If one passes the neurons one after the other, the adjoint locations (xi , yi ) constitute a certain path on the road map. If we offer certain given points (places) to the Kohonen algorithm as learning inputs, the path will be matched closer and closer to these points. This leads us to the famous Traveling Salesman Problem (TSP). The task of the TSP is to find the shortest connection between K given places on a road map with the constraint that each place is only touched once. It turns out that there exist 21 (K −1)! different closed paths to fulfill this condition. But which is the shortest? Of course one could try all paths, but the computational effort increases exponentially with K. Even for a rather small number of places the largest computers would be overcharged. To apply Kohonen’s method for an approximate solution, we take 30 prescribed locations, for instance, randomly chosen positions on a map; see the red points in Figure 2.17. In each learning step we select randomly one of these locations as the input and adjust the synaptic weights according to (2.40). Also in Figure 2.17 we show the states of the network after 20, 80, and 2000 learning steps for two different initial values of the weight vectors w⃗ i . The solid lines correspond to initial values on a circle with radius R = 0.15, the dashed ones with R = 0.5. After 2000 steps all places are passed by the paths. For the parameters we use ϵ = 0.8 and σ = σ0 ⋅ (0.02)t/tmax ,

σ0 = 2,

tmax = 4000.

Considering the figure, it is evident that different initial configurations lead to at least slightly different solutions. But only one path can be the shortest and Kohonen’s algorithm can solve the TSP only approximately.

40 � 2 Nonlinear maps

Figure 2.17: Two approximate solutions of the TSP computed with Kohonen’s algorithm based on a chain of 40 neurons (black dots). A certain number of places, here 30 (red dots), should be connected by the shortest path. Shown is the situation after 20 (top right), 80 (bottom left), and 2000 (bottom right) learning steps. In the beginning, the chain is initialized on a circle with radius R = 0.15 (solid lines) and R = 0.5 (dashed). The path length after 2000 steps is 4.176 (solid) and 4.304 (dashed).

2.5.3 Problems 1. 2.

Code the routines for the perceptron as a decimal to binary converter. Solve the TSP for K locations and N neurons applying the Kohonen algorithm.

Bibliography [1] J. Argyris, G. Faust, M. Haase and R. Friedrich, An Exploration of Dynamical Systems and Chaos, Springer (2015).

Bibliography

[2] [3] [4] [5] [6]

� 41

FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html. W. Kinzel and G. Reents, Physics by Computer, Springer (1997). B. Mandelbrot, The Fractal Geometry of Nature, Freeman and Co. New York (1983). M. F. Barnsley, Fractals Everywhere, Dover Pub. (2012). H. Ritter, T. Martinetz and K. Schulten, Neural Computation and Self-Organizing Maps: An Introduction, Addison–Wesley Pub. (1992).

3 Dynamical systems The foundation of physical theories is normally based on differential equations. In field theories like quantum mechanics, electrodynamics, or hydrodynamics, one is confronted with (a system of) partial differential equations Fk [r,⃗ t, Ψi (r,⃗ t), 𝜕t Ψi (r,⃗ t), 𝜕tt Ψi (r,⃗ t), . . . , 𝜕xn Ψi (r,⃗ t), 𝜕xn xm Ψi (r,⃗ t), . . . , ] = 0,

(3.1)

while in Newton’s classical mechanics ordinary differential equations Gk [xi (t), dt xi (t), dtt2 xi (t), . . . ] = 0

(3.2)

are the focus of attention. Systems of the form (3.1) can be expanded into a certain basis for further numerical analysis, according to N

Ψi (r,⃗ t) = ∑ aki (t) ϕik (r)⃗ k

(3.3)

and are thereby converted to the form (3.2). We postpone the treatment of partial differential equations to Chapters 7 and 8 and commence here with problems of the kind of (3.2).

3.1 Quasilinear differential equations Let t be the independent variable, for instance, time. To begin with, we restrict the treatment to one dependent variable, say x and search for the function x(t). In classical mechanics this corresponds to a one-dimensional motion. Quasilinear differential equations are linear in the highest occurring derivative. Then, (3.2) can be written as x (N) = f (t, x, x (1) , x (2) , . . . , x (N−1) )

(3.4)

with x = x(t),

x (k) ≡

dk x . dt k

Equation (3.4) is equivalent to a system of N first order ordinary differential equations (ODEs), obtained by the substitution x1 = x,

https://doi.org/10.1515/9783110782523-003

xk+1 = x (k) ,

k = 1...N − 1

3.1 Quasilinear differential equations

� 43

as dt x1 = x2

dt x2 = x3 .. .

(3.5)

dt xN−1 = xN

dt xN = f (t, x1 , x2 , . . . , xN ),

or, using vector notation d x⃗ = f ⃗(t, x)⃗ dt

(3.6)

with f ⃗ = (x2 , x3 , . . . , xN , f ). It is sufficient to study first order systems as (3.6). Equations of this kind but for arbitrary f ⃗ constitute a dynamical system with N as the number ⃗ lives in an N-dimensional state space (or phase of degrees of freedom. The vector x(t) space). If we consider the definition of the derivative by the differential quotient ⃗ + Δt) − x(t) ⃗ d x⃗ x(t = lim , dt Δt→0 Δt we can write (3.6) for small but finite Δt approximately (Taylor expansion) in the form of an N-dimensional map ⃗ + Δt) = x(t) ⃗ + f ⃗(t, x(t))Δt ⃗ x(t + O(Δt 2 ),

(3.7)

⃗ we can compute x(t ⃗ + Δt) and then where Δt is denoted as step size. From each x(t) iteratively all the following values ⃗ + mΔt) x(t for m integer. It is clear that in addition to the function f ⃗ we need N initial values ⃗ 0 ) = x⃗0 . x(t

(3.8)

⃗ 0 ) uniquely onto the vector x(t ⃗ 1 ). Formally, the dynamical system (3.6) maps the vector x(t ⃗ This can be also expressed by the help of the time-evolution operator U: ⃗ 1 , t0 )[x(t ⃗ 1 ) = U(t ⃗ 0 )]. x(t

(3.9)

Equations (3.6), (3.8) define an initial value problem, and (3.7) provides a first numerical approximation scheme for its solution, the so-called Euler-forward method or explicit Euler method.

44 � 3 Dynamical systems 3.1.1 Example: logistic map and logistic ODE In context with the logistic map from Chapter 1 we examine the first order ODE (logistic ODE) dx = (a − 1)x − ax 2 , dt

(3.10)

defining the behavior of x(t) for any given x(0) = x0 (one-dimensional initial value problem). An exact solution can be found by the separation of variables x(t) =

a−1 , a + c e(1−a)t

(3.11)

where the integration constant c = (a − ax0 − 1)/x0 is determined by the initial condition. As t → ∞ one obtains the asymptotic solutions xs = {

0

if a < 1

1 − 1/a

if a > 1

(3.12)

which coincide with the fixed points of the logistic map. How is the ODE (3.10) linked to the logistic map (1.1)? Following (3.7) we express dx/dt by the differential quotient dx x(t + Δt) − x(t) ≈ dt Δt

(3.13)

which becomes exact for Δt → 0. Inserting (3.13) on the left-hand side of (3.10) one finds after a short calculation xn+1 = axn Δt(1 −

1 1 (1 − ) − xn ). a Δt

(3.14)

Here, xn denotes the x values at the time nΔt, or xn = x(nΔt). If we put now Δt = 1, Equation (3.14) becomes equivalent to (1.1). In other words, the discretized form of the ODE (3.10) corresponds mathematically to the logistic map. This explains the similar ‘temporal’ development up to a < 3, i. e., the asymptotic approach to xs and the correct asymptotic solutions. What cannot be explained are the further bifurcations of the solutions of the logistic map for a > 3 and their chaotic behavior for even larger a. All this is of course not included in the solution (3.11). How can we explain the much richer dynamic behavior of the discretized form (3.14)? To get an answer we have to look at the stability of (3.14). A stationary solution (fixed point) of (3.14) reads xs = 1 − 1/a. If we put x(t) = xs + u(t)

3.2 Fixed points and instabilities

� 45

and linearize with respect to u, we can show that for Δt = 1 the fixed point xs becomes numerically unstable as soon as a > 3. Coming from the logistic ODE, all the bifurcations through the chaotic sequences originate from a too-large time step during discretization (see Problems) and vanish with a smaller Δt. However, then (3.14) is no longer equivalent to the logistic map.

3.1.2 Problems With a pencil, show that (3.14) becomes numerically unstable if Δt > 2/(a − 1). And to code: 1. Plot the function (3.11) using PGPLOT for different initial conditions and different values of a. 2. Examine the map (3.14) as a numerical solution of the ODE (3.10) for different time steps Δt and different a.

3.2 Fixed points and instabilities We continue with the N-dimensional dynamical system (3.6) with a given initial condition. For the time being we concentrate on autonomous systems, i. e., the right-hand sides of (3.15) do not depend explicitly on t: dxi = fi (x1 . . . xN ), dt

i = 1 . . . N.

(3.15)

3.2.1 Fixed points For an autonomous system, fixed points as a stationary solution of (3.15) may exist. They can be found by solving the algebraic system 0 = fi (x1(0) . . . xN(0) ),

i = 1 . . . N.

(3.16)

We note that for somewhat involved nonlinear fi , solutions of (3.16) are normally only accessible numerically, even for small N.

3.2.2 Stability If a fixed point xi(0) is known, the behavior of the system in its neighborhood is of interest. How do small deviations evolve? Are they amplified or do they fade away in time?

46 � 3 Dynamical systems If the deviations stay small or decay, the fixed point is called stable; if they depart exponentially, we have an unstable fixed point. If N > 1, the disturbed trajectories may in addition oscillate around the fixed point. This is called oscillatory or Hopf bifurcation.1 If one restricts to infinitesimal deviations ui (t) from the fixed point xi (t) = xi(0) + ui (t),

(3.17)

one may linearize (3.15) with respect to u. This is called linear stability analysis. Inserting (3.17) into (3.15) yields after linearization the system N dui = ∑ Lij uj dt j

(3.18)

with the Jacobi matrix Lij =

󵄨 𝜕fi 󵄨󵄨󵄨 󵄨󵄨 . 𝜕xj 󵄨󵄨󵄨x=⃗ x⃗ (0)

(3.19)

For an autonomous system, the Lij are constant and uj = qj eλt

(3.20)

transforms (3.18) into a linear eigenvalue problem N

∑[Lij − λ δij ] qj = 0. j

(3.21)

Due to (3.20), the real parts of the N eigenvalues λk characterize the stability of the fixed point xi(0) . Let the eigenvalues be ordered according to Re(λ1 ) ≥ Re(λ2 ) ≥ ⋅ ⋅ ⋅ ≥ Re(λN ). We distinguish between the four cases: (i) All λk have a negative real part: the fixed point is stable. (ii) Re(λ1 ) > 0, and Im(λ1 ) = 0; the fixed point is monotonically unstable. (iii) Re(λ1 ) > 0, and Im(λ1 ) ≠ 0; the fixed point is oscillatory (Hopf) unstable. (iv) Re(λ1 ) = 0, critical point, center if Im(λ1 ) ≠ 0; the fixed point is marginally stable. (i) is also named node or stable focus if Im(λ1 ) ≠ 0, (ii) is called saddle point, and (iii) is an unstable focus.

1 Eberhard Hopf, German-American mathematician, 1902–1983.

3.2 Fixed points and instabilities



47

3.2.3 Trajectories ⃗ 0 ) be the location of a particle in the N-dimensional state space at time t0 . The Let x(t ⃗ corresponds to a line (trajectory) and (3.15) creates a flow. Given parametrization x(t) f , the time-evolution is deterministic: how the particle evolves for t > t0 is determined ⃗ 0 ). But then it can be easily seen that crossings of trajectories are excluded. solely by x(t ⃗ 0 ) there would be two possible different paths for Taking a potential cross point as x(t t > t0 . But this would contradict the deterministic behavior described by the flow (3.15). This is formulated as the No-Intersection Theorem: Two distinct trajectories in N-dimensional state space cannot intersect, nor can a single trajectory cross itself. The theorem is valid for arbitrary N. If f ⃗ is taken as a particle density flow, the source strength N 𝜕f Q(x)⃗ = ∇ ⋅ f ⃗ = ∑ i 𝜕x i i

(3.22)

can be defined. Knowing Q, we can compute the temporal change of a volume element ⃗ ΔV that swims in the flow at x(t): ⃗ ⃗ ⃗ dt ΔV (x(t)) = ΔV (x(t)) Q(x(t)).

(3.23)

⃗ Normally Q depends on x⃗ and thus Q(x(t)) depends on time. For a dissipative system, one finds ⟨Q⟩ ≤ 0 with a convenient time average ⟨. . . ⟩. Then every volume element whose corners travel along trajectories shrinks in the course of time and vanishes at infinite times.

3.2.4 Gradient dynamics If the circulation (or curl) of f ⃗ vanishes everywhere, a potential function U exists with fi = −

𝜕U(x1 , . . . , xN ) . 𝜕xi

Then one has Q(x)⃗ = − ∑ i

and

𝜕2 U 𝜕xi2

48 � 3 Dynamical systems

Lij = −

𝜕2 U . 𝜕xj 𝜕xi

The Jacobi matrix is symmetric and, as a consequence, has only real eigenvalues, excluding the existence of oscillatory instabilities. On the other hand one easily shows with N dU 𝜕U dxi =∑ = −|f ⃗|2 ≤ 0 dt 𝜕x dt i i

that U decays with t monotonically until a fixed point is reached where f ⃗ = 0.

3.2.5 Special case N = 1 The most simple case is N = 1 and reads dt x = f (x). If the solution is bounded, due to the No-Intersection Theorem, every temporal development must eventually end in a stable fixed point: lim x(t) = xs

t→∞

with f (xs ) = 0

and dx f |xs < 0.

(3.24)

A potential of U(x) = − ∫ dx f (x) always exists, which must have at least one minimum at xs because of (3.24).

3.2.6 Special case N = 2 A direct consequence of the No-Intersection Theorem is the Poincaré–Bendixson Theorem:2 If in two-dimensional state space the trajectories are confined to a finite size region (i. e., do not go to infinity), they either end on a stable fixed point or approach a stable limit cycle as t → ∞.

2 Henri Poincaré, French physicist and mathematician, 1854–1912; Otto Bendixson, Swedish mathematician, 1861–1935.

3.2 Fixed points and instabilities

� 49

Figure 3.1: Fixed points and their neighborhood in two-dimensional state space, from left to right, top row: λi real valued, stable node, saddle point, unstable node. Bottom row: λi complex valued, stable focus, center, unstable focus.

A limit cycle is a periodic solution in state space, that is, a closed trajectory x⃗lc with x⃗lc (t) = x⃗lc (t + T) where T denotes the period. Near any fixed point, the local behavior of the trajectories is restricted in two dimensions to only a few qualitatively different possibilities, shown in Figure 3.1. 3.2.6.1 Example: Van der Pol Oscillator Van der Pol’s equation reads d2 x dx − μ(1 − x 2 ) +x =0 2 dt dt and shows for μ > 0 self-excited nonlinear oscillations as the solution. We shall examine the system in the form (3.15): d t x1 = x2

dt x2 = −x1 + μ(1 − x12 ) x2 . There exists only one fixed point, x1(0) = x2(0) = 0. Its Jacobi matrix turns out to be L=( and has the eigenvalues

0 −1

1 ) μ

(3.25)

50 � 3 Dynamical systems λ12 = μ/2 ± √μ2 /4 − 1. Depending on μ, the fixed point exhibits different characteristics: μ ≤ −2: −2 < μ < 0: μ = 0: 0 < μ < 2: 2 ≤ μ:

stable node stable focus center, crit. point unstable focus unstable node

λi λi λi λi λi

∈ ℛ, ∈ 𝒞, ∈ 𝒞, ∈ 𝒞, ∈ ℛ.

For the source strength introduced in (3.22) one has Q(xi ) = μ(1 − x12 ). Taking polar coordinates x1 = r cos φ, x2 = r sin φ one finds for r ≫ 1, μ > 0 dt r ≈ −μr 3 cos2 φ sin2 φ ≤ 0 and the trajectories remain in a bounded region. For r < 1, it follows from dt r = μr(1 − r 2 cos2 φ) sin2 φ, that the region is limited for μ < 0. Without solving the ODEs (3.25) explicitly we obtain the following scenario: If μ < 0, all trajectories starting somewhere inside the circle x12 (0) + x22 (0) < 1 end at the stable fixed point (focus or node) x1 = x2 = 0. If μ > 0, the fixed point becomes unstable. Since all trajectories remain in a bounded region in state phase and no other (stable) fixed point exists, the Poincaré–Bendixson Theorem applies: for t → ∞ all trajectories approach asymptotically a limit cycle circuiting the unstable fixed point.

3.2.7 Special case N = 3 In the preceding paragraph we saw that in the bounded two-dimensional state space only regular behavior is possible. All trajectories asymptotically either reach fixed points or limit cycles. This changes for N = 3. A new object comes into play, the socalled strange attractor. It can be thought as a certain trajectory having both attractive and repelling character. The first strange attractor was discovered by E. Lorenz3 as a numerical solution of a system of nonlinear ODEs [1].

3 Edward Lorenz, American mathematician and meteorologist, 1917–2008.

3.2 Fixed points and instabilities



51

3.2.7.1 Example: the Lorenz equations In the 60s, Lorenz derived a model of three coupled nonlinear ODEs intended for weather forecasting: dx1 = −α (x1 − x2 ) dt dx2 = (δ + 1) x1 − x2 − x1 x3 dt dx3 = −β x3 + x1 x2 . dt

(3.26)

Thereby, α, β > 0 denote system parameters and δ is the control parameter or bifurcation parameter. The system (3.26) is nowadays called Lorenz equations. It possesses the three fixed points (i) xi = 0,

(ii) x1 = x2 = ±√βδ, x3 = δ,

the latter two existing only if δ ≥ 0. We wish to examine their stability and begin with the fixed point (i). The Jacobi matrix reads −α L = (δ + 1 0

α −1 0

0 0) −β

(3.27)

and has the eigenvalues λ12 = −

1 + α 1√ ± (1 + α)2 + 4αδ, 2 2

λ3 = −β.

For δ < 0 all λ have a negative real part and the fixed point is a stable node or a stable focus. At δ ≥ 0 the fixed point turns into a saddle node with one unstable and two stable directions (or manifolds, that is the eigenvectors of (3.27)). For (ii) (δ > 0), the Jacobi matrix reads −α L=( 1 ±√βδ

α −1 ±√βδ

0 ∓√βδ) −β

(3.28)

with the eigenvalues as roots of the polynomial P(λ) = λ3 + λ2 (1 + β + α) + λβ(1 + δ + α) + 2βαδ.

(3.29)

Since all the coefficients of (3.29) are positive, real eigenvalues may only occur with λ < 0. Evaluation of (3.29) for certain fixed parameters (α = 10, β = 8/3) shows that for δ larger than ≈ 0.35 the spectrum has the form λ1 = λ∗2 ∈ 𝒞 , λ3 < 0. The real parts of the conjugate complex pair become positive if

52 � 3 Dynamical systems δ > δc =

α(α + β + 3) 451 −1= ≈ 23.7. α−β−1 19

Then the two stable focus nodes (ii) turn into unstable saddle foci having a twodimensional unstable and a one-dimensional stable manifold. The two fixed points become oscillatory unstable with the Hopf frequency (δ = δc ) ωc = Im λ1 = √

2αβ(α + 1) ≈ 9.62 α−β−1

and finally the Lorenz attractor emerges, an almost two-dimensional object in state space; see Figure 3.2. A peculiarity of (3.26) is its constant source strength QL = −α − 1 − β < 0, the Lorenz equations are dissipative in the complete state space. From (3.23) it follows that the size of each arbitrary volume element ΔV0 exponentially decays in time ΔV (t) = ΔV0 eQL t . Every volume element is eventually mapped onto the 2 + ϵ-dimensional attractor.

Figure 3.2: The Lorenz attractor for α = 10, β = 8/3, δ = 24. Shown is the projection of a trajectory on the x1 -x2 -plane. The two fixed points (ii) are marked by asterisks.

3.3 Hamiltonian systems

� 53

3.3 Hamiltonian systems We continue with autonomous systems without explicit time-dependence as (3.15).

3.3.1 Hamilton function and canonical equations In many problems formulated in classical mechanics the forces are derived from a potential V (q1 , . . . , qN ) where q1 . . . qN are the (generalized) positions. To achieve a complete description, the momenta p1 . . . pN need to be known. The Hamilton4 function, or for short, the Hamiltonian N

H(qi , pi ) = ∑ i

p2i + V (qi ) 2m

(3.30)

coincides with the total mechanical energy of the system and is a constant of motion. The canonical equations are equivalent to Newton’s equations of motion and can be formulated as [2] 𝜕H 𝜕pi 𝜕H dt pi = − , 𝜕qi dt qi =

i = 1 . . . N.

(3.31)

Since we have two sets of variables, each of them with N variables, the motion described by (3.31) has 2N degrees of freedom. However, this is the mathematical definition of a degree of freedom. In mechanics, each pair of conjugated variables qi , pi corresponds to one degree of freedom, halving their total number. To exclude further confusion we adopt the notion mechanical degree of freedom for the number of pairs of variables (here N), degree of freedom for the number of first order ODEs in (3.31) (here 2N). The system (3.31) has the form of (3.15) but, due to the special form of f ⃗, N

Q = ∇ ⋅ f ⃗ = ∑[ i

𝜕2 H 𝜕2 H − ]=0 𝜕qi 𝜕pi 𝜕pi 𝜕qi

applies. The size of volume elements is constant in time, and the dynamics in (3.31) is called conservative. 4 William R. Hamilton, Irish mathematician and physicist, 1805–1865.

54 � 3 Dynamical systems If one wishes to know the time dependence of some variable A(qi , pi , t) one must use the chain rule according to N dA 𝜕A N 𝜕A dqi 𝜕A dpi 𝜕A 𝜕H 𝜕A 𝜕H − = ∑( + ) = ∑( − ) = {A, H} dt 𝜕t 𝜕qi dt 𝜕pi dt 𝜕qi 𝜕pi 𝜕pi 𝜕qi i i

where the abbreviation N

{A, B} ≡ ∑( i

𝜕A 𝜕B 𝜕A 𝜕B − ) 𝜕qi 𝜕pi 𝜕pi 𝜕qi

is called the Poisson bracket. Not explicitly time-dependent variables (𝜕A/𝜕t = 0) for which the Poisson bracket vanishes are constants of motion (dA/dt = 0). This of course is especially true for H.

3.3.2 Symplectic integrators The structure created by the flow (3.31) has symplectic geometry [2]. Normally this geometry is destroyed when (3.31) is integrated numerically. Then a nonvanishing source strength is generated and constants of motion, especially the total energy, are no longer conserved. So-called symplectic methods have been developed to avoid such problems. Here we can only give a short overview, see also [3]. For the sake of simplicity we restrict ourselves to a two-dimensional phase space corresponding to one mechanical degree of freedom, that is H = H(p, q) and dt q =

𝜕H , 𝜕p

dt p = −

𝜕H . 𝜕q

(3.32)

We abbreviate x⃗ = (q, p) and write (3.32) as ⃗ x]⃗ dt x⃗ = H[

(3.33)

with the (in general nonlinear) operator ⃗ x]⃗ = ( 𝜕H/𝜕p|x⃗ ) . H[ −𝜕H/𝜕q|x⃗ ⃗ x]⃗ indicates that H⃗ acts on x,⃗ and the result of the A remark on the notation: writing H[ operation is a vector of the same dimension. Successive operations of H⃗ are then denoted ⃗ H[. ⃗ . . x]⃗ . . . ]. It is convenient to omit the brackets and write H⃗ x⃗ instead of H[ ⃗ x]. ⃗ In by H[ the same manner we write products of different operators, e. g., ⃗ A⃗ B⃗ x⃗ ≡ A[⃗ B[⃗ x]].

3.3 Hamiltonian systems

� 55

⃗ As already done in (3.9) we may introduce the time-evolution operator U(t) ⃗ x(0) ⃗ = U(t) ⃗ x(t) with (an autonomous system assumed) ⃗ = exp(t H) ⃗ = exp(t(T⃗ + V⃗ )), U(t)

(3.34)

which can be seen by insertion into (3.33). In the last step, we have defined the operators 𝜕T/𝜕p T⃗ = ( ), 0

V⃗ = (

0 ) −𝜕V /𝜕q

(3.35)

with H(p, q) = T(p) + V (q). Thus the Hamiltonian should be separable into a momentum-dependent kinetic part and into a space-dependent potential part. Each part operator ⃗ U⃗ T (t) = exp(t T),

U⃗ V (t) = exp(t V⃗ )

(3.36)

generates a time-evolution being trivial in a certain sense. The evolution U⃗ T is that of a free particle where p = const., q = (p/m)t, whereas U⃗ V describes the ‘motion’ of an infinitely heavy mass point, where q = const., p = −(dV /dq)t. The trick to constructing symplectic methods is the conversion of (3.34) into products of U⃗ T and U⃗ V . But since T⃗ and V⃗ do not commute, we cannot simply write U⃗ = U⃗ T U⃗ V . The transformation must be done applying the Campbell–Baker–Hausdorff relation exp(C) = exp(A) exp(B)

1 1 C = A + B + [A, B] + ([A, [A, B]] + [B, [B, A]]) + . . . 2 12

(3.37)

([A, B] = AB − BA denotes the commutator). If A, B is of O(t), then [A, B] = O(t 2 ), [A, [A, B]] = O(t 3 ) and so on. We divide t into many small intervals (time steps) of Δt ⃗ ⃗ and instead of U(t), we look at U(Δt). The formula (3.37) can then be considered an expansion with respect to Δt and can be truncated at a certain desired order in Δt. After a longer manipulation it can be shown that ⃗ U(Δt) = U⃗ T (an Δt)U⃗ V (bn Δt)U⃗ T (an−1 Δt) . . . U⃗ T (a1 Δt)U⃗ V (b1 Δt) + O(Δt n+1 )

(3.38)

56 � 3 Dynamical systems where the coefficients ak , bk must fulfill the constraints n

∑ ak = 1, k

n

∑ bk = 1. k

(3.39)

Since both operators (3.36) are symplectic, this holds for the product (3.38) as well. Taking certain ak , bk , we may then generate methods having a given order in the time step, where all of them conserve the symplectic geometry of the Hamiltonian flow. The relation (3.38) is strongly simplified if we take T⃗ T⃗ x⃗ = 0,

V⃗ V⃗ x⃗ = 0

into account. The Taylor expansion stops after the second term 1 U⃗ T = exp(Δt T)⃗ = 1 + Δt T⃗ + Δt 2 T⃗ T⃗ + ⋅ ⋅ ⋅ = 1 + Δt T,⃗ 2 which also holds for U⃗ V . Then we can substitute U⃗ T = 1 + Δt T,⃗

U⃗ V = 1 + Δt V⃗

in (3.38). Let us state the scheme in the lowest order n = 1. Due to (3.39) a1 = b1 = 1 and with (3.38) we obtain ⃗ ⃗ + Δt) = U(Δt) ⃗ = (1 + Δt T)⃗ (1 + Δt V⃗ )x(t) ⃗ + O(Δt 2 ). x(t x(t) Taking (3.35) and 𝜕T(p)/𝜕p = p/m we finally have (

0 qn+1 p/m q ) = (1 + Δt ( )) [( n ) + Δt ( 𝜕V )] − 𝜕q (qn ) pn+1 0 pn 0 (p − Δt 𝜕V (q ))/m q 𝜕q n = ( n ) + Δt ( 𝜕V ) + Δt ( n ) − 𝜕q (qn ) pn 0 pn+1 /m q = ( n ) + Δt ( 𝜕V ). − 𝜕q (qn ) pn

(3.40)

Thus, the iteration formula for one time step for p and q read 𝜕V (q ) Δt 𝜕q n p = qn + n+1 Δt. m

pn+1 = pn − qn+1

(3.41)

3.3 Hamiltonian systems



57

The one-step scheme (3.41) is analogous to the two-step scheme qn+1 = 2qn − qn−1 −

Δt 2 𝜕V (q ), m 𝜕q n

(3.42)

which can easily be shown by eliminating pn . The scheme (3.42) is also known as the Verlet algorithm. It can be derived alternatively from Newton’s second law discretizing the second time derivative by d2 q dt 2



qn+1 − 2qn + qn−1 . Δt 2

A particular scheme of the 3rd order is generated from (3.38), taking a1 = a2 = 1/2,

b1 = 0,

b2 = 1,

and ⃗ = U⃗ T (Δt/2) U⃗ V (Δt) U⃗ T (Δt/2) + O(Δt 3 ). U(t) Without going into detail we quote the iteration rule 𝜕V (q ) Δt/2 𝜕q n pn+1/2 = qn + Δt m 𝜕V = pn+1/2 − (q ) Δt/2 𝜕q n+1

pn+1/2 = pn − qn+1 pn+1

(3.43)

which is similar to the ‘Leapfrog’ version of Verlet’s algorithm where space and momentum variables are evaluated at Δt/2 differing times. 3.3.2.1 Example: Hénon–Heiles model To demonstrate the advantage of the symplectic method compared to a simple Euler method, we shall explore in some detail the Hénon–Heiles model [4]. The model describes the two-dimensional motion of a star near the center of a disk-shaped galaxy. Taking a suitable scaling (m = 1), the Hamiltonian reads 1 1 H(p1 , p2 , q1 , q2 ) = E = p21 + p22 + V (q1 , q2 ) 2 2

(3.44)

1 1 1 V (q1 , q2 ) = q12 + q22 + q12 q2 − q23 . 2 2 3

(3.45)

with

58 � 3 Dynamical systems A transformation to polar coordinates immediately shows the 3-fold rotational symmetry of V : 1 1 V (r, φ) = r 2 ( + r sin 3φ). 2 3 The motion of the star is bounded close to the origin as long as the total energy (3.44) E < V0 =

1 . 6

For a larger E, the star may leave the triangle (see Figure 3.3), first in the directions of φ = 90°, 210° and 330°. The canonical equations (3.31) read dt q1 = p1

dt q2 = p2

dt p1 = −q1 − 2q1 q2

(3.46)

dt p2 = −q2 − q12 + q22 .

Figure 3.3: Potential V (q1 , q2 ) of the Hénon–Heiles model. Inside the triangle around the origin the motion of the star is bounded if its total energy is less than 1/6. Solid lines: V = 1/6, 1/12, 1/24, 1/48, 1/96, dashed: V = 1/3, 2/3, 1.

3.3 Hamiltonian systems

� 59

The iteration (3.41) turns into p1,n+1 = p1,n − (q1,n + 2q1,n q2,n )Δt

2 2 p2,n+1 = p2,n − (q2,n + q1,n − q2,n )Δt

q1,n+1 = q1,n + p1,n+1 Δt

(3.47)

q2,n+1 = q2,n + p2,n+1 Δt. The Euler algorithm with the same order looks almost identical. One only has to substitute pi,n+1 with pi,n in the third and fourth equation. However, the results differ vastly. For the initial conditions q1 = 0.1,

q2 = 0,

p1 = 0,

p2 = 0.1,

(E = 0.01)

both methods provide nonclosed quasiperiodic orbits around the origin. For the Euler code, their distance from the center increases at each revolution; after some 100 cycles the star leaves the bounded region and disappears into infinity. Accordingly the energy increases constantly, whereas using the symplectic method the star remains in the bounded region and its energy is constant for many 100 000 cycles, as it should be (Figure 3.4).

Figure 3.4: Comparison between the Euler and the symplectic method, both of the same order Δt 2 . Left frame: r = √q12 + q22 over t; right frame: difference of the total energies.

60 � 3 Dynamical systems 3.3.3 Poincaré section For larger values of E and/or for different initial conditions, the trajectories can become more involved, and chaotic regions may appear. The trajectories can be visualized in a comfortable way with the Poincaré section method. To this end, one places an (N − 1)dimensional hyperplane, the Poincaré plane, somewhere in the N-dimensional phase space and marks the points of intersection of a trajectory for many revolutions. Periodic motion is then correlated to single isolated points on the plane, quasiperiodic behavior to curves, and chaotic regions to fractal objects having a dimension between one and two on the Poincaré plane. Let ξi(n) , i = 1 . . . N − 1 be the coordinates of the nth intersection with the plane. Then from each set ξi(n) the values for the next intersection point follows uniquely. Instead of a system of N ODEs, one finds the relation ξ ⃗ (n+1) = f ⃗(ξ ⃗ (n) )

(3.48)

which is nothing other than an (N −1)-dimensional map, called a Poincaré map. However, computing f ⃗ in (3.48) normally requires solving the ODEs of the full system. 3.3.3.1 Example: Hénon–Heiles model For the Poincaré plane we choose q1 = 0 and obtain for (3.48) ξ1 = q2 (ts ),

ξ2 = p1 (ts ),

ξ3 = p2 (ts )

with ts from q1 (ts ) = 0. Since a conserved quantity exists, namely the total energy E (3.44), we can reduce the three-dimensional hyperplane further to a two-dimensional plane, for instance q2 (ts ), p2 (ts ). Since p21 (ts ) ≥ 0, all intersection points on this plane must be located inside the region 2 −g(q2 ) ≤ p2 ≤ g(q2 ) with g(q2 ) = √2E − q22 + q23 . 3 A code for the symplectic integration, plotting of trajectories, and of the Poincaré section is provided in [5]. The initial values q2 (0), p2 (0) can be set using the mouse cursor, p1 (0) for a given E is computed from (3.44). Figure 3.5 shows some solutions for E = 0.12 and different initial values on the Poincaré plane (p2 , q2 ) as well as in position space (q1 , q2 ).

3.3 Hamiltonian systems



61

Figure 3.5: Poincaré plane (left), trajectories in position space (right) for E = 0.12, and several initial conditions. Dashed: Bounded area defined by the total energy.

62 � 3 Dynamical systems

Bibliography [1] [2] [3] [4]

E. N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci. 20, 130 (1963). J. V. José and E. J. Saletan, Classical Dynamics, Cambridge Univ. Press (1998). J. M. Thijssen, Computational Physics, Cambridge Univ. Press (2007). M. Hénon and C. Heiles, The applicability of the third integral of motion: Some numerical experiments, Astrophys. J. 69 (1964). [5] FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html.

4 Ordinary differential equations I, initial value problems 4.1 Newton’s mechanics The present chapter mainly deals with problems arising from classical mechanics. We shall further investigate ordinary differential equations (ODEs) having the form Gk [xi (t), ẋi (t), ẍi (t)] = 0. The highest derivative occurring is now of the second order, normally in time. From here on we denote the first time derivative of x by x,̇ the second by x.̈

4.1.1 Equations of motion The second law of Newton provides equations of motion for N mass points at the positions ri⃗ (t), i = 1 . . . N mi rï⃗ = F⃗i (r1⃗ , . . . rN⃗ , r1⃗̇ , . . . rN⃗̇ , t)

(4.1)

with the force F⃗i acting on the mass point i. The necessary initial conditions are normally given by positions and velocities at t = t0 ri⃗ (t0 ) = ri⃗ (0) ,

ri̇⃗ (t0 ) = v⃗i (0) .

(4.2)

Often, the forces do not explicitly depend on time, F⃗i = F⃗i (rj⃗ , rj̇⃗ ) and (4.1) is called an autonomous system. If, on the other hand F⃗i = F⃗i (rj⃗ ) with 𝜕V (r1⃗ . . . rN⃗ ) F⃗i (rj⃗ ) = − , 𝜕ri⃗ the system (4.1) is denoted as conservative. In this case, the total energy E= https://doi.org/10.1515/9783110782523-004

1 N ∑ m (ṙ⃗ )2 + V (r1⃗ . . . rN⃗ ) 2 i i i

(4.3)

64 � 4 Ordinary differential equations I constitutes a constant of motion. For a translational invariant system, a further condition F⃗i = F⃗i (rj⃗ − rk⃗ , rj̇⃗ , t) holds for the forces.

4.1.2 The mathematical pendulum We commence our elaboration with the equation of motion of the damped one-dimensional mathematical pendulum: φ̈ + α φ̇ + Ω20 sin φ = 0,

Ω20 = g/ℓ,

(4.4)

realized for instance by a bar pendulum with a massless bar. Here, ℓ is the bar length, α > 0 the damping rate and φ denotes the displacement angle from the vertical. The equivalent first-order system for the two variables φ(t) and ω(t) reads φ̇ = ω ω̇ = −α ω − Ω20 sin φ

(4.5)

with the initial conditions φ(0) = φ0 ,

ω(0) = ω0 .

The system (4.5) has two fixed points (rest positions) φ̇ = ω̇ = 0: φ(0) 0 = 0,

φ(0) 1 = π,

(0) where φ(0) 0 has the properties of a stable focus, φ1 of an unstable saddle node. Multiplication of (4.4) by φ̇ and integration with respect to t yields

1 2 φ̇ − Ω20 cos φ = E0 − R(t) 2

(4.6)

with the integration constant E0 and a monotonically growing function R(t) = α ∫ dt φ̇ 2 . The expression on the left-hand side of (4.6) can be identified with the total mechanical energy E/mℓ2 of the system, R corresponds to the energy dissipated by friction and transformed to heat:

4.2 Numerical methods of the lowest order

� 65

R(t) = E0 − E(t). Since E(t) ≥ −Ω20 and Ṙ ≥ 0, the rest position φ = 0, ω = 0 is asymptotically reached with t → ∞. During this process the mechanical energy R(t → ∞) = E0 + Ω20 has been converted to heat and the pendulum has come to rest, independently of its initial conditions. As long as E > Ec with Ec = Ω20 , the mass point passes through the upper unstable rest position (rotation), and if E < Ec one observes oscillations around the lower rest position, also called libration. For the conservative case without damping (α = 0), one finds E = E0 and the pendulum, once moving, never comes to rest. Depending on E0 the motion persists either as oscillation or rotation. For E0 = Ec the motion is an infinitely long approach to the upper rest position. If the initial condition is also chosen with the upper rest position, the trajectory in phase space (φ-ω-plane) consists of a closed loop with initial and end points at the very same fixed point, also called a homoclinic orbit or separatrix.

4.2 Numerical methods of the lowest order 4.2.1 Euler method Next we try to solve the system (4.5) numerically using the Euler method. Introducing the discrete variables φn = φ(nΔt), ωn = ω(nΔt), and the time step Δt, the iteration rule reads φn+1 = φn + ωn Δt

ωn+1 = ωn − (α ωn + Ω20 sin φn )Δt.

(4.7)

Figure 4.1 shows phase space and energy over t for different α and different time steps. For α = 0 (Figure 4.1 top) we expect a constant energy, which is obviously not the case. Moreover, all trajectories should be closed (periodic motion) instead of spiraling outwards. The simple Euler forward method produces an artificial negative(!) damping, the energy increases as well as the amplitude. After a certain time the separatrix is crossed and the oscillations turn into rotations. This behavior is qualitatively independent of Δt. Figure 4.1 shows the damped case in the middle; the result looks more realistic. The trajectories spiral inwards and the energy reaches the value −Ω20 asymptotically. In contrary the behavior shown in Figure 4.1 bottom is again ruled by numerical artifacts. For

66 � 4 Ordinary differential equations I

Figure 4.1: Trajectories in phase space (left) and total energy over t for the damped pendulum (Ω0 = 1), computed using Euler’s method. Top α = 0, Δt = 0.05, middle: α = 0.1, Δt = 0.05, bottom: α = 0.1, Δt = 0.15. The behaviors shown in the top and the bottom frames do not even qualitatively correspond to reality but are rather a product of numerical errors and instability.

4.2 Numerical methods of the lowest order

� 67

larger time steps (Δt = 0.15) again a kind of ‘negative damping’ is obtained and the energy increases again. Let us draw some preliminary conclusions: – The results for α = 0 are all wrong (negative numerical ‘damping’). – For α > 0 we find qualitatively correct results for small Δt. – Also for the damped case the energy grows continuously and the trajectories get off of the stable (!) fixed point if Δt is chosen too large.

4.2.2 Numerical stability of the Euler method As we have already remarked in Section 3.1 the time step of the Euler method is bounded from above by the demand for numerical stability. We start with the general autonomous system ̇⃗ = f ⃗(x(t)). ⃗ x(t)

(4.8)

The Euler method provides the discretization ⃗ + Δt) = x(t) ⃗ + f ⃗(x(t))Δt ⃗ x(t + O(Δt 2 ).

(4.9)

⃗ Let x⃗ (0) be a stable fixed point f ⃗(x⃗ (0) ) = 0. Then we may linearize (4.9) with x(t) = ⃗ for small deviations u⃗ around this fixed point: x⃗ (0) + u(t) ⃗ + Δt) = u(t) ⃗ + L ⋅ u(t) ⃗ Δt u(t

(4.10)

with the Jacobi matrix Lij =

𝜕fi 󵄨󵄨󵄨󵄨 󵄨 . 𝜕xj 󵄨󵄨󵄨x⃗ (0)

(4.11)

Due to the stability of x⃗ (0) , the eigenvalues of L must all have a nonpositive real part. We write (4.10) as ⃗ + Δt) = Q u(t

Ex

⃗ ⋅ u(t)

(4.12)

with Q

Ex

= 1 + Δt L.

To reflect the behavior close to a stable fixed point, the deviations u⃗ should not grow while iterating (4.12). Thus the spectral radius of Q must be less or equal to one: Ex

ρ(Q ) ≤ 1. Ex

(4.13)

68 � 4 Ordinary differential equations I This condition yields an upper boundary for Δt. To demonstrate the method we take again the mathematical pendulum. Linearizing around the stable fixed point φ = ω = 0 yields the Jacobi matrix 0 L=( 2 −Ω0

1 ). −α

For the sake of simplicity we exclude here the overdamped case and assume α < 2Ω0 . Then the spectral radius of Q turns out to be Ex

ρ = √1 − α Δt + Δt 2 Ω20 and the stability criterion ρ ≤ 1 yields Δt ≤

α . Ω20

This of course explains the strange behavior of the solutions shown in Figure 4.1. The stability condition at Ω0 = 1 simply reads Δt ≤ α, which is only fulfilled for the solutions of the middle row in Figure 4.1. For the undamped case α = 0 the Euler method is unstable for arbitrarily small steps Δt.

4.2.3 Implicit and explicit methods Since the dependent variables at time t + Δt occur explicitly on the left-hand side of the iteration rule, the Euler forward method belongs to the group of explicit integration schemes. For an implicit method instead of (4.9) one finds the rule ⃗ + Δt) = x(t) ⃗ + f ⃗(x(t ⃗ + Δt))Δt + O(Δt 2 ). x(t

(4.14)

To compute the variables at the next iteration step, one must first solve (4.14) for ⃗ + Δt). This is only possible in a unique way for a linear function f ⃗, x(t f ⃗(x)⃗ = A ⋅ x,⃗ and one obtains ⃗ + Δt) = x(t) ⃗ (1 − Δt A) ⋅ x(t or

4.3 Higher order methods

⃗ + Δt) = Q x(t

Im

⃗ ⋅ x(t)

� 69

(4.15)

with Q−1 = 1 − Δt A. Im

Now the stability is determined from the spectral radius of (1 − Δt A)−1 , which normally yields a much weaker restriction. One finds for the linearized pendulum ρ(Q ) = Im

1 √1 + αΔt + Ω20 Δt 2

,

which is less than one for all Δt, even if α = 0. Thus the implicit method for the linearized pendulum (that is the harmonic oscillator) is unconditionally stable and therefore much more suitable than the explicit scheme. On the other hand, the numerical truncation error is still of order Δt 2 and this time generates a positive damping, leading to decaying oscillations in time even if α = 0, Figure 4.2.

Figure 4.2: Harmonic oscillator, implicit first-order scheme, α = 0. Although closed trajectories and a constant energy are expected, the method computes a damped solution. The artificial numerical damping goes with Δt. Despite being stable for arbitrary time steps, the implicit method is still not very accurate.

4.3 Higher order methods The Euler method converges slowly and provides inaccurate and partially nonphysical results. Mainly for conservative systems this may lead to a qualitatively wrong behavior of the solutions. To increase the accuracy, higher order methods have been developed

70 � 4 Ordinary differential equations I where the truncation error is of order Δt 3 or even better. In the following we introduce two higher order methods. Both are explicit and belong to the Runge–Kutta class.

4.3.1 Heun’s method We begin with the one-dimensional system dx = f (t, x), dt

(4.16)

which is generalized later to n dimensions. We integrate (4.16) over t t+Δt

t+Δt

dx = x(t + Δt) − x(t) = ∫ dt ′ f (t ′ , x(t ′ )) ∫ dt dt ′ ′

t

(4.17)

t

and obtain the still exact iteration rule t+Δt

x(t + Δt) = x(t) + ∫ dt ′ f (t ′ , x(t ′ )).

(4.18)

t

If we approximate the integral by the square rule according to t+Δt

∫ dt ′ f (t ′ , x) ≈ f (t, x(t)) Δt, t

the Euler method results. Taking the more accurate trapezoidal rule t+Δt

∫ dt ′ f (t ′ , x) ≈ (f (t, x(t)) + f (t + Δt, x(t + Δt))) t

Δt , 2

one obtains for (4.18) x(t + Δt) = x(t) + (f (t, x(t)) + f (t + Δt, x(t + Δt)))

Δt . 2

(4.19)

Since x(t + Δt) now occurs on the right-hand side as well, this is an implicit (or better a semi-implicit) algorithm. To convert (4.19) to an explicit scheme one computes x(t + Δt) on the right-hand side applying the Euler method x(t + Δt) = x(t) + f (t, x(t)) Δt and finally finds

4.3 Higher order methods

x(t + Δt) = x(t) + (f (t, x(t)) + f (t + Δt, x(t) + f (t, x(t)) Δt))

Δt . 2

� 71

(4.20)

This is called Heun’s method, or sometimes the second-order Runge–Kutta method. 4.3.1.1 Accuracy To which order of Δt is the truncation error in Heun’s method? We write (4.20) as (for the sake of simplicity we assume f (x) instead of f (t, x)) x(t + Δt) − x(t) = (f (x(t)) + f (x(t) + f (x(t)) Δt))

Δt 2

and expand the left-hand side with respect to Δt L.H.S. =

dx 1 d2 x 2 1 d3 x 3 Δt+ Δt + Δt + ⋅ ⋅ ⋅ dt 2 dt 2 6 dt 3 = f Δt +

2

d2 f df 1 df 1 f Δt 2 + (f 2 2 + f ( ) )Δt 3 + ⋅ ⋅ ⋅ , 2 dx 6 dx dx

the right-hand side with respect to f Δt: R.H.S. =

df Δt 1 d2 f 2 2 (2f + f Δt + f Δt ) + ⋅ ⋅ ⋅ , 2 dx 2 dx 2

where . . . denote terms of the order Δt 4 . Both sides coincide up to the order Δt 2 , that is, the truncation error is of the order Δt 3 and thus one order smaller than that of Euler’s method. 4.3.1.2 Numerical stability Similar to the Euler method we may formulate one iteration step for Heun’s method after linearization around a fixed point according to ⃗ + Δt) = Q ⋅ x(t) ⃗ x(t H

with 1 Q = 1 + Δt L + Δt 2 L2 H 2 and the Jacobi matrix L as in (4.11). Again, numerical stability is given by the restriction ρ(Q ) ≤ 1, which eventually leads to H

󵄨󵄨 󵄨󵄨 1 󵄨 󵄨 ρ = max󵄨󵄨󵄨1 + Δtλi + Δt 2 λ2i 󵄨󵄨󵄨 ≤ 1, 󵄨󵄨 i 󵄨󵄨 2 where λi are the eigenvalues of L. For the example of the harmonic oscillator with

72 � 4 Ordinary differential equations I λ12 = −

α 1 ± i√4Ω20 − α2 2 2

we obtain the stability limit: 1 1 1 −α + α2 Δt − αΩ20 Δt 2 + Ω40 Δt 3 = 0. 2 2 4 This provides an upper time step Δtc which is for α > 0, larger than that for Euler’s method (Figure 4.3, Δtc ≈ 1.4). However, for the undamped case α = 0, Heun’s method is still unstable for all Δt. Taking the symplectic method from Chapter 3, one iteration step for the harmonic oscillator can be formulated according to (3.41): φ φ ( ) =Q ( ) s ω ω n+1 n with (see Problems) Q =( s

1 − Ω20 Δt 2 −Ω20 Δt

Δt − αΔt 2 ). 1 − αΔt

(4.21)

The stability is given by the spectral radius of Q . As long as both eigenvalues are complex s one finds ρ(Q ) = √|Det Q | = √|1 − αΔt|. s

s

Figure 4.3: Spectral radii over Δt for the harmonic oscillator, Ω0 = 1, left: damped, α = 1/2, right: undamped, α = 0. Especially for the undamped case the symplectic method discussed in Chapter 3 is superior to any other, even higher-order method.

4.3 Higher order methods



73

For the special (Hamiltonian) case of the undamped oscillator α = 0, one obtains the pleasant result ρ = 1, and the symplectic method should be stable for Δt < 2/α. However, for Δt ≥ (−α + √α2 + 4Ω20 )/2Ω20 the complex conjugate pair of eigenvalues of Q becomes s real valued and the spectral radius quickly exceeds one; compare Figure 4.3. Despite this restriction and especially for a weak damping (α ≪ Ω), the symplectic method is much better suited than other methods.

4.3.2 Problems 1.

For the Crank–Nicolson method, the right-hand sides of the ODEs are written as a linear combination of explicit and implicit terms. Let y(x) be the solution of dy(x) = f (y(x)) dx ̃ ̃ 0 ) = y(x0 ), the and y(x) the numerically obtained approximation. Starting with y(x Crank–Nicolson scheme provides ̃ 0 + Δx) − y(x ̃ 0) 1 y(x ̃ 0 )) + f (y(x ̃ 0 + Δx))]. = [f (y(x Δx 2

(4.22)

Show that the numerical error after one time step reads 󵄨󵄨 ̃ 󵄨 3 󵄨󵄨y(x0 + Δx) − y(x0 + Δx)󵄨󵄨󵄨 = O(Δx ). 2.

Derive the amplification matrix (4.21) for the symplectic method.

4.3.3 Runge–Kutta method Taking a higher order algorithm increases both step size and accuracy. Even higher order methods are possible, however, the numerical effort would also increase. For Heun’s method one has to evaluate f at two different function values for each iteration step. To reduce the algorithm to practice one should always find a compromise between accuracy and effort. An algorithm that has stood the test of time is the already in 1895 developed Runge– Kutta method (RK). RK exists in different orders; normally the fourth order scheme (which we shall abbreviate by RK4) is taken. For the sake of simplicity we explain the principle at a method of the second order. Let x = x(t) be the desired solution of the ODE ẋ = f (t, x(t)).

74 � 4 Ordinary differential equations I We expand x at t + Δt/2 2

x(t) = x(t + Δt/2) −

Δt 1 Δt ̇ + Δt/2) + ( ) x(t ̈ + Δt/2) + O(Δt 3 ) x(t 2 2 2

x(t + Δt) = x(t + Δt/2) +

Δt 1 Δt ̇ + Δt/2) + ( ) x(t ̈ + Δt/2) + O(Δt 3 ) x(t 2 2 2

2

and subtract the two equations ̇ + Δt/2) + O(Δt 3 ). x(t + Δt) = x(t) + Δt x(t It results in an iteration scheme of the order of Δt 2 . Similar to Heun’s method one must know ẋ = f for a time later than t to compute the right-hand side. Again this is accomplished by the Euler method ̇ + Δt/2) = f (t + Δt/2, x(t + Δt/2)) = f (t + Δt/2, x(t) + Δt/2 f (t, x(t))) + O(Δt 2 ) x(t and one iteration step takes the following form: k1 = Δt f (t, x(t))

k2 = Δt f (t + Δt/2, x(t) + k1 /2)

(4.23)

x(t + Δt) = x(t) + k2 .

Thus an explicit scheme of order Δt 2 is constructed where f must be evaluated twice at each step. Although it looks different, the scheme (4.23) is similar to Heun’s method (4.20) up to the order Δt 3 , which can be shown by expanding (4.23) with respect to Δt/2 at t and t + Δt. In the same manner the fourth order RK scheme is derived: k1 = Δt f (t, x(t))

k2 = Δt f (t + Δt/2, x(t) + k1 /2)

k3 = Δt f (t + Δt/2, x(t) + k2 /2)

k4 = Δt f (t + Δt, x(t) + k3 )

1 x(t + Δt) = x(t) + (k1 + 2 k2 + 2 k3 + k4 ), 6 and is easily coded in FORTRAN95:

C C C C

SUBROUTINE rk4(x,t,n,dt,eq) integrates the ODE system defined in eq from t to t+dt x dependent variables x(n) t independent variable n number of dependent var.

(4.24)

4.3 Higher order methods

C C



75

dt time step eq the ride-hand sides of the ODEs (subroutine) REAL, DIMENSION(n) :: x,f1,f2,f3,f4 CALL eq(f1,x,t); f1=dt*f1 CALL eq(f2,x+.5*f1,t+dt/2.); f2=dt*f2 CALL eq(f3,x+.5*f2,t+dt/2.); f3=dt*f3 CALL eq(f4,x+f3,t+dt) x=x+(f1+2.*f2+2.*f3+f4*dt)/6. END

The error for RK4 is of the order Δt 5 and the restriction for the step size originates now from the stability condition (the detailed computation is left for the reader) 󵄨󵄨 󵄨󵄨 1 1 1 󵄨 󵄨 ρ = max󵄨󵄨󵄨1 + Δtλi + Δt 2 λ2i + Δt 3 λ3i + Δt 4 λ4i 󵄨󵄨󵄨 < 1. 󵄨 󵄨󵄨 i 󵄨 2 6 24

(4.25)

An evaluation of ρ is also shown in Figure 4.3. As expected, the stability region is much larger than those of the other schemes having a lower order. 4.3.3.1 Example: mathematical pendulum with RK4 As an application we use RK4 to solve the equations for the nonlinear pendulum with f ⃗ from (4.5). Note that even for the problematic conservative case α = 0 convergent results and constant energy are obtained if the time step is not chosen to be too large (ρ ≈ 1 in (4.25)); see Figure 4.4. A code section computing a trajectory with initial values φ = 2, ω = 0 could read as PROGRAM pendulum_rk4 REAL, DIMENSION(2) :: y

! variables phi, omega

... y=(/2.,0./) ! initial values for phi and omega t=0.; dt=0.01 DO WHILE(t.LT.tend) ! time iteration loop begins t=t+dt CALL rk4(y,t,2,dt,pendulum_deq) ! compute one time step with rk4 ... ENDDO ... If the subroutine rk4 has been placed in the program library (see Appendix B), only the subroutine pendulum_deq has to be specified in the file below the main section (MAIN), here pendulum_rk4. This can be done even inside MAIN using the CONTAINS statement.

76 � 4 Ordinary differential equations I

Figure 4.4: The undamped mathematical pendulum (α = 0) with RK4. For a time step too large, numerical friction distorts the result (top, Δt = 1), for Δt = 0.1 (bottom) the energy is conserved well for many periods (Ω0 = 1, T = 2π).

Then the parameters, here omega and alpha, are global and also defined in the subroutine. Moreover, MAIN recognizes pendulum_deq as the name of a subroutine and transfers it accordingly to rk4. ... CONTAINS SUBROUTINE pendulum_deq(rhside,y,t) REAL, DIMENSION(2) :: rhside,y rhside(1)=y(2) rhside(2)=-alpha*y(2)-omega**2*SIN(y(1)) END SUBROUTINE pendulum_deq END PROGRAM pendulum_rk4

4.3 Higher order methods

� 77

The other, more traditional ‘FORTRAN77-like’ way is to avoid the CONTAINS statement and the sometimes confusing use of global variables by defining pendulum_deq by an EXTERNAL statement in MAIN. The parameter transfer to the subroutine defining the right-hand sides of the ODEs can then be realized by a COMMON block. 4.3.3.2 RK4 with adaptive step size Up to here the step size Δt was constant and, besides stability restrictions, chosen rather arbitrarily. This is the most easiest way but may not always be the optimal one. A step width too large renders the results inaccurate, while a step size too small unnecessarily increases the number of iterations and therefore the computational effort. An algorithm adapting the step size during the computations can improve the situation considerably. On the one hand one can thus prescribe a desired accuracy (or upper error limit); on the other hand the method can automatically increase the step size at times where x⃗ is slowly varying and vice versa. ⃗ and Using RK4 the truncation error is of O(Δt 5 ). Let us start the integration at x(t) compare two numerically computed x⃗1 (t +Δt), x⃗2 (t +Δt), one found with step size Δt (one iteration), the other with Δt/2 and two consecutive iterations. Let d(Δx) be the euclidean distance d(Δt) = |x⃗1 − x⃗2 |, then d(Δt) = |c|Δt 5 where c depends somehow on the fifth derivative of x⃗ at time t. If we compute d for two different step sizes we may eliminate c and find 5

d(Δt1 ) Δt = ( 1) . d(Δt2 ) Δt2 This can be solved for Δt2 : Δt2 = Δt1 (

1/5

d(Δt2 ) ) . d(Δt1 )

Now we compute one iteration with a certain fixed Δt1 and require d(Δt2 ) = ϵ as the desired accuracy. Then the step width to achieve this accuracy reads Δt2 = Δt1 (

1/5

ϵ ) . d(Δt1 )

In practice the method is more robust if the exponents are taken variably, depending whether Δt increases or decreases:

78 � 4 Ordinary differential equations I

Δt2 = Δt1 (

ϵ ) d(Δt1 )

p

and p={

1/5

if

d 0,

(4.26)

were proposed by Lotka (1920) and Volterra (1931) to model the interaction between a prey n1 (t) and a predator n2 (t) population. They are nowadays known as the Lotka– Volterra model and constitute the most simple predator-prey system. 1. Explain the terms on the right-hand sides of (4.26). Show that performing a proper scaling of time, n1 and n2 , (4.26) can be cast into the form ṅ̃ 1 = añ 1 − ñ 1 ñ 2 ñ̇ 2 = −ñ 2 + ñ 1 ñ 2 ,

a>0

(4.27)

4.4 RK4 applications: celestial mechanics

2. 3.

79

Compute the fixed points of (4.27) and examine their stability. Show that W (ñ 1 , ñ 2 ) = ñ 1 + ñ 2 − ln ñ 1 − a ln ñ 2

4.



(4.28)

is a conserved quantity. Compute numerical solutions of (4.27) using RK4 and check the conservation of (4.28). Plot the trajectories in state space for several a and different initial conditions.

4.4 RK4 applications: celestial mechanics The basic subject of classical celestial mechanics is the computation of the trajectories of point masses (planets or stars) interacting with Newton’s law of gravity. Only the most simple situation of two bodies in an otherwise empty universe can be solved exactly. This situation is known as the two-body or Kepler problem. Its solutions are the well-known ellipses, parabolas, or hyperbolas. 4.4.1 Kepler problem: closed orbits Two bodies (point masses) with mass mi are located at ri⃗ (t). Between both bodies a force according to Newton’s law of gravity is assumed. The equations of motion read r1⃗ − r2⃗ |r1⃗ − r2⃗ |3 r⃗ − r1⃗ m r2̈⃗ = −G m1 2 = − 1 r1̈⃗ 3 m2 |r2⃗ − r1⃗ | r1̈⃗ = −G m2

(4.29)

with Newton’s gravity constant G ≈ 6.67 ⋅ 10−11 m3 /(kg s2 ). Introducing relative coordinates η⃗ = r1⃗ − r2⃗ , the six mechanical degrees of freedom can be reduced to three (this is a consequence of the conservation of the total momentum) η⃗ η̈⃗ = −G M 3 η

(4.30)

with the total mass M = m1 +m2 . Taking the center of mass as the origin of the coordinate system, one has the relations r1⃗ =

m2 η,⃗ M

r2⃗ = −

m1 η.⃗ M

(4.31)

Due to the central forces the total angular momentum is conserved and the motion can be further restricted to a plane, leaving only two mechanical degrees of freedom. With-

80 � 4 Ordinary differential equations I out loss of generality, we can study the 2D problem with η⃗ = (x, y). After appropriate 1/3 ̃ scaling |η| = |η|(GM) one obtains the two ODEs of second order without any further parameters (tildes removed) x r3 y ÿ = − 3 r

ẍ = −

(4.32)

where r = √x 2 + y2 . The total energy measured in multiples of GM 2/3 reads 1 1 E = (ẋ 2 + ẏ2 ) − 2 r

(4.33)

and represents another constant of motion. Together with the constant angular momentum L = x ẏ − yẋ

(4.34)

the number of dynamical degrees of freedom in (4.32) is further reduced from four to two. But this excludes chaotic behavior and one expects at least for bounded solutions either fixed points or periodic solutions.1 Fixed points with a finite E do not exist for the Kepler problem and the periodic solutions correspond to the closed orbits of the planets. An obvious solution of (4.32) is a circular orbit with radius R, x = R cos(ωt),

y = R sin(ωt).

Inserting this into (4.32) yields ω2 =

1 , R3

(4.35)

in agreement with Kepler’s third law. Numerical solutions of (4.32) can be achieved with the help of RK4. The subroutine defining the system (4.32) takes the form SUBROUTINE kepler_deq(rhside,y,t) REAL, DIMENSION(4) :: rhside,y r32=(y(1)**2+y(3)**2)**(3/2.) rhside(1)=y(2) rhside(2)=-y(1)/r32 rhside(3)=y(4) rhside(4)=-y(3)/r32 END 1 However, this applies only for the 1/r-potential, see the remarks in Section 4.4.2 and Problem 4.4.5,1.

4.4 RK4 applications: celestial mechanics



81

Taking the initial conditions x(0) = R,

y(0) = 0,

̇ x(0) = 0,

̇ y(0) = L/R,

one finds a circular orbit for L = L0 = √R with radius R, otherwise ellipses, or for even larger L, hyperbolas. For L < L0 , the initial point coincides with the aphelion (this is the largest distance from the planet to the sun), and for L > L0 , with the perihelion of the ellipse. For E, (4.33) is evaluated as E=

L2 1 − , 2R2 R

i. e., if L > √2R one has E > 0 and no bounded solutions exist anymore. The two bodies are free and move on hyperbolas around their common center of mass. In general there is now friction in the celestial mechanics of mass points. Then it may become crucial how numerical friction will modify the results. How far is the energy (4.33) conserved? To shed light on this issue we show in Figure 4.6 the value of E after 10 000 revolutions as a function of Δt for a circular orbit with R = 1, L = 1, ω = 1. One recognizes a very precise energy conservation up to time steps of about 0.05. Note that the orbital period is 2π, and thus for Δt = 0.05, one circle is resolved by about 120 time steps only. Nevertheless the total energy after t = 10 000 ⋅ 2π is about E = −0.50024, a deviation of less than 0.05 %.

Figure 4.6: Deviation of the total energy of a planet revolving on a circular orbit (R = 1, L = 1) from the exact value E = −1/2 after 10 000 periods, plotted over the time step Δt. Although only of the order Δt 2 , the symplectic algorithm is superior to the fourth order Runge–Kutta method for larger Δt.

82 � 4 Ordinary differential equations I

by

C

Taking the symplectic method of the lowest order one time step is easily integrated

SUBROUTINE symplect(y,dt) symplectic integrator for one time step REAL, DIMENSION(4) :: y r32=(y(1)**2+y(3)**2)**(3/2.) ! denominator y(2)=y(2)-y(1)/r32*dt y(4)=y(4)-y(3)/r32*dt y(1)=y(1)+y(2)*dt y(3)=y(3)+y(4)*dt END

As it can be seen also from Figure 4.6, the symplectic method is not always preferable. For a larger time step it conserves the energy better, but for smaller time steps the accuracy of RK4 is much higher.

4.4.2 Quasiperiodic orbits and apsidal precession For E < 0 all orbits are closed. This is a special property of the 1/r-potential and any tiny deviation V (r) = −

1 , r (1+ϵ)

|ϵ| ≪ 1

leads to nonclosed, quasiperiodic trajectories whose perihelion turns slowly around the central mass (Figure 4.7). For ϵ ≠ 0 the only closed orbit is the circle (for a more detailed discussion see Problems, Section 4.4.5).

Figure 4.7: Perihelion precession for a tiny deviation from the 1/r-potential. Orbits computed with RK4 and R = 1, L = 0.7, and ϵ = 0.05.

4.4 RK4 applications: celestial mechanics



83

Previously we argued that the motion is reduced due to the conservation laws on a plane with two dynamical degrees of freedom. But how then can crossing trajectories emerge as depicted in Figure 4.7? Apparently there exist two possible paths after each ̇ y). ̇ y), ẏ = y(x, crossing point. This is a consequence of the not unique relation ẋ = x(x, If we eliminate for instance ẏ with the help of (4.34) from the energy law (4.33) we find ̇ y) having two different solutions. a quadratic equation for x(x,

4.4.3 Multiple planets: is our solar system stable? Things are getting much more involved if the system is extended to more than one planet circling the central star. Analytic solutions of the equations of motions can be found only by perturbation theory and for very special cases. In 1887 the King of Sweden Oscar II asked the question ‘is our solar system stable?’ and offered a prize of 2500 crowns for the answer. Even nowadays this question is not definitely decided. On the one hand, in 1890 Henri Poincaré could find a kind of counterevidence. He mathematically showed that already for the three-body problem, i. e., a planet close to a double star, no regular orbits can exist. On the other hand, the French astronomer Jacques Laskar computed numerically in 1989 the orbits of the four inner planets, and in 1994, of all planets in our solar system for the next 25 billion years and found that the trajectories are ‘weakly chaotic,’ but collisions between the planets are rather unlikely in the next 200 million years [1, 2]. A further indication to the instability of certain orbit radii is the distribution of asteroids between Mars and Jupiter. One observes the so-called Kirkwood gaps occurring at orbit radii having a period of revolution TA as a rational of the period of Jupiter: TA =

TJ n

,

n = 2, 7/3, 5/2, 3, . . . .

(4.36)

Resonances occur along those orbits, which pull out the asteroids located close to these radii in a short time (at least on the astronomic scale); see also left frame of Figure 1.2. The interaction of three celestial bodies can be examined numerically. Based on (4.29) we consider the extended ODE system r⃗ − r3⃗ r⃗ − r2⃗ 1 ̈ r⃗ = −m2 1 − m3 1 G 1 |r1⃗ − r2⃗ |3 |r1⃗ − r3⃗ |3 r⃗ − r3⃗ r⃗ − r1⃗ 1 ̈ r2⃗ = −m1 2 − m3 2 3 G |r2⃗ − r1⃗ | |r2⃗ − r3⃗ |3 r⃗ − r1⃗ r⃗ − r2⃗ 1 ̈ r3⃗ = −m1 3 − m2 3 . 3 G |r3⃗ − r1⃗ | |r3⃗ − r2⃗ |3

(4.37)

84 � 4 Ordinary differential equations I The system (4.37) is equivalent to 18 first-order ODEs. Again the number of degrees of freedom can be reduced due to the conservation of the total momentum. Taking the center of mass as the origin of the coordinate system, we may eliminate for instance r3⃗ : r3⃗ = −α1 r1⃗ − α2 r2⃗ .

(4.38)

Here, αi = mi /m3 denote the dimensionless mass ratios. Scaling the time according to τ = t(

Gm3 ) ℓ3

1/2

where ℓ is a length to be determined later, one finds the dimensionless system r⃗ − r3⃗ r1⃗ − r2⃗ − 1 3 ⃗ ⃗ |r1 − r2 | |r1⃗ − r3⃗ |3 r⃗ − r3⃗ r⃗ − r1⃗ r2̈⃗ = −α1 2 − 2 3 ⃗ ⃗ |r2 − r1 | |r2⃗ − r3⃗ |3 r1̈⃗ = −α2

(4.39)

with r3⃗ from (4.38). The energies of the three bodies in units of ℓ/Gm32 read 3 1 Ei = αi (ri̇⃗ )2 + ∑ Vik , 2 k =i̸

Vik = −

αi αk , |ri⃗ − rk⃗ |

(4.40)

which of course are not conserved. The total energy, however, 3

E = ∑ Ei = i

1 3 1 3 ∑ αi (ri̇⃗ )2 + ∑ Vik 2 i 2 i,k =i̸

(4.41)

constitutes again a constant of motion and can be used as a control value for any numerical integration. In addition, the three components of the total angular momentum 3

L⃗ = ∑ mi (ri⃗ × ri̇⃗ ) i

must also be conserved. Counting the number of dynamical degrees of freedom in three dimensions, one finds 18 (number of equations) −1 (energy) −3 (momentum) −3 (angular momentum) = 11, more than enough to expect chaotic behavior. We shall however continue the treatment with the restriction to two spatial dimensions. This of course is a special case that can be realized only if for all bodies the initial condition reads zi (0) = żi (0) = 0. The number of degrees of freedom is then reduced to only 12−1−2−1 = 8 but still enough to leave room for chaos. The system (4.39) can be integrated by either applying RK4 or using a symplectic algorithm. Depending on the αi and on the initial conditions, various qualitatively different motions can be obtained. A solution for relatively small αi is depicted in Figure 4.8.

4.4 RK4 applications: celestial mechanics



85

Figure 4.8: The three-body problem and numerical solutions for α1 = α2 = 0.001. Left: the distances |ri | from the origin; right: the energies E1 , E2 from (4.40). The two lighter bodies (red, green) circuit the heavy one (blue) on weakly distorted circular orbits.

Figure 4.9: Same as Figure 4.8, but for α1 = 0.015, α2 = 0.025. Now the evolution seems much more chaotic. At t ≈ 100 the at first inner planet (green) moves outwards. It continues absorbing energy from the other two bodies and reaches even larger orbits. Eventually it may escape from the influence of the two other masses (E2 ≈ 0, t > 1400). Then the two remaining bodies form a stable periodic Kepler system.

Because of m3 ≫ m1 , m2 , the mass point denoted by i = 3 (blue) plays the role of the central star. It is revolved by the ‘planets’ with i = 2, 3. Affected by the (weak) interaction, the orbits are no longer closed and show weakly chaotic behavior. If the αi are larger, the solutions become very irregular and chaotic. Often one body can absorb enough energy to escape the system even if the total energy is negative, E < 0. This mass point detaches from the other two over the course of time and for t → ∞, a two-body problem remains, showing periodic and closed Kepler orbits (Figure 4.9). Both the solutions shown in Figure 4.8 and Figure 4.9 have been computed with a symplectic method of order Δt 2 , and the changes of total energy and angular momentum have been recorded. With the time step Δt = 2 ⋅ 10−6 the maximal relative deviations 󵄨󵄨 X − Xmin 󵄨󵄨󵄨󵄨 󵄨 ΔX = 2󵄨󵄨󵄨 max 󵄨 󵄨󵄨 Xmax + Xmin 󵄨󵄨󵄨

86 � 4 Ordinary differential equations I in the time interval 0 < t < 2600 are for Figure 4.8 ΔE = 4.8 ⋅ 10−7 ,

ΔL = 3.0 ⋅ 10−12 ,

ΔE = 2.2 ⋅ 10−5 ,

ΔL = 2.9 ⋅ 10−12 .

and for Figure 4.9

The restriction of the motion on a plane strongly increases the probability of collisions between two bodies. Here of course, a 3D-computation would be more realistic. 4.4.4 The reduced three-body problem To arrive from (4.37) at the so-called reduced three-body problem, the following approximations are in order: – All masses move in the same plane (2D). – One of the three masses is light enough (a small planet, an asteroid, or a space ship) not to influence the motion of the two other ones. Let us assume m1 ≪ m2 , m3 . Then m1 is called the test body (TB). – The two heavy bodies (named main bodies, e. g., sun and Jupiter or a double star) move in circles around their common center of mass with the angular frequency ω. Lengths and time are scaled to their distance and period so that |r2⃗ − r3⃗ | = 1 and ω = 1. 4.4.4.1 The explicitly time-dependent equations of motion An important system parameter is provided by the mass relation μ=

m3 , m2 + m3

0 < μ < 1.

(4.42)

Scaling of time with τ = t(

Gm3 ) μℓ3

1/2

yields after some computations the two circular orbits r2⃗ (t) = −μ (

cos t ), sin t

r3⃗ (t) = (1 − μ) (

cos t ). sin t

(4.43)

For the test body m1 , the equation of motion has the form r1̈⃗ = −(1 − μ)

r⃗ − r3⃗ (t) r1⃗ − r2⃗ (t) −μ 1 . 3 |r1⃗ − r2⃗ (t)| |r1⃗ − r3⃗ (t)|3

(4.44)

4.4 RK4 applications: celestial mechanics



87

4.4.4.2 The equations of motion in the corotating system As shown first by Euler, the explicit time dependence of (4.44) can be removed by a transformation to a coordinate system r⃗̃ = (x,̃ y)̃ rotating with the two main bodies according to x̃ = x cos t + y sin t ỹ = −x sin t + y cos t. In a uniformly rotating system, apparent forces due to inertia occur, named Coriolis and centrifugal forces. One finds (we drop the tildes): ẍ = 2 ẏ − 𝜕x V

ÿ = −2 ẋ − 𝜕y V

(4.45)

with the potential (Jacobi potential): V (x, y) = −

1−μ μ 1 2 − − r r2 r3 2

(4.46)

and r = √ x 2 + y2 , as well as the distances r2 = √(x + μ)2 + y2 ,

r3 = √(x + μ − 1)2 + y2

between the TB and the two main bodies. In the corotating system the main bodies rest on the x-axis: r2⃗ = (−μ, 0),

r3⃗ = (1 − μ, 0),

their common center of mass coincides with the origin. As can be shown easily, the total energy E=

1 2 (ẋ + ẏ2 ) + V (x, y) 2

(4.47)

is conserved and determined by the initial conditions. Since the kinetic energy cannot be negative, the TB can only be located in regions where E − V (x, y) ≥ 0, see Figure 4.10.

88 � 4 Ordinary differential equations I

Figure 4.10: Forbidden regions (gray) for μ = 1/4 and E = −2.0, −1.8, −1.7, −1.55 (from left to right). For E ≥ −3/2 − μ(μ − 1) ≈ −1.406 the test body has access to the complete plane.

4.4.4.3 Lagrangian points and Trojans The equations (4.45) possess five fixed points (ẋ = ẍ = ẏ = ÿ = 0), the TB ‘rests’ in the rotating system and relative to the main bodies. The fixed points are called Lagrangian or libration points. They are computed from the equilibrium conditions 𝜕x V = 𝜕y V = 0. Three of the Lagrangian points lay on the x-axis (Figure 4.11), their x-values can only be computed numerically (see Problems). They are unstable for all μ. Of more interest are the two other fixed points y ≠ 0, the so-called Trojans. They turn out to be x4,5 = 1/2 − μ,

y4,5 = ±√3/2

(4.48)

and form an equilateral triangle with the two main bodies with side length one. Linearizing (4.45) around (4.48) x = x4,5 + x,̄

y = y4,5 + ȳ

yields a system for the deviations 3 3√3 ẍ̄ = 2 ẏ̄ + x̄ ∓ (2μ − 1) ȳ 4 4 3√3 9 ȳ̈ = −2 x̄̇ ∓ (2μ − 1) x̄ + y,̄ 4 4

(4.49)

Figure 4.11: The five Lagrangian points for μ = 1/4. L4 and L5 are named Trojans and prove to be stable for small values of μ.

4.4 RK4 applications: celestial mechanics



89

and with x,̄ ȳ ∼ exp(λt) the characteristic polynomial λ4 + λ2 +

27 μ(1 − μ) = 0 4

(4.50)

as the solvability condition. The fixed points x4,5 , y4,5 become unstable if λ2 ∈ 𝒞 . This is the case if 1 − 27μ(1 − μ) < 0, or 1 1 √ 23 1 1 23 − 1 to the scaling: dτ = dt,

1 u = x,̇ s

1 v = ẏ s

and obtain dτ x = s u dτ y = s v

1 1 dτ u = 2 v − 𝜕x V − 3 Fx s 2s 1 1 dτ v = −2 u − 𝜕y V − 3 Fy . s 2s

(4.54)

The system (4.52), (4.54) can be coded using RK4. Even better suited is RK4 with adaptive time step, see Section 4.3.3.2. The accuracy can also be controlled dynamically taking ϵ = ϵ0 s. Then the relative accuracy ϵ/r remains approximately constant. For the complete program we refer to the website [3]. Figure 4.12 shows numerically computed orbits for μ = 1/4 and several values of E. The Lagrangian points are located at x1 = 0.361,

x2 = 1.266,

x3 = −1.103,

y1,2,3 = 0,

x4,5 = 1/4,

y4,5 = ±√3/2,

and the corresponding potential values read V1 = −1.935,

V2 = −1.781,

V3 = −1.622,

V4,5 = −1.406.

4.4 RK4 applications: celestial mechanics



91

Figure 4.12: Trajectories for μ = 1/4 and E = −2.0 (top row), E = −1.8 (second row), E = −1.75 (third and fourth row), E = −1.55 (bottom row).

For all solutions we fixed ϵ0 = 0.001, leading to a dynamical time step between 10−5 and 10−2 . For E < V1 the orbits are normally periodic or quasiperiodic. If E > V1 , the trajectories soon become chaotic. For even higher energies most of the orbits leave the region close to the main bodies and escape to infinity. On the other hand, even here, bounded orbits exist (last row on the left).

92 � 4 Ordinary differential equations I

Figure 4.13: Scan through the allowed region of initial conditions with a resolution of 400 x 400 dots, μ = 1/4, E = −1.8 (left), and E = −1.75 (right). The different colors correspond to the different types of the trajectories (see text).

As can be seen in the figure, depending on the initial conditions, one obtains for the same values of μ and E completely different trajectories. For any given E we may scan the allowed region for x(t = 0), y(t = 0) and integrate the equations from t = 0 up to a certain given (long) time t = te . Then the resulting trajectory is evaluated according to its behavior and the initial point x(t = 0), y(t = 0) is colored accordingly. Two such orbittype diagrams are shown in Figure 4.13, again for μ = 1/4 and E = −1.8; E = −1.75. The absolute value of the initial velocity follows from the energy equation (4.47), its direction chosen perpendicularly2 to r1⃗ . At te = 1000 we distinguish between – White: escape trajectory. The trajectory leaves the area r < 4 within t < te . – Red: orbit around main body at −μ. Bounded up to t = te . – Green: orbit around main body at 1 − μ. Bounded up to t = te . – Blue: orbit around both main bodies. Bounded up to t = te . The boundaries between different regions look at least for E = −1.75 fractal, a hint toward chaotic orbits. At this energy value, orbits only enclosing the heavier main body are rather uncommon.

4.4.5 Problems 1.

Study the system (4.32) for a more general gravity force ẍ = −

x , rα

ÿ = −

y rα

(4.55)

2 The direction is an additional parameter in the form of an initial value. For other directions, one obtains other diagrams.

4.5 Molecular dynamics (MD)

2.

3.

� 93

for arbitrary α. Show by linearization around a circular orbit with radius r0α = 1/Ω20 and angular frequency Ω0 that such a circle is closed only for certain α. Determine these α and verify the result by integrating (4.55) numerically. Hint: transform (4.55) to a rotating frame with angular frequency Ω0 as done in Section 4.4.4.2. Examine the full three-body problem (4.39) in three room dimensions. Code the ODE system with the help of a symplectic algorithm. Take as the initial values that the two orbits are not located in the same plane. Check numerically the stability of the orbits for several values of αi . Check also if E and L⃗ are conserved. Determine the Lagrangian points L1 to L3 depending on μ numerically.

4.5 Molecular dynamics (MD) During the past five decades, direct numerical simulations of microscopic many-particle systems became more and more a focus of interest. In this way material parameters or even phase transitions can be computed based on ‘first principles.’ However, in statistical physics, ‘many’ means some 1023 particles, a number that will remain always beyond numerical realization. The first papers in the field [4] clearly dealt with fewer than 1000 particles, whereas nowadays handling some 100 million is feasible.

4.5.1 Classical formulation Most MD systems are based on the laws of classical mechanics. In the simplest case, one considers N equal particles (mass points) interacting with two-body forces. The force is often described by the Lennard-Jones potential3 12

6

σ σ V (r) = 4ϵ(( ) − ( ) ) r r

(4.56)

where r is the distance of the two interacting particles. The parameters ϵ and σ depend on the material to be simulated and take, e. g., for the inert gas Argon, the values ϵ/kB = 120 K, σ = 3.4 Å (kB = 1.381 ⋅ 10−23 J/K, Boltzmann’s constant). Again we shall restrict ourselves to a plane problem with two spatial dimensions. Although this seems not very realistic for a gas or a solid, the codes will be shorter and execute much faster. The method does not differ from the three-dimensional treatment and we can leave a 3D extension to the reader. 3 John Lennard-Jones, English mathematician and physicist, 1894–1954.

94 � 4 Ordinary differential equations I Applying a scaling to nondimensional variables r = σ r,̃

t = σ √m/ϵ t,̃

we obtain the 2N dimensionless equations of motion (tildes removed) N

ẍi = 24 ∑ f (rij )(xi − xj ), j=i̸

N

ÿi = 24 ∑ f (rij )(yi − yj ) j=i̸

(4.57)

where f (r) = 2r −14 − r −8

(4.58)

and rij = √(xi − xj )2 + (yi − yj )2 are the two-particle distances. The scaled Lennard-Jones potential 12

6

1 1 U(r) = 4(( ) − ( ) ) r r

(4.59)

possesses a minimum at r0 = 21/6 ≈ 1.12 with U(r0 ) = −1. This would correspond to the equilibrium configuration if there were only two particles around. The total (internal) energy scaled by ϵ reads E = Ek + Ep

(4.60)

and must be conserved if no external forces are present. Here we denote the total kinetic internal energy by Ek =

1 N ̇ 2 ∑(r⃗ ) , 2 i i

(4.61)

and the total potential internal energy by N

Ep = ∑ U(rij ). j>i

(4.62)

4.5.2 Boundary conditions For the majority of MD simulations, periodic boundary conditions are assumed. The integration domain with periodic length L is thought to be repeated in each space direction infinitely many times. Each particle is not only interacting with every other one in L but also with the ones in the periodic copies. In this way a much larger system is simulated. Wall effects, however, cannot be studied since the system is in fact infinite.

4.5 Molecular dynamics (MD)

� 95

A much more simple way is to take rigid, impermeable, and perfectly reflecting walls around the integration domain. If while integrating a particle position xi , yi outside of the domain turns out, say on the right-hand side xi > L, the particle is to be reflected. Then its new position and x-velocity are simply set to xi → 2L − xi ,

ẋi → −ẋi .

The same can be done for y and correspondingly for the walls at x = 0, (xi → −xi ), and y = 0. The particles interact with the walls in the form of elastic collisions where energy and absolute value of momentum are conserved.

4.5.3 Microcanonical and canonical ensemble The macroscopic state variables of a closed many-particle system are the internal energy E, the volume V (in 2D V = L2 ), and the number of particles N. All three quantities are constants of motion and can be adjusted by the system and the initial conditions, respectively. Using the notions of statistical mechanics, such a system is called a member of the microcanonical ensemble. Each member is represented by a point in the 4N-dimensional state space, the Γ-space. When time goes on, these points travel on trajectories in a 4N − 1-dimensional hyperplane E(x1 , y1 . . . xN , yN , u1 , v1 . . . uN , vN ) = const. where ui = ẋi , vi = ẏi denote the particle velocities. We may assign a temperature to each ensemble member using the equipartition theorem according to f k T = Ek 2 B

(4.63)

where f denotes the number of mechanical degrees of freedom. During the temporal evolution the temperature will change and reaches its equilibrium value T0 for t → ∞. Scaling the temperature with ϵ/kB and as in (4.60), the energy with ϵ one obtains with f = 2N: T = Ek /N.

(4.64)

If one is interested in the thermal behavior of the simulated material, one normally prescribes the temperature by coupling the system to a heat bath. Then, the internal energy will adjust itself accordingly and the ensemble is called canonical. There are different ways to fix the temperature to a desired value. These algorithms are called thermostats. Due to (4.63) Ek is given through T. Thus, after a certain number of time steps one can repeatedly multiply the speed of each particle with the factor

96 � 4 Ordinary differential equations I √Ts /T, fulfilling (4.63). Here, Ts denotes the prescribed nominal temperature of the heat bath. Another way to realize a thermostat is assuming a particle-speed-dependent friction in the equations of motion. If we add to the right-hand sides of (4.57) the terms −γ(T) ri̇⃗ and take γ(T) = γ0 (1 −

Ts ), T

(4.65)

it can be shown that the temperature Ts is reached exponentially in time with exp(−2γ0 t) (Problems). Each particle is thereby permanently in contact with the heat bath, releasing energy to the bath if T > Ts , receiving energy for T < Ts . 4.5.4 A symplectic algorithm Here we shall only briefly describe the algorithm; a detailed and commented listing can be found in [3]. To integrate (4.57) with (4.65) a symplectic scheme is recommended. ̇ a system of 4N first-order ODEs is obIntroducing the particle velocities (u, v) = (x,̇ y), tained having the form: N

u̇ i = 24 ∑ f (rij )(xi − xj ) − γ(T) ui j=i̸

ẋi = ui

(4.66)

N

v̇i = 24 ∑ f (rij )(yi − yj ) − γ(T) vi j=i̸

ẏi = vi , which after time discretization can be brought into the symplectic scheme (see Section 3.3.2) N

ui(n+1) = ui(n) + [24 ∑ f (rij(n) )(xi(n) − xj(n) ) − γ(T) ui(n) ] Δt j=i̸

xi(n+1) = xi(n) + ui(n+1) Δt v(n+1) i

=

v(n) i

+

N

[24 ∑ f (rij(n) )(y(n) i j=i̸

y(n+1) = y(n) + v(n+1) Δt. i i i

(4.67) −

y(n) ) j



γ(T) v(n) ] Δt i

4.5 Molecular dynamics (MD)



97

4.5.4.1 Optimization To find the forces acting on a certain particle (called reference particle in the following) one must sum over all N −1 pair forces caused by the other particles. This corresponds to N(N − 1) evaluations of the pair potential at each time step. The program can be vastly optimized if only those particles located within a certain distance rm to the reference particle are taken into account. The Lennard-Jones potential decreases with 1/r 6 , thus it should be sufficient to take say rm = 4r0 . Then U(rm ) ≈ −0.0005 can safely be neglected against U(r0 ) = −1. To restrict the pair sum on the neighbors laying in a sphere (or disc in 2D) with radius rm , one must know the ambient particles for the reference particle, demanding a certain bookkeeping during the dynamical process. This can be achieved most easily with a two-dimensional array NB. Let the particles be numbered with k = 1 . . . N (ordinal number). Then the first index of NB is assigned to the ordinal number of the reference particle, say k, while the second one lists the neighbors. There, NB(k, 1) is equal to the number of neighbors of the reference particle k. If we have, for instance, NB(k, 1) = 5, particle number k has five neighbors inside rm , whose ordinal numbers are stored in NB(k, 2) through NB(k, 6). Since the particle positions change continuously, the neighborhood array must be updated, taking an additional computational effort. However, depending on the maximal particle speed this update is not necessary after each single time step. In practice it proves to be sufficient to recompute NB after, say, 20–30 steps. 4.5.4.2 Initial conditions As usual in classical mechanics one needs two complete sets of initial conditions, namely particle positions and velocities at t = 0. The particles, for instance, can be ordered on a square grid with width r0 . This of course cannot correspond to an equilibrium state since next-nearest neighbors also interact (Problems). The velocities can be chosen close to thermal equilibrium according to a Maxwell–Boltzmann distribution, applying the Box–Muller method described in more detail in Section 9.1.3.3. On this behalf, one may take ui(0) = √−2T ln(1 − ξ1 ) cos(2πξ2 ) v(0) = √−2T ln(1 − ξ1 ) sin(2πξ2 ) i

(4.68)

with two independent, equally distributed random numbers ξ1 , ξ2 in [0,1). 4.5.5 Evaluation 4.5.5.1 Phases in configuration space Figure 4.14 shows three snapshots of 1600 particles, each one for a different temperature. For large T, the gaseous state is clearly distinguished. The main difference between

98 � 4 Ordinary differential equations I

Figure 4.14: Configurations of 1600 particles and different temperatures, each after t = 200.

Figure 4.15: Distance of two initially neighbored particles as a function of time.

‘liquid’ and ‘solid’ is, apart from small fluctuations, the constant particle position. In Figure 4.15 we plot the distance of two initially neighbored particles over time. Averaging over all distances one should find for the liquid state a behavior as ⟨r 2 ⟩ = D t. For the solid state, the diffusion constant D goes to zero. Taking only 1600 particles, a rather large percentage is close to phase separating walls, and surface effects may play an important role. Surface effects can be avoided by applying periodic boundary conditions in space, for which we refer to the literature, e. g., [5].

4.5 Molecular dynamics (MD)

� 99

4.5.5.2 Pair correlation function In 3D configuration space, the pair correlation function g(r) describes the probability of finding a particle at distance r from another particle in a spherical shell with volume 4πr 2 dr. In 2D space, the shell turns into a ring with the area 2πrdr. Let n(r) be the number of particles located on the ring [r, r + Δr], N

n(r) = ∑ Θ(rij − r) Θ(r + Δr − rij ), i,j=i̸

(4.69)

where Θ denotes the step function, and g(r) can be computed as: g(r) 2πrΔr =

1 n(r). N(N − 1)

(4.70)

To find n(r), one needs a histogram. To this end, the particles n(r) laying on the ring are counted and stored for several discrete values of ri = iΔr at a certain time. To avoid fluctuations, one averages (4.69) over a certain time interval (time average = ensemble average). Figure 4.16 shows the (scaled) pair correlation functions for the three different temperatures from Figure 4.14. Clearly visible is the transition from a far order (solid state) to a near order (liquid) and finally to a pure repulsion (gaseous) if temperature is increased.

Figure 4.16: Scaled pair correlation function for several temperatures, solid: T = 0.12, dash-dotted: T = 0.44, dashed: T = 1.25.

100 � 4 Ordinary differential equations I 4.5.5.3 Specific heat The specific heat of a mechanically closed system (constant volume) is given by cv =

dE dT

(4.71)

with the total energy E as (4.60). There are different ways to compute cv from MD simulations. For a canonical ensemble one finds cv =

1 Var(E) kb T 2

with the variance Var(E) = ⟨E 2 ⟩ − ⟨E⟩2 and ⟨. . . ⟩ as the ensemble or time average. However, for the computations described above, E is a conserved quantity and thus Var(E) = 0. If, on the other hand, one substitutes in (4.60) Ek = NT, (4.71) yields cv = N +

dEp dT

.

(4.72)

Ep (T) can be easily gained from an MD simulation. The dependence of Ep on T is shown in Figure 4.17. The strong increase around T ≈ 0.45 indicates a phase transition. Indeed, cv should diverge in the thermodynamic limit at a phase transition.

Figure 4.17: Internal energy Ep over T , averaged over a time interval Δt = 100. When close to a phase transition, Ep varies strongly. Simulation with N = 1600.

4.5 Molecular dynamics (MD)



101

Figure 4.18: Specific heat from (4.73). For T → 0 one expects cv = 2N (2D elastic solid), for T → ∞ cv = N (2D perfect gas). Simulation with N = 1600.

Another method to compute cv was developed by Lebowitz et al. [6]. There, one needs only the variance of the kinetic energy (4.61) and gets: cv =

N

1−

Var(Ek ) NT 2

.

(4.73)

Figure 4.18 shows cv after (4.73) from the same simulation as used for Figure 4.17. The data is rather noisy. A maximum can nevertheless be clearly recognized at T ≈ 0.45, as well as the correct asymptotic behavior of cv . For small T the equipartition theorem provides E = 2NT (two degrees of freedom from translation and another two from oscillation around rest position) and therefore cv = 2N. For a large T one expects ‘almost free’ particles (perfect gas) with cv = N.

4.5.6 Problems 1. Thermalization A method to keep the temperature of a many-particle system constant at Ts is to introduce friction terms into the equation of motion as done in (4.66): dtt2 ri⃗ = F⃗i − γ(T, Ts ) v⃗i where

(4.74)

102 � 4 Ordinary differential equations I

γ(T, Ts ) = γ0 (1 −

Ts ). T

For the interaction-free case F⃗i = 0, derive from (4.74) a first-order ODE for T(t). Solve this equation and show that T approaches Ts exponentially in time with exp(−2γ0 t). 2. 2D-equilibrium configurations An (infinitely extended) 2D-many-particle system at equilibrium forms a periodic lattice. Consider the two configurations: square and hexagonal (Figure 4.19). Let the two-particle interaction be given by a Lennard-Jones potential as in (4.59). Compute the equilibrium distances rS and rH and consider (i) Only nearest neighbors (blue), (ii) Also next nearest neighbors (red), (iii) Also next-next nearest neighbors (green). Which of the two configurations is the more stable one?

Figure 4.19: Two possible periodic configurations in 2D.

4.6 Chaos As it already discussed in Chapter 3 but in even more pronounced examples in the two previous sections of the present chapter, chaotic behavior is more the rule than the exception if the phase space possesses enough dimensions. Chaotic trajectories may appear both in conservative (Section 3.3, 4.4.3, 4.4.4) and in dissipative systems (Section 3.2.7, 4.5). For Hamiltonian systems, initial conditions normally play a decisive role. For closely neighbored initial values, qualitatively different behaviors very often emerge, such as periodic, quasiperiodic, or chaotic; see for instance Figure 4.13. For

4.6 Chaos

� 103

a dissipative system, typically the initial conditions play a minor role and are ‘forgotten’ after a certain, by friction effects defined, relaxation time (there are exceptions: a trajectory starting, e. g., close to a stable fixed point or limit cycle, will stay close and stationary or periodic behavior will persist. However, this does not exclude more involved or even chaotic behavior for trajectories starting in another region of state space).

4.6.1 Harmonically driven pendulum We continue working with low-dimensional systems (N > 2) having the form dyi = fi (y1 , y2 , . . . , yN ), dt

yi (0) = ai ,

i = 1 . . . N,

(4.75)

where friction is included (dissipative systems, no energy conservation). Let us begin with the harmonically driven nonlinear mathematical pendulum in 2D phase space φ̈ + αφ̇ + Ω20 sin φ = A cos ωt.

(4.76)

This is a nonautonomous problem with explicit time dependence. It can be transformed to an autonomous one by introducing an additional dependent variable y1 = φ, y2 = φ,̇ y3 = ωt having now the form (4.75) and N = 3: ẏ1 = y2

ẏ2 = −α y2 − Ω20 sin y1 + A cos y3

(4.77)

ẏ3 = ω.

The corresponding RK4-subroutine pendulum_deq reads: SUBROUTINE pendulum_deq(rhside,y,t) REAL, DIMENSION(3) :: rhside,y COMMON /PARAM/ alpha,omega0,a,omega rhside(1)=y(2) rhside(2)=-alpha*y(2)-omega0**2*SIN(y(1))+a*COS(y(3)) rhside(3)=omega END Clearly the last equation in (4.77) can be immediately integrated and one may just as well (and with less effort) work with the nonautonomous original system: rhside(1)=y(2) rhside(2)=-alpha*y(2)-omega0**2*SIN(y(1))+a*COS(omega*t)

104 � 4 Ordinary differential equations I

Figure 4.20: Chaos for the driven pendulum. Left: phase space, y2 over y1 ; right: Poincaré section, intersection points with the planes y3 = 3π/2 + 2nπ. A = 1, ω = 0.8, Ω0 = 1, α = 0.1.

Figure 4.21: Near a separatrix, tiny effects due to the driving force may turn into qualitatively different behavior. Solid: without external driving; dashed: possible trajectories with external force.

However, the following considerations are valid only for autonomous systems and we shall resort to (4.77). In the left frame of Figure 4.20 a chaotic trajectory is shown in phase space (y1 , y2 ). The existence of a separatrix of the nondriven system is an important though not necessary ingredient for the emergence of chaos. If the trajectory comes close to the separatrix, a suitable phase and amplitude of the driving force can push the phase point to the other side of the separatrix, and the motion then changes qualitatively, here from oscillation to rotation (Figure 4.21).

4.6 Chaos

� 105

4.6.2 Poincaré section and bifurcation diagrams To visualize three-dimensional trajectories, certain projections are needed. In this way the left frame of Figure 4.20 shows a projection onto the y1 -y2 -plane. Another possibility is to prepare a stroboscopic snapshot, for instance synchronized with the driving frequency ω in the case of the pendulum. As already discussed in Section 3.3.3, a Poincaré section results. In the right frame of Figure 4.20 the y1 -y2 -values are plotted when the driving force undergoes a zero-crossing with a positive slope, that is for ωt = 3π/2 + 2nπ. If the Poincaré section for t → ∞ consists of a finite number of separated points, the trajectory is periodic; a dense line is assigned to quasiperiodicity (cut through a torus) and filled areas with fractal gaps can be interpreted as chaotic behavior. A bifurcation diagram is obtained if some control parameter is changed and the value for a dependent variable is plotted at certain consecutive times over this parameter. Figure 4.22 (bottom) shows such a diagram where the value of y1 has been plotted over the strength of the driving force, again for fixed times ωt = 3π/2 + 2nπ.

Figure 4.22: Top: the two nontrivial Lyapunov exponents for the driven pendulum with friction. Parameters same as in Figure 4.20. Bottom: bifurcation diagram for the driven pendulum, ordinate: |y1 | at ωt = 3π/2 + 2nπ, abscissa: A.

4.6.3 Lyapunov exponents Let y⃗ (0) (t) be a numerically determined solution of (4.75). We ask for its stability by performing a linear stability analysis: ⃗ = y⃗ (0) (t) + u(t). ⃗ y(t)

106 � 4 Ordinary differential equations I Contrary to the linearization around a fixed point, the Jacobi matrix depends now on time ⃗ d u(t) ⃗ = L(t) u(t) dt

(4.78)

with Lij (t) =

𝜕fi 󵄨󵄨󵄨󵄨 . 󵄨 𝜕yj 󵄨󵄨󵄨y⃗(0) (t)

Due to the linear character of (4.78) one may assume that for large times, exponential behavior emerges 󵄨󵄨 ⃗ 󵄨󵄨 σt 󵄨󵄨u(t)󵄨󵄨 ∼ h(t)e ,

t→∞

with a bounded h(t). Then the sign of σ σ = lim

t→∞

1 󵄨󵄨 ⃗ 󵄨󵄨󵄨󵄨 ln󵄨u(t) t 󵄨

(4.79)

classifies the stability of y⃗ (0) . For σ > 0 arbitrarily small deviations will grow exponentially over the course of time, the trajectory is unstable, and chaotic behavior is encountered. The σ defined as (4.79) is called the largest Lyapunov exponent. How can we compute σ? One nearby possibility is to integrate (4.78) numerically and evaluate (4.79) for large t. However, due to the exponential increase of |u|⃗ for a positive σ this is not practical since a numerical overflow would soon occur. 4.6.3.1 Determination of the largest Lyapunov exponent To construct a method that works we begin with the time-evolution operator Q. We may ⃗ as then formally write u(t) ⃗ = Q(t, 0) u(0). ⃗ u(t)

(4.80)

⃗ ⃗ As a matter of fact, u(t), and therefore σ, depends on u(0). Thus one finds as many different Lyapunov exponents as the number of possible linearly independent initial conditions, namely N, the dimension of the state space. Often it is sufficient to know the largest σ because it distinguishes between chaotic and regular dynamics. Since Q(t, t0 ) = Q(t, t1 ) Q(t1 , t0 ),

t > t1 > t0

we may write (4.80) as a product ⃗ = Q(t, t − ΔT) Q(t − ΔT, t − 2ΔT) . . . Q(ΔT, 0) u(0). ⃗ u(t)

4.6 Chaos

� 107

Introducing the abbreviations Q ≡ Q(kΔT, (k − 1)ΔT), k

⃗ u⃗ k ≡ u(kΔT),

k = 0, 1, 2 . . .

we obtain u⃗ k = Q Q k

k−1

. . . Q u⃗ 0

(4.81)

1

where k = t/ΔT. Taking a small enough ΔT avoids an overflow at a single step u⃗ k = Q u⃗ k−1 . k

After each step one may normalize according to û k =

u⃗ k , dk

dk = |u⃗ k |

and obtains u⃗ k = d0 Q Q k

k−1

... Q Q û = d0 d1 Q Q ... Q û = d0 d1 . . . dk û k . ⏟⏟⏟⏟⏟⏟⏟⏟⏟ 2 ⏟⏟⏟⏟⏟⏟⏟⏟⏟ 1 0 k k−1 2 1 =û 1 d1

=û 2 d2

From here one receives k

|u⃗ k | = ∏ dℓ ℓ=0

and finds by substitution in (4.79) k 1 1 k ln(∏ dℓ ) = lim ∑ ln dℓ , k→∞ kΔT k→∞ kΔT ℓ=0 ℓ=0

σ = lim

(4.82)

the Lyapunov exponent to the initial condition û 0 . The following algorithm illustrated in Figure 4.23 has to be implemented: 1. After a certain relaxation time Tv compute the reference trajectory from (4.75). 2. Take an arbitrary û 0 , d0 = 1 at time t = Tv , and set ℓ = 1. 3. Integrate (4.75), and in the meantime (4.78), over the interval ΔT. ⃗ + ℓΔT)|. Normalize u⃗ to û ℓ and store dℓ . 4. Determine dℓ = |u(t 5. ℓ := ℓ + 1, t = t + ΔT. 6. If ℓ < k go to 3. 7. Compute σ according to (4.82). If we denote ln dℓ as a local Lyapunov exponent, (4.82) is the average value of these local exponents.

108 � 4 Ordinary differential equations I

Figure 4.23: Numerical computation of the largest Lyapunov exponent. After a constant time interval the absolute value of the deviation dk is determined. Then the deviation is normalized to one without changing its direction. Blue: reference trajectory (nonlinear system), red: solution of the linearized system.

The limit k → ∞ has to be substituted by a large k. But what does large mean? We define the Lyapunov exponent computed after k iterations: σk =

1 k ∑ ln dℓ kΔT ℓ=0

(4.83)

and find for the next iteration σk+1 =

k 1 k ( ∑ ln dℓ + ln dk+1 ) = σ + Δσ (k + 1)ΔT ℓ=0 k+1 k

and with k ≫ 1 Δσ = σk+1 − σk ≈

1 ln dk+1 . kΔT

Obviously, Δσ ∼ 1/k converges with k → ∞. The sum in (4.82) can be truncated if the error |Δσ| reaches a fixed lower bound. Alternatively the relative error 󵄨󵄨 Δσ 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 < ϵrel 󵄨󵄨 σk 󵄨󵄨 can be used as a truncation criterion. An N-dimensional system possesses N Lyapunov exponents, also called Lyapunov spectrum. The above described method provides the largest one. This is easy to under⃗ for t → ∞ in the base v̂k , built by the N differstand if we decompose the deviation u(t) ent û 0 : ⃗ = c1 v̂1 eσ1 t + ⋅ ⋅ ⋅ + cN v̂N eσN t , u(t)

4.6 Chaos

� 109

where σk denotes now the ordered Lyapunov spectrum according to σ1 ≥ σ2 ⋅ ⋅ ⋅ ≥ σN . ⃗ The constants ck are determined by the initial value u(0). As time goes on u⃗ will orient parallel to v̂1 , independently from the initial conditions. However, if c1 would exactly vanish one would expect u⃗ parallel to v̂2 , etc. In this way one could at least in principle obtain the complete spectrum, what, however, cannot work in practice. Numerically there is always a tiny component in the direction v⃗1 that increases exponentially fast and dominates the deviations after a certain time ∼ 1/σ1 . We shall see below how the complete spectrum can be found anyway. But first let us prove an important theorem. 4.6.3.2 Theorem: one Lyapunov exponent vanishes for all trajectories that do not terminate at a fixed point We further assume bounded systems with 󵄨󵄨 ⃗ 󵄨󵄨 󵄨󵄨y(t)󵄨󵄨 ≤ D1 ,

󵄨󵄨 ̇⃗ 󵄨󵄨 󵄨󵄨 ⃗ ⃗ 󵄨󵄨 󵄨󵄨y(t)󵄨󵄨 = 󵄨󵄨f (y(t))󵄨󵄨 ≤ D2 ,

Di > 0

and show the simple proof of the theorem. Differentiating (4.75) yields N

ÿi = ∑ j

N 𝜕fi ẏj = ∑ Lij ẏj , 𝜕yj j

from where it follows that with u⃗ = ẏ⃗ the time derivative of each solution of (4.75) also provides a solution of the linear system (4.78). In particular this is valid for the reference trajectory y⃗ (0) (t) itself, i. e., the deviation points always tangential to the trajectory (Figure 4.24). This special (marginal) Lyapunov exponent holds with (4.79) as σm = lim

t→∞

1 󵄨󵄨 ̇ (0) 󵄨󵄨 1 ln󵄨y⃗ (t)󵄨󵄨 ≤ lim ln |D2 | = 0, t→∞ t t 󵄨

and therefore σm ≤ 0. Assuming on the other hand σm < 0, due to 󵄨󵄨 ̇⃗ (0) 󵄨󵄨 σ t 󵄨󵄨y (t)󵄨󵄨 ∼ e m ,

t→∞

one arrives at 󵄨󵄨 ̇⃗ (0) 󵄨󵄨 󵄨󵄨y (t)󵄨󵄨 = 0,

t→∞

(4.84)

110 � 4 Ordinary differential equations I

Figure 4.24: A deviation pointing in the direction of the reference trajectory has a vanishing Lyapunov exponent if the trajectory does not terminate at a fixed point.

and the trajectory would end at a fixed point. Therefore, if the trajectory does not terminate at a fixed point as assumed, the only possibility remaining is σm = 0. 4.6.3.3 Higher order Lyapunov exponents How can we compute the complete Lyapunov spectrum? To this end we introduce the Lyapunov exponents of the order p; the quantity defined with (4.79) is then the Lyapunov exponent of the first order corresponding to the average rate of contraction of a vector (one-dimensional). Accordingly, the exponent of the order p stands for the mean rate of contraction of the volume of a p-dimensional parallelepiped: σ (p) = lim

t→∞

1 󵄨󵄨 󵄨 ln󵄨V (t)󵄨󵄨. t 󵄨 p 󵄨

(4.85)

Considering a parallelepiped moving along trajectories and spanned by the base v⃗k , one can show that σ (p) = σ1 + σ2 + . . . + σp holds. If all N orders of the Lyapunov exponent are known, we can compute the exponents of the first order according to σ1 = σ (1) (4.86)

σ2 = σ (2) − σ (1) σN = σ

(N)

−σ

(N−1)

.

4.6.3.4 Classification If all first-order exponents are known, the reference trajectory y⃗ (0) (t) can be classified (Table 4.1). In particular, a positive Lyapunov exponent denotes the divergence of the trajectories. If, however, the dynamics is bounded on a finite region in state space, contractions must also be present and at least one of the Lyapunov exponents must be negative.

4.6 Chaos

� 111

Table 4.1: Classification of trajectories of an autonomous, dissipative system in three-dimensional state space with the help of the signs of their three Lyapunov exponents. σ�

σ�

σ�

− 0 0 +

− − 0 0

− − − −

trajectory terminates at a fixed point stable limit cycle (periodic dynamics) stable torus (quasiperiodic dynamics) strange attractor (chaotic dynamics)

An N-dimensional volume element VN (t) whose corners move along trajectories in state space is contracted in the temporal average if the system is dissipative; see Section 3.2.3. For the sake of simplicity let us first assume div f ⃗ = c with c < 0 and c as constant. From (3.23) follows VN (t) = VN (0) ect and with (4.85), N

σ (N) = ∑ σk = c = div f ⃗ = Tr L. k=1

(4.87)

The sum over all Lyapunov exponents corresponds to the divergence of f (or the trace of the Jacobi matrix L) and is negative for a dissipative system. Hence it follows also that at least one Lyapunov exponent must be less than zero. Thus, for a three-dimensional state space all possible combinations are listed in Table 4.1. But in general, div f ⃗ is not a constant. Then one can still compute the average and σ

(N)

t

N

1 = ∑ σk = lim ∫ dt div f ⃗ t→∞ t k=1

(4.88)

0

holds, an expression being also negative for dissipative systems. For the example of the damped pendulum one obtains 3

σ (3) = ∑ σk = −α. k=1

For Hamiltonian systems the equations of motions have canonical form (see Section 3.3) and div f ⃗ = 0 results. The sum over all Lyapunov exponents vanishes. 4.6.3.5 Numerical determination of all Lyapunov exponents Next we present a method for the computation of the complete spectrum σn and follow mostly [7]. Starting with a p-dimensional parallelepiped, due to the exponentially differ-

112 � 4 Ordinary differential equations I ent growth rates of the different directions, all the spanning vectors will point soon in more or less one direction, namely in that assigned to the largest Lyapunov exponent. This can be avoided if the spanning vectors are orthogonalized repeatedly after certain time intervals, applying a Gram–Schmidt procedure for instance. We demonstrate the algorithm for the three-dimensional state space, N = 3: 1. Select three orthonormal unit vectors û 1 , û 2 , û 3 , (û i ⋅ û j ) = δij , t = 0, ℓ = 0. 2. Integrate (4.78) from t through t + ΔT: w⃗ i = Q(t + ΔT, t) û i 3.

(p)

Compute the volumes Vℓ , which are spanned by w⃗ 1 . . . w⃗ p : (p)

Vℓ

= √|det V |

with the help of the p × p matrix N

Vij = ∑ wn(i) wn(j) n

4.

and wk(i) as the kth component of the vector w⃗ i . Using a Schmidt–Gram method, determine the next orthogonal set û i in such a way that the first p vectors û i span the subspace defined by w⃗ 1 till w⃗ p : û 1 =

w⃗ 1 , |w⃗ 1 |

u⃗ 2 = w⃗ 2 − c12 û 1 ,

û 2 =

u⃗ 3 = w⃗ 3 − c13 û 1 − c23 û 2 , 5. 6. 7.

u⃗ 2 , |u⃗ 2 |

c12 = û 1 ⋅ w⃗ 2 ,

û 3 =

u⃗ 3 , |u⃗ 3 |

c13 = û 1 ⋅ w⃗ 3 ,

c23 = û 2 ⋅ w⃗ 3

t := t + ΔT, ℓ := ℓ + 1 If ℓ < k go to 2. After sufficiently many steps (large k) the Lyapunov exponent of the order p can be computed: σ (p) =

1 k (p) ∑ ln Vℓ kΔT ℓ=1

and from there according to (4.86) finally the whole spectrum σ1 . . . σN . The subroutine dlyap_exp implementing the steps explained above is described in the Appendix B.

4.6 Chaos

� 113

4.6.3.6 Example: the driven pendulum We demonstrate the method for the driven pendulum (4.77). The Jacobi matrix in (4.78) reads 0 L(t) = (−Ω20 cos y(0) 1 (t) 0

1 −α 0

0 ). −A sin y(0) 3 (t) 0

(4.89)

Due to the last equation (4.77) there are no fixed points and at least one Lyapunov exponent must vanish. Because of the simple structure of (4.89) we can make some analytic conclusions. From the third line it follows immediately u̇ 3 = 0,

u3 = const.

(0) For small A we may assume self-consistently |y(0) 1 |, |y2 | ∼ A and linearize

cos y(0) 1 ≈ 1.

(4.90)

In particular for u3 = 0 the two equations for u1 , u2 are equivalent to the damped pendulum having the solutions α 󵄨󵄨 󵄨 󵄨󵄨(u1 , u2 )󵄨󵄨󵄨 ∼ hi (t) exp(− ), 2

i = 1, 2

where hi (t) is an oscillating bounded function. With (4.79) one finds the two Lyapunov exponents σ2 = σ3 = −α/2. Putting u3 = 1, the two equations for u1 , u2 correspond to the driven harmonic oscillator. The long-time behavior turns out to be y1 ∼ sin(ωt + β), providing σ1 = 0. Therefore, for small A the solution y⃗(0) (t) is a stable limit cycle (Table 4.1). For larger A the linearization (4.90) is invalid, the limit cycle becomes unstable, and eventually chaotic trajectories emerge. Figure 4.25 shows numerical results for smaller A, while Figure 4.22 shows those for larger values. In the code published on the website, instead of the original three-dimensional autonomous system (4.77), the equivalent two-dimensional nonautonomous system ẏ1 = y2

ẏ2 = −α y2 − Ω20 sin y1 + A cos ωt

(4.91)

114 � 4 Ordinary differential equations I

Figure 4.25: Lyapunov exponents and bifurcation diagram for the driven pendulum with friction; parameters as in Figure 4.20.

is used. The Jacobi matrix then reduces to L(t) = (

0

−Ω20 cos y(0) 1 (t)

1 ) −α

(4.92)

and only two Lyapunov exponents exist. The one vanishing for the 3D-system does not occur (the theorem Section 4.6.3.2 is only valid for autonomous systems). 4.6.3.7 Lyapunov time For dynamical systems originating from physical problems, the initial conditions are only known at a finite accuracy Δϵ(0). However, if the dynamics is chaotic, small errors may exponentially grow in time and the largest Lyapunov exponent σ1 constitutes a measure for their growth rate (dimension 1/time). If σ1 is known, a time t ∗ can be estimated after the initial error has exceeded a certain size L, typically the circumference of the attractor in state space. Then the initial conditions will no longer play any role and the albeit fully deterministic theory (the differential equations (4.75)) does not allow for any forecasting for times beyond t ∗ . With L = Δϵ(t ∗ ) = Δϵ(0) eσ1 t one finds



4.6 Chaos

t∗ =

� 115

1 L ln( ) σ1 Δϵ(0)

for the so-called Lyapunov time. Depending on atmospheric conditions, for the weather forecast, t ∗ lies in the region between only a few hours up to a maximum of two weeks. Studying the Centennial Calendar or similar products is therefore proven useless. 4.6.4 Fractal dimension Initiated by the Euclidean notion of dimension and analogue to the concept already discussed in Section 2.4.1, we may assign a dimension to an object in state space. A fixed point has the dimension d = 0, a limit cycle d = 1, a torus d = 2, etc. As it follows from the No-Intersection theorem, a chaotic attractor cannot be completely fit into a plane and therefore must have a dimension larger than 2. On the other hand it normally does not fill the whole state space leading for N = 3 to a fractal dimension 2 < d < 3. How can we compute d? 4.6.4.1 Capacity dimension From the right frame of Figure 4.20 it seems possible to determine the dimension with the box-counting method introduced in Section 2.4.1. Taking the Poincaré section it is important to note that the actual dimension of the attractor is larger by one (a limit cycle corresponds to a finite number of points on the Poincaré section, which has the dimension of zero). It is of course also possible to compute d in the N-dimensional state space by covering the complete space with hypercubes of dimension N and edge length L. As in Section 2.4.1, one determines d by counting the cubes visited by the attractor as a function of L: dK = −

log M . log L

This magnitude is called capacity dimension. To demonstrate the method we compute dK of the driven pendulum for two different values of A, one in the periodic, the other in the chaotic regime. The threedimensional phase space in the region −π ≤ y1 ≤ π,

−3.5 ≤ y2 ≤ 3.5,

0 ≤ y3 ≤ 2π

is thereby covered with n3 cubes, where n runs through the powers of two n = 2, 4, 8, 16 . . . 512. For each n, M is determined and plotted as a function of the edge

116 � 4 Ordinary differential equations I

Figure 4.26: Fractal dimension (slope) found with the box-counting method, called capacity dimension. Driven pendulum, Ω0 = 1, α = 0.1, ω = 0.8, A = 0.4 (dots), A = 1.0 (squares). For A = 0.4 one obtains dK ≈ 1.04; for A = 1.0 a dimension dK ≈ 2.01.

length L in Figure 4.26. It turns out that in the chaotic regime the dimension is only weakly larger than two. Hence, based on the fractal dimension alone one may hardly distinguish between a chaotic attractor and a quasiperiodic motion. Finally, Figure 4.27 depicts dK in a wide region of A. But also here, dK only slightly exceeds values of two whenever chaos emerges.

Figure 4.27: Fractal dimension of the driven pendulum in the domain 0.4 ≤ A ≤ 1.4, parameters as in Figure 4.20. The regions where dK ≈ 2 agree well with the chaotic regions of Figure 4.22.

4.6 Chaos

� 117

4.6.4.2 Correlation dimension Let P be the number of given data points in an N-dimensional state space y⃗ i = (yi1 , yi2 , . . . , yiN ),

i = 1 . . . P.

The set can be the result of a numerical solution of an ODE but could also originate from some series of measurements, e. g., temperature depending on time at N different places, etc. The correlation dimension is computed by determining for each point the number of neighbor points within the distance ≤ R: 󵄨 󵄨 C(R) = number of pairs with 󵄨󵄨󵄨y⃗i − y⃗j 󵄨󵄨󵄨 ≤ R

for all i ≠ j.

This can be formulated by the help of the step-function Θ: P

P

󵄨 󵄨 C(R) = ∑ ∑ Θ(R − 󵄨󵄨󵄨y⃗i − y⃗j 󵄨󵄨󵄨). i=1 j=i+1

If we have enough points P we should find C(R) ∼ Rd with d the dimension of the object where the data points are located. This can be illustrated for N = 3: if the points fill the space uniformly their number is proportional to the volume of the sphere with radius R resulting in d = 3. If the points lie on a plane one finds C(R) ∼ R2 and d = 2 and for a line d = 1. Taking a small enough R, this is the definition of the correlation dimension: ln C(R) . R→0 ln R

dC = lim

(4.93)

One obtains dC by plotting C over R in a log-log scale as the slope (Figure 4.28). However, the slope is not constant and as already discussed in Section 2.4.1, the question is where to evaluate it. For R too small the curve flattens because too few points lay inside the sphere to do a statistical evaluation. For too large R the slope also decreases since the attractor has a finite extension (for the pendulum of the order 2π).

4.6.5 Reconstruction of attractors One may come to the position to analyze a one-dimensional collection of data, a time series of a certain magnitude Y (t): Y0 , Y1 , Y2 , . . . , YP−1 ,

Yn = Y (nΔt).

118 � 4 Ordinary differential equations I

Figure 4.28: Correlation dimension (slope) for the driven pendulum, Ω0 = 1, α = 0.1, ω = 0.8. Chaotic attractor at A = 1, dC ≈ 2.12.

But even without knowing the dimension N of the state space where the system that produced the series lives, a fractal dimension can be determined. However, for a complex system, N can be quite large. If the data points are completely uncorrelated (e. g., the results of a coin toss), the series cannot be embedded at all in a finite-dimensional state space and one expects N → ∞. To reconstruct an attractor, we first unfold the series into a space with some given dimension N ′ , called embedding dimension. To this end we choose a certain time interval ΔT, the delay rate, which is normally much larger than Δt. Let K = ΔT/Δt ≫ 1 be an integer. One constructs a sequence of N ′ -dimensional vectors by y1 (tk ) = Yk

y2 (tk ) = Yk+K

y3 (tk ) = Yk+2K .. .

yN ′ (tk ) = Yk+(N ′ −1)K and finds for tk = kΔt,

k = 0 . . . kmax ,

kmax = P − 1 − (N ′ − 1)K

all P − (N ′ − 1)K vectors of dimension N ′ that can be used to determine the correlation dimension as described above.

4.7 ODEs with periodic coefficients

� 119

How large should we take N ′ ? No intersections of the trajectory should be certainly assured. F. Takens4 was able to prove in 1981 the following theorem [8]: If a deterministic system generates an N-dimensional flow y:⃗ d y⃗ ⃗ = f ⃗(y), dt then x1 = yi (t),

x2 = yi (t + ΔT),

...

x2N+1 = yi (t + 2NΔT)

represents a continuously differentiable embedding, where yi can be any component i = 1 . . . N of y.⃗ Thus the dimension of an attractor is conserved if N ′ ≥ 2N + 1. In practice one has to increase N ′ step by step until the fractal dimension computed for consecutive N ′ converges.

4.7 Differential equations with periodic coefficients

4.7.1 Floquet theorem

The Floquet⁵ theorem is equivalent to Bloch's theorem in solid-state physics and applies to explicitly time-dependent linear ODEs of the form (4.78), but with a time-periodic system matrix L(t):

L(t) = L(t + T).

As already done in (4.80), we may introduce a time-evolution operator C,

u⃗(T) = C(T) u⃗(0),   (4.94)

now evolving u⃗ over one period T. This special operator is also called the monodromy matrix. Let w⃗k be the eigenvectors of C,

C w⃗k = σk(T) w⃗k.   (4.95)

Applying C twice yields

C C w⃗k = σk(T) σk(T) w⃗k = σk(2T) w⃗k.

But since

(σk(T))^n = σk(nT),

one immediately finds

σk = exp(λk T).   (4.96)

4 Floris Takens, Dutch mathematician, 1940–2010.
5 Gaston Floquet, French mathematician, 1847–1920.

The σk are named Floquet multipliers, while the λk are the Floquet exponents. Next we assume that the w⃗k form a complete basis of the N-dimensional state space. Thus we may expand

u⃗(t) = ∑_k^N ak(t) w⃗k e^{λk t}.   (4.97)

Applying C yields

C u⃗(t) = ∑_k^N ak(t) C w⃗k e^{λk t} = ∑_k^N ak(t) σk w⃗k e^{λk t} = ∑_k^N ak(t) w⃗k e^{λk(t+T)} = u⃗(t + T),   (4.98)

where the last equality expresses that C advances u⃗ by one period.

From (4.97) it follows that

u⃗(t + T) = ∑_k^N ak(t + T) w⃗k e^{λk(t+T)},

and finally, comparing with (4.98),

ak(t) = ak(t + T).

Hence the expansion coefficients in (4.97) are periodic in t with period T. We have thus derived the Floquet theorem, which can be summarized as follows: The solution of

du⃗(t)/dt = L(t) u⃗(t)  with  L(t) = L(t + T)

has the form

u⃗(t) = ∑_k^N q⃗k(t) exp(λk t),   (4.99)


where the q⃗k are periodic functions, q⃗k(t) = q⃗k(t + T). The Floquet exponents λk can be computed from the eigenvalues σk of the monodromy matrix C(T) with (4.96):

λk = (1/T) ln σk = (1/T) ln(|σk| e^{iαk}) = (1/T) ln |σk| + iαk/T.   (4.100)

4.7.2 Stability of limit cycles

We study the stability of a periodic solution of (4.75):

y⃗^(0)(t) = y⃗^(0)(t + T).   (4.101)

Inserting

y⃗(t) = y⃗^(0)(t) + u⃗(t)

and linearizing (4.75) with respect to u⃗ leads to a problem of the form (4.99) for the small deviations u⃗(t). If one of the Floquet exponents has a positive real part, the deviations grow exponentially and the limit cycle (4.101) is unstable. The condition for stability thus reads

|σk| ≤ 1  for all k.

4.7.3 Parametric instability: pendulum with an oscillating support

As an application we study the excited pendulum sketched in Figure 4.29. The pivot oscillates as A sin ωt in the vertical direction, leading to an apparent force that can be included in (4.4) by the substitution

g → g(1 + a sin ωt),  a = Aω²/g.

The system (4.5) for the undamped case (α = 0) then reads

ẏ1 = y2
ẏ2 = −Ω0² (1 + a sin y3) sin y1   (4.102)
ẏ3 = ω.


Figure 4.29: Pendulum driven by a vertically oscillating support.

Contrary to (4.77), the time dependence is now multiplicative and the two fixed points of the undriven case persist in the (y1, y2)-plane. The lower rest position is simply given by

y1^(0) = y2^(0) = 0,  y3^(0) = ωt.

Linearizing around the lower rest position leads to

u̇1 = u2
u̇2 = −Ω0² (1 + a sin ωt) u1   (4.103)
u̇3 = 0.

Since u3 decouples from u1, u2, it is sufficient to consider only the 2D problem

u̇⃗ = L u⃗   (4.104)

with u⃗ = (u1, u2) and

L(t) = (  0                     1 )
       ( −Ω0²(1 + a sin ωt)     0 ).   (4.105)

The system (4.104) has the form (4.99) with T = 2π/ω.

To determine the Floquet exponents we must first compute the monodromy matrix. To this end we take the two orthogonal initial conditions

u⃗1(0) = (1, 0),  u⃗2(0) = (0, 1)


and numerically integrate (4.104) with (4.105) for each u⃗i until t = T. In this way we find u⃗i(T). On the other hand, due to (4.94) one obtains

(u⃗1(T), u⃗2(T)) = C ⋅ (u⃗1(0), u⃗2(0)) = C,   (4.106)

since the matrix formed by the initial vectors is the unit matrix,

and the vectors u⃗i(T) form the columns of C. Now we can compute the eigenvalues σ1,2 according to

σ1,2 = (1/2) Tr C ± (1/2) √((Tr C)² − 4 Det C),   (4.107)

and from there, with (4.100), also the Floquet exponents. The relation (4.88) or (4.87) also holds for the sum of all Floquet exponents, and from Tr L = 0 it follows that

λ1 + λ2 = 0.   (4.108)

There are two possibilities for the roots of (4.107):
1.  Both σk are real valued and larger than zero. Then the two λk are also real and, due to (4.108), one of them is larger than zero. The fixed point y⃗^(0) is unstable (saddle node).
2.  The σk as well as the λk form a complex-conjugate pair. Then, because of (4.108), λ1 = −λ2 = −λ1* and

    λ1,2 = ±iα,  α ∈ ℝ.

    The fixed point y⃗^(0) is stable (center).

The loci in parameter space (a, ω) where two real-valued solutions of (4.107) turn into a complex-conjugate pair are found from

(Tr C)² − 4 Det C = 0   (4.109)

and separate the stable from the unstable regions.

4.7.4 Mathieu equation

The two first-order ODEs (4.104) with (4.105) are equivalent to one second-order ODE, the so-called Mathieu⁶ equation

ü + Ω0² (1 + a sin ωt) u = 0,   (4.110)

where u = u1.

6 Émile Mathieu, French mathematician, 1835–1890.

In the literature, one often finds the form

ü + (p + 2b sin 2t̃) u = 0,   (4.111)

which can be obtained from (4.110) by the scaling

t̃ = (ω/2) t

and

p = 4Ω0²/ω²,  b = 2Ω0² a/ω².   (4.112)

As described above, we determine the stability regions of the lower fixed point by computing the monodromy matrix for given parameters p, b: C = C(p, b). Then the zero lines of the function

f(p, b) = (Tr C(p, b))² − 4 Det C(p, b)

are plotted in the parameter plane; Figure 4.30, left frame, shows the result. At the resonances

p = n²,  n = 1, 2, . . . ,   (4.113)

Figure 4.30: Left: stability chart for the Mathieu equation (4.111). Black: λ1 = λ2 = 0, stability limits, red: λk real valued, unstable; blue: λk = ±iα, stable. Right: blow-up of the dashed area for negative p: the upper rest position is stable in the small wedge between the black lines.


arbitrarily small amplitudes b are sufficient to destabilize the pendulum, and the oscillations grow exponentially. Inserting p = n² into (4.112), one finds for the resonances the ratio

Ω0/ω = n/2

between the eigenfrequency of the pendulum and the driving frequency. Interestingly enough, one obtains even for negative p a tiny region with imaginary Floquet exponents; see Figure 4.30, right frame. Negative p corresponds to a linearization around the unstable, upper fixed point of the pendulum, y1 = π. If b is chosen appropriately, the pendulum can thus be kept stable in its top position.
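How such a chart can be computed is sketched below in MATLAB; grid ranges, resolution, and tolerances are free choices, and ode45 merely stands in for any integrator of Section 4.3:

pv = linspace(0,10,100); bv = linspace(0,5,80);   % stability chart of (4.111)
f = zeros(length(bv),length(pv));
for i = 1:length(bv)
  for j = 1:length(pv)
    rhs = @(t,u) [u(2); -(pv(j)+2*bv(i)*sin(2*t))*u(1)];
    [~,u1] = ode45(rhs,[0 pi],[1 0]);   % period of sin(2t) is T = pi
    [~,u2] = ode45(rhs,[0 pi],[0 1]);
    C = [u1(end,:)' u2(end,:)'];        % monodromy matrix, columns u_i(T)
    f(i,j) = trace(C)^2 - 4*det(C);
  end
end
contour(pv,bv,f,[0 0],'k')              % zero lines = stability limits
xlabel('p'); ylabel('b')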

4.7.5 Problems

1.  Linearize (4.102) around the upper, unstable rest position

    y1^(0) = π,  y2^(0) = 0,  y3^(0) = ωt.

    Derive an equation of the form (4.111) for the deviations y1 = π + u(t). What is the difference from (4.111)? Show with the ansatz u(t) = u0 cos(t + α) e^{λt} that the pendulum may be stabilized in the upper position. Neglect higher harmonics and use the approximation

    sin 2t cos(t + α) = (1/2)(sin(t − α) + sin(3t + α)) ≈ (1/2) sin(t − α).

    Which condition must λ fulfill to stabilize the upper rest position?

2.  Examine the damped Mathieu equation

    ü + α u̇ + Ω0² (1 + a sin ωt) u = 0

    numerically. Plot the stability regions in the p-b-plane for given α > 0. What changes qualitatively compared to Figure 4.30?

Bibliography

[1] J. Laskar, Large-scale chaos in the solar system, Astron. Astrophys. 287, L9 (1994).
[2] J. Laskar and M. Gastineau, Existence of collisional trajectories of Mercury, Mars and Venus with the Earth, Nature 459, 7248 (2009).
[3] FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html.
[4] A. Rahman, Correlations in the Motion of Atoms in Liquid Argon, Phys. Rev. 136, A405 (1964).
[5] J. M. Thijssen, Computational Physics, Cambridge Univ. Press (2007).
[6] J. L. Lebowitz, J. K. Percus and L. Verlet, Ensemble Dependence of Fluctuations with Application to Machine Computations, Phys. Rev. 153, 250 (1967).
[7] J. Argyris, G. Faust, M. Haase and R. Friedrich, An Exploration of Dynamical Systems and Chaos, Springer (2015).
[8] F. Takens, Dynamical Systems and Turbulence, Lect. Notes Math. Vol. 898 (1981).

5 Ordinary differential equations II, boundary value problems

5.1 Preliminary remarks

5.1.1 Boundary conditions

We continue considering systems of N ordinary differential equations of the first order,

dy⃗(x)/dx = f⃗(y⃗(x), x).   (5.1)

Contrary to initial value problems, for boundary value problems one prescribes conditions at two (or more) different points, say x = a, b:

A y⃗(a) + B y⃗(b) = c⃗.   (5.2)

Normally, a and b coincide with the boundaries of the x-domain and one seeks solutions y⃗(x) in the region a ≤ x ≤ b. The boundary conditions written in the form (5.2) depend linearly on y⃗. However, nonlinear boundary conditions may also occur, having the more general form

gi(y⃗(a), y⃗(b)) = 0,  i = 1 . . . N,

where the functions gi depend on 2N variables. In practice, boundary conditions are often separable:

A1 y⃗(a) = c⃗1,  B1 y⃗(b) = c⃗2.   (5.3)

In order not to have an over- or under-determined problem, N linearly independent conditions must appear in (5.3). The initial value problem (4.2) from Chapter 4 is included in the form (5.3) with

A1 = 1,  B1 = 0,  c⃗1 = (r⃗i^(0), v⃗i^(0)).

We shall restrict ourselves to linear, separable boundary conditions (5.3). Well-posed initial value problems normally have a unique solution. On the other hand, even linear boundary value problems may have multiple solutions or no solution at all. This becomes evident from a simple example. Consider the second-order ODE

y′′ + y = 0.

For the boundary conditions y(0) = 0, y(π/2) = 1 there is just one solution, y(x) = sin x, while for y(0) = 0, y(π) = 0 we have infinitely many, y(x) = A sin x with arbitrary A, and for y(0) = 0, y(π) = 1 there exists no solution at all.

5.1.2 Example: ballistic flight

A mass point starts at (x, y) = 0 with a given velocity v⃗^(0) = (vx^(0), vy^(0)) in a constant gravity field and lands after a certain time t = T at x(T) = L, y(T) = 0. The equations of motion read

ẍ = −α ẋ + β y
ÿ = −α ẏ − g,   (5.4)

where speed-dependent friction (α) as well as a linearly height-increasing horizontal wind force (shear flow, β) are included. If α = β = 0, the trajectory is a parabola,

y(t) = vy^(0) t − (1/2) g t²,  x(t) = vx^(0) t,

or

y(x) = (vy^(0)/vx^(0)) x − [g/(2 (vx^(0))²)] x²,

with

T = 2vy^(0)/g,  L = 2vx^(0) vy^(0)/g.   (5.5)

Given the initial conditions x(0) = 0, y(0) = 0, ẋ(0) = vx^(0), ẏ(0) = vy^(0), the solution x(t), y(t) as well as the flight time T and the landing point L follow uniquely. This of course is a classical initial value problem. How can we formulate it as a boundary value problem? We seek a solution of (5.4) fulfilling the boundary conditions

x(0) = y(0) = 0,  x(T) = L,  y(T) = 0

for a given T (parameter). Hence the mass point should arrive after a given flight time T at the given place x = L. From (5.5) we find vy^(0) = gT/2 and vx^(0) = L/T, or

y(t) = (1/2) g t (T − t),  x(t) = (L/T) t.   (5.6)


But what to do if (5.4) looks a bit more involved (nonlinear wind or friction forces, etc.) and can only be solved numerically? One could iteratively try values for vx^(0), vy^(0) such that the trajectory ends after t = T at (L, 0). This is the idea behind the so-called shooting method, to which we shall come back in more detail in Section 5.5.

5.2 Finite differences

As already done for initial value problems, one can express the derivatives with the help of difference quotients, resulting in an algebraic system, the discretized equations.

5.2.1 Discretization

We demonstrate the method with the example of the ballistic flight from Section 5.1.2. First we divide the axis 0 ≤ t ≤ T into equidistant mesh points

ti = iΔt,  i = 0 . . . n,  Δt = T/n.

Then the derivatives are substituted by

ẋi = (xi+1 − xi−1)/(2Δt),  ẍi = (xi+1 − 2xi + xi−1)/Δt²,   (5.7)

and accordingly for y. Here, xi, yi stand for x(ti), y(ti). From (5.4) the two linear systems result:

∑_{j=1}^{n−1} Aij yj = ai   (5.8)

∑_{j=1}^{n−1} Aij xj = β yi + bi.   (5.9)

Here Aij denotes the tridiagonal matrix

Aii = −2/Δt²,  Ai,i+1 = 1/Δt² + α/(2Δt),  Ai,i−1 = 1/Δt² − α/(2Δt)   (5.10)

and ai = −g. The boundary conditions read

x0 = y0 = 0,  xn = L,  yn = 0

and have to be incorporated into the system (5.8), (5.9). Since x0 = y0 = 0, the boundary points on the left side are already accounted for in the first equations (i = 1). On the right side (i = n − 1), the last equation of (5.9) yields

An−1,n−1 xn−1 + An−1,n L = β yn−1 + bn−1.

Because An−1,n does not exist (A is an (n−1) × (n−1) matrix), the additional inhomogeneity must be accounted for in bn−1:

bn−1 = −(1/Δt² + α/(2Δt)) L,  bi = 0  for i = 1 . . . n − 2.

Numerically we can solve the equations (5.8), (5.9) one after the other, applying a LAPACK routine, e. g., SGTSV [1]:

      PARAMETER (n=10)                 ! number of mesh points
      REAL, DIMENSION(n) :: x,y,dl,du,di
      g=9.81
      tend=1.                          ! T, right side boundary
      xl=1.                            ! length L=1
      ....
C     matrix A (tridiagonal) to solve y eqs.
      dl=1./dt**2-alpha/2./dt          ! sub-diagonal
      du=1./dt**2+alpha/2./dt          ! super-diagonal
      di=-2./dt**2                     ! diagonal
      y=-g                             ! inhomogeneity
c     LAPACK call for y eqs.
      CALL SGTSV(n,1,dl,di,du,y,n,info)
C     matrix A (tridiagonal) to solve x eqs.
      dl=1./dt**2-alpha/2./dt          ! sub-diagonal
      du=1./dt**2+alpha/2./dt          ! super-diagonal
      di=-2./dt**2                     ! diagonal
      x=beta*y                         ! inhomogeneity
      x(n)=x(n)-xl*du(n)               ! boundary cond. at x=L
c     LAPACK call for x eqs.
      CALL SGTSV(n,1,dl,di,du,x,n,info)
      ... output, plotting, etc...

The results for only 10 mesh points (in practice one would use many more) are shown in Figure 5.1 for several values of α and β. A more involved analytical solution can even be found for the case α, β ≠ 0 (see Problems), which is also shown in the figure. The finite difference scheme provides a quite exact result even for this small number of mesh points. Due to the approximations in (5.7), the discretization error is ∼ Δt² or ∼ 1/n².


Figure 5.1: The ballistic flight, numerical solutions of (5.4) for 10 mesh points (circles) compared to the exact solution (solid), L = 1 m, T = 1 s.

The central task concerning this problem is the inversion of matrices; thus an implementation using MATLAB suggests itself. For demonstration we list the program including graphical evaluation:

clear; n=100; tend=1; g=9.81; xl=1;
alpha=[0.5,3,5.5]; beta=[-5,-2.5,0];
dt=tend/(n-1); dt2=dt^2; k=0;
for i=1:3
  for j=1:3
    a=(1/dt2-alpha(j)/2/dt)*ones(n,1);   % matrix elements
    b=(1/dt2+alpha(j)/2/dt)*ones(n,1);
    m=spdiags([b,a],[-1,1],n,n)-2/dt2*speye(n);
    inh(1:n)=-g;                         % inhomogeneity
    y=inh/m;                             % inversion y
    inh=beta(i)*y; inh(n)=inh(n)-xl*m(n,n-1);
    x=inh/m;                             % inversion x
    k=k+1; subplot(3,3,k); plot(x,y);    % plot
  end
end

A figure similar to Figure 5.1 will show up.

5.2.2 Example: Schrödinger equation

One of the standard problems of quantum mechanics is solving the Schrödinger equation [2]

[−(ℏ²/2m) Δ + U(r⃗, t)] Ψ(r⃗, t) = iℏ 𝜕Ψ(r⃗, t)/𝜕t   (5.11)

for a given potential U(r⃗, t). This is a partial differential equation that can be transformed into an ODE in the special case of only one spatial dimension, say x, and a time-independent potential U = U(x). Making the separation ansatz

Ψ(x, t) = Φ(x) exp(−i Ẽ t/ℏ),

one arrives at the time-independent or stationary (one-dimensional) Schrödinger equation

−Φ′′(x) + V(x) Φ(x) = E Φ(x)   (5.12)

with the abbreviations

V(x) = (2m/ℏ²) U(x),  E = (2m/ℏ²) Ẽ.

The ODE (5.12) has to be completed by boundary conditions for the wave function Φ, which depend on the problem under consideration. Contrary to (5.4), equation (5.12) constitutes a homogeneous boundary value problem, however with variable coefficients. We can also interpret (5.12) as a linear eigenvalue problem

Ĥ φn = En φn


where the En denote the eigenvalues and the φn(x) the eigenfunctions of the differential operator (Hamiltonian)

Ĥ = −d²/dx² + V(x).

Discretizing (5.12) leads to a homogeneous algebraic eigenvalue problem of the form

∑_j Hij Φj = E Φi   (5.13)

with the tridiagonal matrix

Hii = 2/Δx² + V(xi),  Hi,i−1 = Hi,i+1 = −1/Δx²   (5.14)

and Φi = Φ(xi).

5.2.2.1 Stark effect

As an application we wish to examine a particle in a one-dimensional quantum well of length L. If the walls are infinitely high, the probability density outside the well must vanish. From the continuity of Φ the boundary conditions follow:

Φ(0) = Φ(L) = 0.   (5.15)

If an external electric field is applied, the potential in (5.12) reads

V(x) = V0 ⋅ (x − L/2).   (5.16)

If V0 = 0 the exact solutions

Φk(x) = √(2/L) sin(kπx/L),  Ek = k²π²/L²,  k = 1, 2, . . .

are known. Hence, Ĥ has a countably infinite set of eigenfunctions, all with different eigenvalues. From first-order perturbation theory [2] it follows that

Ek^(1) = (2/L) ∫₀^L dx V(x) sin²(kπx/L),

which vanishes for all k for symmetry reasons. Thus the change of the spectrum is at least of second order, ∼ V0², and is named the quadratic Stark effect. For a direct numerical solution one has to find the eigenvalues and eigenvectors of the problem (5.13)–(5.15) with

V(xi) = V0 ⋅ (iΔx − L/2),  i = 1 . . . n,  Δx = L/(n + 1).


Figure 5.2: Numerical solutions of the stationary Schrödinger equation with (5.16). Shown in each frame are the first three states (probability densities |Φ|2 ) (solid, thin, dashed) for V0 = 0, 300, 1000, 5000 (from left to right), L = 1, n = 1000.

Here, i = 0 corresponds to the left, i = n + 1 to the right wall, and H has the form of an n × n tridiagonal matrix. Figure 5.2 depicts the probability densities |Φ(x)|² for the first three states and several values of V0. It turns out that with increasing V0 the probability densities are shifted more and more to the left, where the potential is minimal. To solve the eigenvalue problem, the LAPACK routine SSTEQR (real-valued, symmetric tridiagonal matrix) can be used:

      ...
      xl=1.                    ! length L=1 of the well
      dx=xl/FLOAT(n+1)
      DO i=1,n
         dl(i)=-1./dx**2       ! matrix elements
         di(i)=2./dx**2 + v0*(FLOAT(i)*dx-xl/2)
      ENDDO
      CALL SSTEQR('i',n,di,dl,z,n,work,info)
c .. eigenvalues in di, eigenvectors in z
c .. z(1:n,k) belongs to di(k)
      .. output, plot, etc

The first three eigenvalues as a function of V0 are shown in Figure 5.3. All energies decrease for large V0. Then also the higher states are localized in the region of negative energy (left). Since this code again is mainly based on matrix computations, we present the complete MATLAB code producing Figure 5.2:

clear; n=1000; v0=[0,300,1000,5000];
dx=1/(n+1); dx2=dx^2;
x=-.5+dx:dx:0.5-dx;
a=-1/dx2*ones(n,1);
for k=1:4
  b=2/dx2*ones(n,1)+v0(k)*x';
  m=spdiags([a,b,a],[-1,0,1],n,n);
  [v,e]=eigs(m,3,'sr');
  subplot(1,4,k);
  plot(x,v(1:n,1).^2,x,v(1:n,2).^2,x,v(1:n,3).^2);
end

Figure 5.3: The first three eigenvalues over V0 for L = 1.

5.2.2.2 The harmonic oscillator

The stationary Schrödinger equation of the harmonic oscillator with eigenfrequency ω0 reads

[−(ℏ²/2m) d²/dx² + (1/2) mω0² x²] Φ(x) = Ẽ Φ(x),

or, with the scaling of (5.12),

−Φ′′(x) + Ω0² x² Φ(x) = E Φ(x)   (5.17)

with Ω0 = ω0 m/ℏ.

The problem belongs to the few in quantum mechanics that can be solved exactly. One finds the equidistant eigenvalues, the energy levels¹

Ek^ex = 2 Ω0 (1/2 + k),  k = 0, 1, . . . ,

and for the eigenfunctions the Hermite polynomials. Nevertheless, we shall first describe a numerical solution and will later generalize the quadratic potential to a nonlinear spring. A new complication in solving (5.17) arises from the asymptotic boundary conditions

lim_{x→±∞} Φ(x) = 0,

saying that the x-region is in fact infinite. In practice one can still choose the x-interval finite, but large enough to ensure an almost vanishing wave function at its borders x = ±L/2, leading to the boundary conditions

Φ(L/2) = Φ(−L/2) = 0.   (5.18)

Hence we are again in the position to solve the problem of an infinitely high potential well, but now with an additional quadratic potential inside. We can resort to the same program written for the Stark effect and need only adjust the potential. The appropriate value for L can be found by trial and error. Figure 5.4 depicts the first three probability densities as well as the 50th.
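In the MATLAB code from the Stark example, for instance, only the mesh and the diagonal have to be adapted; a possible modification (L = 30 and Ω0 = 1 are example choices) could read:

xl=30;                            % interval length L
dx=xl/(n+1); dx2=dx^2;
x=(-xl/2+dx:dx:xl/2-dx)';         % mesh points in [-L/2, L/2]
a=-1/dx2*ones(n,1);               % off-diagonals unchanged
b=2/dx2*ones(n,1)+x.^2;           % diagonal with V(x) = Omega_0^2 x^2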

Figure 5.4: The first three eigenfunctions |Φ|2 together with the 50th for the harmonic oscillator, L = 30, Ω0 = 1, 1000 mesh points.

1 Undoing the rescaling, we of course arrive at the more familiar relation Ẽk = ℏω0 (1/2 + k).


Table 5.1: Energy levels of the harmonic oscillator, Ω0 = 1; finite difference method compared to the exact values.

state k    Ek (numerical)    Ek^ex = 1 + 2k    error |Ek − Ek^ex|/Ek
0          0.999717832       1                 −2.82168388E-04
1          2.99929714        3                 −2.34285995E-04
2          4.99920225        5                 −1.59549716E-04
3          6.99843979        7                 −2.22887305E-04
4          8.99729633        9                 −3.00407410E-04
5          10.9976883        11                −2.10155136E-04
6          12.9952221        13                −3.67531407E-04
7          14.9935150        15                −4.32332366E-04
8          16.9906712        17                −5.48755401E-04
9          18.9886627        19                −5.96698956E-04
…          …                 …                 …
49         96.7354813        97                −2.72699725E-03

One should note that the spatial extension of the wave functions increases with k; see Section 5.2.2.4 below. To compute the higher states correctly, L should be large enough. Table 5.1 lists the computed energy values together with the exact ones. The relative errors stay well below one percent. The number of mesh points has been chosen as n = 1000 for the whole interval.

5.2.2.3 Anharmonic oscillator

Figure 5.5: The first 100 energy levels for different potentials (5.19).

Now we can examine other potentials. Figure 5.5 shows the first 100 levels for potentials of the form

V(x) = Ω0² |x|^p   (5.19)

for Ω0 = 1 and p = 3/2, 2, 3, 4. The larger p, the stronger the increase of the energies; in the classical picture, the spring becomes more rigid for larger amplitudes. A deviation for p = 3/2 becomes visible at k ≈ 70. Here, the wave functions are too wide for the given area L = 30 and the approximate boundary conditions (5.18) distort the result.

5.2.2.4 Lateral extension of the wave function

The lateral width of the wave function of the quantum mechanical oscillator can be estimated simply by writing (5.17) as

Φ′′ = γ(x) Φ  with  γ = Ω0² |x|^p − E.

If γ > 0, Φ decreases monotonically. Thus, γ = 0, or

xL = (E/Ω0²)^{1/p},

gives an estimate of the wave function's extension xL belonging to a certain energy level E. For the harmonic oscillator (p = 2) one obtains

xL = √((1 + 2k)/Ω0),

in very good agreement with Figure 5.4.

5.3 Weighted residual methods

5.3.1 Weight and base functions

The key idea of the weighted residual methods (WRMs) is to solve ODEs of the form

L̂(dx) y(x) = b(x)   (5.20)

approximately in the domain X by the ansatz

y = ỹ(a1, a2, . . . , aN, x),   (5.21)

where ỹ is called a test function. The N free parameters ak are determined in such a way that the residuum (the remainder) R(x) = L̂ ỹ − b, projected onto a certain set of problem-specific weight functions wk(x), vanishes:

Rk = ∫_X dx R(x) wk(x) = 0,  k = 1 . . . M.   (5.22)

If the number M of weight functions is equal to the number of parameters N, the ak can be determined from the N equations (5.22). In this section we wish to examine in particular linear differential operators L̂ and test functions (5.21) that are linear in the ak,

ỹ(x) = ∑_{i=1}^{N} ai φi(x)   (5.23)

with linearly independent base functions φi(x). Then (5.22) turns into the linear system

∑_{j=1}^{N} Lij aj = bi   (5.24)

with the matrix elements

Lij = ∫_X dx wi(x) L̂(dx) φj(x),  bi = ∫_X dx wi(x) b(x).

For a nonlinear operator L̂, instead of (5.24) a nonlinear algebraic system for the set ak would result, which can only be solved iteratively.

Depending on the weight functions, one distinguishes between different WRMs. We mention the most important ones:
1.  Subdomain method. The domain X is divided into M subdomains Dk, which may overlap. One chooses wk(x) equal to one if x is in Dk, otherwise zero:

    wk(x) = 1 if x ∈ Dk, 0 else.

    In this way one can increase the accuracy of the method (smaller and more subdomains) at places where the expected solution varies strongly, while in other, 'quieter' regions larger subdomains may be appropriate.
2.  The collocation method can be considered as a special case of 1. Let each Dk shrink to a point,

    wk(x) = δ(x − xk),

    where δ(x) denotes Dirac's delta function. Due to (5.22) one has R(xk) = 0 and

    Lij = L̂(dx) φj(x)|x=xi,  bi = b(xi).

    The xk are called collocation points.
3.  Least-squares method. Instead of requiring (5.22), one can also minimize the mean squared residuum

    S = ∫_X dx R²(x)   (5.25)

    and determine the ai from

    𝜕S/𝜕ai = 0.   (5.26)

    From (5.25) one obtains

    𝜕S/𝜕ai = 2 ∫_X dx R(x) 𝜕R/𝜕ai = 0.

    For the least-squares method no weight functions are needed.
4.  Galerkin method. Here, weight and base functions are identical:

    wk(x) = φk(x),  k = 1 . . . N.

    Due to (5.22) the residuum is orthogonal to the subspace spanned by the base functions. If one increases N further, this subspace becomes more and more complete, leaving less and less room for the residuum. Thus R(x) must vanish for N → ∞.

5.3.2 Example: Stark effect

We wish to compute the ground state and the first excited state of the stationary Schrödinger equation

−Φ′′(x) + V0 ⋅ (x − 1/2) Φ(x) = E Φ(x),  Φ(0) = Φ(1) = 0,   (5.27)

corresponding to the Stark effect in an infinitely high potential well with L = 1. Let the test function be a polynomial of the third degree:

Φ̃(x) = a0 + a1 x + a2 x² + a3 x³.

If Φ̃ fulfills the boundary conditions, it follows that

a0 = 0,  a3 = −a1 − a2,


or Φ̃ = a1 φ1 + a2 φ2 with the two linearly independent base functions

φ1 = x − x³,  φ2 = x² − x³.

Contrary to (5.20), the problem (5.27) is homogeneous, and instead of (5.24) we obtain a generalized linear eigenvalue problem of the form

∑_{j=1}^{2} (Lij − E Mij) aj = 0.   (5.28)

The matrix elements Lij, Mij thereby depend on the method. The two eigenvalues E are then determined from the solvability condition

Det(Lij − E Mij) = 0.   (5.29)

1.  Subdomain method. Defining the two subdomains D1: 0 ≤ x ≤ 1/2, D2: 1/2 < x ≤ 1 yields

    E0,1 = 30 ∓ (1/10) √(32 400 + 5V0²).

2.  The collocation method with x1 = 1/4, x2 = 3/4 provides

    E0,1 = 64/3 ∓ (1/12) √(16 384 + 9V0²).

3.  For the least-squares method we obtain from (5.26) two homogeneous algebraic equations in which E occurs quadratically. Evaluating the solvability condition yields a polynomial of the 4th degree in E. Therefore this method seems not adequate for determining the eigenvalues E.
4.  Galerkin method. With wk = φk we eventually find

    E0,1 = 26 ∓ √(256 + V0²/28).
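As a cross-check, the Galerkin matrices can also be evaluated numerically and the generalized eigenvalue problem (5.28) solved directly; a small MATLAB sketch (the quadrature via integral is a free choice) might read:

% Galerkin matrices for (5.27) with phi1 = x - x^3, phi2 = x^2 - x^3
V0=0;
phi ={@(x) x-x.^3, @(x) x.^2-x.^3};
Lphi={@(x) 6*x+V0*(x-1/2).*(x-x.^3), ...       % L phi1 = -phi1'' + V phi1
      @(x) -2+6*x+V0*(x-1/2).*(x.^2-x.^3)};    % L phi2
for i=1:2
  for j=1:2
    L(i,j)=integral(@(x) phi{i}(x).*Lphi{j}(x),0,1);
    M(i,j)=integral(@(x) phi{i}(x).*phi{j}(x),0,1);
  end
end
eig(L,M)     % for V0 = 0 this returns 10 and 42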

Of course the results depend strongly on the applied method. Note also that only two base functions have been used, so very exact results cannot be expected from the outset. Figure 5.6 compares the outcome of the three methods with that of the finite difference algorithm from Section 5.2.2.1. In Table 5.2 we list the values for the undisturbed problem V0 = 0.


Figure 5.6: Stark effect, the methods by comparison; energy of the ground state and the first excited state over V0.

Table 5.2: The first two energy levels for V0 = 0 for the three different methods.

     exact         subdomain    collocation    Galerkin
E0   π² ≈ 9.87     12.0         10.7           10.0
E1   4π² ≈ 39.5    48.0         32.0           42.0

All three methods provide a quadratic dependence on V0 as well as the correct sign of the curvature of E. Not surprisingly, the first excited state is reproduced only inaccurately by a test function with only two free parameters. On the other hand, the energy of the ground state agrees amazingly well, at least for the Galerkin method. For the two other methods, additional freedom lies in the choice of the subdomains or collocation points, respectively.

5.4 Nonlinear boundary value problems

So far we have examined only linear problems of the form (5.20). In this section we wish to extend the treatment to certain nonlinear systems

L̂(dx^n) y(x) + g(y, dx^{n−1} y, . . . ) = b(x)   (5.30)

where L̂ denotes a linear differential operator that comprises the highest occurring derivative. The nonlinear function g is at least bilinear in y and/or its derivatives. Independently of the applied algorithm, one arrives after discretization at a nonlinear system of algebraic equations.


5.4.1 Nonlinear systems

There is no general recipe for solving nonlinear algebraic systems. This can be seen already for a system of only two equations. Let us seek a solution of

f(x, y) = 0
g(x, y) = 0   (5.31)

with some nonlinear functions f, g of the two variables x, y. Graphically, we can plot the zero lines of f and g and determine their intersections, as in Figure 5.7. But the two functions are completely independent of each other. The zeros of each function can consist of an arbitrary number of discontiguous curves. Already for two equations, zero, one, many, or even infinitely many solutions can exist. Without any further information, a solution of (5.31) can be very arbitrary and sometimes hard to find. This additional information can be the approximate position(s) of the zero(s) to be computed. In other words, one needs an initial value x^(0), y^(0) from which an iterative refinement can then yield the desired solution of (5.31). We shall come back to this question with some applications in Sections 5.4.3.2 and 5.4.4.

Figure 5.7: The zero lines of two functions f(x, y) = 0 (solid) and g(x, y) = 0 (dashed). The intersections correspond to the solutions of (5.31). As we can see, many solutions may exist, among them also 'almost' solutions and double zeros. A numerically found solution will surely depend on the initial value x^(0), y^(0) of the iterative scheme.

5.4.2 Newton–Raphson

The Newton or Newton–Raphson method is well known for finding the zeros of a function,

f(x) = 0,   (5.32)

iteratively. Let x^(0) be an (initial) value not too far from the desired zero xk. In its neighborhood the Taylor expansion

f(x^(0) + δx) = f(x^(0)) + dx f(x^(0)) δx + O(δx²)   (5.33)

is valid. The values of f(x^(0)) and dx f at x^(0) are given with f(x). From the requirement f(x^(0) + δx) = 0 we can determine δx,

δx = −f(x^(0))/dx f(x^(0)) + O(δx²),   (5.34)

and from there the zero

xk ≈ x^(0) + δx.   (5.35)

Because of the neglect of higher-order terms in (5.33), the zero will not be reached exactly, and (5.35) provides only an approximate value x̃k. However, x̃k will be closer to the exact zero than the initial value x^(0). Substituting x̃k into formula (5.34) will yield an even closer value, and so on. Thus an iteration rule

x^(i+1) = x^(i) − f(x^(i))/dx f(x^(i))   (5.36)

is obtained, where the sequence x^(i) converges to the zero xk. As a stopping criterion one may take

|δx^(i)| = |x^(i+1) − x^(i)| < ϵ

with a given accuracy ϵ. If (5.32) possesses many solutions, the result will depend on the initial value x^(0). The Newton–Raphson method can be generalized straightforwardly to systems of N equations:

fi(x1, x2, . . . , xN) = 0,  i = 1 . . . N.   (5.37)

� 145

Instead of (5.33) one finds N

fi (x⃗ (0) + δx)⃗ = fi (x⃗ (0) ) + ∑ j=1

𝜕fi δx + ⋅ ⋅ ⋅ 𝜕xj j

(5.38)

and from there N

∑ αij δxj = −fi

(5.39)

j=1

with the Jacobian αij =

𝜕fi . 𝜕xj

(5.40)

To determine δxi one has to solve at each iteration step the inhomogeneous system (5.39). From δxi one obtains along (5.35) x⃗ (i+1) = x⃗ (i) + δx⃗ (i) . 5.4.3 Example: the nonlinear Schrödinger equation We study the nonlinear stationary Schrödinger equation (NSE) in one dimension − Φ′′ + γ |Φ|2 Φ = EΦ

(5.41)

that can serve as a model of a charged particle in a self-generated charge density cloud, having the potential 󵄨 󵄨 V = γ 󵄨󵄨󵄨Φ2 󵄨󵄨󵄨. Taking a real-valued Φ, we obtain Φ′′ + EΦ − γ Φ3 = 0.

(5.42)

Applying the finite difference method of Section 5.2.2 leads to the algebraic system N

fi (Φ1 , Φ2 . . . ΦN ) = ∑ Lij Φj − γ Φ3i = 0 j

(5.43)

with the tridiagonal matrix Lii = −

2 + E, Δx 2

Li,i+1 = Li,i−1 =

1 . Δx 2

(5.44)

146 � 5 Ordinary differential equations II For the matrix α in (5.40) one obtains with (5.43) αij =

𝜕fi = Lij − 3 δij γ Φ2i 𝜕Φj

with the Kronecker symbol δij . 5.4.3.1 Exact solutions We saw above that it is important to have a certain idea of the sought solutions, providing reasonable initial values for the iteration procedure. For the NSE two exact nontrivial solutions are known that we shall derive briefly. Multiplying (5.42) by Φ′ and integrating yields γ 2 (Φ′ ) = C − EΦ2 + Φ4 2

(5.45)

with a constant C. A second integration can be performed by the separation of variables resulting for an arbitrary C in elliptic integrals. For certain C one may distinguish two physically relevant cases: 1. E > 0, γ > 0, C = E 2 /2γ, repelling potential, free solution: Φ(x) = ±√ 2.

E E tanh(√ (x − x0 )). γ 2

(5.46)

E < 0, γ < 0, C = 0, attractive potential, bounded self-focusing solution: Φ(x) = ±√

2E 1 . γ cosh(√−E (x − x0 ))

(5.47)

The solution (5.46) corresponds to a front or a kink at x = x0 connecting the two asymptotic solutions Ψ(x → ±∞) = ±√E/γ, (5.47) to a localized wave function around x = x0 , Figure 5.8.

Figure 5.8: Left: front for E, γ > 0, right: localized wave for E, γ < 0. Both are exact solutions of the NSE.

5.4 Nonlinear boundary value problems

� 147

5.4.3.2 Numerical implementation

Both solutions (5.46), (5.47) are infinitely extended along the x-axis and fulfill asymptotic boundary conditions at x → ±∞. Again we use the trick from above and apply these boundary conditions at x = ±L/2, taking a large L. Let us begin with the first case, E, γ > 0. For the front solution, the derivatives have to vanish at infinity, requiring

dx Φ|x=−L/2 = 0  →  Φ0 = Φ1,
dx Φ|x=L/2 = 0  →  ΦN+1 = ΦN.   (5.48)

This can be incorporated into the difference matrix (5.44) by modifying the first and the last diagonal elements according to

L11 = LNN = −2/Δx² + E + 1/Δx² = −1/Δx² + E.

As the initial value for Φ we can take a straight line, Φ = x/L. Depending on the parameters, one obtains different solutions, among which the exact one (5.46) is also present (Figure 5.9).

Figure 5.9: Two numerically found solutions of the NSE. Left: front for E = 1/2, γ = 1; right: another solution for E = 1, γ = 1. The left frame corresponds to the exact solution (5.46).

For the second case, E, γ < 0, the wave function has to vanish at infinity. Thus we set

Φ|x=−L/2 = 0  →  Φ0 = 0,
Φ|x=L/2 = 0  →  ΦN+1 = 0.   (5.49)

These boundary conditions are already included in (5.44). To obtain a convergent iteration, one should again take an initial condition close to the expected solution, e. g.,

Φ = cos(πx/L).

Again the computed solutions depend strongly on the parameters. For γ = −1, E = −1 the algorithm converges to the state (5.47); for γ = −1, E = −2 one obtains numerical rubbish


Figure 5.10: Two solutions in the parameter region of bounded states. Left: localized wave for E = −1, γ = −1, right: numerical artifact for E = −2, γ = −1. For all computations 100 mesh points have been used.

that, however, also fulfills the stopping criterion (Figure 5.10). As always, ‘prudence is the better part of valor!’
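A compact MATLAB realization of the Newton iteration for (5.43) with the boundary conditions (5.49) may serve as a guide; N, L, and the stopping threshold are example choices:

% Newton iteration for the NSE (5.42), bounded case E, gamma < 0
N=100; L=30; E=-1; gam=-1;
dx=L/(N+1); dx2=dx^2;
x=(-L/2+dx:dx:L/2-dx)';
Lm=spdiags([ones(N,1),(-2+E*dx2)*ones(N,1),ones(N,1)],[-1,0,1],N,N)/dx2;
Phi=cos(pi*x/L);                        % initial guess close to (5.47)
dPhi=1;
while norm(dPhi) > 1.e-10
  f=Lm*Phi-gam*Phi.^3;                  % system (5.43)
  alpha=Lm-spdiags(3*gam*Phi.^2,0,N,N); % Jacobian, cf. (5.40)
  dPhi=-alpha\f;                        % Newton step (5.39)
  Phi=Phi+dPhi;
end
plot(x,Phi)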

5.4.4 Example: a moonshot

In J. Verne's² famous novel [3], a spaceship is shot toward the moon by a giant cannon. Before the development of smoothly launching rockets, this was the only, at least hypothetical, possibility for leaving the gravity field of the earth. Albeit not realizable, at least with surviving passengers, this idea is perfectly suited to be formulated as a boundary value problem, as done in Section 5.1.2 for the ballistic flight. Assuming that the flight path of the spaceship lies in the earth-moon plane and that the moon revolves in a circular orbit, the system earth-moon-spaceship can be considered as a reduced three-body problem and the equations derived in Section 4.4.4 can be applied. For the mass parameter μ one has

μ = Mmoon/(Mearth + Mmoon) ≈ 0.012.

After discretization of the time derivatives by finite differences, (4.45) turns into a nonlinear algebraic system

∑_j^{2n} Mij qj = pi(q1 . . . q2n)   (5.50)

with the difference matrix (band matrix, width = 6)

2 Jules Verne, French novelist, 1828–1905.

5.4 Nonlinear boundary value problems

−2 0 ( 1 ( (−Δt 1 ( ( M= 2( Δt ( ( ( (

0 −2 Δt 1

1 Δt −2 0 ......

−Δt 1 0 −2

0 0 1 Δt

.. .. −Δt 1

0 0

...... 1 −Δt

Δt 1

−2 0

...... .. ..

(

0 0

� 149

.. ) ) .. ) ) ) ), ) ) ) ) 0 −2)

the solution vector q⃗ = (x1, y1, . . . , xn, yn), and the nonlinear inhomogeneities

p2i−1 = −(1−μ)/r2³(xi, yi) ⋅ (xi + μ) − μ/r3³(xi, yi) ⋅ (xi + μ − 1) + xi
p2i = −(1−μ)/r2³(xi, yi) ⋅ yi − μ/r3³(xi, yi) ⋅ yi + yi,  i = 1 . . . n   (5.51)

(n is the number of mesh points; for the other notations see Section 4.4.4.2). Differing from Jules Verne, our spaceship should 'land' on the moon after a given flight time t = T. This leads to the Dirichlet boundary conditions

x(0) = x0 = xs,  y(0) = y0 = ys,  x(T) = xn+1 = xt,  y(T) = yn+1 = yt

with the start vector (xs, ys) prescribed somewhere on the surface of the earth and the target coordinates (xt, yt) somewhere on the moon. Being nonlinear, the system (5.50) can only be tackled iteratively. To improve convergence it is advantageous to add on the right-hand side of (5.50) a damping term of the form s (q⃗^{k+1} − q⃗^{k}). Solving for q⃗^{k+1} then yields

q⃗^{k+1} = (M − s 1)^{−1} (p⃗(q⃗^{k}) − s q⃗^{k}).   (5.52)

Essentially we have to solve inhomogeneous systems of equations successively. We list the MATLAB code:

clear;
length_scale=380000;              % earth-moon distance
n=10000;
tend=[.2,0.75,2.5];               % flight times (scaling as in sect. 4.4.4)
mu=1./82;
earth_radius=6300/length_scale; moon_radius=1600/length_scale;
xs=-mu+earth_radius; ys=0;        % Dirichlet conditions
xt=1-mu-moon_radius; yt=0;
s=100;                            % convergence factor
for kk=1:3                        % three flight times
  dt=tend(kk)/(n-1); dt2=dt^2; q(1:2*n)=0;
  for i=1:2:2*n-1                 % define sparse matrix
    a(i)=dt; b(i+1)=dt;
  end
  a(2*n)=0; delta=1;
  m=spdiags([-b',ones(2*n,1),a',(-2-s*dt2)*ones(2*n,1),b',ones(2*n,1),-a'],[-3,-2,-1,0,1,2,3],2*n,2*n);
  while delta>0.1                 % iteration loop
    for i=1:2:2*n-1               % right-hand side
      i1=i+1;
      r13=sqrt((q(i)+mu)^2+q(i1)^2)^3;
      r23=sqrt((q(i)+mu-1)^2+q(i1)^2)^3;
      p(i)=(-(1-mu)/r13*(q(i)+mu)-mu/r23*(q(i)+mu-1)+q(i)*(1-s))*dt2;
      p(i1)=(-(1-mu)/r13*q(i1)-mu/r23*q(i1)+q(i1)*(1-s))*dt2;
    end
    % boundary conditions as additional inhomogeneity
    p(1)=p(1)-ys*dt-xs; p(2)=p(2)+xs*dt-ys;
    p(2*n)=p(2*n)-xt*dt-yt; p(2*n-1)=p(2*n-1)+yt*dt-xt;
    q1=q; q=p/m;
    delta=norm(q-q1);             % stopping criterion
  end
  yof=(kk-1)*3;
  subplot(3,3,1+yof); plot(q(1:2:2*n),q(2:2:2*n));   % output
  subplot(3,3,2+yof); plot(1:n,q(1:2:2*n));
  subplot(3,3,3+yof); plot(1:n,q(2:2:2*n));
end

5.5 Shooting

� 151

Figure 5.11: Trajectories of the space ship (left), x (middle), and y (right) coordinates over time for three different flight times T = 20 h, 75 h, 250 h (from top to bottom).

5500 km/h, respectively. At launch the speed for all three cases is roughly 40 000 km/h. Jules Verne was right assuming that his pilots could not have been injured by the pressure wave of the detonation of the cannon because they would have exceeded the speed of sound in the first moment. However, the devastating acceleration would have mashed them anyway beforehand.

5.5 Shooting 5.5.1 The method The main idea of the shooting method is to first neglect the boundary conditions on one side, say on the right, and to convert the system under consideration to an initial value problem. Then the solution is not unique and one varies iteratively the initial values (aiming) until the boundary conditions on the right are met (meaning the shot is on target). Let us explain the method in more detail with a simple example. We are looking for a solution y(x) of a second-order ODE in 0 ≤ x ≤ 1: y′′ = f (y, y′ , x) or of the equivalent problem y′1 = y2 ,

y′2 = f (y1 , y2 , x)

(5.53)

with the boundary condition y(0) = a,

y(1) = b.

(5.54)

152 � 5 Ordinary differential equations II

Figure 5.12: Shooting method. Three different trials that fulfill the left side boundary conditions only.

One integrates (5.53) for instance with RK4 numerically from x = 0 through x = 1 taking the initial conditions y1 (0) = a,

y2 (0) = s

and different values for s (Figure 5.12). For each s one finds on the right-hand boundary some yR (s) = y1 (x = 1, s). If one is able to get a value for s leading to yR = b, this solution corresponds to a solution of (5.53) fulfilling (5.54). Thus one has to seek the zero(s) of the function f (s) = yR (s) − b, by applying for instance the Newton–Raphson method from Section 5.4.2: s(i+1) = s(i) −

f (s) . f ′ (s)

However, f ′ is in general not known and one has to approximate the derivative by taking the differential quotient f (s + Δs) − f (s) . Δs Hence, for each iteration step, one needs two different numerical solutions for s and a nearby s + Δs of (5.53). f ′ (s) ≈

5.5.2 Example: free fall with quadratic friction A parachutist exits a nonmoving helicopter at the height of 1000 m at t = 0. At which height yf should he open his parachute if he wants to touch down at t = 60 seconds

5.5 Shooting

� 153

on the ground y = 0? Assume the flight takes place without friction if the parachute is closed. For the open parachute, assume friction proportional to the square of the speed. The task can be formulated as a boundary value problem according to ÿ = −f (y, y,̇ yf ) ẏ − g

(5.55)

with the boundary conditions y(t = 0) = 1000 m,

̇ = 0) = 0, y(t

y(t = 60s) = 0.

(5.56)

The function f allows for friction: f ={

0 α |y|̇

if y > yf

if y ≤ yf .

Since we have a second order ODE and there are three conditions to fulfill, the problem looks over-determined at first glance. However, an additional degree of freedom comes with the choice of yf . Taking the initial conditions (5.56), the equations (5.55) are integrated with RK4 until t = 60s for different values of yf . The results for α = 0.1/m and g = 10 m/s2 are shown in Table 5.3. The sought value is about 500 m, which is now determined more exactly by a Newton–Raphson method. Taking an accuracy of 1 meter, the iteration converges within four steps at yf = 518 m. The solution for this value is shown in Figure 5.13. Table 5.3: Height y after one minute flight time and touch-down velocity v for different values of yf . The optimal yf is between 450 and 550 m. yf [m] y(t = �� s) [m] v(y = �) [m/s]

10

30

250

450

550

750

−��� −��

−��� −��

−��� −��

−�� −��

28

207

Figure 5.13: Solution y(t) over t for the correct value yf = 518 s.

154 � 5 Ordinary differential equations II 5.5.3 Systems of equations Next we wish to extend the method to the general case of a system of N first-order ODEs dyi = fi (y1 , y2 . . . yN , x), dx

i = 1 . . . N.

(5.57)

Again we seek a solution in a ≤ x ≤ b. Assuming linear and separable boundary conditions we may have on the left-hand boundary n1 conditions ⃗ A y(a) = a,⃗

(5.58)

⃗ = b⃗ B y(b)

(5.59)

and on the right-hand side

n2 = N − n1 conditions to fulfill. The rectangular matrices A and B have n1 × N and n2 × N elements, respectively. Using (5.58) we can uniquely determine n1 of the yi (a) as an initial condition. The still undetermined yi can be confined to an n2 -component vector V⃗ . By numerically integrating (5.57) we obtain on the right-hand boundary yRi (V⃗ ) = yi (x = b, V⃗ ), which should fulfill the n2 right side boundary conditions (5.59). Thus, one has to find the simultaneous zeros of the functions f ⃗(V⃗ ) = B y⃗R (V⃗ ) − b.⃗

(5.60)

The n2 equations (5.60) determine the n2 components of V⃗. Applying the Newton–Raphson method, we obtain an iteration algorithm according to

α δV⃗^(i) = −f⃗(V⃗^(i))  and  V⃗^(i+1) = V⃗^(i) + δV⃗^(i)

with the n2 × n2 matrix

αkℓ = 𝜕fk/𝜕Vℓ.

Since f⃗ is known only numerically, the derivatives with respect to Vℓ can only be found numerically as well. Approximating the derivatives with the difference quotients


𝜕fk/𝜕Vℓ ≈ [fk(V1, . . . , Vℓ + ΔVℓ, . . . , Vn2) − fk(V1, . . . , Vℓ, . . . , Vn2)] / ΔVℓ,

the system (5.57) must be integrated n2 + 1 times at each iteration step.

5.6 Problems

1.  Find an analytical solution of (5.4) for the general case α, β ≠ 0.
2.  Verify the solutions (5.46) and (5.47). What is different for other (arbitrary) values of C?
3.  Solve the problem from Section 5.1.2 (ballistic flight) with the help of the shooting method. Also examine the influence of nonlinear friction of the form

    F⃗f = −α (ẋ² + ẏ²)^{1/2} (ẋ, ẏ).

Bibliography [1] LAPACK–Linear Algebra PACKage–Netlib, www.netlib.org/lapack/, February 5, 2023. [2] See text books on quantum mechanics, e. g., D. J. Griffiths, Introduction to Quantum Mechanics, Cambridge Univ. Press (2016). [3] Jules Verne, From the Earth to the Moon, Bantam Classics (1993).

6 Ordinary differential equations III, memory, delay and noise

6.1 Why memory? Two examples and numerical methods

In Chapters 3 and 4 we saw that an arbitrary system of quasi-linear differential equations can be formulated as an initial value problem of the form (3.15),

dxi/dt = fi(x1(t) . . . xN(t)),  i = 1 . . . N.   (6.1)

For a given state xi(t0), each later state xi(t > t0) follows in a unique way. Now we wish to extend this formulation and also allow for xi(t′) on the r. h. s. of (6.1), where due to causality t′ should be in the past, i. e., t′ < t. However, this dependence can be rather involved, and we shall consider only some special cases in this chapter. If we first restrict ourselves for the sake of simplicity to a one-dimensional problem, N = 1, (6.1) can be written as

dx/dt = f(x(t), y1(t), y2(t), . . . ),   (6.2)

where we have introduced the auxiliary variables yk depending on the past of x(t),

yk(t) = ∫₀^∞ dτ Kk(τ) x(t − τ).   (6.3)

The weight functions Kk(τ) describe the characteristics of the memory and are also called "kernels". A special case is that of a δ-kernel,

Kk(τ) = δ(τ − τk),   (6.4)

and (6.2) turns into a delay-differential equation

dx/dt = f(x(t), x(t − τ1), x(t − τ2), . . .)   (6.5)

where only values of x at certain "sharp" former times t − τi occur. In physics, equations are normally derived from conservation laws or first principles, and memory terms do not occur. However, there is a huge number of problems where memory effects are quite natural. Many of these problems originate from population dynamics, like logistic models or the nowadays very popular epidemic models. Also for traffic models, delay terms can be very important to model finite reaction times. In this chapter we shall not discuss problems from physics, but rather refer to heuristically derived models that may describe certain observed behaviors in a quantitative way. To start, we shall briefly discuss two simple examples.


6.1.1 Logistic equation – the Hutchinson–Wright equation

Perhaps the simplest one-dimensional nonlinear ODE is the logistic equation, also called the Verhulst equation, already introduced as the logistic ODE in Chapter 3,

dx/dt = r x(t) (1 − x(t)/xm)   (6.6)

with xm > 0. It was first proposed by Verhulst¹ in 1845 to study the growth of a population density x. For small x and r > 0, x(t) grows exponentially with rate r and reaches a certain saturation value x = xm as t goes to infinity. We have already examined this equation in Section 3.1, equation (3.10), with the substitution r = a − 1, xm = 1 − 1/a, and found that there exist two fixed points, x = 0 and x = xm; the former is stable for r < 0, the latter for r > 0. If r > 0, the solution of (6.6) starting from an arbitrary initial condition x0 > 0 approaches the asymptotic value xm monotonically; see the exact solution (3.11). In one dimension, more complicated behavior like limit cycles or chaotic motion is excluded by the Poincaré–Bendixson theorem.

The saturation term x/xm in (6.6) is justified by the restriction of food or space for the population at time t. Hutchinson² in 1948 and Wright³ in 1955 examined the case where the saturation acts after a certain delay time τ, which may account for the time between birth and sexual maturity. This turns (6.6) into

dx/dt = r x(t) (1 − x(t − τ)/xm),   (6.7)

which has the form of (6.5). Hutchinson noted that for small τ the dynamics remains qualitatively the same as for (6.6), but for increasing τ damped oscillations are obtained, which change to saturated oscillations for τ larger than a certain value τc. As will turn out later, τc = π/2r, and the oscillations set in with the frequency ωc = r. For larger delay times the amplitude of these oscillations increases, while their frequency decreases and their shapes become more and more nonlinear, Figure 6.1.

To solve a delay equation numerically, any method for time discretization as discussed in Chapter 4 can be used. There are, however, some subtleties that require attention:
(i) The dependent variable, say x(t), must be stored in an array over the whole memory time (or that of the maximal delay time). If time is discretized with a step size Δt according to

ti = iΔt,  x(t) = x(ti) ≡ xi,

1 Pierre-François Verhulst, Belgian mathematician, 1804–1849.
2 George Evelyn Hutchinson, British ecologist, 1903–1991.
3 Sir Edward Wright, English mathematician, 1906–2005.


Figure 6.1: Numerical solutions of the Hutchinson–Wright equation (6.7) for xm = 1, r = 1 and different τ = 0.0 (black), 0.3 (red), 1.0 (green), 1.5 (blue), top frame. Bottom frame for τ > τc : τ = 1.8 (black), 2.0 (red), 2.5 (green), 3.0 (blue).

and τ is the maximal delay, this array must have at least nt = INT(τ/Δt) elements, where INT(r) denotes the integer part of r.
(ii) Instead of only one initial value x0 = x(0), it is now necessary to fill the whole memory array with initial values, according to

Mi = x((i − nt − 1)Δt) = xi−nt−1,  i = 1 . . . nt.

Already from here it is clear that we are now dealing with an nt-dimensional problem instead of the one-dimensional one of the no-memory case. But remember that nt is finite only because of the (finite) time discretization. In fact, including a delay term renders the one-dimensional problem (one equation) an infinite-dimensional one, which opens the door to arbitrarily complicated temporal behaviors of the solutions.


But even if Δt is finite, nt can go to infinity if the memory has infinite length, as indicated in (6.3). However, in practice the memory is always finite and there is a maximal τm for which Kk(t > τm) becomes negligibly small.

6.1.2 Pointer method

In the following we show a MATLAB code that solves (6.7) for r = 1, xm = 1, and various τ. For the initial conditions we put x(t) = 0.01, −τ < t ≤ 0:

clear;
tau=input('tau?');       % ask for tau
dt=0.0001;               % time step
nt=int16(tau/dt);        % size of memory array
np=0; l=0;               % pointers for plotting
k=1;                     % pointer on memory array
x=0.01; xd(1:nt)=x;      % initial values
for t=0:dt:60            % time loop
  xt=xd(k);              % fetch delayed xt=x(t-tau)
  x=x+x*(1-xt)*dt;       % Euler forward
  xd(k)=x;               % store x(t)
  k=mod(k,nt)+1;         % increment pointer
  np=np+1;
  if(np==3000)           % store plot values in ypl every 3000 steps
    np=0; l=l+1; ypl(l)=x;
  end;
end;
plot(ypl)

The last nt values of x are stored in the array xd(nt); see Figure 6.2. The pointer k points to the array element of time t − τ. The value of x(t − τ) is read off from xd(k) and, after integrating over one time step, the new value x(t) is written into xd(k). Then the pointer k is incremented. In the beginning, k = 1. If k would exceed nt, it is set to one again. We call this method, where only two accesses to the storage array are necessary instead of shuffling around the whole array at each time step, the "pointer method".


Figure 6.2: The nt = τ/dt past values of x(t) are stored in an array. At time t the pointer points to x(t − τ). After each time step, x(t − τ) is read off and the updated x(t) is stored at k. Then the pointer k is incremented, pointing at time t + dt to the next element, and so on. If k exceeds nt, it is put to one.

6.1.3 Mackey–Glass equation

In mathematical biology and physiology, a large number of models are based on delay differential equations. Certain substances are controlled by negative feedback, for which the simplest model reads

dX(t)/dt = p − γ X(t)   (6.8)

with X the controlled concentration, p its production rate, and γ its decay rate. In general, p may also depend on X and therefore on t in a nonlinear way. For many systems, p does not follow X immediately but rather with a certain time delay, p = p(X(t − τ)), accounting for the length of the production process. Mackey⁴ and Glass⁵ in 1977 [1] proposed a model for the control of the blood cell concentration by the production in the bone marrow, which involves a certain, rather long time delay. Their original model had the form

dP(t)/dt = β θ^m P(t − τ)/(θ^m + P(t − τ)^m) − γ P(t)   (6.9)

with P(t) the concentration of blood cells and positive constants β, γ, θ, τ, m given from physiological data. A dimensionless form of (6.9) is obtained by scaling time with β and P with θ and reads

4 Michael C. Mackey, Canadian-American biomathematician, born 1942.
5 Leon Glass, American chemist, born 1943.

dx(t)/dt = x(t − τ)/(1 + x(t − τ)^m) − μ x(t)   (6.10)

with P = θx and μ = γ/β. There exist two stationary solutions (fixed points), which read x0 = 0 and x0 = (1/μ − 1)^{1/m}. The first one is unstable for μ < 1; the latter exists only for μ < 1 under the condition x ≥ 0 (a concentration) and, as we shall see below, loses stability at a certain value of τ. Figure 6.3 shows time series for three different values of τ. For smaller τ the concentration oscillates, but for large enough delay times even chaotic solutions are possible.

Figure 6.3: Numerical solutions of the Mackey–Glass equation (6.10) for μ = 1/2, m = 10 and different τ = 2 (black), 3 (red), 7 (green). For τ = 7 the time behavior is aperiodic or chaotic.

In Figure 6.4 we present trajectories obtained if x(t − τ) is plotted over x(t). After period doubling, chaos occurs for certain values of τ.

If higher accuracy is desired, it is better to integrate the differential equation(s) by a Runge–Kutta method as introduced in Section 4.3.3. There, the r. h. s. of the ODEs is evaluated at times t, t + Δt/2, t + Δt, and the same must be done for the delayed function x(t − τ). In the subroutine rkg_del(...), explained in Appendix B and included in the fortlib library [2], this is achieved by linear interpolation between x(t − τ) and x(t + Δt − τ). The subroutine EQS, containing the r. h. s. of the ODE(s), now has 4 parameters. For the Mackey–Glass equation it looks like

      SUBROUTINE EQS(rhs,y,yd,t)
      DIMENSION rhs(*),y(*),yd(*)   ! not needed here because nt=n1=1
      COMMON /param/ m,fmu          ! parameters from MAIN
      rhs=yd/(1.+yd**m)-fmu*y       ! r.h.s. of Mackey-Glass eq.
      END


Figure 6.4: Trajectories in the x(t)-x(t − τ) plane for different τ, same parameters as for Figure 6.3.


Note that in general yd is an array of length n1 that contains the n1 delayed variables in the same order as yt(nt,n1). If there are N equations and n1 ≤ N delayed variables, the first n1 components of the vector y(N) must contain the variables that occur delayed on the r. h. s.

6.2 Linearized memory equations and oscillatory instability

6.2.1 Stability of a fixed point

The starting point is the system (6.2), (6.3), but now for N dependent variables,

dx⃗/dt = f⃗(x⃗(t), y⃗1(t), y⃗2(t), . . . , y⃗ℓ(t))   (6.11a)
y⃗k(t) = ∫₀^∞ dτ Kk(τ) x⃗(t − τ),  k = 1 . . . ℓ   (6.11b)

with x⃗, y⃗k, f⃗ ∈ ℝ^N. The kernels are assumed to be normalized,

∫₀^∞ dτ Kk(τ) = 1.

For a stationary solution x⃗ = x⃗0 one finds y⃗k = x⃗0 and

f⃗(x⃗0, x⃗0, . . . , x⃗0) = 0,   (6.12)

from which x⃗0 can be computed. The stability of x⃗0 is determined by the ansatz

x⃗(t) = x⃗0 + u⃗ e^{λt}   (6.13a)
y⃗k(t) = x⃗0 + u⃗ e^{λt} ∫₀^∞ dτ Kk(τ) e^{−λτ} = x⃗0 + u⃗ e^{λt} K̃k(λ),   (6.13b)

where we have introduced the abbreviation

K̃k(λ) = ∫₀^∞ dτ Kk(τ) e^{−λτ},

which is nothing other than the Laplace transform of the kernel Kk(t). Inserting (6.13) into (6.11a) and linearizing with respect to u⃗ yields the linear algebraic system

164 � 6 Ordinary differential equations III ℓ

λu⃗ = [A0 + ∑ Ak K̃ k (λ)]u⃗

(6.14)

k

with the Jacobi matrices A0ij =

󵄨 𝜕fi 󵄨󵄨󵄨 󵄨󵄨 , 𝜕xj 󵄨󵄨󵄨x⃗

Akij =

0

󵄨 𝜕fi 󵄨󵄨󵄨 󵄨󵄨 . 𝜕(yk )j 󵄨󵄨󵄨x⃗ 0

The λ can be in principle determined from the solvability condition ℓ

Det(A0 + ∑ Ak K̃ k (λ) − λ 1) = 0 k

(6.15)

which is the characteristic equation of (6.14). Contrary to the case without memory, this equation is in general not a polynomial of order λN , but can be, depending on K̃ k , a rather involved transcendental function of λ. But also here, if we find one solution having a positive real part, the fixed point x⃗ is unstable due to (6.13).

6.2.2 One equation, one δ-kernel We concentrate on the most simple possibility of N = 1 and ℓ = 1. Then (6.15) turns into ̃ λ = a + b K(λ)

(6.16)

with a=

𝜕f 󵄨󵄨󵄨 󵄨󵄨 , 𝜕x 󵄨󵄨x0

b=

𝜕f 󵄨󵄨󵄨 󵄨󵄨 . 𝜕y 󵄨󵄨x0

Let us assume that for the case of no memory, i. e. x(t) = y(t) and K̃ = 1, the fixed point x0 is stable. Thus it immediately follows from (6.16) a + b < 0.

(6.17)

Taking the δ-kernel K(τ) = δ(τ − τ0 ), renders (6.2) into a delay equation dx = f (x(t), x(t − τ0 )) dt

(6.18)

̃ and from (6.16) with K(λ) = exp(−λτ0 ) λ = a + b e−λτ0 , a transcendental equation for λ.

(6.19)

6.2 Linearized memory equations and oscillatory instability



165

6.2.2.1 Iterative solution Even for this most simple situation, the characteristic equation has an infinite number of solutions and can only be solved approximately for λ. In the following we use the decomposition λ = σ + iω where σ denotes the growth rate and ω the frequency. One may figure out an iterative solution of the form λ(n+1) = g(λ(n) ),

(6.20)

̃ (n) ). g(λ(n) ) = a + b K(λ

(6.21)

with g as the r. h. s. of (6.16):

However, the convergence criterion of the iteration is 󵄨󵄨 dg(λ) 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 < 1, 󵄨󵄨 dλ 󵄨󵄨

(6.22)

where λ is the solution of (6.16) we are searching for. For the more simpler case of a δ-kernel this leads with (6.19) to |b|τ0 e−σ

(n)

τ0

0. From (6.27b) we find 0 < ωτ0 < π, from (6.27a) −

π π < ωτ0 < 2 2

if a > 0 and π 3π < ωτ0 < 2 2 if a < 0. Finally, the critical delay is obtained by solving (6.27b) for τ0 : − arcsin(ω/b) + 2nπ, { { ωc τc = {arcsin(ω/b) + (2n + 1)π, { {π/2 + 2nπ,

a>0 a

2 , 1−μ

otherwise ωc would be imaginary. For the values we used in Section 6.1.3 we find ωc =

√15 ≈ 1.94, 2

τc =

√15 1 [π − arcsin( )] ≈ 0.94. ωc 4

6.2.3 One equation, one arbitrary kernel If the kernel represents an arbitrary memory function, a solution of (6.16) can become rather involved. Of course, the threshold of an Hopf instability is still determined by putting λ = iω and separation of imaginary and real parts of (6.16) similar to (6.27) yields ̃ f1 (ω, τ0 ) = a + b Re(K(iω)) =0 ̃ f2 (ω, τ0 ) = ω + b Im(K(iω)) = 0.

(6.32a) (6.32b)

The “delay time” τ0 stands now for a certain parameter characterizing the kernel K(τ). Depending on the particular form of K̃ no general strategy to solve this set can be given. But for the example of a Gaussian kernel K(τ) =

1 𝒩

2

e−(τ−τ0 ) /γ

2

with the normalization 𝒩 =

√πγ (erf(τ0 /γ) + 1), 2

one finds erf(τ0 /γ − iωγ/2) + 1 −iωτ0 −ω2 γ2 /4 ̃ K(iω) = e e . erf(τ0 /γ) + 1

(6.33)

170 � 6 Ordinary differential equations III

Figure 6.7: Graphical solutions, zero contours of f1 (red) and f2 (black) for the Gaussian kernel with a = 1, b = −2, and γ = 1 (left), γ = 3 (right). Note the similarity of the left frame with Figure 6.6.

An analytical solution of (6.32) is no longer available. But note that for the limit γ → 0 (6.33) is equal to the δ-kernel discussed in Section 6.2.2. A graphical solution, as introduced in Section 6.2.2.3, can always be obtained, even for the cases of more than one variable and more than one kernel. Figure 6.7 shows the contour lines of the zeros of (6.32). It is seen that the critical delay time (now the location of the maximum of the Gaussian) increases with increasing width γ. The numerical method for integrating the full nonlinear memory equation with arbitrary kernel differs from that of a delay equation in so far as the integral in (6.3) has to be evaluated at each time step. Of course, the memory should have a finite maximal length t0 (the kernel must be normable) and the integral can be approximated as t0

n

y(t) = ∫ dτ K(τ)x(t − τ) ≈ ∑ Ki xj−i Δt, 0

i=0

j = t/Δt,

(6.34)

where Ki = K(iΔt),

xi = x(iΔt),

n = t0 /Δt.

As for the delay case, the last n values of xi had to be stored in an array, where xn corresponds to x(t) and x0 to x(t − t0 ). The evaluation needs n multiplications and additions at each time step, which increases the numerical effort considerably.

6.3 Chaos and Lyapunov exponents for memory systems



171

6.3 Chaos and Lyapunov exponents for memory systems As already explained in Chapter 4 characterization of a chaotic temporal evolution can be done best by computing the spectrum of Lyapunov exponents. 6.3.1 Projection onto a finite dimensional system of ODEs We restrict the treatment to the case of one variable (6.2). Although there is only one ODE for one variable, we saw that the system has an infinite dimension in function space if delay or memory is included. But as in Section 6.1.1 we can discretize time by introducing a finite time step and define n + 1 variables xn = x(t),

xn−1 = x(t − Δt),

...,

x0 = x(t − nΔt)

(6.35)

with n = INT(t0 /Δt) where t0 stands for the largest length of the memory kernels. In this way, we have approximated the memory equation to an n + 1 dimensional system with an n + 1 dimensional vector of variables x⃗ = (x0 , . . . , xn ). For the temporal evolution of xn , we have from (6.2) dxn = f ̃(x)⃗ dt

(6.36)

where f ̃ follows from f by evaluation of the integrals (6.3) as sums, see (6.34). Now we ⃗ to the next time step, can update the complete vector x(t) ⃗ xn (t + Δt) = xn (t) + Δt f ̃(x(t)) xi (t + Δt) = xi+1 (t),

i = 0 . . . n − 1.

(6.37a) (6.37b)

The first relation is merely a first-order forward discretization of the time derivative of (6.36), the latter n follow directly from (6.35). One easily shows that the scheme (6.37) is a first-order discretized version of the ODE system dxn ⃗ = f ̃(x(t)) dt dxi 1 = (xi+1 (t) − xi (t)), dt Δt

(6.38a) i = 0 . . . n − 1,

(6.38b)

having now the form (3.15) or (6.1) of an n + 1 dimensional autonomous initial value problem without delay. 6.3.2 Determination of the largest Lyapunov exponent Instead of computing all n + 1 Lyapunov exponents, we present a method according to Section 4.6.3.1 that determines the largest Lyapunov exponent only. This is often suffi-

172 � 6 Ordinary differential equations III cient, since the largest one allows to distinguish between chaotic and regular (periodic or quasi-periodic) behavior. In principle, we can apply the routine, developed in Chapter 4, that computes all n + 1 Lyapunov-exponents to the set (6.38). However, n depends on the time step and for small Δt (higher accuracy) n can become rather large. On the other hand, all equations of (6.38) except for the first one are already linear and the numerical effort can be strongly reduced if the pointer method outlined in Section 6.1.2 is applied instead of solving the n differential equations (6.38b) or performing the n variable allocations of (6.37b). We introduce the method with a certain example of a one-variable delay equation that shows chaotic solutions. The simplest equation with a sinusoidal nonlinearity is dx = sin x(t − τ) dt

(6.39)

with the only control parameter τ, the time delay. Fixed points of (6.39) are x0 = mπ which for τ = 0 are stable (unstable) for odd (even) m. Without restriction we may take m = 1. Linearizing yields the transcendental equation (6.19) with a = 0,

b = −1.

For τ < 1/e there exist two negative λ that merge at τ = 1/e to a complex conjugate pair with negative real part (stable focus). If τ reaches τc = π/2, the real part becomes zero and a Hopf instability occurs with frequency ωc = 1, according to (6.28) and (6.30). Figure 6.8 shows three numerical solutions for different τ. For τ ≈ 5 the limit cycle becomes unstable and chaotic behavior occurs, for much larger τ a kind of erratic Brownian motion emerges, where x(t) jumps over the unstable fixed points 2mπ either in direction to the left ((2m − 1)π) or to the right ((2m + 1)π). The largest Lyapunov exponent is computed by the function dlyapd(...) that is explained in the Appendix B.4.4. The full code is in the fortlib library [2]. The main trajectory is computed in x(i). After a certain number of preliminary iterations, the perturbations are determined in aux(j), starting with a certain (unimportant) initial condition. After nt time steps, the iterations are normalized and the logarithm of their absolute values is summed up. This process terminates when t reaches tend. The user must supply the two functions SUBROUTINE eqs(xt,x0,dt) REAL*8 xt,x0,dt x0=x0+dt*sin(xt) ! Delay-ODE, xt: x(t-tau) END

6.3 Chaos and Lyapunov exponents for memory systems

� 173

Figure 6.8: Three solutions of the delay-differential equation (6.39) for different τ. The last two show chaotic behavior.

for the full system, and SUBROUTINE eql(xt,x0,xl,dt) REAL*8 xt,x0,dt,xl x0=x0+dt*cos(xl)*xt ! Delay-ODE (linearized) END ! xt: x(t-tau), xl: main trajectory (t) for the linearized one, where the linearization is done around the main trajectory xl. Figure 6.9 shows the largest Lyapunov exponent as function of τ. The two analytically determined values of τ = 1/e ≈ 0.368 (birth of a stable focus) and τ = π/2 ≈ 1.57 (birth of a limit cycle) as well as the transition to chaos for τ > 5 are clearly visible. Also a periodic window can be recognized at τ ≈ 5.6. 6.3.3 The complete Lyapunov spectrum Writing the memory equation in the form of (6.38), we can use the method explained in Section 4.6.3.5 and determine the whole spectrum of n + 1 Lyapunov exponents. But

174 � 6 Ordinary differential equations III

Figure 6.9: Largest Lyapunov exponent of (6.39) as a function of τ. Chaos occurs at τ ≈ 5.

Figure 6.10: First five Lyapunov exponents of (6.39) as a function of τ.

mind that in practice n can be a very large number depending on the ratio of maximal memory time and (arbitrary) time step. Then the method becomes slow and inaccurate. But the code provided allows also for the computations of only a certain number of the largest exponents. The complete FORTRAN code is in the fortlib library [2], the header can be found in Appendix B.4.5. Figure 6.10 shows the first five Lyapunov exponents of the delay equation (6.39) over the delay time τ. For larger τ more and more exponents become positive, indicating complex chaotic behavior of the solutions, for a more detailed study see [3].

6.3.4 Problems The mean temperature T(t) of a room that is heated by a heat source driven by a thermostat shall be given as dt T(t) = −γT(t) − α(tanh β(T(t − τ) − T0 ) − 1).

(6.40)

6.4 Epidemic models



175

Here, γ stands for the loss rate through the walls and windows, α and β denote the characteristics of the thermostat and T0 is a prescribed temperature. The delay time τ accounts for the retardation of the thermostat. 1. By scaling of T, time and β, bring (6.40) to the form dt ̃T(̃ t)̃ = −T(t)̃ − tanh β(̃ T(̃ t ̃ − τ)̃ − T̃0 ) + 1. 2. 3. 4.

(6.41)

For the following, take the nondimensional form (6.41), but leave all tildes. Compute the fixed point Ts up to the order ε for T0 = 1 + ε, |ε| ≪ 1. Linearize (6.41) (order ε) and compute the critical τ for which Ts becomes unstable. Compute also the critical frequency. Give numerical solutions of (6.41) using the pointer method from Section 6.1.2.

6.4 Epidemic models 6.4.1 Predator-prey systems and SIR model Since the advent of the Covid-19 pandemic in 2020 the interest in the mathematical modulation of the development of infectious diseases has grown enormously. Nevertheless, the simplest model serving nowadays as the basic for numerous extensions and specialisations goes back to the work of Kermak6 and McKendrick7 already in 1927 [4] and is called SIR model. The acronym SIR stands for Suscetible, Infectious, Recovered and the model consists of three nonlinearly coupled ODEs describing the time behavior of the three relative concentrations s(t) = S(t)/N,

j(t) = I(t)/N,

r(t) = R(t)/N

(6.42)

where N =S+I +R denotes the total number of individuals of the population. SIR models are also named compartment models because each individual can only be in one of the three compartments S, I, R, depending on its state of health. From the above definition it follows that s(t) + j(t) + r(t) = 1.

(6.43)

The SIR model can be considered a special case of a predator-prey system if the susceptible individuals are identified with the prey and the infected with the predators. In 6 William Ogilvy Kermack, Scottish biochemist, 1898–1970. 7 Anderson Gray McKendrick, Scottish physician, 1876–1943.

176 � 6 Ordinary differential equations III Section 4.3.3.3 we introduced the Lotka–Volterra equations dn1 = α1 n1 − α2 n1 n2 dt dn2 = −β1 n2 + β2 n1 n2 , dt

(6.44) αi , βi > 0

as the most simple predator-prey system proposed by Lotka8 in 1920 and Volterra9 in 1931. Numerical solutions yield an oscillatory behavior of the prey (n1 ) and the predator (n2 ) rates around the fixed point n1 = β1 /β2 , n2 = α1 /α2 with the approximate frequency (small amplitudes) ω = √α1 β1 , see Figure C.2. The basic SIR model is obtained if we neglect the reproduction of the prey (susceptibles) and put α1 = 0. Further, we assume α2 = β2 and obtain from (6.44) with s = n1 , j = n2 ds = −α2 s j dt dj = −β1 j + α2 s j. dt

(6.45a) (6.45b)

In the language of epidemic, α2 denotes the infection rate and 1/β1 is the average time of being infectious, or the time of recovery. If neither death nor birth processes are included, the rate equation for the recovered simply reads dr = β1 j, dt

(6.46)

see Figure 6.11.

Figure 6.11: The dynamics of (6.45), (6.46) consists in a unidirectional transition from compartment S via I to R.

6.4.1.1 Dimensionless SIR equations One can still get rid of one parameter by scaling the time with β1 , leading to the dimensionless SIR model ds = −R0 s j dt 8 Alfred James Lotka, American mathematician, 1880–1949. 9 Vito Volterra, Italian mathematician, 1860–1940.

(6.47a)

6.4 Epidemic models

dj = −j + R0 s j dt dr =j dt



177

(6.47b) (6.47c)

with the only one control parameter left R0 =

α2 , β1

(6.48)

called the basic reproduction number. Adding the three equations (6.47), one sees that s + j + r is a conserved quantity, in agreement with (6.43). Because r does not couple to the first two equations, it is sufficient to consider solely (6.47a), (6.47b) and compute r from (6.43). 6.4.1.2 Fixed points and their stability The system (6.47) has infinitely many fixed points laying dense on the s-axis j(0) = 0,

0 ≤ s(0) ≤ 1.

The fixed points denote a disease-free (healthy) state, either before the disease spread out or when the virus has completely died. The latter is called endemic equilibrium. A linear stability analysis according to s(t) = s(0) + u exp(λt)

j(t) = v exp(λt)

yields after linearization from (6.47) the linear system λ u = −R0 s(0) v λ v = (R0 s(0) − 1)v that has the solvability condition λ(λ − R0 s(0) + 1) = 0. The two roots are λ1 = 0,

λ2 = R0 s(0) − 1.

The marginal mode belonging to λ1 consists in a mere shift of (the arbitrary) s(0) . From λ2 we see that those fixed points located on the s-axis on the right side from 1/R0 are unstable: s(0) >

1 R0

󳨀→

λ2 > 0.

178 � 6 Ordinary differential equations III If we assume for instance a healthy initial state with s(0) = 1, this state becomes unstable for R0 > 1 and the disease spreads out. 6.4.1.3 Approximate analytical solution Before we present numerical solutions, we shall consider some results that can be obtained analytically. Dividing (6.47b) by (6.47a) yields an ODE for the trajectories j(s) in sj-phase space: dj 1 = − 1. ds R0 s

(6.49)

This can be integrated to the ‘first integral’ j(s) =

1 s ln( ) + s0 − s + j0 R0 s0

(6.50)

where s(t0 ) = s0 , j(t0 ) = j0 are the initial values at t = t0 . Figure 6.12 shows several trajectories for s0 = 1, j0 = 0 (healthy state) and different R0 . For R0 > 1 all trajectories end in an endemic equilibrium with je = 0 and se < 1. Thus, they connect an unstable fixed point with a stable one, an object called heterogeneous orbit. In the same way, s(r) can be found solving the equation ds = −R0 s dr

(6.51)

which is obtained by dividing (6.47a) by (6.47c). It has the solution s(r) = s0 exp (−R0 (r − r0 )).

(6.52)

Figure 6.12: Trajectories for R0 = 1.1, 1.2, 1.3, 1.4. The unstable fixed points (s > 1/R0 ) are connected with the stable ones (s < 1/R0 ) by heteroclinic orbits.

6.4 Epidemic models



179

To find r(t), we can write (6.47c) with the help of (6.43) as dr =1−r−s dt and, inserting (6.52) (for the sake of simplicity, we take r0 = 0, t0 = 0), dr = 1 − r − s0 exp (−R0 r) dt or r(t)

t

0

0

dr ′ = ∫ dt ′ = t. ∫ 1 − r ′ − s0 exp (−R0 r ′ )

(6.53)

Unfortunately, the antiderivative of this integral is not known. But as long as r ≪ 1, we may expand the exponential up to the third term and find from (6.53) the approximate solution r(t) =

√X 1 1 − [1 − √X tanh( t − φ)] R0 s0 R20 2

(6.54)

with the abbreviations X = (s0 R0 − 1)2 + 2s0 R20 (1 − s0 ),

φ = atanh(

s0 R0 − 1 ). √X

Finally, j(t) is obtained from (6.47c) j(t) =

dr X = dt 2s R2 cosh2 ( √X t − φ) 0 0 2

(6.55)

which has the form of a pulse with width 1/√X. Figure 6.13 shows the three functions r, j, s = 1 − r − j over time for s0 = 0.999, r0 = 0 and different R0 . Clearly, the epidemic enters into its final phase, denoted with herd immunity, if the fraction of susceptible falls below 1/R0 . For the limit t → ∞ one finds j = 0. From (6.54), one can compute the fraction of infected people when the epidemic is over (endemic equilibrium) re = r(t → ∞) =

1 1 − √X − . R0 s0 R20

For s0 = 1, this is simplified to re =

2 (R0 − 1). R20

180 � 6 Ordinary differential equations III

Figure 6.13: The approximate analytical solutions s(t), j(t), r(t) for R0 = 1.1 (red), 1.2 (blue), 1.3 (green), 1.4 (light blue).

6.4.2 SIRS model In the SIR model, the dynamics of the infected consists in a single peak function that reaches zero again in the long time limit, leaving a diminished number of susceptible se < 1/R0 and a certain amount of recovered re = 1 − se . If, on the other hand, the immunity gained after infection has a finite life time, another transition from R back to S is appropriate, see Figure 6.14.

Figure 6.14: The SIRS model has an additional transition from the R to the S compartment, reflecting the finite life time of immunity.

6.4.2.1 Dimensionless SIRS equations The dimensionless system (6.47) is thus extended to the SIRS model and reads ds = −R0 s j + μ r dt dj = −j + R0 s j dt dr = j − μr dt

(6.56a) (6.56b) (6.56c)

where μ is the immunity loss rate and 1/μ the finite life time of immunity, in units of the time of recovery. Note that also here the relation s + j + r = 1 holds and again it is

6.4 Epidemic models



181

sufficient to consider only the first two rate equations after eliminating r ds = −R0 s j + μ(1 − s − j) dt dj = −j + R0 s j. dt

(6.57a) (6.57b)

6.4.2.2 Fixed points and their stability The system (6.57) has two fixed points. The first one, jh = 0,

sh = 1

(6.58)

corresponds to the healthy state. Linear stability analysis shows that this state becomes unstable for R0 > 1. The other fixed point je =

μ(R0 − 1) , R0 (μ + 1)

se =

1 R0

(6.59)

exists only for R0 > 1 and denotes the endemic equilibrium, Figure 6.15. The endemic equilibrium is unconditionally stable.

Figure 6.15: Endemic equilibrium as a function of R0 for μ = 0.1.

6.4.2.3 Numerical solutions For the SIRS model a first integral as (6.50) cannot be found. We resort to numerical solutions of (6.57) by applying a fourth-order Runge-Kutta scheme. Figure 6.16 shows the heteroclinic orbits connecting the two fixed points (6.58), (6.59) for different R0 . All endemic fixed points lie on the dashed line, given as je =

μ(1 − se ) , μ+1

182 � 6 Ordinary differential equations III

Figure 6.16: SIRS model (6.56), trajectories for μ = 0.1 and R0 = 1.1, 1.2, 1.3, 1.4. The unstable fixed point (saddle) (s = 1, j = 0, healthy state) is connected with the stable ones (foci) laying on the dashed line (je = μ(1 − se )/(μ + 1), endemic states) by heteroclinic orbits.

their location depends on R0 . The endemic states are stable nodes for 1 < R0 < 1 +

μ + O(μ2 ), 4

for larger R0 they turn into stable foci. Figure 6.17 shows the dynamics of s(t), j(t), r(t) for different R0 . The trajectories leave the unstable fixed point sh = 1, jh = 0 in the direction of the unstable manifold (s, j) = (−μ−R0 , R0 −1+μ) and approach the stable fixed point (6.59) in the long time limit. The dynamics is qualitatively similar to the SIR model, however, due to the continuing reproduction of susceptible individuals by weakening of immunity, the virus, if once in the world, never dies out completely, je ≠ 0.

Figure 6.17: Numerical solutions s(t), j(t), r(t) of (6.56) for μ = 0.1 and R = 1.1 (black), 1.2 (red), 1.3 (green), 1.4 (blue).

6.4 Epidemic models



183

6.4.3 SIRS model with infection rate control Next, we shall allow for a dependence of the infection rate R0 in (6.56) on the fraction of infected individuals j to model mitigation measures against the epidemic like social distancing, wearing masks, etc. Since the strength of the containment measures should normally increase with the number of infected individuals, it is nearby to replace in (6.56) R0 by R0 (j) =

r0 , f (j)

r0 > 0

(6.60)

with f (j) ≥ 1 as a monotonically increasing function of j where f (0) = 1. The healthy state (6.58) is not affected from f , but the endemic equilibrium is now a solution of je (1 + μ) +

μ f (je ) −μ=0 r0

(6.61)

and depends on f . For the most simple form f (j) = 1 + αj,

α≥0

(6.62)

eq. (6.61) becomes linear in je and je =

μ(r0 − 1) , r0 (μ + 1) + αμ

se =

μ + αμ + 1 , r0 (μ + 1) + αμ

re =

r0 − 1 . r0 (μ + 1) + αμ

(6.63)

The number of infected individuals in the endemic equilibrium is monotonically decreasing with increasing α due to the containment measures. Endemic equilibrium exists again only for r0 > 1 where it is proved to be unconditionally stable. For r0 ≫ 1, je reaches the saturation value μ/(1 + μ), independent on f .

6.4.4 Delayed infection rate control The containment measures normally need a certain time to become effective. That means they are not instantaneously coupled to the number of infected but follow them rather with a certain time delay. To include this issue, instead of (6.60) we may formulate R0 (j) =

r0 f (j(t − τ))

(6.64)

where τ stands for the time between cause (infection number) and effect (containment measures become active). If we assume f given as (6.62) the complete model reads r0 j s ds =− + μ(1 − j − s) dt 1 + αj(t − τ)

(6.65a)

184 � 6 Ordinary differential equations III r0 j s dj = − j. dt 1 + αj(t − τ)

(6.65b)

Its solutions are defined by the control parameters r0 , μ, τ and α. Due to the delay term, the initial conditions have to be extended to s(0),

j(t), −τ < t ≤ 0.

6.4.4.1 Stability of the endemic equilibrium To investigate the stability of the endemic equilibrium, we insert s = se + u eλt ,

j = je + v eλt

with se , sj from (6.63) into (6.65) and linearize with respect to (u, v). A linear system of the form (6.14) is found with the solvability condition ̃ ̃ Det(λ) = aλ2 + λ(r0 + αK(λ) + aμ) + r0 (1 + μ) + μα K(λ) = 0,

(6.66)

̃ where K(λ) = exp(−λτ) and a = 1+1/je . Since α, μ, r0 > 0, there exists no real valued λ = 0 as solution of (6.66). As a consequence, the endemic equilibrium (6.63) can only become unstable due to an oscillatory (Hopf) instability. Inserting λ = iω, (6.66) is separated into real and imaginary parts and a quadratic equation for the squared Hopf frequency ω2 can be derived: a2 ω4 + ω2 (r02 + a2 μ2 − α2 − 2r0 a) + r02 (1 + μ)2 − α2 μ2 = 0 from where ω is determined from the larger root. Finally, τ follows from τ=

−r (μ(1 + μ) + ω2 ) 1 arccos( 0 ). ω α(μ2 + ω2 )

Figure 6.18 shows τ and the time period 2π/ω for which the fixed point je , se becomes oscillatory unstable as a function of r0 for fixed α = 50 and μ = 0.1. 6.4.4.2 Numerical solutions The system (6.65) is solved numerically by a Runge-Kutta method with fixed time step δt = 0.001 as explained in Section 6.1.3. Figure 6.19 shows the number of infectious and the actual effective reproduction number Reff (t) =

r0 s(t) 1 + αj(t − τ)

(6.67)

over time t. From Figure 6.18 the endemic equilibrium is unstable for r0 = 1.6 if τ > 4.9 with the Hopf frequency ω = 0.33. If τ is increased, the oscillations become more

6.4 Epidemic models



185

Figure 6.18: τ and 2π/ω as a function of r0 for α = 50 and μ = 0.1. Time in units of the recovery time 1/β1 . Above the red line, the endemic equilibrium is oscillatory unstable.

Figure 6.19: Top: j(t) over time for (6.65), dashed line is the endemic equilibrium je . Bottom: effective reproduction number (6.67). Parameters as in Figure 6.18, r0 = 1.6, τ = 5.2 (black) and τ = 6.2 (red).

and more anharmonic, their frequency decreases and their amplitude increases significantly, together with the mean values of j. We find ⟨j⟩ = je = 0.008

for τ < 4.9,

⟨j⟩ = 0.009

for τ = 5.2,

⟨j⟩ = 0.012

for τ = 6.2.

It is interesting to see that also here the inclusion of delay terms allows for the occurrence of oscillatory instabilities and the existence of limit cycles. It is evident that such a periodic behavior is normally encountered in the dynamics of many diseases. The in these days most prominent example, the Covid-19 epidemic, comes in pronounced waves, but also other diseases like measles or mumps may show periodic behavior,

186 � 6 Ordinary differential equations III which is not always only triggered by external influences (seasons), but rather occurs by self-organized mechanisms. 6.4.5 Problems Consider the Lotka–Volterra equations (6.44) for the case αi = βi = 1. 1. Show that for population amplitudes close to the fixed point ni0 = 1 (linear case) the prey population follows the predator with a certain time delay: n1 (t) = A n2 (t − τ). 2.

Determine τ and A. Substituting (6.68) into the 2nd eq. (6.44) yields the Hutchinson–Wright equation (6.7): dt n2 (t) = n2 (t) − A n2 (t) n2 (t − τ).

3.

(6.68)

(6.69)

Show that for the case of marginal stability λ = iω the frequency of (6.69) is the same than that of (6.44) if you take for A and τ the values determined in (1.). Show numerically that the solution of (6.44) (n2 (t)) is well approximated by the solution of (6.69). Play with the initial conditions of (6.69) to obtain agreement in amplitude and phase.

6.5 Noise and stochastic differential equations Before we continue with our epidemiological models, we shall do a brief excursion into the world of fluctuations and stochastic differential equations (SDE). For a much more detailed discussion, we refer to [5]. 6.5.1 Brownian motion Brown10 in 1827 observed the irregular motion of pollen grains suspended in water. Einstein, almost 80 years later, explained this motion by the random impacts coming from the collisions with the water molecules. A simple one-dimensional model can be formulated just assuming that the impacts come regularly with dimensionless time distance Δt but differ arbitrarily in their direction. Let xi be the position of the grain particle at time t = iΔt. Then xi+1 = xi + aξi Δt 10 Robert Brown, Scottish botanist, 1773–1858.

(6.70)

6.5 Noise and stochastic differential equations



187

is the position at the next time step and ξi stands for a random variable that can take only the two equally distributed values ξi = ±1. The amplitude (velocity) a denotes the strength of the impacts. The crucial point is to assume the ξi completely uncorrelated, thus ⟨ξi ξj ⟩ = δij

(6.71)

where ⟨. . .⟩ denotes the average over different realizations of a sequence ξi (ensemble average) and δij is the Kronecker-delta. The position of the particle at time t = NΔt is found as N

xN = ∑ Δxi + x0 i=1

where Δxi = xi − xi−1 and from (6.70) N−1

xN = aΔt ∑ ξi + x0 . i=0

We can put the origin of the coordinate system to x0 and shift the index at ξ so N

xN = aΔt ∑ ξi . i=1

(6.72)

A path described by (6.72) is called Wiener process after Wiener11 who studied this kind of random walks in detail. Of course, the value of xN depends on the random sequence ξi , see Figure 6.20, and possesses a large variety. But we can easily compute the mean value of many realizations as N

⟨xN ⟩ = aΔt ∑⟨ξi ⟩ = 0 i=1

where we used the fact that ξ is equally distributed and therefore ⟨ξi ⟩ = 0. For the square of xN of a single path of N steps we find N

xN2 = a2 Δt 2 ∑ ξi ξj i,j=1

11 Norbert Wiener, American mathematician, 1894–1964.

188 � 6 Ordinary differential equations III

Figure 6.20: Solutions of (6.70) with Δt = 1 and a = 1 for different random sequences ξ. The dashed lines denote the mean values of ±√⟨x 2 ⟩ = ±√t, see (6.75).

and for the ensemble average N

N

i,j=1

i,j=1

⟨xN2 ⟩ = a2 Δt 2 ∑ ⟨ξi ξj ⟩ = a2 Δt 2 ∑ δij = a2 Δt 2 N,

(6.73)

where we used (6.71). With N = t/Δt we can compute the variation ⟨x 2 ⟩ at time t with ⟨x 2 ⟩(t) = a2 tΔt. This is a strange result: the variation should not depend on the time step, numerical artifacts neglected! So, the correct result is obtained by replacing in (6.70) a with a/√Δt, leading to xi+1 = xi + aξi √Δt

(6.74)

where a is independent of the time step and the correct result ⟨x 2 ⟩(t) = a2 t

(6.75)

is found. Although ⟨x⟩ = 0, we see that ⟨x 2 ⟩ → ∞, if time goes to infinity, expressing the large variance of the single paths of the Wiener process.

6.5 Noise and stochastic differential equations



189

6.5.2 Stochastic differential equation Now, we assume an (infinitely) small time step Δt and write (6.70) as dx = aξ(t)dt or dx = aξ(t). dt This is the most simple version of a stochastic first-order differential equation. It is sometimes called Langevin12 equation. For constant a, its solution is the Wiener process. We can also study the more general case dx = b + aξ(t), dt

(6.76)

where a and b can be functions of x and t as well. If a = a(x) the noise is termed multiplicative, otherwise additive. Occasionally, b is called drift coefficient. Applying the Euler forward method on (6.76) we arrive at the Euler–Maruyama scheme for stochastic differential equations xi+1 = xi + bΔt + aξi √Δt,

(6.77)

where we did the same substitution for a as in (6.74). 6.5.2.1 Example: the noisy Verhulst equation For demonstration, we shall investigate the influence of noise on the logistic (Verhulst) equation (6.6) with r = xm = 1. To this end, we put in (6.76) b = x(1 − x) and a = σx, dx = x(1 − x) + σxξ(t). dt

(6.78)

Because a depends on x, our noise term is multiplicative. The explicit Euler–Maruyama scheme reads xi+1 = xi + xi (1 − xi ) Δt + σxi ξi √Δt.

(6.79)

For the deterministic case σ = 0 two fixed points x = 0, 1 exist, the first one unstable, the latter stable. An exact solution for σ = 0 reads x(t) =

x0 x0 + (1 − x0 ) e−t

with the initial value x0 = x(0). 12 Paul Langevin, French physicist, 1872–1946.

(6.80)

190 � 6 Ordinary differential equations III

Figure 6.21: Solutions of (6.79) for different σ = 0.1 (black), 0.3 (red), 1.0 (green), 1.5 (blue). The black dashed line corresponds to the deterministic solution (6.80). For σ = 1.5, x → 0 in the long time limit.

Figure 6.21 shows solutions of (6.79) for Δt = 0.001, x0 = 0.1 and different σ. It is interesting to see that a large value of σ can stabilize the in the deterministic case unstable fixed point x = 0 and leads to extinction of the population number x. This can be understood by considering (6.79) close to xi = 0, i. e., neglecting the xi2 term. Then (6.79) can be written as xi+1 = xi (1 + Δt + σξi √Δt) or N

x(NΔt) = xN = x0 ∏(1 + Δt + σξi √Δt).

(6.81)

i

If N is large and ξi = ±1 with equal probability, we may say that N/2 of the factors of the product belong to ξ = −1 and N/2 to ξ = 1, thus N/2

x(NΔt) = x0 [(1 + Δt − σ √Δt)(1 + Δt + σ √Δt)] or, using binomial theorem, N/2

x(NΔt) = x0 ((1 + Δt)2 − σ 2 Δt)

N/2

= x0 (1 + (2 − σ 2 )Δt)

.

(6.82)

For the last conversion we have neglected the term Δt 2 in agreement with the order of the Euler method. From the last term in (6.82) it is clear that if 1 2 σ >1 2 x(t) tends to zero for large t.

6.5 Noise and stochastic differential equations



191

6.5.3 Stochastic delay differential equation Next, we examine the influence of (multiplicative) noise on delay-differential equations. As the most simple of such models, we may use the earlier studied Hutchinson equation (6.7) with a noise term as in (6.78). The scaled model has two free parameters τ and σ and takes the form dx = x(t)(1 − x(t − τ)) + σ x(t) ξ(t). dt

(6.83)

Note that the noise here acts on the actual value of x but could also have a delay, replacing the last term with σ x(t − τ) ξ(t). For the epidemiological model that follows this will be the case. In Figure 6.22 we show numerical solutions of (6.83) with a combination of the schemes developed for (6.7) and (6.78). Even for weak noise the amplitudes of the oscillations become rather irregular and tend to form bursts, whereas the frequency remains almost unchanged. For larger σ a main frequency is still present but the dynamics become more and more unpredictable. The amplitudes may reach very small values compared to the deterministic case, note the logarithmic vertical scale. As in the non-delayed case, strong noise may suppress completely the instability and changes the unstable fixed point x = 0 into a stable one.

6.5.4 SIRS model with delayed and noisy infection rate control Comparing the oscillations depicted in Figure 6.19 with real world data, the periodic behavior is much too regular. Epidemic waves posses varying frequency and amplitude. It is nearby to account the noisy influence of the environment for this irregularity and include stochastic terms into the models. A straightforward assumption is that of a fluctuating infection rate. We replace r0 in (6.65) by rf (t) = r0 (1 + σ ξ(t)) where ξ(t) is a Gaussian distributed random variable (white noise) with ⟨ξ(t)⟩ = 0,

⟨ξ(t)ξ(t ′ )⟩ = δ(t − t ′ )

and σ denotes the noise intensity. Thus, the stochastic delay model reads now ds = [− dj = [

r0 j s σ r0 j s + μ(1 − j − s)] dt − dW 1 + αj(t − τ) 1 + αj(t − τ)

r0 j s σ r0 j s − j] dt + dW , 1 + αj(t − τ) 1 + αj(t − τ)

(6.84a) (6.84b)

192 � 6 Ordinary differential equations III

Figure 6.22: Solutions of (6.83) for different σ and τ = 2. The black line corresponds to the deterministic solution σ = 0. For σ = 1.8, x → 0 in the long time limit.

where dW is the one-dimensional Wiener process with dW = ξ(t) dt. A numerical realization of (6.84) applying the Euler–Maruyama method with constant time step δt from Section 6.5.2 reads sk+1 = sk + [−

r0 j k sk σ r 0 j k sk √ + μ(1 − jk − sk )] δt − z δt 1 + αjk−n 1 + αjk−n k

(6.85a)

6.5 Noise and stochastic differential equations

jk+1 = jk + [

r0 j k sk σ r0 jk sk √ − j ] δt + z δt. 1 + αjk−n k 1 + αjk−n k

193



(6.85b)

Here, n = τ/δt, jk = j(kδt), sk = s(kδt) and zk is a Gaussian or Bernoulli distributed random variable with mean zero and variance one, ⟨zk ⟩ = 0,

⟨zk zℓ ⟩ = δkℓ

(6.86)

where δkℓ denotes the Kronecker symbol. For δ → 0, the scheme (6.85) converges to the Itô stochastic ODE system (6.84), for more details see [6]. 6.5.4.1 Irregular periodic behavior We repeat the simulations of Section 6.4.4.2, now including fluctuations. System (6.85) is iterated numerically with a fixed time step δt = 10−3 . The random variable zk is computed by an equally distributed series zk = ±1 with probability 1/2, fulfilling (6.86). The result for σ = 0.075 is shown in Figure 6.23. A main influence of the noise terms can be seen on the amplitudes of the oscillations. Contrary to the series of Figure 6.19, there is now no distinct difference between the amplitudes of τ = 5.2 and τ = 6.2. The main frequency decreases with increasing delay time for both cases.

Figure 6.23: Infection and Reff for the stochastic model (6.84) with σ = 0.075, other parameters as in Figure 6.19.

6.5.4.2 Outbreaks For large σ, the infection dynamics shows long phases where the infection number remains very small, interrupted by sharp periodic bursts, Figure 6.24. The amplitudes of

194 � 6 Ordinary differential equations III

Figure 6.24: Infection number for the stochastic model with large fluctuations σ = 0.6, r0 = 1.6, τ = 4.

these outbreaks are larger up to a factor 10 than those for the deterministic model (Figure 6.19) and may differ strongly from each other. In this context, it is interesting to note that for large σ, the minimal values for j(t) become very small. For the series with σ = 0.6 we have min(j) ≈ 10−6 , for σ = 1 we find min(j) ≈ 10−9 . But if the population N is finite, the minimal number of infected individuals according to (6.42) is Im = N min(j). If we take N ≈ 108 , corresponding to the population of a rather large country, for j < 10−8 there would be no infected individual anymore and the disease would have become extincted. Thus, large fluctuations could lead to extinction even if the basic reproduction number stays larger than one. According to Section 6.5.2.1, we estimate for our model the critical sigma for extinction with σc =

√2(r0 − 1) . r0

6.6 A microscopic traffic flow model In the final part of this chapter, we present a model that describes simplified (road) traffic flow by only a very few basic assumptions. The model is based on a set of coupled delay ODEs where the dependent variables describe the positions of the traffic members (cars) along a one-dimensional road. Delay comes into play by considering a finite time of reaction of the car drivers.

6.6.1 The model ̃ Quantities bearing a tilde have a physical dimension. Consider N cars with location x̃i (t), i = 1 . . . N on a one-dimensional, periodically closed road, Figure 6.25. The model is based on the rather simple assumption that each driver selects his/her velocity ṽi only according to the distance ξĩ = x̃i+1 − x̃i

6.6 A microscopic traffic flow model



195

Figure 6.25: Cars on a one-dimensional, periodically closed lane. Each car has a position xi (t) and an individual velocity vi (t) that depends on its distance ξi to the car ahead. Overtaking is not included in the model.

to the car driving ahead. The velocities ṽi =

d x̃i dt ̃

can take values in the range 0 ≤ ṽi ≤ ṽ0 , where ṽ0 is the maximal speed when the ‘road is free’, i. e. when ξĩ is infinite. Later on, ṽ0 will serve as important control parameter. We compute the individual speed by the law ṽi = ṽ0 f (ξĩ )

(6.87)

where f (0) = 0, f (∞) = 1 and f being a monotonically increasing function. Subtracting the equation (6.87) for i from that for i + 1 and assuming that there is a reaction time τ̃ between recognizing the distance and acceleration yields d ξĩ (t)̃ ̃ (t ̃ − τ)) ̃ − f (ξĩ (t ̃ − τ))], ̃ = ṽ0 [f (ξi+1 dt ̃

i = 1 . . . N − 1.

(6.88)

The value for ξÑ is obtained from N−1

ξÑ = L̃ − ∑ ξĩ i

where L̃ denotes the length of the road ring. For the following we shall take f (x)̃ =

(α̃ x)̃ m 1 + (α̃ x)̃ m

(6.89)

with m > 0 (integer) and α̃ > 0 as two parameters that may be used for adjusting the behavior of acceleration and velocity. So far, the model is described by the six parameters N, ṽ0 , τ0̃ , L,̃ α,̃ m.

196 � 6 Ordinary differential equations III 6.6.2 The nondimensional basic system Let us define the following nondimensional variables (without tildes): t = t/̃ τ,̃

x = x/̃ d,̃

ξi = ξĩ /d,̃

vi = ṽi τ/d,̃

α = α̃ d.̃

̃ denotes the equilibrium distance between two cars. Here, d̃ = L/N The basic system of delayed ODEs takes now the form m αm ξi+1 (t − 1) αm ξim (t − 1) dξi (t) = v0 [ − ], m (t − 1) dt 1 + αm ξi+1 1 + αm ξim (t − 1)

i = 1...N − 1

N−1

ξN = N − ∑ ξi ,

(6.90a) (6.90b)

i

where only the four parameter N, v0 , α, m are left.

6.6.3 Stationary solution and linear stability analysis A stationary solution (equilibrium state) of (6.90) reads ξi0 = 1,

i = 1 ... N

(6.91)

where all cars have the same distance of one and the same velocity v0i = v0

αm . 1 + αm

(6.92)

To determine the stability range of the equilibrium state with respect to the velocity parameter v0 , we perform a linear stability analysis. The ansatz ξi = 1 + ui exp(λt) yields after linearization the homogeneous system for uj λuj = v0

mαm exp(−λ)(uj+1 − uj ). (1 + αm )2

(6.93)

We changed the index i → j because we shall need i = √−1 for the following. Assuming plane waves for the perturbations uj and inserting uj = u0 exp(ikj)

6.6 A microscopic traffic flow model

� 197

where k is an arbitrary real wave number, we find λ = v0

mαm exp(−λ)(exp(ik) − 1) (1 + αm )2

(6.94)

as transcendental solvability condition. For given α and m, λ(k) can be determined iteratively as described in Section 6.2.2.1. We present a short MATLAB script: clear; np=100; % number of cars over=1; % over-relaxation m=4; alpm=3.4; v0=1 % parameters m, alpha^m, v0 rk0=0.01; rk1=3.14; dk=(rk1-rk0)/(np-1);

% region for k, 0...pi

v=m*v0*alpm/(1.+alpm)^2; rl=0.1+1i; % initial lambda for itertion for k=1:np % k loop rk=dk*(k-1)+rk0; rkp(k)=rk; b=v*(exp(1i*rk)-1.); dif=1.; n=0; while(dif>1.e-6) % stop condition rla=rl; rl=(b*exp(-rl)+over*rl)/(1.+over); % iteration with over-rel. dif=abs(rl-rla); n=n+1; end lambda(k)=rl; end plot(rkp,real(lambda)); Figure 6.26 shows the result for m = 4 and αm = 3.4. If v0 exceeds a certain critical value vc0 , the basic state becomes oscillatory unstable and the cars organize themselves in form of density waves, leading to accumulation and traffic jams without external cause. Before we present numerical solutions of the full system (6.90), we wish to determine the critical speed vc0 exactly. To this end, we put λ = iω as in Section 6.2.2.2 and separate (6.94)

198 � 6 Ordinary differential equations III

Figure 6.26: Real part (growth rate, left) and imaginary part (frequency, right) of λ from (6.94) for m = 4 and αm = 3.4. The values of v0 are 0.7 (blue), 0.75 (green), 0.85 (red), 1.0 (black). The critical value for v0 is 0.712.

into real and imaginary parts: 0 = cos(k − ω) − cos(ω) ω = v0

m

mα (sin(k − ω) + sin(ω)). (1 + αm )2

(6.95a) (6.95b)

From (6.95a) we find ω = k/2, and, substituted in (6.95b) and solving for v0 , v0 (k) =

(1 + αm )2 k . 4mαm sin(k/2)

(6.96)

v0 (k) denotes the value, where Re λ = 0. Taking the limit k → 0 gives finally the critical speed vc0 =

(1 + αm )2 2mαm

(6.97)

where the instability occurs. For the values used for Figure 6.26 we compute vc0 ≈ 0.712. If all cars are traveling with their largest possible velocity for a stable equilibrium vc0 and have the distance of one, their individual velocity according to (6.92) reads vci =

1 + αm . 2m

(6.98)

There is a rule of thumb that you have already learned in driving school: the distance not to go below to the car running ahead is ‘half of the speedometer’, that is ṽ d̃ = h 3.6 [s] 2

6.6 A microscopic traffic flow model



199

where the factor 3.6 comes from the fact that the speed in the rule is measured in km/h and we have to convert into m/s. Taking for τ one second, which is normally the average reaction time assumed for car drivers, one finds for the nondimensional halfspeedometer rule vh = 2/3.6 ≈ 0.55.

(6.99)

If we now identify the critical velocity (6.98) with the half-speedometer velocity (6.99), we may compute αm = 2mvh − 1 ≈ 1.1m − 1 what yields for m = 4 α4 ≈ 3.4.

6.6.4 The fully nonlinear system – numerical solutions The nonlinear delay system (6.90) is solved by an explicit first-order Euler method, the delay terms are determined by the pointer method outlined in Section 6.1.2. The time step is fixed with Δt = 10−4 and the last 10000 values of ξi (t), i = 1 . . . N have to be stored. For N = 100 this allocates 8 MB of the memory (double precision). The dependent variables are initialized around the equilibrium values ξi (0) = 1 + 0.01ζi where ζi are equally distributed random numbers in [−1/2,1/2]. It is important that the constraint N

∑ ξi (0) = N i

is fulfilled. We start the simulation with a smaller αm = 2 where we take m = 4 for all runs (the reader is invited to check different m and α and see how this influences the dynamics). For these parameters vc0 = 0.563, vci = 0.375 and the homogeneous traffic flow becomes unstable for a velocity far below the half-speedometer rule (6.99). A time series is shown in Figure 6.27 where the occurrence of density waves followed by a phase separation into two different distinct distances and velocities can be clearly seen. The areas of small distances (low velocities, congestion) thereby travel always backwards relatively to the overall motion of the cars.

200 � 6 Ordinary differential equations III

Figure 6.27: Time series computed from the full system (6.90) for α4 = 2, v0 = 0.61 at times t = 200, 300, 400, 500, 1000, top to bottom. Starting from arbitrarily small perturbations, the jam occurs without reason due to the instability of the equilibrium state. Black: distances ξi , red: velocities vi over i, N = 100.

A more realistic αm = 3.4, m = 4 yields vc0 = 0.712, vci = 0.55 where the critical velocity matches the half-speedo velocity. Now phase separation is even more pronounced and very slow areas where the cars are almost stopping change with areas with a higher speed, Figure 6.28.

Bibliography

� 201

Figure 6.28: Same as Figure 6.27 but for α4 = 3.4, v0 = 0.712.

Bibliography [1] [2] [3] [4]

D. Mackey, L. Glass, Oscillations and chaos in physiological control systems, Science 197, 28 (1977). FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html. J. C. Sprott, A simple chaotic delay differential equation, Phys. Lett. A 366, 397 (2007). W. O. Kermack, A. G. McKendrick, A contribution to the mathematical theory of epidemics, Proc. Roy. Soc. A 115, 700–721 (1927). [5] P. E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, Berlin (1992). [6] C. Gardiner, Stochastic Methods: A Handbook for the Natural and Social Sciences, Springer, Berlin, 4th ed. (2009).

7 Partial differential equations I, basics If we are interested in studying and modeling physical problems originating from continuum physics, we are rapidly confronted with partial differential equations (PDEs). ‘Continuum physics’ in a wider sense deals with time-dependent objects filling the whole (real) space, like flow fields, mass or charge densities, velocity or displacement fields, electromagnetic fields, or quantum mechanical wave functions, to name but a few. In most physical models, however not in all, first and/or second order derivatives in both the time and space of the field functions may appear. The PDEs can be classified by a number of characteristics and are grouped into elliptic, parabolic, and hyperbolic equations [1].

7.1 Classification We commence with the mathematical standard classification based on characteristics and study the consequential specifications for a numerical implementation and boundary conditions. A characteristic, or characteristic curve, is a certain curve in the space of independent variables along which the PDE becomes an ordinary differential equation. If the characteristic is known, this ODE can be solved and a special solution of the PDE is thereby found. If one of the variables is the time, the characteristic defines curves along which information from the initial conditions is spread in space. 7.1.1 PDEs of the first order We restrict ourselves for the time being to one dependent variable, say u. Let us start with only two independent variables, x and t. The most general (quasilinear) PDE of the first order then reads A(u, x, t)

𝜕u 𝜕u + B(u, x, t) = C(u, x, t). 𝜕x 𝜕t

(7.1)

If A and/or B depend additionally on the first derivatives of u, (7.1) is a nonlinear PDE. In contrast, a linear first-order PDE has the general form, ̃ t) u + D(x, ̃ t) 𝜕u + B(x, ̃ t) 𝜕u = C(x, ̃ t). A(x, 𝜕x 𝜕t

(7.2)

With the help of (7.1), the variation of u (the total differential) can be written as du =

𝜕u 𝜕u C 𝜕u dx + dt = dx + [dt − 𝜕x 𝜕t A 𝜕t C 𝜕u = dt + [dx − B 𝜕x

https://doi.org/10.1515/9783110782523-007

B dx] A A dt]. B

(7.3)

7.1 Classification

� 203

If we choose the path in the tx-plane in such a way that dx A = , dt B

(7.4)

both square brackets vanish and du =

C C dt = dx B A

(7.5)

remains along the characteristic curve xc (t) that is given as a solution of the ODE (7.4). For the special case C = 0 (homogeneous PDE) one obviously obtains du = 0 and thereby, u(xc (t), t) = const. 7.1.1.1 Example: convection equation To exemplify the idea, we wish to study the convection equation 𝜕t u + v(x)𝜕x u = 0

(7.6)

with a space-dependent velocity as a special case of (7.1) with A = v(x), B = 1, C = 0. Let v(x) = α (xs − x).

(7.7)

Integrating (7.4) one finds the characteristics (Figure 7.1) as a family of curves along which u is not changing (C = 0): xc (t) = xs (1 − e−α(t−t0 ) ) with arbitrary but constant t0 (integration constant). Equation (7.6) could serve as a model for the spatial evolution and spreading of a certain concentration, i. e., of some pollutants, transported by a given air flow (7.7). Due to the special form of v(x), the pollutants accumulate in the course of time near x = xs .

Figure 7.1: Characteristics of the convection equation (7.6) with v(x) as (7.7). Along the characteristics, initial information is transported in the positive t-direction.

204 � 7 Partial differential equations I, basics 7.1.1.2 Example: Burgers equation As a prototype for a quasilinear PDE we shall examine the one-dimensional Burgers equation 𝜕t u + u 𝜕x u = 0.

(7.8)

The nonlinearity steepens the wave front more and more and finally causes a singularity at which the wave would break, Figure 7.2. However, at this point (7.8) loses its validity because u is not any longer differentiable. A special solution of Burgers equation is linear in x and reads u(x, t) = −

αx . 1 − αt

(7.9)

For α > 0, a singularity exists at t ∗ = 1/α, corresponding to an infinite slope and wave breaking. This can also be recognized looking at the characteristics. From (7.4) we obtain

and once integrated,

dxc α xc =u=− , dt 1 − αt xc (t) = x0 ⋅ (1 − αt)

with the constant x0 = xc (0). All characteristics meet at the singularity t = t ∗ , x = 0 (Figure 7.3).

Figure 7.2: As the convection equation, the Burgers equation yields as a solution for u > 0 traveling waves to the right. Since the velocity is proportional to the amplitude, wave crests move faster and catch up to wave troughs (b), eventually leading to a breaking wave (c). In (c), u can no longer be expressed as a function of x.

Figure 7.3: Burgers equation; characteristics of the special solution (7.9). The intersection of the characteristics is linked with a singularity of u.

7.1 Classification

� 205

7.1.2 PDEs of the second order We discuss again the case of two independent variables, now x, y. The general form of a second order PDE reads A

𝜕2 u 𝜕2 u 𝜕2 u 𝜕u 𝜕u + B + C +D +E +F =0 2 2 𝜕x𝜕y 𝜕x 𝜕y 𝜕x 𝜕y ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟

(7.10)

≡H

with A . . . F as functions of x, y and possibly of u. As we shall see in a moment, only the coefficients of the second derivatives A, B, C determine the characteristics. We try the same procedure as in Section 7.1.1: what are the curves along which the variation of u is described by an ODE as in (7.5)? Let the curves sought after be expressed as y = yc (x)

(7.11)

with their local slope m(x, yc ) = dyc /dx at x, yc . Introducing the abbreviations P = 𝜕x u,

Q = 𝜕y u

one obtains the total differentials du = P dx + Q dy

(7.12)

and 2 2 dP = (𝜕xx u) dx + (𝜕xy u) dy,

2 2 dQ = (𝜕xy u) dx + (𝜕yy u) dy.

(7.13)

Using (7.13), the second derivatives along (7.11) can be expressed as 2 𝜕xx u=

dP 2 − m 𝜕xy u, dx

dQ 1 2 − 𝜕 u. dy m xy

(7.14)

dP dQ + H) m + C = 0. dx dx

(7.15)

2 𝜕yy u=

Inserting (7.14) into (7.10) yields 2 − 𝜕xy u[A m2 − B m + C] + (A

If we choose the local slope of the characteristics as m1,2 =

1 B √ B2 ( ± − AC), A 2 4

(7.16)

the square bracket of (7.15) vanishes and the variation of P and Q along that curve can be computed from mi A dPi + C dQi = −H mi dxi .

(7.17)

P und Q (and with (7.12) also u) follow then from their previous values Pi , Qi , u (see Figure 7.4) as P = Pi + dPi ,

Q = Qi + dQi .

(7.18)

206 � 7 Partial differential equations I, basics

Figure 7.4: From a given Pi , Qi one can determine P, Q with the help of (7.17), (7.18).

From the six equations (7.17), (7.18), the six unknowns P, Q, dPi , dQi can be uniquely determined. Knowing mi from (7.16), the characteristics can be found solving the ODEs dy(i) c = mi (y(i) c , x). dx

(7.19)

For a linear PDE, (7.16) and therewith yc (x) are independent from the solution u. 7.1.2.1 Discriminant Are there always two different families of characteristic curves? Obviously this can be only the case if the two roots (7.16) are real-valued and do not coincide. For a further classification we consider the discriminant 𝒟=

B2 − AC 4

(7.20)

and distinguish the cases – 𝒟 > 0: m1 ≠ m2 , both real-valued, two characteristics, hyperbolic PDE – 𝒟 = 0: m1 = m2 , real-valued, one characteristic, parabolic PDE – 𝒟 < 0: m1 = m2∗ , complex-valued, no characteristics, elliptic PDE In Table 7.1 we list several PDEs and their allocation according to the three classes above, now for three spatial dimensions. Table 7.1: Several important second order PDEs from different areas of physics and their classification with 2 2 2 respect to characteristics in three space dimensions, Δ = 𝜕xx + 𝜕yy + 𝜕zz . The eigenvalues of the coefficient matrix A (7.21) are denoted by λi . hyperbolic

parabolic

elliptic

wave equation 𝜕tt� u − c � Δu = � λ� = �, λ�,�,� = −c � 𝒟 = c� > �

diffusion equation 𝜕t u − κΔu = � λ� = �, λ�,�,� = −κ 𝒟=� Navier–Stokes equation ⃗ ρ ddtv = ηΔv ⃗ − ∇P λ� = �, λ�,�,� = −η 𝒟=�

Poisson equation Δu = −ρ/ϵ� λ�,�,� = � 𝒟 = −� stationary Schrödinger equation −Δu + Vu = Eu λ�,�,� = −� 𝒟 = −�

7.1 Classification

� 207

7.1.2.2 N independent variables Next, we extend the concept to N independent variables xi , i = 1 . . . N and write the general form as N

∑ Aij

i,j=1

N 𝜕2 u 𝜕u + ∑ Bi + C u = 0. 𝜕xi 𝜕xj i=1 𝜕xi

(7.21)

The coefficient matrix Aij is symmetric and has only real eigenvalues whose signs allow for a classification according to Section 7.1.2.1. At first we take N = 2. Using the notation from (7.10), the matrix A reads A=(

A B/2

B/2 ) C

and has the eigenvalues 1 λ1,2 = (Tr A ± √(Tr A)2 + 4𝒟) 2 and 𝒟 = − Det A.

With λ1 λ2 = Det A we obtain according to Section 7.1.2.1 – 𝒟 > 0: λ1 λ2 < 0, hyperbolic, – 𝒟 = 0: λ1 λ2 = 0, parabolic, – 𝒟 < 0: λ1 λ2 > 0, elliptic. This can be extended straightforwardly to N variables: Let λi be the spectrum of the N × N coefficient matrix A. The PDE (7.21) is – hyperbolic if all λi ≠ 0 and all λi but for one have the same signs (saddle node), – parabolic if one arbitrary λi = 0, – elliptic if all λi ≠ 0 and all λi have the same signs (node). 7.1.3 Boundary and initial conditions As for ordinary differential equations additional information for a unique solution of a PDE is needed to arrive at a well-posed problem, again in the form of boundary and/or initial conditions. But here the type of the PDE plays an essential role. If we take for example the convection equation with v = const. > 0 and search for a solution in the domain 0 ≤ x ≤ L, we may prescribe u = u0 (x) at t = 0 (initial condition). But in the course of time, u moves along the characteristic u(x, t) = u0 (x − vt), excluding some prescribed boundary conditions on the right-hand side x = L. For PDEs of the second

208 � 7 Partial differential equations I, basics order, boundary value problems on a closed boundary are well-posed only for elliptic equations where no characteristics exist. 7.1.3.1 Elliptic PDEs One distinguishes between problems where u is given along a boundary R (Dirichlet condition) and those where the derivative normal to the boundary is fixed (Neumann condition), Figure 7.5. The most general form is a combination of both, called the Robin condition: α u + β n̂ ⋅ ∇u = γ,

r⃗ ∈ R.

(7.22)

It includes the two former cases Dirichlet (β = 0) and Neumann (α = 0).

Figure 7.5: Typical boundary value problem for an elliptic second order PDE. On a closed boundary R, u or the gradient of u is prescribed and n̂ denotes the unit vector normal to the boundary (here for two dimensions).

As an example consider the stationary heat equation in two dimensions (Laplace equation) 𝜕2 T 𝜕2 T + = 0. 𝜕x 2 𝜕y2

(7.23)

If we fix the temperature along R (Dirichlet) T(x, y) = TR (x, y),

x, y ∈ R,

the solution is unique and corresponds to the stationary temperature profile of a heat conducting plate. If, on the other hand, the heat current through the boundary (λ = heat conductivity) −λ n̂ ⋅ ∇T = jR ,

x, y ∈ R

7.1 Classification

� 209

Figure 7.6: Specifying the temperature (left) on, or the heat current (right) through the boundary determines the temperature inside the domain as a solution of the stationary heat equation.

is given, jR must be compatible with (7.23). This becomes clear by integrating (7.23) over the whole area 1 ∬ dxdy ΔT = ∬ dxdy div(∇T) = ∮ n̂ ⋅ ∇T dr = − ∮ jR dr = 0. λ

F(R)

F(R)

R

(7.24)

R

The amount of heat flowing in the domain must be the same as that flowing out. Otherwise a stationary temperature on the plate is not feasible, Figure 7.6. 7.1.3.2 Parabolic and hyperbolic PDEs As it turned out in Section 7.1.2.2 while diagonalizing the coefficient matrix, for parabolic as well as for hyperbolic PDEs, a distinguished direction exists, namely that belonging to the vanishing eigenvalue or that with the different sign, respectively. In physical problems this variable corresponds very often to time. Since properties of the solution travel along characteristics, it is not possible to pose boundary conditions for this variable. One can still prescribe, say, u at t = t0 . If the PDE is of the second order one may additionally fix the initial derivative 𝜕t u at t0 , obtaining in this way an initial value problem (Cauchy problem). Take as an example the time-dependent, (2+1)-dimensional1 heat equation 𝜕T 𝜕2 T 𝜕2 T = κ( 2 + 2 ), 𝜕t 𝜕x 𝜕y

(7.25)

which is parabolic. In addition to the boundary conditions discussed in Section 7.1.3.1 we need an initial condition of the form T(x, y, t = t0 ) = T0 (x, y). 1 With (n + 1)-dimensional, we denote a system or an equation in n spatial dimensions plus one for time.

210 � 7 Partial differential equations I, basics Note that for Neumann conditions the compatibility restriction (7.24) is now obsolete. If the total heat flux through the boundaries is not zero, there will be no stationary solution and, depending on the sign of the flux, the plate is heating up or cooling down forever. The distinction between initial and boundary values is also important for hyperbolic equations. If we take for instance the (1+1)-dimensional wave equation 2 𝜕2 u 2𝜕 u = c , 𝜕t 2 𝜕x 2

(7.26)

there exist unique solutions in 0 ≤ x ≤ L, t ≥ 0 for given boundary conditions u(0, t) = uℓ ,

u(L, t) = ur

(7.27)

and additional initial conditions u(x, 0) = u0 (x),

𝜕t u(x, t)|t=0 = v0 (x).

(7.28)

Table 7.2 lists several combinations of second order PDEs and boundary/initial values. Table 7.2: Only certain combinations of equation types and boundary conditions constitute well-posed problems. With ‘Cauchy’ we denote here an initial value problem of the form (7.28). For the definition of ‘open’ and ‘closed’; see Figure 7.7. elliptic

hyperbolic

parabolic

Dirichlet

open closed

under-determined well-posed

over-determined over-determined

well-posed over-determined

Neumann

open closed

under-determined well-posed

under-determined over-determined

well-posed over-determined

Cauchy

open closed

nonphysical over-determined

well-posed over-determined

over-determined over-determined

Figure 7.7: Open and closed boundaries.

7.2 Finite differences



211

7.2 Finite differences As for ODEs, the procedure for a numerical integration of PDEs consists in discretizing the different independent variables and finally in an iterative or, in rare cases, a direct solution of a normally large algebraic system of equations. The discretization can be realized by different methods which should be suited to the problem at hand. We wish to introduce here the most important ones and shall start with the finite-difference method (FD). 7.2.1 Discretization For the moment we concentrate on the case of two independent variables. One reduces the sought solution u(x, y) on discrete mesh points or grid points uij = u(xi , yj ),

xi = x0 + iΔx,

yj = y0 + jΔy,

i = 0 . . . I, j = 0 . . . J,

which are assumed to lie on a (not necessarily) regular grid in the integration domain x0 ≤ x ≤ x1 ,

y0 ≤ y ≤ y1 .

The quantities Δx = (x1 − x0 )/I,

Δy = (y1 − y0 )/J

are denoted as step sizes. In its most simple form the FD method obviously will work only for rectangular geometries. Extensions are possible; one could choose space-dependent step sizes, which could account for a better resolution in domains where u is strongly varying. Derivatives with respect to space are approximated by differential quotients, which are available in different orders of Δx and Δy and in several symmetries. We list the discretization for the first four derivatives:2

du 󵄨󵄨󵄨󵄨 1 = (u − ui−1,j ) + O(Δx 2 ), 󵄨 dx 󵄨󵄨󵄨xi ,yj 2Δx i+1,j

(7.29)

d 2 u 󵄨󵄨󵄨󵄨 1 = (ui+1,j − 2ui,j + ui−1,j ) + O(Δx 2 ), 󵄨 dx 2 󵄨󵄨󵄨xi ,yj Δx 2

(7.30)

2 The formulas can be derived most easily by a Taylor expansion of u around xi .

212 � 7 Partial differential equations I, basics

d 3 u 󵄨󵄨󵄨󵄨 1 = (ui+2,j − 2ui+1,j + 2ui−1,j − ui−2,j ) + O(Δx 2 ), 󵄨 dx 3 󵄨󵄨󵄨xi ,yj 2Δx 3

(7.31)

d 4 u 󵄨󵄨󵄨󵄨 1 = (u − 4ui+1,j + 6ui,j − 4ui−1,j + ui−2,j ) + O(Δx 2 ). 󵄨 dx 4 󵄨󵄨󵄨xi ,yj Δx 4 i+2,j

(7.32)

For elliptic PDEs and if no direction is further distinguished by means of geometrical restraints, one normally uses square grids with Δx = Δy. For this case (2D) we state the four most important formulas for the discretized Laplace operator (Figure 7.8) which, except for (d), are all of the same order in Δx, but have different symmetries with respect to the rectangular grid geometry: 1 (ui+1,j + ui−1,j + ui,j+1 + ui,j−1 − 4ui,j ) + O(Δx 2 ) Δx 2 (b) 1 = (ui+1,j+1 + ui−1,j+1 + ui+1,j−1 + ui−1,j−1 − 4ui,j ) + O(Δx 2 ) 4Δx 2 (c) 1 = (ui+1,j+1 + ui+1,j + ui+1,j−1 + ui,j+1 3Δx 2 + ui,j−1 + ui−1,j+1 + ui−1,j + ui−1,j−1 − 8ui,j ) + O(Δx 2 )

Δ2 u|xi ,yj =

(a)

(d)

=

1 (ui+1,j+1 + ui+1,j−1 + ui−1,j+1 + ui−1,j−1 6Δx 2 + 4(ui+1,j + ui,j+1 + ui,j−1 + ui−1,j ) − 20ui,j ) + O(Δx 4 ).

(7.33)

Configuration (a) is most common and (b) normally leads to numerical oscillations since even grid points (i + j even) are decoupled from odd points. (b) is therefore not suited to diffusion equations. The nine-points formulas (c) and (d) are linear combinations of (a) and (b). The symmetry of (c) comes closest to the (continuous) rotational symmetry of Δ, while (d) minimizes the truncation error to the order Δx 4 . The biharmonic operator is built from fourth order derivatives also including the mixed one. Here we state only the 13-points formula of lowest order (Figure 7.9): Δ22 u|xi ,yj = ( =

𝜕4 u 𝜕4 u 𝜕4 u + 2 + ) 𝜕x 4 𝜕x 2 𝜕y2 𝜕y4 xi ,yj

1 (20ui,j − 8(ui+1,j + ui−1,j + ui,j+1 + ui,j−1 ) Δx 4 + 2(ui+1,j+1 + ui−1,j+1 + ui+1,j−1 + ui−1,j−1 ) + ui+2,j + ui−2,j + ui,j+2 + ui,j−2 ) + O(Δx 2 ).

(7.34)

7.2 Finite differences �

Figure 7.8: Several discretizations of the Laplace operator in two dimensions.

Figure 7.9: The biharmonic operator in two dimensions.

213

214 � 7 Partial differential equations I, basics 7.2.2 Elliptic PDEs, example: Poisson equation The standard problem of electrostatics is the computation of the electric field for a given charge distribution and boundary conditions. Thus one has to find solutions for the Poisson equation [2] Δ u(x, y) = −4π ρ(x, y)

(7.35)

with ρ as the charge density and u as the electrostatic potential. From the latter, the electric field directly follows E⃗ = −∇u. We shall treat the problem again in two dimensions. If u is prescribed along the boundaries (Dirichlet condition) the problem is well-posed and a unique solution exists.

Figure 7.10: A separable problem.

For the special case ρ = 0 and a rectangular geometry as shown in Figure 7.10, the problem is separable for certain boundary conditions. If, for instance, u(0, y) = u(a, y) = 0,

u(x, 0) = 0,

u(x, b) = ub (x),

the separation ansatz ∞

u(x, y) = ∑ An sin( n=1

nπ nπ x) sinh( y) a a

(7.36)

is successful. Note that the boundary conditions at x = 0, a and at y = 0 are already implemented. The coefficients An are determined to fulfill the boundary conditions at y = b: ∞

∑ An sin(

n=1

Thus we find

nπ nπb x) sinh( ) = ub (x). a a

7.2 Finite differences �

215

a

An =

2 nπ ∫ dx ub (x) sin( x). a sinh(nπb/a) a 0

Depending on ub the sum in (7.36) will converge more or less quickly. For more involved boundary conditions or geometries, or if ρ = ρ(x, y) ≠ 0, a separation will normally no longer work. Taking the FD scheme from Figure 7.8 (a), the PDE (7.35) is transformed into an inhomogeneous system of equations 4ui,j − ui,j+1 − ui,j−1 − ui+1,j − ui−1,j = 4πh2 ρi,j ,

(7.37)

where Δx = Δy = h, I = a/h, J = b/h, and ρi,j = ρ(xi , yj ). The walls are located at the grid lines i = 0, I, and j = 0, J. Since the boundary values are prescribed, the system (7.37) has to be solved only in the bulk for i = 1 . . . I − 1, j = 1 . . . J − 1, resulting in N = (I − 1) × (J − 1) equations. If we combine the node values uij and ρij in vector notation, e. g., using the assignment ũ k = uij ,

ρ̃ k = ρij ,

k = i + (j − 1)(I − 1),

we can write (7.37) in the form N

∑ Lkk ′ ũ k ′ = ρ̃ k ,

k ′ =1

(7.38)

where L is an N × N sparse matrix with only five nonzero elements in each line. 7.2.2.1 Implementation of the boundary conditions The boundary values appear in the rows or columns next to the walls as additional inhomogeneities. Take for instance the equation j = 1 from (7.37): 4ui,1 − ui,2 − ui+1,1 − ui−1,1 = 4πh2 ρi,1 + ui,0 .

(7.39)

Here we have already written the boundary term ui,0 on the right-hand side. Thus one may substitute ρ′i,1 = ρi,1 + ui,0 /(4πh2 ) in (7.37). The other boundaries are treated in the same manner. If Neumann or mixed (Robin) conditions are posed, uij has to be computed also on the boundaries, increasing the number of equations and variables to (I + 1) × (J + 1). To compute the derivatives at the walls an additional ‘virtual’ line outside the area is appropriate whose node values must be computed from the boundary conditions.

216 � 7 Partial differential equations I, basics To make it more clear we consider as an example a given Neumann condition 𝜕y u|y=0 = f (x) on the lower wall at y = 0. Taking the discretization (7.29) the virtual line ui,−1 = ui,1 − 2h fi

(7.40)

is established and can be added as in (7.39) to the corresponding inhomogeneity. 7.2.2.2 Iterative methods The matrix L in (7.38) contains many zeros. However, their nonzero elements can be located far from the diagonal. Also N can become very large, especially in three dimensions, and a direct solution by inverting L may cause problems. Iterative methods can help. A sequence of approximations uij(n) is constructed converging preferably quickly to the desired solution. The most simple iteration rule is obtained by rearranging (7.37) according to (n+1) ui,j =

1 (n) (n) (n) (n) (u + ui,j−1 + ui+1,j + ui−1,j ) + πh2 ρi,j . 4 i,j+1

(7.41)

This is the Jacobi method which, however, already converges not very quickly in two dimensions. As a stopping condition, an upper bound for the total difference 󵄨 󵄨 d (n) = ∑󵄨󵄨󵄨uij(n) − uij(n−1) 󵄨󵄨󵄨 < ϵ ij

(7.42)

can be used. As an improvement of the Jacobi method, faster convergence is provided by the Gauss–Seidel method where on the right-hand side of (7.41) values of uij(n+1) are used if they have been already computed in the same loop: (n+1) ui,j =

1 (n) (n+1) (n) (n+1) (u + ui,j−1 + ui+1,j + ui−1,j ) + πh2 ρi,j . 4 i,j+1

(7.43)

But note that the scheme (7.43) breaks the symmetries x → −x, y → −y inherent in the Laplace-Operator (and also in (7.41)) and artificial results with predominant direction may emerge. This shortage can be compensated running the loops in (7.43) over i and j alternately back and forwards. For a backwards loop the iteration is modified to (n+1) ui,j =

1 (n+1) (n) (n+1) (n) (u + ui,j−1 + ui+1,j + ui−1,j ) + πh2 ρi,j . 4 i,j+1

Also combinations like i forwards and j backwards should occur.

7.2 Finite differences



217

Figure 7.11: Convergence for different ω. The Jacobi method diverges as soon as ω > 1.

Convergence can be accelerated further by so-called successive over-relaxation. One first writes (n+1) (n) ui,j = ui,j + ωRi,j

with the relaxation factor ω and Rij depending on the method, e. g., taking Gauss–Seidel Ri,j =

1 (n) (n+1) (n) (n+1) (n) (u + ui,j−1 + ui+1,j + ui−1,j ) + πh2 ρi,j − ui,j . 4 i,j+1

If ω = 1, the Gauss–Seidel method is recovered; for ω > 1 one obtains over-relaxation. Figure 7.11 shows the number of necessary iterations for the Poisson equation (7.35) with Dirichlet conditions u = 0 at the boundaries and a quadratic geometry with 50 × 50 grid points. As a stopping criterion according to (7.42), we choose ϵ = 10−4 , and the charges are given as ρ20,20 = −1,

ρ40,30 = +1.

(7.44)

Figure 7.12 illustrates the result (left frame), together with a run for a charge free area and an inhomogeneous Dirichlet condition (right). The simple Jacobi method shows a rather slow convergence and becomes unstable for over-relaxation. The Gauss–Seidel method converges much faster up to ω ≈ 1.9 but then also diverges. Even fewer iteration steps for the same accuracy are needed for the ADI method, an acronym for ‘Alternating Direction Implicit.’ One direction, say y, and the diagonal terms in (7.37) are treated implicitly, (n+1) (n+1) (n+1) (n) (n) 4ui,j − ui,j+1 − ui,j−1 = ui+1,j + ui−1,j + 4πh2 ρi,j ,

(7.45)

218 � 7 Partial differential equations I, basics

Figure 7.12: Approximate solutions of (7.35) for ϵ ≈ 10−4 , FD, 50×50 grid. Left: ub = 0 and charges as (7.44), right: ρ = 0 and inhomogeneous boundary condition ub (x) = sin πx + sin 2πx + sin 3πx.

and can be written in matrix form L v⃗(n+1) = a⃗ i + Q v⃗(n) i i

(7.46)

by the assignment v⃗i = (ui,1 , ui,2 . . . ui,J ),

a⃗ i = 4πh2 (ρi,1 , ρi,2 . . . ρi,J )

with the two tridiagonal matrices 0 Q = (1 0

1 0 1

0 1 0

0 0 1

... . . .) , ...

4 L = (−1 0

−1 4 −1

0 −1 4

0 0 −1

... . . .) . ...

(7.47)

With the help of the Thomas algorithm (Appendix A.3.5) L is inverted effectively, v⃗(n+1) = L−1 [a⃗ i + Q v⃗(n) ], i i

(7.48)

completing the first half step of the iteration. In the second half, the other direction is treated implicitly: (n+2) (n+2) (n+2) (n+1) (n+1) 4ui,j − ui+1,j − ui−1,j = ui,j+1 + ui,j−1 + 4πh2 ρi,j .

(7.49)

Rearranging the uij according to w⃗ j = (u1,j , u2,j . . . uI,j ),

a⃗ j′ = 4πh2 (ρ1,j , ρ2,j . . . ρI,j )

(7.49) takes the same form as (7.46): Lw⃗ j(n+2) = a⃗ j′ + Q w⃗ j(n+1) .

(7.50)

7.2 Finite differences �

219

Inversion of L as in (7.48) completes the first step. In addition to the fact that more elements are treated implicitly, the method has the advantage of a symmetric handling of x and y. It can be easily extended to three dimensions adding a further step per iteration. A complete step is shown in the following code section: PARAMETER (idim=..., jdim=...) ! define grid size REAL, DIMENSION (idim,jdim) :: u,rho REAL, DIMENSION (max(idim,jdim)) :: a,b,c,d ..... v=4.*dx**2*Pi a=-1.; c=-1. epsilon=.... DO

! upper, lower diagonals ! accuracy wanted

! iteration loop s=0. DO i=2,idim-1 ! 1st half step, y implicitly DO j=2,jdim-1 d(j-1)=v*rho(i,j)+u(i+1,j)+u(i-1,j) ENDDO b=4. CALL tridag(a,b,c,d,jdim-2) ! Thomas algorithm DO j=2,jdim-1 ua=u(i,j) u(i,j)=d(j-1) s=s+ABS(u(i,j)-ua) ENDDO ENDDO DO j=2,jdim-1 ! 2nd half step, x implicitly DO i=2,idim-1 d(i-1)=v*rho(i,j)+u(i,j+1)+u(i,j-1) ENDDO b=4. CALL tridag(a,b,c,d,idim-2) DO i=2,idim-1 ua=u(i,j) u(i,j)=d(i-1) s=s+ABS(ua-u(i,j)) ENDDO ENDDO IF(s 1. Having Ti already isolated on the left-hand side, (7.58) is called an explicit scheme in time, sometimes also FTCS scheme, as an acronym for ‘Forward Time Centered Space’. The corresponding implicit (backward) formula is obtained by considering the right-hand side of (7.58) at time (j + 1)Δt: (j+1)

Ti

(j+1)

= Ti + (j)

Δt (j+1) (j+1) (j+1) κ(Ti+1 − 2Ti + Ti−1 ). Δx 2

(7.59)

Here we have to solve a linear system of equations at each time step for Ti , which seems to be, compared to (7.58), needlessly complicated. However, implicit (or partially implicit) methods are very common in practice and have large advantages due to their much better numerical stability properties. We shall come back to this point in more detail in Section 7.2.3.3. (j+1)

7.2.3.2 Example: temperature in a room with a stove and a window As an application we wish to compute the temperature field in a room with a length and height of 3 meters. In the room, a stove with a given temperature TH is placed below the window. Walls, floor, and ceiling are assumed to be perfect thermal insulators where homogeneous Neumann boundary conditions 𝜕n T = 0 apply. Along the open window the temperature is fixed to the outdoor temperature TE (Dirichlet); see Figure 7.13.

7.2 Finite differences �

223

Figure 7.13: Sketch of the system: room with window and heating. FD-grid, dashed: virtual points. Along window and heating we take Dirichlet conditions; along walls, Neumann conditions apply.

Except for the mesh points coinciding with the window and the heating, the system (7.58) is iterated, but now in two dimensions: Ti,j(n+1) = Ti,j(n) +

Δt (n) (n) (n) (n) κ(Ti,j+1 + Ti+1,j − 4Ti,j(n) + Ti−1,j + Ti,j−1 ). Δx 2

(7.60)

To complete the computations for the boundary nodes, virtual points outside the domain are again needed that can be determined from the Neumann conditions. One finds for instance along the right-hand wall TI+1,j = TI−1,j , and so on. The nodes along Dirichlet boundaries (at the stove and the window) are just fixed at the appropriate values. Figure 7.14 shows a time series computed on a 350 × 350 grid. The step sizes Δx = 3/349,

Δt = 0.1Δx 2 /κ

have been used where κ = 2⋅10−5 m2 /s denotes the thermal diffusivity of air. As an initial condition, the temperature inside the room was fixed to the outdoor value TE . Once the temperature distribution is known the heat current densities according to ̂ j = −λn∇T

224 � 7 Partial differential equations I, basics

Figure 7.14: Time series, temperature distribution after 0.1, 1, 20 days. Stove and window marked in green, TE = 0 ℃, TH = 70 ℃. The extremely long relaxation time is a consequence of the total disregard of any convection.

Figure 7.15: Heat current densities integrated over window, and heating in units of J/s as a function of time.

can be computed (λ ≈ 0.025 W/mK, thermal conductivity of air3 ) and integrated along the window and the surface of the stove: 󵄨󵄨 󵄨󵄨 󵄨 󵄨 ⟨JH ⟩ = λ󵄨󵄨󵄨 ∫ d 2 f ⃗ ⋅ ∇T 󵄨󵄨󵄨, 󵄨󵄨 󵄨󵄨 FH

󵄨󵄨 󵄨󵄨 󵄨 󵄨 ⟨JE ⟩ = λ󵄨󵄨󵄨 ∫ d 2 f ⃗ ⋅ ∇T 󵄨󵄨󵄨. 󵄨󵄨 󵄨󵄨

(7.61)

FE

The temporal evolution is depicted in Figure 7.15. For the stationary (equilibrium) case, both currents should be equal. The discrepancy of about 5 percent seems to have numerical reasons. 7.2.3.3 Von Neumann stability analysis In the iteration rules (7.58)–(7.60) the grid-dependent numerical parameter s=

Δt κ Δx 2

3 Thermal diffusivity and thermal conductivity are different material properties but linked by the relation κ = λ/ρc with density ρ and specific heat c.

7.2 Finite differences �

225

occurs on which the stability of the method depends. To shed light on this, one decomposes Tℓ into the eigenmodes of a linear chain of springs Tℓ(n) = A(n) eikℓ .

(7.62)

Substitution into (7.58) and canceling eikℓ yields A(n+1) = A(n) [1 + 2s(cos k − 1)]. For a stable method the solution should asymptotically approach some stationary value, which is only possible if 󵄨󵄨 (n+1) 󵄨󵄨 󵄨󵄨 (n) 󵄨󵄨 󵄨󵄨A 󵄨󵄨 ≤ 󵄨󵄨A 󵄨󵄨. As a consequence, the absolute value of the so-called amplification factor Vk = 1 + 2s(cos k − 1)

(7.63)

must be less than or equal to one for arbitrary k. This is the stability criterion of von Neumann.4 From (7.63) one finds Vk ≤ 1 for all k and the stability limit is given as Vk ≥ −1, leading to s≤

1 . 1 − cos k

Since this constitutes an upper bound for Δt we must find the specific k that minimizes s, giving k = π,

s = 1/2

and therefore the condition Δt ≤

1 Δx 2 . κ 2

(7.64)

If Δt is chosen as larger, the grid becomes numerically unstable first with the ‘most dangerous mode’ eiπℓ , corresponding to a change of signs from one grid point to the next (Figure 7.16). The relation (7.64) is typical for an explicit method. In general one has Δt ∼ (Δx)n 4 John von Neumann, American mathematician, 1903–1957.

226 � 7 Partial differential equations I, basics

Figure 7.16: Typical numerical instability for an explicit FD scheme, corresponding to the ‘most dangerous mode’ k = π in (7.62).

if the time derivative is first order and the highest occurring space derivative is of order n. If the mesh has N grid points the number of equations is N and the number of time iterations K ∼ 1/Δt ∼ 1/(Δx)n = N n . Then the total numerical effort turns out to be NE ∼ N n+1 , which may restrict the application of fully explicit methods, in particular if higher derivatives are around. The same stability consideration can be done for the implicit scheme (7.59) for which one obtains Vk =

1 , 1 + 2s(1 − cos k)

an amplification factor whose absolute value is less than or equal to one for all k. Thus, the implicit method is unconditionally stable, and restrictions for the time step are only due to accuracy constraints. The numerical effort is then independent on the order of the appearing derivatives NE ∼ N. 7.2.4 Hyperbolic PDEs, example: convection equation, wave equation 7.2.4.1 Convection equation We begin with the most simple hyperbolic PDE in (1+1) dimensions, the convection equation (7.6). For v > 0 the family of characteristics has a positive slope in the tx-plane, Figure 7.1. Open Cauchy initial conditions of the form u(x, 0) = u0 (x),

u(0, t) = ua (t)

allow for a unique solution in 0 ≤ x ≤ L, t ≥ 0. An explicit FTCS scheme takes the form ui(n+1) = ui(n) −

1 (n) (n) C(ui+1 − ui−1 ), 2

(7.65)

where we have introduced the Courant number C=

vΔt . Δx

(7.66)

7.2 Finite differences



227

However, a von Neumann analysis shows the numerical instability already for arbitrarily small time steps. To construct a stable algorithm one can change the representation of the derivatives, the differential quotients. The most simple way is the so-called upwind scheme. Here, a space derivative is taken asymmetric in the ‘upstream’ direction. Instead of (7.65) one obtains in the lowest order (n) (n) {u − ui−1 , ui(n+1) = ui(n) − C { i (n) (n) {ui+1 − ui ,

C, v > 0 C, v < 0

(7.67)

which can be extended to the more general case v = v(x) by substituting v with vi . With a constant v the amplification factor reads |V |2 = 1 − 2|C|(1 − |C|)(1 − cos k) and with |V |2 ≤ 1 one finds the Courant condition |C| ≤ 1.

(7.68)

Substituting ui(n) on the left-hand side of (7.65) by its average 1 (n) 1 (n) (n) (n) ui(n+1) = (ui+1 + ui−1 ) − C(ui+1 − ui−1 ), 2 2

(7.69)

leads to the Lax scheme, which also proves to be stable if the Courant condition (7.68) holds. For both upwind and Lax methods one pays for stability with artificial numerical friction. Comparing (7.67) or (7.69) with the FTCS schemes of the parabolic transport equation 2 𝜕t u + v 𝜕x u = D 𝜕xx u,

one finds for the upwind scheme D = |v|Δx and for the Lax scheme D=

Δx 2 . 2Δt

A second order scheme with respect to the time step is obtained when discretizing the time derivative by u(n+1) − ui(n−1) 𝜕u(t) 󵄨󵄨󵄨󵄨 󵄨󵄨 = i 𝜕t 󵄨󵄨i 2Δt

(7.70)

228 � 7 Partial differential equations I, basics and results in (n) (n) ui(n+1) = ui(n−1) − C(ui+1 − ui−1 ).

(7.71)

The second order time discretization (7.70) applied on the diffusion equation leads to an unstable scheme. For the convection equation on the other hand, it yields a much better stability behavior as the one-step schemes discussed before. Hence for the amplification factor one finds V = −iC sin k ± √1 − C 2 sin2 k, which for all |C| ≤ 1 has an absolute value equal to one. At least in the case of the convection equation, the method is free of any numerical dissipation. Since time integration ‘jumps’ over the spacial grid line at each time step, the method is also called the leapfrog scheme. Even and odd grid points (black and white squares on a checkerboard) are decoupled and may drift apart in the course of the iterations. This can be avoided by introducing a weak additional diffusion. Figure 7.17 shows three numerical solutions of the convection equation with the same initial condition 2

2

u0 = e(x−0.1) /γ ,

γ = 0.2,

ua = 0

and the same time-depending v(t) = 2 cos(πt). For the upwind scheme, one clearly recognizes numerical dissipation ∼ Δx that vanishes using leapfrog (right frame).

Figure 7.17: Numerical solutions of the convection equation with contour lines in the xt-plane. Left: upwind with Δx = 1/1000, middle: upwind with Δx = 1/100, right: leapfrog with Δx = 1/1000. Maximal Courant number C = 0.1. The leapfrog method shows no dissipation.

7.2.4.2 Wave equation Wave equations are encountered in almost all fields of physics. We shall again start with the most simple (1+1)-dimensional case (7.26). For constant Dirichlet boundary conditions

7.2 Finite differences �

u(0, t) = ua ,

229

u(L, t) = ub

an analytical solution can be written up as in (7.55), but now with gn (t) = An sin ωn t + Bn cos ωn t,

ωn = ckn =

ncπ . L

Thus one has to determine twice as many coefficients as for the diffusion equation and needs accordingly twice as many initial conditions. For physical problems one often prescribes u(x, 0) = u0 (x),

𝜕t u(x, 0) = v0 (x)

(7.72)

and finds after some short manipulation L

Bn =

2 ∫ dx u0 (x) sin kn x, L

L

An =

0

2 ∫ dx v0 (x) sin kn x. ωn L 0

Discretizing the second time derivative with the help of (7.30) yields a leapfrog scheme of the form ui(n+1) = 2ui(n) − ui(n−1) + c2

Δt 2 (n) (n) (u − 2ui(n) + ui−1 ). Δx 2 i+1

(7.73)

As already seen in (7.71), this is a two-step method. To compute ui(n+1) we must know ui(n) and additionally ui(n−1) . Let the initial conditions be given in the form (7.72). Then we find ui(0) and ui(−1) from ui(0) = u0 (xi ),

ui(−1) = ui(0) − v0 (xi )Δt.

All further values for n > 0 then follow by iterating (7.73). Is the explicit forward scheme (7.73) numerically stable? Because this is a two-step method, von Neumann’s stability analysis is a bit more involved. With the well-tried mode ansatz uj(n) = A(n) eikj follows from (7.73) A(n+1) = 2A(n) − B(n) + 2q2 A(n) (cos k − 1) B(n+1) = A(n) with q = cΔt/Δx and the auxiliary variable B(n) . We write (7.74) in matrix form A (n+1) A (n) ( ) = M( ) B B

(7.74)

230 � 7 Partial differential equations I, basics with M=(

2 + 2q2 (cos k − 1) 1

−1 ). 0

To achieve a stable algorithm, the absolute values of the two eigenvalues λ12 of M have to be less than or equal to one. Due to λ1 λ2 = Det M = 1 this is only possible if λ1 = λ2 = 1 or the two λi form a complex conjugated pair having an absolute value of one. From this we derive q2 ≤

2 . 1 − cos k

An upper limit for q and hence for the time step is provided by k = π (compare Figure 7.16) and results in q2 ≤ 1 or Δt ≤ Δx/c. If we substitute the phase velocity c by the transport velocity v this is again the Courant condition (7.68). As an application, we wish to study a problem from fluid mechanics. Based on the Euler equations for an inviscid fluid, one systematically derives the (two-dimensional) shallow water equations: 𝜕t h = −(h − f ) ⋅ ΔΦ − (∇(h − f )) ⋅ (∇Φ)

1 𝜕t Φ = −g ⋅ (h − h0 ) − (∇Φ)2 . 2

(7.75)

2 2 Φ and h depend on x, y and t, g = 9.81 m/s2 is the gravity acceleration, while Δ = 𝜕xx + 𝜕yy and ∇ = (𝜕x , 𝜕y ) stand for the two-dimension Laplace and gradient operator, respectively. The water is enclosed in the vertical direction by the two planes z = f (x, y) (ground) and z = h(x, y, t) (surface; see Figure 7.18), and its horizontal velocity at the surface is determined by the potential Φ as

v⃗H = ∇Φ. Equations (7.75) are valid only for surface structures (waves) that are slowly varying on the scale of the average water depth h0 . For a derivation of the shallow water equations we refer to fluid mechanics textbooks, e. g., [3]. Like the underlying basic hydrodynamic equations, the shallow water equations are nonlinear. However, for small wave amplitudes η h(x, y, t) = h0 + η(x, y, t)

7.2 Finite differences �

231

Figure 7.18: Sketch of the shallow water system. The water is confined by the stationary ground at z = f and the deformable surface at z = h.

we may linearize (7.75) according to 𝜕t η = −(h0 − f )ΔΦ + (∇f ) ⋅ (∇Φ)

𝜕t Φ = −gη.

(7.76)

Differentiating the first equation with respect to time and inserting the second one yields 𝜕tt2 η − g ⋅ (h0 − f )Δη + g ⋅ (∇f ) ⋅ (∇η) = 0. If the slope of the ground is also small we can finally neglect the last term and arrive at the (2+1)-dimensional wave equation 𝜕tt2 η − c2 Δη = 0

(7.77)

with a space dependent phase speed c(x, y) = √g ⋅ (h0 − f (x, y)). Thus waves are propagating slower in shallow water, in agreement with experience. What happens if a wave travels from deep water into more shallow regions, for instance while approaching the shore? To find out we solve (7.77) numerically applying the scheme (7.73). For the ground we use a parabola in x direction: f (x) = 0.9 x 2 on the unit square. One can choose the scaling of space and time to obtain h0 = g = 1. Neumann boundary conditions at x = 0, 1 and y = 0, 1 allow for unique solutions. To create the waves we assume an oscillating δ-shaped source term on the right-hand side of (7.77)

232 � 7 Partial differential equations I, basics ∼ δ(x − x0 )δ(y − y0 ) cos ωt, which can be realized numerically by setting a certain single grid point, e. g., in the middle of the layer x0 = y0 = 1/2 to uI/2,J/2 = cos ωt. Figure 7.19 shows three snapshots at consecutive times as well as the corresponding surface profiles along the line y = 1/2. If the waves reach the shore their speed and their length decrease but their amplitude increases. The grid consists of I ×J = 200 × 200 mesh points; the code can be found in [4].

Figure 7.19: Wave propagation caused by an oscillating point source in the middle of the integration domain. The water depth decreases with increasing x. Bottom row: cross section along the dashed lines.

7.3 Alternative discretization methods FD schemes can be developed and coded in a straightforward way with rather less effort while proving reliable and robust for various applications. On the other hand, more sophisticated methods have been developed to approximate differential equations with a system of algebraic equations that have higher accuracy and better convergence. Although the effort in code writing can be higher at first, this normally pays off with the lower CPU consumption and faster performance.

7.3 Alternative discretization methods

� 233

7.3.1 Chebyshev spectral method The method is again explained with one spacial dimension and can be generalized afterwards. We follow mainly the first parts of the recommended book by Guo et al. [5]. As already seen in Section 5.3, the idea behind spectral methods is to decompose the desired solution u(x) into a (finite) sum of certain given functions φi (x), called base functions, Galerkin functions, or ansatz functions: N

u(x) ≈ u(N) (x) = ∑ ai φi (x).

(7.78)

i=0

Even the FD method fits in this representation, taking the special base functions φi (x) = {

1 0

if x = xi

(7.79)

else

with the N grid points xi . However, here the notion FD has become more accepted. 7.3.1.1 Lagrange polynomials For the Chebyshev spectral method one takes for the base the Lagrange polynomials of Nth degree: N

φi (x) = ℒ(N) (x) = ∏ i

j=0,j=i̸

x − xj

xi − xj

.

(7.80)

As one easily may recognize, the Lagrange polynomials have the same property (7.79) as the ‘FD base’, namely ℒi (xj ) = δij

(7.81)

(N)

with the Kronecker symbol δij . As for the FD methods x is only defined at certain discrete points xj , called mesh points or collocation points. For the Chebyshev–Gauss–Lobatto method the mesh points are generated via xj = − cos(

jπ ), N

j = 0...N

(7.82)

and span the interval [−1, 1]. Note that the mesh points are no longer equidistant but accumulate towards the borders, which improves the convergence and accuracy of the method. Figure 7.20 shows the four Lagrange polynomials for N = 3. All polynomials are of third degree, the mesh points from (7.82) are computed as x0 = −1,

x1 = −1/2,

x2 = 1/2,

x3 = 1.

234 � 7 Partial differential equations I, basics

Figure 7.20: The four Lagrange polynomials for N = 3. The four mesh points are located at x0 = −1, x1 = −1/2, x2 = 1/2, x3 = 1 (dashed).

In the following we write the numerical approximation (7.78) in the decomposition N

u(N) (x) = ∑ ui ℒ(N) (x). i i=0

(7.83)

Here, the N + 1-dimensional vector u⃗ represents the function u(N) (x) in the base of the N + 1 Lagrange polynomials ℒ(N) . Due to (7.81) the inverse relation i ui = u(N) (xi ) is immediately written down. 7.3.1.2 Differentiation matrix Differentiating (7.83) with respect to x yields N d ℒ(N) (x) du(N) (x) i = ∑ ui . dx dx i=0

(7.84)

Defining ui(1) ≡

du(N) (x) 󵄨󵄨󵄨󵄨 󵄨 dx 󵄨󵄨󵄨x=xi

we may write (7.84) in the form u⃗ (1) = D u.⃗ The differentiation matrix D has the dimension (N + 1) × (N + 1). Its elements d ℒ(N) (x) 󵄨󵄨󵄨 j 󵄨󵄨 Dij = dx 󵄨󵄨󵄨x=xi

(7.85)

7.3 Alternative discretization methods

� 235

can be computed explicitly as: Dij =

ci (−1)i+j cj xi − xj

xi Dii = − 2(1 − xi2 ) D00 = −DNN = −

for i ≠ j = 0 . . . N

with ck = {

2 1

if k = 0, N else

(7.86)

for i = 1 . . . N − 1 2N 2 + 1 . 6

For the case N = 3 one finds 4 1/3 −1 4/3

−19/6 −1 D=( 1/3 −1/2

−4/3 1 −1/3 −4

1/2 −1/3 ). 1 19/6

(7.87)

The second derivative with respect to x is obtained by applying D twice on u:⃗ u⃗ (2) = D D u⃗ = D2 u⃗

(7.88)

with 16 10 1 D2 = ( −2 3 −8

20 8 −16 −28

−28 −16 8 20

−8 −2 ). 10 16

(7.89)

7.3.1.3 Differential equations We start with the stationary heat equation in one dimension including an internal heat source Q: d2 T = −Q(x). dx 2

(7.90)

In addition we assume the Dirichlet conditions T(−1) = Tℓ ,

T(1) = Tr .

(7.91)

To incorporate the boundary conditions the differentiation matrix D2 has to be modified. Again we consider the case N = 3. Instead of (7.89) we now use 1 10/3 D̃ = ( −2/3 0 2

0 −16/3 8/3 0

0 8/3 −16/3 0

0 −2/3 ) 10/3 1

(7.92)

236 � 7 Partial differential equations I, basics and obtain for the discretized form of (7.90) 1 10/3 ( −2/3 0

0 −16/3 8/3 0

0 8/3 −16/3 0

0 T0 Tℓ −2/3 T1 −Q1 )( ) = ( ) 10/3 T2 −Q2 1 T3 Tr

(7.93)

2 with Qi = Q(xi ). The numerical task consists now in inverting D̃ and the determination of the solution Ti . If, on the other hand, Neumann conditions are given on one or on both boundaries, D2 has to be changed accordingly. Take, for instance,

dx T|x=−1 = S,

T(1) = Tr ,

so the first line of D2 has to be substituted by the first line of D (first derivative) and instead of (7.93), we obtain the system −19/6 10/3 ( −2/3 0

4 −16/3 8/3 0

−4/3 8/3 −16/3 0

1/2 T0 S −2/3 T1 −Q1 )( ) = ( ). 10/3 T2 −Q2 1 T3 Tr

(7.94)

Figure 7.21 shows a solution of (7.93) but for N = 100 and the Dirichlet conditions, T(−1) = T(1) = 0. The heat source was chosen randomly, but obeys the constraint 1

∫ dx Q(x) = 0. −1

Figure 7.21: Temperature distribution (bold) created by randomly chosen heat sources as a solution of (7.93) with N = 100 Chebyshev–Gauss–Lobatto mesh points. The thin line shows Q(x).

7.3 Alternative discretization methods

� 237

Note that in the discretized form of the integral the variable step sizes (7.82) must be accounted for, thus 1

N

∫ dx Q(x) ≈ ∑ Qi Δxi , i=1

−1

with Δxi = xi − xi−1 .

Next we consider the time-dependent heat equation (7.54) with the boundary conditions (7.91) and the initial condition T(x, 0) = T a (x). Discretizing the time derivative as in (7.56) yields the iteration rule T⃗ (j+1) = M T⃗ (j)

(7.95)

with T⃗ (j) = (T0 (jΔt), T1 (jΔt), . . . , TN (jΔt)) and the matrix 2 M = 1 + κΔt D̃ 2 with the modified matrix D̃ that takes for the special case N = 3 the form

0 10/3 D̃ = ( −2/3 0 2

0 −16/3 8/3 0

0 8/3 −16/3 0

0 −2/3 ). 10/3 0

(7.96)

The iteration is started with the initial vector T⃗ (0) = (Tℓ , T1a , T2a , Tr ) where initial and boundary conditions are incorporated. The scheme (7.95) is fully explicit. With only a bit more effort an implicit method can be constructed (

1 T⃗ (j) 2 − κD̃ )T⃗ (j+1) = , Δt Δt

where now the matrix between the brackets must be inverted.

(7.97)

238 � 7 Partial differential equations I, basics 7.3.2 Spectral method by Fourier transformation 7.3.2.1 Fourier modes For periodic boundary condition u(x) = u(x + L) a natural base is provided by Fourier modes. Instead of (7.78) we write N−1

u(x) = ∑ ũ n e−2πinx/L n=0

(7.98)

where i = √−1. The function u at the now equidistant mesh points xj = jΔx = jL/N is given as N−1

uj = ∑ ũ n e−2πinj/N n=0

(7.99)

and the inverse relation reads ũ n =

1 N−1 2πinj/N . ∑ue N j=0 j

(7.100)

The method is best suited for linear PDEs with constant coefficients but can also be extended to certain nonlinear problems, as we shall see below. 7.3.2.2 Stationary problems Again we consider (7.90) for 0 ≤ x ≤ L. Inserting (7.98) yields T̃n = kn−2 Q̃ n

(7.101)

with the Fourier transforms T̃n , Q̃ n of T(x), Q(x) and kn = 2πn/L. To avoid a singularity at k = 0 one must require Q̃ 0 = 0. Due to 1 N−1 Q̃ 0 = ∑Q, N j=0 j the mean of Q must be zero, which also follows by integrating (7.90) over x and supposing periodic boundary conditions. Obviously the Laplace operator is in diagonal

7.3 Alternative discretization methods

� 239

form in Fourier space for arbitrary spatial dimensions and can be inverted easily. Backtransformation via (7.98) yields T(x) in real space. The Fourier transforms can be executed effectively applying the FFT-algorithm (Fast-Fourier-Transform), the numerical effort thereby is ∼ N log N. Taking the FD-representation of (7.90) Tj+1 − 2Tj + Tj−1 = −Δx 2 Qj and inserting the Fourier transform (7.99), a different result from (7.101) is obtained, namely T̃n =

Δx 2 Q̃ . 2 − 2 cos(kn Δx) n

(7.102)

Expanding the fraction with respect to Δx, Δx 2 1 Δx 2 ≈ 2 + + O(Δx 4 ), 2 − 2 cos(kn Δx) kn 12 we see that the difference vanishes with Δx 2 but can become important for larger values of kn , as in Figure 7.22.

Figure 7.22: The difference between (1): 1/(2N 2 (1 − cos(2πn/N))) and (2): 1/(4π 2 n2 ) for N = 100 becomes noticeable for larger n (smaller wave length).

7.3.2.3 Other boundary conditions The procedure can be readily extended to homogeneous Dirichlet conditions u(0) = u(L) = 0.

240 � 7 Partial differential equations I, basics Instead of (7.98) we simply take the sine transform N−1

us (x) = ∑ ũ n sin(πnx/L).

(7.103)

n=1

Inhomogeneous boundary conditions u(0) = uℓ ,

u(L) = ur

(7.104)

can be also fulfilled with (7.103) by adding the homogeneous solution of (7.90) d 2 uh = 0, dx 2

(7.105)

leading to u(x) = us (x) + uh (x), where uh must satisfy the inhomogeneous boundary conditions (7.104). In one dimension the solution of (7.105) is x uh (x) = uℓ + (ur − uℓ ) , L with possible extension to more dimensions. Thus, for the rectangle 0 ≤ x ≤ Lx ,

0 ≤ y ≤ Ly

we have instead of (7.103) us (x, y) = ∑ ũ nm sin(πnx/Lx ) sin(πmy/Ly ), n,m

(7.106)

and the homogeneous problem reads (

𝜕2 𝜕2 + )uh (x, y) = 0. 𝜕x 2 𝜕y2

(7.107)

If one wants to fulfill the inhomogeneous boundary condition u(0, y) = u(x, 0) = u(x, Ly ) = 0,

u(Lx , y) = f (y),

the exact solution of (7.107) reads ∞

uh (x, y) = ∑ An sinh(kn x) sin(kn y), n=1

kn = πn/Ly

7.3 Alternative discretization methods

� 241

with Ly

2 An = ∫ dy f (y) sin(πny/Ly ). Ly sinh(πnLx /Ly ) 0

For the discretized form we obtain M−1

uijh = ∑ Ã n sinh(πni/M) sin(πnj/M) n=1

with à n =

M−1 2 ∑ f sin(πnℓ/M) M sinh(πnN/M) ℓ=1 ℓ

and fℓ = f (ℓΔy). Here we have assumed equal step sizes Δx = Δy = Lx /N = Ly /M for an N × M grid. 7.3.2.4 Time-dependent problems We continue with the one-dimensional heat equation (7.54). An implicit iteration algorithm is obtained taking the backward differentiation with respect to time 𝜕T T(x, t) − T(x, t − Δt) T (j) (x) − T (j−1) (x) = = 𝜕t Δt Δt

(7.108)

which leads to [

1 𝜕2 T (j) − κ 2 ]T (j+1) = . Δt Δt 𝜕x

(7.109)

FD-discretization of the second derivative yields the scheme (7.59). Applying the Fourier transform (7.98) the operator in the square brackets takes diagonal form and can be immediately inverted: −1 ̃ (j) T 1 T̃n(j+1) = [ + κkn2 ] n . Δt Δt

(7.110)

If inhomogeneous Dirichlet conditions as (7.104) are given, the sine transform (7.103) can be used again. In addition, the homogeneous problem to (7.109) [

1 𝜕2 − κ 2 ]Th (x) = 0 Δt 𝜕x

(7.111)

242 � 7 Partial differential equations I, basics has to be solved, which can be done exactly in one dimension. One finds Th (x) = A sinh(αx) + B cosh(αx) with α = (κΔt)−1/2 and A, B from the boundary conditions (7.104) A=

Tr − Tℓ coth(αL), sinh(αL)

B = Tℓ .

7.3.3 Finite-element method The finite-element method (FE) was developed to solve PDEs, mainly from fluid mechanics, on complex geometries. It belongs to the spectral methods with base functions that are only nonzero in certain places, namely at the finite elements. Inside each element a linear space dependence or a simple polynomial is assumed [6]. We shall introduce the method during the example of the two-dimensional Laplace equation 𝜕2 u 𝜕2 u + =0 𝜕x 2 𝜕y2

(7.112)

with Dirichlet boundary conditions. 7.3.3.1 Grid and base functions We are given an irregular, possibly multiply connected domain as in Figure 7.23. We choose n inner mesh points or nodes (xi , yi ),

i = 1...n

and m (outer) points lying on the boundaries. The domain is divided in triangles having the inner and outer points as corners (vertices). In higher (d) dimensions, cells with d + 1 corners are defined. The approximate solution sought after can be written similarly to (7.78) according to: n

u(x, y) = ∑ φi (x, y), i

(7.113)

where φi is nonzero only in those cells containing the node xi , yi as a vertex. There, one adjusts φi (xi , yi ) = ui = u(xi , yi ) with ui as the function value (node value) at the node i. Let the points (xi , yi ), (xk , yk ), (xℓ , yℓ ) span triangle number j, Figure 7.24. We split the function φi into φi =



j ∈ neighbors(i)

αj ,

(7.114)

7.3 Alternative discretization methods

� 243

Figure 7.23: Irregular geometry with n = 1894 inner and m = 242 outer points. The domain, here multiply connected, is paved with finite elements that are triangles for the two-dimensional case.

Figure 7.24: ui , uk , uℓ span the element j inside which the base functions αj are linear and given as (7.115). φi is composed of all (here five) αj and is continuous along the grid lines.

where αj is nonzero only in the triangle j and there linear, thus αj (x, y) = aj + bj x + cj y.

(7.115)

The sum over j in (7.114) runs over all triangles having xi , yi as vertex and shown in Figure 7.24. The coefficients aj , bj , cj , different in each triangle, follow (7.113) to form three equations:

244 � 7 Partial differential equations I, basics 1 (1 1

xi xk xℓ

yi aj ui yk ) (bj ) = (uk ) yℓ cj uℓ

(7.116)

and, after an inversion of the matrix: aj xk yℓ − xℓ yk 1 ( b j ) = ( yk − yℓ D cj xℓ − xk

xℓ yi − xi yℓ yℓ − yi xi − xℓ

xi yk − xk yi ui yi − yk ) ( uk ) . xk − xi uℓ

(7.117)

Here, D is the determinant: D = xi yk + xk yℓ + xℓ yi − xk yi − xℓ yk − xi yℓ . The coefficients aj , bj , cj and hence the element functions αj depend linearly on the node values ui . 7.3.3.2 Variational problem Equation (7.112) can be derived from an extremum principle. Taking the functional S[u] =

1 ∫ dx dy [(𝜕x u)2 + (𝜕y y)2 ] 2

(7.118)

one finds for its functional derivative δS 𝜕2 u 𝜕2 u = − 2 − 2. δu 𝜕x 𝜕y Hence, a function u for which S has an extremum is a solution of the Laplace equation (7.112). But if u is decomposed as (7.113), (7.114) in the element functions αj we find for the triangle j due to (7.115) simply (𝜕x u)j = bj ,

(𝜕y u)j = cj .

Then (7.118) turns into S(ui ) =

1 ∑ F (b2 + cj2 ), 2 j j j

(7.119)

where the sum runs over all triangles and Fj denotes the surface area of triangle j. The extremum of S is found by equating all its derivatives with respect to ui to zero: 𝜕bj 𝜕cj 𝜕S = ∑ Fj (bj + cj ) = 0. 𝜕ui 𝜕ui 𝜕ui j

(7.120)

7.3 Alternative discretization methods

� 245

Since bj and cj depend linearly on ui , (7.120) has the form of a linear, inhomogeneous system of equations for the n inner node values ui . The m boundary points thereby play the role of the inhomogeneities. 7.3.3.3 Algorithm The tasks of an FE-algorithm can be divided into three parts: (a) Mesh generation The location of the nodes is chosen as appropriate to the domain and the geometry of the boundaries. The node coordinates xi , yi ,

i = 1...n + m

are stored and the boundary points are marked, e. g., by sorting as the first or the last m pairs. To evaluate (7.117) later, for each point i one must know the amount as well as the ordinal number of its neighbor points (the points with a common triangle). This can be achieved introducing an array NN(i, j), where NN(i, 0) denotes the amount k of neighbors and NN(i, j), j = 1 . . . k the corresponding ordinal numbers. (b) Laplace solver The linear system (7.120) has the form n

m

j

k

∑ Lij uj = ∑ Rik ukr ,

i = 1 . . . N.

(7.121)

Here, L is an n × n, R an n × m matrix, uj denotes the inner points, and ukr the given outer boundary points (we restrict ourselves here to Dirichlet conditions). Since each point is connected only to a few surrounding points the matrix L is sparse. (c) Evaluation To visualize the results one can plot, for instance, contour lines. Let ui , uj , uk be the node values at the corners of a certain triangle. The contour line u = h crosses the triangle if min(ui , uj , uk ) < h < max(ui , uj , uk ).

(7.122)

For a triangle where (7.122) is fulfilled one computes the intersections of h with its edges and connects them with a line. Normally, two intersection points on two different edges exist. The cases where h is exactly equal to a value on one or more corners can be treated separately.

246 � 7 Partial differential equations I, basics

Figure 7.25: Numerical solution for the Laplace equation with the annotated Dirichlet conditions and three point charges in the interior. The contour lines correspond to the potential values u = 0.001/0.05/0.2/0.4/0.6/0.9 (solid) und u = −0.001/ − 0.05/ − 0.1/ − 0.2/ − 0.4 (dashed).

Figure 7.25 shows the solution of the two-dimensional Laplace equation on the domain of Figure 7.23, built of n = 1894 inner and m = 242 outer points. To facilitate mesh generation, an unambitious ‘semiautomatic’ program mesh_generator is presented in [4]. A ppm-file (portable pixmap) containing the geometry of the domain in the form of the boundaries is needed, created with a standard graphics program, e. g., xfig. Then the Laplace equation is solved on the prescribed domain by the program laplace_solver. An output file is generated containing the node values, which can be plotted by grid_contour. A plot as shown in Figure 7.25 is the result. More detailed instructions can be also found in [4] as well as in Appendix D.2.

7.4 Nonlinear PDEs In the next chapter we shall investigate in more detail several nonlinear problems as applications and examples. Here, we wish to concentrate on the Ginzburg–Landau equation.

7.4 Nonlinear PDEs



247

7.4.1 Real Ginzburg–Landau equation Based on the original idea of Landau5 [7] the Ginzburg–Landau equation serves as a phenomenological model describing phase transitions. A macroscopic quantity, for instance the magnetization of a ferromagnetic material, is described by the help of an order parameter q(t). The values q = ±q0 correspond to the magnetic state where all elementary magnets are parallel and point in the same direction, and q ≈ 0 denotes the nonmagnetic state above the Curie temperature Tc where the spins are not ordered. If there is no preferred direction in space, the free energy F should not depend on the sign of q and F(q) = F(−q). Expanding F up to the fourth order yields 1 1 F(q, T) = F(0, T) + αq2 + βq4 . 2 4

(7.123)

If β > 0 there exists a minimum of F at q = 0 if α > 0, at q = ±q0 = (−α/β)1/2 if α < 0. Obviously, α = 0 can be considered as the critical point where a phase transition between the ordered, ferromagnetic state and the disordered, nonmagnetic state takes place. Thus it is easy to relate α to temperature: α∼−

Tc − T = −ε Tc

where ε is called the ‘bifurcation parameter’ or ‘control parameter.’ To achieve a dynamical description of the phase transition one assumes further that F is minimized in the course of time. Thus a gradient dynamics model is obtained having the form dq dF =− = εq − βq3 dt dq or, after appropriate scaling of q: dq = εq − q3 . dt

(7.124)

The three stationary solutions of (7.124) q0s = 0,

s q12 = ± √ε

(7.125)

correspond to the different phases; see Figure 7.26. By the natural extension q(t) 󳨀→ q(x, t) a space-time dependent magnetization is introduced, allowing for the description of dislocations and magnetic domains separated by Bloch walls. Each wall and each dislocation increases the total free energy. The ground state minimizing F corresponds still to 5 Lew Dawidowitsch Landau, Russian physicist, 1908–1968.

248 � 7 Partial differential equations I, basics

Figure 7.26: Pitchfork bifurcation at ε = 0. For ε > 0 the nonmagnetic state becomes unstable and the two stable states qs = ±√ε branch off. This is a typical scenario for a phase transition of the second order.

one of the homogeneous solutions (7.125). A sensitivity of F with respect to dislocations and walls is achieved by adding the ‘Ginzburg’ term6 ∼ (∇q)2 to (7.123). Integration over the spatial coordinates yields the free energy functional 1 1 1 F[q] = ∫ dx dy { αq2 + βq4 + D((𝜕x q)2 + (𝜕y q)2 )}. 2 4 2

(7.126)

A

As above a gradient dynamics is obtained, we now apply the functional derivative 𝜕q δF 𝜕F 𝜕F 𝜕F =− =− + 𝜕x ( ) + 𝜕y ( ), 𝜕t δq 𝜕q 𝜕(𝜕x q) 𝜕(𝜕y q) which eventually leads to the real Ginzburg–Landau equation: 𝜕t q = −αq + D Δq − βq3 . Finally, scaling the time, length, and q results in the normal form: 𝜕t q = εq + Δq − q3 .

(7.127)

7.4.2 Numerical solution, explicit method With (7.127) we have a PDE that is nonlinear with the dependent variable q. A fully explicit scheme (FTCS) as an extension of (7.60) is obtained adding the extra terms evaluated at time t: (n+1) qi,j

=

(n) qi,j

+

(n) Δt (εqi,j



(n) 3 (qi,j )

+

(n) (n) (n) (n) (n) qi,j+1 + qi+1,j − 4qi,j + qi−1,j + qi,j−1

6 Witali Lasarewitsch Ginsburg, Russian physicist, 1916–2009.

Δx 2

).

(7.128)

7.4 Nonlinear PDEs



249

To check the numerical stability we must linearize around one of the stable solutions, e. g., qs = √ε: qij = √ε + uij . For small u the linear system (n+1) (n) (n) ui,j = ui,j + Δt (−2εui,j +

(n) (n) (n) (n) (n) ui,j+1 + ui+1,j − 4ui,j + ui−1,j + ui,j−1

Δx 2

)

(7.129)

results. A von Neumann analysis similar to Section 7.2.3.3 but now in two dimensions (n) um,n = A(n) ei(kx m+ky n)

leads to A(n+1) = V A(n) with the amplification factor V = 1 + 2[−ε +

1 (cos kx + cos ky − 2)]. Δx 2

The stability condition |V | ≤ 1 yields an upper limit for the time step Δt ≤

Δx 2 . εΔx 2 + 4

(7.130)

In practice, the inequality ε Δx 2 ≪ 1 holds and one may estimate Δt ≤

Δx 2 , 4

(7.131)

which is the same result already found for the two-dimensional diffusion equation. Hence the diffusion part of (7.127) is crucial for the numerical stability and not the nonlinearities. 7.4.3 Numerical solution, semi-implicit method For a fully implicit method all terms inside the brackets on the right-hand side of (7.128) have to be known to the ‘new’ time t+Δt. Hence a nonlinear system of equations had to be

250 � 7 Partial differential equations I, basics solved which would have been possible only iteratively. An easier realizable procedure is obtained if only the linear parts of the right-hand side are considered at t +Δt whereas the nonlinearity remains explicit at t. Instead of (7.128) we arrive at the semi-implicit scheme (n+1) qi,j

=

(n) qi,j

+

(n+1) Δt (εqi,j



(n) 3 (qi,j )

+

(n+1) (n+1) (n+1) (n+1) (n+1) qi,j+1 + qi+1,j − 4qi,j + qi−1,j + qi,j−1

Δx 2

).

(7.132)

The same stability analysis as above yields now the upper time step limit Δt ≤

1 , 2ε

(7.133)

independently of the spatial step size Δx. Coding in MATLAB we can use the toolbox ‘gallery’ [8], providing several standard matrices, amongst others the FD-version of the Laplacian in two dimensions, as shown in Figure 7.8 (a). Running the script clear; kplot=[5,10,20,30,60,100,150,200,300]; % at k=kplot(m) make pic. m n=200; dx=0.5; dt=1; kend=300; eps=0.1; % mesh size, steps,epsilon psi=zeros(n); % 2D-array for plotting lap=-gallery('poisson',n)/dx^2; m=(1/dt-eps)*speye(n^2)-lap;

% Laplace discretizing % matrix M

phi=rand(n^2,1)'-.5; ib=1;

% random init. cond. % count for figs.

for k=0:kend phi=(phi/dt-phi.^3)/m;

% time loop % matrix inversion

if k==kplot(ib) % plotting subplot(3,3,ib); ib=ib+1; psi(:)=phi; % reshape phi to 2D contourf(psi); axis off; title(sprintf('t = %5.1f',k*dt)); drawnow; end end creates a time series of nine snapshots as shown in Figure 7.27.

7.4 Nonlinear PDEs



251

Figure 7.27: Numerical solution of the real Ginzburg–Landau equation (7.127) obtained with the scheme (7.132) for ε = 0.1, time series.

7.4.4 Problems The Fisher–Kolmogorov equation. With pencil and paper: 1. The Fisher–Kolmogorov equation is a nonlinear diffusion equation having the form: 2 𝜕t u(x, t) = D 𝜕xx u(x, t) + αu(x, t) − βu2 (x, t),

u(x, t) ≥ 0,

D, α, β > 0.

(7.134)

252 � 7 Partial differential equations I, basics By the help of an appropriate scaling of t, x, u, transform the Fisher–Kolmogorov equation to the normal form 2 𝜕t u(x, t) = 𝜕xx u(x, t) + u(x, t) − u2 (x, t).

2. 3.

Show with a linear stability analysis that the fixed point (stationary solution) u(0) = 0 is unstable, and that the fixed point u(0) = 1 is stable. Monotonic solutions that connect the two fixed points u = u(x − vt) = u(ξ),

4.

u(ξ → −∞) = 1,

6.

u(ξ → ∞) = 0,

v>0

(7.136)

are denoted as fronts or kinks. Therefore the front moves with a certain velocity v towards the fixed point u = 0, i. e., to the right-hand side. Find a lower limit for v. Show that for a certain v and κ u(x − vt) =

5.

(7.135)

1

(1 +

Ce−κ(x−vt) )2

,

C>0

(7.137)

is a solution of (7.135). Determine v and κ. What is the meaning of the (arbitrary but positive) constant C? Release the restriction u ≥ 0. With the help of the analogy to a one-dimensional motion of a point mass in a potential, show that a stationary, localized solution of (7.135) exists. Determine this solution by integrating the law of energy conservation of the mass point. Perform a linear stability analysis u(x, t) = u0 (x) + w(x) exp(λt) about the solution u0 (x) found in 5. and derive a stationary Schrödinger equation for the disturbances w(x) of the form 2 (−dxx + V (x))w(x) = −λw(x).

(7.138)

Find the ground state by expanding V (x) up to the order x 2 . What can you say about the stability of u0 (x)? And to code: 1. Verify the results of 1–4 solving (7.135) numerically with an FTCS method and the appropriate boundary conditions u(x = 0) = 1, u(x = L) = 0. Assume a large enough L. 2. Solve the Schrödinger equation (7.138) writing for instance a MATLAB code. Analyze its spectrum and its eigenfunctions.

Bibliography

� 253

Bibliography [1] [2] [3] [4] [5]

A. P. S. Selvadurai, Partial Differential Equations in Mechanics, Vol. 1, 2, Springer (2010). J. D. Jackson, Classical Electrodynamics, Wiley & Sons (1998). L. D. Landau and E. M. Lifshitz, Fluid Mechanics, Vol. 6, Butterworth–Heinemann, 2nd edn. (1987). FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html. W.Guo, G. Labrosse and R.Narayanan, The Application of the Chebyshev-Spectral Method in Transport Phenomena, Springer (2012). [6] T. I. Zohdi, A Finite Element Primer for Beginners: The Basics, Springer (2015). [7] L. D. Landau, On the Theory of Phase Transitions, Zh. Eksp. Teor. Fiz. 7, (1937). [8] D. J. Higham and N. J. Higham, MATLAB Guide, SIAM (2005).

8 Partial differential equations II, applications Now we shall apply the methods and schemes learned in the last chapter to problems from several fields of physics. Let us start with some simple quantum mechanical examples.

8.1 Quantum mechanics in one dimension 8.1.1 Stationary two-particle equation We first consider two interacting particles in one spatial dimension. The particles, each with mass m, are located at x1 , x2 and connected with a spring having the spring constant −α. The stationary Schrödinger equation thus reads Ĥ Ψ(x1 , x2 ) = E Ψ(x1 , x2 ), where Ψ is the two-particle wave function and the Hamiltonian is given as ℏ2 𝜕2 𝜕2 1 Ĥ = − ( 2 + 2 ) − α(x1 − x2 )2 . 2m 𝜕x1 𝜕x2 2

(8.1)

We shall also look at the case of a repelling interaction α > 0. In addition the particles are trapped in a potential well with length L and infinitely high walls, Ψ(0, x2 ) = Ψ(L, x2 ) = Ψ(x1 , 0) = Ψ(x1 , L) = 0. With the scaling xi =

L x̃ , π i

α=

π 4 ℏ2 α,̃ mL4

E=

π 2 ℏ2 ̃ E 2mL2

(8.2)

the stationary Schrödinger equation takes the dimensionless form (

𝜕2 𝜕2 ̃ + + α(̃ x̃1 − x̃2 )2 + E)Ψ = 0. 𝜕x̃12 𝜕x̃22

(8.3)

In the following we leave the tildes. For the decoupled case α = 0 the exact solution simply reads (N → ∞) Ψ(x1 , x2 ) =

2 N ∑ c sin ℓx1 sin mx2 π ℓ,m=1 ℓm

(8.4)

where the coefficients cℓm are determined from the initial condition. For the energy, one finds the levels https://doi.org/10.1515/9783110782523-008

8.1 Quantum mechanics in one dimension

� 255

E = Eℓm = ℓ2 + m2 ,

(8.5)

which are two-fold degenerated if ℓ ≠ m, reflecting the symmetry x1 ↔ x2 . The decoupled problem is similar to the single particle Schrödinger equation in a two-dimensional potential well. Inserting (8.4) in (8.3) with α ≠ 0 and multiplying with 2 sin kx1 sin nx2 π we obtain after integrating over x1 , x2 the linear system ∑ Mknℓm cℓm = E ckn

(8.6)

ℓ,m

with the matrix elements Mknℓm

ππ

4α = (ℓ + m )δℓk δmn − 2 ∫∫ dx1 dx2 (x1 − x2 )2 sin kx1 sin nx2 sin ℓx1 sin mx2 π 2

2

00

32αkℓmn −1 + (−1)k+ℓ + (−1)m+n − (−1)k+ℓ+m+n { { − , { { π2 (ℓ2 − k 2 )2 (m2 − n2 )2 { { { { m+n { { { {− 4αmn(1 + (−1) ) , { { 2 2 2 (m − n ) ={ { { 4αkℓ(1 + (−1)k+ℓ ) { { − , { { { (k 2 − ℓ2 )2 { { { 2 2 2 2 2 { { {α 3(n + k ) − π k n + k 2 + n2 , { 6k 2 n2

k ≠ ℓ, m ≠ n k = ℓ, m ≠ n

(8.7)

k ≠ ℓ, m = n k = ℓ, m = n.

As happens quite often in quantum mechanics, one is confronted with the task to diagonalize a large matrix. To convert M into a two-dimensional matrix, one can confine each pairs of indices to one: (k, n) → k ′ = k + (n − 1)N

(8.8)

and the same for (ℓ, m). N is the number of modes included in (8.4). The larger N, the better spatial resolution of the wave functions is achieved. However, M has N 2 × N 2 elements and the numerical effort grows fast with N 4 . Figure 8.1 shows the lower energy levels depending on α for N = 10. Note that the degeneracy vanishes due to the coupling. On the other hand, for large α nearly degenerated states occur again where a symmetric and an antisymmetric state belong to (almost) the same energy value. Figure 8.2 depicts contour lines of the 12 lowest states for α = 10. The probability densities are largest near the corners of the line x1 +x2 = L, corresponding to the furthest

256 � 8 Partial differential equations II, applications

Figure 8.1: Energy levels as a function of α. The states Ψℓm , Ψmℓ , degenerated for α = 0, split up. Inset: by linear combination of two degenerated states (bottom), new states can be constructed (top).

Figure 8.2: The 12 lowest states in the x1 , x2 -plane, from top left to bottom right, α = 10. The states (1,2), (3,4) (5,6) are almost degenerated, compare Figure 8.1.

possible separation of the two particles. In Figure 8.3 we show the states for attractive interaction, α = −10. Here, the probability density is maximal along the line x1 = x2 , hence the particles prefer to be close to each other.

8.1 Quantum mechanics in one dimension

� 257

Figure 8.3: The lowest states for α = −10. Here, degeneracy is completely removed.

8.1.2 Time-dependent Schrödinger equation The time-dependent one-particle Schrödinger equation already appeared in Chapter 5. We wish to continue our study again in one spatial dimension. Scaling of x as in (8.2) leads to its dimensionless form −

̃ 𝜕2 Ψ(x,̃ t)̃ ̃ x,̃ t)̃ Ψ(x,̃ t)̃ = i 𝜕Ψ(x,̃ t) , + U( ̃ 𝜕x̃ 2 𝜕t

(8.9)

where in addition time has been scaled with t̃ =

π2ℏ t 2mL2

and 2mL2 Ũ = 2 2 U. π ℏ In what follows we leave all tildes again. 8.1.2.1 The infinitely high potential well Again we consider a potential well with infinitely high walls U(x) = {

0, ∞,

0 0. Since the two diffusion constants Di are surely positive, this can only be valid for all k if b1 b2 < 0,

und a2 < 0

holds. Now consider a1 as a control parameter that can be increased from outside by varying e. g., temperature or pressure. The mode becoming unstable at first then has a wave number k = ±kc that minimizes (8.81), (a1 (kc ) = a1c ). The zero of the derivative of (8.81) with respect to k yields kc2 =

a2 bb a 1 a + √− 1 2 = ( 1 + 2 ). D2 D1 D2 2 D1 D2

(8.82)

For a1 > a1c the fixed point wi0 becomes unstable and the mode (8.78) with k = kc grows exponentially in time. This spatially periodic structure is nowadays called Turing instability or Turing structure. 8.4.1.2 Hopf instability To compute the critical point we simply assumed λ = 0 in (8.80). But there is another way for an instability to occur: λ may be complex valued and the mode oscillates in time. Then one has either a damped (old, stable state) or an exponentially increasing oscillation, depending on the sign of the real part of the eigenvalues λ. Thus the critical point is now determined by Re(λ) = 0 or λ = iω

(8.83)

292 � 8 Partial differential equations II, applications with a real valued frequency ω. Inserting (8.83) into (8.80) and separating real and imaginary parts one obtains the two equations ω2 = (D1 k 2 − a1 )(D2 k 2 − a2 ) − b1 b2 2

0 = (D1 + D2 )k − a1 − a2 .

(8.84a) (8.84b)

Equation (8.84b) determines the critical point depending on the wave number: a1 = (D1 + D2 )k 2 − a2 .

(8.85)

Obviously the mode with k = 0 becomes unstable at first as soon as a1 ≥ −a2 . Hence the oscillatory mode comes with a long-wave, even homogeneous, spatial structure, corresponding to kc = 0.

(8.86)

As already noted in Section 3.2, instabilities belonging to a pair of conjugate complex eigenvalues are also denoted as Hopf instabilities, and the frequency at the critical point is named Hopf frequency. It follows from (8.84a) with k = 0: ωc = √a1 a2 − b1 b2 = √−a12 − b1 b2 .

(8.87)

To obtain a real valued frequency, b1 b2 < −a12 must hold. Let b1 < 0 and b2 > 0. Then the substance w1 creates via the coupling b2 the substance w2 , while w2 diminishes w1 via b1 . This is why (8.74) is also denoted as an activator-inhibitor system with w1 as the concentration of the activator and w2 as that of the inhibitor. Temporal oscillations of a spatially uniform state were first experimentally observed by Belousov15 in 1950 and are now named Belousov–Zhabotinskii reactions. 8.4.1.3 Codimension-two But what happens now if one of the control parameters, say a1 , is increased over the threshold? The uniform stationary state becomes unstable, but how so exactly is determined by the other parameters. Depending on the values of a1 , b1 , and b2 as well as on the diffusion constants, the homogeneous state will give way either to a time periodic (Hopf) or to a space periodic (Turing) structure. From mathematics, the notion codimension is familiar. The codimension of an object is defined as the difference of the dimension of

15 Boris P. Belousov, Russian chemist, 1893–1970.

8.4 Pattern formation out of equilibrium

� 293

Figure 8.20: Sketch of Turing and Hopf instability for the two-equation system (8.76). If increasing a1 and a2 < a2CD2 , the Turing instability comes first (a), if a2 > a2CD2 the Hopf instability occurs directly above threshold (b). For (c), a2 = a2CD2 .

the room in which the object lives and the dimension of the object itself. A plane in the three-dimensional space possesses the codimension 3 − 2 = 1. Instability conditions like (8.81) or (8.85) determine the value of one parameter related to the others. They define an n−1-dimensional subspace of the n-dimensional parameter space and therefore have the codimension one. But now we can adjust the parameters in such a way that the Turing and Hopf modes are unstable at the same a1 . This additionally restricts the parameter setting into an n − 2-dimensional subspace. A ‘double’ instability of this kind is named codimension-two instability (CD2); see also Figure 8.20. Inserting the critical wave number (8.82) into (8.81), one finds the smallest necessary value for a1 to get a Turing instability: a1T =

D1 D a + 2√−b1 b2 1 . D2 2 D2

(8.88)

In the same way one obtains for the oscillatory instability from (8.85) with (8.86) a1H = −a2 .

(8.89)

Equating the two expressions finally leads to a condition for a2 : a2CD2 = −2

√−b1 b2 D1 D2 , D1 + D2

a1CD2 = −a2CD2 .

(8.90)

This value is denoted as codimension-two-point. Obviously, for a2 > a2CD2 the Hopf mode becomes unstable first; for a2 < a2CD2 the Turing mode is observed at threshold. For a2 ≈ a2CD2 one expects the simultaneous emergence of both modes and a particularly rich spatio-temporal behavior of the solutions of the fully nonlinear equations.

294 � 8 Partial differential equations II, applications 8.4.1.4 The Brusselator During the 60s, a model based on a hypothetical chemical reaction scheme was devised by Prigogine16 and coworkers at the University of Brussels. The model was motivated by the chemical oscillations discovered about ten years before by Belousov and consisted in its original form of two coupled ODEs (K = 2). Later, diffusion terms were added and spatio-temporal pattern formation was examined extensively. The system has the form of (8.74) and may show, depending on parameters, either Hopf or Turing instabilities. (A) Equations and linear analysis For the Brusselator one finds f1 = A − (B + 1) w1 + w12 w2 ,

f2 = B w1 − w12 w2 ,

(8.91)

and the only fixed point is given by w10 = A,

w20 = B/A.

Since there are only two free parameters (A, B), the four coefficients ai , bi of the Jacobian (8.77) are not independent from each other and read a1 = B − 1,

b1 = A2 ,

a2 = −A2 ,

b2 = −B.

(8.92)

If A is considered to be fixed and B is defined as a variable control parameter, the relations (8.81), (8.85) can be solved for B: Bc(T) =

A2 + (D1 A2 + D2 )k 2 + D1 D2 k 4 D2 k 2

Bc(H) = (D1 + D2 )k 2 + A2 + 1.

(8.93)

From (8.82) we obtain for the Turing mode the critical wave number kc2 =

A . √D1 D2

(8.94)

The conditions (8.88), (8.89) turn into BT = Bc(T) (kc ) = (A√

2

D1 + 1) , D2

BH = Bc(H) (0) = A2 + 1,

(8.95)

from which one finds the codimension-two point by equating BT = BH : ACD2 =

2√D1 D2 , D2 − D1

BCD2 =

(D1 + D2 )2 . D2 − D1

16 Ilya Prigogine, Russian–Belgian physical chemist, 1917–2003.

(8.96)

8.4 Pattern formation out of equilibrium

� 295

Figure 8.21: Parameter plane for the Brusselator with D2 = 2D1 . On the left-hand side of the dashed line the homogeneous state becomes oscillatory unstable; on the right-hand side one observes Turing patterns.

Figure 8.21 shows the parameter plane for D1 = 1/2, D2 = 1. Depending on A, both instability types are possible. (B) Semi-implicit pseudospectral method Next we wish to integrate numerically the fully nonlinear system applying a semiimplicit pseudospectral scheme as already done in Section 8.3.4.3. Chemical reactions are often performed on thin gels and a two-dimensional representation is natural. It is of further advantage to transform the basic system to the deviations ui from the fixed point u1 = w1 − A,

u2 = w2 − B/A

as in (8.76). For ui = ui (x, y, t) one derives the system 𝜕t u1 = D1 Δu1 + (B − 1) u1 + A2 u2 + f (u1 , u2 )

(8.97a)

𝜕t u2 = D2 Δu2 − B u1 − A2 u2 − f (u1 , u2 )

(8.97b)

B + u2 ) u12 + 2A u1 u2 . A

(8.98)

with the nonlinearity f (u1 , u2 ) = (

Similar to (8.69), the semi-implicit formulation leads to an inhomogeneous system of equations M k u⃗ k (t + Δt) =

1 1 u⃗ (t) + fk (u1 (t), u2 (t))( ) Δt k −1

(8.99)

with u⃗ k = (uk1 , uk2 ) and uk , fk as the Fourier transform of u and f . Since the third dimension is absent, M k is here a simple 2 × 2-matrix

296 � 8 Partial differential equations II, applications

Mk = (

1 Δt

+ D1 k 2 + 1 − B B

1 Δt

−A2 + D2 k 2 + A2

(8.100)

),

which can be inverted explicitly: M −1 k =

1 + D2 k 2 + A2 1 Δt ( Det Mk −B

A2 1 Δt

+ D1 k 2 + 1 − B

).

(C) Results We begin with D1 = 0.1, D2 = 1. For A above the codimension-two-point ACD2 ≈ 0.7 we obtain Turing patterns mostly in the form of more or less regular hexagons, including some point and line defects. Below the CD2-point, large-scaled regions in space oscillating with about the Hopf frequency can be seen, in agreement with the linear theory, as in Figure 8.22.

Figure 8.22: Depending on the value of A, for the plane (2D) Brusselator either (asymptotically stationary) Turing structures (left, A = 0.8), or spatially large-scaled temporally oscillating Hopf patterns (right, A = 0.35) emerge. For D1 /D2 = 0.1 it follows that ACD2 ≈ 0.7.

Closer to the CD2-point a complex interchange of Turing and Hopf modes is observed; compare also Figure 1.6 from the introduction. Qualitatively different structures, namely the experimentally known target or spiral patterns, are found for less differing diffusion constants, for instance D1 = 0.5, D2 = 1, and much farther in the Hopf region, A = 0.2, B = 1.2; see Figure 8.23. We conclude this section presenting a MATLAB code for the semi-implicit pseudospectral scheme based on the built-in 2D fast-Fourier-transform FFT2. clear n=512; % parameter input

% mesh points

8.4 Pattern formation out of equilibrium

d=input('d1/d2? '); acd2=2.*sqrt(d)/(1.-d) a=input('value for a? bh=1.+a^2 % bt=(1.+a*sqrt(d))^2 % b=input('value for b?

� 297

% cd-2 point '); threshold Hopf mode threshold Turing mode '); ba=b/a;

xkc=sqrt(a/sqrt(d)); dx=2.*pi/xkc/15.; dx2=dx^2; dt=0.1; % step sizes dkx2=(2.*pi/n/dx)^2; for k=1:n; % coefficients of M for j=1:n; xk2=((k-n/2)^2+(j-n/2)^2)*dkx2; det=(1./dt+d*xk2+1.-b)*(1./dt+xk2+a^2)+b*a^2; m22(k,j)=(1./dt+d*xk2+1.-b)/det; m11(k,j)=(1./dt+xk2+a^2)/det; m12(k,j)=a^2/det; m21(k,j)=-b/det; end end m11=ifftshift(m11); m12=ifftshift(m12); m22=ifftshift(m22); m21=ifftshift(m21); u1=0.2*(rand(n)-.5); u2=0.2*(rand(n)-.5); t=0; tend=1000; n=0; for t=0:dt:tend; n=n+1; fn=(u1.^2).*(ba+u2)+2.*a*u1.*u2; rhs1=u1/dt+fn; rhs2=u2/dt-fn;

% initial conditions

% time loop % nonlinearity % rhs in real space

fs1=fft2(rhs1); fs2=fft2(rhs2); % Fourier transform rhs1=m11.*fs1+m12.*fs2; % comp. in Fourier space rhs2=m21.*fs1+m22.*fs2; u1=real(ifft2(rhs1)); u2=real(ifft2(rhs2)); % back to real space if(n==100) n=0; contour(u1); drawnow; t end end

% plotting

298 � 8 Partial differential equations II, applications

Figure 8.23: Spreading fronts can be seen for other parameters in the oscillatory region. They are similar to the chemical nonequilibrium patterns observed experimentally in the Belousov–Zhabotinsky reaction. Time series for D1 /D2 = 0.5, A = 0.2, B = 1.2.

8.4.2 Swift–Hohenberg equation 8.4.2.1 Motivation Turing structures originate from instabilities where the fastest growing, most unstable modes have a finite wave number kc . If we evaluate for instance (8.80) for the Brusselator, a growth rate λ(k 2 ) is obtained in the super-critical domain B > Bc as shown in Figure 8.24. The function λ(k 2 ) seems to be rather involved, including a square root. If we restrict ourselves only to the region of the unstable (linearly growing) modes, we may expand λ at Bc , kc2 , according to: 2

λ ≈ a(B − Bc ) − b(k 2 − kc2 ) with a=

𝜕λ 󵄨󵄨󵄨 󵄨󵄨 , 𝜕B 󵄨󵄨Bc ,kc2

b=−

1 𝜕2 λ 󵄨󵄨󵄨󵄨 . 󵄨 2 𝜕(k 2 )2 󵄨󵄨󵄨Bc ,kc2

(8.101)

8.4 Pattern formation out of equilibrium

� 299

Figure 8.24: The basic idea behind the Swift–Hohenberg equation is to expand the eigenvalue λ(k 2 ) of a Turing instability, here for the Brusselator, in the neighborhood of the critical point, leading to the polynomial (8.101). Parameters: D1 = 0.1, D2 = 1, A = 0.8, Bc = 1.57, B = 1.01Bc .

In position space one may simply replace k 2 with −Δ and obtain a linear PDE whose solution reflects the instability behavior (8.101): 2

𝜕t Ψ(x, y, t) = a(B − Bc )Ψ(x, y, t) − b(Δ + kc2 ) Ψ(x, y, t).

(8.102)

After appropriate scaling of time and space, (8.102) takes the form 𝜕t Ψ(x, y, t) = εΨ(x, y, t) − (Δ + 1)2 Ψ(x, y, t)

(8.103)

with the sole control parameter ε ∼ B − Bc . To account for a nonlinear saturation of the linearly unstable modes one adds to (8.103) the most simple terms that guarantee global stability (bounded solutions): 𝜕t Ψ(x, y, t) = εΨ(x, y, t) − (Δ + 1)2 Ψ(x, y, t) + A Ψ2 (x, y, t) − Ψ3 (x, y, t).

(8.104)

Therewith we have derived the Swift–Hohenberg equation [9], nowadays a standard model of pattern formation. It describes the spatio-temporal pattern evolution close to the critical point of a (monotonic) Turing instability.17 A similar ‘normal form’ can be derived for an oscillatory (Hopf) bifurcation and yields a complex Ginzburg–Landau equation; see Problems.

17 The original work was for A = 0. The extension with a square term was first proposed and studied by H. Haken [6].

300 � 8 Partial differential equations II, applications The Swift–Hohenberg equation can be formulated as a gradient system: 𝜕t Ψ = −

δL[Ψ] . δΨ

(8.105)

Here, δ/δΨ denotes the functional derivative and L is the functional 1 2A 3 1 4 2 L[Ψ] = − ∫ dx dy {εΨ2 − [(Δ + 1)Ψ] + Ψ − Ψ }, 2 3 2

(8.106)

also called the Lyapunov functional. As one easily shows it follows from (8.105) that dt L ≤ 0

(8.107)

and L decreases monotonically as long as a stationary state (minimum or secondary minimum of L) is reached asymptotically. The property (8.107) can be used as a check as well as a stop criterion for any numerical method. 8.4.2.2 Algorithm For periodic boundary conditions it is easy to construct a pseudospectral scheme as already done in Section 8.4.1.4. Since we now have only one equation to iterate, the matrix inversion turns into a simple division in Fourier space: Ψk (t + Δt) =

fk (Ψ(t)) 1/Δt − ε + (1 − k 2 )2

(8.108)

with the Fourier transform fk of f (Ψ) =

Ψ + A Ψ2 − Ψ3 . Δt

Another possibility is the discretization in position space with finite differences. Let N be the number of mesh points in each direction (for simplicity we assume a square geometry). We organize the node values in a one-dimensional vector Φk , k = 1 . . . N 2 as in (8.23) and formulate a large (N 2 ) system of equations N2

∑ Mij Φj (t + Δt) = j

Φi (t) + A Φ2i (t) − Φ3i (t) Δt

where M has to be inverted. Of course M is a rather large matrix with N 2 × N 2 = N 4 elements but most of them are zero and special routines for sparse matrices can apply. MATLAB is suited to treat sparse problems. A simple code could look like that: clear; n=200; dx=0.5; dx2=dx^2; dt=3; tend=1000; eps=0.1; a=0.;

8.4 Pattern formation out of equilibrium

� 301

lap=-gallery('poisson',n); lap2=lap*lap; % Laplace and biharmonic FD in 2D pot=speye(n^2)+lap/dx2; % matrix needed for Lyapunov potential m=(1/dt-eps+1)*speye(n^2)+2*lap/dx2+lap2/dx2^2; % matrix M psi=zeros(n); phi=rand(1,n^2)-0.5;

% random initial values

k=0; for t=0:dt:tend % time loop till tend k=k+1; phi=(phi/dt-phi.^3+a*phi.^2)/m; % matrix inversion lyap=eps*phi.^2-.5*phi.^4+2./3.*a*phi.^3-(phi*pot).^2; % potential lyap_plot(k)=-0.5*dx2*sum(lyap); t_plot(k)=t; % plot values if(mod(k,10)==0) % plott each 10 time steps psi(:)=phi; % compute 2D Psi from Phi subplot(1,2,1); plot(t_plot,lyap_plot) subplot(1,2,2); contourf(psi) drawnow; t end end

It produces output as shown in Figures 8.25, 8.26. After t = 200 the depicted patterns are not completely stationary but still evolving towards their fixed points. Performing a linear analysis around a perfectly regular stripe solution, it turns out that at a critical value Ac =

√3ε 2

Figure 8.25: Numerical solution of the Swift–Hohenberg equation at t = 200 for ε = 0.1, A = 0 (right) and Lyapunov functional (8.106) over t (left), computed on a 200 × 200 grid and for a random-dot initial condition.

302 � 8 Partial differential equations II, applications

Figure 8.26: The same as in Figure 8.25 but for A = 0.5.

stripes become unstable and give way to hexagons already found for the Brusselator. If the symmetry Ψ → −Ψ is broken, i. e., A ≠ 0, one always finds hexagons directly above threshold ε = 0 since Ac = 0. Thus, hexagons are the typical (generic) structure for a Turing instability with no further symmetry restrictions. As a secondary instability for larger ε stripes, or for more involved nonlinearities, squares may also occur. Boundary conditions The Swift–Hohenberg equation is of the fourth order in the spatial derivatives and thus needs two independent boundary conditions along each side. Taking the gallery MATLAB call, the matrix ‘lap’ contains the 2D Laplace FD-representation of Figure 7.8a, cut at the boundaries. This corresponds to Ψ = 0 along all sides. The biharmonic operator ‘lap2’ in the form of Figure 7.9 is obtained by squaring the matrix ‘lap.’ Then a second boundary condition is automatically built in of the form of ΔΨ = 0, corresponding together with the first condition to 𝜕ξξ Ψ = 0 where ξ stands for the variable perpendicular to the boundary. Very often, the Swift–Hohenberg equation is solved with the boundary conditions Ψ = 0,

n̂ ⋅ ∇Ψ = 0

(8.109)

with n̂ normal to the boundaries. Conditions of the form (8.109) can be realized by modifying the diagonal elements of the matrix ‘lap2’ that belong to the points next to the four sides, according to for j=1:n:n^2 j1=j+n-1; lap2(j,j)=lap2(j,j)+2; lap2(j1,j1)=lap2(j1,j1)+2; end for j=1:n j1=j-n^2-n-1 lap2(j,j)=lap2(j,j)+2; lap2(j1,j1)=lap2(j1,j1)+2; end

8.4 Pattern formation out of equilibrium

� 303

This code section must be included in the code on page 300 before computing the matrix ‘m’ in line 5. The modification of the diagonal elements can be understood by assuming a ‘virtual line’ outside of the integration domain, as explained in more detail in Section 7.2.2.1. 8.4.3 Problems Similar to the derivation in Section 8.4.2.1, a model equation valid for the weakly nonlinear case close to an oscillatory (Hopf) instability can be found. This equation is also named the complex Ginzburg–Landau equation and reads: 󵄨 󵄨2 2 𝜕t Ψ(x, t) = [ε + iω0 + (1 + ic1 ) 𝜕xx ] Ψ(x, t) − (1 + ic3 )󵄨󵄨󵄨Ψ(x, t)󵄨󵄨󵄨 Ψ(x, t)

(8.110)

where ε, ω0 , ci ∈ ℛ and Ψ ∈ 𝒞 . Assume periodic boundary conditions in the domain 0 ≤ x ≤ L: Ψ(x = 0, t) = Ψ(x = L, t). 1.

Without loss of generality, the frequency ω0 can be put to zero. Why? Show that for ε > 0 an exact solution of (8.110) reads Ψ = A exp(iΩt)

2.

with A, Ω ∈ ℛ. Determine A and Ω. Examine the stability of the solution (8.111). Show that it becomes unstable if c1 c3 < −1.

3.

(8.111)

(8.112)

Solve (8.110) for ε = 0.1 numerically using an FTCS scheme in the stable regime, i. e., for c1 = 1/2,

c3 = −1/2,

and in the unstable regime, i. e., with c1 = 1/2,

4.

c3 = −4.

Take L = 800, ε = 0.1 and the step sizes Δx = 0.8 und Δt = 0.05. As the initial condition, assume a random dot distribution of Ψ(x, 0) in the interval 0.9A . . . 1.1A. Represent the solution graphically in a space-time diagram. If (8.112) is fulfilled, (8.110) possesses spatio-temporal solutions that are chaotic. Examine them by determining their largest Lyapunov exponent Λ for different values of c3 and fixed c1 = 1/2. Plot Λ as a function of c3 .

304 � 8 Partial differential equations II, applications

Bibliography [1] [2] [3] [4] [5] [6] [7] [8] [9]

L. D. Landau and E. M. Lifshitz, Fluid Mechanics, Vol. 6, Butterworth–Heinemann, 2nd edn. (1987). C. A. J. Fletcher, Computational Techniques for Fluid Dynamics 2, Springer (2013). FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html. C. H. Bruneau and M. Saad, The 2D lid-driven cavity problem revisited, Computers & Fluids 35, 326 (2006). P. N. Swarztrauber and R. A. Valent, FFTPACK5 – a FORTRAN library of fast Fourier transforms, http: //www.netlib.org/fftpack, February 5, 2023. H. Haken, Synergetics: Introduction and Advanced Topics, Springer (2012). A. M. Turing: The chemical basis of morphogenesis, Phil. Trans. R. Soc. London B 237, 37 (1952). J. D.Murray, Mathematical Biology, Springer (2008). J. Swift, P. C. Hohenberg, Hydrodynamic fluctuations at the convective instability, Phys. Rev. A 15, 319 (1977).

9 Monte Carlo methods The terms ‘Monte Carlo methods’ or ‘Monte Carlo simulations’ refer to a variety of numerical methods mainly based on random numbers. On the one hand these can be problems arising from physics where fluctuations or other random processes play a central role, for example thermal motion in statistical physics, trajectories of particles in disordered media, or equilibrium phase transitions. On the other hand, mathematical problems like the approximate computation of high-dimensional integrals or the determination of extreme values in a high-dimensional variable space can be often only tackled by Monte Carlo methods.

9.1 Random numbers and distributions 9.1.1 Random number generator Since random numbers play a crucial role, we wish to start with their properties and their realization on a computer. Because all processes on computers are in principle deterministic, one should better talk about ‘pseudorandom numbers’. A pseudorandom number generator is implemented in FORTRAN95. The statement CALL RANDOM_NUMBER(x) assigns to the real variable x an equally distributed random number in the interval 0 ≤ x < 1.

(9.1)

If x is defined as an array, all its elements are filled with different random numbers. Random number generators are based on maps of the form xi = f (xi−1 , xi−2 , . . . , xi−n ).

(9.2)

A simple, however not applicable, example with n = 1 is the logistic map somewhere in the chaotic region, studied in Chapter 1. FORTRAN95 uses different maps with a much larger n, producing much better and much less correlated random sequences. The value of n thereby depends on the compiler and operating system. The actual n can be found by CALL RANDOM_SEED(size=n), for GNU-FORTRAN running under Linux one obtains n = 12. At each run the very same values (x1 . . . xn ) are initialized and as a consequence exactly the same reproducible random series is generated. This can be changed by the call CALL RANDOM_SEED(put=ix), https://doi.org/10.1515/9783110782523-009

306 � 9 Monte Carlo methods where ix is an integer array of length n, containing the initial values for the iteration (9.2). If one wishes to create a different sequence for each run, one can link the initial values to the system clock. A subroutine for the initialization is given below; see also Appendix B: SUBROUTINE RANDOM_INIT INTEGER, DIMENSION(100) :: ix ! auxiliary array for initial values CALL RANDOM_SEED(size=n) CALL SYSTEM_CLOCK(ic) ix=ic CALL RANDOM_SEED(put=ix(1:n)) END

9.1.2 Distribution function, probability density, mean values The distribution function Φ(x) denotes the probability to draw a random variable X that is less than x. For the equal (rectangular) distribution (9.1) it follows 0, { { { Φr (x) = {x, { { {1,

if x < 0

if 0 ≤ x < 1 .

(9.3)

if 1 ≤ x

If we restrict ourselves to the domain [0, 1), (9.3) simply reads Φr (x) = x,

x ∈ [0, 1).

Another name for the distribution function is cumulative probability. The probability density function (PDF) ρ(x) =

dΦ(x) dx

(9.4)

specifies the probability to find x in the infinitesimal region x, x + dx. For the equal distribution one has 0, { { { ρr (x) = {1, { { {0,

if x < 0

if 0 ≤ x < 1 .

(9.5)

if 1 ≤ x

If the PDF is known, the expectation value, or mean, ⟨x⟩ = ∫ dx x ρ(x)

(9.6)

9.1 Random numbers and distributions

� 307

as well as the variance 2

Var(x) = ⟨(x − ⟨x⟩) ⟩ = ⟨x 2 ⟩ − ⟨x⟩2 = ∫ dx x 2 ρ(x) − ⟨x⟩2

(9.7)

and the standard deviation σ(x) = √Var(x)

(9.8)

can be computed. The integrals thereby run over the domain of definition of the random variable. If, on the other hand, a series x1 . . . xk of randomly distributed numbers is given, one evaluates mean and variance as ⟨x⟩ =

1 k ∑x , k i=1 i

Var(x) =

1 k 2 ∑ x − ⟨x⟩2 . k i=1 i

Particularly for the equal distribution in [0, 1) one finds ⟨x⟩ = 1/2,

Var(x) = 1/12.

9.1.3 Other distribution functions Based on the equally distributed random numbers x produced by a computer, how can we create a random number y with a different, desired probability density ρ(y)? We shall present two methods: 9.1.3.1 Acceptance-rejection method We seek for a random variable y in the region a≤y 8, and for the simple rectangle rule already, for n > 2. 9.2.2.1 Example As an example we wish to compute the integral 1

1

0

0

Jn = ∫⋅ ⋅ ⋅ ∫ dx1 dx2 . . . dxn (x1 + x2 + ⋅ ⋅ ⋅ + xn )2 .

(9.23)

It can be solved exactly as JnA =

n n2 + . 12 4

(9.24)

314 � 9 Monte Carlo methods Evaluation using the rectangle rule with N mesh points in each dimension yields N

N

i1 =1

in =1

JnR ≈ (Δx)n ∑ ⋅ ⋅ ⋅ ∑ (xi1 + xi2 + ⋅ ⋅ ⋅ + xin )2 ,

xi = Δx (i − 1).

Monte Carlo integration provides JnM ≈

N

1 T ∑ (ξ + ξ2 + ⋅ ⋅ ⋅ + ξn )2 NT i =1 1 1

with the equally distributed, uncorrelated random numbers 0 ≤ ξi ≤ 1. To compare the results we take the same number of summands for each method, leading to N = (NT )1/n . The relative error in percent is given as Er(%) = 100 ⋅

|JnR,M − JnA | ; JnA

see Figure 9.3.

Figure 9.3: Numerical computation of the integral (9.23) for n = 1, 4, 7 room dimensions. Monte Carlo integration compared to rectangle rule. Relative error in percentage over the total number of mesh points NT = N n . For the rectangle rule N = 3 . . . 20 have been used in each dimension. The MCI error goes with NT−1/2 (straight line). For n = 1 the rectangle rule is superior, whereas for n = 4, MCI is already more precise.

9.3 Applications from statistical physics

� 315

9.2.2.2 Simple sampling and importance sampling To determine integrals of the form (9.16) with (9.17), we assumed equally distributed sampling points xi ; this is also called simple sampling. If f (x) varies strongly, the deviation from the mean according to (9.21), and hence the error, can become large. On the other hand we can rewrite (9.16) for an arbitrary, nonzero function ρ(x) as: b

b

J = ∫ dx f (x) = ∫ dx [ a

a

f (x) ]ρ(x). ρ(x)

(9.25)

If ρ(x) is the probability density of x, we can interpret (9.25) as the mean ⟨f /ρ⟩ and compute the variance b

Var(f /ρ) = ∫ dx [ a

2

f (x) ] ρ(x) − J 2 . ρ(x)

(9.26)

Taking for ρ an appropriate PDF, (9.26) can be minimized. Vanishing functional derivative δ/δρ of b

F[ρ] = Var(f /ρ) + λ ∫ dx ρ a

with λ as Lagrange parameter originating from the constraint (normalization) b

∫ dx ρ(x) = 1

(9.27)

a

yields −f 2 /ρ2 + λ = 0 or ρ=

|f | , √λ

where λ is to be determined with (9.27). The error becomes smaller the more the shape of ρ resembles that of |f |. Adjusting the PDF according to the form of the integrand is named importance sampling, and can be readily extended to higher dimensions.

9.3 Applications from statistical physics Now let us turn to the other important field of Monte Carlo applications, namely to problems where randomness and fluctuations play an intrinsic role. To this end we first re-

316 � 9 Monte Carlo methods visit many-particle systems. In Chapter 4 we examined such systems solving Newton’s equations of motion for certain interaction potentials. Applying now the methods of statistical physics, one considers an ensemble of macroscopically identical systems that differ microscopically, i. e., by different initial conditions of the particles. Averaging over the ensemble one can compute certain macroscopic values like density or temperature. In thermal equilibrium one may also consider snapshots of the same system at different times, making use of the identity of ensemble average and time average.

9.3.1 Two-dimensional classical gas 9.3.1.1 Hard spheres model We start with a two-dimensional gas of N hard spheres, or better hard discs, each having the diameter d0 . Let the particles be randomly distributed in a certain given volume V , say the unit square. We draw the positions ri⃗ from an equal distribution in the unit square ri⃗ = (xi , yi ),

xi ∈ [0..1],

yi ∈ [0..1],

i = 1 . . . N.

However, the particles must not overlap. Thus we have to choose the positions fulfilling the constraints dij = √(xi − xj )2 + (yi − yj )2 ≥ d0 for each pair i ≠ j. The code section for the initialization could look as: PARAMETER (n=1000) ! number of particles REAL, DIMENSION(2,n) :: x ! positions REAL, DIMENSION(2) :: xn ! new positions .... d0 = ... ! sphere diameter, minimal distance c draw n particle positions randomly DO i=1,n 11 CALL RANDOM_NUMBER(xn) ! random position DO j=1,i-1 IF(d(xn,x(:,j)).LT.d0) GOTO 11 ! distances ok? ENDDO x(:,i)=xn ! yes, accept position ENDDO .... The function d computes the Euclidean distance according to (9.28).

(9.28)

9.3 Applications from statistical physics

� 317

To create a new configuration one assigns fresh random coordinates xk , yk to an arbitrarily chosen particle k = 1 . . . N. If all constraints (9.28) are fulfilled the new position is accepted. Otherwise the coordinates are rejected and drawn again. Hence one produces successively different ensemble members in accordance with the macroscopic conditions V = const., N = const. Since there is no motion of particles, a mechanical kinetic energy is not defined. Nevertheless, the interaction between the particles can be easily considered. A hard core repulsion is already manifested in the restriction (9.28). How can we model an additional attractive force? Let the interaction for instance be expressed by an attractive Coulomb potential u(ri⃗ − rj⃗ ) = −

α , dij

α > 0.

Then one computes the total energy for each configuration according to N

N

U(r1⃗ , . . . , rN⃗ ) = ∑ ∑ u(ri⃗ − rj⃗ ). i=1 j=i+1

(9.29)

One could be tempted to find the ‘ground state’ with the lowest energy just by testing as much as possible configurations. Assuming N = 1000, we had to vary 2000 variables (3000 in 3D) in the region [0, 1]. To detect a minimum in such a high-dimensional space seems to be rather hopeless. 9.3.1.2 Metropolis algorithm A quite elegant method to an approximate solution of the problem is provided by the Metropolis algorithm.1 In particular we shall here consider the canonical ensemble where volume, particle number, and temperature are prescribed. A central part of the statistical description is the partition sum [2] Z(N, V , T) = ∑ exp(−Ei /kB T) i

(9.30)

with Boltzmann’s constant kB and energy Ei of the microstate i. The sum runs over all macroscopically possible microstates. If one deals with continuous variables as for the hard spheres gas model, the sum is converted into a 2N-dimensional integral according to Z(N, V , T) ∼ ∫ d 2 r1⃗ ⋅ ⋅ ⋅ ∫ d 2 rN⃗ exp(−U(r1⃗ . . . rN⃗ )/kB T).

(9.31)

If the partition sum is known, the probability in finding a state with energy Ei in the canonical ensemble is expressed by 1 Devised by Metropolis, Rosenbluth, Rosenbluth, Teller and Teller in 1953 [1].

318 � 9 Monte Carlo methods ρi =

1 exp(−Ei /kB T), Z

(9.32)

also denoted as Boltzmann distribution. With (9.32) we can compute mean values as ⟨Y ⟩ = ∑ Yi ρi , i

(9.33)

where again the sum runs over all states and has to be replaced by the appropriate integrals for the case of continuous variables. How can the sums or high-dimensional integrals for the computations of Z be evaluated numerically? It is possible to apply Monte Carlo integration. Normally (9.32) is concentrated in a very narrow region around its mean ⟨E⟩ = ∑ Ei ρi i

and simple sampling will not work effectively. A much better strategy is to draw only states with Ei ≈ ⟨E⟩. Hence one varies the state variables not arbitrarily but rather in such a way that the system moves towards thermal equilibrium. Let Xi be a set of variables describing the system in the state i, i. e., the position coordinates of the gas from Section 9.3.1.1. Now one seeks a sequence of states X0 → X1 → . . . Xn that converges preferably quickly to equilibrium. Each state should be uniquely determined by its predecessor; such a sequence is denoted as a Markov chain. Let P(Xi → Xj ) be the transition probability from state i into state j. A stationary distribution ρ(X) is given if the condition ρ(Xi )P(Xi → Xj ) = ρ(Xj )P(Xj → Xi ), known as detailed balance, holds. For ρ as (9.32) one obtains P(Xi → Xj ) = P(Xj → Xi )

ρ(Xj ) ρ(Xi )

= P(Xj → Xi ) exp(−ΔEji /kB T)

(9.34)

with ΔEji = Ej −Ei . If Ei > Ej , the transition i → j should always take place (approximation to the equilibrium): P(Xi → Xj ) = 1

if ΔEji < 0.

From (9.34) one finds for the reverse transition j → i

9.3 Applications from statistical physics

P(Xj → Xi ) = exp(−ΔEij /kB T)

and

� 319

ΔEij > 0.

Importance sampling with the Metropolis algorithm is built by the following steps: 1. Choose an initial configuration X0 . Compute E0 (X0 ). i = 1. 2. Construct the next configuration Xi by modifying the position of an arbitrary particle. Compute Ei (Xi ) and ΔEi,i−1 3. If ΔEi,i−1 ≤ 0, accept the new configuration Xi . i = i + 1. Go to 2. 4. If ΔEi,i−1 > 0, draw an equally distributed random number 0 ≤ r < 1. If r < exp(−ΔEi,i−1 /kB T), accept the new configuration Xi . i = i + 1. Go to 2. 5. If r > exp(−ΔEi,i−1 /kB T), restore the old configuration Xi = Xi−1 . Go to 2. We exemplify steps 1–5 with a code for Monte Carlo simulation of the 2D gas with the interaction potential given in (9.29). For the complete listing see [3]. Step 1 Initial position is computed with program section on page 316. Computation of E0 : E0=0. DO i=1,n-1 DO j=i+1,n E0=E0-1./d(x(1:2,i),x(1:2,j)) ENDDO ENDDO Step 2 100 CALL RANDOM_NUMBER(rk) k=rk*FLOAT(n)+1 ! random particle k Ek=0. DO i=1,n IF(i.NE.k) Ek=Ek-1./d(x(1:2,i),x(1:2,k)) ! contribution of k to E0 ENDDO xn=x(1:2,k) ! store old config. CALL RANDOM_NUMBER(x(1:2,k)) ! new random position of k Ekn=0. DO i=1,n ! constraints fulfilled? (hard spheres)? IF(i.NE.k) THEN IF(d(x(1:2,K),x(1:2,i)).LT.dmin) THEN x(1:2,K)=xn GOTO 100 ! No, then draw again ENDIF Ekn=Ekn-1./d(x(1:2,k),x(1:2,i)) ! contribution of k to E0

320 � 9 Monte Carlo methods

ENDIF ENDDO dE=Ekn-Ek

!

delta E

Step 3 IF(dE.LT.0.) THEN E0=E0+dE ! new E0 GOTO 100 ! next configuration ENDIF Step 4 CALL RANDOM_NUMBER(r) IF(r.LT.EXP((-dE/t)) THEN E0=E0+dE ! new E0 GOTO 100 ! next configuration ENDIF Step 5 x(1:2,k)=xn ! restore old config, E0 unchanged GOTO 100 ! next draw Figure 9.4 presents particle configurations for high and low T. Scaling of temperature and distances are such that α = kB = 1 and d0 = 0.01. Obviously, a circular assignment of the particles minimizes the energy for lower T and a single drop is formed. For T = 2000 one obtains a gaseous state. The transition becomes visible if a histogram of the particle distances is recorded as a function of temperature. From the histogram one finds the average particle distance as ⟨r⟩ =

∑i ri H(ri ) ; ∑i H(ri )

see Figure 9.5. The influence of gravitation in the negative y-direction can be readily included by adding the potential energy of the form N

Up = g ∑ yi i

(9.35)

to (9.29). For g > 0 the particles ‘move’ downwards and eventually a drop having a certain contact angle is formed on the lower wall. For larger g the drop is squeezed and the contact angle becomes smaller; see Figure 9.6.

9.3 Applications from statistical physics

� 321

Figure 9.4: (N = 500), configuration at T = 2000 (left frame) and drop formation at T = 20 (right).

Figure 9.5: Mean particle distance ⟨r⟩ over temperature for N = 500, d0 = 0.01.

Figure 9.6: Drop formation under gravity for g = 103 (left) and g = 104 (right) at T = 20.

322 � 9 Monte Carlo methods 9.3.2 The Ising model Certain metals such as iron or nickel show spatial regions where spins orient themselves in a parallel fashion and create a permanent magnetic field, a property called ferromagnetism. If the metal is heated above a critical temperature Tc named Curie temperature, polarization and magnetic field vanish. Hence these metals undergo a phase transition from a magnetic to a nonmagnetic state or vice versa with temperature as the control parameter and magnetization as the order parameter. A simplified model describing ferromagnetism was devised by E. Ising2 almost 100 years ago [4]. Thereby, N spins are located on a regular periodic lattice (in Ising’s original work on a one-dimensional chain) and each spin has two possible orientations, say up and down, denoted by ±1. If only the nearest neighbor interaction is taken into account, the energy is given by N

γ

N

E(s1 , . . . , sN ) = −ε ∑ ∑ si sj − H ∑ si , i j∈NN(i)

i

(9.36)

where ε measures the coupling strength, NN(i) denote the indices of the nearest neighbors and H stands for an external, homogeneous magnetic field. The amount γ of nearest neighbors depends on the room dimension n as well as on the geometry of the periodic grid. One finds 2 { { { { { { 4 { { { γ = {6 { { { { 6 { { { { {8

if n = 1, chain

if n = 2, square lattice

if n = 2, hexagonal lattice

if n = 3, simple cubic lattice

if n = 3, body-centered cubic lattice.

Taking (9.36) we compute the partition sum (we shall use from here the abbreviation β = 1/kB T) Z(H, T) = ∑ ∑ ⋅ ⋅ ⋅ ∑ exp(−βE(s1 . . . sN )), s1 s2

sN

(9.37)

where each sum runs only over the two values si = −1, 1. Hence we have to evaluate 2N expressions in total. With the partition sum at hand, we can compute the free energy F(H, T) = −kB T ln Z(H, T)

(9.38)

and from there further important macroscopic quantities like the internal energy 2 Ernst Ising, German physicist, 1900–1998.

9.3 Applications from statistical physics

U(H, T) = −kB T 2

� 323

𝜕 F ( ), 𝜕T kB T

(9.39)

𝜕U , 𝜕T

(9.40)

the specific heat C(H, T) = or the overall magnetization N

M(H, T) = ⟨∑ si ⟩ = − i

𝜕 F ( ). 𝜕H kB T

(9.41)

9.3.2.1 Mean-field approximation A rather crude ‘solution’ to the problem is achieved applying the mean-field approximation. Although this method in its simplest form yields inaccurate or even wrong results, it shows at least qualitatively why a phase transition can take place and why the critical temperature depends both on coupling strength and room dimension. With (9.36) the energy of a single spin reads γ

̄ − Hs, Es = −ε s ∑ sj − Hs = −εγs sNN j∈NN(s)

(9.42)

with ̄ = sNN

1 γ

γ

∑ sj

j∈NN(s)

(9.43)

as the mean of the neighbor spins. With the partition sum (two states, s = ±1) ̄ + βH) + exp(−βεγsNN ̄ − βH) = 2 cosh(βεγsNN ̄ + βH) Zs = exp(βεγsNN ̄ the mean of s with given sNN s̄ =

1 +1 ̄ + βHs) = tanh(βεγsNN ̄ + βH) ∑ s exp(βεγssNN Zs s=−1

(9.44)

is computed. One now makes the strongly simplified assumption that the mean of the neighbors (9.43) is equal to the mean (9.44) and obtains the transcendental equation s̄ = tanh(βεγs̄ + βH) from which s̄ and therewith the magnetization can be determined.

(9.45)

324 � 9 Monte Carlo methods We start with the case H = 0 where the trivial solution s̄ = 0 exists. However, for βεγ > 1

(9.46)

this solution becomes unstable and two new real branches ±s1̄ appear (three intersections of tanh(βεγs)̄ and s̄ exist). The two new branches are identified with the ferromagnetic phases. The Curie temperature is given from (9.46) kB Tc = εγ. It is proportional to the coupling strength ε and depends on the number of neighbors and the room dimension. The magnetization is visualized easiest by solving (9.45) with respect to T T=

εγs̄ kB atanh(s)̄

and plotting T over s̄ ∈ [−1, 1], Figure 9.7 left. In the moment where an external field is applied, a preferred direction exists. Then, one of the two nontrivial states becomes more stable, and the other can still be metastable for a certain range of H, while a hysteresis loop shows up: Figure 9.7, right frame. Based on the mean-field approach, magnetic phases should occur also in one dimension (spin chain) at finite T = 2ε/kB . However, in real life this is not the case, which we briefly demonstrate in the next section.

Figure 9.7: Left: magnetization for H = 0 over T . Right: magnetization over external field H for T = 2. The dashed line is computed from (9.45). It is unstable where its slope is negative and the true process follows the arrows. Scaling: kB = ε = 1 and γ = 4, Tc = 4.

9.3 Applications from statistical physics

� 325

9.3.2.2 The one-dimensional Ising model Consider a chain of N spins having equal distances. The relation (9.36) turns into N−1

N

i

i

E = −2 ε ∑ si si+1 − H ∑ si

(9.47)

and the partition sum can be computed exactly. For the sake of simplicity we shall concentrate on the case H = 0 and substitute 2ε = ϵ. Using (9.47) one finds from (9.37) Z(T) = ∑ ∑ ⋅ ⋅ ⋅ ∑ exp(βϵs1 s2 ) exp(βϵs2 s3 ) . . . exp(βϵsN−1 sN ). s1 s2

sN

(9.48)

The sum over s1 can be evaluated: ∑ exp(βϵs1 s2 ) = exp(βϵs2 ) + exp(−βϵs2 ) = 2 cosh(βϵs2 ) = 2 cosh(βϵ) s1

and shows to be independent on s2 . Next, the sum over s2 is executed in the same way and so on, and finally one arrives at Z(T) = 2N (cosh(βϵ))

N−1

N

≈ (2 cosh(βϵ)) .

(9.49)

From (9.39) the internal energy ϵ ) kB T

(9.50)

ϵ2 N kB T 2 cosh(ϵ/kB T)

(9.51)

U(T) = −ϵN tanh( and from (9.40) the specific heat C(T) =

can be determined. The shape of C(T) could allude to a phase transition for a specific Tc ≈ 1, Figure 9.8, left. However, this is not the case as we shall see if we compute the magnetization according to (9.41). To this aim one must know the partition sum for H ≠ 0. This can be achieved also exactly but the computation is more involved; for details we refer to the literature, e. g., [2]. Here, we only present the result (N ≫ 1): N

Z(H, T) = exp(Nβϵ)[cosh(βH) + √sinh2 (βH) + exp(−4βϵ)] . We finally get M(H, T) =

N sinh(βH) 2

√sinh (βH) + exp(−4βϵ)

(9.52)

326 � 9 Monte Carlo methods

Figure 9.8: Left: specific heat per spin for the one-dimensional Ising model (H = 0); right: magnetization per spin depending on an external magnetic field for T = 1 (solid) and T = 2 (dashed), kB = ϵ = 1.

and from there M(0, T) = 0,

for T > 0.

No spontaneous magnetization occurs for finite temperature; see also Figure 9.8, right. Applying thermodynamics, this can be understood by an estimate of the free energy F = U − TS. In a homogeneous magnetic phase all spins are aligned (Figure 9.9 (a)), the entropy is S = 0, and the internal energy reads U = −Nϵ. If, on the other hand, a dislocation as shown in Figure 9.9 (b) is included, the internal energy increases by 2ϵ, but in the meantime the entropy increases also by kB ln N since there exist N possibilities to place the dislocation. Thus, the free energy is in total changed by ΔF = 2ϵ − kB T ln N,

Figure 9.9: One-dimensional chain of spins, (a) magnetic ground state, (b) dislocation.

9.3 Applications from statistical physics

� 327

which becomes negative for T > 2ϵ/(kB ln N). Since F decreases on the way to thermodynamic equilibrium and finally takes a minimum there, more and more dislocations are created by thermal fluctuations and the order vanishes. In the thermodynamic limit N → ∞ fluctuations at even infinitesimal temperature are large enough to destroy a one-dimensional magnetic structure independently from the coupling strength ϵ. Why then does the mean-field approximation in one dimension yield a value of Tc = 2ε/kB ? If one assumes as in (9.45) for all spins the same mean value, the influence of fluctuations is considerably diminished, leading to larger values of Tc in general. For only one spatial dimension, even a qualitatively wrong result can turn out. 9.3.2.3 The two-dimensional Ising model Coupling is stronger in two room dimensions and may finally lead to ordered states at finite T. A dislocation in 2D (a grain boundary) on an M × M lattice costs the internal energy 2Mϵ and thus changes the free energy by ΔF = 2Mϵ − kB T ln M, an expression that is nonnegative for finite T and large enough M. Onsager3 [5] was able to find an exact solution of the two-dimensional Ising model using transfer matrices. He derived for Tc the relation Tc =

2ε ε ≈ 2.269 . kB asinh(1) kB

The derivation is however rather involved and lengthy and an exact solution in three dimensions is still absent. The Metropolis algorithm introduced in Section 9.3.1.2 can be readily applied on the Ising model. We shall use a square grid built up by N = M × M spins. The method can be extended in a straightforward manner to three dimensions. We refer to the procedure given on page 319. The initial state (step 1) may consist either of equally distributed (hot start-up) or of parallel orientated spins (cold start-up). To compute E, (9.36) is evaluated with γ = 4. A new configuration is created by changing the direction of a randomly drawn spin. How can ΔE be computed if spin sk changes to −sk ? Let the initial energy be E({s}) = −ε sk

γ

∑ sj − H sk + rest,

j∈NN(k)

and after changing sk 3 Lars Onsager, Norwegian-American physicist, 1903–1976.

(9.53)

328 � 9 Monte Carlo methods

E({s′ }) = ε sk

γ

∑ sj + H sk + rest,

(9.54)

j∈NN(k)

where ‘rest’ denotes all terms that do not depend on sk . Subtracting the two equations above yields ΔE = E({s′ }) − E({s}) = 2sk hk

(9.55)

with the abbreviation hk = ε

γ

∑ sj + H.

(9.56)

j∈NN(k)

In two dimensions the sum in (9.56) can only take one out of the five values −4,−2,0,2,4 and one finds ΔE = ±2(4ε + H, 2ε + H, H, −2ε + H, −4ε + H). The Boltzmann factors are needed only for the cases ΔE > 0. Assuming |H| < 2ε and taking the scaling T=

ε ̃ T, kB

H = εH̃

the five Boltzmann factors ̃ T)̃ exp(−4T), ̃ exp(±2H/

̃ T)̃ exp(−8T), ̃ exp(±2H/

̃ T)̃ exp(−2H/

occur. It suffices to compute the four different exponential functions for one time at the beginning of the program what increases its speed considerably. A code section containing steps 2–5 of page 319 could look like this (for the complete code see [3]): ex(0)=0.5 ex(1)=EXP(-4./T) ! the four exp-functions ex(2)=EXP(-8./T) exh(1)=EXP(2.*H/T) exh(2)=EXP(-2.*H/T) .... c should the spin is(i,j) be changed? in=-is(i,j)*(is(i,j+1)+is(i,j-1)+is(i+1,j)+is(i-1,j)) ! sum over ! 4 neighbors

9.3 Applications from statistical physics

� 329

IF(in.GT.0) THEN ! Delta E > 0, change! is(i,j)=-is(i,j) ELSE ! Delta E 0, draw a random number r out of [0,1]. If r < exp(−ΔF/T), accept the new configuration; otherwise restore the old one.

What is here the role of the ‘temperature’? Analogue to thermodynamics, the possibility to accept a Metropolis step with increasing F increases with T. Thus the system may leave a secondary minimum to reach eventually the main minimum. Too large temperatures however may yield rather noisy and not converging results and one has to play around a bit to find the appropriate T. Also a ‘time-dependent,’ slowly decreasing temperature could help in some cases. 9.4.1.2 Implementation with finite differences Approximating the derivatives and the integral in (9.60) with finite differences, the functional F turns into a function of N = M × M variables M 1 1 F(Ψij ) = ∑[ (Ψi+1,j − Ψi−1,j )2 + (Ψi,j+1 − Ψi,j−1 )2 − ρij Ψij Δx 2 ]. 8 8 ij

(9.63)

If only one certain Ψmn is changed, ΔF can be effectively evaluated just as (9.53) by dividing (9.63) in terms which contain Ψmn and the rest. After some manipulations one arrives at 1 1 F = Ψ2mn − Ψmn (Ψm+2,n + Ψm−2,n + Ψm,n+2 + Ψm,n−2 ) − ρmn Ψmn Δx 2 + rest. 2 4 Let Ψ̃ mn be the new value at node m, n. Then the modification of F reads 1 ΔF = (Ψ̃ 2mn − Ψ2mn ) 2 1 + (Ψ̃ mn − Ψmn )(−ρmn Δx 2 − Ψm+1,n − Ψm−1,n − Ψm,n+1 − Ψm,n−1 ). 4 In the last formula we have halved the step size and replaced the nodes m, n + 2 with m, n + 1 and so on. 9.4.1.3 Results Figure 9.13 shows a ‘development’ of the initial condition Ψij = 0 on a 100 × 100 grid with Δx = 1. Two point sources ρ20,20 = −1,

ρ80,80 = 1

334 � 9 Monte Carlo methods

Figure 9.13: Solution of the Poisson equation for two point charges applying the Metropolis algorithm. The snapshots are taken from left to right after 106 , 108 , 2 ⋅ 109 variations of the node values and for a temperature of T = 4 ⋅ 10−5 .

are given. At each step, the new node value is achieved by Ψ̃ mn = Ψmn + αξ

(9.64)

with an equally distributed random variable ξ ∈ [−0.5, 0.5] and α = 0.1, while the temperature is fixed with T = 4 ⋅ 10−5 . The boundary values Ψ0,j , ΨM+1,j , etc. are not changed and remain as zero (Dirichlet condition). 9.4.2 Swift–Hohenberg equation As demonstrated in Section 8.4.2, the Swift–Hohenberg equation may also be derived from a functional (8.106). Thus we can again use the Metropolis algorithm. Changing a node value Ψmn now causes the change of the potential ΔL ε 1 4 10 = (Ψ̃ 2mn − Ψ2mn )(− + − 2 + 4 ) 2 2 Δx Δx Δx 2 2 8 + (Ψ̃ mn − Ψmn )(( 2 − 4 )(Ψm+1,n + Ψm−1,n + Ψm,n+1 + Ψm,n−1 ) Δx Δx 2 + 4 (Ψm+1,n+1 + Ψm−1,n+1 + Ψm+1,n−1 + Ψm−1,n−1 ) Δx 1 + 4 (Ψm,n+2 + Ψm,n−2 + Ψm+2,n + Ψm−2,n )) Δx 1 1 − A(Ψ̃ 3mn − Ψ3mn ) + (Ψ̃ 4mn − Ψ4mn ). 3 4 The expression is more involved because (8.106) contains the Laplacian. Figure 9.14 shows two states on a 100 × 100 grid for different parameters, each obtained after 2 ⋅ 109 variations. The node modifications were done as in (9.64) but with α = 0.01, Δx = 1/2, T = 2.5 ⋅ 10−4 . Figure 9.15 depicts the corresponding potentials. The temperature chosen in Figure 9.16 is obviously too high.

Bibliography

� 335

Figure 9.14: Two solutions of the Swift–Hohenberg equation by variation of the Lyapunov potential (8.106) with the help of the Metropolis algorithm. Left frame: ε = 0.1, A = 0, right frame: ε = 0.1, A = 0.5. As in Section 8.4.2, one finds stripes for A = 0 and hexagons for A = 0.5.

Figure 9.15: Lyapunov functional over the number of variations (in millions) for the two parameter sets of Figure 9.14, T = 2.5 ⋅ 10−4 .

Figure 9.16: Lyapunov functional for ϵ = 0.1, A = 0, but now for T = 2.5 ⋅ 10−3 . At higher temperatures the fluctuations are clearly larger and it takes much longer until an equilibrium (stable) state is reached.

336 � 9 Monte Carlo methods

Bibliography [1] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller, Equation of State Calculations by Fast Computing Machines, J. Chem. Phys. 21, 1087 (1953). [2] K. Huang, Statistical Mechanics, Wiley & Sons (1987). [3] FORTRAN codes at https://www.degruyter.com/document/isbn/9783110782523/html. [4] E. Ising, Beitrag zur Theorie des Ferromagnetismus, Z. Phys. 31, 253 (1925). [5] L. Onsager, Crystal statistics I. A two dimensional model with order-disorder transition, Phys. Rev. 65, 117 (1944).

A Matrices and systems of linear equations In this appendix, we concentrate on methods for the solution to large linear systems of equations. These can be homogeneous A ⋅ x⃗ = 0

(A.1)

A ⋅ x⃗ = b,⃗

(A.2)

or inhomogeneous

where x,⃗ b⃗ denote vectors in ℛN and A is an N × N square matrix. We begin with some properties and definitions of square matrices.

A.1 Real-valued matrices Let A be a real-valued N × N matrix, Aij ∈ ℛ. A.1.1 Eigenvalues and eigenvectors Each vector v⃗i in ℛN for which A ⋅ v⃗i = λi v⃗i

with v⃗i , λi ∈ 𝒞

(A.3)

holds is called the eigenvector of A and belongs to the eigenvalue λi . The eigenvalues are either real valued or come as conjugate-complex pairs; the same is found for the eigenvectors.

A.1.2 Characteristic polynomial The polynomial of degree N P(λ) = det(A − λ 1) = (λ1 − λ)(λ2 − λ) . . . (λN − λ)

(A.4)

is called the characteristic polynomial of A. Its N zeros λi are the eigenvalues of A and are also denoted as the spectrum of A. Multiple zeros: Let λi be a ki -fold zero of P, P(λ) = (λ1 − λ)k1 (λ2 − λ)k2 . . . (λs − λ)ks . The eigenvalue λi then has the algebraic multiplicity ki . https://doi.org/10.1515/9783110782523-010

338 � A Matrices and systems of linear equations A.1.3 Notations –

The transposed matrix AT is obtained by interchanging rows and columns of A: ATij = Aji .

– –

(A.5)

For a symmetric matrix A = AT holds. The inverse matrix A−1 is defined as A−1 ⋅ A = A ⋅ A−1 = 1.

(A.6)

It is computed by solving the linear, inhomogeneous system of equations (A.6), in components: N

∑ Aij A−1 jk = δik . j

– –

(A.7)

For an orthogonal matrix A−1 = AT holds. Because of (A.7) all rows and all columns are pairwise orthogonal. A normal matrix commutes with its transposed, A ⋅ AT = AT ⋅ A. Symmetric and orthogonal matrices are normal.

A.1.4 Normal matrices Let A be a normal matrix. An orthogonal matrix T exists with T −1 ⋅ A ⋅ T = B

(A.8)

where B is diagonal, Bij = bi δij . A and B are called similar and have the same characteristic polynomial and thus the same spectrum λi = bi . The eigenvectors of A (AT ) form the columns (rows) of the transformation matrix T (T −1 ). A.1.4.1 Symmetric matrices A symmetric N × N matrix has the following properties: 1. All N eigenvalues and eigenvectors are real valued. 2. Eigenvectors belonging to different eigenvalues are orthogonal. 3. If an eigenvalue has the algebraic multiplicity k (k-fold degenerated), it has k linearly independent eigenvectors that can be orthogonalized applying Gram–Schmidt orthogonalization. 4. From 2. and 3. follows: the N eigenvectors form a complete orthogonal system in ℛN , i. e., they span the complete ℛN . 5. The transformation matrix T in (A.8) is orthogonal.

A.2 Complex-valued matrices

� 339

A.1.4.2 Orthogonal matrices

The spectrum of an orthogonal matrix M is located on the unit circle in the complex plane:

|λ_j| = 1,   λ_j ∈ ℂ,   or   λ_j = exp(iφ_j),   φ_j ∈ ℝ.

Because of det M = ∏_i^N λ_i, it immediately follows with A.1.1 that det M = ±1. Example: the rotation matrix in two dimensions

D = ( cos θ   −sin θ
      sin θ    cos θ )

has the characteristic polynomial (cos θ − λ)² + sin² θ = 0 and the eigenvalues λ_{1,2} = exp(±iθ).

A.2 Complex-valued matrices

Let A be a complex-valued N × N matrix, A_ij ∈ ℂ.

A.2.1 Notations

The adjoint matrix A⁺ is obtained by interchanging rows and columns and forming the complex conjugate of each element:

A⁺_ij = A*_ji.   (A.9)

For the inverse matrix A^{-1}, (A.6) and (A.7) hold. Special matrices:
– The self-adjoint matrix with A = A⁺. Self-adjoint matrices are also named Hermitian. All eigenvalues are real valued.
– The unitary matrix with A^{-1} = A⁺. For all eigenvalues, |λ_i| = 1.
– The normal matrix with A ⋅ A⁺ = A⁺ ⋅ A.
– Self-adjoint and unitary matrices are also normal.

A.2.2 Jordan canonical form

Each arbitrary, complex- or real-valued N × N matrix A can be transformed by an invertible matrix C (linear transformation) into a matrix having Jordan canonical form, or, for short, into a Jordan matrix:

C^{-1} ⋅ A ⋅ C = J = ( J_1(λ_1)    0         0       ...     0
                        0        J_2(λ_2)    0       ...     0
                        0           0     J_3(λ_3)   ...     0
                       ...                                  ...
                        0           0         0      ...  J_m(λ_m) )   (A.10)

with the matrices (Jordan blocks)

J_i(λ_i) = λ_i,   ( λ_i   1
                     0   λ_i ),   ( λ_i   1    0
                                     0   λ_i   1
                                     0    0   λ_i ),   etc.   (A.11)

of length 1, 2, 3, etc.

– J_i(λ_i) contributes to the spectrum of A with the sole (algebraically 1, 2, 3, ...-fold) eigenvalue λ_i.
– The geometric multiplicity of the eigenvalue λ_i is equal to the number of linearly independent eigenvectors belonging to λ_i. It is thereby equal to the number of Jordan blocks with λ_i in J.

A normal matrix has the following properties:
– Algebraic and geometric multiplicity are equal.
– All Jordan blocks have length one.
– All normal matrices are diagonalizable.

Example: The matrix

J = ( λ_1   1    0    0
       0   λ_1   0    0
       0    0   λ_1   0
       0    0    0   λ_2 )

is already in Jordan canonical form and has the characteristic polynomial

P(λ) = (λ_1 − λ)³ (λ_2 − λ).


It follows for the eigenvalues:

                          λ_1   λ_2
algebraic multiplicity     3     1
geometric multiplicity     2     1

Hence, J possesses only three eigenvectors, two belonging to λ_1 and one to λ_2. Thus, the eigenvectors do not form a complete orthogonal system.

A.3 Inhomogeneous systems of linear equations

We begin with N-dimensional systems having the form

A ⋅ x⃗ = b⃗.   (A.12)

A.3.1 Regular and singular system matrices

The system (A.12) possesses a unique solution if the system matrix A is invertible. It reads

x⃗ = A^{-1} ⋅ b⃗.   (A.13)

Hence, det A ≠ 0 and A is called a regular matrix. Then the rank of A is rank(A) = N, and A is built up by N linearly independent rows (and columns). Conversely, if det A = 0 and rank(A) = N − K, there are K linearly dependent rows and A is named singular. The kernel of A, ker(A), denotes the K-dimensional subspace of ℝ^N spanned by the vectors v⃗_1, ..., v⃗_K for which

A ⋅ v⃗_i = 0,   i = 1 ... K.

The kernel is also called the null space. For a singular A there exist, depending on b⃗, either no or infinitely many solutions to (A.12). There is no solution if b⃗ lies partially or completely in ker(A):

(a) no solution if v⃗⁺_i ⋅ b⃗ ≠ 0 for at least one i ≤ K,
(b) infinitely many solutions if v⃗⁺_i ⋅ b⃗ = 0 for all i ≤ K.

Here, v⃗⁺_i denotes the eigenvectors of A⁺ (or the left-sided eigenvectors of A), (A⁺ − λ_i 1) ⋅ v⃗⁺_i = 0, with λ_1, ..., λ_K = 0 and λ_{K+1}, ..., λ_N ≠ 0.

For case (b), the solutions are found by expanding x⃗ and b⃗ in the basis v⃗_i:

x⃗ = ∑_{i=1}^{N} y_i v⃗_i,   b⃗ = ∑_{i=1}^{N} β_i v⃗_i = ∑_{i=K+1}^{N} β_i v⃗_i

with β_i = v⃗⁺_i ⋅ b⃗. From (A.12) we obtain

∑_{i=1}^{N} y_i A v⃗_i = ∑_{i=1}^{N} y_i λ_i v⃗_i = ∑_{i=K+1}^{N} y_i λ_i v⃗_i = ∑_{i=K+1}^{N} β_i v⃗_i,

and because the v⃗_i are linearly independent,

y_i = v⃗⁺_i ⋅ b⃗ / λ_i,   i = K + 1 ... N

must hold. The other y_i, i = 1 ... K, remain undetermined.

A.3.2 Fredholm alternative

Hence, the solutions of (A.12) read either

x⃗ = A^{-1} ⋅ b⃗,   if rank(A) = N,

or, given that b⃗ ∉ ker(A),

x⃗ = ∑_{i=K+1}^{N} (v⃗⁺_i ⋅ b⃗ / λ_i) v⃗_i + ∑_{i=1}^{K} y_i v⃗_i,   if rank(A) = N − K,

with arbitrary coefficients y_i. These two possibilities are named the Fredholm alternative.


A.3.3 Regular matrices

The matrix inversion to numerically solve (A.12) can be achieved by applying standard routines as provided, e. g., by the LAPACK library. Depending on the form of A, different routines are to be considered; a documentation is found for instance in

http://www.netlib.org/lapack/explore-html/files.html

The different names of the routines are logically derived and composed from their functions, data types, etc.; for details see

http://www.netlib.org/lapack/individualroutines.html

(Date of the links: December 2022.)

To solve (A.12), the matrix A does not need to be inverted completely; it suffices to transform it into a triangular shape, applying Gauss elimination (for details see, e. g., [1]). Once (A.12) is in triangular form

A′ ⋅ x⃗ = b⃗′   (A.14)

with

A′ = ( A′_11   .      .     .    A′_1N
        0    A′_22    .     .      .
        0      0    A′_33   .      .
                            .      .
        0      0      0         A′_NN ),   (A.15)

the system (A.14) is easily solved by backward substitution and we obtain

x_i := ( b′_i − ∑_{k=i+1}^{N} A′_ik x_k ) / A′_ii,   (A.16)

where the index i = N, N − 1, ..., 1 runs backwards. If the matrix in (A.15) is regular, det A′ = ∏_i^N A′_ii ≠ 0 and, as a consequence, all A′_ii ≠ 0.

A.3.4 LU decomposition

LU decomposition provides a direct, noniterative solution method for an inhomogeneous system of linear equations. The matrix A is written as the product of two matrices

L ⋅ U = A,   (A.17)

where L is a lower (only nonzero elements on the diagonal and below) and U an upper (nonzero elements on the diagonal and above) triangular matrix. Taking a 4 × 4 matrix as an example, we have

( L11   0    0    0  )   ( U11  U12  U13  U14 )   ( A11  A12  A13  A14 )
( L21  L22   0    0  ) ⋅ (  0   U22  U23  U24 ) = ( A21  A22  A23  A24 ).   (A.18)
( L31  L32  L33   0  )   (  0    0   U33  U34 )   ( A31  A32  A33  A34 )
( L41  L42  L43  L44 )   (  0    0    0   U44 )   ( A41  A42  A43  A44 )

Then (A.12) can be written as

A ⋅ x⃗ = (L ⋅ U) ⋅ x⃗ = L ⋅ (U ⋅ x⃗) = b⃗.   (A.19)

Introducing the vector

y⃗ = U ⋅ x⃗,   (A.20)

we first solve

L ⋅ y⃗ = b⃗,   (A.21)

which, because of the triangular form of L, is simple:

y_i := (1/L_ii) [ b_i − ∑_{j=1}^{i−1} L_ij y_j ].   (A.22)

Here we imply that the sum is not executed for i = 1 and that, for the y_j occurring on the right-hand side, the values computed in the same loop are used (forward substitution). As a second step, we solve

U ⋅ x⃗ = y⃗   (A.23)

by backward substitution just as in (A.14). The main task is to find the decomposition (A.17) or, in other words, to determine the matrices L and U. This is done by solving the inhomogeneous system

∑_{k=1}^{K} L_ik U_kj = A_ij,   i = 1 ... N,  j = 1 ... N,   (A.24)

where, because of the special structure of L and U, the sum runs only up to K = min(i, j).


These are N² equations for 2 × (N² + N)/2 = N² + N unknowns, and the system is underdetermined, meaning the LU decomposition is not unique. Thus, we can arbitrarily set the N elements in the diagonal of L to one:

L_ii = 1,   i = 1 ... N.   (A.25)

A direct solution of (A.24) is given by the Crout algorithm [3]:

1. j = 1.
2. Compute the j coefficients

   U_ij := A_ij − ∑_{k=1}^{i−1} L_ik U_kj,   i = 1 ... j.   (A.26)

3. Compute the N − j coefficients

   L_ij := (1/U_jj) [ A_ij − ∑_{k=1}^{j−1} L_ik U_kj ],   i = j + 1 ... N.   (A.27)

4. j := j + 1. If j ≤ N go to 2.
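As an illustration, not a replacement for the LAPACK routines mentioned in Section A.3.3, a compact MATLAB transcription of Crout's steps (A.26), (A.27) together with forward substitution (A.22) and backward substitution (A.16) might read:

function x = lu_solve(A,b)
% minimal sketch: LU decomposition after Crout, no pivoting
  N = length(b);  L = eye(N);  U = zeros(N);
  for j = 1:N
    for i = 1:j                                    % (A.26)
      U(i,j) = A(i,j) - L(i,1:i-1)*U(1:i-1,j);
    end
    for i = j+1:N                                  % (A.27)
      L(i,j) = (A(i,j) - L(i,1:j-1)*U(1:j-1,j))/U(j,j);
    end
  end
  y = zeros(N,1);
  for i = 1:N                                      % forward subst. (A.22), L(i,i)=1
    y(i) = b(i) - L(i,1:i-1)*y(1:i-1);
  end
  x = zeros(N,1);
  for i = N:-1:1                                   % backward subst. (A.16)
    x(i) = (y(i) - U(i,i+1:N)*x(i+1:N))/U(i,i);
  end
end

Since no pivoting is included, the sketch fails whenever a small U(j,j) occurs; this is cured by the reordering discussed below.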

Following the algorithm step-by-step, it turns out that the right-hand sides only require elements that have already been computed in prior steps.

A.3.4.1 Pivoting

A division occurs in (A.27), which may cause numerical problems if the corresponding U_jj is small. One may show that Crout's algorithm (like other elimination methods) is unstable if U_jj becomes small compared to other matrix elements. A solution is provided by reordering the system. One may interchange rows and columns arbitrarily but then also has to swap the elements of the vectors x⃗ and b⃗ accordingly. Finding an appropriate permutation that leads to a large-valued divisor (the pivot element in (A.27)) is called 'pivoting.' One distinguishes between full and partial pivoting. In the first case, rows and columns of A are interchanged pairwise; in the latter, only the rows. How can we find the optimal pivot element? Normally it suffices to take the element with the largest absolute value. To compare elements in different rows, the matrix A should be scaled first. One can use the largest element in each row and scale it to one, i. e., one computes for each row i a scaling factor

s_i^{-1} = max_j |A_ij|

and then scales

A′_ij = s_i A_ij,   b′_i = s_i b_i.

A.3.5 Thomas algorithm

If the matrix A has many zeros, the methods can often be simplified considerably. The special case of a tridiagonal matrix (nonzeros only in the diagonal and in one upper and one lower secondary diagonal) occurs often from finite differences discretization and can be solved quite effectively. Let A ⋅ x⃗ = y⃗, with a tridiagonal A given as

A = ( b_1  c_1
      a_2  b_2  c_2
            .    .    .
                a_i  b_i  c_i
                      .    .    .
                          a_N  b_N ).   (A.28)

In a first step, (A.28) is reduced to the triangular form

A′ = ( 1  c′_1
           1   c′_2
                .    .
                     1   c′_i
                          .   .
                               1 ),   (A.29)

which is achieved by substituting

c′_1 = c_1 / b_1,   y′_1 = y_1 / b_1   (A.30)

and

c′_i := c_i / (b_i − a_i c′_{i−1}),   y′_i := (y_i − a_i y′_{i−1}) / (b_i − a_i c′_{i−1}),   i = 2 ... N.   (A.31)

The system A′ x⃗ = y⃗′ can then be readily solved by (A.16):

x_N = y′_N   and   x_i := y′_i − x_{i+1} c′_i,   i = N − 1, ..., 1.   (A.32)


The recipe (A.30)–(A.32) is known as the 'Thomas algorithm' and can be extended straightforwardly to matrices having more than one upper and lower diagonal.
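The library routine tridag of Appendix B implements this recipe in FORTRAN; a minimal MATLAB transcription (a, b, c each of length N, with a(1) and c(N) unused) might look as follows:

function x = thomas(a,b,c,y)
% tridiagonal solver, cf. (A.30)-(A.32)
  N = length(b);
  c(1) = c(1)/b(1);  y(1) = y(1)/b(1);           % (A.30)
  for i = 2:N                                    % (A.31)
    den  = b(i) - a(i)*c(i-1);
    c(i) = c(i)/den;
    y(i) = (y(i) - a(i)*y(i-1))/den;
  end
  x = y;                                         % x(N) = y'(N)
  for i = N-1:-1:1                               % (A.32)
    x(i) = y(i) - c(i)*x(i+1);
  end
end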

A.4 Homogeneous systems of linear equations

A.4.1 Eigenvalue problems

Now we turn to homogeneous equations of the type (A.3),

B ⋅ x⃗ − λ x⃗ = 0,   (A.33)

which can be brought into the form (A.1) by substituting A = B − λ 1. The system (A.33) has nontrivial solutions only if the determinant vanishes, det(B − λ 1) = 0 (solvability condition). As already stated in Section A.1.2, this yields the characteristic polynomial P(λ), whose N zeros correspond to the eigenvalues of B. To each eigenvalue λ_i an eigenvector v⃗_i is assigned so that

B ⋅ v⃗_i = λ_i v⃗_i,   i = 1 ... N,   (A.34)

where not all eigenvalues need to be different; see Section A.2.2.

A.4.2 Diagonalization

The central task is to find eigenvalues and eigenvectors for a given matrix. As outlined in Section A.2.2, each matrix can be transformed into Jordan canonical form:

C^{-1} ⋅ B ⋅ C = J.

Since the transformation does not change the spectrum, the eigenvalues of B and J are equal and can simply be read off from the diagonal of J: λ_i = J_ii. Thus, the spectrum is known once an appropriate C has been determined. If J is diagonal, J_ij = λ_i δ_ij, B is said to be diagonalizable and its desired eigenvectors are identical with the columns of the matrix C (for matrices that cannot be brought into diagonal form, there exist special methods that we shall not discuss here).

How can we obtain the transformation C? In practice, two different methods are applied, sometimes also a combination of both. We shall present a short overview of both; for more details see the literature, e. g., [1] or [2].

A.4.2.1 Transformation method

The idea is to find invertible transformation matrices P_i that change and simplify the original matrix in a specific way. This can be the zeroing of certain elements (Jacobi transformation), or of complete rows or columns (Householder transformation). Thus, one has

B′ = P_k^{-1} ... P_2^{-1} P_1^{-1} B P_1 P_2 ... P_k.

The resulting matrix B′ is similar to B but need not yet be diagonal. In any case, it should be easier to handle than the original matrix. Finally, the better conditioned B′ can be further diagonalized by factorization.

A.4.2.2 Factorization method

Assume that the matrix B can be factored into a right matrix F_R and a left one F_L according to

B = F_L ⋅ F_R.   (A.35)

A new matrix can be built by commutation:

B′ = F_R ⋅ F_L.   (A.36)

Conversely, multiplying (A.35) with F_L^{-1} from the left side yields F_R = F_L^{-1} ⋅ B and, inserting this into (A.36),

B′ = F_L^{-1} ⋅ B ⋅ F_L.   (A.37)

Hence, B and B′ are similar and have the same spectrum. Next, one constructs the sequence

B^{(n+1)} = F_{nL}^{-1} ⋅ B^{(n)} ⋅ F_{nL} = F_{nR} ⋅ F_{nL}   (A.38)

with B^{(0)} = B and F_{nL}, F_{nR} as the factors of B^{(n)}. One can show that B^{(n)} converges for n → ∞ to a triangular matrix if the factorization (A.35) fulfills certain properties [2]. This will be described in greater detail in the next section.


A.4.2.3 LU factorization

Now we use in (A.35) the LU decomposition introduced in detail in Section A.3.4,

B = L ⋅ U.

The iteration rule (A.38) thus reads

B^{(n+1)} = L_n^{-1} ⋅ B^{(n)} ⋅ L_n = U_n ⋅ L_n   (A.39)

and one complete iteration step looks as follows:
1. Factorize B in L and U.
2. Compute B′ = U ⋅ L.
3. Put B := B′.
4. If B is not in triangular form, go to 1.

At each step, B comes closer to an upper triangular matrix. If B has reached this form, it will not change further; hence each upper triangular matrix is a fixed point of the map (A.39). This is evident because an upper triangular matrix B factorizes in a trivial way into an upper triangular matrix U = B and the unit matrix as a 'lower triangular matrix' L. Then one has L U = U L and B = B′.

A.4.2.4 QR factorization

Not every matrix can be LU decomposed. In contrast, the so-called QR decomposition

B = Q ⋅ R,

with R again an upper triangular matrix but now an orthogonal matrix Q, is always possible. Instead of (A.39) we obtain

B^{(n+1)} = Q_n^{-1} ⋅ B^{(n)} ⋅ Q_n = R_n ⋅ Q_n.   (A.40)

We list the following properties without proof; for details see [2]:
– If all eigenvalues of B have different absolute values, B^{(n)} converges for n → ∞ to an upper triangular matrix. The eigenvalues appear in the diagonal, ordered with increasing absolute value.
– If k eigenvalues have the same modulus |λ_i|, i = 1 ... k, B^{(n)} converges to an upper triangular matrix with the exception of a diagonal block matrix of order k whose eigenvalues are the λ_i.
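A bare-bones MATLAB illustration of the iteration (A.40); the example matrix is chosen arbitrarily, and for production work one would of course call eig or the LAPACK routines directly:

B = [4 1 0; 1 3 1; 0 1 1];    % arbitrary example matrix
for n = 1:100
   [Q,R] = qr(B);             % factorize
   B = R*Q;                   % similar to the previous B, cf. (A.37)
end
disp(diag(B))                 % approximate eigenvalues of the original B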

A.4.3 Application: zeros of a polynomial

When given a polynomial of degree N in the form

P(x) = ∑_{k=0}^{N} a_k x^k,   (A.41)

whose N complex zeros

P(x_i) = 0,   x_i ∈ ℂ,   i = 1 ... N

are to be determined, several methods are available. Here we introduce an algorithm leading to a linear eigenvalue problem. First, we normalize (A.41),

b_k = a_k / a_N,   k = 0 ... N − 1,

and write the polynomial as

P̃(x) = x^N + ∑_{k=0}^{N−1} b_k x^k,

having of course still the same zeros as P. We construct the N × N Frobenius matrix

F = ( 0                    −b_0
      1  0                 −b_1
         1  0              −b_2
            .  .             .
               .  .          .
                  1       −b_{N−1} ).   (A.42)

One can show that

det(F − x 1) = (−1)^N P̃(x)

holds, i. e., the N eigenvalues of F correspond to the zeros of P. The matrix (A.42) has Hessenberg form, i. e., it is a triangular matrix with one additional secondary diagonal. Such matrices can be diagonalized efficiently, e. g., using the LAPACK routine DHSEQR:

      SUBROUTINE zeros(a,n,z)
C     computes all zeros of the polynomial
C     p = a(n)*x^n + a(n-1)*x^(n-1) ... a(0) = 0
C     a(0:n) [in]: coefficients (not changed), real*8
C     z(n) [out]: the complex zeros, complex*16
C     Double precision version
      IMPLICIT REAL*8 (a-h,o-z)
      COMPLEX*16 z(n)
      DIMENSION a(0:n),zr(n),zi(n),frob(n,n),work(1000)
      an=a(n); a=a/an               ! normalization of polynomial
C     Frobenius matrix (Hessenberg)
      frob=0.
      frob(1:n,n)=-a(0:n-1)
      DO i=1,n-1
         frob(i+1,i)=1.
      ENDDO
C     Lapack call, computes the eigenvalues of frob
      CALL DHSEQR('E','N',n,1,n,frob,n,zr,zi,dumy,1,work,1000,info)
      z=CMPLX(zr,zi)
      a=a*an                        ! re-do normalization
      END
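MATLAB follows the same idea: its built-in function roots forms the companion (Frobenius-type) matrix of the coefficient vector and computes its eigenvalues. A quick cross-check for, e. g., x³ − 6x² + 11x − 6 with the zeros 1, 2, 3:

p = [1 -6 11 -6];             % coefficients, highest power first
disp(roots(p))                % zeros via the built-in routine
disp(eig(compan(p)))          % the same zeros via the companion matrix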

Bibliography

[1] L. Hogben (editor), Handbook of Linear Algebra, Chapman & Hall (2007).
[2] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer (2010).
[3] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes, Cambridge Univ. Press (2007).

B Program library

B.1 Routines

We wish to supply and describe some useful and often needed routines. They can be compiled separately and linked by specifying the corresponding object file (.o) during the compilation of the main program. If the routines are gathered, e. g., in a file named fortlib.f, one compiles once with

gfortran -c fortlib.f

The option -c tells the compiler not to create an executable but rather to generate an object file, named fortlib.o in this case. If a routine from the library is used, one must modify the script file used for compilation (e. g., make_f95 from Section 1.2.4) by adding the fortlib object file. Finally, the script file should look like this (if LAPACK is also needed):

gfortran -O2 $1.f -o $1 fortlib.o -L/usr/lib/ -lpgplot -llapack

The complete library contains the following sources:

Name            purpose                                            see book page
graphics
  init          graphics initialization                            9
  contur        contour lines                                      60
  contur1       contour lines
  ccontu        contour lines (colored)                            260
  image         bitmap (colored)                                   21
  ccircl        initialize color circle                            21
Runge–Kutta
  rkg           Runge–Kutta                                        73
  drkg          Runge–Kutta (double precision)
  drkadt        Runge–Kutta, adaptive time step (double prec.)     77
  rkg_del       Runge–Kutta for delay systems                      161
miscellaneous
  tridag        Thomas algorithm for tridiagonal matrix            346
  ctrida        Thomas algorithm for complex tridiag. matrix
  dlyap_exp     computation of (all) Lyapunov exponents            112
  dlyap_del     computation of largest Lyapunov exponent, delay    172
  dlyap_exp_del computation of (all) Lyapunov exponents, delay     174
  schmid        Schmidt–Gram orthogonalization
  volum         volume spanned by vectors (function)
  deter         determinant of a matrix (function)
  random_init   initialization of random generator                 306

In the following, we only present the comment headers of the respective routines. Where not specified otherwise, implicit type declaration is assumed and real variables are in single precision (real*4). The complete listing is provided in https://www.degruyter.com/document/isbn/9783110782523/html.

B.2 Graphics

All routines in this section make use of programs of the PGPLOT library (see Chapter 1).

B.2.1 init

      SUBROUTINE init(x0,x1,y0,y1,size,aspect,frame)
c     initializes a graphics window
c     x0,x1  [In] x-region
c     y0,y1  [In] y-region
c     size   [In] size of window in inches
c     aspect [In] aspect ratio (y/x)
c     frame  [In] if frame='Y' or 'y' a box is plotted

B.2.2 contur

      SUBROUTINE contur(a,idim,jdim,n)
c     Plots n contour lines of an array a. The contour values are
c     chosen equidistant between min(a) and max(a).
c     If n=0: only the a=zero line is plotted.
c     a(idim,jdim) [In] 2D array
c     idim, jdim   [In] dimensions of a
c     n            [In] number of lines

B.2.3 contur1

      SUBROUTINE contur1(a,idim,jdim,n)
c     plots contour lines as 'contur'
c     n=0 [In] only zero line plotted
c     n>0 n lines between 0 and max(a) (equidistant)

C Solutions of the problems

C.1 Chapter 1

A linear stability analysis yields for the perturbations ϵ of (C.3)

ϵ_{n+1} = A ϵ_n

with the amplification factor

A = d f^{(2)}(x)/dx |_{x_{p1,p2}}.

Forming the derivative of the second iterate (C.1), we obtain

A = −4a³x³ + 6a³x² − 2(1 + a)a²x + a².

Using (C.2) we find after some calculations

A = −a² + 2a + 4.

The periodic solution (C.3) becomes unstable if |A| > 1. This is the case either for a < 3, which is excluded by the existence condition of (C.3), or for a > 1 + √6.

C.2 Chapter 2

Problems 2.1.4

1. For K = 0:

y_{n+1} = y_n,   x_{n+1} = x_n + y_n,

i. e., horizontal rows of dots, which are dense if y_n is irrational, otherwise points with gaps.

2. The chain length after N iterations is given as ℓ = x_N − x_0; thus for K = 0, ℓ = N y_0, a linear function of the initial value y_0. For small values of K the length deviates from the straight line for certain rational y_0 = 1/M, M integer, and locks in the form of steps of a staircase. For larger K this effect increases and, in addition, chaotic regions occur; see Figure C.1.

Figure C.1: Chain length as a function of y_0 for several K after 10 000 iterations. For larger K an irregular staircase (devil's staircase) occurs.

C.3 Chapter 3

Problems 3.1.2

We study the map x_{n+1} = f(x_n) with

f(x) = a Δt x (1 − 1/a + 1/(aΔt) − x)

and

df/dx = a Δt (1 − 1/a + 1/(aΔt) − 2x).

In particular, we wish to compute the stability of the fixed point x_s = 1 − 1/a and find for the amplification factor

A = df/dx |_{x_s} = 1 + (1 − a)Δt.

For x_s > 0, one must have a > 1 and therefore A ≤ 1. The numerical stability limit is thus given by A = −1, or

Δt = 2/(a − 1).
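A short MATLAB experiment confirms this limit; for a = 3 the critical step size is Δt = 1 (perturbation size and iteration count below are chosen arbitrarily):

a = 3;  xs = 1 - 1/a;                       % fixed point
for dt = [0.9 1.1]                          % below and above 2/(a-1)
   x = xs + 0.01;                           % small perturbation
   for n = 1:200
      x = a*dt*x*(1 - 1/a + 1/(a*dt) - x);  % the map f
   end
   fprintf('dt = %4.2f   |x-xs| = %8.2e\n', dt, abs(x-xs))
end

For dt = 0.9 the perturbation decays, for dt = 1.1 the iteration settles on a period-2 cycle away from x_s.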


C.4 Chapter 4

Problems 4.3.2

1. The expression f(ỹ(x_0 + Δx)) on the left-hand side of (4.22) can be written as a Taylor expansion with respect to g:

f(ỹ(x_0 + Δx)) = f(y(x_0) + g) = f(y(x_0)) + (df/dy) g + (1/2)(d²f/dy²) g² + O(Δx³).

Now taking g from the expansion

ỹ(x_0 + Δx) = y(x_0) + y′Δx + (1/2) y″Δx² + O(Δx³),   g ≡ y′Δx + (1/2) y″Δx²,

we obtain up to the order Δx²

f(ỹ(x_0 + Δx)) = f(y(x_0)) + (df/dy)(y′Δx + (1/2) y″Δx²) + (1/2)(d²f/dy²) y′² Δx² + O(Δx³).

Inserting into (4.22) yields

ỹ(x_0 + Δx) = y(x_0) + Δx f(y(x_0)) + (Δx²/2)(df/dy) y′ + (Δx³/2)[(1/2)(df/dy) y″ + (1/2)(d²f/dy²) y′²] + O(Δx⁴),

where f(y(x_0)) = y′, (df/dy) y′ = y″, and the square bracket equals (1/2) y‴.

On the other hand, when expanding the exact solution we obtain

y(x_0 + Δx) = y(x_0) + y′Δx + (1/2) y″Δx² + (1/6) y‴Δx³ + O(Δx⁴).

Subtracting the two last equations finally yields the error

|y(x_0 + Δx) − ỹ(x_0 + Δx)| = (1/12) |y‴| Δx³,

which is of order Δx³.

2. The symplectic scheme of (4.5) reads

φ_{n+1} = φ_n + ω_{n+1} Δt
ω_{n+1} = ω_n − (αω_n + Ω_0² φ_n) Δt

or

A ( φ )         = B ( φ )
  ( ω )_{n+1}       ( ω )_n   (C.4)

with

A = ( 1  −Δt        B = (  1         0
      0   1  ),          −Ω_0² Δt   1 − αΔt ).

Thus, Q_s in (4.21) reads

Q_s = A^{-1} B.

Problems 4.3.3.3

1. Meaning of the expressions:

α_1: increase of prey by reproduction, exponential growth
α_2: limitation of prey by predator, eating
β_1: death of predator, exponential decrease
β_2: increase of predator due to prey food, eating

After the scaling

t = τ t̃,   n_i = γ_i ñ_i

with

τ = 1/β_1,   γ_1 = β_1/β_2,   γ_2 = β_1/α_2,

one arrives at the form (4.27) with the sole remaining (control) parameter a = α_1/β_1.

2. The fixed points are computed from

0 = a ñ_1 − ñ_1 ñ_2
0 = −ñ_2 + ñ_1 ñ_2

as

(i) ñ_1 = ñ_2 = 0: stable for a < 0 (node), unstable for a > 0 (saddle point).

(ii) ñ_1 = 1, ñ_2 = a. This fixed point exists only for a > 0 since ñ_i, as a population number, is by definition positive. Linearizing,

( ñ_1 )   ( 1 )   ( u_1 )
( ñ_2 ) = ( a ) + ( u_2 ) e^{λt},

leads after insertion in (4.27) to the solvability condition

det (  λ   1
      −a   λ ) = λ² + a = 0

and λ_{1,2} = ±i√a. Hence the second fixed point is always marginally stable (Re λ = 0), but has a nonvanishing imaginary part. This is denoted as a center.

3. The time derivative of W yields

dW/dt = dñ_1/dt + dñ_2/dt − (dñ_1/dt)/ñ_1 − a (dñ_2/dt)/ñ_2.

If we insert (4.27), the right-hand side vanishes, showing that W is conserved.

4. Numerical solutions are obtained straightforwardly using Runge–Kutta; see Figure C.2.

Figure C.2: Numerical solution for a = 1 and different initial conditions, n2 over n1 .
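A minimal MATLAB sketch producing curves like those of Figure C.2, here with the built-in integrator ode45 instead of the book's Runge–Kutta routines (initial values chosen arbitrarily):

a = 1;                                       % control parameter
rhs = @(t,n) [a*n(1) - n(1)*n(2);            % scaled eqs. (4.27)
              -n(2) + n(1)*n(2)];
hold on
for n10 = [0.2 0.5 0.8]                      % several initial conditions
   [~,n] = ode45(rhs, [0 30], [n10; a]);
   plot(n(:,1), n(:,2))
end
xlabel('n_1'), ylabel('n_2')

Each initial condition yields a closed curve around the center (1, a), reflecting the conservation of W.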

Problems 4.4.5

1. Transformation to a constantly rotating frame of reference turns (4.55) into (tildes removed):

ẋ = v
ẏ = w
v̇ = −2Ω_0 w + Ω_0² x − x/r^α
ẇ = 2Ω_0 v + Ω_0² y − y/r^α   (C.5)

The system (C.5) possesses infinitely many fixed points lying on the circle √(x_0² + y_0²) = r_0 = Ω_0^{−2/α}, each corresponding to the same circular orbit in the nonrotating frame. Without loss of generality we can take the fixed point

x_0 = r_0,   y_0 = 0,   v_0 = w_0 = 0.

0 0 0 0

1 0 0 2Ω0

0 1 ). −2Ω0 0

(C.6)

The eigenvalues of J read λ12 = ±Ω0 √α − 4,

λ34 = 0.

The circular orbit is unstable if one of the eigenvalues has a positive real part, which is the case for α > 4. If we compute the total energy of the planet in the nonrotating system it reads E0 =

α−4 1 , 2(α − 2) r0α−2

a value becoming positive if α > 4. Thus, the planet being initially on a circle could leave the system and vanish at infinity, independently of r0 (this reasoning applies of course only if the potential energy goes to zero for r → ∞, i. e., for α > 2). For α < 4 one finds λ12 = ±iω with ω = Ω0 √4 − α and the planet will oscillate around the circular orbit when slightly disturbed. The condition for a closed orbit is resonance ω=

k Ω , ℓ 0

where k and ℓ are two coprime integers. The orbit is closed after one revolution if ℓ = 1, leaving the condition for α

368 � C Solutions of the problems α = 4 − k2,

k = 1, 2, 3, . . .

In fact there are infinitely many sharp values for α where circular orbits stay closed when slightly perturbed. We mention the first three ones, together with the behavior of the force on distance and the potential: 1. α = 3: Kepler potential, F ∼ −1/r 2 , U ∼ −1/r. 2. α = 0: 2D-harmonic oscillator, F ∼ −r, U ∼ r 2 . 3. α = −5: anharmonic oscillator, F ∼ −r 6 , U ∼ r 7 . To show numerically the existence of closed orbits we integrate the system (C.5) with the initial conditions x(0) = 1,

y(0) = 0,

v(0) = 0,

w(0) = 1 + ϵ

for 100 revolutions and average the x(ti ) − x(0) value after each revolution i: ⟨δx⟩ =

1 100󵄨󵄨 󵄨 ∑󵄨x(t ) − x(0)󵄨󵄨󵄨 100 i 󵄨 i

where ti is found from y(ti ) = 0 and w(ti ) > 0. The small deviation ϵ = 0.001 starts the instability. The result for α = −6 through α = 4 is shown in Figure C.3. Clearly, the deviations after one revolution get very small for the selected values α = −5, 0, 3, indicating a closed orbit.

Figure C.3: Averaged deviation from initial value x as a function of α.

C.4 Chapter 4

� 369

2. is left for the reader. 3. The Lagrangian points L1 through L3 are located on the x-axis. Thus it suffices to evaluate the Jacobi potential (4.46) at y = 0: V (x, 0) = −

1−μ μ x2 − − . |x + μ| |x + μ − 1| 2

Zeroing its derivative leads to the conditions for the fixed points 𝜕x V (x, 0) =

1−μ μ (x + μ) + (x + μ − 1) − x = 0. |x + μ|3 |x + μ − 1|3

After some calculations one finds a 5th degree polynomial: Pμ (x) = − x 5 + x 4 (2 − 4μ) + x 3 (6μ − 6μ2 − 1)

+ x 2 (6μ2 − 4μ3 − 2μ) + x(2μ3 − μ4 − μ2 )

+ sign(x + μ)[x 2 (1 − μ) + x(4μ − 2μ2 − 2) + (1 − μ)3 ] + μ sign(x + μ − 1)(x + μ)2 .

(C.7)

Due to the moduli one must distinguish between the three cases 1. x < −μ, left of m2 , sign(x + μ) = −1, sign(x + μ − 1) = −1 2. −μ < x < 1 − μ, between m2 and m3 , sign(x + μ) = 1, sign(x + μ − 1) = −1 3. 1 − μ < x, right of m3 , sign(x + μ) = 1, sign(x + μ − 1) = 1. The roots of (C.7) can be computed for instance using the program zeros of Section A.4.3. Figure C.4 shows the Lagrangian points depending on the mass relation μ.

Figure C.4: Solid: x-coordinates of the three Lagrangian points L1 , L2 , L3 over the mass parameter μ; dashed: position of the two main bodies.

370 � C Solutions of the problems Problems 4.5.6 1. Thermalization: Multiplying (4.74) with v⃗i and summation over all particles for F⃗i = 0 yields 1 d ∑ v2 = −γ ∑ v2i . 2 dt i i i

(C.8)

From the equipartition theorem in 2D it follows 1 ∑ v2 = NT 2 i i and then from (C.8) dt T = −2γT = −2γ0 (T − Ts ) or T − Ts ∼ exp(−2γ0 t). 2. 2D-equilibrium configurations: (i) If only the four nearest neighbors (blue) are considered, the potential energy reads Q

UG = 4 U(r) for a square lattice, and UGH = 6 U(r) for a hexagonal grid, where we take U from (4.59). The equilibrium positions are found from Q,H

dUG

dr

=0

as rQ = rH = 21/6 ≈ 1.12246. (ii) Including also the red particles, we find Q UG = 4 U(r) + 4 U(√2r)

and UGH = 6 U(r) + 6 U(√3r)

(C.9)

C.4 Chapter 4

� 371

and from the equilibrium condition (C.9) rQ = (65/36)1/6 ≈ 1.10349,

1/6

rH = ((27 + 1/27)/14)

≈ 1.11593.

(iii) Including all plotted particles, we finally obtain Q

UG = 4 U(r) + 4 U(√2r) + 4 U(2r) and UGH = 6 U(r) + 6 U(√3r) + 6 U(2r), rQ = (2

1 + 2−6 + 2−12 ) 1 + 2−3 + 2−6

1/6

≈ 1.10100,

rH = (2

1 + 3−6 + 2−12 ) 1 + 3−3 + 2−6

1/6

≈ 1.1132.

The more stable configuration is that with the lower energy. Computing UG for the given equilibrium positions, one finds for all cases Q

UGH < UG , independently on the number of neighbors taken into account. Hence the lattice should manifest a hexagonal geometry for low temperatures, in agreement with Figure 4.14, left frame. Problems 4.7.5 1. Linearizing (4.102) with respect to the upper rest position yields u̇ 1 = u2

u̇ 2 = −Ω20 (1 + a sin ωt) sin(π u1⏟)⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟+⏟⏟⏟⏟⏟⏟⏟ =−u1

or ü − Ω20 (1 + a sin ωt) u = 0

(C.10)

with u = u1 . The same scaling as in Section 4.7.4 leads to ü − (p + 2b sin 2t)u = 0. Substituting the ansatz u(t) = u0 cos(t + α) eλt , and neglecting the term sin(3t + α), one obtains

(C.11)

372 � C Solutions of the problems (λ2 − 1 − p)(cos t cos α − sin t sin α)

− 2λ(sin t cos α + cos t sin α) − b(sin t cos α − cos t sin α) = 0

or sin t[−(λ2 − 1 − p) sin α − (2λ + b) cos α]

+ cos t[(λ2 − 1 − p) cos α − (2λ − b) sin α] = 0.

The two square brackets must vanish, leading to the system (

−λ2 + 1 + p −2λ + b

−2λ − b sin α )( ) = 0. λ2 − 1 − p cos α

Its solvability condition constitutes a polynomial of 2nd degree in λ2 having the roots λ2 = p − 1 ± √b2 − 4p. For a nonpositive Re(λ), λ2 must be real valued and λ2 ≤ 0. The first condition leads to b ≥ 2√p, the latter to b ≤ 1 + p. In addition, 0 ≤ p ≤ 1 must hold. This results in a stability diagram sketched in Figure C.5.

Figure C.5: The upper rest position is stable in the shaded area.


2. The damped Mathieu equation is transformed to the dimensionless normal form

ü + βu̇ + (p + 2b sin 2t̃) u = 0   (C.12)

by the scaling of Section 4.7.4 and β = 2α/ω. To find the Floquet exponents, one numerically integrates the system (4.104), but now with

L(t̃) = (         0               1
         −(p + 2b sin 2t̃)       −β ),   (C.13)

from t̃ = 0 through t̃ = T = π. Determining the monodromy matrix according to (4.106), one finds the Floquet multipliers (4.107). Figure C.6 shows the contour lines max(|σ_1|, |σ_2|) = 1 that mark the onset of instability. The tips of the tongues no longer reach the p-axis as in Figure 4.30 but are shifted to finite critical amplitudes that are necessary to overcome friction.

Figure C.6: Stability chart for the damped Mathieu equation (C.12) with β = 0.1.
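For a single parameter point, the procedure can be sketched in a few MATLAB lines (the values of p and b are chosen here only for illustration): the two columns of the monodromy matrix are obtained by integrating (C.13) over one period with the unit vectors as initial conditions:

p = 0.5;  b = 1.0;  beta = 0.1;
L = @(t,q) [q(2); -(p + 2*b*sin(2*t))*q(1) - beta*q(2)];
opt = odeset('RelTol',1e-10,'AbsTol',1e-12);
[~,q1] = ode45(L, [0 pi], [1;0], opt);
[~,q2] = ode45(L, [0 pi], [0;1], opt);
M = [q1(end,:).' q2(end,:).'];      % monodromy matrix, cf. (4.106)
sigma = eig(M);                     % Floquet multipliers (4.107)
max(abs(sigma))                     % > 1 signals instability

Scanning p and b on a grid and contouring max(|σ_1|, |σ_2|) = 1 yields Figure C.6.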


C.5 Chapter 5

Problems 5.6

1. Exact solution of (5.4). Let α > 0. The set (5.4) is made dimensionless with the scaling

t = t̃/α,   (x, y) = (x̃, ỹ) ⋅ g/α²

and reads

ẍ̃ = −ẋ̃ + β̃ ỹ   (C.14)
ÿ̃ = −ẏ̃ − 1.   (C.15)

Only one parameter β̃ = β/α² is left. From here on we omit all tildes. Different ways to solve the system are possible. One can compute y from (C.15) and obtain

y(t) = T f(t)/f(T) − t   (C.16)

with f(t) = 1 − e^{−t}. The boundary conditions y(0) = y(T) = 0 are already implemented in (C.16). From (C.14) and (C.15), y can be eliminated:

d_t⁴ x + 2 d_t³ x + d_t² x = −β.

Integrating twice yields

ẍ + 2ẋ + x = −(β/2) t² + A t + B.

Putting

x(t) = X(t) + a + b t − (β/2) t²

with A = b − 2β, B = a + 2b − β, leads to the homogeneous equation

Ẍ + 2Ẋ + X = 0,

which is solved by

X(t) = (c t − a) e^{−t}.


Here, x(0) = 0 is already included. Thus one finds for x

x(t) = (c t − a) e^{−t} + a + b t − (β/2) t².   (C.17)

The coefficients b, c are determined by inserting (C.17) and (C.16) in (C.14) and comparing the coefficients:

c = βT/f(T),   b = c + β.

The remaining coefficient a follows from the boundary condition x(T) = L:

a = (L − βT)/f(T) − T²β(1 + 3e^{−T})/(2f²(T)).
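The closed-form solution is now easily evaluated; a short MATLAB check (the values of β, T, and L are assumed for illustration) that also verifies the boundary conditions x(T) = L and y(0) = y(T) = 0:

beta = 1;  T = 3;  L = 2;
f = @(t) 1 - exp(-t);
c = beta*T/f(T);  b = c + beta;
a = (L - beta*T)/f(T) - beta*T^2*(1+3*exp(-T))/(2*f(T)^2);
t = linspace(0,T,200);
x = (c*t - a).*exp(-t) + a + b*t - beta/2*t.^2;   % (C.17)
y = T*f(t)/f(T) - t;                              % (C.16)
plot(x,y), xlabel('x'), ylabel('y')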

2. Solution of the nonlinear Schrödinger equation. The given solution is verified straightforwardly by insertion. Discussion for arbitrary C: the integral found by separation of variables from (5.45) has no analytic form. We discuss a 'graphic solution' coming from the analogy to a classical one-dimensional motion of a mass point in a potential. Writing (5.42) in the form

Φ″ = −EΦ + γ Φ³ = −dV(Φ)/dΦ   (C.18)

with

V(Φ) = (E/2) Φ² − (γ/4) Φ⁴,   (C.19)

(C.18) is identical to the equation of motion of a mass point with m = 1 in the potential (C.19) (x plays the role of time, Φ that of the position). Equation (5.45) corresponds to the conservation of energy, and C/2 denotes the total energy:

C/2 = (1/2)(Φ′)² + V(Φ).

We examine the four cases:

(a) E > 0, γ > 0, Figure C.7 (a). For C < E²/2γ, nonlinear oscillations exist around Φ = 0. They have the square amplitude

Φ_m² = (E − √(E² − 2γC))/γ   (C.20)

and correspond to plane, nonlinear waves in position space. For C > E²/2γ or C < 0, Φ diverges for x → ±∞.

(b) E > 0, γ < 0, Figure C.7 (b). Nonlinear waves for C > 0.


Figure C.7: The four different potentials. Nonlinear waves can exist for (a), (b) and (d).

(c) E < 0, γ > 0, Figure C.7 (c). Diverging solutions for all C.

(d) E < 0, γ < 0, Figure C.7 (d). If E²/2γ < C < 0, nonlinear waves about Φ = ±√(E/γ). For C > 0, nonlinear waves about Φ = 0 with square amplitude (C.20).

C.6 Chapter 6

Problems 6.3.4

1. The scaling to nondimensional variables reads

t = t̃/γ,   T = (α/γ) T̃,   β = (γ/α) β̃.

2. The fixed point T_s is determined by solving

T_s − 1 = −tanh β(T_s − 1 − ε).

For ε = 0 the only solution is T_s = 1. Inserting the ansatz T_s = 1 + δ with δ = O(ε) yields

δ = −tanh β(δ − ε) = −β(δ − ε) + O(ε²).

Taking only the linear order and solving for δ gives

δ = β ε/(1 + β),   T_s = 1 + β ε/(1 + β).

3. Inserting T(t) = T_s + u exp(λt) into (6.41) and expanding tanh(βε/(1 + β)) with respect to ε yields

λ = −1 − β e^{−λτ} + O(ε²).

Considering only the lowest order, this has the form of (6.19) with a = −1 and b = −β, and the formulas (6.28) and (6.30) can be used to find ω_c and τ_c, respectively.

4. Numerical solutions of (6.41) are shown in Figure C.8 for the parameters β = 5, T_0 = 1, and several supercritical values of τ.

Figure C.8: Numerical solutions of (6.41) for τ = 0.4 (black), 1.0 (red), 2.0 (green).

Problems 6.4.5

1. Linearizing close to the fixed point n_1 = n_2 = 1 with

n_j = 1 + u_j e^{iωt}

yields in linear order of u

iω u_1 = u_2,   iω u_2 = −u_1.

The solvability condition is ω = 1 and one obtains

u_2 = u_0 e^{it},   u_1 = −i u_2 = −i u_0 e^{it} = u_0 e^{i(t−π/2)}

and thus

u_1(t) = u_2(t − π/2),   (C.21)

and the same relation for the n_i. Therefore, A = 1 and τ = π/2.

2. Eq. (6.69) takes the form

d_t n_2(t) = n_2(t) − n_2(t) n_2(t − π/2).   (C.22)

Again, solving this in linear order with n_2(t) = 1 + n_0 e^{λt} gives λ = −e^{−λπ/2} and, separating into real and imaginary parts with λ = iω (marginal stability),

0 = cos(πω/2),   ω = sin(πω/2).

Both equations have the solution ω = 1, in agreement with (C.21).

3. Numerical solutions of the Lotka–Volterra equations as well as of (C.22) are shown in Figure C.9.

Figure C.9: Numerical solutions of the Lotka-Volterra eqs. (red = predator, green = prey) and of the delay eq. (C.22), blue. The initial conditions for (C.22) have been chosen in such a way that the amplitudes and phases coincide.
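Equation (C.22) can also be integrated directly with MATLAB's delay solver dde23; a minimal sketch with an assumed constant history slightly off the fixed point n_2 = 1:

rhs = @(t,n,Z) n - n*Z;                 % Z = n_2(t - pi/2)
sol = dde23(rhs, pi/2, 1.05, [0 40]);   % constant history n_2 = 1.05
plot(sol.x, sol.y), xlabel('t'), ylabel('n_2')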

C.7 Chapter 7

Problems 7.4.4

1. We obtain (7.135) after the scaling

t = t̃/α,   x = x̃ √(D/α),   u = ũ ⋅ (α/β).

2. (i) Linearization with u_0 = 0, u = u_0 + w:

∂_t w = ∂²_xx w + w

yields with w ∼ e^{λt} e^{ikx}

λ(k) = 1 − k².

The fixed point is unstable for any plane wave with |k| < 1.

(ii) Linearization about u_0 = 1:

∂_t w = ∂²_xx w − w

yields λ(k) = −1 − k². The fixed point is stable for all k.

3. Inserting u = u(ξ) in (7.135) leads to

−v ∂_ξ u = ∂²_ξξ u + u − u².

Since u → 0 for ξ → ∞, we can linearize for large ξ according to

−v ∂_ξ u = ∂²_ξξ u + u.

The ansatz u ∼ e^{−σξ}, σ > 0, yields

v = σ + 1/σ

and hence a lower limit for the speed:

v_min = 2   for σ = 1.

4. Inserting (7.137) in (7.135) leads to

2C e^{−κξ} (κ² − 1 − κv) − C² e^{−2κξ} (2κv + 4κ² + 1) = 0.

Since this must be valid for all ξ, the expressions in the brackets must vanish separately and one computes

v = −5/√6 ≈ −2.04,   κ = 1/√6.

By the substitution C = e^{κx_0}, it becomes evident that C shifts the front to the right by x_0.

5. The stationary form of (7.135) can be written as

u″ = u² − u = −dV(u)/du

with

V(u) = u²/2 − u³/3

and, as in Problem 5.6(2), interpreted as a one-dimensional motion in the potential V. The 'energy law' thus reads

(1/2)(u′)² + V(u) = E.   (C.23)

For the special value E = 1/6, the 'motion' is along the homoclinic orbit of the fixed point u = 1. Thereby, u goes from u = 1 to u = −1/2 and back to u = 1; see Figure C.10. For E = 1/6, (C.23) may be integrated by separation of variables:

x − x_0 = √3 ∫ du / √(1 − 3u² + 2u³) = 2 atanh( √((2u + 1)/3) ).

After some short manipulations, one obtains the stationary solution

u_0(x) = 1 − 3 / (2 cosh²((x − x_0)/2)),   (C.24)

corresponding to a localized solution at x = x_0 with u_0(x → ±∞) = 1 and u_0(x_0) = −1/2.

Figure C.10: Potential and homoclinic orbit for E = 1/6.


6. We examine the stability of (C.24) with x_0 = 0. Linearization of (7.135) about u_0 yields (7.138) with the potential

V(x) = 2u_0 − 1 = 1 − 3/cosh²(x/2).

Expanding

V(x) ≈ −2 + (3/4) x²

turns (7.138) into the equation of the quantum mechanical harmonic oscillator:

−d²_xx w + (3/4) x² w = (2 − λ) w.

The ground state is a Gaussian, w ∼ exp(−(√3/4) x²); for its 'energy' one finds (2 − λ) = √3/2, or

λ = 2 − √3/2 ≈ 1.13.

Hence λ > 0 and the solution (C.24) is proved to be unstable.

The following MATLAB code computes the lowest three states of (7.138) with d_x w(−L/2) = d_x w(L/2) = 0, L = 50, and n = 1000 mesh points, using a finite difference scheme. The output is shown in Figure C.11.

clear; n=1000; dx=50/(n+1); dx2=dx^2;
x=-25+dx:dx:25-dx;
a=-1/dx2*ones(n-1,1);                   % off-diagonals of the tridiagonal matrix
for k=1:n
   b(k)=2/dx2+1-3/cosh(x(k)/2)^2;       % diagonal: -d_xx + V(x)
end
b(1)=b(1)-1/dx2; b(n)=b(n)-1/dx2;       % Neumann boundary conditions
m=diag(a,1)+diag(a,-1)+diag(b,0);
[v,e]=eigs(m,3,'sa');                   % compute eigenvalues and vectors
plot(x,v(1:n,1),x,v(1:n,2),x,v(1:n,3)); % output
-diag(e)

The values for λ are printed and read λ = 1.25, 0, −0.75. Note that the second (marginal, λ = 0) mode corresponds to a lateral shift of u_0 with w ∼ d_x u_0. It can easily be shown that w = d_x u_0 is an exact solution of (7.138) with λ = 0.


Figure C.11: The three lowest states.

C.8 Chapter 8

Problems 8.4.3

1. With the transformation Ψ = Ψ̃ exp(iω_0 t), ω_0 vanishes from (8.110). Inserting (8.111) in (8.110) yields (from here on ω_0 = 0)

iΩ = ε − (1 + ic_3) A²

or, after separating real and imaginary parts,

ε − A² = 0,   Ω + c_3 A² = 0,

and from there

A = √ε,   Ω = −c_3 ε.

2. Linear stability analysis about (8.111). Inserting Ψ = (A + w(x, t)) exp(iΩt) in (8.110) and linearizing with respect to w gives

ẇ = (1 + ic_1) ∂²_xx w − (1 + ic_3) ε (w + w*)
ẇ* = (1 − ic_1) ∂²_xx w* − (1 − ic_3) ε (w + w*).

With w ∼ exp(λt) exp(ikx), one arrives at the linear system

(λ + (1 + ic_1) k² + (1 + ic_3) ε) w + (1 + ic_3) ε w* = 0
(1 − ic_3) ε w + (λ + (1 − ic_1) k² + (1 − ic_3) ε) w* = 0.

Its solvability condition is a 2nd degree polynomial in λ, having the roots

λ_{1,2} = −k² − ε ± √(ε² − c_1² k⁴ − 2 c_1 c_3 ε k²).   (C.25)

Expansion with respect to k² yields

λ = −(1 + c_1 c_3) k² + O(k⁴)

for small k. Thus, λ > 0 for arbitrarily small k if 1 + c_1 c_3 < 0. All plane wave disturbances with 0 ≤ k² ≤ k_c² and

k_c² = −2ε (1 + c_1 c_3)/(1 + c_1²)

are growing exponentially in time, Figure C.12. This is called the Benjamin–Feir instability.

Figure C.12: Largest λ from (C.25) for 1 + c1 c3 < 0 (solid) and 1 + c1 c3 > 0 (dashed).

3. Figure C.13 shows the real part of Ψ in an xt-diagram for c1 = 1/2, c3 = −3. It can be easily seen that the initial condition (8.111) becomes unstable and after t ≈ 250 gives way to a spatio-temporal chaotic evolution.


Figure C.13: xt-diagram of Re(Ψ) from t = 0 through t = 2000.

4. The largest Lyapunov exponent Λ is computed by first determining a numerical solution Ψ_N(x, t) as in 3. This is the reference trajectory; compare also Section 4.6.3.1. In parallel, one integrates the linearized PDE for the complex-valued deviations u(x, t) from the reference trajectory:

∂_t u(x, t) = [ε + (1 + ic_1) ∂²_xx] u(x, t) − (1 + ic_3)[2 |Ψ_N(x, t)|² u(x, t) + Ψ_N(x, t)² u*(x, t)],

which is obtained by substituting Ψ(x, t) = Ψ_N(x, t) + u(x, t) into (8.110) and linearizing around Ψ_N. For u(x, 0) we choose a randomly distributed but normalized initial condition

∑_j |u(x_j, 0)|² = 1,   with x_j = jΔx.

After a certain time δt (e. g., δt = 200Δt), the growth of |u| in the interval δt is computed:

s(δt) = [ ∑_j |u(x_j, δt)|² ]^{1/2}.

Thereafter, u(x, δt) is again normalized. The continued numerical integration then yields s(2δt), and so on. At the end, the largest Lyapunov exponent is found as the mean value of all local exponents according to

Λ = lim_{K→∞} (1/(K δt)) ∑_{k=1}^{K} ln s(kδt).   (C.26)
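In pseudo-MATLAB, the renormalization loop reads as follows; cgl_step stands for a hypothetical routine that advances the reference solution Ψ_N and the linearized deviation u by one time step Δt (e. g., pseudospectrally), and N denotes the number of mesh points:

dt = 1e-2;  ns = 200;  K = 170;        % delta t = ns*dt
u = randn(N,1) + 1i*randn(N,1);
u = u/norm(u);                         % normalized initial deviation
slog = 0;
for k = 1:K
   for j = 1:ns
      [PsiN,u] = cgl_step(PsiN,u,dt);  % hypothetical integrator
   end
   s = norm(u);                        % growth factor s(k*delta t)
   slog = slog + log(s);
   u = u/s;                            % renormalize
end
Lambda = slog/(K*ns*dt)                % cf. (C.26)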


For each single value of c3 the integration is carried on up to t = 34 000. After changing (increasing or decreasing) c3 , the evaluation of the corresponding Lyapunov exponent according to (C.26) is started after an initial run until t = 15 000. Figure C.14 depicts Λ over c3 for a fixed c1 = 1/2. From (8.112) chaotic dynamics is expected for c3 ≤ −2. Obviously, the behavior of Λ depends on the previous states and a large hysteretic region shows up in which both chaotic and regular dynamics may emerge at the same values of c3 .

Figure C.14: Largest Lyapunov exponent Λ of the complex Ginzburg–Landau equation over c3 for c1 = 1/2. A transition to chaos showing a wide hysteretic region results.

D README and a short guide to FE-tools

D.1 README

File with description of the program examples on
https://www.degruyter.com/document/isbn/9783110782523/html

The codes are ordered along their appearance in the book chapters.
In each subdirectory a script file make_f95 exists for compilation.
In addition the packages

   pgplot,  installed in /usr/lib/
   lapack,  installed in /usr/lib/lapack/
   fftpack, installed in /usr/lib/   (only for chapter 8)

are required.

In this directory:

README      this file (see also appendix D.1)
fortlib.f   the library described in appendix B
fortlib.o   object file, linked by make_f95

TI stands for Terminal Input

-----------
chap2/

lyap_2d.f
   computes the largest Lyapunov exponent of the standard map,
   sect. 2.2.3.
   TI: K (control parameter)

dbc_slp.f
   decimal to binary converter from sect. 2.5.1.4
   TI: following the learning phase, input of the decimal number
   to be converted


kohonen_colormap.f
   color map after Kohonen as shown in Fig. 2.16.
   TI: none

tsp.f
   Traveling Salesman Problem from sect. 2.5.2.3
   TI: R (radius of the initial circle)

-----------
chap3/

henon_heiles.f
   Plots the Poincare section and trajectories from the Henon-Heiles
   model from sect. 3.3.3.1, compare Fig. 3.5. The value 'e' for the
   energy can be modified in the code.
   TI: initial values set by mouse

-----------
chap4/

reduc_3_b_p.f
   Plots and computes trajectories of the reduced three body problem
   from sect. 4.4.4.4. mu=1/4, can be modified in the code.
   TI: value of energy 'e'. Then set initial value by mouse in
   allowed region (red).

molecular_dynamics.f
   Molecular dynamics simulation from sect. 4.5.4. On the left-hand
   side particles are depicted in config. space, the right-hand side
   shows the pair correlation function. Simulations are also possible
   in a constant gravity field.
   TI: temperature (0.1 - 2.0), gravitation (ca 1), tend (e.g. 200)

pendulum_lyapunov_exp.f
   Bifurcation diagram (bottom) and two Lyapunov exponents of the
   driven pendulum from sect. 4.6.3.6.
   TI: region for a, resolution

-----------

chap7/

shallow_water.f
   Solution of 2D shallow water eqs. from sect. 7.2.4.2 with FTCS.
   Here for two oscillating point sources.
   TI: none

grid_contur.f
mesh_generator.f
laplace_solver.f
   Program tools for FE, see separate short guide in fe_guide.pdf

house.grd, house.knd, house.ppm
kidney.grd, kidney.knd, kidney.ppm
   example files for FE

fe_guide.pdf
   guide for the FE codes, see also appendix D.2

-----------
chap8/

driven_cavity.f
   solution of the hydrodynamic basic equations from sect. 8.3.2.3.
   Plot of stream lines as in Figs. 8.9, 8.10
   TI: Reynolds number

benard_convection.f
   Solution of the hydrodynamic eqs. from sect. 8.3.4.2 in 3D
   following sect. 8.3.4.3. Plot of temperature field. Rayleigh
   number, Biot number, tend can be changed in the code.
   Note: the library FFTPACK is needed (see make_f95)
   TI: none

-----------
chap9/

mc_gas.f
   compute the equilibrium of a many-particle system based on the
   Metropolis algorithm from sect. 9.3.1.2. In addition: constant
   gravitation.
   TI: temperature, gravitation. Note: units as in Figs. 9.4 - 9.6

ising.f
   solution of the Ising model with Metropolis algorithm from
   sect. 9.3.2.3. Temperature values are scanned, mean magnetization
   is computed.
   TI: external magnetic field

D.2 Short guide to finite-element tools from Chapter 7

The three codes for grid generation, Laplace solving, and evaluation:

program          purpose                                      [I]nput/[O]utput
mesh_generator   generates a 2D-triangular mesh               [I] ppm-file domain boundary, [O] grid file
laplace_solver   solves Laplace equation with Dirichlet b.c.  [I] grid file, [O] node file
grid_contur      plots contour lines                          [I] node file, grid file

Example files:

name                     purpose           created by
kidney.ppm, house.ppm    domain boundary   Xfig (or similar); see Figure D.1
kidney.grd, house.grd    grid files        mesh_generator
kidney.knd, house.knd    node files        laplace_solver

D.2.1 mesh_generator

TI stands for 'Terminal Input'.

TI: name of ppm-file (P6-format, binary, one byte/pixel); contains the domain boundary created by a graphics program, e. g., Xfig. The dimension of the bitmap can be as large as 5000 x 5000.

TI: number of mesh points in the x-direction; defines the resolution of the triangular grid. Reasonable values for the examples are 30-60.

The triangular grid is plotted (red). The outer boundary points are marked in blue. If some are missing, they must be added manually using the mouse.


Figure D.1: The two domain boundaries ‘house.ppm’ and ‘kidney.ppm.’ The latter also contains inner boundaries (multiply connected domain).

If finished, type 's' on the keyboard. All mesh points left and right of the external boundaries are then deleted automatically. Boundary points are now marked by circles.

Mouse modes: there are three modes, toggled by the keys 'd', 'r', 'z'.

Key 'd': delete mode. Clicked points near the mouse cursor are deleted.

Key 'r': boundary mode. Clicked points close to the mouse cursor are marked as boundary points. Inner boundaries (example 'kidney') have to be marked by hand and afterwards drawn to the desired position. Here, single points can also be set (point charges).

Key 'z': draw mode. Clicked points near the mouse cursor can be drawn, e. g., onto the boundary. The new location is obtained by positioning the mouse cursor and repeated clicking. Attention: avoid crossing lines!

Key 's': TI for grid file name (output, .grd) and termination of the program.

D.2.2 laplace_solver

TI: grid file name. In the code, the boundary node values have to be defined one by one. The number of the corresponding mesh points is obtained from mesh_generator.

TI: node file name. Here, the node values are stored after computation.

D.2.3 grid_contur

TI: node file name. Then request for the value of the contour line(s) to be plotted. Termination by Ctrl-d.

What could be improved?

The package can be extended and improved in various ways. Here, we only note a few ideas:
– Handling of the boundary conditions. Instead of setting the (Dirichlet) boundary values in the code laplace_solver, it could be better to define them in an extra file, or by mouse input.
– Other boundary conditions (Neumann, mixed).
– Automatic detection of the inner boundaries.
– Extension to the Poisson equation.
– Extension to time-dependent problems (diffusion equation, Schrödinger equation).
– Extension to 3D.

Index

acceptance-rejection method 307
activator-inhibitor system 292
ADI method 217
algebraic multiplicity 337
anharmonic oscillator 137
apsidal precession 82
autonomous system 45
ballistic flight 128
Barnsley ferns 25
basic reproduction number 177
Belousov–Zhabotinskii reaction 292
bifurcation diagram 105, 114
biharmonic operator 212
boundary conditions 127, 207
– Dirichlet 208, 210
– Neumann 208, 210, 215
– Robin 208, 215
Boussinesq approximation 275
Box–Muller method 97, 310
box-counting method 26, 116
Brownian motion 172, 186
Brusselator 294
Burgers equation 204
butterfly effect 16
Campbell–Baker–Hausdorff relation 55
canonical ensemble 95, 317
canonical equations 53
capacity dimension 115
celestial mechanics 79
characteristics 202
Chebyshev spectral method 233
Chebyshev–Gauss–Lobatto 233
Chirikov map 14
Circle map 14
codimension-two 292
collocation method 139
color maps 38
compartment model 175
congestion 199
conservative system 63
contact angle 320
convection 273
– equation 203, 226
– fully nonlinear problem 283
– linear stability analysis 278

– Rayleigh–Bénard 281
correlation dimension 117
Courant number 226
Covid-19 175
Crank–Nicolson method 73, 271
Crout algorithm 345
cyclic reduction 220
cyclotron frequency 264
decimal to binary converter 35
degree of freedom 43
delay-differential equation 156
detailed balance 318
differential quotient 211
discretization
– biharmonic operator 212
– Laplace operator 212
distribution function 306
driven cavity 269
drop formation 321
dynamical system 43
eigenvalue problems 347
elliptic PDE 207
embedding dimension 118
endemic equilibrium 177
ensemble 316
Euler method 65
Euler–Maruyama scheme 189
expectation value 306
extinction 194
finite differences 129, 211
finite element
– evaluation 245
– Laplace solver 245
– mesh generation 245
– method 242
fixed point 45
Floquet
– exponent 120
– theorem 119
flow field 266
flow-apart time 262
FORTRAN 1
fractal dimension 26, 115
fractals 22


Fredholm alternative 342
Frenkel–Kotorova model 13
frozen time analysis 279
FTCS scheme 222
Galerkin method 140
Gauss–Seidel method 216
geometric multiplicity 340
Ginzburg–Landau equation 247
– complex 303
gnuplot 28
gradient dynamics 247
half-speedometer rule 199
Hamilton function 53
Hamiltonian system 53
hard spheres 316
harmonic oscillator 135
heat equation 209, 220, 235
Hebb's learning rule 30
herd immunity 179
heterogeneous orbit 178
Heun's method 70
homoclinic orbit 65
Hopf instability 46, 291
Hutchinson–Wright equation 157, 168
– noisy 191
hyperbolic PDE 207
hysteresis 331
Hénon–Heiles model 57
implicit declarations 11
implicit method 68
importance sampling 315, 319
incompressible fluids 266
initial conditions 207
Ising model 322
– one-dimensional 325
– two-dimensional 327
Jacobi
– matrix 19, 46
– method 216
– potential 87
Jordan canonical form 340
Kepler problem 79
kernel 156, 341
Kirkwood gap 83
Kohonen algorithm 37
Lagrange polynomials 233
Lagrangian points 88
Langevin equation 189
LAPACK 6
Laplace operator 212
Laplace transform 163
Lax scheme 227
leapfrog scheme 228
learning process 30
least-squares method 140
Lennard-Jones potential 93
linear separability 33
linear stability analysis 8, 46
Linux 5
logistic map 7
logistic ODE 44, 157
– noisy 189
Lorenz attractor 52
Lorenz equations 51
Lotka–Volterra system 78, 176
LU decomposition 343
LU factorization 349
Lyapunov exponent 105
– higher order 110
– logistic map 17
– memory systems 171
– multidimensional maps 19
– Standard map 20
Lyapunov functional 300
Lyapunov time 114
Mackey–Glass equation 160, 168
Markov chain 318
mathematical pendulum 64, 103
Mathieu equation 123
MATLAB 1
Maxwell–Boltzmann distribution 97
mean-field approximation 323
memory 156
Metropolis algorithm 317
– 2D-gas 319
– diffusion equation 332
– Ising model 327
– Swift–Hohenberg equation 334
microcanonical ensemble 95
molecular dynamics 93
monodromy matrix 119

Kohonen algorithm 37 Lagrange polynomials 233 Lagrangian points 88 Langevin equation 189 LAPACK 6 Laplace operator 212 Laplace transform 163 Lax scheme 227 leapfrog scheme 228 learning process 30 least-squares method 140 Lennard-Jones potential 93 linear separability 33 linear stability analysis 8, 46 Linux 5 logistic map 7 logistic ODE 44, 157 – noisy 189 Lorenz attractor 52 Lorenz equations 51 Lotka–Volterra system 78, 176 LU decomposition 343 LU factorization 349 Lyapunov exponent 105 – higher order 110 – logistic map 17 – memory systems 171 – multidimensional maps 19 – Standard map 20 Lyapunov functional 300 Lyapunov time 114 Mackey–Glass equation 160, 168 Markov chain 318 mathematical pendulum 64, 103 Mathieu equation 123 MATLAB 1 Maxwell–Boltzmann distribution 97 mean-field approximation 323 memory 156 Metropolis algorithm 317 – 2D-gas 319 – diffusion equation 332 – Ising model 327 – Swift–Hohenberg equation 334 microcanonical ensemble 95 molecular dynamics 93 monodromy matrix 119


� 395

Schmidt–Gram method 112 Schrödinger equation 132 – nonlinear 145 – stationary 135 – stationary two-particle 254 – time-dependent 257 self-organized maps 36 self-similar 24 separatrix 65 shallow water equations 230 shooting method 151 Sierpinski triangle 23 simple sampling 315 SIR model 175 SIRS model 180 – with delayed infection rate control 183 – with infection rate control 183 – with noisy infection rate control 191 specific heat 100 spectral method 238 spiral patterns 296 standard deviation 307 Standard map 14 Stark effect 133, 140 stochastic differential equation 186, 189 strange attractor 50 stream function 267 subdomain method 139 Swift–Hohenberg equation 299 – Metropolis algorithm 334 symplectic method 54, 72, 96 Takens, embedding theorem 119 target patterns 296 Thomas algorithm 346 three-body problem 83 – reduced 86 time-evolution operator 43, 259 traffic flow 194 transformation method 308 transport equation 275 traveling salesman problem 39 Trojans 88 Turing instability 289 upwind scheme 227 Van der Pol oscillator 49 variance 307 variational problem 244

396 � Index

velocity field 267 Verhulst equation 157 – noisy 189 Verlet algorithm 57 von Neumann stability analysis 224 vorticity 267

wave equation 228 wave packet 258 Wiener process 187 zeros of a polynomial 350