Introduction to Modern Dynamics: Chaos, Networks, Space, and Time [2 ed.] 9780198844624, 019884462X


Table of contents:
Preface to the Second Edition
Preface: The Best Parts of Physics
  A unifying concept: geometry and dynamics
  A common tool: dynamical flows and the ODE solver
  Traditional junior-level physics: how to use this book
  What's simple in complex systems?
Acknowledgments
Contents
(Each chapter ends with a bibliography and sets of analytic problems and computational or numerical projects.)


Introduction to Modern Dynamics: Chaos, Networks, Space and Time. Second Edition. David D. Nolte, Purdue University


Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© David D. Nolte 2019

The moral rights of the author have been asserted. First Edition published in 2015. Second Edition published in 2019. Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.

British Library Cataloguing in Publication Data: Data available.
Library of Congress Control Number: 2019945041
ISBN 978-0-19-884462-4 (hbk.)
ISBN 978-0-19-884463-1 (pbk.)
DOI: 10.1093/oso/9780198844624.001.0001

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.

Preface to the Second Edition

Introduction to Modern Dynamics: Chaos, Networks, Space and Time (2015) is part of an emerging effort in physics education to update the undergraduate physics curriculum. Conventional junior-level mechanics courses have overlooked many modern dynamics topics that physics majors will use in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, and geodesic geometry, among others. These are the topics at the forefront of physics that drive high-tech businesses and start-ups, where more than half of physicists are employed. The first edition of Introduction to Modern Dynamics contributed to this effort by introducing these topics in a coherent program that emphasized common geometric properties across a wide range of dynamical systems. The second edition of Introduction to Modern Dynamics continues that trend by expanding chapters to include additional material and topics. It rearranges several of the introductory chapters for improved logical flow and expands them to add new subject matter. The second edition also has additional homework problems. New or expanded topics in the second edition include:

• Lagrangian applications
• Lagrange's undetermined multipliers
• Action-angle variables and conserved quantities
• The virial theorem
• Non-autonomous flows
• A new chapter on Hamiltonian chaos
• Rational resonances
• Synchronization of chaos
• Diffusion and epidemics on networks
• Replicator dynamics
• Game theory
• An extensively expanded chapter on economic dynamics

The goal of the second edition of Introduction to Modern Dynamics is to strengthen the sections on conventional topics (which students need for the GRE physics subject test), making it an ideal textbook for broader adoption at the junior level, while continuing the program of updating topics and approaches that are relevant for the roles that physicists will play in the twenty-first century. The historical development of modern dynamics is described in Galileo Unbound: A Path Across Life, the Universe and Everything, by D. D. Nolte, published by Oxford University Press (2018).

Preface: The Best Parts of Physics

The best parts of physics are the last topics that our students ever see. These are the exciting new frontiers of nonlinear and complex systems that are at the forefront of university research and are the basis of many of our high-tech businesses. Topics such as traffic on the World Wide Web, the spread of epidemics through globally mobile populations, or the synchronization of global economies are governed by universal principles just as profound as Newton's Laws. Nonetheless, the conventional university physics curriculum reserves most of these topics for advanced graduate study. Two justifications are given for this situation: first, that the mathematical tools needed to understand these topics are beyond the skill set of undergraduate students, and second, that these are specialty topics with no common theme and little overlap. Introduction to Modern Dynamics: Chaos, Networks, Space and Time dispels these myths.

The structure of this book combines the three main topics of modern dynamics—chaos theory, dynamics on complex networks, and the geometry of dynamical spaces—into a coherent framework. By taking a geometric view of physics, concentrating on the time evolution of physical systems as trajectories through abstract spaces, these topics share a common and simple mathematical language with which any student can gain a unified physical intuition. Given the growing importance of complex dynamical systems in many areas of science and technology, this text provides students with an up-to-date foundation for their future careers.

While pursuing this aim, Introduction to Modern Dynamics embeds the topics of modern dynamics—chaos, synchronization, network theory, neural networks, evolutionary change, econophysics, and relativity—within the context of traditional approaches to physics founded on the stationarity principles of variational calculus and Lagrangian and Hamiltonian physics. As the physics student explores the wide range of modern dynamics in this text, the fundamental tools that are needed for a physicist's career in quantitative science are provided, including topics the student needs to know for the Graduate Record Examination (GRE). The goal of this textbook is to modernize the teaching of junior-level dynamics, responsive to a changing employment landscape, while retaining the core traditions and common language of dynamics texts.


A unifying concept: geometry and dynamics

Instructors or students may wonder how an introductory textbook can contain topics, under the same book cover, on econophysics and evolution as well as the physics of black holes. However, it is not the physics of black holes that matters, rather it is the description of general dynamical spaces that is important, and the understanding that can be gained of the geometric aspects of trajectories governed by the properties of these spaces. All changing systems, whether in biology or economics or computer science or photons in orbit around a black hole, are understood as trajectories in abstract dynamical spaces.

Newton takes a back seat in this text. He will always be at the heart of dynamics, but the modern emphasis has shifted away from F = ma to a newer perspective where Newton's Laws are special cases of broader concepts. There are economic forces and forces of natural selection that are just as real as the force of gravity on point particles. For that matter, even the force of gravity recedes into the background as force-free motion in curved space-time takes the fore.

Unlike Newton, Hamilton and Lagrange retain their positions here. The variational principle and the minimization of dynamical quantities are core concepts in dynamics. Minimization of the action integral provides trajectories in real space, and minimization of metric distances provides trajectories—geodesics—in dynamical spaces. Conservation laws arise naturally from Lagrangians, and energy conservation enables simplifications using Hamiltonian dynamics.

Space and geometry are almost synonymous in this context. Defining the space of a dynamical system takes first importance, and the geometry of the dynamical space then determines the set of all trajectories that can exist in it.

A common tool: dynamical flows and the ODE solver

A mathematical flow is a set of first-order differential equations that are solved using as many initial values as there are variables, which defines the dimensionality of the dynamical space. Mathematical flows are one of the foundation stones that appear continually throughout this textbook. Nearly all of the subjects explored here—from evolving viruses to orbital dynamics—can be captured as a flow. Therefore, a common tool used throughout this text is the numerical solution of the ordinary differential equation (ODE).

Computers can be both a boon and a bane to the modern physics student. On the one hand, the easy availability of ODE solvers makes even the most obscure equations easy to simulate numerically, enabling any student to plot a phase plane portrait that contains all manner of behavior. On the other hand, physical insight and analytical understanding of complex behavior tend to suffer from the computer-game nature of simulators. Therefore, this textbook places a strong emphasis on analysis, and on behavior

under limiting conditions, with the goal to reduce a problem to a few simple principles, while making use of computer simulations to capture both the whole picture as well as the details of system behavior.
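To make this common tool concrete, here is a minimal sketch of the workflow just described, assuming Python with NumPy, SciPy, and Matplotlib installed (the van der Pol oscillator is chosen here only as an illustrative stand-in for the many flows studied in the text):

```python
# A minimal sketch of the workflow described above: define a dynamical flow
# dq/dt = F(q), hand it to an ODE solver, and plot a phase-plane portrait.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def flow(t, q, mu=1.0):
    """Van der Pol flow: x' = v, v' = mu*(1 - x^2)*v - x."""
    x, v = q
    return [v, mu * (1.0 - x**2) * v - x]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4000)

# Several initial conditions reveal the family of trajectories (streamlines).
for q0 in [(0.1, 0.0), (3.0, 0.0), (-2.0, 2.0)]:
    sol = solve_ivp(flow, t_span, q0, t_eval=t_eval, rtol=1e-8)
    plt.plot(sol.y[0], sol.y[1], lw=0.8)

plt.xlabel("x"); plt.ylabel("v"); plt.title("Phase-plane portrait")
plt.show()
```

Swapping in a different flow function is essentially all that is needed to explore any of the systems in later chapters.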

Traditional junior-level physics: how to use this book

All the traditional topics of junior-level physics are here. From the simplest description of the harmonic oscillator, through Lagrangian and Hamiltonian physics, to rigid body motion and orbital dynamics—the core topics of advanced undergraduate physics are retained and are interspersed throughout this textbook.

What's simple in complex systems?

The traditional topics of mechanics are integrated into the broader view of modern dynamics that draws from the theory of complex systems. The range of subject matter encompassed by complex systems is immense, and a comprehensive coverage of this topic is outside the scope of this book. However, there is still a surprisingly wide range of complex behavior that can be captured using the simple concept that the geometry of a dynamic space dictates the set of all possible trajectories in that space. Therefore, simple analysis of the associated flows provides many intuitive insights into the origins of complex behavior. The special topics covered in this textbook are:



Chaos theory (Chapter 4)

Much of nonlinear dynamics can be understood through linearization of the flow equations (equations of motion) around special fixed points. Visualizing the dynamics of multi-parameter systems within multidimensional spaces is made simpler by concepts such as the Poincaré section, strange attractors that have fractal geometry, and iterative maps.



Synchronization (Chapter 6)

The nonlinear synchronization of two or more oscillators is a starting point for understanding more complex systems. As the whole can be greater than the sum of the parts, global properties often emerge from local interactions among the parts. Synchronization of oscillators is surprisingly common and robust, leading to frequency-entrainment, phase-locking, and fractional resonance that allow small perturbations to control large networks of interacting systems.




Network theory (Chapter 7)

Everywhere we look today, we see networks. The ones we interact with daily are social networks and related networks on the World Wide Web. In this chapter, individual nodes are joined into networks of various geometries, such as small-world networks and scale-free networks. The diffusion of disease across these networks is explored, and the synchronization of Poincaré phase oscillators can induce a Kuramoto transition to complete synchronicity.



Evolutionary dynamics (Chapter 8)

Some of the earliest explorations of nonlinear dynamics came from studies of population dynamics. In a modern context, populations are governed by evolutionary pressures and by genetics. Topics such as viral mutation and spread, as well as the evolution of species within a fitness landscape, are understood as simple balances within quasispecies equations.



Neural networks (Chapter 9)

Perhaps the most complex of all networks is the brain. This chapter starts with the single neuron, which is a limit-cycle oscillator that can show interesting bistability and bifurcations. When neurons are placed into simple neural networks, such as perceptrons or feedforward networks, they can do simple tasks after training by error back-propagation. The complexity of the tasks increases with the complexity of the networks, and recurrent networks, like the Hopfield neural net, can perform associative memory operations that challenge even the human mind.



Econophysics (Chapter 10)

A most baffling complex system that influences our daily activities, as well as the trajectory of our careers, is the economy in the large and the small. The dynamics of microeconomics determines what and why we buy, while the dynamics of macroeconomics drives entire nations up and down economic swings. These forces can be (partially) understood in terms of nonlinear dynamics and flows in economic spaces. Business cycles and the diffusion of prices on the stock market are no less understandable than evolutionary dynamics (Chapter 8) or network dynamics (Chapter 7), and indeed draw closely from those topics.



Geodesic motion (Chapter 11)

This chapter is the bridge between the preceding chapters on complex systems and the succeeding chapters on relativity theory (both special and general). This is where the geometry of space is first fully defined in terms of a metric tensor, and where trajectories through a dynamical space are discovered to be paths of force-free motion. The geodesic equation (a geodesic flow) supersedes Newton's Second Law as the fundamental equation of motion that can be used to define the path of masses through potential landscapes and the path of light through space-time.



Special relativity (Chapter 12)

In addition to traditional topics of Lorentz transformations and mass-energy equivalence, this chapter presents the broader view of trajectories through Minkowski space-time whose geometric properties are defined by the Minkowski metric. Relativistic forces and noninertial (accelerating) frames connect to the next chapter that generalizes all relativistic behavior.



General relativity (Chapter 13)

The physics of gravitation, more than any other topic, benefits from the overarching theme developed throughout this book—that the geometry of a space defines the properties of all trajectories within that space. Indeed, in this geometric view of physics, Newton's force of gravity disappears and is replaced by force-free geodesics through warped space-time. Mercury's orbit around the Sun, and trajectories of light past black holes, are elements of geodesic flows whose properties are easily understood using the tools developed in Chapter 4 and expanded upon throughout this textbook.

Acknowledgments

I gratefully acknowledge the many helpful discussions with my colleagues Ephraim Fischbach, Andrew Hirsch, Sherwin Love, and Hisao Nakanishi during the preparation of this book. Special thanks to my family, Laura and Nicholas, for putting up with my "hobby" for so many years, and also for their encouragement and moral support. I also thank the editors at Oxford University Press for help in preparing the manuscript and especially Sönke Adlung for helping me realize my vision.

Part I Geometric Mechanics

Traditional approaches to the mechanics of particles tend to focus on individual trajectories. In contrast, modern dynamics takes a global view of dynamical behavior by studying the set of all possible trajectories of a system. Modern dynamics furthermore studies properties in dynamical spaces that carry names like state space, phase space, and space–time. Dynamical spaces can be highly abstract and can have high dimensionality. This initial part of the book introduces the mathematical tools necessary to study the geometry of dynamical spaces and the resulting dynamical behavior within those spaces. Central to modern dynamics is Hamilton's Principle of Stationary Action as the prototypical minimization principle that underlies much of dynamics. This approach will lead ultimately (in Part III) to the geodesic equation of general relativity, in which matter warps Minkowski space (space–time), and trajectories execute force-free motion through that space.

1 Physics and Geometry

1.1 State space and dynamical flows
1.2 Coordinate representation of dynamical systems
1.3 Coordinate transformations
1.4 Uniformly rotating frames
1.5 Rigid-body motion
1.6 Summary
1.7 Bibliography
1.8 Homework problems

[Chapter-opening photograph: Foucault's pendulum in the Pantheon in Paris]

Modern dynamics, like classical dynamics, is concerned with trajectories through space—the descriptions of trajectories (kinematics) and the causes of trajectories (dynamics). However, unlike classical mechanics, which emphasizes motions of physical masses and the forces acting on them, modern dynamics generalizes the notion of trajectories to encompass a broad range of time-varying behavior that goes beyond material particles to include animal species in ecosystems, market prices in economies, and virus spread on connected networks. The spaces that these trajectories inhabit are abstract, and can have a high number of dimensions. These generalized spaces may not have Euclidean geometry, and may be curved like the surface of a sphere or space–time warped by gravity.

The central object of interest in dynamics is the evolving state of a system. The state description of a system must be unambiguous, meaning that the next state to develop in time is uniquely determined by the current state. This is called deterministic dynamics, which includes deterministic nonlinear dynamics for which chaotic trajectories may have an apparent randomness to their character.

This chapter lays the foundation for the description of dynamical systems that move continuously from state to state. Families of trajectories, called dynamical flows, are the fundamental elements of interest; they are the field lines of dynamics. These field lines are to deterministic dynamics what electric and magnetic field lines are to electromagnetism. One key difference is that there is only one set of Maxwell's equations, while every nonlinear dynamical system has its own set of equations, providing a nearly limitless number of possibilities for us to study.

This chapter begins by introducing general ideas of trajectories as the set of all possible curves defined by dynamical flows in state space. To define trajectories, we will establish notation to help us describe high-dimensional, abstract, and possibly curved spaces. This is accomplished through the use of matrix (actually tensor) indices that look strange at first to a student familiar only with vectors, but which are convenient devices for keeping track of multiple coordinates. The next step constructs coordinate transformations from one coordinate system to another. For instance, a central question in modern dynamics is how two observers, one in each system, describe the common phenomena that they observe. The physics must be invariant to the choice of coordinate frame, but the descriptions can differ widely.

1.1 State space and dynamical flows

Configuration space is defined by the spatial coordinates needed to describe a dynamical system. The path the system takes through configuration space is its trajectory. Each point on the trajectory captures the successive configurations of the system as it evolves in time. However, knowing the current configuration of the system does not guarantee that the next configuration can be defined. For instance, the trajectory can loop back and cross itself. The velocity vector that pointed one direction at the earlier time can point in a different direction at a later time. Therefore, a velocity vector must be attached to each configuration to define how it will evolve next.

1.1.1 State space

By adding velocities, associated with each of the coordinates, to the configuration space, a new expanded space, called state space, is created. For a given initial condition, there is only a single system trajectory through this multidimensional space, and each point on the trajectory uniquely defines the next state of the system.¹ This trajectory in state space can cross itself only at points where all the velocities vanish, otherwise the future state of the system would not be unique.

¹ See A. E. Jackson, Perspectives of Nonlinear Dynamics (Cambridge University Press, 1989).


Example 1.1 State space of the damped one-dimensional harmonic oscillator

The damped harmonic oscillator in one coordinate has the single second-order ordinary differential equation²

    m\ddot{x} + \gamma \dot{x} + kx = 0    (1.1)

where m is the mass of the particle, γ is the drag coefficient, and k is the spring constant. Any set of second-order time-dependent ordinary differential equations (e.g., Newton's second law) can be written as a larger set of first-order equations. For instance, the single second-order equation (1.1) can be rewritten as two first-order equations

    \dot{x} = v
    m\dot{v} + \gamma v + kx = 0    (1.2)

It is conventional to write these with a single time derivative on the left as

    \dot{x} = v
    \dot{v} = -2\beta v - \omega_0^2 x    (1.3)

in the two variables (x, v) with β = γ/2m and ω₀² = k/m. State space for this system of equations consists of two coordinate axes in the two variables (x, v), and the right-hand sides of the equations are expressed using only the same two variables. To solve this equation, assume a solution in the form of a complex exponential evolving in time with an angular frequency ω as (see Appendix A.1)

    x(t) = X e^{i\omega t}    (1.4)

Insert this expression into Eq. (1.1) to yield

    -m\omega^2 X e^{i\omega t} + i\omega\gamma X e^{i\omega t} + kX e^{i\omega t} = 0    (1.5)

with the characteristic equation

    0 = m\omega^2 - i\omega\gamma - k = \omega^2 - 2i\omega\beta - \omega_0^2    (1.6)

where the damping parameter is β = γ/2m, and the resonant angular frequency is given by ω₀² = k/m. The solution of the quadratic equation (1.6) is

    \omega = i\beta \pm \sqrt{\omega_0^2 - \beta^2}    (1.7)

Using this expression for the angular frequency in the assumed solution (1.4) gives

    x(t) = X_1 \exp(-\beta t) \exp\left(i\sqrt{\omega_0^2 - \beta^2}\, t\right) + X_2 \exp(-\beta t) \exp\left(-i\sqrt{\omega_0^2 - \beta^2}\, t\right)    (1.8)

Consider the initial values x(0) = A and \dot{x}(0) = 0; then the two initial conditions impose the values

    X_1 = \frac{A}{2} \left( \frac{\sqrt{\omega_0^2 - \beta^2} - i\beta}{\sqrt{\omega_0^2 - \beta^2}} \right)
    X_2 = \frac{A}{2} \left( \frac{\sqrt{\omega_0^2 - \beta^2} + i\beta}{\sqrt{\omega_0^2 - \beta^2}} \right) = X_1^*    (1.9)

The final solution is

    x(t) = A \exp(-\beta t) \left[ \cos\left(\sqrt{\omega_0^2 - \beta^2}\, t\right) + \frac{\beta}{\sqrt{\omega_0^2 - \beta^2}} \sin\left(\sqrt{\omega_0^2 - \beta^2}\, t\right) \right]    (1.10)

which is plotted in Fig. 1.1(a) for the case where the initial displacement is a maximum and the initial speed is zero. The oscillator "rings down" with the exponential decay constant β. The angular frequency of the ring-down is not equal to ω₀, but is reduced to the value \sqrt{\omega_0^2 - \beta^2}. Hence, the damping decreases the frequency of the oscillator from its natural resonant frequency.

A system trajectory in state space starts at an initial condition (x₀, v₀), and uniquely traces the time evolution of the system as a curve in the state space. In Fig. 1.1(b), only one trajectory (streamline) is drawn, but streamlines fill the state space, although they never cross, except at singular points where all velocities vanish. Streamlines are the field lines of the vector field. Much of the study of modern dynamics is the study of the geometric properties of the vector field (tangents to the streamlines) and field lines associated with a defined set of flow equations.

² The "dot" notation stands for a time derivative: \dot{x} = dx/dt and \ddot{x} = d^2x/dt^2. It is a modern remnant of Newton's fluxion notation.

Figure 1.1 Trajectories of the damped harmonic oscillator (plotted for A = 1, β = 0.05, ω₀ = 0.5). (a) Configuration position versus time, with envelope function exp(−βt). (b) State space, every point of which has a tangent vector associated with it. Streamlines are the field lines of the vector field and are dense. Only a single streamline is shown.
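As a check on the analysis above, the flow (1.3) can be integrated numerically and compared against the closed-form ring-down (1.10). This is a minimal sketch, assuming Python with NumPy and SciPy, using the parameter values quoted in Fig. 1.1:

```python
# Integrate the damped-oscillator flow (1.3) and compare with the analytic
# solution (1.10). Parameters follow Fig. 1.1: A = 1, beta = 0.05, omega0 = 0.5.
import numpy as np
from scipy.integrate import solve_ivp

A, beta, w0 = 1.0, 0.05, 0.5

def flow(t, q):
    x, v = q
    return [v, -2.0 * beta * v - w0**2 * x]

t = np.linspace(0.0, 100.0, 2001)
sol = solve_ivp(flow, (0.0, 100.0), [A, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

w1 = np.sqrt(w0**2 - beta**2)   # reduced ring-down frequency
x_exact = A * np.exp(-beta * t) * (np.cos(w1 * t) + (beta / w1) * np.sin(w1 * t))

# The deviation should sit at the solver-tolerance level.
print("max deviation:", np.max(np.abs(sol.y[0] - x_exact)))
```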


1.1.2 Dynamical flows

This book works with a general form of sets of dynamical equations called a dynamical flow. The flow for a system of N variables is defined as

    \frac{dq^1}{dt} = F_1\left(q^1, q^2, \ldots, q^N; t\right)
    \frac{dq^2}{dt} = F_2\left(q^1, q^2, \ldots, q^N; t\right)
    \vdots
    \frac{dq^N}{dt} = F_N\left(q^1, q^2, \ldots, q^N; t\right)    (1.11)

or, more succinctly,

    \frac{dq^a}{dt} = F_a\left(q^a; t\right)    (1.12)

which is a system of N simultaneous equations, where the vector function F_a is a function of the time-varying coordinates of the position vector. If F_a is not an explicit function of time, then the system is autonomous, with an N-dimensional state space. On the other hand, if F_a is an explicit function of time, then the system is non-autonomous, with an (N + 1)-dimensional state space (space plus time).

The solution of the system of equations (1.12) is a set of trajectories q^a(t) through the state space. In this book, the phrase configuration space is reserved for the dynamics of systems of massive particles (with second-order time derivatives as in Examples 1.1 and 1.2). The state space for particle systems is even-dimensional because there is a velocity for each coordinate. However, for general dynamical flows, the dimension of the state space can be even or odd. For dynamical flows, state space and configuration space are the same thing, and the phrase state space will be used.
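One standard device, worth making explicit here (it follows directly from the definitions above), converts a non-autonomous flow into an autonomous one by promoting time itself to an extra coordinate with unit velocity:

    \frac{dq^a}{dt} = F_a\left(q^1, \ldots, q^N; q^{N+1}\right), \qquad \frac{dq^{N+1}}{dt} = 1

which makes the (N + 1)-dimensional state space explicit.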

Example 1.2 An autonomous oscillator

Systems that exhibit self-sustained oscillation, known as autonomous oscillators, are central to many of the topics of nonlinear dynamics. For instance, an ordinary pendulum clock, driven by mechanical weights, is an autonomous oscillator with a natural oscillation frequency that is sustained by gravity. One possible description of an autonomous oscillator is given by the dynamical flow equations

    \dot{x} = \omega y + \omega x \left(1 - x^2 - y^2\right)
    \dot{y} = -\omega x + \omega y \left(1 - x^2 - y^2\right)    (1.13)

where ω is an angular frequency. The (x, y) state-space trajectories of this system are spirals that relax to the unit circle as they approach a dynamic equilibrium, shown in Fig. 1.2. Without the second terms on the right-hand side, this is simply an undamped harmonic oscillator. Examples and problems involving autonomous oscillators will recur throughout this book in Chapters 4 (Chaos), 6 (Synchronization), 7 (Networks), 8 (Evolutionary Dynamics), 9 (Neurodynamics), and 10 (Economic Dynamics).

Figure 1.2 Flow lines of an autonomous oscillator with a limit cycle. All trajectories converge on the limit cycle.
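The limit cycle can be verified directly (a short check, not in the original text): substituting x = r cos θ, y = r sin θ into Eq. (1.13) decouples the radial and angular motion,

    \dot{r} = \frac{x\dot{x} + y\dot{y}}{r} = \omega r\left(1 - r^2\right), \qquad \dot{\theta} = \frac{x\dot{y} - y\dot{x}}{r^2} = -\omega

Since ṙ > 0 for 0 < r < 1 and ṙ < 0 for r > 1 (taking ω > 0), every trajectory spirals onto the circle r = 1, which is then traversed at the constant angular rate −ω.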

Example 1.3 Undamped point-mass pendulum

The undamped point-mass pendulum is composed of a point mass m on a massless rigid rod of length L. It has a two-dimensional state-space dynamics in the space (θ, ω) described by

    \dot{\theta} = \omega
    \dot{\omega} = -\frac{g}{L} \sin\theta    (1.14)

The state-space trajectories can be obtained by integrating these equations using a nonlinear ODE solver. Alternatively, the state-space trajectories can be obtained analytically if there are constants of the motion. For instance, because the pendulum is undamped and conservative, the total energy of the system is a constant for a given initial condition,

    E = \frac{1}{2} m L^2 \omega^2 + mgL\left(1 - \cos\theta\right)    (1.15)

referenced to the bottom of the motion in configuration space. If the maximum angle of the pendulum for a given trajectory is θ₀, then

    E = mgL\left(1 - \cos\theta_0\right)    (1.16)

and

    \frac{1}{2} m L^2 \omega^2 = mgL\left(\cos\theta - \cos\theta_0\right)    (1.17)

which is solved for the instantaneous angular velocity ω as

    \omega(\theta) = \pm\omega_0 \sqrt{2\left(\cos\theta - \cos\theta_0\right)}    (1.18)

These are oscillatory motions (librations) for θ₀ < π. For larger energies, the motion is rotational. The solution takes the same form,

    \omega(\theta) = \pm\omega_0 \sqrt{2\left(\cos\theta - \cos\theta_0\right)}    (1.19)

where cos θ₀ is no longer the cosine of a physical angle, but is an effective parameter describing the total energy as

    \cos\theta_0 = 1 - \frac{E}{mgL}    (1.20)

The (θ, ω) state-space trajectories of the undamped point-mass pendulum are shown in Fig. 1.3. When the state space pertains to a conservative system, it is also called phase space. Conservative systems are Hamiltonian systems and are described in Chapter 3.

[Figure 1.3 plot: momentum versus angle, showing closed orbits, open orbits, and the separatrix.]

Figure 1.3 State space of the undamped point-mass pendulum. The configuration space is one-dimensional along the angle θ . Closed orbits (oscillation) are separated from open orbits (rotation) by a curve known as a separatrix.
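Because each orbit in Fig. 1.3 is a level set of the energy (1.15), the entire phase portrait, separatrix included, can be drawn with a contour plot. A minimal sketch, assuming Python with NumPy and Matplotlib and setting m = L = g = 1:

```python
# Orbits of the undamped pendulum are contours of constant energy (1.15).
# Plot them directly, including the separatrix at E = 2*m*g*L.
import numpy as np
import matplotlib.pyplot as plt

m = L = g = 1.0

theta, omega = np.meshgrid(np.linspace(-2*np.pi, 2*np.pi, 400),
                           np.linspace(-4.0, 4.0, 400))
E = 0.5 * m * L**2 * omega**2 + m * g * L * (1.0 - np.cos(theta))

plt.contour(theta, omega, E, levels=15, colors="gray")
plt.contour(theta, omega, E, levels=[2.0 * m * g * L], colors="red")  # separatrix
plt.xlabel("angle"); plt.ylabel("angular velocity")
plt.show()
```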


Example 1.4 A three-variable harmonic oscillator

As an example of an odd-dimensional state space, consider the three-dimensional flow

    \dot{x} = \omega_0 (y - z)
    \dot{y} = \omega_0 (z - x)
    \dot{z} = \omega_0 (x - y)    (1.21)

This mathematical model is equivalent to a three-variable linear oscillator with no dissipation. To solve this flow, assume a solution in the form of a complex exponential in time evolving with an angular frequency ω as x(t) = X e^{iωt}. Insert this expression into Eq. (1.21) to yield

    i\omega X = \omega_0 (Y - Z)
    i\omega Y = \omega_0 (Z - X)
    i\omega Z = \omega_0 (X - Y)    (1.22)

Solve the secular determinant for the angular frequency ω:

    \begin{vmatrix} i\omega & -\omega_0 & \omega_0 \\ \omega_0 & i\omega & -\omega_0 \\ -\omega_0 & \omega_0 & i\omega \end{vmatrix} = i\omega\left(3\omega_0^2 - \omega^2\right) = 0    (1.23)

The solutions for ω are

    \omega = 0, \ \pm\sqrt{3}\,\omega_0    (1.24)

The solutions, for any initial condition, are three sinusoids with identical amplitudes and frequencies, but with relative phases that differ by ±2π/3. A dynamical system like this is not equivalent to modeling a particle with inertia. It is a dynamical flow with a state-space dimension equal to three that might model the behavior of an economic system, or an ecological balance among three species, or a coupled set of neurons. In the study of modern dynamical systems, the emphasis moves away from particles acted on by forces and becomes more abstract, but also more general and versatile.

This example has what is called "neutral stability." This means that even a slight perturbation of this system may cause the oscillations to either decay to zero or to grow without bound. In Chapter 4, a stability analysis will identify this system as a "center." This oscillatory system is not a robust system, because a small change in parameter can cause a major change in its qualitative behavior. However, there are types of self-sustained oscillations that are robust, maintaining steady oscillatory behavior even as parameters, and even dissipation, change. These are autonomous oscillators and are invariably nonlinear oscillators.
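Since the flow (1.21) is linear, it can be written as q̇ = Mq, and the frequencies (1.24) appear as the imaginary parts of the eigenvalues of M. A quick numerical check (an illustrative sketch, assuming Python with NumPy):

```python
# The flow (1.21) is linear, q' = M q. Its eigenvalues are i*omega, so the
# frequencies of Eq. (1.24) appear as the imaginary parts of eig(M).
import numpy as np

w0 = 1.0
M = w0 * np.array([[ 0.0,  1.0, -1.0],
                   [-1.0,  0.0,  1.0],
                   [ 1.0, -1.0,  0.0]])

lam = np.linalg.eigvals(M)
print(np.sort_complex(lam))   # approximately 0 and +/- 1.732j = +/- i*sqrt(3)*w0
```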

1.2 Coordinate representation of dynamical systems

Although physics must be independent of any coordinate frame, the description of what we see does depend on which frame we are viewing it from. Therefore, it often will be convenient to view the same physics from different perspectives. For this reason, we need to find transformation laws that convert the description from one frame to another.

1.2.1 Coordinate notation and configuration space

The position of a free particle in three-dimensional (3D) space is specified by three values that conventionally can be assigned the Cartesian coordinate values x(t), y(t), and z(t). These coordinates define the instantaneous configuration of the system. If a second particle is added, then there are three additional coordinates, and the configuration space of the system is now six-dimensional. Rather than specifying three new coordinate names, such as u(t), v(t), or w(t), it is more convenient to use a notation that is extended easily to any number of dimensions. Index notation accomplishes this by having the index span across all the coordinate values. Vector components throughout this text will be denoted with a superscript. For instance, the position vector of a free particle in 3D Euclidean space is a 3-tuple of values

    \vec{x} = \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix}    (1.25)

Vectors are represented by column matrices (which is the meaning of the superscripts here³). It is important to remember that these superscripts are not "powers." A coordinate component raised to an nth power will be expressed as (x^a)^n. For N free particles, a single 3N-dimensional position vector defines the instantaneous configuration of the system. To abbreviate the coordinate description, one can use the notation

    x = \{x^a\} \qquad a = 1, \ldots, 3N    (1.26)

where the curly brackets denote the full set of coordinates. An even shorter, and more common, notation for a vector is simply

    x^a    (1.27)

where the full set a = 1, . . ., 3N is implied. Cases where only a single coordinate is intended will be clear from the context. The position coordinates develop in time as

    x^a(t)    (1.28)

which describes a trajectory of the system in its 3N-dimensional configuration space.

³ The superscript is a part of the notation for tensors and manifolds, in which vectors differ from another type of component called a covector that is denoted by a subscript. In Cartesian coordinates, a superscript denotes a column vector and a subscript denotes a row vector (see Appendix A.3).


1.2.2 Trajectories in 3D configuration space

A trajectory is a set of position coordinate values that vary continuously with a single parameter and define a smooth curve in the configuration space. For instance,

    x^a = x^a(t) \qquad \text{or} \qquad x^a = x^a(s)    (1.29)

where t is the time and s is the path length along the trajectory. Once the trajectory of a point has been defined within its configuration space, it is helpful to define properties of the trajectory, like the tangent to the curve and the normal. The velocity vector is tangent to the path. For a single particle in 3D, this would be

    \vec{v}(s) = \frac{ds}{dt} \begin{pmatrix} dx^1(s)/ds \\ dx^2(s)/ds \\ dx^3(s)/ds \end{pmatrix}    (1.30)

where the ds/dt term is simply the speed of the particle. In the simplified index notation, this is

    v^a(s) = \frac{ds}{dt} \frac{dx^a(s)}{ds} = \frac{ds}{dt} T^a    (1.31)

where T^a is a unit tangent vector in the direction of the velocity:

    T^a = \frac{dx^a}{ds}    (1.32)

Each point on the trajectory has an associated tangent vector. In addition to the tangent vector, another important vector property of a trajectory is the normal to the trajectory, defined by

    \frac{dT^a}{ds} = \kappa N^a    (1.33)

where N^a is the unit vector normal to the curve, and the curvature of the trajectory is

    \kappa = \frac{1}{R}    (1.34)

where R is the radius of curvature at the specified point on the trajectory. The parameterization of a trajectory in terms of its path length s is often a more "natural" way of describing the trajectory, especially under coordinate transformations. For instance, in special relativity, time is no longer an absolute parameter, because it is transformed in a manner similar to position. Then it is possible to define a path length interval ds² in space–time that remains invariant under Lorentz transformation (see Chapter 12) and hence can be used to specify the path through space–time.⁴

⁴ More generally, the invariant squared path length interval ds² is an essential part of the metric description of the geometry of space–time and other dynamical spaces, and is a key aspect of geodesic motion for bodies moving through those spaces (see Chapter 11).

Example 1.5 Parabolic trajectory in a gravitational field

This is a familiar problem that goes back to freshman physics. However, it is seen here in a slightly different light. Consider a particle in a constant gravitational field thrown with initial velocity v₀ in the x direction. The mathematical description of this motion is

    \frac{dx}{dt} = v_0
    \frac{dy}{dt} = -gt    (1.35)

with the solution, for initial conditions x = 0, y = 0, \dot{x} = v_0, \dot{y} = 0,

    x = v_0 t
    y = -\frac{1}{2} g t^2    (1.36)

giving the spatial trajectory

    y = -\frac{1}{2} \frac{g}{v_0^2} x^2    (1.37)

The speed of the particle is

    \left(\frac{ds}{dt}\right)^2 = \left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 = v_0^2 - 2gy = v_0^2 + g^2 t^2 = v^2    (1.38)

with the arc length element

    ds = \sqrt{v_0^2 + g^2 t^2}\, dt = \sqrt{1 + \frac{g^2 x^2}{v_0^4}}\, dx    (1.39)

and the tangent vector components

    T^1 = \frac{dx}{ds} = \frac{1}{\sqrt{1 + g^2 x^2 / v_0^4}}
    T^2 = \frac{dy}{ds} = \frac{-g x / v_0^2}{\sqrt{1 + g^2 x^2 / v_0^4}}    (1.40)

The trajectory and its tangent vector are described as functions of position—a geometric curve rather than an explicit function of time. While the results for this familiar problem may look unfamiliar, it is similar to the description of trajectories in special relativity, or to geodesic trajectories near gravitating bodies in space–time that will be treated in later chapters.
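The tangent components (1.40) can also be checked symbolically. A short sketch, assuming Python with SymPy:

```python
# Verify the tangent components (1.40) for the parabolic trajectory with SymPy.
import sympy as sp

x, v0, g = sp.symbols("x v_0 g", positive=True)
y = -sp.Rational(1, 2) * g * x**2 / v0**2      # trajectory (1.37)

ds_dx = sp.sqrt(1 + sp.diff(y, x)**2)          # arc length element (1.39)
T1 = sp.simplify(1 / ds_dx)                    # dx/ds
T2 = sp.simplify(sp.diff(y, x) / ds_dx)        # dy/ds = (dy/dx)(dx/ds)

print(T1)                                      # matches T^1 in (1.40)
print(T2)                                      # matches T^2 in (1.40)
print(sp.simplify(T1**2 + T2**2))              # 1, confirming a unit tangent
```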

1.2.3 Generalized coordinates

The configuration coordinates considered so far have been Cartesian coordinates (x, y, z). However, there are abstract coordinates, called generalized coordinates, that may be more easily employed to solve dynamical problems. Generalized coordinates arise in different ways. They may be dictated by the symmetry of the problem, like polar coordinates for circular motion. They may be defined by constraints on the physical system, like a particle constrained to move on a surface. Or they may be defined by coupling (functional dependence) between the coordinates of a multicomponent system, leading to generalized coordinates known as normal modes.

Generalized coordinates are often denoted by q's. They may be described in terms of other coordinates, for instance Cartesian coordinates, as

    q^a = q^a\left(x^b, t\right)
    x^b = x^b\left(q^a, t\right)    (1.41)

where the transformations associated with each index may have different functional forms and do not need to be linear functions of their arguments. The generalized coordinates do not need to have the dimension of length, and each can have different units. However, it is required that the transformation be invertible (one-to-one).

Generalized coordinates can be used to simplify the description of the motions of complex systems composed of large numbers of particles. If there are N particles, each with three coordinates, then the total dimension of the configuration space is 3N and there is a dense set of system trajectories that thread their way through this configuration space. However, often there are constraints on the physical system, such as the requirement that particles be constrained to reside on a physical surface such as the surface of a sphere. In this case, there are equations that connect two or more of the coordinates. If there are K equations of constraint, then the number of independent generalized coordinates is 3N − K and the motion occurs on a (3N − K)-dimensional hypersurface within the configuration space. This hypersurface is called a manifold. In principle, it is possible to find the 3N − K generalized coordinates that span this manifold, and the manifold becomes the new configuration space spanned by the 3N − K generalized coordinates.

Furthermore, some of the generalized coordinates may not participate in the dynamics. These are called ignorable coordinates (also known as cyclic coordinates); they arise owing to symmetries in the configuration space plus constraints, and are associated with conserved quantities. The dimensionality of the dynamical manifold on which the system trajectory resides is further reduced by each of these conserved quantities. Ultimately, after all the conserved quantities and all the constraints have been accounted for, the manifold that contains the system trajectory may have a dimension much smaller than the dimension of the original Cartesian configuration space.

Example 1.6 Bead sliding on a frictionless helical wire

Consider a bead sliding without friction on a helical wire with no gravity. The trajectory is defined in 3D Cartesian coordinates by

    x(t) = R \cos\omega t
    y(t) = R \sin\omega t
    z(t) = v_z t    (1.42)

parameterized by time t. There are two constraints,

    x^2 + y^2 = R^2
    z = a\theta    (1.43)

where a is the pitch of the helix and θ = ωt. These constraints reduce the 3D dynamics to 1D motion (3 − 2 = 1), and the 1D trajectory has a single generalized coordinate

    q(t) = t\sqrt{R^2\omega^2 + v_z^2}    (1.44)

which is also equal to the path length s. The speed of the particle is a constant and is

    \dot{s} = \sqrt{R^2\omega^2 + v_z^2}    (1.45)

1.3 Coordinate transformations

For a general coordinate transformation, the original Cartesian coordinates x, y, and z are related to coordinates q¹, q², and q³ by the functions

    x = x\left(q^1, q^2, q^3\right)
    y = y\left(q^1, q^2, q^3\right)
    z = z\left(q^1, q^2, q^3\right)    (1.46)

These equations can be inverted to yield

    q^1 = q^1(x, y, z)
    q^2 = q^2(x, y, z)
    q^3 = q^3(x, y, z)    (1.47)

which may be generalized coordinates that are chosen to simplify the equations of motion of a dynamical system.

1.3.1 Jacobian matrix The Jacobian matrix of the transformation is defined from the coordinate transformations (and inverse transformations) as ⎛

∂x ⎜ ∂q1 ⎜ ⎜ ∂y ⎜ J=⎜ 1 ⎜ ∂q ⎜ ⎝ ∂z ∂q1

∂x ∂q2 ∂y ∂q2 ∂z ∂q2

⎞ ∂x ∂q3 ⎟ ⎟ ∂y ⎟ ⎟ ⎟ ∂q3 ⎟ ⎟ ∂z ⎠



J −1

∂q3

∂q1 ⎜ ∂x ⎜ ⎜ 2 ⎜ ∂q =⎜ ⎜ ∂x ⎜ 3 ⎝ ∂q ∂x

∂q1 ∂y ∂q2 ∂y ∂q3 ∂y

⎞ ∂q1 ∂z ⎟ ⎟ ⎟ ∂q2 ⎟ ⎟ ∂z ⎟ ⎟ 3 ∂q ⎠

(1.48)

∂z

The determinant |J| is called the Jacobian. The Jacobian matrix requires two indices to define its individual elements, just as a vector required one index. Because the Jacobian matrix is generated using derivatives, an index notation that distinguishes between the differential vector in the numerator relative to the differential vector in the denominator is row index a Jba = ∂x b ∂q

(1.49)

column index where the superscript and subscript relate to xa and qb , respectively. The superscript is called a contravariant index, and the subscript is called a covariant index. One way to remember this nomenclature is that “co” goes “below.” The covariant index refers to the columns of the matrix, and the contravariant index refers to the rows. Column vectors have contravariant indices because they have multiple rows, while row vectors have covariant indices because they have multiple columns. Row vectors are also known as covariant vectors, or covectors. When transforming between Cartesian and generalized coordinates, an infinitesimal transformation is expressed as dxa =

 ∂xa  dqb = Jba dqb ∂qb b

b

(1.50)

where the Jacobian matrix J^a_b can depend on position. If the transformation is linear, then the Jacobian matrix is a constant. The operation of the Jacobian matrix on the generalized coordinates generates a new column vector dx^a. Rather than always expressing the summation explicitly, there is a common convention, known as the Einstein summation convention, in which the summation symbol is dropped and a repeated index—one above and one below—implies summation:

Einstein summation convention

\[ x^a = \sum_b \Lambda^a_b\, q^b \equiv \Lambda^a_b\, q^b \tag{1.51} \]

where the "surviving" index—a—is the non-repeated index. Note that Λ^a_b is a linear transformation. For example, in three dimensions, this is

\[ x^1 = \Lambda^1_1 q^1 + \Lambda^1_2 q^2 + \Lambda^1_3 q^3, \qquad x^2 = \Lambda^2_1 q^1 + \Lambda^2_2 q^2 + \Lambda^2_3 q^3, \qquad x^3 = \Lambda^3_1 q^1 + \Lambda^3_2 q^2 + \Lambda^3_3 q^3 \tag{1.52} \]

which is recognizable as the matrix multiplication

\[ \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} = \begin{pmatrix} \Lambda^1_1 & \Lambda^1_2 & \Lambda^1_3 \\ \Lambda^2_1 & \Lambda^2_2 & \Lambda^2_3 \\ \Lambda^3_1 & \Lambda^3_2 & \Lambda^3_3 \end{pmatrix} \begin{pmatrix} q^1 \\ q^2 \\ q^3 \end{pmatrix} \tag{1.53} \]

and is simplified to

\[ x^a = \Lambda^a_b\, q^b \tag{1.54} \]

with the Einstein repeated-index summation. The Einstein summation convention is also convenient when defining the inner (or "dot") product between two vectors. For instance,

\[ \vec{A}\cdot\vec{B} = A_x B_x + A_y B_y + A_z B_z = A^a B_a \tag{1.55} \]

and the implicit summation over the repeated indices produces a scalar quantity from the two vector quantities. In matrix notation, the inner product multiplies a column vector from the left by a row vector.

The Jacobian matrix and its uses are recurring themes in modern dynamics. Its uses go beyond simple coordinate transformations: it appears any time a nonlinear system is "linearized" around fixed points to perform stability analysis (Chapter 4). The eigenvalues of the Jacobian matrix define how rapidly nearby initial conditions diverge (called Lyapunov exponents—Chapters 4 and 9). The determinant of the Jacobian matrix is the coefficient relating area and volume changes (Chapter 11), and is used to prove which processes conserve volumes in phase space or Minkowski space (Chapters 3 and 12).
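A brief computational sketch (added here for illustration, not from the original text): the repeated-index contractions of Eqs. (1.54) and (1.55) map directly onto numpy's einsum notation; the matrix Λ and the vectors below are arbitrary assumed values.

```python
import numpy as np

Lam = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 3.0],
                [1.0, 0.0, 1.0]])    # a linear transformation Lambda^a_b
q = np.array([1.0, 2.0, 3.0])

x = np.einsum('ab,b->a', Lam, q)     # x^a = Lambda^a_b q^b   (Eq. 1.54)

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
dot = np.einsum('a,a->', A, B)       # A . B = A^a B_a        (Eq. 1.55)
print(x, dot)
```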

1.3.2 Metric spaces and basis vectors

In Cartesian coordinates, basis vectors are the familiar unit vectors (x̂, ŷ, ẑ) directed along the coordinate axes. In the case of generalized coordinates, basis vectors need to be defined through their relationship to the original Cartesian coordinates. In two dimensions, the differential transformation between coordinates x^a and q^b is expressed as

\[ dx = \frac{\partial x}{\partial q^1}\,dq^1 + \frac{\partial x}{\partial q^2}\,dq^2, \qquad dy = \frac{\partial y}{\partial q^1}\,dq^1 + \frac{\partial y}{\partial q^2}\,dq^2 \tag{1.56} \]

which is written in matrix form as

\[ \begin{pmatrix} dx \\ dy \end{pmatrix} = \begin{pmatrix} \dfrac{\partial x}{\partial q^1} & \dfrac{\partial x}{\partial q^2} \\[4pt] \dfrac{\partial y}{\partial q^1} & \dfrac{\partial y}{\partial q^2} \end{pmatrix} \begin{pmatrix} dq^1 \\ dq^2 \end{pmatrix} \tag{1.57} \]

The square matrix is the Jacobian matrix of the transformation. Transposing this expression gives

\[ \left(dx \;\; dy\right) = \left(dq^1 \;\; dq^2\right) \begin{pmatrix} \dfrac{\partial x}{\partial q^1} & \dfrac{\partial y}{\partial q^1} \\[4pt] \dfrac{\partial x}{\partial q^2} & \dfrac{\partial y}{\partial q^2} \end{pmatrix} = \left(dq^1 \;\; dq^2\right) \begin{pmatrix} \vec{e}_{q^1} \\ \vec{e}_{q^2} \end{pmatrix} \tag{1.58} \]

where the rows of the matrix have become basis vectors (covectors)

\[ \vec{e}_{q^1} = \left(\frac{\partial x}{\partial q^1} \;\; \frac{\partial y}{\partial q^1}\right) \qquad \vec{e}_{q^2} = \left(\frac{\partial x}{\partial q^2} \;\; \frac{\partial y}{\partial q^2}\right) \tag{1.59} \]

Basis vectors are used to express elements of the Cartesian vectors in terms of the curvilinear coordinates as

\[ d\vec{x} = \vec{e}_{q^a}\, dq^a \tag{1.60} \]

where the Einstein summation is implied. Vectors describe the properties of a physical system that cannot depend on the coordinate frame that is chosen: changing coordinate systems cannot change the physics of the system! Therefore, vectors such as d x⃗ are coordinate-free expressions. Vector components, on the other hand, do depend on the choice of coordinate system used to express them. Basis vectors are attached to the coordinate system, defining the elementary components along which a vector is decomposed. Basis vectors can vary depending on their location within the coordinate system, and are not necessarily unit vectors. A vector can be expressed in terms of basis vectors as

\[ \vec{A} = A^1\vec{e}_1 + A^2\vec{e}_2 + A^3\vec{e}_3 = A^b\vec{e}_b \tag{1.61} \]

which shows how vectors and covectors combine.

Example 1.7 Cylindrical coordinates

The coordinate transformations describing Cartesian coordinates in terms of cylindrical coordinates are

\[ x = r\cos\theta, \qquad y = r\sin\theta, \qquad z = z \tag{1.62} \]

The basis row vectors for the composition of Cartesian components in terms of the curvilinear coordinates are

\[ \vec{e}_r = \left(\frac{\partial x}{\partial r} \;\; \frac{\partial y}{\partial r} \;\; \frac{\partial z}{\partial r}\right) = \left(\cos\theta \;\; \sin\theta \;\; 0\right) \]
\[ \vec{e}_\theta = \left(\frac{\partial x}{\partial\theta} \;\; \frac{\partial y}{\partial\theta} \;\; \frac{\partial z}{\partial\theta}\right) = \left(-r\sin\theta \;\; r\cos\theta \;\; 0\right) \tag{1.63} \]
\[ \vec{e}_z = \left(\frac{\partial x}{\partial z} \;\; \frac{\partial y}{\partial z} \;\; \frac{\partial z}{\partial z}\right) = \left(0 \;\; 0 \;\; 1\right) \]

1.3.3 Metric tensor

The path length element is a quadratic form that is expressed in terms of generalized coordinates as

\[ ds^2 = g_{ab}\, dq^a dq^b \tag{1.64} \]

(Einstein summation implied), where g_ab is called the metric tensor. To find an explicit expression for the metric tensor, given a coordinate transformation, consider the differential transformation between coordinates x^a and q^b,

\[ dx^a = \frac{\partial x^a}{\partial q^b}\, dq^b \tag{1.65} \]

which is written out explicitly for three dimensions as

\[ dx = \frac{\partial x}{\partial q^1}dq^1 + \frac{\partial x}{\partial q^2}dq^2 + \frac{\partial x}{\partial q^3}dq^3, \quad dy = \frac{\partial y}{\partial q^1}dq^1 + \frac{\partial y}{\partial q^2}dq^2 + \frac{\partial y}{\partial q^3}dq^3, \quad dz = \frac{\partial z}{\partial q^1}dq^1 + \frac{\partial z}{\partial q^2}dq^2 + \frac{\partial z}{\partial q^3}dq^3 \tag{1.66} \]

The square of the first line is

\[ (dx)^2 = \frac{\partial x}{\partial q^1}\frac{\partial x}{\partial q^1}\left(dq^1\right)^2 + \frac{\partial x}{\partial q^2}\frac{\partial x}{\partial q^2}\left(dq^2\right)^2 + \frac{\partial x}{\partial q^3}\frac{\partial x}{\partial q^3}\left(dq^3\right)^2 + 2\frac{\partial x}{\partial q^1}\frac{\partial x}{\partial q^2}dq^1dq^2 + 2\frac{\partial x}{\partial q^1}\frac{\partial x}{\partial q^3}dq^1dq^3 + 2\frac{\partial x}{\partial q^2}\frac{\partial x}{\partial q^3}dq^2dq^3 \tag{1.67} \]

with similar expressions for dy and dz. These squares are added to give the squared line element

\[ ds^2 = dx^2 + dy^2 + dz^2 \tag{1.68} \]

which leads to the new expression

\[ ds^2 = g_{11}dq^1dq^1 + g_{12}dq^1dq^2 + g_{13}dq^1dq^3 + g_{21}dq^2dq^1 + g_{22}dq^2dq^2 + g_{23}dq^2dq^3 + g_{31}dq^3dq^1 + g_{32}dq^3dq^2 + g_{33}dq^3dq^3 = g_{ab}\,dq^adq^b \tag{1.69} \]

in terms of the metric tensor g_ab. Collecting the coefficients of each of the dq^a dq^b terms, and equating ds² to the right-hand side, yields

\[ g_{11} = \frac{\partial x}{\partial q^1}\frac{\partial x}{\partial q^1} + \frac{\partial y}{\partial q^1}\frac{\partial y}{\partial q^1} + \frac{\partial z}{\partial q^1}\frac{\partial z}{\partial q^1}, \qquad g_{12} = \frac{\partial x}{\partial q^1}\frac{\partial x}{\partial q^2} + \frac{\partial y}{\partial q^1}\frac{\partial y}{\partial q^2} + \frac{\partial z}{\partial q^1}\frac{\partial z}{\partial q^2}, \qquad \ldots \tag{1.70} \]

with the general expression

\[ g_{ab} = \frac{\partial x}{\partial q^a}\frac{\partial x}{\partial q^b} + \frac{\partial y}{\partial q^a}\frac{\partial y}{\partial q^b} + \frac{\partial z}{\partial q^a}\frac{\partial z}{\partial q^b} \tag{1.71} \]

Figure 1.4 Cylindrical and spherical coordinate systems with their line elements ds². For cylindrical coordinates (r, θ, z), x^a = (r cos θ, r sin θ, z) and ds² = dr² + r²dθ² + dz². For spherical coordinates (r, θ, φ), x^a = (r sin θ cos φ, r sin θ sin φ, r cos θ) and ds² = dr² + r²dθ² + r² sin²θ dφ².

for each element of the metric tensor. Alternatively, one can begin with

\[ ds^2 = d\vec{r}\cdot d\vec{r} = \left(\vec{e}_a\,dq^a\right)\cdot\left(\vec{e}_b\,dq^b\right) = \left(\vec{e}_a\cdot\vec{e}_b\right)dq^a dq^b \tag{1.72} \]

and the assignment can be made directly as

\[ g_{ab} = \vec{e}_a\cdot\vec{e}_b \tag{1.73} \]

where the metric tensor elements are the inner products of basis vectors. The metric tensor will be described extensively in Chapter 11 as a fundamental element used in the description of general geometries that go beyond ordinary Cartesian coordinates. The metric tensor is central to the description of space–time in special relativity and warped space–time in general relativity.
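As a computational sketch (added here, not from the original text), Eq. (1.73) can be evaluated symbolically for spherical coordinates; the result reproduces the spherical line element quoted in Fig. 1.4.

```python
import sympy as sp

# g_ab = e_a . e_b (Eq. 1.73) for spherical coordinates
r, th, ph = sp.symbols('r theta phi', positive=True)
X = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
               r*sp.sin(th)*sp.sin(ph),
               r*sp.cos(th)])
q = [r, th, ph]

e = [X.diff(qa) for qa in q]                     # basis vectors e_a = dX/dq^a
g = sp.Matrix(3, 3, lambda a, b: sp.simplify(e[a].dot(e[b])))
print(g)   # diag(1, r**2, r**2*sin(theta)**2)
```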

1.3.4 Two-dimensional rotations

A common class of coordinate transformation consists of rotations. The rotation matrix is an operator that acts on the components of a vector to express them in terms of a new, rotated set of coordinate axes. The components of a vector A^a are transformed as

\[ A^{b'} = R^{b'}_a A^a \tag{1.74} \]

where R^{b'}_a is the rotation matrix and A^a are the components of the vector as viewed in the unprimed frame. For a 2D coordinate frame O′ that has been rotated clockwise by an angle θ relative to the unprimed frame O, the rotation matrix that transforms the vector components (described with respect to the new frame) is the

2D rotation matrix

\[ R^{b'}_a = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \tag{1.75} \]

It is important to keep a clear distinction between basis vectors (like x̂ and ŷ) that point along the coordinate axes and the components of the vector A⃗ projected onto those axes. If the basis vectors (and hence the coordinate frame) are rotated clockwise, then the vector components, as seen from the transformed coordinate frame, appear to have rotated counterclockwise. This is shown in Fig. 1.5. Therefore, basis vectors are rotated through the inverse rotation matrix. The transformation of the basis vectors is

\[ \vec{e}_{b'} = R^a_{b'}\,\vec{e}_a \tag{1.76} \]

which is the inverse of Eq. (1.74):

\[ \left(R^{b'}_a\right)^{-1} = R^a_{b'} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \tag{1.77} \]

The inverse transformation R^a_{b'} has the primed index below, while in the forward transformation of vector components, R^{b'}_a, the primed index is above. A vector quantity is expressed in terms of the basis vectors as

\[ \vec{A} = A^a\vec{e}_a = A^{b'}\vec{e}_{b'} \tag{1.78} \]

Figure 1.5 Rotated coordinate axes through the transformation R^a_{b'}, with A^{b'} = R^{b'}_a A^a. The vector A⃗ remains the same—only the description of the vector (the components projected onto the axes) changes.

which is independent of the coordinate system—vectors are invariant quantities (elements of reality) that exist independently of their coordinate description. This invariance of vectors is described explicitly through the derivation

\[ \vec{A} = A^a\vec{e}_a = A^a\left(R^{b'}_a R^c_{b'}\right)\vec{e}_c = \left(R^{b'}_a A^a\right)\left(R^c_{b'}\vec{e}_c\right) = A^{b'}\vec{e}_{b'} \tag{1.79} \]

where the quantity R^{b'}_a R^c_{b'} is the identity matrix.
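A short numerical sketch (added for illustration, not from the original text) makes the invariance of Eq. (1.79) concrete: the components rotate one way, the basis vectors rotate the inverse way, and the reconstructed vector is unchanged. The angle and components below are assumed values.

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

A = np.array([1.0, 2.0])       # components A^a in the unprimed frame
E = np.eye(2)                  # row a holds the basis vector e_a

A_p = R @ A                    # A'^b = R^b_a A^a        (Eq. 1.74)
E_p = np.linalg.inv(R).T @ E   # basis vectors transform inversely (Eq. 1.76)

print(A @ E, A_p @ E_p)        # the same physical vector in both frames
```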

1.3.5 Three-dimensional rotations of coordinate frames

Three-dimensional rotations of coordinate axes can be constructed as successive 2D rotations applied around different axes. Three angles are required to express an arbitrary 3D rotation, and the general rotation matrix can be expressed as

\[ R^a_d = R^c_d(\psi)\, R^b_c(\theta)\, R^a_b(\phi) \tag{1.80} \]

where each rotation is applied around a different axis. When applied to a basis vector e⃗_a, this produces the successive transformations

\[ \vec{e}_b = R^a_b(\phi)\,\vec{e}_a, \qquad \vec{e}_c = R^b_c(\theta)\,\vec{e}_b, \qquad \vec{e}_d = R^c_d(\psi)\,\vec{e}_c \tag{1.81} \]

where the original primed frame is rotated into the double-primed frame, then the double-primed frame is rotated into the triple-primed frame, which is rotated into the unprimed frame, which is the resultant frame of the 3D rotation. Although there is no unique choice for the rotation axes, one conventional choice known as Euler angles uses a rotation by φ around the z′ axis, then by θ around the x″ axis, and finally by ψ around the z axis (Fig. 1.6). The rotation matrices for this choice are

\[ R_z(\phi) = Z^a_b = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{1.82} \]

\[ R_x(\theta) = X^b_c = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \tag{1.83} \]

\[ R_z(\psi) = Z^c_d = \begin{pmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{1.84} \]

Figure 1.6 Euler angles for a 3D rotation. The original primed axes are first rotated by φ around the z′-axis (fixed-frame z-axis), then by θ around the x″-axis (also known as the line of nodes), and finally by ψ around the z-axis (body z-axis).
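As a quick numerical sketch (added here, not from the original text), the three Euler rotations of Eqs. (1.82)–(1.84) compose into the full 3D rotation of Eq. (1.80); the angle values below are arbitrary assumptions.

```python
import numpy as np

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

phi, theta, psi = 0.4, 0.7, 1.1
R = Rz(psi) @ Rx(theta) @ Rz(phi)        # z-x-z Euler sequence
print(np.allclose(R @ R.T, np.eye(3)))   # rotations are orthogonal: True
```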

Euler angles are important for describing spinning or rotating systems in terms of angular velocities. The angular velocities in the body frame are

\[ \begin{pmatrix} \omega^1 \\ \omega^2 \\ \omega^3 \end{pmatrix} = \begin{pmatrix} \sin\theta\sin\psi & \cos\psi & 0 \\ \sin\theta\cos\psi & -\sin\psi & 0 \\ \cos\theta & 0 & 1 \end{pmatrix} \begin{pmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{pmatrix} \tag{1.85} \]

with individual components in the body frame being

\[ \omega^1 = \dot\phi\sin\theta\sin\psi + \dot\theta\cos\psi, \qquad \omega^2 = \dot\phi\sin\theta\cos\psi - \dot\theta\sin\psi, \qquad \omega^3 = \dot\phi\cos\theta + \dot\psi \tag{1.86} \]

These expressions will be useful when solving problems of rotating or tumbling objects. The Euler angles are a natural choice of 3D rotation to describe the complicated motions of spinning tops (see Section 1.5). However, the choice is not unique, and a different choice for the rotation matrix in 3D can be used when there is a single rotation axis and rotation angle θ. For a defined rotation axis given by a unit vector û^a and a rotation angle θ, the rotation matrix is the

3D rotation matrix

\[ R^a_b = I^a_b\cos\theta + S^a_b\sin\theta + T^a_b\left(1-\cos\theta\right) \tag{1.87} \]

where I^a_b = δ^a_b is the identity matrix, and the other matrices are

\[ S^a_b = \begin{pmatrix} 0 & -u_z & u_y \\ u_z & 0 & -u_x \\ -u_y & u_x & 0 \end{pmatrix} \qquad T^a_b = \begin{pmatrix} u_x u_x & u_x u_y & u_x u_z \\ u_y u_x & u_y u_y & u_y u_z \\ u_z u_x & u_z u_y & u_z u_z \end{pmatrix} \tag{1.88} \]

with u^a being the Cartesian components of the unit vector. The matrix T^a_b is the tensor product of the unit vector with itself, denoted in vector notation as û ⊗ û. The matrix S^a_b is a skew-symmetric matrix constructed from the unit vector and is denoted in vector notation as the operator û× for the cross product. The structure of the skew-symmetric matrix reflects the geometry of rotations in 3D space. It is this intrinsic property of 3-space that is the origin of physics equations containing cross products, such as the definitions of angular momentum and torque, as well as equations that depend on the moments of inertia, which are encountered later in this chapter.
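A minimal implementation sketch of the axis–angle formula (an addition, not from the original text): the function below builds R from Eq. (1.87) for an arbitrary axis and angle, and the quarter-turn test case is an assumed example.

```python
import numpy as np

def rotation_matrix(u, theta):
    """Axis-angle rotation, R = I cos(t) + S sin(t) + T (1 - cos(t))."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)             # unit rotation axis
    ux, uy, uz = u
    S = np.array([[0, -uz,  uy],
                  [uz,  0, -ux],
                  [-uy, ux,  0]])          # skew-symmetric u-cross operator
    T = np.outer(u, u)                     # tensor product u (x) u
    return np.eye(3)*np.cos(theta) + S*np.sin(theta) + T*(1 - np.cos(theta))

R = rotation_matrix([0, 0, 1], np.pi/2)          # quarter turn about z
print(np.round(R @ np.array([1, 0, 0]), 6))      # x-axis -> y-axis
```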

1.4 Uniformly rotating frames

A uniformly rotating frame is an important example of a non-inertial frame. In this case, the acceleration is not constant, which leads to fictitious forces such as the centrifugal and Coriolis forces. Consider two frames: one fixed and one rotating. These could be, for instance, a laboratory frame and a rotating body in the laboratory, as in Fig. 1.7. The fixed frame has primed coordinates, and the rotating frame has unprimed coordinates. The position vector in the fixed lab frame is

\[ \vec{r}\,' = x^{a'}\hat{e}_{a'} = x'\hat{e}_{x'} + y'\hat{e}_{y'} + z'\hat{e}_{z'} \tag{1.89} \]

The position vector in the rotating frame is

\[ \vec{r} = x^a\hat{e}_a = x\hat{e}_x + y\hat{e}_y + z\hat{e}_z \tag{1.90} \]

relative to the origin of the rotating frame. The primed position vector is then

\[ \vec{r}\,' = \vec{R} + \vec{r} \tag{1.91} \]

Figure 1.7 Coordinates for a rotating frame. A point P is located at r⃗′ in the primed lab (fixed) frame and at r⃗ in the unprimed body frame, whose origin sits at R⃗ relative to the lab frame.

Taking the time derivative gives

\[ \dot{\vec{r}}\,' = \dot{\vec{R}} + \frac{d}{dt}\left(x^a\hat{e}_a\right) = \dot{\vec{R}} + \dot{x}^a\hat{e}_a + x^a\dot{\hat{e}}_a = \dot{\vec{R}} + \dot{\vec{r}} + x^a\dot{\hat{e}}_a \tag{1.92} \]

(the Einstein summation convention on the repeated index is assumed), where the last term is a non-inertial term because the basis vectors of the rotating frame are changing in time. To obtain the time derivative of the basis vectors, consider an infinitesimal rotation transformation that operates on the basis vectors of the body frame,

\[ \hat{e}_{b'} = R^a_{b'}\,\hat{e}_a = \left(\delta^a_b + \frac{d}{dt}\right)\hat{e}_a\,dt \;\Rightarrow\; \hat{e}_{b'} = \hat{e}_b + \frac{d\hat{e}_b}{dt}\,dt \tag{1.93} \]

where the infinitesimal rotation matrix R^a_{b'} from Eq. (1.87) is expressed to lowest order in dθ = ω dt as

\[ R^a_b \approx \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} - \begin{pmatrix} 0 & \omega^z & -\omega^y \\ -\omega^z & 0 & \omega^x \\ \omega^y & -\omega^x & 0 \end{pmatrix} dt \tag{1.94} \]

where the ω^a are the Cartesian components of the angular velocity vector ω⃗ along the axes.⁵ Therefore, the time derivatives of the basis vectors of the body frame are

\[ \frac{d\hat{e}_x}{dt} = \omega^z\hat{e}_y - \omega^y\hat{e}_z, \qquad \frac{d\hat{e}_y}{dt} = -\omega^z\hat{e}_x + \omega^x\hat{e}_z, \qquad \frac{d\hat{e}_z}{dt} = \omega^y\hat{e}_x - \omega^x\hat{e}_y \tag{1.95} \]

⁵ Cartesian vector components can be denoted with subscripts, but they are not to be confused with the general covectors that are defined in Chapter 11.

The rotation of the basis vectors by the different components ω^a is shown in Fig. 1.8. Using Eq. (1.95) to express the non-inertial term in Eq. (1.92) gives

\[ x^a\dot{\hat{e}}_a = x\left(\omega^z\hat{e}_y - \omega^y\hat{e}_z\right) + y\left(-\omega^z\hat{e}_x + \omega^x\hat{e}_z\right) + z\left(\omega^y\hat{e}_x - \omega^x\hat{e}_y\right) \tag{1.96} \]

Figure 1.8 Angular velocities ω^x, ω^y, ω^z related to changes in the basis vectors ê_x, ê_y, ê_z.

Combining terms gives

\[ x^a\dot{\hat{e}}_a = \hat{e}_x\left(\omega^y z - \omega^z y\right) - \hat{e}_y\left(\omega^x z - \omega^z x\right) + \hat{e}_z\left(\omega^x y - \omega^y x\right) \tag{1.97} \]

where the result is recognized as the cross product

\[ x^a\dot{\hat{e}}_a = \vec{\omega}\times\vec{r} \tag{1.98} \]

Cross products occur routinely in the physics of rotating frames and rotating bodies, and are efficiently expressed in vector notation, which will be used through most of the remainder of the chapter instead of the index notation.⁶ By using Eq. (1.98) in Eq. (1.92), the fixed and rotating velocities are related by

\[ \vec{v}_f = \vec{V} + \vec{v}_r + \vec{\omega}\times\vec{r} \tag{1.99} \]

This result is general: for any vector Q⃗, the time rate of change in the fixed frame is related to the time rate of change in the rotating frame as

\[ \left(\frac{d\vec{Q}}{dt}\right)_{\!\rm fixed} = \left(\frac{d\vec{Q}}{dt}\right)_{\!\rm rotating} + \vec{\omega}\times\vec{Q} \tag{1.100} \]

As an example, consider the case Q⃗ = ω⃗:

\[ \left(\frac{d\vec{\omega}}{dt}\right)_{\!\rm fixed} = \left(\frac{d\vec{\omega}}{dt}\right)_{\!\rm rotating} + \vec{\omega}\times\vec{\omega} \tag{1.101} \]

where the last term is clearly zero. Therefore,

\[ \dot{\vec{\omega}}_f = \dot{\vec{\omega}}_r \tag{1.102} \]

⁶ Vector cross products arise from the wedge product A ∧ B of Hermann Grassmann (1844), introduced in The Theory of Linear Extension, a New Branch of Mathematics.

proving that angular accelerations are observed to be the same, just as linear accelerations are the same when transforming between inertial frames. This equality holds because the rotating frame is in constant angular motion. As a second, and more important, example, take the time derivative of Eq. (1.99). This is

\[ \left(\frac{d\vec{v}_f}{dt}\right)_{\!\rm fixed} = \left(\frac{d\vec{V}}{dt}\right)_{\!\rm fixed} + \left(\frac{d\vec{v}_r}{dt}\right)_{\!\rm fixed} + \dot{\vec{\omega}}\times\vec{r} + \vec{\omega}\times\left(\frac{d\vec{r}}{dt}\right)_{\!\rm fixed} \tag{1.103} \]

The second term on the right is expanded using Eq. (1.100) as

\[ \left(\frac{d\vec{v}_r}{dt}\right)_{\!\rm fixed} = \left(\frac{d\vec{v}_r}{dt}\right)_{\!\rm rotating} + \vec{\omega}\times\vec{v}_r \tag{1.104} \]

The fourth term in Eq. (1.103) becomes

\[ \vec{\omega}\times\left(\frac{d\vec{r}}{dt}\right)_{\!\rm fixed} = \vec{\omega}\times\left(\frac{d\vec{r}}{dt}\right)_{\!\rm rotating} + \vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) = \vec{\omega}\times\vec{v}_r + \vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) \tag{1.105} \]

The acceleration in the fixed frame is then

\[ \vec{a}_f = \ddot{\vec{R}} + \vec{a}_r + \dot{\vec{\omega}}\times\vec{r} + \vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) + 2\vec{\omega}\times\vec{v}_r \tag{1.106} \]

For a particle of mass m, Newton's second law is

\[ \vec{F}_f = m\ddot{\vec{R}} + m\vec{a}_r + m\dot{\vec{\omega}}\times\vec{r} + m\vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) + 2m\vec{\omega}\times\vec{v}_r \tag{1.107} \]

which is the force in the fixed frame. Therefore, in the rotating frame, there is an effective force

\[ \vec{F}_{\rm eff} = m\vec{a}_r = \vec{F}_f - m\ddot{\vec{R}} - m\dot{\vec{\omega}}\times\vec{r} - m\vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) - 2m\vec{\omega}\times\vec{v}_r \tag{1.108} \]

The first two terms on the right are the fixed-frame forces. The third term is the effect of the angular acceleration of the spinning frame. The fourth term is the centrifugal force, and the last term is the Coriolis force. The centrifugal and Coriolis forces are called fictitious forces: they are apparent only in the rotating frame, because the rotating frame is not inertial.

Figure 1.9 Effective force in a frame rotating with angular velocity ω: m a⃗_r = F⃗_f − mR̈⃗ − mω̇⃗ × r⃗ − mω⃗ × (ω⃗ × r⃗) − 2mω⃗ × v⃗_r, with the terms labeled, in order, as the external force, the center-of-mass acceleration, the angular acceleration, the centrifugal force, and the Coriolis force.

1.4.1 Motion relative to the Earth

For a particle subject to the Earth's gravitational field (Fig. 1.10), the effective force experienced by the particle is

\[ \vec{F}_{\rm eff} = \vec{F}_{\rm ext} + m\vec{g}_0 - m\ddot{\vec{R}} - m\dot{\vec{\omega}}\times\vec{r} - m\vec{\omega}\times\left(\vec{\omega}\times\vec{r}\right) - 2m\vec{\omega}\times\vec{v}_r \tag{1.109} \]

The fourth term is related to the deceleration of the Earth's rotation and is negligible. The centrifugal term is re-expressed as

\[ \ddot{\vec{R}} = \vec{\omega}\times\dot{\vec{R}} = \vec{\omega}\times\left(\vec{\omega}\times\vec{R}\right) \tag{1.110} \]

The effective force is then

\[ \vec{F}_{\rm eff} = \vec{F}_{\rm ext} + m\vec{g}_0 - m\vec{\omega}\times\left[\vec{\omega}\times\left(\vec{r}+\vec{R}\right)\right] - 2m\vec{\omega}\times\vec{v}_r \tag{1.111} \]

and redefining the effective gravitational acceleration through

\[ \vec{g}_{\rm eff} = \vec{g}_0 - \vec{\omega}\times\left[\vec{\omega}\times\left(\vec{r}+\vec{R}\right)\right] \tag{1.112} \]

gives

\[ \vec{F}_{\rm eff} = \vec{F}_{\rm ext} + m\vec{g}_{\rm eff} - 2m\vec{\omega}\times\vec{v}_r \tag{1.113} \]

This last equation adds the centrifugal contribution to the measured gravitational acceleration.

Figure 1.10 Geometry for motion relative to the Earth, showing the centrifugal correction −mω⃗ × (ω⃗ × R⃗) to g⃗_0 and the resulting variation of the effective gravitational acceleration between g_max and g_min.

The last term in Eq. (1.113), −2mω⃗ × v⃗_r, is the Coriolis force, which has important consequences for weather patterns on Earth, and hence a powerful effect on the Earth's climate (Fig. 1.11). It is also a sizeable effect for artillery projectiles. On the other hand, it plays a negligible role in the motion of whirlpools in bathtubs.

Figure 1.11 Dramatic example of cyclone motion in the Northern Hemisphere for a low-pressure center between Greenland and Iceland.

1.4.2 Foucault's pendulum

Foucault's pendulum is one of the most dramatic demonstrations of the rotation of the Earth. It also provides a direct measure of latitude. A simple pendulum (Fig. 1.12) swinging at a latitude λ will precess during the day as a consequence of the Coriolis force. The acceleration in the rotating Earth frame is

\[ \vec{a}_r = \vec{g} + \frac{1}{m}\vec{T} - 2\vec{\omega}\times\vec{v}_r \tag{1.114} \]

where the components of the tension and the angular velocity are

\[ T_x = -T\,\frac{x}{\ell}, \quad T_y = -T\,\frac{y}{\ell}, \quad T_z = -T\,\frac{z}{\ell} \qquad \omega_x = -\omega\cos\lambda, \quad \omega_y = 0, \quad \omega_z = \omega\sin\lambda \tag{1.115} \]

Figure 1.12 Geometry of a Foucault pendulum of mass m attached to a massless string of length ℓ supporting a tension T.

The cross product is

\[ \vec{\omega}\times\vec{v}_r = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix} \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} = \begin{pmatrix} 0 & -\omega\sin\lambda & 0 \\ \omega\sin\lambda & 0 & \omega\cos\lambda \\ 0 & -\omega\cos\lambda & 0 \end{pmatrix} \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{pmatrix} = \begin{pmatrix} -\omega\dot{y}\sin\lambda \\ \omega\dot{x}\sin\lambda \\ -\omega\dot{y}\cos\lambda \end{pmatrix} \tag{1.116} \]

and the acceleration in the rotating frame is

\[ a_r^x = \ddot{x} = -\frac{T}{m\ell}x + 2\omega\dot{y}\sin\lambda, \qquad a_r^y = \ddot{y} = -\frac{T}{m\ell}y - 2\omega\dot{x}\sin\lambda \tag{1.117} \]

leading to the coupled equations

\[ \ddot{x} + \omega_0^2 x = 2\omega_z\dot{y}, \qquad \ddot{y} + \omega_0^2 y = -2\omega_z\dot{x}, \qquad \text{where } \omega_0^2 = \frac{T}{m\ell} \approx \frac{g}{\ell} \tag{1.118} \]

The coupled equations are combined (see Appendix A.3) by multiplying the second equation by i and adding it to the first, to yield

\[ \left(\ddot{x} + i\ddot{y}\right) + \omega_0^2\left(x + iy\right) = -2i\omega_z\left(\dot{x} + i\dot{y}\right) \tag{1.119} \]

This is converted into a single second-order equation through substitution of the variable q = x + iy, giving

\[ \ddot{q} + 2i\omega_z\dot{q} + \omega_0^2 q = 0 \tag{1.120} \]

which is the equation of a harmonic oscillator with imaginary damping. The solution is

\[ q(t) = e^{-i\omega_z t}\left[A\,e^{\,i\sqrt{\omega_z^2+\omega_0^2}\,t} + B\,e^{-i\sqrt{\omega_z^2+\omega_0^2}\,t}\right] \tag{1.121} \]

For a typical pendulum, ω₀ ≫ ω_z, and the solution simplifies to

\[ q(t) = e^{-i\omega_z t}\left[A\,e^{i\omega_0 t} + B\,e^{-i\omega_0 t}\right] \tag{1.122} \]

where the term in brackets is the solution for a conventional pendulum that is not rotating. Expressing this solution as

\[ q(t) = e^{-i\omega_z t}\left[x_0 + iy_0\right] \tag{1.123} \]

it can also be written as (see Appendix A.2)

\[ \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} \cos\omega_z t & \sin\omega_z t \\ -\sin\omega_z t & \cos\omega_z t \end{pmatrix} \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \tag{1.124} \]

The matrix is the rotation matrix that rotates the pendulum through an angle

\[ \theta = \omega_z t \tag{1.125} \]

To find the precession time T_p of the rotation, use

\[ \omega_z T_p = 2\pi \qquad \frac{2\pi}{T_E}\sin\lambda\; T_p = 2\pi \tag{1.126} \]

or

\[ T_p = T_E/\sin\lambda \tag{1.127} \]

At a latitude of 45°, the precession period is 34 hours. At the North (or South) Pole, the precession time is equal to the Earth's rotation period. At the Equator, the precession time is infinite—the pendulum does not precess.
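A minimal numerical sketch (an addition, not from the original text) integrates the coupled equations (1.118) directly. The Earth rotation rate is exaggerated here (an assumed value) so that the precession of the swing plane, θ = ω_z t, is visible within one minute of simulated time.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2*np.pi        # pendulum frequency (rad/s), assumed
omega_z = 0.01          # exaggerated vertical rotation rate, assumed

def rhs(t, s):
    x, y, vx, vy = s
    return [vx, vy,
            -omega0**2*x + 2*omega_z*vy,
            -omega0**2*y - 2*omega_z*vx]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
# Predicted rotation of the swing plane after 60 s (Eq. 1.125), in degrees:
print(np.degrees(omega_z * 60))
```

Plotting sol.y[0] against sol.y[1] shows the swing plane slowly rotating by this angle, in the sense fixed by the sign of ω_z.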

1.5 Rigid-body motion

A rigid body is a multiparticle system with a very large number of constituents. In principle, the dynamical equations of the system would include the positions and velocities of each of the particles. However, in a rigid body, the particle coordinates are subject to a similarly large number of equations of constraint. Therefore, the dynamical equations are greatly simplified in practice, consisting of only a few degrees of freedom. Nonetheless, the dynamics of rotating bodies is a topic full of surprises and challenges, with a wide variety of phenomena that are not immediately intuitive.

1.5.1 Inertia tensor

The dynamics of rotating bodies follows closely from the principles of rotating frames. (The analysis in this section is carried out relative to the inertial fixed frame, which will be unprimed in this discussion.) Consider a collection of N individual point masses whose relative positions are constant in the body frame. The masses are m_α for α = 1, …, N, and their locations are r⃗_α. Because the body is rigid, there are no internal motions, and the velocities in the fixed frame are

\[ \vec{v}_\alpha = \vec{\omega}\times\vec{r}_\alpha \tag{1.128} \]

for angular velocity ω⃗, where the fixed frame is traveling with the center-of-mass motion of the rigid body. The only kinetic energy in this frame is rotational. By using the identity

\[ \left(\vec{A}\times\vec{B}\right)^2 = A^2B^2 - \left(\vec{A}\cdot\vec{B}\right)^2 \tag{1.129} \]

the kinetic energy is obtained as

\[ T_{\rm rot} = \frac{1}{2}\sum_\alpha m_\alpha\left[\omega^2 r_\alpha^2 - \left(\vec{\omega}\cdot\vec{r}_\alpha\right)^2\right] \tag{1.130} \]

The rotational energy can be expressed in terms of the components of the position vectors and the components of the angular velocity as⁷

\[ T_{\rm rot} = \frac{1}{2}\sum_\alpha m_\alpha\left[\left(\sum_a\left(\omega^a\right)^2\right)\left(\sum_b\left(x^b_\alpha\right)^2\right) - \left(\sum_a\omega^a x^a_\alpha\right)\left(\sum_b\omega^b x^b_\alpha\right)\right] \tag{1.131} \]

where the sum over α is over point masses, and the sums over a and b are over coordinates. This expression can be simplified using ω^b = Σ_a ω^a δ_ab to give

\[ T_{\rm rot} = \frac{1}{2}\sum_\alpha m_\alpha\sum_{a,b}\left[\omega^a\omega^b\delta_{ab}\sum_c\left(x^c_\alpha\right)^2 - \omega^a\omega^b x^a_\alpha x^b_\alpha\right] = \frac{1}{2}\sum_{a,b}\omega^a\omega^b\sum_\alpha m_\alpha\left[\delta_{ab}\sum_c\left(x^c_\alpha\right)^2 - x^a_\alpha x^b_\alpha\right] \tag{1.132} \]

This procedure has separated out a term summed over the masses α that makes the rotational kinetic energy particularly simple,

\[ T_{\rm rot} = \frac{1}{2}\sum_{ab} I_{ab}\,\omega^a\omega^b \tag{1.133} \]

where the expression

⁷ Summations are written out explicitly in this section without resorting to the Einstein summation convention.

Inertia tensor

\[ I_{ab} = \sum_\alpha m_\alpha\left[\delta_{ab}\left(\sum_c\left(x^c_\alpha\right)^2\right) - x^a_\alpha x^b_\alpha\right] \tag{1.134} \]

is the moment-of-inertia tensor (or inertia tensor for short) for a rigid body. In matrix form, this is

\[ I_{ab} = \begin{pmatrix} \sum_\alpha m_\alpha\left(y_\alpha^2 + z_\alpha^2\right) & -\sum_\alpha m_\alpha x_\alpha y_\alpha & -\sum_\alpha m_\alpha x_\alpha z_\alpha \\ -\sum_\alpha m_\alpha y_\alpha x_\alpha & \sum_\alpha m_\alpha\left(x_\alpha^2 + z_\alpha^2\right) & -\sum_\alpha m_\alpha y_\alpha z_\alpha \\ -\sum_\alpha m_\alpha z_\alpha x_\alpha & -\sum_\alpha m_\alpha z_\alpha y_\alpha & \sum_\alpha m_\alpha\left(x_\alpha^2 + y_\alpha^2\right) \end{pmatrix} \tag{1.135} \]

The moment-of-inertia tensor is a rank-two symmetric tensor that is quadratic in the coordinate positions of the constituent masses. There are only six independent tensor components that capture all possible mass configurations. If the rigid body has symmetries, then the number of independent components is reduced. For instance, if the rigid body has spherical symmetry, then all off-diagonal terms are zero, and the diagonal terms are all equal. In the limit of a continuous distribution of mass, the summations over the individual particles are replaced by integrals. The moment-of-inertia tensor for a continuous mass distribution is given by

\[ I_{ab} = \int_V \rho(\vec{r})\left[\delta_{ab}\left(\sum_c\left(x^c\right)^2\right) - x^a x^b\right]d^3x \tag{1.136} \]

which is expanded as

\[ I_{ab} = \begin{pmatrix} \int_V\left(y^2+z^2\right)\rho(\vec{r})\,dV & -\int_V xy\,\rho(\vec{r})\,dV & -\int_V xz\,\rho(\vec{r})\,dV \\ -\int_V xy\,\rho(\vec{r})\,dV & \int_V\left(x^2+z^2\right)\rho(\vec{r})\,dV & -\int_V yz\,\rho(\vec{r})\,dV \\ -\int_V xz\,\rho(\vec{r})\,dV & -\int_V yz\,\rho(\vec{r})\,dV & \int_V\left(x^2+y^2\right)\rho(\vec{r})\,dV \end{pmatrix} \tag{1.137} \]

The integral is carried out over the limits of the mass distribution. Several examples of moments of inertia of symmetric solids with constant mass densities are shown in Table 1.1.

Table 1.1 Inertia tensors (principal axes through the center of mass)

Solid sphere of radius r:
\[ I = \begin{pmatrix} 2/5 & 0 & 0 \\ 0 & 2/5 & 0 \\ 0 & 0 & 2/5 \end{pmatrix} mr^2 \]

Spherical shell of radius r:
\[ I = \begin{pmatrix} 2/3 & 0 & 0 \\ 0 & 2/3 & 0 \\ 0 & 0 & 2/3 \end{pmatrix} mr^2 \]

Rectangular prism of width w, depth d, and height h:
\[ I = \begin{pmatrix} d^2+h^2 & 0 & 0 \\ 0 & w^2+h^2 & 0 \\ 0 & 0 & w^2+d^2 \end{pmatrix} \frac{m}{12} \]

Right solid cylinder of radius r and height h:
\[ I = \begin{pmatrix} \left(3r^2+h^2\right)/12 & 0 & 0 \\ 0 & \left(3r^2+h^2\right)/12 & 0 \\ 0 & 0 & r^2/2 \end{pmatrix} m \]
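As a numerical cross-check (an added sketch, not from the original text), Eq. (1.134) can be evaluated by Monte Carlo sampling of a solid sphere; the result should approach the Table 1.1 entry I = (2/5) m r². The sample size and seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(400000, 3))
pts = pts[np.sum(pts**2, axis=1) <= 1.0]    # uniform points in the unit sphere
m, N = 1.0, len(pts)

r2 = np.sum(pts**2, axis=1)
# I_ab = sum (m/N) [delta_ab r^2 - x_a x_b]   (Eq. 1.134)
I = (m / N) * (np.eye(3) * r2.sum() - pts.T @ pts)
print(np.round(I, 3))    # approximately diag(0.4, 0.4, 0.4) = (2/5) m r^2
```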

Example 1.8 Moment-of-inertia integral for a cube with its origin at a vertex

The moments will be calculated for a cube of side length b with the origin at one of the vertices and with the axes oriented along the edges of the cube. The integral for I₁₁ is

\[ I_{11} = \rho\int_0^b dz\int_0^b\left(y^2+z^2\right)dy\int_0^b dx = \rho b\int_0^b\left(\frac{b^3}{3} + bz^2\right)dz = \rho b\left(\frac{b^4}{3} + \frac{b^4}{3}\right) = \frac{2}{3}\rho b^5 = \frac{2}{3}Mb^2 \tag{1.138} \]

The integral for I₁₂ is

\[ I_{12} = -\rho\int_0^b dz\int_0^b y\,dy\int_0^b x\,dx = -\rho b\left(\frac{b^2}{2}\right)^2 = -\frac{1}{4}\rho b^5 = -\frac{1}{4}Mb^2 \tag{1.139} \]

By the high symmetry of the cube,

\[ I_{11} = I_{22} = I_{33}, \qquad I_{12} = I_{13} = I_{23} \tag{1.140} \]

The inertia tensor is

\[ I = \begin{pmatrix} 2/3 & -1/4 & -1/4 \\ -1/4 & 2/3 & -1/4 \\ -1/4 & -1/4 & 2/3 \end{pmatrix} Mb^2 \tag{1.141} \]

1.5.2 Parallel axis theorem

The inertia tensor around any rotation axis can be expressed in terms of the inertia tensor evaluated for rotation through the center of mass when the rotation center is displaced from the center of mass by a vector a⃗, if the new rotation axis is parallel to the original rotation axis. The new inertia tensor is defined as

\[ J_{ab} = \sum_\alpha m_\alpha\left[\delta_{ab}\left(\sum_k X_{\alpha,k}^2\right) - X_{\alpha,a}X_{\alpha,b}\right] \tag{1.142} \]

where the new position vectors have components

\[ X_{\alpha,r} = a_r + x_{\alpha,r} \tag{1.143} \]

The expression for the new inertia tensor is then

\[ J_{rs} = \sum_\alpha m_\alpha\left[\delta_{rs}\sum_k x_{\alpha,k}^2 - x_{\alpha,r}x_{\alpha,s}\right] + \sum_\alpha m_\alpha\left[\delta_{rs}\sum_k a_k^2 - a_r a_s\right] + \sum_\alpha m_\alpha\left[2\delta_{rs}\sum_k x_{\alpha,k}a_k - a_r x_{\alpha,s} - a_s x_{\alpha,r}\right] \tag{1.144} \]

But, by definition, the first quantity is

\[ I_{rs} = \sum_\alpha m_\alpha\left[\delta_{rs}\sum_k x_{\alpha,k}^2 - x_{\alpha,r}x_{\alpha,s}\right] \tag{1.145} \]

and, because O is at the center of mass,

\[ \sum_\alpha m_\alpha x_{\alpha,k} = 0 \tag{1.146} \]

the last quantity vanishes. Therefore,

\[ J_{rs} = I_{rs} + \sum_\alpha m_\alpha\left[\delta_{rs}\sum_k a_k^2 - a_r a_s\right] \tag{1.147} \]

We also have

\[ \sum_\alpha m_\alpha = M, \qquad \sum_k a_k^2 = a^2 \tag{1.148} \]

This leads to the

Parallel axis theorem

\[ J_{rs} = I_{rs} + M\left(a^2\delta_{rs} - a_r a_s\right) \]

where J_rs is the inertia tensor around the new rotation axis, I_rs is the original inertia tensor around the center of mass, and a⃗ is the displacement vector between the center of mass and the new axis.
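A brief implementation sketch (added here, not from the original text): the theorem is a one-line tensor operation, and displacing the center-of-mass tensor of a cube, I = (1/6)Mb² δ_rs, to a vertex at a⃗ = (b/2, b/2, b/2) reproduces Eq. (1.141).

```python
import numpy as np

def parallel_axis(I_cm, M, a):
    """J_rs = I_rs + M (a^2 delta_rs - a_r a_s)."""
    a = np.asarray(a, dtype=float)
    return I_cm + M * (np.dot(a, a) * np.eye(3) - np.outer(a, a))

M, b = 1.0, 1.0
I_cm = (M * b**2 / 6) * np.eye(3)            # cube about its center of mass
print(parallel_axis(I_cm, M, [b/2, b/2, b/2]))
# -> diagonal 2/3, off-diagonal -1/4 (in units of M b^2), matching Eq. (1.141)
```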

Example 1.9 Moment of inertia for a solid triangle

Calculate the I₃₃ moment of inertia for an equilateral triangle of side L and thickness h with constant density ρ₀, with the z axis perpendicular to the plane of the triangle and through the center of mass. The area is A = √3 L²/4 and the mass is M = ρ₀√3 L²h/4. This problem has high symmetry, reducing the number of independent integrals to perform, and we can use the parallel axis theorem.

To start, consider three line elements that create a triangle of side L, as shown in Fig. 1.13. Each of the three line elements is displaced from the origin by a distance L/(2√3). The moment of inertia of a line mass dM through its center is dM L²/12. When displaced by a distance L/(2√3), by the parallel axis theorem, the moment is dM L²/12 + dM L²/12 = dM L²/6. There are three lines, all equivalent, so the moment of inertia of the triangular hoop is dI₃₃ = dM L²/2. The mass of a line element is the product of the density with the thickness h, the width dy, and the length L − 2√3 y, giving

\[ dM = h\rho_0\left(L - 2\sqrt{3}\,y\right)dy \tag{1.149} \]

When integrating along the y axis, the side length of the triangle decreases from L to zero as y approaches the center of mass. Therefore, the integral for I₃₃ is

\[ I_{33} = \int\frac{1}{2}\left(L - 2\sqrt{3}\,y\right)^2 dM = \frac{h\rho_0}{2}\int_0^{L/2\sqrt{3}}\left(L - 2\sqrt{3}\,y\right)^3 dy = \frac{h\rho_0}{4\sqrt{3}}\int_0^L x^3\,dx = \frac{h\rho_0}{4\sqrt{3}}\,\frac{L^4}{4} = \frac{ML^2}{12} \tag{1.150} \]

A similar approach can be used to obtain the moments around the x axis and the y axis (see Homework problem 31). The parallel axis theorem considerably simplifies this problem compared with a direct application of Eq. (1.136).

Figure 1.13 Equilateral triangle of side L. The bisectors intersect at the center of mass, a distance L/(2√3) from each side.

1.5.3 Angular momentum

The total angular momentum of N discrete particles is obtained by summing the angular momentum of the individual particles as

\[ \vec{L} = \sum_{\alpha=1}^N \vec{r}_\alpha\times\vec{p}_\alpha \tag{1.151} \]

The individual momenta are

\[ \vec{p}_\alpha = m_\alpha\vec{v}_\alpha = m_\alpha\,\vec{\omega}\times\vec{r}_\alpha \tag{1.152} \]

and hence the angular momentum is

\[ \vec{L} = \sum_\alpha \vec{r}_\alpha\times\left(m_\alpha\vec{\omega}\times\vec{r}_\alpha\right) = \sum_\alpha m_\alpha\left[r_\alpha^2\,\vec{\omega} - \vec{r}_\alpha\left(\vec{r}_\alpha\cdot\vec{\omega}\right)\right] \tag{1.153} \]

By again using the identity (1.129), the angular momentum can be broken into terms that are the components of the position vectors as well as the components of the angular velocity, just as was done for the moment of inertia in Eq. (1.131). The angular momentum components are

\[ L^a = \sum_\alpha m_\alpha\left[\omega^a\sum_c\left(x^c_\alpha\right)^2 - x^a_\alpha\sum_b x^b_\alpha\omega^b\right] = \sum_b\left[\sum_\alpha m_\alpha\left(\delta_{ab}\sum_c\left(x^c_\alpha\right)^2 - x^a_\alpha x^b_\alpha\right)\right]\omega^b \tag{1.154} \]

which is re-expressed as

\[ L^a = \sum_b I_{ab}\,\omega^b \tag{1.155} \]

by using the definition in Eq. (1.134) of the moment-of-inertia tensor. This is a tensor relation that connects the angular velocity components to the angular momentum. The rotational kinetic energy, Eq. (1.133), is expressed in terms of the angular momentum as

\[ T_{\rm rot} = \frac{1}{2}\vec{L}\cdot\vec{\omega} \tag{1.156} \]

where L⃗ and ω⃗ are not necessarily parallel, because of off-diagonal elements in the inertia tensor.

Figure 1.14 Nonparallel angular momentum and angular velocity for a rigid rotating dumbbell. The rotation axis is at an angle to the rigid (massless) rod connecting the masses m₁ and m₂.

A simple example in which L⃗ and ω⃗ are not parallel is a tilted rigid dumbbell, shown in Fig. 1.14. Two masses are attached to the ends of a rigid (but massless) rod that is tilted relative to the rotation axis. The angular velocity ω⃗ is directed along the central rotation axis, but the angular momentum L⃗ is directed perpendicular to the rod connecting the two masses. As the dumbbell spins, the angular velocity remains constant, but the angular momentum continuously changes direction. This steady change of the angular momentum requires a steady external torque.

The solids in Table 1.1 all have high symmetry, the coordinate axes are directed along the primary symmetry axes, and the coordinate origin is located at the center of mass. Under these conditions, all of the inertia tensors are diagonal, with zero off-diagonal elements; the coordinate axes are called principal axes, and the diagonal elements of the inertia tensor are called the principal moments of inertia. In general, the coordinate axes might not be chosen along the main symmetry axes, and the origin might not be at the center of mass. In this case, all elements of the tensor may be nonzero, as for the cube in Example 1.8. Principal moments and principal axes can always be regained by finding the eigenvalues and eigenvectors of the inertia matrix. The goal is to find body coordinate axes such that the off-diagonal components of the inertia tensor vanish. These axes are the principal axes of the body. In this case, the angular velocity and the angular momentum are parallel, leading to

\[ L^1 = I\omega^1 = I_{11}\omega^1 + I_{12}\omega^2 + I_{13}\omega^3 \]
\[ L^2 = I\omega^2 = I_{21}\omega^1 + I_{22}\omega^2 + I_{23}\omega^3 \tag{1.157} \]
\[ L^3 = I\omega^3 = I_{31}\omega^1 + I_{32}\omega^2 + I_{33}\omega^3 \]

Subtracting the left-hand sides gives

\[ \left(I_{11}-I\right)\omega^1 + I_{12}\omega^2 + I_{13}\omega^3 = 0 \]
\[ I_{21}\omega^1 + \left(I_{22}-I\right)\omega^2 + I_{23}\omega^3 = 0 \tag{1.158} \]
\[ I_{31}\omega^1 + I_{32}\omega^2 + \left(I_{33}-I\right)\omega^3 = 0 \]

These simultaneous equations have a nontrivial solution when

\[ \det\left|I_{ab} - I\,\delta_{ab}\right| = 0 \tag{1.159} \]

which is an eigenvalue problem. Solving for the eigenvalues and the associated eigenvectors yields the principal moments and principal axes of the body.

Example 1.10 Principal moments and axes for a cube with its origin at a vertex

The eigenvalue problem for the cube of Example 1.8 is

\[ \begin{vmatrix} 2/3 - I & -1/4 & -1/4 \\ -1/4 & 2/3 - I & -1/4 \\ -1/4 & -1/4 & 2/3 - I \end{vmatrix} = 0 \tag{1.160} \]

The principal moments for the diagonal inertia tensor are

\[ I = \begin{pmatrix} 1/6 & 0 & 0 \\ 0 & 11/12 & 0 \\ 0 & 0 & 11/12 \end{pmatrix} Mb^2 \tag{1.161} \]

A set of eigenvectors is

\[ \vec{v}_1 = \begin{pmatrix} 0.5774 \\ 0.5774 \\ 0.5774 \end{pmatrix} \quad \vec{v}_2 = \begin{pmatrix} 0.1243 \\ -0.7610 \\ 0.6367 \end{pmatrix} \quad \vec{v}_3 = \begin{pmatrix} 0.8070 \\ -0.2958 \\ -0.5111 \end{pmatrix} \tag{1.162} \]

The first eigenvector is along the body diagonal, which is a symmetry axis through the origin. There is a degeneracy between the last two eigenvalues, so the corresponding eigenvectors merely need to be mutually perpendicular to each other and to the body diagonal.
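The same diagonalization can be done numerically (an added sketch, not from the original text), using the symmetric-matrix eigensolver on the tensor of Eq. (1.141) in units of Mb².

```python
import numpy as np

I = np.array([[ 2/3, -1/4, -1/4],
              [-1/4,  2/3, -1/4],
              [-1/4, -1/4,  2/3]])

moments, axes = np.linalg.eigh(I)   # eigenvalues in ascending order
print(moments)                       # [1/6, 11/12, 11/12]
print(axes[:, 0])                    # body diagonal, (1,1,1)/sqrt(3) up to sign
```

Because the last two eigenvalues are degenerate, the solver returns one arbitrary orthonormal pair in the plane perpendicular to the body diagonal, consistent with the remark above.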

1.5.4 Euler's equations

The time rate of change of the angular momentum in Eq. (1.151) is

\[ \dot{\vec{L}} = \sum_\alpha\left(\dot{\vec{r}}_\alpha\times\vec{p}_\alpha + \vec{r}_\alpha\times\dot{\vec{p}}_\alpha\right) = \sum_\alpha m_\alpha\left(\dot{\vec{r}}_\alpha\times\dot{\vec{r}}_\alpha\right) + \sum_\alpha\vec{r}_\alpha\times\dot{\vec{p}}_\alpha = \sum_\alpha\vec{r}_\alpha\times\vec{F}_\alpha \tag{1.163} \]

where r⃗̇_α × r⃗̇_α is identically zero. The force F⃗_α acting on the αth particle consists of the force f⃗_βα of particle β acting on particle α plus any external force F⃗_α^ext acting on α. The time derivative of the angular momentum is then

\[ \dot{\vec{L}} = \sum_\alpha\vec{r}_\alpha\times\vec{F}_\alpha = \sum_\alpha\vec{r}_\alpha\times\vec{F}^{\rm ext}_\alpha + \sum_\alpha\sum_{\beta\neq\alpha}\vec{r}_\alpha\times\vec{f}_{\alpha\beta} = \sum_\alpha\vec{r}_\alpha\times\vec{F}^{\rm ext}_\alpha = \vec{N}^{\rm ext} \tag{1.164} \]

where the double sum vanishes because of action–reaction pairs, and the time derivative of the angular momentum is equal to the externally applied torque,

\[ \left(\frac{d\vec{L}}{dt}\right)_{\!\rm fixed} = \vec{N}^{\rm ext} \tag{1.165} \]

In the body frame, from Eq. (1.100), this is

\[ \left(\frac{d\vec{L}}{dt}\right)_{\!\rm body} + \vec{\omega}\times\vec{L} = \vec{N} \tag{1.166} \]

Choosing the body axes along the principal axes of the inertia tensor (defined by the directions of the eigenvectors of the inertia tensor) simplifies the analysis. The component along the body z axis is

\[ \dot{L}_3 + \omega_1 L_2 - \omega_2 L_1 = N_3 \tag{1.167} \]

and

\[ L_a = I_a\omega_a \tag{1.168} \]

to give

\[ I_3\dot{\omega}_3 - \left(I_1 - I_2\right)\omega_1\omega_2 = N_3 \tag{1.169} \]

On taking permutations of the three directions, this yields

Euler's equations

\[ I_1\dot{\omega}_1 - \left(I_2 - I_3\right)\omega_2\omega_3 = N_1 \]
\[ I_2\dot{\omega}_2 - \left(I_3 - I_1\right)\omega_3\omega_1 = N_2 \tag{1.170} \]
\[ I_3\dot{\omega}_3 - \left(I_1 - I_2\right)\omega_1\omega_2 = N_3 \]

These are Euler's equations relating the time rate of change of the angular velocity to the applied torque in the body frame.

Example 1.11 Torque on a dumbbell

The torque required to rotate the dumbbell in Fig. 1.14 is obtained by finding the components of Euler's equations. The angular velocity is described in the body frame as

\[ \omega_1 = 0, \qquad \omega_2 = \omega\sin\alpha, \qquad \omega_3 = \omega\cos\alpha \tag{1.171} \]

and the principal moments are

\[ I_1 = \left(m_1 + m_2\right)d^2, \qquad I_2 = \left(m_1 + m_2\right)d^2, \qquad I_3 = 0 \tag{1.172} \]

giving the angular momentum

\[ L_1 = I_1\omega_1 = 0, \qquad L_2 = I_2\omega_2 = \left(m_1 + m_2\right)d^2\omega\sin\alpha, \qquad L_3 = I_3\omega_3 = 0 \tag{1.173} \]

Using these in Euler's equations leads to the externally applied torque required to sustain a constant angular velocity:

\[ N_1 = -\left(m_1 + m_2\right)d^2\omega^2\sin\alpha\cos\alpha, \qquad N_2 = 0, \qquad N_3 = 0 \tag{1.174} \]

The torque is directed perpendicular to the angular momentum, and causes the angular momentum to precess around the rotation axis as a function of time, as shown in Fig. 1.14.

1.5.5 Force-free top

Euler's equations also apply to a force-free top (no gravity and no external constraints), but now the angular velocity is no longer constant. If the top is symmetric, with I₁ = I₂, and no torques act, then Euler's equations are

\[ \left(I_1 - I_3\right)\omega_2\omega_3 - I_1\dot{\omega}_1 = 0, \qquad \left(I_3 - I_1\right)\omega_3\omega_1 - I_1\dot{\omega}_2 = 0, \qquad I_3\dot{\omega}_3 = 0 \tag{1.175} \]

and the angular velocity along the body axis is a constant: ω₃ = const. The equations for the time rate of change of the other two components are

\[ \dot{\omega}_1 = -\left(\frac{I_3 - I_1}{I_1}\right)\omega_3\,\omega_2, \qquad \dot{\omega}_2 = \left(\frac{I_3 - I_1}{I_1}\right)\omega_3\,\omega_1 \tag{1.176} \]

On substituting

\[ \Omega = \frac{I_3 - I_1}{I_1}\,\omega_3 \tag{1.177} \]

these become

\[ \dot{\omega}_1 + \Omega\,\omega_2 = 0, \qquad \dot{\omega}_2 - \Omega\,\omega_1 = 0 \tag{1.178} \]

This set of coupled equations is combined in the expression

\[ \dot{\eta} - i\Omega\eta = 0 \tag{1.179} \]

with the substitution

\[ \eta = \omega_1 + i\omega_2 \tag{1.180} \]

This gives the solution

\[ \omega_1(t) = \omega_\perp\sin\Omega t, \qquad \omega_2(t) = \omega_\perp\cos\Omega t \tag{1.181} \]

where ω_⊥ is the projection of the angular velocity vector on the body x₁–x₂ plane. Because both ω_⊥ and ω₃ are constants, the magnitude of the angular velocity is a constant:

\[ \omega^2 = \omega_1^2 + \omega_2^2 + \omega_3^2 = \omega_\perp^2 + \omega_3^2 \tag{1.182} \]

Figure 1.15 Precession of a force-free prolate (I₁ > I₃) top visualized as the body cone rolling around the surface of the fixed cone. The angular velocity ω⃗ precesses around the body symmetry axis x₃ with a rate Ω. The vectors L⃗, ω⃗, and x₃ are coplanar.

The solution to the dynamics of a force-free spheroid is illustrated in Fig. 1.15 for a prolate top (an object shaped like an American football, with I₁ > I₃). The symmetry axis of the rigid body is the x₃ axis. Equation (1.181) describes the precession of the angular velocity vector around the x₃ body symmetry axis with a precession angular speed of Ω. The angular momentum L⃗ is also a constant of the motion, is steady in space, and points along the fixed x₃′ axis. The body x₃ axis precesses around the fixed x₃′ axis with angular speed φ̇, where ω_⊥ = φ̇ sin θ. Another constant of the motion is the kinetic energy, which is defined by the projection of the angular velocity onto the angular momentum vector as

\[ T_{\rm rot} = \frac{1}{2}\,\vec{\omega}\cdot\vec{L} \tag{1.183} \]

which means that the projection of ω⃗ onto L⃗ is a constant. Therefore, ω⃗ also precesses around L⃗ and the fixed x₃′ axis. The result is visualized in Fig. 1.15 for a prolate top (I₁ > I₃) as a body cone that rolls without slipping on a fixed cone. The vectors L⃗, ω⃗, and the x₃ axis are coplanar, with the angular velocity vector oriented along the contact line between the cones. For an oblate symmetric top (a coin-shaped object with I₁ < I₃), the fixed cone is inside the body cone, but in both cases it is the body cone that rolls, and the space cone is fixed because L⃗ is constant. The kinetic energy of the spinning top is

\[ T_{\rm rot} = \frac{1}{2}I_1\omega_\perp^2 + \frac{1}{2}I_3\omega_3^2 = \frac{1}{2}I_1\dot{\phi}^2\sin^2\theta + \frac{1}{2}I_3\left(\dot{\psi} + \dot{\phi}\cos\theta\right)^2 = \frac{L^2}{2}\left(\frac{\sin^2\theta}{I_1} + \frac{\cos^2\theta}{I_3}\right) \tag{1.184} \]

expressed in the body-frame coordinates. The value of this expression is the same as the rotational kinetic energy calculated in the fixed frame.

For a general body, with I₁ ≠ I₂ ≠ I₃, Euler's equations yield three eigenfrequency solutions that may be complex-valued. Real eigenfrequencies correspond to stable configurations, while imaginary eigenfrequencies correspond to unstable configurations. For I₁ < I₂ < I₃, motion around the x₂ axis is unstable. This is why a book tossed into the air, originally rotating about its x₂ axis, will begin to tumble, no matter how carefully you launch it.
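The intermediate-axis instability is easy to see numerically (an added sketch, not from the original text): integrate the torque-free Euler equations, Eq. (1.170) with N = 0, for an assumed set of principal moments with I₁ < I₂ < I₃, starting nearly aligned with the x₂ axis.

```python
import numpy as np
from scipy.integrate import solve_ivp

I1, I2, I3 = 1.0, 2.0, 3.0      # assumed principal moments, I1 < I2 < I3

def euler(t, w):
    w1, w2, w3 = w
    return [(I2 - I3)/I1 * w2*w3,
            (I3 - I1)/I2 * w3*w1,
            (I1 - I2)/I3 * w1*w2]

# Spin nearly about the intermediate (x2) axis, with a tiny perturbation:
sol = solve_ivp(euler, (0, 50), [1e-3, 1.0, 1e-3],
                rtol=1e-9, dense_output=True)
w = sol.sol(np.linspace(0, 50, 6))
print(np.round(w.T, 3))   # omega_2 periodically reverses sign: tumbling
```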

1.5.6 Top with fixed tip (precession with no wobble)

The force-free top has no external torque acting on it—it is a spinning or tumbling object in free fall. On the other hand, a top with a fixed tip (Fig. 1.16) has a point on its rotation axis fixed in the fixed frame, and it experiences an external torque caused by gravity acting on the center of mass. Euler's equations can be used to solve for the precession rate of a rapidly spinning gyroscope precessing uniformly (no wobble). The moment I₃ is along the z axis of the axle. Because Euler's equations are expressed in the body frame, the torque is time-varying with angular frequency ω₃. The gyroscope precession rate is different from the force-free precession rate Ω. The Euler equations for the top are

\[ I_1\dot{\omega}_1 - \left(I_1 - I_3\right)\omega_2\omega_3 = N_1, \qquad I_1\dot{\omega}_2 - \left(I_3 - I_1\right)\omega_3\omega_1 = N_2, \qquad I_3\dot{\omega}_3 = 0 \tag{1.185} \]

Figure 1.16 A spinning top with a fixed tip, precessing without wobble, described by the Euler angles φ, θ, ψ and their rates.

Using the substitution

\[ \Omega = \frac{I_3 - I_1}{I_1}\,\omega_3 \tag{1.186} \]

the x, y equations are rewritten as

\[ \dot{\omega}_1 + \Omega\,\omega_2 = N_1/I_1, \qquad \dot{\omega}_2 - \Omega\,\omega_1 = N_2/I_1 \tag{1.187} \]

These equations combine to give

\[ \dot{\omega} - i\Omega\,\omega = N/I_1 \tag{1.188} \]

where N = mgb sin θ. Because the equations are in the body coordinates, the torque is

\[ N = N_0\exp\left(-i\omega_3 t\right) \tag{1.189} \]

Plugging in an assumed solution

\[ \omega = \omega_\perp\exp\left(-i\omega_3 t\right) \tag{1.190} \]

gives

\[ \left(-i\omega_3 - i\Omega\right)\omega_\perp = mgb\sin\theta/I_1 \tag{1.191} \]

where ω_⊥ is

\[ \omega_\perp = \frac{mgb\sin\theta/I_1}{\omega_3 + \Omega} = \frac{mgb\sin\theta/I_1}{\omega_3 + \dfrac{I_3 - I_1}{I_1}\,\omega_3} = \frac{mgb\sin\theta}{I_3\,\omega_3} \tag{1.192} \]

The frequency ω_⊥ is perpendicular to ω₃ in the body frame. However, in the fixed frame the precession axis is vertical. Therefore, the precession rate Ω_precess = φ̇ is obtained from the projection ω_⊥ = Ω_precess sin θ:

\[ \Omega_{\rm precess} = \frac{\omega_\perp}{\sin\theta} = \frac{mgb}{I_3\,\omega_3} \tag{1.193} \]

There is another frame that greatly simplifies this problem. If we take a frame that is rotating at the (originally unknown) precession frequency, then the gyroscope appears stationary in that frame:

\[ \left(\frac{d\vec{L}}{dt}\right)_{\!\rm fixed} = \vec{N}, \qquad \left(\frac{d\vec{L}}{dt}\right)_{\!\rm fixed} = \left(\frac{d\vec{L}}{dt}\right)_{\!\rm prec} + \vec{\Omega}_{\rm prec}\times\vec{L} = \vec{\Omega}_{\rm prec}\times\vec{L} \tag{1.194} \]

or

\[ mgb\sin\theta = \Omega_{\rm prec}\,I_3\,\omega_3\sin\theta \tag{1.195} \]

Solving for the precession frequency yields

\[ \Omega_{\rm prec} = \frac{mgb\sin\theta}{I_3\,\omega_3\sin\theta} = \frac{mgb}{I_3\,\omega_3} \tag{1.196} \]

which is the same result as was obtained using Euler's equations.

1.6 Summary

This chapter emphasized the importance of a geometric approach to dynamics. The central objects of interest are trajectories of a dynamical system through multidimensional spaces composed of generalized coordinates. Trajectories through state space uniquely define the dynamical properties of a system, and flow fields and flow lines become the fields and field lines of mathematical flows. Trajectories through configuration space are often parameterized by the path length element ds, which becomes important in problems in special relativity when time transforms between frames, and in general relativity when space–time is warped by mass and energy. The metric tensor captures the relationship of distance when comparing between two coordinate systems that are defined by basis vectors. Coordinate transformations and Jacobian matrices, especially associated with coordinate rotations, are among the central tools that are used throughout this text. Transformation to non-inertial frames introduces fictitious forces, like the Coriolis force, that are experienced by an observer in the non-inertial frame. Uniformly rotating frames are the reference frames for rigid-body motion and contribute the concepts of moment of inertia and angular momentum. The parallel axis theorem helps calculate the moment-of-inertia tensor, which is used in the Euler equations to determine the motion of spinning tops.

1.7 Bibliography

G. Arfken, Mathematical Methods for Physicists, 3rd ed. (Academic Press, 1985).
T. Frankel, The Geometry of Physics: An Introduction (Cambridge University Press, 2003).
A. E. Jackson, Perspectives of Nonlinear Dynamics (Cambridge University Press, 1989).
R. C. Hilborn, Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers (Oxford University Press, 2000).
D. D. Holm, Geometric Mechanics: Part I: Dynamics and Symmetry (World Scientific/Imperial College Press, 2008).
B. F. Schutz, A First Course in General Relativity (Cambridge University Press, 1985).
S. H. Strogatz, Nonlinear Dynamics and Chaos (Westview Press, 1994).
R. Talman, Geometric Mechanics: Toward a Unification of Classical Physics (Wiley, 2007).
S. T. Thornton and J. B. Marion, Classical Dynamics of Particles and Systems, 5th ed. (Brooks/Cole, 2003).

1.8 Homework problems

1. Damped harmonic oscillator: Derive the solutions for a strongly damped harmonic oscillator in the case when β > ω₀. This is called an "overdamped" oscillator. What is the characteristic relaxation time?

2. Damped harmonic oscillator: Derive the response function X(ω) for a driven damped harmonic oscillator mẍ + γẋ + kx = Fe^{iωt}. (a) Find the real and imaginary parts of X(ω). (b) Solve for the motion if the right-hand side equals F cos ωt, and the initial conditions are x(0) = 0 and ẋ(0) = 0.

3. Arc length: Calculate the total path length as a function of time of a general parabolic trajectory with arbitrary initial velocity (v_x, v_y) in a constant gravitational field. Evaluate (expand) the short-time and long-time behavior and interpret these as simple limits.

4. Path length and tangent vector: A pendulum is constructed as a mass on a rigid massless rod. The mass is dropped from rest at a height h above the center of rotation. Find the path length as a function of time. Find the unit tangent vector as a function of time.

5. Generalized coordinate: Consider a bead on a helical wire subject to an x-dependent potential V(x). How many generalized coordinates are required to describe the physics of the bead sliding without friction on the wire? Write the equations of motion for the bead. The helix can be described as x(z) = R cos(ω*z), y(z) = R sin(ω*z), where the parameter ω* is related to the pitch of the helical wire.

6. Tangent and normal of a geometric spiral: Find T̂ and N̂ of a geometric spiral r = r₀e^{bθ} and z = aθ.

7. Curvature: Find the curvature κ of a general parabolic trajectory of a mass m in gravity g.

8. Autonomous oscillator: Find an appropriate coordinate transformation and transform Eq. (1.13) into polar coordinates.

9. Undamped pendulum: Derive the lowest-order correction in θ₀ to the period of oscillation of an undamped pendulum beyond the small-angle approximation.

10. Terminal velocity: Solve for the terminal velocity of a massive particle falling under gravity with mz̈ + γż = mg. (a) Choose the initial condition ż(0) = 0. (b) Choose the initial condition ż(0) = 2mg/γ. (c) Draw the state space and the flow lines for this physical system.

11. Path element: Derive, using Eq. (1.72), the expressions for ds² for cylindrical and spherical curvilinear coordinates.

12. Inverse basis vectors: Beginning with the inverse of the cylindrical coordinate transformation in Eq. (1.62), derive expressions for the basis vectors e⃗_x and e⃗_y.

13. Jacobian matrix: Consider the following coordinate system, sometimes used in electrostatics and hydrodynamics: xy = u, x² − y² = v, z = z. Find the Jacobian matrix and the Jacobian of the transformation.

14. Metric tensor: Complete the expressions for g_ab in Eq. (1.70).

15. Metric tensor: Use Eq. (1.71) to find g_ab for cylindrical and spherical coordinates.

16. Metric tensor: Use Eq. (1.73) to find g_ab for cylindrical and spherical coordinates.

17. Deflection of light by gravity: Do a back-of-the-envelope calculation (within an order of magnitude) of the deflection of a light ray passing just above the surface of the Sun using the elevator analogy. Express your answer in terms of a deflection angle. How does it compare with the correct answer, Δφ = 4MG/c²R?

18. Coriolis force: If a projectile is fired due east from a point on the surface of the Earth at a northern latitude λ with a velocity of magnitude v₀ and at an angle of inclination to the horizontal of α, show that the lateral deflection when the projectile strikes the Earth is d = (4v₀³/g²) ω sin λ sin²α cos α, where ω is the rotation frequency of the Earth.

19. Falling mass in a rotating frame: Consider a particle falling in the Earth's gravitational field. Take g to be defined at ground level and use the zeroth-order result for the time of fall, T = √(2h/g). Perform a calculation in second approximation (retain terms in ω²) and calculate the southerly deflection. There are three components to consider: (a) the Coriolis force to second order (C₁); (b) the variation of the centrifugal force with height (C₂); (c) the variation of the gravitational force with height (C₃). Show that each of these components gives a result equal to (h²/g) C_i ω² sin λ cos λ, where C₁ = 2/3, C₂ = 5/6, and C₃ = 5/2. The total southerly deflection is therefore (4h²/g) ω² sin λ cos λ.

20. Coriolis effect: A warship fires a projectile due south at latitude 50°S. If the shells are fired at 37° elevation with a speed of 800 m/s, by how much do the shells miss their target, and in what direction? Ignore air resistance.

21. Effective gravity: Calculate the effective gravitational field vector g⃗ at the Earth's surface at the poles and at the Equator. Take account of the difference between the equatorial (6378 km) and polar (6357 km) radii, as well as the centrifugal force. How well does the result agree with the difference calculated from g = 9.780356[1 + 0.0052885 sin²λ − 0.0000059 sin²(2λ)] m/s², where λ is the latitude?

22. Projectile: A projectile is shot vertically with speed v from the Equator, rises to a maximum height (without air resistance), and falls back to Earth. What is the displacement of the projectile to first-order approximation?

23. Bathtub: Estimate the Coriolis deflection for water in a bathtub whirlpool going down the drain. Then estimate the Coriolis deflection for a hurricane. What can you conclude about bathtub whirlpools?

24. Inertia tensor: Derive the inertia tensor for a cube of side L with the origin at the center. Then derive it again with the origin at a vertex.

25. Stability: Using Euler's equations, show that for an object with I₁ < I₂ < I₃, there are two stable rotation axes and one unstable axis.

26. Jet stream: High pressure at the Equator drives a northward air speed of nearly 100 miles per hour in the upper atmosphere. This leads to the jet stream. Estimate the air speed of the jet stream as a function of latitude.

27. Earth's geometry: What is the ratio of the centrifugal acceleration to the gravitational acceleration at the Equator? Work the numbers.

28. Bucket of water: A bucket of water is set spinning about its symmetry axis. Determine the shape of the water in the bucket.

29. Moments of inertia: Calculate the moment of inertia of a sphere of mass M and radius R about its center by explicitly performing all the integrals.

30. Moments of inertia: Calculate the moments of inertia of an ellipsoid of mass M and semi-axes a > b > c.

31. Parallel axis theorem: Calculate the moments of inertia I₁₁ and I₂₂ for the equilateral triangle in Example 1.9. Use the parallel axis theorem where appropriate.

32. Billiard ball: At what height should a billiard ball be struck (with a force parallel to the table) such that it rolls without slipping?

33. Oscillating disk: A disk of radius r < R rolls without slipping inside the parabola y = x²/2R. Find the oscillation period for small-amplitude oscillations.

34. Physical pendulum: A physical pendulum has an extended body allowed to pivot around a single point or line. Consider a metal bar of mass M with dimensions W × W × L, where L ≫ W. It is allowed to pivot around a line centered at a distance a from the center of mass along the long dimension. What is the frequency of small oscillations as a function of a?

2 Lagrangian Mechanics

[Chapter-opening figure: four trial paths connecting a start point q1 to an end point q2, with the Lagrangian path of least action singled out among them.]

Newton's Second Law, which relates forces to changes in momenta, is the cornerstone upon which classical physics was built. It provides the causative explanation of why things move. Cause (force) and effect (change in motion) are analyzed using the mathematical tools of dynamics. However, motion obeys deeper principles that are more general and more powerful. These deeper principles have a strongly geometric character, and a central concept in these geometric principles is the extremum (usually a minimum) principle, in which a trajectory minimizes or maximizes a quantity calculated upon it. For instance, geodesics minimize the metric distance between points. In this chapter, that connection is made more formal by introducing the principle of least action and its extension to the more general Hamilton's Principle that is the foundation of Lagrangian and Hamiltonian dynamics.

2.1 Calculus of variations

The trajectory of a particle in configuration space x^a(t) is a single-parameter geometric curve that connects two points: the initial position and the final position. The simplest question one can ask about such a trajectory is: what property of that curve distinguishes it from all neighboring ones?


2.1.1 Variational principle

Consider a trajectory y(x) and a function f(x, y, y′), where the prime denotes the derivative with respect to x. We seek a trajectory such that the integral of f(x, y, y′), defined as

\[ I = \int_{x_1}^{x_2} f\left(x, y, y'\right)dx \tag{2.1} \]

between the points x₁ and x₂, is independent of small changes in the shape of the trajectory.¹ This condition of independence is called stationarity and is expressed as δI = 0. The calculus of variations is used to find the stationarity conditions by defining the related integral

\[ I = \int_{x_1}^{x_2} f\left(x, Y, Y'\right)dx \tag{2.2} \]

where the general (twice-differentiable) functions Y(x) are constructed as

\[ Y(x) = y(x) + \varepsilon\,\eta(x) \tag{2.3} \]

and ε = 0 yields the sought-after stationary behavior. The function η(x) is arbitrary, except that it vanishes at the endpoints, as shown in Fig. 2.1. The integral becomes

\[ I(\varepsilon) = \int_{x_1}^{x_2} f\left(x, y + \varepsilon\eta, y' + \varepsilon\eta'\right)dx \tag{2.4} \]

Differentiating Eq. (2.4) with respect to ε yields

\[ I'(\varepsilon) = \int_{x_1}^{x_2}\left[\frac{\partial f}{\partial Y}\,\eta(x) + \frac{\partial f}{\partial Y'}\,\eta'(x)\right]dx \tag{2.5} \]

and integrating the second term of Eq. (2.5) by parts yields

\[ I'(\varepsilon) = \int_{x_1}^{x_2}\eta(x)\left[\frac{\partial f}{\partial Y} - \frac{d}{dx}\left(\frac{\partial f}{\partial Y'}\right)\right]dx + \left.\frac{\partial f}{\partial Y'}\,\eta(x)\right|_{x_1}^{x_2} \tag{2.6} \]

The boundary term vanishes because η(x₁) = 0 and η(x₂) = 0, to give

\[ I'(\varepsilon) = \int_{x_1}^{x_2}\eta(x)\left[\frac{\partial f}{\partial Y} - \frac{d}{dx}\left(\frac{\partial f}{\partial Y'}\right)\right]dx \tag{2.7} \]

¹ For instance, the function f(x, y, y′) can be a dynamical function of the coordinates y and velocities y′ and the variable x.

Figure 2.1 A function defined on the trajectory y(x) between two points x₁ and x₂ in configuration space is stationary with respect to variation by an arbitrary function η(x), comparing y(x) with y(x) + εη(x).

Stationarity requires the derivative I′(ε) to vanish at ε = 0, and therefore

\[ \int_{x_1}^{x_2}\eta(x)\left[\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right)\right]dx = 0 \tag{2.8} \]

Because η(x) is an arbitrary function, the only way the integral can be guaranteed to be zero is when the following holds:

Euler equation

\[ \frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0 \tag{2.9} \]

Equation (2.9) is known as the Euler equation and is the fundamental lemma of the calculus of variations. It takes on a central importance in dynamics when the function f is an appropriately defined dynamical scalar quantity that depends on positions and velocities, f(t, x, ẋ). The Euler equation generalizes to multiple variables as

\[ \frac{\partial f}{\partial q^a} - \frac{d}{dx}\left(\frac{\partial f}{\partial q'^a}\right) = 0 \tag{2.10} \]

for a = 1, 2, 3, …, N over all of the generalized coordinates.

2.1.2 Stationary action Just as momentum plays a central role in Newtonian dynamics, one can define a dynamical function for use in the Euler equations. For instance, the integral of momentum along the path of a particle is called the action.2 The simplest form of the action integral, due to Euler, integrates momentum over a path between fixed endpoints: x2 x2 S = m v· d s= m v ds x1

x1

(2.11)

2 The first to recognize the importance of the quantity now known as “action” was Maupertuis in 1744, followed by enhancements by Euler (1744), Lagrange (1760), Hamilton (1834) and Jacobi (1842).

56 Introduction to Modern Dynamics If the selected path is the dynamical trajectory, then the velocity and the path element are colinear. When the system conserves energy, this integral is S=



2m (E − U) ds

(2.12)

where E is a constant, U depends on the generalized coordinates qᵃ, and the path element is ds² = g_ab dqᵃ dqᵇ. The path that makes Eq. (2.12) stationary is the dynamical path of the system. In most cases the stationary integral is a minimum, in which case the principle is known as Jacobi's Principle of Least Action. The action integral of Euler can be rewritten as an integral over time,

$$ S = \int_{x_1}^{x_2} m v\, ds = \int_{t_1}^{t_2} m v\,\frac{ds}{dt}\, dt \qquad (2.13) $$

which is also the integral

$$ S = \int_{t_1}^{t_2} 2T\, dt \qquad (2.14) $$

over (twice) the kinetic energy as it varies as a function of time between the start and end points of a trajectory. If the system conserves energy, there is the constraint

$$ \int (T + U)\, dt = E\,\Delta t \qquad (2.15) $$

where Δt is the duration of the trajectory. To incorporate a constraint into the stationary integral, one uses the method of Lagrange multipliers. Any function whose variation automatically vanishes can be added to the general function f(t, qᵃ, q̇ᵃ) with an undetermined multiplier λ. Because E is a constant when energy is conserved, its variation (in terms of variational calculus) is zero, and we can define an augmented action integral as

$$ S = \int_{t_1}^{t_2} \left[ 2T + \lambda\,(T + U) \right] dt \qquad (2.16) $$

where λ is an undetermined multiplier. Applying the Euler equations to f(t, qᵃ, q̇ᵃ) = 2T + λ(T + U) leads to

$$ m\,(2 + \lambda)\,\ddot{q}^a = \lambda\,\frac{\partial U}{\partial q^a} \qquad (2.17) $$

This equation is equivalent to Newton's Second Law when λ = −1, which determines the multiplier. The augmented action integral is then

$$ S = \int_{t_1}^{t_2} (T - U)\, dt = \int_{t_1}^{t_2} L\, dt \qquad (2.18) $$

where L = T − U, the difference between the kinetic energy T and the potential energy U, is defined as the Lagrangian function. Hamilton's Principle states that the system trajectory is the path that makes the augmented action integral S stationary:

$$ \delta S = \delta \int L\, dt = 0 \qquad (2.19) $$

The assumption of conservation of energy was used merely to make the connection to Newton's Second Law and to establish the value of the multiplier λ. Once that has been accomplished, Hamilton's Principle is more fundamental and applies even to cases when energy is not conserved. The Euler–Lagrange equations follow directly from Hamilton's Principle through the Euler equation (2.9):

Euler–Lagrange equations:
$$ \frac{\partial L}{\partial q^a} - \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) = 0 \qquad (2.20) $$

The Euler–Lagrange equations (Fig. 2.2) are the fundamental mathematical tool used to solve dynamical problems. When the appropriate Lagrangian is constructed with appropriate generalized coordinates qᵃ, the equations of motion describing the system follow immediately from Eq. (2.20). The Euler–Lagrange equations apply to a system that can be described in terms of a potential energy, even if the potential energy is changing in time. They also apply to cases in which constraint forces do no work on the system.

2.2 Lagrangian applications

Applications of Lagrangians to dynamical problems abound: the approach applies to any situation for which a Lagrangian can be defined. In this section, several common applications are described.

Figure 2.2 The Euler–Lagrange equations, derived from Hamilton's Principle δS = 0 with S = ∫L dt, are applied to the difference between the kinetic and the potential energies—the Lagrangian L(t; qᵃ, q̇ᵃ) = T − U, a function of the generalized coordinates and generalized velocities.

2.2.1 Mass on a spring

Consider a mass m hanging on a spring of spring constant k in a gravitational field. The kinetic and potential energies are

$$ T = \frac{1}{2} m \dot{z}^2 \qquad V = \frac{1}{2} k (z - z_0)^2 + m g z \qquad (2.21) $$

The Lagrangian is

$$ L = \frac{1}{2} m \dot{z}^2 - \frac{1}{2} k (z - z_0)^2 - m g z \qquad (2.22) $$

with derivatives

$$ \frac{\partial L}{\partial \dot{z}} = m\dot{z} \qquad \frac{\partial L}{\partial z} = -k\,(z - z_0) - m g \qquad (2.23) $$

leading to the equation of motion

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{z}} \right) - \frac{\partial L}{\partial z} = m\ddot{z} + k\,(z - z_0) + m g = 0 \qquad (2.24) $$

The equilibrium position is

$$ k\,(z^* - z_0) = -m g \qquad z^* = z_0 - \frac{m g}{k} \qquad (2.25) $$

The equation of motion in the new variable z′ = z − z* is

$$ m\ddot{z}' = -k z' \qquad \ddot{z}' = -\omega_0^2\, z' \qquad (2.26) $$

which is the classic linear harmonic oscillator.
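The Euler–Lagrange machinery of Eq. (2.20) can be automated with a computer algebra system. A minimal sketch using SymPy (an assumed tool, not one prescribed by the text) reproduces the equation of motion (2.24) directly from the Lagrangian (2.22):

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k, g, z0 = sp.symbols('m k g z_0', positive=True)
z = sp.Function('z')(t)

# Lagrangian of Eq. (2.22): L = T - V
L = sp.Rational(1, 2) * m * z.diff(t)**2 \
    - sp.Rational(1, 2) * k * (z - z0)**2 - m * g * z

# Euler-Lagrange equation: equivalent to Eq. (2.24) up to an overall sign
eom, = euler_equations(L, [z], [t])
print(sp.simplify(eom))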

2.2.2 Simple pendulum

Consider a mass m on a massless taut string of length L in a gravitational field. The kinetic and potential energies are

$$ T = \frac{1}{2} m L^2 \dot{\theta}^2 \qquad V = m g L\,(1 - \cos\theta) \qquad (2.27) $$

The Lagrangian is

$$ L = \frac{1}{2} m L^2 \dot{\theta}^2 - m g L\,(1 - \cos\theta) \qquad (2.28) $$

with derivatives

$$ \frac{\partial L}{\partial \dot{\theta}} = m L^2 \dot{\theta} \qquad \frac{\partial L}{\partial \theta} = -m g L \sin\theta \qquad (2.29) $$

The Euler–Lagrange equation is

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = m L^2 \ddot{\theta} + m g L \sin\theta = 0 \qquad (2.30) $$

leading to the equation of motion

$$ \ddot{\theta} = -\frac{g}{L}\sin\theta \approx -\omega_0^2\,\theta \qquad (2.31) $$

which is again the classic linear harmonic oscillator for small angles.
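The quality of the small-angle approximation in Eq. (2.31) can be checked by integrating the full equation of motion numerically. A minimal sketch (SciPy assumed; the 1 m length and 0.5 rad amplitude are illustrative values):

import numpy as np
from scipy.integrate import solve_ivp

g_over_L = 9.81 / 1.0          # omega_0^2 for a 1 m pendulum (illustrative)
theta0 = 0.5                   # initial angle in radians (illustrative)

def pendulum(t, y):
    theta, omega = y
    return [omega, -g_over_L * np.sin(theta)]   # full Eq. (2.31)

t_eval = np.linspace(0, 10, 1000)
sol = solve_ivp(pendulum, (0, 10), [theta0, 0.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-9)

# Small-angle (harmonic) solution for comparison
sho = theta0 * np.cos(np.sqrt(g_over_L) * t_eval)
print("max deviation from SHO:", np.max(np.abs(sol.y[0] - sho)))

At this amplitude the two solutions slowly drift apart in phase, since the true period of the pendulum lengthens with amplitude.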

2.2.3 Symmetric top with fixed tip

The symmetric top with a fixed tip consists of a rigid body with a symmetry axis, one point of which is fixed in space, and gravity acts on the rigid body. An example is a rotating flywheel located a distance d along a rigid axle, as shown in Fig. 2.3. The fixed frame has unprimed coordinates x, y, and z. The axle makes an angle θ with respect to the z axis and an angle φ with respect to the x axis. The angle ψ is the rotation angle of the body about the symmetry axis. The Lagrangian is

$$ L = \frac{I_1}{2}\left(\omega_1^2 + \omega_2^2\right) + \frac{1}{2} I_3\,\omega_3^2 - M g d \cos\theta $$
$$ \;\;\; = \frac{I_1}{2}\left(\dot{\phi}\sin\theta\sin\psi + \dot{\theta}\cos\psi\right)^2 + \frac{I_1}{2}\left(\dot{\phi}\sin\theta\cos\psi - \dot{\theta}\sin\psi\right)^2 + \frac{1}{2} I_3 \left(\dot{\phi}\cos\theta + \dot{\psi}\right)^2 - M g d \cos\theta $$
$$ \;\;\; = \frac{I_1}{2}\left(\sin^2\theta\,\dot{\phi}^2 + \dot{\theta}^2\right) + \frac{I_3}{2}\left(\dot{\phi}\cos\theta + \dot{\psi}\right)^2 - M g d \cos\theta \qquad (2.32) $$

Figure 2.3 Fixed-tip gyroscope at an angle θ relative to vertical.


The angular frequencies in the Euler angles are given by Eq. (1.86). The partial derivatives of the Lagrangian are

$$ \frac{\partial L}{\partial \theta} = (I_1 - I_3)\,\dot{\phi}^2 \cos\theta \sin\theta - I_3\,\dot{\psi}\dot{\phi}\sin\theta + M g d \sin\theta \qquad \frac{\partial L}{\partial \phi} = 0 \qquad \frac{\partial L}{\partial \psi} = 0 \qquad (2.33) $$

with conjugate momenta

$$ p_\theta = \frac{\partial L}{\partial \dot{\theta}} = I_1 \dot{\theta} $$
$$ p_\phi = \frac{\partial L}{\partial \dot{\phi}} = \left( I_1 \sin^2\theta + I_3 \cos^2\theta \right) \dot{\phi} + I_3 \cos\theta\, \dot{\psi} = \text{const.} $$
$$ p_\psi = \frac{\partial L}{\partial \dot{\psi}} = I_3 \left( \dot{\phi}\cos\theta + \dot{\psi} \right) = I_3\,\omega_3 = \text{const.} \qquad (2.34) $$

where pφ and pψ are each constants of the motion. Comparing the last two equations, the angular momentum in φ is

$$ p_\phi = I_1 \sin^2\theta\, \dot{\phi} + p_\psi \cos\theta \qquad (2.35) $$

which is rewritten as

$$ I_1 \sin^2\theta\, \dot{\phi} = p_\phi - p_\psi \cos\theta \qquad (2.36) $$

to obtain

$$ \dot{\phi} = \frac{p_\phi - p_\psi \cos\theta}{I_1 \sin^2\theta} \qquad (2.37) $$

Similarly,

$$ p_\psi = I_3 \left( \frac{p_\phi - p_\psi \cos\theta}{I_1 \sin^2\theta}\, \cos\theta + \dot{\psi} \right) \qquad (2.38) $$

$$ \dot{\psi} = \omega_3 - \frac{p_\phi - p_\psi \cos\theta}{I_1 \sin^2\theta}\, \cos\theta \qquad (2.39) $$

The dynamics in θ are obtained as

$$ I_1 \ddot{\theta} = (I_1 - I_3)\,\dot{\phi}^2 \cos\theta \sin\theta - I_3\,\dot{\psi}\dot{\phi}\sin\theta + M g d \sin\theta \qquad (2.40) $$

To explore some of the behavior of the gyroscope, take the special case cos θ = 0 (the axle horizontal) and find the conditions for θ̈ = 0; then

$$ 0 = M g d - I_3\,\dot{\psi}\dot{\phi} = M g d - I_3\,\omega_3\,\dot{\phi} \qquad \dot{\phi} = \frac{M g d}{I_3 \omega_3} = \frac{M g d}{p_\psi} \qquad (2.41) $$

which is the case for steady precession. Now consider small oscillations (known as nutation) about this steady state, letting θ now stand for the small deviation of the axle from the horizontal (so that, to lowest order, cos θ → θ and sin θ → 1). Then

$$ I_1 \ddot{\theta} = (I_3 - I_1)\,\frac{p_\phi^2}{I_1^2}\,\theta + I_3 \left( \frac{p_\phi - p_\psi\,\theta}{I_1} \right) \left( \omega_3 - \frac{p_\phi}{I_1}\,\theta \right) - M g d \qquad (2.42) $$

which, using the steady-state condition Mgd = pψ pφ/I₁ and keeping terms to first order in θ, simplifies to

$$ \ddot{\theta} = -\frac{p_\phi^2 + p_\psi^2}{I_1^2}\,\theta = -\omega_0^2\,\theta \qquad (2.43) $$

Small displacements in θ oscillate with a squared angular frequency proportional to the sum of the squares of the constant angular momenta. If the precession rate is much smaller than the spin, then ω₀ ≈ I₃ω₃/I₁. If the gyroscope spins at higher speed, the frequency of nutation is higher.
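The coupled equations (2.37), (2.39), and (2.40) form a dynamical flow that can be integrated directly. A minimal sketch (SciPy assumed; the moments of inertia, spin rate, and Mgd are illustrative values, with pφ chosen for steady precession): starting slightly away from θ = π/2, the integration exhibits small nutation oscillations near the frequency predicted by Eq. (2.43).

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative gyroscope parameters
I1, I3, Mgd = 1.0, 0.5, 1.0
omega3 = 10.0                    # spin rate
p_psi = I3 * omega3              # conserved, Eq. (2.34)
p_phi = I1 * Mgd / p_psi         # steady precession: Mgd = p_psi*p_phi/I1

def top(t, y):
    theta, thetadot = y
    s, c = np.sin(theta), np.cos(theta)
    phidot = (p_phi - p_psi * c) / (I1 * s**2)        # Eq. (2.37)
    psidot = omega3 - phidot * c                      # Eq. (2.39)
    thetaddot = ((I1 - I3) * phidot**2 * c * s
                 - I3 * psidot * phidot * s + Mgd * s) / I1   # Eq. (2.40)
    return [thetadot, thetaddot]

# Start slightly away from the steady state theta = pi/2
sol = solve_ivp(top, (0, 20), [np.pi / 2 + 0.01, 0.0],
                t_eval=np.linspace(0, 20, 4000), rtol=1e-10)

nutation = np.sqrt(p_phi**2 + p_psi**2) / I1          # Eq. (2.43)
print("theta range:", sol.y[0].min(), sol.y[0].max())
print("predicted nutation frequency:", nutation)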


2.3 Dissipation in Lagrangian systems

Lagrange's equations admit the inclusion of velocity-dependent forces. These forces are nonconservative, leading to path dependence in the dynamics, and cannot be described by a potential function. However, their effect can be captured as a force of constraint in the Euler–Lagrange equations. For instance, in viscous damping, the drag force (at low speed) is

$$ F_{\text{drag}}^a = -\gamma\, \dot{q}^a \qquad (2.44) $$

and has a dissipated power of

$$ P = \gamma \sum_a \int \dot{q}^a\, d\dot{q}^a \qquad (2.45) $$

summed and integrated over the generalized velocities. The derivative of the power with respect to velocity is a generalized force

$$ Q_a = \frac{\partial P}{\partial \dot{q}^a} = \gamma\, \dot{q}^a \qquad (2.46) $$

The Euler–Lagrange equations are already in the form of balanced forces, and the generalized dissipative force is simply added to the balance as

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) - \frac{\partial L}{\partial q^a} + Q_a = 0 \qquad (2.47) $$

As an example, consider a mass falling through a viscous medium:

$$ T = \frac{1}{2} m \dot{q}^2 \qquad U = m g q \qquad L = T - U = \frac{1}{2} m \dot{q}^2 - m g q $$
$$ \frac{\partial L}{\partial \dot{q}} = m \dot{q} \qquad \frac{\partial L}{\partial q} = -m g \qquad Q = \gamma\, \dot{q} \qquad (2.48) $$

Then the Euler–Lagrange equation is

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}} \right) - \frac{\partial L}{\partial q} + Q = m\ddot{q} + m g + \gamma\, \dot{q} = 0 \qquad (2.49) $$

which describes the approach to terminal velocity. Dissipative forces that depend only on velocities are common in dynamics as a system approaches steady state. Steady state is a form of dynamic equilibrium, and Lagrangian approaches capture dynamic equilibrium systems. A more general approach includes forces of constraint through the method known as Lagrange undetermined multipliers.
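Equation (2.49) is itself a simple flow that can be integrated numerically. A minimal sketch (SciPy assumed; parameter values illustrative) shows the velocity relaxing to the terminal value q̇ → −mg/γ:

import numpy as np
from scipy.integrate import solve_ivp

m, g, gamma = 1.0, 9.81, 0.5     # illustrative values

def fall(t, y):
    q, qdot = y
    return [qdot, -g - (gamma / m) * qdot]   # Eq. (2.49)

sol = solve_ivp(fall, (0, 20), [0.0, 0.0], t_eval=np.linspace(0, 20, 200))
print("final velocity:  ", sol.y[1][-1])
print("terminal velocity:", -m * g / gamma)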


2.4 Lagrange undetermined multipliers

Dynamical systems often have constraints imposed on the motion. For instance, a bead sliding without friction on a stiff wire must experience forces of constraint that keep the bead on the wire as it slides (Fig. 2.4). The dimensionality of the problem equals the number of generalized coordinates N minus the number of equations of constraint M, so D = N − M. For a Lagrangian problem with N generalized coordinates qᵃ and M equations of constraint f_b(qᵃ) = 0, the total differential of each equation of constraint is

$$ df_b = \sum_a \frac{\partial f_b}{\partial q^a}\, dq^a = \sum_a \frac{\partial f_b}{\partial q^a}\,\frac{\partial q^a}{\partial \varepsilon}\, d\varepsilon = 0 \qquad (2.50) $$

where ε is the expansion parameter in the variational calculus derivation of Lagrange's equations, related to the variation function η by

$$ \frac{\partial q^a}{\partial \varepsilon} = \eta^a \qquad (2.51) $$

and hence

$$ \frac{df_b}{d\varepsilon} = \sum_a \eta^a\, \frac{\partial f_b}{\partial q^a} = 0 \qquad (2.52) $$

Figure 2.4 Constrained motion. Consider a bead sliding without friction on a stiff wire. The equation of constraint f(x, y) = 0 introduces forces of constraint that induce the bead to move along the wire.

This expression for each equation of constraint can be added to the variational integral without changing its overall variation. However, the relative contribution of each total differential is undetermined. Therefore, this introduces M undetermined multipliers λ_b as

$$ \frac{dI}{d\varepsilon} = \int \sum_a \left[ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) - \frac{\partial L}{\partial q^a} - \sum_b \lambda_b\, \frac{\partial f_b}{\partial q^a} \right] \eta^a(t)\, dt = 0 \qquad (2.53) $$

The Euler–Lagrange equations of motion including the equations of constraint become

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) - \frac{\partial L}{\partial q^a} - \sum_b \lambda_b\, \frac{\partial f_b}{\partial q^a} = 0 \qquad (2.54) $$

This expression is particularly helpful, because the additional term represents the forces of constraint, which are the reaction forces required to maintain motion according to the constraints. The generalized forces of constraint are then

$$ Q_a = \sum_b \lambda_b\, \frac{\partial f_b}{\partial q^a} \qquad (2.55) $$

Introducing the undetermined multipliers has increased the number of unknowns to M + N, but they can simplify the choice of coordinates, and they have the side benefit that they provide the constraint forces, which might be difficult to obtain directly. The equations of constraint can be viewed as adding an additional term to give a modified Lagrangian

$$ L' = L + \sum_b \lambda_b\, f_b \qquad (2.56) $$

so that Lagrange's equation

$$ \frac{d}{dt}\left( \frac{\partial L'}{\partial \dot{q}^a} \right) - \frac{\partial L'}{\partial q^a} = 0 \qquad (2.57) $$

retains the original form of the Euler–Lagrange equations. Note that in this case, the equations of constraint do not depend on the generalized velocities. It is possible to have constraints that do depend on the generalized velocities, as, for example, in the case of a mass falling with terminal velocity.

2.5 Examples of Lagrangian applications with constraints

2.5.1 Massive pulley

Consider a mass on a massless rope attached to a massive pulley able to spin without friction on its axis (Fig. 2.5). The equation of constraint is

$$ f = y - R\theta = 0 \qquad (2.58) $$

Figure 2.5 Massive pulley to be solved using Lagrange's equations with a constraint. The single degree of freedom (DOF = 1) is described by two generalized coordinates q = {θ, y} linked by the constraint f = y − Rθ = 0.

connecting the displacement of the mass with the rotation of the pulley. The kinetic and potential energies use two generalized coordinates: an angle for the pulley and a linear coordinate for the falling mass (measured positive downward):

$$ T = \frac{1}{2} I \dot{\theta}^2 + \frac{1}{2} m \dot{y}^2 \qquad V = -m g y \qquad (2.59) $$

$$ L = \frac{1}{2} I \dot{\theta}^2 + \frac{1}{2} m \dot{y}^2 + m g y $$
$$ \frac{\partial L}{\partial \dot{\theta}} = I\dot{\theta} \qquad \frac{\partial L}{\partial \theta} = 0 \qquad \frac{\partial L}{\partial \dot{y}} = m\dot{y} \qquad \frac{\partial L}{\partial y} = m g \qquad (2.60) $$

The three dynamical equations in three unknowns are

$$ I\ddot{\theta} + \lambda R = 0 \qquad m\ddot{y} - m g - \lambda = 0 \qquad \ddot{y} - R\ddot{\theta} = 0 \qquad (2.61) $$

Eliminating λ using the constraint ÿ = Rθ̈,

$$ \lambda = m\ddot{y} - m g \qquad I\ddot{\theta} + \left( m R \ddot{\theta} - m g \right) R = 0 \qquad \ddot{\theta}\left( I + m R^2 \right) = m g R \qquad (2.62) $$

yields

$$ \ddot{\theta} = \frac{m g R}{I + m R^2} \qquad \ddot{y} = \frac{m g R^2}{I + m R^2} \qquad \lambda = -m g\, \frac{I}{I + m R^2} \qquad (2.63) $$

where the magnitude of the undetermined multiplier is equal to the tension in the rope.
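Because Eqs. (2.61) are linear in the three unknowns θ̈, ÿ, and λ, they can also be solved symbolically. A minimal sketch using SymPy (an assumed tool) reproduces Eq. (2.63):

import sympy as sp

I, m, g, R = sp.symbols('I m g R', positive=True)
thetadd, ydd, lam = sp.symbols('thetadd ydd lambda')

# The three dynamical equations of Eq. (2.61), each implicitly set to zero
eqs = [I * thetadd + lam * R,
       m * ydd - m * g - lam,
       ydd - R * thetadd]

sol = sp.solve(eqs, [thetadd, ydd, lam], dict=True)[0]
print(sp.simplify(sol[thetadd]))   # m*g*R / (I + m*R**2)
print(sp.simplify(sol[ydd]))       # m*g*R**2 / (I + m*R**2)
print(sp.simplify(sol[lam]))       # -I*m*g / (I + m*R**2)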

2.5.2 Atwood machine with massive pulley

Consider two masses on a massless rope around a massive pulley (Fig. 2.6). This problem is the same as the last one, but with an additional mass on the other side of the pulley. The number of degrees of freedom remains the same, but the driving mass is partially balanced across the pulley. Therefore, the equations of the preceding problem can be used directly by replacing the driving mass by the difference m₋ = m₁ − m₂ and the inertial mass by the sum m₊ = m₁ + m₂. This yields

$$ \ddot{\theta} = \frac{-m_- g R}{I + m_+ R^2} = \frac{-(m_1 - m_2)\, g R}{I + (m_1 + m_2)\, R^2} $$
$$ \ddot{y} = \frac{-m_- g R^2}{I + m_+ R^2} = \frac{-(m_1 - m_2)\, g R^2}{I + (m_1 + m_2)\, R^2} $$
$$ \lambda = m_- g\, \frac{I}{I + m_+ R^2} = (m_1 - m_2)\, g\, \frac{I}{I + (m_1 + m_2)\, R^2} \qquad (2.64) $$

Figure 2.6 Atwood machine: massive pulley with two hanging masses.

2.5.3 Cylinder rolling down an inclined plane

As shown in Fig. 2.7, a cylinder rolls without slipping down an inclined plane, with the equation of constraint

$$ f = y - R\theta = 0 \qquad (2.65) $$

The kinetic and potential energies are

$$ T = \frac{1}{2} I \dot{\theta}^2 + \frac{1}{2} M \dot{y}^2 \qquad V = M g\,(L - y)\sin\alpha \qquad (2.66) $$

with the Lagrangian

$$ L = \frac{1}{4} M R^2 \dot{\theta}^2 + \frac{1}{2} M \dot{y}^2 - M g\,(L - y)\sin\alpha \qquad (2.67) $$

Figure 2.7 Cylinder rolling without sliding down an inclined plane of length L and incline angle α (moment of inertia I = ½MR²).

and derivatives

$$ \frac{\partial L}{\partial \dot{\theta}} = \frac{1}{2} M R^2 \dot{\theta} \qquad \frac{\partial L}{\partial \theta} = 0 \qquad \frac{\partial L}{\partial \dot{y}} = M\dot{y} \qquad \frac{\partial L}{\partial y} = M g \sin\alpha \qquad (2.68) $$

The equations of motion are

$$ \frac{1}{2} M R^2 \ddot{\theta} + \lambda R = 0 \qquad M\ddot{y} - M g \sin\alpha - \lambda = 0 \qquad \ddot{y} - R\ddot{\theta} = 0 \qquad (2.69) $$

with

$$ \ddot{\theta} = \frac{2 g \sin\alpha}{3 R} \qquad \ddot{y} = \frac{2}{3}\, g \sin\alpha \qquad \lambda = -\frac{1}{3}\, M g \sin\alpha \qquad (2.70) $$

The undetermined multiplier is now determined, and it is equal to the force of friction that is required for the cylinder to roll without slipping.

2.6 Conservation laws

Although the study of dynamics is the study of change, key insights into motion arise by looking for those things that do not change in time—the constants of the motion. The invariant properties of trajectories provide the best way to gain a deeper understanding of the physical causes behind the motion and the range of motions that can derive from those causes. Chief among the conservation principles is the conservation of energy that emerges naturally from Lagrangian dynamics.³ However, there is a deep connection between conserved quantities and embedded symmetries of the dynamical spaces: for every symmetry implicit in Lagrange's equations, there is a conserved quantity.⁴

The Euler–Lagrange equations have two common special cases that lead directly to what are known as first integrals. A first integral is a differential equation that is one order lower than the original set of differential equations and retains a constant value along an orbit in state space. First integrals are conserved quantities of the motion and arise in two general cases: when the Lagrangian has no explicit dependence on time and when the Lagrangian has no explicit dependence on a coordinate.

³ Conservation of energy in the broader context of interconversions among different types of energy emerged in the mid nineteenth century, chiefly through the work of Helmholtz, but also through Thomson (Kelvin) and Clausius. See J. Coopersmith, Energy, the Subtle Concept (Oxford, 2010).

2.6.1 Conservation of energy

When there is no explicit dependence of L on t, then

$$ \frac{\partial L}{\partial t} = 0 \qquad (2.71) $$

and the total time derivative is (with implicit Einstein summation over repeated indices)

$$ \frac{dL}{dt} = \frac{\partial L}{\partial q^a}\,\dot{q}^a + \frac{\partial L}{\partial \dot{q}^a}\,\ddot{q}^a \qquad (2.72) $$

Using the Euler–Lagrange equation (2.20) for ∂L/∂qᵃ in Eq. (2.72) leads to

$$ \frac{dL}{dt} = \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) \dot{q}^a + \frac{\partial L}{\partial \dot{q}^a}\,\ddot{q}^a \qquad (2.73) $$

This can be rewritten as

$$ \frac{dL}{dt} = \frac{d}{dt}\left( \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} \right) \qquad (2.74) $$

or

$$ \frac{d}{dt}\left( \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} - L \right) = 0 \qquad (2.75) $$

where the first integral is

$$ \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} - L = \text{const.} \qquad (2.76) $$

The constant can be identified with an important dynamical quantity by noting that

$$ \sum_{a=1}^{N} \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} = 2T \qquad (2.77) $$

Therefore, the first integral is

$$ \text{const.} = 2T - L = 2T - (T - U) = T + U = E \qquad (2.78) $$

which is the total energy of the system. This system, with no explicit time dependence in the Lagrangian, is called conservative, and the total energy of the system remains constant.

⁴ This is Noether's theorem, named after Emmy Noether, who published it in 1918.
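The first integral (2.78) can be verified along a numerically integrated trajectory. A minimal sketch (SciPy assumed; a pendulum with illustrative parameters) shows that T + U computed along the orbit is constant to within the integration tolerance:

import numpy as np
from scipy.integrate import solve_ivp

m, L, g = 1.0, 1.0, 9.81     # illustrative pendulum parameters

def pendulum(t, y):
    th, om = y
    return [om, -(g / L) * np.sin(th)]

sol = solve_ivp(pendulum, (0, 50), [2.0, 0.0],
                t_eval=np.linspace(0, 50, 5000), rtol=1e-11, atol=1e-11)
th, om = sol.y
E = 0.5 * m * L**2 * om**2 + m * g * L * (1 - np.cos(th))   # T + U
print("energy drift over the run:", E.max() - E.min())      # ~1e-9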

2.6.2 Ignorable (cyclic) coordinates

When a generalized coordinate does not appear explicitly in the Lagrangian, that variable is called an ignorable variable or a cyclic variable, and the Lagrangian is said to be homogeneous in that coordinate. In this case,

$$ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) = \frac{\partial L}{\partial q^a} = 0 \qquad (2.79) $$

and the time derivative of the quantity in the parentheses vanishes (the quantity is constant in time). Therefore, a first integral is

$$ \frac{\partial L}{\partial \dot{q}^a} = \text{const.} \qquad (2.80) $$

and ignorable coordinates lead to conserved quantities. More fundamentally, the situations when explicit coordinates do not appear in the Lagrangian arise when there is a form of symmetry (known as Noether's theorem). For instance, translational symmetry would mean that there is no dependence of the potential on position. Then

$$ L = \frac{1}{2}\, m \left( \dot{q}^a \right)^2 \qquad (2.81) $$

and

$$ p_a = \frac{\partial L}{\partial \dot{q}^a} = m \dot{q}^a \qquad (2.82) $$

which is the linear momentum (also known as the canonical momentum⁵) that is conserved. Likewise, rotational symmetry of a potential means that there is no explicit angular dependence (as for the gravitational potential), which means that angular momentum (the conjugate momentum to the physical angle) is conserved.

⁵ Note that the index for conjugate momentum is a subscript, which from Chapter 1 means that it is a covector transforming under coordinate transformations similarly to the basis vectors. In Cartesian coordinates, there is no important distinction between vectors and covectors. But this distinction becomes important for generalized coordinates.

Energy conservation arises from the time invariance of the potential. Because of the wide variety of possible generalized coordinates and forms that Lagrangians can take, and many types of symmetry, there can be various forms of momenta that are conserved quantities. In the next chapter (Chapter 3), a special type of dynamical system will be described that has as many conserved quantities as the number of degrees of freedom of the system. This type of system is called integrable. Coordinate transformations exist that can convert integrable dynamics into generalized angle coordinates, each of which has a generalized conserved momentum called the action (equivalent to Euler's action integral, Eq. (2.11)). Action-angle representations of integrable systems make it possible to identify conserved quantities even when system parameters are changing in time. As an example, the simple dissipation-free pendulum has a time-varying angular velocity and an explicit dependence of the potential on angle, which prevent conservation of angular momentum. However, an action integral that is conjugate to a generalized angle can be defined that is a conserved quantity of the motion, even if the length of the pendulum and/or its mass are changing in time. Therefore, there can be conserved quantities other than the conventional quantities like energy, linear momentum, and angular momentum.

2.7 Central force motion

A central force originates from the gradient of a potential V(r) that depends only on the radial distance from the force center. A particle of mass m attracted by the force is located in three dimensions by the spherical coordinates (r, θ, φ). The squared velocity in spherical coordinates is

$$ v^2 = \dot{r}^2 + r^2 \dot{\theta}^2 + r^2 \sin^2\theta\, \dot{\phi}^2 \qquad (2.83) $$

yielding the Lagrangian

$$ L = \frac{1}{2}\, m \left( \dot{r}^2 + r^2 \dot{\theta}^2 + r^2 \sin^2\theta\, \dot{\phi}^2 \right) - V(r) \qquad (2.84) $$

However, there is a conserved quantity in this motion, which reduces the dimensionality from three to two degrees of freedom. The central force cannot exert a torque on the particle, because the force is directed along the position vector. Therefore, angular momentum is conserved, and we are free to choose the initial condition θ = π/2 = const. that defines the equatorial plane. The Lagrangian becomes

$$ L = \frac{1}{2}\, m \left( \dot{r}^2 + r^2 \dot{\phi}^2 \right) - V(r) \qquad (2.85) $$

in the two variables r and φ.


2.7.1 Reduced mass for the two-body problem

For the problem of two mutually attracting bodies of finite mass with no external forces, the total momentum is conserved:

$$ \frac{d\vec{P}}{dt} = \frac{d}{dt}\left( m_1 \dot{\vec{r}}_1 + m_2 \dot{\vec{r}}_2 \right) = 0 \qquad (2.86) $$

In the center-of-mass frame, this equation can be satisfied by

$$ m_1 \vec{r}_1 + m_2 \vec{r}_2 = 0 \qquad (2.87) $$

Combining this with

$$ \vec{r} = \vec{r}_1 - \vec{r}_2 \qquad (2.88) $$

gives

$$ \vec{r}_1 = \frac{m_2}{m_1 + m_2}\,\vec{r} \qquad \vec{r}_2 = -\frac{m_1}{m_1 + m_2}\,\vec{r} \qquad (2.89) $$

and the Lagrangian is

$$ L = \frac{1}{2}\,\mu\, \dot{\vec{r}} \cdot \dot{\vec{r}} - V(r) \qquad (2.90) $$

where μ is the reduced mass

$$ \mu = \frac{m_1 m_2}{m_1 + m_2} \qquad (2.91) $$

This has reduced the two-body problem to an effective one-body central-potential problem.

2.7.2 Effective potential energy

The Euler–Lagrange equations applied to the Lagrangian (2.90) are

$$ \frac{d}{dt}\left( \mu r^2 \dot{\phi} \right) = 0 \qquad \mu\ddot{r} - \mu r \dot{\phi}^2 + \frac{dV}{dr} = 0 \qquad (2.92) $$

The first equation is integrated to give the conserved angular momentum ℓ as

$$ \mu r^2 \dot{\phi} = \text{const.} = \ell \qquad (2.93) $$

This is put into the second equation of Eq. (2.92) to give

$$ \mu\ddot{r} - \frac{\ell^2}{\mu r^3} + \frac{dV}{dr} = 0 \qquad (2.94) $$

which is a one-dimensional dynamical equation. The last two terms depend only on the variable r, and hence it is possible to define an effective potential as

$$ V_{\text{eff}}(r) = V(r) + \frac{\ell^2}{2\mu r^2} \qquad (2.95) $$

such that the Lagrangian is now

$$ L = \frac{1}{2}\,\mu \dot{r}^2 - V_{\text{eff}}(r) \qquad (2.96) $$

The motion has only a single degree of freedom given by the radial position. The total energy is

$$ E = \frac{1}{2}\,\mu \dot{r}^2 + V_{\text{eff}}(r) \qquad (2.97) $$

which is negative for bound orbits and positive for unbound orbits. The one-dimensional Lagrangian of orbital mechanics for the inverse-square force of gravity has the potential energy

$$ V(r) = -\frac{G M m}{r} \qquad (2.98) $$

The effective potential for an inverse-square force is shown in Fig. 2.8. The difference between the total energy E and the effective potential energy is the radial kinetic energy

$$ T_{\text{rad}}(r) = E - V_{\text{eff}}(r) = \frac{1}{2}\,\mu \dot{r}^2 \qquad (2.99) $$

The turning points in the one-dimensional motion occur at r_min and r_max, where the radial speed vanishes. The circular orbit occurs at the minimum of the effective potential:

$$ \frac{d V_{\text{eff}}}{dr} = \frac{G M m}{r^2} - \frac{\ell^2}{\mu r^3} = 0 \qquad r_c = \frac{\ell^2}{G M m\, \mu} \qquad (2.100) $$

Figure 2.8 Effective 1D potential V_eff(r) = −GMm/r + ℓ²/2μr² for an inverse-square law, showing the angular kinetic energy and the potential energy contributions. For a fixed negative total energy E, the orbit is bound with inner and outer turning points r_min and r_max, respectively. The radial kinetic energy is the difference between the total energy and the effective potential energy.

The two-dimensional orbital flow from Eq. (2.94) for the gravitational potential is given by

$$ \dot{r} = \rho \qquad \dot{\rho} = \frac{1}{\mu r^2}\left( \frac{\ell^2}{\mu r} - G M m \right) \qquad (2.101) $$

A set of solutions is shown in Fig. 2.9 for several different initial conditions for μ = 1, ℓ = 1, and GMm = 1. The solution space is radial speed ṙ versus radial position r, showing orbits with different total energies. The red curves are bound orbits with E < 0. The blue curves are unbound orbits with E > 0. The dashed black curve is the orbit with E = 0. As we will see in the next section on Kepler's laws, in configuration space the bound orbits are elliptical, the unbound orbits are hyperbolic, and the marginal case with E = 0 is parabolic.
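A state-space portrait like Fig. 2.9 can be generated by integrating the flow of Eq. (2.101) from a family of initial conditions. A minimal sketch (SciPy and Matplotlib assumed; the launch radius r = 2 and the range of radial speeds are illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

mu, ell, GMm = 1.0, 1.0, 1.0     # parameter values from the text

def flow(t, y):
    r, rho = y
    return [rho, (1.0 / (mu * r**2)) * (ell**2 / (mu * r) - GMm)]  # Eq. (2.101)

def energy(r, rho):               # Eq. (2.97) with the effective potential
    return 0.5 * mu * rho**2 - GMm / r + ell**2 / (2 * mu * r**2)

# Launch from r = 2 with a range of radial speeds: E < 0 bound, E > 0 unbound
for rho0 in np.linspace(0.0, 1.0, 9):
    sol = solve_ivp(flow, (0, 60), [2.0, rho0], max_step=0.01)
    plt.plot(sol.y[0], sol.y[1], 'r' if energy(2.0, rho0) < 0 else 'b')
plt.xlim(0, 10); plt.xlabel('Radius'); plt.ylabel('Radial speed')
plt.show()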

2.7.3 Kepler's laws

Kepler's three laws for planetary motion about the Sun (Fig. 2.10) can be derived directly from the effective potential. The First Law states that planetary orbits are ellipses with the Sun at one focus of the ellipse. The Second Law states that planets sweep out equal areas in equal times. The Third Law states that the square of the orbital period varies as the cube of the semimajor axis of the ellipse.

Figure 2.9 Solved trajectories in (r, ṙ) state space for the orbital dynamics equations for μ = 1, ℓ = 1, and GMm = 1. The red orbits are elliptical bound orbits (E < 0) and the blue orbits are hyperbolic unbound orbits (E > 0). The black dashed orbit is a marginal parabolic orbit (E = 0).

Figure 2.10 The three Laws of Kepler. First Law: Planetary orbits are conic sections—ellipses with the Sun as one focus, α/r = 1 + ε cos φ with ε = c/a. Second Law: Planetary orbits sweep out equal areas in equal times, dA₁/dt = dA₂/dt with dA/dt = ½ r² dφ/dt. Third Law: The square of the period varies as the cube of the semimajor axis of the orbit, T₁²/T₂² = a₁³/a₂³.

Kepler's First Law, which was the most revolutionary at the time (when circular orbits were considered divine), is that all planets execute elliptical orbits with the force center at one focus of the ellipse. This law can be derived by first noting that the radial velocity is expressed as

$$ \frac{dr}{dt} = \frac{dr}{d\phi}\,\frac{d\phi}{dt} = \frac{\ell}{\mu r^2}\,\frac{dr}{d\phi} = \sqrt{\frac{2}{\mu}\,(E - U) - \frac{\ell^2}{\mu^2 r^2}} \qquad (2.102) $$

where E is the total energy of the system and ℓ is the conserved angular momentum. This expression is integrated to give angular position as a function of radius:

$$ \phi(r) = \int \frac{\left( \ell / r^2 \right) dr}{\sqrt{2\mu \left( E + \dfrac{G M m}{r} - \dfrac{\ell^2}{2\mu r^2} \right)}} \qquad (2.103) $$

Making the substitution u = 1/r, this integrates to

$$ \cos\phi = \frac{\dfrac{\ell^2}{\mu G M m}\,\dfrac{1}{r} - 1}{\sqrt{1 + \dfrac{2 E \ell^2}{\mu G^2 M^2 m^2}}} \qquad (2.104) $$

With the substitutions

$$ \alpha = \frac{\ell^2}{\mu G M m} \qquad \varepsilon = \sqrt{1 + \frac{2 E \ell^2}{\mu G^2 M^2 m^2}} \qquad (2.105) $$

the solution is

$$ \frac{\alpha}{r} = 1 + \varepsilon \cos\phi \qquad (2.106) $$

This is the equation of an ellipse with eccentricity ε and semimajor and semiminor axes

$$ a = \frac{\alpha}{1 - \varepsilon^2} \qquad b = \frac{\alpha}{\sqrt{1 - \varepsilon^2}} \qquad (2.107) $$

respectively (Fig. 2.11). The closest and farthest approaches (called the turning points) are

$$ r_{\min} = \frac{\alpha}{1 + \varepsilon} \qquad r_{\max} = \frac{\alpha}{1 - \varepsilon} \qquad (2.108) $$

respectively. The total energy at the turning points is entirely from the effective potential energy, giving

$$ E = -\frac{G M m}{r_{\min}} + \frac{\ell^2}{2\mu r_{\min}^2} = -\frac{G M m}{r_{\max}} + \frac{\ell^2}{2\mu r_{\max}^2} = -\frac{G M m}{2a} \qquad (2.109) $$

Figure 2.11 Properties of an ellipse for a bound orbit (shown for ε = 0.5 with φ = 0 at r_min): r = a(1 − ε²)/(1 + ε cos φ) = α/(1 + ε cos φ), with α = ℓ²/μGMm, ε = √(1 + 2Eα/GMm) = c/a, a² = b² + c², r_min = a(1 − ε) = α/(1 + ε), and r_max = a(1 + ε) = α/(1 − ε).

Figure 2.12 Orbits with ε = 0 (circular), 0.2 through 0.8 (elliptical), 1.0 (parabolic), and 1.2 (hyperbolic) for α = 10.

For bound motion, when E < 0, the planetary motion is an ellipse or a circle. It is a circle when the energy is equal to the minimum in the effective potential, E = V min , and ε = 0. For E = 0, the orbit is a parabola. When the orbit is unbound for E > 0, the trajectory is a hyperbola. Examples are shown in Fig. 2.12, for α = 10, with ε = 0 (circular), ε = 0.2, 0.4, 0.6, and 0.8 (elliptical), ε = 1 (parabolic), and ε = 1.2 (hyperbolic).
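The family of conic sections of Eq. (2.106) can be drawn directly from the orbit equation, without integrating the dynamics. A minimal sketch (NumPy and Matplotlib assumed) reproduces the cases of Fig. 2.12 for α = 10:

import numpy as np
import matplotlib.pyplot as plt

alpha = 10.0
phi = np.linspace(0, 2 * np.pi, 2001)

for eps in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2):
    denom = 1 + eps * np.cos(phi)
    # Keep only the physical branch (r > 0); NaN breaks the plotted line
    r = np.where(denom > 0.05, alpha / denom, np.nan)   # Eq. (2.106)
    plt.plot(r * np.cos(phi), r * np.sin(phi), label=f'eps = {eps}')
plt.axis('equal'); plt.legend(); plt.show()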

Kepler's Second Law states that the planetary orbit sweeps out equal areas in equal times. For planetary orbits around the Sun, the Sun can be considered fixed, to good approximation, because it is much more massive than the planets. In this case, the Lagrangian for the motion is

$$ L(r, \phi) = \frac{1}{2}\,\mu \left( \dot{r}^2 + r^2 \dot{\phi}^2 \right) + \frac{G M m}{r} \qquad (2.110) $$

where the motion is in a plane (defined by θ = π/2), and the potential depends only on the radial position r. There is no explicit dependence on the azimuthal variable φ, and hence φ is an ignorable variable and the associated conjugate momentum is conserved,

$$ p_\phi = \frac{\partial L}{\partial \dot{\phi}} = \mu r^2 \dot{\phi} = \ell \qquad (2.111) $$

where ℓ is the constant angular momentum. The area swept out by a planet in a time dt is

$$ dA = \frac{1}{2}\, r\, v_\phi\, dt \qquad (2.112) $$

for a speed v_φ = rφ̇. From conservation of angular momentum, this becomes

$$ dA = \frac{1}{2}\, r^2 \dot{\phi}\, dt = \frac{\ell}{2\mu}\, dt \qquad (2.113) $$

where we see that "equal areas are swept out in equal times." Kepler's Third Law, a statement about the period of the orbit and the semimajor axis of the ellipse, begins with Kepler's Second Law, Eq. (2.113):

$$ dt = \frac{2\mu}{\ell}\, dA \qquad (2.114) $$

For a full period, the area swept out is the area of the ellipse:

$$ T = \frac{2\mu}{\ell}\, A = \frac{2\mu}{\ell}\, \pi a b \qquad (2.115) $$

The semimajor and semiminor axes are taken from Eq. (2.107), and squaring yields

$$ T^2 = \frac{4\mu^2}{\ell^2}\, \pi^2 \alpha\, a^3 = \frac{4\pi^2}{G\,(M + m)}\, a^3 \qquad (2.116) $$

When M ≫ m, we have Kepler's Third Law: "The square of the period of the orbit varies as the cube of the semimajor axis," with the same coefficient for every planet.
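Kepler's Third Law can be checked against a direct integration of the orbit. A minimal sketch (SciPy assumed; GM = 1 and the perihelion launch condition are illustrative) measures the period from successive same-direction crossings of the x = 0 plane and compares it with Eq. (2.116):

import numpy as np
from scipy.integrate import solve_ivp

GM = 1.0     # G(M + m) for the relative-coordinate problem (illustrative)

def kepler(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

def xcross(t, s):          # x = 0 crossings; one descending crossing per period
    return s[0]
xcross.direction = -1

r0, v0 = 1.0, 0.8           # perihelion radius and sub-circular speed
sol = solve_ivp(kepler, (0, 50), [r0, 0.0, 0.0, v0],
                rtol=1e-11, atol=1e-11, events=xcross)

T_meas = sol.t_events[0][1] - sol.t_events[0][0]
E = 0.5 * v0**2 - GM / r0                 # energy per unit reduced mass
a = -GM / (2 * E)                          # semimajor axis, from Eq. (2.109)
print("measured T:", T_meas)
print("Kepler T:  ", 2 * np.pi * np.sqrt(a**3 / GM))   # Eq. (2.116)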

2.8 Virial theorem

When considering the properties of systems of particles interacting through central forces, or when making connection to equivalent quantum mechanical problems, it is common to work with averages of the kinetic and potential energies over a single period of the orbit (if the orbit is periodic). It is sometimes possible to establish relationships between average properties that become powerful tools for understanding complex systems with a large number of degrees of freedom. Beginning with an inverse-power-law potential energy

$$ V(r) = -\frac{k}{r^n} \qquad (2.117) $$

one can construct the quantity

$$ \vec{F} \cdot \vec{r} = -\frac{dV}{dr}\, r = -\frac{n k}{r^n} = n V(r) \qquad (2.118) $$

This same quantity is equal to

$$ \vec{F} \cdot \vec{r} = \frac{d\vec{p}}{dt} \cdot \vec{r} = \frac{d}{dt}\left( \vec{p} \cdot \vec{r} \right) - \vec{p} \cdot \frac{d\vec{r}}{dt} = \frac{d}{dt}\left( \vec{p} \cdot \vec{r} \right) - 2T \qquad (2.119) $$

By equating these results, the relationship between the kinetic energy and the potential energy is

$$ \frac{d}{dt}\left( \vec{p} \cdot \vec{r} \right) = n V(r) + 2T \qquad (2.120) $$

By averaging this equation over a full cycle of a periodic bound orbit,

$$ \left\langle \frac{d}{dt}\left( \vec{p} \cdot \vec{r} \right) \right\rangle_{\text{orbit}} = n \left\langle V(r) \right\rangle_{\text{orbit}} + 2 \left\langle T \right\rangle_{\text{orbit}} \qquad (2.121) $$

the average of the first quantity vanishes,

$$ \left\langle \frac{d}{dt}\left( \vec{p} \cdot \vec{r} \right) \right\rangle_{\text{orbit}} = 0 \qquad (2.122) $$

because p⃗·r⃗ is a bounded periodic function of time, so its time derivative averages to zero over a full period. Then the average of the kinetic energy is related to the average of the potential energy as

$$ \langle T \rangle = -\frac{n}{2}\, \langle V(r) \rangle \qquad (2.123) $$

This is known as the virial theorem for power-law potentials. The potential energy of an inverse-square force has n = 1, leading to

$$ \langle T \rangle = -\frac{1}{2}\, \langle V(r) \rangle = \frac{1}{2}\, \frac{G M m}{a} \qquad (2.124) $$

and the total energy of an orbit is

$$ E = \langle T \rangle + \langle V \rangle = -\frac{1}{2}\, \frac{G M m}{a} \qquad (2.125) $$

The eccentricity ε does not appear in this expression, and the averages are defined purely by the semimajor axis. Expressions similar to these are obtained for the quantum mechanical properties of electrostatic interactions in the hydrogen atom when the semimajor axis is identified with the principal Bohr radii, and the averages are identified as expectation values over the quantum wave functions. Globular clusters of stars also obey the virial theorem, as do molecules in an ideal gas.⁶
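The virial relation ⟨T⟩ = −½⟨V⟩ for n = 1 can be verified by time-averaging over one period of a numerically integrated orbit. A minimal sketch (SciPy assumed; the same illustrative Kepler orbit as in the previous sketch):

import numpy as np
from scipy.integrate import solve_ivp

GM = 1.0     # illustrative

def kepler(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

E0 = 0.5 * 0.8**2 - 1.0                   # energy of the illustrative orbit
a = -GM / (2 * E0)
T_orbit = 2 * np.pi * np.sqrt(a**3 / GM)  # period from Eq. (2.116)

t = np.linspace(0, T_orbit, 20001)        # uniform sampling over one period
sol = solve_ivp(kepler, (0, T_orbit), [1.0, 0.0, 0.0, 0.8],
                t_eval=t, rtol=1e-11, atol=1e-11)
x, y, vx, vy = sol.y
Tkin = 0.5 * (vx**2 + vy**2)
V = -GM / np.sqrt(x**2 + y**2)
print("<T>    =", Tkin.mean())            # equals -<V>/2 for n = 1
print("-<V>/2 =", -0.5 * V.mean())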

2.9 Summary

In this chapter, Hamilton's Principle of stationary action was shown to be a powerful and fundamental aspect of physical law—dynamical systems follow trajectories in configuration space for which the action along the trajectory is an extremum. The action is defined as the time integral of the difference between the kinetic and potential energies, which is also the time integral of the Lagrangian (L = T − U). Once the Lagrangian is defined, the Euler equations of variational calculus lead immediately to the Euler–Lagrange equations of dynamics. When constraints are added to the dynamical problem, the method of Lagrange's undetermined multipliers can be used without reducing the number of generalized coordinates. The Lagrangian formalism leads directly to statements of conservation laws, in particular conservation of energy as well as conservation of any momentum that is conjugate to a cyclic (ignorable) coordinate. Central force dynamics are classic problems addressed within the Lagrangian formalism, leading to Kepler's laws in the case of an inverse-square force law. When a dynamical system has many degrees of freedom, the virial theorem can help identify relationships among average properties of the system.

⁶ The virial theorem was derived by Rudolf Clausius in 1870 for ideal gases. Clausius is also responsible for the first derivation of entropy and gave it its name (see Chapter 5 of D. D. Nolte, Galileo Unbound (Oxford University Press, 2018)).

2.10 Bibliography

V. I. Arnold, Mathematical Methods of Classical Mechanics, 2nd ed. (Springer, 1989).
T. Frankel, The Geometry of Physics: An Introduction (Cambridge University Press, 2003).
H. Goldstein, C. Poole, and J. Safko, Classical Mechanics, 3rd ed. (Addison-Wesley, 2001).
C. Lanczos, The Variational Principles of Mechanics (Dover, 1949).
D. S. Lemons, Perfect Form: Variational Principles, Methods, and Applications in Elementary Physics (Princeton University Press, 1997).
D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018).
R. Talman, Geometric Dynamics (Wiley, 2000).
R. H. Wasserman, Tensors and Manifolds: With Applications to Mechanics and Relativity, Chapters 18 and 19 (Oxford University Press, 1992).

2.11 Homework problems

1. Variational calculus: Find I(ε) for y = x and η = sin x for the function f = (dY/dx)², where x₁ = 0 and x₂ = 2π.

2. Variational calculus: Find I(ε) for y = x and η = sin x for the function f = √(1 + (dY/dx)²), where x₁ = 0 and x₂ = 2π.

3. Geodesic: Prove that a geodesic on the plane is a straight line.

4. Geodesic: Prove that a geodesic curve on a right cylinder is a helix.

5. Velocity-dependent potential: A Lagrangian with a velocity-dependent potential can take the general form
$$ L = A q^2 + 2 B q \dot{q} + C \dot{q}^2 $$
with constant coefficients. Find the equations of motion and solve them. What role is played by the middle velocity-dependent term in the Lagrangian? Why?

6. Parabolic wire: Derive the Lagrangian and the equations of motion for a bead sliding on a frictionless parabolic wire under gravity.

7. Spherical surface: Derive the Lagrangian and the equations of motion for a bead constrained to slide on a frictionless spherical surface under gravity. Render a set of isosurfaces of the motion in state space.

8. Motion on a cylinder: Apply the Euler–Lagrange equations to a particle constrained to move on the surface of a cylinder, subject to the force F⃗ = −k r⃗, where the position vector is relative to a point on the central axis of the cylinder.

9. Particle on a cone: Derive the equations of motion for a particle constrained to move on a cone. Find the equation for r̈ but do not solve it. Find constants of the motion.

10. Double pendulum: A double pendulum consists of two ideal pendula (mass at the end of a massless rod) connected in series (the top of the second pendulum is attached to the mass of the first). The lengths of the pendula are L₁ and L₂, and the masses are M₁ and M₂. The problem has two generalized coordinates (the two angular deflections). Derive the Euler–Lagrange equations of motion and solve them for small deflections.

11. Central force law: Consider a central force law f = kr. What is the character of the solutions (what type of functions occur) for k < 0? For k > 0?

12. Elliptical orbits: Derive the expressions of Eq. (2.107):
$$ a = \frac{\alpha}{1 - \varepsilon^2} = \frac{G M m}{2|E|} \qquad b = \frac{\alpha}{\sqrt{1 - \varepsilon^2}} = \frac{\ell}{\sqrt{2\mu |E|}} = \sqrt{\alpha a} $$

13. Elliptical orbits: In Eq. (2.109), show that
$$ -\frac{G M m}{r_{\max}} + \frac{\ell^2}{2\mu r_{\max}^2} = -\frac{G M m}{2a} $$

14. Three-body problem: Derive the equations of motion and the period for three equal masses that are positioned at the vertex points of an equilateral triangle.

15. Lagrangian of a skewed cylindrical pendulum: A pendulum consists of a mass m on a massless rope that is wrapped around a cylinder of radius R and moment of inertia I that rotates about its axis. A mass m_b is attached by a massless rigid rod of length L to the cylinder. Assume that the masses and lengths allow the equilibrium angle θ₀ to be less than π/2. Derive the angular frequency of oscillation ω for small oscillations as a function of the masses, sizes, and I.

16. Lagrangian for a cylinder rolling down a sliding wedge: A cylinder of mass m and radius R rolls without slipping down an inclined plane of mass M and incline angle α. The inclined plane can slide without friction on the surface of the table. Solve for the accelerations of the cylinder and wedge.

17. Lagrangian for a spherical pendulum: Derive the equations of motion for a spherical pendulum for arbitrary amplitudes. A spherical pendulum is a mass on a rigid massless rod, suspended without friction at one end, that is free to move in three dimensions. Solve for small-amplitude orbits.

18. Lagrangian for a rotating hoop: A hoop in gravity rotates about a vertical axis with angular speed ω. A mass slides without friction on the hoop. What is the equilibrium angle θ of the mass as a function of ω? What is the oscillation frequency for small-amplitude oscillations about this angle? What is the critical angular frequency ω_c at which the motion of the mass qualitatively changes?

3 Hamiltonian Dynamics and Phase Space

Hamiltonian Mechanics is geometry in phase space.
—V. I. Arnold (1978)

Passing from dynamics based on velocities and accelerations to a description in which momenta are variables independent of the position coordinates transforms physics problems from a Lagrangian formalism to a Hamiltonian formalism. When a system is conservative, and energy is a constant of the motion (a first integral), the Hamiltonian viewpoint is the most natural. Hamiltonian dynamics takes place in phase space, which is a state space spanned by coordinates and their conjugate momenta. A fundamental geometric property of conservative systems is the conservation of phase-space volume along a trajectory, known as Liouville's theorem. A valuable feature of Hamiltonian dynamics is the ability to find transformations, known as canonical transformations, of dynamical variables into new variables that simplify the description of the dynamical system. For instance, action-angle variables are the starting point for understanding the nonlinear Hamiltonian dynamics that will be studied in Chapter 5.


Chapter contents: 3.1 The Hamiltonian function · 3.2 Phase space · 3.3 Integrable systems and action-angle variables · 3.4 Adiabatic invariants · 3.5 Summary · 3.6 Bibliography · 3.7 Homework problems


3.1 The Hamiltonian function

A first integral of Lagrangian dynamics, when there is no explicit dependence of the potential on time, is given by Eq. (2.77) as

$$ \sum_{a=1}^{N} \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} - L = \text{const.} \qquad (3.1) $$

where

$$ \sum_{a=1}^{N} \dot{q}^a\, \frac{\partial L}{\partial \dot{q}^a} = 2T \qquad (3.2) $$

and the first integral is identified as the total energy

$$ E = 2T - L = 2T - (T - U) = T + U \qquad (3.3) $$

Furthermore, from Eq. (2.82), the momentum that is conjugate to the coordinate qᵃ is defined as¹

$$ p_a = \frac{\partial L}{\partial \dot{q}^a} \qquad (3.4) $$

and the first integral, designated by the symbol H and known as the Hamiltonian, is expressed as

Hamiltonian function:
$$ H\left( q^a, p_a, t \right) = \sum_{a=1}^{N} p_a \dot{q}^a - L\left( q^a, \dot{q}^a, t \right) \qquad (3.5) $$

The time derivative of the Hamiltonian is²

$$ \frac{dH}{dt} = \frac{d}{dt}\left( p_a \dot{q}^a - L \right) = \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) \dot{q}^a + p_a \ddot{q}^a - \frac{dL}{dt} = \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^a} \right) \dot{q}^a + p_a \ddot{q}^a - \frac{\partial L}{\partial q^a}\,\dot{q}^a - \frac{\partial L}{\partial \dot{q}^a}\,\ddot{q}^a - \frac{\partial L}{\partial t} \qquad (3.6) $$

where the possibility of explicit time dependence in the Lagrangian is allowed. Using the definition of conjugate momentum in Eq. (3.4) for the third term and the Euler–Lagrange equations for the first term yields

$$ \frac{dH}{dt} = \frac{\partial L}{\partial q^a}\,\dot{q}^a + p_a \ddot{q}^a - \frac{\partial L}{\partial q^a}\,\dot{q}^a - p_a \ddot{q}^a - \frac{\partial L}{\partial t} = -\frac{\partial L}{\partial t} \qquad (3.7) $$

Therefore, if there is no explicit time dependence in the Lagrangian, then the Hamiltonian is constant in time—energy is conserved.

¹ In terms of notation, note that momentum is the derivative of a scalar quantity with respect to a vector. This is what is known as a covariant derivative, leading to a covariant quantity p_a with the subscript below. (The mnemonic for remembering that the covariant quantity uses a subscript: "co goes below.")

² This expression, and many others in this chapter, use the implicit Einstein summation notation.

3.1.1 Legendre transformations and Hamilton's equations

The mathematical form of Eq. (3.5) is called a Legendre transform, enabling transformation from one set of variables (qᵃ, q̇ᵃ) to another set (qᵃ, p_a). Legendre transforms are used when it is desired to transform a function f(x) to a new function g(y), because y might be a more "natural" variable to work with than x. In mechanics, momenta may be viewed as more fundamental than velocities, because momenta are often conserved quantities in dynamical motion. Therefore, one would like to transform the Lagrangian from a function of velocities L(qᵃ, q̇ᵃ) to a new function of momenta H(qᵃ, p_a). This new function would obey new equations of motion for the qᵃ and the p_a, although clearly the new equations would be consistent with the Euler–Lagrange equations because they describe the same physics.

The Legendre transformation is defined for a single-variable function f(x) as

$$ \mathcal{L}f(x) = x\,\frac{df}{dx} - f = F(p) \qquad (3.8) $$

where the function f(x) is a convex function of x (like the quadratic dependence of kinetic energy on velocity). A Legendre transform takes a function f(x) in one space and converts it to a function F(p) in a corresponding space called a dual space. The Legendre transform of the Lagrangian function takes a function expressed in (q, q̇) and expresses it as a function in (q, p). Dual spaces are equivalent, but provide different perspectives that can be enlisted to simplify problems.

A graphical example of a Legendre transform is shown in Fig. 3.1 for y = f(x) = x². A convex function y = f(x) is compared with lines y = px to create a difference function F = px − f(x). This difference has a maximum at a unique position x_p if f(x) is convex, as shown in the figure. The value of x_p is found by solving

$$ \left. \frac{\partial F}{\partial x} \right|_{x_p} = F'(x_p) = 0 \qquad (3.9) $$

for x_p. The function F(p) = F(x_p), when x_p is expressed in terms of p, is the Legendre transform of f(x).

Figure 3.1 Graphical meaning of the Legendre transform of y = x². The difference between y = px and y = f(x) is a maximum F at x_p. The dependence of F on p defines F(p), which is the Legendre transform of f(x). The function f(x) must be convex (curving upward like a quadratic) for the difference to have a unique maximum.
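The graphical construction of Fig. 3.1 translates directly into a brute-force numerical Legendre transform, F(p) = max_x [px − f(x)]. A minimal sketch (NumPy assumed) recovers F(p) = p²/4 for f(x) = x²:

import numpy as np

def legendre(f, p, x):
    """Numerical Legendre transform F(p) = max over x of [p*x - f(x)]."""
    X, P = np.meshgrid(x, p)
    return np.max(P * X - f(X), axis=1)

x = np.linspace(-10, 10, 20001)
p = np.array([2.0, 4.0, 6.0, 8.0])
F = legendre(lambda x: x**2, p, x)
print(F)            # matches p**2/4, the Legendre transform of x^2
print(p**2 / 4)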

Example 3.1 One-dimensional Legendre transform

As an explicit example of a Legendre transformation of a Lagrangian, consider the one-dimensional simple harmonic oscillator. The Lagrangian is

$$ L = \frac{1}{2}\, m \dot{x}^2 - \frac{1}{2}\, k x^2 \qquad (3.10) $$

and the canonical momentum is p = ∂L/∂ẋ = mẋ from Eq. (3.4). The Legendre transformation is

$$ \mathcal{L}L(x, \dot{x}) = \dot{x}\,\frac{\partial L}{\partial \dot{x}} - L = \frac{p}{m}\, p - \frac{1}{2}\, m \left( \frac{p}{m} \right)^2 + \frac{1}{2}\, k x^2 = \frac{p^2}{2m} + \frac{1}{2}\, k x^2 = H(x, p) \qquad (3.11) $$

which leads to the expected equation for the Hamiltonian. This transformation took the variable ẋ in the Lagrangian to the new variable p in the Hamiltonian. The Legendre transform is its own inverse, as can be seen through

$$ \mathcal{L}H(x, p) = p\,\frac{dH}{dp} - H = m\dot{x}^2 - \frac{1}{2}\, m \dot{x}^2 - \frac{1}{2}\, k x^2 = \frac{1}{2}\, m \dot{x}^2 - \frac{1}{2}\, k x^2 = L(x, \dot{x}) \qquad (3.12) $$

which leads back to the Lagrangian.

The Legendre transform for a multivariable Lagrangian can be expressed as

$$ \mathcal{L}L\left( q^a, \dot{q}^a, t \right) = \dot{q}^a\left( q^a, p_a, t \right) p_a - L\left( q^a, \dot{q}^a\left( q^a, p_a, t \right), t \right) = H\left( q^a, p_a, t \right) \qquad (3.13) $$

where

Conjugate momentum:
$$ p_a = \frac{\partial L\left( q^a, \dot{q}^a, t \right)}{\partial \dot{q}^a} \qquad (3.14) $$

is the momentum conjugate to the generalized coordinate qᵃ, which holds if

$$ \det\left| \frac{\partial^2 L}{\partial \dot{q}^a\, \partial \dot{q}^b} \right| \neq 0 \qquad (3.15) $$

which is the Hessian condition for invertibility. The total differential of the Hamiltonian is

$$ dH = \frac{\partial H}{\partial q}\, dq + \frac{\partial H}{\partial p}\, dp + \frac{\partial H}{\partial t}\, dt \qquad (3.16) $$

which is also equal to

$$ dH = \dot{q}\, dp - \frac{\partial L}{\partial q}\, dq - \frac{\partial L}{\partial t}\, dt \qquad (3.17) $$

Equating terms between these equations gives

$$ \dot{q} = \frac{\partial H}{\partial p} \qquad \dot{p} = \frac{\partial L}{\partial q} = -\frac{\partial H}{\partial q} \qquad \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t} \qquad (3.18) $$

The Hamilton equations of motion are

Hamilton's canonical equations:
$$ \dot{q}^a = \frac{\partial H}{\partial p_a} \qquad \dot{p}_a = -\frac{\partial H}{\partial q^a} \qquad \frac{dH}{dt} = -\frac{\partial L}{\partial t} \qquad (3.19) $$

These are known as the canonical equations. They consist of 2N first-order equations that replace the N second-order equations of the Euler–Lagrange equations.
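Because the canonical equations are first-order, they can be handed directly to an ODE solver. A minimal sketch (SciPy assumed; unit mass and spring constant are illustrative) integrates the simple harmonic oscillator in the (q, p) plane and confirms that H stays constant along the trajectory:

import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0     # illustrative

def hamilton(t, s):
    q, p = s
    return [p / m, -k * q]        # qdot = dH/dp, pdot = -dH/dq, Eq. (3.19)

sol = solve_ivp(hamilton, (0, 20), [1.0, 0.0],
                t_eval=np.linspace(0, 20, 2000), rtol=1e-10, atol=1e-10)
q, p = sol.y
H = p**2 / (2 * m) + 0.5 * k * q**2
print("H drift:", H.max() - H.min())   # the (q, p) orbit is the ellipse H = E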

Example 3.2 Three-dimensional Legendre transform

A mass subject to gravity is constrained to move on the surface of a sphere. The line element is

$$ ds^2 = r^2\, d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \qquad (3.20) $$

and the mass has a speed

$$ \dot{s}^2 = r^2 \dot{\theta}^2 + r^2 \sin^2\theta\, \dot{\phi}^2 \qquad (3.21) $$

The kinetic energy and potential energy are

$$ T = \frac{1}{2}\, m \dot{s}^2 \qquad V = m g r \cos\theta \qquad (3.22) $$

The Lagrangian is

$$ L = \frac{1}{2}\, m \left( r^2 \dot{\theta}^2 + r^2 \sin^2\theta\, \dot{\phi}^2 \right) - m g r \cos\theta \qquad (3.23) $$

Define the momenta

$$ p_\theta = \frac{\partial L}{\partial \dot{\theta}} = m r^2 \dot{\theta} \qquad p_\phi = \frac{\partial L}{\partial \dot{\phi}} = m r^2 \sin^2\theta\, \dot{\phi} \qquad (3.24) $$

$$ \dot{\theta} = \frac{p_\theta}{m r^2} \qquad \dot{\phi} = \frac{p_\phi}{m r^2 \sin^2\theta} $$

Therefore, the Legendre transform is

$$ H = \dot{\theta}\, p_\theta + \dot{\phi}\, p_\phi - L = \frac{p_\theta}{m r^2}\, p_\theta + \frac{p_\phi}{m r^2 \sin^2\theta}\, p_\phi - \frac{1}{2}\, m \left( r^2 \dot{\theta}^2 + r^2 \sin^2\theta\, \dot{\phi}^2 \right) + m g r \cos\theta \qquad (3.25) $$

which simplifies to

$$ H = \frac{p_\theta^2}{2 m r^2} + \frac{p_\phi^2}{2 m r^2 \sin^2\theta} + m g r \cos\theta \qquad (3.26) $$

which is the total energy of the system.


3.1.2 Canonical transformations

The choice of generalized coordinates (and hence their conjugate momenta) is not unique. This raises the question of how to define new pairs of variables in terms of which to write the Hamiltonian function. Therefore, we seek a coordinate transformation such that

$$ p \to P(q, p) \qquad q \to Q(q, p) \qquad H(q, p, t) \to K(Q, P, t) \qquad (3.27) $$

To keep the equations of motion invariant, the new variables must satisfy an inverse canonical transformation back to the original variables. Canonical transformations satisfy the relation

$$ \sum_{a=1}^{N} p_a \dot{q}^a - H(q, p, t) = \sum_{a=1}^{N} P_a \dot{Q}^a - K(Q, P, t) + \frac{dM}{dt} \qquad (3.28) $$

summed over N generalized coordinates. The Legendre transform is defined to within a total derivative of a function M(q, Q) that depends on the position coordinates but not their velocities. The search for an appropriate function M to effect the transformation takes some experience and skill. There are several types of canonical transformation, but this textbook will primarily perform transformations to action and angle coordinates.

Example 3.3 Canonical transformation for the simple harmonic oscillator

The Hamiltonian for the simple harmonic oscillator is

$$ H(q, p) = \frac{p^2}{2m} + \frac{1}{2}\, m \omega^2 q^2 \qquad (3.29) $$

One would like to find a canonical transformation that makes a transformed coordinate Q into a cyclic coordinate, so that the conjugate momentum P of the new coordinate Q would be a conserved quantity. The new Hamiltonian, from Eq. (3.28), is

$$ K(Q, P) = H + P\dot{Q} - p\dot{q} + \frac{dM}{dt} \qquad (3.30) $$

where M(q, Q) is a function of the old and new coordinates:

$$ \frac{dM(q, Q)}{dt} = \frac{\partial M}{\partial q}\,\dot{q} + \frac{\partial M}{\partial Q}\,\dot{Q} \qquad (3.31) $$

The transformation to the new Hamiltonian is easily identified as K = H when

$$ p = \frac{\partial M}{\partial q} \qquad P = -\frac{\partial M}{\partial Q} \qquad (3.32) $$

The quadratic forms of both the kinetic energy and potential energy in Eq. (3.29) suggest a transformation such as

$$ p = A \cos Q \qquad q = B \sin Q \qquad (3.33) $$

where A and B are functions of P. Substituting these into the Hamiltonian gives

$$ H(Q, P) = \frac{A^2}{2m}\cos^2 Q + \frac{1}{2}\, m \omega^2 B^2 \sin^2 Q \qquad (3.34) $$

By setting the coefficients equal,

$$ \frac{A^2}{2m} = \frac{1}{2}\, m \omega^2 B^2 \qquad m^2 \omega^2 B^2 = A^2 \qquad B = \frac{A}{m\omega} \qquad (3.35) $$

and then the Hamiltonian takes on the simple form

$$ H(P) = \frac{A^2}{2m} \qquad (3.36) $$

To find A as a function of P, take the ratio of p and q in Eq. (3.33),

$$ p = q\,\frac{A \cos Q}{B \sin Q} = q\, m\omega \cot Q = \frac{\partial M}{\partial q} \qquad (3.37) $$

and identify this as the partial derivative in Eq. (3.32). This gives the expression for the transformation function M(q, Q) as

$$ M = \frac{1}{2}\, m\omega\, q^2 \cot Q \qquad (3.38) $$

Then

$$ P = -\frac{\partial M}{\partial Q} = \frac{1}{2}\, m\omega\, \frac{q^2}{\sin^2 Q} = \frac{1}{2}\, m\omega\, B^2 = \frac{1}{2}\,\frac{A^2}{m\omega} \qquad (3.39) $$

Hence

$$ A = \sqrt{2 m \omega P} \qquad (3.40) $$

Substituting this into Eq. (3.36) gives

$$ H = \omega P \qquad (3.41) $$

Hamilton's canonical equations of motion immediately yield

$$ \dot{Q} = \omega \qquad \dot{P} = 0 \qquad (3.42) $$

This canonical transformation has yielded extremely simple dynamics. For reasons that will become clear in Section 3.3, the new coordinate Q is called an "angle" coordinate, and the new coordinate P is called an "action" coordinate. Therefore, this canonical transformation of the simple harmonic oscillator has yielded an action-angle representation in which the action variable is conserved.

3.2 Phase space

Hamilton's canonical equations in Eq. (3.19) describe a dynamical flow (in the context of Chapter 1) in pairs of variables (qᵃ, p_a). The solutions to Hamilton's equations are trajectories in even-dimensional spaces with coordinate axes defined by (qᵃ, p_a). This space is called phase space. For systems with N generalized coordinates, the phase space is 2N-dimensional. Phase space is more than a convenient tool for visualizing solutions to Hamilton's equations. This is because Hamilton's equations impose a specific symmetry (called symplectic) on the possible trajectories in phase space, and guarantee the conservation of phase space volume for Hamiltonian systems, known as Liouville's theorem.

For a dynamical system with one degree of freedom, phase space is defined by two coordinate axes: one for the space variable q and the other for the momentum p (Fig. 3.2). Trajectories in phase space are one-parameter curves (parametric curves with time as the parameter) as both position and momentum develop in time. A point on the trajectory at a specific time specifies the complete state of the system. Because energy is conserved, and is a constant of the motion for a conservative system, the energy isosurfaces have one dimension less than the dimension of the phase space. Energy isosurfaces can never cross, and so appear as contours in phase space.

Generalizing to more degrees of freedom is straightforward. When there are N generalized coordinates, the dimension of the phase space is 2N. Within the 2N-dimensional phase space, the iso-energy hypersurfaces have dimension 2N − 1, because energy conservation provides one constraint. When a physical system consists of M particles, there are N = 3M generalized coordinates and 2N = 6M dimensions to the phase space. The state of the system at a given instant in time is defined by a single point in the 6M-dimensional phase space. As time progresses, this point moves through the phase space on a trajectory defined by the Hamiltonian equations given initial conditions.

Figure 3.2 Characteristic 2D phase space for a system with one degree of freedom. A phase point uniquely defines the state of the system, which evolves on a state trajectory. An area of initial values dA₁ = dp_x dx evolves in time, and may distort, but does not change area (dA₂ = dA₁) for a conservative Hamiltonian system.

Figure 3.3 Volume in state space evolves according to the flow equations ẋ = f(x): a surface S(t) with outward normal n̂ enclosing a volume V(t) evolves into the surface S(t + Δt).

3.2.1 Liouville's theorem and conservation of phase space volume

Liouville's theorem states that volumes in phase space are invariant to the evolution of a Hamiltonian system in time. The conservation of phase space volume follows from the geometry of phase space and Hamilton's equations. To gain an intuitive understanding of the time evolution of a dynamical volume, consider a surface S(t) enclosing a set of initial conditions in a volume of state space, as in Fig. 3.3. The set of initial conditions enclosed by the surface evolves in time governed by the flow equations ẋ = f(x). At a later time, the surface has changed to S(t + dt) and encloses a new volume V(t + dt). The change in volume per time is given by the net flux in or out of the surface:

$$ \frac{dV}{dt} = \frac{V(t + dt) - V(t)}{dt} = \oint_S \vec{f} \cdot \hat{n}\; dS = \oint_S \vec{f} \cdot d\vec{S} \qquad (3.43) $$

By the divergence theorem, this becomes

$$ \frac{dV}{dt} = \int_V \vec{\nabla} \cdot \vec{f}\; dV \qquad (3.44) $$

integrated over the volume. Therefore, the time rate of change of a volume of states in state space is determined by the divergence of the flow equations integrated over the volume of states.

Example 3.4 Change of volume in state space

As an example, consider the linear flow equations

$$ \dot{x} = a x + b y \qquad \dot{y} = b y - a x \qquad \dot{u} = c u - d v \qquad \dot{v} = d v + c u \qquad (3.45) $$

The flow has the divergence

$$ \vec{\nabla} \cdot \vec{f} = \frac{\partial f^x}{\partial x} + \frac{\partial f^y}{\partial y} + \frac{\partial f^u}{\partial u} + \frac{\partial f^v}{\partial v} = a + b + c + d = \text{tr}(J) \qquad (3.46) $$

which is the trace of the Jacobian matrix. The rate of change in volume is

$$ \frac{dV}{V} = (a + b + c + d)\, dt \qquad (3.47) $$

with the solution

$$ V(t) = V(0)\, e^{(a + b + c + d)\, t} \qquad (3.48) $$

Depending on the sign of the divergence, the volume grows exponentially or vanishes. However, if ∇⃗·f⃗ = 0, then the volume does not change—it is conserved. Therefore, the vanishing of the divergence of the flow function is a strong test for volume conservation during dynamical evolution.
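For a linear flow such as Eq. (3.45), the statement V(t) = V(0)e^{tr(J)t} is exact, since the volume ratio of the flow map e^{Jt} is det(e^{Jt}) = e^{tr(J)t}. A minimal sketch (SciPy assumed; the coefficients a, b, c, d are illustrative values) checks Eq. (3.48) numerically:

import numpy as np
from scipy.linalg import expm

a, b, c, d = 0.1, -0.3, 0.2, -0.05     # illustrative coefficients
J = np.array([[ a, b, 0,  0],          # Jacobian of the flow, Eq. (3.45)
              [-a, b, 0,  0],
              [ 0, 0, c, -d],
              [ 0, 0, c,  d]])

t = 2.0
Phi = expm(J * t)                      # flow map of the linear system
print(np.linalg.det(Phi))              # volume ratio V(t)/V(0)
print(np.exp(np.trace(J) * t))         # e^{(a+b+c+d)t}, Eq. (3.48)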

(3.49)

94 Introduction to Modern Dynamics This represents a coordinate transformation through an infinitesimal shift. The Jacobian matrix for this transformation is a

Jb = 1 +

∂f a dt ∂xb

(3.50)

The volume of a transformed volume element is related to the original volume element through the Jacobian determinant: dV =



dxa

a

  ∂f a dxa 1 + a dt ∂x a

 ∂f a = 1+ dt d N x a ∂x a =



(3.51)

where terms of degree dt2 and higher are neglected. The continuous transformation has diagonal terms that are arbitrarily close to unity, and the multidimensional determinant simplifies to the product of the diagonal terms plus terms of higher order. The product of diagonal terms is approximated by unity plus the trace of the original Jacobian, as  ∂f a | Jab | = 1 + dt ∂xa a   = 1 + tr Jba dt (3.52) The trace is

  · f tr Jab = ∇

(3.53)

which is the trace of the original determinant that appears in Eq. (3.44). Because the trace vanishes for a Hamiltonian flow, | Jab |= 1 and d N x =| Jab | d N x = dN x

(3.54)

and the volume of state (phase) space is preserved by the dynamics.

3.2.2 Poisson brackets

When asking how a dynamical quantity (some function of the p's and q's) varies along the trajectory in phase space, a new quantity emerges called the Poisson bracket. The Poisson bracket appears in several important contexts in classical mechanics, and has a direct analog in the commutator of quantum mechanics. The Poisson bracket is a useful tool with which to explore canonical transformations as well as integrals (constants) of motion.

If we have a dynamical quantity G(q, p, t), then its time rate of change along a trajectory in phase space is

$$ \frac{dG}{dt} = \frac{\partial G}{\partial t} + \frac{\partial G}{\partial q^a}\,\dot{q}^a + \frac{\partial G}{\partial p_a}\,\dot{p}_a = \frac{\partial G}{\partial t} + \frac{\partial G}{\partial q^a}\frac{\partial H}{\partial p_a} - \frac{\partial G}{\partial p_a}\frac{\partial H}{\partial q^a} = \frac{\partial G}{\partial t} + \{G, H\} \qquad (3.55) $$

where the canonical equations have been used in the second step, and the last expression in braces is called the Poisson bracket. More generally, for two dynamical quantities F and G, the Poisson bracket is

Poisson bracket:
$$ \{G, F\} = \frac{\partial G}{\partial q^a}\frac{\partial F}{\partial p_a} - \frac{\partial G}{\partial p_a}\frac{\partial F}{\partial q^a} \qquad (3.56) $$

where the implicit summation is over the generalized coordinates of the dynamical system.

The Poisson bracket has several uses in classical mechanics. The canonical equations can be written in terms of the Poisson bracket with the Hamiltonian as

$$ \dot{q}^a = \left\{ q^a, H \right\} \qquad \dot{p}_a = \left\{ p_a, H \right\} \qquad (3.57) $$

Furthermore, if G is an integral of the motion, then

$$ \frac{\partial G}{\partial t} + \{G, H\} = 0 \qquad (3.58) $$

and if G does not depend explicitly on time, then

$$ \{G, H\} = 0 \qquad (3.59) $$

The Poisson bracket plays a central role in the correspondence between dynamics in classical systems and dynamics in quantum systems. In particular, Eq. (3.55) for classical systems has a direct correspondence to the quantum mechanical equation

$$ \frac{d\hat{G}}{dt} = \frac{\partial \hat{G}}{\partial t} + \frac{1}{i\hbar}\left[ \hat{G}, \hat{H} \right] \qquad (3.60) $$

for a quantum operator Ĝ and Hamiltonian operator Ĥ, where [Ĝ, Ĥ] is the commutator. This correspondence suggests the plausible substitution

$$ \{G, F\} \;\Rightarrow\; \frac{1}{i\hbar}\left[ \hat{G}, \hat{F} \right] \qquad (3.61) $$

when beginning with classical systems and going over to quantum mechanical dynamics. Conjugate variables in classical dynamics with non-vanishing Poisson brackets lead to non-vanishing commutation relations for the corresponding quantum operators. Perhaps the most famous of these is the canonical transformation for qᵢ ⇒ Qᵢ and pⱼ ⇒ Pⱼ,

$$ \left\{ Q_i, P_j \right\} = \delta_{ij} \qquad \left\{ Q_i, Q_j \right\} = 0 \qquad \left\{ P_i, P_j \right\} = 0 $$

where the Poisson bracket is calculated relative to the original variables qᵢ and pⱼ. The corresponding quantum commutation relations are

$$ \left[ \hat{q}_i, \hat{p}_j \right] = i\hbar\,\delta_{ij} \qquad \left[ \hat{q}_i, \hat{q}_j \right] = 0 \qquad \left[ \hat{p}_i, \hat{p}_j \right] = 0 $$

which lead to Heisenberg's uncertainty principle.

3.3 Integrable systems and action-angle variables When there are as many constants (integrals) of motion as there are degrees of freedom, a system is said to be integrable. The number of degrees of freedom is equal to the number of generalized coordinates minus the number of constraint equations. For an integrable system, it is possible, in principle, to find a canonical transformation leading to action and angle variables for which the trajectories of the dynamical system are geodesics on the surface of a hypertorus.3 Hamilton’s equations are transformed into the set of equations ∂H J˙a = − a = 0 ∂θ ∂H θ˙ a = = ωa (Ja ) ∂Ja

(3.62)

where the Ja are constant. The constants Ja are called the action, and the variables θ a are called the angle. The angles are not physical angles, but rather describe the position of the trajectory point in phase space. The equations of motion for the system are simply θ a (t) = ωa t + θ0a

(3.63)

The appropriate canonical transformation is achieved through the total differential (with implicit Einstein summation) 3 A hypertorus is the higherdimensional generalization of a torus. Note that a regular torus has a 2D surface and is spanned by two periodic angular variables. An n-torus is spanned by n periodic angular variables. There is also a 1-torus, which is just a circle.

dM = pa dqa − Ja dθ a

(3.64)

Because the motion is periodic in θ , the transformation function M(q, θ ) must also be a periodic function of θ. A closed path has period 2π , over which the integral of the total differential must vanish,

 0=



Hamiltonian Dynamics and Phase Space 97



dM =

pa dqa − Jk

dθ a

 =

pa dqa − Jk 2π

(3.65)

and hence Jk =

1 2π

 pa dqa

(3.66)

k

where the closed path is around N mutually orthogonal directions in phase space. Green’s theorem around a simple region is 



f dx =

C

∂f dA ∂y

(3.67)

dA = Area enclosed

(3.68)

A

By setting f = p, this is



 p dx = C

∂p dA = ∂p

A



A

Therefore, the action is the area enclosed by the phase space path: Jk =

1 2π



dpa dqa

(3.69)

Ak

Action-angle oscillators are a central concept in much of advanced dynamics. For instance, the simple rigid rotator is the beginning point of theories about stability in the Solar System and Hamiltonian systems in general (KAM theory described in Chapter 5). Simple action-angle oscillators are also known as Poincaré oscillators, which are used in theories of synchronization (the Kuramoto transition described in Chapter 6). The simple harmonic oscillator (SHO) is an unusual action-angle system because it has massive degeneracy—all ω are equal. Even though the SHO is a fundamental element of so much of modern physics, it is idiosyncratic and not representative of most real-world systems. The simple pendulum is a better example of natural systems that tend to be linear for small amplitudes but become nonlinear for large amplitudes.

Example 3.5 Special cases of action-angle coordinates Rigid rotator (Fig. 3.4) A rigid rotator consists of a rotating mass on a fixed radius without gravity. The angular momentum and moment of inertia are J = mvR

I = mR2

(3.70) continued

98 Introduction to Modern Dynamics

Example 3.5 continued The Hamiltonian is H=T =

J2 2I

(3.71)

The dynamical flow equations are J ∂H = =ω ∂J I J˙ = 0 θ˙ =

(3.72)

where the angular frequency depends linearly on J. The Hamiltonian can be expressed as 1 ωJ (3.73) 2 Note that J changes sign as the velocity changes sign, and hence ω must change sign to keep the energy positive. H=

ω=

J I

J

m θ R θ

Action-angle phase space

Configuration space

Figure 3.4 A rigid rotator consists of a rotating mass on a fixed radius (no gravity, just spinning in the plane). Circular planetary orbits (Fig. 3.5) The angular momentum and moment of inertia for circular planetary orbits are just as for the rigid rotator: J = mvR

I = mR2

(3.74)

The Hamiltonian now has a potential energy contribution that is independent of J: H =T +U =

mM J2 −G 2I R

(3.75)

The flow equations are θ˙ =

∂H J ν = = ∂J I R

J˙ = 0 From the virial theorem, we have

(3.76)

Hamiltonian Dynamics and Phase Space 99

Example 3.5 continued ν2 =

GM R

(3.77)

from which √ J = m GMR

or

R=

J2 GMm2

(3.78)

The flow equations are then m3 G2 M 2 =ω J3 J˙ = 0 θ˙ =

(3.79)

Orbits with smaller radii have smaller J and hence larger angular frequencies. The Hamiltonian in Eq. (3.75) can be expressed as a single term as a function of ω and J by substituting in Eq. (3.78): H= =

J2 mM −G 2I R 3 2 G2 M 2 m3 2m M − G 2J 2 J2

1 = − ωJ (3.80) 2 The Hamiltonian is negative because the orbits are bound. The action-angle phase space for circular orbits is shown in Fig. 3.5. Orbits can wind clockwise or counterclockwise. m3G2M2 θ˙ = J3 ˙ J=0

J

m

θ

Configuration space

Action-angle phase space

Figure 3.5 Action-angle phase space for circular planetary orbits. continued

100 Introduction to Modern Dynamics

Example 3.5 continued Simple harmonic oscillator (Fig. 3.6) The action is the enclosed area in phase space (divided by 2π):   2E 1 1 1√ E J= p dx = πpmax xmax = 2mE = 2π 2π 2 ω mω2

(3.81)

The flow equations are ∂H =ω ∂J J˙ = 0 θ˙ =

(3.82)

where the angular frequency is a constant independent of J. The Hamiltonian is expressed as H = ωJ

(3.83)

This is the same result obtained through the canonical transformation in Example 3.3. The independence of ω on E produces massive degeneracy (all energies have the same frequency), which means that the SHO is not characteristic of most real-world oscillators. p Constant-E orbits

J

q

θ Momentum position

Action-angle

Figure 3.6 Canonical transformation on a 1D harmonic oscillator taking (q, p) to the action-angle coordinates (θ , J). Each different orbit corresponds to a different total energy E. Simple pendulum The Hamiltonian is H= with angular momentum L= =

L2 + (1 − cos φ) mgR = H0 2I √



(3.84)

2IH0 − 2I (1 − cos φ) mgR

 2mgR 2 2IH0 1 − sin (φ/2) H0

(3.85)

Hamiltonian Dynamics and Phase Space 101

Example 3.5 continued The action is 1 J= 2π



1 L dφ = π

φ

max

L dφ 0

 φ

max √ 2 2IH0 2mgR 2 = 1− sin (φ/2) dφ π H0 √

=

4 2IH0 π

0  α

max

1−

2mgR 2 sin α dα H0

(3.86)

0

which is an elliptic integral, where the limit is expressed as the half-angle 

H0 −1 αmax = sin 2mgR The action is then expressed as



√ 4 2IH0 2mgR J= E αmax , π H0

(3.87)

(3.88)

where E (φ, k) is the incomplete elliptic integral of the second kind. This can be inverted numerically to yield H (J). The energy H 0 increases approximately linearly with action J up to H0 = mgR (approximating an SHO). For larger H 0 up to 2mgR, the action increases steeply. The derivative of H ( J) with respect to J is the angular frequency, ω=

∂H = θ˙ ∂J

(3.89)

which goes to zero at H 0 = 2mgR when the pendulum approaches the vertical with asymptotically zero momentum. The action angle is the first integral of the angular frequency: θ = ωt + const.

(3.90)

It is important to note that the action angle θ is not the physical angle of the pendulum, which is denoted by φ in this example. The action J and angle θ present a new coordinate system to represent phase space. It has a very simple structure—it is a torus (even for the nonlinear pendulum). For a system with one generalized coordinate (one degree of freedom), the system motion in phase space is a point on the (θ, J) phase plane describing a flow line that is a simple horizontal line, as shown on the right of Fig. 3.7. For an integrable system with two degrees of freedom, the full phase space is 4D. However, because the system is integrable, it has two constants of motion: J 1 and J 2 . Each action constant can be viewed as a fixed radius, with the angle variables θ 1 and θ 2 executing uniform circular motion. Two constants of motion in 4D phase space define a 2D hypersurface. The structure of this 2D hypersurface is

102 Introduction to Modern Dynamics 10 Constant-E orbits

Open orbits

J

Momentum

5

0

–5

–10 –5

Closed orbits 0

action-angle

5

θ

Angle φ

Figure 3.7 Phase space in the original coordinates (p, φ) transformed into action-angle coordinates (J, θ). isomorphic to a torus. One angle variable defines the motion about the origin, while the other defines the motion about the center (core) of the torus. The full phase space consists of infinitely many nested tori, like the layers of an onion, each for a specific choice of the constants of motion J 1 and J 2 . None of these tori can intersect, because of the uniqueness of the equations of motion. Once motion starts on a defined torus, it remains on that torus. For integrable systems with many degrees of freedom, the action-angle formalism produces an angle coordinate for each degree of freedom, and the motion in phase space takes place on the surface of a hypertorus. As an example, consider two independent linear harmonic oscillators that represent a system with two degrees of freedom. The Hamiltonian is H = ω1 J1 + ω2 J2

4 Such a system is sometimes called “ergodic,” which is a concept introduced by Boltzmann (see Galileo Unbound (Oxford University Press, 2018), Chapter 6).

(3.91)

This is a completely separable problem in which there are two frequencies of oscillation ω1 and ω2 and two angle variables θ and φ that increase linearly modulo 2π. The two angles define a configuration space that is equivalent to the surface of a 2D torus (Fig. 3.8). The ratio Ω = ω1 /ω2 of the two frequencies is called the winding number. If it is a rational fraction Ω = p/q, then the motion repeats itself after a period of time, T = qT2 = pT1 , and not all possible points in the configuration space are accessible for a given initial condition. On the other hand, if the ratio of frequencies is equal to an irrational number, then the motion never repeats exactly, and the orbit can come arbitrarily close to any point in the configuration space over a sufficiently long time.4 If the two oscillators are coupled by a nonintegral term in the Hamiltonian, then qualitatively different behavior may occur for commensurate versus incommensurate (rational versus irrational) ratios of the frequencies. We will see in Chapter 5 that new fixed points will appear in the

Hamiltonian Dynamics and Phase Space 103 2π

φ θ

Polar angle

φ Azimuthal angle

0

0



θ

Figure 3.8 Two equivalent (isomorphic) representations of a configuration space as a 2D torus. The configuration space has two angular variables. phase plane, and chaos can emerge. In Chapter 6, we will see the possibility of synchronization among “nearby” frequencies.

3.4 Adiabatic invariants An adiabatic invariant is a quantity that remains unaltered as some parameter of a Hamiltonian is slowly varied. Action variables are adiabatic invariants. The action integral is  1 J= p dx 2π  1 ∂x = p dθ (3.92) 2π ∂θ Taking the time derivative gives dJ 1 = dt 2π =

1 2π

  p˙

 ∂x d ∂x +p dθ ∂θ dt ∂θ

  −

 ∂H ∂x ∂H ∂ x˙ + dθ = 0 ∂x ∂θ ∂ x˙ ∂θ

(3.93)

where the second line substitutes from Hamilton’s equation for p˙ and expresses the conjugate momentum in terms of the derivative of the Hamiltonian. The time derivative vanishes as long as there is no appreciable change in the energy over a single cycle. As an example, consider an oscillating mass on a spring with a time-dependent spring constant k(t). The total energy of this system changes, but the closed action

104 Introduction to Modern Dynamics integral is an adiabatic invariant—it does not change in time. The invariant action for a harmonic oscillator is 

2π J = p dq = dp dq = A = π pmax xmax (3.94) which is the area enclosed by one period of the orbit in phase space. If the change in spring constant is slow relative to a period of oscillation, then  π √ 2mE 2E/k 2π = E/ω

J=

(3.95)

and the energy of the oscillator changes in time as 

k(t) m  k(t) = E0 k0

E(t) = J

(3.96)

where E 0 is the total energy of oscillation when the spring constant had the value k0 . If an oscillating mass on a spring is slowly cooled so that the spring increases in stiffness, then the total energy of the oscillator increases. An interesting question is: where does this increase in oscillator energy come from? (Finding the answer to this question is Homework problem 13.) Examples of adiabatic invariants are particularly transparent when the action is equal to angular momentum. Adiabatic invariance is then just conservation of angular momentum. Consider a mass m attached to a string on a frictionless table as the mass executes regular circular motion. If the string passes through a hole in the center of the table and is slowly pulled through this hole, then the radius r of the motion decreases and the angular speed increases such that angular momentum J = mvr = mωr 2 ω=

J mr 2

(3.97)

which is also how spinning figure skaters increase angular speed by pulling in their arms and legs. The energy is not conserved, because work is performed on the string by pulling it downward through the hole. The increment of work performed is dW = −F dr = m =

J2 dr mr 3

v2 dr r (3.98)

Hamiltonian Dynamics and Phase Space 105 Integrating gives

W =−

J2 m

r2

dr J2 = 3 m r

r1

= (ω1 − ω2 ) J



1 1 − 2 r12 r2

(3.99)

where we recognize the expression for the energy of a rigid rotator, and the work performed equals the change in kinetic energy. In freshman physics, this problem was approached through conservation of angular momentum. Now we see that it is a case of adiabatic invariance. In fact, adiabatic invariance is the fundamental origin of many of the problems associated with various forms of momentum conservation. In the Lagrangian and Hamiltonian formalism, there are conjugate momenta to every generalized coordinate variable, and, when the Hamiltonian is integrable, each conjugate momentum can be used to express an action variable that is an adiabatic invariant. An interesting correspondence between classical and quantum mechanics is encountered in the adiabatic invariants. If a classical quantity is an adiabatic invariant, then the associated quantum value is also an invariant, known as a quantum number. Quantum numbers are not altered by adiabatic (slow) variation of physical parameters, i.e., the adiabatic variation induces no transitions between stationary states. This correspondence was first demonstrated by Paul Eherenfest in 1913,5 pursuing a suggestion made by Einstein at the 1911 Solvay Congress. The full theory of quantum adiabatic invariants was completed by Dirac6 in 1925, just prior to the advent of Heisenberg’s quantum mechanics.

3.5 Summary Hamiltonian dynamics are derived from the Lagrange equations through the Legendre transform that expresses the equations of dynamics as first-order ODEs of the Hamiltonian (H = T + U ), which is a function of the generalized coordinates and their conjugate momenta. Consequences of the Lagrangian and Hamiltonian equations of dynamics are conservation of energy and conservation of momentum. When there are as many integrals of the motion as degrees of freedom, the Hamiltonian equations of motion are integrable. In this case, actionangle coordinates can be defined that simplify the dynamical motions. Each action integral becomes a conserved quantity, and all trajectories for a given energy lie on a hyperdimensional torus with a dimensionality equal to the number of angle coordinates. If the system properties change slowly, the action is conserved and is called an adiabatic invariant.

5 P. Ehrenfest, “Een mechanische theorema van Boltzmann en zijne betrekking tot de quanta theorie” [“A mechanical theorem of Boltzmann and its relation to the theory of energy quanta”], Verslag van de Gewoge Vergaderingen der Wis-en Natuurkungige Afdeeling, vol. 22, pp. 586–93, 1913. 6 P. A. M. Dirac, “The adiabatic invariance of the quantum integrals,” Proceedings of the Royal Society of London Series AContaining Papers of a Mathematical and Physical Character, vol. 107, pp. 725–34, Apr 1925.

106 Introduction to Modern Dynamics

3.6 Bibliography V. I. Arnold, Mathematical Methods of Classical Mechanics, 2nd ed. (Springer, 1989). L. Casetti, M. Pettini, and E. G. D. Cohen, “Geometric approach to Hamiltonian dynamics and statistical mechanics,” Physics Reports 337, 237–341 (2000). T. Frankel, The Geometry of Physics: An Introduction (Cambridge University Press, 2003). H. Goldstein, C. Poole, and J. Safko, Classical Mechanics, 3rd ed. (Addison-Wesley, 2001). D. D. Holm,Geometric Mechanics. (Imperial College Press/World Scientific, 2008). D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018). R. Talman, Geometric Dynamics (Wiley, 2000).

3.7 Homework problems 1. Quartic wire: Derive the Hamiltonian and Hamilton’s equations of motion for a bead sliding under gravity on a frictionless wire with a shape given by y = x4 . Draw the isosurfaces in phase space. What are the key qualitative differences from a parabolic wire with y = x2 ? 2. Spherical surface: Derive the Hamiltonian and Hamilton’s equations of motion for a massive bead constrained to slide on a frictionless spherical surface under gravity. Render a set of isosurfaces of the motion in phase space for fixed energy E and angular momentum pφ . 3. Motion on cylinder: Apply Hamilton’s equations to a particle constrained to move on the surface of a cylinder, subject to the force F = −k r , where the position vector is relative to a point on the central axis of the cylinder. 4. Action-angle: Find the canonical transformation that converts straight-line motion into action-angle coordinates. What is the physical interpretation of the conserved action? 5. Canonical transformation: Find a canonical transformation for the simple harmonic oscillator for which the new Hamiltonian is expressed as K = −iωPQ Evaluate the Poisson bracket for P and Q. 6. Liouville’s theorem: Find the Jacobian of the continuous coordinate transformation of the flow

Hamiltonian Dynamics and Phase Space 107 θ˙ = 2pθ φ˙ = p˙θ =

2pφ sin2 θ 2p2φ cos θ sin3 θ

+ sin θ

p˙φ = 0 Show that det J = 1 to lowest order. 7. Poisson bracket: Evaluate the Poisson bracket of the dynamical quantity G = pa qa for a Hamiltonian system. 8. Action of Kepler orbit: Find the action of an elliptical Keplerian orbit of eccentricity ε and semi-major axis a. Express your answer in terms of an elliptic integral. 9. Changing spring constant: A simple harmonic oscillator has a spring with a spring constant that is increasing linearly with time: k(t) = k0 + k1 t Use the Hamiltonian formalism to derive the equations of motion. Identify the conserved quantity in this system (it is not the energy). 10. Adiabatic increase of spring constant: Solve the previous problem using the concepts of adiabatic invariance. 11. Adiabatic invariant: A mass on a frictionless horizontal plane is attached to the origin by a spring of spring constant k. The spring has vanishing length in the limit of no force. The mass executes circular motion in the plane, the centrifugal force balanced by the spring force. The spring constant begins to change slowly in time (adiabatically). How do the speed, radius, angular frequency, and energy change in time? 12. Adiabatic invariant: A spherical pendulum (in the small-angle approximation) has a rope of length L that grows slowly shorter in time. Use the principle of adiabatic invariants to derive how the energy of the pendulum changes in time when (a) the pendulum swings in a vertical plane (planar pendulum); (b) the pendulum executes circular motion in a horizontal plane. 13. Adiabatic invariant: In the case of the harmonic oscillator with slowly increasing spring constant, where does the extra energy come from? Find an integral expression of the work performed on the system and evaluate it to arrive at Eq. (3.96) directly. 14. Quasi-periodicity: Two independent action angles have time derivatives ω1 and ω2 whose ratio ω1 /ω2 = φ where φ is the golden mean (an irrational number). What is the closest rational fraction p/q to φ for (p, q) 0

Re(λ) < 0

Im(λ) = 0

Im(λ) = 0

Unstable spiral

Stable spiral

.

.

Re(λ) > 0

Re(λ) < 0

Im(λ) ≠ 0

Im(λ) ≠ 0

Center

Saddle point

.

Re(λ) = 0 Im(λ) ≠ 0

Re(λ) = ±C1,2

Figure 4.3 Fixed-point classification in 2D state space flows according to the Lyapunov exponents.

120 Introduction to Modern Dynamics

Example 4.1 continued The vector flow of the rabbit versus sheep model is shown in Fig. 4.5. There are two stable fixed points to which the dynamics are attracted. There is an unstable fixed point at the origin, and a saddle point in the middle. The two stable fixed points are solutions either entirely of rabbits or entirely of sheep. This model shows a dynamic model in which rabbits and sheep cannot coexist.

τ2 – 4Δ = 0

τ Unstable nodes

Unstable spirals Saddle points

Δ

0

Centers Stable spirals

Stable nodes

Figure 4.4 Trace–determinant space of the characteristic values of the Jacobian of the linearized 2D system.

Self-limiting Growth rate Competition

3

x˙ = x (3 − x − 2y) y˙ = y (2 − x − y)

rix arat Sep

Sheep (y)

Stable 2 fixed point

y. -N

ul

lcl

. x

- Nu

in

e

llclin

e

1

Saddle point

x tri ra pa

Se

Figure 4.5 Phase portrait for the rabbits versus sheep example.The rabbits are on the x-axis and the sheep on the y-axis. Redrawn from Strogatz (1994).

Sep 0

Unstable 0 fixed point

arat rix

Stable 1

2

Rabbits (x)

3 fixed point

Nonlinear Dynamics and Chaos 121 Example 4.1 is a case of what is known as “winner-take-all” dynamics in which one species drives the other to extinction. Either one can come out as the winner depending on the initial conditions. Coexistence is not possible for these parameters, because there is no symbiotic relationship—it is a purely competitive relationship. On the other hand, by choosing other values of the parameters that support symbiosis, coexistence can be possible. The outcomes of competition among species competing to survive, or among corporations competing for customers, are topics in the study of evolutionary dynamics and econophysics, which are explored in detail in Chapters 8 and 10.

Example 4.2 Marginal case of a center and spirals On the (, τ ) diagram, there are marginal cases when one or both of the parameters equals zero. For instance, the flow   x˙ = −y + ax x2 + y2 (4.27)   y˙ = x + ay x2 + y2 has a fixed point at the origin (x∗ , y∗ ) = (0, 0), and the Jacobian matrix is     3ax2 + ay2 −1 + 2axy  0 −1 J= = (4.28)  1 + 2ayx ax2 + 3ay2  1 0 (0,0)

for which τ = 0 and  = 1. The eigenvalues are λ = ±i, and the origin is a center for a = 0. But for small finite values of a, the origin is a fixed point for a spiral. When a < 0, the spiral is stable, and when a > 0, the spiral is unstable. This behavior for nonzero a is not captured by the Jacobian matrix for these dynamics in Cartesian coordinates. However, when the dynamics in Eq. (4.27) are converted to polar coordinates, the correct behavior of the fixed point can be obtained from the Jacobian. (See Homework problem 6, which converts Eq. (4.27) into polar coordinates.) Thus, this problem illustrates the caution that is necessary when analyzing marginal fixed points.

Example 4.3 Marginal case for Keplerian orbits As another example of a marginal case, it is possible to have a continuous line of fixed points when two nullclines overlap. For instance, the Newtonian orbits describe by Eq. (2.101) in Chapter 2 take on a simplified form through substitution of the variable continued

122 Introduction to Modern Dynamics

Example 4.3 continued u=

1 r

(4.29)

and the 2D flow becomes u˙ = −u2 ρ   u2 2 ρ˙ = u − GMm μ μ

(4.30)

The nullclines are ρ = 0, u = 0, and u = μGMm/ 2 , with an isolated fixed point at (u*, ρ*) = (μGMm/ 2 , 0) plus a continuous “wall” of fixed points along the u = 0 axis. The Jacobian matrix is ⎛ ⎞ −2uρ −u2   ⎠ J=⎝ (4.31) 2 2GMm u 3u 2 − 0 μ μ The stability of the wall of fixed points is neutral, with the trace and determinant both zero, and the entire u = 0 axis is an asymptotic attractor for hyperbolic orbits as they escape to infinity. The isolated fixed point is a center (a circular orbit) with Lyapunov exponent  √ GMm λ = ±i Δ = ±i = ±iω (4.32) μa3 where a is the circular radius, and the period of the circular orbit is given by   2π 2 4π 2 a3 T2 = = ω G (M + m)

(4.33)

which is a special case of Kepler’s Third Law (see Section 2.7.3). The phase space portrait of the flow of the Newtonian orbits is shown in (u, ρ) space in Fig. 4.6. There is one isolated fixed point at the circular orbit. In this representation, the orbits are all circles centered on the circular orbit. The entire u = 0 axis is a nullcline for both u and ρ and hence is a continuous “wall” of fixed points. All of the hyperbolic free orbits asymptotically terminate on the u = 0 axis when they escape to infinity.

4.2.4 Separatrices: stable and unstable manifolds The separatrices in Fig. 4.5 divide up the phase plane into distinct regions. No flow lines cross a separatrix. They are the boundaries between qualitatively different streamline behavior. The separatrices cross at the saddle fixed point and are the lines of fastest approach and fastest divergence from the fixed point. Because of the singular property of separatrices, they are also known as the stable and unstable manifolds of the fixed point. The behavior of stable and unstable manifolds in nonlinear dynamics is closely linked to the emergence of chaos, and played a key role in the historical discovery of chaos in Hamiltonian systems by Henri Poincaré in 1889 when he was exploring the three-body problem. In this rabbit versus sheep

Nonlinear Dynamics and Chaos 123 . ρ-nullcline

ρ

Hyperbolic orbits

Fixed point “Wall”

Parabolic orbit Circular orbit Elliptical orbits

.

u-nullcline u = 1/r

Figure 4.6 Phase space portrait of the flow of the Newtonian orbits in the (u, ρ) space corresponding to the orbits in Fig. 2.4 of Chapter 2. The orbital trajectories are all circles, centered on the circular orbit. All hyperbolic orbits terminate asymptotically on the u = 0 axis. example, one of the separatrices lies close to the y- and x- nullclines, but the other separatrix is very different from either nullcline. The separatrices are obtained through the eigenvectors of the Jacobian matrix at the saddle fixed point. The Jacobian matrix eigenvalues and eigenvectors at the saddle point in the rabbit versus sheep model (4.25) are  −1 J= −1

 −2 −1

 λ=

√  √ 2−1  1 2 √ ν1 = √ − 2+1 − 1 3

1 ν2 = √ 3

√  2 1

These eigenvectors define the directions of fastest divergence and convergence at the fixed point, respectively. The unstable manifold is found by numerically solving the equations of motion for initial conditions   x1 = ±εv1 y1

(4.34)

where ε is a small value. Because the speed of solution goes to zero at the fixed point, the equations can alternatively be expressed in terms of the path length element ds instead of dt using

124 Introduction to Modern Dynamics 

ds dt

2 = x˙2 + y˙2

ds = dt x˙2 + y˙2

(4.35)

For the rabbit versus sheep problem, the dynamical equations become dx =  ds dy =  ds

x (3 − x − 2y) x2 (3 − x − 2y)2 + y2 (2 − x − y)2 y (2 − x − y)

(4.36)

x2 (3 − x − 2y)2 + y2 (2 − x − y)2

and the unstable manifold is the trajectory arising from the initial conditions at ±εv1 . To find the stable manifold requires an additional step. This is because any initial condition that is a small step along the direction of the eigenvector ν 2 will only reconverge on the fixed point without generating a curve. Therefore, the dynamical equations need to be time-reversed (path-reversed), dx = − ds dy = − ds

x (3 − x − 2y) x2 (3 − x − 2y)2 + y2 (2 − x − y)2 y (2 − x − y)

(4.37)

x2 (3 − x − 2y)2 + y2 (2 − x − y)2

where the minus signs reverse the stable and unstable manifolds, and the stable manifold is found by taking the initial conditions   x2 = ±εν2 y2

(4.38)

and the trajectory traces out the path-reversed stable manifold. The stable and unstable manifolds are shown in Fig. 4.5. The unstable manifold emerges from the saddle fixed point and performs a trajectory to the two stable fixed points. The stable manifold arises from the unstable fixed point at (0, 0), or arises at infinity, and converges on the saddle. These stable and unstable manifolds are the separatrices for this rabbit versus sheep system, dividing up the phase plane into four regions that confine the flow lines. A closely related concept is that of a “basin of attraction.” In this rabbit versus sheep system there are two stable fixed points and two basins of attraction. These fixed points attract all of the flow lines in the two basins bounded by the stable manifold. Basins of attraction play an important role in recurrent neural networks and applications of associative memory, to be discussed in Chapter 9.

Nonlinear Dynamics and Chaos 125 Stable limit cycle

Saddle limit cycle

Unstable limit cycle

Figure 4.7 There are three types of 2D limit cycles: stable, saddle, and unstable. The saddle is rare.

4.3 Limit cycles Fixed points are not the only possible steady states of a dynamical system. It is also possible for a system to oscillate (or orbit) repetitively. This type of steadystate solution is called a limit cycle. In 2D, there are three types of limit cycles, shown in Fig. 4.7. A stable limit cycle attracts trajectories, while an unstable limit cycle repels trajectories. The saddle limit cycle is rare; it attracts from one side and repels on the other.

4.3.1 Van der Pol oscillator A classic example of a limit cycle is the van der Pol oscillator, which is a nonlinear oscillator developed originally to describe the dynamics of space-charge effects in vacuum tubes. It displays self-sustained oscillations as the system gain is balanced by nonlinear dissipation. The van der Pol oscillator begins with the simple harmonic oscillator and switches the dissipation into a gain term. This linear gain term would lead to a solution that grew without bound. To limit this uncontrolled growth, a nonlinear dissipation term is added. The nonlinear term can depend either on amplitude x or speed x. ˙ The van der Pol oscillator equation with amplitude limiting is   x¨ = 2μx˙ 1 − βx2 − ω02 x

(4.39)

where μ is positive and provides the system gain to support self-sustained oscillations. The natural frequency in the absence of the nonlinear dissipation term is ω02 . The nonlinear term in x2 provides self-limiting behavior to balance the gain and keeps the oscillations finite. A flow representation of the oscillator is x˙ = ω0 y   y˙ = −ω0 x + 2μω0 y 1 − β 2 x2

(4.40)

126 Introduction to Modern Dynamics There is a fixed point at (x∗ , y∗ ) = 0, which has the Jacobian matrix  J=

0 − ω0

ω0 2μω0



τ = 2μω0 Δ = ω02

(4.41)

with the eigenvalues  λ = μω0 ± iω0 1 − μ2

(4.42)

Therefore, the origin is a fixed point for an unstable spiral for μ > 0 and μ2 < 1. However, the spiral does not grow without bound, but instead approaches a limit cycle that is stable. A phase portrait for the van der Pol oscillator is shown in Fig. 4.8 with β = 1, μ = 0.5, and ω02 = 1. The stable limit cycle is the attractor of all initial conditions as all streamlines converge on the cycle. Several limit cycles for different values of the parameters are shown in Fig. 4.9. Smaller β allows the gain to produce larger oscillations. Larger μ produces strongly nonlinear oscillations. Two examples of the time series of a van der Pol oscillator are shown by the plots of x and dx/dt as functions of time in Fig. 4.10. The oscillations in (b) are highly nonlinear, driven by a large value of μ. The frequencies are noticeably different between (a) and (b). The limit cycle of the van der Pol oscillator is a steady-state solution, with some of the properties of a fixed point, but it is a 1D manifold that cannot be derived directly from a fixed point analysis. However, by transforming the van der Pol dynamics into polar coordinates, and making an approximation for small gain, the 6 y-nullcline ˙

4

2

y

˙ y-nullcline

0 x-nullcline ˙ y-nullcline ˙ –2

–4

Figure 4.8 Streamlines and velocity vectors for the van der Pol oscillator with β = 1, μ = 0.5, and ω02 = 1. (From vanddpolStream.m.)

–6 –6

–4

–2

0

x

2

4

6

Nonlinear Dynamics and Chaos 127 10 β = 1, μ = 1 β = 0.1, μ = 1 β = 1, μ = 0.1 β = 1, μ = 5

dx/dt

5

0

–5

10

8

6

4

2

0 x

2

4

6

8

8

8 6

Figure 4.9 Van der Pol limit cycles in state space for different parameters. Smaller β allows the gain to produce larger oscillations. Larger μ produces strongly nonlinear oscillations.

β=1 μ=1

6

β=1 μ=5

4

4

x

2

Amplitude

Amplitude

x

0 –2 –4

0 –2 dx/dt

–4

dx/dt

–6

–6 –8 60

2

64

68

72

76

80

84

–8

24

28

32

Time

Figure 4.10 Time series of x and dx/dt for two sets of parameters for the van der Pol oscillator. limit cycle can be obtained as a fixed point in the radial dynamics. For instance, when the parameter μ = 0, Eq. (4.39) is a simple undamped harmonic oscillator with the rotating solution    x cos ω0 t = y − sin ω0 t

36 Time

sin ω0 t cos ω0 t

  u v

(4.43)

40

44

128 Introduction to Modern Dynamics Substituting this unperturbed solution into Eq. (4.40) with finite μ yields the new set of equations     v˙ = 2μω0 −ucs + vs2 1 − β 2 u2 c2 + 2uvcs + v2 s2     u˙ = −2μω0 −us2 + vcs 1 − β 2 u2 c2 + 2uvcs + v2 s2

(4.44)

where c = cos ω0 t and s = sin ω0 t for simplification. Note that these equations are proportional to the gain μ. For small μ, the angular motion defined by ω0 t would be only slightly perturbed, and the period of the motion would be approximately T = 2π/ω0 . Therefore, the equations can be averaged over a single period of the unperturbed solution to yield an average solution that ignores higher-order harmonics. Averaging Eq. (4.44) over a single period (see Homework problem 7), simplifies the equations to  β2 u˙ = μω0 u 1 − 4  β2 v˙ = μω0 v 1 − 4



u2 − v2



u2

− v2

  

(4.45)



Then, on making the substitution to a radial coordinate r = ing the angular solution, these become

√ u2 + v2 and includ-

  β2 2 r˙ = μω0 r 1 − r 4 θ˙ = ω0

(4.46)

This new dynamical flow equation in polar coordinates is a first-order approximation to the van der Pol dynamics that is valid when the perturbation parameter μ is small. The angular velocity is a constant, and the amplitude takes on all of the dynamics with a fixed point at r ∗ = 2/β. This fixed point in the radial dynamics is the radius of the limit cycle in the 2D dynamics. It has a Lyapunov exponent λ = −2μω0 showing that the limit cycle is stable. The phase space portrait of this first-order van der Pol oscillator is shown in Fig. 1.2 of Chapter 1, and it is also expressed by Eq. (4.27). Converting to polar coordinates is one way to derive the location and the stability of the limit cycle. This was accomplished by reducing the 1D limit cycle into a zero-dimensional fixed point by reducing the dynamics from 2D to 1D dynamics on the radius. Another way to derive the properties of limit cycles (even for large nonlinearity) is to reduce the dimensionality of the dynamics by tracking the dynamic trajectories as they pass through a lower-dimensional section of the full phase space. This is the technique of the Poincaré section.

4.3.2 Poincaré sections (first-return map) A limit cycle is not a fixed point, but it is a steady state of the system. Like fixed points, it comes in different types of stability. Poincaré recognized these similarities

Nonlinear Dynamics and Chaos 129 Converging trajectory

Poincaré section

x1

x2 x3 x4

x∞

Limit cycle x1

x∞

Poincaré plane

x2 x3 x4

Figure 4.11 First-return map on a Poincaré section. The limit cycle in 2D is converted to a fixed point of a discrete map in 1D. and devised a way to convert a limit cycle in 2D into an equivalent fixed point in 1D. This method introduces a discrete map called the first-return map. The Poincaré section is shown schematically in Fig. 4.11. In this case, the Poincaré section is a line that intersects the limit cycle. As the trajectory approaches the limit cycle, a series of intersection points with the Poincaré section is generated, as shown on the left. The sequence of points converges on the limit cycle for large numbers of iterations. The first-return map for a 2D limit cycle is a sequence of points with the recurrence relation

xn+1 = F (xn )

(4.47)

This map has a fixed point x∗ when   x∗ = F x∗

(4.48)

Expanding about the fixed point, we have  dF  (xn−1 − x∗ ) dx x∗  dF  = Δxn dx  ∗

xn − x∗ = Δxn+1

x

(4.49)

130 Introduction to Modern Dynamics which iterates to Δxn+1 =

   n     dF  dF  dF  dF  Δx = Δx = · · · = Δx0 n n−1    dx x∗ dx x∗ dx x∗ dx x∗

(4.50)

which can be rewritten in terms of a multiplier Δxn+1 = M n Δx0

(4.51)

where the Floquet multiplier is defined as

Floquet multiplier M=

 dF  dx x∗

(4.52)

A limit cycle of the full dynamics is a fixed point of the map that can be classified as follows: Multiplier M>1 M 0. The volume rate of change is then V˙ = − (σ + 1 + b) V

(4.86)

V (t) = V (0)e−(σ +1+b)t

(4.87)

and the volume changes according to

This example illustrates that the volume of phase space shrinks exponentially to zero as t → ∞. Note that this implies that the volume of the attractor is zero, or more specifically that the dimension of the attractor is smaller than the dimension of the embedding space. This contraction in the volume is specifically caused by dissipation.

Nonlinear Dynamics and Chaos 145

4.6 Non-autonomous (driven) flows Non-autonomous flows have an explicit time dependence:   x˙ a = Fa xb , t

(4.88)

The time variable introduces a state space of one dimension higher than defined by x. The new dimension is time, although time is not on an equal footing with x, because time cannot be controlled. However, the higher dimensionality does allow different dynamics. For instance, in a 2D state space, chaos is not allowed, because of the non-crossing theorem. However, in a driven 2D system, the extra dimension of time lifts this restriction and chaos is possible. There are many ways to introduce a new variable related to time. For instance, the new variable may be introduced as z=t z˙ = 1

(4.89)

On the other hand, for θ = ωt, a natural variable to describe the dynamics is z = ωt z˙ = ω

(4.90)

and the angle can be plotted as modulus 2π. Both of these substitutions convert the non-autonomous flows into autonomous flows. Sometimes, in the case of a harmonic forcing function, the new variable can be expressed as z = sin ωt z˙ = ω cos ωt

(4.91)

This representation has the benefit that trajectories are bounded along the new dimension, while in the first cases the trajectories are not. In the following examples, 2D systems are driven by a time-dependent term that makes the flow non-autonomous and introduces the possibility for these systems to display chaotic behavior.

Example 4.9 The driven damped pendulum The driven damped pendulum of mass m and length L is described by the equations x˙ = y y˙ = a sin z − by − sin x z˙ = ω

(4.92) continued

146 Introduction to Modern Dynamics

Example 4.9 continued for drive amplitude a with damping coefficient b = γ /mL2 . This system displays no sustained self-oscillation (because of the damping). The results for a driven damped pendulum for b = 0.1 and ω = 0.7 are shown in Fig. 4.22 for three drive amplitudes. The time dependence of the amplitude is plotted in Fig. 4.22(a), and phase space trajectories are plotted in Fig. 4.22(b). The Poincaré section is shown in Fig. 4.22(c), “strobed” at the drive period. Note that the character of the section is not completely random. The Poincaré section remains relatively sparse, with noticeable gaps (known as lacunae). This Poincaré section represents part of the attractor of the nonlinear dynamics, which has a wispy or filamentary nature that is described by fractal geometry. Clearly, the additional time variable has allowed the system to exhibit chaotic behavior. Damped Driven Pendulum:

x· = y

ω = 0.7

y· = a sin z – by – sin x

b = 0.1

z· = ω

Time series

(a)

State space

(b)

Period 2

Period 8

Chaotic

Poincaré section

(c)

a = 0.400

a = 0.456

a = 0.460

Figure 4.22 Dynamics of a driven-damped oscillator for b = 0.1, ω = 0.7 for three drive amplitudes. (a) Angle variable versus time. (b) State space. (c) Poincaré section sampled at times that are integer multiples of the drive period.

Nonlinear Dynamics and Chaos 147

Example 4.10 The damped driven double-well model The double-well potential 1 4 x − 4 is frequently encountered in dynamical systems, exhibiting two driving term, it has the dynamical equation V (x) =

1 2 x (4.93) 2 degenerate minima. In the absence of damping, or a

x¨ − x + x3 = 0

(4.94)

When damped and driven harmonically, this produces the flow x˙ = y y˙ = a sin z − by + x − x3 z˙ = 1

(4.95)

which can show chaotic motion for certain ranges of the parameters. Dynamical properties of the damped-driven double-well potential are shown in Fig. 4.23. The dynamical flow in (a) has two stable fixed points (the bottom of each well) and an unstable saddle point between the two wells. For moderate damping, trajectories within the double-well potential are captured by one or the other well. The chaotic time series for a = 0.3 and d = 0.15 are shown in (b). The state-space projection (the third axis is the time axis) in (c) has the Poincaré section shown in (d), which is a strange attractor. x=y Damped-Driven double-well

Dynamical flow

State space

z=1

y = a sin z – by + x – x3

a = 0.3 δ = 0.15

Time series

Poincaré section

Figure 4.23 Dynamics of a driven-damped double-well potential for a = 0.3 and d = 0.15. (a) Dynamical flow. (b) Speed variable versus time. (c) State space density. (d) Poincaré section sampled at times that are integer multiples of the drive period.

148 Introduction to Modern Dynamics

4.7 Summary and glossary This chapter presented the essentials of nonlinear dynamics and chaos based on the geometric structure of phase portraits and stability analysis that classifies the character of fixed points based on the properties of Lyapunov exponents. Limit cycles are an important type of periodic orbit that occur in many nonlinear systems, and their stability is analyzed using Floquet multipliers. Chaos is not possible in 2D state space, because of the non-crossing theorem. However, if a time dependence is added to an autonomous 2D state space system to make it non-autonomous, then chaos can emerge, as in the case of the driven damped pendulum. Autonomous systems with 3D state spaces can exhibit chaos, with famous examples being the Lorenz and Rössler models. Dissipative chaos often has strange attractors with fractal dimensions. Nonlinear dynamics has a language all its own. Here is a glossary to help decipher texts on the subject. Attractor: A lower-dimensional hypersurface in state space to which the trajectories of a dissipative dynamical system relax. Autonomous: A flow that has no explicit time dependence. Basin: A subregion of state space containing a set of initial conditions whose trajectories remain within that subregion. Bifurcation: A sudden change of system behavior caused by an infinitesimal change in a control parameter. Degrees of freedom: The number of initial conditions needed to define a trajectory. Also, for a general flow, it is the number of independent variables. Fixed point: A position in state space of a flow that satisfies x˙ = f (x)  = 0. Floquet multiplier: Characteristic values of the Jacobian matrix of the first-return map. Flow: A multidimensional dynamical system that has the equation x˙ = f (x).  Fractal: A geometric set of points that is self-similar at all scales. Fractals scale as if they had fractional dimension (hence the term fractal). Heteroclinic: Two orbits that cross in state space (in infinite time). Homoclinic: An orbit that crosses itself in state space (in infinite time). Integrable: A Hamiltonian system with as many constants of motion as the number of independent generalized coordinates. Invariant tori: N-dimensional subsurfaces in 2N-dimensional phase space on which trajectories of integrable Hamiltonian systems reside. Limit cycle: A periodic attractor in state space . Lyapunov exponent: Characteristic values of the Jacobian matrix that define the stability of fixed points and the rate of separation of nearby trajectories.

Nonlinear Dynamics and Chaos 149 Manifold: A lower-dimensional subspace of state space. The stable manifold of a saddle point is the trajectory of the eigenvector of the Jacobian matrix that has a negative Lyapunov exponent. The unstable manifold of a saddle point is the trajectory of the eigenvector of the Jacobian matrix that has a positive Lyapunov exponent. Node: A fixed point with either all negative (stable attractor) or all positive (unstable repellor) Lyapunov exponents. Non-autonomous: A flow with explicit time dependence. Non-integrable: A Hamiltonian system with more degrees of freedom than constants of motion. Non-integrable systems can exhibit Hamiltonian chaos. Nullcline: A manifold in state space along which one of the time derivatives equals zero. Phase space: A special case of state space for Hamiltonian or Lagrangian systems consisting of 2N dimensions for N independent generalized coordinates. Poincaré section: A set of points on a lower-dimensional surface within state space that intersects dynamical trajectories. Repellor: An unstable fixed point with all positive Lyapunov exponents. Saddle point: A fixed point with at least one negative and one positive Lyapunov exponent. Spiral: Attractor or repellor with complex Lyapunov exponents. Star: An attractor or a repellor with equal Lyapunov exponents. State space. A space whose dimension equals the number of explicit variables of a dynamical flow and within which the trajectories of the dynamical system reside.

4.8 Bibliography D. Acheson, From Calculus to Chaos (Oxford University Press, 1997). K. T. Alligood, T. D. Sauer, and J. A. Yorke, Chaos: An Introduction to Dynamical Systems, Springer (1997). D. Arrowsmith and C. M. Place, Dynamical Systems: Differential Equations, Maps and Chaotic Behavior (Chapman & Hall/CRC, 1992). J. Barrow-Green, Poincaré and the Three-Body Problem (American Mathematical Society, 1997). B. Davies, Exploring Chaos (Perseus Books, 1999). R. C. Hilborn, Chaos and Nonlinear Dynamics (Oxford University Press, 2004). P. Holmes, “Poincaré, celestial mechanics, dynamical systems theory and chaos,” Physics Reports 193, 137–163 (1990).

150 Introduction to Modern Dynamics A. E. Jackson, Perspectives of Nonlinear Dynamics (Cambridge University Press, 1989). E. N. Lorenz, The Essence of Chaos (University of Washington Press, 1993). A. Medio, Chaotic Dynamics: Theory and Applications to Economics (Cambridge University Press, 1992). E. Ott, Chaos in Dynamical Systems (Cambridge University Press, 1993). C. Robinson, Dynamical Systems: Stability, Symbolic Dynamics and Chaos (CRC Press, 1995). H.-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, 1999). S. H. Strogatz, Nonlinear Dynamics and Chaos (Westview, 1994).

4.9 Homework problems Analytic problems 1 Second-order dynamics to coupled first-order: Convert the van der Pol oscillator equation   x¨ = 2μx˙ 1 − βx2 − ω02 x into a coupled first-order flow. 2 Fixed-point classification: Find the nullclines and fixed points of the following 2D dynamics. Analyze the fixed points and classify them. (a) x˙ = x (x − y) y˙ = y (2x − y) (b) x˙ = x − x3 y˙ = −y (c) x˙ = x2 − y y˙ = x − y (d) x˙ = x (2 − x − y) y˙ = x − y 3 Phase portraits: Sketch the nullclines and phase portraits of each of the following. (Do not use a computer . . . sketch by hand.) Identify and classify all fixed points. (a) x˙ = x − y; y˙ = 1 − ex (b) x˙ = x − x3 ; y˙ = −y (c) x˙ = x (x − y) ; y˙ = y (2x − y) (d) x˙ = y; y = x (1 + y) − 1

Nonlinear Dynamics and Chaos 151 (e) x˙ = x (2 − x − y) ; y˙ = x − y (f) x˙ = x2 − y; y˙ = x − y (g) x˙ = xy − 1; y˙ = x − y3 (h) x˙ = sin y; y˙ = x − x3 4 Linearization: Classify the fixed point at the origin for the system x˙ = −x3 − y y˙ = x Linearization fails to classify it correctly, so prove otherwise that it is a spiral. 5 Spirals: Consider the two systems   r˙ = r 1 − r 2

r˙ = r (1 − r)

θ˙ = 1

θ˙ = 1

Find the fixed points of the flows for the one-dimensional dependence on r. Evaluate their stability. 6 Polar coordinates: Convert Eq. (4.27) into polar coordinates. Prove that the fixed point is a spiral for a = 0. 7 Polar approximation to van der Pol: Show all the steps leading from Eq. (4.40) to Eq. (4.46). 8 Velocity-saturated van der Pol: Derive the equivalent polar coordinate approximation to Eq. (4.46) for a van der Pol nonlinearity that saturates in velocity rather than amplitude, such as x˙ = ω0 y   y˙ = −ω0 x + 2μω0 y 1 − β 2 ω02 y2 9 Keplerian orbits: In Fig. 4.6, the state space volume vanishes as the particle approaches infinity. But this is a Hamiltonian system that should conserve phase space volume. Explain (mathematically) this apparent discrepancy. 10 Symbiotic relationship: Modify the parameters in the expressions in parentheses of Eq. (4.25) of Example 4.1 to establish a stable node. 11 Floquet multiplier: Consider the vector field given in polar coordinates by r˙ = r − r 2 θ˙ = 1 Find the Floquet multiplier for the periodic orbit and classify its stability. Use the positive x-axis as the section coordinate.

152 Introduction to Modern Dynamics 12 3D fixed point: For the Lorenz model x˙ = p (y − x) y˙ = rx − xz − y z˙ = xy − bz

p = 10 b = 8/3 r = 28

analyze and classify the fixed point at (0, 0, 0). What is the largest Lyapunov exponent? 13 Three-dimensional fixed point: For the Rössler model x˙ = −y − z y˙ = x + ay z˙ = b + z (x − c)

b = 0.4 c=8

Linearize and classify the fixed point that is near (0, 0, 0) as a function of a.

Computational projects 14 Poincaré return map: Explore the Poincaré section for the van der Pol oscillator. Numerically find the Floquet multiplier. 15 Andronov–Hopf bifurcation: Explore the state space of the subcritical Hopf bifurcation   r˙ = r c + r 2 θ˙ = 1 16 Logistic map: Explore the logistic map bifurcation plot. Find and zoom in on period-three and period-five cycles. 17 Bifurcation maps: Create a bifurcation map of (a) xn+1 = r cos xn (b) xn+1 = rxn − x3n 18 Amplitude–frequency coupling: For the van der Pol oscillator   x¨ = 2μx˙ 1 − βx2 − w20 x

ω02 = 1 β=1

Find the period of oscillation as a function of the gain parameter μ. 19 Driven damped pendulum: Numerically plot the Poincaré section for the driven damped pendulum for c = 0.05, F = 0.7, and ω = 0.7. 20 Resonances: Explore the driven-damped pendulum versus frequency. Are there resonances? 21 Double-well model: For the damped driven double-well model, explore the parameters that cause the system to exhibit chaos.

Nonlinear Dynamics and Chaos 153 22 Hysteresis: Explore the Duffing oscillator (harmonic oscillator with thirdorder stiffening). This shows hysteresis and a double-valued response. 23 Coexisting attractors: Find the coexisting strange attractor and stable orbit for the double-well potential. 24 Driven van der Pol: For the van der Pol equation x˙ = y   y˙ = b sin z − μ x2 − a − x z˙ = 1 explore the case for a = 1, μ = 3 and b varying from 2.5 to 3.13. Notice the subharmonic cascade. Is there synchronization? 25 Lorenz butterfly: Explore the “butterfly” numerically by changing the value of r. Can you destroy the butterfly? 26 Chaotic phase oscillator: Track the phase (modulo 2π) as a function of time for the Rössler attractor for a = 0.15 and again for a = 0.3. 27 Evolving volumes: For the Rössler system, evaluate the change in volume of a small initial volume of initial conditions. You will need to evaluate (average) the divergence around the trajectory, because it is not a constant (as it was for the Lorenz model). 28 Chua’s circuit: Explore the attractor of Chua’s diode circuit x˙ = α [ y − h(x)] y˙ = x − y + z z˙ = −βy where the single-variable nonlinear function h(x) is h(x) = m1 x +

1 (m0 − m1 ) (|x + 1| − |x − 1|) 2

Look for the “double-scroll” attractor.

Hamiltonian Chaos

5 5.1 Perturbed Hamiltonian systems and separatrix chaos

155

5.2 Nonintegrable Hamiltonian systems

160

5.3 The Chirikov Standard Map

164

5.4 KAM theory

168

5.5 Degeneracy and the web map

170

5.6 Quantum chaos [optional]

171

5.7 Summary

174

5.8 Bibliography

174

5.9 Homework problems

175

The human protein interactome.

In Hamiltonian systems, the absence of dissipation, and the conservation of phase space volume imposed by Liouville’s theorem endow them with a distinctly different character than dissipative systems. There are no strange attractors in Hamiltonian chaos, but instead there are never-ending trajectories that create patterns in phase space that are just as complicated. A single orbit will pierce the Poincaré section in an infinite set of points that may have regular or irregular patterns, having some regions that are dense and others that are sparse. Each different initial condition can display any number of different patterns that nest

Introduction to Modern Dynamics. Second Edition. David D. Nolte, Oxford University Press (2019). © David D. Nolte. DOI: 10.1093/oso/9780198844624.001.0001

Hamiltonian Chaos 155 within each other. The emergence of Hamiltonian chaos is closely connected with saddle points and perturbations in phase space. One of the most important developments in Hamiltonian chaos is KAM theory, which explains how regular and irregular orbits can coexist, and how stability is enforced for some trajectories but not others.

5.1 Perturbed Hamiltonian systems and separatrix chaos Hamiltonian systems are called integrable when they can be converted into actionangle coordinates through an appropriate canonical transformation. Actionangle systems display the simplest possible regular motion. However, integrable Hamiltonians can be perturbed in a number of ways that introduce irregular motion that grows in importance as the perturbation increases. One of the simplest perturbations is an external forcing term that induces irregular motion for orbits that lie near saddle points in phase space.

5.1.1 Perturbed pendulum and double well We saw in Chapter 4 that a driven damped pendulum produces one of the classic routes to chaotic behavior. The addition of the explicit time dependence of the drive increases the dimensionality of the system to three, enabling chaotic orbits and strange attractors on the Poincaré plane. The existence of strange attractors is made possible by the dissipation in the system. Similarly, the undamped pendulum can provide an insight into the origins of Hamiltonian chaos when it is driven by a small perturbation. Of particular interest is the behavior of the perturbed system when the trajectory comes near to the saddle point. The saddle point is a branch point (known as a hyperbolic point on a phase space portrait), where a trajectory may remain on one side of the separatrix, or the perturbation may project the trajectory across the separatrix into a new domain. As the trajectory gets arbitrarily close to the saddle point, arbitrarily small perturbations can send the solution along a different branch, crossing back and forth across the separatrix. This is known as separatrix chaos with the formation of a stochastic layer, and it is one of the principle routes to Hamiltonian chaos. The driven perturbation can be given a plane-wave expression to model a nonlinear Hamiltonian system interacting with a plane wave. For the case of the perturbed undamped pendulum, the equation is x˙ = y y˙ = F sin (kx − z) − sin x z˙ = ω

(5.1)

156 Introduction to Modern Dynamics 2.5

1.5

Angular velocity

1

F = 0.02 ω = 3/4 k=2

2

0.8 0.6

1

0.4

0.5

0.2

0

0

–0.5

–0.2 –0.4

–1

–0.6

–1.5 Stochastic layer

–2

Hyperbolic point

–0.8 –1

–2.5

0

1

2

3 Angle

4

5

6

2.4

2.6

2.8

3

3.2

3.4

3.6

3.8

Angle

for a drive amplitude F with a k-vector k and an angular frequency ω. An example of a stochastic layer on the phase space portrait, generated by a weak perturbation for initial conditions near the separatrix, is shown in Fig. 5.1. Note that the stochastic layer is widest at the saddle point of the dynamics.

Figure 5.1 Separatrix chaos for a perturbed undamped pendulum. The perturbation creates a stochastic layer along the separatrix of the pendulum in phase space (F = 0.02, ω = 3/4, k = 2).

The double-well potential is another system that displays separatrix chaos and a stochastic layer when the system has an initial condition that generates an orbit passing near the saddle point of the double-well potential. The dynamics are described by

$$\dot{x} = y \qquad \dot{y} = F \sin(kx - z) + x - x^3 \qquad \dot{z} = \omega \tag{5.2}$$

An example is shown in Fig. 5.2 for an orbit passing near the saddle point. The speed of the particle displays random reversals as the trajectory switches branches. This system does not strictly conserve energy, because energy is passed back and forth between the system and the drive, depending on the relative phase between the driving force and the system displacement.
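To see the branch-switching directly, the flow of Eq. (5.2) can be integrated numerically. The following is a minimal sketch, not code from the text: a fixed-step fourth-order Runge-Kutta integrator in Python with the parameters of Fig. 5.2 (F = 0.002, k = 1, ω = 1); the function names, step size, and initial condition are illustrative choices.

import numpy as np

def flow(s, F=0.002, k=1.0, omega=1.0):
    # Right-hand side of the perturbed double-well flow, Eq. (5.2)
    x, y, z = s
    return np.array([y, F*np.sin(k*x - z) + x - x**3, omega])

def rk4_step(s, dt):
    # One fixed-step fourth-order Runge-Kutta step
    k1 = flow(s)
    k2 = flow(s + 0.5*dt*k1)
    k3 = flow(s + 0.5*dt*k2)
    k4 = flow(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

s = np.array([0.01, 0.0, 0.0])   # start near the saddle point at the origin
dt, speeds = 0.01, []
for n in range(200_000):
    s = rk4_step(s, dt)
    speeds.append(s[1])

# Random sign reversals of the speed signal branch-switching
# across the separatrix (compare Fig. 5.2(a))
print("fraction of time moving right:", np.mean(np.array(speeds) > 0))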

5.1.2 Perturbed action-angle dynamics

The separatrices of the simple pendulum and the double-well potential each have a unique saddle point where Hamiltonian chaos originates. These separatrices are associated with orbits that have infinite period and hence are special limiting cases.


Figure 5.2 Perturbed double-well system without damping for an orbit passing near the saddle point. (a) The time-varying speed shows random reversals. (b) The perturbation pumps energy into and out of the system. (c) A stochastic layer originates from the saddle point. F = 0.002, k = 1, ω = 1.

However, the concept of separatrix chaos is more general and extremely common for action-angle systems in which integrability is removed by small perturbations. Many separatrices (in fact an infinite number) can emerge when an action-angle system is perturbed by a periodic function. Once a separatrix is generated, small perturbations can induce separatrix chaos and stochastic layers of trajectories that cling to the separatrix. To understand this emergence of an infinite number of stochastic layers in perturbed Hamiltonian systems, consider an action-angle Hamiltonian $H_0(J)$ that is subject to a periodic perturbation

$$H = H_0(J) + \varepsilon V(J, \theta, t) \tag{5.3}$$

where $H_0$ has a natural frequency $\dot\theta = \partial H/\partial J = \omega$, and V is periodic in time with a period T = 2π/ν. To be explicit, consider a perturbation

$$V(J, \theta, t) = V_{p,q} \cos(p\theta - q\nu t + \phi) \tag{5.4}$$

where p and q are integers, and $\varepsilon V_{p,q}$ is the strength of the perturbation. This type of perturbation is known as a resonance, because the ratio of the drive frequency to the natural frequency is a ratio of integers:

$$\frac{\nu}{\omega} = \frac{p}{q} \tag{5.5}$$

Hamilton's equations are

$$\dot\theta = \frac{\partial H(\theta, J)}{\partial J} \qquad \dot J = -\frac{\partial H(\theta, J)}{\partial \theta} \tag{5.6}$$

yielding the action-angle flow

$$\dot\theta = \omega(J) + \varepsilon \frac{\partial V_{p,q}}{\partial J} \cos(p\theta - q\nu t + \phi) \qquad \dot J = \varepsilon p V_{p,q} \sin(p\theta - q\nu t + \phi) \tag{5.7}$$

As an approximation, assume that the difference between the perturbed action J and the original action $J_0$ is small,

$$\Delta J = J - J_0 \tag{5.8}$$

expand the frequency as

$$\omega(J) = \omega_0 + \frac{\partial \omega}{\partial J}\,\Delta J \tag{5.9}$$

and introduce a new phase

$$\psi = p\theta - q\nu t + \phi - \pi \tag{5.10}$$

Then Eq. (5.7) becomes

$$\Delta\dot J = -\varepsilon p V \sin\psi \qquad \dot\psi = p\,\frac{d\omega}{dJ}\,\Delta J \tag{5.11}$$

These are the equations of motion for a new Hamiltonian given by

$$H = \frac{1}{2}\, p\,\frac{d\omega}{dJ}\,(\Delta J)^2 - \varepsilon p V \cos\psi \tag{5.12}$$

which is the equation of an unperturbed pendulum, but with a new angle ψ that has a different periodicity from the physical angle θ. Just as a regular pendulum has a separatrix and a hyperbolic point, each (p, q) resonance of the type in Eq. (5.5) creates a new separatrix, each with multiple hyperbolic (saddle) points surrounding an equal number of elliptical points that display periodic motion. An important requirement for this simple perturbed action-angle model is the nondegeneracy of the frequencies, because ω(J) must be

a varying function of the action J. Many action-angle systems are nondegenerate. On the other hand, the harmonic oscillator has perfect degeneracy. Therefore, the simple harmonic oscillator is a special case (a pathological case) that is not representative of most of the "interesting" dynamical systems that arise in physics.

The phase plane for Eq. (5.12) in polar coordinates is shown in Fig. 5.3 for p = 6 and q = 1, a 6:1 resonance. The perturbation creates a separatrix that has six lobes, with six hyperbolic points enclosing six elliptical points in an island chain. Separatrix chaos arises at the hyperbolic points and creates a stochastic layer that adjoins the separatrix, just as in Figs. 5.1 and 5.2. This plot shows just one possible resonance. There are an infinite number of rational resonances, just as there are an infinite number of rational numbers on the unit interval. Each resonance creates new separatrices and new island chains. The strength of the resonance depends on the amplitude $V_{p,q}$, which (in many systems) decreases as the integers p and q grow large. Therefore, with increasing perturbation ε, resonances and island chains with ratios of small integers tend to appear first and tend to dissolve into stochastic layers first, while resonances with ratios of larger integer values dissolve only at larger perturbations. This behavior is explored by KAM theory later in this chapter.

Separatrix chaos and the stochastic layer associated with motion near a hyperbolic (saddle) point is a general concept that extends beyond simple perturbed systems. Autonomous Hamiltonian systems that have no explicit time dependence (and no external driving forces) can have terms in the potential function that cause orbits to deviate near hyperbolic points and to become chaotic.

Figure 5.3 Example of a 6:1 resonance (p = 6, q = 1) induced in the polar phase space portrait of a perturbed action-angle Hamiltonian system. The resonance creates a series of hyperbolic points between elliptical points that have periodic orbits. The perturbation induces separatrix chaos that originates at the hyperbolic points.

These types of potentials remove constants of the motion, causing the system to become nonintegrable and opening the door to Hamiltonian chaos. Although such systems are autonomous, they follow the same route to chaos as the perturbed action-angle model, with increasing nonintegrability opening up island chains that dissolve into stochastic layers.

5.2 Nonintegrable Hamiltonian systems

An integrable finite Hamiltonian system has as many constants of motion as degrees of freedom. For instance, the two-body gravitational problem is an integrable system that conserves energy and angular momentum. Integrable Hamiltonian systems can be reduced to action-angle variables through canonical transformations. However, it is possible to add perturbations to such systems that remove conserved quantities, or to add additional degrees of freedom without introducing new conserved quantities. In these cases, Hamiltonian chaos can occur. Hamiltonian chaos differs from dissipative chaos chiefly because phase space volume is conserved as the system evolves, and attractors cannot exist. There are no "transients" in Hamiltonian systems; every initial condition leads to a distinct orbit that can be regular (periodic), quasi-periodic (composed of two or more incommensurable frequencies), or chaotic. For a nearly integrable system, the emergence of Hamiltonian chaos can be studied as the strength of the nonintegrable perturbation increases.

5.2.1 The Hénon–Heiles Hamiltonian

Nonintegrable terms can be added to an integrable Hamiltonian through a perturbation as

$$H_{\rm Tot} = H_{\rm int} + \varepsilon H_{\rm nonint} \tag{5.13}$$

and the motion can be studied as the perturbation strength ε increases. A classic nonintegrable Hamiltonian system that displays Hamiltonian chaos is the Hénon–Heiles Hamiltonian model of star motion in the plane of a galaxy. The Hamiltonian (for unit mass) is

$$H = \frac{1}{2}p_x^2 + \frac{\omega^2}{2}x^2 + \frac{1}{2}p_y^2 + \frac{\omega^2}{2}y^2 + \varepsilon\left(x^2 y - \frac{1}{3}y^3\right) \tag{5.14}$$

with a potential function

$$V(x, y) = \frac{\omega^2}{2}x^2 + \frac{\omega^2}{2}y^2 + \varepsilon\left(x^2 y - \frac{1}{3}y^3\right) \tag{5.15}$$

This is a quadratic potential with additional nonintegrable cubic terms that break azimuthal symmetry. At small perturbation, the harmonic terms dominate, but as the perturbation increases, the potential becomes successively less harmonic and less integrable, generating more complex motion. The equations for the four-dimensional flow arise from Hamilton's equations:

$$\dot x = \frac{\partial H}{\partial p_x} = p_x \qquad \dot y = \frac{\partial H}{\partial p_y} = p_y$$
$$\dot p_x = -\frac{\partial H}{\partial x} = -\omega^2 x - 2\varepsilon x y \qquad \dot p_y = -\frac{\partial H}{\partial y} = -\omega^2 y + \varepsilon\left(y^2 - x^2\right) \tag{5.16}$$


where the parameter ε sets the scale for the nonintegrable term. The four dimensions are reduced by one because energy conservation provides a single constraint. Examples of trajectories in the x–y plane are shown in Fig. 5.4 for a chaotic orbit and a quasi-periodic orbit.


Figure 5.4 Trajectories of the Hénon–Heiles model in the x–y plane. (a) Example of a quasi-periodic orbit for ε = 0.34. (b) Example of a chaotic orbit for ε = 0.4.

Selecting a Poincaré plane for the Hénon–Heiles Hamiltonian reduces the point set to two dimensions. If the Poincaré map is established on the y–p_y plane (at x = 0), the energy constant is then

$$E = \frac{1}{2}p_x^2 + \frac{1}{2}p_y^2 + \frac{1}{2}\omega^2 y^2 - \frac{\varepsilon}{3}y^3 \tag{5.17}$$

Examples of Poincaré sections on the y–p_y plane are shown in Fig. 5.5 for ε = 0.3, 0.345, and 0.4 at a fixed energy E = 1. In each case, random directions are selected for the initial momenta p_x and p_y, subject to the constant energy E, with the initial position placed at the origin. For each new initial condition, an orbit pierces the Poincaré plane in either quasi-periodic or chaotic patterns. The increasing perturbation shifts the behavior from quasi-periodic to chaotic. The case ε = 0.345 shows distinct island chains separated by stochastic layers, while increasing the perturbation to ε = 0.4 creates a chaotic sea of points.

An alternative approach to increasingly chaotic behavior sets the perturbation at ε = 1 and increases the energy constant E from zero. At very small energies, the orbits sample the quadratic harmonic oscillator potential at small displacements and are regular. As the energy increases, the orbits sample more of the nonintegrable potential at larger displacements. The increasing energy becomes the continuous tunable parameter that takes the system from regular motion to chaotic motion. However, if the energy becomes too large, there is a threefold-symmetric saddle point in the potential, and the particle (or star) can escape to infinity.

Figure 5.5 (a) Potential function for the Hénon–Heiles model for ε = 0.3. The unperturbed harmonic potential yields to a threefold-symmetric potential. (b) Poincaré section in the y–p_y plane for ε = 0.3. (c) Poincaré section for ε = 0.345, showing a distinct island chain. (d) Poincaré section for ε = 0.4. The energy in all three cases is E = 1.
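Poincaré sections like those of Fig. 5.5 can be generated by integrating Eq. (5.16) and recording each upward crossing of the x = 0 plane. The sketch below is an illustrative Python implementation, not the text's own code; the values of ε and E follow the figure, and the trajectory counts and step size are arbitrary.

import numpy as np

def hh_flow(s, eps=0.345, omega=1.0):
    # Hamilton's equations for the Henon-Heiles flow, Eq. (5.16)
    x, y, px, py = s
    return np.array([px, py,
                     -omega**2*x - 2*eps*x*y,
                     -omega**2*y + eps*(y**2 - x**2)])

def rk4_step(s, dt):
    k1 = hh_flow(s); k2 = hh_flow(s + 0.5*dt*k1)
    k3 = hh_flow(s + 0.5*dt*k2); k4 = hh_flow(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(1)
E, dt, section = 1.0, 0.005, []
for trial in range(10):
    # random momentum direction at the origin, where V(0,0) = 0,
    # so the total energy is purely kinetic and equal to E
    angle = rng.uniform(0.0, 2.0*np.pi)
    p = np.sqrt(2.0*E)
    s = np.array([0.0, 0.0, p*np.cos(angle), p*np.sin(angle)])
    for n in range(50_000):
        s_old, s = s, rk4_step(s, dt)
        if s_old[0] < 0.0 <= s[0]:        # crossing the x = 0 plane upward
            section.append((s[1], s[3]))  # record (y, p_y); could be refined
                                          # by interpolating to the crossing
print(len(section), "Poincare section points collected")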

Figure 5.6 Sampling the nonintegrable potential for high energy and low perturbation versus low energy and high perturbation. (a) Poincaré section for ε = 0.3425 and E = 1. (b) Poincaré section for ε = 1 and E = 0.118. The qualitative aspects are nearly identical, although the axes have been rescaled by a factor of approximately 3 between the two figures.

A comparison between the two cases is shown in Fig. 5.6, where the fivefold central island chain is clearly defined.

5.2.2 Liouville's theorem and area-preserving maps

A Hamiltonian system has no dissipation, so the volume in phase space occupied by a set of points cannot change, and strange attractors cannot exist. The impossibility of attractors raises interesting questions about the nature of chaos in conservative (i.e., Hamiltonian) systems in contrast to dissipative systems. A given initial condition creates a single orbit that threads through phase space. When constructing a Poincaré section, this orbit creates a set of discrete points that are connected by the continuous dynamics. As Poincaré proved, the properties of the continuous dynamics are preserved in the properties of the discrete return map. Therefore, studying the discrete map provides an understanding of the underlying continuous dynamics. The discrete point set can have periodicities and patterns in the case of nonchaotic motion, such as for harmonic resonances, or can be a diffuse set of points when the motion is chaotic. Discrete maps of these systems conserve areas on the Poincaré plane and are called area-preserving maps. The ratio of infinitesimal areas between original and transformed coordinates is equal to the determinant of the Jacobian matrix of the transformation. If the Jacobian determinant is equal to ±1, then areas are preserved under the transformation. A map with beautifully complex structure is the Lozi map

$$x_{n+1} = 1 + y_n - C\,|x_n| \qquad y_{n+1} = B x_n \tag{5.18}$$

which is area-preserving when B = −1. A portion of the phase plane is shown in Fig. 5.7 for B = −1 and C = 0.5.


Figure 5.7 A portion of the iterated Lozi map for B = −1 and C = 0.5. Each set of points with the same hue is from a single orbit.

The figure was generated using 200 randomly selected initial conditions, and the orbit for each initial condition was iterated 500 times. In the figure, there are orbits within orbits. Some orbits are regular, and others are chaotic. The regular orbits organize into "island chains" when the motion is quasi-periodic with a frequency ratio that is a ratio of small integers. Orbits that appear continuous are quasi-periodic with frequency ratios that are ratios of large integers.
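The structure of Fig. 5.7 is straightforward to reproduce. A minimal Python sketch (illustrative seeding of the initial conditions over the central region; not the text's own code):

import numpy as np

def lozi_orbit(x0, y0, B=-1.0, C=0.5, n_iter=500):
    # Iterate the Lozi map of Eq. (5.18); area-preserving when B = -1
    pts = np.empty((n_iter, 2))
    x, y = x0, y0
    for n in range(n_iter):
        x, y = 1.0 + y - C*abs(x), B*x
        pts[n] = (x, y)
    return pts

rng = np.random.default_rng(0)
orbits = [lozi_orbit(*rng.uniform(-1.0, 1.0, size=2)) for _ in range(200)]
# plotting each orbit in `orbits` with its own hue reproduces the
# orbits-within-orbits structure of Fig. 5.7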

5.3 The Chirikov Standard Map

One of the most illustrative discrete maps in Hamiltonian dynamics is the Chirikov map, also known as the Standard Map. It represents a simple rigid rotator that is kicked with a periodic impulse. The discrete map is a Poincaré section probed at multiples of the impulse period. The Standard Map displays the important property of resonant island chains that emerge and then dissolve into stochastic layers with increasing perturbation strength. The Standard Map is also the starting point for the important KAM theory, which predicts islands of stability for some orbits while other trajectories are chaotic.

5.3.1 Invariant tori and the Poincaré–Birkhoff theorem

If an integrable Hamiltonian has two degrees of freedom, then there are two angle variables and two action variables that span a four-dimensional phase space. For a fixed energy, the dimension is reduced by one to become a three-dimensional submanifold of the four-dimensional phase space.

Figure 5.8 Action-angle variables for an integrable Hamiltonian, with $\dot\theta = \omega(J)$. The two plots are equivalent. The polar plot on the right is also known as a twist map. Note that the inner and outer boundaries circulate in opposite directions.

The three-dimensional submanifold consists of a set of nested tori, like the layers of an onion, but shaped like a donut. There are two angles (θ₁, θ₂) and an action variable J that define a specific torus among the set of tori, and a trajectory is a helix that spirals around that torus. By taking a Poincaré section at a plane of fixed θ, the points of intersection of the helical trajectories with the plane describe a twist map, as the successive points of the first-return map twist around the origin without changing radius (Fig. 5.8). The radius of the twist map is proportional to the winding number ω₁/ω₂ = p/q. A unit ratio has equal periods for each angle and corresponds to J = 0 (zero twist). Orbits with positive J twist counterclockwise, while those with negative J twist clockwise. If a perturbation is added to the integrable Hamiltonian, then the perturbation causes a radial motion in the map.

Poincaré's last geometric theorem, published in 1912, conjectured that a twist map with inner and outer flows that rotate in opposite directions would have at least two fixed points: one would be an elliptical fixed point and the other would be a saddle (hyperbolic) fixed point (Fig. 5.9). The following year, David Birkhoff proved the theorem,¹ which is now known as the Poincaré–Birkhoff theorem. The importance of this result is that a nonintegrable perturbation of a Hamiltonian system creates a hyperbolic fixed point, which is the source of Hamiltonian chaos.

5.3.2 The Standard Map: kicked rotator

The relationship between a continuous dynamical system and its discrete map is sometimes difficult to identify. However, the familiar Standard Map arises naturally from a periodically kicked dumbbell rotator. The system has an angular momentum J and a phase angle θ. The strength of the angular momentum kick is given by the perturbation parameter K, and the torque of the kick is a function of the phase angle θ. The kicked rotator has the Hamiltonian²

$$H = \frac{1}{2}J^2 + K\cos\theta \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{t}{T} - n\right) \tag{5.19}$$

¹ G. D. Birkhoff, "Proof of Poincaré's geometric theorem," Transactions of the American Mathematical Society 14(1–4), 14–22 (1913).
² The Dirac delta function δ(·) is a generalized function that is used to describe an instantaneous impulse. It has infinite height and zero width, but its integral is equal to unity. The spectral content of a delta function spans all frequencies (its Fourier transform is a constant).

Figure 5.9 Representation of the Poincaré–Birkhoff theorem, which states that a twist map, for which the inner and outer boundaries carry a flow in opposite directions, must have at least two fixed points: one an elliptical fixed point and the other a hyperbolic (saddle) fixed point. Hamiltonian chaos arises from the hyperbolic fixed point.

where the kicks are evenly timed with period T. The parameter K is a perturbation, which can be large. The perturbation amplitude and sign depend on the instantaneous angle θ. The equations of motion from Hamilton's equations are

$$\dot J = K\sin\theta \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{t}{T} - n\right) \qquad \dot\theta = J \tag{5.20}$$

The values of J and θ just before the nth successive kick are $J_n$ and $\theta_n$, respectively. Because the evaluation of the variables occurs at the period T, these discrete observations represent the values of a Poincaré section. The rotator dynamics are continuous between each kick, leading to the discrete map

$$J_{n+1} = J_n + K\sin\theta_n \qquad \theta_{n+1} = \mathrm{mod}\,(\theta_n + J_{n+1},\; 2\pi) \tag{5.21}$$

in which the rotator is "strobed," or observed, at the regular kick period, with the angle taken modulo 2π. When K = 0, the orbits on the (θ, J) plane are simply horizontal lines: the rotator spins with regular motion at a speed determined by J. These horizontal lines represent open (freely rotating) orbits. As the perturbation K increases slightly from zero, as shown in Fig. 5.10, the horizontal lines are perturbed, and a closed orbit opens up near J = 0. There is a hyperbolic fixed point (a saddle point) at the origin and a single elliptical fixed point (a center) at the edges (−π, 0) and (π, 0), as predicted by the Poincaré–Birkhoff theorem. As the perturbation increases, more closed orbits appear around J = 0, and orbits begin to emerge at other values of J. There is an interesting pattern to the increasing emergence of closed orbits (and the eventual emergence of chaos at large perturbation).

This can be seen most easily by rescaling the (θ, J) axes to plot winding number versus angle, as in Fig. 5.11. The Standard Map represents the Poincaré section of perturbed motion on a torus in (θ, φ). The winding number is the ratio of the periods of θ and φ. When these periods have the ratio of small integers, like 1:2 or 2:3, the orbits are called "resonances." Orbits like these are highly sensitive to the perturbation, because a change in the period of one variable easily phase-locks with a change in the other variable. This is a type of synchronization, a topic that will be explored extensively in Chapter 6. The cascade of break-ups of open orbits into chains of islands with increasing perturbation strength eventually leads to chaos.

Figure 5.10 The (θ, J) space of the Standard Map for ε = 0.01. Most orbits are open (free rotation of the rotator), but, even at this low perturbation, a hyperbolic saddle point has appeared that separates closed orbits (oscillations).

Figure 5.11 Simulations of the Standard Map for (a) ε = 0.3 and (b) ε = 1.0. For weak perturbation, the small-integer-ratio resonances have separated into closed orbits, with many open orbits remaining. For strong perturbation, there is fully developed chaos that evolved from the fundamental hyperbolic fixed point. The Golden Mean winding-number orbits are among the only orbits that remain open.

Chaos emerges first at the hyperbolic saddle points of the lowest resonances, 0:1 and 1:1. From computer simulations, the threshold is ε_c ≈ 0.97. Examples of simulations are shown in Fig. 5.11 for ε = 0.3 and ε = 1.0. For ε = 0.3, there are open and closed orbits at winding numbers 0:1, 1:4, 1:3, 1:2, 2:3, 3:4, and 1:1. The primary hyperbolic fixed points in this plot are at (0, 0) and (0, 1). However, for ε = 1.0, many open orbits have broken up, and chaos has emerged at the primary hyperbolic points, as well as at secondary hyperbolic points. As the perturbation increases further, eventually only the primary elliptic point will remain as an island in a sea of chaos. Finally, even this disappears.
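The simulations behind Figs. 5.10 and 5.11 reduce to iterating Eq. (5.21). A minimal Python sketch follows; the perturbation is written as K here, playing the same role as the ε quoted with the figures, and the seeding ranges are illustrative.

import numpy as np

def standard_map(J, theta, K):
    # One iteration of the Chirikov Standard Map, Eq. (5.21)
    J_new = J + K*np.sin(theta)
    theta_new = np.mod(theta + J_new, 2.0*np.pi)
    return J_new, theta_new

K = 0.97                       # near the chaos threshold quoted in the text
rng = np.random.default_rng(2)
points = []
for orbit in range(50):        # many randomly seeded initial conditions
    J = rng.uniform(-np.pi, np.pi)
    theta = rng.uniform(0.0, 2.0*np.pi)
    for n in range(1000):
        J, theta = standard_map(J, theta, K)
        points.append((theta, J))
# a scatter plot of `points` shows the coexisting island chains
# and stochastic layers of Figs. 5.10 and 5.11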

5.4 KAM theory

³ A. N. Kolmogorov, "On conservation of conditionally periodic motions for a small change in Hamilton's function," Doklady Akademii Nauk SSSR (N.S.) 98, 527–530 (1954).
⁴ J. Moser, "On invariant curves of area-preserving mappings of an annulus," Nachrichten der Akademie der Wissenschaften in Göttingen II: Mathematisch-Physikalische Klasse, 1–20 (1962).
⁵ V. I. Arnold, "Small denominators and problems of the stability of motion in classical and celestial mechanics [in Russian]," Uspekhi Matematicheskikh Nauk 18, 91–192 (1963).
⁶ H. S. Dumas, The KAM Story: A Friendly Introduction to the Content, History and Significance of Classical Kolmogorov–Arnold–Moser Theory (World Scientific, 2014).

When Poincaré proved that the three-body problem had no closed solution, this conclusion opened the possibility that the Solar System, and the Earth's orbit in it, is not stable, and that perturbations could grow without bound, eventually ejecting the Earth from the Solar System (and ending all life). Yet the Earth has been orbiting the Sun in the habitable zone for over 4 billion years. How can one reconcile the stability of the Earth's orbit with the growth of perturbations in a many-body Hamiltonian system?

The problem of stability arises because rational resonances can grow to large amplitudes even under small perturbations. However, if orbital frequency ratios are irrational, then the resonant growth is prevented. The problem is that any irrational orbit ratio is arbitrarily close to rational ratios, so what keeps nonresonant orbits stable? The answer to this question was suggested by Kolmogorov in 1954,³ followed by rigorous proofs for restricted cases by Moser in 1962⁴ and Arnold in 1963,⁵ collectively known as KAM theory. Some have called this theory one of the great discoveries in dynamics of the twentieth century, on a par with relativity and quantum mechanics,⁶ not only because it removed the existential threat to the Earth and all life on it, but also because it ensures stability in the midst of chaos, with consequences that range from control theory to global warming.

The heart of KAM theory is the identification of resonances with small denominators, and the "distance" of these orbits from sufficiently irrational orbits. The strongest resonances, and the orbits that are most strongly affected by perturbations, are those with small denominators: 1/1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, etc. Any real positive number can be expressed as a continued fraction

$$\alpha = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}} = [a_0; a_1, a_2, a_3, \ldots] \tag{5.22}$$

Numbers with small denominators have simple continued fractions. For instance,

$$\frac{1}{3} = 0 + \cfrac{1}{3} = [0; 3, 0, 0, 0, \ldots]$$
$$\frac{2}{3} = 0 + \cfrac{1}{1 + \cfrac{1}{2}} = [0; 1, 2, 0, 0, 0, \ldots]$$
$$\frac{4}{5} = 0 + \cfrac{1}{1 + \cfrac{1}{4}} = [0; 1, 4, 0, 0, 0, \ldots] \tag{5.23}$$

A quadratic irrational is the solution to a quadratic equation with integer coefficients and has a periodic continued fraction, such as

$$\sqrt{2} = [1; 2, 2, 2, \ldots] \tag{5.24}$$

Nonquadratic irrationals have nonrepeating continued fractions, such as

$$\pi = [3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84, 2, \ldots] \tag{5.25}$$

Any irrational number can be approximated by a truncated continued fraction. For instance,

$$\pi - 3 \approx [0; 7, 15, 1, 0, 0, 0, \ldots] = \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1}}} = \frac{16}{113} = 0.1415929\ldots \tag{5.26}$$

which has converged to six decimal places of π − 3 = 0.1415926… using only three values in the continued fraction. The integer ratio $p_k/q_k = [a_0; a_1, a_2, a_3, \ldots, a_k]$ is called the kth convergent to the irrational number. One of the properties of the convergents of continued fractions is that they satisfy a Diophantine⁷ condition: if ν is an irrational number and $p_k/q_k$ is its kth convergent, then

$$\left|\nu - \frac{p_k}{q_k}\right| < \frac{1}{q_k^2 + 1} \tag{5.27}$$
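The convergents of a continued fraction are easy to compute exactly. The following Python sketch (the helper names are illustrative, not from the text) reproduces the convergents of π and checks them against the classical convergent bound |ν − p_k/q_k| < 1/q_k², a slightly weaker statement than Eq. (5.27):

from fractions import Fraction
import math

def continued_fraction(nu, depth):
    # Integer terms [a0; a1, a2, ...] of the continued fraction of nu
    terms = []
    for _ in range(depth):
        a = math.floor(nu)
        terms.append(a)
        frac = nu - a
        if frac == 0:
            break
        nu = 1.0/frac
    return terms

def convergent(terms):
    # Collapse [a0; a1, ..., ak] from the bottom up into p_k/q_k
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1/value
    return value

terms = continued_fraction(math.pi, 5)          # [3, 7, 15, 1, 292]
for k in range(1, len(terms)):
    pq = convergent(terms[:k+1])
    err = abs(math.pi - pq)
    # classical convergent bound |nu - p_k/q_k| < 1/q_k^2; cf. Eq. (5.27)
    print(pq, float(err), err < Fraction(1, pq.denominator**2))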

This mathematical conclusion has important physical consequences for orbital stability. It states that the strength of a resonant perturbation decreases rapidly as the integers become larger. Therefore, rational resonances that are described by ratios of larger integers are not as susceptible to perturbations as ratios of small integers. The consequence for dynamics from KAM theory is that, if an irrational orbital ratio is too close to a rational resonance with a small denominator, then it will be susceptible to perturbations. Conversely, if an irrational orbital ratio is far from a rational resonance with a small denominator, then it will be more immune to perturbations. Therefore, when a perturbation is turned on in a three-body

⁷ Diophantus was a third-century Greek mathematician from Alexandria who wrote a treatise on algebra, the Arithmetica. Diophantine problems are polynomials whose solutions are integers. Fermat's Last Theorem is a famous Diophantine problem seeking integer solutions to aⁿ + bⁿ = cⁿ, a problem Fermat scribbled in the margin of a Latin translation of Diophantus that he was reading in 1637. The question of the general solvability of Diophantine problems was Hilbert's celebrated tenth problem of 1900. This question was settled, in the negative, only in 1970: there are no general solutions.

Hamiltonian system, the first orbital configurations to experience the effects of the perturbation will be those with resonances with small denominators. Orbital ratios sufficiently far away from these resonances will remain undisturbed. From KAM theory, an irrational torus survives if it satisfies the dynamical Diophantine condition

$$\left|\nu - \frac{p_k}{q_k}\right| > \frac{\gamma}{q_k^n} \tag{5.28}$$

where γ is a positive constant, and n is a fixed exponent (often taken as n = 2.5). This condition provides a measure of how close an irrational number is to a resonance. If $q_k$ is a small integer, then the bound $\gamma/q_k^n$ is large, the condition is harder to satisfy, and the torus is more likely to dissolve. As the perturbation increases, resonances with larger denominators finally become susceptible, and the only orbits that remain stable are irrational ones with ratios sufficiently far from resonances. For instance, the irrational number with the slowest convergence is the Golden Mean

$$\phi = [1; 1, 1, 1, 1, \ldots] \tag{5.29}$$

which is considered to be the "most irrational" of the real numbers because it is the "farthest" from rational numbers (rational fractions), and it would therefore be the most stable orbital configuration against perturbations. The Golden Mean orbits of the Standard Map at ε = 1.0 shown in Fig. 5.11 are the only open orbits that remain at this high perturbation. KAM theory explains why periodic orbits can be stable even when chaos reigns in other parts of phase space. In the low-dimensional description of the Poincaré sections, the invariant tori that resist perturbations act as impenetrable barriers that keep chaotic motion contained to certain regions of phase space, protecting the stability of the rational orbits.⁸

⁸ While this would appear to protect the Earth from expulsion from the Solar System, this protection turns out not to be absolute as the number of degrees of freedom in a system increases. Vladimir Arnold studied this effect by considering a number n larger than 3 in the n-body problem. To his surprise, even though invariant tori continue to exist, they no longer occupy contiguous regions of phase space that act as barriers to the diffusion of chaotic trajectories across the full range of the phase space. Chirikov named this effect "Arnold diffusion" in 1969.

5.5 Degeneracy and the web map

As explained in Chapter 3, the rigid rotator has a unique frequency for each value of the action integral, ω = J/I, in contrast to the harmonic oscillator, which exhibits complete degeneracy, with the same frequency ω for every value of the action integral. This creates a fundamental difference in the chaotic behavior of these two Hamiltonian systems. For instance, the complementary example to the kicked rotator is the kicked harmonic oscillator, whose Hamiltonian is

$$H = \frac{1}{2}\left(p^2 + \omega_0^2 x^2\right) - \frac{\omega_0 K}{T}\cos x \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{t}{T} - n\right) \tag{5.30}$$

The corresponding equations of motion are

$$\dot x = \frac{\partial H}{\partial p} = p \qquad \dot p = -\frac{\partial H}{\partial x} = -\omega_0^2 x - \frac{\omega_0 K}{T}\sin x \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{t}{T} - n\right) \tag{5.31}$$

Evaluating the position and momentum just before and just after the nth kick yields

$$x(t_n^+) = x(t_n^-) \qquad p(t_n^+) = p(t_n^-) - K\omega_0 \sin x(t_n^-) \tag{5.32}$$

which converts to the web map

$$u_{n+1} = (u_n + K\sin\nu_n)\cos\alpha + \nu_n \sin\alpha \qquad \nu_{n+1} = -(u_n + K\sin\nu_n)\sin\alpha + \nu_n \cos\alpha \tag{5.33}$$

where

$$u = \dot x/\omega_0 \qquad \nu = -x \qquad \alpha = \omega_0 T \tag{5.34}$$

There can be resonance between the sequence of kicks and the natural oscillator frequency such that

$$\alpha = \omega_0 T = 2\pi/q \tag{5.35}$$

At these resonances, intricate web patterns emerge. The web map produces a web of stochastic layers when plotted on an extended phase plane. The symmetry of the web is controlled by the integer q, and the stochastic layer width is controlled by the perturbation strength K. Examples of web maps are shown in Figs. 5.12 and 5.13 for q = 4, 5, 6, and 7. When q = 4, a fourfold-symmetric web occurs. For small perturbation, most orbits are regular, but increasing perturbation causes stochastic layers to form that span the web structure. When q = 5 or q = 7, patterns emerge that are known as quasicrystals: they have fivefold and sevenfold symmetry, respectively, but also have complex substructure. Quasicrystalline structures occur in some types of metal alloys.
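The stochastic web can be generated directly from Eq. (5.33). A minimal Python sketch for the fourfold-symmetric case of Fig. 5.12 (q = 4, K = 0.618); the number of orbits and the seeding range are illustrative:

import numpy as np

def web_map(u, v, K, alpha):
    # One iteration of the web map, Eq. (5.33)
    w = u + K*np.sin(v)
    return (w*np.cos(alpha) + v*np.sin(alpha),
            -w*np.sin(alpha) + v*np.cos(alpha))

q, K = 4, 0.618                 # fourfold-symmetric web, as in Fig. 5.12
alpha = 2.0*np.pi/q             # resonance condition, Eq. (5.35)
rng = np.random.default_rng(3)
pts = []
for orbit in range(100):
    u, v = rng.uniform(-10.0, 10.0, size=2)
    for n in range(500):
        u, v = web_map(u, v, K, alpha)
        pts.append((u, v))
# a scatter plot of `pts` on the (u, v) plane traces the stochastic web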

Figure 5.12 Web maps for q = 4, for K = 0.618 with regular orbits and for K = 1.618 with coexisting stochastic and regular orbits.

Figure 5.13 Web maps for q = 5, 6, and 7 for K = 0.618. The patterns for q = 5 and 7 are known as quasicrystals and display much more complex orbits than the regular pattern for q = 6.

5.6 Quantum chaos [optional]

The existence of classical chaotic Hamiltonian systems raises the question of whether classical Hamiltonian chaos extends into the quantum regime. Quantum dynamics is exclusively Hamiltonian, and there are direct quantum analogs to classical systems. For instance, classical billiards inside a stadium (composed of two half-circles connected by straight sections) travel in straight lines between reflections from the stadium walls. When the stadium is a perfect circle, a single trajectory can be periodic or aperiodic, but the trajectory will not cover the entire interior. However, when a straight section is added, some trajectories will randomly fill the interior. The classical orbit in this case is called "ergodic," in analogy to the space-filling trajectories of statistical mechanics. On the other hand, if the stadium is quantum mechanical, then the wavefunction determined by the Hamiltonian consists of sets of standing waves. These standing waves extend through the stadium interior, as in Fig. 5.14. The quantum wavefunction is a superposition of many eigenstates and has a complicated time dependence.

Despite the ergodic character of both the classical and quantum dynamics of the stadium billiard problem, an interesting feature can be found in the quantum case that has a semiclassical interpretation.

Figure 5.14 Pseudo-chaos of stadium billiards. The classical case (top) shows only a few of the reflections of a single trajectory. The quantum case (bottom) shows a wave pattern composed of the superposition of many eigenstates.

Figure 5.15 Examples of quantum scars for quantum billiards in a stadium, compared with the corresponding unstable periodic classical orbits.

The plots on the left in Fig. 5.15 show the average probability density integrated over long times for three different initial conditions. There are regions of the stadium configuration space, resembling classical trajectories, where there is enhanced probability density. These patterns of enhanced probability are called quantum scars. They correspond to the periodic classical orbits shown on the right. The classical system is Hamiltonian, and the system trajectories have zero Lyapunov exponents because Hamiltonian dynamics conserves phase space area. However, the periodic classical orbits are not stable against small perturbations. The situation for the stadium is sometimes called pseudo-chaos for this reason. The interesting (and counter-intuitive) analog in the quantum case is actually stable. The recurrence of quantum probability density along the classically unstable orbits is caused by constructive interference.

Although the classical orbits are unstable, they are sufficiently stable (separating only in polynomial time) that the classical action is marginally stationary, and the quantum interference along those orbits is constructive from Feynman's principle of stationary quantum action.⁹ In this sense, classical pseudo-chaos is mirrored by quantum stability.

5.7 Summary

⁹ Feynman wrote his PhD thesis on the quantum consequences of stationary action. Dynamical paths of stationarity are paths of maximum quantum constructive interference. The correspondence between quantum and classical behavior arises because classical trajectories are the paths that have maximum quantum constructive interference. This principle is beautifully illustrated in the quantum scars of the stadium billiard system.

When integrable Hamiltonian systems are altered by nonintegrable Hamiltonian perturbations, the perfect regularity of conserved action integrals gives way to Hamiltonian chaos. Chaos tends to emerge first (for small perturbation) at hyperbolic points as the perturbation "scrambles" the nearby trajectories, directing them to one side or the other of the separatrix of the unperturbed system. These trajectories form a stochastic layer close to the separatrix in a process called separatrix chaos. Separatrix chaos is often observed for driven Hamiltonian systems.

Hamiltonian chaos has a distinctly different character from dissipative chaos. Hamiltonian dynamics, which by Liouville's theorem conserves phase space volumes, can stretch and fold an initial volume of state points, but the conservation prevents the condensation of trajectories onto strange attractors. In Hamiltonian chaos, a single initial condition creates a single trajectory that threads its way through phase space without repeating itself. Despite the absence of strange attractors, Poincaré sections from multiple trajectories can display astonishing patterns, as some regions of phase space are visited almost densely by trajectories while other regions remain essentially empty.

Because of the regular periodicity of the action angles in the original integrable system, quasi-periodicity plays a central role in the development of Hamiltonian chaos as the perturbation strength grows. The distinction between rational and irrational winding numbers (frequency ratios) is "softened" by the Hamiltonian dynamics. Perturbations open up island chains separated by hyperbolic fixed points for rational resonances with frequency ratios of small integers, but sufficiently irrational orbits remain essentially protected from the effects of weak perturbations. The proof of the stability of such orbits was a major achievement of the KAM theory of Hamiltonian chaos.

5.8 Bibliography

V. I. Arnold and M. B. Sevryuk, "Translation of the V. I. Arnold paper 'From Superpositions to KAM Theory' (Vladimir Igorevich Arnold. Selected-60, Moscow: PHASIS, 1997, pp. 727–740)," Regular & Chaotic Dynamics 19, 734–744 (2014).
H. S. Dumas, The KAM Story: A Friendly Introduction to the Content, History and Significance of Classical Kolmogorov–Arnold–Moser Theory (World Scientific, 2014).
R. Hilborn, Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers (Oxford University Press, 2001).
P. Holmes, "Poincaré, celestial mechanics, dynamical systems theory and chaos," Physics Reports 193, 137–163 (1990).
H.-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, 1999).
G. M. Zaslavsky, Hamiltonian Chaos and Fractional Dynamics (Oxford University Press, 2005).

5.9 Homework problems

Analytic problems

1. Gravitational equilibrium: A particle of unit mass moves along the line joining two stationary masses m₁ and m₂, separated by a distance a. Find the particle's equilibrium position and its stability based on a linearized analysis of

$$\ddot{x} = \frac{G m_2}{(x-a)^2} - \frac{G m_1}{x^2}$$

Sketch the phase portrait by hand (do not use a computer).

2. Double pendulum: Derive the equations for a double physical pendulum (two connected physical pendula) composed of two massive rods of lengths L₁ and L₂. The first rod is supported from above on a frictionless pivot, and the second rod is supported on the bottom of the first with a frictionless pivot. This system is a four-dimensional flow that displays Hamiltonian chaos.

3. Perturbed pendulum: The effective Hamiltonian for a perturbed pendulum in Eq. (5.12) is an equivalent Hamiltonian for a free pendulum. What is the angular frequency of the effective Hamiltonian? What does this angular frequency describe physically?

4. Resonant island chain: Draw an island chain, as in Fig. 5.3, when p and q are relatively prime, e.g., p = 5 and q = 3.

5. Hénon map: The Hénon map is

$$f(x, y) = 1 - Cx^2 + y \qquad g(x, y) = Bx \tag{5.36}$$

and is area-preserving when B = ±1.

(a) Find all the fixed points of the Hénon map and show that they exist only for C > C₀ for a certain C₀.

(b) A fixed point of a map is linearly stable if and only if all eigenvalues of the Jacobian satisfy |λ| < 1. Determine the stability of the fixed points of the Hénon map as a function of B and C.

6. Hénon–Heiles: Show explicitly that the equations (5.16) for the Hénon–Heiles flow are volume-preserving in phase space.

7. KAM theory: For γ = 5 in Eq. (5.28) with n = 2.5, what is the smallest k that satisfies the condition for the Golden Mean? For π? For √2?

8. Most irrational number: Prove that the Golden Mean continued fraction has the slowest convergence of all irrational numbers.

Computational projects

9. Width of stochastic layer: Estimate the width of the stochastic layer for the driven double well based on system and drive parameters.

10. Nonlinear resonances: In Eq. (5.11), numerically simulate low-order p and q (e.g., 1:1, 1:2, 1:3, 2:3, 1:4, 3:4, . . .) and plot their (J, θ) phase-plane flows.

11. KAM theory: Graph the left side of Eq. (5.28) for values of ν from zero to one for k = 3, 4, and 5.

12. Double pendulum: Numerically explore the chaotic behavior of the four-dimensional flow of a double pendulum (derived in Problem 2 above).

13. Hénon map: Show (numerically) that one fixed point is always unstable, while the other is stable for C slightly larger than C₀. What happens when λ = −1 at $C_1 = \frac{3}{4}(1 - B)^2$? Create a bifurcation diagram for the Hénon map as a function of C for B = 0.3.

14. Lozi map: Numerically explore the Lozi map and test the limit when B → −1:

$$x_{n+1} = 1 + y_n - C\,|x_n| \qquad y_{n+1} = B x_n$$

What do you notice in the limit?

15. Lozi map: Numerically explore the Lozi map with C = 0.618. Is it qualitatively different than when C is a ratio of small numbers?

16. Hamiltonian chaos: Using the methods of Poincaré sections and invariant KAM tori, explore the dynamics of trajectories of a particle subject to the potential

$$V(x, y) = \frac{1}{2}\left(y - 2x^2\right)^2 + \frac{1}{2}x^2$$

Explore the similarities and differences with the Hénon–Heiles model, first analytically and then numerically.

17. Web map: Numerically explore the web map (5.33) for large q and also for irrational q.

6 Coupled Oscillators and Synchronization

6.1 Coupled linear oscillators
6.2 Simple models of synchronization
6.3 Rational resonances
6.4 External synchronization
6.5 Synchronization of chaos
6.6 Summary
6.7 Bibliography
6.8 Homework problems

Saturn’s Rings from Cassini Orbiter. Photograph from http://photojournal.jpl.nasa.gov/catalog/PIA07873.

Synchronization of the motions of separate, but interacting, systems is a ubiquitous phenomenon that ranges from the phase-locking of two pendulum clocks on a wall¹ to the pacemaker cells in the heart that keep it pumping and keep you alive. Other examples include the rings of Saturn, atoms in lasers, circadian rhythms, neural cells, electronic oscillators, and biochemical cycles. Synchronization cannot occur among linearly coupled linear oscillators, for which the normal modes of oscillation are linear combinations of the modes of the separate oscillators. In nonlinear oscillators, on the other hand, frequencies are "pulled" or shifted. Two different isolated frequencies can even merge, becoming "entrained" at a single "compromise" frequency at which both systems are frequency-locked to each other.


¹ Discovered by Christiaan Huygens in 1665.

The previous chapters described the dynamics of nonlinear systems in state space. When dynamical variables are strongly coupled, subsets of variables cannot be isolated from the others. On the other hand, in many complex systems it is possible to identify numerous semi-autonomous subsystems contained within larger systems. For instance, there are animal species within a broader ecosystem, compartmentalized biochemical reactions within cells, or individual neurons in a neural network. The subsystems are often nonlinear, and may be autonomous oscillators or even chaotic. This concept of distinguishable systems coupled within wide interaction networks lies at the boundary between two worlds. On one hand, there is the fully holistic viewpoint, in which complex systems of many variables and high dimension cannot be separated into subunits without destroying the essential behavior that is to be understood. On the other hand, there is the reductionist viewpoint, in which each component of the system is isolated and studied individually. Reality often lies between: individual systems and their properties are understood in isolation but, when coupled, lead to emergent properties of the interacting network that are often surprising.

6.1 Coupled linear oscillators

Linear oscillators that have linear coupling cannot be synchronized. Even if all the oscillators are identical, this is a condition of neutral stability, in the sense that a perturbation to the system will neither grow nor decay. The steady-state solutions for N coupled oscillators consist of a spectrum of N distinct oscillation frequencies called eigenfrequencies. Because of the coupling, the eigenfrequencies are not the same as the original uncoupled frequencies. New frequencies of the collective oscillation modes arise, but there is no overall synchronization of the phases of motion. Synchronization requires nonlinearity and dissipation, as we shall see in the next section. Here we start with a study of the general behavior and properties of coupled linear oscillators.²

A simple example of two coupled linear oscillators consists of two masses attached by springs to two walls, with a third spring attaching the masses, as in Fig. 6.1. If the masses m₁ = m₂ = M are equal, and the spring constants k₁ = k₂ = k attaching them to the walls are equal, then the equations of motion for the two masses are

$$M\ddot x_1 + (k + k_{12})\,x_1 - k_{12}\,x_2 = 0 \qquad M\ddot x_2 + (k + k_{12})\,x_2 - k_{12}\,x_1 = 0 \tag{6.1}$$

We seek solutions for the normal modes of oscillation when both masses move with the same frequency ω. The assumed solutions are

$$x_1(t) = B_1 e^{i\omega t} \qquad x_2(t) = B_2 e^{i\omega t} \tag{6.2}$$

² For a complete description of classical coupled oscillators, see S. T. Thornton and J. B. Marion, Classical Dynamics of Particles and Systems, 5th ed. (Brooks/Cole, 2003).

Figure 6.1 Two equal masses on equal springs coupled by a third spring.

which lead to the coupled equations

$$\left(-M\omega^2 + k + k_{12}\right)B_1 - k_{12}B_2 = 0 \qquad -k_{12}B_1 + \left(-M\omega^2 + k + k_{12}\right)B_2 = 0 \tag{6.3}$$

For a nontrivial solution to exist, the determinant must vanish:

$$\begin{vmatrix} -M\omega^2 + k + k_{12} & -k_{12} \\ -k_{12} & -M\omega^2 + k + k_{12} \end{vmatrix} = 0 \tag{6.4}$$

which is the characteristic (or secular) equation. Solving for the eigenfrequencies yields

$$\omega_1 = \sqrt{\frac{k}{M}} \qquad \omega_2 = \sqrt{\frac{k + 2k_{12}}{M}} \tag{6.5}$$

and the normal modes of oscillation are

$$\eta_1 = x_1 - x_2 \qquad \eta_2 = x_1 + x_2 \tag{6.6}$$

which define the asymmetric and symmetric modes, respectively. In the symmetric mode, the two masses move together and there is no relative displacement, which is why they oscillate at the same frequency as the uncoupled masses. For the asymmetric mode, the frequency of oscillation is larger than the uncoupled frequency because of the additional stiffness of the coupling spring.

For two unequal masses with unequal spring constants, the coupled equations are

$$\left(-\omega^2 + \omega_1^2 + \frac{k_{12}}{m_1}\right)B_1 - \frac{k_{12}}{m_1}B_2 = 0 \qquad -\frac{k_{12}}{m_2}B_1 + \left(-\omega^2 + \omega_2^2 + \frac{k_{12}}{m_2}\right)B_2 = 0 \tag{6.7}$$

using the isolated frequencies

$$\omega_1^2 = \frac{k_1}{m_1} \qquad \omega_2^2 = \frac{k_2}{m_2} \tag{6.8}$$

Figure 6.2 Squared eigenfrequencies ω² for two unequal masses and coupling k₁₂ as a function of k₁. The parameters are m₁ = 1, m₂ = 1.2, k₁₂ = 20, and k₂ = 100.

The normal-mode eigenfrequencies are solutions to

$$\omega^2 = \frac{1}{2}\left(\omega_2^2 + \frac{k_{12}}{m_2} + \omega_1^2 + \frac{k_{12}}{m_1}\right) \pm \frac{1}{2}\sqrt{\left(\omega_2^2 + \frac{k_{12}}{m_2} - \omega_1^2 - \frac{k_{12}}{m_1}\right)^2 + \frac{4k_{12}^2}{m_1 m_2}} \tag{6.9}$$

³ See C. Kittel, Introduction to Solid State Physics, 8th ed. (Wiley, 2005).
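Equation (6.9) can be cross-checked by numerically diagonalizing the coupled system. The following is a minimal Python sketch, not from the text, assuming the parameters of Fig. 6.2; it compares the eigenvalues of the mass-weighted stiffness matrix with the closed form:

import numpy as np

m1, m2, k12, k2 = 1.0, 1.2, 20.0, 100.0
for k1 in np.linspace(1.0, 250.0, 6):
    # stiffness and mass matrices of the coupled equations (6.7)
    Kmat = np.array([[k1 + k12, -k12],
                     [-k12,     k2 + k12]])
    Mmat = np.diag([m1, m2])
    w2_num = np.sort(np.linalg.eigvals(np.linalg.inv(Mmat) @ Kmat).real)

    # closed-form squared eigenfrequencies of Eq. (6.9)
    w1sq, w2sq = k1/m1, k2/m2
    a = 0.5*(w2sq + k12/m2 + w1sq + k12/m1)
    b = 0.5*np.sqrt((w2sq + k12/m2 - w1sq - k12/m1)**2 + 4.0*k12**2/(m1*m2))
    print(k1, w2_num, (a - b, a + b))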

The eigenfrequencies are shown in Fig. 6.2 as a function of the spring constant k₁ for the case m₁ = 1, m₂ = 1.2, k₁₂ = 20, and k₂ = 100. There are again two normal modes: the symmetric and the asymmetric modes. As the spring constant k₁ increases, the system displays an avoided crossing as the two modes mix and trade behavior. The lower branch begins with a linear dependence on the stiffness of the first spring and transitions to a constant eigenfrequency at high stiffness. The upper branch begins with constant eigenfrequency, then converts to a linear dependence on the first stiffness. Because the modes mix, the linear crossing is avoided. Coupled linear oscillations abound in physics and have many analogs in quantum systems and wave mechanics. For instance, such avoided crossings are common in quantum systems and are similar to the dispersion curves of photon–phonon polaritons.³

The coupling of linear systems retains the principle of superposition. The modes may be scrambled, with new normal modes as combinations of the original oscillator motions, but nothing new arises from the mixture. This all changes

when the systems become nonlinear, where the oscillators themselves may be nonlinear, or linear oscillators may be coupled nonlinearly. New frequencies can appear, or multiple old frequencies can be entrained to a single frequency as the system synchronizes. Nonlinear synchronization of oscillators is a fundamental physical process that lies at the core of many electronic circuits, as well as in many biological systems like the brain and heart. Some simple models capture essential features of synchronization that hold true even for these more sophisticated systems.

6.2 Simple models of synchronization

There are many simple models for the synchronization of two oscillators. These can be treated analytically in some cases, and qualitatively in others. The most common examples are (1) integrate-and-fire oscillators and (2) coupling on the torus (action-angle oscillators). These oscillators are no longer the harmonic oscillators of linear physics. The concept of an "oscillator" is more general, and often intrinsically nonlinear, such as an integrating circuit that fires (resets) when its voltage passes a threshold, as in an individual neuron.

6.2.1 Integrate-and-fire oscillators

Integrate-and-fire oscillators are free-running linear integrators with a threshold. They integrate linearly in time until they surpass a threshold, and then they reset to zero: they "fire." There are familiar electronic examples of such oscillators using op-amps and comparator circuits. There are also many biological examples, such as neurons or pacemaker cells in the heart. The existence of a threshold converts such a free-running integrator into a nonlinear oscillator, and with the nonlinearity comes the possibility of locking two independent (but coupled) oscillators so that they share the same frequency or phase.

An example of integrate-and-fire oscillators is shown in Fig. 6.3 for two oscillators that have identical slopes and thresholds, but different phases (initial conditions). The oscillators increase linearly until they cross a threshold value, then reset to zero and begin increasing linearly again. The free-running oscillator equation is

$$y(t) = \mathrm{mod}\,(\omega_0 t,\; \alpha) \tag{6.10}$$

where α is the threshold and T = α/ω0 is the period of oscillation. In this example, the two oscillators are free running. However, it is possible to couple the oscillators to cause them to interact and approach a steady-state condition. The steady-state condition can be characterized as phase-locked if they are identical oscillators, and as frequency-locked if they are non-identical.

Figure 6.3 Two (uncoupled) integrate-and-fire oscillators that have identical slopes and thresholds.

Figure 6.4 Example of two identical oscillators, with an initial phase lag, becoming phase-locked by one-way coupling.

6.2.1.1 Phase locking

One simple form of coupling is called one-way coupling, when a free-running master oscillator transmits a signal to a receiver oscillator. The master oscillator is unaffected by the behavior of the receiver, but the receiver is perturbed by the signal from the master. For instance, the two oscillators can have the coupled form

$$y_1(t) = \mathrm{mod}\,(\phi_0 + \omega_0 t) \qquad y_2(t) = \mathrm{mod}\,(\omega_0 t + g\,|y_1(t) - y_2(t)|) \tag{6.11}$$

where the initial condition for the two oscillators has an original phase offset φ₀ and the modulus function is modulo unity. With this form of coupling, the second oscillator phase receives a "kick" of magnitude g (called the coupling constant) when the master oscillator resets to zero. A kick brings the phases of the two oscillators closer, and the succession of kicks brings the phases asymptotically to coincide: they become phase-locked, as shown in Fig. 6.4.

6.2.1.2 Frequency locking

Two oscillators can have different frequencies, yet can oscillate in perfect synchrony when coupled. The asymmetric coupling in this case takes the same form as Eq. (6.11) but with ω₁ > ω₂:

Figure 6.5 Frequency locking of two integrate-and-fire oscillators. The master oscillator has the higher frequency. When it resets, it gives a kick to the receiver oscillator that causes it to reset as well.

$$y_1(t) = \mathrm{mod}\,(\phi_0 + \omega_1 t) \qquad y_2(t) = \mathrm{mod}\,(\omega_2 t + g\,|y_1(t) - y_2(t)|) \tag{6.12}$$

In this case, the first oscillator is the master oscillator that has a simple unperturbed integrate-and-fire behavior. The receiver function receives a contribution from the master oscillator with the coupling constant g. Before the master fires, the receiver is noticeably below threshold, because it rises at a slower rate. However, when the master resets to zero, the receiver oscillator gets a large kick upward that puts it over the threshold, and it resets right behind the master. Then they both start integrating again until they trigger again, and so forth, as shown in Fig. 6.5.

It is simple to calculate the coupling strength required to lock two unequal frequencies ω₁ and ω₂. The receiver oscillator must go over the threshold when the master fires and resets to zero. This puts the following condition on the coupling coefficient g just after the master resets to zero:

$$\omega_2 t_{\rm th} + g\,|0 - y_2(t_{\rm th})| > \text{threshold}$$
$$\omega_2 t_{\rm th} + g\,\omega_2 t_{\rm th} > \omega_1 t_{\rm th}$$
$$\omega_2 (1 + g) > \omega_1$$
$$g > \frac{\omega_1}{\omega_2} - 1 = \frac{\Delta\omega}{\omega_2} \tag{6.13}$$

which yields the condition for frequency locking:

$$g > \frac{\Delta\omega}{\omega_2} \tag{6.14}$$

Therefore, the coupling strength must be greater than the relative frequency difference between the two oscillators. This is a general feature of many types of nonlinear oscillator synchronization. It states that, for a given value of the coupling g, there is a range of frequencies Δω that can be frequency-locked. As the coupling increases, the range of frequencies that can be synchronized gets broader. This relation is depicted graphically in Fig. 6.6 and is called an Arnold tongue (after V. I. Arnold, who studied these features).

Figure 6.6 Principle of an Arnold tongue. The range of frequencies that can be locked to a master oscillator increases with increasing coupling strength.

The surprisingly simple relation stated in Eq. (6.14) is true even for more sophisticated synchronization, as we will see repeatedly in this chapter and the next.
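The locking threshold of Eq. (6.14) can be tested with a direct discretization of Eq. (6.12). The Python sketch below is illustrative, not the text's code: the implicit coupling term is evaluated with the previous time step of y₂, and the threshold is normalized to unity.

import numpy as np

def fire_count(y):
    # number of resets (sharp drops) of an integrate-and-fire signal
    return int(np.sum(np.diff(y) < -0.5))

t = np.linspace(0.0, 50.0, 200_001)
omega1, omega2, phi0 = 1.0, 0.9, 0.3
g = 0.2                       # above d_omega/omega2 = 0.111..., Eq. (6.14)

y1 = np.mod(phi0 + omega1*t, 1.0)     # master; threshold normalized to 1
y2 = np.zeros_like(t)
for i in range(1, len(t)):
    # Eq. (6.12) with the implicit y2 replaced by its previous time step
    y2[i] = np.mod(omega2*t[i] + g*abs(y1[i] - y2[i-1]), 1.0)

print("firings  master:", fire_count(y1), " receiver:", fire_count(y2))
# equal firing counts indicate frequency locking; rerun with g = 0.05
# (below the threshold of Eq. (6.14)) and the counts differ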

6.2.2 Frequency-locked phase oscillators

In Chapter 3, an integrable Hamiltonian system was described by motion on a hypertorus with equations of motion

$$H = \omega^a J_a \qquad \dot J_a = -\frac{\partial H}{\partial \theta^a} \qquad \dot\theta^a = \omega^a \tag{6.15}$$

These dynamics describe an action-angle oscillator, or an autonomous phase oscillator. For a nonintegrable system (but one that reduces to Eq. (6.15) when the nonintegrability is small), the equation for the phase angles is

$$\dot\theta^a = \omega^a + g f(\theta^a) \tag{6.16}$$

where g is a coupling parameter and f(θ) is periodic in the phase angles. If the phase angles are those of the limit cycles of autonomous oscillators, then Eq. (6.16) describes the mathematical flow of coupled oscillators. Consider the case of two oscillators that are coupled with a sinusoidal coupling function:

$$\dot\theta_1 = \omega_1 + g_1 \sin(\theta_2 - \theta_1) \qquad \dot\theta_2 = \omega_2 + g_2 \sin(\theta_1 - \theta_2) \tag{6.17}$$

Figure 6.7 There is one stable node and one unstable node for phase-locking on the torus when |ω₁ − ω₂| < g₁ + g₂.

When the two systems are coupled (and g₁, g₂ > 0), the rate of change of the relative phase becomes the measure of synchronization. If the systems are synchronized, then the phase difference remains a constant. The rate of change of the relative phase is

$$\dot\phi = \dot\theta_1 - \dot\theta_2 = \omega_1 - \omega_2 - (g_1 + g_2)\sin\phi \tag{6.18}$$

This is equivalent to a one-dimensional oscillator with the phase portrait shown in Fig. 6.7. The amplitude of the sine curve is g₁ + g₂, and the mean value is ω₁ − ω₂. The rate of change of the relative phase can only be zero (to define a fixed point) if the amplitude is larger than the mean value,

$$|\omega_1 - \omega_2| < g_1 + g_2 \tag{6.19}$$

for which there is one stable and one unstable node. The system trajectories are attracted to the stable node as the two systems become frequency- and phase-locked. The constant phase offset is then

$$\sin\phi^* = \frac{\omega_1 - \omega_2}{g_1 + g_2} \tag{6.20}$$

The two systems also share the common frequency

$$\omega^* = \frac{g_2\omega_1 + g_1\omega_2}{g_1 + g_2} \tag{6.21}$$


called the compromise frequency, and the system is frequency-locked. The evolution of the motion on the torus is shown in Fig. 6.8. The trajectories are repelled from the unstable manifold and attracted to the stable manifold. For synchronization, there is clearly a threshold related to the frequency difference between the autonomous oscillators and the strength of the coupling. As long as the frequency difference is less than the coupling, the system is frequency-locked. If the frequency offset is too large, or if the coupling is too small, then the system remains quasiperiodic. This is just what we saw in Eq. (6.14) for the integrate-and-fire oscillators described by the Arnold tongue in Fig. 6.6.

Figure 6.8 The stable and the unstable limit cycles on the torus (axes θ1 and θ2, from 0 to 2π) have slope = 1 when they are phase-locked.
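A direct numerical check of Eqs. (6.17)–(6.21) can be made with any ODE solver. The sketch below (all parameter values are my own choices) integrates the coupled flow and compares the measured common frequency with the compromise frequency of Eq. (6.21).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the coupled flow of Eq. (6.17) with illustrative parameters.
# Since |w1 - w2| < g1 + g2 here, the pair locks, and the long-time mean
# frequencies should both approach the compromise frequency of Eq. (6.21).
w1, w2, g1, g2 = 1.10, 1.00, 0.08, 0.05

def flow(t, th):
    th1, th2 = th
    return [w1 + g1 * np.sin(th2 - th1),
            w2 + g2 * np.sin(th1 - th2)]

sol = solve_ivp(flow, [0, 1000], [0.0, 0.0], max_step=0.1)
f1, f2 = sol.y[:, -1] / sol.t[-1]          # long-time mean frequencies
print("measured frequencies:", f1, f2)
print("compromise frequency (6.21):", (g2 * w1 + g1 * w2) / (g1 + g2))
```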

6.3 Rational resonances

In Chapter 5, we studied the importance of rational resonances in the Standard Map and the KAM theory of Hamiltonian (conservative) systems. When commensurate frequencies are related as the ratio of small integers, increasing perturbation causes island chains to form in the discrete map, which dissolve with further increasing perturbation into separatrix chaos that originates from hyperbolic points. These conservative systems obey Liouville's theorem and hence do not exhibit attractors. However, by introducing dissipation, attractors can form, and rational resonances play an especially important role in the system dynamics. Rather than forming island chains, the dynamics become frequency-locked. Because of the intrinsic periodicity of flow on the torus, there is a natural extension of the continuous dynamics to a discrete map called the sine-circle map.


6.3.1 The sine-circle map

The natural period of the unperturbed oscillator can be taken as the period of the "strobe" for the Poincaré section such that ω1T = 2πN. The dynamics are assumed to be kicked (as in the kicked rotator and web map of Chapter 5) and to develop freely between kicks. The development of the slow phase φ is then

$$\phi_{n+1} = \phi_n + \omega_1 T - \omega_2 T - T(g_1 + g_2)\sin\phi_n = \phi_n + \omega_1 T\left[\left(1 - \frac{\omega_2}{\omega_1}\right) - \frac{g_1 + g_2}{\omega_1}\sin\phi_n\right] \tag{6.22}$$

Because of the assumed periodicity, this can be re-expressed as

$$\phi_{n+1} = \mathrm{mod}\left(\phi_n + \left(1 - \frac{\omega_2}{\omega_1}\right) - \frac{g_1 + g_2}{\omega_1}\sin\phi_n,\; 2\pi\right) \tag{6.23}$$

By assigning

$$\Omega = 1 - \frac{\omega_2}{\omega_1} \qquad g = -\frac{g_1 + g_2}{\omega_1} \tag{6.24}$$

this becomes

$$\phi_{n+1} = f(\phi_n) = \mathrm{mod}\left(\phi_n + \Omega + g\sin\phi_n,\; 2\pi\right) \tag{6.25}$$

In this discrete map, the parameter Ω plays the role of the relative frequency difference between the two oscillators on the torus. Therefore, the one-dimensional sine-circle map captures the essential physics of the coupled oscillators on the two-dimensional torus, with an important new feature: the discontinuous "kicks" contain an infinite number of frequency harmonics that can give rise to rational resonances that were not present in the original continuous dynamics of Eq. (6.18). The sine-circle map has a fixed point when

$$\phi^* = \mathrm{mod}\left(\phi^* + \Omega + g\sin\phi^*,\; 2\pi\right) \tag{6.26}$$

giving

$$\sin\phi^* = -\frac{\Omega + 2\pi N}{g} \tag{6.27}$$

for any integer N, positive or negative. This is real-valued for

$$g > |\Omega + 2\pi N| \tag{6.28}$$

with a stability determined by

$$\frac{df}{d\phi} = 1 + g\cos\phi^* \tag{6.29}$$

Figure 6.9 Numerical results of the sine-circle map generated using Ω = 3/2 with (a) g = 0.8, (b) g = 1.0, (c) g = 1.2, and (d) g = 1.5. All g-values are normalized by 2π.

Hence, the fixed point is stable when this value lies between −1 and 1.⁴ Therefore, the limit cycle is stable when

$$-2 < g\cos\phi^* < 0 \tag{6.30}$$

⁴ Remember that the sine-circle map is a discrete mapping, and Eq. (6.29) is a Floquet multiplier.

The integer N in Eq. (6.27) provides many possible values of g and Ω that can yield stable limit cycles. It also opens up many routes for complicated iterative behavior as the system moves in and out of the stability condition (6.30).
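The sine-circle map itself is a one-line iteration, Eq. (6.25), so the behavior shown in Figs. 6.9 and 6.10 is easy to explore. In the sketch below, following the figure captions, g is supplied in units of 2π; treating Ω the same way is an assumption made here, and counting distinct late-time iterates is a crude illustrative way to separate locked cycles from chaotic orbits.

```python
import numpy as np

# Sketch of the sine-circle map, Eq. (6.25). A locked period-n cycle
# visits only n distinct values; a chaotic orbit visits many.
def circle_map_orbit(Omega, g, n_transient=500, n_keep=200, phi0=0.1):
    phi = phi0
    for _ in range(n_transient):          # discard the transient
        phi = np.mod(phi + Omega + g * np.sin(phi), 2 * np.pi)
    orbit = np.empty(n_keep)
    for n in range(n_keep):
        phi = np.mod(phi + Omega + g * np.sin(phi), 2 * np.pi)
        orbit[n] = phi
    return orbit

Omega = 2 * np.pi * 1.618                 # golden-mean frequency ratio (assumed units)
for g in [0.2, 0.45, 0.8]:                # g in units of 2*pi, as in Fig. 6.10
    orbit = circle_map_orbit(Omega, 2 * np.pi * g)
    print(f"g = {g}: {len(np.unique(orbit.round(6)))} distinct late-time values")
```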


Iterations of the sine-circle map are illustrated in Fig. 6.9 for a frequency ratio of Ω = 3:2 shown for different values of the coupling g. For g = 0.8 × 2π, the values iterate randomly. For g = 1.0 × 2π, the system has a period-2 cycle with φ* = (π/2, 3π/2). For g = 1.2 × 2π, the system is again chaotic, while for g = 1.5 × 2π, it has a stable fixed point at φ* = π/2. Note that the fixed point lies on the diagonal function y = x. The complicated behavior with increasing g is captured in Fig. 6.10 for a frequency ratio equal to the golden mean Ω = 1.618, shown as successive iterates as g increases from 0 to 2 (units of 2π). There is no frequency locking below g = 0.1 × 2π. Above g = 0.1 × 2π, some structure appears, until a stable frequency lock occurs between g = 0.4 × 2π and g = 0.5 × 2π. The locking threshold occurs at the ratio Ω/g = 4. Note that this is far above the "weak-coupling" limit (when frequency locking occurs as a perturbation of uncoupled oscillators), but it shows the rich structure for strong coupling.

6.3.2 Resonances

The sine-circle map has rational resonances that occur when two locked frequencies are in ratios of small integers, creating considerable harmonic structure. The resonance structure of the sine-circle map is shown in Fig. 6.11 as a plot of the frequency range that is locked for increasing coupling g.

Figure 6.10 Iterated values for the sine-circle map for Ω = 1.618 (the golden mean) shown for g = 0 to 2 (units of 2π).

Figure 6.11 Arnold tongues for the sine-circle map, which captures the behavior of two mutually coupled oscillators. The dark regions are the frequency-locked regions, at frequency ratios 1/2, 2/3, 3/4, 4/5, 1/1, 6/5, 5/4, 4/3, and 3/2.

The graph of Fig. 6.11 was generated by sweeping Ω from 0 to 2 for a fixed coupling constant g. The frequency ranges over which the frequencies are locked are plotted in black. The figure shows a strong Arnold tongue structure for frequency ratios of small integers. The frequency entrainment range is strongest for the 1:1 resonance, followed by 1/2, 2/3, 3/4, and 4/5, with symmetric tongues for resonances larger than 1:1 at 4/3, 5/4, and 6/5, respectively. The Arnold tongue structure of the sine-circle map can be partially understood (or modeled) through the locking of phase oscillators whose frequency ratio is at or near a rational resonance, described by the coupled equations

$$\dot{\theta}_1 = \omega_1 - \frac{g}{p^{7/2} + q^{7/2}}\sin(p\theta_1 - q\theta_2)$$
$$\dot{\theta}_2 = \omega_2 - \frac{g}{p^{7/2} + q^{7/2}}\sin(q\theta_2 - p\theta_1) \tag{6.31}$$

for p and q integers. The power n = 7/2 in the denominator is motivated by the KAM theory for resonances. The argument of the sine function defines the slow phase as

$$\phi = p\theta_1 - q\theta_2 \tag{6.32}$$

Multiplying the top equation by p, the bottom equation by q, and subtracting gives

$$p\dot{\theta}_1 - q\dot{\theta}_2 = p\omega_1 - q\omega_2 - \frac{p + q}{p^{7/2} + q^{7/2}}\,g\sin(p\theta_1 - q\theta_2)$$
$$\dot{\phi} = (p\omega_1 - q\omega_2) - \frac{p + q}{p^{7/2} + q^{7/2}}\,g\sin\phi \tag{6.33}$$

which has the same form as Eq. (6.18). The critical threshold for locking a resonance is therefore

$$g_c = \frac{p^{7/2} + q^{7/2}}{p + q}\,|p\omega_1 - q\omega_2| \tag{6.34}$$

The denominator in Eq. (6.33) reduces the coupling strength with increasing p and q, motivated by the KAM effect in Hamiltonian theory that protects certain “irrational” resonances from chaos. Synchronization of rational resonances shares a mathematical similarity with the invariant tori of KAM theory within Diophantine analysis (see Chapter 5). Frequency resonances are common in the Solar System. For instance, the Earth’s Moon is locked in a 1:1 resonance such that the period of the Moon’s rotation is equal to the period of its orbit around the Earth. In addition, frequency resonances are partly responsible for the origin of the gaps in the rings of Saturn, shown in Fig. 6.12. Saturn has many moons, and there are “shepherd” moons that entrain nearby ring particles in frequency-locked bands with gaps where ring material is swept out. Shepherd moons have been identified for all the major gaps in the rings, corresponding to resonances of frequency ratios of small integers. For instance, the outer edge of the A-ring is in a 7:6 resonance condition with the moon Janus. The ring materials circle Saturn seven times while Janus orbits

six times. The boundary between the outer edge of the B-ring and the Cassini division is in a 2:1 resonance condition with the moon Mimas, while the Encke gap is in a 5:3 resonance condition with the same moon.

Figure 6.12 Details of Saturn's B-ring, showing the pronounced gaps of the Cassini division. Photograph from http://photojournal.jpl.nasa.gov/catalog/PIA08955.

6.4 External synchronization

Often, a nonlinear autonomous oscillator, such as a van der Pol oscillator or a limit-cycle oscillator, is driven by an external periodic force. If the drive frequency is near the natural frequency of the oscillator, then the oscillator will be entrained by the drive frequency. Just as for the coupled oscillators, stronger coupling to the drive leads to wider frequency ranges over which the oscillator frequency will be locked to the drive.

6.4.1 External synchronization of an autonomous phase oscillator

Consider an autonomous phase oscillator, as in Eq. (6.15), subject to a sinusoidal external periodic drive that depends on the phase of the oscillator:

$$\dot{\theta} = \omega_0 + g\sin(\theta - \omega_d t) \tag{6.35}$$

where θ is the angular coordinate of the oscillator, ωd is the drive angular frequency, and ω0 is the autonomous frequency of the isolated oscillator. In the absence of the drive, the phase oscillator has the trajectory θ = ω0t. When the oscillator is locked to the external drive, θ̇ = ωd, and the phase of the oscillator perfectly tracks the phase of the drive such that

$$\omega_d = \omega_0 + g\sin\left(\theta^* - \omega_d t\right) \tag{6.36}$$

where θ* − ωd t = const. The condition for frequency locking follows from |sin θ| < 1, and hence is

$$|\omega_d - \omega_0| < g \tag{6.37}$$

Therefore, if the angular frequency difference between the drive frequency and the natural oscillator frequency is smaller than the coupling constant g, then the system will be locked. When the frequency offset between the drive frequency and the natural frequency is larger than the coupling constant, then the system is not locked, and the phase of the oscillator drifts relative to the phase of the drive. In this regime, it is possible to approach the dynamics from the point of view of the phase difference between the two systems. This leads to an equation for the “slow phase” ψ defined by

$$\psi = \theta - \omega_d t \qquad \dot{\psi} = \dot{\theta} - \omega_d \tag{6.38}$$

When the system is almost frequency-locked, this phase changes slowly. In terms of the slow phase, the driven autonomous oscillator (6.35) is now

$$\dot{\psi} + \omega_d = \omega_0 + g\sin\psi \tag{6.39}$$

which is

$$\frac{d\psi}{dt} = -\Delta\omega + g\sin\psi \tag{6.40}$$

where

$$\Delta\omega = \omega_d - \omega_0 \tag{6.41}$$

is the frequency difference between the drive frequency and the natural frequency of the oscillator. Equation (6.40) is integrated to yield the period of oscillation of the slow phase as

$$T_\psi = \int_{-\pi}^{\pi}\frac{d\psi}{g\sin\psi - \Delta\omega} \tag{6.42}$$

This period is associated with the beat frequency Ωψ between the drive frequency and the altered frequency of the oscillator:

$$\Omega_\psi = \frac{2\pi}{T_\psi} = 2\pi\left(\int_{-\pi}^{\pi}\frac{d\psi}{g\sin\psi - \Delta\omega}\right)^{-1} \tag{6.43}$$

To evaluate this beat frequency, average over a single cycle of the slow phase,

$$\int_{-\pi}^{\pi}\frac{d\psi}{g\sin\psi - \Delta\omega} = \left[\frac{2}{\sqrt{\Delta\omega^2 - g^2}}\tan^{-1}\frac{-\Delta\omega\tan(\psi/2) + g}{\sqrt{\Delta\omega^2 - g^2}}\right]_{-\pi}^{\pi} = \frac{-2\pi}{\sqrt{\Delta\omega^2 - g^2}} \tag{6.44}$$

to obtain the important and simple result

Beat frequency:
$$\Omega_\psi = \sqrt{\Delta\omega^2 - g^2} \qquad \text{for } |\Delta\omega| > g \tag{6.45}$$

Therefore, the frequency shift of the autonomous oscillator varies as a square root away from the frequency-locking threshold. This square-root dependence is a ubiquitous phenomenon exhibited by many externally driven autonomous oscillators, regardless of the underlying details of the physical system. It is related to universal properties that emerge from mean field theory in statistical mechanics. The beat frequency is shown schematically in Fig. 6.13 as a function of frequency detuning, showing the frequency-locked region for small detunings and the square-root dependence outside the locking region. Prior to frequency locking, the frequency of the autonomous oscillator is pulled toward the external drive frequency, known as frequency entrainment. This is shown numerically in Fig. 6.14 as a function of increasing coupling g for a single oscillator with a frequency 5% larger than the drive frequency. The frequency becomes locked when the coupling exceeds 5%.

Figure 6.13 Beat angular frequency as a function of angular frequency detuning Δω = ωd − ω0 between the drive and the autonomous oscillator. The dependence near the locking threshold is a square root.
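The square-root law of Eq. (6.45) can be confirmed by integrating the slow-phase flow, Eq. (6.40), and measuring the average drift rate of ψ. In the sketch below, g and the detunings are illustrative choices just outside the locking range |Δω| < g.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch comparing the measured drift rate of the slow phase, Eq. (6.40),
# with the square-root beat frequency of Eq. (6.45).
g = 0.05
for dw in [0.055, 0.07, 0.10]:
    sol = solve_ivp(lambda t, y: [-dw + g * np.sin(y[0])],
                    [0, 2000], [0.0], max_step=0.1)
    beat_numeric = abs(sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])
    beat_theory = np.sqrt(dw**2 - g**2)
    print(f"dw = {dw}: measured {beat_numeric:.4f}, Eq. (6.45) {beat_theory:.4f}")
```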

6.4.2 External synchronization of a van der Pol oscillator

The driven van der Pol oscillator is of interest because this system is fundamentally nonlinear and shows limit-cycle behavior even in the absence of a harmonic drive (it is an autonomous oscillator). However, when it is driven harmonically, the self-oscillation is influenced by external periodic forcing that can cause the oscillator to oscillate with the same frequency as the external harmonic drive. Consider a weakly nonlinear van der Pol oscillator

$$\ddot{x} = 2\mu\dot{x}\left(1 - \beta x^2\right) - \omega_0^2 x \tag{6.46}$$

In the absence of the gain term μ, this is an undamped harmonic oscillator with natural angular frequency ω0 . When this is driven by a harmonic forcing function,

$$\ddot{x} - 2\mu\dot{x}\left(1 - \beta x^2\right) + \omega_0^2 x = F\sin\omega_d t \tag{6.47}$$

the non-autonomous equation is converted to the autonomous flow

$$\dot{x} = y \qquad \dot{y} = 2\mu\dot{x}\left(1 - \beta x^2\right) - \omega_0^2 x + F\sin z \qquad \dot{z} = \omega_d \tag{6.48}$$

Figure 6.14 Relative frequency offset of an autonomous oscillator from the drive frequency as a function of coupling. Above the threshold gc = ωd − ω0, the autonomous phase oscillator is entrained by the drive frequency.

The frequency offset between the drive frequency and the autonomous frequency (the frequency of the self-oscillating system, not of the harmonic oscillator ω0) is characterized by Δω. For given properties of the oscillator, the parameters to explore for synchronization are the frequency offset Δω and the magnitude of the force F. The numerical results for the driven system are presented in Fig. 6.15(a). The system frequency is pulled toward the external drive frequency as the frequency offset decreases, and then the van der Pol frequency becomes locked to the external frequency. The beat frequency is shown in Fig. 6.15(b), which presents qualitatively the classic frequency-locking behavior that was illustrated schematically in Fig. 6.13.
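A sketch of the autonomous flow of Eq. (6.48) is given below, using the parameter values quoted in Fig. 6.15 (β = 1, μ = 2.5, F = 10, f0 = 1); the zero-crossing frequency estimator is an illustrative choice, not necessarily the method used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the driven van der Pol flow, Eq. (6.48). The oscillation
# frequency is estimated from upward zero crossings of x(t).
mu, beta, F = 2.5, 1.0, 10.0
w0, wd = 2 * np.pi * 1.0, 2 * np.pi * 1.1

def flow(t, s):
    x, y, z = s
    return [y,
            2 * mu * y * (1 - beta * x**2) - w0**2 * x + F * np.sin(z),
            wd]

sol = solve_ivp(flow, [0, 100], [1.0, 0.0, 0.0], max_step=1e-2)
x, t = sol.y[0], sol.t
up = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]           # upward zero crossings
freq = 1.0 / np.mean(np.diff(up[len(up) // 2:]))  # discard early transient
print("system frequency:", freq, " drive frequency:", wd / (2 * np.pi))
```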

6.4.3 Mutual synchronization of two van der Pol oscillators

Synchronization was first described and investigated experimentally by Christian Huygens in 1665, when he noticed that two pendulum clocks hanging near each other on a wall became synchronized in the motions of their pendulums, no matter how different their initial conditions were, even if the frequencies were slightly different.

Figure 6.15 Numerical results for synchronization of a van der Pol oscillator with a harmonic driving force (β = 1, μ = 2.5, F = 10, f0 = 1). (a) System frequency as a function of drive frequency. (b) The same data expressed as a beat frequency.

Van der Pol oscillators are among the best examples of coupled oscillators, because they are limit-cycle oscillators that behave much as pendulum clocks do. They have a natural frequency, yet are intrinsically nonlinear and amenable to frequency locking. The flow for two coupled van der Pol oscillators is

$$\dot{x} = y + g(z - x)$$
$$\dot{y} = 2\mu\dot{x}\left(1 - \beta x^2\right) - \omega_1^2 x$$
$$\dot{z} = w + g(x - z)$$
$$\dot{w} = 2\mu\dot{z}\left(1 - \beta z^2\right) - \omega_2^2 z \tag{6.49}$$

The coupling terms in g are two-way symmetric and linear and occur only between the variables x and z. The oscillators are identical, except that they have different values for their linear autonomous frequencies ω1 and ω2 . If these frequencies are not too dissimilar, and if the coupling g is strong enough, then the two oscillators will synchronize and oscillate with a common compromise frequency. Numerical results for the mutual frequency locking are shown in Fig. 6.16. The individual frequencies are shown in Fig. 6.16(a) as functions of the frequency offset for uncoupled oscillators. The two distinct frequencies are pulled and then entrained in the locked regime as the frequency offset decreases. The frequency pulling does not follow a linear behavior, even in the locked regime. However, when the data are replotted as a beat frequency, the system shows the same classic signature as for an externally driven individual oscillator (compare with Figs. 6.13 and 6.15). Frequency locking is not necessarily synonymous with phase locking. For instance, coupled chaotic oscillators (like two identical coupled Rössler oscillators)


can share the same phase, yet have no well-defined frequency (see Homework problem 17). However, for these coupled van der Pol oscillators, both the phase and frequency become locked, as shown numerically in Fig. 6.17. The two oscillators begin out of phase, yet become phase-locked after only a few oscillations.

Figure 6.16 Two coupled van der Pol oscillators (g = 0.25, β = 1, μ = 2.5). The individual frequencies are shown in (a) and the beat frequency in (b). Note that, despite the variable frequencies in (a), the beat frequency in (b) is nearly identical to that for an external drive.

Figure 6.17 Phase-locking of two coupled van der Pol oscillators from an unlocked initial condition (Δω = 0.05, g = 0.1, β = 1, μ = 2.5).
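The coupled flow of Eq. (6.49) can be integrated directly. In the sketch below, the initial conditions and the 5% frequency offset are illustrative; a high late-time correlation between x and z indicates the phase locking seen in Fig. 6.17.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the mutually coupled van der Pol pair, Eq. (6.49), with
# g = 0.1, beta = 1, mu = 2.5 as in Fig. 6.17. The gain terms use the
# full xdot and zdot from the first and third lines of the flow.
mu, beta, g = 2.5, 1.0, 0.1
w1, w2 = 2 * np.pi * 1.00, 2 * np.pi * 1.05

def flow(t, s):
    x, y, z, w = s
    xdot = y + g * (z - x)
    zdot = w + g * (x - z)
    return [xdot,
            2 * mu * xdot * (1 - beta * x**2) - w1**2 * x,
            zdot,
            2 * mu * zdot * (1 - beta * z**2) - w2**2 * z]

sol = solve_ivp(flow, [0, 50], [1.0, 0.0, -1.0, 0.0], max_step=1e-2)
half = sol.y.shape[1] // 2                 # keep only the late-time halves
x, z = sol.y[0, half:], sol.y[2, half:]
print("late-time correlation of x and z:", np.corrcoef(x, z)[0, 1])
```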

The coupled van der Pol oscillator system is just one example of a wide range of nonlinear systems that exhibit frequency entrainment and phase locking. These are physical systems such as Huygens's wall clocks and the rings of Saturn, or biological systems such as pacemaker cells in the heart and even communities of fireflies that blink in synchrony, among many other examples. The beauty in the analysis of the sine-circle map, simple as it is, is how broadly the basic phenomenon is displayed in widely ranging systems. While true universality is often hard to find across broad systems, the general behavior of complicated systems can be very similar. The basic behavior of coupled van der Pol oscillators is repeated again and again across wide areas of science and engineering.

6.5 Synchronization of chaos

Autonomous chaotic systems, like the Lorenz and Rössler models, are nonperiodic and hence have no single characteristic frequency. For instance, the power spectrum of the z-component of the Lorenz system exhibits a broad spectrum of frequencies. However, even a quick inspection of the time series shows that there is a roughly repeated behavior. Therefore, it is reasonable to expect that coupled chaotic systems can synchronize. A first step to explore synchronized chaos is to apply a periodic external drive to a chaotic system. A modified Lorenz model with an external drive is

$$\dot{x} = p(y - x)$$
$$\dot{y} = rx - xz - y$$
$$\dot{z} = xy - bz - g\,\omega_d\sin\omega_d t \tag{6.50}$$

where the external drive term with angular frequency ωd and coupling parameter g is applied to the z-component. Figure 6.18(a) shows the response frequency components of the chaotic system as the external drive frequency is swept slowly through the characteristic frequency of 2 Hz (for the standard values of the parameters) for a coupling constant g = 2π. For large detuning, a broad spectrum of frequencies is observed. However, the Lorenz system becomes locked to the external drive frequency between about 1.5 Hz and 2.5 Hz. Furthermore, when the system is frequency-locked to the external drive, additional harmonics are present, including a subharmonic at half the drive frequency. The beat frequency is shown in Fig. 6.18(b) and exhibits very similar behavior to an externally driven Poincaré phase oscillator, with entrainment and locking. Two identical Lorenz systems can be synchronized by adding a coupling term between the two systems on the third variable:

$$\dot{x} = p(y - x) \qquad\qquad\; \dot{u} = p(v - u)$$
$$\dot{y} = rx - xz - y \qquad\quad\; \dot{v} = ru - uw - v$$
$$\dot{z} = xy - bz - g(z - w) \quad\; \dot{w} = uv - bw - g(w - z)$$

Figure 6.18 (a) Major spectral peaks as a function of drive frequency for g = 2π. The Lorenz system is locked to the external drive between 1.5 and 2.5 Hz and has a harmonic structure when locked. (b) Beat frequency between the characteristic frequency of the Lorenz equivalent phase oscillator and the drive frequency. The frequency entrainment and locking are reminiscent of the entrainment and locking of a driven Poincaré phase oscillator.

Figure 6.19 The x–v plots for two identical coupled Lorenz systems for g = 0.80 and g = 0.88.

The projection of the dynamics onto the x–v plane, where x and v are variables selected from each of the Lorenz systems, is shown in Fig. 6.19 for coupling constants g = 0.80 and 0.88. At the weaker coupling constant, the two variables have a mostly random relationship, but at the stronger coupling, the two systems become synchronized, and the projection displays the classic "butterfly." The synchronization of chaos is made possible by the internal order that is retained in low-dimensional chaotic systems. Many autonomous chaotic models


have a strong regularity, even in the face of chaotic behavior. This pseudo-regular behavior makes it possible to synchronize to an external drive or to other chaotic systems. However, the coupling constants needed for synchronization are larger than for similar periodic nonchaotic systems, because there is a spectral bandwidth of many contributing frequencies, all of which must be entrained by stronger coupling.
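A sketch of the two coupled Lorenz systems is given below. The text does not quote the Lorenz parameters, so the standard values p = 10, r = 28, b = 8/3 are assumed here, and the initial conditions are illustrative; the late-time rms difference between corresponding variables drops sharply once g is large enough.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of two identical Lorenz systems coupled on their third variables,
# as written above. Synchronization is measured by the late-time rms
# difference between x and u.
p, r, b = 10.0, 28.0, 8.0 / 3.0            # standard values (assumed)

def coupled(t, s, g):
    x, y, z, u, v, w = s
    return [p * (y - x), r * x - x * z - y, x * y - b * z - g * (z - w),
            p * (v - u), r * u - u * w - v, u * v - b * w - g * (w - z)]

s0 = [1.0, 1.0, 1.0, -5.0, 2.0, 20.0]      # start the two systems far apart
for g in [0.10, 0.88]:
    sol = solve_ivp(coupled, [0, 50], s0, args=(g,), max_step=1e-2)
    half = sol.y.shape[1] // 2
    err = np.sqrt(np.mean((sol.y[0, half:] - sol.y[3, half:])**2))
    print(f"g = {g}: late-time rms(x - u) = {err:.3f}")
```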

6.6 Summary

The starting point for understanding many complex systems begins with the coupling of two semi-autonomous systems. The behavior of the coupled pair provides insight into the behavior that emerges for larger numbers of coupled systems. In the case of linear coupled oscillators, the result is a weighted mixture of the individual motions, and the principle of superposition applies, in which the whole quite simply is the sum of the parts. In the case of nonlinear coupling, on the other hand, superposition breaks down. For example, with the nonlinear coupling of two autonomous oscillators, synchronization is possible. Simple asymmetric coupling of integrate-and-fire oscillators captures the essence of frequency locking, in which the ability to synchronize depends on how the frequency offset between the two oscillators compares against the coupling strength. Quasiperiodicity on the torus (action-angle oscillators) with nonlinear coupling exhibits frequency locking, while the sine-circle map is a discrete map that displays multiple Arnold tongues for frequency-locking resonances. External synchronization of a phase oscillator can be analyzed in terms of the slow phase difference, resulting in a beat frequency and frequency entrainment that are functions of the coupling strength. Mutual synchronization of two unequal van der Pol oscillators displays qualitatively the same frequency entrainment as external synchronization of a single oscillator. The frequency locking of an externally driven Lorenz system shows qualitatively similar behavior to the externally driven van der Pol oscillator. Two identical Lorenz oscillators share identical dynamics for sufficiently large coupling strength.

6.7 Bibliography

V. I. Arnold, Mathematical Methods of Classical Mechanics, 2nd ed. (Springer, 1989).
L. Glass and M. Mackey, From Clocks to Chaos (Princeton University Press, 1988).
R. Hilborn, Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers (Oxford University Press, 2001).
A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Science (Cambridge University Press, 2001).
S. H. Strogatz, Nonlinear Dynamics and Chaos (Westview Press, 1994).
A. T. Winfree, The Geometry of Biological Time (Springer, 1990).

6.8 Homework problems

Analytic problems

1. Coupled oscillators: Find the secular determinant in ω⁴ for a periodic coupled ring of masses and springs composed of two unequal masses (lattice with a basis).

2. Coupled oscillators: In Fig. 6.2, what are the asymptotic values for the squared frequencies at k1 = 0 and k1 = ∞? What are the equations of the two linear asymptotes?

3. Coupled oscillators: Derive an expression for the minimum separation between the symmetric and asymmetric mode frequencies as a function of coupling strength k12.

4. Global coupling: Calculate the eigenfrequency spectrum of N identical harmonic oscillators of equal frequencies ω² = k/m that are coupled globally to all other oscillators with spring coupling constants κ. Interpret the eigenvalue results physically. What are the eigenmodes?

5. Integrate-and-fire oscillator: For a single integrate-and-fire oscillator, derive the frequency-locking condition required on g and ω for the case when the upper threshold is sinusoidally modulated:

$$y_1(t) = \mathrm{mod}(\omega_1 t,\ \text{threshold}) \qquad \text{threshold} = 1 + g\sin\omega_0 t$$

6. Attractor: Prove that the continuous flow on the 1D torus, Eq. (6.18), is dissipative and does not obey Liouville's theorem.

7. Sudden perturbation: Consider the phase oscillator

$$\dot{r} = r\left(c - r^2\right) \qquad \dot{\theta} = 2\pi$$

for c > 0. Let the system run in steady state. What is the period T0 of the oscillation? At time t0 and phase θ0(t0), apply a jump Δx in the x-direction (remember that r = √(x² + y²)). What is the time T it takes to return to the phase θ0(T)? If the system is repetitively perturbed at phase θ0 with period T, it will follow the new period.

8. Slow phase dynamics: Consider a driven phase oscillator

$$\dot{\phi} = \omega_1 + g\sin(\phi - \omega_0 t)$$

For ψ = φ − ω0 t, there is a "slow" phase dynamics given by ψ̇ = −Δω + g sin ψ, where Δω = ω1 − ω0. Find the fixed point ψ*. Show that the beat frequency is

$$\Omega_\psi = \frac{2\pi}{T_\psi} = 2\pi\left[\int_{\psi^*-\pi}^{\psi^*+\pi}\frac{d\psi}{g\sin(\psi - \psi^*) - \Delta\omega}\right]^{-1}$$

9. Complex amplitude equation: The complex amplitude equation (also known as the Landau–Stuart or lambda–omega equation) is an autonomous flow:

$$\frac{dA}{dt} = (1 + i\eta)A - (1 + i\alpha)|A|^2 A$$

Show that this can be written in polar coordinates as

$$\frac{dR}{dt} = R\left(1 - R^2\right) \qquad \frac{d\theta}{dt} = \eta - \alpha R^2$$

and that the solution is a limit cycle.

10. Sine-circle map: Derive the discrete sine-circle map (6.25) from the coupled phase oscillators on the torus, Eq. (6.17). Show that the frequency offset ω1 − ω2 in the continuous model is captured by the frequency ratio Ω = ω1/ω2 in the discrete map.

11. Limit cycle: For the numerical results and parameters in Fig. 6.17, analytically derive the relaxation time to the phase-locked state for the van der Pol limit cycle.

12. Arnold tongue: For a given irrational frequency ratio ω1/ω2 approximated by a p/q convergent, how does gc vary as a function of the order N of the convergent? (Remember that an Nth convergent is a continued fraction truncated at the Nth element.) Pick several common irrational numbers to plot.

Numerical projects

13. Driven oscillator: For a single externally driven phase oscillator (6.35) with a fixed g, numerically simulate the frequency as the drive frequency ω0 is swept through ω1. Pick a wide enough frequency range to capture the frequency entrainment and locking.

14. Sine-circle map: For the sine-circle map at Ω = 0.50 and then again at Ω = 0.51, create a bifurcation plot for g increasing from 0 to 2. Why are the results so sensitive to Ω at small g?

15. Stability: Consider the phase oscillator

$$\dot{r} = r\left(c + 2r^2 - r^4\right) \qquad \dot{\theta} = 2\pi$$

for −1 < c < 0. Find the two stable solutions. One is oscillatory, while one is not (it is a steady state).

(a) Set the initial conditions such that the stable state is oscillatory. Explore what perturbations do to the oscillations by giving a jump in radius r of varying size and at varying times. For some choice of perturbation parameters, the oscillations can be quenched and the system goes into the steady state. In physiology, a healthy oscillation (beating heart) can be quenched by a perturbation (electric shock).

(b) Set the initial conditions such that the system is in its steady state. Apply a perturbation to make the system settle in the oscillatory state. In physiology, a perturbation (electric shock) can restart a healthy oscillation (beating heart).

16. Coupled van der Pol oscillators: For two coupled van der Pol oscillators, put the coupling on the velocity terms instead of the position variables. Study the phase locking as a function of frequency offset for a fixed coupling g. What differences do you note between the two types of coupling (positional versus velocity)?

17. Coupled chaos: Consider two identical Rössler chaotic oscillators that are coupled with a strength g:

$$\dot{x}_1 = -0.97y_1 - z_1 + g(x_2 - x_1)$$
$$\dot{y}_1 = 0.97x_1 + ay_1 + g(y_2 - y_1)$$
$$\dot{z}_1 = z_1(x_1 - 8.5) + 0.4 + g(z_2 - z_1)$$
$$\dot{x}_2 = -0.97y_2 - z_2 + g(x_1 - x_2)$$
$$\dot{y}_2 = 0.97x_2 + ay_2 + g(y_1 - y_2)$$
$$\dot{z}_2 = z_2(x_2 - 8.5) + 0.4 + g(z_1 - z_2)$$

(a) For the parameter a = 0.15, the phase of the individual oscillators is well-defined as φi = arctan(yi/xi). Study the phase synchronization of this coupled chaotic system as a function of coupling strength g. For instance, define the "order parameter" to be the phase difference between the two oscillators and find when it becomes bounded as a function of time.

(b) For the parameter a = 0.25, the phase is no longer coherent. Can you define an "order parameter" for the phase-incoherent regime that shows a synchronization transition?

(c) Make the two oscillators non-identical. How do the phase-coherent and phase-incoherent regimes behave now? How does synchronization depend on the degree to which the oscillators are not identical?

(For a network study, see C. S. Zhou and J. Kurths, "Hierarchical synchronization in complex networks with heterogeneous degrees," Chaos 16, 015104 (2006).)

18. Chaotic phase oscillator: Convert the Lorenz system into an equivalent phase oscillator by constructing a new dynamical variable

$$r = \sqrt{x^2 + y^2}$$

and plot the dynamics on the r–z plane for the conventional Lorenz system. Extract the phase

$$\theta = \tan^{-1}\frac{r}{z}$$

and drive the Lorenz system according to

$$\dot{x} = p(y - x)$$
$$\dot{y} = rx - xz - y$$
$$\dot{z} = xy - bz - g\,\omega_d\sin(\theta - \omega_d t)$$

Explore the harmonic structure of the resulting driven chaotic phase oscillator.

Part III Complex Systems

Modern life abounds in complex systems, too many to describe here. However, some have such profound influence on our lives that they rise to the top of the list and deserve special mention. These include the nature of neural circuitry (the origins of intelligence), the evolution of new forms and the stability of ecosystems in the face of competition and selection (the origins of species and the ascent of man), and the dynamics of economies (the origins of the wealth of nations and the welfare of their peoples). This section introduces these three topics, and applies many of the tools and techniques that were developed in the previous chapters.

7 Network Dynamics

(Chapter-opening image: The Human Protein Interactome.)

Chapter contents: 7.1 Network structures; 7.2 Random network topologies; 7.3 Synchronization on networks; 7.4 Diffusion on networks; 7.5 Epidemics on networks; 7.6 Summary; 7.7 Bibliography; 7.8 Homework problems.

We live in a connected world. We interact and communicate within vast social networks using email and text messaging and phone calls. The World Wide Web contains nodes and links that branch out and connect across the Earth. Ecosystems contain countless species that live off of each other in symbiosis, or in competition as predator and prey. Within our bodies, thousands of proteins interact with thousands of others doing the work that keeps us alive. All of these systems represent networks of interacting elements. They are dynamic networks, with nodes and links forming and disappearing over time. The nodes themselves may be dynamical systems, with their own characteristic frequencies, coupled to other nodes and subject to synchronization. The synchronization of simple oscillators on networks displays richly complex phenomena. Conversely, synchronization of chaotic oscillators can show surprisingly coherent behavior. This chapter introduces common network structures, explores synchronization on networks of coupled dynamical systems, and studies the diffusion of nodal states (like viruses) across these networks.

7.1 Network structures

Networks come in many shapes and sizes, and their global dynamical properties can depend on what shape they have. Actually, shape is not quite the right description: rather, it is the network topology that determines how its properties evolve dynamically. Topology defines how the nodes of a network are connected or linked. For instance, all the nodes may be connected to all others in a network topology called a "complete graph." Or each node may be connected in a linear sequence that closes on itself in a network topology called a "linear cycle." The connectivity patterns differ greatly between a complete graph and a linear cycle. Networks are often defined in terms of statistics of nodes and links. For instance, the number of links that are attached to a specific node is known as the degree of the node, and one way to describe a network is in terms of the average degree across the net. Many other types of statistical descriptions are useful as well, including measures of connectivity and clustering.

7.1.1 Types of graphs

The word "graph" is the mathematical term for a network. A graph is a set of nodes and the links that connect them. Links can be undirected, or they can be directed from one node to another. A directed link starts on one node and ends on another node. They are drawn as lines with arrows. An undirected link is two-way, and is drawn without arrows.¹ Regular graphs have definite (nonrandom) connectivities (Fig. 7.1). One example of a regular graph is the complete graph in which every node is connected to every other. Other examples of regular graphs include linear chains, cycles, trees, and lattices. Random graphs have random connections among the nodes. There are many possible network topologies for random graphs. The three most common are (1) Erdös–Rényi graphs, (2) small-world networks, and (3) scale-free networks. The Erdös–Rényi graphs have N nodes that are randomly connected by M links. Small-world networks are characterized by many local connections, with a few long-distance connections. Scale-free networks are characterized by a few highly connected hubs, a larger number of moderately connected hubs, and many lightly connected nodes. Examples of these network topologies are found in economics, sociology, computer science, biology, materials science, statistical mechanics, evolutionary dynamics, and telecommunications, among others. Perhaps the most famous examples of networks are the World Wide Web, biological neural networks, and protein interaction networks in cellular biology.

¹ This chapter confines itself to undirected graphs.

Figure 7.1 Regular undirected graphs with nodes (circles) and links (lines): linear chain, cycle, square lattice, tree, and complete graph.

7.1.2 Statistical properties of networks

There are many statistical measures of the topology of networks that capture mean values as well as statistical fluctuations. The physical behavior of dynamical networks may be distinguished depending on the different values of these measures.

7.1.2.1 Degree and moments

The degree of a node is the number of links attached to the node. The degree distribution probability is given by

$$p_k = \frac{N_k}{N} \tag{7.1}$$

where Nk is the number of nodes of degree k, and N is the total number of nodes. The average degree of a network is

$$\langle k\rangle = \sum_{j=0}^{N} j\,p_j \tag{7.2}$$

with higher moments given by

$$\langle k^m\rangle = \sum_{j=0}^{N} j^m\,p_j \tag{7.3}$$

Not all moments are defined within a given network topology. For instance, ⟨k²⟩ diverges for scale-free networks.

7.1.2.2 Adjacency matrix

The adjacency matrix for a graph with N nodes is an N-by-N matrix with elements

$$A_{ij} = \begin{cases} 1 & i \text{ and } j \text{ connected} \\ 0 & i \text{ and } j \text{ not connected} \\ 0 & i = j \text{ (zero diagonal)} \end{cases} \tag{7.4}$$

The degree of the ith node is given by

$$k_i = \sum_j A_{ij} \tag{7.5}$$

and the average degree of the network is

$$\langle k\rangle = \frac{1}{N}\sum_i k_i = \frac{1}{N}\sum_{ij} A_{ij} \tag{7.6}$$

The adjacency matrix is symmetric and has N real eigenvalues. The eigenvalue spectrum of the adjacency matrix provides a "fingerprint" of the network topology. The spectral density of the network is defined by the eigenvalues λi as

$$\rho(\lambda) = \frac{1}{N}\sum_j \delta\left(\lambda - \lambda_j\right) \tag{7.7}$$

which provides a convenient way to find the moments of the spectral density by

$$\langle\lambda^m\rangle = \int d\lambda\,\lambda^m\rho(\lambda) = \frac{1}{N}\sum_{j=1}^{N}\lambda_j^m = \frac{1}{N}\,\mathrm{Tr}\left(A^m\right) \tag{7.8}$$

The quantity ⟨λ^m⟩ can be interpreted as the number of closed loops of length m in the network.

7.1.2.3 Graph Laplacian

The graph Laplacian of a network is an operator that is analogous to the Laplacian operator of differential calculus. It is defined as

$$L_{ij} = \sum_a A_{ia}\delta_{ij} - A_{ij} = \begin{cases} k_i & i = j \\ -1 & i \text{ and } j \text{ connected} \\ 0 & \text{otherwise} \end{cases} \tag{7.9}$$

The eigenvalues of the graph Laplacian provide another fingerprint of the network topology. For instance, the eigenvalues of the graph Laplacian are ordered as

$$0 = \lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1} \le N \tag{7.10}$$

where the degeneracy of the eigenvalue λ0 is the number of disconnected subgraphs in the network. For the special case of a complete graph (where every node is connected to every other),

$$\lambda_0 = 0 \qquad \lambda_{i>0} = N \tag{7.11}$$

On the other hand, for densely connected (but not complete) graphs, the eigenvalue spectrum will have many eigenvalues close to N.

7.1.2.4 Distance matrix

The distance between two nodes in a graph is the smallest number of links that connects the nodes. The distance matrix is defined as the N-by-N symmetric matrix of internode distances. The shortest path between two nodes on a network is known as the geodesic path, in analogy with geodesics in metric spaces. The algorithm to obtain the distance matrix uses what is known as a breadth-first search (a minimal implementation is sketched after this discussion). A node is selected, and then all nearest-neighbor nodes are found that are one link away. These are tabulated as distance 1. Then their nearest neighbors are found and tabulated as distance 2, unless they were already assigned a prior distance. This is iterated until no new nodes are accessed. Remaining inaccessible nodes (disconnected clusters) are assigned a negative distance. An example of a distance matrix is shown in Fig. 7.2 for a random graph with 100 nodes and an average degree of 5. The maximum distance between any two nodes is equal to 6, which is defined as the network "diameter." Distances on networks tend to be surprisingly small. This is known as the "small world" effect, also known as "six degrees of separation." The popular description of this effect is that anyone in the United States is only six acquaintances away from the President of the United States. In other words, you know someone who knows someone who knows someone who knows someone who knows the President. The reason for this small number, in spite of nearly 400 million Americans, is that the average number of vertices that are s steps away from a randomly chosen vertex scales as ⟨k⟩^s = exp(s ln⟨k⟩). Therefore, the average distance between nodes of a random network scales logarithmically as

$$\ell = \ell_0 + \frac{\ln N}{\ln\langle k\rangle} \tag{7.12}$$

where ℓ0 is a constant of order unity (in the limit of large N) that depends on the network topology. The logarithmic scaling is approximately valid for random graphs and depends on the network topology. Although it fails for some regular


graphs (such as lattices), it holds true even for regular tree networks, and it is generally a good rule of thumb. For the example in Fig. 7.2, the average distance is ln(100)/ln(5) ≈ 3, which is equal to the mean value of the distance matrix. The network diameter in this example is equal to 6.

Figure 7.2 Distance matrix for a 100-node random graph with an average degree of 5. The maximum distance between any two nodes, the network diameter, is equal to 6 links.
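A minimal implementation of the breadth-first-search construction described above (all names and the test graph are illustrative):

```python
import numpy as np
from collections import deque

# Breadth-first-search distance matrix, following the algorithm above;
# unreachable nodes keep the value -1.
def distance_matrix(A):
    N = len(A)
    D = -np.ones((N, N), dtype=int)
    for s in range(N):
        D[s, s] = 0
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in np.nonzero(A[i])[0]:
                if D[s, j] < 0:            # not yet assigned a distance
                    D[s, j] = D[s, i] + 1
                    queue.append(j)
    return D

# 100-node random graph with average degree 5, as in Fig. 7.2
rng = np.random.default_rng(0)
N, z = 100, 5
A = (rng.random((N, N)) < z / (N - 1)).astype(int)
A = np.triu(A, 1); A = A + A.T             # symmetric with zero diagonal
D = distance_matrix(A)
print("diameter:", D.max(), " mean distance:", D[D > 0].mean())
```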

7.2 Random network topologies

Random graphs are most like the networks encountered in the real world. Three of the most common are discussed here: Erdös–Rényi (ER), small-world (SW), and scale-free (SF) networks.

7.2.1 Erdös–Rényi graphs

An ER graph is constructed by taking N nodes and connecting them with an average of Nz/2 links between randomly chosen nodes. The average coordination z of the graph is a number between zero and N − 1. The connection probability p is related to z by

$$p = \frac{Nz/2}{N(N-1)/2} = \frac{z}{N-1} \tag{7.13}$$

The degree distribution for an ER graph is given by the number of possible configurations that place k connections among N − 1 potential neighbors. These configurations are counted by the binomial coefficient, giving the degree distribution

$$p_k = \binom{N-1}{k}\,p^k\,(1-p)^{N-1-k} \tag{7.14}$$

where the binomial coefficient is

$$\binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots 1} \tag{7.15}$$

For large N, this can be expanded using

$$\binom{N-1}{k} = \frac{(N-1)!}{k!\,(N-1-k)!} \approx \frac{(N-1)^k}{k!} \tag{7.16}$$

and

$$\lim_{N\to\infty}\left(1 - \frac{x}{N}\right)^N = e^{-x} \tag{7.17}$$

to give

$$p_k = e^{-pN}\,\frac{(pN)^k}{k!} = e^{-z}\,\frac{z^k}{k!} \tag{7.18}$$

where z = pN is the average number of connections per node. This is a Poisson distribution with mean value

$$\langle k\rangle = \sum_{k=0}^{\infty} k\,e^{-z}\,\frac{z^k}{k!} = z\,e^{-z}\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k-1)!} = z \tag{7.19}$$

   zk−1 zk k2 e−z = ze−z k = z (z + 1) ≈ k2 k2 = k! (k − 1)! k=0

(7.20)

k=1

but this is a special case that holds for the ER graph and is not satisfied for general random graphs. An example of an ER graph is shown in Fig. 7.3 for N = 64 and p = 0.10. The connectivity diagram in (a) shows the nodes and links, and the

214 Introduction to Modern Dynamics Erdös–Rényi graph

Distance matrix 4

3

2

1

10

20

30

40

50

60

0

Figure 7.3 (a) Erdös–Rényi random graph connectivity diagram for N = 64 and p = 0.125. (b) Distance matrix. This graph has a diameter equal to 4 links. The average degree is k = 7.9, and there are 252 edges. distance matrix is in (b). The diameter of the network is equal to 5, and the mean degree is 6.3.

7.2.2 Small-world networks

Small-world networks provide a continuous bridge between regular graphs and completely random (ER) graphs. This connection was established by Watts and Strogatz in 1998 (see Watts, 2003). Small-world networks have the property of local clusters of short-range connections, with a few long-range connections that jump between local clusters. The existence of local clusters is a natural organizational principle in networks. These are cliques (using a social term), and, like all cliques, they include a few individuals that belong to other cliques. A parameter p, ranging from 0 to 1, brings the Strogatz–Watts network continuously from a regular graph to a completely random graph. This parameter is a rewiring probability. The Strogatz–Watts algorithm is as follows:

(1) Define a regular linear lattice with an average degree ⟨k⟩ and cyclic boundary conditions.
(2) Pick each end of a link and move it to a randomly selected node with probability p, or keep it attached to the same node with probability 1 − p.
(3) Continue until each end of each link has been processed. (A minimal implementation is sketched below.)
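A compact sketch of this rewiring procedure follows. For brevity only one end of each link is considered for rewiring, duplicate-edge handling is simplified, and all names and the seed are illustrative.

```python
import numpy as np

# Sketch of the Strogatz-Watts rewiring described above: start from a
# ring lattice of degree k, then move the far end of each link to a
# random node with probability p.
def small_world(N=64, k=8, p=0.1, seed=1):
    rng = np.random.default_rng(seed)
    edges = [(i, (i + d) % N) for i in range(N) for d in range(1, k // 2 + 1)]
    A = np.zeros((N, N), dtype=int)
    for i, j in edges:
        if rng.random() < p:               # rewire the j-end of this link
            j = int(rng.integers(N))
            while j == i or A[i, j]:       # avoid self-links and duplicates
                j = int(rng.integers(N))
        A[i, j] = A[j, i] = 1
    return A

A = small_world()
print("average degree:", A.sum() / len(A))
```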

Figure 7.4 Small-world networks of degree 4 with rewiring probability p varying from 0 to 1: regular graph (p = 0), small-world graph (p = 0.1), and random graph (p = 1).

Distance matrix 5

4

3

2

1

10

20

30

40

50

60

0

This procedure, for mid-range values of the rewiring probability, produces clusters of locally connected nodes that are linked by a few long-distance links to other local clusters. This "small-world network" has the important property that the clustering is high, while the average distance between two nodes is small. These networks have groups of highly interconnected nodes, in cliques that tend to separate the network into distinct subgroups. Yet every node is only a few links away from any other node. The rewiring of a small graph is shown in Fig. 7.4, and a small-world graph with N = 64, degree ⟨k⟩ = 8, and p = 0.1 is shown in Fig. 7.5 with its distance matrix.

Figure 7.5 (a) Small-world (SW) connectivity diagram for N = 64 nodes, with m = 8 links per node and a rewiring probability of 10%. (b) Distance matrix. This graph has a diameter equal to 5 links. The average degree is ⟨k⟩ = 7.9, and there are 253 edges.


7.2.3 Scale-free networks

Many real-world networks grow dynamically, like new web pages popping up on the World Wide Web. These new nodes often are not connected randomly to existing nodes, but instead tend to preferentially make links to nodes that already have many links. This produces a "rich-get-richer" phenomenon in terms of web page hits, as early nodes become hubs with high degree (very popular), attaching to later hubs of lower degree (less popular), attaching to hubs of yet lower degree, and eventually to individuals with degree of one (loners). This type of network topology is called a "scale-free" network. The special property of scale-free networks is that their degree distribution obeys a power-law function

$$p_k \sim k^{-\gamma} \tag{7.21}$$

in which γ tends to be between 2 and 3 in real-world examples. The scale-free algorithm has a single iterative step: to a number m0 of existing nodes, add a new node and connect it with a fixed number m < m0 of links to the existing nodes, with a preferential attachment probability to the ith node given by

$$P_i = \frac{k_i}{\sum_j k_j}$$

Then repeat for t time steps. For sufficiently large networks, this yields a scale-free network with an exponent γ ≈ 3. An example of a scale-free network is shown in Fig. 7.6 for N = 64 nodes and one link added each iteration per new node. The highest-degree nodes are near the center of the network, and the low-degree nodes are on the outside. The average degree is 2, but the maximum degree is 16 at the center of the network, corresponding to the small distances at the upper left corner of the distance matrix. The network diameter is equal to 9.
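A sketch of this preferential-attachment rule is below, using the standard trick of sampling uniformly from a running list of link endpoints, which automatically weights node i in proportion to ki. The seed pair, node count, and rng seed are illustrative.

```python
import numpy as np

# Sketch of preferential-attachment growth: each new node adds m = 1 link
# to an existing node chosen with probability P_i = k_i / sum_j k_j.
def scale_free_degrees(N=2000, seed=2):
    rng = np.random.default_rng(seed)
    endpoints = [0, 1]                     # one seed link between nodes 0 and 1
    deg = np.zeros(N, dtype=int)
    deg[0] = deg[1] = 1
    for new in range(2, N):
        old = endpoints[int(rng.integers(len(endpoints)))]
        deg[new] += 1
        deg[old] += 1
        endpoints += [new, old]            # every endpoint appears deg times
    return deg

deg = scale_free_degrees()
for k in [1, 2, 4, 8, 16]:
    print(f"fraction with degree {k:2d}: {np.mean(deg == k):.4f}")  # ~ k^-3
```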

Figure 7.6 (a) Scale-free (SF) network connectivity diagram for N = 64 nodes and 1 link per new node. (b) Distance matrix. This graph has a diameter equal to 9 links. The average degree is ⟨k⟩ = 2, and there are 63 edges.

7.3 Synchronization on networks

The synchronization of non-identical, nonlinearly coupled phase oscillators (with each oscillator having its own isolated frequency) on regular graphs can demonstrate the existence of global synchronization thresholds that represent a phase transition. In particular, the complete graph displays a Kuramoto transition to global synchronization, which is analyzed using mean field theory.


7.3.1 Kuramoto model of coupled phase oscillators on a complete graph

An isolated phase oscillator (also known as a Poincaré oscillator; see Eq. (3.90)) has the simplest possible flow, given by

$$\frac{d\phi_k}{dt} = \omega_k \tag{7.22}$$

in which the kth oscillator has a natural frequency ωk. A population of N phase oscillators can be coupled globally, described by the flow

$$\frac{d\phi_k}{dt} = \omega_k + \frac{g}{N}\sum_{j=1}^{N}\sin\left(\phi_j - \phi_k\right) \tag{7.23}$$

in which the coupling constant g is the same for all pairs of oscillators. To gain an analytical understanding of the dynamics of this system, we use an approach called a mean-field theory. In mean-field theory, each oscillator is considered to interact with the average field (mean field) of all the other oscillators. This approach extracts the average behavior of the network as a function of the coupling strength and the spread in natural frequencies Δω = std (ωk ) where std stands for standard deviation of the distribution of ωk values. The mean field is a single complex-valued field given by

218 Introduction to Modern Dynamics KeiΘ =

N 1  iφk e N

(7.24)

k=1

with mean amplitude K and mean phase Θ. The population dynamics are rewritten in terms of the mean-field values K and Θ as

$$\frac{d\phi_k}{dt} = \omega_k + gK\sin\left(\Theta - \phi_k\right) \tag{7.25}$$

The mean-field values have the properties

$$\Theta = \bar{\omega}t \qquad K = \text{const.} \qquad \psi_k = \phi_k - \bar{\omega}t \tag{7.26}$$

where ω̄ is the mean frequency and the last line is the slow phase that was encountered previously in Chapter 6. The evolution becomes

$$\frac{d\psi_k}{dt} = \left(\omega_k - \bar{\omega}\right) - gK\sin\psi_k = \Delta\omega_k - gK\sin\psi_k \tag{7.27}$$

From this equation, we see that we have a multi-oscillator system analogous to a collection of driven oscillators in which the mean field is the driving force. Each oscillator is driven by the mean field of the full collection. The oscillators that are entrained by the mean field contribute to the mean field. There is a synchronous solution defined by

$$\psi_k = \sin^{-1}\left(\frac{\omega_k - \bar{\omega}}{gK}\right) \tag{7.28}$$

for gK > |ωk − ω̄|, and the number of oscillators that are synchronized is

$$n_s(\psi) = \rho(\omega)\left|\frac{d\omega}{d\psi}\right| = gK\,\rho\left(\bar{\omega} + gK\sin\psi\right)\cos\psi \tag{7.29}$$

where ρ(ω) is the probability distribution of initial isolated frequencies. The mean field is caused by the entrained oscillators

$$K e^{i\bar{\omega}t} = \int_{-\pi}^{\pi} e^{i\psi + i\bar{\omega}t}\, n_s(\psi)\, d\psi = gK e^{i\bar{\omega}t}\int_{-\pi/2}^{\pi/2}\cos^2\psi\;\rho\left(\bar{\omega} + gK\sin\psi\right) d\psi \tag{7.30}$$


which is a self-consistency equation for K. This can be solved for a Lorentzian distribution of starting frequencies

$$\rho(\omega) = \frac{\gamma}{\pi\left[(\omega - \bar{\omega})^2 + \gamma^2\right]} \tag{7.31}$$

with the solution

$$g_c = \frac{2}{\pi\rho(\bar{\omega})} = 2\gamma \qquad K = \sqrt{\frac{g - g_c}{g}} \tag{7.32}$$

The most important feature of this solution is the existence of a threshold gc and the square-root dependence of the mean field K on g − gc above threshold.² As the coupling strength increases, at first there is no global synchronization, since all oscillators oscillate at their natural frequencies. But when g increases above a threshold that is related to the width of the distribution of natural frequencies, subclusters synchronize. The size of the subclusters increases as the square root of g as it increases above the threshold. Far above the threshold, the full system synchronizes to the single common frequency ω̄. The transition can be relatively sudden, as shown in Fig. 7.7 for N = 256 phase oscillators.

Figure 7.7 Kuramoto synchronization transition as a function of coupling g for N = 256 oscillators uniformly distributed in frequency between −0.1 and 0.1. The graph has global (all-to-all) coupling. The global entrainment transition is sharp, although small groups of oscillators with similar frequencies lock frequency at lower couplings.

² This square-root dependence is very common for mean-field solutions, for instance for magnetism in solid state physics.
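The Kuramoto transition of Fig. 7.7 can be reproduced with a few lines. The sketch below evaluates the sum in Eq. (7.23) through the mean field of Eq. (7.24), since the coupling term equals gK sin(Θ − φk); parameters follow the figure, and the rng seed is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the globally coupled Kuramoto flow, Eq. (7.23), for N = 256
# oscillators uniformly distributed in frequency as in Fig. 7.7. The
# order parameter K of Eq. (7.24) jumps up above the transition.
N = 256
rng = np.random.default_rng(3)
omega = np.linspace(-0.1, 0.1, N)

def kuramoto(t, phi, g):
    Z = np.exp(1j * phi).mean()                     # K e^{i Theta}
    return omega + g * np.abs(Z) * np.sin(np.angle(Z) - phi)

phi0 = rng.uniform(0, 2 * np.pi, N)
for g in [0.05, 0.20]:                              # below and above g_c
    sol = solve_ivp(kuramoto, [0, 400], phi0, args=(g,), max_step=0.5)
    K = np.abs(np.exp(1j * sol.y[:, -1]).mean())
    print(f"g = {g}: order parameter K = {K:.2f}")
```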

7.3.2 Synchronization and topology

Complete graphs in dynamical systems are an exception rather than a rule. In many physical systems, nearest neighbors tend to interact strongly because of their


close spatial proximity, while farther neighbors interact weakly, if at all. Therefore, in most physical cases, local connectivity is favored over global connectivity. As an example, spins in magnetic systems tend to interact strongly locally, as does the network of pacemaker cells in the heart. Perhaps one of the simplest topologies in physical networks is the lattice. Lattices are common in solid state physics, but also apply to more abstract systems, such as percolating systems. The simplest lattice networks have only nearest-neighbor connections. In some sense, a lattice is at one extreme of local connectivity, while global connectivity is at the other extreme. Therefore, we may expect a different behavior for a lattice than for global connectivity. Consider a 16-by-16 square lattice of non-identical phase oscillators. The degree of the graph is equal to 4. The frequencies as a function of coupling strength are shown in Fig. 7.8, with a gradual coalescence of the frequencies around the average frequency. This gradual synchronization is different from the sudden global synchronization in the Kuramoto model in Fig. 7.7.

Figure 7.8 Synchronization of 256 phase oscillators on a square lattice with increasing coupling. Entrainment proceeds more gradually than for global coupling, and it takes larger coupling to achieve full entrainment.

To understand this, we can plot a 16-by-16 lattice in terms of common (entrained) frequencies. This is shown in Fig. 7.9. For weak coupling (on the left), all the oscillators have different frequencies. As the coupling increases, clusters form and grow with common frequencies. At strong coupling, a single large cluster of synchronized oscillators spans the lattice and eventually entrains all the remaining oscillators. Networks can have effective "dimensions" that influence how easily they can achieve global synchronization. Two obvious examples are a linear cycle (1D) and a lattice (2D). In the last example, we saw that the 2D lattice exhibits more entrainment at low coupling coefficients, but less entrainment at high coupling


compared with a complete graph (global coupling). This has the overall effect of "smearing out" the entrainment transition. This wider transition is directly related to the dimensionality of the 2D lattice. Because networks are composed of nodes and links, there is no intrinsic embedding dimension for networks. While some networks, like the 1D cycle or the 2D lattice, have a clear dimensionality related to the topology of their interconnections, other networks are harder to define dimensionally. For instance, a tree network does not seem to have a clear dimensionality, because of its hierarchy of branches. However, a tree topology is a close analog to a 1D string of nodes: if a single link is cut, then an entire subgraph disconnects from the global network. In this sense, a tree is quasi-one-dimensional.

A comparison of frequency entrainment on four different network topologies is shown in Fig. 7.10 for N = 256 nodes. Each line represents a separate oscillator. The initial frequencies are distributed uniformly from −0.1 to 0.1 relative to the mean frequency. The Kuramoto model of the complete graph is shown at the bottom and has the sharpest transition to global synchronization near gc = 0.1. The 2D square lattice also has a global synchronization threshold, but the transition is more gradual. On the other hand, the small-world and scale-free networks have very indistinct transitions and require strong coupling to induce full entrainment of all the nodes, although for both these networks, the initial synchronization of some of the oscillators occurs near g = 0.1. The synchronization of oscillators on networks provides a versatile laboratory for the study of critical phenomena. The critical behavior of dynamical processes (such as global synchronization) can differ from topological structure (percolation thresholds), especially in their scaling properties near the critical transition. Network dimensionality and topology play central roles in the transitions, but the nature of the oscillators (linear or nonlinear) and their coupling (linear or nonlinear) also come to bear on the existence of thresholds, the scaling behavior near threshold, and the dependence on network size. These are current topics of advanced research in physics and mathematics, but the basic tools and methods needed to approach these problems have been provided in this chapter.

Figure 7.9 A sequence of a 16 × 16 square lattice for low coupling (left), intermediate coupling (center), and strong coupling (right). These lattices show increasingly larger clusters of entrained frequencies.

Figure 7.10 Plots of frequency versus coupling coefficient g (from 0 to 0.6) for a scale-free network (k = 2), a small-world network (k = 6), a square lattice (k = 4), and a complete graph (k = 255), each with N = 256 and initial frequencies between −0.1 and +0.1.

7.3.3 Synchronization of chaotic oscillators on networks

Networks of interacting elements are often distinguished by synchronized behavior of some or all of the nodes. The synchronization properties of networks are studied by populating the nodes of a network with a dynamical system and then coupling these systems according to the network topology. One particularly simple form of synchronization occurs when all the nodes have the same time evolution. The question is, what properties do the node dynamics need to allow such a solution? This section explores the problem of networks of identical (possibly chaotic) oscillators with linear coupling, and the next section turns to nonlinear coupling among non-identical oscillators.

To begin, consider a multivariable flow in which each variable corresponds to the state of a node, and each node has the same single-variable dynamics

\[ \frac{d\phi_i}{dt} = F(\phi_i) \tag{7.33} \]

for N nodes whose states have the values φi. When these nodes are interconnected by links into a network, their dynamics may depend on the states of the neighbors to which they are coupled. If the coupling is linear, the time evolution becomes

\[ \frac{d\phi_i}{dt} = F(\phi_i) + g \sum_{j=1}^{N} C_{ij}\, H(\phi_j) \tag{7.34} \]

where g is the linear coupling coefficient, Cij is the coupling matrix, and H(φj) is a response function. If we make the further restriction that the coupling is proportional to the difference between the outputs, then the evolution equations become

\[ \frac{d\phi_i}{dt} = F(\phi_i) + g \sum_{j=1}^{N} L_{ij}\, H(\phi_j) \tag{7.35} \]

where Lij is the graph Laplacian. The graph Laplacian has the values Lij = 0 if nodes i and j are not connected, Lij = −1 if they are connected, and Lii = ki, the degree of the ith node. A trivial synchronized solution of the linearly coupled equation occurs when all oscillators have the identical time evolution s(t). In this case, the coupling term vanishes and each oscillator oscillates with the same phase. For chaotic oscillators, such as the Rössler or Lorenz oscillators, holding the same phase is not automatic: because of the sensitivity to initial conditions, the relative phases of the many chaotic oscillators would slowly drift in the absence of coupling. In the presence of coupling, the key question is whether perturbations to the time evolution s(t) grow or decay in time.

To answer this question, consider a perturbation such that

\[ \phi_i = s + \xi_i \tag{7.36} \]

and the oscillator and response functions are expanded as

\[ F(\phi_i) \approx F(s) + \xi_i F'(s), \qquad H(\phi_i) \approx H(s) + \xi_i H'(s) \tag{7.37} \]

The evolution of the perturbation is therefore given by

\[ \frac{d\xi_i}{dt} = F'(s)\,\xi_i + g \sum_{j=1}^{N} L_{ij}\, H'(s)\,\xi_j \tag{7.38} \]

These equations represent N differential equations that are coupled through the graph Laplacian Lij. They can be re-expressed in terms of the eigenvalues and eigenvectors of the graph Laplacian as

\[ \frac{d\nu_i}{dt} = \left[ F'(s) + g \lambda_i H'(s) \right] \nu_i \tag{7.39} \]

where the λi are the eigenvalues of the graph Laplacian, and the νi are the eigenvectors. For the system to be synchronized, the term in square brackets must be negative for all eigenvalues λi and for all values of the free-running solution s(t). This makes it possible to define a master function

\[ \Lambda(g) = \max_{s(t),\,\lambda_i} \left[ F'(s) + g \lambda_i H'(s) \right] \tag{7.40} \]

which is maximized over the free-running trajectory and over all the eigenvalues of the graph Laplacian. Whenever this quantity is negative, for a given network topology and appropriate values of g, the system is synchronizable. Many physical systems have a master function Λ(g) that is positive for small coupling strength, negative for intermediate coupling strength, and positive for large coupling strength. In this case, there are threshold values for g, shown in Fig. 7.11. Such systems are subject to "over-coupling," in which synchronization is lost if the coupling strength gets too large. Therefore, for global synchronization, the coupling g must lie between gmin and gmax, which requires

\[ \frac{\lambda_{\max}}{\lambda_2} < \frac{g_{\max}}{g_{\min}} \tag{7.41} \]

where λmax is the largest eigenvalue of the graph Laplacian, and λ2 is the smallest nonzero eigenvalue. (The graph Laplacian is guaranteed one eigenvalue equal to zero because each row sums to zero.)
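A quick numerical check of this criterion requires only the spectrum of the graph Laplacian. The following sketch assumes the networkx library; the graph size, link probability, and seed are illustrative choices.

```python
# Sketch: estimate the synchronizability ratio lambda_max/lambda_2 from the
# graph Laplacian of a random ER graph.
import numpy as np
import networkx as nx

# draw ER graphs until a connected one appears (lambda_2 > 0 needs connectivity)
G = nx.erdos_renyi_graph(50, 0.1, seed=1)
while not nx.is_connected(G):
    G = nx.erdos_renyi_graph(50, 0.1)

L = nx.laplacian_matrix(G).toarray().astype(float)
lam = np.sort(np.linalg.eigvalsh(L))      # Laplacian is symmetric; lam[0] = 0
print("lambda_max / lambda_2 =", lam[-1] / lam[1])
# A smaller ratio widens the window of couplings g that satisfy Eq. (7.41),
# making the network easier to synchronize.
```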


This relation is an important result that holds for many dynamical systems. It states that, as the ratio λmax/λ2 decreases, the synchronizability of a network increases. In other words, networks with a very narrow range of eigenvalues are more easily synchronized than networks with a wide range of eigenvalues. Although the details may vary from system to system, and synchronization is not always guaranteed, this relation provides good estimates of synchronization, and is good at predicting whether changes in a network will make it more easily or less easily synchronized. An example is given in Fig. 7.12 for 50 coupled Rössler chaotic oscillators on an ER graph with N = 50 and p = 0.1. The network has a diameter equal to 5. This case is shown for a weak coupling coefficient g = 0.002, plotting the y-value of the 33rd node in the network against the x-value of the 13th node—nodes that are one network diameter apart. Although the relative phase between the two nodes evolves monotonically (governed by the choice of oscillator parameters), the trajectory is chaotic. When the coupling is increased, the nodes synchronize across the network. This is shown in Fig. 7.13, which plots the

y-value of the 33rd oscillator against the x-value of the 13th oscillator. In this case, the chaotic oscillators are synchronized.

This section has provided an analytical approach (through Eq. (7.40)) to the prediction of the synchronizability of a network of identical nonlinear oscillators that are linearly coupled. Analytical results are sometimes difficult to find in complex network dynamics, and one often must resort to computer simulations in specific cases. Analytical results are still possible, even in the case of nonlinear coupling among non-identical oscillators, if the coupling is on a complete graph. This is the case for the Kuramoto model of coupled phase oscillators.

Figure 7.11 Conceptual graph of Λ(g) as a function of coupling g for a broad class of dynamical networks. When Λ(g) is negative, the global synchronized state is stable. In this example, the system globally synchronizes for intermediate values of coupling, but synchronization is destroyed for too-weak or too-strong coupling.

Figure 7.12 N = 50 Rössler oscillators on an ER graph with p = 0.1 (a = 0.15, b = 0.4, c = 8.5) with coupling g = 0.002. The graph on the left is the (x1, x2) strange attractor of the 13th oscillator. The graph on the right is x1 of the 13th oscillator plotted versus x2 of the 33rd oscillator. There is no apparent structure in the figure on the right (no synchronization).

Figure 7.13 N = 50 Rössler oscillators on the ER graph of Fig. 7.12 with coupling g = 0.25. The graph on the left is the (x1, x2) strange attractor of the 13th oscillator. The graph on the right is x1 of the 13th oscillator plotted against x2 of the 33rd oscillator. Despite the large distance separating oscillators 13 and 33 (network diameter 5), the strange attractors are identical.

7.4 Diffusion on networks

The principal property of networks is their connectivity, which facilitates the movement of conditions, such as states, information, commodities, or disease: whatever can be "transmitted" from one node to another. The movement of some quantity over time is known as transport, and it is a dynamic process. For physical transport in many real-world systems, spatial distances are important, but in this chapter we consider only topological connectivity with "distanceless" links. For instance, diffusion will be a random walk on a network. Diffusion across networks is responsible for the epidemic spread of viruses and disease, of "viral" YouTube videos, of rumors (both good and bad), and of memes and ideas. Therefore, the dynamics of diffusion on networks is an important and current topic of interest, and it shares much in common with evolutionary dynamics (Chapter 8), which occurs within networks of interacting species.

7.4.1 Percolation

The propagation of a state or condition on a network requires most of the nodes to be connected in a single giant cluster. For instance, the goal for the Internet is to have every node connected within a single network. On the other hand, to prevent the propagation of viruses, one might like the network to have tenuous links that could be vaccinated to break the network apart into smaller isolated clusters that would be protected, as behind a firewall.

There is often a single giant cluster that contains most of the nodes of the network, and the network is then said to percolate. The word "percolation" comes from the notion of information, or some other condition, percolating across a network from node to node. Not all networks have a giant component (percolation cluster), so it is important to find what conditions are necessary for a giant component to exist. It is also important to find what fraction of nodes belong to the giant component, and how changes in the network properties affect the size of the giant component. For instance, there is usually a percolation threshold, at some finite value of a network parameter, at which the size of the giant component vanishes. Different types of networks have different percolation thresholds, and each case has to be investigated individually.

The random ER graph is one type of graph for which the percolation threshold can be studied analytically. The fraction S of nodes in the giant component is a function of the average degree ⟨k⟩. S is obtained as the solution to

\[ S = 1 - \exp\left(-\langle k \rangle S\right) \tag{7.42} \]

which is a transcendental equation that can be solved graphically or numerically for the fraction S as a function of ⟨k⟩. For a graph with increasing average degree (adding links to an existing sub-percolating graph), S rises from zero at the percolation threshold, where

\[ \frac{d}{dS}\left[ 1 - \exp\left(-\langle k \rangle S\right) \right] = 1 \tag{7.43} \]

for which

\[ \langle k \rangle \exp\left(-\langle k \rangle S\right) = 1 \tag{7.44} \]

At the threshold, S = 0, so the threshold for the random graph is

\[ \langle k \rangle = 1 \tag{7.45} \]

This means that when the average degree exceeds unity in an ER graph, there is a giant component that contains a finite fraction of the nodes of the network. The size of the giant component as a function of ⟨k⟩ is shown in Fig. 7.14. For average degree less than unity, there are many small clusters, but no single cluster contains most of the nodes. As the average degree crosses ⟨k⟩ = 1, the size grows linearly above threshold. The slope of the curve is discontinuous at the threshold, but the fraction itself is continuous. This percolation transition is known as a continuous phase transition, or a second-order phase transition.³ An important condition for a well-defined percolation transition to exist is to have a system that is very large, with a large number N of nodes. For smaller networks, the threshold is more smeared out. The strict existence of a threshold only takes place in the limit as N goes to infinity. But for practical purposes, a network with very large N is sufficient

3. Another famous example of a continuous phase transition is the superconductivity transition.

to display a relatively sharp transition. As a rule of thumb, one would want N > 1000 for a sharp transition.

Figure 7.14 Size of the giant component, S = 1 − exp(−⟨k⟩S), versus average degree ⟨k⟩ for a random graph. There is a giant component for ⟨k⟩ > 1.
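The threshold behavior of Eq. (7.42) is easy to verify numerically. Below is a minimal sketch that solves the transcendental equation by fixed-point iteration; the tolerance and iteration count are arbitrary choices.

```python
# Sketch: solve S = 1 - exp(-<k> S) of Eq. (7.42) by fixed-point iteration.
import numpy as np

def giant_component_fraction(k_avg, tol=1e-12):
    S = 1.0                                   # start from full occupation
    for _ in range(10000):
        S_new = 1.0 - np.exp(-k_avg * S)
        if abs(S_new - S) < tol:
            break
        S = S_new
    return S_new

for k_avg in [0.5, 1.0, 1.5, 2.0, 4.0]:
    print(k_avg, giant_component_fraction(k_avg))
# Below <k> = 1 the iteration collapses to S = 0; above it, S grows
# continuously from zero, reproducing the transition of Fig. 7.14.
```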

7.4.2 Diffusive flow equations

Diffusion on networks is a combination of physical diffusion and percolation. In a percolation analysis, the state of a node is all or nothing. In diffusion, on the other hand, the interest is in continuous values of a concentration or other continuous property on each node, and in how these values are transported through different network structures. The change of a property or concentration ci of the ith node is related to the values of that property on the neighboring nodes. The rate of change is given by

\[ \frac{dc_i}{dt} = \beta \sum_j A_{ij} \left( c_j - c_i \right) \tag{7.46} \]

where Aij is the adjacency matrix, and β is the diffusion coefficient. This becomes

\[ \frac{dc_i}{dt} = \beta \sum_j A_{ij} c_j - \beta c_i \sum_j A_{ij} = \beta \sum_j A_{ij} c_j - \beta c_i k_i = \beta \sum_j \left( A_{ij} - \delta_{ij} k_i \right) c_j \tag{7.47} \]

where use has been made of the degree of the ith node,

\[ \sum_j A_{ij} = k_i \tag{7.48} \]

In matrix form, this is

\[ \frac{d\mathbf{c}}{dt} = \beta \left( A - D \right) \mathbf{c} \tag{7.49} \]

where the diagonal matrix D is

\[ D = \begin{pmatrix} k_1 & 0 & 0 & \cdots \\ 0 & k_2 & 0 & \cdots \\ 0 & 0 & k_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \tag{7.50} \]

A matrix L is defined as

\[ L = D - A \tag{7.51} \]

which is recognized as the graph Laplacian of Eq. (7.9). Using the graph Laplacian, the dynamical flow for diffusion on a network becomes

\[ \frac{d\mathbf{c}}{dt} + \beta L \mathbf{c} = 0 \qquad \text{(diffusion on a network)} \tag{7.52} \]

where L plays a role analogous to that of the Laplacian (−∇²) in continuous systems. This is the reason why L is called the graph Laplacian, and it appears in many instances of dynamic properties of networks. The solution to Eq. (7.52) is obtained as a linear combination of eigenvectors of the Laplacian:

\[ \mathbf{c}(t) = \sum_i a_i(t)\, \mathbf{v}_i \tag{7.53} \]

The eigenvectors of the Laplacian satisfy \( L\mathbf{v}_i = \lambda_i \mathbf{v}_i \), which allows Eq. (7.52) to be written as

\[ \sum_i \left( \frac{da_i}{dt} + \beta \lambda_i a_i \right) \mathbf{v}_i = 0 \tag{7.54} \]

The solutions are

\[ a_i(t) = a_i(0)\, e^{-\beta \lambda_i t} \tag{7.55} \]

which decay exponentially from their initial values. This is not to say that the concentrations on all nodes decay. For instance, if the initial concentration is localized on one part of the network, then the concentration everywhere else is zero. But these zero concentrations arise from the cancellation of the coefficients of the eigenvectors in Eq. (7.53) in the initial condition. As the system evolves and the coefficients decay, this cancellation is removed and the concentration increases on most of the nodes. This is just what diffusion means: an initial concentration diffuses into the network until there is a uniform concentration everywhere. An example of diffusion is shown in Fig. 7.15 for an ER network with p = 0.03 and N = 100. The concentrations of all of the nodes are shown in the figure, including the single initial node that was given a concentration equal to unity. The original node's concentration decays exponentially. Some of the neighboring nodes increase initially, then decrease to reach steady state, while other, farther neighbors rise asymptotically to the steady state. At long times, all concentrations become equal as the system reaches equilibrium.

Figure 7.15 Concentration trajectories for selected nodes of an ER network with N = 100 and link probability 0.03. The initial concentration is isolated on a single node and diffuses to its neighbors. In the long-time limit, the concentration asymptotes to 0.01.

7.4.3 Discrete map for diffusion on networks

As an alternative to the continuous-time evolution of the diffusing concentration, a discrete-time evolution is given by

\[ c_i^{n+1} = c_i^n \left( 1 - \tilde{\beta} k_i \right) + \tilde{\beta} \sum_j A_{ij} c_j^n \tag{7.56} \]


Figure 7.16 Diffusion on networks with mean degree equal to four. (a) SW network with N = 50, m = 2, and p = 0.1. (b) SF network with N = 50 and m = 2. The diffusion times (τ = 12 and τ = 24) differ by a factor of approximately 2.

This recursion relation converges to the differential equation in the limit of a small diffusion step \( \tilde{\beta} = \beta \Delta t \) as Δt goes to zero. The vector equation is

\[ \mathbf{c}^{n+1} = \left( I - \tilde{\beta} D + \tilde{\beta} A \right) \mathbf{c}^n = \left( I - \tilde{\beta} L \right) \mathbf{c}^n, \qquad \mathbf{c}^n = \left( I - \tilde{\beta} L \right)^n \mathbf{c}^0 \tag{7.57} \]

which is defined in terms of the graph Laplacian, and \( M = I - \tilde{\beta} L \) is the Floquet multiplier. The solution is obtained by recursive application of the Floquet multiplier to an initial concentration vector. At early times, the rate of diffusion is proportional to the product of the mean degree with the largest eigenvalue of the graph Laplacian. Therefore, different network structures that have the same mean degree can have different characteristic diffusion times, depending on the graph Laplacian. Examples are shown in Fig. 7.16 for SW and SF networks with the same average degree.
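The discrete map is equally easy to iterate directly. Here is a minimal sketch using a small-world graph (networkx assumed); the step size β̃ is an illustrative value, chosen small enough that the multiplier remains stable.

```python
# Sketch of Eq. (7.57): repeated application of the Floquet multiplier
# M = I - beta*L to an initial concentration vector.
import numpy as np
import networkx as nx

G = nx.watts_strogatz_graph(50, 4, 0.1, seed=3)   # SW network, mean degree 4
L = nx.laplacian_matrix(G).toarray().astype(float)
beta = 0.01                                       # beta*dt, kept small

M = np.eye(50) - beta * L                         # Floquet multiplier
c = np.zeros(50); c[0] = 1.0

for n in range(300):
    c = M @ c                                     # c^{n+1} = M c^n
print("spread after 300 steps:", c.max() - c.min())
```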

7.5 Epidemics on networks

One of the consequences of connected networks is the possibility for the spread of an infectious agent. Among computers, the infectious agent can be a digital virus that is spread through email or through downloads. In forests, the infectious agent can be a beetle or a forest fire that spreads from nearest neighbors and

sweeps across the landscape. Among humans, the infectious agent can be a virus or a bacterium that is passed by physical contact. Alternatively, the infectious agent can be a thought or a phrase (a meme) that sweeps across a population. All of these instances involve dynamics, since the states of numerous individuals within the network become the multiple variables of a dynamical flow. The rate of change of the individual states, and the rate of spread, are directly controlled by the interconnectivity of the specific networks.

7.5.1 Epidemic models: SI/SIS/SIR/SIRS

In the study of epidemics, there are several simple models of disease spread that capture many of the features of real-world situations. These are known as the SI, SIS, SIR, and SIRS models (Table 7.1). SI stands for susceptible–infected, SIS for susceptible–infected–susceptible, SIR for susceptible–infected–removed, and SIRS for susceptible–infected–recovered–susceptible.

Table 7.1 Four common infection models

SI: Susceptible–infected
SIS: Susceptible–infected–susceptible
SIR: Susceptible–infected–removed
SIRS: Susceptible–infected–recovered–susceptible

In the SI model, an individual can be either susceptible (not yet infected) or infected. Once infected, the individual remains infected and infectious and can spread the disease to other individuals who are susceptible. Obviously, the end state of the SI model is complete infection within any connected cluster of individuals. In the SIS model, an individual is either susceptible or infected, but can recover after being infected to become susceptible again. In the SIR model there are three states: susceptible, infectious, and removed. After being infected, an individual recovers, is no longer infectious, and has acquired immunity to the disease. In the SIRS model, there are again three states, but after recovering from infection, an individual can become susceptible again.

The end states of the SIS, SIR, and SIRS models depend on the character of the infection and on the connectivity properties of the network of interactions. In these models, a disease may cause an epidemic or may die out. Even in the SIS model, in which individuals always become susceptible again, an epidemic may not spread if the recovery rate is fast enough. As a first approach to understanding the qualitative differences among the SI/SIS/SIR/SIRS epidemic models, it can be assumed (to lowest order) that the network has a homogeneous connectivity in which all individuals (nodes) interact with an average ⟨k⟩ others. This is not always a good assumption for real networks, because it ignores the actual heterogeneous connectivity,

and it fails completely when the network connectivity is near or below the percolation threshold, but it does show how epidemics spread on well-connected networks. The dynamic equations and fixed points for these homogeneous models are given in Table 7.2. The infection rate per node is β, the removal rate is μ, the infected population is i(t), the susceptible population is s(t), and the recovered (or removed) population is r(t). The individual nodes have disappeared in these homogenized models (also known as compartmental models) and have been replaced by averages. The rate of infection depends on the average degree ⟨k⟩, because contact with more infected nodes leads to higher infection rates.

Table 7.2 Homogenized (compartmental) infection models

SI:
\[ \frac{di(t)}{dt} = \langle k \rangle \beta\, i(t) \left[ 1 - i(t) \right] \]
Fixed points: i* = 0, i* = 1

SIS:
\[ \frac{di(t)}{dt} = -\mu i(t) + \langle k \rangle \beta\, i(t) \left[ 1 - i(t) \right], \qquad s(t) = 1 - i(t) \]
Fixed points: i* = 0, i* = 1 − μ/(⟨k⟩β)

SIR:
\[ \frac{di(t)}{dt} = -\mu i(t) + \langle k \rangle \beta\, i(t) \left[ 1 - r(t) - i(t) \right], \qquad \frac{dr(t)}{dt} = \mu i(t), \qquad s(t) = 1 - r(t) - i(t) \]
Fixed points: (i*, r*) = (0, 0) and (0, μ/(⟨k⟩β))

SIRS:
\[ \frac{di(t)}{dt} = -\mu i(t) + \langle k \rangle \beta\, i(t) \left[ 1 - r(t) - i(t) \right], \qquad \frac{dr(t)}{dt} = \mu i(t) - \nu r(t), \qquad s(t) = 1 - r(t) - i(t) \]
Fixed points: (i*, r*) = (0, 0) and \( i^* = \frac{1 - \mu/(\langle k \rangle \beta)}{1 + \mu/\nu},\ r^* = \frac{\mu}{\nu}\, i^* \)

7.5.2 SI model: logistic growth

In the simplest population dynamics models, populations grow exponentially at early times, but if growth continues unchecked, the population must eventually reach the carrying capacity of the environment. The equation for this saturated growth model was first derived by Verhulst in 1838 and is called the logistic equation; its solution is the logistic function. It is also known as an rk-process, where r is a growth rate and k is the carrying capacity. The logistic growth equation is

\[ \dot{P} = rP \left( 1 - \frac{P}{k} \right) \tag{7.58} \]


with solution

\[ P(t) = \frac{k P_0\, e^{rt}}{k + P_0 \left( e^{rt} - 1 \right)} \tag{7.59} \]

The half-life to saturation is obtained when the population reaches half of the carrying capacity:

\[ t_{1/2} = \frac{1}{r} \ln\left( \frac{k - P_0}{P_0} \right) \tag{7.60} \]

In the context of the dynamics of epidemics on networks, the homogenized logistic growth equation is called the SI model (Fig. 7.17). When the network is well connected (significantly above the percolation threshold), the SI model is

\[ \frac{di(t)}{dt} = \langle k \rangle \beta\, i(t) \left[ 1 - i(t) \right] \tag{7.61} \]

subject to the conserved number of nodes

\[ S + I = N, \qquad s + i = 1 \tag{7.62} \]

where the normalized variables s = S/N and i = I/N are the average continuous-valued levels of susceptibility and infection across the network, even though any given node can only be binary: infected or not. The dynamics is one-dimensional, with two fixed points

\[ i^* = 0,\ \lambda = \langle k \rangle \beta; \qquad i^* = 1,\ \lambda = -\langle k \rangle \beta \tag{7.63} \]

consisting of an unstable fixed point at no infection and a stable fixed point at full infection. The fixed point at i* = 0 is unstable, because a single incidence of infection will propagate through the entire population, driving the solution to the stable fixed point i* = 1. The rate at which the infection spreads through the network is equal to the average degree of the network times the infection rate per node. Larger average degrees allow the infection to spread more rapidly and approach full saturation more rapidly.

Figure 7.17 Logistic growth dynamics ẋ = rx(1 − x/k) for the homogeneous network model of SI infection. This is also known as the rk-growth model, where r is the initial growth rate and k is the carrying capacity of the population.
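The SI flow can be checked against the closed-form logistic function. The sketch below integrates Eq. (7.61) numerically and compares it with Eq. (7.59); the parameter values are illustrative, not taken from the text.

```python
# Sketch: integrate the SI flow of Eq. (7.61) and compare with Eq. (7.59).
# Here <k>*beta plays the role of r, with carrying capacity 1.
import numpy as np
from scipy.integrate import solve_ivp

k_avg, beta, i0 = 4.0, 0.25, 0.01
r = k_avg * beta

sol = solve_ivp(lambda t, i: r * i * (1 - i), [0, 20], [i0], dense_output=True)

t = np.linspace(0, 20, 5)
i_exact = i0 * np.exp(r*t) / (1 + i0 * (np.exp(r*t) - 1))  # Eq. (7.59), k = 1
print(np.max(np.abs(sol.sol(t)[0] - i_exact)))             # small residual
```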

7.5.3 SIS/SIR/SIRS

More general models of infection include recovery and immunity. These models can exhibit critical thresholds for infection spread or die-off, depending on the infection and recovery rates, and they have simple dynamics under the homogeneous network approximation. If a node is allowed to recover from an infection, then it becomes susceptible again. This is the SIS model (susceptible–infected–susceptible). The rate of recovery is given by −μi(t), and the SIS model is

\[ \frac{di(t)}{dt} = -\mu i(t) + \langle k \rangle \beta\, i(t) \left[ 1 - i(t) \right] \tag{7.64} \]

This model is still one-dimensional, but now has a nontrivial fixed point with an average infection that is less than unity, although the fixed point at i* = 0 is still unstable to any initial infection. The fixed points and Lyapunov exponents are

\[ i^* = \left[ 0,\ 1 - \frac{\mu}{\langle k \rangle \beta} \right], \qquad \lambda = \left[ \langle k \rangle \beta - \mu,\ \mu - \langle k \rangle \beta \right] \tag{7.65} \]

The crucial feature of the SIS model is the existence of an epidemic threshold set by

\[ \langle k \rangle_c = \frac{\mu}{\beta} \tag{7.66} \]

If the average degree of the network falls below this critical value, then the epidemic dies out. Furthermore, if the average degree is only slightly larger than the critical value, then the steady-state infection will be close to zero. These conclusions only hold when ⟨k⟩c = μ/β is significantly larger than the percolation threshold value, i.e., when the network can be treated homogeneously.

The existence of an epidemic threshold is a crucial concept with widespread ramifications, and it is also key for preventing epidemics through immunization. If a fraction g of individuals is immunized, then the rate of infection becomes β(1 − g). This leads to an immunization threshold given by

\[ g_c = 1 - \frac{\mu}{\langle k \rangle \beta} \tag{7.67} \]

Once the fraction of immunized individuals is larger than the threshold, the infection cannot spread exponentially. Furthermore, by sustaining the immunization, the disease can eventually die out and disappear forever, even if not everyone in the network is immunized. Only a critical fraction need be immunized to eradicate the disease (if it does not lie dormant). This is how the World Health Organization eradicated smallpox from the human population in 1977. These approximate conclusions apply to networks with relatively high degrees and high homogeneity. For heterogeneous networks, such as scale-free networks and small-world networks, the quantitative spread of disease depends on the details of the degree distributions and on the eigenvalues and eigenvectors of the adjacency matrix.

If nodes are allowed to recover from the infection and no longer become susceptible again, then the model behavior becomes two-dimensional, but with an unusual phase portrait. This SIR model (susceptible–infected–removed) is

\[ \frac{di}{dt} = -\mu i + \langle k \rangle \beta\, i s, \qquad \frac{ds}{dt} = -\langle k \rangle \beta\, i s \tag{7.68} \]

and the fraction removed from the infection is r = 1 − i − s. The fixed points and Lyapunov exponents are

\[ (i^*, s^*) = (0, 0),\ \lambda = [-\mu,\ 0]; \qquad (i^*, s^*) = \left( 0,\ \frac{\mu}{\langle k \rangle \beta} \right),\ \lambda = [0,\ 0] \tag{7.69} \]

The two fixed points have marginal stability. There is an unstable manifold at the origin for no infection, and the steady-state fixed point at s* = μ/(⟨k⟩β) has vanishing Lyapunov exponents. The phase portrait is shown in Fig. 7.18(a). The conservation rule r = 1 − i − s leads to trajectories confined to the lower left triangle. The s-nullcline is a vertical line at s*. All trajectories terminate on the i = 0 axis between the origin and s*. These termination points are labeled s∞. The stability along this termination line is marginal, with both the trace and determinant of the Jacobian vanishing. For a given initial infection in the population, there is always some residual susceptible population at long times when the infection vanishes.

Figure 7.18 Phase plots for infection models with removed or recovered nodes. (a) SIR model with ⟨k⟩β = 1 and μ = 0.25. (b) SIRS model with ⟨k⟩β = 1, μ = 0.25, and ν = 0.125.

An infection model that allows the removed population to become susceptible again is the SIRS model (where R here stands for recovered). For instance, this occurs for malaria and tuberculosis infections. The dynamics for SIRS are

\[ \frac{di}{dt} = -\mu i + \langle k \rangle \beta\, i s, \qquad \frac{dr}{dt} = \mu i - \nu r, \qquad \frac{ds}{dt} = \nu r - \langle k \rangle \beta\, i s \tag{7.70} \]

for all μ, ν, β positive. The homogenized infection model for epidemics on networks can also be viewed as a compartmental model in which each subpopulation is considered as a separate "compartment" with fluxes coming in and going out to other compartments, as in Fig. 7.19. This view is only valid when the network connectivity is significantly above the percolation threshold. In the SIRS model, the total population is assumed to be constant, so s + i + r = 1. This constraint reduces the dynamics to a two-dimensional flow

\[ \frac{di}{dt} = -\mu i + \langle k \rangle \beta\, i \left( 1 - i - r \right), \qquad \frac{dr}{dt} = \mu i - \nu r \tag{7.71} \]

Despite the apparently simple dynamical equations, this disease model exhibits thresholds and bifurcations for disease spread as well as steady-state solutions. An example is shown in Fig. 7.18(b) in the case when the fixed point is a stable spiral.

Figure 7.19 Compartment flow chart for SIRS. Each subpopulation is considered as a compartment with fluxes in and out relative to other compartments.
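The two-dimensional SIRS flow of Eq. (7.71) can be integrated directly. The sketch below uses the parameter values quoted for Fig. 7.18(b) and compares the long-time state with the endemic fixed point obtained from the nullclines.

```python
# Sketch: SIRS flow of Eq. (7.71) spiraling into the endemic fixed point.
import numpy as np
from scipy.integrate import solve_ivp

k_beta, mu, nu = 1.0, 0.25, 0.125          # <k>*beta, recovery, resusceptibility

def sirs(t, y):
    i, r = y
    return [-mu*i + k_beta*i*(1 - i - r), mu*i - nu*r]

sol = solve_ivp(sirs, [0, 400], [0.01, 0.0], max_step=0.5)
print("(i, r) at end:", sol.y[0, -1], sol.y[1, -1])
# endemic point: i* = (1 - mu/k_beta)/(1 + mu/nu) = 0.25, r* = (mu/nu) i* = 0.5
print("predicted i*:", (1 - mu/k_beta) / (1 + mu/nu))
```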


7.5.4 Discrete infection on networks

The homogeneous models describe the qualitative behavior of epidemics, but fail to capture the detailed processes related to the random connectivity of the network nodes. In particular, the homogeneous models fail when the network is near or below the percolation threshold. To model the discrete character of infection spread on networks, it is necessary to consider the state of each node and its connections to other nodes.

Epidemic spread on discrete networks shares much in common with diffusion, but with an important difference. In the diffusion process, the net amount of diffusing material is conserved, but there is no similar conservation for infection (other than the total population, which may be increasing). In addition, during diffusion each node has a continuous concentration value between zero and unity, while in the infection model a node is either infected or not (it cannot be "half" sick), so each node has a binary state. This changes the dynamics from being dominated by the graph Laplacian (for diffusion) to the adjacency matrix (for infection spread). Finally, a stochastic element must enter into disease spread: the spreading parameter β must now be interpreted as a probability of infection.

The infection spread for the SI model without recovery is captured by N equations for the N nodes in a network. The state of node i at time step n + 1 depends on the infected nodes at time step n through

\[ s_i^{n+1} = H\!\left( s_i^n + \tilde{S}(\tilde{\beta}) \sum_j A_{ij} s_j^n \right) \tag{7.72} \]

where H(···) is the Heaviside function (equal to unity for positive argument and to zero for negative or zero argument). The term \( \tilde{S}(\tilde{\beta}) \) is a stochastic function that is equal to 1 with probability \( \tilde{\beta} = \beta \Delta t \), where Δt is the time step, and is equal to zero otherwise. Equation (7.72) is a discrete iterative map for N states that includes a stochastic process. The solution is obtained by recursive application to an initial state vector of infections. When iterated to a steady state, all nodes connected to the cluster containing the initial infection become infected. The initial rate of spread is proportional to ⟨k⟩β, as in the homogeneous model. However, the fluctuations in infection depend on the details of the network structure.

For a discrete SIS model, the iterated evolution includes a recovery step

\[ s_i^{n+1} = H\!\left( s_i^n + \tilde{S}(\tilde{\beta}) \sum_j A_{ij} s_j^n \right) - H\!\left( \tilde{S}(\tilde{\mu})\, s_i^n \right) \tag{7.73} \]

where the stochastic recovery probability is \( \tilde{\mu} = \mu \Delta t \). Once a node has recovered, it is susceptible again to reinfection. A Monte Carlo simulation of the average infection on scale-free networks with 50 nodes, averaged over 200 networks, is shown in Fig. 7.20, which also shows the effect of inoculation on the epidemic. At time step 100, the highest-degree node was inoculated (making it uninfected and unsusceptible), and the epidemic decays rapidly afterwards. This demonstrates the important role played by high-degree nodes in epidemics, and it suggests a strategy in which the highest-degree nodes are the most important ones to inoculate. The homogeneous model for SIS is shown for comparison, highlighting the important differences between network simulations and homogeneous models. Infections in a finite network will always eventually die out, because if a fluctuation happens to remove the last infected node, then the uninfected state becomes permanent if there is no other route for the infection to restart. This finite-size effect is not captured by the homogeneous model, which is in the infinite-node limit. In addition, inoculating a single node has a negligible effect on the homogeneous model (through ⟨k⟩), while it can have an extremely large effect on a finite network, especially in this case where the node with the highest degree has been removed.

Figure 7.20 Infected SIS population as a function of time for an average over 200 SF networks with N = 50, m = 2, β = 0.2, and μ = 0.65. The epidemic dies out slowly, but inoculation of the highest-degree node causes a rapid decay of the epidemic. The homogeneous model saturates without decay.
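The stochastic map of Eq. (7.73) is straightforward to simulate. The following is a single-network sketch (not the averaged Monte Carlo run of Fig. 7.20, and without the inoculation step), assuming networkx for the scale-free graph; the probabilities follow the figure's values, while the seed and run length are arbitrary.

```python
# Sketch: stochastic discrete SIS map of Eq. (7.73) on one SF network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(50, 2, seed=4)   # SF network, as in Fig. 7.20
A = nx.to_numpy_array(G, dtype=int)
beta_p, mu_p = 0.2, 0.65                      # per-step probabilities

s = np.zeros(50, dtype=int); s[0] = 1         # one initially infected node
history = []
for n in range(300):
    S_beta = rng.random(50) < beta_p          # realization of S(beta) per node
    S_mu = rng.random(50) < mu_p              # realization of S(mu)
    grow = (s + S_beta * (A @ s)) > 0         # Heaviside of infection pressure
    s = grow.astype(int) - (S_mu & (s == 1)).astype(int)
    history.append(s.sum())
print("mean infected nodes:", np.mean(history[100:]))
```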

7.6 Summary

Dynamical processes on interconnected networks have become a central part of modern life through the growing significance of communication and social networks. Network topology differs from the topology of metric spaces, and even from the topology of dynamical spaces like state space. A separate language of nodes and links, degree and moments, adjacency matrix and distance matrix, among others, is defined and used to capture the wide range of different types and properties of network topologies. Regular graphs and random graphs have fundamentally different connectivities that play a role in dynamic processes such as diffusion and synchronization on a network. Three common random graphs are the Erdös–Renyi (ER) graph, the small-world (SW) graph, and the scale-free (SF) graph. Random graphs give rise to critical phenomena based on static connectivity properties, such as the percolation threshold, but also exhibit dynamical thresholds for the diffusion of states across networks and the synchronization of oscillators. The vaccination threshold for diseases propagating on networks and the global synchronization transition in the Kuramoto model are examples of dynamical processes that can be used to probe network topologies.

7.7 Bibliography

M. Barahona and L. M. Pecora, "Synchronization in small-world systems," Physical Review Letters 89, 054101 (2002).
A. Barrat, M. Barthélemy, and A. Vespignani, Dynamical Processes on Complex Networks (Cambridge University Press, 2008).
L. Glass and M. C. Mackey, From Clocks to Chaos (Princeton University Press, 1988).
M. E. J. Newman, Networks: An Introduction (Oxford University Press, 2010).
A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Science (Cambridge University Press, 2001).
D. J. Watts, Small Worlds: The Dynamics of Networks Between Order and Randomness (Princeton University Press, 2003).
A. T. Winfree, The Geometry of Biological Time (Springer, 2001).

7.8 Homework Problems

Analytic problems

1. Eigenvalues: Show (prove) for a complete graph that Λij has λi = N except for i = 1.

2. Kuramoto model: For the Kuramoto model, explicitly perform the integration of
\[ K = gK \int_{-\pi/2}^{\pi/2} \cos^2\psi\ \rho\left( \bar{\omega} + gK \sin\psi \right) d\psi \]
using
\[ \rho(\omega) = \frac{\gamma}{\pi \left[ (\omega - \bar{\omega})^2 + \gamma^2 \right]} \]
to find gc and to confirm the square-root dependence at threshold.

3. Average distance: Find the average distance between nodes for the Strogatz–Watts model as a function of rewiring probability p. Choose a range of three to four orders of magnitude on p.

4. Clustering coefficient: Define the clustering coefficient Ci as the average fraction of pairs of neighbors of a node i that are neighbors of each other (share edges):
\[ C_i = \frac{e_i}{k_i (k_i - 1)/2} \tag{7.74} \]
where ei is the number of edges shared among the ki neighbors of node i. The number of shared edges is given by
\[ e_i = \frac{1}{2} \sum_{jk} A_{ij} A_{jk} A_{ki} \tag{7.75} \]
where Ajk is the adjacency matrix. What is the average clustering coefficient of the Strogatz–Watts model as a function of rewiring probability p? Choose a range of three to four orders of magnitude on p.

5. SIRS bifurcation: The SIRS model can display a bifurcation when at least one of its parameters is changed continuously. What parameter(s) and what effect?

Numerical projects

6. Graph Laplacian: Numerically calculate λmax/λ2 (from the graph Laplacian) for small-world graphs as a function of the rewiring probability. Choose N = 100, k = 4, and p = 0.1. Write a program that begins with the adjacency matrix of a regular graph with k = 4, then loop through the columns to rewire each link with a probability p (be careful to work only on the upper or lower diagonal of the matrix and keep it symmetric). Then construct the Laplacian and use the MATLAB function eig(L). (You can type "help eig" at the » prompt to learn how to use the output of the function.)

7. Spectral width: How does the width Δλi of the eigenvalues of the Laplacian depend on the rewiring probability for small-world graphs? After eliminating the lowest eigenvalue, you can simply take the standard deviation of the remaining eigenvalues and track how that changes as a function of p.

8. Synchronized chaos: Track the range of g for which identical Rössler oscillators are synchronized for the nets of Figs. 7.3, 7.5, and 7.6 as a function of the rewiring probability.

9. Synchronized chaos: Find parameters for the identical Rössler oscillators for which the network is only just barely synchronizable (α2/α1 ≈ λmax/λ2).

10. Entrainment: Study synchronization on random graphs. Plot the entrainment probability of Poincaré phase oscillators as a function of the coupling constant for ER, SF, and SW graphs with N = 100 for the same average degree.

11. Epidemics on growing networks: Study the spread of an epidemic in the case when a network is growing at a rate comparable to the infection probability of a single node. How does network growth affect the spread of the infection?

12. Network structure and inoculation: How does network structure affect the effectiveness of inoculation? Compare ER, SW, and SF examples.

13. Random inoculation: How many inoculations does it take on an ER graph to drop the infection below the threshold?

8 Evolutionary Dynamics

8.1 Population dynamics
8.2 Viral infection and acquired resistance
8.3 Replicator dynamics
8.4 Quasispecies
8.5 Game theory and evolutionary stable solutions
8.6 Summary
8.7 Bibliography
8.8 Homework problems

Natural evolution is a motive force of such generality that it stands as one of the great paradigms of science, transcending its original field. This chapter introduces the topic of evolutionary dynamics. In the context of dynamical systems, it is the study of dynamical equations that include growth, competition, selection, and mutations. The types of systems that have these processes go beyond living species and ecosystems, extending to such diverse topics as language evolution, crystal growth, business markets, and communication networks. Problems in evolutionary dynamics provide some of the simplest applications of the overall themes of this book, namely, a focus on nonlinear flow equations for systems with several degrees of freedom and the approach to steady-state behavior. Questions of stability become questions of sustainability of ecosystems. In this chapter, we will see examples where species can coexist and where individual populations go through boom-and-bust cycles in classic predator–prey models. Zero-sum games are a pervasive example that is introduced when finite resources must be divided among multiple species. An important development in modern dynamics is the idea of quasispecies and the dynamics of quasispecies diffusing across a fitness landscape in a high-dimensional "fitness" space. The mathematical flows in this chapter are relatively simple, capturing growth, competition, and selection. The fixed points are generally saddles, nodes, and centers. However, the consequences of these solutions are profound, affecting not only the evolution of species, but also addressing ecosystems under pressure from global climate change, our immune systems under assault by viruses, and the rise of cancer: all topics that will be of central importance for future physicists living and working in the complexities of an overcrowded and overstressed world.

8.1 Population dynamics

We encountered the logistic growth model in Eq. (7.58) of Chapter 7, describing viral growth on networks. It is known as the rk growth model, for rate and carrying capacity, and is a simple nonlinear growth model in which a species competes with itself for finite resources. In more realistic ecosystems, there may be several or even thousands of species interacting and competing within a setting that has finite resources. This situation provides ample space for a wide range of complex behavior.

8.1.1 Species competition and symbiosis

If there are two species occupying an ecosystem, then the dynamical processes include growth and competition as well as possible symbiosis. The general population dynamics for two species is given by

\[ \dot{x} = f(x, y)\, x, \qquad \dot{y} = g(x, y)\, y \tag{8.1} \]

These equations admit a wide variety of fixed points and stability conditions. The flows are always in the positive quadrant of the phase plane, because the dynamics are bounded by the nullclines x = 0 and y = 0, which are also separatrices. No flow lines can move out of the positive quadrant of the phase plane, which guarantees that all populations remain non-negative. The functional forms for f(x, y) and g(x, y) can be arbitrary, but are often polynomials. For instance, the rabbit-versus-sheep flow described by the equations

\[ \dot{x} = x \left( 3 - x - 2y \right), \qquad \dot{y} = y \left( 2 - x - y \right) \tag{4.25} \]

in Example 4.1 of Chapter 4 used negative feedback (population pressure) on both populations, because both rabbits and sheep eat the grass. These dynamics lead to a saddle point at which one or the other species goes extinct (Fig. 4.5). Intraspecies population pressure can be balanced by advantages in symbiotic relationships with other species in the ecosystem. In this case, the interspecies feedback is positive, which stabilizes the populations. As an example, consider the symbiotic dynamics

\[ \dot{x} = x \left( 1 - 2x + y \right), \qquad \dot{y} = y \left( 1 + x - y \right) \tag{8.2} \]

1 − 4x + y J= y

 x 1 + x − 2y

(8.3)

The first two fixed points are saddles, while the fixed point at (2,3) is a stable node where the symbiotic relationship between the species balances the intraspecies population pressure, leading to a stable equilibrium in which both species thrive together.

8.1.2 Predator–prey models Predator–prey dynamics describe the dynamical evolution of at least two species, at least one of which preys upon the other. Among the simplest and earliest equations for predator–prey systems are those proposed independently by Alfred J. Lotka in 1925 and Vito Volterra in 1926. These are the Lotka–Volterra equations x˙ = x (α − βy) y˙ = −y (γ − δx)

(8.4)

where y is the number of predators and x is the number of prey. The prey reproduce at a rate α, and they are eaten at a rate β times the product of the numbers of prey and predators. The predators reproduce at a rate δ times the product of the numbers of prey and predators, and they die off at a rate γ. Rather than rabbits and sheep competing for the same food stock, we now have rabbits and foxes, with one of the species being the food for the other. The Lotka–Volterra equations have an extinction fixed point (x*, y*) = (0, 0) and a nonzero steady-state balance (a neutrally stable fixed point) at

\[ x^* = \gamma/\delta, \qquad y^* = \alpha/\beta \tag{8.5} \]

The Jacobian matrix is

\[ J = \begin{pmatrix} \alpha - \beta y & -\beta x \\ \delta y & -\gamma + \delta x \end{pmatrix} \tag{8.6} \]

At the fixed point (x*, y*) = (0, 0), the stability is governed by

\[ J_{(0,0)} = \begin{pmatrix} \alpha & 0 \\ 0 & -\gamma \end{pmatrix} \tag{8.7} \]

with Lyapunov exponents (α, −γ). Therefore, this is a saddle point (when all parameters are positive). At the steady-state fixed point, the stability is governed by

\[ J = \begin{pmatrix} 0 & -\beta\gamma/\delta \\ \delta\alpha/\beta & 0 \end{pmatrix} \tag{8.8} \]

the Lyapunov exponents are \( \pm i\sqrt{\alpha\gamma} \), and the fixed point is a center: the two populations oscillate about the nonzero fixed point. An example is shown in Fig. 8.1.

Figure 8.1 (a) Phase portrait showing the center at (0.8, 0.67). (b) Time evolution of the predator–prey model for α = 1, β = 1.5, γ = 1, and δ = 1.25 for several initial conditions.
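The closed orbits of Fig. 8.1 can be generated with a standard ODE solver. The following sketch integrates Eq. (8.4) with the figure's parameter values; the initial condition and integration time are arbitrary.

```python
# Sketch: Lotka-Volterra equations (8.4) with the parameters of Fig. 8.1.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 1.5, 1.0, 1.25

def lotka_volterra(t, z):
    x, y = z
    return [x * (alpha - beta * y), -y * (gamma - delta * x)]

sol = solve_ivp(lotka_volterra, [0, 40], [1.5, 1.0], max_step=0.01)
print("center at:", gamma/delta, alpha/beta)         # (0.8, 0.667)
print("prey range:", sol.y[0].min(), sol.y[0].max())
# The closed orbits conserve C = delta*x - gamma*ln(x) + beta*y - alpha*ln(y).
```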

Figure 8.2 Extinction threshold for the predator–prey model.

An ecosystem can have a "tipping point" at which low population numbers are nearly unsustainable: if a fluctuation causes the number of individuals to drop below a critical threshold, then the population cannot recover and eventually goes extinct. An extinction threshold can easily be added to the predator–prey model as

\[ \dot{x} = x \left( \alpha - \beta y \right) \left( \frac{x}{\theta} - 1 \right), \qquad \dot{y} = -y \left( \gamma - \delta x \right) \tag{8.9} \]

where θ is the extinction threshold for the prey—and also for the predator, since die-off of the prey leads to die-off of the predator as well. The phase portrait of this extinction model is shown in Fig. 8.2. All trajectories to the left of the extinction threshold terminate on the extinction point at (0, 0).

8.1.3 Stability of multispecies ecosystems

Real ecosystems are highly complex, involving competition and symbiosis among many species. The dynamics occur in a high-dimensional state space that can have hundreds of dimensions, making it difficult to identify overarching behavior or even to find the fixed points. Even when the fixed points are found, it can be difficult to identify quickly whether they are stable. In these high-dimensional systems, the primary focus is on stability. Therefore, it would be helpful if there

were general rules relating the properties of the high-dimensional Jacobian matrix to the stability of fixed points. Fortunately, numerous such conditions can be identified.¹ Although there are too many for an exhaustive account to be given here, several simple conditions that lead to stability follow.

(a) Negative-definiteness. If the determinants of the principal minors of the Jacobian matrix alternate in sign, with the first minor being negative, then the system is stable. The determinants must satisfy

\[ \mathrm{sgn} \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = (-1)^n \tag{8.10} \]

where the sequence of principal minors is

\[ a_{11} < 0, \qquad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0, \qquad \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} < 0, \qquad \text{etc.} \tag{8.11} \]

(b) Quasi-negative-definiteness. Form the symmetric matrix \( B = \frac{1}{2}\left( A + A^{\mathrm{T}} \right) \). If the conditions for (a) are satisfied by B, then they are satisfied for A.

(c) Metzlerian matrix. If all off-diagonal elements are positive (and consequently all diagonal elements are negative), then the stability conditions are the same as for (a).

(d) Dominant-negative diagonal. If all diagonal elements are negative, and the magnitude of each diagonal element is larger than the sum of the magnitudes of the off-diagonal elements in the same row or same column, then the fixed point is stable:

\[ a_{ii} < 0 \quad \text{and} \quad |a_{ii}| > \sum_{j \neq i} |a_{ij}| \quad \text{or} \quad |a_{ii}| > \sum_{j \neq i} |a_{ji}| \]

(e) Negative trace. The trace of a stable point must be negative. (This does not guarantee stability, but it is a necessary condition for stability.)

(f) Sign of determinant. The determinant of an n-by-n Jacobian matrix of a stable point must have sign (−1)ⁿ. (This does not guarantee stability, but it is a necessary condition for stability.)

1. See G. Gandolfo, Economic Dynamics, 4th edition (Springer, 2010).

With the help of these rules on stability conditions for the Jacobian matrix, it is possible to study high-dimensional population dynamics and sometimes to identify analytically (without recourse to a computer) whether a fixed point is stable. Admittedly, high-dimensional systems usually require a numerical zero-finder to identify fixed points, and finding the eigenvalues numerically is then an obvious step. Note that these rules apply just as well to any dynamical system. For instance, they are often used to test the stability of economic dynamical systems, as in Chapter 10.
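These conditions are easy to test numerically for any Jacobian. The sketch below checks conditions (a), (e), and (f) for the symbiosis Jacobian of Eq. (8.3) evaluated at the fixed point (2, 3).

```python
# Sketch: numerically test stability conditions (a), (e), (f) for a Jacobian.
import numpy as np

J = np.array([[-4.0,  2.0],       # Eq. (8.3) at (x, y) = (2, 3)
              [ 3.0, -3.0]])

# (a) alternating principal minors: sign of the k-th leading minor is (-1)^k
minors_ok = all(np.sign(np.linalg.det(J[:k, :k])) == (-1)**k
                for k in range(1, J.shape[0] + 1))
# (e) negative trace and (f) sign of determinant (necessary conditions)
trace_ok = np.trace(J) < 0
det_ok = np.sign(np.linalg.det(J)) == (-1)**J.shape[0]

print(minors_ok, trace_ok, det_ok)                 # True True True
print("eigenvalues:", np.linalg.eigvals(J))        # -1 and -6: stable node
```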

8.2 Viral infection and acquired resistance

Our natural immune system, which guards against the invasion of our bodies by foreign bacteria and viruses, is a complex ecosystem. When microbes invade, they multiply. But they also trigger our immune system to mount a defense against the invaders. The viruses become the prey and the immune response the predator. This sounds like rabbits and foxes again, but the essential difference is that new strains of viruses can arise through random mutation and can escape the original immune response. The body is resilient and can mount a new attack, unless the virus disables the immune response, which is what occurs in the case of HIV infection. Clearly, this is a complicated ecosystem with stochastic processes and measure–countermeasure escalation. The long-term health of the individual is at stake, and we can predict outcomes (remission or relapse) based on the properties of the viruses and the immune response.

8.2.1 Viral infection

Consider the simple population dynamics given by

\[ \dot{\nu} = \nu \left( r - a x \right), \qquad \dot{x} = -b x + c \nu \tag{8.12} \]

where ν is the viral strain, and x is the specific immune response (like antibody production) to that strain. A simulation of the population response to a single strain is shown in Fig. 8.3(a). The fixed point is a stable spiral, and the populations oscillate as they relax to a steady state at long times, as a balance is established between viral reproduction and the immune system's attack on the viruses. If there are N viral strains, and a matched number of immune responses, then the equations become

\[ \dot{\nu}_a = \nu_a \left( r - a x_a \right), \qquad \dot{x}_a = -b x_a + c \nu_a, \qquad a = 1 : N \tag{8.13} \]
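A direct integration of the single-strain flow of Eq. (8.12) shows the damped oscillation toward balance. The sketch below uses the parameter values quoted for Fig. 8.3(a); the initial viral load is an arbitrary small value.

```python
# Sketch: single-strain virus/immune-response flow of Eq. (8.12).
import numpy as np
from scipy.integrate import solve_ivp

r, a, b, c = 2.4, 2.0, 0.1, 1.0

def strain(t, y):
    nu, x = y
    return [nu * (r - a * x), -b * x + c * nu]

sol = solve_ivp(strain, [0, 300], [0.01, 0.0], max_step=0.1)
print("steady state:", sol.y[0, -1], sol.y[1, -1])
# The spiral relaxes toward the balance point x* = r/a = 1.2, nu* = b*x*/c = 0.12.
```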


Figure 8.3 (a) Population size of the viral load and the immune response for a single viral strain with r = 2.4, a = 2, b = 0.1, c = 1. (b) Example of antigen diversity. The total population size of the viral load and the immune response are shown with a random probability for the rise of new viral strains.

and the state space now has 2N dimensions. However, not all the viruses will be present at the start. A single virus can attack the system, eliciting an immune response that keeps it in check, but then the virus can acquire a mutation, and a different strain emerges that is independent and escapes the initial immune response, causing a new immune response, and so on. This process adds a fundamentally new character to the flow equations. The flows studied in earlier chapters of this book were fully deterministic. They might exhibit complex chaotic behavior, but it was deterministic behavior without intrinsic randomness. The random emergence of new viral strains introduces a stochastic process that is no longer deterministic. However, this new aspect causes no difficulty for the solution of the flow equations: the concentrations of the new strains are zero until the mutation event, and then the population evolution is governed by the deterministic equations. Therefore, adding a stochastic generation process to flow equations remains in the domain of the ODE solvers.

For the stochastic emergence of mutants, assume that new viral strains arise with a probability per time given by P. The number N of different viral strains increases in time as existing strains mutate into new strains that are unaffected by the previous immune response, but which now elicit a new immune response. The resulting population of all strains is shown in Fig. 8.3(b). Each subpopulation of antigen–immune pairs is independent, and the total viral load grows linearly in time (on average).²

2. A more realistic model makes the mutation probability per time proportional to the total viral load, which would lead to exponential growth.

The immune system can be smarter and can do better at fighting antigenic variation by working globally, with a response that fights all viral strains arising from a single type of virus, regardless of their mutations. This can be modeled with a cross-reactive immune response that depends on the total viral load. A new variable z is the cross-reactive response and enters into the equations as

\[ \dot{\nu}_a = \nu_a \left( r - a x_a - q z \right), \qquad \dot{x}_a = -b x_a + c \nu_a, \qquad \dot{z} = k \nu - b z \tag{8.14} \]

in which z is the cross-reactive response, which decays at a rate given by b, and ν is the total viral load, \( \nu = \sum_a \nu_a \). This global immune response attacks all viral strains, making the infection of each new strain that arises less and less effective, until the system establishes a steady state in which the viral load is held constant and no new strain can escape from the global immune response, as shown in Fig. 8.4(a). At long times, the viral load is held constant by the immune system, and the system maintains a steady state.

In this last example, the cross-reactive immune response locks the viral strains into a low population size. One way for the virus to escape this limitation would be to disable the immune defense. This is what HIV does as the precursor to AIDS. Such an attack on the immune system can be modeled by knocking down both the specific immune response and the global cross-reactive immune response by adding another term to the equations that suppresses the immune response in proportion to the viral load. This immune suppression is modeled as

\[ \dot{\nu}_a = \nu_a \left( r - a x_a - q z \right), \qquad \dot{x}_a = -b x_a + c \nu_a - u \nu x_a, \qquad \dot{z} = k \nu - b z - u \nu z \tag{8.15} \]


Figure 8.4 (a) Example of a cross-reactive immune response. The total population sizes of the viral load and the immune response are shown for r = 2.4, a = 2, b = 0.1, c = 1, q = 2.4, k = 1 with a random probability for the rise of new viral strains. (b) Disabling the immune response. The total population sizes of the viral load and the immune response are shown with a random probability for the rise of new viral strains.

which has inhibitory terms in the second and third equations. Now, as new strains appear, they reduce the effectiveness of the immune system through the new terms. Eventually, the immune response saturates, as shown in Fig. 8.4(b), but the number of viral strains keeps increasing over time, and the disease eventually overwhelms the system. The phenomenon of immune escape emerges easily in this simple model of HIV infection, despite the simplicity of the competitive dynamics. Other dynamical systems that share much in common with infection by viruses are the infection of cellular tissues by bacteria, as well as the growth and metastasis of cancer.

8.2.2 Cancer chemotherapy and acquired resistance

Cancer is characterized by uncontrolled growth, with health consequences that have some parallels with uncontrolled microbial growth. To impede the uncontrolled growth of either microbes or cancer, therapeutic treatments are prescribed for patients (antimicrobial agents in the case of viruses and bacteria, and anticancer agents in the case of cancer). If these agents are unable to eliminate the infection or the cancer, then the prognosis becomes worse after the treatment, because strains are selectively cultivated that are resistant to the treatment and dominate the regrowth of the disease. In the case of bacteria, these have antibiotic resistance and can spread into the general population, such as MRSA (methicillin-resistant Staphylococcus aureus). In the case of cancer, the relapsing disease consists of aggressive cancer cells that are insensitive to further treatment. In both cases, new therapies must be applied to combat the disease, but multidrug resistance can emerge to produce "super bugs" in the case of bacteria or untreatable metastatic disease in the case of cancer.

Despite the similarities, the biology of cancer has an environmental component that plays an important role in the growth of the disease. Every cell in the body communicates with its neighbors, sending and receiving signals to establish a microenvironment that is appropriate for each location within the organism. In normal health, if a cell is out of place, or has grown beyond its usefulness, then natural microenvironmental signals will induce programmed cell death. However, cancer cells acquire genetic mutations that help them escape the cell-death response, ignoring signals to self-destruct as they grow into tumors. Nonetheless, even the tumor microenvironment exerts influence and control over cancer cells. The microenvironment can provide environmental resistance to the growth of cancer cells. Environmental resistance sets a carrying capacity based on the availability of nutrients or oxygen, and can include inhibitory signaling that suppresses uncontrolled growth. Therefore, attacking the entire tumor microenvironment with indiscriminate cytotoxic cancer drugs can have unexpected consequences: removing the environmental resistance against the more aggressive cancer cells allows them to grow preferentially and to eventually dominate cancer recurrence with acquired resistance to drug treatment.

The emergence of acquired drug resistance can be modeled using a distribution of cancer phenotypes and including environmental resistance exerted by the tumor microenvironment. The model is

\[ \dot{x}_a = x_a \left( r_a - \beta_a - \gamma_a \sum_b x_b \right) \tag{8.16} \]

where each x_a is a different phenotype of the cancer. A phenotype is a subclass of cells that exhibits distinct behavior, possibly caused by specific genetic mutations, but also caused by the interaction of genetic mutations with the microenvironment. The growth rate of a phenotype is given by r_a, and the inhibition caused by an applied drug is β_a ≫ r_a. Environmental resistance is exerted on the phenotypes through the parameter γ_a and is proportional (in this model) to the total number of cancer cells in the neighborhood.


Figure 8.5 Simulating the emergence of drug resistance, with subpopulation numbers shown as a function of treatment cycle. The toxic drug removes the environmental resistance, allowing the resistant subpopulation to grow faster with the treatment than without.

During cancer treatment, the therapy is applied in multiple cycles, usually with a period of about one week, making the parameter β_a time-dependent. A simulation of the evolution of drug resistance is shown in Fig. 8.5 for an initial ensemble of 20 phenotypes with a distribution of drug sensitivities and a distribution of initial frequencies (fractions of the total population). The drug is applied during a time T and removed for an equivalent time T to make up a single cycle. The cycle is applied five times in this figure. The overall treatment reduces the population of cancer cells by 94% (which would be a moderately positive clinical outcome), but the fraction of the total population comprising the most resistant phenotype rises from 0.25% to 25% over the course of the treatment. The increase of the resistant phenotype is actually faster than would have occurred without treatment. This counterintuitive result comes about because the drug removes many of the cancer cells and hence removes the environmental resistance that would have inhibited the expansion of this phenotype. This example illustrates how destroying the tumor microenvironment without eliminating all cancer cells causes the more aggressive cells to expand by a greater amount than if left untreated. This is one reason why targeted cancer therapies are replacing the older indiscriminate cytotoxic therapies.
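The following is a minimal numerical sketch of this model, integrating Eq. (8.16) with a cyclically applied drug. The parameter values, phenotype distributions, and random seed are illustrative assumptions, not the values behind Fig. 8.5.

```python
# Sketch of Eq. (8.16) with a drug applied on alternating half-cycles.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
Np = 20                                # number of phenotypes
r = rng.uniform(0.9, 1.1, Np)          # intrinsic growth rates r_a
beta = rng.uniform(5.0, 10.0, Np)      # drug sensitivities (beta >> r)
beta[0] = 0.5                          # one nearly resistant phenotype
gamma = 1.0                            # environmental resistance
T = 1.0                                # half-period of one treatment cycle

def rhs(t, x):
    drug_on = (t // T) % 2 == 0        # drug on for time T, off for time T
    b = beta if drug_on else 0.0
    return x * (r - b - gamma * np.sum(x))

x0 = rng.uniform(0.01, 0.1, Np)
x0[0] = 1e-4                           # resistant clone starts rare
sol = solve_ivp(rhs, [0, 10 * T], x0, max_step=0.01)   # five full cycles

total = sol.y.sum(axis=0)
print("population reduction: %.1f%%" % (100 * (1 - total[-1] / total[0])))
print("resistant fraction: %.4f -> %.4f"
      % (x0[0] / total[0], sol.y[0, -1] / total[-1]))
```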

8.3 Replicator dynamics

Zero-sum games happen when there are finite resources that must be divided among several groups. The zero-sum condition is a conservation law: in the case of populations, it states that the sum of all individuals among all subpopulations is a constant. In a zero-sum game, when one group wins, another group must lose. This means that the interaction matrix between these groups, also known as the payoff matrix, is antisymmetric, which ensures that the population growth dynamics remain bounded. There can be oscillations and multigroup cycles as well as stable fixed points, but these all live in a bounded space that keeps the total number of individuals constant. This space is called a simplex.

An example of a simplex for four subpopulations is shown in Fig. 8.6. This is a 3-simplex, a tetrahedron displayed in three-dimensional space. The tetrahedron has four vertices, six edges, and four faces. The vertices represent single subpopulations. An edge represents a continuous variation between two subpopulations. On a face, one subpopulation is zero, and the total number is divided among the three other subpopulations. Inside the simplex, any combination of the four subpopulations is possible. In all cases, the sum over all subpopulations is a constant.

8.3.1 The replicator equation

The bounded dynamics of a zero-sum game can be captured by a simple population growth model called the replicator model.


Figure 8.6 A 3-simplex (tetrahedron) showing 4 vertices, 6 edges, and 4 faces. The constraint x + y + z + w = 1 allows all combinations to be expressed uniquely in or on the simplex.


The replicator equation has a species growth rate that is proportional to the fitness of the species, but, unlike the simpler unbounded growth models, the fitness is a function of the concentrations of all the other species. The fitness of the ath species is given by

\[ f_a(\vec{x}) = \sum_{b=1}^{N} x^b p_{ab} \equiv x^b p_{ab} \tag{8.17} \]

where x^a is the fraction (also called the frequency) of a species population, p_ab is the payoff matrix, and the repeated index (one covariant and the other contravariant) in the last expression implies Einstein summation.3 If species a benefits over species b, then the payoff matrix element is positive. The payoff matrix is an antisymmetric matrix, p_ab = −p_ba, with zero diagonal, p_aa = 0. The average fitness is given by

\[ \phi(\vec{x}) = f_a x^a \tag{8.18} \]

The replicator equation has a growth rate proportional to the difference between the species fitness and the average fitness. It has the simple form4

Replicator equation:
\[ \dot{x}^a = x^a \cdot \left( f^a - \phi \right) \tag{8.19} \]

3 In this chapter, we begin to use the index notation introduced in Chapter 1 in anticipation of its more extensive use in later chapters. A repeated index, one up and one down, implies an inner product with Einstein summation. In evolutionary dynamics, the dynamical spaces are assumed to be Cartesian, so vector components with superscripts or subscripts behave the same. But evolutionary dynamics can also occur on manifolds, where a metric tensor description would be needed.

4 A repeated index with both indices up does not imply Einstein summation, but is instead a simple product. The inner product, or Einstein sum rule, requires one index up and the same index down.

with the conservation condition

\[ \sum_{a=1}^{N} x^a = 1 \tag{8.20} \]

which constrains all trajectories to lie in or on the N − 1 simplex. For very small N, the cases are simple. For N = 2, the simplex is merely a line. With only two competing species, often a single species takes the full share while the other goes extinct. This is the classic "winner-take-all" competition scenario. Whenever only two businesses are competing for a fixed market, one can win everything and the other goes out of business, although a stable equilibrium between the two (usually with unequal market shares) can also occur.

For three species interacting under the constraint of a zero-sum rule, the simplex is a triangle, as shown in Fig. 8.7. Each vertex corresponds to a single species frequency (x, y, or z). Each edge is a continuous variation from one species to another, where the third species frequency is zero. All three frequencies sum to unity: x + y + z = 1. For instance, the center of gravity of the triangle corresponds to x = y = z = 1/3. For N = 3, solutions of the replicator equation fall into two classes: either center oscillations in which all three frequencies are nonzero, or winner-takes-all for a single species with two unstable fixed points for the other two species.


Figure 8.7 Two solutions for the replicator equation with antisymmetric payoff matrix and zero diagonal. The solution on the left has an internal center. The solution on the right has an unstable node, a saddle, and a stable node at the three vertices.


Figure 8.8 Some solutions with payoff matrices with zero trace. The example on the left has an antisymmetric payoff matrix with five fixed points. The example on the right has a random payoff matrix with six fixed points that include an internal saddle point.

The vertices are guaranteed to be fixed points, since the intersections of the three nullclines comprise the edges of the triangle, and the dynamics will always have at least three fixed points. In addition, a center can occur internal to the simplex as a possible fourth fixed point.

By replacing the restriction of zero diagonal with the weaker restriction of zero trace, more types of solutions can occur, as shown on the left in Fig. 8.8. The dynamics in this case are called autocatalytic because of the interaction of species with themselves. The internal center can become any type of two-dimensional fixed point: a spiral or a node, either stable or unstable, or a saddle point. In addition, stable or unstable nodes can occur along the edges of the triangle. The greater complexity of this case allows three, four, or five fixed points to occur in the dynamics. When the further constraint of an antisymmetric payoff matrix is replaced by a random matrix (but still with zero trace), then up to six fixed points can occur in the dynamics, as shown on the right in Fig. 8.8.

Several examples of replicator dynamics with asymmetric payoff matrices (and zero diagonals) are shown in Fig. 8.9 for N = 8 subpopulations. For many random choices of the payoff matrix elements, the solutions tend to have only a few characteristic behaviors. These include (1) stable fixed points of three or five species, with five or three species, respectively, condemned to extinction; (2) period-three or period-five cycles in which three or five species oscillate against

each other while the other species go extinct; and (3) combinations of cycles and stable fixed points, usually with period-three cycles and two stable species, with the other species going extinct. Indeed, for N competing species, there are generally N/2 that go extinct, while the others are either stable or participate in cycles.

Figure 8.9 Examples of replicator dynamics for N = 8 (panels labeled "5-stable," "3-oscillating," "Pennant," and "Instability"; species frequencies plotted versus time). Three- and five-species cycles are common, as well as three or five stable species. In all cases of N = 8, three to five species tend to go extinct.
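A minimal sketch of these dynamics, integrating the replicator equation (8.19) for N = 8 with a random antisymmetric payoff matrix, might look as follows; the seed and time span are assumptions for illustration, not the script behind Fig. 8.9.

```python
# Replicator dynamics with a random antisymmetric payoff matrix.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
N = 8
A = rng.uniform(-1, 1, (N, N))
P = A - A.T                      # antisymmetric payoff, zero diagonal

def replicator(t, x):
    f = P @ x                    # fitness f_a = sum_b p_ab x_b
    phi = x @ f                  # average fitness (zero for antisymmetric P)
    return x * (f - phi)

x0 = rng.uniform(0.1, 1.0, N)
x0 /= x0.sum()                   # start on the simplex
sol = solve_ivp(replicator, [0, 1600], x0, max_step=0.1)

print("conservation check:", sol.y[:, -1].sum())   # stays equal to 1
print("surviving species:", np.sum(sol.y[:, -1] > 1e-3))
```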


8.3.2 Hypercycles

An intriguing question in chemical physics is how self-replicating reactions first arose. One possible scenario for self-replication occurs when the chemical product of one reaction becomes the reactant of a subsequent reaction in a chain that closes back on itself to form a finite ring (or cycle) of reactions. The cycle is called a hypercycle,5 and it can sustain steady-state oscillations. This sequence of reactions is described by a special form of the replicator equation in which the payoff matrix is nonzero only on a subdiagonal or a superdiagonal (but not both). In this case, the payoff matrix is better viewed as an interaction or transfer matrix in which the nth chemical species becomes a reactant to form the (n + 1)th chemical species. The conditions on the payoff matrix (for a nonzero subdiagonal) are

\[ p_{a,a-1} > 0, \qquad p_{1N} > 0, \qquad p_{ab} = 0 \ \text{otherwise} \tag{8.21} \]

To close the cycle, the matrix must have periodic conditions such that the first reactant is provided by the Nth reactant product of an N-element cycle. An example of a 7-reactant hypercycle is shown in Fig. 8.10. The payoff matrix is subdiagonal with all-positive matrix elements. The cycle is completed by providing the Nth reactant product to the first reaction. The transients are short-lived, and the cycle establishes a sequence of pulses. The hypercycle oscillations are strongly nonlinear, behaving like a clock sequence as each reactant is formed in succession.

An interesting feature of the hypercycle model is the possibility that more than one hypercycle can share some of the same reactants. For instance, several chemical species that occur in one hypercycle may also participate in a different hypercycle. Questions then arise as to whether one or the other hypercycle will dominate, whether the two hypercycles will share the reactants, and, if so, whether the hypercycles could synchronize. The answer to these questions depends on the relative fitness values of the hypercycles. In general, the hypercycle with the greater fitness will dominate, driving the other hypercycle to extinction. In this way, different hypercycles compete for resources in a winner-takes-all dynamic. This may also be called first-takes-all dynamics, because once one hypercycle has been established, it is difficult to displace, even if a later hypercycle arises that is fitter. It has long puzzled biologists why all biological molecules have the same chirality (handedness), when each chirality can occur chemically. The answer may be that one chirality emerged first, by accident, in the earliest days of self-replicating chemical reactions, and consequently that first chirality has dominated ever since.

5 The hypercycle was proposed by Eigen and Schuster. See M. Eigen and P. Schuster, "Hypercycle—Principle of natural self-organization. A. Emergence of hypercycle," Naturwissenschaften 64, 541–65 (1977).


Figure 8.10 Example of a 7-reactant hypercycle (reactant concentration plotted versus time). The payoff matrix is subdiagonal with all-positive matrix elements. Each reaction product feeds into the next reaction in a cycle.
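A hypercycle simulation follows the same pattern as the replicator sketch above, swapping in the subdiagonal payoff matrix of Eq. (8.21); the unit rate constants below are assumed for illustration.

```python
# Hypercycle: replicator dynamics with a subdiagonal payoff matrix.
import numpy as np
from scipy.integrate import solve_ivp

N = 7
P = np.zeros((N, N))
for a in range(1, N):
    P[a, a - 1] = 1.0            # p_{a,a-1} > 0: reactant a-1 feeds reaction a
P[0, N - 1] = 1.0                # periodic closure: p_{1N} > 0

def replicator(t, x):
    f = P @ x
    return x * (f - x @ f)

rng = np.random.default_rng(0)
x0 = np.full(N, 1.0 / N) + 0.01 * rng.uniform(size=N)
x0 /= x0.sum()
sol = solve_ivp(replicator, [0, 400], x0, max_step=0.05)
# The rows of sol.y pulse in succession, one reactant after another
# around the cycle, after the short-lived transients die away.
```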

8.4 Quasispecies

Mutation plays an essential part in evolutionary dynamics by providing the adaptability needed to survive in the face of changing environmental pressures. A model in population dynamics that includes mutation among subspecies is the quasispecies equation, developed by Manfred Eigen and Peter Schuster in the 1980s. The term "quasispecies" is synonymous with "subspecies." This model goes beyond predator and prey and allows species and their subspecies to evolve in response to external evolutionary forces, with survival governed by subspecies fitness, often in response to predation. The mutations treated in this model are different from the randomly arising mutations of viral infections. In that case, a new viral strain arises by mutation from an established strain, but there is no back-mutation. This is appropriate for complex species with many genes. However, in the case of molecular evolution, as for DNA, point mutations in the digital code can mutate back and forth with some probability, which is modeled with a transition rate in a matrix of all possible mutations.

8.4.1 The quasispecies equation

The quasispecies equation contains a transformation term that mutates individuals into and out of a quasispecies. The transition matrix elements (mutation rates) from one species to a mutated species are governed by a stochastic mutation matrix Q_a^b. A stochastic matrix has the property that its rows and columns sum to unity. The transition rate at which subspecies b mutates into subspecies a is the mutation matrix multiplied by the fitness of the original species:

\[ W_a^b = f^b Q_a^b \tag{8.22} \]

The mutation matrix is a symmetric matrix for which the back-mutation is just as probable as the forward mutation. Multiplying the mutation from species b to species a by the fitness of the bth species makes the transition rate proportional to the replication rate of the bth species, analogous to the role played by the fitness in the replicator equation. The diagonal elements of the mutation matrix are not in general zero, because species reproduce with their given fitness. In the quasispecies model, the individual fitnesses are not functions of the subspecies population size (known as frequency-independent fitness), unlike the case for the replicator equation. The average fitness of the entire population is

\[ \phi = f_a x^a \tag{8.23} \]

which is the weighted average, by population size (frequency x^a), of the individual fitness values f_a of each species. The quasispecies equation includes a limiting growth term related to the average fitness φ of the entire population that keeps the relative populations bounded. The quasispecies equation is

Quasispecies equation:
\[ \dot{x}^a = \sum_{b=1}^{N} x^b f_b Q_b^a - x^a f_b x^b = x^b W_b^a - \phi x^a \tag{8.24} \]

where the sums are over all species, including the ath species. The quasispecies fixed-point equation is

\[ \dot{x}^a = x^{*b} W_b^a - \phi^* x^{*a} = 0 \tag{8.25} \]

In operator notation, this is the eigenvalue problem

\[ x^* W = \phi^* x^* \tag{8.26} \]

Therefore, the left eigenvectors of the transition matrix W are the fixed points of the quasispecies equation.6 These eigenvectors are used to determine the average fitness of the population distribution through φ* = φ(x*).

6 Left eigenvectors of a matrix A are the transposes of the right eigenvectors of the transpose matrix AT .
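A minimal numerical sketch of this eigenvalue condition is shown below; the fitness values and the simple symmetric, doubly stochastic mutation matrix are assumed placeholders.

```python
# Quasispecies fixed point as the leading left eigenvector of W = f Q.
import numpy as np

N = 5
rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, N)             # intrinsic fitness values f^b
mu = 0.3                                 # assumed mutation strength
Q = (1 - mu) * np.eye(N) + (mu / N) * np.ones((N, N))
# Q is symmetric, and its rows and columns each sum to unity (stochastic).

W = f[:, None] * Q                       # W_b^a = f^b Q_b^a (row b, column a)
vals, vecs = np.linalg.eig(W.T)          # left eigenvectors of W
k = np.argmax(vals.real)                 # leading (Perron) eigenvalue
x_star = np.abs(vecs[:, k].real)
x_star /= x_star.sum()                   # normalize onto the simplex
print(vals[k].real, f @ x_star)          # phi* equals the leading eigenvalue
```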

In the time evolution of the subpopulations within the quasispecies equation, the total number of individuals grows exponentially. The number of individuals in the ath subspecies is X^a, and its fraction of the total population X is given by x^a(t). These are connected by

\[ X^a(t) = x^a(t)\, e^{\psi(t)} \tag{8.27} \]

where the function ψ(t) is related to the growth of the total population. Taking the derivative of Eq. (8.27) yields

\[ \dot{X}^a(t) = \dot{x}^a(t)\, e^{\psi(t)} + x^a(t)\, e^{\psi(t)}\, \dot{\psi}(t) \tag{8.28} \]

If we define

\[ \dot{\psi} = \phi \tag{8.29} \]

with the solution

\[ \psi(t) = \int_0^t \phi(s)\, ds \tag{8.30} \]

then Eq. (8.28) becomes

\[ \dot{X}^a = \dot{x}^a e^{\psi} + \phi x^a e^{\psi} \tag{8.31} \]

and multiplying both sides by e^{−ψ} gives

\[ \dot{X}^a e^{-\psi} = \dot{x}^a + \phi x^a = \sum_{b=1}^{N} x^b f_b Q_b^a \tag{8.32} \]

using the second line of the quasispecies equation (8.24). The total number of individuals is

\[ X = \sum_{a=1}^{N} X^a = \sum_{a=1}^{N} x^a e^{\psi(t)} = e^{\psi(t)} \tag{8.33} \]

and the time derivative of the total population is

\[ \dot{X} = \phi X \tag{8.34} \]

which grows exponentially with a rate equal to the average fitness. This is an important aspect of quasispecies dynamics: the unlimited growth of the total population. There is a zero-sum game within the dynamics of the relative fractions of individuals x^a(t), but this is a consequence of the symmetry of the mutation matrix.

The two key elements of the quasispecies equation are the fitness f_a and the mutation matrix Q_a^b. The fitness parameter f_a determines the survival of the fittest and takes on the properties of a "fitness landscape." It often happens that groups of mutations that are "close" to each other can be more advantageous to survival than mutations that are farther away. This means that there are local maxima as well as local minima as the subspecies index a moves through the populations of subspecies. This landscape plays the role of a potential surface in a random walk, as evolution climbs peaks of fitness within the fitness landscape. The mutation matrix can best be understood at the molecular level as base-pair permutations within a gene during replication. In a gene of fixed length, there can only be a finite number of possible combinations, which sets the number N of subspecies.

8.4.2 Hamming distance and binary genomes

To study the role of genetic mutations, calculations are simplified by using binary codes instead of the four-letter DNA code. In a binary code, a mutation is a bit flip, and the permutation space is a hypercube with one dimension per bit. For instance, the permutation space for four bits is the four-dimensional hypercube shown in Fig. 8.11. Each axis (x, y, z, w) has only two values: 0 or 1. The inner cube represents w = 0, and the outer cube represents w = 1. Each edge represents the flip of a single bit. The "distance" between two vertices is the smallest number of edges that connects those two points. This 4-cube is sometimes called a simplex, but it is not to be confused with the 3-simplex of four population values shown in Fig. 8.6. In that simplex, the four values are continuous between 0 and 1 and sum to unity, a constraint that reduces the four dimensions to three and allows the 3-simplex to be drawn in ordinary three-dimensional space.

The distance between two binary numbers can be defined simply as the minimum number of bit flips that take one number into another. On the hypercube, this is the so-called Manhattan distance, or grid distance: the smallest number of edges that connects two points. In the case of binary numbers, this is called the Hamming distance. A grayscale map of the Hamming distance between all 7-bit numbers from 1 to 127 is shown in Fig. 8.12. The Hamming distance is important for binary mutations because it defines how many bit errors, or flips, are needed to mutate from one binary number (quasispecies) to another. With the definition of a Hamming distance between two bit strings, there are many possible choices for the mutation matrix as a function of the Hamming distance.


Figure 8.11 A 4-bit hypercube represents all possible bit combinations among 4 bits. The outer cube represents extension into the fourth dimension.

One simple assignment of the mutation matrix is

\[ Q_a^b = \left[ \sum_{b=0}^{N-1} \frac{1}{1 + \varepsilon^{-1} H_{ab}} \right]^{-1} \frac{1}{1 + \varepsilon^{-1} H_{ab}} \tag{8.35} \]

where the mutation probability decreases with increasing Hamming distance H_ab between a and b, and the parameter ε determines how fast it decreases with distance. The Hamming distance can also help define the fitness landscape. If there is a most-optimal subspecies α, a convenient way to define the fitness is

\[ f^b = \exp\left(-\lambda H_{\alpha b}\right) \tag{8.36} \]


with the parameter λ controlling the "contrast" of the landscape. Species that are farther away from the α species have lower fitness. These two equations, (8.35) and (8.36), establish a molecular evolution on binary strings. The bit-flip probability is constant per bit, meaning that "close" strings are most likely to mutate into each other, while more distant strings are less likely to do so. The most-fit subspecies α acts as an attractor: as bit strings get closer to the most-fit string, they survive and out-multiply less-fit subspecies. In this way, the population walks up the peaks in the fitness landscape.

As a concrete example, consider 7-bit strings with N = 2^7 = 128, using Q_a^b and f^b defined by the Hamming distance in Eqs. (8.35) and (8.36). The key control parameter is the strength of the mutation rate ε. If the mutation rate is large, and the landscape contrast is small, then the final population of species will tend to match the fitness landscape: high-fitness populations will be the largest, and low-fitness populations will be the smallest. For instance, simulated quasispecies are shown in Fig. 8.13(a) for a random initial population with a high mutation rate ε = 0.5. At long times, the most-fit species dominates, but other nearby species also survive. No species die off, because it is always possible to generate distant species by a few improbable bit flips. This makes the quasispecies equation noticeably different from the replicator equation: the latter does not include cross-mutations, and hence many of its populations become extinct. The fitness landscape is shown in Fig. 8.13(b) along with the final population numbers for the large mutation rate. The final population matches the fitness landscape almost perfectly for this high mutation rate.

Figure 8.12 Hamming distances Hab between 7-bit binary numbers from 0 to 127.
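In code, the Hamming distance between two binary labels is just the popcount of their XOR; a minimal sketch:

```python
# Hamming distance between 7-bit binary genome labels.
import numpy as np

def hamming(i, j):
    """Number of differing bits between the binary labels i and j."""
    return bin(i ^ j).count("1")

H = np.array([[hamming(i, j) for j in range(128)] for i in range(128)])
print(hamming(0, 127))    # 7: all seven bits must flip
print(H.min(), H.max())   # distances range from 0 to 7 for 7-bit strings
```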


Figure 8.13 (a) Simulation of quasispecies frequency as a function of time for a random starting population. (b) Fitness landscape compared with final population. Solid data points denote the fitness landscape, while open data points are the final population. The most-fit species in this example is the 64th species.
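A minimal sketch of the full quasispecies simulation combines the Hamming-distance mutation matrix of Eq. (8.35) with the single-peak fitness of Eq. (8.36); the values of ε, λ, and the location of the fitness peak are assumptions for illustration.

```python
# Quasispecies evolution on 7-bit genomes, Eq. (8.24) with Eqs. (8.35)-(8.36).
import numpy as np
from scipy.integrate import solve_ivp

N = 128
idx = np.arange(N)
H = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
eps, lam, alpha = 0.5, 1.0, 64
Q = 1.0 / (1.0 + H / eps)                # Eq. (8.35), unnormalized
Q /= Q.sum(axis=1, keepdims=True)        # normalize rows to unity
f = np.exp(-lam * H[alpha])              # Eq. (8.36): single fitness peak

def quasispecies(t, x):
    return (x * f) @ Q - (f @ x) * x     # sum_b x^b f_b Q_b^a - phi x^a

rng = np.random.default_rng(2)
x0 = rng.uniform(size=N)
x0 /= x0.sum()
sol = solve_ivp(quasispecies, [0, 500], x0, max_step=0.5)
# At long times the population distribution tracks the fitness landscape,
# peaked at the most-fit subspecies (index 64 here), as in Fig. 8.13.
```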

8.4.3 The replicator–mutator equation

In the quasispecies equation, the fitness of each subspecies is independent of the population size. A straightforward extension of the quasispecies model introduces a frequency-dependent fitness. This can occur when groups of subspecies form alliances that work together: belonging to a group increases the chances of survival. Frequency-dependent fitness can also model bandwagon effects, such as a "popular" group or a fad that induces others to change allegiance and draws them to it. The quasispecies equation with frequency-dependent fitness is called the replicator–mutator equation:

Replicator–mutator equation:
\[ \dot{x}^a = \sum_b x^b f_b(\vec{x})\, Q_b^a - \phi x^a, \qquad f_b(\vec{x}) = x^c p_{cb}, \qquad \phi(\vec{x}) = f_b x^b = x^c p_{cb}\, x^b \tag{8.37} \]

This equation is a combination of the replicator equation and the quasispecies equation. The payoff matrix is p_ab, and the mutation matrix is Q_b^a for an individual in species b mutating into species a. The fitness of the ath species is f_a, which now depends on the size of the group. The coefficients of the payoff matrix are all positive numbers and do not need to be antisymmetric. One convenient payoff matrix uses the Hamming distance through


\[ p_{ba} = \exp\left(-\beta H_{ab}\right) \tag{8.38} \]

which produces a "clique" effect in which nearby populations have higher payoff and bolster the fitness of the local group of species. Equation (8.38) for the payoff should be contrasted with Eq. (8.36), which defined the fitness for the quasispecies model. The replicator–mutator equation, with a symmetric mutation matrix Q_b^a, has a striking property at high mutation rates (large ε): there is a global fixed point at which every population is equally probable. The time evolution of a distribution of initial population probabilities is shown in Fig. 8.14 for a high mutation rate. The spread of population frequencies constricts over time until every population is equally probable. In this case, the frequency-dependent fitness combined with frequent mutation produces a collective effect that equalizes selection for all species.
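A minimal sketch of the replicator–mutator dynamics, with assumed values of ε and β, is given below; at high mutation rate the spread of frequencies contracts toward the uniform fixed point, as in Fig. 8.14.

```python
# Replicator-mutator dynamics, Eq. (8.37), with the payoff of Eq. (8.38).
import numpy as np
from scipy.integrate import solve_ivp

N = 128
idx = np.arange(N)
H = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
eps, beta = 5.0, 1.0                     # high mutation rate (large eps)
Q = 1.0 / (1.0 + H / eps)
Q /= Q.sum(axis=1, keepdims=True)
P = np.exp(-beta * H)                    # "clique" payoff matrix, Eq. (8.38)

def repmut(t, x):
    f = P.T @ x                          # frequency-dependent fitness f_b
    return (x * f) @ Q - (f @ x) * x

rng = np.random.default_rng(4)
x0 = rng.uniform(size=N)
x0 /= x0.sum()
sol = solve_ivp(repmut, [0, 1000], x0, max_step=1.0)
print(sol.y[:, -1].std())   # spread shrinks toward the uniform fixed point
```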

8.5 Game theory and evolutionary stable solutions

Game theory is the mathematical study of problems associated with multiple agents (or players) whose individual benefits are maximized by the actions they take under specified constraints. The question that game theory answers is which actions, known as strategies, produce the best outcomes for individual agents.

Figure 8.14 Evolution of 50 quasispecies with time until all species are equally probable.

Figure 8.15 Normal form for two-person (Rose and Colin) noncooperative non-zero-sum games. The payoff matrix defines the payoff for (Rose, Colin), respectively, when selecting among strategies, in this case strategies A and B. The arrows designate moves to improved outcomes for Rose (vertical) and Colin (horizontal). Nash equilibria are at AA in Game 1 and BB in Game 2.

7 See P. D. Straffin, Game Theory and Strategy (MAA New Mathematical Library, 1993). Using Straffin's clever terminology, Rose = rows and Colin = columns.

8 Named after work on game theory in 1950 by John Nash (Nobel Prize winner in economics and the subject of the book and film A Beautiful Mind).

9 Minimax theorems have played a central role in game theory. They were introduced by John von Neumann in his seminal publication on game theory (J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton Classic Editions (Princeton University Press, 2007)).

Game 1 (payoffs as (Rose, Colin)):

            Colin A    Colin B
Rose A      (2, 3)     (3, 2)
Rose B      (1, 0)     (0, 1)

Game 2 (payoffs as (Rose, Colin)):

            Colin A    Colin B
Rose A      (3, 3)     (−1, 5)
Rose B      (5, −1)    (0, 0)

Phrases like "win-win" and "zero-sum" are longstanding aspects of classical game theory, some of whose well-known games are the Prisoner's Dilemma, the Lady and the Tiger, and the Monty Hall game. Game theory is fundamentally an optimization problem. One reason for the complexity of game theory (why the optimum choice of game strategy is often not immediately clear) is the role of the constraints. Constraints determine the domains over which some benefit function, called a utility function, must be maximized.

Games are often defined in "normal form," as in Fig. 8.15, for a two-person noncooperative non-zero-sum game. There are two players in this case, Rose and Colin, each of whom can choose between two strategies, A and B.7 The payoff matrix says how much Rose and Colin gain for each possible outcome (Rose strategy, Colin strategy). For instance, in Game 1, if both play strategy A, then Rose gets 2 units and Colin gets 3 units. The game is noncooperative, because each player has complete information about the payoff matrix and they pick their strategies simultaneously without any prior communication. This game is non-zero-sum, because the total amount that can be gained for a given set of strategies is not conserved. This is contrasted with a zero-sum game, where one player's gain is another player's loss.

Even though the game is played simultaneously, the dynamics of sequential play can show how an equilibrium solution is obtained. In Fig. 8.15, each arrow shows an "improvement" in outcome. Vertical arrows allow Rose to improve her payoff, while horizontal arrows allow Colin to improve his. For example, in Game 1, beginning with strategies BB, Rose can improve her payoff by switching to A. But then Colin can improve his payoff by switching his strategy to A. Both Rose and Colin end up at AA. The same solution is achieved if the game starts at BA. The AA "solution" to the game is called a Nash equilibrium.8 In Game 1, Rose would do best with AB, while Colin would do best at AA, so the equilibrium solution favors Colin in this game.

An analysis of Game 2 also shows a unique Nash equilibrium, this time at BB, but there is a disturbing character to this game solution: both players would do better if they chose AA. But they cannot get there without cooperation, because Rose minimizes her possible damage and maximizes her possible gain by picking B, and likewise for Colin. When a player seeks to minimize their maximum damage, this is called minimax.9 Game 2 is a variant of what is known as the

Prisoner's Dilemma. It highlights the difficulty of a noncooperative game in which both players, by playing most cautiously, end up with a mutually bad solution, albeit a little better than the worst case.

The concepts of Nash equilibrium and minimax are only a couple of the many ways to look at games and outcomes. An additional approach to game analysis is known as Pareto optimality.10 A strategy is Pareto-optimal when no other strategies exist in which both players do better, or one does better while the other stays the same. The solution of Game 1 at AA is Pareto-optimal, because there is no other solution in which both players do better. The solution of Game 2 at BB is not Pareto-optimal, because both players do better at AA. By the definition of Pareto optimality, solution AB in Game 1 is also Pareto-optimal, because there is no other solution in the game in which at least one player does better while the other at least stays the same. In Game 2, AA, AB, and BA are all Pareto-optimal and represent a Pareto frontier. Selecting a solution on the Pareto frontier requires some form of cooperation, or negotiation, among the players, while the Nash equilibrium requires no cooperation.

The analogy of sequential game play with phase portraits of dynamical systems can be seen in Fig. 8.16. The payoff matrix is on the right, and the game dynamics, like a phase portrait, is on the left. The Nash equilibrium at CB, indicated with a dashed circle, is an attractor in the game dynamics. The Nash equilibrium is at the intersection of the minimax solutions of both Rose and Colin. The flow of the dynamics is not a "local" flow, because a player can choose any strategy available to them, meaning that the "movement" can be to anywhere in a row for Rose and anywhere in a column for Colin. In the game flow, there are obvious saddle points, like AB and BC, but also saddle points at AA, BA, DA, BD, and DD. Pareto-optimal solutions are indicated by solid circles on the payoff matrix. Note that every Pareto-optimal solution is a saddle, although not all saddles are Pareto-optimal. If the players cooperate, then they can reach Pareto-optimal solutions.

Payoff matrix (payoffs as (Rose, Colin)):

            Colin A     Colin B    Colin C    Colin D
Rose A      (12, −6)    (−1, 1)    (1, −1)    (0, 0)
Rose B      (5, 5)      (1, −1)    (7, −7)    (−10, 5)
Rose C      (3, −3)     (2, −2)    (3, −4)    (3, −3)
Rose D      (−16, 8)    (0, 0)     (0, 0)     (16, −8)

Figure 8.16 A two-person non-zero-sum game with four strategies each. The payoff matrix is on the right, and the game dynamics (for hypothetical sequential play) is on the left. The Nash equilibrium at CB appears as an attractor in the game dynamics. Saddle points are at AA, AB, BA, BC, BD, DA, and DD. Rose's minimax (minimizing her possible damage) is strategy C, and Colin's minimax is strategy B. The "win-win" solution is the Nash arbitration at BA.

10 Vilfredo Pareto (1848–1923) was an Italian economist who succeeded Walras at the University of Lausanne. Pareto was the first to point out that the distribution of wealth (in developed countries) followed a power-law distribution with 80% of the wealth being held by 20% of the population.

Furthermore, if they agree to find the "best" solution for both, they can settle on the Pareto-optimal solution whose summed payoff is the largest. This occurs at BA, where the total payoff (summing both) is 5 + 5 = 10. Alternatively, they could agree to maximize the product of the payoffs. In this game, this also leads to BA, but in general, maximizing the sum or maximizing the product of payoffs can lead to different solutions. The point is that once negotiations are opened between players, many other factors enter into the game, including skill at negotiation.

Game theory plays an important role in evolutionary dynamics because of the concept of the payoff matrix. John Maynard Smith adapted the payoff matrix from games among rational agents to games of agents under the conditions of natural selection. In evolutionary games, the agents make no rational choices, but adapt to selection pressures.
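As a small illustration, the pure-strategy Nash equilibria of a bimatrix game can be found by checking for mutual best responses; the sketch below encodes Games 1 and 2 of Fig. 8.15.

```python
# Pure-strategy Nash equilibria by mutual best response.
import numpy as np

def pure_nash(R, C):
    """Return (row, col) pairs where neither player can improve alone."""
    eq = []
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            if R[i, j] == R[:, j].max() and C[i, j] == C[i, :].max():
                eq.append((i, j))
    return eq

# Game 1 and Game 2 of Fig. 8.15 (rows = Rose, columns = Colin)
R1 = np.array([[2, 3], [1, 0]]); C1 = np.array([[3, 2], [0, 1]])
R2 = np.array([[3, -1], [5, 0]]); C2 = np.array([[3, 5], [-1, 0]])
print(pure_nash(R1, C1))   # [(0, 0)] -> AA, as in Game 1
print(pure_nash(R2, C2))   # [(1, 1)] -> BB, the Prisoner's Dilemma outcome
```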

8.5.1 Evolutionary game theory: hawk and dove

In the hawk-and-dove game of Maynard Smith and Price,11 individuals compete for a resource of value V > 0, so that if R_0 is the average number of offspring, then a winner of the competition has R_0 + V offspring. The payoff matrix (payoffs as (row, column)) is

            Hawk                          Dove
Hawk        ((V − C)/2, (V − C)/2)        (V, 0)
Dove        (0, V)                        (V/2, V/2)

The rules of the game are that if a hawk meets another hawk, they fight for the resource, and the winner gets the value V, but the loser suffers a cost C > 0. In each battle, the outcome is assumed to be 50/50. Therefore, the average payoff of the battle is (V − C)/2. If a hawk meets a dove, then the dove retires from the battlefield at no cost, and the hawk gets the value V. If two doves meet, then one gets the value V and the other gets nothing, but also suffers no cost. The average payoff for each dove is V/2. Clearly, doves never pay a cost.

The game is played iteratively, beginning with generation n = 1. At generation n, the average reproductive rates for the hawks and doves are

\[ H_n = H_0 + x_n \frac{V - C}{2} + (1 - x_n) V, \qquad D_n = D_0 + (1 - x_n) \frac{V}{2} \tag{8.39} \]

where x_n is the fraction of hawks. The average reproductive rate for the entire population is

\[ R_n = x_n H_n + (1 - x_n) D_n \tag{8.40} \]

The fraction of hawks in the next generation is

\[ x_{n+1} = x_n \frac{H_n}{R_n} \tag{8.41} \]

11 J. M. Smith and G. R. Price, "The logic of animal conflict," Nature 246, 15–18 (1973).

One steady-state solution is x* = 0, where all members are doves. However, this state is unstable for any finite reproduction, because a single hawk can invade the population. Nontrivial fixed points of the iterative dynamics are given by the condition

\[ 1 = \frac{H_n}{R_n} = \frac{H_0 + x^* (V - C)/2 + (1 - x^*) V}{x^* \left[ H_0 + x^* (V - C)/2 + (1 - x^*) V \right] + (1 - x^*) \left[ D_0 + (1 - x^*) V/2 \right]} \tag{8.42} \]

This has one solution at x* = 1, where all members are hawks. This solution is stable, because a single dove would not be able to survive in the midst of so many hawks. However, another stable solution has an intermediate value given by

\[ x^* = \frac{H_0 + V/2 - D_0}{C/2} \tag{8.43} \]

This solution is stable because a perturbed fraction x = x* + ε has a multiplier H_n/R_n = 1 − ε′ that causes the excess (deficit) hawk population to decay (rebound). Therefore, the intermediate solution is stable for small deviations. The stable solution is called an evolutionary stable strategy (ESS) of the population dynamics. If H_0 = D_0, then 0 ≤ x* = V/C ≤ 1, which shows that an ESS occurs if the cost of battle C is greater than the value gained V; that is, the combatants have more to lose than to gain by fighting.

The importance of the Smith–Price model for evolutionary dynamics is that it provides an explicit mechanism for an observed ecological phenomenon. Ecologists have noted that individuals competing for resources like food or mates rarely engage in deadly combat, or even in combat that leads to injury, despite possessing horns or claws or sharp beaks. Evolutionary game theory provides an explanation, showing that a mixture of aggressive and passive strategies leads to a stable situation in which individuals rarely need to risk injury or death. Avoiding injury or death is clearly an advantage to survival, so the ESS provides a survival advantage for the entire population.
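A few lines of iteration confirm the ESS numerically; the values V = 2, C = 3, and H_0 = D_0 = 1 below are assumptions, for which Eq. (8.43) predicts x* = V/C = 2/3.

```python
# Hawk-and-dove iteration, Eqs. (8.39)-(8.41).
V, C, H0, D0 = 2.0, 3.0, 1.0, 1.0
x = 0.1                                   # initial hawk fraction
for n in range(200):
    H = H0 + x * (V - C) / 2 + (1 - x) * V       # Eq. (8.39), hawks
    D = D0 + (1 - x) * V / 2                     # Eq. (8.39), doves
    R = x * H + (1 - x) * D                      # Eq. (8.40)
    x = x * H / R                                # Eq. (8.41)
print(x)   # converges to the ESS near 2/3
```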

8.6 Summary

Evolutionary dynamics treats the growth and decay of populations under conditions that influence reproduction, competition, predation, and selection. The spread of viruses across populations, or the emergence of drug-resistant cancer, can be described within the context of evolutionary dynamics. The Lotka–Volterra equations were early population equations that describe population cycles between predators and prey. Symbiotic dynamics can lead to stable ecosystems among a wide variety of species. Natural selection operates through the principle of relative fitness among species. Fitness can be frequency-dependent (if it depends on the size of the population) or frequency-independent (intrinsic fitness of the species). A simple frequency-dependent fitness model is the replicator equation, which follows a zero-sum game among multiple species whose total populations reside within a simplex. Many closely related species, connected by mutation, are called quasispecies. The quasispecies equation includes the mutation among quasispecies through a mutation matrix, which is a stochastic matrix. The quasispecies have an intrinsic fitness that can be viewed as a multidimensional fitness landscape, and the quasispecies diffuse and climb up fitness peaks within this landscape. Adding frequency-dependent fitness to the quasispecies equation converts it to the replicator–mutator equation, which captures collective group dynamics. Game theory provides a different perspective on competition and is applied in cases where differing agents have differing payoffs, leading to evolutionary stable solutions (ESS). Principles of game theory also apply to aspects of economic dynamics.

8.7 Bibliography

N. Bacaër, A Short History of Mathematical Population Dynamics (Springer, 2011).
M. Eigen and P. Schuster, The Hypercycle (Springer, 1979).
M. Eigen, Steps Towards Life: A Perspective on Evolution (Oxford University Press, 1992).
M. Eigen, J. McCaskill, and P. Schuster, "The molecular quasispecies," Advances in Chemical Physics 75, 149–263 (1989).
S. Gavrilets, Fitness Landscapes and the Origin of Species (Princeton University Press, 2004).
S. Kauffman, The Origins of Order: Self-Organization and Selection in Evolution (Oxford University Press, 1993).
M. Kimura, The Neutral Theory of Molecular Evolution (Cambridge University Press, 1983).
M. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Harvard University Press, 2006).
P. Schuster and P. F. Stadler, "Networks in molecular evolution," Complexity 8, 34–42 (2002).
P. D. Straffin, Game Theory and Strategy (MAA New Mathematical Library, 1993).
S. Wright, Evolution: Selected Papers (University of Chicago Press, 1986).


8.8 Homework problems

Analytic problems

1. Species competition and symbiosis for two species: How many different types of stability are possible for Eq. (8.1) when the functions f(x, y) and g(x, y) are each of first order (linear) in the variables? What are the competition/symbiosis conditions for each?

2. Symbiosis phase portrait: Draw the phase portrait for the symbiotic case described by Eq. (8.2).

3. Species competition and symbiosis for three species: Add a third species to Eq. (8.1) and assume only linear functions. Classify all the possible types of fixed point that occur, and give the competition/symbiosis conditions for each.

4. Oscillation period in predator–prey: Estimate the oscillation period as a function of amplitude for the Lotka–Volterra predator–prey model.

5. Extinction of the predator: What happens to the ecosystem if the extinction term in Eq. (8.9) is on the predator dynamics instead of the prey?

6. Stability: Is the following dynamic matrix stable? If so, what condition does it satisfy?

   −0.55  −0.11   0.02  −0.17   0.09
   −0.11  −0.78  −0.15  −0.04  −0.11
    0.02  −0.15  −0.45   0.13   0.12
   −0.17  −0.04   0.13  −1.17   0.03
    0.09  −0.11   0.12   0.03  −0.57

7. N-species: Assume a large but finite number N of species. Under what competition/symbiosis conditions can all species coexist in stable equilibrium at finite population size?

8. Virus: For the virus problem, what needs to be added to the immune-response equations (8.14) to completely eradicate the virus?

9. Virus: Can the values for x_n from Eq. (8.9) ever be negative when initial conditions are in the positive quadrant? Why? Note that these equations are not in the form ẋ = x f(x), which forces the axes to be nullclines and prevents negative values.

10. Replicator diagonal: What is the main difference in behavior in the replicator equation between zero diagonal and zero trace? Why?

11. N = 3 replicator: In the replicator model for three species, what conditions are needed for stable cycles?

12. Tripartite simplex: For the tripartite simplex examples with an antisymmetric payoff matrix with zero trace, what are the equations for the nullclines when there are five fixed points?

13. Hypercycle: What happens in the hypercycle model if the matrix elements on a sub- (super-)diagonal can have random sign?

14. Hypercycle: In the hypercycle model, can there be conditions (for all positive matrix elements) that allow some reactant concentrations to go to zero?

15. Quasispecies: In the quasispecies equation, show that a small mutation parameter favors the highest peak rather than the "strongest" peak (the peak with the largest population).

16. Quasispecies: Construct an analytical three-species model for the quasispecies equation in terms of f_a and Q_a^b. Find the fixed points from the quasispecies eigenvalue equation. Evaluate their stability. Enumerate the types of behavior that can occur in a three-species model.

17. Quasispecies stability: In the quasispecies equations, why does subtracting the mean fitness guarantee stability?

18. Quasispecies: In the quasispecies equation, if the initial population is all in a single subspecies, what is the characteristic time for the emergence of other subspecies? How is this affected by the choices of ε and λ?

19. Quasispecies: Under what conditions can the quasispecies equation have oscillatory solutions?

20. Quasispecies: Show, under appropriate conditions, that a greater population can be in a lower but broader fitness peak than in a narrow but higher fitness peak.

21. Replicator–mutator model: What happens within the population dynamics when the mutation rate gets too small?

22. Hawk and dove: Prove the stability conditions for all the fixed points of the hawk-and-dove model.

Numerical projects

23. Predator–prey: Play with the parameters of the predator–prey model. What other types of behavior occur?

24. Replicator: For replicator dynamics, classify the types of behavior for N = 4 and for N = 5. Can you estimate the probabilities for each type of behavior? What is the most common type of behavior? What type of behavior is very rare?

25. Hypercycle: Couple two hypercycles through one of the reagents that is shared by both. Do the hypercycles synchronize?

26. Quasispecies: Compare the final populations for a Hamming-distance mutation matrix versus a random mutation matrix. Does the character of the possible solutions depend on the choice of mutation matrix?

27. Quasispecies: For the Hamming-distance mutation matrix, at fixed ε, does the final population depend on the landscape contrast λ? Do the results scale as a function of the ratio ε/λ?

28. Quasispecies: For the quasispecies equation, assume there are 128 species connected by a network graph with adjacency matrix A_ij. Assume the mutation distance is given by distance on the graph. Explore the differences among small-world, scale-free, and ER graphs as functions of the mutation control parameter ε.

29. Quasispecies: Explore what happens if the mutation matrix in the quasispecies model is not a symmetric matrix (but is still a stochastic matrix). This model would have asymmetric mutation rates among species.

30. Quasispecies: What happens if the mutation matrix elements are all equal?

31. Quasispecies: Set up a dynamic competition between mutation and payoff. For instance, mutation can be larger at small distances, but payoff can be larger at long distances.

9 Neurodynamics and Neural Networks

9.1 Neuron structure and function 277
9.2 Neuron dynamics 280
9.3 Network nodes: artificial neurons 286
9.4 Neural network architectures 289
9.5 Hopfield neural network 295
9.6 Content-addressable (associative) memory 298
9.7 Summary 303
9.8 Bibliography 304
9.9 Homework problems 304


The brain is, for us, perhaps the most important dynamic network we can study. It is composed of nearly 100 billion neurons organized into functional units with high connectivity. The sophistication of the information-processing capabilities of the brain is unsurpassed, partly because of the immense dimensionality of the network. However, much simpler collections of far fewer neurons also exhibit surprisingly sophisticated processing behavior.

Introduction to Modern Dynamics. Second Edition. David D. Nolte, Oxford University Press (2019). © David D. Nolte. DOI: 10.1093/oso/9780198844624.001.0001

This chapter begins with the dynamic properties of a single neuron, which are already complex, displaying bursts, bifurcations, and limit cycles. It then looks at the simplest forms of neural networks and analyzes a recurrent network known as the Hopfield network. The overall behavior of the Hopfield network is understood in terms of fixed points with characteristic Lyapunov exponents distributed across multiple basins of attraction that represent fundamental memories as well as false memories.

9.1 Neuron structure and function

The single neuron is an information-processing unit: it receives multiple inputs, sums them, and compares them with a threshold value. If the inputs exceed the threshold, then the nerve sends a signal to other neurons. The inputs are received on the neuron dendrites (see Fig. 9.1) and can be either excitatory or inhibitory. Once the collective signal exceeds the threshold, the nerve body fires an electrical pulse at the axon hillock. The pulse propagates down the axon to the diverging axon terminals, which transmit to the dendrites of many other downstream neurons. The signal that propagates down the axon has a specified amplitude that is independent of the strength of the input signals, and it propagates without attenuation because it is re-amplified along its path at the many nodes of Ranvier between the junctions of the Schwann cells. The strength of a neural signal is encoded in the frequency of the neural pulses rather than in their amplitude.

An example of an action potential pulse is shown in Fig. 9.2, which represents the voltage at a node of Ranvier along an axon. The resting potential inside the neuron is at −65 mV. This negative value is a thermodynamic equilibrium value determined by the permeability of the neuron membrane to sodium and potassium ions. When the neuron potential is at the resting potential, the neuron is said to be polarized.


Figure 9.1 Neuron physiological structure. The neuron receives signals on its dendrites. If the sum of the signals exceeds a threshold, the axon hillock initiates a nerve pulse, called an action potential, that propagates down the axon to the axon terminals, which transmit to the dendrites of many other neurons.


Figure 9.2 Action potential spike of a neuron. The equilibrium resting value is typically −65 mV. When the potential rises above a threshold value, the neuron potential spikes and then recovers within a few milliseconds.


As the membrane voltage increases, it passes a threshold and sustains a rapid runaway effect as the entire region of the neuron depolarizes; the voltage overshoots zero potential to positive values, followed by a relaxation that can undershoot the resting potential. Eventually, the resting potential is re-established. The action potential spike is then transmitted down the axon to the next node of Ranvier, where the process repeats.

The runaway spike of the action potential is caused by positive feedback among ion channels in the neuron membrane. These ion channels are voltage-gated, meaning that they open depending on the voltage difference across the membrane. The physiological and biochemical origin of the resting potential and the action potential is illustrated in Fig. 9.3, which shows sodium and potassium ion channels in the cell membrane. In the resting state, a sodium–potassium pump maintains a steady-state polarization with an internal potential around −65 mV. The pump is actively driven by the energy of adenosine triphosphate (ATP) hydrolysis.

In addition to the sodium–potassium pump, there are separate voltage-gated ion channels for sodium and for potassium. At the resting potential, the voltage-gated channels are closed. However, as the potential increases (becomes less negative) as a result of an external stimulus, some of the voltage-gated sodium channels open, allowing sodium ions outside the membrane to flow inside. This influx of positive ions causes the internal potential to rise further. As the potential rises, more and more of the voltage-gated sodium channels open in a positive feedback that causes a runaway effect in which all of the sodium channels open, and the potential rises above 0 mV. In this depolarized state, the potassium voltage-gated ion channels open to allow potassium ions to flow out of the neuron, causing the potential to fall by reducing the number of positive ions inside the membrane. After the action potential has passed, the sodium–potassium pump re-establishes the steady-state condition, and the neuron is ready to generate another action potential.


Figure 9.3 Biochemical and physiological process of action potential generation. The Na–K pump maintains the resting potential prior to an action potential pulse (left). During the upswing of the action potential (middle), voltage-gated Na channels open. Recovery occurs through equalization of the internal ionic charge when the Na channels close and the voltage-gated K channels open (right) to return to steady state and be prepared for the next action potential.

The role of voltage-gated ion channels in the generation and propagation of action potentials was first uncovered by Hodgkin and Huxley (1952) in studies of the squid giant axon. They identified three major current contributions to depolarization: the sodium and potassium channels, as well as a third leakage current that depends mainly on Cl− ions. They further discovered that potassium channels outnumber sodium channels (accounting for the strong repolarization and hyperpolarization caused by the outflux of potassium) and that sodium channels have an inactivation gate that partially balances the activation gates (accounting for the need to pass a certain threshold to set off the positive-feedback runaway). Hodgkin and Huxley devised an equivalent circuit model for the neuron that included the effects of voltage-dependent permeabilities with activation and inactivation gates. The Hodgkin–Huxley model equation is

\[ C \dot{V} = I - g_K n^4 (V - E_K) - g_{Na} m^3 h (V - E_{Na}) - g_L (V - E_L) \tag{9.1} \]

where C is the effective capacitance, V is the action potential, I is a bias current, g_K, g_Na, and g_L are the conductances, E_K, E_Na, and E_L are the equilibrium potentials, n(V) is the voltage-dependent potassium activation variable, m(V) is the voltage-dependent sodium activation variable (there are three sodium

activation channels and four potassium channels, which determines the exponents), and h(V) is the voltage-dependent sodium inactivation variable. The model is completed by specifying three additional equations for ṅ, ṁ, and ḣ that are combinations of decaying or asymptotic time-dependent exponentials. The Hodgkin–Huxley model is a four-dimensional nonlinear flow that can display an astounding variety of behaviors, including spikes and bursts, limit cycles and chaos, homoclinic orbits and saddle bifurcations, and many more. The field of neural dynamics provides many excellent examples of nonlinear dynamics. In the following sections, some of the simpler variations on the model are explored.

9.2 Neuron dynamics

The dynamic behavior of a neuron can be simplified by dividing it into segments. The body of the neuron is a separate segment from the axon and dendrites. These, in turn, can be divided into chains of dynamic segments that interact to propagate action potentials. Each segment can be described as an autonomous nonlinear system.

9.2.1 The Fitzhugh–Nagumo model

One simplification of neural dynamics is the Fitzhugh–Nagumo model of the neuron. Under appropriate conditions, it is a simple third-order limit-cycle oscillator like the van der Pol oscillator. The Fitzhugh–Nagumo model has only two dynamical variables: the membrane potential V and the activation parameter n, which is proportional to the number of activated membrane channels. Values for the model that closely match experimental observations lead to the equations

9.2 Neuron dynamics The dynamic behavior of a neuron can be simplified by dividing it into segments. The body of the neuron is a separate segment from the axon and dendrites. These, in turn, can be divided into chains of dynamic segments that interact to propagate action potentials. Each segment can be described as an autonomous nonlinear system.

9.2.1 The Fitzhugh–Nagumo model One simplification of neural dynamics is the Fitzhugh–Nagumo model of the neuron. Under appropriate conditions, it is a simple third-order limit-cycle oscillator like the van der Pol oscillator. The Fitzhugh–Nagumo model has only two dynamical variables: the membrane potential V and the activation parameter n, which is proportional to the number of activated membrane channels. Values for the model that closely match experimental observations lead to the equations V˙ = (V + 60) (a − V /75 − 0.8) (V /75 − 0.2) − 25n + B n˙ = 0.03 (V /75 + 0.8) − 0.02n

1 The coefficients in the Fitzhugh– Nagumo equations have implicit dimensions to allow voltages to be expressed in millivolts. See E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (MIT Press, 2007).

(9.2)

where B is a bias current, and a is a control parameter that determines the size of the limit cycle.1 The two nullclines (V̇ = 0, ṅ = 0) and a few streamlines are shown in Fig. 9.4 for several values of the control parameter a. The control parameter plays the role of the integrated stimulus of the neuron, with larger negative values of a corresponding to a stronger stimulus. The model can show stable fixed-point behavior (Figs. 9.4(a) and (d)) or limit cycles (Figs. 9.4(b) and (c)), depending on the value of the control parameter. When the stimulus is weak (larger positive values of a), the system is in a stable fixed point (resting potential). As the stimulus increases (larger negative values of a), the system converts from the resting potential to a limit cycle (spiking). The transition to the limit cycle is seen graphically in Fig. 9.4 when the nullclines intersect at a fixed point in the region where the V-nullcline has positive slope, and a stability analysis shows that the fixed point is unstable.

Figure 9.4 Examples of the Fitzhugh–Nagumo model. The horizontal axis is the membrane potential V, and the vertical axis is the activation parameter n. The nullclines are plotted for n (straight line) and V (cubic curve) for I = 0. In (a) for a = +0.1, there is a stable fixed point (resting potential), while in (b) for a = −0.1 and (c) for a = −0.6, there are limit cycles representing neural spiking. In (d) for a = −0.67, the system converts back to a stable fixed point.

The Fitzhugh–Nagumo model in Eq. (9.2) captures the essential nonlinearity of Eq. (9.1) caused by the voltage-gated channels and allows a simple stability analysis, but it cannot represent the detailed dynamics of neural action potentials.

Example 9.1 Simple Fitzhugh–Nagumo model

A simplified dimensionless version of the Fitzhugh–Nagumo model is

$$\dot{V} = V\left(1 - V^2\right) - n$$
$$\dot{n} = V - a \tag{9.3}$$


This is a van der Pol oscillator with a velocity-limited gain instead of the amplitude-limited gain discussed in Chapter 4. The V-nullcline is a cubic curve, and the n-nullcline is a vertical line at V = a. The Jacobian, evaluated at the fixed point, is

$$J = \begin{pmatrix} 1 - 3a^2 & -1 \\ 1 & 0 \end{pmatrix} \tag{9.4}$$

with trace $\tau = 1 - 3a^2$, determinant $\Delta = 1$, and Lyapunov exponent

$$\lambda = \frac{1 - 3a^2}{2} \pm \frac{i}{2}\sqrt{4 - \left(1 - 3a^2\right)^2} \tag{9.5}$$

making the fixed point an unstable spiral for $|a| < 1/\sqrt{3}$. The trajectories near the fixed point spiral outward and asymptotically approach the limit cycle, as shown in Fig. 9.5. It is possible to estimate a timescale for the limit cycle of this nearly harmonic system by noting that the limit cycle is restricted to amplitudes near unity, and the speed along the n-axis is also near unity. Therefore, the period of the limit cycle is approximately 8 seconds (4 sides times a side length of 2, divided by a speed of 1), which is close to the actual period of 7.2 seconds.

Figure 9.5 Simplified model of the Fitzhugh–Nagumo neuron with a = 0.2 (the V-nullcline, the n-nullcline at V = a, and the limit cycle are labeled).
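A minimal sketch that integrates the dimensionless model of Eq. (9.3) and estimates the limit-cycle period numerically; the initial condition and integration span are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fn(t, s, a=0.2):
    V, n = s
    return [V*(1 - V**2) - n, V - a]   # Eq. (9.3)

# integrate long enough to settle onto the limit cycle
sol = solve_ivp(fn, (0, 200), [0.5, 0.0], max_step=0.01, dense_output=True)
t = np.linspace(150, 200, 50001)
V = sol.sol(t)[0]

# estimate the period from successive upward zero crossings of V
crossings = t[1:][(V[:-1] < 0) & (V[1:] >= 0)]
print("period ≈", np.diff(crossings).mean())   # close to the quoted 7.2
```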


9.2.2 The NaK model

To capture more detailed behavior of neurons, a model called the NaK model includes voltage-dependent variables, such as relaxation times and excitation parameters. This model is more closely linked to the physical processes of ion currents through neuron membranes and the dependence of these currents on membrane potentials. The dynamics can still be described in a two-dimensional phase plane, but the nullclines take on forms that lead to interesting behavior such as bifurcations and homoclinic orbits. The Hodgkin–Huxley model of voltage-gated currents across the membrane takes the form

$$I_j = g_j\, h_j(V)\left(V - E_j\right) \tag{9.6}$$

where $g_j$ is a channel conductance of the jth ion, $E_j$ is the equilibrium potential, and $h_j(V)$ is an activation parameter related to the fraction of channels that are open. The activation parameter is a function of membrane potential, which makes the voltage-gated current nonlinear. In the NaK model, there are four assumed currents: a bias current I, a leakage (L) current, a sodium (Na) current, and a potassium (K) current. The neuron is assumed to have capacitance C and membrane potential V that are driven by the currents as

$$C\dot{V} = I - g_L (V - E_L) - \sum_{j=1}^{2} g_j\, h_j(V)\left(V - E_j\right) = I - g_L (V - E_L) - g_{Na}\, m(V)(V - E_{Na}) - g_K\, n(V)(V - E_K) \tag{9.7}$$

where there are two activation parameters, m(V) for the sodium channel and n(V) for the potassium channel. The NaK model assumes that the sodium current is instantaneous, without any lag time. The NaK model is then a two-dimensional flow in V and n through

$$C\dot{V} = I - g_L (V - E_L) - g_{Na}\, m_\infty(V)(V - E_{Na}) - g_K\, n(V)(V - E_K)$$
$$\dot{n} = \frac{n_\infty(V) - n(V)}{\tau(V)} \tag{9.8}$$

where

$$m_\infty(V) = \frac{1}{1 + \exp\left[(V_m - V)/k_m\right]}$$
$$n_\infty(V) = \frac{1}{1 + \exp\left[(V_n - V)/k_n\right]}$$
$$\tau(V) = \tau_0 + \tau \exp\left[-\left(V_{max} - V\right)^2/2\Delta V^2\right] \tag{9.9}$$
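A sketch of the NaK flow of Eqs. (9.8)–(9.9). The text does not list parameter values here, so the numbers below are illustrative assumptions in the spirit of Izhikevich (2007), including an assumed Gaussian width for τ(V); treat this as a template rather than the book's exact model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameters, not taken from this text
C, g_L, g_Na, g_K = 1.0, 8.0, 20.0, 10.0
E_L, E_Na, E_K = -80.0, 60.0, -90.0
V_m, k_m, V_n, k_n = -20.0, 15.0, -25.0, 5.0
tau0, tau1, V_max, dV = 0.15, 1.0, -40.0, 30.0   # assumed tau(V) shape

m_inf = lambda V: 1/(1 + np.exp((V_m - V)/k_m))     # Eq. (9.9)
n_inf = lambda V: 1/(1 + np.exp((V_n - V)/k_n))
tau   = lambda V: tau0 + tau1*np.exp(-(V_max - V)**2/(2*dV**2))

def nak(t, s, I=4.0):
    V, n = s
    dV_dt = (I - g_L*(V - E_L) - g_Na*m_inf(V)*(V - E_Na)
               - g_K*n*(V - E_K))/C                 # Eq. (9.8)
    dn_dt = (n_inf(V) - n)/tau(V)
    return [dV_dt, dn_dt]

sol = solve_ivp(nak, (0, 100), [-65.0, 0.0], max_step=0.01)
print("V range (mV):", sol.y[0].min(), sol.y[0].max())
```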

Figure 9.6 Bistability between a node and a limit cycle in the NaK model (τ = 0.152, I = 3; membrane potential V on the horizontal axis, activation parameter n on the vertical axis, with the n-nullcline, V-nullcline, separatrix, node, saddle, and limit cycle labeled). A current spike can take the system from the resting potential (the node) across the separatrix to the spiking state (the limit cycle).

The NaK model is a two-dimensional phase-plane model that can show complex dynamics, depending on the bias current I (which is integrated by the neuron dendrite over the inputs from many other contacting neurons). With sufficient current input, the dynamics of the target neuron can change dramatically, projecting its own stimulus downstream to other neurons in a vast network. Despite the apparent simplicity of the NaK model, it shows rich dynamics, in particular bistability and bifurcations. It is therefore fruitful to explore its parameter space to understand some of the different types of behavior it is capable of.

Bistability is a particularly important property for neurodynamics. It allows a “prepared” neuron that is not currently spiking to receive a stimulus that causes it to spike continuously, even after the stimulus has been removed. The NaK model can display bistability, which is understood by considering Fig. 9.6. For the parameters in the figure, there are two fixed points (a saddle and a stable node), as well as a limit cycle. If the neuron is in the resting state, it is at the stable node. However, a transient pulse that is applied and then removed quickly can cause the system to cross the separatrix into the region where the dynamics relax onto the limit cycle, and the system persists in a continuous oscillation, shown in Fig. 9.7.

The NaK model can display bifurcations as well as bistability. The phase diagram of the NaK model is shown in Fig. 9.8 for the conditions where the stable node and the saddle point collide (merge) into a single point. The limit cycle becomes a homoclinic orbit with infinite period. This type of bifurcation is called a saddle-node bifurcation (also known as a fold or a tangent bifurcation). The membrane potential for a trajectory near the homoclinic orbit is shown in Fig. 9.9.
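Extending the nak() sketch above, the bistable switching can be probed by adding a brief current pulse to the bias; the pulse timing, amplitudes, and bias value are assumed, so whether a given run actually lands in the bistable regime depends on those choices:

```python
def nak_pulsed(t, s, I0=3.0, Ip=20.0, t_on=20.0, t_off=21.0):
    I = I0 + (Ip if t_on <= t < t_off else 0.0)   # transient stimulus pulse
    return nak(t, s, I)

sol = solve_ivp(nak_pulsed, (0, 200), [-62.0, 0.0], max_step=0.01)
late = sol.y[0][sol.t > 100]
# large late-time swing => the neuron kept spiking after the pulse was removed
print("still spiking after the pulse?", late.max() - late.min() > 20)
```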


Figure 9.7 Streamlines for the NaK model, showing bistability (membrane potential V on the horizontal axis, activation parameter n on the vertical axis).

Figure 9.8 A homoclinic orbit in the NaK model connected to the saddle point (τ = 1, I = 4.5; the n-nullcline, V-nullcline, and homoclinic limit cycle are labeled).

Figure 9.9 Membrane potential as a function of time (in ms) for an orbit near the homoclinic orbit.

The rapid traversal of the orbit is followed by a very slow approach to the saddle, giving the membrane potential a characteristic spiking appearance.

9.3 Network nodes: artificial neurons

2 Subscripts denote row vectors, the superscript of the neural weight matrix is the row index, and the subscript is the column index. The choice to use row vectors in this chapter is arbitrary, and all of the equations could be transposed to operate on column vectors. 3 Many equations in this chapter have parameters with repeated indices, but the Einstein summation convention is not implied. All summations are written out explicitly.

Biological neurons provide excellent examples of the complexities of neural behavior, but they are too complex as models for artificial neurons in artificial neural networks. Therefore, it is helpful to construct idealized neuron behavior, either computationally or in hardware implementations. A simple model of an artificial neuron is shown in Fig. 9.10. It has N inputs, attached with neural synaptic weights $w_k^a$ from the ath neuron to the summation junction of the kth neuron. The output voltage $\nu_k$ of the summation junction is

$$\nu_k = R_k \sum_{a=1}^{N} y_a w_k^a - b_k \tag{9.10}$$

where the $y_a$ are the current input signals,2 $b_k$ is the neuron bias voltage (the minus sign means that the bias sets the threshold), and $R_k$ is a characteristic resistance.3 This value is passed through a nonlinear activation function $S(\nu_k)$ to yield the output voltage value $z_k$:

Figure 9.10 Model of an artificial neuron. There are N inputs, which are connected, with weights $w_k^m$, to the summing junction, where they are summed with a bias $b_k$ to yield $\nu_k$, which is passed through a saturating nonlinear function $S(\nu_k)$ to give the neuron output.

$$z_k = S(\nu_k) \tag{9.11}$$

The saturating function is the nonlinear element that is essential for the neuron to change its state. It is common to use a sigmoidal function that has two limiting values for very small and very large arguments. Examples of the saturating nonlinear activation function are given in Table 9.1. These are antisymmetric functions that asymptote to ±1, and the parameter g is called the gain. The sigmoidal functions are the most common functions used in artificial neurons. These are smooth functions that vary between the two saturation limits and are differentiable, which is important for use in algorithms that select weighting values $w_k^m$ for the network. The two sigmoidal functions are plotted in Fig. 9.11, both with gain g = 1. The arctan function has a slower asymptote.

The neurodynamics of a network are sometimes modeled by adopting a physical representation for a node that has the essential features of a neuron. This model neuron is shown in Fig. 9.12 with capacitors and resistors, a summation junction, and the nonlinear activation function.4 The sum of the currents at the input node, following Kirchhoff's rule, is

$$C_k \frac{d\nu_k}{dt} = -\frac{\nu_k(t)}{R_k} + \sum_{a=1}^{N} y_a(t)\, w_k^a + I_k \tag{9.12}$$

and the output voltage of the neuron is

$$z_k(t) = S(\nu_k(t)) \tag{9.13}$$

4 See S. Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall, 1999).

Figure 9.11 Two common choices for antisymmetric sigmoid functions with a gain g = 1: the logistic function (also known as the Fermi function or tanh function) and the inverse tangent (arctan) function.

Table 9.1 Neural response functions

Heaviside function:
$$S(\nu_k) = \begin{cases} 1 & \nu_k \geq 0 \\ -1 & \nu_k < 0 \end{cases}$$

Piecewise-linear:
$$S(\nu_k) = \begin{cases} 1 & g\nu_k \geq 1 \\ g\nu_k & -1 < g\nu_k < 1 \\ -1 & g\nu_k \leq -1 \end{cases}$$

Sigmoid functions:

Logistic (Fermi) function:
$$S(\nu_k) = 1 - \frac{2}{1 + \exp(2g\nu_k)} = \tanh(g\nu_k)$$

Inverse tangent function:
$$S(\nu_k) = \frac{2}{\pi}\arctan\left(\frac{\pi}{2}\, g\nu_k\right)$$

The input signals $y_k$ are voltages, and the resistors provide the matrix elements $w_k^j$. The RC element gives a smooth response through the RC time constant.
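The summing-junction-plus-sigmoid structure of Eqs. (9.10)–(9.11) reduces to a few lines of code; this sketch absorbs the characteristic resistance $R_k$ into the weights, which is an implementation convenience rather than the book's convention:

```python
import numpy as np

def neuron(y, w, b, g=1.0):
    """Artificial neuron of Eqs. (9.10)-(9.11): weighted sum, bias, sigmoid.
    y: input signals, w: synaptic weights (R_k absorbed), b: threshold bias."""
    nu = np.dot(y, w) - b          # summing junction, Eq. (9.10)
    return np.tanh(g*nu)           # logistic (tanh) activation from Table 9.1

print(neuron(np.array([0.5, -1.0, 2.0]), np.array([1.0, 0.2, -0.4]), b=0.1))
```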

Figure 9.12 Physical (hardware) model of an artificial neuron with resistors, capacitors, a summing junction, and a nonlinear transfer function.

9.4 Neural network architectures

The variety of neural network architectures is vast because there are so many possible ways that neurons can be wired together. There are two general classifications that pertain to the way that the neural network learns, which is either supervised or unsupervised. In supervised learning, the networks are trained by presenting selected inputs with the "correct" outputs. In unsupervised learning, the networks are allowed to self-organize. The human mind learns primarily through unsupervised learning as we interact with our environment as babies and our brains wire themselves in response to those interactions. On the other hand, there are many machine tasks in which specific inputs are expected to give specific outputs, and supervised learning may be most appropriate.

Within the class of neural networks with supervised learning, there are generally three classifications of neural network architectures. These are (1) single-layer feedforward networks, (2) multilayer feedforward networks without loops, and (3) recurrent networks with feedback loops. There can also be mixtures, such as multilayer networks that may have loops. However, the three main types have straightforward training approaches, which will be described in this section.

Figure 9.13 Single-layer feedforward network (single-layer perceptron). Inputs $y_j$ are directed through weights $w_k^j$ to a single layer of processing neurons with outputs $z_k$, with the weights chosen to accomplish a supervised task.

9.4.1 Single-layer feedforward (perceptron)

Single-layer feedforward neural networks are the simplest architectures, often referred to as single-layer perceptrons. They consist of a single layer of neurons that map multiple inputs onto multiple outputs. They often are used for low-level pattern detection and feature recognition applications, much like the way the retina preprocesses information into specific features before transmission to the visual cortex of the brain. A schematic of a single-layer perceptron is shown in Fig. 9.13. Input neurons are attached to the inputs of a single neural processing layer where the signals are summed according to the weights $w_k^j$ and are output through the nonlinear transfer function. The output of a neuron is

$$z_k = S\!\left(\sum_{j=1}^{N} y_j w_k^j - b_k\right) \tag{9.14}$$

where the $y_j$ are the inputs, $w_k^j$ is the weight for the jth input to the kth output neuron, and $b_k$ is the threshold of the kth output neuron. The transfer function $S(\nu_k)$ is one of the sigmoidal functions of Table 9.1. Single-layer perceptrons execute simple neural processing, such as edge detection, which is one of the functions that neurons perform in the neuronal layer of the retina of the eye. The retina is composed of three layers of neurons: a photoreceptor layer that acts as the inputs in Fig. 9.13, a layer of interconnected wiring that acts as the weights $w_k^j$, and a layer of ganglion cells that collect the signals and send them up the optic nerve to the visual cortex of the brain. One of the simplest neural architectures in the retina is the center-surround architecture.

Figure 9.14 Center-surround architecture of a ganglion cell in the retina (three inputs $y_j$ with weights −w, 2w, −w and an output bias b = 0.5). With a simple weighting value w = 1, this acts as an edge-detection neural circuit.
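As a quick check of the edge-detecting behavior described in this caption and explained in the paragraph that follows, the center-surround weights [−w, 2w, −w] can be applied to a few test patches; the patches are illustrative, and the small output bias shown in the figure is ignored for simplicity:

```python
import numpy as np

w = 1.0
weights = np.array([-w, 2*w, -w])      # surround, center, surround

for patch in ([1, 1, 1], [-1, -1, -1], [-1, 1, 1], [-1, 1, -1]):
    nu = np.dot(weights, patch)        # uniform patches sum to exactly zero
    print(patch, "->", nu)             # nonzero only at edges or curvature
```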

A ganglion cell receives signals from a central photoreceptor that has a positive weight, and signals from surrounding photoreceptors that have negative weight, so that when the excitation is uniform (either bright or dark), the signals cancel out and the ganglion cell does not fire. The equivalent neural diagram is shown in Fig. 9.14. Three input neurons feed forward to the output neuron (ganglion cell) with weights [−w, 2w, −w]. The signals for a uniform excitation cancel out. This sensor configuration responds most strongly to directional curvature in the input field, and the weights perform a Laplacian differential operation.

Simple perceptrons, like the center-surround design, can be "wired" easily to perform simple tasks. However, when there are more inputs and more output neurons, even simple perceptrons need to be "programmed," which means the weights $w_k^j$ need to be assigned to accomplish the desired task. There is no closed form for the selection of the weights for a given training set of inputs and desired outputs. This is because the training sets are usually not a one-to-one mapping. In other words, the problem is ill-posed. But this is precisely the situation in which neural networks excel. Neural networks work best when the problems have roughly correct solutions but ambiguities remain. Our own neural networks, our brains, are especially good at finding good solutions to ambiguous problems. Likewise, the simple single-layer perceptron also can perform well under ambiguous conditions, but only after the best compromise is made during the assignment of the weights.

To find the desired weights and thresholds, a training set is used, and an iterative procedure is established that allows the weights and thresholds to relax to good compromise values. There is a set of selected inputs $Y_j^\mu$ for which specific outputs $Z_k^\mu$ are known. This gives the mapping

$$Y_j^\mu \to Z_k^\mu \tag{9.15}$$

or, in other words, for a perfectly trained perceptron, the weights $w_k^j$ and thresholds $b_k$ would be chosen such that

$$Z_k^\mu = S\!\left(\sum_{j=1}^{N} Y_j^\mu w_k^j - b_k\right) \tag{9.16}$$

However, at the initiation of the perceptron, the weights and thresholds are set randomly to initial values. When the perceptron is presented with an input from the training set, the actual output $z_k^\mu$ differs from the desired training output $Z_k^\mu$, and a deviation signal is constructed as

$$D = \frac{1}{2}\sum_{\mu,k}\left(Z_k^\mu - z_k^\mu\right)^2 = \frac{1}{2}\sum_{\mu,k}\left[Z_k^\mu - S\!\left(\sum_{j=1}^{N} Y_j^\mu w_k^j - b_k\right)\right]^2 \tag{9.17}$$

The deviation changes as the weights and thresholds are changed. This is expressed as

$$\frac{\partial D}{\partial w_k^j} = -\sum_{\mu}\left(Z_k^\mu - S(\nu_k^\mu)\right)S'(\nu_k^\mu)\,\frac{\partial \nu_k^\mu}{\partial w_k^j} = -\sum_{\mu}\Delta_k^\mu Y_j^\mu$$
$$\frac{\partial D}{\partial b_k} = -\sum_{\mu}\left(Z_k^\mu - S(\nu_k^\mu)\right)S'(\nu_k^\mu)\,\frac{\partial \nu_k^\mu}{\partial b_k} = \sum_{\mu}\Delta_k^\mu \tag{9.18}$$

where the delta is

$$\Delta_k^\mu = \left(Z_k^\mu - S(\nu_k^\mu)\right) S'(\nu_k^\mu) \tag{9.19}$$

and the prime denotes the derivative of the transfer function, which is why this approach requires the transfer function to be smoothly differentiable. In the iterative adjustments of the weights and thresholds, the deviation is decreased by adding small adjustments to the values. These adjustments are

$$\delta w_k^j = \varepsilon \sum_{\mu} \Delta_k^\mu Y_j^\mu \qquad \delta b_k = -\varepsilon \sum_{\mu} \Delta_k^\mu \tag{9.20}$$

where ε is a small value that prevents the adjustments from overshooting as the process is iterated. This is known as the Delta Rule for perceptron training. In practical implementations of the Delta Rule, the weights are adjusted sequentially as each training example is presented, rather than the sum over μ being performed. A complete presentation of the training set is known as an epoch, and many epochs are used to allow the values to converge on their compromise values. Within each epoch, the sequence of training patterns is chosen randomly

to avoid oscillating limit cycles. Training is faster when the sigmoidal transfer function S is chosen to be antisymmetric, as in Table 9.1. Single-layer perceptrons have a limited space of valid problems that they can solve. For instance, in a famous example, a single-layer perceptron cannot solve the simple XOR Boolean function.5 To extend the capabilities of feedforward neural networks, the next step is to add a layer of "hidden" neurons between the input and the output.
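A minimal sketch of single-layer Delta Rule training per Eqs. (9.19)–(9.20), applied to a hypothetical toy classification task; the data set, gain, learning rate ε, and epoch count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def S(nu, g=1.0):            # tanh sigmoid from Table 9.1
    return np.tanh(g*nu)

def dS(nu, g=1.0):           # its derivative, needed by Eq. (9.19)
    return g*(1 - np.tanh(g*nu)**2)

# toy training set (assumed): 2 inputs -> 1 output, target = sign of the sum
Y = rng.uniform(-1, 1, size=(20, 2))
Z = np.sign(Y.sum(axis=1))

w = rng.normal(0, 0.1, size=2)
b, eps = 0.0, 0.1
for epoch in range(200):
    for mu in rng.permutation(len(Y)):     # random order within each epoch
        nu = Y[mu] @ w - b
        Delta = (Z[mu] - S(nu))*dS(nu)     # Eq. (9.19)
        w += eps*Delta*Y[mu]               # Eq. (9.20), per-example update
        b -= eps*Delta
print("training accuracy:", np.mean(np.sign(Y @ w - b) == Z))
```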

9.4.2 Multilayer feedforward

Multilayer feedforward networks can display complicated behavior and can extract global properties of patterns. There are still some limitations on the types of problems they can solve, but the range for a two-layer perceptron is already much broader than for one with just a single layer. An example of a two-layer perceptron with a single "hidden" neuron layer is shown in Fig. 9.15. The two-layer perceptron that performs the 2-bit Boolean function XOR is shown in Fig. 9.16. There are two input values, two hidden neurons, and a single output neuron. The synaptic weight matrix between the inputs and the hidden layer is symmetric and has a weight value w, and two thresholds in the hidden layer are set to w for the upper node and −w for the lower node. The synaptic weight matrix between the hidden layer and the output is [−2w, w], and the threshold on the output neuron is set to 2w.

Figure 9.15 Two-layer feedforward network with a single layer of hidden neurons.

5 Marvin Minsky of MIT (with Seymour Papert, 1969) noted that perceptrons cannot implement the XOR gate, and proved that locally connected perceptrons cannot solve certain classes of problems. Universal computability requires neural networks with either multiple layers of locally connected neurons or networks with nonlocal connections (see the recurrent Hopfield model in Section 9.5).

Figure 9.16 Synaptic weights and thresholds for the XOR Boolean function. The truth table is in the limit of infinite gain. A "zero" is encoded as a −1, and the bias sets a threshold. (Input-to-hidden weights w with hidden thresholds b = w and b = −w; hidden-to-output weights −2w and w with output threshold b = 2w.)

Truth table:

In 1  In 2  |  Out
  0     0   |   0
  0     1   |   1
  1     0   |   1
  1     1   |   0
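The XOR construction can be verified directly. This sketch uses the weights and thresholds quoted in the text, with the moderate test value w = 5 and a tanh activation standing in for the infinite-gain limit:

```python
import numpy as np

w = 5.0                                   # moderate test value from the text
W_h = np.array([[w, w], [w, w]])          # inputs -> hidden (symmetric)
b_h = np.array([w, -w])                   # hidden thresholds
W_o = np.array([-2*w, w])                 # hidden -> output
b_o = 2*w                                 # output threshold

def xor_net(x1, x2):                      # "zero" encoded as -1
    h = np.tanh(W_h @ np.array([x1, x2]) - b_h)
    return np.tanh(W_o @ h - b_o)

for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print((x1, x2), "->", round(float(xor_net(x1, x2))))   # XOR of the inputs
```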

The weight w can be set at a moderate value (e.g., w = 5) for testing, and then can be set to a large value to ensure that the Boolean output is ±1.

The key to training a network with hidden neurons is to extend the Delta Rule to two layers. This is accomplished through a method known as error back-propagation. As in the single-layer perceptron, the two-layer perceptron is trained with a training set composed of examples indexed by μ,

$$X_j^\mu \to y_j^\mu \to Z_k^\mu \tag{9.21}$$

where $X_i$ are the training states, $y_j$ are the hidden states, and $Z_k$ are the desired output states. For the three-layer network, the indices i, j, and k correspond to input, hidden, and output layers, respectively. There are now two weight matrices and two threshold vectors that need to be chosen to solve the problem. The weight matrix for the output layer is adjusted in the same way as in the Delta Rule for the single-layer perceptron,

$$\delta w_k^j = \varepsilon \sum_{\mu} \Delta_k^\mu y_j^\mu \qquad \delta b_k = -\varepsilon \sum_{\mu} \Delta_k^\mu \tag{9.22}$$

where

$$\Delta_k^\mu = \left(Z_k^\mu - S(\nu_k^\mu)\right) S'(\nu_k^\mu) \tag{9.23}$$

as before. The adjustments of the weight matrix of the hidden layer are obtained in a similar way as for the single-layer network, giving

$$\delta w_i^j = -\varepsilon \frac{\partial D}{\partial w_i^j} = \varepsilon \sum_{\mu,k}\left(Z_k^\mu - S(\nu_k^\mu)\right)S'(\nu_k^\mu)\,\frac{\partial \nu_k^\mu}{\partial y_j}\frac{\partial y_j}{\partial w_i^j} = \varepsilon \sum_{\mu,k}\Delta_k^\mu w_k^j S'(\nu_j^\mu)\, x_i^\mu = \varepsilon \sum_{\mu}\Delta_j^\mu x_i^\mu \tag{9.24}$$

$$\delta b_j = -\varepsilon \frac{\partial D}{\partial b_j} = \varepsilon \sum_{\mu,k}\left(Z_k^\mu - S(\nu_k^\mu)\right)S'(\nu_k^\mu)\,\frac{\partial \nu_k^\mu}{\partial y_j}\frac{\partial y_j}{\partial b_j} = -\varepsilon \sum_{\mu}\Delta_j^\mu \tag{9.25}$$

where

$$\Delta_j^\mu = \left(\sum_k \Delta_k^\mu w_k^j\right) S'(\nu_j^\mu) \tag{9.26}$$

The Delta Rule for the hidden layer has the same form as for the output layer, except that the deviation has been propagated back from the output layer. This is why the Delta Rule for the two-layer perceptron is called error back-propagation. The two-layer perceptron can solve problems that are not linearly separable,6 such as the XOR, as well as multibit Boolean functions like AND. But there are still simple examples of problems that cannot be solved. For instance, a simple problem might be identifying multibit patterns that are balanced, e.g., [0 1 0 1] and [1 1 0 0] have equal numbers of 1's and 0's, while [1 1 1 0] does not. This problem cannot be solved well with a single hidden layer. However, adding a second hidden layer can extend the range of valid problems even further. The art of selecting feedforward architectures that are best suited to solving certain types of problems is a major field that is beyond the scope of this chapter. Nonetheless, there is one more type of neural network architecture that we will study that is well suited to problems associated with content-addressable memory. These are recurrent networks.
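A compact sketch of two-layer training by error back-propagation, following the structure of Eqs. (9.22)–(9.26) with XOR as the training set; the initialization, learning rate, and epoch count are assumptions, and training can occasionally stall in a local minimum:

```python
import numpy as np

rng = np.random.default_rng(1)
S  = np.tanh
dS = lambda nu: 1 - np.tanh(nu)**2

# XOR training set with "zero" encoded as -1
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
Z = np.array([-1, 1, 1, -1], float)

Wh = rng.normal(0, 0.5, (2, 2)); bh = np.zeros(2)    # input -> hidden
Wo = rng.normal(0, 0.5, 2);      bo = 0.0            # hidden -> output
eps = 0.2
for epoch in range(2000):
    for mu in rng.permutation(4):
        nu_h = X[mu] @ Wh - bh; y = S(nu_h)          # forward pass
        nu_o = y @ Wo - bo;     z = S(nu_o)
        Dk = (Z[mu] - z)*dS(nu_o)                    # Eq. (9.23)
        Dj = Dk*Wo*dS(nu_h)                          # Eq. (9.26), back-propagated
        Wo += eps*Dk*y;  bo -= eps*Dk                # Eq. (9.22)
        Wh += eps*np.outer(X[mu], Dj); bh -= eps*Dj  # Eqs. (9.24)-(9.25)
print(np.sign(S(S(X @ Wh - bh) @ Wo - bo)))          # should match Z
```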

9.5 Hopfield neural network

The third general class of neural network architecture is composed of feedback networks. Networks that use feedback have loops and hence are recurrent. One example is shown in Fig. 9.17. The network typically evolves from an initial state (input) to an attractor (fundamental memory). The recalled memory is in some sense the "closest" memory associated with the input. For this reason, these networks work on the principle of associative memory. Partial or faulty inputs recall the full fundamental memory.

6 In linearly separable problems, points in the classification space can be separated by lines in 2D, planes in 3D, or hyperplanes in higher dimensions.

Figure 9.17 Hopfield recurrent network model. To train the network, the weights $w_{ij}$ are assigned according to the fundamental memories. To operate the network, the neuron states are initialized to an input pattern, and then the network is iterated (with outputs fed back to inputs through asynchronous updates) until it settles into the steady state associated with the "recalled" pattern.

There are a wide variety of feedback architectures, but one particularly powerful architecture is also very simple, known as the Hopfield network. The Hopfield network is a recurrent network with no hidden layers and no self-feedback. In this specific model, the synaptic weights are symmetric, and the neuron states are updated asynchronously to prevent oscillation. The outputs are fed back into the input neurons, with the delay units ensuring "causal" operation consisting of successive distinct iterations to update each neuron state. The network is "trained" by assigning the $w_k^j$ feedback weights according to a set of M fundamental memories. In the Hopfield network, the training is performed once, rather than iteratively as in the Delta Rule of the feedforward networks. The network is operated by assigning initial states of the neurons (a possibly corrupted input pattern), and then iterating the network until the neuron states settle down into steady states (the pattern that is "recalled" from memory). The iterative stage for the Hopfield network allows the system dynamics to relax onto an attractive fixed point (memory). For convenience, the nonlinear activation function in the following examples is set to be the tanh function


$$x = S_a(\nu) = \tanh\left(\frac{g_a \nu}{2}\right) \tag{9.27}$$

with an inverse that is also a smooth function

$$\nu = S_a^{-1}(x) = -\frac{1}{g_a}\ln\left(\frac{1-x}{1+x}\right) \tag{9.28}$$

A key question about a dynamical network is whether it will converge on a stable fixed point. This is partially answered by Lyapunov's theorem, which sets the conditions required for all Lyapunov exponents to be negative:

For a state vector $x^a(t)$ and an equilibrium state $\bar{x}$, the equilibrium state $\bar{x}$ is stable if, in a small neighborhood of $\bar{x}$, there exists a smooth positive-definite function $V(x^a(t))$ such that $dV(x^a(t))/dt \leq 0$ for all $x^a$ in the neighborhood.

This theorem is general for any smooth positive-definite function V, no matter what detailed form it takes. For the Hopfield model, adopting the physical representation of resistors, capacitors, and currents (see Fig. 9.12), an energy function serves in the role of the positive-definite function. The energy function is

$$E = -\frac{1}{2}\sum_{a=1}^{N}\sum_{b=1}^{N} w_a^b\, x^a x_b + \sum_{b=1}^{N}\frac{1}{R_b}\int_0^{x_b} S_b^{-1}(x)\,dx - \sum_{b=1}^{N} I_b x_b \tag{9.29}$$

The energy describes an energy landscape with numerous local minima, where each minimum corresponds to a fundamental memory of the network.7 Differentiating the energy E with respect to time gives

$$\frac{dE}{dt} = -\sum_{b=1}^{N}\left(\sum_{a=1}^{N} w_a^b\, x^a - \frac{\nu_b}{R_b} + I_b\right)\frac{dx_b}{dt} \tag{9.30}$$

The quantity inside the parentheses is $C_b\, d\nu_b/dt$. The energy differential simplifies to

$$\frac{dE}{dt} = -\sum_{b=1}^{N} C_b \left(\frac{d\nu_b}{dt}\right)\left(\frac{dx_b}{dt}\right) \tag{9.31}$$

Using the expression for the inverse nonlinear activation function gives

$$\frac{dE}{dt} = -\sum_{b=1}^{N} C_b \left(\frac{dx_b}{dt}\right)\frac{d}{dt} S_b^{-1}(x_b) = -\sum_{b=1}^{N} C_b \left(\frac{dx_b}{dt}\right)^2 \frac{d}{dx_b} S_b^{-1}(x_b) \tag{9.32}$$

7 This energy function is analogous to the spin Hamiltonian of quantum magnets in spin-glass states when the components of the matrix $w_{ij}$ have both signs and compete.

The right-hand side of this expression is strictly negative when the response function $S_b(\nu)$ is monotonically increasing (sigmoid function) and when it has a smooth inverse. Therefore,

$$\frac{dE}{dt} \leq 0 \tag{9.33}$$

the energy is bounded, and the attracting fixed point of the dynamical system is stable for this physical neuron model. The energy function of a Hopfield network is a continuously decreasing function of time. The system executes a trajectory in state space that approaches the fixed point at the energy minimum. This fixed point is a recalled memory.

9.6 Content-addressable (associative) memory

A content-addressable memory is one where a partial or a corrupted version of a pattern recalls the full ideal pattern from memory. To make the state description simpler, the Hopfield model is considered in the high-gain limit that leads to binary discrete output states of each neuron. Under high gain, the control parameter $g_i \to \infty$ in Eq. (9.29), and the simple energy expression is

$$E = -\frac{1}{2}\sum_{a=1}^{N}\sum_{b=1}^{N} w_a^b\, x^a x_b \qquad \text{for } x_b = \pm 1 \tag{9.34}$$

The state of the network is

$$\vec{x} = [x_1, x_2, \ldots, x_N] \qquad \text{for } x_b = \pm 1 \tag{9.35}$$

This state induces a local field

$$\nu_a = \sum_{b=1}^{N} w_a^b\, x_b - b_a \tag{9.36}$$

where, for large gain (step function),

$$x_a = \mathrm{sgn}\,[\nu_a] \tag{9.37}$$

and where $b_a$ is a local threshold, possibly different for each neuron. The biases can be zero for each neuron, or they can be chosen to improve the performance of the recall. In the following examples, the bias is set to zero. There are two phases in the operation of a content-addressable memory: (1) storage and (2) retrieval. The storage phase selects the ideal patterns that act

as fundamental memories and uses these to define the appropriate elements of $w_a^b$. The retrieval phase presents a partial or corrupted version of the pattern to the network as the network iterates until it converges on an ideal pattern. If the corruption is not too severe, the resulting ideal pattern is usually the correct one.

9.6.1 Storage phase

In the storage phase, M fundamental memories are used to define the weight matrix. For instance, consider M fundamental memories $\vec{\xi}_\mu$ of length N with elements

$$\xi_{\mu,a} = a\text{th element of the } \mu\text{th memory} \tag{9.38}$$

The synaptic weights are calculated by the outer product

$$w_a^b = \frac{1}{N}\sum_{\mu=1}^{M} \xi_{\mu,b}\, \xi_{\mu,a} \qquad \text{and} \qquad w_a^a = 0 \tag{9.39}$$

To prevent runaway in the iterated calculations, it is usual to remove the diagonal as

$$\overleftrightarrow{w} = \frac{1}{N}\sum_{\mu=1}^{M} \vec{\xi}_\mu \vec{\xi}_\mu^{\,T} - \frac{M}{N}\overleftrightarrow{I} \tag{9.40}$$

where the synaptic weight matrix $\overleftrightarrow{w}$ is symmetric, and $\overleftrightarrow{I}$ is the identity matrix. The outer product is a projection operator that projects onto a fundamental memory, and the synaptic weights perform the function of finding the closest fundamental memory to a partial or corrupted input. Once the synaptic weight matrix is defined, the storage phase is complete, and no further training is required.

9.6.2 Retrieval phase

The initial network state is set to $\vec{\xi}_{probe}$, which is a partial or noisy version of a fundamental memory. The network is iterated asynchronously to prevent oscillation, which means that the neurons are picked at random to update. When the bth neuron is picked, the (n + 1)th state of the neuron depends on the nth state:

$$x_b^{(n+1)} = \mathrm{sgn}\left[\sum_{a=1}^{N} x_a^{(n)}\, w_b^a - b_b\right] \tag{9.41}$$

If the new state is not equal to the old state, then it is updated. After many iterations, the states of the neurons no longer change. In this situation, the recalled

300 Introduction to Modern Dynamics state (the fixed point) is now steady. For the stable memory end state, all the output values of each neuron are consistent with all the input values. The self-consistency equation is  ya = sgn

N 

 yb wba

− bb

for a = 1, 2, . . . , N

(9.42)

b=1

Example 9.2 Three neurons N = 3, two fundamental memories M = 2

Assume that the fundamental memories are

$$\vec{\xi}_1 = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \qquad \vec{\xi}_2 = \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}$$

The synaptic weight matrix from Eq. (9.40) is

$$\overleftrightarrow{W} = \frac{1}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}\begin{pmatrix} -1 & 1 & -1 \end{pmatrix} - \frac{2}{3}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}$$

As a test, try one of the fundamental memories, such as $\vec{y} = (1, -1, 1)^T$, as an input:

$$\overleftrightarrow{W}\vec{y} = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 4 \\ -4 \\ 4 \end{pmatrix}$$

The output is

$$\mathrm{sgn}\left[\overleftrightarrow{W}\vec{y}\right] = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \vec{y}$$

and the memory is correctly recalled. Now try an input that is not a fundamental memory. For instance, use $\vec{y} = (1, 1, 1)^T$ as an initial input. Then

$$\overleftrightarrow{W}\vec{y} = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 \\ -4 \\ 0 \end{pmatrix}$$

In this evaluation, there are values $y_a = 0$ that result (a non-allowed state). When this happens during updating, the rule is to not change the state to zero, but to keep the previous value. The resultant vector after the first iteration is


therefore

$$\mathrm{sgn}\left[\overleftrightarrow{W}\vec{y}\right] = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \vec{\xi}_1$$

which is one of the fundamental memories, and the recalled memory is stable. As another example, use $\vec{y} = (1, 1, -1)^T$ as an initial input:

$$\overleftrightarrow{W}\vec{y} = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} -4 \\ 0 \\ 0 \end{pmatrix} \qquad \mathrm{sgn}\left[\overleftrightarrow{W}\vec{y}\right] = \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}$$

which, again, is a fundamental memory and is stable.

The Hopfield network can have spurious states that are stable recalls but are not among the fundamental memories. This is because, when M < N, W has degenerate eigenvalues of zero. The subspace spanned by the eigenvectors associated with the zero eigenvalues constitutes the null space, and the Hopfield network includes vector projectors that project onto the null space, which produces false memories.

An essential question about associative memories is how many memories can be stored and recalled effectively. To provide an estimate for the capacity of the memory, choose a probe $\vec{\xi}_{probe} = \vec{\xi}_\upsilon$ that is one of the fundamental memories. Then

$$\nu_a = \sum_{b=1}^{N} w_a^b\, \xi_{\upsilon,b} = \frac{1}{N}\sum_{\mu=1}^{M}\sum_{b=1}^{N} \xi_{\mu,a}\, \xi_{\mu,b}\, \xi_{\upsilon,b} = \xi_{\upsilon,a} + \frac{1}{N}\sum_{\mu\neq\upsilon}^{M}\sum_{b=1}^{N} \xi_{\mu,a}\, \xi_{\mu,b}\, \xi_{\upsilon,b} \tag{9.43}$$

The first term is the correct recall, while the second term is an error term. Because the state values are ±1 (saturated outputs of the response function), the second term has zero mean. It has the further properties

$$\sigma^2 = \frac{M-1}{N} \tag{9.44}$$

for the variance, and thus

$$S/N = \frac{N}{M-1} \approx \frac{N}{M} \tag{9.45}$$

for the signal-to-noise ratio for large M. It is beyond the scope of the current chapter to define the desired signal-to-noise ratio for effective operation of the memory, but numerical studies have found a rough rule of thumb to be

$$\left. S/N \right|_c \geq 7 \tag{9.46}$$

for the critical threshold for stability.8 Therefore, it is necessary for the size of the vector N (the number of neurons) to be about an order of magnitude larger than the number of memories M.
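A minimal sketch of the storage and retrieval phases of Eqs. (9.40) and (9.41); random ±1 patterns stand in for the digit images of the next section, while N = 120 and M = 6 match the dimensions quoted there:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 120, 6
xi = rng.choice([-1, 1], size=(M, N))            # random fundamental memories

W = (xi.T @ xi)/N - (M/N)*np.eye(N)              # Eq. (9.40), zeroed diagonal

def recall(probe, steps=20*N):
    x = probe.copy()
    for _ in range(steps):                       # asynchronous updates
        b = rng.integers(N)                      # pick a neuron at random
        nu = W[b] @ x                            # local field, Eq. (9.41)
        if nu != 0:                              # zero field: keep old state
            x[b] = np.sign(nu)
    return x

# corrupt one memory by flipping 25% of its bits, then retrieve it
probe = xi[0].copy()
flip = rng.choice(N, size=N//4, replace=False)
probe[flip] *= -1
print("overlap with memory 0:", (recall(probe) @ xi[0])/N)   # ~1.0 if recalled
```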

9.6.3 Example of a Hopfield pattern recognizer

An example of a Hopfield network that performs number recognition is implemented with N = 120 neurons and M = 6 memories. The fundamental memories are shown in Fig. 9.18 as 10 × 12 arrays of 1's and 0's. These two-dimensional patterns are converted to one-dimensional vectors of 120 elements that serve as the $\xi_i$. The bias of each neuron is set to zero for this example. The synaptic weight matrix is calculated using Eq. (9.40). Examples of two runs are shown in Fig. 9.19, where one of the fundamental memories is scrambled with 40% probability.

8 See Haykin (1999), p. 695.

Figure 9.18 Training set for a Hopfield network. N = 120 neurons and M = 6 memories.

Figure 9.19 Examples of successful pattern recall for a bit-flip probability of 40% (panels: p = 0.4, 52 iterations; p = 0.4, 53 iterations).

The first example chooses the number "4" and scrambles the pixels (shown in the middle). After 41 iterations, the correct memory is recalled. The second example does the same for the number "3." One of the interesting aspects of neural networks is that they don't always work. An example of incorrect recall is shown in Fig. 9.20. In this case, the final steady state is not a pattern that exists in the training set. Because of the large redundancy of the network (N/M = 20), there are many stable solutions that are not one of the fundamental memories. These are "false" memories that were never trained, but are valid steady-state solutions.

9.7 Summary This chapter has introduced several simple aspects of neurodynamics, beginning with single neurons that are nonlinear dynamical systems exhibiting a broad range of nonlinear behavior. They can spike or cycle or converge on stable fixed points, or participate in bifurcations. An even wider variety of behavior is possible when many neurons are coupled into networks. For instance, Hopfield networks are dissipative nonlinear dynamical systems that have multiple basins of attraction in which initial conditions may converge upon stable fixed points. Alternatively, state trajectories may be periodic, executing limit cycles, or trajectories may be chaotic. The wealth of behavior is the reason why artificial neural network models have attracted, and continue to attract, intense interest that is both fundamental and applied. Fundamental interest stems from the importance of natural neural systems, especially in our own consciousness and so-called intelligence. Applied interest stems from the potential for neural systems to solve complex problems that are difficult to solve algorithmically and for which a holistic approach to fuzzy or ill-defined problems is often the most fruitful. Neural dynamics provide several direct applications of the topics and tools of nonlinear dynamics that were introduced in Chapter 4. Individual neurons are modeled as nonlinear oscillators that rely on bistability and homoclinic orbits to produce spiking potentials. Simplified mathematical models, like the Fitzhugh– Nagumo and NaK models, capture successively more sophisticated behavior of the individual neurons such as thresholds and spiking. Artificial neurons are

Figure 9.20 Example of unsuccessful pattern recall (p = 0.45, 66 iterations). With p = 45%, the 4 is translated into a nonfundamental 3. The final state is a spurious stable state.

composed of three simple features: summation of inputs, referencing to a threshold, and saturating output. Artificial networks of neurons were defined through specific network architectures that included the perceptron, feedforward networks with hidden layers that are trained using the Delta Rule, and recurrent networks with feedback. A prevalent example of a recurrent network is the Hopfield network, which performs operations such as associative recall. The dynamic trajectories of the Hopfield network have basins of attraction in state space that correspond to stored memories.

9.8 Bibliography

S. Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall, 1999).
E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (MIT Press, 2007).
B. Müller and J. Reinhardt, Neural Networks: An Introduction (Springer, 1990).

9.9 Homework problems

Analytic problems

1. Fitzhugh–Nagumo model: Perform a stability analysis of the fixed points and limit cycle for the Fitzhugh–Nagumo model. What are the bifurcation thresholds?

2. Fitzhugh–Nagumo model: For the simple Fitzhugh–Nagumo model, estimate the period of the limit cycle when β is large for
$$\dot{V} = \beta V\left(1 - V^2\right) - n \qquad \dot{n} = V - a$$

3. NaK model: In Fig. 9.9 for the NaK model, what parameters control the fast part of the trajectory? The slow part of the trajectory?

4. NaK model: Draw a graph of the three variables in Eq. (9.8) as functions of voltage. What do these curves mean in terms of neuron physiology?

5. NaK model: How does the n-nullcline prevent n from becoming negative in the NaK model?

6. NaK model: Construct a simple NaK model with a logistic function for the n-nullcline and a cubic for the V-nullcline. Analyze the stability.

7. Center-surround: What is the truth table for the center-surround perceptron of Fig. 9.14? Assume an activation function that is asymmetric between −1 and +1 with high gain.

8. Perceptron: Design a spatial sine-wave sensor, for a given wavelength, as a single-layer perceptron. What are the weights and thresholds? Can it be made shift-invariant?

9. Exclusive Or: Find the weights and biases to create an XOR from the following 2-layer neural net topology, with weights w13, w23, w14, w24, and w34 connecting the labeled nodes, and prove that the deviation is a minimum at these values. Assume activation functions that are asymmetric between −1 and +1 with high gain.

11. Linear separability: Design a perceptron with two inputs and a single neuron that acts as a two-class classifier (−1, +1) for the following data. Interpret the weight matrix and bias as the equation for a line on the x–y plane that separates the two classes.

Class +1:
x: −0.676, −1.024, 0.0887, 1.1046, 0.727
y: 0.502, −0.1011, 0.1207, 0.5057, 0.6813

Class −1:
x: 1.0681, −0.5652, 0.4109, −0.5924, −0.8197
y: −0.0381, −0.743, −0.8182, −0.7972, 0.0432

10. Hopfield network: Consider a simple Hopfield network made up of two neurons. There are 4 possible states for the network. The synaptic weight matrix of the network is
$$W = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$$
(a) Find the two stable states using the stability condition.
(b) What is the steady-state behavior for the other two input states?
(c) Define the energy function for infinite gain, and evaluate it for all four states.

11. Hopfield network: Construct a Hopfield synaptic weight matrix for the three fundamental memories
$$\xi_1 = [1, 1, 1, 1, 1]^T \qquad \xi_2 = [1, -1, -1, 1, -1]^T \qquad \xi_3 = [-1, 1, -1, 1, 1]^T$$

(a) Use asynchronous updating to demonstrate that these three fundamental memories are stable.
(b) What happens when you flip the second element of ξ1?
(c) What happens when you mask (set to zero) the first element of ξ3?
(d) Find at least three spurious memories of the network.

12. Energy function: Show that the energy function of a Hopfield network may be expressed as
$$E = -\frac{N}{2}\sum_{\nu=1}^{M} m_\nu^2$$
where $m_\nu$ denotes an overlap defined by
$$m_\nu = \frac{1}{N}\sum_{j=1}^{N} x_j\, \xi_{\nu,j}$$

where $x_j$ is the jth element of the state vector x, $\xi_{\nu,j}$ is the jth element of the fundamental memory $\xi_\nu$, and M is the number of fundamental memories.

13. Multilayer perceptron: Derive the Delta Rule for a three-layer feedforward network with two layers of hidden neurons.

14. Multilayer perceptron: How well can a single hidden layer solve the "balanced bit" problem? This is when a group of bits have equal numbers of 0's and 1's. What is the error probability of a well-trained network?

Numerical projects

15. Bistability: For the NaK model in the bistable regime, start the dynamics at the fixed point and give the system an isolated membrane potential spike that takes it to the limit cycle. Once on the limit cycle, give the membrane potential a spike and track the dynamic trajectory as it relaxes back to the fixed point.

16. Bifurcation and hysteresis: Find appropriate parameters for the NaK model that demonstrate hysteresis. Start the dynamics of the NaK model and track an orbit. Slowly change a parameter until there is a bifurcation (sudden change in behavior). Then reverse the changes in that parameter until a bifurcation occurs back to the original dynamics. You are seeking conditions under which there is hysteresis: the two critical thresholds (one for the up transition and one for the down transition) on a single parameter are not the same.

17. Homoclinic orbit: In the NaK model, can you find an infinite-period homoclinic orbit? How tightly (how many significant figures) do you need to control the parameters to get the period to diverge?

18. Recurrent random neural net: Explore the dynamics of the iterated map
$$V^a(n+1) = \tanh\!\left[g\left(\sum_b W_b^a\, V^b(n) + I^a\right)\right]$$
where $W_b^a$ is drawn from a normal distribution with zero mean and standard deviation $\sigma/\sqrt{N}$, and $I^a$ is drawn from a random normal distribution with mean I and unit standard deviation. The synchronous update causes the network to oscillate in limit cycles and chaotic regimes.

19. Multilayer perceptron: Implement a 4-bit OR in a two-layer perceptron. How many hidden neurons are needed?

20. Generalization: Train the 4-bit AND with 8 training examples. What is the probability that a network that acts correctly on the training set will generalize correctly to the remaining 8 cases?

10 Economic Dynamics

10.1 Microeconomics and equilibrium 309
10.2 Macroeconomics 324
10.3 Business cycles 331
10.4 Random walks and stock prices [optional] 339
10.5 Summary 348
10.6 Bibliography 349
10.7 Homework problems 350

Dow Jones Average and Trading Volume, February 2007 to February 2012 (^DJI, week of Feb 6, 2012: 12845.13).

Economies are highly dynamic complex systems. Supply and demand and corporate competition are central ideas, along with inflation and unemployment. There are governmental fiscal policies that sometimes are at odds with governmental monetary policies. And there are trade imbalances, and unfair subsidies along with trade wars. On top of all of this there are human expectations, both rational and irrational, as stockbrokers gamble on futures and on derivatives while stock prices are driven by wild fluctuations. Much of this behavior can be captured in dynamical flows, or as iterated maps. Stable and unstable fixed points explain stable and unstable aspects of economies. Limit cycles capture business cycles, and saddle points can govern inflation. Evolutionary dynamics plays a role in corporate competition, in the rise and fall of companies, and in the distributed dynamical networks of businesses with interwoven supply chains and interdependent money markets. This chapter takes us far from physical systems of point masses subject to forces—the usual topic of physics texts. But economic dynamics is an emerging field that benefits from the insights gained from the physical systems studied in the preceding chapters.

Introduction to Modern Dynamics. Second Edition. David D. Nolte, Oxford University Press (2019). © David D. Nolte. DOI: 10.1093/oso/9780198844624.001.0001


10.1 Microeconomics and equilibrium

Adam Smith's treatise On the Wealth of Nations, published in 1776, gave one of the first "modern" accounts of economics. His most famous phrase concerned what he called "an invisible hand" that guided self-interested individuals to take selfish economic actions that at the same time inadvertently improved the general economy for others. This was an early statement of the idea of economic optimization in which the natural forces of give-and-take tend to reach optimum performance. This idea was later developed into general equilibrium theory, which proposed that imbalances in one part of an economy would be compensated by adjustments in other parts, supporting an equilibrium in which unseen forces tended to damp down fluctuations and restore a steady condition. In microeconomics, the equilibrium between goods produced and goods bought is governed by price, where suppliers ask a price and buyers agree, or not, to buy at that price. During the "haggling" that follows, the price adjusts until both supplier and buyer are satisfied and the deal is closed. Within a community of buyers and sellers, this same process occurs in aggregate, and equilibrium is a fixed point when all supply equals all demand. Later economists realized that such equilibria were not necessarily stable, introducing into economic theory the important aspects of stability analysis.

10.1.1 Barter, utility, and price

One of the oldest economic systems is the barter system. Before money existed, two people could meet and decide to trade a quantity of one thing for another, depending on their respective needs. Consider two items and two traders. If the traders collectively have a total X of item one and Y of item two, then the initial allocation (also called the initial endowment) when the two traders meet is

$$x_1 + x_2 = X$$
$$y_1 + y_2 = Y \tag{10.1}$$

Divide each equation by the total amounts and call the fractions held by Trader 1 (x, y), so those held by Trader 2 are (1 − x, 1 − y). As they barter, the traders must decide what they need or want out of the trade. Each trader has a marginal rate of substitution (MRS), which defines how much of X they are willing to give up for how much of Y. Nonlinear MRS functions, which vary for each trader depending on how much of X and Y each holds, can be captured through a concept known as a utility function. The utility functions in this two-trader model are $U^1(x, y)$ and $U^2(x, y)$. These functions define the utility of holding amounts (x, y) and (1 − x, 1 − y) for each trader. Each trader seeks to maximize their respective utility function subject to the constraints of the barter process. The curves of constant utility for each trader are known as level curves or as indifference functions. They can be visualized as contours on the unit square domain, as in Fig. 10.1. An important principle in economic optimization, which is often a condition that is necessary to ensure that an optimum solution exists, is the analytic property

Figure 10.1 Barter space for two traders trading two quantities X and Y. The indifference curves are the contours of the utility functions. For an initial endowment (x0, y0), there is a lens-shaped region accessible to barter. The contract curve is the set of Pareto-optimal solutions where the slopes of the indifference curves (the marginal rates of substitution) are equal.

of convexity. A functional curve is convex, over some domain, if a line connecting two points on the curve intersects the curve at only the two points. Obviously, a circle is convex, because any two points are connected by a chord. All the conic sections are convex, which is a consequence of the fact that all conic sections are described by quadratic functions. If the utility functions of the traders are quadratic functions with maxima on the unit square, then the indifference curves (or contours) are convex, and a mutually beneficial equilibrium exists for the two traders to which they will converge if the bartering goes well. The marginal rate of substitution is given by the slope of each indifference curve. If the curves are convex, then there is a diminishing rate of return (diminishing contribution to utility) for each item x or y as they approach the limits of the domain. In other words, the MRS decreases to zero as the traders hold all of one or the other quantity. Convexity of the indifference curves guarantees diminishing rates of return, which then guarantees that a mutually agreeable equilibrium point exists for the trade. The barter space for the two quantities is shown in Fig. 10.1 for the fraction (x, y) held by Trader 1 and the fraction (1 − x, 1 − y) held by Trader 2. The maximum of the utility function for Trader 2 is at the origin, while the maximum of the utility function for Trader 1 is at (1, 1). The contours of the utility functions are the indifference curves along which the utility is a constant, and hence the trader would be indifferent to trades that occurred along that curve. From the original endowment point, there is a lens-shaped region bounded by the indifference curves that consists of accessible points to trade. Any point within

the barter region can be approached through a succession of arcs along various indifference curves. For any motion along an indifference curve, one trader may improve their utility, but the other gets no worse. The question is whether there is a point, or set of points, to which both traders will converge after repeated bargaining. To answer this question, it is useful to invoke the method of Lagrange multipliers introduced in Chapter 2 for Lagrangian dynamics. Lagrange's method of undetermined multipliers can be applied to optimization problems whenever there is an equation of constraint. As an example, assume there is a function U(x, y), and we wish to find the maximum value of this function when constrained by a curve f(x, y) = 0. Begin by constructing an extended Lagrangian function

$$L = U(x, y) + \lambda f(x, y) \tag{10.2}$$

There is no velocity variable, so the Euler–Lagrange equations are simply

$$\frac{\partial L}{\partial x} = \frac{\partial U}{\partial x} + \lambda\frac{\partial f}{\partial x} = 0$$
$$\frac{\partial L}{\partial y} = \frac{\partial U}{\partial y} + \lambda\frac{\partial f}{\partial y} = 0$$
$$\frac{\partial L}{\partial \lambda} = f(x, y) = 0 \tag{10.3}$$

Solving this system of equations for (x∗ , y∗ , λ) provides the maximum (or minimum) of the function U along the curve of constraint.

Example 10.1 Optimization with Lagrange multipliers

Find the maximum of the function $U(x, y) = 1 - \frac{1}{2}x^2 - 2y^2$ when constrained by the line y = −3x + 5. Begin by constructing a Lagrangian

$$L = 1 - \frac{1}{2}x^2 - 2y^2 + \lambda\,(y + 3x - 5) \tag{10.4}$$

The Euler–Lagrange equations are

$$\frac{\partial L}{\partial x} = -x + 3\lambda = 0$$
$$\frac{\partial L}{\partial y} = -4y + \lambda = 0$$
$$\frac{\partial L}{\partial \lambda} = y + 3x - 5 = 0 \tag{10.5}$$

Using the first equation, find the Lagrange multiplier λ = x/3 and substitute it into the second equation to get −4y + x/3 = 0, or x = 12y. Solving this together with the third equation gives

$$x^* = 60/37 \qquad y^* = 5/37 \tag{10.6}$$
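A quick symbolic check of the stationary point in Example 10.1, using a computer-algebra package:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
L = 1 - x**2/2 - 2*y**2 + lam*(y + 3*x - 5)       # Eq. (10.4)

# solve the three Euler-Lagrange equations of Eq. (10.5) simultaneously
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol[x], sol[y])    # 60/37, 5/37
```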

To find the set of points within the barter region where an equilibrium can be established by trading, the extended Lagrangian is constructed by constraining the utility curve of one trader by the utility curve of the other. The extended Lagrangian is

$$L = U^1(x, y) + \lambda\left[U^2(1 - x, 1 - y) - U_0^2\right] \tag{10.7}$$

where the utility $U_0^2$ is evaluated at the initial endowment. The derivatives are

$$\frac{\partial L}{\partial x} = \frac{\partial U^1}{\partial x_1} - \lambda\frac{\partial U^2}{\partial x_2} = 0$$
$$\frac{\partial L}{\partial y} = \frac{\partial U^1}{\partial y_1} - \lambda\frac{\partial U^2}{\partial y_2} = 0$$
$$\frac{\partial L}{\partial \lambda} = U^2(x_2, y_2) - U_0^2 = 0 \tag{10.8}$$

The first two equations imply

$$\frac{\partial U^1/\partial x_1}{\partial U^1/\partial y_1} = \frac{\partial U^2/\partial x_2}{\partial U^2/\partial y_2} \tag{10.9}$$

which is satisfied by the condition

$$\left.\frac{\partial y_1}{\partial x_1}\right|_{U^1\ \mathrm{const.}} = \left.\frac{\partial y_2}{\partial x_2}\right|_{U^2\ \mathrm{const.}} \tag{10.10}$$

Each expression is the marginal rate of substitution for the respective trader. The optimum solution is set when the MRS values of the traders are equal. In the lens-shaped barter region, there is a special set of points where the marginal rate of substitution is the same for both traders. This is the set of points of tangency between the two sets of indifference curves. This set of points is called the contract curve, because at points along the contract curve each trader has equal preference to substituting

one quantity for the other. The contract curve is a Pareto frontier of Pareto-optimal solutions. No motion would occur along the contract curve, because one trader would do worse as the other did better. Although both traders improve their utility at a point on the contract curve relative to the original endowment, one or the other may benefit more. The question is whether there is a unique point on the contract curve to which both traders would converge. To identify one point along the contract curve that is the natural equilibrium, it is necessary to leave the strict barter system behind and invoke an economic mechanism that would naturally constrain the types of trades that would be acceptable. This economic mechanism is known as price. When there is a price associated with a quantity, then the centralized barter system is replaced by a decentralized currency that can apply as well to N quantities as to one. There is a price $p_x$ for a unit of the first quantity and a price $p_y$ for a unit of the second quantity. Although each trader is subject to the same price, the price is not fixed, but adjusts dynamically until the demand for a quantity equals the supply of that quantity. Based on the prevailing prices, each trader has a budget

$$B_1 = p_x x + p_y y$$
$$B_2 = p_x(1 - x) + p_y(1 - y) \tag{10.11}$$

The constant budget yields

$$dB_1 = p_x\,dx + p_y\,dy = 0 \qquad \Rightarrow \qquad \frac{dy}{dx} = -\frac{p_x}{p_y} \tag{10.12}$$

This establishes a budget line on the barter space that passes through the endowment point with slope equal to $-p_x/p_y$, and the same budget line is shared by both traders (Fig. 10.2). However, the prevailing prices still have not been set, and any line through the endowment point can be a possible budget line. To find the equilibrium prices, it is necessary for each trader to optimize their utility function subject to their budget curve as

$$\frac{dU^1}{dx} = \frac{\partial U^1}{\partial x} + \frac{dy}{dx}\frac{\partial U^1}{\partial y} = \frac{\partial U^1}{\partial x} - \frac{p_x}{p_y}\frac{\partial U^1}{\partial y} = 0$$
$$\frac{dU^2}{dx} = \frac{\partial U^2}{\partial x} - \frac{p_x}{p_y}\frac{\partial U^2}{\partial y} = 0 \tag{10.13}$$

Comparing these equations with Eq. (10.10) provides the solution by setting the prevailing prices such that the equal slopes of the indifference curves (equal marginal rates of substitution) are equal to the slope $-p_x/p_y$ of the budget line

Figure 10.2 The same budget line is shared by both traders. It intersects the contract curve at the point where both traders share the same marginal rate of substitution and where the marginal rate of substitution equals the slope of the budget line, establishing a general equilibrium at the competitive equilibrium point.

Comparing these equations with Eq. (10.10) provides the solution: the prevailing prices are set such that the equal slopes of the indifference curves (the equal marginal rates of substitution) match the slope −px/py of the budget line passing through the original endowment point. Then the budget line intersects the contract curve at a unique point called the competitive equilibrium point. Both traders are better off at this point than at the original endowment, and this point ensures that each trader is indifferent to a further exchange of quantities or adjustments in prices. It is a general equilibrium of this two-trader, two-quantity economy. Although this example is highly idealized, it illustrates several important principles of general equilibrium theory in economic dynamics, and it points to straightforward extensions to more complex economies. First, it introduced the idea of diminishing returns, with convex indifference curves that guarantee that an equilibrium point exists in the barter (or exchange) space. Second, it found a solution where the many possible Pareto-efficient allocations along the contract curve were reduced to a single self-consistent point where the marginal rate of substitution is equal to the slope of the budget line. Finally, the introduction of prices decentralized the problem of discrete exchanges, opening the way for a large number of quantities, or goods, to be traded in a complex and multidimensional economy.
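The competitive equilibrium is easy to compute numerically. The following sketch (Python, assuming NumPy/SciPy are available) solves the optimality conditions of Eq. (10.13) together with the budget constraint of Eq. (10.11); the Cobb–Douglas utilities U¹ = x^a y^(1−a) and U² = (1−x)^b (1−y)^(1−b) and the endowment values are illustrative assumptions, not values taken from the example above.

    from scipy.optimize import fsolve

    a, b = 0.6, 0.4        # assumed Cobb-Douglas exponents for traders 1 and 2
    x0, y0 = 0.8, 0.3      # assumed initial endowment of trader 1

    def conditions(v):
        x, y, P = v        # P = px/py, the price ratio (slope of the budget line)
        mrs1 = (a/(1 - a))*(y/x)                  # MRS of trader 1
        mrs2 = (b/(1 - b))*((1 - y)/(1 - x))      # MRS of trader 2
        budget = P*(x - x0) + (y - y0)            # budget line through the endowment
        return [mrs1 - P, mrs2 - P, budget]

    x, y, P = fsolve(conditions, [0.5, 0.5, 1.0])
    print(x, y, P)   # allocation and price ratio at the competitive equilibrium

Because both marginal rates of substitution equal the price ratio at the solution, the computed point also lies on the contract curve, as required by Eq. (10.10).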

The barter system and equilibrium pricing in this example focused on fixed quantities that were initially endowed on the traders. However, the quantities X and Y had to be produced to begin with, which introduces the need for a supply side to be part of a broader economic system.

10.1.2 Supply and demand

The simplest supply and demand model assumes that supply S is a linearly increasing function of price p (if there are higher profits, suppliers will make more goods), and demand D is a linearly decreasing function of price (if the price is too high, consumers stop buying). In other words,

D = a − bp
S = c + dp    (10.14)

These simple relations are obvious, and do not yet constitute a dynamical system. Dynamics arise when the price adjusts in response to the current conditions between supply and demand. In economics, the rate of change of a variable with time is called an adjustment. A simple price adjustment depends on the excess demand E = D − S as

ṗ = kE = k(D − S)    (10.15)

Prices increase with excess demand, but fall with excess supply, for k > 0. This simple dynamical system has a fixed point when D = S at the price p* = (a − c)/(b + d), as shown in Fig. 10.3. The coefficients in the model are all assumed to be positive, with the negative dependence of demand on price expressed explicitly in Eq. (10.14). The fixed point is stable, and fluctuations in price relax towards the fixed point with a time constant τ given by the Lyapunov exponent

1/τ = k(b + d)    (10.16)

which is the same for price, demand, and supply. The simple linearity of this model can be viewed as a local linearization of more complicated nonlinear dynamics that would limit prices to be strictly positive, just as population numbers in Chapter 8 were also always strictly positive.

For a competitive market with multiple products, each product has an excess demand given by Ea = Da − Sa, with a dynamic price adjustment, for positive ka,

ṗa = ka Ea    (10.17)

Figure 10.3 A simple linear supply and demand model. Prices rise if there is excess demand, but fall if there is excess supply. The fixed point at p* = (a − c)/(b + d) is a stable equilibrium. The dynamics are one-dimensional for ṗ = f(p).

For two products, the excess demand can be written as

E1 = a − bp1 + cp2
E2 = d + ep1 − fp2    (10.18)

with the two-dimensional flow

ṗ1 = k1(a − bp1 + cp2)
ṗ2 = k2(d + ep1 − fp2)    (10.19)

The Jacobian matrix is

J = [ −k1 b    k1 c
       k2 e   −k2 f ]    (10.20)

with trace and determinant

τ = −k1 b − k2 f
Δ = k1 k2 (bf − ec)    (10.21)

Since the trace is negative, there is a stable equilibrium when

bf > ec    (10.22)

The negative trace is automatically satisfied by the model (stable feedback). The condition on the determinant is satisfied if the slope b/c of the p1-nullcline is larger than the slope e/f of the p2-nullcline. If this condition is not satisfied, then the fixed point is a saddle point with an unstable manifold that will either cause an uncontrolled price increase or cause one product to vanish from the marketplace.

Example 10.2 Two-product competition
As an example, consider a market with two products. If the excess demand is

E1 = 3 − 6p1 + 3p2
E2 = 16 + 4p1 − 8p2    (10.23)

with price adjustments

ṗ1 = 2E1
ṗ2 = 3E2    (10.24)

then this model leads to a stable fixed point for both products in a competitive equilibrium. The phase plane is shown in Fig. 10.4 for prices p1 and p2. The stable equilibrium is at p* = [2, 3].


Figure 10.4 Phase plane for prices p1 and p2 in a competitive model. The solid lines are nullclines. The dashed curves are the separatrices.
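A minimal numerical check of this example (Python with SciPy; the initial prices and time span are arbitrary choices) integrates the flow of Eqs. (10.23) and (10.24) and confirms convergence to the competitive equilibrium:

    from scipy.integrate import solve_ivp

    def flow(t, p):
        p1, p2 = p
        E1 = 3 - 6*p1 + 3*p2        # excess demands, Eq. (10.23)
        E2 = 16 + 4*p1 - 8*p2
        return [2*E1, 3*E2]         # price adjustments, Eq. (10.24)

    sol = solve_ivp(flow, [0, 10], [0.5, 0.5])
    print(sol.y[:, -1])             # approaches the stable fixed point [2, 3]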

10.1.3 Cournot duopoly

The importance of nonlinear processes in economics was recognized in the mid-nineteenth century, becoming a watershed in economic thought that marks the end of the classical period of economics and the beginning of neoclassical economics.¹ A simple example of nonlinearity in economics is the idea of diminishing returns, in which the additional amount purchased by an increment of money is a decreasing function of the total amount of money. This is called the marginal rate of return, expressed as the derivative of quantity with respect to price. The introduction of marginal rates allowed nonlinear processes to be captured in economic modeling, and also introduced mathematical derivatives and principles of optimization. This is sometimes referred to as the Marginalist Revolution, which is at the core of neoclassical economics, as the optimization of economic performance took center stage.

Consider two companies that are competing in the same market with different versions of the same product. The total quantity manufactured is

Q(t) = q1 + q2    (10.25)

where q1 and q2 are the respective quantities manufactured by each company. A simple price model assumes a maximum allowable price pmax and a linear dependence on total quantity supplied, such as

p(t) = pmax − Q    (10.26)

The revenues for each company are

R1(t) = p q1 = (pmax − q1 − q2) q1
R2(t) = p q2 = (pmax − q1 − q2) q2    (10.27)

1 The classical era of economic thought extended from the publication of Adam Smith's Wealth of Nations in 1776 to John Stuart Mill's Principles of Political Economy in 1848. The neoclassical era began with the development of marginalism by Cournot (Researches on the Mathematical Principles of the Theory of Wealth, 1838) and Walras (Elements of Pure Political Economy, 1874).
2 This is a model of constant marginal cost, which means there is a simple proportionality between the cost and the amount manufactured, although it would be straightforward to introduce a nonconstant marginal cost that has diminishing returns.

The costs for production (known as marginal cost) are²

C1(t) = m1 q1
C2(t) = m2 q2    (10.28)

where m1 and m2 are nearly equal (for viable competition). Therefore, the profits are

π1(t) = (pmax − q1 − q2) q1 − m1 q1
π2(t) = (pmax − q1 − q2) q2 − m2 q2    (10.29)

These can be solved to yield the isoprofit curves

q2 = pmax − π1/q1 − m1 − q1
q1 = pmax − π2/q2 − m2 − q2

To find the desired quantities x1 and x2 that maximize profits, the partial derivatives³ of the profits are taken with respect to the manufactured quantity:

∂π1/∂q1 = (pmax − m1) − 2q1 − q2 = 0
∂π2/∂q2 = (pmax − m2) − 2q2 − q1 = 0    (10.30)

These are called the "reaction curves," because they determine how each company reacts to the quantities produced by the other. The reaction curves intersect at the point at which each company's production produces the best profits. The intersection point is

x1 = (1/2)(pmax − m1) − (1/2) q2
x2 = (1/2)(pmax − m2) − (1/2) q1    (10.31)

These are the optimal production quantities and are known as the Cournot duopoly solution.⁴ The quantity adjustments are proportional to the difference between the desired quantity that maximizes profits and the actual quantity. Assuming a simple proportionality between the cost and the amount manufactured, the quantity adjustment equations are

q̇1 = k1(x1 − q1)
q̇2 = k2(x2 − q2)    (10.32)

The dynamical equations for the quantities manufactured and sold (no inventory stock) are then

q̇1 = (1/2)(pmax − m1) k1 − k1 q1 − (k1/2) q2
q̇2 = (1/2)(pmax − m2) k2 − k2 q2 − (k2/2) q1    (10.33)

In this model, each company sets its optimal production assuming that its competitor is also operating at optimal production. Therefore, this model precludes predatory price wars.

3 Mathematically, partial derivatives assume that all other variables are held constant. In economics, this is known as ceteris paribus, "all other things held constant."
4 Antoine Augustin Cournot (1801–1877) was a French mathematician whose 1829 doctoral dissertation on mathematical physics attracted the attention of the physicist Siméon-Denis Poisson, who helped Cournot receive an appointment as a mathematics professor. Cournot turned his attention to economics and was the first to introduce partial derivatives (marginal quantities) and probabilities into mathematical economics.

The Jacobian is

J = [ −k1      −k1/2
      −k2/2    −k2 ]

τ = −k1 − k2 < 0
Δ = (3/4) k1 k2 > 0
τ² − 4Δ = (k1 − k2)² + k1 k2 > 0

which guarantees a stable equilibrium without spirals, because the trace is strictly negative and the discriminant is strictly positive. In this model, each company sets its optimal production assuming that its competitor is also operating at optimal production and will not make any further adjustments in response to the other's changes. Therefore, the optimal production quantities are also the Nash equilibrium for pure strategies, and hence are often called the Cournot–Nash equilibrium. The production space is illustrated in Fig. 10.5. It shows the two reaction curves, the Cournot–Nash equilibrium at the intersection of the reaction curves, and the set of isoprofit curves for each company.

Figure 10.5 The Cournot–Nash equilibrium is the intersection of the two reaction curves. Each reaction curve goes through the positions of zero slope on each of the isoprofit curves. The shaded region bounded by the isoprofit curves that extend from the Cournot–Nash equilibrium is known as the region of Pareto optimality.

The Cournot solution, despite being a stable equilibrium of the duopoly, is not the best solution for the companies as a whole. By adjusting the production quantities to move into the shaded region between the two isoprofit curves that intersect at the Cournot solution point, each company can improve its profits without negatively impacting the other. This is accomplished by moving along isoprofit curves of one and then the other company. In this way, any point within the region can be reached. When no company can gain further profit without negatively impacting the other, then no further steps can be taken and the resulting point is Pareto-optimal. The condition to find the Pareto-optimal solution by taking a step dqa is

dπa/dqa > 0
dπb/dqa ≥ 0    (10.34)

which is the same condition as Eq. (10.10) and establishes a contract curve that is Pareto-optimal. The geometric character of this duopoly is analogous to the two-trader barter system discussed in the first section of this chapter. In the barter system, negotiation brings the traders to the general equilibrium point. In the Cournot duopoly case, the Cournot–Nash equilibrium is analogous to the original endowment. However, in the duopoly case, this negotiation would be price fixing, which may not be allowed by business law.
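The relaxation to the Cournot–Nash equilibrium is easy to verify by integrating Eq. (10.33) directly. In the sketch below (Python with SciPy), the price ceiling, marginal costs, and adjustment rates are illustrative assumptions, not values from the text:

    from scipy.integrate import solve_ivp

    pmax, m1, m2 = 10.0, 1.0, 1.5    # assumed price ceiling and marginal costs
    k1, k2 = 1.0, 0.8                # assumed adjustment rates

    def duopoly(t, q):
        q1, q2 = q
        x1 = 0.5*(pmax - m1) - 0.5*q2      # desired quantities, Eq. (10.31)
        x2 = 0.5*(pmax - m2) - 0.5*q1
        return [k1*(x1 - q1), k2*(x2 - q2)]    # quantity adjustments, Eq. (10.32)

    sol = solve_ivp(duopoly, [0, 20], [1.0, 1.0])
    # compare with the analytic intersection of the two reaction curves
    print(sol.y[:, -1], [(pmax - 2*m1 + m2)/3, (pmax - 2*m2 + m1)/3])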

10.1.4 Walras' Law

Stable equilibrium points are common for a two-product market competition, even for a wide range of different model parameters. On the other hand, when there are three or more products, it is much more likely that the competition will be unstable. As the dimension of a dynamical space increases, saddle points with unstable manifolds are the most common fixed points, unless other constraints are placed on the dynamics. For instance, in Chapter 8, the quasispecies model with very many dimensions is stabilized by subtracting the average fitness of the population from the growth rate. In a closed market, a general equilibrium can be established among the many components of a market so that if there is an excess value in one market, then there must be a deficit in another. This represents a zero-sum rule called Walras' Law,⁵ expressed in terms of excess demand as

∑_{i=0}^{N} Pi Ei = 0    (10.35)

where Pi is the price of the ith product. For two products subject to Walras' Law, and assuming linear dependences on x = p1 and y = p2, the excess demands and price adjustments are

E1 = a + bx + cy
E2 = d + ex + fy
ẋ = k1 E1
ẏ = k2 E2    (10.36)

5 Marie-Esprit-Léon Walras (1834–1910) was a French economist at the University of Lausanne who is credited with founding the Lausanne School of economics that developed the first theories of marginalism and general equilibrium.

The fixed point is

x* = (cd − fa)/(fb − ce)
y* = (ea − bd)/(fb − ce)    (10.37)

The constraint equation from Walras' Law yields a quadratic equation:

xE1 + yE2 = 0
ax + bx² + (c + e)xy + dy + fy² = 0    (10.38)

The constraint turns the two-dimensional dynamics into a one-dimensional dynamical problem where the trajectory is constrained to lie on the curve defined by the equation of constraint, called the Walras manifold. The equation for the Walras manifold is

F(x, y) = fy² + [(c + e)x + d]y + ax + bx² = 0    (10.39)

which defines a curve on which the motion takes place. The speed along the curve is

ds/dt = √[(k1 E1)² + (k2 E2)²]    (10.40)

where y(x) is expressed in terms of x through the equation of constraint, Eq. (10.39). The unit tangent to the curve is

T = (B, −A)/√(A² + B²)    (10.41)

where

A = ∂F/∂x = (c + e) y0 + a + 2b x0
B = ∂F/∂y = 2f y0 + (c + e) x0 + d    (10.42)

are evaluated at the fixed point (x0, y0). A displacement along the curve by a path length s relative to the fixed point is then

(δx, δy) = T s = (Tx, Ty) s    (10.43)

and the velocities (for small displacement s) are

ẋ = k1(b Tx + c Ty) s
ẏ = k2(e Tx + f Ty) s    (10.44)

The velocity along the curve is the projection of the velocity onto the tangent:

ṡ = Tx ẋ + Ty ẏ
  = k1 Tx (b Tx + c Ty) s + k2 Ty (e Tx + f Ty) s
  = [k1 b (Tx)² + k1 c Tx Ty + k2 e Tx Ty + k2 f (Ty)²] s    (10.45)

If the expression in square brackets is negative, then the system is stable. The important consequence of Walras' Law is that a system that is unstable in two-dimensional dynamics (e.g., a saddle point) can be stabilized by constraining the dynamics to lie on the Walras manifold, leading to the principle of general equilibrium, which assumes that economic dynamics takes place on stable manifolds.

Example 10.3 Two-product Walras' Law
For a two-product example of Walras' Law, consider the excess demand

E1 = −x + y
E2 = −4 + x + y    (10.46)

where the adjustment is

ẋ = 2E1
ẏ = 3E2    (10.47)

The two-dimensional dynamics are governed by a saddle fixed point that is inherently unstable against small fluctuations. The phase portrait of this system is shown in Fig. 10.6. The saddle point has stable and unstable manifolds, and this market system would be unstable, with runaway prices as a trajectory first approaches but then diverges away from the fixed point, following the unstable manifold. However, Walras' Law leads to the quadratic equation of constraint

xE1 + yE2 = y² + (2x − 4)y − x² = 0    (10.48)

The partial derivatives of the equation of constraint, evaluated at the fixed point, are

∂F/∂x = 2y0 − 2x0 = 0
∂F/∂y = 2y0 + (2x0 − 4) = 4    (10.49)

and the tangent vector is T = (1, 0) for this example. The dynamics along the curve are then, from Eq. (10.45),

ṡ = −k1 s    (10.50)

for small displacements s. Therefore, this constrained system is stable. The constraint curve, the Walras manifold, is plotted on the phase portrait in Fig. 10.6. The dynamical system lies on the Walras manifold, which has a dimension one less than the dimension of the original dynamics. The Walras manifold in this example is closer to the stable manifold than the unstable manifold. This example illustrates how a two-dimensional flow that is unstable (saddle) can become stable when the zero-sum condition (Walras's Law) constrains the problem.

Figure 10.6 Walras's Law for a dynamic market with a saddle point. The flow is ẋ = −2x + 2y, ẏ = −12 + 3x + 3y, and the Walras manifold is y² + (2x − 4)y − x² = 0. The zero-sum rule constrains the dynamics to lie on the Walras manifold. The dynamics on the Walras manifold is stable.
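The stability computation of this example can be checked numerically. The short sketch below (Python with NumPy) evaluates the tangent vector of Eq. (10.41) and the bracketed rate of Eq. (10.45) at the fixed point (2, 2):

    import numpy as np

    k1, k2 = 2, 3
    a, b, c = 0, -1, 1      # E1 = a + b*x + c*y = -x + y
    d, e, f = -4, 1, 1      # E2 = d + e*x + f*y = -4 + x + y
    x0, y0 = 2, 2           # saddle fixed point of the unconstrained flow

    A = (c + e)*y0 + a + 2*b*x0      # dF/dx at the fixed point, Eq. (10.42)
    B = 2*f*y0 + (c + e)*x0 + d      # dF/dy at the fixed point
    norm = np.hypot(A, B)
    Tx, Ty = B/norm, -A/norm         # unit tangent to the Walras manifold

    rate = k1*b*Tx**2 + (k1*c + k2*e)*Tx*Ty + k2*f*Ty**2   # bracket in Eq. (10.45)
    print((Tx, Ty), rate)            # tangent (1, 0) and rate -2 = -k1: stable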

10.2 Macroeconomics

The productivity of a nation, and the financial well-being of its citizens, depends on many factors with hidden dependencies that are often nonlinear. For instance, fiscal policy that raises corporate taxes boosts short-term income to a government that seeks to reduce deficits, but can decrease long-term income if businesses fail to succeed. With monetary policy, the government can lower interest rates to try to boost consumer spending, but lower gains on investment can stall economic recovery. Just how strong the feedback might be is usually not known a priori, and a transition to qualitatively different behavior can occur in a national economy with even relatively small adjustments in rates.

The complexities of macroeconomics have challenged economic predictions from the beginning of civilization, and even today great debates rage over fiscal and monetary policies. Part of the challenge is the absence of firm physical laws that are true in any absolute sense. There are many trends that hold historically for many situations, but there are always exceptions that break the rule. When the rules of thumb break down, economists scramble to get the economy back on track. These complexities go far beyond what this chapter can explore. However, there are some simple relationships that tend to be robust, even in the face of changing times.

10.2.1 IS-LM models

One of the foundations of macroeconomics is the relationship between investment-savings (IS) in the goods markets on the one hand and liquidity-money (LM) in the money markets on the other. As the interest rate on borrowed money increases, companies are less likely to make investment expenditures, and consumers are more likely to save (the IS part of the economy). This is balanced by the demand for money in the economy, which decreases with increasing interest rates (the LM part of the economy). These two trends lead to the so-called IS-LM models, of which there are many variants that predict a diverse range of behaviors, from stable nodes (equilibrium models) to saddle-point trajectories (unbounded inflation or deflation).

Let us begin with a simple IS-LM model that is purely linear, that balances expenditures against real income, and in which the dynamics are adjusted according to simple linear relaxation. The dynamics take place in a two-dimensional space of income g and interest rate r. The expenditure as a function of time is assumed to be

E(t) = G + c(1 − T) g(t) + i g(t) − u r(t)    (10.51)

for positive G, u, i > 0, with 0 < c < 1 and 0 < T < 1, where the meanings of the variables and coefficients are given in Table 10.1. Governmental fiscal policies affect the amount of government spending G and the tax rate T. These are called exogenous variables because they are controlled externally by the government. The other parameters, such as i, the response of investment to income, are called endogenous because they cannot directly be controlled and are properties of the economy. The endogenous parameters need to be tuned in the model to fit observed economic behavior. The demand for money is assumed to be related to g (the real income or the gross domestic product, GDP) and negatively related to the rate of interest r as

Table 10.1 Variables and coefficients in the IS-LM model

E   Real expenditure
g   Real income (gross domestic product, GDP)
r   Interest rate
T   Tax rate
G   Autonomous (government) expenditure
c   Propensity to consume
i   Increased investment in response to income g
u   Suppressed investment in response to interest rate r
D   Demand for money
k   Enhanced demand for money in response to income g
v   Suppressed demand for money in response to interest rate r

D(t) = k g(t) − v r(t)    (10.52)

for positive coefficients k > 0, ν > 0. The adjustments in the real income and the interest rate are

ġ = α[E(t) − g(t)]
ṙ = β[D(t) − m0]    (10.53)

for positive rates α, β > 0, where m0 is the nominal available money (the liquidity preference, equal to M/P, the amount of money M relative to the price of money P). The amount of money available to the economy is controlled by the governmental monetary policies of the central bank (Federal Reserve). Plugging the expenditures and the demand for money into the adjustment equations yields a two-dimensional flow in the space of income and interest rates. Defining the consumer consumption-investment index as

C = c(1 − T) + (i − 1)    (10.54)

the two-dimensional flow is

ġ = α(Cg − ur + G)
ṙ = β(kg − vr − m0)    (10.55)

The two nullclines are linear functions of g. The LM curve is the r-nullcline, and the IS curve is the g-nullcline:

r = (1/u)(G + Cg)      g-nullcline    IS curve
r = (1/v)(kg − m0)     r-nullcline    LM curve    (10.56)

The LM curve is a line with positive slope as a function of income (the interest rate is an increasing function of income). The IS curve can have a negative or positive slope, depending on the sign of the consumption-investment index C. In macroeconomics, one of the most important consequences of changes in rates or coefficients is the conversion of stable equilibria into unstable equilibria or saddle points. In economic terms, this is the difference between a stable national economy and runaway inflation (or deflation). The stability of the IS-LM model is determined by the determinant and trace of the Jacobian matrix, which are

Δ = αβ(uk − νC)
Tr = αC − βν    (10.57)

Nodes are favored over saddle points for positive Δ, and stable behavior is favored over unstable behavior for negative trace. Therefore, stability in a national economy requires positive Δ and negative trace, both of which result for smaller values of the consumption-investment index C. The IS curve has negative slope (negative C) when the propensity to consume c is low (for a fixed tax rate T) and investment i in response to income g is also low, which provides stability to the economy.

Example 10.4 IS-LM model
A numerical model with a stable equilibrium is shown in Fig. 10.7. The LM curve is the r-nullcline, and the IS curve is the g-nullcline. The parameters of the model are G = 20, k = 0.25, c = 0.75, m0 = 2, T = 0.25, ν = 0.2, i = 0.2, u = 1.525, α = 0.2, and β = 0.8. The stability is determined by Δ = 0.096 and Tr = −0.488, which makes the fixed point a stable spiral.

Figure 10.7 IS-LM model with decreasing real income versus increasing interest rates. The IS curve and the LM curve are the nullclines. The fixed point is a stable spiral. A sudden decrease in the money m0 produces a "shock," and a transient trajectory spirals to the new fixed point at lower GDP and higher interest rate.

The IS-LM model makes it possible to predict the effect of governmental fiscal and monetary policies on the national economy. Fiscal policy controls the government expenditure G and the tax rate T. Monetary policy controls the amount of money m0 available to the economy. Controlling these quantities affects interest rates on money and affects the GDP.

An important feature of the IS-LM model is the ability to model "shocks" to the economy caused by sudden changes in government spending or taxes, or by sudden changes in the available money. When any of these are changed discontinuously, then a previous equilibrium point becomes the initial condition on a trajectory that either seeks a new stable equilibrium or becomes unstable. An example of a shock is shown in Fig. 10.7 when the money supply m0 is suddenly decreased. The original fixed point under large supply becomes the initial value for relaxation to the new stable fixed point. The transient trajectory is shown, which spirals as it converges on the new equilibrium.

Stability can be destroyed if there is too much increased expenditure in response to income, which occurs when an economy is "overheated." This effect is captured by increasing the parameter i in Eq. (10.51). It is also possible to have saddle points if the IS curve has a higher slope than the LM curve. In either case, the economy becomes unstable.
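The stable spiral and the response to a monetary shock in Fig. 10.7 can be reproduced with a few lines of Python (SciPy assumed) using the parameters of Example 10.4; the size of the reduction in m0 and the integration times are arbitrary choices:

    from scipy.integrate import solve_ivp

    G, k, c, T = 20, 0.25, 0.75, 0.25
    nu, i, u = 0.2, 0.2, 1.525
    alpha, beta = 0.2, 0.8
    C = c*(1 - T) + (i - 1)            # consumption-investment index, Eq. (10.54)

    def islm(t, z, m0):
        g, r = z
        return [alpha*(C*g - u*r + G),       # Eq. (10.55)
                beta*(k*g - nu*r - m0)]

    # relax onto the original equilibrium, then suddenly reduce the money supply
    eq = solve_ivp(islm, [0, 400], [20.0, 5.0], args=(2,)).y[:, -1]
    shocked = solve_ivp(islm, [0, 400], eq, args=(1,)).y[:, -1]
    print(eq, shocked)    # the new fixed point has lower GDP and a higher rate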

10.2.2 Inflation and unemployment

Inflation and unemployment are often (but not always) linked by a negative relationship in which government policy, which seeks to increase employment above its current demand, leads to higher inflation. This relationship is known as the Phillips curve and is captured mathematically by the linearized form

Phillips curve:
π = −a(u − un) + πe    (10.58)

where π = ṗ is the current rate of inflation (the rate of change in the price p of money), u is the unemployment rate, πe is the expected inflation rate, and un is the non-accelerating inflation rate of unemployment (NAIRU). This somewhat unwieldy acronym stands for the unemployment rate that keeps inflation constant (non-accelerating). Unemployment, in turn, has a negative relationship to the GDP through

−a(u − un) = α(g − gn)    (10.59)

where g is the GDP, and gn is the GDP under the condition that u = un. Therefore, expected inflation (predicting the future) differs from actual inflation (reality) to a degree determined by the GDP. If an economy is "overheated," then national productivity is higher than the normal GDP, and inflation exceeds the expected inflation. In tandem with the Phillips curve, expected inflation adjusts according to the difference of current from expected inflation:

Adaptive expectations:
π̇e = β(π − πe)    (10.60)

Finally, the GDP depends on the money markets and on the goods markets, which are characterized respectively by the difference m − p between money availability and money price and by the expected rate of inflation πe. This is captured by the aggregate demand equation

g = a0 + a1(m − p) + a2 πe    (10.61)

with all coefficients positive. Combining the Phillips curve, Eq. (10.58), with adaptive expectations, Eq. (10.60), leads to

π̇e = βα(g − gn)    (10.62)

and taking the derivative of the aggregate demand equation (10.61) leads to

ġ = a1(ṁ − π) + a2 π̇e    (10.63)

where ṁ is the growth of the money stock that is printed by the federal government (an exogenous parameter controlled by fiscal policy), with inflation π = ṗ. Equations (10.62) and (10.63) combine into the two-dimensional flow

ġ = a1(ṁ − πe) + α(a2β − a1)(g − gn)
π̇e = αβ(g − gn)    (10.64)

in the space defined by the GDP and the expected rate of inflation. The nullclines are

πe = (α/a1)(a2β − a1)(g − gn) + ṁ     g-nullcline
g = gn                                πe-nullcline    (10.65)

where the g-nullcline is a line with negative slope for a1 > a2β, and the πe-nullcline is a vertical line.

Example 10.5 Gross domestic product and expected inflation
The dynamics of inflation and unemployment are similar in character to the linearized IS-LM models and share the same generic solutions. One difference is seen in the πe-nullcline in Fig. 10.8, which is simply a vertical line at g = gn that maintains the NAIRU. The example uses the parameters a1 = 2, ṁ = 1, a2 = 1, gn = 1, α = 2, and β = 0.5. The figure also shows a governmental money shock as it increases ṁ to increase the GDP and hence lower unemployment. However, the previous fixed point becomes the initial value for a transient trajectory that relaxes back to the original unemployment level. Therefore, the increased rate of money availability only increases employment temporarily (perhaps preceding an election), but ultimately leads to long-term inflation without long-term employment.

Figure 10.8 Expected rate of inflation versus GDP related to the Phillips curve for unemployment. The vertical line is the πe-nullcline, which is the condition that the GDP has the value needed to maintain the NAIRU. Government monetary policy (increasing ṁ) can increase GDP and decrease unemployment temporarily, but the system relaxes to the original unemployment rate, meanwhile incurring an increase in inflation.
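A short Python sketch (SciPy assumed) reproduces the money shock of Example 10.5 by integrating Eq. (10.64) with the example's parameters; the doubling of ṁ and the initial state are arbitrary choices:

    from scipy.integrate import solve_ivp

    a1, a2, gn = 2, 1, 1
    alpha, beta = 2, 0.5

    def flow(t, z, mdot):
        g, pe = z
        return [a1*(mdot - pe) + alpha*(a2*beta - a1)*(g - gn),   # Eq. (10.64)
                alpha*beta*(g - gn)]

    eq = solve_ivp(flow, [0, 50], [1.2, 0.8], args=(1,)).y[:, -1]
    after = solve_ivp(flow, [0, 50], eq, args=(2,)).y[:, -1]
    print(eq, after)   # GDP returns to gn while expected inflation rises to mdot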

The simple linearized examples of macroeconomics that have been described here are just the tip of the iceberg for economic modeling. There are many complexities, such as nonlinear dependences and delays in adjustments, that lead to complex dynamics. There are often unexpected consequences that defy intuition and that make it difficult to set fiscal and monetary policies. The lack of precise laws (the Phillips curve is just a rule of thumb, and only works over short time frames), together with ill-defined functional dependences, opens the door to subjective economic philosophies, i.e., politics. In this sense, economics does not need to be entirely "rational," but can be driven by perceived benefits as well as real benefits. However, the astute student should note that even philosophy and political inclinations can be captured by probabilistic payoff matrices of the type that were studied in Chapter 8 in the context of evolutionary dynamics, and hence even "irrational" behavior sometimes can be modeled quantitatively.

10.3 Business cycles

Business cycles are a common phenomenon in economics as prices and quantities oscillate. Some business cycles are driven literally by the changing seasons, which is a phase-locked process. For instance, winter creates a demand for winter coats, and summer creates a demand for swimsuits. Other types of business cycles are driven by nonlinear costs of materials or labor and by time lags in manufacturing or distribution networks. The two most common models for business cycles are Lotka–Volterra-type models, for which the dynamics are a "center," and limit-cycle-type models that are like the van der Pol oscillator.

10.3.1 Delayed adjustments and the cobweb model

Many economic time series (price trajectories) have a natural discrete-time nature because of day-to-day cycles. For instance, the opening price today depends on the closing price yesterday, and current demand may depend on the expected price tomorrow. Discrete maps naturally have delays and, when combined with nonlinearities, can lead to chaotic behavior. In the language of economics, discrete time-delay models are called cobweb models.⁶ A common cobweb model of supply and demand, with a time delay, assumes the demand is a decreasing linear function of the current price at the moment t in time:

qᴰt = a − b pt    (10.66)

Similarly, the supply is assumed to be an increasing function of price, but in this discrete-time case the supply depends on the previous price set during the previous sales period:

qˢt = c + d pt−1    (10.67)

This introduces a delay into the mathematical model that approximates the time it takes for a supplier to respond to an increased (or decreased) price by increasing (or decreasing) supply. The dynamical step that connects demand with the delayed supply is a process known as market clearing, when all quantities that are supplied are purchased. Therefore, at the time that the supplies are made available, they equal the demand, so

qᴰt = qˢt    (10.68)

6 So named by the British economist Nicholas Kaldor (1934) because the discrete trajectories often look like cobwebs.

The map can be expressed in terms of deviations from the equilibrium as

qᴰt − q* = −b(pt − p*)
qˢt − q* = d(pt−1 − p*)    (10.69)

where the equilibrium values are

p* = (a − c)/(b + d)
q* = (ad + bc)/(b + d)    (10.70)

which are the same expressions as for the continuous supply and demand case discussed previously. The motions of the price and quantity deviations from equilibrium are

ΔqS1 = d Δp0
ΔqD1 = −b Δp1 = d Δp0    (10.71)

with the new price related to the old price as

Δp1 = −(d/b) Δp0    (10.72)

Continuing the sequence,

ΔqS2 = d Δp1
ΔqD2 = −b Δp2 = d Δp1 = −(d²/b) Δp0    (10.73)

yields

Δp2 = (−d/b)² Δp0
...
ΔpN = (−d/b)ᴺ Δp0    (10.74)

where the Floquet multiplier is

M = −d/b    (10.75)

The discrete mapping iterates, and the prices can evolve to a stable fixed point where the price becomes steady, or to a center where the price oscillates, or to a growing spiral of prices, depending on the magnitude of the Floquet multiplier. The price movement converges on the equilibrium for |M| < 1.
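The cobweb iteration takes only a few lines of Python to implement; the coefficients below are arbitrary choices with |M| = d/b < 1, so the price spirals into the equilibrium of Eq. (10.70):

    def cobweb(a=10, b=2, c=1, d=1.5, p0=4.0, n=20):
        # market clearing a - b*p[t] = c + d*p[t-1], solved for the new price
        prices = [p0]
        for _ in range(n):
            prices.append((a - c - d*prices[-1])/b)
        return prices

    print(cobweb()[-1], (10 - 1)/(2 + 1.5))   # iterated price vs. p* = (a-c)/(b+d)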

A related class of business-cycle models describes coupled oscillations in two quantities u and ν tied to employment and wages. Such a cycle requires

1/K > α + β    (10.87)

meaning that the output yield per capital must be greater than the aggregate growth rate of output per labor and of the labor pool. The oscillations in this model are on growth rates rather than on the actual values, meaning that the economy continues to expand even in the presence of the oscillations, unless the amplitude of the oscillations exceeds the aggregate growth rate. The oscillations in u and ν represent a form of continuous time lag similar to that occurring in an LC circuit in electronics as charge oscillates between storage in a capacitor and current in an inductor. In this economic model, high profits occur when employment is at its average value, but the high profits lead to expansion of production, with the need for more labor. As labor wages rise, profits decline, production slows, and employment decreases until profits begin to increase again, restarting the cycle.

As a dynamic fixed point, centers are only marginally stable. In real-world economics, additional factors are likely to lead to negative feedback and damping,

causing a negative trace of the Jacobian and a stable spiral converging on a fixed point. However, it is also possible to have positive feedback, like gain in an oscillator, which can cause increasing amplitude of oscillation until limited by other factors, as for a limit-cycle oscillator.

10.3.3 Business cycles as limit cycles

A model closely related to the IS-LM model incorporates nonlinearity and supports a limit cycle that can model business cycles. Just as for the IS-LM model, stable economic behavior occurs for an IS curve with negative slope and an investment curve with positive slope. Savings in the economy generally decrease as output increases. However, for small increases in production near the equilibrium point, market inefficiencies can allow capital stock to increase with increasing production, which would lead to an unstable spiral in capital and output. Yet as the gyrations increase in amplitude, the general trend of negative slope in the IS curve would limit this growth and establish a limit-cycle oscillation. Production adjustments mirror the IS curve as excess investment over savings leads to additional output. The capital stock adjustment is equal to investment, and increased investment produces higher returns. The dynamical flow is

Ẏ = α[I(Y, K) − S(Y, K)]
K̇ = I(Y, K)    (10.88)

where I is investment, S is savings, Y is output, and K is capital stock. An explicit model has the flow equations⁸

Ẏ = −a(K − K0) + b(Y − Y0) − c(Y − Y0)³
K̇ = −d(K − K0) + e tanh[g(Y − Y0)]    (10.89)

with a normalized form relative to the equilibrium point given by

ẏ = −ak + by − cy³
k̇ = −dk + e tanh(gy)    (10.90)

The first equation is a cubic that has positive slope (creating an unstable spiral) near the point of intersection with the second equation, but at larger excursions the overall negative trend is established, stabilizing the dynamics into a limit cycle. The hyperbolic tangent in the second equation forces the rate of change of the capital stock to saturate. The Jacobian evaluated at the origin is

J(0,0) = [ b     −a
           eg    −d ]    (10.91)

with

Δ = −bd + aeg
τ = b − d

8 The model is after Kaldor as described in Gandolfo (2010), p. 446.

Figure 10.10 The Kaldor model of business cycles, plotted as capital stock K versus output Y, showing the limit cycle together with the curves where investment equals savings and where investment is zero.

This is either an unstable spiral or a saddle, depending on the sign of the determinant. Typically, it is an unstable spiral that spirals out to a limit cycle as shown in Fig. 10.10. This is like an IS-LM model with positive IS slope (unstable spiral), but limited by higher-order nonlinearity in the IS curve. The positive slope on investment-savings is local, but overall investment-savings falls with output Y.
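The spiral out to the limit cycle is easy to reproduce by integrating Eq. (10.90). The coefficients in this Python sketch (SciPy assumed) are illustrative assumptions chosen so that the origin is an unstable spiral (τ = b − d > 0 and Δ = −bd + aeg > 0):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c, d, e, g = 1.0, 0.5, 1.0, 0.3, 1.0, 2.0   # assumed coefficients

    def kaldor(t, z):
        y, k = z
        return [-a*k + b*y - c*y**3,          # Eq. (10.90)
                -d*k + e*np.tanh(g*y)]

    sol = solve_ivp(kaldor, [0, 100], [0.01, 0.0], max_step=0.05)
    print(sol.y[0].min(), sol.y[0].max())     # output stays bounded on a limit cycle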

10.3.4 Chaotic business cycles

In inventory businesses, there can be cycles in inventory stocks in response to production rates. The cycles are driven by time lags and nonlinearity. There is often a time lag between the desired inventory stock and the actual stock. A simple lag equation assumes that there is a lag time τ between the desired inventory stock and the actual stock B:

ẏ = (1/τ)(b ye − B)    (10.92)

where y is the sales, and the desired stock is given by the product b ye, where ye is the expected sales. The expected sales depend on the level of sales y, but can also depend on the time rate of change of sales as well as on the acceleration in sales. This is captured through the equation

ye = a2 ÿ + a1 ẏ + y    (10.93)

The nonlinearity in the problem arises from an optimal production rate for a nominal investment. If sales and production are low, then the rate of change of the stock increases with production. However, if the production is too high (high sales), then there is insufficient investment to support increased stocks. This leads to an adjustment equation for the inventory stocks that depends on sales as

Ḃ = m y(1 − y)    (10.94)

These equations define a three-dimensional flow

V̇ = c3 B − c2 V − c1 y
Ḃ = (r/c3) y(1 − y)
ẏ = V    (10.95)

where the coefficients are

c3 = r/m = 1/(b a2) > 0
c2 = (b a1 − τ)/(b a2)
c1 = 1/a2 > 0

There are two equilibria: one at y = 0 and one at y = 1. The Lyapunov exponents at y = 1 are either all negative (stable), or else there is one negative real value and a complex-conjugate pair, leading to cyclic behavior and the potential for chaos. The condition for the transition from stable behavior to periodic solutions (through a Hopf bifurcation) is

c1 c2 < m c3

Note that c2 decreases as the lag increases. If the lag is very short (rapid adjustment to sales), then there is a stable equilibrium. However, in real distribution systems, there is always a time lag between production and sales, which can lead to a limit-cycle oscillation in the dynamics between sales and stock inventory.

Example 10.6 Inventory cycles
The dynamics of sales and stocks in Eq. (10.95), as the control parameter r increases, are shown in Fig. 10.11. When r = 0.3 (small gain on the nonlinearity for fixed c3 and c2), there is a stable fixed point. As the nonlinear gain r increases to r ≈ 0.4, a Hopf bifurcation occurs as the fixed point becomes unstable, but the trajectories are entrained by a limit cycle. This limit cycle expands with increasing gain until a period-doubling event occurs at r = 0.715. With increasing r, a bifurcation cascade brings the system to chaotic behavior above r = 0.8. This example shows how a stable equilibrium converts to a business cycle and then to chaotic cycles with increasing nonlinear gain of the stock inventory adjustments. Business cycles and chaos are possible in this nonlinear problem because of the acceleration term in Eq. (10.93), which increases the dimensionality of the dynamics to three dimensions (required for chaos), and because of the lag between production and sales in Eq. (10.92).


Figure 10.11 Production and stock cycles for increasing values of r for c1 = 1, c2 = 0.4, and c3 = 0.5. There is a Hopf bifurcation at r = 0.4. Period doubling begins at r = 0.715 and leads through a bifurcation cascade to chaos.
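The flow of Eq. (10.95) can be integrated with the coefficients quoted in Fig. 10.11; sweeping the control parameter r traces the route from fixed point to chaos (the initial conditions and sampling window in this Python sketch are arbitrary choices):

    from scipy.integrate import solve_ivp

    c1, c2, c3 = 1.0, 0.4, 0.5        # coefficients of Fig. 10.11

    def inventory(t, z, r):
        V, B, y = z
        return [c3*B - c2*V - c1*y,   # Eq. (10.95)
                (r/c3)*y*(1 - y),
                V]

    for r in (0.3, 0.5, 0.8):
        sol = solve_ivp(inventory, [0, 2000], [0.0, 2.0, 0.95],
                        args=(r,), max_step=0.1)
        y = sol.y[2][sol.t > 1500]    # discard the transient
        print(r, y.min(), y.max())    # fixed point, then cycle, then chaotic band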

10.4 Random walks and stock prices [optional]

The price of a stock is a classic example of a stochastic time series. The trajectory of a stock price has many of the properties of a random walk, or Brownian motion. Although the underlying physics of fluctuating stock prices is not as direct as that of Brownian motion, the description in terms of random forces still applies. The analogy of a stock price to a random walk underlies many of today's investment strategies for derivatives and hedge funds.⁹

9 The character of this section is substantially different from that of the rest of the text. Stochastic calculus and random walks do not involve continuous functions like the flow equations that are the main themes of this book. This section is included here as an optional section because of the importance of this topic to econophysics, and because it introduces the idea of a noncontinuous trajectory.

10.4.1 Random walk of a stock price: efficient market model

Consider a stock that has a long-term average rate of return given by μ. The average value of the stock increases exponentially in time as

S = S0 exp(μt)    (10.96)

However, superposed on this exponential growth (or decay if the rate of return is negative) are day-to-day price fluctuations. The differential change in relative stock price, in the presence of these fluctuations, is given by

dS/S = d(ln S) = μ dt + √(2D) dW    (10.97)

where D plays the role of a diffusion coefficient, and the differential dW is a stochastic variable that represents the fluctuations.¹⁰ Fluctuations can arise from many sources, and may have underlying patterns (correlations) or not. The simplest models of stock prices assume that there are no underlying patterns in the fluctuations and that the stock price executes a random walk. This assumption is known as the efficient market hypothesis. It assumes that there is perfect information about the value of a stock, and hence the fluctuations are not driven by any "insider information" but by random fluctuations in the numbers of buyers and sellers. This condition of randomness and history independence is called a Markov process, and the differential dW is known as a Wiener process. The trajectory of a stock price, with day-to-day fluctuations, is modeled as a discrete process by the finite-difference approximation to Eq. (10.97):

xn+1 = xn + μ Δt + √(2D Δt) ξn    (10.98)

where the variable x = ln (S/S0 ), and ξn is a random value from a normal distribution with standard deviation of unity. Examples of stochastic stock-price trajectories are shown in Fig. 10.12. The dashed line is the average exponential growth. This type of trajectory is called a geometric random walk on the price S (or an arithmetic random walk on the variable x).
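The discrete map of Eq. (10.98) is straightforward to simulate in Python; the sketch below uses the parameter values quoted in Fig. 10.12 (with μ = r = 0.01, and an assumed starting price S0 = 10):

    import numpy as np

    mu, D, dt, S0 = 0.01, 0.001, 1.0, 10.0
    rng = np.random.default_rng(0)

    xi = rng.standard_normal(100)                  # unit-variance noise
    x = np.cumsum(mu*dt + np.sqrt(2*D*dt)*xi)      # Eq. (10.98)
    S = S0*np.exp(np.concatenate(([0.0], x)))      # geometric walk on the price
    print(S[-1], S0*np.exp(mu*100*dt))             # one realization vs. mean growth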

10 A stochastic variable, or Wiener process, is an ordered set of random values that is nowhere differentiable. The character of stochastic processes is fundamentally different from that of continuous-variable flows. However, the time evolution of stochastic systems can still be defined, and stochastic processes can be added to continuous flows.

10.4.2 Hedging and the Black–Scholes equation

Hedging is a venerable tradition on Wall Street. To hedge means that a broker sells an option (to purchase a stock at a given price at a later time) assuming that the stock will fall in value (selling short), and then buys, as insurance against the price rising, a number of shares of the same asset (buying long). If the broker balances enough long shares with enough short options, then the portfolio's value is insulated from the day-to-day fluctuations of the value of the underlying asset.

Figure 10.12 Examples of geometric random walks on a stock price, for D = 0.001, dt = 1, and r = 0.01. The dashed line is the average growth of the stock.

This type of portfolio is one example of a financial instrument called a derivative. The name comes from the fact that the value of the portfolio is derived from the value of the underlying assets. The challenge with derivatives is finding their "true" value at any time before they mature. If a broker knew the "true" value of a derivative, then there would be no risk in the buying and selling of derivatives. To be risk-free, the value of the derivative needs to be independent of the fluctuations. This appears at first to be a difficult problem, because fluctuations are random and cannot be predicted. But the solution actually relies on just this condition of randomness. If the random fluctuations in stock prices are equivalent to a random walk superposed on the average rate of return, then perfect hedges can be constructed with impunity.

The equation for a geometric random walk of a stock price is

dS/S = d(ln S) = μ dt + σ dW    (10.99)

where S is the price of the stock, μ is the average rate of return, and σ is the size of the fluctuations. A derivative based on the stock has a "true" value V(S, t) that depends on the price of the stock and on the time that the option will be called. To determine this value V(S, t), it is necessary to draw from a fundamental result of stochastic calculus known as Itô's lemma. Consider a random variable that executes a drift and random walk:

dx = a dt + b dW    (10.100)

If there is a function f(x, t) that depends on the random variable, then the differential of that function is given by

df = (∂f/∂t) dt + (∂f/∂x) dx + (1/2)(∂²f/∂x²) dx²
   = (∂f/∂t) dt + (∂f/∂x)(a dt + b dW) + (1/2)(∂²f/∂x²)(a dt + b dW)²
   = (∂f/∂t + a ∂f/∂x) dt + b (∂f/∂x) dW + (b²/2)(∂²f/∂x²) dW²    (10.101)

The Wiener process W is a function that is differentiable nowhere, but it has the property

dW² = dt    (10.102)

Using this result, and keeping only the lowest orders in differentials, gives

Itô's formula:
df = (∂f/∂t + a ∂f/∂x + (b²/2) ∂²f/∂x²) dt + b (∂f/∂x) dW    (10.103)

which is known as Itô's lemma. Itô's lemma forms the foundation of many results from stochastic calculus. To make the connection between Itô's lemma and derivative pricing, take the correspondences

x = S,  a = μS,  b = σS    (10.104)

and Eq. (10.100) then gives the geometric random walk

dS = μS dt + σS dW    (10.105)

The differential dV(S, t) of the value of a derivative is obtained from Itô's lemma:

dV = (μS ∂V/∂S + ∂V/∂t + (1/2)σ²S² ∂²V/∂S²) dt + σS (∂V/∂S) dW    (10.106)

The goal of a hedge is to try to zero out the uncertainties associated with fluctuations in the price of assets. The fluctuations in the value of the derivative arise in the last term in Eq. (10.106). The strategy is to combine the purchase of stocks and the sale of options in such a way that the stochastic element is removed. To make a hedge on an underlying asset, create a portfolio by selling one call option (selling short) and buying N shares of the asset (buying long) as insurance against the possibility that the asset value will rise. The value of this portfolio is

π(t) = −V(S, t) + N S(t)    (10.107)

If the number N is chosen correctly, then the short and long positions will balance, and the portfolio will be protected from fluctuations in the underlying asset price. To find N, consider the change in the value of the portfolio as the variables fluctuate:

dπ = −dV + N dS    (10.108)

and insert Eqs. (10.105) and (10.106) into Eq. (10.108) to yield

dπ = −(μS ∂V/∂S + ∂V/∂t + (1/2)σ²S² ∂²V/∂S²) dt − σS (∂V/∂S) dW + N(μS dt + σS dW)
   = (NμS − μS ∂V/∂S − ∂V/∂t − (1/2)σ²S² ∂²V/∂S²) dt + (N − ∂V/∂S) σS dW    (10.109)

Note that the last term contains the fluctuations. These can be zeroed out by choosing

N = ∂V/∂S    (10.110)

and then

dπ = −(∂V/∂t + (1/2)σ²S² ∂²V/∂S²) dt    (10.111)

The important observation about this last equation is that the stochastic function W has disappeared. This is because the fluctuations of the N share prices balance the fluctuations of the short option. When a broker buys an option, there is a guaranteed rate of return r at the time of maturity of the option, which is set by the value of a risk-free bond. Therefore, the price of a perfect hedge must increase with the risk-free rate of return. This is

dπ = rπ dt    (10.112)

or

dπ = r(−V + S ∂V/∂S) dt    (10.113)

Equating Eq. (10.111) with Eq. (10.113) gives

dπ = −(∂V/∂t + (1/2)σ²S² ∂²V/∂S²) Δt = r(−V + S ∂V/∂S) Δt    (10.114)

Simplifying, this leads to a partial differential equation for V(S, t), the Black–Scholes equation:¹¹

Black–Scholes equation:
∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0    (10.115)

The Black–Scholes equation is a partial differential equation whose solution, given boundary conditions and time, defines the "true" value of the derivative and determines how many shares to buy at t = 0 at a specified guaranteed return rate r (or, alternatively, at a specified stock price S(T) at the time of maturity T of the option). It is a diffusion equation that incorporates the diffusion of the stock price with time. If the derivative is sold at any time t prior to maturity, when the stock has some value S, then the value of the derivative is given by V(S, t) as the solution to the Black–Scholes equation.¹²

One of the interesting features of this equation is the absence of the mean rate of return μ (in Eq. (10.99)) of the underlying asset. This means that any stock of any value can be considered, even if the rate of return of the stock is negative! This type of derivative looks like a truly risk-free investment. You would be guaranteed to make money even if the value of the stock falls, which may sound too good to be true . . . which of course it is. The success (or failure) of derivative markets depends on fundamental assumptions about the stock market. These include that it would not be subject to radical adjustments or to panic or irrational exuberance, which is clearly not the case. Just think of booms and busts. The efficient and rational market model, and ultimately the Black–Scholes equation, assumes that fluctuations in the market are governed by Gaussian random statistics. When these statistics fail, the derivatives fail, which they did in the market crash of 2008.

The main focus of this textbook has been on trajectories, which are captured in simple mathematical flow equations. The Black–Scholes equation does not describe a trajectory. It is a partial differential equation that must be solved by satisfying boundary conditions, which is outside the scope of this book. However, because it plays a central role in econophysics, an example of its solution is given here. One of the simplest boundary conditions is for a European call option, for which there is a spot price that is paid now and a strike price that will be paid at a fixed time in the future. The value of the derivative for a given strike price K is a function of the spot price S and the time to maturity T:

V(S, T) = (1/2)[1 + erf(d1/√2)] S − (e^(−rT)/2)[1 + erf(d2/√2)] K    (10.116)

11 Black–Scholes is a partial differential equation (PDE) rather than an ordinary differential equation (ODE). This book has exclusively focused on ODEs (flows), but deviates here because of the importance of Black–Scholes to econophysics.
12 The equation was first derived by Fischer Black and Myron Scholes in 1973, with contributions by Robert Merton. Scholes and Merton won the Nobel Prize in economics in 1997 (Black died in 1995).

where r is the guaranteed rate of return. The arguments of the error functions are

d1 = (1/(σ√T)) [ln(S/K) + (r + σ²/2) T]
d2 = d1 − σ√T    (10.117)

Figure 10.13 The Black–Scholes solution for a European call option. r = 0.15, time = 0–2, price = 0–2, σ = 0.5.

where σ is the stock volatility. A solution of V (S, T ) for a strike price K = 1 as a function of the spot price and time to maturity is shown in Fig. 10.13 for a guaranteed rate of return r = 0.15 and a volatility σ = 0.5.
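Equations (10.116) and (10.117) are simple to evaluate numerically; this Python sketch (SciPy assumed) computes one point of the value surface in Fig. 10.13:

    import numpy as np
    from scipy.special import erf

    def call_value(S, K, T, r, sigma):
        # European call value from Eqs. (10.116)-(10.117)
        d1 = (np.log(S/K) + (r + 0.5*sigma**2)*T)/(sigma*np.sqrt(T))
        d2 = d1 - sigma*np.sqrt(T)
        return (0.5*(1 + erf(d1/np.sqrt(2)))*S
                - 0.5*np.exp(-r*T)*(1 + erf(d2/np.sqrt(2)))*K)

    print(call_value(S=1.2, K=1.0, T=1.0, r=0.15, sigma=0.5))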

10.4.3 Econophysics

Within the broader field of mathematical economics, econophysics is a subfield that emerged out of the physics of statistical mechanics, drawing on concepts from critical phenomena, disordered systems, scaling, and universality.¹³ It deals fundamentally with stochastic processes, as in the Black–Scholes theory, but it takes a more general approach to the role played by probability distributions in economic processes. Econophysics often studies probability distributions that have power-law behavior at large values of the random variable. For instance, the probability in the tail may behave as

P(|x|) ∝ 1/|x|^(1+α)    (10.118)

Such distributions are said to have heavy tails, because the probability falls more slowly than exponential for large arguments. Heavy tails on a distribution cause rare but high-amplitude events that are referred to as outliers and sometimes as “black swans.” These events are fundamentally part of the distribution and are not anomalies, but can have a disproportionate effect when attempts are made to calculate variances or even mean values and therefore are important contributors to economic fluctuations.
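The practical effect of a heavy tail is easy to demonstrate: the running mean of heavy-tailed samples never settles down, while the Gaussian running mean converges. The Python sketch below uses NumPy's standard Cauchy sampler as the heavy-tailed example (this distribution appears below as the α = 1 Lévy case); the sample size and seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = np.arange(1, 100001)

    gauss = np.cumsum(rng.standard_normal(n.size))/n    # running mean, Gaussian
    heavy = np.cumsum(rng.standard_cauchy(n.size))/n    # running mean, Cauchy
    print(gauss[-1], heavy[-1])   # the Gaussian mean settles; the Cauchy mean wanders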

13 Eugene Stanley coined the term “econophysics” in 1995 at a conference talk in Calcutta. The first econophysics meeting was held in 1998 in Budapest.

In probability theory, a class of distributions is called stable if the sum of two independent random variables drawn from the distribution has the same distribution. The normal (Gaussian) distribution clearly has this property, because the sum of two normally distributed independent variables is also normally distributed. The variance and possibly the mean may be different, but the functional form is still Gaussian. The general form of a probability distribution can be obtained by taking a Fourier transform as

P(x) = (1/2π) ∫_{−∞}^{∞} φ(k) e^(−ikx) dk    (10.119)

where φ(k) is known as the characteristic function of the probability distribution. A special case of a stable distribution is the Lévy symmetric stable distribution obtained as

PL(x) = (1/π) ∫_{0}^{∞} e^(−γ q^α) cos(qx) dq    (10.120)

and characterized by the parameters α and γ. The characteristic function in this case is called a stretched exponential. The Lévy distribution has a power-law tail at large values, given by Eq. (10.118), but for smaller values it has a characteristic length scale set by the parameter γ. The special case of the Lévy distribution for α = 2 is a normal distribution. The special case of the Lévy distribution for α = 1 is the Cauchy distribution, given by

PC(x) = (1/π) γ/(γ² + x²)    (10.121)

14 The central limit theorem holds if the mean value of a number N of samples converges to a stable value as the number of samples increases to infinity.

The Cauchy distribution is normalizable (probabilities integrate to unity) and has a characteristic scale set by γ , but it has a divergent mean value, violating the central limit theorem.14 For distributions that satisfy the central limit theorem, increasing the number of samples from the distribution allows the mean value to converge on a finite value. For the Cauchy distribution, on the other hand, increasing the number of samples increases the chances of obtaining a black swan, which skews the mean value to larger values as the mean value diverges to infinity in the limit of an infinite number of samples. Examples of Lévy stable probability distribution functions are shown in Fig. 10.14(a) for a range between α = 1 (Cauchy) and α = 2 (Gaussian). The heavy tail is seen even for the case α = 1.99 close to the Gaussian distribution. Examples of two-dimensional Lévy walks are shown in Figs. 10.14(b), (c), and (d) for α = 1, 1.4, and 2. In the case of the Gaussian distribution, the meansquared displacement is finite. However, for the other cases, the mean-squared


[Figure 10.14: Lévy stable probability distribution functions between α = 1 (Cauchy) and α = 2 (Gaussian). The heavy tail is seen even for α = 1.99, close to the Gaussian case. Two-dimensional Lévy walks are shown for N = 10⁴ steps for Gaussian, Cauchy, and Lévy (α = 1.4) distributions of mean-free paths.]

An early and famous example of a power-law distribution in economics is the Pareto distribution

$$P_P(x) = \begin{cases} \dfrac{\alpha x_0^{\alpha}}{x^{1+\alpha}} & x \ge x_0 \\ 0 & x < x_0 \end{cases} \tag{10.122}$$

[...]

12.2.3.1 Time dilation

Because γ > 1, the time duration observed in O is longer than what is observed in O′ (the rest frame of the clock), and hence moving clocks tick slowly. One way to visualize moving clocks running slowly is with a "light clock." A light clock in its rest frame is shown in Fig. 12.5(a). A light pulse originates at the bottom and travels a distance L to the top mirror, where it is reflected and returns to the bottom. The time for the round trip is Δt′ = 2L/c. When the same clock is viewed moving past in the laboratory frame in Fig. 12.5(b), the light pulse must travel a longer distance. It is still traveling at the speed of light, so it takes longer to perform the round trip. The total time is

$$\Delta t = \frac{2}{c}\sqrt{L^2 + (v\,\Delta t/2)^2} \tag{12.14}$$

[Figure 12.5: (a) A light clock with a round trip Δt′ = 2L/c in the clock rest frame. (b) When viewed in the lab frame, the light path is longer, so the clock ticks slower.]

Solving this for Δt gives

$$\begin{aligned} \Delta t^2 &= \frac{4}{c^2}\left(L^2 + (v\,\Delta t/2)^2\right) \\ \Delta t^2\left(1-\beta^2\right) &= \left(\frac{2L}{c}\right)^2 \\ \Delta t &= \gamma\, \Delta t' \end{aligned} \tag{12.15}$$

which is the time dilation.

12.2.3.2 Length contraction

Another consequence of the Lorentz transformation is that objects moving at relativistic speeds appear shortened in the direction of motion. This is derived from Eq. (12.3) in the standard configuration in which a moving meter stick is oriented along the x-axis. The primed frame is the rest frame of the meter stick, which has a proper length L′. The unprimed frame is the laboratory frame relative to which the meter stick is moving with speed v. The meter stick is measured at t = 0 when the trailing edge of the meter stick is at the origin of both frames at x′_Trail = x_Trail = 0 and t′_Trail = t_Trail = 0. Using the first equation in Eq. (12.3) for the leading edge of the meter stick,

$$t = \gamma\left(t'_{\rm Lead} + \frac{v}{c^2}\,x'_{\rm Lead}\right) = 0 \qquad\Rightarrow\qquad t'_{\rm Lead} = -\frac{v}{c^2}\,x'_{\rm Lead} \tag{12.16}$$

and inserting the time of the measurement in the moving frame into the second equation of Eq. (12.3) gives

$$x_{\rm Lead} = \gamma\left(x'_{\rm Lead} - \frac{v^2}{c^2}\,x'_{\rm Lead}\right) = \gamma\, x'_{\rm Lead}\left(1 - \frac{v^2}{c^2}\right) = \frac{x'_{\rm Lead}}{\gamma} \tag{12.17}$$

Therefore, the measured length in the fixed frame is

$$L = x_{\rm Lead} - x_{\rm Trail} = \frac{x'_{\rm Lead}}{\gamma} = \frac{L'}{\gamma} \tag{12.18}$$

and the meter stick is observed to be shorter by a factor of γ. Hence, moving objects appear shortened in the direction of motion.

12.2.3.3 Velocity addition

Velocity addition arises from the problem of describing how two observers see the same moving object. For instance, in the primed frame, an observer sees a speed u′_x, while in the unprimed frame, an observer sees the same object moving with a speed u_x. The speed of the primed frame relative to the unprimed frame is v. The speed u′_x is obtained using the differential expressions of Eq. (12.3):

$$\frac{dx'}{dt'} = \frac{\gamma\,(dx - v\,dt)}{\gamma\left(dt - \dfrac{v}{c^2}\,dx\right)} \tag{12.19}$$

Dividing the numerator and denominator on the right-hand side by dt gives

$$\frac{dx'}{dt'} = \frac{dx/dt - v}{1 - \dfrac{v}{c^2}\,dx/dt} \tag{12.20}$$

With u_x = dx/dt, this is simply

$$u'_x = \frac{u_x - v}{1 - \dfrac{u_x v}{c^2}} \tag{12.21}$$

The same procedure applies to transverse velocities:

$$\frac{dy'}{dt'} = \frac{dy}{\gamma\left(dt - \dfrac{v}{c^2}\,dx\right)} = \frac{dy/dt}{\gamma\left(1 - \dfrac{v}{c^2}\,\dfrac{dx}{dt}\right)} \tag{12.22}$$

and the full set of equations is

$$u'_x = \frac{u_x - v}{1 - \dfrac{u_x v}{c^2}} \qquad u'_y = \frac{u_y}{\gamma\left(1 - \dfrac{u_x v}{c^2}\right)} \qquad u'_z = \frac{u_z}{\gamma\left(1 - \dfrac{u_x v}{c^2}\right)} \tag{12.23}$$
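A short numerical sketch (illustrative, with c = 1; not code from the text) shows how Eq. (12.21) keeps combined speeds below the speed of light:

```python
def velocity_addition(ux, v, c=1.0):
    # u'_x observed in the primed frame, Eq. (12.21)
    return (ux - v) / (1.0 - ux * v / c**2)

# An object moving at +0.9c viewed from a frame moving at -0.9c:
# the two speeds "add" to only 0.9945c, never exceeding c.
print(velocity_addition(0.9, -0.9))
```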

The inverse relationships between O and O′ simply reverse the sign of the relative speed v. Examples of velocity addition are shown in Fig. 12.6. The resultant observed speeds never exceed the speed of light.

12.2.3.4 Lorentz boost

Relativistic velocity addition leads to the Lorentz boost, also known as the headlight effect (Fig. 12.7). When two relative observers see a light ray inclined at an oblique angle relative to the direction of frame motion, they observe the light ray making different angles.

[Figure 12.6: Observed speed β′_x in the O′ frame for an object moving along the x-axis with a speed β_x in the O frame. The different curves are marked with the relative speed v between the frames. (veladd.qpc)]

If the angle observed in the primed frame is given by

$$\cos\theta' = \frac{\Delta x'}{c\,\Delta t'} \tag{12.24}$$

then the angle in the unprimed frame is

$$\cos\theta = \frac{\Delta x}{c\,\Delta t} = \frac{\gamma\left(\Delta x' + v\,\Delta t'\right)}{c\,\gamma\left(\Delta t' + v\,\Delta x'/c^2\right)} = \frac{\Delta x'/(c\,\Delta t') + \beta}{1 + \beta\,\Delta x'/(c\,\Delta t')} \tag{12.25}$$

Re-expressing this in terms of the primed-frame angle gives

$$\cos\theta = \frac{\cos\theta' + \beta}{1 + \beta\cos\theta'} \tag{12.26}$$

This is the transformation rule to transform a light ray angle from one frame to another. The reason it is called the headlight effect is that an object that emits isotropically in its rest frame will appear as a forward-directed headlight in the unprimed frame if it is moving at highly relativistic speeds. As the relative speed approaches the speed of light, cos θ approaches unity for any angle θ′, and the rays are compressed into the forward direction along the line of motion.1


[Figure 12.7: The headlight effect for β = 0.8. In the rest frame, an isotropic light source emits uniformly in all directions. The same light source, observed in the lab frame, has stronger emission in the forward direction, like a headlight.]

12.2.3.5 Doppler effect

The Doppler effect is an important effect in astrophysics, and it also leads to effects that are easily measured in the laboratory even for very low speeds that are far from the speed of light. For instance, green light reflected from a mirror moving at only 1 cm/s is shifted in frequency by 20 kHz, which is easily measured on an oscilloscope using an interferometer. The Doppler effect is derived from the Lorentz transformation by considering the wavelength measured by two relative observers. In the primed frame, which is taken as the rest frame of a light emitter, the measured wavelength is

$$\lambda' = c\,\Delta t' \tag{12.27}$$

But in the unprimed frame, the emitter moves with a speed u during the emission of a single wavelength, and the resulting wavelength that is observed during an emission time is

$$\lambda = c\,\Delta t \mp u\,\Delta t = (c \mp u)\,\Delta t \tag{12.28}$$

1 This principle is used in X-ray light sources in so-called wigglers and undulators that cause transverse accelerations and radiation of X-rays, which experience a Lorentz boost to form strongly collimated X-ray beams.

[Figure 12.8: Relative Doppler ratio ν/ν′ showing redshifts and blueshifts for a source moving away from, or toward, the observer.]

where the minus sign is for a source moving in the same direction as the emission direction, and vice versa. The emission time is related to the emission time in the rest frame by time dilation, which gives

$$\lambda = (c \mp u)\,\gamma\,\Delta t' = \gamma\,(1 \mp \beta)\,\lambda' \tag{12.29}$$

to give a change in wavelength

$$\lambda = \sqrt{\frac{1 \mp \beta}{1 \pm \beta}}\;\lambda' \tag{12.30}$$

or a change in frequency (Fig. 12.8)

$$\nu = \sqrt{\frac{1 \pm \beta}{1 \mp \beta}}\;\nu' \tag{12.31}$$

The plus sign in this last equation is for blueshifted light, and the minus sign for redshifted light.
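The mirror example quoted above can be checked with a few lines (a sketch, not from the text; the 532 nm wavelength for green light is an assumed value):

```python
import numpy as np

def doppler_ratio(beta, approaching=True):
    # Frequency ratio nu/nu' from Eq. (12.31); approaching -> blueshift
    s = 1.0 if approaching else -1.0
    return np.sqrt((1.0 + s * beta) / (1.0 - s * beta))

beta = 0.01 / 2.998e8          # mirror speed of 1 cm/s
nu = 2.998e8 / 532e-9          # green light, ~5.6e14 Hz (assumed wavelength)
print((doppler_ratio(beta) - 1.0) * nu)   # ~1.9e4 Hz, the order of the
                                          # 20 kHz shift quoted in the text
```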

12.3 Metric structure of Minkowski space

Minkowski space is a flat space without curvature, but it is not a Euclidean space. It is pseudo-Riemannian and has some unusual properties, such as "distances" that can be negative. Its geometric properties are contained in its metric tensor.


12.3.1 Invariant interval

For two events, consider the quantity

$$(\Delta s)^2 = \sum_{j=1}^{3}\left(\Delta x'^{\,j}\right)^2 - c^2\,\Delta t'^{\,2} \tag{12.32}$$

How does it transform? Individually,

$$\left(\Delta x'\right)^2 = \gamma^2\,(\Delta x - v\,\Delta t)^2 \qquad c^2\,\Delta t'^{\,2} = c^2\gamma^2\left(\Delta t - \frac{v}{c^2}\,\Delta x\right)^2 \tag{12.33}$$

and combining these gives

$$\begin{aligned} \Delta x'^{\,2} - c^2\Delta t'^{\,2} &= \gamma^2\left(\Delta x^2 - 2v\,\Delta t\,\Delta x + v^2\Delta t^2 - c^2\Delta t^2 + 2v\,\Delta t\,\Delta x - \frac{v^2}{c^2}\Delta x^2\right) \\ &= \gamma^2\left(\Delta x^2 - c^2\Delta t^2\right) + \gamma^2\beta^2\left(c^2\Delta t^2 - \Delta x^2\right) \\ &= \gamma^2\left(1-\beta^2\right)\left(\Delta x^2 - c^2\Delta t^2\right) = \Delta x^2 - c^2\Delta t^2 \end{aligned} \tag{12.34}$$

Therefore,

$$\Delta s^2 = \Delta x'^{\,2} - c^2\,\Delta t'^{\,2} = \Delta x^2 - c^2\,\Delta t^2 \tag{12.35}$$

where Δs² between the same two events takes on the same value in all reference frames, no matter what the relative speeds are. The quantities Δx and Δt need not be small, and can have any magnitude. The invariant interval leads to the closely associated idea of an invariant hyperbola. For a given Δs, the equation

$$\Delta s^2 = x'^{\,2} - c^2 t'^{\,2} = x^2 - c^2 t^2 \tag{12.36}$$

describes a hyperbola in Minkowski space. All points on a given hyperbola are “equidistant” from the origin. Lorentz transformations keep an event always on the same hyperbola. In Fig. 12.9, a set of events (the solid circles) that are simultaneous in the rest frame of a moving system is transformed into coordinates observed by a fixed observer (the open circles). The Lorentz transformation transforms each event onto a new point on its own hyperbola, because the metric “distance” s between two events is an invariant property, unaffected by transformation. For each event, the associated hyperbolas in the fixed and moving frames coincide—indeed, it does not matter what frame is used to view the hyperbola—but the individual event locations on each hyperbola shift as the frame of view changes.
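The invariance is easy to verify numerically. The following sketch (assuming NumPy, with c = 1; not code from the text) boosts an event to several frames and shows that it stays on the same hyperbola:

```python
import numpy as np

def boost(beta):
    # Lorentz transformation acting on the pair (ct, x)
    g = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[g, -g * beta], [-g * beta, g]])

event = np.array([2.0, 3.0])     # (ct, x)
for beta in (0.0, 0.5, 0.9):
    ct, x = boost(beta) @ event
    print(beta, x**2 - ct**2)    # Delta-s^2 of Eq. (12.35): always 5.0
```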

[Figure 12.9: Graph showing invariant hyperbolas. All points on the same hyperbolic curve are the same "distance" from the origin. The solid points are simultaneous events in the primed frame. The open circles are the same events viewed in the unprimed frame. A given event, when transformed between frames, remains always on the same hyperbola.]

The invariant interval suggests the choice for a differential line element in Minkowski space (see Chapter 11) as

$$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 = g_{ab}\,dx^a\,dx^b \tag{12.37}$$

where

$$g_{ab} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{12.38}$$

2 The metric tensor is not strictly a matrix, because it has two covariant indices. However, it is convenient to give it matrix form as an operator that can multiply 4-vectors.

is the Minkowski metric expressed in matrix form.2 The determinant of the Minkowski metric equals −1, so this metric is known as pseudo-Riemannian. It defines a metric "distance" on the space, but this "distance" can take on positive, zero, or negative values. Therefore, although Minkowski space is flat, it is not Euclidean with Cartesian distances. In spite of its unusual geometry, the metric tensor plays all the same roles in Minkowski space that it does in Riemannian geometry. The inner product is defined in terms of g_ab through Eq. (12.37). Similarly, a position 4-covector is obtained from a vector as

$$x_b = g_{ab}\,x^a \tag{12.39}$$

or

$$\begin{pmatrix} -ct & x & y & z \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix} \tag{12.40}$$

The Minkowski matrix in Eq. (12.38) converts a column vector into another column vector, but the latter is represented here as a row vector (with a change in sign on the first element of the position 4-vector) as a reminder that it is a covector.

12.3.2 Proper time and dynamic invariants

The invariant interval leads to an important quantity called proper time. The word "proper" here is from the French propre, which means "one's own." It is also known as the rest-frame time, such as the mean decay lifetime for a radioactive particle, or the ticking of a well-made clock. These are physical properties of the particles or the apparatus and therefore are invariant. When two events are separated in time, but occur at the same location in space, the invariant interval is

$$\Delta s^2 = -c^2\,\Delta t'^{\,2} = -c^2\,d\tau^2 \tag{12.41}$$

The proper time is then

$$d\tau = \sqrt{\frac{-\Delta s^2}{c^2}} \tag{12.42}$$

where only the positive square root is taken. By time dilation, it is also

Proper time interval: $$d\tau = \frac{1}{\gamma}\,dt \tag{12.43}$$

The proper time plays an important role when defining other 4-vectors beyond the position 4-vector. For instance, the velocity 4-vector is obtained from the position 4-vector by taking the derivative with respect to the proper time:

$$v^a = \frac{dx^a}{d\tau} = \gamma \begin{pmatrix} c \\ u^x \\ u^y \\ u^z \end{pmatrix} \tag{12.44}$$

The derivative with respect to proper time produces an object with the same transformation properties as the position 4-vector. If the derivative had been taken with respect to time, the corresponding 4-velocity would not have transformed the same way as a 4-vector, because the time in the denominator would also be transformed. When the 4-velocity is defined in this way by using the invariant interval, the inner product of the 4-velocity with itself is

$$g_{ab}\,v^a v^b = -\gamma^2 c^2 + \gamma^2 u^2 = -\gamma^2\left(c^2 - u^2\right) = -c^2 \tag{12.45}$$

which is clearly an invariant quantity. This result is dictated by the metric properties of any physical space. The length of a vector is an intrinsic scalar property and cannot be dependent on the choice of coordinate system. Scalars are just numbers, and must be the same in any frame. The 4-momentum of a particle is obtained readily from the 4-velocity by multiplying by the rest mass of the particle:

$$p^a = m\,v^a = \gamma m \begin{pmatrix} c \\ u^x \\ u^y \\ u^z \end{pmatrix} \tag{12.46}$$

The inner product of the 4-momentum with itself is

$$g_{ab}\,p^a p^b = -\gamma^2 m^2 c^2 + \gamma^2 m^2 u^2 = -m^2 c^2 \tag{12.47}$$

where m is the rest mass of the particle, and the inner product is again an invariant.
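Both invariants are easy to verify numerically (a sketch with c = 1 and an arbitrary rest mass; not code from the text):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric of Eq. (12.38)

def four_velocity(u, c=1.0):
    gamma = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
    return gamma * np.concatenate(([c], u))

v4 = four_velocity(np.array([0.3, 0.4, 0.0]))
p4 = 2.0 * v4                          # rest mass m = 2 (arbitrary units)
print(v4 @ eta @ v4)                   # -c^2 = -1, Eq. (12.45)
print(p4 @ eta @ p4)                   # -m^2 c^2 = -4, Eq. (12.47)
```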

12.4 Relativistic trajectories

In Chapter 11, a Lagrangian was constructed based on the invariant line element, as in Eq. (11.53), to establish the geodesic equation for geodesic curves in a manifold. For the case of a free particle in Minkowski space-time, one choice for a Lagrangian makes an analogy to kinetic energy through

$$L\left(x^a, \dot{x}^a\right) = -mc\sqrt{-g_{ab}\,\dot{x}^a \dot{x}^b} \tag{12.48}$$

where m is the rest mass of the particle, and the dot denotes the derivative with respect to proper time τ. There is no explicit potential energy, because the free particle experiences no forces. This Lagrangian leads to the geodesic equation by using Lagrange's equation parameterized by proper time,

$$\frac{d}{d\tau}\left(\frac{\partial L}{\partial \dot{x}^a}\right) - \frac{\partial L}{\partial x^a} = 0 \tag{12.49}$$

which gives

$$\frac{d}{d\tau}\left(m\,g_{ab}\,\dot{x}^b\right) - \frac{m}{2}\,\frac{\partial g_{bc}}{\partial x^a}\,\dot{x}^b \dot{x}^c = 0 \tag{12.50}$$

On dividing out the particle mass, changing the index a to d, and using ġ_db = ẋ^c ∂_c g_db, this becomes

$$g_{db}\,\ddot{x}^b + \frac{1}{2}\,\dot{x}^b\dot{x}^c\left(2\,\partial_c g_{db} - \partial_d g_{bc}\right) = 0 \tag{12.51}$$

Multiplying by the inverse metric g^{ad} and applying symmetry of the dummy indices gives

$$\ddot{x}^a + \frac{1}{2}\,\dot{x}^b\dot{x}^c\,g^{ad}\left(\partial_b g_{dc} + \partial_c g_{db} - \partial_d g_{bc}\right) = 0 \tag{12.52}$$

The final term is recognized as the Christoffel symbol, and hence this is simply the geodesic equation

$$\ddot{x}^a + \Gamma^a_{bc}\,\dot{x}^b\dot{x}^c = 0 \tag{12.53}$$

Therefore, the trajectories of particles in force-free motion satisfy the geodesic equation with respect to proper time. Although the particles have masses m, the mass divides out of Lagrange’s equations, yielding the familiar result that geodesic trajectories are independent of the particle mass.

12.4.1 Null geodesics

In relativity, both Special and General, the trajectories of photons hold a special place as a family of invariant curves. They are geodesics, but of a special kind called null geodesics. Once you have mapped out the light curves of a space-time, you have a fairly complete picture of how the space-time behaves, even in the case of strongly warped space-time around black holes. A metric space with the property g_ab V^a V^b ≥ 0 is positive-definite, and distances are calculated as we intuitively expect in two and three dimensions. Conversely, a metric space with the property g_ab V^a V^b < 0 is indefinite (also known as pseudo-Riemannian). Indefinite metric spaces can have curves with zero arc length. Recall that in Minkowski space, which is pseudo-Riemannian, every point on an invariant hyperbola is equidistant from the origin. Similarly, light curves have zero distance to the origin. These null geodesics have the property

$$ds^2 = 0 \tag{12.54}$$

along the curve. Because the path length is zero, null geodesics require parameterization by a variable σ rather than s:

$$\frac{d^2 x^a}{d\sigma^2} + \Gamma^a_{bc}\,\frac{dx^b}{d\sigma}\frac{dx^c}{d\sigma} = 0 \tag{12.55}$$

with the further constraint

$$g_{rs}\,\frac{dx^r}{d\sigma}\frac{dx^s}{d\sigma} = \left(\frac{ds}{d\sigma}\right)^2 = 0 \tag{12.56}$$

which is the constraint (11.65) with ds = 0. Null geodesics in Minkowski space-time are found by considering the Christoffel symbols Γ^a_bc = 0 that lead to the linear x^a = x^a_0 + v^a t. Furthermore, g_ab V^a V^b = 0 leads to

$$g_{ab}\left(x^a - x^a_0\right)\left(x^b - x^b_0\right) = 0 \tag{12.57}$$

which gives

$$(ct - ct_0)^2 = (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 \tag{12.58}$$

In the ct–x plane, these are lines at 45◦ , as shown in Fig. 12.10. The null geodesics in Minkowski space are the trajectories of photons.

[Figure 12.10: Null geodesics in Minkowski space. Photons have trajectories (world lines) that are oriented at 45°, bounding the light cone.]


12.4.2 Trajectories as world lines

Trajectories of massive particles are drawn on Minkowski space-time diagrams as "world lines." The particles can be subject to varying forces that cause them to accelerate or decelerate, but their speed always remains less than the speed of light, and hence on a space-time diagram their trajectories always have slopes greater than 45°. An example of a world line is shown in Fig. 12.11 for a relativistic harmonic oscillator. The trajectory is oscillatory, but the equation for this oscillator is not a simple linear equation.3 All pairs of points on a single connected world line have Δs² < 0 and hence have a "time-like" interval. These points with time-like intervals can be causally connected such that events can unambiguously be defined as occurring in the past or in the future relative to one another. A world line is constrained by a "light cone." On a space-time diagram, there can be events that are not causally connected, and hence cannot unambiguously be defined to be in the future or in the past relative to each other. Such descriptions as "future" and "past" are relative for these types of events and depend on the frame of reference. These events are separated by an interval Δs² > 0 and are said to have a "space-like" interval.

3 For a mathematical description of a relativistic anharmonic oscillator, see W. Moreau, R. Easther, and R. Neutze, American Journal of Physics 62, 531–535 (1994).

[Figure 12.11: World line of a massive particle oscillating relativistically. The slope of the space-time trajectory always has a magnitude greater than unity.]


12.5 Relativistic dynamics

Just as classical mechanics is traditionally divided into kinematics (how things move) and dynamics (why things move), relativistic mechanics has the same natural division. Relativistic dynamics is concerned with momenta and forces at relativistic speeds. In relativity, forces and accelerations do not share the transformation properties of position and velocity. Even in Galilean relativity, positions and velocities transform, but forces and accelerations are frame-invariant. This is a result of the fundamental principle of relativity: that physics is invariant to the choice of inertial frame. Since Newtonian physics is defined by forces and accelerations, these are consequently invariant to the choice of inertial reference frame. On the other hand, in special relativity, this simple invariance of acceleration and force vectors is not preserved, but must be replaced with invariant products of vectors and covectors. For this reason, forces and accelerations do have relativistic transformations, but their invariant products are the main objects of interest.

12.5.1 Relativistic energies

Kinetic energy is derived by considering the 3-force acting on a particle that is moving with a speed u relative to a fixed observer. Newton's Second Law is still related to the momentum of the particle as observed in the fixed laboratory frame:

$$\vec{F} = \frac{d\vec{p}}{dt} = \frac{d}{dt}\left(\gamma m \vec{u}\right) \tag{12.59}$$

By integrating the mechanical power, the work done on the particle is found to be

$$W = T = \int \frac{d}{dt}\left(\gamma m \vec{u}\right)\cdot\vec{u}\;dt = m\int_0^u u\; d(\gamma u) \tag{12.60}$$

Integrating by parts gives

$$T = \gamma m u^2 - m\int_0^u \frac{u\,du}{\sqrt{1 - u^2/c^2}} = \gamma m u^2 + mc^2\sqrt{1 - u^2/c^2} - mc^2 = \gamma mc^2 - mc^2 \tag{12.61}$$

From this result, it is simple to make the assignment

$$T = E - E_0 \tag{12.62}$$

for the kinetic energy of the particle, where the total energy is

$$E = \gamma mc^2 \tag{12.63}$$

and the rest energy of the particle is

$$E_0 = mc^2 \tag{12.64}$$

The kinetic energy can therefore also be expressed as

$$T = (\gamma - 1)\,mc^2 \tag{12.65}$$

These last three expressions are very useful when working with total, rest, and kinetic energy, as well as for mass–energy conversions.

Example 12.1 Mass–energy conversions

The equivalence between mass and energy enables conversion from one form to the other. In other words, energy can be converted to mass, and mass can be converted to energy, most notably in the form of fusion or fission energy. As an example of energy conversion to mass, consider the binding of a neutron and a proton in the nucleus of a deuterium atom. The energy of the separated constituents is

$$E_\infty = m\left({}^1_1\mathrm{H}\right)c^2 + m_n c^2 \tag{12.66}$$

where m_n is the mass of the neutron and m(¹₁H) is the mass of hydrogen, including the electron mass and the binding energy of the electron to the proton. The energy of a deuterium atom is similarly

$$E_D = m\left({}^2_1\mathrm{H}\right)c^2 \tag{12.67}$$

Therefore, the binding energy of deuterium is given by

$$\Delta E_B = E_\infty - E_D = m\left({}^1_1\mathrm{H}\right)c^2 + m_n c^2 - m\left({}^2_1\mathrm{H}\right)c^2 \tag{12.68}$$

The conversion from mass units to energy is

$$c^2 = 931.5\ \frac{\mathrm{MeV}}{\mathrm{u}} \tag{12.69}$$


where the mass unit, denoted by u, is defined by ¹²C, which has 12 mass units for 12 nucleons. Numerically, the energy difference for the deuteron is

$$\Delta E_B = (1.007825\,\mathrm{u} + 1.008665\,\mathrm{u} - 2.014102\,\mathrm{u})\left(931.5\ \frac{\mathrm{MeV}}{\mathrm{u}}\right) = 0.002388\,\mathrm{u}\left(931.5\ \frac{\mathrm{MeV}}{\mathrm{u}}\right) = 2.224\ \mathrm{MeV} \tag{12.70}$$

It is common to quote results such as this as the binding energy per nucleon, which is 1.11 MeV per nucleon in this case. In this example, the deuterium atom is lighter than the mass of its separated components. The missing mass comes from the binding energy of the neutron to the proton. This is a direct example of the conversion from mass to energy.
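The arithmetic of Eq. (12.70) can be reproduced directly (a sketch using the mass values quoted above):

```python
m_H = 1.007825    # hydrogen atom mass in u (value quoted in Example 12.1)
m_n = 1.008665    # neutron mass in u
m_D = 2.014102    # deuterium atom mass in u
c2  = 931.5       # MeV per u, Eq. (12.69)

dE = (m_H + m_n - m_D) * c2
print(dE, dE / 2)   # ~2.224 MeV binding energy, ~1.11 MeV per nucleon
```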

4 Leptons are one of the fundamental classes of massive particles and include the electron and positron as well as neutrinos. Electrons have positive lepton number +1, while positrons have negative lepton number −1. Hence, pair production conserves lepton number.

One of the consequences of mass–energy equivalence is the possibility of creating new matter out of energy. The creation of matter is governed by conservation laws that go beyond simple energy conservation and are topics in high-energy particle physics that go beyond the scope of this book. However, one of the conservation laws is the conservation of lepton number.4 If kinetic energy is to be converted to the creation of an electron, then the conservation of lepton number requires that an antielectron (known as a positron) also must be created.

Example 12.2 Antimatter pair production

As an example, consider a relativistic electron that collides head-on with an electron in a thin metal foil in the laboratory frame, as shown in Fig. 12.12. The kinetic energy of the incident electron can be converted into the creation of an electron–positron pair in addition to the original two electrons (the incident electron and the electron in the foil). The pair-production reaction is written as

$$2e^- \rightarrow 3e^- + e^+ \tag{12.71}$$

This reaction is most easily solved by considering it from the point of view of the center-of-mass (CM) frame. The initial energy in the CM frame is

$$E_i = 2\gamma' m_e c^2 \tag{12.72}$$

where the primes relate to the CM frame. The lowest energy for which this reaction can take place is set by the final state when all of the particles are stationary in the CM frame. In this case, the only energy is mass energy, and the final energy is

$$E_f = 4m_e c^2 \tag{12.73}$$

[Figure 12.12: Antimatter (positron) production through the energetic collision of two electrons. The upper panel shows an incident electron on a metal foil in the laboratory frame. The bottom panel shows the same process from the point of view of the center-of-mass frame.]

Equating the initial with the final energy gives

$$2\gamma' m_e c^2 = 4m_e c^2 \qquad\Rightarrow\qquad \gamma' = 2 \tag{12.74}$$

which establishes the Lorentz factor. For the original two electrons in the CM frame, the kinetic energy is

$$K' = \left(\gamma' - 1\right) m_e c^2 = 0.511\ \mathrm{MeV} \tag{12.75}$$


for each one. To find the speeds of the initial two electrons in the CM frame, solve for v′:

$$2 = \frac{1}{\sqrt{1 - v'^{\,2}/c^2}} \qquad\Rightarrow\qquad v'^{\,2} = \frac{3}{4}c^2 \qquad v' = 0.866\,c \tag{12.76}$$

These are transformed back into the laboratory frame where the electron in the metal foil is stationary, so that

$$u' = v' \tag{12.77}$$

and solving for the speed

$$v = \frac{v' + u'}{1 + \dfrac{u'v'}{c^2}} = 0.9897\,c \tag{12.78}$$

gives the Lorentz factor in the laboratory frame as

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 7 \tag{12.79}$$

for the incident electron. The original energy in the laboratory frame is

$$E_i = 7m_e c^2 + m_e c^2 = 8m_e c^2 \tag{12.80}$$

and the incident electron has a kinetic energy

$$K_i = 6m_e c^2 \tag{12.81}$$

The final total energy is shared equally among all four particles moving with speed v′,

$$E_f = \frac{4m_e c^2}{\sqrt{1 - v'^{\,2}/c^2}} = 8m_e c^2 \tag{12.82}$$

and the final kinetic energy is then

$$K_f = 8m_e c^2 - 4m_e c^2 = 4m_e c^2 \tag{12.83}$$

Therefore, positron production at threshold also has the unique property that all of the produced particles have the same velocity.
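The chain of results in Eqs. (12.74)–(12.79) can be verified in a few lines (a sketch with speeds in units of c; not code from the text):

```python
import numpy as np

gamma_cm = 2.0                            # threshold condition, Eq. (12.74)
v_cm = np.sqrt(1.0 - 1.0 / gamma_cm**2)   # 0.866c, Eq. (12.76)
v_lab = 2.0 * v_cm / (1.0 + v_cm**2)      # velocity addition, Eq. (12.78)
gamma_lab = 1.0 / np.sqrt(1.0 - v_lab**2)
print(v_lab, gamma_lab)                   # 0.9897c and ~7, Eq. (12.79)
```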

12.5.2 Momentum transformation

By using the identifications

$$\vec{p} = \gamma m\vec{u} \qquad\text{and}\qquad E = \gamma mc^2 \tag{12.84}$$

[Figure 12.13: Relativistic energies (total, rest, and kinetic) and their relationship to momentum, derived by constructing the invariant product of the 4-momentum: E² = (mc²)² + (pc)², with E = γmc², E₀ = mc², T = (γ−1)mc², and 4-momentum p^a = (E/c, p_x, p_y, p_z).]

the 4-momentum inner product can be expressed as

$$\gamma^2 m^2 c^2 = m^2 c^2 + \gamma^2 m^2 u^2 = m^2 c^2 + p^2 = \frac{E^2}{c^2} \tag{12.85}$$

which yields the important result

Energy, mass, and momentum: $$E^2 = \left(mc^2\right)^2 + p^2 c^2 \tag{12.86}$$

This relates total energy and rest energy to particle momentum. For a photon that has no rest mass, the simple (and Einstein's famous) relationship is

$$E = pc \tag{12.87}$$

The relativistic energies are summarized in Fig. 12.13, showing the geometric relationship between total energy, rest energy, and momentum. The 4-momentum of Eq. (12.46) can be expressed as

$$p^a = \begin{pmatrix} E/c \\ p_x \\ p_y \\ p_z \end{pmatrix} \tag{12.88}$$

where the invariant momentum product is given in Eq. (12.86). The 4-momentum is a 4-vector with the usual Lorentz transformation properties

$$p^{\bar{a}} = \Lambda^{\bar{a}}_{\ b}\,p^b = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0 \\ -\beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} E/c \\ p_x \\ p_y \\ p_z \end{pmatrix} = \begin{pmatrix} \gamma\left(E/c - \beta p_x\right) \\ \gamma\left(-\beta E/c + p_x\right) \\ p_y \\ p_z \end{pmatrix} \tag{12.89}$$

with the individual transformation equations

$$\begin{aligned} E' &= \gamma\,(E - v p_x) & E &= \gamma\left(E' + v p'_x\right) \\ p'_x &= \gamma\left(p_x - \frac{v}{c^2}E\right) & p_x &= \gamma\left(p'_x + \frac{v}{c^2}E'\right) \\ p'_y &= p_y & p_y &= p'_y \\ p'_z &= p_z & p_z &= p'_z \end{aligned} \tag{12.90}$$

that couple the time-like component (energy) to the space-like components (momentum). Conservation of 3-momentum plays a key role in high-energy scattering processes like Compton scattering.
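A short sketch (with c = 1; not code from the text) applies Eq. (12.89) and confirms that the invariant of Eq. (12.86) is unchanged by the boost:

```python
import numpy as np

def boost_p(p4, beta):
    # Lorentz transformation of (E/c, px, py, pz) along x, Eq. (12.89)
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.array([[g, -g * beta, 0, 0],
                  [-g * beta, g, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return L @ p4

p4 = np.array([5.0, 3.0, 1.0, 0.0])   # (E/c, p) with c = 1
for beta in (0.0, 0.6, 0.99):
    q = boost_p(p4, beta)
    print(beta, q[0]**2 - q[1]**2 - q[2]**2 - q[3]**2)   # (mc)^2 stays 15.0
```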

Example 12.3 Compton scattering

Compton scattering is the process by which a photon scatters off a stationary electron. The Feynman diagram for Compton scattering is shown in Fig. 12.14. Feynman diagrams are routinely used to illustrate the interactions among fundamental particles. They are like cartoons of space-time diagrams with position along one axis (here the horizontal axis) and time increasing along the other (here the vertical axis). In this Feynman diagram, the electron first absorbs the photon and then emits a new one. In the process, the individual energies and momenta of the photons and the electron change, but both energy and momentum are conserved between the initial and final states. The energy of the scattered electron and photon can be derived as a function of the scattering angle. Energy conservation establishes

$$E_{\rm init} = E_{\rm final} \qquad E + mc^2 = E' + E_e \tag{12.91}$$

where m is the electron mass, E is the incident photon energy, E′ is the energy of the scattered photon, and E_e is the energy of the scattered electron. The 3-momentum conservation gives

$$p = p_e\cos\phi + p'\cos\theta \qquad 0 = p_e\sin\phi - p'\sin\theta \tag{12.92}$$

[Figure 12.14: Feynman diagram of Compton scattering, in which a photon scatters off of an electron.]

where θ and φ are the scattering angles that define the direction of the scattered photon and electron, respectively. There are four unknowns (θ, φ, E_e, E′) but only three equations. It is possible to eliminate E_e and φ by asking only about properties of the photon. The momentum equations are

$$p_e\cos\phi = p - p'\cos\theta \qquad p_e\sin\phi = p'\sin\theta \tag{12.93}$$

By squaring each equation and adding them,

$$p_e^2 = p^2 - 2pp'\cos\theta + p'^{\,2}\cos^2\theta + p'^{\,2}\sin^2\theta = p^2 - 2pp'\cos\theta + p'^{\,2} \tag{12.94}$$

the energy of the electron is

$$E_e^2 = c^2 p_e^2 + m^2 c^4 \tag{12.95}$$

Combining these equations gives

$$\left(E + mc^2 - E'\right)^2 = c^2\left(p^2 - 2pp'\cos\theta + p'^{\,2}\right) + m^2 c^4 \qquad\Rightarrow\qquad Emc^2 - mc^2 E' = EE'\,(1 - \cos\theta) \tag{12.96}$$

$$\frac{E - E'}{EE'} = \frac{1}{mc^2}\,(1 - \cos\theta)$$

which finally gives the Compton scattering formula

$$\frac{1}{E'} - \frac{1}{E} = \frac{1}{mc^2}\,(1 - \cos\theta) \tag{12.97}$$

This last equation is often written in terms of the photon wavelength using

$$E = \frac{hc}{\lambda} \tag{12.98}$$

which is one of the few places in this textbook that makes a direct connection to quantum physics. The relationship between the initial and final wavelengths of the photon is

$$\lambda' - \lambda = \lambda_e\,(1 - \cos\theta) \tag{12.99}$$

where the Compton wavelength for photon scattering from an electron is

$$\lambda_e = \frac{h}{m_e c} = 2.426\times 10^{-12}\ \mathrm{m} \tag{12.100}$$

For forward scattering (θ = 0), there is no shift in the photon wavelength. For backscattering (θ = π), the change in the photon wavelength is a maximum, with the value

$$\Delta\lambda = 2\lambda_e \tag{12.101}$$

which is about 5 picometers (5 pm). Obviously, Compton scattering is negligible for visible light (wavelength around 0.5 micrometers), but it becomes sizeable for X-rays.
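The numbers quoted in this example follow from Eqs. (12.99) and (12.100) directly (a sketch using standard values of h, m_e, and c):

```python
import numpy as np

h, m_e, c = 6.626e-34, 9.109e-31, 2.998e8
lam_e = h / (m_e * c)                 # Compton wavelength, Eq. (12.100)

for theta in (0.0, np.pi / 2, np.pi):
    print(theta, lam_e * (1.0 - np.cos(theta)))   # shift of Eq. (12.99)
# theta = pi gives the maximum shift 2*lam_e, about 4.85 pm
```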

12.5.3 Invariant mass

A powerful method of analysis in high-energy particle physics that makes it possible to discover the mass of a new particle (like the top quark, the Z-boson, or the Higgs) is invariant mass reconstruction. Consider a particle of rest mass m₀ that decays into multiple particles, some of which may be photons without mass (Fig. 12.15). The energy and momentum of each particle are given by (E_a, p⃗_a). The rest mass of the original particle is an invariant property of the system. Therefore, the invariant mass of the original particle is calculated by summing the decay product energies and momenta to yield

$$m_0^2 c^4 = \left(\sum_{a=1}^{N} E_a\right)^2 - c^2\left(\sum_{a=1}^{N}\vec{p}_a\right)\cdot\left(\sum_{a=1}^{N}\vec{p}_a\right)$$

[Figure 12.15: A particle of rest mass m₀ decays into multiple particles carrying energy and momentum (E₁, p⃗₁), (E₂, p⃗₂), … .]

This approach yields the same invariant mass independently of the frame in which the particles' energies and momenta are measured, regardless of whether this is the laboratory frame, the rest frame of the particle, or any other frame. This makes this invariant a powerful tool in the search for new particles.
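A minimal sketch of the reconstruction (with c = 1 and hypothetical decay products; not code from the text) might look like this:

```python
import numpy as np

def invariant_mass(E, p):
    # Rest energy m0 reconstructed from decay products (c = 1):
    # the frame-independent combination of summed energies and momenta.
    E_tot = E.sum()
    p_tot = p.sum(axis=0)
    return np.sqrt(E_tot**2 - p_tot @ p_tot)

# Two hypothetical back-to-back photons, each of energy 3 (arbitrary units)
E = np.array([3.0, 3.0])
p = np.array([[3.0, 0.0, 0.0], [-3.0, 0.0, 0.0]])
print(invariant_mass(E, p))   # 6.0: the parent particle's rest energy
```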

12.5.4 Force transformation

The relativistic transformation of forces provides a first step toward the general theory of relativity. Forces on free objects in an inertial frame produce noninertial frames (the frames attached to the accelerating objects). But before we look at accelerating frames, we first must find how forces transform between two inertial frames. The transformation of forces between the frames leads to an important equivalence between f_x and f′_x along the direction of motion. From Newton's Second Law, 3-forces (ordinary forces) are derived from the time derivative of the momentum. The 4-force, by analogy, is derived as

$$f^a = \frac{dp^a}{d\tau} = \gamma\,\frac{d}{dt}\begin{pmatrix} E/c \\ p_x \\ p_y \\ p_z \end{pmatrix} = \gamma\begin{pmatrix} P/c \\ f_x \\ f_y \\ f_z \end{pmatrix} \tag{12.102}$$

where f_x, f_y, and f_z are the "ordinary" 3-forces, and P is the power transferred by the force to the object. Now consider two inertial frames as in Fig. 12.16, with a point mass that experiences a force. The point mass is stationary in the primed frame, and is moving along the x-axis with velocity v in the unprimed frame.

[Figure 12.16: A mass moving at velocity v along the x-axis experiences a force f in the fixed frame. In the rest frame of the mass, the force along the direction of motion remains unchanged, but the transverse forces are transformed.]

The 4-force in the particle rest frame (the primed frame) is

$$f'^{\,b} = \begin{pmatrix} 0 \\ f'_x \\ f'_y \\ f'_z \end{pmatrix} \tag{12.103}$$

where the time component is zero because the particle is at rest (power equals force times velocity). Now consider the same object from the unprimed frame, in which the point mass is moving with β oriented along the x-axis. Then,

$$f^a = \begin{pmatrix} \beta\gamma f_x \\ \gamma f_x \\ \gamma f_y \\ \gamma f_z \end{pmatrix} \tag{12.104}$$

where the time component uses the power P = v f_x. But the inverse Lorentz transformation of the 4-force is

$$f^a = \Lambda^a_{\ b}\,f'^{\,b} = \begin{pmatrix} \gamma & \beta\gamma & 0 & 0 \\ \beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ f'_x \\ f'_y \\ f'_z \end{pmatrix} = \begin{pmatrix} \beta\gamma f'_x \\ \gamma f'_x \\ f'_y \\ f'_z \end{pmatrix} \tag{12.105}$$

Equating the components in the two 4-force expressions in Eqs. (12.104) and (12.105) gives the relationships among the 3-force components in the two frames as

$$f_x = f'_x \qquad f_y = f'_y/\gamma \qquad f_z = f'_z/\gamma \tag{12.106}$$

Therefore, the components of the force in the direction of relative motion are equal, which is the same result for that component that was obtained from Galilean relativity. In this sense, acceleration (in the direction of motion) is independent of the inertial frame. However, the forces transverse to the direction of motion do transform with a factor of γ , which is clearly non-Galilean. This result has importance for the relative view of accelerating objects. If a force f = mg acts on an object in one frame to accelerate it, then in the frame of the object (in which it is instantaneously at rest), it experiences the same force. In this way, it is possible to consider a rocket ship that exerts a constant thrust to give it a constant acceleration g. The astronaut inside feels their weight mastro g (which is what they would feel if the rocket ship were at rest at the surface of the Earth). And when viewed from the fixed frame, the rocket ship also experiences a constant force F rocket = mrocket g. However, the acceleration observed from the rest frame will decrease as the rocket moves more relativistically and approaches the speed of light. Furthermore, an accelerating object is only instantaneously

in one inertial frame; it moves through a succession of inertial frames. This leads to a more complicated description of accelerated motion, but takes the first step toward a general theory of relativity that applies to all frames, not just inertial ones.

12.6 Linearly accelerating frames (relativistic)

The last part of this chapter on special relativity bridges to the general theory by considering a frame with constant linear acceleration relative to an inertial frame: a relativistic extension of Galilean relativity to speeds approaching the speed of light. The linearly accelerating frame is obviously not inertial, and hence we may expect to find new and counter-intuitive phenomena in this case, especially with regard to the behavior of light rays. The reason that this section is included here at the end of the chapter on special relativity instead of at the beginning of the one on general relativity is that it has a closer correspondence with special relativity. We are dealing with frames that are instantaneously inertial relative to each other. This allows us to apply Lorentz transformations locally, even though the global properties of the two frames are considerably different. Constructing constantly accelerating frames is fraught with difficulties that lead to paradoxes related to the elastic deformation of rigid frames. To avoid these difficulties, a subterfuge is used that begins with a constantly accelerating point mass. This point mass passes continuously through successive inertial frames. If an extended collection of such point masses could be constructed, then they could represent masses in such an accelerated frame. The reason for taking pains to construct such a frame is to obtain a frame whose metric is time-independent. This makes it possible to compare such a frame with a gravitational field whose metric also is independent of time, and hence to make the connection to the Equivalence Principle, which was one of Einstein's cornerstones in the construction of the general theory. The Equivalence Principle (EP) states that there is an equivalence between a uniformly accelerating frame and a gravitational field. In other words, if you are in an elevator (without windows to look out), you cannot tell whether you are far out in space and accelerating at a rate g or are in a constant gravitational field. To begin this construction, consider a point mass that experiences a constant force f′_x = m₀g when viewed from the rest frame of the mass. However, from Eq. (12.106), we have f_x = f′_x, and hence, in the unprimed frame,

$$\frac{dp_x}{dt} = m_0 g \tag{12.107}$$

or

$$\frac{d\,(\gamma m_0 \beta)}{c\,dt} = m_0\,\frac{g}{c^2} \tag{12.108}$$

which yields the simple differential equation

$$d\,(\gamma\beta) = \frac{g}{c^2}\,(c\,dt) \tag{12.109}$$

This integrates immediately to

$$\gamma\beta = \kappa\,ct \tag{12.110}$$

where the constant is

$$\kappa = \frac{g}{c^2} \tag{12.111}$$

Solving for β = κct/γ and γ = 1/√(1 − β²) gives

$$\beta = \frac{\kappa ct}{\sqrt{1 + (\kappa ct)^2}} \qquad\text{and}\qquad \gamma = \sqrt{1 + (\kappa ct)^2} \tag{12.112}$$

These are the instantaneous velocity and Lorentz factor of the particle in O′ seen from the fixed laboratory frame O. Substituting β into the definition of velocity in the fixed frame

$$dx = \beta c\,dt \tag{12.113}$$

and integrating gives

$$\kappa x = \int \frac{\kappa ct}{\sqrt{1 + (\kappa ct)^2}}\,d(\kappa ct) = \sqrt{1 + (\kappa ct)^2} + \kappa x_P \tag{12.114}$$

Rearranging this gives

$$\kappa^2\,(x - x_P)^2 - (\kappa ct)^2 = 1 \tag{12.115}$$

This is the equation for a hyperbola, where x_P represents the location of an event P that is the focus of the hyperbola. The space-time diagram is shown in Fig. 12.17. The event associated with the particle at t = 0 is A. The focal event P is located at x_P = x₀ − 1/κ. The x′-axis of the instantaneously inertial frame comoving with the particle is the line through PB.

[Figure 12.17: The trajectory of a uniformly accelerating point mass is a hyperbola. The particle begins accelerating at t′ = t = 0 at event A. At a later event B, a comoving inertial frame with the same instantaneous velocity has an x′-axis that passes through B and the event P, which acts as the focus of the hyperbolic world line of the particle.]

As B evolves in time, the x′-axis transforms continuously toward the asymptotic line of the hyperbola. The event P has unexpected properties, because the world line that starts at A asymptotes to the light line that emanates from P. Therefore, P behaves like the event horizon of a black hole. A clock at P would appear infinitely time-dilated (stopped) to the observer in O′ accelerating with the particle. A photon emitted from P at the instant that the mass began accelerating would never catch up with it. Therefore, in this problem of constant acceleration, a phenomenon has appeared that we usually associate with massive bodies (black holes). The behavior of constant acceleration in special relativity has some of the properties of gravitation, as we shall see in the final chapter.
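The hyperbolic world line of Eqs. (12.112)–(12.115) can be tabulated directly (a sketch assuming NumPy, with the focal event placed at x_P = 0 and assumed values of g and c):

```python
import numpy as np

g, c = 9.81, 2.998e8
kappa = g / c**2                        # Eq. (12.111)

for t in (1.0e6, 1.0e7, 1.0e8):         # lab-frame time in seconds
    kct = kappa * c * t
    beta = kct / np.sqrt(1.0 + kct**2)  # Eq. (12.112)
    x = np.sqrt(1.0 + kct**2) / kappa   # Eq. (12.114) with x_P = 0
    print(t, beta, kappa**2 * x**2 - kct**2)   # last column is always 1
```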

12.6.1 Equivalence Principle

There is a modification to the metric tensor caused by the noninertial frame. For an elevator accelerating with constant acceleration g, it is necessary to derive how time and space transform. The 4-velocity of the accelerating mass is a constant, u² = −c², and its 4-acceleration has magnitude

$$a^\alpha a_\alpha = g^2 \tag{12.116}$$

Because the inner product of the 4-acceleration with the 4-velocity is an invariant, which equals zero for the elevator at any chosen instantaneous time, the 4-acceleration must be orthogonal to the 4-velocity at all times, so

$$u_\alpha a^\alpha = -c\,a^0 + u^1 a^1 = 0 \tag{12.117}$$

Therefore,

$$a^0 = \frac{du^0}{d\tau} = \frac{g}{c}\,u^1 \qquad a^1 = \frac{du^1}{d\tau} = \frac{g}{c}\,u^0 \tag{12.118}$$

These can be integrated once to give

$$u^0 + u^1 = c\,\exp\left(\frac{g}{c}\tau\right) \qquad u^0 - u^1 = c\,\exp\left(-\frac{g}{c}\tau\right) \tag{12.119}$$

and integrated a second time to give

$$ct + x = \frac{c^2}{g}\exp\left(\frac{g}{c}\tau\right) \qquad ct - x = -\frac{c^2}{g}\exp\left(-\frac{g}{c}\tau\right) \tag{12.120}$$

with the solution

$$ct \propto \frac{c^2}{g}\sinh\left(\frac{g}{c}\tau'\right) \qquad x \propto \frac{c^2}{g}\cosh\left(\frac{g}{c}\tau'\right) \tag{12.121}$$

The relationship of x and t to x′ and t′ is found by considering the integration constants, with the origin taken as the event P. This gives

$$x\left(x', \tau'\right) = -\frac{c^2}{g} + \left(x' + \frac{c^2}{g}\right)\cosh\left(\frac{g}{c}\tau'\right) \qquad ct\left(x', \tau'\right) = \left(x' + \frac{c^2}{g}\right)\sinh\left(\frac{g}{c}\tau'\right) \tag{12.122}$$

The instantaneous transformation between the inertial system O and the accelerating frame O′ is

$$\Lambda^a_{\ \bar{b}} = \frac{\partial x^a}{\partial x'^{\,b}} = \begin{pmatrix} \left(1 + \dfrac{g}{c^2}x'\right)\cosh\left(\dfrac{g}{c}\tau'\right) & \sinh\left(\dfrac{g}{c}\tau'\right) \\[8pt] \left(1 + \dfrac{g}{c^2}x'\right)\sinh\left(\dfrac{g}{c}\tau'\right) & \cosh\left(\dfrac{g}{c}\tau'\right) \end{pmatrix} \tag{12.123}$$

By setting the local time inside the elevator to zero, the instantaneous Lorentz transformation is

$$\Lambda^a_{\ \bar{b}} = \left.\frac{\partial x^a}{\partial x'^{\,b}}\right|_{\tau'=0} = \begin{pmatrix} 1 + \dfrac{g}{c^2}x' & 0 \\ 0 & 1 \end{pmatrix} \tag{12.124}$$

The local metric tensor inside the elevator is obtained from the Minkowski metric, and the metric tensor becomes

$$g_{\bar{a}\bar{b}} = \Lambda^c_{\ \bar{a}}\,\Lambda^d_{\ \bar{b}}\,\eta_{cd} = \begin{pmatrix} -\left(1 + \dfrac{g}{c^2}x'\right)^2 & 0 \\ 0 & 1 \end{pmatrix} \tag{12.125}$$

where η_cd is the Minkowski metric.5 Therefore, the invariant interval is

$$ds^2 = -\left(1 + \frac{g}{c^2}x'\right)^2 c^2\,dt'^{\,2} + dx'^{\,2} + dy'^{\,2} + dz'^{\,2} \tag{12.126}$$

Only the time component of the metric is modified by the constant acceleration. It is also position-dependent, which means that clocks at different locations along the direction of acceleration run at different speeds. This is the metric description of the Equivalence Principle that equates a uniformly accelerating frame to a uniform gravitational field.

12.6.2 Gravitational time dilation

To begin to explore the connection between noninertial frames and the metric tensor, consider two identical ticking clocks at rest in the accelerating frame of a spaceship that is accelerating at a rate g. The location of the forward clock is x₁ and that of the aft clock x₂. The question is whether these clocks are ticking at the same rate. The challenge is to relate the properties of these two clocks to each other. This cannot be done as easily as in an inertial frame, where photons can be exchanged to synchronize and compare two clocks, because the photons themselves are modified by the emission and absorption processes at the clocks (see the discussion of gravitational redshift in the next chapter). However, there is a simple approach to make this comparison that employs a third clock that is falling freely in the frame of the spaceship. The third clock passes the first clock, and because they are located at the same place in space, their times can be compared directly. Assume that the speed of the falling clock when it passes clock 1 is given by β₁. Then, when the falling clock passes clock 2, its speed is given by β₂. The time on the falling clock is dilated by the amounts

$$dt_1 = \gamma_1\,dt'_1 \qquad dt_2 = \gamma_2\,dt'_2 \tag{12.127}$$

where the clock's inertial rest frame is the unprimed frame. But in its own rest frame, it is obvious that dt₁ = dt₂, and therefore γ₁ dt′₁ = γ₂ dt′₂. This leads to

$$\frac{dt'_1}{dt'_2} = \frac{\gamma_2}{\gamma_1} = \frac{\left(1 - u_1^2/c^2\right)^{1/2}}{\left(1 - u_2^2/c^2\right)^{1/2}} \approx 1 - \frac{1}{2}\,\frac{u_1^2 - u_2^2}{c^2} \tag{12.128}$$

5 The flat Minkowski metric is often denoted by η_ab to distinguish it from a more general g_ab in curved space-time.

where a low-velocity expansion has been used. Clearly, the two clocks, one forward and one aft, are ticking at different rates in the accelerating frame of the spaceship. An astronaut on the spaceship is at rest in the spaceship frame, and the acceleration of the ship at the rate g is experienced as a phenomenon equivalent to a gravitational field of strength g. Therefore, the astronaut would perform a simple calculation on the speed of the falling clock that re-expresses it in terms of gravitational potential as

$$\frac{dt'_1}{dt'_2} \approx 1 - \frac{g}{c^2}\left(x_2 - x_1\right) \tag{12.129}$$

This result suggests that the invariant interval, using Eq. (12.129), is

$$ds^2 = -\left(1 + \frac{g}{c^2}x'\right)^2 c^2\,dt'^{\,2} \tag{12.130}$$

If we take "coordinate time" to be set by a clock at x′ = 0, then a clock at a position x′ has the time interval

$$dt' = \frac{d\tau}{1 + \dfrac{g}{c^2}x'} \tag{12.131}$$

relative to the proper time interval of the reference clock. Therefore, clocks that are along the positive x -axis (higher in potential energy in the pseudo-gravitational field experienced as the acceleration g) have a shorter time interval and run faster. Clocks with lower potential energy (at negative x ) run slower. Therefore, a clock at the tip of the accelerating rocket ship runs fast relative to a clock at the base of the rocket ship. This phenomenon provides an initial glimpse into one of the effects of general relativity.
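The size of the effect is tiny for everyday accelerations, as a short estimate based on Eq. (12.131) shows (a sketch with assumed values g = 9.81 m/s² and a 10 m clock separation):

```python
g, c = 9.81, 2.998e8
h = 10.0                       # upper clock 10 m forward along the acceleration
rate = 1.0 + g * h / c**2      # clock-rate factor implied by Eq. (12.131)
print(rate - 1.0)              # ~1.1e-15 fractional speed-up of the upper clock
```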

12.7 Summary

The invariance of the speed of light with respect to any inertial observational frame leads to a surprisingly large number of unusual results that defy common intuition. Chief among these are time dilation, length contraction, and loss of simultaneity. The Lorentz transformation intermixes space and time, but an overarching structure is provided by the metric tensor of Minkowski space-time. The pseudo-Riemannian metric supports 4-vectors whose norms are invariants, independent of any observational frame. These invariants constitute the proper objects of reality studied in the special theory of relativity. Relativistic dynamics defines the equivalence of mass and energy, which has many applications in nuclear energy and particle physics. Forces have transformation properties between relatively moving frames that set the stage for a more general theory of relativity that describes physical phenomena in noninertial frames.


12.8 Bibliography

J. J. Callahan, The Geometry of Spacetime: An Introduction to Special and General Relativity (Springer, 2000).
R. D'Inverno, Introducing Einstein's Relativity (Oxford University Press, 1992).
T. A. Moore, A Traveler's Guide to Spacetime: An Introduction to the Special Theory of Relativity (McGraw-Hill, 1995).
R. A. Mould, Basic Relativity (Springer, 1994).
V. Petkov, Relativity and the Nature of Spacetime (Springer, 2005).
R. Talman, Geometric Mechanics (Wiley, 2000).

12.9 Homework problems

1. Lorentz transformation: Derive Eqs. (12.3) and (12.4) from Eqs. (12.1) and (12.2).

2. Acceleration transformation: Derive the expressions for the relativistic transformation of accelerations by taking the derivative of the velocity transformation equation (12.23). If acceleration is constant in one frame, is it constant in all frames?

3. Synchronization: Two clocks at the origins of the K and K′ frames (which have a relative speed v) are synchronized when the origins coincide. After a time t, an observer at the origin of the K system observes the K′ clock by means of a telescope. What does the K′ clock read?

4. Light clock: Take the light clock in Fig. 12.5(a) and place it so the light path is along the direction of motion. Rederive the time dilation of the clock in this configuration.

5. Length contraction: A stick of length l is fixed at an angle θ from its x-axis in its own rest frame K. What is the length and orientation of the stick as measured by an observer moving along x with speed v?

6. Headlight effect: Consider a point source of light that emits isotropically with a constant intensity output in its own frame. If it is moving with speed v in a laboratory frame, derive the intensity as a function of angle from the forward direction that is observed in the laboratory frame.

7. Doppler effect: A light source moving with speed β along the x-axis emits a photon that has an angular frequency ω in its rest frame. The photon is observed in the fixed frame to travel at an angle θ₀ relative to the direction of motion.

(a) Calculate the Doppler shift as a function of β and θ₀. What is the shift for θ₀ = 90°? Is it blueshifted or redshifted?

(b) Calculate the Doppler shift as a function of β and θ₁, where θ₁ is the angle from the detection point to the position of the light source at the moment the photon is detected. What is the shift for θ₁ = 90°? Is it blueshifted or redshifted?

(c) What is the shift for θ₀ + θ₁ = 180°? Is it blueshifted or redshifted? This last answer is the transverse Doppler shift observed in transverse laser scattering experiments.

8. Invariant product: Show that the inner product between any two 4-vectors is necessarily an invariant, even if the two 4-vectors are not of the same type, for instance if one is the position 4-vector and the other is the momentum 4-vector.

9. Relativistic Lagrangian: Derive Eq. (12.50) from Eq. (12.48).

10. Pair production: Derive the threshold energy for a photon to scatter from a stationary electron to produce an electron–positron pair. Assume that all kinetic energy after the interaction is shared equally among the three leptons (the original electron and the electron–positron pair).

11. Relativistic force: Show that the relativistic forms of Newton's Second Law when the acceleration is parallel or perpendicular to the force are, respectively,

$$F = \frac{m}{\left(1 - \beta^2\right)^{3/2}}\,\frac{du}{dt} \qquad F = \frac{m}{\left(1 - \beta^2\right)^{1/2}}\,\frac{du}{dt}$$

12. Relativistic force: Show that if a force F⃗ acts on a particle of rest mass m₀ that has velocity v⃗, then

$$\vec{F} = \gamma m_0\,\frac{d\vec{v}}{dt} + \frac{\vec{F}\cdot\vec{v}}{c^2}\,\vec{v}$$

13. Constant acceleration: You are on a space ship that begins accelerating at t′ = 0 with a constant acceleration a = g (measured by the force you experience locally). After 1 year passes on your clock, how far away is the Earth in your frame? How far away are you from the Earth in the Earth's frame? Could you get to Alpha Centauri by this time?

14. Euler–Lagrange: Show that the Lagrangian

$$L\left(x^a, \dot{x}^a\right) = -mc\sqrt{-\eta_{ab}\,\dot{x}^a\dot{x}^b} - \gamma m g x^a$$

(where η_ab is the Minkowski metric) and the Euler–Lagrange equations in proper time lead to the same kinematics as constant acceleration.

13 The General Theory of Relativity and Gravitation

13.1 The Newtonian correspondence
13.2 Riemann curvature tensor
13.3 Einstein's field equations
13.4 Schwarzschild space-time
13.5 Kinematic consequences of gravity
13.6 The deflection of light by gravity
13.7 Planetary orbits
13.8 Black holes
13.9 Gravitational waves
13.10 Summary
13.11 Bibliography
13.12 Homework problems

The central postulate of general relativity holds that all observers are equivalent, even if they are not in inertial frames. This means that observers in accelerating frames and observers in gravitational fields must measure the same local properties of physical objects. This chapter introduces the concept of curved spaces and their tensor metrics and tensor curvatures. By defining the effect of mass-energy on space-time curvature, simple consequences of the general theory are derived, such as gravitational acceleration, the deflection of light by gravity, and gravitational waves.

13.1 The Newtonian correspondence

A particle with no forces acting on it executes a geodesic trajectory. This fact was derived through a variational calculation on the Lagrangian of a free particle. The resulting trajectory in Euclidean 3-space is a straight line. In Minkowski space, force-free relativistic particles, including photons, have world lines that are straight.

Inertial frame: $$\frac{d^2 x^a}{d\tau^2} = 0 \tag{13.1}$$


The next step is to ask what trajectory a particle in free-fall executes when it is near a gravitating body. The answer is again a geodesic. However, the geodesic in this case is no longer a straight line. The geodesic also needs to be viewed in 4-space, with time-like components that are affected by the curvature of space-time induced by matter (and energy). The equations of motion are then

Noninertial frame: $$\frac{d^2 x^a}{d\tau^2} + \Gamma^a_{bc}\,\frac{dx^b}{d\tau}\frac{dx^c}{d\tau} = 0 \tag{13.2}$$

The second term in the equation of motion for the noninertial frame contains all the forces associated with a coordinate transformation from one frame to another, such as the centrifugal force and Coriolis force. In addition, it describes gravitational forces. The trajectory of a particle in general relativity must approach the trajectory of a particle in Newtonian gravity when the field is weak and the particle velocity is small. Taking the noninertial equation of motion to the Newtonian limit and making the correspondence to Newtonian gravity provides a means of finding the expression for g₀₀(M) as a function of mass M. For a slow particle with dx/dt ≪ c and dx/ds ≪ c dt/ds,

$$\frac{d\tau}{dt} = 1 \tag{13.3}$$

for γ ≈ 1. This approximation allows us to write

$$\Gamma^a_{bc}\,\frac{dx^b}{ds}\frac{dx^c}{ds} \approx \Gamma^a_{00}\left(\frac{c\,dt}{ds}\right)^2 \tag{13.4}$$

where the 0th term has dominated the summation because c is so much larger than the other velocities. The equation of motion becomes

$$\frac{d^2 x^a}{ds^2} + \Gamma^a_{00}\left(\frac{c\,dt}{ds}\right)^2 = 0 \tag{13.5}$$

where

$$\Gamma^a_{00} = -\frac{1}{2}\,g^{ab}\,\frac{\partial g_{00}}{\partial x^b} \tag{13.6}$$

At this point, a second simplifying approximation is made (the first was the slowness of the particle), namely, that the metric is nearly that of Minkowski space, such that

$$g_{ab} = \eta_{ab} + h_{ab} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} h_{00} & 0 & 0 & 0 \\ 0 & h_{11} & 0 & 0 \\ 0 & 0 & h_{22} & 0 \\ 0 & 0 & 0 & h_{33} \end{pmatrix} \tag{13.7}$$

with the weak-field approximation |h_ab| ≪ 1. The metric connection is then

$$\Gamma^a_{00} = -\frac{1}{2}\,\eta^{ab}\,\frac{\partial h_{00}}{\partial x^b} \tag{13.8}$$

and the equation of motion becomes

$$\frac{d^2\vec{x}}{ds^2} = \frac{1}{2}\left(\frac{c\,dt}{ds}\right)^2\vec{\nabla} h_{00} \qquad\Rightarrow\qquad \frac{d^2\vec{x}}{dt^2} = \frac{1}{2}\,c^2\,\vec{\nabla} h_{00} \tag{13.9}$$

which relates the particle acceleration to the perturbation of the metric. The next step is to make the connection between the geodesic trajectory and the equations of Newtonian gravity. This step is based on the Equivalence Principle, which states that the motion of a particle in a steadily accelerating (noninertial) reference frame is equivalent to the motion of a particle in a spatially uniform gravitational field. Recall that the result from Newtonian gravity is

$$\frac{d^2\vec{x}}{dt^2} = -\vec{\nabla}\left(-\frac{GM}{r}\right) \tag{13.10}$$

Equating Eqs. (13.9) and (13.10) determines h₀₀ to be

$$h_{00} = -\frac{2}{c^2}\left(-\frac{GM}{r}\right) \tag{13.11}$$

and this gives

$$g_{00} = -1 + \frac{2GM}{c^2 r} \tag{13.12}$$

Therefore, the invariant interval in this weak-field and low-velocity limit is

$$ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2\,dt^2 + dx^2 + dy^2 + dz^2 \tag{13.13}$$

The time-like term is affected by mass and hence gravity. This equation is similar to the equation that was derived in special relativity for constant acceleration, Eq. (12.126). There, too, only the time-like term was affected. One of the interesting aspects of this derivation is the absence of any gravitational force. Only the geometry of space-time entered into the derivation through the geodesic equation. From this weak-field geodesic approach (using the Equivalence Principle), it is only possible to derive g₀₀ to lowest order. The space-like components of the metric tensor are obtained from Einstein's field equation, which uses the Riemann curvature tensor.
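The size of the weak-field perturbation in Eq. (13.12) is easily estimated (a sketch with assumed astronomical values; not code from the text):

```python
G, c = 6.674e-11, 2.998e8

# Weak-field perturbation 2GM/(c^2 r) of Eq. (13.12), assumed values:
for name, M, r in (("Earth", 5.972e24, 6.371e6),
                   ("Sun",   1.989e30, 6.957e8)):
    print(name, 2 * G * M / (c**2 * r))
# ~1.4e-9 at the Earth's surface, ~4.2e-6 at the Sun's surface
```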

13.2 Riemann curvature tensor

Curvature of surfaces in three-dimensional Euclidean space is easily visualized. A sphere, for instance, has curvature, which makes it impossible to map the surface of the Earth onto a flat sheet. It is also impossible, on a globe, to draw a triangle, composed of sections of three great-circle routes, that has angles that add to 180°. In fact, the sum of the angles is always greater than 180°. Furthermore, parallel lines can and do meet on the surface of a sphere: think of two lines of longitude that are parallel at the Equator but meet at the North Pole. Conversely, the surface of a cylinder has no curvature, because it can be cut along any "straight" line and laid flat. These intuitive properties of surfaces embedded in Euclidean spaces do not go very far when one begins to ask questions about the curvature of a metric space, especially when the metric space is four-dimensional and pseudo-Riemannian. To tackle questions of curvature in these cases, it is necessary to consider the actions of covariant gradients on vectors on the metric space. For instance, consider the commutation relation

$$\nabla_c\nabla_d V^a - \nabla_d\nabla_c V^a \tag{13.14}$$

Does this commutator equal zero? This simple question opens up fundamental aspects of curved spaces. In flat Euclidean spaces, the quantity does commute. Even on manifolds that have global curvature, such as the surface of the cylinder, the quantity commutes. But on the surface of a sphere it does not. This is because the surface of the sphere has intrinsic curvature that destroys the concept of translating a vector in a “parallel” manner. To evaluate the commutator, take the first term

∇d ∇c V a = ∇d ∂c V a + Γbca V b





= ∂d ∂c V a + Γbca V b + Γeda ∂c V e + Γbce V b − Γcde ∂e V a + Γbea V b (13.15)

430 Introduction to Modern Dynamics then the second

a b ∇c ∇d V a = ∇c ∂d V a + Γbd V





a b e = ∂c ∂d V a + Γbd V + Γeca ∂d V e + Γbd V b − Γdce ∂e V a + Γbee V b (13.16) Subtract these two and use ∂d ∂c V a = ∂c ∂d V a . The result is

∇c ∇d V a − ∇d ∇c V a = Rabcd V b + Γcde − Γdce ∇e V a

(13.17)

where a new tensor, the Riemann curvature tensor, has been defined as

Riemann curvature tensor a a a Rabcd = ∂c Γbd − ∂d Γbca + Γbd Γec − Γbce Γeda

(13.18)

An all-covariant version of the tensor is the Riemann tensor of the first kind Rabcd = gar Rrbcd

(13.19)

(the tensor with the single raised index, Rabcd , is the Riemann tensor of the second kind). In broad terms, the Riemann curvature tensor is like a generalized second derivative of the metric tensor, while the Christoffel symbols are like generalized first derivatives of the metric tensor. The complexity of these tensors is mainly book-keeping to keep track of the interdependence of basis vectors in general curvilinear coordinates. The Riemann curvature tensor has 81 elements in 3D space, although it possesses many symmetries that reduces the number of independent elements to 6. Contractions of the Riemann tensor lead to additional useful tensors, such as the Ricci tensor:

Ricci tensor

Rab = Rcacb = g cd Rdacb

(13.20)

Further contraction leads to the Ricci scalar:

Ricci scalar

R = g ab Rab

(13.21)

These tensors and scalars are just consequences of the commutator of the second derivatives in Eq. (13.17). However, they are important measures of curvature of a metric space. To gain some insight into curved spaces, it is helpful to consider a few examples.

Example 13.1 Surface of a cylinder (2D surface)

The surface of a cylinder is not curved. Start with the line element in the coordinates (x¹, x²) = (θ, z):

\[ ds^2 = dz^2 + r^2 d\theta^2 \qquad g_{ab} = \begin{pmatrix} r^2 & 0 \\ 0 & 1 \end{pmatrix} \qquad g^{ab} = \begin{pmatrix} 1/r^2 & 0 \\ 0 & 1 \end{pmatrix} \]

The first Christoffel symbol is

\[ \Gamma^1_{11} = \frac{1}{2}g^{11}\left(\frac{\partial g_{11}}{\partial\theta} + \frac{\partial g_{11}}{\partial\theta} - \frac{\partial g_{11}}{\partial\theta}\right) + \frac{1}{2}g^{12}\left(\frac{\partial g_{21}}{\partial\theta} + \frac{\partial g_{12}}{\partial\theta} - \frac{\partial g_{11}}{\partial z}\right) = 0 \]

and likewise all the others are zero because g_ab is constant (remember that r is a constant). Therefore R = 0. A cylinder surface is not curved. This is because parallel lines remain parallel if we think of the cylinder's surface being formed from a rolled-up flat sheet of paper with parallel lines on it.

Example 13.2 Surface of a sphere (2D surface)

Find the Ricci scalar of a sphere. With

\[ x = r\sin\theta\cos\phi \qquad y = r\sin\theta\sin\phi \qquad z = r\cos\theta \]

the line element and metric in the coordinates (x¹, x²) = (θ, φ) are

\[ ds^2 = r^2 d\theta^2 + r^2\sin^2\theta\,d\phi^2 \qquad g_{ab} = \begin{pmatrix} r^2 & 0 \\ 0 & r^2\sin^2\theta \end{pmatrix} \qquad g^{ab} = \begin{pmatrix} 1/r^2 & 0 \\ 0 & 1/(r^2\sin^2\theta) \end{pmatrix} \]

The Christoffel symbols follow from the derivatives of the metric:

\[ \Gamma^1_{11} = \frac{1}{2}g^{11}\left(\frac{\partial g_{11}}{\partial\theta} + \frac{\partial g_{11}}{\partial\theta} - \frac{\partial g_{11}}{\partial\theta}\right) = 0 \]

\[ \Gamma^1_{22} = \frac{1}{2}g^{11}\left(\frac{\partial g_{12}}{\partial\phi} + \frac{\partial g_{21}}{\partial\phi} - \frac{\partial g_{22}}{\partial\theta}\right) = -\frac{1}{2}\frac{1}{r^2}\frac{\partial}{\partial\theta}\left(r^2\sin^2\theta\right) = -\sin\theta\cos\theta \]

\[ \Gamma^2_{12} = \Gamma^2_{21} = \cot\theta \]

and all others are zero. Next, the Riemann curvature tensor has the single independent component

\[ R^1_{\;212} = \frac{\partial}{\partial\theta}\Gamma^1_{22} - \frac{\partial}{\partial\phi}\Gamma^1_{21} + \Gamma^1_{22}\Gamma^1_{11} + \Gamma^2_{22}\Gamma^1_{21} - \Gamma^1_{21}\Gamma^1_{12} - \Gamma^2_{21}\Gamma^1_{22} \]
\[ = \frac{\partial}{\partial\theta}\left(-\sin\theta\cos\theta\right) - \cot\theta\left(-\sin\theta\cos\theta\right) = \sin^2\theta - \cos^2\theta + \cos^2\theta = \sin^2\theta \]

All other independent components are zero. Now define the Ricci tensor R_ab = R^c_acb = g^cd R_dacb, with nonzero components R_11 = 1 and R_22 = sin²θ:

\[ R_{ab} = \begin{pmatrix} 1 & 0 \\ 0 & \sin^2\theta \end{pmatrix} \]

which leads to the Ricci scalar

\[ R = g^{ab}R_{ab} = \frac{1}{r^2}\cdot 1 + \frac{1}{r^2\sin^2\theta}\cdot\sin^2\theta = \frac{2}{r^2} \]

This is recognized as the intrinsic curvature of the surface of a sphere (twice the Gaussian curvature 1/r²), as expected.
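Both examples are easy to verify by computer algebra. The following sketch (assuming SymPy is available; the helper names are our own) builds the metric of the 2-sphere, forms the Christoffel symbols, contracts the Riemann tensor of Eq. (13.18) to the Ricci tensor and scalar, and reproduces R = 2/r²; swapping in the cylinder metric of Example 13.1 returns R = 0.

```python
import sympy as sp

# 2-sphere of radius r in coordinates (theta, phi); Example 13.2
r = sp.symbols('r', positive=True)
theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols of the second kind, Gamma^a_{bc}
def christoffel(a, b, c):
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                          - sp.diff(g[b, c], x[d])) for d in range(n)) / 2)

Gamma = [[[christoffel(a, b, c) for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd} of Eq. (13.18)
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    expr += sum(Gamma[e][b][d] * Gamma[a][e][c] - Gamma[e][b][c] * Gamma[a][e][d]
                for e in range(n))
    return sp.simplify(expr)

# Ricci tensor R_bd = R^c_{bcd}, Eq. (13.20), and Ricci scalar, Eq. (13.21)
ricci = sp.Matrix(n, n, lambda b, d: sp.simplify(sum(riemann(c, b, c, d) for c in range(n))))
R = sp.simplify(sum(ginv[b, d] * ricci[b, d] for b in range(n) for d in range(n)))
print(ricci)   # Matrix([[1, 0], [0, sin(theta)**2]])
print(R)       # 2/r**2
```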

13.3 Einstein's field equations

Kinematics arising from the geodesic equation cannot by itself describe the origin of gravitational fields. For this, it is necessary to define a source term in an equation that is analogous to Poisson's equation for Newtonian gravity. The source term should depend on the energy-momentum tensor

\[ T^{ab} = \rho_0 U^a U^b \tag{13.22} \]

where ρ₀ is the proper mass density and U^a is the 4-velocity. In the energy-momentum tensor, both mass energy and kinetic energy densities contribute to the gravitational source term.

13.3.1 Einstein tensor

To arrive at the desired differential equation for gravity, Einstein sought a rank-2 tensor operator that would be related to the rank-2 energy-momentum tensor. The most general rank-2 tensor that can be constructed from the metric tensor is

\[ O^{ab} = R^{ab} + \mu g^{ab}R + \Lambda g^{ab} \tag{13.23} \]

which uses the Ricci tensor and scalar in addition to the metric tensor. Additional constraints must be satisfied (such as the Bianchi identities), which determine the constant to be μ = −1/2. This leads to the Einstein tensor

\[ G^{ab} = R^{ab} - \frac{1}{2}g^{ab}R \tag{13.24} \]

The field equations of general relativity are then

Einstein field equations
\[ G^{ab} + \Lambda g^{ab} = \frac{8\pi G}{c^4}T^{ab} \tag{13.25} \]

where the constant of proportionality before the energy-momentum tensor makes the correspondence to Newtonian gravity and the gravitational constant G. The constant Λ is the so-called cosmological constant, which has a long and interesting history and has reemerged in recent years in relation to dark energy. The Einstein field equations are second-order nonlinear partial differential equations. Only for cases of special symmetry and boundary conditions are exact analytical solutions available. Before looking at exact solutions, it is instructive to look at the Newtonian limit and confirm that Newtonian gravity is incorporated in these field equations.

13.3.2 Newtonian limit

The Newtonian limit of the field equations is derived by recognizing that, for weak gravity, the magnitudes of the energy-momentum tensor satisfy the conditions

\[ |T^{00}| \gg |T^{0a}| \gg |T^{bc}| \tag{13.26} \]

This leads to approximations that extract the weak-field components of the metric tensor where gravity causes a perturbation of the Minkowski manifold through Eq. (13.7). To find the deviation h_ab from the Minkowski metric, the Riemann curvature tensor can be expressed as

\[ R^\alpha_{\;\beta\mu\nu} = \frac{1}{2}g^{\alpha\sigma}\left(\partial_\beta\partial_\mu g_{\sigma\nu} - \partial_\beta\partial_\nu g_{\sigma\mu} + \partial_\sigma\partial_\nu g_{\beta\mu} - \partial_\sigma\partial_\mu g_{\beta\nu}\right) \tag{13.27} \]

and, in terms of h_ab,

\[ R_{\alpha\beta\mu\nu} = \frac{1}{2}\left(\partial_\beta\partial_\mu h_{\alpha\nu} - \partial_\beta\partial_\nu h_{\alpha\mu} + \partial_\alpha\partial_\nu h_{\beta\mu} - \partial_\alpha\partial_\mu h_{\beta\nu}\right) \tag{13.28} \]

which leads after considerable algebra¹ to an approximate expression for the Einstein tensor as

\[ G^{ab} = R^{ab} - \frac{1}{2}g^{ab}R = -\frac{1}{2}\left(-\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2\right)\left(h^{ab} - \frac{1}{2}\eta^{ab}h\right) = -\frac{1}{2}\Box^2\bar{h}^{ab} = \frac{8\pi G}{c^4}T^{ab} \tag{13.29} \]

¹ And the application of gauge transformations; see Schutz (1985), p. 205.

The trace h is

\[ h = h^a_{\;a} = h_{xx} + h_{yy} + h_{zz} - h_{00} \tag{13.30} \]

and the trace-reverse of h^ab is defined as

\[ \bar{h}^{ab} = h^{ab} - \frac{1}{2}\eta^{ab}h \tag{13.31} \]

The correspondence between \bar{h}^{ab} and R^{ab} in Eq. (13.29) is made through the differential operator (the d'Alembertian □²). Because of the weak-field conditions for the energy-momentum tensor in Eq. (13.26), and relating Eq. (13.29) to Eq. (13.25), the only trace-reverse that is sizable (in the Newtonian limit of low velocities and low fields) is \bar{h}^{00}. The dominant term in the energy-momentum tensor is

\[ T_{00} = \rho c^2 \tag{13.32} \]

and the field equations become

\[ \Box^2\bar{h}^{00} = -\frac{16\pi G}{c^2}\rho \tag{13.33} \]

It is noteworthy that Eq. (13.33) for vacuum, when ρ = 0, is a homogeneous wave equation. This is a first indication of the existence of gravity waves that propagate at the speed of light, although more conditions are needed to ensure their existence (which are indeed met). For low particle velocities, only the gradient component of the d'Alembertian is sizable, yielding

\[ \nabla^2\left(h^{00} - \frac{1}{2}\eta^{00}h\right) = -\frac{16\pi G}{c^2}\rho \tag{13.34} \]

which looks suspiciously similar to Gauss' Law for Newtonian gravity. The Newtonian potential is related to mass density through

\[ \nabla^2\Phi = 4\pi G\rho \tag{13.35} \]

Therefore, it is an obvious choice to make the correspondence

\[ h^{00} - \frac{1}{2}\eta^{00}h = -4\phi \tag{13.36} \]

where

\[ \phi = \frac{\Phi}{c^2} \tag{13.37} \]

is the gravitational potential (normalized by c²). The weak-field conditions of Eq. (13.26) applied to Eq. (13.29) lead to similar magnitudes for \bar{h}^{ab} and T^{ab}, which are small for space components. This yields the space-component equations

\[ h^{ab} - \frac{1}{2}\eta^{ab}h \approx 0 \tag{13.38} \]

for a, b = 1, 2, 3, from which

\[ h_{xx} = h_{yy} = h_{zz} = \frac{1}{2}h \tag{13.39} \]

Finally, the trace h is

\[ h = \eta_{ab}h^{ab} = -h^{00} + \frac{3}{2}h \tag{13.40} \]

which is solved for h^00 to give

\[ h^{00} = \frac{1}{2}h \tag{13.41} \]

From Eq. (13.36) this yields

\[ h = -4\phi \tag{13.42} \]

and the metric is finally

\[ ds^2 = -(1 + 2\phi)c^2 dt^2 + (1 - 2\phi)\left(dx^2 + dy^2 + dz^2\right) \tag{13.43} \]

This metric describes the spherically symmetric weak-field limit of the space-time geometry of the Einstein field equations. The potential outside the spherically symmetric mass distribution is

\[ \phi = \frac{\Phi}{c^2} = -\frac{GM}{c^2 r} \tag{13.44} \]

and the metric becomes

\[ ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 + \left(1 + \frac{2GM}{c^2 r}\right)\left(dx^2 + dy^2 + dz^2\right) \tag{13.45} \]

which is the metric outside a planet or star. Previously, in Eq. (13.13), only the time component had been modified by the potential, where the effects of the curvature of space had been neglected. That previous result was derived from the Equivalence Principle, according to which a steadily accelerating reference frame was compared to a spatially uniform gravitational field. The form of Eq. (13.45) is now the complete result that includes general relativistic effects on both the space and the time components of the metric tensor (but still in the weak-field limit).
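To get a feeling for the size of these weak-field terms, the dimensionless perturbation 2GM/c²r of Eq. (13.45) can be evaluated at the surface of a star or planet. A quick check (standard values of the constants assumed) confirms how small the metric perturbation is for ordinary astrophysical bodies:

```python
G, c = 6.674e-11, 2.998e8                    # SI units
bodies = {'Sun': (1.989e30, 6.963e8), 'Earth': (5.972e24, 6.371e6)}

# Dimensionless metric perturbation 2GM/(c^2 r) of Eq. (13.45) at the surface
for name, (M, R) in bodies.items():
    print(name, 2 * G * M / (c**2 * R))      # ~4e-6 for the Sun, ~1.4e-9 for the Earth
```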

13.4 Schwarzschild space-time

The weak-field metric is only an approximate solution to Einstein's field equation. Within a year of the publication in 1915 of the general theory of relativity, Karl Schwarzschild published the first exact solution for a nonrotating spherically symmetric mass. The Schwarzschild metric solution to the field equation is

\[ ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 + \frac{dr^2}{1 - \dfrac{2GM}{c^2 r}} + r^2 d\Omega^2 \tag{13.46} \]

This metric holds for arbitrarily strong gravity, as long as there is spherical symmetry. The time component has a zero and the space term has a divergence at the Schwarzschild radius

\[ R_S = \frac{2GM}{c^2} \tag{13.47} \]

This radius is the event horizon of a black hole, where time stops and space compresses. Nothing at radii smaller than R_S can be known to the outside world, and nothing falling past R_S can ever return, not even light.

Light is the perfect tool to use to map out the geometry of Schwarzschild space-time, because it defines the null geodesics that constrain all possible world lines. Consider the radial motion of a photon with the line element

\[ ds^2 = -\left(1 - \frac{R_S}{r}\right)c^2 dt^2 + \frac{dr^2}{1 - \dfrac{R_S}{r}} = 0 \tag{13.48} \]

from which it follows that

\[ c\,dt = \pm\frac{dr}{1 - R_S/r} \tag{13.49} \]

This integrates to

\[ ct = \pm\left(r + R_S\ln|r - R_S| + C\right) \tag{13.50} \]

where C is an integration constant, and the plus and minus signs stand for outgoing and infalling null geodesics (light lines). The null geodesics of the Schwarzschild space-time are plotted in Fig. 13.1 for outgoing and infalling light curves. The coordinate singularity at R/R_S = 1 divides the space-time into two regions. In the region outside the Schwarzschild radius, far from the gravitating body, the light lines make approximately 45° angles, as expected for Minkowski space-time. In the region inside the Schwarzschild radius, the light cones are tipped by 90° and point to the singularity at the origin. All massive particles have world lines that lie within the light cones, and hence all trajectories, even light, eventually end on the singularity at the origin.

The singular behavior of the coordinate description of the Schwarzschild space-time at R = R_S is an artifact of the choice of the coordinate variables (t, r, θ, φ). Therefore, the Schwarzschild metric is not the most convenient choice for describing the properties of systems that are near the Schwarzschild radius or for objects that pass through it. In fact, the proper time and lengths of objects falling into a black hole experience no discontinuous behavior at the Schwarzschild radius. There are many other choices of coordinate variables that avoid the coordinate singularity at R_S. However, these go beyond the scope of this book.²

² Alternative choices of coordinate variables include Eddington–Finkelstein coordinates and Kruskal coordinates.

[Figure 13.1 Schwarzschild space-time showing the null geodesics, plotted as ct versus radius with light cones on either side of the Schwarzschild radius. The coordinate singularity at R/R_S = 1 divides the space-time into two regions. Far from the Schwarzschild radius, the light lines are approximately 45°. Inside the Schwarzschild radius, the light lines all terminate on the singularity at the origin.]
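The light lines of Fig. 13.1 follow directly from Eq. (13.50) and are easy to reproduce. A minimal sketch in normalized units (R_S = 1), assuming numpy and matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt

RS = 1.0                                        # Schwarzschild radius (normalized)
r_out = np.linspace(1.001 * RS, 3.0, 400)       # region outside the horizon
r_in = np.linspace(0.01 * RS, 0.999 * RS, 400)  # region inside the horizon

# Families of null geodesics from Eq. (13.50): + outgoing, - infalling
for C in np.linspace(-4.0, 4.0, 9):
    for r in (r_out, r_in):
        ct = r + RS * np.log(np.abs(r - RS)) + C
        plt.plot(r / RS, ct, 'b', lw=0.7)       # outgoing light lines
        plt.plot(r / RS, -ct, 'r', lw=0.7)      # infalling light lines

plt.axvline(1.0, ls='--', color='k')            # event horizon at r = RS
plt.xlabel('Radius r/RS'); plt.ylabel('ct'); plt.ylim(-3, 3)
plt.show()
```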

13.5 Kinematic consequences of gravity

There are many important kinematic consequences of the Schwarzschild metric that may be derived for conditions that are outside the apparent singularity at the Schwarzschild radius. The proper time and distance of an observer approaching a gravitating body remain well behaved, but an observer far from the gravitating body sees in-falling clocks ticking slower, lengths contracting, and photons becoming redshifted.

13.5.1 Time dilation

Consider a calibrated clock in a gravitational potential φ = −GM/c²r ticking with proper time dτ:

\[ d\tau = \frac{1}{c}\sqrt{-ds^2} \tag{13.51} \]

If it is stationary in its own frame, then the space terms vanish and the relation becomes

\[ d\tau = dt\sqrt{1 - \frac{2GM}{c^2 r}} \tag{13.52} \]

where dt is the coordinate time. It is clear that when r goes to infinity, coordinate time and proper time are equal. Therefore, the coordinates are referenced to infinite distance from the gravitating body. As the clock descends toward the gravitating body, the expression under the square root decreases from unity, meaning that the coordinate timespan is larger than the proper timespan: clocks near a gravitating body slow down. As the clock approaches the Schwarzschild radius, it slows and stops as it disappears into the event horizon (the light reflecting from the clock, needed in order to "see" it, becomes infinitely redshifted, and so the clock becomes invisible).

13.5.2 Length contraction

Consider a meter stick oriented parallel to the radius vector. The proper length of the stick is ds, which is related to the coordinate length by

\[ ds = \sqrt{ds^2} = \frac{dr}{\sqrt{1 - \dfrac{2GM}{c^2 r}}} \tag{13.53} \]

Thus, a stick approaching a gravitating body is contracted, and the length of the stick for a distant observer shrinks to zero as the stick passes the event horizon.

13.5.3 Redshifts

The shift in frequency of a photon emitted from an object near a gravitating body is most easily obtained by considering two clocks at two radial locations r₁ and r₂. These clocks are used to measure the frequency of light emitted from one and detected by the other. The ratio of measured frequencies is equal to the ratio of coordinate times for the clocks, giving the result

\[ \frac{\nu_2}{\nu_1} = \frac{d\tau_1}{d\tau_2} = \sqrt{\frac{1 - 2GM/c^2 r_1}{1 - 2GM/c^2 r_2}} \tag{13.54} \]

Therefore, a photon falling to Earth is blueshifted. Conversely, a photon emitted to the sky from Earth is redshifted. A photon emitted outside an event horizon is redshifted, with the redshift becoming infinite at the event horizon. From these considerations, one concludes that a clock falling toward the event horizon of a black hole ticks more slowly, becomes squashed in the direction of fall, and turns progressively more red.
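The size of gravitational time dilation for a clock sitting on the surface of the Earth follows directly from Eq. (13.52). A quick numerical check (standard constants assumed):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6

# Eq. (13.52): rate of a surface clock relative to a clock at infinity
rate = np.sqrt(1 - 2 * G * M_earth / (c**2 * R_earth))
print(1 - rate)                   # ~7e-10 fractional slowdown
print((1 - rate) * 86400 * 1e6)   # ~60 microseconds lost per day
```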

13.6 The deflection of light by gravity

In 1911, while at the University of Prague, Einstein made his famous prediction that the gravitational field of the Sun would deflect light.³ The experimental proof of this theoretical prediction was delayed by World War I. However, in May 1919, Arthur Eddington led an expedition to an island off the coast of Africa (and another expedition was undertaken to an island off the coast of Brazil⁴) to observe stars during a total eclipse of the Sun. His observations confirmed Einstein's theory and launched Einstein to superstar status on the world stage.

³ Einstein's work in 1911 was still based on the Equivalence Principle, which ignores curvature of space. His full derivation of the deflection of light by the gravity of the Sun came in 1915, along with his derivation of the precession of the orbit of Mercury.
⁴ A detailed account of this expedition can be found in Chapter 6 of Galileo Unbound (Oxford University Press, 2018).

The general relativistic calculation of the deflection of light by gravity is based on null geodesics in warped space-time. A gravitating body creates a metric with curvature, and a photon trajectory follows a geodesic within this metric. The curvature is greatest near the massive body, with two straight-line asymptotes far from the gravitational source, as shown in Fig. 13.2, with an angular deflection that is a function of the impact parameter b.

[Figure 13.2 Deflection of light by the Sun, where the angular deflection Δφ is a function of the impact parameter b between the observer's line of sight and the Sun, producing an apparent position displaced from the true one. The figure is not to scale: the angular deflection for a ray grazing the surface of the Sun is only 8 microradians.]

13.6.1 Refractive index of general relativity

An alternative calculation that does not use the geodesic equation explicitly (but which is consistent with it implicitly) treats warped space-time as if it had a spatially varying refractive index.⁵ The light trajectories then follow the ray equation of Chapter 11. The fundamental postulate of special relativity states that the speed of light is a constant (and the same) for all observers. This statement needs to be modified in the case of general relativity. While the speed of light measured locally is always equal to c, the apparent speed of light observed by a distant observer (far from the gravitating body) is modified by time dilation and length contraction. This makes the apparent speed of light, as observed at a distance, vary as a function of position. Application of Fermat's Principle for light leads to the ray equation, and hence to the equation for the deflection of light by gravity. The invariant element for a light path in the Schwarzschild geometry with dθ = dφ = 0 is

\[ ds^2 = g_{00}c^2 dt^2 + g_{rr}dr^2 = 0 \tag{13.55} \]

⁵ See T.-P. Cheng, Relativity, Gravitation and Cosmology: A Basic Introduction (Oxford University Press, 2010), p. 124.

The apparent speed of light is then

\[ \frac{dr}{dt} = c\sqrt{-\frac{g_{00}(r)}{g_{rr}(r)}} = c(r) \tag{13.56} \]

where c(r) is always less than c when observed from flat space. The "refractive index" of space is defined, as for any optical material, as the ratio of the constant speed divided by the observed speed:

\[ n(r) = \frac{c}{c(r)} = \sqrt{-\frac{g_{rr}(r)}{g_{00}(r)}} \tag{13.57} \]

The Schwarzschild metric has the property

\[ g_{rr} = \frac{-1}{g_{00}} \tag{13.58} \]

so the effective refractive index of warped space-time is

\[ n(r) = \frac{1}{-g_{00}(r)} = \frac{1}{1 - \dfrac{2GM}{c^2 r}} \tag{13.59} \]

with a divergence at the Schwarzschild radius. The refractive index of warped space-time can be used in the ray equation (11.83) from Chapter 11:

\[ \frac{d}{ds}\left(n\frac{dx^a}{ds}\right) = \frac{\partial n}{\partial x^a} \tag{13.60} \]

where the gradient is

\[ \frac{\partial n}{\partial x} = -n^2\frac{2GM}{c^2}\frac{x}{r^3} \qquad \frac{\partial n}{\partial y} = -n^2\frac{2GM}{c^2}\frac{y}{r^3} \tag{13.61} \]

The ray equation is a four-variable flow in x, y, ν_x = n dx/ds, and ν_y = n dy/ds:

\[ \frac{dx}{ds} = \frac{\nu_x}{n} \qquad \frac{dy}{ds} = \frac{\nu_y}{n} \qquad \frac{d\nu_x}{ds} = -n^2\frac{2GM}{c^2}\frac{x}{r^3} \qquad \frac{d\nu_y}{ds} = -n^2\frac{2GM}{c^2}\frac{y}{r^3} \tag{13.62} \]
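The flow of Eq. (13.62) can be integrated directly to trace photon paths, which is how plots like Fig. 13.3 below are generated. The following sketch is our own implementation in normalized units (R_S = 1, so 2GM/c² = R_S), assuming numpy and matplotlib; the fourth-order Runge–Kutta stepper is the one described in Appendix A.4. Rays launched below the critical impact parameter (about 2.6 R_S) are captured by the hole.

```python
import numpy as np
import matplotlib.pyplot as plt

RS = 1.0                                     # Schwarzschild radius (normalized units)

def flow(s):
    # Four-variable ray flow of Eq. (13.62); note 2GM/c^2 = RS in these units
    x, y, vx, vy = s
    r = np.hypot(x, y)
    n = 1.0 / (1.0 - RS / r)                 # refractive index, Eq. (13.59)
    grad = -n**2 * RS / r**3
    return np.array([vx / n, vy / n, grad * x, grad * y])

def rk4(s, ds):
    k1 = flow(s); k2 = flow(s + ds * k1 / 2)
    k3 = flow(s + ds * k2 / 2); k4 = flow(s + ds * k3)
    return s + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6

for b in np.linspace(2.0, 6.0, 9):           # fan of impact parameters
    n0 = 1.0 / (1.0 - RS / np.hypot(-15.0, b))
    s = np.array([-15.0, b, n0, 0.0])        # collimated ray moving in +x; v = n dx/ds
    path = [s[:2].copy()]
    for _ in range(6000):
        s = rk4(s, 0.01)
        if np.hypot(s[0], s[1]) < 1.05 * RS or abs(s[0]) > 20:
            break                            # captured by the hole, or left the region
        path.append(s[:2].copy())
    path = np.array(path)
    plt.plot(path[:, 0], path[:, 1], lw=0.8)

plt.gca().add_patch(plt.Circle((0, 0), RS, color='k'))   # the black hole
plt.axis('equal'); plt.show()
```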

[Figure 13.3 Light trajectories near a black hole. Rays with a small impact factor are captured by the black hole. One ray near the critical impact parameter circles the black hole once and escapes. There is an unstable circular light orbit at a radius of 1.5 R_S.]

Trajectories of light from the ray equation for several collimated rays are shown in Fig. 13.3 passing by a black hole. Rays with a large impact parameter are deflected toward the "optical axis." Closer rays are increasingly deflected. Rays with an impact factor less than a critical value cannot escape and are captured by the black hole.⁶ An unstable circular light orbit exists at a radius of 1.5 R_S. Rays with a large impact factor (larger than shown in Fig. 13.3) are deflected through small angles by the local index gradient

\[ d\phi \approx \frac{\partial n}{\partial y}dx \tag{13.63} \]

⁶ See Misner, Thorne, and Wheeler (1973), p. 673.

The total deflection is obtained by integrating over the "interaction length" of the light with respect to the gravitating body:

\[ \Delta\phi = \int d\phi = \frac{2GM}{c^2}\int_{-\infty}^{\infty} n^2\,\frac{y}{r^3}\,dx \tag{13.64} \]

For large impact parameters, the y-position of the ray changes negligibly during the interaction, the index of refraction is nearly unity, and the integral simplifies to

\[ \Delta\phi = \frac{2GM}{c^2}\int_{-\infty}^{\infty}\frac{b}{\left(x^2 + b^2\right)^{3/2}}\,dx \tag{13.65} \]

leading to the equation for the deflection of light:

Deflection of light
\[ \Delta\phi = \frac{4GM}{c^2 b} \tag{13.66} \]

For light passing by the surface of the Sun, b = R_Sun, and this angle is 1.7 arcseconds, predicted by Einstein and measured by Eddington in his 1919 expedition.
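Plugging numbers into Eq. (13.66) reproduces this famous result directly. A quick check (standard solar values assumed):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.963e8

dphi = 4 * G * M_sun / (c**2 * R_sun)   # Eq. (13.66) with b = R_sun
print(dphi)                             # ~8.5e-6 radians
print(np.degrees(dphi) * 3600)          # ~1.75 arcseconds
```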

13.6.2 Gravitational lensing

At first glance, the deflection of light by a spherically gravitating object would appear to make a gravitational lens that could focus light. However, the angular deflection by gravity is inversely proportional to the impact parameter, while for a thin glass lens it is linearly proportional to the impact parameter. Therefore, a gravitational "lens" behaves very differently than an optical lens, and it has very high distortion. For instance, if a black hole is along the line of sight to a distant point source (let's say a quasar), then the quasar will appear as a bright ring around the black hole, known as an Einstein ring. The subtended angle of the Einstein ring is (see Homework problem 13.17 for the derivation)

\[ \theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{L_{SL}}{L_S L_L}} \tag{13.67} \]

where L_S is the distance to the source, L_L is the distance to the lensing body, and L_SL is the distance of the source behind the lens. Depending on the orientation of a gravitational lens relative to a distant source, and the mass distribution in the lens, the distant object may appear as a double image or as a smeared-out arc.

The principle of gravitational lensing is shown in Fig. 13.4. A massive object along the line of sight to a distant quasar bends the light rays to intersect with the Earth. Such "lensing" is not image-forming, because it does not "focus" the rays, but it does produce arcs. Note that the image formed by the gravitational lens is a virtual image, or arc, that is closer than the original object. There are many astronomical examples of double images of distant galaxies seen through an intermediate galaxy. One famous example with numerous Einstein arcs is shown in Fig. 13.5.

[Figure 13.4 Principle of gravitational lensing. A strongly gravitating body (a black hole) on the line of sight between the Earth and a distant object (a quasar) bends the light rays to intersect with the Earth, producing virtual images. The figure is not to scale; the angular deflection is typically a microradian.]

[Figure 13.5 Gravitational lensing. The "near-by" galaxy is causing gravitational lensing of objects behind it. Distant galaxies are smeared into arcs by the strong aberrations of the gravitational lens. From http://hubblesite.org/newscenter/archive/releases/2009/25/image/ao/format/web_print/.]

Gravitational lensing has become a valuable tool in the exploration of dark energy and dark matter. One piece of evidence in favor of dark matter comes from the Bullet Cluster, which is actually the collision of two galaxy clusters. The gravitational lensing by all matter has been mapped out on both sides of the collision, and the X-ray sources from luminous matter are separated from the bulk of the gravitationally lensing matter. This separation of lensing matter versus emitting matter provides evidence for noninteracting dark matter within large-scale structures such as galaxies and galaxy clusters. What dark matter is, and what physical laws it obeys, is one of the greatest unsolved problems in physics today!

13.7 Planetary orbits

The second notable prediction that Einstein made concerning his new theory of general relativity was an explanation of the precession of the perihelion of the planet Mercury. In an exact 1/r potential, the major axis of the elliptical orbit is constant and does not precess. However, if there are radius-dependent corrections to the potential, then the axis of the ellipse precesses, which is observed for the orbit of Mercury.

The key observation about massive particles executing orbits in general relativity is that they follow force-free geodesic trajectories determined by the metric structure imposed by the gravitating mass. The Lagrangian in this case has the Schwarzschild metric and can be expressed as

\[ L = -\frac{1}{2}\dot{s}^2 = \frac{1}{2}\left[\left(1 - \frac{2MG}{c^2 r}\right)c^2\dot{t}^2 - \frac{\dot{r}^2}{1 - \dfrac{2MG}{c^2 r}} - r^2\left(\dot{\theta}^2 + \sin^2\theta\,\dot{\phi}^2\right)\right] \tag{13.68} \]

where the mass m of the particle has been omitted (because it divides out of Lagrange's equations) and the sign has been switched between the time-like and space-like components. The dot is with respect to proper time, with ṫ = dt/dτ. There are three conserved quantities

\[ \frac{\partial L}{\partial\dot{t}} = \left(1 - \frac{2MG}{c^2 r}\right)c^2\dot{t} = E/m \qquad \frac{\partial L}{\partial\dot{\phi}} = r^2\sin^2\theta\,\dot{\phi} = \ell/m \qquad \frac{\partial L}{\partial\tau} = 0 \tag{13.69} \]

where the first is related to the energy E, the second is related to the orbital angular momentum ℓ, and the third is the Lagrangian itself. With these constants of motion, and because of the spherical symmetry, the orbit can be set on the equatorial plane with θ = π/2, and the Lagrangian becomes

\[ L = \frac{1}{2}\left(\frac{E^2/m^2c^2}{1 - \dfrac{2MG}{c^2 r}} - \frac{\dot{r}^2}{1 - \dfrac{2MG}{c^2 r}} - \frac{\ell^2}{m^2r^2}\right) = \frac{1}{2}c^2 \tag{13.70} \]

The differential equation for the radius is, after some rearranging,

\[ \frac{1}{2}m\dot{r}^2 + \frac{1}{2}\frac{\ell^2}{mr^2}\left(1 - \frac{2MG}{c^2 r}\right) - \frac{mMG}{r} = \frac{1}{2}\left(\frac{E^2}{mc^2} - mc^2\right) \tag{13.71} \]

The right-hand side is a constant of motion with units of energy,⁷ which will be denoted by T_∞, and the equation is finally

\[ \frac{1}{2}m\dot{r}^2 + \frac{1}{2}\frac{\ell^2}{mr^2}\left(1 - \frac{R_S}{r}\right) - \frac{GmM}{r} = T_\infty \tag{13.72} \]

⁷ By using the result from special relativity E² = p²c² + m²c⁴, the constant can be expressed as
\[ T_\infty = \frac{1}{2}\left(\frac{E^2}{mc^2} - mc^2\right) = \frac{p^2}{2m} \]
which is the nonrelativistic particle kinetic energy far from the gravitating body.

This is recognized as the equation for the non-relativistic central force problem, but with an extra factor (1 − R_S/r) in the angular momentum term. This extra factor is the general relativistic correction that leads to a deviation from the perfect 1/r potential and hence to precession of the orbit.

The differential equation (13.72) is commonly expressed in terms of derivatives relative to the angle φ, and using the substitution u = 1/r makes this

\[ \left(\frac{du}{d\phi}\right)^2 + u^2 = \frac{2m}{\ell^2}T_\infty + \frac{2GMm^2}{\ell^2}u + R_S u^3 \tag{13.73} \]

Differentiating with respect to u leads to the simple form

\[ \frac{d^2u}{d\phi^2} + u = \frac{GMm^2}{\ell^2} + \frac{3GM}{c^2}u^2 \tag{13.74} \]

which is valid for particle (or planetary) speeds much less than c. In the absence of its final term, Eq. (13.74) gives the classical orbital result for an inverse square law,

\[ \frac{d^2u}{d\phi^2} + u = \frac{GMm^2}{\ell^2} \tag{13.75} \]

which has the elliptical solution

\[ \frac{1}{r} = u_0 = \frac{GMm^2}{\ell^2}\left(1 + \varepsilon\cos\phi\right) \tag{13.76} \]

where ε is the ellipticity. When this ideal solution is substituted into Eq. (13.74), the result is

\[ \frac{d^2u_0}{d\phi^2} + u_0 = \frac{GMm^2}{\ell^2} + \frac{3G^3M^3m^4}{\ell^4 c^2}\left(1 + 2\varepsilon\cos\phi + \frac{\varepsilon^2}{2} + \frac{\varepsilon^2}{2}\cos 2\phi\right) \tag{13.77} \]

Only the second term in the square brackets leads to first-order effects, giving the approximation to the solution as⁸

\[ u_1 = u_0 + \frac{3G^3M^3m^4}{\ell^4 c^2}\,\varepsilon\phi\sin\phi \tag{13.78} \]

⁸ Details of the secular solution can be found in Section 8.9 of S. T. Thornton and J. B. Marion, Classical Dynamics of Particles and Systems, 5th ed. (Thomson, 2004).

which can be rewritten as

\[ u_1 = \frac{GMm^2}{\ell^2}\left\{1 + \varepsilon\cos\!\left[\phi\left(1 - 3\left(\frac{GMm}{\ell c}\right)^2\right)\right]\right\} \tag{13.79} \]

When φ = 2π, the angle at the maximum radius has shifted (precessed) by an angle

General relativistic precession angle
\[ \Delta\phi = 6\pi\left(\frac{GMm}{\ell c}\right)^2 \tag{13.80} \]

In the case of the orbit of Mercury, this is 43 arcseconds per century. This is what Einstein derived in 1915 during a particularly productive two weeks that also included his prediction of the deflection of light by the Sun. A single trajectory is shown in Fig. 13.6 for normalized parameters with GMm²/ℓ² = 1 and R_S = 0.013. The ellipticity and precession angle are large in this example.

[Figure 13.6 Example of a highly elliptical orbit subject to gravitational precession, in normalized units with GMm²/ℓ² = 1 and R_S = 0.013. The precessing orbit is compared with the closed orbit that has no precession.]
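As a check of Eq. (13.80), the Mercury number can be evaluated directly. For a Kepler orbit the angular momentum satisfies (ℓ/m)² = GMa(1 − ε²), which turns Eq. (13.80) into 6πGM/[c²a(1 − ε²)] radians per orbit; the orbital constants below are standard values.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
a, eps, T = 5.791e10, 0.2056, 87.97   # Mercury: semimajor axis (m), eccentricity, period (days)

# Eq. (13.80) with (l/m)^2 = G M a (1 - eps^2) for a Kepler orbit
dphi = 6 * np.pi * G * M_sun / (c**2 * a * (1 - eps**2))   # radians per orbit
per_century = dphi * 365.25 * 100 / T
print(np.degrees(per_century) * 3600)                      # ~43 arcseconds per century
```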

13.8 Black holes

Stars can undergo gravitational collapse during a supernova when the gravitational pressure exceeds the light pressure. The collapse can stop when electrons combine with protons to create a neutron star, for which the nuclear pressure exceeds the gravitational pressure. However, if the original mass of the star was sufficiently large and the residual mass of the neutron star is 1.5–3 times that of the Sun, then the nuclear pressure is insufficient to support the gravitational pressure, and the neutron star will continue gravitational collapse until its physical size decreases below the Schwarzschild radius. A black hole remains, and all further information from the star is removed from our sensible universe. Black holes can have masses ranging from 1.5 to over a billion solar masses. The Schwarzschild radius for a black hole of mass M is given in kilometers by

\[ R_S = \frac{2GM}{c^2} = 2.95\left(\frac{M}{M_\odot}\right)\ \mathrm{km} \tag{13.81} \]

where M_⊙ is the mass of the Sun. A supermassive black hole with the mass of a billion suns has a Schwarzschild radius as large as our Solar System.

13.8.1 Orbits

Orbits around black holes are strongly perturbed compared with the small precession corrections of Mercury orbiting the Sun. To understand black hole orbital behavior, it is helpful to look at Eq. (13.72) as a central force problem with an effective potential

\[ \frac{1}{2}m\dot{r}^2 + \Phi_{\mathrm{eff}}(r) = T_\infty \tag{13.82} \]

where the effective potential is

\[ \Phi_{\mathrm{eff}}(r) = -\frac{GmM}{r} + \frac{\ell^2}{2mr^2} - \frac{R_S\,\ell^2}{2mr^3} \tag{13.83} \]

The first two terms are the usual Newtonian terms: the gravitational potential and the repulsive contribution from the angular momentum that prevents the mass from approaching the origin. However, the third term is the general relativistic term, which is attractive and overcomes the centrifugal barrier at small values of r, allowing the orbit to collapse to the center. Therefore, not all orbits around a black hole are stable, and even circular orbits will decay if too close to the black hole.

A graph of the effective potential is shown in Fig. 13.7. In the classical Newtonian case shown there, the orbiting mass can never approach the center, because of the centrifugal divergence. The radius units in the figure are in terms of the circular orbit radius, so the Newtonian minimum is at r = 1. But in the general relativistic case, there is the additional attractive term that dominates at small values of the radius. This causes the effective potential to have a centrifugal barrier, but the effective potential becomes negative close to the origin.

[Figure 13.7 The general relativistic effective potential has an attractive short-range contribution that dominates over the centrifugal repulsion. The Newtonian limit is always stable. The general relativistic curves are for different values of R_S ℓ²/2m (0.03, 0.05, 0.07).]

To find the conditions for circular orbits, it is sufficient to differentiate Eq. (13.83) with respect to the radius to find the minimum, which gives the equation

\[ GmM r^2 - \frac{\ell^2}{m}r + \frac{3R_S}{2}\frac{\ell^2}{m} = 0 \tag{13.84} \]

Solving for r yields

\[ r = \frac{\ell^2}{m^2 c^2 R_S}\left(1 \pm \sqrt{1 - \frac{6GM R_S m^2}{\ell^2}}\right) \tag{13.85} \]

where the positive root is the solution for the stable circular orbit. There is an innermost stable circular orbit (ISCO) that is obtained when the term in the square root vanishes for

\[ \ell^2 = 3R_S^2 m^2 c^2 \tag{13.86} \]

which gives the simple result

\[ r_{\mathrm{ISCO}} = 3R_S \tag{13.87} \]

Therefore, no particle can sustain a circular orbit with a radius closer than three times the Schwarzschild radius, and it will spiral into the black hole.⁹ A single trajectory solution to Eq. (13.74) is shown in Fig. 13.8. The particle begins in an elliptical orbit outside the innermost circular orbit and is captured into a nearly circular orbit inside the ISCO. This orbit eventually decays and spirals with increasing speed into the black hole. The origin of these nearly stable circular orbits can be seen in the state space shown in Fig. 13.9.

⁹ Equation (13.87) applies for particles that have low velocity. However, special relativistic effects become important when the orbital radius of the particle approaches the Schwarzschild radius.
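The effective potential of Eq. (13.83) and the ISCO condition are easy to visualize. A minimal sketch in units with G = M = m = c = 1 (so that R_S = 2), assuming numpy and matplotlib; the smallest angular momentum plotted is the ISCO value of Eq. (13.86), for which the barrier and the minimum merge at r = 3R_S.

```python
import numpy as np
import matplotlib.pyplot as plt

RS = 2.0                                     # units with G = M = m = c = 1

def phi_eff(r, l):
    # Effective potential of Eq. (13.83)
    return -1.0 / r + l**2 / (2 * r**2) - RS * l**2 / (2 * r**3)

r = np.logspace(0.2, 2.0, 500)
for l in (np.sqrt(3) * RS, 4.0, 5.0):        # smallest l satisfies Eq. (13.86)
    plt.plot(r, phi_eff(r, l), label=f'l = {l:.2f}')

plt.axvline(3 * RS, ls='--', color='k')      # ISCO at r = 3 RS, Eq. (13.87)
plt.xscale('log'); plt.ylim(-0.2, 0.1)
plt.xlabel('Radius'); plt.ylabel('Effective potential'); plt.legend()
plt.show()
```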

[Figure 13.8 Orbit simulation with conditions near the homoclinic orbit, with R_S = 0.15 and R_ISCO = 0.44 in normalized units. A particle that begins with an ellipticity settles into a nearly circular orbit near the homoclinic saddle point, after which it spirals into the black hole.]

[Figure 13.9 Phase-space portrait (radial speed versus radius) with flow lines for one value of the (conserved) angular momentum. The ISCO is at 0.44 in these units. For the angular momentum in this simulation, a stable circular orbit occurs at 0.65. There is a homoclinic saddle point between stable orbits and orbits that are captured by the black hole. The Schwarzschild radius is at 0.15.]

A circular orbit is surrounded by strongly precessing orbits that are contained within a homoclinic orbit that terminates at the homoclinic saddle point (similar to the homoclinic orbit in Fig. 4.12). The nearly stable circular orbits in Fig. 13.8 occur at the homoclinic saddle. Accretion discs around black holes occupy these orbits before collisions cause them to lose angular momentum and spiral into the black hole.

13.8.2 Falling into a black hole

The Schwarzschild metric has an apparent singularity in the space term at the Schwarzschild radius. This can be either a coordinate singularity (such as the origin in polar coordinates) or a fundamental (physical) singularity. To state this another way: what would an observer experience as they travel through the Schwarzschild radius? What happens to the proper properties in their frame? To answer these questions, begin with the invariant velocity at constant θ and φ:

\[ g_{ab}\dot{x}^a\dot{x}^b = -\left(1 - \frac{2GM}{c^2 r}\right)c^2\dot{t}^2 + \frac{\dot{r}^2}{1 - \dfrac{2GM}{c^2 r}} = -c^2 \tag{13.88} \]

Similar to Eq. (13.69), a constant of the motion for a particle falling radially is

\[ \left(1 - \frac{2MG}{c^2 r}\right)c^2\dot{t} = E/m \tag{13.89} \]

Substituting into Eq. (13.88) gives

\[ -c^2 = \frac{\dot{r}^2}{1 - \dfrac{2GM}{c^2 r}} - \frac{E^2/m^2c^2}{1 - \dfrac{2MG}{c^2 r}} \tag{13.90} \]

On choosing E = mc² (for a particle falling from infinity), this becomes simply

\[ \dot{r}^2 = \frac{2GM}{r} \tag{13.91} \]

or

\[ c\,d\tau = \pm\sqrt{\frac{r}{R_S}}\,dr \tag{13.92} \]

Integrating gives

\[ \tau(r) = \int d\tau = \tau_0 + \frac{2R_S}{3c}\left[\left(\frac{r_0}{R_S}\right)^{3/2} - \left(\frac{r}{R_S}\right)^{3/2}\right] \tag{13.93} \]

[Figure 13.10 Coordinate time and proper time for a clock falling through the event horizon, plotted against radius r/R_S. A distant observer sees the clock slow down and stop as it asymptotically approaches the event horizon. An observer riding with the clock experiences nothing extraordinary when the event horizon is passed.]

where τ₀ is the proper time at a reference radius r₀. This expression is the proper time for a clock falling into the black hole, and it remains finite as the Schwarzschild radius is approached and passed, as opposed to the coordinate time, which diverges at the Schwarzschild radius, as shown in Fig. 13.10. The coordinate time is the time described by a distant observer, while the proper time is the time experienced by someone falling inward with the clock, for whom nothing noteworthy happens as the Schwarzschild radius is crossed.
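Figure 13.10 can be reproduced numerically from Eqs. (13.90) and (13.93). A minimal sketch in normalized units (R_S = c = 1), assuming numpy and matplotlib; for the E = mc² infall, the coordinate time follows from dt/dr = −√(r/R_S)/[c(1 − R_S/r)], which diverges at the horizon.

```python
import numpy as np
import matplotlib.pyplot as plt

RS, c, r0 = 1.0, 1.0, 3.0                    # start the fall from r0 = 3 RS

# Proper time of Eq. (13.93): smooth and finite through the horizon
r = np.linspace(0.01, r0, 1000)
tau = (2 * RS / (3 * c)) * ((r0 / RS)**1.5 - (r / RS)**1.5)
plt.plot(r / RS, tau, label='Proper time')

# Coordinate time: dt/dr = -sqrt(r/RS)/(c (1 - RS/r)); diverges at r = RS
ro = np.linspace(1.02 * RS, r0, 2000)
dtdr = np.sqrt(ro / RS) / (c * (1 - RS / ro))
dr = ro[1] - ro[0]
t = np.cumsum((dtdr * dr)[::-1])[::-1]       # integrate inward from r0 down to r
plt.plot(ro / RS, t, label='Coordinate time')

plt.axvline(1.0, ls='--', color='k')         # event horizon
plt.xlabel('Radius r/RS'); plt.ylabel('Time'); plt.legend(); plt.show()
```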

13.9 Gravitational waves

A new era of gravitational wave astronomy was opened with the announcement on February 11, 2016 of the detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO). The event had been detected on September 14, 2015 and had the signature of the coalescence of two black holes of about 30 solar masses. Since then, many additional events have been detected, including the merger of two neutron stars. Gravitational waves are among the weakest of phenomena detected by mankind, requiring cataclysmic events to launch them and ultrasensitive interferometers to detect them. Although Einstein recognized early in the development of general relativity that gravitational waves should exist, he believed they would be too weak to observe. It took a century to prove him wrong.

The Einstein field equations in vacuum,

\[ G^{\alpha\beta} = R^{\alpha\beta} - \frac{1}{2}g^{\alpha\beta}R = -\frac{1}{2}\left(-\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2\right)\left(h^{\alpha\beta} - \frac{1}{2}\eta^{\alpha\beta}h\right) = -\frac{1}{2}\Box^2\bar{h}^{\alpha\beta} = 0 \tag{13.94} \]

provide a wave equation

\[ \Box^2\bar{h}^{\alpha\beta} = 0 \tag{13.95} \]

This has the plane-wave solution

\[ \bar{h}^{\alpha\beta} = A^{\alpha\beta}\exp\left(ik_\alpha x^\alpha\right) \tag{13.96} \]

where k_α x^α is a constant (invariant) wave front and

\[ k_0 ct = k_x x + k_y y + k_z z = \vec{k}\cdot\vec{x} \tag{13.97} \]

which may further be expressed as

\[ \omega t = \vec{k}\cdot\vec{x} \tag{13.98} \]

where

\[ \omega = ck^0 \tag{13.99} \]

and the gravitational wave travels at the speed of light without dispersion (the phase velocity is equal to the group velocity). By imposing the Lorentz gauge

\[ \partial_\beta\bar{h}^{\alpha\beta} = 0 \tag{13.100} \]

(which is equivalent to ∇·E = −ik·E = 0 for electromagnetic waves), we obtain a condition on k_α and A^{αβ},

\[ A^{\alpha\beta}k_\beta = 0 \tag{13.101} \]

which states that gravitational waves are transverse: the displacement is perpendicular to the direction of propagation. Because of the tensor character of the wave, there are additional conditions that can be chosen for the gauge. For instance, the gauge can be chosen to make the tensor wave amplitude A^{αβ} traceless,

\[ A^\alpha_{\;\alpha} = 0 \tag{13.102} \]

or to give the condition

\[ A_{\alpha\beta}U^\beta = 0 \tag{13.103} \]

where U^β is a 4-velocity, and where the indices on the amplitude have been lowered using the Minkowski metric. The importance of this last condition can be found by making a Lorentz transformation to a frame in which the 4-velocity has only a time component. Then

\[ A_{\alpha 0}U^0 = A_{\alpha 0}\gamma c = 0 \qquad\Longrightarrow\qquad A_{\alpha 0} = 0 \tag{13.104} \]

Furthermore, if the propagation is along the z-axis, then

\[ A_{\alpha z} = 0 \tag{13.105} \]

because the wave is transverse. Finally, using the traceless property gives the following form for the wave amplitude:

\[ A^{\alpha\beta} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & A_{xx} & A_{xy} & 0 \\ 0 & A_{xy} & -A_{xx} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{13.106} \]

with only two free parameters A_xx and A_xy. This imposes a weak-field contribution to the metric tensor g_αβ = η_αβ + h_αβ:

\[ h^{\alpha\beta} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & h_{xx} & h_{xy} & 0 \\ 0 & h_{xy} & -h_{xx} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{13.107} \]

The polarization of a gravitational wave passing by an observer is obtained by considering the proper length between two particles in the weak-field metric

\[ ds^2 = -c^2 dt^2 + \left[1 + h_{22}(t)\right]dx^2 + \left[1 - h_{22}(t)\right]dy^2 + dz^2 + 2h_{23}(t)\,dx\,dy \tag{13.108} \]

First consider a wave with h₂₃ = 0 and two particles that are separated in x on the x–y plane with positions (x₀, y₀) and (x₀ + dx, y₀). The proper length between these particles is

\[ ds = \left[1 + \frac{1}{2}h_{22}(t)\right]dx \tag{13.109} \]

Similarly, considering two particles with positions (x₀, y₀) and (x₀, y₀ + dy), the proper length between these particles is

\[ ds = \left[1 - \frac{1}{2}h_{22}(t)\right]dy \tag{13.110} \]

The displacements along x and y in response to a passing gravitational wave are therefore 180° out of phase. The particle displacements for four consecutive times separated by a quarter wave period for the h₂₂ polarization are shown in Fig. 13.11.

Next consider the case when h₂₂ = 0. The proper length is

\[ ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2 + 2h_{23}(t)\,dx\,dy \tag{13.111} \]

On making the ordinary coordinate rotation

\[ \bar{y} = \frac{1}{\sqrt{2}}(y + x) \qquad \bar{x} = \frac{1}{\sqrt{2}}(-y + x) \tag{13.112} \]

the proper line element becomes

\[ ds^2 = \left[1 + h_{23}(t)\right]d\bar{x}^2 + \left[1 - h_{23}(t)\right]d\bar{y}^2 + dz^2 - c^2 dt^2 \tag{13.113} \]

which is identical in form to Eq. (13.108), with h₂₃ playing the role of h₂₂, but rotated by 45°. The particle displacements for four consecutive times separated by a quarter wave period for the h₂₃ polarization are also shown in Fig. 13.11.

[Figure 13.11 Gravitational wave polarizations. A ring of particles in the x–y plane responds as a gravitational wave propagates along the z-axis; the particles distort as the wave passes, shown at times t = 0, T/4, T/2, and 3T/4. There are two independent and orthogonal polarizations, denoted the "+" polarization (h₂₂) and the "×" polarization (h₂₃).]

The most important difference in polarization behavior between gravitational waves and electromagnetic waves is that orthogonal electromagnetic wave polarizations are at 90° to each other, while for gravitational waves they are at 45° to each other. This had important consequences for the design of the gravitational wave detectors at the LIGO facilities.
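The distortions of Fig. 13.11 follow directly from the proper lengths in Eqs. (13.109), (13.110), and (13.113). The sketch below (assuming numpy and matplotlib, with a greatly exaggerated strain amplitude) draws a ring of test particles at four instants for both polarizations; to first order in h, the "×" response displaces a particle at (x, y) by (hy/2, hx/2), which is the "+" pattern rotated by 45°.

```python
import numpy as np
import matplotlib.pyplot as plt

# Ring of test particles in the x-y plane; wave propagates along z (cf. Fig. 13.11)
phi = np.linspace(0, 2 * np.pi, 24, endpoint=False)
x, y = np.cos(phi), np.sin(phi)
h0 = 0.3                                          # greatly exaggerated strain

fig, axes = plt.subplots(2, 4, figsize=(10, 5))
for j, t in enumerate([0.0, 0.25, 0.5, 0.75]):    # times in units of the period T
    h = h0 * np.cos(2 * np.pi * t)
    # '+' polarization (h22): x stretches while y squeezes, Eqs. (13.109)-(13.110)
    axes[0, j].plot((1 + h / 2) * x, (1 - h / 2) * y, 'o')
    # 'x' polarization (h23): same pattern rotated by 45 degrees, Eq. (13.113)
    axes[1, j].plot(x + (h / 2) * y, y + (h / 2) * x, 'o')
    for ax in (axes[0, j], axes[1, j]):
        ax.set_aspect('equal'); ax.set_xlim(-1.5, 1.5); ax.set_ylim(-1.5, 1.5)
    axes[0, j].set_title(f't = {t}T')
plt.show()
```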

13.10 Summary

The intrinsic curvature of a metric space is captured by the Riemann curvature tensor, which can be contracted to the Ricci tensor and the Ricci scalar. Einstein took these curvature quantities and constructed the Einstein field equations that relate the curvature of space-time to energy and mass density. For a spherically symmetric mass density, the solution to the field equations is the Schwarzschild metric, which contains mass terms that modify both the temporal and spatial components of the invariant element. Consequences of the Schwarzschild metric include gravitational time dilation, length contraction, and redshifts. Trajectories in curved space-time are expressed as geodesics through the warped metric space. Solutions to the geodesic equation explain the precession of the perihelion of Mercury and the deflection of light by the Sun. Orbits around black holes are derived as geodesic flows. The coordinate singularity at the Schwarzschild radius can be removed by a local coordinate transformation, which shows that nothing drastic occurs for an infalling observer as they pass that threshold. Gravitational waves arise naturally from Einstein's field equations because the Ricci tensor reduces to a d'Alembertian wave operator.

13.11 Bibliography

J. J. Callahan, The Geometry of Space-Time (Springer, 2001).
S. Carroll, Spacetime and Geometry: An Introduction to General Relativity (Benjamin Cummings, 2003).
T.-P. Cheng, Relativity, Gravitation and Cosmology: A Basic Introduction (Oxford University Press, 2010).
R. D'Inverno, Introducing Einstein's Relativity (Oxford University Press, 1992).
D. F. Lawden, Introduction to Tensor Calculus, Relativity and Cosmology (Dover, 2002).
C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, 1973).
R. A. Mould, Basic Relativity (Springer, 1994).
B. F. Schutz, A First Course in General Relativity (Cambridge University Press, 1985).
S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity (Wiley, 1972).

13.12 Homework problems

1. Geodesic equations: Find the geodesic equations for the metric ds² = x²dx² ± y²dy².

2. Ricci tensor: Derive the Ricci tensor for a saddle (positive- and negative-curvature) surface z = x² − y².

3. Newton's Law: Derive Newton's Second Law explicitly from the geodesic equations for an arbitrary and time-independent force.

4. Einstein tensor: Show how the d'Alembertian operator emerges in the derivation of Eq. (13.29) for the Einstein tensor.

5. Null geodesic: For a photon, show that conservation of 4-momentum in curved space-time is equivalent to ∇_p p = 0, which defines a null geodesic.

6. Newtonian gravity: In the Newtonian limit, the equation of motion for a particle is the geodesic curve
\[ \frac{d^2x^\mu}{ds^2} + \Gamma^\mu_{00}\left(\frac{c\,dt}{ds}\right)^2 = 0 \]
where
\[ \Gamma^\mu_{00} = -\frac{1}{2}g^{\mu\nu}\frac{\partial g_{00}}{\partial x^\nu} \]
By expanding the metric in terms of flat Minkowski space, g_αβ = η_αβ + h_αβ, derive Eq. (13.12) for g_00, showing all the steps.

7. Gravitational potential: From g_00 = −(1 + 2φ), derive the Newtonian correspondence relations
\[ \Gamma^\mu_{00} = \frac{\partial\phi}{\partial x^\mu} \qquad R^\mu_{\;0\nu 0} = \frac{\partial^2\phi}{\partial x^\mu\partial x^\nu} \]

8. Schwarzschild metric: There are a total of 13 Christoffel symbols that are nonzero in the Schwarzschild metric
\[ ds^2 = -(1 + 2\phi)c^2 dt^2 + \frac{dr^2}{1 + 2\phi} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) \]
where φ(r) = −GM/c²r is the gravitational potential. Four of them are
\[ \Gamma^1_{11} = -\frac{1}{1 + 2\phi}\frac{GM}{c^2 r^2} \qquad \Gamma^0_{10} = \Gamma^0_{01} = \frac{1}{1 + 2\phi}\frac{GM}{c^2 r^2} \qquad \Gamma^1_{00} = (1 + 2\phi)\frac{GM}{c^2 r^2} \]
What are the other nine? Note the symmetry with respect to subscripts.

9. Geodesic equations: From the 13 Christoffel symbols of Problem 8, construct explicit expressions for the four geodesic equations.

10. Schwarzschild metric: Calculate the 18 nonzero components of the Riemann curvature tensor. These are R⁰₁₀₁, R⁰₂₀₂, R⁰₃₀₃, R¹₂₂₁, R¹₃₃₁, R¹₀₀₁, R²₁₂₁, R²₃₃₂, R²₀₀₂, R³₁₃₁, R³₂₃₂, R³₀₀₃, where the tensor components are antisymmetric with respect to the first two subscripts. As an example,
\[ R^0_{\;101} = \frac{R_S}{r^2(r - R_S)} = -R^0_{\;011} \qquad R^0_{\;202} = -\frac{R_S}{2r} = -R^0_{\;022} \qquad R^0_{\;303} = -\frac{R_S\sin^2\theta}{2r} = -R^0_{\;033} \]

11. Curved space-time: Is Schwarzschild space curved? Prove your results.

12. Equivalence Principle: A photon enters an elevator at a right angle to its acceleration vector g. Use the geodesic equation and the elevator metric
\[ ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 + dx^2 + dy^2 + dz^2 \]
to show that the trajectory is parabolic.

13. Tidal force: A photon enters an elevator at a right angle. Assume the space is subject to a tidal force metric
\[ ds^2 = -c^2 dt^2 + \left(1 + \frac{2GM}{c^2 r}\right)\left(dx^2 + dy^2 + dz^2\right) \]
What is the photon trajectory to lowest order? The coordinate origin in this case is at the center of the gravitating mass, and the elevator is far from the origin.

14. Orbits: Put Eq. (13.70) into Lagrange's equations and derive Eq. (13.71).

15. Photon orbits: Photon orbits can be derived using the same Lagrangian as for particles, except that L = −ds² = 0 for null geodesics. Use this modification in Eq. (13.70) to derive the differential equation for photon orbits. Find the smallest radius for a circular photon orbit.

16. Spherical metric: Consider a spherically symmetric line element
\[ ds^2 = -e^{2\Phi}c^2 dt^2 + e^{2\Lambda}dr^2 + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) \]
Derive the nonzero Christoffel symbols
\[ \Gamma^t_{rt} = \partial_r\Phi \qquad \Gamma^r_{tt} = (\partial_r\Phi)\,e^{2\Phi-2\Lambda} \qquad \Gamma^r_{rr} = \partial_r\Lambda \qquad \Gamma^\theta_{r\theta} = \Gamma^\phi_{r\phi} = \frac{1}{r} \]
\[ \Gamma^r_{\theta\theta} = -re^{-2\Lambda} \qquad \Gamma^r_{\phi\phi} = -r\sin^2\theta\,e^{-2\Lambda} \qquad \Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta \qquad \Gamma^\phi_{\theta\phi} = \cot\theta \]

17. Gravitational lensing: Derive the expression (13.67) for the angle subtended by the Einstein ring caused by gravitational lensing.

18. Gravitational magnification: Calculate the magnification factor of gravitational lensing.

19. Precession of the perihelion of Mercury: Derive the secular solution in Eq. (13.78) for the precession of the perihelion of Mercury.

20. Black hole orbital mechanics: Explore the fixed-point classifications for the flow in Eq. (13.82) as a function of angular momentum and particle kinetic energy.

21. Binding energy of an orbit: Derive the binding energy of a particle of mass m in a circular orbit at the ISCO.

22. Gravitational acceleration: Calculate the gravitational acceleration at R_S. Compare it to g for a central mass of 10 solar masses. Compare it to g for a central mass of 10⁶ solar masses. Explain the trend.

Appendix

A.1 Index notation: rows, columns, and matrices

This textbook uses index notation to denote the elements of vectors, matrices, and metric tensors. In terms of linear algebra, the correspondence between indices on the one hand and rows and columns on the other uses a simple convention: superscripts relate to column vectors, where the rows are indexed, and subscripts relate to row vectors, where the columns are indexed. Similarly, a matrix has a superscript that indexes the rows and a subscript that indexes the columns. In terms of metric spaces, superscripts on tensors are contravariant indexes, and subscripts on tensors are covariant indexes. A column vector is

\[ x^a = \begin{pmatrix} a \\ b \\ c \end{pmatrix} \tag{A.1} \]

and a row vector is

\[ x_a = \begin{pmatrix} a & b & c \end{pmatrix} \tag{A.2} \]

Matrix multiplication uses the implicit Einstein summation notation, in which a repeated index, one a superscript and the other a subscript, implies summation over that index. For instance, an inner product is

\[ x_a x^a = \begin{pmatrix} a & b & c \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = a^2 + b^2 + c^2 \tag{A.3} \]

Matrix multiplication of a column vector on the right is

\[ w^d_{\;b}x^b = \begin{pmatrix} d & e & f \\ g & h & i \\ j & k & l \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} da + eb + fc \\ ga + hb + ic \\ ja + kb + lc \end{pmatrix} \tag{A.4} \]

Matrix multiplication with a row vector on the left is

\[ x_a w^a_{\;b} = \begin{pmatrix} a & b & c \end{pmatrix}\begin{pmatrix} d & e & f \\ g & h & i \\ j & k & l \end{pmatrix} = \begin{pmatrix} ad + bg + cj & ae + bh + ck & af + bi + cl \end{pmatrix} \tag{A.5} \]

which has components that are different from those obtained from multiplying on the right. Thus, matrix multiplication is not commutative. The right eigenvectors of a matrix are solutions of

\[ w^a_{\;b}x^b = \lambda x^a \quad\longleftrightarrow\quad W x_R = \lambda x_R \tag{A.6} \]

while the left eigenvectors of a matrix are solutions of

\[ x_a w^a_{\;b} = \lambda x_b \quad\longleftrightarrow\quad x_L W = \lambda x_L \tag{A.7} \]

The left and right eigenvectors are related to each other: the left eigenvectors are the transpose of the right eigenvectors of the transposed matrix.

In terms of linear algebra, one can choose to work with either column vectors or row vectors, which are simply transposes of each other. Many chapters in this text use column vectors, but Chapter 9 on neurodynamics and Chapter 8 on evolutionary dynamics use row vectors. In metric spaces, column vectors (also known as contravariant vectors) are usually preferred for configuration-space representation over row vectors (also known as covectors), and Chapters 11–13 mainly use contravariant vectors. In Cartesian coordinates, the components of contravariant vectors are identical to the components of covariant vectors, but this is not generally true for coordinates that are encountered in relativity theory.

Projection operators play a role in several chapters, notably on neurodynamics (Chapter 9). Projection operators are outer products that are constructed as the Kronecker products of two vectors:

\[ P^a_{\;b} = x^a y_b = \begin{pmatrix} a \\ b \\ c \end{pmatrix}\begin{pmatrix} d & e & f \end{pmatrix} = \begin{pmatrix} ad & ae & af \\ bd & be & bf \\ cd & ce & cf \end{pmatrix} \tag{A.8} \]

A projection operator projects from one vector onto another:

\[ P^a_{\;b}y^b = \begin{pmatrix} ad & ae & af \\ bd & be & bf \\ cd & ce & cf \end{pmatrix}\begin{pmatrix} d \\ e \\ f \end{pmatrix} = \begin{pmatrix} ad^2 + ae^2 + af^2 \\ bd^2 + be^2 + bf^2 \\ cd^2 + ce^2 + cf^2 \end{pmatrix} = \left(d^2 + e^2 + f^2\right)\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \left(y_b y^b\right)x^a \tag{A.9} \]

If y^b is a unit vector, then the term in parentheses on the last line is unity, and the resultant of the projection is x^a. With appropriate normalization, the projection operator provides a quantitative measure of similarity or correlation between two vectors.
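In numerical work these operations map directly onto NumPy. A small sketch of the projection operator of Eqs. (A.8) and (A.9):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0])            # a unit vector

P = np.outer(x, y)                       # P^a_b = x^a y_b, Eq. (A.8)
print(P @ y)                             # recovers x, since y.y = 1, Eq. (A.9)
print(np.allclose(P @ y, (y @ y) * x))   # True
```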

A.2 The complex plane

Linear, or linearized, ordinary differential equations (ODEs) typically have exponential solutions, and the arguments of the exponential functions are often complex. Euler's formula for the imaginary exponential is

\[ \exp(i\omega t) = \cos\omega t + i\sin\omega t \tag{A.10} \]

with the inverse decompositions into real and imaginary parts

\[ \cos\omega t = \frac{1}{2}\left(e^{i\omega t} + e^{-i\omega t}\right) \qquad \sin\omega t = \frac{1}{2i}\left(e^{i\omega t} - e^{-i\omega t}\right) \tag{A.11} \]

The real and imaginary parts of the exponential are sines and cosines. On the complex plane, these are the x- and y-components of a phasor, shown in Fig. A.1. The rotation direction of the phasor is counterclockwise for exp(iωt) and clockwise for exp(−iωt).

[Figure A.1 Phasor diagram of the imaginary exponential z = re^{iωt} = r(cos ωt + i sin ωt) = x + iy, with x = Re z = r cos ωt and y = Im z = r sin ωt. The phasor rotates counterclockwise with increasing time for exp(iωt).]

A.3 Solution of linear and linearized ODEs

Many of the analytical solutions of dynamical systems are obtained for systems of linear ODEs, or for linearized solutions of nonlinear systems around fixed points. In either case, the coupled differential equations can be represented through the equation

\[ \dot{x}^a = \sum_{b=1}^{N}A^a_{\;b}x^b \tag{A.12} \]

for a = 1, . . ., N, where N is the number of equations. In vector form, this is

\[ \dot{\vec{x}} = A\vec{x} \tag{A.13} \]

In the case of linearized equations, the matrix A^a_b is the Jacobian matrix J^a_b. The eigenvalues of the matrix are λ_a, and the eigenvectors are ν^a. The general solution of the set of coupled linear differential equations is

\[ x^a = \sum_{b=1}^{N}C^a_{\;b}\nu^b\exp(\lambda_b t) \tag{A.14} \]

where the coefficients C^a_b are uniquely determined by initial conditions

\[ x^a_0 = \sum_{b=1}^{N}C^a_{\;b}\nu^b \tag{A.15} \]

and are solved using linear algebra (Cramer's rule). When N = 2, and if the coupled equations have simple symmetry, it is often convenient to adopt a complex-plane solution rather than a matrix approach. For instance, consider the set of two coupled differential equations

\[ \dot{x} = -x + y \qquad \dot{y} = -x - y \tag{A.16} \]

Add the first equation to i times the second equation (known as adding the equations in quadrature) to get

\[ (\dot{x} + i\dot{y}) = (-x + y) + i(-x - y) = -(x + iy) - i(x + iy) = -(1 + i)(x + iy) \tag{A.17} \]

Make the substitution

\[ q = x + iy \tag{A.18} \]

to convert the two equations into the one-dimensional complex ODE

\[ \dot{q} = -(1 + i)q \tag{A.19} \]

The assumed solution is

\[ q(t) = q_0 e^{i\omega t} \tag{A.20} \]

which is inserted into the equation to give

\[ \omega = -(1 - i) \tag{A.21} \]

and the general solution is

\[ q(t) = q_0 e^{-t}e^{-it} \tag{A.22} \]

Writing this out explicitly,

\[ x(t) + iy(t) = q_0 e^{-t}\left(\cos t - i\sin t\right) \tag{A.23} \]

Separating out the real and imaginary terms gives

\[ x(t) = q_0 e^{-t}\cos t \qquad y(t) = -q_0 e^{-t}\sin t \tag{A.24} \]

which are decaying sinusoidal solutions. The complex-plane approach can also be applied to second-order equations that have sufficient symmetry, such as

\[ \ddot{x} - 2\omega_1\dot{y} + \omega_0^2 x = 0 \qquad \ddot{y} + 2\omega_1\dot{x} + \omega_0^2 y = 0 \tag{A.25} \]

Add the equations in quadrature to get

\[ (\ddot{x} + i\ddot{y}) + 2i\omega_1(\dot{x} + i\dot{y}) + \omega_0^2(x + iy) = 0 \tag{A.26} \]

and make the substitution q = x + iy to convert the two equations into the one-dimensional complex ODE

\[ \ddot{q} + 2i\omega_1\dot{q} + \omega_0^2 q = 0 \tag{A.27} \]

The assumed solution is

\[ q(t) = q_0 e^{i\omega t} \tag{A.28} \]

which is inserted into Eq. (A.27) to give the secular equation

\[ \omega^2 + 2\omega_1\omega - \omega_0^2 = 0 \tag{A.29} \]

The solution to this quadratic equation in ω is

\[ \omega = -\omega_1 \pm\sqrt{\omega_1^2 + \omega_0^2} \tag{A.30} \]

and the general solution to the coupled equation is

\[ q(t) = q_1 e^{-i\omega_1 t}\exp\!\left(i\sqrt{\omega_1^2 + \omega_0^2}\,t\right) + q_2 e^{-i\omega_1 t}\exp\!\left(-i\sqrt{\omega_1^2 + \omega_0^2}\,t\right) \tag{A.31} \]

where the two coefficients are determined by the two initial conditions. To return to the x and y representation, rewrite the equation as

\[ q(t) = e^{-i\omega_1 t}q'(t) \tag{A.32} \]

where the primed solution is associated with the frequency √(ω₁² + ω₀²). Writing this out explicitly as

\[ x(t) + iy(t) = \left(\cos\omega_1 t - i\sin\omega_1 t\right)\left[x'(t) + iy'(t)\right] \tag{A.33} \]

and separating out the real and imaginary parts gives, individually,

\[ x(t) = x'(t)\cos\omega_1 t + y'(t)\sin\omega_1 t \qquad y(t) = -x'(t)\sin\omega_1 t + y'(t)\cos\omega_1 t \tag{A.34} \]

This is recognized as the matrix equation

\[ \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} \cos\omega_1 t & \sin\omega_1 t \\ -\sin\omega_1 t & \cos\omega_1 t \end{pmatrix}\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} \tag{A.35} \]

which is a rotation matrix applied to the primed solution. The frequency √(ω₁² + ω₀²) of the primed representation is always larger than ω₁, and this is interpreted as "fast" coordinates that rotate slowly. These equations and solutions are applied directly to Foucault's pendulum in Chapter 1. The fast solution is the swinging of the pendulum, while the slow rotation is the precession of the pendulum as the Earth spins on its axis.

Complex-plane approaches are particularly useful when there are two coordinates, coupled linearly, but with sufficient symmetry that the substitution q = x + iy can be made. The solutions are oscillatory or decaying, or both, and have easy physical interpretations. There are many examples in general physics that use this complex-plane approach, such as mutual induction of circuits, polarization rotation of electromagnetic fields propagating through optically active media, as well as Foucault's pendulum. The symmetry of Maxwell's equations between the E-field and the B-field makes the complex-plane approach common. However, general coupled equations often lack the symmetry to make this approach convenient, and the more general matrix approach is then always applicable.
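The analytic solution of Eqs. (A.16)–(A.24) can be verified against a direct numerical integration. A quick check (assuming SciPy is available, using the damped system as written in Eq. (A.16), with tolerances tightened so the comparison is meaningful):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The coupled system of Eq. (A.16): x' = -x + y, y' = -x - y
sol = solve_ivp(lambda t, s: [-s[0] + s[1], -s[0] - s[1]],
                (0.0, 5.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 5.0, 11)
x, y = sol.sol(t)
print(np.allclose(x, np.exp(-t) * np.cos(t)))    # True: matches Eq. (A.24)
print(np.allclose(y, -np.exp(-t) * np.sin(t)))   # True
```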

A.4 Runge–Kutta numerical solvers for ODEs

The solution of time-dependent ODEs is a broad topic encompassing many methods, with trade-offs among accuracy, stability, and computation time. The simplest (and oldest) is Euler's method (Fig. A.2). Given a one-dimensional ODE dy/dx = f(x, y) with an initial condition (x_n, y_n), the (n + 1)th solution is

$$y_{n+1} = y_n + h f(x_n, y_n) \tag{A.36}$$

for a step size h. The error is of order h and hence can be large.

Figure A.2 Euler's method linearizes the function by taking the slope at (x_n, y_n); the projected value y_{n+1} = y_n + h f(x_n, y_n) differs from the true y_{n+1} by an error of order O(h). It is only accurate to first order.
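A minimal sketch of an Euler stepper in Python follows; the stepper name and the test problem dy/dx = -2y (exact solution y = exp(-2x)) are illustrative, not from the text:

# A minimal sketch of Euler's method, Eq. (A.36).
def euler_step(f, x, y, h):
    # One Euler step: y_{n+1} = y_n + h*f(x_n, y_n)
    return y + h * f(x, y)

f = lambda x, y: -2.0 * y          # dy/dx = -2y, exact solution y = exp(-2x)
x, y, h = 0.0, 1.0, 0.01
for _ in range(100):               # integrate from x = 0 to x = 1
    y = euler_step(f, x, y, h)
    x += h
print(y)                           # ~0.1326 versus exp(-2) = 0.1353: error of order h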

Figure A.3 Runge–Kutta method for the solution of ODEs with four intermediate approximates; the local error is of order O(h^5).

A much more accurate approximate solution is the Runge–Kutta approach (Fig. A.3). The iterated solution in this case is

$$y_{n+1} = y_n + \frac{k_1}{6} + \frac{k_2}{3} + \frac{k_3}{3} + \frac{k_4}{6} + O(h^5) \tag{A.37}$$

$$k_1 = h f(x_n, y_n)$$
$$k_2 = h f\!\left(x_n + \frac{h}{2},\; y_n + \frac{k_1}{2}\right)$$
$$k_3 = h f\!\left(x_n + \frac{h}{2},\; y_n + \frac{k_2}{2}\right)$$
$$k_4 = h f(x_n + h,\; y_n + k_3)$$

The Runge–Kutta method is accurate and computationally efficient, and is sufficient for many applications in dynamical systems that have well-behaved flows. However, in numerical analysis caution is always a virtue, and some systems have solutions that are unstable, or nearly so, for which other techniques are necessary. It is also important to understand the role of iterative error in a system. For dissipative dynamical systems with attractors (limit cycles or strange attractors), the solution stays on the stable manifold, even though the attractor may be a fractal (strange) attractor. However, for some nondissipative systems, or systems that are unstable, the errors can accumulate, and the solutions then become inapplicable.
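The four intermediate approximates translate directly into a stepping function. The following is a minimal sketch (names and test problem are again illustrative):

# A minimal sketch of one fourth-order Runge-Kutta step, Eq. (A.37).
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + k1/6 + k2/3 + k3/3 + k4/6

f = lambda x, y: -2.0 * y          # same test problem as above
x, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk4_step(f, x, y, h)
    x += h
print(y)                           # agrees with exp(-2) = 0.135335... to better than 1e-9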

A.5 Tangents and normals to a curve in the plane

Tangents and normals to (possibly high-dimensional) trajectories or orbits are common elements in dynamics. In the simplest case of a trajectory in a plane, there are simple equations that define the tangents and normals. The tangent line to a curve in the plane defined by the function F(x, y) = 0 is

$$A(x - x_0) + B(y - y_0) = 0 \tag{A.38}$$

where the partial derivatives A = ∂F/∂x and B = ∂F/∂y are evaluated at the point (x_0, y_0). The unit tangent vector to the curve defined by F(x, y) = 0 is

$$\hat{T} = \left(\frac{dx}{ds}, \frac{dy}{ds}\right) = \frac{\left(1, \dfrac{dy}{dx}\right)}{\sqrt{1 + \left(\dfrac{dy}{dx}\right)^2}} = \frac{(1, -A/B)}{\sqrt{1 + (A/B)^2}} = \frac{(B, -A)}{\sqrt{A^2 + B^2}} \tag{A.39}$$

The normal line to the curve at the point (x_0, y_0) is

$$B(x - x_0) - A(y - y_0) = 0 \tag{A.40}$$

and the unit normal vector to the curve at the point (x_0, y_0) is

$$\hat{N} = \frac{1}{\kappa}\left(\frac{dT_x}{ds}, \frac{dT_y}{ds}\right) = \frac{(A, B)}{\sqrt{A^2 + B^2}} = \frac{\nabla F(x, y)}{|\nabla F(x, y)|} \tag{A.41}$$

where κ is the curvature of the curve at the point (x_0, y_0).
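These formulas translate directly into code. The following sketch (assuming NumPy is available; the function name is illustrative) evaluates Eqs. (A.39) and (A.41) for the unit circle F(x, y) = x² + y² − 1:

# A minimal sketch of Eqs. (A.39) and (A.41) for F(x, y) = x**2 + y**2 - 1.
import numpy as np

def tangent_normal(A, B):
    # Unit tangent (B, -A)/|gradF| and unit normal (A, B)/|gradF|
    norm = np.hypot(A, B)
    return np.array([B, -A]) / norm, np.array([A, B]) / norm

x0 = y0 = 1.0 / np.sqrt(2.0)       # a point on the unit circle
A, B = 2*x0, 2*y0                  # partial derivatives of F at (x0, y0)
T, N = tangent_normal(A, B)
print(T, N)   # T = (0.707, -0.707); N = (0.707, 0.707), radially outward as expected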

A.6 Elliptic integrals

Elliptic integrals are encountered routinely in the study of periodic systems such as gravitational orbits and pendula. The incomplete elliptic integral of the second kind is

$$E(\alpha, k) = \int_0^{\alpha}\sqrt{1 - k^2\sin^2\theta}\; d\theta \tag{A.42}$$

The complete elliptic integral of the second kind is

$$E(k) = \int_0^{\pi/2}\sqrt{1 - k^2\sin^2\theta}\; d\theta \tag{A.43}$$

The circumference of an ellipse with semimajor axis a is expressed in terms of the complete integral as

$$C = 4aE(e) \tag{A.44}$$

where the eccentricity e of the ellipse is given by

$$e = \sqrt{1 - b^2/a^2} \tag{A.45}$$

E(k) is a weakly varying function of its argument, decreasing from π/2 at k = 0 to unity at k = 1. The incomplete elliptic integral of the first kind is

$$K(\alpha, k) = \int_0^{\alpha}\frac{d\theta}{\sqrt{1 - k^2\sin^2\theta}} \tag{A.46}$$

and has as its limit the complete integral when α = π/2:

$$K(k) = \int_0^{\pi/2}\frac{d\theta}{\sqrt{1 - k^2\sin^2\theta}} \tag{A.47}$$

The complete integral of the first kind varies slowly with its argument k for small and intermediate values, but diverges as k approaches unity. The complete elliptic integrals of the first and second kind are shown in Fig. A.4.

Figure A.4 Complete elliptic integrals of the first kind K(k) and second kind E(k), plotted over 0 ≤ k ≤ 1.
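Both functions are available in SciPy. A minimal sketch (assuming SciPy is available) tabulates a few values along the curves of Fig. A.4; note the common convention pitfall that scipy.special.ellipk and ellipe take the parameter m = k², not the modulus k used in Eqs. (A.43) and (A.47):

# A sketch of the complete integrals in Fig. A.4 using SciPy.
from scipy.special import ellipk, ellipe

for k in [0.0, 0.5, 0.9, 0.99]:
    # SciPy's ellipk/ellipe take the parameter m = k**2, not the modulus k
    print(f"k = {k:4.2f}   K(k) = {ellipk(k**2):7.4f}   E(k) = {ellipe(k**2):7.4f}")
# K(0) = E(0) = pi/2 = 1.5708; E(k) -> 1 while K(k) diverges as k -> 1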

As an example, the period of a pendulum is expressed in terms of the incomplete elliptic integral of the first kind. Beginning with the Hamiltonian

$$H = \frac{p_\phi^2}{2I} + mgd(1 - \cos\phi) = E \tag{A.48}$$

the momentum (with moment of inertia I = md²) is

$$p_\phi = md^2\frac{d\phi}{dt} = \sqrt{2md^2E}\sqrt{1 - \frac{2mgd}{E}\sin^2\frac{\phi}{2}} \tag{A.49}$$

which can be re-expressed as

$$dt = \frac{md^2\, d\phi}{\sqrt{2md^2E}\sqrt{1 - \dfrac{2mgd}{E}\sin^2\dfrac{\phi}{2}}} \tag{A.50}$$

This is integrated to give the quarter-period of the pendulum:

$$\frac{T}{4} = \frac{md^2}{\sqrt{2md^2E}}\int_0^{\phi_{\max}}\frac{d\phi}{\sqrt{1 - \dfrac{2mgd}{E}\sin^2\dfrac{\phi}{2}}} \tag{A.51}$$

where

$$\sin\left(\frac{\phi_{\max}}{2}\right) = \sqrt{\frac{E}{2mgd}} \tag{A.52}$$

Hence, the period is given by

$$T = \frac{4md^2}{\sqrt{2md^2E}}\int_0^{\phi_{\max}}\frac{d\phi}{\sqrt{1 - \dfrac{2mgd}{E}\sin^2\dfrac{\phi}{2}}} = 8d\sqrt{\frac{m}{2E}}\; K\!\left(\frac{\phi_{\max}}{2},\; \sqrt{\frac{2mgd}{E}}\right) \tag{A.53}$$

which, through Eq. (A.52), can be expressed in terms of the maximum angle in both arguments of the incomplete integral.
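As a numerical cross-check, the following sketch (assuming SciPy is available; the values m = d = 1, g = 9.8, and φmax = 60° are illustrative) integrates Eq. (A.51) directly and compares it with the equivalent complete-integral form T = 4√(d/g) K(k), k = sin(φmax/2), obtained from Eq. (A.53) by the standard substitution sin θ = sin(φ/2)/sin(φmax/2):

# A numerical check of Eqs. (A.51)-(A.53) with assumed parameter values.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

m, d, g = 1.0, 1.0, 9.8
phi_max = np.radians(60.0)
E = 2*m*g*d*np.sin(phi_max/2)**2               # energy fixed by Eq. (A.52)

# Direct quadrature of the quarter-period integral, Eq. (A.51):
integrand = lambda phi: 1.0/np.sqrt(1.0 - (2*m*g*d/E)*np.sin(phi/2)**2)
quarter, _ = quad(integrand, 0.0, phi_max)
T_direct = 4.0 * (m*d**2/np.sqrt(2*m*d**2*E)) * quarter

# Equivalent complete-integral form (SciPy's ellipk takes m = k**2):
T_elliptic = 4.0*np.sqrt(d/g)*ellipk(np.sin(phi_max/2)**2)
print(T_direct, T_elliptic)   # both ~2.154 s; the small-angle limit is 2*pi*sqrt(d/g) ~ 2.007 s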


A.7 MATLAB and Python programs for numerical homework

Many of the chapters end with computational homework problems. MATLAB and Python programs can be found at the Berkeley Press website http://www.works.bepress.com/ddnolte/ The posted MATLAB and Python programs are meant as starting points for these homework problems, but most often will need to be modified to complete the assignment. Additional materials related to the subject matter and homework problems can be found in the Companion to Modern Dynamics, also at the Berkeley Press website.

Index

acceleration 25, 28–31, 52, 82–83, 337–338, 397, 406, 417–418, 420, 422–426 acceleration, angular 28, 29 acceleration, linear 28, 29, 417–418, 420, 422–426, 428–429, 458–459 action 70, 79, 89, 91, 96, 103–107, 174, 378 action, least 53, 356, 373, 378 action, stationary 1, 55–56, 79 action integral 55–56, 70, 103–105, 170, 174, 379 action integral, augmented 56–57 action metric 379–382 action potential 277–281 action-angle 70, 83, 89, 91, 96–102, 106–107, 156–160, 164–165, 174 action-angle oscillator 181, 184, 200 action-reaction 42 activation function 286–287, 296–297, 304–305 activation gate 279 activation variable 279–281, 283–285 adaptive expectations 329, 348 adiabatic invariant 83, 103–105, 107 adjacency matrix 210, 228, 236, 238–239, 241, 275 adjustment 309, 314–315, 317, 319–321, 323, 325–326, 330–331, 336, 338, 344, 348, 350, 352 affine 383, 389, 392 aggregate demand 329 AIDS 251 amplitude-frequency coupling 112, 152 AND 295, 307

Andronov-Hopf bifurcation 152 antigen 250 antimatter 408–409 arc length 13, 49, 366–368, 385, 404 area-preserving map 163, 168 Arnold diffusion 170 Arnold tongue 183–184, 190, 200, 202 Arnold, Vladimir I. 170, 174 artificial neuron 286–287, 289, 303 artillery 29 associative memory 124, 295, 298 associative recall 304 asymmetric 179–180, 182, 200–201, 254, 257, 275, 304–305 asynchronous 296, 299, 306 atlas 357 attractor 113–114, 122, 126, 141, 144, 146, 148–149, 153, 160, 185–186, 201, 265, 269, 295, 351, 468 attractor, Lorenz 111 attractor, Rössler 143, 153 attractor, strange 112, 142, 147–148, 154–155, 163, 174, 225–226, 468 Atwood machine 65 autonomous phase oscillator 184, 192, 195 axis, major 73–74, 77–79, 107, 445, 469 axon 277–280 bacteria 249, 252 barter 309–315, 321, 348, 350, 352 basin of attraction 124

basis vector 18–19, 21–23, 26–27, 48, 50, 69, 357–358, 361–363, 382–383, 430 beat frequency 193–202 bias current 279–280, 283–284 bifurcation 112, 132–137, 148, 152, 176, 202, 237, 241, 277, 280, 283–284, 303–304, 306, 338–339, 348 bifurcation cascade 134–137, 338–339, 348 bifurcation threshold 136, 304 biology 181, 198, 201, 208, 240, 252, 259, 286 Birkhoff 164–166 bistability 284–285, 303, 306 black hole 403, 420, 426, 437, 439, 442–444, 447–452, 456, 459 Black-Scholes equation 340, 344–345, 348–349 black swan 345–346, 348 blue shift 398, 424–425, 439 Boltzmann 102, 105 Boolean function 293–294 brain 181, 276, 289–291 budget line 313–314 bullet cluster 443 business cycle 308, 331–339, 348, 351 buying long 340, 342, 348 calculus of variations 53–55 cancer 244, 252–254, 271 canonical equations 88, 90–91, 95 canonical transformations 83, 89–91, 94, 96, 100, 106, 155, 160 capacitor 287, 289, 297, 335 carrying capacity 233–234, 244, 252 cascade 136, 167

cascade, bifurcation 134–137, 338–339, 348 cascade, subharmonic 153 Cassini 178, 191–192 Cauchy distribution 346–347, 352 center 10, 118–123, 132, 166, 246, 256–257, 331, 333, 335, 351 center-of-mass 29, 33, 36–38, 40, 46, 52, 71, 408–409 center-surround 291, 304 central force 70–78, 80–81, 446, 448 central limit theorem 346 centrifugal force 28–29, 51, 107, 427 chaos, dissipative 148, 160, 174 chaos, Hamiltonian 133, 149, 154–176 chaotic dynamics 135, 150, 174, 349, 352 characteristic equation 5, 115, 118, 140 chart 237, 356–357 chemotherapy 252 Chirikov 164, 170 Christoffel symbol 361–363, 365, 368, 372, 378–384, 403–404, 430, 457–458 circuit 153, 181, 279, 291, 335, 467 Clausius 67, 79 climate 29, 244 clique 214–215, 267 clock 7, 259, 386, 390, 393, 401, 420, 422–425, 438–439, 452 clock, light 393, 424 cluster 220, 226–227, 232, 238, 443 cluster, giant 226–227 cluster, percolation 227

clustering coefficient 241 cobweb model 331, 333, 348, 351 collision 389, 409, 443, 451 column vector 11, 16–17, 286, 401, 461–462 compartment 233, 237 competition 118, 120–121, 205, 207, 243–244, 247, 256, 270–273, 275, 308, 317–318, 321, 348 competitive equilibrium 314, 317 complete graph 208–209, 211, 216–217, 219, 221–222, 226, 240 complex plane 139–140, 463–465, 467 complex system 14, 78, 112, 178, 200, 205, 308 compromise frequency 177, 186, 196 Compton scattering 412–414 Compton wavelength 414 configuration space 4, 6–9, 11–12, 14–15, 48, 55, 73, 79, 98–99, 102–103, 173, 356, 366, 377–381, 462 conjugate momentum 69, 77, 84, 89, 103, 105 conjugate variables 96 conservation laws 67–70, 80, 254, 408 conservation, energy 57, 67–68, 80, 105 conservation, momentum 105 conservation, phase space volume 83, 91–92, 151, 154, 160, 174 constant of motion 446 constraint 56–57, 62–66, 91, 96, 161, 237, 256–257, 263, 311, 322–324, 351, 368, 404 content-addressable memory 276, 295, 298 contract curve 310, 312–314, 321, 350 contravariant 16, 255, 358, 360, 461–462 control parameter 133, 148, 265, 275, 280, 298, 338 convex 85–86, 310, 314

coordinate transformation 3–4, 12, 15–17, 19, 22, 48, 50, 69–70, 89, 93–94, 106, 356–358, 383, 385–389, 427, 456 coordinates, affine 383, 389, 392 coordinates, Cartesian 11, 14–15, 18–19, 22, 69, 121, 357, 360–361, 377, 383, 462 coordinates, curvilinear 18–19, 50, 361, 385, 430 coordinates, cyclic 15, 69, 89 coordinates, cylindrical 19, 21 coordinates, generalized 14–19, 48–49, 55–58, 63, 65, 69–70, 80–81, 87, 89, 91, 95–96, 101, 105, 148–149, 357, 377 coordinates, ignorable 15, 69, 77, 80 coordinates, polar 14, 19, 50, 121, 126, 128, 151, 159, 202, 360–364, 372, 451 coordinates, spherical 21, 50, 70, 384 Coriolis force 25, 28–30, 48, 50–51, 427 coupled oscillator 177–204 coupling coefficient 183, 220, 222–223, 225 coupling strength 183–184, 191, 200–201, 203, 217, 219–220, 224 Cournot duopoly 318–319, 321, 351 covariant derivative 84, 362, 364–365, 369, 383 covariant vector 16, 358, 362, 383, 462 covector 11, 16, 18–19, 26, 69, 356, 358–360, 364, 382, 386, 400–401, 406, 462 critical phenomena 221, 240, 345 cross product 25, 27, 30 cross-reactive response 250–251 curvature 12, 49, 291, 372, 377, 379, 381, 398, 427,

429–432, 434, 436, 439–444, 456–458, 469 cyclic variable 69 dark matter 443 D’Alembertian 434–435, 456–457 Darboux 379 deflection of light 50, 373, 426, 439–444, 447, 456 degeneracy 41, 97, 100, 158–159, 170–171, 211 degree (graph) 208–217, 220, 223, 227–229, 231, 233, 235–236, 239, 242 degrees of freedom (DOF) 32, 66, 70, 78, 80, 91, 96, 101–102, 105, 111, 148–149, 160, 164, 170, 243 delay 296, 330–331, 334, 348, 387, 440 delta rule 292, 294–296, 304, 306 demand 309, 313, 315–317, 321, 323, 325–326, 328–329, 331–333, 348, 350–352 dendrite 277, 280, 284 depolarization 279 derivative (economic) 308, 339, 341–342, 344 deterministic chaos 109 differential geometry 360, 364, 382 diffusion 170, 207–208, 226, 228–231, 238, 240, 340, 344, 349 dimensionality 1, 15, 63, 70, 105, 109, 128, 137, 145, 155, 221, 276, 339 diminishing rate of return 310 diode 153 Diophantine 169–170, 191 Dirac 105, 165 discrete map 129, 134, 140, 163–166, 186–188, 200, 202, 230, 331, 333 dispersion curve 180 dissipation 10, 62, 70, 125, 144, 154–155, 163, 178, 186, 384 distance matrix 211–212, 214–217, 239

divergence 93, 117, 122–123, 144, 153, 348, 364, 437, 441, 448 DNA 260, 263 Doppler effect 393, 397, 424 double-well 147, 152–153, 155–157, 176 drag 5, 62 driven damped pendulum 49, 145–148, 152, 155 drug resistance 252–254 dual space 85 Duffing 153 dumbbell 40, 43, 165 dynamic networks 207, 209, 225, 308 Earth 29–30, 32, 50–52, 168, 170, 191, 370, 384, 425, 429, 443–444, 467 economics 208, 268, 309, 315, 318–319, 321, 331, 334–335, 344–345, 347, 349 econophysics 121, 339, 344–345, 348–349 ecosystem 178, 244–245, 247, 249, 273 Eddington-Finkelstein coordinates 437 effective potential 71–73, 75–76, 448–449 efficient market model 340 Eigen, Manfred 260 eigenfrequency 45, 180, 201 eigenvalue 41, 116, 201, 210–211, 224–225, 231, 241, 261, 274 eigenvector 41, 124, 149, 301 eikonal 375 Einstein arc 443, 459 Einstein summation convention 17, 19, 26, 33, 68, 84, 96, 255, 286, 358, 391, 461 Einstein tensor 433–434, 457 Einstein’s field equations 432–434, 436, 453, 456 elevator 50, 418, 420, 421, 458 elliptic integral 101, 107, 469–471 elliptical orbit 445, 447, 449

embedding 144, 221 emergence 112, 122, 155, 157, 160, 166, 250, 253, 259, 271, 274 endowment 309–310, 312–314, 321, 350 entrainment 190, 198–199, 202, 219–221, 242 environment 233, 289 environmental 252–254, 260 epidemic 226, 232, 235–236, 238–239, 242 equilibrium, dynamic 8, 62, 137, 277 equilibrium, Nash 268–269, 320–321, 351 Equivalence principle 418, 420, 422, 428–429, 436, 439, 458 Erdos-Rényi (ER) graph 208, 212–214, 225–227, 230, 240, 242, 275 ergodic 102, 172 error backpropagation 294 Euclidean 3, 11, 356–357, 385–386, 398, 400, 427, 429 Euler angles 24, 60 Euler equations 46, 48, 55–57, 79, 367, 374 Euler-Lagrange equations 57–59, 62, 64, 68, 71, 79, 81, 84–85, 88, 311, 377, 425 Euler’s equations 41–46, 48, 51 event horizon 420, 437, 439, 452 evolutionary dynamics 8, 118, 121, 208, 226, 243–275, 308, 331, 462 evolutionary stable solution 267, 271–272 excess demand 315–317, 321, 323, 351 expected rate of inflation 329–330 exponential growth 334, 340 external synchronization 192, 194, 200 extinction 121, 245, 247, 256–259, 265, 273

Federal Reserve 326 feed forward 291, 293 feedback 114, 245, 278–279, 289, 295–296, 304, 317, 325, 334–336, 348, 351 Feigenbaum number 136–137 Feynman diagram 412–413 field lines 4, 6, 48 field, vector 6, 116–117, 151, 364–365 field, velocity 116 first integral 68–69, 83–84, 101, 368 first-return map 128–131, 140, 148, 165 fiscal policy 324, 328–329 fitness landscape 244, 263, 266, 272 fitness peak 272, 274 Fitzhugh-Nagumo model 280–282, 304 fixed point, saddle 122–124, 132, 166, 323 fixed point, stable 120, 189, 235, 245, 280–281, 297, 317, 328, 333, 338 fixed point, unstable 120, 124, 133, 149, 235, 281 Floquet multiplier 130–131, 140–141, 148, 151–152, 188, 231, 333 flow equations 6–7, 92–93, 98–100, 113, 128, 144, 228, 243, 250, 336, 339, 344, 368, 384 flow line 8, 48, 50, 101, 116–117, 122, 124, 244, 450 flow, autonomous 141, 145, 195, 202 flow, mathematical 48, 184, 244, 344 flow, non-autonomous 145 fluctuations 209, 238–239, 247, 309, 315, 323, 340–345, 348–349 force-free motion 366–367, 377–378, 380–381, 403 force-free top 43, 45 force, central 70, 78, 80–81, 446, 448 Foucault’s pendulum 3, 30, 467 Fourier transform 165, 346

four-vector 359–360, 386, 390–391, 400–402, 411, 423, 425 fractal 112, 135, 141–142, 146, 148, 468 frame, body 24, 26, 33, 42–43, 45–47 frame, center-of-mass 71, 408–409 frame, fixed 25–28, 33, 45, 47, 59, 388–389, 394, 416–417, 419, 424 frame, inertial 28, 385–388, 406, 417–420, 422–423, 426 frame, laboratory 25, 389–394, 406, 408–410, 415–416, 419, 424 frame, non-inertial 25, 48, 415, 420, 422–423, 427 frame, reference 48, 388, 399, 406, 428, 436 frame, rotating 25–28, 31–32, 48, 51 frequency entrainment 112, 190, 194, 198–200, 202, 221 fundamental memory 295–297, 299–301, 306 gain 125–128, 135, 152, 194, 282, 287–288, 294, 298, 304–305, 336, 338 gain parameter 135, 152 Galilean relativity 387, 389, 406, 418 game theory 267–268, 270–272 ganglion cell 290–291 general equilibrium 309, 314, 321, 323, 348, 350, 352 general relativity 22, 48–49, 353, 366, 373, 377, 379, 381–382, 418, 423–424, 426–427, 433, 440, 445, 452, 456 geodesic curve 80, 366, 368, 377–378, 382, 457 geodesic equation 1, 353, 366, 368–371, 373, 377–382, 384, 402–403, 429, 432, 440, 456, 458 geodesic flow 368, 384

geometry, differential 360, 364, 382 geometry, Euclidean 3 geometry, Riemannian 400 giant component 227–228 global synchronization 216, 219–221, 224, 240 golden mean 107, 167, 170, 176, 189 Goodwin 334, 351 graph Laplacian 210–211, 223–224, 229, 231, 238, 241 gravitational field 13–14, 29, 49, 51, 58–59, 366, 382, 418, 422–423, 428, 436, 439 gravitational lensing 443–444 gravitational wave 452–456 great circle route 355, 370, 372–373, 384 gross domestic product (GDP) 325–326, 329–330 gyroscope 46–47, 60–61 Hamilton’s principle 1, 53, 57–58, 79, 373 Hamiltonian chaos 133, 149, 154–157, 159–160, 165–166, 171, 174–176 Hamiltonian dynamics 53, 83, 105–106, 173–174 Hamiltonian function 84, 89 Hamiltonian flow 94 Hamming distance 263–266, 275 harmonic oscillator 5–6, 8, 10, 31, 49, 58–59, 86, 89, 91, 97, 100, 104, 106–107, 109, 125, 127, 132, 153, 159, 162, 170, 194–195, 384, 405 harmonic oscillator, damped 5–6, 49 harmonic oscillator, damped driven 49 Hawk and Dove 270, 274 Heaviside function 238, 288 heavy tail 346–348 hedge 339–340, 342–343 helix 15, 49, 80, 165, 384 Helmholtz 67 Henon map 175–176 Henon-Heiles model 160–162, 176

Hertz, Heinrich 379 Hessian condition 87 heteroclinic 148 heterogeneous 204, 232, 236 hidden layer 293–295, 306 hidden neuron 293 HIV 249, 251–252 Hodgkin and Huxley model 279–280, 283 homoclinic 131–133, 148, 280, 283–286, 303, 306, 450–451 homogeneous 69, 232–235, 238–239, 435 Hopfield network 277, 296, 298, 301–302, 304–306 Huygens, Christiaan 177, 195 hyperbolic fixed point 165–168 hypercycle 259–260, 272, 274 hypersurface 14, 101, 148 hypertorus 96, 102, 184 hysteresis 112, 153, 306 ideal gas 79 ignorable variable or coordinate 15, 69, 77, 80 immune response 249–252 immunization 236 incommensurate 102 index notation 12, 16, 27, 255, 361, 461 index, contravariant 16 index, covariant 16 index, repeated 17, 26, 255, 461 indifference function 309–314 inertia, moment of 34, 36, 38–39, 48, 52, 97–98 inertia tensor 32, 34, 36–37, 39–42, 48, 51 infection 232–239, 242, 249, 251–252 inflation 308, 325, 327–330, 348 initial condition 4–6, 8, 10, 13, 18, 49–50, 70, 73, 91–92, 102, 112, 115, 121, 123–124, 126, 134–135, 142, 148, 153, 156, 160, 162–164, 173–174, 181–182, 195,

197, 203, 223, 230, 238, 246, 273, 303, 351, 376, 384, 450, 464, 466–467 innermost stable circular orbit (ISCO) 449 inoculation 238–239, 242 integrable 70, 96, 101–102, 105, 133, 148–149, 155, 160–161, 164–165, 174, 184 integrate-and-fire 181, 183, 186, 200–201 interconnectivity 232 interest rate 325–327 intermittency 112 invariance 23, 70, 104–105, 107, 406, 423 invariant hyperbola 399–400, 404 invariant interval 359, 399–402, 422–423, 428 invariant tori 148, 164–165, 170, 191 inventory 319, 337–339, 352 invertibility 87 investment savings 334, 337, 348 irrational 102, 107, 168–170, 174, 176, 191, 202, 308, 331, 344 IS-LM model 325–329, 336, 351 iterative map 134, 328 Jacobi, Karl Gustav Jacob 55 Jacobian determinant 94, 163, 392 Jacobian matrix 16–18, 50, 93–94, 115, 117–118, 121–123, 126, 132, 138, 148–149, 163, 245–246, 248–249, 316, 327, 383, 392, 464 Janus 191 k-vector 156 Kaldor 331, 337, 351 KAM 97, 155, 159, 164, 168, 170, 174–176, 186, 190–191 Kelvin 67 Kepler’s laws 73–74, 80 kicked harmonic oscillator 170

kicked rotator 165, 170, 187 kinetic energy 33, 39, 45, 56–57, 72–73, 78–79, 85, 88, 90, 105, 378, 402, 406–411, 425, 432, 446, 459 Kolmogorov 168, 175 Kronecker product 462 Kruskal coordinates 437 Kuramoto 97, 216–217, 219–221, 226, 240 labor 331, 334–335 Lagrange 55 Lagrange multipliers 56, 62–63, 311–312 Lagrangian 53, 57–60, 62–64, 66–72, 77, 79–88, 105, 149, 311–312, 355–356, 366–367, 374, 377, 382, 402–403, 425–426, 445, 458 language 243 latitude 30, 32, 50–51, 372 lattice 201, 209, 214, 220–222 leakage current 279 least action, principle of 53, 56, 373 Legendre transformations 85–89, 105 length contraction 393–394, 423–424, 439–440, 456 Lévy distribution 346–349, 352 libration 9 light clock 393, 424 light deflection, gravity 373, 426, 439–440, 442–444, 447, 456 limit cycle 8, 125–131, 133, 140–142, 148, 188, 194, 196, 202, 280–282, 284–285, 304, 306, 336–338, 351 line element 20, 88, 366, 371, 378, 400, 402, 431, 437, 455, 458 linear chain 209 linear cycle 208, 220 linear superposition 111–112 linearization 113, 118, 151, 315 link 208, 211, 214, 216–217, 221, 230, 241

Liouville’s Theorem 83, 91–93, 106, 154, 163, 174, 186, 201 liquidity money 325–326 logistic function 233, 288, 304 logistic map 134–136, 152 Lorentz boost 393, 395, 397 Lorentz factor 389, 409–410, 419 Lorentz transformation 13, 383, 385, 390–394, 397, 399, 411, 417, 421, 423–424, 454 Lorentzian 219, 347 Lorenz attractor 111 Lorenz butterfly 142, 153 Lotka-Volterra equation 118, 245, 271, 273, 331, 334–335, 348 Lotka, Alfred J. 245 Lozi map 163–164, 176 Lützen, J. 379 Lyapunov exponent 13, 18, 115, 118–119, 122, 128, 148–149, 152, 173, 235–236, 246, 277, 282, 297, 315, 338, 350 macroeconomics 324–325, 327, 330, 348 major axis 77–78, 107, 445 manifold 14–15, 123–124, 126, 131, 133, 142, 149, 186, 236, 317, 322–324, 355–357, 365–366, 378–379, 402, 433, 468 map, discrete 129, 134, 140, 163–166, 186–187, 200, 202, 230 mapping 134, 136, 188, 291, 333, 386 marginal case 73, 114, 121 marginal rate of return 318 marginal rate of substitution 309–310, 314 market 256, 315, 317–318, 321, 323–324, 332, 336, 340, 344, 348–349, 351 master function 224 master oscillator 182–184 Matlab 241, 472 mean field theory 217–219

membrane 277–281, 283–286, 306 metric tensor 19–22, 48, 50, 356–360, 365–366, 368, 371–372, 377–378, 382–383, 386, 398, 400, 420–423, 429–430, 433, 436, 454 Metzlerian matrix 248 microeconomics 309, 348 Mimas 192 minimax 268–269 Minkowski metric 359, 400, 422, 425, 434, 454 Minkowski space 18, 360, 386, 390, 398–400, 402, 404–405, 423, 426, 427, 437, 457 Minsky, Marvin 293 molecular evolution 260, 265, 272 momentum, angular 25, 39–43, 45, 48, 60, 70–71, 74, 77, 97–98, 100, 104–106, 160, 165, 445–446, 448, 450–451, 459 momentum, linear 69–70 monetary policy 325, 328, 330 money 308–309, 318, 325–330, 344, 348 money supply 328 Monte Carlo 238 moon 191–192 Moser 168 multilayer network 289, 293, 306–307 muon paradox 389–390 mutation matrix 261, 263, 266–267, 272, 275 mutation rate 265, 267, 274 mutual synchronization 195, 200 NAIRU 328, 330 NaK model 283–285, 303–304, 306 Nash arbitration 269 Nash equilibrium 268–269, 320–321, 351 natural selection 270–271 Navier Stokes 142 negative definite 248 neoclassical economics 318

network diameter 212, 216, 225–226 network topology 208, 210–211, 216, 223, 239 neural network 178, 276–277, 286–287, 289–291, 293, 295, 303–304 neurodynamics 8, 276, 284, 287, 303, 462 neuron, artificial 286–287, 289, 303 neuron, hidden 293–294, 306–307 neuron, output 290–291, 293–294 Newton’s Second Law 5, 28, 53, 57, 406, 415, 425, 457 Newtonian correspondence 426, 457 node, network 238, 286 node of Ranvier 277–278 Noether’s theorem 68–69 noise 302, 348 non-autonomous 7, 145, 148–149, 195 noncooperative 268–269 non-crossing theorem 117–118, 145, 148 non-inertial 25–27, 48, 415, 420, 422–423, 427–428 non-integrable 133, 160–163, 165, 174, 184 nonlinear dynamics 4, 7, 48–49, 111–153, 280, 303, 315, 348 normal mode 14, 177–180 notation, index 12, 16, 27, 255, 361, 461 null geodesic 366, 457 null space 301 nullcline 116–118, 120–123, 126, 149–150, 236–237, 244, 257, 273–274, 280–285, 304, 317, 324, 326–327, 329–330 observer 4, 48, 386–388, 394–395, 397–399, 406, 420, 424, 438–440, 451–452, 454, 456 ODE 8, 250, 344, 368–369, 465–467

optics 373, 375, 384 orbit, bound 76, 78 orbit, closed 166–167 orbit, elliptical 445, 447, 449 orbit, homoclinic 131–133, 284–286, 306, 450–451 orbit, parabolic 74, 123, 384 orbital mechanics 72, 459 oscillator, anisotropic 131–132, 384 oscillator, autonomous 7–8, 10, 50, 178, 184, 186, 192–195, 200 oscillator, coupled 178–181, 184, 187, 189–190, 192, 196, 200–201 oscillator, damped 146 oscillator, harmonic 5–6, 8, 10, 31, 49, 58–59, 86, 89, 91, 97, 100, 104, 106–107, 125, 127, 132, 153, 159, 162, 170, 194–195, 384, 405 oscillator, linear 10 oscillator, nonlinear 10, 125, 181, 183, 226, 303 oscillator, van der Pol 112, 125–128, 130, 150, 152, 192, 194, 196, 198, 200, 280, 282, 331 parallel transport 366, 369–370, 382 Pareto distribution 347–348 Pareto frontier 269, 313 Pareto optimal 269–270, 320–321 path length 12–13, 15, 49, 322, 347, 370, 373, 382, 383, 404 path length element 19, 48, 123, 357 pattern recall 298, 302–303 pattern recognition 302 payoff matrix 255–257, 259–260, 266, 268–270, 274 PDE 344, 349 pendulum 3, 7–9, 30–32, 49–50, 52, 53, 59, 70, 81–82, 97, 100–101, 107, 109, 112, 132, 145–146, 148, 152, 155–156, 158, 175–176, 177, 195–196, 383, 467, 471

pendulum, Foucault’s 3, 30, 53, 467 perceptron 290–295, 304–307 percolation 226–228 percolation threshold 221, 227–228, 233–234, 236–238, 240 period 32, 50, 52, 73–74, 77–78, 81, 96, 102, 104, 107, 122, 128, 132, 146–147, 152, 156–157, 164, 166–167, 181, 187, 191, 193, 201, 254, 282, 284, 318, 331, 338–339, 348, 376, 455, 471 period-five cycle 152, 257 period-four cycle 134–135 period-three cycle 135, 152, 257–258 period-two cycle 134–135, 189 permeability 277 persistence 284 phase locking 112, 177, 182, 185, 196–198, 203 phase oscillator 153, 184, 190, 192, 195, 198–204, 216–217, 219–220, 226, 242 phase plane 101, 103, 122, 124, 131, 159, 163, 171, 176, 244, 283–284, 317 phase portrait 116–118, 120, 126, 148, 150, 175, 185, 236, 246–247, 269, 273, 323–324 phase space 1, 9, 18, 83, 85, 87, 89, 91–106, 122–123, 128, 144, 146, 148, 149, 151, 154–156, 159–160, 163–165, 170, 173–174, 176, 450 Phillip’s curve 328–331, 348 photon 180, 366, 382, 390–391, 403–404, 411–414, 420, 422, 424–426, 437–440, 457–458 photoreceptor 290–291 piece-wise linear function 288 planet 77–78, 436, 445 Poincaré 122, 128, 133, 149, 163, 168 Poincaré-Birkhoff theorem 164–166

478 Index Poincaré oscillator 198–199, 217, 242 Poincaré section 128–130, 140, 146–147, 149, 152, 154, 162–167, 170, 174, 176, 187 Poisson bracket 94–96, 107 Poisson distribution 213 polarized 277 population dynamics 218, 233, 243–244, 249, 260, 271, 274 population pressure 245 postulates of relativity 386, 388, 390, 393 potassium channel 278–280, 283 potential energy 57–59, 65–66, 72–73, 75, 78–79, 88, 90, 98, 378, 403, 423 potential, effective 71–73, 76, 448–449 precession 32, 44–48, 61, 467 precession of the orbit of Mercury 439, 445–448, 456, 459 predator and prey 207, 260 price, spot or strike 344–345, 376 principal axes 40–42 probability distribution 218, 346–349 projectile 50–51 projection operator 299, 463 proper time 390, 401–403, 423, 425, 437–438, 445, 452 protein 154, 207–208 Python 472 quadrant 244 quadratic map 135–136 quantum chaos 171 quantum mechanics 94, 105, 168 quantum scar 173–174 quasar 443–444 quasi-species equation 260–263, 265–266, 272, 274 quasiperiodicity 161, 186, 200 random graph 208, 211–215, 227–228, 240–242 random matrix 257

random variable 341–342, 345 random walk 226, 263, 339–342, 349 rational 102, 159, 168–170, 174, 186–187, 189–191, 270, 308, 331, 344 ray equation 373–376, 440–442 Rayleigh number 142 RC time constant 288 reaction curve 319–320, 351, 408 recall 298, 301–304 reciprocal space 356 recurrence relation 129 recurrent network 277, 296, 304 red shift, Doppler 425 red shift, gravitational 438 reduced mass 71 reference frame 406, 428, 436 refractive index 373, 375, 384, 440–441 relative motion 417 repellor 113–114, 138–139, 149, 185 replicator equation 254–257, 259, 261, 265–266, 272–273 replicator-mutator 266–267, 272 reproduction 249, 271 reset 181, 183, 388 resilience 249 resistor 287–289, 297 resonance 157–159, 168–171, 189–192 resting potential 277–281, 284 retrieval 298–299 Ricci scalar 430–433, 456 Ricci tensor 430, 433, 456 Riemann curvature tensor 429–431, 434, 456, 458 Ricci tensor 410, 412, 433, 456–457 Riemann curvature tensor 429–431, 434, 456, 458 Riemannian metric 360, 378, 403, 423 rigid-body 32–34, 45, 48, 59 rk-model 233–234

rolling 44, 66–67, 82 Rössler system 143, 148, 152–153, 196, 223, 225–226, 242 rotating frame 25–28, 31, 51 rotation 32, 36–37, 40, 43, 45, 49, 51, 59, 65, 167, 191, 370, 383, 455 rotation matrix 22–24, 26, 32, 467 rotation, infinitesimal 26 rotator 97–98, 105, 164–167, 170, 187 row vector 11, 17, 360, 401, 461–462 Runge-Kutta 461, 467–468 saddle point 117–120, 123, 131–133, 138–139, 142, 147, 149, 155–157, 159, 162, 166–168, 245–246, 257, 284–285, 317, 323–325, 450–451 saddle-node bifurcation 132 saturation 234–235, 287 Saturn 177, 191, 198 Saturn’s rings 177, 191, 198 scale-free (SF) 208, 210, 212, 216–217, 221–222, 236, 238, 240, 275 Schuster, Peter 259–260 Schwann cell 277 Schwarzschild metric 436–438, 441, 445, 451, 456–458 Schwarzschild, Karl 436 secular determinant 10, 201 secular equation 179, 466 selection 243–244, 267, 270–272, 291 self-similarity 112, 135–136 self-sustained oscillation 7, 10, 125, 141 selling short 340, 342, 348 sensitivity to initial conditions (SIC) 112, 223 separatrix 157–159, 174, 186, 284 separatrix chaos 155–157, 159, 174, 186 sheep 118, 120, 122–124, 244–245

sigmoidal function 287, 290, 293 simplex 254–257, 263, 272, 274 sine-circle map 186–190, 198, 200, 202 skew-symmetric 25, 390 slow phase 187, 190, 192–193, 200–202, 218 small-world (SW) graph 212, 214–215, 240–241 sodium channel 278, 284, 303, 306 sodium-potassium pump 278 Solar System 97, 168, 170, 191, 448 solid state 180, 219–220 space-time 1, 3, 13–14, 22, 48, 353, 360, 366, 379, 381–382, 385, 388, 390–392, 402–405, 412, 419, 423, 426–427, 429, 436–438, 440–441, 456–458 space, Euclidean 11, 398, 429 space, phase 1, 9, 18, 83, 91–104, 106, 122–123, 128, 144, 146, 148–149, 151, 154–156, 159–160, 163–165, 170, 173–174, 176, 450 space, state 1, 3–10, 48, 50, 68, 74, 81, 84, 92–93, 109, 111, 116, 118–119, 127, 132, 136–137, 142, 144–149, 151–152, 178, 239, 247, 250, 298, 304, 449 special relativity 12, 14, 22, 48, 379, 385, 387–390, 406, 418, 420, 429, 440, 446 spike 278, 284, 303, 306 spin Hamiltonian 297 spiral, stable 119, 138–139, 237, 249, 327, 333, 336 spiral, unstable 119, 126, 138–139, 282, 336–337 spring constant 5, 58, 103–104, 107, 131–132, 180 square lattice 209, 220–222 stability analysis 10, 17, 140, 148, 280–281, 304, 309

stable equilibrium 245, 256, 273, 316–317, 320–321, 327–328, 338 stable distribution 346 stable manifold 117, 122, 124, 131, 133, 149, 186, 323–324, 468 standard configuration 388, 390, 392, 394 standard map 164–168, 170, 186 stadium 172–174 stationarity 54, 174, 356 statistical mechanics 172, 194, 208, 345 steady-state 126, 134, 178, 181, 235–237, 243, 245–246, 259, 271, 278, 303, 305 stochastic layer 155–157, 159, 160, 162, 164, 171, 174, 176 stochastic matrix 261, 272, 275 stochastic variable 340, 348 stock market 344, 348–349 stock price 308, 339–345, 348, 349 storage 298–299, 335 strategy 239, 268–272, 342 Strogatz-Watts 214, 241 subcritical 152 summation junction 286–287 Sun 50, 73–74, 77, 168, 382, 439–440, 443, 447–448, 456 superposition 109, 111–112, 172–174, 180, 200 supply 308–309, 313, 315–317, 328, 331–333, 348, 350–352 susceptible-infected (SI) 232 susceptible-infected-removed (SIR) 232, 236

susceptible-infected-susceptible (SIS) 232, 235 symbiosis 121, 151, 207, 244–245, 247, 271, 273 symmetry 14, 34, 36, 38, 40–41, 44–45, 48, 52, 59, 68–70, 91, 161, 171, 263, 358, 368, 378, 384, 403, 433, 437, 445, 457, 464–465, 467 symplectic geometry 91 synaptic weight 286–287, 289, 293–294, 296, 299–300, 302, 305 synchronization 181–204

torus 96, 101–103, 105, 165, 167, 170, 181, 185–187, 200–202, 355, 383–384 total derivative 89, 365 total differential 63, 87, 96 trace 93–94, 115, 120, 122, 236, 248, 257, 273–274, 282, 316–317, 320, 327, 335–336, 434–435, 453–454 trace-determinant space 120 trader 309–315, 321, 348, 350, 352 training 289, 291–294, 296, 299, 302–303, 307 trajectory, parabolic 13–14, 49, 382 transfer function 289–290, 292–293 transformation matrix 392 transformation, affine 389, 392 transformation, linear 17, 385, 389, 392 tree 209, 212, 221 trigger 183, 249 twist map 165–166 undetermined multiplier 56, 62–67, 311 unemployment 308, 328–330, 348 unit vector 12, 18–19, 24–25, 361, 463 unstable manifold 122–124, 131, 133, 141, 149, 186, 236, 317, 321, 323–324 unsupervised learning 289 utility 268, 309–314, 350

vaccination 226, 240 van der Pol oscillator 112, 125–128, 130, 150, 152, 192, 194, 196, 198, 200, 280, 282, 331 vector notation 25, 374 vector, column 11, 17, 401, 461 vector, row 11, 17, 360, 401, 461–462 velocity addition 393–395 velocity, angular 9, 26, 29–30, 33, 39–40, 42–45, 70, 128, 130, 156 viral load 250–251 virial theorem 78–80, 98 virus 3, 231–232, 249–251, 273 visual cortex 290 voltage-gated ion channels 278–279, 281–282 Volterra, Vito 245 Walras’ law 321–324, 351 wave mechanics 180 weak coupling 220, 225 web map 170–172, 176, 187 web page 216 wedge product 27 weight matrix 286, 293–294, 299–300, 302, 305 Wiener process 340, 342 winding number 102, 165, 167–168, 174 winner-take-all dynamics 121, 256, 259 world line 404–405, 420, 426, 437 world-wide web 207–208, 216 XOR 293–295, 305 zero-sum 244, 254, 256, 263, 267–268, 272, 321, 324, 351