Lyapunov Exponents: A Tool to Explore Complex Dynamics (ISBN 978-1-107-03042-8)



English · 298 pages · 2016







Lyapunov Exponents A Tool to Explore Complex Dynamics

Lyapunov exponents lie at the heart of chaos theory and are widely used in studies of complex dynamics. Utilising a pragmatic, physical approach, this self-contained book provides a comprehensive description of the concept. Beginning with the basic properties and numerical methods, it then guides readers through to the most recent advances in applications to complex systems. Practical algorithms are thoroughly reviewed and their performance is discussed, while a broad set of examples illustrates the wide range of potential applications. The description of various numerical and analytical techniques for the computation of Lyapunov exponents offers an extensive array of tools for the characterisation of phenomena, such as synchronisation, weak and global chaos in low- and high-dimensional setups, and localisation. This text equips readers with all of the investigative expertise needed to fully explore the dynamical properties of complex systems, making it ideal for both graduate students and experienced researchers.

Arkady Pikovsky is Professor of Theoretical Physics at the University of Potsdam. He is a member of the editorial board for Physica D and a Chaotic and Complex Systems Editor for J. Physics A: Mathematical and Theoretical. He is a Fellow of the American Physical Society and co-author of Synchronization: A Universal Concept in Nonlinear Sciences. His current research focusses on the nonlinear physics of complex systems.

Antonio Politi is the 6th Century Chair in Physics of Life Sciences at the University of Aberdeen. He is Associate Editor of Physical Review E, a Fellow of the Institute of Physics and of the American Physical Society, and was awarded the Gutzwiller Prize by the Max Planck Institute for Complex Systems in Dresden and the Humboldt Prize. He is co-author of Complexity: Hierarchical Structures and Scaling in Physics.

Lyapunov Exponents
A Tool to Explore Complex Dynamics

ARKADY PIKOVSKY
University of Potsdam

ANTONIO POLITI
University of Aberdeen

University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107030428

© Arkady Pikovsky and Antonio Politi 2016

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2016

Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Pikovsky, Arkady, 1956–
Lyapunov exponents : a tool to explore complex dynamics / Arkady Pikovsky, University of Potsdam, Antonio Politi, University of Aberdeen.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-03042-8 (Hardback : alk. paper)
1. Lyapunov exponents. 2. Differential equations. I. Politi, Antonio. II. Title.
QA372.P655 2016
515.352–dc23
2015032525

ISBN 978-1-107-03042-8 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

Preface  xi

1 Introduction  1
  1.1 Historical considerations  1
    1.1.1 Early results  1
    1.1.2 Biography of Aleksandr Lyapunov  3
    1.1.3 Lyapunov’s contribution  4
    1.1.4 The recent past  5
  1.2 Outline of the book  6
  1.3 Notations  8

2 The basics  10
  2.1 The mathematical setup  10
  2.2 One-dimensional maps  11
  2.3 Oseledets theorem  12
    2.3.1 Remarks  13
    2.3.2 Oseledets splitting  15
    2.3.3 “Typical perturbations” and time inversion  16
  2.4 Simple examples  17
    2.4.1 Stability of fixed points and periodic orbits  17
    2.4.2 Stability of independent and driven systems  18
  2.5 General properties  18
    2.5.1 Deterministic vs. stochastic systems  18
    2.5.2 Relationship with instabilities and chaos  19
    2.5.3 Invariance  20
    2.5.4 Volume contraction  21
    2.5.5 Time parametrisation  22
    2.5.6 Symmetries and zero Lyapunov exponents  24
    2.5.7 Symplectic systems  26

3 Numerical methods  28
  3.1 The largest Lyapunov exponent  28
  3.2 Full spectrum: QR decomposition  29
    3.2.1 Gram-Schmidt orthogonalisation  31
    3.2.2 Householder reflections  31
  3.3 Continuous methods  33
  3.4 Ensemble averages  35
  3.5 Numerical errors  36
    3.5.1 Orthogonalisation  37
    3.5.2 Statistical error  38
    3.5.3 Near degeneracies  40
  3.6 Systems with discontinuities  43
    3.6.1 Pulse-coupled oscillators  48
    3.6.2 Colliding pendula  49
  3.7 Lyapunov exponents from time series  50

4 Lyapunov vectors  54
  4.1 Forward and backward Oseledets vectors  55
  4.2 Covariant Lyapunov vectors and the dynamical algorithm  57
  4.3 Dynamical algorithm: numerical implementation  59
  4.4 Static algorithms  61
    4.4.1 Wolfe-Samelson algorithm  62
    4.4.2 Kuptsov-Parlitz algorithm  62
  4.5 Vector orientation  63
  4.6 Numerical examples  64
  4.7 Further vectors  65
    4.7.1 Bred vectors  66
    4.7.2 Dual Lyapunov vectors  67

5 Fluctuations, finite-time and generalised exponents  70
  5.1 Finite-time analysis  70
  5.2 Generalised exponents  73
  5.3 Gaussian approximation  77
  5.4 Numerical methods  78
    5.4.1 Quick tools  78
    5.4.2 Weighted dynamics  79
  5.5 Eigenvalues of evolution operators  80
  5.6 Lyapunov exponents in terms of periodic orbits  84
  5.7 Examples  89
    5.7.1 Deviation from hyperbolicity  89
    5.7.2 Weak chaos  90
    5.7.3 Hénon map  94
    5.7.4 Mixed dynamics  96

6 Dimensions and dynamical entropies  100
  6.1 Lyapunov exponents and fractal dimensions  100
  6.2 Lyapunov exponents and escape rate  103
  6.3 Dynamical entropies  105
  6.4 Generalised dimensions and entropies  107
    6.4.1 Generalised Kaplan-Yorke formula  107
    6.4.2 Generalised Pesin formula  109

7 Finite-amplitude exponents  110
  7.1 Finite vs. infinitesimal perturbations  110
  7.2 Computational issues  112
    7.2.1 One-dimensional maps  114
  7.3 Applications  115

8 Random systems  118
  8.1 Products of random matrices  119
    8.1.1 Weak disorder  119
    8.1.2 Highly symmetric matrices  125
    8.1.3 Sparse matrices  128
    8.1.4 Polytomic noise  131
  8.2 Linear stochastic systems and stochastic stability  136
    8.2.1 First-order stochastic model  136
    8.2.2 Noise-driven oscillator  137
    8.2.3 Khasminskii theory  141
    8.2.4 High-dimensional systems  142
  8.3 Noisy nonlinear systems  146
    8.3.1 LEs as eigenvalues and supersymmetry  146
    8.3.2 Weak-noise limit  149
    8.3.3 Synchronisation by common noise and random attractors  150

9 Coupled systems  152
  9.1 Coupling sensitivity  152
    9.1.1 Statistical theory and qualitative arguments  153
    9.1.2 Avoided crossing of LEs and spacing statistics  157
    9.1.3 A statistical-mechanics example  159
    9.1.4 The zero exponent  160
  9.2 Synchronisation  162
    9.2.1 Complete synchronisation and transverse Lyapunov exponents  162
    9.2.2 Clusters, the evaporation and the conditional Lyapunov exponent  163
    9.2.3 Synchronisation on networks and master stability function  164

10 High-dimensional systems: general  168
  10.1 Lyapunov density spectrum  168
    10.1.1 Infinite systems  171
  10.2 Chronotopic approach and entropy potential  173
  10.3 Convective exponents and propagation phenomena  178
    10.3.1 Mean-field approach  181
    10.3.2 Relationship between convective exponents and chronotopic analysis  183
    10.3.3 Damage spreading  185
  10.4 Examples of high-dimensional systems  187
    10.4.1 Hamiltonian systems  187
    10.4.2 Differential-delay models  191
    10.4.3 Long-range coupling  193

11 High-dimensional systems: Lyapunov vectors and finite-size effects  200
  11.1 Lyapunov dynamics as a roughening process  200
    11.1.1 Relationship with the KPZ equation  202
    11.1.2 The bulk of the spectrum  207
  11.2 Localisation of the Lyapunov vectors and coupling sensitivity  209
  11.3 Macroscopic dynamics  213
    11.3.1 From micro to macro  216
    11.3.2 Hydrodynamic Lyapunov modes  218
  11.4 Fluctuations of the Lyapunov exponents in space-time chaos  219
  11.5 Open system approach  223
    11.5.1 Lyapunov spectra of open systems  226
    11.5.2 Scaling behaviour of the invariant measure  226

12 Applications  229
  12.1 Anderson localisation  229
  12.2 Billiards  231
  12.3 Lyapunov exponents and transport coefficients  235
    12.3.1 Escape rate  235
    12.3.2 Molecular dynamics  236
  12.4 Lagrangian coherent structures  237
  12.5 Celestial mechanics  239
  12.6 Quantum chaos  242

Appendix A Reference models  245
  A.1 Lumped systems: discrete time  245
  A.2 Lumped systems: continuous time  246
  A.3 Lattice systems: discrete time  247
  A.4 Lattice systems: continuous time  248
  A.5 Spatially continuous systems  249
  A.6 Differential-delay systems  250
  A.7 Global coupling: discrete time  250
  A.8 Global coupling: continuous time  250

Appendix B Pseudocodes  252

Appendix C Random matrices: some general formulas  256
  C.1 Gaussian matrices: discrete time  256
  C.2 Gaussian matrices: continuous time  257

Appendix D Symbolic encoding  258

Bibliography  259
Index  277

Preface

With the advent of electronic computers, numerical simulations of dynamical models have become an increasingly appreciated way to study complex and nonlinear systems. This has been accompanied by an evolution of theoretical tools and concepts: some of them, more suitable for pure mathematical analysis, turned out to be less practical for applications; other techniques instead proved very powerful in numerical studies, and their popularity exploded. Lyapunov exponents are a perfect example of a tool that has flourished in the modern computer era, despite having been introduced at the end of the nineteenth century. The rigorous proof of the existence of well-defined Lyapunov exponents requires subtle assumptions that are often impossible to verify in realistic contexts (analogously to other properties, e.g., ergodicity). On the other hand, the numerical evaluation of Lyapunov exponents is a relatively simple task; therefore they are widely used in many setups. Moreover, on the basis of Lyapunov-exponent analysis, one can develop novel approaches to explore concepts, such as hyperbolicity, that previously appeared to be of a purely mathematical nature. In this book we attempt to give a panoramic view of the world of Lyapunov exponents, from their very definition and numerical methods to the details of applications to various complex systems and phenomena. We adopt a pragmatic, physical point of view, avoiding fine mathematical details. Readers interested in more formal mathematical aspects are encouraged to consult publications such as the recent books by Barreira and Pesin (2007) and Viana (2014). An important goal for us was to assess the reliability of numerical estimates and to enable a proper interpretation of the results.
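To illustrate why this task is comparatively simple, here is a minimal sketch (our own illustration, not one of the pseudocodes collected in Appendix B) of the standard tangent-space method for the largest exponent, applied to the fully chaotic logistic map, whose exponent is known analytically to equal ln 2:

```python
import math

def logistic(x):
    """Fully chaotic logistic map x -> 4x(1-x)."""
    return 4.0 * x * (1.0 - x)

def dlogistic(x):
    """Derivative of the logistic map, governing tangent-space dynamics."""
    return 4.0 * (1.0 - 2.0 * x)

def largest_le(x0=0.3, n_transient=1000, n_steps=100_000):
    """Largest Lyapunov exponent: evolve a tangent-space perturbation
    alongside the trajectory, accumulate its log-growth, and renormalise
    it to unit length at every step to avoid overflow."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = logistic(x)
    log_sum = 0.0
    for _ in range(n_steps):
        v = dlogistic(x)               # growth of a unit perturbation
        log_sum += math.log(abs(v))    # accumulate the local expansion rate
        x = logistic(x)                # advance the trajectory
    return log_sum / n_steps

print(largest_le())  # ≈ ln 2 ≈ 0.693 for the fully chaotic logistic map
```

For the logistic map the renormalisation is trivial (one dimension), but the same scheme, with QR decomposition replacing the scalar rescaling, underlies the full-spectrum methods of Chapter 3.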
In particular, it is not advisable to underestimate the numerical difficulties and thereby use the various subroutines as black boxes; it is important to be aware of the existing limits, especially in applications to complex systems. Although there are very few cases where the Lyapunov exponents can be determined exactly, methods to derive approximate analytic expressions are always welcome, as they help to predict the degree of stability without the need to perform possibly long simulations. That is why, throughout the book, we discuss analytic approaches as well as heuristic methods based more on direct numerical evidence than on rigorous theoretical arguments. We hope that these methods will be used not only for a better understanding of specific dynamical problems, but also as a starting point for the development of more rigorous arguments.

The various techniques and results described in the book started accumulating in the scientific literature during the 1980s. Here we have made an effort to present the main (according to our taste) achievements in a coherent and systematic way, so as to ease understanding for less experienced readers. An example is the perturbative approach to the weak-disorder limit, which has already been discussed in other reviews; here we present the cases of elliptic, hyperbolic and marginal matrices in a systematic manner. Although this is a book and, as such, mostly devoted to a coherent presentation of known results, we have also included novel elements wherever we felt that some gaps had to be filled. This is, for instance, the case for the finite-size effects in the Kuramoto model or the extension of the techniques developed by Sompolinsky et al. to a wider class of random processes. As a result, we are confident that the book can be read at various levels, depending on the needs of the reader. Those interested in straightforward applications to some simple cases will find the key elements in the first three chapters; the following chapters contain various degrees of in-depth analysis. Cross-references among the common points addressed in the various sections should help the reader to navigate across specific items.

The most important acknowledgement goes to the von Humboldt Foundation, which, by supporting the visit of Antonio Politi to Potsdam with a generous fellowship, has allowed us to start and eventually complete this project. Otherwise, writing the book would have been simply impossible. We discussed with, and received suggestions from, various colleagues. We specifically wish to acknowledge V. N. Biktashev, M. Cencini, H. Chaté, A. Crisanti, F. Ginelli, H. Kantz, R. Livi, Ya. Pesin, G. Puccioni, K. A. Takeuchi, R. Tonjes and H.-L. Yang. Antonio Politi also wishes to acknowledge A. Torcini and S. Lepri as long-term collaborators who contributed to the development of some of the results summarised herein. Special thanks go to P. Grassberger, who, more than 10 years after the publication of a joint paper with G. D’Alessandro, S. Isola and Antonio Politi on the Hénon map, was able to dig out some data to determine what is still the most accurate estimate of the topological entropy of that map.
As laziness had prevented the dissemination of those results, we have made an effort to include them in this book. We also wish to thank E. Lyapunova, the grand-niece of A. M. Lyapunov, who provided a high-quality photograph of the scientist who originated this whole story. We finally warmly thank S. Capelin of Cambridge University Press, who has been patient enough to wait for us to complete the work. We hope that the delay has resulted in a much better product; although it is surely still far from perfect, at some point we had to stop.

1 Introduction

1.1 Historical considerations

1.1.1 Early results

The problem of determining the stability of a given regime (e.g. the motion of the solar system) is as old as the concept of a dynamical system itself. As soon as scientists realised that physical processes could be described in terms of mathematical equations, they also understood the importance of assessing the stability of various dynamical regimes. It is thus no surprise that many eminent scientists, such as Euler, Lagrange, Poincaré and Lyapunov (to name a few), engaged themselves in properly defining the concept of stability. Lyapunov exponents are one of the major tools used to assess the (in)stability of a given regime. Within the hard sciences, where there is a long-standing tradition of quantitative studies, Lyapunov exponents are naturally used in a large number of fields, such as astronomy, fluid dynamics, control theory, laser physics and chemical reactions. More recently, they have started to be used also in disciplines such as biology and sociology, where processes can nowadays be accurately monitored (e.g. the propagation of electric signals in neural cells, and population dynamics). The reader interested in a fairly accurate historical account of how stability has been progressively defined and quantified can refer to Leine (2010). Here, we limit ourselves to recapitulating a few basic facts, starting from Galilean times, when E. Torricelli (1644) investigated the stability of a mechanical system and conjectured (in modern language) that a point of minimal potential energy is a point of equilibrium. Besides mechanical systems, floating bodies provide another environment where stability is naturally important, especially to avoid the roll instability of vessels. Unsurprisingly, the first results came from a Flemish (S. Stevin) and a Dutch (Ch. Huygens) scientist: at that time, the cutting-edge technology of shipbuilding had been developed in the Dutch Republic.
In particular, Huygens’ approach was quite modern in that he addressed the problem by explicitly comparing two different states. D. Bernoulli too dealt with the problem of roll stability, emphasising the importance of the restoring forces, which make the body return towards the equilibrium state. L. Euler was the first to distinguish between stable, unstable and indifferent equilibria, and he also suggested the possibility of considering infinitely small perturbations. The concept of stability was further developed by J.-L. Lagrange, who formalised the ideas expressed by Torricelli (for conservative dynamical systems), clarifying that, in the presence of a vanishing kinetic energy, the minimum of the potential energy corresponds to a stable equilibrium. The corresponding theorem is nowadays referred to as “Lagrange-Dirichlet” because of further improvements introduced by J. P. G. L. Dirichlet.

In the nineteenth century, fluid dynamics provided many examples where the stability assessment was far from trivial. Some scientists (notably Lord Kelvin) were striving to unify physics under the paradigm of the motion of perfect liquids, and such an approach required the stability of various forms of motion. At a macroscopic level, in the attempt to predict the Earth’s shape, the problem was posed of determining the stable shape of a rotating fluid under the sole action of centrifugal and (internal) gravitational forces. The studies led to the conclusion that, under some conditions, ellipsoidal shapes are to be expected, but the problem was not fully solved (see Section 1.1.2 on Lyapunov’s biography). On a more microscopic level, hydrodynamics proved to be an extremely fertile field for the appearance of instabilities: concepts such as sensitivity to infinitesimal and finite perturbations were present in the minds of esteemed scientists. G. Stokes was one of the pioneers: he stipulated that instabilities naturally occur in the presence of rapidly diverging flow lines, such as past a solid obstacle. Slightly later, H. Helmholtz and W. Thomson discovered that the surface separating two adjacent flows may lose its flatness. Contrary to the instability foreseen by Stokes, which was based only on conjectures, the latter one, nowadays referred to as the Kelvin-Helmholtz instability, was derived directly from the hydrodynamic equations. Last but not least, Lord Kelvin strove to develop a vortex theory of matter, which, however, required the stability of the underlying dynamical regimes.
Only at the end of his career did he convince himself that his ideas were severely undermined by the unavoidable presence of instabilities. The interested reader can look at the exhaustive review by Darrigol (2002).

Celestial mechanics proved to be another fruitful environment for the development of new ideas. In order to appreciate how relevant the subject was in those times, it is sufficient to mention that when P. S. Laplace studied perturbatively the behaviour of three gravitationally interacting particles (the so-called 3-body problem), he referred to it as the “world system”. Heavily relying on recent results by Lagrange, Laplace concluded that the semi-major axis of the orbits is characterised by periodic oscillations. Thus, he concluded in favour of stability, meaning that the fluctuations are bounded. A bit later, S. D. Poisson discovered that second- and third-order terms generate a secular contribution of the type At sin αt; however, as remarked by C. G. J. Jacobi, it was not clear whether such a contribution would survive a higher-order analysis. All in all, no clear answer had been given by the end of the nineteenth century. This is the reason why King Oscar II of Sweden decided to offer a prize to whoever could find an explicit solution. H. Poincaré won the prize even though he did not actually solve the problem. On the contrary, his work established the existence of an unavoidable high sensitivity to initial conditions: what was later called the “butterfly effect” by the meteorologist E. N. Lorenz.¹ Poincaré received the prize for the revolutionary methods that he developed to gain insight into the behaviour of generic dynamical systems.

¹ The expression “butterfly effect” was arguably introduced by Lorenz in 1972, when he gave a talk at the American Association for the Advancement of Science entitled “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?”

A last environment where stability turned out to be of primary importance is related to engineering applications. In the nineteenth century, with the advent of steam engines, it became necessary to regulate the internal pressure inside the boiler. This problem represented the starting point for the birth of a new discipline: automatic control theory. J. C. Maxwell analysed the stability of Watt’s flyball regulator by linearising the equations of motion. Independently, I. A. Vyshnegradsky used a similar approach to study the same problem in greater detail.

1.1.2 Biography of Aleksandr Lyapunov

Here, we briefly summarise some basic facts of the biography of Aleksandr Mikhailovich Lyapunov, mostly relying on Smirnov (1992) and Shcherbakov (1992). Aleksandr Lyapunov was born in 1857 in Yaroslavl. After completing his gymnasium studies in Nizhny Novgorod, Lyapunov moved to the University of St. Petersburg, where the Mathematical Department was blooming under the direction of Pafnuty Chebyshev, who soon became the supervisor of his graduate studies. Chebyshev used to say that “every young scholar . . . should test his strength on some serious theoretical questions presenting known difficulties”. As a matter of fact, Lyapunov got involved in a problem that had earlier been proposed to other students (he discovered this later in his career), namely that of determining the shape of a rotating fluid. As his efforts proved unsuccessful, Lyapunov decided to refocus his work, preparing a dissertation entitled On the stability of elliptic forms of equilibrium of rotating fluids, which nevertheless allowed him to be awarded a Master’s degree in applied mathematics (1884) and made him known in Europe.

Fig. 1.1 A. M. Lyapunov in 1902, in Kharkov. Photo courtesy of Elena Alexeevna Lyapunova.

In 1885, Lyapunov was appointed Privatdozent in Kharkov, where he worked on the stability of mechanical systems. His main results were summarised in a remarkable thesis entitled The general problem of the stability of motion, which granted him a PhD at Moscow University (1892). The dissertation contains an extraordinarily deep and general analysis of systems with a finite number of degrees of freedom. Interestingly, Lyapunov mentioned H. Poincaré as one of his principal sources of inspiration. In 1893, Lyapunov was promoted to ordinary professor in Kharkov. In the following years, he kept studying stability properties of dynamical systems, investigated the Dirichlet problem, and engaged himself in problems of probability theory, contributing to the central limit theorem and paving the way to the rigorous results obtained by his friend Andrei Markov. In 1901 he became head of the department of Applied Mathematics at the Russian Academy of Sciences in St. Petersburg (the position, without teaching duties, had been vacant since 1894, when Chebyshev died). After having completed a cycle of papers on the stability of motion, Lyapunov came back to the question posed to him by Chebyshev about 20 years before, much related to the problem of determining the form of celestial bodies earlier formulated by Laplace. While he was still struggling to find a solution, Lyapunov became aware of a book published by Poincaré in 1902 on the same problem and managed to acquire a copy. From a letter sent by Lyapunov to his disciple and close friend Steklov: “To my greatest surprise, I did not find anything significant in this book . . . Thus my work has not suffered and I apply myself to it afresh”.
The book by Poincaré essentially contained previous (known) concepts, with few advances. Shortly after, the astronomer George Darwin (son of Charles Darwin) published some papers on the same subject, concluding that pear-shaped forms are to be expected. Lyapunov completed his studies in 1905: a treatise of about 1000 pages, with some mathematical calculations carried out to 14 digits when necessary. He indeed discovered deviations from ellipsoids, but he also showed that pear-shaped forms are unstable. The controversy with Darwin went on for some years, until it was eventually settled in 1917, when another British astronomer, J. H. Jeans, confirmed that Lyapunov was right. In 1917 Lyapunov left St. Petersburg for Odessa, so that his wife could receive treatment for tuberculosis. On the day of his wife’s death, Aleksandr Lyapunov committed suicide.

1.1.3 Lyapunov’s contribution

The first formal definition of stability was given by Lyapunov in his PhD thesis: a given trajectory is stable if, for an arbitrary ε, there always exists a δ such that all other trajectories starting in a δ-neighbourhood of the given one remain at most at a distance ε from it. He also introduced what was later called asymptotic stability, to refer to cases where sufficiently small perturbations eventually die out.
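In modern notation (our transcription, not Lyapunov's original symbols), the two definitions read as follows for a reference trajectory $\tilde{x}(t)$:

```latex
% Lyapunov stability of a reference trajectory \tilde{x}(t):
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
\|x(0) - \tilde{x}(0)\| < \delta
\;\Longrightarrow\;
\|x(t) - \tilde{x}(t)\| < \varepsilon \quad \forall t \ge 0 .
% Asymptotic stability additionally requires the perturbation to die out:
\lim_{t \to \infty} \|x(t) - \tilde{x}(t)\| = 0 .
```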


Lyapunov also introduced two methods to assess the stability of a given solution. The first method was based on a “standard” perturbative analysis; he was very interested in identifying those cases where it is necessary to go beyond the first order to characterise correctly the perturbation dynamics. As a result, he introduced the “characteristic number” λ_L. It is basically the opposite of what is nowadays called the (characteristic) Lyapunov exponent. In fact, he defined λ_L as the exponential rate which has to be added to balance the growth rate of a given perturbation δ(t): in other words, assuming that δ(t) ≈ e^{λt}, he would define the characteristic number λ_L as the value such that δ(t)e^{λ_L t} neither diverges nor converges exponentially. The second, or direct, method deals with the introduction of a pseudo-energy function (nowadays called a Lyapunov function) that vanishes at the equilibrium point, is otherwise positive, and decreases (or does not increase) along a generic trajectory. This is an extension of the ideas of Torricelli and Lagrange to a context where the potential energy is not defined a priori. Lyapunov’s PhD thesis was translated into French in 1908, while one had to wait until 1992 to see the first English translation (Lyapunov, 1992). Although Lyapunov himself attached more importance to the first method, he became famous for the second one. Nevertheless, even within Russia, the first practical applications of his stability theory were not made until the 1930s by N. G. Chetayev and I. G. Malkin at the Kazan Aviation Institute. The reader interested in the development of the second method is invited to consult Parks (1992).

1.1.4 The recent past

Although Lyapunov exponents (LEs) were formally introduced at the end of the nineteenth century, for a long time they did not attract the attention of scientists. One reason is that most efforts were initially devoted to characterising the stability of either constant or periodic dynamics, in which case the problem reduces to determining the eigenvalues of a suitable matrix (see Chapter 2). A second reason why the application of the Lyapunov exponents was so much delayed with respect to the time of their definition was the difficulty of dealing with noncommuting entities. In fact, as shown in Chapter 2, the LEs are generated by multiplying (infinitely many) matrices. The first analytical result was obtained by Furstenberg and Kesten (1960), who basically proved the existence of the maximum and the minimum Lyapunov exponent. The full multiplicative ergodic theorem, ensuring the existence of as many Lyapunov exponents as the dimension of the space where the matrices operate, was proved later by Oseledets (1968) under fairly general conditions. A further reason for the prolonged lack of specific studies of the Lyapunov exponents was the lack of workable instances of what was later defined as chaotic dynamics; in other words, it was not clear which trajectories to consider for the underlying linearisation. It was only after the advent of the electronic computer that (approximate) trajectories could be generated and thereby characterised. The reception of the first physical model of a chaotic dynamical system, the Lorenz attractor (Lorenz, 1963), provides an enlightening

Fig. 1.2: Number of citations per year of the seminal paper (Lorenz, 1963), starting from the year of its publication. (Data from ISI Web of Science (www.wokinfo.com).)

view. In Fig. 1.2, one can see that the number of yearly citations of such a seminal paper remained pretty low until the late 1970s, when it started to explode. That was the time when computers became generally available to scientists, allowing them to work “experimentally” on chaotic dynamics. It is, in fact, in the same period that the relevant algorithms for the Lyapunov-exponent computation were developed (Shimada and Nagashima, 1979; Benettin et al., 1980a, b).

1.2 Outline of the book

Writing a book requires organising a set of items in a sequential way, but in many cases, such as the present one, several interconnections are present, which make any ordering inconvenient. Therefore, we find it useful to summarise the book’s content here, highlighting the connections among the various chapters. Chapters 2 and 3 are devoted to the introduction of the main general properties of Lyapunov exponents and to the related numerical tools; the information contained therein will suffice for those interested in a basic Lyapunov analysis. The pseudocodes described in Appendix B provide further guidance for the implementation of the required algorithms. More precisely, the proper mathematical setups (continuous vs. discrete time) are introduced in Chapter 2, where they are followed by a discussion of the main properties, which include the effect of symmetries and the invariance of the LEs under changes of coordinates. In Chapter 3, various algorithms for the computation of LEs are introduced; they include the standard methods based on vector orthogonalisation at finite time intervals, as well as continuous methods. A specific section is devoted to models characterised by discontinuities in phase space, where some care is required. The various sources of errors are also briefly discussed.


In nonlinear dynamical systems, there exists a relationship between LEs and various properties of their invariant measures, such as fractal dimensions and dynamical entropies. This connection is discussed in Chapter 6, where the Kaplan-Yorke and Pesin formulas are reviewed.

When computed over finite time, LEs naturally exhibit fluctuations that can be described via two different, but equivalent, methods. The first one, based on the computation of suitable moments, leads to the definition of the so-called generalised LEs. The second one is instead based on large-deviation theory. Both are described in Chapter 5, where we also illustrate a powerful numerical technique for the detection of rare fluctuations (see Section 5.4.2). Some generalised LEs are theoretically more manageable than the usual LEs; they can be used to derive approximate analytic expressions (some examples are presented in Chapter 8). In Chapter 6, we discuss the implication of LE fluctuations on the definition of the family of (generalised) fractal dimensions and entropies. The size of the fluctuations proves particularly useful to identify those borderline cases where positive and negative finite-time LEs coexist. Finally, fluctuations arise also in spatially extended systems, where it is important to understand their scaling behaviour with the system size (this is discussed in Chapter 11).

If finite, instead of infinitesimal, perturbations are considered, the so-called finite-amplitude LEs can be defined. The usefulness and the limits of this generalisation are discussed in Chapter 7. Of particular relevance is the case of collective chaos, where such exponents allow identification of the presence of macroscopic instabilities.

LEs quantify the growth rates of generic (infinitesimal) perturbations. Further information is contained in the direction of the perturbations, i.e., in the orientation of the corresponding Lyapunov vectors.
This issue is extensively discussed in Chapter 4, where different definitions and algorithms for their reconstruction are presented. The structure of Lyapunov vectors is discussed also in Chapter 11, where an analogy with rough interfaces (in spatially extended systems) allows for a fairly complete characterisation. An important property that is worth testing in high-dimensional contexts is the extended vs. localised nature of the Lyapunov vectors. The presence of localised vectors (especially the first one) helps to develop approximate expressions for the LE; some examples are discussed in Chapters 8 and 10. Furthermore, the presence of extended vectors is conjectured to be a signature of the presence of collective chaos (see Chapter 11).

Exact analytical expressions for the LEs are not typically available in generic nonlinear/disordered systems. Some results are obtained with the help of suitable perturbative techniques when the fluctuations are quite small. Other results can be derived under the assumption of relatively simple statistical properties (e.g. lack of temporal and/or spatial correlations). The corresponding theories are reviewed in Chapter 8. This chapter is particularly important for those interested in applications of LEs to intrinsically random situations, but it also provides a basis to obtain approximate results for chaotic dynamical systems.

Chapters 9, 10 and 11 deal with LEs in complex systems. In Chapter 9, we start from the relatively simple setup of two coupled systems, used as a testbed for understanding the effect of coupling on nearly identical units (e.g. coupling sensitivity and synchronisation). Chapter 10 is devoted to a general discussion of high-dimensional systems: various classes


of setups are considered, but the emphasis is mostly put on systems characterised by short-range interactions in a one-dimensional physical space. One of the key properties is the extensivity of the chaotic dynamics (i.e. the proportionality between the number of active degrees of freedom and the dimension of the phase space), which manifests itself as a well-defined Lyapunov density spectrum (in the so-called thermodynamic limit). Finally, Lyapunov vectors in spatially extended systems are analysed in Chapter 11, mostly through an insightful analogy with rough interfaces. Furthermore, models characterised by a chaotic collective behaviour are used to elucidate differences (and possible connections) between the stability analysis at a microscopic and a macroscopic level.

In Chapter 12 we review several physical problems where the application of Lyapunov exponents plays an essential role.

The four appendices contain some technical details. In Appendix A, almost all models that are used as a reference across the book are properly defined. Appendix B contains the most basic pseudocodes to be used for the computation of Lyapunov exponents and vectors. In Appendix C, we derive some general formulas used in Chapter 8 for the computation of LEs in products of random matrices. Finally, rudiments of symbolic encoding are presented in Appendix D: they prove useful in implementing some techniques introduced in Chapter 5.

1.3 Notations

While writing a monograph, the use of homogeneous notations is a must to avoid unpleasant misunderstandings about the meaning of some symbols. We have made our best efforts to avoid overlaps, but, given the large number of quantities we had to introduce, this has not always been possible. However, we have tried to clearly identify and differentiate the general observables, such as time, space and phase-space variables. Particular care has been taken in differentiating the many different types of Lyapunov exponents that are discussed in the book. The main notations are summarised in the following tables.

Acronyms

BV    Bred vectors
CLV   Covariant Lyapunov vector
FAE   Finite amplitude exponent
FLI   Fast Lyapunov indicator
FPU   Fermi-Pasta-Ulam
GALI  Generalised alignment index
GS    Gram-Schmidt orthogonalisation
KPZ   Kardar-Parisi-Zhang
LE    Lyapunov exponent
SVD   Singular value decomposition


General notation

U, V, U, V     Capital letters are used to refer to variables in phase space; bold letters denote vectors of variables.
u, v, u, v     Lower-case letters refer to variables in tangent space obeying linearised equations; bold letters denote vectors in tangent space.
F, F           Capital F usually denotes the velocity field for a differential equation or the map function; bold denotes a system of equations or a high-dimensional map.
t              Time (either discrete or continuous).
J, H, M        Sans-serif letters denote matrices.
J = ∂F/∂U      Instantaneous Jacobian of a map.
K              Instantaneous Jacobian for continuous-time differential equations.
H              The product of many Js for a sequence of Jacobian matrices, or the integration of K over a finite time in the continuous-time case.
x              Spatial variable (discrete or continuous) for spatio-temporal dynamics, written as an index (e.g. U_x) when discrete.
L              System size for spatially extended systems.

Different types of Lyapunov exponents

λ_k            kth Lyapunov exponent (typically ordered from the largest to the smallest one).
S_n            Sum of the first n Lyapunov exponents (or the sum of all Lyapunov exponents).
Λ(t_0, τ)      Finite-time Lyapunov exponent determining the growth rate in the time interval t_0 < t < t_0 + τ.
…              Overall expansion factor over the time interval τ.
μ = exp[λ]     Multipliers (i.e. their logarithms are the Lyapunov exponents).
L(q)           Generalised Lyapunov exponents.
G(q) = L(q)q   Characteristic function of perturbation growth.
…              Finite amplitude exponent (averaged).
…              Instantaneous finite amplitude exponent (related to the averaged one, as Λ is to λ).
…              Convective (velocity-dependent) exponent.

2 The basics

In this chapter we introduce the mathematical background that is necessary for defining the Lyapunov exponents (LEs). We then introduce the LEs by referring to the Oseledets multiplicative ergodic theorem. As everywhere else in the book, rather than presenting fully rigorous derivations, we prefer to propose physical and heuristic arguments with the help of examples of increasing complexity. General properties of the LEs are also illustrated to help explain the role of LEs in general physical contexts.

2.1 The mathematical setup

The notion of Lyapunov exponents emerges while assessing the stability of generic trajectories of a dynamical system. Two classes are typically studied in the scientific literature, namely discrete- and continuous-time models. A discrete-time dynamical system, usually referred to as a map, is represented by the recursive relation

U(t + 1) = F(U(t)).   (2.1)

Here, U is an N-dimensional state variable, t is an integer variable denoting time and F(U) is a (possibly non-invertible) function from R^N to R^N. An initial condition U(0) uniquely defines the trajectory U(t), which may be generated by iterating the relation (2.1), and is assumed to be well defined for all t > 0. Its stability can be assessed by selecting a second close trajectory and thereby checking whether it remains close to the original one. It is customary to consider infinitesimal perturbations. This requires linearising the map (2.1) (this step assumes a sufficient smoothness of the map). The perturbation u(t) of the trajectory U(t) follows the linear transformation

u(t + 1) = (∂F/∂U)(t) u(t) =: J(t)u(t),   (2.2)

where J is the so-called Jacobian matrix. The current value u(t) of the perturbation is generated by iterating Eq. (2.2),

u(t) = J(t − 1)u(t − 1) = J(t − 1)J(t − 2)u(t − 2) = … = ∏_{k=0}^{t−1} J(k) u(0) = H(t)u(0),   (2.3)

where the matrix H(t) = ∏_{k=0}^{t−1} J(k) defines the evolution over t steps. The perturbation depends, of course, on the trajectory U(t); in practice this dependence can be reduced to that on the initial state U(0), as it uniquely identifies the forward trajectory. In order to determine whether the perturbation either grows or decays, it is necessary to study the effect of a product of matrices on some initial vector or, equivalently, the properties of H(t) for large t.

In the case of continuous-time systems, the typical starting point is an ordinary differential equation (although we will also consider partial and delay differential equations). We write it as

dU/dt = F(U, t),   (2.4)

where U is an N-dimensional state variable and the vector field F possibly depends on time (this is the case of non-autonomous dynamical systems). The generator for the evolution of infinitesimal perturbations is again obtained by linearising the equations of motion,

du/dt = (∂F/∂U)(t) u =: K(U, t)u.   (2.5)

By integrating this equation over a time t, one obtains

u(t) = H(U_0, t)u(0),   (2.6)

where H(U_0, t) = exp[∫_0^t dt′ K(U(t′), t′)] depends on the trajectory U(t′) at all intermediate times. In practice, the matrix H(U_0, t) is obtained by solving the linear system of ordinary differential equations (2.5). The relation (2.6) is fully equivalent to Eq. (2.3).
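The iteration of Eqs. (2.2)–(2.3) can be made concrete with a short sketch. The following Python fragment (an illustrative choice of ours, not an example from this section: it uses the Hénon map with its standard parameters a = 1.4, b = 0.3) evolves a trajectory together with a tangent-space perturbation:

```python
import numpy as np

a, b = 1.4, 0.3   # classic parameter values of the Henon map

def step(U):
    # the map of Eq. (2.1): (x, y) -> (1 - a x^2 + y, b x)
    x, y = U
    return np.array([1.0 - a * x * x + y, b * x])

def jacobian(U):
    # the instantaneous Jacobian J(t) entering Eq. (2.2)
    x, _ = U
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

U = np.array([0.1, 0.1])   # initial condition U(0)
u = np.array([1.0, 0.0])   # initial perturbation u(0)

for _ in range(50):
    u = jacobian(U) @ u    # u(t+1) = J(t) u(t): this builds up H(t) u(0)
    U = step(U)

# the norm of u grows roughly as exp(lambda_1 t); for these parameters the
# commonly quoted numerical value is lambda_1 ~ 0.42, so after 50 steps the
# perturbation is many orders of magnitude larger than at t = 0
print(np.linalg.norm(u))
```

Note that the tangent vector is evolved step by step rather than by forming H(t) explicitly: this is exactly the structure exploited by the algorithms of Chapter 3.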

2.2 One-dimensional maps

It is convenient to start from the study of one-dimensional maps, i.e. N = 1. In this case, F(U) is a scalar function; the Jacobian matrix J reduces to the derivative F′(U) and the scalar perturbation u obeys

u(t) = ∏_{k=0}^{t−1} F′(U(k)) u(0).

The perturbation can either grow or decay in absolute value and can also change sign. The latter effect is irrelevant for assessing the stability of a trajectory. As for the absolute value,

|u(t)| = |u(0)| ∏_{k=0}^{t−1} |F′(U(k))|.   (2.7)


Instead of considering products of numbers, it is more convenient to deal with sums. This can be accomplished by computing the logarithm of expression (2.7),

ln |u(t)| = ∑_{k=0}^{t−1} ln |F′(U(k))| + ln |u(0)|.   (2.8)

The properties of the sum depend on the correlations along the original trajectory U(t). If we assume that U(t) is a statistically stationary ergodic process, then the time average of the observable ln |F′(U(k))| can typically be represented, thanks to the Birkhoff ergodic theorem (see Walters (1982)), as an average over the corresponding probability measure,

(1/t) ∑_{k=0}^{t−1} ln |F′(U(k))| → ⟨ln |F′(U)|⟩   for t → ∞.

As a result, the perturbation typically grows exponentially for large t,

|u(t)| ≈ |u(0)| exp[⟨ln |F′(U)|⟩ t] = |u(0)| exp[λt],   (2.9)

where the quantity λ = ⟨ln |F′(U)|⟩ is called the Lyapunov exponent of the one-dimensional map.
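The time average defining λ is straightforward to estimate numerically. As an illustrative check (the map choice and iteration counts are our own assumptions, not taken from this section), the sketch below treats the logistic map F(U) = rU(1 − U) at r = 4, for which the exact value λ = ln 2 is known:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.1, n_transient=1000, n_steps=100_000):
    """Estimate the LE of the logistic map F(U) = r U (1 - U) as the
    time average of ln|F'(U)| along a trajectory, cf. Eqs. (2.8)-(2.9)."""
    x = x0
    for _ in range(n_transient):                    # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for k in range(n_steps):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|F'(U(k))|
        x = r * x * (1.0 - x)
        if x == 0.0 or x == 1.0:   # guard against rare floating-point collapse
            return acc / (k + 1)
    return acc / n_steps

lam = lyapunov_logistic()
# for r = 4 the exact value is ln 2 = 0.6931...
print(lam)
```

The statistical error of such an estimate decreases as 1/√t; the finite-time fluctuations themselves are the subject of Chapter 5.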

2.3 Oseledets theorem

In the one-dimensional case, the sign of a perturbation can change irregularly during the evolution, while its absolute value generally follows an exponential law (2.9). Similarly, one does not expect regularities in the direction of the perturbation vector u(t) in Eq. (2.3). The amplitude of the perturbation is, in fact, determined by the norm of the vector,

‖u(t)‖² = ‖H(t)u(0)‖² = uᵀ(0) Hᵀ(t) H(t) u(0),   (2.10)

where Hᵀ is the transpose of H and one can equivalently assume the time to be either discrete or continuous. Thus, only the properties of the real symmetric matrix M(t) = Hᵀ(t)H(t) are relevant for the time evolution of the norm of the vector u. These properties have been established in a seminal theorem by Oseledets (1968, 2008), which states that for a statistically stationary, ergodic sequence of matrices (which happens if the underlying process U(t) is ergodic), the limit

lim_{t→∞} [M(t)]^{1/(2t)} = P   (2.11)

exists and is an N-dimensional matrix with positive eigenvalues μ_1 ≥ μ_2 ≥ ⋯ ≥ μ_N. The N Lyapunov exponents are defined as λ_k = log μ_k. The set of all Lyapunov exponents is called the Lyapunov spectrum.


This definition of Lyapunov exponents can be equivalently reformulated in terms of the linear evolution of perturbations as follows: any initial perturbation u(0) evolves exponentially in time, the growth rate being one of the Lyapunov exponents,

lim_{t→∞} (1/t) ln (‖u(t)‖/‖u(0)‖) = lim_{t→∞} (1/t) ln (‖H(t)u(0)‖/‖u(0)‖) = λ_k.   (2.12)

Which Lyapunov exponent characterises the evolution of a particular perturbation u(0) is discussed in Section 2.3.2.

2.3.1 Remarks

The Oseledets theorem is often referred to as the ‘multiplicative ergodic theorem’. In the one-dimensional case, one deals with a product of random numbers, which, after introducing the logarithm of the local multipliers, can be reduced to a sum of suitable observables. Accordingly, the usual Birkhoff ergodic theorem applies (Eq. (2.8)), which states that the time average exists and is equal to the average over the probability distribution. Only the former of these statements can be extended to a product of matrices, namely the existence of long-time averages in the sense of Eq. (2.11). The representation as a straightforward average over the probability distribution is not possible, since information on the orientation of the perturbation is required.¹ Thus the name ‘multiplicative ergodic theorem’. Lyapunov exponents have been introduced here on an intuitive level, without paying attention to mathematical rigour (those interested in more details can look at Eckmann and Ruelle, 1985; Barreira and Pesin, 2007; Oseledets, 2008; Viana, 2014; and references therein). It is, nevertheless, useful to stress some relevant points. First of all, the matrices describing the evolution of the perturbations are assumed to be non-singular, and their norm admits an exponential bound ‖H(t)‖ ≤ exp[ct]. Next, it is necessary to assume that the underlying dynamical system has nice statistical properties, i.e. the trajectory belongs to an invariant set characterised by an invariant probability measure and is ergodic with respect to it. Under such conditions, the Lyapunov exponents almost surely do not depend on the particular trajectory around which the perturbation is considered (i.e. there may exist a zero-measure set of orbits characterised by different exponents; see Chapter 5 for a discussion of the consequences of this point). Additionally, the so-called Lyapunov regularity is assumed, which implies the existence of the long-time limits in Eqs. (2.11, 2.12).
Whenever this is not the case, it is necessary to replace the “lim” operation with the more general lim sup operation. The Oseledets theorem has been formulated by referring to Jacobian matrices determined along a generic trajectory. In a more general formulation, one can consider generic matrices suitably defined in each state of the dynamical system. This mathematical construction is called a cocycle. Moreover, the LEs have been introduced for two typical classes of dynamical systems: ordinary differential equations and discrete maps. The theory is, however, applicable to

¹ More or less sophisticated methods for the determination of Lyapunov exponents as ensemble averages may, nevertheless, be implemented; see, for instance, Chapter 8.


any dynamical system provided the solution of an initial value problem is unique and defined at all times, and ergodicity holds. Many infinite-dimensional systems belong to this class. This includes partial differential, differential-delay and integral equations. In such systems the number of Lyapunov exponents is infinite, but that of positive LEs is typically finite (at least when the spatial extension or the delay is finite). The dependence of the Lyapunov spectrum on the system size in spatially extended dynamical systems is discussed in Chapter 10. Although the generation of the perturbation dynamics requires linearising the original equations of motion, Eqs. (2.1, 2.4), the system itself need not be uniformly smooth. Indeed, it is sufficient to be able to define the linearised dynamics for typical trajectories. For example, a map (2.1) can be piecewise differentiable and even have discontinuities; this does not prevent the existence of well-defined Lyapunov exponents. Those trajectories which hit the singular points are indeed non-generic. In continuous-time dynamical systems, it may happen that trajectories are generically subject to discontinuous jumps at certain moments of time, such as the collisions exhibited by a particle moving inside a billiard, and yet the Lyapunov exponent is well defined. In fact, what is important for the definition of the Lyapunov exponents is the behaviour of the separation between nearby trajectories rather than that of the trajectories themselves. In practice, one can follow a small perturbation between consecutive collisions and add the effect of the collisions themselves (see the next chapter for a technical description). Trajectories may also exist along which linearisation is not possible (e.g. those trajectories that exactly hit a corner of a billiard table). As long as they are atypical, they do not contribute to the Lyapunov exponents.
It is also worth discussing the relationship between matrix eigenvalues and Lyapunov exponents. The singular values of H are the square roots of the eigenvalues of M, and they generally differ from the absolute values of the eigenvalues of H itself. Only when the matrix H is normal do its singular values coincide with the absolute values of its eigenvalues. The difference between eigenvalues and singular values is especially pronounced in the case of rotations. We illustrate this with the following example, adapted from I. Morris (unpublished). Let us assume that the linearised dynamics is defined by successive applications of two matrices,

A = ( 0  1 ; 1  0 ),   B = ( 0  1/4 ; 4  0 ),

where ( a  b ; c  d ) denotes the 2 × 2 matrix with rows (a, b) and (c, d). It is easy to check that

H_{2l} = (AB)^l = ( 2^{2l}  0 ; 0  2^{−2l} ),   H_{2l+1} = B(AB)^l = ( 0  2^{−2l−1} ; 2^{2l+1}  0 ),

M_{2l} = ( 2^{4l}  0 ; 0  2^{−4l} ),   M_{2l+1} = ( 2^{−4l−2}  0 ; 0  2^{4l+2} ).

On the one hand, the eigenvalues ν of the H matrices alternate,

(1/k) ln ν_k = ± ln 2   for k = 2l,
(1/k) ln ν_k = 0        for k = 2l + 1,

with no convergence towards some limit value. On the other hand, the singular values μ (the eigenvalues of the matrices M) behave nicely, yielding

(1/(2k)) ln μ_k = ± ln 2   for all k.

This confirms the need of referring to the singular values for a proper definition of the LEs.
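The alternation can be checked numerically. The sketch below (assuming, consistently with the formulas above, that the factors are applied in the order B, A, B, A, …, so that H_2 = AB) compares the growth-rate estimates obtained from eigenvalues and from singular values of H(t):

```python
import math
import numpy as np

# the two matrices of the example
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 0.25], [4.0, 0.0]])

H = np.eye(2)
eig_rates, sv_rates = [], []
for k in range(1, 21):
    # assumed ordering: B at odd steps, A at even steps, so that H_2 = AB
    H = (B if k % 2 == 1 else A) @ H
    nu_max = np.abs(np.linalg.eigvals(H)).max()           # largest |eigenvalue| of H
    sigma_max = np.linalg.svd(H, compute_uv=False).max()  # largest singular value of H
    eig_rates.append(math.log(nu_max) / k)
    sv_rates.append(math.log(sigma_max) / k)

# eigenvalue-based growth rates keep alternating between ln 2 and 0,
# while singular-value-based rates converge to ln 2
print(eig_rates[-2:], sv_rates[-2:])
```

Running it shows the eigenvalue-based rate jumping between 0 and ln 2 at every step, while the singular-value-based rate settles at ln 2, in line with Eq. (2.11).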

2.3.2 Oseledets splitting

Which of the Lyapunov exponents is selected in Eq. (2.12) depends on the choice of the initial perturbation vector u. Suppose, for simplicity, that all Lyapunov exponents are different, i.e. that there are N different exponents λ_1 > λ_2 > ⋯ > λ_N. Then, in the N-dimensional space of vectors u there exists a set of nested subspaces D_k, k = 1, …, N, of dimensions N − k + 1, such that D_{k+1} ⊂ D_k. Here D_1 = R^N is the full space, while D_N has dimension one. Then, the Lyapunov exponent λ_k in Eq. (2.12) will hold for all initial vectors u ∈ D_k \ D_{k+1}. This means that all vectors, except for those in the (N − 1)-dimensional space D_2, grow asymptotically with the largest Lyapunov exponent λ_1. All the vectors in the (N − 1)-dimensional space D_2, except for those in the (N − 2)-dimensional subspace D_3, grow with the second Lyapunov exponent λ_2, etc. Only vectors belonging to the one-dimensional space D_N grow with the smallest exponent λ_N. In case some of the Lyapunov exponents are equal, not all of the nested subspaces are well defined: if, e.g., λ_2 = λ_3, one must “jump” from the subspace D_2 to D_4. All the vectors in D_2 \ D_4 grow with an exponent λ_2 = λ_3. Lyapunov exponents have so far been defined by referring to the forward iteration of the dynamical equations. This allows covering non-invertible dynamical systems, such as maps of the interval, that are often used as paradigmatic examples of chaotic dynamics. In invertible systems, a perturbation can be followed both forwards and backwards in time, allowing for a sharper characterisation of the tangent space. We illustrate this idea by referring, for simplicity, to a discrete-time model with no degeneracy in the Lyapunov spectrum. Consider a typical trajectory U(t) and assume that the evolution of an infinitesimal perturbation u(t) is described by Eq. (2.2), u(0) denoting the perturbation amplitude at time t = 0.
Again we assume for simplicity that all the Lyapunov exponents are different. The tangent space covered by all perturbations, R^N, can be split into one-dimensional subspaces R^N = E_1 ⊕ E_2 ⊕ ⋯ ⊕ E_N such that a generic initial vector in E_k grows according to the Lyapunov exponent λ_k, both forwards and backwards in time,

λ_k = lim_{t→±∞} (1/t) log (‖e(t)‖/‖e(0)‖)   if e(0) ∈ E_k.

The Oseledets splitting generally depends on the point U(0) where it is determined, in the sense that the vectors E_k have different directions at different points of the phase space. The splitting is, however, covariant under the application of the transformation F,

E_k(U(t + 1)) = E_k(F(U(t))) = J(U(t)) E_k(U(t)).

In the physical literature, the directions E_k are often called Lyapunov vectors; see Chapter 4 for a more detailed discussion. In non-invertible systems, there may be more than one trajectory coming from the past which reaches a given point U(t); the Oseledets splitting is then, strictly speaking, meaningless.

2.3.3 “Typical perturbations” and time inversion

Remarkably, the LEs provide a complete description of the behaviour of all possible perturbations. No perturbation can be generated whose exponential growth rate differs from all of the Lyapunov exponents of the system. This property allows one to determine explicitly some Lyapunov exponents, following the time evolution of particular perturbations and applying Eq. (2.12), as done in Section 2.5.6. What happens to an “arbitrarily” selected perturbation? Assuming, for simplicity, that there are no degeneracies, the foliation D_k, k = 1, …, N, of the tangent space described in Section 2.3.2 implies that D_1, D_2, … have dimensions N, N − 1, …. Perturbations growing² with the largest exponent are “typical”: they fill the whole volume in the tangent space except for the (N − 1)-dimensional subspace D_2. If one, instead, specifically selects a perturbation within D_2, then such a perturbation will “typically” grow with the exponent λ_2, unless it belongs to the “exceptional” subset D_3, etc. This property is used in the numerical evaluation of the Lyapunov exponents (Chapter 3): a randomly chosen perturbation will almost surely grow with the largest Lyapunov exponent; two randomly chosen perturbations will almost surely identify a two-dimensional subspace with a component in D_2, i.e. growing with the second largest exponent, etc. Based on this discussion, it is clear that finding the smallest (most negative) Lyapunov exponents is not an easy task, as they are highly “non-typical”. They become, however, “typical” if the dynamics is followed backwards in time. In both the discrete- and the continuous-time setups, inverting time means looking at the inverse ratios in expression (2.12), so that the Lyapunov exponents in the reversed time direction, λ̃, are the opposites of the forward ones, λ̃ = −λ. If the exponents are ordered from the largest to the smallest, one has instead λ̃_k = −λ_{N−k+1}. Correspondingly, the backwards foliation D̃_k, for k = 1, …, N, differs from the forwards one. D̃_1 is the full space, and “typical” initial perturbations (except for those in D̃_2) grow with the Lyapunov exponent λ̃_1 = −λ_N, etc. Notably, in order to follow the perturbations backwards in time, it is not necessary to invert the original dynamical system (2.1) or (2.4). In fact, this may not even be possible if the map in (2.1) is non-invertible. It is enough to collect the coefficients of the linear equations (2.2) or (2.5) along a given trajectory and solve these linear equations (by inverting the Jacobian matrix in (2.2) or solving the ordinary differential equation (2.5) backwards in time). In this way one automatically selects the same trajectory of the nonlinear dynamical system. Of course, for invertible systems, these “typical” conditions can be expressed in terms of the Oseledets splitting. A typical initial vector has components in all directions E_k; thus

² For the sake of simplicity, the term “growing” is used even if the exponent is possibly negative.


it has components which grow with all Lyapunov exponents. Among these components, the one corresponding to the largest Lyapunov exponent dominates at large positive times; thus a “typical” initial perturbation aligns along E_1 and grows with the largest exponent. Backwards in time, the evolution of a “typical” perturbation is instead dominated by the largest exponent in the reversed time, λ̃_1 = −λ_N.
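This “typicality” is exactly what the numerical schemes of Chapter 3 exploit. As a preview, here is a minimal sketch (the upper-triangular random Jacobians, their parameters, and the reorthogonalisation at every step are our own illustrative assumptions): for products of upper-triangular matrices the exponents are the time averages of the log diagonal entries, here 0.3 and −0.5 exactly, so a randomly oriented basis, repeatedly orthonormalised via QR, should recover this spectrum:

```python
import numpy as np

rng = np.random.default_rng(12345)

T = 4000
lam_sum = np.zeros(2)
# random initial orthonormal basis (a generic, "typical" choice)
Q = np.linalg.qr(rng.normal(size=(2, 2)))[0]

for _ in range(T):
    # upper-triangular random Jacobian: for such products the LEs are known
    # exactly, namely E[ln a] = 0.3 and E[ln d] = -0.5
    a = np.exp(rng.normal(0.3, 0.1))
    d = np.exp(rng.normal(-0.5, 0.1))
    J = np.array([[a, rng.normal()],
                  [0.0, d]])
    Q, R = np.linalg.qr(J @ Q)             # re-orthonormalise the evolved basis
    lam_sum += np.log(np.abs(np.diag(R)))  # accumulate the local expansion rates

lam = lam_sum / T   # lam[0] -> 0.3 (largest LE), lam[1] -> -0.5
print(lam)
```

The first basis vector almost surely aligns with the most expanding direction, so the first diagonal entry of R tracks λ_1, while the remaining one picks up λ_2, just as argued above.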

2.4.1 Stability of fixed points and periodic orbits

The first and simplest context where linear stability is encountered is that of constant or periodic trajectories. In the case of a fixed point U0 of a discrete-time dynamical system, the stability analysis reduces to determining the eigenvalues of a constant Jacobian matrix J defined at this point. Perturbations u indeed grow according to the (possibly complex) eigenvalues νk of this matrix: u(t) ∝ (νk)^t u(0). Substituting this into Eq. (2.12), we obtain λk = log|νk|. Similarly, for a periodic orbit of period T, one has to calculate the eigenvalues νk of the product of Jacobian matrices J_{T−1} J_{T−2} · · · J_0 to obtain

λk = (1/T) log|νk| .    (2.13)

In the continuous-time case, the perturbations around a fixed point grow according to Eq. (2.5), as u ∝ exp[νk t], where νk are the (possibly complex) eigenvalues of the Jacobian matrix K, so that from Eq. (2.12) it follows that the Lyapunov exponents are just the real parts of the eigenvalues,

λk = Re(νk) .

For a periodic orbit U(t + T) = U(t) in the continuous-time case, the linear equation (2.5) has periodic coefficients. Its general solution, according to Floquet theory (see, e.g., Hale (1969)), can be written as u(t) = exp[νk t] û(t), where û(t) = û(t + T) has period T and the νk are the Floquet exponents. The Lyapunov exponents are the real parts of the Floquet exponents (the expression (2.13) still holds). Summarising, in the case of a regular dynamics, the characterisation of stability by means of the Lyapunov exponents corresponds to the standard eigenvalue analysis. The only difference is that the information on the rotation frequencies (encoded in the imaginary parts of the eigenvalues) is lost. One might wonder whether it is possible to extend the definition of Lyapunov exponents so as to suitably include the imaginary component. In the context of space-time chaos, this is a key point of the chronotopic approach discussed in Chapter 10. Otherwise, the only "frequency"-like observable that can be introduced is the so-called rotation number: a single number that can at most be generically defined in three-dimensional flows (Ruelle, 1985).
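For the discrete-time case, the recipe amounts to diagonalising the Jacobian. The sketch below uses a hypothetical constant matrix J (not from the text); the exponents are λk = log|νk|:

```python
import numpy as np

# Minimal sketch (hypothetical Jacobian, not from the text): for a fixed
# point of a discrete-time system, the Lyapunov exponents are
# lambda_k = log|nu_k|, where nu_k are the eigenvalues of the Jacobian.
J = np.array([[0.5, 1.0],
              [0.0, 0.25]])
nu = np.linalg.eigvals(J)
lam = np.sort(np.log(np.abs(nu)))[::-1]
print(lam)  # [log 0.5, log 0.25]: both negative, so the fixed point is stable
```

Since both exponents are negative, a generic perturbation decays and the fixed point is linearly stable.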


The basics

2.4.2 Stability of independent and driven systems

It is clear that, for a large system composed of two (or many) independent subsystems, the Lyapunov spectrum is just the union of the spectra of the subsystems. A little more involved is the case of skew, or driven, systems. In many physical applications, a master subsystem U drives a slave subsystem V but is not influenced by the latter. In the following we consider a discrete-time setup, but the arguments (which rely on the orthogonalisation procedure explained in detail in the next chapter) can be easily repeated for analogous continuous-time models. In the tangent space, the dynamics of a master-slave system is described by the mapping

u(t + 1) = A(t)u(t),

v(t + 1) = B(t)u(t) + C(t)v(t),

where the vector u is N-dimensional, v is M-dimensional (we call these the master and slave subspaces, respectively), and the three matrices fluctuate in time. Suppose that the transformation is applied to an orthonormal basis at time t, composed of N vectors uk from the master subspace and M vectors vj from the slave subspace. The application of the mapping leads to the following set of new vectors:



uk(t + 1) = ( A(t)uk(t) , B(t)uk(t) )ᵀ ,    vj(t + 1) = ( 0 , C(t)vj(t) )ᵀ ,

where the first block belongs to the master subspace and the second to the slave subspace. As shown in the following chapter, the Lyapunov exponents are obtained by orthogonalising this set of new vectors. Let us proceed by starting from the vj vectors. They all lie in the M-dimensional slave subspace, and so does the resulting orthonormal basis vj(t + 1). One then proceeds by orthogonalising the space spanned by the vectors uk. These vectors lie in both subspaces, the master and the slave one, but in the latter subspace the orthonormal basis vj(t + 1) already exists. This implies that all the components of the uk vectors in the slave subspace (i.e. the components B(t)uk(t)) are to be discarded. As a result, one is left with the problem of orthogonalising the vectors A(t)uk(t), i.e. the vectors describing solely the master system. Altogether, the Lyapunov exponents are obtained by separately processing the master and the slave systems: the coupling contributes only indirectly, through the identification of the trajectories around which the linearisation has to be performed. The matrix B(t) only contributes to defining the direction of the Lyapunov vectors (see Chapter 4). Contrary to the case of independent subsystems, for skew systems the Lyapunov vectors span both the master and slave subspaces. These arguments can be generalised to chains with unidirectional coupling; the Lyapunov exponents can be determined separately in each element of such a chain.
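The argument can be verified on a toy skew system with constant coefficients (hypothetical values, not from the text): the exponents returned by the orthogonalisation procedure of the next chapter are log|a| and log|c|, regardless of the coupling b.

```python
import numpy as np

# Numerical check on a hypothetical 1+1-dimensional skew system:
#   u(t+1) = a u(t),   v(t+1) = b u(t) + c v(t).
# The Lyapunov exponents should be log|a| and log|c|, independent of b.
a, b, c = 1.5, 7.0, 0.5
M = np.array([[a, 0.0],
              [b, c]])
T = 4000
Q = np.eye(2)
s = np.zeros(2)
for _ in range(T):
    Q, R = np.linalg.qr(M @ Q)      # re-orthonormalise the evolved basis
    s += np.log(np.abs(np.diag(R))) # accumulate the stretching factors
lam = np.sort(s / T)[::-1]
print(lam)  # approx [log 1.5, log 0.5]; the coupling b plays no role
```

Changing b only rotates the Lyapunov vectors; the two exponents are unaffected.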

2.5 General properties

2.5.1 Deterministic vs. stochastic systems

Lyapunov exponents and the Oseledets theorem were originally introduced in the study of the linear stability properties of trajectories of deterministic dynamical



systems. They have, however, an essentially statistical nature, and they can equally well be applied to the linear stability of nonlinear stochastic systems or to the asymptotic properties of purely stochastic linear models, such as products of random matrices and stochastic linear differential equations (see also Chapter 8). In the context of noisy dynamical systems, e.g. noise-driven maps

U(t + 1) = F(U(t), ξ(t)),

(2.14)

where ξ(t) is a random process, the Lyapunov exponents measure the stability of trajectories with respect to variations of the initial condition under the action of the same realisation of the noise. Indeed, by linearising (2.14) one obtains u(t + 1) = J(U, ξ(t))u(t), so that the product of Jacobian matrices, and correspondingly the Lyapunov exponents, are defined for a particular sequence ξ(t) of random numbers. In practice, the dependence on the realisation of the stochastic process is akin to the dependence of the Lyapunov exponents on the selection of the initial condition in purely deterministic setups.
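A sketch of this setup for a hypothetical noise-driven logistic map (an assumed example, not from the text): a single fixed noise realisation generates the trajectory, while the exponent is accumulated from the Jacobians along that same trajectory.

```python
import numpy as np

# Largest Lyapunov exponent of a hypothetical noisy map (sketch):
#   U(t+1) = r U (1-U) + xi(t)  (mod 1),
# with ONE fixed realisation of the noise xi(t). The Jacobian with
# respect to U is J = r (1 - 2U), accumulated along the noisy trajectory.
rng = np.random.default_rng(0)
r, T = 3.8, 100000
U, s = 0.3, 0.0
for _ in range(T):
    s += np.log(abs(r * (1.0 - 2.0 * U)))
    U = (r * U * (1.0 - U) + 0.01 * rng.normal()) % 1.0
lam1 = s / T
print(lam1)  # positive: nearby trajectories driven by the same noise diverge
```

Rerunning with a different seed changes the realisation ξ(t) but, as for a change of initial condition in the deterministic case, leaves the exponent essentially unchanged.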

2.5.2 Relationship with instabilities and chaos

As is clear from its definition, the largest Lyapunov exponent determines the long-term linear stability of a given trajectory. A positive largest exponent implies an exponential instability, while a negative largest exponent means stability. In the case of a vanishing largest Lyapunov exponent, the linear analysis does not give the final answer, and a more refined study is needed.3 A strictly positive maximal Lyapunov exponent is often considered as a definition of deterministic chaos. Indeed, chaos is generally defined as a stationary (in a statistical sense) dynamical process accompanied by a sensitive dependence on the initial conditions. This sensitivity means that if any given state is approximately reproduced in the course of time – which must happen owing to the assumed statistical stationarity of the whole process – the further evolution is only approximately reproduced, eventually deviating from the previous patterns as a result of the sensitivity to tiny deviations. The largest Lyapunov exponent provides a quantitative measure of this sensitivity. Note that this criterion should be applied to typical trajectories, not to exceptional ones (e.g. unstable periodic orbits embedded in the invariant set under consideration). Following a suggestion of Rössler (1979), a regime with more than one positive exponent is often called hyperchaos. The Lyapunov exponent is dimensionally equivalent to a frequency, or inverse time. The inverse of the largest Lyapunov exponent (sometimes called the Lyapunov time) identifies the "predictability time". If the state of a chaotic system is perturbed by an amount δ, this

3 For non-regular systems, where the Lyapunov exponents are defined in the sense of limsup, the situation is more subtle: even the largest Lyapunov exponent does not determine the stability – see Leonov and Kuznetsov (2007) for further discussion and additional references.


perturbation grows exponentially in time, ∝ exp[λ1 t], reaching a size Δ after a time

Tpr ≈ (1/λ1) log(Δ/δ) .    (2.15)

The relation is only approximate because the Lyapunov exponent is defined as an asymptotic growth rate, which is not necessarily attained at small times. One can, nevertheless, see that the predictability time Tpr depends only weakly on the size δ of the initial perturbation and is essentially determined by the value of the largest Lyapunov exponent. Thus, in practice only short-time predictions of individual states of chaotic systems are possible. This difficulty is quite often formulated as the butterfly effect: the flapping of the wings of a butterfly in Brazil may cause a tornado in Texas. This statement, first expressed by E. Lorenz, means that a very minor perturbation somewhere in the phase space (here, the physical world) may be amplified so much as to become extremely sizable somewhere else. It is important to stress that the inverse of the largest Lyapunov exponent is just one of the time scales present in a generic dynamical system. The decorrelation of chaotic oscillators, for instance, depends on the diffusion of the phases, which is not related to any Lyapunov exponent (Farmer, 1981; Pikovsky et al., 1997c). If the largest Lyapunov exponent is zero or negative, then the dynamics is nonchaotic. However, a degree of irregularity can be observed in such systems as well. Some examples are the so-called strange nonchaotic attractors (Feudel et al., 2006), some polygonal billiards (Gutkin, 1986; Artuso et al., 2000) and stable chaos, a phenomenon arising in high-dimensional systems, which manifests itself as exponentially long irregular transients (Politi and Torcini, 2010). In most cases the existence of zero Lyapunov exponents can be attributed to the presence of continuous symmetries (see Section 2.5.6). The most typical symmetry is the invariance with respect to time shifts in autonomous continuous-time dynamics.
In such systems, the presence of negative LEs accompanied by a single zero exponent is the signature of a stable limit cycle (periodic motion), while two or three zero exponents indicate a quasiperiodic motion with two or three incommensurate frequencies. In integrable Hamiltonian systems, all the Lyapunov exponents are typically zero. In a periodically forced continuous-time system, a periodic solution is stable if the largest Lyapunov exponent is negative, while a stable quasiperiodic regime with two incommensurate frequencies has a zero largest exponent. In practical applications one should remember that the LE definition involves two limits: (i) linearisation of the equations (i.e. the limit of infinitesimal perturbations is taken) and (ii) infinite-time limit (taken to determine the asymptotic growth rate). Of course, these limits cannot be interchanged, and in particular applications where one needs to follow a finite perturbation over a finite time interval, special care is needed.
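As a rough numerical illustration of the predictability time of Eq. (2.15), with hypothetical values of λ1, Δ and δ (none taken from the text):

```python
import numpy as np

# Rough illustration of Eq. (2.15): Tpr depends only logarithmically
# (i.e. weakly) on the initial error delta. All values are hypothetical.
lam1 = 0.9          # largest Lyapunov exponent, in inverse time units
Delta = 1.0         # error size at which the forecast becomes useless
Tpr = {delta: np.log(Delta / delta) / lam1 for delta in (1e-6, 1e-9, 1e-12)}
for delta, T in Tpr.items():
    print(f"delta = {delta:.0e}  ->  Tpr = {T:.1f}")
```

Improving the initial accuracy by six orders of magnitude merely doubles the predictability horizon, which is the practical content of the butterfly effect.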

2.5.3 Invariance

The LEs are independent of both the metric used to determine the distance between perturbations and the choice of variables. This property implies that they are dynamical invariants and thereby provide an objective characterisation of the corresponding dynamics.



The independence of the metric follows from the equivalence of all norms in finite-dimensional spaces: given any two norms ‖·‖α and ‖·‖β, one can show that A‖u‖α ≤ ‖u‖β ≤ B‖u‖α for some constants A and B and any vector u. Thus, according to the definitions (2.12), the Lyapunov exponents do not depend on the norm. This is no longer true in infinite-dimensional spaces: this "ambiguity" lies at the heart of different classifications of supposedly chaotic regimes in infinite-dimensional systems (see the discussion on stable chaos in Chapter 10). Similarly, one can prove the invariance of the Lyapunov exponents under a smooth change of coordinates. Suppose a dynamical system is defined with reference to the variable U and that the new variable V = G(U) is introduced. Then the trajectory U(t) can be expressed, in the new variable, as V(t), while a small perturbation accordingly becomes v(t) = (∂G/∂U) u(t). If we assume that the transformation of variables is everywhere nonsingular, i.e. c ≤ ‖∂G/∂U‖ ≤ C for some positive constants c and C, then the norms ‖u‖ and ‖v‖ differ by no more than a finite factor. In the limit of long times, this correction provides a negligible contribution to the expressions (2.12), so that the Lyapunov exponents are the same in both variables.
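A numerical illustration of the invariance under a change of variables (a hypothetical example, not from the text: the logistic map together with the smooth invertible transformation G(x) = x + 0.3x²):

```python
import numpy as np

# Coordinate invariance (hypothetical example): the logistic map
# f(x) = r x (1-x) and its conjugate g = G o f o G^{-1}, with
# G(x) = x + 0.3 x^2 (G' = 1 + 0.6 x > 0 on [0,1], so G is invertible),
# must have the same Lyapunov exponent.
r = 3.8
f = lambda x: r * x * (1 - x)
df = lambda x: r * (1 - 2 * x)
dG = lambda x: 1 + 0.6 * x

x, s_f, s_g = 0.123, 0.0, 0.0
T = 100000
for _ in range(T):
    # chain rule: g'(G(x)) = G'(f(x)) * f'(x) / G'(x)
    s_f += np.log(abs(df(x)))
    s_g += np.log(abs(dG(f(x)) * df(x) / dG(x)))
    x = f(x)
lam_f, lam_g = s_f / T, s_g / T
print(lam_f, lam_g)  # the two estimates coincide
```

The difference between the two sums telescopes to [log G′(x(T)) − log G′(x(0))]/T, which is exactly the "finite factor" argument of the text: it vanishes in the long-time limit.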

2.5.4 Volume contraction

The sum SN = Σ_{k=1}^N λk of all Lyapunov exponents of an N-dimensional system measures the contraction rate of volumes in the phase space on the invariant set. In the discrete case, this follows from the fact that the determinant of the Jacobian matrix J determines the local growth rate (close to the trajectory U(t)) of the phase volume: VN(t + 1) = |det J| VN(t). Here the index N means that we consider the full N-dimensional volume. Since the determinant of a product of matrices is equal to the product of the determinants, the standard Birkhoff ergodic theorem holds, as in the one-dimensional case (Section 2.2), yielding

lim_{t→∞} (1/t) ln [VN(t)/VN(0)] = ⟨ln|det J|⟩ .

On the other hand, for any t,

det [Hᵀ(t) H(t)] = ∏_{t′=0}^{t−1} |det J(U(t′))|² ,

so that the determinant of the limiting matrix P in Eq. (2.11) is equal to exp[⟨ln|det J|⟩]. This determinant is equal to the product of the eigenvalues, ∏_k νk = exp[⟨ln|det J|⟩]. By taking the logarithm, we establish the equivalence between the sum of the Lyapunov exponents and the average growth rate of volumes in phase space,

SN = Σ_{k=1}^N λk = ⟨ln|det J|⟩ .    (2.16)



A relation similar to (2.16) holds in the continuous-time case. Here, for large t,

ln det P = lim_{t→∞} ln [det H(t)]^{1/t} = lim_{t→∞} (1/t) ln det exp [ ∫₀ᵗ dt′ K(U(t′), t′) ] .

Taking into account the linear-algebra relation det exp A = exp tr A, we obtain

SN = Σ_{k=1}^N λk = ln det P = lim_{t→∞} (1/t) ∫₀ᵗ dt′ tr K(U(t′), t′) = ⟨tr K⟩ .

In the so-called dissipative systems, where |det J| < 1 (in the discrete case) or tr K < 0 (in the continuous case), the sum SN of the Lyapunov exponents is negative, meaning that volumes around generic trajectories shrink exponentially to zero. In volume-preserving (e.g. Hamiltonian) systems, since det J = 1 for discrete time and tr K = 0 for continuous time, the sum of the exponents vanishes, SN = 0. This property can be used to test the numerical precision of the calculation of Lyapunov exponents. These relations can be extended to partial volumes as well. Let us assume, for the sake of simplicity, that the Oseledets splitting (Section 2.3.2) is valid, and consider two generic initial vectors u(0) and v(0). The area spanned by their iterates is given by the vector product u(t) × v(t). The two initial vectors typically have components along all directions Ek(0). As a result of the evolution over a long time interval t, each component is roughly multiplied by a factor exp(tλk). In the vector product, the term E1(t) × E1(t), involving the largest components, vanishes, so that the leading contribution is provided by the cross term E1(t) × E2(t), which is proportional to exp[t(λ1 + λ2)]. Therefore, the area V2 spanned by two generic initial vectors grows as the sum of the two largest Lyapunov exponents:

λ1 + λ2 = lim_{t→∞} (1/t) log [V2(t)/V2(0)] .    (2.17)

Similarly, for a typical M-dimensional volume VM spanned by M generic initial vectors,

SM = Σ_{k=1}^M λk = lim_{t→∞} (1/t) log [VM(t)/VM(0)] .    (2.18)

This relation, which holds also for infinite-dimensional systems, lies at the heart of the numerical methods for determining Lyapunov exponents discussed in Chapter 3.
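The volume relation can be checked directly. In the sketch below (the Hénon map with the usual parameters is an assumed example), the full two-dimensional volume is followed with the re-orthonormalisation technique of Chapter 3; since |det J| = b at every step, the two exponents must sum to ln b:

```python
import numpy as np

# Check of Eq. (2.18) for the Henon map (assumed example, a=1.4, b=0.3):
# |det J| = b at every step, so lambda_1 + lambda_2 = ln b exactly.
# Volumes are followed via repeated QR re-orthonormalisation in order to
# avoid over/underflow.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
s = np.zeros(2)
T = 20000
for _ in range(T):
    J = np.array([[-2 * a * x, 1.0], [b, 0.0]])
    Q, R = np.linalg.qr(J @ Q)
    s += np.log(np.abs(np.diag(R)))
    x, y = 1 - a * x * x + y, b * x
l1, l2 = s / T
print(l1, l2, l1 + l2, np.log(b))
```

The sum l1 + l2 matches ln b to machine precision at every finite time, because log R11 + log R22 equals log|det J| step by step; the individual exponents, instead, converge only in the long-time limit.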

2.5.5 Time parametrisation

In the Lyapunov analysis, time plays a special role. One can often change variables to find the most appropriate ones (this "game" will often be played throughout the book) for either numerical or analytical computations. There exists, however, only one time, and usually one can at most think of rescaling it according to some characteristic scale of the problem. In relativistic dynamics this is not so, as there is no absolute time axis: the Lyapunov exponents depend on the observer (Francisco and Matsas, 1988) and their use as



chaos indicators has even been challenged. Occasionally, it may be useful to perform a non-trivial change of the time variable also in classical contexts. Let us start by introducing a generic scalar variable φ = φ(U). This is a proper time-like (phase-like) variable if its time derivative

dφ/dt = (∂φ/∂U) · F(U)    (2.19)

is strictly positive everywhere (if not in the entire phase space, at least on the invariant set). If this is true, one can use φ to order all of the events along the true time axis. Furthermore, under this assumption, one can switch the independent variable from t to φ, arriving at the evolution equation

dU/dφ = F(U) / [F(U) · ∂φ/∂U] .    (2.20)

Formally, one of the variables is redundant because of the link between φ and the position in phase space, but one can nevertheless disregard this property, select a generic initial condition U and let it evolve. In practice, the system (2.20) is a reformulation of the initial problem, where the time variable has been adjusted. In order to establish a full link with the original equations, it is necessary to complete the mathematical model with the equation that allows the determination of the true time as a function of φ. This is nothing but the inverse of Eq. (2.19):

dt/dφ = 1 / [F(U) · ∂φ/∂U] .    (2.21)

In practice, it is easy to convince oneself that the Lyapunov exponents λi^φ of the system (2.20) are equal to those of the original model up to a multiplicative constant,

λi^φ = λi / τ ,    (2.22)

where

τ = lim_{φ→∞} t/φ .    (2.23)

An example where a meaningful variable can be introduced is the Rössler attractor (A.8) for the standard parameter values, setting φ = arctan(y/x) as the "phase" of the oscillations. Within relativistic dynamics, φ is not a time-like variable but the true time measured by some other observer (Motter, 2003). Remarkably, Eq. (2.22) still holds when a Lorentz transformation turns a bounded trajectory into an unbounded one (or vice versa) – see Motter and Saa (2009) for a more detailed discussion. As a result, the sign of the Lyapunov exponent is preserved, which ensures that a positive value remains a valid criterion to identify chaos across space-time transformations. This change of variables proves rather useful whenever one has to determine a Poincaré surface of section. Imagine that the section is defined by the condition φ = C. It is much easier to integrate Eq. (2.20) until the "time" variable φ reaches the preassigned value C than to incorporate this condition into the original system. In general, one cannot expect



that the variable defining the surface of section is a global time-like variable. One can nevertheless introduce the new variable φ locally, sufficiently close to the Poincaré surface, where one can be sure that it behaves monotonically. This is basically the trick suggested long ago by Hénon (1982). This construction also allows one to establish a relationship between the continuous- and discrete-time representations of a given dynamical system. There are two ways to reduce a continuous-time system to a map: (i) by monitoring the continuous-time system stroboscopically, with a prescribed time interval T, and (ii) by constructing a Poincaré map, sampling a trajectory only when it crosses some (N − 1)-dimensional surface of section in the N-dimensional phase space. In the case of the stroboscopic map, the dimension of the phase space is unchanged, and the N Lyapunov exponents are related by the equation T λk^cont = λk^discr. When a Poincaré map construction is used (typically in autonomous continuous systems), the number of Lyapunov exponents reduces to N − 1. The zero exponent, which corresponds to the invariance of the original trajectories under time shift (see Section 2.5.6), "disappears" because this symmetry is lifted in discrete time. In fact, as discussed, the Poincaré map can be interpreted as a stroboscopic map for a time-like variable φ with constant C = 1. In this latter case, Eq. (2.23) yields τ = ⟨Tn⟩, where Tn is the Poincaré return time, so that all other exponents are related by ⟨Tn⟩ λk^cont = λk^Poin. The zero exponent may eventually be recovered by including the evolution equation (2.21) for the true time.

2.5.6 Symmetries and zero Lyapunov exponents

Symmetries and conservation laws play an important role in determining the spectrum of Lyapunov exponents. Continuous symmetries, for instance, typically yield zero Lyapunov exponents. We start by showing that autonomous continuous-time dynamical systems have a zero Lyapunov exponent, provided the trajectory does not converge to a steady state. To see this, let us consider a trajectory U(t) of Eq. (2.4) and select the perturbation u = F. Since in an autonomous system F does not explicitly depend on time, time differentiation yields

dF/dt = (∂F/∂U)(dU/dt) = (∂F/∂U) F = J(U)F,

which tells us that u = F(U) satisfies Eq. (2.5). Thus, one can use the norm of the "velocity vector" to determine the corresponding Lyapunov exponent from Eq. (2.12). As ‖F‖ remains bounded away from zero for a trajectory that does not converge to a fixed point, we conclude that the corresponding Lyapunov exponent vanishes. This fact has a simple physical interpretation: the zero Lyapunov exponent is measured by perturbing a phase point along its trajectory. Because the system is autonomous, such a perturbation on average neither grows nor decays; it just fluctuates, depending on the local velocity.
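This can be verified numerically. The sketch below (assuming the Lorenz system with the classic parameters, a standard example not specified here) exploits the fact that u(t) = F(U(t)) is an exact solution of the linearised equation, so its average logarithmic growth rate must vanish on the attractor.

```python
import numpy as np

# Zero exponent along the flow direction (sketch, Lorenz system assumed):
# u(t) = F(U(t)) solves the variational equation exactly, and since ||F||
# stays bounded away from zero on the attractor, its growth rate -> 0.
def F(U, sigma=10.0, r=28.0, beta=8.0 / 3.0):
    x, y, z = U
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - beta * z])

def rk4(U, h):
    k1 = F(U); k2 = F(U + h / 2 * k1)
    k3 = F(U + h / 2 * k2); k4 = F(U + h * k3)
    return U + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, T = 0.005, 400.0
U = np.array([1.0, 1.0, 20.0])
for _ in range(2000):            # transient: relax onto the attractor
    U = rk4(U, h)
U0 = U.copy()
for _ in range(int(T / h)):
    U = rk4(U, h)
lam = (np.log(np.linalg.norm(F(U))) - np.log(np.linalg.norm(F(U0)))) / T
print(lam)  # close to zero
```

The estimate only fluctuates with the local velocity, exactly as described above; it does not drift with T.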



Invariance of an autonomous continuous-time dynamical system with respect to time shifts is an example of a continuous symmetry. In the presence of a continuous symmetry, one can map a trajectory onto an equivalent one. In particular, the transformation can be infinitesimal and thereby interpreted as a perturbation that neither grows nor decays and is thus associated with a zero Lyapunov exponent. An example of a discrete-time system with a continuous symmetry is the complex map z(n + 1) = f(|z|)z, which is symmetric with respect to a rotation of the argument, z → e^{iα}z. In the context of spatio-temporal chaos, translational invariance of a partial differential equation (with periodic boundary conditions) yields a zero Lyapunov exponent. In the case of a quasiperiodic motion with k incommensurate frequencies, the dynamics can be properly reduced to that of k phases, φ̇j = ωj. The equations are invariant with respect to a shift of each phase; therefore the model possesses k zero Lyapunov exponents. Zero exponents appear also as a consequence of conservation laws. Let an N-dimensional dynamical system possess an integral of motion C(U) = const. Then the dynamics lives on an (N − 1)-dimensional manifold characterised by a specific (constant) C-value and has (N − 1) Lyapunov exponents. In the original phase space, this (N − 1)-dimensional dynamics should be augmented by the equation Ċ = 0, which does not contribute to the contraction rate of the phase volume. Thus, the sum of all N Lyapunov exponents is the same as the sum of the (N − 1) essential exponents, so that the integral of motion is associated with a zero Lyapunov exponent. As an example, let us consider a chain of oscillators with nearest-neighbour coupling, characterised by the Hamiltonian

H = Σi [ Pi²/2 + V(Q_{i+1} − Qi) ] .

This model has four zero exponents: one is associated with the invariance under time translation; one arises from the invariance under spatial translation (Q → Q + δQ); and the last two arise from momentum and energy conservation, respectively. Finally, zero exponents may also (non-generically) occur at bifurcation points, where some direction is (linearly) marginally stable. In such cases, it is necessary to go beyond the linear approach to determine the stability. Symmetries have additional implications: they may induce degeneracies for positive and negative Lyapunov exponents. Let S(U) denote a generic symmetry transformation. A dynamical regime is said to be instantaneously symmetric if each configuration of the invariant measure is S-symmetric, i.e. if S(U) = U. As an example, consider a set of three coupled oscillators U1, U2 and U3 and the transformation S(U1, U2, U3) = (U3, U2, U1). The system is instantaneously S-symmetric if U1(t) = U3(t). In the presence of instantaneous symmetries, the evolution in tangent space can be decomposed into that of diagonal blocks, so that the stability analysis is greatly simplified (see, for instance, the implementation of the master stability function in Chapter 9). Whenever some of the blocks are "equivalent", a multiplicity of Lyapunov exponents emerges (Aston and Dellnitz, 1995). The appearance of degeneracies may be more subtle and related to the presence of an average symmetry, i.e. not the symmetry of the single phase points but that of the entire set of points in the invariant measure. Although this latter property by itself does not typically have any implication for the structure of the Lyapunov spectrum, it may have one when suitably combined with the instantaneous symmetry (Ashwin and Breakspear, 2001; Aston and Melbourne, 2006). Consider, for instance,



the complex Ginzburg-Landau equation (A.18) with periodic boundary conditions in the interval [0, 2π ]: it has translational and reflection symmetry, besides being invariant under rotations of the complex variable U. For R = 4, μ = −4, in the interval ν ∈ [1.9, 2.3], the dynamics converge towards a state of spatial period π , thus with an instantaneous spatial translational symmetry of π , while the attractor itself exhibits a spatial translational symmetry of π/2 (and a reflection symmetry). The combination of the two ensures a strong degeneracy in the spectrum: two subsets of the Lyapunov exponents are equal to one another (Aston and Laing, 2000). Symmetries have been found to yield (multiple) degeneracies in various contexts, including a two-dimensional gas of hard disks (Eckmann et al., 2005) and among the negative exponents in coupled oscillators (see the discussion in Section 2.5.7).

2.5.7 Symplectic systems

Symplectic dynamics is defined in terms of pairs of conjugated (canonical) variables q, p ∈ R^N, so that it is customary to refer to a symplectic system with N degrees of freedom as a dynamical system characterised by 2N variables and 2N LEs. These exponents possess a special symmetry property: they come in pairs λk = −λ_{2N−k+1}. The most prominent such example is Hamiltonian dynamics, described by the equations of motion

q̇ = ∂H/∂p ,    ṗ = − ∂H/∂q ,

where H(q, p) is the energy (Hamiltonian function) of the system. In discrete time, one speaks of symplectic maps, which can be viewed as a canonical transformation (q(0), p(0)) → (q(t), p(t)) generated by the evolution of a Hamiltonian model. They are defined so as to preserve the canonical structure of the model; i.e. they must satisfy the Poisson brackets

{qi(t), qj(t)} = {pi(t), pj(t)} = 0 ,    {qi(t), pj(t)} = δij .

Such conditions can be rewritten in terms of the Jacobian matrix J of the transformation as

Jᵀ Ω J = Ω ,    Ω = ( 0 , IN ; −IN , 0 ) ,    (2.24)

where IN is the N × N unit matrix and Ω is the skew-symmetric matrix written above in block form. Matrices satisfying condition (2.24) are called symplectic. Since Ω² = −I, it can be easily proven that the relation

J Ω Jᵀ = Ω    (2.25)

holds as well. The Lyapunov exponents are defined in terms of the eigenvalues of the matrix Hᵀ H (see Eqs. (2.10, 2.11)). Now we show that, if μ is an eigenvalue of Hᵀ H with H symplectic, then μ⁻¹ is also an eigenvalue. Indeed, if y is the eigenvector associated to the eigenvalue μ, then

Hᵀ H y = μ y .


By multiplying this equation on the left by Hᵀ H, one obtains

Hᵀ H Hᵀ H y = μ Hᵀ H y .

Since H is the product of symplectic matrices, it is still symplectic. By recursively using the conditions (2.24, 2.25), it is found that

Ω y = μ Hᵀ H (Ω y) .

This proves that Ωy is an eigenvector of the matrix Hᵀ H with eigenvalue μ⁻¹. Thus, the eigenvalues of a product Hᵀ H of symplectic matrices appear in symmetric pairs. This implies that, taking the infinite-time limit in Eq. (2.11), the Lyapunov exponents also appear in symmetric pairs, λk = −λ_{2N−k+1}. Various generalisations of this symmetry to a wider class of dynamical systems have been discussed in the literature. Remarkably, this includes the symmetry with respect to a negative value −γ (λk + λ_{2N−k+1} = −γ) that appears in some models of dissipative or suitably thermostatted particles (Dressler, 1988; Gupalo et al., 1994).
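A quick numerical confirmation of the pairing (a sketch; the Chirikov standard map is an assumed example of a symplectic map, not taken from the text): since every one-step Jacobian has unit determinant, the eigenvalues of HᵀH must multiply to one.

```python
import numpy as np

# Eigenvalue pairing mu, 1/mu of H^T H for a symplectic H (sketch with
# the Chirikov standard map as an assumed example). Only a few steps are
# taken, so that both eigenvalues stay well resolved in double precision.
K = 1.5
p, q = 0.2, 0.3
H = np.eye(2)
for _ in range(6):
    # standard map: p' = p + K sin q,  q' = q + p'  (det of Jacobian = 1)
    Jac = np.array([[1.0, K * np.cos(q)],
                    [1.0, 1.0 + K * np.cos(q)]])
    H = Jac @ H
    p = p + K * np.sin(q)
    q = q + p
mu = np.linalg.eigvalsh(H.T @ H)
print(mu[0] * mu[1])  # = det(H)^2 = 1 up to rounding
```

For long products the smaller eigenvalue of HᵀH would underflow; this is one more reason why, in practice, the exponents are extracted with the short-step methods of Chapter 3.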

3 Numerical methods

By definition, the computation of Lyapunov exponents requires knowledge of the evolution equations in tangent space and, in particular, of the local Jacobians. This is, however, not always possible. The reason may be the complexity of the model itself, as for the most detailed models used for weather predictions, or the absence altogether of a quantitative model, as in many experiments. In such cases one can compensate for the lack of knowledge of the Jacobian by following trajectories that are close enough to the reference trajectory to remain within the linear regime (frequently rescaling their distance, to prevent it from becoming either smaller than the computational accuracy or the noise level of the system, or so large as to be affected by nonlinearities). In practice, it is necessary to reconstruct (either implicitly or explicitly) the Jacobian. This chapter is almost entirely devoted to discussing various methods to determine LEs, starting from the known local Jacobians. In Section 3.7, we nevertheless discuss some approaches that can be applied to time series in the absence of a known quantitative model.

3.1 The largest Lyapunov exponent

The numerical calculation of the largest Lyapunov exponent is a relatively simple problem: it is sufficient to implement the relation (2.12). As discussed in Section 2.3.3, a generic initial perturbation u(0) has a non-zero component along the most expanding direction, which dominates the evolution over long times. One can thus compute the norm of the perturbation after a suitable transient time (which depends on the difference between the largest and the second exponent) and thereby determine λ1 from Eq. (2.12). In practice, in order to avoid exceedingly large or small numbers (for a positive or negative largest exponent, respectively), it is convenient to periodically renormalise the perturbation. As an example, we describe the procedure for the case of discrete time. Given an initial unit vector u(0), one first computes ũ(1) = J(0)u(0) and then renormalises it to unit norm,

u(1) = ũ(1) / ‖ũ(1)‖ .    (3.1)

Next, the Jacobian matrix J(1) is applied to the unit vector u(1), etc. The largest Lyapunov exponent is determined by averaging the logarithms of the normalisation factors obtained at each step,

λ1 = lim_{t→∞} (1/t) Σ_{k=1}^t ln‖ũ(k)‖ .    (3.2)
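A minimal implementation of this procedure (a sketch; the Hénon map with the usual parameters is an assumed example, and the average is taken over a finite time):

```python
import numpy as np

# Sketch of Eqs. (3.1)-(3.2) for the Henon map (assumed example,
# a = 1.4, b = 0.3): apply the Jacobian to a unit vector, renormalise,
# and average the logarithms of the normalisation factors.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
u = np.array([1.0, 0.0])
T, s = 20000, 0.0
for _ in range(T):
    u = np.array([[-2 * a * x, 1.0], [b, 0.0]]) @ u
    s += np.log(np.linalg.norm(u))
    u /= np.linalg.norm(u)
    x, y = 1 - a * x * x + y, b * x
lam1 = s / T
print(lam1)  # close to the accepted value ~0.42 for these parameters
```
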



In practice, the average is performed over a finite time (possibly dropping an initial transient). This method cannot be straightforwardly extended to the computation of the second Lyapunov exponent, as one would need to choose a non-typical initial vector in the (N − 1)-dimensional subspace L2, which is a priori unknown. Turning, instead, our attention to the definition (2.11), one might wish to solve the problem by determining the eigenvalues of the symmetric real matrix M(t) = Hᵀ(t)H(t), where the matrix H(t) is obtained by multiplying the single-step Jacobian matrices over the time interval from 0 to t − 1. Although it is formally sufficient to implement a suitable linear-algebra routine, one can see that this procedure is eventually rather inaccurate. Let us, indeed, look at the action of the N×N matrix H(t) on a given set of initial vectors. Singular value decomposition (SVD) allows decomposing H(t) as H(t) = ODQ, where O and Q are two orthogonal matrices, while D is diagonal. From the equality HᵀH = QᵀD²Q, one can conclude that the eigenvalues of M(t) are μi² = Dii². In practice, the SVD reveals that a generic transformation H(t) can be decomposed into a rotation, Q, followed by an expansion/contraction along various orthogonal directions (D) and by a second rotation, O. Therefore, when H(t) is applied to a unit N-dimensional sphere SN, the first rotation is irrelevant, while the application of D transforms SN into an ellipsoid, which is then suitably rotated. Altogether, this means that the sizes of the axes of the ellipsoid directly measure the multipliers μi. Unfortunately, one cannot use this method to compute the Lyapunov exponents. If the matrix H(t) is computed over too long a time (as needed to approach the limit t → ∞), the singular values Dii cover too wide a range1 to be estimated with enough accuracy by any method. On the other hand, one cannot break the computation into many consecutive subintervals, since the input of any SVD after the first one would be an ellipsoid rather than a sphere, and the aforementioned arguments apply only when the initial set is a unit sphere.

3.2 Full spectrum: QR decomposition

A solution to this problem is obtained by estimating the growth rate Sm of the volume Vm of a generic m-dimensional parallelepiped,

1 In fact, a range that grows exponentially in time.


Sm := lim_{t→∞} (ln Vm)/t = ∑_{i=1}^{m} λi ,    (3.3)

where the last equivalence relation follows from the tendency of the parallelepiped to almost surely align along the most expanding m-dimensional subspace (see Section 2.5.4). Let us now denote with Q0 an N×m orthogonal matrix (i.e. a set of m orthogonal vectors in the N-dimensional tangent space). Within a time t, Q0 evolves into a parallelepiped identified by P = H(t)Q0. The matrix P admits a unique decomposition of the type P(t) = QR, where Q is an orthogonal N×m matrix, while R is an upper triangular m×m matrix with positive diagonal elements. The determinant of R is equal to the previously mentioned volume Vm (since Q involves only rotations and reflections, which do not affect the volume). Therefore,

Vm = ∏_{i=1}^{m} Rii .

By substituting this expression into Eq. (3.3) and applying it successively to m = 1, m = 2, etc., one finds that

λj := lim_{t→∞} (1/t) ln Rjj .

The main advantage with respect to the SVD is that now one can decompose the computation of R into the product of many terms, each of which can be accurately determined. In fact, let us break the time interval t into L steps of size τ and denote with Hk the operator resulting from the integration between time (k − 1)τ and kτ (k = 1, …, L),

P = (∏_{k=1}^{L} Hk) Q0 .

If we now define Pk := Hk Qk−1 and perform the QR decomposition Pk = Qk Rk, a recursive iteration of the procedure leads to

P = Q_L ∏_{k=1}^{L} Rk .

Given that the product of the R matrices is still upper triangular and that the decomposition is unique, one can conclude that the computation of R can be split into that of many single steps of finite length, and the jth Lyapunov exponent is obtained by summing the single ln Rjj contributions. For the computation of the Lyapunov exponents, only the diagonal terms of R are necessary. Notice also that it is not necessary to determine separately the matrix Hk and multiply it by Qk−1: it suffices to integrate the equations in tangent space, starting from the initial conditions Qk−1. Altogether, one can summarise the approach by stating that, given a set of orthogonal vectors, they are re-orthogonalised after some time. In the process, the expansion of volumes of different dimensions is computed, while the orthogonalisation prevents the different directions from mutually aligning, thus avoiding problems of numerical accuracy. This approach was first proposed in the late 1970s (Shimada and Nagashima, 1979; Benettin et al., 1980a, b); its implementation allowed determining Lyapunov spectra in many relevant dynamical systems.
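The QR scheme above maps directly onto standard library routines. A minimal sketch for the Hénon map (an illustrative choice of system, not discussed in this section) uses numpy's QR factorisation; since the map's Jacobian has constant determinant −b, the sum of the two exponents must equal ln b, which provides a consistency check:

```python
import numpy as np

def lyapunov_spectrum(steps=50_000, a=1.4, b=0.3):
    x = np.array([0.1, 0.1])
    for _ in range(1_000):                      # discard the transient
        x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
    Q = np.eye(2)                               # initial orthogonal set Q_0
    sums = np.zeros(2)
    for _ in range(steps):
        J = np.array([[-2 * a * x[0], b],
                      [1.0, 0.0]])
        P = J @ Q                               # P_k = H_k Q_{k-1}
        Q, R = np.linalg.qr(P)                  # QR decomposition P_k = Q_k R_k
        sums += np.log(np.abs(np.diag(R)))      # only |R_jj| matters for the LEs
        x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
    return sums / steps

spec = lyapunov_spectrum()
# spec[0] + spec[1] equals ln(b) = ln|det J|, up to round-off
```

Note that numpy's convention does not force a positive diagonal of R, so the absolute values of the diagonal entries are taken before the logarithm, consistently with the remark above that only the moduli matter.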

3.2.1 Gram-Schmidt orthogonalisation

The most popular algorithm to perform the QR decomposition is the Gram-Schmidt (GS) procedure, which is defined as follows. Let P be an N×m matrix and set R11 = ‖P1‖, where ‖·‖ denotes the norm of a vector, while Pk denotes the kth column vector of the matrix P. In the first step one calculates Q1 = P1/R11. The following steps are recursively defined as

Q̃i = Pi − ∑_{j=1}^{i−1} (Pi · Qj) Qj ,    Rii = ‖Q̃i‖ ,    Qi = Q̃i/Rii .

These computations can be carried out in two different ways, with different degrees of numerical accuracy. The classical approach amounts to literally following this definition. The modified GS method amounts to subtracting the component along each newly generated vector Qj from all the remaining columns as soon as Qj is available. The different order of the operations guarantees a better degree of orthogonality in the resulting matrix. A pseudocode of the latter approach is sketched in Appendix B. As for the computational complexity of this method, one can easily check that the leading term requires N³ multiplications (in the case m = N).
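The difference between the two variants can be made concrete in a few lines. The function below (an illustrative sketch, with names of our own choosing) implements both orderings; on a badly conditioned matrix with nearly parallel columns, the modified variant retains orthogonality far better:

```python
import numpy as np

def gram_schmidt(P, modified=True):
    """QR decomposition of an N x m matrix by classical or modified GS."""
    N, m = P.shape
    Q = np.zeros((N, m))
    R = np.zeros((m, m))
    for i in range(m):
        v = P[:, i].copy()
        for j in range(i):
            # classical GS projects the *original* column onto Q_j;
            # modified GS projects the partially orthogonalised vector
            R[j, i] = Q[:, j] @ (v if modified else P[:, i])
            v -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(v)
        Q[:, i] = v / R[i, i]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
A[:, 1:] += 1e8 * A[:, :1]          # nearly parallel columns (ill-conditioned)
for flag in (False, True):
    Q, R = gram_schmidt(A, modified=flag)
    print(flag, np.linalg.norm(Q.T @ Q - np.eye(10)))
```

The printed orthogonality defect of the classical variant is orders of magnitude larger, which is precisely the behaviour motivating the modified ordering (and the "double GS" trick discussed in Section 3.5.1).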

3.2.2 Householder reflections

Householder reflections provide a more accurate tool to compute Lyapunov exponents. This was first recognised in a 1984 preprint of Johnson et al. (1987) and then implemented by Eckmann and Ruelle (1985). A Householder transformation is a reflection about a given plane A. In matrix form, this orthogonal operation is described by O = I − 2wwᵀ, where I is the identity matrix, while w is a unit vector orthogonal to the plane A. Given a matrix P^(0), its QR decomposition is obtained by repeatedly applying a sequence of Householder transformations Oi,

P^(i) = Oi P^(i−1) .

Notice that the superscript index in parentheses is used here and below to distinguish the intermediate steps during a given QR decomposition, as opposed to the subscript k used to distinguish the P matrices generated along a trajectory. These transformations are identified by the unit vectors wi = w̃i/‖w̃i‖, where

w̃i = ⌊Pi^(i−1)⌋_{i−1} + sign(Pii^(i−1)) ‖⌊Pi^(i−1)⌋_{i−1}‖ ei ,

⌊·⌋_i denotes the projection operator which zeroes the first i components, and ei is the ith Euclidean unit vector. It is clear that the first i − 1 components of wi are equal to zero. This expresses the fact that the ith Householder transformation does not affect the first i − 1 variables. It turns out that P^(i) has the following structure

P^(i) = [ R^(i)  S^(i) ]
        [ 0      S̃^(i) ] ,

where R^(i) is an i × i upper triangular matrix, while S^(i) and S̃^(i) are i × (m − i) and (N − i) × (m − i) matrices, respectively. Finally, Rii = ‖⌊Pi^(i−1)⌋_{i−1}‖; here we should note that, in order to guarantee that the diagonal elements of R are positive, suitable additional reflections should be considered. Such operations are, however, totally irrelevant for the computation of the Lyapunov exponents, which depend only on the absolute values. As a result of the recursive procedure,

P^(m) = Om Om−1 ⋯ O1 P^(0) = [ R ]
                              [ 0 ] ,

and therefore Qᵀ is obtained from the product Om ⋯ O1 by retaining only its first m rows. An explicit expression of Q can be obtained by repeatedly applying the matrices Oi,

y^(i) = y^(i−1) − 2 (wi · y^(i−1)) wi .

By implementing the whole procedure to the matrix P^(0) ≡ Pk and setting y^(0) = ek, one eventually finds that the kth column of Qᵀ is given by the first m components of y^(m). A pseudocode of this algorithm is reported in Appendix B. In the case m = N, the Householder procedure requires (2/3)N³ multiplications for the factorisation and N³ for the reconstruction of the Q matrix, i.e. (5/3)N³ altogether. The scaling behaviour is the same as for GS, while the prefactor is slightly worse in this latter case. Notice, however, that it is possible to further optimise the method, bringing the number of operations down to (2/3)N³ (von Bremen et al., 1997). So far, we have not included the computational cost of generating the new matrix Pk from Qk−1. This step depends on the structure of the equations of motion. Assuming that N_I represents the average number of interactions between a generic variable and all the others, this task requires a number of operations proportional to nN_I N², where n is the number of integration time steps in between two QR decompositions. In many physical models, N_I is independent of N and possibly a small number (in the case of nearest-neighbour coupled oscillators, it may be as small as 3). In such cases, the computational bottleneck depends on n/N but is likely to be the QR decomposition for large systems. In systems with global coupling, the bottleneck is typically the integration of the vectors, which requires a time of order nN³ (if all exponents have to be determined).


An exception is the class of mean-field models where all variables are driven by the same field, and, thereby, the cost of the integration is much reduced.

3.3 Continuous methods

The methods described in Sections 3.1 and 3.2 are equally applicable to discrete-time and continuous-time systems. The only difference is that in the continuous case, between consecutive QR decompositions, it is necessary to integrate the nonlinear differential equation together with the differential equation for the perturbations, using appropriate (possibly standard) numerical methods. If the time variable is continuous, one can, however, determine the Lyapunov exponents by directly integrating suitable differential equations, without linear-algebra manipulations. This idea was first proposed by Goldhirsch et al. (1987). The starting point is the fundamental equation

Ẏ = KY ,

where Y is a matrix made of N independent vectors and K is the Jacobian of the differential equation (or, more in general, just a time-dependent matrix). In order to compute the Lyapunov spectrum, it is necessary to decompose Y: Y(t) = Q(t)R(t), where Q is orthogonal and R is upper triangular. For the sake of simplicity we assume that Q is a square matrix, i.e. that all exponents are going to be computed, but the approach works in general, with some crucial differences that will be properly emphasised. By combining these two equations, one obtains

Q̇R + QṘ = KQR ,

which leads, after multiplying from the left with Qᵀ and from the right with R⁻¹, to

K̃ = Qᵀ K Q − Qᵀ Q̇ = Ṙ R⁻¹ ,    (3.4)

where the last term in the r.h.s. is upper triangular, being the product of two upper triangular matrices. Accordingly, the antisymmetric matrix L(t, Q) = Qᵀ Q̇ (this property follows from the orthogonality of Q) satisfies

Lij = [Qᵀ K Q]ij     for i > j ,
Lij = 0              for i = j ,
Lij = −[Qᵀ K Q]ji    for i < j .

In other words, Q evolves according to the differential equation

Q̇ = Q L(t, Q) ,    (3.5)


while the information on the volume expansion is contained in the matrix R, which satisfies the equation

Ṙ = K̃ R

(see Eq. (3.4)). As the Lyapunov exponents depend only on the diagonal terms of R (the diagonal terms of L are equal to zero), this equation reduces to a set of independent equations, namely

Ṙii = [Qᵀ K Q]ii Rii ,

so that

λi = lim_{t→∞} (1/t) ln Rii = lim_{t→∞} (1/t) ∫₀ᵗ [Qᵀ(s) K(s) Q(s)]ii ds ,

which can be integrated by quadratures. By construction, the evolution of Q is constrained to the (Stiefel) manifold of orthogonal matrices. If one is interested in the computation of all eigenvectors, the Stiefel manifold is neutrally stable (Dieci and Van Vleck, 1995), and it is harmless to integrate the equation with unitary algorithms (e.g. Gauss Runge-Kutta). Their implementation, however, is not straightforward (Dieci et al., 1997), especially in high-dimensional spaces. On the other hand, if one is interested in only a few exponents, the Stiefel manifold is transversally unstable (as noted by Dieci and Van Vleck (1995)), and different approaches become necessary. A first solution was proposed by Christiansen and Rugh (1997), who added a suitable dissipation which vanishes on the Stiefel manifold but stabilises transversal perturbations. Unfortunately, the method is only conditionally stable (above a model-dependent parameter value) and, in some cases, simply fails (Ramasubramanian and Sriram, 2000). A more sophisticated approach was developed by Bridges and Reich (2001), who managed to stabilise the Stiefel manifold unconditionally by making use of concepts of differential geometry. In practice, this method is based on a transformation of the matrix L that makes it strictly antisymmetric.² Yet another class of methods is based on mapping Q onto a series of angles, which uniquely identify the transformation Q, and thereby writing suitable differential equations. This method was first proposed by Rangarajan et al. (1998) and later extended by Ramasubramanian and Sriram (2000). However, the number of expressions that one has to manage grows rapidly with the dimension and becomes unmanageable above N = 4. Finally, for some special flows, such as symplectic dynamics, different decompositions have been proposed (Habib and Ryne, 1995), which again suffer from the problem of scalability.
Altogether, the implementation of automatic unitary integrators is not straightforward, and this represents a major obstacle to the study of high-dimensional systems. In the end, even though much less elegant, the most versatile algorithms are those based on the so-called projected schemes, where a standard integration algorithm is combined with an orthogonalisation routine, so as to systematically remove the transversal components (Dieci et al., 1997).

² In fact, as noted by Dieci and Van Vleck (1995), the Stiefel manifold is unstable as long as L is only weakly antisymmetric, i.e. if Qᵀ[H + Hᵀ]Q = 0 without H = −Hᵀ. This is possible only for rectangular orthogonal matrices.


From a computational point of view, one should notice that, even disregarding the CPU time used to orthogonalise Q (required only by the last approach), the integration of Eq. (3.5) is much more CPU-time consuming than the integration of P, since L is (except for its symmetry) a full matrix. This implies that the number of operations per time step is of the order of N³. This means that, unless one can use a significantly longer integration time step, such methods are necessarily more CPU-time consuming than the discrete-time ones.
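As an illustration of the projected strategy, the sketch below integrates Eq. (3.5) with an explicit Euler step and re-orthogonalises Q periodically. It is a toy implementation under simplifying assumptions (a constant, non-normal matrix K of our choosing), not a production integrator; for constant K, the Lyapunov exponents must coincide with the real parts of the eigenvalues of K:

```python
import numpy as np

def continuous_qr_les(K, Q0=None, t_end=100.0, dt=1e-3, reorth_every=100):
    """Integrate dQ/dt = Q L(t,Q), Eq. (3.5), with explicit Euler steps and
    periodic projection (QR) back onto the manifold of orthogonal matrices."""
    N = K.shape[0]
    Q = np.eye(N) if Q0 is None else Q0.copy()
    acc = np.zeros(N)
    for n in range(int(t_end / dt)):
        A = Q.T @ K @ Q
        low = np.tril(A, -1)
        L = low - low.T                 # antisymmetric matrix L(t, Q)
        acc += np.diag(A) * dt          # integrand [Q^T K Q]_ii
        Q = Q + dt * (Q @ L)            # Euler step
        if (n + 1) % reorth_every == 0:
            Q, _ = np.linalg.qr(Q)      # remove transversal components
    return acc / t_end

K = np.array([[1.0, 2.0],
              [0.0, -1.0]])            # eigenvalues +1 and -1
theta = 0.3                            # start away from the asymptotic frame
Q0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
print(continuous_qr_les(K, Q0))        # approaches [1, -1]
```

Because the diagonal of QᵀKQ always sums to the trace of K, the computed exponents satisfy the volume-contraction constraint exactly, while the individual values converge only after the frame Q has relaxed.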

3.4 Ensemble averages

The standard definition of Lyapunov exponents requires averaging a local expansion rate along a formally infinite trajectory. It is natural to ask whether LEs can also be determined by performing an ensemble, or invariant-measure, average. In the case of one-dimensional maps (cf. Section 2.2), the local expansion rate is given by ln |F′(U)|, and the Lyapunov exponent can be simply defined as

λ = ∫ ln |F′(U)| ρ(U) dU ,

where ρ(U) is the invariant probability density (invariant measure). In other words, the LE can be expressed as the average of a local observable in phase space. In higher-dimensional systems, the problem is more difficult, since the expansion rate depends on the orientation of the perturbation. As will be illustrated in Chapter 4, Ershov and Potapov (1998) proved that each point in phase space is equipped with a unique set of covariant Lyapunov vectors, which are the proper directions to be used. Therefore, one is entitled to define

λi = ∫ ln ‖F′(U) Vi(U)‖ ρ(U) dU ,    (3.6)

where the unit vector Vi(U) is aligned along the ith covariant Lyapunov vector in U. A direct implementation of Eq. (3.6) is generally problematic, since the covariant vectors are unknown. The only way to determine them is by setting up the same algorithm which allows determining the LEs via the standard approach (see next chapter)! This idea should not, however, be completely discarded. One can indeed follow a hybrid approach, by combining an average over the phase space with some time evolution. For instance, one can formally define

λi = lim_{t→∞} (1/t) ∫ ln Rii dμ ,    (3.7)

where the matrix R is determined via a standard QR decomposition along a trajectory of time length t that starts in U at time 0, with a random initialisation of the vectors in tangent space, while μ is the given invariant measure. This definition looks awkward, as it involves a time limit as well as an average over the phase space. Here, however, the convergence in time is much faster than for the usual approach, since it is limited only by the relaxation time towards the proper Lyapunov


vectors in tangent space. Slow processes in phase space do not affect the computation, as one assumes to deal directly with the invariant measure. A practical implementation of this idea was proposed by Aston and Dellnitz (1999) for the largest LE and later extended by Beyn and Lust (2009) to the whole spectrum. The convergence in Eq. (3.7) can be further accelerated if, before starting the evolution in tangent space with a set of random vectors, one first iterates the initial point U backwards in phase space for some time t_b. This way, the orthogonal subspaces are properly oriented at time t = 0. This idea was implemented by Eckhardt and Yao (1993) to determine the maximum Lyapunov exponent in the Lorenz attractor and the Chirikov-Taylor standard map. Altogether, the advantage of this method lies in the ability to avoid statistical fluctuations by performing an integral in phase space. Therefore, its validity depends crucially on the accuracy of the known invariant measure. In Hamiltonian systems, the invariant measure is known a priori, but, more in general, it has to be determined one way or another. Typically, the phase space can be partitioned into relatively small boxes Bi (Dellnitz and Hohmann, 1997), to thereby build a discretised Perron-Frobenius operator Q, which governs the evolution of the measure,

Qij = m(Bj ∩ F⁻¹(Bi)) / m(Bj) ,

where m(Bj) denotes the uniformly distributed mass within the box Bj (cf. Section 5.5). The (normalised) leading eigenvector of Q provides an estimate for the suitably coarse-grained invariant measure. The bottleneck of this approach is the phase-space dimension; it can hardly be applied to high-dimensional dynamics. The ensemble-average approach may also be implemented analytically. In the case of a weakly forced chaotic oscillator (a problem connected with the onset of phase synchronisation), it allows deriving a perturbative expression for the second Lyapunov exponent (see Section 9.1).
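A crude version of the hybrid strategy of Eq. (3.7) is easy to sketch. Here (an illustration with a map of our own choosing, the Hénon map) the invariant measure is sampled numerically by relaxing many initial conditions onto the attractor, after which the growth rate is averaged over a short tangent-space evolution:

```python
import numpy as np

def hybrid_le(n_ensemble=400, transient=300, t_short=50, a=1.4, b=0.3):
    """Largest LE as an ensemble average: short-time growth rates averaged
    over many points sampled from the (numerically generated) measure."""
    rng = np.random.default_rng(2)
    total = 0.0
    for _ in range(n_ensemble):
        # sample a point of the attractor: random perturbation + transient
        x = np.array([0.1 + 0.02 * rng.standard_normal(), 0.1])
        for _ in range(transient):
            x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
        u = rng.standard_normal(2)
        u /= np.linalg.norm(u)                   # random tangent vector
        s = 0.0
        for _ in range(t_short):
            u = np.array([-2 * a * x[0] * u[0] + b * u[1], u[0]])
            nrm = np.linalg.norm(u)
            s += np.log(nrm)
            u /= nrm
            x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
        total += s / t_short
    return total / n_ensemble
```

The residual bias comes from the alignment transient of the random initial vectors, of order ln(cos φ)/t_short per sample; it is this transient that the backward pre-iteration mentioned above is designed to suppress.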
Finally, notice that, in the case of random processes, the integral over the phase space in Eq. (3.6) is to be replaced by two integrals: one over the possible orientations of the given covariant Lyapunov vectors and the other over the realisations of the stochastic process itself. This is a rather complex task, but the distribution of the random variables is known a priori and is typically rather smooth (at variance with the usually wild dependence of the covariant vectors on the position in phase space). In fact, analytic results for the maximum LE can be obtained for some problems involving products of random matrices (see Chapter 8).
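The box discretisation of the Perron-Frobenius operator described above can be tested on the logistic map F(U) = 4U(1 − U), whose exponent is known analytically (λ = ln 2). The sketch below (an illustrative construction, not from the references cited above) samples each box, extracts the leading eigenvector as the coarse-grained measure, and evaluates the ensemble average of ln |F′|:

```python
import numpy as np

def ulam_le(n_boxes=200, samples_per_box=100):
    F = lambda u: 4.0 * u * (1.0 - u)
    Q = np.zeros((n_boxes, n_boxes))            # discretised transfer operator
    for j in range(n_boxes):
        # uniformly distributed sample points inside box B_j
        u = (j + (np.arange(samples_per_box) + 0.5) / samples_per_box) / n_boxes
        target = np.minimum((F(u) * n_boxes).astype(int), n_boxes - 1)
        for i in target:
            Q[i, j] += 1.0 / samples_per_box    # fraction of B_j mapped into B_i
    vals, vecs = np.linalg.eig(Q)
    rho = np.real(vecs[:, np.argmax(np.real(vals))])
    rho = np.abs(rho) / np.abs(rho).sum()       # coarse-grained invariant measure
    centers = (np.arange(n_boxes) + 0.5) / n_boxes
    return float(np.sum(rho * np.log(np.abs(4.0 - 8.0 * centers))))

print(ulam_le())   # close to ln 2
```

The cost of building and diagonalising Q scales with the number of boxes, which grows exponentially with the phase-space dimension; this is the bottleneck noted in the text.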

3.5 Numerical errors

The first, obvious source of errors in the computation of Lyapunov exponents is the integration of the evolution equations in tangent space. This problem is analogous to that of guaranteeing a good accuracy in the real-space evolution; one must, however, keep in mind that the time scales in tangent space may be different. This is particularly true for the computation of very negative Lyapunov exponents, which might require a smaller time step than the nonlinear equation. Since these problems are not specific to Lyapunov exponents, the interested reader can consult the appropriate literature on the subject. Methods for the integration of general ODEs are found, e.g., in Butcher (2008), while more specific algorithms to deal with systems with symmetries (such as symplectic dynamics) are discussed in Leimkuhler and Reich (2004) and Hairer et al. (2010). Finally, a comprehensive presentation of methods for partial differential equations can be found in Ames (1992), Quarteroni and Valli (1994), and Morton and Mayers (2005). A comparison of the various numerical methods for the computation of Lyapunov exponents can be found in Geist et al. (1990) and Ramasubramanian and Sriram (2000); although somewhat outdated, these comparisons still contain useful information. In the following sections we discuss two major sources of errors: (i) the finite orthogonalisation time and (ii) statistical fluctuations. The last section is devoted to a specific example where small errors may give rise to sizable effects, because of a quasi-degeneracy. For the sake of clarity, some of our considerations are supported and exemplified by numerical simulations.

3.5.1 Orthogonalisation

The errors on the Lyapunov exponents are a direct consequence of the errors affecting the triangular matrix R. In discrete methods, the error on R depends on the errors made in the computation of the transition matrices P. A first analysis was presented by Dieci et al. (1997) and later refined by Dieci and Van Vleck (2005); there, it was found that there are two sources of errors. One is related to the "nonnormality" of the matrix R, i.e. to the size of its nondiagonal components, which may obstruct the estimate of the diagonal terms (Dieci and Van Vleck, 2005). This contribution arises from the fact that some directions may not be well separated from each other; it grows with the orthogonalisation time interval τ. Except for possibly pathological models, it should not create any problem, provided that a small enough value of τ is selected. The second contribution is due to the actual value of the expansion/contraction rates. A bound was given by Dieci et al. (1997),

δλi = C (R⁻¹ii(τ)/τ) ε ,    (3.8)

where C is a typically unknown prefactor, Rii(τ) is the expansion factor over the orthogonalisation time τ and ε represents the numerical accuracy. The accuracy may, therefore, degrade significantly for the negative multipliers. Here we illustrate the problem by studying a chain of Rössler oscillators (see Eq. (A.17)) for parameter values where some LEs are approximately equal to −10 (see Fig. 3.1). In that case, assuming that R⁻¹ii(τ) ≈ exp(−λi τ), we find that for τ = 1 the error is amplified by a factor 10⁶, i.e. six digits are lost in the computation. As long as the single computations are carried out with an accuracy of 15-16 digits, this degradation does not create practical problems, and the full dots in Fig. 3.1 indeed provide a fairly accurate description of the whole Lyapunov


Fig. 3.1  Lyapunov spectra for a chain of 10 Rössler oscillators (A.17) for a = b = 0.1, c = 10 and ε = 2. The various symbols correspond to different protocols and values of the orthogonalisation time τ. Full dots and the solid curve have been obtained using the GS procedure with τ = 1 and τ = 2, respectively. The dashed line corresponds to a double GS implementation (see the text). The dotted line has been obtained by implementing Householder reflections with τ = 2.

spectrum. For τ = 1, the differences among the discrete methods cannot be appreciated. However, as soon as τ = 2, Eq. (3.8) implies that the amplification factor becomes 10¹² (apart from the unknown factor C). As a result, the last 10 exponents are badly estimated, if determined with the GS approach (here and almost everywhere in the book, the modified GS method is implemented); see the solid line. This can be partially understood by noticing that there is a large gap in the spectrum between the 20th and the 21st exponent. This gap cannot be properly handled by the GS method, which grossly overestimates the spectrum. The reason is that in some limit cases the GS approach is not able to produce a truly orthogonal basis; some improvement can be obtained by orthogonalising the basis twice, i.e. applying the GS orthogonalisation twice (see the dashed line). The dotted line, obtained by using Householder reflections, proves that this method is superior to the GS orthogonalisation; one can exploit the higher accuracy by using longer orthogonalisation times. In continuous-time methods, the error in R depends on the integration of the matrix Q, whose evolution is less unstable (Dieci and Van Vleck, 2005). Accordingly, such methods are superior in this respect. We should recall, however, that typically the major source of errors are the statistical fluctuations discussed in Section 3.5.2.

3.5.2 Statistical error

The different degree of stability of the various regions in phase space represents an unavoidable source of fluctuations in chaotic dynamics. Equivalently, fluctuations may


Fig. 3.2  (a) The finite-time Lyapunov exponent Λ(U0, t) in the Rössler oscillator (A.8) for the standard parameter values, obtained from a single trajectory. The error bars are obtained by following the procedure described in the text. (b) Effective diffusion coefficient σ²/T versus T, from the same data set.

arise in stochastic contexts, when different matrices are singled out in different time steps. In order to determine how the presence of such fluctuations affects the computation of an LE, it is necessary to sharpen the notations, introducing appropriate observables. Let Λ(U0, t) denote the growth rate measured over a time t, along a trajectory starting from U0.³ We then introduce the logarithm of the expansion factor over a time t, Γ(t) := Λ(U0, t) t. The time evolution of Γ(t) can be viewed as a diffusive process with a drift. The average increment ⟨ΔΓ(T)⟩ of Γ over a time T is nothing but λT. Its variance σ²(T) is expected to increase linearly with time, σ²(T) ≈ DT for T large enough (unless we are in the presence of anomalous diffusion). Accordingly, the statistical error in the Lyapunov exponent is σ_λ = √(D/T). In practice, the diffusion coefficient D can be estimated by splitting the overall computation time t into a relatively large number of intervals of length T. If T is longer than the correlation time, one can determine the diffusion coefficient as D = σ²(T)/T and thereby extrapolate the error for longer times, including the length t of the entire simulation, without the need of repeating the simulation. This procedure is illustrated in Fig. 3.2, which refers to the maximum exponent of a single Rössler oscillator (A.8) with the standard parameter values. The solid line in Fig. 3.2a corresponds to Λ(U0, t) for some specific initial condition, versus time. The error bars are obtained by splitting the entire trajectory into samples of length t0 = 3000 (much smaller than the total length) and thereby estimating the statistical fluctuations. Fig. 3.2b contains the effective diffusion coefficient σ²/T. Its decrease reveals the presence of slowly decaying correlations that survive for more than 1000 time units. On the basis of the D value

³ In Chapter 5, Λ will be identified with the finite-time Lyapunov exponent. For the sake of simplicity, we drop here the index that identifies the LE itself.


obtained for T = 1280, the estimate of the Lyapunov exponent at time t = 10⁶ is found to be affected by a statistical error of 1.2 × 10⁻⁴. The diffusion coefficient D, introduced here as a tool to estimate the statistical error on the Lyapunov exponent, is itself a dynamical invariant that contributes to characterising a chaotic dynamics. This concept is extensively discussed in Section 5.3. Here, we limit ourselves to recalling that zero LEs are characterised by a vanishing diffusion coefficient whenever they are due to symmetries or conservation laws. The very small LEs arising in weakly chaotic systems (such as nearly integrable Hamiltonian models) are a qualitatively different case. There, the convergence of the Lyapunov exponents is affected by the long-lasting temporal correlations which also characterise the convergence of the invariant measure to its asymptotic shape. As a consequence, it is not possible to define generally valid protocols, apart from comparing the convergence of the LE when starting from significantly different initial conditions.
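The block procedure described above can be sketched for a map rather than the Rössler flow (an illustrative substitution of ours): the run is split into blocks of length T, the diffusion coefficient is estimated from the block-to-block fluctuations of the logarithm of the expansion factor, and the error is extrapolated to the full run length:

```python
import numpy as np

def le_with_error_bar(total_steps=200_000, block=1_000, a=1.4, b=0.3):
    """Largest LE of the Henon map together with a statistical error bar
    estimated from the diffusion of the log expansion factor."""
    x = np.array([0.1, 0.1])
    for _ in range(1_000):
        x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
    u = np.array([1.0, 0.0])
    blocks, s = [], 0.0
    for n in range(total_steps):
        u = np.array([-2 * a * x[0] * u[0] + b * u[1], u[0]])
        nrm = np.linalg.norm(u)
        s += np.log(nrm)
        u /= nrm
        x = np.array([1 - a * x[0] ** 2 + b * x[1], x[0]])
        if (n + 1) % block == 0:
            blocks.append(s)                # increment over an interval T = block
            s = 0.0
    blocks = np.asarray(blocks)
    lam = blocks.mean() / block
    D = blocks.var() / block                # sigma^2(T) ~ D T
    return lam, np.sqrt(D / total_steps)    # error extrapolated to the full run

lam, err = le_with_error_bar()
```

The key point is that no repetition of the simulation is needed: the same trajectory yields both the exponent and its error bar, provided the block length exceeds the correlation time.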

3.5.3 Near degeneracies

We now address a problem that may occur in the case of nearly degenerate Lyapunov exponents. A sufficiently long simulation (10⁷ time units) of a chain of Rössler oscillators (A.17) yields the set of exponents reported in Fig. 3.3, where one can recognise that there is just one positive exponent (see the inset) and that many LEs come in pairs (the latter are joined by segments to make the pairs easily identifiable). As will be further clarified, the presence of pairs is to be attributed to the (discrete) translational symmetry, which, in the case of spatially periodic solutions, manifests itself as an equivalent stability against sine and cosine perturbations. Here, we focus on the four almost identical exponents indicated by the arrow in the inset (i.e. on two nearly degenerate consecutive pairs). Long simulations (up to 10⁸ time units) reveal that there is a small difference between

Fig. 3.3  Lyapunov spectrum for a chain of N = 20 Rössler oscillators (A.17) with a = b = 0.1, c = 14 and ε = 2.


the two pairs (which correspond to −0.1009(0) and −0.1025(5)). We now show that this deviation is substantially a spurious effect, due to a subtle problem of numerical accuracy. In fact, it is easy to verify that the attractor is (for the parameter values as in Fig. 3.3) a perfectly homogeneous (synchronous) state, U_ℓ(t) = U(t). Accordingly, the Ansatz u_ℓ(t) = ũ^(k)(t) e^{2πikℓ/N} (where k = 0, …, N/2) diagonalises the evolution equation in tangent space (see Section 9.2.3 for a general treatment); i.e. the eigendirections of the linear problem are the Fourier modes. As a result, the dynamics of the kth mode is determined by three coupled linear equations for ũ^(k) := (x̃^(k), ỹ^(k), z̃^(k)):

x̃˙^(k) = −ỹ^(k) − z̃^(k) ,
ỹ˙^(k) = x̃^(k) + a ỹ^(k) + 2ε [cos(2πk/N) − 1] ỹ^(k) ,    (3.9)
z̃˙^(k) = z̃^(k) (X − c) + Z x̃^(k) .

As these equations have real coefficients, sine and cosine components are characterised by the same evolution, and this explains the existence of degenerate pairs in the spectrum. The only modes that are not characterised by this degeneracy are the 0th mode and the (N/2)th one. So, leaving aside the degeneracy, each Fourier mode is characterised by a triple of Lyapunov exponents. The numbers shown inside Fig. 3.3 close to the Lyapunov spectrum identify the corresponding Fourier modes. Long simulations reveal that the results obtained with the Fourier approach coincide with those obtained with the original method, except for the aforementioned pairs, where the Fourier approach leads to slightly different values, namely −0.1016(8) and −0.1018(9). The analysis of the Lyapunov vectors (see Chapter 4 for a detailed discussion of these vectors) helps to explain the origin of the differences. In Fig. 3.4 we plot the instantaneous

Fig. 3.4  Scalar product of the x component of the 8th Lyapunov vector (rescaled to norm 1) with the first (panel (a)) and eighth (panel (b)) Fourier mode versus time. The sum of the two terms is always equal to 1.


value of the scalar product of the x-component of the 8th Lyapunov vector (as obtained from the standard Gram-Schmidt orthogonalisation) with the 1st (x_8^{(1)}) and the 8th (x_8^{(8)}) Fourier mode: see the circles in the upper and lower panels, respectively. Ideally, the two scalar products should be equal to 1 and to 0, respectively, because these two modes correspond to different Lyapunov exponents. However, the intermittent behaviour reveals that the vector occasionally aligns along the "wrong" direction (as the sum of the two scalar products is always equal to one, no other direction is involved in the process). The reason can be attributed to the fluctuations of the finite-time Lyapunov exponent. If, for some time, there is an inversion between the Lyapunov exponent of the first pair and that of the second pair (i.e. the vector corresponding to the smaller average LE grows temporarily faster than the one corresponding to the larger exponent), the 8th Lyapunov vector, due to numerical inaccuracies, is attracted towards the currently more expanding (but least expanding on average) direction. This results in an erroneously larger value of the Lyapunov exponent. In fact, the Lyapunov value obtained from the GS analysis, equal to −0.01009(0), is larger than the true value −0.01016(8). The difference is small but reveals a source of inaccuracy.

In order to further clarify this point, let us study a simple model: a product of two-by-two matrices, constructed in such a way that the eigendirections are constant, but the expansion rates fluctuate. As a result, the two eigendirections temporarily exchange their order. More precisely, we integrate Eq. (2.5), where the Jacobian K is given by the matrix

K = \begin{pmatrix} s_+(t)\,p_2 + s_-(t)\,p_1 & -\sqrt{p_1 p_2}\,(s_+(t) - s_-(t)) \\ -\sqrt{p_1 p_2}\,(s_+(t) - s_-(t)) & s_+(t)\,p_1 + s_-(t)\,p_2 \end{pmatrix},   (3.10)

where p_1 and p_2 are two positive numbers such that p_1 + p_2 = 1, while s_+(t) and s_-(t) are two time-dependent functions. One can easily check that the eigenvectors of K have the same direction independently of the values of s_+ and s_-, and, therefore, the Lyapunov exponents are trivially λ_1 = ⟨s_+⟩ and λ_2 = ⟨s_-⟩. The presence of noise acting on the evolution of the first Lyapunov vector can mimic a controllable accuracy of the numerical computations. We have chosen to add a bounded noise with a flat distribution between −δ and δ. If we assume that s_± = a_± ∓ A sin(t/T) with a_+ > a_- and A > (a_+ − a_-)/2, it turns out that λ_1 > λ_2, but s_- becomes periodically (in some time interval) larger than s_+. Although such oscillations should not affect the value of the LE, we can see in Fig. 3.5a that the numerically computed LE varies with δ and may dramatically overestimate the expected value if δ > 10^{-7}. The reason is precisely that the corresponding Lyapunov vector is not always aligned along the correct direction. This can be seen in Fig. 3.5b, where the angle identifying the vector is plotted versus time for δ = 10^{-8}. The orientation tends to switch between a low value (−tan^{-1}√(p_1/p_2)), which corresponds to the expected direction of the largest Lyapunov vector, and a large value (tan^{-1}√(p_2/p_1)), which corresponds to the direction of the second eigenvector. It is remarkable to see that even though the final value of the Lyapunov exponent is quite correct for this value of δ, the alignment is often wrong.


Fig. 3.5 (a) Lyapunov exponent λ obtained from Eq. (3.10) versus the accuracy δ for p1 = 0.4, a± = ±0.1, A = 2, T = 25. (b) Orientation α of the first Lyapunov vector versus time.

3.6 Systems with discontinuities

In some physical contexts, a continuous dynamics may be occasionally interrupted by sudden jumps, due to discontinuities in the basic equations, kicks which are modelled by terms containing δ-functions, etc. This is typical, e.g., of billiards, where the main ingredients of the dynamics are collisions of a particle with either the boundary or another particle. Another setup where this phenomenon arises is that of neuron models, where the discontinuity is due to the idealisation of a short pulse (spike) as a δ-peak, whose absorption modifies the membrane potential instantly. Yet another example is that of stick-slip dynamics, where the discontinuity is induced by dry friction; many other examples can be found in engineering problems of machine dynamics (di Bernardo et al., 2008). In all such cases, the formalism for the computation of the Lyapunov exponents must be properly modified to account for the effect of the discontinuities. One might even doubt that the Lyapunov exponents are well defined in the presence of discontinuities. Fortunately, this is not the case; the reader interested in the details can consult Kunze (2000). Here we start by analysing the simplest context where this problem arises, namely that of a kicked dynamical system, as this also helps to introduce the proper notations. Let us consider a generic dynamical system,

\dot U = F(U, t),

(3.11)

and assume that its internal state undergoes an abrupt transition at time tn , U+ (tn ) = G(U− (tn )),

(3.12)

where the superscripts “−” and “+” mean that we refer to a time just before and just after the transition. In this context, the computation of the Lyapunov exponents can be easily generalised by treating the continuous and discontinuous contributions separately. It turns


out that the overall evolution in the tangent space results from the alternating application of two types of matrices,

P = \prod_{n=1}^{L} D(U^-(t_n))\, H_n\, Q_0,

where H_n is the Jacobian arising from the integration of the differential equation (3.11) from time t_{n-1} to time t_n, and the matrix D = ∂G/∂U, appearing through the linearisation of (3.12), describes the change of the perturbation at the abrupt transition. Accordingly, the only required care is to apply the QR decomposition to the properly assembled sequence of linear transformations. Notice, also, that the kicks do not need to be evenly spaced in time for this approach to work. When the amplitude and the direction of the kicks do not depend on the system configuration, D ≡ I and no care at all is required, since the discontinuity plays no role in the tangent space. Another simple situation is when the evolution between consecutive kicks can be explicitly solved. This happens, e.g., when the only nonlinearity is contained in the kicks, while the continuous-time evolution is linear. The kicked rotor is a popular model that belongs to this class. It is described by the Hamiltonian

H = \frac{P^2}{2} + K \cos Q \sum_{n=-\infty}^{+\infty} \delta(t - n),

where the components of the state vector U ≡ (Q, P) represent the position (Q) and the momentum (P), while the time scale is chosen so as to fix the time separation between kicks equal to 1. This model is sufficiently simple to allow for an exact integration of the equations of motion between consecutive kicks, since P stays constant while Q increases linearly. If we denote by Q(n) and P(n) the values of the variables after the nth kick, then just prior to the next kick these quantities take the values P^-(n+1) = P(n) and Q^-(n+1) = Q(n) + P(n). After the kick, Q is unchanged, while P^+(n+1) = P^-(n+1) + K sin Q^-(n+1). Altogether, one obtains the so-called Chirikov-Taylor standard map (A.6),

Q(n+1) = Q(n) + P(n),
P(n+1) = P(n) + K \sin(Q(n) + P(n)),   (3.13)

which allows the use of standard methods of LE analysis for discrete-time systems.

Another class of models, requiring a more delicate handling, is the one where the occurrence of the discontinuity is not imposed externally (as in the previous case) but is self-determined, such as in the collision of two disks moving in a two-dimensional plane. In full generality, the condition for the occurrence of a discontinuity can be expressed mathematically as a scalar equality h(U) = 0, which defines a codimension-one surface, the crossing of which triggers the discontinuity. In the collision of two circular disks, the condition is obtained by imposing that the distance between the centres of the two disks be equal to the sum of their radii.

In this more general class of models, the idea of breaking the time evolution into continuous and discontinuous components is still meaningful, but one must include the


Fig. 3.6 Sketch of the evolution of two nearby trajectories (U and V) at a discontinuity represented by a collision with the line h = 0 (t_u and t_v denote their collision times). Here u^- and u^+ represent the separation just before and after the collision.

dependence of the collision time on the trajectory. A method to cope with this case was independently developed by Müller (1995) and Dellago and Posch (1995). Here we follow the former approach, as it is more appropriate for a general treatment.

Let the scalar constraint h(U, t_u) = 0 define the condition for a discontinuity to occur at time t_u. For the sake of generality, we assume that the condition may explicitly depend on time. Moreover, let F and \hat F denote the velocity field before and after the collision, respectively (again, for the sake of generality, we assume that the two fields may differ, as in structurally variable systems (Müller, 1995)). Next, we consider a perturbed trajectory V(t) = U(t) + u(t), where u is an infinitesimal quantity. In general, it will hit the surface h(U, t_v) = 0 at an infinitesimally different time t_v = t_u + τ. Without loss of generality, we assume that τ > 0, as pictorially depicted in Fig. 3.6 (if this is not the case, it is sufficient to exchange the two trajectories). The task consists in expressing the "final" value u^+,⁴ once both trajectories have undergone the discontinuity, as a function of the initial condition u(0). Until the unperturbed trajectory reaches the surface h(U, t_u) = 0, the perturbation evolves in a smooth way, and one can write

u_- = H(t_u) u(0),

⁴ In order to keep the notation as simple as possible, whenever there is no ambiguity, a subscript "−" ("+") implies that the variable is computed just before (after) the discontinuity of the unperturbed trajectory at time t_u. Superscripts refer in the same way to the second discontinuity, that of the perturbed trajectory.


where H is the standard Jacobian integrated over the time t_u. The next step consists in determining the perturbation amplitude at the "final" time t_v, by separately following the evolution of the reference and of the perturbed trajectory, respectively. On the one hand, by definition of the model, one can write U_+ = G(U_-, t_u). Moreover, since τ is infinitesimal (and the evolution is smooth), U^+ can be determined by expanding its evolution rule up to the first order only,

U^+ = U_+ + τ \hat F_+.   (3.14)

On the other hand, the perturbed trajectory evolves smoothly at first and then undergoes the discontinuity. As for the first part,

V^- = V_- + τ F(V_-) = U_- + u_- + τ F_-,   (3.15)

where we have replaced F(V_-) with F_- ≡ F(U_-) in the last term, since their difference would give rise only to higher (second) order corrections. After the discontinuity has occurred,

V^+ = G(V^-, t_u + τ) ≈ G(U_- + u_- + τ F_-, t_u + τ) ≈ U_+ + D_-(u_- + τ F_-) + \frac{∂G}{∂t} τ,   (3.16)

where we have again used that U_+ = G(U_-) and we assume, for the sake of generality, that G may have an explicit dependence on time. Finally, subtracting Eq. (3.14) from Eq. (3.16), we obtain

u^+ = D_- u_- + τ \left( D_- F_- − \hat F_+ + G_t \right).   (3.17)

The first term on the r.h.s. is the same as in the previous non-autonomous regime, when the sequence of discontinuities was externally imposed at certain times rather than being self-determined during the evolution. The presence of an additional term, proportional to τ, reveals that the time shift indeed contributes to the tangent-space dynamics and must be properly taken into account. In order to complete the calculation, it is necessary to determine τ. This can be done by imposing the condition

h(V^-, t_u + τ) = 0.   (3.18)

By inserting Eq. (3.15) into Eq. (3.18) and noticing that h(U_-, t_u) = 0 (this identifies the first discontinuity), one obtains

\left(\frac{∂h}{∂U}\right)_- \cdot [u_- + τ F_-] + \left(\frac{∂h}{∂t}\right)_- τ = 0,

where the dot denotes a scalar product. As a result, the time separation between the two discontinuities is given by the linear transformation

τ = − \frac{\left(\frac{∂h}{∂U}\right)_- \cdot u_-}{\left(\frac{∂h}{∂U}\right)_- \cdot F_- + \left(\frac{∂h}{∂t}\right)_-}.   (3.19)


Eqs. (3.17, 3.19) provide a complete solution to the problem. One can formally combine them into a single compact relation,

u^+ = S u_-,

(3.20)

where the matrix S is defined as

S = D_- − \frac{\left( D_- F_- − \hat F_+ + \frac{∂G}{∂t} \right) \left(\frac{∂h}{∂U}\right)_-^{\top}}{\left(\frac{∂h}{∂U}\right)_- \cdot F_- + \left(\frac{∂h}{∂t}\right)_-}

and the superscript ⊤ means that the corresponding object is to be considered as a row vector, so that the numerator is altogether a matrix. The Lyapunov exponents can then be computed by alternating the application of the H and S matrices. Notice also that "−"s and "+"s appear only as subscripts in equation (3.20), indicating that the required variables have to be computed just before and after the jump experienced by the reference trajectory, as should be expected. From now on, since there is no ambiguity, the up/down position of the −/+ symbols will be selected only to keep notations compact and simple.

Finally, we offer a different justification of Eq. (3.17); for the sake of simplicity, we now drop the explicit time dependence. Given a generic trajectory and a perturbed one, they are typically characterised by a different h value at the same time. Since the discontinuity arises when the same zero h value is attained, it is necessary to "synchronise" the two trajectories; this can be done by shifting the perturbed trajectory along its own orbit,

\hat u = u + τ F,

(3.21)

where τ is given by Eq. (3.19) and the derivatives are computed when the reference trajectory is characterised by a given but generic h value. If the condition h = 0 holds, one is entitled to apply the map G, which, in tangent space, amounts to the linear transformation \hat{\hat u} = D \hat u. As a last step, the trajectory must be "unfolded", bringing it back to the same time as that of the reference one. This amounts to repeating the step (3.21) with an opposite sign for the time τ and the proper velocity field at the new point in phase space (after the discontinuity has occurred). By combining the three steps together, one obtains Eq. (3.17). The presence of the first and the last step represents the difference with respect to the non-autonomous case, when the discontinuity occurs at the same time for all trajectories. In practice, the first (last) step can be performed at any time before (after) the discontinuity, provided that across the discontinuity the equations are integrated using h as the independent variable (see Chapter 2); in fact, this approach would allow maintaining the same h value, once a trajectory has been synchronised. Even more, if h turned out to be a proper global phase, one could avoid the first and last steps altogether, thus reducing the treatment of the discontinuity to that of the forced case.

An alternative approach consists in reducing the continuous-time evolution to a discrete-time one in between two consecutive Poincaré sections. In this case, the initial point is assumed to lie on the surface h = 0, which means that we deal with an (N − 1)-dimensional phase space. As in the previous case, the perturbation u is smoothly evolved


until time t_u, synchronised with the reference trajectory and then evolved according to the transformation D. The advantage of this method is that one does not need the third, unfolding step, since the trajectory must, by definition, lie on the surface h = 0. This approach has been implemented in neural network dynamics (Zillmer et al., 2006). Notice also that in large systems, where each non-smooth event involves just a few variables, the advantage is lost, since, in the absence of the D transformation, the third step compensates the first one; i.e. the variables not involved need not be touched in the continuous-time scheme.
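As a minimal illustration, Eqs. (3.17) and (3.19) can be condensed into a single routine (a sketch of ours, with hypothetical names; it assumes autonomous fields unless the optional time derivatives are supplied, and plain Python lists as arguments):

```python
def jump_tangent_map(D, F_minus, F_plus_hat, grad_h, u_minus, dh_dt=0.0, G_t=None):
    """Map a tangent vector across a self-determined discontinuity:
    Eq. (3.19) for tau, then Eq. (3.17): u+ = D u- + tau (D F- - F+ + G_t)."""
    n = len(u_minus)
    if G_t is None:
        G_t = [0.0] * n
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    matvec = lambda M, v: [dot(row, v) for row in M]
    # Eq. (3.19): time shift between the two crossings
    tau = -dot(grad_h, u_minus) / (dot(grad_h, F_minus) + dh_dt)
    DFm = matvec(D, F_minus)
    Du = matvec(D, u_minus)
    return [Du[i] + tau * (DFm[i] - F_plus_hat[i] + G_t[i]) for i in range(n)]

# consistency check: a perturbation aligned with the flow, u- = F-,
# must be mapped onto the post-jump flow direction F+ (here tau = -1)
D = [[0.0, 0.0], [0.0, 1.0]]
F_m, F_p = [0.3, -0.7], [-1.1, 0.4]
print(jump_tangent_map(D, F_m, F_p, [1.0, 0.0], F_m))   # -> equals F_p
```

The check exploits a general property of Eq. (3.17): for time-independent h and G, a perturbation along the flow corresponds to a pure time shift of the trajectory, so it must remain along the flow after the jump.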

3.6.1 Pulse-coupled oscillators

A large class of models where discontinuities may arise is that of pulse-coupled oscillators, where each oscillator is described by a phase-like variable which, upon reaching a threshold, is reset and, at the same time, a δ-pulse is sent to the connected oscillators. As an example, we now discuss two leaky integrate-and-fire neurons,

\dot U_1 = a − U_1 − g \sum_j δ(t − t_j^{(2)}),   (3.22)
\dot U_2 = a − U_2 − g \sum_j δ(t − t_j^{(1)}).   (3.23)

When the variable U_1 (U_2) reaches the value 1 (at times t_j^{(1,2)}), it is reset to zero and, simultaneously, sends a spike to the other oscillator. The discontinuous transformation (we discuss the case when the first neuron reaches the threshold, the other case being symmetric) is G(U_1, U_2) = (0, U_2 − g), so that the Jacobian is

D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

As a result,

D F_- − F_+ = \begin{pmatrix} −a \\ −g \end{pmatrix}.

The discontinuity is identified by the condition h ≡ U_1 − 1 = 0, so that ∂h/∂U = (1, 0) and the time shift is

τ = − \frac{u_1^-}{a − 1},

while the final transformation reads

u_1^+ = − \frac{a}{1−a}\, u_1^-,
u_2^+ = u_2^- + \frac{g}{a−1}\, u_1^-.   (3.24)

In the case of large ensembles of neurons, at each spike emission, it is sufficient to update the perturbations of the involved variables (the one that is reset to zero and those of the


neurons which receive the spike). This approach has been implemented, e.g., by Monteforte and Wolf (2010) to study the chaotic properties of various neural networks. Finally, notice that additional complications may arise if the neurons are assumed to undergo a refractory period after the reset, during which their potential is held at the reset value (Zhou et al., 2010).
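The closed-form transformation (3.24) can be cross-checked against the general recipe of Eqs. (3.17) and (3.19); the sketch below (our own naming and parameter values) implements both and compares them numerically:

```python
def lif_jump_closed_form(u1, u2, a, g):
    """Eq. (3.24): tangent map at a spike of neuron 1."""
    return (-a / (1.0 - a) * u1, u2 + g / (a - 1.0) * u1)

def lif_jump_general(u1, u2, a, g, U2):
    """Same map, assembled from Eqs. (3.17) and (3.19)."""
    F_minus = (a - 1.0, a - U2)        # field just before the spike (U1 = 1)
    F_plus = (a, a - (U2 - g))         # field just after the reset (U1 = 0)
    tau = -u1 / F_minus[0]             # grad h = (1, 0), Eq. (3.19)
    Du = (0.0, u2)                     # D = diag(0, 1) applied to u-
    DFm = (0.0, F_minus[1])            # D applied to F-
    return (Du[0] + tau * (DFm[0] - F_plus[0]),
            Du[1] + tau * (DFm[1] - F_plus[1]))

print(lif_jump_closed_form(0.01, -0.02, 1.3, 0.2))
print(lif_jump_general(0.01, -0.02, 1.3, 0.2, 0.4))   # same numbers
```

Note that the general route needs the current value of U_2 (through the fields F_- and F_+), while the dependence cancels in the final result, as it must for a closed-form jump rule.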

3.6.2 Colliding pendula

In this section we consider a system of two pendula of equal mass in a gravitational field (see Fig. 3.7a). If they are sufficiently long, they can be viewed as moving horizontally in a harmonic potential. Additionally, the left pendulum is subject to a sinusoidal forcing and to a viscous damping. Altogether, in between collisions, the equations of motion are (here, U = (x_1, x_2, v_1, v_2))

\dot x_1 = v_1,
\dot x_2 = v_2,
\dot v_1 = −α x_1 − γ v_1 + A \cos ωt,   (3.25)
\dot v_2 = −α(x_2 − δ),

where γ is the strength of the viscous drag, α is the amplitude of the restoring force and δ is the spatial separation between the two pendula. Whenever x_1 = x_2, the two pendula undergo an elastic collision, which induces a perfect exchange of their momenta, while the coordinates remain unchanged. The discontinuous transformation G is linear and characterised by the following 4 × 4 matrix

D = \begin{pmatrix} I & 0 \\ 0 & E \end{pmatrix},

Fig. 3.7 (a) Two pendula: pendulum 1 is modulated and forces pendulum 2. (b) The chaotic dynamics exhibited by the second pendulum (position x2 and momentum v2 of the second particle are reported) for α = A = δ = 1, ω = 1.05 and γ = 0.13 (see Eq. (3.25)).

where I and 0 are the identity and null 2 × 2 matrices, while

E = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

v ⎜ − v ⎟ ⎟ DF− − F+ = ⎜ ⎝ C + γ v− ⎠ , 2 −C − γ v− 1 ⎛

(3.26)

− where v = v− 1 −v2 and C = α−A cos ωtu . The discontinuity is identified by the condition h(U) = x1 − x2 = 0, so that hU = (1, −1, 0, 0) and the infinitesimal time shift is − u− 1 − u2 . (3.27)

v By inserting Eqs. (3.26, 3.27) into Eq. (3.17), one finally obtains the linear transformation induced by the discontinuity:

τ =−

− u+ 1 = u2 ,

− u+ 2 = u1 ,

u+ 3 u+ 4

= =

u− 4 u− 3

− − (C + γ v− 2 )(u1 − + (C + γ v− 1 )(u1

(3.28) − u− 2 )/ v, − u− 2 )/ v.

When the oscillations of both pendula are small enough, there are no collisions and the dynamics is ordered: the first pendulum oscillates with the period of the forcing term, while the second one follows its natural frequency. However, as soon as collisions come into play, a chaotic dynamics may be induced by the nonlinear character of the kicks, as shown in Fig. 3.7b. The implementation of the algorithm described above shows that the first LE is positive, λ_1 ≈ 0.0084. It is instructive to notice that the correct implementation is crucial even from a qualitative point of view: if one disregarded the collisions, the maximum Lyapunov exponent would vanish, while a treatment analogous to the non-autonomous case (i.e., without synchronisation and unfolding) would even yield a negative result (λ_1 ≈ −0.022). Finally, notice that in the case of many particles, it is sufficient to transform only the variables of the particles involved in the given collision. A chain of elastically colliding harmonic oscillators has been investigated by Sano and Kitahara (2001), while a Lorentz gas has been studied by Dellago and Posch (1995).
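The collision rule (3.28) is straightforward to implement; the sketch below (our own code, with parameter names following Eq. (3.25)) also encodes the velocity fields just before and after a collision, so that the identity "flow direction maps to flow direction" can serve as a consistency check:

```python
import math

def pendula_fields(x, v1, v2, t, alpha=1.0, gamma=0.13, A=1.0, omega=1.05, delta=1.0):
    """Velocity fields of Eq. (3.25) just before and just after an
    elastic collision at x1 = x2 = x (momenta are exchanged)."""
    F_minus = [v1, v2, -alpha * x - gamma * v1 + A * math.cos(omega * t),
               -alpha * (x - delta)]
    F_plus = [v2, v1, -alpha * x - gamma * v2 + A * math.cos(omega * t),
              -alpha * (x - delta)]
    return F_minus, F_plus

def collision_tangent_map(u, v1, v2, t, alpha=1.0, gamma=0.13, A=1.0,
                          omega=1.05, delta=1.0):
    """Eq. (3.28): linear transformation of a perturbation at a collision."""
    C = alpha * delta - A * math.cos(omega * t)
    s = (u[0] - u[1]) / (v1 - v2)          # (u1 - u2) / delta_v
    return [u[1], u[0],
            u[3] - (C + gamma * v2) * s,
            u[2] + (C + gamma * v1) * s]

Fm, Fp = pendula_fields(0.7, 0.5, -0.2, 3.0)
print(collision_tangent_map(Fm, 0.5, -0.2, 3.0))   # -> equals Fp
```

A perturbation along the flow direction corresponds to a pure time shift of the whole trajectory; it is therefore mapped, by Eq. (3.28), exactly onto the post-collision flow direction.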

3.7 Lyapunov exponents from time series

The methods discussed in the previous sections for the computation of the Lyapunov exponents assume that the evolution rule is known (either as a differential equation or as a recursive map). In typical experimental contexts, however, the equations of motion


are, at best, known only approximately. It is therefore desirable to develop methods for the determination of the LEs directly from the time series of suitable observables. LEs are dynamical invariants; i.e. they are independent of the variables chosen to describe a given physical system. It is, however, necessary to access sufficiently many variables to reconstruct the underlying dynamics. Takens (1981) proposed an "embedding" technique, which allows proceeding even when a single scalar observable is available. Given a sequence {U(iτ)} of measurements of a D-dimensional attractor, one can construct the m-tuple

U_i^{(m)} = (U(iτ), U((i+1)τ), \ldots, U((i+m−1)τ)),

where, for the sake of simplicity, we have assumed that the time separation between consecutive elements coincides with the sampling time τ. Accordingly, the original scalar time series is embedded into R^m. If m > 2D, the attractor itself is faithfully unfolded; i.e. distinct points are parametrised by different m-tuples. As a result, the corresponding evolution rule can be expressed as a mapping from time kτ to (k+1)τ,

U_{k+1}^{(m)} = F^{(m)}[U_k^{(m)}],

where F^{(m)} denotes the unknown transformation of R^m. The corresponding evolution in tangent space is ruled by the recursive equation

u_{k+1}^{(m)} = \frac{∂F^{(m)}}{∂U^{(m)}}\, u_k^{(m)} ≡ J_k^{(m)} u_k^{(m)},

where the Jacobian has the following simple (companion) structure

J_k^{(m)} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ a_1 & a_2 & a_3 & \cdots & a_m \end{pmatrix}.

(m)

(m)

(m)

Un+1 − Uk+1 = Jk (U(m) n − Uk ) + h.o.t. To the extent that the higher order terms (h.o.t.) can be neglected, this equation tells us that the Jacobian can be reconstructed from the evolution of nearby points. In principle, m nontrivial components of the Jacobian can be determined by identifying and following in time m neighbours of the reference point U(m) (k). In practice, it is more reliable to identify a larger number of neighbours and thereby proceed with a least-square fit by minimising   (m) 2 U((n + m)τ ) − U((k + m)τ ) − a · (U(m) Sk = n − Uk ) , n

where the sum is restricted to all points n ≠ k that are close enough to the reference point k.

In some cases the computation of the LEs is affected by relatively large errors. This is particularly true for the negative exponents, as they refer to directions that are poorly explored by the dynamics. Some improvements can be obtained by adopting a series of precautions, such as selecting a not-too-small value of τ (to avoid Jacobian eigenvalues too close to 1, which degrade the accuracy) or discarding temporally close points (which may bias the neighbour statistics). The reader interested in such details can refer to Kantz and Schreiber (2004).

A more fundamental difficulty arises from the fact that m may be larger than the dimension of the space actually spanned by the invariant measure. As a result, the reconstructed dynamics is characterised by a certain number of additional "spurious" exponents, associated with directions that are everywhere transversal to the attractor. They have no physical meaning, as they do not correspond to any degree of freedom but simply follow from the selection of variables adopted to represent the underlying attractor. A theoretical study performed in the limit of infinitesimal perturbations suggests that the spurious exponents are suitable combinations of the true ones (Sauer et al., 1998). This property, however, breaks down for finite perturbations, even small ones. More robust information can be extracted from the local direction of the corresponding covariant Lyapunov vectors (see Chapter 4 for their definition). In fact, so long as a given Lyapunov vector is transversal to the invariant measure, the corresponding LE is spurious and has thereby to be discarded. This idea, first proposed by Brown et al. (1991), has been revisited by Kantz et al. (2013). Here, we illustrate the approach with reference to the simple Hénon map (A.4) for the standard parameter values a = 1.4 and b = 0.3.
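A minimal, self-contained sketch of the procedure (our own code, not the one used for the figures below): an m = 2 delay embedding of the Hénon x-series, a least-squares fit of the last row of the companion Jacobian over the nearest neighbours, and accumulation of the tangent-vector growth. All names and parameter choices are ours:

```python
import math, heapq

def henon_x_series(n, a=1.4, b=0.3):
    """Scalar observable x of the Henon map (transient discarded)."""
    x_prev, x = 0.0, 0.1
    out = []
    for _ in range(n + 100):
        x_prev, x = x, 1.0 - a * x * x + b * x_prev
        out.append(x)
    return out[100:]

def largest_le_from_series(s, n_jac=300, q=15):
    """Largest LE from a scalar series via local least-squares fits of
    the companion-Jacobian row (a1, a2) in an m = 2 embedding."""
    N = len(s) - 2
    pts = [(s[i], s[i + 1]) for i in range(N)]
    v, log_sum = (1.0, 0.0), 0.0
    for k in range(n_jac):
        # q nearest neighbours, discarding temporally close points
        nbrs = heapq.nsmallest(q, (((pts[n][0] - pts[k][0]) ** 2 +
                                    (pts[n][1] - pts[k][1]) ** 2, n)
                                   for n in range(N) if abs(n - k) > 3))
        a11 = a22 = 1e-10          # tiny ridge for numerical stability
        a12 = b1 = b2 = 0.0
        for _, n in nbrs:
            du1 = pts[n][0] - pts[k][0]
            du2 = pts[n][1] - pts[k][1]
            dy = s[n + 2] - s[k + 2]
            a11 += du1 * du1; a12 += du1 * du2; a22 += du2 * du2
            b1 += du1 * dy;   b2 += du2 * dy
        det = a11 * a22 - a12 * a12       # solve the 2x2 normal equations
        c1 = (b1 * a22 - b2 * a12) / det
        c2 = (a11 * b2 - a12 * b1) / det
        v = (v[1], c1 * v[0] + c2 * v[1])  # apply the companion Jacobian
        norm = math.hypot(v[0], v[1])
        log_sum += math.log(norm)
        v = (v[0] / norm, v[1] / norm)
    return log_sum / n_jac

lam = largest_le_from_series(henon_x_series(3000))
print(lam)   # roughly lambda_1 ~ 0.42
```

Here m = 2 coincides with the true dimension, so no spurious exponents appear; embedding with larger m (as done below, m = 5) is what generates them.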
A time series containing N = 6 × 10^5 points is used to embed the attractor in a space of dimension m = 5, while the local Jacobian is reconstructed by using all neighbours within a ball of radius r = 0.001. The Jacobians are thereby multiplied in a standard way to determine five Lyapunov exponents (after discarding a suitable transient). The five values correspond to the full circles in Fig. 3.8 (see the x coordinate); the dashed lines mark the true LEs, while the dotted-dashed lines correspond to the linear combinations that are

Fig. 3.8 Average angles between covariant vectors and the invariant measure – obtained from a five-dimensional reconstruction of the Hénon map – versus the actual value of the Lyapunov exponents. (Data: courtesy of H. Kantz, G. Radons and H. Yang.)


Fig. 3.9 Rescaled singular values σ_i²/r² for the Hénon map (A.4) in boxes of radius r = 0.001. (Data: courtesy of H. Kantz, G. Radons and H. Yang.)

predicted by the theory (Sauer et al., 1998). One can see that there exists a slightly negative LE (≈ −0.1) that is not predicted by the theory. As a next step, the covariant vector corresponding to each LE is estimated (see Chapter 4 for a description of the method). Simultaneously, the angle θ between the vector itself and the manifold locally spanned by the experimental points is calculated. The underlying idea, originally proposed by Broomhead et al. (1987), is to locally approximate the manifold covered by the attractor with a linear subspace and to thereby identify the directions actually covered by the attractor. In practice, given the points which fall within each prescribed ball, a singular value decomposition is applied to determine the variance σ² along the main axes. The singular values are thereby rescaled and averaged over the entire attractor. The results are plotted in Fig. 3.9. There, we see that the last three are indeed negligible (they are non-zero because of the nonlinearities, which induce a bending of the linear subspace; moreover, in real applications we can expect the presence of observational noise, which induces fluctuations along any direction, whether spurious or not). In this case, it is clear that the physically relevant invariant measure is restricted to the first two principal axes. One can therefore locally determine the angle θ between the ith covariant vector and the subspace identified by the first two principal axes. The average absolute value ⟨|θ|⟩ is reported in Fig. 3.8 (see the vertical coordinate). There we see that only two of them are close to zero and therefore correspond to physically relevant directions. This is confirmed by the actual values of the LEs, which coincide with the known exponents of the Hénon map.
The other three exponents are spurious: these include the two values that could be identified as linear combinations of the true ones and the intermediate negative exponent that could not be spotted otherwise. In full generality, it should be stressed that this criterion also suffers some problems when applied to real-world data. In particular, we refer to the identification of the directions that are truly spanned by the invariant measure. Already in the simple setup of the Hénon map, in the absence of observational noise, the average angle between the covariant vector associated with the negative exponent and the attractor manifold is not as small as one might hope for. The determination of whether a direction is covered or not by a given attractor is certainly more tricky for truly experimental data.

4 Lyapunov vectors

The linear stability analysis of fixed points involves the computation not only of eigenvalues but also of eigenvectors. Equivalently, a complete characterisation of chaotic or stochastic dynamical systems requires going beyond the knowledge of the LEs and including the identification of the (local) orientation of stable and unstable manifolds. An eigenvector of a given linear transformation can be identified as a direction that is mapped onto itself. This definition cannot be straightforwardly extended to contexts where different transformations are applied at different times, as no invariant direction is expected to exist. It is, however, possible to rephrase the definition in such a way that a generalisation becomes possible. Eigenvectors can, in fact, be viewed as the only directions which, if iterated forwards and backwards in time, are accompanied by an expansion rate that coincides with one of the eigenvalues of the given matrix (here, for the sake of simplicity, we assume that no complex eigenvalues exist). In this definition, the very fact that the direction itself is invariant becomes a secondary property. Accordingly, the definition can be extended to any sequence of matrices, by requiring that the observed average expansion rate coincide with one of the LEs of the given system. Such directions, often referred to as covariant Lyapunov vectors in the physics literature, are nothing but the vectors E_k introduced in Section 2.3.2 with reference to the Oseledets splitting. They had been introduced already by Oseledets (1968) and later formalised as tangent directions of invariant manifolds (Ruelle, 1979), but for many years they escaped the attention of researchers, probably because of the lack of effective algorithms to determine them. The first computation of covariant vectors was performed in the context of time-series analysis (see Section 3.7) (Bryant et al., 1990; Brown et al., 1991).
Since then, covariant vectors have been occasionally used as a tool to determine Lyapunov exponents via a transfer-matrix approach (Politi et al., 1998) or to characterise spatio-temporal chaos (Kockelkoren, 2002). Only after the development of two effective computational methods (Ginelli et al., 2007; Wolfe and Samelson, 2007) was the usefulness of covariant vectors eventually recognised. Here, we provide a heuristic discussion of the subject, while a more formal introduction is given in Section 4.1. For simplicity, we assume that all of the LEs of an N-dimensional system are different. We start by considering a generic initial perturbation. One expects it to have components along all of the E_k directions; therefore its forward evolution is dominated by the largest Lyapunov exponent; i.e. after some transient, the perturbation aligns along the most unstable direction E_1. Accordingly, the determination of E_1 is a trivial task, as it "attracts" generic initial conditions. Analogously, a generic perturbation, when iterated backwards in time, aligns along the most "expanding" direction in negative times, i.e., along E_N, which corresponds to the smallest Lyapunov exponent of the original system. Therefore, if N = 2, this completes the task of finding the covariant Lyapunov


vectors. Notice that if one is interested in knowing the two covariant vectors in a given point of phase space, it is necessary to know a long trajectory, which includes the point under consideration and extends into the far past as well as into the far future. In higher-dimensional systems, new approaches are necessary. A possible strategy is as follows. As discussed in Chapter 3, the second Lyapunov exponent λ2 can be determined by following two linearly independent vectors and orthogonalising them to avoid the alignment of both of them along E1 . As a result, one obtains two vectors that span the plane (E1 , E2 ), where the second vector is not aligned along E2 but is orthogonal to E1 . This information can, however, be used to determine E2 : it is sufficient to consider an arbitrary vector in the plane (E1 , E2 ) and iterate it backwards. In fact, because E2 is the least expanding direction in this restricted space, it “attracts” the evolution of a generic initial condition when iterated backwards in time, where it becomes the most expanding direction. Thus, the direction of the second covariant vector can be determined by combining forward and backward iterations of suitable perturbation vectors. This is the core of the dynamical algorithm that is extensively illustrated later in this chapter (together with other approaches).
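For N = 2 the strategy can be demonstrated with a few lines of code (a toy of ours that uses the exact Jacobians of the Hénon map rather than reconstructed ones, and not the general algorithm discussed later): forward iteration yields E_1 and λ_1, while backward iteration yields the direction of E_2, whose forward growth rate gives λ_2.

```python
import math

A_H, B_H = 1.4, 0.3

def henon_step(x, y):
    return 1.0 - A_H * x * x + y, B_H * x

def jac(x):
    # Jacobian of the Henon map at (x, y)
    return ((-2.0 * A_H * x, 1.0), (B_H, 0.0))

def matvec(J, v):
    return (J[0][0] * v[0] + J[0][1] * v[1], J[1][0] * v[0] + J[1][1] * v[1])

def matsolve(J, w):
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((J[1][1] * w[0] - J[0][1] * w[1]) / det,
            (-J[1][0] * w[0] + J[0][0] * w[1]) / det)

# trajectory and Jacobians (transient discarded)
T = 3000
x, y = 0.1, 0.0
for _ in range(100):
    x, y = henon_step(x, y)
Js = []
for _ in range(T):
    Js.append(jac(x))
    x, y = henon_step(x, y)

# forward iteration: a generic vector aligns with E1; its growth -> lambda_1
v, l1 = (1.0, 0.7), 0.0
for J in Js:
    v = matvec(J, v)
    n = math.hypot(*v)
    l1 += math.log(n)
    v = (v[0] / n, v[1] / n)
l1 /= T

# backward iteration: a generic vector aligns with E2; its forward
# growth rate then gives lambda_2
w = (0.3, 1.0)
ws = [None] * T
for k in range(T - 1, -1, -1):
    w = matsolve(Js[k], w)
    n = math.hypot(*w)
    w = (w[0] / n, w[1] / n)
    ws[k] = w

K = T - 100   # skip the stretch where backward alignment is incomplete
l2 = sum(math.log(math.hypot(*matvec(Js[k], ws[k]))) for k in range(K)) / K
print(l1, l2)   # roughly 0.42 and -1.62 (and l1 + l2 = ln b)
```

Since |det J| = b at every step, the two rates must add up to ln b; this provides a simple sanity check on the backward construction.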

4.1 Forward and backward Oseledets vectors

We will start by revisiting the definition of Lyapunov exponents given in Chapter 2. As, here, we consider backward and forward transformations, it is necessary to slightly change the notations. So, let us denote with H(t′, t″) the Jacobian matrix ruling the evolution in tangent space from time t′ to time t″, along the trajectory U(t). From Chapter 2 and, in particular, from Eq. (2.12), the Lyapunov exponents λ_j can be identified from the eigenvalues of the matrix M^+(t′, t″) = H^T(t′, t″)H(t′, t″) (in the limit (t″ − t′) → +∞),

M^+(t′, t″) ξ^(j)(t′, t″) = e^{2λ_j(t″ − t′)} ξ^(j)(t′, t″),   (4.1)

where the eigenvectors ξ^(j)(t′, t″) belong to the tangent space in U(t′). In fact, the action of M^+(t′, t″) corresponds to the application of H(t′, t″), which maps the initial vector from time t′ to time t″, followed by the application of H^T(t′, t″), which maps it back to time t′ (the superscript "+" in M^+ helps to remind us that the matrix operates in the future). The eigenvectors ξ^(j)(t′, t″) are nothing else than the right singular vectors of the matrix H(t′, t″). Similarly, one can define the left singular vectors ψ^(j) of H(t′, t″) from the eigenvalue equation

M^−(t′, t″) ψ^(j)(t′, t″) = H(t′, t″)H^T(t′, t″) ψ^(j)(t′, t″) = e^{2λ_j(t″ − t′)} ψ^(j)(t′, t″).   (4.2)

The vectors ψ^(j) are associated to the tangent space in U(t″), as also confirmed by the following standard equations

H(t′, t″) ξ^(j)(t′, t″) = e^{λ_j(t″ − t′)} ψ^(j)(t′, t″),
H^T(t′, t″) ψ^(j)(t′, t″) = e^{λ_j(t″ − t′)} ξ^(j)(t′, t″),   (4.3)

which show how the two vectors are related to one another.
Since the matrix M^+(t′, t″) is symmetric, its eigenvectors are mutually orthogonal, but they depend on the coordinates chosen to represent the dynamical evolution. This indeterminacy disappears in the limit t″ → ∞, when the right singular vectors converge to the so-called forward Oseledets vectors v_+^(j)(t′) = ξ^(j)(t′, +∞). This tells us that the Oseledets vectors are well-defined objects attached to the initial point U(t′) (see Ershov and Potapov (1998) for a rigorous analysis). Similarly, with reference to the matrix M^−(t′, t″), if one takes the limit t′ → −∞, the left singular vectors converge to the backward Oseledets vectors v_−^(j)(t″) = ψ^(j)(−∞, t″).
Rigorously speaking, the Oseledets vectors can be uniquely identified only in the absence of degeneracies, i.e. when no two (or more) consecutive LEs are equal to one another. When degeneracies are present, one should, more properly, refer to suitable subspaces of dimension equal to the level of degeneracy. Since taking into account the presence of degeneracies would involve the introduction of heavier notations, here, for the sake of simplicity, we assume they are absent (the reader interested in a more complete treatment can consult Ginelli et al. (2013)).
As a result, in any point U(t) of the phase space, one can define two sets of Oseledets vectors: by considering the time interval (t′, t) [(t, t″)] and taking the limit t′ → −∞ [t″ → +∞] one obtains v_−^(j)(t) [v_+^(j)(t)]. Moreover, from the relation between the left and right singular vectors (4.3), there exists a connection between them,

v_−^(j)(t″) = H(t′, t″) v_+^(j)(t′),

up to a scaling factor, if t″ − t′ is large enough.
In practice, as already noted in the previous chapter, it is not convenient to determine Lyapunov exponents by diagonalising the matrices in Eqs. (4.1, 4.2). It is preferable to implement some form of QR decomposition, which also generates orthogonal bases. The bases obtained by following the evolution forwards in time (with the help of either the Gram-Schmidt orthogonalisation or Householder reflections) converge to the backward Oseledets vectors (in the same phase points). This seemingly odd relationship is due to the fact that the "backward" vectors are termed as such because they depend on the past, so they are naturally obtained by reaching a given point from previous states. Analogously, if one generates a long trajectory in the far future and then moves backwards in tangent space, implementing again a QR decomposition up until the point of interest, the forward Oseledets vectors are obtained (although their identification requires moving backwards!).
The most serious criticism that can be made against both sets of vectors is that, with the exception of the first vector, they are not covariant, i.e. v_±^(j)(t″) ≠ H(t′, t″) v_±^(j)(t′). This can be appreciated in Fig. 4.1, where one can compare the evolution of the first two backward vectors. There, we see that v_−^(1)(n + 1) is, by construction, nothing but a rescaled version of H(t_n, t_{n+1}) v_−^(1)(n); so it is true that it is mapped onto itself (except for a scaling factor). The vector v_−^(2)(n + 1) is instead obtained after subtracting from H(t_n, t_{n+1}) v_−^(2)(n) its component parallel to v_−^(1)(n + 1).
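The forward QR procedure just mentioned can be sketched as follows (again on the Hénon map, as an assumed toy example); the accumulated logarithms of the diagonal of R yield the Lyapunov exponents, while the columns of Q converge to the backward Oseledets vectors at the current phase point:

```python
import numpy as np

a, b = 1.4, 0.3
def step(x, y): return 1.0 - a * x * x + y, b * x
def jac(x): return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

x, y = 0.1, 0.1
for _ in range(1000):                 # transient towards the attractor
    x, y = step(x, y)

Q = np.eye(2)
log_r = np.zeros(2)
n_steps = 5000
for _ in range(n_steps):
    Q, R = np.linalg.qr(jac(x) @ Q)   # Gram-Schmidt step in matrix form
    s = np.sign(np.diag(R))
    Q, R = Q * s, (R.T * s).T         # fix signs so that diag(R) > 0
    log_r += np.log(np.diag(R))
    x, y = step(x, y)

lyap = log_r / n_steps                # Lyapunov exponents (about 0.42, -1.62)
# the columns of Q now approximate the backward Oseledets vectors v_-^(1,2)
```

Since |det J| = b at every point of the Hénon map, the two exponents must sum exactly to ln b, which provides a cheap consistency check.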



Fig. 4.1 Schematic representation of the evolution of the backward Oseledets vectors v_−^(j)(n) and of the dynamical algorithms for the determination of the covariant Lyapunov vectors (because here we depict one-step evolution, we refer to the local Jacobian J instead of the product H).

4.2 Covariant Lyapunov vectors and the dynamical algorithm

A truly covariant Lyapunov vector (CLV) of an N-dimensional system can be defined by intersecting the subspaces identified by the v_± vectors. Let (S^(i)(t))_− be the subspace spanned by the first i eigenvectors of M^−. This subspace at time t contains all perturbations that grow with the Lyapunov exponents λ_1, . . . , λ_i; i.e. it coincides with the subspace that, in terms of the Oseledets splitting, is E1 ⊕ · · · ⊕ Ei. Then, let (S^(i)(t))_+ be the space spanned by the last N + 1 − i eigenvectors of M^+; in terms of the Oseledets splitting it corresponds to Ei ⊕ · · · ⊕ EN. The (one-dimensional) intersection of the two subspaces yields the covariant or characteristic Lyapunov vector as it has been defined by Ruelle (1979):

u^(i)(t) = (S^(i)(t))_+ ∩ (S^(i)(t))_−.   (4.4)

Invariance of the vector u^(i)(t) under time evolution follows from the invariance of the spaces (S^(i))_±. When the same matrix J(t) = A is applied at all times, it is easy to convince oneself that the u^(i)(t) reduce to the standard eigenvectors (provided that no degenerate eigenvalues are



present): they are trivially mapped onto themselves (apart from an irrelevant scaling factor associated with the corresponding eigenvalue). In general, the vectors u^(i)(t) represent the proper generalisation of the concept of eigenvectors to a context where a different matrix is applied at each time step.
In order to transform this definition into an effective algorithm, it is necessary to generate forward and backward Oseledets vectors. Because of this difficulty, for many years CLVs have been determined only sporadically, as mentioned in the introduction of this chapter. It is only recently that two effective algorithms appeared (Ginelli et al., 2007; Wolfe and Samelson, 2007), which make the computational task much easier. Here, we describe the so-called dynamical algorithm from Ginelli et al. (2007), which is convincingly stable and easier to understand from a physical point of view.
From Eq. (4.4), we see that the determination of the second CLV u^(2)(t) requires identifying the covariant (one-dimensional) subspace obtained by intersecting (S^(2)(t))_− with (S^(2)(t))_+. The former two-dimensional subspace is easily obtained by iterating the first two backward Oseledets vectors (see again Fig. 4.1). As for (S^(2)(t))_+, there is no need to build the entire (typically, high-dimensional) space, but it is sufficient to restrict the study to (S^(2)(t))_− when following the tangent space evolution backwards in time. In fact, for the same reason that a generic perturbation, when iterated forwards in time, tends to align along the most expanding direction, it tends to align along the least expanding (most contracting) one when iterated backwards. If the evolution is constrained to the subspace (S^(2)(t))_−, this means that it aligns along the second most expanding direction (since −λ_2 > −λ_1), i.e. along the second covariant direction. Now, we transform this way of reasoning into a general algorithm.
Let w^(i)(t) denote a generic vector embedded in the subspace spanned by the first i backward Oseledets vectors v_−^(j)(t). As the latter vectors represent an orthonormal basis, one can write

w^(i)(t) = Σ_{j=1}^{i} c^(j,i)(t) v_−^(j)(t),   (4.5)

where c^(j,i)(t) = ⟨w^(i)(t)|v_−^(j)(t)⟩. By denoting with c^(i)(t) the set of coordinates of w^(i)(t), one can express the iteration from time t to time t + 1 as c^(i)(t + 1) = R^(i)(t) c^(i)(t), where the i × i upper triangular matrix R^(i)(t) corresponds to the upper-left block of the matrix R(t), determined by means of the QR decomposition. The ith covariant vector is obtained by iterating backwards this equation,

c^(i)(t) = (R^(i)(t))^{−1} c^(i)(t + 1),   (4.6)

for a sufficient number of times to allow for the transient to die out. (R^(i)(t))^{−1}, too, is an upper triangular matrix, whose diagonal elements are the inverse of those of R^(i)(t); they, in fact, contain the information on the volume-contraction rates. Its off-diagonal elements are crucial to identify the orientation of the CLV. In practice, it is not necessary to perform an explicit matrix inversion; backward iteration can be efficiently performed with back-substitution algorithms, starting from the last (ith) component of c^(i)(t).
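The stability of this backward iteration can be demonstrated even with synthetic data: any two coefficient vectors, iterated backwards through the same sequence of upper triangular matrices, collapse onto the same covariant direction. A hypothetical sketch, with random R(t) whose diagonals encode prescribed expansion rates:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
lam = np.array([0.5, 0.1, -0.2, -0.6])     # assumed Lyapunov exponents

# synthetic upper triangular matrices R(t) with diagonals ~ e^{lambda_j}
Rs = []
for _ in range(200):
    R = np.triu(0.3 * rng.standard_normal((m, m)), k=1)
    np.fill_diagonal(R, np.exp(lam + 0.05 * rng.standard_normal(m)))
    Rs.append(R)

def backward(c):
    """Eq. (4.6): c(t) = R(t)^{-1} c(t+1), with rescaling."""
    for R in reversed(Rs):
        c = np.linalg.solve(R, c)          # triangular solve (back-substitution)
        c /= np.linalg.norm(c)
    return c

c1 = backward(rng.standard_normal(m))
c2 = backward(rng.standard_normal(m))      # same direction, up to the sign
```

Both runs converge to the direction associated with the smallest exponent, which is the most expanding one under backward iteration.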



4.3 Dynamical algorithm: numerical implementation

The various steps of the dynamical algorithm can be summarised as follows:

Step 1 – The first step is fully equivalent to the one needed for the computation of the Lyapunov exponents. Here, we briefly summarise the procedure for later convenience. A given initial condition in phase space and a set of m ≤ N orthogonal tangent-space vectors v^(i)(0), i = 1, . . . , m, are evolved according to the phase- and tangent-space dynamics (applying the QR decomposition described in Chapter 3), respectively. This evolution should last long enough, up to a time t′, to allow the trajectory to converge to the attractor (in dissipative models) and the tangent vectors to converge to the backward Oseledets vectors.

Step 2 – Both the trajectory and the backward Oseledets vectors are evolved forwards, from time t′ until time t‴. During this evolution the components of the backward Oseledets vectors v_−(t), as well as the elements of the triangular matrices R(t), are recorded at each step. The stored information is necessary for the execution of the following two steps.

Step 3 – A random set of vectors c^(i)(t‴) (i = 1, . . . , m) is selected (notice that c^(i)(t‴) has i components) and evolved backwards via Eq. (4.6) until time t″, which lies between t′ and t‴. The backward transient time t‴ − t″ should be long enough to allow each vector to converge to the corresponding covariant vector. At variance with the forward evolution, here each vector is iterated independently and is only rescaled (from time to time) to avoid too large numbers.

Step 4 – The backward evolution continues, yielding the proper covariant vectors for t′ ≤ t ≤ t″; notice that the c^(i)(t) (i = 1, . . . , m) coordinates are automatically expressed with reference to the local backward Oseledets basis.
A nice property of this algorithm (shared also by the methods described next) is that the computation of the first m CLVs requires only the iteration (forwards and backwards) of m vectors; there is no need to explore other directions. If the dynamics is invertible, the last m vectors can be equivalently obtained without evolving the first N − m vectors. It is necessary first to evolve the real-space dynamics to generate a faithful trajectory and then follow this scheme by exchanging forward with backward evolution. If the Lyapunov spectrum is degenerate, some covariant subspaces have a dimension larger than one and the individual vectors have no physical meaning. In this case, this algorithm will simply return some arbitrary set of independent vectors which depend on the selection of tangent vectors made at time t‴. A pseudocode describing the entire procedure is illustrated in Appendix B.
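Putting the four steps together, a compact and purely illustrative implementation for the Hénon map with m = 2 may look as follows; covariance of the resulting vectors can be checked by applying the local Jacobian:

```python
import numpy as np

a, b = 1.4, 0.3
def step(x, y): return 1.0 - a * x * x + y, b * x
def jac(x): return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

# Step 1: forward transient for the trajectory and the tangent basis
x, y = 0.1, 0.1
for _ in range(1000):
    x, y = step(x, y)
Q = np.eye(2)
for _ in range(100):
    Q, _ = np.linalg.qr(jac(x) @ Q)
    x, y = step(x, y)

# Step 2: forward evolution, storing the points, Q(t) and R(t)
T = 400
pts, Qs, Rs = [], [], []
for _ in range(T):
    pts.append((x, y))
    Qs.append(Q)
    Q, R = np.linalg.qr(jac(x) @ Q)
    Rs.append(R)
    x, y = step(x, y)

# Steps 3-4: backward evolution of the coefficients c^(i)(t), Eq. (4.6)
clv = [[None] * T, [None] * T]
for i in range(2):
    c = np.random.default_rng(i).standard_normal(i + 1)
    for t in range(T - 1, -1, -1):
        c = np.linalg.solve(Rs[t][:i + 1, :i + 1], c)
        c /= np.linalg.norm(c)
        clv[i][t] = Qs[t][:, :i + 1] @ c   # CLV in phase-space coordinates
```

Each stored vector clv[i][t] is mapped, by the local Jacobian, onto a multiple of clv[i][t + 1], which is the defining covariance property.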

Memory

The algorithm requires the storage of the R(t) matrices and the v_−^(i)(t) vectors to be used during the backward evolution from t‴ to t′. The R(t) matrices needed to run the backward



dynamics involve m(m + 1)/2 floating-point numbers, while the v_−^(i)(t) vectors require mN floating-point numbers. Thus, taking into account that h = |t‴ − t′| sampling points are to be considered, the total number M_T of floating-point numbers is

M_T = h m (N + (m + 1)/2).

This burden can be sensibly reduced if one is not interested in the structure of the CLVs in the original basis, but just in the mutual angles, since the v_−^(i)(t) vectors stored in the second step are not needed for the backward iteration. In this case, the hmN term in the r.h.s. of this equation can be disregarded.
A large amount of memory may, nevertheless, be needed, if the phase-space dimension is large and/or many vectors are required, exhausting the capacity of the fast-access memory. In such a case, in order to avoid frequent calls to a slow-access memory (e.g. disk storage), it is convenient to split the computation into separate blocks. Once the maximum amount of data M_b that can be stored in the fast-access memory is determined, divide the total number h of forward steps into n_b blocks of length h_b = h/n_b such that M_T/n_b ≤ M_b. Then, step 2 of the algorithm is replaced by a forward run, where the R(t) matrices and the v_−^(i)(t) vectors are not stored; the current phase-space configuration and the backward Oseledets vectors are instead saved every h_b time steps, at the beginning of each block. These data can typically be stored on a disk, as they will not be accessed too frequently. Once this step has been completed, perform a series of block-by-block forward and backward iterations, starting from the last block. At the end of each pair of steps, the phase-space position and the corresponding Oseledets vectors are recovered and used to generate (a second time) the R(t) matrices and the v_−^(i)(t), which are now stored in the fast memory.
At the end of a forward step, a backward iteration begins from the c(i) (t) vectors obtained at the end of the previous block (with the exception of the very first step, where random initial conditions are selected, instead). This way, the CLVs are kept properly oriented. The overall price paid to study large systems is, therefore, the need to repeat the forward iterations twice.

Computational complexity

Suppose the dynamical system has N degrees of freedom and we are interested in computing the first m CLVs. The number of operations required in the forward step is the same as for the computation of the Lyapunov exponents. To calculate the forward dynamical evolution one must run both phase- and tangent-space dynamics for k_f time steps and then perform a single QR decomposition. Dynamical systems such as those with finite-range or mean-field interactions (easy dynamics) require O(N) operations for a single step of phase-space dynamics and O(mN) for the tangent-space evolution. Dynamical systems with long-range interactions (hard dynamics) typically require O(N²) and O(mN²) operations for phase-space and tangent-space evolution, respectively. This leads to either O(m k_f N) (for easy dynamics) or O(m k_f N²) (for hard dynamics) operations to be performed between consecutive QR decompositions, the decomposition being an O(m²N) algorithm in itself. So far, the complexity is fully equivalent to that of the LE computation.

61

4.4 Static algorithms

A single step of the backward evolution of the CLV coefficients via Eq. (4.6) requires ∼ m³/3 operations (by resorting to a back-substitution algorithm), while the number of operations needed to express the CLVs in the phase-space coordinate basis via Eq. (4.5) is ∼ m²N/2. As a result, neglecting the relatively small number of iterations needed to rescale the CLVs, we find that the total number of operations is

T_tot ≈ h m² (m/3 + N/2).

Accordingly, in systems with finite-range or mean-field interactions, even when the full set of CLVs is computed, i.e. m = N, the backward evolution is approximately 2.4 times faster than the forward one (see Chapter 3), or 6 times if one is not interested in expressing the CLVs in the phase-space coordinate basis. The situation is slightly less favourable for systems with long-range interactions, but it is nevertheless clear that the computational time is not a more serious issue than in the computation of the Lyapunov exponents.

Transient time

The convergence in both forward and backward evolutions is typically exponential and is related to the difference between consecutive Lyapunov exponents (see also Ginelli et al. (2013)). More precisely, the convergence to the ith CLV (i > 1) depends on the difference λ_i − λ_{i−1}. Dynamical systems with many degrees of freedom are typically characterised by a piecewise continuous limit spectrum λ(i/N) (see Chapter 10). This implies that the difference between consecutive exponents scales to zero as 1/N, so that it is advisable, when performing a finite-size analysis of such systems, to scale the transient time with the number of degrees of freedom.

4.4 Static algorithms

Other algorithms have been proposed which do not make use of the intrinsic stability of the backward evolution, when restricted to the Oseledets subspace, but, rather, determine the CLVs as linear combinations of either forward or backward Oseledets vectors. More precisely, at each point along a given trajectory

u^(i) = Σ_{j=1}^{i} ⟨v_−^(j)|u^(i)⟩ v_−^(j) = Σ_{j=i}^{N} ⟨v_+^(j)|u^(i)⟩ v_+^(j),   (4.7)

where v_+^(j) and v_−^(j) are, respectively, the forward and backward Oseledets vectors. For the sake of simplicity, we are dropping the time dependence. The relation (4.7) follows from the fact that the CLV u^(i) is the intersection of the space spanned by the first i backward Oseledets vectors with the one spanned by the last (N − i + 1) forward Oseledets vectors.



4.4.1 Wolfe-Samelson algorithm

Upon taking the scalar product of u^(i) with v_±^(k), one can write

⟨v_−^(k)|u^(i)⟩ = Σ_{j=1}^{i} ⟨v_+^(j)|u^(i)⟩ ⟨v_−^(k)|v_+^(j)⟩,   k ≥ i,   (4.8)

⟨v_+^(k)|u^(i)⟩ = Σ_{j=i}^{N} ⟨v_−^(j)|u^(i)⟩ ⟨v_+^(k)|v_−^(j)⟩,   k ≤ i.   (4.9)

By now substituting Eq. (4.8) into Eq. (4.9), one obtains the following set of equations for the components of the covariant vectors in the forward basis:

⟨v_+^(k)|u^(i)⟩ = Σ_{j=1}^{i} [ Σ_{l=i}^{N} ⟨v_+^(k)|v_−^(l)⟩ ⟨v_−^(l)|v_+^(j)⟩ ] ⟨v_+^(j)|u^(i)⟩,   k ≤ i.   (4.10)

The solution of these equations requires the knowledge of N vectors, no matter the size of i. However, as noticed by Wolfe and Samelson (2007), one can simplify the numerics by exploiting the relationship

Σ_{k=1}^{N} ⟨v_+^(j)|v_−^(k)⟩ ⟨v_−^(k)|v_+^(i)⟩ = δ_ij,

which follows from the fact that the v_−^(i) and the v_+^(i) are two complete sets of orthonormal vectors. As a result,

Σ_{l=i}^{N} ⟨v_+^(k)|v_−^(l)⟩ ⟨v_−^(l)|v_+^(j)⟩ = δ_kj − Σ_{l=1}^{i−1} ⟨v_+^(k)|v_−^(l)⟩ ⟨v_−^(l)|v_+^(j)⟩.

By replacing this expression into Eq. (4.10), one obtains the set of equations

Σ_{j=1}^{i} [ Σ_{l=1}^{i−1} ⟨v_+^(k)|v_−^(l)⟩ ⟨v_−^(l)|v_+^(j)⟩ ] ⟨v_+^(j)|u^(i)⟩ = 0,   k ≤ i,   (4.11)

which is much simpler, as the two vector-indices j and l run up to i only. As a result, we see that Eq. (4.11) produces the expansion coefficients of the ith CLV as the kernel of a matrix computed from the first i forward and i − 1 backward Oseledets vectors.
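The structure of Eq. (4.11) can be verified on a synthetic example in which the CLVs are prescribed from the start; the forward and backward bases are then built by orthonormalising the prescribed vectors in the two opposite orders, consistently with the convention of Eqs. (4.8)-(4.11). A sketch, not the original implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
U = rng.standard_normal((N, N))         # columns: prescribed "CLVs" u^(i)

# forward basis: span{v_+^(1..i)} = span{u^(1..i)}  (QR = Gram-Schmidt)
Vp, _ = np.linalg.qr(U)
# backward basis: span{v_-^(i..N)} = span{u^(i..N)}
Qr, _ = np.linalg.qr(U[:, ::-1])
Vm = Qr[:, ::-1]

def clv_wolfe_samelson(i):
    """ith CLV from the first i forward and i-1 backward Oseledets vectors."""
    D = Vm[:, :i - 1].T @ Vp[:, :i]     # overlaps <v_-^(l)|v_+^(j)>
    G = D.T @ D                         # matrix appearing in Eq. (4.11)
    w, vecs = np.linalg.eigh(G)
    y = vecs[:, 0]                      # kernel: eigenvector of eigenvalue 0
    return Vp[:, :i] @ y                # expansion in the forward basis
```

Each reconstructed vector should coincide, up to the sign, with the corresponding prescribed direction.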

4.4.2 Kuptsov-Parlitz algorithm

Kuptsov and Parlitz (2012) have developed another static approach, which makes use of LU factorisation. In matrix notations, Eq. (4.7) can be rewritten as

V_+ C_+ = V_− C_−,   (4.12)

with the plus and minus indices referring to forward and backward dynamics, respectively, and where the upper and lower triangular matrices C_± provide a compact description of all components of the CLVs in the forward and backward bases, respectively. Since V_− is an orthogonal matrix, one can recast Eq. (4.12) as

(V_−^T V_+) C_+ = P C_+ = C_−.

If one is interested only in the ith CLV, that is, in the ith column of the matrix C_+, only the (i − 1) × i upper left corner of matrix P is needed. Since C_− is lower triangular and the first i − 1 entries of its ith column are all zeros, we are left with the following system of (i − 1) linear homogeneous equations in i variables

Σ_{j=1}^{i} [P]_{k,j} [C_+]_{j,i} = 0,   k = 1, 2, . . . , i − 1,

which defines the ith vector up to a rescaling factor. Analogously to the dynamical approach, in both of the previous schemes, the first i forward and (i − 1) backward vectors are needed if the first i CLVs are required. In these static methods, however, one is forced to perform the QR decomposition twice, which, as shown before, is the most computationally demanding part of any algorithm. A second concern arises for large systems, which have to be dealt with by means of singular value decomposition (SVD) to attain a satisfactory numerical accuracy. SVD is more time consuming than back substitution by a factor of 18, as it requires ∼ 6m³ operations for an m × m matrix (Golub and Van Loan, 1996). Finally, as far as memory requirements are concerned, only the forward Oseledets vectors need to be stored by the static algorithm. While this reduces the memory requirement to about 2/3 of what is needed by the dynamical algorithm (where one has to store both V and R matrices), this memory advantage is lost whenever one is interested only in the angles between vectors, for which the dynamical algorithm requires the storage of only the upper triangular matrices R.
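The same synthetic setup also illustrates the Kuptsov-Parlitz scheme: one builds P from the two bases and extracts the null space of the relevant corner (here via SVD, as advisable for large systems). Again a sketch under the same assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
U = rng.standard_normal((N, N))         # columns: prescribed "CLVs" u^(i)
Vp, _ = np.linalg.qr(U)                 # forward basis
Qr, _ = np.linalg.qr(U[:, ::-1])
Vm = Qr[:, ::-1]                        # backward basis

P = Vm.T @ Vp                           # overlap matrix of the two bases

def clv_kuptsov_parlitz(i):
    """ith column of C_+ from the (i-1) x i upper left corner of P."""
    if i == 1:
        return Vp[:, 0]                 # no constraint: the first CLV itself
    _, _, vt = np.linalg.svd(P[:i - 1, :i])
    y = vt[-1]                          # null direction (smallest singular value)
    return Vp[:, :i] @ y
```

As with the Wolfe-Samelson sketch, the reconstructed directions should match the prescribed ones up to the sign.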

4.5 Vector orientation

Covariant Lyapunov vectors carry information on the (local) geometrical structure of the attractor. For example, it is useful to know the angle between the stable and unstable manifolds, since this helps to check whether they are everywhere transversal, as required for hyperbolic systems. It is therefore important to determine the angles between the different vectors. It is noteworthy that the angles are not dynamical invariants, but do depend on the choice of variables used to describe the underlying chaotic attractor. Moreover, in general, the task is not to compare single vectors but higher-dimensional spaces. Given any two linear subspaces S1 and S2 of dimension, respectively, N1 and N2 (with N1 + N2 ≤ N and N1 < N2), there exist N1 principal angles between them, which can be defined as follows (see section 12.3.4 in Golub and Van Loan, 1996). Let A1 (A2) be an N × N1 (N × N2) matrix, whose column vectors span S1 (S2). Then, the QR decomposition is



applied to both matrices,

A1 = Q1 R1,   A2 = Q2 R2.

The N1 singular values s^(i) of Q1^T Q2 are the cosines of the principal angles θ^(i), s^(i) = cos θ^(i). The minimum angle θ̄ = min_i θ^(i) measures the degree of transversality between the two subspaces. When S1 and S2 are the unstable and the stable manifolds, respectively, it is convenient to refer to the backward Oseledets basis. This simplifies the calculation of principal angles, since the matrix A1 is upper triangular, and its corresponding orthogonal matrix Q1 is just the identity matrix (Kuptsov and Kuznetsov, 2009).
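In code, the construction reads (a generic sketch with random subspaces):

```python
import numpy as np

def principal_angles(A1, A2):
    """Principal angles between the column spans of A1 (N x N1) and A2 (N x N2)."""
    Q1, _ = np.linalg.qr(A1)
    Q2, _ = np.linalg.qr(A2)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)   # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(2)
N, N1, N2 = 6, 2, 3
A1 = rng.standard_normal((N, N1))
A2 = rng.standard_normal((N, N2))
theta = principal_angles(A1, A2)
theta_min = theta.min()          # degree of transversality of the two subspaces
```

If the two subspaces share a common direction, the minimal principal angle vanishes, which is the signature of a tangency.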

4.6 Numerical examples

In this section we illustrate the concept of CLVs in a few examples, starting from the Hénon map (A.4). In this case, since the phase space is two-dimensional, there are two CLVs, which can be easily determined by iterating forwards and backwards in tangent space. In Fig. 4.2a, seven consecutive points of a trajectory are superposed on the Hénon attractor. For each point the solid segment is oriented along the direction of the first CLV, while the dashed segment corresponds to the second CLV. In some cases the solid segment is hardly visible (see, e.g., point 1); this happens whenever the curvature of the unstable manifold is locally small. In fact, the first CLV is naturally oriented along the unstable manifold that is, in this case, one dimensional. In some other points, the dashed segment is hardly visible as it is almost parallel to the solid one (see, e.g., points 4 and 5). This is the indication that the trajectory is close to a homoclinic tangency. A compact description of how frequently such tangencies are encountered can be obtained by computing the probability density of the angle θ between two such manifolds. This is illustrated in Fig. 4.2b. There we see that a finite density of angles is observed around θ = 0, which implies that the integrated probability of observing an angle smaller than θ is proportional to θ itself. This is a manifestation of deviations from strict hyperbolicity. The several peaks, instead, correspond to the regions where the unstable manifold is rather straight, and thereby the angle with the stable manifold is almost constant.

Fig. 4.2 (a) Covariant Lyapunov vectors in seven consecutive points of a trajectory of the Hénon map (with a = 1.4 and b = 0.3). (b) Distribution of the angles for the same map.

As less trivial examples, we now discuss two higher-dimensional systems: a dissipative chain of Hénon maps (see Eq. (A.11)) and a chain of symplectic maps (see Eq. (A.12)). The results are reported in Fig. 4.3, where both models have been studied with four maps and assuming periodic boundary conditions. The angle θ is determined as the minimal angle between two four-dimensional subspaces in the former case and between two three-dimensional subspaces in the latter. In fact, in the case of the symplectic maps there are two zero Lyapunov exponents (arising from the conservation of the sum of the Z variables). We see that again the distribution extends to zero with a finite height, suggesting that deviations from hyperbolicity do not become more dangerous in higher-dimensional systems.

Fig. 4.3 Distribution of the minimal angle between stable and unstable manifolds in a chain of four Hénon maps (solid line, a = 1.4, b = 0.3, ε = 0.025) and of four symplectic maps (dashed line, μ = 4).

4.7 Further vectors

Besides the covariant, singular, forward and backward vectors already discussed, other forms of Lyapunov vectors have been proposed in connection to various questions. In this section we briefly review two further classes of vectors.



4.7.1 Bred vectors

Whenever the dynamical model is highly complex, it may be inappropriate to develop algorithms for the integration of the equations in the tangent space. This is, for instance, the case of the atmospheric models typically used for weather forecasts. In such a context, it is convenient to consider finite-amplitude perturbations (see also Chapter 7). Given a generic initial condition U and a randomly chosen nearby point, they are allowed to evolve for a time Δt. As a result,

δU′(t + Δt) = M_Δt[U(t) + δU(t)] − M_Δt[U(t)],

where δU is the initial perturbation with small amplitude A = ‖δU‖, and M_Δt denotes the evolution operator over a time Δt. The perturbation δU′ is then rescaled to keep its norm equal to A (a free parameter one can play with),

δU(t + Δt) = A δU′(t + Δt) / ‖δU′(t + Δt)‖,

and the procedure is repeated forwards in time. The resulting direction of the perturbation is called a bred vector (BV). In the small-A limit, this is nothing but the method to determine the largest Lyapunov exponent. In this case, the BV converges towards the most expanding direction.
Alternatively, given the reference initial condition U, the distance δU can be measured by comparing the evolution of two randomly perturbed trajectories; this "two-sided self-breeding" has the advantage of maintaining the linearity of the perturbation dynamics to the second order, compared with the original one-sided scheme (Toth and Kalnay, 1997). In practice, all that can be said about BVs, including the considerations reported here, applies also to the standard first covariant Lyapunov vector, except when the amplitude A is not small (in this case, see the finite-amplitude Lyapunov exponents discussed in Chapter 7).
Several numerical studies have revealed that BVs appear to be less sensitive to the selection of the norm with respect to singular vectors. This is, for instance, found when comparing the potential-enstrophy with the stream-function norm (see Corazza et al. (2003)). A more interesting property is the distribution of directions taken by the BVs (when computed over not-too-long times). In fact, it has been found that in global atmospheric models, as well as in other strongly nonlinear models, BVs remain distinct, rather than converging to a single leading direction; whether due to the presence of nearly degenerate spectra or of long transients, this observation has important consequences for the predictive power of the underlying model.
In order to quantify the spreading among BVs, the concept of BV-dimension has been proposed. If there are k BVs, each composed of N components (where N is the number of variables in the model), all vectors can be organised into an N × k matrix B.
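The breeding cycle itself amounts to a few lines of code. The sketch below (a toy setting: the two-dimensional Hénon map instead of an atmospheric model) also accumulates the average logarithmic growth per cycle, which, for small A, approaches the largest Lyapunov exponent:

```python
import numpy as np

a, b = 1.4, 0.3
def step(u):
    return np.array([1.0 - a * u[0] * u[0] + u[1], b * u[0]])

A = 1e-6                          # breeding amplitude (a free parameter)
rng = np.random.default_rng(3)
u = np.array([0.1, 0.1])
for _ in range(1000):             # relax onto the attractor
    u = step(u)

d = rng.standard_normal(2)
d *= A / np.linalg.norm(d)
log_growth = 0.0
n_cycles = 2000
for _ in range(n_cycles):
    du = step(u + d) - step(u)    # finite-amplitude perturbation growth
    u = step(u)
    log_growth += np.log(np.linalg.norm(du) / A)
    d = A * du / np.linalg.norm(du)   # rescale the bred vector to amplitude A

lambda_bv = log_growth / n_cycles     # close to the largest Lyapunov exponent
```

For such a small amplitude, the bred vector is essentially indistinguishable from the first covariant Lyapunov vector, as noted in the text.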
Following a principal component analysis, one computes the (positive) eigenvalues σ_j² and the eigenvectors of the k × k covariance matrix C = B^T B. So long as all BVs have converged

67

4.7 Further vectors

towards the same direction, only the leading eigenvalue is significantly different from zero. On a qualitative level one can introduce the effective dimension as

D_BV = (Σ_{j=1}^{k} σ_j)² / Σ_{j=1}^{k} σ_j².

If only one eigenvalue differs from zero, D_BV = 1, and if all are equal, D_BV = k. Therefore D_BV roughly tells us how many directions are spanned by the BVs. A large dimension is typically observed either in weakly chaotic regions, where many directions are characterised by similar (small) expansion rates, or when the amplitude A is large, allowing perturbations to be very different from one another. Interestingly, it has been found that in some cases the Earth's atmosphere is characterised by a low D_BV dimensionality (D_BV < 2.5) (Patil et al., 2001).

4.7.2 Dual Lyapunov vectors

The covariant Lyapunov vectors represent the generalisation of the right eigenvectors of a constant Jacobian J to a situation where one deals with a time-varying Jacobian. In some circumstances, it is convenient to consider the left eigenvectors as well, which, within linear algebra, are often referred to as dual vectors. In the context of this book, the ith dual Lyapunov vector is orthogonal to the space spanned by all CLVs u^(j) with j ≠ i. In other words, dual Lyapunov vectors coincide with the CLVs only when the Jacobian is an orthogonal operator.
Dual vectors prove useful for the reconstruction of the linear response of a given dynamical system to a small external stimulus when the dynamics is periodic (and stable). Let δ denote a generic perturbation acting on a given periodic dynamics at time t_0. It can be decomposed into a transversal component that is eventually absorbed (because of the stability of the limit cycle) and a longitudinal component, which, being associated to a zero Lyapunov exponent, is transformed into a finite phase shift (either negative or positive)

δ = c_T u_T + c_L u_L.

Here u_T is a suitable combination of the stable CLVs, while u_L coincides with the zero-exponent CLV and corresponds to a shift along the trajectory. The task is to determine c_L, given δ. Upon applying (from the left, as usual) the Jacobian H(t_0, t) for a time t long enough, one obtains

H(t_0, t)δ = c_L H(t_0, t) u_L,

as the transversal component dies out. By now multiplying from the left by the dual vector ũ_L of H(t_0, t), and solving for c_L, it is found that

c_L = (ũ_L · δ) / (ũ_L · u_L),

where we have made use of the fact that u_L is characterised by an eigenvalue equal to 1 (zero Lyapunov exponent). In practice, c_L corresponds to the expected phase shift when a


perturbation δ is applied at time t_0. Notice that in the case of an orthogonal Jacobian, the denominator is equal to 1, as the CLV coincides with its dual vector. In the opposite limit of a near degeneracy, when the two vectors are almost orthogonal, the denominator can, instead, be very small, thus implying a large phase response; this happens when the zero-exponent CLV is nearly collinear with the stable directions, so that a relatively small perturbation decomposes into two large but nearly antiparallel components. We now illustrate the problem in the case of the FitzHugh-Nagumo model (A.7) for parameter values where the evolution is asymptotically periodic. The shape of the limit cycle can be appreciated in Fig. 4.4a (see the figure caption for the parameter values); the various segments show the orientation of the second, i.e. stable, CLV. Within a linear

Fig. 4.4 [figure: the limit cycle in the (U, V) plane, with the origin O marked; two panels, (a) and (b)] The FitzHugh-Nagumo model for I = 0.5, τ = 10, a = 0.7 and b = 0.8 (see the solid line in both panels). The segments in panel (a) are oriented along the locally stable direction, while those in panel (b) are oriented along the covariant vector which corresponds to the zero exponent.

Fig. 4.5 [figure: the response R versus the phase θ, from 0 to 2π] Phase-response curve for the FitzHugh-Nagumo model: solid and dashed lines refer to a perturbation acting along the U and V variables, respectively.


approximation, phase points lying along the same segment all converge to the same trajectory. In this 2-dimensional model, the zero dual Lyapunov vector is orthogonal to the stable CLV. The amplitude of the phase shift (the so-called phase-response curve) due to a perturbation which acts along the U (V) direction is reported in Fig. 4.5 as a function of the phase θ along the limit cycle (where θ is defined as the rescaled elapsed time, starting from the origin O). More sophisticated applications to spatially extended systems can be found in Biktashev (2005), where the problem is approached by integrating the adjoint linearised equations backwards.
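The projection formula for c_L can be verified with elementary linear algebra on a toy monodromy matrix — a hypothetical stand-in for H(t_0, t) over one period, not the FitzHugh-Nagumo Jacobian; its eigenvalues 1 and 0.2 mimic the longitudinal and stable directions:

```python
# Toy monodromy matrix M with right eigenvectors uL = (1, 0) (eigenvalue 1,
# the "longitudinal" direction) and uT = (1, 1) (eigenvalue 0.2, stable).
M = [[1.0, -0.8],
     [0.0,  0.2]]

# Dual (left) eigenvector of the unit eigenvalue: wL M = wL.
wL = (1.0, -1.0)
uL = (1.0, 0.0)

def matvec(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

delta = (0.3, 0.5)                       # generic perturbation

# Phase-shift coefficient c_L = (wL . delta) / (wL . uL)
cL = dot(wL, delta) / dot(wL, uL)

# Applying M many times kills the transversal component, so
# M^n delta -> cL * uL, confirming the projection formula.
v = delta
for _ in range(20):
    v = matvec(M, v)
print(cL, v)   # cL = -0.2, v close to (-0.2, 0)
```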

5 Fluctuations, finite-time and generalised exponents

In this chapter we revisit the concept of LE introduced in Chapter 3, starting from the definition of finite-time Lyapunov exponents. Any average performed over a finite time is not a well-defined quantity per se, since it is typically affected by fluctuations. In the long-time limit, however, it turns out that the residual fluctuations, rather than representing an obstacle to an accurate estimate of the LEs, offer the opportunity to extract useful information on the degree of homogeneity of the instabilities in phase space. Such information can be properly encoded into suitable dynamical invariants, either in the form of generalised Lyapunov exponents or as a large deviation function. Finite-time Lyapunov exponents can be used to introduce further numerical tools, as well as powerful techniques for the evaluation of the LEs. On the numerical side, we briefly review some quick methods suitable for a qualitative assessment of the presence of instabilities, and then we present an advanced Monte Carlo technique to quantify large deviations. On the theoretical side, by combining the concept of generalised LEs with the ensemble-average approach introduced in Chapter 3, we show that the problem of evaluating the largest LE can be reduced to that of determining the largest eigenvalue of a suitable evolution operator. We illustrate the potential and the difficulties of the various methods with the help of some simple examples. Another theoretical approach, based on the stability of the periodic orbits embedded in a chaotic attractor, is also discussed. The last part of this chapter is devoted to the discussion of several examples of fluctuations that may emerge in generic dynamical systems.
This includes the relationship between fluctuations and deviations from hyperbolic behaviour (the Hénon map being a rather instructive reference model) and several borderline cases, where the maximal LE may be equal to zero (weak chaos) or even negative (mixed dynamics), and yet positive fluctuations are generated and induce some form of irregularity in the resulting dynamics.

5.1 Finite-time analysis

There are different ways of defining finite-time Lyapunov exponents.¹ We start with a definition that is naturally associated with the concept of covariant Lyapunov vectors.

¹ Sometimes finite-time exponents are called local exponents; we avoid such terminology, since locality may be considered in both time and space (see the discussion of space-time dynamics in Chapters 10 and 11).


Let u_k(t_0) denote a vector oriented along the direction of the kth CLV in the phase point U(t_0). One can, accordingly, define the finite-time Lyapunov exponent

$\Lambda_k(t_0, \tau) = \frac{1}{\tau}\,\log\frac{\|H(t_0, t_0+\tau)\,u_k(t_0)\|}{\|u_k(t_0)\|} := \frac{\Gamma_k(t_0, \tau)}{\tau},$   (5.1)

where the Jacobian H(t_0, t_0+τ) corresponds to the integration in tangent space from time t_0 to t_0 + τ (we use the same notations as in Section 4.1). The finite-time Lyapunov exponent is denoted by Λ_k because we want to distinguish it from the true Lyapunov exponent λ_k, defined in the limit τ → ∞. The LE λ_k can be thought of as the time-averaged finite-time Lyapunov exponent Λ_k. On the other hand, the quantity Γ_k(t_0, τ) is the overall exponential growth rate of the perturbation, integrated over a time interval of length τ. For finite times, Λ_k(t_0, τ) generally depends on the selection of coordinates/norm and on the chosen initial condition (through t_0). We start by discussing the first dependence. If one introduces a new variable V = G(U), the corresponding Jacobian can be written with the help of the matrix D = ∂G/∂U as H_V(t_0, t_0+τ) = D(t_0+τ) H(t_0, t_0+τ) D^{-1}(t_0), where v_k(t) = D u_k(t), so that

$\Lambda_k^V(t_0, \tau) = \Lambda_k(t_0, \tau) + \frac{1}{\tau}\left[\ln\frac{\|D\,u_k(t_0+\tau)\|}{\|u_k(t_0+\tau)\|} + \ln\frac{\|D^{-1}\,v_k(t_0)\|}{\|v_k(t_0)\|}\right].$

The term in square brackets is a bounded correction that does not systematically grow with the integration time τ . As a result of the division by τ , the correction becomes increasingly negligible upon increasing τ . Similar corrections arise if a different norm is selected for the computation of the Lyapunov exponents. The dependence on the initial condition can be appreciated in Fig. 5.1, where 1 (t0 , τ = 10) (dots) is plotted versus time for the Hénon map (A.4). Here, we see that even though 1 is typically positive, occasional negative values are spotted as well. Strong fluctuations can be observed even for arbitrarily large τ values, although their probability decreases upon increasing τ . This is an example of the large deviations that are discussed later in this chapter. One of the sources of the fluctuations is the existence of periodic orbits characterised by different sets of Lyapunov exponents. Finite-time Lyapunov exponents can also be defined in a simpler way, by making reference to the QR decomposition (implemented either through the Gram-Schmidt orthogonalisation or the Householder reflections – see Section 3.2) QR

k (t0 , τ ) =

logRkk (t0 , t0 + τ ) , τ

(5.2)

where Rkk are the diagonal elements of the triangular matrix R arising from the integration over a time τ . The second Lyapunov exponent is the simplest example to illustrate the differences between the two protocols. In the QR case, the sum of the first two finite-time Lyapunov QR QR exponents 1 (τ ) + 2 (τ ) corresponds to the growth rate of the most expanding area.
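The spirit of Fig. 5.1 can be reproduced with a few lines of code: accumulate the log-growth of a single tangent vector of the Hénon map over consecutive windows of length τ = 10, renormalising at each step. A sketch (the initial condition, transient length and number of windows are arbitrary choices):

```python
import math

def henon_finite_time_les(a=1.4, b=0.3, tau=10, windows=2000):
    """Largest finite-time LE Lambda_1(t0, tau) of the Henon map,
    computed as the accumulated log-growth of a tangent vector over
    consecutive windows of length tau (cf. Eq. (5.1))."""
    x, y = 0.1, 0.1
    for _ in range(1000):                     # relax onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    vx, vy = 1.0, 0.0                         # tangent vector
    exponents = []
    for _ in range(windows):
        gamma = 0.0
        for _ in range(tau):
            # tangent map of (x, y) -> (1 - a x^2 + y, b x)
            vx, vy = -2.0 * a * x * vx + vy, b * vx
            x, y = 1.0 - a * x * x + y, b * x
            norm = math.hypot(vx, vy)
            gamma += math.log(norm)
            vx, vy = vx / norm, vy / norm     # renormalise
        exponents.append(gamma / tau)
    return exponents

les = henon_finite_time_les()
mean = sum(les) / len(les)
print(mean)   # close to the asymptotic value lambda_1 ~ 0.42
```

The window exponents fluctuate strongly around the mean, as in Fig. 5.1, because the tangent vector is carried across window boundaries and thus stays aligned with the most expanding direction.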


Fig. 5.1 [figure: Λ_1(t_0) versus t_0, for t_0 up to 4000] First finite-time Lyapunov exponent for the Hénon map. Dots refer to Λ_1(τ = 10); each value is computed over ten consecutive points of a given trajectory, and the dashed line corresponds to the asymptotic (average) value.

In the CLV context², Λ_1^{CLV}(τ) + Λ_2^{CLV}(τ) is characterised by additional fluctuations, since the elementary area defined by the first two (non-orthogonal) CLVs varies in time as well. The Lozi (A.5) and the Hénon (A.4) maps provide the opportunity to illustrate the difference, since in both cases a single iterate of the map induces everywhere in phase space a volume (area) contraction by a factor b. Therefore, the QR definition implies that Λ_1^{QR}(τ) + Λ_2^{QR}(τ) = ln b. It is, therefore, convenient to introduce

$\Delta\Gamma^{CLV}(\tau) = \Gamma_1^{CLV}(\tau) + \Gamma_2^{CLV}(\tau) - \tau \ln b$

to denote the fluctuation of the expansion rate, integrated over a time τ. The outcome of numerical simulations performed with the covariant Lyapunov vectors is reported in Fig. 5.2. Panel (a) refers to the Lozi map: the distribution P(ΔΓ^{CLV}) has been obtained for τ = 40. One can, first of all, appreciate that, at variance with the QR definition, ΔΓ^{CLV} takes non-zero values. The deviations are harmless, however, as they are independent of τ (the probability distribution for τ = 20 is indistinguishable from the curve reported in Fig. 5.2a). The scenario is slightly different for the Hénon map, where P(ΔΓ^{CLV}) is again independent of τ, but it now exhibits long, supposedly infinite tails (see Fig. 5.2b). This phenomenon is related to the existence of homoclinic tangencies, where the stable manifold is tangent to the unstable manifold (see Section 5.7.3 for a more detailed discussion). The Chirikov-Taylor standard map is a widely studied model of chaos, used as a testbed for symplectic dynamics. In this case, volumes are preserved over any time interval. Nevertheless, in Fig.
5.2c we see that ΔΓ^{CLV}, now defined as ΔΓ^{CLV}(τ) = Γ_1^{CLV}(τ) + Γ_2^{CLV}(τ), reveals fluctuations distributed as in the Hénon map, the origin being again the non-hyperbolicity of the map. In conclusion, we have seen that two definitions of finite-time Lyapunov exponents are available, which are almost, but not exactly, equivalent.

² The superscript “CLV” has been added to stress that we are referring to the definition given in (5.1).


Fig. 5.2 [figure: distributions P versus ΔΓ^{CLV} on a logarithmic scale, panels (a), (b) and (c)] Probability distribution P(ΔΓ^{CLV}) of the volume contraction rate in the Lozi map with a = 1.4 and b = 0.3 (panel (a)), the Hénon map with a = 1.4 and b = 0.3 (panel (b)) and the Chirikov-Taylor symplectic map for K = 1.5 (panel (c)). The dashed lines in panels (b) and (c) correspond to an exponential decay with an exponent −1.

The definition (5.1), based on CLVs, is the natural extension of the concept of the eigenvalue of a linear operator, but it loses the nice volume-conservation properties that are, instead, kept by the definition (5.2), based on the QR decomposition. The differences between the two definitions, however, manifest themselves only in the tails of the distribution of the finite-time LEs, as will become clear in the following sections.

5.2 Generalised exponents

In the previous section we saw that finite-time Lyapunov exponents typically fluctuate. Such fluctuations are more than a hindrance to an accurate computation of the LEs. They also carry information on how the chaotic dynamics is structured, or on the degree of stability of a stochastic system (which can be destabilised by rare fluctuations). There are various ways to characterise such fluctuations. Here, we proceed by introducing the so-called generalised Lyapunov exponents in an abstract manner. In fact, the formalism applies to any observable that needs to be suitably averaged over the phase space. Moreover, for the sake of simplicity, we refer to a discrete-time dynamical system, the extension to continuous time being straightforward. So, let υ(U) be the scalar observable of interest and U(0) a suitable initial condition. One can thereby define the time-integrated observable

$\Upsilon^\tau(U(0)) = \sum_{k=0}^{\tau-1} \upsilon(F^k(U(0))),$


so that Υ^τ/τ is the average over the time interval τ. One can accelerate the convergence by averaging also over the initial distribution,

$\langle \Upsilon^\tau \rangle = \int d^N U\, \rho_0(U(0))\, \Upsilon^\tau(U(0)).$

A more detailed description of the process is obtained by looking at the moments and cumulants of Υ^τ. They can be determined by introducing a characteristic function,

$\langle e^{q\Upsilon^\tau} \rangle = \int \rho_0(U)\, e^{q\Upsilon^\tau(U)}\, d^N U.$   (5.3)

For large τ, this quantity is expected to be independent of the initial distribution and to grow exponentially in time with a rate G(q) that depends on q,

$G(q) = \lim_{\tau\to\infty} \frac{1}{\tau} \ln\langle e^{q\Upsilon^\tau}\rangle.$   (5.4)

Notice that the conservation of probability implies that³ G(0) = 0. The characteristic function G(q) allows calculation of the cumulants of the distribution of Υ^τ as derivatives in zero. In particular,

$\langle \upsilon \rangle = \left.\frac{dG}{dq}\right|_{q=0}, \qquad \lim_{\tau\to\infty} \frac{\langle(\Upsilon^\tau - \tau\langle\upsilon\rangle)^2\rangle}{\tau} = \left.\frac{d^2 G}{dq^2}\right|_{q=0},$   (5.5)

where the latter quantity is the diffusion coefficient of the process Υ^τ. The Lyapunov exponents can be treated with this formalism; it is sufficient to identify Υ^τ with Γ_k(τ). As a result, λ = ⟨υ⟩ (for the sake of simplicity, from now on the subscript index k will be dropped). By combining Eq. (5.1) with Eq. (5.4), one obtains

$qL(q) = \lim_{\tau\to\infty} \frac{1}{\tau} \ln\left\langle \frac{\|H(t_0, t_0+\tau)\,u(t_0)\|^q}{\|u(t_0)\|^q} \right\rangle,$   (5.6)

where, instead of referring to the characteristic function, we have introduced the customary definition of the generalised Lyapunov exponent

$L(q) = G(q)/q.$   (5.7)

For q → 0, L(0) reduces to the standard Lyapunov exponent λ (the limit may be computed by applying L'Hôpital's rule to the expression (5.6)). L(q) is a monotonically increasing function of q (see, e.g., Fig. 5.3b). Its variation with q reflects the existence of a finite range of exponential growth rates, as discussed next. Of particular relevance is L(1), which coincides with the topological entropy (see Chapter 6).

³ In the case of chaotic repellers, the initial measure decreases exponentially, and one has thereby to modify the equation slightly.
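The characteristic function G(q) can be estimated directly from Eq. (5.4) by averaging e^{qΥ^τ} over an ensemble of trajectories. The sketch below does this for the piecewise linear map (5.15), analysed in Section 5.5, with p = 1/2 (slopes 1 and 2); G(0) vanishes identically, while G(1) approaches the topological entropy ln[(1 + √5)/2] ≈ 0.481 (the ensemble size and the time horizon are arbitrary choices):

```python
import math, random

def skew_map(u, p=0.5):
    """Markov map of Eq. (5.15)."""
    if u < 1.0 - p:
        return 1.0 - p + p / (1.0 - p) * u
    return 1.0 - (u - 1.0 + p) / p

def slope(u, p=0.5):
    """Absolute value of the local multiplier |F'(u)|."""
    return p / (1.0 - p) if u < 1.0 - p else 1.0 / p

def G_estimate(q, p=0.5, tau=20, samples=30_000, seed=7):
    """G(q) = (1/tau) ln < exp(q Gamma_tau) >, with Gamma_tau the
    integrated log-multiplier, averaged over uniform initial conditions."""
    random.seed(seed)
    acc = 0.0
    for _ in range(samples):
        u = random.random()
        gamma = 0.0
        for _ in range(tau):
            gamma += math.log(slope(u, p))
            u = skew_map(u, p)
        acc += math.exp(q * gamma)
    return math.log(acc / samples) / tau

g0 = G_estimate(0.0)   # conservation of probability: exactly 0
g1 = G_estimate(1.0)   # ~ ln((1 + sqrt(5))/2) ~ 0.481
print(g0, g1)
```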


Large deviation function

The process υ(t) can be equivalently characterised from the scaling behaviour of the probability P(Λ, τ) to observe a finite-time exponent Λ over the time interval τ (see Eq. (5.1)),

$P(\Lambda, \tau) \propto e^{-S(\Lambda)\tau}, \qquad \tau \to \infty,$   (5.8)

where the large deviation function S(Λ) is positive-definite, with a minimum (equal to zero) at the asymptotic value of the Lyapunov exponent λ. In the absence of long-time correlations, the minimum is quadratic, meaning that the central limit theorem holds and one can approximate the distribution of finite-time Lyapunov exponents with a Gaussian for Λ ≈ λ (see also the next section). From Eqs. (5.7, 5.8), one can write

$e^{L(q)q\tau} \approx \int d\Lambda\, e^{[q\Lambda - S(\Lambda)]\tau},$

which implies the Legendre transformation (or, in the case of non-smoothness and/or non-concavity, the Legendre-Fenchel transform),

$qL(q) = \max_\Lambda\{q\Lambda - S(\Lambda)\}.$   (5.9)

So long as S(Λ) is a smooth function, the maximum is attained for the special value Λ* where S′(Λ*) = q, so that

$qL(q) = q\Lambda^* - S(\Lambda^*).$   (5.10)

This transformation has a simple geometrical interpretation. Given a generic function S(Λ) (see, for instance, Fig. 5.3a), its tangent in the point of abscissa Λ* intersects the vertical axis in a point of ordinate equal to −qL(q). The resulting generalised LE L(q) is plotted in panel (b) of the same figure. Finally, Eq. (5.9) can be inverted. As for generic Legendre-type transforms, given L(q), one can relate q with Λ through Λ = (qL)′, while S(Λ) is obtained by inverting Eq. (5.10): S(Λ) = q(Λ − L(q)). This means that the fluctuations of the finite-time Lyapunov exponents, often referred to as the stability spectrum, can be equally characterised by either the large deviation function S(Λ) or the generalised exponents L(q).

Notice in Fig. 5.3a that the range of possible Λ-values is finite: it is delimited by the points where S(Λ) = Λ. In fact, in a deterministic dynamical system, exp[(Λ − S(Λ))t] corresponds to the number of trajectories of length t which are characterised by the exponent Λ (Badii and Politi, 1997): wherever S(Λ) > Λ, such a multiplicity vanishes,

Fig. 5.3 [figure: (a) the large deviation function S(Λ), with the tangent construction at Λ*, its ordinate S(Λ*) and the intercept −qL marked; (b) L(q) versus q] Large deviation function (a) and generalised Lyapunov exponents (b) for the model (5.15) with p = 1/2. The action of the Legendre-Fenchel transform (5.9) is also schematically illustrated. The tilted solid line in panel (a) corresponds to the bisectrix S = Λ.

meaning that the corresponding finite-time LE Λ cannot be observed at all. The minimum and maximum LEs are typically attained by simple (fixed-point, periodic or quasiperiodic) trajectories. Indeed, if such a special, non-typical trajectory (or, more generally, an invariant subset inside chaos) possesses a certain Lyapunov exponent, then a typical trajectory can potentially come close to this non-typical set, yielding the possibility for the corresponding Lyapunov exponent to be observed for a finite time (which can in fact be arbitrarily long). Thus, the domain of the large deviation function describes the spectrum of instabilities of invariant sets inside chaos. We will see applications of this in Section 9.2. In the presence of degeneracies (i.e. when different periodic orbits are characterised by the same exponent), the ending points of S(Λ) may lie strictly below the bisectrix. On the other hand, in stochastic processes and in random dynamical systems, S(Λ) can take any positive value, as S(Λ) is determined by the properties of the given stochastic process.

In a more general context, one could consider the whole set of finite-time Lyapunov exponents Λ = {Λ_1, Λ_2, ..., Λ_N}, thereby introducing the probability distribution P(Λ, τ) and the corresponding large deviation function,

$P(\mathbf{\Lambda}, \tau) \propto e^{-S(\mathbf{\Lambda})\tau}, \qquad \tau \to \infty.$

S(Λ), while still a positive-definite scalar quantity, is now a function of N variables; it provides the most detailed characterisation of the LE fluctuations. It can, in principle, be mapped onto a suitable characteristic function, but now it would be necessary to introduce a set of N indices q_i. In many applications, the observable of interest is a linear combination of various Lyapunov exponents (see, e.g., the Kolmogorov-Sinai entropy, which is the sum of the


positive LEs, Chapter 6),

$\upsilon(\mathbf{\Lambda}) = \sum_i c_i\,\Lambda_i(\tau).$

Assuming knowledge of S(Λ), one can easily determine the appropriate large deviation function as

$S_\upsilon(\Upsilon) = \max\{S(\mathbf{\Lambda}, \tau)\,|\,\upsilon(\mathbf{\Lambda}) = \Upsilon\}.$
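The Legendre-Fenchel transform (5.9) is easy to carry out numerically. The sketch below recovers S(Λ) from the closed-form generalised exponent of the map (5.15) derived in Section 5.5 (p = 1/2; the grids are arbitrary choices), and checks that the minimum of S sits at λ = G′(0):

```python
import math

def qLq(q, p=0.5):
    """q L(q) = ln(alpha_q), with alpha_q the largest eigenvalue of G(q)
    for the map (5.15) (closed form derived in Section 5.5)."""
    a = p ** (1.0 - q)
    return math.log((a + math.sqrt(a * a + 4.0 * (1.0 - p) ** (1.0 - q))) / 2.0)

# Legendre-Fenchel transform on finite grids:
# S(Lam) = max_q { q Lam - q L(q) }
qs = [0.02 * i for i in range(-1500, 1501)]            # q in [-30, 30]
glist = [qLq(q) for q in qs]
lams = [0.35 + 0.001 * i for i in range(341)]          # Lam in [0.35, 0.69]
S = [max(q * lam - g for q, g in zip(qs, glist)) for lam in lams]

# The minimum of S (zero) is attained at lambda = G'(0):
h = 1e-6
lam_true = (qLq(h) - qLq(-h)) / (2.0 * h)
lam_min = lams[S.index(min(S))]
print(lam_true, lam_min, min(S))
```

Since q = 0 belongs to the grid and qL(0) = 0, the computed S is non-negative by construction, in agreement with Eq. (5.8).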

5.3 Gaussian approximation

In general, determining S is too ambitious a task – especially for high-dimensional systems (see, however, the next section for a powerful method for detecting extremely improbable events). One can nevertheless infer relevant properties already from the curvature of S around the minimum λ, i.e. from the first terms in the expansion

$S(\mathbf{\Lambda}) \approx \frac{1}{2}(\mathbf{\Lambda}-\boldsymbol{\lambda})\,Q\,(\mathbf{\Lambda}-\boldsymbol{\lambda}).$   (5.11)

This is nothing but the Gaussian approximation, as is readily seen in the case of a scalar variable, when Q reduces to the single component Q_11,

$P(\Lambda, \tau) \approx e^{-\frac{(\Lambda-\lambda)^2 Q_{11}\tau}{2}}.$

From this equation it is transparent that P(Λ, τ) is indeed a Gaussian distribution centred around the asymptotic value of the Lyapunov exponent, with a variance D = 1/(Q_11 τ). This result is consistent with the central limit theorem, which states that the standard deviation of the average of M independent variables is of the order of 1/√M (here τ plays the role of M). In other words, as we have made no assumption on the distribution of the Λ-values, we learn that the aforementioned quadratic approximation is equivalent to assuming that correlations are sufficiently weak. This does not exclude that in some cases S may not have a quadratic minimum (see, e.g., the case of weak chaos discussed in Section 5.7.2). In general, it is preferable to refer to the symmetric diffusion matrix D = Q^{-1}. Its elements D_ij can be easily determined from the (linear) growth rate of the (co)variances of (Γ_i(τ) − λ_i τ),

$D_{ij} = \lim_{\tau\to\infty}\left[\,\overline{\Gamma_i(\tau)\,\Gamma_j(\tau)} - \lambda_i\lambda_j\tau^2\right]/\tau.$   (5.12)

Here the overline denotes a time average. The information contained in D can be expressed in a more compact form by determining its (positive) eigenvalues μ_k (k = 1, ..., N), i.e. the so-called principal components, which correspond to the fluctuation amplitudes along the principal axes. The knowledge of the eigenvalues can indeed help to identify, if not to discover, constraints in the tangent-space dynamics. For instance, if the total volume-contraction rate is constant in the phase space, and the QR definition of the finite-time LEs is used, it follows that $\sum_i \Gamma_i(\tau) = \mathrm{const}$, i.e. all {Γ_i} n-tuples lie in the same hyperplane (see the next section for a discussion of possible differences between the two definitions of finite-time Lyapunov exponents given in Section 5.1). As a result, one eigenvalue of D must be equal to zero, as it corresponds to the (vanishing) transversal width of the hyperplane itself. Another instructive case is that of symplectic dynamics (again, the QR definition of the finite-time LEs is used). Since the LEs come in pairs whose sum is zero, the fluctuations of the negative LEs are perfectly anticorrelated with those of the positive ones. As a result, D has an additional symmetry, D_{N+1−i,j} = D_{i,N+1−j} = −D_{ij}, and half of the principal components are exactly equal to zero. Altogether, the possible existence of zero eigenvalues reinforces the choice of studying D rather than its ill-defined inverse Q. Moreover, since the matrices Q and D are diagonal in the same basis, and the eigenvalues of Q are the inverses of those of D, one can infer the scaling behaviour of the former from that of the latter. One must simply be careful to discard the redundant variables associated with the zero eigenvalues.
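In the scalar case, Eq. (5.12) reduces to the linear growth rate of the variance of Γ(τ). A sketch for the map (5.15) with p = 0.4, chosen so that the two slopes differ and Γ fluctuates; the tiny additive noise is only a numerical safeguard against the collapse of piecewise linear orbits onto spurious cycles in floating-point arithmetic:

```python
import math, random

def skew_map(u, p):
    """Markov map of Eq. (5.15)."""
    return 1.0 - p + p / (1.0 - p) * u if u < 1.0 - p else 1.0 - (u - 1.0 + p) / p

def diffusion_coefficient(p=0.4, tau=50, windows=10_000, seed=3):
    """Estimate lambda and D = lim Var(Gamma(tau))/tau, cf. Eq. (5.12),
    from non-overlapping windows of a single long trajectory."""
    random.seed(seed)
    u = random.random()
    gammas = []
    for _ in range(windows):
        g = 0.0
        for _ in range(tau):
            g += math.log(p / (1.0 - p) if u < 1.0 - p else 1.0 / p)
            u = skew_map(u, p)
            # tiny noise: numerical safeguard only
            u = min(max(u + random.uniform(-1e-12, 1e-12), 0.0), 1.0)
        gammas.append(g)
    lam = sum(gammas) / (windows * tau)
    var = sum((g - lam * tau) ** 2 for g in gammas) / windows
    return lam, var / tau

lam_est, D_est = diffusion_coefficient()
print(lam_est, D_est)
```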

5.4 Numerical methods

In this section we briefly review some tools based on finite-time Lyapunov exponents, as well as the so-called Lyapunov weighted dynamics, which allows the study of large deviations.

5.4.1 Quick tools

Sometimes it may be useful to rapidly assess the chaotic character of a given trajectory. This is the case of Hamiltonian systems in the presence of a mixed phase space, when there is, e.g., the need to identify regular islands in a chaotic sea. It is also the case of celestial mechanics (see Section 12.5), where chaos is weak and the trajectories are unavoidably “relatively” short. The overall goal is to extract as much information as possible from finite-time Lyapunov exponents. In this context, the dependence on the initial condition is not considered a drawback but rather an opportunity to classify different regions according to their “short-term” stability. The most serious difficulty is the lack of knowledge of the covariant Lyapunov vectors; a randomly chosen initial direction may indeed be accidentally close to one of the less expanding directions. In order to minimise such an effect, various approaches have been proposed. Here, we briefly summarise two of them: the Fast Lyapunov Indicator (FLI) (Froeschlé et al., 1997) and the Generalised Alignment Index (GALI) (Skokos et al., 2007). Both of them make reference to the parallel evolution of p unit vectors {u_1, u_2, ..., u_p} in the tangent space. In the case of the FLI, these unit vectors are initially chosen as an orthonormal basis and are thereby allowed to freely evolve for some time interval τ. The indicator is finally defined as

$L_{FLI}(\tau) = \max_j \ln\|u_j(\tau)\|.$


The max operation allows the discarding of those “unfortunate” initial conditions in the tangent space, which may initially be improperly aligned. In the case of the GALI, the tangent vectors are rescaled (but not orthogonalised!) to obtain unit vectors û_j at all times. The order-p indicator is thereby defined as the volume of the parallelepiped identified by the tangent vectors,

$V^p_{GALI}(t) = \|\hat u_1 \wedge \hat u_2 \wedge \cdots \wedge \hat u_p\|.$

In Hamiltonian systems, where a zero maximum exponent implies that all other exponents vanish as well, this indicator allows chaotic dynamics to be distinguished from ordered dynamics. In fact, in the case of a chaotic regime (and in the absence of degeneracies), the GALI decreases exponentially in time:

$V^p_{GALI}(t) \approx \exp\{-[(\lambda_1-\lambda_2) + (\lambda_1-\lambda_3) + \cdots + (\lambda_1-\lambda_p)]\,t\}.$

In the case of a regular dynamics, the order-p GALI either remains finite or, at most, decays as a power law, when p is larger than the number of frequencies that characterise the given trajectory (Skokos et al., 2007). Of course, this method with p = 2 cannot recognise the presence of chaotic dynamics if the two largest positive LEs are nearly equal.
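A minimal GALI-2 computation for the Chirikov-Taylor standard map; the parameters and initial conditions are illustrative choices (K = 10 with a generic initial condition yields a chaotic orbit, while K = 0.5 near the elliptic fixed point (0.5, 0) yields a regular one):

```python
import math

def gali2(K, x, p, steps=500):
    """Order-2 GALI for the standard map
    p' = p + (K/2pi) sin(2pi x),  x' = x + p'  (mod 1):
    wedge-product norm of two tangent vectors, each renormalised
    (but not orthogonalised) at every step."""
    v1, v2 = [1.0, 0.0], [0.0, 1.0]
    for _ in range(steps):
        c = K * math.cos(2.0 * math.pi * x)
        for v in (v1, v2):
            # tangent map: (dx, dp) -> ((1+c) dx + dp, c dx + dp)
            v[0], v[1] = (1.0 + c) * v[0] + v[1], c * v[0] + v[1]
            n = math.hypot(v[0], v[1])
            v[0], v[1] = v[0] / n, v[1] / n
        p = (p + K / (2.0 * math.pi) * math.sin(2.0 * math.pi * x)) % 1.0
        x = (x + p) % 1.0
    return abs(v1[0] * v2[1] - v1[1] * v2[0])

g_chaotic = gali2(10.0, 0.3, 0.2)    # decays exponentially
g_regular = gali2(0.5, 0.55, 0.0)    # at most a power-law decay
print(g_chaotic, g_regular)
```

The two tangent vectors align exponentially fast along the chaotic orbit, so the wedge product collapses to (numerical) zero, while on the regular orbit it decays only slowly.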

5.4.2 Weighted dynamics

The previous discussion of finite-time LEs shows that the true exponents are the averaged ones, but there are deviations, due to the existence of phase-space regions characterised by an anomalously strong or weak local instability (or even local stability). Large deviations are, by definition, improbable, and it is therefore difficult and time-consuming to collect enough statistics for a reliable estimate of their contribution. Tailleur and Kurchan (2007) have proposed an effective algorithm that is inspired by the sequential (or diffusion) Monte Carlo dynamics often used in statistical mechanics. A set of suitably selected trajectories (replicas) is simulated simultaneously to properly determine average observables, even when they are dominated by large deviations. An example is the generalised Lyapunov exponent L(q) for q sufficiently far from 0 (for the sake of simplicity, we refer to the maximum LE, the extension to the other exponents being conceptually straightforward). The method proceeds by alternating two steps: (i) the parallel evolution of N_c trajectories (including the position in real space and the corresponding Lyapunov vector) for a given time t, to determine Γ_j(t) (see Eq. (5.1) – notice that the subscript here labels different trajectories), and (ii) a suitable adjustment of the pool of trajectories, performed in such a way as to favour those which provide the most important contribution to L(q). In practice, at the end of the first step, the expansion factor $w_j(t,q) = e^{q\Gamma_j(t)}$ is computed for each trajectory and thereby averaged, to obtain

$W(t,q) = \frac{1}{N_c}\sum_j w_j(t,q).$

Afterwards, each replica is replaced by $M = \lfloor \eta_j + w_j(t,q)/W(t,q) \rfloor$ copies, where η_j is a random number uniformly distributed between 0 and 1, while ⌊x⌋ denotes the largest integer smaller than x. In practice, if M = 0, the original trajectory is removed; if M = 1, nothing is done; while if M > 1, M − 1 clones of the trajectory are added to the pool. This new set of trajectories is then iterated for another time lapse t (step (i)). In order to prevent the daughter trajectories from evolving exactly as their parents, either a small amount of noise is added to the evolution in real space, or some noise is added just at the cloning moment. In both cases the noise must be small enough not to substantially modify the invariant measure. The generalised exponent L(q) can eventually be estimated as

$L(q) = \frac{1}{qt}\langle \ln W(t,q)\rangle,$

where the average ⟨·⟩ is performed over the total number of steps. As usual, t must be large enough to ensure that the correlations have died out. Although the number of replicas is, on average, equal to N_c, it is expected to diffuse because of the randomness of the deletions/insertions. It is, therefore, wise to either randomly prune or clone some of them, to ensure a constant number. Experience with this type of Monte Carlo algorithm suggests that the numerical estimates may be affected by metastability (a local optimum is selected which differs from the global one). In order to minimise such problems, it is advisable to perform different runs and/or to increase the number of samples in the pool, to check the stability of the results.

The method works because the trajectory resampling does not modify the average value W(t, q), but makes the estimate statistically more reliable by increasing the weight of the appropriate trajectories. This can be better appreciated by looking at an equivalent description of the resampling process. Given the weights w_j, let us order them from the minimum to the maximum and introduce the integral variable $C(i) = \sum_{j \le i} w_j$ […]. As a result, the highly dense regions are pruned, while the sparser ones are re-populated, so as to leave the total sum (the quantity we are interested in) unchanged. Much care has to be taken when the fluctuations of the Lyapunov exponent lead to very small or even negative values, as the replicas of the phase points cannot rapidly generate statistically independent trajectories. The reader interested in the formal aspects of the Lyapunov weighted dynamics and in further details is invited to consult Tailleur and Kurchan (2007), Vanneste (2010) and Laffargue et al. (2013).
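The cloning scheme can be sketched for the one-dimensional map (5.15) with p = 1/2, for which L(1) equals the topological entropy ln[(1 + √5)/2] ≈ 0.481 (see Section 5.5); the pool size, run length and noise amplitude are arbitrary choices:

```python
import math, random

def skew_map(u, p=0.5):
    """Markov map of Eq. (5.15)."""
    return 1.0 - p + p / (1.0 - p) * u if u < 1.0 - p else 1.0 - (u - 1.0 + p) / p

def weighted_dynamics(q, p=0.5, nc=2000, steps=700, burn=100, seed=11):
    """Lyapunov weighted dynamics (cloning) estimate of L(q) for a 1D map.
    Each step (t = 1): evolve every replica, weight it by w = exp(q*Gamma),
    clone/prune it into floor(eta + w/W) copies, and accumulate ln W."""
    random.seed(seed)
    pool = [random.random() for _ in range(nc)]
    acc, nacc = 0.0, 0
    for step in range(steps):
        evolved = []
        for u in pool:
            gamma = math.log(p / (1.0 - p) if u < 1.0 - p else 1.0 / p)
            evolved.append((math.exp(q * gamma), skew_map(u, p)))
        W = sum(w for w, _ in evolved) / len(evolved)
        new_pool = []
        for w, u in evolved:
            copies = int(random.random() + w / W)   # floor(eta + w/W)
            for _ in range(copies):
                # small noise keeps the clones statistically independent
                v = min(max(u + random.uniform(-1e-6, 1e-6), 0.0), 1.0)
                new_pool.append(v)
        # keep the pool size fixed by random pruning / cloning
        while len(new_pool) > nc:
            new_pool.pop(random.randrange(len(new_pool)))
        while len(new_pool) < nc:
            new_pool.append(new_pool[random.randrange(len(new_pool))])
        pool = new_pool
        if step >= burn:
            acc += math.log(W)
            nacc += 1
    return acc / (q * nacc)

L1 = weighted_dynamics(1.0)
print(L1)   # ~ ln((1 + sqrt(5))/2) ~ 0.481
```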

5.5 Eigenvalues of evolution operators

The generalised Lyapunov exponents correspond to the asymptotic scaling behaviour of suitable observables. The problem of their computation can be recast as the identification of the leading eigenvalue of a suitable evolution operator. We first illustrate this idea by


considering piecewise linear maps of the interval in the presence of a Markov partition (see Appendix D): this simplification, besides making the analysis more understandable, helps to elucidate the difficulties encountered in more general contexts, before they manifest themselves. The existence of a Markov partition means that the unit interval can be split into disjoint subintervals I_i (1 ≤ i ≤ B), such that F(I_i) is equal to the union of one or more such subintervals. If, moreover, we assume that the map is linear within each I_i, it is easy to verify that the invariant distribution is constant within each subinterval, and one can reduce the iteration of the Perron-Frobenius operator to a matrix multiplication,

$\rho(t+1) = Q\,\rho(t),$   (5.13)

where ρ is a vector of dimension B. Its components correspond to the probability densities within the intervals I_i at time t, while Q_ij is the transition rate from interval j to interval i. In practice, Q_ij = 1/|F'_j| if I_i ∩ F(I_j) ≠ ∅ and Q_ij = 0 otherwise. The generalised Lyapunov exponent L(q) can then be obtained from the scaling behaviour of the moments

$\langle M(\tau)^q \rangle = \int M(\tau, U)^q\, \rho(U)\, dU,$

where M(τ, U) is the multiplier along a trajectory of length τ, located in U at time τ. As the density is piecewise constant, one can express the moment as the sum of a finite number of components,

$\langle M(\tau)^q \rangle = \sum_i \ell_i\, \rho_i\, \overline{M_i^q}(\tau),$

where $\overline{M_i^q}(\tau)$ is an average over the trajectories ending in the interval I_i at time τ, while ℓ_i is the width of the same interval. From Eq. (5.13), it is found that the vector $\mathbf{M}^q(\tau)$ follows the evolution equation

$\mathbf{M}^q(\tau+1) = G(q)\,\mathbf{M}^q(\tau),$   (5.14)

where the entries of the matrix G(q) are simply given by $G_{ij}(q) = |F'_j|^{q-1}$ (with the same sparsity structure as Q). As a result, from Eq. (5.7),

$L(q) = \lim_{\tau\to\infty} \frac{1}{q\tau} \ln\langle M(\tau)^q\rangle = \frac{\ln \alpha_q}{q},$

where α_q is the largest eigenvalue of G(q). From the definition of G(q) (see Eq. (5.14)), it follows that G_ij(1) is either equal to 1, if I_i ∩ F(I_j) ≠ ∅, or zero otherwise. In fact, L(1) identifies the topological entropy, an indicator that quantifies the growth rate of the number of different trajectories, irrespective of their probability. In particular, any linear or nonlinear map characterised by the same Markov partition is characterised by the same topological entropy L(1). As an example, we now consider the map

$U(t+1) = \begin{cases} 1 - p + \dfrac{p}{1-p}\,U(t) & \text{if } U(t) < 1-p, \\[4pt] 1 - \dfrac{U(t)-1+p}{p} & \text{if } U(t) > 1-p. \end{cases}$   (5.15)


The corresponding evolution operator is the 2 × 2 matrix

$Q = \begin{pmatrix} 0 & p \\ \frac{1-p}{p} & p \end{pmatrix},$

whose eigenvalues are α = 1 and α = p − 1. The first eigenvalue follows from the conservation of probability, while the second eigenvalue accounts for the exponential convergence towards the stationary distribution. The evolution of the multiplier moments is instead controlled by

$G(q) = \begin{pmatrix} 0 & p^{1-q} \\ \left(\frac{1-p}{p}\right)^{1-q} & p^{1-q} \end{pmatrix}.$

Once again, the matrix is characterised by two eigenvalues. The generalised Lyapunov exponent is defined by the largest one, i.e.

$L(q) = \frac{1}{q}\,\log\left[\frac{p^{1-q} + \sqrt{p^{2(1-q)} + 4(1-p)^{1-q}}}{2}\right].$

Its shape and the corresponding large deviation function S for p = 0.5 are drawn in Fig. 5.3. The minimum and the maximum exponents correspond to a period-2 and a fixed-point orbit, respectively. The separation between the maximum and the second exponent determines the convergence rate of L(q), if one were to determine it from Eq. (5.6) for a finite τ. This is true in general, irrespective of the dimensionality of the matrix G(q).

So far, we have analysed a setup where the evolution operator reduces to a finite matrix, and the generalised Lyapunov exponents can thereby be exactly determined. In general, one cannot get rid of the infinite dimensionality of the Perron-Frobenius operator, and it is necessary to rely on suitable perturbative techniques. This happens already in one-dimensional maps which, although admitting a Markov partition, have a nonlinear structure, so that the invariant measure is not piecewise constant, as in the previous example. In such a case, it is natural to approximate the original map with a piecewise linear one, by refining the Markov partition. This goal can be achieved by further splitting the unit interval with the help of the preimages of the borders of the initial atoms. The strategy is briefly illustrated in Fig. 5.4, where the standard generating partition, made of the two intervals [P_0, P_2] and (P_2, P_1], is Markov, since F(P_0) = P_2 (while F(P_1) = P_0 and F(P_2) = P_1, by construction). A second-order refinement leads to the addition of three points, namely P_3, P_4 and P_5 (with F(P_3) = P_0 and F(P_4) = F(P_5) = P_3), and thereby to a partition made of five atoms; see I_1, ..., I_5 in Fig. 5.4, where the dotted line describes the corresponding piecewise linear approximation of the original map. The convergence of the asymptotic Lyapunov exponents and of the large deviation function can be appreciated in Fig.
5.5, where we report the results for increasingly accurate partitions (see the figure caption). Notice that all approximations yield the same value of L(1). In fact, as mentioned previously, L(1) corresponds to the topological entropy, a concept that is associated with a counting of orbits, irrespective of their probability.
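The eigenvalue computation above is easy to cross-check numerically. The following sketch (ours, not from the book) builds G(q) for the map (5.15) and compares the largest-eigenvalue route with the closed-form expression for L(q); for q = 1, G(1) reduces to the connectivity matrix, so that L(1) is the topological entropy.

```python
import numpy as np

def L_transfer(q, p):
    # G_ij(q) = |F'_j|^(q-1) wherever atom I_i intersects F(I_j), cf. Eq. (5.14)
    G = np.array([[0.0,                  p**(1 - q)],
                  [((1 - p)/p)**(1 - q), p**(1 - q)]])
    alpha_q = max(abs(np.linalg.eigvals(G)))   # largest eigenvalue of G(q)
    return np.log(alpha_q) / q

def L_closed(q, p):
    # closed-form largest eigenvalue of the 2x2 matrix G(q)
    return np.log((p**(1 - q) + np.sqrt(p**(2*(1 - q)) + 4*(1 - p)**(1 - q))) / 2) / q

p = 0.5
for q in (0.5, 1.0, 2.0, 3.0):
    assert abs(L_transfer(q, p) - L_closed(q, p)) < 1e-12

# The evolution operator Q has eigenvalues 1 and p - 1, while L(1) is the
# topological entropy: here G(1) = [[0,1],[1,1]], so L(1) = ln[(1+sqrt(5))/2].
Q = np.array([[0.0, p], [(1 - p)/p, p]])
print(np.sort(np.linalg.eigvals(Q).real))   # [p - 1, 1]
print(L_transfer(1.0, p))                   # ~0.4812, the log of the golden ratio
```

The closed-form agreement holds for any p ∈ (0, 1); p = 0.5 matches the value used for Fig. 5.3.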


Fig. 5.4 (a) Piecewise nonlinear map: U(t + 1) = a + (1 − a)U(t)/a + U(t)(a − U(t)) (for U(t) < a); U(t + 1) = 1 − (U(t) − a)/(1 − a) + b(1 − U(t))(U(t) − a) (for U(t) > a), with a = 1/2, b = 2. The diagonal corresponds to the bisectrix. (The labels P0, P4, P2, P5, P3, P1 mark the partition points along the U axis, delimiting the atoms I1, . . . , I5.) (b) The corresponding probability density.

In generic dynamical systems, however, no finite Markov partition exists at all, and even the topological entropy cannot be exactly determined. In such cases, the very first objective consists in building (either implicitly or explicitly) an approximate Markov partition. An effective, though entirely numerical, strategy consists in identifying (with the help of a generating partition) the irreducible "forbidden" words of increasing length. A word is called forbidden if no trajectory exists which can generate such a sequence; an example is the word "00" in the map in Fig. 5.4. The word is termed irreducible if it does not contain any shorter subsequence that is itself forbidden. Given a list of such words up to length n, it is possible to build a suitable Markov process which generates a sequence of words (a language) with exactly the same set of irreducible words (see Badii and Politi (1997)). By construction, the resulting topological entropy provides an upper bound to the true value, since the missing information amounts to additional constraints that are not taken into account. The corresponding (approximate) Markov partition can then be refined, as in the previous case, to better approximate the nonlinearity of the map.
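To make the word-counting argument concrete, here is a small sketch (ours): taking "00" as the only irreducible forbidden word — as for the map of Fig. 5.4 — the admissible sequences form the golden-mean shift, whose topological entropy is the logarithm of the largest eigenvalue of the symbol transition matrix.

```python
import numpy as np

# A[i, j] = 1 if symbol j may follow symbol i; the word "00" is forbidden.
A = np.array([[0, 1],
              [1, 1]])
h_top = np.log(max(abs(np.linalg.eigvals(A))))
print(h_top)   # ln[(1 + sqrt(5))/2] ~ 0.4812

def count_words(n):
    # brute-force count of admissible binary words of length n
    return sum("00" not in format(w, f"0{n}b") for w in range(2**n))

# the number of admissible words grows as exp(h_top * n)
print(np.log(count_words(16) / count_words(15)))   # ~ h_top
```

The brute-force count reproduces the Fibonacci numbers, so the growth-rate estimate converges to h_top very quickly.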

Fig. 5.5 (a) Large deviation function and (b) generalised exponents for the map described in the previous figure for different levels of approximation. Solid, dashed and dotted lines refer to partitions built with all preimages up to orders 8, 12 and 16, respectively. The tilted straight line in panel (a) corresponds to the bisectrix.

5.6 Lyapunov exponents in terms of periodic orbits

In the previous sections we have seen that Lyapunov exponents can be determined by averaging some observable or determining the (leading) eigenvalue of suitable evolution operators. As a third option, one can exploit the information contained in the stability of the (unstable) periodic orbits that are embedded in a generic chaotic set.⁴ In this section we mainly follow the book by Cvitanović et al. (2013), with a slight adjustment of the notations.

We start by introducing in a more formal way the relationship between the computation of (generalised) Lyapunov exponents and the identification of the largest eigenvalue of a suitable evolution operator. Given the general Eq. (5.3), if one adds the integration of an N-dimensional δ-function over the set of all final points V = F^τ(U) (omitting the argument U(0) in the initial density as well as in ϒ^τ), it is found that

⟨e^{qϒ^τ}⟩ = ∫ d^N U ρ(U) ∫ d^N V δ^N(V − F^τ(U)) e^{qϒ^τ(U)}.    (5.16)

This relation can be considered as the action of the evolution operator J^τ with kernel J^τ(V, U),

J^τ(V, U) = δ(V − F^τ(U)) e^{qϒ^τ(U)},     J^τ φ(V) = ∫ d^N U J^τ(V, U) φ(U).

⁴ Or hidden in the different realisations of a stochastic process – see the application of the zeta-function formalism to products of random matrices in Section 8.1.



Its action on the initial density and the subsequent integration over the final point leads precisely to relation (5.16), which can be rewritten as

e^{τG(q)} = ⟨e^{qϒ^τ}⟩ = ∫ d^N V [ ∫ J^τ(V, U) ρ(U) d^N U ].    (5.17)

The main advantage of working with the operator J^τ is its multiplicativity along the time evolution: J^{t1+t2} = J^{t2} J^{t1}. As a result, Eq. (5.17) implies that the characteristic function G(q) indeed corresponds to (the logarithm of) the largest eigenvalue of J, as more heuristically seen in the previous section. Now, we focus on the trace of J. The trace, being the sum of the eigenvalues, is dominated by the largest one, which is needed to determine G(q). The trace emerges naturally by setting V = U,

e^{τG(q)} ∼ tr J^τ = ∫ d^N U J^τ(U, U) = ∫ d^N U δ^N(U − F^τ(U)) e^{qϒ^τ(U)}.    (5.18)

Thanks to the presence of the δ-function, the integral in this equation can be easily computed. The only contributions are those from the points U_s = F^τ(U_s), i.e. from the fixed points of the τth iterate of the map F. These are all periodic orbits with a period τ/m with m ≥ 1. Supposing that there are M(τ) such points, one can write

e^{τG(q)} ≈ ∫ d^N U J^τ(U, U) = Σ_{s=1}^{M(τ)} e^{qϒ^τ(U_s)} / |det(I − J_τ)|.    (5.19)

The factor |det(I − J_τ)|, where J_τ = ∂F^τ/∂U is the Jacobian matrix of the map F^τ at its fixed points, appears via the integration of the δ-function. Using the identity det(I − J_τ) = Π_{j=1}^{N} (1 − μ_j), where the μ_j are the eigenvalues of J_τ, i.e. the multipliers of the fixed points (some of them can be complex), we can rewrite (5.19) as

e^{τG(q)} ≈ Σ_{s=1}^{M(τ)} e^{qϒ^τ(U_s)} Π_{j=1}^{N} |1 − μ_j|^{−1}.

Under the assumption of strict hyperbolicity, this expression can be simplified for large τ. If there are N+ unstable directions, then N+ multipliers of F^τ are very large, while the remaining (N − N+) multipliers are very small. By thereby approximating Π_{j=1}^{N} |1 − μ_j| ≈ Π_{j=1}^{N+} |μ_j|, one obtains

G(q) ≈ (1/τ) ln [ Σ_{s=1}^{M} e^{qϒ^τ(U_s)} Π_{j=1}^{N+} |μ_j|^{−1} ].

With the help of (5.5), this expression of the characteristic function allows one to determine the average of a generic observable in terms of averages performed over the periodic orbits,

⟨υ⟩ ≈ [ Σ_s τ^{−1} ϒ^τ(U_s) Π_{j=1}^{N+} |μ_j|^{−1} ] / [ Σ_s Π_{j=1}^{N+} |μ_j|^{−1} ].    (5.20)

This equation means that the weight of the contribution ϒ^τ/τ of an orbit of period τ (i.e. a fixed point of the map F^τ(U)) is inversely proportional to the product of the unstable multipliers along the orbit itself. Thus, the more unstable an orbit is, the smaller its contribution to the general average. In the case of the largest Lyapunov exponent, ϒ^τ is the logarithm of the largest multiplier μ_1, so that

λ ≈ [ Σ_{s=1}^{M} τ^{−1} ln|μ_1(U_s)| Π_{j=1}^{N+} |μ_j|^{−1} ] / [ Σ_{s=1}^{M} Π_{j=1}^{N+} |μ_j|^{−1} ].    (5.21)

In one-dimensional maps there is only one multiplier, and the expression (5.21) reduces to

λ ≈ [ Σ_{s=1}^{M} τ^{−1} ln|μ(U_s)| |μ(U_s)|^{−1} ] / [ Σ_{s=1}^{M} |μ(U_s)|^{−1} ].    (5.22)

Increasingly accurate approximations are obtained by considering larger τ-values in Eqs. (5.21, 5.22).

Eq. (5.18) can be better handled with the help of the zeta-function formalism. We start by rewriting Eq. (5.19) as

tr J^τ = Σ_{s=1}^{M} e^{qϒ^τ(U_s)} / |det(I − J_τ)|    (5.23)

and introduce the prime cycles as the non-repeating ones. If a periodic orbit P has a minimal (basic) period p, then it is also periodic with period rp for any integer r. Thus, a fixed point contributes to (5.23) for any τ, while period-2 orbits for any even τ, etc. These contributions can be easily summed. Since ϒ^τ is additive along a trajectory of basic period p, we have that ϒ^{rp} = rϒ^p. Similarly, J_{rp} = [J_p]^r, since the Jacobian matrices multiply along a trajectory. Furthermore, as all the points along a prime cycle are equivalent, they give p equal contributions to the sum in (5.23). This allows expressing the trace as a sum over the prime cycles P only,

tr J^τ = Σ_P p Σ_r [ e^{rqϒ(P)} / |det(I − J_{rp})| ] δ_{τ,rp},    (5.24)

where ϒ(P) is a shorthand notation to refer to a generic prime cycle of period p. The Kronecker δ selects only those prime cycles whose period p divides τ. It is now convenient to introduce the dynamical or Ruelle zeta function,

ζ(z, q) = exp [ Σ_{τ=1}^{∞} (z^τ/τ) tr J^τ ].    (5.25)

In the limit of large τ, tr J^τ ≈ e^{G(q)τ}, where e^{G(q)} is the largest eigenvalue of J. We therefore see that the problem of determining the characteristic function G(q) can be reduced to that of finding the smallest zero ẑ of ζ^{−1}(z, q), which corresponds to the radius of convergence of the zeta function, namely ẑ = e^{−G(q)}. It is also useful to recall the additional relationship

1/ζ(z, q) = Π_k (1 − z/z_k) = det(1 − zJ) = exp [ tr ln(1 − zJ) ],

where the z_k^{−1} are the eigenvalues of J.



With the help of (5.24), one finds that

1/ζ(z, q) = exp [ − Σ_P Σ_r z^{rp} e^{rqϒ(P)} / ( r |det(I − J_{rp})| ) ].

Notice that the sum over τ in (5.25) has allowed removing the Kronecker δ in Eq. (5.24); this is the consequence of having introduced the auxiliary variable z. In order to get rid of the nasty "det" term, the same approximation as in Eq. (5.20) can be used, namely |det(I − J_{rp})| ≈ μ(P)^r, where μ(P) = Π_{j=1}^{N+} |μ_j| is the product of all multipliers of the cycle P that are larger than 1 in absolute value. With the simplifying notation Q_P = μ(P)^{−1} z^p exp[qϒ(P)], one can finally write

1/ζ(z, q) = Π_P exp [ − Σ_{r=1}^{∞} Q_P^r / r ] = Π_P (1 − Q_P),    (5.26)

where the last step follows from the equality Σ_{r=1}^{∞} r^{−1} Q^r = −ln(1 − Q). At first glance, it looks as if the zeros of the r.h.s. of (5.26) are given by the condition Q_P = 1. This is not true, as one has an infinite product of factors, which can be written as

Π_P (1 − Q_P) = 1 − Σ′_{{P1, P2, . . . , Pk}} (−1)^{k+1} Q_{P1} Q_{P2} · · · Q_{Pk},    (5.27)

where the prime means that the sum is over all distinct non-repeating combinations of the prime cycles. This expression is basically a polynomial expansion in powers of z (notice that Q_{Pi} is a monomial proportional to z^{pi}). The average of the generic observable υ can be finally obtained from Eq. (5.5), by recalling that ẑ(q) = exp[−G(q)] is the leading zero of ζ^{−1}(z, q) and computing the total derivative of the expression ζ^{−1}(ẑ(q), q) = 0,

⟨υ⟩ = [ ∂_q ζ^{−1} / ∂_z ζ^{−1} ] |_{(1,0)} = ( Σ_π ϒ_π Q_π ) / ( Σ_π p_π Q_π ).    (5.28)

Here the subscript in ∂ denotes a partial derivative with respect to that variable, while

Q_π = (−1)^{k+1} Q_{P1} Q_{P2} · · · Q_{Pk} |_{(1,0)} = [ μ_{p1} μ_{p2} · · · μ_{pk} ]^{−1},
p_π = p1 + p2 + · · · + pk,    (5.29)
ϒ_π = ϒ_{p1} + ϒ_{p2} + · · · + ϒ_{pk}.

In the case of the largest Lyapunov exponent, ϒ_{pk} = ln|μ_1(p_k)|, where μ_1 is the largest multiplier of the prime cycle p_k. For the sake of simplicity, hereafter we consider the simplest nontrivial case of prime-cycle organisation, namely that of a dynamical system characterised by the two symbols (0, 1), where all sequences are possible (this is the case of the Bernoulli map, the tent map



and the logistic map in the fully chaotic regime). The prime cycles are: two fixed points (0 and 1), one period-2 cycle (01), two period-3 cycles (100 and 110), three period-4 cycles (1000, 1100 and 1110) and so on. Writing explicitly only cycles up to length 4, one obtains a 4th-order polynomial approximation of the zeta function,

1/ζ(z(q), q) = 1 − Q_0 − Q_1 − [Q_{10} − Q_0 Q_1]
             − [Q_{100} − Q_{10} Q_0] − [Q_{110} − Q_1 Q_{10}]
             − [Q_{1000} − Q_{100} Q_0] − [Q_{1110} − Q_1 Q_{110}] − [Q_{1100} − Q_1 Q_{100} − Q_{110} Q_0 + Q_1 Q_{10} Q_0]
             − · · ·

Here we have ordered the terms according to powers of z and additionally organised them into groups characterised by the same symbolic sequence, though composed of cycles with different periods. This grouping leads to a rapidly converging result, as the terms in square brackets nearly cancel each other. Substituting this expansion into (5.28) and (5.29), one can finally obtain the Lyapunov exponent as a ratio from Eq. (5.28),

∂_q ζ^{−1} / ∂_z ζ^{−1} =
  [ ln μ_0/μ_0 + ln μ_1/μ_1 + ( ln μ_{10}/μ_{10} − (ln μ_0 + ln μ_1)/(μ_0 μ_1) )
    + ( ln μ_{100}/μ_{100} − (ln μ_{10} + ln μ_0)/(μ_{10} μ_0) ) + ( ln μ_{110}/μ_{110} − (ln μ_{10} + ln μ_1)/(μ_{10} μ_1) ) + · · · ]
  / [ 1/μ_0 + 1/μ_1 + ( 2/μ_{10} − 2/(μ_0 μ_1) ) + ( 3/μ_{100} − 3/(μ_{10} μ_0) ) + ( 3/μ_{110} − 3/(μ_{10} μ_1) ) + · · · ],    (5.30)

where μ_α is the absolute value of the multiplier of the prime cycle with symbolic sequence α. If the map is one-dimensional and piecewise linear, the slope is constant in each of the two atoms corresponding to 0 and 1; then 1/μ_0 + 1/μ_1 = 1, μ_{10} = μ_1 μ_0, etc., and all the terms in square brackets cancel exactly, so that we find λ = (ln μ_0)/μ_0 + (ln μ_1)/μ_1, as expected. The convergence of the expressions (5.22) and (5.30) can be tested for the skew nonlinear tent map

F(U) = U/a + U(a − U)                       for 0 ≤ U ≤ a,
F(U) = (1 − U)/(1 − a) + (U − 1)(a − U)     for a < U ≤ 1.
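For the piecewise linear counterpart of such a skew tent map, the periodic-orbit formula (5.22) can be verified directly (our sketch; the constant slopes are an assumption standing in for the nonlinear map above, whose multipliers would have to be computed along each orbit). Every binary itinerary of length τ labels exactly one periodic point of the full shift, the multiplier being the product of the local slopes; the cancellations discussed after Eq. (5.30) make the result exact at any order.

```python
import numpy as np
from itertools import product

a = 0.4                          # assumed skewness parameter (our choice)
mu = {0: 1/a, 1: 1/(1 - a)}      # |slopes| of the two branches (1/mu0 + 1/mu1 = 1)

def lyap_periodic(tau):
    # Eq. (5.22): periodic points weighted by their inverse multipliers
    num = den = 0.0
    for s in product((0, 1), repeat=tau):
        m = float(np.prod([mu[si] for si in s]))
        num += np.log(m) / tau / m
        den += 1.0 / m
    return num / den

lam_exact = np.log(mu[0]) / mu[0] + np.log(mu[1]) / mu[1]
print(lyap_periodic(8), lam_exact)   # identical up to rounding
```

The exact agreement at any finite τ is special to the piecewise-linear case; for the nonlinear tent map the estimate only converges as τ grows.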

… Λ_i(τ) > Λ_{i+1}(τ) for all τ > τ_0. This property implies the absence of tangencies between the corresponding Oseledets subspaces spanned by the ith and the subsequent vector (Bochi and Viana, 2005). The probability of order-exchanges is quantified by the large deviation function

S_i(δ) = min S(Λ_1, Λ_2, . . . , Λ_i, Λ_i − δ, . . . , Λ_N),

where the minimum is taken over all possible values of (Λ_1, . . . , Λ_i) and of (Λ_{i+2}, . . . , Λ_N). In practice, for the Oseledets splitting to be dominated, it is necessary that the domain of S_i(δ) does not extend to negative values. In general, it is quite hard to verify whether this condition is satisfied. One can nevertheless extract some semi-quantitative information with the help of a Gaussian approximation. In practice, it is necessary to determine the diffusion coefficient K_i of δ_i = Λ_i − Λ_{i+1}; it can be expressed in terms of the diffusion matrix D (5.12),

K_i = D_{ii} + D_{i+1,i+1} − 2D_{i,i+1}.    (5.32)

Since the probability of an exchange of finite-time LEs is equal to the probability P⁻_i of observing a negative δ_i, we have, in the Gaussian approximation,

P⁻_i ≈ (1/2) erfc [ (Λ_i − Λ_{i+1}) √(τ/(2K_i)) ],

where "erfc" is the complementary error function. In practice, if √K_i is substantially smaller than Λ_i − Λ_{i+1}, one can conclude that the given dynamical system is effectively hyperbolic, as the probability of inversion is negligible.
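The erfc estimate is easy to evaluate; the numbers below (our illustration, with an assumed gap Λ_i − Λ_{i+1} = 0.1 and diffusion coefficient K_i = 0.05) show how quickly the exchange probability decays with the observation time τ.

```python
import math

def p_exchange(gap, K, tau):
    # Gaussian estimate of the probability that two finite-time LEs
    # exchange their order over a window of length tau
    return 0.5 * math.erfc(gap * math.sqrt(tau / (2.0 * K)))

for tau in (10, 100, 1000):
    print(tau, p_exchange(0.1, 0.05, tau))
# tau = 10   -> ~7.9e-2
# tau = 100  -> ~3.9e-6
# tau = 1000 -> essentially zero: the splitting looks effectively dominated
```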



5.7.2 Weak chaos

LEs are the appropriate tool to quantify the degree of stability of a given dynamical system when perturbations behave exponentially in time. There is, however, a relatively wide class of systems where perturbations behave sub-exponentially (Zaslavsky, 2007). This type of behaviour is often referred to as weak chaos or pseudochaos. In the context of Hamiltonian models, it includes models with mixed phase space (Zaslavsky and Edelman, 2004) and billiards (Artuso et al., 1997), but it arises also at the transition to chaos (either via period-doubling or intermittency). Whenever the perturbation growth is sub-exponential, the LE cannot discriminate between instability and neutral stability. Given the different types of dynamics that can hide behind a zero Lyapunov exponent (see, e.g., power laws vs. stretched exponentials), one cannot build a general theory. In some cases, however, in spite of an average sub-exponential growth, it is still possible to use large-deviation theory to extract useful information.

A rather general and relatively simple example is intermittency, described by one-dimensional maps of the Pomeau-Manneville type

U(t + 1) = F(U(t)) = U(t) + aU^z(t)   mod 1    (5.33)

with z > 1. The map, depicted in Fig. 5.6, is of Bernoulli type, with the important difference that one of the two fixed points (U = 0) is marginally (neutrally) stable. It is important to understand to what extent the presence of a neutrally stable point affects the overall chaotic properties. The Pomeau-Manneville dynamics with integer z typically arises at bifurcation points, when a fixed point loses stability, in which case one generically expects z = 2 (or z = 3 if some symmetry is present). Here, we discuss the case of a general z value, as it helps to better clarify the underlying mechanisms.

Fig. 5.6 Pomeau-Manneville map (5.33) for a = 1 and z = 2.3.

The computation of the LE in one-dimensional maps is relatively simple. As already seen in Section 3.4, λ can be determined as an ensemble average

λ = ∫ dU ρ(U) ln|F′(U)|,

where ρ(U) is the invariant measure. Even under such a simplifying hypothesis, an analytic expression for ρ(U) is rarely known (one of the few exceptions can be found in Pikovsky (1991)). In this case, however, it is sufficient to determine the critical behaviour of ρ in the vicinity of U = 0. For small U, the dynamics is well approximated by the differential equation U̇ = aU^z. One can assume that this evolution equation holds up to U = 1, without causing any change of the critical behaviour. Therefore, the probability distribution satisfies the equation

∂ρ/∂t = −(∂/∂U)[aU^z ρ] + S,    (5.34)

where the phenomenological uniform term S accounts for the re-injection of points from the second branch of the map. In principle, S should vary in time in order to ensure that the overall probability is constantly equal to 1. Since the fluctuations of the transient length do not affect our scaling arguments, we assume S to be constant. This equation admits the simple stationary solution

ρ_0(U) = C/U^{z−1},

where C is a suitable normalisation constant, to be determined. For z > 1, ρ_0(U) exhibits a divergence at the origin.

Let us now study the convergence of an initially flat distribution ρ(U, 0) = 1. Upon neglecting the spatial derivative of ρ in Eq. (5.34), one finds that

dρ/dt ≈ −aU^{z−1}ρ + S.

This equation reveals that the relaxation rate of ρ depends strongly on U (it vanishes for U → 0). As a consequence, at any finite time t, the probability density has relaxed to its asymptotic value only for U > δU, where δU(t) can be estimated as the point where the relaxation time is equal to the elapsed time t, obtaining δU(t) ≈ t^{−1/(z−1)}. For U < δU, relaxation is so slow that one can safely approximate ρ(U, t) with ρ_0(δU(t)). Since ∫_0^{δU} dU ρ(U, t) ≈ 1/t (independently of z), one can approximate ρ(U, t) with ρ_0 for δU < U < 1. Accordingly, the normalisation constant is

1/C(t) = [1 − (δU)^{2−z}]/(2 − z) = [1 − t^{−(2−z)/(z−1)}]/(2 − z).

For 1 < z < 2, C(t) converges to a finite value; this is no surprise, as the singularity in zero is integrable. For z > 2, instead, C(t) ≈ t^{(2−z)/(z−1)}.

We can now go back to the problem of computing the LE. For small U, the logarithm of the local multiplier vanishes as U^{z−1}, thus matching the divergence of the probability density.
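The two regimes can be observed in a quick simulation (ours; trajectory length and initial condition are arbitrary choices): for z < 2 the finite-time average of ln|F′| settles at a positive value, while for z > 2 it decays slowly towards zero because of the ever longer laminar phases near U = 0.

```python
import math

def finite_time_le(z, a=1.0, u=0.2345, t=200_000):
    # running average of ln|F'(U)| = ln(1 + a z U^(z-1)) for the map (5.33)
    s = 0.0
    for _ in range(t):
        s += math.log(1.0 + a * z * u**(z - 1.0))
        u = (u + a * u**z) % 1.0
    return s / t

lam_15 = finite_time_le(1.5)   # z < 2: a finite positive LE
lam_25 = finite_time_le(2.5)   # z > 2: slowly decaying towards zero
print(lam_15, lam_25)
```

The z = 2.5 estimate keeps shrinking as t grows, in line with the sublinear growth derived below.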



Therefore, for z < 2 (when the normalisation constant has a finite value) the integral defining λ is not singular, and a finite (positive) Lyapunov exponent is expected. For z > 2, the presence of a time-dependent normalisation constant implies that Λ(t) ≈ t^{(2−z)/(z−1)}, or, equivalently, that the overall growth of the logarithm of the perturbation is

Φ(t) = Λt ≈ t^{1/(z−1)} = t^α.    (5.35)

This equation implies that Φ does not increase linearly with time, as is usual in exponentially unstable regimes, but more slowly, with a power α < 1; in other words, we are facing a stretched-exponential behaviour.

Here, we show that in spite of the anomalous scaling of the perturbation, some useful information can nevertheless be extracted by means of the large-deviation approach. We start by considering the case z = 1.5. The corresponding generalised LE L(q) is plotted in Fig. 5.7a. There, we see that for q ≥ 0, L(q) behaves normally, while it is identically equal to zero for negative q-values, with a discontinuity at q = 0. The corresponding large-deviation function S(Λ) is plotted in panel (b). Below Λ_0 (see the vertical dashed line), S(Λ) = 0; above, S(Λ) > 0 grows quadratically and eventually reaches the diagonal in a point which corresponds to the most unstable fixed point of the map (U = 1), whose

Fig. 5.7 The generalised Lyapunov exponent L(q) and the large deviation function S(Λ) for the Pomeau-Manneville map (5.33) with a = 1. Panels (a) and (b) refer to z = 1.5, while panels (c) and (d) correspond to z = 2.3. The dashed line in panel (b) signals the value Λ_0, where S starts being strictly larger than 0.

Fig. 5.8

Probability density of the Lyapunov exponents for the Pomeau-Manneville map (5.33) for a = 1 and z = 1.5. Dashed, dotted and solid lines correspond to t = 250, 500 and 1000, respectively. The vertical dashed line corresponds to the true LE.

Lyapunov exponent is Λ = ln 2.5. This means that the probability density P(Λ, t) decreases slower than exponentially in the interval 0 < Λ < Λ_0, so that one cannot conclude where the LE is located along the interval itself. A more refined analysis is needed. In Fig. 5.8 we plot P(Λ, t) for three different time values. There we see that the distribution becomes increasingly peaked and the peak position converges towards Λ_0 (see the vertical dashed line). Moreover, one can recognise that the low-density region does not decrease exponentially with time but rather as a power law (approximately as 1/t).

For z > 2, the presence of the marginally stable fixed point U = 0 has deeper consequences on the overall stability of the dynamical system. In Fig. 5.7c, L(q) is plotted for z = 2.3. The most substantial difference with the previous case is the disappearance of the discontinuity at q = 0. This implies that S(Λ) is now strictly larger than zero for any Λ > 0, signalling that the LE is equal to zero, as previously anticipated. Numerical studies performed for different z values reveal that L(q) ≈ q^β for small q values, while S(Λ) ∝ Λ^γ (for Λ ≪ 1). From the Legendre relationship linking the two observables, it follows that

γ = 1 + 1/β.

By then recalling the definition of S(Λ) in terms of P(Λ, t), one finds

P(Φ, t) ≈ exp[−bΦ^γ t^{1−γ}],

where Φ = Λt, while b denotes some unknown constant. This equation implies that the growth prefactor (sometimes called generalised exponent)

ξ = Φ/t^α,

where α = 1 − 1/γ, is distributed as

Q(ξ) ≈ exp[−bξ^γ],    (5.36)



independently of time. By recalling the original definition of α in terms of z (see Eq. (5.35)), we can conclude that

γ = (z − 1)/(z − 2),    β = z − 2.

As a consequence, the knowledge of the critical behaviour exhibited by the large-deviation function S allows us to infer the scaling behaviour of the perturbation even in a case like this, where the LE is equal to zero. Notice also that the growth prefactor ξ entering Eq. (5.36) has no specific limit value for t → ∞; it keeps fluctuating, no matter how long the computation time is. The functional dependence outlined in the previous equation is valid in the limit of large ξ values. A globally valid expression can be found in Korabel and Barkai (2010), where a more refined treatment has been implemented. A rigorous analysis of the large deviation function can be found in Pires et al. (2011) and in Venegeroles (2012).

Finally, let us recall that different instances of weak chaos are typically characterised by a different scaling behaviour and thereby require different treatments. For instance, in the transition to chaos via period-doubling bifurcations, a power-law behaviour is found rather than the stretched-exponential behaviour discussed in this section.
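The Legendre link between β and γ can be checked with a small numerical sketch (ours; γ = 3 is an arbitrary choice): for S(Λ) = Λ^γ, the function qL(q) = max_Λ [qΛ − S(Λ)] yields L(q) ∝ q^β with β = 1/(γ − 1), which is just γ = 1 + 1/β rearranged.

```python
import numpy as np

gamma = 3.0
beta = 1.0 / (gamma - 1.0)          # equivalent to gamma = 1 + 1/beta

Lam = np.linspace(0.0, 2.0, 200_001)
S = Lam**gamma

def qL(q):
    # finite-q Legendre transform of the large deviation function
    return np.max(q * Lam - S)

q1, q2 = 1e-3, 1e-2
slope = np.log((qL(q2)/q2) / (qL(q1)/q1)) / np.log(q2/q1)
print(slope)   # ~0.5 = 1/(gamma - 1)
```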

5.7.3 Hénon map

The Hénon map provides a simple, but nevertheless general, setup for a fruitful test of the methods discussed in this chapter. We start by determining the large deviation function S(λ) for the largest exponent with the help of the Lyapunov weighted dynamics. In Fig. 5.9a, the characteristic function G(q) = qL(q) is plotted versus q. Above q = −1 the method converges quite rapidly, even sufficiently far away from the canonical value q = 0. Fluctuations become rather nasty below q = −1. This is due not to their low probability but to the fact that, as pointed out by Grassberger et al. (1988), q = −1 corresponds to a transition point from the hyperbolic phase to the phase dominated by homoclinic tangencies with strong fluctuations. The Legendre transform of G(q) is plotted in panel (b): the function S is quite smooth and well converged above a certain value of the Lyapunov exponent, while, below, it is affected by a large uncertainty. In practice, it appears that the weighted dynamics approach is well suited to describe the hyperbolic phase, while difficulties arise in the characterisation of the non-hyperbolic one.

There are no ambiguities in the definition of the maximum finite-time LE, as the two approaches discussed at the beginning of this chapter (through CLVs or the QR decomposition) coincide. The same is not true for the second exponent. In fact, in Fig. 5.2b, we have seen that Δ = (Λ_1 + Λ_2 − ln b)τ is not exactly equal to zero in the CLV approach. This quantity obeys an exponential distribution P(Δ) ≈ e^{−|Δ|} for Δ sufficiently different from 0 (the decay rate visible in Fig. 5.2b is indeed equal to 1 – see the dashed line). One can transform such fluctuations into those of the intensive


Fig. 5.9

(a) The generalised Lyapunov exponent G(q) = qL(q) versus q for the Hénon map and (b) the corresponding large-deviation function S for the standard parameter values. The size of the error bars is only approximate. In the region where there are no error bars, the accuracy is better than the line size. Both curves have been obtained by implementing the weighted dynamics for τ = 20.

variable δλ = Δ/τ, obtaining P(δλ) ≈ e^{−|δλ|τ}. This implies that the corresponding large deviation function is S(δλ) = |δλ|; i.e. differences between the two definitions of this compound observable are to be expected when |q| > 1. Such differences are a signature of the presence of homoclinic tangencies.

Finally, we consider the generalised Lyapunov exponent L(1). We have already commented – with reference to the one-dimensional map (5.15) – that q = 1 is a special value, where the LE does not depend on the probability distribution but only on the geometric structure of the attractor. In fact, as further explained in Chapter 6, it corresponds to the so-called topological entropy, a quantity that measures the growth rate of the number of "different" trajectories of a given length. The evaluation of L(1) greatly simplifies when it is possible to faithfully represent trajectories in phase space as sequences of symbols, through the implementation of a generating partition. In the case of the Hénon map, this can be done by following the approach sketched in Appendix D. In practice, one can split the phase space into two different symbolic regions, by constructing a suitable dividing line. The partition drawn in Fig. 5.10 is able to correctly identify all irreducible forbidden sequences up to length 20. As mentioned previously, given a list of irreducible forbidden words, one can construct a suitable Markov process and thereby determine the topological entropy as the logarithm of the maximum eigenvalue. By using a more refined partition than that reported in the table, and



Fig. 5.10 Generating partition of the Hénon map, for a = 1.4 and b = 0.3. The eighteen points reported in the table correspond to the edges of the dividing line.

(−0.09648,  2.)       (−0.09648, 1.78027)
(−0.09645, 1.78000)   (−0.09641, 1.77979)
(−0.09673, 1.77902)   (−0.09669, 1.77883)
(−0.09672, 1.77802)   (−0.09672, 1.77799)
(−0.11750, 1.72371)   (−0.11743, 1.72290)
(−0.11744, 1.72283)   ( 0.02682, 1.12013)
( 0.02670, 1.11999)   (−0.00895, 0.98676)
(−0.00893, 0.98653)   (−0.01004, 0.98484)
(−0.01004, 0.98450)   (−0.01004, −2.)

considering irreducible forbidden words of progressively longer length, it is possible to extrapolate that L(1) = 0.464936 ± 0.000003.⁵
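Both Hénon exponents are easy to obtain with the QR method of Chapter 3; the sketch below (ours) also checks the constraint Λ_1 + Λ_2 = ln b used above, which follows from the constant Jacobian determinant −b.

```python
import numpy as np

a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
s = np.zeros(2)
n = 100_000
for _ in range(n):
    J = np.array([[-2*a*x, b],      # Jacobian of (x, y) -> (1 - a x^2 + b y, x)
                  [1.0,    0.0]])
    x, y = 1.0 - a*x*x + b*y, x
    Q, R = np.linalg.qr(J @ Q)
    s += np.log(np.abs(np.diag(R)))
l1, l2 = s / n
print(l1, l2)               # l1 ~ 0.42 for the standard parameters
print(l1 + l2, np.log(b))   # the sum equals ln b (up to rounding)
```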

5.7.4 Mixed dynamics

In the previous section we encountered a large deviation function whose domain of definition covers positive as well as negative Λ-values, indicating that the dynamics of the Hénon map is a mixture of stable and unstable trajectories. Non-hyperbolic chaos is one of the many examples that can be encountered in nonlinear dynamics. Another example is the bubbling transition, associated with the loss of transversal stability of a chaotic invariant manifold (Pikovsky and Grassberger, 1991; Ashwin et al., 1994) (see also the synchronisation transition discussed in Section 9.2). The reader interested in further details can consult Politi (2014b). Next, we analyse two amazing cases, where rare positive fluctuations can induce an irregular dynamics even in the presence of a negative Lyapunov exponent.

Strange nonchaotic attractors

Strange nonchaotic attractors (Romeiras et al., 1987) are non-autonomous systems driven by quasiperiodic forces, which possess a non-smooth structure in spite of being characterised by a negative maximal LE. We refer, here, to the Lyapunov exponent of the driven, non-autonomous subsystem – not to the quasiperiodic driver, which is obviously characterised by two zero Lyapunov exponents. According to the usual definition of chaos, such systems are nonchaotic, but the domain of definition of the corresponding large deviation function

⁵ This estimate is implicitly contained in D'Alessandro et al. (1990). The use of all irreducible forbidden words up to length 31 also yields the upper bound 0.4649418.



S(Λ) extends to positive as well as to negative Λ-values. This is, indeed, the reason why they are characterised by a fractal structure (Pikovsky and Feudel, 1995). The connection can be analytically unravelled by studying a simple one-dimensional, quasiperiodically driven map (in our presentation we follow Feudel et al. (2006)),

U(t + 1) = f(U(t), θ(t)),
θ(t + 1) = θ(t) + ω   (mod 1),    (5.37)

where θ is a phase-like variable which describes a quasiperiodic motion (provided the frequency ω is irrational), while the function f is periodic in θ. If U is, on the attractor of the map (5.37), a smooth function of θ, then the attractor itself is a standard torus. Smoothness can be assessed from the derivative ∂U/∂θ. From Eq. (5.37), one obtains the recursive relation

(∂U/∂θ)(t + 1) = (∂f/∂U)(t) (∂U/∂θ)(t) + (∂f/∂θ)(t),

whose iteration leads to

(∂U/∂θ)(t) = Σ_{k=0}^{t−1} (∂f/∂θ)(k) X(k + 1, t),    X(k + 1, t) = Π_{j=k+1}^{t−1} (∂f/∂U)(j) = ∂U(t)/∂U(k + 1).    (5.38)

The last expression involves the derivative of the variable U at time t with respect to its value at a previous time k + 1; this quantity is related to the finite-time Lyapunov exponent,

∂U(t)/∂U(k + 1) = ± exp[(t − k − 1) Λ(k + 1, t)].   (5.39)

According to Eq. (5.8), the probability density of Λ is ∼ e^{−S(Λ)(t−k−1)}, where its domain of definition is [Λ_min, Λ_max], and its minimum (equal to zero) is in Λ = λ < 0. If Λ_max < 0, then all of the factors X are exponentially bounded, |X(k + 1, t)| < exp[(t − k − 1)Λ_max], and the sum in (5.38) converges unconditionally as t → ∞. This means that the derivative ∂U/∂θ(t) is bounded and the function U(θ) is a smooth torus. If Λ_max > 0, arbitrarily large values of X are possible (although they can be rare events). Such large events dominate the sum in (5.38). In order to estimate the largest factor X, we have to compare the probability of rare events with the observation time t. Within a long but finite time interval t, an exponent Λ (defined over a time τ) can be observed if its probability is at least of order 1/t, i.e. if exp[−S(Λ)τ] ≈ 1/t. This yields τ ≈ (log t)/S(Λ).

Substituting this into the expression (5.39), we obtain |X| ≈ t^{Λ/S(Λ)}, and one has now to take the maximum over all possible Λ values,

max_k |X(k + 1, t)| ∼ t^μ,   μ = max_Λ [Λ/S(Λ)].

If Λ_max > 0, then μ > 0 and arbitrarily large contributions to the sum (5.38) can be present. This indicates that U(θ) cannot be a smooth, but rather a fractal, curve. Therefore it is called a strange nonchaotic attractor. In Fig. 5.11 we present some numerical estimates of S(Λ) for the system (5.37) with f(U, θ) = 3 tanh U cos(2πθ) and ω = (√5 − 1)/2 (the first observation of a strange nonchaotic attractor in this model is due to Romeiras et al. (1987)).

Fig. 5.11 The large deviation function for the system (5.37) with f(U, θ) = 3 tanh U cos(2πθ). Squares, stars and filled circles refer to three different values of the observation time τ: 40, 50 and 60, respectively. The oscillations in the tails are due to insufficient statistics.

The minimum of S(Λ) is around −1; i.e. the Lyapunov exponent is negative, but positive finite-time Lyapunov exponents can be observed up to Λ ≈ 0.5.
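The mechanism can be probed directly. The following sketch (our construction, not from the text) assumes the GOPY-type driving function f(U, θ) = 3 tanh U cos(2πθ) used above and samples finite-time Lyapunov exponents of the driven variable over short windows: the mean is clearly negative, while a small fraction of the windows typically yields positive values.

```python
import numpy as np

# Finite-time Lyapunov exponents of the driven subsystem of Eq. (5.37),
# assuming f(U, theta) = 3 tanh(U) cos(2*pi*theta) (a GOPY-type model)
# and omega = (sqrt(5) - 1)/2; the derivative is df/dU = 3 cos(2*pi*theta)/cosh(U)^2.
omega = (np.sqrt(5.0) - 1.0) / 2.0
rng = np.random.default_rng(1)

U, theta = 0.5, rng.random()
for _ in range(1000):                       # discard the transient
    U, theta = 3.0*np.tanh(U)*np.cos(2*np.pi*theta), (theta + omega) % 1.0

tau, samples = 10, []                       # short windows expose the positive tail
for _ in range(2000):
    s = 0.0
    for _ in range(tau):
        s += np.log(np.abs(3.0*np.cos(2*np.pi*theta)) / np.cosh(U)**2)
        U, theta = 3.0*np.tanh(U)*np.cos(2*np.pi*theta), (theta + omega) % 1.0
    samples.append(s / tau)
samples = np.array(samples)

print("mean Lambda    :", samples.mean())   # negative: the attractor is nonchaotic
print("positive share :", (samples > 0).mean())
```

The window length τ and the ensemble size are arbitrary choices; longer windows narrow the distribution around the (negative) asymptotic exponent.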

Stable chaos

A perhaps more intriguing example of mixed dynamics is the so-called stable chaos, an irregular regime characterised by a negative Lyapunov exponent (Politi and Torcini, 2010). This phenomenon, first observed in a chain of coupled maps (Crutchfield and Kaneko, 1988) and later discovered also in neural networks (Zillmer et al., 2006), is, strictly speaking, transient, but its lifetime is exponentially long in large systems. The coupled chain of piecewise linear maps (A.14) is an appropriate testbed for this regime. For a relatively wide range of parameter values, a lattice of such maps converges to more or less structured stable periodic states, but the transient time grows exponentially with the lattice length. Moreover, in spite of the fact that the LE is negative, the transient itself is irregular and seemingly stationary (there is no obvious precursor of the final collapse onto some ordered orbit). In practice, this behaviour is akin to that of (chaotic) cellular automata (Wolfram, 1986), the main difference being that, here, the variables are continuous. Some light has been shed on the nature of stable chaos by monitoring the evolution of perturbations. By following small but finite perturbations (i.e. studying finite-amplitude Lyapunov exponents; see Chapter 7), it is found that they undergo a sudden amplification, becoming of order 1. The analysis of large deviations, instead, reveals a positive tail.

Fig. 5.12 Large deviation function for the model (A.13, A.14) for η = 10⁻⁴ and ε = 1/3. Dashed and solid lines correspond to observation times t = 20 and 40, respectively. The chain length is L = 200.

A typical instance is plotted in Fig. 5.12 for two different observation times. Although finite-size corrections are still fairly large, one can notice that the domain of definition of S(Λ) extends to positive exponents, suggesting that the rare expansion events are sufficiently important to trigger and sustain the irregular dynamics.

6 Dimensions and dynamical entropies

The Lyapunov exponents of a deterministic system allow quantifying not only its stability and the predictability of the resulting dynamics, but also other important properties such as the fractal dimension of the underlying attractor and its dynamical entropy. Let us briefly recall the definition of fractal dimension. Given a set A (which is, for us, a generic attractor), we introduce a covering C(ε) as a collection of balls of size ε, such that any point of the attractor belongs to at least one ball. The fractal dimension can thereby be defined from the scaling behaviour of the number N(ε) of balls of size ε needed to cover the attractor,

D = − lim_{ε→0} min [ln N(ε) / ln ε],   (6.1)

where the minimum over all possible covering procedures has the role of ensuring a negligible overlap among the different balls. This definition is the so-called box dimension. Other definitions are possible which yield, in general, different values. The difference among various definitions follows from the existence of fluctuations in the local density of the invariant measure over arbitrarily small scales, which make the measure “multifractal”. As we will see in the final section, there is a strong analogy between these fluctuations and those exhibited by the finite-time LEs that are responsible for the existence of a finite range of generalised LEs. A similar scenario arises also for the dynamical entropy. In order to make the key elements more understandable, we first neglect the existence of fluctuations, deferring the general treatment to the final section.
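As an illustration of definition (6.1) — our construction, not part of the original discussion — the sketch below estimates the box dimension of the Hénon attractor by counting the occupied cells of uniform grids of decreasing size; the minimisation over coverings is not attempted, so the result is only a rough estimate.

```python
import numpy as np

# Box-counting estimate of the dimension (6.1) for the Henon attractor.
# A uniform grid plays the role of the covering C(eps); N(eps) is the
# number of grid boxes visited by a long trajectory.
a, b = 1.4, 0.3
n = 200_000
x, y = 0.1, 0.1
pts = np.empty((n, 2))
for i in range(n):
    x, y = 1.0 - a*x*x + y, b*x
    pts[i] = x, y
pts = pts[1000:]                        # discard the transient

eps_list = np.array([0.1, 0.05, 0.025, 0.0125])
counts = []
for eps in eps_list:
    boxes = np.unique(np.floor(pts / eps).astype(np.int64), axis=0)
    counts.append(len(boxes))

# the slope of ln N(eps) versus ln(1/eps) approximates the box dimension
D, _ = np.polyfit(np.log(1.0/eps_list), np.log(counts), 1)
print("box-dimension estimate:", D)     # literature value is about 1.26
```

The convergence in ε is slow, which is one reason why the Kaplan-Yorke route discussed next is so useful in practice.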

6.1 Lyapunov exponents and fractal dimensions

In this section we derive the basic Kaplan-Yorke formula (Kaplan and Yorke, 1979), which expresses the fractal dimension of a given dynamical system in terms of its Lyapunov exponents. We start by illustrating the connection in a simple two-dimensional chaotic system, namely the Hénon map, as it allows an approach that will be straightforwardly extended to open systems in Chapter 11. Given an attractor A, consider a compact region S0 of size O(1) which contains the attractor (A ⊂ S0) and is fully contained in its basin of attraction. S0 can be used to build a proper covering of A, as its forward iterates St provide an increasingly refined covering of A (the almost triangular region ABC in Fig. 6.1a represents one such example for the Hénon map).


Fig. 6.1 The initial region S0 covering the Hénon attractor and its first three iterates. The primed letters denote the images of the non-primed ones.

As a result of the expansion along the unstable direction and of the contraction along the stable one, the forward iterates St of S0 resemble increasingly thin and long cylinders (see panels (b)–(d) in Fig. 6.1, where the first three iterates are represented). Upon cutting St transversally into thin slices of size equal to the cylinder thickness, one obtains a partition of the attractor into boxes of size ε = exp{λ2 t}, where λ2 is the second (negative) Lyapunov exponent (as anticipated, here we neglect the fluctuations of the finite-time Lyapunov exponent, which is assumed to be everywhere equal to the asymptotic value – see Section 6.4.1 for a more accurate analysis). The crux of the argument is that such a partition becomes increasingly accurate as the time t evolves (since λ2 < 0). By then recalling the definition (6.1) of the fractal dimension, one can conclude that

D_KY = − ln N / ln ε = − ln(e^{λ1 t}/ε) / ln ε = 1 − λ1/λ2,   (6.2)

where λ1 is the positive Lyapunov exponent. Eq. (6.2) is nothing but the well-known Kaplan-Yorke formula (Kaplan and Yorke, 1979) for two-dimensional maps. Although its derivation is rather sketchy, this equation is exact, provided that DKY is interpreted as the information dimension (Ledrappier, 1981; Young, 1982; Eckmann and Ruelle, 1985) (see also Section 6.4.1).
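Formula (6.2) can be checked numerically. The following sketch (our construction) computes both Lyapunov exponents of the Hénon map with the standard parameters a = 1.4, b = 0.3 via QR iteration (see Chapter 3) and combines them into D_KY:

```python
import numpy as np

# Both Lyapunov exponents of the Henon map via QR iteration, combined
# into the two-dimensional Kaplan-Yorke formula (6.2).
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
sums = np.zeros(2)
n_trans, n = 1000, 100_000
for i in range(n_trans + n):
    J = np.array([[-2.0*a*x, 1.0],        # Jacobian of (x, y) -> (1 - a x^2 + y, b x)
                  [b,        0.0]])
    x, y = 1.0 - a*x*x + y, b*x
    Q, R = np.linalg.qr(J @ Q)
    if i >= n_trans:
        sums += np.log(np.abs(np.diag(R)))
lam1, lam2 = sums / n
print("lambda_1, lambda_2:", lam1, lam2)  # ~0.42 and ~-1.62
print("D_KY =", 1.0 - lam1/lam2)          # ~1.26
```

The result is consistent with the box-counting value quoted in the literature (≈ 1.26); note also that λ1 + λ2 = ln b exactly, since the Hénon map has constant Jacobian determinant.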


One can provide a simple interpretation of this formula by introducing the partial dimensions D_i: these are the dimensions of the attractor along the local invariant directions (i.e. the covariant Lyapunov vectors; see Chapter 4). One can in practice write D = D_1 + D_2; in this specific case D_1 = 1, while D_2 = λ_1/|λ_2|, meaning that the attractor is continuous along the unstable direction, while it is fractal (Cantor-like) along the stable direction. Let us now consider an N-dimensional dynamical system and let S0 be a cuboid with edges of length O(1). At variance with the simple two-dimensional model, it is no longer obvious how to establish a connection between the elapsed time t and the accuracy of the corresponding partition. The fractal structure of the attractor is progressively uncovered by letting St evolve in time; at time t, we do not want to choose a box size ε smaller than the scales that have spontaneously emerged. In order to shed light on this problem, it is convenient to adopt a different point of view. Let us consider a small i-dimensional region R_0^i and its forward iterates R_t^i. Under the hypothesis of an ergodic evolution, its points will spread over the entire attractor. The volume of the region grows according to the sum S_i of the first i Lyapunov exponents (cf. Eq. (2.17)). If S_i > 0, the volume of R_t^i grows exponentially while the attractor is being covered. One can thus conclude that the attractor dimension D_KY > i; this process is tantamount to determining the “length” of a square: an infinitely long curve is needed to cover it entirely. Analogously, an exponentially shrinking i-dimensional volume would signal that D_KY < i. In practice, the volume neither expands nor contracts only when i corresponds to the true dimension. Since the number of directions is a discrete variable, in general there exists a value m − 1 such that S_{m−1} > 0 while S_m < 0.
By linearly interpolating between these two limits, one finds

D_KY = m − 1 + S_{m−1}/|λ_m|.   (6.3)

This is the general form of the Kaplan-Yorke formula (see Fig. 6.2 for a geometric construction). The relationship between dimensions and Lyapunov exponents can be further clarified by expressing the volume dynamics in terms of the partial dimensions D_k,

V(t, {D_k}) = e^{Σ_k D_k λ_k t}.   (6.4)

A meaningful value of the dimension D_KY = Σ_k D_k must be such that the corresponding volume neither expands nor contracts, i.e.

Σ_k D_k λ_k = 0.   (6.5)

It is obvious that there exist different sets {D_k} of partial dimensions which can satisfy these constraints. The Kaplan-Yorke formula (6.3) is obtained by conjecturing that D_k = 1 for k < m, D_k = 0 for k > m, and that the mth partial dimension is the only one possibly having a non-integer value (0 < D_m ≤ 1). This formulation makes transparent the reason why the Kaplan-Yorke formula provides, in general, an upper bound to the (information) dimension: any other distribution of the weights D_k would indeed give lower values. This is also the reason why the so-defined D_KY is often referred to as the Kaplan-Yorke dimension, to underline its possible difference with respect to the effective dimension. One example where there is a finite discrepancy is that of two (or more) uncoupled attractors, since different contracting directions separately contribute to compensate for the expansion along the expanding directions. It has been proved that D_KY is exact for two-dimensional mappings (where there is no ambiguity about the folding directions) and for random attractors, where the nonlinear dynamical rule changes stochastically in time. It is still unclear, however, under which conditions the Kaplan-Yorke formula holds exactly for generic high-dimensional models.

Fig. 6.2 Sketch of the Kaplan-Yorke dimension D_KY and of the Kolmogorov-Sinai entropy h_KS. The solid line refers to the Lyapunov exponents λ_i, while the dashed line corresponds to the partial sums S_i.

We can now return to the original approach and use the convergence of the region St to generate increasingly accurate coverings of an attractor. The identification of the mth direction as the only fractal one is equivalent to defining the proper box size as

ε = exp(λ_m t).   (6.6)

One can easily see that the number of boxes of size ε, which is needed to cover the corresponding N-dimensional cylinder, scales according to the Kaplan-Yorke dimension DKY . Finally, notice that the Kaplan-Yorke formula also provides useful information on the number of active degrees of freedom. In fact, in typical dissipative models, the number of variables that are necessary to uniquely identify the different points of an attractor is smaller (in some cases much smaller) than the phase-space dimension (i.e. the number of variables that are used to define the model itself). The Kaplan-Yorke formula can be viewed as a general prescription to estimate the number of “active” variables.
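The prescription (6.3) is easily turned into a routine; the sketch below (the function name is our choice) applies it to the commonly quoted Lyapunov exponents of the Lorenz model, (0.906, 0, −14.572), for which D_KY ≈ 2.06.

```python
def kaplan_yorke(spectrum):
    """Kaplan-Yorke dimension (6.3) from a Lyapunov spectrum.

    The exponents are sorted in decreasing order; m - 1 is the largest
    index whose partial sum S_{m-1} is still non-negative, and the
    remainder is interpolated linearly: D = m - 1 + S_{m-1}/|lambda_m|.
    """
    lams = sorted(spectrum, reverse=True)
    s = 0.0
    for j, lam in enumerate(lams):
        if s + lam < 0.0:
            return j + s / abs(lam)
        s += lam
    return float(len(lams))      # the partial sums never turn negative

# Commonly quoted exponents of the Lorenz model (sigma=10, r=28, b=8/3)
print(kaplan_yorke([0.906, 0.0, -14.572]))   # ~2.062
```

For a purely contracting spectrum the routine returns 0, as expected for a fixed-point attractor.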

6.2 Lyapunov exponents and escape rate

In the case of chaotic attractors the invariant measure is typically smooth along the unstable directions, while it is assumed to be fractal along one of the stable directions.

Fig. 6.3 Sketch of a multiple scattering process.

There are, however, physical setups where no attractor is present in a given region of phase space, and yet an effectively irregular dynamics can be sustained for arbitrarily long times. This is the case of scattering problems, such as the one depicted in Fig. 6.3, where an incoming particle may be trapped because of repeated collisions with the three disks. The invariant set, composed of all non-escaping trajectories, is a chaotic saddle, also called a repeller or, in mathematical terms, a Smale horseshoe (Katok and Hasselblatt, 1995). This object is fractal along both the stable and the unstable directions. In practice a chaotic saddle can be generated by “digging holes” (which let trajectories escape) in an otherwise stable attractor. These may be physical holes in real space, such as in the previous example (the open space in between the three disks), or holes in phase space, which arise as a result of a crisis transition (Grebogi et al., 1983). In all such cases the chaotic dynamics is transient (see Lai and Tél (2011) for more details). In a repeller, the number N of randomly chosen initial trajectories that have not escaped after a time t typically decays exponentially, N ∼ exp[−γ t]; the rate γ is called the escape rate. One can build increasingly accurate approximations of the invariant measure by recursively removing the preimages of the holes (i.e. those initial conditions which enter the holes within a number n of iterates). It is natural to expect that this removal process introduces cuts along the unstable manifold; i.e. it makes the distribution fractal along those directions, too. As the sizes of the preimages are related by the growth rates of distances along the unstable direction, one can expect the existence of a relationship between Lyapunov exponents, fractal dimension and escape rate. This is indeed known as the Kantz-Grassberger formula (Kantz and Grassberger, 1985).
We derive such a formula by following the same kind of arguments developed to prove the Kaplan-Yorke formula (6.2) in a two-dimensional setup. We first focus on the unstable direction: the nth preimage of the hole is composed of many intervals of typical size ε_n, which have not yet escaped after n iterates. If the fractal dimension of such a set of points is D_1, then the number of intervals is ε_n^{−D_1}, whose total length sums up to ε_n^{1−D_1}. This quantity defines the measure of initial conditions that do not escape before time n. By now taking into account that ε_n = e^{λ1} ε_{n+1} and that the ratio between the masses at two consecutive levels is equal to e^{−γ}, we find that

(e^{−λ1} ε_n)^{1−D_1} / ε_n^{1−D_1} = e^{−γ},

from which the Kantz-Grassberger formula follows:

D_1 = 1 − γ/λ_1.   (6.7)

By finally using (6.5), we obtain an expression for D_2 and thereby for the full dimension of the chaotic saddle,

D_KY = D_1 + D_2 = (λ_1 − γ)/λ_1 + (λ_1 − γ)/|λ_2| = (λ_1 − γ) (1/λ_1 + 1/|λ_2|).

In Hamiltonian systems, where λ_1 = |λ_2|, the two partial dimensions D_1 and D_2 are equal to one another.
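A minimal illustration (our construction, not taken from the text): for the open map x → 3x (mod 1) with a hole in the middle third, the invariant set is the middle-third Cantor set, γ = ln(3/2) and λ1 = ln 3, so (6.7) predicts D_1 = 1 − ln(3/2)/ln 3 = ln 2/ln 3 ≈ 0.631, the Cantor-set dimension.

```python
import numpy as np

# Escape-rate estimate for the open map x -> 3x (mod 1) with a hole in
# (1/3, 2/3): the number of survivors decays as N(t) ~ exp(-gamma*t),
# and the Kantz-Grassberger formula (6.7) then yields the dimension of
# the invariant (Cantor) set.
rng = np.random.default_rng(0)
x = rng.random(1_000_000)
x = x[(x < 1/3) | (x > 2/3)]             # discard points born in the hole
survivors = []
for _ in range(15):
    survivors.append(len(x))
    x = (3.0 * x) % 1.0
    x = x[(x < 1/3) | (x > 2/3)]         # remove points that fell into the hole

gamma = -np.polyfit(np.arange(15), np.log(survivors), 1)[0]
lam1 = np.log(3.0)
print("gamma =", gamma)                  # exact value: ln(3/2) ~ 0.405
print("D_1   =", 1.0 - gamma/lam1)       # exact value: ln 2 / ln 3 ~ 0.631
```

Here each iteration removes exactly one third of the surviving measure, so the exponential fit converges quickly.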

6.3 Dynamical entropies

Another dynamical invariant that is connected to the Lyapunov exponents is the Kolmogorov-Sinai entropy, which measures the growth rate of the number of different trajectories that can be generated by a dynamical system and quantifies the overall instability of the underlying dynamics (Sinai, 2009). For the sake of simplicity, here we refer to discrete-time systems, but the final results apply to continuous-time models as well. Let B_ε denote a partition composed of nonoverlapping cells (atoms) C_m of size ε, which cover the phase space visited by the attractor. One can thus map a generic trajectory U(t̃) (1 ≤ t̃ ≤ t) onto a symbol sequence {m}_t = (m(1), m(2), . . . , m(t)), where m(t̃) identifies the atom containing the configuration at time t̃ (U(t̃) ∈ C_{m(t̃)}). Upon introducing the probability P({m}_t) to observe the symbol sequence {m}_t, one can define the corresponding entropy

H(t, ε) = − Σ_{{m}_t} P({m}_t) ln P({m}_t).   (6.8)

In chaotic systems the diversity of trajectories grows exponentially with time; therefore the entropy H(t, ε) can be expected to increase linearly with time. The Kolmogorov-Sinai entropy h_KS is defined as the average growth rate,

h_KS = lim_{ε→0} lim_{t→∞} (1/t) H(t, ε).   (6.9)

If the partition is generating, the limit ε → 0 is not necessary (see Appendix D for a brief discussion and Eckmann and Ruelle (1985) for a more detailed treatment). From now on we assume that this is the case, since this assumption allows for a simplified mathematical analysis. A relationship between hKS and the Lyapunov exponents can be established by identifying all trajectories characterised by the same given sequence {m}t of cells, as those sitting

at time 0 in the set C = C_{m(1)} ∩ T^{−1} C_{m(2)} ∩ · · · ∩ T^{−(t−1)} C_{m(t)}. As a result, the probabilities in (6.8) can be expressed as

P({m}_t) = P(C),   (6.10)

where P(C) is the measure of C. C is basically an ellipsoid: let ε_i denote the semi-axis length along the ith Lyapunov direction (here identified via the QR decomposition). In the case of a stable direction, we expect ε_i to be approximately equal to the width of C_{m(1)}, i.e. ε_i ≈ O(1). Since the mutual distance between any two trajectories converges exponentially to zero, this condition is sufficient to ensure that all the forward iterates fall within the same atoms (at the same time). In the case of an unstable direction, the mutual distance increases exponentially in time, and it is therefore necessary to require that ε_i ≈ e^{−λ_i t} to ensure that it is always smaller than the typical atom size. As a result, by invoking the smoothness of the invariant measure along the unstable directions, one can write

P(C) ≈ Π_{i:λ_i>0} ε_i = e^{−Σ_{i:λ_i>0} λ_i t},

which, with the help of Eqs. (6.8, 6.9, 6.10), implies

h_KS = Σ_{i:λ_i>0} λ_i.   (6.11)

This is the standard expression of the Pesin formula (Pesin, 1977) for generic chaotic attractors. Rigorously speaking, this formula provides an upper bound for the Kolmogorov-Sinai entropy. In the graphical representation of Fig. 6.2 the Kolmogorov-Sinai entropy corresponds to the maximal value of S_i. Generally, having in mind chaotic saddles as well, it is necessary to take into account that the density along the expanding directions may be singular (i.e. characterised by a partial dimension D_i < 1). In practice, this is tantamount to assuming that

P(C) ≈ Π_{i:λ_i>0} ε_i^{D_i} = e^{−Σ_{i:λ_i>0} D_i λ_i t},   (6.12)

which implies the general Pesin formula (Pesin, 1977),

h_KS = Σ_{i:λ_i>0} D_i λ_i.   (6.13)

In chaotic attractors, D_i = 1 and this equation reduces to (6.11). In two-dimensional repellers, the Kantz-Grassberger formula (6.7) implies that

h_KS = λ_1 − γ,   (6.14)

i.e. the dynamical entropy is obtained by subtracting the escape rate from the positive Lyapunov exponent. This is quite intuitive, as the escaped points do not contribute to the complexity of the stationary dynamics.
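Pesin's formula can be tested on the fully developed logistic map x → 4x(1 − x), whose single positive exponent is λ = ln 2 and for which the binary partition at x = 1/2 is generating; the sketch below (our construction) estimates h_KS from the growth of the block entropies of the symbolic sequence.

```python
import numpy as np
from collections import Counter

# h_KS of the logistic map x -> 4x(1-x), estimated from the block
# entropies H(t) of the symbol sequence generated by the partition
# {[0, 1/2), [1/2, 1]}; Pesin's formula (6.11) predicts h_KS = ln 2.
rng = np.random.default_rng(3)
x = rng.random()
for _ in range(1000):
    x = 4.0*x*(1.0 - x)

symbols = []
for _ in range(200_000):
    symbols.append(0 if x < 0.5 else 1)
    x = 4.0*x*(1.0 - x)
    if x == 0.0 or x == 1.0:        # guard against a rounding artefact
        x = rng.random()

def block_entropy(seq, t):
    counts = Counter(tuple(seq[i:i + t]) for i in range(len(seq) - t))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

# H(t) grows linearly in t with slope h_KS; use a difference of two blocks
h = block_entropy(symbols, 10) - block_entropy(symbols, 9)
print("h_KS estimate:", h)          # ln 2 ~ 0.693
```

The estimate is slightly biased by the finite sample, but the difference H(10) − H(9) is already close to ln 2.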


6.4 Generalised dimensions and entropies

So far, we have treated the dynamical systems as if all trajectories were characterised by the same Lyapunov exponent, so that there is no ambiguity in the identification of the various observables. As we have learned in Chapter 5, however, the presence of fluctuations induces an entire range of (generalised) Lyapunov exponents. In this section we extend both the Kaplan-Yorke and Pesin formulas to this more general case.

6.4.1 Generalised Kaplan-Yorke formula

Given a covering of a (fractal) set with boxes of size ε, the generalised dimensions are defined as

D(β) = lim_{ε→0} (1/(β − 1)) [ln Σ_i P_i^β / ln ε],   (6.15)

where P_i is the probability of the ith box. This formula is similar to the definition of generalised Lyapunov exponents, as is better seen by introducing the concept of pointwise dimension,

α_i = ln P_i / ln ε,

which is indeed analogous to the finite-time Lyapunov exponent Λ assigned to a given initial condition (see Eq. (5.1)), with ln ε playing the role of the time τ. For β = 0, this definition reduces to the box dimension defined in (6.1), which is independent of how probable a box is (provided that the probability is strictly larger than zero). For β → 1, one obtains the information dimension

D(1) = lim_{ε→0} Σ_i P_i ln P_i / ln ε = ⟨α⟩,

which coincides with the average pointwise dimension. Finally, D(2) is the so-called correlation dimension, which can be computed by implementing the Grassberger-Procaccia algorithm (Grassberger and Procaccia, 1983; Beck and Schlögl, 1995). The definition of dimension can be made more general, allowing for boxes of different sizes (this extension is necessary to carry on the next steps). We proceed by introducing the local observable

V_i(D) = ε_i^D / P_i.   (6.16)

By noticing that 1/P_i can be interpreted as an effective number of boxes, V_i(D) can be seen as an effective D-volume of the set, as extrapolated from the parameters of a single box. In particular, if D is larger (smaller) than the local pointwise dimension, then V_i(D) ≪ 1 (V_i(D) ≫ 1). D can thereby be estimated by averaging suitable powers of V_i(D). More specifically, let us introduce

⟨V⟩_{β,D} = Σ_i P_i (ε_i^D / P_i)^{1−β} = ⟨(V_i(D))^{1−β}⟩.


In the limit of increasingly fine partitions (i.e. for the average box size going to zero), ⟨V⟩_{β,D} either diverges or vanishes; D(β) can be defined as the limit value separating the two regimes. When all boxes have the same size ε, this definition reduces to (6.15), as the common factor ε^{(1−β)D} can be brought out of the average. If we now go back to Eq. (6.4) and replace the asymptotic value λ_i with the finite-time observable Λ_i, we see that V(t, {D_k}) is by all means analogous to V_i(D) in Eq. (6.16). We can thereby consider the power (1 − β), averaged over all initial conditions, and take the limit t → ∞ (which is equivalent to refining the partition), thus obtaining the implicit formula

⟨e^{(1−β) Σ_k D_k(β) Λ_k t}⟩ = O(1),   (6.17)

where D_k(β) are the generalised partial dimensions. The expression in angular brackets in (6.17) is of the type (5.4), the observable Υ being, in this case,

V = Σ_i D_i Λ_i,

so that the condition (6.17) can be equivalently expressed as

L_V(1 − β) = 0,   (6.18)

where L_V is the generalised Lyapunov exponent associated with the observable V. This equation, being a single constraint, does not allow for determining several partial dimensions. However, upon assuming that all but one of the partial dimensions are either equal to 1 or to zero (i.e. arguing as in Section 6.1), this equation reduces, for β → 1, to the Kaplan-Yorke formula (6.3). It is sufficient to identify D_KY with D(1) (and to interpret λ_m, λ_{m−1} as the standard LEs). In other words, we can conclude that the Kaplan-Yorke formula allows one to express the information dimension via the LEs. As for the other dimensions, including D(0) and D(2), the exact relationship is implicit. In the case of two-dimensional maps, no extra assumption is needed to determine the generalised dimension (D_1 = 1). Whenever, moreover, the volume contraction does not fluctuate, i.e. Λ_1 + Λ_2 = B, equation (6.17) simplifies to

⟨e^{(1−β)(1−D_2(β))Λ_1 t}⟩ = e^{−(1−β)D_2(β)B t},

so that

(1 − D_2(β)) L_1((1 − β)(1 − D_2(β))) = −D_2(β) B,

where L_1 is the generalised maximum Lyapunov exponent (for β → 1 this relation correctly reduces to D_2 = λ_1/|λ_2|). Although this equation is simpler than (6.18), it is still implicit and requires some work to be solved. Moreover, the reader should be warned that we do not expect this equation to hold in the non-hyperbolic phase (i.e. for sufficiently positive β values), as it is no longer true that D_1 = 1. In full generality we also expect that more than one stable direction may be characterised by a fractal structure (Paoli et al., 1989); this is an additional reason why the formalism needs further refinements.
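For completeness, a crude Grassberger-Procaccia sketch (our construction) for the correlation dimension D(2) of the Hénon attractor; only a small sample and a narrow range of scales are used, so the estimate is rough.

```python
import numpy as np

# Grassberger-Procaccia estimate of the correlation dimension D(2):
# the correlation sum C(r) (fraction of distinct pairs of trajectory
# points closer than r) scales as C(r) ~ r^{D(2)} for small r.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
pts = []
for i in range(3000):
    x, y = 1.0 - a*x*x + y, b*x
    if i >= 1000:                      # discard the transient
        pts.append((x, y))
pts = np.array(pts)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
N = len(pts)
rs = np.array([0.02, 0.04, 0.08, 0.16])
C = np.array([((d < r).sum() - N) / (N*(N - 1)) for r in rs])  # self-pairs excluded
D2, _ = np.polyfit(np.log(rs), np.log(C), 1)
print("D(2) estimate:", D2)            # literature value ~1.2
```

With more points and smaller scales the estimate approaches the published value D(2) ≈ 1.2, slightly below the information dimension, as expected from the ordering of the generalised dimensions.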


6.4.2 Generalised Pesin formula

Here, we discuss the case of dynamical entropies. The first step is to generalise the definition of entropy (6.8) by introducing the order-β Rényi entropy

H_β(t, ε) = (1/(1 − β)) ln Σ_{{m}_t} P({m}_t)^β = (1/(1 − β)) ln ⟨P({m}_t)^{β−1}⟩.

In the limit β → 1, this formula reduces to the standard definition (6.8). As a result, one can define the generalised Kolmogorov-Sinai entropy,

h(β) = lim_{t→∞} (1/t) H_β(t, ε),

where, for the sake of simplicity, we have dropped the ε → 0 limit, implicitly assuming that we refer to a generating partition. From Eqs. (6.10, 6.12), considering that the Lyapunov exponents do fluctuate (i.e. replacing λ_i with Λ_i), it follows that

P({m}_t) ≈ e^{−St},   S = Σ_{i=1}^{D_u} D_i Λ_i,

where D_u denotes the number of unstable directions. Accordingly, from Eq. (5.7),

h(β) = L_S(1 − β).

This equation generalises the Pesin formula (6.13) to β-values different from 1. Notice that

L_S(1 − β) ≠ Σ_{i=1}^{D_u} L_i(1 − β)   for β ≠ 1,

as the fluctuations of the single exponents are typically correlated with each other.

7 Finite-amplitude exponents

Standard Lyapunov exponents refer to the evolution of infinitesimal perturbations. A possible strategy for the study of finite perturbations consists in progressively including higher-order corrections to the linear dynamics. This route was suggested and explored by Farmer and Sidorowich (1987) (see Pikovsky (1984a) for the use of corrections to linear dynamics in the context of the synchronisation transition), who introduced the concept of higher-order Lyapunov exponents to characterise the growth rate of higher-order derivatives. As a result, it was found that such exponents can be expressed as linear combinations of the standard Lyapunov exponents (Dressler and Farmer, 1992; Taylor, 1993). Therefore, it was concluded that this strategy leads neither to new tools nor to new indicators. It proves more fruitful to deal directly with finite perturbations. In the literature, starting from the original papers (Aurell et al., 1996, 1997), the term finite-size exponent is typically adopted to refer to the corresponding indicator. Since in this book we deal also with spatially extended systems, where “size” naturally refers to the spatial extension (see the so-called finite-size effects), we prefer to use the name finite-amplitude exponent (FAE), which alludes to the “amplitude” of a perturbation, rather than to its “size”. A comprehensive review of this concept can be found in Cencini and Vulpiani (2013). Here, we focus on those general aspects that help to clarify the differences and analogies with the standard linear analysis.

7.1 Finite vs. infinitesimal perturbations

Given two neighbouring trajectories U_0(t) and U_1(t), let Δ(t) = |U_0(t) − U_1(t)| denote their distance at time t. As long as Δ(t) is finite, its growth rate varies with the value of Δ itself; this dependence can be captured by a properly defined FAE. Let us fix a series of thresholds θ_n and measure the corresponding (first-passage) crossing times t_n. This way, when different measurements are compared (and possibly averaged), it is clear that all refer to the same instantaneous value of the distance Δ. As for small Δ the perturbation evolution is exponential, it is natural to space the thresholds exponentially, according to some prescribed multiplicative factor r: θ_n = rθ_{n−1}, r > 1 (see Fig. 7.1). One can then define the local FAE as

L(θ_n) = ln r / (t_n − t_{n−1}).


Fig. 7.1 A sketch of the procedure to determine the finite-amplitude Lyapunov exponent.

By then averaging the time interval over an ensemble of M different initial conditions,

⟨t_n − t_{n−1}⟩ = (1/M) Σ_{i=1}^{M} (t_n^{(i)} − t_{n−1}^{(i)}),

one obtains the true finite-amplitude exponent (Aurell et al., 1996, 1997)

Λ(θ_n) = ln r / ⟨t_n − t_{n−1}⟩.   (7.1)

The reason why the denominator, rather than the local FAE itself, has been averaged in Eq. (7.1) follows from the need of consistency with the definition of the usual Lyapunov exponent. For θ_n → 0, if a perturbation is amplified by a factor r^N over a time t_N, i.e. if Δ(t_N) = r^N Δ(t_0) (with Δ(t_N) ≪ 1), one can write

λ = lim_{t_N→∞} ln r^N / t_N = lim_{N→∞} ln r / [(1/N) Σ_{i=1}^{N} (t_i − t_{i−1})] = ln r / ⟨t_i − t_{i−1}⟩.

The last term in the r.h.s. of this equation is equivalent to the r.h.s. of Eq. (7.1). Altogether, finite-amplitude and finite-time exponents differ not only because of the amplitudes of the perturbations that are being considered, but also because of the protocol adopted to determine them. The role of this latter difference has been thoroughly discussed in the analysis of Lagrangian coherent structures by Karrasch and Haller (2013) (see also Section 12.4), but its relevance is more general. In order to better clarify the point, it is convenient to consider the FAE in the limit of infinitesimal perturbations (Karrasch and Haller, 2013); the resulting indicator will be referred to as the first-passage-time Lyapunov exponent. Superficially, the finite-time and the first-passage-time Lyapunov exponents are defined in the same way, as (ln r)/τ, where τ is the elapsed time and r is the corresponding expansion factor. In the former case, however, τ is an independent variable (fixed a priori), while r(τ) is being measured, so as to obtain

Λ(t_0, τ) = (ln r(τ))/τ.¹ In the latter case, the expansion factor r is an independent variable, while the time interval τ(r) is the measured observable; the first-passage-time Lyapunov exponent is thereby defined as L(t_0, r) = (ln r)/τ(r). As far as averages are being considered, we have already seen in the first part of this section that the two definitions are equivalent. The instantaneous observables, however, may have different properties. Whenever Δ(t) increases (or decreases) monotonically, no striking differences are expected and one can choose either definition. The situation is different in the presence of strong fluctuations, especially if a non-monotonic behaviour is observed (as, e.g., shown in Fig. 7.1). In this case the dependence of Λ on τ is continuous, while that of L(t_0, r) on r may have jumps due to the local maxima of Δ(t) as a function of time. This difference makes the finite-time LE a better choice, at least in situations where the (local) time stability of particular states is to be characterised, as, e.g., in Lagrangian coherent structures.

7.2 Computational issues

Several problems may arise when the general FAE definition (7.1) is to be implemented. Here, we summarise the most important ones. First of all, Eq. (7.1) applies only to continuous-time dynamics, where first-passage times can be sharply identified. It can be easily implemented, however, in discrete-time systems, by extending the definition of Δ(t) to non-integer times,

ln Δ(t) = ln Δ(m(t)) + (t − m(t)) ln[Δ(m(t) + 1)/Δ(m(t))],

where m(t) is the largest integer smaller than t. This formula amounts to assuming a uniform exponential growth in between consecutive time instants. It is easy to check that this definition gives the correct averaging in the linear regime. A more serious problem arises when averaging has to be implemented to obtain a robust indicator. In the case of infinitesimal perturbations, it is sufficient to select a reference trajectory and thereby average in time (with a sporadic rescaling of the distance to avoid numerical overflows). In fact, the amplitude of the perturbation is immaterial, and contributions obtained at all times can be used to determine the average value. In the FAE case, rescaling of the perturbation is incorrect, as the local orientation of the perturbation is expected to depend on its amplitude. Accordingly, when the maximal distance is reached (which corresponds to the attractor size), it is necessary to start over with a new initial condition. This protocol does not entirely solve the computational problem: a randomly selected initial condition is not typically aligned along the most expanding direction. The only context where this is not a problem is that of one-dimensional maps, where perturbations are scalar quantities. The difficulty can be overcome, nevertheless, by selecting the initial perturbation along the direction of the first covariant Lyapunov vector (as obtained with the help of a standard linear analysis); see Letz and Kantz (2000).

7.2 Computational issues

Several problems may arise when the general FAE definition (7.1) is to be implemented. Here, we summarise the most important ones.

First of all, Eq. (7.1) applies only to continuous-time dynamics, where first-passage times can be sharply identified. It can nevertheless be easily implemented in discrete-time systems, by extending the definition of Δ(t) to non-integer times,

ln Δ(t) = ln Δ(m(t)) + (t − m(t)) ln[Δ(m(t) + 1)/Δ(m(t))],

where m(t) is the largest integer smaller than t. This formula amounts to assuming a uniform exponential growth between consecutive time instants. It is easy to check that this definition gives the correct averaging in the linear regime.

A more serious problem arises when an average has to be taken to obtain a robust indicator. In the case of infinitesimal perturbations, it is sufficient to select a reference trajectory and average in time (with a sporadic rescaling of the distance to avoid numerical overflows): the amplitude of the perturbation is immaterial, and contributions obtained at all times can be used to determine the average value. In the FAE case, rescaling the perturbation is incorrect, as the local orientation of the perturbation is expected to depend on its amplitude. Accordingly, when the maximal distance is reached (which corresponds to the attractor size), it is necessary to start over with a new initial condition. This protocol does not entirely solve the computational problem, as a randomly selected initial condition is not typically aligned along the most expanding direction. The only context where this is not a problem is that of one-dimensional maps, where perturbations are scalar quantities. The difficulty can be overcome, nevertheless, by selecting the initial perturbation along the direction of the first covariant Lyapunov vector (as obtained with the help of a standard linear analysis); see Letz and Kantz (2000).
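The log-interpolation trick can be sketched in a few lines of Python. The example below is our own illustration, not the book's code: it uses the fully chaotic logistic map as a hypothetical test system and estimates the first-passage-time exponent in the linear regime, which should be compared with the known value ln 2.

```python
import numpy as np

def first_passage_time(f, jac, x0, d0, threshold, nmax=100_000):
    """Time at which a linearised perturbation first reaches `threshold`,
    using ln D(t) = ln D(m) + (t - m) ln[D(m+1)/D(m)] between integer times."""
    x, d = x0, d0
    for m in range(nmax):
        d_next = abs(jac(x)) * d          # one step of tangent-space growth
        if d_next >= threshold:
            # locate the crossing at a non-integer time by log-interpolation
            frac = (np.log(threshold) - np.log(d)) / (np.log(d_next) - np.log(d))
            return m + frac
        x, d = f(x), d_next
    return np.inf

# Fully chaotic logistic map: largest Lyapunov exponent = ln 2
f = lambda x: 4.0 * x * (1.0 - x)
jac = lambda x: 4.0 - 8.0 * x

rng = np.random.default_rng(1)
r = 1e-4 / 1e-10                          # overall expansion factor
taus = [first_passage_time(f, jac, rng.uniform(0.1, 0.9), 1e-10, 1e-4)
        for _ in range(500)]
lam_fp = np.log(r) / np.mean(taus)        # first-passage-time estimate of the LE
```

Since the perturbation is kept in tangent space, this corresponds to the infinitesimal (first-passage-time) limit discussed in the previous section.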
¹ The dependence on t0 is introduced to specify that Λ refers to a trajectory starting at time t0.



Furthermore, some care should be taken in the presence of multistable dynamics. Since a finite perturbation may induce a jump towards a different attractor, it is necessary to restrict the average to the perturbed trajectories which converge to the same attractor visited by the unperturbed one.

For small enough amplitudes, the FAE is expected to coincide with the largest Lyapunov exponent, while it saturates to zero for large amplitudes, since perturbations cannot be larger than the size of the accessible phase space. In the intermediate range, Λ tells us how the growth of a perturbation is affected by nonlinearities. This is illustrated in Fig. 7.2, with reference to the Lorenz model, where one can notice a long plateau, which corresponds to the maximal Lyapunov exponent, followed by a drop to zero. The same data reveal, however, an additional feature: an initial peak that is obtained when the initial condition is chosen randomly in time (see the circles in Fig. 7.2). This is a spurious effect due to the sampling procedure adopted to select the initial conditions: points characterised by a smaller local Lyapunov exponent are selected with a larger probability and, as a result, the statistics of the average first-passage time are biased, at least for the first few crossings. On the other hand, if one starts the calculation of the FAE exactly when the linear perturbation crosses a given threshold, then the statistics are unbiased (see the squares in Fig. 7.2).

An unavoidable problem is that the FAE definition involves neither an infinite-time limit (the trajectories that are being followed are not only finite but even rather short) nor that of infinitesimal perturbations. As a result, the FAE is not mathematically well defined: its value depends on the selection of the variables and on the metric used for the computation of the distance Δ.
Nevertheless, it can be profitably used to extract useful information on the stability, whenever it is strongly scale-dependent. We discuss some applications in the next section.
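The restart protocol described above can be sketched for the Lorenz model. The code below is our own simplified illustration (with our own integrator and restart rule; initial conditions are chosen randomly in time, i.e. the biased variant corresponding to the circles of Fig. 7.2): thresholds are spaced by a factor √2, and the FAE at each level is estimated from the mean first-passage time between consecutive levels.

```python
import numpy as np

def lorenz(u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return np.array([s * (y - x), r * x - y - x * z, x * y - b * z])

def rk4(u, dt):
    k1 = lorenz(u)
    k2 = lorenz(u + 0.5 * dt * k1)
    k3 = lorenz(u + 0.5 * dt * k2)
    k4 = lorenz(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.02
levels = 1e-6 * np.sqrt(2.0) ** np.arange(47)   # amplitude thresholds, up to ~8
waits = [[] for _ in levels]                    # inter-level passage times

rng = np.random.default_rng(2)
u = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                           # relax onto the attractor
    u = rk4(u, dt)

for _ in range(20):                             # restarts, chosen randomly in time
    v = u + rng.normal(scale=1e-7, size=3)      # fresh finite perturbation
    k, t, t_prev = 0, 0.0, 0.0
    for _ in range(15000):
        u, v, t = rk4(u, dt), rk4(v, dt), t + dt
        d = np.linalg.norm(u - v)
        while k < len(levels) and d >= levels[k]:
            waits[k].append(t - t_prev)         # first-passage interval
            t_prev = t
            k += 1
        if k == len(levels):
            break                               # attractor size reached: restart

# FAE between consecutive levels: Lambda = ln(sqrt(2)) / <tau>
fae = np.array([np.log(np.sqrt(2.0)) / np.mean(w) for w in waits[1:]])
```

A plateau close to the maximal Lyapunov exponent appears at small amplitudes, followed by a drop at large amplitudes; starting the clock at threshold crossings of the linearised dynamics would instead reproduce the unbiased statistics (squares in Fig. 7.2).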

Fig. 7.2 The finite-amplitude exponent Λ(δ) for the Lorenz model. The amplitude levels are scaled by a factor √2. Circles: initial perturbations are chosen randomly in time; squares: initial perturbations are chosen according to the unbiased statistics.



So far, we have assumed the underlying dynamics to be unstable. One can apply the same approach also to stable systems, by starting with large perturbations and following the threshold-crossing downwards towards increasingly fine scales. This has been done, for instance, by Ginelli et al. (2003) to identify the percolation threshold in a chain of stochastic maps. A weakness of this backward approach is the lack of an objective protocol to define the initial direction of the perturbation. It may seem desirable to extend the definition of the FAE to cover further directions in phase space, obtaining a spectrum as for the usual Lyapunov exponents. Unfortunately, the difficulty of extending the scalar product to a nonlinear environment has prevented any further steps in this direction.

7.2.1 One-dimensional maps

For maps F(U(t)) of the unit interval, the direction of the perturbation is not an issue, and one can determine the FAE as a phase-space average of the local expansion rate

Λ(δ) = ⟨ln |μ(U, δ)|⟩,    (7.2)

where

μ(U, δ) = [F(U + δ/2) − F(U − δ/2)]/δ.

Here, we discuss an application of this formula in the case of the tent map (A.3) and of the asymmetric Bernoulli map [F(U) = U/a if U < a; F(U) = (U − a)/(1 − a) if U > a]. Since the invariant measure of both maps is flat, one can define the FAE as

Λ(δ) = 1/(1 − δ) ∫_{δ/2}^{1−δ/2} ln |μ(U, δ)| dU,    (7.3)

where the integral is restricted to the interval IT = [δ/2, 1 − δ/2], so as to ensure that both U1 = U − δ/2 and U2 = U + δ/2 lie within the unit interval, where the map is defined. Moreover, the multiplicative factor has been included to properly normalise the probability to 1 within IT. For both maps, the integral can be performed by splitting IT into the union of I1 = [δ/2, a − δ/2], I2 = [a − δ/2, a + δ/2] and I3 = [a + δ/2, 1 − δ/2]. In I1 (I3), both U1 and U2 are iterated according to the left (right) branch of the map, while in I2, U1 falls inside the left branch and U2 inside the right one. The resulting dependence of the FAE on δ is illustrated in Fig. 7.3a, where one can make a comparison with the numerical results obtained using the definition (7.1). In the case of the tent map, the analytic formula (7.3) (dashed line) reproduces almost exactly the numerical data (circles). For the Bernoulli map (see triangles and the solid line), the agreement is less satisfactory in the large-amplitude region. The reason is that the distribution of the midpoint (U1 + U2)/2, assumed to be flat in Eq. (7.3), is no longer such at larger amplitudes, when U1 and U2 become increasingly decorrelated. The problem would not arise if U were an angle variable (i.e. if the maps were defined on an interval with periodic boundaries).


Fig. 7.3

Finite-amplitude exponent for one-dimensional maps. (a) Tent (circles) and Bernoulli (triangles) maps. The absolute value of the slopes of the two map branches is 3 and 3/2 in both cases. The dashed and solid lines correspond to the theoretical predictions. The circles in panel (b) correspond to a numerical simulation of the map (A.14) for η = 0 (the other parameters are as in (A.14)) with additive noise defined as a sequence of i.i.d. random numbers uniformly distributed in [−0.1, 0.1].

An interesting phenomenon appears if the one-dimensional map has both a discontinuity and a negative (standard) Lyapunov exponent. In the purely deterministic case, the latter property would imply that the map has a stable periodic orbit (or, more generally, a stable invariant set) and the invariant measure is singular. For a noisy map, however, the invariant density is continuous and the averaging in (7.2) is nontrivial. Of course, in the case of noisy dynamics, the finite-amplitude Lyapunov exponent should be measured from the separation of two trajectories under the action of the same noise (see also Chapter 8). We illustrate a possible nontrivial dependence of Λ by referring to the map (A.14) for η = 0 (when it is discontinuous). Simulations are performed by assuming a uniformly distributed additive noise −d ≤ ξ(t) ≤ d and taking the absolute value of U to enforce a confinement of the dynamics within the unit interval. The numerical results from Eq. (7.2) are reported in Fig. 7.3b, where we see that while Λ < 0 for small δ, it becomes positive for moderate amplitudes. This is an example of how a strong nonlinearity (here mimicked by a discontinuity in the map) can generate a large-scale instability. Notice that in this case, the approximate formula Eq. (7.2) is preferable to the general definition; the first-passage-time approach would not be able to capture the mixture of stable/unstable properties of the dynamics.
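The mechanism can be reproduced with a toy model. The map below is our own hypothetical stand-in for (A.14), not the book's map: it is contracting everywhere (|F′| = 0.9 < 1) but has a discontinuity at U = 1/2, and the noise-smoothed invariant density is approximated by a flat one, so that Eq. (7.2) reduces to a plain average.

```python
import numpy as np

def f(u):
    # hypothetical discontinuous map: contracting (|F'| = 0.9 < 1),
    # with a jump of size 0.45 at u = 1/2
    return np.where(u < 0.5, 0.9 * u, 0.9 * (u - 0.5))

def fae(delta, n=100_000):
    # Eq. (7.2), with the invariant density approximated as flat on [0, 1]
    u = np.linspace(delta / 2, 1.0 - delta / 2, n)
    mu = (f(u + delta / 2) - f(u - delta / 2)) / delta
    return float(np.mean(np.log(np.abs(mu))))
```

Λ(δ) is negative at small δ, where the local contraction dominates, but turns positive at moderate amplitudes, where pairs of points straddling the discontinuity are mapped far apart, mimicking the stable/unstable mixture of Fig. 7.3b.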

7.3 Applications

The main applications of FAEs concern the stability of systems with a variety of temporal and amplitude scales, as well as spatially extended systems. Here, we discuss a couple of examples, starting from two coupled Lorenz systems (A.9) with different temporal and amplitude scales,


c ẋ1 = σ(y1 − x1)
c ẏ1 = r x1 − y1 − x1 z1
c ż1 = −b z1 + x1 y1 + ε z2
ẋ2 = σ(y2 − x2)
ẏ2 = r x2 − y2 − d x2 z2
ż2 = −b z2 + d x2 y2 + d⁻¹ ε z1.

The parameter c > 1 is responsible for the time-scale separation: subsystem 1 is c times slower than subsystem 2. The parameter d > 1 controls the separation between the scales of the variables themselves: all the amplitudes in subsystem 2 are d times smaller than in subsystem 1. Correspondingly, perturbations in subsystem 2 grow faster, but they saturate at amplitudes smaller than the slowly growing perturbations in subsystem 1. This separation of scales is clearly revealed by the evolution of the FAE (see Fig. 7.4). At small amplitudes, the FAE is equal to the Lyapunov exponent of the standard Lorenz system (subsystem 2); after a sharp drop, a second, lower plateau arises at large amplitudes, characterising the perturbation growth within subsystem 1.

The second application is the so-called stable-chaos phenomenon (Crutchfield and Kaneko, 1988; Politi et al., 1993), where an irregular dynamics in spatially extended systems is observed despite the negative value of the largest Lyapunov exponent. The simplest model where such a dynamics can occur is a lattice of coupled maps of the type (A.13, A.14). In the purely deterministic case, each map possesses a stable periodic orbit of period three (for the parameters given in the caption of Fig. 7.3). Thus, the dynamics of


Fig. 7.4 The finite-amplitude exponent for the two-scale Lorenz model, with c = 10 and standard parameters (σ = 10, r = 28, b = 8/3). The threshold levels are scaled by a factor √2; ε = 0.1. Squares: ratio of amplitude scales d = 10; circles: d = 20.



a homogeneous initial state is periodic and linearly stable. However, an inhomogeneous state may remain “turbulent” for extremely long times (exponentially long with the system length). This “turbulence” is sustained by the finite-amplitude instability already seen in Fig. 7.3b (the main difference being that in the coupled-map lattice the effective noise is generated by the spatial coupling). In practice, localised finite perturbations can propagate as “nonlinear” waves (Cencini and Torcini, 2001) in spite of the small-amplitude stability (see also Chapter 11).
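For reference, the vector field of the two-scale model introduced above can be coded directly. A minimal Python sketch follows (function name and layout are ours; parameter values as in the caption of Fig. 7.4):

```python
import numpy as np

def two_scale_lorenz(u, c=10.0, d=10.0, eps=0.1, s=10.0, r=28.0, b=8.0 / 3.0):
    """r.h.s. of the two coupled Lorenz systems: subsystem 1 is c times
    slower, and its amplitudes are d times larger, than subsystem 2."""
    x1, y1, z1, x2, y2, z2 = u
    return np.array([
        (s * (y1 - x1)) / c,                  # c * dx1/dt = sigma (y1 - x1)
        (r * x1 - y1 - x1 * z1) / c,
        (-b * z1 + x1 * y1 + eps * z2) / c,
        s * (y2 - x2),
        r * x2 - y2 - d * x2 * z2,
        -b * z2 + d * x2 * y2 + eps * z1 / d,
    ])
```

For ε = 0, the substitution (x2, y2, z2) → (X, Y, Z)/d maps subsystem 2 back onto the standard Lorenz equations, which makes the amplitude rescaling by d explicit.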

8 Random systems

The problem of computing Lyapunov exponents arises not only in the study of dynamical systems but also in the context of noisy or disordered linear systems. In the latter case, the fluctuating properties of the Jacobian (or, more generally, of the matrices which determine the linear dynamics) are given beforehand, when the structure of the noise is postulated, rather than emerging self-consistently from the evolution of a deterministic dynamical system. This simplification allows for the development of powerful analytic approaches, especially when the noise is assumed to be δ-correlated. In the first section we discuss linear discrete-time systems, where the problem of determining the Lyapunov exponents can be formulated in terms of products of random matrices. We present various setups, starting from the case of weak disorder (noise), where the matrices are nearly equal to each other, and show that the variation of the Lyapunov exponents is generically proportional to the square of the disorder amplitude. For an arbitrary amplitude of noise, it is hard to extract analytic information on the entire spectrum of LEs, unless the dynamics is highly symmetric (essentially isotropic), as discussed in Section 8.1.2; otherwise, only the largest LE can be typically determined (in a more or less approximate way). The sparse matrices discussed in Section 8.1.3 provide one such setup, where it is even possible to detect a phase transition upon increasing the amount of disorder. Another setup where powerful semi-analytic techniques have been developed is when the disorder manifests itself as a selection among a few different options (polytomic noise). The second section of this chapter is devoted to the analysis of continuous-time systems, i.e. of linear stochastic differential equations. One- and two-dimensional linear systems are first discussed in the presence of multiplicative noise. The general Khasminskii theory is then briefly reviewed in Section 8.2.3. 
Unfortunately, closed expressions for the LEs can hardly be obtained if the noise is not δ-correlated and, even less so, in high-dimensional spaces. A remarkable exception is the fully coupled setup discussed in Section 8.2.4, where the evaluation of the largest LE can be reduced to the computation of an eigenvalue of a Schrödinger operator. The last section is devoted to a discussion of systems where noise and nonlinearities are simultaneously present. First, we briefly outline a stimulating analogy with supersymmetric quantum mechanics and then discuss the weak-noise limit, which, not unexpectedly, reveals analogies with the purely linear case presented in Section 8.1. The role of LEs in noisy nonlinear systems is finally discussed in Section 8.3.3.



8.1 Products of random matrices

The analysis of random matrices deserves special attention. Their study helps to clarify discrete-time dynamical systems and also several physical problems, such as the propagation of waves in random linear systems and the statistical properties of disordered systems. An application of the former type (i.e. Anderson localisation) is extensively discussed in Section 12.1, while an instance of the latter type, namely random polymers, is illustrated later in this section. The reader interested in other setups is invited to consult Crisanti et al. (1993). Here, we discuss various techniques which allow us to derive (approximate and exact) analytical expressions.

For later convenience, we first recall that, given the linear evolution rule uk(t + 1) = A(t)uk(t) and the product of matrices P(t) = ∏_{j=1}^{t} A(j), the object of study is the sum Sk of the first k Lyapunov exponents,

Sk = lim_{t→∞} (1/t) ln [ ‖P(t)u1(1) ∧ … ∧ P(t)uk(1)‖ / ‖u1(1) ∧ … ∧ uk(1)‖ ]
   = lim_{t→∞} (1/t) Σ_{j=1}^{t} ln [ ‖A(j)u1(j) ∧ … ∧ A(j)uk(j)‖ / ‖u1(j) ∧ … ∧ uk(j)‖ ],    (8.1)

where the vectors uk(1) are linearly independent. The time average in Eq. (8.1) can, in principle, be determined as an ensemble average, but this is a formidable task, as the addenda depend on the orientation of the parallelepiped u1(j) ∧ … ∧ uk(j) and, in the case of time-correlated processes, on the past matrices as well.
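In practice, the sums Sk (and the individual exponents) are computed with the QR method of Section 3.2. The following sketch is our own minimal implementation, estimating the full Lyapunov spectrum of a product of given matrices:

```python
import numpy as np

def lyap_spectrum(matrices):
    """Lyapunov exponents of a matrix product via QR re-orthonormalisation."""
    n = matrices[0].shape[0]
    q = np.eye(n)
    sums = np.zeros(n)
    for a in matrices:
        q, r = np.linalg.qr(a @ q)
        sums += np.log(np.abs(np.diag(r)))   # logs of the expansion factors
    return np.sort(sums / len(matrices))[::-1]

# Sanity check on a trivial product: all matrices equal to diag(2, 1/2)
les = lyap_spectrum([np.diag([2.0, 0.5])] * 100)
```

Here S1 = ln 2 and S2 = 0; for genuinely random matrices the same routine provides the averages entering Eq. (8.1).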

8.1.1 Weak disorder

We start by analysing matrices A of the type A(t) = A0 + εB(t), where A0 is constant and ε is a small parameter which gauges the amplitude of the noise/disorder, while B(t) is a fluctuating matrix with zero-average entries (if not, the average contributions could be absorbed into A0). Here, we show that the variation induced by the disorder on the Lyapunov exponents is generally of order O(ε²), with exceptions in some critical cases.

Non-degenerate spectra

We first consider matrices A0 characterised by distinct eigenvalues (the moduli of the eigenvalues are assumed to be all mutually different). For reasons that will soon become clear, it is convenient to work in the basis which diagonalises A0 and where the eigenvalues are ordered from the most to the least expanding one. We shall closely follow the approach outlined by Derrida et al. (1987). From Eq. (8.1), it is clear that the Lyapunov exponents can be determined from the matrix P(t) = ∏_{j=1}^{t} A(j),



which, up to second order in ε, can be written as

P(t) ≈ (1 + εC + ε²D) A0^t,

where A0 is diagonal and

C = Σ_{i=1}^{t} A0^{i−1} B(i) A0^{−i},
D = Σ_{1≤i<j≤t} A0^{i−1} B(i) A0^{j−i−1} B(j) A0^{−j}.
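The announced O(ε²) scaling of the correction is easy to verify numerically. The sketch below is our own test case (A0 = diag(2, 1/2), Gaussian B(t)); the same noise sequence is reused for the two values of ε, so that the ratio of the shifts is estimated accurately:

```python
import numpy as np

def largest_le(a0, eps, steps=100_000, seed=0):
    """Largest LE of the product of matrices A0 + eps*B(t), via the power method."""
    bs = np.random.default_rng(seed).standard_normal((steps, 2, 2))
    v = np.array([1.0, 0.0])
    total = 0.0
    for b in bs:
        v = (a0 + eps * b) @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm                      # rescale to avoid overflow
    return total / steps

a0 = np.diag([2.0, 0.5])               # non-degenerate unperturbed spectrum
shift1 = largest_le(a0, 0.1) - np.log(2.0)   # O(eps^2) correction
shift2 = largest_le(a0, 0.2) - np.log(2.0)   # eps doubled: roughly quadrupled
```

The ratio shift2/shift1 is close to 4, consistent with a quadratic dependence on the disorder amplitude.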

n(y0, v) − S(Λ) > 0, which ensures that this growth rate is generically observable in one of the possible paths. When this is not the case,


a smaller Λ-value should be selected, such that n(y0, v) − S(Λ) = 0 (see Livi et al. (1992) for a more detailed discussion). For v = 0 and a symmetric coupling (ε− = ε+ = ε), it follows that y0 = 1 − 2ε, so that the two contributions n(y0, 0) and g(v0, 0) cancel each other out, yielding λ = Λ(0) = L(1). These results are exact only in the absence of fluctuations, when L(0) = L(1), as the assumption of independent paths is clearly false, since many paths share the same links. As long as fluctuations are negligible, Eqs. (10.11, 10.12) nevertheless provide a good approximation. An important type of fluctuation is that induced by the presence of positive and negative signs. In this case, as discussed by Pikovsky (1993), two phases may exist: (i) one where one sign prevails and the LE is determined by the growth rate of the mean amplitude and (ii) one where the LE is determined by the variance of the perturbations.

10.3.2 Relationship between convective exponents and chronotopic analysis

It can easily be shown that Λ(v) is connected to the largest temporal Lyapunov exponent λ(0, μ) via a Legendre-type transform. Eq. (10.5) implies that the perturbation has a locally exponential profile with a spatial rate

μ = dΛ(v)/dv    (10.13)

at the point x = vt. From the chronotopic analysis, we know that such a perturbation evolves as

u(vt, t) ≃ exp[(λ(0, μ) + μv)t].    (10.14)

Notice that with these conventions μ < 0 in the positive-x region. By combining Eqs. (10.5) and (10.14), one obtains

Λ(v) = λ(0, μ) − μ dλ(0, μ)/dμ,

which, together with Eq. (10.13), can be interpreted as a Legendre transform from the pair (Λ, v) to the pair (λ, μ). The inverse transform reveals the further constraint

v = −dλ(0, μ)/dμ.
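The transform pair can be checked numerically on a toy dispersion relation. In the sketch below (our own example, with the sign conventions used here, Λ = λ + μv and v = −dλ/dμ), λ(0, μ) = λ0 + Dμ² is sampled on a grid, and the Legendre transform yields the parabolic spectrum Λ(v) = λ0 − v²/(4D):

```python
import numpy as np

def convective_exponent(mu, lam):
    """Legendre transform of a sampled branch lam(mu):
    v = -dlam/dmu and Lambda(v) = lam + mu*v (cf. Eqs. (10.13)-(10.14))."""
    v = -np.gradient(lam, mu)          # central differences in the interior
    return v, lam + mu * v

lam0, big_d = 0.5, 0.25
mu = np.linspace(-1.0, 1.0, 201)
v, conv = convective_exponent(mu, lam0 + big_d * mu**2)
# expected: conv = lam0 - v**2 / (4 * big_d), i.e. 0.5 - v**2
```

In applications one would replace the toy parabola by a numerically computed chronotopic spectrum λ(0, μ).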

The corresponding geometrical construction is presented in Fig. 10.9. It is important to remark that, numerically, it is more convenient to determine Λ(v) from the Legendre transform, as it is not necessary to deal with continuously expanding lattices. In order to determine the velocity corresponding to a given μ-value, it is necessary to compute the derivative of λ(μ). Since the numerical computation of derivatives is always affected by large errors, it is convenient to perform a few more analytical steps. Following Politi and Torcini (1992), we introduce

vx(t, μ + dμ) = vx(t, μ) + zx(t, μ) dμ



Fig. 10.9 Illustration of the relationship between the convective exponent Λ(v) and the chronotopic spectrum λ(0, μ).

in the recursive relation (10.4), obtaining an equation for the deviation zx(t) (for the sake of simplicity, we now drop the dependence on μ):

zx(t + 1) = F′(Ux(t)) {ε e^{−μ} [zx−1(t) − vx−1(t)] + (1 − 2ε) zx(t) + ε e^{μ} [zx+1(t) + vx+1(t)]} + b zx(t − 1).    (10.15)

The knowledge of zx and of vx allows us to determine λ′(μ). In fact, by computing the μ-derivative of the definition of the chronotopic Lyapunov exponent,

λ(μ) = lim_{t→∞} (1/2t) log ‖v(t)‖²,

one obtains

λ′(μ) = lim_{t→∞} [v(t) · z(t)] / (t ‖v(t)‖²),

where “·” stands for the scalar product.

In order to better understand the selection process of the propagation velocity, it is convenient to go back to the evolution of a single exponential profile wx(t) = exp[λ(μ)t + μx]. Its velocity is obviously V(μ) = −λ/μ. From the Legendre transform it follows that

dV/dμ = −(1/μ) dλ/dμ + λ/μ² = Λ(v)/μ².

Since the perturbation velocity v0 is identified by the condition Λ(v0) = 0, we see that v0 also corresponds to the minimum of V(μ). In other words, as long as the evolution is controlled by linear mechanisms, it coincides with the velocity of the slowest front. By comparison with wave propagation, the velocity of the single front corresponds to the phase velocity, while v0 is equivalent to the group velocity.

This analysis has basically shown that the maximal convective expansion rate can be determined by Legendre transforming λ(0, μ). It is tempting to extend the concept to a generic spectral component λ(ρ, μ) with ρ > 0, to obtain the propagation velocity of the other exponents. It is, however, not yet clear whether this concept is purely formal or not.



Kenfack Jiotsa et al. (2013) have shown that the resulting convective exponent is consistent with a more direct determination of Λ(ρ, v), obtained by following the perturbation amplitude within a finite window and imposing suitable boundary conditions. Unfortunately, no direct definition has yet been given of such generalised convective exponents, and the question of whether they are meaningful observables is still unanswered. There is, nevertheless, an interesting point to notice: as can also be seen from Fig. 10.3, there exists a critical value ρc above which λ(ρ, μ) has a local maximum, rather than a minimum, at μ = 0. This implies that the Legendre transform would yield a Λ(ρ, 0) that is strictly smaller than λ(ρ, 0). In principle this phenomenon might occur for the maximum exponent as well, but it is not clear whether some general properties forbid a different concavity of λ(0, μ).

10.3.3 Damage spreading

In many systems, the linear velocity v0 coincides with the propagation velocity of directly measurable physical observables, such as the sound velocity in the Fermi-Pasta-Ulam chain (A.15) or the propagation of correlations in the complex Ginzburg-Landau equation (A.18) (Giacomelli et al., 2000). This is, however, not always the case; for instance, in the so-called hard-point gas (a chain of elastically colliding free particles), the sound velocity is larger than the value one could infer from a linear stability analysis (Delfini et al., 2007). It is, therefore, instructive to look at the propagation of finite-amplitude perturbations or, as it is otherwise called, damage spreading, interpreting the perturbation as a sort of damage to a given configuration.

Even the front of a finite propagating perturbation must be preceded by a leading edge, where the perturbation is infinitesimal. Therefore, the velocity vf of the front must be equal to the velocity v(μ*) of some exponential profile e^{−μ*x}. Were the propagation controlled by linear mechanisms, the spatial profile of the front would be characterised by a rate μ0 = (dΛ/dv)(v0). It is hard to imagine a mechanism by which μ* is smaller than μ0; it is more natural to expect μ* > μ0 and, therefore, vf > v0 (steeper fronts propagate faster). A numerical study of the coupled maps (A.13, A.14) indeed confirms such expectations. The solid and dashed curves in Fig. 10.10 correspond to vf and v0, respectively. There, we see that vf is strictly larger than v0 for η < ηc ≈ 1.2 × 10⁻³, while above ηc the two velocities coincide within numerical accuracy. One can also notice that the linear velocity is not defined for η < η*, where the system is linearly stable and no propagation of infinitesimal perturbations can occur at all. As discussed by Torcini et al. (1995), the mechanism responsible for the finite difference between v0 and vf is that perturbations of increasing amplitude (which are naturally present in the bulk of the perturbation) propagate faster and thereby tend to push the corresponding front. The occurrence of velocities larger than the linear one is indeed fairly general and not just restricted to the aforementioned abstract model (see, e.g., Cencini and Torcini (2001) and Delfini et al. (2007)). This phenomenology is conceptually equivalent to what is observed in the context of front propagation (see, e.g., fronts connecting steady states in reaction-diffusion systems, discussed by van Saarloos (1988, 1989)), which is effectively described by the famous Fisher-Kolmogorov-Petrovsky-Piskunov equation (Fisher, 1937; Kolmogorov et al., 1937).



Fig. 10.10

Linear (v0, solid curve) and front (vf, dashed curve) velocities for the model (A.13, A.14) with ε = 1/3, versus the width η of one of the three intervals defining the local map F. Standard deterministic chaos exists only for η > η*. Beyond ηc, v0 = vf.

Altogether, the evidence of a front velocity vf strictly larger than the linear velocity v0 can be considered as an extension of the concept of stable chaos to a regime where local instabilities are present but not strong enough to control spreading phenomena (Politi and Torcini, 1994).

Binary variables

As already discussed in Section 10.1.1, the usual Lyapunov exponents cannot be defined at all in discrete-state systems, due to the impossibility of introducing infinitesimal perturbations. In such a case, one is bound to use the velocity vf of finite-amplitude perturbations as a tool to quantify the degree of instability. If the local variables are binary, another approach has been proposed, based on Boolean algebra. In this case, the difference between two states can be expressed through the XOR operation, which gives 0 if the states are equal and 1 if they differ. The smallest possible perturbation (a so-called damage) is just a change of the state Ux of a single site, which can also be represented by the XOR operation: U′x = Ux XOR 1. Let us denote by U(0) and U′(0) the two initial configurations and ask how the difference δU(t) = U(t) XOR U′(t) evolves. There are two limit cases: (i) the perturbation eventually dies out, so that after some time U = U′; this is the stable scenario; (ii) the perturbation grows in time, implying an unstable dynamics. So long as the updating rule is local, this growth looks like spreading, whence the name “damage spreading” (Kauffman, 1969; Herrmann, 1990). The distance δU(t) can, at most, grow linearly in time, with a velocity 2vf (as it grows in two directions).

A more Lyapunov-like indicator has been introduced by Bagnoli et al. (1992), by referring to a pool of perturbations. Whenever the evolution of a given pattern


U′(t) gives rise to m > 1 defects, m − 1 replicas are added to the pool, with a possible exponential growth of the cardinality of the pool itself. In practice, there is no need to handle a growing number of configurations explicitly; it is sufficient to introduce the Boolean-Jacobian matrix J, whose entries,

Jxy = ∂Ux(t + 1)/∂Uy(t) = Ux(t + 1) XOR U′x(t + 1),    U′x(t) = Ux(t) XOR δxy,

are either equal to 0 or 1. Next, the perturbation vector N is introduced, whose component Nx counts the number of replicas with a defect in x. N(t) evolves according to N(t + 1) = J N(t). A Boolean Lyapunov exponent λB can finally be defined in the usual way, as the asymptotic growth rate

λB = lim_{T→∞} (1/T) ln [ ‖N(T)‖ / ‖N(0)‖ ].

λB is more an entropy-like than a Lyapunov-like observable, as it arises from the counting of possible future patterns, although it is not exactly equivalent to the dynamical entropy, since the procedure does not take into account the correlations among different neighbouring sites. Some applications of this approach can be found in the work of Bagnoli et al. (1992); a generalisation has been discussed by Martin (2007).
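The construction is easy to implement. The sketch below is our own illustration, for the elementary cellular automaton rule 150 with periodic boundaries: the Boolean Jacobian is built by flipping one site at a time, and N(t + 1) = J N(t) is iterated with a normalisation at each step. For this rule, every site has exactly three Boolean "derivatives" equal to 1, so λB = ln 3.

```python
import numpy as np

def step(u):
    # CA rule 150: U_x(t+1) = U_{x-1} XOR U_x XOR U_{x+1} (periodic boundaries)
    return np.roll(u, 1) ^ u ^ np.roll(u, -1)

def boolean_jacobian(u):
    n = len(u)
    base = step(u)
    jac = np.zeros((n, n), dtype=float)
    for y in range(n):
        flipped = u.copy()
        flipped[y] ^= 1                   # U'_x(t) = U_x(t) XOR delta_xy
        jac[:, y] = base ^ step(flipped)  # J_xy = U_x(t+1) XOR U'_x(t+1)
    return jac

rng = np.random.default_rng(3)
u = rng.integers(0, 2, size=32)
n_vec = np.ones(32) / 32.0                # perturbation-counting vector N
log_growth = 0.0
for _ in range(50):
    n_vec = boolean_jacobian(u) @ n_vec
    g = n_vec.sum()
    log_growth += np.log(g)
    n_vec /= g                            # normalise to avoid overflow
    u = step(u)
lam_b = log_growth / 50                   # Boolean Lyapunov exponent
```

For state-dependent rules, J changes along the trajectory, and the same loop yields the pool-growth rate of Bagnoli et al. (1992).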

10.4 Examples of high-dimensional systems

Many different dynamical systems are characterised by a large number of variables. In this section we discuss the most relevant model classes.

10.4.1 Hamiltonian systems

Obtaining analytical estimates for the LEs in multidimensional systems is highly difficult. In Hamiltonian models there is at least the advantage that the invariant measure is known from the Liouville theorem. In this section we show that some encouraging results can be obtained with the help of formulas derived for random systems. Here, we illustrate the potential of two approaches to estimate the largest LE and the entire spectrum for the Fermi-Pasta-Ulam chain. From the evolution equation (A.16) in real space, it is found that, in tangent space,

q̈x = qx−1 − 2qx + qx+1 + 3(Qx−1 − Qx)²(qx−1 − qx) + 3(Qx+1 − Qx)²(qx+1 − qx).    (10.16)



Largest Lyapunov exponent

Casetti et al. (1995) developed a geometric approach to estimate the largest Lyapunov exponent by deriving an equation for the so-called sectional curvature. Here, we briefly illustrate the key steps, following a substantially equivalent, but purely dynamical, derivation. Given Eq. (10.16), it is known that the largest Lyapunov vector of Hamiltonian systems is strongly localised (see Chapter 11 for a detailed discussion of Lyapunov vectors). A good approximation of the tangent-space dynamics is thus obtained by assuming that the maximum of the Lyapunov vector sits in x and neglecting the qx±1 terms:

q̈x + [2 + 3(Qx−1 − Qx)² + 3(Qx+1 − Qx)²] qx = 0.    (10.17)

This equation represents a big step forwards, since it reduces the problem to that of a noisy oscillator of the type investigated in Section 8.2.2. Let us now define

E = 2 + 6⟨(Qx−1 − Qx)²⟩,    Σ² = 4 Var[1 + 3(Qx−1 − Qx)²].

Both E and Σ² can be computed by making use of the invariant microcanonical measure. Eq. (10.17) can now be identified with Eq. (8.26), under the additional assumption that the diffusion coefficient of ξ is given by σ² = Σ²τ/2, where the time constant τ is still to be determined. Casetti et al. (1995) use some geometrical considerations to propose an explicit expression for 1/τ in terms of E and Σ. Once E and σ² are known, one can make use of Eq. (8.30) to determine the generalised Lyapunov exponent L(2). The detailed expressions of L(2) can be found in Pettini (2007). They are remarkably close to the true maximal LE for an extremely wide range of energy densities. It should be noted, however, that this approach is not always so effective, the weakness being presumably the estimation of the time constant τ.

The Lyapunov density spectrum

Determining the entire spectrum of LEs is an even more difficult task. Some results have been derived by Eckmann and Wayne (1988) for the Fermi-Pasta-Ulam chain of oscillators, making use of the theory developed by Newman for products of random matrices (see Chapter 8). In the case of N such oscillators, the Jacobian is composed of four N × N blocks,

K = ( 0   −Φ )
    ( 1    0 ),

where Φ is a tridiagonal matrix of the type

Φ = (  ω1      −ω1        0      ···    0    )
    ( −ω1    ω1 + ω2     −ω2     ···    0    )
    (  0      −ω2      ω2 + ω3   ···    0    )
    (  ⋮        ⋮          ⋮       ⋱     ⋮    )
    (  0        0          0      ···  ωN−1  ),

where ωi = 1 + 3(Qi+1 − Qi )2 . Free boundary conditions have been assumed, but as long as one is interested in the thermodynamic limit, this is not a relevant point. In order to clarify the various approximations involved in the whole procedure, it is instructive to split it into four steps. Step 1: Short correlation time If the correlation time τ of the matrix entries is so short that the nearest-neighbour coupling has not yet propagated to the next neighbours, one can write the finite-time evolution in tangent space approximately as Hτ = exp(Kτ ).

(10.18)

The Lyapunov exponents can then be determined by multiplying infinitely many Hτ matrices, which can be assumed to be mutually uncorrelated. This assumption is expected to be reasonably accurate at high energies, when the dynamics is highly chaotic.

Step 2: Thermodynamic equilibrium
Upon assuming that the Qi variables are distributed according to the equilibrium measure, one can analytically determine the distribution F(ω) of the random variables ωi. This is analogous to the previous approach for the computation of the largest LE.

Step 3: Spectral properties of random matrices
Given F, it is possible to determine the spectral density F̃J(ν) of the eigenvalues ν of K and thereby the density FJ(μ) of the eigenvalues of (HτᵀHτ)^(1/2). The corresponding equations are quite cumbersome and the determination of the final expression technically involved. The interested reader is invited to consult the original paper by Eckmann and Wayne (1988).

Step 4: From the single matrices to their product
By finally assuming that the matrices HτᵀHτ are rotation invariant, one can invoke Eq. (8.16), with FJ(μ) playing the role of S(μ). In view of the symplectic structure of Hτ, it is easily seen that λ(ρ) is symmetric around 1/2, as expected. Moreover,

λmax = (1/2) ln ∫ dμ FJ(μ) μ².

Testing the accuracy of the various steps is a difficult task. It is at least possible to check the last step, the conceptually most important one, since it establishes a connection between the spectrum of single matrices and that of an infinite product. Here, we show some numerical results obtained for the Fermi-Pasta-Ulam chain with


Fig. 10.11: (a) Average spectrum of the short-time Jacobian for an FPU chain with energy density 10, 32 oscillators and τ = 0.062, and (b) Lyapunov density spectrum for the same model (solid curve) and the one estimated from Eq. (8.16) (dashed curve).

periodic boundary conditions. Hτ has been determined by numerically integrating the equations in the tangent space for a time τ and averaging over a large number of trajectories. The results are plotted in Fig. 10.11a for an FPU chain with N = 32 oscillators and a high energy density (H/N = 10). In this condition, a strongly chaotic dynamics is expected. The Jacobians Hτ have been determined by directly integrating the equations over a time interval τ = 0.062; this is sufficiently short for the approximation (10.18) to be valid. In order to facilitate the comparison with the final Lyapunov density spectrum (see panel (b)), the μ value is plotted versus the integrated density s (i.e. s(μ) = ∫ FJ dμ). The vertical logarithmic scale allows one to recognise the symmetry μ(s) = μ(1 − s), typical of symplectic matrices. In panel (b), the outcome of direct simulations (solid curve) is superposed on the expression deriving from the application of Eq. (8.16) (dashed curve). Both spectra are approximately, though not exactly, linear. Quite remarkably, a quasi-linear shape is a general feature found also in products of random symplectic matrices of the type (10.18) for τ of order 1, where the arguments developed in step 4 do not apply (see Paladin and Vulpiani (1986) and Livi et al. (1987)). As for the relatively good quantitative agreement, one should not be too excited. In fact, while the shape of the spectrum obtained from Eq. (8.16) is substantially independent of the selection of the τ value, the same is not true for its scale. If τ is selected by following the previously discussed geometrical approach, it is found that τ = 0.124 and thereby λmax ≈ 0.69, twice as high as the expected value λmax = 0.32.
One can thus conclude that this theory provides a reasonable description of the shape of the spectrum, although we cannot expect it to be exact; direct studies of the Lyapunov vectors indeed show that they are localised, i.e. isotropy is not satisfied. The most delicate point is, however, the need for an appropriate protocol for the selection of the optimal τ value.
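The consistency check performed in step 4 can also be explored with a toy computation: multiply random symplectic matrices of the form Hτ = exp(Kτ) and extract the full Lyapunov spectrum with the standard QR procedure. In the sketch below, the matrix exponential is approximated by a truncated Taylor series, the distribution of the ωi is an arbitrary stand-in for the equilibrium distribution F(ω), and all parameter values are illustrative. Since det Hτ = exp(τ tr K) = 1, the spectrum must sum to zero.

```python
import numpy as np

def symplectic_product_spectrum(N=8, tau=0.05, steps=2000, amp=1.0, seed=1):
    """Lyapunov spectrum of a product of matrices H_tau = exp(K tau), with
    K = [[0, -Theta], [1, 0]] and Theta a random tridiagonal matrix of the
    FPU type (diagonal omega_1, omega_1+omega_2, ..., omega_{N-1})."""
    rng = np.random.default_rng(seed)
    dim = 2 * N
    I = np.eye(dim)
    Q = np.eye(dim)
    lam = np.zeros(dim)
    for _ in range(steps):
        # omega_i = 1 + 3 (dQ_i)^2, with dQ_i drawn afresh at every step
        # (i.e. no temporal correlations, as assumed in step 1)
        om = 1.0 + 3.0 * (amp * rng.random(N - 1)) ** 2
        Theta = (np.diag(np.r_[om, 0.0] + np.r_[0.0, om])
                 - np.diag(om, 1) - np.diag(om, -1))
        K = np.block([[np.zeros((N, N)), -Theta],
                      [np.eye(N), np.zeros((N, N))]])
        H, term = I.copy(), I.copy()
        for k in range(1, 13):            # truncated Taylor series for expm
            term = term @ (K * tau) / k
            H = H + term
        Q, R = np.linalg.qr(H @ Q)        # Gram-Schmidt / QR step
        lam += np.log(np.abs(np.diagonal(R)))
    return np.sort(lam / (steps * tau))[::-1]

spec = symplectic_product_spectrum()
```

The resulting spectrum is (approximately) antisymmetric around its midpoint, as expected for symplectic products; its overall scale depends on τ and on the fluctuation amplitude, in line with the caveats discussed above.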


10.4.2 Differential-delay models

As soon as a delayed interaction term is introduced in an otherwise one-dimensional ordinary differential equation, the phase-space dimensionality becomes infinite. Let us consider, for instance, the equation U̇ = F(U(t), U(t − τ)), where τ is a given delay. The initial condition can be assigned by defining the function U(t) over a time interval of length τ (e.g. t ∈ [−τ, 0]). The dynamical complexity of such a model is equivalent to that of a spatially extended system. This can be appreciated by decomposing the time variable as t = nτ + x, where x ∈ [0, τ] is a continuous space-like variable, while the integer n plays the role of time (see Arecchi et al. (1992), Kuznetsov and Pikovsky (1986) and Fig. 10.12). The only (important) difference with respect to spatially continuous, time-discrete systems lies in the boundary conditions, since here they involve two different times: U(n, 0) = U(n − 1, τ). Since for large values of τ one expects the boundary conditions to be substantially irrelevant, one can safely conclude that long delays are equivalent to spatially extended systems in the thermodynamic limit. In order to illustrate the scaling of the Lyapunov spectra, we consider the Ikeda equation (A.20) for different delays. The spectra reported in Fig. 10.13 confirm that the delay plays the role of the system size in spacetime chaos (ρτ = (i − 1/2)/τ). Moreover, analogously to spatially continuous systems, the density ρτ is unbounded from above. However, at variance with diffusively coupled spatial units, where the Lyapunov spectrum is proportional to −ρ², here the decrease is typically

Fig. 10.12: Spatial interpretation of a delayed system.


Fig. 10.13: Rescaled Lyapunov spectra for the Ikeda delay equation (with a = 5 and U0 = 0) and different delays: the dashed, dotted-dashed, dotted and solid curves correspond to τ = 2.5, 5, 10 and 20, respectively.

logarithmic (Farmer, 1982). An important difference with spatial models is that here the Lyapunov exponents scale with the "system size": they indeed decrease as 1/τ (see the vertical axis). This is due to the fact that the time-like variable n is related to the true time through the relation n = t/τ. As a consequence of both scalings, the Kolmogorov-Sinai entropy does not increase with the delay time but rather stays constant: the same amount of instability is simply distributed over an increasing number of variables. On the other hand, the fractal dimension is extensive, as in spatially extended systems, since it depends on the number of positive Lyapunov exponents, but not on their actual values. So far, we have referred to the case of a scalar variable. If U is a vector, as, for instance, in Rössler oscillators (A.8) with delayed feedback, the scenario may differ in the long-delay limit. In fact, if the local dynamics is chaotic by itself, one or more positive Lyapunov exponents may remain finite even for τ → ∞. This can be understood with the help of a self-consistency argument. In full generality, the tangent-space dynamics is ruled by the equation

u̇ = (∂F/∂U) u + (∂F/∂Uτ) uτ,

where the subscript τ means that the corresponding variable is evaluated at the delayed time. Now, if there exists a finite and positive Lyapunov exponent λ, the corresponding uτ is exponentially negligible with respect to u (being smaller by a factor exp(−λτ)), so that the last term in this equation can be neglected. As a result, the value of λ is obtained by simply linearising the original equation, as if the delayed term were absent (notice that this is not true for the evolution in real space). In other words, if the simplified linearised equation admits a positive exponent, this is also a true exponent of the delayed dynamics. It is well known that no such solution can be obtained in one-dimensional differential-delay equations. The occurrence of this short-term instability was initially defined as an


“anomalous” exponent by Lepri et al. (1994); in more recent studies, the term strong chaos has been introduced to distinguish the regime where a finite LE survives for long delays from weak chaos, where this does not happen (Heiligenthal et al., 2011). Transitions from weak to strong chaos are naturally encountered in networks of oscillators with delayed coupling upon tuning some control parameter (Heiligenthal et al., 2011). An intermittent presence of short-term instabilities seems to be responsible for the low-frequency fluctuations observed in semiconductor lasers with optical feedback (Yanchuk and Wolfrum, 2010). In some systems the delay may be state dependent. The corresponding mathematical models are far more difficult to treat; the solutions resulting from an initial value problem have weaker smoothness properties than in standard problems. Very little is known about the stability properties of such systems. Hartung et al. (2006) give a fairly complete review of both the physical contexts where such equations may arise and their dynamical properties.
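The largest LE of a scalar delay equation can be estimated directly with the Benettin method, storing the history over one delay in a circular buffer and integrating the tangent dynamics alongside the trajectory. Since (A.20) is not reproduced here, the Ikeda-type form U̇ = −U + a sin(Uτ − U0) is an assumption, as are the integration parameters in the sketch below.

```python
import numpy as np

def delay_largest_le(a=5.0, u0=0.0, tau=5.0, dt=0.01, T=2000.0, seed=5):
    """Largest LE of dU/dt = -U + a*sin(U(t - tau) - u0) (assumed
    Ikeda-type form), via Euler integration of the equation and of its
    tangent dynamics, with periodic Benettin renormalisation."""
    rng = np.random.default_rng(seed)
    L = int(round(tau / dt))           # history length in samples
    U = 0.1 * rng.standard_normal(L)   # random initial history on [-tau, 0]
    v = rng.standard_normal(L)         # tangent-space history
    v /= np.linalg.norm(v)
    lam, t_acc = 0.0, 0.0
    n_steps = int(T / dt)
    transient = n_steps // 5
    i = 0                              # index of the oldest (delayed) sample
    for n in range(n_steps):
        Ud, vd = U[i], v[i]            # U(t - tau), u(t - tau)
        Unow, vnow = U[i - 1], v[i - 1]
        U[i] = Unow + dt * (-Unow + a * np.sin(Ud - u0))
        v[i] = vnow + dt * (-vnow + a * np.cos(Ud - u0) * vd)
        i = (i + 1) % L                # overwrite the oldest sample
        if n % 200 == 199:             # renormalise every 200 steps
            nrm = np.linalg.norm(v)
            if n >= transient:
                lam += np.log(nrm)
                t_acc += 200 * dt
            v /= nrm
    return lam / t_acc
```

Repeating the computation for several delays and multiplying by τ should reproduce the collapse of the rescaled exponents discussed above (λ decreasing as 1/τ).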

10.4.3 Long-range coupling

When coupling involves interactions at arbitrary distances, the way the Lyapunov spectrum scales in the thermodynamic limit is no longer obvious. In fact, one cannot view the whole system as the juxtaposition of many weakly interacting subsystems. In this section we nevertheless show that the extensivity hypothesis proves to be essentially correct, although some notable exceptions are present.

Global coupling: chaotic oscillators

We first consider an ensemble of globally coupled maps of the type (A.23). The evolution in tangent space is ruled by the recursive equation

uj(t + 1) = F′j(t) [(1 − ε) uj(t) + ε ū(t)],

where ū(t) = (1/N) Σi ui(t). In Fig. 10.14, we report the Lyapunov spectra obtained by simulating ensembles of tent maps (F(V) = a(1/2 − |1/2 − V|)) of different sizes. One can see that the spectra become flatter and flatter in the thermodynamic limit. This can be understood in the following way. Since the Lyapunov vectors are typically localised in a chaotic environment (see Chapter 11), the contribution εū(t) of the coupling to the tangent dynamics becomes increasingly negligible in the thermodynamic limit. The mutual coupling contributes only

to the self-consistent determination of the mean field E = (Σi Ui)/N, which, in the absence of collective chaos, is independent of time. As a consequence, the dynamics of each map is described by the effective equation (see Appendix A) Uj(t + 1) = F((1 − ε)Uj(t) + εE), and all Lyapunov exponents are equal to one another, λe = ⟨ln[(1 − ε)|F′((1 − ε)Uj(t) + εE)|]⟩. In the case of tent maps, since the modulus of the slope is everywhere the same, λe = ln[(1 − ε)a].
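This prediction is easy to check numerically. The sketch below iterates an ensemble of globally coupled tent maps together with a single tangent vector and compares the largest LE with the plateau value λe = ln[(1 − ε)a] ≈ 0.174; the coupling scheme Uj(t + 1) = F((1 − ε)Uj + εE) is taken from the effective equation above, while the ensemble size and the iteration times are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, eps, N = 1.7, 0.3, 200

def F(v):                      # tent map F(V) = a (1/2 - |1/2 - V|)
    return a * (0.5 - np.abs(0.5 - v))

U = rng.random(N)
u = rng.standard_normal(N)     # tangent vector
u /= np.linalg.norm(u)
lam, T_trans, T = 0.0, 1000, 20000
for t in range(T_trans + T):
    arg = (1 - eps) * U + eps * U.mean()
    slope = np.where(arg < 0.5, a, -a)            # F'(arg)
    u = slope * ((1 - eps) * u + eps * u.mean())  # tangent dynamics
    U = F(arg)
    nrm = np.linalg.norm(u)
    u /= nrm
    if t >= T_trans:
        lam += np.log(nrm)
lam /= T
lam_e = np.log((1 - eps) * a)  # plateau value ln[(1 - eps) a]
```

For this system size the measured λ sits slightly above λe, consistently with the finite-size deviations visible in Fig. 10.14.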


Fig. 10.14: Lyapunov spectrum for an ensemble of globally coupled tent maps with a = 1.7 and ε = 0.3. The solid curves refer, from top to bottom, to N = 50, 100, 200, 400 and 800. The dashed line corresponds to the theoretical expectation.

The correctness of this analysis is confirmed by the nice agreement between λe and the Lyapunov spectra obtained for the larger system sizes (see Fig. 10.14, where one can see that even the maximum LE, which is affected by the largest deviations, clearly converges towards the expected asymptotic value). More generally, it is natural to conjecture that the Lyapunov density spectrum λ(ρ) of an ensemble of generic d-dimensional chaotic oscillators is composed of d perfectly flat steps, which correspond to the d LEs of a single oscillator forced by a suitable mean field. Although this scenario is substantially confirmed by numerical simulations, two important exceptions are found as well. The first one is induced by collective chaos, i.e. by a chaotic dynamics of the mean field. In fact, in the presence of collective chaos, some Lyapunov vectors may be extended, rather than localised, thus implying that the coupling term is no longer negligible, as assumed here (see Chapter 11 for a more detailed analysis of this point). The second exception is induced by coupling sensitivity. This effect, also discussed in Chapter 11, leads to a non-extensive number of exponents which differ from the expected plateau at λe. The reason why neither of these effects is present in the aforementioned tent maps is that no collective chaos is present and the finite-time LEs do not fluctuate. So far we have discussed ensembles of identical oscillators. If the oscillators differ from each other, one can keep arguing as before and conclude that the diversity should remove the degeneracy typical of identical systems. This is indeed what happens when each map is characterised by a different parameter a. The spectra reported in Fig. 10.15 indeed show a convergence to the curve λ = ln[(2 − ρ/2)(1 − ε)], which is obtained by assuming that the maps are eventually uncoupled and the a-parameter uniformly distributed within the interval [1.5, 2].
In practice we see that the extensivity of globally coupled systems is quite trivial: it follows from the effectively


Fig. 10.15: Lyapunov spectrum for an ensemble of globally coupled tent maps with a uniform distribution of the parameter a in the interval [1.5, 2] and ε = 0.3. Circles and plusses correspond to N = 50 and 100, respectively, while the dashed and dotted lines correspond to N = 100 and 200, respectively. Finally, the solid curve is the theoretical expectation.

vanishing coupling among the various elements. The effect of the interactions is only the self-consistent determination of a parameter, which in the case of tent maps does not even affect the Lyapunov exponent. The only degeneracy that is not resolved by adding a diversity of the oscillators is the degeneracy of the zero Lyapunov exponent, which is always present in autonomous continuous-time dynamical systems. In the next section we analyse such exponents.

Global coupling: phase oscillators

The splay state offers the chance of studying the effect of global coupling on the zero LEs in a simple context. The splay state is a microscopically periodic regime where the phases φi of N identical oscillators are equispaced, i.e. φi − φi−1 = 2π/N. If the Lyapunov exponents (or, more appropriately, the Floquet exponents) are arranged from the largest to the smallest according to their modulus, analytical and accurate numerical studies have revealed that |λk| ≈ exp(−βk) in pulse-coupled oscillators, when the velocity field φ̇ is sufficiently smooth (Olmi et al., 2012). The main effect of increasing N is indeed the inclusion of increasingly smaller exponents (allowing for larger k values). Altogether, this implies that λ(ρ) is flat (and equal to zero) with a singularity on either its left or right, depending on whether the splay state is stable or unstable. More specifically, the number N(δ) of exponents whose magnitude is larger than δ is of the order of |ln δ|. In this case, the reason why the coupling is negligible for most but not all eigenvalues is not the localisation of the Lyapunov vectors but the fact that their average nearly vanishes. Let us recall that the fully synchronous solution Uj = U (see the discussion in Section 9.2.2) is characterised by one zero (longitudinal) exponent, while all the other (transverse)


exponents are equal to one another. The presence of a flat spectrum there is due to the high symmetry of the solution. The presence of disorder induces a less trivial scenario, as can be understood by studying the Kuramoto model (Kuramoto, 1975)

φ̇i = ωi + (g/N) Σj sin(φj − φi),   (10.19)

where the bare frequencies ωi are distributed according to some distribution P(ω). In the thermodynamic limit, if P(ω) is unimodal with the peak located in ω = 0,¹ for g < gc = 2/(πP(0)) the oscillators behave as if they were uncoupled, while for g > gc a finite fraction of the oscillators are mutually locked (Acebrón et al., 2005). The transition is revealed by the order parameter

R e^{iψ} = (1/N) Σi e^{iφi}.

For g < gc, R = 0, while for g > gc, the value of R is determined by the implicit condition (Acebrón et al., 2005)

g ∫_{−π/2}^{+π/2} dφ P(gR sin φ) cos²φ = 1.

Since the equations of motion can be rewritten as φ̇i = ωi + gR sin(ψ − φi), all oscillators with a bare frequency |ω| < gR are locked and their phase is given by sin φ = ω/(gR) (the origin can be set in such a way that ψ = 0). Accordingly, their stability is determined by the exponent

λ = −√(g²R² − ω²).

One can now determine the Lyapunov density spectrum λ(ρ) by linking ρ with ω. This is ensured by the relationship

ρ = 1 − ∫_{−ω}^{+ω} dω̃ P(ω̃),

which expresses the fact that the most stable oscillator is the one with ω = 0, while the least stable locked one is characterised by the frequency ω(ρc) = gR. For ρ < ρc, λ(ρ) = 0, as the corresponding oscillators are unlocked (see Radons (2005)). The solid curves in Fig. 10.16 are two prototypical spectra. Below the transition, as for g = 0.8gc, the Lyapunov spectrum is flat and equal to zero, since no locking is present. Above the transition (e.g. for g = 1.2gc), two components are present: a flat branch, which corresponds to the unlocked oscillators, followed by a negative branch, which corresponds to the entrained oscillators. The two dashed curves correspond to the spectra obtained for a specific realisation of the disorder and N = 400. The most important finite-size effect

¹ This is not a severe restriction, as the frequency can be arbitrarily shifted without altering the equation structure.

Fig. 10.16: (a) Lyapunov density spectrum of the Kuramoto model (10.19) for a Gaussian distribution of frequencies with unit variance and g = 0.8gc (two upper curves) and g = 1.2gc (lower curves). The solid lines correspond to the asymptotic spectra; the dashed lines have been obtained for N = 400. The asterisk denotes the maximum exponent for g = 1.2gc and N = 400. (b) Average maximum LE versus the system size N. Triangles (g = 0.8gc) and circles (g = 1.2gc) correspond to a Gaussian distribution of frequencies as in panel (a); crosses and diamonds correspond, respectively, to perfectly equispaced frequencies and to a uniform distribution within the interval [−1, 1], with g = πgc/8.

is the presence of positive LEs, which culminates in a rather large maximum exponent λ1(N) (see, for instance, the asterisk in Fig. 10.16a, which refers to g = 1.2gc). This reveals that, although no instabilities are present in the thermodynamic limit, finite ensembles are weakly chaotic. No theory able to predict the scaling behaviour of λ1(N) has so far been developed. Popovych et al. (2005) found that λ1(N) ≈ N⁻¹ when the frequencies are equispaced. This is substantially confirmed by the data reported in Fig. 10.16b (see the crosses), which are well fitted by a decay N^(−0.93). For truly random distributions, it appears that the decay is of the type N^(−α) with α < 1. For a Gaussian distribution and g = 0.8gc, it is found that α ≈ 0.47, while for g = 1.2gc, α ≈ 0.73 and, finally, for a uniform distribution and g = πgc/8, α ≈ 0.71. Altogether, it is not clear whether there exist a few universality classes; the simulations at least convincingly show that the sample-to-sample fluctuations of the Lyapunov exponents do not vanish when the LE itself decreases to zero.
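The self-consistent construction of the asymptotic spectrum can be turned into a short computation: solve the implicit condition for R by bisection and then obtain λ(ρ) in parametric form, ρ(ω) = 1 − ∫ P and λ(ω) = −√(g²R² − ω²). The sketch below assumes a unit-variance Gaussian P(ω); the function name and the grid sizes are arbitrary.

```python
import math

def kuramoto_spectrum(g_over_gc=1.2, n_grid=200):
    """Order parameter R and locked-branch Lyapunov spectrum lambda(rho)
    for the Kuramoto model with a unit-variance Gaussian P(omega)."""
    P = lambda w: math.exp(-w * w / 2) / math.sqrt(2 * math.pi)
    gc = 2 / (math.pi * P(0))                 # critical coupling
    g = g_over_gc * gc

    def residual(R, m=400):
        # g * int_{-pi/2}^{+pi/2} P(g R sin phi) cos^2 phi dphi - 1
        s = 0.0
        for k in range(m + 1):
            phi = -math.pi / 2 + math.pi * k / m
            w = 0.5 if k in (0, m) else 1.0   # trapezoid weights
            s += w * P(g * R * math.sin(phi)) * math.cos(phi) ** 2
        return g * s * math.pi / m - 1.0

    lo, hi = 1e-6, 0.999                      # bisection for R > 0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    R = 0.5 * (lo + hi)

    gR = g * R
    spectrum = []                             # (rho, lambda) pairs, |omega| <= gR
    for k in range(n_grid + 1):
        w = gR * k / n_grid
        rho = 1.0 - math.erf(w / math.sqrt(2.0))
        lam = -math.sqrt(max(gR * gR - w * w, 0.0))
        spectrum.append((rho, lam))
    return g, R, spectrum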

Networks So far, we have discussed models where each variable is coupled either to a finite subset of nearby variables (this is the case of many spatially extended systems) or “democratically” to all other variables (this is the case of the mean-field models).

198

High-dimensional systems: general

In recent years it has become increasingly clear that many setups lie somewhere in between such two extreme cases. These are the network structures, where each variable interacts with a subset of variables that are not necessarily located nearby. As a representative example, we consider an ensemble of one-dimensional maps, ε  Sij F(Uj ), (10.20) Ui (t + 1) = (1 − ε)F(Ui ) + K j

where the network structure is determined by the matrix entries Sij , which are equal either to 0 or to 1, while K denotes the average connectivity of the single elements. There exist many structures, most of which have been poorly explored from the point of view of their dynamical properties. Here, we limit ourselves to discuss the Erdös-Renyi networks, where the links are randomly selected. Roughly speaking, there are two large subfamilies: that of massive and of sparse networks. In the former case, K is assumed to be proportional to the network size (i.e. to the total number N of elements), while in the latter one, K is independent of N. The random matrices illustrated in Section 8.1.3 belong to the latter class. From the point of view of Lyapunov exponents, massive networks are not much different from globally coupled networks. As the coupling term is the sum of an increasing number of terms (when the thermodynamic limit is attained), its fluctuations can be neglected, and such networks are substantially equivalent to globally coupled systems, with a suitably rescaled coupling strength. More intriguing is the case of sparse networks, since they are characterised by a nontrivial asymptotic spectral density λ(ρ). An example is reported in Fig. 10.17, where the spectra obtained for a sparse network of logistic maps are reported for different network sizes. 0.4 λi

0.3 0.38 λi 0.37

0.2

0.36

0

0.05

ρ

0.1

0.1 0

Fig. 10.17

0.5

ρ

1

Lyapunov spectra for the model (10.20) (ρ = (i − 1.2)/N), for F(U) = aU(1 − U), a = 3.9, ε = 0.1 and K = 10. The dot-dashed, dashed, dotted and solid curves correspond to N = 100, 200, 500 and 1000, respectively. The inset containts an enlargement of the initial part of the spectrum.

199

10.4 Examples of high-dimensional systems

The various curves reveal a clear convergence towards an asymptotic shape for increasing N. As shown in Luccioli et al. (2012), the existence of a well-defined spectral density is not specific to the logistic maps but arises also in other contexts. The result is remarkable, since, at variance with standard spatially extended models, here the presence of “long-range” links prevents viewing the network dynamics as the evolution of several almost uncoupled subunits. In other words, here, chaos is extensive without being additive, at least in the standard phase space. One should, in fact, notice that a chain of nonlinear oscillators with nearest-neighbour coupling, which is clearly additive when seen in real space, no longer seems to be so, when viewed in Fourier space, where all modes are mutually coupled. In other words, one cannot exclude that the extensivity revealed by the sparse networks is a consequence of additivity in some yet-to-be-discovered representation.

11

High-dimensional systems: Lyapunov vectors and finite-size effects In this chapter we discuss some scaling properties of Lyapunov exponents and Lyapunov vectors. In the case of spatially extended systems, a fruitful analogy with roughening phenomena (Pikovsky and Politi, 1998) is used to characterise the spatial structure of the Lyapunov vectors and to establish the convergence of the LEs towards their asymptotic value. The localisation of the (largest) Lyapunov vector allows simplifying the evolution equation for the largest LE and thereby deriving analytic expressions. A first instance of this strategy was discussed in Section 10.4.1 in chains of coupled oscillators. It also helps to revisit coupling sensitivity in globally coupled ensembles of dynamical systems. In Section 11.2, we show that the effect is so strong as to sustain a finite increase of the LE even in the thermodynamic limit. Globally coupled systems offer also the opportunity to discuss the chaotic properties of collective dynamics. In Section 11.3, we introduce the “macroscopic” LEs and briefly discuss the possible relationship between microscopic and macroscopic instabilities. Section 11.4 is devoted to a quantitative analysis of the scaling properties of LE fluctuations and of their relationship with problems such as dimension variability in spatially extended systems. Finally, in the last section yet another class of Lyapunov spectral densities is introduced that is appropriate to establish the scaling property of the invariant measure in infinitely extended dyanamical systems.

11.1 Lyapunov dynamics as a roughening process Numerical simulations of a wide class of spatially extended dynamical systems have revealed that the Lyapunov vectors are often localised in space. An example of the typical structure of the first Lyapunov vector is presented in Fig. 11.1 with reference to a chain of coupled Hénon maps (see also Eq. (A.11)) U (t + 1) = a − [U (t) + εDU (t)]2 + bU (t − 1).
(1 − ε)N/ε), the coupling term in Eq. (11.9) becomes suddenly predominant. Its action can be schematised as a sudden force, which prevents the distance from becoming larger than the maximum allowed value. In other words, as depicted in Fig. 11.7, the various particles are confined within a box of width log[N(1 − ε)/ε], whose right edge corresponds to the position of the rightmost particle. The left edge of the box acts as a barrier, which prevents the particles from diffusing away from the box itself. As a result of the action of the (left) barrier, the average velocity of the box (i.e. the Lyapunov exponent of the entire system) becomes larger than λ0 . This phenomenon is an extreme form of the coupling sensitivity discussed in Section 9.1. A quantitative analysis can be carried out from the evolution equation for the distribution P(s, t) of particles in a frame moving with a velocity λ (s = h − λ t). Under the previous assumptions, P(s, t) satisfies the Fokker-Planck equation ∂ D ∂ 2P ∂P = λP + , ∂t ∂s 2 ∂s2 where λ = λ − λ0 . If λ coincides with the (yet unknown) velocity of the box, we can set the left edge in s = 0 and thereby assume that P(s, t) = 0 for s < 0. As shown by Takeuchi et al. (2011a), this equation can be solved with a self-consistent argument. The stationary solution is P(s) = (2 λ/D) exp(−(2 λs/D). Therefore, given N particles distributed according to P(s), the typical position smax of the rightmost particle is given by the condition P> (smax ) = 1, where P> (s) is the probability to find at least one out of N particles above s,  ∞ P> (s) = NP(s)ds = N exp(−(2 λs/D). s

By then imposing that smax is equal to the box size, smax = ln[N(1 − ε)/ε], one obtains an equation for λ, whose solution is

212

High-dimensional systems: Lyapunov vectors and finite-size effects

0.62 ¸ 0.6

0.58

0.56

0.54 0

0.1

0.2 1/lnN

Fig. 11.8

Largest Lyapunov exponent in an ensemble of globally coupled maps of the type (A.21, A.23, A.22), where F(V ) = bV [F(V ) = b(1 − V )/(b − 1)] if V ≤ 1/b [1/b < V ≤ 1], with ε = 0.02 and b = 4. The two triangles along the vertical axis correspond to the Lyapunov exponent of a single map (the lower point) and the extrapolated largest Lyapunov exponent according to Eq. (11.10).   log(1 − ε)/ε D 1− .

λ ≈ 2 log N

(11.10)

As a result, the increase of the Lyapunov exponent remains finite even in the thermodynamic limit, although the effective coupling strength eventually vanishes. In other words, the limits N → ∞ and t → ∞ do not commute. By replacing the value of λ into the expression of the probability distribution, we find that P(s) = exp(−s), which implies that the distribution Q(u) of the amplitudes u = exp(s) scales as Q(u) ≈ 1/u2 . This means that the Lyapunov vector is weakly localised (the integral of Q(u)u converges because of the corrections to the decay exponent of Q(u)). In Fig. 11.8 the largest Lyapunov exponent is plotted for an ensemble of skew tent maps for different values of N.4 The lower triangle corresponds to the Lyapunov exponent λ0 = 0.530(2) of the single map F[(1 − ε)U + εU], for the mean field U = 0.511(0), while the upper triangle corresponds to the the asymptotic value as from Eq. (11.10). The theoretical prediction is close to the value extrapolated from the numerical simulations. The only point of disagreement with the theory is the coefficient of the (1/ ln N) correction, which is smaller than predicted. The difference is possibly due to: (i) the weak localisation of the perturbation vector, so that the coupling strength is stronger than assumed by the theory, and (ii) the fluctuating velocity of the “box”: no matter how large N, is the position of the box depends on the highly fluctuating position of the rightmost particle. In spite of the underlying approximations, the prediction Eq. (11.10) appears to be rather accurate: simulations performed with various models confirm that the increase of 4 Skewness is a necessary condition for the observation of coupling sensitivity since it ensures the presence of

fluctuating multipliers and, thereby, of a finite variance D.

213

11.3 Macroscopic dynamics

the Lyapunov exponent is determined by the diffusion coefficient of the single-system Lyapunov exponent (Takeuchi et al., 2011a). The most extreme form of coupling sensitivity can be found in the so-called Hamiltonian mean-field model (A.29). Although the dynamics reduces, in the thermodynamic limit, to a collection of periodic orbits (with different periods), a strictly positive largest Lyapunov exponent is found. The origin can be traced back to the existence, for any N, of sufficiently many oscillators that stay long enough and close enough to the unstable direction of the saddle point of the self-generated poential to be able to pull the “box” (see Ginelli et al. (2011) for a detailed discussion of the problem). Coupling sensitivity also plays a major role in determining the scaling behaviour of the largest LE in the Kuramoto model that we encountered in Section 10.4.3. A quantitative theory is, however, still lacking.

11.3 Macroscopic dynamics

Besides microscopic chaos, high-dimensional systems may exhibit a nontrivial collective dynamics, where not only the single variables fluctuate irregularly in time, but also the coarse-grained ones do so. The simplest setup where this phenomenon manifests itself is that of mean-field models, where each element interacts equally with all the others, independently of the physical distance (the limit case of long-range interactions). Two examples are briefly illustrated in Fig. 11.9. In panel (a), the recursive evolution of the mean field is plotted for an ensemble of globally coupled maps (see Eq. (A.21) for a definition of the model) with F(V) = a(1/2 − |1/2 − V|), a = 1.7 and ε = 0.3. From the figure, one can appreciate the existence of a two-band seemingly chaotic dynamics (see also the inset, where the attractor has been suitably tilted and enlarged to emphasise the existence of a non-trivial fine-grain structure). Fig. 11.9b instead portrays the temporal evolution of the mean field for an ensemble of Stuart-Landau oscillators (see Eq. (A.27)) with c1 = −2, c2 = 3 and K = 0.47; the presence of an irregular dynamics is transparent. In the thermodynamic limit, the problem of characterising the collective motion can be addressed by introducing the instantaneous density ρ(U, t) of dynamical units whose variable lies in the infinitesimal region spanned by dU. In the case of maps of the interval, the variable U is a scalar quantity and the density ρ satisfies the self-consistent Perron-Frobenius equation

\[
\rho(U, t+1) = \sum_j \frac{\rho\bigl(F_j^{-1}\bigl((1-\varepsilon)U + \varepsilon\overline{U}(t)\bigr),\, t\bigr)}{|F_j'|} .
\tag{11.11}
\]

This equation is nonlinear, since the mean field Ū(t) is a function of ρ(t); in the N → ∞ limit it is written as

\[
\overline{U}(t) = \int U\, \rho(U, t)\, dU .
\tag{11.12}
\]
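The self-consistent structure of Eqs. (11.11, 11.12) is easy to appreciate in a direct simulation. The sketch below evolves a finite ensemble of tent maps and records the mean field; since Eq. (A.21) is not reproduced in this section, the global-coupling scheme used here (each unit relaxes towards the mean of the mapped variables) is an assumption made for illustration only.

```python
import numpy as np

def tent(v, a=1.7):
    """Tent map F(V) = a*(1/2 - |1/2 - V|)."""
    return a * (0.5 - np.abs(0.5 - v))

def iterate_gcm(n_maps=10_000, n_steps=500, a=1.7, eps=0.3, seed=0):
    """Evolve globally coupled tent maps and record the mean field U(t).

    Assumed coupling scheme (Eq. (A.21) is not reproduced here):
    U_i(t+1) = (1 - eps) * F(U_i(t)) + eps * <F(U_j(t))>_j .
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n_maps)
    mean_field = np.empty(n_steps)
    for t in range(n_steps):
        fu = tent(u, a)
        u = (1.0 - eps) * fu + eps * fu.mean()
        mean_field[t] = u.mean()
    return mean_field

mf = iterate_gcm()
```

For growing N the histogram of the u's approximates ρ(U, t), and a recursive plot of the recorded mean field exhibits the kind of structure shown in Fig. 11.9a.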

High-dimensional systems: Lyapunov vectors and finite-size effects

Fig. 11.9 (a) Recursive plot of the mean-field dynamics for 10^5 globally coupled tent maps with a = 1.7 and ε = 0.3, after a sufficiently long transient. In the inset the variables Y1 = U(t) + 0.6U(t + 1), Y2 = U(t + 1) − 0.6U(t) are used to better resolve the chaotic structure of the upper band (obtained for an ensemble of 10^9 maps). (b) Behaviour of the mean field of an ensemble of 10^7 Stuart-Landau oscillators for the parameter values reported in the text. (Data: courtesy of Takeuchi et al.)

In the case of the Stuart-Landau oscillators, the collective motion is described by the nonlinear Liouville equation

\[
\frac{\partial \rho}{\partial t} = -\frac{\partial}{\partial U}\Bigl[\bigl(U - (1+ic_2)|U|^2 U + K(1+ic_1)(\overline{U} - U)\bigr)\rho\Bigr] ,
\tag{11.13}
\]

where the definition (11.12) of Ū still holds, with the proviso that U is now a complex variable. The computation of the leading Lyapunov exponents of the collective motion is far from trivial: Eqs. (11.11, 11.13) are functional equations, i.e. the phase space is infinite-dimensional. Moreover, additional difficulties may further complicate the task, such as the presence of singularities in the distribution and even the fractal structure of the support itself. Whenever the probability distribution is sufficiently smooth, it can be projected onto a suitable functional basis, thereby identifying a finite (although possibly large) number of effective modes. If the variable of the single map is phase-like (i.e. U + 2π ≡ U), it is convenient to expand ρ into Fourier modes, ρ(U, t) = Σ_k ψ_k(t) e^{ikU}. The corresponding


Perron-Frobenius equation is

\[
\psi_k(t+1) = \sum_m R(k, m)\, \psi_m(t) ,
\]

where

\[
R(k, m) \equiv \frac{1}{2\pi} \int_0^{2\pi} dU\, e^{i(mU - kF(U))} .
\]

This approach is quite effective in the case of the model (A.24), where the mean field Ū is defined by Eq. (A.25). More specifically,

\[
R(k, m) = \frac{1}{2\pi} \int_0^{2\pi} dU\, e^{i[(m-2k)U - 2k\varepsilon(1 - 2a^2\overline{U}^2)\sin U]} = J_{m-2k}\bigl[2k\varepsilon(1 - 2a^2\overline{U}^2)\bigr] ,
\]

where J_α denotes the Bessel function of order α. In the present case, Ū = ψ_1 and all ψ_k are real. As a result, the Perron-Frobenius operator is

\[
\psi_k(t+1) = \sum_m J_{m-2k}\bigl[2k\varepsilon \sin(\psi_1(t))\bigr]\, \psi_m(t) .
\]

Upon iterating the aforementioned equations together with their linearisations, one can easily compute the Lyapunov spectrum: the first 10 exponents are plotted in Fig. 11.10. From the figure we infer that the collective motion is low-dimensional: only the first mode is positive and the expansion is already compensated by the second exponent (λ1 + λ2 < 0). Moreover, the Lyapunov spectrum is discrete; upon increasing the number of Fourier modes, additional negative exponents add up, which correspond to increasingly stable, high-frequency modes. In practice, the scenario is reminiscent of the continuum limit in spatially extended chaos. This is not, however, always the case. Coupled logistic maps appear to be characterised by an infinite-dimensional collective dynamics; this has been conjectured by adding noise to the microscopic dynamics (in order to smooth out

Fig. 11.10 Collective Lyapunov spectrum for the model of globally coupled maps (A.24, A.25) for a = 2.2 and ε = −1.15.


the singularities of ρ) and thereby decreasing the noise amplitude (Shibata et al., 1999; Takeuchi and Chaté, 2013). As a result, it has been found that the dimension increases logarithmically as the noise amplitude decreases.
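The procedure invoked above — iterate the (truncated) operator together with its linearisation and extract the exponents via repeated QR decomposition, as in Chapter 3 — can be sketched in a model-independent way. The sketch below is not the authors' code; the Arnold cat map is used only as a minimal test case, because its two exponents are known exactly, ±ln[(3 + √5)/2].

```python
import numpy as np

def lyap_spectrum(step, jacobian, x0, n_steps=3000, n_discard=100):
    """Full Lyapunov spectrum via repeated QR decomposition (Chapter 3):
    the time-averaged logarithms of the diagonal of R yield the exponents."""
    x = np.array(x0, dtype=float)
    q = np.eye(x.size)
    sums = np.zeros(x.size)
    for t in range(n_steps + n_discard):
        q, r = np.linalg.qr(jacobian(x) @ q)
        x = step(x)
        if t >= n_discard:          # discard the transient of the Q frame
            sums += np.log(np.abs(np.diag(r)))
    return sums / n_steps

# Minimal test case: the Arnold cat map, with constant Jacobian M.
M = np.array([[1.0, 1.0], [1.0, 2.0]])
spec = lyap_spectrum(lambda x: (M @ x) % 1.0, lambda x: M, [0.3, 0.2])
# spec ≈ [ln((3+√5)/2), −ln((3+√5)/2)]
```

For the collective problem discussed in the text, `step` would advance the truncated vector of Fourier modes ψ_k and `jacobian` its linearisation; the QR bookkeeping is identical.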

11.3.1 From micro to macro

Since any collective dynamics is ultimately the consequence of microscopic dynamical rules, it is legitimate to ask whether the “microscopic” Lyapunov exponents are of any usefulness for its characterisation. One can approach the problem with the help of finite-amplitude exponents. We illustrate the idea by referring again to the ensemble of tent maps shown in Fig. 11.9. This model has been studied by Cencini et al. (1999). Given a generic macroscopic configuration {Uj} (selected according to the invariant measure), a second initial condition has been generated by adding the same perturbation δ to each component and thereby determining the finite-amplitude exponent Λ(Δ) as described in Chapter 7. The resulting behaviour is plotted in Fig. 11.11 for four different system sizes, after rescaling the perturbation amplitude by √N. There, one can notice the presence of two plateaus: the first one, which occurs on small scales, corresponds to the microscopic maximal Lyapunov exponent: over such scales, the evolution of finite perturbations is correctly described by the linearised equations. The lower plateau corresponds to the macroscopic Lyapunov exponent (the one obtained by iterating the Perron-Frobenius operator); the size of the plateau increases with N, progressively extending to smaller scales. In fact, the data collapse reveals that it starts at a resolution of order 1/√N, the amplitude of statistical fluctuations. In practice, the microscopic dynamics seems to contribute to what, macroscopically, appears as internal noise. One is therefore tempted to conclude that microscopic and macroscopic worlds are mutually exclusive. There are, however, examples where the spectral properties of

Fig. 11.11 Finite-amplitude Lyapunov exponent Λ(Δ) for coupled tent maps (f(U) = a(1/2 − |1/2 − U|)) with a = 1.7 and ε = 0.3, plotted versus Δ√N. Circles, squares, diamonds and plusses refer to N = 10^4, 10^5, 10^6 and 10^7, respectively. (Data: courtesy of M. Cencini et al.)
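The two-replica recipe described in the text can be sketched as follows (a sketch, not the code of Cencini et al.; the global-coupling scheme is an assumption, since Eq. (A.21) is not reproduced in this section). The local slope of log d(t), sampled when d(t) ≈ Δ, yields the finite-amplitude exponent Λ(Δ).

```python
import numpy as np

def gcm_step(u, a=1.7, eps=0.3):
    """Globally coupled tent maps. Assumed coupling scheme:
    U_i(t+1) = (1-eps)*F(U_i(t)) + eps*<F(U_j(t))>_j, F the tent map."""
    fu = a * (0.5 - np.abs(0.5 - u))
    return (1.0 - eps) * fu + eps * fu.mean()

def replica_distance(n_maps=1000, delta0=1e-8, n_transient=200,
                     n_steps=400, seed=1):
    """Distance between two replicas after the same shift delta0 has been
    added to every component; the local growth rate of log(distance) is
    the finite-amplitude exponent."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n_maps)
    for _ in range(n_transient):        # relax onto the attractor
        u = gcm_step(u)
    v = u + delta0                      # identical shift of each component
    dist = np.empty(n_steps)
    for t in range(n_steps):
        u, v = gcm_step(u), gcm_step(v)
        dist[t] = np.sqrt(np.mean((u - v) ** 2))
    return dist

d = replica_distance()
```

The initial growth of d(t) reproduces the upper (microscopic) plateau, while the saturation at the level of the statistical fluctuations corresponds to the crossover towards the macroscopic plateau discussed above.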


the macroscopic dynamics extend down to the infinitesimal scales that are typical of microscopic dynamics. One example is that of the splay states, which emerge in pulse-coupled oscillators (see Chapter 10). In fact, in some models, it has been proven that the leading eigenvalues of the corresponding Liouville-type operators coincide with the largest exponents as determined by following the microscopic dynamics (Olmi et al., 2012): there exists a crossover, beyond which the microscopic Lyapunov exponents perceive the finiteness of the system and are therefore different in the two setups. Such a striking, almost one-to-one, correspondence is not too surprising, as it holds in the absence of microscopic chaos, which creates a sort of “barrier” of statistical fluctuations, below which the macroscopic analysis is no longer valid. It is therefore legitimate to ask when and whether the validity of the microscopic linear equations can extend up to those scales, where a perturbative analysis of the macroscopic deviations can be carried out, and vice-versa. A crucial prerequisite for a microscopic Lyapunov exponent to be part of the macroscopic spectrum is that the “direction” of the corresponding covariant vector be oriented as expected for the “modes” of the Perron-Frobenius operator. In particular, this implies that the microscopic vector should be extended rather than localised. Preliminary studies of Stuart-Landau oscillators (Takeuchi and Chaté, 2013) suggest that the microscopic Lyapunov spectra contain a signature of collective properties, in the vicinity of the largest Lyapunov vector. From the data in Fig. 11.12, one can in fact see that the distribution of the inverse participation ratio Y2 shrinks to zero upon increasing the system size, and, from the inset, we can even see that it asymptotically scales as 1/N, suggesting that the corresponding Lyapunov vector is eventually extended.
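The inverse participation ratio used here can be computed as below; the normalisation Y2 = Σ_x |u_x|⁴ / (Σ_x |u_x|²)² is a standard convention, assumed because the text does not spell it out. It gives Y2 = 1/N for a uniformly extended vector and Y2 = O(1) for a localised one, consistent with the 1/N scaling seen in Fig. 11.12.

```python
import numpy as np

def inverse_participation_ratio(u):
    """Y2 = sum_x |u_x|^4 / (sum_x |u_x|^2)^2 (assumed normalisation)."""
    p = np.abs(np.asarray(u)) ** 2
    return float(np.sum(p ** 2) / np.sum(p) ** 2)

n = 1024
extended = np.ones(n)        # uniformly extended vector: Y2 = 1/n
localised = np.zeros(n)
localised[0] = 1.0           # fully localised vector: Y2 = 1
```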

Fig. 11.12 Distribution of the inverse participation ratio Y2 for the first Lyapunov vector in the globally coupled ensemble of Stuart-Landau oscillators. From right to left, the curves refer to N = 2048, 8192, 32,768, 131,072, 10^6, 10^7 and 5 × 10^7. In the inset, the distribution is rescaled to emphasise its scaling with N. (Data: courtesy of K. A. Takeuchi et al.)


11.3.2 Hydrodynamic Lyapunov modes

The use of linear stability properties of the microscopic equations to infer macroscopic properties is particularly challenging in spatially extended systems, where collective dynamics manifests itself as a long-wavelength motion. In such setups, at variance with mean-field models, the study of Liouville-type functional equations is a prohibitive task, so that it is highly desirable to extract as much information as possible directly from the microscopic equations. As in the previous section, the main working hypothesis is the interpretation of the extended nature of the covariant Lyapunov vectors as the signature of the collective dynamics, i.e. the identification of degrees of freedom that correspond to a motion of the system as a whole. The study of space-time chaos (e.g. in the Kuramoto-Sivashinsky equation) has revealed that the relaxation dynamics can be partly interpreted as a collective process. As mentioned in Section 11.1, beyond a critical index (which depends on the system size), the Lyapunov vectors are close to Fourier modes and, thereby, extended (Takeuchi et al., 2011b). The extensivity of such vectors is, however, not so relevant, since it refers to degrees of freedom that are basically inactive (they characterise the relaxation towards the manifold which contains the attractor, rather than the asymptotic dynamics itself). A more interesting possibility is suggested by the mapping between the first Lyapunov vector and the KPZ dynamics of rough interfaces. It is, in fact, known that in more than two dimensions interfaces need not be rough. Would the potential extensivity of the logarithm of a Lyapunov vector be a sufficient condition implying the existence of some collective motion? This is an interesting question that has not yet been explored. In general, the possibility to extract information on large-scale behaviour from the dynamics of Lyapunov vectors was first explored in Hamiltonian models.
While studying a fluid of hard disks, Posch and Hirschl (2000) noticed that the Lyapunov vectors corresponding to the smallest (in the absolute sense) Lyapunov exponents – notably those obtained with the orthogonalisation procedure – correspond to collective perturbations. McNamara and Mareschal (2001) went further, by developing a kinetic theory approach to characterise such vectors, which were named hydrodynamic Lyapunov modes. At variance with the interface representation discussed in Section 11.1, where the logarithm of the amplitude is considered, hydrodynamic Lyapunov modes are better characterised by looking directly at the vector amplitude, i.e. by introducing the structure function

\[
S(t, k) = \Bigl\langle \Bigl| \int_0^L dx\, u(t, x)\, e^{2\pi i k x / L} \Bigr|^2 \Bigr\rangle .
\]

S(t, k) undoubtedly reveals the existence of long-wavelength modes for the vectors associated with nearly vanishing LEs. In Ginelli et al. (2007) it has even been noted that in three symplectic lattice systems such covariant Lyapunov vectors exhibit a 1/f (spatial) power spectrum; Yang and Radons (2013) have found that hydrodynamic Lyapunov modes emerge in dissipative systems as well, provided that some continuous symmetry is present. A clear connection between the tangent-space dynamics and physical properties such as transport coefficients is, however, still lacking.
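On a lattice, S(t, k) is simply the (ensemble-averaged) spatial power spectrum of the vector amplitude, so a discrete sketch reduces to an FFT (one sample, no ensemble average, is shown here):

```python
import numpy as np

def structure_function(u):
    """Discrete analogue of S(t, k): squared modulus of the spatial
    Fourier transform of the vector amplitude u on an L-site lattice."""
    return np.abs(np.fft.fft(u)) ** 2

# A long-wavelength perturbation concentrates all the weight at |k| = 1,
# which is how hydrodynamic Lyapunov modes show up in S(t, k):
L = 256
u = np.sin(2.0 * np.pi * np.arange(L) / L)
S = structure_function(u)
```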


11.4 Fluctuations of the Lyapunov exponents in space-time chaos

In Chapter 5 we have seen that finite-time Lyapunov exponents do fluctuate, and that their fluctuations can be quantified by introducing a suitable large-deviation function. In this section we analyse the dependence of such fluctuations on the system size in spatially extended systems. This is done by determining the diffusion matrix D (see Eq. (5.12)). This approach proves to be an alternative method to understand the extensivity of the underlying dynamics. In the following we shall always refer to the q = 0 norm that is naturally associated with the interface interpretation of the Lyapunov vectors. This makes no difference for the identification of the elements of D, while different scaling functions (see later in this section) are expected for different q-values (Pazó et al., 2013). For the sake of simplicity, we refer to a chain of L Hénon maps (A.11) with a = 1.4, b = 0.3, ε = 0.025 and periodic boundary conditions. For such parameter values, the Lyapunov spectrum is composed of two bands of positive and negative exponents, respectively. Since the Jacobian matrix J satisfies the symplectic-like condition JAJ^T = −bA, with A being a generic antisymmetric matrix, one can conclude that the Lyapunov exponents come in pairs, such that λ_i + λ_{N+1−i} = ln b. The numerical results are shown in Fig. 11.13. In panel (a) we report the self-diffusion coefficients D_ii. The clean overlap of the scaled curves indicates that D_ii(ρ) ≈ 1/L^γ with γ ≈ 0.85. This means that the LE fluctuations vanish in the thermodynamic limit, i.e. the Lyapunov exponents self-average. In Fig. 11.13b, D_ij is plotted along the column j = 2L/5 (i.e. for ρ = 2/5). The off-diagonal terms decrease as 1/L, so that the matrix D becomes increasingly diagonal

Fig. 11.13 Diffusion coefficients in a chain of Hénon maps. In all panels, squares, diamonds, plusses and circles refer to L = 40, 80, 160 and 320, respectively. The results have been obtained by iterating the chain over 5 × 10^6 time steps. Panel (a) contains the diagonal elements D_ii L^0.85, ρ = (i − 1/2)/L; panel (b) refers to the column j = 2L/5, ρ = i/L; panel (c) refers to the eigenvalues of D, ordered from the largest to the smallest one, σ = k/L.


in the thermodynamic limit. Nevertheless, the contribution of the off-diagonal terms is not negligible. As a consequence of the aforementioned symplectic-like structure of J, it turns out that half of the eigenvalue spectrum of D is equal to zero. Moreover, the amplitude of the non-zero eigenvalues decreases as 1/L with the system size, rather than as L^{−γ}, thus implying that the eigenvalues of Q = D^{−1} (see also Eq. (5.11)) are proportional to L (see Fig. 11.13c), i.e. that the large-deviation function S is an extensive observable. This behaviour is confirmed by the study of other models, such as symplectic maps and chains of Stuart-Landau oscillators (Kuptsov and Politi, 2011). Actually, the extensivity of S has a deeper meaning than usually expected for spatially extended systems. Consider, for instance, a chain of uncoupled chaotic dynamical systems. The system is trivially extensive in the sense that its fractal dimension and dynamical entropies are proportional to the system size. Because of the lack of coupling, however, the diffusion matrix D is (block) diagonal, so that all its eigenvalues remain finite in the thermodynamic limit. As soon as coupling is switched on, only the leading eigenvalue μ1 remains finite, while all the others scale as 1/L. The different scaling exhibited by μ1 manifests itself as the 1/σ singularity that can be seen in Fig. 11.13c. Some implications of this phenomenon are discussed at the end of this section, in the context of dimension variability. The scaling behaviour of the diagonal elements of D can be connected with the roughening of the corresponding Lyapunov vectors. It is convenient to define the diagonal elements of the diffusion matrix using the interface language. We start by introducing the variance

\[
\chi^2 = \langle (h(t))^2 \rangle - \langle h \rangle^2 ,
\tag{11.14}
\]

where h(t) is the logarithm of the amplitude of the ith covariant Lyapunov vector and, for the sake of simplicity, we drop the Lyapunov index i; finally, ⟨h(t)⟩ is nothing but λt. The diffusion coefficient is

\[
D = \lim_{t\to\infty} \frac{\chi^2}{t} .
\]
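In the simplest, zero-dimensional setting — a single logistic map, where h(t) is just the accumulated logarithm of the tangent-vector amplitude — λ and D can be estimated directly from an ensemble of trajectories. This is a sketch for illustration; a = 4 is chosen only because λ = ln 2 is then known exactly.

```python
import numpy as np

def le_diffusion(a=4.0, t=200, n_traj=2000, seed=2):
    """Estimate lambda = <h(t)>/t and D = Var[h(t)]/t for the logistic map
    x -> a x (1 - x), with h(t) = sum_s ln|f'(x_s)| the log-amplitude."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99, n_traj)
    h = np.zeros(n_traj)
    for _ in range(t):
        h += np.log(np.abs(a - 2.0 * a * x))   # ln|f'(x)|
        x = a * x * (1.0 - x)
    return h.mean() / t, h.var() / t

lam, D = le_diffusion()
# lam ≈ ln 2 for a = 4; D > 0 quantifies the finite-time LE fluctuations
```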

By comparing Eq. (11.14) with the definition (11.3) of the square width, one can see a rather similar structure, although a different asymptotic behaviour is expected for the two variances: χ² diverges linearly in time, while W² saturates. In any case, the two quantities being dimensionally equivalent, it is reasonable to expect that they exhibit the same scaling behaviour, namely χ²(t, L) = L^{2α−z} G(t/L^z) t, where the factor t is made explicit to emphasise the asymptotic linear growth of χ² (i.e. G(u) converges to a constant for u → ∞). Accordingly, the diffusion coefficient is D(L) = L^{2α−z} G(∞), so that the aforementioned exponent γ can be expressed in terms of the interface roughening exponents, γ = z − 2α.


Fig. 11.14 Rescaled diffusion coefficients K_i of the LE differences δλ in Hénon maps (K_i L plotted versus ρ).

This relationship accounts for the anomalous scaling of the diffusion constant exhibited by the first Lyapunov vector. Since this vector is characterised by the KPZ dynamics, it follows that γ1 = 1/2, as indeed observed by Kuptsov and Politi (2011). In the bulk of the spectrum, γ is close to 1 since z = 1, and the smallness of 2α reflects itself in small deviations of γ from 1 (for a more detailed discussion see Pazó et al. (2013)). We conclude this section by discussing how this analysis can be used to assess the hyperbolicity of the underlying dynamics. In Section 5.7.1, we saw that the diffusion coefficient K_i of δ_i = λ_i − λ_{i+1} is a useful observable to determine whether the Oseledets splitting is dominated (Bochi and Viana, 2005). Eq. (5.32) allows us to express K_i in terms of the diffusion matrix D. The results of numerical simulations for a chain of Hénon maps are plotted in Fig. 11.14, where one can see that K_i ∝ 1/L in the bulk of the spectrum. Therefore the standard deviation of δ_i is much larger (of order 1/√L) than δ_i itself, which is of order 1/L (this follows from the very existence of a limit Lyapunov density spectrum), and one must thereby conclude that order exchanges of the LEs may occur with a finite probability. In order to establish whether the Oseledets splitting is dominated, one should, however, focus on the i value which corresponds to the dimension of the unstable manifold. In the case of the Hénon maps (for the parameter values considered in this chapter), this happens for ρ = 1, where there is a gap in the Lyapunov spectrum. There, even though K_L decreases more slowly (namely, as 1/√L), since δ_L is finite, the probability of order exchanges goes to zero, revealing that stable and unstable manifolds are mutually transversal and the system is effectively hyperbolic.
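The pairing rule λ_i + λ_{N+1−i} = ln b used throughout this section is easy to verify in the smallest setting, a single Hénon map (x, y) → (1 − ax² + y, bx): there |det J| = b everywhere, so the two exponents must satisfy λ1 + λ2 = ln b (a quick numerical sketch, not the authors' code):

```python
import numpy as np

def henon_lyap(a=1.4, b=0.3, n_steps=20000, n_discard=500):
    """Both Lyapunov exponents of a single Henon map via QR decomposition."""
    x, y = 0.1, 0.1
    q = np.eye(2)
    sums = np.zeros(2)
    for t in range(n_steps + n_discard):
        jac = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])
        x, y = 1.0 - a * x * x + y, b * x
        q, r = np.linalg.qr(jac @ q)
        if t >= n_discard:
            sums += np.log(np.abs(np.diag(r)))
    return sums / n_steps

lam = henon_lyap()
# lam[0] + lam[1] = ln 0.3, since |det J| = b at every point (pairing, N = 1)
```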
The existence of a gap is, however, not a general property of dynamical systems, and it is therefore reasonable to conjecture that generic dynamical systems are not typically hyperbolic in the thermodynamic limit. Another interesting property has been detected, however, which makes the scenario more intriguing. In a chain of Stuart-Landau oscillators, it has been found that somewhere in the negative part of the Lyapunov spectrum Ki vanishes. It has been conjectured that this is associated with the separation between the physical modes (which contribute to the


attractor dynamics) and irrelevant or slaved modes (which govern the relaxation towards the inertial manifold) (Kuptsov and Politi, 2011).

Dimension variability

High-dimensional chaotic systems may be so complex that their local dimension exhibits fluctuations that are larger than one unit, and it may even happen that the periodic orbits embedded in the attractor have a different number of unstable directions (Kostelich et al., 1997). Such features are manifestations of a non-hyperbolic dynamics. A statistical analysis of Lyapunov exponents allows us to quantify dimension variability. In Chapter 6, we saw that the dimension m of an attractor⁵ corresponds to the dimension of a box V_m whose volume expansion rate S_m = λ_1 + λ_2 + ··· + λ_m vanishes. Because of LE fluctuations, m fluctuates as well, implying a dimension variability across the invariant measure. A semi-quantitative analysis can be carried out by arguing as follows. Within a Gaussian approximation, the distribution of S_m values over a finite time τ is approximately given by

\[
P(S_m) \approx \exp\Bigl( - \frac{S_m^2\, \tau}{2 D^V(m)} \Bigr) ,
\tag{11.15}
\]

where D^V(m) can be expressed in terms of the entries of the diffusion matrix D,

\[
D^V(m) = \sum_{i, j \le m} D_{ij} .
\]

The quantity D^V(m) can be determined only numerically. The results of simulations made with chains of Hénon and logistic maps are reported in Fig. 11.15, where the scaled diffusion coefficients are plotted versus ρ = m/L, together with the integrated Lyapunov spectrum S, which enables the location of the Kaplan-Yorke dimension density as the point where S = 0. In the case of Hénon maps, D^V is symmetric around ρ = 1; this is a consequence of the pairing rule of the Lyapunov exponents. For ρ > 1, negative Lyapunov exponents progressively contribute to the total volume expansion rate, but their fluctuations cancel those of their positive “companions”, as they are perfectly anticorrelated. One expects to observe the same symmetry in any symplectic dynamical system. No symmetry is instead present in the chain of logistic maps, but the good data collapse reveals that D^V is proportional to L in both models. This is because D^V is the sum of a number of elements of order L², all being positive (at least as long as positive Lyapunov exponents are considered) and of order 1/L. This in turn implies that the covariance along the direction u_m ≡ (1, 1, 1, …, 0, …)/√m (the different entries correspond to the different Lyapunov exponents ordered from the largest to the smallest one, while m is the number of “1”s) is of order O(1) (recalling that m is of order L). Accordingly, one understands that the previously discussed divergence in the spectrum of D is due to the large LE fluctuations observed along directions that can be expressed as a linear (positive) combination of many components in the original basis. A physical justification of this behaviour is still lacking.

⁵ Here, for the sake of simplicity, we neglect fractional corrections and assume m to be an integer.

Fig. 11.15 Volume fluctuations in a chain of Hénon maps (panel (a): same parameters as in the previous figures) and of logistic maps (panel (b): ε = 1/3, a = 2). The system sizes are 20, 40 and 80 (dotted, dashed and dotted-dashed lines) in the former case; 100 and 200 (dotted and dashed) in the latter. The solid curves correspond to the rescaled volume expansion rate S/N.

Within a linear approximation, S_D = S_m + (D − m)λ_m. By identifying D with the value where S_D = 0, it follows that D = m − S_m/λ_m. By combining this change of variables with Eq. (6.6), which allows transforming the dependence on τ into a dependence on ε, Eq. (11.15) can be rewritten as


d, starting with a finite slope. This equation can be solved for x = D_c/L. By then expanding the resulting inverse function, F^{−1}(ln ε), for small values of ln ε, one finds the perturbative expression

\[
D_c(\varepsilon, L) = dL - \frac{\ln \varepsilon}{\beta_1} ,
\]

which has the same structure as Eq. (11.16). Thus, Eq. (11.17) appears as a natural extension to arbitrarily small scales. In the case of the complex Ginzburg-Landau equation, a rigorous upper bound has been obtained by Collet and Eckmann (1999): D_c(ε, L) < B_0 L + B_1/ε². The approach that has led to such a formula is the same discussed in Chapter 6 to derive the Kaplan-Yorke formula for standard finite-dimensional attractors: given a set of boxes that cover the attractor, a finer covering is obtained by simply letting each box evolve in time. As a result, time evolution can be turned into an increase of the resolution. We anticipate that the main difference between closed and open systems is that in the latter case the Lyapunov exponents exhibit a slow convergence that is absent in closed systems. Let us now proceed by introducing some notation: let the vectors U_∥ and U_⊥ correspond to the state variables within and, respectively, outside the window W_L of size L we are interested in. The aim is to infer the fractal properties of the underlying attractor from the time evolution of the probability P(t, U_∥, U_⊥). Let the initial condition P(0, U_∥, U_⊥) denote a homogeneous distribution confined to a unit hypercube which contains the attractor and is contained in its basin of attraction.⁸ The probability density can be then represented as

\[
P(t, U_\parallel, U_\perp) = \int dU_\perp^0\, Q(t, U_\parallel, U_\perp \,|\, U_\perp^0)\, P_\perp(U_\perp^0) ,
\tag{11.18}
\]

where Q(t, U_∥, U_⊥|U_⊥⁰) denotes the probability density at time t conditioned to the initial configuration U_⊥⁰ of the external “hidden” variables, while P_⊥(U_⊥⁰) represents their distribution. The integral over the variables U_⊥⁰ represents the first relevant difference with respect to closed systems: the ignorance about their current values contributes to dressing the probability density. We briefly discuss this problem in Section 11.5.2. There exists, in fact, a second difference: pairs of initial conditions that differ only inside the window W_L evolve in such a way that their separation propagates in the outer region as well. This suggests the need of defining a different, more appropriate Lyapunov spectrum.

⁸ If this is not possible, one should decompose the following reasoning into various steps at the expense of requiring additional technicalities.


11.5.1 Lyapunov spectra of open systems

For the sake of simplicity we restrict our considerations to a one-dimensional lattice and we assume that each site is characterised by a scalar variable. Let {u^(n)(t)} (n = 1, …, L) denote L independent perturbation vectors, all initially restricted to a window of length L, i.e. u_x^(n)(0) = 0 for x ≤ 0 and x > L and for all n = 1, …, L, where u_x^(n) stands for the amplitude of the nth vector at site x. Such perturbations are allowed to freely evolve within a formally infinite environment for a time T = gL, to determine the volumes spanned by the projections P_L u^(n)(t), where P_L is the projection operator,

\[
[P_L u^{(n)}(t)]_x =
\begin{cases}
u_x^{(n)}(t) & \text{if } 0 < x \le L, \\
0 & \text{otherwise}.
\end{cases}
\]

In practice, one proceeds as in the standard computation of Lyapunov exponents, except for two differences: (i) the environment is unbounded (as for the convective exponents), and (ii) volumes are determined only with reference to a fixed finite window. The open-system Lyapunov exponents λ_i^o are finally defined by taking the L → ∞ limit for a fixed g value, where g is a free, independent parameter. For g → ∞, one expects that all open-system Lyapunov exponents converge to the maximum Lyapunov exponent, since the space covered by the perturbations grows and this procedure becomes similar to the computation of an increasingly tiny fraction of a Lyapunov spectrum. On the other hand, for g → 0, we expect to recover the usual Lyapunov spectrum, since propagation effects are negligible. Here, we illustrate the results with reference to a chain of logistic maps (see Eq. (A.10)). In Fig. 11.16a we report the spectra obtained for ε = 1/6, a = 2, L = 150 and different values of g (from 0.1 to 0.5). The nice data collapse reveals that the spectral change essentially amounts to an expansion along the horizontal axis: the Lyapunov label i has indeed been scaled by the effective length L_e = f(g)L, where the factor f has been determined so as to maximise the overlap.
This result can be regarded as evidence of hyperscaling (scale invariance of the spectrum under both changes in the system size and propagation times). It is, a priori, reasonable to expect that the effective length increases linearly with time, namely that L_e = L + 2v_o T, where the factor 2 accounts for the fact that the growth occurs on both sides of the window, while v_o is some unknown velocity. By then recalling that g = T/L, the aforementioned equation implies L_e/L = 1 + 2v_o g. The data plotted in Fig. 11.16b indeed confirm a linear growth and suggest that v_o ≈ 0.085 (see the dashed line). This is a rather small velocity compared with the propagation speed measured from the convective exponents (see Section 10.3); in fact, it measures a different quantity: how an ensemble of perturbations is able to fill the available phase space, rather than the growth of the amplitude along a specific direction.
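A minimal implementation of the recipe of Section 11.5.1 can be sketched as follows (not the authors' code). Two assumptions are made explicit: the formally infinite environment is replaced by a padding wide enough that no perturbation reaches the lattice boundary within T = gL steps, and a diffusively coupled chain of maps f(x) = 1 − ax² — a form common in this literature — stands in for the model (A.10), which is not reproduced in this section.

```python
import numpy as np

def open_system_les(L=16, g=0.3, a=2.0, eps=1.0 / 6.0, seed=3):
    """Open-system Lyapunov exponents for a window of length L (sketch).

    L orthonormal perturbations, initially supported on the window, evolve
    freely in a padded lattice for T = g*L steps; the exponents follow
    from a QR decomposition of the vectors projected back onto the window.
    """
    T = max(1, int(round(g * L)))
    pad = T + 2                       # perturbations spread one site per step
    n = L + 2 * pad
    rng = np.random.default_rng(seed)

    f = lambda v: 1.0 - a * v * v     # assumed single-site map
    fprime = lambda v: -2.0 * a * v

    def lattice_step(u):
        fu = f(u)
        return (1.0 - eps) * fu + 0.5 * eps * (np.roll(fu, 1) + np.roll(fu, -1))

    def tangent_step(u, v):
        w = fprime(u)[:, None] * v
        return (1.0 - eps) * w + 0.5 * eps * (np.roll(w, 1, axis=0)
                                              + np.roll(w, -1, axis=0))

    u = rng.uniform(-1.0, 1.0, n)
    for _ in range(500):              # transient
        u = lattice_step(u)

    v = np.zeros((n, L))
    v[pad:pad + L], _ = np.linalg.qr(rng.normal(size=(L, L)))  # window support
    for _ in range(T):
        v = tangent_step(u, v)
        u = lattice_step(u)
    _, r = np.linalg.qr(v[pad:pad + L])   # volumes of the projected vectors
    return np.sort(np.log(np.abs(np.diag(r))) / T)[::-1]

exps = open_system_les()
```

Averaging over many realisations and increasing L at fixed g would reproduce the spectra of Fig. 11.16a.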

11.5.2 Scaling behaviour of the invariant measure

Now we can return to the problem of extending the Kaplan-Yorke formula to open systems. We start by neglecting the integration over the external degrees of freedom (see

Eq. (11.18)). In this approximation, we are entitled to use Eqs. (6.6) and (6.3), with the warning that now the Lyapunov exponents do depend on time. Under the assumption that hyperscaling holds, one can still invoke the proportionality of the dimension to the system size, with the length L replaced by its effective value after a time T:

\[
\tilde{D}_{KY}(T, L) = d(1 + 2 v_o g)L = dL + 2 d v_o T .
\tag{11.19}
\]

Fig. 11.16 (a) Open-system Lyapunov spectra for a chain of logistic maps: the factor f is empirically determined so as to ensure the best overlap. The various curves correspond to g = 0, 0.1, 0.2, 0.3, 0.4 and 0.5. In panel (b), the scaling factor f is plotted versus g. The dashed line has been obtained with a linear fit; its slope is 0.16.

We can now use Eq. (6.6) to transform the dependence of D̃_KY on time T into a dependence of D_KY on the resolution ε. Indeed, inverting Eq. (6.6), we have

\[
T = - \frac{\ln \varepsilon}{|\lambda_d|} ,
\tag{11.20}
\]

where λ_d = λ(ρ = d). Upon inserting Eq. (11.20) into Eq. (11.19), we obtain

\[
D_{KY}(\varepsilon, L) = dL - \frac{2 d v_o}{|\lambda_d|} \ln \varepsilon ,
\tag{11.21}
\]

which reveals a logarithmic dependence on the resolution of the type heuristically conjectured in (11.16). This formula represents, however, a lower bound on the dimension D_KY, since we have neglected the effect of the integral over U_⊥⁰ in Eq. (11.18), which accounts for the uncertainty on the inner variables induced by the initial lack of knowledge of the outer degrees of freedom. An accurate quantification of the corresponding propagation of information is far from trivial, since it may be controlled by nonlinear mechanisms, as happens for instance for stable chaos (Politi and Torcini, 2010), where nonlinear mechanisms are responsible for the self-sustainment of irregular behaviour even in the presence of a negative


Lyapunov spectrum. In the following, we limit ourselves to presenting a heuristic argument that leads to an upper bound. From the convective Lyapunov exponents, we know that a perturbation of amplitude 1 originated at the boundary of the window at time 0 is “amplified” by a factor exp[TΛ(x/T)] after a time T at a distance x (this is the point where the assumption that nonlinear effects are negligible is crucial). From Eq. (11.20), we need to probe the perturbation at the time T = |ln ε/λ_d|. The maximum distance from the boundary where the perturbation is larger than the resolution ε can be then determined by imposing exp[TΛ(x/T)] = ε, i.e.

\[
\Lambda\Bigl( \frac{x |\lambda_d|}{|\ln \varepsilon|} \Bigr) = \lambda_d .
\tag{11.22}
\]

Now let v* denote the velocity such that Λ(v*) = λ_d (if the whole convective spectrum is larger than λ_d, v* must be assumed equal to the maximal possible velocity, which, in the case of nearest-neighbour coupling, is equal to 1). Eq. (11.22) implies that

\[
x = \frac{v^*}{|\lambda_d|} |\ln \varepsilon| .
\]

In other words, x is the maximal distance at which a perturbation arising from the boundary is larger than ε at the time implicitly selected to partition the phase space with boxes of size ε. If we now assume that the distribution over the x sites has the maximal possible dimension, we find that the correction to the dimension due to the external degrees of freedom is x d_v, where d_v is the number of variables per lattice site in each of the two window edges. Altogether, adding this contribution to Eq. (11.21), one obtains the upper bound

\[
D_c(\varepsilon, L) \le dL - \frac{2(d v_o + d_v v^*)}{|\lambda_d|} \ln \varepsilon .
\]

This result suggests that the rigorous bound obtained by Collet and Eckmann (1999) with reference to the complex Ginzburg-Landau equation can be improved. For a more detailed discussion of the limits of this derivation (in particular the condition | ln ε| < αL, where α is a suitable factor) the reader is invited to consult Cipriani and Politi (2004). We expect the open system approach to be of some relevance in the characterisation of nonequilibrium measures in Hamiltonian systems, since the phase space is filled in a singular way, like in the setup considered in this section.

12 Applications

12.1 Anderson localisation

Anderson localisation is a general phenomenon occurring in the linear propagation of waves in disordered media. The name comes from the seminal paper by Anderson (1958), where he argued that in a disordered lattice, a wave packet remains localised rather than spreading out. Nearly simultaneously, Gertsenshtein and Vasiljev (1959) found that the almost full reflection of a wave by a disordered waveguide is also a manifestation of localisation. Anderson localisation lies at the heart of several properties of disordered media. It has many applications in solid state physics (see the reviews by Kramer and MacKinnon (1993), Beenakker (1997) and Evers and Mirlin (2008) and the collection edited by Abrahams (2010)) and in optics and acoustics, as well as in general disordered environments, either continuous or discrete. Two questions are usually addressed: (i) characterisation of the eigenstate structure and (ii) determination of the transport properties (e.g. scattering) in the presence of a disordered layer. Remarkably, in one dimension, a small disorder is sufficient to induce a pure point spectrum of the eigenvalues, with eigenmodes that are exponentially localised in space, and an almost perfect reflection, with an exponentially small (in terms of the layer length) transmission coefficient. In both contexts, once a harmonic time dependence of the field has been assumed, the problem can be reduced to a stationary equation, where the field depends only on the spatial variable(s). In one dimension, this is expressed either as a recursive equation (in lattice systems) or as a differential equation (in spatially continuous formulations). To be more concrete, let us consider two prototypical formulations: the stationary Schrödinger equation in continuous space,

E ψ(x) = −d²ψ/dx² + V(x) ψ(x),   (12.1)

and its lattice analog (called the Anderson model),

E ψ_n = V_n ψ_n − ψ_{n−1} − ψ_{n+1}.   (12.2)

Here, the potential V(x) is a random function (respectively, V_n is a sequence of random numbers). If Eqs. (12.1, 12.2) are treated as an initial value problem, e.g. fixing ψ(0) and ψ′(0) in (12.1) to determine ψ(x) for x > 0 (respectively, fixing ψ_0 and ψ_{−1} in (12.2) to determine ψ_n for n > 0), a clear analogy with the computation of LEs in stochastic systems


arises: the field generally grows as ψ(x) ∼ exp[λx], where λ is the largest Lyapunov exponent. In the continuous setup the computation of the Lyapunov exponents is equivalent to studying the stability of a noise-driven oscillator (see Eq. (8.26)). In a discrete lattice context, the problem is equivalent to the analysis of a product of random (transfer) matrices,

\begin{pmatrix} ψ_{n+1} \\ ψ_n \end{pmatrix} = \begin{pmatrix} V_n − E & −1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} ψ_n \\ ψ_{n−1} \end{pmatrix}

(see Section 8.1). In both cases, however, one is interested not in the initial value problem but in either the eigenvalue or the boundary problem. Hence, additional care is needed to show that proper solutions of these problems follow the same exponential asymptotics as the initial value problem (see Bougerol and Lacroix (1985); Pastur and Figotin (1992)). Roughly speaking, the LE analysis identifies the correct eigenfunctions under the following assumptions: (i) the asymptotic exponential rates (as defined according to either the Oseledets theorem or, for random matrices, the Furstenberg-Kesten theorem) are valid for all perturbations (for a typical realisation of the disorder) and (ii) these rates depend continuously on the control parameters (the energy, in the context of eigenvalue problems). The fact that the Lyapunov exponent determines the exponential decay rate of the eigenfunction is often called the Borland conjecture in the physical literature (Borland, 1963). Of course, any localised eigenfunction, while growing exponentially on one side of the peak, decays exponentially on the other side. Since, however, the negative Lyapunov exponent has the same magnitude as the positive one (due to the symmetry x → −x, or n → −n), one can write that away from the localisation area |ψ| ∼ exp[−λ|x|]. The quantity ℓ = 1/λ is often referred to as the localisation length of the given eigenfunction. In practice, returning to the initial-value interpretation of the evolution equations, the true eigenfunction is generated by complementing ψ(0) with the special value ψ′(0), such that ψ(x), after having reached its maximum somewhere, eventually decreases back towards the boundary value ψ(L). As for the transmission T through a disordered layer of length L, it is exponentially small, T ∼ exp[−2λL] (the factor 2 is there because the transmission coefficient refers to the “energy” ∼ |ψ|²).
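As a minimal numerical sketch of this transfer-matrix computation (the box-distributed disorder of width W, the number of steps and all parameter values below are illustrative choices, not prescriptions from the text), the largest Lyapunov exponent — and hence the localisation length ℓ = 1/λ — can be estimated by iterating a vector and renormalising it:

```python
import numpy as np

rng = np.random.default_rng(1)

def lyapunov_anderson(E, W, n_steps=100_000):
    """Largest LE of the Anderson transfer-matrix product; the site
    energies V_n are drawn from a box distribution of width W."""
    v = np.array([1.0, 0.0])
    log_norm = 0.0
    for _ in range(n_steps):
        Vn = rng.uniform(-W / 2, W / 2)
        T = np.array([[Vn - E, -1.0], [1.0, 0.0]])
        v = T @ v
        nv = np.linalg.norm(v)
        log_norm += np.log(nv)   # accumulate the growth rate
        v /= nv                  # renormalise to avoid overflow
    return log_norm / n_steps

lam = lyapunov_anderson(E=0.0, W=1.0)
ell = 1 / lam   # localisation length of the eigenfunction
```

For weak disorder the result is close to the perturbative band-centre estimate λ ≈ W²/96 (up to the known band-centre anomaly).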
The Lyapunov exponent depends on E; since it is non-zero for all energy values that belong to the spectrum, there is no room for the propagation of extended modes; i.e. the spectrum of the Schrödinger equation in a disordered potential is purely discrete. For the transmission coefficient the situation is slightly more subtle. Here E can be arbitrary (it corresponds to the frequency of the incident wave), while the exponential law for the transmission is valid for typical values of E. There can be exceptional resonant frequencies (corresponding to localised eigenmodes in the bulk of the disordered layer, having comparable amplitudes at both ends) where the transmission is large. Indeed, if the incident wave is resonant with such a mode, the latter will be excited to very large amplitudes, which results in a large transmission rate; see examples in Bliokh et al. (2008) and Gredeskul and Freilikher (1990). In the context of Anderson localisation studies, there is a huge literature on different numerical and analytic approaches to compute the Lyapunov exponents in models of the type (12.1, 12.2). Many of the methods described in Chapter 8 have been, in fact, developed


in this field. A detailed account of random potentials with correlations is presented by Izrailev et al. (2012), while problems where the potentials are dynamically generated as quasiperiodic or chaotic functions of a spatial coordinate are discussed by Bourgain (2005). Finally, one should notice that in more complex one-dimensional setups, where the evolution in space involves more than one degree of freedom, several positive and negative exponents are present (e.g. in the presence of next-nearest-neighbour interactions in a lattice model, one deals with 4 × 4 transfer matrices, so that four exponents are expected). In this case the localisation length is determined by the LE closest to zero, which asymptotically dominates the decay of the eigenfunctions. The concept of Lyapunov exponent cannot be straightforwardly extended to two- and three-dimensional systems, as there is no single time-like coordinate. Also, the stationary problem no longer reduces to an ordinary differential equation but to an elliptic partial differential one. Here, the Lyapunov-exponent approach can be applied to so-called quasi-one-dimensional setups, i.e. disordered stripes (bars) that are extended only along one dimension, while being bounded in the other dimension(s) (see Kramer and MacKinnon (1993) and Kramer et al. (1987) for details and further references). If there are d transverse dimensions, each of length L, then the vector identifying the given quasi-one-dimensional setup consists of L^d components. Its evolution along the one-dimensional space is governed by a product of 2L^d × 2L^d transfer matrices and is thereby characterised by 2L^d Lyapunov exponents (which come in symmetric pairs, as the system is invariant under space reversal). The asymptotic properties of the eigenfunctions are determined by the smallest LE (in absolute value), i.e. the exponent λ_min(d, L) closest to zero.
The localisation properties of a (d+1)-dimensional disordered medium can be assessed by monitoring the dimensionless parameter Λ = Lλ_min for increasing values of the transversal size L (MacKinnon and Kramer, 1981, 1983; Pichard and Sarma, 1981a, b). In the localised phase, Λ increases proportionally to the transversal size L, since λ_min remains finite. In the delocalised phase, the minimal Lyapunov exponent tends to zero rapidly enough to make Λ decrease with L. The application of this approach to two-dimensional disordered media (d = 1) reveals that all eigenfunctions are typically localised. In the three-dimensional setup (d = 2), a so-called mobility edge separates localised from delocalised states. The transition can be observed, e.g., by tuning the strength of the disorder above a critical value (MacKinnon and Kramer, 1981, 1983; Kramer et al., 1987; Kramer and MacKinnon, 1993; Slevin and Ohtsuki, 1999).
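The full set of exponents of such a transfer-matrix product can be obtained with the QR method of Chapter 3. The sketch below treats a narrow disordered strip (the width W = 2, the disorder strength and the number of iterations are illustrative choices, not values taken from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def strip_spectrum(E=0.0, W=2, disorder=1.0, n_steps=30_000):
    """All 2W Lyapunov exponents of a disordered strip of width W,
    via QR re-orthogonalisation of the transfer-matrix product."""
    m = 2 * W
    I = np.eye(W)
    # transverse hopping within one slice of the strip
    hop = np.diag(np.ones(W - 1), 1) + np.diag(np.ones(W - 1), -1)
    Q = np.eye(m)
    sums = np.zeros(m)
    for _ in range(n_steps):
        Hn = np.diag(rng.uniform(-disorder / 2, disorder / 2, W)) - hop
        T = np.block([[Hn - E * I, -I], [I, np.zeros((W, W))]])
        Q, R = np.linalg.qr(T @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return np.sort(sums / n_steps)[::-1]   # descending order

lam = strip_spectrum()
lam_min = lam[lam > 0].min()   # exponent closest to zero: sets 1/ell
```

The exponents come out in ± pairs (the transfer matrices are symplectic) and their sum vanishes, since each transfer matrix has unit determinant.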

12.2 Billiards

Billiards are one of the most studied Hamiltonian setups for testing nonlinear dynamics concepts and investigating connections with statistical mechanics (see Section 12.3). A billiard is a region R, where a pointlike particle is left free to move, colliding elastically with the boundary of R. Two examples are presented in Fig. 12.1: in the triangular billiard drawn in panel (a), collisions occur with an outer boundary; in the Sinai billiard drawn in panel (b), periodic boundary conditions are assumed and collisions occur only with the “inner” disk.

Fig. 12.1: (a) A dispersing triangular billiard. The solid line represents a possible trajectory; φ is the angle between the direction of propagation and the normal to the boundary in a given collision. (b) Sinai billiard with periodic boundary conditions.

Depending on whether the concavity of the boundaries points towards inner or outer regions, the billiard is called dispersing or focusing (and it can also have a mixed nature). In the former case, the dynamics is undoubtedly chaotic, while the behaviour of focusing billiards is more difficult to assess a priori. The reader interested in a rigorous analysis of such dynamical systems can consult Chernov and Markarian (2006). The state of a plane billiard is determined by three variables: two are needed to identify the position within the billiard region and one to identify the orientation of the velocity (the modulus of the velocity, being constant, can be assumed to be equal to 1 without loss of generality). The evolution in tangent space is conveniently described by referring to a frame that moves with the particle. In such a frame, two variables suffice: the distance u orthogonal to the direction of propagation and the velocity difference v along the same orthogonal direction (the distance along the direction of propagation is irrelevant, as it is neutrally stable and yields a zero Lyapunov exponent). Let {t_n} denote the sequence of collision times of the particle with the boundary (t_1 < t_2 < · · · < t_n). In between two consecutive collisions,

u̇ = v,   v̇ = 0,

which integrates to yield

u^−_{n+1} = u^+_n + v^+_n τ_n,   v^−_{n+1} = v^+_n,

where τ_n = t_{n+1} − t_n and the −/+ superscript means that the corresponding variable is determined just before/after the collision. The evolution equations are completed by adding the effect of the collision,

u^+_{n+1} = −u^−_{n+1},   v^+_{n+1} = −v^−_{n+1} + 2u^−_{n+1}/(a cos φ_{n+1}),

where −π/2 ≤ φ_{n+1} ≤ π/2 is the angle between the direction of propagation and the normal to the boundary (see Fig. 12.1), while a is the local radius of curvature of the boundary. By combining the two steps, the following transformation is obtained:

J_n = −\begin{pmatrix} 1 & τ_n \\ 2/(a cos φ_{n+1}) & 1 + 2τ_n/(a cos φ_{n+1}) \end{pmatrix} .

The determinant of J_n is equal to 1, as it should be. The Lyapunov exponent can be determined by multiplying the Jacobians along a generic trajectory. Depending on whether the radius of curvature a is positive or negative, J_n is either hyperbolic or elliptic. In the former case (dispersing billiard), one eigenvalue is larger than 1 and the dynamics is trivially chaotic. It is, however, possible to have a positive Lyapunov exponent also in the latter case (focusing billiard), as we have seen in Chapter 8, while discussing random products of elliptic matrices. The expansion rate between the nth and the (n+1)th collision can be expressed as

μ_{n+1} ≡ u^+_{n+1}/u^+_n = −(1 + κ_n τ_n),

where κ_n ≡ v^+_n/u^+_n denotes the local expanding direction, which can be determined by iterating the recursive equation

κ_{n+1} = κ_n/(1 + κ_n τ_n) + 2/(a cos φ_{n+1}).   (12.3)

The positive Lyapunov exponent can thus be expressed as

λ_+ = ⟨ln |1 + κ_n τ_n|⟩ / ⟨τ_n⟩,

where the angular brackets denote a time average. If one is able to determine κ_n at all points of the phase space, the Lyapunov exponent can be equivalently expressed by replacing the time average with an ensemble average. The quantity κ_n has a simple physical interpretation: it corresponds to the curvature of a “wave front” propagating along the reference trajectory. In fact, the first term on the r.h.s. of Eq. (12.3) corresponds to the linear increase of the radius of curvature (1/κ) of the wave front in between two consecutive collisions, while the second term takes into account the reflection off the boundary. In Fig. 12.2, the direction of the Lyapunov vector (i.e. the curvature) is plotted in the phase space of the triangular billiard of Fig. 12.1a. Going back to the continuous-time representation, it is easily seen that the Lyapunov exponent can be expressed as an average of the instantaneous expansion rate (i.e. the instantaneous value of the curvature),

λ_+ = ⟨ v^+_n/(u^+_n + v^+_n t) ⟩ = ⟨ κ_n/(1 + κ_n t) ⟩ .

The problem of determining the Lyapunov exponent remains nevertheless quite hard, as no analytic expression is typically available for the curvature. In some limit cases it is, however, possible to derive approximate formulas. One example is the Lorentz gas, i.e. a collection of circular disks of radius a, in the limit of high dilution, a ≪ 1. In these


Fig. 12.2: Local phase space for the triangular billiard in the previous figure: r is the position along one of the three arcs composing the boundary, v is the component of the velocity parallel to the boundary. The small segments indicate the local direction of the Lyapunov vector, i.e. the curvature of the corresponding wave front.

conditions, the curvature values in Eq. (12.3) are dominated by the large last term on the r.h.s. of that formula. As a result, the Lyapunov exponent can be rewritten as

λ_+ ≈ ( ln(2/a) − ⟨ln cos φ_n⟩ + ln⟨τ_n⟩ ) / ⟨τ_n⟩ .

The average time ⟨τ_n⟩ can be determined with the following rough argument: a generic (straight) trajectory hits any disk whose centre is located in a corridor of width 2a. Accordingly, the typical distance L the particle travels before encountering an obstacle is given by n·2a·L = 1, where n is the density of scatterers, so that ⟨τ⟩ = 1/(2an) – recalling that we have assumed a unit velocity. Therefore,

λ_+ ≈ −4a ln(an) − 2a⟨ln cos φ⟩,

where we have also assumed that ⟨ln τ_n⟩ = ln⟨τ_n⟩. The leading contribution −4a ln a is universal, as it does not depend on the details of the scattering process. It was first conjectured by Krylov (1979) and then proved by Friedman et al. (1984). The value of the correction terms depends on the specific physical setup; their estimation requires dealing with the incidence angle φ_n. This can be done, for instance, by focusing attention on the trace of the products of the Jacobians, the knowledge of which suffices for the determination of the Lyapunov exponents in two-dimensional area-preserving maps. By adopting this approach, Dahlqvist (1997) was able to derive an additional constant term in the case of the Sinai billiard (see Fig. 12.1b) in the small scatterer limit.
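The quality of this kind of approximation can be probed numerically by iterating Eq. (12.3) with synthetic collision sequences. The sketch below is only an illustration: the exponential distribution of the free-flight times, the cos φ distribution of the impact angles, and the values of a and of the density n are modelling assumptions, not prescriptions from the text:

```python
import numpy as np

rng = np.random.default_rng(4)

a, n = 0.01, 1.0                              # scatterer radius and density (illustrative)
N = 50_000
tau = rng.exponential(1 / (2 * a * n), N)     # free-flight times, mean 1/(2an)
phi = np.arcsin(rng.uniform(-1, 1, N))        # impact angles, p(phi) ~ cos(phi)

kappa, log_sum = 0.0, 0.0
for t, p in zip(tau, phi):
    log_sum += np.log(abs(1 + kappa * t))
    kappa = kappa / (1 + kappa * t) + 2 / (a * np.cos(p))   # Eq. (12.3)

lam_exact = log_sum / tau.sum()                        # <ln|1+kappa tau|>/<tau>
lam_approx = (np.log(2 / a) - np.mean(np.log(np.cos(phi)))
              + np.log(tau.mean())) / tau.mean()       # dilute-gas estimate
```

The two estimates differ by a few per cent, essentially because the approximation replaces ⟨ln τ_n⟩ with ln⟨τ_n⟩.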


An alternative approach is based on the estimate of the probability distribution of the curvature in real space (a sort of Boltzmann equation) that is akin to the determination of the density of the direction of Lyapunov vectors that we have encountered in Chapter 8 for products of random matrices. This approach was first developed by van Beijeren and Dorfman (1995) for a planar Lorentz gas with a diluted distribution of scatterers, where the leading correction was determined. Further terms have been later determined by Kruis et al. (2006). Such a method is so powerful that it could be extended to a three-dimensional diluted Lorentz gas as well, where the two positive LEs have been determined to leading orders (Latz et al., 1997) and found to agree with direct numerical simulations (Dellago and Posch, 1997).

12.3 Lyapunov exponents and transport coefficients

Two approaches have been developed that can help to extract information on nonequilibrium properties of statistical systems from the knowledge of dynamical invariants. In both cases, a link is established by expressing the same observable in terms of macroscopic as well as microscopic variables.

12.3.1 Escape rate

This approach starts from the observation that the exponential decay of a suitable probability density can be equivalently expressed in terms of either a “diffusion coefficient” or microscopic dynamical quantities. Here, we illustrate the idea with reference to the Lorentz gas (see the previous section for a definition). Consider a small particle moving in a region R of size S, much larger than both the average distance between the scatterers and the mean free path ℓ_p. Such a particle undergoes a diffusive motion on a length scale that is large compared with ℓ_p, but small compared with S. Suppose also that R is surrounded by absorbing boundaries, so that when the particle crosses a boundary, it is removed. It is well known that the probability P(t) that the particle is still inside R after a long time t decreases exponentially,

P(t) ≈ e^{−γt}.   (12.4)

Assuming that we are in the presence of normal diffusion, the rate γ can be obtained by solving a standard diffusion equation with absorbing boundary conditions. In practice, γ corresponds to the smallest eigenvalue of this problem, which, for S large enough, has the typical structure γ = Da/S² (Gaspard and Nicolis, 1990), where D is the diffusion coefficient, while a is a constant related to the geometric structure of R (in a simple one-dimensional setup, a = π²). On the other hand, the particle can be microscopically seen as following a chaotic trajectory. In this framework, it remains confined forever within the allowed region if and only if it asymptotically converges to some fractal repeller, characterised by a zero Lebesgue measure. The probability for a randomly chosen initial condition to remain inside


R decays exponentially, as described by Eq. (12.4), where γ can now be interpreted as the escape rate from the repeller. Accordingly, from Section 6.3,

γ = Σ_i λ_i − h_KS ,

where the sum is performed over all positive Lyapunov exponents (in the case of a planar Lorentz gas, only one positive exponent exists), while h_KS is the Kolmogorov-Sinai entropy of the repeller. Upon identifying the two definitions of γ, one obtains the final relationship

D = lim_{S→∞} (S²/a) ( Σ_i λ_i − h_KS ) .   (12.5)

Since the diffusion coefficient D is an intensive quantity that is independent of the size and the form of the region R, this equation implies that the escape rate should vanish quadratically with S and contain an implicit dependence on the geometry, so as to cancel the explicit dependence on a present in the r.h.s. of Eq. (12.5). This appears to be true, but a comprehensive theory is still lacking. This method can be extended to determine generic transport coefficients in simple fluids. Instead of monitoring the particle position in real space, one needs to consider suitable Helfand moments (Gaspard, 2005). In the case of the shear viscosity η, the appropriate quantity to look at is

M_η = (1/√(V k_B T)) Σ_i x_i p_i^y ,

where V is the volume, k_B the Boltzmann constant, T the temperature, x_i the x coordinate of the ith particle and p_i^y the corresponding momentum along the y direction. M_η performs a random walk in phase space. Therefore, one can again follow the previous strategy. On the one hand, the decay rate can be expressed in terms of η as γ = η a_η/S_η², where S_η is the size of the selected phase-space region, while a_η is the corresponding geometrical factor. On the other hand, one can define γ as the escape rate of a suitable repeller, so that a formula of the type (12.5) is again obtained. The reader interested in a more detailed discussion can consult Gaspard (2005). A weakness of this approach is that properties of repellers in high-dimensional systems are poorly understood, and it is therefore not possible to infer the escape rate directly from the microscopic equations.
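The macroscopic side of the identification, γ = Da/S², can be illustrated with the simplest textbook case: an unbiased random walk between absorbing boundaries, for which D = 1/2 (unit ±1 steps) and a = π² in one dimension. The lattice size below is an illustrative choice:

```python
import numpy as np

S = 50                              # interval length (illustrative)
# transition matrix of a +/-1 random walk on sites 1..S-1,
# with absorbing boundaries at 0 and S
P = np.zeros((S - 1, S - 1))
for i in range(S - 1):
    if i > 0:
        P[i, i - 1] = 0.5
    if i < S - 2:
        P[i, i + 1] = 0.5

rho = np.linalg.eigvalsh(P).max()    # slowest-decaying mode (= cos(pi/S))
gamma = -np.log(rho)                 # survival probability ~ exp(-gamma t)
gamma_macro = 0.5 * np.pi**2 / S**2  # D*a/S^2 with D = 1/2, a = pi^2
```

For large S the two rates agree, since −ln cos(π/S) ≈ π²/(2S²).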

12.3.2 Molecular dynamics

A second approach is based on the introduction of deterministic thermostats to steadily maintain a constant temperature even in the presence of an external forcing. Such devices are regularly used in molecular dynamics studies of stationary nonequilibrium states (Hoover, 1991; Evans and Morriss, 2008). We illustrate the approach with reference to a charged particle that moves in a Lorentz gas in the presence of an electric field E. The determination of the corresponding electric conductivity is the object of interest. Between collisions, the particle satisfies the equation

ṙ = v,   v̇ = (q/m) E − α(v) v,

where m and q are the mass and the charge of the particle, respectively; −αv is a fictitious force, added to keep the kinetic energy strictly constant. This is ensured by imposing v̇ · v = 0, so that one finds

α(v) = (q E · v)/(m v²) ;

α is a self-generated friction coefficient which determines the electric conductivity. In fact, by averaging the definition of α over time, we obtain

⟨α⟩ = ⟨j · E⟩/(k_B T) = σE²/(k_B T) ,

where we have made use of the definition of kinetic temperature ⟨v²⟩ = k_B T and recalled the definition of the electric conductivity σ (j is the electric current). One should notice that the “frictional” term destroys the Hamiltonian structure of the evolution equations. In particular, the sum of all Lyapunov exponents is no longer equal to zero. The divergence of the velocity field is indeed equal to −(d − 1)α, where d is the dimension of the space. Since the divergence is also equal to the sum of all Lyapunov exponents, one obtains the final relationship

(d − 1) σE²/(k_B T) = − Σ_i λ_i ,

which is another way of linking microscopic to macroscopic observables. Relationships of this type have been derived by Chernov et al. (1993) and Baranyai et al. (1993), and even earlier with reference to shear viscosity by Evans et al. (1990). In this case, nontrivial physics is hidden in the fact that the volume contraction rate is expected to scale quadratically with the field E. A weakness of this approach is the ad hoc character of the additional terms, which have no microscopic justification. Finally, comparing the two approaches, one can notice that they invoke different invariant structures in phase space: the former assumes the existence of a repeller that is typically fractal along all directions, while the latter assumes a chaotic attractor that is typically fractal only along the stable directions. How and whether the two methods can be reconciled with one another is still an open problem. A more detailed comparative analysis can be found in Dorfman and van Beijeren (1997).

12.4 Lagrangian coherent structures

Passive tracers are particles which move according to a given velocity field, without affecting it. In an incompressible, turbulent fluid flow, passive tracers mix together quite effectively, so that a uniform distribution is soon established. If the flow is, instead, dominated by relatively stationary large structures (typically, large eddies), a generic initial distribution tends to form rather structured patterns on long but finite time scales. They are called Lagrangian (because they follow the motion of individual fluid elements)


coherent structures (LCSs). The theory of these structures, developed by Pierrehumbert and Yang (1993) and Haller (2001), is based on the computation of the finite-time Lyapunov exponents, as described in Chapter 5. Before describing the theory, we give some examples of LCSs. The most spectacular structures are those appearing over large scales as a result of either natural phenomena or man-made disasters. As a result of a volcanic eruption, an ash cloud may be spread by the atmospheric winds and form stripes with large concentration, alternating with basically ash-free regions. In an ocean, such stripes appear as a result of advection of an oil spill or large plankton fields. These and several other examples can be found in a popular article by Peacock and Haller (2013). Next, we describe the simplest setup of a two-dimensional incompressible, time-dependent flow (v_x(x, y, t), v_y(x, y, t)). The particles follow Lagrangian trajectories determined by the dynamical equations

ẋ = v_x(x, y, t),   ẏ = v_y(x, y, t).   (12.6)

For this dynamical system one can define two Lyapunov exponents, which generally (if the time dependence is random) are both non-zero. Because of incompressibility, their sum is zero, so that they have opposite signs. This means that all trajectories are “saddles”, characterised by a stable and an unstable direction. According to the basic interpretation of Lyapunov exponents (see Chapter 2), this means that a small disk around a trajectory evolves into an ellipse, stretched along the direction of the unstable manifold and compressed along the direction of the stable one. As coherent structures appear over finite times (a uniform mixing eventually sets in), it is natural to consider finite-time Lyapunov exponents. Let us consider the flow (12.6) over a finite time interval (t0, t1). It defines the map

x(t1) = f_x(x(t0), y(t0)),   y(t1) = f_y(x(t0), y(t0)).

Let us denote the corresponding Jacobian matrix as H. Just as for the definition of the Lyapunov exponents in Chapter 2, the eigenvalues of the symmetric matrix P(t0, t1) = HᵀH tell us how an initial small disk is stretched and compressed along different directions (in the context of fluid flows, the matrix P is often called the Cauchy-Green strain tensor). The finite-time Lyapunov exponents Λ_{1,2} = 0.5 log μ_{1,2}, where μ_1, μ_2 are the eigenvalues of P, quantify the rate of stretching. Because of the incompressibility of the flow, Λ_1 = −Λ_2. Regions of initial points x(t0), y(t0) where the Lyapunov exponent is large are regions where particles will be maximally dispersed (unstable coherent structures). Regions of final points x(t1), y(t1) where the Lyapunov exponent is large are regions with a maximal attraction of points for the evolution within the time interval (t0, t1), thus identifying stable coherent structures. Haller and Yuan (2000) and Haller (2001, 2002) suggest the use of the ridges of the two-dimensional surface Λ_1(x(t1), y(t1)) for the identification of one-dimensional LCSs. In Shadden et al. (2005) it was argued that such ridges are indeed nearly material lines of the Lagrangian flow. This is illustrated in Fig. 12.3 with reference to the double-gyre flow (Shadden et al., 2005), defined by the stream function

ψ = A sin(π(a(t)x² + b(t)x)) sin(πy).   (12.7)

Fig. 12.3: (a) and (b) Initial (t0 = 0) and final (t1 = 10) positions of passive tracers embedded in a double-gyre flow (12.7) with A = 0.1, a(t) = 0.2 sin(2πt/10), b(t) = 1 − 2a(t). (c) and (d) The finite-time Lyapunov exponents as a function of the initial (panel (c)) and the final (panel (d)) coordinates, represented with a grey scale. They define the unstable and stable coherent structures.

Let us recall that the velocity field is defined as v_x = ∂ψ/∂y, v_y = −∂ψ/∂x. The evolution of particles (see panels (a) and (b)) is compared with the unstable and stable coherent structures (panels (c) and (d)). There one can see that the stable LCS is exactly the region where the tracers are concentrated. We conclude this section by mentioning that in some works finite-amplitude exponents (see Chapter 7) have been used instead of the finite-time ones (d'Ovidio et al., 2004; Bettencourt et al., 2013). As discussed in Section 7.1, in the presence of a well-established chaotic dynamics, the two approaches are substantially equivalent. In the absence of a strong mixing, Lagrangian structures are, instead, more reliably identified through the computation of finite-time Lyapunov exponents (Karrasch and Haller, 2013).
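A sketch of the finite-time-Lyapunov-exponent computation for the double-gyre flow of Fig. 12.3 (the RK4 step number and the finite-difference increment are illustrative choices): the flow map is integrated from four slightly displaced initial conditions, H is estimated by central differences, and Λ_1 follows from the largest eigenvalue of HᵀH.

```python
import numpy as np

A = 0.1  # amplitude, as in Fig. 12.3

def velocity(t, x, y):
    """Double-gyre field from (12.7), with v_x = dpsi/dy, v_y = -dpsi/dx."""
    a = 0.2 * np.sin(2 * np.pi * t / 10)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    vx = np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    vy = -np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return vx, vy

def flow_map(x, y, t0=0.0, t1=10.0, n=200):
    """RK4 integration of the Lagrangian trajectories (12.6)."""
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1x, k1y = velocity(t, x, y)
        k2x, k2y = velocity(t + h / 2, x + h / 2 * k1x, y + h / 2 * k1y)
        k3x, k3y = velocity(t + h / 2, x + h / 2 * k2x, y + h / 2 * k2y)
        k4x, k4y = velocity(t + h, x + h * k3x, y + h * k3y)
        x = x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y = y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        t += h
    return x, y

def ftle(x0, y0, d=1e-4):
    """Lambda_1 = 0.5 log mu_1, with H from central finite differences."""
    xp, yp = flow_map(np.array([x0 + d, x0 - d, x0, x0]),
                      np.array([y0, y0, y0 + d, y0 - d]))
    H = np.array([[(xp[0] - xp[1]) / (2 * d), (xp[2] - xp[3]) / (2 * d)],
                  [(yp[0] - yp[1]) / (2 * d), (yp[2] - yp[3]) / (2 * d)]])
    mu = np.linalg.eigvalsh(H.T @ H)   # eigenvalues of the Cauchy-Green tensor
    return 0.5 * np.log(mu[-1]), H

L1, H = ftle(1.1, 0.3)
detH = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0]   # ~1 by incompressibility
```

Evaluating `ftle` over a grid of initial (final) points reproduces the fields shown in panels (c) and (d).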

12.5 Celestial mechanics

The solar system is an excellent example of a system where the assessment of the stability is conceptually relevant. It is also an example of how the distinction between order and chaos may be rather fuzzy.


Superficially, the motion of the planets is a manifestation of ordered evolution. Already before the Copernican revolution took place, however, prolonged observations revealed the presence of tiny fluctuations. When the existence of Newtonian gravitational forces became indisputable, it also became clear that such fluctuations originate from the mutual interactions among the various celestial bodies. Typically, all bodies are treated as point masses (this includes compound bodies, such as the Earth-Moon system, which is assumed to have the mass of the planet plus that of the satellite, and to be located in the barycenter), neglecting tidal friction, solar and planetary oblateness, solar-mass loss, as well as perturbations due to the possible passage of external stars. In practice, the solar system is modelled as a nonlinear Hamiltonian N-body system, governed by the Newtonian law of universal gravitation,

H = Σ_{j=0}^{N−1} p_j²/(2m_j) − G Σ_{j=0}^{N−2} Σ_{k=j+1}^{N−1} m_j m_k / |r_j − r_k| ,

where mj and rj denote the mass and the position of the jth body, respectively, pj is the corresponding momentum, and G is the gravitational constant. Sometimes generalrelativity effects are also included, as a perturbation to the solar potential (see, e.g., Laskar (1989)). The aforementioned Hamiltonian can be expressed as the sum of a Keplerian component, which induces a perfectly integrable dynamics, and an interaction component that is treated perturbatively. Sophisticated techniques have been indeed developed, based on suitable small parameters, such as the relative masses of the celestial bodies, the eccentricity, and the inclination of the orbits. This approach, which proved useful and effective over relatively short astronomical scales, is based on the implicit assumption that the planet motion is quasi-periodic, i.e. that it can be represented as a function f(ω1 t, ω2 t, . . .) that is 2π-periodic in each argument, and where ωi are incommensurate frequencies. With the progress of nonlinear dynamics, however, it has become clear that even in the presence of extremely small perturbations, the phase space of a Hamiltonian system is filled not only by quasi-periodic orbits (the so-called KAM tori) but also by a web of tiny chaotic layers. The reader interested in a detailed discussion of the various techniques can consult Murray and Dermott (1999). The main question is whether the current trajectory of the solar system is actually quasiperiodic (in which case the perturbative series is valid over all times), or weakly chaotic, in which case it may eventually lead to the collision of some planets and their ejection from the solar system. Generally speaking, this can be ascertained by integrating the equations of motion and simultaneously computing the corresponding (finite-time) Lyapunov exponent. 
In practice, the task is very hard, since it is necessary to integrate accurately over a very long time: the gap between the time scale of the fastest motion (a few hours) and the one needed to assess the possible existence of a chaotic dynamics (millions of years) is wider than a factor 10⁹. Because of these difficulties, most of the efforts have been restricted to analysing the motion of the so-called outer solar system (i.e. the planets from Jupiter to Pluto), since the


12.5 Celestial mechanics

Fig. 12.4 [plot of ln Δ versus time (Myr)] Evolution of a small distance in the solar system (in arbitrary units). Circles and diamonds refer to the simulations by Laskar and by Sussman and Wisdom, respectively. Data are freely taken from their respective publications. The slopes, obtained by fitting the last part of the curves, are equal to 0.15 and 0.24 (Myr)⁻¹.

inner planets move faster and require a smaller integration step. It was, nevertheless, later understood that the former subsystem is less chaotic than the whole solar system, so that longer simulations are eventually required. At least two different approaches have been adopted for the integration of the equations. Laskar (1989) made extensive use of averaging techniques to remove the short periods and to derive effective equations for the secular terms, which could be integrated with a much larger time step (about 500 years). To have an idea of how complicated the final model is, notice that for eight planets (Pluto was excluded), the model involves about 150,000 polynomial terms. A few pairs of initial conditions have been considered to determine the growth of the relative distance (i.e. in practice, the finite-amplitude exponent has been determined). The result is reproduced in Fig. 12.4 for one such simulation (see the circles): an initially linear increase, which is typical of a quasi-periodic motion, is followed by an exponential growth with a rate that corresponds to a Lyapunov exponent of about 0.16–0.2 (Myr)⁻¹. This corresponds to a Lyapunov time scale (the inverse of the Lyapunov exponent) of about 5–6 Myr. Sussman and Wisdom (1992) have, instead, integrated the equations of motion of the whole solar system (including Pluto), by simulating the effect of the perturbations as that of equispaced delta kicks. This has allowed for an easier integration of the Keplerian component in between consecutive kicks, while keeping a sufficiently high accuracy in the treatment of the perturbations. In practice, the "time step" could be chosen as long as seven days. Their results, obtained again by following a pair of nearby trajectories, correspond to the diamonds in Fig. 12.4. The resulting Lyapunov time is ≈ 4 Myr, a value relatively close to the estimate obtained by Laskar.


Applications

In both cases, the exponential growth does not set in from the very beginning; this is to be expected, as the initial perturbation is not aligned along the most unstable direction. Notice that the time needed for the perturbation to reach the proper alignment is larger than, but comparable to, the Lyapunov time; this suggests that the second Lyapunov exponent is substantially smaller than the maximal one. Remarkably, the Lyapunov time is much shorter (by a factor 10³) than the age of the solar system. It should be noted, however, that in nearly integrable Hamiltonian systems, chaos is confined to tiny layers, where only the phase variables (i.e. the position of the planets along their "normal" trajectories) are affected by the chaotic dynamics, while the structure of the orbits is essentially unchanged. The problem of assessing the stability of the solar system is related to the occurrence of substantial orbital changes, which occur on longer time scales. Laskar and Gastineau (2009) have performed careful studies, explicitly including the Moon. Since long simulations are unavoidably affected by the exponential amplification of tiny errors (and by the intrinsic approximations of the model), a statistical approach is more appropriate. Laskar and Gastineau (2009) found enhancements in the eccentricity of Mercury's trajectory that are strong enough to trigger collisions with either Venus or the Earth. After simulating 2501 trajectories, they concluded that the probability of a substantial change of Mercury's eccentricity is about 1% over 5 Gyr. Finally, one should notice that although chaos in the whole solar system is very weak, the motion of small bodies in the gravitational fields generated by the Sun and the planets can be more irregular. One example is the chaotic dynamics of Halley's comet, first uncovered by Chirikov and Vecheslavov (1989).
They described the motion of the comet as a Kepler orbit, reducing the problem to either a two-dimensional map (if only the influence of Jupiter is taken into account) or a three-dimensional map (if the influence of Saturn is included, too). These maps have regions of stable and chaotic behavior: the current position of the comet was estimated to lie within a chaotic region, rather close to its boundary. The Lyapunov exponent computed by Chirikov and Vecheslavov (1989) was in the range 0.16 ≲ λ ≲ 0.26, in units of the comet's basic period. Longer calculations revealed a diffusion of the comet's energy due to effectively random kicks from the giant planets. As a result, they estimated an escape of the comet from the solar system after about 10 Myr, although it is not clear whether the map is still valid on such time scales (e.g. the comet might have evaporated earlier).

12.6 Quantum chaos The term “quantum chaos” refers to the behaviour of quantum-mechanical systems whose classical counterpart behaves chaotically. This is a broad research field (Stöckmann, 1999; Haake, 2010) which belongs to the even wider class of wave (electromagnetic and acoustic) chaos, where the problem consists in inferring propagation properties whenever the dynamics of rays is chaotic. Here we comment only briefly on those aspects that are related to the concept of Lyapunov exponents.


The definition of LE is intimately related to that of "trajectory", a notion that is not present in quantum mechanics, which is instead based on the notion of "quantum state", whose evolution is ruled by a unitary operator (the propagator over a time interval t) Û(t). Such an evolution is stable in the following sense: the correlation between a given state |Ψ⟩ and a perturbed one |Ψ′⟩, understood as the overlap (scalar product) of these states, is time-independent:

⟨Ψ′(t)|Ψ(t)⟩ = ⟨Ψ′(0)|Û†(t)Û(t)|Ψ(0)⟩ = ⟨Ψ′(0)|Ψ(0)⟩.    (12.8)

By recalling that Û†(t) = Û(−t), one can interpret this conservation of the correlation as a nearly perfect invertibility of the quantum evolution: if one lets the forward evolution be followed by the backward one, it is found that a quantum system (nearly) returns to its initial state, even if, at the inversion time, it is slightly perturbed. This is in contrast with the evolution of classical systems, where the memory of the initial condition is lost if the evolution time exceeds the characteristic Lyapunov time. The classical non-invertibility is often referred to as the Loschmidt paradox: in a discussion about the "arrow of time" in statistical thermodynamics, Loschmidt argued in 1876 that, because of the invariance under time reversal of the classical dynamics, entropy decreases should also be observable. In view of the modern theory of chaos, it is nowadays understood that although individual trajectories are time reversible, the evolution of a typical ensemble of trajectories (a density) leads to an entropy increase in both time directions. In a simple numerical test, the time inversion of a classical chaotic trajectory does not reproduce the initial state because of the unavoidable round-off errors, which grow exponentially according to the largest Lyapunov exponent (which, in Hamiltonian systems, has the same value in both time directions), so that the backward trajectory is followed only during the short "predictability time" (2.15). On the other hand, as follows from Eq. (12.8), the quantum evolution is resilient against the introduction of perturbations (see, e.g., the comparison between the classical and the quantum Chirikov-Taylor standard map by Shepelyansky (1983)). There is, however, another setup where the existence of classical chaos manifests itself in the quantum world, inducing an exponential decay of the correlations; this happens if, instead of perturbing the initial state, the Hamiltonian itself is perturbed.
Namely, one considers the evolution of the same initial state |Ψ(0)⟩ under two slightly different propagators, Û₀(t) and Û_ε(t). The overlap between the unperturbed and the perturbed state at time t is called fidelity,

F_ε(t) = |⟨Ψ₀(t)|Ψ_ε(t)⟩|² = |⟨Ψ(0)|Û₀†(t)Û_ε(t)|Ψ(0)⟩|².    (12.9)

One can also interpret this relation as a "Loschmidt echo": the fidelity quantifies how exactly the state returns if the forward and the backward evolution operators differ by a small perturbation ∼ ε. As first demonstrated by Jalabert and Pastawski (2001) using a semiclassical approximation, the fidelity (12.9) exhibits a pronounced range of exponential decay in time, with a rate equal (in a certain regime) to the largest classical Lyapunov exponent λ. We do not enter into the details of the theory (see Gorin et al. (2006) for a review), but we describe the results according to Goussev et al. (2012). An exponential decay of the fidelity (12.9) according to the Lyapunov exponent λ occurs for perturbations

244

Applications

ε that are neither too small nor too large. For very small ε, the decay rate is smaller than λ, due to the contribution of uncorrelated pairs of classical trajectories to F_ε(t). For very large ε, the fidelity decays as ∼ exp(−t²). The time range where an exponential decay of the fidelity is observed is also limited. For very short times, the fidelity decays quadratically in time, F ∼ 1 − (ηt/ħ)², due to the average dispersion η of the perturbation propagator with respect to the initial state. For very large times, the decay of the fidelity saturates at the value F ∼ N⁻¹, where N is the size of the effective Hilbert space (the volume of the phase space available to the state during the evolution, expressed in units of Planck cells). In other words, the exponential decay F ∼ exp(−λt) is observed only in the time range ħ/η ≲ t ≲ λ⁻¹ ln N.

Appendix A

Reference models

In this appendix we introduce those models that are used in the monograph to illustrate various properties of the LEs. For the sake of simplicity, they are organised in a few categories of increasing complexity.

A.1 Lumped systems: discrete time The simplest examples of a self-sustained irregular dynamics are given by discrete-time dynamical systems.

The logistic map

The logistic map is a map of an interval,

U(t + 1) = a − U²(t).    (A.1)

This model was originally introduced to describe population dynamics: it expresses the size U(t + 1) of the (t + 1)th generation as a function of the size of the previous generation. It was later acknowledged that the same map approximately describes a wide class of nonlinear phenomena, where all variables but one are very stable (strongly contracting). This map is one-dimensional, i.e. the current state is characterised by a scalar variable U(t). A major limitation of the map is that it is not invertible; in fact, a generic state U(t + 1) may have more than one preimage U(t). Quite often other forms of the logistic map are used, obtained from (A.1) by a linear transformation of the variable:

U(t + 1) = 1 − bU²(t),        U(t + 1) = cU(t)(1 − U(t)).    (A.2)

If there exist exactly two preimages for each point of the interval, the logistic map displays "full chaos". This happens at the so-called Ulam point (a = 2, b = 2, c = 4), where the map is called the Ulam map.
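As an illustration, the Lyapunov exponent of (A.1) can be estimated by averaging ln |F′(U)| = ln |2U| along a trajectory (a minimal Python sketch, with function name and parameters of our own choice; at the Ulam point the exact value is ln 2):

```python
import math
import random

def logistic_lyapunov(a=2.0, steps=200_000, seed=1):
    """Estimate the Lyapunov exponent of U(t+1) = a - U(t)^2
    by averaging ln|F'(U)| = ln|2U| along a single trajectory."""
    random.seed(seed)
    u = random.uniform(-1.0, 1.0)
    for _ in range(1000):            # discard the transient
        u = a - u * u
    acc = 0.0
    for _ in range(steps):
        u = a - u * u
        acc += math.log(abs(2.0 * u))
    return acc / steps

print(logistic_lyapunov())           # close to ln 2 = 0.693... at a = 2
```

The singularity of ln |2U| at U = 0 is integrable with respect to the invariant measure, so the time average converges despite the occasional near-zero multiplier.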

The tent map

U(t + 1) = U(t)/a for 0 ≤ U(t) ≤ a,        U(t + 1) = (1 − U(t))/(1 − a) for a < U(t) ≤ 1,    (A.3)

is a piecewise-linear version of the logistic map (A.1) at the Ulam point. The parameter a controls the fluctuations of the local multiplier (in the symmetric case, a = 1/2, the absolute value of the multiplier does not fluctuate).
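Since (A.3) leaves the Lebesgue measure invariant, the exponent is available in closed form, λ(a) = −a ln a − (1 − a) ln(1 − a), which equals ln 2 at a = 1/2. A hedged numerical check (a = 0.4 is an arbitrary choice; a = 1/2 is avoided because, in binary floating point, repeated exact doubling quickly collapses the orbit):

```python
import math

def tent_lyapunov_exact(a):
    # Lebesgue measure is invariant, so lambda = -a ln a - (1-a) ln(1-a)
    return -a * math.log(a) - (1.0 - a) * math.log(1.0 - a)

def tent_lyapunov_numeric(a, steps=200_000, u=0.1234567):
    """Time average of ln|F'| for the tent map (A.3)."""
    acc = 0.0
    for _ in range(steps):
        if u <= a:
            u = u / a
            acc += math.log(1.0 / a)
        else:
            u = (1.0 - u) / (1.0 - a)
            acc += math.log(1.0 / (1.0 - a))
    return acc / steps

print(tent_lyapunov_exact(0.4), tent_lyapunov_numeric(0.4))
# the two values should agree to within about 1e-2
```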

The Hénon map

U(t + 1) = a − U²(t) + bU(t − 1)    (A.4)

generalises the logistic map to a two-dimensional phase space; here the state at time t is uniquely identified by the pair of variables U(t) and U(t − 1). An important advantage over the logistic map is that it is invertible: one can easily verify that, given the state (U(t + 1), U(t)), there exists one and only one U(t − 1). Sometimes in the literature one finds the equivalent formulation U(t + 1) = 1 − aU²(t) + bU(t − 1) (this latter expression is easily obtained by transforming U(t) → aU(t) in (A.4)).
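In the (U(t), U(t − 1)) coordinates the Jacobian of (A.4) is [[−2U(t), b], [1, 0]], with determinant −b, so the two exponents must satisfy λ₁ + λ₂ = ln |b|. A minimal tangent-space sketch (our own implementation of the QR idea, using the classic values a = 1.4, b = 0.3):

```python
import math

def henon_lyapunov(a=1.4, b=0.3, steps=100_000, trans=1000):
    """Both Lyapunov exponents of U(t+1) = a - U(t)^2 + b U(t-1),
    evolving two tangent vectors with Gram-Schmidt reorthogonalisation."""
    x, y = 0.1, 0.1                  # x = U(t), y = U(t-1)
    v1, v2 = [1.0, 0.0], [0.0, 1.0]
    s1 = s2 = 0.0
    for t in range(steps):
        j11 = -2.0 * x               # Jacobian [[-2x, b], [1, 0]]
        x, y = a - x * x + b * y, x
        v1 = [j11 * v1[0] + b * v1[1], v1[0]]
        v2 = [j11 * v2[0] + b * v2[1], v2[0]]
        n1 = math.hypot(v1[0], v1[1])
        v1 = [v1[0] / n1, v1[1] / n1]
        dot = v2[0] * v1[0] + v2[1] * v1[1]
        v2 = [v2[0] - dot * v1[0], v2[1] - dot * v1[1]]
        n2 = math.hypot(v2[0], v2[1])
        v2 = [v2[0] / n2, v2[1] / n2]
        if t >= trans:
            s1 += math.log(n1)
            s2 += math.log(n2)
    T = steps - trans
    return s1 / T, s2 / T

l1, l2 = henon_lyapunov()
print(l1, l2, l1 + l2)               # the sum equals ln 0.3 = -1.20...
```

Since the incoming pair of tangent vectors is orthonormal, the product of the two Gram-Schmidt norms equals |det J| = b at every step, so the pairing rule holds to machine precision.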

The Lozi map

The Lozi map is basically a piecewise-linear version of the Hénon map,

U(t + 1) = 1 − a|U(t)| + bU(t − 1).    (A.5)

Like the Hénon map, the Lozi map is invertible.

The Chirikov-Taylor standard map

Q(t + 1) = Q(t) + P(t)
P(t + 1) = P(t) + K sin Q(t + 1) = P(t) + K sin(Q(t) + P(t)),    (A.6)

where both variables Q(t) and P(t) are meant to be taken modulo 2π. This two-dimensional map is the prototypical model of a chaotic symplectic dynamics, where volumes are conserved and the forward and backward dynamics are mutually equivalent. K controls the degree of chaos (the dynamics is typically quasiperiodic for K ≪ 1).
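The area-preserving character can be verified directly: the one-step Jacobian [[1, 1], [K cos Q(t + 1), 1 + K cos Q(t + 1)]] has unit determinant, so the two exponents sum to zero. A minimal sketch (K = 5 and the initial condition are arbitrary choices placing the orbit in the chaotic sea):

```python
import math

def standard_map_lyapunov(K=5.0, steps=100_000):
    """Both Lyapunov exponents of the Chirikov-Taylor map (A.6);
    the unit Jacobian determinant forces lambda1 + lambda2 = 0."""
    two_pi = 2.0 * math.pi
    q, p = 1.0, 1.5
    v1, v2 = [1.0, 0.0], [0.0, 1.0]
    s1 = s2 = 0.0
    for _ in range(steps):
        q = (q + p) % two_pi
        kc = K * math.cos(q)             # derivative of the kick at Q(t+1)
        p = (p + K * math.sin(q)) % two_pi
        for v in (v1, v2):               # tangent map: dq' = dq+dp, dp' = dp+kc*dq'
            dq = v[0] + v[1]
            v[0], v[1] = dq, v[1] + kc * dq
        n1 = math.hypot(v1[0], v1[1])
        v1 = [v1[0] / n1, v1[1] / n1]
        dot = v2[0] * v1[0] + v2[1] * v1[1]
        v2 = [v2[0] - dot * v1[0], v2[1] - dot * v1[1]]
        n2 = math.hypot(v2[0], v2[1])
        v2 = [v2[0] / n2, v2[1] / n2]
        s1 += math.log(n1)
        s2 += math.log(n2)
    return s1 / steps, s2 / steps

l1, l2 = standard_map_lyapunov()
print(l1, l1 + l2)                       # l1 > 0, and the sum vanishes
```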

A.2 Lumped systems: continuous time Sets of a few nonlinear ordinary differential equations offer the chance of investigating the continuous-time dynamics.

The FitzHugh-Nagumo model

U̇ = U − U³/3 − V + I
τV̇ = U + a − bV    (A.7)

is used as a reference for the onset of an oscillatory dynamics in excitable systems.


The Rössler oscillator

Here U = (X, Y, Z),

Ẋ = −Y − Z
Ẏ = X + aY    (A.8)
Ż = b + Z(X − c).

This set of equations is called the Rössler oscillator. These equations, derived as a simplified model of chemical kinetics, represent one of the prototypical testbeds for the study of deterministic chaos. The typical parameter values used in this monograph are a = 0.1, b = 0.1 and c = 10.

The Lorenz system

Ẋ = σ(Y − X),
Ẏ = −Y + rX − XZ,    (A.9)
Ż = −bZ + XY,

where U = (X, Y, Z), is one of the most popular models of chaos. The "standard" parameter values used by Lorenz are σ = 10, r = 28 and b = 8/3.
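The largest exponent can be estimated by integrating (A.9) together with its linearisation, renormalising the tangent vector at every step (a sketch with a hand-written RK4 integrator; for the standard parameter values the accepted value is λ₁ ≈ 0.906):

```python
import math

def lorenz_lle(sigma=10.0, r=28.0, b=8.0/3.0, dt=0.01, steps=100_000, trans=5_000):
    """Largest Lyapunov exponent of the Lorenz system (A.9):
    integrate the flow and its tangent dynamics with RK4."""
    def g(s):                        # (x, y, z, dx, dy, dz) -> time derivatives
        x, y, z, dx, dy, dz = s
        return (sigma * (y - x),
                -y + r * x - x * z,
                -b * z + x * y,
                sigma * (dy - dx),
                (r - z) * dx - dy - x * dz,
                y * dx + x * dy - b * dz)

    def rk4(s):
        k1 = g(s)
        k2 = g(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = g(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = g(tuple(si + dt * ki for si, ki in zip(s, k3)))
        return tuple(si + dt * (a1 + 2.0 * a2 + 2.0 * a3 + a4) / 6.0
                     for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

    s = (1.0, 1.0, 1.0, 1.0, 0.0, 0.0)
    acc = 0.0
    for t in range(steps):
        s = rk4(s)
        n = math.sqrt(s[3] ** 2 + s[4] ** 2 + s[5] ** 2)
        s = s[:3] + (s[3] / n, s[4] / n, s[5] / n)   # renormalise tangent vector
        if t >= trans:
            acc += math.log(n)
    return acc / ((steps - trans) * dt)

print(lorenz_lle())                  # about 0.9 for the standard parameters
```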

A.3 Lattice systems: discrete time Chains of maps (coupled map lattices) provide a natural environment to investigate chaotic properties of spatially extended systems, but they have to be complemented by suitable boundary conditions. Unless otherwise specified, periodic boundary conditions are assumed (the same holds for the other classes of space-time systems).

Logistic maps

In the case of logistic maps, the model is written as

Ux(t + 1) = a − [Ux(t) + εDUx(t)]²,    (A.10)

where DUx ≡ Ux−1 − 2Ux + Ux+1 is the discrete Laplacian operator. This is the simplest example of a chaotic spatially extended system. Sometimes in the literature the evolution rule is expressed by referring to the variable Vx(t) = Ux(t) + εDUx(t). The two formulations are perfectly equivalent.
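The largest exponent of (A.10) can be computed as in low-dimensional systems, by evolving a single tangent vector (a minimal sketch; lattice size, coupling and seed are our own choices — note that for ε ≤ 1/2 the coupled variable Ux + εDUx is a convex combination of neighbouring sites, so the dynamics remains confined even at the Ulam point a = 2):

```python
import math
import random

def lattice_lle(a=2.0, eps=0.1, L=32, steps=20_000, trans=1_000):
    """Largest Lyapunov exponent of the coupled logistic lattice (A.10)
    with periodic boundary conditions (negative indices wrap around)."""
    random.seed(2)
    u = [random.uniform(-1.0, 1.0) for _ in range(L)]
    w = [random.gauss(0.0, 1.0) for _ in range(L)]    # tangent vector
    acc = 0.0
    for t in range(steps):
        cu = [u[x] + eps * (u[x - 1] - 2.0 * u[x] + u[(x + 1) % L]) for x in range(L)]
        cw = [w[x] + eps * (w[x - 1] - 2.0 * w[x] + w[(x + 1) % L]) for x in range(L)]
        u = [a - c * c for c in cu]
        w = [-2.0 * c * d for c, d in zip(cu, cw)]    # linearised map
        n = math.sqrt(sum(x * x for x in w))
        w = [x / n for x in w]
        if t >= trans:
            acc += math.log(n)
    return acc / (steps - trans)

print(lattice_lle())                 # positive: space-time chaos
```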


Hénon maps

Coupled Hénon maps follow the evolution equation

Ux(t + 1) = a − [Ux(t) + εDUx(t)]² + bUx(t − 1).    (A.11)

This is a minimal invertible model for space-time chaos (Politi and Torcini, 1992). It has a symplectic-like structure: Lyapunov exponents come in pairs λi, λL−i+1 (L is the system size) such that λi + λL−i+1 = ln b (see also Section 2.5.7).

Symplectic maps

A model of coupled symplectic maps can be defined as

Zx(t + 1) = Zx(t) + μ[sin(Ux+1(t) − Ux(t)) − sin(Ux(t) − Ux−1(t))]
Ux(t + 1) = Ux(t) + Zx(t + 1),    (A.12)

where both variables are meant to be taken modulo 2π. This is a minimal model often used as a testbed for symplectic dynamics in spatially extended systems in the presence of a conservation law (the sum of the Z variables).

Stable-chaos maps

The following chain of maps provides a paradigmatic stable-chaos model,

Ux(t + 1) = (1 − 2ε)F(Ux(t)) + ε[F(Ux−1(t)) + F(Ux+1(t))],    (A.13)

where F(U) is the piecewise-linear function

F(U) = U/a                      for 0 ≤ U ≤ a,
F(U) = 1 − (1 − b)(U − a)/η     for a < U ≤ a + η,
F(U) = b + c(U − a − η)         for a + η < U ≤ 1.

[...]

Algorithm 1    Computation of M Lyapunov exponents (fragment)

    ...
    if t > Ttrans then
        for j = 1 to M do
            Γj ← Γj + ln |αj| . . . . . . . . [running average]
        end for
    end if
end for . . . . . . . . [end of main loop]
for j = 1 to M do
    λj ← Γj/Tav
end for

Algorithm 2    QR decomposition: Gram-Schmidt orthogonalisation

α1 ← ‖u1‖
u1 ← u1/α1
R11 ← α1
for j = 2 to M do
    for i = j to M do
        Rj−1,i ← uj−1 · ui
        ui ← ui − Rj−1,i uj−1
    end for
    αj ← ‖uj‖
    uj ← uj/αj
    Rjj ← αj
end for
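In a high-level language the same scheme takes only a few lines; a Python sketch (plain lists, no external libraries; the function name is our own), which orthonormalises the vectors in place and returns the norms αj together with the triangular coefficients R:

```python
import math

def gram_schmidt_qr(u):
    """In-place QR of the M vectors u[0..M-1] (each of length N),
    following Algorithm 2: returns the norms alpha and the
    upper-triangular matrix R (with alpha on the diagonal)."""
    M = len(u)
    R = [[0.0] * M for _ in range(M)]
    alpha = [0.0] * M
    alpha[0] = math.sqrt(sum(x * x for x in u[0]))
    u[0] = [x / alpha[0] for x in u[0]]
    R[0][0] = alpha[0]
    for j in range(1, M):
        for i in range(j, M):
            R[j - 1][i] = sum(a * b for a, b in zip(u[j - 1], u[i]))
            u[i] = [a - R[j - 1][i] * b for a, b in zip(u[i], u[j - 1])]
        alpha[j] = math.sqrt(sum(x * x for x in u[j]))
        u[j] = [x / alpha[j] for x in u[j]]
        R[j][j] = alpha[j]
    return alpha, R

vectors = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
alpha, R = gram_schmidt_qr(vectors)
print(alpha)                         # -> [1.0, 1.0, 1.0]
```

After the call, the rows of the input list are orthonormal, and the original i-th vector is recovered as the combination Σ_{j≤i} R[j][i] u[j].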


Appendix B Pseudocodes

Algorithm 3    Computation of M covariant Lyapunov vectors

Initialisation
U(0) ← initial conditions for the map
for j = 1 to M do
    uj(0) ← initial conditions for the forward linearised variables
    Γj ← 0
end for
for j = 1 to M do
    vj ← initial conditions for the backward linearised variables
    s ← ‖vj‖
    vj ← vj/s
end for

Forward transient
for t = 1 to Ttrans do
    for j = 1 to M do
        uj(t) = (∂F/∂U) · uj(t − 1) . . . . . . . . [linearised map]
    end for
    U(t) = F(U(t − 1)) . . . . . . . . [nonlinear map]
    call QR({uj, αj}, R(t)) . . . . . . . . [QR decomposition is performed]
end for

Forward loop
for t = 1 to Tav do
    {wj(t)} ← {uj} . . . . . . . . [the M vectors {uj} are saved for later use]
    for j = 1 to M do
        uj(t) = (∂F/∂U) · uj(t − 1) . . . . . . . . [linearised map]
    end for
    U(t) = F(U(t − 1)) . . . . . . . . [nonlinear map]
    call QR({uj, αj}, R(t)) . . . . . . . . [QR decomposition is performed]
end for

Backward loop
for t = Tav to 1, −1 do
    for j = 1 to M do
        for k = j to 2, −1 do
            vjk ← vjk/Rkk(t) . . . . . . . . [backward iteration of the vector vj]
            for i = 1 to k − 1 do
                vji ← vji − vjk Rik(t)
            end for
        end for
        vj1 ← vj1/R11(t)
        s ← ‖vj‖
        vj ← vj/s
    end for
end for
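The core of the backward loop of Algorithm 3 is a triangular back-substitution: the covariant vector, expressed by its coefficients in the basis of the forward vectors, is pulled back through the stored matrix R(t) (i.e. one solves R(t)x = v) and then renormalised. A sketch of this single step (the function name is our own):

```python
import math

def backward_clv_step(c, R):
    """One backward step: solve R x = c for the upper-triangular R
    (column-oriented back-substitution, mirroring the pseudocode of
    Algorithm 3), then renormalise the coefficient vector."""
    x = c[:]
    M = len(c)
    for k in range(M - 1, -1, -1):
        x[k] /= R[k][k]
        for i in range(k):
            x[i] -= x[k] * R[i][k]
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

print(backward_clv_step([5.0, 6.0], [[2.0, 1.0], [0.0, 3.0]]))  # -> [0.6, 0.8]
```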


Algorithm 4    QR decomposition: Householder reflections

for j = 1 to M do
    s ← ‖uj‖j
    αj ← −s sgn(ujj)
    X ← √(s(s + |ujj|))
    ujj ← ujj − αj
    ⌊uj⌋j ← ⌊uj⌋j/X
    for i = j + 1 to M do
        σ ← ⌊uj · ui⌋j
        ⌊ui⌋j ← ⌊ui⌋j − σ⌊uj⌋j
    end for
end for
for i = 1 to M do
    yi ← ei
    for j = M to 1, −1 do
        σ ← ⌊uj · yi⌋j
        ⌊yi⌋j ← ⌊yi⌋j − σ⌊uj⌋j
    end for
end for
for i = 1 to M do
    ui ← yi
end for

(here ⌊v⌋j denotes the restriction of the vector v to the components from j to M, while ‖·‖j and ⌊u · v⌋j are the corresponding norm and scalar product).

Appendix C

Random matrices: some general formulas

In this appendix we prove some results for the products of random matrices that are discussed in Chapter 8. More specifically, the following two sections are devoted to the computation of the Lyapunov spectrum in a discrete-time and continuous-time case, respectively.

C.1 Gaussian matrices: discrete time

An ensemble of N × N matrices A characterised by the symmetry properties mentioned in Section 8.1.2 satisfies Eq. (8.14). Such an equation can be rewritten as

Sk = ⟨ln ‖Ae1 ∧ . . . ∧ Aek‖⟩ = ⟨ln ‖Ae1 ∧ . . . ∧ Aek−1‖ ‖Pk Aek‖⟩,

where Pk is the projector onto the (N − k + 1)-dimensional subspace orthogonal to the subspace spanned by Ae1, Ae2, . . ., Aek−1. In the Gaussian matrix ensemble, the vectors Ae1, Ae2, . . ., Aek−1 are independent of Aek. Moreover, Aek is a random vector that has the same distribution for any rotation of A, so that we can replace Pk with the projector onto the last (N − k + 1) vectors of a Euclidean basis, obtaining

λk = (1/2) ⟨ ln Σ_{i=1}^{N−k+1} A²_{ik} ⟩.

The average can be performed by noticing that Wn := A²_{1k} + A²_{2k} + · · · + A²_{nk} (the index k in A_{ik} is irrelevant) has a χ² distribution. The use of standard relations for the χ² distribution (Abramowitz and Stegun, 1964) allows the determination of an explicit expression for the moments of Wn (Cohen and Newman, 1984),

⟨Wn^q⟩ = [σ^{2q}/(2^{n/2} Γ(n/2))] ∫₀^{+∞} dw w^{n/2+q−1} e^{−w/2} = (2σ²)^q Γ(n/2 + q)/Γ(n/2) = exp[(ln 2σ²)q + φ(n/2 + q) − φ(n/2)],    (C.1)

where φ(q) = ln Γ(q) and Γ(q) is the standard gamma function. An expression for ⟨ln Wn⟩ is then obtained by noticing that ⟨Wn^q⟩ = ⟨exp(q ln Wn)⟩ and expanding this equation around q = 0,

⟨ln Wn⟩ = (d/dq) ⟨Wn^q⟩ |_{q=0} = ln 2σ² + ψ(n/2),    (C.2)

where ψ(y) = Γ′(y)/Γ(y) is the digamma function. As a result,

λk = ln σ + (1/2)[ln 2 + ψ((N − k + 1)/2)].
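The prediction is easy to verify by a Monte Carlo computation: multiply many independent Gaussian matrices, reorthogonalising at every step, and compare with the formula (a sketch for σ = 1, N = 2; the digamma function is obtained numerically from math.lgamma):

```python
import math
import random

def digamma(x, h=1e-5):
    # central difference of ln Gamma, accurate to O(h^2)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def gaussian_spectrum(N=2, m=100_000, seed=3):
    """Lyapunov spectrum of products of N x N matrices with i.i.d.
    unit-variance Gaussian entries, via Gram-Schmidt at every step."""
    random.seed(seed)
    v = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
    s = [0.0] * N
    for _ in range(m):
        A = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]
        v = [[sum(A[r][c] * vec[c] for c in range(N)) for r in range(N)]
             for vec in v]
        for k in range(N):
            for j in range(k):
                d = sum(a * b for a, b in zip(v[k], v[j]))
                v[k] = [a - d * b for a, b in zip(v[k], v[j])]
            n = math.sqrt(sum(x * x for x in v[k]))
            v[k] = [x / n for x in v[k]]
            s[k] += math.log(n)
    return [x / m for x in s]

N = 2
numeric = gaussian_spectrum(N)
theory = [0.5 * (math.log(2.0) + digamma((N - k + 1) / 2.0))
          for k in range(1, N + 1)]
print(numeric, theory)               # the two lists should agree to ~1e-2
```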

C.2 Gaussian matrices: continuous time

Given the model defined in Section 8.2.4, one can express the volume defined in Eq. (8.39) as the determinant of a k × k matrix, obtained by projecting onto the first k vectors of the Euclidean basis,

Sk = lim_{m→∞} (m/2) ln det( Pk e^{Cᵀ/√m} e^{C/√m} Pk ).

By now exploiting the smallness of 1/√m, one can expand the exponentials in the aforementioned equation and make use of the standard relation

ln det(I + εA) = Tr ln(I + εA) = Σ_{i=1}^{∞} (−1)^{i+1} [Tr(Aⁱ)/i] εⁱ.

As a result, it is found that

Sk = lim_{m→∞} (m/2) Tr{ Pk [ (C + Cᵀ)/√m + (C + Cᵀ)²/(2m) − (C + Cᵀ)Pk(C + Cᵀ)/(2m) ] Pk } + o(1/m),    (C.3)

where we have used that Tr(CᵀC) = Tr(CCᵀ). Finally, by recalling that the entries of the matrix C have zero average, one obtains

Sk = (1/2) Σ_{i=1}^{k} Σ_{j=k+1}^{N} ⟨(C_{ji} + C_{ij})(C_{ij} + C_{ji})⟩ = σ² k(N − k),

which implies Eq. (8.40).


Appendix D

Symbolic encoding

In this appendix we briefly introduce the basic elements of a powerful approach that helps to characterise chaotic dynamical systems and, eventually, to obtain accurate estimates of their Lyapunov exponents. The idea is to partition the phase space into a collection P of disjoint elements {Bi} (the atoms) and thereby encode a generic trajectory {Un} as a sequence of symbols {sn}, where sn = Bi if Un ∈ Bi. The procedure is faithful only if the partition P is generating, i.e. if an infinitely long trajectory is encoded by one and only one sequence of symbols.

In maps of the interval, a generating partition can be constructed by splitting the interval itself into subsets where the map is monotonic (Collet and Eckmann, 1980). For instance, the logistic map (A.2), U(t + 1) = cU(t)(1 − U(t)), has a maximum in U = 1/2, and its dynamics can thereby be encoded as a sequence of binary symbols, selected depending on whether the phase point belongs to the interval [0, 1/2) or [1/2, 1].

In two-dimensional spaces, the problem of constructing a generating partition is much harder. No rigorous results are, in fact, available, but there is compelling evidence that a method proposed by Grassberger and Kantz (1985) for the Hénon map works for generic dissipative models. It makes use of the homoclinic tangencies (i.e. the points where the stable and unstable manifolds are mutually tangent). In practice, the two-dimensional plane is split into two parts by the polygonal line obtained by connecting the so-called primary tangencies (approximately, those characterised by a minimal value of the sum of the curvatures of the two manifolds). As shown by Giovannini and Politi (1992) and Hansen (1992), the final result is not unique: a given dynamical system can be characterised by equivalent but different symbolic descriptions.
The approach can be extended to symplectic systems by complementing the use of homoclinic tangencies with that of suitable symmetry lines, which allow the partitioning of the ordered regions where no tangencies are present (Christiansen and Politi, 1997). By definition, any trajectory of a map F(U) is encoded as a suitable symbolic sequence, but the converse is not generally true; typically, there exist infinitely many sequences that cannot be generated by a given mapping F. This information is implicitly contained in the value of the topological entropy.

Markov partitions represent an important subclass of generating partitions. A partition P is said to be Markov if each atom Bi is mapped exactly onto the union of one or more atoms. As a result, whenever a finite Markov partition is available, the topological entropy can be exactly determined (see Chapter 5). In general, however, no finite Markov partition is available. In one-dimensional spaces, useful information can be extracted with the help of the kneading theory (Collet and Eckmann, 1980; Milnor and Thurston, 1988). In two-dimensional spaces, the leading tool is the so-called pruning front (Cvitanović et al., 1988).
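As an illustration, the binary encoding of the Ulam map (c = 4) can be generated directly; since its symbolic dynamics is a full shift, the number of distinct words of length n observed along a typical orbit approaches 2ⁿ, consistent with a topological entropy ln 2 (a sketch; the orbit length and the initial condition are arbitrary choices):

```python
def encode(c=4.0, steps=5_000, u=0.1234):
    """Binary symbolic sequence of U(t+1) = c U(t)(1 - U(t)) with the
    generating partition {[0, 1/2), [1/2, 1]}."""
    sym = []
    for _ in range(steps):
        sym.append(0 if u < 0.5 else 1)
        u = c * u * (1.0 - u)
    return sym

def word_count(sym, n):
    # number of distinct length-n words observed along the orbit
    return len({tuple(sym[i:i + n]) for i in range(len(sym) - n + 1)})

sym = encode()
print([word_count(sym, n) for n in (2, 4, 6)])   # all 2**n words appear at c = 4
```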

Bibliography

Abrahams, E. (ed.). 2010. 50 years of Anderson localization. World Scientific Publishing, Hackensack, NJ.
Abramowitz, M., and Stegun, I. A. 1964. Handbook of mathematical functions with formulas, graphs, and mathematical tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington, DC.
Acebrón, J. A., Bonilla, L. L., Pérez Vicente, C. J., Ritort, F., and Spigler, R. 2005. The Kuramoto model: a simple paradigm for synchronization phenomena. Rev. Mod. Phys. 77: 137–185.
Ahlers, V., Zillmer, R., and Pikovsky, A. 2001. Lyapunov exponents in disordered chaotic systems: avoided crossing and level statistics. Phys. Rev. E 63: 036213.
Ames, W. F. 1992. Numerical methods for partial differential equations. 3rd edn. Academic Press, Boston, MA.
Anderson, P. W. 1958. Absence of diffusion in certain random lattices. Phys. Rev. 109: 1492–1505.
Arecchi, F., Giacomelli, G., Lapucci, A., and Meucci, R. 1992. Two-dimensional representation of a delayed dynamical system. Phys. Rev. A 45: R4225–R4228.
Arnold, L. 1998. Random dynamical systems. Springer-Verlag, Berlin.
Arnold, L., and Imkeller, P. 1995. Furstenberg-Khas'minskiĭ formulas for Lyapunov exponents via anticipative calculus. Stochastics Stochastics Rep. 54: 127–168.
Arnold, L., Papanicolaou, G., and Wihstutz, V. 1986. Asymptotic analysis of the Lyapunov exponent and rotation number of the random oscillator and applications. SIAM J. Appl. Math. 46: 427–450.
Artuso, R., Casati, G., and Guarneri, I. 1997. Numerical study on ergodic properties of triangular billiards. Phys. Rev. E 55: 6384–6390.
Artuso, R., Guarneri, I., and Rebuzzini, L. 2000. Spectral properties and anomalous transport in a polygonal billiard. Chaos 10: 189–194.
Ashwin, P., and Breakspear, M. 2001. Anisotropic properties of riddled basins. Phys. Lett. A 280: 139–145.
Ashwin, P., Buescu, J., and Stewart, I. 1994. Bubbling of attractors and synchronisation of chaotic oscillators. Phys. Lett. A 193: 126–139.
Aston, Ph. J., and Dellnitz, M. 1995. Symmetry breaking bifurcations of chaotic attractors. Int. J. Bifurcat. Chaos 5: 1643–1676.
Aston, Ph. J., and Dellnitz, M. 1999. The computation of Lyapunov exponents via spatial integration with application to blowout bifurcations. Comput. Methods Appl. Mech. Engrg. 170: 223–237.


Aston, Ph. J., and Laing, C. R. 2000. Symmetry and chaos in the complex Ginzburg-Landau equation. II. Translational symmetries. Physica D 135: 79–97.
Aston, Ph. J., and Melbourne, I. 2006. Lyapunov exponents of symmetric attractors. Nonlinearity 19: 2455–2466.
Aurell, E., Boffetta, G., Crisanti, A., Paladin, G., and Vulpiani, A. 1996. Growth of noninfinitesimal perturbations in turbulence. Phys. Rev. Lett. 77: 1262–1265.
Aurell, E., Boffetta, G., Crisanti, A., Paladin, G., and Vulpiani, A. 1997. Predictability in the large: an extension of the concept of Lyapunov exponent. J. Phys. A – Math. Gen. 30: 1–26.
Badii, R., and Politi, A. 1997. Complexity: hierarchical structures and scaling in physics. Cambridge University Press, Cambridge.
Bagnoli, F., Rechtman, R., and Ruffo, S. 1992. Damage spreading and Lyapunov exponents in cellular automata. Phys. Lett. A 172: 34–38.
Barabási, A.-L., and Stanley, H. E. 1995. Fractal concepts in surface growth. Cambridge University Press, Cambridge.
Baranyai, A., Evans, D. J., and Cohen, E. G. D. 1993. Field-dependent conductivity and diffusion in a two-dimensional Lorentz gas. J. Stat. Phys. 70: 1085–1098.
Barreira, L., and Pesin, Y. 2007. Nonuniform hyperbolicity: dynamics of systems with nonzero Lyapunov exponents. Encyclopedia of Mathematics and its Applications, vol. 115. Cambridge University Press, Cambridge.
Baxendale, P. H., and Goukasian, L. 2002. Lyapunov exponents for small random perturbations of Hamiltonian systems. Ann. Probab. 30: 101–134.
Beck, C., and Schlögl, F. 1995. Thermodynamics of chaotic systems: an introduction. Cambridge University Press, Cambridge.
Beenakker, C. W. J. 1997. Random-matrix theory of quantum transport. Rev. Mod. Phys. 69: 731–808.
Benettin, G., Galgani, L., Giorgilli, A., and Strelcyn, J.-M. 1980a. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part I: theory. Meccanica 15: 9–20.
Benettin, G., Galgani, L., Giorgilli, A., and Strelcyn, J.-M. 1980b. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part II: numerical application. Meccanica 15: 21–30.
Benzi, R., Paladin, G., Parisi, G., and Vulpiani, A. 1985. Characterization of intermittency in chaotic systems. J. Phys. A – Math. Gen. 18: 2157.
Berlekamp, E. R., Conway, J. H., and Guy, R. K. 1982. Winning ways for your mathematical plays. Vol. 2: Games in particular. Academic Press, London and New York.
Bettencourt, J. H., López, C., and Hernández-García, E. 2013. Characterization of coherent structures in three-dimensional turbulent flows using the finite-size Lyapunov exponent. J. Phys. A – Math. Theor. 46: 254022.
Beyn, W.-J., and Lust, A. 2009. A hybrid method for computing Lyapunov exponents. Numer. Math. 113: 357–375.
Biktashev, V. N. 2005. Causodynamics of autowave patterns. Phys. Rev. Lett. 95: 084501.


Bliokh, K. Y., Bliokh, Yu. P., Freilikher, V., Savel'ev, S., and Nori, F. 2008. Colloquium: Unusual resonators: plasmonics, metamaterials, and random media. Rev. Mod. Phys. 80: 1201–1213.
Bochi, J., and Viana, M. 2005. The Lyapunov exponents of generic volume-preserving and symplectic maps. Ann. Math. 161: 1423–1485.
Borland, R. E. 1963. The nature of the electronic states in disordered one-dimensional systems. P. Roy. Soc. Lond. A Mat. 274: 529–545.
Bougerol, Ph., and Lacroix, J. 1985. Products of random matrices with applications to Schrödinger operators. Progress in Probability and Statistics, vol. 8. Birkhäuser, Boston, MA.
Bourgain, J. 2005. Green's function estimates for lattice Schrödinger operators and applications. Princeton University Press, Princeton, NJ.
Bridges, T. J., and Reich, S. 2001. Computing Lyapunov exponents on a Stiefel manifold. Physica D 156: 219–238.
Broomhead, D. S., Jones, R., and King, G. P. 1987. Topological dimension and local coordinates from time series data. J. Phys. A – Math. Gen. 20: L563–L569.
Brown, R., Bryant, P., and Abarbanel, H. D. I. 1991. Computing the Lyapunov spectrum of a dynamical system from an observed time series. Phys. Rev. A 43: 2787–2806.
Bryant, P., Brown, R., and Abarbanel, H. D. I. 1990. Lyapunov exponents from observed time series. Phys. Rev. Lett. 65: 1523–1526.
Butcher, J. C. 2008. Numerical methods for ordinary differential equations. 2nd edn. John Wiley & Sons, Chichester.
Campanino, M., and Klein, A. 1990. Anomalies in the one-dimensional Anderson model at weak disorder. Comm. Math. Phys. 130: 441–456.
Carroll, T. L., and Pecora, L. M. 1991. Synchronizing chaotic circuits. IEEE Trans. Circ. and Systems 38: 453–456.
Casetti, L., Livi, R., and Pettini, M. 1995. Gaussian model for chaotic instability of Hamiltonian flows. Phys. Rev. Lett. 74: 375–378.
Cecconi, F., and Politi, A. 1999. An analytic estimate of the maximum Lyapunov exponent in products of tridiagonal random matrices. J. Phys. A – Math. Gen. 32: 7603–7621.
Cencini, M., and Torcini, A. 2001. Linear and nonlinear information flow in spatially extended systems. Phys. Rev. E 63: 056201.
Cencini, M., and Vulpiani, A. 2013. Finite size Lyapunov exponent: review on applications. J. Phys. A – Math. Theor. 46: 254019.
Cencini, M., Falcioni, M., Vergni, D., and Vulpiani, A. 1999. Macroscopic chaos in globally coupled maps. Physica D 130: 58–72.
Cessac, B., Doyon, B., Quoy, M., and Samuelides, M. 1994. Mean-field equations, bifurcation map and route to chaos in discrete time neural networks. Physica D 74: 24–44.
Chernov, N., and Markarian, R. 2006. Chaotic billiards. American Mathematical Society, Providence, RI.
Chernov, N. I., Eyink, G. L., Lebowitz, J. L., and Sinai, Ya. G. 1993. Derivation of Ohm's law in a deterministic mechanical model. Phys. Rev. Lett. 70: 2209–2212.


Bibliography

Chirikov, B. V., and Vecheslavov, V. V. 1989. Chaotic dynamics of comet Halley. Astronomy & Astrophysics 221: 146–154.
Christiansen, F., and Politi, A. 1997. Guidelines for the construction of a generating partition in the standard map. Physica D 109: 32–41.
Christiansen, F., and Rugh, H. H. 1997. Computing Lyapunov spectra with continuous Gram-Schmidt orthonormalization. Nonlinearity 10: 1063–1072.
Cipriani, P., and Politi, A. 2004. An open-system approach for the characterization of spatio-temporal chaos. J. Stat. Phys. 114: 205–228.
Cohen, J. E., and Newman, Ch. M. 1984. The stability of large random matrices and their products. Ann. Probab. 12: 283–310.
Cole, J. D. 1951. On a quasi-linear parabolic equation occurring in aerodynamics. Quart. Appl. Math. 9: 225–236.
Collet, P., and Eckmann, J.-P. 1980. Iterated maps on the interval as dynamical systems. Birkhäuser, Boston, MA.
Collet, P., and Eckmann, J.-P. 1999. Extensive properties of the complex Ginzburg-Landau equation. Comm. Math. Phys. 200: 699–722.
Cook, J., and Derrida, B. 1990. Lyapunov exponents of large, sparse random matrices and the problem of directed polymers with complex random weights. J. Stat. Phys. 61: 961–986.
Cooper, F., Khare, A., and Sukhatme, U. 1995. Supersymmetry and quantum mechanics. Phys. Rep. 251: 267–385.
Corazza, M., Kalnay, E., Patil, D. J., Yang, S. C., Morss, R., Cai, M., Szunyogh, I., Hunt, B. R., and Yorke, J. A. 2003. Use of the breeding technique to estimate the structure of the analysis “errors of the day”. Nonl. Processes in Geophysics 10: 233–243.
Crauel, H., Debussche, A., and Flandoli, F. 1997. Random attractors. J. Dynam. Differential Equations 9: 307–341.
Crisanti, A., Paladin, G., and Vulpiani, A. 1993. Products of random matrices in statistical physics. Springer-Verlag, Berlin.
Crutchfield, J. P., and Kaneko, K. 1988. Are attractors relevant to turbulence? Phys. Rev. Lett. 60: 2715–2718.
Curato, G., and Politi, A. 2013. Onset of chaotic dynamics in neural networks. Phys. Rev. E 88: 042908.
Cvitanović, P., Gunaratne, G. H., and Procaccia, I. 1988. Topological and metric properties of Hénon-type strange attractors. Phys. Rev. A 38: 1503–1520.
Cvitanović, P., Artuso, R., Mainieri, R., Tanner, G., and Vattay, G. 2013. Chaos: classical and quantum. Niels Bohr Institute, Copenhagen. www.chaosbook.org.
Dahlqvist, P. 1997. The Lyapunov exponent in the Sinai billiard in the small scatterer limit. Nonlinearity 10: 159–173.
Daido, H. 1984. Coupling sensitivity of chaos: a new universal property of chaotic dynamical systems. Progr. Theoret. Phys. Suppl. 79: 75–95.
Daido, H. 1985. Coupling sensitivity of chaos and the Lyapunov dimension: the case of coupled two-dimensional maps. Phys. Lett. A 110: 5–9.
Daido, H. 1987. Coupling sensitivity of chaos: theory and further numerical evidence. Phys. Lett. A 121: 60–66.


D’Alessandro, G., Grassberger, P., Isola, S., and Politi, A. 1990. On the topology of the Hénon map. J. Phys. A – Math. Gen. 23: 5285–5294.
Darrigol, O. 2002. Stability and instability in nineteenth-century fluid mechanics. Rev. Histoire Math. 8: 5–65.
Deissler, R. J., and Kaneko, K. 1987. Velocity-dependent Lyapunov exponents as a measure of chaos for open-flow systems. Phys. Lett. A 119: 397–402.
Delfini, L., Denisov, S., Lepri, S., Livi, R., Mohanty, P. K., and Politi, A. 2007. Energy diffusion in hard-point systems. Eur. Phys. J – Spec. Top. 146: 21–35.
Dellago, Ch., and Posch, H. A. 1995. Lyapunov exponents of systems with elastic hard collisions. Phys. Rev. E 52: 2401–2406.
Dellago, Ch., and Posch, H. A. 1997. Lyapunov spectrum and the conjugate pairing rule for a thermostatted random Lorentz gas: numerical simulations. Phys. Rev. Lett. 78: 211–214.
Dellnitz, M., and Hohmann, A. 1997. A subdivision algorithm for the computation of unstable manifolds and global attractors. Numer. Math. 75: 293–317.
Derrida, B., and Gardner, E. 1984. Lyapounov exponent of the one-dimensional Anderson model: weak disorder expansions. J. Physique 45: 1283–1295.
Derrida, B., and Hilhorst, H. J. 1983. Singular behaviour of certain infinite products of random 2 × 2 matrices. J. Phys. A – Math. Gen. 16: 2641–2654.
Derrida, B., and Spohn, H. 1988. Polymers on disordered trees, spin glasses, and traveling waves. J. Stat. Phys. 51: 817–840.
Derrida, B., Mecheri, K., and Pichard, J. L. 1987. Lyapounov exponents of products of random matrices: weak disorder expansion. Application to localisation. J. Physique 48: 733.
Deutsch, J. M., and Paladin, G. 1989. Product of random matrices in a microcanonical ensemble. Phys. Rev. Lett. 62: 695–699.
di Bernardo, M., Budd, C. J., Champneys, A. R., and Kowalczyk, P. 2008. Piecewise-smooth dynamical systems: theory and applications. Springer-Verlag, London.
Dieci, L., and Van Vleck, E. S. 1995. Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl. Numer. Math. 17: 275–291.
Dieci, L., and Van Vleck, E. S. 2005. On the error in computing Lyapunov exponents by QR methods. Numer. Math. 101: 619–642.
Dieci, L., Russell, R. D., and Van Vleck, E. S. 1997. On the computation of Lyapunov exponents for continuous dynamical systems. SIAM J. Numer. Anal. 34: 402–423.
Dorfman, J. R., and van Beijeren, H. 1997. Dynamical systems theory and transport coefficients: a survey with applications to Lorentz gases. Physica A 240: 12–42.
d’Ovidio, F., Fernández, V., Hernández-García, E., and López, C. 2004. Mixing structures in the Mediterranean Sea from finite-size Lyapunov exponents. Geophys. Res. Lett. 31: L17203.
Dressler, U. 1988. Symmetry property of the Lyapunov spectra of a class of dissipative dynamical systems with viscous damping. Phys. Rev. A 38: 2103–2109.
Dressler, U., and Farmer, J. D. 1992. Generalized Lyapunov exponents corresponding to higher derivatives. Physica D 59: 365–377.


Eckhardt, B., and Yao, D. 1993. Local Lyapunov exponents in chaotic systems. Physica D 65: 100–108.
Eckmann, J.-P., and Ruelle, D. 1985. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57: 617–656.
Eckmann, J.-P., and Wayne, C. E. 1988. Liapunov spectra for infinite chains of nonlinear oscillators. J. Stat. Phys. 50: 853–878.
Eckmann, J.-P., Forster, Ch., Posch, H. A., and Zabey, E. 2005. Lyapunov modes in hard-disk systems. J. Stat. Phys. 118: 813–847.
Ershov, S. V., and Potapov, A. B. 1998. On the concept of stationary Lyapunov basis. Physica D 118: 167–198.
Evans, D. J., and Morriss, G. P. 2008. Statistical mechanics of nonequilibrium liquids. 2nd edn. Cambridge University Press, Cambridge.
Evans, D. J., Cohen, E. G. D., and Morriss, G. P. 1990. Viscosity of a simple fluid from its maximal Lyapunov exponents. Phys. Rev. A 42: 5990–5997.
Evers, F., and Mirlin, A. D. 2008. Anderson transitions. Rev. Mod. Phys. 80: 1355–1417.
Family, F., and Vicsek, T. 1985. Scaling of the active zone in the Eden process on percolation networks and the ballistic deposition model. J. Phys. A – Math. Gen. 18: L75.
Farmer, J. D. 1981. Spectral broadening of period-doubling bifurcation sequences. Phys. Rev. Lett. 47: 179.
Farmer, J. D. 1982. Chaotic attractors of an infinite-dimensional dynamical system. Physica D 4: 366–393.
Farmer, J. D., and Sidorowich, J. J. 1987. Predicting chaotic time series. Phys. Rev. Lett. 59: 845–848.
Feigel’man, M. V., and Tsvelik, A. M. 1982. Hidden supersymmetry of stochastic dissipative dynamics. Sov. Phys. JETP 56: 823.
Feudel, U., Kuznetsov, S., and Pikovsky, A. 2006. Strange nonchaotic attractors: dynamics between order and chaos in quasiperiodically forced systems. World Scientific, Hackensack, NJ.
Feynman, R. P., and Hibbs, A. R. 2010. Quantum mechanics and path integrals. Dover, Mineola, NY.
Fisher, R. A. 1937. The wave of advance of advantageous genes. Ann. Eugenics 7: 353–369.
Francisco, G., and Matsas, G. E. A. 1988. Qualitative and numerical study of Bianchi IX models. General Relativity and Gravitation 20: 1047–1054.
Friedman, B., Oono, Y., and Kubo, I. 1984. Universal behavior of Sinai billiard systems in the small-scatterer limit. Phys. Rev. Lett. 52: 709–712.
Froeschlé, C., Lega, E., and Gonczi, R. 1997. Fast Lyapunov indicators: application to asteroidal motion. Celest. Mech. Dyn. Astr. 67: 41–62.
Furstenberg, H., and Kesten, H. 1960. Products of random matrices. Ann. Math. Statist. 31: 457–469.
Gardiner, C. 2009. Stochastic methods: a handbook for the natural and social sciences. Springer-Verlag, Berlin.


Gaspard, P. 2005. Chaos, scattering and statistical mechanics. Cambridge University Press, Cambridge.
Gaspard, P., and Nicolis, G. 1990. Transport properties, Lyapunov exponents, and entropy per unit time. Phys. Rev. Lett. 65: 1693–1696.
Geist, K., Parlitz, U., and Lauterborn, W. 1990. Comparison of different methods for computing Lyapunov exponents. Progr. Theoret. Phys. 83: 875–893.
Gertsenshtein, M. E., and Vasiljev, V. B. 1959. Waveguides with random inhomogeneities and Brownian motion in the Lobachevsky plane. Theor. Prob. Appl. 4: 391–398.
Giacomelli, G., Hegger, R., Politi, A., and Vassalli, M. 2000. Convective Lyapunov exponents and propagation of correlations. Phys. Rev. Lett. 85: 3616–3619.
Ginelli, F., Livi, R., Politi, A., and Torcini, A. 2003. Relationship between directed percolation and the synchronization transition in spatially extended systems. Phys. Rev. E 67: 046217.
Ginelli, F., Poggi, P., Turchi, A., Chaté, H., Livi, R., and Politi, A. 2007. Characterizing dynamics with covariant Lyapunov vectors. Phys. Rev. Lett. 99: 130601.
Ginelli, F., Takeuchi, K., Chaté, H., Politi, A., and Torcini, A. 2011. Chaos in the Hamiltonian mean-field model. Phys. Rev. E 84: 066211.
Ginelli, F., Chaté, H., Livi, R., and Politi, A. 2013. Covariant Lyapunov vectors. J. Phys. A – Math. Theor. 46: 254005.
Giovannini, F., and Politi, A. 1992. Generating partitions in Hénon-type maps. Phys. Lett. A 161: 332–336.
Girko, V. L. 1984. The circular law. Teor. Veroyatnost. i Primenen. 29: 669–679.
Goldhirsch, I., Sulem, P.-L., and Orszag, S. A. 1987. Stability and Lyapunov stability of dynamical systems: a differential approach and a numerical method. Physica D 27: 311–337.
Goldobin, D. S., and Pikovsky, A. 2004. Synchronization of periodic self-oscillations by common noise. Radiophys. Quantum El. 47: 910–915.
Goldobin, D. S., and Pikovsky, A. 2006. Antireliability of noise-driven neurons. Phys. Rev. E 73: 061906.
Goldobin, D. S., Teramae, J., Nakao, H., and Ermentrout, G. B. 2010. Dynamics of limit-cycle oscillators subject to general noise. Phys. Rev. Lett. 105: 154101.
Golub, G. H., and Van Loan, Ch. F. 1996. Matrix computations. Johns Hopkins University Press, Baltimore, MD.
Gorin, Th., Prosen, T., Seligman, Th. H., and Znidaric, M. 2006. Dynamics of Loschmidt echoes and fidelity decay. Phys. Rep. 435: 33–156.
Goussev, A., Jalabert, R. A., Pastawski, H. M., and Wisniacki, D. A. 2012. Loschmidt echo. Scholarpedia 7(8): 11687.
Gozzi, E., and Reuter, M. 1994. Lyapunov exponents, path-integrals and forms. Chaos, Solitons & Fractals 4: 1117–1139.
Graham, R. 1988. Lyapunov exponents and supersymmetry of stochastic dynamical systems. Europhys. Lett. 5: 101–106.
Grassberger, P., and Kantz, H. 1985. Generating partitions for the dissipative Hénon map. Phys. Lett. A 113: 235–238.


Grassberger, P., and Procaccia, I. 1983. On the characterization of strange attractors. Phys. Rev. Lett. 50: 346–349.
Grassberger, P., Badii, R., and Politi, A. 1988. Scaling laws for invariant measures on hyperbolic and nonhyperbolic attractors. J. Stat. Phys. 51: 135–178.
Grebogi, C., Ott, E., and Yorke, J. A. 1983. Crises, sudden changes in chaotic attractors, and transient chaos. Physica D 7: 181–200.
Gredeskul, S. A., and Freilikher, V. D. 1990. Localization and wave propagation in randomly layered media. Physics-Uspekhi 33: 134–146.
Gupalo, D., Kaganovich, A. S., and Cohen, E. G. D. 1994. Symmetry of Lyapunov spectrum. J. Stat. Phys. 74: 1145–1159.
Gutkin, E. 1986. Billiards in polygons. Physica D 19: 311–333.
Haake, F. 2010. Quantum signatures of chaos. Springer-Verlag, Berlin.
Habib, S., and Ryne, R. D. 1995. Symplectic calculation of Lyapunov exponents. Phys. Rev. Lett. 74: 70–73.
Hairer, E., Lubich, Ch., and Wanner, G. 2010. Geometric numerical integration: structure-preserving algorithms for ordinary differential equations. Springer, Heidelberg.
Hale, J. K. 1969. Ordinary differential equations. Wiley-Interscience, New York.
Haller, G. 2001. Distinguished material surfaces and coherent structures in three-dimensional fluid flows. Physica D 149: 248–277.
Haller, G. 2002. Lagrangian coherent structures from approximate velocity data. Phys. Fluids 14: 1851–1861.
Haller, G., and Yuan, G. 2000. Lagrangian coherent structures and mixing in two-dimensional turbulence. Physica D 147: 352–370.
Halpin-Healy, T., and Zhang, Y.-Ch. 1995. Kinetic roughening phenomena, stochastic growth, directed polymers and all that. Aspects of multidisciplinary statistical mechanics. Phys. Rep. 254: 215–414.
Hansen, K. T. 1992. Remarks on the symbolic dynamics for the Hénon map. Phys. Lett. A 165: 100–104.
Harmer, G. P., and Abbott, D. 1999. Parrondo’s paradox. Statist. Sci. 14: 206–213.
Hartung, F., Krisztin, T., Walther, H.-O., and Wu, J. 2006. Functional differential equations with state-dependent delays: theory and applications. In Cañada, A., Drábek, P., and Fonda, A. (eds.), Handbook of differential equations, vol. 3. Amer. Math. Soc., Providence, RI, 435–546.
Heiligenthal, S., Dahms, Th., Yanchuk, S., Jüngling, Th., Flunkert, V., Kanter, I., Schöll, E., and Kinzel, W. 2011. Strong and weak chaos in nonlinear networks with time-delayed couplings. Phys. Rev. Lett. 107: 234102.
Hénon, M. 1982. On the numerical computation of Poincaré maps. Physica D 5: 412–414.
Herrmann, H. J. 1990. Damage spreading. Physica A 168: 516–528.
Hoover, W. G. 1991. Computational statistical mechanics. Elsevier Science, Amsterdam.
Hopf, E. 1950. The partial differential equation u_t + uu_x = μu_xx. Comm. Pure Appl. Math. 3: 201–230.
Horsthemke, W., and Bach, A. 1975. Onsager-Machlup function for one-dimensional nonlinear diffusion processes. Z. Phys. B Cond. Mat. 22: 189–192.


Ilachinski, A. 2001. Cellular automata: a discrete universe. World Scientific, River Edge, NJ.
Inagaki, S., and Konishi, T. 1993. Dynamical stability of a simple model similar to self-gravitating systems. Publ. Astron. Soc. Japan 45: 733–735.
Isopi, M., and Newman, Ch. M. 1992. The triangle law for Lyapunov exponents of large random matrices. Comm. Math. Phys. 143: 591–598.
Izrailev, F. M., Ruffo, S., and Tessieri, L. 1998. Classical representation of the one-dimensional Anderson model. J. Phys. A – Math. Gen. 31: 5263–5270.
Izrailev, F. M., Krokhin, A. A., and Makarov, N. M. 2012. Anomalous localization in low-dimensional systems with correlated disorder. Phys. Rep. 512: 125–254.
Jalabert, R., and Pastawski, H. 2001. Environment-independent decoherence rate in classically chaotic systems. Phys. Rev. Lett. 86: 2490–2493.
Johnson, R. A., Palmer, K. J., and Sell, G. R. 1987. Ergodic properties of linear dynamical systems. SIAM J. Math. Anal. 18: 1–33.
Kaneko, K. 1985. Spatiotemporal intermittency in coupled map lattices. Progr. Theor. Phys. 74: 1033–1044.
Kantz, H., and Grassberger, P. 1985. Repellers, semi-attractors and long-lived chaotic transients. Physica D 17: 75–86.
Kantz, H., and Schreiber, Th. 2004. Nonlinear time series analysis. Cambridge University Press, Cambridge.
Kantz, H., Radons, G., and Yang, H. 2013. The problem of spurious Lyapunov exponents in time series analysis and its solution by covariant Lyapunov vectors. J. Phys. A – Math. Theor. 46: 254009.
Kaplan, J. L., and Yorke, J. A. 1979. Chaotic behavior of multidimensional difference equations. In Walther, H.-O., and Peitgen, H.-O. (eds.), Functional differential equations and approximation of fixed points. Springer-Verlag, Berlin, 204–227.
Kardar, M., Parisi, G., and Zhang, Y.-Ch. 1986. Dynamic scaling of growing interfaces. Phys. Rev. Lett. 56: 889–892.
Kargin, V. 2014. On the largest Lyapunov exponent for products of Gaussian matrices. J. Stat. Phys. 157: 70–83.
Karrasch, D., and Haller, G. 2013. Do finite-size Lyapunov exponents detect coherent structures? Chaos 23: 043126.
Katok, A., and Hasselblatt, B. 1995. Introduction to the modern theory of dynamical systems. Cambridge University Press, Cambridge.
Kauffman, S. A. 1969. Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theor. Biol. 22: 437–467.
Kenfack Jiotsa, A., Politi, A., and Torcini, A. 2013. Convective Lyapunov spectra. J. Phys. A – Math. Theor. 46: 254013.
Khasminskii, R. 2012. Stochastic stability of differential equations. Springer, Heidelberg. With contributions by G. N. Milstein and M. B. Nevelson.
Kockelkoren, J. 2002. Dynamique hors d’équilibre et universalité en présence d’une quantité conservée [Out-of-equilibrium dynamics and universality in the presence of a conserved quantity]. Ph.D. thesis, Université Denis Diderot, Paris 7, CEA.
Kolmogorov, A. N., and Tikhomirov, V. M. 1961. ε-entropy and ε-capacity of sets in functional spaces. Amer. Math. Soc. Transl. Ser. 2 17: 277–364.


Kolmogorov, A. N., Petrovsky, I., and Piscounov, N. 1937. A study of the diffusion equation with increase in the amount of substance, and its application to a biological problem. Bull. Univ. Moscow, Ser. Int. A1: 1.
Korabel, N., and Barkai, E. 2010. Separation of trajectories and its relation to entropy for intermittent systems with a zero Lyapunov exponent. Phys. Rev. E 82: 016209.
Kostelich, E. J., Kan, I., Grebogi, C., Ott, E., and Yorke, J. A. 1997. Unstable dimension variability: a source of nonhyperbolicity in chaotic systems. Physica D 109: 81–90.
Kramer, B., and MacKinnon, A. 1993. Localization: theory and experiment. Rep. Prog. Phys. 56: 1469.
Kramer, B., MacKinnon, A., Ohtsuki, T., and Slevin, K. 1987. Finite size scaling analysis of the Anderson transition. In Abrahams, E. (ed.), 50 years of Anderson localization. World Scientific, Singapore, 347–360.
Krug, J., and Meakin, P. 1990. Universal finite-size effects in the rate of growth processes. J. Phys. A – Math. Gen. 23: L987.
Kruis, H. V., Panja, D., and van Beijeren, H. 2006. Systematic density expansion of the Lyapunov exponents for a two-dimensional random Lorentz gas. J. Stat. Phys. 124: 823–842.
Krylov, N. S. 1979. Works on the foundations of statistical physics. Princeton University Press, Princeton, NJ. Translated by A. B. Migdal, Ya. G. Sinai and Yu. L. Zeeman. With a preface by A. S. Wightman.
Kunze, M. 2000. Lyapunov exponents for non-smooth dynamical systems. In Kunze, M. (ed.), Non-smooth dynamical systems. Springer, Berlin and Heidelberg, 63–140.
Kuptsov, P. V., and Kuznetsov, S. P. 2009. Violation of hyperbolicity in a diffusive medium with local hyperbolic attractor. Phys. Rev. E 80: 016205.
Kuptsov, P. V., and Parlitz, U. 2012. Theory and computation of covariant Lyapunov vectors. J. Nonl. Sci. 22: 727–762.
Kuptsov, P. V., and Politi, A. 2011. Large-deviation approach to space-time chaos. Phys. Rev. Lett. 107: 114101.
Kuramoto, Y. 1975. Self-entrainment of a population of coupled nonlinear oscillators. In Araki, H. (ed.), International symposium on mathematical problems in theoretical physics. Springer, New York, 420.
Kuramoto, Y. 1984. Chemical oscillations, waves and turbulence. Springer, Berlin.
Kuznetsov, S. P., and Pikovsky, A. 1986. Universality and scaling of period-doubling bifurcations in a dissipative distributed medium. Physica D 19: 384–396.
Laffargue, T., Lam, Kh.-D. N.-Th., Kurchan, J., and Tailleur, J. 2013. Large deviations of Lyapunov exponents. J. Phys. A – Math. Theor. 46: 254002.
Lai, Y.-Ch., and Tél, T. 2011. Transient chaos: complex dynamics on finite time scales. Springer, New York.
Lam, Kh.-D. N.-Th., and Kurchan, J. 2014. Stochastic perturbation of integrable systems: a window to weakly chaotic systems. J. Stat. Phys. 156: 619–646.
Landau, L. D., and Lifshitz, E. M. 1958. Quantum mechanics: non-relativistic theory. Course of theoretical physics, vol. 3. Pergamon, London and Paris.
Laskar, J. 1989. A numerical experiment on the chaotic behaviour of the solar system. Nature 338: 237–238.


Laskar, J., and Gastineau, M. 2009. Existence of collisional trajectories of Mercury, Mars and Venus with the Earth. Nature 459: 817–819.
Latz, A., van Beijeren, H., and Dorfman, J. R. 1997. Lyapunov spectrum and the conjugate pairing rule for a thermostatted random Lorentz gas: kinetic theory. Phys. Rev. Lett. 78: 207–210.
Ledrappier, F. 1981. Some relations between dimension and Lyapunov exponents. Commun. Math. Phys. 81: 229–238.
Leimkuhler, B., and Reich, S. 2004. Simulating Hamiltonian dynamics. Cambridge University Press, Cambridge.
Leine, R. I. 2010. The historical development of classical stability concepts: Lagrange, Poisson and Lyapunov stability. Nonlinear Dynam. 59: 173–182.
Leonov, G. A., and Kuznetsov, N. V. 2007. Time-varying linearization and the Perron effects. Int. J. Bifurcat. Chaos 17: 1079.
Lepri, S., Giacomelli, G., Politi, A., and Arecchi, F. T. 1994. High-dimensional chaos in delayed dynamical systems. Physica D 70: 235–249.
Lepri, S., Politi, A., and Torcini, A. 1996. Chronotopic Lyapunov analysis. I: a detailed characterization of 1D systems. J. Stat. Phys. 82: 1429.
Lepri, S., Politi, A., and Torcini, A. 1997. Chronotopic Lyapunov analysis. II: toward a unified approach. J. Stat. Phys. 88: 31.
Lepri, S., Livi, R., and Politi, A. 2003. Thermal conduction in classical low-dimensional lattices. Phys. Rep. 377: 1–80.
Letz, T., and Kantz, H. 2000. Characterization of sensitivity to finite perturbations. Phys. Rev. E 61: 2533–2538.
Lifshits, I. M., Gredeskul, S. A., and Pastur, L. A. 1988. Introduction to the theory of disordered systems. John Wiley & Sons, New York.
Livi, R., Politi, A., and Ruffo, S. 1986. Distribution of characteristic exponents in the thermodynamic limit. J. Phys. A – Math. Gen. 19: 2033–2040.
Livi, R., Politi, A., Ruffo, S., and Vulpiani, A. 1987. Liapunov exponents in high-dimensional symplectic dynamics. J. Stat. Phys. 46: 147–160.
Livi, R., Politi, A., and Ruffo, S. 1992. Scaling-law for the maximal Lyapunov exponent. J. Phys. A – Math. Gen. 25: 4813–4826.
Lorenz, E. N. 1963. Deterministic nonperiodic flow. J. Atmos. Sci. 20: 130–141.
Luccioli, S., Olmi, S., Politi, A., and Torcini, A. 2012. Collective dynamics in sparse networks. Phys. Rev. Lett. 109: 138103.
Lyapunov, A. M. 1992. The general problem of the stability of motion. Taylor & Francis, Ltd., London. Translated from Edouard Davaux’s French translation (1907) of the 1892 Russian original and edited by A. T. Fuller, with an introduction and preface by Fuller, a biography of Lyapunov by V. I. Smirnov, and a bibliography of Lyapunov’s works compiled by J. F. Barrett. Lyapunov centenary issue. Reprint of Internat. J. Control 55 (1992), no. 3.
MacKinnon, A., and Kramer, B. 1981. One-parameter scaling of localization length and conductance in disordered systems. Phys. Rev. Lett. 47: 1546–1549.
MacKinnon, A., and Kramer, B. 1983. The scaling theory of electrons in disordered solids: additional numerical results. Z. Phys. B Cond. Mat. 53: 1–13.


Mainen, Z. F., and Sejnowski, T. J. 1995. Reliability of spike timing in neocortical neurons. Science 268: 1503.
Mainieri, R. 1992. Cycle expansion for the Lyapunov exponent of a product of random matrices. Chaos 2: 91–97.
Mallick, K., and Peyneau, P.-E. 2006. Phase diagram of the random frequency oscillator: the case of Ornstein–Uhlenbeck noise. Physica D 221: 72–83.
Manneville, P. 1985. Liapounov exponents for the Kuramoto-Sivashinsky model. In Macroscopic modelling of turbulent flows (Nice, 1984). Springer, Berlin, 319–326.
Marčenko, V. A., and Pastur, L. A. 1967a. Distribution of eigenvalues in certain sets of random matrices. Mat. Sb. (N.S.) 72: 507–536.
Marčenko, V. A., and Pastur, L. A. 1967b. The spectrum of random matrices. Teor. Funkciĭ Funkcional. Anal. i Priložen. Vyp. 4: 122–145.
Marinari, E., Pagnani, A., and Parisi, G. 2000. Critical exponents of the KPZ equation via multi-surface coding numerical simulations. J. Phys. A – Math. Gen. 33: 8181.
Markoš, P. 1993. Weak disorder expansion of Lyapunov exponents of products of random matrices: a degenerate theory. J. Stat. Phys. 70: 899–919.
Martin, B. 2007. Damage spreading and μ-sensitivity on cellular automata. Ergodic Theory Dynam. Systems 27: 545–565.
McNamara, S., and Mareschal, M. 2001. Origin of the hydrodynamic Lyapunov modes. Phys. Rev. E 64: 051103.
Mehta, M. L. 2004. Random matrices. Elsevier/Academic Press, Amsterdam.
Mello, P. A., and Robledo, A. 1993. Strongly coupled Ising chain under a weak random field. Physica A 199: 363–386.
Milnor, J., and Thurston, W. 1988. On iterated maps of the interval. In Dynamical systems (College Park, MD, 1986–87). Springer, Berlin.
Monteforte, M., and Wolf, F. 2010. Dynamical entropy production in spiking neuron networks in the balanced state. Phys. Rev. Lett. 105: 268104.
Morton, K. W., and Mayers, D. F. 2005. Numerical solution of partial differential equations: an introduction. Cambridge University Press, Cambridge.
Motter, A. E. 2003. Relativistic chaos is coordinate invariant. Phys. Rev. Lett. 91: 231101.
Motter, A. E., and Saa, A. 2009. Relativistic invariance of Lyapunov exponents in bounded and unbounded systems. Phys. Rev. Lett. 102: 184101.
Müller, P. C. 1995. Calculation of Lyapunov exponents for dynamic systems with discontinuities. Chaos Solitons Fractals 5: 1671–1681.
Murray, C. D., and Dermott, S. F. 1999. Solar system dynamics. Cambridge University Press, Cambridge.
Newman, Ch. M. 1986a. The distribution of Lyapunov exponents: exact results for random matrices. Comm. Math. Phys. 103: 121–126.
Newman, Ch. M. 1986b. Lyapunov exponents for some products of random matrices: exact expressions and asymptotic distributions. In Random matrices and their applications (Brunswick, Maine, 1984). Amer. Math. Soc., Providence, RI, 121–141.
Olmi, S., Politi, A., and Torcini, A. 2012. Stability of the splay state in networks of pulse-coupled neurons. J. Math. Neurosci. 2: 12.
Oseledets, V. 2008. Oseledets theorem. Scholarpedia 3(1): 1846.


Oseledets, V. I. 1968. A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19: 197–231.
Paladin, G., and Vulpiani, A. 1986. Scaling law and asymptotic distribution of Lyapunov exponents in conservative dynamical systems with many degrees of freedom. J. Phys. A – Math. Gen. 19: 1881–1888.
Paoli, P., Politi, A., and Badii, R. 1989. Long-range order in the scaling behaviour of hyperbolic dynamical systems. Physica D 36: 263–286.
Parks, P. C. 1992. A. M. Lyapunov’s stability theory – 100 years on. IMA J. Math. Control Inform. 9: 275–303.
Parrondo, J.-M. R., and Dinís, L. 2004. Brownian motion and gambling: from ratchets to paradoxical games. Contemp. Phys. 45(2): 147–157.
Pastur, L., and Figotin, A. 1992. Spectra of random and almost-periodic operators. Springer-Verlag, Berlin.
Patil, D. J., Hunt, B. R., Kalnay, E., Yorke, J. A., and Ott, E. 2001. Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett. 86: 5878–5881.
Pazó, D., López, J. M., and Politi, A. 2013. Universal scaling of Lyapunov-exponent fluctuations in space-time chaos. Phys. Rev. E 87: 062909.
Peacock, Th., and Haller, G. 2013. Lagrangian coherent structures: the hidden skeleton of fluid flows. Phys. Today 66: 41–47.
Pecora, L. M., and Carroll, T. L. 1991. Driving systems with chaotic signals. Phys. Rev. A 44: 2374–2383.
Pecora, L. M., and Carroll, T. L. 1998. Master stability functions for synchronized coupled systems. Phys. Rev. Lett. 80: 2109–2112.
Pecora, L. M., and Carroll, T. L. 1999. Master stability functions for synchronized coupled systems. Int. J. Bifurcat. Chaos 9: 2315–2320.
Pesin, Ya. B. 1977. Characteristic Lyapunov exponents and smooth ergodic theory. Russ. Math. Surv. 32: 55.
Pettini, M. 2007. Geometry and topology in Hamiltonian dynamics and statistical mechanics. Springer, New York.
Pichard, J. L., and Sarma, G. 1981a. Finite-size scaling approach to Anderson localisation. J. Phys. C – Solid State 14: L127.
Pichard, J. L., and Sarma, G. 1981b. Finite-size scaling approach to Anderson localisation. II. Quantitative analysis and new results. J. Phys. C – Solid State 14: L617.
Pierrehumbert, R. T., and Yang, H. 1993. Global chaotic mixing on isentropic surfaces. J. Atmos. Sci. 50: 2462–2480.
Pikovsky, A. 1984a. On the interaction of strange attractors. Z. Physik B 55: 149–154.
Pikovsky, A. 1984b. Synchronization and stochastization of nonlinear oscillations by external noise. In Sagdeev, R. Z. (ed.), Nonlinear and turbulent processes in physics, vol. 3. Harwood Acad., Chur.
Pikovsky, A. 1984c. Synchronization and stochastization of the ensemble of autogenerators by external noise. Radiophys. Quantum Electron. 27: 576–581.
Pikovsky, A. 1989. Spatial development of chaos in nonlinear media. Phys. Lett. A 137: 121–127.


Pikovsky, A. 1991. Statistical properties of dynamically generated anomalous diffusion. Phys. Rev. A 43: 3146–3148.
Pikovsky, A. 1993. Local Lyapunov exponents for spatiotemporal chaos. Chaos 3: 225–232.
Pikovsky, A., and Feudel, U. 1995. Characterizing strange nonchaotic attractors. Chaos 5: 253–260.
Pikovsky, A., and Grassberger, P. 1991. Symmetry breaking bifurcation for coupled chaotic attractors. J. Phys. A – Math. Gen. 24: 4587–4597.
Pikovsky, A., and Politi, A. 1998. Dynamic localization of Lyapunov vectors in spacetime chaos. Nonlinearity 11: 1049–1062.
Pikovsky, A., and Politi, A. 2001. Dynamic localization of Lyapunov vectors in Hamiltonian lattices. Phys. Rev. E 63: 036207.
Pikovsky, A., Osipov, G., Rosenblum, M., Zaks, M., and Kurths, J. 1997a. Attractor-repeller collision and eyelet intermittency at the transition to phase synchronization. Phys. Rev. Lett. 79: 47–50.
Pikovsky, A., Zaks, M., Rosenblum, M., Osipov, G., and Kurths, J. 1997b. Phase synchronization of chaotic oscillations in terms of periodic orbits. Chaos 7: 680–687.
Pikovsky, A., Rosenblum, M. G., Osipov, G. V., and Kurths, J. 1997c. Phase synchronization of chaotic oscillators by external driving. Physica D 104: 219–238.
Pikovsky, A., Rosenblum, M., and Kurths, J. 2001. Synchronization: a universal concept in nonlinear sciences. Cambridge University Press, Cambridge.
Pinsky, M. A. 1986. Instability of the harmonic oscillator with small noise. SIAM J. Appl. Math. 46: 451–463.
Pinsky, M. A., and Wihstutz, V. 1988. Lyapunov exponents of nilpotent Itô systems. Stochastics 25: 43–57.
Pinsky, M. A., and Wihstutz, V. 1992. Lyapunov exponents and rotation numbers of linear systems with real noise. In Probability theory (Singapore, 1989). de Gruyter, Berlin, 109–119.
Pires, C. J. A., Saa, A., and Venegeroles, R. 2011. Lyapunov statistics and mixing rates for intermittent systems. Phys. Rev. E 84: 066210.
Politi, A. 2014a. Probability density of the Lyapunov vector orientation. Unpublished manuscript.
Politi, A. 2014b. Stochastic fluctuations in deterministic systems. In Vulpiani, A., et al. (eds.), Large deviations in physics. Springer, Berlin and Heidelberg, 243–261.
Politi, A., and Torcini, A. 1992. Periodic orbits in coupled Hénon maps: Lyapunov and multifractal analysis. Chaos 2: 293–300.
Politi, A., and Torcini, A. 1994. Linear and non-linear mechanisms of information propagation. Europhys. Lett. 28: 545.
Politi, A., and Torcini, A. 2010. Stable chaos. In Nonlinear dynamics and chaos: advances and perspectives. Springer, Berlin, 103–129.
Politi, A., and Witt, A. 1999. Fractal dimension of space-time chaos. Phys. Rev. Lett. 82: 3034–3037.
Politi, A., Livi, R., Oppo, G.-L., and Kapral, R. 1993. Unpredictable behavior of stable systems. Europhys. Lett. 22: 571.

273

Bibliography

Politi, A., Torcini, A., and Lepri, S. 1998. Lyapunov exponents from node-counting arguments. J. Phys. IV France 8(Pr6): Pr6-263–Pr6-270.
Politi, A., Ginelli, F., Yanchuk, S., and Maistrenko, Y. 2006. From synchronization to Lyapunov exponents and back. Physica D 224: 90–101.
Pollicott, M. 2010. Maximal Lyapunov exponents for random matrix products. Invent. Math. 181: 209–226.
Pomeau, Y., Pumir, A., and Pelcé, P. 1984. Intrinsic stochasticity with many degrees of freedom. J. Stat. Phys. 37: 39–49.
Popovych, O. V., Maistrenko, Yu. L., and Tass, P. A. 2005. Phase chaos in coupled oscillators. Phys. Rev. E 71: 065201.
Posch, H. A., and Hirschl, R. 2000. Simulation of billiards and of hard body fluids. In Hard ball systems and the Lorentz gas. Encyclopaedia Math. Sci., vol. 101. Springer, Berlin, 279–314.
Pyragas, K. 1996. Weak and strong synchronization of chaos. Phys. Rev. E 54: 4508–4511.
Pyragas, K. 1997. Conditional Lyapunov exponents from time series. Phys. Rev. E 56: 5183–5188.
Quarteroni, A., and Valli, A. 1994. Numerical approximation of partial differential equations. Springer-Verlag, Berlin.
Radons, G. 2005. Disordered dynamical systems. In Radons, G., Just, W., and Häussler, P. (eds.), Collective dynamics of nonlinear and disordered systems. Springer-Verlag, Berlin, 271–299.
Ramasubramanian, K., and Sriram, M. S. 2000. A comparative study of computation of Lyapunov spectra with different algorithms. Physica D 139: 72–86.
Rangarajan, G., Habib, S., and Ryne, R. D. 1998. Lyapunov exponents without rescaling and reorthogonalization. Phys. Rev. Lett. 80: 3747–3750.
Risken, H. 1989. The Fokker-Planck equation: methods of solution and applications. 2nd edn. Springer-Verlag, Berlin.
Romeiras, F. J., Bondeson, A., Ott, E., Antonsen, T. M., and Grebogi, C. 1987. Quasiperiodically forced dynamical systems with strange nonchaotic attractors. Physica D 26: 277–294.
Rössler, O. E. 1979. An equation for hyperchaos. Phys. Lett. A 71: 155–157.
Ruelle, D. 1979. Ergodic theory of differentiable dynamical systems. Inst. Hautes Études Sci. Publ. Math. 50: 27–58.
Ruelle, D. 1982. Large volume limit of the distribution of characteristic exponents in turbulence. Comm. Math. Phys. 87: 287–302.
Ruelle, D. 1985. Rotation numbers for diffeomorphisms and flows. Ann. Inst. H. Poincaré Phys. Théor. 42: 109–115.
Ruffo, S. 1994. Hamiltonian dynamics and phase transition. In Transport, chaos and plasma physics. World Scientific, Singapore, 114.
Rulkov, N. F., Sushchik, M. M., Tsimring, L. S., and Abarbanel, H. D. I. 1995. Generalized synchronization of chaos in directionally coupled chaotic systems. Phys. Rev. E 51: 980–994.
Sano, M. M., and Kitahara, K. 2001. Thermal conduction in a chain of colliding harmonic oscillators revisited. Phys. Rev. E 64: 056111.


Sauer, T. D., Tempkin, J. A., and Yorke, J. A. 1998. Spurious Lyapunov exponents in attractor reconstruction. Phys. Rev. Lett. 81: 4341–4344.
Schmalfuß, B. 1997. The random attractor of the stochastic Lorenz system. Z. Angew. Math. Phys. 48: 951–975.
Shadden, Sh. C., Lekien, F., and Marsden, J. E. 2005. Definition and properties of Lagrangian coherent structures from finite-time Lyapunov exponents in two-dimensional aperiodic flows. Physica D 212: 271–304.
Shcherbakov, P. S. 1992. Alexander Mikhailovitch Lyapunov: on the centenary of his doctoral dissertation on stability of motion. Automatica J. IFAC 28: 865–871.
Shepelyansky, D. L. 1983. Some statistical properties of simple classically stochastic quantum systems. Physica D 8: 208–222.
Shibata, T., Chawanya, T., and Kaneko, K. 1999. Noiseless collective motion out of noisy chaos. Phys. Rev. Lett. 82: 4424–4427.
Shimada, I., and Nagashima, T. 1979. A numerical approach to ergodic problem of dissipative dynamical systems. Prog. Theor. Phys. 61: 1605–1616.
Sinai, Ya. 2009. Kolmogorov-Sinai entropy. Scholarpedia 4(3): 2034.
Sinai, Ya. G. 1996. A remark concerning the thermodynamical limit of the Lyapunov spectrum. Int. J. Bifurcat. Chaos 6: 1137–1142.
Skokos, Ch., Bountis, T. C., and Antonopoulos, Ch. 2007. Geometrical properties of local dynamics in Hamiltonian systems: the generalized alignment index (GALI) method. Physica D 231: 30–54.
Slevin, K., and Ohtsuki, T. 1999. Corrections to scaling at the Anderson transition. Phys. Rev. Lett. 82: 382–385.
Smirnov, V. I. 1992. Biography of A. M. Lyapunov. International Journal of Control 55: 775–784. Translated by J. F. Barrett from A. M. Lyapunov: Izbrannie Trudi, Izdat. Akad. Nauk SSSR, 1948.
Sommerer, J. C. 1994. Fractal tracer distributions in complicated surface flows: an application of random maps to fluid dynamics. Physica D 76: 85–98.
Sompolinsky, H., Crisanti, A., and Sommers, H.-J. 1988. Chaos in random neural networks. Phys. Rev. Lett. 61: 259–262.
Stöckmann, H.-J. 1999. Quantum chaos: an introduction. Cambridge University Press, Cambridge.
Stratonovich, R. L. 1967. Topics in the theory of random noise. Taylor & Francis.
Straube, A. V., and Pikovsky, A. 2011. Pattern formation induced by time-dependent advection. Math. Model. Nat. Phenom. 6: 138–148.
Sussman, G. J., and Wisdom, J. 1992. Chaotic evolution of the solar system. Science 257: 56–62.
Tailleur, J., and Kurchan, J. 2007. Probing rare physical trajectories with Lyapunov weighted dynamics. Nat. Phys. 3: 203–207.
Takens, F. 1981. Detecting strange attractors in turbulence. In Rand, D. A., and Young, L.-S. (eds.), Dynamical systems and turbulence. Springer, London, 366–381.
Takeuchi, K. A., and Chaté, H. 2013. Collective Lyapunov modes. J. Phys. A 46: 254007.


Takeuchi, K. A., Chaté, H., Ginelli, F., Politi, A., and Torcini, A. 2011a. Extensive and subextensive chaos in globally coupled dynamical systems. Phys. Rev. Lett. 107: 124101.
Takeuchi, K. A., Yang, H.-L., Ginelli, F., Radons, G., and Chaté, H. 2011b. Hyperbolic decoupling of tangent space and effective dimension of dissipative systems. Phys. Rev. E 84: 046214.
Tanase-Nicola, S., and Kurchan, J. 2003. Statistical-mechanical formulation of Lyapunov exponents. J. Phys. A – Math. Gen. 36: 10299.
Taylor, T. J. 1993. On the existence of higher order Lyapunov exponents. Nonlinearity 6: 369.
Teramae, J., and Tanaka, D. 2004. Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Phys. Rev. Lett. 93: 204103.
Torcini, A., Grassberger, P., and Politi, A. 1995. Error propagation in extended chaotic systems. J. Phys. A – Math. Gen. 28: 4533–4541.
Toth, Z., and Kalnay, E. 1997. Ensemble forecasting at NCEP and the breeding method. Mon. Weather Rev. 125: 3297–3319.
Vallejos, R. O., and Anteneodo, C. 2012. Generalized Lyapunov exponents of the random harmonic oscillator: cumulant expansion approach. Phys. Rev. E 85: 021124.
van Beijeren, H., and Dorfman, J. R. 1995. Lyapunov exponents and Kolmogorov-Sinai entropy for the Lorentz gas at low densities. Phys. Rev. Lett. 74: 4412–4415.
van Saarloos, W. 1988. Front propagation into unstable states: marginal stability as a dynamical mechanism for velocity selection. Phys. Rev. A 37: 211–229.
van Saarloos, W. 1989. Front propagation into unstable states. II. Linear versus nonlinear marginal stability and rate of convergence. Phys. Rev. A 39: 6367–6390.
Vanneste, J. 2010. Estimating generalized Lyapunov exponents for products of random matrices. Phys. Rev. E 81: 036701.
Venegeroles, R. 2012. Thermodynamic phase transitions for Pomeau-Manneville maps. Phys. Rev. E 86: 021114.
Viana, M. 2014. Lectures on Lyapunov exponents. Cambridge University Press, Cambridge.
von Bremen, H. F., Udwadia, F. E., and Proskurowski, W. 1997. An efficient QR based method for the computation of Lyapunov exponents. Physica D 101: 1–16.
Walters, P. 1982. An introduction to ergodic theory. Springer-Verlag, Berlin.
Wigner, E. P. 1967. Random matrices in physics. SIAM Review 9: 1–23.
Wolfe, C. L., and Samelson, R. M. 2007. An efficient method for recovering Lyapunov vectors from singular vectors. Tellus A 59: 355–366.
Wolfram, S. 1986. Theory and applications of cellular automata: including selected papers, 1983–1986. World Scientific, Singapore.
Yanchuk, S., and Wolfrum, M. 2010. A multiple time scale approach to the stability of external cavity modes in the Lang-Kobayashi system using the limit of large delay. SIAM J. Appl. Dyn. Syst. 9: 519–535.
Yang, H.-L., and Radons, G. 2013. Hydrodynamic Lyapunov modes and effective degrees of freedom of extended systems. J. Phys. A 46: 254015.


Young, L.-S. 1982. Dimension, entropy, and Lyapunov exponents. Ergod. Theor. Dyn. Syst. 2: 109–124.
Yu, L., Ott, E., and Chen, Q. 1990. Transition to chaos for random dynamical systems. Phys. Rev. Lett. 65: 2935–2938.
Zanon, N., and Derrida, B. 1988. Weak disorder expansion of Liapunov exponents in a degenerate case. J. Stat. Phys. 50: 509–528.
Zaslavsky, G. M. 2007. The physics of chaos in Hamiltonian systems. Imperial College Press, London.
Zaslavsky, G. M., and Edelman, M. A. 2004. Fractional kinetics: from pseudochaotic dynamics to Maxwell’s demon. Physica D 193: 128–147.
Zhou, D., Sun, Y., Rangan, A. V., and Cai, D. 2010. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type. J. Comput. Neurosci. 28: 229–245.
Zillmer, R., and Pikovsky, A. 2003. Multiscaling of noise-induced parametric instability. Phys. Rev. E 67: 061117.
Zillmer, R., and Pikovsky, A. 2005. Continuous approach for the random-field Ising chain. Phys. Rev. E 72: 056108.
Zillmer, R., Ahlers, V., and Pikovsky, A. 2000. Scaling of Lyapunov exponents of coupled chaotic systems. Phys. Rev. E 61: 332–341.
Zillmer, R., Ahlers, V., and Pikovsky, A. 2002. Coupling sensitivity of localization length in one-dimensional disordered systems. Europhys. Lett. 60: 889–895.
Zillmer, R., Livi, R., Politi, A., and Torcini, A. 2006. Desynchronization in diluted neural networks. Phys. Rev. E 74: 036203.

Index

φ⁴ chain, 204 χ² distribution, 256 ε-entropy, 224

active degrees of freedom, 103 amplifier, 180 analytic complex function, 175 Anderson localisation, 119, 120, 137, 159, 175, 210, 229 model, 229 problem, 210 area growth rate, 71 ash cloud, 238 attractor, 63, 100, 105, 178 random, 150 size, 112 strange, 150 strange nonchaotic, 20, 96 autocorrelation function, 140 avoided crossing, 158

barycenter, 240 basin of attraction, 100, 225 Bernoulli map, 88, 90, 114, 250 Bessel function, 215 bifurcations, 25 billiard, 20, 43, 231 dispersing, 232 focusing, 232 triangular, 233 binary variables, 186 binomial expansion, 132 Bloch ansatz, 167 Bloch-Lyapunov exponent, 166, 167 Boltzmann equation, 235 bond energy, 128 Boolean algebra, 186 Boolean Lyapunov exponent, 187 Boolean-Jacobian matrix, 187 Borland conjecture, 230 boundary problem, 230 bred vectors, 66 dimension, 66 Brownian bridge, 203 motion, 203 bubbling transition, 96 butterfly effect, 2, 20 BV, see bred vectors

canonical ensemble, 224 variables, 26 Cantor, see fractal Cauchy-Green strain tensor, 238 Cauchy-Riemann conditions, 176 celestial mechanics, 2, 78, 239 cellular automata, 98, 172 central limit theorem, 75, 77, 126, 127 chaos, 19 additive, 199 collective, see collective chaos indicator, 23 microscopic, 213 non-hyperbolic, 96 quantum, 242 stable, 20, 98, 116, 172, 227, 248 strong, 193 weak, 40, 67, 77, 78, 90, 94, 193, 197, 240, 242 chaotic attractor, 70, 106, 237 dynamical systems, 258 dynamics, 5, 40, 73, 150, 172, 189, 190, 194, 246 motion, 146 oscillator, 166, 194 regime, 149 repeller, 74 saddle, 104–106 set, 84 spot, 174 system, 152, 153, 159, 162 high-dimensional, 222 characteristic function, 74, 76, 85, 86, 94, 137 chemical kinetics, 247 chemical turbulence, 249 Chirikov-Taylor standard map, 36, 44, 72, 158, 243, 246 chronotopic approach, 17, 168, 173, 177, 178, 205 Lyapunov exponent, 184 circular law, 126, 143 closed system, 180, 225


cluster, 164 CLV, see covariant Lyapunov vector coarse-grained dimension, 224 invariant measure, 36 cocycle, 13 collective chaos, 193, 194 dynamics, 210, 213, 215, 218 motion, 213, 218 perturbations, 218 collision, 44, 49 complete set of orthonormal vectors, 62 complex Ginzburg-Landau equation, 26, 178, 185, 203, 225, 228, 249 complex-conjugate eigenvalues, 120 computational complexity, 31, 32, 35, 60 computational time, 61 concavity, 185 conditional Lyapunov exponent, 164 conductance fluctuations, 137 connectivity, 128 conservation law, 24, 25, 40 continuous limit, 170 continuous-time dynamics, 112 system, 118, 246, 252 contracting direction, 103 contraction rate, 37, 77 control theory, 3 convective expansion rate, 184 Lyapunov exponent, 168, 173, 178, 181, 185, 226, 228 generalised, 185 spectrum, 228 coordinate-dependence, 71 correlation, 119 decay of, 243 function, 146 space-time, 203 time, 189 coupled chaotic systems, 160 logistic maps, 168, 215 map lattice, 98, 116, 156, 180, 181, 247 systems, 152 coupling, 128, 152, 181 asymmetric, 156 critical, 162 global, 32, 142, 156, 158, 163, 195, 250 long-range, 193 massive, 168 nearest-neighbour, 32, 199, 228 parameter, 163 small, 155 sparse, 168

strength, 155, 158, 204 symmetric, 183 unidirectional, 18, 180 coupling sensitivity, 152, 158, 161, 171, 194, 203, 211, 213 covariance, 56 covariance matrix, 66 covariant Lyapunov vector, 35, 36, 52–54, 70, 72, 78, 94, 102, 112, 207, 209, 210, 218, 252 covering, 100, 107 crisis transition, 104 critical behaviour, 91 critical case, 122 crossing times, 110 crossover time, 201 cumulants, 74 current, 123 curvature, 77, 233 sectional, 188 cyclic permutations, 134 damage, 186 spreading, 168, 173, 185, 186 defect, 187 degeneracy, 25, 40, 76, 120, 159 degenerate spectra, 124 delayed feedback, 192 delocalised phase, 231 deterministic system, 18, 118 deterministic thermostats, 236 dichotomic process, 134 differential geometry, 34 differential-delay equation, 168, 191, 192, 203, 250 diffusion, 39, 137, 166, 191 anomalous, 39 coefficient, 39, 74, 89, 210, 213, 222, 235 generalised, 165 matrix, 77, 89 dimension box, 100, 107 correlation, 107 density, 223, 224 fluctuations, 223 fractal, see fractal dimension full, 105 generalised, 107, 108 information, 101, 102, 107, 108 partial, 102, 105, 106, 108 generalised, 108 pointwise, 107 variability, 222 directed polymer, 128, 160 discontinuity, 43, 115 discrete Laplacian operator, 247 discrete spectrum, 230 discrete-state system, 172, 186


discrete-time system, 10, 105, 118, 119, 245 disorder, 118 critical, 231 delta-correlated, 120 quenched, 210 strong, 125, 130 weak, 118, 119, 130 disordered media, 229 disordered systems, 119 dissipative systems, 22 double-gyre flow, 238 driven (skew) systems, 18, 20 dual Lyapunov vectors, 67 dynamical algorithm, 55, 57–59 invariant, 20, 40, 51, 105, 224, 235 mean field, 146 dynamical (Ruelle) zeta function, see zeta function dynamical entropy, 77, 100, 103, 105, 106, 109, 170, 177, 178, 187, 192, 220, 236 density, 177 super-invariant, 168 generalised, 107, 109 potential, 176, 177 dynamics aperiodic, 172 irregular, 213 microscopic, 171, 215 non-hyperbolic, 222 periodic, 67 symmetric, 118 Earth’s atmosphere, 67 Earth-Moon system, 240 easy dynamics, 60 eccentricity, 240, 242 effective dimension, 67 eigenvalue, 26, 29, 54, 84 of a Schrödinger operator, 118 problem, 230 eigenvector, 26, 34, 42, 54, 57 elastic collision, 185 electric conductivity, 237 ellipsoid, 106 elliptic phase, 122 embedding technique, 51 ensemble average, 35, 91, 119, 233 entire function, 224 entropy, 243 entropy-like observable, 187 Erdös-Renyi network, 198 ergodic evolution, 102 system, 172 ergodicity, 13, 122 escape rate, 103, 104, 106, 178, 235, 236

Euclidean basis, 126, 256 norm, 205, 209 evanescent waves, 180 evaporation exponent, 164 evolution operator, 70, 80, 82, 84 excitable system, 247 expanding direction, 66, 78, 103, 120 expansion factor, 39 rate, 67 fluctuation of, 72 experimental data, 53 extended Lyapunov vectors, 194 extensivity, 168, 193, 194, 199, 218, 220, 224 FAE, see finite-amplitude Lyapunov exponent Family-Vicsek scaling, 201 fast Lyapunov indicator, 78 faster-than-exponentially converging, 131 Fermi-Pasta-Ulam chain, 169, 185, 187–190, 204, 248 fidelity, 243 finite perturbations, 110 finite-amplitude instability, 117 perturbations, 185 finite-amplitude Lyapunov exponent, 66, 98, 110, 114, 216, 239 finite-size analysis, 61 corrections, 169, 175, 205, 206 effect, 110, 208 exponent, 110 finite-time Lyapunov exponent, 39, 70, 77, 89, 94, 97, 100, 101, 107, 108, 194, 205, 207, 238–240 fluctuations of, 42 transverse, 163 first-passage time, 110, 112, 115 Lyapunov exponent, 111 Fisher-Kolmogorov-Petrovsky-Piskunov equation, 185 FitzHugh-Nagumo model, 68, 247 fixed point, 17, 86, 163 marginally stable, 90 flame-front propagation, 249 FLI, see fast Lyapunov indicator floating particles, 150 Floquet exponent, 17, 195 fluctuations, 70, 100, 109, 112, 118, 145 fluid of hard disks, 218 Fokker-Planck equation, 123, 137, 138, 141, 149, 154, 211 forbidden words, 83 irreducible, 95 Fourier modes, 41, 209, 214, 218 space, 202


Fourier (cont.) spectrum, 208 transform, 170, 202 FPU, see Fermi-Pasta-Ulam chain fractal, 102 curve, 97 dimension, 100, 101, 170, 192, 220 direction, 103 repeller, 235 set, 107, 150 structure, 102, 108 free energy, 128, 160 friction, 43 front propagation, 185 front velocity, 184, 186 Furstenberg-Kesten theorem, 5, 230 Furutsu-Novikov formula, 139, 142, 149 GALI, see generalised alignment index Galilean invariance, 204 game of life, 172 Gauss Runge-Kutta algorithm, 34 Gaussian approximation, 77, 89, 137, 139, 222 distribution, 75, 77 ensemble, 126 integral, 130, 137 generalised alignment index, 78 generalised Lyapunov exponent, 70, 73, 75, 79–82, 84, 92, 95, 100, 108, 130, 134, 136, 138, 139, 142, 148, 182, 188 maximum, 108 generating function, 129 generating partition, 83, 95, 105, 109, 258 glassy phase, 130 global atmospheric models, 66 global phase, 47 globally coupled maps, 158, 193, 210, 213, 215, 250 globally coupled systems, 194, 198, 210 Gram-Schmidt orthogonalisation, 31, 38, 42, 56, 71, 252 Gram-Schmidt vectors, 61 Grassberger-Procaccia algorithm, 107 gravitation, 240 group velocity, 184 growth prefactor, 93 GS, see Gram-Schmidt Hénon map, 52, 64, 65, 70–72, 94, 96, 100, 156, 246, 258 chain of, 173, 200, 208, 219, 221, 222, 248 Halley’s comet, 242 Hamiltonian, 44 bosonic, 148 dynamics stochastically perturbed, 150

equations, 153 fermionic, 148 ground state, 148 mean field model, 171, 213, 251 model, 204, 218 oscillator chain of, 168 structure, 237 system, 20, 22, 26, 36, 105, 187, 240, 243, 248 high-dimensional , 138 nearly integrable, 242 nonequilibrium, 228 hard disks, 26 hard dynamics, 60 hard-point gas, 185 heat bath, 172 Heaviside function, 172 Helfand moments, 236 high-dimensional system, 142 higher-order Lyapunov exponents, 110 Hilbert space, 244 homoclinic tangencies, 65, 72, 94, 95, 258 homogeneous state, 116 Hopf bifurcation, 249, 251 Hopf-Cole transformation, 202 Householder reflections, 31, 38, 56, 71 hydrodynamic Lyapunov modes, 218 hydrodynamics, 2 hyperbolic, 94 phase, 122 hyperbolicity, 63, 65, 70, 72, 85, 89, 221 hyperchaos, 19 hyperscaling, 226, 227 Ikeda delay equation, 192, 250 inclination, 240 incommensurate frequencies, 240 inertial manifold, 209, 222 infinite-dimensional system, 21, 168 infinite-precision arithmetics, 172 infinite-time limit, 113, 168, 205 infinitesimal perturbations, 110, 112 initial value problem, 230 instability, 2, 133 absolute, 180 convective, 180 integrated density, 127 interaction long-range, 213, 251 short-range, 203 interface velocity, 205 intermittency, 90 invariant directions, 102 invariant measure, 13, 35, 36, 52, 53, 80, 82, 91, 103, 106, 161, 187, 222, 223, 225, 226


approximations, 104 microcanonical, 188 singular, 115 invariant set, 76, 163, 178 stable, 115 inverse participation ratio, 210 invertibility, 243 Ising model, 172 isotropic dynamics, 118 Itô’s interpretation, 141, 143 Jacobi matrix, 10, 86, 238 Jacobian, 28, 33, 51, 71, 118, 126, 147, 161, 165, 188, 190, 219, 233, 234 reconstruction, 51 KAM tori, 240 Kantz-Grassberger formula, 104, 106 Kaplan-Yorke dimension, 103 density, 222 formula, 100, 102, 108, 170, 224–226 generalised, 107 Kardar-Parisi-Zhang critical exponents, 204 dynamics, 218, 221 equation, 202–205, 208 universality class, 203, 205 Keplerian component, 240, 241 Khasminskii theory, 118, 141 kicked dynamical system, 43 kicked rotor, 44 kicks, 43 kneading theory, 258 Kolmogorov-Sinai entropy, see dynamical entropy KPZ, see Kardar-Parisi-Zhang Kuptsov-Parlitz algorithm, 62 Kuramoto model, 196, 197, 213, 251 Kuramoto-Sivashinsky equation, 169, 170, 208, 209, 218, 249 Lagrangian coherent structures, 111, 112, 237 Lagrangian trajectories, 238 Langevin equation, 123, 138, 142, 149 large deviation function, 70, 75, 82, 89, 92, 94–97, 163, 219 large deviations, 71, 79, 89, 98 large-scale behaviour, 218 largest Lyapunov exponent, 28, 54, 127, 128, 155, 205, 206, 210, 212, 213 largest Lyapunov vector, 188, 217 leading edge, 185 leaky integrate-and-fire neuron, 48 left-right symmetry, 174, 175, 180 Legendre transform, 75, 93, 94, 129, 132, 176, 183–185

Legendre-Fenchel transform, 75 level repulsion, 157, 158 Levy walk, 204 light beam propagation, 173 limit cycle, 20, 69 linear damping, 137 linear response, 67 linear velocity, 180 Liouville equation, 218 nonlinear, 214 Liouville operator, 217 Liouville theorem, 187 liquid films, 249 local pointwise dimension, 107 localisation, 210 length, 137, 175, 230 of Lyapunov vectors, 209 weak, 210, 212 logistic map, 88, 157, 245 chain of, 169, 179, 203, 206, 207, 222, 226, 247 network of, 198 two-dimensional lattice of, 204 long-delay limit, 192 long-time correlations, 75 longitudinal exponent, 196 Lorentz gas, 233, 235, 236 Lorenz attractor, 6, 36 model, 113, 115, 165, 247 Loschmidt echo, 243 paradox, 243 Lozi map, 72, 152, 156, 246 LU factorisation, 62 Lyapunov density spectrum, 127, 168, 169, 173, 180, 181, 188, 190, 194, 196, 197 generalised, 173 first method, 5 function, 5 regularity, 13 second (direct) method, 5 spectrum, 12, 33, 38, 61, 142, 168, 170, 215, 221, 222, 225, 256 stability, 4 time, 19, 241, 243 vector, 41, 79, 130, 200 localisation of, 193, 195, 200 machine dynamics, 43 map area preserving, 234 of the interval, 258 random, 146 two coupled, 162


Markov partition, 81, 82, 161, 258 approximate, 83 process, 95, 140 master equation, 140 master stability function, 25, 164, 165 master-slave system, 18 material lines, 238 Mathieu equation, 140 matrix antisymmetric, 33, 34 band diagonal, 139 coupling, 164 highly symmetric, 124, 125 hyperbolic, 124 non-commutativity of, 133 orthogonal, 158 positive, 131 product of, 132 random, 119, 189, 198 Gaussian, 127, 144, 256, 257 product of, 36, 118, 119, 188, 230, 235, 256 theory of, 158 sparse, 118, 128, 181 symplectic, 158, 190 transfer, 173, 174 triangular, 32, 37, 58, 64, 71 tridiagonal, 189 two-by-two, 42 upper triangular, 58 volume-conserving, 124 mean field, 194 mean field model, 197, 213, 250 memory, 59, 63 microcanonical approach, 129, 131–134 ensemble, 224 microscopic Lyapunov exponent, 216, 217 maximal, 216 Milnor attractor, 163 mismatch, 155 mixed dynamics, 96 mixed phase space, 78, 90 mixing, strong, 239 mobility edge, 231 molecular dynamics, 236 moment Lyapunov exponent, 136 moments, 74, 81 Monte Carlo dynamics, 79, 80 technique, 70 multifractality, 100 multiplicative ergodic theorem, see Oseledets theorem multiplicative noise, 118, 136, 202 multiplicativity, 85 multiplier, 85, 87 multistable dynamics, 113

nearly degenerate spectra, 66 negative Lyapunov exponent, 115, 116 network, 128, 164, 166, 168, 197 massive, 198 neural, 48, 49, 126, 145 sparse, 131, 198 topology, 165 neuron, 151 neuron models, 43 neuroscience, 150 node counting, 175 noise, 42, 80, 115, 118, 121, 215, 224 correlated, 143 delta-correlated, 118, 122, 137, 142, 146, 149, 154 dichotomic, 132 observational, 53 parametric, 137 polytomic, 118, 131, 132, 134, 136 weak, 118, 140, 149 white Gaussian, 141, 149, 202 noise stabilisation, 133 noisy Burgers equation, 203 noisy nonlinear systems, 118 nonanalytic behavior, 122 non-autonomous case, 47, 50 non-degenerate spectra, 119 non-hyperbolic, 108 nonlinear wave, 117 nonnormality, 37 norm equivalence, 168 generalised, 205 non-equivalence, 171 norm-dependence, 71 normal form, 251 normalisation, 28 numerical accuracy, 31, 37, 41 numerical error, 36 oblateness, 240 occupation number, 209 oil spill, 238 one-dimensional lattice, 226 one-dimensional map, 11, 35, 82, 86, 88, 90, 95, 112, 114, 198, 210 open system, 100, 180, 223, 224 open-system Lyapunov exponent, 226 optical feedback, 193 ordinary differential equation, 246 Ornstein-Uhlenbeck process, 140 orthogonal vectors, 31 orthogonalisation, 34 error, 37 time interval, 37 orthonormal basis, 58, 78 orthonormalisation, 252


oscillator chain of, 128, 166, 203 chaotic, forced, 161 entrained, 196 harmonic, chain of, 172 identical, ensemble of, 194 linear, 137 nonlinear, chain of, 199 pulse-coupled, 48, 195, 217, 251 Oseledets splitting, 15, 54, 57 dominated, 89, 221 subspace, 61 theorem, 5, 12, 13, 230 vectors, 55, 58, 61 backward, 56, 59 forward, 56, 58 outer solar system, 240 parametric resonance, 137 Parrondo’s paradox, 133 partial differential equation, 170, 249 elliptic, 231 partition, 101, 102, 108 partition function, 128, 160 passive tracer, 237 path-integral formalism, 147 pendulum, 49 percolation threshold, 114 period doubling, 90, 94 period-2 orbit, 86 periodic orbit, 17, 70, 71, 84, 85, 122, 163, 213 stable, 115 Perron-Frobenius equation, 213, 214 Perron-Frobenius operator, 36, 81, 82, 216, 217 perturbations finite-amplitude, localised, 117 perturbative analysis, 120 perturbative techniques, 82 Pesin formula, 106, 170 generalised, 109 phase, of oscillations, 149 phase oscillators, 195 phase response, 68, 69 phase shift, 67 phase-space dimension, 36, 103 phase synchronisation, 36, 162, 249 phase transition, 118 phase velocity, 184 piecewise linear map, 81 Planck’s cell, 244 plankton field, 238 Poincaré section, 23, 47 Poisson brackets, 26 Poisson distribution, 158 polar coordinates, 121 polymer, 129

Pomeau-Manneville map, 90 population dynamics, 245 potential-enstrophy norm, 66 power law, 94, 130 power spectrum, 140 predictability time, 19, 243 prime cycle, 86–88, 134 principal component analysis, 66, 77, 225 probability density, 81, 147 projected schemes, 34 propagation, of waves, 119 propagation velocity, 172 propagator, 243 pruning front, 258 pseudo-symplectic structure, 174 pseudo-probability, 209 pseudochaos, 90 pseudocodes, 252–255 pure point spectrum, 229 QR decomposition, 29, 35, 44, 56, 58, 60, 63, 71, 94, 106, 252 continuous, 33 quarter-circle law, 127 quasi-momentum, 167 quasiperiodic force, 96 quasiperiodic motion, 20, 240 Rössler oscillator, 23, 39, 40, 192, 247 chain of, 37, 168, 249 random dynamical system, 76 environment, 128 polymer, 119 potential, 124 process, 36, 154, 210 system, 118, 187 walk, 137, 156 random-field Ising chain, 159 rare events, 97 rare fluctuations, 73 reaction-diffusion system, 185 real world data, 53 recursive relation, 121 refractory period, 49 re-injection, 91 relativistic dynamics, 22 relaxation time, 35 reliability, 150 renormalisation group, 203 Rényi entropies, 109, 210, 224 repeller, 104, 106, 178, 236, 237 replica, 79, 80, 187 replica-symmetry breaking, 130 resampling, 80 rotation number, 17 rotational invariance, 126, 127


rough interface, 204, 218 rough Lyapunov vector, 209 roughening, 203 exponent, 220 interface, 201, 207 phenomena, 200 process, 203 round-off errors, 158, 243 saddle-point method, 182 scalability, 34 scalar observable, 73 scaling, 75, 138 anomalous, 221 behaviour, 80, 94, 202, 203, 210, 213, 226 relation, 206 scattering, 104, 234 Schrödinger equation, 124, 137, 144, 175, 229, 230 second Lyapunov exponent, 101 secular terms, 241 self-averaging, 219 self-generated friction, 237 semi-analytic techniques, 118 semiclassical approximation, 243 semiconductor laser, 193 Shannon entropy, 210, 224 shear viscosity, 237 simple fluids, 236 Sinai billiard, 231 singular value decomposition, 29, 53, 63 singular values, 14, 29 singular vectors, 55, 56, 66 skew product, 161 skew tent map, 88, 212 Smale’s horseshoe, 104 smallest Lyapunov exponent, 54, 218 solar system, 239 sound velocity, 185 space-like variable, 173, 191 space-time chaos, 169, 191, 202, 218, 249 spacing density, 159 spatial Lyapunov exponent, 168, 174 spatially continuous system, 170, 249 spatially extended system, 115, 168, 171, 178, 191, 197, 200, 203, 218, 223, 224, 247 spatially homogeneous state, 167 special flow, 161 spectral density, 189, 198 spike, 43 spiking neuron, 48 splay state, 195 spurious exponents, 52, 53 stability, 1 asymptotic, 4 fluctuations of, 38 spectrum, 75 uniform, 163

stable direction, 101, 103, 106 stable manifold, 54, 63–65, 72, 238 standard deviation, 77 state-dependent delay, 193 stationary distribution, 155 statistical error, 38 statistical fluctuations, 36, 179, 217 statistical mechanics, 79, 134 stick-slip dynamics, 43 Stiefel manifold, 34 Stirling formula, 132 stochastic D-bifurcation, 151 differential equation, 122, 136, 141, 142, 146, 202, 207 linear, 118 map, chain of, 114 process, 76, 84, 134, 140, 153 stability, 141 system, 18, 73, 146, 229 straight Lyapunov vector, 208, 209 Stratonovich’s interpretation, 141, 143 stream function, 238 stream-function norm, 66 stretched exponential, 90, 92 stroboscopic map, 24, 161 strong-exchange coupling limit, 160 structurally variable systems, 45 Stuart-Landau oscillator, 214, 217, 251 chain of, 220, 221 ensemble of, 213 subexponential growth, 90 sum, of Lyapunov exponents, 102 super-exponential convergence, 135 superdiffusion, 204 supersymmetric quantum mechanics, 118, 146, 148 supersymmetry, 146 SVD, see singular value decomposition symbolic encoding, 258 symbolic sequence, 88 symmetry, 20, 24, 26, 27, 40, 78, 90, 159, 160, 162 average, 25 breaking, 162 continuous, 24, 25 instantaneous, 25 reflection, 26 translational, 26 symplectic dynamics, 34, 72, 175, 246 lattice system, 218 map, 65, 120, 153, 220 chain of, 203, 248 structure, 189 system, 26, 78, 169, 222, 258 symplectic-like condition, 219 symplectic-like structure, 248


synchronisation, 152, 162 by common noise, 150 complete, 164 generalised, 164 manifold, 163 transition, 96, 110 synchronous dynamics, 165 synchronous state, 41, 163 system size, 168 tangent space, 36, 41, 44, 46, 47 telegraph process, 140 temperature, 128, 130 tent map, 88, 114, 157, 193, 195, 214, 246 ensemble of, 216 globally coupled, 194, 195 thermal conductivity, anomalous, 204 thermodynamic equilibrium, 189 limit, 98, 126–128, 143, 157, 168, 171–173, 191, 193, 196–198, 209, 211–213, 219, 221 measure, 223 threshold, 110 tidal friction, 240 tight-binding approximation, 120 time average, 77, 120, 121, 233 time series, 50 time translation invariance, 160 time-like variable, 173 time-reversal symmetry, 175 topological entropy, 81, 82, 95, 258 torus, 97 trace, 85 transfer matrix, 160 transient, 28, 61, 66, 98 dynamics, 104 phenomena, 179 time, 28, 61

transition rate, 81 transmission coefficient, 229 transport, 204, 229 coefficient, 235 transversality, 64, 209 transverse Lyapunov exponent, 152, 159, 162, 164–166, 196 transverse stability, 164 turbulence, 150 turbulent fluid flow, 237 turbulent state, 117 Ulam point, 157, 245 unimodal distribution, 196 unitary integrators, 34 unitary operator, 243 universality class, 197, 203 unstable direction, 101, 103, 104, 106, 109, 153, 222 unstable manifold, 54, 63–65, 72, 221, 223, 238 variance, 77 volume conservation, 73 contraction, 21, 108, 237 fluctuations, 223 wave chaos, 242 wave front, 233 wave propagation, 173, 229 weather forecasts, 66 weighted dynamics, 79, 94 Wolfe-Samelson algorithm, 62 XOR operation, 186 zero Lyapunov exponent, 20, 24, 40, 67, 79, 90, 93, 96, 160, 172, 195 zeta function, 84, 86, 88, 131, 134, 135