Space-time chaos : characterization, control and synchronization 9789810245061, 9810245068


English Pages 305 [323] Year 2001


Table of contents:
Part 1 Analysis and characterization of chaotic, complex and space extended systems: non-stationarity as an embedding problem, M. Small et al
nonlinear dynamics and time series analysis - a basic introduction, E.J. Kostelich
fixed point analysis - dynamics of non-stationary spatiotemporal signals, A. Hutt and F. Kruggel
complex-valued second difference as a measure of stabilization of complex dissipative statistical systems - Girko ensemble, M.M. Duras
study of a C-integrable partial differential equation, A. di Garbo et al
wave patterns in coupled map lattices, P.G. Lind et al
chaos in air bubble formation, A. Tufaile and J.C. Sartorelli
patterns in a nonlinear optical system, F.T. Arecchi and P.L. Ramazza
bifurcation behaviour of a superlattice model, J. Galan et al
delayed dynamical systems with variable delay, S. Madruga et al
stability of hexagonal patterns in a generalized Swift-Hohenberg equation, B. Pena et al
Rayleigh-Benard convective instabilities in nematic liquid crystals, J. Salan and I. Bove
topological analysis in a dripping faucet experiment, M.B. Reyes and J.C. Sartorelli.
Part 2 Control and synchronization of chaos and space time chaos: the breakdown of synchronization and shadowing in coupled chaotic systems - analysis via the subsystem decomposition, E. Barreto and P. So
anticipating chaotic synchronization - an overview, H.U. Voss
phase synchronization in chaotic convection, D. Maza et al
a simple use of the diffusion approximation for treating roundoff-induced problems in coupled maps with an invariant subset, G. Santoboni et al
out-of-phase vs. in-phase synchronization of two parametrically excited pendula, A.G. de Oliveira et al
experiments on chaos control in lasers, R. Meucci and F.T. Arecchi
communication with chaos - reconstructing information-carrying signals, I.P. Marino et al.







19-23 June 2000

Edited by

S. Boccaletti, J. Burguete, W. Gonzalez-Vinas, H. L. Mancini, D. L. Valladares
Institute of Physics, University of Navarra, Pamplona, Spain

World Scientific

Singapore • New Jersey • London • Hong Kong

Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Farrer Road, Singapore 912805 USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

SPACE-TIME CHAOS: CHARACTERIZATION, CONTROL AND SYNCHRONIZATION Copyright © 2001 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-02-4506-8

Printed in Singapore by Uto-Print

Preface

This book reports the Proceedings of the interdisciplinary School on "Space Time Chaos: Characterization, Control and Synchronization," which was held at the University of Navarra, Pamplona, Spain, on June 19-23, 2000, under the direction of the Editors. The School was organized by the Institute of Physics of the University of Navarra and was intended to cover subjects related to complex spatiotemporal phenomena from both theoretical and experimental points of view, focusing especially on recent advances in the fields of control, synchronization and characterization of complex space-extended systems. Many topics were the subject of lectures, among which we recall the analysis of experimental data, the diffusion limited aggregation and Laplacian growth problems, communication using chaotic carriers, noise effects in spatially extended systems, control and synchronization of chaos, riddling and on-off intermittency, and the problem of a definition of complexity. In this framework, we wish to thank all lecturers, who contributed substantially to training the audience in the new perspectives of their respective fields, as well as to creating a charming atmosphere and highly stimulating scientific discussions with the participants. We would like to acknowledge the other members of the School Scientific Committee, namely F. T. Arecchi (Italy), C. Grebogi (Brazil), J. Kurths (Germany), I. Procaccia (Israel), K. Showalter (USA) and L. Vazquez (Spain), for their fruitful advice and cooperation in the organization of the School. Furthermore, we would like to acknowledge the financial support from the Spanish Ministerio de Educación y Ciencia, the European Commission (DG XII, contract number: HPRN-CT-2000-00158), the USA Fulbright Program and the University of Navarra, which made it possible to award grants and facilitate the participation of students and contributors from all over the world.
Besides lecturers, the School saw the participation of more than one hundred PhD or postdoc students coming from very different fields, whom we gratefully thank for their contribution with talks, poster presentations or

simply with their presence, and for having profusely discussed the different presented subjects from very diverse points of view: from applied mathematics to engineering, laser physics, chemistry, biophysics and ecology, statistical mechanics and electronics. We believe that this event has considerably contributed to bringing together different skills and expertise, emphasizing the common achievements and pointing out future directions in the respective fields of interest. Finally, we feel indebted to C. Ceccarini and G. Sottile for their valuable contribution as secretaries of the School during its whole duration, and we would like to thank all members of the Department of Physics and Applied Mathematics of the University of Navarra, as well as all the staff of the University of Navarra, who have contributed to guarantee the pleasantness of those days.

The Editors Stefano Boccaletti Javier Burguete Wenceslao Gonzalez-Vinas Hector Luis Mancini Diego Leonardo Valladares


Contents

Preface v

Part I: Analysis and Characterization of chaotic, complex and space extended systems 1

1 NON-STATIONARITY AS AN EMBEDDING PROBLEM
  MICHAEL SMALL, DEJIN YU, ROBERT G. HARRISON 3
  1 Introduction 3
  2 Modelling 4
    2.1 Reconstruction 4
    2.2 Radial Basis Modelling 5
    2.3 Minimum Description Length 6
    2.4 Cylindrical Basis Models 7
    2.5 Embedding Time 7
  3 Applications 8
    3.1 Nonstationary Simulations 8
    3.2 Chaos in Computational Simulations 10
    3.3 Vibrating Strings 12
    3.4 Bifurcating Babies 12
    3.5 Fibrillation 15
  4 Conclusion 17

2 NONLINEAR DYNAMICS AND TIME SERIES ANALYSIS: A BASIC INTRODUCTION
  E. J. KOSTELICH 19
  1 Introduction 19
  2 The embedding problem 20
  3 Noise reduction in chaotic data 24
  4 Other resources 26

3 FIXED POINT ANALYSIS: DYNAMICS OF NON-STATIONARY SPATIOTEMPORAL SIGNALS
  A. HUTT, F. KRUGGEL 29
  1 Introduction 29
  2 A simulated dataset: Kuppers-Lortz instability 30
  3 Fixed Point Clustering 32
    3.1 The clustering algorithm 33
    3.2 Results of clustering 33
  4 Spatiotemporal Modeling 36
  5 Fixed Point Analysis (FPA) 40
  6 Summary 41
  7 Acknowledgments 42

4 COMPLEX-VALUED SECOND DIFFERENCE AS A MEASURE OF STABILIZATION OF COMPLEX DISSIPATIVE STATISTICAL SYSTEMS: GIRKO ENSEMBLE
  M. M. DURAS 45

5 STUDY OF A C-INTEGRABLE PARTIAL DIFFERENTIAL EQUATION
  A. DI GARBO, S. CHILLEMI, L. FRONZONI 53
  1 Introduction 53
  2 Properties of the PDE 54
  3 Explicit solutions 55
  4 Conclusions 58

6 WAVE PATTERNS IN COUPLED MAP LATTICES
  PEDRO GONCALVES LIND, JOAO ALEXANDRE MEDINA CORTE-REAL, JASON ALFREDO CARLSON GALLAS 61
  1 Introduction 61
  2 Coupled Map Lattices 63
  3 The Five Classes of Time-evolution 64
  4 Algorithm for automatic classification of time-evolutions 70
    4.1 Definition of the period T 70
    4.2 Definition of d 72
  5 Dynamical Characteristics of the Five Classes 73
  6 Conclusions 74

7 CHAOS IN AIR BUBBLE FORMATION
  A. TUFAILE, J. C. SARTORELLI 79
  1 Introduction 79
  2 Experimental apparatus 79
  3 Bifurcation 80
  4 Anti-bubbles 82
  5 Sound wave 83
  6 Conclusions 84

8 PATTERNS IN A NONLINEAR OPTICAL SYSTEM
  F. T. ARECCHI, P. L. RAMAZZA 87
  1 Introduction 87
  2 The Liquid Crystal Light Valve with optical feedback: principles of operation and conditions for pattern formation 91
  3 Effects of nonlocal interactions: rotation 96
  4 Effects of nonlocal interactions: translation 99
  5 Conclusions 102

9 BIFURCATION BEHAVIOR OF A SUPERLATTICE MODEL
  J. GALAN, M. MOSCOSO, L. L. BONILLA 105
  1 Introduction 105
  2 Model 105
  3 Results and discussion 106
  4 Conclusions 111

10 DELAYED DYNAMICAL SYSTEMS WITH VARIABLE DELAY
   S. MADRUGA, S. BOCCALETTI, M. A. MATIAS 113
   1 Introduction 113
   2 Variable delay 115
     2.1 Dynamical behavior of VDS 117
     2.2 Results for sinusoidal modulation 118
     2.3 Results for triangular modulation 119
   3 Conclusions and Future Goals

11 STABILITY OF HEXAGONAL PATTERNS IN A GENERALIZED SWIFT-HOHENBERG EQUATION
   B. PENA, C. PEREZ-GARCIA, B. ECHEBARRIA 123
   1 Introduction 123
   2 Model equation 124
     2.1 Linear analysis 125
   3 Center Manifold Reduction to a pattern with hexagonal symmetry 126
   4 Multiple scale analysis for a hexagonal pattern 128
   5 Stability diagrams 130
     5.1 Amplitude instabilities 131
     5.2 Phase equations of a hexagonal pattern 132
     5.3 Numerical simulations 134
   6 Conclusions 135

12 RAYLEIGH-BENARD CONVECTIVE INSTABILITIES IN NEMATIC LIQUID CRYSTALS
   JESUS SALAN, ITALO BOVE 137
   1 Introduction 137
   2 Planar Nematics and horizontal magnetic field 140
     2.1 Thresholds 140
     2.2 Nonlinear properties 145
   3 Homeotropic NLC under external magnetic fields heating from below 145
     3.1 Threshold 145
     3.2 Nonlinear effects 148
   4 Homeotropic NLC under external magnetic fields heating from above 149
     4.1 Introduction 149
     4.2 Field H parallel to director n 150
     4.3 Field H perpendicular to director n 152

13 TOPOLOGICAL ANALYSIS IN A DRIPPING FAUCET EXPERIMENT
   M. B. REYES, J. C. SARTORELLI 157
   1 Introduction 157
   2 Experimental Apparatus 157
   3 Results and discussion 159
     3.1 Partition of the phase space 160
     3.2 Topological Characterization 164

Part II: Control and Synchronization of chaos and space time chaos 169

14 THE BREAKDOWN OF SYNCHRONIZATION AND SHADOWING IN COUPLED CHAOTIC SYSTEMS: ANALYSIS VIA THE SUBSYSTEM DECOMPOSITION
   ERNEST BARRETO, PAUL SO 171
   1 Introduction 171
   2 Defining UPO Subsystems 172
   3 Desynchronization Beyond Generalized Synchrony 173
     3.1 Introduction 173
     3.2 Evolution of the Attractor 175
     3.3 Subsystem Analysis 177
     3.4 Measuring the decoherence transition from trajectory data 178
     3.5 Noise 179
     3.6 Conclusion 180
   4 The Breakdown of Shadowing 181
     4.1 Introduction 181
     4.2 Subsystem Analysis 181
     4.3 Mechanisms for the Development of UDV 184
     4.4 Breakdown of Shadowing is Common in Coupled Systems 185
     4.5 Quantifying UDV and Shadowing Times 186
   5 Conclusion 188

15 ANTICIPATING CHAOTIC SYNCHRONIZATION: AN OVERVIEW
   H. U. VOSS 191
   1 Introduction 191
   2 Time-delayed feedback systems 192
   3 Coupled time-delayed feedback systems 194
   4 Time-delayed dissipative coupling 196
   5 Drive systems without a memory 200
   6 Systems without memory 203
   7 Discrete and spatiotemporal dynamics 203
   8 Overview and Discussion 205

16 PHASE SYNCHRONIZATION IN CHAOTIC CONVECTION
   D. MAZA, A. VALLONE, H. MANCINI, S. BOCCALETTI 211
   1 Introduction 211
   2 Experimental Setup 212
   3 From Chaotic Oscillations to Phase Synchronization 214
   4 Conclusion 219

17 A SIMPLE USE OF THE DIFFUSION APPROXIMATION FOR TREATING ROUNDOFF-INDUCED PROBLEMS IN COUPLED MAPS WITH AN INVARIANT SUBSET
   GIOVANNI SANTOBONI, RUA MURRAY, STEVEN R. BISHOP 223
   1 Introduction 223
   2 An analytically solvable example 225
     2.1 Analytical estimates 225
     2.2 Numerical results 226
     2.3 The role of the integration parameters 227
   3 Diffusion approximation 228
   4 Comments 232

18 OUT-OF-PHASE vs IN-PHASE SYNCHRONIZATION OF TWO PARAMETRICALLY EXCITED PENDULA
   ANA GUEDES de OLIVEIRA, WINSTON S. GARIRA, STEVE R. BISHOP 233
   1 Introduction and Motivation 233
   2 Single Parametrically Excited Pendulum 233
   3 Coupled Parametrically Excited Pendula 235
     3.1 Synchronization: Numerical Simulations 239
   4 Concluding Remarks and Future Research 246

19 EXPERIMENTS ON CHAOS CONTROL IN LASERS
   R. MEUCCI, F. T. ARECCHI 251
   1 Introduction 251
   2 The Physical System: a CO2 laser with feedback 252
   3 Stabilization of an Unstable Fixed Point 255
   4 Control of chaos in a delayed high-dimensional system 262
   5 Conclusions 268

20 COMMUNICATION WITH CHAOS: RECONSTRUCTING INFORMATION-CARRYING SIGNALS
   I. P. MARINO, D. L. VALLADARES, C. GREBOGI, S. BOCCALETTI, E. ROSA Jr. 273
   1 Introduction 273
   2 Encoding technique 275
   3 Dropout reconstruction 280
     3.1 First Reconstruction Method (FRM) 282
     3.2 Second Reconstruction Method (SRM) 286
   4 Multiplexing 288
   5 Conclusions 289

List of Participants



Part I

Analysis and Characterization of chaotic, complex and space extended systems



NON-STATIONARITY AS AN EMBEDDING PROBLEM

MICHAEL SMALL
Department of Physics, Heriot-Watt University, Riccarton, Edinburgh, United Kingdom
E-mail: [email protected]

DEJIN YU
Department of Physics, Heriot-Watt University, Riccarton, Edinburgh, United Kingdom
E-mail: [email protected]

ROBERT G. HARRISON
Department of Physics, Heriot-Watt University, Riccarton, Edinburgh, United Kingdom
E-mail: [email protected]
Non-stationarity is the time dependent change in system dynamics. In this paper we describe a generalisation of nonlinear modelling techniques that may allow one to extract time dependent features from a nonstationary time series. Nonlinear modelling is generally applied to predict future values of a time series from past values. By explicitly considering the time as a variable in the model one may extend this methodology to encompass time dependent dynamics. This method has been applied to successfully describe: (i) a Shil'nikov mechanism from experimental string vibration data, (ii) a period doubling bifurcation in infant respiration, (iii) time dependent features in a computational simulation of cardiac arrhythmia and (iv) computational simulations of classical nonlinear systems.



1 Introduction

Radial basis models are capable of accurately and compactly modelling a wide variety of functional forms [1]. Judd and Mees [2] have shown that a radial basis modelling technique with a pseudo-linear optimisation and minimum description length [3] as a fitting criterion can be employed to accurately model short, noisy experimental data. This technique has been successfully applied to model (for example): annual sunspot numbers [2,4], human infant respiration [5,6,7,8], vowel intonations [4], ventricular arrhythmia [9,10] and classical nonlinear systems [2,11]. This nonlinear modelling technique proceeds by embedding a scalar time series in a high dimensional Euclidean space and building a map from the embedded space to the next scalar point. A simple extension of this methodology allows one to also embed a time component in the same phase space [12]. An extension of radial basis modelling to locally embedded basis functions (cylindrical basis modelling) [4] then allows one to efficiently build a model which has appropriate time dependent dynamics if and only if this is required. In this

paper we describe the basic methodology, review some previous results and present some new applications. Section 2 describes the modelling procedures we employ. Section 3 gives several examples of applications of this methodology and Section 4 provides some concluding remarks.

2 Modelling


Nonlinear modelling is logically and operationally divided into three important issues: (i) reconstruction, (ii) functional approximation, and (iii) model selection. In Section 2.1 we describe the reconstruction methodology. Section 2.2 describes the basic functional form of radial basis models and Section 2.3 describes the application of minimum description length for model selection. In Section 2.4 we describe the extension of radial basis modelling to cylindrical basis functions. Finally, Section 2.5 describes the operation of time embedding that we employ to model nonstationary data.

2.1 Reconstruction


The much cited Takens' embedding theorem [13] provides a guarantee that under certain conditions it is possible to reconstruct a finite dimensional dynamical system $(\Phi, X)$ (where $\Phi : X \to X$, $x_t \in X$, $\dim(X) = d$) from a single scalar time series $\{u_t\}_{t=1}^{N}$ where $u_t = g(x_t)$. In practice one cannot be sure that the observation function $g : X \to \mathbf{R}$ is diffeomorphic (i.e. $u_t$ may only observe a subsystem) and the constraints on noise level and observation length $N$ may not be met. However, a time delay embedding still allows one to reconstruct a noisy attractor with dynamics approximately equivalent to the underlying dynamical system (or a subsystem). The time delay embedding is defined by

$$v_t = (u_t, u_{t-\tau}, u_{t-2\tau}, \ldots, u_{t-(d_e-1)\tau})$$



where $\tau$ and $d_e$ are the embedding lag and embedding dimension. Embedding dimension and embedding lag may be selected by one of many criteria including: false nearest neighbours [14], zero of autocorrelation [15], minimum of mutual information [16] or one-quarter the quasi-period [8]. The evolution operator of the underlying dynamical system $\Phi$ ($x_{t+1} = \Phi(x_t)$, or the analogous for a continuous system) may then be approximated by a function $F$,

$$v_{t+1} = F(v_t) + e_t$$

where $e_t \in \mathbf{R}^{d_e}$ is an i.i.d. noise vector. For simplicity we take $\tau = 1$ and we then have that

$$v_{t+1} = F(v_t) + e_t = \begin{pmatrix} f(v_t) \\ u_t \\ u_{t-1} \\ \vdots \\ u_{t-d_e+2} \end{pmatrix} + \begin{pmatrix} e_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

The modelling problem becomes one of finding a functional approximation $f : \mathbf{R}^{d_e} \to \mathbf{R}$ such that

$$\sum_t \left[ f(v_t) - u_{t+1} \right]^2 \tag{1.1}$$

($e_t = u_{t+1} - f(v_t)$) is minimised.
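The delay reconstruction defined above is straightforward to implement. The following is a minimal sketch (the function name and NumPy formulation are ours, not the authors'):

```python
import numpy as np

def delay_embed(u, dim, lag):
    """Build the delay vectors v_t = (u_t, u_{t-lag}, ..., u_{t-(dim-1)*lag}).

    Row i of the result is the delay vector for t = (dim-1)*lag + i.
    """
    u = np.asarray(u)
    n = len(u) - (dim - 1) * lag  # number of complete delay vectors
    # column j holds u_{t - j*lag}; column 0 is the most recent observation
    return np.column_stack([u[(dim - 1 - j) * lag + np.arange(n)]
                            for j in range(dim)])
```

For example, `delay_embed(u, 3, 2)` maps a scalar series to the three-dimensional vectors $(u_t, u_{t-2}, u_{t-4})$.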

2.2 Radial Basis Modelling

Let $x \in \mathbf{R}^{d_e}$; then a radial basis model $f$ is defined as

$$f(x) = a_0 + \sum_j a_j x_j + \sum_i b_i \phi_i\!\left(\frac{\|x - c_i\|}{r_i}\right) \tag{1.2}$$

where $\phi_i : [0,\infty) \to \mathbf{R}$ is the basis function, $c_i \in \mathbf{R}^{d_e}$ is the centre and $r_i \in \mathbf{R}$ is the radius. The weights $a_j$, $b_i$ may be selected by the usual least squares approach [17] (with the caveat that some $b_i$ may be set equal to zero). The parameters $c_i$ and $r_i$ will need to be selected by some nonlinear selection routine and the basis functions are selected from some class of candidates. Typically one may take $\phi_i$ to be selected from

- $\{\exp(-\frac{x^2}{2})\}$
- $\{x^3\}$
- $\{\tanh(mx + b) : m \in \mathbf{R}, b \in \mathbf{R}\}$
- $\{\exp(-x^p) : p > 0\}$
- $\{(2(x/s)^2 - 1)\exp(-(x/s)^2) : s > 0\}$

or some combination of these. The results described in this paper were obtained by selecting $\phi_i \in \{\exp(-\frac{x^2}{2})\} \cup \{1\}$. For a given model size $m$ the optimal model of that size is selected according to the algorithm described in Judd and Mees [2] or Small and Judd [5] to minimise (1.1). The optimal model size $m$ is selected by computing the description length for all model sizes and selecting the model with minimum description length.
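To make the pseudo-linear optimisation concrete: with the centres $c_i$ and radii $r_i$ held fixed, the weights follow from ordinary least squares. A minimal sketch with Gaussian basis functions follows (the helper names are ours; the authors' actual procedure additionally selects centres, radii and model size by minimum description length):

```python
import numpy as np

def rbf_design(X, centres, radii):
    # Design matrix: a constant column (the {1} basis) followed by
    # Gaussian basis functions phi(x) = exp(-x^2/2) of ||x - c_i|| / r_i.
    D = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) / radii
    return np.column_stack([np.ones(len(X)), np.exp(-0.5 * D**2)])

def fit_weights(X, y, centres, radii):
    # Pseudo-linear step: with centres and radii fixed, the weights are
    # given by an ordinary least squares fit.
    Phi = rbf_design(X, centres, radii)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

Nonlinear search over the centres and radii would wrap around this linear step.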

2.3 Minimum Description Length

The description length of a model is (roughly) the number of bits of information required to describe the data by describing the model of that data, the initial conditions and the model prediction errors. If a model is a good model, then the description length should be less than the description length of the raw data (the model provides a compact description of the features observed in the data). If the model is poor then its description length will be larger than that of the data alone (the model is more complex than the data). Description length is described in detail by Rissanen [3]. The key feature of description length is that the data values, model parameters and model prediction errors need only be specified to some finite accuracy. A model with low description length will have many parameters which need not be specified too precisely. Rissanen shows that the description length of a parameter $\lambda_i$ specified to some accuracy $\delta_i$ is $\log(\frac{\gamma}{\delta_i})$ [2,3]. The constant $\gamma$ is not critical and is related to the binary representation of floating point numbers [2]. Therefore, the description length of $k$ model parameters $\Lambda = \{\lambda_i\}_{i=1}^{k}$ is given by

$$L(\Lambda) = \sum_{i=1}^{k} \log\left(\frac{\gamma}{\delta_i}\right).$$

The description length of the data $\{u_t\}_{t=1}^{N}$, and the model with parameters $\Lambda$, is given by

$$L(u, \Lambda) = L(u|\Lambda) + L(\Lambda) \tag{1.3}$$

where the description length of the data given the model (i.e. the description length of the model prediction errors) $L(u|\Lambda)$ is the negative logarithm of the likelihood of the data under the assumed distribution, $-\ln(P(u|\Lambda))$.

Clearly the minimum description length of a given model will depend critically on the optimal selection of the model parameter precisions $\delta_i$. A second order expansion of (1.3) about the maximum likelihood model parameter values indicates the optimal values of $\delta = (\delta_1, \delta_2, \ldots, \delta_k)$ (the extra $\frac{1}{2}k$ term comes from solving (1.3) for the maximum likelihood values $\delta_i$ [2]).
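The two-part cost above can be sketched numerically. We assume i.i.d. Gaussian prediction errors for $L(u|\Lambda)$ and an arbitrary value of $\gamma$; both are illustrative choices, not the authors' implementation:

```python
import numpy as np

def parameter_cost(deltas, gamma=32.0):
    # L(Lambda) = sum_i log(gamma / delta_i): coarsely specified
    # parameters (large delta_i) are cheap to describe.
    return float(np.sum(np.log(gamma / np.asarray(deltas))))

def error_cost(errors):
    # L(u|Lambda) = -ln P(u|Lambda) under an assumed i.i.d. Gaussian
    # error distribution with the maximum likelihood variance:
    # N/2 * ln(2*pi*sigma^2) + N/2.
    e = np.asarray(errors)
    s2 = np.mean(e**2)
    return 0.5 * len(e) * (np.log(2.0 * np.pi * s2) + 1.0)

def description_length(errors, deltas, gamma=32.0):
    return error_cost(errors) + parameter_cost(deltas, gamma)
```

A model is preferred when smaller prediction errors (lower $L(u|\Lambda)$) are not bought with too many finely specified parameters (higher $L(\Lambda)$).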


2.4 Cylindrical Basis Models

Standard radial basis models are functions of the form (1.2). Cylindrical basis models are described by functions of the form

$$f(x) = a_0 + \sum_j a_j x_j + \sum_i b_i \phi_i\!\left(\frac{\|P_i x - c_i\|}{r_i}\right)$$

where $P_i : \mathbf{R}^{d_e} \to \mathbf{R}^{d_i}$ is a projection onto some subset of the coordinate directions. The inclusion of this additional component effectively removes some of the problems involved in the "correct" selection of the embedding parameters $\tau$ and $d_e$ and makes embedding an intrinsic part of the modelling process [4,5]. Moreover, dynamical systems often have increased complexity only in some parts of phase space: the Lorenz system, for example, is mostly two dimensional (on the wings); three dimensional structure is only significant at the central separatrix [4]. Successful application of the minimum description length principle should mean that the optimal model utilises precisely the dimensionality required in the correct parts of phase space, but no more.

2.5 Embedding Time

Time dependent structure may be incorporated into a radial or cylindrical basis model by considering time as a coordinate along with the scalar observations $u_t$. With $\tau = 1$ we embed $v_t$ according to

$$v_t = (u_t, u_{t-1}, u_{t-2}, \ldots, u_{t-d_e+1}, kt)$$

for some constant $k$. The evolution operator of the underlying dynamical system $\Phi$ may now be approximated by a function $F$,

$$v_{t+1} = F(v_t) + e_t = \begin{pmatrix} f(v_t) \\ u_t \\ u_{t-1} \\ \vdots \\ u_{t-d_e+2} \\ k(t+1) \end{pmatrix} + \begin{pmatrix} e_t \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}.$$

3 Applications

In this section we present several applications of the techniques described in Section 2. In Section 3.1 we exemplify the application of this method to detrend time series. Section 3.2 demonstrates the application of this method to identify a period doubling bifurcation in the logistic equation and bifurcation in the Rossler system. Section 3.3 briefly reviews the results of Judd and Mees [12]: an experimental vibrating string undergoing a Shil'nikov-type bifurcation. In Section 3.4 we present previously unpublished results showing a period doubling bifurcation in infant respiratory recordings prior to the onset of periodic breathing. Section 3.5 describes some preliminary results of the application of these techniques to computational simulations of cardiac arrhythmia.

3.1 Nonstationary Simulations



The modelling technique described in this paper is capable of estimating bifurcation diagrams, but also of modelling nonstationary nonlinear trends (and, more generally, nonstationary data). Figure 1 shows a computational simulation of a nonstationary nonlinear trend added to a random process. The presence of such non-stationarity can be detected with the algorithms described by Yu et al. [18,19]. Figure 2 shows the results of nonstationary modelling applied to this data. The trend has been accurately extracted and the noise component has been estimated with a correlation coefficient of 0.9989.


Figure 1: Trend and i.i.d. noise. 5000 points of a slight trend and large scale i.i.d. noise. The horizontal axis is the datum number, the vertical axis is arbitrary.


Figure 2: Estimation of trend and stochastic component from data. The left hand plot shows the trend estimated from the data (solid) and the true trend (dot-dashed). The right hand plot shows almost unity correlation (r = 0.9989) between the original i.i.d. noise and that estimated by the modelling algorithm.
























Figure 3: Time series of bifurcation in the Rossler system. 5000 points of the Rossler system with 5% additive Gaussian noise. During this simulation the bifurcation parameter a is varied from a = 0.3 to a = 0.398.


3.2 Chaos in Computational Simulations

The Rossler system is defined as

$$\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{pmatrix} = \begin{pmatrix} -y - z \\ x + ay \\ 2 + z(x - 4) \end{pmatrix}$$

For a = 0.3 this system has a stable limit cycle. The Rossler system undergoes a transition from this stable limit cycle, through period 2, period 4, "four band" chaos, period 6, and is chaotic for a = 0.398 [20]. We generated 5000 points (time step 0.2) of the Rossler system as a varied from 0.3 to 0.398. To this time series we added 5% Gaussian i.i.d. noise. The time series is shown in Figure 3. A reconstruction of the bifurcation diagram using a time varying radial basis model is shown in Figure 4. This model captures the essential features of the underlying bifurcation. However, the addition of sufficient observational noise may confuse some of the underlying details. Note, for example, that a small range of bifurcation parameter values failed to produce convergent results and for large values the behaviour is almost periodic (it should be single band chaos). Initially the model is estimated as bi-periodic when it should be simply periodic. This may be due to the low sampling rate making the time series appear to be bi-periodic early in the simulation. Other features are relatively accurately reproduced. The available information is certainly superior to that available from the raw data. We repeated these calculations for data with less noise and obtained superior results (see Figure 5). For noise free data and lower noise levels (Figure 5) we were able to successfully recreate the bifurcation diagram
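The numerical experiment described above can be reproduced along the following lines; the RK4 integrator, sub-stepping, initial condition and random seed are our choices, not taken from the text:

```python
import numpy as np

def rossler_drift(n=5000, dt=0.2, a0=0.3, a1=0.398, noise=0.05, seed=0):
    # x' = -y - z, y' = x + a*y, z' = 2 + z*(x - 4), with the bifurcation
    # parameter a drifting linearly from a0 to a1 over the whole run.
    def f(w, a):
        x, y, z = w
        return np.array([-y - z, x + a * y, 2.0 + z * (x - 4.0)])
    rng = np.random.default_rng(seed)
    w = np.array([1.0, 1.0, 1.0])    # arbitrary initial condition
    sub = 10                          # RK4 substeps per recorded sample
    h = dt / sub
    xs = np.empty(n)
    for i in range(n):
        a = a0 + (a1 - a0) * i / (n - 1)
        for _ in range(sub):
            k1 = f(w, a)
            k2 = f(w + 0.5 * h * k1, a)
            k3 = f(w + 0.5 * h * k2, a)
            k4 = f(w + h * k3, a)
            w = w + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = w[0]
    # 5% observational Gaussian noise, scaled by the signal's spread
    return xs + noise * np.std(xs) * rng.standard_normal(n)
```

Successive peaks of the returned x series, plotted against the drifting value of a, give the approximate bifurcation diagram shown in the top panel of Figure 4.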

















Figure 4: Estimation of bifurcation in the Rossler system (high noise). The top panel (a) shows the successive peak values computed from the noisy time series (an approximate estimate of the bifurcation in the underlying system). The bottom plot (b) shows the estimated peak values as a function of the bifurcation parameter a for the model estimated from the noisy time series (shown in Figure 3).

For |b| > 1, two stable and one unstable local manifold exist. Thus the fixed point represents a saddle point. The fixed points A°_2 and A°_3 behave analogously.


Figure 2: Trajectory of the 3-dimensional signal A(t). It starts near one corner and passes three corners, before it returns to the initial corner. The numbers denote the time steps of the trajectory at their locations.

A 3-dimensional trajectory is calculated by 2200 integration steps with the initial condition A(t = 0) = (0.03, 0.2, 0.8) and is shown in Fig. 2. The trajectory passes the saddle points A°_3 = (0,0,1), A°_1 = (1,0,0) and A°_2 = (0,1,0) in this sequence, and then returns to A°_3. By a composition corresponding to Eq. 3.1, we obtain a spatiotemporal signal (Fig. 3), and one recognizes that these fixed points correspond to the spatial modes v_3, v_1 and v_2. Now we aim at extracting the fixed points back from the signal without using any prior knowledge about the internal dynamics; we use the raw signal as input for our method. In the next section, a clustering approach is introduced and its application to the simulated data is discussed.

Figure 3: Spatiotemporal signal as a temporal sequence of spatial patterns. One recognizes transitions between three basis patterns and a return to the initial pattern.


3 Fixed Point Clustering

We assume a signal trajectory which shows a sequence of segments governed by saddle point dynamics (Fig. 4). Under the hypothesis that these segments comprise the main functionality of the underlying system, we aim to extract them from the signal. According to Fig. 4, trajectories approach saddle points along their stable manifolds, whereas they leave the vicinity of the fixed points along the unstable manifolds. The signal points accumulate close to the fixed points if the signal is sampled at a constant rate. This accumulation also represents a point cluster in data space. Subsequently, stable manifolds in multi-dimensional signals lead to point clusters (at constant sampling rate) and their detection can be treated as a recognition problem for these clusters in data space [12]. In the present paper we use the K-Means algorithm (see e.g. [13]) to detect regions in data space with a high density of data points. Though there are highly developed and optimized routines [14,15,16] to detect point clusters, we choose one of the simplest methods, which has proved to be useful here.

Figure 4: Sketch of a trajectory which passes two fixed points, with their stable and unstable manifolds. The transition part is denoted by a dashed line.



3.1 The clustering algorithm

An N-dimensional spatiotemporal signal can be described by a data vector q(t) ∈ R^N, where the component q_j(t_i) represents a data point at time t_i and detection channel j. The clustering algorithm aims at cluster centers {k_i} whose mean Euclidean distance to a set of data points q(t_j) is minimal. The presented implementation follows Moody et al. [17] and is sketched in Fig. 5. In many clustering algorithms, the number of clusters k is unknown a priori. We increase k from 2 and analyze each clustering result. This approach leads to a criterion for valid clusters and is discussed in the next sections. Cluster centers k^0 are initialized at random locations and their Euclidean distances to each data point are calculated. K-Means defines memberships of data points to a cluster by the smallest Euclidean distance to its center. Thus, data are segmented into k clusters and new cluster centers k^1 are calculated as means of the clustered data points. Distances between data points and centers k^n are re-estimated until a convergence condition is fulfilled. This criterion can be set either as an upper Euclidean distance limit between successive cluster centers k^n, k^{n+1} or as a number of iterations. We choose to limit the number of iterations to 25.

The steps are: (1) choose the number of clusters k; (2) choose initial cluster centers randomly; (3) calculate the distances from the data points to the cluster centers; (4) assign each data point to its nearest cluster center; (5) calculate new cluster centers as the mean of the nearest data points; (6) repeat from step (3) until ||k^n - k^{n+1}|| < ε or the iteration number is exceeded.

Figure 5: The implementation steps of the K-Means algorithm.
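The loop in Fig. 5 translates directly into code. A plain NumPy sketch (the seed handling and the guard against empty clusters are our implementation choices; the 25-iteration cap follows the text):

```python
import numpy as np

def kmeans(Q, k, n_iter=25, seed=0):
    # Q: (T, N) array of T data points in N-dimensional data space.
    rng = np.random.default_rng(seed)
    # initial centers: k distinct data points chosen at random
    centers = Q[rng.choice(len(Q), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Euclidean distance of every point to every center
        d = np.linalg.norm(Q[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)        # membership: nearest center
        for j in range(k):
            if np.any(labels == j):      # keep the old center if empty
                centers[j] = Q[labels == j].mean(axis=0)
    return centers, labels
```

For the analysis in the text, the per-point distances `d` (rather than only the final labels) are the quantity of interest, since they reveal when the trajectory approaches and leaves each cluster.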


3.2 Results of clustering

In a first step, we choose the number of clusters as k = 3 and apply the K-Means algorithm to the 144-dimensional simulated dataset. In Fig. 6, the Euclidean distance from each data point to the determined cluster centers is plotted with respect to the temporal point sequence. When trajectory







Figure 6: Cluster results of the K-Means algorithm with k = 3. The plot shows the Euclidean distance of each data point to the detected clusters. Changes of the signal in high-dimensional space between clusters are recognized.

points are near to or far from a cluster center, their Euclidean distance to that center is correspondingly small or large. These changes can be observed in Fig. 6 as decreasing and increasing Euclidean distances in time. We consider a data point to be a member of the cluster whose center is closest to the point. The hypothesis that cluster centers are related to stable manifolds or, in general, fixed points thus allows an identification of regions of fixed points. The borders of clusters are marked by vertical dashed lines in Fig. 6. A change of 3 states is observed, where the first occurring cluster returns at the end of the signal. Increasing the number of detected clusters k from 2 to 7, we obtain the distance-time plots shown in Fig. 7. The time windows where the signal has reached the vicinity of a cluster center remain similar for the investigated clustering results. Since the algorithm has to find k clusters, though there might be only a limited number of clusters k_d < k, void clusters are detected at the borders of valid clusters. This leads to a first criterion for valid clusters. A cluster can be called valid if

- its width and location in time remain more or less independent of the number of clusters,
- the Euclidean distance of the data points of a cluster to its center is obviously smaller than the Euclidean distance of the points to the next nearest cluster center, and
- the width of the cluster-time window is not too small.

Although these criteria are rather heuristic than formal, they have proved to be useful in practice [12]. Now we try to evolve them quantitatively. The first item





Figure 7: Cluster results for k = 2, ..., 7. The Euclidean distances between the data points and the detected cluster centers are shown.

can be formulated as a sum over all clustering results: valid contributions are additive if they occur for all k, while others vanish in the sum as small contributions. Thus the contribution of a valid cluster to the sum has to be large, whereas unreliable clusters should contribute with small values. A good quantity for these contributions is the area between the difference curves of the signal-to-nearest-cluster distance and the signal-to-next-nearest-cluster distance. This definition allows the analytical formulation of the second item and is outlined in Fig. 8. Each data point t_i obtains an index corresponding to the cluster j it is a member of. The index equals the relative area A_j/T, where T is the number of data points. By summing up the indices over K clustering realizations for every data point, we obtain a Cluster Quality Measure (CQM) for every data point:

A⁽ᵏ⁾(t_i) = A_j⁽ᵏ⁾ / T ,    CQM(t_i) = Σ_{k=1}^{K} A⁽ᵏ⁾(t_i) / Σ_{k=1}^{K} Σ_i A⁽ᵏ⁾(t_i)

Figure 8: Sketch illustrating the introduced criterion of cluster validity. The area A_j between two distance curves indexes the data points which belong to cluster j. Large areas indicate a high measure of cluster quality.
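Under our reading of the (partly garbled) formula, the per-point contribution of one clustering run is the gap between the next-nearest and nearest center distances, normalized by the number of data points; summing over runs and normalizing gives the CQM. A hypothetical sketch, with purely illustrative input:

```python
import numpy as np

def cqm(dist_list):
    """dist_list holds one (T, k) matrix of point-to-center distances per
    clustering run; the per-point contribution is the gap between the
    next-nearest and nearest center distances, summed over runs and
    normalized to unit total."""
    T = dist_list[0].shape[0]
    contrib = np.zeros(T)
    for d in dist_list:
        d_sorted = np.sort(d, axis=1)
        contrib += (d_sorted[:, 1] - d_sorted[:, 0]) / T
    return contrib / contrib.sum()

# invented input: two clustering runs over six "time points", two centers each
d1 = np.array([[0.1, 2.0], [0.1, 2.0], [2.0, 0.1],
               [2.0, 0.1], [0.1, 2.0], [0.1, 2.0]])
quality = cqm([d1, d1 + 0.05])
```

Plateaus of `quality` would then mark valid clusters, delimited by rapid changes, as described for Fig. 9.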

The application to the simulated data with K = 30 leads to the CQM shown in Fig. 9, and four clusters with a high CQM are recognized. Clusters appear as plateaus of the CQM, whose borders are located at rapid changes of the CQM. The original and detected cluster time windows, shown in Figs. 6, 7 and summed up in Fig. 9, are very similar and indicate a correct detection of the fixed points. Now we aim at modeling the dynamics of the detected trajectory segments by a nonlinear spatiotemporal analysis. If we can obtain reliable models, then the dynamics of the trajectory segments and, consequently, of the whole signal is described.

4 Spatiotemporal Modeling

In this section, we introduce a nonlinear spatiotemporal analysis 19 . It determines optimal projections of high-dimensional signals onto a low-dimensional basis and synchronously fits a deterministic dynamical system describing the low-dimensional projections. The presented method is based on Principal Component Analysis (PCA), also known as Karhunen-Loeve expansion or Empirical Orthogonal Functions. These methods aim at a few orthonormal spatial modes and projections which explain most of the variance of a multi-dimensional signal. Extensions towards fits of deterministic dynamical models were proposed by Kirby 20 and Ramsay et al. 21 . But these methods leave the basis of modes orthonormal. The orthogonality constraint was first abolished in meteorology by Kwasniok 22,23 and in neuroscience by Uhl 24,25,26 . We outline the latter method in the following.

Figure 9: The Cluster Quality Measure for every data point. Plateaus denote valid clusters, which are delimited by rapid changes.

A spatiotemporal signal q(t) can be decomposed into spatial modes v_i and amplitudes x_i(t) by

q(t) = Σ_i x_i(t) v_i ,    x_i(t) = q(t) · v_i† ,

with v_i · v_j† = δ_ij. The biorthogonal modes v_i, v_i† can be determined by minimizing a cost function

V_1 = ⟨(q − Σ_i x_i v_i)²⟩ / ⟨q²⟩ ,
where ⟨ · ⟩ denotes the time average. A synchronous optimal fit of a dynamical system

ẋ_i(t) = Γ⁰_i + Σ_j Γ¹_ij x_j(t) + Σ_j Σ_k Γ²_ijk x_j(t) x_k(t) + ··· = f_i[x_j] ,

which describes the dynamics of the projections x_i(t), can also be obtained from a cost function

V_2 = Σ_i ⟨(ẋ_i(t) − f_i[x_j])²⟩ / ⟨ẋ_i²⟩ .


A mutual cost function allows the derivation of the spatial modes {v_i}, {v_i†} and of a dynamical system f[x_j] synchronously:

V = p · V_1 + (1 − p) · V_2 .

The parameter p weights the optimization of the spatial modes against that of the dynamical system. Numerical implementation of the method and its application to epilepsy data 27 and Event-Related-Potential (ERP) data 25 led to new insights into the dynamics of brain signals. But some problems remain, such as the influence of the weight factor p on the results and the extensive numerics for a high number of spatial modes. In order to improve the method, we provide an analytical derivation of optimal spatial modes and of a dynamical system. The new method optimizes a similar cost function

V = Σ_i ⟨(q − (q · w_i†) w_i)²⟩ / ⟨q²⟩ + ε Σ_i ⟨(ẋ_i(t) − f_i[x_j])²⟩ / ⟨ẋ_i²⟩ + Σ_ij Γ_ij (w_i† · w_j − δ_ij) + Σ_i α_i (w_i² − 1) ,

where Γ, α represent Lagrange multipliers of the added constraints and ε denotes a weighting factor. Due to the nonlinear differential equation system ẋ = f[x_j], the variations of V with respect to the biorthogonal basis lead to nonlinear coupled vector equations


0 = −2 C w_k + 2 C w_k† + Σ_{j=1}^{N} Γ_kj w_j + ε ∂V_d/∂w_k† ,

0 = −2 C w_k† + 2 (w_k† C w_k†) w_k + Σ_{j=1}^{N} Γ_jk w_j† + 2 α_k w_k + ε ∂V_d/∂w_k .

These can be solved by a perturbation expansion in ε:

w_k† = v_k† + ε w_k†⁽¹⁾ + ··· ,    Γ_kj = 0 + ε Γ_kj⁽¹⁾ + ··· ,

w_k = v_k + ε w_k⁽¹⁾ + ··· ,    α_k = 0 + ε α_k⁽¹⁾ + ··· ,

where the derivatives ∂V_d/∂w_k are likewise expanded about the zeroth-order PCA modes v_k,

∂V_d/∂w_k = ∂V_d/∂w_k |₀ + ε ∂V_d/∂w_k |₁ + ··· .
is the mean value of the radius R₁ for the N = 3 dimensional Ginibre ensemble 12 . We plot in Fig. 5 the distributions of X₁ and C_β. On the basis of a comparison of the results for the Gaussian ensembles, the Poisson ensemble, and the Ginibre ensemble we formulate 6,7,8,9,10,11,12 :

Homogenization Law: Random eigenenergies of statistical systems governed by Hamiltonians belonging to the Gaussian orthogonal ensemble, to the Gaussian unitary ensemble, to the Gaussian symplectic ensemble, to the Poisson ensemble, and finally to the Ginibre ensemble tend to be homogeneously distributed. It can be restated mathematically as follows:

Figure 3: The probability density function of the radius R₁ of the second difference for the Ginibre ensemble.

Figure 4: The probability density function of the argument Φ₁ of the second difference for the Ginibre ensemble.

If H ∈ GOE, GUE, GSE, PE, or the Ginibre ensemble, then Prob(D) = max, where D is the random event corresponding to the vanishing of the second difference's probability distributions at the origin.

Both of the above formulations follow from the fact that the second differences' distributions assume global maxima at the origin for the above ensembles 6,7,8,9,10,11,12 . In the Coulomb gas analogy, the vectors of relative positions of the charges statistically most probably vanish. It means that the vectors of relative positions tend to be equal to each other. Thus, the relative distances of the electric charges are most probably equal. We call such a situation a stabilization of the structure of the system of electric charges on the complex plane.
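As a numerical illustration of the construction (sample size, seed and the modulus ordering of the eigenvalues are assumptions of this sketch, not taken from the text):

```python
import numpy as np

def ginibre_second_differences(n_matrices=2000, N=3, seed=0):
    """Sample N x N Ginibre matrices (i.i.d. standard complex Gaussian
    entries) and return the complex second differences
    A1 = E3 - 2*E2 + E1 of their eigenvalues, ordered by modulus."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_matrices, dtype=complex)
    for m in range(n_matrices):
        J = (rng.standard_normal((N, N))
             + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
        E = np.linalg.eigvals(J)
        E = E[np.argsort(np.abs(E))]      # order eigenvalues by modulus
        out[m] = E[2] - 2.0 * E[1] + E[0]
    return out

A1 = ginibre_second_differences()
R1, Phi1 = np.abs(A1), np.angle(A1)       # radius and argument of A1
```

Under these conventions, histograms of `R1` and `Phi1` would approximate the densities of the kind shown in Figs. 3 and 4.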

Figure 5: The probability density function of the rescaled second differences for the Ginibre ensemble X₁ (solid line: G), for GOE(3) C₁ (dashed line: O), for GUE(3) C₂ (solid line: U), for GSE(3) C₄ (dashed line: S), and for PE C₀ (dashed line: P), respectively.

Acknowledgements

It is my pleasure to most deeply thank Professor Jakub Zakrzewski for formulating the problem. I also thank Professor Antoni Ostoja-Gajewski for creating an optimal environment for scientific work and Professor Wlodzimierz Wojcik for giving me access to computer facilities.

References
1. F. Haake, Quantum Signatures of Chaos (Berlin Heidelberg New York, Springer-Verlag, 1990), Chapters 1, 3, 4, 8, pp 1-11, 33-77, 202-213.
2. T. Guhr, A. Müller-Groeling, and H. Weidenmüller, Phys. Rept. 299, 189 (1998).
3. M. L. Mehta, Random Matrices (Boston, Academic Press, 1990), Chapters 1, 2, 9, pp 1-54, 182-193.
4. J. Ginibre, J. Math. Phys. 6, 440 (1965).
5. M. L. Mehta, Random Matrices (Boston, Academic Press, 1990), Chapter 15, pp 294-310.
6. M. M. Duras and K. Sokalski, Phys. Rev. E 54, 3142 (1996).
7. M. M. Duras, Finite difference and finite element distributions in statistical theory of energy levels in quantum systems (PhD thesis, Jagellonian University, Cracow, July 1996).
8. M. M. Duras and K. Sokalski, Physica D 125, 260 (1999).
9. M. M. Duras, Proceedings of the Sixth International Conference on

Squeezed States and Uncertainty Relations, 24 May-29 May 1999, Naples, Italy (Greenbelt, Maryland: NASA), at press.
10. M. M. Duras and K. Sokalski, Acta Phys. Pol. B 27, 2027 (1996).
11. M. M. Duras, K. Sokalski, and P. Sulkowski, Acta Phys. Pol. B 28, 1023 (1997).
12. M. M. Duras, J. Opt. B: Quantum Semiclass. Opt. 2, 287 (2000).


STUDY OF A C-INTEGRABLE PARTIAL DIFFERENTIAL EQUATION

A. DI GARBO (1)(2), S. CHILLEMI (1) and L. FRONZONI (3)
(1) Istituto di Biofisica, CNR, via Alfieri 1, 56010 Ghezzano (Pisa), Italy.
(2) Istituto Nazionale di Ottica Applicata, Largo E. Fermi 6, Firenze, Italy.
(3) Dipartimento di Fisica dell'Università di Pisa, Piazza Torricelli 2, 56100 Pisa, Italy.
E-mail: [email protected]

We study the properties of an n + 1-dimensional partial differential equation. It is shown that this equation is C-integrable; moreover, explicit solutions of it are determined for different spatial dimensions. We find numerically that, in some cases, its solutions exhibit a soliton-like behaviour.

1 Introduction


A nonlinear partial differential equation (PDE) is said to be C-integrable if it can be linearized by an appropriate change of variables 1,2,3 . Classical examples of PDEs belonging to this class are those of Burgers and of Eckhaus: both can be linearized by a change of the dependent variable 2 . The study of C-integrable PDEs is interesting for several reasons. Firstly, they can be thought of as a convenient theoretical laboratory to study in explicit detail the properties of these nonlinear PDEs. Furthermore, the study of C-integrable PDEs could be useful to better understand nonlinear phenomena in multidimensions. The purpose of this paper is to study the following n + 1-dimensional C-integrable PDE:

ψ_tt − ∇²ψ + [∇_n ψ · ∇_n ψ − ψ_t²] g(ψ) = 0    (1)

where ∇_n is the standard n-dimensional gradient, ∇² = ∇_n · ∇_n and g is a generic function of the scalar field ψ. This PDE can be linearized, as we will show, by a change of the dependent variable. This PDE could be relevant in applicative contexts: indeed there are examples of model PDEs used in field theory that, in some particular cases, can be reduced to equation (1). For instance, in reference 5 the Yang-Mills-Higgs system in 2 + 1 dimensions with spherical symmetry reads

ψ_tt − r⁻¹ ∂_r(r ∂_r ψ) + [ψ_r² − ψ_t²] ψ/(1 + ψ²) = 0.


This last PDE is in the form of equation (1) provided that g(ψ) = ψ/(1 + ψ²). In the following we will study some general properties of equation (1) and will explicitly determine some of its solutions.

2 Properties of the PDE

Let us show that equation (1) can be obtained by reduction from the Chiral equation. Let us suppose to have a group P and a matrix J ∈ P. Then the generalized n + 1-dimensional Chiral equation is defined as

∂_t(J_t J⁻¹) − Σ_{m=1}^{n} ∂_{x_m}(J_{x_m} J⁻¹) = 0,    (4)

where the x_m (m = 1, 2, ..., n) are Cartesian coordinates. Now, by setting J = A(ψ) ∈ R it follows that equation (4) is equivalent to the following equation

ψ_tt − ∇²ψ + [∇_n ψ · ∇_n ψ − ψ_t²] (A_ψ/A − A_ψψ/A_ψ) = 0

that reduces to equation (1) if A(ψ) satisfies the ordinary differential equation A_ψψ A − A_ψ² = −A A_ψ g(ψ) (A_ψ A ≠ 0). Moreover, the PDE (1) can be obtained as the Euler-Lagrange equation corresponding to the Lagrangian density

L = ½ [ψ_t² − ∇_n ψ · ∇_n ψ] Q(ψ),

where Q(ψ) = exp[−2 ∫ g(ψ) dψ]. Similarly, the corresponding Hamiltonian density is given by

H = ½ [ψ_t² + ∇_n ψ · ∇_n ψ] Q(ψ).


Now we show that equation (1) is a C-integrable PDE. To this aim we make the following change of the dependent variable

ψ(t, x) = F(k(t, x)).    (8)

Then, after some manipulations, equation (1) reduces to the wave equation

k_tt − ∇²k = 0    (9)

if the function F(k) satisfies the ordinary differential equation F_kk − F_k² g(F) = 0, with F_k ≠ 0 in order that the transformation (8) be locally invertible. Finally, we discuss how the Cauchy problem for equation (1) can be solved. Let us assume that F(k) and its inverse are known explicitly and let ψ(0, x) and ψ_t(0, x) be the initial conditions of the Cauchy problem for PDE (1). Then, by using the inverse transformation between ψ(t, x) and k(t, x), we determine the initial conditions for the Cauchy problem corresponding to equation (9). Here we are assuming that ψ(0, x) and ψ_t(0, x) are chosen such that the Cauchy problems corresponding to the PDEs (1) and (9) are well posed. The final step is to solve the wave equation and then to come back, by using equation (8), to the solution ψ(t, x) of PDE (1). We anticipate that particular functional forms of g(ψ) could exist for which the transformation (8) cannot be written explicitly.

3 Explicit solutions

In this section we determine explicit solutions of equation (1) for some well-defined functional forms of the function g(ψ).

Example 1: g(ψ) = ½ cot(ψ/2)

Let us determine the solutions of equation (1) with spherical symmetry. By requiring that ψ(t, r) = ψ(t, x₁, x₂, ..., x_n), equation (1) yields

ψ_tt − ψ_rr − ((n − 1)/r) ψ_r + ½ [ψ_r² − ψ_t²] cot(ψ/2) = 0    (10)


where r = √(Σ_{i=1}^{n} x_i²). Now, by setting ψ(t, r) = 4 arctan φ(t, r) we get the following evolution equation

[1 + φ²][φ_tt − φ_rr − ((n − 1)/r) φ_r + (φ_r² − φ_t²)/φ] = 0    (11)

whence, for φ(t, r) = exp[k(t, r)], we get

k_tt − ((n − 1)/r) k_r − k_rr = 0.    (12)


Finally, by changing the independent variables in the previous equation to y = (r + t)/2 and z = (r − t)/2, we get the Euler-Poisson-Darboux equation

k_yz + (η/(y + z)) [k_y + k_z] = 0;    η = (n − 1)/2;    n = 1, 2, ...    (13)

whose general solution is known 4 . Let us consider two cases: a) η integer; b) η half-integer.


a) η = (n − 1)/2;  η = 1, 2, 3, ...

In this case it can be shown that the general solution of equation (13) is given by (14), where c is a constant and F and G are arbitrary functions.

b) η = (n − 1)/2;  η = 1/2, 3/2, 5/2, ...

Setting η = p + 1/2 (p = 0, 1, 2, ...), the general solution of equation (13) can be written as (15),
where c is a constant and F, G are both arbitrary functions. Now we determine some solutions of equation (10), starting with the case n = 2 (the case n = 1 will be treated later). We set F(x) = M and G(x) = N, where M, N are constants; the value n = 2 implies that we must use equation (15). In this case we get for the solution of equation (10)

ψ(t, r) = 4 arctan{exp[c + (M − N) arcsin(t/r)]}    (16)

defined for |r| > |t|. For n = 3 we must use equation (14) and the result is

ψ(t, r) = 4 arctan{exp[c + F((r + t)/2)/r + G((r − t)/2)/r]}    (17)

where F and G are arbitrary functions and c is a constant. A particular solution, in this last case, is

ψ(t, r) = 4 arctan{exp[c + (at + b)/r]},

where a and b are constants.
Similarly, other solutions of equation (10) can be found for any spatial dimension. Now we show that equation (1) for g(ψ) = ½ cot(ψ/2) possesses solitary wave solutions in the case n = 1. From the results of the previous section it follows that

ψ(t, x) = 4 arctan exp[σγ(x − vt)];    σ = ±1

is a traveling wave solution of equation (1) for n = 1. The parameter γ is given by γ = 1/√(1 − v²). Similarly, it can be shown that ψ(t, x) = 4 arctan{exp[σγ(x + vt)]} is a solution as well.

Example 2: g(ψ) = 1/ψ

In this case the linearizing transformation is ψ = F(k) = exp(k), so that every solution k(t, x) of the wave equation (9) generates a solution of equation (1). Examples with spherical symmetry are

ψ(t, r) = exp[a(r² + 2t²)];    n = 2

ψ(t, r) = exp[at/r];    n = 3

ψ(t, r) = exp{[F(r + t) + G(r − t)]/r};    n = 3    (23)

with F, G arbitrary functions,
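The n = 1 solitary wave can be spot-checked by inserting it into equation (1) with finite differences; the velocity, sample point and step size below are arbitrary choices:

```python
import numpy as np

v = 0.3
gam = 1.0 / np.sqrt(1.0 - v**2)           # gamma = 1 / sqrt(1 - v^2)
h = 1e-4                                  # finite-difference step
t0, x0 = 0.2, 0.7                         # arbitrary sample point

def psi(t, x):
    # the n = 1 kink solution with sigma = +1
    return 4.0 * np.arctan(np.exp(gam * (x - v * t)))

# central finite differences for the derivatives entering eq. (1)
psi_tt = (psi(t0 + h, x0) - 2 * psi(t0, x0) + psi(t0 - h, x0)) / h**2
psi_xx = (psi(t0, x0 + h) - 2 * psi(t0, x0) + psi(t0, x0 - h)) / h**2
psi_t = (psi(t0 + h, x0) - psi(t0 - h, x0)) / (2 * h)
psi_x = (psi(t0, x0 + h) - psi(t0, x0 - h)) / (2 * h)
g = 0.5 / np.tan(psi(t0, x0) / 2.0)       # g(psi) = (1/2) cot(psi/2)
residual = psi_tt - psi_xx + (psi_x**2 - psi_t**2) * g
```

The residual vanishes up to discretization error, confirming the traveling-wave solution.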


where a is a constant. For negative values of a we get localized solutions.

Example 3: g(ψ) = ψ/(1 + ψ²)

By proceeding as in the previous cases it follows that the nonlinear transformation of the dependent variable that linearizes equation (1) is ψ = F(k) = sinh(k). Consequently, examples of solutions of equation (1) can be obtained from the previous cases by simply changing the corresponding functional form of F(k) to the new one. It is worth noting that there are functional forms of g(ψ) for which it is not possible to get explicitly the transformation (and its inverse) that linearizes equation (1). Let us discuss this point by considering the case g(ψ) = ψ. For this choice the corresponding linearizing transformation is the solution of the equation dF/dk = exp[F²/2 + c], where c is an arbitrary integration constant. Now, by setting c = 0, the previous equation is equivalent to ∫₀^F exp[−s²/2] ds = k − k₀. We do not know explicitly the expression of the definite integral involved, but it is of course ∫₀^F exp[−s²/2] ds = √(π/2) Φ(F/√2), where Φ is the probability integral. To get information, in this case, on the kind of solutions exhibited by equation (1), we performed some numerical simulations for spatial dimension n = 1. A spatial discretization was created with Δx = 0.02 and the spatial derivatives were evaluated as ψ_x(i) ≈ (ψ(i + 1) − ψ(i − 1))/(2Δx) and ψ_xx(i) ≈ (ψ(i + 1) − 2ψ(i) + ψ(i − 1))/Δx².

The boundary conditions used were ψ(0) = ψ(2) and ψ(N + 1) = ψ(N − 1), where N represents the total number of points in the spatial mesh. Then, the corresponding set of 2N coupled ordinary differential equations was integrated with a fourth-order Runge-Kutta method with time step Δt = 0.0025. The initial conditions were ψ(x, 0) = 1/(1 + 10x²) and ψ_t(x, 0) = 0. An example of this numerical simulation is reported in figure 1. As can be seen from figure 1, two soliton-like traveling waves are created, moving with the same velocity in opposite directions.
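A minimal re-implementation of this scheme (the grid extent, integration time and the use of a reflection pad for the stated boundary conditions are our choices; only Δx, Δt and the initial condition follow the text):

```python
import numpy as np

def rhs(state, dx, g):
    """First-order form of eq. (1) in 1+1 dimensions:
    psi_tt = psi_xx - (psi_x**2 - psi_t**2) * g(psi)."""
    psi, vel = state
    p = np.pad(psi, 1, mode="reflect")    # psi(0)=psi(2), psi(N+1)=psi(N-1)
    psi_x = (p[2:] - p[:-2]) / (2.0 * dx)
    psi_xx = (p[2:] - 2.0 * psi + p[:-2]) / dx**2
    return np.array([vel, psi_xx - (psi_x**2 - vel**2) * g(psi)])

def rk4_step(state, dt, dx, g):
    # classical fourth-order Runge-Kutta step
    k1 = rhs(state, dx, g)
    k2 = rhs(state + 0.5 * dt * k1, dx, g)
    k3 = rhs(state + 0.5 * dt * k2, dx, g)
    k4 = rhs(state + dt * k3, dx, g)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt = 0.0025
x = np.linspace(-5.0, 5.0, 501)           # grid spacing dx = 0.02
dx = x[1] - x[0]
state = np.array([1.0 / (1.0 + 10.0 * x**2), np.zeros_like(x)])
for _ in range(800):                      # integrate up to t = 2
    state = rk4_step(state, dt, dx, lambda u: u)   # g(psi) = psi
```

After integration, `state[0]` holds ψ(x, t = 2); the initial hump splits into two counter-propagating pulses of the kind shown in figure 1.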


Figure 1: Creation of two soliton-like traveling waves of equation (1) for g(ψ) = ψ. Δ = 3.125: 1) t = Δ; 2) t = 2Δ; 3) t = 3Δ; 4) t = 4Δ.



We studied equation (1) in the case of a generic functional form of g(ψ). For particular g(ψ) this PDE could be relevant in applicative contexts. We have shown that this PDE is obtained by reduction from the Chiral equation and that it is C-integrable. We found explicit solutions of it for several spatial dimensions and for different functional forms of g(ψ). Moreover, we have shown numerically that, for the choice g(ψ) = ψ, the creation of soliton-like solutions occurs.

Acknowledgements

The authors would like to thank Michele Barbi for discussions and for reading the manuscript.

References
1. Calogero, F. (1991) "Why are certain nonlinear PDEs both widely applicable and integrable", in: What is Integrability?, edited by V. E. Zakharov, Springer, New York, 1-62.
2. Calogero, F. (1992) "C-integrable nonlinear partial differential equations in N+1 dimensions", J. Math. Phys. 33:1257-1271.
3. Calogero, F. (1993) "Universal C-integrable nonlinear partial differential equation in N+1 dimensions", J. Math. Phys. 34:3197-3209.
4. Chester, C. R. (1971) Techniques in Partial Differential Equations, McGraw-Hill Book Company.
5. Sutcliffe, P. M. (1993) "Yang-Mills-Higgs solitons in 2+1 dimensions", Phys. Rev. D 47:5470-5476.
6. Ward, R. S. (1985) "Slowly-moving lumps in the CP¹ model in 2+1 dimensions", Phys. Lett. B 158:424-428.



PEDRO GONÇALVES LIND
Grupo de Meteorologia e Climatologia, Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal

JOÃO ALEXANDRE MEDINA CORTE-REAL
Grupo de Meteorologia e Climatologia, Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal

JASON ALFREDO CARLSON GALLAS
Grupo de Meteorologia e Climatologia, Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal, and Instituto de Física, Universidade Federal do Rio Grande do Sul, 91501-970 Porto Alegre, Brazil
http://www.if.ufrgs.br/~jgallas

We argue that the time-evolution of lattice configurations (patterns) representing asymptotic attractors in one-dimensional diffusively coupled map lattices subjected to periodic boundary conditions may be universally classified into five generic classes, independently of the nature or state of the local oscillators. In addition, we describe an algorithm allowing the automatization of this classification, thus providing an efficient tool for the systematic investigation of the parameter space and of the relative abundance and growth of attractors as a function of lattice size and local parameters.



Recently, the complicated spatio-temporal evolution of extended dynamical systems has become a subject of intensive research, and considerable progress in understanding their behavior was achieved by simulating their dynamics with arrays of identical chaotic oscillators represented by nonlinear mappings coupled diffusively and updated synchronously 1,2,3,4,5 . Using arrays of interacting oscillators as an alternative to more traditional modelings, usually based on large sets of partial differential equations, is particularly appealing in situations where the computational effort could be greatly reduced. One computationally very demanding problem is that of simulating climate variability and change 6 . It is clear that any fast way of ascertaining the properties of the atmosphere would be very welcome. With

this motivation in mind, we ask: is it possible to use arrays of coupled map lattices to simulate any one of the plethora of physical phenomena underlying climate variability? Could any of these phenomena be understood with a simplified model of the planetary atmosphere in which global macroscopic variations result from a cooperative interaction, or coupling, between regional microclimatic units behaving locally independently? A particularly important effect in the physics of the atmosphere, which seems to be a good candidate to have its properties described reasonably well by coupled map lattices, is that associated with traveling waves. Wave activity in general and traveling waves in particular are common and rather crucial atmospheric phenomena. They are associated with the weather (e.g. Rossby waves 7 ), low-frequency variability (e.g. teleconnection patterns 8 ), troposphere-stratosphere interactions (e.g. sudden warmings 9 ), intraseasonal oscillations (e.g. the Madden-Julian oscillation 10 ) and interannual oscillations (e.g. the El Niño-Southern Oscillation 11 ). Traveling waves are also an important component of ocean dynamics (e.g. the Kelvin wave in the tropical Pacific ocean 12 ). Wave dynamics underlies atmospheric evolution ruled by a number of collective phenomena and depends on global interchanges among natural oscillations mediated by traveling waves. Thus, it is very important to understand the subtleties of traveling-wave generation to be able to improve our ability of forecasting long-term behaviors of the atmosphere (climatic changes). Several years ago, Kaneko 13 reported the existence of traveling waves in coupled map lattices. Very recently, Carretero-Gonzalez et al. 14 investigated aspects of the traveling interface which separates two adjacent stable phases in coupled map lattices. In spite of the work so far, much remains to be done if one wishes to use the traveling waves seen in coupled map lattices to model actual physical phenomena.
For instance, one needs an unambiguous classification of all possible pattern dynamics supported by the lattice, a measure of the number of different patterns for a given lattice with L elements, a characterization of the abundance of different patterns as L grows towards the thermodynamic limit (L → ∞), a characterization of typical transient times to reach particular attractors, etc. Even for the simple case of quadratic dynamics one still does not have a systematic investigation of the possible collective behaviors as a function of the local parameter, just some case studies being available. Furthermore, a systematic investigation of the time interval needed to 'reach' attractors is also missing. Good estimates of this interval are important because simulations and statistics performed after much longer transient times frequently reveal quite regular time-evolutions 15 . In other words, transient times required to "quench" lattice dynamics [i.e. to come sufficiently close to an attractor] can

be quite different from (larger than) those considered so far. In this respect, the situation resembles that found for cellular automata 16 , where after very long transients an apparently complicated dynamics turns out to be rather tame. An additional important point is the fact that the use of a homogeneous lattice ruled by the quadratic map as local oscillator seems too restrictive. This happens because, for a given value of the parameter, the local dynamics can only display a single stable attractor (apart from the trivial attractor at infinity). Surely, realistic models call for individual units (maps) with richer dynamics, e.g. supporting multistability 17 and non-homogeneities 15 . As a first step towards analyzing these and other questions, we present in this paper first results of our reconsideration of the phenomena of traveling waves, still for lattices of quadratic maps. A crucial point here will be to classify adequately and exhaustively the possible time-evolutions of patterns observed in coupled map lattices. Although the classification is obtained from a detailed study of quadratic dynamics, it is generic, remaining true for local dynamics of any sort 15 . In the present paper we argue that the time-evolution of lattice configurations, patterns, observed in one-dimensional diffusively coupled maps can be reduced to the consideration of five different basic classes. Then, we describe an algorithm providing a means of classifying time-evolutions automatically. To this end we discuss the spatial and temporal evolution of patterns and introduce two parameters, r and d, to quantify the evolution of lattice dynamics. Finally, we discuss briefly certain dynamical characteristics of each class by considering return maps.
A companion paper, to appear elsewhere, uses the concepts and the algorithm introduced here to scan systematically the parameter space of the system and to study quantitatively relevant features of traveling waves as modeled by lattices of coupled maps 15 with richer local dynamics.

2 Coupled Map Lattices

As is well-known 18 , coupled map lattices (CMLs) are dynamical systems formed by interconnecting L individual sites (oscillators) whose local dynamics is controlled by a nonlinear map f(x). In particular, diffusive CMLs, the models considered here, are ruled by the following equation of motion

x_{t+1}(i) = f(x_t(i)) − ε { f(x_t(i)) − ½ [ f(x_t(i − 1)) + f(x_t(i + 1)) ] },    (6.1)

where x_t(i) represents the value of a continuous physical variable x at discrete times t and at the i-th site of the lattice. The parameter ε varies in the interval

Figure 1: The dynamics of the local oscillators, Eq. 6.2, as a function of a.

0 < ε < 1 and measures the relative coupling strength among neighboring sites of the lattice. In this work, the local dynamics is controlled by the familiar logistic (quadratic) map

x_{t+1} = f(x_t) = 1 − a x_t²,    (6.2)

where a is the local parameter. In addition, we use periodic boundary conditions: x_t(L + 1) = x_t(1). The bifurcation diagram shown in Fig. 1 illustrates the familiar behavior of the local oscillators, Eq. 6.2, when uncoupled.
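Equations (6.1) and (6.2) can be iterated directly; in the minimal sketch below, L = 64 and ε = 0.5 follow the text, while the local parameter a, the transient length and the seed are our illustrative choices:

```python
import numpy as np

def cml_step(x, a, eps):
    """One synchronous update of the diffusive CML, Eq. (6.1), with the
    logistic local map of Eq. (6.2) and periodic boundary conditions."""
    fx = 1.0 - a * x**2
    return fx - eps * (fx - 0.5 * (np.roll(fx, 1) + np.roll(fx, -1)))

L, a, eps = 64, 1.7, 0.5
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, L)             # random initial conditions
for _ in range(150_000):                  # discard the transient
    x = cml_step(x, a, eps)
pattern = x                               # snapshot {x_t(i)} of the lattice
```

Interconnecting the entries of `pattern` with line segments gives snapshots of the kind shown in Fig. 2.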

3 The Five Classes of Time-evolution

After a suitable transient time, typically 10⁵ time steps, each site i oscillates in its own characteristic way, which depends considerably on the values of a, ε and the initial conditions. For a given time t, we refer to the set {x_t(i)} as a pattern. Patterns represent the configuration (state) of the lattice at a given time t and are the main object of investigation here. For six representative values of a, Fig. 2 below shows snapshots of lattice configurations, patterns, reached by the system after discarding a transient of 150,000 time steps when selecting ε = 0.5 and starting the dynamics from random initial conditions, the same for all six snapshots. For better visualization, we always interconnect the discrete sites with line segments.



Figure 2: Illustrative examples of lattice configurations, patterns, after 150,000 time-steps for a lattice with L = 64 sites and ε = 0.5, for selected values of a; (a) a = 1.0.

We say that a pattern x_t(i) 'repeats', or 'reappears', if for a fixed integer k = 0, 1, 2, ... it is possible to find an instant t' > t such that x_{t'}(i) = x_t(i + k).

the curve of stationary states in the bifurcation diagram (I-V characteristic curve) is Z-shaped and the system presents bistability.







Figure 2: Partial phase diagram for N = 20 and c = 10⁻⁴. We plot the main Hopf curve (HB), the main curve of saddle-nodes of equilibria (SN) and the curve of homoclinic orbits (HOM). They meet tangentially at a Takens-Bogdanov point (TB). The symbols refer to the different configurations: stable stationary solutions are filled dots, unstable ones are hollow dots and a cross represents a saddle. Stable periodic orbits are solid circumferences.

For ν > ν_H fixed, a stationary solution loses stability via a Hopf bifurcation at φ = φ_α and regains it at φ = φ_β via a second Hopf bifurcation. The width of the interval (φ_α, φ_β) where a time-periodic solution exists increases with ν, for doping values in the interval ν_H < ν < ν_HOM. For ν > ν_HOM, the branch of oscillatory solutions ends at biases on the curve HOM, so that the oscillations disappear with fixed amplitude and zero frequency. The curves SN and HB coincide at the codimension-2 Takens-Bogdanov bifurcation point TB. At this point the curve HOM (saddle points having homoclinic orbits) is born with the same slope. In figure 2 we see that, away from the TB point, the curve HOM presents some structure. Nevertheless, the configuration in phase space of the homoclinic orbits along this curve is always planar. The structure of the curve HOM is related to Hopf curves other than HB. The main result of our investigation is the total doping-voltage phase diagram of the system for a fixed value of c, presented in figure 3. We have plotted in a doping-voltage phase diagram the Hopf curves (thick curves), the curves of saddle-node bifurcations of equilibria (dotted curves) and a curve of homoclinic orbits. The Hopf bifurcation curves separate regions of oscillatory and stationary behavior of the solutions. Note that there are twelve additional curves of Hopf bifurcation points besides the main Hopf curve HB. It is easy to predict oscillatory or stationary behavior for a fixed ν and variable φ. The phase diagram has Hopf tongues within which there are no oscillations. All these Hopf curves are born at different Takens-Bogdanov points. A curve of homoclinic orbits must leave from each one of these points.
By analyzing the direction of the bifurcating oscillatory branch for parameter values located on the main Hopf curve HB, we have discovered that there is a change in the character of the bifurcation at a doping ν = ν_DH. For ν < ν_DH the Hopf bifurcation is supercritical (the periodic orbit exists for biases in the region where the stationary solution is unstable), whereas for ν > ν_DH it is subcritical. The analysis of the degenerate Hopf bifurcation plus






Figure 3: Total phase diagram of the model for N = 20 and c = 10⁻⁴. The dotted lines are curves of stationary saddle-nodes. For the sake of clarity, we have plotted only the main line of homoclinic orbits, which sprouts from the TB point (thin solid line). We have not shown other homoclinic orbits.

numerical continuation shows the following. For ν > ν_DH, the branch of unstable oscillatory solutions reaches a turning point at a bias smaller than φ_α and merges with a branch of stable time-periodic solutions with larger amplitude. At the merging point, we have a saddle-node of periodic orbits. Bistability between stationary and oscillatory solutions is expected for these values of ν. The large dimension of the problem and the strong nonlinearity of the equations give rise to complicated bifurcation behavior. Nevertheless, the solutions present a robust planar behavior within the parameter region where oscillatory behavior is possible. This means that we have not found evidence of attractors more complicated than those typical of two-dimensional dynamical systems, despite dealing with systems of N equations. We have also paid special attention to the curves of homoclinic orbits. Away from the TB points, the saddle could evolve to a saddle-focus, develop spatial structure and become a Sil'nikov homoclinic orbit. In this case we would

expect a rich variety of periodic and aperiodic motions in its neighborhood. This does not occur for our equations, and we have not found any signature of time-chaotic solutions. In view of the recent experimental observation of undriven chaos (under dc voltage bias conditions) 10 , we conclude that our model equations need to incorporate additional physics to capture undriven chaos in these heterostructures.



In this work we have studied in detail the bifurcation behavior of a drift model for a semiconductor superlattice. Our results can be used for a better understanding of the experimental data. In fact, the information contained in the bifurcation set (figure 3) can be used to design new samples and predict bifurcation scenarios 9. We have shown that qualitatively different behavior can be reached depending on the values of φ and ν, which are proportional to the external DC voltage applied and to the doping density in the quantum wells. The former can be easily controlled in the laboratory, but the latter is fixed once the superlattice has been grown. Therefore, it is necessary to know a priori an appropriate value of the doping density depending on the purpose of the device. The possibility of building a tunable room temperature GHz oscillator is of particular technological interest. The stationary solutions are controlled by a nonlinear map and explain the qualitative behavior found in the experiments. The dynamics exhibits oscillatory solutions in certain parameter ranges, and the bifurcation to the oscillatory solution can be sub- or supercritical. The interplay between Hopf, homoclinic and saddle-node bifurcations gives rise to a rich phase diagram whose details have been thoroughly investigated. The presence of domain solutions is crucial for the disappearance of the oscillatory behavior. The organizing centers for the long time dynamics in a broad range of parameters are multiple Takens-Bogdanov bifurcation points and degenerate Hopf bifurcation points. The system shows a robust planar behavior typical of a two-dimensional dynamical system; although the system is high dimensional and strongly nonlinear, the bifurcations that take place are those that can be found in a planar system.
In conclusion, we have obtained a fairly complete understanding of the dynamics of the model and predicted new interesting dynamical phenomena that could be relevant for the design of new semiconductor devices. The presence of homoclinic bifurcations and the possibility of temporally chaotic solutions are still under study.

Acknowledgments

The authors wish to acknowledge H. T. Grahn, M. Kindelan, G. Platero and A. Wacker for collaboration on related work, and E. Freire and A. Rodríguez-Luis for fruitful discussions about the bifurcation analysis. This work has been partially supported by the Junta de Andalucía and by the DGES grants PB98-1152 and PB98-0142.

References

1. L. Esaki and L. L. Chang, New transport phenomenon in a semiconductor superlattice, Phys. Rev. Lett. 33 (1974), pp. 495-498.
2. For a review see Semiconductor Superlattices: Growth and Electronic Properties, edited by H. T. Grahn, World Scientific, Singapore, 1995.
3. B. Laikhtman, Current-voltage instabilities in superlattices, Phys. Rev. B 44 (1991), pp. 11260-11265.
4. L. L. Bonilla, J. Galán, J. A. Cuesta, F. Martínez and J. M. Molera, Dynamics of electric-field domains and oscillations of the photocurrent in a simple superlattice model, Phys. Rev. B 50 (1994), pp. 8644-8657.
5. M. Moscoso, J. Galán and L. L. Bonilla, SIAM Journal of Applied Mathematics (2000), in press.
6. S.-H. Kwok, R. Merlin, L. L. Bonilla, J. Galán, J. A. Cuesta, F. C. Martínez, J. M. Molera, H. T. Grahn and K. Ploog, Domain wall kinetics and tunneling-induced instabilities in superlattices, Phys. Rev. B 51 (1995), pp. 10171-10174.
7. J. Kastrup, R. Hey, K. H. Ploog, H. T. Grahn, L. L. Bonilla, M. Kindelan, M. Moscoso, A. Wacker and J. Galán, Phys. Rev. B 55 (1997), pp. 2476-2488.
8. L. L. Bonilla, G. Platero and D. Sánchez, Microscopic derivation of the drift velocity and diffusion coefficients in discrete drift-diffusion models of weakly coupled superlattices, Phys. Rev. B (1999), submitted. SISSA preprint cond-mat/9909449.
9. K. J. Luo, S. W. Teitsworth, M. Rogozia, H. T. Grahn, L. L. Bonilla, J. Galán and N. Ohtani, Controllable bifurcation processes in undoped, photoexcited GaAs/AlAs superlattices, in Proceedings of the Fifth Experimental Chaos Conference, edited by M. Ding, W. L. Ditto, A. Osborne, L. M. Pecora and M. L. Spano, World Scientific, Singapore, 1999, submitted.
10. Y. Zhang, R. Klann, K. H. Ploog and H. T. Grahn, Synchronization and chaos induced by resonant tunneling in GaAs/AlAs superlattices, Phys. Rev. Lett. 77 (1997), pp. 3001-3004.

DELAYED DYNAMICAL SYSTEMS WITH VARIABLE DELAY

S. MADRUGA
Dept. of Physics and Applied Math., University of Navarra, Irunlarrea, E-31008 Pamplona, Spain
E-mail: [email protected]

S. BOCCALETTI
Dept. of Physics and Applied Math., University of Navarra, Irunlarrea, E-31008 Pamplona, Spain
E-mail: [email protected]



M.A. MATIAS
Instituto Mediterráneo de Estudios Avanzados, IMEDEA (CSIC-UIB), Ctra. Valldemossa, km 7.5, E-07071 Palma de Mallorca, Spain
E-mail: [email protected]

We report numerical evidence of the effects of a periodically varying delay time in delayed dynamical systems. By periodically modulating the time delay in the Mackey-Glass system we describe a transition from high dimensional chaotic states to ordered periodic outputs. We analyze this transition for a particular kind of modulation, and give the relationship between the period of the resulting orbit and the amplitude of the modulation. Future goals and open questions are highlighted.
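As an illustrative preview of the system studied in this paper, the sketch below integrates the Mackey-Glass equation with a periodically modulated delay by a simple Euler scheme with a history buffer. The function name, the sinusoidal modulation form τ(t) = τ0(1 + ε sin ωt), and the parameter values (γ = 0.1, μ = 0.2, exponent 10, which are the standard Mackey-Glass choices) are our assumptions for illustration, not necessarily those used in the paper:

```python
import numpy as np

def mackey_glass_variable_delay(gamma=0.1, mu=0.2, tau0=17.0,
                                eps=0.0, omega=0.0,
                                dt=0.05, n_steps=20000):
    """Euler integration of the Mackey-Glass delay equation
        dx/dt = -gamma*x(t) + mu*x_d/(1 + x_d**10),  x_d = x(t - tau(t)),
    with a periodically modulated delay tau(t) = tau0*(1 + eps*sin(omega*t)).
    eps = 0 recovers the usual constant-delay system."""
    tau_max = tau0 * (1.0 + abs(eps))         # largest delay ever requested
    n_hist = int(np.ceil(tau_max / dt)) + 1   # history buffer length in steps
    x = np.empty(n_steps + n_hist)
    x[:n_hist] = 0.5                          # constant initial function on [-tau_max, 0]
    for i in range(n_hist - 1, n_steps + n_hist - 1):
        t = (i - n_hist + 1) * dt
        tau = tau0 * (1.0 + eps * np.sin(omega * t))
        xd = x[i - int(round(tau / dt))]      # delayed value x(t - tau(t))
        x[i + 1] = x[i] + dt * (-gamma * x[i] + mu * xd / (1.0 + xd**10))
    return x[n_hist:]                         # discard the prescribed history
```

Note that the buffer must be sized for the largest delay the modulation can produce, which is the main practical difference with respect to a constant-delay integrator.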



In nature the response between a cause and its effect is never instantaneous. In the language of differential equations, the temporal interval between these two events is called a delay. In some situations such a delay turns out to be negligible, so that the usual ordinary differential equation model represents the natural behavior well. However, there are relevant cases in which finite propagation speeds of signals, transport effects, finite reaction times, or finite response times of the system must be taken into account. As a result, the most suitable model equations to describe these cases are of the form

ẋ = F(x, x_d, λ),   (10.1)


where x is an m-dimensional vector (m ≥ 1) representing the order parameter of the system, the dot denotes the temporal derivative, F is a generic nonlinear function, λ is a family of control parameters, x_d = x(t − τ), and τ is a delay time. The above is called a delayed dynamical system (DS). In order to solve Eq. (10.1), one must set not only the initial condition for the order parameter, x(t = 0), but also all its values in the time interval







Figure 1: Fractal dimension D_0 vs. delay time for the Mackey-Glass system.

[−τ, 0]. This implies in general a drastic change in the dynamical system, which becomes infinite dimensional, since one has to specify x(t) at the infinitely many points of the interval. As we will see momentarily, this property induces a richer dynamical behavior, i.e. the presence of a delay creates infinitely many degrees of freedom. Because of their infinite dimensionality, DS provide a natural link between concentrated and space extended systems (ES). Evidence of the analogy between DS and ES has been given experimentally for a CO2 laser with delayed feedback 1, and supported by a theoretical model 2. The link is based on a two-variable representation of the time, defined by t = σ + θτ, where 0 ≤ σ < τ is a continuous space-like variable and θ is a discrete temporal variable 1. By means of this representation, the long range interactions caused by the delay are converted into short range interactions along the direction θ, since x_d = x(σ, θ − 1). Furthermore, in this framework, the formation and propagation of space-time structures, such as defects 3 and/or spatiotemporal intermittency, have been identified 2 and controlled 4. Therefore, DS behave similarly to one dimensional partial differential equations defined on an interval of length τ, insofar as the number of positive Lyapunov exponents grows linearly with the delay τ, similarly to what happens in one dimensional ES, where the number of positive Lyapunov exponents grows linearly with the size of the system in the regime of weak space-time chaos 5. An important result concerning DS and supporting the above analogy is due to Farmer 6, who found a linear relation between delay and attractor

dimensions for two particular systems, namely the Mackey-Glass system 7 and the Ikeda map 8. In the latter case, Farmer pointed out that increasing the delay time has the same effect as increasing the size of a one dimensional extended system, i.e., the delay plays the role of an extensive parameter. There are aspects, however, for which the analogy between DS and ES fails. An example is the metric entropy, which is constant as a function of the delay time for DS, whereas in general it can vary as a function of the system extension for ES. A great number of DS have been proposed in the literature to model a variety of different systems. In biophysics, the Mackey-Glass model has been introduced to describe the process of blood cell creation 7, represented by


ẋ = −γ x(t) + μ x_d / (1 + x_d^10),   (10.2)


γ and μ being real parameters, and m = 1. Another relevant example of DS is the logistic equation modeling population evolution 9

ẋ = −μ x(t) + μ x(t − τ)(1 +
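The two-variable representation t = σ + θτ described earlier can be realized numerically by slicing a sampled time series, such as a Mackey-Glass trajectory, into consecutive delay units. A minimal sketch (the function name is ours):

```python
import numpy as np

def spacetime_view(x, n_delay):
    """Rearrange a scalar time series x[k] (sampling step dt, delay
    tau = n_delay*dt) into the two-variable representation t = sigma + theta*tau.
    Row theta holds one delay unit; the column index is the continuous
    space-like coordinate sigma in [0, tau). The delayed value sits in the
    same column of the previous row: x_d(sigma, theta) = X[theta - 1, sigma]."""
    n_theta = len(x) // n_delay              # number of complete delay units
    return x[:n_theta * n_delay].reshape(n_theta, n_delay)
```

Plotting the resulting matrix as a grey-scale image turns the delayed series into a space-time diagram in which structures such as defects and spatiotemporal intermittency become visible, in the spirit of refs. 2-4.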