Higher Order Dynamic Mode Decomposition and Its Applications 9780128197431, 0128197439


English, 320 [311] pages, 2020


Table of contents:
Biography
Professor José M. Vega
Doctor Soledad Le Clainche
Preface
1 General introduction and scope of the book
1.1 Introduction to post-processing tools
1.1.1 Singular value decomposition
1.1.2 A toy model to illustrate SVD
1.1.3 Proper orthogonal decomposition
1.1.4 Higher order SVD
1.1.5 A toy model to illustrate HOSVD
1.1.6 Applications of SVD and HOSVD
1.2 Introduction to reduced order models
1.2.1 Data-driven ROMs
1.2.2 Projection-based ROMs
1.3 Organization of the book
1.4 Some concluding remarks
1.5 Annexes to Chapter 1
A. Compact SVD
B. Truncated SVD
C. Economy HOSVD
D. Compact HOSVD
E. Truncated HOSVD
2 Higher order dynamic mode decomposition
2.1 Introduction to standard DMD and HODMD
2.2 DMD and HODMD: methods and algorithms
2.2.1 The standard (optimized) DMD method: the DMD-1 algorithm
2.2.2 The DMD-d algorithm with d > 1
2.2.3 HODMD for spatially multidimensional data, involving more than one spatial variable
2.2.4 Iterative HODMD
2.2.5 Some key points to successfully use the DMD-d algorithm with d ≥ 1
2.3 Periodic and quasi-periodic phenomena
2.3.1 Approximate commensurability
2.3.2 Semi-analytic representation of periodic dynamics and invariant periodic orbits in phase space
2.3.3 Semi-analytic representation of quasi-periodic dynamics and the associated invariant tori in phase space
2.4 Some toy models
2.5 Some concluding remarks
2.6 Annexes to Chapter 2
A. HODMD algorithm: the main program
B. DMD-d algorithm
C. DMD-1 algorithm
D. Reconstruction of the original field
E. Approximate commensurability
3 HODMD applications to the analysis of flight tests and magnetic resonance
3.1 Introduction to flutter in flight tests
3.1.1 Training the method using a toy model for flight tests
3.1.2 Using the method in actual flight test experimental data
3.2 Introduction to nuclear magnetic resonance
3.2.1 Training the method using a magnetic resonance toy model
3.2.2 Using the method with synthetic magnetic resonance experimental data
3.3 Some concluding remarks
3.4 Annexes to Chapter 3
A. Flight test experiments: toy model
B. Nuclear magnetic resonance: toy model
4 Spatio-temporal Koopman decomposition
4.1 Introduction to the spatio-temporal Koopman decomposition method
4.2 Traveling waves and standing waves
4.3 The STKD method
4.3.1 A scalar state variable in one space dimension
4.3.2 Vector state variable with one longitudinal and one transverse coordinate
4.3.3 Vector state variable with two transverse and one longitudinal coordinates
4.3.4 Vector state variable with one transverse and two longitudinal coordinates
4.4 Some key points about the use of the STKD method
4.5 Some toy models
4.6 Some concluding remarks
4.7 Annexes to Chapter 4
A. The STKD algorithm: main function
B. Spatio-temporal DMD
C. DMD-d for STKD
D. DMD-1 for STKD
E. Reconstruction
F. Dispersion diagram
5 Application of HODMD and STKD to some pattern forming systems
5.1 Introduction to pattern forming systems
5.2 The one-dimensional CGLE
5.2.1 Properties of the CGLE
5.2.2 The one-dimensional CGLE with Neumann boundary conditions
5.2.3 The one-dimensional CGLE with periodic boundary conditions
5.3 Thermal convection
5.3.1 The Lorenz system
5.3.2 Thermal convection in a three-dimensional rotating spherical shell
5.4 Some concluding remarks
5.5 Annexes to Chapter 5
6 Applications of HODMD and STKD in fluid dynamics
6.1 Introduction to fluid dynamics and global instability analysis
6.2 The two- and three-dimensional cylinder wake
6.3 Flow structures in the three-dimensional cylinder wake
6.4 The zero-net-mass-flux jet
6.5 Exercise: apply HODMD to analyze the three-dimensional cylinder wake
6.6 Some concluding remarks
6.7 Annexes to Chapter 6
A. Iterative, multi-dimensional HODMD in the wake of a three-dimensional circular cylinder
B. Calculate HOSVD
7 Applications of HODMD and STKD in the wind industry
7.1 On the relevance of extracting spatio-temporal patterns in wind turbine wakes
7.2 Flow structures in vertical wind turbines using the HODMD method
7.3 Analysis of the flow structures in a wind turbine with horizontal axis using STKD
7.4 LiDAR experimental data: wind velocity spatial predictions
7.4.1 HODMD for spatial forecasting: toy model
7.4.2 HODMD for spatial forecasting: LiDAR experiments
7.5 Some concluding remarks
7.6 Annexes to Chapter 7
A. Toy model
B. Spatial data forecasting
8 HODMD and STKD as data-driven reduced order models
8.1 Introduction to data-driven reduced order models
8.2 Data-driven reduced order models based on HODMD and STKD
8.2.1 Data-driven ROM for temporal forecasting in the three-dimensional wake of a circular cylinder
8.2.2 Spatio-temporal forecasting in a toy model
8.3 Data-driven adaptive ROM: HODMD on the Fly
8.3.1 Application of the HODMD on the Fly data-driven ROM to the one-dimensional complex Ginzburg-Landau equation
8.4 Exercises: data-driven reduced order models for the Lorenz system
8.4.1 Standard data-driven ROM
8.4.2 Partially adaptive data-driven ROM
8.5 Some concluding remarks
8.6 Annexes to Chapter 8
A. STKD for extrapolation in a toy model: the toy model
B. HODMD to construct a standard HODMD-based data-driven ROM for extrapolation in the Lorenz system
C. HODMD to construct a standard HODMD-based data-driven ROM for extrapolation in the Lorenz system. Additional function
9 Conclusions
9.1 Brief summary of the content of the book
9.2 The HODMD method
9.3 The STKD method
9.4 Scientific and industrially oriented applications
9.4.1 Flight tests and magnetic resonance
9.4.2 Application of the HODMD and STKD methods to pattern forming systems
9.4.3 Applications of HODMD in fluid dynamics
9.4.4 Applications of HODMD and STKD in the wind industry
9.4.5 Data-driven reduced order models obtained via the HODMD and STKD methods
References
Index


Higher Order Dynamic Mode Decomposition and Its Applications


José M. Vega E.T.S. Ingeniería Aeronáutica y del Espacio Universidad Politécnica de Madrid Madrid, Spain

Soledad Le Clainche E.T.S. Ingeniería Aeronáutica y del Espacio Universidad Politécnica de Madrid Madrid, Spain

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2021 Elsevier Inc. All rights reserved.

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-819743-1

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Matthew Deans
Acquisitions Editor: Brian Guerin
Editorial Project Manager: Fernanda A. Oliveira
Production Project Manager: Nirmala Arumugam
Designer: Greg Harris
Typeset by VTeX

To Carmen and Esteban, for their continuous understanding and support.


The MATLAB codes used in the book can be found at https://data.mendeley.com/datasets/z8ks4f5vy5/1.

Biography

Professor José M. Vega

Prof. José M. Vega currently holds a Professorship in Applied Mathematics at the School of Aerospace Engineering of the Universidad Politécnica de Madrid (UPM). He received a Master's degree and a PhD, both in Aeronautical Engineering, from UPM, and a Master's degree in Mathematics from the Universidad Complutense de Madrid. Over the years, his research has focused on applied mathematics at large, including applications to physics, chemistry, and aerospace and mechanical engineering. The main topics have been connected to the analysis of partial differential equations, nonlinear dynamical systems, pattern formation, water waves, reaction-diffusion problems, interfacial phenomena, and, more recently, reduced order models and data processing tools. The latter two topics are related, precisely, to the content of this book. Specifically, he developed (with Dr. Le Clainche as collaborator) the higher order dynamic mode decomposition method, as well as several extensions, including the spatio-temporal Koopman decomposition method. His research activity has resulted in the publication of more than one hundred and twenty research papers in first-class refereed journals, as well as around forty publications resulting from scientific meetings and conferences.

Doctor Soledad Le Clainche

Dr. Soledad Le Clainche holds a Lectureship in Applied Mathematics at the School of Aerospace Engineering of UPM. She received three Masters of Science: in Mechanical Engineering from UPCT, in Aerospace Engineering from UPM, and in Fluid Mechanics from the Von Karman Institute. In 2013 she completed her PhD in Aerospace Engineering at UPM. Her research focuses on computational fluid dynamics and on the development of novel tools for data analysis enabling the detection of spatio-temporal patterns. More specifically, she has co-developed (with Prof. Vega) the higher order dynamic mode decomposition method and its variants. Additionally, she has exploited these data-driven tools to develop reduced order models that help to understand the complex physics of dynamical systems. She has also contributed to the fields of flow control, global stability analysis, synthetic jets, the analysis of flow structures in complex flows (transitional and turbulent) using data-driven methods, and the prediction of temporal patterns using machine learning and soft computing techniques.

Preface

The numerical simulation of many problems of scientific or industrial interest, modeled by partial differential equations, may involve a large number of numerical degrees of freedom. This is because, strictly speaking, the phase space associated with partial differential equations is infinite dimensional, meaning that it involves infinitely many degrees of freedom, which can lead to a very large number of numerical degrees of freedom upon discretization (large scale systems). However, for many such systems, the underlying physical laws introduce data redundancies, which allow for decreasing the number of numerical degrees of freedom to a smaller number of physically meaningful degrees of freedom (dimension reduction) within a good approximation. The experimental counterpart of these large scale systems involves similar difficulties and opportunities. In addition, experimental data may exhibit non-negligible noise, which must be filtered out to get rid of the associated unphysical noisy artifacts.

The discretized outcomes of the problems mentioned above can give rise to very large databases. This can happen because of the high dimensionality involved (one or more space dimensions, time, and perhaps various parameters) and/or a large number of grid points along some dimensions. Efficiently manipulating these data and extracting knowledge from them (i.e., identifying the relevant patterns that are present) is of high interest. This can be performed using appropriate (purely data-driven) post-processing tools, which are the main object of the present book. Post-processing tools can be used to construct data-driven reduced order models, which allow for very fast online simulation of the system. These reduced order models are constructed from experimental or numerical data alone (the latter obtained via black-box numerical solvers), without using the governing equations at all.
Other reduced order models, known as projection-based reduced order models, whose construction requires using the governing equations, are outside the scope of the present book and will only be briefly addressed for completeness. On the other hand, projection-based reduced order models can be used to efficiently obtain the data needed to construct data-driven reduced order models. In other words, these two types of reduced order models are complementary to each other.

Against this background, this book focuses on two recent post-processing tools developed by the authors. The main tool is the higher order dynamic mode decomposition [101], which is an improvement of the well-known standard dynamic mode decomposition [154] (see also [146,155,179]). Because of the improvement, the new tool gives good results in cases in which standard dynamic mode decomposition fails, as will be explained and illustrated in several toy models and in some less academic applications throughout the book. Both standard dynamic mode decomposition and higher order dynamic mode decomposition apply to dynamics that exhibit exponential/oscillatory behavior in the temporal coordinate. The second tool is the spatio-temporal Koopman decomposition [104], which is a spatio-temporal extension of higher order dynamic mode decomposition that simultaneously treats time and some distinguished spatial coordinates, which will be called the longitudinal coordinates. This tool applies to dynamics that exhibit exponential/oscillatory behavior in both time and the longitudinal coordinates. In particular, this method is very efficient in identifying pure or modulated traveling and standing waves, as will be seen throughout the book. Developing these methods requires using more classical post-processing tools that exhibit their own interest and will also be considered.

Preliminary versions of part of the material presented in the book have already been published in research papers (which will be referred to) and presented at scientific meetings and conferences. In preparing the book, the authors have benefited from several discussions on the methods with, e.g., Marta Net and Joan Sánchez (who provided us with some computer programs that will be included in an annex to Chapter 2 and also with some numerical data that will be used in Chapter 5), from the Universitat Politècnica de Catalunya, Barcelona, Spain, and also from suggestions by colleagues on the applicability of the methods to various problems.
These include, in particular, Julio Soria, from Monash University, Melbourne, Australia, Xueri Mao, from Nottingham University, Nottingham, U.K., Rubén Moreno-Ramos, from Altram Co., Madrid, Spain, and Paul Taylor, from Gulfstream Co., Savannah, Georgia, USA.

The book is foremost intended for researchers, including young researchers, and graduate students with knowledge of MATLAB programming, numerics, advanced linear algebra, and fluid dynamics. Thus, the book could be used as an advanced graduate textbook. In order to make the material user friendly, the implementation of the various tools in the MATLAB environment is indicated by referring to the appropriate MATLAB commands, when possible, and by giving the relevant MATLAB functions (in annexes to the various chapters) otherwise. The methods will be illustrated using some toy models. In addition, some practice problems are given in the annexes to the various chapters.

Existing related books include descriptions of dynamic mode decomposition and some of its previous extensions [90], as well as applications to dynamic programming [81] and fluid mechanics [30]. The dynamic mode decomposition improvements presented in this book are both robust and flexible. Also, a variety of applications of scientific and industrial interest are addressed, including pattern forming systems such as the Ginzburg-Landau equation and thermal convection problems, various fluid dynamics systems (e.g., the cylinder wake and the zero-net-mass-flux jet), and some more industrially oriented applications, such as magnetic resonance, aircraft flight tests, and various wind turbine flows.

José M. Vega
Soledad Le Clainche
Madrid, March 2020

Chapter 1

General introduction and scope of the book

1.1 Introduction to post-processing tools

Database post-processing tools for matrices (namely, two-dimensional databases) and for tensors of order higher than two (namely, higher-dimensional databases) are used for various purposes, including filtering out errors in noisy databases, decreasing the database size (compression), and extracting the underlying relevant patterns. These tools can also be used to construct purely data-driven reduced order models (ROMs), which allow for the fast online simulation of the physical systems from which the databases have been extracted. There are many data-processing tools [182,189], also referred to as, e.g., data analytics or data mining tools, depending on the context. Here, we concentrate on those that are most closely related to the scope of the book and, in fact, will be needed throughout it. These tools apply to both real and complex data.

1.1.1 Singular value decomposition

Let us begin with the singular value decomposition (SVD) method, which was invented [168] independently by Beltrami and Jordan (in 1873-74) for square matrices and further developed by Eckart and Young [54] for general rectangular matrices. The SVD of a real or complex, generally rectangular, I × J matrix A of rank r is given by [71]

    A = U S V^H,    (1.1)

where the superscript H denotes hereinafter the Hermitian adjoint, namely the conjugate transpose, i.e.,

    V^H ≡ (V̄)^T,    (1.2)

with the overbar and the superscript T denoting the complex conjugate and the transpose, respectively. Note that the Hermitian adjoint is computed in MATLAB with the prime operator (') and that it reduces to the transpose for real matrices. U and V are I × r′ and J × r′ matrices (for appropriate r′ ≥ r), whose columns, known as the left and right modes of the decomposition, respectively, are mutually orthonormal with respect to the Hermitian inner product. Namely, they are such that

    U^H U = V^H V = I_{r′×r′},    (1.3)

where I_{r′×r′} is the unit matrix of order r′. The matrix S is diagonal and its elements, known as the singular values of the decomposition and denoted as σ_n, are real, non-negative, and usually sorted in decreasing order, namely

    σ_1 ≥ σ_2 ≥ … ≥ σ_{r′} ≥ 0.    (1.4)

Note that, because of round-off errors, the left and right sides of (1.1) are not exactly equal in finite precision computations, but exhibit a difference comparable to machine zero. There are several versions of SVD, which differ from each other in the sizes of the matrices U, S, and V. For instance, in the economy SVD, computed with the MATLAB command 'svd' with option 'econ', the sizes of the matrices U, S, and V are I × r′, r′ × r′, and J × r′, respectively, where r′ = min{I, J} ≥ r. In this case, Eq. (1.1) can also be written in terms of the elements of the various matrices as

    A_ij = Σ_{n=1}^{r′} σ_n U_in V̄_jn,    (1.5)

whose right hand side can be seen as a linear combination of linearly independent rank-one matrices. In fact, the rank of A can be defined as the minimum number of rank-one matrices involved such that the decomposition (1.5) is possible. This definition coincides with the usual definition of the rank, as the rank of either the rows or the columns of the matrix (which coincide), illustrated in Fig. 1.1.

FIGURE 1.1 Illustration of the columns (left) and rows (right) of a matrix.

If r < r′ = min{I, J}, the definition of the rank as the minimum number of rank-one matrices such that the expansion (1.5) is possible implies that only the first r singular values are strictly positive in economy SVD. The remaining r′ − r ones are zero and can be ignored. Namely, Eq. (1.5) can be substituted by

    A_ij = Σ_{n=1}^{r} σ_n U_in V̄_jn.    (1.6)

This is consistent in Eq. (1.1) with retaining only the first r singular values in the matrix S and the first r columns in both U and V. The resulting version of SVD is known as compact SVD. For compact SVD, Eqs. (1.3) and (1.4) read

    U^H U = V^H V = I_{r×r}    (1.7)

and

    σ_1 ≥ σ_2 ≥ … ≥ σ_{r−1} ≥ σ_r > 0,    (1.8)

where, as above, r is the rank of the matrix A. For convenience (to eliminate round-off errors) we do not consider the exact rank here, but the approximate rank, defined as the maximum value of r such that

    σ_r / σ_1 ≥ 10^{−15}.    (1.9)

Using this, compact SVD is implemented in the MATLAB function 'CompactSVD.m', given in Annex 1.2, at the end of this chapter. As in economy SVD, defined in Eq. (1.5), because of round-off errors and truncation, the approximation provided by compact SVD in terms of the relative root mean square (RMS) error (see Eq. (1.16) below) is slightly larger than machine zero.

Note that, generically, the columns of A are linearly independent if I ≥ J and the rows are linearly independent if I ≤ J. In other words, generically, the rank of A is min{I, J} and compact SVD coincides with economy SVD. However, this property is generic, not general, since, e.g., if I = 5, J = 3 and the columns of A are v_1 = [1, 2, 3, 4, 5]^T, v_2 = [0, 1, 0, 0, 0]^T, and v_3 = v_1 + v_2, namely

        ⎡ 1 0 1 ⎤
        ⎢ 2 1 3 ⎥
    A = ⎢ 3 0 3 ⎥,    (1.10)
        ⎢ 4 0 4 ⎥
        ⎣ 5 0 5 ⎦

then the rank of A is r = 2 < min{I, J} = 3. In this case, the singular values computed by the economy SVD method are (truncated to five significant digits)

    σ_1 = 10.708,  σ_2 = 1.1551,  σ_3 = 0.    (1.11)

Using compact SVD, instead, only the first two nonzero singular values are retained, namely the singular values are

    σ_1 = 10.708,  σ_2 = 1.1551.    (1.12)
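The book's implementation of compact SVD is the MATLAB function 'CompactSVD.m' (Annex 1.2). Purely as an illustrative sketch (the NumPy translation, the function name, and the slightly loosened tolerance below are ours, not the book's), the approximate-rank criterion of Eq. (1.9) can be checked against the example above:

```python
import numpy as np

def compact_svd(A, tol=1e-15):
    """Economy SVD truncated to the approximate rank of Eq. (1.9):
    the largest r such that sigma_r / sigma_1 >= tol."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s >= tol * s[0]))
    return U[:, :r], s[:r], Vh[:r, :]

# The 5 x 3 matrix of Eq. (1.10); its third column is the sum of the
# first two, so its rank is r = 2 < min{I, J} = 3.
A = np.array([[1., 0., 1.],
              [2., 1., 3.],
              [3., 0., 3.],
              [4., 0., 4.],
              [5., 0., 5.]])

# Tolerance loosened from the book's 1e-15 so that the spurious third
# singular value (pure round-off) is discarded robustly.
U, s, Vh = compact_svd(A, tol=1e-12)

assert len(s) == 2                    # approximate rank, cf. Eq. (1.12)
assert abs(s[0] - 10.708) < 1e-3      # sigma_1, cf. Eq. (1.11)
assert abs(s[1] - 1.1551) < 1e-4      # sigma_2, cf. Eq. (1.11)
assert np.allclose(U @ np.diag(s) @ Vh, A)   # Eq. (1.1) still holds
```

Since the matrix is exactly rank two, the two retained modes reconstruct A up to round-off.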

It turns out that the Frobenius norm of the matrix A, defined as the square root of the sum of the squares of the absolute values of the elements of A, is given by [71]

    ‖A‖₂ = √(σ₁² + σ₂² + … + σ_r²).   (1.13)

Truncation in the expansion (1.5) or (1.6), retaining only the first r̃ < r singular values, yields an approximation of A by a smaller rank (namely, a smaller effective size) matrix, within an error [71]

    ‖A^approx. − A‖₂² = σ_{r̃+1}² + … + σ_r²,   (1.14)

where ‖·‖₂ is the Frobenius norm, as above, and the superscript in A^approx. denotes the truncated SVD reconstruction of the matrix A retaining only the first r̃ singular values. Equations (1.13) and (1.14) mean that

    RRMSE = √[(σ_{r̃+1}² + … + σ_r²)/(σ₁² + … + σ_r²)],   (1.15)

where, for an approximation to a vector, matrix, or tensor A, the relative RMS error, RRMSE, is defined hereinafter as

    RRMSE = ‖A^approx. − A‖₂ / ‖A‖₂,   (1.16)

with ‖·‖₂ denoting the Euclidean norm for vectors and the Frobenius norm for matrices and tensors. Note that, in all cases, this norm is the square root of the sum of the squares of the absolute values of the elements of the vector, matrix, or tensor. When the elements of A are strongly correlated, the singular values decay fast and the rank reduction is very large, keeping the relative RMS error of the approximation, RRMSE (as defined in Eq. (1.16)), very small. On the other hand, appropriate truncation of SVD is a very good means for cleaning noisy databases [167,181] provided that the noise is somewhat uncorrelated with the physically meaningful data. According to (1.15), in order to ensure that the relative RMS error be smaller than a given threshold ε_SVD, the following must be imposed:

    √[(σ_{r̃+1}² + … + σ_r²)/(σ₁² + … + σ_r²)] ≤ ε_SVD.   (1.17)

General introduction and scope of the book Chapter | 1

5

Selecting r̃ as the smallest index satisfying this condition (recall that the singular values are sorted in decreasing order) uniquely determines the number of modes to be retained; if no r̃ < r exists such that this inequality is satisfied, then we take r̃ = r (no strict truncation). The resulting algorithm, retaining only these singular values, is called truncated SVD, and is implemented in the MATLAB function ‘TruncSVD.m’, given in Annex 1.2 at the end of this chapter. Note that truncated SVD can also be computed replacing compact SVD by economy SVD because the additional singular values introduced by the latter method are zero (or extremely small) and thus they do not contribute to the left hand side of (1.17). Equation (1.15) gives the exact relative RMS error of the truncated SVD reconstruction. In order to ensure that this is small, r̃ is sometimes selected as the smallest index such that

    σ_{r̃+1}/σ₁ ≤ ε_SVD   (1.18)

for some threshold ε_SVD, which obviously does not coincide with its counterpart in Eq. (1.17) because the left hand sides of Eqs. (1.17) and (1.18) are different. However, both equations require that the first neglected singular value, σ_{r̃+1}, be conveniently small. In fact, when σ_{r̃+1} is very small and the singular values decay spectrally beyond σ_{r̃+1} (namely, σ_{r̃+1} ≫ σ_{r̃+2} ≫ …), condition (1.18) is either comparable to or even more strict than condition (1.17).

1.1.2 A toy model to illustrate SVD

Let us now consider a toy model illustrating the various versions of SVD given above using the function (plotted in Fig. 1.2)

FIGURE 1.2 The function f defined in Eq. (1.19).

    f(x, y) = 2x (sin 30x − 15y) + exp(2xy) + 5 sin(10x + 20y) log(5 + x² + y²) − 3(x − 1)² y^{3/2},   (1.19)


for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Note that this function is fairly complex and, moreover, because of the y^{3/2} factor in the last term, it is not analytic in the considered domain. Using the function (1.19), we construct the 100 × 100 matrix

    A_ij = f(x_i, y_j),  with  x_i = (i − 1)/(I − 1),  y_j = (j − 1)/(J − 1),   (1.20)

for i = 1, …, I and j = 1, …, J, with I = J = 100. Applying compact SVD (implemented in the MATLAB function ‘CompactSVD.m’ given in Annex 1.2 at the end of this chapter) to this matrix, it turns out that the approximate rank is r = 18 < min{I, J} = 100 and the relative RMS error of the reconstruction, as defined in Eq. (1.16), is RRMSE ∼ 1.8 · 10⁻¹⁵. This error is almost identical to that obtained via economy SVD (implemented in the MATLAB command ‘svd’, option ‘econ’). Thus, the truncation associated with the approximate rank does not increase the reconstruction error significantly.

FIGURE 1.3 Singular values σn (as computed using compact SVD) vs. n for the matrix defined in Eq. (1.20) with I = J = 100.

The singular values retained by compact SVD are plotted in Fig. 1.3, where it can be seen that they decay spectrally for n ≥ 10. In fact, retaining just 9 modes (note the gap between the ninth and the tenth singular values), the relative RMS error of the truncated SVD reconstruction, as defined in Eq. (1.16), is RRMSE ∼ 2 · 10⁻⁶. And retaining 12 modes (note the second gap, between the 12th and 13th singular values), the relative RMS error is RRMSE ∼ 9.7 · 10⁻¹⁰. On the other hand, using (1.17) to require that the relative RMS reconstruction error, RRMSE, be smaller than ε_SVD via truncated SVD (implemented in the MATLAB function ‘TruncSVD.m’, given in Annex 1.2 at the end of this chapter), with ε_SVD = 10⁻² and 10⁻³, the method retains six and eight modes, respectively, and gives relative RMS errors, as defined in Eq. (1.16), RRMSE ∼ 5.3 · 10⁻³ and ∼ 8.3 · 10⁻⁴, respectively. Note that the relative RMS errors are clearly smaller than ε_SVD in both cases.
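The experiment can be repeated with any SVD routine. The sketch below samples (1.19) on the grid (1.20) and checks that the truncation criterion delivers an error below each threshold; the precise mode counts (six and eight in the text) may shift slightly with the SVD implementation and tolerances used:

```python
import numpy as np

# Sample the function (1.19) on the 100 x 100 grid of Eq. (1.20)
I = J = 100
x = np.linspace(0.0, 1.0, I)
y = np.linspace(0.0, 1.0, J)
X, Y = np.meshgrid(x, y, indexing="ij")
A = (2*X*(np.sin(30*X) - 15*Y) + np.exp(2*X*Y)
     + 5*np.sin(10*X + 20*Y)*np.log(5 + X**2 + Y**2)
     - 3*(X - 1)**2 * Y**1.5)

s = np.linalg.svd(A, compute_uv=False)
# relative RMS error (1.15) when the first k singular values are retained
rel_err = np.sqrt(np.append(np.cumsum((s**2)[::-1])[::-1], 0.0) / np.sum(s**2))
for eps in (1e-2, 1e-3):
    r = int(np.argmax(rel_err <= eps))   # truncation criterion (1.17)
    print(eps, r, rel_err[r])            # the achieved RRMSE is below eps
```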


It is interesting to compare the singular values provided by compact SVD, plotted in Fig. 1.3, with those obtained using economy SVD, which are plotted in Fig. 1.4. As can be seen, economy SVD retains 100 modes, as expected.

FIGURE 1.4 Counterpart of Fig. 1.3 using economy SVD.

However, there is a clear change of tendency in the plot of σ_n vs. n at n = 18 (precisely the approximate rank of A as computed by compact SVD). The first 18 singular values almost exactly coincide with their counterparts computed by compact SVD, while the remaining ones are extremely small and affected by round-off errors. Let us now see the effect of controlled noise. To this end, we add to the matrix A a random noise with zero mean constructed with the MATLAB command ‘rand’, which gives uniformly distributed noise varying between 0 and 1. Namely,

    A^noisy = A + noise(I, J),   (1.21)

where, using MATLAB commands, the random matrix noise(I, J) is defined as

    noise(I, J) = 0.01 ∗ 2 ∗ (rand(I, J) − 0.5).   (1.22)

Note that this noise exhibits zero mean, while its RMS value is seen to be

    ‖noise(I, J)‖₂ / √(I · J) ≈ 0.0058,   (1.23)

where ‖·‖₂ is the Frobenius norm, as above. Applying compact SVD to the noisy matrix A^noisy, the singular values are as plotted in Fig. 1.5. As can be seen in this figure, now the approximate rank of the matrix is 100, which is due to the noise. In other words, noise introduces 82 additional non-small singular values, which were not present in the clean matrix (cf. Fig. 1.3). Also, there is a clear change of tendency in the plot of σ_n vs. n at n = 18 (namely, at the approximate rank of the clean matrix), which indicates the presence of noise and helps to


FIGURE 1.5 Counterpart of Fig. 1.3 when compact SVD is applied to the noisy matrix (1.21), obtained adding a positive random noise of relative mean size 0.005 to the matrix A.

identify the noise level. On the other hand, we may take advantage of the gaps that are seen in the singular values distribution in Fig. 1.5. For instance, retaining only the first nine modes, the truncated SVD reconstruction of the noisy matrix, A^approx, exhibits an RMS error (compared to the clean matrix)

    ‖A − A^approx‖₂ / √(I · J) ≈ 0.0023.   (1.24)

Comparing this value with the RMS of the noise (see (1.23)) shows that appropriate truncation in the SVD of the noisy matrix more than halves the RMS of the noise, which is consistent with the well known error filtering properties of truncated SVD [167,181].
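This filtering effect is easy to verify numerically. In the sketch below, a seeded NumPy generator stands in for MATLAB's ‘rand’, so the figures only approximately reproduce (1.23)–(1.24):

```python
import numpy as np

# Rebuild the matrix A of Eq. (1.20) and add the noise of Eqs. (1.21)-(1.22)
I = J = 100
X, Y = np.meshgrid(np.linspace(0, 1, I), np.linspace(0, 1, J), indexing="ij")
A = (2*X*(np.sin(30*X) - 15*Y) + np.exp(2*X*Y)
     + 5*np.sin(10*X + 20*Y)*np.log(5 + X**2 + Y**2) - 3*(X - 1)**2 * Y**1.5)

rng = np.random.default_rng(0)               # seeded stand-in for MATLAB's rand
noise = 0.01 * 2 * (rng.random((I, J)) - 0.5)
A_noisy = A + noise

# Rank-9 truncated SVD of the noisy matrix (cf. the gap after the 9th mode)
U, s, Vh = np.linalg.svd(A_noisy, full_matrices=False)
A_trunc = (U[:, :9] * s[:9]) @ Vh[:9, :]

rms = lambda M: np.linalg.norm(M) / np.sqrt(I * J)
print(rms(noise))          # ~0.0058, cf. Eq. (1.23)
print(rms(A_trunc - A))    # error vs. the clean matrix: noticeably smaller
```

The rank-9 reconstruction discards most of the noise because the noise energy is spread almost uniformly over all 100 directions, while the clean data live in the first few.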

1.1.3 Proper orthogonal decomposition

The proper orthogonal decomposition (POD) was anticipated by Pearson [132] in 1901 and further developed by Hotelling [78,79]. Depending on the context, this method is also called principal component analysis or Karhunen–Loève decomposition. The POD of a system of real or complex vectors [20,34], known in this context as the snapshots and denoted as

    v₁, …, v_J,   (1.25)

with each vector v_j of size I, allows for approximately dimension-reducing the vector space spanned by the snapshots. Moreover, this dimension reduction is optimal in the RMS sense for a given dimension of the reduced subspace. The method gives an orthonormal basis of the dimension-reduced subspace, known as the POD subspace, and also provides the collective (i.e., for the whole snapshot set) RMS error when the snapshots are reconstructed as their orthogonal projections onto the POD subspace. The POD can be calculated considering the


snapshot matrix, whose columns are the snapshots, namely defined as

    A = [v₁, …, v_J].   (1.26)

Note that the size of A is I × J. The POD can be computed in several ways, either relying on the covariance matrix [54,115] (which exhibits several drawbacks) or as a byproduct of SVD [34]. Concentrating on the latter approach, it turns out that the POD of a system of snapshots can be considered as a byproduct of the SVD of the snapshot matrix A. Applying truncated SVD to the snapshot matrix, as A = U S V^H, the above-mentioned orthonormal basis of the POD subspace is formed by the retained columns of U, and the relative RMS error is given by the right hand side of Eq. (1.15). As it happened with truncated SVD, the dimension reduction is very strong and the relative RMS error very small when the snapshots exhibit strong redundancies among each other. An important application of POD is the construction of ROMs; see Sect. 1.2.2 below.
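The SVD route to POD can be sketched as follows (the snapshot set is synthetic, built so that the snapshot matrix has rank three by construction; all names are illustrative):

```python
import numpy as np

# Synthetic snapshot set (1.25): each snapshot is a combination of three
# fixed spatial profiles, so the snapshot matrix (1.26) has rank 3
rng = np.random.default_rng(1)
I_pts, J_snap = 200, 50
x = np.linspace(0.0, 1.0, I_pts)
basis = np.column_stack([np.sin(np.pi*x), np.cos(3*np.pi*x), x])
A = basis @ rng.random((3, J_snap))      # snapshot matrix, I_pts x J_snap

# POD as a byproduct of the SVD of the snapshot matrix
U, s, Vh = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > s[0] * 1e-12))        # approximate rank
pod_basis = U[:, :r]                     # orthonormal basis of the POD subspace

# Orthogonal projection of the snapshots onto the POD subspace is exact here
proj = pod_basis @ (pod_basis.T @ A)
print(r, np.linalg.norm(proj - A))       # rank 3; residual near machine precision
```

For snapshots that are only approximately redundant, the same projection incurs the RMS error given by the right hand side of (1.15).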

1.1.4 Higher order SVD

The extension of SVD to higher-than-second-order tensors (multidimensional databases) is highly non-trivial. There are many such extensions [88]. The most obvious extension, known as the canonical decomposition, consists in reorganizing the tensor as a linear combination of rank-one tensors. For an Nth order tensor T of size I₁ × I₂ × … × I_N, with I₁ > 1, I₂ > 1, …, I_N > 1, this decomposition is written as (cf. Eq. (1.5))

    T_{i₁,i₂,…,i_N} = Σ_{n=1}^{r} σ_n U¹_{i₁n} U²_{i₂n} ⋯ U^N_{i_N n},   (1.27)

for i₁ = 1, 2, …, I₁, i₂ = 1, 2, …, I₂, …, i_N = 1, 2, …, I_N. In fact, the minimum number of rank-one tensors such that this decomposition is possible defines the rank of the tensor, r, whose computation for general tensors is still an open problem nowadays [159]. The difficulty already appears in real tensors, and the analysis below applies to both real and complex tensors. The alternative definition of the rank of a matrix as the common rank of the rows and columns (whose counterparts for tensors are known as the fibers of the tensor; see Fig. 1.6) does not make sense here because the ranks of the various fibers, r₁, …, r_N, do not generally coincide if N > 2; the vector r = (r₁, …, r_N) is known as the multilinear rank of the tensor. Because of the above-mentioned difficulties, instead of the canonical decomposition, other extensions of SVD are used. A very robust extension was introduced by Tucker [180] and more recently popularized by de Lathauwer et al. [95,96]. This extension is known as the higher order singular value decomposition (HOSVD).


FIGURE 1.6 Illustration of the fibers of a third order tensor.

As it happened with standard SVD, there are various versions of HOSVD. In particular, economy HOSVD is implemented in the MATLAB function ‘EconomyHOSVD.m’ given in Annex 1.2 at the end of this chapter. Economy HOSVD decomposes the tensor T as

    T_{i₁,i₂,…,i_N} = Σ_{n₁=1}^{r′₁} Σ_{n₂=1}^{r′₂} ⋯ Σ_{n_N=1}^{r′_N} S_{n₁n₂…n_N} U¹_{i₁n₁} U²_{i₂n₂} ⋯ U^N_{i_N n_N},   (1.28)

for i₁ = 1, 2, …, I₁, i₂ = 1, 2, …, I₂, …, i_N = 1, 2, …, I_N. In Eq. (1.28), r′₁, …, r′_N are such that r₁ ≤ r′₁ ≤ I₁, …, r_N ≤ r′_N ≤ I_N, where r₁, …, r_N are the components of the multilinear rank of T (namely, the ranks of the fibers of the tensor along the first, second, …, Nth dimensions of the tensor, as defined above). S_{n₁n₂…n_N} are the components of an Nth order tensor S, of size r′₁ × r′₂ × … × r′_N, known as the core tensor, and the matrices U¹, U², …, U^N, of sizes I₁ × r′₁, I₂ × r′₂, …, I_N × r′_N, respectively, whose elements are U¹_{i₁n₁}, U²_{i₂n₂}, …, and U^N_{i_N n_N}, respectively, are known as the mode matrices along the various dimensions of the tensor T. In particular, when r′₁ = r₁, r′₂ = r₂, …, and r′_N = r_N, the economy HOSVD is known as compact HOSVD. Note that compact HOSVD is a particular case of economy HOSVD. Compact HOSVD is implemented in the MATLAB function ‘CompactHOSVD.m’, given in Annex 1.2, at the end of this chapter. As in standard SVD, because of round-off errors, the difference between the left and right hand sides of (1.28) is not generally zero, but slightly larger than zero-machine. Now, the right hand side of Eq. (1.28) is a multilinear function of the components of the core tensor and the mode matrices, which in fact can be considered as a tensor product of the core tensor and the mode matrices, denoted as tprod(S, U¹, U², …, U^N). Thus, Eq. (1.28) is rewritten as

    T = tprod(S, U¹, U², …, U^N).   (1.29)


Let us now explain how economy and compact HOSVD are calculated. In fact, there is only a slight difference between both methods, as will be indicated below. As a first step, for each n = 1, …, N, the mode matrix Uⁿ is calculated as follows. First, the tensor T is unfolded into the matrix Aⁿ, whose columns are the fibers of T along the nth tensor dimension. Then either economy SVD (for economy HOSVD) or compact SVD (for compact HOSVD) is applied to the matrix Aⁿ (whose rank is r_n), which gives

    Aⁿ = Uⁿ Sⁿ (Vⁿ)^H.   (1.30)

Here, the matrix Uⁿ is precisely the mode matrix along the nth dimension, which invoking (1.7) is such that

    (Uⁿ)^H Uⁿ = I,  for n = 1, …, N,   (1.31)

where I is the unit matrix of size r′_n for economy HOSVD and of size r_n for compact HOSVD. Once the mode matrices have been calculated, as a second step, the core tensor is computed as

    S = tprod(T, (U¹)^H, (U²)^H, …, (U^N)^H),   (1.32)

as obtained from Eqs. (1.29) and (1.31). The strictly positive singular values of the compact SVD (1.30) (namely, the elements of the diagonal matrix Sⁿ) are denoted with the superscript n and are such that (see Eq. (1.8))

    σⁿ₁ ≥ σⁿ₂ ≥ … ≥ σⁿ_{r_n−1} ≥ σⁿ_{r_n} > 0,   (1.33)

for n = 1, 2, …, N. These singular values will be called the HOSVD-singular values along the nth dimension of the tensor. Now, noting that the matrix Aⁿ contains all elements of the tensor T and invoking Eq. (1.13), we have

    ‖T‖₂ = ‖Aⁿ‖₂ = √[(σⁿ₁)² + (σⁿ₂)² + … + (σⁿ_{r_n})²],   (1.34)

for all n = 1, …, N, where ‖·‖₂ is the Frobenius norm, as above, namely the square root of the sum of the squares of the absolute values of the elements of the matrix or tensor. Similarly, applying (1.14) to the various matrices A¹, …, A^N, it turns out that if the expansion (1.28) is truncated retaining only r̃₁ ≤ r₁, r̃₂ ≤ r₂, …, r̃_N ≤ r_N terms along the various dimensions of the tensor, the Frobenius norm of the error of the approximation given by the truncated expansion is such that

    ‖T^approx. − T‖₂ ≤ Σ_{n=1}^{N} √[(σⁿ_{r̃n+1})² + … + (σⁿ_{r_n})²].   (1.35)


Invoking Eqs. (1.34) and (1.35), it turns out that the relative RMS error of the approximation, as defined in Eq. (1.16), is such that

    RRMSE ≤ Σ_{n=1}^{N} √[((σⁿ_{r̃n+1})² + … + (σⁿ_{r_n})²)/((σⁿ₁)² + … + (σⁿ_{r_n})²)].   (1.36)

Note that, in Eqs. (1.35) and (1.36), we only obtain an upper bound of the error, while in Eqs. (1.14) and (1.15) we obtained the exact error. This is because, when truncating along various dimensions of the core tensor, some of the neglected elements of the core tensor are counted several times in the right hand side of Eqs. (1.35) and (1.36). In any event, these equations show that truncation is very effective when the singular values along the various dimensions of the tensor decay fast, which in turn occurs when the elements of the given tensor T are strongly correlated. Now, as in standard SVD, invoking (1.36) in order to ensure that the relative RMS error of the truncated approximation is smaller than a given threshold, ε_HOSVD, the following can be imposed:

    Σ_{n=1}^{N} √[((σⁿ_{r̃n+1})² + … + (σⁿ_{r_n})²)/((σⁿ₁)² + … + (σⁿ_{r_n})²)] ≤ ε_HOSVD.   (1.37)

However, in general, this condition does not uniquely determine the numbers of modes along the various dimensions, r̃₁, …, r̃_N. This is because, when decreasing some of the numbers of retained modes and increasing others, condition (1.37) can be maintained. This gives us freedom to impose additional conditions to improve the approximation. For instance, the size of the core tensor,

    r̃₁ × r̃₂ × ⋯ × r̃_N,   (1.38)

can be minimized subject to the restriction (1.37). This is a fairly inexpensive constrained optimization problem because both the objective function (1.38) and the restriction (1.37) are given by simple algebraic equations. A second possibility is to equi-distribute, along the various dimensions of the tensor, the upper bound in the left hand side of (1.37) by choosing r̃₁, …, r̃_N as the smallest indices such that

    √[((σⁿ_{r̃n+1})² + … + (σⁿ_{r_n})²)/((σⁿ₁)² + … + (σⁿ_{r_n})²)] ≤ ε_HOSVD/√N  for n = 1, …, N.   (1.39)

As it happened with standard SVD, if, for some index (or indices), no r̃_n < r_n exists such that this inequality is satisfied, then we take r̃_n = r_n (no strict truncation along this dimension of the tensor). Now, these conditions do determine uniquely the numbers of modes to be retained along the various dimensions of the tensor. The resulting truncated HOSVD is implemented in the


MATLAB function ‘TruncHOSVD.m’ given in Annex 1.2 at the end of this chapter. As in standard SVD, when the singular values along the various dimensions decay very fast from σⁿ_{r̃n+1} on (for n = 1, …, N), conditions (1.39) can be replaced by the counterpart of (1.18), namely

    σⁿ_{r̃n+1}/σⁿ₁ ≤ ε_HOSVD/√N  for n = 1, …, N.   (1.40)
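For reference, economy and truncated HOSVD can be sketched via SVDs of the unfoldings, following Eqs. (1.30)–(1.32) and the truncation criterion (1.39). This is a NumPy stand-in for the MATLAB functions of Annex 1.2; `unfold`, `hosvd`, and `tprod` are illustrative names:

```python
import numpy as np

def unfold(T, n):
    """Matrix A^n whose columns are the fibers of T along dimension n."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def hosvd(T, eps=0.0):
    """Economy HOSVD; eps > 0 truncates each dimension per criterion (1.39)."""
    N = T.ndim
    modes = []
    for n in range(N):
        U, s, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        # rel[k]: relative RMS tail when k modes are kept along dimension n
        rel = np.sqrt(np.append(np.cumsum((s**2)[::-1])[::-1], 0.0) / np.sum(s**2))
        rn = max(1, int(np.argmax(rel <= eps / np.sqrt(N))))
        modes.append(U[:, :rn])
    S = T                      # core tensor, Eq. (1.32)
    for n, U in enumerate(modes):
        S = np.moveaxis(np.tensordot(U.conj().T, S, axes=(1, n)), 0, n)
    return S, modes

def tprod(S, modes):
    """Tensor product of the core tensor and the mode matrices, Eq. (1.29)."""
    T = S
    for n, U in enumerate(modes):
        T = np.moveaxis(np.tensordot(U, T, axes=(1, n)), 0, n)
    return T

# Third order tensor that is a sum of two rank-one tensors: multilinear rank (2,2,2)
x, y, z = np.linspace(0, 1, 8), np.linspace(0, 1, 9), np.linspace(0, 1, 10)
T = (np.einsum('i,j,k->ijk', np.sin(x), np.cos(y), z)
     + np.einsum('i,j,k->ijk', x**2, y, np.exp(z)))

S, modes = hosvd(T, eps=1e-10)
print([U.shape[1] for U in modes])              # retained modes per dimension
print(np.linalg.norm(tprod(S, modes) - T))      # reconstruction residual
```

With eps = 0 the routine reduces to economy HOSVD (all modes of the unfoldings retained), and `tprod(S, modes)` recovers T up to round-off.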

1.1.5 A toy model to illustrate HOSVD

Let us now illustrate the various versions of HOSVD considering the function

    f(x, y, z) = x² [sin(5πy) + 3 log(x³ + y² + z + π²) − 1]² − 4x²y³(1 − z)^{3/2} + (x + z − 1)(2y − z) cos(30(x + z)) log(6 + x²y² + z³)   (1.41)

for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 ≤ z ≤ 1. See Fig. 1.7 for plots of three representative sections of the function. Using this function, we construct the 100 × 100 × 100 tensor

    T_ijk = f(x_i, y_j, z_k),  with  x_i = (i − 1)/(I − 1),  y_j = (j − 1)/(J − 1),  z_k = (k − 1)/(K − 1),   (1.42)

for i = 1, …, I, j = 1, …, J, and k = 1, …, K, with I = J = K = 100.

FIGURE 1.7 The sections z = 0 (left plot), z = 0.5 (middle plot), and z = 1 (right plot) of the function (1.41).

Applying compact HOSVD (which is implemented in the MATLAB function ‘CompactHOSVD.m’ given in Annex 1.2 at the end of this chapter) to this tensor, we see that the approximate ranks along the first, second, and third dimensions are 25, 18, and 26, respectively. Note that these ranks differ from each other and are also smaller than min{I, J, K} = 100. The relative RMS error of the reconstruction, as defined in Eq. (1.16), using this method is close to zero-machine, namely RRMSE ∼ 2.5 · 10⁻¹⁵. The associated singular values are plotted in Fig. 1.8, where it can be seen that the three sets of singular values decay spectrally for n > 10.


FIGURE 1.8 Singular values (as computed using compact HOSVD) of the tensor defined in Eq. (1.42) with I = J = K = 100, considering the singular values along the first (o), second (+), and third (×) dimensions of the tensor.

Now, we use Eq. (1.39) to truncate the tensor T, equi-distributing the upper bound in (1.37) along the three dimensions of the tensor. Such truncation is implemented in the function ‘TruncHOSVD.m’ in Annex 1.2 at the end of this chapter. Performing this truncation with ε_HOSVD = 10⁻², the method retains 6, 5, and 9 modes along the first, second, and third indices, respectively, and gives a relative RMS error RRMSE ∼ 3.8 · 10⁻³, which is clearly smaller than ε_HOSVD. If, instead, ε_HOSVD = 10⁻³, then the numbers of retained modes along the first, second, and third indices are 9, 6, and 9, respectively, and the relative RMS error is RRMSE ∼ 3.5 · 10⁻⁴. Again, this relative RMS error is clearly smaller than ε_HOSVD.

1.1.6 Applications of SVD and HOSVD

Standard SVD and HOSVD exhibit very interesting applications. One such application consists in database compression [111], and another in recovering lost data using an iterative method, invented by Sirovich for two-dimensional databases [55] and extended by Moreno et al. [123] for higher-than-two-dimensional databases. A step further from the results in [123] permits an effective cleaning of concentrated large errors in multidimensional databases [124]. A second application yields purely data-driven ROMs for two-dimensional or higher-than-two-dimensional databases, by combining standard SVD [25,26]


or HOSVD [110], respectively, with appropriate one-dimensional interpolation, which will be further explained in Sect. 1.2.1. It is remarkable that the method developed in [110] gives very good results for the transonic flow around an airfoil, where the presence of shock waves is very demanding when using SVD-like methods.

1.2 Introduction to reduced order models

There are many types of reduced order models (ROMs), whose goal is to decrease the large computational resources required by standard numerical solvers. Decreasing the computational cost of numerical simulations is needed in many tasks, such as flow control [64] and design optimization [113,130].

1.2.1 Data-driven ROMs

Purely data-driven ROMs do not use the governing equations. Instead, they are constructed just from data, which in turn may be experimental data or numerical data obtained by a black-box solver. Two examples are obtained upon combination of standard SVD or HOSVD plus one-dimensional interpolation [110] and combination of POD plus multidimensional interpolation [170], which are considered in this subsection. For the sake of clarity, we describe these data-driven ROMs for a specific aerodynamic problem, namely the pressure distribution along the wall of an airfoil, depending on two parameters, the angle of attack and the Mach number. Thus, the aim is to compute an approximation of the function

    p = p(α, M, s),   (1.43)

where α is the angle of attack, M is the Mach number, and s is a coordinate along the wall of the airfoil. To begin with, we discretize α, M, and s as α₁, …, α_I, M₁, …, M_J, and s₁, …, s_K. Thus, the resulting data can be organized as the set of I × J snapshots

    p_ij(s_k) = p(α_i, M_j, s_k),   (1.44)

each giving the discretized distribution of p for these specific values of the pair (α_i, M_j). These snapshots can be computed using a high fidelity computational fluid dynamics (CFD) solver. On the other hand, these snapshots can also be organized as the third order I × J × K tensor

    p_ijk = p(α_i, M_j, s_k).   (1.45)

The HOSVD plus one-dimensional interpolation method first applies truncated HOSVD (using (1.35) to keep the required precision in the truncation), which


gives

    p_ijk ≈ Σ_{n₁=1}^{r̃₁} Σ_{n₂=1}^{r̃₂} Σ_{n₃=1}^{r̃₃} S_{n₁n₂n₃} U¹_{in₁} U²_{jn₂} U³_{kn₃}.   (1.46)

Or, recalling the meaning of the indices i, j, and k, this equation can also be written as

    p(α_i, M_j, s_k) ≈ Σ_{n₁=1}^{r̃₁} Σ_{n₂=1}^{r̃₂} Σ_{n₃=1}^{r̃₃} S_{n₁n₂n₃} V¹_{n₁}(α_i) V²_{n₂}(M_j) V³_{n₃}(s_k),   (1.47)

where

    V¹_{n₁}(α_i) = U¹_{in₁},  V²_{n₂}(M_j) = U²_{jn₂},  V³_{n₃}(s_k) = U³_{kn₃}.   (1.48)

Performing one-dimensional interpolation (using, e.g., spline interpolation) in each of these r̃₁, r̃₂, and r̃₃ one-dimensional datasets gives the functions

    V¹_{n₁}(α)   (1.49)

for n₁ = 1, …, r̃₁ and continuous values of α,

    V²_{n₂}(M)   (1.50)

for n₂ = 1, …, r̃₂ and continuous values of M, and

    V³_{n₃}(s)   (1.51)

for n₃ = 1, …, r̃₃ and continuous values of s. Substituting (1.49)–(1.51) into (1.47) leads to

    p(α, M, s) ≈ Σ_{n₁=1}^{r̃₁} Σ_{n₂=1}^{r̃₂} Σ_{n₃=1}^{r̃₃} S_{n₁n₂n₃} V¹_{n₁}(α) V²_{n₂}(M) V³_{n₃}(s),   (1.52)

which completes the derivation of the method. This equation was derived in a purely data-driven fashion and can be used for the very fast computation (it only involves algebraic operations and one-dimensional interpolation) of the pressure for continuous values of α, M, and s. Concerning the combination of POD plus multidimensional interpolation, it is constructed as follows. First, appropriately truncated POD is applied to the snapshot set (1.44), which gives a limited number, r̃, of POD modes, P_n(s₁, …, s_K) for n = 1, …, r̃. Then, the snapshots are projected onto the POD modes, which gives

    p_ij(s_k) ≈ Σ_{n=1}^{r̃} A_{ijn} P_n(s_k),   (1.53)


where the left hand side is as defined in (1.44) and the elements of the third order tensor A_{ijn} appearing in the right hand side are determined precisely when performing this projection. Now, for continuous values of α and M, the pressure at the nodes s₁, …, s_K is written as

    p(α, M, s_k) = Σ_{n=1}^{r̃} a_n(α, M) P_n(s_k),   (1.54)

where the r̃ POD coefficients, a_n(α, M), are computed via two-dimensional interpolation in the α–M plane, noting that

    a_n(α_i, M_j) = A_{ijn}.   (1.55)

This interpolation was performed in [170] by various methods. Note that if a larger number, m, of parameters were present, the two-dimensional interpolation would have to be replaced by m-dimensional interpolation, which may be problematic. The main drawback of the two data-driven ROMs considered above is that they permit simulations only in the range of variables and parameters where the data have been acquired. In other words, they do not generally permit extrapolation. The post-processing tools developed later in this book, namely the higher order dynamic mode decomposition and the spatio-temporal Koopman mode decomposition, instead, do permit developing data-driven ROMs for which extrapolation is possible; see Chapter 8.
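The POD-plus-interpolation construction of Eqs. (1.53)–(1.55) can be sketched on a synthetic pressure distribution (the function `p_true`, the parameter grids, the retained mode count, and the bilinear interpolation below are illustrative stand-ins for the CFD snapshots and the interpolation methods of [170]):

```python
import numpy as np

# Synthetic stand-in for the snapshots (1.44): a smooth "pressure" p(alpha, M, s)
alphas = np.linspace(0.0, 5.0, 11)       # angle of attack samples
machs  = np.linspace(0.3, 0.8, 9)        # Mach number samples
s_wall = np.linspace(0.0, 1.0, 101)      # wall coordinate
def p_true(a, M, s):
    return np.cos(2*np.pi*s)*(1 + 0.1*a) + M**2*np.sin(np.pi*s)

P = p_true(alphas[:, None, None], machs[None, :, None], s_wall[None, None, :])

# POD of the snapshot set: SVD of the (I*J) x K snapshot matrix
A = P.reshape(-1, s_wall.size)
U, sv, Vh = np.linalg.svd(A, full_matrices=False)
r = 2                                    # this synthetic p has exactly 2 modes
pod = Vh[:r, :]                          # POD modes P_n(s_k)
coeff = (A @ pod.T).reshape(alphas.size, machs.size, r)   # tensor A_{ijn}, Eq. (1.53)

def rom(a, M):
    """Evaluate Eq. (1.54): bilinear interpolation of the POD coefficients."""
    i = int(np.clip(np.searchsorted(alphas, a) - 1, 0, alphas.size - 2))
    j = int(np.clip(np.searchsorted(machs, M) - 1, 0, machs.size - 2))
    ta = (a - alphas[i]) / (alphas[i+1] - alphas[i])
    tb = (M - machs[j]) / (machs[j+1] - machs[j])
    an = ((1-ta)*(1-tb)*coeff[i, j] + ta*(1-tb)*coeff[i+1, j]
          + (1-ta)*tb*coeff[i, j+1] + ta*tb*coeff[i+1, j+1])
    return an @ pod

p_rom = rom(2.3, 0.57)                   # off-sample parameter values
ref = p_true(2.3, 0.57, s_wall)
err = np.linalg.norm(p_rom - ref) / np.linalg.norm(ref)
print(err)   # small: the coefficients a_n vary smoothly with (alpha, M)
```

Evaluation is almost free (a few products of small arrays), but, as noted above, only interpolation within the sampled parameter range is reliable.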

1.2.2 Projection-based ROMs

There is a second class of ROMs, known as projection-based ROMs, which require the governing equations and have been developed in an impressive number of works for both steady [2–4,12,105] and unsteady problems of both purely scientific [1,38,138,140,143,158,160,161,174] and industrial [22,50,109,114,178] interest. See [8,19,77,137] for summaries of the state of the art and applications of projection-based ROMs. The idea in these ROMs is to first identify a low-dimensional vector subspace of the phase space of the system, E, that approximately contains the dynamics of the system, and then project the governing equations onto this low-dimensional vector subspace. This idea applies to partial differential equations and was anticipated for the computation of attractors in some time-dependent systems, whose large time behavior is contained in a finite-dimensional (generally nonlinear) inertial manifold [61]; see [145] for the actual calculation of inertial manifolds. In fact, even though the finite-dimensional inertial manifold is generally nonlinear, according to the Whitney embedding theorem [185], the inertial manifold is contained in an also finite-dimensional (linear) vector space. Thus, using a low-dimensional approximation of the large time


dynamics is a quite appealing idea that was applied early after the discovery of the concept of inertial manifold [82], combining the calculation of inertial manifolds with Galerkin projection of the governing equations onto the finite-dimensional vector space containing the inertial manifold. In addition, for time-dependent problems, it may be interesting to compute not only the final attractor but also transients, and steady problems depending on one or more parameters are of interest as well. In these more general situations, a good low-dimensional subspace may be obtained using the method of snapshots [162], which is now illustrated for the following nonlinear, time-dependent system:

    ∂v/∂t = Lv + f(v),   (1.56)

where v is a state vector belonging to a phase space E, assumed to be a (generally infinite-dimensional) Hilbert space. In addition, L is a (generally unbounded) linear operator such that L⁻¹ is well defined and compact [144], and f is a nonlinear operator such that the operator v → L⁻¹f(v) is compact. Let us recall here that compact operators map the unit ball onto a set that is close to a finite-dimensional linear manifold. The latter assumption holds if |f| is bounded and, more generally, if L is a strongly elliptic operator and f is a nonlinear operator involving spatial derivatives of lower order than those included in the principal part of the operator L. These assumptions obviously include the case in which (1.56) is a system of ordinary differential equations, and also many dissipative systems modeled by partial differential equations of scientific/industrial interest, such as reaction–diffusion–convection systems. We assume, in addition, that the dimension of such a manifold is small. In other words, we assume that the operators

    L⁻¹  and  v → L⁻¹f(v)   (1.57)

strongly squash/flatten the unit ball of E onto a low-dimensional manifold, namely that the image of the unit ball of E is well approximated by a low-dimensional linear subspace of E. Note that the last assumption is formally (but not quantitatively) accounted for in the compactness of L⁻¹. Equation (1.56) is generally an infinite-dimensional system of partial differential equations, but can be treated as a finite- (but large-) dimensional system upon spatial discretization. The above-mentioned low-dimensional linear manifold will be calculated using the method of snapshots [162], with the snapshots computed by applying a numerical solver to Eq. (1.56). The snapshots are selected as N portraits of the state of the system at N representative values of t, t₁, …, t_N, namely

    v₁ = v(t₁), …, v_N = v(t_N).   (1.58)

The set of snapshots can be calculated in various ways. Here, we only assume that it approximately contains the dynamics that is being calculated. Applying


(compact) POD machinery, as described in the last section, we obtain the POD modes,

    u₁, …, u_r,   (1.59)

which are orthonormal with the usual L² inner product, and the singular values associated with the snapshots (1.58), denoted as

    σ₁ ≥ σ₂ ≥ … ≥ σ_r > 0,   (1.60)

where r is the rank of the snapshot set. As explained in the last section, when the singular values decay fast, the low-dimensional vector space spanned by the first n POD modes, with n somewhat small, approximately contains the vector space spanned by the snapshots and, seemingly, it also approximately contains the dynamics of the system if the snapshots have been selected well. Thus, we approximate the dynamics of (1.56) as

    v ≈ v^GS_n = Σ_{i=1}^{n} A_i(t) u_i,   (1.61)

for certain amplitudes A_i. Replacing (1.61) into (1.56) and projecting the resulting equations onto the POD modes (recalling that these are orthonormal) yields the following Galerkin system (GS):

    dA_i/dt = Σ_{j=1}^{n} L^GS_{ij} A_j + f^GS_i(A₁, …, A_n),   (1.62)

where the matrix L^GS and the nonlinear function f^GS are defined by

    L^GS_{ij} = ⟨u_i, L u_j⟩,  f^GS_i = ⟨u_i, f(Σ_{k=1}^{n} A_k u_k)⟩,   (1.63)

in terms of the L² inner product ⟨·, ·⟩, approximated consistently with the discretization of (1.56) that is being used. The n × n matrix L^GS is computed only once, but the function f^GS depends nonlinearly on A₁, …, A_n, and must be calculated at each time step in the temporal integration of (1.62); this can be quite computationally expensive if calculations are based on the whole set of mesh points used by the numerical solver (as must be done if the usual L² inner product is used in (1.63)). This computational cost can be decreased by noting that the evaluation of f^GS_i can be performed using a limited number of mesh points, of the order of the number of retained POD modes (say, 2–4 times as many). In other words, a set of mesh points is selected in the spatial grid where the snapshots are defined, {x₁, …, x_{Np}}. Using these, the following inner product


is considered to both apply POD and perform Galerkin projection:

    ⟨u₁, u₂⟩ = (1/N_p) Σ_{k=1}^{N_p} u₁(x_k) · u₂(x_k),   (1.64)

where $\cdot$ is a natural inner product to multiply two state vectors. Note that (1.64) is not a true inner product in the (infinite-dimensional) phase space of Eq. (1.56), E. Usually, it is not a true inner product in the space of all possible spatial distributions of the state vector u on the whole computational mesh either, but it is generically an inner product in the POD manifold, provided that the number of selected mesh points, Np, is somewhat larger than the number of retained POD modes, as has been repeatedly checked by one of the authors of this book (and collaborators) in related work (see, e.g., [2–4,12,140,174]). Furthermore, the mesh points can be selected using only mild requirements: they should cover the representative regions of the expected solution. In fact, equispaced mesh points are frequently (when the complexity is not strongly localized in space) a good selection [2–4,12,140], which gives impressive results [175] (namely, highly complex dynamics are well simulated using snapshots computed in transients converging to simple attractors). Obviously, concentrating the selected mesh points in those regions where the solution shows strong activity (e.g., boundary layers) improves computational efficiency, as expected [174]. Also, the selection of mesh points can avoid regions of concentrated errors, which must be expected in the fairly rough numerical solvers that are usual in industrial applications [2–4,12]. On the other hand, a good strategy could be an automatic selection using an appropriate sampling method, such as missing point estimation [11], empirical [15,52] and discrete empirical [33,51,133] interpolation, hyper-reduction [7,148], and LUPOD [142].

A different idea from the above-mentioned projection-based ROMs relying on the method of snapshots is the method known as proper generalized decomposition [36], which consists of obtaining tensorial representations of the solution in a given grid directly from the governing equations. This method gives very good results for linear and some nonlinear problems depending on a large number of variables or parameters.
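To make the restricted inner product idea concrete, here is a Python/NumPy sketch (entirely ours, not part of the book's MATLAB codes; the snapshot data, mesh size, and number of selected points are invented for illustration) that computes POD via the method of snapshots using only Np equispaced mesh points:

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, Np = 2000, 40, 200           # mesh points, snapshots, selected points

# synthetic snapshots: three smooth spatial modes with random time coefficients
x = np.linspace(0.0, 1.0, J)
modes = np.stack([np.sin((m + 1) * np.pi * x) for m in range(3)])  # (3, J)
V = (rng.standard_normal((K, 3)) @ modes).T                        # (J, K)

# restricted inner product: correlations computed on Np equispaced points only
sel = np.linspace(0, J - 1, Np).astype(int)
C = V[sel].T @ V[sel]              # K x K snapshot correlation matrix

# method of snapshots: eigenvectors of C give the temporal coefficients;
# POD modes on the full mesh follow by combining the snapshots
lam, W = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, W = lam[order], W[:, order]
Phi = V @ W[:, :3]                 # three POD modes on the full mesh
Phi /= np.linalg.norm(Phi, axis=0)

energy = lam[:3].sum() / lam.sum() # fraction of energy in the leading modes
```

Even though only 10% of the mesh points enter the inner product, the three retained modes capture essentially all of the energy, illustrating why mild selection requirements suffice.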

1.3 Organization of the book

After the present introductory chapter, the main post-processing tools that will be used along the book, namely the higher order dynamic mode decomposition and the spatio-temporal Koopman mode decomposition, will be described in Chapters 2 and 4, respectively, which contain the main theoretical part of the book. The remaining chapters are devoted to various applications of these methods, including the treatment of experimental data resulting from magnetic resonance and flight tests (Chapter 3), numerical data in some pattern forming systems (Chapter 5), the analysis of some representative numerical and experimental data in fluid dynamics (Chapter 6), the study of some relevant numerical and experimental data useful in the wind industry (Chapter 7), and the use of the higher order dynamic mode decomposition and spatio-temporal Koopman decomposition methods as purely data-driven ROMs (Chapter 8). The main conclusions of the book are given in Chapter 9.

1.4 Some concluding remarks

This preliminary chapter has focused on several issues, including:

• An introduction to post-processing tools, concentrating on those tools that will be used in the remainder of the book. In particular, SVD, which applies to matrices (two-dimensional databases), has been addressed in Sect. 1.1.1, where three versions of this tool were considered, namely economy SVD, compact SVD, and truncated SVD. These versions of SVD were illustrated in a toy model in Sect. 1.1.2. On the other hand, the POD method, which is closely related to SVD, was considered in Sect. 1.1.3. The extension of SVD to tensors (higher-than-two-dimensional databases) was addressed in Sect. 1.1.4, where the HOSVD method was described, considering three versions of this method, namely economy HOSVD, compact HOSVD, and truncated HOSVD, which are higher-dimensional extensions of economy SVD, compact SVD, and truncated SVD, respectively. The HOSVD method was illustrated in a toy model in Sect. 1.1.5.

• An introduction to ROMs, addressing both purely data-driven ROMs (Sect. 1.2.1), based on POD or HOSVD, and projection-based ROMs (Sect. 1.2.2), based on POD plus Galerkin projection. The latter ROMs result from projecting the governing equations onto a limited set of POD modes, which in turn are obtained by applying POD to a set of representative snapshots computed with a standard numerical solver.

• A short description of the organization of the remainder of the book, in Sect. 1.3.

1.5 Annexes to Chapter 1

Annex 1.1. Selected practice problems for Chapter 1, which can be completed using the MATLAB functions given in Annex 1.2:

1. Apply both economy SVD and compact SVD to the matrix A defined in Eq. (1.10) and check that the obtained singular values are as displayed in Eqs. (1.11) and (1.12), respectively.
2. Use the function (1.19) to calculate the matrix A defined in (1.20). For this matrix, check that its approximate rank is 18 and reproduce Figs. 1.3 and 1.4.
3. Elaborate on the questions raised in the last practice problem by changing the size of the matrix A, considering at least the cases in which A is a 100 × 200 and a 200 × 100 matrix.


4. Add to the matrix considered in the second practice problem the noise defined in Eqs. (1.21)–(1.22) and reproduce Fig. 1.5.
5. Use the function (1.41) to calculate the tensor defined in (1.42). Check that the approximate ranks of this tensor along the first, second, and third dimensions are 25, 18, and 26, respectively. Reproduce Fig. 1.8.

Annex 1.2. MATLAB functions to compute the various described versions of SVD and HOSVD

This annex includes some MATLAB functions to compute:

• Compact SVD and truncated SVD, which are implemented in the functions 'CompactSVD.m' and 'TruncSVD.m', given in items A and B below, respectively. These functions only need some standard MATLAB commands; namely, they do not need any of the additional functions given below. The SVD variant economy SVD, which was mentioned along the chapter, is not considered below because it does not need any specific new MATLAB function. Instead, economy SVD is computed using the MATLAB command 'svd' with the option 'econ'.

• Economy HOSVD, compact HOSVD, and truncated HOSVD, which are implemented in the functions 'EconomyHOSVD.m', 'CompactHOSVD.m', and 'TruncHOSVD.m', respectively, given in items C–E below. These functions need to have, in the same directory, the functions 'CompactSVD.m' and 'TruncSVD.m', along with several additional MATLAB functions (to perform the various operations involved in the construction of HOSVD) that can be downloaded from the Mathworks repository https://es.mathworks.com/matlabcentral/fileexchange/25514-tp-tool.

A. Compact SVD

This variant of SVD is computed using the following MATLAB function:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [U,sv,V,Aapprox] = CompactSVD(A)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function computes the compact SVD of the matrix A.
% This is done by first computing economy SVD and then
% retaining only those modes whose singular values are
% such that sigma(n)/sigma(1) >= 1e-15, which defines
% the approximate rank of A.
% U and V are the matrices of left and right modes,
% respectively.
% sv are the singular values, sorted in decreasing order.
% Aapprox = U*S*V' is the associated approximation of A,
% where the diagonal matrix S is S = diag(sv).

% APPLY STANDARD svd
[U1,S1,V1] = svd(A,'econ');
sv = diag(S1);
% TRUNCATE ACCORDING TO THE TOLERANCE 1e-15 TO
% COMPUTE THE APPROXIMATE RANK
cea = 0;
for mm = 1:length(sv)
    if sv(mm)/sv(1) >= 1e-15
        cea = cea+1;
    end
end
sv = sv(1:cea);
% COMPUTE THE TRUNCATED SINGULAR VALUES MATRIX
% AND THE TRUNCATED LEFT AND RIGHT MODE MATRICES
S = diag(sv);
U = U1(:,1:cea);
V = V1(:,1:cea);
% COMPUTE THE APPROXIMATION OF THE MATRIX
Aapprox = U*S*V';
end
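For reference outside MATLAB, a minimal NumPy sketch of the same compact SVD (the function name and default tolerance mirror the MATLAB version above; this translation is ours):

```python
import numpy as np

def compact_svd(A, tol=1e-15):
    """Compact SVD: economy SVD truncated at the approximate rank of A,
    defined by sigma(n)/sigma(1) >= tol, as in CompactSVD.m above."""
    U, sv, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(sv / sv[0] >= tol))          # approximate rank
    U, sv, Vh = U[:, :r], sv[:r], Vh[:r]
    Aapprox = U @ np.diag(sv) @ Vh
    return U, sv, Vh.conj().T, Aapprox
```

Applied to a rank-deficient matrix, the function recovers the approximate rank and reproduces the matrix up to round-off.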

B. Truncated SVD This MATLAB function computes truncated SVD. It needs to have, in the same directory, the MATLAB function ‘CompactSVD.m’ given in item A above.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [U,sv,V,Aapprox] = TruncSVD(A,epsilon)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function computes the truncated SVD of the
% real or complex matrix A, retaining only the smallest
% number of modes such that the relative RMS error of
% the reconstruction is not larger than epsilon.
% U and V are the matrices of left and right modes,
% respectively.
% sv are the retained singular values, sorted in
% decreasing order.
% Aapprox = U*S*V' is the associated approximation of A,
% where the diagonal matrix S is S = diag(sv).

% COMPUTATION OF THE STANDARD svd OF THE MATRIX A
[U1,S1,V1] = svd(A,'econ');
sv = diag(S1);
% NUMBER OF SINGULAR VALUES REQUIRED TO OBTAIN THE GIVEN
% APPROXIMATION: cea counts the mode numbers for which the
% truncation error still exceeds epsilon, so cea+1 modes
% must be retained
cea = 0;
MM = length(sv);
aca = norm(sv,2);
for mm = 1:MM-1
    if norm(sv(mm+1:MM),2)/aca > epsilon
        cea = cea+1;
    end
end
cea = min(cea+1,MM);
sv = sv(1:cea);
% RESTRICTION OF THE DIAGONAL MATRIX S AND THE MATRICES
% U AND V TO RETAIN ONLY THE REQUIRED NUMBER
% OF SINGULAR VALUES
S = diag(sv);
U = U1(:,1:cea);
V = V1(:,1:cea);
% COMPUTATION OF THE TRUNCATED APPROXIMATION OF A
Aapprox = U*S*V';
end
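A NumPy counterpart of this truncated SVD (our sketch; the error criterion is the relative RMS tail of the singular values, as in the text):

```python
import numpy as np

def trunc_svd(A, epsilon):
    """Truncated SVD: retain the smallest number of modes whose relative
    RMS reconstruction error is at most epsilon (sketch of TruncSVD.m)."""
    U, sv, Vh = np.linalg.svd(A, full_matrices=False)
    sq = sv ** 2
    # err[n] = relative RMS error when the first n modes are retained
    err = np.sqrt(np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]]) / sq.sum())
    n = max(int(np.argmax(err <= epsilon)), 1)  # smallest sufficient n
    U, sv, Vh = U[:, :n], sv[:n], Vh[:n]
    Aapprox = U @ np.diag(sv) @ Vh
    return U, sv, Vh.conj().T, Aapprox
```

For a matrix with singular values 1, 0.1, and 10^-6, a threshold of 10^-3 retains exactly two modes.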


C. Economy HOSVD

This MATLAB function computes economy HOSVD. It needs to have, in the same directory, some functions from the Mathworks repository https://es.mathworks.com/matlabcentral/fileexchange/25514-tp-tool.

function [S,U,sv,Tapprox] = EconomyHOSVD(T)
% Economy HOSVD of a higher-than-two order tensor
%
% T       - multidimensional array
% S       - core tensor such that T = tprod(S,U)
% U       - mode matrices for each dimension
% sv      - singular values along the various dimensions of T
% Tapprox - approximation of T given by the method
%
% Needs, in the same directory, the functions wshift.m, tprod.m,
% ndim_unfold.m, and ndim_fold.m, which can be found in the Mathworks
% repository
% https://es.mathworks.com/matlabcentral/fileexchange/25514-tp-tool

% DEFINE THE SIZE OF THE TENSOR, n, AND ITS ORDER, N
n = size(T);
N = length(n);
% ALLOCATE THE VARIOUS CELLS THAT WILL BE USED
U = cell(1,N); UT = cell(1,N); sv = cell(1,N);
% THE SUBSEQUENT OPERATIONS ARE PERFORMED ALONG EACH
% TENSOR DIMENSION
for i = 1:N
    A = ndim_unfold(T, i);
    [Ui,Si] = svd(A);
    svi = diag(Si);
    if n(i)

> M. In addition to the already mentioned optimized DMD [35] (see also [10]), there are several extensions of standard DMD that improve its performance. Among them:

• Total least-squares DMD [76] combines total least squares and standard DMD to repair noisy data. In addition, there are other improvements of standard DMD that correct the effect of noise. Among them, Dawson et al. [43] tried to characterize the noise properties, Takeishi et al. [171] combined a Bayesian formulation with DMD, and Dicle et al. [45] computed low-rank factors of the DMD operator while satisfying the total least-squares constraints.

• Extended DMD [186] extends the standard DMD approximation to include more basis functions, allowing the method to capture more complex behavior.

• Sparsity-promoting DMD [83] uses a penalty to identify a smaller set of important modes via convex optimization methods.

However, these extensions are still based on the assumption (2.8) and thus, in principle, are not appropriate for the case in which the spatial complexity, $M$, is strictly smaller than the spectral complexity, $N$. For the general case ($M \le N$), we shall replace the linear equation that is implicit in assumption (2.8) by the higher order linear equation

$$v_{k+d} \simeq R_1 v_k + R_2 v_{k+1} + \dots + R_d v_{k+d-1} \qquad (2.13)$$

for $k = 1, \dots, K-d$, where $d \ge 1$ is a tunable integer. Obviously, this new assumption reduces to (2.8) if $d = 1$. Equation (2.13) is the essence of the HODMD method [101], which will be considered in Sect. 2.2.2, where an algorithm will be given to compute the expansion (2.6) based on assumption (2.13). It turns out that, with appropriate $d$, the HODMD method is able to construct the expansion (2.6) for general values of the spatial and spectral complexities (see Appendix A in [101]). Note that, in assumption (2.13), each snapshot is linearly related not only to the last snapshot but also to $d-1$ additional, former time-lagged snapshots. In fact, the use of delayed snapshots in (2.13) can be seen as applying standard DMD to a set of enlarged snapshots that include not only ordinary snapshots but also delayed snapshots. In other words, HODMD synergistically combines standard DMD with some consequences [151] of Takens' delayed embedding theorem [172], in which Takens followed and formalized former seminal ideas by Packard et al. [129]. Moreover, combinations of DMD and delayed snapshots had previously been suggested [179] and performed [24], but in a spirit different from that in [101]. Other applications of Takens' theorem in different contexts (not related to DMD) include, e.g., modal parameter identification and model reduction [84], identification of invariant sets and bifurcations


from time-series [23], and the analysis of high-dimensional time-series combining Laplacian eigenmaps with time-lagged embedding [67].

As it happened with standard DMD, Eq. (2.13) is related to Koopman operator theory. In fact, it is related to a combination of Koopman theory and delay embedding. For an arbitrary vector observable function $g$, the joint Koopman operator $\widetilde{R} = (R_1, \dots, R_d)$ is defined such that its action on the observable $g$ is given by (cf. Eq. (2.9))

$$(R_1 \cdot g)(v(t)) + (R_2 \cdot g)(v(t+\Delta t)) + \dots + (R_d \cdot g)(v(t+(d-1)\Delta t)) = g(v(t+d\,\Delta t)). \qquad (2.14)$$

As it happened with the standard Koopman operator defined in (2.9), this joint Koopman operator $\widetilde{R}$ is linear and infinite-dimensional, and reduces to (2.13) by setting $g =$ the identity. Thus, the relation of (2.13) with Koopman theory has been established.
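The delay-embedding idea behind (2.13) can be sketched directly: stacking each snapshot with its $d-1$ successors turns the snapshot matrix into a Hankel-type enlarged matrix, to which standard DMD can be applied. A NumPy illustration (ours, not part of the book's codes):

```python
import numpy as np

def enlarge_snapshots(V, d):
    """Stack each snapshot with its d-1 successors: the J x K snapshot
    matrix V becomes a (d*J) x (K-d+1) delay-embedded matrix."""
    J, K = V.shape
    return np.vstack([V[:, i:K - d + 1 + i] for i in range(d)])
```

For $d = 1$ this returns V itself, recovering standard DMD.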

2.2 DMD and HODMD: methods and algorithms

Let us describe the HODMD method and give an algorithm to compute the expansion (2.6) using this method. This algorithm will be called the DMD-d algorithm, where $d \ge 1$ is the index appearing in the assumption (2.13). For the sake of clarity, we begin by considering standard DMD as a particular case of HODMD. This will permit us to clearly see how HODMD is an extension of standard DMD.

2.2.1 The standard (optimized) DMD method: the DMD-1 algorithm

Here, for the sake of clarity, we give a method to compute standard optimized DMD that, in fact, coincides with the DMD-d method described in the next section to compute HODMD for the particular case $d = 1$. Also, the DMD-1 algorithm considered here is very similar to the method proposed by Schmid [155], except for one minor point that will be commented on below. As anticipated, standard DMD relies on the assumption (2.8), which is recalled here for convenience

$$v_{k+1} \simeq R\,v_k \quad \text{for } k = 1, \dots, K-1. \qquad (2.15)$$

This equation can be written in matrix form (in terms of snapshot matrices) as

$$V_2^K \simeq R\,V_1^{K-1}. \qquad (2.16)$$

Here, for general $k_1 < k_2$, the snapshot matrix $V_{k_1}^{k_2}$ is defined such that its columns are the snapshots with indices between $k_1$ and $k_2$, namely

$$V_{k_1}^{k_2} = [v_{k_1}, v_{k_1+1}, \dots, v_{k_2}]. \qquad (2.17)$$


The method proceeds in three steps:

• Step 1: dimension reduction. This step takes advantage of redundancies between the snapshots, and is performed by applying the truncated SVD method (1.1) (implemented in the MATLAB function 'TruncSVD.m' given in Chapter 1, Annex 1.2) to the full snapshot matrix, as

$$V_1^K \simeq U\,S\,T^H, \qquad (2.18)$$

with

$$U^H U = T^H T = I_{\tilde N \times \tilde N}, \qquad (2.19)$$

where $I_{\tilde N \times \tilde N}$ is the unit matrix of order $\tilde N$. The number of retained SVD modes, $\tilde N$, is chosen by limiting the relative RMS error of the approximation, according to (1.17). Namely, $\tilde N$ is the smallest index such that

$$\sqrt{\frac{\sigma_{\tilde N+1}^2 + \sigma_{\tilde N+2}^2 + \dots + \sigma_r^2}{\sigma_1^2 + \sigma_2^2 + \dots + \sigma_r^2}} < \varepsilon_{\mathrm{SVD}}, \qquad (2.20)$$

where $r \le \min\{J, K\}$ is the (approximate) rank of the full snapshot matrix $V_1^K$ and the threshold $\varepsilon_{\mathrm{SVD}}$ is tunable. When the data are very precise, the default value for $\varepsilon_{\mathrm{SVD}}$ is $10^{-8}$, but larger values may be convenient to filter numerical and experimental errors. For noisy snapshots, $\varepsilon_{\mathrm{SVD}}$ can be taken as comparable to the noise level if this is known beforehand. Otherwise, the noise level can be guessed from the singular values distribution, namely by changes in the tendency of the plot of $\sigma_n$ vs. $n$; see Fig. 1.5 in Chapter 1.

As in (1.1), the matrix $S$ appearing in (2.18) is the $\tilde N \times \tilde N$ diagonal matrix containing the retained SVD singular values sorted in decreasing order, $\sigma_1, \sigma_2, \dots, \sigma_{\tilde N}$, and the (orthonormal) columns of the $J \times \tilde N$ matrix $U$ and the $K \times \tilde N$ matrix $T$ are the spatial and temporal SVD-modes, respectively. Now, Eq. (2.18) can also be written as

$$V_1^K \simeq U\,\widetilde V_1^K, \qquad (2.21)$$

where the reduced snapshot matrix is defined as

$$\widetilde V_1^K = S\,T^H, \qquad (2.22)$$

or

$$\widetilde V_1^K = U^H V_1^K. \qquad (2.23)$$

According to (2.22), the rows of the $\tilde N \times K$ matrix $\widetilde V_1^K$ are proportional to the SVD temporal modes and can thus be seen as rescaled temporal modes.


Rescaling is important to get consistent results in the dimension-reduced formulation.

The columns of $\widetilde V_1^K$ will be called the reduced snapshots and denoted as $\tilde v_1, \dots, \tilde v_K$. Namely, the reduced snapshot matrix is written as

$$\widetilde V_1^K = [\tilde v_1, \dots, \tilde v_K]. \qquad (2.24)$$

The reduced snapshots exhibit a much smaller dimension than the original snapshots if $\tilde N \ll J$, which is the usual case when simulating large scale systems that, because of redundancies, admit a strong dimension reduction. Thus, it is this reduced snapshot matrix that will be used below in preliminary computations, to calculate the damping rates, frequencies, and modes. In particular, Eq. (2.21) implies that the snapshots and reduced snapshots are related as

$$v_k \simeq U\,\tilde v_k, \qquad (2.25)$$

or, pre-multiplying (2.25) by $U^H$ and invoking (2.19),

$$\tilde v_k \simeq U^H v_k. \qquad (2.26)$$

Similarly, pre-multiplying Eq. (2.6) by $U^H$ leads to the reduced expansion

$$\tilde v_k \simeq \sum_{n=1}^{\tilde N} a_n \tilde u_n e^{(\delta_n + i\,\omega_n)(k-1)\Delta t} \qquad (2.27)$$

for $k = 1, \dots, K$, with

$$\tilde u_n = U^H u_n. \qquad (2.28)$$

And, on the contrary, if the expansion (2.27) were known (in fact, this expansion will be computed in the next step), then the expansion (2.6) (whose computation is the goal of this method) would readily follow by pre-multiplying (2.27) by $U$ and setting

$$u_n = U\,\tilde u_n. \qquad (2.29)$$
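Step 1 can be sketched in NumPy as follows (our translation; the truncation criterion is that of Eq. (2.20), and the test data are invented):

```python
import numpy as np

def reduce_snapshots(V, eps_svd=1e-8):
    """Truncated SVD of the J x K snapshot matrix V: returns the spatial
    modes U and the reduced snapshots Vtil = U^H V of Eq. (2.23), with
    the number of retained modes chosen as in Eq. (2.20)."""
    U, sv, Th = np.linalg.svd(V, full_matrices=False)
    sq = sv ** 2
    # err[n] = relative RMS error when the first n modes are retained
    err = np.sqrt(np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]]) / sq.sum())
    Ntil = max(int(np.argmax(err < eps_svd)), 1)
    U = U[:, :Ntil]
    return U, U.conj().T @ V
```

Applied to a rank-2 snapshot matrix, the function retains two modes and reconstructs the data as $U\,\widetilde V_1^K$, as in Eq. (2.21).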

• Step 2: computation of the DMD modes, growth rates, and frequencies via the reduced Koopman matrix. Pre-multiplying Eqs. (2.8) and (2.16) by $U^H$ and invoking (2.25)–(2.26) yield

$$\tilde v_{k+1} \simeq \widetilde R\,\tilde v_k \qquad (2.30)$$

in terms of the reduced snapshots, or

$$\widetilde V_2^K \simeq \widetilde R\,\widetilde V_1^{K-1}, \qquad (2.31)$$


in terms of the reduced snapshot matrix computed above, in Eq. (2.23). On the other hand, the $\tilde N \times \tilde N$ matrix $\widetilde R$ will be called the reduced Koopman matrix and is related to the Koopman matrix appearing in (2.16) by

$$\widetilde R = U^H R\,U. \qquad (2.32)$$

This equation is written here for consistency, but it is useless to compute the reduced Koopman matrix $\widetilde R$ because the original Koopman matrix $R$ is not known. Instead, the reduced Koopman matrix is now calculated from Eq. (2.31) via the pseudoinverse, as follows. Standard compact SVD (no truncation), implemented in the MATLAB function 'CompactSVD.m' given in Annex 1.2 to Chapter 1, applied to the matrix $\widetilde V_1^{K-1}$ leads to

$$\widetilde V_1^{K-1} = \hat U\,\hat S\,\hat T^H. \qquad (2.33)$$

Here, since $\tilde N \le K-1$ (see Eq. (2.7)), the $\tilde N \times \tilde N$ matrix $\hat U$ and the $(K-1) \times \tilde N$ matrix $\hat T$ are such that

$$\hat U^H \hat U = \hat U \hat U^H = I_{\tilde N \times \tilde N} \quad \text{and} \quad \hat T^H \hat T = I_{\tilde N \times \tilde N}, \qquad (2.34)$$

where $I_{\tilde N \times \tilde N}$ is the unit matrix of order $\tilde N$, and the $\tilde N \times \tilde N$ diagonal matrix $\hat S$ is non-singular. Substituting (2.33) into (2.31), post-multiplying the resulting equation by $\hat T\,\hat S^{-1}\hat U^H$, and invoking (2.34) lead to the following expression for the reduced Koopman matrix:

$$\widetilde R \simeq \widetilde V_2^K\,\hat T\,\hat S^{-1}\hat U^H. \qquad (2.35)$$

Now, once the reduced Koopman matrix $\widetilde R$ has been calculated, assuming that its eigenvalues are all different from each other (which is generic, and consistent with our assumption above that the growth rate/frequency pairs appearing in the expansion (2.1) are different from each other; see Eq. (2.38) below), the general solution of Eq. (2.30) can be written as

$$\tilde v_k \simeq \sum_{n=1}^{\tilde N} a_n q_n \mu_n^{k-1}, \quad \text{with } \|q_n\|_2 = 1, \qquad (2.36)$$

where $q_n$ are the conveniently rescaled eigenvectors of $\widetilde R$ and $\mu_n$ are the associated eigenvalues. This equation readily leads to the expansions (2.6) and (2.27), where, invoking (2.25), the reduced modes, modes, growth rates, and frequencies appearing in (2.6) and (2.27) are given by

$$\tilde u_n = q_n, \quad u_n = U\,\tilde u_n \qquad (2.37)$$

and

$$\delta_n + i\,\omega_n = \frac{1}{\Delta t}\log \mu_n \qquad (2.38)$$


for $n = 1, 2, \dots, \tilde N$. Note that we already have all ingredients appearing in the right hand sides of the expansions (2.6) and (2.27), except for the mode amplitudes $a_n$, which are computed, rescaled, and truncated in the next step. Also note that the number of linearly independent reduced modes or modes in the expansions (2.6) and (2.27) is $\tilde N$, which coincides with the number of growth rates/frequencies, consistently with the fact that the spatial and spectral complexities coincide in the present case, as anticipated above.

As also anticipated, the calculation of the modes, damping rates, and frequencies here is readily seen to differ from that in [155] (p. 9) in only one point. The matrix $U$ is calculated here, in Eq. (2.18), by applying truncated SVD to the whole snapshot matrix $V_1^K$, while Schmid [155] calculates $U$ by applying truncated SVD to the smaller snapshot matrix $V_1^{K-1}$ (ignoring the last snapshot). If the number of snapshots $K \gg 1$ (the usual case), then both calculations give almost identical results. In fact, in all calculations performed in the book, the method in [155] (p. 9) gives quite similar results to those obtained by the method described here, which will be called DMD-1 in the sequel. Thus, the method in [155] (p. 9) will not be further considered in this book.

• Step 3: computation, truncation, and rescaling of the mode amplitudes. The DMD amplitudes are now calculated considering Eq. (2.36) (a version of the reduced expansion (2.27) when using (2.38)), which is recalled here for convenience

$$\tilde v_k \simeq \sum_{n=1}^{\tilde N} a_n \tilde u_n \mu_n^{k-1} \qquad (2.39)$$

for $k = 1, \dots, K$. Since the size of the (known) reduced snapshots, $\tilde v_k$, is $\tilde N$ and we have $\tilde N$ linearly independent reduced modes $\tilde u_n$, each of the equations appearing in (2.39), for any value of $k$, could uniquely determine the mode amplitudes $a_n$. However, the various approximations made above could have introduced (small but nonzero) errors that would decrease the accuracy of the computation of the amplitudes if only one of the equations appearing in (2.39) were used. Thus, instead, we consider the whole system of $K$ equations. This system is highly over-determined, especially if $K \gg 1$ (the usual case), but can be solved using the pseudoinverse, which represents a minimization of the least-squares error in the approximation (2.39) and is essentially equivalent to the so-called optimized DMD method [35]. This step is performed by rewriting (2.39) in matrix form as

$$L\,a = b, \qquad (2.40)$$

with the $(\tilde N K) \times \tilde N$ coefficient matrix $L$, the unknown amplitudes vector $a$, and the forcing term $b$ given by

$$L = \begin{bmatrix} \widetilde U \\ \widetilde U M \\ \vdots \\ \widetilde U M^{K-1} \end{bmatrix}, \quad a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_{\tilde N} \end{bmatrix}, \quad b = \begin{bmatrix} \tilde v_1 \\ \tilde v_2 \\ \vdots \\ \tilde v_K \end{bmatrix}. \qquad (2.41)$$

Here, the columns of the $\tilde N \times \tilde N$ matrix $\widetilde U$ are the reduced modes appearing in (2.39), namely $\widetilde U = [\tilde u_1, \dots, \tilde u_{\tilde N}]$, and the $\tilde N \times \tilde N$ diagonal matrix $M$ is formed by the eigenvalues appearing in Eq. (2.38), $\mu_1, \dots, \mu_{\tilde N}$. Equation (2.40) is solved via the pseudoinverse by applying standard compact SVD (no truncation), implemented in the MATLAB function 'CompactSVD.m' given in Annex 1.2 to Chapter 1, to the matrix $L$, which gives

$$L = \hat U_1 \hat S\,\hat U_2^H, \qquad (2.42)$$

with the $\tilde N \times \tilde N$ diagonal matrix $\hat S$ non-singular, and the $(\tilde N K) \times \tilde N$ matrix $\hat U_1$ and the $\tilde N \times \tilde N$ matrix $\hat U_2$ such that

$$\hat U_1^H \hat U_1 = \hat U_2^H \hat U_2 = \hat U_2 \hat U_2^H = I_{\tilde N \times \tilde N}. \qquad (2.43)$$

Here, $I_{\tilde N \times \tilde N}$ denotes the unit matrix of order $\tilde N$. Substituting (2.42) into (2.40), pre-multiplying the resulting equation by $\hat U_2 \hat S^{-1} \hat U_1^H$, and invoking (2.43) yield

$$a = \hat U_2 \hat S^{-1} \hat U_1^H b. \qquad (2.44)$$
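A NumPy sketch of this amplitude calculation (ours; here a generic least-squares routine plays the role of the pseudoinverse of (2.42)–(2.44), and all data in the example are invented):

```python
import numpy as np

def dmd_amplitudes(Utilde, mu, Vtil):
    """Solve L a = b of Eqs. (2.40)-(2.41) in the least-squares sense.
    Utilde: Ntil x N reduced modes (columns); mu: N eigenvalues;
    Vtil: Ntil x K reduced snapshots."""
    K = Vtil.shape[1]
    # k-th block of L is Utilde @ diag(mu**k), i.e. Utilde * mu**k columnwise
    L = np.vstack([Utilde * mu[None, :] ** k for k in range(K)])
    b = Vtil.T.reshape(-1)                 # reduced snapshots, stacked
    a, *_ = np.linalg.lstsq(L, b, rcond=None)
    return a
```

For exact data the least-squares solution recovers the amplitudes that generated the snapshots.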

This completes a preliminary calculation of the amplitudes (these will be redefined below), which is usually the most computationally expensive step if $K \gg 1$ (the usual case). However, the dimension reduction performed in the first step highly decreases the computational cost of this calculation when $J$ is much larger than $\tilde N$ (which is typical for extended systems).

Once the amplitudes have been calculated, we truncate the reduced expansion (2.27) by neglecting those terms such that

$$\frac{|a_n|}{\max_m |a_m|} < \varepsilon_{\mathrm{DMD}}, \qquad (2.45)$$

for some threshold $\varepsilon_{\mathrm{DMD}}$ that controls the number of modes to be retained and should be somewhat smaller than the intended relative RMS error in the reconstruction of the reduced snapshots, defined as

$$\mathrm{RRMSE} \equiv \sqrt{\frac{\sum_{k=1}^K \|\tilde v_k - \tilde v_k^{\mathrm{approx.}}\|_2^2}{\sum_{k=1}^K \|\tilde v_k\|_2^2}}, \qquad (2.46)$$

where $\tilde v_k$ are the reduced snapshots, as above, and $\tilde v_k^{\mathrm{approx.}}$ denotes their reconstruction via the DMD-1 method. Note that this generally gives a smaller


number of retained modes, $N \le \tilde N$. Also note that, since the modes are linearly independent, this selection generally decreases both the spatial and temporal complexities (from $\tilde N$ to $N$), which remain equal to each other in this method, as anticipated. The expansion (2.27) is rewritten as

$$\tilde v_k \simeq \sum_{n=1}^{N} a_n \tilde u_n e^{(\delta_n + i\,\omega_n)(k-1)\Delta t}. \qquad (2.47)$$

Finally, we rescale the mode amplitudes in the full expansion Eq. (2.6), which is obtained from the reduced expansion (2.47) by pre-multiplying this latter expansion by $U$ and invoking (2.25) and (2.29). Namely, the full expansion (2.6) is given by

$$v_k \simeq \sum_{n=1}^{N} a_n u_n e^{(\delta_n + i\,\omega_n)\,t_k} \qquad (2.48)$$

where the modes $u_n$ are related to their counterparts appearing in the reduced expansion (2.47) by

$$u_n = U\,\tilde u_n, \qquad (2.49)$$

and the mode amplitudes are equal in both cases. Now, we require that (i) the mode amplitudes be real and positive (which is not necessarily true as computed above, especially for complex snapshots) and (ii) the modes $u_n$ exhibit a unit RMS norm. Both conditions are accomplished by redefining both the modes and the amplitudes as

$$u_n^{\mathrm{new}} = \frac{\sqrt{J}\,a_n^{\mathrm{old}} u_n^{\mathrm{old}}}{\|a_n^{\mathrm{old}} u_n^{\mathrm{old}}\|_2}, \qquad a_n^{\mathrm{new}} = \frac{\|a_n^{\mathrm{old}} u_n^{\mathrm{old}}\|_2}{\sqrt{J}}, \qquad (2.50)$$

where $\|\cdot\|_2$ is the Euclidean norm, as above. Note that the new rescaled modes $u_n$ exhibit unit RMS norm, the new amplitudes are real and $\ge 0$, as required, and the product $a_n u_n$ remains invariant under this transformation, which means that the expansion (2.6) also remains invariant. The relative RMS reconstruction error using the expansion (2.6) is given by

$$\mathrm{RRMSE} \equiv \sqrt{\frac{\sum_{k=1}^K \|v_k - v_k^{\mathrm{approx.}}\|_2^2}{\sum_{k=1}^K \|v_k\|_2^2}}, \qquad (2.51)$$

where $v_k$ are the original snapshots and $v_k^{\mathrm{approx.}}$ denotes their reconstruction via the DMD-1 method.
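The three steps above can be sketched end to end on an invented toy signal (Python/NumPy, ours; the spatial structures, damping rates, and frequencies below are made up for illustration):

```python
import numpy as np

J, K, dt = 300, 200, 0.05
x = np.linspace(0.0, 1.0, J)[:, None]
t = (np.arange(K) * dt)[None, :]
# two damped oscillations, each with a two-dimensional spatial structure,
# so that the spatial and spectral complexities coincide (M = N = 4)
V = (np.real((np.sin(2 * np.pi * x) + 1j * np.cos(2 * np.pi * x))
             * np.exp((-0.1 + 2.0j) * t))
     + np.real((np.sin(3 * np.pi * x) + 1j * np.cos(3 * np.pi * x))
               * np.exp((-0.3 + 5.0j) * t)))

# Step 1: dimension reduction (Eqs. (2.18)-(2.23)); the rank is 4 here
U, s, Th = np.linalg.svd(V, full_matrices=False)
U = U[:, :4]
Vtil = U.T @ V

# Step 2: reduced Koopman matrix and its spectrum (Eqs. (2.35)-(2.38))
Rtil = Vtil[:, 1:] @ np.linalg.pinv(Vtil[:, :-1])
mu = np.linalg.eigvals(Rtil)
delta = np.log(np.abs(mu)) / dt      # damping rates: -0.1 and -0.3
omega = np.angle(mu) / dt            # frequencies: +-2 and +-5
```

Step 3 (amplitudes, truncation, and rescaling) then proceeds as in Eqs. (2.40)–(2.50).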

2.2.2 The DMD-d algorithm with d > 1

As anticipated above, strict HODMD (not reducing to standard DMD) relies on the higher order Koopman condition (2.13), which for convenience is rewritten


here as

$$v_{k+d} \simeq R_1 v_k + R_2 v_{k+1} + \dots + R_d v_{k+d-1}, \qquad (2.52)$$

for $k = 1, \dots, K-d$. This more general condition is now used to construct an algorithm to compute the expansion (2.6) in cases in which the DMD-1 algorithm considered in the last section does not give optimal results. The DMD-d algorithm proceeds in four steps:

• Step 1: first dimension reduction. This step coincides with step 1 in the DMD-1 method but, for the sake of clarity, it is summarized here. To begin with, the truncated SVD method (1.1) (implemented in the MATLAB function 'TruncSVD.m' given in Annex 1.2 to Chapter 1) is applied to the full snapshot matrix, $V_1^K = [v_1, v_2, \dots, v_K]$, as

$$V_1^K \simeq U\,S\,T^H, \qquad (2.53)$$

where the number of retained modes, $\tilde N$, is the smallest integer such that

$$\sqrt{\frac{\sigma_{\tilde N+1}^2 + \sigma_{\tilde N+2}^2 + \dots + \sigma_r^2}{\sigma_1^2 + \sigma_2^2 + \dots + \sigma_r^2}} < \varepsilon_{\mathrm{SVD}}. \qquad (2.54)$$

Here, $r$ is the (approximate) rank of the snapshot matrix and, as in the DMD-1 algorithm, the selection of the tunable threshold $\varepsilon_{\mathrm{SVD}}$ depends on the noise level that is present in the snapshots. Namely, when the snapshots are very precise, the default value for $\varepsilon_{\mathrm{SVD}}$ is $10^{-8}$, but larger values can be convenient for noisy snapshots, to filter numerical and experimental errors. If the noise level is known in advance, then $\varepsilon_{\mathrm{SVD}}$ can be taken as comparable to this noise level. Otherwise, the noise level can be guessed from the singular values distribution, namely by changes in the tendency of the plot of $\sigma_n$ vs. $n$; see Fig. 1.5.

The $\tilde N \times \tilde N$ diagonal matrix $S$ appearing in Eq. (2.53) is nonsingular and contains in its diagonal the retained singular values; the $J \times \tilde N$ matrix $U$ is such that

$$U^H U = I_{\tilde N \times \tilde N}, \qquad (2.55)$$

where $I_{\tilde N \times \tilde N}$ is the unit matrix of order $\tilde N$. Equation (2.53) is rewritten as

$$V_1^K \simeq U\,\widetilde V_1^K, \qquad (2.56)$$

where

$$\widetilde V_1^K = S\,T^H \qquad (2.57)$$

will be called the reduced snapshot matrix, and its columns, the reduced snapshots, denoted as $\tilde v_k$. Note that Eq. (2.56) implies that the snapshots


and reduced snapshots are related as

$$v_k \simeq U\,\tilde v_k, \qquad (2.58)$$

or, pre-multiplying this equation by $U^H$ and invoking (2.55),

$$\tilde v_k \simeq U^H v_k. \qquad (2.59)$$

Using these, the counterpart of Eq. (2.52) for the reduced snapshots reads

$$\tilde v_{k+d} \simeq \widetilde R_1 \tilde v_k + \widetilde R_2 \tilde v_{k+1} + \dots + \widetilde R_d \tilde v_{k+d-1}, \qquad (2.60)$$

for $k = 1, \dots, K-d$, where

$$\widetilde R_1 = U^H R_1 U, \quad \widetilde R_2 = U^H R_2 U, \quad \dots, \quad \widetilde R_d = U^H R_d U. \qquad (2.61)$$

This equation is written here for consistency, but it is useless to compute the reduced Koopman matrices, $\widetilde R_1, \widetilde R_2, \dots, \widetilde R_d$, because the original Koopman matrices, $R_1, \dots, R_d$, are not known. Instead, we define the enlarged–reduced snapshots, whose size is $d\tilde N$, as

$$v_k^* \equiv \begin{bmatrix} \tilde v_k \\ \tilde v_{k+1} \\ \vdots \\ \tilde v_{k+d-2} \\ \tilde v_{k+d-1} \end{bmatrix}, \qquad (2.62)$$

for $k = 1, \dots, K-d+1$. The name 'enlarged–reduced snapshots' comes from the fact that these snapshots are defined by enlarging, in (2.62), the reduced snapshots $\tilde v_1, \tilde v_2, \dots$. Noting that, according to Eq. (2.60), each reduced snapshot depends linearly on the former $d$ reduced snapshots, it turns out that each enlarged–reduced snapshot is related to the former one through an enlarged–reduced Koopman matrix, $R^*$, whose size is $d\tilde N \times d\tilde N$, as

$$v_{k+1}^* \simeq R^* v_k^*, \qquad (2.63)$$

for $k = 1, \dots, K-d$. Note that, when defining the enlarged–reduced snapshots (2.62), we are losing $d-1$ of the original snapshots, and we have the enlarged–reduced snapshot matrix

$$(V^*)_1^{K-d+1} = [v_1^*, v_2^*, \dots, v_{K-d+1}^*]. \qquad (2.64)$$

Equation (2.63) can be written in matrix form as

$$(V^*)_2^{K-d+1} \simeq R^* (V^*)_1^{K-d}. \qquad (2.65)$$


The sizes of the enlarged–reduced snapshot matrix and the enlarged–reduced Koopman matrix appearing in this equation are  × K − d + 1 and d N  × d N,  dN

(2.66)

 is large. However, the dimenrespectively, which may be fairly large if d N sion reduction performed in the first step has decreased these sizes since, without the dimension reduction, they would have been dJ ×K and dJ ×dJ , respectively, which are much larger than their counterparts in (2.66), espe is usually much smaller than J . In fact, cially in extended systems, where N for extended systems in, e.g., three-dimensional fluid dynamics, J can be as large as J ∼ 107 . If, in addition, K ∼ 1000 and d ∼ 100, which is typical in quasi-periodic dynamics (see the applications along the book), the method would be absolutely impractical without the spatial dimension reduction performed in the first step. In any event, the enlarged–reduced Koopman matrix R ∗ will not be calculated from Eq. (2.65) because this equation will be further dimension-reduced in the next step. • Step 2: second dimension reduction. The enlarged–reduced snapshot matrix defined in (2.64) is dimension reduced upon application of truncated SVD (implemented in the MATLAB function ‘TruncSVD.m’ given in Annex 1.2 to Chapter 1), as (V ∗ )K−d+1 = U ∗1 S ∗ (U ∗2 )H , 1

(2.67)

 × N ∗ matrix U ∗ is such that where the d N 1 (U ∗1 )H U ∗1 = IN ∗ ×N ∗ ,

(2.68)

with IN ∗ ×N ∗ = the unit matrix of order N ∗ . The number of modes retained in this application of truncated SVD, , N∗ ≥ N

(2.69)

is selected in terms of the singular values of the enlarged–reduced snapshot matrix appearing in the left hand side of (2.67), σ1∗ , σ2∗ , . . . , σr∗∗ , using the counterpart of (2.54), namely N ∗ is the smallest index such that  (σN∗ ∗ +1 )2 + . . . + (σr∗∗ )2 ≤ εSVD . (2.70) (σ1∗ )2 + . . . + (σr∗∗ )2 Here, r ∗ is the (approximate) rank of the enlarged–reduced snapshot matrix appearing in the left hand side of (2.67), (V ∗ )K−d+1 , and for simplicity the 1 tunable threshold εSVD is taken to coincide with its counterpart in the first dimension reduction performed in step 1 above; see Eq. (2.54).


Equation (2.67) is rewritten as

    (V*)_1^{K−d+1} = U*_1 (Ṽ*)_1^{K−d+1},    (2.71)

where the reduced–enlarged–reduced snapshot matrix, (Ṽ*)_1^{K−d+1}, is defined as

    (Ṽ*)_1^{K−d+1} = S* (U*_2)^H.    (2.72)

The name 'reduced–enlarged–reduced snapshot matrix' comes from the fact that this matrix is obtained upon dimension reduction applied to the enlarged–reduced snapshot matrix (V*)_1^{K−d+1}. The columns of the reduced–enlarged–reduced snapshot matrix are called reduced–enlarged–reduced snapshots, and denoted as ṽ*_1, …, ṽ*_{K−d+1}. Namely, the reduced–enlarged–reduced snapshot matrix is written as

    (Ṽ*)_1^{K−d+1} = [ṽ*_1, ṽ*_2, …, ṽ*_{K−d+1}].    (2.73)

Note, invoking Eqs. (2.68), (2.71), and (2.72), that the enlarged–reduced snapshots and the reduced–enlarged–reduced snapshots are related among each other by

    v*_k = U*_1 ṽ*_k  and  ṽ*_k = (U*_1)^H v*_k.    (2.74)

Using these, we can pre-multiply Eq. (2.65) by (U*_1)^H, which gives

    (Ṽ*)_2^{K−d+1} ≃ R̃* (Ṽ*)_1^{K−d},    (2.75)

or, in terms of the reduced–enlarged–reduced snapshots,

    ṽ*_{k+1} ≃ R̃* ṽ*_k,    (2.76)

where the reduced–enlarged–reduced Koopman matrix R̃* is related to the enlarged–reduced Koopman matrix by

    R̃* = (U*_1)^H R* U*_1.    (2.77)

As in the DMD-1 method, this equation is written only for consistency, but it will not be used below.
• Step 3: computation of the DMD modes, growth rates, and frequencies. The reduced–enlarged–reduced Koopman matrix R̃* is computed using the pseudoinverse in Eq. (2.75), which is done by applying standard compact SVD (no truncation) to the N* × (K − d) matrix (Ṽ*)_1^{K−d}, which gives

    (Ṽ*)_1^{K−d} = U*_3 S*_1 (U*_4)^H,    (2.78)

Higher order dynamic mode decomposition Chapter | 2

45

where the N* × N* diagonal matrix S*_1 is nonsingular, and the N* × N* matrix U*_3 and the (K − d) × N* matrix U*_4 are such that

    (U*_3)^H U*_3 = U*_3 (U*_3)^H = I_{N*×N*}  and  (U*_4)^H U*_4 = I_{N*×N*},    (2.79)

with I_{N*×N*} denoting the unit matrix of order N*, as above. Now, substituting (2.78) into (2.75), post-multiplying the resulting equation by U*_4 (S*_1)^{−1} (U*_3)^H, and taking into account (2.79) yield the following expression for the reduced–enlarged–reduced Koopman matrix:

    R̃* ≃ (Ṽ*)_2^{K−d+1} U*_4 (S*_1)^{−1} (U*_3)^H.    (2.80)

Once the N* × N* reduced–enlarged–reduced Koopman matrix R̃* has been calculated, assuming that the eigenvalues of this matrix are different from each other (which, as above, is generic and consistent with our assumption that the growth rate/frequency pairs appearing in the expansion (2.1) are different from each other), we invoke Eq. (2.76), whose general solution is given by

    ṽ*_k ≃ Σ_{n=1}^{N*} a_n q̃*_n μ_n^{k−1},    (2.81)

for k = 1, …, K, where μ_1, …, μ_{N*} are the eigenvalues of the reduced–enlarged–reduced Koopman matrix R̃* and q̃*_1, …, q̃*_{N*} are the associated eigenvectors. This expansion can also be written as

    ṽ*_k ≃ Σ_{n=1}^{N*} a_n q̃*_n e^{(δ_n + i ω_n)(k−1)Δt},    (2.82)

where the growth rates and frequencies are related to the eigenvalues of R̃* by

    δ_n + i ω_n = (1/Δt) log μ_n.    (2.83)

Pre-multiplying Eq. (2.82) by U*_1 and invoking (2.74) lead to the following expansion for the enlarged–reduced snapshots:

    v*_k = Σ_{n=1}^{N*} a_n q*_n e^{(δ_n + i ω_n)(k−1)Δt},    (2.84)

where the enlarged modes q*_n are given in terms of their counterparts in Eq. (2.82) by

    q*_n = U*_1 q̃*_n.    (2.85)
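Steps 1–3 above can be condensed into a short Python/NumPy sketch (ours, not the MATLAB implementation of Annex 2.2; a crude relative threshold stands in for the truncation criteria (2.54) and (2.70)):

```python
import numpy as np

def dmd_d_eigs(Vred, d, dt):
    """Given the reduced snapshots Vred (Ntilde x K), the index d, and the
    time step dt, return the DMD-d eigenvalues mu_n, the growth rates and
    frequencies of Eq. (2.83), and the enlarged modes of Eq. (2.85)."""
    K = Vred.shape[1]
    # enlarged-reduced snapshot matrix, Eqs. (2.62) and (2.64)
    Vstar = np.vstack([Vred[:, k:K - d + 1 + k] for k in range(d)])
    # second dimension reduction, Eqs. (2.67)-(2.72)
    U1, s, W2h = np.linalg.svd(Vstar, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))       # crude stand-in for Eq. (2.70)
    U1, s, W2h = U1[:, :r], s[:r], W2h[:r, :]
    Vtil = s[:, None] * W2h
    # Koopman matrix via the pseudoinverse, Eqs. (2.78)-(2.80)
    Rtil = Vtil[:, 1:] @ np.linalg.pinv(Vtil[:, :-1])
    mu, Qtil = np.linalg.eig(Rtil)          # Eq. (2.81)
    dw = np.log(mu) / dt                    # Eq. (2.83)
    return mu, dw.real, dw.imag, U1 @ Qtil  # enlarged modes, Eq. (2.85)
```

For a clean periodic signal sampled on two spatial points, the sketch recovers the pair of frequencies ±ω with near-zero growth rates.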


Now, consistently with the definition (2.62), the first Ñ components of the enlarged–reduced snapshots appearing in (2.84), v*_k, are precisely the reduced snapshots considered in step 1, ṽ_k. Thus, denoting as q̄_n the conveniently rescaled vectors built by the first Ñ components of the enlarged modes q*_n appearing in the expansion (2.84), the counterpart of this expansion reads

    ṽ_k = Σ_{n=1}^{N*} a_n ũ_n e^{(δ_n + i ω_n)(k−1)Δt},    (2.86)

for k = 1, …, K, which gives the reduced modes and modes appearing in the expansions (2.6) and (2.27), with

    ũ_n = q̄_n / ‖q̄_n‖_2,  u_n = U ũ_n,    (2.87)

for n = 1, …, N*. Invoking Eqs. (2.86) and (2.87), we have all ingredients appearing in the right hand sides of the expansions (2.6) and (2.27), except for the mode amplitudes, which are computed in the next step.
Recall that the sizes of the reduced snapshots, ṽ_k, and the reduced modes, ũ_n, are both equal to Ñ, which means that the rank of the system of reduced modes is at most Ñ. On the other hand, the number of terms appearing in the reduced expansion (2.86) is generally larger, namely N* ≥ Ñ (see Eq. (2.69)). This is consistent with the fact that the DMD-d algorithm, with d > 1, is prepared to cope with expansions in which the spectral complexity is larger than the spatial complexity.
• Step 4: computation, truncation, and rescaling of the mode amplitudes. This step is similar to its counterpart in the DMD-1 algorithm, but it is developed here for the sake of clarity because there are differences in the size of some matrices. Invoking (2.83), the reduced expansion (2.86) can also be written as

    ṽ_k ≃ Σ_{n=1}^{N*} a_n ũ_n μ_n^{k−1}  for k = 1, …, K.    (2.88)

This overdetermined system of equations is solved for the mode amplitudes using the pseudoinverse. To this end, the system (2.88) is first written in matrix form as

    L a = b,    (2.89)

where the (ÑK) × N* coefficient matrix L, the unknown amplitudes vector a, and the forcing term b are defined as

    L = [Ũ; ŨM; …; ŨM^{K−1}],  a = [a_1, a_2, …, a_{N*}]^T,  b = [ṽ_1; ṽ_2; …; ṽ_K].    (2.90)

Here, the columns of the Ñ × N* matrix Ũ are the reduced modes calculated above and appearing in Eq. (2.88), namely Ũ = [ũ_1, …, ũ_{N*}], and the N* × N* diagonal matrix M is formed by the eigenvalues appearing in (2.83), namely μ_1, …, μ_{N*}. As in the DMD-1 method, step 3, Eq. (2.89) is solved via the pseudoinverse by applying standard compact SVD (no truncation) to the matrix L, as

    L = Ũ_1 S̃ (Ũ_2)^H,    (2.91)

with the (ÑK) × N* matrix Ũ_1 and the N* × N* matrix Ũ_2 such that

    (Ũ_1)^H Ũ_1 = (Ũ_2)^H Ũ_2 = Ũ_2 (Ũ_2)^H = I_{N*×N*},    (2.92)

where I_{N*×N*} denotes the unit matrix of order N*; the N* × N* matrix S̃ is diagonal and nonsingular. Substituting (2.91) into (2.89), pre-multiplying the resulting equation by Ũ_2 S̃^{−1} (Ũ_1)^H, and invoking (2.92) yield the amplitude vector as

    a = Ũ_2 S̃^{−1} (Ũ_1)^H b.    (2.93)

This completes a preliminary calculation of the amplitudes that, as in the DMD-1 algorithm, is usually the most computationally expensive step if K ≫ 1 (the usual case in extended systems). However, the dimension reduction performed in the first step highly decreases the computational cost of this computation when either J or K is much larger than Ñ.
After computing the amplitudes appearing in Eq. (2.86), or in Eq. (2.82), this latter equation is truncated by eliminating those terms such that

    |a_n| / max |a_n| < ε_DMD,    (2.94)

where, as in the DMD-1 algorithm, the threshold ε_DMD should be somewhat smaller than the intended relative RMS error of the reconstruction of the reduced snapshots, defined in (2.46). Note that this generally gives a number of retained modes, N, that is smaller than N*. Thus, Eq. (2.88) is rewritten as

    ṽ_k ≃ Σ_{n=1}^{N} a_n ũ_n μ_n^{k−1}  for k = 1, …, K,    (2.95)

or, invoking (2.83),

    ṽ_k ≃ Σ_{n=1}^{N} a_n ũ_n e^{(δ_n + i ω_n)(k−1)Δt}  for k = 1, …, K.    (2.96)


Now, this equation is pre-multiplied by U, which invoking (2.58) yields the following expansion for the original snapshots:

    v_k ≃ Σ_{n=1}^{N} a_n u_n e^{(δ_n + i ω_n)(k−1)Δt},    (2.97)

where the modes u_n are given by

    u_n = U ũ_n.    (2.98)
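The amplitude computation of Eqs. (2.89)–(2.94) can be sketched as follows; `np.linalg.lstsq` returns the same minimum-norm least-squares solution as the pseudoinverse formula (2.93) (function and variable names are ours):

```python
import numpy as np

def dmd_amplitudes(Vred, Utilde, mu, eps_dmd):
    """Solve the overdetermined system (2.89) for the amplitudes a_n and
    apply the truncation criterion (2.94)."""
    K = Vred.shape[1]
    # stack the blocks Utilde * M^{k-1} of Eq. (2.90)
    L = np.vstack([Utilde * mu[None, :]**k for k in range(K)])
    b = Vred.reshape(-1, order='F')          # stack the reduced snapshots
    a = np.linalg.lstsq(L, b, rcond=None)[0]
    keep = np.abs(a) / np.abs(a).max() >= eps_dmd   # Eq. (2.94)
    return a, keep
```

On clean synthetic data built from known amplitudes, the least-squares solve recovers them exactly and the truncation flags the small ones.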

Equation (2.97) is precisely the expansion (2.6), whose derivation was the goal of the method. Obviously, as in the DMD-1 method, step 3, the modes u_n computed in Eq. (2.98) do not generally exhibit unit RMS norm, and neither are the amplitudes a_n generally real and ≥ 0. However, these two conditions are fulfilled by redefining both the modes and the amplitudes as we did in the DMD-1 method, step 3, using Eq. (2.50), which is rewritten here for convenience, as

    u_n^new = √J a_n^old u_n^old / ‖a_n^old u_n^old‖_2,  a_n^new = ‖a_n^old u_n^old‖_2 / √J.    (2.99)

Note that this transformation leaves a_n u_n invariant (thus, it also leaves invariant the expansion (2.6)); also, the amplitudes are real and nonnegative and the RMS norm of the new modes is one, as required.
As in the DMD-1 method, the relative RMS errors of the reduced snapshots and the original snapshots are given by

    RRMSE ≡ sqrt( Σ_{k=1}^{K} ‖ṽ_k^approx. − ṽ_k‖_2² / Σ_{k=1}^{K} ‖ṽ_k‖_2² )    (2.100)

and

    RRMSE ≡ sqrt( Σ_{k=1}^{K} ‖v_k^approx. − v_k‖_2² / Σ_{k=1}^{K} ‖v_k‖_2² ),    (2.101)

respectively, where, as in the former section, the superscript 'approx.' denotes the reconstruction via the DMD-d method.
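The rescaling (2.99) amounts to two lines (a sketch with our own function name; the RMS norm over J components is the 2-norm divided by √J):

```python
import numpy as np

def rescale_modes(a, modes):
    """Eq. (2.99): make the amplitudes real and nonnegative and the modes
    of unit RMS norm, leaving the products a_n u_n invariant."""
    J = modes.shape[0]
    prod = modes * a[None, :]               # a_n^old u_n^old, column-wise
    norms = np.linalg.norm(prod, axis=0)    # ||a_n^old u_n^old||_2
    return norms / np.sqrt(J), np.sqrt(J) * prod / norms
```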

2.2.3 HODMD for spatially multidimensional data, involving more than one spatial variable
In some cases, the snapshot matrix considered above must be substituted by a snapshot tensor depending on three or four indices, labeling the time variable and two or three space variables. In other words, instead of Eq. (2.6), which can also be written as

    v_{jk} ≃ Σ_{n=1}^{N} a_n u_{jn} e^{(δ_n + i ω_n)(k−1)Δt}  for k = 1, …, K,    (2.102)

where v_{1k}, v_{2k}, …, v_{Jk} are the components of the vector v_k (of size J) and, similarly, u_{1n}, u_{2n}, …, u_{Jn} are the components of the vector u_n, we may consider expansions of the form, e.g.,

    v^l_{j1 j2 k} ≃ Σ_{n=1}^{N} a_n u^l_{j1 j2 n} e^{(δ_n + i ω_n)(k−1)Δt},    (2.103)

for l = 1, 2, j1 = 1, …, J1, j2 = 1, …, J2, and k = 1, …, K, or

    v^l_{j1 j2 j3 k} ≃ Σ_{n=1}^{N} a_n u^l_{j1 j2 j3 n} e^{(δ_n + i ω_n)(k−1)Δt},    (2.104)

for l = 1, 2, 3, j1 = 1, …, J1, j2 = 1, …, J2, j3 = 1, …, J3, and k = 1, …, K. These cases arise, for instance, when each of the snapshots (and modes) collects the two (for l = 1 and 2) or three (for l = 1, 2, and 3) components of the velocity vector of a fluid flow depending on two or three spatial coordinates, discretized in a structured mesh with grid points labeled by the indices (j1, j2) or (j1, j2, j3). Note that the snapshot matrix appearing in the left hand side of Eq. (2.102) is substituted in the left hand sides of Eqs. (2.103) and (2.104) by a snapshot tensor, of order three and four, respectively.
Obviously, a matrix or tensor can always be organized as a vector, namely the indices (j1, j2) or (j1, j2, j3) can be encompassed in a single index. Thus, the theory developed above also applies to these cases. However, maintaining the dependence of the snapshots v_k on more than one index may give better results. In this case, truncated SVD can be replaced by truncated HOSVD (implemented in the MATLAB environment in Annex 1.2 at the end of Chapter 1) in the dimension reduction step 1 of the HODMD method.
For the sake of clarity, we only consider the expansion (2.103), which is obtained in a way similar to what we did in Sect. 2.2.2, except that we now have the fourth order snapshot tensor appearing in the left hand side of Eq. (2.103), with components v^l_{j1 j2 k}, instead of a snapshot matrix. First, the snapshot tensor is dimension-reduced upon application of truncated HOSVD, which gives (cf. Eq. (1.28) in Chapter 1)

    v^l_{j1 j2 k} ≡ T_{j1 j2 l k} = Σ_{m1=1}^{M1} Σ_{m2=1}^{M2} Σ_{m3=1}^{M3} Σ_{m4=1}^{M4} S_{m1 m2 m3 m4} v^{x1}_{m1 j1} v^{x2}_{m2 j2} c_{m3 l} v^t_{m4 k},    (2.105)


in terms of the core tensor S and the reduced snapshots along the dimensions j1, j2, l, and k, denoted as v^{x1}, v^{x2}, c, and v^t, respectively. This is the counterpart of the standard truncated SVD expansion in Eq. (2.53). Truncation in the expansion (2.105) is performed according to Eq. (1.39) in Chapter 1, with the tunable parameter ε_HOSVD, which can be taken very small if the snapshot tensor is very clean, while not-so-small values of ε_HOSVD may help to filter noise. For convenience, the expansion (2.105) is rewritten as

    v^l_{j1 j2 k} = Σ_{m4=1}^{M4} S̃_{j1 j2 l m4} ṽ^t_{m4 k},    (2.106)

where the rescaled core tensor and the (rescaled) reduced snapshots are defined as

    S̃_{j1 j2 l m4} = (1/σ^4_{m4}) Σ_{m1=1}^{M1} Σ_{m2=1}^{M2} Σ_{m3=1}^{M3} S_{m1 m2 m3 m4} v^{x1}_{m1 j1} v^{x2}_{m2 j2} c_{m3 l}    (2.107)

and

    ṽ^t_{m4 k} = σ^4_{m4} v^t_{m4 k},    (2.108)

respectively. Here, the σ^4_{m4} are the HOSVD singular values along the fourth dimension of the tensor T appearing in the left hand side of Eq. (2.105). For convenience, we consider the reduced snapshots as the vectors of size M4

    ṽ^t_k = [ṽ^t_{1k}, ṽ^t_{2k}, …, ṽ^t_{M4 k}]^T.    (2.109)

Using these reduced snapshots, we define the enlarged–reduced snapshots as we did in Eq. (2.62), namely

    v*_k ≡ [ṽ^t_k; ṽ^t_{k+1}; …; ṽ^t_{k+d−2}; ṽ^t_{k+d−1}].    (2.110)

Proceeding with these enlarged–reduced snapshots as we did in Sect. 2.2.2, steps 1, 2, and 3, we obtain for the reduced snapshots the expansion (2.96), which


invoking (2.109) can also be written as

    ṽ^t_{m4 k} = Σ_{n=1}^{N} a_n ũ_{m4 n} e^{(δ_n + i ω_n)(k−1)Δt}.    (2.111)

Substituting this into (2.106), after some algebra, leads to

    v^l_{j1 j2 k} ≃ Σ_{n=1}^{N} a_n u^l_{j1 j2 n} e^{(δ_n + i ω_n)(k−1)Δt},    (2.112)

where

    u^l_{j1 j2 n} = Σ_{m4=1}^{M4} S̃_{j1 j2 l m4} ũ_{m4 n}.    (2.113)

In principle, Eq. (2.112) gives the expansion (2.103), whose derivation was the goal of this section. However, the amplitudes a_n appearing in (2.112) need not be real and positive, and neither need the modes u^l_{j1 j2 n} exhibit unit RMS norm. This difficulty is overcome as we did in Sect. 2.2.2 (see Eq. (2.99)). Namely, we redefine the amplitudes and modes as

    a_n^new = |a_n| sqrt( (1/(2 J1 J2)) Σ_{l=1}^{2} Σ_{j1=1}^{J1} Σ_{j2=1}^{J2} |u^l_{j1 j2 n}|² ),  u^{l,new}_{j1 j2 n} = a_n u^l_{j1 j2 n} / a_n^new.    (2.114)

Note that the new amplitudes are real and > 0 and the new modes exhibit unit RMS norm, as required. In addition, the products of the amplitudes times the modes are preserved, namely

    a_n^new u^{l,new}_{j1 j2 n} = a_n u^l_{j1 j2 n}.    (2.115)

This latter property implies that the expansion (2.112) is also preserved under the transformation. This method will be applied in Chapter 6, Sect. 6.4 to treat some experimental data resulting from a zero-net-mass-flux jet.
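The contraction (2.113), which recovers the spatial modes from the rescaled core tensor and the temporal DMD modes, is a one-liner with `numpy.einsum` (an illustrative sketch; the shapes and variable names are hypothetical):

```python
import numpy as np

# rescaled core tensor of shape (J1, J2, L, M4) and temporal DMD modes
# of shape (M4, N); Eq. (2.113) contracts the fourth (temporal) dimension
J1, J2, L, M4, N = 4, 3, 2, 5, 6
rng = np.random.default_rng(0)
S_resc = rng.standard_normal((J1, J2, L, M4))
u_temp = rng.standard_normal((M4, N))

modes = np.einsum('abcm,mn->abcn', S_resc, u_temp)  # u^l_{j1 j2 n}

# equivalent explicit sum over m4
modes_loop = sum(S_resc[..., m][..., None] * u_temp[m] for m in range(M4))
```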

2.2.4 Iterative HODMD
It must be noted that the dimension reduction and amplitude calculation (and truncation) steps, both in the standard DMD-d algorithm (developed in Sect. 2.2.2) and in its multidimensional extension (derived in Sect. 2.2.3), do not commute, which means that a new application of the method to the reconstructed snapshots gives newly reconstructed snapshots that do not necessarily coincide with the former reconstructed snapshots. The good news is that, for


noisy data, the new application of the HODMD method frequently leads to an additional elimination of noisy artifacts, on top of the natural noise filtering property of the primary application of HODMD to the original (noisy) data. This process can be iterated in such a way that more and more noisy artifacts are filtered along the iterations. The iterative process is terminated when the identified frequencies converge (to a given tolerance). The resulting method is called iterative HODMD. It has been used by us in [102] and will be applied in Chapter 6 to an experimental database.
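The iteration can be outlined as follows (a schematic sketch: `hodmd` stands for any routine that maps a snapshot matrix to the identified frequencies and the reconstructed snapshots; the stopping test is the frequency-convergence criterion just described):

```python
import numpy as np

def iterative_hodmd(V, hodmd, tol=1e-6, max_iter=20):
    """Re-apply HODMD to its own reconstruction until the identified
    frequencies converge to the tolerance tol."""
    omega_old = None
    for _ in range(max_iter):
        omega, V = hodmd(V)
        omega = np.sort(omega)
        if (omega_old is not None and len(omega) == len(omega_old)
                and np.allclose(omega, omega_old, atol=tol)):
            break
        omega_old = omega
    return omega, V
```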

2.2.5 Some key points to successfully use the DMD-d algorithm with d ≥ 1
Let us consider some extensions and remarks about the practical implementation of the DMD-d algorithm with d ≥ 1 described in the last two sections:
1. A general algorithm to compute DMD-d for d ≥ 1, implemented in MATLAB, is given in Annex 2.2 at the end of this chapter.
2. As it happens with standard DMD, even though the HODMD algorithm is obtained from the linear system (2.71), which can be seen as invariant under time translation (because the matrix R̃* is constant), this algorithm applies to general nonlinear systems that need not be invariant under time translations; an application to a nonlinear, periodically forced system will be considered in Chapter 6, Sect. 6.4. The only requirement is that the dynamics be described in the form (2.1), independently of where these dynamics come from.
3. A good means to identify the dominant modes, namely those modes exhibiting the largest amplitudes, is to plot the amplitudes vs. the associated frequencies, obtaining what will be called the a–ω diagram. This diagram will be systematically used along the book, in both toy models and more realistic applications. The shape of the a–ω diagram will also be useful to guess the nature of the underlying dynamics.
4. As anticipated, if the dynamics are real (and thus the snapshots are real too), the expansions (2.1) and (2.6) are invariant under the transformation

    u_n → ū_n (complex conjugation),  ω_n → −ω_n.    (2.116)

This implies, in particular, that the a–ω diagram is symmetric around ω = 0 in this case.
5. The HODMD method leads to the decomposition (2.1), which can be seen as a semi-analytical description of the underlying dynamics. This semi-analytical description can be used to:
• Construct a very fast, purely data-driven ROM for the online simulation of the dynamics underlying the given snapshots, which will be further discussed in Chapter 8. Moreover, in contrast to the data-driven ROMs considered in Chapter 1, these HODMD-based ROMs permit, not only


interpolating, but also extrapolating in time, which as anticipated may be very useful to calculate attractors from transients. Examples and applications will be given along the book.
• Uncover spatio-temporal details of the underlying dynamics, such as whether we have permanent, increasing, or decreasing dynamics, and the dominant damping rates and frequencies (namely, those associated with the dominant modes). In extended systems, plotting the dominant modes gives an idea of the spatial regions where these modes exhibit the largest activity.
6. As constructed, the method involves several tunable parameters, some of which have already been commented on. However, for the sake of clarity, these comments are summarized here:
• The sampled interval T appearing in Eq. (2.1) must be chosen somewhat larger than the largest period that is to be identified in the expansion (2.1). As will be seen along the book, taking the length of the timespan, say, 1.5 times larger than the largest period is frequently enough.
• The temporal distance between snapshots, Δt (which defines the sampling frequency), must be taken somewhat smaller (say, five times smaller) than the smallest period that is to be identified.
• The threshold ε_SVD used in the dimension-reduction steps; see Eqs. (2.20), (2.54), and (2.70). As anticipated, this threshold can be taken as small as ε_SVD = 10^−8 (or even smaller; see the toy models below) when the snapshots are very clean. However, for noisy snapshots, ε_SVD should be taken comparable to (or somewhat larger than) the noise level. If this is not known in advance, the singular value distribution may help to guess it, as illustrated in Fig. 1.5.
• The threshold ε_DMD used for amplitude truncation (see Eqs. (2.45) and (2.94)), which should be chosen somewhat smaller than the relative RMS error of the reduced snapshots reconstruction, defined for the DMD-1 and DMD-d with d ≥ 1 algorithms in Eqs. (2.46) and (2.100), respectively.
• The index d, which must be chosen to minimize the relative RMS error for the reduced snapshots, defined in Eqs. (2.46) and (2.100) for the DMD-d algorithm with d = 1 and d > 1, respectively. This selection is not critical. To begin with, we may take d = 1 and then increase d to minimize the relative RMS error for the reduced snapshots. Taking d ∼ K/10 is frequently a good choice to begin with. It must be noted that d cannot be larger than K and, in fact, should not be very close to K because, in the DMD-d algorithm, when defining the modified snapshots, the information in the last d − 1 snapshots is partially lost; see Eq. (2.64). Fortunately, the error is usually fairly flat near the optimal value of d. In fact, as will be checked along the book (in both toy models and other applications), significant variations of d lead to essentially the same results. In any event, it must


be noted that, for a given dynamics and given values of the remaining tunable parameters (including the sampled timespan T), d scales with the number of snapshots, K (namely, d should be doubled if K is doubled).
7. Consistency and robustness of the results are two very important properties, which help to identify the dynamically relevant modes that are present in the dynamics underlying the snapshots. In particular, it must be checked that the damping rates and frequencies are stable when the sampled time interval is enlarged or shifted and/or the number of snapshots is varied. This property will be tested along the book. Note, in particular, that enlarging and/or shifting the sampled time interval has to do with the ability of the method to extrapolate, which is commented on below. Also note that the robustness of the results commented on here helps to distinguish between the dynamically relevant and the spurious modes. The latter modes can be promoted by errors, either errors that are already present in the snapshots, or errors promoted by the HODMD method itself, due to (i) round-off errors, (ii) the various approximations involved in the applications of SVD, and (iii) an inappropriate selection of the sampled timespan or the sampling frequency.
8. If the expansion (2.1) has been accurately computed, which as anticipated is expected via the DMD-d algorithm (provided that the various parameters of the method have been appropriately tuned; see remark 7 above), this expansion can be used to analyze the structure of the attractor in permanent dynamics. Also, in transient dynamics converging to an attractor, this expansion can be used to extrapolate to t ≫ 1, obtaining an approximation of the final attractor. In this case, the extrapolation is performed by just ignoring in (2.1) those DMD modes with negative δ_n, which generally gives a smaller number of asymptotic modes, N∞.
It must be noted that, because of the above-mentioned errors, some relevant asymptotic modes may exhibit a small-but-nonzero |δ_n|, with the sign of δ_n uncertain. Thus, a threshold ε ≪ 1 is defined and the growth rate of those modes with |δ_n| < ε is set to zero before performing extrapolation. Extrapolation, from the timespan where the DMD-d method has been applied to much larger values of t, is also possible in permanent dynamics. This will be checked in several toy models later in this chapter.
9. When computing the DMD expansion (2.1) in two different timespans (see remark 6) or when calculating two different extrapolations (as considered in the last remark), some care must be taken with possible relative time-shifts between the reconstructions. The computation of those time-shifts is not difficult in periodic dynamics with a not very large period, where the time-shifts can be taken smaller than one period and can be computed via least-squares fitting. However, in quasi-periodic dynamics, the time-shift can be very large, which makes least-squares fitting problematic. In this case, a good strategy may be comparing the growth rates and frequencies for both reconstructions. Note that comparing modes has to be made with care because, due to the time-shifts, the modes may be affected by phase lags.
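The extrapolation strategy of remark 8 can be sketched as follows (our own names; `eps` plays the role of the small threshold ε on |δ_n|):

```python
import numpy as np

def extrapolate(t, a, modes, delta, omega, eps=1e-3):
    """Extrapolate the expansion (2.1) to large t: growth rates with
    |delta_n| < eps are set to zero, decaying modes are dropped, and only
    the asymptotic modes are summed."""
    delta = np.where(np.abs(delta) < eps, 0.0, delta)
    keep = delta >= 0.0
    lam = delta[keep] + 1j * omega[keep]
    return (modes[:, keep] * a[keep][None, :]) @ np.exp(lam * t)
```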

2.3 Periodic and quasi-periodic phenomena

Assuming that the damping rates δ_n are all zero, the permanent dynamics displayed in Eq. (2.1) are periodic if the frequencies are all commensurable, namely ω_i/ω_j = p_i/p_j, with p_i and p_j mutually prime integers (in this case, the fraction p_i/p_j is said to be irreducible), for all i and j. The dynamics are quasi-periodic otherwise, namely if at least two frequencies cannot be written in this way. This issue can be discerned if the frequencies are known exactly (as in some of the toy models considered below) but, to avoid being naive, it must be noted that this is a subtle matter if the frequencies are calculated via finite precision computations, where two frequencies are always commensurable. In this case, we shall guess that ω_1 > 0 and ω_2 > 0, with ω_1 < ω_2, are commensurable when ω_1/ω_2 can be approximated by a fraction as

    |ω_1/ω_2 − p/q| ≤ ε,    (2.117)

where p and q are mutually prime integers (namely, the fraction p/q is irreducible), q is not too large, and ε is very small. If, instead, condition (2.117) can only be fulfilled, with ε very small and the fraction p/q irreducible, for q quite large, then we guess that ω_1 and ω_2, with ω_1 < ω_2, are incommensurable. A good question here is how large q should be to guess that ω_1 and ω_2 are incommensurable; in the context of this book, we shall guess that ω_1 and ω_2 are incommensurable if q is larger than, say, 1000. As we shall see, the errors (either errors in the original snapshots or errors due to the method itself) play a role here. Approximations by irreducible fractions are considered in the next subsection.

2.3.1 Approximate commensurability
Without loss of generality, we assume that 0 < r = ω_1/ω_2 < 1; note that in the cases r = 0 and 1, ω_1 and ω_2 are obviously commensurable, and the case r > 1 is converted into the case r < 1 by interchanging ω_1 and ω_2.
Rational approximations of r by irreducible fractions can be obtained via elementary number theory, using an iterative algorithm that is based on Haros–Farey fractions [125] and proceeds as follows. Let p_0^low, q_0^low, p_0^up, and q_0^up be such that r_0^low = p_0^low/q_0^low and r_0^up = p_0^up/q_0^up are both irreducible fractions verifying r_0^low < r < r_0^up. Since 0 < r < 1, a possible selection is p_0^low = 0, q_0^low = 1, p_0^up = 1, and q_0^up = 1. In the subsequent iterations, for n ≥ 1, we consider the mediant of the irreducible fractions r_{n−1}^low = p_{n−1}^low/q_{n−1}^low and r_{n−1}^up = p_{n−1}^up/q_{n−1}^up, which is defined as

    r̂_n = p̂_n / q̂_n,    (2.118)

where

    p̂_n = p_{n−1}^low + p_{n−1}^up,  q̂_n = q_{n−1}^low + q_{n−1}^up.    (2.119)

It turns out that the mediant is also irreducible and such that

    r_{n−1}^low < r̂_n < r_{n−1}^up.    (2.120)

If r̂_n < r, then we set

    p_n^low = p̂_n,  q_n^low = q̂_n,  p_n^up = p_{n−1}^up,  and  q_n^up = q_{n−1}^up.    (2.121)

If, instead, r̂_n > r, then we set

    p_n^low = p_{n−1}^low,  q_n^low = q_{n−1}^low,  p_n^up = p̂_n,  and  q_n^up = q̂_n.    (2.122)

In this way, the interval between the irreducible fractions r_n^low and r_n^up is strictly narrowed as n increases and, moreover, contains r in its interior. Thus, the end-points of this interval, namely

    r_n^low = p_n^low / q_n^low  and  r_n^up = p_n^up / q_n^up,    (2.123)

both converge to r. At the nth iteration, we select as approximation of r that end-point of the interval between r_n^low and r_n^up that is closest to r. This end-point will be denoted as r_n^* and the iteration process will be terminated when

    |r_n^* − r| < ε,    (2.124)

for some threshold ε, which is usually taken as very small. Note that it may happen that one of the end-points of the interval, namely either r_n^low or r_n^up, remains constant along various iterations and, moreover, this end-point may be the one giving the best approximation of r, namely r_n^*. The method is implemented in the MATLAB function 'ApproxCommensurabilty.m', given in Annex 2.2, at the end of this chapter.
Although, in principle, the method is designed to obtain approximations of irrational numbers by irreducible fractions, we illustrate it now considering several values of r = ω_1/ω_2 in cases in which ω_1 and ω_2 are both commensurable and incommensurable. In all cases, the threshold ε in (2.124) will be set as ε = 10^−15.
The first case to be considered is that in which the frequencies ω_1 = 0.3 and ω_2 = 0.5 are exactly commensurable, since ω_1/ω_2 = 3/5. In this simple case, the method converges (with zero-machine error) in just three iterations (see Table 2.1). Similarly, for the commensurable frequencies ω_1 = 2.3 and ω_2 = 3.7 (with ω_1/ω_2 = 23/37), the method converges (with near-zero-machine error) in eight


TABLE 2.1 Irreducible fractions p/q approximating the ratio ω_1/ω_2, and the associated errors, for the commensurable frequencies ω_1 = 0.3 and ω_2 = 0.5.

    Iterations    p    q    |ω_1/ω_2 − p/q|
    1             1    2    0.1
    2             2    3    0.067
    3             3    5    0

TABLE 2.2 Counterpart of Table 2.1 for the commensurable frequencies ω_1 = 2.3 and ω_2 = 3.7.

    Iterations    p     q     |ω_1/ω_2 − p/q|
    1             1     2     0.12
    2             2     3     0.045
    3             3     5     0.022
    4–5           5     8     3.4 · 10^−3
    6             13    21    2.6 · 10^−3
    7             18    29    9.3 · 10^−4
    8             23    37    1.1 · 10^−16

iterations, shown in Table 2.2. Note that p and q remain constant in the fourth and fifth iterations, which was a possibility anticipated above.
For the incommensurable frequencies ω_1 = √2 and ω_2 = 2 (which are such that ω_1 < ω_2, as required), the method needs 38 iterations (shown in Table 2.3) to converge with near-zero-machine precision, and approximates ω_1/ω_2 in irreducible form as ω_1/ω_2 ≃ p/q, with p = 15994428 and q = 22619537. Note that q is quite large, as expected. It is interesting that, decreasing ε to 10^−16, the method converges in 43 iterations with zero-machine precision. However, the method does not really converge in these 43 iterations to the exact value of ω_1/ω_2 (which is not possible), but to its double precision approximation, namely 0.7071067811865476, which is the approximation of ω_1/ω_2 considered by MATLAB.
Let us now illustrate how errors play an important role in this computation. For instance, the 17th iteration in Table 2.3 shows that the commensurable frequencies

    ω_1 = 1.393  and  ω_2 = 1.97,    (2.125)


TABLE 2.3 Counterpart of Table 2.1 for the incommensurable frequencies ω_1 = √2 and ω_2 = 2.

    Iterations    p           q           |ω_1/ω_2 − p/q|
    1             1           2           0.207
    2–3           2           3           0.0404
    4             5           7           7.18 · 10^−3
    5             7           10          7.11 · 10^−3
    6–7           12          17          1.22 · 10^−3
    8             29          41          2.1 · 10^−4
    9             41          58          2.1 · 10^−4
    10–11         70          99          3.6 · 10^−5
    12            169         239         6.19 · 10^−6
    13            239         338         6.19 · 10^−6
    14–15         408         577         1.1 · 10^−6
    16            985         1393        1.82 · 10^−6
    17            1393        1970        1.82 · 10^−7
    18–19         2378        3363        3.13 · 10^−8
    20–21         5741        8119        5.36 · 10^−9
    22            13860       19601       9.2 · 10^−10
    23            19601       27720       9.2 · 10^−10
    24–25         33461       47321       1.6 · 10^−10
    26            80782       114243      2.71 · 10^−11
    27            114243      161564      2.71 · 10^−11
    28–29         195025      275807      4.65 · 10^−12
    30            470832      665857      7.97 · 10^−13
    31            665857      941664      7.97 · 10^−13
    32–33         1136689     1607521     1.37 · 10^−13
    34            2744210     3880899     2.35 · 10^−14
    35            3880899     5488420     2.34 · 10^−14
    36–37         6625109     9369319     4 · 10^−15
    38            15994428    22619537    7.8 · 10^−16

are near-incommensurable, since they differ from √2/2 by a very small error; namely,

    |ω_1/ω_2 − √2/2| = |1.393/1.97 − √2/2| = |1393/1970 − √2/2| ∼ 1.82 · 10^−7.    (2.126)


As expected, when the method is applied to the commensurable frequencies defined in (2.125), it converges in 17 iterations, as shown in Table 2.4. As can be seen, these iterations are very close to the first 17 iterations appearing in Table 2.3.

TABLE 2.4 Counterpart of Table 2.1 for the commensurable frequencies defined in Eq. (2.125).

    Iterations    p       q       |ω_1/ω_2 − p/q|
    1             1       2       0.207
    2–3           2       3       0.0404
    4             5       7       7.2 · 10^−3
    5             7       10      7.1 · 10^−3
    6–7           12      17      1.22 · 10^−3
    8             29      41      2.1 · 10^−4
    9             41      58      2.1 · 10^−4
    10–11         70      99      3.6 · 10^−5
    12            169     239     6.4 · 10^−6
    13            239     338     6 · 10^−6
    14–15         408     577     8.8 · 10^−8
    16            985     1393    3.6 · 10^−7
    17            1393    1970    0

For other commensurable or incommensurable frequencies, the performance of the method is similar. For instance, if ω1 = √3 and ω2 = π, the method converges with near-zero-machine precision in 63 iterations (omitted for the sake of brevity) to the irreducible fraction p/q, with p = 6418716 and q = 11642263, which once more are very large. Let us remark that, in the computations performed above, the method is very fast on a standard PC.
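The p/q pairs reached at the iterations of Table 2.3 where the error drops can be checked against the classical continued-fraction convergents of ω1/ω2. The following Python sketch is an illustration only: the helper `cf_convergents` is a hypothetical name, and the book's own 'ApproxCommensurability.m' (Annex 2.2) uses its own iteration, which also produces intermediate fractions such as 7/10 and 41/58 that are not convergents.

```python
from math import sqrt, floor

def cf_convergents(x, n):
    """First n continued-fraction convergents (p, q) of x > 0."""
    p0, q0 = 1, 0                # conventions p_{-1} = 1, q_{-1} = 0
    p1, q1 = floor(x), 1
    convs = [(p1, q1)]
    frac = x - floor(x)
    for _ in range(n - 1):
        if frac == 0:
            break
        x = 1.0 / frac
        a = floor(x)
        frac = x - a
        # Standard recurrence: p_k = a_k p_{k-1} + p_{k-2}, same for q
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        convs.append((p1, q1))
    return convs

convs = cf_convergents(sqrt(2) / 2, 12)
# (985, 1393) appears among the convergents, matching iteration 16 in Table 2.3
```

The successive convergents 2/3, 5/7, 12/17, 29/41, ..., 985/1393, 2378/3363 coincide with the fractions of Table 2.3 at which |ω1/ω2 − p/q| decreases.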

2.3.2 Semi-analytic representation of periodic dynamics and invariant periodic orbits in phase space

For simplicity, we consider here real data; complex data could be treated similarly. For periodic dynamics, assuming that all growth rates are zero (or very small), the expansion (2.1) can be written as

v(t) ≃ Σ_{n1 = −N1}^{N1} a_{n1} u_{n1} e^{i n1 ω (t − t1)}   for t1 ≤ t ≤ t1 + T,   (2.127)


where ω is the fundamental frequency and the remaining terms are the (positive or negative) harmonics. Since the data are real, the various terms in the expansion (2.1) appear in complex conjugate pairs (namely, the retained frequencies appear in pairs with the same modulus and opposite signs), which permits rewriting (2.1) in the form (2.127), where the nonzero frequencies are organized as positive and negative harmonics. Note that neither the fundamental frequency nor all harmonics of this frequency need to be present in the expansion (2.127), which is taken into account by setting to zero some of the amplitudes a_{n1} appearing in (2.127). Equation (2.127) is a very useful semi-analytic description of the periodic dynamics. Also, a semi-analytic description of the associated orbit in phase space is given by

v(θ) ≃ Σ_{n1 = −N1}^{N1} a_{n1} u_{n1} e^{i n1 θ}   for 0 ≤ θ < 2π,   (2.128)

which is obtained by setting ω(t − t1) = θ in Eq. (2.127). Note that this representation of the orbit is periodic in θ, of period 2π.
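The expansion (2.127) is straightforward to evaluate numerically once the modes are known. The following Python sketch uses synthetic (randomly generated, purely illustrative) amplitudes and modes with the conjugate-pair symmetry described above, and checks that the reconstructed signal is real and T-periodic:

```python
import numpy as np

# Hypothetical mode data: N1 = 3 harmonics of a fundamental frequency omega,
# with a_{-n1} u_{-n1} = conj(a_{n1} u_{n1}) so that v(t) is real, as in (2.127).
rng = np.random.default_rng(0)
N1, omega, t1 = 3, 2.0, 0.0
J = 5                                        # spatial dimension of each mode
modes = {n: rng.standard_normal(J) + 1j * rng.standard_normal(J)
         for n in range(1, N1 + 1)}
modes[0] = rng.standard_normal(J) + 0j       # the n1 = 0 (mean) mode is real
for n in range(1, N1 + 1):
    modes[-n] = np.conj(modes[n])            # conjugate pairing

def v(t):
    """Evaluate the semi-analytic expansion (2.127)."""
    return np.real(sum(modes[n] * np.exp(1j * n * omega * (t - t1))
                       for n in range(-N1, N1 + 1)))

T = 2 * np.pi / omega                        # period of the reconstructed dynamics
print(np.allclose(v(0.3), v(0.3 + T)))       # True: v is T-periodic
```

Replacing ω(t − t1) by θ gives the orbit representation (2.128), periodic in θ with period 2π.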

2.3.3 Semi-analytic representation of quasi-periodic dynamics and the associated invariant tori in phase space

For the simplest quasi-periodic solution, defining a 2-torus, all growth rates are zero (or very small) in the expansion (2.1), and this expansion is written as

v(t) ≃ Σ_{n1 = −N1}^{N1} Σ_{n2 = −N2}^{N2} a_{n1 n2} u_{n1 n2} e^{i (n1 ω1 + n2 ω2) (t − t1)}   for t1 ≤ t ≤ t1 + T,   (2.129)

where ω1 and ω2 are the fundamental frequencies, assumed to be incommensurable. As in the last section, since the data are real, the various terms in the expansion (2.1) appear in complex conjugate pairs, which means that the nonzero frequencies can be organized in pairs with equal modulus and opposite signs; this permits writing (2.1) in the form displayed in (2.129). As in the periodic dynamics considered in Eq. (2.127), some of the harmonics appearing in (2.129) (for some values of n1 and n2) may be absent, which is taken into account by setting to zero the associated mode amplitudes a_{n1 n2}. This quasi-periodic solution ‘fills’ an invariant 2-torus, whose representation in phase space can be written as

v(θ1, θ2) ≃ Σ_{n1 = −N1}^{N1} Σ_{n2 = −N2}^{N2} a_{n1 n2} u_{n1 n2} e^{i (n1 θ1 + n2 θ2)}   for 0 ≤ θ1, θ2 < 2π.   (2.130)


By ‘filling the torus’ we mean here that, for 0 ≤ t < ∞, the orbit approaches any point of the torus as closely as required. Equations (2.129) and (2.130) give very useful semi-analytic representations of the quasi-periodic orbit and the associated 2-torus filled by the orbit, respectively. Note that if, instead, the fundamental frequencies ω1 and ω2 are commensurable, with ratio written as an irreducible fraction ω1/ω2 = p/q, then the orbit is periodic, with a fundamental frequency ω = ω1/p = ω2/q, which is very small if p and q are large, namely if the frequencies ω1 and ω2 are near-incommensurable. In other words, in the latter case, the period of the periodic orbit, 2π/ω, is very large and the dynamics may be hardly recognized as periodic in time intervals of limited length. Moreover, the orbit behaves as nearly-quasi-periodic, and the torus (2.130) (which is just a torus containing the orbit) is almost filled by the orbit.

Quasi-periodic orbits depending on more than two incommensurable fundamental frequencies and the associated invariant tori are obtained similarly. For instance, with three fundamental frequencies, ω1, ω2, and ω3, Eq. (2.129) must be substituted by

v(t) ≃ Σ_{n1 = −N1}^{N1} Σ_{n2 = −N2}^{N2} Σ_{n3 = −N3}^{N3} a_{n1 n2 n3} u_{n1 n2 n3} e^{i (n1 ω1 + n2 ω2 + n3 ω3) (t − t1)},   (2.131)

while the semi-analytic expression above for the invariant 3-torus ‘filled’ by the orbit is

v(θ1, θ2, θ3) ≃ Σ_{n1 = −N1}^{N1} Σ_{n2 = −N2}^{N2} Σ_{n3 = −N3}^{N3} a_{n1 n2 n3} u_{n1 n2 n3} e^{i (n1 θ1 + n2 θ2 + n3 θ3)},   (2.132)

for θ1, θ2, and θ3 such that 0 ≤ θ1, θ2, θ3 < 2π. Higher-dimensional tori are obtained similarly.
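The relation between the orbit (2.129) and the torus parameterization (2.130) — namely that v(t) is recovered from v(θ1, θ2) by setting θi = ωi(t − t1) — can be checked numerically. The Python sketch below uses synthetic, purely illustrative scalar mode amplitudes with conjugate-pair symmetry (the helper names are hypothetical):

```python
import numpy as np

# Hypothetical 2-torus data: scalar mode amplitudes c_{n1 n2} with
# c_{-n1,-n2} = conj(c_{n1,n2}), so the reconstructed signal is real.
rng = np.random.default_rng(1)
w1, w2, t1 = np.sqrt(2.0), 2.0, 0.0      # incommensurable fundamental frequencies
N1 = N2 = 2
modes = {(0, 0): rng.standard_normal() + 0j}
for n1 in range(-N1, N1 + 1):
    for n2 in range(-N2, N2 + 1):
        if (n1, n2) > (0, 0):            # fill one half; the other half by conjugation
            c = rng.standard_normal() + 1j * rng.standard_normal()
            modes[(n1, n2)] = c
            modes[(-n1, -n2)] = np.conj(c)

def v_torus(th1, th2):
    """Semi-analytic torus representation, Eq. (2.130) (scalar modes here)."""
    return np.real(sum(c * np.exp(1j * (n1 * th1 + n2 * th2))
                       for (n1, n2), c in modes.items()))

def v_orbit(t):
    """Quasi-periodic orbit, Eq. (2.129)."""
    return np.real(sum(c * np.exp(1j * (n1 * w1 + n2 * w2) * (t - t1))
                       for (n1, n2), c in modes.items()))

# The orbit lies on the torus: v(t) = v_torus(w1 (t - t1), w2 (t - t1))
t = 3.7
print(np.isclose(v_orbit(t), v_torus(w1 * (t - t1), w2 * (t - t1))))  # True
```

The torus representation is 2π-periodic in each angle, while the orbit itself never closes when ω1/ω2 is irrational.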

2.4 Some toy models

As a first toy model, we consider the scalar function defined as

v(t) = 2 / (2 − cos ω1 t cos ω2 t),   (2.133)

for several values of the frequencies ω1 and ω2; see below. Note that, using a Taylor expansion in the right hand side of (2.133), v(t) can also be written as

v(t) = 2 / (2 − w(t)) = 1 + w(t)/2 + (w(t)/2)² + (w(t)/2)³ + · · · ,   (2.134)


where

w(t) = cos ω1 t cos ω2 t = (e^{i ω1 t} + e^{−i ω1 t})(e^{i ω2 t} + e^{−i ω2 t}) / 4.   (2.135)

This means that the scalar function (2.133) can be cast into the form (2.1), with the relevant frequencies

±n1 ω1 ± n2 ω2,   for n1, n2 = 0, 1, 2, 3, . . . .   (2.136)

Thus, the function is periodic if ω1 and ω2 are commensurable and quasi-periodic otherwise. On the other hand, since the function v(t) is scalar, the spatial complexity is one, while the spectral complexity is larger than one. In fact, the spectral complexity can be very large, depending on the number of terms that are retained in the expansion (2.134). According to our discussion on spatial and spectral complexities above, standard DMD cannot cope with this toy model, which is thus appropriate to illustrate the advantages of HODMD. For the periodic case, with ω1 = 3 and ω2 = 5, the fundamental frequency is ω = ω2 − ω1 = 2, which means that the period is 2π/ω = π. A plot of the function v in the timespan 0 ≤ t ≤ 6 is given in Fig. 2.1. Let us apply the HODMD

FIGURE 2.1 The function v(t) defined in Eq. (2.133), with ω1 = 3 and ω2 = 5, in the timespan 0 ≤ t ≤ 6. The exact function (thin solid line), the reconstruction using DMD-500 (thick dashed line), and the reconstruction using DMD-1 (thin dashed line) are compared.

algorithms DMD-1 and DMD-d, with appropriate d > 1, in the timespan considered in Fig. 2.1, 0 ≤ t ≤ 6, whose length is slightly smaller than two periods. After some (slight) calibration, the tunable parameters of the method are chosen to be

K = 1000,   εSVD = 10⁻¹²,   εDMD = 10⁻⁵.   (2.137)
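Before examining the results, the snapshot set itself is easy to generate, and the claimed period π can be verified directly; a minimal Python sketch, using the sampling parameters above:

```python
import numpy as np

# Toy model (2.133) with omega1 = 3, omega2 = 5, sampled as in (2.137):
# K = 1000 snapshots in the timespan 0 <= t <= 6.
w1, w2, K = 3.0, 5.0, 1000
t = np.linspace(0.0, 6.0, K)

def v(tt):
    return 2.0 / (2.0 - np.cos(w1 * tt) * np.cos(w2 * tt))

snapshots = v(t)                         # the scalar data fed to HODMD

# Fundamental frequency w2 - w1 = 2, hence period pi:
# cos(3(t+pi)) cos(5(t+pi)) = (-cos 3t)(-cos 5t) = cos 3t cos 5t
print(np.allclose(v(t), v(t + np.pi)))   # True
```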

Note that εSVD is very small, which is due to the fact that the data are very clean. Using these tunable parameters, DMD-1 only recognizes one spurious


mode (with negative damping rate and frequency equal to zero) and gives a very poor reconstruction (see Fig. 2.1), as expected. DMD-500, instead, with the parameter d = 500 chosen after some slight calibration, identifies 49 modes (with very small damping rates and frequencies extremely close to the correct ones, which are multiples of 2), and reconstructs the function v(t) with a very small relative RMS error, as defined in Eq. (2.51), namely RRMSE ∼ 1.22 · 10⁻⁵. Because of this very small error, the function v(t) and its DMD-500 reconstruction are plot-indistinguishable, as seen in Fig. 2.1. The DMD-500 algorithm is very robust in connection with the tunable parameters, since decreasing εDMD increases the number of identified frequencies and decreases the relative RMS error, as expected. Also, the results are essentially the same if (i) d is varied in the range 475 ≤ d ≤ 525 and K in the range 850 ≤ K ≤ 1050 (recall that K and d scale with each other, and thus it is convenient to maintain the ratio K/d), or (ii) the timespan is enlarged or shifted. The a–ω diagram obtained by DMD-500, with the tunable parameters (2.137), is given in Fig. 2.2. As can be

FIGURE 2.2 The a–ω diagram obtained by DMD-500 with the tunable parameters (2.137) for the periodic function v(t) defined in (2.133), with ω1 = 3 and ω2 = 5.

seen in this semi-logarithmic plot, the amplitudes approach two straight lines as |ω| increases, which implies spectral decay of the amplitudes. This is a typical property of smooth periodic dynamics, which will be checked throughout the book. The ability of the HODMD method to extrapolate is shown in Fig. 2.3, where the exact and extrapolated (from the HODMD reconstruction) functions v(t) are seen to be plot-indistinguishable, in spite of the fairly large distance between the original and extrapolated timespans. This is because the relative RMS difference between the two, as defined in Eq. (2.51), is very small, namely RRMSE ∼ 5.4 · 10⁻⁵. The results above show that HODMD gives very good results in this case, in which standard DMD completely fails. However, since the function v(t) is scalar and periodic, standard FFT or its improvement, PSD, could also be applied. A comparison between the three methods, in the same timespan used


FIGURE 2.3 Extrapolation to the interval 200 ≤ t ≤ 206 of the DMD-500 reconstruction (thick dashed line) with the tunable parameters (2.137) of the periodic function v(t) (thin solid line) defined in Eq. (2.133), with ω1 = 3 and ω2 = 5.

FIGURE 2.4 The a–ω diagrams, as obtained in the interval 0 ≤ t ≤ 6, using FFT (red +), PSD (blue ×), and the DMD-500 algorithm (black o) with the tunable parameters defined in Eq. (2.137).

above for the HODMD method, namely 0 ≤ t ≤ 6, is given in Fig. 2.4, where it can be seen that HODMD gives much better (in particular, much cleaner) results than both FFT and PSD. Of course, FFT and especially PSD improve as the timespan is increased, but the timespan must be increased a lot to eliminate sideband artifacts. Note that, in the end, FFT converts the given function into a new periodic function (defined on the whole line −∞ < t < ∞), whose period is the given timespan, which is precisely what introduces the sideband artifacts. The HODMD method, instead, does not convert the given data into spurious periodic data, and thus it does not introduce sideband artifacts. For the quasi-periodic case, with ω1 = √2 and ω2 = 2, the function v(t) is more complex, as seen in Fig. 2.5. Now, the HODMD method is applied in the


FIGURE 2.5 Counterpart, in the time interval 0 ≤ t ≤ 40, of Fig. 2.1 for the quasi-periodic function v(t) defined in (2.133), with ω1 = √2 and ω2 = 2. Again, the function v(t) and its DMD-2500 reconstruction are indistinguishable.

timespan 0 ≤ t ≤ 40, with tunable parameters

K = 5000,   εSVD = 10⁻¹²,   εDMD = 10⁻⁵.   (2.138)

Note that the timespan and the number of snapshots are both much larger than in the periodic case, as expected, but the thresholds εSVD and εDMD are maintained. With these tunable parameters, the DMD-1 algorithm recognizes just one mode, and gives a very poor reconstruction, shown in Fig. 2.5. The DMD-2500 algorithm (with the index d = 2500 chosen after some slight calibration), instead, recognizes 99 modes and reconstructs the function v(t) with a relative RMS error, as defined in Eq. (2.51), RRMSE ∼ 2.7 · 10⁻⁵, which is really small. This explains why the original function is plot-indistinguishable from its DMD-2500 reconstruction in Fig. 2.5. The a–ω diagram for the present quasi-periodic case is plotted in Fig. 2.6, where it can be seen that now the a vs. ω plot tends to fill a triangle. This is typical of quasi-periodic dynamics, as will be seen throughout the book. Note the rectilinear oblique boundaries of the triangle in this semi-logarithmic plot, which indicate spectral decay of the amplitudes. However, not all quasi-periodic dynamics give triangular regions in the a vs. ω diagram. For instance, if the involved frequencies were

ω0 + mω1,   (2.139)

with m an integer and ω0 incommensurable with ω1, then the dynamics is quasi-periodic, but the relevant amplitudes in the a vs. ω diagram are located along two lines, one for m positive and another for m negative. In fact, ω0 just produces a shift in the diagram for the periodic dynamics that would be obtained if ω0 were absent; see Fig. 2.10, which corresponds to the toy model defined below in Eq. (2.146).


FIGURE 2.6 The a–ω diagram for the quasi-periodic function v(t) defined in (2.133), with ω1 = √2 and ω2 = 2, applying the DMD-2500 algorithm with the tunable parameters defined in Eq. (2.138).

The method is also able to extrapolate in this quasi-periodic case, as seen in Fig. 2.7. This figure shows that, as in the periodic case, the extrapolation is very good in spite of the fairly large distance between the original and extrapolated timespans. In fact, the relative RMS error (as defined in Eq. (2.51)) of the extrapolated solution is small, namely RRMSE∼ 2.4 · 10−3 , which implies that the exact and extrapolated solutions are plot-indistinguishable, as seen in Fig. 2.7.

FIGURE 2.7 Counterpart of Fig. 2.3 for the quasi-periodic function v defined in Eq. (2.133), with ω1 = √2 and ω2 = 2, applying the DMD-2500 algorithm with the tunable parameters defined in Eq. (2.138).

As an important final remark in connection with this toy model, we note that comparison of Figs. 2.2 and 2.6 could lead us to the conclusion that quasi-periodic dynamics can be recognized by the property that its a–ω diagram ‘fills’ a triangular region. However, this conclusion must be taken with care since, e.g., the frequencies

ω1 = 1.393 and ω2 = 1.97   (2.140)


are obviously commensurable but, as seen in Eqs. (2.125) and (2.126), they are near-incommensurable, and the a–ω diagram might fill a triangular region. This is seen by taking these frequencies in the definition (2.133). Using the same tunable parameters as in the previous case, the DMD-1500 algorithm (with the value d = 1500 chosen after a slight calibration, as in the previous cases) gives the a–ω diagram plotted in Fig. 2.8.

FIGURE 2.8 The a–ω diagram for the quasi-periodic function v(t) defined in (2.133), with ω1 and ω2 as defined in Eq. (2.140), applying the DMD-1500 algorithm with the tunable parameters defined in Eq. (2.138).

Note, invoking Table 2.4 (17th iteration), that the frequencies defined in (2.140) are such that their ratio is written in irreducible form as ω1/ω2 = p/q, with p = 10³ ω1 = 1393 and q = 10³ ω2 = 1970. This means that the fundamental frequency of the periodic function is very small, namely ω = ω1/p = ω2/q = 10⁻³, which in turn means that the period of the periodic solution is very large, namely equal to 2π/ω = 2π · 10³. This explains why its a–ω diagram could be thought to be associated with a quasi-periodic function. In fact, as anticipated in Sect. 2.3.3, the function is nearly-quasi-periodic in the present case.

In the toy model considered above, defined in Eq. (2.133), the spatial complexity was one and the necessity of using the DMD-d algorithm with d > 1 was clear. Let us now consider a toy model with fixed spatial complexity and varying spectral complexity. To this end, we consider the 10 vectors, of size 10, u1, u2, . . . , u10, defined as the first, second, . . . , tenth columns of the identity matrix of order 10. Note that these vectors are linearly independent (in fact, they are orthogonal). In addition, we consider the amplitudes and frequencies defined as

an = 3⁻ⁿ and ωn = 10 n,   for n = 1, . . . , 10.   (2.141)

Using these, we define the vector function

v(t) = Σ_{n=1}^{10} an un cos ωn t.   (2.142)


Note that the spatial complexity is 10, but the spectral complexity is larger, namely 20, because this expansion involves the frequencies

±ω1, ±ω2, . . . , ±ω10.   (2.143)

Thus, we do not expect standard DMD to give good results. Let us consider the sampled timespan 0 ≤ t ≤ 1, whose length is larger than the largest involved period, which is 2π/ω1 = 2π/10, and take K = 1000 equispaced snapshots, which gives a sampling frequency that is ten times larger than the largest involved frequency, which is 100. In addition, the thresholds for the dimension-reduction and amplitude-truncation steps are set as

εSVD = εDMD = 10⁻⁶.   (2.144)
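The mismatch between the two complexities can be checked directly on the data: the snapshot matrix of (2.142) has rank 10 (the spatial complexity), while stacking one time-lagged copy of the snapshots, in the spirit of the modified snapshot matrix of the DMD-d algorithm, raises the rank to 20 (the spectral complexity). A simplified Python check (illustrative only, not the full algorithm):

```python
import numpy as np

# Toy model (2.141)-(2.142): 10 orthogonal spatial modes (u_n = e_n),
# 20 frequencies (+/- omega_n), sampled with K = 1000 snapshots in 0 <= t <= 1.
n = np.arange(1, 11)
a, omega = 3.0 ** (-n), 10.0 * n
K = 1000
t = np.linspace(0.0, 1.0, K)
V = a[:, None] * np.cos(omega[:, None] * t[None, :])   # 10 x K snapshot matrix

print(np.linalg.matrix_rank(V))          # 10: the spatial complexity

# Stacking d = 2 time-lagged copies, as in the DMD-d construction:
V2 = np.vstack([V[:, :-1], V[:, 1:]])    # 20 x (K-1)
print(np.linalg.matrix_rank(V2))         # 20: now matches the spectral complexity
```

This is precisely why DMD-1 (which works with V alone) cannot represent the 20 exponentials, while DMD-d with d > 1 can.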

Using these, we first apply the algorithm DMD-1, which is only able to compute five pairs of modes with nonzero damping rates. Moreover, these frequencies,

±5.96,   ±19.65,   ±44.52,   ±12.54,   ±26.97,   (2.145)

are spurious, as comparison with (2.141) shows. Applying the DMD-50 algorithm (this value of d, chosen after a slight calibration, is somewhat optimal for this case), instead, gives extremely small damping rates (∼ 10⁻¹²), recovers the 10 frequencies defined in Eq. (2.141) with a relative error ∼ 10⁻¹¹, and reconstructs the snapshots with a relative RMS error, as defined in Eq. (2.51), RRMSE ∼ 5.5 · 10⁻¹³. Increasing or decreasing d not too much (to, e.g., d = 80 or d = 20, respectively) essentially maintains the very small reconstruction error. The a–ω diagram for this case is given in Fig. 2.9. As can be seen in

FIGURE 2.9 The a–ω diagram using DMD-50 for the dynamics defined in Eq. (2.142).

this diagram, the amplitudes decay spectrally as |ω| increases, which is consistent with the fact that the dynamics are periodic and smooth.


Let us now replace (2.142) by

v(t) = e^{i √30 t} Σ_{n=1}^{10} an un cos ωn t,   (2.146)

which is quasi-periodic because it exhibits the incommensurable frequencies

√30 ± ω1,   √30 ± ω2,   . . . ,   √30 ± ω10.   (2.147)

As in the former case, the algorithm DMD-1 gives poor results because, again, the spatial complexity is 10, but the spectral complexity is 20. The algorithm DMD-50, with the same tunable parameters as before, gives very good results, comparable to those obtained for the former case. The a–ω diagram for the present case is given in Fig. 2.10. As can be seen in this figure, the relevant

FIGURE 2.10 The a–ω diagram using DMD-50 for the dynamics defined in Eq. (2.146).

points do not fill a triangular region, in spite of the fact that the dynamics are quasi-periodic. Instead, this latter diagram is obtained from that in Fig. 2.9 by just shifting the frequencies by a quantity √30, which is consistent with the exponential factor appearing in (2.146). Thus, this is a good example of our comment above that quasi-periodic dynamics do not always give triangular a–ω diagrams. In the two examples considered above, the spectral complexity was larger than the spatial complexity and the DMD-1 algorithm did not give good results. Let us now consider the complex function

v(t) = Σ_{n=1}^{10} an un e^{i ωn t},   (2.148)


which only involves the 10 positive frequencies ω1 , . . . , ω10 , meaning that the spatial and spectral complexities are both equal to 10 in this case. Thus, the standard DMD algorithm is expected to provide good results. In fact, using the same tunable parameters as above, both the DMD-1 algorithm and the DMD-d algorithm, with 1 ≤ d ≤ 500, give very good results, since the damping rates are extremely small (∼ 10−13 ), the exact frequencies are computed with a relative error ∼ 10−11 , and the snapshots are reconstructed with a relative RMS error, as defined in Eq. (2.51), RRMSE∼ 4.5 · 10−13 . The a–ω diagram for the present case is given in Fig. 2.11.

FIGURE 2.11 The a–ω diagram using DMD-50 for the dynamics defined in Eq. (2.148).
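Since the spatial and spectral complexities coincide for (2.148), even a bare-bones one-step linear fit recovers the frequencies exactly. The Python sketch below is a simplified stand-in for the DMD-1 algorithm of Sect. 2.2.1 (it omits the SVD truncation and amplitude steps of the full algorithm in Annex 2.2):

```python
import numpy as np

# Toy model (2.148): V[n, k] = a_n exp(i omega_n t_k); spatial = spectral complexity = 10
n = np.arange(1, 11)
a, omega = 3.0 ** (-n), 10.0 * n
K = 1000
t = np.linspace(0.0, 1.0, K)
dt = t[1] - t[0]
V = a[:, None] * np.exp(1j * omega[:, None] * t[None, :])

# One-step linear (Koopman-like) fit: V_{k+1} ~ R V_k
X, Y = V[:, :-1], V[:, 1:]
R = Y @ np.linalg.pinv(X)
eigvals = np.linalg.eigvals(R)
freqs = np.sort(np.angle(eigvals) / dt)   # recovered angular frequencies
print(np.round(freqs, 6))                 # ~ [10, 20, ..., 100]
```

Here R is (numerically) diagonal with entries e^{i ωn Δt}, so its eigenvalue arguments divided by Δt return the 10 positive frequencies, consistent with the very small errors reported above.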

2.5 Some concluding remarks

The HODMD method has been described in detail, showing that this method is able to give expansions of the type (2.1) in cases in which standard DMD fails. The necessity of using the new method has been clarified using the concepts of spatial and spectral complexities, which are the rank of the system of spatial modes and the number of involved damping rates/frequencies, respectively. It was seen that standard DMD does not give good results when the spatial complexity is smaller than the spectral complexity, and that the spatial complexity can be increased by considering not only the standard snapshots, but also d ≥ 1 additional time-lagged snapshots. The algorithm to compute the new method was described in Sect. 2.2, where, for the sake of clarity, the cases d = 1 and d > 1 were considered separately, in Sects. 2.2.1 and 2.2.2, respectively. Also, the question of whether the expansion (2.1) computed by the method gives periodic or quasi-periodic dynamics depends on the commensurability of the involved frequencies, which is a highly non-trivial issue in finite-precision computations, where two frequencies are always commensurable. This point has also been addressed in Sect. 2.3.1, giving an algorithm that approximates any rational or irrational number (giving the ratio of two frequencies) by irreducible fractions; this algorithm in turn has been tested on several commensurable and incommensurable pairs of frequencies. The HODMD method gives a semi-analytic representation of the periodic or quasi-periodic dynamics underlying the given data. Also, some semi-analytic expressions have been given for the invariant orbit and invariant torus containing the periodic and quasi-periodic dynamics, respectively; see Sects. 2.3.2 and 2.3.3. The performance of the new HODMD method has been tested in Sect. 2.4 using several representative academic toy models that include periodic and quasi-periodic dynamics. The method will be further tested in subsequent chapters, considering less academic applications.

2.6 Annexes to Chapter 2

Annex 2.1. Selected practice problems for Chapter 2.

1. Apply the MATLAB function ‘ApproxCommensurability.m’, given in Annex 2.2 (item E), to compute Tables 2.1, 2.2, 2.3, and 2.4.
2. For the toy model function defined in Eq. (2.133), recover the results described in Sect. 2.4 for the considered values of the frequencies ω1 and ω2, namely (ω1, ω2) = (3, 5), (ω1, ω2) = (√2, 2), and (ω1, ω2) = (1.393, 1.97). Reproduce Figs. 2.1–2.8.
3. For the toy model functions defined in Eqs. (2.142), (2.146), and (2.148), use the DMD-1 and DMD-d algorithms to check the remaining results described in Sect. 2.4. Reproduce Figs. 2.9–2.11.

Annex 2.2. MATLAB functions to compute HODMD and approximate commensurability.

Concerning HODMD, for convenience, the algorithms DMD-1 and DMD-d with d > 1 are given in separate MATLAB functions, consistently with the description of HODMD in this chapter. The main MATLAB function and several auxiliary MATLAB functions are given in items A–D below. These functions must be located in the same directory. The functions to compute HODMD are the following:
• The main program is in the function ‘main-HODMD.m’. It is necessary to set the snapshot matrix and the parameters for the HODMD analysis.
• Functions ‘DMDd-SIADS.m’ and ‘DMD1-SIADS.m’, which include the DMD-d and DMD-1 algorithms.
• Function ‘ContReconst-SIADS.m’, which reconstructs the original data as an expansion of DMD modes.


A. HODMD algorithm: the main program

The following function computes the HODMD algorithm to calculate the main frequencies, damping rates, and DMD modes of a toy model with snapshot matrix saved in the MATLAB binary file ‘V.mat’.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% main-HODMD.m
%
% HODMD algorithm - MAIN program
% HODMD analyzes the toy model saved in V.mat
%
% INPUT:
%   d: parameter of DMD-d (higher order Koopman assumption)
%   V: snapshot matrix
%   Time: time vector
%   varepsilon1: first tolerance (SVD)
%   varepsilon: second tolerance (DMD-d modes)
% OUTPUT:
%   Vreconst: reconstruction of the snapshot matrix V
%   deltas: growth rates of the DMD modes
%   omegas: frequencies of the DMD modes (angular frequencies)
%   amplitude: amplitudes of the DMD modes
%   modes: DMD modes
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc
clear all
close all
load V.mat
load Time.mat

%% Input parameters
% Number of snapshots: nsnap
nsnap=1000
V=V(1:nsnap);
% DMD-d
d=270
% Tolerances for DMD-d
varepsilon1=1e-10 % SVD
varepsilon=1e-3   % DMD

[M,N]=size(V)
if d>1
    [Vreconst,deltas,omegas,amplitude] = ...
        DMDd_SIADS(d,V,Time,varepsilon1,varepsilon);
else
    [Vreconst,deltas,omegas,amplitude] = ...
        DMD1_SIADS(V,Time,varepsilon1,varepsilon);
end

%% RMS error
NormV=norm(V(:),2);
diff=V-Vreconst;
RelativeerrorRMS=norm(diff(:),2)/NormV;
RelativeerrorRMS
%% Max error
RelativeerrorMax=norm(diff(:),Inf)/norm(V(:),Inf);
RelativeerrorMax

figure(1)
plot(omegas,deltas,'k+')
xlabel('\omega_n')
ylabel('\delta_n')
figure(2)
semilogy(omegas,amplitude/max(amplitude),'k+')
xlabel('\omega_n')
ylabel('a_n')


B. DMD-d algorithm

This function computes the DMD-d algorithm with d > 1.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DMDd-SIADS.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Vreconst,GrowthRate,Frequency,Amplitude,DMDmode] = ...
    DMDd_SIADS(d,V,Time,varepsilon1,varepsilon)
% DMD-d
% INPUT:
%   d: parameter of DMD-d (higher order Koopman assumption)
%   V: snapshot matrix
%   Time: time vector
%   varepsilon1: first tolerance (SVD)
%   varepsilon: second tolerance (DMD-d modes)
% OUTPUT:
%   Vreconst: reconstruction of the snapshot matrix V
%   GrowthRate: growth rates of the DMD modes
%   Frequency: frequencies of the DMD modes (angular frequencies)
%   Amplitude: amplitudes of the DMD modes
%   DMDmode: DMD modes

[J,K]=size(V);

%% STEP 1: SVD of the original data
[U,Sigma,T]=svd(V,'econ');
sigmas=diag(Sigma);
n=length(sigmas);
NormS=norm(sigmas,2);
kk=0;
for k=1:n
    if norm(sigmas(k:n),2)/NormS>varepsilon1
        kk=kk+1;
    end
end
U=U(:,1:kk);
%% Spatial complexity: kk
('Spatial complexity')
kk

%% Create reduced snapshot matrix
hatT=Sigma(1:kk,1:kk)*T(:,1:kk)';
[N,~]=size(hatT);

%% Create the modified snapshot matrix
tildeT=zeros(d*N,K-d+1);
for ppp=1:d
    tildeT((ppp-1)*N+1:ppp*N,:)=hatT(:,ppp:ppp+K-d);
end

%% Dimension reduction
[U1,Sigma1,T1]=svd(tildeT,'econ');
sigmas1=diag(Sigma1);
Deltat=Time(2)-Time(1);
n=length(sigmas1);
NormS=norm(sigmas1,2);
kk1=0;
for k=1:n
    RRMSEE(k)=norm(sigmas1(k:n),2)/NormS;
    if RRMSEE(k)>varepsilon1
        kk1=kk1+1;
    end
end
('Spatial dimension reduction')
kk1
U1=U1(:,1:kk1);
hatT1=Sigma1(1:kk1,1:kk1)*T1(:,1:kk1)'; % Reduced modified snapshot matrix

[~,K1]=size(hatT1);
[tildeU1,tildeSigma,tildeU2]=svd(hatT1(:,1:K1-1),'econ');

%% Reduced modified Koopman matrix
tildeR=hatT1(:,2:K1)*tildeU2*inv(tildeSigma)*tildeU1';
[tildeQ,tildeMM]=eig(tildeR);
eigenvalues=diag(tildeMM);
M=length(eigenvalues);
qq=log(eigenvalues);
GrowthRate=real(qq)/Deltat;
Frequency=imag(qq)/Deltat;

Q=U1*tildeQ;
Q=Q((d-1)*N+1:d*N,:);
[NN,MMM]=size(Q);
for m=1:MMM
    NormQ=Q(:,m);
    Q(:,m)=Q(:,m)/norm(NormQ(:),2);
end

%% Calculate amplitudes
Mm=zeros(NN*K,M);
Bb=zeros(NN*K,1);
aa=eye(MMM);
for k=1:K
    Mm(1+(k-1)*NN:k*NN,:)=Q*aa;
    aa=aa*tildeMM;
    Bb(1+(k-1)*NN:k*NN,1)=hatT(:,k);
end
[Ur,Sigmar,Vr]=svd(Mm,'econ');
a=Vr*(Sigmar\(Ur'*Bb));
u=zeros(NN,M);
for m=1:M
    u(:,m)=a(m)*Q(:,m);
end
Amplitude=zeros(M,1);
for m=1:M
    aca=U*u(:,m);
    Amplitude(m)=norm(aca(:),2)/sqrt(J);
end

%% Sort the modes by amplitude and truncate
UU=[u;GrowthRate';Frequency';Amplitude']';
UU1=sortrows(UU,-(NN+3));
UU=UU1';
u=UU(1:NN,:);
GrowthRate=UU(NN+1,:);
Frequency=UU(NN+2,:);
Amplitude=UU(NN+3,:);
kk3=0;
for m=1:M
    if Amplitude(m)/Amplitude(1)>varepsilon
        kk3=kk3+1;
    end
end
%% Spectral complexity: number of DMD modes
('Spectral complexity')
kk3
u=u(:,1:kk3);
GrowthRate=GrowthRate(1:kk3);
Frequency=Frequency(1:kk3);
Amplitude=Amplitude(1:kk3);
('Mode number, delta, omega, Amplitude')
GrowthRateOmegAmpl=[(1:kk3)',GrowthRate',Frequency',Amplitude']

%% Reconstruction of the original snapshot matrix
hatTreconst=zeros(N,K);
for k=1:K
    hatTreconst(:,k)=ContReconst_SIADS(Time(k),Time(1),u,GrowthRate,Frequency);
end
Vreconst=U*hatTreconst;

%% Calculation of DMD modes
DMDmode=zeros(J,kk3);
Amplitude0=zeros(kk3,1);
for m=1:kk3
    NormMode=norm(U*u(:,m),2)/sqrt(J);
    Amplitude0(m)=NormMode;
    DMDmode(:,m)=U*u(:,m)/NormMode;
end

C. DMD-1 algorithm This function computes the DMD-1 algorithm. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DMD1-SIADS.m %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% function

[Vreconst,deltas,omegas,amplitude,DMDmode] =...

DMD1(V,Time,varepsilon1,varepsilon) %%%%%%%%%%%%%%%%%%%%%% DMD-1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% %% INPUT: %% %%% V: snapshot matrix %%% Time: vector time %%% varepsilon1: first tolerance (SVD) %%% varepsilon: second tolerance (DMD-d modes) %%% %% OUTPUT: %% %%% Vreconst: reconstruction of the snapshot matrix V %%% deltas: growht rate of DMD modes %%% omegas: frequency of DMD modes(angular frequency) %%% amplitude: amplitude of DMD modes %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% [J,K]=size(V); [U,Sigma,T]=svd(V,’econ’); sigmas=diag(Sigma); Deltat=Time(2)-Time(1); n=length(sigmas); NormS=norm(sigmas,2);

Higher order dynamic mode decomposition Chapter | 2 79

kk=0; for k=1:n if norm(sigmas(k:n),2)/NormS>varepsilon1 kk=kk+1; end end %% Spatial complexity: kk (’Spatial complexity’) kk U=U(:,1:kk); %% Create reduced snapshots matrix hatT=Sigma(1:kk,1:kk)*T(:,1:kk)’; [N,K]=size(hatT); [hatU1,hatSigma,hatU2]=svd(hatT(:,1:K-1),’econ’); %% Calculate Koopman operator hatR=hatT(:,2:K)*hatU2*inv(hatSigma)*hatU1’; [Q,MM]=eig(hatR); eigenvalues=diag(MM); M=length(eigenvalues); qq=log(eigenvalues); deltas=real(qq)/Deltat; omegas=imag(qq)/Deltat; %% Calculate amplitudes Mm=zeros(M*K,M); Bb=zeros(M*K,1); for k=1:K Mm(1+(k-1)*M:k*M,:)=Q*(MM^(k-1)); Bb(1+(k-1)*M:k*M,1)=hatT(:,k); end [Ur,Sigmar,Vr]=svd(Mm,’econ’); a=Vr*(Sigmar\(Ur’*Bb)); u=zeros(M,M); for m=1:M u(:,m)=a(m)*Q(:,m); end amplitude=zeros(M,1);

80 Higher Order Dynamic Mode Decomposition and Its Applications

for m=1:M
    aca=U*u(:,m);
    amplitude(m)=norm(aca(:),2)/sqrt(J);
end
UU=[u;deltas';omegas';amplitude']';
UU1=sortrows(UU,-(M+3));
UU=UU1';
u=UU(1:M,:);
deltas=UU(M+1,:);
omegas=UU(M+2,:);
amplitude=UU(M+3,:);
kk2=0;
for m=1:M
    if amplitude(m)/amplitude(1)>varepsilon
        kk2=kk2+1;
    end
end
%% Spectral complexity: number of DMD modes
('Spectral complexity')
kk2
u=u(:,1:kk2);
deltas=deltas(1:kk2);
omegas=omegas(1:kk2);
amplitude=amplitude(1:kk2);
('Mode number, delta, omega, amplitude')
DeltasOmegAmpl=[deltas',omegas',amplitude']
hatTreconst=zeros(N,K);
for k=1:K
    hatTreconst(:,k)=ContReconst_SIADS(Time(k),Time(1),u,deltas,omegas);
end
Vreconst=U*hatTreconst;
%% Calculation of DMD modes
DMDmode=zeros(J,kk2);
Amplitude0=zeros(kk2,1);
for m=1:kk2
    NormMode=norm(U*u(:,m),2)/sqrt(J);


Amplitude0(m)=NormMode;
DMDmode(:,m)=U*u(:,m)/NormMode;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
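For readers working outside MATLAB, the core steps of the DMD-1 listing above (SVD truncation, reduced Koopman matrix, eigendecomposition, and least-squares amplitudes) can be sketched with NumPy. This is an illustrative translation under simplifying assumptions (a pseudo-inverse replaces the explicit SVD of hatT(:,1:K-1), and the sorting and amplitude-truncation steps are omitted), not the authors' code:

```python
import numpy as np

def dmd1(V, dt, eps1=1e-10):
    """Minimal DMD-1 sketch: V is the J x K snapshot matrix."""
    U, s, Wh = np.linalg.svd(V, full_matrices=False)
    # SVD truncation, as in the listing above: keep kk modes
    kk = sum(np.linalg.norm(s[k:]) / np.linalg.norm(s) > eps1
             for k in range(len(s)))
    U = U[:, :kk]
    hatT = np.diag(s[:kk]) @ Wh[:kk, :]            # reduced snapshots
    # Reduced Koopman matrix: hatT(:,2:K) times pinv of hatT(:,1:K-1)
    hatR = hatT[:, 1:] @ np.linalg.pinv(hatT[:, :-1])
    lam, Q = np.linalg.eig(hatR)
    deltas = np.log(lam).real / dt                 # growth rates
    omegas = np.log(lam).imag / dt                 # angular frequencies
    # Amplitudes: least-squares fit of all reduced snapshots (optimized DMD)
    K = hatT.shape[1]
    Mm = np.vstack([Q @ np.diag(lam ** k) for k in range(K)])
    Bb = hatT.T.reshape(-1).astype(complex)
    a = np.linalg.lstsq(Mm, Bb, rcond=None)[0]
    modes = U @ (Q * a)                            # full-size DMD modes
    return deltas, omegas, a, modes
```

Applied to snapshots of a single undamped oscillation, the sketch recovers growth rates near zero and the pair of frequencies ±ω.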

D. Reconstruction of the original field

This function reconstructs the original data as an expansion of DMD modes.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ContReconst_SIADS.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function ContReconst=ContReconst_SIADS(t,t0,u,deltas,omegas)
[N,M]=size(u);
vv=zeros(M,1);
for m=1:M
    vv(m)=exp((deltas(m)+i*omegas(m))*(t-t0));
end
ContReconst=u*vv;
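The reconstruction formula implemented above — each reduced mode multiplied by exp((δ_m + iω_m)(t − t0)) — has this minimal NumPy counterpart (an illustrative sketch, not the authors' code):

```python
import numpy as np

def cont_reconst(t, t0, u, deltas, omegas):
    """Evaluate the DMD expansion sum_m u[:, m] * exp((delta_m + i*omega_m)*(t - t0))."""
    vv = np.exp((deltas + 1j * omegas) * (t - t0))
    return u @ vv
```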

E. Approximate commensurability

The following MATLAB function 'ApproxCommensurability.m' approximates frequency ratios by irreducible fractions. A preliminary FORTRAN version of this function was provided to us by Marta Net and Joan Sánchez, from the Universitat Politècnica de Catalunya.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ApproxCommensurability.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function computes irreducible fractions, p/q,


% that converge to the ratio omega1/omega2, with the
% frequencies

% omega1 < omega2

$$a_{mn} = \frac{1}{\sqrt{J_1 J_2 L}} \sqrt{\sum_{j_1=1}^{J_1} \sum_{j_2=1}^{J_2} \sum_{l=1}^{L} \big|\tilde u^{\,l}_{mn j_1 j_2}\big|^2} > 0 \qquad (4.51)$$

and

$$u^{l}_{mn}(y^1_{j_1}, y^2_{j_2}) = \frac{\tilde u^{\,l}_{mn j_1 j_2}}{a_{mn}}, \qquad (4.52)$$

where $J_1$ and $J_2$ are the number of grid points along the transverse coordinates $y^1$ and $y^2$, respectively, $L$ is the number of components of the vector defining the dynamics (see Eq. (4.39)), and

$$\tilde u^{\,l}_{mn j_1 j_2} = a^{x}_{m} a^{t}_{n} \sum_{p_1=1}^{P_1} \sum_{p_4=1}^{P_4} \tilde S_{p_1 j_1 j_2 p_4 l}\, u^{x}_{p_1 m}\, u^{t}_{p_4 n}. \qquad (4.53)$$


Note that the amplitudes are real and positive and the modes exhibit unit RMS norm. As in the previous cases, once the amplitudes $a_{mn}$ have been calculated, the expansion (4.37) is truncated by eliminating those terms such that

$$\frac{a_{mn}}{\max a_{mn}} < \varepsilon_{\text{DMD}}, \qquad (4.54)$$

where the threshold εDMD is taken to coincide with its counterpart in the application of the HODMD method to the reduced spatial and temporal snapshots; see Eqs. (4.48)–(4.49). Finally, setting xi and tk equal to x and t, respectively, in (4.50), which involves interpolation in these variables, and applying a convenient interpolation in the transverse variables y 1 and y 2 , using the values of these variables in the spatial grid yj11 and yj22 , completes the derivation of the expansion (4.42). The longitudinal coordinate x can be an azimuthal coordinate in systems invariant under rotation. One such system, associated with thermal convection in a spherical shell, will be considered in Chapter 5.
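The amplitude-truncation rule (4.54) (and its counterparts in the other subsections) amounts to keeping only the modes whose amplitude, relative to the largest one, exceeds εDMD. A minimal NumPy sketch (the function name is made up for the example):

```python
import numpy as np

def truncate_by_amplitude(amplitudes, eps_dmd):
    """Boolean mask of retained modes: a_mn / max(a_mn) >= eps_dmd, as in Eq. (4.54)."""
    a = np.asarray(amplitudes, dtype=float)
    return a / a.max() >= eps_dmd
```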

4.3.4 Vector state variable with one transverse and two longitudinal coordinates

Let us now derive the expansion (4.11), which is rewritten here for convenience as

$$v(x^1, x^2, y, t) = \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{n=1}^{N} a_{m_1 m_2 n}\, u_{m_1 m_2 n}(y)\, e^{(\nu^1_{m_1} + i\,\kappa^1_{m_1})\,x^1 + (\nu^2_{m_2} + i\,\kappa^2_{m_2})\,x^2 + (\delta_n + i\,\omega_n)\,t}, \qquad (4.55)$$

or, denoting with the index $l$ the components of the state vector $v$, the expansion (4.55) is written as

$$v^l(x^1, x^2, y, t) = \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{n=1}^{N} a_{m_1 m_2 n}\, u^l_{m_1 m_2 n}(y)\, e^{(\nu^1_{m_1} + i\,\kappa^1_{m_1})\,x^1 + (\nu^2_{m_2} + i\,\kappa^2_{m_2})\,x^2 + (\delta_n + i\,\omega_n)\,t}. \qquad (4.56)$$

As anticipated, the transverse space variable y must be scalar in this case. As also anticipated, if three longitudinal variables are present, the transverse space variable y must be absent and the modes are (generally complex) spatially constant vectors. This case of three longitudinal space variables is not developed, for the sake of brevity; it can be treated using a method similar to the one described below to derive the expansion (4.55).

Spatio-temporal Koopman decomposition Chapter | 4 133

In the present case, the snapshot tensor is of fifth order, with its components given by

$$T_{i_1 i_2 j k l} = v^l(x^1_{i_1}, x^2_{i_2}, y_j, t_k), \qquad (4.57)$$

where the discrete values of $y$, $y_j$, need not be equispaced, but those of $x^1$, $x^2$, and $t$ are equispaced, namely

$$x^1_{i_1} = (i_1 - 1)\,\Delta x^1, \qquad x^2_{i_2} = (i_2 - 1)\,\Delta x^2, \qquad t_k = (k - 1)\,\Delta t. \qquad (4.58)$$
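As an illustration of Eqs. (4.57)–(4.58), the fifth-order snapshot tensor can be populated by sampling a two-component field on equispaced x¹, x², t grids and an arbitrary y grid. The field, grid sizes, and spacings below are entirely made up for the example:

```python
import numpy as np

# Grid sizes and hypothetical spacings; the y grid need not be equispaced
I1, I2, J, K, L = 8, 6, 5, 10, 2
dx1, dx2, dt = 0.1, 0.2, 0.05
x1 = np.arange(I1) * dx1           # Eq. (4.58)
x2 = np.arange(I2) * dx2
t = np.arange(K) * dt
y = np.array([0.0, 0.1, 0.3, 0.7, 1.0])

def v(l, x1v, x2v, yv, tv):
    # Entirely made-up two-component field, used only to fill the tensor
    return (l + 1) * np.cos(2 * np.pi * x1v + 3 * tv) * np.sin(4 * x2v) * np.exp(-yv)

# Fifth-order snapshot tensor T[i1, i2, j, k, l] = v^l(x1_i1, x2_i2, y_j, t_k), Eq. (4.57)
X1, X2, Y, Tt, Ll = np.meshgrid(x1, x2, y, t, np.arange(L), indexing="ij")
snapshots = v(Ll, X1, X2, Y, Tt)
```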

As in the cases considered in the former subsections, the method proceeds in three steps.

• Step 1: Dimension reduction and separation into longitudinal and temporal coordinates. Now, truncated higher order SVD (see Eq. (1.28)) is applied to the snapshot tensor (4.57), which gives

$$T_{i_1 i_2 j k l} = \sum_{p_1=1}^{P_1} \sum_{p_2=1}^{P_2} \sum_{p_3=1}^{P_3} \sum_{p_4=1}^{P_4} \sum_{p_5=1}^{P_5} S_{p_1 p_2 p_3 p_4 p_5}\, x^1_{p_1 i_1}\, x^2_{p_2 i_2}\, y_{p_3 j}\, t_{p_4 k}\, c_{p_5 l}, \qquad (4.59)$$

where truncation is applied according to Eq. (1.39). The tunable parameter $\varepsilon_{\text{HOSVD}}$ is chosen as we did in the former subsections. Namely, this parameter can be very small if the snapshot tensor is very clean, but larger values of $\varepsilon_{\text{HOSVD}}$ may help to filter noise. Now, recalling Eq. (4.57), Eq. (4.59) can also be written as

$$v^l(x^1_{i_1}, x^2_{i_2}, y_j, t_k) \equiv T_{i_1 i_2 j k l} = \sum_{p_1=1}^{P_1} \sum_{p_2=1}^{P_2} \sum_{p_4=1}^{P_4} \tilde S_{p_1 p_2 j p_4 l}\, \tilde v^{x^1}_{p_1 i_1}\, \tilde v^{x^2}_{p_2 i_2}\, \tilde v^{t}_{p_4 k}, \qquad (4.60)$$

where the components of the rescaled core tensor $\tilde S$ are

$$\tilde S_{p_1 p_2 j p_4 l} = \frac{1}{\sqrt[3]{\sigma^1_{p_1} \sigma^2_{p_2} \sigma^4_{p_4}}} \sum_{p_3=1}^{P_3} \sum_{p_5=1}^{P_5} S_{p_1 p_2 p_3 p_4 p_5}\, y_{p_3 j}\, c_{p_5 l} \qquad (4.61)$$

and the components of the rescaled snapshot matrices along the longitudinal and temporal dimensions are

$$\tilde v^{x^1}_{p_1 i_1} = \sqrt[3]{\sigma^1_{p_1}}\, x^1_{p_1 i_1}, \qquad \tilde v^{x^2}_{p_2 i_2} = \sqrt[3]{\sigma^2_{p_2}}\, x^2_{p_2 i_2}, \qquad \tilde v^{t}_{p_4 k} = \sqrt[3]{\sigma^4_{p_4}}\, t_{p_4 k}. \qquad (4.62)$$

In Eqs. (4.61) and (4.62), $\sigma^1$, $\sigma^2$, and $\sigma^4$ are the HOSVD singular values along the first, second, and fourth directions of the original snapshot tensor appearing in (4.59). Note the presence of the cubic roots of the singular values

to rescale $\tilde v^{x^1}$, $\tilde v^{x^2}$, and $\tilde v^{t}$, which is due to the fact that now we have three coordinates to rescale, and is consistent with the spatio-temporal expansions (4.55) and (4.56).

• Step 2: Calculation of the reduced DMD spatial and temporal expansions. Now we note that the indexes $i_1$, $i_2$, and $k$ appearing in the reduced snapshots $(\tilde v^{x^1}_{p_1 i_1})$, $(\tilde v^{x^2}_{p_2 i_2})$, and $(\tilde v^{t}_{p_4 k})$ can be seen as associated with the spatial and temporal meshes defined in (4.58). Applying the HODMD algorithm DMD-d described in Chapter 2 (with appropriate indexes $d^{x^1}, d^{x^2}, d^{t} \geq 1$, which as in the previous subsections do not necessarily coincide for the spatial and temporal expansions) to the spatial and temporal reduced snapshots defined in (4.62) leads to

$$\tilde v^{x^1}_{p_1 i_1} \simeq \sum_{m_1=1}^{M_1} a^{x^1}_{m_1}\, u^{x^1}_{p_1 m_1}\, e^{(\nu^1_{m_1} + i\,\kappa^1_{m_1})\,x^1_{i_1}}, \qquad (4.63)$$

$$\tilde v^{x^2}_{p_2 i_2} \simeq \sum_{m_2=1}^{M_2} a^{x^2}_{m_2}\, u^{x^2}_{p_2 m_2}\, e^{(\nu^2_{m_2} + i\,\kappa^2_{m_2})\,x^2_{i_2}}, \qquad (4.64)$$

$$\tilde v^{t}_{p_4 k} \simeq \sum_{n=1}^{N} a^{t}_{n}\, u^{t}_{p_4 n}\, e^{(\delta_n + i\,\omega_n)\,t_k}, \qquad (4.65)$$

for $p_1 = 1, \ldots, P_1$, $i_1 = 1, \ldots, I_1$, $p_2 = 1, \ldots, P_2$, $i_2 = 1, \ldots, I_2$, $p_4 = 1, \ldots, P_4$, and $k = 1, \ldots, K$. Concerning the tunable parameters in the application of the HODMD method to the spatial and temporal reduced snapshots, the threshold for the dimension reduction step, $\varepsilon_{\text{SVD}}$, is taken to coincide with its counterpart $\varepsilon_{\text{HOSVD}}$ in step 1 above, and the remaining parameters are selected as explained in Chapter 2, Sect. 2.2.5, item 7.

• Step 3: Calculation of the spatio-temporal DMD expansion. Substituting (4.63)–(4.65) into (4.60) yields the discrete counterpart of (4.56), namely

$$v^l(x^1_{i_1}, x^2_{i_2}, y_j, t_k) = \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{n=1}^{N} a_{m_1 m_2 n}\, u^l_{m_1 m_2 n}(y_j)\, e^{(\nu^1_{m_1} + i\,\kappa^1_{m_1})\,x^1_{i_1} + (\nu^2_{m_2} + i\,\kappa^2_{m_2})\,x^2_{i_2} + (\delta_n + i\,\omega_n)\,t_k}, \qquad (4.66)$$

with

$$a_{m_1 m_2 n} = \frac{1}{\sqrt{J L}} \sqrt{\sum_{j=1}^{J} \sum_{l=1}^{L} \big|\tilde u^{\,l}_{m_1 m_2 n j}\big|^2} > 0, \qquad (4.67)$$

where $J$ is the number of grid points for the transverse variable $y$, $L$ is the number of components of the vector defining the dynamics (see Eq. (4.56)),

$$u^l_{m_1 m_2 n}(y_j) = \frac{\tilde u^{\,l}_{m_1 m_2 n j}}{a_{m_1 m_2 n}}, \qquad (4.68)$$

and

$$\tilde u^{\,l}_{m_1 m_2 n j} = a^{x^1}_{m_1} a^{x^2}_{m_2} a^{t}_{n} \sum_{p_1=1}^{P_1} \sum_{p_2=1}^{P_2} \sum_{p_4=1}^{P_4} \tilde S_{p_1 p_2 j p_4 l}\, u^{x^1}_{p_1 m_1}\, u^{x^2}_{p_2 m_2}\, u^{t}_{p_4 n}. \qquad (4.69)$$

Note that, as in the previous sections, the mode amplitudes are real and positive and the (generally complex) modes exhibit unit RMS norm. After calculating the mode amplitudes, the expansion (4.37) is truncated by setting to zero those elements such that

$$\frac{a_{m_1 m_2 n}}{\max a_{m_1 m_2 n}} < \varepsilon_{\text{DMD}}, \qquad (4.70)$$

where the threshold $\varepsilon_{\text{DMD}}$ is chosen to coincide with its counterpart in the application of the HODMD method to the reduced spatial and temporal snapshots, in step 2. Finally, setting $x^1_{i_1} = x^1$, $x^2_{i_2} = x^2$, and $t_k = t$ in (4.66), which involves interpolation in these variables, and conveniently interpolating in the transverse variable $y$ using its values in the spatial grid $y_j$, yields the expansion (4.56) and completes the derivation of this expansion.
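The three cube roots in the Step 1 rescaling (Eqs. (4.60)–(4.62)) can be checked numerically: distributing each HOSVD singular value as a cube root over the two longitudinal factor matrices and the temporal one, and dividing the partially contracted core by the same cube roots, leaves the recombined tensor unchanged. The sketch below assumes a basic unfolding-based HOSVD (full rank, no truncation) and is not the authors' implementation:

```python
import numpy as np

def mode_mult(T, M, n):
    # Mode-n product: contracts the second index of M with axis n of T
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def hosvd(T):
    # Factor matrix per mode from the SVD of each unfolding; core by projection
    Us = []
    for n in range(T.ndim):
        unf = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        Us.append(U)
    S = T
    for n, U in enumerate(Us):
        S = mode_mult(S, U.T.conj(), n)
    return S, Us

def mode_sv(S, n):
    # HOSVD singular values along mode n: Frobenius norms of the core slices
    return np.linalg.norm(np.moveaxis(S, n, 0).reshape(S.shape[n], -1), axis=1)

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3, 5, 6, 2))     # modes: x1, x2, y, t, components
S, Us = hosvd(T)
s1, s2, s4 = mode_sv(S, 0), mode_sv(S, 1), mode_sv(S, 3)

# Rescaled snapshot matrices, Eq. (4.62): cube roots of the singular values
vx1 = Us[0] * np.cbrt(s1)
vx2 = Us[1] * np.cbrt(s2)
vt = Us[3] * np.cbrt(s4)
# Rescaled core, Eq. (4.61): contract y and component factors, divide by cube roots
S_yc = mode_mult(mode_mult(S, Us[2], 2), Us[4], 4)
S_tilde = S_yc / (np.cbrt(s1)[:, None, None, None, None]
                  * np.cbrt(s2)[None, :, None, None, None]
                  * np.cbrt(s4)[None, None, None, :, None])

# Eq. (4.60): recombining the rescaled pieces reproduces the snapshot tensor
T_rec = mode_mult(mode_mult(mode_mult(S_tilde, vx1, 0), vx2, 1), vt, 3)
```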

4.4 Some key points about the use of the STKD method

As formulated above, the method involves several tunable parameters, namely the sampled space and time intervals, the associated sampling frequencies, the spatial and temporal indexes d^x (or d^{x1} and d^{x2} when two longitudinal coordinates are present) and d^t in the application of the DMD-d algorithm to the reduced spatial and temporal snapshots, and the thresholds εSVD (or εHOSVD when the snapshot matrix must be replaced by a snapshot tensor), which governs the dimension reduction step, and εDMD, which governs amplitude truncation. Some remarks about the selection of these parameters are now in order:

• The lengths of the spatial and temporal sampled intervals must be somewhat larger (say, 1.5 times as large) than the largest spatial and temporal periods involved in the dynamics.
• The number of spatial and temporal grid points, on the other hand, must be such that the distances Δx (or Δx^1 and Δx^2 when two longitudinal coordinates are present) and Δt are much smaller (say, five times smaller) than the smallest spatial wavelengths along the spatial coordinates and the smallest temporal period involved in the dynamics.

• The spatial and temporal HODMD indexes, d^x (or d^{x1} and d^{x2} when two longitudinal coordinates are present) and d^t, are selected through trial and error, noting that, as explained in Chapter 2, they scale with the numbers of spatial and temporal grid points, respectively. However, the selection of these indexes is not critical: the method gives almost identical results for d^x and d^t in wide intervals around the optimal values.
• Concerning the thresholds εSVD (or εHOSVD) and εDMD, these must be somewhat large compared to the expected accuracy of the databases that are being used. For clean data, εSVD (or εHOSVD) can be very small and must be somewhat smaller than εDMD, which in turn must be comparable to the accuracy that is expected in the STKD computation. As anticipated, for noisy data, choosing εSVD (or εHOSVD) somewhat larger than the noise level provides a means to filter noise [102,167]. Note in this connection that an iterative method developed in [102] was very efficient for noise filtering in the context of purely temporal HODMD. This method could be generalized to the present STKD method, but such an extension is beyond the scope of this book. In any event, it is convenient to ensure the robustness of the results, namely by checking that the obtained spatio-temporal expansion is insensitive to the spatial and temporal sampled spans, the sampling frequencies in both space and time, and the remaining tunable parameters.

4.5 Some toy models

For simplicity, we consider several scalar toy models exhibiting permanent dynamics. Thus, the state variable is scalar and the spatial and temporal growth rates are both equal to zero. In addition, the various (spatially one-dimensional) patterns will be spatially periodic, with period 1. The counterpart of the expansion (4.6) is

$$v(x,t) = \sum_{m=1}^{M} \sum_{n=1}^{N} a_{mn}\, u_{mn}\, e^{i\,(2\pi m x + \omega_n t)}, \qquad (4.71)$$

where the modes $u_{mn}$ are complex scalars with unit absolute value and the mode amplitudes $a_{mn} > 0$ are real. Thus, the version of the STKD method to be applied is that in Sect. 4.3.1. The aim is to see how the dispersion diagrams provided by the STKD method can be used to identify pure or modulated standing and traveling wave patterns. The HODMD method (described in Chapter 2), which will also be applied, decomposes $v(x,t)$ as

$$v(x,t) = \sum_{n=1}^{N} a_n\, u_n(x)\, e^{i\,\omega_n t}, \qquad (4.72)$$


where the spatial modes $u_n(x)$ will be periodic with fundamental wavenumber equal to $2\pi$. Also, in all cases, we shall take 4000 temporally equispaced snapshots in the interval $0 \leq t \leq 1$, with each of them spatially equispaced using 2000 points in the interval $0 \leq x \leq 2\pi$. The thresholds for the dimension reduction and the mode truncation steps in both the HODMD and STKD methods will always be

$$\varepsilon_{\text{SVD}} = 10^{-10} \qquad \text{and} \qquad \varepsilon_{\text{DMD}} = 10^{-6}, \qquad (4.73)$$

respectively. The index $d$ for the HODMD algorithm DMD-d and the spatial and temporal indexes, $d^x$ and $d^t$, for the STKD method will be chosen for the various toy models after slight calibration. The toy models considered below will be constructed using the real function

$$w(x) = \sum_{m=1}^{6} 3^{-m} \cos(mx), \qquad (4.74)$$

which is periodic with fundamental frequency equal to one. The first toy model to be considered very clearly shows the advantages of the STKD method in identifying TWs. This toy model is defined as

$$v(x,t) = w(30t + 10\pi x), \qquad (4.75)$$

where $w$ is as defined in Eq. (4.74). The spatio-temporal color map of this pattern is given in Fig. 4.1. As can be seen, the pattern is a pure TW, which remains stationary in a frame moving to the left. This is consistent with the definition (4.75).
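The function (4.74) and the traveling-wave snapshots (4.75) are easy to generate; the NumPy sketch below uses a coarser grid than the book's 2000 × 4000 points, for brevity:

```python
import numpy as np

def w(theta):
    """Eq. (4.74): w(theta) = sum_{m=1}^{6} 3^{-m} cos(m*theta), 2*pi-periodic."""
    m = np.arange(1, 7)
    return (3.0 ** -m * np.cos(np.multiply.outer(theta, m))).sum(axis=-1)

# First toy model, Eq. (4.75): a pure traveling wave v(x, t) = w(30 t + 10 pi x)
x = np.linspace(0.0, 2.0 * np.pi, 200)   # 200 spatial points instead of 2000
t = np.linspace(0.0, 1.0, 400)           # 400 snapshots instead of 4000
V = w(30.0 * t[None, :] + 10.0 * np.pi * x[:, None])
```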

FIGURE 4.1 Spatio-temporal color map for the dynamics defined in Eq. (4.75).

Applying the HODMD method, which decomposes the function in the form (4.72), gives the following results. The DMD-1 algorithm recognizes 13 modes,


computing the exact frequencies with 13 significant digits, calculating damping rates (which should be zero) of the order of 10⁻¹¹, and giving very good results, since it reconstructs the snapshots with a relative RMS error, as defined in Eq. (2.51), RRMSE ∼ 1.8 · 10⁻¹². The resulting a–ω diagram is depicted in Fig. 4.2. The DMD-d algorithm gives similarly good results in the range 1 < d ≤ 300. As it happened in Chapter 2, this illustrates the robustness of the HODMD method.

FIGURE 4.2 The a–ω diagram using DMD-1 for the dynamics defined in Eq. (4.75).

On the other hand, the STKD method with d^x = d^t = 1 reconstructs the snapshots with a relative RMS error RRMSE ∼ 1.8 · 10⁻¹², and gives both the exact frequencies and wavenumbers with 14 significant figures, together with the dispersion diagram shown in Fig. 4.3. As can be seen in this diagram, the relevant

FIGURE 4.3 The dispersion diagram using the STKD method with d x = d t = 1 for the dynamics defined in Eq. (4.75).

points are contained in a straight line passing through the origin, consistently


with the fact that the pattern is a pure TW, since, invoking Eqs. (4.74) and (4.75), the pattern can be written as

$$v(x,t) = \frac{1}{2} \sum_{m=-6}^{6} 3^{-|m|}\, e^{i\,m\,(30t + 10\pi x)}. \qquad (4.76)$$

Thus, the slope of the straight line containing the points in Fig. 4.3 is ω/κ = 3/π = −c, where c = −3/π is the phase velocity of the pattern, namely the velocity at which the pattern moves (to the left, because the phase velocity is negative). In addition, since 3⁻⁶ is much larger than εDMD = 10⁻⁶ (see Eq. (4.73)), Eq. (4.76) shows that: (i) all terms appearing in (4.76) are retained by the HODMD method, which explains the very small errors; and (ii) the spatial and spectral complexities are both equal to 13, which is consistent with the good performance of the DMD-1 algorithm. As a second toy model, we consider two counter-propagating waves of the form (4.75), namely

$$v(x,t) = w(30t + 10\pi x) + w(30t - 10\pi x), \qquad (4.77)$$

where w is as defined in Eq. (4.74). The color map of this pattern is depicted in Fig. 4.4, where the two counter-propagating waves are visible, but it is seen that

FIGURE 4.4 Spatio-temporal color map for the dynamics defined in Eq. (4.77).

the pattern can also be considered as a modulated SW, in which the positions of the nodes and crests do not remain constant, but oscillate left and right. Note that this pattern is reflection-symmetric in x. Now, the HODMD algorithm DMD-1 only captures 7 modes and gives a poor reconstruction of the snapshots, with a relative RMS error RRMSE∼ 0.17. The algorithm DMD-300 (with the index d = 300 selected after a slight calibration), instead, identifies 13 modes, computes the exact frequencies with 13 significant digits, calculates damping rates (which should be zero) of the order of 10−12 , and reconstructs the snapshots with a relative RMS error RRMSE∼


FIGURE 4.5 The a–ω diagram using the HODMD algorithm DMD-300 for the dynamics defined in Eq. (4.77).

3.9 · 10⁻¹². The resulting a–ω diagram is given in Fig. 4.5. It is interesting to note that the counterpart of Eq. (4.76) for the present case is given by

$$v(x,t) = \sum_{m=-6}^{6} 3^{-|m|}\, e^{30\,i\,m\,t} \cos(10\,m\,\pi x), \qquad (4.78)$$

which, noting that cos(10mπx) = cos(−10mπx), implies that the spatial complexity is 7, while the spectral complexity is 13. This explains why the DMD-1 algorithm does not give good results in the present case. Also, Eq. (4.78) shows that the pattern can be seen as a modulated SW. The STKD method, with d^t = 300 and d^x = 150, gives both the exact frequencies and wavenumbers with 13 significant figures and reconstructs the snapshots with a relative RMS error RRMSE ∼ 1.3 · 10⁻¹². The dispersion diagram is depicted in Fig. 4.6, where it can be seen that the relevant points are contained

FIGURE 4.6 The dispersion diagram using the STKD method with d^x = 30 and d^t = 300 for the dynamics defined in Eq. (4.77).

in two symmetric straight lines passing through the origin, which, according to Eq. (4.77), is consistent with the fact that the pattern is the superposition of two


counter-propagating TWs. Also, the points in this diagram can be seen as organized in a family of horizontal straight lines, which is consistent with the fact that the pattern is a modulated SW. The third toy model is given by

$$v(x,t) = w(30t)\, w(10\pi x), \qquad (4.79)$$

where w is as defined in Eq. (4.74). This toy model represents a simpler, pure SW, because the nodes and crests remain at constant positions (namely, at the nodes and crests of the function w(10πx)), as can be guessed from the color map in Fig. 4.7. Note that this toy model is both reflection-symmetric in x and periodic in t.

FIGURE 4.7 Spatio-temporal color map for the dynamics defined in Eq. (4.79).

The DMD-1 algorithm recognizes only one mode and gives a very poor reconstruction of the snapshots, with a relative RMS error RRMSE ∼ 0.29. The HODMD algorithm DMD-600 (with the index d = 600 selected after a slight calibration, as in the previous cases), instead, identifies 13 modes, computes the exact frequencies with 13 significant digits, and reconstructs the snapshots with a relative RMS error RRMSE ∼ 1.6 · 10⁻¹². The associated a–ω diagram is given in Fig. 4.8. In the present case, the poor performance of the DMD-1 algorithm is a consequence of the fact that the spatial complexity of the dynamics is just one: according to Eq. (4.79), all snapshots are proportional to w(10πx), while the spectral complexity is obviously larger than one (namely, equal to 13 with the chosen value of the mode truncation parameter εDMD). On the other hand, the STKD method, with d^t = 600 and d^x = 300, reconstructs the snapshots with a relative RMS error RRMSE ∼ 9 · 10⁻⁷ and computes the exact frequencies and wavenumbers with 14 significant digits. The resulting dispersion diagram is given in Fig. 4.9. Note that the points in this diagram are aligned in horizontal lines, which is consistent with the fact that the pattern is a standard SW.


FIGURE 4.8 The a–ω diagram using the HODMD algorithm DMD-600 for the dynamics defined in Eq. (4.79).

FIGURE 4.9 The dispersion diagram using the STKD method with d^x = 300 and d^t = 600 for the dynamics defined in Eq. (4.79).

The last toy model to be considered is given by

$$v(x,t) = w(40t)\, w(30t + 10\pi x), \qquad (4.80)$$

where w is as defined in Eq. (4.74). This toy model represents a modulated TW; its difference with the first toy model, defined in (4.75), is that now the pattern is not stationary in moving axes, but also oscillates as it travels. The counterpart of Fig. 4.1 is given in Fig. 4.10, where the modulation of the TW (which is moving to the left) is clearly seen. Note that, since w is periodic with fundamental frequency equal to one, this toy model is periodic because the involved frequencies, ω = 30 and ω = 40, are commensurable. As for the former toy models, we apply to this toy model the HODMD algorithm DMD-1, which only recognizes 13 modes and gives a very poor reconstruction of the snapshots, with a relative RMS error RRMSE ∼ 0.32. The algorithm DMD-400 (again, with the index d = 400 chosen after a slight calibration), instead, identifies 77 modes, computes the exact frequencies with 11 significant digits, and reconstructs the snapshots with a relative RMS error


FIGURE 4.10 Spatio-temporal color map for the dynamics defined in Eq. (4.80).

FIGURE 4.11 The a–ω diagram for the dynamics defined in Eq. (4.80).

RRMSE∼ 6.2 · 10−7 . The a–ω diagram obtained by the latter method is given in Fig. 4.11. Note the spectral decay of the amplitude for large |ω|, which is consistent with the fact that the pattern is periodic and smooth. On the other hand, the STKD method, with d t = 400 and d x = 200, reconstructs the snapshots with a relative RMS error RRMSE∼ 9 · 10−7 and computes the exact frequencies and wavenumbers with 11 significant digits. The resulting dispersion diagram is given in Fig. 4.12. Note that the points are aligned in a family of oblique parallel lines, which is consistent with the fact that the pattern is a modulated TW. Summarizing, the STKD method has been able to identify standard and modulated TWs and SWs in a very clear way. Specifically, the a − ω diagrams were able to identify the dominant modes in all cases. Also, the shape of the dispersion diagrams (whose points can be located in an oblique straight line passing through the origin for pure TWs, or in families of oblique or horizontal straight lines for modulated TWs or SWs, respectively) has helped to elucidate the nature of the spatio-temporal patterns.


FIGURE 4.12 The dispersion diagram for the dynamics defined in Eq. (4.80) as obtained using the STKD method with d t = 400 and d x = 200.
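The geometric criteria summarized above can be turned into a crude numerical check on the retained (κ, ω) pairs; the classifier below is a hypothetical illustration (the thresholds, labels, and function name are made up), not part of the STKD method itself:

```python
import numpy as np

def classify_dispersion(kappas, omegas, tol=1e-8):
    """Crude, illustrative classifier of (kappa, omega) dispersion points."""
    k = np.asarray(kappas, dtype=float)
    o = np.asarray(omegas, dtype=float)
    mask = np.abs(k) > tol
    slopes = o[mask] / k[mask]
    # All points on a single oblique line through the origin -> pure TW
    if mask.all() and slopes.size and np.ptp(slopes) < tol:
        return "pure TW"
    # Every point paired with its mirror (-kappa, omega): horizontal lines -> SW
    if all(np.any((np.abs(k + ki) < tol) & (np.abs(o - oi) < tol))
           for ki, oi in zip(k, o)):
        return "SW"
    return "modulated TW or other"
```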

4.6 Some concluding remarks

The STKD method has been described in detail. This method is a spatio-temporal extension of the HODMD method described in Chapter 2. The STKD method is able to give expansions involving oscillations and growth or decay in time and in one or more distinguished spatial coordinates, which have been called longitudinal coordinates; see Eqs. (4.4), (4.6), (4.7), and (4.11). The expansion may involve additional spatial coordinates, called transverse coordinates. The STKD method has been described in Sect. 4.3 for various representative cases, involving scalar or vector state variables and one or more longitudinal and/or transverse coordinates. The method has been illustrated, and compared with the HODMD method, in Sect. 4.5 using various academic toy models. These toy models illustrate that:

• The relevant points of the dispersion diagram are aligned in an oblique straight line passing through the origin (as in Fig. 4.3) for pure TWs (which are stationary in a moving frame; see Fig. 4.1).
• For modulated TWs (which oscillate as they travel; see Fig. 4.10), the relevant points in the dispersion diagram are contained in a family of parallel oblique straight lines (as in Fig. 4.12).
• For standard SWs (whose nodes and crests remain at constant positions; see Fig. 4.7), the relevant points in the dispersion diagram are contained in one or more horizontal straight lines (as in Fig. 4.9).
• Modulated SWs, whose nodes and crests oscillate left and right, can be obtained in various ways. Here, for simplicity, we have obtained these waves as the superposition of counter-propagating pure TWs (see Fig. 4.4), whose dispersion diagram is given in Fig. 4.6. Note that the points in this diagram are aligned in two oblique straight lines resulting from the two counter-propagating waves, but can also be seen as contained in families of horizontal lines, each containing just two points.


More realistic scientific and industrial applications of the STKD method will be considered in subsequent chapters.

4.7 Annexes to Chapter 4

Annex 4.1. Selected practice problems for Chapter 4.

1. Check carefully that Eq. (4.75) can also be written in the form (4.76). Use the latter equation to compute the spatial and spectral complexities of the dynamics of the first toy model and explain the good performance of the algorithm DMD-1.
2. Check carefully that Eq. (4.77) can also be written in the form (4.78). Use the latter equation to compute the spatial and spectral complexities of the dynamics of the second toy model and explain the bad performance of the HODMD algorithm DMD-1.
3. Compute the spatial and spectral complexities of the dynamics of the third toy model (4.79) when all terms are retained in Eq. (4.74) for w.
4. Check all results described in Sect. 4.5 in connection with the application of the HODMD methods to the four toy models considered in this section.
5. Reproduce Figs. 4.1 and 4.12.
6. Construct MATLAB functions to compute the versions of the STKD method described in Sects. 4.3.2 and 4.3.4.

Annex 4.2. MATLAB functions to compute the STKD algorithm.

This annex includes appropriate MATLAB functions to compute the STKD algorithm; in particular, they may be used to calculate traveling waves in a toy model with one spatial dimension. The MATLAB functions provide the retained wavenumbers and temporal frequencies, amplitudes, and spatial and temporal growth rates of the model, and also construct the dispersion diagram ω–κ. The various MATLAB functions must be in the same directory. The functions to compute STKD are the following:

• The main program is in the function 'STKD_main_toyModel.m'. It is necessary to set the snapshot matrix and the parameters for the STKD analysis.
• The function 'CalculateDMDdSdT.m' uses the functions 'DMD1_STKD.m' and 'DMDd_STKD.m' to compute the spatio-temporal expansion of DMD, calculating spatial and temporal frequencies and growth rates. These are versions of the DMD-1 and DMD-d algorithms implemented in Chapter 2, Annex 2.2.
• Function 'Reconst_STKD.m', which reconstructs the original solution as an expansion of spatio-temporal DMD modes.
• Function 'DispersionFreqWaveNum.m', which calculates a matrix that permits constructing the dispersion diagram ω–κ.
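The division of labor inside 'CalculateDMDdSdT.m' — one SVD of the snapshot matrix producing a scaled spatial reduced matrix (ΣUᵀ) and a scaled temporal one (ΣTᵀ), each then fed to DMD — can be sketched in NumPy as follows (illustrative only, for real-valued data; the DMD stage itself is omitted):

```python
import numpy as np

def reduce_snapshots(V, eps1=1e-8):
    """SVD dimension reduction: returns the retained left factor and the scaled
    spatial ('hatx') and temporal ('hatT') reduced snapshot matrices."""
    U, s, Wh = np.linalg.svd(V, full_matrices=False)
    norm = np.linalg.norm(s)
    kk = sum(np.linalg.norm(s[k:]) / norm > eps1 for k in range(len(s)))
    Sig = np.diag(s[:kk])
    Uk = U[:, :kk]
    hatx = Sig @ Uk.T          # reduced snapshots along space
    hatT = Sig @ Wh[:kk, :]    # reduced snapshots along time
    return Uk, hatx, hatT

# A rank-2 spatio-temporal field to exercise the reduction
x = np.linspace(0, 1, 50)
t = np.linspace(0, 1, 80)
V = (np.outer(np.sin(2 * np.pi * x), np.cos(6 * t))
     + 0.3 * np.outer(np.cos(2 * np.pi * x), np.sin(6 * t)))
U, hatx, hatT = reduce_snapshots(V)
```

Spatial DMD would then be applied to the rows of hatx (indexed by grid points) and temporal DMD to the columns of hatT (indexed by snapshots), as in the listings below.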


A. The STKD algorithm: main function

This is the main file to compute the STKD method in a toy model.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STKD_main_toyModel.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%% STKD in a toy model %%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc
clear all
close all
%% LOAD THE SNAPSHOT MATRIX
load V.mat
%% SET GRID POINTS (equispaced) IN X AND TIME
[J,K]=size(V)
DeltaX=10;
Exis=DeltaX*[0:J-1]/(J-1);
DeltaT=0.2;
Time=DeltaT*[0:K-1]/(K-1);
%% APPLY STKD TO V
[J,K]=size(V)
%%%%%%%%%% STKD PARAMETERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% SET PARAMETERS STKD
%% Tolerances
% SVD
varepsilon1=1e-8
% DMD
varepsilon2=1e-7


%% Set index d: dSpace for spatial analysis and
% dTime for temporal analysis
dSpace=10;
dTime=1;
%%%%%%%%%% STKD ALGORITHM %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[Vreconst,Modes,Amplitudes,Amplitudesx,GrowthRatex,...
    Frequencyx,Amplitudest,GrowthRatet,Frequencyt]=...
    CalculateDMDdSdT(dSpace,dTime,Time,Exis,V,varepsilon1,...
    varepsilon2);
%% CALCULATE RRMS ERROR
dif=Vreconst-V;
errorRMS=norm(dif(:))/norm(V(:));
('Reconstruction error: RRMSE')
errorRMS
%% DISPERSION DIAGRAM: frequency vs wavenumber
[mm,nn]=size(Amplitudes);
[Amplitudes,Frequencyt00,Frequencyx00,mm,nn]=...
    DispersionFreqWaveNum(Amplitudes,Frequencyt,Frequencyx);
% Plot dispersion diagram
h7=figure;
axes7=axes('Parent',h7,'FontSize',14,'FontName','Agency FB');
box(axes7,'on');
NumModes=0;
for n=1:nn
    for m=1:mm
        if Amplitudes(n,m)/max(max(Amplitudes))>varepsilon2
            plot(-Frequencyx00(m),Frequencyt00(n),...
                '+','color','k','MarkerSize',8);
            NumModes=NumModes+1;
            hold on
        end
    end
end
ylabel('Frequency')
xlabel('Wavenumber')


NumModes
%% PLOT RECONSTRUCTION
h9=figure;
axes9=axes('Parent',h9,'FontSize',14,'FontName','Agency FB');
box(axes9,'on');
pcolor(Exis,Time,real(Vreconst'))
shading('interp')
colormap(jet)
xlabel('x')
ylabel('t')
%% Plot: frequency/wavenumber vs. amplitude, vs. growth rate
figure1=figure;
axes1=axes('Parent',figure1);
hold(axes1,'on');
box(axes1,'on');
semilogy(Frequencyt,GrowthRatet,'o','linewidth',2,...
    'color','k','MarkerSize',8);
semilogy(Frequencyx,GrowthRatex,'o','linewidth',2,...
    'color','b','MarkerSize',8);
set(axes1,'YMinorTick','on','YScale','log');
xlabel('Frequency/Wavenumber')
ylabel('Growth Rate')
figure1=figure;
axes1=axes('Parent',figure1);
hold(axes1,'on');
box(axes1,'on');
semilogy(Frequencyt,Amplitudest,'+','linewidth',2,...
    'color','k','MarkerSize',8);
semilogy(Frequencyx,Amplitudesx,'+','linewidth',2,...
    'color','b','MarkerSize',8);
set(axes1,'YMinorTick','on','YScale','log');
xlabel('Frequency/Wavenumber')
ylabel('Amplitude')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Spatio-temporal Koopman decomposition Chapter | 4 149

B. Spatio-temporal DMD

This function computes HODMD in space and time using DMD-d or DMD-1, which are implemented in two separate functions.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
CalculateDMDdSdT.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Vreconst,Modes,Amplitudes,Amplitudesx,...
    GrowthRatex,Frequencyx,Amplitudest,GrowthRatet,Frequencyt]...
    =CalculateDMDdSdT(d1,d2,Tiempos,Exis,V,varepsilon1,varepsilon2)
[J,K]=size(V)
%% Dimension reduction: SVD
[U,Sigma,T]=svd(V);
sigmas=diag(Sigma);
n=length(sigmas);
SigmNorm=norm(sigmas,2);
kk=0;
for k=1:n
    if norm(sigmas(k:n),2)/SigmNorm>varepsilon1
        kk=kk+1;
    end
end
disp('Spatial dimension: singular values')
kk
%% Scale with singular values
hatx=Sigma(1:kk,1:kk)*U(:,1:kk)';
hatT=Sigma(1:kk,1:kk)*T(:,1:kk)';
Sigma=Sigma(1:kk,1:kk);
%[N,~]=size(hatT);
%% HODMD: space
if d1>1
    [GrowthRatex,Frequencyx,hatModesx]=...


        DMDd_STKD(d1,hatx,Exis,varepsilon1,varepsilon2);
else
    [GrowthRatex,Frequencyx,hatModesx]=...
        DMD1_STKD(hatx,Exis,varepsilon1,varepsilon2);
end
[NNx,MMx]=size(hatModesx);
Modesx0=hatModesx;
Modesx=zeros(NNx,MMx);
for pp=1:MMx
    Amplitudesx(pp)=norm(Modesx0(:,pp),2)/sqrt(NNx);
    Modesx(:,pp)=Modesx0(:,pp)/Amplitudesx(pp);
end
GrowthRateOmegAmplx=...
    [(1:MMx)',GrowthRatex',Frequencyx',Amplitudesx']
%% HODMD: time
if d2>1
    [GrowthRatet,Frequencyt,hatModest]=...
        DMDd_STKD(d2,hatT,Tiempos,varepsilon1,varepsilon2);
else
    [GrowthRatet,Frequencyt,hatModest]=...
        DMD1_STKD(hatT,Tiempos,varepsilon1,varepsilon2);
end
[NNt,MMt]=size(hatModest);
Modest0=hatModest;
Modest=zeros(NNt,MMt);
for pp=1:MMt
    Amplitudest(pp)=norm(Modest0(:,pp),2)/sqrt(NNt);
    Modest(:,pp)=Modest0(:,pp)/Amplitudest(pp);
end
%% DMD MODES: Q
q=(Modesx*diag(Amplitudesx))'*inv(Sigma)*...
    (Modest*diag(Amplitudest));
Amplitudes=abs(q);
Modes=q./Amplitudes;
[MM,NN]=size(Amplitudes);
ModesNumber0=MM*NN;
ModesNumber=0;
for n=1:NN
    for m=1:MM
        if Amplitudes(m,n)/max(max(Amplitudes))
[...]
>varepsilon1
            kk1=kk1+1;
        end
    end
disp('Spatial dimension reduction')
kk1
U1=U1(:,1:kk1);
hatT1=Sigma1(1:kk1,1:kk1)*T1(:,1:kk1)';
%% Reduced modified snapshot matrix
[~,K1]=size(hatT1);
[tildeU1,tildeSigma,tildeU2]=svd(hatT1(:,1:K1-1),'econ');
%% Reduced modified Koopman matrix
tildeR=hatT1(:,2:K1)*tildeU2*inv(tildeSigma)*tildeU1';
[tildeQ,tildeMM]=eig(tildeR);
eigenvalues=diag(tildeMM);
M=length(eigenvalues);
qq=log(eigenvalues);
GrowthRate=real(qq)/Deltat;
Frequency=imag(qq)/Deltat;
Q=U1*tildeQ;
Q=Q((d-1)*N+1:d*N,:);
[NN,MMM]=size(Q);


for m=1:MMM
    NormQ=Q(:,m);
    Q(:,m)=Q(:,m)/norm(NormQ(:),2);
end
Mm=zeros(NN*K,M);
Bb=zeros(NN*K,1);
aa=eye(MMM);
for k=1:K
    Mm(1+(k-1)*NN:k*NN,:)=Q*aa;
    aa=aa*tildeMM;
    Bb(1+(k-1)*NN:k*NN,1)=hatT(:,k);
end
[Ur,Sigmar,Vr]=svd(Mm,'econ');
a=Vr*(Sigmar\(Ur'*Bb));
u=zeros(NN,M);
for m=1:M
    u(:,m)=a(m)*Q(:,m);
end
hatamplitudes=zeros(M,1);
for m=1:M
    aca=u(:,m);
    hatamplitudes(m)=norm(aca(:),2);
end
UU=[u;GrowthRate';Frequency';hatamplitudes']';
UU1=sortrows(UU,-(NN+3));
UU=UU1';
u=UU(1:NN,:);
GrowthRate=UU(NN+1,:);
Frequency=UU(NN+2,:);
hatamplitudes=UU(NN+3,:);
kk3=0;
for m=1:M
    if hatamplitudes(m)/hatamplitudes(1)>varepsilon2
        kk3=kk3+1;
    end
end


%% Spectral complexity: number of DMD modes
disp('Spectral complexity')
kk3
%amplitudes=UU(NN+3,:);
hatmodos=u(:,1:kk3);
Frequency=Frequency(1:kk3);
GrowthRate=GrowthRate(1:kk3);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

D. DMD-1 for STKD

This function computes the DMD-1 algorithm adapted to the STKD method. It is a version of the DMD-1 function presented in Chapter 2, Annex 2.2.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
DMD1_STKD.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [GrowthRates,Frequency,hatmodos]=...
    DMD1_STKD(hatT,Tiempos,varepsilon1,varepsilon2)
[Jprima,K]=size(hatT);
N=Jprima;
Deltat=Tiempos(2)-Tiempos(1);
%
[hatU1,hatSigma,hatU2]=svd(hatT(:,1:K-1),'econ');
%% Calculate the Koopman operator
hatR=hatT(:,2:K)*hatU2*inv(hatSigma)*hatU1';
[Q,tildeMM]=eig(hatR);
eigenvalues=diag(tildeMM);
M=length(eigenvalues);
qq=log(eigenvalues);
GrowthRates=real(qq)/Deltat;
Frequency=imag(qq)/Deltat;
%% Normalize the eigenvectors
[NN,MMM]=size(Q);
for m=1:MMM
    Qnorm=Q(:,m);


    Q(:,m)=Q(:,m)/norm(Qnorm(:),2);
end
%% Calculate the amplitudes
Mm=zeros(NN*K,M);
Bb=zeros(NN*K,1);
Id=eye(MMM);
for k=1:K
    Mm(1+(k-1)*NN:k*NN,:)=Q*Id;
    Id=Id*tildeMM;
    Bb(1+(k-1)*NN:k*NN,1)=hatT(:,k);
end
[Ur,Sigmar,Vr]=svd(Mm,'econ');
a=Vr*(Sigmar\(Ur'*Bb));
u=zeros(NN,M);
for m=1:M
    u(:,m)=a(m)*Q(:,m);
end
hatamplitudes=zeros(M,1);
for m=1:M
    aca=u(:,m);
    hatamplitudes(m)=norm(aca(:),2);
end
UU=[u;GrowthRates';Frequency';hatamplitudes']';
UU1=sortrows(UU,-(NN+3));
UU=UU1';
u=UU(1:NN,:);
GrowthRates=UU(NN+1,:);
Frequency=UU(NN+2,:);
hatamplitudes=UU(NN+3,:);
kk3=0;
for m=1:M
    if hatamplitudes(m)/hatamplitudes(1)>varepsilon2
        kk3=kk3+1;
    end
end


%% Spectral complexity: number of DMD modes
disp('Spectral complexity')
kk3
%amplitudes=UU(NN+3,:);
hatmodos=u(:,1:kk3);
Frequency=Frequency(1:kk3);
GrowthRates=GrowthRates(1:kk3);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E. Reconstruction

This function computes the reconstruction of the original data as an expansion of spatio-temporal DMD modes. It is a version of the reconstruction function presented in Chapter 2, Annex 2.2.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Reconst_STKD.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Vreconst=Reconst_STKD(Tiempos,Exis,q,deltasx,...
    omegasx,deltast,omegast)
[N1,N2]=size(q);
J=length(Exis);
K=length(Tiempos);
%% Spatial exponentials
vvx=zeros(N1,J);
for jj=1:J
    for m=1:N1
        vvx(m,jj)=exp((deltasx(m)+1i*omegasx(m))*Exis(jj));
    end
end
%% Temporal exponentials
vvt=zeros(N2,K);
for kk=1:K
    for m=1:N2
        vvt(m,kk)=exp((deltast(m)+1i*omegast(m))*Tiempos(kk));
    end
end


%% Reconstruction
Vreconst=vvx'*(q*vvt);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
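The one-line reconstruction above simply evaluates the expansion v(x_j, t_k) = Σ_{m,n} q_{mn} exp((ν_m + iκ_m)x_j) exp((δ_n + iω_n)t_k) as an outer product of spatial and temporal exponentials weighted by the amplitude matrix q. A NumPy sketch of the same evaluation follows; the function and variable names are illustrative, not part of the book's codes:

```python
import numpy as np

def reconst_stkd(times, exis, q, nu, kappa, delta, omega):
    # Spatial exponentials: one row per spatial mode, shape (N1, J)
    vvx = np.exp(np.outer(nu + 1j * kappa, exis))
    # Temporal exponentials: one row per temporal mode, shape (N2, K)
    vvt = np.exp(np.outer(delta + 1j * omega, times))
    # v(x_j, t_k) = sum_{m,n} q[m, n] * exp((nu_m + i kappa_m) x_j)
    #                               * exp((delta_n + i omega_n) t_k)
    return vvx.T @ (q @ vvt)

x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 2.0, 50)
q = np.array([[1.0 + 0j]])                 # a single spatio-temporal mode
V = reconst_stkd(t, x, q, np.zeros(1), np.array([2 * np.pi]),
                 np.zeros(1), np.array([-2 * np.pi]))
# Direct evaluation of the same mode, exp(i*2*pi*(x - t)), on the grid
Vdirect = np.exp(1j * 2 * np.pi * (x[:, None] - t[None, :]))
print(np.allclose(V, Vdirect))  # True
```

With a single mode whose frequency and wavenumber satisfy ω = −κ, the reconstruction is a pure traveling wave, anticipating the dispersion-diagram discussion of Chapter 5.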

F. Dispersion diagram

This function computes the dispersion diagram κ–ω in the STKD algorithm.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
DispersionFreqWaveNum.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Amplitudes,Frequencyt00,Frequencyx00,mm,nn]=...
    DispersionFreqWaveNum(Amplitudes,Frequencyt0,Frequencyx0)
[mm,nn]=size(Amplitudes);
% Organize the amplitudes: time
IndOr=[Amplitudes(1:mm,1:nn)',Frequencyt0(1:nn)'];
IndOr=sortrows(IndOr,-(mm+1));
ampl=IndOr(:,1:mm);
Frequencyt00=IndOr(:,mm+1);
% Organize the amplitudes: space
IndOrX=[ampl',Frequencyx0(1:mm)'];
IndOrX=sortrows(IndOrX,-(nn+1));
Amplitudes=IndOrX(:,1:nn)';
Frequencyx00=IndOrX(:,nn+1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Chapter 5

Application of HODMD and STKD to some pattern forming systems

5.1 Introduction to pattern forming systems

The most obvious application of the HODMD and STKD methods is to pattern forming systems [40,41], which are designed to produce relevant and distinguishable spatio-temporal patterns. In this chapter, we consider two types of pattern forming systems:

• The one-dimensional complex Ginzburg–Landau equation (CGLE) [9], which is a basic amplitude equation that accounts for weakly nonlinear dynamics in many physical systems. This equation applies near the oscillatory instability [40] in physical systems whose spatial size is large compared to the wavelength of the marginally unstable mode. It turns out that, in some cases, Ginzburg–Landau-like equations also apply, at least qualitatively, far beyond the primary instability [136]. The CGLE is invariant under space translations. Depending on the boundary conditions (which may or may not preserve invariance under translations), the CGLE can exhibit TWs or not. In this chapter, we consider both situations.

• Thermal convection systems [91,92], which are fluid systems in which patterns are produced by convection. Convection, in turn, is promoted by the interaction of the gravitational force and buoyancy when these oppose each other. The simplest case is the famous Lorenz system, in which the system of partial differential equations (namely, the continuity, Navier–Stokes, and energy equations) governing the thermal convection problem is drastically reduced to a system of three ordinary differential equations. A much more detailed and complex thermal convection system, which will also be considered, is that accounting for convection in a rotating spherical shell heated from the inside, with a radial gravitational force pointing inwards. This system somewhat mimics convection in the upper layers of the atmospheres of stars and giant planets.
The methods considered in this chapter apply to other pattern forming systems as well, which are not considered here for the sake of brevity. For instance, these methods give very good results [6] in identifying the so-called superhighway patterns (already detected both experimentally [39] and numerically [5]) in binary fluid convection in slightly inclined cylindrical containers.

Higher Order Dynamic Mode Decomposition and Its Applications. https://doi.org/10.1016/B978-0-12-819743-1.00012-4
Copyright © 2021 Elsevier Inc. All rights reserved.

5.2 The one-dimensional CGLE

This equation is given by

$$\frac{\partial v}{\partial t} = (1+i\alpha)\,\frac{\partial^2 v}{\partial x^2} + \mu v - (1+i\beta)\,|v|^2 v, \tag{5.1}$$

where the state variable v is complex; the terms on the right-hand side with real coefficients are associated with diffusion, linear growth, and nonlinear damping, while the terms proportional to α and β account for dispersion and nonlinear detuning, respectively. The real parameter μ is usually seen as a bifurcation parameter, and the complexity of the dynamics generally (but not always) increases as μ increases. The CGLE will be considered in the next two subsections in the interval

$$0 < x < 1, \tag{5.2}$$

with two types of boundary conditions: either Neumann,

$$\frac{\partial v}{\partial x} = 0 \quad \text{at } x = 0, 1, \tag{5.3}$$

or periodic,

$$v(0,t) = v(1,t), \qquad \frac{\partial v(0,t)}{\partial x} = \frac{\partial v(1,t)}{\partial x}. \tag{5.4}$$

In the latter case, Eqs. (5.1) and (5.4) can be extended to the whole x-axis by imposing periodicity of period 1, and the resulting problem is invariant under x-translations, namely under the action

$$x \to x + c \quad \text{for all } c. \tag{5.5}$$

This is interesting and shows that Eqs. (5.1) and (5.4) may exhibit TWs. The problem posed by Eqs. (5.1) and (5.3) could also be extended to the whole line upon reflection on one of the end-points of the interval 0 < x < 1 and periodic extension (with period 2). However, the Neumann boundary conditions at x = 0, 1 break invariance under translations, and TWs are not possible with these boundary conditions.
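As a concrete instance of such TWs, the CGLE (5.1) with periodic boundary conditions admits plane-wave solutions v = a e^{i(kx−ωt)} with a² = μ − k² and ω = αk² + βa². This can be verified symbolically; the sketch below (SymPy, using |v|² = a² for the cubic term) is illustrative and not part of the book's material:

```python
import sympy as sp

x, t, k, mu, alpha, beta = sp.symbols('x t k mu alpha beta', real=True)
a2 = mu - k**2                       # squared amplitude of the plane wave
omega = alpha * k**2 + beta * a2     # dispersion relation
v = sp.sqrt(a2) * sp.exp(sp.I * (k * x - omega * t))

# Residual of the CGLE (5.1); for this wave |v|^2 = a^2 exactly
residual = (sp.diff(v, t) - (1 + sp.I * alpha) * sp.diff(v, x, 2)
            - mu * v + (1 + sp.I * beta) * a2 * v)
print(sp.simplify(residual))  # 0
```

The vanishing residual confirms both the amplitude condition a² = μ − k² (real part) and the nonlinear dispersion relation ω = αk² + βa² (imaginary part).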

5.2.1 Properties of the CGLE

With either Neumann or periodic boundary conditions, the following properties hold:

Application of HODMD and STKD to pattern forming systems Chapter | 5

161

• The CGLE may exhibit quite complex behavior if

$$\alpha\beta < -1 \tag{5.6}$$

due to the modulational instability [9], which occurs at some threshold value of μ. Condition (5.6) is called Newell's condition. It must be noted that complexity does not increase monotonically as μ is increased beyond the modulational instability threshold. Instead, as μ increases, the dynamics may alternate between simple and complex behavior; see, e.g., the bifurcation diagram in [176], Figure 7. For both Neumann and periodic boundary conditions, we shall consider the following values of α and β:

$$\alpha = 10 \quad \text{and} \quad \beta = -10. \tag{5.7}$$

Note that with these values of α and β, the Newell condition (5.6) is strongly satisfied. Also note that these values of α and β exhibit the same absolute values but opposite signs compared to their counterparts considered in [176]. However, the CGLE (5.1) is invariant under the transformation

$$\alpha \to -\alpha, \quad \beta \to -\beta, \quad v \to \bar{v}. \tag{5.8}$$

This means that the bifurcation diagram in [176], Figure 7, also applies to the present case, because what was plotted in this bifurcation diagram was |v| vs. μ and, obviously, the transformation (5.8) leaves |v| unchanged.

• If μ ≤ 0, then v → 0 for all initial conditions. This is seen by multiplying Eq. (5.1) by the complex conjugate of v, adding to the resulting equation its complex conjugate, integrating in 0 < x < 1, integrating by parts, and substituting the Neumann or periodic boundary conditions, namely (5.3) or (5.4), respectively. This gives

$$\frac{d}{dt}\int_0^1 |v|^2\,dx = 2\mu\int_0^1 |v|^2\,dx - 2\int_0^1 \left(\Big|\frac{\partial v}{\partial x}\Big|^2 + |v|^4\right)dx. \tag{5.9}$$

Since μ ≤ 0, this equation readily implies that

$$\int_0^1 |v|^2\,dx \to 0 \quad \text{as } t\to\infty \tag{5.10}$$

for all initial conditions. In principle, condition (5.10) is weaker than the assertion

$$v(x,t)\to 0 \quad \text{uniformly in } 0 < x < 1 \text{ as } t\to\infty. \tag{5.11}$$

However, using a priori estimates for parabolic partial differential equations (see [144] and the references therein), it turns out that condition (5.11) also holds for all initial conditions if μ ≤ 0. The a priori estimates are intricate and well beyond the scope of this book.
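The decay predicted by (5.9)–(5.11) for μ ≤ 0 is easy to observe numerically. The sketch below is an illustrative explicit finite-difference integrator with Neumann boundary conditions (not the solver used in this book; all parameter, grid, and time-step values are assumptions chosen for the demonstration), and it checks that the discrete analogue of the integral of |v|² decreases:

```python
import numpy as np

# Illustrative parameters: mu <= 0, so the L2 norm must decay
alpha, beta, mu = 10.0, -10.0, -0.5
J = 101
x = np.linspace(0.0, 1.0, J)
dx = x[1] - x[0]
dt = 4e-7                                     # small step for explicit stability
v = (0.1 * np.cos(np.pi * x) + 0.05).astype(complex)  # satisfies v_x = 0 at x = 0, 1

def energy(v):
    return np.sum(np.abs(v)**2) * dx          # discrete analogue of int_0^1 |v|^2 dx

E0 = energy(v)
for _ in range(5000):
    vxx = np.empty_like(v)
    vxx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    vxx[0] = 2.0 * (v[1] - v[0]) / dx**2      # Neumann condition at x = 0
    vxx[-1] = 2.0 * (v[-2] - v[-1]) / dx**2   # Neumann condition at x = 1
    v = v + dt * ((1 + 1j * alpha) * vxx + mu * v
                  - (1 + 1j * beta) * np.abs(v)**2 * v)

print(energy(v) < E0)  # True: the L2 norm decreases, consistent with (5.9)
```

All three mechanisms in (5.9) push the energy down here: linear damping (μ < 0), diffusion, and the quartic nonlinear term.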


• The property in the last item shows that the large time dynamics of the CGLE are trivial if μ ≤ 0. In other words, interesting dynamics can only occur if μ > 0. In this case, μ can be set to 1 upon the change of variables

$$\tau = \mu t, \quad \xi = \sqrt{\mu}\,x, \quad \tilde{v} = v/\sqrt{\mu}, \tag{5.12}$$

which permits rewriting the CGLE (5.1) as

$$\frac{\partial \tilde{v}}{\partial \tau} = (1+i\alpha)\,\frac{\partial^2 \tilde{v}}{\partial \xi^2} + \tilde{v} - (1+i\beta)\,|\tilde{v}|^2 \tilde{v}. \tag{5.13}$$

This equation applies in the interval 0 < ξ < √μ.

[...]

where σ > 0 is the Prandtl number, r > 0 is a rescaled Rayleigh number, and b > 0 is an aspect ratio. As a third order ODE system, taking v(t) as in Eq. (2.1), the spatial complexity is equal to three, meaning that standard DMD can only cope with dynamics whose spectral complexity is at most three, namely involving at most three frequencies (i.e., ω = 0 and ±ω1). In other words, standard DMD can only cope with monochromatic dynamics. However, this system is well known to exhibit very complex dynamics, including chaotic behavior and highly non-monochromatic periodic solutions [128]. On the other hand, the Lorenz system does not exhibit quasiperiodic solutions. This is because, if a quasiperiodic solution existed, then the associated invariant torus in phase space would be a closed invariant surface for the flow field defined by (5.37). Invoking (5.38), the right-hand side of (5.37) is such that its divergence satisfies

$$\nabla\cdot\mathbf{f} \equiv \frac{\partial f_1}{\partial v_1} + \frac{\partial f_2}{\partial v_2} + \frac{\partial f_3}{\partial v_3} = -\sigma - 1 - b < 0. \tag{5.39}$$

This means that the system (5.37) shrinks volume in phase space. However, the volume Ω inside the torus cannot be shrunk because the torus is invariant. A more rigorous argument to prove this assertion follows by noting that, because of (5.39), the integral of ∇·f in Ω is such that

$$\int_\Omega \nabla\cdot\mathbf{f}\; dv_1\,dv_2\,dv_3 < 0. \tag{5.40}$$

However, applying the divergence theorem leads to

$$\int_\Omega \nabla\cdot\mathbf{f}\; dv_1\,dv_2\,dv_3 = \int_S \mathbf{f}\cdot\mathbf{n}\; dA = 0, \tag{5.41}$$

where S is the boundary of Ω, namely the torus, and n is the outward unit normal to S. The surface integral appearing in (5.41) is zero because, according to (5.37), f = dv/dt is tangent to the torus (recall that the torus is invariant), and thus f is orthogonal to n. Equations (5.40) and (5.41) are in contradiction, and thus the invariant torus cannot exist. Note that the same argument shows that the Lorenz system does not exhibit closed invariant surfaces in phase space. As an example of a highly non-monochromatic orbit, we consider the following values of the parameters appearing in (5.37)–(5.38):

$$\sigma = 10, \quad r = 400, \quad b = 3. \tag{5.42}$$
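The constant, negative divergence (5.39) can be double-checked numerically. The sketch below assumes the classical form of the Lorenz right-hand side for (5.37) and evaluates the trace of its Jacobian by central finite differences at arbitrary points (illustrative code, not from the book):

```python
import numpy as np

sigma, r, b = 10.0, 400.0, 3.0

def f(v):
    # Classical Lorenz right-hand side (the form assumed here for (5.37))
    v1, v2, v3 = v
    return np.array([sigma * (v2 - v1),
                     r * v1 - v2 - v1 * v3,
                     v1 * v2 - b * v3])

def divergence(v, h=1e-6):
    # Trace of the Jacobian of f, by central finite differences
    div = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        div += (f(v + e)[i] - f(v - e)[i]) / (2.0 * h)
    return div

rng = np.random.default_rng(0)
for point in rng.standard_normal((5, 3)) * 10.0:
    print(np.isclose(divergence(point), -(sigma + 1.0 + b)))  # True at every point
```

The divergence is the same constant, −(σ + 1 + b), everywhere in phase space, which is what makes the volume-contraction argument above work globally.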

The precise representation of the periodic attractor obtained for these values of the parameters requires a large number of harmonics (M ≫ N = 3); see Fig. 5.19. To apply HODMD, we first integrate (5.37) with initial condition

$$(v_1, v_2, v_3) = (0.1, 0.1, 0.1) \quad \text{at } t = 0 \tag{5.43}$$


FIGURE 5.19 The Lorenz system, plotting v1 vs. t for the considered periodic orbit (thin solid), and the reconstructions using the HODMD algorithms DMD-1 (thin dashed) and DMD-200 (thick dashed).

using MATLAB 'ode45' (with relative and absolute tolerances both equal to 10⁻⁸) in the interval 0 ≤ t ≤ 100 to eliminate the transient behavior, where the solution approaches a periodic orbit whose period is 0.346109. Then, we reset t = 0 and integrate the system over a timespan equal to 0.5, which is ∼1.5 times the period of the orbit, taking as initial condition the value of v at t = 100 in the former transient integration. In the latter integration, we collect K = 50 equispaced snapshots in the considered interval, 0 ≤ t ≤ 0.5. The tunable thresholds for dimension reduction and mode truncation in both the DMD-1 and DMD-d HODMD algorithms applied below are

$$\varepsilon_{\text{SVD}} = 10^{-12} \quad \text{and} \quad \varepsilon_{\text{DMD}} = 10^{-3}, \tag{5.44}$$

respectively. Standard DMD, using the DMD-1 algorithm, identifies just three modes, which give a poor monochromatic approximation (plotted with a thin dashed line in Fig. 5.19), with a relative RMS error RRMSE ∼ 0.16. The algorithm DMD-200 (with the index d = 200 chosen after some slight calibration), instead, identifies 17 modes, with damping rates smaller than 10⁻⁴. In addition to the dominant mode, associated with the frequency ω = 0 (mean field), the remaining frequencies are the fundamental frequency ω1 = 18.1538 (which is exact to six significant digits) and 16 positive or negative harmonics of it. The method produces a quite good reconstruction (plotted with a thick dashed line in Fig. 5.19), with a relative RMS error RRMSE ∼ 8.2 · 10⁻⁴, which makes the reconstruction plot-indistinguishable from its exact counterpart. The a–ω diagram produced by the DMD-200 algorithm is plotted in Fig. 5.20, where it can be seen that (i) the relevant points approach two straight lines as |ω| increases, which is consistent with the fact that the dynamics are periodic and smooth, and (ii) the orbit is highly non-monochromatic, since 17 modes are needed to obtain a good reconstruction.
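The gain from delayed snapshots can be illustrated on a toy scalar signal: with spatial dimension one, standard DMD can extract a single eigenvalue only, whereas DMD applied to a delay-embedded (Hankel) snapshot matrix recovers all the frequencies. A minimal Python sketch follows (illustrative only, not the book's DMD-d implementation; all numerical values are assumptions):

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 20.0, dt)
signal = np.cos(1.0 * t) + 0.5 * np.cos(2.3 * t)   # two-frequency scalar signal

d = 10                                             # delay-embedding dimension
H = np.lib.stride_tricks.sliding_window_view(signal, d).T  # Hankel matrix, (d, K-d+1)
X1, X2 = H[:, :-1], H[:, 1:]
# Least-squares linear map advancing the delayed snapshots one time step;
# the rcond cut-off plays the role of the SVD truncation threshold
A = X2 @ np.linalg.pinv(X1, rcond=1e-10)
lam = np.linalg.eigvals(A)
lam = lam[np.abs(lam) > 0.5]                       # keep the 4 physical eigenvalues
freqs = np.unique(np.round(np.abs(np.imag(np.log(lam))) / dt, 6))
print(freqs)  # both frequencies, 1.0 and 2.3, are recovered
```

A plain 1 × 1 DMD on the same scalar series would fit a single real eigenvalue, hence a single (zero) frequency, which mirrors the failure of DMD-1 on the Lorenz observable above.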


FIGURE 5.20 The a–ω diagram obtained using the algorithm DMD-200 for the Lorenz system considered in Fig. 5.19.

These very good results of the DMD-200 algorithm are maintained with the index d in the interval 100 ≤ d ≤ 200, and also if the timespan where the snapshots are computed is enlarged (enlarging K and d accordingly because, as anticipated in Chapter 4, d scales with K) or shifted. The HODMD results are also very good for other values of the parameters σ, r, and b, but those results are not shown here for the sake of brevity.

5.3.2 Thermal convection in a three-dimensional rotating spherical shell

Let us consider the thermal convection of a fluid of almost constant density in a three-dimensional spherical shell rotating with angular velocity Ω, heated from the inside (assuming for simplicity that the inner and outer spheres both remain at constant temperature), and subject to inwards radial gravity. In addition to its own scientific interest, this problem is relevant in geophysical and astrophysical fluid dynamics [32], to analyze the convective transport of mass and energy in the upper atmospheres of giant planets and stars. This transport is associated with periodic and quasiperiodic rotating waves, which will be considered below. For nondimensionalization, the dimensional units for length, time, temperature, and pressure are d*, (d*)²/ν, ν²/(αγ(d*)⁴), and ρ0ν²/(d*)², respectively, where d* = r_o* − r_i* is the shell gap, namely the difference between the (dimensional) outer and inner radii, r_o* and r_i*, respectively, ν is the kinematic viscosity, γ r_i* is the imposed radial gravity at the inner radius, α is the thermal expansion coefficient, all assumed constant, and ρ0 is the density at the reference temperature. Using the Boussinesq approximation to model buoyancy, the governing nondimensional continuity, momentum, and energy equations, in a rotating frame linked to the spherical shell, are

$$\nabla\cdot\mathbf{v} = 0, \tag{5.45}$$

$$\frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} + 2E^{-1}\,\mathbf{k}\times\mathbf{v} = -\nabla\Pi + \nabla^2\mathbf{v} + \Theta\,\mathbf{r}, \tag{5.46}$$

$$\sigma\left(\frac{\partial\Theta}{\partial t} + \mathbf{v}\cdot\nabla\Theta\right) = \nabla^2\Theta + \frac{R\,\eta}{(1-\eta)^2\,r^3}\,\mathbf{r}\cdot\mathbf{v}. \tag{5.47}$$

In these equations, the nondimensional variables v, Π, and Θ are the velocity vector, the pressure, and the perturbation of the temperature from the quiescent, purely conductive state, respectively; r = (x, y, z) is the position vector referred to a Cartesian coordinate system with origin at the center of the spherical shell and the z axis along the axis of rotation; and k is the upwards vertical unit vector along the z axis. Four nondimensional parameters appear in (5.45)–(5.47): the Prandtl number σ = ν/κ (with κ the thermal diffusivity), the Ekman number E = ν/(Ω (d*)²), the Rayleigh number R = γαΔT (d*)⁴/(νκ), where ΔT is the temperature difference between the inner and outer spheres, and the inner-to-outer radii ratio η = r_i*/r_o*. Note that E⁻¹ is proportional to the angular velocity and R is proportional to the temperature difference between the inner and outer spheres. The boundary conditions at the inner and outer spheres, r = r_i ≡ η/(1−η) and r = r_o ≡ 1/(1−η), respectively, with r = |r|, are homogeneous Dirichlet for both the velocity and the temperature, namely

$$\mathbf{v} = \mathbf{0} \quad \text{and} \quad \Theta = 0 \quad \text{at } r = r_i,\, r_o, \tag{5.48}$$

which result from no-slip and from fixing the temperatures at the boundaries to their unperturbed values. For non-zero rotation, E⁻¹ ≠ 0, reflection symmetry in the azimuthal coordinate is broken. However, the problem is still invariant under rotation around the vertical z axis and under up/down reflection on the equatorial plane. Invariance under rotation is essential to obtain rotating waves, most of which are also (instantaneously) invariant under the up/down reflection symmetry. The problem (5.45)–(5.47) can be simplified as follows. The pressure Π and the continuity equation (5.45) are both eliminated by taking the curl of the momentum equation and writing the (solenoidal) velocity field as

$$\mathbf{v} = \nabla\times(\Psi\,\mathbf{r}) + \nabla\times\nabla\times(\Phi\,\mathbf{r}), \tag{5.49}$$

where Ψ and Φ are the nondimensional toroidal and poloidal scalar potentials, respectively, first introduced by Chandrasekhar [31]. Substituting (5.49) into the energy equation and the curl of the momentum equation leads to three scalar equations for the toroidal and poloidal scalar potentials and the nondimensional temperature. These equations are omitted here because they will not be used further in this book, where they will not be integrated. Instead, we shall use data provided to us by Profs. Marta Net and Joan Sanchez, from the Universitat Politècnica de Catalunya, Barcelona, Spain. The numerical solver used by them to simulate the three above-mentioned scalar equations was constructed


[65,66,149] by writing these equations in spherical coordinates (r, θ, φ), where θ and φ are the colatitude and longitude, respectively. The nondimensional temperature and the scalar potentials were expanded in spherical harmonics and spatially discretized using the associated Gauss–Lobatto collocation. See [65,149] and the references therein for further details on the numerical solver. The numbers of collocation points in the numerical integration are 25, 64, and 128 in r, θ, and φ, respectively. By construction, the solution is periodic in the longitudinal coordinate φ, and the collocation points are equispaced in this direction, which makes φ appropriate as the longitudinal coordinate in the STKD method. The convenient STKD algorithm for the present problem is that considered in Chapter 4, Sect. 4.3.3, which can be applied either to the vector state variable that includes the two scalar potentials and the temperature field, or to the temperature field alone, obtaining the spatio-temporal expansions

$$[\Psi,\Phi,\Theta](r_{j_1},\theta_{j_2},\phi,t) = \sum_{m,n=1}^{M,N} a_{mn}\,[\Psi_{mn},\Phi_{mn},\Theta_{mn}]_{j_1 j_2}\,e^{i(k_m\phi+\omega_n t)}, \tag{5.50}$$

$$\Theta(r_{j_1},\theta_{j_2},\phi,t) = \sum_{m,n=1}^{M,N} a_{mn}\,\Theta_{mn\,j_1 j_2}\,e^{i(k_m\phi+\omega_n t)}, \tag{5.51}$$

where, because the data are real, the various terms in these expansions appear in complex conjugate pairs. The discretizations in the radius and colatitude are made with some of the collocation points used by the numerical solver (see below), and the scalar and vector STKD modes appearing in (5.50)–(5.51) are scaled as explained in Chapter 4, Sect. 4.3.3. Thus, because of the different scalings of the modes, the thermal modes in (5.51) do not necessarily coincide with the thermal components of the joint modes in (5.50). However, consistency between both descriptions requires that (i) both approximations give reconstructions with comparable accuracy for the thermal field, and (ii) the dispersion diagrams essentially coincide. These will be checked below in all cases as a part of the robustness tests, which will also require that the results be insensitive to the snapshot sets and the tunable parameters of the STKD method, as happened for the CGLE considered in Sect. 5.2.3. In the remainder of this section, we consider two representative attractors of the system for E = 10⁻⁴, η = 0.35, σ = 0.1, and two values of the Rayleigh number, namely

$$R = 8\cdot 10^5 \quad \text{and} \quad R = 1.1\cdot 10^6. \tag{5.52}$$

The associated databases give the scalar potentials and the temperature using J1 = 25, J2, and I = 128 collocation points in the radial, colatitudinal, and longitudinal directions, respectively, and K discrete values of time, for the considered attractors; see below for the values of J2 and K, which depend on the attractor. For consistency, the I values of the longitudinal coordinate and the K values of time are equispaced. In fact, the two attractors addressed below are instantaneously reflection-symmetric about the equatorial plane, which permits applying the STKD method only to the upper half of the three-dimensional computational domain, θ ≥ 0, where we take J2 = 16 positive values of the colatitudinal coordinate. Concerning the timespan where the STKD method will be applied and the number of snapshots, these will be selected for each particular case. The remaining tunable parameters will be

$$\varepsilon_{\text{HOSVD}} = 10^{-8}, \quad \varepsilon_{\text{DMD}} = 10^{-3}, \quad d_x = 1, \quad d_t = 1 \tag{5.53}$$

in the two cases considered below. Note that d_x = d_t = 1, which is consistent with the fact that the spatial and spectral complexities coincide for both the spatial and temporal expansions in the STKD method. For the value R = 8 · 10⁵ of the Rayleigh number, we consider the timespan

$$0 \le t \le 0.2766, \tag{5.54}$$

using K = 101 equispaced values of t. Applying the STKD method, we obtain the following results. Considering only the temperature distribution, Θ, or the joint (Ψ, Φ, Θ) field, the number of retained spatio-temporal modes is 7 in both cases, and the dispersion diagrams are plot-indistinguishable, as shown in Fig. 5.21. In both cases, the retained frequencies are all (positive or negative) multiples of ω1 = 45.49931 to seven significant digits, and the wavenumbers are all multiples of 8 to eight significant digits, meaning that the wave is periodic in both the longitudinal coordinate and time. The method reconstructs the pattern with a relative RMS error RRMSE ∼ 1.4 · 10⁻⁴, both when considering the temperature field alone and when considering the joint distribution of Ψ, Φ, and Θ. Moreover, the temporal and spatial growth rates are such that |δ| < 10⁻⁵ and |ν| < 10⁻⁸, respectively. As can be seen in Fig. 5.21, the relevant points of the dispersion diagrams are contained in a straight line passing through the origin. As anticipated, this means that the pattern is a pure TW (in fact, a pure rotating wave in the present context). In other words, the pattern is stationary in rotating axes, which is illustrated in the color maps for the two scalar potentials and the temperature plotted in Fig. 5.22, where the pure TW nature is clearly seen. Note that the TW moves to the right in this diagram, which is consistent with the negative slope of the straight line that contains the dispersion diagram in Fig. 5.21; recall that the phase velocity of the TWs and the slopes of the straight lines in the dispersion diagrams exhibit opposite signs. Also note that the pattern is spatially eight-fold, which is consistent with the fact that the wavenumbers are all multiples of 8.
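The geometric criterion invoked here (all relevant (κ, ω) points on a straight line ω = −cκ through the origin imply a pure TW) can be checked directly: such an expansion depends on x and t only through the combination x − ct. An illustrative sketch with hypothetical mode values:

```python
import numpy as np

c = 2.0                               # phase velocity of the wave
kappas = np.array([8.0, 16.0, 24.0])  # wavenumbers (multiples of 8, as in the text)
omegas = -c * kappas                  # all (kappa, omega) pairs on a line of slope -c
amps = np.array([1.0, 0.4, 0.1])

def v(x, t):
    # Superposition of the modes in the spatio-temporal expansion
    return sum(a * np.exp(1j * (k * x + w * t))
               for a, k, w in zip(amps, kappas, omegas))

x = np.linspace(0.0, 2.0 * np.pi, 200)
# The field is the initial profile rigidly advected with speed c: a pure TW
print(np.allclose(v(x, 0.7), v(x - c * 0.7, 0.0)))  # True
```

Because each mode factors as e^{iκ(x − ct)}, the negative slope of the dispersion line corresponds to motion with positive phase velocity, consistent with the sign convention recalled above.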


FIGURE 5.21 The dispersion diagrams for the rotating shell at R = 8 · 10⁵, as resulting from using only the temperature field (blue + symbols) and the joint (Ψ, Φ, Θ) field (black circles). For all points, the + symbols are exactly inside the circles, and the centers of both symbols are plot-indistinguishable.

FIGURE 5.22 Colormaps of Ψ, Φ, and Θ for R = 8 · 10⁵ at r = (r_o + r_i)/2 and θ = 45°.

Let us now apply the STKD method for the value R = 1.1 · 10⁶ of the Rayleigh number, in the timespan

$$0 \le t \le 0.6995, \tag{5.55}$$

using K = 1400 values of t; this value of K is larger than in the former case, due to the larger complexity of the pattern. The STKD method gives the following results in the present case. Considering the temperature distribution alone or the joint (Ψ, Φ, Θ) distribution, the number of retained spatio-temporal modes is 27 in both cases, and the dispersion diagrams are plot-indistinguishable, as shown in Fig. 5.23. In both cases, the retained wavenumbers are all multiples of 2 to nine significant digits, which is consistent


FIGURE 5.23 Counterpart of Fig. 5.21 for R = 1.1 · 10⁶. As in Fig. 5.21, for all points, the + symbols are exactly inside the circles, and the centers of both symbols are plot-indistinguishable.

with the fact that the pattern is periodic in the longitudinal coordinate. Also, the temporal and spatial growth rates are very small, such that |δ| < 10⁻⁶ and |ν| < 10⁻⁸. Moreover, the frequency–wavenumber pairs are of the form (ω, κ) = p(ω1, 8) + q(ω2, 6), where ω1 = 47.589 and ω2 = 239.841, with p and q integers. The method reconstructs the pattern with a relative RMS error RRMSE ∼ 1.6 · 10⁻³, both when the temperature alone is considered and when the joint (Ψ, Φ, Θ) distribution is used. As can be seen in Fig. 5.23, the relevant points of the dispersion diagram are contained in a family of oblique straight lines. As anticipated, this means that the pattern is a modulated TW (in fact, a modulated rotating wave in the present context), which is illustrated in Fig. 5.24. Note that the modulated rotating wave exhibits an overall motion to

FIGURE 5.24 Counterpart of Fig. 5.22 for R = 1.1 · 10⁶.

the left in this diagram, which is consistent with the positive slope of the oblique straight lines that contain the dispersion diagram in Fig. 5.23.

Application of HODMD and STKD to pattern forming systems Chapter | 5


5.4 Some concluding remarks

Four representative pattern forming systems have been considered, namely the CGLE with both Neumann and periodic boundary conditions, and two thermal convection problems, the Lorenz system and thermal convection in a rotating spherical shell. The basic properties of the CGLE were stated and justified in Sect. 5.2.1. Then, the HODMD method was applied in Sect. 5.2.2 to the CGLE with Neumann boundary conditions for representative periodic and quasiperiodic (or nearly-quasiperiodic) attractors; in particular, quasiperiodicity (which, as anticipated, is a subtle matter in finite precision computations) was analyzed using the method developed in Chapter 2. The case of periodic boundary conditions was analyzed in Sect. 5.2.3, where the periodic or quasiperiodic nature of the attractors was analyzed once more using the HODMD method, and various TWs and SWs were identified using the STKD method. The Lorenz system was considered in Sect. 5.3.1, where standard DMD was seen to produce very poor results, while the HODMD algorithm DMD-200 gave extremely good results. Finally, the thermal convection problem in a rotating spherical shell was addressed in Sect. 5.3.2, where two values of the Rayleigh number were considered, one giving pure rotating waves and the other giving modulated rotating waves. In both cases, the STKD method gave fairly good results, allowing for discerning the nature of the patterns. Moreover, the temperature field alone and the joint distribution of the temperature and the toroidal and poloidal potentials gave consistent results. Summarizing, the HODMD and STKD methods are powerful data-driven methods to analyze the underlying dynamics in databases numerically obtained from the considered pattern forming systems.

5.5 Annexes to Chapter 5

Annex 5.1. Selected practice problems for Chapter 5.

1. Read carefully Sect. 5.2.1 and check the various properties stated and justified in this section.
2. Construct the numerical solver for the CGLE described at the end of Sect. 5.2.1, considering both Neumann and periodic boundary conditions.
3. Use the numerical solver constructed in the former practice problem to integrate the CGLE (5.24)–(5.25) with initial condition (5.26). Using this solver, compute the snapshots mentioned in Sect. 5.2.2 for the values of μ considered in this section. Apply the HODMD method to obtain the results and reproduce the various figures appearing in Sect. 5.2.2.
4. Use the numerical solver constructed in practice problem 2 to integrate the CGLE (5.30)–(5.31) with initial condition (5.32). Using this solver, compute the snapshots mentioned in Sect. 5.2.3 for the values of μ considered in


this section. Use the HODMD and STKD methods to obtain the results and reproduce the various figures appearing in Sect. 5.2.3.
5. Prepare the MATLAB solver 'ode45' to integrate the Lorenz system (5.37), with f as defined in Eq. (5.38). Use this solver to obtain the snapshots indicated in Sect. 5.3.1 and apply the HODMD method to obtain the results and reproduce the figures appearing in this section. Compare the outcomes of this solver with their counterparts obtained via the MATLAB function 'lorenz-main-Chap5.m', provided in Annex 5.2.

Annex 5.2. MATLAB functions to treat the Lorenz system.

The main program is the MATLAB function 'lorenz-main-Chap5.m' included below. This function calls the function 'lorenz' to integrate the Lorenz system in time. Before running this function, it is necessary to download the MATLAB function 'lorenz' from the MathWorks File Exchange repository: https://es.mathworks.com/matlabcentral/fileexchange/30066-lorenz-attaractor-plot?focused=5176856&tab=function.

Using this code, the reader should create a function called 'lorenz.m' and include the file in the same directory as 'lorenz-main-Chap5.m'. The codes for computing HODMD and STKD are the same as those given in Chapter 2, Annex 2.2, and Chapter 4, Annex 4.2, respectively. The following file contains the main function to treat the Lorenz system. It is necessary to set the parameters to obtain periodic or chaotic solutions (recall that quasiperiodic solutions are not possible in the Lorenz system).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%               lorenz-main-Chap5.m                    %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%       Lorenz equation: periodic solution            %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function uses the Matlab function 'lorenz'        %%
% to integrate in time the Lorenz system of equations.   %%
% Before running this code, you should download the      %%
% function 'lorenz' from the MathWorks File Exchange     %%
% repository:                                            %%
%                                                        %%
% https://es.mathworks.com/matlabcentral/fileexchange/...%%
% 30066-lorenz-attaractor-plot?focused=5176856&tab=function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


clc
clear all
close all
%% Periodic attractor: 1 freq. Period +- 16.17
rho=350; sigma=10; beta=8/3;
%% Solver parameters
% ode solver precision
eps=1e-6;
% Initial time
t0=0;
% Final time
tf=20000;
% Initial condition
initV=[0 0.1 0];
% Time interval to create the snapshot matrix V
dt=1e-2;
%%
Time = t0:dt:tf;
[x,y,z,t] = lorenz(rho, sigma, beta, initV, Time, eps);
%% Save data
V=[x';y';z'];
save V_Chap5_Lorenz.mat V
save Time.mat Time
%% Plot Lorenz orbit
figure(1)
hold on
box on
plot3(x,y,z,'b')
grid
title('Lorenz attractor');
xlabel('X'); ylabel('Y'); zlabel('Z');

figure(2)
hold on
box on
plot(t,x,'b')
title('Lorenz attractor');
xlabel('t'); ylabel('X');
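For readers working without MATLAB, the script above can be mimicked in Python with a fixed-step fourth-order Runge-Kutta integrator (a sketch, not a substitute for 'ode45': the step size, number of steps, and snapshot stride below are illustrative choices, and the snapshot matrix is kept in memory rather than saved to disk):

```python
def lorenz_rhs(state, rho=350.0, sigma=10.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (parameters as in the MATLAB script)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step (fixed step, unlike ode45)."""
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = lorenz_rhs([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = lorenz_rhs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def snapshots(initV=(0.0, 0.1, 0.0), dt=1e-3, nsteps=20000, stride=10):
    """Integrate and return the 3 x K snapshot matrix V (one state per column)."""
    state, cols = list(initV), []
    for n in range(nsteps):
        state = rk4_step(state, dt)
        if n % stride == 0:
            cols.append(list(state))
    return [[col[i] for col in cols] for i in range(3)]  # rows: x, y, z

V = snapshots()
assert len(V) == 3 and len(V[0]) == 2000
# For these parameter values the attractor is bounded; the orbit must not blow up.
assert all(abs(val) < 1e6 for row in V for val in row)
```

The resulting nested-list matrix V plays the role of the snapshot matrix saved in 'V_Chap5_Lorenz.mat'; any DMD/HODMD implementation can then be applied to its columns.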

Chapter 6

Applications of HODMD and STKD in fluid dynamics

6.1 Introduction to fluid dynamics and global instability analysis

In fluid dynamics, the governing equations describing the motion of viscous Newtonian flows are the continuity and Navier–Stokes equations. For incompressible flows, the nonlinear form of these equations is

∇ · v = 0,  (6.1)
∂v/∂t + (v · ∇)v = −∇p + (1/Re) ∇²v,  (6.2)

where v and p are the nondimensional velocity and pressure, respectively, and Re is the Reynolds number, defined as Re = V L/ν, where V is a characteristic flow velocity, ν the kinematic viscosity of the fluid, and L a characteristic length defined by the geometry of the flow. Equations (6.1)–(6.2) have been nondimensionalized as usual, using L and L/V as units for length and time, respectively. The velocity V and the characteristic length L depend on the kind of flow modeled. For instance, in the flow past a cylinder, V is the free-stream velocity and L is the body diameter if the cylinder is circular, while if the cylinder is square, L is the length of the body sides; in the flow modeling the wake of a wind turbine, V is the incoming streamwise velocity and the characteristic length is given by the diameter of the wind turbine; in channel flows, V is a measure of the streamwise velocity component and L is given by the height of the channel; et cetera.

The bifurcation process describing the evolution from laminar to transitional and turbulent flow is driven by the Reynolds number. The critical values of the Reynolds number are associated with qualitative changes in the flow, such as those (i) starting from steady solutions and producing (unsteady) temporally periodic two-dimensional flows, and (ii) starting from periodic two-dimensional dynamics and producing three-dimensional temporally periodic or quasiperiodic flow patterns. Linear stability analysis [177] is one of the most popular techniques used to determine the main mechanisms triggering the flow unsteadiness or the flow three-dimensionality in the aforementioned bifurcation processes. In particular, when a base steady flow loses stability, this analysis identifies the unsteady mode, growing in time and altering the steady flow pattern, which ends up in flow unsteadiness. When the base flow is already time-periodic, Floquet linear stability analysis [14] is used instead to identify the secondary flow instability producing changes in the flow, such as, e.g., the evolution from two-dimensional periodic to three-dimensional periodic or quasiperiodic patterns.

Higher Order Dynamic Mode Decomposition and Its Applications. https://doi.org/10.1016/B978-0-12-819743-1.00013-6. Copyright © 2021 Elsevier Inc. All rights reserved.

Linear stability theory (LST) mainly studies the evolution of a small perturbation imposed upon a base flow. The classical theory defines the base flow as a steady state [72,177], although in more recent contexts the base flow can also be defined as the mean flow computed from unsteady solutions [152]. LST starts by assuming a Reynolds decomposition of the instantaneous state variable associated with the flow field, q(x, t), as

q(x, t) = q_b(x, t) + q̃(x, t),  (6.3)

where q_b(x, t) and q̃(x, t) represent the base flow and the fluctuations, respectively. For both the base flow and the fluctuations, the state variable q ≡ (v, p) is the state vector collecting the velocity vector v(x, t) and the pressure p(x, t). The Navier–Stokes equations are then linearized around the base flow, resulting in the linearized Navier–Stokes equations (LNSE), defined as

∇ · ṽ = 0,
∂ṽ/∂t + (v_b · ∇)ṽ + (ṽ · ∇)v_b = −∇p̃ + (1/Re) ∇²ṽ.  (6.4)

These equations can be written as the following initial-value problem:

B dq̃/dt = A(Re, q_b) q̃,  (6.5)

where A and B are linear operators, with A depending on the Reynolds number and the base flow (and its spatial derivatives). These equations are usually discretized on the same spatial mesh that has been used to compute the base flow. The solutions of this linearized problem contain the relevant information regarding the main mechanisms triggering the flow bifurcations, known as flow instabilities. For a steady base flow, the separability between the temporal and spatial coordinates in (6.5) allows for introducing a Fourier decomposition in time, q̃ = q̂ e^{λt}, leading to the following generalized matrix eigenvalue problem (EVP):

λ B q̂ = A q̂,  (6.6)

where the matrices A and B contain the discrete form of the operators A and B, respectively, and collect the information regarding the boundary conditions of the problem. The eigenvalues λ, defined as λ = δ + iω, describe the frequency, ω, and the growth rate, δ, of the leading modes driving the flow, and q̂ = [v̂, p̂]ᵀ are the eigenvectors, defining the shape of the leading modes driving the flow motion. Those modes with positive growth rate are identified as flow instabilities. The previous expression can be rewritten in the following form:

dq̂/dt = C q̂,  (6.7)

where C = B⁻¹ A (by the nature of the Navier–Stokes operator, B is nonsingular). The explicit solution of this system is

q̂ = q̂₀ e^{Ct},  (6.8)

where the term e^{Ct} is known as the propagator operator. In the secondary flow bifurcation, the base flow q_b may be not steady, but time-periodic instead. Floquet multipliers, defined such that q̂(x, T) = μ q̂(x, 0), where T is the period of the base flow, reflect the growth rate, δ = log |μ|/T, of the secondary mode that is responsible for triggering the secondary flow. This secondary flow instability is represented by those modes with positive growth rate, which correspond to Floquet multipliers such that |μ| > 1. Identifying the leading modes triggering the main flow instabilities is a research topic of high interest in both industry and academia [177]. This knowledge can help to develop artificial devices to delay (or promote) the onset of such instabilities, which in some cases produce undesirable effects. In other words, if the main mechanism involved in the transition from laminar to turbulent flow is identified, it is possible to delay/promote the appearance of turbulent flow (in some cases, turbulence causes undesirable effects, such as fatigue loads or a drag increase that raises fuel consumption, although in other cases turbulent flow produces positive effects, such as the enhancement of mixing or heat transfer). This chapter illustrates the capabilities of the HODMD and STKD algorithms to identify the leading modes driving the flow, which are connected to LST. The main advantage of using these methods lies in the possibility of analyzing complex flows, giving information about the nonlinear effects, which provides a more complete description of the flow than LST (DMD modes are prone to identify LST modes and their nonlinear interactions).
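To make the objects in (6.6)–(6.8) concrete, the following Python sketch solves a made-up 2 × 2 generalized EVP with the quadratic formula and evaluates the Floquet growth rate δ = log |μ|/T for a made-up multiplier (both the matrices and the numbers are purely illustrative, not from any flow):

```python
import cmath
import math

# Made-up 2x2 generalized EVP A q = lambda B q; here B is the identity.
A = [[0.1, -2.0],
     [2.0,  0.1]]
B = [[1.0, 0.0],
     [0.0, 1.0]]

# det(A - lambda B) = 0 is the quadratic a*lam**2 + b*lam + c = 0, with:
a = B[0][0] * B[1][1] - B[0][1] * B[1][0]
b = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
      - A[0][1] * B[1][0] - A[1][0] * B[0][1])
c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(b * b - 4 * a * c)
eigs = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# lambda = delta + i*omega: here the pair 0.1 +/- 2i is an unstable
# oscillatory mode (growth rate delta = 0.1 > 0, frequency omega = 2).
assert any(abs(lam - (0.1 + 2j)) < 1e-9 for lam in eigs)
assert any(abs(lam - (0.1 - 2j)) < 1e-9 for lam in eigs)

# Floquet version: a multiplier mu over one period T gives the growth rate
# delta = log|mu| / T, so |mu| > 1 marks a secondary instability.
mu, T = 1.2 * cmath.exp(0.7j), 5.0
delta = math.log(abs(mu)) / T
assert delta > 0
```

In a real stability computation A and B come from discretizing the LNSE, so they are large and sparse and the EVP is solved with iterative eigensolvers rather than the quadratic formula; the interpretation of λ and μ is unchanged.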
The first part of this chapter presents an application identifying the main flow structures in the three-dimensional wake of a circular cylinder, while the second part identifies the main flow structures in a complex flow in the laminar-transitional regime, modeling a zero-net-mass-flux jet. These two analyses are challenging in the following sense. In the first case, the data analyzed are obtained numerically, but they are three-dimensional and correspond to a transient of a numerical simulation, in which a large number of transient modes coexist with the physical modes associated with the final large-time attractor. In the second case, the data analyzed are experimental, and we will show the ability of HODMD to clean the noise, identifying in the signal some physical structures that are completely hidden by experimental noise in the given data. Finally, we present a simple exercise for the reader to apply the knowledge acquired along this chapter. This exercise consists in a simple application of HODMD to identify flow structures in a three-dimensional cylinder wake.

6.2 The two- and three-dimensional cylinder wake

Let us consider the temporal evolution of the flow describing the spanwise periodic wake behind a circular cylinder, which is governed by the continuity and Navier–Stokes equations (6.1)–(6.2), previously defined. For low Reynolds number (defined for the present case as Re = V D/ν, where V is the incoming free-stream velocity, assumed to be purely streamwise, D is the cylinder diameter, and ν is the kinematic viscosity), the flow remains steady. The first flow bifurcation turns up at Re ≈ 46 and is a Hopf bifurcation [80,136] that produces a two-dimensional but unsteady, periodic von Karman vortex street. The flow remains orbitally stable up to Re ≈ 189, where it suffers a secondary bifurcation and becomes three-dimensional for some specific values of the spanwise wavenumber β (or the spanwise period Lz = 2π/β) [14]. Figure 6.1 shows a sketch of the various patterns appearing in the cylinder wake as the Reynolds number is increased, from steady two-dimensional to unsteady three-dimensional flow.

FIGURE 6.1 Flow bifurcations in the wake of a circular cylinder. From left to right: steady flow (Re < 46), two-dimensional periodic flow (46 < Re < 189), and three-dimensional flow (Re > 189).

Linear and Floquet stability analyses of the steady and two-dimensional periodic solutions, respectively, are the techniques most widely used in the literature to identify and study the evolution of the primary and secondary flow bifurcations in the cylinder wake. A single mode with positive growth rate leads the process from steady to unsteady, periodic, two-dimensional flow, while the three-dimensional effects in the cylinder wake are related to a mode with Floquet multiplier such that |μ| ≥ 1. In the three-dimensional cylinder wake, the value of the nondimensional spanwise length, Lz, determines the type of modes developed beyond this secondary instability, namely the synchronous periodic modes and the asynchronous quasiperiodic modes. The synchronous modes, known as modes A


and B (with different spatio-temporal symmetries), are standing wave (SW) modes and oscillate with a frequency similar to that of the two-dimensional periodic flow [14,21]. Mode A emerges at Re ≈ 189, while mode B appears at Re ≈ 259. Finally, the asynchronous quasiperiodic (QP) modes set in at Re ≈ 380, when a complex-conjugate pair of Floquet exponents defining their common frequency emerges with a positive growth rate [21].

6.3 Flow structures in the three-dimensional cylinder wake

In this section, we apply an appropriate combination of the HODMD and STKD methods to elucidate the spatio-temporal structures driving the three-dimensional wake of the flow past a circular cylinder. The flow is assumed to be spatially periodic in the spanwise direction. The data are obtained using numerical simulations carried out with the open source code Nek5000 [60], which uses high order spectral elements for the spatial discretization. The dimensions of the computational domain and the boundary conditions are similar to their counterparts in the literature [14]. Recalling that, with the nondimensionalization used above for the continuity and Navier–Stokes equations (6.1)–(6.2), the diameter of the cylinder is one and the free-stream velocity is V = 1, the dimensions of the computational domain are Lx = 50 and Lx = 15 in the streamwise direction, downstream and upstream of the cylinder, respectively, and Ly = ±15 in the normal direction. The boundary conditions impose the streamwise velocity V = 1 and Neumann conditions for the pressure at the inlet, upper, and lower boundaries of the domain, and Neumann conditions for the velocity and Dirichlet conditions for the pressure at the outlet boundary. Finally, the flow is required to be spatially periodic, with period Lz, in the spanwise direction. This is done by taking Lz as the dimension of the computational domain in this direction, where the equations are discretized using Fourier collocation on 64 equispaced planes. A grid independence study has been carried out in order to guarantee the accuracy of the results presented. Figure 6.2 shows the mesh used for the numerical simulations. To ensure that the flow is three-dimensional, the Reynolds number is set to Re = 280.
A Floquet linear stability analysis, using as base flow the two-dimensional periodic flow at this value of the Reynolds number (as usually done in the literature), is first carried out to validate the results against those in the literature [21]. The temporal nondimensional frequency of the base flow is St ≈ 0.21, where St is the Strouhal number, defined as St = f D/V, with f, D, and V being the dimensional oscillation frequency, cylinder diameter, and free-stream velocity, respectively. As seen in Fig. 6.3, this linear analysis identifies the two most unstable modes (with Floquet multipliers such that |μ| ≥ 1), corresponding to modes A and B, which appear as the peaks of |μ| in Fig. 6.3. These local maxima of |μ| are attained at β ≈ 1.8 and β ≈ 7.6, respectively. In addition, a branch of stable modes is also identified in the interval 2 ≤ β ≤ 6, which includes two distinguished quasiperiodic modes,


FIGURE 6.2 Plane x–y extracted at z = 0 for the computational domain of a three-dimensional circular cylinder. The mesh is homogeneous along the spanwise direction.

FIGURE 6.3 Floquet stability analysis in the circular cylinder at Re = 280, plotting the absolute value of the most unstable Floquet multiplier as a function of β (spanwise wavenumber).

namely a quasiperiodic SW and a quasiperiodic TW, denoted as QP-SW and QP-TW, respectively. Figure 6.4 shows the three-dimensional reconstruction of modes A and B. As seen, in a computational domain with the same Lz , the spanwise wavelength of mode B is smaller (larger wavenumber) than that of mode A. Let us now appropriately combine the HODMD and STKD methods to identify the modes A and B in the cylinder wake considered above, for Re= 280. In contrast to the previous Floquet analysis, the data analyzed are fully nonlinear (obtained from the continuity and Navier–Stokes equations (6.1)–(6.2), integrated with the aforementioned Nek5000 solver), which shows that the analysis


FIGURE 6.4 Contours of streamwise velocity component for modes A (left) and B (right) obtained at Re= 280 in a computational domain with Lz = 6.99.

below can be extended to analyze global flow instabilities in any type of data, including experimental measurements. The three-dimensional computational domain is defined with a spanwise dimension Lz = 6.99. Thus, the minimum wavenumber fitting in the domain is β = 2π/Lz ≈ 0.9, which according to Fig. 6.3 means that modes A and B, predicted by the linear theory, with maximum Floquet multipliers at the wavenumbers 1.8 and 7.6, respectively, will fit in this computational domain. In other words, the spanwise dimension is chosen to let modes A and B develop in the spatial domain. Regarding the predictions of the linear theory, the second flow bifurcation emerges with a Floquet multiplier larger than, but close to, one for β ≈ 0.8. Since the smallest wavenumber fitting in this computational domain is ∼ 0.9, we expect this solution to be close to that emerging from the secondary flow bifurcation, and consequently the convergence to the saturated state is expected to be slow. According to the aforementioned description of the bifurcation process, the selected value of the Reynolds number, Re = 280, is larger than the critical values for the destabilization of modes A and B, but smaller than the critical value of Re where the quasiperiodic (QP) modes come into play. Below, we shall consider a transient of the numerical simulation in which the QP modes have not completely disappeared. Thus, in the present case, we expect to identify (i) the modes A and B predicted by the linear theory, growing in time, as the modes driving the flow, and (ii) the two QP modes, decaying in time. The temporal evolution of the streamwise velocity component in both the streamwise and spanwise spatial directions identifies the transient and the saturated states of the numerical simulation. As seen in the spatio-temporal diagrams in Fig. 6.5, the flow oscillates from the time value t ∼ 80 on, with oscillations of the streamwise velocity component of order ∼ 0.01.
Starting at this instant, 120 temporally equispaced snapshots, with a time distance Δt = 1 between them, are collected. Now, the HODMD method is applied to this set of snapshots, considering in each snapshot 64 equispaced points along the spanwise direction, with Δz = 0.10938. The dimension-reduction and mode-truncation tolerances are εSVD = εDMD = 0.01 (the order of magnitude of the streamwise velocity fluctuations), and the index d for the DMD-d algorithm is taken in the range 5 ≤ d ≤ 25, obtaining quite similar results. The a–ω diagram obtained from this analysis is given in Fig. 6.6. As can be seen in this figure, in addition to the mean flow


FIGURE 6.5 Top: spatio-temporal diagram of streamwise velocity component vs. the streamwise coordinate x at the straight line (y, z) = (2, 0.5) in the three-dimensional cylinder wake. Bottom: zoom-in views of the top diagrams.

(namely, the mode with ω ≈ 0), the method retains four temporal modes, with frequencies ω1 ≈ 1.28, ω2 ≈ 1.26, ω3 ≈ 2.59, and ω4 ≈ 2.35. It is to be noted that the frequencies ω1 and ω2 are fairly close to each other. They are also close to the frequency of the two-dimensional periodic base flow, which is ω ≈ 1.27. Thus, the modes associated with these two frequencies, ω1 and ω2, include both the effect of the two-dimensional periodic base flow and the joint effect of modes A and B, which cannot be distinguished from each other in this analysis because of the nearby values of ω1 and ω2. The modes with frequencies ω3 and ω4, instead, are associated with the QP-SW and QP-TW patterns, respectively. Note that the modes identified as QP could also be considered as the second harmonic of modes A and B, since ω3 ∼ ω4 ∼ 2ω1. Nevertheless, we insist that these are QP modes because their growth rates are negative. Figure 6.7 shows the dominant mode (associated with ω2) resulting from the HODMD analysis. Note that, since ω1 and ω2 are close to each other, the associated modes are very similar too. Thus, the counterpart of Fig. 6.7 for the mode associated with ω1 is very similar to Fig. 6.7 itself. As seen in the figure, this mode is related to the von Karman vortex street of the two-dimensional flow (for the streamwise and normal velocity components),


FIGURE 6.6 The a–ω diagram obtained upon application of the HODMD method to the three-dimensional cylinder wake in the aforementioned transient timespan. Circles represent modes 1 and 2, the square is mode 3, and the cross is mode 4. The modes with ω = 0 represent the mean flow.

which is identified by the linear theory as the first flow bifurcation. However, this mode is more complex than the von Karman street, due to the fact that, as already mentioned, it is expected to also include the effect of modes A and B.
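The way DMD-type methods extract such frequencies and growth rates can be illustrated in the simplest possible setting: a scalar signal containing a single mode, for which the DMD operator reduces to a least-squares one-step multiplier. The Python sketch below uses synthetic data with made-up values of δ and ω (the full DMD-d algorithm of Chapter 2 is needed for multi-mode, vector-valued snapshots):

```python
import cmath

# Synthetic scalar snapshots v_k = exp((delta + i*omega) * t_k): one single mode.
delta, omega, dt = -0.02, 1.27, 0.1
v = [cmath.exp((delta + 1j * omega) * k * dt) for k in range(200)]

# Least-squares fit of the one-step linear map v_{k+1} ~ mu * v_k
# (the scalar analogue of the DMD companion/Koopman matrix).
num = sum(v[k + 1] * v[k].conjugate() for k in range(len(v) - 1))
den = sum(abs(v[k]) ** 2 for k in range(len(v) - 1))
mu = num / den

# Recover the continuous-time growth rate and frequency from mu = e^{(delta+i*omega)*dt}.
delta_est = cmath.log(mu).real / dt
omega_est = cmath.log(mu).imag / dt
assert abs(delta_est - delta) < 1e-10
assert abs(omega_est - omega) < 1e-10
```

In the cylinder wake, the same recipe applied to high-dimensional snapshots returns several (μ, mode) pairs at once, yielding the frequencies ω1, ..., ω4 and the signs of their growth rates discussed above.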

FIGURE 6.7 Dominant temporal HODMD mode (which includes the joint distribution of modes A and B) in the x − y plane extracted at z = Lz /2. From left to right: the streamwise, normal, and spanwise velocity components, respectively, plotting the real (top plots) and imaginary (bottom plots) parts of the modes.

As anticipated, the analysis above does not permit distinguishing between modes A and B, which are mixed in Fig. 6.7. These two modes are identified as follows. To begin with, we retain in the temporal expansion obtained via HODMD only those terms with frequencies ±ω1 and ±ω2, and set ω1 = ω2 = ω = 1.27 in the resulting reconstruction. Then we treat this reconstruction via the STKD method, with the same tunable parameters used in the HODMD analysis above and d^x = 5 as the index for the spatial DMD-d method along the spanwise direction. Obviously, this application of the STKD method only recognizes the frequency ω = 1.27, but it recognizes three spanwise wavenumbers, namely β1 = 1.8 (the dominant wavenumber, associated with mode A), β2 = 3.66 (the first harmonic of mode A), and β3 = 7.03 (mode B). As identified in the literature [21], the spatial structures of modes A and B are different. This is confirmed by the reconstructions of mode A (with ω = 1.27 and β1 = 1.8) and mode B (with ω = 1.27 and β3 = 7.03), presented in Figs. 6.8 and 6.9, respectively.
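The spanwise part of this mode separation can be mimicked on synthetic data: in a spanwise-periodic domain, even a plain discrete Fourier transform over the 64 equispaced planes separates wavenumber content (a Python sketch; the amplitudes below are made up, and the book uses spatial DMD-d rather than a DFT):

```python
import cmath
import math

Lz, N = 6.99, 64
beta0 = 2 * math.pi / Lz
z = [n * Lz / N for n in range(N)]

# Synthetic spanwise profile: mode-A-like content at 2*beta0 (~1.8) plus a
# weaker mode-B-like component at 8*beta0 (~7.19); amplitudes are made up.
v = [math.cos(2 * beta0 * zi) + 0.3 * math.cos(8 * beta0 * zi) for zi in z]

# Plain DFT over the equispaced spanwise planes.
def dft_coeff(m):
    return sum(v[n] * cmath.exp(-2j * math.pi * m * n / N) for n in range(N)) / N

amps = [abs(dft_coeff(m)) for m in range(N // 2)]
dominant = max(range(1, N // 2), key=lambda m: amps[m])
assert dominant == 2     # the dominant wavenumber is 2*beta0 ~ 1.8
assert amps[8] > 0.1     # the weaker component at 8*beta0 is also detected
```

The spatial DMD-d used in the STKD method plays the same role but does not require the wavenumbers to sit exactly on the DFT grid, which is why it can return values such as β3 = 7.03.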

FIGURE 6.8 Counterpart of Fig. 6.7 for the real parts of the velocity components associated with mode A.

As can be seen in Fig. 6.8, mode A is invariant under the transformation y → −y, vx → −vx, and vz → −vz. This spatial symmetry implies that the associated SW exhibits the spatio-temporal symmetry in which the spanwise vorticity changes sign under the action t → t + T/2, where T is the oscillation period, T = 2π/ω. This is the same spatio-temporal symmetry that has been identified in the literature for mode A [21]. Mode B (plotted in Fig. 6.9), instead, does not show any evident symmetry.

FIGURE 6.9 Counterpart of Fig. 6.8 for mode B.


Note that, by definition, linear stability theory only retains a single mode, mode A or mode B, depending on the value of the spanwise length. Nevertheless, in the analysis of nonlinear data, we retain these two modes simultaneously, plus their higher order harmonics (according to the tolerances imposed in the HODMD/STKD analyses). In particular, for the tolerances used above, the method retains the first harmonic of mode A, which should also preserve the symmetries of mode A. As already mentioned, mode B could also be seen as the third harmonic of mode A. However, the difference between the spatio-temporal symmetries identified in these two modes justifies their different nature.

6.4 The zero-net-mass-flux jet

The second application considered in the present chapter illustrates the very good properties of HODMD to (i) filter experimental errors to levels even below the uncertainty of the experimental data, providing a deeper insight into small flow patterns, and (ii) take advantage of temporal interpolation, which permits detecting fast events that cannot be found in the initial collection of analyzed data. These two properties make HODMD a suitable tool to predict temporal events from a reduced number of data. The results presented in this section justify the convenience of using the HODMD method as a data-driven, equation-free ROM, as presented later in this book, in Chapter 8. The zero-net-mass-flux (ZNMF) jet [27] is a pulsating (incompressible) fluid jet formed by the periodic interaction of vortex rings that separate from the exit of a jet nozzle. This type of flow is produced by the periodic, monochromatic oscillation of a piston or a membrane inside a cavity. Because of this oscillation, the flow periodically leaves the cavity through an orifice and goes back into the cavity through the same orifice. The resulting effect is that the net mass flux in the jet is zero but, for large Reynolds number, a nonzero net momentum flux is generated from the orifice into the working fluid. Because of these properties, this type of jet is very attractive for several industrial applications, such as heat transfer [131] and mixing [183] enhancement, jet vectoring [164], flow control [28,69], et cetera. ZNMF jets are also relevant in nature, since they represent the swimming propulsion of some marine animals, such as octopuses, squids, and jellyfish [44]. The topological picture that emerges from ZNMF jets is mainly described by two qualitatively different flow patterns, corresponding to the injection phase (the flow leaves the cavity) and the suction phase (the flow re-enters the cavity) of the jet, as presented in Fig. 6.10.
The injection pattern is a vortex ring, formed progressively during the injection phase, which transports momentum. This vortex ring emerges from the contact of the flow with the edges of the jet nozzle, travels downstream, and finally breaks down. The suction pattern is characterized by a saddle point, identified in the suction phase as the point separating the flow that continues traveling downstream from the flow that re-enters the cavity. The periodic forcing


FIGURE 6.10 Pseudo-streamlines and streamwise velocity in two representative snapshots of the injection (top) and suction (bottom) phases in a ZNMF jet.

is temporally reflection symmetric and the resulting flow would be temporally symmetric too at low Reynolds number. However, at high Reynolds number, the flow is highly non-reflection symmetric, which explains the nonzero momentum flux property of ZNMF jets. Also, at low Reynolds number, due to the circular orifice, the flow is considered rotationally symmetric near the jet, although, at higher Reynolds number (transitional or turbulent flow), flow visualization experiments [27] show a symmetry breaking process produced by flow instabilities, which is still unknown. The high complexity of this flow, composed of a large number of spatio-temporal scales, makes that understanding the origin of the main flow bifurcations that lead from laminar to turbulent flow, remains as an open topic. Thus, studying the flow physics in the near field of a ZNMF jet by means of analyzing a set of experimental data, is a challenging problem and consequently a very good example to test the performance of HODMD. Particle image velocimetry (PIV) experiments have been carried out (in the experimental facility for ZNMF jets of the Laboratory for Turbulence Research and Combustion in Monash University [27,102,166]) with the aim at analyzing the near field of this type of flow. The (essentially random) error of the experimental measurements is 2.4% at the 95% confidence level. The flow analyzed lies in the laminar-transitional regime (fully turbulent in the far field). The two nondimensional parameters characterizing the flow conditions are the Reynolds and Strouhal numbers. The Reynolds number is defined as Re= U D/ν, where D is the jet orifice diameter, ν the kinematic viscosity, and U is a characteristic velocity scale, based on the momentum velocity (see [102] for more details), defined as U=

Dp² V̂p / (√2 D²),    (6.9)

Applications of HODMD and STKD in fluid dynamics Chapter | 6

201

where Dp and V̂p are the piston diameter (Dp = 5D) and the peak oscillation velocity of the piston, respectively. The Strouhal number is defined as St= f D/U, where f is the piston oscillation frequency. The flow analyzed is laminar-transitional (see [27]) for Re= 13329 and St= 0.03, which are the values of these two nondimensional parameters for the experimental data. The orifice of the jet nozzle is circular and the resulting near-field flow is approximately axisymmetric, at least at large scale. The experimental data are formed by 1872 snapshots, representing 12 piston oscillation cycles with frequency St= 0.03, giving the streamwise and radial velocity components, u(xi, rj, tk) and v(xi, rj, tk). The variables x and r are the streamwise and radial coordinates, respectively, with origin at the center of the orifice, and t is the time variable, with origin at the beginning of the suction phase. The time variable is scaled with the oscillation frequency such that the Strouhal number is St= 0.03, meaning that the piston oscillation period is T = 2π/0.03. The domain analyzed is a rectangle of dimensions 0 ≤ x/D ≤ 2.5 in the streamwise direction (with x/D = 0 being the exit of the jet orifice) and −1.2 ≤ r/D ≤ 1.2 in the radial direction, obtained in a window containing 52 × 68 grid points (the spatial dimension is J = 52 × 68). The data are organized in a snapshot tensor, meaning that HOSVD must be used instead of standard SVD in the first step of the HODMD method; see Section 2.2.3 in Chapter 2. Also, HODMD is applied using the iterative multi-dimensional HODMD algorithm described in Chapter 2, Sect. 2.2.4. The MATLAB code that combines HODMD with HOSVD iteratively is presented in Annex 6.2 at the end of the present chapter. A HODMD analysis via the DMD-d algorithm has been carried out considering two test cases (namely, two sets of values of the tunable parameters).
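To fix ideas on these nondimensional groups, the short Python sketch below evaluates U, Re, and St. All dimensional values (D, V̂p, f, ν) are hypothetical placeholders, not the actual settings of the experiment, and the velocity scale assumes the momentum-based definition U = Dp²V̂p/(√2 D²) with Dp = 5D:

```python
import math

# Hypothetical dimensional values, for illustration only
# (these are NOT the actual settings of the PIV experiment).
D = 0.01        # jet orifice diameter [m]
Dp = 5.0 * D    # piston diameter, Dp = 5D as in the text
Vp_hat = 0.15   # peak piston oscillation velocity [m/s]
f = 0.5         # piston oscillation frequency [Hz]
nu = 1.0e-6     # kinematic viscosity (water) [m^2/s]

# Momentum-based characteristic velocity scale, cf. Eq. (6.9)
U = Dp**2 * Vp_hat / (math.sqrt(2) * D**2)

Re = U * D / nu   # Reynolds number
St = f * D / U    # Strouhal number

print(f"U = {U:.3f} m/s, Re = {Re:.0f}, St = {St:.5f}")
```

With Dp = 5D the area ratio alone amplifies the piston velocity by a factor of 25, which is why moderate piston speeds already yield Reynolds numbers of order 10⁴.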
In the first test case, we set d = 700 and d = 1 and perform the dimension reduction and mode truncation steps with tolerances εSVD = εDMD = 2.4 · 10−2, which are comparable to the random experimental error level, while in the second test case we set d = 700 and εSVD = εDMD = 10−2, which is below the level of the experimental error. These two test cases will help elucidate the good properties of HODMD for uncovering physical phenomena hidden by the experimental noise. Figure 6.11 shows the a–ω diagrams obtained in the analyses. As can be seen, DMD-1 captures 11 modes, all of them spurious. On the contrary, DMD-700 captures 19 modes for the same tolerance. It is remarkable that the calculated spatial complexity (number of SVD modes) is 5, which justifies the failure of DMD-1. Using the lower tolerance, DMD-700 retains 49 modes (and 11 SVD modes). The outcome of the DMD-700 algorithm shows that the flow is periodic in the near field, with a dominant frequency St= 0.03 (the piston oscillation frequency), and retains 9 and 24 harmonics for the first and second test cases, respectively. Since the data analyzed are real, the modes appear in complex-conjugate pairs and the mean flow corresponds to St= 0. Thus, the total number


FIGURE 6.11 Amplitude vs. St obtained using DMD-1 (red triangles) and DMD-700 (blue crosses) for low accuracy, and DMD-700 (black squares) for high accuracy.

of modes retained by HODMD is 19 and 49 for low and high accuracy, respectively. The original solution is reconstructed using the DMD expansion (2.103) in Chapter 2 for d= 700 and the two groups of tolerances; in other words, the data are reconstructed using 9 and 24 harmonics. As presented in Fig. 6.12, which shows the instantaneous reconstructed pseudo-streamlines (namely, lines that are tangent to the instantaneous velocity field) in the injection phase for a representative snapshot, the HODMD method removes spurious artifacts produced by experimental noise. Moreover, in the reconstruction using 24 harmonics (high accuracy, lower tolerance), it is possible to identify, close to the jet nozzle, a couple of small vortices that cannot even be guessed in the experimental data. In principle, these vortices could simply be related to noise coming from the experimental measurements, since in this analysis the selected tolerance is below the experimental uncertainty. Nevertheless, these small vortices are smooth, suggesting that they could be truly related to low amplitude modes describing the actual, physical flow dynamics. In order to elucidate the physical relevance of the small vortices, we perform numerical simulations using the open source solver Nek5000 [60], which is based on a high-order spectral element spatial discretization (highly accurate and efficient). To reduce the computational cost of the simulation, the Reynolds number is set to Re= 1000, which is one order of magnitude smaller than in the experimental measurements. Figure 6.13 shows the presence of the small vortices also in the numerical results, suggesting that these vortices are indeed a mechanism driving the flow motion. In addition, the application of the HODMD


FIGURE 6.12 Pseudo-streamlines and streamwise velocity contours for a representative snapshot at the injection phase. Reconstruction of the flow field using DMD-700 and εSVD = εDMD = 2.4 · 10−2 (left), εSVD = εDMD = 10−2 (middle), and the experimental data (right).

FIGURE 6.13 Same as the snapshot considered in Fig. 6.12-middle but numerically computed for Re=1000.

method to the numerical data shows that the small vortices are associated with the high order harmonics of the DMD modes (at least beyond the 10th harmonic). Regarding the suction phase, the results obtained in the reconstructions using both 9 and 24 harmonics are quite similar, although the reconstruction is slightly smoother in the more accurate case. Figure 6.14 results from the more accurate

FIGURE 6.14 Evolution from the suction to the injection phase. Representative snapshots extracted at t = T /18 (left), t = 2T /9 (middle) and t = T /2 + T /72 (right).

reconstruction, using 24 harmonics, and shows the evolution of a saddle point that is formed in this phase. As seen, the saddle point is created near the jet orifice and travels downstream up to a maximum distance approximately equal to 2D, remaining almost steady at this point for the rest of the suction phase. On the other hand, at the beginning of the injection phase, a short-lived saddle point is created near the orifice, traveling downstream very fast. For topological reasons, this saddle point must be present. However, it is not seen in the


experimental data. Moreover, this saddle point had never been seen in studies previous to [102]. This short-lived saddle point is seen in the reconstructed solution thanks to the temporal interpolation that is implicit in HODMD. In other words, this interpolation sheds light on new spatio-temporal flow structures that are very difficult (and very costly) to identify using traditional techniques to postprocess experimental data. To sum up, the good performance of the multi-dimensional, iterative HODMD algorithm in the analysis of noisy experimental data is remarkable. The method (i) filters errors below the uncertainty level of the experiments, identifying flow structures hidden by the experimental noise; (ii) uncovers fast events (such as the saddle point), taking advantage of the temporal interpolation that is implicit in the HODMD method; and (iii) provides information on the main mechanisms driving the flow, which permits constructing purely data-driven ROMs from experimental data, modeling the ZNMF jet.
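The noise-filtering property (i) comes essentially from the dimension-reduction step: SVD modes whose singular values fall below εSVD relative to the largest one are discarded. The Python sketch below mimics this on synthetic data (a rank-two "flow" plus random noise; numpy-based and purely illustrative — it is not the book's HODMD code):

```python
import numpy as np

def svd_denoise(V, eps_svd):
    """Keep only the SVD modes with sigma_n / sigma_1 > eps_svd,
    mimicking the dimension-reduction step of HODMD."""
    U, s, Wt = np.linalg.svd(V, full_matrices=False)
    n = int(np.sum(s / s[0] > eps_svd))
    return U[:, :n] @ np.diag(s[:n]) @ Wt[:n, :], n

# Synthetic rank-two "flow" (space x time) plus random noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
t = np.linspace(0.0, 8.0 * np.pi, 400)
clean = (np.outer(np.sin(2.0 * np.pi * x), np.cos(t))
         + 0.15 * np.outer(np.sin(4.0 * np.pi * x), np.sin(3.0 * t)))
noisy = clean + 0.02 * rng.standard_normal(clean.shape)

filtered, n_kept = svd_denoise(noisy, eps_svd=0.05)
err_noisy = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
err_filt = np.linalg.norm(filtered - clean) / np.linalg.norm(clean)
print(n_kept, err_noisy, err_filt)
```

With εSVD = 0.05 the two physical modes survive while the noise modes, whose relative singular values are an order of magnitude smaller, are filtered out, so the truncated reconstruction is closer to the clean field than the raw noisy data.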

6.5 Exercise: apply HODMD to analyze the three-dimensional cylinder wake
This section introduces a simple example in fluid dynamics with the aim of encouraging the reader to use the multi-dimensional, iterative HODMD algorithm (developed in Chapter 2), which iteratively combines HOSVD with DMD-d, to identify the main mechanisms driving the flow dynamics in a three-dimensional cylinder wake. This is a benchmark problem in fluid dynamics that has been studied in a wide range of cases. At the end of this chapter, in Annex 6.2, the reader will find the appropriate MATLAB functions that are needed to analyze the three-dimensional cylinder wake. Before starting with this example, the reader should download the database modeling the three-dimensional cylinder wake at Re= 220 from the following repository: https://data.mendeley.com/datasets/hw6nz3zkg3/1. The database consists of 500 snapshots, equidistant in time, with Δt = 0.5. The database has been generated using the aforementioned numerical code Nek5000 [60]. The boundary conditions and dimensions of the computational domain are the same as in the application presented in Section 6.3. The spanwise length is Lz = 4; thus, only mode A develops (in the previous example, mode B also developed since Lz = 6.99). For simplicity, this three-dimensional database is reduced to four planes extracted at z = 0, 1, 2, and 3 in the computational domain defined by the intervals 0.7 ≤ x ≤ 3.5 and −0.75 ≤ y ≤ 0.75, and it is interpolated to a structured grid with dimension 40 × 40. The data are organized in tensor form. Note the small number of grid points discretizing the computational domain: the idea of this section is to present a toy model that can be analyzed on any type of computer. Nevertheless, finer, fully three-dimensional grids should be used instead for an accurate analysis of the data and a detailed study of the flow physics. To postprocess such


data, computers equipped with a large amount of RAM are required. A general overview of this database is presented in Fig. 6.15-top, which considers a representative snapshot of the streamwise, normal, and spanwise velocity components. Figure 6.15-bottom shows the temporal evolution of the flow at a particular point in the spatial domain, providing a general idea of the flow complexity. It is possible to identify a periodic solution composed of several frequencies interacting with each other.
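Before applying HODMD, the data must be arranged as a snapshot tensor. The sketch below (Python/numpy, with random placeholder values instead of the actual Mendeley database) builds a tensor with the layout used in this exercise, (components, nx, ny, nz, time) = (3, 40, 40, 4, 500), and forms a mode unfolding, the matrix on which HOSVD performs an SVD for each tensor dimension:

```python
import numpy as np

# Placeholder snapshot tensor with the exercise's layout:
# (components, nx, ny, nz, time); random values stand in for the database.
rng = np.random.default_rng(1)
K = 500                                   # number of snapshots, dt = 0.5
tensor = rng.standard_normal((3, 40, 40, 4, K))

def unfold(T, mode):
    """Mode-k unfolding: fibers along dimension `mode` become the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# HOSVD applies an SVD to each unfolding; e.g. the temporal one:
A_t = unfold(tensor, 4)
print(A_t.shape)   # one row per snapshot, one column per spatial point
```

The SVD of the temporal unfolding yields the reduced temporal matrix on which DMD-d is subsequently applied in the iterative algorithm.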

FIGURE 6.15 From left to right: streamwise, normal, and spanwise velocity components in a three-dimensional cylinder wake at Re= 220. Top: representative snapshot. Spatial dimensions normalized with the body diameter, D = 1. The cylinder is located at (x, y) = (0, 0). Plane extracted at z = 0. Bottom: temporal evolution at a representative point extracted at (x, y) = (2, 0.5). Database from the following repository: https://data.mendeley.com/datasets/hw6nz3zkg3/1.

HODMD is applied to analyze these data using the tolerances εSVD = εDMD = 10−2. For d = 100, it is possible to reconstruct the original solution with a relative RMS error ∼ 2.5%, and the method retains 11 nonzero frequencies (plus their complex conjugates) and the steady mode (ω = 0) representing the mean flow. The leading mode is ω1 ≈ 1.17 (St ≈ 0.18), in good agreement with the literature [14,21]. It is also possible to identify a sub-harmonic of the leading mode, ω01 = ω1/3 ≈ 0.38. The remaining modes are organized as high order harmonics of ω01, which can also be considered as harmonics of ω1. Applying HODMD with d = 1, the method only retains six nonzero frequencies (plus their complex conjugates and the steady mode), and the relative RMS reconstruction error is ∼ 5.5%, showing the better performance of the higher order method. Figure 6.16 compares the a–ω diagrams obtained using d = 100 and d = 1, showing that the frequencies of the low amplitude modes (sub-harmonics) are calculated more accurately using d = 100 than using d = 1. The reconstructions of the solution using d = 1 and d = 100 are compared with the original field in Fig. 6.17. As seen, the two considered values of the index d are able to represent the streamwise and normal velocity fields with high accuracy.

FIGURE 6.16 The a–ω diagrams resulting from applying the HODMD algorithm DMD-d with d = 100 (black circles) and d = 1 (blue crosses) in the three-dimensional cylinder wake at Re= 220.

The main difference is found in the spanwise velocity field, whose magnitude is relatively small compared to the two remaining velocity components. A good option to overcome this difficulty is to treat these fields as separate databases, applying HODMD with the tolerance that best fits each case, thus diminishing the reconstruction error. However, these results are not presented here for the sake of brevity.

TABLE 6.1 HODMD in the three-dimensional wake of a circular cylinder at Re= 220 with tolerances ε1 = ε2 = 10−2.

  # snapshots   d         RRMSE
  500           1         5.5%
  500           50–350    2.5%
  500           470       3.4%
  500           490       14%
  250           1         4.8%
  250           25–175    2.7%
  250           230       6%

The influence of the parameter d of the DMD-d algorithm on the RMS reconstruction error is tested in two datasets composed of different numbers of snapshots. As shown in Table 6.1, for a given tolerance the optimal value of d is found in the interval K/10 ≤ d ≤ 7K/10, where K is the total number of snapshots. This selection agrees with the main remarks given about the HODMD


FIGURE 6.17 From left to right: HODMD-reconstruction, using the DMD-d algorithm for the indicated values of d, of the streamwise, normal, and spanwise velocity components in a threedimensional cylinder at Re= 220. For reference, the original field is also considered in the top plots.

algorithm in Chapter 2, Sect. 2.2.5, and also with the results presented in Chapter 3, where this optimal value was defined as K/10 ≤ d ≤ K/3. In the present example, the upper bound of this interval is larger, suggesting that the number of snapshots collected to perform this analysis is still too large; similar results could be obtained using smaller values of K, considering a smaller number of oscillations of the slowest frequency of the flow. Some random noise, with level ∼ 5–7%, is now added to the original database, and the DMD-d algorithm is again applied using d = 100 and d = 1. The tolerances are now set to εSVD = εDMD = 5 · 10−2, which is the order of magnitude of the noise level. The method retains the same number of DMD modes as in the previous (clean) case, but the relative RMS reconstruction error is ∼ 6.4% and ∼ 8% for d = 100 and d = 1, respectively. Figure 6.18 compares the reconstruction of the velocity field with the original solution. As seen, the method is able to clean the solution using both values of d. Although some small fluctuations are identified in both cases in the normal and spanwise velocity fields, these fluctuations are smoother in the case with


FIGURE 6.18 Counterpart of Fig. 6.17 in the case with 5% of additional random noise.

d = 100. Additionally, as in the previous case, the method approximates the streamwise and normal velocity components better than the spanwise component.
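To make the role of the index d more tangible, the following Python sketch implements the bare delay-embedding idea behind DMD-d on a synthetic two-frequency signal: build a Hankel (delay) matrix with d rows, fit the linear map between its shifted versions on a truncated SVD basis, and read the frequencies from the eigenvalue phases. This is an illustrative toy, not the book's full algorithm (scalar data, hand-picked rank, no amplitude-based mode truncation):

```python
import numpy as np

def dmd_d_frequencies(v, d, dt, rank):
    """Bare-bones DMD-d sketch: delay-embed the signal with index d,
    fit the linear map between the shifted embeddings on a truncated
    SVD basis, and return the frequencies (eigenvalue phases / dt)."""
    K = len(v)
    # d x (K - d + 1) Hankel (delay-embedded snapshot) matrix
    H = np.array([v[i:K - d + 1 + i] for i in range(d)])
    X, Y = H[:, :-1], H[:, 1:]
    U, s, Wt = np.linalg.svd(X, full_matrices=False)
    U, s, W = U[:, :rank], s[:rank], Wt[:rank].T
    Atilde = U.T @ Y @ W @ np.diag(1.0 / s)   # reduced linear map
    return np.sort(np.angle(np.linalg.eigvals(Atilde)) / dt)

dt = 0.1
t = dt * np.arange(300)
v = np.cos(1.0 * t) + 0.5 * np.cos(2.3 * t)  # two known frequencies

omegas = dmd_d_frequencies(v, d=50, dt=dt, rank=4)
print(np.round(omegas, 3))  # the pairs +-1.0 and +-2.3 should appear
```

For this clean signal any moderate d works; consistently with Table 6.1, too small a d (too few delays) or a d close to K (too few columns in the Hankel matrix) degrades the results.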

6.6 Some concluding remarks
The HODMD and STKD methods have been applied to databases resulting from two realistic, representative, and fairly different fluid dynamic problems, namely the three-dimensional wake of a circular cylinder and the approximately axisymmetric flow in the near field of a zero-net-mass-flux jet.


For the wake of a circular cylinder, the data used were numerical, obtained upon numerical integration of the continuity and Navier–Stokes equations. The considered Reynolds number was such that the well-known synchronous modes A and B were fully developed, while the quasiperiodic asynchronous modes were still decaying in time. A standard application of the HODMD method did not permit identifying modes A and B separately. This is because these two modes were associated with nearby frequencies and thus were mixed, producing two modes, each of them containing a mixture of modes A and B. Thus, in order to identify modes A and B separately, the STKD method was applied to the HODMD reconstruction of the joint mode (in which the two nearby frequencies are set as equal) containing modes A and B simultaneously; in this way, the two above-mentioned modes that mix modes A and B were collected together in a single reconstructed mode. The outcome of the STKD method produced two different modes that exhibited the same frequency but different wavenumbers, and these different wavenumbers permitted identifying modes A and B separately. It turned out that the resulting mode A exhibited the appropriate, well-known symmetries of this mode. For the zero-net-mass-flux jet, the data analyzed were experimental, obtained in a water tunnel, representing the near field of the jet. These data were analyzed using the iterative, multi-dimensional HODMD method introduced in Chapter 2. In addition to efficiently cleaning the experimental noise, the method was able to identify some small eddies that had not been seen before. The physical relevance of these small eddies was elucidated via numerical simulations, which also showed the eddies. Finally, as an exercise for the reader, the HODMD method has also been applied to analyze a reduced database representing the three-dimensional wake of a circular cylinder.
The performance of the DMD-d algorithm has been tested for various values of d as a function of the number of snapshots. Moreover, a case with 5% of additional random noise was also analyzed, showing again the good capabilities of the HODMD method to remove noisy spurious artifacts and thus recover the clean database. After the analyses carried out in this chapter, we may conclude that an appropriate combined application of the HODMD and STKD methods can be very useful in the identification of spatio-temporal patterns in many fluid dynamics problems.

6.7 Annexes to Chapter 6
Annex 6.1. Selected practice problems for Chapter 6:
1. Download the numerical database modeling the wake of a three-dimensional circular cylinder at Re= 220 from the repository: https://data.mendeley.com/datasets/hw6nz3zkg3/1.


2. Adapt the code presented in Annex 6.2, using the functions presented in Chapter 2, Annex 2.2, to create the MATLAB code for the iterative, multi-dimensional HODMD.
3. Apply HODMD to analyze the previous data and reproduce Figs. 6.16 and 6.17.
4. Modify the value of d and the number of snapshots in the analysis, reproducing Table 6.1.
5. Add some random noise to the original database and apply HODMD, reproducing the results presented in Fig. 6.18.

Annex 6.2. MATLAB functions to compute the iterative, multi-dimensional HODMD algorithm.
This annex includes the functions to compute the iterative, multi-dimensional HODMD algorithm. This program calculates the main frequencies, amplitudes, and growth rates of the DMD modes. The main program and the remaining MATLAB functions must be in the same directory. The MATLAB functions to compute multi-dimensional HODMD are the following:
• The main program is in the MATLAB function ‘main-multiDimHODMD-Chap6.m’. It is necessary to set the snapshot matrix and the parameters for the iterative HODMD analysis.
• The MATLAB functions ‘DMD1.m’ and ‘DMDd.m’ compute the temporal DMD expansion. They use the DMD-1 and DMD-d algorithms, respectively, which were implemented in Chapter 2, Annex 2.2, and can easily be adapted to this algorithm.
• The MATLAB function ‘DMDreconst.m’ reconstructs the original solution as an expansion in spatio-temporal DMD modes. This function adapts to the present algorithm the function ‘ContReconst-SIADS.m’ that was implemented in Chapter 2, Annex 2.2.
• The MATLAB function ‘calculateDMDmode.m’ calculates the DMD modes. This function was implemented in Chapter 2, Annex 2.2, and is adapted to this iterative algorithm.
• To make the chapter self-contained, the MATLAB function ‘hosvd-function.m’ is included here, instead of relying on the function ‘TruncHOSVD.m’ that was provided in Chapter 1, Annex 1.2.

A. Iterative, multi-dimensional HODMD in the wake of a three-dimensional circular cylinder
This is the main function to compute the iterative, multi-dimensional HODMD:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
main-multiDimHODMD-Chap6.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% This Matlab file uses the multidimensional HODMD:   %%%
%%% extension of HODMD for the analysis of complex      %%%
%%% signals, noisy data, complex flows...               %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This code presents an example describing the near field %
% of the wake of a three-dimensional cylinder at Re=220   %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% %% INPUT: %%
%%% d: parameter of DMD-d (higher order Koopman assumption)
%%% V: snapshot matrix
%%% deltaT: time between snapshots
%%% varepsilon1: first tolerance (SVD)
%%% varepsilon2: second tolerance (DMD-d modes)
%%% %% OUTPUT: %%
%%% Vreconst: reconstruction of the snapshot matrix V
%%% deltas: growth rate of the DMD modes
%%% omegas: frequency of the DMD modes (angular frequency)
%%% amplitude: amplitude of the DMD modes
%%% modes: DMD modes
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc
clear all
close all
%%%%%%%%%%%%%%% SAVE RESULTS IN FOLDER DMD_solution %%%%%%
mkdir('DMD_solution')
filename = sprintf('./DMD_solution/DMD_history.txt');
diary(filename)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% USER:
% load data in tensor form: Tensor_lijsk:
% l: components (i.e., streamwise, normal and
%    spanwise velocity)
% i: X (or Y)



% j: Y (or X)
% s: Z
% k: Time
%% Original data: Tensor(components,nx,ny,nz,nt)
load ./Tensor_Re220p6_t2900_3d.mat
%%% Input parameters %%%
% Set the snapshot number
SNAP=500
varepsilon1=1e-2
varepsilon2=1e-2
d=100
deltaT=0.5;
%% Tensor dimension - number of snapshots:
Tensor=Tensor(:,:,:,:,1:SNAP);
Time=[1:SNAP]*deltaT;
%% ALGORITHM:
% Modes selection for HOSVD
nn0=size(Tensor);
nn(1)=nn0(1);
nn(2:length(nn0))=0;
%% PERFORM HOSVD DECOMPOSITION TO CALCULATE THE REDUCED
%  TEMPORAL MATRIX hatT
[hatT,U,S,sv,nn1]=...
    hosvd_function(Tensor,varepsilon1,nn,nn0);
% hatT: reduced temporal matrix
% U: temporal SVD modes
% S: tensor core
% sv: singular values
% nn: number of singular values
%
% Tensor: initial data
% n: dimension of Tensor
% varepsilon1: tolerance SVD
%% PERFORM DMD-d TO THE REDUCED TEMPORAL MATRIX hatT
if d>1
    [GrowthRate,Frequency,Amplitude,hatMode] =...
        DMDd(d,hatT,Time,varepsilon1,varepsilon2);
else


    [GrowthRate,Frequency,Amplitude,hatMode] =...
        DMD1(hatT,Time,varepsilon1,varepsilon2);
end
%
% GrowthRate: growth rate of the DMD modes
% Frequency: frequency of the DMD modes
% hatMode: reduced DMD mode
% Amplitude: amplitudes weighting the DMD modes
% varepsilon1: tolerance for the dimension
%              reduction via SVD
% varepsilon2: tolerance to set the DMD modes
%              (the amplitude of the DMD
%              modes retained is > varepsilon2)
%% RECONSTRUCT THE ORIGINAL TENSOR
%  USING THE DMD EXPANSION
[TensorReconst]=DMDreconst(GrowthRate,...
    Frequency,hatMode,Time,U,S,sv,nn1);
%
% TensorReconst: reconstruction of the original tensor
% In the analysis of complex data comment the following line:
TensorReconst=real(TensorReconst);
% ('Relative mean square error made in the calculations')
RRMSE=norm(Tensor(:)-TensorReconst(:),2)/norm(Tensor(:),2)
% ('Growth rate, Frequency, Amplitude')
GrowthrateFrequencyAmplitude=...
    [GrowthRate',Frequency',Amplitude']
diary off
% Save the reconstruction of the tensor and the
% growth rates, frequencies and amplitudes
save ./DMD_solution/TensorReconst.mat TensorReconst -v7.3
save ./DMD_solution/GrowthrateFrequencyAmplitude.mat ...
    GrowthrateFrequencyAmplitude
('Calculating DMD modes...')
%% Calculate DMD modes
[N,~]=size(hatT);
[DMDmode]=calculateDMDmode(N,hatMode,...



    Amplitude,U,S,nn1);
% Save DMD modes
save ./DMD_solution/DMDmode.mat DMDmode -v7.3
%% Plot the results
% Frequency vs. absolute value of growth rate
h=figure;
semilogy(Frequency,abs(GrowthRate),'o',...
    'linewidth',2,'color','k');
name1 = sprintf('./DMD_solution/FrequencyGrowthrate_d%03i',d);
xlabel('\omega_m')
ylabel('|\delta_m|')
saveas(h,name1,'fig')
% Frequency vs. amplitudes
h2=figure;
semilogy(Frequency,Amplitude,'o','linewidth',2,'color','k');
name2 = sprintf('./DMD_solution/FrequencyAmplitude_d%03i',d);
xlabel('\omega_m')
ylabel('a_m')
saveas(h2,name2,'fig')
('Check the folder DMD_solution!')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

B. Calculate HOSVD
This MATLAB function, called ‘hosvd-function.m’, calculates the truncated HOSVD approximation. As anticipated, this MATLAB function is quite similar to the truncated HOSVD function ‘TruncHOSVD.m’ presented in Chapter 1, Annex 1.2. In particular, a function called ‘hosvd.m’ is given below that is quite similar to the function ‘EconomyHOSVD.m’ already included in Chapter 1, Annex 1.2. The function ‘hosvd.m’ uses several additional MATLAB functions, which must be located in the same directory and can be downloaded from the Mathworks repository: https://es.mathworks.com/matlabcentral/fileexchange/25514-tp-tool. It must be noted that the function ‘hosvd.m’ included here is similar to, but does not exactly coincide with, its counterpart in the Mathworks repository.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
hosvd-function.m


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [hatT,U,S,sv,nn]=...
    hosvd_function(Tensor,varepsilon1,nn,n)
% PERFORM ECONOMY HOSVD RETAINING ALL SINGULAR VALUES
% [L,I,J,K]=size(Tensor)
% n=[L,I,J,K];
[TT S U sv n] = hosvd(Tensor, n);
% SET THE TRUNCATION OF THE SINGULAR
% VALUES USING varepsilon1 (automatic truncation)
%nn=[L,0,0,0];
for i=2:length(nn)
    count=0;
    for j=1:size(sv{i},1)
        if sv{i}(j)/sv{i}(1)>=varepsilon1
            count=count+1;
        else
            break
        end
    end
    nn(i)=count;
end
('Initial number of singular values')
n
('Number of singular values retained')
nn
% HOSVD retaining nn singular values:
% reconstruction of the modes
[TT S U sv nn] = hosvd(Tensor, nn);
% Construct the reduced matrix containing the
% temporal modes
for pp=1:length(nn)
    UT{pp}=U{pp}';
end
for kk=1:nn(5)



    hatT(kk,:)=sv{5}(kk)*UT{5}(kk,:);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
hosvd.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Function downloaded from MathWorks Repository and adapted
% (1) https://es.mathworks.com/matlabcentral/fileexchange/25514-tp-tool
% It needs to include in the same folder the function files:
% ndim_fold.m, ndim_unfold.m, tprod.m and wshift.m,
% which should be downloaded from (1)
function [TT S U sv n] = hosvd(T, n)
M = size(T);
P = length(M);
U = cell(1,P);
UT = cell(1,P);
sv = cell(1,P);
product=n(1);
for i=2:P
    product=product*n(i);
end
for i = 1:P
    n(i)=min(n(i),product/n(i));
end
for i = 1:P
    n(i)=min(n(i),product/n(i));
    A = ndim_unfold(T, i);
    % SVD based reduction of the current dimension (i)
    % [Ui svi] = svdtrunc(A, n(i));
    if nargin == 1
        ns=length(sv);
    end
    [Ui S] = svd(A, 'econ');
    svi=diag(S);
    % COMPUTE THE MODE MATRIX IN THE i DIMENSION
    if n(i)

For x/D > 12, the flow structures are more disorganized. These qualitative changes in the flow can be related to the wake meandering motion (see [98] for more details). These two different regions are also found in the normal component (vy) of the velocity field. Figure 7.10 gives the instantaneous, time-averaged, and fluctuating normal

FIGURE 7.10 The normal velocity component at x/D = 6 (top) and x/D = 18 (bottom), which correspond to regions I and II, respectively. From left to right: time-averaged field Vy , velocity fluctuations vy , and instantaneous flow field. In each plot, the domain covers −2R ≤ r ≤ 2R.

velocity components, extracted at two representative planes perpendicular to the streamwise direction, located in regions I and II. In region I, the normal velocity component is reflection-symmetric with respect to the z-axis. This is because the flow is rotationally symmetric in this region, namely the radial velocity component is independent of the azimuthal angle. Likewise, the vertical velocity component (not shown for the sake of brevity) is also reflection-symmetric, with respect to the y-axis; in fact, this velocity component is obtained from the normal component by a π/2-rotation. In region II, instead, these reflection symmetries are broken and the flow is no longer invariant under rotation. The origin of such symmetry breaking could be a drift instability, associated with parity breaking [42], occurring along the azimuthal direction, which leads to the flow rotation. Such rotation is identified in both the instantaneous and the fluctuating velocity fields, suggesting that the flow instability starts in the fluctuating field (small amplitude) and is then transferred to the entire flow field.
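The reflection-symmetry statements above can be checked numerically by comparing a field with its mirror image. The sketch below (Python, with hypothetical analytic fields standing in for the measured slices) quantifies the symmetry breaking as the relative norm of the antisymmetric part:

```python
import numpy as np

def reflection_asymmetry(field, axis):
    """Relative norm of the antisymmetric part of `field` under reflection
    about the midplane of the given axis (0 = perfectly symmetric)."""
    mirrored = np.flip(field, axis=axis)
    return np.linalg.norm(field - mirrored) / np.linalg.norm(field + mirrored)

# Hypothetical normal-velocity slices on a (z, y) grid, -2 <= z, y <= 2
z = np.linspace(-2.0, 2.0, 81)[:, None]
y = np.linspace(-2.0, 2.0, 81)[None, :]
region_I = y * np.exp(-(y**2 + z**2))                        # even in z
region_II = region_I + 0.3 * z * y * np.exp(-(y**2 + z**2))  # parity broken

print(reflection_asymmetry(region_I, axis=0))
print(reflection_asymmetry(region_II, axis=0))
```

For the symmetric slice the measure is at round-off level, while the slice with a parity-breaking component returns an O(10⁻¹) value, mimicking the transition from region I to region II.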


The STKD method is now applied to the instantaneous flow field. The analysis is performed on a set of 100 snapshots, temporally equispaced with Δt = 0.2, and using 240 points along the streamwise direction, spatially equispaced with Δx = 5.8543 · 10−2. The various tunable parameters of the STKD method are as follows: dx = 40, dt = 20, and dimension reduction and mode truncation tolerances εSVD = εDMD = 0.01. The dispersion diagram is presented in Fig. 7.11-left, which shows that the main flow field is composed of a single

FIGURE 7.11 STKD analysis of the instantaneous velocity field, plotting the spatio-temporal x–t diagram of the reconstruction along the line r = 1, θ = π/4 (left) and the dispersion diagram (right).

traveling wave. Figure 7.11-right gives a spatio-temporal x–t diagram of the reconstruction of this field, which represents the general dynamics of region I; region II, instead, is lost in this diagram. This suggests that the origin of the flow transition from region I to region II is connected with small amplitude modes, which are only retained using STKD with tolerances εSVD and εDMD smaller than those considered above, namely smaller than 0.01; in fact, these tolerances should be much smaller than the typical values of the velocity fluctuations. The high amplitude of the TW describing the main flow dynamics hides the small amplitude effects. In order to understand the flow dynamics related to the rotational effect, it is possible to proceed in two different ways, namely either (i) subtracting the high amplitude TW from the complete flow field and performing a new STKD analysis on the resulting velocity field; or (ii) directly carrying out the STKD analysis on the fluctuating field, which is obtained by subtracting the mean flow from the complete velocity field. These two strategies are pursued below. Let us start with the first strategy, namely applying the STKD method to the field obtained by subtracting the previous TW from the complete flow field. The resulting flow field is shown in Fig. 7.12-left, and its accurate STKD reconstruction, presented in Fig. 7.12-right, shows that the two fields agree well. However, identifying the main flow instabilities that trigger the origin of region II is still an open question. The dispersion diagram of this analysis, using εSVD = εDMD = 0.1, is presented in Fig. 7.13, where it is possible to organize the solution as groups of standing

Applications of HODMD and STKD in the wind industry Chapter | 7 229

FIGURE 7.12 The field obtained subtracting the TW from the complete flow field (left) and its STKD reconstruction (right).

FIGURE 7.13 Dispersion diagram obtained in the STKD analysis applied to the field obtained subtracting the TW from the complete flow field.

waves (SWs) or TWs. In other words, the horizontal lines with constant ω and various κ describe the flow as a modulated SW, formed by a combination of SWs. Nevertheless, the flow can also be described using oblique lines with various values of κ and ω, representing a modulated TW, built as a combination of TWs. Following the second strategy mentioned above, let us apply STKD directly to the velocity fluctuations, obtained by subtracting the mean flow from the complete velocity field. The tolerances used in this analysis are εSVD = εDMD = 0.05. The dispersion diagram, presented in Fig. 7.14-left, shows a simple description of the flow as a modulated TW, composed of several TWs moving with positive phase velocity (negative slopes in the dispersion diagram); a TW moving with negative phase velocity can also be identified. Figure 7.14-right shows that this approximation matches reasonably well the original fluctuating field presented in Fig. 7.9-right, distinguishing regions I and II. However, if the method is applied using the tolerances εSVD = εDMD = 0.1, the counterpart of the dispersion diagram in Fig. 7.14-left identifies only a few TWs with positive phase velocity, and the reconstruction of the instantaneous flow field only shows region I. This

230 Higher Order Dynamic Mode Decomposition and Its Applications

FIGURE 7.14 The turbulent wake of a horizontal wind turbine. Left: Dispersion diagram obtained by applying the STKD method to the fluctuating velocity field. Right: Reconstructed spatio-temporal x–t diagram for the velocity fluctuations.

fact suggests that the rotational effects found in region II, strongly linked with the wake meandering motion, are related to low amplitude TWs moving with negative and positive phase velocities. Summarizing, the STKD method is able to identify: (i) the high amplitude traveling wave driving the motion of the wake of the wind turbine, and (ii) the small amplitude traveling waves related to the velocity fluctuations, which seem to be connected with the mechanism triggering the wake meandering motion. Using STKD, it is possible to detect spatio-temporal flow instabilities in complex flows, providing deep insight into the flow physics.
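The relation between SWs and TWs invoked in the discussion of Fig. 7.13 (a standing wave as the superposition of two counter-propagating traveling waves) can be checked numerically. A minimal NumPy sketch, where the wavenumber and frequency values are arbitrary illustrations and not taken from the flow data:

```python
import numpy as np

kappa, omega = 2.0, 1.5                      # illustrative wavenumber/frequency
x = np.linspace(0.0, 2.0 * np.pi, 101)[:, None]   # space along rows
t = np.linspace(0.0, 10.0, 80)[None, :]           # time along columns

# Two TWs with opposite phase velocities +/- omega/kappa ...
tw_plus = np.sin(kappa * x - omega * t)
tw_minus = np.sin(kappa * x + omega * t)

# ... whose sum is a standing wave: sin(kx-wt) + sin(kx+wt) = 2 sin(kx) cos(wt)
sw = tw_plus + tw_minus
```

This is why a group of modes sharing the same ω but various κ (a horizontal line in the dispersion diagram) can equivalently be read as a combination of SWs.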

7.4 LiDAR experimental data: wind velocity spatial predictions
Wind velocity forecasting in LiDAR experiments provides relevant information for various applications, including, among others, the correct setting of the working conditions of the wind turbine and the maximization of power extraction in the wind farm. In the application presented in this section, LiDAR measurements were carried out during 24 h, describing the wind velocity at the left and right sides of the incoming flow, upstream of a wind turbine. This experiment extracts information about the wind velocity at J different distances, located in the range from 33 to 201 meters upstream of the wind turbine. The spatial complexity of this system is small (J is limited to a few distances), while the temporal complexity is very high (turbulent flow). Moreover, the level of noise of this time-dependent signal is also high (15–20%). These three factors make HODMD a suitable tool for the analysis of LiDAR experimental data. As presented below, HODMD captures very well the dominant frequencies and growth rates describing the LiDAR temporal measurements. However, this application intends to predict wind velocities at a distance of 229 m upstream of the wind turbine, in the same timespan where the data are collected.


In other words, HODMD will be used for spatial predictions. The reader may then question why temporal HODMD is used rather than STKD, which is specific for spatio-temporal analyses. The answer is simple: the small number of spatial experimental data points, J = 6, compared to the large number of temporal samples (in the data analyzed, K = 144), encourages using HODMD combined with alternative algorithms to predict wind velocities at different spatial points. This new methodology is called spatial forecasting HODMD (sfHODMD) and comprises the five steps presented below. Since the left and right signals are somewhat uncorrelated, these two signals are treated separately.
• I. Temporal HODMD: calibration. HODMD is applied to analyze the experimental data. The tunable parameters of this analysis are selected to minimize the relative RMS error of the reconstruction. Note that, at this step, the method cleans the noise of the experimental signal. Thus, in some cases, the order of magnitude of the minimum relative RMS error could be similar to the level of noise.
• II. HODMD: dimension reduction. Once HODMD has been properly calibrated, the method is again applied to analyze the same data, but now it is necessary to differentiate the several sub-steps of this method (see Chapter 2). First, SVD is applied to the snapshot matrix (whose columns are the snapshots). Using the threshold εSVD, the dimension of the data is reduced from J to N. Nevertheless, in LiDAR measurements, the spatial dimension of the data, J, is already small, meaning that it is not surprising that N = J. This fact motivates the next steps of this algorithm.
• III. Correlation and tendency in SVD modes. Each one of the SVD modes retained in the previous step is studied in detail by observing its trend in space.
This trend is assumed to continue in space, meaning that the SVD modes can be extrapolated using an appropriate type of correlation (e.g., linear or quadratic).
• IV. Extrapolation. Based on the spatial correlation determined in the previous step, each one of the SVD modes is extrapolated in space. For instance, when the data are extrapolated to one additional spatial point, the spatial dimension goes from J to J + 1.
• V. The DMD-d algorithm. This algorithm (with appropriate d) is applied to the extrapolated snapshots. The DMD expansion (2.1) in Chapter 2 is constructed with the new spatial dimension J + 1. The rows corresponding to the new spatial point give the temporal evolution of the wind velocity at the new (J + 1)th location.
In this chapter, this application of HODMD for spatial data forecasting is used to analyze LiDAR experimental data; nevertheless, the same methodology could be applied to predict spatial features in any type of data. For the sake of clarity, the method is first applied to a toy model, and then to actual LiDAR experimental data.
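Steps II–IV above can be sketched in a few lines. The following simplified NumPy illustration (the calibration step I and the final DMD-d stage are omitted; the function and variable names are ours, and polynomial least-squares fitting stands in for the correlation step) extends a J × K snapshot matrix with extrapolated rows:

```python
import numpy as np

def extrapolate_svd_modes(V, x, x_new, eps_svd=1e-8, deg=2):
    """Steps II-IV of sfHODMD (sketch): SVD dimension reduction, polynomial
    fit of each retained spatial mode, and extrapolation to new locations."""
    U, s, Wt = np.linalg.svd(V, full_matrices=False)   # V: J x K snapshots
    N = int(np.sum(s / s[0] > eps_svd))                # step II: retained modes
    U_ext = np.empty((len(x_new), N))
    for n in range(N):
        coeffs = np.polyfit(x, U[:, n], deg)           # step III: spatial trend
        U_ext[:, n] = np.polyval(coeffs, x_new)        # step IV: extrapolate
    # Extended snapshot matrix: original rows plus the extrapolated ones.
    V_core = (U[:, :N] * s[:N]) @ Wt[:N, :]
    V_new = (U_ext * s[:N]) @ Wt[:N, :]
    return np.vstack([V_core, V_new])

# Toy usage: J = 4 sensors, a quadratic spatial shape times a temporal signal.
x = np.array([0.0, 0.5, 1.0, 2.0])
t = np.linspace(0.0, 20.0, 200)
V = np.outer(x**2 + x, np.sin(2.0 * np.pi * t / 5.0))
V_ext = extrapolate_svd_modes(V, x, np.array([3.0]))   # predict a 5th sensor
```

In the full method, the DMD-d algorithm of step V would then be applied to `V_ext` to obtain the DMD expansion at the new location.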


7.4.1 HODMD for spatial forecasting: toy model
Before showing the application to actual LiDAR experimental data, let us start using the sfHODMD method in a simple toy model, which mimics the temporal evolution of LiDAR experimental data, but is free of noise. The pattern is based on the model introduced in [97], and is defined as

u(x, t) = (2 · 10−3 x³ + 8 · 10−2 x² + x) [2 sin(ω̃1 t) + 0.25 cos(ω̃2 t)],   (7.1)

where ω̃1 = 2π/25 and ω̃2 = √5. The dynamics associated with this model are multi-scale, since the two imposed frequencies exhibit quite different values (small and large), and quasi-periodic, because ω̃1 and ω̃2 are incommensurable; consequently, the model is fairly demanding. It is possible to represent this model as a DMD expansion involving four frequencies, namely ±ω̃1 and ±ω̃2. The spatio-temporal evolution of this model is presented in Fig. 7.15. The model is spatially defined in the interval 0 ≤ x ≤ 50 (although the figure only shows variations up to x = 10 for simplicity).
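Since the toy model (7.1) is the product of a purely spatial function and a purely temporal signal, its snapshot matrix is rank-one, i.e., its spatial complexity is 1. A quick Python check (mirroring the sampling used below: four points in 0 ≤ x ≤ 2 and 1000 snapshots in 0 ≤ t ≤ 200):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 4)                 # four spatial points
t = np.linspace(0.0, 200.0, 1000)            # 1000 snapshots
w1, w2 = 2.0 * np.pi / 25.0, np.sqrt(5.0)

space = 2e-3 * x**3 + 8e-2 * x**2 + x        # spatial factor of eq. (7.1)
time = 2.0 * np.sin(w1 * t) + 0.25 * np.cos(w2 * t)
V = np.outer(space, time)                    # separable snapshot matrix

# A separable field has exactly one nonzero singular value (up to round-off).
s = np.linalg.svd(V, compute_uv=False)
print(s[1] / s[0])
```

The ratio σ2/σ1 is at round-off level, consistent with the single retained SVD mode discussed below.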

FIGURE 7.15 Toy model (7.1) defined in the interval x ∈ [0, 10].

In order to illustrate this application of sfHODMD, we only consider the spatial interval 0 ≤ x ≤ 2, defined using four spatial points. A set of 1000 snapshots is collected in the time interval 0 ≤ t ≤ 200. HODMD is applied to analyze these data and set the proper calibration parameters (step I). For simplicity, since this model is noise free, the parameters used for this analysis are chosen to obtain errors at the level of machine zero, namely εSVD = 10−8 and εDMD = 10−7. Setting d in the interval 10 ≤ d ≤ 100, the relative RMS error in the reconstruction of the original snapshots using HODMD is ∼ 10−14, confirming that the method is properly calibrated. Now, we can proceed to step II of the sfHODMD algorithm. Figure 7.16-left shows the singular values calculated in the first part of the HODMD algorithm. As seen, this model can be represented using only the mode associated with the first singular value because, in this case, the neglected second singular value is extremely small, namely such that σ2/σ1 < 10−18. Figure 7.16-right shows the shape of the retained SVD mode, considering only the four points defining the spatial domain. As seen, for spatial locations x < 2, it is


FIGURE 7.16 Singular values (left) and first SVD mode (right) calculated in the toy model (7.1). In the right plot, the dashed line follows a linear tendency while the continuous line follows the actual SVD mode tendency.

possible to adjust the retained mode with a linear function, as a first approach, but a quadratic approximation is required to adjust the mode at positions x > 2. The correlations identified in step III are in good agreement with the definition of the model (7.1), where we can identify a linear function for small values of x, a quadratic function for intermediate values of x (the term 8 · 10−2 x²), and a cubic function for large values of x (the term 2 · 10−3 x³). In this case, we intend to approximate intermediate values of x (2 ≤ x ≤ 10), and thus use a quadratic approximation, consistent with the trend identified in the retained SVD mode. The following step, step IV, uses the previous trends to extrapolate the SVD mode to spatial values in the interval 2 ≤ x ≤ 10. Once the spatial dimension of the model is enlarged, we proceed with the DMD-d algorithm, as defined in step V. The method provides the DMD expansion spatially defined in the interval 0 ≤ x ≤ 10. As expected, the method identifies the four exact frequencies, namely ±ω̃1 and ±ω̃2, and reconstructs the original model with a relative RMS error ∼ 10−14. Figure 7.17 compares the predictions of the signal at x ∼ 3 and x ∼ 10 with the original model. As expected, the linear prediction provides good results for small values of x, namely up to x ∼ 3, with a relative RMS error ∼ 4.5 · 10−2. However, the error of the linear approximation at x ∼ 10 is larger, namely ∼ 3.8 · 10−1. This second location is better predicted using the quadratic function, leading to a relative RMS error ∼ 5.5 · 10−2. The presence of the cubic term in Eq. (7.1) prevents these errors from decreasing further. Moreover, at farther locations, the growing importance of this term deteriorates the quadratic predictions, with relative RMS errors ∼ 2.5 · 10−1 and ∼ 4.7 · 10−1 at x = 25 and x = 50, respectively.
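The trends just described can be traced back to the polynomial extrapolation of the spatial factor alone: since the model is separable, the relative RMS error of the prediction at a point x equals the relative error of the extrapolated spatial polynomial at that point. A Python check of this (fits on the four known points in 0 ≤ x ≤ 2; the exact error values depend on the fitting details and need not coincide with the full sfHODMD figures):

```python
import numpy as np

def spatial(x):
    return 2e-3 * x**3 + 8e-2 * x**2 + x     # spatial factor of eq. (7.1)

x_fit = np.linspace(0.0, 2.0, 4)             # the four known locations

def rel_err(deg, x0):
    # Fit a degree-deg polynomial on the known points, extrapolate to x0.
    coeffs = np.polyfit(x_fit, spatial(x_fit), deg)
    return abs(np.polyval(coeffs, x0) - spatial(x0)) / abs(spatial(x0))

for x0 in (3.0, 10.0, 25.0):
    print(f"x={x0}: linear {rel_err(1, x0):.2e}, quadratic {rel_err(2, x0):.2e}")
```

The quadratic fit beats the linear one at x = 10, and the neglected cubic term makes the quadratic prediction degrade as x grows, as reported above.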
As expected, using a cubic approximation to extrapolate the model decreases such errors. Nevertheless, in the actual LiDAR experimental data considered below, in Sect. 7.4.2, the extrapolation will not be too far from the considered data, and a quadratic approximation will suffice to obtain good results. In order to test the performance of this model and the influence of the linear and quadratic extrapolations, the previous analyses have been repeated varying


FIGURE 7.17 Predictions in the toy model (7.1). Blue crosses and black circles represent the original model and the prediction, respectively.

the parameters of HODMD and the coefficients of the polynomial appearing in the toy model (7.1). In other words, we consider the corrected toy model defined as

u(x, t) = (ax³ + bx² + cx) [f sin(ω̃1 t) + g cos(ω̃2 t)].   (7.2)

Table 7.2 shows the relative RMS error calculated for predictions at various locations x, depending on the test function (polynomial coefficients) and the approximation used (linear/quadratic). As seen, the calibration of HODMD plays an important role in the proper performance of this method. Using the DMD-2 algorithm provides completely spurious results, since the spatial complexity of this problem (represented by the number of singular values, N = 1) is smaller than the spectral complexity (represented by the number of frequencies of this toy model, M = 4). Thus d should be at least 4, although for the examples presented the optimal solution is found for values of d > 10. As observed in the relative RMS error of the predictions, the difference between the linear and quadratic approximations increases when the value of b (quadratic term) increases compared to a (linear term). Figure 7.18 shows two examples using the quadratic extrapolation for the cases in Table 7.2 with a = 2.3 · 10−3 and


TABLE 7.2 Application of sfHODMD in toy model (7.2) for c = 1, f = 2, g = 0.25, ω̃1 = 2π/25 and ω̃2 = √5.

Coefficients                   DMD-d     Position x   RRMSE        Extrapolation
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    2.92         4.5 · 10−2   Linear
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    2.92         7.4 · 10−4   Quadratic
a = 2.3 · 10−3, b = 8 · 10−2   DMD-2     2.92         1            Quadratic
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    4.94         1.5 · 10−1   Linear
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    4.94         8.6 · 10−3   Quadratic
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    24.74        7.1 · 10−1   Linear
a = 2.3 · 10−3, b = 8 · 10−2   DMD-50    24.74        2.5 · 10−1   Quadratic
a = 10−2, b = 10−2             DMD-50    2.92         1.1 · 10−2   Quadratic
a = 10−2, b = 10−2             DMD-50    4.94         7.6 · 10−2   Quadratic
a = 10−2, b = 10−2             DMD-50    9.59         3.2 · 10−1   Quadratic
a = 10−2, b = 2                DMD-50    2.92         2.1 · 10−1   Linear
a = 10−2, b = 2                DMD-50    2.92         2.0 · 10−3   Quadratic
a = 10−2, b = 2                DMD-50    9.59         3.1 · 10−2   Quadratic
a = 10−2, b = 2                DMD-50    50           2.1 · 10−1   Quadratic

FIGURE 7.18 Predictions in the toy model (7.2). Quadratic approximation at x = 2.5253 using DMD-2 (left) and at x = 24.7475 using DMD-50 (right). As in Fig. 7.17, blue crosses and black circles indicate the original model and the prediction, respectively.

b = 8 · 10−2, using DMD-2 for the prediction at x = 2.5253 and DMD-50 for the prediction at x = 24.7475.
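The requirement that d exceed the ratio between spectral and spatial complexity can be illustrated directly: for a scalar signal (N = 1) containing M = 4 frequencies, the delayed snapshot matrix used by DMD-d only reaches rank 4 once d ≥ 4. A NumPy sketch (the sampling values are illustrative):

```python
import numpy as np

t = np.arange(0, 1000) * 0.2
w1, w2 = 2.0 * np.pi / 25.0, np.sqrt(5.0)
v = 2.0 * np.sin(w1 * t) + 0.25 * np.cos(w2 * t)   # N = 1, M = 4 (+/-w1, +/-w2)

def delay_rank(v, d):
    # Stack d delayed copies of the scalar signal (the DMD-d delay embedding).
    K = len(v) - d + 1
    H = np.stack([v[i:i + K] for i in range(d)])
    return np.linalg.matrix_rank(H, tol=1e-8)

print(delay_rank(v, 2), delay_rank(v, 10))
```

With d = 2 the embedded matrix has rank 2 < M, so no DMD variant operating on it can recover all four frequencies, which is why DMD-2 yields spurious results in Table 7.2.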

7.4.2 HODMD for spatial forecasting: LiDAR experiments
The company ZephIR Lidar [191] carried out a LiDAR experimental campaign, performing measurements during 24 hours. A LiDAR device was located upstream of a horizontal axis wind turbine with the aim of controlling the variations in the incoming wind velocity. For confidentiality reasons, the location and design of the wind turbine cannot be detailed here. However, we can say that they were selected to maximize the wind power extraction. Two sets of measurements were carried out at six different positions upstream of the wind turbine, defined in the interval 33 m ≤ x ≤ 201 m (for similar values of the transversal coordinate y). Each one of these measurement sets is collected at one of the two sides of the wind turbine (left and right sides). The sfHODMD algorithm will be used to predict the wind velocity at position x ≃ 228 m. Figure 7.19 shows a sketch of the LiDAR experimental campaign.

FIGURE 7.19 LiDAR experimental configuration: data collected at six distances upstream of the wind turbine. Predictions are to be carried out at a seventh distance.

Two groups (left and right sides of the turbine) of 1500 snapshots, equidistant in time with Δt ∼ 57.5 s, are collected in the 24 hour experimental campaign. The data are highly noisy, which increases the complexity of using this prediction model. Velocity fluctuations are then attenuated by averaging the data in time every 10 minutes (a common practice in the wind industry), resulting in a total number of 144 snapshots. Figure 7.20 shows the original measurements (top) at distances 1 and 6 (two representative points), and the time-averaged values (bottom), where it is possible to see the attenuation of the velocity fluctuations. After some calibration, the parameters used in the HODMD analysis are εSVD = εDMD = 10−4 and 30 ≤ d ≤ 40. Using these parameters, the reconstruction error always remains smaller than 5% at the six measurement distances. Then we proceed with the analysis of the singular values (Step II), shown in Fig. 7.21 for the left and right sets of measurements. As seen, both cases can be represented using five of the six available modes (the amplitude of the sixth singular value is small). The first singular value presents the largest amplitude, while the amplitudes of the remaining modes (second to fifth) are similar. In the HODMD calibration process previously carried out, the smallest reconstruction error, ∼ 5%, was found for tolerances εSVD equivalent to retaining N = 4 SVD modes. The RMS reconstruction error retaining N = 5 modes is similar, but the error increases when retaining N < 4 modes. Since Steps III and IV require the evaluation and extrapolation of the SVD modes, the optimal value is N = 4 (a reasonable error using a reasonable number of modes).
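The 10-minute averaging step mentioned above is plain block averaging in time. A sketch with synthetic wind speeds standing in for the confidential measurements (1500 samples at Δt ∼ 57.5 s span about 24 h and yield 144 ten-minute bins; the exact binning used by the authors is not specified):

```python
import numpy as np

dt = 57.5                                              # seconds between snapshots
v = np.random.default_rng(0).normal(8.0, 1.5, 1500)    # synthetic raw wind speeds
t = np.arange(len(v)) * dt

# Assign each sample to a 10-minute (600 s) bin and average within each bin.
bins = (t // 600).astype(int)
v_avg = np.array([v[bins == b].mean() for b in range(bins.max() + 1)])
print(len(v_avg))
```

Averaging over roughly 10 samples per bin attenuates the fluctuation level by about a factor of 3, which is the smoothing visible in Fig. 7.20-bottom.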


FIGURE 7.20 LiDAR measurements obtained upstream of the wind turbine (right side) at two representative distances. Top: raw measurements. Bottom: measurements averaged every 10 minutes. Hereinafter, all the velocity measurements are scaled with a factor k, which is not given for confidentiality reasons.

FIGURE 7.21 Singular values calculated in the data sets obtained on the left and right sides of the wind turbine, represented by black crosses and red circles, respectively.

The spatial structure of the four retained SVD modes calculated for the right and left data sets is presented in Fig. 7.22. For distances larger than about 80 m, the first three modes follow a linear trend, suggesting that a linear or quadratic correlation could be enough for extrapolation. The prediction at a distance of 228 m is shown in Fig. 7.23. As seen, similar results are obtained using the linear and quadratic extrapolations. The error of the prediction is ∼ 7%. Nevertheless, considering that the initial reconstruction error from the HODMD analysis was ∼ 5%, the error attributable to the extrapolation itself is only ∼ 2%.


FIGURE 7.22 SVD modes calculated in the data sets obtained on the left (top) and right (bottom) sides of the wind turbine. From left to right: modes 1 to 4.

FIGURE 7.23 Original (solid lines) and HODMD predictions (red dotted lines) of the wind velocity upstream of the wind turbine at x = 228 m, using a linear (top) and quadratic (bottom) approximation model. Left: data set from the left side of the wind turbine. Right: data set from the right side of the wind turbine.

7.5 Some concluding remarks
Two realistic, representative applications of the HODMD and STKD methods have been considered, using numerical and experimental data resulting from wind turbine operation.


The flow around wind turbines with vertical and horizontal axes was considered in Sects. 7.2 and 7.3, respectively. For vertical wind turbines, the HODMD method was able to identify reasonably well the main spatio-temporal structures, both near the wind turbine and in the wake. Horizontal wind turbines exhibited spatio-temporal structures that were different from those of vertical wind turbines. They clearly exhibited traveling waves, which were identified by applying the STKD method separately to the mean flow and the fluctuations. LiDAR experimental data, obtained upstream of a wind turbine with horizontal axis, were considered in Sect. 7.4. The aim in this section was to spatially extrapolate velocity measurements, obtained at six distances upstream of the wind turbine, to a new, farther distance that was not accessible to the raw LiDAR measurements. The method combined the HODMD outcomes with extrapolation of the spatial modes, and gave fairly good results.

7.6 Annexes to Chapter 7
Annex 7.1. Selected practice problems for Chapter 7
1. Create the toy model from Eq. (7.1) using the MATLAB function 'ToyModel-Chap7-dataForecasting.m' presented in Annex 7.2.
2. Use the MATLAB function 'main-Chap7-dataForecasting.m' presented in Annex 7.2 and extrapolate the previous toy model to reproduce Fig. 7.17.
3. Vary the polynomial coefficients of the previous toy model as in Eq. (7.2) and reproduce the results presented in Table 7.2 and Fig. 7.18.
4. Extend the model and adapt the code to perform cubic extrapolations.
5. Add random noise to the toy model with amplitude ∼ 10−3 and repeat the previous calculations. You should re-calibrate the tolerances εSVD and εDMD and increase the lower bound of d. Try using εSVD = εDMD = 10−6 and εSVD = εDMD = 10−3.
Annex 7.2. Some MATLAB functions to compute HODMD for spatial forecasting (sfHODMD)
This annex includes MATLAB functions to compute the HODMD algorithm for spatial forecasting in a toy model. The first file introduces the toy model from Eq. (7.1) presented in Sect. 7.4.1. The second file includes the main program for computing HODMD for spatial forecasting. It is also necessary to use the MATLAB functions for the HODMD algorithm presented in Chapter 2, Annex 2.2. The main program and the following functions must be in the same directory. The functions are the following:
• The first MATLAB function, called 'ToyModel-Chap7-dataForecasting.m', creates the toy model from Eq. (7.1) introduced in Sect. 7.4.1.
• The main program computing HODMD for spatial forecasting is in the function 'main-Chap7-dataForecasting.m'. It is necessary to set the snapshot matrix, the parameters for the HODMD analysis, and the interval for extrapolation.


• To apply sfHODMD, it is also necessary to include in the same folder the function 'DMDd-SIADS.m', which computes the DMD-d algorithm and extracts the value of the variables Q, U, and hatQ as a part of the function; and the function 'ContReconst-SIADS.m', which reconstructs the original solution as an expansion of DMD modes. Both are presented in Chapter 2, Annex 2.2.

A. Toy model
The first MATLAB function, called 'ToyModel-Chap7-dataForecasting.m', creates the toy model from Eq. (7.1) introduced in Sect. 7.4.1. This function is necessary to complete practice problem 1 stated in Annex 7.1.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ToyModel-Chap7-dataForecasting.m
% TOY MODEL for data forecasting in LiDAR experiments:
% u(x,t)=(a*x^3+b*x^2+c*x)*(f*sin(omega1*t)+g*cos(omega2*t))
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc
clear all
close all
%% Set parameters for spatial and temporal discretization
nx=100   % # Spatial points
nt=1000  % # Temporal points
xf=10    % Spatial dimension of the domain
tf=200   % Final time
%% Set frequencies
omega1=2*pi/25
omega2=sqrt(5)
%% Set spatial and temporal coefficients in the toy model
a=2e-3
b=8e-2
c=1
f=2
g=0.25
%% Create mesh
x=[0:xf/(nx-1):xf];
t=[0:tf/(nt-1):tf];
[X,T]=meshgrid(0:xf/(nx-1):xf,0:tf/(nt-1):tf);
%% Create toy model
for i=1:nx
    for j=1:nt
        u(i,j)=(a*x(i)^3+b*x(i)^2+c*x(i))* ...
            (f*sin(omega1*t(j))+g*cos(omega2*t(j)));
    end
end
%% Plot TM
figure1 = figure;
colormap(jet);
axes1 = axes('Parent',figure1);
hold(axes1,'on');
pcolor(X,T,u')
xlabel('x')
ylabel('t')
box(axes1,'on');
set(axes1,'FontSize',18);
colorbar(axes1);
shading('interp')
colormap(jet)
%% Save TM
V_TM=u;
Time=t;
save V_TM_Chap7_LiDAR.mat V_TM
save Time.mat Time
save X.mat x
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


B. Spatial data forecasting
This MATLAB function, called 'main-Chap7-dataForecasting.m', extrapolates in space the toy model from Eq. (7.2) using the sfHODMD algorithm introduced in Sect. 7.4. This function completes practice problems 2 and 3 stated in Annex 7.1. The MATLAB function is as follows:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% main-Chap7-dataForecasting.m
% HODMD is applied to extrapolate in space the toy model
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% INPUT:
% d: parameter of DMD-d (higher order Koopman assumption)
% V: snapshot matrix
% Time: vector time
% varepsilon1: first tolerance (SVD)
% varepsilon: second tolerance (DMD-d modes)
% extrapolation: (1) linear or (2) quadratic
% r: range of extrapolation
%% OUTPUT:
% Vreconst_Extrapol: reconstruction of snapshot matrix V
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc
clear all
close all
load V_TM_Chap7_LiDAR.mat V_TM
V=V_TM;
load Time.mat Time
load X.mat x
%% Input parameters
% extrapolation = 1: linear extrapolation
% extrapolation = 2: quadratic extrapolation
extrapolation = 2;
% Spatial definition of the TM: 0