Computational Intelligence for Water and Environmental Sciences (Studies in Computational Intelligence, 1043) 9811925186, 9789811925184

This book provides a comprehensive yet fresh perspective on cutting-edge CI-oriented approaches in water resources.


English · 561 pages [547] · 2022


Table of contents:
Preface
Contents
List of Figures
List of Tables
Part I Computational Intelligence-Based Optimization
1 Optimization Algorithms Surpassing Metaphor
1.1 Introduction
1.2 Gradient-Based Optimizer (GBO)
1.2.1 Brief Summary of Newton's Method
1.2.2 Modified Version
1.2.3 Gradient-Based Optimizer
1.3 Runge Kutta Optimizer (RUN)
1.3.1 Brief Reviews of Runge Kutta Method (RKM)
1.3.2 Introduction of Runge Kutta Optimizer
1.4 Differential Evolution (DE)
1.4.1 Vector Initialization
1.4.2 Mutation Operation
1.4.3 Crossover Mechanism
1.4.4 Evaluation and Selection Operations
1.5 Conclusion
References
2 Improving Approaches for Meta-heuristic Algorithms: A Brief Overview
2.1 Introduction
2.2 Meta-heuristic Optimization Algorithms
2.2.1 Swarm Intelligence Algorithms
2.2.2 Evolutionary Computation Algorithms
2.2.3 Science-Based Algorithms
2.2.4 Human-Based Algorithms
2.3 Improvement Strategies
2.3.1 Opposition-Based Learning
2.3.2 Lévy Flight
2.3.3 Chaotic Maps
2.3.4 Greedy Search Strategy
2.3.5 Quantum Computing
2.3.6 Binary Search
2.3.7 Self-adaptive Mechanism
2.3.8 Multiple Strategies
2.4 Hybrid Models
2.5 Conclusion
References
3 Multi-Objective Optimization Application in Water and Environmental Sciences
3.1 Introduction
3.2 Multi-Objective Optimization
3.2.1 Multi-Objective Problem
3.2.2 Basic Associated Concepts and Definitions
3.3 Multi-Objective Optimization Approaches
3.3.1 Prior Methods
3.3.2 Posteriori Methods
3.3.3 Interactive (Progressive) Methods
3.4 Multi-Objective Optimization Problems Methodologies
3.4.1 Classical Methods
3.4.2 Meta-Heuristic Optimization Algorithms
3.5 Multi-Objective Optimization Application in Water and Environmental Sciences
3.6 Conclusion
References
4 A Survey of PSO Contributions to Water and Environmental Sciences
4.1 Introduction
4.2 Particle Swarm Optimization: Theory and Mechanism
4.3 From Initial PSO to Recent Developments
4.3.1 PSO Parameters
4.3.2 PSO for Multi-Objective Problems
4.3.3 Hybrid Versions of PSO
4.4 PSO Contributions to Water and Environmental Sciences
4.4.1 Water Quality and Pollution
4.4.2 Water Distribution Networks
4.4.3 Calibration of Hydrologic Models
4.4.4 Water Infrastructure
4.4.5 Reservoir Operation
4.4.6 Groundwater
4.4.7 Solar Radiation
4.4.8 Some Other Contributions
4.5 Concluding Remarks
References
5 Firefly Algorithms (FAs): Application in Water Resource Systems
5.1 Introduction
5.2 Firefly Algorithm (FA)
5.3 Multi-objective Firefly Algorithm (MOFA)
5.4 Developed Firefly Algorithm (DFA) in Water Resource Systems
5.5 Multi-objective Developed Firefly Algorithm (MODFA) in Water Resource Systems
5.6 Conclusion
References
6 Multi-Objective Calibration of a Single-Event, Physically-Based Hydrological Model (KINEROS2) Using AMALGAM Approach
6.1 Introduction
6.2 Materials and Methods
6.2.1 Study Area
6.2.2 Data Set
6.2.3 KINEROS2
6.2.4 Model Parameters in Optimization Process
6.2.5 AMALGAM
6.2.6 Objective Functions
6.3 Results and Discussion
6.3.1 Optimization Results
6.3.2 Multi-Objective Flood Simulation
6.4 Conclusion
References
7 Ant Colony Optimization Algorithms: Introductory Steps to Understanding
7.1 Introduction
7.2 The Ant Colony Algorithm Methodology
7.2.1 The ACO Algorithm Detail
7.2.2 Main Advantages and Disadvantages of ACO
7.2.3 General Code of ACO in MATLAB
7.3 The Ant Lion Algorithm Methodology
7.3.1 The ALO Algorithm Detail
7.3.2 General Code of ALO in MATLAB
7.4 The Multi-objective Ant Lion Algorithm Methodology
7.4.1 The MOALO Algorithm Detail
7.4.2 General Code of MOALO in MATLAB
7.5 Conclusion
References
Part II Data Mining and Machine Learning
8 Data Mining Methods for Modeling in Water Science
8.1 Introduction
8.2 Basic Concepts of Data Mining (DM) Techniques
8.2.1 Support Vector Machine (SVM)
8.2.2 Adaptive Neural-Fuzzy Inference System (ANFIS)
8.2.3 Basic Concepts of Extreme Learning Machine (ELM)
8.2.4 Genetic Programming (GP)
8.2.5 Basic Concepts of Multivariate Adaptive Regression Spline (MARS)
8.3 Conclusion
References
9 Using Support Vector Machine (SVM) in Modeling Water Resources Systems
9.1 Introduction
9.2 Classification Differences in SVM with Other Neural Networks
9.3 The Concept of Support Vector and Support Vector Machine
9.4 Application of SVM in the Nonlinear Distribution of Data and the Concept of Transmission Functions
9.5 Support Vector Regression (SVR)
9.5.1 Support Vector Linear Regression (LSVR)
9.5.2 Support Vector Nonlinear Regression (NLSVR)
9.6 The Practical Process of Using SVM
9.7 Prepare Data for Introduction to SVM
9.8 SVM Software and Problem-Solving
9.8.1 SVM and SVR in MATLAB
9.8.2 SVM in Tanagra
9.9 Conclusion
References
10 Decision Tree (DT): A Valuable Tool for Water Resources Engineering
10.1 Introduction
10.2 DT Structure
10.3 DT Concept and Its Types
10.3.1 Univariate DT
10.3.2 Multivariate DT
10.3.3 Binary DT
10.4 Input Features to a Node
10.4.1 Nominal Attributes
10.4.2 Ordinal Attributes
10.4.3 Continuous Attributes
10.5 Impurity Criterion to Identify the Input Feature to the Node
10.5.1 Entropy Impurity Criterion
10.5.2 Gini Impurity Criterion
10.5.3 Impurity Criterion of Misclassification Error
10.6 DT Training
10.6.1 Use the Returned Function in DT Training
10.7 Information Gain and Gain Ratio
10.8 Stopping Tree Growth and Leaf Production
10.9 Decision Boundary
10.10 Regression Tree and Its Difference with Classification Tree
10.11 DT in MATLAB
10.11.1 Types of Optimization Criteria in MATLAB
10.11.2 Optimal Growth of DT in MATLAB
10.12 Basic Commands in MATLAB
10.13 Conclusion
References
11 The Basis of Artificial Neural Network (ANN): Structures, Algorithms and Functions
11.1 Introduction
11.2 Methodology
11.2.1 ANN and Its Structures
11.2.2 Stimulus or Transmission Functions
11.2.3 ANN Architecture
11.2.4 Perceptron Neural Network
11.2.5 ANN Training
11.2.6 ANN Classification Based on the Training Method
11.2.7 How ANN Works After Training
11.2.8 ANN Implementation Steps
11.2.9 ANN Software and Problem Solving
11.3 Conclusion
References
12 Genetic Programming (GP): An Introduction and Practical Application
12.1 Introduction
12.2 Methodology of Genetic Algorithms
12.2.1 Initial Population Formation
12.2.2 Population Assessment
12.2.3 Choosing Parents
12.2.4 Crossover (Composition)
12.2.5 Mutation
12.2.6 Selection of Offspring
12.2.7 Tournament Replacement
12.2.8 Termination of the Algorithm
12.3 Detailed Introduction of GP
12.3.1 Chromosomal Representation
12.3.2 Hierarchical Labeled Structure Trees
12.3.3 Modular GP and Automatic Defined Functions (ADF)
12.3.4 Other Representations
12.4 The Problem-Solving Process by the GP
12.4.1 Initial Steps
12.4.2 Initialization
12.4.3 Breeding Populations of Programs
12.4.4 Tree Depth Control
12.4.5 Termination of the Process and Determining the Results
12.5 GP Pseudo-Code in MATLAB
12.6 Conclusion
References
13 Deep Learning Application in Water and Environmental Sciences
13.1 Introduction
13.2 Deep Learning
13.3 Deep Learning Architectures
13.3.1 Convolutional Neural Network
13.3.2 Auto-Encoder
13.3.3 Restricted Boltzmann Machine and Deep Belief Network
13.3.4 Recurrent Neural Network
13.3.5 Long Short-Term Memory and Gated Recurrent Unit
13.4 Deep Learning Application in Water and Environmental Sciences
13.5 Conclusions
References
14 Support Vector Machine Applications in Water and Environmental Sciences
14.1 Introduction
14.2 Support Vector Machines
14.2.1 SVM for Classification
14.2.2 SVM for Linear Regression
14.2.3 SVM for Nonlinear Regression
14.2.4 Variant of SVM
14.2.5 Hyper-Parameters and Kernels Selection
14.2.6 Hybrid SVM Models
14.3 SVM Application in Water and Environmental Sciences
14.4 Conclusion
References
15 Fuzzy Reinforcement Learning for Canal Control
15.1 Introduction
15.2 Basic Concepts
15.2.1 Fuzzy Systems
15.2.2 Reinforcement Learning
15.3 RL with Critic-Only Architecture
15.3.1 Fuzzy SARSA Learning
15.3.2 Fuzzy Q Learning
15.4 Nonlinear Model
15.5 Case Study
15.6 Performance Indicators
15.7 Test Scenarios
15.8 Results and Discussion
15.9 Conclusions
References
16 Application of Artificial Neural Network and Fuzzy Logic in the Urban Water Distribution Networks Pipe Failure Modelling
16.1 Introduction
16.2 Material and Method
16.2.1 Parameters Affecting Pipe Failure
16.2.2 Pipe’s Failure Rate
16.2.3 Evaluation Indices
16.2.4 Intelligent Combine Model (ICM) Development
16.3 Example
16.3.1 Specifications of the Studied Network
16.3.2 Calculation of the Existing Pipes Failure Rate
16.3.3 Development of the Intelligent and Statistical Models
16.4 Evaluation of the Models
16.5 Conclusion
References
17 Parallel Chaos Search Based Incremental Extreme Learning Machine Based Empirical Wavelet Transform: A New Hybrid Machine Learning Model for River Dissolved Oxygen Forecasting
17.1 Introduction
17.1.1 Background
17.1.2 Literature Review
17.1.3 Contributions and Objectives
17.2 Materials and Methods
17.2.1 Study Site
17.2.2 Performance Assessment of the Models
17.3 Methodology
17.3.1 Parallel Chaos Search Based Incremental Extreme Learning Machine (PC-ELM)
17.3.2 Artificial Neural Network
17.3.3 Empirical Wavelet Transform (EWT)
17.4 Results and Discussion
17.5 Summary and Conclusions
References
18 Multi-step Ahead Forecasting of River Water Temperature Using Advance Artificial Intelligence Models: Voting Based Extreme Learning Machine Based on Empirical Mode Decomposition
18.1 Introduction
18.1.1 Background and Motivation
18.1.2 Literature Review
18.1.3 Objective, Contributions and Innovation
18.1.4 Chapter Structure
18.2 Materials and Methods
18.2.1 Study Area and Data
18.2.2 Performance Assessment of the Models
18.3 Methodology
18.3.1 Artificial Neural Network (MLPNN)
18.3.2 Voting Based Extreme Learning Machine (VELM)
18.3.3 Empirical Mode Decomposition (EMD)
18.4 Results and Discussion
18.4.1 Results at the USGS 14210000 Station
18.4.2 Results at the USGS 14211010 Station
18.5 Conclusions and Future Recommendations
References
Part III Challenges and Nuances of Modern Computational Intelligence Methods
19 Computational Intelligence: An Introduction
19.1 Introduction
19.2 Computational Intelligence
19.2.1 Artificial Neural Networks
19.2.2 Fuzzy Systems
19.2.3 Evolutionary Computation Algorithms
19.2.4 Hybrid Models
19.3 Conclusion
References
20 Pre-processing and Input Vector Selection Techniques in Computational Soft Computing Models of Water Engineering
20.1 Introduction
20.2 Methodology
20.2.1 Boruta Feature Selection Algorithm (BFS)
20.2.2 Gamma Test (GT)
20.2.3 The Gamma Test Pseudocode
20.2.4 Subset Selection by SSMD
20.2.5 The SSMD Python Code
20.2.6 Principal Component Analysis (PCA)
20.2.7 PCA Algorithm Pseudo Code
20.2.8 Experimental Data of the Test Case
20.3 Application of Pre-processing Techniques
20.4 Conclusions
References
21 Application of Cellular Automata in Water Resource Monitoring Studies
21.1 Introduction
21.2 Introduction to Cellular Automata (CA)
21.3 Application of CA in Water Resources
21.3.1 Runoff
21.3.2 Water Quality Monitoring
21.3.3 Reservoir Operation
21.3.4 Water Demand
21.3.5 Water Distribution
21.3.6 Flood
21.3.7 Effects of Land Use/Land Cover Dynamics on Water Resources
21.3.8 Modeling and Simulating Groundwater
21.4 Conclusion
References
22 The Challenge of Model Validation and Its (Hydrogeo)ethical Implications for Water Security
22.1 Introduction
22.2 The Challenge of Validating Models
22.3 The (Hydro)geoethics: The Ethics of Using Models for Water Management
22.4 Water Modeling Validation Criteria: An Hydrogeoethical and Practical Point of View
22.5 Conclusion
References
23 Application of Agent Based Models as a Powerful Tool in the Field of Water Resources Management
23.1 Introduction
23.2 Methodology
23.2.1 What is a Complex System?
23.2.2 How to Model a Complex System
23.2.3 Agent-Based Model
23.2.4 Explanation of Water Resources as Complex Systems
23.3 Examples
23.3.1 Application of Agent-Based Modeling in Water Resources Management
23.3.2 How to Define Each Agent in Water Resources Management
23.4 Conclusion
References
24 Enhancing Vegetation Indices from Sentinel-2 Using Multispectral UAV Data, Google Earth Engine and Machine Learning
24.1 Introduction
24.2 Material and Method
24.2.1 Study Area
24.2.2 Vegetation Indices (VI)
24.2.3 Aerial Images
24.2.4 Calculating Vegetation Indices Using Google Earth Engine
24.2.5 Statistical Analysis
24.3 Machine Learning with Python
24.3.1 Data Processing
24.3.2 Model Training
24.3.3 Model Evaluation
24.4 Conclusion
References
25 Ten Years of GLEAM: A Review of Scientific Advances and Applications
25.1 Global Land Evaporation from Space
25.2 What is GLEAM?
25.3 The GLEAM Dataset
25.4 Metanalysis of Existing Applications
25.5 Accuracy and Validation
25.6 Scientific Applications
25.7 GLEAM Future Roadmap
25.8 Conclusion
References


Studies in Computational Intelligence 1043

Omid Bozorg-Haddad · Babak Zolghadr-Asli, Editors

Computational Intelligence for Water and Environmental Sciences

Studies in Computational Intelligence Volume 1043

Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.


Editors Omid Bozorg-Haddad College of Agriculture and Natural Resources University of Tehran Karaj, Alborz, Iran

Babak Zolghadr-Asli Centre for Water in the Minerals Industry Sustainable Minerals Institute The University of Queensland Brisbane, QLD, Australia The Centre for Water Systems (CWS) The University of Exeter Exeter, UK

ISSN 1860-949X ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-981-19-2518-4 ISBN 978-981-19-2519-1 (eBook)
https://doi.org/10.1007/978-981-19-2519-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

As we usher in a new era of technology and computational capability, vast opportunities have opened up for those working in applied sciences and engineering to tackle real-world problems in a fashion that relies mainly on finding approximate solutions. Computational intelligence (CI) is the umbrella term used for this branch of frameworks and is, technically, a subdivision of artificial intelligence (AI). Although CI, occasionally referred to as soft computing, has arguably been around since the 1980s, a unanimously agreed-upon definition of the term has yet to emerge. Generally speaking, these approaches, which are often inspired by nature, embrace the way the human brain perceives the surrounding world and employs reasoning to tackle complex problems for which conventional mathematical and traditional methods struggle to find exact solutions. In contrast to conventional schools of thought in computation, these approaches inherently tolerate a certain level of imprecision, uncertainty, partial truth, and approximation, providing a paradigm shift in the way complex real-world problems are perceived. In the context of water and environmental sciences, the application of CI for this general purpose, labelled in the literature as a subcategory of hydroinformatics, provides a practical approach to understanding and resolving the complicated and intertwined real-world problems that often pose serious challenges to traditional, deterministic, precise frameworks. Naturally, resorting to these techniques comes at the cost of introducing a certain level of error and tolerating some inaccuracy in the emerging solutions. The main idea behind these methods, however, is that they can cap these imprecisions within negligible margins.
That being said, the sheer speed and capacity of these techniques to address complex real-world problems are compensation enough, for obvious reasons. The complications in modern water- and environment-related problems, imposed by phenomena such as climate change and over-population, have in turn made soft computing a trending and topical subject in this field. One could even argue that, in this day and age, any attempt to truly achieve sustainability

on a large-enough scale would fall short without such techniques, given that these problems are inherently too complicated and vast to be captured by traditional and conventional approaches. In that spirit, this book intends to provide a comprehensive yet fresh perspective on the cutting-edge CI-oriented approaches that can be applied to modern-day real-world problems. Needless to say, these approaches are pushing the current boundaries of simulation, optimization, modelling, forecasting, projection, prediction, time-series analysis, and decision-making in water resources planning and management. As such, here we take a deep dive into topics such as meta-heuristic evolutionary optimization algorithms (e.g., GA and PSO), data mining techniques (e.g., SVM and ANN), probabilistic and Bayesian-oriented frameworks, fuzzy logic, AI, deep learning, and expert systems.
The book is structured so that each chapter can be read as a stand-alone lecture on a specific topic. The interested reader, whether a seasoned researcher or an early-career student looking to get a foot in the door, can explore the subject area from a somewhat different and fresh perspective. The book can also be read as a whole, starting from Chap. 1 and working through to the end; in fact, we encourage readers to do so. Read this way, it offers an interesting, thought-provoking perspective on how one perceives sustainability in water and environmental sciences.

Karaj, Iran
Brisbane, Australia/Exeter, UK

Omid Bozorg-Haddad Babak Zolghadr-Asli

Contents

Part I Computational Intelligence-Based Optimization

1 Optimization Algorithms Surpassing Metaphor (p. 3)
Arvin Samadi-Koucheksaraee, Seyedehelham Shirvani-Hosseini, Iman Ahmadianfar, and Bahram Gharabaghi

2 Improving Approaches for Meta-heuristic Algorithms: A Brief Overview (p. 35)
Arya Yaghoubzadeh-Bavandpour, Omid Bozorg-Haddad, Babak Zolghadr-Asli, and Amir H. Gandomi

3 Multi-Objective Optimization Application in Water and Environmental Sciences (p. 63)
Arya Yaghoubzadeh-Bavandpour, Omid Bozorg-Haddad, Babak Zolghadr-Asli, and Mohammad Reza Nikoo

4 A Survey of PSO Contributions to Water and Environmental Sciences (p. 85)
Ahmad Ferdowsi, Sayed-Farhad Mousavi, Seyed Mohamad Hoseini, Mahtab Faramarzpour, and Amir H. Gandomi

5 Firefly Algorithms (FAs): Application in Water Resource Systems (p. 103)
Ali Arefinia, Omid Bozorg-Haddad, Arman Oliazadeh, Babak Zolghadr-Asli, and Arturo A. Keller

6 Multi-Objective Calibration of a Single-Event, Physically-Based Hydrological Model (KINEROS2) Using AMALGAM Approach (p. 119)
Mohsen Pourreza-Bilondi, Hadi Memarian, Mahnaz Ghaffari, and Zinat Komeh

7 Ant Colony Optimization Algorithms: Introductory Steps to Understanding (p. 137)
Arman Oliazadeh, Omid Bozorg-Haddad, Ali Arefinia, and Sajjad Ahmad

Part II Data Mining and Machine Learning

8 Data Mining Methods for Modeling in Water Science (p. 157)
Seyedehelham Shirvani-Hosseini, Arvin Samadi-Koucheksaraee, Iman Ahmadianfar, and Bahram Gharabaghi

9 Using Support Vector Machine (SVM) in Modeling Water Resources Systems (p. 179)
Ali Arefinia, Omid Bozorg-Haddad, Milad Akhavan, Ramin Baghbani, Alireza Heidary, Babak Zolghadr-Asli, and Heejun Chang

10 Decision Tree (DT): A Valuable Tool for Water Resources Engineering (p. 201)
Maedeh Enayati, Omid Bozorg-Haddad, Masoud Pourgholam-Amiji, Babak Zolghadr-Asli, and Mohsen Tahmasebi Nasab

11 The Basis of Artificial Neural Network (ANN): Structures, Algorithms and Functions (p. 225)
Soheila Zarei, Omid Bozorg-Haddad, and Mohammad Reza Nikoo

12 Genetic Programming (GP): An Introduction and Practical Application (p. 251)
Arman Oliazadeh, Omid Bozorg-Haddad, Hamidreza Rahimi, Saiyu Yuan, Chunhui Lu, and Sajjad Ahmad

13 Deep Learning Application in Water and Environmental Sciences (p. 273)
Arya Yaghoubzadeh-Bavandpour, Omid Bozorg-Haddad, Babak Zolghadr-Asli, and Francisco Martínez-Álvarez

14 Support Vector Machine Applications in Water and Environmental Sciences (p. 291)
Arya Yaghoubzadeh-Bavandpour, Mohammadra Rajabi, Hamed Nozari, and Sajjad Ahmad

15 Fuzzy Reinforcement Learning for Canal Control (p. 311)
Kazem Shahverdi, Farinaz Alamiyan-Harandi, and J. M. Maestre

16 Application of Artificial Neural Network and Fuzzy Logic in the Urban Water Distribution Networks Pipe Failure Modelling (p. 333)
Seyed Mehran Jafari, Omid Bozorg-Haddad, and Mohammad Reza Nikoo

17 Parallel Chaos Search Based Incremental Extreme Learning Machine Based Empirical Wavelet Transform: A New Hybrid Machine Learning Model for River Dissolved Oxygen Forecasting (p. 355)
Salim Heddam

18 Multi-step Ahead Forecasting of River Water Temperature Using Advance Artificial Intelligence Models: Voting Based Extreme Learning Machine Based on Empirical Mode Decomposition (p. 377)
Salim Heddam

Part III Challenges and Nuances of Modern Computational Intelligence Methods

19 Computational Intelligence: An Introduction (p. 411)
Arya Yaghoubzadeh-Bavandpour, Omid Bozorg-Haddad, Babak Zolghadr-Asli, and Vijay P. Singh

20 Pre-processing and Input Vector Selection Techniques in Computational Soft Computing Models of Water Engineering (p. 429)
Hossien Riahi-Madvar and Bahram Gharabaghi

21 Application of Cellular Automata in Water Resource Monitoring Studies (p. 449)
Matin Shahri, Maryam Naghdizadegan Jahromi, Najmeh Neysani Samany, Gianluigi Busico, and Seyyed Kazem Alavipanah

22 The Challenge of Model Validation and Its (Hydrogeo)ethical Implications for Water Security (p. 477)
César de Oliveira Ferreira Silva

23 Application of Agent Based Models as a Powerful Tool in the Field of Water Resources Management (p. 491)
Nafiseh Bahrami, Seyed Mohammad Kazem Sadr, Abbas Afshar, and Mohammad Hadi Afshar

24 Enhancing Vegetation Indices from Sentinel-2 Using Multispectral UAV Data, Google Earth Engine and Machine Learning (p. 507)
Mojtaba Naghdyzadegan Jahromi, Shahrokh Zand-Parsa, Ali Doosthosseini, Fatemeh Razzaghi, and Sajad Jamshidi

25 Ten Years of GLEAM: A Review of Scientific Advances and Applications (p. 525)
Mojtaba Naghdyzadegan Jahromi, Diego Miralles, Akash Koppa, Dominik Rains, Shahrokh Zand-Parsa, Hamidreza Mosaffa, and Sajad Jamshidi

List of Figures

Fig. 1.1 Flowchart of gradient-based algorithm (GBO) (p. 15)
Fig. 1.2 Process to obtain the next position in Runge Kutta optimizer (RUN) (p. 21)
Fig. 1.3 Search process in Runge Kutta optimizer (RUN) (p. 23)
Fig. 1.4 Flowchart of Runge Kutta optimizer (RUN) (p. 25)
Fig. 1.5 Runge Kutta optimizer (RUN) process (p. 27)
Fig. 4.1 Pseudo code of PSO (p. 87)
Fig. 4.2 The relationship between PSO parameters (Adopted from Harrison et al., 2016) (p. 89)
Fig. 4.3 The most notable algorithms that have been hybridized with PSO (GA = Genetic Algorithm, DE = Differential Evolution, BA = Bat Algorithm, ABC = Artificial Bee Colony, ACO = Ant Colony Optimization, SA = Simulated Annealing) (p. 90)
Fig. 5.1 Pseudo code of FA (p. 109)
Fig. 5.2 The target space of the problem is the goal of minimization (p. 110)
Fig. 5.3 The target space of the two-objective problem of minimization (p. 110)
Fig. 5.4 MOFA flowchart (p. 111)
Fig. 5.5 DFA flowchart (p. 116)
Fig. 5.6 MODFA flowchart (p. 117)
Fig. 6.1 Geographic location of Tamar watershed in Iran (p. 123)
Fig. 6.2 Land use map of Tamar watershed (p. 124)
Fig. 6.3 Pareto optimal front using a three-objective optimization for the storm events #1-#4 (a-c-e-g) and observed hydrograph versus simulated hydrographs based on different objective functions with interactive solutions for the events #1-#4 (b-d-f-h) (p. 130)
Fig. 7.1 Flowchart of ant colony algorithm (p. 140)
Fig. 7.2 Flowchart of antlion algorithm (p. 144)
Fig. 7.3 Schematic prey of antlion (p. 145)
Fig. 7.4 Flowchart of MOALO algorithm (p. 150)
Fig. 8.1 Flowchart of adaptive neural-fuzzy inference system (ANFIS) (p. 165)
Fig. 8.2 Structure of adaptive neural-fuzzy inference system (ANFIS) (p. 165)
Fig. 8.3 Structure of extreme learning machine (ELM) (p. 168)
Fig. 9.1 Data distribution diagram based on two features A and B (p. 181)
Fig. 9.2 An example of a possible separator (p. 181)
Fig. 9.3 Data distribution diagram with boundary data and support vectors (p. 182)
Fig. 9.4 Data distribution diagram with support vectors and support vector machine (p. 183)
Fig. 9.5 Nonlinear distribution of data (p. 183)
Fig. 9.6 Data transferred to another space using the transfer function (p. 184)
Fig. 9.7 Geometric representation of the relation of an insensitive epsilon error function (p. 185)
Fig. 9.8 Data selection (p. 192)
Fig. 9.9 Information of the introduced data file (p. 192)
Fig. 9.10 Program training (p. 193)
Fig. 9.11 Definition of inputs (p. 193)
Fig. 9.12 Definition of outputs (p. 194)
Fig. 9.13 Definition of parameters (p. 194)
Fig. 9.14 Defining the type of transfer function (p. 195)
Fig. 9.15 Definition of input and target data (p. 195)
Fig. 9.16 Regression assessment selection (p. 196)
Fig. 9.17 Selecting the selected option in the regression assessment window (p. 196)
Fig. 9.18 Selecting the unselected option in the regression assessment window (p. 197)
Fig. 9.19 View training data error (p. 197)
Fig. 10.1 DT approach for classification of diabetic patients (p. 203)
Fig. 10.2 Different ways of classifying nominal features: a multiple classification and b binary classification (p. 206)
Fig. 10.3 Different types of sequential feature classification (p. 207)
Fig. 10.4 Different types of continuous feature classification (numerical input mode): a binary output and b domain output (p. 208)
Fig. 10.5 Classification of a binary tree with Gini index (p. 211)
Fig. 10.6 Comparison of three impurity criteria: entropy, Gini, and misclassification error (p. 213)
Fig. 10.7 DT Example of buying a house (p. 217)
Fig. 10.8 The decision boundary (p. 218)
Fig. 11.1 Input vector, weight matrix, bias, transformation function, and output (p. 227)

List of Figures

Fig. 11.2 Fig. 11.3 Fig. 11.4 Fig. 11.5 Fig. 11.6 Fig. 11.7 Fig. 11.8 Fig. 11.9 Fig. 11.10 Fig. 11.11 Fig. 11.12 Fig. 12.1 Fig. 12.2 Fig. 12.3 Fig. 12.4 Fig. 12.5 Fig. 12.6 Fig. 12.7 Fig. 12.8 Fig. 13.1 Fig. 13.2 Fig. 13.3 Fig. 13.4 Fig. 13.5 Fig. 13.6 Fig. 13.7 Fig. 14.1

Fig. 14.2 Fig. 15.1 Fig. 15.2 Fig. 15.3 Fig. 15.4 Fig. 15.5

xiii

A single-input neuron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiple inputs ANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stimulus functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Monolayer perceptron output . . . . . . . . . . . . . . . . . . . . . . . . . . . . Linear separation of samples in monolayer perceptron (impossible (a) and possible (b) states) . . . . . . . . . . . . . . . . . . . . Non-linear separation of Boolean XOR function . . . . . . . . . . . . Multi-layer perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Convection function minimization (Singer, 2016) . . . . . . . . . . . ANN learning with supervised methods (Swingler, 2012) . . . . . Comparison between observation and simulated data by ANN in linear problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Comparison between observation and simulated data by ANN in nonlinear problem . . . . . . . . . . . . . . . . . . . . . . . . . . . Flowchart of GA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Schematic of the roulette wheel . . . . . . . . . . . . . . . . . . . . . . . . . . Displays the single-point crossover . . . . . . . . . . . . . . . . . . . . . . . Displays multi-point crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . Displays uniform crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The mutation genetic operator . . . . . . . . . . . . . . . . . . . . . . . . . . . The coupling genetic operator . . . . . . . . . . . . . . . . . . . . . . . . . . . Flowchart of GP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Convolutional neural network structure. Adapted from Shen (2018) and Zhang et al. (2018a, b) . . . . . . . . . . . . . . Autoencoder model structure. Adapted from Shen (2018) . . . . . Restricted Boltzmann machines model structure. Adapted from Sit et al. (2020) . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Deep belief network structure. Adapted from Sit et al. (2020) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A simple RNN structure. Adapted from Zhang et al. (2018a, b) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LSTM model structure. Adapted from Barzegar et al. (2020) and Yuan et al. (2020) . . . . . . . . . . . . . . . . . . . . . . . . . . . . GRU model structure. Adapted from Alom et al. (2019) . . . . . . An example of a linear support vector machine. Adapted from Mountrakis et al. (2011) and Raghavendra and Deka (2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nonlinear SVR. Adapted from Raghavendra and Deka (2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The block diagram of a fuzzy system . . . . . . . . . . . . . . . . . . . . . The block diagram of RL interaction . . . . . . . . . . . . . . . . . . . . . . The block diagram of critic-only architecture . . . . . . . . . . . . . . . A schematic plan view of the E1R1 Dez canal . . . . . . . . . . . . . . Assigned reward during the learning (FSL) . . . . . . . . . . . . . . . .

228 229 231 231 235 236 236 240 241 247 247 254 256 257 258 258 259 263 268 276 278 279 280 281 282 283

293 296 314 316 317 321 324

Fig. 15.6 Water depth variations upstream of check structures in Scenario 1 (FSL)
Fig. 15.7 Flow passing under check structures and delivering to off-takes in Scenario 1 (FSL)
Fig. 15.8 Water depth variations upstream of check structures in Scenario 2 (FSL)
Fig. 15.9 Flow passing under check structures and delivering to off-takes in Scenario 2 (FSL)
Fig. 15.10 Assigned reward during the learning (FQL)
Fig. 15.11 Water depth variations upstream of check structures in Scenario 1 (FQL)
Fig. 15.12 Flow passing under check structures and delivering to off-takes in Scenario 1 (FQL)
Fig. 15.13 Water depth variations upstream of check structures in Scenario 2 (FQL)
Fig. 15.14 Flow passing under check structures and delivering to off-takes in Scenario 2 (FQL)
Fig. 16.1 Schematic of a typical FFNN
Fig. 16.2 Sample structure of an ANFIS model (Gasemnezhad et al., 2014)
Fig. 16.3 The ICM model's structure for predicting failure rates of WDNs pipes
Fig. 16.4 The CMSER index values of various training algorithms and number of neurons
Fig. 16.5 The CMSER index values for various neurons' number and smoothing factor
Fig. 16.6 The values of the CMSER index for various weight functions
Fig. 16.7 Observed and predicted (by CIM model) value of pipes failure rate
Fig. 17.1 Map showing the location of the USGS 14181500 station at North Santiam River at Niagara, Marion County, Oregon, USA (Adopted from Risley et al., 2012)
Fig. 17.2 Sample autocorrelation (ACF) and partial autocorrelation function (PACF) for daily dissolved oxygen concentration
Fig. 17.3 The flowchart of the EWT&ELM&PCELM models
Fig. 17.4 Flowchart of the parallel chaos search based incremental extreme learning machine (PC-ELM) developed for forecasting daily river dissolved oxygen concentration
Fig. 17.5 Multi resolution analysis (MRA) components of dissolved oxygen dataset decomposed by the EWT method
Fig. 17.6 Scatterplots of measured against calculated river dissolved oxygen concentration (DO) at the USGS 14181500 station using the MLPNN coupled empirical wavelet transform (EWT) models (validation stage)

Fig. 17.7 Scatterplots of measured against calculated river dissolved oxygen concentration (DO) at the USGS 14181500 station using the ELM coupled empirical wavelet transform (EWT) models (validation stage)
Fig. 17.8 Scatterplots of measured against calculated river dissolved oxygen concentration (DO) at the USGS 14181500 station using the PCELM coupled empirical wavelet transform (EWT) models (validation stage)
Fig. 17.9 Boxplot of measured versus computed dissolved oxygen (DO) for all models (Validation level)
Fig. 17.10 Violinplot showing the distributions of measured versus computed dissolved oxygen (DO) concentration for all models (Validation level)
Fig. 18.1 Location map of the two USGS station in Clackamas River, Oregon, USA (from Lee, 2011)
Fig. 18.2 Sample autocorrelation (ACF) and partial autocorrelation function (PACF) for daily dissolved oxygen concentration
Fig. 18.3 The flowchart of the proposed modelling strategy based on empirical mode decomposition (EMD)
Fig. 18.4 Structures of the extreme learning machines (ELM) and the artificial neural networks models (ANN)
Fig. 18.5 Architecture of the voting-based extreme learning machines (VELM)
Fig. 18.6 Result of the original river water temperature for USGS 14211010 station decomposed by EMD
Fig. 18.7 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14210000 station using MLPNN coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.8 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14210000 station using ELM coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.9 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14210000 station using VELM coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.10 Boxplot of measured versus computed Water Tw for all models for the USGS14210000 (Validation level)
Fig. 18.11 Violinplot showing the distributions of measured versus Water Tw for all models for the USGS14210000 (Validation level)

Fig. 18.12 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14211010 station using MLPNN coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.13 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14211010 station using ELM coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.14 Scatterplots of measured against calculated river water temperature (Tw) at the USGS 14211010 station using VELM coupled empirical mode decomposition (EMD) models (validation stage)
Fig. 18.15 Boxplot showing the distributions of measured versus Water Tw for all models for the USGS14211010 (Validation level)
Fig. 18.16 Violinplot showing the distributions of measured versus Water Tw for all models for the USGS14211010 (Validation level)
Fig. 20.1 The SSMD results in data clustering for train and test subsets
Fig. 20.2 Scree plot showing the variance of all components
Fig. 21.1 a Physical realm of a one-dimensional problem, b relating CA
Fig. 21.2 a Physical realm of a two-dimensional problem, b relating CA
Fig. 21.3 Sample 2D network a grid-based, b triangle-based, and c hexagonal-based
Fig. 21.4 Neighborhood in CA a null neighborhood, b Von Neumann neighborhood, and c Moor neighborhood
Fig. 21.5 Number of articles published per year
Fig. 21.6 Number of water resource articles using CA
Fig. 21.7 Classification for application of CA in water resources
Fig. 22.1 Basic flowchart of a usual validation process of water models
Fig. 22.2 Interactions of real-world problems and the (hydrogeo)ethical dilemmas of water modeling validation. Adapted from Khazanchi (1996) and Silva et al. (2021)
Fig. 23.1 Three sets of mathematical models for complex dynamic systems. This is a conceptual expression of White-Box, Black-Box, and Gray Box, based on the level of comprehensibility of the process, inspired by Kalmykov and Kalmykov (2015)

Fig. 23.2 Concept of the law if (Stimulus), then (response), inspired by "How adaptation builds complexity" (Holland & Order, 1995)
Fig. 24.1 Result of "print (satellite_collection)" command in Google earth engine
Fig. 24.2 Maps of NDVI, EVI, EVI2, and SAVI retrieved from Sentinel-2 (left column), model's prediction (middle column), and drone images (right column)
Fig. 25.1 Number of peer-reviewed scientific articles using GLEAM products (E, Es, Et, Ei, Ep, SM, λE) since 2013. Source Google Scholar. Date of the search: 15 October 2021
Fig. 25.2 The number of articles that used GLEAM data in each given Continent. Source Google Scholar. Date of the search: 15 October 2021

List of Tables

Table 1.1 The GBO's pseudo-code
Table 1.2 The RUN's pseudo-code
Table 1.3 The DE's pseudo-code
Table 2.1 Swarm intelligence algorithms
Table 2.2 Science based algorithms
Table 2.3 Human-based algorithms
Table 2.4 Opposition-based learning strategy application
Table 2.5 Lévy flight strategy application
Table 2.6 Various chaotic maps
Table 2.7 Chaotic strategy application
Table 2.8 Greedy search strategy application
Table 2.9 Quantum computing application
Table 2.10 Binary search application
Table 2.11 Self-adaptive mechanism application
Table 2.12 Multiple strategies for improving meta-heuristic optimization algorithms
Table 2.13 Hybrid models
Table 3.1 Swarm intelligence algorithms
Table 3.2 Evolutionary computation algorithm
Table 3.3 Science based algorithms
Table 3.4 Human-based algorithms
Table 3.5 Multi-objective optimization application in water and environmental sciences
Table 4.1 Contribution of PSO and its variants to water and environmental problems (in some of these studies, PSO was used with simulation models and artificial intelligence techniques)
Table 6.1 Characteristics of selected rainfall events
Table 6.2 Optimization parameters used in hydroPSO, adopted from Memarian et al. (2019)

Table 6.3 The K2 optimized parameters in different simulations based on NSE and ESP objective functions and interactive solution
Table 6.4 The values of different objective functions in hydrological simulations based on the interactive solution
Table 10.1 Classification of continuous features by Gini criterion
Table 11.1 Examples of different applications of ANN in water resources
Table 13.1 Presents studies that utilize DL models in the different water and environmental topics
Table 14.1 SVM application in water and environmental sciences
Table 15.1 The E1R1 canal's specification
Table 15.2 Performance standard (Molden & Gates, 1990)
Table 15.3 The details of delivered flows in the defined scenarios (off-takes 1 and 2 are closed)
Table 15.4 Water delivery indicators (FSL)
Table 15.5 Water depth indicators in percent (FSL)
Table 15.6 Water delivery indicators (FQL)
Table 15.7 Water depth indicators in percent (FQL)
Table 16.1 Factors affecting pipe failure
Table 16.2 The properties of the studied zone of Grogan WDN
Table 16.3 Specifications and failure rates of some of the existing network pipes
Table 16.4 Proper value of the SVR model's parameters
Table 16.5 The predicted failure rate values of 17 pipes using various models
Table 16.6 Statistical indices of the models in the validation data
Table 16.7 CMSER index values of different model in validation stage
Table 16.8 The value of indices used by Tropsha et al. (2003) to evaluate the predictive ability of developed models
Table 17.1 Summary statistics of River water temperature
Table 17.2 The input combinations of different models
Table 17.3 Performances of different forecasting models at the USGS 14181500 station
Table 18.1 Summary statistics of River water temperature
Table 18.2 The input combinations of different models
Table 18.3 Performances of different forecasting models at the USGS 14210000 station
Table 18.4 Performances of different forecasting models at the USGS 14211010 station
Table 19.1 ANNs application in water and environmental studies
Table 19.2 Fuzzy system application in water and environmental studies
Table 19.3 Swarm intelligence algorithms
Table 19.4 SI and EC algorithms application
Table 20.1 An example of GT with input/output features
Table 20.2 The Euclidean distance between the input features of example in Table 20.1
Table 20.3 The sorted Euclidean distance table and corresponding nearest neighbor index
Table 20.4 The statistical results of applying SSMD on the whole database and resulted train/test subsets
Table 20.5 The GT results on the selected 12 input masks for feature selection
Table 20.6 The PCA results on bedload transport data
Table 20.7 Rotated PC loading of bedload effective parameters
Table 21.1 Summary of selected articles for water resources using CA
Table 23.1 Main features, environment, and rules of behavior drinking water consumers as an agent
Table 23.2 Main features, environment, and rules of behavior agricultural water consumer as an agent
Table 23.3 Main features, environment, and rules of behavior industrial water consumer as an agent
Table 23.4 Main features, environment, and rules of behavior environmental sector as an agent
Table 24.1 Name of different satellite collection/production
Table 24.2 Statistical analysis between model's prediction and drone index
Table 25.1 Evolution of forcing data sets of GLEAM

Part I

Computational Intelligence-Based Optimization

Chapter 1

Optimization Algorithms Surpassing Metaphor

Arvin Samadi-Koucheksaraee, Seyedehelham Shirvani-Hosseini, Iman Ahmadianfar, and Bahram Gharabaghi

1.1 Introduction

Real-world applications usually involve a wide range of features, such as limited search spaces and constraints in artificial intelligence systems, which depend on the preferences and budget of the program. To solve problems in diverse fields, scientists and decision-makers have to reach practical, intelligible, and adequate solutions within an appropriate time through different algorithms, whether exact or approximate. Take image segmentation (Zhao et al., 2020), epileptic seizure detection (Li et al., 2020), and engineering applications (Ba et al., 2020), for instance. That said, with the rising dynamism and complexity of problems in terms of uncertainty, multimodality, and so forth in the feature space, the intricacy and efficiency of enhanced solvers become the main preoccupation. A few good examples include intelligent traffic management (Liu et al., 2020), smart agriculture (Song et al., 2020), brain disease diagnosis (Fei et al., 2020), and bankruptcy prediction (Yu et al., 2021).

A. Samadi-Koucheksaraee (B) · I. Ahmadianfar Department of Civil Engineering, Behbahan Khatam Alanbia University of Technology, Behbahan, Iran e-mail: [email protected] I. Ahmadianfar e-mail: [email protected] S. Shirvani-Hosseini Department of Chemical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran e-mail: [email protected] B. Gharabaghi School of Engineering, University of Guelph, Guelph, Ontario N1G 2W1, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 O. Bozorg-Haddad and B. Zolghadr-Asli (eds.), Computational Intelligence for Water and Environmental Sciences, Studies in Computational Intelligence 1043, https://doi.org/10.1007/978-981-19-2519-1_1


In single-objective optimization (Yang et al., 2019), the diverse objectives must be supplied within a single known function, while other formulations have been developed, including models with various objectives (Cao, Dong, et al., 2020; Cao, Zhao, et al., 2020), memetic methods (Fu et al., 2020), multi-objective optimization (Cao et al., 2019), large-scale optimization (Cao, Dong, et al., 2020; Cao, Zhao, et al., 2020), and fuzzy optimization (Luo et al., 2021). Furthermore, two strategies can be employed to tackle such problems and their sophisticated mathematical models. The first procedure uses gradients and explicit equations to unfold the problem (Zeng et al., 2019), while the other addresses it through a trial-and-error technique (Abdel-Basset et al., 2019; de Lacerda et al., 2020; Houssein et al., 2020; Kumar et al., 2019; Luo et al., 2018; Naruei et al., 2021; Talebi et al., 2021). Within this second class, metaheuristic and swarm-based optimization methods are the most popular, such as migrating birds optimization (MBO) (Duman et al., 2012), artificial bee colony (ABC) (Akay et al., 2012), bird mating optimizer (BMO) (Askarzadeh, 2014), tree-seed algorithm (TSA) (Kiran, 2015), sine cosine algorithm (SCA) (Mirjalili, 2016), grasshopper optimization algorithm (GOA) (Saremi et al., 2017), butterfly optimization algorithm (BOA) (Arora et al., 2019), sailfish optimizer (SFO) (Shadravan et al., 2019), seagull optimization algorithm (SOA) (Dhiman et al., 2019), spider monkey optimization (SMO1) (Sharma et al., 2019), sea lion optimization (SLnO) (Masadeh et al., 2019), monarch butterfly optimization (MBO) (Houssein et al., 2020), bald eagle search (BES) (Alsattar et al., 2020), and normative fish swarm algorithm (NFSA) (Tan et al., 2020). Admittedly, reaching the optimal solution without gradient information about the objective function can be challenging for multimodal, rotated, or composition problems.
Recently, computing the most appropriate solutions and using them according to their certainty levels has become an exciting matter for users (Ahmadianfar et al., 2017; Samadi-koucheksaraee et al., 2019). As a result, there is a specific concentration on meta-heuristic algorithms (MAs), which are now used widely in diverse areas of science, engineering, and machine learning. This growing attention is rooted in new difficulties in the real world and the rising demand for such solvers for sophisticated problems. The main aim of MAs is to avoid local optima, provide simplicity, and use a gradient-free process, yielding sufficient solutions for complicated problems with a large number of problematic local optima in the solution area. The key role of MAs is therefore to manage multimodal areas through iterative exploratory and exploitative behaviour. However, a few stumbling blocks, concerns, and gaps are evident in the previous generation of swarm-based optimization methods. A wide range of animal-based methods has recently been proposed and become renowned. Nevertheless, recent studies on these methods show that a comprehensive study is needed of the deficiencies of the mathematical models causing poor performance, their structural flaws, and problems with model verification methods (Hu et al., 2021). According to studies on these methods (Yu et al., 2020), it is evident that these matters have a profound impact on the reliability of the optimization, regardless of performance, parameters, or complexity compared to cutting-edge and advanced optimizers.


Nearly all of these methods evolve a population, as in the particle swarm optimizer (PSO) (Clerc, 2010; Kennedy et al., 1995; Poli et al., 2007) and the genetic algorithm (GA) (Holland, 1992), and fall into the families of swarm intelligence (SI) optimizers (Bonabeau et al., 1999) and evolutionary algorithm (EA) optimizers (Zitzler et al., 1998). What's more, EAs include evolutionary programming (EP) (Yao et al., 1999), evolution strategy (ES) (Hansen et al., 2003), and genetic programming (GP) (Koza et al., 1992). Biological evolutionary operations can cope with optimization problems through distinct processes, namely mutation, selection, and recombination. The GA, an EA inspired by Darwin's theory, was created by Holland (1992). The differential evolution (DE) algorithm (Lampinen et al., 2004) works much like the GA, albeit with different operator definitions. Concerning SI, a wide range of algorithms has been created by simulating the behaviours of natural organisms, such as hunting and gathering food. PSO (Kennedy et al., 1995) simulates the discipline of a flock of birds, in which information is shared among the group to coordinate it. Ant colony optimization (ACO) (Dorigo et al., 2005) imitates the food-gathering behaviour of ant colonies and has been employed for various problems. Other SI algorithms that have recently been developed are cuckoo search (CS) (Yang et al., 2009), monarch butterfly optimization (MBO) (Wang et al., 2019), and hunger games search (HGS) (Yang et al., 2021). In addition to these algorithms, the laws of physics play a vital role in some algorithms, called physics-based algorithms. Take the big-bang big-crunch (BBBC) optimization technique (Erol et al., 2006), water cycle algorithm (WCA) (Eskandar et al., 2012), black hole (BH) algorithm (Hatamlou, 2013), sine cosine algorithm (SCA) (Mirjalili, 2016), newton particle optimizer (NwPO) (Jeong et al., 2019), and equilibrium optimizer (EO) (Faramarzi et al., 2020), for instance.
Although MAs fall into many classifications, they share similar features in their search stages: exploration and exploitation. In the first stage, the randomness of the probabilistic search must be guaranteed as the search space is surveyed. In the second stage, it is necessary to concentrate on the distinct promising areas of the feature space found during exploration. Balancing these stages is therefore essential for an optimizer's efficient performance. It is true to say that no single algorithm overcomes all optimization problems as the most suitable and effective method. Besides, each technique and algorithm demonstrates superiority and ability in certain optimization problems, which drives researchers to find and develop more appropriate algorithms. This chapter introduces and elaborates on some optimizers that can be employed in environmental and water science problems. In fact, the root purpose of this chapter is to give technical details concerning these methods and provide reliable guidance for scientists on optimization methods.
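To make the exploration–exploitation balance described above concrete, the following sketch is a toy minimizer of our own (not any published MA): it alternates uniform random sampling of the whole interval (exploration) with small Gaussian perturbations of the best-so-far solution (exploitation), with the exploration probability decaying over the iterations. The function and parameter names are illustrative assumptions only.

```python
import random

def toy_metaheuristic(f, lo, hi, iters=500, seed=0):
    """Minimize f on [lo, hi] by balancing exploration (uniform random
    sampling of the whole space) and exploitation (small Gaussian moves
    around the best solution found so far)."""
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    best_val = f(best)
    for t in range(iters):
        if rng.random() < 1.0 - t / iters:            # explore early on
            cand = rng.uniform(lo, hi)
        else:                                         # exploit later on
            cand = best + rng.gauss(0.0, 0.01 * (hi - lo))
            cand = min(max(cand, lo), hi)             # keep within bounds
        val = f(cand)
        if val < best_val:                            # greedy selection
            best, best_val = cand, val
    return best, best_val

# Minimize a simple quadratic with minimum at x = 3
best, val = toy_metaheuristic(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

Shifting weight from the first branch to the second over time is one crude way of realizing the balance the text describes; real MAs encode it through their own population update rules.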

6

A. Samadi-Koucheksaraee et al.

1.2 Gradient-Based Optimizer (GBO)

The gradient-based optimizer (GBO) (Ahmadianfar et al., 2020) was established on the idea of the gradient-based Newton's method. Hence, in the following, we first scrutinize Newton's technique; the GBO is then discussed in detail.

1.2.1 Brief Summary of Newton's Method

Newton's method is a powerful root-finding technique for solving equations (Bazaraa et al., 2013), built on the terms of the Taylor series. The method starts from a single point $h_0$ and uses the Taylor series at that point to compute another point closer to the solution; this trend is followed until the desired solution emerges. The Taylor series of $\varphi(h)$ about $h_0$ can be defined as:

$$\varphi(h) = \varphi(h_0) + \varphi'(h_0)(h - h_0) + \frac{\varphi''(h_0)(h - h_0)^2}{2!} + \frac{\varphi^{(3)}(h_0)(h - h_0)^3}{3!} + \cdots \tag{1.1}$$

where $\varphi'(h)$, $\varphi''(h)$, and $\varphi^{(3)}(h)$ are the first-, second-, and third-order derivatives of $\varphi(h)$, respectively. Assuming the initial guess is close to the actual root, $(h - h_0)$ is tiny and the higher-order terms of the Taylor series vanish. Truncating the series (Eq. 1.1) therefore gives:

$$\varphi(h) \approx \varphi(h_0) + \varphi'(h_0)(h - h_0) \tag{1.2}$$

To obtain the root of $\varphi(h)$, set $\varphi(h)$ to zero and solve for $h$:

$$h = \frac{h_0\,\varphi'(h_0) - \varphi(h_0)}{\varphi'(h_0)} \tag{1.3}$$

Thus, given $h_n$, the next iterate $h_{n+1}$ is defined as:

$$h_{n+1} = h_n - \frac{\varphi(h_n)}{\varphi'(h_n)} \tag{1.4}$$

Newton's method repeats this procedure until the final result is found.
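As an illustration (not from the chapter), the iteration of Eq. (1.4) can be sketched as follows; the names `f` and `df` stand for $\varphi$ and $\varphi'$ and are assumptions:

```python
def newton(f, df, h0, tol=1e-10, max_iter=50):
    """Newton's method (Eq. 1.4): h_{n+1} = h_n - f(h_n) / f'(h_n)."""
    h = h0
    for _ in range(max_iter):
        step = f(h) / df(h)
        h -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return h

# Root of h^2 - 2 starting from h0 = 1: converges to sqrt(2)
root = newton(lambda h: h * h - 2.0, lambda h: 2.0 * h, 1.0)
```

The quadratic convergence of the method means only a handful of iterations are needed here.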


1.2.2 Modified Version

This chapter uses the variant of Newton's method presented by Weerakoon et al. (2000), which describes the iteration as:

$$h_{n+1} = h_n - \frac{\varphi(h_n)}{\left[\varphi'(h_{n+1}) + \varphi'(h_n)\right]/2} \tag{1.5}$$

where $\varphi'(h_{n+1})$ is the first-order derivative of $\varphi(h)$ at $h_{n+1}$. According to Özban (2004):

$$h_{n+1} = h_n - \frac{\varphi(h_n)}{\varphi'\!\left(\dfrac{c_{n+1} + h_n}{2}\right)} \tag{1.6}$$

where

$$c_{n+1} = h_n - \frac{\varphi(h_n)}{\varphi'(h_n)} \tag{1.6.1}$$

Thus, by using the arithmetic mean of $c_{n+1}$ and $h_n$, the new version of the method is obtained.

1.2.3 Gradient-Based Optimizer

Newton's method underlies the GBO (Ahmadianfar et al., 2020), which combines the gradient theory with population-based techniques to explore the search domain with a set of vectors and two main operators.

1.2.3.1 Initialization

Constraints, decision variables, and an objective function are the root parts of an optimization problem. The control elements of the technique are a probability rate and a transition parameter $\gamma$ that governs the shift from exploration to exploitation. The number of iterations and the population size are set according to the problem's complexity. In this method, each member of the population is called a "vector", and the algorithm holds $N$ vectors in a $D$-dimensional solution space. Hence, a vector is represented as in Eq. 1.7:

$$h_{n,d} = \left(h_{n,1}, h_{n,2}, \ldots, h_{n,D}\right), \quad n = 1, 2, \ldots, N, \; d = 1, 2, \ldots, D \tag{1.7}$$


Overall, the GBO's first vectors are created in the $D$-dimensional search domain, formulated as:

$$h_n = h_{min} + rand(0, 1) \times (h_{max} - h_{min}) \tag{1.8}$$

where $h_{min}$ and $h_{max}$ are the bounds of decision variable $h$, and $rand$ is a random number selected in $[0, 1]$.
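A minimal sketch of the initialization rule in Eq. (1.8), assuming NumPy and scalar bounds applied to every dimension (the function and parameter names are illustrative):

```python
import numpy as np

def init_population(n_vectors, dim, h_min, h_max, seed=None):
    """Eq. (1.8): h_n = h_min + rand(0, 1) * (h_max - h_min)."""
    rng = np.random.default_rng(seed)
    return h_min + rng.random((n_vectors, dim)) * (h_max - h_min)

# 50 vectors in a 3-dimensional search domain bounded by [-10, 10]
pop = init_population(50, 3, h_min=-10.0, h_max=10.0, seed=0)
```

Per-dimension bounds could be passed as arrays instead; broadcasting handles both cases identically.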

1.2.3.2 Gradient Search Rule (GSR)

The gradient search rule governs the vectors' movement so that better positions are reached within the feasible range. The GSR is presented according to the concept of the gradient-based (GB) method to boost the exploration phase and accelerate convergence, and it is obtained through Newton's gradient-based method (Ypma, 1995). Because the majority of optimization problems are not differentiable, a numerical gradient procedure is substituted for the direct derivation of the function. Overall, the GB method starts from an initial solution and moves to the next position along a gradient-specified pathway. Using the Taylor series to estimate the first-order derivative, the GSR is extracted from Eq. (1.4); the Taylor series of $\varphi(h + \Delta h)$ and $\varphi(h - \Delta h)$ can be formulated as:

$$\varphi(h + \Delta h) = \varphi(h) + \varphi'(h)\Delta h + \frac{\varphi''(h)\Delta h^2}{2!} + \frac{\varphi^{(3)}(h)\Delta h^3}{3!} + \cdots \tag{1.9}$$

$$\varphi(h - \Delta h) = \varphi(h) - \varphi'(h)\Delta h + \frac{\varphi''(h)\Delta h^2}{2!} - \frac{\varphi^{(3)}(h)\Delta h^3}{3!} + \cdots \tag{1.10}$$

From Eqs. (1.9) and (1.10), the first-order derivative is obtained (Patil & Verma, 2006):

$$\varphi'(h) = \frac{\varphi(h + \Delta h) - \varphi(h - \Delta h)}{2\Delta h} \tag{1.11}$$

According to Eqs. (1.4) and (1.11), the new position $h_{n+1}$ is expressed as:

$$h_{n+1} = h_n - \frac{2\Delta h \times \varphi(h_n)}{\varphi(h_n + \Delta h) - \varphi(h_n - \Delta h)} \tag{1.12}$$
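As a sketch (names are assumptions), Eq. (1.12) amounts to a derivative-free Newton step built on the central difference of Eq. (1.11):

```python
def derivative_free_step(f, h, dh=1e-5):
    """Eq. (1.12): one Newton step with the central difference (Eq. 1.11)
    standing in for the analytic derivative."""
    return h - 2.0 * dh * f(h) / (f(h + dh) - f(h - dh))

# Root of h^2 - 2: a few derivative-free Newton steps starting from h = 1
h = 1.0
for _ in range(8):
    h = derivative_free_step(lambda x: x * x - 2.0, h)
```

For this quadratic the central difference is exact, so the iteration behaves exactly like classical Newton.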


As the major body of the algorithm, the GSR has a profound impact, so some modifications are vital to control this search. Since $\varphi(h)$ is a minimization problem, position $h_n + \Delta h$ has a worse fitness than $h_n$, whereas $h_n - \Delta h$ is better than $h_n$. Furthermore, the GBO algorithm substitutes position $h_n - \Delta h$ with $h_{best}$, the better position in the neighbourhood of $h_n$, while $h_n + \Delta h$ is replaced with $h_{worst}$, the worse position in that neighbourhood. In this regard, the GBO uses the position $h_n$ instead of its fitness $\varphi(h_n)$, since evaluating the fitness of a position is highly time-consuming. Therefore, the GSR is described as follows:

$$GSR = randn \times \frac{2\Delta h \times h_n}{(h_{worst} - h_{best} + \varepsilon)} \tag{1.13}$$

in which $randn$ is a normally distributed random number, $\varepsilon$ is a small number in $[0, 0.1]$, and $h_{best}$ and $h_{worst}$ are the best and worst solutions. The GSR is then extended with a random parameter in order to enhance the GBO's search ability and to balance exploration (global search) and exploitation (local search). In general, balancing exploration and exploitation is an undeniable precursor to reaching the acceptable regions of the search range and converging to the globally optimal result. Accordingly, the GSR is modified by an adaptive coefficient. $\delta_1$, the critical balancing parameter of the GBO, is formulated as:

$$\delta_1 = 2 \times rand \times \gamma - \gamma \tag{1.14}$$

$$\gamma = \left|\mu \times \sin\!\left(\frac{3\pi}{2} + \sin\!\left(\frac{3\pi}{2} \times \mu\right)\right)\right| \tag{1.14.1}$$

$$\mu = \mu_{min} + (\mu_{max} - \mu_{min}) \times \left(1 - \left(\frac{i}{I}\right)^3\right)^2 \tag{1.14.2}$$

where $\mu_{min}$ and $\mu_{max}$ are set to 0.2 and 1.2, respectively, and $i$ and $I$ denote the current iteration and the total number of iterations. Parameter $\delta_1$ is adjusted through $\gamma$ to balance exploration and exploitation, and it changes at each iteration. The parameter $\mu$ also varies with the iteration count: at early iterations, $\mu$ is large to improve the population's diversity, and it descends as the iteration number rises to accelerate convergence. The resulting values keep the search close to the best solutions. Consequently, increasing this parameter helps the optimizer escape local optima, inasmuch as it raises the population diversity needed for an efficient result. Therefore, Eq. (1.15) is:


$$GSR = randn \times \delta_1 \times \frac{2\Delta h \times h_n}{(h_{worst} - h_{best} + \varepsilon)} \tag{1.15}$$

In Eq. (1.15), $\Delta h$ is determined by the disparity between the best solution $h_{best}$ and a randomly chosen position $h^i_{r1}$ (see Eqs. 1.16, 1.16.1, and 1.16.2). The parameter $\partial$, expressed by Eq. (1.16.2), ensures that $\Delta h$ changes at each iteration, and a random value ($rand$) enhances the exploration of the algorithm.

$$\Delta h = rand(1:N) \times |step| \tag{1.16}$$

$$step = \frac{h_{best} - h^i_{r1} + \partial}{2} \tag{1.16.1}$$

$$\partial = 2 \times rand \times \left|\frac{h^i_{r1} + h^i_{r2} + h^i_{r3} + h^i_{r4}}{4} - h^i_n\right| \tag{1.16.2}$$

where $rand(1:N)$ is a vector of $N$ random values; $r1$, $r2$, $r3$, and $r4$ ($r1 \neq r2 \neq r3 \neq r4 \neq n$) are distinct integers chosen randomly from $[1, N]$; and $step$ is determined by $h_{best}$ and $h^i_{r1}$. Based on the GSR, Eq. (1.12) can be reformulated as:

$$h_{n+1} = h_n - GSR \tag{1.17}$$

The direction of movement (DM) is used to achieve better exploitation around the area of $h_n$. The DM uses the best vector and steers $h_n$ in the direction of $(h_{best} - h_n)$, providing an effective local search that improves the convergence rate of the technique. The DM is defined as:

$$DM = rand \times \delta_2 \times (h_{best} - h_n) \tag{1.18}$$

where $rand$ is a random number in $[0, 1]$ and $\delta_2$ is a random factor supporting the exploration process, which lets each vector take a variable step size. $\delta_2$ is given by:

$$\delta_2 = 2 \times rand \times \gamma - \gamma \tag{1.19}$$
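A sketch of the adaptive factors $\delta_1$ and $\delta_2$ from Eqs. (1.14)–(1.14.2) and (1.19), using the stated $\mu_{min} = 0.2$ and $\mu_{max} = 1.2$; the function name and return convention are assumptions:

```python
import math
import random

def adaptive_factors(i, total_iter, mu_min=0.2, mu_max=1.2):
    """Eqs. (1.14)-(1.14.2) and (1.19): iteration-dependent balance factors."""
    mu = mu_min + (mu_max - mu_min) * (1.0 - (i / total_iter) ** 3) ** 2      # Eq. (1.14.2)
    gamma = abs(mu * math.sin(1.5 * math.pi + math.sin(1.5 * math.pi * mu)))  # Eq. (1.14.1)
    delta1 = 2.0 * random.random() * gamma - gamma                            # Eq. (1.14)
    delta2 = 2.0 * random.random() * gamma - gamma                            # Eq. (1.19)
    return delta1, delta2
```

Since $\gamma \le \mu \le \mu_{max}$, both factors always stay inside $[-\mu_{max}, \mu_{max}]$, shrinking as iterations advance.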

Finally, Eqs. (1.20) and (1.21) update the current vector's position $h^i_n$ through the GSR and DM.


$$R1^i_n = h^i_n - GSR + DM \tag{1.20}$$

$$R1^i_n = h^i_n - randn \times \delta_1 \times \frac{2\Delta h \times h^i_n}{(h_{worst} - h_{best} + \varepsilon)} + rand \times \delta_2 \times \left(h_{best} - h^i_n\right) \tag{1.21}$$

where $R1^i_n$ is the new vector obtained by updating $h^i_n$. Following Özban (2004), Eqs. (1.6) and (1.11) are combined to develop the GSR further, so that it is formulated as:

$$h_{n+1} = h_n - \frac{2\Delta h \times \varphi(h_n)}{\varphi(v_n + \Delta h) - \varphi(v_n - \Delta h)} \tag{1.22}$$

$$v_n = \frac{c_{n+1} + h_n}{2} \tag{1.23}$$

Equation (1.22) renews the position of the current solution analogously to Eq. (1.12); simply put, it employs the mean of the two vectors $c_{n+1}$ and $h_n$ instead of $h_n$ alone. This new formula broadens the search process in the solution area, which helps the optimization algorithm. Using Eq. (1.15), Eq. (1.22) can be converted into a population-based search technique, in which $c_{n+1}$ equals:

$$c_{n+1} = h_n - \frac{2\Delta h \times \varphi(h_n)}{\varphi(h_n + \Delta h) - \varphi(h_n - \Delta h)} \tag{1.24}$$

Next, Eq. (1.24) is reformulated as a population-based expression:

$$c_{n+1} = h_n - randn \times \frac{2\Delta h \times h_n}{(h_{worst} - h_{best} + \varepsilon)} \tag{1.25}$$

$v_n + \Delta h$ and $v_n - \Delta h$ in Eq. (1.22) are given by:

$$v_n + \Delta h = \frac{c_{n+1} + h_n}{2} + \Delta h \tag{1.26}$$

$$v_n - \Delta h = \frac{c_{n+1} + h_n}{2} - \Delta h \tag{1.27}$$

Equations (1.26) and (1.27) are rewritten to improve diversity and exploration. In this regard, $v_n + \Delta h$ and $v_n - \Delta h$ are converted into $vl_n$ and $vj_n$, as:


$$vl_n = rand \times \left(\frac{c_{n+1} + h_n}{2}\right) + rand \times \Delta h \tag{1.28}$$

$$vj_n = rand \times \left(\frac{c_{n+1} + h_n}{2}\right) - rand \times \Delta h \tag{1.29}$$

where $vl_n$ and $vj_n$ are the two positions obtained from $c_{n+1}$ and $h_n$, respectively. The GSR is then formulated from the above equations as:

$$GSR = randn \times \delta_1 \times \frac{2\Delta h \times h_n}{(vl_n - vj_n + \varepsilon)} \tag{1.30}$$

Using the DM and GSR, the two equations below create $R1^i_n$:

$$R1^i_n = h^i_n - GSR + DM \tag{1.31}$$

$$R1^i_n = h^i_n - randn \times \delta_1 \times \frac{2\Delta h \times h^i_n}{\left(vl^i_n - vj^i_n + \varepsilon\right)} + rand \times \delta_2 \times \left(h_{best} - h^i_n\right) \tag{1.32}$$

Then the new vector $R2^i_n$ is created by replacing the current vector $h^i_n$ with the best vector's position $h_{best}$ in Eq. (1.32):

$$R2^i_n = h_{best} - randn \times \delta_1 \times \frac{2\Delta h \times h^i_n}{\left(vl^i_n - vj^i_n + \varepsilon\right)} + rand \times \delta_2 \times \left(h^i_{r1} - h^i_{r2}\right) \tag{1.33}$$

This form accentuates the exploitation process: the search defined by Eq. (1.33) is suitable for local search but restricted in global exploration, whereas the technique of Eq. (1.32) is appropriate for global exploration but limits the local search. The GBO therefore draws on both search methods (Eqs. (1.32) and (1.33)) to enhance exploration and exploitation together. Hence, the new solution at the next iteration ($h^{i+1}_n$), based on the positions $R1^i_n$ and $R2^i_n$, is expressed as:

$$h^{i+1}_n = r_a \times \left(r_b \times R1^i_n + (1 - r_b) \times R2^i_n\right) + (1 - r_a) \times R3^i_n \tag{1.34}$$

$$R3^i_n = h^i_n - \delta_1 \times \left(R2^i_n - R1^i_n\right) \tag{1.35}$$


where $r_a$ and $r_b$ are two random numbers in $[0, 1]$. The updating of a vector position is carried out through $R1^i_n$, $R2^i_n$, and $R3^i_n$ in a 2D search area: with regard to Eq. (1.34), $h^{i+1}_n$ can lie in a random region determined by $R1^i_n$, $R2^i_n$, and $R3^i_n$, and the other vectors randomly alter their positions around $h^{i+1}_n$.

1.2.3.3 Local Escaping Operator (LEO)

The LEO plays an indispensable part in developing the GBO's ability to unfold remarkably sophisticated problems. Indeed, the local escaping operator can alter $h^{i+1}_n$: it generates a solution with significant ability, $h^i_{LEO}$, by employing several solutions, comprising the best position ($h_{best}$), the solutions $R1^i_n$ and $R2^i_n$, two random solutions $h^i_{r1}$ and $h^i_{r2}$, and a new randomly created solution $h^i_k$. The solution $h^i_{LEO}$ is made by the scheme below:

$$
\begin{aligned}
&if \ rand < pr\\
&\quad if \ rand < 0.5\\
&\qquad h^i_{LEO} = h^{i+1}_n + \varphi_1 \times \left(u_1 \times h_{best} - u_2 \times h^i_k\right) + \varphi_2 \times \delta_1 \times \left(u_3 \times \left(R2^i_n - R1^i_n\right) + u_2 \times \left(h^i_{r1} - h^i_{r2}\right)\right)/2\\
&\qquad h^{i+1}_n = h^i_{LEO}\\
&\quad else\\
&\qquad h^i_{LEO} = h_{best} + \varphi_1 \times \left(u_1 \times h_{best} - u_2 \times h^i_k\right) + \varphi_2 \times \delta_1 \times \left(u_3 \times \left(R2^i_n - R1^i_n\right) + u_2 \times \left(h^i_{r1} - h^i_{r2}\right)\right)/2\\
&\qquad h^{i+1}_n = h^i_{LEO}\\
&\quad end\\
&end
\end{aligned} \tag{1.36}
$$

where $\varphi_1$ and $\varphi_2$ are random numbers in $[-1, 1]$ and $[0, 1]$, respectively, $pr$ is a probability, and $u_1$, $u_2$, and $u_3$ are three random numbers, described as:

$$u_1 = \begin{cases} 2 \times rand & if \ r_1 < 0.5\\ 1 & otherwise \end{cases} \tag{1.37}$$

$$u_2 = \begin{cases} rand & if \ r_1 < 0.5\\ 1 & otherwise \end{cases} \tag{1.38}$$


$$u_3 = \begin{cases} rand & if \ r_1 < 0.5\\ 1 & otherwise \end{cases} \tag{1.39}$$

where $rand$ and $r_1$ are random numbers in $[0, 1]$. Equivalently:

$$u_1 = Z_1 \times 2 \times rand + (1 - Z_1) \tag{1.40}$$

$$u_2 = Z_1 \times rand + (1 - Z_1) \tag{1.41}$$

$$u_3 = Z_1 \times rand + (1 - Z_1) \tag{1.42}$$

where $Z_1$ is a binary parameter taking the value 0 or 1: provided that $r_1$ is lower than 0.5, $Z_1$ is 1; otherwise, it is 0. To determine the solution $h^i_k$ in Eq. (1.36), the following scheme is used:

$$h^i_k = \begin{cases} h_{rand} & if \ r_2 < 0.5\\ h^i_p & otherwise \end{cases} \tag{1.43}$$

$$h_{rand} = H_{min} + rand(0, 1) \times (H_{max} - H_{min}) \tag{1.44}$$

where $h_{rand}$ is a new random solution, $h^i_p$ is a solution randomly selected from the population ($p \in [1, 2, \ldots, N]$), and $r_2$ is a random number in $[0, 1]$.

$$h^i_k = Z_2 \times h^i_p + (1 - Z_2) \times h_{rand} \tag{1.45}$$

where $Z_2$ is a binary parameter of value 0 or 1: provided that $r_2$ is below 0.5, $Z_2$ is 1; otherwise, it is 0. This way of determining the values of $u_1$, $u_2$, and $u_3$ helps to raise diversity and escape local solutions. Figure 1.1 and Table 1.1 present the flowchart and pseudo-code of the GBO algorithm.
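A minimal per-vector sketch of how Eqs. (1.30)–(1.35) combine the GSR and DM into the candidate solutions; all function and variable names, the fixed $\delta_1$, $\delta_2$, $\Delta h$ inputs, and the use of NumPy arrays are illustrative assumptions, not the chapter's implementation:

```python
import numpy as np

def gbo_candidate(h_n, h_best, h_r1, h_r2, vl, vj, delta1, delta2, dh, rng, eps=1e-8):
    """Combine the GSR and DM into the candidate vectors of Eqs. (1.30)-(1.35)."""
    gsr = rng.standard_normal() * delta1 * (2.0 * dh * h_n) / (vl - vj + eps)  # Eq. (1.30)
    r1 = h_n - gsr + rng.random() * delta2 * (h_best - h_n)                    # Eq. (1.32)
    gsr_b = rng.standard_normal() * delta1 * (2.0 * dh * h_n) / (vl - vj + eps)
    r2 = h_best - gsr_b + rng.random() * delta2 * (h_r1 - h_r2)                # Eq. (1.33)
    r3 = h_n - delta1 * (r2 - r1)                                              # Eq. (1.35)
    ra, rb = rng.random(), rng.random()
    return ra * (rb * r1 + (1.0 - rb) * r2) + (1.0 - ra) * r3                  # Eq. (1.34)

rng = np.random.default_rng(0)
h_new = gbo_candidate(np.zeros(3), np.ones(3), np.full(3, 0.5), np.full(3, -0.5),
                      np.ones(3), np.zeros(3), delta1=0.5, delta2=0.5,
                      dh=np.full(3, 0.1), rng=rng)
```

The blend of R1 (exploration-leaning), R2 (exploitation-leaning), and R3 mirrors how the text balances the two search methods.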

1.3 Runge Kutta Optimizer (RUN)

This section introduces a useful metaheuristic optimizer built on the Runge Kutta (RK) technique (Kutta, 1901). The RUN employs the notion of slope variations as its search process for global optimization (Ahmadianfar et al., 2021). Within that process, the exploration and exploitation phases and the enhanced solution quality (ESQ) operator strengthen the optimizer's ability to avoid local results and improve the speed of convergence (Ahmadianfar et al., 2021).


Fig. 1.1 Flowchart of gradient-based algorithm (GBO)

Table 1.1 The GBO's pseudo-code

Step 1. Initialization
    Set the control parameters and create an initial population $(h_1, h_2, \ldots, h_N)$
    Evaluate the objective function value of each vector and determine $h_{best}$ and $h_{worst}$
Step 2. Main loop
    Select $r1$, $r2$, $r3$, and $r4$ randomly
    Calculate the position $h^{i+1}_n$ using Eq. 1.34
    Local escaping operator: calculate the position $h^i_{LEO}$ using Eq. 1.36
    Update $h_{best}$ and $h_{worst}$
Step 3. Return $h_{best}$


1.3.1 Brief Reviews of Runge Kutta Method (RKM)

The Runge Kutta method (RKM) has become a conventional tool for tackling ordinary differential equations (Kutta, 1901; Runge, 1895). RKM produces a meticulous numerical approximation of a function without requiring its high-order derivatives (Zheng et al., 2017). The RKM is defined as follows. A first-order ordinary differential equation is described as:

$$\frac{d\varphi}{dh} = f(h, \varphi), \quad \varphi(h_0) = \varphi_0 \tag{1.46}$$

Here $f(h, \varphi)$ is interpreted as the slope $S$ of the best-fitting straight line through the point $(h, \varphi)$. Taking $S$ at the point $(h_0, \varphi_0)$, the next points are obtained by following that line: $(h_1, \varphi_1) = (h_0 + \Delta h, \varphi_0 + S_0 \Delta h)$ and $(h_2, \varphi_2) = (h_1 + \Delta h, \varphi_1 + S_1 \Delta h)$, where $S_0 = f(h_0, \varphi_0)$ and $S_1 = f(h_1, \varphi_1)$, respectively. Repeating this procedure $m$ times yields a solution over the domain $[h_0, h_0 + m\Delta h]$. The derivation of RKM is inspired by the Taylor series:

$$\varphi(h + \Delta h) = \varphi(h) + \varphi'(h)\Delta h + \varphi''(h)\frac{(\Delta h)^2}{2!} + \cdots \tag{1.47}$$

Reducing the higher-order terms gives the approximation:

$$\varphi(h + \Delta h) \approx \varphi(h) + \varphi'(h)\Delta h \tag{1.48}$$

Based on Eq. (1.48), the first-order Euler method can be defined as:

$$\varphi(h + \Delta h) = \varphi(h) + c_1 \Delta h \tag{1.49}$$

where $c_1 = \varphi'(h) = f(h, \varphi)$ and $\Delta h = h_{n+1} - h_n$. The first-order derivative $\varphi'(h)$ can be written as (Patil & Verma, 2006):

$$\varphi'(h) = \frac{\varphi(h + \Delta h) - \varphi(h - \Delta h)}{2\Delta h} \tag{1.50}$$

Therefore, Eq. (1.49) can be reformulated as:

$$\varphi(h + \Delta h) = \varphi(h) + \frac{\varphi(h + \Delta h) - \varphi(h - \Delta h)}{2} \tag{1.51}$$


To enhance the proposed optimization method, the fourth-order Runge Kutta (RK4), derived from Eq. (1.47), is operated in this process (England, 1969). The RK4 is formulated as the weighted average of four increments:

$$\varphi(h + \Delta h) = \varphi(h) + \frac{1}{6}(c_1 + 2 \times c_2 + 2 \times c_3 + c_4)\Delta h \tag{1.52}$$

The four weighted increments ($c_1$, $c_2$, $c_3$, and $c_4$) are defined as:

$$
\begin{aligned}
c_1 &= \varphi'(h) = f(h, \varphi)\\
c_2 &= f\!\left(h + \frac{\Delta h}{2},\; \varphi + \frac{\Delta h}{2} \times c_1\right)\\
c_3 &= f\!\left(h + \frac{\Delta h}{2},\; \varphi + \frac{\Delta h}{2} \times c_2\right)\\
c_4 &= f(h + \Delta h,\; \varphi + \Delta h \times c_3)
\end{aligned} \tag{1.53}
$$

As these factors show, the first increment $c_1$ determines $S$ at the beginning of the interval $[h, h + \Delta h]$ using $\varphi$. The second increment $c_2$ determines $S$ at the midpoint, employing $\varphi$ and $c_1$; the third increment $c_3$ describes $S$ at the midpoint using $\varphi$ and $c_2$; and the fourth increment $c_4$ determines $S$ at the end of the interval, utilizing $\varphi$ and $c_3$. Thus, in RK4 the next value $\varphi(h + \Delta h)$ is given by the current value $\varphi(h)$ plus the weighted average of the four increments.
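A compact sketch of one RK4 step from Eqs. (1.52)–(1.53); the function and variable names are illustrative:

```python
def rk4_step(f, h, phi, dh):
    """One fourth-order Runge Kutta step (Eqs. 1.52-1.53)."""
    c1 = f(h, phi)                                  # slope at the start
    c2 = f(h + dh / 2.0, phi + dh / 2.0 * c1)       # slope at the midpoint via c1
    c3 = f(h + dh / 2.0, phi + dh / 2.0 * c2)       # slope at the midpoint via c2
    c4 = f(h + dh, phi + dh * c3)                   # slope at the end via c3
    return phi + (c1 + 2.0 * c2 + 2.0 * c3 + c4) * dh / 6.0

# dφ/dh = φ with φ(0) = 1: integrate to h = 1, where the exact value is e
phi, h, dh = 1.0, 0.0, 0.1
for _ in range(10):
    phi = rk4_step(lambda x, y: y, h, phi, dh)
    h += dh
```

With this step size the global error is on the order of $10^{-6}$, illustrating the method's fourth-order accuracy.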

1.3.2 Introduction of Runge Kutta Optimizer

The RUN is a swarm-based stochastic optimization model built, unlike many other methods, without any clichéd metaphorical attachment. Indeed, the RUN operates metaphor-free, representing its rules through mathematical functions activated at the proper time. Arguably, the only effect of employing metaphors to present a population-based model is to camouflage the essence of the equations used in the algorithm, so metaphors are not applied here. Hence, the RUN is built on the central idea of the RK method together with population-based evaluation. That is, the RK method employs a specific formulation (i.e., the RK4 method) to estimate and solve ordinary differential equations (Kutta, 1901; Runge, 1895). The fundamental idea of the RUN originates from the estimated slope ($S$) concept of the RK method, while the RUN defines its rules for evaluating a data set, the population, according to the logic of swarm-based algorithms.

1.3.2.1 Initialization Step

The major goal of this stage is to evolve an initial swarm within the permissible range over the iterations. To this end, $N$ positions are produced randomly; each member $h_n$ ($n = 1, 2, \ldots, N$) of the population is a solution of dimension $D$. The following formulation creates the initial positions randomly:

$$h_n = L + rand.(U - L) \tag{1.54}$$

where $U$ and $L$ are the upper and lower bounds, and $rand$ is a random value between 0 and 1. This rule can easily be adapted when dealing with the limits.

1.3.2.2 Search Operator

The iterative core plays a predominant role in an algorithm's ability. Simply put, in the exploration core a set of solutions obtained largely at random is operated to find the promising regions of the feasible area, while in the exploitation phase the base solutions and random behaviours experience smaller changes than in exploration (Mirjalili, 2015). The search operator of the RUN uses the RK method to reach a balance between exploitation and exploration and to explore the decision area with a collection of random solutions. To set the search mechanism in the RUN, the RK4 method is used. The coefficient $c_1$ is expressed through the first-order derivative estimated by Eq. (1.50). In addition, the position $h_n$ is operated on instead of the fitness $\varphi(h_n)$, mainly because applying the fitness of a position is a time-consuming computation. Concerning Eq. (1.50), $h_n + \Delta h$ and $h_n - \Delta h$ are two positions near $h_n$. Taking $\varphi(h)$ as a minimization problem, the positions $h_n - \Delta h$ and $h_n + \Delta h$ are likely to be better and worse positions, respectively. Hence, to make a population-based algorithm, position $h_n - \Delta h$ is equated with $h_{best}$, the best solution around $h_n$, while $h_n + \Delta h$ is equated with $h_{worst}$, the worst solution around $h_n$. So $c_1$ is described as:

$$c_1 = \frac{h_{worst} - h_{best}}{2\Delta h} \tag{1.55}$$

where $h_{worst}$ and $h_{best}$ are the worst and best solutions obtained at each iteration, determined from three random solutions of the population ($h_{r1}$, $h_{r2}$, $h_{r3}$), with $r1 \neq r2 \neq r3 \neq n$. Equation (1.55) is reformulated to develop the exploration search and introduce a random trend, as:

$$c_1 = \frac{1}{2\Delta h}(rand \times h_{worst} - u \times h_{best}) \tag{1.55.1}$$

$$u = round(1 + rand) \times (1 - rand) \tag{1.55.2}$$

In the above formulas, $rand$ is a random number in $[0, 1]$. The best solution $h_{best}$ has a profound impact on exploring promising regions and thus on moving towards the globally best solution; hence the random parameter $u$ is operated to emphasize $h_{best}$ during optimization. In Eq. (1.55), $\Delta h$ can be described by:

$$\Delta h = 2 \times rand \times |Stp| \tag{1.56}$$

$$Stp = rand \times \left|\left(h_{best} - rand \times h_{avg}\right) + \gamma\right| \tag{1.56.1}$$

$$\gamma = rand \times (h_n - rand \times (u - l)) \times \exp\left(-4 \times \frac{i}{Maxi}\right) \tag{1.56.2}$$

where, with regard to $Stp$, $\Delta h$ is a position increment: $Stp$ defines the step size stemming from the disparity between $h_{best}$ and $h_{avg}$, and $\gamma$ is a scale factor tied to the size of the solution space, which decreases exponentially during the optimization. $h_{avg}$ is the average of the three randomly chosen solutions ($h_{r1}$, $h_{r2}$, and $h_{r3}$). The random numbers ($rand$) in Eqs. (1.56) to (1.56.2) allow the method to generate more varied trends and search areas. The three other coefficients ($c_2$, $c_3$, and $c_4$) are then formulated as:

$$c_2 = \frac{1}{2\Delta h}\left(rand.(h_{worst} + rand_1.c_1.\Delta h) - (u.h_{best} + rand_2.c_1.\Delta h)\right) \tag{1.57}$$

$$c_3 = \frac{1}{2\Delta h}\left(rand.\left(h_{worst} + rand_1.\left(\frac{1}{2}c_2.\Delta h\right)\right) - \left(u.h_{best} + rand_2.\left(\frac{1}{2}c_2.\Delta h\right)\right)\right) \tag{1.58}$$

$$c_4 = \frac{1}{2\Delta h}\left(rand.(h_{worst} + rand_1.c_3.\Delta h) - (u.h_{best} + rand_2.c_3.\Delta h)\right) \tag{1.59}$$

Here $rand_1$ and $rand_2$ are random numbers in the domain $[0, 1]$, and $h_{worst}$ and $h_{best}$ are determined by the following rule:


$$
\begin{aligned}
&if \ f(h_n) < f(h_{besti})\\
&\quad h_{best} = h_n\\
&\quad h_{worst} = h_{besti}\\
&else\\
&\quad h_{best} = h_{besti}\\
&\quad h_{worst} = h_n\\
&end
\end{aligned} \tag{1.60}
$$

Here $h_{besti}$, chosen from the random solutions ($h_{r1}$, $h_{r2}$, and $h_{r3}$), is the best of those random solutions. If the current solution's fitness $f(h_n)$ is better than that of $h_{besti}$, the best and worst solutions ($h_{best}$ and $h_{worst}$) are set to $h_n$ and $h_{besti}$, respectively; otherwise they are set to $h_{besti}$ and $h_n$. In turn, the search mechanism (SM) of the RUN is defined as:

$$SM = \frac{1}{6}(h_{RK})\Delta h \tag{1.61}$$

$$h_{RK} = c_1 + 2 \times c_2 + 2 \times c_3 + c_4 \tag{1.62}$$
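The slope coefficients and search mechanism of Eqs. (1.55.1)–(1.62) can be sketched per vector as follows; all names, and the use of NumPy arrays, are assumptions:

```python
import numpy as np

def search_mechanism(h_worst, h_best, dh, rng):
    """RK4-style search mechanism of the RUN (Eqs. 1.55.1-1.62), sketched."""
    r = rng.random
    u = round(1 + r()) * (1 - r())                               # Eq. (1.55.2)
    c1 = (r() * h_worst - u * h_best) / (2.0 * dh)               # Eq. (1.55.1)
    c2 = (r() * (h_worst + r() * c1 * dh)
          - (u * h_best + r() * c1 * dh)) / (2.0 * dh)           # Eq. (1.57)
    c3 = (r() * (h_worst + r() * (0.5 * c2 * dh))
          - (u * h_best + r() * (0.5 * c2 * dh))) / (2.0 * dh)   # Eq. (1.58)
    c4 = (r() * (h_worst + r() * c3 * dh)
          - (u * h_best + r() * c3 * dh)) / (2.0 * dh)           # Eq. (1.59)
    h_rk = c1 + 2 * c2 + 2 * c3 + c4                             # Eq. (1.62)
    return h_rk * dh / 6.0                                       # Eq. (1.61)

rng = np.random.default_rng(7)
sm = search_mechanism(np.ones(4), np.zeros(4), np.full(4, 0.3), rng)
```

Note how the RK4 weighting of Eq. (1.52) reappears here with solutions in place of slopes.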

1.3.2.3 Updating Solutions

The RUN uses a set of random solutions to carry out the optimization process, and the RK method updates the solutions' positions at each iteration. Simply put, the RUN employs the solution and the search method of the RK technique; updating a position by operating the RK rule is demonstrated in Fig. 1.2. The next position is produced by the following scheme, covering the global (exploration) and local (exploitation) search:

$$
\begin{aligned}
&if \ rand < 0.5\\
&\quad Exploration \ phase\\
&\quad h_{n+1} = h_e + SF \times (rand \times SM + h_s)\\
&else\\
&\quad Exploitation \ phase\\
&\quad h_{n+1} = h_m + SF \times (rand \times SM + h_{s'})\\
&end
\end{aligned} \tag{1.63}
$$

The formulas of $h_s$ and $h_{s'}$ are defined below.


Fig. 1.2 Process to obtain the next position in Runge Kutta optimizer (RUN)

$$h_s = randn.(h_m - h_e) \tag{1.64}$$

$$h_{s'} = randn.(h_{r1} - h_{r2}) \tag{1.65}$$

where $randn$ is a normally distributed random number. $h_e$ and $h_m$ are given by:

$$h_e = \phi \times h_n + (1 - \phi) \times h_{r1} \tag{1.66}$$

$$h_m = \phi \times h_b + (1 - \phi) \times h_{lbest} \tag{1.67}$$

where $\phi$ is a random number in $(0, 1)$, $h_b$ is the best-so-far solution, and $h_{lbest}$ is the best position obtained at each iteration. $SF$ equals:

$$SF = 2.(0.5 - rand) \times f \tag{1.68}$$

where

$$f = a \times \exp\left(-b \times rand \times \frac{i}{Maxi}\right) \tag{1.69}$$

Here $a$ and $b$ are two constant values, $i$ is the iteration number, and $Maxi$ is the maximum number of iterations. $SF$ balances the exploration and exploitation mechanisms in this method.


According to Eq. (1.68), a large $SF$ raises the diversity in the early iterations and strengthens the exploration search; the value of $SF$ then decreases as the iteration number rises, boosting the exploitation capability. The main control parameters of the RUN are those used in $SF$, namely $a$ and $b$. Equation (1.63) can be reformulated to apply a local search near $h_e$ and $h_m$ and to find better areas of the search space:

$$
\begin{aligned}
&if \ rand < 0.5\\
&\quad Exploration \ phase\\
&\quad h_{n+1} = (h_e + r \times SF \times \phi \times h_e) + SF \times (rand \times SM + h_s)\\
&else\\
&\quad Exploitation \ phase\\
&\quad h_{n+1} = (h_m + r \times SF \times \phi \times h_m) + SF \times (rand \times SM + h_{s'})\\
&end
\end{aligned} \tag{1.70}
$$

Concerning Eq. (1.70), the RUN chooses the exploration or exploitation leg according to the condition $rand < 0.5$. If $rand < 0.5$, a global search of the solution area around $h_e$ is accomplished: through this global search (exploration), the RUN can detect the promising area of the search space. If $rand \geq 0.5$, the RUN employs a local search around solution $h_m$; this local search phase remarkably raises the convergence speed and concentrates on influential solutions. Here $r$ is an integer equal to 1 or $-1$, which alters the search direction and raises diversity, and $\phi$ is a random number in $[0, 2]$. Regarding Eq. (1.70), the local search near $h_e$ shrinks as the iteration number goes up. Figure 1.3 depicts the search method of the RUN, showing how the position $h_{n+1}$ is made at the coming iteration.
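A sketch of the $SF$ schedule and the two-branch update of Eq. (1.70); the values `a = 20` and `b = 12` are illustrative assumptions (the chapter only states that $a$ and $b$ are constants), as are all names:

```python
import math
import numpy as np

def run_update(h_e, h_m, h_s, h_s2, sm, i, max_i, rng, a=20.0, b=12.0):
    """Eqs. (1.68)-(1.70): adaptive scale factor SF and the position update."""
    f = a * math.exp(-b * rng.random() * i / max_i)   # Eq. (1.69)
    sf = 2.0 * (0.5 - rng.random()) * f               # Eq. (1.68)
    r = rng.choice([-1, 1])                           # flips the search direction
    g = 2.0 * rng.random()                            # the random factor in [0, 2]
    if rng.random() < 0.5:                            # exploration branch around h_e
        return (h_e + r * sf * g * h_e) + sf * (rng.random() * sm + h_s)
    return (h_m + r * sf * g * h_m) + sf * (rng.random() * sm + h_s2)  # exploitation

rng = np.random.default_rng(1)
new_pos = run_update(np.ones(3), 2.0 * np.ones(3), np.zeros(3), np.zeros(3),
                     np.full(3, 0.1), i=5, max_i=100, rng=rng)
```

Because $f$ decays with $i/Maxi$, later iterations take smaller steps, matching the exploration-to-exploitation shift described above.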

1.3.2.4 Enhanced Solution Quality

To improve the quality of solutions and avoid local optima across iterations, the ESQ is operated in the RUN, which leads solutions towards more appropriate positions. To create a new solution ($h_{new1}$), the average of three random solutions ($h_{avg}$) is combined with the best position ($h_{best}$). A set of instructions is then executed to generate the solution ($h_{new2}$) through the ESQ:


Fig. 1.3 Search process in Runge Kutta optimizer (RUN)

If the fitness of $h_{new2}$ is worse than that of the current solution (i.e., $f(h_{new2}) > f(h_n)$), another new solution ($h_{new3}$) can be created to provide a further opportunity for generating a suitable solution, described as:

$$
\begin{aligned}
&if \ rand < w\\
&\quad h_{new3} = (h_{new2} - rand.h_{new2}) + SF.(rand.h_{RK} + (v.h_{best} - h_{new2}))\\
&end
\end{aligned} \tag{1.75}
$$

where $v$ is a random number equal to $2 \times rand$. Equation (1.75) works to move solution $h_{new2}$ towards a better position: its first term performs a local search near $h_{new2}$, and its second term moves the solution towards the most suitable solution found so far to reach a promising area, with the coefficient $v$ weighting the importance of that solution. To estimate $h_{RK}$, the solutions $h_{best}$ and $h_{worst}$ are replaced by $h_n$ and $h_{new2}$, respectively, since the fitness of $h_n$ is lower than that of $h_{new2}$ (i.e., $f(h_{new2}) > f(h_n)$). Figure 1.4 and Table 1.2 depict the flowchart and the pseudo-code of the RUN, respectively, while Fig. 1.5 shows the three routes of the optimization process. The algorithm first operates the RK search approach to reach the position $h_{n+1}$, and then uses the ESQ to detect the promising areas of the search space; consequently, three routes can yield a more acceptable solution. In the initial phase, $h_{new2}$ is estimated by the ESQ and compared with $h_{n+1}$: provided $h_{n+1}$ is better than $h_{new2}$ (i.e., $f(h_{new2}) > f(h_{n+1})$), another position ($h_{new3}$) is created. On condition that $f(h_{new3}) < f(h_{n+1})$, the most suitable solution is $h_{new3}$ (second path); otherwise, it is $h_{n+1}$ (first path). Regarding the third path, if $f(h_{new2}) < f(h_{n+1})$, the most acceptable solution is $h_{new2}$.

1.4 Differential Evolution (DE)

The differential evolution (DE) method is a potent and competitive metaheuristic, applicable and fruitful for real-parameter optimization (Storn et al., 1995, 1997). Its uncomplicated design gives it an undeniably high ability to solve an extensive variety of optimization problems across engineering fields accurately (Das et al., 2010), and all of its components make it easy to use. Classical DE has several control elements, namely the crossover rate $p_{cr}$, the scale factor $S$,


Fig. 1.4 Flowchart of Runge Kutta optimizer (RUN)

and the population size $N$, which have a profound impact on DE's performance (Teo, 2006; Zhang et al., 2009); they are elaborated in the following. After the initialization step, DE enters a cycle of steps, namely mutation, crossover and selection, repeated continually for $G_{Max}$ generations.

1.4.1 Vector Initialization

DE, a population-based optimizer, produces multiple points uniformly at random to form the population $P$. Each $h$ of $P$ is a vector in the real parameter space $R^D$, i.e., $h = [h_1, h_2, \ldots, h_D]$. In the first generation, $G = 0$, based on the lower bound $h_{min} = \left(h_{1,min}, h_{2,min}, \ldots, h_{D,min}\right)$ and


Table 1.2 The RUN’s pseudo-code

Step 1. Initialization
    Set the control parameters $a$ and $b$
    Create an initial population $(h_1, h_2, \ldots, h_N)$
    Evaluate the objective function value of each member
    Specify $h_{best}$ and $h_{worst}$
Step 2. Main loop
    Calculate the position $h_{n+1}$ using Eq. 1.70
    Enhance the solution quality:
        Calculate the position $h_{new2}$ using Eq. 1.71
        Calculate the position $h_{new3}$ using Eq. 1.75
    Update $h_{best}$ and $h_{worst}$
Step 3. Return $h_{best}$

the upper bound $h_{max} = \left(h_{1,max}, h_{2,max}, \ldots, h_{D,max}\right)$, which together bound the domain of each decision variable $h_j$ of $h$, every component is initialized as:

$$h^{G=0}_{j,n} = h_{j,min} + rand[0, 1]_{j,n} \cdot \left(h_{j,max} - h_{j,min}\right) \tag{1.76}$$


Fig. 1.5 Runge Kutta optimizer (RUN) process

in which rand[0, 1]_{j,n} is a uniform random number between 0 and 1. The vectors produced at this step of the DE algorithm are labeled with a distinct name, the target vectors (target population), defined as h.
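A minimal sketch of this initialization step (Eq. 1.76) in Python with NumPy, assuming the population is stored row-wise (one vector h per row):

```python
import numpy as np

def initialize_population(n_pop, h_min, h_max, rng):
    """Eq. (1.76): h_{j,n} = h_{j,min} + rand[0,1]_{j,n} * (h_{j,max} - h_{j,min})."""
    h_min = np.asarray(h_min, dtype=float)
    h_max = np.asarray(h_max, dtype=float)
    # one independent uniform random number per gene j and per vector n
    return h_min + rng.random((n_pop, h_min.size)) * (h_max - h_min)
```

Each target vector is thus drawn uniformly within the box [h_min, h_max].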

1.4.2 Mutation Operation

The mutation (differential mutation) phase produces the population of donor vectors y_n (n = 1, 2, ..., N) from the target vectors h_n (n = 1, 2, ..., N) generated in the earlier step. The primary goal of this operation is to add the scaled difference of two target vectors (h_r2 and h_r3) to a third vector h_r1 in order to obtain the mutation (donor) vector y_n. The indices r1, r2, and r3 are selected from the domain [1, N], and they differ from the index n of the current vector.

y_n^G = h_r1^G + S · (h_r2^G − h_r3^G)

(1.77)

According to Eq. (1.77), G expresses the current generation, and S defines a control (scale) factor that determines the length of the exploration step (h_r2 − h_r3), i.e., how far from point h_r1 the offspring is produced. S ∈ [0, 1] always takes a positive value not larger than 1 (Price et al., 2006). Variations of the DE mutation formula lead to a wide range of schemes:

y_n^G = h_best^G + S · (h_r1^G − h_r2^G)   (DE/best/1)

(1.78)

y_n^G = h_n^G + S · (h_best^G − h_n^G) + S · (h_r1^G − h_r2^G)   (DE/current-to-best/1)   (1.79)

y_n^G = h_r1^G + S · (h_r2^G − h_r3^G) + S · (h_r4^G − h_r5^G)   (DE/rand/2)   (1.80)


y_n^G = h_best^G + S · (h_r1^G − h_r2^G) + S · (h_r3^G − h_r4^G)   (DE/best/2)

(1.81)

in which h_best is the best solution of generation G.
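The base scheme of Eq. (1.77) (DE/rand/1) and the DE/best/1 variant of Eq. (1.78) can be sketched as follows; the random indices are drawn mutually distinct and different from n, as the text requires:

```python
import numpy as np

def mutate_rand_1(pop, S, rng):
    """DE/rand/1, Eq. (1.77): y_n = h_r1 + S * (h_r2 - h_r3)."""
    N = len(pop)
    donors = np.empty_like(pop)
    for n in range(N):
        candidates = [i for i in range(N) if i != n]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        donors[n] = pop[r1] + S * (pop[r2] - pop[r3])
    return donors

def mutate_best_1(pop, fitness, S, rng):
    """DE/best/1, Eq. (1.78): y_n = h_best + S * (h_r1 - h_r2)."""
    N = len(pop)
    h_best = pop[np.argmin(fitness)]          # best vector of the current generation
    donors = np.empty_like(pop)
    for n in range(N):
        candidates = [i for i in range(N) if i != n]
        r1, r2 = rng.choice(candidates, size=2, replace=False)
        donors[n] = h_best + S * (pop[r1] - pop[r2])
    return donors
```

The remaining variants of Eqs. (1.79)-(1.81) differ only in which base vector and how many scaled differences are combined.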

1.4.3 Crossover Mechanism

The next step, a vector perturbation operation called crossover, follows mutation in the DE cycle to produce the trial population T_n (n = 1, 2, ..., N). The main goal of this operation is to raise the diversity among the parameters of the target vector h_n^G and the donor vector y_n^G: components of the donor and target vectors are exchanged, according to the condition below, to create the trial vector T_n^G. There are two types of crossover in the DE family, known as exponential crossover and binary (uniform) crossover. The binary one, called bin, is elaborated in the following:

T_{j,n}^G = { y_{j,n}^G   if rand_{j,n}[0, 1] ≤ p_cr or j = j_rand
            { h_{j,n}^G   otherwise                                    (1.82)

in which rand_{j,n} is a uniform random number between 0 and 1, and p_cr is a control factor such as S. The value of p_cr determines the probability of inheriting components from the perturbed (donor) vector; p_cr ∈ [0, 1] takes a positive value not larger than 1 (Price et al., 2006). j_rand ∈ {1, 2, ..., D} expresses the index of a randomly chosen gene, which guarantees that the offspring T_n^G inherits at least one component from the donor vector.
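A vectorized sketch of the binomial crossover of Eq. (1.82); the forced index j_rand guarantees each trial vector inherits at least one donor component:

```python
import numpy as np

def binomial_crossover(targets, donors, p_cr, rng):
    """Eq. (1.82): take the donor gene where rand <= p_cr or j == j_rand."""
    N, D = targets.shape
    take_donor = rng.random((N, D)) <= p_cr
    j_rand = rng.integers(0, D, size=N)
    take_donor[np.arange(N), j_rand] = True   # force one donor gene per trial vector
    return np.where(take_donor, donors, targets)
```

With p_cr close to 1 the trial vector is dominated by donor genes; with p_cr close to 0 only the forced j_rand gene changes.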

1.4.4 Evaluation and Selection Operations

With regard to Eq. (1.83), a greedy selection scheme is applied to decide which vector survives. The quality of the trial vector T_n^G obtained from the crossover stage is compared against the quality of its corresponding target vector h_n^G to determine which one remains for the upcoming generation, h_n^{G+1} (n = 1, 2, ..., N). The objective function f of the problem at hand measures solution quality. The mentioned operation is defined as

h_n^{G+1} = { T_n^G   if f(T_n^G) ≤ f(h_n^G)   (for minimization problems)
            { h_n^G   otherwise

(1.83)


According to the aforementioned equation, if the trial vector achieves an objective value better than or equal to that of its target, the trial vector replaces the respective target point in the upcoming generation; otherwise, the old position is maintained. This technique ensures that the population never deteriorates from one generation to the next, since the population either improves or remains steady with respect to the fitness value. Table 1.3 demonstrates the pseudo-code of DE.

Table 1.3 The DE's pseudo-code

Step 1. Initialization
    Create an initial population
    Evaluate the objective function value of each vector
Step 2. Main loop (while the termination condition is not satisfied)
    For each target vector h_n:
        Select randomly three mutually distinct integer indices r1, r2, r3, each different from n
        Generate the donor vector y_n using Eq. 1.77
        Generate the trial vector T_n using Eq. 1.82 (donor gene if rand ≤ p_cr or j = j_rand)
        Evaluate the objective function value of the trial vector
        Apply the greedy selection of Eq. 1.83
Step 3. Return the best solution
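Putting the four operations together, a compact DE/rand/1/bin loop can be sketched in Python with NumPy; the sphere function used below is only a toy objective for illustration, not an example from the text:

```python
import numpy as np

def de_optimize(f, h_min, h_max, n_pop=30, S=0.8, p_cr=0.9, g_max=200, seed=0):
    """Minimal DE/rand/1/bin: initialization (1.76), mutation (1.77),
    binomial crossover (1.82) and greedy selection (1.83)."""
    rng = np.random.default_rng(seed)
    h_min = np.asarray(h_min, float)
    h_max = np.asarray(h_max, float)
    D = h_min.size
    pop = h_min + rng.random((n_pop, D)) * (h_max - h_min)        # Eq. (1.76)
    fit = np.array([f(h) for h in pop])
    for _ in range(g_max):
        for n in range(n_pop):
            r1, r2, r3 = rng.choice([i for i in range(n_pop) if i != n],
                                    size=3, replace=False)
            donor = pop[r1] + S * (pop[r2] - pop[r3])             # Eq. (1.77)
            cross = rng.random(D) <= p_cr
            cross[rng.integers(D)] = True                         # forced j_rand gene
            trial = np.where(cross, donor, pop[n])                # Eq. (1.82)
            trial = np.clip(trial, h_min, h_max)
            f_trial = f(trial)
            if f_trial <= fit[n]:                                 # Eq. (1.83), greedy
                pop[n], fit[n] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]
```

On a low-dimensional sphere function this loop typically converges close to the origin within a few thousand function evaluations.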


1.5 Conclusion

This chapter presented a comprehensive review of three powerful metaphor-free, population-based optimization methods: the gradient-based optimizer (GBO), the Runge Kutta optimizer (RUN), and differential evolution (DE). Those interested in engineering optimization can use these techniques to solve environmental and water quality problems, and they help ensure more accurate and trustworthy outcomes while saving estimation time during the process. First, a brief review of the algorithms was presented; after introducing the methods, the chapter described their fundamentals and algorithmic phases; in the last stage, their pseudo-codes were presented in detail.

References Abdel-Basset, M., Wang, G.-G., Sangaiah, A. K., & Rushdy, E. (2019). Krill herd algorithm based on cuckoo search for solving engineering optimization problems. Multimedia Tools and Applications, 78(4), 3861–3884. Ahmadianfar, I., Samadi-Koucheksaraee, A., & Bozorg-Haddad, O. (2017). Extracting optimal policies of hydropower multi-reservoir systems utilizing enhanced differential evolution algorithm. Water Resources Management, 31(14), 4375–4397. Ahmadianfar, I., Bozorg-Haddad, O., & Chu, X. (2020). Gradient-based optimizer: A new Metaheuristic optimization algorithm. Information Sciences, 540, 131–159. Ahmadianfar, I., Heidari, A. A., Gandomi, A. H., Chu, X., & Chen, H. (2021). RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Systems with Applications, 115079. Akay, B., & Karaboga, D. (2012). A modified artificial bee colony algorithm for real-parameter optimization. Information Sciences, 192, 120–142. Alsattar, H., Zaidan, A., & Zaidan, B. (2020). Novel meta-heuristic bald eagle search optimisation algorithm. Artificial Intelligence Review, 53(3), 2237–2264. Arora, S., & Singh, S. (2019). Butterfly optimization algorithm: A novel approach for global optimization. Soft Computing, 23(3), 715–734. Askarzadeh, A. (2014). Bird mating optimizer: An optimization algorithm inspired by bird mating strategies. Communications in Nonlinear Science and Numerical Simulation, 19(4), 1213–1228. Ba, A. F., Huang, H., Wang, M., Ye, X., Gu, Z., Chen, H., & Cai, X. (2020). Levy-based antlioninspired optimizers with orthogonal learning scheme. Engineering with computers, 1–22. Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (2013). Nonlinear programming: Theory and algorithms. Wiley. Bonabeau, E., Theraulaz, G., & Dorigo, M. (1999). Swarm intelligence. Springer. Cao, B., Zhao, J., Gu, Y., Fan, S., & Yang, P. (2019). Security-aware industrial wireless sensor network deployment optimization. 
IEEE Transactions on Industrial Informatics, 16(8), 5309– 5316. Cao, B., Zhao, J., Gu, Y., Ling, Y., & Ma, X. (2020). Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm and Evolutionary Computation, 53, 100626. Cao, B., Dong, W., Lv, Z., Gu, Y., Singh, S., & Kumar, P. (2020). Hybrid microgrid many-objective sizing optimization with fuzzy decision. IEEE Transactions on Fuzzy Systems, 28(11), 2702– 2710. Clerc, M. (2010). Particle swarm optimization (Vol. 93). Wiley.


Das, S., & Suganthan, P. N. (2010). Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15(1), 4–31. de Lacerda, M. G. P., de Araujo Pessoa, L. F., de Lima Neto, F. B., Ludermir, T. B., & Kuchen, H. (2020). A systematic literature review on general parameter control for evolutionary and swarm-based algorithms. Swarm and Evolutionary Computation, 100777. Dhiman, G., & Kumar, V. (2019). Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowledge-Based Systems, 165, 169–196. Dorigo, M., & Blum, C. (2005). Ant colony optimization theory: A survey. Theoretical Computer Science, 344(2–3), 243–278. Duman, E., Uysal, M., & Alkaya, A. F. (2012). Migrating birds optimization: A new metaheuristic approach and its performance on quadratic assignment problem. Information Sciences, 217, 65–77. England, R. (1969). Error estimates for Runge-Kutta type solutions to systems of ordinary differential equations. The Computer Journal, 12(2), 166–170. Erol, O. K., & Eksin, I. (2006). A new optimization method: Big bang–big crunch. Advances in Engineering Software, 37(2), 106–111. Eskandar, H., Sadollah, A., Bahreininejad, A., & Hamdi, M. (2012). Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Computers & Structures, 110, 151–166. Faramarzi, A., Heidarinejad, M., Stephens, B., & Mirjalili, S. (2020). Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, 105190. Fei, X., Wang, J., Ying, S., Hu, Z., & Shi, J. (2020). Projective parameter transfer based sparse multiple empirical kernel learning machine for diagnosis of brain disease. Neurocomputing, 413, 271–283. Fu, X., Pace, P., Aloi, G., Yang, L., & Fortino, G. (2020). Topology optimization against cascading failures on wireless sensor networks using a memetic algorithm. Computer Networks, 177, 107327. Hansen, N., Müller, S. 
D., & Koumoutsakos, P. (2003). Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1), 1–18. Hatamlou, A. (2013). Black hole: A new heuristic optimization approach for data clustering. Information Sciences, 222, 175–184. Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66–73. Houssein, E. H., Saad, M. R., Hashim, F. A., Shaban, H., & Hassaballah, M. (2020). Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 94, 103731. Hu, J., Chen, H., Heidari, A. A., Wang, M., Zhang, X., Chen, Y., & Pan, Z. (2021). Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowledge-Based Systems, 213, 106684. Jeong, S., & Kim, P. (2019). A population-based optimization method using Newton fractal. Complexity, 2019. Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. Paper presented at the Proceedings of ICNN’95-International Conference on Neural Networks. Kiran, M. S. (2015). TSA: Tree-seed algorithm for continuous optimization. Expert Systems with Applications, 42(19), 6686–6698. Koza, J. R., & Rice, J. P. (1992). Automatic programming of robots using genetic programming. Paper presented at the AAAI. Kumar, A., & Bawa, S. (2019). Generalized ant colony optimizer: Swarm-based meta-heuristic algorithm for cloud services execution. Computing, 101(11), 1609–1632. Kutta, W. (1901). Beitrag zur naherungsweisen integration totaler differentialgleichungen. Z. Math. Phys., 46, 435–453. Lampinen, J., & Storn, R. (2004). Differential evolution. In New optimization techniques in engineering (pp. 123–166): Springer.


Li, Y., Liu, Y., Cui, W.-G., Guo, Y.-Z., Huang, H., & Hu, Z.-Y. (2020). Epileptic seizure detection in EEG signals using a unified temporal-spectral squeeze-and-excitation network. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(4), 782–794. Liu, Y., Yang, C., & Sun, Q. (2020). Thresholds based image extraction schemes in big data environment in intelligent traffic management. IEEE Transactions on Intelligent Transportation Systems. Luo, J., Chen, H., Xu, Y., Huang, H., & Zhao, X. (2018). An improved grasshopper optimization algorithm with application to financial stress prediction. Applied Mathematical Modelling, 64, 654–668. Luo, Z., Xie, Y., Ji, L., Cai, Y., Yang, Z., & Huang, G. (2021). Regional agricultural water resources management with respect to fuzzy return and energy constraint under uncertainty: An integrated optimization approach. Journal of Contaminant Hydrology, 103863. Masadeh, R., Mahafzah, B. A., & Sharieh, A. (2019). Sea lion optimization algorithm. Sea, 10(5). Mirjalili, S. (2015). The ant lion optimizer. Advances in Engineering Software, 83, 80–98. Mirjalili, S. (2016). SCA: A sine cosine algorithm for solving optimization problems. KnowledgeBased Systems, 96, 120–133. Naruei, I., & Keynia, F. (2021). Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems. Engineering with computers, 1–32. Özban, A. Y. (2004). Some new variants of Newton’s method. Applied Mathematics Letters, 17(6), 677–682. Patil, P., & Verma, U. (2006). Numerical computational methods. Alpha Science International Ltd. Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization. Swarm Intelligence, 1(1), 33–57. Price, K., Storn, R. M., & Lampinen, J. A. (2006). Differential evolution: A practical approach to global optimization. Springer Science & Business Media. Runge, C. (1895). Über die numerische Auflösung von Differentialgleichungen. Mathematische Annalen, 46(2), 167–178. 
Samadi-koucheksaraee, A., Ahmadianfar, I., Bozorg-Haddad, O., & Asghari-pari, S. A. (2019). Gradient evolution optimization algorithm to optimize reservoir operation systems. Water Resources Management, 33(2), 603–625. Saremi, S., Mirjalili, S., & Lewis, A. (2017). Grasshopper optimisation algorithm: Theory and application. Advances in Engineering Software, 105, 30–47. Shadravan, S., Naji, H., & Bardsiri, V. K. (2019). The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Engineering Applications of Artificial Intelligence, 80, 20–34. Sharma, H., Hazrati, G., & Bansal, J. C. (2019). Spider monkey optimization algorithm. In Evolutionary and swarm intelligence algorithms (pp. 43–59). Springer. Song, J., Zhong, Q., Wang, W., Su, C., Tan, Z., & Liu, Y. (2020). FPDP: Flexible privacy-preserving data publishing scheme for smart agriculture. IEEE Sensors Journal. Storn, R., & Price, K. (1995). Differential evolution-a simple and efficient adaptive scheme for global optimization over continuous spaces (Vol. 3). ICSI Berkeley. Storn, R., & Price, K. (1997). Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359. Talebi, S., & Reisi, F. (2021). A clustering approach for EOS lumping—Using evolutionarybased metaheuristic optimization algorithms. Journal of Petroleum Science and Engineering, 207, 109149. Tan, W.-H., & Mohamad-Saleh, J. (2020). Normative fish swarm algorithm (NFSA) for optimization. Soft Computing, 24(3), 2083–2099. Teo, J. (2006). Exploring dynamic self-adaptive populations in differential evolution. Soft Computing, 10(8), 673–686. Wang, G.-G., Deb, S., & Cui, Z. (2019). Monarch butterfly optimization. Neural Computing and Applications, 31(7), 1995–2014.


Weerakoon, S., & Fernando, T. (2000). A variant of Newton’s method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87–93. Yang, L., & Chen, H. (2019). Fault diagnosis of gearbox based on RBF-PF and particle swarm optimization wavelet neural network. Neural Computing and Applications, 31(9), 4463–4478. Yang, Y., Chen, H., Heidari, A. A., & Gandomi, A. H. (2021). Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Systems with Applications, 177, 114864. Yang, X.-S., & Deb, S. (2009). Cuckoo search via Lévy flights. Paper presented at the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC). Yao, X., Liu, Y., & Lin, G. (1999). Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 3(2), 82–102. Ypma, T. J. (1995). Historical development of the Newton-Raphson method. SIAM Review, 37(4), 531–551. Yu, C., Heidari, A. A., & Chen, H. (2020). A quantum-behaved simulated annealing algorithm-based moth-flame optimization method. Applied Mathematical Modelling, 87, 1–19. Yu, C., Chen, M., Cheng, K., Zhao, X., Ma, C., Kuang, F., & Chen, H. (2021). SGOA: annealingbehaved grasshopper optimizer for global tasks. Engineering with Computers, 1–28. Zeng, H.-B., Liu, X.-G., & Wang, W. (2019). A generalized free-matrix-based integral inequality for stability analysis of time-varying delay systems. Applied Mathematics and Computation, 354, 1–8. Zhang, J., & Sanderson, A. C. (2009). JADE: Adaptive differential evolution with optional external archive. IEEE Transactions on Evolutionary Computation, 13(5), 945–958. Zhao, D., Liu, L., Yu, F., Heidari, A. A., Wang, M., Liang, G., Muhammad, K., Chen, H. (2020). Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowledge-Based Systems, 106510. Zheng, L., & Zhang, X. (2017). Modeling and analysis of modern fluid problems. Academic Press. 
Zitzler, E., & Thiele, L. (1998). An evolutionary algorithm for multiobjective optimization: The strength pareto approach. TIK-report, 43.

Chapter 2

Improving Approaches for Meta-heuristic Algorithms: A Brief Overview

Arya Yaghoubzadeh-Bavandpour, Omid Bozorg-Haddad, Babak Zolghadr-Asli, and Amir H. Gandomi

2.1 Introduction

Optimization techniques are among the most well-known methods used by researchers and engineers to solve complex optimization problems. Many real-world optimization issues have a complex, nonlinear, and multi-dimensional nature, which increases the challenge for researchers and engineers. Because such problems may have multiple optimal solutions, a global search technique is needed to achieve the best solution. Meta-heuristic optimization techniques and algorithms have gained popularity over the years for their acceptable performance in solving complex problems (Yang & He, 2015). For instance, these algorithms

A. Yaghoubzadeh-Bavandpour
Department of Water Science and Engineering, Faculty of Agriculture, Bu-Ali Sina University, Hamedan, Iran
e-mail: [email protected]

O. Bozorg-Haddad (B)
Department of Geography, University of California, Santa Barbara, CA 93106, USA
e-mail: [email protected]; [email protected]

O. Bozorg-Haddad · B. Zolghadr-Asli
Department of Irrigation & Reclamation Engineering, Faculty of Agricultural Engineering & Technology, College of Agriculture & Natural Resources, University of Tehran, 3158777871 Karaj, Iran
e-mail: [email protected]

B. Zolghadr-Asli
The Centre for Water Systems (CWS), The University of Exeter, Exeter, UK

A. H. Gandomi
Faculty of Engineering & Information Technology, University of Technology Sydney, Sydney, Australia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
O. Bozorg-Haddad and B. Zolghadr-Asli (eds.), Computational Intelligence for Water and Environmental Sciences, Studies in Computational Intelligence 1043, https://doi.org/10.1007/978-981-19-2519-1_2


have been applied in various water and environmental studies, including, but not limited to, reservoir operations (e.g., Bozorg-Haddad et al., 2021), water distribution networks (e.g., Solgi et al., 2020), groundwater quality monitoring networks (e.g., Ayvaz & Elçi, 2018), and groundwater pollution source identification (e.g., Han et al., 2020). Considering the many meta-heuristic optimization algorithms that have been introduced over the past decades, researchers now have multiple choices for solving complex problems. In the late 1990s, a theory known as "No Free Lunch" was proposed by Wolpert and Macready (1997). While researchers believe that there are several global, robust, and efficient algorithms for optimization (Zolghadr-Asli et al., 2018), the No Free Lunch (NFL) theorem clearly states that no meta-heuristic optimization method is able to solve all optimization issues. In other words, one optimization algorithm can be a suitable technique for one set of problems but may yield unsatisfactory results on others. This situation encourages developers and researchers to improve existing algorithms or propose new algorithms covering a larger set of problems. Meta-heuristic optimization algorithms have shown promising results in different studies, yet they are unable to reach the optimal solutions on some occasions. Numerous studies have reported that two common problems encountered when solving complex problems are falling into local optima and slow convergence speed (Ba et al., 2020; Saremi et al., 2014a, 2014b; Sun et al., 2018; Yu et al., 2015). Researchers have used various strategies and mechanisms to deal with these issues and improve the performance of optimization algorithms. This chapter is organized as follows: in Sect. 2.2, meta-heuristic optimization algorithms are defined and described; in Sect. 2.3, some of the most well-known improvement strategies are discussed; and in Sect. 2.4, hybrid improving mechanisms are introduced.
The final section presents an overall conclusion.

2.2 Meta-heuristic Optimization Algorithms

Over the years, meta-heuristic optimization algorithms have become popular among researchers and engineers, and various studies have reported their promising results in different fields. According to Mirjalili et al. (2014a, 2014b), there are four main reasons for this popularity: (1) these algorithms are primarily based on simple concepts and are easy to implement; (2) they can avoid falling into local optima; (3) they can be applied in various applications; and (4) no gradient information is required. The inspiration sources of optimization algorithms can be grouped into four categories: swarm intelligence, evolutionary computation, science-based, and human-based algorithms, which are introduced in the following sub-sections.


Table 2.1 Swarm intelligence algorithms

Number  Optimization algorithm                                      Developer
1       Particle swarm optimization (PSO)                           Kennedy and Eberhart (1995)
2       Ant colony optimization (ACO)                               Dorigo et al. (2006)
3       Artificial bee colony algorithm (ABC)                       Basturk and Karaboga (2006)
4       Cuckoo search (CS)                                          Yang and Deb (2009)
5       Firefly algorithm (FA)                                      Yang (2010a, 2010b)
6       Krill herd (KH)                                             Gandomi and Alavi (2012)
7       Grey wolf optimizer (GWO)                                   Mirjalili et al. (2014a, 2014b)
8       Whale optimization algorithm (WOA)                          Mirjalili and Lewis (2016)
9       Salp swarm algorithm (SSA)                                  Mirjalili et al. (2017)
10      Tunicate swarm algorithm (TSA)                              Kaur et al. (2020)
11      Marine predators algorithm (MPA)                            Faramarzi et al. (2020)
12      Colony predation algorithm (CPA)                            Tu et al. (2021)
13      Aquila optimizer (AO)                                       Abualigah, Diabat, et al. (2021a), Abualigah, Yousri, et al. (2021b)
14      Quantum-based avian navigation optimizer algorithm (QANA)   Zamani et al. (2021)

2.2.1 Swarm Intelligence Algorithms

The word 'swarm' refers to processing units that create a swarm, which can be mechanical, computational, mathematical, array elements, robots, or biological units (Kennedy, 2006). The collective behavior of such individuals has inspired researchers to develop models and algorithms, known as swarm intelligence (SI) algorithms, to solve optimization problems. Beni and Wang (1993) coined the term 'swarm intelligence' in the context of cellular robotic systems; SI algorithms simulate the collective behavior of social entities, such as packs of wolves, flocks of birds, schools of fish, and ant colonies. Table 2.1 presents some of the developed SI algorithms.

2.2.2 Evolutionary Computation Algorithms Evolutionary computation (EC) algorithms are motivated by the Darwinian principles of natural evolution. Evolutionary computation is a set of computer-based approaches for solving problems based on natural evolution principles, such as natural selection, mutation, and reproduction (Eiben & Smith, 2015; Engelbrecht, 2007). Various EC algorithms fall under the umbrella of evolutionary-based algorithms, including genetic algorithm (Holland, 1992), evolutionary strategy (Beyer & Schwefel, 2002), genetic programming (Banzhaf et al., 1998), differential evolution (Storn & Price, 1997), and biogeography-based optimization (Simon, 2008).


Table 2.2 Science-based algorithms

Number  Optimization algorithm                                        Developer
1       Artificial chemical reaction optimization algorithm (ACROA)   Alatas (2011)
2       Ray optimization (RO)                                         Kaveh and Khayatazad (2012)
3       Black hole (BH)                                               Hatamlou (2013)
4       Lightning search algorithm (LSA)                              Shareef et al. (2015)
5       Multi-verse optimization (MVO)                                Mirjalili et al. (2016)
6       Electromagnetic field optimization (EFO)                      Abedinpourshotorban et al. (2016)
7       Water evaporation optimization (WEO)                          Kaveh and Bakhshpoori (2016)
8       Lightning attachment procedure optimization (LAPO)            Nematollahi et al. (2017)
9       Thermal exchange optimization (TEO)                           Kaveh and Dadras (2017)
10      Henry gas solubility optimization (HGSO)                      Hashim et al. (2019)
11      Arithmetic optimization algorithm (AOA)                       Abualigah, Diabat, et al. (2021a), Abualigah, Yousri, et al. (2021b)
12      Material generation algorithm (MGA)                           Talatahari et al. (2021)
13      Runge Kutta optimizer (RUN)                                   Ahmadianfar et al. (2021)

2.2.3 Science-Based Algorithms

Science-based algorithms consist of algorithms inspired by different disciplines of science and engineering, such as physical phenomena, chemical reactions, and cosmology theories. Table 2.2 presents some of the science-based algorithms.

2.2.4 Human-Based Algorithms

Human activities and behaviors, including interactions, lifestyles, and societies, have encouraged researchers to propose human-based algorithms. Table 2.3 presents some of the human-based algorithms.

2.3 Improvement Strategies

In some real-world problems, optimization algorithms fail to obtain optimal solutions. Several strategies and tools are available to improve the performance of optimization algorithms in the process of finding the optimal solution(s). Some of


Table 2.3 Human-based algorithms

Number  Optimization algorithm                                        Developer
1       Cultural algorithm (CA)                                       Reynolds (1994)
2       Harmony search (HS)                                           Geem et al. (2001)
3       Seeker optimization algorithm (SOA)                           Dai et al. (2007)
4       Imperialist competitive algorithm (ICA)                       Atashpaz-Gargari and Lucas (2007)
5       Teaching learning-based optimization algorithm (TLBO)         Rao et al. (2011)
6       Mine blast algorithm (MBA)                                    Sadollah et al. (2013)
7       Interior search algorithm (ISA)                               Gandomi (2014)
8       League championship algorithm (LCA)                           Husseinzadeh Kashan (2014)
9       Socio evolution & learning optimization algorithm (SELO)      Kumar et al. (2018)
10      Queuing search (QS)                                           Zhang et al. (2018)
11      Group teaching optimization algorithm (GTOA)                  Zhang and Jin (2020)
12      Search and rescue (SAR) optimization algorithm                Shabani et al. (2020)
13      Hunger games search (HGS)                                     Yang et al. (2021)

these strategies can improve search-space exploration, while others improve convergence speed. There are many approaches for improving the performance of optimization algorithms, some of which are introduced in the following.

2.3.1 Opposition-Based Learning Tizhoosh (2005) introduced opposition-based learning (OBL) for improving the exploration of optimization algorithms in finding the global solution of optimization problems (Abd Elaziz & Oliva, 2018). Without any historical knowledge about the solution space, most optimization algorithms start with an initial random population for finding optimal solutions. In some optimization problems, the optimization process cannot converge to a global solution and falls into local optima. In such circumstances, the OBL strategy is used to solve this situation by first computing the opposite solution for each solution, then considering the value of the objective function to select the best solution.

2.3.1.1 Opposite Number

In an optimization problem where the search process operates over S ∈ [Lb, Ub] (Lb is the lower boundary and Ub is the upper boundary of the problem), the opposite of the real number S is defined as follows:

S̄ = Ub + Lb − S   (2.1)

2.3.1.2 Opposite Vector in Multi-dimensional Space

Let us define S = [S_1, S_2, ..., S_n] as a point in multi-dimensional space, where S_1, S_2, ..., S_n ∈ R and S_i ∈ [Lb_i, Ub_i]. The opposite point S̄ = [S̄_1, S̄_2, ..., S̄_n] is defined as follows:

S̄_i = Ub_i + Lb_i − S_i,   i = 1, 2, ..., n   (2.2)

2.3.1.3 Opposition-Based Learning

In OBL, solution S is replaced by the opposite solution S̄ based on the objective function values: if f(S) is better than f(S̄), the process continues with S; otherwise, it continues with S̄. Solutions are therefore updated with the superior of S and S̄ via a comparison of their objective values. The OBL strategy has been employed in many studies for improving various optimization algorithms, which are listed in Table 2.4.

Table 2.4 Opposition-based learning strategy applications

Number  Developer                                                  OBL + Meta-heuristic optimization algorithm
1       Rahnamayan et al. (2008)                                   OBL + Differential evolution
2       Gao, Wang, et al. (2012)                                   OBL + Harmony search
3       Ma et al. (2014)                                           OBL + Multi-objective evolutionary algorithm based on decomposition
4       Yu et al. (2015)                                           OBL + Firefly algorithm
5       Wang et al. (2016a, 2016c), Wang, Gandomi, et al. (2016b)  OBL + Krill herd algorithm
6       Abd Elaziz et al. (2017)                                   OBL + Sine cosine algorithm
7       Ewees et al. (2018)                                        OBL + Grasshopper optimization
8       Bao et al. (2019)                                          OBL + Dragonfly algorithm
9       Zhang et al. (2020)                                        OBL + Ant colony optimization
10      Yu et al. (2021)                                           OBL + Grey wolf optimizer
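The opposite-point computation of Eqs. (2.1)-(2.2) and the greedy comparison step can be sketched as follows; the helper names opposite and obl_select are illustrative assumptions, not routines from the text:

```python
import numpy as np

def opposite(S, lb, ub):
    """Eqs. (2.1)-(2.2): component-wise opposite of S within [lb, ub]."""
    return np.asarray(ub, float) + np.asarray(lb, float) - np.asarray(S, float)

def obl_select(S, lb, ub, f):
    """Keep the better of S and its opposite (minimization assumed)."""
    S_bar = opposite(S, lb, ub)
    return S if f(S) <= f(S_bar) else S_bar
```

Applying obl_select to every member of a random initial population doubles the chance of starting close to the global solution at the cost of one extra evaluation per member.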


2.3.2 Lévy Flight

As a non-Gaussian random walk process, Lévy flight applies the Lévy distribution to determine step size. As a searching operator, Lévy flight improves an algorithm's exploration of the search space through a combination of short-distance walking and long-distance jumping routes (Zhang & Wang, 2020), which leads to efficient searching and prevents falling into local optima. The Lévy distribution of a total of N random variables, along with the corresponding Fourier transform and inverse Fourier transform, is presented as follows (Yang, 2010a, 2010b):

F_N(k) = exp(−N |k|^β)   (2.3)

L(s) = (1/π) ∫_0^∞ cos(τs) e^(−ατ^β) dτ,   (0 < β ≤ 2)   (2.4)

… the constant w controlling the shrinking of the random-walk boundaries is defined as: w = 2 if t > 0.1T; 3 if t > 0.5T; 4 if t > 0.75T; 5 if t > 0.9T; and 6 if t > 0.95T, where t is the current iteration and T is the maximum number of iterations.

It can be seen that as the iteration number increases, the radius of the random motion decreases in order to ensure convergence of the ALO algorithm. Lastly, when a prey falls into the mouth of an antlion, it is eaten by the predator. Therefore, the location of the antlion is updated to increase its chance of catching new prey, as shown in the equation below:

Antlion_j^i = Ant_j^i   if f(Ant_j^i) > f(Antlion_j^i)

The ALO algorithm, which is inspired by the hunting behavior of antlions, was briefly reviewed in this part, and the main mathematical models of the ALO algorithm were discussed in detail to create a clear insight into this antlion-hunting-inspired optimization algorithm.

7 Ant Colony Optimization Algorithms: Introductory Steps …

147

7.3.2 General Code of ALO in MATLAB

For a general ALO algorithm in MATLAB software, the following code and description are used:

    % Initialize the positions of antlions and ants
    antlion_position=initialization(N,dim,ub,lb);
    ant_position=initialization(N,dim,ub,lb);

    % Preallocate the sorted antlions, the elite, the convergence curve,
    % and the fitness vectors
    Sorted_antlions=zeros(N,dim);
    Elite_antlion_position=zeros(1,dim);
    Elite_antlion_fitness=inf;
    Convergence_curve=zeros(1,Max_iter);
    antlions_fitness=zeros(1,N);
    ants_fitness=zeros(1,N);

    % Evaluate and sort the initial antlions; the best one becomes the elite
    for i=1:size(antlion_position,1)
        antlions_fitness(1,i)=fobj(antlion_position(i,:));
    end
    [sorted_antlion_fitness,sorted_indexes]=sort(antlions_fitness);
    for newindex=1:N
        Sorted_antlions(newindex,:)=antlion_position(sorted_indexes(newindex),:);
    end
    Elite_antlion_position=Sorted_antlions(1,:);
    Elite_antlion_fitness=sorted_antlion_fitness(1);

    % Main loop
    Current_iter=2;
    while Current_iter<Max_iter+1
        % ... (random walks of ants around the roulette-selected antlion
        % and the elite)
        % Bring back ants that leave the search space to its boundaries
        Flag4ub=ant_position(i,:)>ub;
        Flag4lb=ant_position(i,:)<lb;

… (>25%) in E estimates over short canopies compared to tall canopies. Xu et al. (2019) used the three-cornered hat method to investigate the uncertainty of 12 different E products over the United States, and their results showed that GLEAM and a machine learning method (monthly global E data obtained by upscaling FLUXNET observations with a model tree ensemble) had lower uncertainty compared to the other datasets. Several studies focused on comparing GLEAM with other E datasets, including satellite-based products and land surface model-based estimates. These comparisons were used to select the most accurate and relevant E datasets for different applications like hydrological modelling and analysis. In another E intercomparison study using the Australian Water Resources Assessment Landscape model (AWRA-L), GLEAM, GLDAS, and MOD16, Khan et al. (2020) highlighted the superior performance of GLEAM in forest and grassland landscapes. However, when the overall accuracy is considered, the AWRA-L model outperformed the other datasets.
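The three-cornered hat method used by Xu et al. (2019) estimates each product's error variance from pairwise difference series alone, assuming mutually uncorrelated errors. For three datasets a, b, c the classical closed form is var_a = (Var(a−b) + Var(a−c) − Var(b−c)) / 2, and cyclically for the others. A minimal sketch of the three-dataset case (the study itself handles many more products):

```python
from statistics import pvariance

def diff_var(x, y):
    """Population variance of the pairwise difference series."""
    return pvariance([xi - yi for xi, yi in zip(x, y)])

def three_cornered_hat(a, b, c):
    """Error variance of each series, assuming mutually uncorrelated errors."""
    vab, vac, vbc = diff_var(a, b), diff_var(a, c), diff_var(b, c)
    return {
        "a": (vab + vac - vbc) / 2,
        "b": (vab + vbc - vac) / 2,
        "c": (vac + vbc - vab) / 2,
    }
```

No reference "truth" is needed, which is what makes the method attractive for intercomparing E products that all differ from unavailable ground truth.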
Another GLEAM E evaluation was performed by Niyogi et al. (2020), where the authors evaluated the GLEAM (both a and b versions), MOD16, North American Land Data Assimilation System (NLDAS), and Operational Simplified Surface Energy Balance (SSEBop) models using AmeriFlux data. Despite the coarser spatial resolution of GLEAM compared to the other three models, the authors reported higher accuracy for the GLEAM version b E estimates. The GLEAM version a, however, showed lower accuracy and a relatively inferior performance, attributed to its reanalysis-based inputs (compared to the remotely sensed data used in the GLEAM version b). During the WAter Cycle Multi-mission Observation Strategy - EvapoTranspiration (WACMOS-ET) project, a variety of evaporation estimation methods were evaluated using consistent forcing (Michel et al., 2016; Miralles et al., 2016). The

25 Ten Years of GLEAM: A Review of Scientific Advances and Applications

531

models evaluated were the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman-Monteith model from MOD16, the Surface Energy Balance System (SEBS), and GLEAM. Michel et al. (2016) used meteorological data from 24 FLUXNET towers between 2005 and 2007 to validate the different models. Results showed a better performance in wet climate regimes compared to dry ones for all models, and an overall better performance of GLEAM and PT-JPL (with SEBS and MOD16 preferentially overestimating and underestimating, respectively). Nonetheless, in dry conditions, disparities in temporal dynamics between GLEAM and PT-JPL were apparent, as expected given their different approaches to representing evaporative stress. The results of WACMOS-ET were confirmed by the analyses during the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project (McCabe et al., 2016), dedicated to the validation of the same evaporation models but using a different set of forcing datasets. More recently, Talsma et al. (2018a) studied the sensitivity of GLEAM and two other global remote sensing evaporation models (PT-JPL and MOD16) to their forcing variables. Their results indicated that total evaporation in GLEAM is primarily sensitive to net radiation, while for PM-MOD and PT-JPL it is relative humidity and NDVI (respectively) that are the primary controls. GLEAM was also reported to be more robust to errors in the forcing data. In a subsequent study, the outputs of the three models were further evaluated in their ability to partition E into Es, Et, and Ei (Talsma et al., 2018b). These individual E components were found to have significantly larger biases than the total E, yet results varied largely depending on climate and land cover type. In the case of GLEAM, the comparisons showed lower Es compared to the other datasets.
These errors were attributed to the absence of below-canopy evaporation estimates in the GLEAM algorithm; on the other hand, GLEAM Et and Es were shown to be robust.
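The finding that individual components carry larger biases than total E reflects error compensation: over- and underestimates of Et, Es, and Ei can cancel in their sum. A toy numeric illustration (all values invented for demonstration only):

```python
# Hypothetical daily-mean components (mm/day); both partitions sum to 3.2
obs = {"Et": 2.0, "Es": 0.8, "Ei": 0.4}   # "observed" partition
mod = {"Et": 2.4, "Es": 0.5, "Ei": 0.3}   # modelled: Et too high, Es and Ei too low

component_bias = {k: mod[k] - obs[k] for k in obs}   # per-component errors
total_bias = sum(mod.values()) - sum(obs.values())   # component errors cancel in the sum
mean_abs_component_bias = sum(abs(b) for b in component_bias.values()) / 3
```

Here the total E bias is essentially zero even though every component is biased, so a model can look accurate in total E while misrepresenting the partitioning, exactly the behaviour the component-level evaluation was designed to expose.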

25.6 Scientific Applications

GLEAM datasets have been used for a variety of applications, such as long-term trend analysis (Lu et al., 2019; Nooni et al., 2019; Yang et al., 2018; Zhang et al., 2019a, 2019b), investigation of heatwaves (Liu et al., 2020; Miralles et al., 2014), hydrological model calibration and validation (Dembélé et al., 2020a, 2020b; Jin & Jin, 2020; Koppa & Gebremichael, 2017, 2020; López et al., 2017; Sneeuw et al., 2014; Wagner et al., 2016), water budget assessment (Rasmy et al., 2019), and dynamic vegetation change studies (Pourmansouri & Rahimzadegan, 2020; Duveiller et al., 2018; Papagiannopoulou et al., 2017). Spatial and temporal variations of E were analysed based on GLEAM products over different regions such as Switzerland (Rouholahnejad Freund et al., 2020), the Amazon River Basin (Da Motta Paca et al., 2019), China (Lu et al., 2019; Qiu et al., 2021; Yang et al., 2018, 2021a, 2021b), and the Nile River Basin (Nooni et al., 2019). Another frequent application involves the use of the data for drought monitoring (see e.g., Jiang et al., 2021; Liang et al., 2020; Miralles et al., 2019; Rehana & Monish, 2021; Rehana & Naidu,

532

M. N. Jahromi et al.

2021; Satgé et al., 2019; Vicente-Serrano et al., 2018; Zhang et al., 2019a, 2019b). For example, results from Jiang et al. (2021) show the value of using GLEAM data for drought monitoring over China, Rehana and Monish (2021) and Rehana and Naidu (2021) over India, and Peng et al. (2020) over Africa. At the global scale, Peng et al. (2019) used the data to estimate the Evaporative Stress Index (ESI), and Vicente-Serrano et al. (2018) to develop the Standardised Evaporative Deficit Index (SEDI). Likewise, GLEAM data have been used for the study of heatwaves and soil moisture–temperature feedbacks during hot events in multiple studies (e.g., Geirinhas et al., 2021; Miralles et al., 2012, 2014; Schumacher et al., 2019). Hydrological models used to estimate streamflow usually require calibration; using datasets such as GLEAM can help to optimise this calibration process (Koppa et al., 2019). For instance, Jin and Jin (2020), Sirisena et al. (2020), and López-Ballesteros et al. (2019) used GLEAM for the calibration of hydrological models over the northwest region of China, Myanmar, Nigeria, and the southeast of Spain, respectively. In all these cases, the use of GLEAM resulted in an improvement in streamflow simulations. Likewise, studies such as Lv et al. (2021) used GLEAM data to evaluate, rather than to calibrate, hydrological models. Evaporation from GLEAM has also been used in conjunction with estimates of other water balance components to study the water budget over different regions, such as Canada (Wong et al., 2021), South America (Moreira et al., 2019), and Sri Lanka (Rasmy et al., 2019), or worldwide (Lorenz et al., 2014). In addition, multiple studies have targeted different facets of climate change, in specific regions (Qiu et al., 2021; Rehana & Monish, 2021; Rehana & Naidu, 2021; Yang et al., 2021a, 2021b; Yin et al., 2022) and also globally (Konapala et al., 2020; Mao et al., 2015).
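For reference, the Evaporative Stress Index mentioned above is based on the ratio of actual to potential evaporation; standardized anomalies of this ratio flag drought stress. A minimal sketch of such a standardized-anomaly index (simplified; operational ESI products involve compositing and climatology choices not shown here):

```python
from statistics import mean, pstdev

def esi(e_actual, e_potential):
    """Standardized anomalies of the ratio E/Ep: negative values indicate stress."""
    ratio = [e / ep for e, ep in zip(e_actual, e_potential)]
    mu, sigma = mean(ratio), pstdev(ratio)
    return [(r - mu) / sigma for r in ratio]
```

Because the ratio normalizes by atmospheric demand, the index isolates the land-surface control on evaporation, which is why E datasets such as GLEAM are a natural input.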
Finally, vegetation growth and activity in response to climate have been investigated using GLEAM data over China (Liang et al., 2020; Yang et al., 2021a, 2021b), Iran (Pourmansouri & Rahimzadegan, 2020), Canada (Gonsamo et al., 2019), and over the globe (Duveiller et al., 2018; Lv et al., 2021; Papagiannopoulou et al., 2017; Porada et al., 2018). Several studies have merged different E datasets, including GLEAM, to increase the accuracy of the final dataset. For instance, Da Motta Paca et al. (2019) combined six different E products (GLEAM, ALEXI, SEBS, MOD16, CMRSET, and SSEBop) to create a merged E dataset over the Amazon River Basin. In addition, several approaches towards a global merger have been proposed (e.g., Hobeichi et al., 2018; Jiménez et al., 2018; Mueller et al., 2013). One of the most frequent uses of the GLEAM data has been the evaluation of climate models and reanalysis datasets. For instance, Reichle et al. (2017) used Ei data from GLEAM to evaluate the improvements in the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) with respect to its predecessor. They indicated that Ei estimates from MERRA-2, and especially MERRA-Land, were much closer to GLEAM, correcting previously reported overestimations in the original MERRA. More recently, Wang et al. (2021) evaluated the E estimates from the Coupled Model Intercomparison Project (CMIP6) models using GLEAM data, and showed that state-of-the-art climate models tend to overestimate evaporation in most of the world, even though they are still able to represent climatic gradients correctly.


The use of GLEAM for agricultural and water management applications, however, has been limited over the past decade. This is mostly attributed to the coarse (0.25°) resolution of the open-access datasets being insufficient for most applications aiming to resolve local-scale water balances. Nonetheless, Martens et al. (2018) explored the potential of GLEAM for high-resolution simulations (GLEAM-HR), executing the model at 100 m spatial resolution over the Netherlands, Flanders, and western Germany. This was enabled by the availability of high-resolution microwave-based geophysical observations and local in situ data provided by the Dutch commercial company VanderSat (now Planet Labs). GLEAM demonstrated a good performance at those resolutions, with correlation coefficients ranging between 0.65 and 0.95 against in situ measurements of SM (29 sites) and E (5 sites). This study set the basis for the ongoing collaboration between the GLEAM development team and VanderSat to run GLEAM at high resolutions and in near-real time for commercial applications. Currently, VanderSat provides estimates of E and evaporative deficit over the Netherlands to all the Dutch waterboards to aid in operational water management (i.e., the SATDATA project). The system is implemented on the cloud and provides hindcast data as well as a two-day forecast at 100 m × 100 m resolution, with daily updates based on weather forecast data. The final stakeholders of these data range from (re)insurance companies to agricultural actors and water managers.

25.7 GLEAM Future Roadmap

Ongoing developments in GLEAM proceed along two directions: high resolution (GLEAM-HR) and hybrid modelling (GLEAM-Hybrid). Plans to continue building upon the high-resolution prototype by Martens et al. (2018) in collaboration with VanderSat are enabled by the increasing availability of high-resolution satellite observations. Specifically, within the ongoing ET-Sense project funded by the Belgian Science Policy Office (BELSPO), the potential of using observations from the Sentinel constellation is being explored, aiming to produce E and SM at 1 km resolution across Europe. For example, the assimilation of Sentinel-1 backscatter measurements was recently tested using either a traditional empirical model to convert between backscatter observations and model variables or a machine learning approach based on support vector regression (SVR) (Rains et al., 2021). Ongoing work involves the creation of a 1 km net radiation dataset to force GLEAM by combining the high temporal resolution of geostationary radiation estimates and land surface temperature retrievals with the high spatial resolution of Sentinel-3 land surface temperature. Likewise, the Digital Twin Earth (DTE) Hydrology project funded by the European Space Agency (ESA) focuses on the production of 1 km evaporation estimates over the Po River basin in northern Italy. The aim of DTE-Hydrology is to provide a digital prototype of the water cycle by optimally combining satellite observations with models and other data sources, such as in situ observations. As a follow-up to DTE-Hydrology, a new ESA project (namely 4DMED-Hydrology) was recently launched with the goal of providing 1 km resolution hydrological variables (including GLEAM E and SM) over the entire Mediterranean basin.
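Of the two backscatter-to-variable mappings tested by Rains et al. (2021), the water cloud model is the simpler: vegetation contributes its own backscatter and attenuates the soil signal, while soil backscatter rises roughly linearly with soil moisture. A sketch of the forward model and its algebraic inversion for soil moisture (coefficients A, B, C, D are site-calibrated in practice; the values and units below are placeholders, with theta in radians):

```python
import math

def wcm_forward(sm, veg, theta, A=0.12, B=0.50, C=-20.0, D=40.0):
    """Water cloud model: total backscatter from soil moisture and a vegetation descriptor."""
    t2 = math.exp(-2 * B * veg / math.cos(theta))      # two-way canopy attenuation
    sigma_veg = A * veg * math.cos(theta) * (1 - t2)   # vegetation contribution
    sigma_soil = C + D * sm                            # bare-soil term, linear in soil moisture
    return sigma_veg + t2 * sigma_soil

def wcm_invert(sigma_total, veg, theta, A=0.12, B=0.50, C=-20.0, D=40.0):
    """Invert the model for soil moisture given observed backscatter and vegetation."""
    t2 = math.exp(-2 * B * veg / math.cos(theta))
    sigma_veg = A * veg * math.cos(theta) * (1 - t2)
    return ((sigma_total - sigma_veg) / t2 - C) / D
```

The SVR alternative learns this mapping from data instead of prescribing it, trading physical interpretability for flexibility.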


Future developments of GLEAM focus on improving evaporation estimates by leveraging recent advances in deep learning. This follows the intent to rely on artificial intelligence for improving those processes that are in need of improvement, as identified by the studies dedicated to the evaluation of GLEAM (see Sect. 25.5). The resulting "hybrid" model (GLEAM-Hybrid) seamlessly combines the process-based nature of the GLEAM model with machine learning. A first prototype has already been developed, which replaces the most uncertain and empirical component: the evaporative stress module (Koppa et al., 2021a, 2021b). The default empirical stress function in GLEAM (see Sect. 25.2) was developed based on results from local-scale experiments, and does not consider environmental drivers of transpiration stress such as air temperature (heat stress), vapour pressure deficit (atmospheric demand for water), incoming shortwave radiation (light limitation), and carbon dioxide. In GLEAM-Hybrid, this empirical equation is replaced by a deep learning model trained on transpiration stress, as observed at measurement sites distributed across the globe, using the various environmental factors detailed above as predictors. The deep learning model is then embedded within the process-based version of GLEAM such that the process-based and deep learning components run at daily scales. An extensive validation showed that the GLEAM-Hybrid model performs well in representing the temporal dynamics and spatial structure of evaporation observed at flux towers (Koppa et al., 2021a, 2021b). The flexibility of the GLEAM-Hybrid architecture will enable the assimilation of higher resolution data in the future, aiming towards a high-resolution land reanalysis hybrid model that leverages the existing range of satellite observations with the dedicated goal of providing accurate E data.
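The hybrid construction can be summarised as swapping a hand-crafted stress function S ∈ [0, 1] for a learned one, while the surrounding process-based chain (caricatured here as transpiration = S × potential transpiration) is left untouched. A schematic sketch with a stand-in "learned" function (the real GLEAM-Hybrid embeds a trained deep network; all names and coefficients below are illustrative):

```python
def empirical_stress(soil_moisture):
    """Caricature of a default GLEAM-style stress: a function of root-zone soil moisture only."""
    return max(0.0, min(1.0, soil_moisture / 0.35))

def learned_stress(soil_moisture, vpd, temp):
    """Stand-in for a trained network: adds VPD and heat down-regulation (toy coefficients)."""
    s = max(0.0, min(1.0, soil_moisture / 0.35))
    s *= max(0.0, 1.0 - 0.15 * vpd)        # atmospheric-demand penalty
    s *= 1.0 if temp < 35.0 else 0.8       # crude heat-stress penalty
    return max(0.0, min(1.0, s))

def transpiration(e_potential, stress):
    """Process-based step: actual transpiration = stress x potential transpiration."""
    return stress * e_potential
```

Because only the stress module is swapped, mass and energy bookkeeping in the rest of the model is preserved, which is the appeal of hybrid over fully data-driven modelling.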

25.8 Conclusion

In this study, we provided an overview of the GLEAM model and its development. A total of 178 peer-reviewed scientific articles (in English) that used GLEAM data, published from January 2013 to October 2021, were reviewed. Our assessment showed that most of the studies that used GLEAM outputs were performed over Asia or globally, and that the total evaporation product is the most broadly used output of GLEAM. The accuracy of the E products has been reported to vary across different biomes. Nevertheless, several studies highlighted its overall good performance with respect to other existing E datasets. The model has also been reported to be particularly sensitive to net radiation, and a lower uncertainty has been reported for the version whose inputs are fully based on remote sensing. Given the empirical nature of the evaporative stress module, improving this aspect has been a priority in recent years. To this end, ongoing progress is centered around a hybrid approach in which the model physics are reinforced by artificial intelligence. Moreover, the use of GLEAM at higher resolution by assimilating novel satellite observations remains a priority, both for scientific and for commercial applications.


Acknowledgements The development of GLEAM has been enabled by the Belgian Science Policy Office (BELSPO) ET-Sense (contract SR/02/377) and ALBERI (SR/02/373) projects, and the European Space Agency (ESA) DTE-Hydrology (contract 4000129870/20/I-NB) project. DGM acknowledges support from the European Research Council (ERC), DRY-2-DRY project (grant no. 715254). AK acknowledges support from the European Union Horizon 2020 Programme (DOWN2EARTH, 869550).

References

Abdi, A., Ghahreman, N., & Ghamghami, M. (2020). Evaluation of evapotranspiration estimations of GLEAM model in northern part of Karkhe basin. Iranian Journal of Irrigation & Drainage, 14(2), 366–378.

Albergel, C., Munier, S., Leroux, D. J., Dewaele, H., Fairbairn, D., Barbu, A. L., Gelati, E., Dorigo, W., Faroux, S., Meurey, C., Le Moigne, P., & Calvet, J. C. (2017). Sequential assimilation of satellite-derived vegetation and soil moisture products using SURFEX_v8.0: LDAS-Monde assessment over the Euro-Mediterranean area. Geoscientific Model Development, 10(10), 3889–3912.

Bai, P., & Liu, X. (2018). Intercomparison and evaluation of three global high-resolution evapotranspiration products across China. Journal of Hydrology, 566, 743–755.

Baik, J., Liaqat, U. W., & Choi, M. (2018a). Assessment of satellite- and reanalysis-based evapotranspiration products with two blending approaches over the complex landscapes and climates of Australia. Agricultural and Forest Meteorology, 263, 388–398.

Baik, J., Park, J., & Choi, M. (2020). Blending multi-source evapotranspiration datasets via triple collocation approach. Authorea Preprints.

Baik, J., Park, J., Lee, S., Kim, U., & Choi, M. (2018b). Assessment of merging technique using Triple Collocation (TC) from satellite and reanalysis dataset over different land covers in East Asia: GLDAS, MOD16, GLEAM, and MERRA. In AGU Fall Meeting Abstracts (Vol. 2018, pp. H51R-1554).

Benedict, I., Heerwaarden, C. C. V., Weerts, A. H., & Hazeleger, W. (2019). The benefits of spatial resolution increase in global simulations of the hydrological cycle evaluated for the Rhine and Mississippi basins. Hydrology and Earth System Sciences, 23(3), 1779–1800.

Chao, L., Zhang, K., Wang, J., Feng, J., & Zhang, M. (2021). A comprehensive evaluation of five evapotranspiration datasets based on ground and GRACE satellite observations: Implications for improvement of evapotranspiration retrieval algorithm. Remote Sensing, 13(12), 2414.

Dembélé, M., Hrachowitz, M., Savenije, H. H., Mariéthoz, G., & Schaefli, B. (2020a). Improving the predictive skill of a distributed hydrological model by calibration on spatial patterns with multiple satellite data sets. Water Resources Research, 56(1).

Dembélé, M., Ceperley, N., Zwart, S. J., Salvadore, E., Mariethoz, G., & Schaefli, B. (2020b). Potential of satellite and reanalysis evaporation datasets for hydrological modelling under various model calibration strategies. Advances in Water Resources, 143, 103667.

Dolman, A. J., Miralles, D. G., & de Jeu, R. A. (2014). Fifty years since Monteith's 1965 seminal paper: The emergence of global ecohydrology. Ecohydrology, 7(3), 897–902.

Dorigo, W., Dietrich, S., Aires, F., Brocca, L., Carter, S., Cretaux, J. F., Dunkerley, D., Enomoto, H., Forsberg, R., Güntner, A., Hegglin, M. I., & Aich, V. (2021). Closing the water cycle from observations across scales: Where do we stand? Bulletin of the American Meteorological Society, 102(10), E1897–E1935.

Draper, C. S., Reichle, R. H., & Koster, R. D. (2018). Assessment of MERRA-2 land surface energy flux estimates. Journal of Climate, 31(2), 671–691.

Duveiller, G., Hooker, J., & Cescatti, A. (2018). The mark of vegetation change on Earth's surface energy balance. Nature Communications, 9(1), 1–12.


Gash, J. H. C. (1979). An analytical model of rainfall interception by forests. The Quarterly Journal of the Royal Meteorological Society, 105, 43–55. https://doi.org/10.1002/qj.49710544304

Geirinhas, J. L., Russo, A., Libonati, R., Sousa, P. M., Miralles, D. G., & Trigo, R. M. (2021). Recent increasing frequency of compound summer drought and heatwaves in Southeast Brazil. Environmental Research Letters, 16(3), 034036.

Gonsamo, A., Ter-Mikaelian, M. T., Chen, J. M., & Chen, J. (2019). Does earlier and increased spring plant growth lead to reduced summer soil moisture and plant growth on landscapes typical of Tundra-Taiga interface? Remote Sensing, 11(17), 1989.

Guillod, B. P., Orlowsky, B., Miralles, D., Teuling, A. J., Blanken, P. D., Buchmann, N., Ciais, P., Ek, M., Findell, K. L., Gentine, P., Lintner, B. R., & Seneviratne, S. (2014). Land-surface controls on afternoon precipitation diagnosed from observational data: Uncertainties and confounding factors. Atmospheric Chemistry and Physics, 14(16), 8343–8367.

Hobeichi, S., Abramowitz, G., Evans, J., & Ukkola, A. (2018). Derived optimal linear combination evapotranspiration (DOLCE): A global gridded synthesis ET estimate. Hydrology and Earth System Sciences, 22(2), 1317–1336.

Jiang, S., Wei, L., Ren, L., Xu, C. Y., Zhong, F., Wang, M., Zhang, L., Yuan, F., & Liu, Y. (2021). Utility of integrated IMERG precipitation and GLEAM potential evapotranspiration products for drought monitoring over mainland China. Atmospheric Research, 247, 105141.

Jiménez, C., Martens, B., Miralles, D. M., Fisher, J. B., Beck, H. E., & Fernández-Prieto, D. (2018). Exploring the merging of the global land evaporation WACMOS-ET products based on local tower measurements. Hydrology and Earth System Sciences, 22(8), 4513–4533.

Jin, X., & Jin, Y. (2020). Calibration of a distributed hydrological model in a data-scarce basin based on GLEAM datasets. Water, 12(3), 897.

Khan, M. S., Liaqat, U. W., Baik, J., & Choi, M. (2018). Stand-alone uncertainty characterization of GLEAM, GLDAS and MOD16 evapotranspiration products using an extended triple collocation approach. Agricultural and Forest Meteorology, 252, 256–268.

Khan, M. S., Baik, J., & Choi, M. (2020). Inter-comparison of evapotranspiration datasets over heterogeneous landscapes across Australia. Advances in Space Research, 66(3), 533–545.

Konapala, G., Mishra, A. K., Wada, Y., & Mann, M. E. (2020). Climate change will affect global water availability through compounding changes in seasonal precipitation and evaporation. Nature Communications, 11(1), 1–10.

Koppa, A., Rains, D., Hulsman, P., & Miralles, D. (2021a). A deep learning-based hybrid model of global terrestrial evaporation.

Koppa, A., Alam, S., Miralles, D. G., & Gebremichael, M. (2021b). Budyko-based long-term water and energy balance closure in global watersheds from earth observations. Water Resources Research, 57(5), e2020WR028658.

Koppa, A., & Gebremichael, M. (2020). Improving the applicability of hydrologic models for food–energy–water nexus studies using remote sensing data. Remote Sensing, 12(4), 599.

Koppa, A., Gebremichael, M., & Yeh, W. W. (2019). Multivariate calibration of large scale hydrologic models: The necessity and value of a Pareto optimal approach. Advances in Water Resources, 130, 129–146.

Koppa, A., & Gebremichael, M. (2017). A framework for validation of remotely sensed precipitation and evapotranspiration based on the Budyko hypothesis. Water Resources Research, 53(10), 8487–8499.

Lee, Y., Im, B., Kim, K., & Rhee, K. (2020). Adequacy evaluation of the GLDAS and GLEAM evapotranspiration by eddy covariance method. Journal of Korea Water Resources Association, 53(10), 889–902.

Liang, C., Chen, T., Dolman, H., Shi, T., Wei, X., Xu, J., & Hagan, D. F. T. (2020). Drying and wetting trends and vegetation covariations in the drylands of China. Water, 12(4), 933.

Liu, X., He, B., Guo, L., Huang, L., & Chen, D. (2020). Similarities and differences in the mechanisms causing the European summer heatwaves in 2003, 2010, and 2018. Earth's Future, 8(4), e2019EF001386.


Liu, W., Wang, L., Zhou, J., Li, Y., Sun, F., Fu, G., Li, X., & Sang, Y. F. (2016). A worldwide evaluation of basin-scale evapotranspiration estimates against the water balance method. Journal of Hydrology, 538, 82–95.

López, P., Sutanudjaja, E. H., Schellekens, J., Sterk, G., & Bierkens, M. F. (2017). Calibration of a large-scale hydrological model using satellite-based soil moisture and evapotranspiration products. Hydrology and Earth System Sciences, 21(6), 3125–3144.

Lu, J., Wang, G., Gong, T., Hagan, D. F. T., Wang, Y., Jiang, T., & Su, B. (2019). Changes of actual evapotranspiration and its components in the Yangtze River valley during 1980–2014 from satellite assimilation product. Theoretical and Applied Climatology, 138(3), 1493–1510.

Lv, M., Xu, Z., & Lv, M. (2021). Evaluating hydrological processes of the atmosphere-vegetation interaction model and MERRA-2 at global scale. Atmosphere, 12(1), 16.

López-Ballesteros, A., Senent-Aparicio, J., Srinivasan, R., & Pérez-Sánchez, J. (2019). Assessing the impact of best management practices in a highly anthropogenic and ungauged watershed using the SWAT model: A case study in the El Beal Watershed (Southeast Spain). Agronomy, 9(10), 576.

Lorenz, C., Kunstmann, H., Devaraju, B., Tourian, M. J., Sneeuw, N., & Riegger, J. (2014). Large-scale runoff from landmasses: A global assessment of the closure of the hydrological and atmospheric water balances. Journal of Hydrometeorology, 15(6), 2111–2139.

Majozi, N. P., Mannaerts, C. M., Ramoelo, A., Mathieu, R., Mudau, A. E., & Verhoef, W. (2017). An intercomparison of satellite-based daily evapotranspiration estimates under different eco-climatic regions in South Africa. Remote Sensing, 9(4), 307.

Mao, J., Fu, W., Shi, X., Ricciuto, D. M., Fisher, J. B., Dickinson, R. E., Wei, Y., Shem, W., Piao, S., Wang, K., Schwalm, C. R., & Zhu, Z. (2015). Disentangling climatic and anthropogenic controls on global terrestrial evapotranspiration trends. Environmental Research Letters, 10(9), 094008.

Martens, B., De Jeu, R. A., Verhoest, N. E., Schuurmans, H., Kleijer, J., & Miralles, D. G. (2018). Towards estimating land evaporation at field scales using GLEAM. Remote Sensing, 10(11), 1720.

Martens, B., Miralles, D. G., Lievens, H., Van Der Schalie, R., De Jeu, R. A., Fernández-Prieto, D., Beck, H. E., Dorigo, W. A., & Verhoest, N. E. (2017). GLEAM v3: Satellite-based land evaporation and root-zone soil moisture. Geoscientific Model Development, 10(5), 1903–1925.

McCabe, M. F., Ershadi, A., Jimenez, C., Miralles, D. G., Michel, D., & Wood, E. F. (2016). The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data. Geoscientific Model Development, 9(1), 283–305.

Melo, D. C. D., Anache, J. A. A., Borges, V. P., Miralles, D. G., Martens, B., Fisher, J. B., Nobrega, R. L., Moreno, A., Cabral, O. M., Rodrigues, T. R., & Wendland, E. (2021). Are remote sensing evapotranspiration models reliable across South American ecoregions? Water Resources Research, e2020WR028752.

Michel, D., Jiménez, C., Miralles, D. G., Jung, M., Hirschi, M., Ershadi, A., Martens, B., McCabe, M. F., Fisher, J. B., Mu, Q., Seneviratne, S. I., & Fernández-Prieto, D. (2016). The WACMOS-ET project - Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms. Hydrology and Earth System Sciences, 20(2), 803–822.

Miralles, D. G., Jiménez, C., Jung, M., Michel, D., Ershadi, A., McCabe, M. F., Hirschi, M., Martens, B., Dolman, A. J., Fisher, J. B., Mu, Q., & Fernández-Prieto, D. (2016). The WACMOS-ET project - Part 2: Evaluation of global terrestrial evaporation data sets. Hydrology and Earth System Sciences, 20(2), 823–842.

Miralles, D. G., Gash, J. H., Holmes, T. R., de Jeu, R. A., & Dolman, A. J. (2010). Global canopy interception from satellite observations. Journal of Geophysical Research: Atmospheres, 115(D16).

Miralles, D. G., Gentine, P., Seneviratne, S. I., & Teuling, A. J. (2019). Land–atmospheric feedbacks during droughts and heatwaves: State of the science and current challenges. Annals of the New York Academy of Sciences, 1436(1), 19.


Miralles, D. G., Holmes, T. R. H., De Jeu, R. A. M., Gash, J. H., Meesters, A. G. C. A., & Dolman, A. J. (2011). Global land-surface evaporation estimated from satellite-based observations. Hydrology and Earth System Sciences, 15(2), 453–469.

Miralles, D. G., Teuling, A. J., Van Heerwaarden, C. C., & De Arellano, J. V. G. (2014). Mega-heatwave temperatures due to combined soil desiccation and atmospheric heat accumulation. Nature Geoscience, 7(5), 345–349.

Miralles, D. G., Van Den Berg, M. J., Teuling, A. J., & De Jeu, R. A. M. (2012). Soil moisture-temperature coupling: A multiscale observational analysis. Geophysical Research Letters, 39(21).

Moletto-Lobos, I., Mattar, C., & Barichivich, J. (2020). Performance of satellite-based evapotranspiration models in temperate pastures of Southern Chile. Water, 12(12), 3587.

Moreira, A. A., Ruhoff, A. L., Roberti, D. R., de Arruda Souza, V., da Rocha, H. R., & de Paiva, R. C. D. (2019). Assessment of terrestrial water balance using remote sensing data in South America. Journal of Hydrology, 575, 131–147.

Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., & Seneviratne, S. I. (2013). Benchmark products for land evapotranspiration: LandFlux-EVAL multi-data set synthesis. Hydrology and Earth System Sciences, 17(10), 3707–3720.

Mueller, B., Seneviratne, S. I., Jimenez, C., Corti, T., Hirschi, M., Balsamo, G., Ciais, P., Dirmeyer, P., Fisher, J. B., Guo, Z., Jung, M., & Zhang, Y. (2011). Evaluation of global observations-based evapotranspiration datasets and IPCC AR4 simulations. Geophysical Research Letters, 38(6).

Niyogi, D., Jamshidi, S., Smith, D., & Kellner, O. (2020). Evapotranspiration climatology of Indiana using in situ and remotely sensed products. Journal of Applied Meteorology and Climatology, 59(12), 2093–2111.

Nooni, I. K., Wang, G., Hagan, D. F. T., Lu, J., Ullah, W., & Li, S. (2019). Evapotranspiration and its components in the Nile River Basin based on long-term satellite assimilation product. Water, 11(7), 1400.

Paca, V. H. D. M., Espinoza-Dávalos, G. E., Hessels, T. M., Moreira, D. M., Comair, G. F., & Bastiaanssen, W. G. (2019). The spatial variability of actual evapotranspiration across the Amazon River Basin based on remote sensing products validated with flux towers. Ecological Processes, 8(1), 1–20.

Papagiannopoulou, C., Miralles, D. G., Dorigo, W. A., Verhoest, N. E. C., Depoorter, M., & Waegeman, W. (2017). Vegetation anomalies caused by antecedent precipitation in most of the world. Environmental Research Letters, 12(7), 074016.

Peng, J., Dadson, S., Leng, G., Duan, Z., Jagdhuber, T., Guo, W., & Ludwig, R. (2019). The impact of the Madden-Julian Oscillation on hydrological extremes. Journal of Hydrology, 571, 142–149.

Peng, J., Dadson, S., Hirpa, F., Dyer, E., Lees, T., Miralles, D. G., Vicente-Serrano, S. M., & Funk, C. (2020). A pan-African high-resolution drought index dataset. Earth System Science Data, 12(1), 753–769.

Porada, P., Van Stan, J. T., & Kleidon, A. (2018). Significant contribution of non-vascular vegetation to global rainfall interception. Nature Geoscience, 11(8), 563–567.

Pourmansouri, F., & Rahimzadegan, M. (2020). Evaluation of vegetation and evapotranspiration changes in Iran using satellite data and ground measurements. Journal of Applied Remote Sensing, 14(3), 034530.

Priestley, C. H. B., & Taylor, R. J. (1972). On the assessment of surface heat flux and evaporation using large-scale parameters. Monthly Weather Review, 100(2), 81–92.

Qiu, L., Wu, Y., Shi, Z., Chen, Y., & Zhao, F. (2021). Quantifying the responses of evapotranspiration and its components to vegetation restoration and climate change on the Loess Plateau of China. Remote Sensing, 13(12), 2358.

Rains, D., Lievens, H., De Lannoy, G. J., McCabe, M. F., de Jeu, R. A., & Miralles, D. G. (2021). Sentinel-1 backscatter assimilation using support vector regression or the water cloud model at European soil moisture sites. IEEE Geoscience and Remote Sensing Letters.

25 Ten Years of GLEAM: A Review of Scientific Advances and Applications


Rasmy, M., Sayama, T., & Koike, T. (2019). Development of water and energy Budget-based Rainfall-Runoff-Inundation model (WEB-RRI) and its verification in the Kalu and Mundeni River Basins, Sri Lanka. Journal of Hydrology, 579, 124163.
Rehana, S., & Monish, N. T. (2021). Impact of potential and actual evapotranspiration on drought phenomena over water and energy-limited regions. Theoretical and Applied Climatology, 144(1), 215–238.
Rehana, S., & Naidu, G. S. (2021). Development of hydro-meteorological drought index under climate change – Semi-arid river basin of Peninsular India. Journal of Hydrology, 594, 125973.
Reichle, R. H., Draper, C. S., Liu, Q., Girotto, M., Mahanama, S. P., Koster, R. D., & De Lannoy, G. J. (2017). Assessment of MERRA-2 land surface hydrology estimates. Journal of Climate, 30(8), 2937–2960.
Romanovsky, V. E., Smith, S. L., Isaksen, K., Shiklomanov, N. I., Streletskiy, D. A., Kholodov, A. L., Christiansen, H. H., Drozdov, D. S., Malkova, G. V., & Marchenko, S. S. (2019). Terrestrial permafrost [in “State of the Climate in 2018”]. Bulletin of the American Meteorological Society, 100(9).
Rouholahnejad Freund, E., Zappa, M., & Kirchner, J. W. (2020). Averaging over spatiotemporal heterogeneity substantially biases evapotranspiration rates in a mechanistic large-scale land evaporation model. Hydrology and Earth System Sciences, 24(10), 5015–5025.
Satgé, F., Hussain, Y., Xavier, A., Zolá, R. P., Salles, L., Timouk, F., Seyler, F., Garnier, J., Frappart, F., & Bonnet, M. P. (2019). Unraveling the impacts of droughts and agricultural intensification on the Altiplano water resources. Agricultural and Forest Meteorology, 279, 107710.
Schwingshackl, C., Hirschi, M., & Seneviratne, S. I. (2017). Quantifying spatiotemporal variations of soil moisture control on surface energy balance and near-surface air temperature. Journal of Climate, 30(18), 7105–7124.
Schumacher, D. L., Keune, J., Van Heerwaarden, C. C., de Arellano, J. V. G., Teuling, A. J., & Miralles, D. G. (2019). Amplification of mega-heatwaves through heat torrents fuelled by upwind drought. Nature Geoscience, 12(9), 712–717.
Shi, Q., & Liang, S. (2014). Surface-sensible and latent heat fluxes over the Tibetan Plateau from ground measurements, reanalysis, and satellite data. Atmospheric Chemistry and Physics, 14(11), 5659–5677.
Sirisena, T. A., Maskey, S., & Ranasinghe, R. (2020). Hydrological model calibration with streamflow and remote sensing based evapotranspiration data in a data poor basin. Remote Sensing, 12(22), 3768.
Sneeuw, N., Lorenz, C., Devaraju, B., Tourian, M. J., Riegger, J., Kunstmann, H., & Bárdossy, A. (2014). Estimating runoff using hydro-geodetic approaches. Surveys in Geophysics, 35(6), 1333–1359.
Talsma, C. J., Good, S. P., Jimenez, C., Martens, B., Fisher, J. B., Miralles, D. G., McCabe, M. F., & Purdy, A. J. (2018a). Partitioning of evapotranspiration in remote sensing-based models. Agricultural and Forest Meteorology, 260, 131–143.
Talsma, C. J., Good, S. P., Miralles, D. G., Fisher, J. B., Martens, B., Jimenez, C., & Purdy, A. J. (2018b). Sensitivity of evapotranspiration components in remote sensing-based models. Remote Sensing, 10(10), 1601.
Tian, Y. (2019). A priori parameter estimates for distribution of soil moisture storage capacity in Hymod model using information extracted from GLEAM root-zone soil moisture data. In Geophysical Research Abstracts (Vol. 21).
Valente, F., David, J. S., & Gash, J. H. C. (1997). Modelling interception loss for two sparse eucalypt and pine forests in central Portugal using reformulated Rutter and Gash analytical models. Journal of Hydrology, 190(1–2), 141–162.
Vicente-Serrano, S. M., Miralles, D. G., Domínguez-Castro, F., Azorin-Molina, C., El Kenawy, A., McVicar, T. R., Tomás-Burguera, M., Beguería, S., Maneta, M., & Peña-Gallardo, M. (2018). Global assessment of the Standardized Evapotranspiration Deficit Index (SEDI) for drought analysis and monitoring. Journal of Climate, 31(14), 5371–5393.


M. N. Jahromi et al.

Wang, G., Pan, J., Shen, C., Li, S., Lu, J., Lou, D., & Hagan, D. F. (2018). Evaluation of evapotranspiration estimates in the Yellow River Basin against the water balance method. Water, 10(12), 1884.
Wang, Z., Zhan, C., Ning, L., & Guo, H. (2021). Evaluation of global terrestrial evapotranspiration in CMIP6 models. Theoretical and Applied Climatology, 143(1), 521–531.
Wagner, S., Fersch, B., Yuan, F., Yu, Z., & Kunstmann, H. (2016). Fully coupled atmospheric-hydrological modeling at regional and long-term scales: Development, application, and analysis of WRF-HMS. Water Resources Research, 52(4), 3187–3211.
Wati, T., & Sopaheluwakan, A. (2018). Comparison pan evaporation data with global land-surface evaporation GLEAM in Java and Bali Island Indonesia. The Indonesian Journal of Geography, 50(1), 87–96.
Wong, J. S., Zhang, X., Gharari, S., Shrestha, R. R., Wheater, H. S., & Famiglietti, J. S. (2021). Assessing water balance closure using multiple data assimilation- and remote sensing-based datasets for Canada. Journal of Hydrometeorology, 22(6), 1569–1589.
Xu, T., Guo, Z., Xia, Y., Ferreira, V. G., Liu, S., Wang, K., Yao, Y., Zhang, X., & Zhao, C. (2019). Evaluation of twelve evapotranspiration products from machine learning, remote sensing and land surface models over conterminous United States. Journal of Hydrology, 578, 124105.
Yang, L., Feng, Q., Adamowski, J. F., Alizadeh, M. R., Yin, Z., Wen, X., & Zhu, M. (2021). The role of climate change and vegetation greening on the variation of terrestrial evapotranspiration in northwest China’s Qilian Mountains. Science of the Total Environment, 759, 143532.
Yang, J., Wang, W., Hua, T., & Peng, M. (2021). Spatiotemporal variation of actual evapotranspiration and its response to changes of major meteorological factors over China using multi-source data. Journal of Water and Climate Change, 12(2), 325–338.
Yang, X., Yong, B., Ren, L., Zhang, Y., & Long, D. (2017). Multi-scale validation of GLEAM evapotranspiration products over China via ChinaFLUX ET measurements. International Journal of Remote Sensing, 38(20), 5688–5709.
Yang, X., Yong, B., Yin, Y., & Zhang, Y. (2018). Spatio-temporal changes in evapotranspiration over China using GLEAM_V3.0a products (1980–2014). Hydrology Research, 49(5), 1330–1348.
Yin, G., Wang, G., Zhang, X., Wang, X., Hu, Q., Shrestha, S., & Hao, F. (2022). Multi-scale assessment of water security under climate change in North China in the past two decades. Science of the Total Environment, 805, 150103.
Zanin, P. R., & Satyamurty, P. (2021). Interseasonal and interbasins hydrological coupling in South America. Journal of Hydrometeorology, 22(6), 1609–1625.
Zhang, B., AghaKouchak, A., Yang, Y., Wei, J., & Wang, G. (2019). A water-energy balance approach for multi-category drought assessment across globally diverse hydrological basins. Agricultural and Forest Meteorology, 264, 247–265.
Zhang, Y., Kong, D., Gan, R., Chiew, F. H., McVicar, T. R., Zhang, Q., & Yang, Y. (2019). Coupled estimation of 500 m and 8-day resolution global evapotranspiration and gross primary production in 2002–2017. Remote Sensing of Environment, 222, 165–182.
Zhang, Y., Peña-Arancibia, J. L., McVicar, T. R., Chiew, F. H., Vaze, J., Liu, C., Lu, X., Zheng, H., Wang, Y., Liu, Y. Y., Miralles, D. G., & Pan, M. (2016). Multi-decadal trends in global terrestrial evapotranspiration and its components. Scientific Reports, 6(1), 1–12.