Metaheuristics Algorithms in Power Systems [1st ed.] 978-3-030-11592-0, 978-3-030-11593-7

This book discusses the use of efficient metaheuristic algorithms to solve diverse power system problems, providing an overview of the different aspects of metaheuristic methods in this context.


English Pages XII, 221 [231] Year 2019


Table of contents :
Front Matter ....Pages i-xii
Introduction to Metaheuristics Methods (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 1-8
Metaheuristic Schemes for Parameter Estimation in Induction Motors (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 9-22
Non-conventional Overcurrent Relays Coordination (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 23-59
Overcurrent Relay Coordination, Robustness and Fast Solutions (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 61-110
Bio-inspired Optimization Algorithms for Solving the Optimal Power Flow Problem in Power Systems (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 111-136
A Modified Crow Search Algorithm with Applications to Power System Problems (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 137-166
Optimal Location of FCL (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 167-185
Clustering Representative Electricity Load Data Using a Particle Swarm Optimization Algorithm (Erik Cuevas, Emilio Barocio Espejo, Arturo Conde Enríquez)....Pages 187-210
Back Matter ....Pages 211-221


Studies in Computational Intelligence 822

Erik Cuevas Emilio Barocio Espejo Arturo Conde Enríquez

Metaheuristics Algorithms in Power Systems

Studies in Computational Intelligence Volume 822

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. The books of this series are submitted to indexing to Web of Science, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink.

More information about this series at http://www.springer.com/series/7092


Erik Cuevas Departamento de Electrónica, CUCEI Universidad de Guadalajara Guadalajara, Mexico

Emilio Barocio Espejo CUCEI Universidad de Guadalajara Guadalajara, Mexico

Arturo Conde Enríquez Universidad Autónoma de Nuevo León San Nicolás de los Garza, Nuevo León, Mexico

ISSN 1860-949X    ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-11592-0    ISBN 978-3-030-11593-7 (eBook)
https://doi.org/10.1007/978-3-030-11593-7

Library of Congress Control Number: 2018967415

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

Power systems represent one of the main technologies of this electricity-driven modern civilization. In this context, there exists a great variety of problems that arise in power systems. Such problems are generally nonlinear and complex, demanding complementary methods to solve them. Recently, power systems have been conceived as a multidisciplinary field, owing to the multiple approaches used for their design and analysis. Therefore, each new scheme developed by another scientific community is quickly identified, understood, and assimilated in order to be applied to power system problems. This multidisciplinarity ranges from signal processing and electronics to computational intelligence, including the current trend of using metaheuristic computation.

In recent years, researchers, engineers, and practitioners in power systems have faced problems of increasing complexity. These problems can be stated as optimization formulations. Under these circumstances, an objective function is defined to evaluate the quality of each candidate solution, which is composed of the problem parameters. Then, an optimization method is used to find the solution that minimizes/maximizes the objective function.

Metaheuristic methods draw their inspiration from our scientific understanding of biological, natural, or social systems, which at some level of abstraction can be conceived as optimization processes. They are considered general-purpose, easy-to-use optimization techniques capable of reaching globally optimal, or at least nearly optimal, solutions. In their operation, searcher agents emulate a group of biological or social entities that interact with each other based on specialized operators modeling a determined biological or social behavior. These operators are applied to a population of candidate solutions (individuals) that are evaluated with respect to an objective function.
Thus, in the optimization process, individual positions are successively attracted to the optimal solution of the system to be solved.

The aim of this book is to provide an overview of the different aspects of metaheuristic methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific metaheuristic techniques related to applications in power systems. Our goal is to bridge the gap between recent metaheuristic optimization techniques and power system applications. To do this, in each chapter we endeavor to explain the basic ideas of the proposed applications in ways that can be understood by readers who may not possess the necessary background in either field. Therefore, power system practitioners who are not metaheuristic computation researchers will appreciate that the techniques discussed are more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the metaheuristic community can learn the way in which power system problems can be translated into optimization tasks.

Metaheuristic algorithms are vast and have many variants. There exists a rich body of literature on the subject, including textbooks, tutorials, and journal papers that cover in detail practically every aspect of the field. The great amount of information available makes it difficult for non-specialists to explore the literature and find the right optimization technique for a specific power system application. Therefore, any attempt to present the whole area of metaheuristic computation in detail would be a daunting task, probably doomed to failure. This task would be even more difficult if the goal is to understand the applications of metaheuristic methods in the context of power systems. For this reason, the best practice is to consider only a representative set of metaheuristic approaches, as has been done in this book.

This book has been structured so that each chapter can be read independently from the others. Chapter 1 describes the main concepts of metaheuristic computation. This chapter concentrates on elementary concepts of metaheuristics; readers who are familiar with these concepts may wish to skip it. In Chap. 2, an algorithm for the optimal parameter identification of induction motors is presented.
To determine the parameters, the presented method uses a recent evolutionary method called the gravitational search algorithm (GSA). Unlike most existing evolutionary algorithms, GSA presents a better performance in multimodal problems, avoiding critical flaws such as premature convergence to sub-optimal solutions. Numerical simulations have been conducted on several models to show the effectiveness of the presented scheme.

Chapter 3 considers the problem of overcurrent relay coordination from an optimization perspective. Protective relaying comprises several procedures and techniques focused on keeping the power system working safely. The overcurrent relay is one of the oldest protective relays, and its operating principle is straightforward: when the measured current is greater than a specified magnitude, the protection trips. However, its main disadvantages are increased tripping times and difficulties in finding faults (since faults may be located far from the relay location). In order to solve this problem, a scheme of coordination among relays is proposed. In the approach, the invasive weed optimization (IWO) algorithm is applied to find the best configuration.

In Chap. 4, the problem of coordination in overcurrent relays is analyzed. In the approach, both sensitivity and security requirements of relay operation are considered. The scheme is based on a metaheuristic algorithm. In order to compare the results, several metaheuristic methods have been employed, such as the ant colony optimizer (ACO), the differential evolution (DE) algorithm, and the grey wolf optimization (GWO).

In Chap. 5, a method to solve the optimal power flow (OPF) problem with a single objective function and with multiple, competing objective functions is presented. As a first approach, the modified flower pollination algorithm (MFPA) is employed to show its potential to solve the OPF problem. Then, the normal boundary intersection (NBI) method is considered as a complementary technique to determine the Pareto front of the multi-objective OPF problem. To help in the decision-making process, several strategies are compared to select the best compromise solution from the Pareto frontier. To demonstrate the capabilities of the proposed method, different objective functions are combined to calculate the Pareto front on the IEEE 30-bus test system. Finally, a visual tool is developed to display the OPF solution. This tool helps the user to intuitively visualize potential damage to the power system.

In Chap. 6, an improved version of the crow search algorithm (CSA) is presented to solve complex optimization problems typical of power systems. In the new algorithm, two features of the original CSA are modified: (I) the awareness probability (AP) and (II) the random perturbation. With such adaptations, the new approach preserves solution diversity and improves convergence on difficult, highly multimodal optima. In order to evaluate its performance, the proposed algorithm has been tested on a set of four optimization problems involving induction motors and distribution networks. The results demonstrate the high performance of the proposed method when compared with other popular approaches.

Chapter 7 presents a method for obtaining the optimal configuration of a set of fault current limiters (FCLs) on a distribution network. The approach considers several popular metaheuristic methods as search strategies to find the best architecture of limiters under different objective functions. The algorithms involved are the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE).

Finally, Chap. 8 presents a method that combines dimensionality reduction (DR) techniques with a particle swarm optimization (PSO) algorithm for clustering electricity load profile data. The DR techniques make it possible to obtain a low-dimensional data model that can be used to project representative electricity load (REL) data onto an easily interpretable 3D space. In addition, the PSO algorithm and a validation index are applied to obtain an optimal number of clusters.

Guadalajara, Mexico
Guadalajara, Mexico
San Nicolás de los Garza, Mexico
2015

Erik Cuevas Emilio Barocio Espejo Arturo Conde Enríquez

Contents

1 Introduction to Metaheuristics Methods ..... 1
  1.1 Definition of an Optimization Problem ..... 1
  1.2 Classical Optimization ..... 2
  1.3 Metaheuristic Algorithms ..... 5
    1.3.1 Structure of a Metaheuristic Scheme ..... 6
  References ..... 7

2 Metaheuristic Schemes for Parameter Estimation in Induction Motors ..... 9
  2.1 Introduction ..... 9
  2.2 Problem Statement ..... 11
    2.2.1 Approximate Circuit Model ..... 11
    2.2.2 Exact Circuit Model ..... 12
  2.3 Gravitational Search Algorithm ..... 13
  2.4 Experimental Results ..... 14
    2.4.1 Induction Motor Parameter Identification ..... 15
    2.4.2 Statistical Analysis ..... 20
  2.5 Conclusions ..... 21
  References ..... 21

3 Non-conventional Overcurrent Relays Coordination ..... 23
  3.1 Genetic Algorithms Implementation ..... 24
  3.2 Invasive-Weed Optimization ..... 27
  3.3 Coordination like Optimization Problem ..... 30
    3.3.1 Overcurrent Relays ..... 31
    3.3.2 Sensitivity of Relays ..... 33
    3.3.3 Directional Overcurrent Relay (DOCRs) ..... 35
    3.3.4 Directional Overcurrent Relay Coordination (DOCRs) ..... 36
    3.3.5 Objective Function of the Optimization Algorithms ..... 39
    3.3.6 General Model for Non-conventional Time Curves ..... 41
  3.4 Coordination with Genetic Algorithms ..... 44
  3.5 Coordination with Invasive-Weed Optimization ..... 47
    3.5.1 Sequential Quadratic Programming ..... 49
    3.5.2 Implementation ..... 50
  3.6 Results ..... 51
  3.7 Summary ..... 56
  References ..... 57

4 Overcurrent Relay Coordination, Robustness and Fast Solutions ..... 61
  4.1 Overcurrent Relay like Optimization Problem ..... 61
  4.2 Ant-Colony Optimization ..... 62
    4.2.1 Steps of Protection Coordination Using Ant-Colony Algorithm ..... 64
  4.3 Differential Evolution ..... 70
    4.3.1 Steps of Protection Coordination Using Differential Evolution Algorithm ..... 71
    4.3.2 Evaluation of DE Family for Overcurrent Relay Coordination Problem ..... 75
  4.4 Grey Wolf Algorithm ..... 77
    4.4.1 Motivation and Social Hierarchy ..... 77
    4.4.2 Hunting (Searching of Prey) ..... 79
    4.4.3 Encircling of Prey ..... 80
    4.4.4 Attacking of Prey ..... 81
    4.4.5 DOCRs Coordination Using GWO ..... 83
    4.4.6 DOCRs Coordination Using MOGWO ..... 88
  4.5 Evaluation ..... 91
    4.5.1 Evaluation Among ACO and DE ..... 93
    4.5.2 Evaluation of GWO and MOGWO Algorithms ..... 96
  4.6 On-Line Coordination ..... 107
  4.7 Summary ..... 109
  References ..... 110

5 Bio-inspired Optimization Algorithms for Solving the Optimal Power Flow Problem in Power Systems ..... 111
  5.1 Introduction ..... 111
  5.2 General Formulation of OPF Problem ..... 113
    5.2.1 Objective Functions f_i(x, u) ..... 113
    5.2.2 Inequality Constraints g_i(x, u) ..... 115
    5.2.3 Penalty Functions ..... 116
  5.3 Flower Pollination Algorithm ..... 116
    5.3.1 Description of the Flower Pollination Algorithm ..... 117
  5.4 Modified Flower Pollination Algorithm ..... 118
    5.4.1 Improving the Initial Conditions Process ..... 118
    5.4.2 Switching the Local to Global Pollination Process ..... 119
  5.5 Multi Objective Modified Flower Pollination Algorithm ..... 120
    5.5.1 Normal Boundary Intersection Method for Generation of Pareto Frontier ..... 120
  5.6 General Description of the Bio-inspired Multi-objective Optimization Procedure ..... 121
  5.7 Best Compromise Solution Criteria ..... 122
    5.7.1 Fuzzy Membership Function Method ..... 123
    5.7.2 Entropy Weight Method ..... 123
  5.8 Numerical Results ..... 124
    5.8.1 Benchmark Test Function ..... 124
    5.8.2 Optimal Power Flow Solution for a Single Function ..... 125
    5.8.3 Optimal Power Flow Solution for Multi-objective Functions ..... 129
  5.9 Conclusions ..... 134
  References ..... 134

6 A Modified Crow Search Algorithm with Applications to Power System Problems ..... 137
  6.1 Introduction ..... 137
  6.2 Crow Search Algorithm (CSA) ..... 139
  6.3 The Proposed Improved Crow Search Algorithm (ICSA) ..... 141
    6.3.1 Dynamic Awareness Probability (DAP) ..... 141
    6.3.2 Random Movement: Lévy Flight ..... 141
  6.4 Motor Parameter Estimation Formulation ..... 144
    6.4.1 Approximate Circuit Model ..... 144
    6.4.2 Exact Circuit Model ..... 145
  6.5 Capacitor Allocation Problem Formulation ..... 146
    6.5.1 Load Flow Analysis ..... 146
    6.5.2 Mathematical Approach ..... 147
    6.5.3 Sensitivity Analysis and Loss Sensitivity Factor ..... 148
  6.6 Experiments ..... 150
    6.6.1 Motor Parameter Estimation Test ..... 150
    6.6.2 Capacitor Allocation Test ..... 152
    6.6.3 Statistical Analysis ..... 159
  6.7 Conclusions ..... 159
  Appendix: Systems Data ..... 160
  References ..... 163

7 Optimal Location of FCL ..... 167
  7.1 Fault Current Limiters ..... 168
  7.2 Optimal Location of FCL ..... 168
    7.2.1 Formulation of Optimal FCL Sizing and Allocation Problem ..... 169
    7.2.2 Optimal Function ..... 169
  7.3 Fault Current Limiters ..... 170
    7.3.1 Resonant Fault Current Limiters (R-FCLs) ..... 170
    7.3.2 Solid-State Fault Current Limiters (SS-FCL) ..... 172
  7.4 Sizing of FCLs ..... 173
    7.4.1 R-FCLs ..... 173
    7.4.2 Sizing SS-FCLs ..... 174
    7.4.3 Evaluation of FCLs ..... 175
  7.5 Performance Analysis ..... 177
  7.6 Optimal Location Results ..... 180
  7.7 Summary ..... 183
  References ..... 184

8 Clustering Representative Electricity Load Data Using a Particle Swarm Optimization Algorithm ..... 187
  8.1 Introduction ..... 187
  8.2 Dimensional Reduction Techniques ..... 188
    8.2.1 Dimensional Reduction Concept ..... 189
    8.2.2 Principal Component Analysis ..... 190
    8.2.3 Isometric Feature Mapping (Isomap) ..... 191
    8.2.4 Stochastic Neighbour Embedding (SNE) ..... 191
  8.3 Clustering Tendency of Low-Dimensional Data ..... 192
    8.3.1 Visual Assessment of Cluster Tendency Algorithm ..... 192
  8.4 Particle Swarm Optimization (PSO) Algorithm ..... 194
    8.4.1 Clustering Data Problem ..... 194
    8.4.2 Codification of PSO Based on Centroids ..... 194
    8.4.3 Objective Function of the Optimization Scheme ..... 196
    8.4.4 Design Criteria Function for Clustering Data ..... 197
  8.5 Validation Index ..... 198
  8.6 General Description of Clustering Procedure ..... 199
  8.7 Results ..... 200
    8.7.1 Clustering of Low Dimensional Synthetic Data ..... 200
    8.7.2 Clustering REL Data of ERCOT System ..... 203
  8.8 Conclusions ..... 208
  References ..... 209

Appendix A ..... 211

Chapter 1

Introduction to Metaheuristics Methods

This chapter presents an overview of optimization techniques and describes their main characteristics. The goal of this chapter is to motivate the consideration of metaheuristic schemes for solving optimization problems. The study is conducted in such a way that the necessity of using metaheuristic approaches for solving power system problems becomes clear.

1.1 Definition of an Optimization Problem

The vast majority of power systems use some form of optimization, as they intend to find a solution which is "best" according to some criterion. From a general perspective, an optimization problem is a situation that requires deciding on a choice from a set of possible alternatives in order to reach a predefined/required benefit at minimal cost [1]. Consider the public transportation system of a city, for example. Here the system has to find the "best" route to a destination location. In order to rate alternative solutions and eventually find out which solution is "best," a suitable criterion has to be applied. A reasonable criterion could be the distance of the routes. We then would expect the optimization algorithm to select the route of the shortest distance as a solution. Observe, however, that other criteria are possible, which might lead to different "optimal" solutions, e.g., the number of transfers, the ticket price, or the time it takes to travel the route (leading to the fastest route as a solution). Mathematically speaking, optimization can be described as follows: given a function $f : S \to \mathbb{R}$, which is called the objective function, find the argument which minimizes f:

$x^* = \arg\min_{x \in S} f(x) \qquad (1.1)$


$S$ defines the so-called solution set, which is the set of all possible solutions for the optimization problem. Sometimes, the unknown(s) $x$ are referred to as design variables. The function $f$ describes the optimization criterion, i.e., it enables us to calculate a quantity which indicates the "quality" of a particular $x$. In our example, $S$ is composed of the subway trajectories, bus lines, etc., stored in the database of the system; $x$ is the route the system has to find; and the optimization criterion $f(x)$ (which measures the quality of a possible solution) could calculate the ticket price or the distance to the destination (or a combination of both), depending on our preferences. Sometimes there also exist one or more additional constraints which the solution $x$ has to satisfy. In that case, we talk about constrained optimization (as opposed to unconstrained optimization if no such constraint exists). In summary, an optimization problem has the following components:

• One or more design variables $x$ for which a solution has to be found.
• An objective function $f(x)$ describing the optimization criterion.
• A solution set $S$ specifying the set of possible solutions $x$.
• (Optional) One or more constraints on $x$.
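As an illustration, the four components can be sketched in a few lines for the route example; the route names, distances, prices, and transfer counts below are hypothetical illustration data, not taken from any real transportation system.

```python
# Minimal sketch of the four components of an optimization problem,
# using the route example; all route data here is hypothetical.
routes = {                      # S: the solution set of candidate routes
    "subway":     {"distance_km": 12.0, "price": 2.5, "transfers": 1},
    "bus":        {"distance_km": 15.5, "price": 1.8, "transfers": 0},
    "bus+subway": {"distance_km": 10.2, "price": 3.1, "transfers": 2},
}

def f(route_name):              # objective function f(x): here, the distance
    return routes[route_name]["distance_km"]

def feasible(route_name):       # optional constraint on x: at most 1 transfer
    return routes[route_name]["transfers"] <= 1

# x* = arg min of f over the feasible subset of S
x_star = min((r for r in routes if feasible(r)), key=f)
print(x_star)  # "subway": the shortest feasible route
```

Swapping the objective `f` for the ticket price or the number of transfers changes which route is "optimal," exactly as discussed in the text.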

In order to be of practical use, an optimization algorithm has to find a solution in a reasonable amount of time and with reasonable accuracy. Apart from the performance of the employed algorithm, this also depends on the problem at hand. If we can hope for a numerical solution, we say that the problem is well-posed. For assessing whether an optimization problem is well-posed, the following conditions must be fulfilled:

1. A solution exists.
2. There is only one solution to the problem, i.e., the solution is unique.
3. The relationship between the solution and the initial conditions is such that small perturbations of the initial conditions result in only slight variations of $x^*$.

1.2 Classical Optimization

Once a task has been transformed into an objective function minimization problem, the next step is to choose an appropriate optimizer. Optimization algorithms can be divided into two groups: derivative-based and derivative-free [2]. In general, f(x) may have a nonlinear form with respect to the adjustable parameter x. Due to the complexity of $f(\cdot)$, classical methods often use an iterative algorithm to explore the input space effectively. In iterative descent methods, the next point $x_{k+1}$ is determined by stepping down from the current point $x_k$ in a direction vector $d$:


$x_{k+1} = x_k + \alpha d, \qquad (1.2)$

where $\alpha$ is a positive step size regulating to what extent to proceed in that direction. When the direction $d$ in Eq. (1.2) is determined by the gradient $g$ of the objective function $f(\cdot)$, such methods are known as gradient-based techniques. The method of steepest descent is one of the oldest techniques for optimizing a given function and represents the basis for many derivative-based methods. Under such a method, Eq. (1.2) becomes the well-known gradient formula:

$x_{k+1} = x_k - \alpha\, g(f(x_k)) \qquad (1.3)$

However, classical derivative-based optimization is useful only as long as the objective function fulfills two requirements:

– The objective function must be two-times differentiable.
– The objective function must be unimodal, i.e., have a single minimum.

A simple example of a differentiable and unimodal objective function is

$f(x_1, x_2) = 10 - e^{-(x_1^2 + 3 x_2^2)} \qquad (1.4)$
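The descent rule of Eq. (1.3) can be sketched on this unimodal function using its analytic gradient; the starting point and step size $\alpha = 0.1$ below are arbitrary choices for illustration.

```python
import math

# Steepest descent (Eq. 1.3) on the unimodal function of Eq. (1.4),
# f(x1, x2) = 10 - exp(-(x1^2 + 3*x2^2)); start point and alpha are
# illustrative choices.
def f(x1, x2):
    return 10.0 - math.exp(-(x1**2 + 3.0 * x2**2))

def grad(x1, x2):               # analytic gradient of f
    e = math.exp(-(x1**2 + 3.0 * x2**2))
    return (2.0 * x1 * e, 6.0 * x2 * e)

x1, x2, alpha = 0.8, -0.6, 0.1  # arbitrary starting point
for _ in range(200):            # x_{k+1} = x_k - alpha * g(x_k)
    g1, g2 = grad(x1, x2)
    x1, x2 = x1 - alpha * g1, x2 - alpha * g2

print(x1, x2, f(x1, x2))        # converges to the unique minimum at (0, 0)
```

Because the function is unimodal and smooth, the iteration reaches the global minimum $f(0, 0) = 9$ from any starting point, which is exactly the setting where classical methods excel.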

Figure 1.1 shows the function defined in Eq. (1.4). Unfortunately, given these requirements, classical methods are applicable only to a few types of optimization problems. For combinatorial optimization, there is no definition of differentiation. Furthermore, there are many reasons why an objective function might not be differentiable. For example, the "floor" operation in Eq. (1.5) quantizes the function in Eq. (1.4), transforming Fig. 1.1 into the stepped shape seen in Fig. 1.2. At each step's edge, the objective function is non-differentiable:

Fig. 1.1 Unimodal objective function


1 Introduction to Metaheuristics Methods

Fig. 1.2 A non-differentiable, quantized, unimodal function

f(x1, x2) = floor(10 − e^(−(x1^2 + 3·x2^2)))   (1.5)
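A quick numerical check illustrates why the quantized function of Eq. 1.5 defeats gradient-based methods: on the flat top of any step, a finite-difference gradient estimate is exactly zero, so it provides no descent direction (the probe point and step size below are arbitrary):

```python
import math

def f_quantized(x1, x2):
    # Eq. 1.5: the floor operation quantizes the smooth function of Eq. 1.4
    return math.floor(10.0 - math.exp(-(x1 ** 2 + 3.0 * x2 ** 2)))

def finite_diff(fun, x1, x2, h=1e-6):
    # central-difference estimate of the partial derivative w.r.t. x1
    return (fun(x1 + h, x2) - fun(x1 - h, x2)) / (2.0 * h)

# on the flat top of a step the numerical gradient is exactly zero,
# so it carries no descent information; at a step edge it is undefined
g = finite_diff(f_quantized, 0.5, 0.5)
```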

Even for differentiable objective functions, gradient-based methods might not work. Let us consider the minimization of the Griewank function as an example:

minimize   f(x1, x2) = (x1^2 + x2^2)/4000 − cos(x1)·cos(x2/√2) + 1
subject to −30 ≤ x1 ≤ 30,  −30 ≤ x2 ≤ 30   (1.6)

From the optimization problem formulated in Eq. 1.6, it is quite easy to see that the global optimal solution is x1 = x2 = 0. Figure 1.3 visualizes the function defined in Eq. 1.6. As the figure shows, the objective function has many local optimal solutions (it is multimodal), so a gradient method started from a randomly generated initial solution will converge to one of them with significant probability.
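This behavior can be reproduced with a short sketch: plain gradient descent on Eq. 1.6, started away from the origin, settles in a nearby local minimum with f > 0 instead of the global minimum f(0, 0) = 0 (step size, iteration count and starting point are illustrative choices):

```python
import math

SQRT2 = math.sqrt(2.0)

def griewank(x1, x2):
    # Eq. 1.6 (two variables); the global minimum is f(0, 0) = 0
    return (x1 ** 2 + x2 ** 2) / 4000.0 - math.cos(x1) * math.cos(x2 / SQRT2) + 1.0

def griewank_grad(x1, x2):
    g1 = x1 / 2000.0 + math.sin(x1) * math.cos(x2 / SQRT2)
    g2 = x2 / 2000.0 + math.cos(x1) * math.sin(x2 / SQRT2) / SQRT2
    return g1, g2

def descend(x1, x2, alpha=0.1, iters=5000):
    for _ in range(iters):
        g1, g2 = griewank_grad(x1, x2)
        x1, x2 = x1 - alpha * g1, x2 - alpha * g2
    return x1, x2

# started away from the origin, plain gradient descent gets trapped
# in one of the many local minima (f > 0) and never reaches f(0, 0) = 0
x_loc = descend(6.0, 6.0)
```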

Fig. 1.3 The Griewank multimodal function


Considering the limitations of gradient-based methods, power system problems are difficult to integrate with classical optimization methods. Instead, techniques that do not make such assumptions and that can be applied to a wide range of problems are required [3].

1.3 Metaheuristic Algorithms

Evolutionary computation (EC) [4] methods are derivative-free procedures, which require the objective function to be neither twice differentiable nor unimodal. Therefore, EC methods, as global optimization algorithms, can deal with non-convex, nonlinear, and multimodal problems subject to linear or nonlinear constraints with continuous or discrete decision variables. The field of EC has a rich history. With the development of computational devices and the demands of industrial processes, the necessity arose to solve optimization problems even when there was not sufficient prior knowledge (hypotheses) about the problem for the application of a classical method. In fact, in the majority of power system applications, the problems are highly nonlinear, characterized by a noisy fitness, or lack an explicit analytical expression, as the objective function might be the result of an experimental or simulation process. In this context, EC methods have been proposed as optimization alternatives. An EC technique is a general method for solving optimization problems. It uses an objective function in an abstract and efficient manner, typically without utilizing deeper insights into its mathematical properties. EC methods do not require hypotheses about the optimization problem nor any kind of prior knowledge of the objective function. The treatment of objective functions as "black boxes" [5] is the most prominent and attractive feature of EC methods. EC methods obtain knowledge about the structure of an optimization problem by utilizing information obtained from the candidate solutions evaluated in the past. This knowledge is used to construct new candidate solutions which are likely to have better quality. Recently, several EC methods have been proposed with interesting results.
Such approaches use as inspiration our scientific understanding of biological, natural or social systems, which at some level of abstraction can be represented as optimization processes [6]. These methods include the social behavior of bird flocking and fish schooling such as the Particle Swarm Optimization (PSO) algorithm [7], the cooperative behavior of bee colonies such as the Artificial Bee Colony (ABC) technique [8], the improvisation process that occurs when a musician searches for a better state of harmony such as the Harmony Search (HS) [9], the emulation of the bat behavior such as the Bat Algorithm (BA) method [10], the mating behavior of firefly insects such as the Firefly (FF) method [11], the social-spider behavior such as the Social Spider Optimization (SSO) [12], the simulation of the animal behavior in a group such as the Collective Animal


Behavior [13], the emulation of immunological systems as the clonal selection algorithm (CSA) [14], the simulation of the electromagnetism phenomenon as the electromagnetism-Like algorithm [15], and the emulation of the differential and conventional evolution in species such as the Differential Evolution (DE) [16] and Genetic Algorithms (GA) [17], respectively.

1.3.1 Structure of a Metaheuristic Scheme

From a conventional point of view, an EC method is an algorithm that simulates, at some level of abstraction, a biological, natural or social system. To be more specific, a standard EC algorithm includes:

1. One or more populations of candidate solutions are considered.
2. These populations change dynamically due to the production of new solutions.
3. A fitness function reflects the ability of a solution to survive and reproduce.
4. Several operators are employed in order to explore and exploit appropriately the space of solutions.

The EC methodology suggests that, on average, candidate solutions improve their fitness over generations (i.e., their capability of solving the optimization problem). A simulation of the evolution process, based on a set of candidate solutions whose fitness is appropriately correlated to the objective function to optimize, will on average lead to an improvement of their fitness and thus steer the simulated population towards the global solution.

Most optimization methods have been designed to solve the problem of finding a global solution of a nonlinear optimization problem with box constraints in the following form:

maximize f(x),  x = (x1, ..., xd) ∈ R^d
subject to x ∈ X   (1.7)

where f: R^d → R is a nonlinear function and X = {x ∈ R^d | li ≤ xi ≤ ui, i = 1, ..., d} is a bounded feasible search space, constrained by the lower (li) and upper (ui) limits.

In order to solve the problem formulated in Eq. 1.7, a metaheuristic method evolves a population P^k {p1^k, p2^k, ..., pN^k} of N candidate solutions (individuals) from the initial point (k = 0) to a total of gen iterations (k = gen). At its initial point, the algorithm begins by initializing the set of N candidate solutions with values that are randomly and uniformly distributed between the pre-specified lower (li) and upper (ui) limits. In each iteration, a set of heuristic operations is applied over the population P^k to build the new population P^(k+1). Each candidate solution pi^k (i ∈ [1, ..., N]) represents a d-dimensional vector {pi,1^k, pi,2^k, ..., pi,d^k}, where each


Fig. 1.4 The basic cycle of an EC method

dimension corresponds to a decision variable of the optimization problem at hand. The quality of each candidate solution pi^k is evaluated by using an objective function f(pi^k) whose result represents the fitness value of pi^k. During the evolution process, the best candidate solution g (g1, g2, ..., gd) seen so far is preserved, considering that it represents the best available solution. Figure 1.4 presents a graphical representation of the basic cycle of an EC method.
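The basic cycle of Fig. 1.4 can be sketched generically as follows; the Gaussian-perturbation operator used here is only a toy stand-in for the scheme-specific operators of a real EC method, and all parameter values are illustrative:

```python
import random

def ec_cycle(f, bounds, N=30, gen=200, seed=1):
    """Generic cycle of Fig. 1.4: random initialization inside X, heuristic
    operators producing the new population, and preservation of the best
    solution g seen so far (maximization, as in Eq. 1.7)."""
    rng = random.Random(seed)
    # k = 0: N candidate solutions uniformly distributed between l_i and u_i
    P = [[rng.uniform(l, u) for l, u in bounds] for _ in range(N)]
    g = max(P, key=f)                                  # best-so-far solution
    for _ in range(gen):
        new_P = []
        for p in P:
            # toy operator: Gaussian perturbation clipped to the search space
            q = [min(max(v + rng.gauss(0.0, 0.1 * (u - l)), l), u)
                 for v, (l, u) in zip(p, bounds)]
            new_P.append(q if f(q) > f(p) else p)      # keep the better one
        P = new_P
        g = max(P + [g], key=f)
    return g

# maximize f(x) = -(x1^2 + x2^2); the optimum is x = (0, 0)
best = ec_cycle(lambda x: -(x[0] ** 2 + x[1] ** 2), [(-5, 5), (-5, 5)])
```

Because g is only ever replaced by a better candidate, the fitness of the reported solution is non-decreasing over generations, exactly as the EC methodology suggests.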

References

1. B. Akay, D. Karaboga, A survey on the applications of artificial bee colony in signal, image, and video processing. SIViP 9(4), 967–990 (2015)
2. X.-S. Yang, Engineering Optimization (Wiley, 2010)
3. M.A. Treiber, Optimization for Computer Vision: An Introduction to Core Concepts and Methods (Springer, 2013)
4. D. Simon, Evolutionary Optimization Algorithms (Wiley, 2013)
5. C. Blum, A. Roli, Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput. Surv. 35(3), 268–308 (2003). https://doi.org/10.1145/937503.937505
6. S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 16, 1–18 (2014)
7. J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4 (December 1995), pp. 1942–1948
8. D. Karaboga, An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Engineering Faculty, Computer Engineering Department, Erciyes University, 2005
9. Z.W. Geem, J.H. Kim, G.V. Loganathan, A new heuristic optimization algorithm: harmony search. Simulation 76, 60–68 (2001)


10. X.S. Yang, A new metaheuristic bat-inspired algorithm, in Nature Inspired Cooperative Strategies for Optimization (NISCO 2010), Studies in Computational Intelligence, vol. 284, ed. by C. Cruz, J. González, G.T.N. Krasnogor, D.A. Pelta (Springer, Berlin, 2010), pp. 65–74
11. X.S. Yang, Firefly algorithms for multimodal optimization, in Stochastic Algorithms: Foundations and Applications, SAGA 2009. Lecture Notes in Computer Science, vol. 5792 (2009), pp. 169–178
12. E. Cuevas, M. Cienfuegos, D. Zaldívar, M. Pérez-Cisneros, A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 40(16), 6374–6384 (2013)
13. E. Cuevas, M. González, D. Zaldivar, M. Pérez-Cisneros, G. García, An algorithm for global optimization inspired by collective animal behaviour. Discrete Dyn. Nat. Soc. (2012), art. no. 638275
14. L.N. de Castro, F.J. von Zuben, Learning and optimization using the clonal selection principle. IEEE Trans. Evol. Comput. 6(3), 239–251 (2002)
15. Ş.I. Birbil, S.C. Fang, An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 25(1), 263–282 (2003)
16. R. Storn, K. Price, Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, ICSI, Berkeley, CA, 1995
17. D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Addison-Wesley, 1989)

Chapter 2

Metaheuristic Schemes for Parameter Estimation in Induction Motors

Induction motors are a main component in most industries, accounting for the largest share of energy consumption in industrial facilities. This consumption depends on the operating conditions of the induction motor, which are imposed by its internal parameters. In this approach, the parameter estimation process is transformed into a multidimensional optimization problem in which the internal parameters of the induction motor are the decision variables. The complexity of the optimization problem tends to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Several algorithms based on evolutionary computation principles have been successfully applied to identify the optimal parameters of induction motors. However, most of them frequently obtain sub-optimal solutions as a result of an inappropriate balance between exploitation and exploration in their search strategies. This chapter presents an algorithm for the optimal parameter identification of induction motors based on the recent evolutionary method called the Gravitational Search Algorithm (GSA). In general, GSA presents better performance in multimodal problems, avoiding critical flaws such as premature convergence to sub-optimal solutions. The presented algorithm has been tested on several models, and its simulation results show the effectiveness of the scheme.

2.1 Introduction

The environmental consequences of the overconsumption of electrical energy have recently attracted attention in different fields of engineering. Therefore, the improvement of machinery and elements with high electrical energy consumption has become an important task nowadays [1].

Induction motors present several benefits such as their ruggedness, low price, cheap maintenance and easy control [2]. However, more than half of the electric energy consumed by industrial facilities is due to the use of induction motors. With the massive use of induction motors, electrical energy consumption has increased exponentially through the years. This fact has generated the need to improve their efficiency, which mainly depends on their internal parameters.

© Springer Nature Switzerland AG 2019 E. Cuevas et al., Metaheuristics Algorithms in Power Systems, Studies in Computational Intelligence 822, https://doi.org/10.1007/978-3-030-11593-7_2

The parameter identification of induction motors is a complex task due to their non-linearity. As a consequence, different alternatives have been proposed in the literature. Some examples include the approach of Waters and Willoughby [3], where the parameters are estimated from the knowledge of certain variables such as the stator resistance and the leakage reactance; the approach of Ansuj [4], where the identification is based on a sensitivity analysis; and the approach of De Kock [5], where the estimation is conducted through an output error technique.

As an alternative to such techniques, the problem of parameter estimation in induction motors has also been handled through evolutionary methods. In general, they have been demonstrated, under several circumstances, to deliver better results than deterministic approaches in terms of accuracy and robustness [6]. Some examples of these approaches used in the identification of parameters in induction motors involve methods such as Genetic Algorithms (GA) [7], Particle Swarm Optimization (PSO) [8, 9], the Artificial Immune System (AIS) [10], the Bacterial Foraging Algorithm (BFA) [11], the Shuffled Frog-Leaping Algorithm [12], a hybrid of genetic algorithm and particle swarm optimization [6], and the multiple-global-best guided artificial bee colony [13], just to mention a few. Although these algorithms present interesting results, they have an important limitation: they frequently obtain sub-optimal solutions as a consequence of the limited balance between exploration and exploitation in their search strategies.

On the other hand, the Gravitational Search Algorithm (GSA) [14] is a recent evolutionary computation algorithm inspired by the physical phenomenon of gravity.
In GSA, the evolutionary operators are built considering the principles of gravitation. In contrast to most existing evolutionary algorithms, GSA presents better performance in multimodal problems, avoiding critical flaws such as premature convergence to sub-optimal solutions [15, 16]. Such characteristics have motivated the use of GSA to solve an extensive variety of engineering applications in areas such as energy [17], image processing [18] and machine learning [19].

This chapter describes an algorithm for the optimal parameter identification of induction motors based on GSA. A comparison with state-of-the-art methods such as Artificial Bee Colony (ABC) [20], Differential Evolution (DE) [21] and Particle Swarm Optimization (PSO) [22] on different induction models has been incorporated to demonstrate the performance of the presented approach. Conclusions of the experimental comparison are validated through statistical tests that properly support the discussion.

The sections of this chapter are organized as follows: Sect. 2.2 presents the problem statement; Sect. 2.3 describes the evolutionary technique used (GSA); Sect. 2.4 shows the experimental results, including the comparison with DE, ABC and PSO and a non-parametric statistical validation; finally, conclusions are discussed in Sect. 2.5.

2.2 Problem Statement

An induction motor can be represented by a steady-state equivalent circuit, and its parameter estimation can be treated as a least-squares optimization problem which, due to its highly nonlinear nature, is difficult to minimize. The main objective is to minimize the error between the calculated and the manufacturer data by adjusting the parameters of the induction motor equivalent circuit. In this chapter we use the approximate circuit model and the exact circuit model with two different induction motors [10], as described below.

2.2.1 Approximate Circuit Model

In the approximate circuit model (Fig. 2.1), we use the starting torque, maximum torque and full-load torque to determine the stator resistance, rotor resistance and stator leakage reactance that minimize the error between estimated and manufacturer data. The fitness function and mathematical formulation are computed as follows:

F = (f1)^2 + (f2)^2 + (f3)^2,   (2.1)

where

f1 = [Kt·R2 / (s·((R1 + R2/s)^2 + X1^2)) − Tfl(mf)] / Tfl(mf)

f2 = [Kt·R2 / ((R1 + R2)^2 + X1^2) − Tlr(mf)] / Tlr(mf)

f3 = [Kt / (2·(R1 + √(R1^2 + X1^2))) − Tmax(mf)] / Tmax(mf)

Kt = 3·Vph^2 / ωs

subject to Xi,min ≤ Xi ≤ Xi,max, where Xi,min and Xi,max are the lower and upper bounds of parameter Xi, respectively, and

|(Tmax(C) − Tmax(mf)) / Tmax(mf)| ≤ 0.2,

where Tmax(C) is the calculated maximum torque.
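For illustration, the objective of Eq. 2.1 can be coded directly; the phase voltage Vph = 400/√3 V and the synchronous speed ωs = 4πf/poles are assumptions derived from the motor 1 data of Table 2.1, not values stated in the text:

```python
import math

# Manufacturer data for motor 1 (Table 2.1); Vph and ws are assumptions
T_FL, T_LR, T_MAX, S_FL = 25.0, 15.0, 42.0, 0.07
V_PH = 400.0 / math.sqrt(3.0)
W_S = 4.0 * math.pi * 50.0 / 4.0            # synchronous speed, rad/s

def fitness_approx(R1, R2, X1):
    """Objective of Eq. 2.1: squared relative errors of the full-load,
    locked-rotor and maximum torque for the approximate circuit model."""
    Kt = 3.0 * V_PH ** 2 / W_S
    f1 = (Kt * R2 / (S_FL * ((R1 + R2 / S_FL) ** 2 + X1 ** 2)) - T_FL) / T_FL
    f2 = (Kt * R2 / ((R1 + R2) ** 2 + X1 ** 2) - T_LR) / T_LR
    f3 = (Kt / (2.0 * (R1 + math.sqrt(R1 ** 2 + X1 ** 2))) - T_MAX) / T_MAX
    return f1 ** 2 + f2 ** 2 + f3 ** 2

F = fitness_approx(1.0, 1.0, 10.0)  # any candidate (R1, R2, X1), in ohms
```

This F is the quantity a metaheuristic would minimize, subject to the parameter bounds and the maximum-torque constraint.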


Fig. 2.1 Approximate circuit model

2.2.2 Exact Circuit Model

In the exact circuit model (Fig. 2.2), we adjust the stator resistance, rotor resistance, stator leakage reactance, rotor leakage reactance and magnetizing reactance to match the maximum torque, full-load torque, starting torque and full-load power factor. The objective function and mathematical formulation are described below:

F = (f1)^2 + (f2)^2 + (f3)^2 + (f4)^2   (2.2)

where

f1 = [Kt·R2 / (s·((Rth + R2/s)^2 + X^2)) − Tfl(mf)] / Tfl(mf)

f2 = [Kt·R2 / ((Rth + R2)^2 + X^2) − Tlr(mf)] / Tlr(mf)

f3 = [Kt / (2·(Rth + √(Rth^2 + X^2))) − Tmax(mf)] / Tmax(mf)

f4 = [cos(tan⁻¹(X / (Rth + R2/s))) − pffl(mf)] / pffl(mf)

Vth = Vph·Xm / (X1 + Xm),  Rth = R1·Xm / (X1 + Xm),  Xth = X1·Xm / (X1 + Xm)

Kt = 3·Vth^2 / ωs,  X = X2 + Xth

subject to

Xi,min ≤ Xi ≤ Xi,max

|(Tmax(C) − Tmax(mf)) / Tmax(mf)| ≤ 0.2

(pfl − (I1^2·R1 + I2^2·R2 + Prot)) / pfl = ηfl(mf)
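A small sketch of the Thevenin reduction and the f4 power-factor term used by the exact model; all numeric circuit values below are hypothetical, chosen only for illustration:

```python
import math

def thevenin(V_ph, R1, X1, Xm):
    """Thevenin reduction used by the exact circuit model: the magnetizing
    reactance Xm is absorbed into an equivalent source (Vth, Rth, Xth)."""
    k = Xm / (X1 + Xm)
    return V_ph * k, R1 * k, X1 * k

def full_load_power_factor(Rth, R2, X, s):
    # f4 term of Eq. 2.2: pf = cos(arctan(X / (Rth + R2/s)))
    return math.cos(math.atan(X / (Rth + R2 / s)))

# hypothetical per-phase values, for illustration only
V_th, R_th, X_th = thevenin(230.0, 0.5, 1.2, 30.0)
X = 1.2 + X_th                       # X = X2 + Xth, with a hypothetical X2
pf = full_load_power_factor(R_th, 0.6, X, 0.07)
```

Because Xm/(X1 + Xm) < 1, the Thevenin quantities are always scaled-down versions of the stator-side values.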


Fig. 2.2 Exact circuit model

where Prot represents the rotational power losses and pfl is the rated power.

2.3 Gravitational Search Algorithm

The Gravitational Search Algorithm (GSA) was proposed by Rashedi [14] in 2009, based on the law of gravity and mass interactions and inspired by Newtonian gravity and the laws of motion. The algorithm uses agents called masses; the masses attract each other with a "gravitational force" that causes a movement of all masses towards the objects with heavier masses. Consider a computational model with N agents, defined as follows:

xi = (xi^1, ..., xi^d, ..., xi^n)  for i = 1, 2, ..., N   (2.3)

where xi^d represents the position of the ith agent in the dth dimension. At time t, the force acting from a mass i to a mass j is defined as follows:

Fij^d(t) = G(t) · [Mpi(t)·Maj(t) / (Rij(t) + ε)] · (xj^d(t) − xi^d(t))   (2.4)

where Maj is the active gravitational mass related to agent j, Mpi is the passive gravitational mass of agent i, G(t) is the gravitational constant at time t, ε is a small constant and Rij is the Euclidean distance between the ith and jth agents. The force acting over an agent i in a d-dimensional space is described below:

Fi^d(t) = Σ_{j=1, j≠i}^{N} randj · Fij^d(t)   (2.5)

Hence, following Newton's second law, the acceleration of agent i at time t is computed as follows:

ai^d(t) = Fi^d(t) / Mni(t)   (2.6)


where Mni is the inertial mass of agent i. Therefore, the new velocity and position are calculated as follows:

vi^d(t+1) = randi · vi^d(t) + ai^d(t)
xi^d(t+1) = xi^d(t) + vi^d(t+1)   (2.7)

The initial value of the gravitational constant G changes with time according to the search strategy. Consequently, G is a function of the initial value G0 and time t:

G(t) = G(G0, t)   (2.8)

Gravitational and inertial masses are evaluated by a cost function that determines the quality of the particle; a heavier mass means a better solution. The gravitational and inertial masses are updated with the following equations:

Mai = Mpi = Mii = Mi,  i = 1, 2, ..., N   (2.9)

mi(t) = (fiti(t) − worst(t)) / (best(t) − worst(t))   (2.10)

Mi(t) = mi(t) / Σ_{j=1}^{N} mj(t)   (2.11)
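Equations 2.9-2.11 amount to a simple normalization of the population fitness (for minimization, best = min and worst = max); a minimal sketch:

```python
def update_masses(fitness):
    """Eqs. 2.9-2.11: map raw fitness values (minimization) to normalized
    masses; the best agent gets the largest mass, the worst gets zero."""
    best, worst = min(fitness), max(fitness)
    if best == worst:                       # degenerate population
        return [1.0 / len(fitness)] * len(fitness)
    m = [(fit - worst) / (best - worst) for fit in fitness]   # Eq. 2.10
    total = sum(m)
    return [mi / total for mi in m]                           # Eq. 2.11

M = update_masses([4.0, 1.0, 9.0, 2.5])  # agent 1 (fitness 1.0) is heaviest
```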

2.4 Experimental Results

In these experiments, the Gravitational Search Algorithm (GSA) was used to determine the optimal parameters of two induction motors, considering the approximate circuit model and the exact circuit model. Differential Evolution (DE), Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) were also applied to the same problem in order to compare and validate the results obtained by GSA, since these algorithms are widely used in the literature and show good performance. The parameters used for each algorithm in this experiment are listed below:

1. PSO: parameters c1 = 2, c2 = 2, and weight factors wmax = 0.9 and wmin = 0.4 [23].
2. ABC: the parameters implemented were provided by [24], limit = 100.
3. DE: in accordance with [25], the parameters were set to pc = 0.5 and f = 0.5.
4. GSA: the parameters were set according to [14].

2.4.1 Induction Motor Parameter Identification

For each algorithm, a population size of 25 and 3000 iterations were considered, and 35 independent trials were performed. The minimum, maximum, mean and standard deviation of the fitness values of each algorithm are reported in Table 2.2 for the approximate circuit model with motor 1, in Table 2.3 for the approximate circuit model with motor 2, in Table 2.4 for the exact model with motor 1, and in Table 2.5 for the exact model with motor 2 (Figs. 2.4 and 2.5). After evaluating the parameters determined by each algorithm, the results were compared with the manufacturer data taken from Table 2.1; the comparisons for the approximate model with motor 1 and motor 2 and the exact model with motor 1 and motor 2 are reported in Tables 2.6, 2.7, 2.8 and 2.9, respectively. The convergence of each algorithm through the iterations is plotted in Figs. 2.4 and 2.5. Finally, the slip versus torque curves for both models with motor 1 and motor 2 are shown in Figs. 2.6 and 2.7, respectively.

Table 2.1 Manufacturer data

                    Motor 1    Motor 2
Capacity (HP)       5          40
Voltage (V)         400        400
Current (A)         8          45
Frequency (Hz)      50         50
No. poles           4          4
Full load slip      0.07       0.09
Starting torque     15         260
Max. torque         42         370
Starting current    22         180
Full load torque    25         190

Table 2.2 Fitness value of approximate circuit model, motor 1

       GSA          DE          ABC         PSO
Min    3.4768e−22   1.9687e−15  2.5701e−05  1.07474e−04
Max    1.6715e−20   0.0043      0.0126      0.0253
Mean   5.4439e−21   1.5408e−04  0.0030      0.0075
Std    4.1473e−21   7.3369e−04  0.0024      0.0075

Bold numbers represent the best performance results

Table 2.3 Fitness value of approximate circuit model, motor 2

       GSA          DE          ABC         PSO
Min    3.7189e−20   1.1369e−13  3.6127e−04  0.0016
Max    1.4020e−18   0.0067      0.0251      0.0829
Mean   5.3373e−19   4.5700e−04  0.0078      0.0161
Std    3.8914e−19   0.0013      0.0055      0.0165

Bold numbers represent the best performance results


Table 2.4 Fitness value of exact circuit model, motor 1

       GSA      DE       ABC      PSO
Min    0.0032   0.0172   0.0172   0.0174
Max    0.0032   0.0288   0.0477   0.0629
Mean   0.0032   0.0192   0.0231   0.0330
Std    0.0000   0.0035   0.0103   0.0629

Bold numbers represent the best performance results

Table 2.5 Fitness value of exact circuit model, motor 2

       GSA      DE       ABC      PSO
Min    0.0071   0.0091   0.0180   0.0072
Max    0.0209   0.0305   0.2720   0.6721
Mean   0.0094   0.0190   0.0791   0.0369
Std    0.0043   0.0057   0.0572   0.1108

Bold numbers represent the best performance results

Table 2.6 Comparison of GSA, DE, ABC and PSO with manufacturer data, approximate circuit model, motor 1

       True-val  GSA      Error (%)  DE       Error (%)  ABC      Error (%)  PSO      Error (%)
Tst    15        15.00    0          14.9803  −0.131     14.3800  −4.133     15.4496  2.9973
Tmax   42        42.00    0          42.0568  0.135      40.5726  −3.398     39.6603  −5.570
Tfl    25        25.00    0          24.9608  −0.156     25.0480  0.192      25.7955  3.182

Bold numbers represent the best performance results

Table 2.7 Comparison of GSA, DE, ABC and PSO with manufacturer data, approximate circuit model, motor 2

       True-val  GSA      Error (%)  DE        Error (%)  ABC       Error (%)  PSO       Error (%)
Tst    260       260.00   0          258.4709  −0.588     260.6362  0.2446     288.9052  11.117
Tmax   370       370.00   0          372.7692  0.7484     375.0662  1.3692     343.5384  −7.151
Tfl    190       190.00   0          189.0508  −0.499     204.1499  7.447      196.1172  3.2195

Bold numbers represent the best performance results

Table 2.8 Comparison of GSA, DE, ABC and PSO with manufacturer data, exact circuit model, motor 1

       True-val  GSA      Error (%)  DE       Error (%)  ABC      Error (%)  PSO      Error (%)
Tst    15        14.9470  −0.353     15.4089  2.726      16.4193  9.462      15.6462  4.308
Tmax   42        42.00    0          42.00    0          42.00    0          42.00    0
Tfl    25        25.0660  0.264      26.0829  4.3316     25.3395  1.358      26.6197  6.4788

Bold numbers represent the best performance results

Table 2.9 Comparison of GSA, DE, ABC and PSO with manufacturer data, exact circuit model, motor 2

       True-val  GSA       Error (%)  DE        Error (%)  ABC       Error (%)  PSO       Error (%)
Tst    260       258.1583  −0.708     262.0565  0.7909     246.2137  −5.302     281.8977  8.4221
Tmax   370       370.00    0          370.00    0          370.00    0          370.00    0
Tfl    190       189.8841  −0.061     192.2916  1.2061     207.9139  9.428      166.6764  −12.27

Bold numbers represent the best performance results


Randomized initialization of the population
Find the best solution in the initial population
while (stop criteria)
    for i = 1:N (for all agents)
        update G(t), best(t), worst(t) and Mi(t) for i = 1, 2, ..., N
        calculate the mass of each agent Mi(t)
        calculate the gravitational constant G(t)
        calculate the acceleration in the gravitational field ai^d(t)
        update the velocity and positions of the agents vi^d, xi^d
    end (for)
    Find the best solution
end (while)
Display the best solution

Fig. 2.3 Gravitational search algorithm (GSA) pseudo code
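The pseudocode of Fig. 2.3 can be sketched as a compact Python implementation of Eqs. 2.3-2.11; the test function, population size, G0 and the exponential decay law for G are illustrative assumptions, not the settings used in the experiments of this chapter:

```python
import math
import random

def gsa(f, bounds, N=20, gen=200, G0=100.0, eps=1e-9, seed=3):
    """Minimal GSA sketch (Eqs. 2.3-2.11) for minimization of f."""
    rng = random.Random(seed)
    d = len(bounds)
    X = [[rng.uniform(l, u) for l, u in bounds] for _ in range(N)]
    V = [[0.0] * d for _ in range(N)]
    g_best, f_best = None, float("inf")
    for t in range(gen):
        fit = [f(x) for x in X]
        if min(fit) < f_best:                     # preserve best-so-far
            f_best = min(fit)
            g_best = list(X[fit.index(f_best)])
        best, worst = min(fit), max(fit)          # Eqs. 2.9-2.11: masses
        m = [1.0 if best == worst else (fi - worst) / (best - worst)
             for fi in fit]
        s = sum(m)
        M = [mi / s for mi in m]
        G = G0 * math.exp(-20.0 * t / gen)        # assumed decay law (Eq. 2.8)
        for i in range(N):
            a = [0.0] * d                         # Eqs. 2.4-2.6: a_i = F_i / M_i
            for j in range(N):
                if j == i:
                    continue
                R = math.dist(X[i], X[j])
                for k in range(d):
                    a[k] += rng.random() * G * M[j] * (X[j][k] - X[i][k]) / (R + eps)
            for k in range(d):
                V[i][k] = rng.random() * V[i][k] + a[k]      # Eq. 2.7
                l, u = bounds[k]
                X[i][k] = min(max(X[i][k] + V[i][k], l), u)
    return g_best, f_best

best, fbest = gsa(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

Note that the passive gravitational mass Mpi cancels against the inertial mass Mni when forming a = F/M, which is why only the mass of the attracting agent appears in the acceleration update.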

Fig. 2.4 Convergence evolution through iterations of model 1


Fig. 2.5 Convergence evolution through iterations of model 2

Fig. 2.6 Curve slip versus torque of motor 1 using PSO, ABC, DE and GSA considering the approximate circuit model and the exact circuit model

Fig. 2.7 Curve slip versus torque of motor 2 using PSO, ABC, DE and GSA considering the approximate circuit model and the exact circuit model

2.4.2 Statistical Analysis

After 35 independent executions of each evolutionary technique, the results were validated using a non-parametric statistical analysis known as Wilcoxon's rank-sum test [26], which considers a 0.05 significance level over the average fitness values to determine whether there is a difference. Table 2.10 shows the p-values of the comparisons GSA versus DE, GSA versus ABC and GSA versus PSO, where the null hypothesis holds when the significance value is higher than 5%, indicating that there is not enough difference between the samples.

Table 2.10 P-values from Wilcoxon's rank-sum test of the comparison of GSA, DE, ABC and PSO

Model/motor        GSA versus DE            GSA versus ABC           GSA versus PSO
Model 1, motor 1   6.545500588914223e−13    6.545500588914223e−13    6.545500588914223e−13
Model 1, motor 2   0.009117078811112        0.036545600995029        0.004643055264741
Model 2, motor 1   6.545500588914223e−13    6.545500588914223e−13    6.545500588914223e−13
Model 2, motor 2   1.612798082388261e−09    9.465531545379272e−13    3.483016312301559e−08
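The rank-sum test itself is easy to sketch with the normal approximation (ties are ignored here, and the samples below are synthetic, not the experimental fitness values):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test using the normal approximation
    (no tie handling), as applied to the 35 fitness values per algorithm."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # W: sum of the 1-based ranks occupied by the first sample
    W = sum(rank + 1 for rank, (_, grp) in enumerate(pooled) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))   # p = 2*(1 - Phi(|z|))

# clearly separated samples reject the null hypothesis at the 0.05 level
p = rank_sum_p(list(range(1, 21)), list(range(31, 51)))
```

A p-value below 0.05 rejects the null hypothesis, i.e., it indicates a significant difference between the two sets of runs.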


For all cases, the significance value is lower than 0.05, so the null hypothesis is rejected. This demonstrates that there is a significant difference between the results given by GSA and those of the algorithms used for comparison. Such evidence suggests that GSA surpasses the most common optimization techniques thanks to the consistency of its results and its efficient search strategy: the outcome has not occurred by coincidence, but because the algorithm is able to find efficient solutions due to its robustness and accuracy.

2.5 Conclusions

This chapter presented the Gravitational Search Algorithm (GSA) for determining the induction motor parameters of the approximate circuit model and the exact circuit model, using two different induction motors. The parameter estimation process is treated as a least-squares optimization problem, which becomes a complex task due to the non-linearity of the steady-state equivalent circuit model. The presented scheme outperforms the most popular optimization algorithms, such as Differential Evolution (DE), Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO), minimizing the error between the calculated and manufacturer data and converging faster than the other techniques used in the comparison. After 35 individual executions of each algorithm, a non-parametric statistical validation known as Wilcoxon's rank-sum test was applied, which proved that there is indeed a significant difference between the results obtained by GSA and those of the techniques used for comparison, improving the results reported in the literature and showing good performance in complex applications as well as consistent solutions thanks to the operators used in its search strategy.

References

1. H. Çaliş, A. Çakir, E. Dandil, Artificial immunity-based induction motor bearing fault diagnosis. Turk. J. Elec. Eng. Comp. Sci. 21(1), 1–25 (2013)
2. V. Prakash, S. Baskar, S. Sivakumar, K.S. Krishna, A novel efficiency improvement measure in three-phase induction motors, its conservation potential and economic analysis. Energy Sustain. Dev. 12(2), 78–87 (2008)
3. S.S. Waters, R.D. Willoughby, Modeling induction motors for system studies. IEEE Trans. Ind. Appl. IA-19(5), 875–878 (1983)
4. S. Ansuj, F. Shokooh, R. Schinzinger, Parameter estimation for induction machines based on sensitivity analysis. IEEE Trans. Ind. Appl. 25(6), 1035–1040 (1989)
5. J. De Kock, F. Van der Merwe, H. Vermeulen, Induction motor parameter estimation through an output error technique. IEEE Trans. Energy Convers. 9(1), 69–76 (1994)
6. H.R. Mohammadi, A. Akhavan, Parameter estimation of three-phase induction motor using hybrid of genetic algorithm and particle swarm optimization. J. Eng. 2014(148204), 6 (2014)
7. R.R. Bishop, G.G. Richards, Identifying induction machine parameters using a genetic optimization algorithm, in IEEE Proceedings Southeastcon, New Orleans, LA, USA (1990), pp. 476–479


8. D. Lindenmeyer, H.W. Dommel, A. Moshref, P. Kundur, An induction motor parameter estimation method. Int. J. Elec. Power Energy Syst. 23(4), 251–262 (2001)
9. V.P. Sakthivel, R. Bhuvaneswari, S. Subramanian, An improved particle swarm optimization for induction motor parameter determination. Int. J. Comp. Appl. 1(2), 71–76 (2010)
10. V.P. Sakthivel, R. Bhuvaneswari, S. Subramanian, Artificial immune system for parameter estimation of induction motor. Expert Syst. Appl. 37(8), 6109–6115 (2010)
11. V.P. Sakthivel, R. Bhuvaneswari, S. Subramanian, An accurate and economical approach for induction motor field efficiency estimation using bacterial foraging algorithm. Meas. J. Int. Meas. Confed. 44(4), 674–684 (2011)
12. I. Perez, M. Gomez-Gonzalez, F. Jurado, Estimation of induction motor parameters using shuffled frog-leaping algorithm. Elec. Eng. 95(3), 267–275 (2013)
13. A.G. Abro, J. Mohamad-Saleh, Multiple-global-best guided artificial bee colony algorithm for induction motor parameter estimation. Turk. J. Elec. Eng. Comp. Sci. 22, 620–636 (2014)
14. E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
15. F. Farivar, M.A. Shoorehdeli, Stability analysis of particle dynamics in gravitational search optimization algorithm. Inf. Sci. 337–338, 25–43 (2016)
16. S. Yazdani, H. Nezamabadi-pour, S. Kamyab, A gravitational search algorithm for multimodal optimization. Swarm Evol. Comput. 14, 1–14 (2014)
17. S.D. Beigvand, H. Abdi, M. La Scala, Combined heat and power economic dispatch problem using gravitational search algorithm. Elect. Power Syst. Res. 133, 160–172 (2016)
18. V. Kumar, J.K. Chhabra, D. Kumar, Automatic cluster evolution using gravitational search algorithm and its application on image segmentation. Eng. Appl. Artif. Intell. 29, 93–103 (2014)
19. W. Zhang, P. Niu, G. Li, P. Li, Forecasting of turbine heat rate with online least squares support vector machine based on gravitational search algorithm. Knowl.-Based Syst. 39, 34–44 (2013)
20. D. Karaboga, An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, p. 10, 2005
21. R. Storn, K. Price, Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 341–359 (1997)
22. J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of the IEEE International Conference on Neural Networks (ICNN'95), vol. 4 (1995), pp. 1942–1948
23. V.P. Sakthivel, S. Subramanian, On-site efficiency evaluation of three-phase induction motor based on particle swarm optimization. Energy 36(3), 1713–1720 (2011)
24. M. Jamadi, F. Merrikh-Bayat, New method for accurate parameter estimation of induction motors based on artificial bee colony algorithm. Cornell University Library, New York, NY, USA, Tech. Rep. (2014)
25. R.K. Ursem, P. Vadstrup, Parameter identification of induction motors using differential evolution, in Proceedings of the 2003 Congress on Evolutionary Computation (CEC'03), vol. 2 (2003), pp. 790–796
26. F. Wilcoxon, in Breakthroughs in Statistics: Methodology and Distribution, ed. by S. Kotz, N.L. Johnson (Springer, New York, NY, 1992), pp. 196–202

Chapter 3

Non-conventional Overcurrent Relays Coordination

The Invasive Weed Optimization (IWO) algorithm has been adapted to the high-dimension coordination problem. Many utilities follow the criterion of increased use of differential protection in transmission lines, which has absolute selectivity but offers no backup function. Hence, distance relays are used as backup protection for transmission lines and directional overcurrent relays (DOCR) as backup protection for sub-transmission lines. As a result, DOCR coordination in meshed configurations becomes frequent. In addition, there are occasions when primary protection is not available due to maintenance or failure, so the overcurrent relay comes into play as primary protection. Thus, optimal DOCR settings play an important role in these scenarios.

Protective relaying comprises the procedures and techniques focused on keeping the power system working safely during and after undesired and abnormal network conditions, mostly caused by faults. The overcurrent relay is one of the oldest protective relays. Its operating principle is straightforward: when the measured current is greater than a specified magnitude, the protection trips. Fewer variables are required from the system in comparison with other protections, making the overcurrent relay the simplest and also the most difficult protection to coordinate; its simplicity is reflected in low implementation, operation, and maintenance costs. The counterpart is the increased tripping time offered by this kind of relay, especially for faults located far from its location; this problem can be particularly accentuated when standardized inverse-time curves are used or when only maximum faults are considered in relay coordination. Although these limitations have caused the overcurrent relay to be slowly relegated and replaced by more sophisticated protection principles, it is still widely applied in subtransmission, distribution, and industrial systems.
© Springer Nature Switzerland AG 2019
E. Cuevas et al., Metaheuristics Algorithms in Power Systems, Studies in Computational Intelligence 822, https://doi.org/10.1007/978-3-030-11593-7_3

The use of non-standardized inverse-time curves, the modeling and implementation of optimization algorithms capable of carrying out the coordination process, and the use of different levels of short-circuit current are methodologies proposed to improve overcurrent relay performance. These techniques may transform the typical overcurrent relay into a more sophisticated one without changing its


fundamental principles and advantages. Consequently, a more secure yet still economical alternative can be obtained, widening its area of application.

3.1 Genetic Algorithms Implementation

Genetic algorithms (GA) [1–3] are iterative, population-based metaheuristics developed in the 1960s by John Holland, his students, and colleagues. GA are based on the theory of natural selection proposed in parallel [4] by Darwin [5] and Wallace [6] in the late 1850s. These methods are part of evolutionary computation and were developed to solve optimization problems and to study the self-adaptation of molecules in biological processes; by combining directed and stochastic searches, they obtain a balance between exploration and exploitation of the search space [7].

The methodology followed by the GA is depicted in Fig. 3.1, and each step is briefly described next. At first, the population is randomly generated using a uniform distribution; each member of the population is called a chromosome (Cx). Composed of genes, i.e., settings for all the system variables, each chromosome is a candidate for a complete system solution. The population size commonly remains unaltered during the simulation. Individuals are evaluated according to the objective function on each iteration or generation in order to identify the fittest elements, which will have better chances to survive.

The next step is a matter of life and death; it consists of selecting the chromosomes that will be used to leave offspring and consequently discarding some elements of the population. Several schemes may be implemented to carry out this

Fig. 3.1 Genetic algorithms methodology


Fig. 3.2 Roulette-wheel and universal sampling selection methods can use fitness- or ranking-based approaches

process [8, 9]. Truncation selects the first n elements according to their fitness, and tournament selection is based on the competition of a set of chromosomes. In this work, stochastic universal sampling and roulette-wheel selection are implemented. Both methods consist of sorting the chromosomes from the fittest to the least adapted; the individuals are then mapped to contiguous segments computed using Eq. 3.1, and these portions can consider fitness-based or ranking-based approaches. While universal sampling selects by placing equally spaced pointers over the line of ranked chromosomes, the roulette-wheel spins once to select each parent. Figure 3.2 depicts an example of both methods.

Pi = xi / (Σ_{j=1}^{N} xj)    (3.1)

where:
Pi  portion of the roulette assigned to chromosome i,
x   fitness or ranking value of the chromosome,
N   total number of chromosomes.

Consider a four-chromosome population: the assumed fitness of each one is displayed in the second column of Table 3.1, and the roulette portion assigned by the fitness-based option in the third. There may be generations, especially the initial ones, where the fittest element is much better than the others; in this example the best element would cover 70% of the roulette-wheel. This situation may cause the population of selected parents to be dominated by that element, reducing diversity and increasing the possibility of premature convergence.


Table 3.1 Roulette-wheel portion based on fitness and ranking approaches

Cx | Fitness  | Roulette portion (%) | Ranking | Value | Roulette portion (%)
1  | x − 1    | 6                    | 3       | 1/3   | 16
2  | 2x + 1   | 15                   | 2       | 1/2   | 24
3  | x² + 3x  | 70                   | 1       | 1     | 48
4  | x + 2    | 9                    | 4       | 1/4   | 12
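To make the selection mapping concrete, here is a minimal Python sketch of Eq. 3.1 applied to the ranking-based column of Table 3.1; the helper names (`roulette_portions`, `spin`) are illustrative and not from the book.

```python
import random

def roulette_portions(values):
    """Portion of the wheel for each chromosome (Eq. 3.1);
    `values` may be fitness-based or ranking-based."""
    total = sum(values)
    return [v / total for v in values]

def spin(portions, rng=random.random):
    """Spin the wheel once and return the index of the selected chromosome."""
    r, acc = rng(), 0.0
    for i, p in enumerate(portions):
        acc += p
        if r <= acc:
            return i
    return len(portions) - 1  # guard against floating-point round-off

# Ranking-based values of Table 3.1: Cx1..Cx4 are ranked 3, 2, 1, 4,
# so their assigned values are 1/3, 1/2, 1, 1/4.
portions = roulette_portions([1/3, 1/2, 1, 1/4])
print([round(100 * p) for p in portions])  # → [16, 24, 48, 12]
```

The same two functions cover both variants: passing raw fitness values reproduces the fitness-based column, while the ranked values above give the 48/24/16/12 split of the table.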

Fig. 3.3 The concept of a genetic algorithm

The second option is to assign a value equal to the inverse of the chromosome's ranking position; this increases the selection chances of the least adapted and brings population diversity. Another benefit of this approach is that the ranking values, and consequently the roulette portions, can be defined at the beginning of the simulation, avoiding further calculations on each generation and reducing the computational effort.

The offspring is generated through the application of genetic operators; reproduction or crossover, mutation, and elitism are the most common ones. Crossover is the main genetic operator: two or more selected parents are randomly chosen to interchange their genes considering one or more crossover points, as can be seen in Fig. 3.3b. The objective of mutation is to bring diversity to the population by randomly changing one or more genes of the selected chromosome. Almost all the new chromosomes are derived from crossover, a small percentage comes from mutated elements, and occasionally an even smaller portion is composed of elite parents, i.e., the fittest elements of the previous generation. The aim of elitism is to ensure that the solution does not worsen over the generations. The operators' probabilities of occurrence sum to one:

P(C) + P(M) + P(E) = 1    (3.2)
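The three operators can be sketched for a binary encoding as follows; the function names and probability values are illustrative assumptions, and Eq. 3.2 only requires that the three probabilities sum to one.

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Interchange the gene tails of two parents at a random point (Fig. 3.3b)."""
    point = rng.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(chromosome, rate=0.05, rng=random):
    """Flip each binary gene with a small probability to inject diversity."""
    return [1 - g if rng.random() < rate else g for g in chromosome]

def pick_operator(p_cross=0.80, p_mut=0.15, p_elite=0.05, rng=random):
    """Decide which operator produces the next offspring (Eq. 3.2);
    the three probabilities must sum to one."""
    r = rng.random()
    if r < p_cross:
        return "crossover"
    if r < p_cross + p_mut:
        return "mutation"
    return "elitism"
```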

In order to explore the search space, genetic algorithms start the simulation by randomizing the initial population; the crossover operator helps by interchanging the genes of the fittest elements while mutation introduces diversity. As the generations pass, the algorithm detects optimal zones and exploits their neighborhoods.

The genetic algorithm methodology is largely made up of randomized elements, such as the selected parents, the percentage of reproductions and mutations, and the crossover points, among others. The influence of these elements makes it impossible to guarantee an optimal solution; it is even possible to obtain different solutions on every simulation. Despite these disadvantages, the implementation of these algorithms presents diverse benefits:
• The implementation of genetic algorithms is straightforward and does not require a deep mathematical basis.
• GA are robust and flexible enough to be adapted to different problems. They may not obtain optimal solutions, but in some cases close is enough.
• They can be adapted to solve different objective functions, allowing any kind of restrictions.
• These algorithms explore the search space before exploiting it, lightening the computational effort.

3.2 Invasive-Weed Optimization

In addition, there is a relatively new metaheuristic algorithm named the invasive-weed optimization (IWO) method [10–12]. The strategy is based on a high exploration of the search space performed by different mutation operators.

Weeds are plants with vigorous invasive habits that commonly grow in undesirable places; these plants tend to invade crops in order to find and absorb water and nutrients to keep growing and reproducing, becoming a threat that is difficult to eliminate [13]. Weeds have survived tillage and herbicides; they cannot be fully eradicated and keep spreading and mutating stronger. This describes a robust, stubborn system, self-adapted to environmental adversities, properties that can be harnessed by an optimization method.

Invasive-weed optimization is a numerical stochastic metaheuristic that mimics the behavior of colonizing weeds; it was proposed by Mehrabian and Lucas [10] in 2006 with the objective of emulating the successful persistence of these plants. The IWO methodology is illustrated in Fig. 3.4. The initial steps are similar to a GA implementation: a possible system solution is known as a weed, and the weed population is randomly created and then evaluated. Each member of the population is allowed to leave n seeds (S) depending on its own fitness and on the highest and lowest population fitness, as described by Eq. 3.3:

Si = Smin + (Fi − Fmin)(Smax − Smin) / (Fmax − Fmin)    (3.3)

where:
Si            total seeds of the weed i,
[Smin, Smax]  range of allowed seeds,
Fi            fitness of weed i,
[Fmin, Fmax]  minimum and maximum population fitness.

Fig. 3.4 Invasive-weed optimization methodology
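Eq. 3.3 transcribes directly into code. The sketch below assumes that higher fitness is better and that the seed count is rounded to the nearest integer; both are illustrative assumptions, not the book's exact rule.

```python
def seeds_for_weed(fitness, f_min, f_max, s_min, s_max):
    """Seeds a weed may leave (Eq. 3.3): linear between the worst
    weed of the colony (s_min seeds) and the best one (s_max seeds)."""
    if f_max == f_min:            # degenerate colony: all weeds equally fit
        return s_max
    share = (fitness - f_min) / (f_max - f_min)
    return round(s_min + share * (s_max - s_min))

# The fittest weed leaves the most seeds, the least fit the fewest.
print(seeds_for_weed(10.0, f_min=0.0, f_max=10.0, s_min=1, s_max=5))  # → 5
print(seeds_for_weed(0.0,  f_min=0.0, f_max=10.0, s_min=1, s_max=5))  # → 1
```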

Once the total seeds of each weed are defined, the main characteristic of this method is introduced: the seeds are subject to invasive-weed operators based on mutation schemes, called spreading, dispersing, and rolling-down. These operators, described below, are responsible for the seeds' dissemination, i.e., for the exploration and exploitation of the search space. Each operator is assigned a probability of occurrence:

P(S) + P(D) + P(R) = 1    (3.4)

Spreading This operator consists of disseminating the seed by randomly creating a new individual. In this work, multiple mutations are applied to less than half of the content of the current seed; by this means the seed is spread while some part of it is conserved. An example of this operator is illustrated in Fig. 3.5a. Dispersing The second operator, depicted in Fig. 3.5b, aims to disperse the seed to a place close to the original plant. The procedure consists of computing a degree of difference and multiplying it by the seed. The distance is gradually reduced as the simulation advances. In addition to this approach, a second dispersion method is implemented in this work, which consists of mutating a small part, less than 20%, of the seed content. Rolling-Down The aim of the last operator is to move the seed to a better location. This method evaluates the weed's neighborhood and only leaves the seed if a


Fig. 3.5 Invasive-weed optimization operators

better place is found. The neighborhood is defined as the set of places located at a distance equal to one transformation of the current plant. The implementation in this work creates copies of the current seed and applies random mutations; the mutated copies are evaluated, and this process is repeated until a copy improves the weed's fitness. The improved seeds and the ones with close but different solutions are kept, while the others are dismissed. The rolling-down operator is shown in Fig. 3.5c.

The emulation of invasive weeds' behavior has been widely accepted by the scientific community; this methodology has solved different problems, as shown in Refs. [10, 11, 14], among others. The spreading operator explores the search space, the dispersing one exploits the weed location, and the rolling-down combines both to improve the current solution. Altogether, the invasive-weed operators permit a rapid exploration and exploitation of the search space.

Similar to GA, IWO is a metaheuristic that cannot guarantee optimal values or convergence to the same solution on each simulation. The implementation of this method presents the same benefits mentioned for GA. In addition, its mutation-based operators create new settings instead of performing crossover operations, requiring less computational effort.
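The three operators might be sketched as follows for real-valued seeds normalized to [0, 1]; the mutation counts, step sizes, and function names are illustrative assumptions, not the book's settings.

```python
import random

def spreading(seed, rng=random):
    """Mutate fewer than half of the genes, conserving part of the seed
    (exploration, Fig. 3.5a)."""
    n = rng.randrange(1, max(2, len(seed) // 2))
    out = list(seed)
    for i in rng.sample(range(len(seed)), n):
        out[i] = rng.random()          # genes assumed normalized to [0, 1]
    return out

def dispersing(seed, step, rng=random):
    """Perturb the whole seed by a small amount (Fig. 3.5b);
    `step` is assumed to shrink as the simulation advances."""
    return [min(1.0, max(0.0, g + rng.uniform(-step, step))) for g in seed]

def rolling_down(seed, fitness_fn, tries=10, rng=random):
    """Evaluate mutated copies of the seed and keep the first one that
    improves on it; otherwise return the seed unchanged (Fig. 3.5c)."""
    best = fitness_fn(seed)
    for _ in range(tries):
        candidate = dispersing(seed, step=0.05, rng=rng)
        if fitness_fn(candidate) > best:
            return candidate
    return seed
```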

3.3 Coordination as an Optimization Problem

The problem has been faced through the implementation of diverse optimization algorithms; the first efforts were focused on exact algorithms such as linear and nonlinear programming, capable of obtaining optimal coordination solutions. Nevertheless, the trend over the years has turned toward heuristic and metaheuristic methods capable of overcoming the limitations of exact methods, with the, perhaps negligible, drawback of not guaranteeing an optimal solution.

Nowadays the coordination of overcurrent relays is a very important subject in different networks. In contrast to other protective devices such as fuses and reclosers, overcurrent relay coordination has been addressed by many methods; among them, optimal coordination methods have advantages over common coordination techniques. The operation of relays in a network is often treated as linear and symmetrical in the dial attribute [15]. In reality this is not the case: the dial and Ipickup attributes are unknown quantities, so the objective function turns this into a nonlinear problem.

Several optimization methods (deterministic, heuristic, hybrid) have been proposed to attack this problem. The first works that seek to facilitate coordination employ a linear programming (LP) model to solve the problem while one or two variables are considered as adjustable settings [16, 17]. The main issues of this approach are the requirement of a good initial guess and the high probability of being trapped in local minima. Later, coordination problems were approached with evolutionary algorithms: genetic algorithms (GA) are proven to perform well while solving the coordination problem considering one and two adjustable settings [18–21]; particle swarm optimization (PSO) [22–24] has also been used with good results; and, to improve the search, hybrid GA and mixed PSO algorithms have been proposed [25–27].
The above proposals considered standardized time curves while searching for the optimal settings of the time dial setting (dial), the pickup current (Ipickup), or the characteristic constants, combining optimization methods such as linear optimization and genetic algorithms. The cited contributions aimed to ensure relay coordination for maximum faults, which are the most important. Nevertheless, those algorithms do not monitor whether coordination is achieved for lower currents. References [28, 29] propose different methods to avoid curve intersections, i.e., to achieve coordination for lower currents. The former consists of a trial-and-error curve-fitting method that selects optimal values of the three aforementioned parameters; the latter inspects and eliminates curve intersections by modifying the dial and the multiple of the pickup current until coordination is achieved for currents lower than the maximum. Finally, [30] proposes two algorithms that consider two magnitudes of short-circuit current in order to achieve coordination. The sums of the tripping times of the main relays for


the close-in and far-end fault magnitudes are computed in order to evaluate the objective function of the proposed algorithms. The use of curves with the same inversion grade does not always prevent curve crossings unless additional curve fitting is employed [28, 29]; in addition, the tripping times of both main and backup relays tend to be high for those currents. In this chapter we consider the tripping times for lower currents as well as the use of non-standardized inverse-time curves to improve coordination for the mentioned current levels. We carry out the coordination of overcurrent relays considering two different levels of short-circuit current and non-standardized inverse-time curves, obtained by employing five parameters as adjustable settings. Presently, some commercial relays, through software tools, already allow the user to define the curve parameters instead of simply selecting among the standardized ones.

3.3.1 Overcurrent Relays

The overcurrent relay (OCR) [15, 31] is the simplest, cheapest, and oldest among all protection principles. Its operating time, defined by the tripping characteristic, is inversely proportional to the current, so severe faults are cleared in minimal time while overload conditions are tolerated longer. Despite the increased use of more sophisticated protections, it is still commonly used as phase primary protection in distribution and subtransmission systems and as phase secondary protection in transmission systems. More than a century has passed since the OCR was developed, and it is still used with hardly any modification [32].

It is general practice to use a set of two or three overcurrent relays for protection against inter-phase faults and a separate overcurrent relay for single-phase-to-ground faults. Separate ground relays are generally favored because they can be adjusted to provide faster and more sensitive protection for single-phase-to-ground faults than the phase relays can. However, the phase relays alone are sometimes relied on for protection against all types of faults. On the other hand, the phase relays must sometimes be made inoperative on the zero-sequence component of ground fault current.

The overcurrent relay is well suited to distribution system protection for several reasons. Not only is it basically simple and inexpensive, but these advantages are realized to the greatest degree in many distribution circuits. In electric utility distribution circuit protection, the greatest advantage can be taken of the inverse-time characteristic because the fault current magnitude depends mostly on the fault location and temporary overload conditions are tolerated; however, it is affected by changes in generation or in the high-voltage transmission system. Relays with extremely inverse curves are used not only for this reason but also because such relays provide the best selectivity with fuses and reclosers.
One of the assets of the inverse-time relay is its relative selectivity; that means it is designed to operate as main protection for the line where it is placed and as


a backup protection for any adjacent line. The principle is straightforward: the OCR gives a signal to trip the protected line when the measured current is greater than the previously set Ipickup. A common approach consists in setting the Ipickup to a magnitude equal to or greater than 1.5 times the maximum load current (Iload) flowing through the line where the relay operates as a main protection; nevertheless, in some of the reported works, the pickup current multiplier (Pm) is reduced to 1.25. The objective of the Pm is to avoid relay operation under temporary overload conditions that can be considered normal system operation. Thus, the Ipickup is computed as shown by Eq. 3.5:

Ipickup = Iload × Pm    (3.5)

The tripping time of an overcurrent relay for a given short-circuit current Isc is computed using Eq. 3.6, defined by the IEEE [33]:

t = [ A / ((Isc / Ipickup)^p − 1) + B ] × TDS    (3.6)

where:
t        tripping time,
Isc      short-circuit current,
Ipickup  pickup current,
TDS      time dial setting,
A, B, p  time curve characteristic constants.
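The tripping-time computation of Eq. 3.6 can be sketched as follows. The constants used in the example are, to the best of our knowledge, the IEEE C37.112 "very inverse" set (A = 19.61, B = 0.491, p = 2); verify them against [33] before relying on them.

```python
def tripping_time(i_sc, i_pickup, tds, A, B, p):
    """IEEE inverse-time characteristic of Eq. 3.6."""
    m = i_sc / i_pickup
    if m <= 1.0:
        return float("inf")  # below pickup the relay never operates
    return (A / (m ** p - 1.0) + B) * tds

# Fault at ten times pickup, very-inverse constants, TDS = 1
t = tripping_time(i_sc=4600.0, i_pickup=460.0, tds=1.0, A=19.61, B=0.491, p=2.0)
print(round(t, 3))  # → 0.689
```

Evaluating this function over a range of Isc values reproduces one inverse-time curve; scaling TDS shifts the curve vertically without changing its inversion grade, exactly as described for Fig. 3.6b.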

An inverse-time characteristic curve is designed for each relay of the system; it can be obtained by evaluating the previous equation for different Isc magnitudes. The curve indicates the time that the relay will take to trip a fault of a given magnitude; it is asymptotic to the Ipickup, so the tripping times for currents near that value tend to infinity. The characteristic constants are responsible for giving the inversion grade to the curve, and the TDS is a time multiplier which moves the curve along the vertical axis while keeping its inversion grade unaltered. The methodology followed to design an inverse-time curve consists of the selection of proper values of TDS, Pm, and one of the three sets of characteristic constants established in the ANSI and IEC [34] standards. Since the relay employs a single adjustment to operate as main and backup protection, the security of the protected system relies on a correct parameter selection and curve design. The aim of limiting the curve to certain inversion grades is to give more compatibility among all the OCR curves in the system.

Figure 3.6a shows an example of the three curves plotted on a bilogarithmic scale; the inversion grade is remarkably different for each of them, so their names are appropriate to distinguish one from another. In Fig. 3.6b, the previously mentioned multiplicative effect of the TDS on the inverse-time curves is depicted;


Fig. 3.6 IEEE standardized inverse-time curves and the TDS effect on the curves design

generally, values from 0.5 to 15 can be defined as time dial settings for overcurrent relays. Nevertheless, given that the tripping time is directly proportional to the TDS magnitude, large values are rarely used. The TDS selection range can be considered continuous for digital relays or discrete for electromechanical ones.

As stated in previous paragraphs, the tripping time tends to infinity as Isc approaches the pickup current. This behavior is exemplified in the case used to compute the curves depicted in Fig. 3.6a; in this example the Ipickup equals 460 A, but the tripping times of the very and extremely inverse-time curves for a current near 600 A are around 20 and 60 s respectively, a slow operation for coordination purposes. Because of that, the region between 1 and 1.5 times the Ipickup is commonly not considered during the coordination process. To ensure that the relay is capable of detecting a fault magnitude and tripping the line in a reasonable amount of time, overcurrent relays that act as backups are subjected to a sensitivity filter before the coordination process is carried out.

3.3.2 Sensitivity of Relays

The sensitivity analysis examines whether the backup relay is sensitive enough to operate for the minimum fault located at the far end of its primary relay's protection zone. It is computed for every coordination pair and is given in Eq. 3.7:


Ksensitivity = Isc,Backup(2φ) / Ipickup    (3.7)

where Isc,Backup(2φ) is the current that the backup relay senses for the minimum (two-phase) fault simulated at the far end of the longest adjacent line, and Ipickup is the pickup current of the backup relay.

The sensitivity analysis is a very important matter in the coordination study. Coordination pairs whose backup relays do not fulfill the sensitivity requirement will lead to very high operation times; in other words, acceptable backup operation times correspond to coordination pairs whose backup relays do fulfill it. It is observed from Fig. 3.7 that the operation time is very high for faults located near the vertical asymptote of the relay characteristic curve, and infinite for a fault whose magnitude equals that asymptote (Isc = Ipickup). On a log/log graph of the relay characteristic, the vertical asymptote is located at M = 1, where M represents multiples of the pickup current. The region to the left of M = 1 is a dead zone where the relay will never operate (Isc < Ipickup). The region between 1 ≤ M ≤ 1.5 is an undesired operation zone due to its high operation times, and the rest of the region, to the right of M = 1.5, is a desirable operation zone with reasonable operation times. Therefore, acceptable backup operation times are obtained when faults are located outside the vertical asymptotic region. Hence the sensitivity factor M = 1.5 is established as a comparative reference for the sensitivity analysis. In other words, sensitivity is fulfilled for those coordination pairs whose backup two-phase fault current is at least 1.5 times the backup pickup current [15]. The sensitivity constraint is given in:

Ksensitivity ≥ 1.5    (3.8)

Fig. 3.7 Sensitivity of relays
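The filter of Eqs. 3.7 and 3.8 reduces to a one-line check; the function name and the example currents below are illustrative.

```python
def is_sensitive(i_sc_2ph_backup, i_pickup_backup, k_min=1.5):
    """Sensitivity check of Eqs. 3.7-3.8: the backup relay must see the
    minimum (two-phase) fault at >= 1.5 times its pickup current."""
    return i_sc_2ph_backup / i_pickup_backup >= k_min

print(is_sensitive(900.0, 460.0))  # M ≈ 1.96, desirable zone → True
print(is_sensitive(600.0, 460.0))  # M ≈ 1.30, high-time zone → False
```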

3.3.3 Directional Overcurrent Relays (DOCRs)

The overcurrent relay (OCR, 51) takes the input signal from a current transformer (CT) and compares it with a pre-specified value (Ipickup). If the input current exceeds the pre-specified value, the relay detects an overcurrent scenario and, once the operation time is reached, sends a tripping signal to the breaker, which opens its contacts to disconnect the protected line. The OCR has no directionality; thus it can only be implemented in radial lines. The overcurrent relay tripping logic scheme is presented in Fig. 3.8.

Directional overcurrent relays (DOCRs, 67) are designed to sense the actual operating conditions on an electrical circuit and trip circuit breakers when a fault is detected. Unlike normal overcurrent relays (51), DOCRs have directionality. Two measuring instruments are needed for this: current transformers (CT) and/or potential transformers (PT). Each DOCR is polarized with the voltage signal from a PT or with the polarized current of the neutral CT, if available, which is used as a reference signal. When a fault occurs, the phase relationship between voltage and current, or between both currents, is used to determine the direction of the fault [31]. The relay first discriminates whether the fault is located in front of or behind it. If the fault is located behind the relay, no operation takes place. If the fault is located in front of the relay, the fault magnitude is compared with the reference current in order to decide whether to operate or not. Therefore, in order to operate, the DOCR must satisfy both conditions: direction and magnitude. This is illustrated in Fig. 3.9.

Fig. 3.8 Overcurrent relay tripping logic scheme

Fig. 3.9 Directional overcurrent relay tripping logic scheme


Fig. 3.10 Typical 90° type directional overcurrent relay characteristic

The direction of a DOCR can be polarized in different ways; the most common one is illustrated in Fig. 3.10. Most system voltages do not change their phase positions significantly during a fault. In contrast, line currents can shift around 180° (essentially reversing their direction of flow) for faults on one side of the circuit CTs relative to a fault on the other side. Therefore, the DOCR is polarized with the voltage signal from a PT, which is used as a reference signal. The name of the scheme is due to the quantities considered for the polarization (Vbc) and operation (Ia) functions during steady state. The terms "maximum-torque line" and "zero-torque line" come from electromechanical relay designs; they are also known as operating lines or thresholds in solid-state relay designs. The maximum operating torque for the relay of phase (a) occurs when the current flowing in the tripping direction lags the voltage Van by 60°. The operating (trip, contact close) zone is represented by the red dashed half plane, and the non-operating zone by the green dashed half plane. Higher current values are required as Ia deviates from the maximum-torque line. The operating torque at any angle is a function of the cosine of the angle between the current (Ia) and the maximum-torque line, as well as of the magnitudes of the operating quantities.
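A sketch of that decision logic using complex phasors. The maximum-torque line is assumed here at Vbc rotated by +30°, which corresponds to Ia lagging Van by 60° when Vbc lags Van by 90°; treat the angle and the function names as illustrative assumptions.

```python
import cmath
import math

def directional_trip(ia, vbc, i_pickup, mta_deg=30.0):
    """90-degree connected directional element: operate when Ia exceeds
    pickup AND falls in the half plane around the maximum-torque line
    (torque proportional to the cosine of the angle to that line)."""
    mtl = cmath.phase(vbc) + math.radians(mta_deg)
    torque = math.cos(cmath.phase(ia) - mtl)
    return abs(ia) > i_pickup and torque > 0.0

vbc = cmath.rect(1.0, math.radians(-90.0))       # Vbc lags Van by 90 deg
fwd = cmath.rect(2.0, math.radians(-60.0))       # fault current, forward
rev = cmath.rect(2.0, math.radians(120.0))       # fault current, reversed
print(directional_trip(fwd, vbc, i_pickup=1.0))  # → True
print(directional_trip(rev, vbc, i_pickup=1.0))  # → False
```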

3.3.4 Directional Overcurrent Relay Coordination (DOCRs)

The main task of protective relaying engineering is coordinating the protective devices. Overcurrent protections are set to clear faults on their main lines and to operate as backups for adjacent lines. The complexity of the problem increases exponentially as the power system grows; for example, the radial system of four buses and three relays shown in Fig. 3.11a can be easily coordinated; nevertheless, the addition of one interconnected node and bilateral generation can turn this task into a rather difficult one. The number of relays to coordinate increases from three to ten with this slight modification, as illustrated in Fig. 3.11b.

The Iload is necessary to calculate the Ipickup; moreover, the coordination process is performed considering the maximum fault magnitude, commonly caused by three-phase (3φ) faults. Consequently, load-flow and fault analyses have to be carried out


Fig. 3.11 Two examples of power systems

since their results are needed to coordinate the protections. The load demand and the results obtained by a fault analysis are well known by the system operators; consequently, they can be either computed or retrieved from historical data.

The main characteristic of radial systems is their load-flow direction; considering loads connected at nodes 2, 3, and 4 of Fig. 3.11a and a short-circuit occurrence at bus 4, the current will flow from node 1 to the fault point, i.e., in the downstream direction. The coordination process starts by setting the curve parameters of the downstream protections, adjusting relay 34 to trip its main line as fast as possible for an Isc,max; the load connected to node 4 is considered as its Iload. Then relay 23 is adjusted as main protection for line 2–3 and also as backup of relay 34; its Iload will be equal to the sum of the loads connected to nodes 3 and 4. This process continues until the relay closest to the generator is coordinated. The unilateral load-flow in radial systems makes the use of directional overcurrent relays unnecessary.

On the other hand, in an interconnected power system the current flows in both directions, and consequently the use of DOCRs is required. Let us consider a fault located at 80% of line 2–3 of the system depicted in Fig. 3.11b; the fault contribution will come from both sides of the line, and the electric distance between relays 32, 34, and 35 and the fault location, and consequently the measured fault magnitude, is practically identical. Relay 32 must trip its main line, but an operation of relays 34 and 35 would cause an undesirable outage of non-faulted lines. Broadly speaking, the directional function allows or prevents the operation of the relay for faults occurring in a specific direction [15]. The coordination process will be explained using Fig. 3.12.
Suppose a three-phase fault f1 occurs on the line 2–3; the relays 32 and 23 have to clear the fault as primary protections. Nevertheless, let us assume that just the first of them


3 Non-conventional Overcurrent Relays Coordination

Fig. 3.12 Relay coordination is performed considering that one of the relays operates correctly

accomplished its task. As a consequence of that malfunction, the fault is still being fed by the generators located at nodes 1 and 4. The directional function prevents relays 21 and 24 from detecting this fault; therefore, protections 12 and 42 are appropriate to operate as backups of the failed relay, isolating the fault and preventing it from spreading towards the rest of the system. The relays 12 and 23, as well as the relays 42 and 23, form coordination pairs, namely, pairs of relays in which one of them is backup of the other. A relay can be part of as many coordination pairs as there are adjacent lines located in its trip direction, meaning that each relay can be backup of multiple relays, just as multiple relays can be its backups. The coordination current (Ic) is the maximum current seen by the backup relay after the occurrence of a fault located in the main zone of its pair. As its name suggests, the Ic is the current used to carry out the coordination. In Fig. 3.12 the fault magnitude seen by relay 23 after correct operation of relay 32 is equal to x + y A, which is a combination of the contributions coming from the lines 1–2 and 4–2, following Kirchhoff's first law. The contributions do not necessarily have the same magnitude, so it can be said that each coordination pair has an individual coordination current. Further, while the tripping time of the primary relay is computed considering the full amount of current (x + y A in this example), the backup tripping time is calculated considering just the contribution of its line (x A). This situation is an example of an effect known as infeed current, present in interconnected and bidirectional power systems. The definitions of the latter paragraphs are complemented using Fig. 3.12. It can be noted that relay 12 is a coordination pair of relays 23 and 24; therefore, it has to be adjusted to respond as backup if any of them fails.
Assume two independent faults f1 and f2 occurring at different moments, and suppose that in each case the relay on the right operates correctly while the relay on the left fails to trip; the contribution from the line 1–2 to each fault will surely have a different magnitude, so relay 12 will operate as backup for more than one relay considering different coordination currents. The complexity of the problem increases when the same relay has to be coordinated with its own backups, although this example is not such a case. While the coordination process is simple to achieve for small systems, the complexity grows rapidly as the system grows either in nodes or interconnections.


In order to ensure that the backups respond only if the main relay fails to operate or if its operation takes too long, the backup relays should not trip the line immediately, but after a time delay called the Coordination Time Interval (CTI); the CTI magnitude is assumed to be 0.3 s when overcurrent relays are coordinated with each other. Thus, the desired tripping time of a backup relay (Tbackup) must be at least the sum of the primary relay tripping time (Tprimary) and the CTI, as shown in Eq. 3.9:

$$T_{backup} \geq T_{primary} + CTI \qquad (3.9)$$

A common assumption in protective relaying is that one of the primary relays will operate correctly; in this example, relay 21 is supposed to do so, and consequently the fault is considered with an open end.

3.3.5 Objective Function of the Optimization Algorithms

It is of great importance to establish the objective function that is going to evaluate the fitness of the settings, that is, the capability of a setting to meet the requirements [35]. This objective function can be the sum of several sub-objective functions, and it directly impacts the quality of the result of the optimization algorithms. An indicator must signal whether a setting yields a bad (outside the satisfaction limit), good (within the satisfaction limit), or ideal result, so that the settings can be rewarded or penalized in their evaluation. In the case of relay coordination this indicator is the time, the CTI. The objective function is the sum of the number of violations, the sum of primary and backup times, and the sum of the CTI errors of the coordinated pairs, as shown in Eq. 3.10:

$$\mathrm{fitness} = \frac{NV}{NCP} + \left(\frac{\sum_{a=1}^{NCP} t_{primary_a}}{NCP}\right)\alpha + \left(\frac{\sum_{b=1}^{NCP} t_{backup_b}}{NCP}\right)\beta + \left(\sum_{L=1}^{NCP} E_{CTI_L}\right)\delta \qquad (3.10)$$

where α, β, and δ are factors that increase or decrease the influence of each sub-objective function, for this and any other system. NV is the number of violations of coordination constraints, NCP is the number of coordination pairs, tprimary_a is the primary operation time of relay a, tbackup_b is the backup operation time of relay b, and ECTI_L is the CTI error of the L-th coordination pair. The tbackup_b term minimizes the backup operation times of the relays, the ECTI_L term drives the CTI as close to 0.3 s as possible, the NV term drives the number of violations to zero (avoiding convergence at a local minimum), and tprimary_a, tbackup_b, and NV are all


scaled and divided by NCP so they can be summed together. These terms were included in the objective function because it was observed that using only tprimary_a for coordination in larger meshed systems may converge to a result with higher backup times, higher CTI, and possible violations of constraints. Therefore, tbackup_b, ECTI_L, and NV are included in the objective function to further improve the results while maintaining selectivity. The problem is subject to the following restrictions:

Coordination time interval, to ensure the coordination: CTI ≤ tbackup − tprimary

Dial time, to ensure acceptable relay operation times: dial_min ≤ dial ≤ dial_max
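A minimal Python sketch of Eq. 3.10; the pair data and factor values in the example are illustrative, not taken from the book's test systems:

```python
def fitness(t_primary, t_backup, cti_errors, nv, alpha, beta, delta):
    """Sketch of Eq. 3.10: NV and the primary/backup times are scaled
    by the number of coordination pairs (NCP); CTI errors are summed."""
    ncp = len(t_primary)
    return (nv / ncp
            + alpha * sum(t_primary) / ncp
            + beta * sum(t_backup) / ncp
            + delta * sum(cti_errors))
```

With this form, lower tripping times, smaller CTI errors, and fewer violations all monotonically reduce the fitness, which is what the weighted sum is meant to achieve.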

Fig. 3.13 Parameter tuning for the IEEE 30-bus power system


Pickup current, to ensure security and sensitivity: Ipickup_min ≤ Ipickup ≤ Ipickup_max

Parameter Tuning of the Objective Function

An optimal solution would be constituted by zero miscoordinations and the lowest tripping times; nevertheless, both attributes are opposed and conform a Pareto frontier [36], obtained when an attribute cannot be improved without worsening another. Similar to other problems that involve time reduction, an optimal solution is not as useful and practical as a solution that obtains low enough times in a reasonable simulation time. In this section, the tuning of the parameters (α, β, and δ) of the proposed objective function in Eq. 3.10 is carried out. The parameters α, β, and δ are evaluated over [0.5:0.5:2]; therefore, there is a total of 52 combinations. Each combination is evaluated in 50 simulation runs using the IEEE 14-bus system with 1000 iterations. The parameters α = 2, β = 1, and δ = 2 yield the best fitness value and standard deviation of the fitness value. The Pareto frontiers of the 14-bus and 30-bus systems after 1000 iterations, built from the 50 best results, are shown in Fig. 3.13. The results show that the points are all very close to each other; they may seem dispersed, but the axis units reveal that the variations are only decimal.
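The grid evaluation described above can be sketched as follows; `evaluate` is a placeholder for the (hypothetical) routine that runs the coordination simulations for one factor combination and returns its mean fitness:

```python
from itertools import product

def tune(evaluate, values=(0.5, 1.0, 1.5, 2.0)):
    """Exhaustive grid over the weighting factors of Eq. 3.10,
    keeping the combination with the best (lowest) score."""
    best = None
    for a, b, d in product(values, repeat=3):
        score = evaluate(a, b, d)
        if best is None or score < best[0]:
            best = (score, a, b, d)
    return best
```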

3.3.6 General Model for Non-conventional Time Curves

Let f and g denote, respectively, the tripping time of a main and a backup overcurrent relay for determined short-circuit (a) and load (b) currents, while h is the difference between those two magnitudes:

$$f(x) = \left[\frac{x_1}{\left(\frac{a}{b_1 x_5}\right)^{x_3} - 1} + x_2\right] x_4 \qquad (3.11)$$

$$g(y) = \left[\frac{y_1}{\left(\frac{a}{b_2 y_5}\right)^{y_3} - 1} + y_2\right] y_4 \qquad (3.12)$$

$$h(x, y) = g(y) - f(x) \qquad (3.13)$$


where

$$a \in [a_{min}, a_{max}]; \quad x = (x_1, x_2, x_3, x_4, x_5)^T; \quad y = (y_1, y_2, y_3, y_4, y_5)^T; \quad x, y \in \mathbb{R}^5$$

Equation 3.14 shows a simplification of the proposed model for the optimization of the overcurrent relay coordination problem:

$$\underset{T_{mc},\,f,\,g,\,h}{\mathrm{minimize}} \quad \frac{T_{mc}}{m} + a\,\frac{\sum_{i=1}^{m} f_i(x_i)}{m} + b\,\frac{\sum_{j=1}^{n} g_j(y_j)}{n} + c\,\frac{\sum_{j=1}^{n} h_j(x_j, y_j)}{n} \qquad (3.14)$$

subject to: x4, y4 ≥ 0.5; x5, y5 ≥ 1.4; ∀a: h ≥ 0.3; where m and n are the total relays and the total coordination pairs in the system, while a, b, c are given weighting factors for each one of the objectives. The first two restrictions are given by conceptual limitations of the overcurrent relay [37, 38]; while x4 keeps the TDS greater than a standardized magnitude that emulates a function of the electromechanical relay, x5 aims to ensure that the protection operates for current magnitudes greater than 1.4 times the load current. Both restrictions have upper limits but, since their magnitudes are directly proportional to the tripping time, the algorithm seeks to set them as low as possible; therefore those limits are not listed in this model. The last restriction is the coordination restriction: for a given short-circuit magnitude, the tripping-time difference between the backup and main relays has to be greater than or equal to the coordination time interval. In addition, given that the appearance of negative errors is now an option, the total of miscoordinations (Tmc) is considered.

Model Enhancements

The x1, x2, and x3 variables, corresponding to the A, B, and p parameters, are theoretically capable of accepting any assigned magnitude; nevertheless, this assumption might lead to undesired curve shapes, i.e., curves that do not present the inverse-time characteristic of the overcurrent relays. A more important drawback is the increase of the search-space size. Relay tripping times grow with the magnitude of the two remaining variables (x4 and x5); therefore, discarding larger values that bring no benefit to the problem solution reduces the search space. Consequently, boundaries are placed to delimit the selection of each of the five variables. These sets of boundaries are called selection ranges and will be discussed in the following chapters.
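Under the variable mapping above (x1 = A, x2 = B, x3 = p, x4 = TDS, x5 = Pm), the tripping-time model of Eqs. 3.11–3.13 can be sketched as:

```python
def trip_time(x, i_sc, i_load):
    """Eqs. 3.11-3.12: inverse-time tripping of one relay, where
    x = (A, B, p, TDS, Pm) and the pickup current is Pm * Iload."""
    A, B, p, tds, pm = x
    m = i_sc / (pm * i_load)          # multiple of the pickup current
    return (A / (m ** p - 1.0) + B) * tds

def h(x_main, y_backup, i_sc, load_main, load_backup):
    """Eq. 3.13: backup minus main tripping time."""
    return (trip_time(y_backup, i_sc, load_backup)
            - trip_time(x_main, i_sc, load_main))
```

Note that doubling x4 (TDS) doubles the tripping time, which is why the algorithm pushes x4 and x5 toward their lower limits.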
Moreover, the restriction h ≥ 0.3, responsible for the relays' coordination, is replaced by penalty functions. The CTI is included in Eq. 3.15 to redefine Eq. 3.13:

$$h(x, y) = g(y) - f(x) - CTI \qquad (3.15)$$

Positive magnitudes of h indicate slower operation, and negative ones a lack of coordination.


The first is acceptable while the last is not; in addition, the use of penalty functions may lead to undesired compensations between both kinds of errors. Since lack of coordination should be avoided, those cases are harshly penalized by multiplying the exponential of their backup tripping time and coordination error by a penalty factor (ε), as shown in Eqs. 3.16 and 3.17; penalized versions of positive errors are equal to their original magnitudes:

$$h_j \geq 0 \iff g_{p_j} = g_j \wedge h_{p_j} = h_j; \qquad h_j < 0 \iff g_{p_j} = \varepsilon\, e^{g_j} \qquad (3.16)$$

$$h_{p_j} = \varepsilon\, e^{|h_j|} \qquad (3.17)$$

The objective function is still composed of the operation times of the main relays, but gj and hj are substituted by their penalized versions. The coordination problem has a multi-objective nature: the computations considered as part of the objective function are desired characteristics of the protection system. In essence, each relay has to provide fast fault clearance for its main protection zone and backup protection for the adjacent lines. As stated before, this work seeks to improve coordination for different levels of short-circuit current (see Fig. 3.14); consequently, the optimization process is carried out simultaneously considering short-circuit magnitudes caused by two- and three-phase solid faults located, respectively, at the far- and close-end bus of each main relay. Both magnitudes are considered as coordination boundaries; since the use of non-standardized inverse-time curves might lead to undesirable curve shapes, a third Isc magnitude, equal to the average of both previous values, is also used. As a consequence, the coordination is performed considering three levels of short-circuit current, computing times and evaluating Eq. 3.14 for each one of them. The protection system has to ensure coordination for the maximum Isc level; therefore, the objective function results for the other levels are considered with smaller weights. The constants i, j, and s serve as weighting constants for the different short-circuit magnitudes. The complete fitness function is then defined as the sum of the weighted OF results for the minimum (OFm), intermediate (OFi), and maximum (OFM) short-circuit levels:

Fig. 3.14 Proposed DOCR coordination for three fault currents (tripping time T (s) versus current I (A), backup relay and primary device curves)


$$OF = i \cdot OF_m + j \cdot OF_i + s \cdot OF_M \qquad (3.18)$$

The following sections describe the GA, IWO, and SQP adaptations and implementations for the directional overcurrent relay coordination problem considering five adjustable settings. Previous calculations have to be performed and given as inputs to these methods. The data of the power systems used for testing the optimization algorithms are presented in the Appendix.

3.4 Coordination with Genetic Algorithms

The population of the genetic algorithm is composed of chromosomes; each Cx contains the adjustable settings of all the overcurrent relays (Tr). Supposing three AS and a five-relay system, the size of each Cx would be [15 × 1]. The population size Ps is given by the total considered chromosomes (TCx); in the previous example, a population size equal to 20 represents a population matrix (PCx) of [15 × 20], containing a total of 300 genes. The arrangement of the population matrix is presented in Eq. 3.19, where each column represents a chromosome and each row a relay setting.

$$P_{Cx} = \begin{pmatrix} TDS_{11} & TDS_{12} & \cdots & TDS_{1T_{Cx}} \\ \vdots & \vdots & \ddots & \vdots \\ TDS_{T_r 1} & TDS_{T_r 2} & \cdots & TDS_{T_r T_{Cx}} \\ Pm_{11} & Pm_{12} & \cdots & Pm_{1T_{Cx}} \\ \vdots & \vdots & \ddots & \vdots \\ Pm_{T_r 1} & Pm_{T_r 2} & \cdots & Pm_{T_r T_{Cx}} \\ A_{11} & A_{12} & \cdots & A_{1T_{Cx}} \\ \vdots & \vdots & \ddots & \vdots \\ A_{T_r 1} & A_{T_r 2} & \cdots & A_{T_r T_{Cx}} \\ B_{11} & B_{12} & \cdots & B_{1T_{Cx}} \\ \vdots & \vdots & \ddots & \vdots \\ B_{T_r 1} & B_{T_r 2} & \cdots & B_{T_r T_{Cx}} \\ p_{11} & p_{12} & \cdots & p_{1T_{Cx}} \\ \vdots & \vdots & \ddots & \vdots \\ p_{T_r 1} & p_{T_r 2} & \cdots & p_{T_r T_{Cx}} \end{pmatrix} \qquad (3.19)$$

The initial population is created by generating uniformly distributed random numbers; each number must be located within the boundaries of its setting. A coordination pair is formed by a main relay and its backup; relays may be part of different coordination pairs, either as main or backup protection.
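Population initialization can be sketched as below; the boundaries are those of Table 3.3, and the flat, setting-major gene layout mirrors Eq. 3.19:

```python
import random

# Boundaries from Table 3.3: TDS, Pm, A, B, p
BOUNDS = [(0.50, 5.0), (1.40, 2.0), (0.01, 30.0), (0.0, 0.50), (0.01, 2.0)]

def init_population(n_relays, pop_size, bounds=BOUNDS):
    """Each chromosome stacks, setting by setting, the values of every
    relay, drawn uniformly inside the corresponding boundary."""
    return [[random.uniform(lo, hi)
             for lo, hi in bounds
             for _ in range(n_relays)]
            for _ in range(pop_size)]
```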


The tripping times of all main and backup relays are computed using Eqs. 3.11 and 3.12; then the coordination errors are obtained with Eq. 3.13, the negative errors are penalized through Eqs. 3.16 and 3.17, and thereupon all chromosomes are evaluated with the objective function presented in Eq. 3.14. The selection process is carried out with either the stochastic universal sampling or the roulette-wheel selection method. For a population size of 100 chromosomes, if the roulette wheel is applied, the probability of selection of the Cx positioned at different ranks is shown in the first part of Table 3.2. On the other hand, considering a population size and a total of selected elements equal to 100, the total of repeated selections of the first twelve ranked chromosomes is also shown; the first twelve ranked elements of a 100-Cx population make up 60% of the individuals selected by the universal sampling method. In appearance the roulette-wheel selection would be more diversified; aiming to test this assumption, a simple experiment is conducted. The experiment consists of simulating the roulette-wheel method 100,000 times to determine the population percentage occupied by the first twelve ranked elements. The results are illustrated as a box-whiskers plot in Fig. 3.15. The mean result is 59.83% while the median is 60%, almost equal to the universal sampling result. Both methods offer different advantages: while greater diversification is a strength of the roulette wheel, universal sampling requires less computational effort by defining the selected ranks beforehand. The next step in GA is the conformation of the next generation via the genetic operators. Almost all of the new generation is obtained through the crossover operator. A single-point crossover methodology is implemented in this work; each group of settings is divided into two blocks, and the division point is randomly set between 25 and 75% of the total system relays.
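The book does not give the selection weights explicitly, but the probabilities in Table 3.2 match a rank-proportional weighting of 1/r (19.27% ≈ 1/H100 for rank 1). Under that assumption, the experiment behind Fig. 3.15 can be reproduced as:

```python
import random

def roulette_pick(n=100):
    """Rank-based roulette wheel: rank r is drawn with probability
    (1/r) / H_n, matching the percentages of Table 3.2."""
    weights = [1.0 / r for r in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]

def top12_share(trials=10_000):
    """Fraction of picks landing in the first twelve ranks
    (the book reports a mean of 59.83% over 100,000 runs)."""
    return sum(roulette_pick() <= 12 for _ in range(trials)) / trials
```

The expected share is H12/H100 ≈ 0.598, consistent with the reported 60% median.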
All the corresponding relay settings are interchanged to prevent losing information. An example of the crossover methodology is depicted in Fig. 3.16. The reproduction is followed by the mutation operator. The objective of this step is to diversify the population with the introduction of random setting changes. In this work a small part of the population, commonly 5%, is mutated; nevertheless, a mechanism that monitors the slope of the convergence increases the mutation rate up to 40% if the slope remains horizontal after several iterations. Elite mutations, consisting of exclusive modifications to the fittest elements, are also performed. The last part of
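The crossover step above can be sketched as follows; applying the same cut point inside every group of settings keeps all settings of a relay travelling together:

```python
import random

def crossover(parent_a, parent_b, n_relays, n_settings=5):
    """Single-point crossover at relay granularity: a cut between 25%
    and 75% of the relays is applied inside every setting block
    (TDS, Pm, A, B, p), so no relay's settings are split apart."""
    cut = random.randint(int(0.25 * n_relays), int(0.75 * n_relays))
    child_a, child_b = list(parent_a), list(parent_b)
    for s in range(n_settings):
        start, stop = s * n_relays + cut, (s + 1) * n_relays
        child_a[start:stop] = parent_b[start:stop]
        child_b[start:stop] = parent_a[start:stop]
    return child_a, child_b
```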

Table 3.2 Selection probability and repeated selections of some ranked chromosomes

Roulette-wheel selection probabilities:

| Ranking | 1 | 2 | 3 | 4 | 5 | 10 | 20 | 30 | 40 | 50 | 75 | 100 |
| Probability (%) | 19.27 | 9.63 | 6.42 | 4.81 | 3.85 | 1.92 | 0.96 | 0.64 | 0.48 | 0.38 | 0.25 | 0.19 |

Universal sampling repeated selections:

| Ranking | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| Selections | 20 | 9 | 6 | 5 | 4 | 3 | 3 | 2 | 2 | 2 | 2 | 2 |


Fig. 3.15 Population percentage conformed by the first twelve elements

Fig. 3.16 The crossover operator interchanges relay settings of the two selected parents

the population is formed with elite parents; the fittest elements survive through the generations with the objective of ensuring that the result will not get worse over the simulation.


Around 90% of the new generation is composed of crossover offspring; the remaining 10% is divided into mutation, elite mutation, and elite parents. Diverse percentages and mutation rates have been tested; the results are presented in the last section. The described process is repeated until a stopping criterion is met, the most common criterion being reaching the total iterations (Ti).

3.5 Coordination with Invasive-Weed Optimization

The initial steps of the invasive-weed optimization method are equivalent to those of the genetic algorithm. The individuals are called weeds; the weed matrix is randomly generated, the tripping times and coordination errors are computed using the same equations, and the considered objective function is Eq. 3.14 for the different short-circuit levels. The distinctive stages of IWO begin after the evaluation step: the weeds are sorted according to their fitness (F) and each one is allowed to leave seeds in accordance with its ranking, as illustrated in Fig. 3.17. A seed is a clone of the actual weed that will be subjected to mutation operators. The total seeds (TS) assigned to each weed are computed using Eq. 3.20:

$$TS_i = Sr_m + (F_w - F_i)\,\frac{Sr_M - Sr_m}{F_w - F_b} \qquad (3.20)$$

where Srm and SrM are the minimum and maximum quantity of possible seeds, Fb and Fw are the fitness values of the best and worst elements, and Fi is the fitness of the actual individual. There may be scenarios where the fitness of the worst element is infinite, so a predefined magnitude is considered as Fw.
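Eq. 3.20 maps fitness linearly to a seed count; a minimal sketch (for minimization, the best weed receives SrM seeds and the worst Srm):

```python
def total_seeds(f_i, f_best, f_worst, sr_min, sr_max):
    """Eq. 3.20: seeds assigned to a weed with fitness f_i."""
    return sr_min + (f_worst - f_i) * (sr_max - sr_min) / (f_worst - f_best)
```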

Fig. 3.17 Weeds are assigned with a number of seeds in accordance with their fitness


Spreading, dispersing, and rolling weed operators are used to explore and exploit the search space in order to find better results. Each assigned seed is subjected to one of the three operators, chosen at random; the implementation of all three is described in the following paragraphs. The objective of the spreading operator is to create a new plant based on the current seed; the implementation consists of mutating up to 50% of the seed content. The second operator's objective is to disperse the seed in the surrounding neighborhood of the weed; in this work the dispersion methodology has two stages: the first subjects the seed to small perturbations by multiplying every setting by a maximum variation of ±1%, and the second mutates up to 20% of the seed elements. The last operator creates copies of the current seed and then combines the first two mechanisms to disperse and spread the actual seed; the process is repeated until a better solution is found or the seed copies are exhausted. The mutation percentage, the perturbation magnitude, and the settings to mutate are randomly selected. Examples of the three operators can be seen in Fig. 3.18. The fitness values of the current weeds and mutated seeds are sorted, and the n fittest elements are selected to conform the new population. The method is initialized from
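The spreading and dispersing operators can be sketched as follows; the mutation fractions and the ±1% perturbation follow the description above, while redrawing mutated settings inside their boundaries is an assumption:

```python
import random

def spread(seed, bounds, max_fraction=0.5):
    """Spreading: redraw up to 50% of the settings inside their ranges."""
    new = list(seed)
    k = random.randint(1, max(1, int(max_fraction * len(seed))))
    for idx in random.sample(range(len(seed)), k):
        lo, hi = bounds[idx]
        new[idx] = random.uniform(lo, hi)
    return new

def disperse(seed, bounds):
    """Dispersing: perturb every setting by at most +/-1%, then mutate
    up to 20% of the settings as in spreading."""
    perturbed = [v * random.uniform(0.99, 1.01) for v in seed]
    return spread(perturbed, bounds, max_fraction=0.2)
```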

Fig. 3.18 Spreading and dispersing operators mutate respectively up to 50 and 20% of seed elements. Rolling-down combines both and selects an improved mutation


the seed-assignment step, and this procedure is iteratively repeated until the only stopping criterion considered for this implementation, reaching the total iterations Ti, is met.

3.5.1 Sequential Quadratic Programming

Sequential Quadratic Programming (SQP) [39–41], proposed by Wilson [42] in 1963, can be seen as a general form of Newton's method. SQP has evolved to become one of the most effective and successful methods for the numerical solution of nonlinear constrained optimization problems; it generates sequential steps from the initial point by minimizing quadratic subproblems. The simplest form of the SQP algorithm replaces the objective function by a quadratic approximation (Eq. 3.21) subject to linearized constraints:

$$q_n(d) = \nabla f(x_n)^T d + \frac{1}{2}\, d^T \nabla^2_{xx} L(x_n, \lambda_n)\, d \qquad (3.21)$$

where d is the difference between two successive points. The Hessian matrix of the Lagrangian function is denoted by ∇²xx L(xn, λn); an approximation of this matrix is computed on each iteration using a quasi-Newton method. The quadratic approximation is built on the Lagrange function (Eq. 3.22), which is the base of the problem formulation:

$$L(x_n, \lambda_n) = f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x) \qquad (3.22)$$

Given a nonlinear programming problem:

$$\mathrm{minimize}\; f(x) \quad \mathrm{subject\ to}\; b(x) \leq 0,\; c(x) = 0 \qquad (3.23)$$

Simplifying the general nonlinear problem, a quadratic subproblem is obtained by linearizing the nonlinear restrictions; the subproblem is defined as follows:

$$\begin{aligned} \mathrm{minimize}\quad & \nabla f(x_n)^T d + \tfrac{1}{2}\, d^T H_n d \\ \mathrm{subject\ to}\quad & \nabla b(x_n)^T d + b(x_n) \leq 0 \\ & \nabla c(x_n)^T d + c(x_n) = 0 \end{aligned} \qquad (3.24)$$

where Hn is the BFGS (Broyden–Fletcher–Goldfarb–Shanno) approximation of the Hessian matrix of the Lagrangian function, required by the quadratic program and updated on each iteration. The approximation Hn is computed by:


$$H_{n+1} = H_n + \frac{y_n y_n^T}{y_n^T s_n} - \frac{H_n s_n s_n^T H_n}{s_n^T H_n s_n} \qquad (3.25)$$

$$y_n = \nabla_x L(x_{n+1}, \lambda_n) - \nabla_x L(x_n, \lambda_n) \qquad (3.26)$$

$$y_n = \left[\nabla f(x_{n+1}) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x_{n+1})\right] - \left[\nabla f(x_n) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x_n)\right] \qquad (3.27)$$

$$s_n = x_{n+1} - x_n \qquad (3.28)$$

The Hessian approximation is recommended to be kept positive definite by keeping y_n^T s_n positive on each update and by initializing the method with a positive definite Hn. If this condition does not hold, y_n is modified until the requirement is achieved.
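A sketch of the update in Eqs. 3.25–3.28; skipping the update when yᵀs ≤ 0 is a simplification of the safeguard (the text instead modifies y):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Eq. 3.25: BFGS update of the Lagrangian Hessian approximation,
    with s = x_{n+1} - x_n (Eq. 3.28) and y from Eq. 3.26."""
    ys = float(y @ s)
    if ys <= 0:                      # keep H positive definite
        return H
    Hs = H @ s
    return H + np.outer(y, y) / ys - np.outer(Hs, Hs) / float(s @ Hs)
```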

3.5.2 Implementation

Due to the search-space dimensions and the required computation of Jacobian and Hessian matrices, the implementation of nonlinear methods may not be the best option to solve the coordination problem. However, these methods are useful when good initial approximations are provided. Sequential quadratic programming methods are the state of the art in nonlinear programming [39]; this method was implemented through the adaptation of the fmincon function [43] belonging to the Optimization Toolbox [44, 45] of MATLAB [46]. The boundaries that define the feasible area help SQP make informed decisions on the search directions and step lengths; therefore SQP can often solve constrained nonlinear problems faster than unconstrained ones. This fact supports the decision to set boundaries on all variables. The SQP is applied at different stages of the GA and IWO simulations. The quasi-Newton approximation of the Hessian of the Lagrangian function is calculated using the BFGS method (see Eqs. 3.25–3.28); then, on each iteration, a problem of the form of Eq. 3.24 is solved. The first stage of this process consists of evaluating a feasible point (commonly required to be user-provided), while the second creates a sequence of feasible points until the stopping criteria are met. The SQP implementation combines the constraint and objective functions into a merit function when a feasible solution for the next step cannot be found. For more information about the MATLAB implementation of SQP, the Optimization Toolbox User's Guide [45] may be reviewed.
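The book uses MATLAB's fmincon; an open-source analogue of the same refinement step, using SciPy's SLSQP (a sequential quadratic method), might look like the sketch below. The objective and constraint callables are placeholders for the actual coordination functions:

```python
from scipy.optimize import minimize

def sqp_refine(objective, x0, bounds, ineq_constraints=()):
    """Refine a metaheuristic's best individual with an SQP-type
    solver; each constraint function must return >= 0 when feasible."""
    cons = [{"type": "ineq", "fun": c} for c in ineq_constraints]
    res = minimize(objective, x0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x, res.fun
```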

3.6 Results

After experimental tuning, the magnitudes used for each parameter of both algorithms are listed in Table 3.3. The table is divided into three groups of parameters: the first comprises columns one to three and lists settings used by a specific algorithm. The probability of occurrence of the IWO operators remains unaltered during the simulation; nevertheless, the GA implementation is subject to alterations in accordance with the slope decrease: if the best fitness is not improved after Ti/20 iterations, the probabilities P(C), P(M), and P(E) are respectively modified to 0.55, 0.45, and 0.05 until the algorithm escapes the local minimum. The second and third groups of parameters indicate settings shared by both methods. While a, b, c, i, j, and s are fixed, both algorithms are allowed to define a continuous magnitude inside the defined boundaries for TDS, Pm, A, B, and p. An adequate selection of those adjustable settings may lead to a solution of the relay coordination problem. Five coordination approaches are compared in this section. The first case is the proposed method: the five parameters that conform the overcurrent relay curve are considered as adjustable settings, and the objective function aims to achieve coordination and time reduction for three different levels of short-circuit current. The first part of this proposal is realized by allowing the relays to select continuous curve settings between predefined boundaries, while the second consists in weighting the objective function as indicated in Eq. 3.10. The remaining four cases are selected since they correspond to common practices conducted by researchers; they seek to achieve coordination while two (in cases three and five) or three (in cases two and four) inverse-time curve parameters are contemplated as adjustable. The IEEE very inverse-time curve is predefined for all relays in cases three and five, while in cases two and four each relay is allowed to choose one of the eight curve types presented in [18, 19]. The selection of a curve type involves the unaltered use of its A, B, and p parameters. Another difference is related to the objective-function weighting factors: cases two and three seek to achieve coordination for minimum and maximum fault currents

Table 3.3 Selection ranges of each adjustable setting (boundaries)

Algorithm-specific parameters:

| Parameter | GA | IWO |
| Ti | 2000 | 700 |
| Ps | 200 | 100 |
| P(C) | 0.85 | – |
| P(M) | 0.10 | – |
| P(E) | 0.05 | – |
| P(S) | – | 0.25 |
| P(D) | – | 0.05 |
| P(R) | – | 0.70 |

Shared weighting factors:

| Parameter | Setting |
| a | 0.60 |
| b | 0.30 |
| c | 0.10 |
| i | 0.25 |
| j | 0.25 |
| s | 0.50 |

Adjustable-setting boundaries:

| Parameter | Minimum | Maximum |
| TDS | 0.50 | 5 |
| Pm | 1.40 | 2 |
| A | 0.01 | 30 |
| B | 0 | 0.50 |
| p | 0.01 | 2 |


while cases four and five pursue the same objective considering just a maximum level of short-circuit current. The parameters of the five cases are summarized in Table 3.4. The overcurrent relay coordination results for the 14- and 30-bus systems obtained by the genetic algorithm are presented. The fitness, miscoordination percentage (mc%), average main and backup tripping times, and coordination errors are the results to be compared. Operation times and coordination errors for different short-circuit magnitudes are shown individually, while fitness and miscoordination percentage represent the global result. Negative magnitudes in those columns indicate a case that obtained a better result. In addition, Table 3.5 and Fig. 3.19 illustrate the same results in a set of bar plots for better appreciation. The total coordination pairs of each test system are respectively 50 and 124; after the sensitivity filter the totals are reduced to 47 and 118; since three coordination points are considered for each pair, these values are multiplied by three to obtain the total coordination points of each system. GA achieves full coordination for the small power systems; nevertheless, the miscoordination percentage increases as the systems grow. This behavior is shared by the 14- and 30-bus cases and is an expected result considering the complexity increase. The results of the two systems are fully dominated by the proposed case; improvements from 7% to more than 90% are achieved. Since the miscoordination percentage is improved, or at least kept equal, in comparison with the standardized cases, curve compatibility is not compromised by using non-standardized adjustments. The situation is different for more complex systems, where some results obtained by the standard cases are better than the proposed case; even considering these disadvantages, the proposed case fitness is better than the others, indicating that its overall result is the best among all scenarios.

Results presented in Table 3.6 and illustrated in Fig. 3.20 are obtained by the implementation of the invasive-weed optimization method. They are arranged in the same order and contain the same information as the previously analyzed GA results. An important fact to highlight is that in these simulations all results are fully dominated

Table 3.4 Selection ranges of each adjustable setting
Results presented in Table 3.6 and illustrated in Fig. 3.20 are obtained by the implementation of the invasive-weed optimization method. They are arranged in the same order and contain the same information as previously analyzed GA results. An important fact to highlight is that in these simulations all results are fully dominated Table 3.4 Selection ranges of each adjustable setting

Parameter

1

TDS Pm A

[0.50–5] [1.4–2] [0.01– 30] [0– 0.50] [0.01– 2] 0.25 0.25 0.50

B p i j s

2

3

4

5

8 curves

19.60

8 curves

19.60

0.25 0.00 0.75

0.49

0.49

2

2

0.25 0.00 0.75

0.00 0.00 1.00

0.00 0.00 1.00


Table 3.5 OCR coordination results obtained by GA

14-bus system:

| | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Impr. vs 2 (%) | vs 3 (%) | vs 4 (%) | vs 5 (%) |
| f | 0.14 | 0.18 | 3.67 | 0.25 | 3.72 | 21.84 | 96.24 | 44.69 | 96.30 |
| mc % | 2.13 | 2.13 | 11.35 | 6.38 | 13.48 | 0.00 | 81.25 | 66.67 | 84.21 |
| Im SC tm | 0.28 | 0.37 | 0.68 | 0.38 | 0.66 | 24.92 | 59.07 | 26.21 | 57.78 |
| Im SC ECTI | 0.77 | 0.83 | 3.70 | 2.18 | 2.81 | 7.33 | 79.15 | 64.60 | 72.60 |
| Im SC tbu | 1.31 | 1.48 | 4.48 | 2.75 | 3.61 | 11.48 | 70.69 | 52.35 | 63.63 |
| Iin SC tm | 0.15 | 0.24 | 0.47 | 0.21 | 0.45 | 36.74 | 68.11 | 27.21 | 66.87 |
| Iin SC ECTI | 0.13 | 0.32 | 0.70 | 0.36 | 0.69 | 28.98 | 67.61 | 37.36 | 67.13 |
| Iin SC tbu | 0.66 | 0.86 | 1.41 | 0.86 | 1.40 | 23.39 | 53.41 | 23.68 | 53.29 |
| IM SC tm | 0.11 | 0.19 | 0.43 | 0.16 | 0.41 | 42.20 | 73.78 | 28.71 | 72.73 |
| IM SC ECTI | 0.10 | 0.17 | 0.35 | 0.15 | 0.31 | 39.34 | 70.55 | 33.05 | 67.40 |
| IM SC tbu | 0.50 | 0.67 | 1.03 | 0.62 | 1.01 | 25.45 | 51.71 | 19.21 | 50.59 |

30-bus system:

| | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Impr. vs 2 (%) | vs 3 (%) | vs 4 (%) | vs 5 (%) |
| f | 0.22 | 0.25 | 1.43 | 0.58 | 1.60 | 10.98 | 84.32 | 61.70 | 86.05 |
| mc % | 5.37 | 5.93 | 20.34 | 8.47 | 15.25 | 9.52 | 73.61 | 36.67 | 64.81 |
| Im SC tm | 0.43 | 0.50 | 0.87 | 0.59 | 0.88 | 13.43 | 50.48 | 27.03 | 50.88 |
| Im SC ECTI | 0.78 | 0.81 | 1.77 | 1.45 | 1.59 | 3.49 | 55.74 | 45.93 | 50.61 |
| Im SC tbu | 1.44 | 1.55 | 2.58 | 2.19 | 2.46 | 6.80 | 44.00 | 33.91 | 41.27 |
| Iin SC tm | 0.27 | 0.33 | 0.62 | 0.35 | 0.69 | 17.85 | 55.98 | 21.41 | 60.37 |
| Iin SC ECTI | 0.33 | 0.41 | 1.00 | 0.60 | 0.61 | 20.43 | 67.21 | 45.56 | 46.40 |
| Iin SC tbu | 0.87 | 1.01 | 1.72 | 1.22 | 1.40 | 14.17 | 49.63 | 28.65 | 38.00 |
| IM SC tm | 0.22 | 0.27 | 0.57 | 0.27 | 0.65 | 18.42 | 60.99 | 18.44 | 65.52 |
| IM SC ECTI | 0.17 | 0.24 | 0.41 | 0.29 | 0.26 | 30.24 | 58.34 | 41.29 | 34.81 |
| IM SC tbu | 0.67 | 0.80 | 1.10 | 0.86 | 1.02 | 15.97 | 39.36 | 22.18 | 34.60 |

Fig. 3.19 Tripping times, errors, and miscoordination percentage using GA

54

3 Non-conventional Overcurrent Relays Coordination

Table 3.6 OCR coordination results obtained by IWO

Im SC

Iin SC

IM SC

Im SC

Iin SC

IM SC

f mc % tm ECTI tbu tm ECTI tbu tm ECTI tbu f mc % tm ECTI tbu tm ECTI tbu tm ECTI tbu

0.10 0

0.13 0

0.30 4.26

0.14 0

0.29 2.13

24.62 /

68.61 100

30.41 /

67.39 100

14-bus system

0.24 0.42 0.93 0.15 0.17 0.61 0.12 0.07 0.49 0.13 0

0.34 0.59 1.21 0.21 0.25 0.75 0.18 0.12 0.60 0.20 0

0.69 1.36 2.30 0.52 0.50 1.29 0.49 0.26 1.02 0.50 5.93

0.35 0.86 1.48 0.21 0.30 0.80 0.17 0.14 0.61 0.18 0.28

0.71 1.55 2.47 0.52 0.53 1.32 0.48 0.29 1.04 0.49 3.67

29.42 29.41 22.86 30.40 30.40 18.60 30.94 45.42 18.31 32.92 /

65.44 69.11 59.46 71.57 65.56 52.46 75.21 74.39 52.45 73.28 100

32.31 51.22 36.98 29.21 42.45 23.15 28.32 53.18 19.99 26.86 100

66.47 72.90 62.16 71.55 67.90 53.70 74.97 76.49 53.31 72.64 100

30-bus system

0.37 0.48 1.11 0.25 0.26 0.78 0.21 0.16 0.64

0.57 0.86 1.68 0.37 0.48 1.13 0.30 0.32 0.90

0.95 1.11 2.11 0.76 0.48 1.34 0.72 0.26 1.08

0.49 0.88 1.62 0.29 0.46 1.03 0.22 0.30 0.81

1.11 1.38 2.49 0.87 0.54 1.53 0.82 0.29 1.24

35.09 43.73 33.91 31.81 46.44 30.95 29.93 50.78 29.35

61.25 56.57 47.22 67.13 46.40 41.86 71.08 40.64 41.26

24.54 45.25 31.45 12.77 44.38 24.60 5.54 47.31 21.11

66.68 65.00 55.40 71.16 52.71 49.24 74.47 46.94 48.43

Fig. 3.20 Tripping times, errors, and miscoordination percentage using IWO

3.6 Results

55

by the proposed case; improvements from 1 to 96% are achieved, with an average improvement over all cases, measurements, and systems of 51.60%, 7% higher than the GA average. The total iterations are increased to 1000 and 1500 for the 57- and 118-bus systems, respectively. The proposed methodology achieves coordination for all 618 pairs in the 57-bus power system with better tripping times for all short-circuit levels; the same performance is also obtained for the smaller systems. The standardized cases also obtain better results in comparison with the genetic algorithm. Furthermore, the 118-bus system is successfully coordinated with just 0.44% of miscoordinations. This result is outstanding considering the magnitude, interconnection, and complexity of this power system, where 2718 pairs are being coordinated and the relays function as backups for up to seven main relays. The tripping times and coordination errors are also improved with respect to the base case. The average convergence of both algorithms after ten experimental repetitions, solving the coordination of the 57-bus system, is illustrated in Fig. 3.21; 700 IWO iterations obtain results similar to those of 2000 GA iterations. The total GA iterations were increased during some experiments in order to perform a better comparison; nevertheless, the algorithm did not improve on the IWO solutions. Table 3.7 compares the results obtained by both algorithms; it can be noted that IWO improves all but one measurement, reducing the total number of miscoordinations as well as the tripping times and coordination errors. The last results of this section correspond to the sequential quadratic programming implementation. Since nonlinear optimization methods require a good initial guess in order to converge, SQP has been implemented so that it initializes after eight IWO iterations. The simulation results are shown in Table 3.8; the improvement percentage of IWO + SQP in comparison with IWO is also reported.
Certain tripping times are worsened in some scenarios, but the fitness, and consequently most measurements, are slightly improved in all systems. The average fitness convergence after ten simulations of each case is illustrated in Fig. 3.22.
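The IWO + SQP idea, seeding a sequential quadratic programming refiner with a metaheuristic's rough early solution, can be sketched with SciPy's SLSQP implementation. The objective and constraint below are assumed toy stand-ins for illustration only, not the coordination fitness of the book:

```python
from scipy.optimize import minimize

# Toy stand-in objective: minimize x0^2 + x1^2 subject to x0 + x1 >= 1.
# The constrained minimum lies on the constraint boundary at (0.5, 0.5).
def objective(x):
    return x[0] ** 2 + x[1] ** 2

rough_guess = [0.9, 0.4]   # pretend this came from a few IWO iterations
result = minimize(objective, rough_guess, method="SLSQP",
                  bounds=[(0.0, 2.0), (0.0, 2.0)],
                  constraints=[{"type": "ineq",
                                "fun": lambda x: x[0] + x[1] - 1.0}])
print(result.x)   # refined solution near [0.5, 0.5]
```

The gradient-based refiner polishes the rough guess quickly, but only because it starts inside the basin that the metaheuristic already located; started from an arbitrary point, it could stall in a poor local solution.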

Fig. 3.21 GA and IWO fitness convergence for OCR coordination


Table 3.7 IWO and GA results comparison and IWO percentage of improvement

                          Im SC                Iin SC               IM SC
           f      mc%     tm    ECTI  tbu      tm    ECTI  tbu     tm     ECTI   tbu
14-bus
  IWO      0.10   0       0.24  0.42  0.93     0.15  0.17  0.61    0.33   0.07   0.49
  GA       0.14   2.13    0.28  0.77  1.31     0.15  0.23  0.66    0.11   0.10   0.50
  %        31.00  100     13.66 45.61 28.92    0.25  24.97 6.55    -7.44  33.92  2.32
30-bus
  IWO      0.13   0       0.37  0.48  1.11     0.25  0.26  0.78    0.21   0.16   0.64
  GA       0.22   5.37    0.43  0.78  1.44     0.27  0.33  0.87    0.22   0.17   0.67
  %        40.07  100     14.60 38.65 23.10    8.40  21.73 10.46   7.05   8.34   4.79

Table 3.8 IWO + SQP and IWO results compared

                           Im SC                Iin SC                IM SC
            f      mc%     tm    ECTI  tbu      tm     ECTI  tbu      tm     ECTI   tbu
14-bus
  IWO+SQP   0.09   0       0.22  0.42  0.92     0.13   0.17  0.59     0.10   0.06   0.46
  IWO       0.10   0       0.24  0.42  0.93     0.15   0.17  0.61     0.12   0.07   0.49
  %         5.87   /       6.89  0.21  1.69     12.39  1.66  3.19     15.65  7.57   4.54
30-bus
  IWO+SQP   0.11   0       0.29  0.52  1.07     0.16   0.23  0.67     0.12   0.12   0.52
  IWO       0.13   0       0.37  0.48  1.11     0.25   0.26  0.78     0.21   0.16   0.64
  %         19.87  /       22.44 -8.48 3.71     34.94  8.46  13.65    41.02  25.12  18.92

3.7 Summary

The consideration of unstandardized inverse-time curves improves overcurrent relay performance and consequently the relay coordination results. Coordinating for multiple short-circuit currents improves protection reliability and avoids curve crossings for currents lower than the maximum. As a result, the use of non-standardized curves reduces the tripping times in the left part of the DOCR inverse-time curves. According to the results, three short-circuit magnitudes are enough to maintain a compatible inversion grade and guarantee coordination fulfillment for a certain


Fig. 3.22 IWO and IWO + SQP average fitness convergence for OCR coordination

region of the curve. The developed methods should be tested in large, highly interconnected, and complex power systems in order to demonstrate their robustness and adaptability. The proposed method obtains better results in comparison with conventional approaches using standardized inverse-time curves. The algorithm was tested in five different power systems, obtaining important improvements in all of them.

References

1. M. Affenzeller, S. Winkler, S. Wagner, A. Beham, Genetic Algorithms and Genetic Programming: Modern Concepts and Practical Applications, 1st edn. (Chapman & Hall/CRC, London, UK, 2009)
2. J.H. Holland, Adaptation in Natural and Artificial Systems, 2nd edn. (MIT Press, Cambridge, USA, 1992)
3. M. Mitchell, An Introduction to Genetic Algorithms, 5th edn. (MIT Press, Cambridge, USA, 1999)
4. J. Bronowski, The Ascent of Man (British Broadcasting Corporation, London, UK, 1973)
5. C. Darwin, The Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, 6th edn. (John Murray, London, UK, 1859)
6. A.R. Wallace, My Life; A Record of Events and Opinions, 1st edn. (Chapman & Hall, London, UK, 1905)
7. M. Gen, R. Cheng, L. Lin, Network Models and Optimization: Multiobjective Genetic Algorithm Approach (Decision Engineering), 1st edn. (Springer, New York, 2008)
8. D.E. Goldberg, K. Deb, A comparative analysis of selection schemes used in genetic algorithms, in Foundations of Genetic Algorithms (Morgan Kaufmann, 1991), pp. 69–93
9. J.E. Baker, Reducing bias and inefficiency in the selection algorithm, in Proceedings of the Second International Conference on Genetic Algorithms and Their Application (Erlbaum Associates Inc., New Jersey, USA, 1987), pp. 14–21
10. A.R. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 1(4), 355–366 (2006)


11. H.S. Rad, C. Lucas, A recommender system based on invasive weed optimization algorithm, in Proceedings of the IEEE Congress on Evolutionary Computation, Sept 2007, pp. 4297–4304
12. H. Josiński, D. Kostrzewa, A. Michalczuk, A. Świtoński, The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems. Sci. World J. 2014(1), 1–14 (2014)
13. H.G. Baker, G.L. Stebbins, International Union of Biological Sciences, The Genetics of Colonizing Species: Proceedings (Academic Press, New York, USA, 1965)
14. Y. Zhou, Q. Luo, H. Chen, A novel differential evolution invasive weed optimization algorithm for solving nonlinear equations systems. J. Appl. Math. (4), 1–18 (2013)
15. IEEE Std. C37.113-1999, Guide for protective relay applications to transmission lines (1999)
16. A.J. Urdaneta, R. Nadira, L.G. Perez, Optimal coordination of directional overcurrent relays in interconnected power systems. IEEE Trans. Power Del. 3(3), 903–911 (1988)
17. B. Chattopadhyay, M.S. Sachdev, T.S. Sidhu, An online relay coordination algorithm for adaptive protection using linear programming technique. IEEE Trans. Power Del. 11(1), 165–173 (1996)
18. C. So, K.K. Li, K.T. Lai, K.Y. Fung, Application of genetic algorithm for overcurrent relay coordination, in Sixth International Conference on Developments in Power System Protection (1997), pp. 66–69
19. F. Razavi, H.A. Abyaneh, M. Al-Dabbagh, R. Mohammadi, H. Torkaman, A new comprehensive genetic algorithm method for optimal overcurrent relays coordination. Electr. Power Syst. Res. 78, 713–720 (2008)
20. P.P. Bedekar, S.R. Bhide, Optimum coordination of overcurrent relay timing using continuous genetic algorithm. Expert Syst. Appl. 38, 11286–11292 (2011)
21. P.P. Bedekar, S.R. Bhide, Optimum coordination of directional overcurrent relays using the hybrid GA-NLP approach. IEEE Trans. Power Del. 26(1), 109–119 (2011)
22. M.M. Mansour, S.F. Mekhamer, N.E.-S. El-Kharbawe, A modified particle swarm optimizer for the coordination of directional overcurrent relays. IEEE Trans. Power Del. 22(3), 1400–1410 (2007)
23. A. Liu, M.-T. Yang, A new hybrid Nelder-Mead particle swarm optimization for coordination optimization of directional overcurrent relays. Math. Prob. Eng. 2012, Article 456047 (2012)
24. M.R. Asadi, S.M. Kouhsari, Optimal overcurrent relays coordination using particle-swarm-optimization algorithm, in IEEE Power Systems Conference and Exposition (Seattle, WA, 15–18 Mar 2009), pp. 1–7
25. A.S. Noghabi, J. Sadeh, H.R. Mashhadi, Considering different network topologies in optimal overcurrent relay coordination using a hybrid GA. IEEE Trans. Power Del. 24(4), 1857–1863 (2009)
26. A.M. Girjandi, M. Pourfallah, Optimal coordination of overcurrent relays by mixed genetic and particle swarm optimization algorithm and comparison of both, in International Conference on Signal, Image Processing and Applications, vol. 21 (IACSIT Press, Singapore, 2011)
27. P.P. Bedekar, S.R. Bhide, Optimum coordination of directional overcurrent relays using the hybrid GA-NLP approach. IEEE Trans. Power Del. 26, 109–119 (2011)
28. M. Ezzeddine, R. Kaczmarek, A novel method for optimal coordination of directional overcurrent relays considering their available discrete settings and several operation characteristics. Electr. Power Syst. Res. 81, 1475–1481 (2011)
29. Y. Lu, J.-L. Chung, Detecting and solving the coordination curve intersection problem of overcurrent relays in sub-transmission systems with a new method. Electr. Power Syst. Res. 95, 17–27 (2013)
30. T.R. Chelliah, R. Thangaraj, S. Allamsetty, M. Pant, Application of genetic algorithm for overcurrent relay coordination. Int. J. Electr. Power Energy Syst. 55, 341–350 (2014)
31. J.L. Blackburn, T.J. Domin, Introduction and general philosophies, in Protective Relaying, Principles and Applications, 3rd edn. (Taylor and Francis Group, LLC, New York, 2006)
32. S.H. Horowitz, A.G. Phadke, J.S. Thorp, Adaptive transmission system relaying. IEEE Trans. Power Del. 3, 1436–1445 (1988)
33. IEEE, IEEE Standard Inverse-Time Characteristic Equations for Overcurrent Relays, IEEE Std C37.112-1996 (1997), pp. 1–13
34. IEC Standard 255-4, Single Input Energizing Measuring Relays with Dependent Specified Time, 1st edn. IEC Publication 255-4 (1976)
35. J. Otamendi, The importance of the objective function definition and evaluation in the performance of the search algorithms, in 16th European Simulation Symposium (2004)
36. W. Mock, Pareto optimality, in Encyclopedia of Global Justice, ed. by D.K. Chatterjee (Springer, Netherlands, 2011), pp. 808–809
37. K. Chen, Industrial Power Distribution and Illuminating Systems. Electrical and Computer Engineering (Taylor & Francis, New York, USA, 1990)
38. A.F. Sleva, Protective Relay Principles, 1st edn. (Taylor & Francis, New York, USA, 2009)
39. K. Schittkowski, NLPQL: a Fortran subroutine solving constrained nonlinear programming problems. Ann. Oper. Res. 5(2) (1986)
40. P.T. Boggs, J.W. Tolle, Quasi-Newton methods and their application to function minimisation. Acta Numer. 4(1), 1–54 (1995)
41. P.E. Gill, E. Wong, Sequential quadratic programming methods, in Mixed Integer Nonlinear Programming, Volumes in Mathematics and its Applications, vol. 154, ed. by J. Lee, S. Leyffer (Springer, New York, USA, 2012), pp. 147–224
42. R.B. Wilson, A Simplicial Algorithm for Concave Programming. Ph.D. thesis, Graduate School of Business Administration, Harvard University, Cambridge, USA, 1963
43. MATLAB, Find minimum of constrained nonlinear multivariable function. http://www.mathworks.com/help/optim/ug/fmincon.html (2015)
44. MATLAB, Optimization toolbox. http://www.mathworks.com/products/optimization (2015)
45. MATLAB, Optimization toolbox user's guide. http://www.mathworks.com/help/pdf_doc/optim/optim_tb.pdf (2015)
46. MATLAB, The language of technical computing. http://www.mathworks.com/products/optimization/ (2015)

Chapter 4

Overcurrent Relay Coordination, Robustness and Fast Solutions

The operational analysis of overcurrent relays is complex compared with other protection principles because the time in which the protection should operate is not known in advance; it depends on the evolution of the fault current. Therefore, the relay settings are established by defining the maximum boundary conditions, both for the steady state, to determine the pickup current of the relays, and for the dynamic state, to perform protection coordination. Intermittent sources widen the operating range of electrical networks, so for any other operating condition the performance of the protection may be degraded. Both the sensitivity and the security requirements of relay operation must be considered. Self-setting relays offer an attractive alternative for improving the quality of protection. If the solution of the coordination problem is obtained every few minutes, both the loss of sensitivity and the relay operation times can be reduced. For this purpose, it is important to determine the execution time of the coordination algorithm using complex interconnected systems for a better evaluation. The Ant Colony Optimizer (ACO), the Differential Evolution algorithm (DE), and Grey Wolf Optimization (GWO) are proposed to obtain better results when compared with similar heuristic algorithms.

4.1 The Overcurrent Relay as an Optimization Problem

Relay coordination is formulated as an optimization problem that seeks to minimize the relays' operation times by finding adequate time dial settings (TDS) and plug setting multipliers (PSM). The objective function is determined by the relay time curves, subject to parametric and time constraints that enforce the coordination criterion. Deterministic methods are very challenging here because they are highly dependent on a good initial guess and have a high probability of being trapped in local minima. In searching for the relay coordination solution, the use of heuristic optimization algorithms is very favorable due to their simplicity, flexibility, derivative-free mechanism, local-optimum avoidance, and robustness, showing good results in a highly restricted problem domain. The overcurrent relay problem is solved using optimization algorithms with fast and robust solution characteristics. The overcurrent relay formulation is presented in Sect. 3.3.5 of Chap. 3.

© Springer Nature Switzerland AG 2019 E. Cuevas et al., Metaheuristics Algorithms in Power Systems, Studies in Computational Intelligence 822, https://doi.org/10.1007/978-3-030-11593-7_4

4.2 Ant-Colony Optimization

A method related to swarms is the Ant-Colony Optimization (ACO) algorithm. Introduced in 1992 by Dorigo [1, 2] as part of his Ph.D. thesis, it emulates the behavior of an ant colony looking for food: pheromone is left by the ants in different places until an optimized path from the anthill to the food source is found. In the natural world, ants initially wander randomly until they find food. An ant-k agent runs randomly around the colony. As soon as it discovers a food source, it returns to the nest, leaving a trail of pheromone along its route. If other ants encounter this route, they stop travelling randomly and instead follow it to the food source and back home. The route is then reinforced with more pheromone deposits. If there are different routes to the same food source then, in a given amount of time, the shortest one will be traveled by more ants than the longer routes. The shortest route is increasingly enhanced and therefore becomes more attractive. As time passes, the longer routes disappear because pheromones are volatile, and eventually all the ants have "chosen" the shortest route. The idea is illustrated in Fig. 4.1a–c. The components of the algorithm are:

Fig. 4.1 The concept of ant-colony optimization


a. Ant agents: a number of artificial ants that build solutions to an optimization problem and exchange information about their quality through a communication scheme reminiscent of the one adopted by real ants.

b. The AS-graph (search space): a matrix that contains the discrete settings (states) of the control variables (stages). In other words, this graph or matrix contains the set of feasible solutions to the optimization problem that will be explored by the ant agents. A second matrix, the pheromone matrix, is created to represent the attractiveness of each discrete setting.

c. The pheromone matrix: a matrix that contains information about the chemical pheromone deposited by the ants. It records the pheromone intensity of each discrete setting and therefore the attractiveness of every possible route to the solution. The more intense a setting's pheromone is, the higher the probability that an ant agent chooses it as part of the solution.

d. The transition rule: the probabilistic, stochastic mechanism that ant agents use to evaluate the pheromone intensities and decide which is the most attractive point to visit next.

The pheromone update is the process in which pheromone intensities are increased or decreased according to the evaluated results, depending on whether the settings are good or bad solutions. This is achieved by decreasing pheromone values through evaporation and increasing them by depositing more pheromone on good sets of solutions. The algorithm starts with many sets of solutions (states); together, all the states form the AS-graph search space. This AS-graph is constant throughout the whole search process, so it does not change from iteration to iteration. The pheromone matrix, on the other hand, represents the attractiveness of each discrete setting (edge) and changes not only in every iteration but in every ant tour.
The settings with good fitness lead the ant agents to deposit more and more pheromone until all the ants converge on one route (set of settings); the optimal solution is then found. The whole process is repeated until some condition is met (the maximum number of iterations is reached, or the best solution stops improving); this is called the stopping criterion. The AS-graph size indicates how many states there are in the AS-graph. If there are too few states, the algorithm has fewer possibilities of finding the optimal solution and only a small part of the search space is explored. On the other hand, if there are too many states, the possibility of encountering the optimal solution increases, but the whole process slows down drastically.


4.2.1 Steps of Protection Coordination Using the Ant-Colony Algorithm

4.2.1.1 AS-Graph

Create the AS-graph search space with the discrete settings (states) of the control variables (stages). The discrete settings are values within the feasible numerical range of the corresponding overcurrent relay setting. The control variables are the different settings of the overcurrent relay, in this particular case the degrees of freedom. The size of the AS-graph is an (m, n × NR) matrix, where m represents the number of states, n the number of stages per relay, and NR the number of relays. For example, if a system has 10 relays with 1 degree of freedom (dial), then the size of the AS-graph for 15 states is (15, 10). If a system has 10 relays with 2 degrees of freedom (dial and k), then the size of the AS-graph for 15 states is (15, 20). But if a system has 10 relays with 3 degrees of freedom (dial, k, and A, B, n), then the size of the AS-graph for 15 states is (15, 30), and there are a total of 450 discrete relay settings. As more degrees of freedom are added, the size of the AS-graph increases. Bear in mind that the discrete settings A, B, n are values taken from the IEEE standard and are considered all three together as one degree of freedom and one variable; this is because the three together define a different curve type, not each one separately. The idea of the AS-graph is shown in Eq. 4.1:

     | dial(1,1)  ...  dial(1,NR)   k(1,NR+1)  ...  k(1,2NR)   ct(1,2NR+1)  ...  ct(1,3NR) |
AS = |    ...            ...           ...            ...          ...              ...    |    (4.1)
     | dial(m,1)  ...  dial(m,NR)   k(m,NR+1)  ...  k(m,2NR)   ct(m,2NR+1)  ...  ct(m,3NR) |

where ct represents the curve type with discrete values 1, 2 and 3; for example, 1 stands for moderately inverse (MI), 2 for very inverse (VI) and 3 for extremely inverse (EI). In order to create the AS-graph matrix, data such as the upper and lower limits and the steps of the control variables are needed. For example, if a system has 2 relays with 2 degrees of freedom (dial, k), where the range of dial is [0.5, 1.4] in steps of 0.1 and the range of k is [1.4, 1.6] in steps of 0.05, then the AS-graph is constructed as shown in Fig. 4.2. Note that there are 10 values in the range of the dial setting including the upper and lower limits, but only 5 values in the range of the k setting. Under this circumstance, the rest of the matrix is completed by repeating the upper limit of the k setting, as illustrated in the blue square in Fig. 4.2, in order to keep a sequential order.
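The construction just described can be sketched in a few lines; this is an illustrative snippet (function and variable names are assumed, not from the book), applied to the 2-relay, 2-degrees-of-freedom example, padding the shorter k column with its upper limit:

```python
import numpy as np

def build_as_graph(ranges, n_relays):
    """Build the AS-graph: one column per (setting, relay) pair, with
    shorter columns padded with the setting's upper limit (Eq. 4.1)."""
    cols = []
    for lo, hi, step in ranges:                    # one range per degree of freedom
        vals = np.arange(lo, hi + step / 2, step)  # include the upper limit
        cols += [vals] * n_relays                  # repeat the column for every relay
    m = max(len(c) for c in cols)                  # number of states
    graph = np.zeros((m, len(cols)))
    for j, c in enumerate(cols):
        graph[: len(c), j] = c
        graph[len(c):, j] = c[-1]                  # pad with the upper limit
    return graph

# 2 relays, dial in [0.5, 1.4] step 0.1 and k in [1.4, 1.6] step 0.05
AS = build_as_graph([(0.5, 1.4, 0.1), (1.4, 1.6, 0.05)], n_relays=2)
print(AS.shape)   # (10, 4): 10 states, 2 relays x 2 degrees of freedom
```

The padding keeps every column the same length, exactly as in Fig. 4.2, at the cost of duplicating the upper limit of the shorter ranges.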


Fig. 4.2 AS-graph

4.2.1.2 Pheromone Matrix

The pheromone matrix is a matrix that contains information about the chemical pheromone deposited by the ants. It records the pheromone intensity of each discrete setting and therefore the desirability of every possible route to the solution. The more intense a setting's pheromone is, the higher the probability that an ant agent chooses it as part of the solution. The pheromone matrix c(m, n) is constructed according to the size of the AS-graph, where m is the number of states and n the number of stages. This matrix is initialized as presented in Eq. 4.2:

c(m, n) = c0(m, n) = τmax    (4.2)

where τmax is the maximum pheromone trail, given by Eq. 4.3:

τmax = 1 / (a · f_gbest)    (4.3)

where f_gbest is the global best solution (the best over all past iterations) and a is an empirical value that best suits the range [0.88, 0.99] [1, 2]. When initializing the pheromone matrix, f_gbest is an initial estimate of the best solution. The pheromone matrix is first constructed with all edges equal, as presented in Eq. 4.2. But, as shown in Fig. 4.2, the smallest relay settings occupy the first rows of the AS-graph; these settings are ideal for coordination because they result in the minimum operation times. Hence, after the pheromone matrix is constructed, the pheromone values of its first rows are increased. This helps the algorithm find the best operation-time settings in less time, so one decides how many rows to change and by what amount. Obviously, one should not change too many rows, nor by a large amount, because this would significantly affect the ants' exploration. The idea is illustrated in Fig. 4.3.
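A rough sketch of this initialization follows; the boost amount and the number of boosted rows are assumptions chosen only for illustration, since the text leaves them to the designer:

```python
import numpy as np

def init_pheromone(m, n, f_gbest, a=0.9, boost_rows=2, boost=1.5):
    """Initialize every edge to tau_max = 1 / (a * f_gbest) (Eqs. 4.2-4.3),
    then mildly reinforce the first rows, which hold the smallest settings
    and hence the lowest operation times."""
    tau_max = 1.0 / (a * f_gbest)
    c = np.full((m, n), tau_max)
    c[:boost_rows, :] *= boost   # small bias toward low-time settings
    return c

# 15 states, 10 relays with 2 degrees of freedom -> 20 stages
c = init_pheromone(m=15, n=20, f_gbest=5.0)
```

Keeping the boost small preserves exploration: the first rows start more attractive, but the remaining states retain a meaningful selection probability.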


Fig. 4.3 Pheromone matrix

4.2.1.3 Transition Rule

The transition rule is the probabilistic, stochastic mechanism that ant agents use to evaluate the pheromone intensities and decide which is the most attractive point to visit next. Bear in mind that this rule can also include another desirability factor: the vision of the edges. For example, the vision in the Traveling Salesman Problem (TSP) is a symmetric matrix containing the inverse of the distance between each pair of edges (cities), where the main diagonal is zero because there is zero distance to reach the same city. A similar element would be the operation time of the relays in the coordination problem, but because the relays can have many primary and backup operation times, the construction of a vision matrix is impossible. Thus pheromone intensity is the only factor used in the transition rule. The advantage of having the vision matrix (as in TSP) is that the algorithm always converges to the same result, while the disadvantage of not having it is that the algorithm gives a different optimization result in every run. An analogy would be "an all-seeing ant" versus "a blind ant". When ant-j is at the r-state of the (i−1)-stage, it chooses the s-state of the (i)-stage as the next visit according to the transition rule presented in Eq. 4.4:

p(r, s) = c(r, s) / Σ_l c(r, l),   s, l ∈ N_r^j    (4.4)

where N_r^j is the memory tabu list of ant-j that defines the set of points still to be visited when it is at point r. For TSP, N_r^j is very important because every ant agent jumps non-sequentially from stage to stage, whereas for the coordination problem every ant agent explores all stages sequentially; thus N_r^j is omitted here because it is unnecessary. The pheromone of the next possible visit of the (i)-stage currently under evaluation is c(r, s), and Σ_l c(r, l) is the pheromone sum of the entire column of the (i)-stage under evaluation. The idea is illustrated in Fig. 4.4a–c. In Fig. 4.4, the green colors represent the r-state of the (i−1)-stage and the yellow colors represent the s-state of the (i)-stage. This is the illustration of an


Fig. 4.4 Transition rule

ant-j agent completing its tour according to the transition rule; the set of settings of this ant-j agent's tour in the current iteration, referring to the example of Fig. 4.4, is [0.7 0.5 1.60 1.45]. Note that the ant-j agent did not start on the most intense pheromone edge of the first column because it was placed randomly on the first stage; this random placement is carried out at the beginning of each iteration.
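Since the vision matrix is omitted, Eq. 4.4 reduces to roulette-wheel selection over the pheromone column of the stage being entered. A minimal sketch (names assumed, not from the book):

```python
import random

def choose_state(pheromone_column, rng=random):
    """Pick the next state with probability proportional to its pheromone
    intensity: p(r, s) = c(r, s) / sum_l c(r, l)  (Eq. 4.4)."""
    total = sum(pheromone_column)
    threshold = rng.uniform(0.0, total)
    acc = 0.0
    for state, tau in enumerate(pheromone_column):
        acc += tau
        if acc >= threshold:
            return state
    return len(pheromone_column) - 1   # guard against rounding error

# A state holding most of the column's pheromone is chosen most often:
column = [0.1, 0.1, 5.0, 0.1]
picks = [choose_state(column) for _ in range(2000)]
print(picks.count(2) / len(picks))     # close to 5.0 / 5.3
```

Every state keeps a nonzero selection probability, which is what lets the "blind" ants continue exploring even after one route begins to dominate.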

4.2.1.4 Pheromone Updates

The pheromone update is the process in which pheromone intensities are increased or decreased according to the evaluated results, depending on whether the settings are good or bad solutions. This is achieved by decreasing pheromone values through evaporation and increasing them by depositing more pheromone on good sets of solutions.

4.2.1.5 Local Pheromone Update

The pheromone trail on each edge of an ant-j tour is updated immediately when the ant-j agent finishes its tour. This is given in Eqs. 4.5–4.7:

c(r, s) = a · c(r, s) + Δc^j(r, s)    (4.5)

0 < a < 1    (4.6)

Δc^j(r, s) = 1 / (Q · f)    (4.7)

where a is the persistence of the pheromone trail, (1 − a) represents the pheromone trail evaporation, and Δc^j(r, s) is the amount of pheromone that ant-j puts on edge (r, s). The desirability of the edge (r, s) is represented by Δc^j: shorter distance, better performance, and in this case less operation time. The objective


function evaluation of the settings of the ant-j tour is represented by f, and Q is a positive constant. It is observed from Eq. 4.7 that as the constant Q increases, the amount of pheromone an ant deposits decreases.
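Equations 4.5–4.7 translate almost directly into code; an illustrative sketch (the tour representation is an assumption: tour[s] holds the state chosen at stage s):

```python
def local_update(c, tour, fitness, a=0.9, Q=100.0):
    """Local pheromone update (Eqs. 4.5-4.7): each visited edge keeps a
    fraction a of its trail (so 1 - a evaporates) and receives a deposit
    of 1/(Q*f), so a smaller (better) fitness deposits more pheromone."""
    delta = 1.0 / (Q * fitness)
    for s, r in enumerate(tour):          # tour[s] = state chosen at stage s
        c[r][s] = a * c[r][s] + delta
    return c

c = [[1.0, 1.0], [1.0, 1.0]]              # 2 states x 2 stages
c = local_update(c, tour=[0, 1], fitness=2.0)
print(c)                                  # visited edges evaporated and reinforced
```

Only the edges the ant actually traversed are touched; the rest of the matrix is left to the global update.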

4.2.1.6 Global Pheromone Update

After all ant agents have completed their tours in the iteration, the primary and backup operation times are computed. The objective function is evaluated for each ant tour, and all pheromone edges (r, s) of the best ant tour (the tour with the best fitness value) are updated according to Eq. 4.8:

c(r, s) = a · c(r, s) + R / f_best,   (r, s) ∈ J_best^j    (4.8)

where f_best is the best solution of this iteration, R is a positive constant, and J_best^j is the location list of the best ant tour, which records the state of each stage as ant-j moves from one stage to another. It is observed from Eq. 4.8 that as the constant R increases, the amount of pheromone an ant deposits increases. Here R was chosen to be 5. The global pheromone update was not performed exactly as described in the previous paragraph: updating only the best ant tour leads to premature convergence, so the global pheromone update was performed on a percentage of the best ant tours; for example, the 30% of ant tours ranked best were updated. In addition to this change, the algorithm was programmed to perform the global update randomly in some iterations, also to avoid premature convergence, which at the same time makes the ant agents explore the AS-graph better. After the program has executed a certain number of iterations, specific iterations are selected to perform the global update randomly; this idea is similar to the mutation process in the genetic algorithm. Empirical tests have shown that the ACO converges faster when both Q and R are arbitrarily large numbers almost equal to one another. Studies in the literature might use Q = R = 1,000,000 for other problems, but such large constants are not suitable here because the algorithm then converges in around 30 iterations, leaving many coordination pairs uncoordinated. This is the reason for choosing Q = 100 and R = 5; they were chosen empirically but work for any network.
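The "update a fraction of the best tours" variant described above can be sketched as follows (a minimal illustration with assumed names; the text does not prescribe this exact code):

```python
def global_update(c, tours, fitnesses, a=0.9, R=5.0, frac=0.3):
    """Global pheromone update (Eq. 4.8), applied to the best `frac` of
    the ant tours instead of only the single best one, as the text
    suggests, to delay premature convergence."""
    ranked = sorted(range(len(tours)), key=lambda k: fitnesses[k])
    n_best = max(1, round(frac * len(tours)))
    for k in ranked[:n_best]:
        deposit = R / fitnesses[k]
        for s, r in enumerate(tours[k]):   # tours[k][s] = state at stage s
            c[r][s] = a * c[r][s] + deposit
    return c

c = [[1.0, 1.0], [1.0, 1.0]]               # 2 states x 2 stages
c = global_update(c, tours=[[0, 0], [1, 1], [0, 1]],
                  fitnesses=[2.0, 1.0, 3.0])
```

With three tours and frac = 0.3, only the single best tour (fitness 1.0) is reinforced; larger colonies spread the deposit over several good tours.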

4.2.1.7 Intelligent Exploration

Intelligent exploration of the AS-graph consists of exploring the (dial) setting of specific relays after a pre-specified number of iterations. This was programmed to help coordinate all coordination pairs. The dial setting was chosen because it has the greatest influence on the relay operation time.


First, detect the coordination pairs that are not coordinated and select a pair to start with. Then take the setting of the primary relay of this pair from the best ant tour and use it as the upper limit. Next, Eq. 4.8 is applied again with a small modification, as given in Eq. 4.9:

c(r, s) = a · c(ran1, s) + R / f_best,   (r, s) ∈ J_best^j    (4.9)

where ran1 is a random number selected from the interval [1, r]. Pheromone is then deposited on this edge. Note that r represents the upper limit (state) and s represents the specific relay (stage). Depositing pheromone on this specific edge leads the ant agents to explore the corresponding setting of the AS-graph; because it has an upper limit, the newly explored edge corresponds to a smaller setting in the AS-graph, leading to a reduction of the primary operation time. Now take the setting of the backup relay of the same coordination pair from the best ant tour and use it as the lower limit. Then Eq. 4.9 is applied again with a small modification, as given in Eq. 4.10:

c(r, s) = a · c(ran2, s) + R / f_best,   (r, s) ∈ J_best^j    (4.10)

where ran2 is a random number selected from the interval [r : end of states]. The pheromone is then deposited on this edge. Note that r represents the lower limit (state) and s represents the specific relay (stage). Depositing pheromone on this specific edge leads the ant agents to explore the corresponding setting of the AS-graph, and because it has a lower limit, the newly explored edge corresponds to a larger setting in the AS-graph, leading to an increment of the backup operation time. If all coordination pairs are coordinated, then detect the coordination pairs whose CTI is greater than the pre-specified value and try to reduce them. Choose a coordination pair to start with, take the settings of both relays (primary and backup) from the best ant tour and use them as the upper limits. Then Eq. 4.10 is applied again, this time for both the primary and the backup relay. Note that r represents the upper limit (state) and s represents the specific relay (stage). By doing so, pheromone trails are deposited on these specific edges (both primary and backup). This leads the ant agents to explore the corresponding settings of the AS-graph; because these settings have upper limits, the newly explored edges correspond to smaller settings in the AS-graph, hence a reduction of both the primary and the backup operation time.

Steps of Protection Coordination using the Ant Colony Algorithm

1. Generate the AS-graph (search space) that represents the discrete settings (states) of the control variables (stages). Each discrete setting is a possible solution to the problem as presented in Eq. 4.3.
2. Generate the pheromone matrix c(m, n) according to the size of the AS-graph, where n is the number of stages and m the number of states, using Eq. 4.5.


3. Initialize the pheromone matrix c(m, n) = c_0(m, n) = s_max according to Eq. 4.6; in this case f_gbest is an initial estimate or guess of the best solution.
4. Place M ants randomly on the states of the first stage (i = 1).
5. Every ant must explore all stages sequentially; when ant j is at the r-state of the (i − 1)-stage, it chooses the s-state of the i-stage as its next visit according to the transition rule, Eq. 4.7.
6. Compute the primary and backup times of each relay after ant j completes its tour (explores all stages sequentially).
7. Execute the local pheromone update of the (r, s)-trail made by ant j using Eq. 4.8.
8. Evaluate the fitness f_k(x) of the tour of ant j and save it to f(x).
9. Repeat steps 5–8 until all M ants have finished their tours.
10. Execute the global pheromone update of the (r, s)-trails belonging to the best ant tour (f_best) using Eq. 4.10.
11. Terminate the algorithm if the stopping criterion is met; otherwise, repeat steps 4 to 10.
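The eleven steps above can be condensed into the following skeleton (illustrative only; the fitness function, the simple proportional transition rule and the local/global update constants stand in for Eqs. 4.3–4.10, and the AS-graph dimensions are placeholders):

```python
import numpy as np

def aco_coordination(fitness, n_stages, n_states, n_ants=20, n_iter=100,
                     alpha=0.9, Q=100, R=5, seed=0):
    """Minimal ACO over an AS-graph: each tour picks one state per stage."""
    rng = np.random.default_rng(seed)
    pher = np.full((n_states, n_stages), 1.0)        # step 3: initial pheromone
    best_tour, f_best = None, np.inf
    for _ in range(n_iter):                          # step 11: iterate
        tours = []
        for _ant in range(n_ants):                   # steps 4-9
            tour = []
            for stage in range(n_stages):            # step 5: transition rule
                p = pher[:, stage] / pher[:, stage].sum()
                state = rng.choice(n_states, p=p)
                tour.append(state)
                # step 7: local update (sketch of Eq. 4.8)
                pher[state, stage] = alpha * pher[state, stage] + Q
            tours.append((fitness(tour), tour))      # steps 6 and 8
        f_it, tour_it = min(tours, key=lambda t: t[0])
        if f_it < f_best:
            f_best, best_tour = f_it, tour_it
        for stage, state in enumerate(best_tour):    # step 10: global update
            pher[state, stage] = alpha * pher[state, stage] + R / f_best
    return best_tour, f_best
```

Here a tour plays the role of one complete assignment of relay settings; in the chapter, fitness would be the coordination objective built from primary/backup times and CTI violations.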

4.3 Differential Evolution

The Differential Evolution (DE) algorithm [3] belongs to the family of evolutionary algorithms; it is a metaheuristic search method based on the evolutionary idea of natural selection. The concept of evolutionary computing was introduced in the 1960s by I. Rechenberg in his work on "Evolution Strategies", and his idea was later developed by other researchers. DE was first reported in a written article by Storn and Price in 1995 [4]. The main advantages of DE are:

(a) Simplicity compared to many other evolutionary algorithms (the main body of the algorithm can be written in only a few lines in any programming language)
(b) Good performance in terms of accuracy, convergence speed and robustness (where finding a quality result in reasonable computational time matters)
(c) Very few control parameters (Cr, F and NP)
(d) Low space complexity (low storage consumption), which makes it able to handle large-scale and expensive optimization problems

The DE algorithm operates through computational steps similar to those of a standard evolutionary algorithm (EA). Unlike traditional EAs, however, DE perturbs the current-generation population members with the scaled differences of randomly selected, distinct population members; therefore, no separate probability distribution has to be used for generating the offspring. This characteristic gives the algorithm fewer mathematical operations, and hence a shorter execution time than other algorithms. In the DE community, the individual trial solutions (which constitute the population) are called parameter vectors or genomes. Each parameter vector contains a set of possible solution information. The algorithm starts with many such sets of solutions; together, all the parameter vectors form a population. Solutions from one population are taken and used to form a new population, and the selection process guarantees that the population either improves (with respect to the minimization of the objective function) or remains the same in fitness status, but never deteriorates. The whole process is repeated until some condition (the maximum number of iterations, or no further improvement of the best solution) is met; this is called the stopping criterion.

The population size NP indicates how many parameter vectors there are in the population (in one generation). If there are too few parameter vectors, the algorithm has very few possibilities to perform crossover and only a small part of the search space is explored. On the other hand, with too many parameter vectors the algorithm explores a greater variety of feasible solutions, but the execution time increases. The three major control parameters in DE are F, Cr and NP. Here they are analyzed and tuned specifically for the coordination problem, although it has been reported in the literature that adjusting them dynamically can make DE more efficient in both execution time and result quality. The effect of the mutation parameter F is that the algorithm favors exploitation or exploration as the scalar value of F is set closer to 0 or to 1, respectively. The effect of the crossover parameter Cr is that the algorithm can be tuned to perform efficiently for different types of problem, mainly regarding complexity and multimodality. The main stages of the DE algorithm are presented in Fig. 4.5.

Fig. 4.5 Main stages of the DE algorithm

4.3.1 Steps of Protection Coordination Using Differential Evolution Algorithm

4.3.1.1 Initial Population

Create a new population randomly with reasonable genes; that is, create a population in which every gene of every parameter vector is initialized somewhere within the feasible numerical range of the corresponding overcurrent relay setting. Each row is a parameter vector and each column is a gene/variable/setting of a relay. The size of the population is (NP, D × NR), where NP is the number of parameter vectors, D the number of control variables and NR the number of relays. For example, if a system has 10 relays with 1 degree of freedom (dial), then the size of the population for 10 parameter vectors is (10, 10). If a system has 10 relays with 2 degrees of freedom (dial and k), then the size of the population for 10 parameter vectors is (10, 20). And if a system has 10 relays with 3 degrees of freedom (dial, k, and A, B, n), then the size of the population for 10 parameter vectors is (10, 30), giving a total of 300 genes or relay settings. As more degrees of freedom are added, the population size increases. Bear in mind that the discrete settings A, B, n take values from the IEEE standard and are considered all three together as 1 degree of freedom, because the three together form a different curve type, not each one individually. The structure of the population is shown in Eq. 4.11:

P = [ dial(1,1)   ···  dial(1,NR)    k(1,NR+1)   ···  k(1,2NR)    ct(1,2NR+1)   ···  ct(1,3NR)
         ···                ···              ···               ···              ···                 ···
      dial(NP,1)  ···  dial(NP,NR)   k(NP,NR+1)  ···  k(NP,2NR)   ct(NP,2NR+1)  ···  ct(NP,3NR) ]   (4.11)

where ct represents the curve type, taking the discrete values 1, 2 and 3. First the population is created randomly with values between 0 and 1; then Eq. 4.12 is applied in order to map the initial random population into the range between its lower and upper limits:

P(p, q) = limit_lower + (limit_upper − limit_lower) · P(p, q)_random   (4.12)
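This initialization can be sketched as follows (illustrative; the dial and k ranges come from Table 4.1, while the function name and layout are assumptions):

```python
import numpy as np

def init_population(np_size, lower, upper, seed=None):
    """Eq. 4.12: scale a uniform random population into [lower, upper].

    np_size      : NP, number of parameter vectors
    lower, upper : arrays of length D*NR with the per-gene setting limits
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    P_random = rng.random((np_size, lower.size))   # values in [0, 1)
    return lower + (upper - lower) * P_random      # Eq. 4.12

# e.g. 10 relays, 2 degrees of freedom: dial in [0.05, 10.0], k in [1.4, 1.6]
lower = np.r_[np.full(10, 0.05), np.full(10, 1.4)]
upper = np.r_[np.full(10, 10.0), np.full(10, 1.6)]
P = init_population(10, lower, upper, seed=1)      # population of shape (10, 20)
```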

4.3.1.2 Mutation

Biologically, “mutation” means a sudden change in the gene characteristics of a chromosome. In the context of the evolutionary computing paradigm, however, mutation is seen as a change or perturbation with a random element. In the DE literature, a parent vector from the current generation is called the target vector, a mutant vector obtained through the differential mutation operation is known as the donor vector, and the offspring formed by recombining the donor with the target vector is called the trial vector. Unlike in the GA, mutation in DE is applied to all target vectors (parameter vectors, individuals) in every iteration. For each target vector, three different parameter vectors are randomly selected from the DE population. Suppose that the selected population members are X_r1,G, X_r2,G and X_r3,G for the i-th target vector X_i,G. The indices r1, r2 and r3 are generated only once for each mutant vector and are mutually exclusive integers randomly chosen from the range [1, NP], all different from the index i. The difference of two of these three vectors is multiplied by a scalar F (which typically lies within 0.4–1), and the result is added to the third vector in order to obtain the donor vector V_i,G. The difference-vector mutation scheme is outlined in Eq. 4.13:

V_i,G+1 = X_r1,G + F · (X_r2,G − X_r3,G)   (4.13)
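The mutation of Eq. 4.13 (the rand/1 scheme) can be sketched as follows (illustrative; the helper name and random-index handling are assumptions):

```python
import numpy as np

def de_mutation(P, i, F=0.8, rng=None):
    """Eq. 4.13 (rand/1): donor = X_r1 + F*(X_r2 - X_r3), with r1, r2, r3 != i."""
    rng = rng or np.random.default_rng()
    NP = P.shape[0]
    candidates = [r for r in range(NP) if r != i]      # exclude the target index
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return P[r1] + F * (P[r2] - P[r3])
```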

4.3.1.3 Crossover

The crossover or recombination operation is performed after creating the donor vector via mutation. This operation enhances the diversity of the population by exchanging components of the donor vector with the target vector X_i,G to generate the trial vector U_i,G = (u_1,i,G, u_2,i,G, u_3,i,G, ..., u_D,i,G). The DE family of algorithms can use two kinds of crossover methods: exponential (or two-point modulo) and binomial (or uniform). Binomial crossover scheme: binomial crossover is performed whenever a randomly generated number between 0 and 1 is less than or equal to the crossover rate Cr, for each of the D variables. Under this circumstance, the number of parameters inherited from the donor vector follows a nearly uniform distribution. The binomial crossover scheme may be outlined as Eq. 4.14:

u_j,i,G = v_j,i,G  if rand_i,j[0, 1] ≤ Cr or j = j_rand;   x_j,i,G  otherwise   (4.14)

where rand_i,j[0, 1] is a uniformly distributed random number. This random function is executed for each j-th component of the i-th parameter vector. A randomly chosen index j_rand ∈ [1, 2, ..., D] ensures that the trial vector U_i,G gets at least one component from the donor vector V_i,G. The Cr parameter is selected to be 0.5. Exponential crossover scheme: choose an integer nn and an integer L randomly among the numbers [1, D]. The integer nn denotes the starting point in the target vector from where the exchange of components with the donor vector starts, and L stands for the number of components the donor vector actually contributes to the target vector. The exponential crossover scheme is presented in Eq. 4.15:

u_j,i,G = v_j,i,G  if j ≥ (nn)_D and L ≥ Lc;   x_j,i,G  for all other j ∈ [1, D]   (4.15)

where (nn)_D is the starting point of the crossover and Lc is a counter for L, initially expressed as Lc = 0 and then incremented as Lc = Lc + 1 for every evaluation of the j-th component, with L ≤ D.
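Both crossover schemes can be sketched as follows (illustrative implementations of Eqs. 4.14 and 4.15; all names are placeholders, and the modulo wrap-around reflects the "two-point modulo" reading of the exponential scheme):

```python
import numpy as np

def binomial_crossover(target, donor, Cr=0.5, rng=None):
    """Eq. 4.14: take the donor component where rand <= Cr or j == j_rand."""
    rng = rng or np.random.default_rng()
    D = target.size
    j_rand = rng.integers(D)               # guarantees at least one donor gene
    mask = rng.random(D) <= Cr
    mask[j_rand] = True
    return np.where(mask, donor, target)

def exponential_crossover(target, donor, rng=None):
    """Eq. 4.15: copy L consecutive donor genes starting at nn (modulo D)."""
    rng = rng or np.random.default_rng()
    D = target.size
    trial = target.copy()
    nn = rng.integers(D)                   # starting point nn
    L = rng.integers(1, D + 1)             # donor contribution length, L <= D
    for step in range(L):
        j = (nn + step) % D                # wrap around the vector
        trial[j] = donor[j]
    return trial
```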


4.3.1.4 Selection

The selection operation determines whether the trial or the target vector survives into the following generation, G + 1. The selection operation is presented in Eq. 4.16:

X_i,G+1 = U_i,G  if f(U_i,G) ≤ f(X_i,G);   X_i,G  if f(U_i,G) > f(X_i,G)   (4.16)

where f(X) is the fitness of the target vector and f(U) is the fitness of the trial vector. If the new trial vector obtains a lower or equal fitness value, it replaces the target vector in the next generation; otherwise the target vector is kept in the population. By doing so, the population never deteriorates, since it either improves or remains the same in fitness quality.

Steps of Protection Coordination using the Differential Evolution Algorithm

1. Randomly generate the initial population so that each gene of every parameter vector lies within the feasible range of the problem, Eqs. 4.11 and 4.12.
2. Mutation:
   a. For each target vector, evaluate the fitness f(x) of the three selected mutually exclusive parameter vectors.
   b. Execute the mutation scheme corresponding to the chosen DE variant.
   c. For each target vector, create the donor vector according to the chosen DE variant.
3. Crossover:
   a. To form the trial vectors, perform binomial or exponential crossover according to Eq. 4.14 or 4.15.
4. Selection:
   a. Evaluate the fitness f(x) of both the target and the trial vectors.
   b. Perform selection according to Eq. 4.16 in order to generate the new population.
5. Evaluate the fitness f(x) of the new population.
6. Execute the algorithm anew using the new population.
7. Repeat steps 2 to 6 until the stopping criterion is met.
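Putting the steps together, a compact DE/rand/1/bin loop can be sketched (illustrative; a simple sphere function stands in for the coordination objective of Chap. 3, and boundary handling by clipping is an assumption):

```python
import numpy as np

def differential_evolution(f, lower, upper, NP=20, F=0.8, Cr=0.5,
                           n_gen=200, seed=0):
    """DE/rand/1/bin following Eqs. 4.11-4.16 on a box-constrained objective."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    D = lower.size
    P = lower + (upper - lower) * rng.random((NP, D))      # Eqs. 4.11-4.12
    fit = np.array([f(x) for x in P])
    for _ in range(n_gen):
        for i in range(NP):
            r1, r2, r3 = rng.choice([r for r in range(NP) if r != i],
                                    size=3, replace=False)
            donor = P[r1] + F * (P[r2] - P[r3])            # Eq. 4.13
            donor = np.clip(donor, lower, upper)
            j_rand = rng.integers(D)
            mask = rng.random(D) <= Cr
            mask[j_rand] = True
            trial = np.where(mask, donor, P[i])            # Eq. 4.14
            f_trial = f(trial)
            if f_trial <= fit[i]:                          # Eq. 4.16
                P[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return P[best], fit[best]

# sphere function as a stand-in objective
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        lower=[-5] * 4, upper=[5] * 4)
```

Because the selection step only ever accepts equal-or-better trials, the best fitness in the population is non-increasing over the generations, as stated in the text.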

4.3.2 Evaluation of DE Family for Overcurrent Relay Coordination Problem

In this section, comparisons among different versions of the differential evolution (DE) algorithm for the coordination of DOCRs in larger, meshed power systems are presented. The evaluation criteria are the fitness value, the number of violations and the standard deviation. The IEEE 14-bus system was used, and the comparisons are based on 50 simulation runs of each algorithm. The parameter settings of DE are presented in Table 4.1. The evaluation of the IEEE 14-bus system shows that binomial crossover gives better results than exponential crossover (Table 4.2). Although the GA converged to a better result than some of the DE family, it is not the best option because of its long execution time. From Fig. 4.6, it is observed that DE-Tri with binomial crossover is the most outstanding algorithm of the DE family, based on its balance of speed, result quality and almost zero standard deviation of fitness over the different simulation runs. The primary and backup operation times as well as the CTI of one DE-Tri simulation are very acceptable, averaging 0.2999, 0.6726 and 0.3727 s, respectively. The tendencies of all 45 coordination pairs of the 14-bus system are presented in Fig. 4.7.

Table 4.1 Parameter settings of DE algorithm

Parameters    DE
CTI           0.3
Dial          [0.05 : 10.0]
k             [1.4 : 1.6]
Dial step     Continuous
k step        Continuous
C             0.5
F             0.8
Cr            0.5
Jr            0.04
pF            0.4
q             0.5
r             0.8
x             0.3
Individual    500, 100
Iterations    1000


Table 4.2 Time, fitness, number of violations and standard deviation of the DE family using binomial crossover for the 14 bus system

Algorithm   t (s)   f(x)     NV    t-SD    f(x)-SD   NV-SD
GA          4028    11.64    0.80  538.89  0.67      0.42
DE          64      67.45    0.24  2.72    84.59     0.43
DE-Tri      90      1.27     0.00  6.81    0.00      0.00
ODE         66      5.50     0.00  1.32    1.55      0.00
OCDE1       86      4.32     0.00  3.42    2.17      0.00
OCDE2       121     446.98   2.32  7.11    96.72     0.55
Either-Or   23      361.91   1.66  1.02    145.09    0.75
DEGL        102     1.40     0.00  3.67    0.24      0.00
SDE         111     1.75     0.00  2.51    1.17      0.00
SaDE        83      3.40     0.00  2.09    2.88      0.00
DESAP       22      1.51     0.00  2.34    0.08      0.00

t represents the averaged execution time of the algorithm in seconds, f(x) the averaged fitness value, NV the average number of violations of coordination constraints, and t-SD, f(x)-SD and NV-SD the averaged standard deviations of the execution time, the fitness value and the number of violations, respectively. Bold numbers represent the best performance results.

Fig. 4.6 Comparison among 4 best DE versions using binomial crossover

Fig. 4.7 Tendencies of primary, backup operation time and CTI for all 45 coordination pairs of the 14 bus system (plot "Tendencies of tp, tb and CTI"; y-axis: Time (s), x-axis: Coordination pairs; curves: tp, tb, CTI, threshold)

4.4 Grey Wolf Algorithm

The Grey Wolf Optimization (GWO) algorithm is an optimization technique proposed by Mirjalili et al. in 2014 [5]. The increasing interest in applying metaheuristic algorithms to solve real-life problems in different areas of study has led to the development and improvement of many optimization algorithms that prove very effective considering their simplicity, their flexibility and their capability to avoid local minima. The simplicity of these algorithms comes from their easily adapted imitation of natural behavior and of the evolutionary concepts behind natural phenomena. GWO belongs to the larger family of swarm intelligence (SI) methods, based on the natural imitation of the leadership and hunting mechanisms of the grey wolf (Canis lupus). The characteristic of swarm intelligence algorithms is their outstanding collective problem-solving capability, modeled on natural swarm systems. These algorithms have been applied successfully to many real-life applications and have proven suitable for addressing complex problems not easily solved with conventional methods. The Grey Wolf algorithm has been applied to some of these real-life problems in various areas, including engineering, and has proven very successful. The inspiration of the GWO algorithm is the natural imitation of the strict leadership hierarchy and hunting mechanism of the grey wolves. It starts by generating a population of wolves (a pack) with an initial random distribution that covers a large area in order to locate a possible prey (solution) in the search space. Each wolf explores its surroundings to determine whether there is a weak prey nearby; this process involves evaluating an objective function corresponding to the problem.
Their performance on the problem domain is used to obtain the three best solutions, which are considered the top three wolves in the social hierarchy: Alpha (α), Beta (β) and Delta (δ). These are used to update the positions of all the other wolves in the pack, known hereafter as the omega wolves (ω) or search agents. The fitness value determines each wolf's proximity to the weakest prey and, in each iteration, the wolves update their positions to get closer to this prey until they are close enough to attack. The algorithm undergoes three main processes, hunting, encircling and attacking of the prey, until it reaches an end criterion. At the end of the algorithm, the position of the Alpha wolf is considered the best solution to the problem.

4.4.1 Motivation and Social Hierarchy
Grey wolves are part of the Canidae family and are considered apex predators, meaning that they are at the top of the food chain. They live in packs of, on average, 5–12 individuals and respond to a very strict, dominant leadership hierarchy, as shown in Fig. 4.8.


Fig. 4.8 Hierarchy of grey wolf

The leaders of the pack are known as the alphas, comprising a dominant male and female that make all the important decisions affecting the growth and welfare of the pack. These wolves are not necessarily the strongest of the pack, but they are considered the most suitable to lead and manage it. They are also responsible for creating rules for the pack, maintaining the pack's hunting territory, and deciding on hunting groups and when to go hunting. The Alphas' key role is to ensure that the pack maintains a close family focus and acts only for the well-being of the pack. The second level within the dominance hierarchy is occupied by the Beta wolves. These wolves not only relay the Alphas' commands for the pack to follow, but also assist in making decisions and organizing the pack's activities; they are committed and loyal to the pack and ensure the reinforcement of the Alphas' decisions. Should something happen to the Alpha wolves, or should they grow old, the Beta wolves are the most suitable candidates to replace them. They can sometimes call hunts and decide when to hunt. The Beta wolves' key role is to act as advisors to the Alpha wolves and to ensure that the pack follows their commands. The wolves on the third level of the hierarchy are the subordinates, known as the Delta wolves, which are in effect in training to become Beta wolves. The Delta wolves rank below the Alpha and Beta wolves but above the wolves of the last level, the Omegas. The Delta wolves are made up of scouts who warn the pack of possible dangers, sentinels who ensure the safety of the pack, elders that were once at the top of the hierarchy, hunters who ensure food abundance, and caretakers who keep the pack in a healthy state. The wolves on the last level of the hierarchy are the Omegas.
These wolves hold the lowest rank; normally they are the young wolves that are still learning the behavior of the pack, but they can also be wolves demoted for bad behavior, or wolves that have returned after leaving the pack. They must submit to the dominant wolves and are usually the last to eat. Although they are not at the top of the hierarchy, they are vital to the pack: losing the Omega wolf causes internal fighting, as observed in nature. To mathematically model the social hierarchy of the wolves, the Alpha wolf (α), Beta wolf (β) and Delta wolf (δ) are considered the first, second and third best solutions, respectively. These wolves guide the hunting process (the optimization) and the Omega wolves (ω) follow. The solution of each wolf derives from its individual performance on the problem domain, denoted as its fitness. Each wolf consists of a set of traits that are unique to it, similar to the chromosomes in genetic algorithms, as shown in Fig. 4.9. The set of traits of each wolf provides it with the knowledge of how far it is from the prey. The position is determined by evaluating each wolf with a specific objective function designed for the optimization problem to be solved.

Fig. 4.9 Wolf representation for GWO

4.4.2 Hunting (Searching of Prey)

Grey wolves are well known for their pack collaboration and their intelligence in taking down prey much larger than themselves. Like other optimization algorithms, the GWO search process starts by creating a random population of grey wolves (candidate solutions). Figure 4.10 shows a representation of the GWO population, known as a pack. Each randomly generated wolf of the pack is located randomly in the search area, with the intention that together they occupy as much as possible of the space where a possible prey may be found. While hunting, wolves are opportunists: they test their prey to determine its weakness and vulnerability. Each wolf in the pack has a specific role in the search for this prey. The alpha, beta and delta wolves are in charge of leading the search and locating the prey, while the omega wolves update their positions around the prey once it is identified. In the search space, the location of the weakest prey is not known.

Fig. 4.10 Wolf pack representation for GWO


To cover as much of the search space as possible, all wolves of the pack diverge from each other, continuously searching for and selecting the prey that is in some way the weakest or most vulnerable individual of its herd, whether because it is sick, injured, old or young. A stronger prey may sometimes find itself in a vulnerable situation, making it the target instead. During hunting in GWO, the wolf that performs best (closest to the prey) after each iteration is updated as the alpha wolf; the second and third best solutions are updated as the beta and delta wolves, respectively. The hunting process continues through the iterations until an end criterion is met.
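This ranking step can be sketched as follows (an illustrative helper assuming a minimization objective; rank_pack and its arguments are hypothetical names, not part of the original formulation):

```python
import numpy as np

def rank_pack(positions, fitness):
    """Return alpha, beta and delta: the three wolves closest to the prey."""
    order = np.argsort(fitness)        # lower fitness = closer to the prey
    alpha, beta, delta = positions[order[:3]]
    return alpha, beta, delta
```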

4.4.3 Encircling of Prey

Once the pack identifies the weakest prey, the wolves position themselves around it to instill fear and eventually attack. If the prey has many factors in its favor, the wolves initiate a pursuit until the odds for the pack improve. The wolves' winning maneuver in many cases is to separate the prey from its herd so that it becomes more vulnerable and easier to catch. When the prey poses no threat to the pack, the wolves successfully encircle it so that it eventually panics and succumbs to their attack. It is interesting to note that the rest of the pack only assists in maximizing the success of the attack if needed. To mathematically model the wolves' behavior of encircling the prey, the following equations are used:

D = |C · X_p(t) − X(t)|   (4.17)

X(t + 1) = X_p(t) − A · D   (4.18)

where t is the current algorithm iteration, A and C are the exploration coefficient vectors, X_p is the prey position vector, and X is the wolf position vector. The three best solutions α, β and δ have the best knowledge of the prey; their distances from a search-agent wolf are given by:

D_α = |C_1 · X_α − X|,   D_β = |C_2 · X_β − X|,   D_δ = |C_3 · X_δ − X|   (4.19)

where X_α is the position of the alpha, X_β the position of the beta and X_δ the position of the delta. After each iteration, all search-agent (ω) wolves update their positions according to the positions of the best search agents, landing somewhere between the best wolves and the prey, using:


Fig. 4.11 Position update in GWO

X_1(t) = X_α − A_1 · D_α,   X_2(t) = X_β − A_2 · D_β,   X_3(t) = X_δ − A_3 · D_δ

X(t + 1) = (X_1(t) + X_2(t) + X_3(t)) / 3   (4.20)

where C_1, C_2, C_3 and A_1, A_2, A_3 are all random vectors and X is the position of the search agent to be updated. As shown in Fig. 4.11, the final position of a wolf lies anywhere between the best wolves and the prey. The α, β and δ define a circular area estimating the position of the prey, and the other search-agent wolves update their positions randomly inside this circle, around the prey.
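The update of Eqs. 4.19 and 4.20 can be sketched as follows (illustrative; the coefficient sampling anticipates Eq. 4.24, and all names are placeholders):

```python
import numpy as np

def update_position(X, X_alpha, X_beta, X_delta, a, rng=None):
    """Eqs. 4.19-4.20: move a search-agent wolf toward alpha, beta and delta."""
    rng = rng or np.random.default_rng()
    X_new = np.zeros_like(X)
    for leader in (X_alpha, X_beta, X_delta):
        A = 2 * a * rng.random(X.size) - a     # Eq. 4.24, values in [-a, a]
        C = 2 * rng.random(X.size)             # Eq. 4.24, values in [0, 2]
        D = np.abs(C * leader - X)             # Eq. 4.19
        X_new += leader - A * D                # one of X_1, X_2, X_3 in Eq. 4.20
    return X_new / 3.0                         # mean of the three estimates
```

With a = 0 the update reduces to the plain average of the three leader positions, which makes the convergence behavior of the final iterations easy to see.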

4.4.4 Attacking of Prey

The hunting process concludes with the attack on the prey. In nature, the α, β and δ are in charge of isolating the prey from its herd and of consecutively weakening it by issuing timely attacks to specific body parts, such as the nose and throat, while on the chase or when the prey has stopped moving. The attack can be frontal, from the rear or from the sides, depending on the type, size and strength of the prey. In GWO, the attacking process is simulated in the final iterations. The mathematical model used in GWO to diverge and search for a weaker prey in the initial stage, and to converge and attack during the last stage, is given by:

|A| > 1 and |C| > 1  →  Exploration
|A| < 1 and |C| < 1  →  Exploitation   (4.21)

Fig. 4.12 Searching versus attacking of prey

where A and C are used in this stage to force exploration in the initial phase and, at the same time, oblige the algorithm to converge in the final phase. A is generated randomly in each iteration with values in the interval [−a, a], where a is decreased linearly from 2 to 0 over the iterations. Whenever the random values generated for A are within the range [−1, 1], the search-agent wolf can be located in any position between its last position and the prey. Figure 4.12 shows the effect of A in GWO in forcing (a) the search or (b) the attack on the prey. The other coefficient, C, favors exploration and contains random values in [0, 2]. The vector C assists in stochastically emphasizing (C > 1) or deemphasizing (C < 1) the effect of the prey, with the objective of giving GWO a random behavior throughout the optimization that continuously favors the exploration process. As opposed to A, C is not linearly decreased, and it shows its greatest importance in the final iterations, where it prevents stagnation in local optima.

4.4.5 DOCRs Coordination Using GWO

The Grey Wolf Optimizer has been used to obtain competitive results for many complex engineering problems. In the literature, there are no reports of solving the DOCR coordination problem with GWO; hence, this research focuses on the implementation of this algorithm to address the problem. Two approaches are considered to explore the multi-objectivity of the problem: the a priori and the a posteriori approach. This section details the first approach, giving a general overview of the various considerations taken and the modifications made to the algorithm. The population used in GWO is known as the pack's positions, because every individual in the pack represents the position of a wolf rather than its survival capabilities. Every individual of the pack is referred to as a search-agent wolf and consists of traits that provide it with an overall knowledge of its distance from a possible prey. The creation of the initial population for the GWO algorithm is similar to the procedure used for the GA: the traits are randomly generated within the boundary limits given by the relay settings. Each search-agent wolf is evaluated using the OF given in Chap. 3, and the resulting fitness value determines the distance of each search-agent wolf from the prey. The initial iterations in GWO serve to explore the entire search area, trying to locate the position of a possible prey; the final stages of the algorithm serve as the attack on the prey. After each iteration, the wolf closest to the prey is the alpha (α), the second closest is the beta (β) and the third closest is the delta (δ). Figure 4.13 shows how the evaluation process assists in ranking the wolves after each iteration of the GWO algorithm.

4.4.5.1 Hunting (Exploration Process)

The hunting mechanism is led by the alpha (α), beta (β) and delta (δ) wolves. These wolves have the best knowledge of the prey's position and are the three best solutions obtained in each iteration under the original GWO formulation. A slight modification was made to the initial formulation of GWO to improve the ranking of the wolves during the hunting process. The three best solutions obtained during the evaluation process are saved as α, β and δ only in the first iteration of the algorithm. In successive iterations, the only solution stored is the best one, which takes the α position; the β and δ positions are replaced with the previous positions of α and β, respectively, so as not to lose the best positions obtained in the previous iteration.

Fig. 4.13 Wolf ranking mechanism based on its distance from the prey

The original formulation of the GWO algorithm lets the wolf pack hunt for the weakest prey during the first half of the total iterations with the help of the adaptive values of a and A. During the initial iterations, the wolves diverge from each other to assist in searching for a better solution whenever a random value of A is greater than 1 or less than −1. The value of a decreases linearly from 2 to 0 over the iterations using:

a = 2 · (1 − t / Max_It)   (4.22)

The linear decrease characteristic of a is replaced here with an exponential decline to improve the behavior of the wolves during the hunting and attacking processes. The exponential decrease allows the GWO algorithm to spend less time searching for a prey and to focus all the efforts of the pack as soon as the prey is identified. The exponential decrease of a is given by:

a = 2 · exp(−(2t / Max_It)²)   (4.23)

The effect of both decrease characteristics of $\vec{a}$ is shown in Fig. 4.14. The exploration time is decreased drastically, to almost a quarter of the total algorithm iterations. The exponential decrease characteristic was considered because GWO tends to obtain good results in its initial stage.

Fig. 4.14 Comparison between linear and exponential decrease of $\vec{a}$ (values of $\vec{a}$ in p.u. versus iterations over 1000 iterations, with the exploration and exploitation regions marked)
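The two decay schedules can be compared numerically. The sketch below assumes the exponential form of Eq. 4.23 as reconstructed in this section, so the constant in the exponent should be treated as an assumption rather than the book's exact formula:

```python
import math

MAX_IT = 1000

def a_linear(t, max_it=MAX_IT):
    # Eq. 4.22: linear decrease from 2 to 0
    return 2.0 * (1.0 - t / max_it)

def a_exponential(t, max_it=MAX_IT):
    # Eq. 4.23 (as reconstructed): fast exponential decline from 2 toward 0
    return 2.0 * math.exp(-(2 ** 2) * t / max_it)

# Exploration roughly ends once a drops below 1 (|A| can no longer exceed 1).
t_lin = next(t for t in range(MAX_IT) if a_linear(t) < 1.0)
t_exp = next(t for t in range(MAX_IT) if a_exponential(t) < 1.0)
print(t_lin, t_exp)  # the exponential schedule leaves exploration much earlier
```

Under this schedule the exponential curve crosses $\vec{a} = 1$ well before the linear one, matching the behavior shown in Fig. 4.14.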

4.4 Grey Wolf Algorithm

4.4.5.2 Encircling of Prey

After the weakest prey and the three best wolf positions have been identified, the positions of the omega (ω) wolves are updated. The distance of each search agent wolf from the best wolves' positions is obtained using Eq. 4.19. The wolves are then located somewhere between the best wolves' positions and the possible prey. The position of a wolf can be updated according to the position of the potential prey. The adjustment of the values of $\vec{A}$ and $\vec{C}$ helps to obtain places around the best wolves from their current position. The vectors $\vec{A}$ and $\vec{C}$ are calculated using the decrease of the component $\vec{a}$ and the random vectors $\vec{r}_{1}$ and $\vec{r}_{2}$, with values in [0, 1]:

$$\vec{A} = 2\vec{a}\cdot\vec{r}_{1} - \vec{a}; \qquad \vec{C} = 2\vec{r}_{2} \qquad (4.24)$$

The effect of Eqs. 4.17 and 4.18 to update the position of a grey wolf is shown in Fig. 4.15, in (a) a two-dimensional and (b) a three-dimensional position vector. A wolf at the position (X, Y) can update its position according to the prey position (X*, Y*). Adjusting the values $\vec{A} = (1, 0)$ and $\vec{C} = (1, 1)$ produces the position (X* − X, Y*). Using the random vectors $\vec{r}_{1}$ and $\vec{r}_{2}$, any random position inside the space around the prey can be obtained. Each position update is a modification to the relay settings of the system.

Fig. 4.15 2D and 3D position vectors and possible next locations of a wolf

There are many possibilities to integrate mutation and other evolutionary operators to mimic the whole life cycle of the grey wolf. Each integration increments the execution time of the algorithm while increasing the opportunity to obtain better results. To keep the GWO algorithm as simple as possible, only mutation is considered in this research. The mutation integrated into GWO is the same Specific Gene Mutation (SGM) included in GA. All wolves in the pack are evaluated using Eq. 3.10 after each iteration to identify which coordination pairs do not coordinate. All primary and backup relay settings of the violated CPs are mutated, without affecting the remaining relay settings of the network, aiming to reduce the number of violations. The SGM provides diversity to the wolves so that they can explore the surroundings of the search space during the execution of the algorithm, especially during convergence in the final stage, to avoid local solutions. Figure 4.16 shows the SGM procedure for a specific violated CP (R4-R2).

Fig. 4.16 SGM in GWO for a violated CP
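The encircling step can be sketched in one dimension. This is a minimal illustration assuming the standard GWO distance and update rules, D = |C·Xp − X| and X' = Xp − A·D (the distance rule corresponds to the Eq. 4.19 referenced above); the coefficients follow Eq. 4.24:

```python
import random

def update_position(x, x_prey, a):
    """Move one wolf coordinate toward (or around) the prey."""
    r1, r2 = random.random(), random.random()  # r1, r2 in [0, 1]
    A = 2.0 * a * r1 - a                       # Eq. 4.24: A = 2*a*r1 - a
    C = 2.0 * r2                               # Eq. 4.24: C = 2*r2
    D = abs(C * x_prey - x)                    # distance from the prey
    return x_prey - A * D                      # candidate new coordinate

random.seed(0)
# With a < 1, |A| < 1 and every update stays in a bounded region around the
# prey (exploitation): here |A| <= 0.5 and D <= 0.8, so |new - 0.5| <= 0.4.
positions = [update_position(x=0.8, x_prey=0.5, a=0.5) for _ in range(1000)]
assert all(abs(p - 0.5) <= 0.4 + 1e-9 for p in positions)
```

With a larger $\vec{a}$ (early iterations), |A| can exceed 1 and the same update pushes wolves away from the prey, which is the exploration behavior described in Sect. 4.4.5.1.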

4.4.5.3 Attacking of Prey (Exploitation)

The pack concludes the hunt by attacking the prey after it stops moving. When the value of $\vec{a}$ decreases below one, the range of $\vec{A}$ also decreases, forcing the algorithm to converge and simulating the attack process in nature. With the original formulation of the GWO algorithm, the attacking process commences after 50% of the total iterations and finalizes when the attack is completed ($\vec{a} = 0$). The proposed modification of the GWO to solve the coordination problem permits the attacking process to commence after the first quarter of the total iterations. The value of $\vec{a}$ does not reduce to 0 as in the original GWO, but approximates it in the final iterations. The behavior of GWO using the exponential decrease allows the algorithm not to wait until 50% of the iterations have completed to converge; instead, convergence commences as soon as $\vec{a} < 1$. The effect of the random values of $\vec{A}$ during the attacking of the prey can be seen in Fig. 4.15. Another coefficient that assists with exploration and exploitation is the coefficient vector $\vec{C}$, calculated with Eq. 4.24. The end criterion is determined by the maximum number of iterations defined at the start of the optimization. The final result of the alpha wolf at the last iteration is the best solution obtained during the optimization: the alpha score provides the best fitness value, and the alpha position provides the set of relay settings that obtains the minimum operation time for any fault in the network.

4.4.5.4 Block Diagram

The original formulation is modified to include a nature-inspired genetic operator named the Specific Gene Mutation (SGM) to solve the protection coordination using GWO. Figure 4.17 shows a flowchart of the steps taken to implement GWO to solve the coordination problem using DOCRs. The $\vec{X}_{\alpha}$, $\vec{X}_{\beta}$, $\vec{X}_{\delta}$ and $\vec{X}_{\omega}$ represent the position vectors of the alpha, beta, delta and omega wolves respectively.

Fig. 4.17 Grey Wolf optimizer algorithm flowchart

4.4.6 DOCRs Coordination Using MOGWO

The multi-objective Grey Wolf Optimizer (MOGWO) is based on the recently proposed Grey Wolf Optimizer (GWO). This formulation can effectively approximate the true Pareto-optimal solutions of multi-objective problems [6]. The reason for trying to solve the DOCRs coordination using various optimization techniques is the No Free Lunch (NFL) theorem, which indicates that there is no optimization technique logically proven to solve all optimization problems. The initial population used in MOGWO is created similarly to GWO's population, with the only exception that each search agent is constructed in a structured array format. Each field contains related data for each search agent, including its position (the network relay settings) and the resulting cost of each function evaluated. The resulting cost is an array with the individual evaluation of each objective function to be minimized, given by:

Minimize:
$$F(\vec{x}) = \left[f_{1}(\vec{x}),\; f_{2}(\vec{x}),\; f_{3}(\vec{x})\right] \qquad (4.25)$$

(a) Minimization of the primary relay operation times
$$f_{1}(\vec{x}) = \min\left\{\sum_{p=1}^{NCP} t_{p}\right\}$$

(b) Minimization of the backup relay operation times
$$f_{2}(\vec{x}) = \min\left\{\sum_{b=1}^{NCP} t_{b}\right\}$$

(c) Minimization of the coordination time interval errors
$$f_{3}(\vec{x}) = \min\left\{\sum_{pb=1}^{NCP} CTI_{pb}\right\}$$

where NCP is the total number of coordination pairs and pb is a coordination pair. The objective functions $f_{1}(\vec{x})$, $f_{2}(\vec{x})$ and $f_{3}(\vec{x})$ are subject to the coordination inequality constraints previously discussed. The construction of the grey wolf pack is depicted in Fig. 4.18.

Fig. 4.18 MOGWO construction of Wolfpack

The GWO formulation is the foundation and motivation of the MOGWO algorithm. MOGWO provides a different approach to solving the DOCRs coordination by adding three operators, plus the specific gene mutation, to the simple formulation of GWO explained in Sect. 4.4. The three operators integrated into the GWO algorithm to solve the problem using a multi-objective approach are as follows:
• Integration of an archive to save all non-dominated solutions.
• Integration of a grid mechanism to improve the diversity of the non-dominated solutions.
• Integration of a leader selection mechanism based on the alpha, beta and delta wolves to update and replace the solutions in the archive.
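The cost array of Eq. 4.25 can be evaluated per wolf as sketched below. The coordination-pair operating times are hypothetical sample data, not results from the test systems:

```python
# Hypothetical operating times for a few coordination pairs (seconds).
pairs = [
    {"tp": 0.30, "tb": 0.65, "cti_err": 0.05},
    {"tp": 0.25, "tb": 0.60, "cti_err": 0.00},
    {"tp": 0.40, "tb": 0.80, "cti_err": 0.10},
]

def cost_array(pairs):
    """Return [f1, f2, f3]: sums of primary times, backup times and CTI errors."""
    f1 = sum(p["tp"] for p in pairs)       # primary relay operation times
    f2 = sum(p["tb"] for p in pairs)       # backup relay operation times
    f3 = sum(p["cti_err"] for p in pairs)  # coordination time interval errors
    return [f1, f2, f3]

print(cost_array(pairs))
```

Each wolf in the structured array carries one such cost vector, which is what the dominance comparisons of the following subsections operate on.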

4.4.6.1 Archive Integration

The archive is an operator responsible for storing or retrieving non-dominated Pareto optimal solutions after each iteration. An archive controller acts as a key module that controls what enters or exits the archive. The size of the archive depends on the maximum number of desired non-dominated solutions. The archive is in constant activity throughout the optimization process because, after each iteration, the archive residents are compared against all solutions obtained. There are three different cases used by the archive controller to determine whether a solution will enter the archive. These cases are as follows:
1. A solution will not be allowed to enter if it is dominated by at least one archive resident.
2. A solution will be allowed to enter if it dominates one or more archive residents.
3. If neither the solution nor any archive resident dominates the other, the solution will be allowed to enter if the archive is not full.


In cases where a solution dominates more than one archive resident, all the dominated residents are removed from the archive. The non-dominated solutions represent the superior solutions after each iteration. The archive, comprising all non-dominated solutions, is used at the end of the optimization process to select the best compromise solution with an extraction method.
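The three archive-controller cases can be sketched directly from the dominance relation. This is a minimal version: the fixed archive size and the crowding-based removal of the grid mechanism (Sect. 4.4.6.2) are simplified to a plain length check:

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, solution, max_size=4):
    # Case 1: rejected if any resident dominates the candidate.
    if any(dominates(res, solution) for res in archive):
        return archive
    # Case 2: the candidate removes every resident it dominates.
    archive = [res for res in archive if not dominates(solution, res)]
    # Case 3: mutually non-dominated -> enter only while there is room
    # (a full archive would instead trigger the grid mechanism).
    if len(archive) < max_size:
        archive.append(solution)
    return archive

arch = []
for s in [(3, 3, 3), (2, 4, 3), (1, 1, 1), (5, 5, 5)]:
    arch = update_archive(arch, s)
print(arch)  # (1, 1, 1) dominates and removes the earlier residents
```

Minimization is assumed, matching the objective functions of Eq. 4.25.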

4.4.6.2 Grid Mechanism Integration

To improve the solutions in the archive, a grid mechanism is integrated into the archive control module. If the archive is full, the grid mechanism is initiated to re-arrange the segmentation of the objective space, using a grid inflation parameter and the desired number of grids per dimension, so that a segment can be selected to omit one of its solutions. The incoming solution is inserted in the least crowded segment, so that the diversity of the final approximated Pareto optimal front is improved. The resulting segments, where the non-dominated solutions are stored, are not necessarily of the same size and are known as hypercubes. The probability of a solution being selected and deleted from the archive increases proportionally to the number of solutions in its hypercube. Whenever the archive is full and a solution needs to be inserted, the hypercube with the most non-dominated solutions is selected, and one of its solutions is omitted at random. Should a solution be inserted outside the current grid, all segments are extended to cover it.
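One way to realize the segmentation is to index each archived solution by a hypercube coordinate in the objective space. The sketch below is a simplification: grid inflation is ignored and the number of grids per dimension is fixed:

```python
from collections import Counter

def hypercube_index(cost, mins, maxs, n_grids=10):
    """Map an objective vector to the hypercube (segment) that contains it."""
    idx = []
    for c, lo, hi in zip(cost, mins, maxs):
        width = (hi - lo) / n_grids or 1.0   # guard against a flat dimension
        idx.append(min(int((c - lo) / width), n_grids - 1))
    return tuple(idx)

archive = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.12, 0.22)]
mins = [min(c[d] for c in archive) for d in range(2)]
maxs = [max(c[d] for c in archive) for d in range(2)]
counts = Counter(hypercube_index(c, mins, maxs) for c in archive)
most_crowded = max(counts, key=counts.get)
# When the archive is full, a member of the most crowded hypercube
# would be removed at random to make room for the incoming solution.
print(most_crowded, counts[most_crowded])
```

Recomputing mins and maxs whenever a solution falls outside the grid corresponds to the segment extension mentioned above.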

4.4.6.3 Leader Selection Mechanism Integration

In GWO, the three best solutions after each iteration are selected as the alpha, beta and delta wolves. These solutions are responsible for guiding the omega wolves to promising search regions, with the hope of locating the solution closest to the global optimum. In MOGWO, the selection cannot be made as in GWO because of the concept of Pareto optimality; instead, a leader selection mechanism is inserted to assist in the selection process. All saved non-dominated solutions create the Pareto-optimal front obtained for the DOCRs coordination using the multi-objective approach. Figure 4.19 shows an example of the resulting Pareto-front for a maximum load scenario.

Fig. 4.19 MOGWO Pareto-front evaluation (objective space: tp, tb and CTI; grey wolves and non-dominated solutions)

The selection mechanism chooses a non-dominated Pareto optimal solution from the least crowded segments and assigns it firstly as the alpha, secondly as the beta and lastly as the delta wolf. Similar to the parent selection method in GA, the probability of a solution being selected from the hypercube depends on the roulette wheel method. The total number of segments in the roulette wheel depends on the total number of Pareto optimal solutions obtained in the selected hypercube. If the least crowded hypercube contains more than three solutions, the three lead wolves are selected from it, ensuring that a wolf is not selected twice. If there are exactly three non-dominated solutions in the chosen hypercube, then each is randomly assigned as either the alpha, beta or delta wolf. For a selected hypercube with fewer than three solutions, not all lead wolves can be selected from it; hence, the second least crowded segment is chosen to select the remaining leaders.
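The leader choice from the least crowded hypercubes can be sketched as follows. This is a simplification of the roulette wheel described above: every solution in a segment gets an equal slice, and segments are visited from least to most crowded:

```python
import random

def select_leaders(segments, n_leaders=3):
    """Pick alpha, beta, delta from the least crowded segments without repeats."""
    leaders = []
    for _, members in sorted(segments.items(), key=lambda kv: len(kv[1])):
        pool = [m for m in members if m not in leaders]
        while pool and len(leaders) < n_leaders:
            pick = random.choice(pool)   # equal-probability roulette slice
            leaders.append(pick)
            pool.remove(pick)
        if len(leaders) == n_leaders:
            break
    return leaders  # [alpha, beta, delta]

random.seed(3)
# Hypothetical segments: hypercube (1, 1) is least crowded, so two leaders
# come from it and the third comes from the next least crowded segment.
segments = {(0, 0): ["w1", "w2", "w3", "w4"], (1, 1): ["w5", "w6"]}
alpha, beta, delta = select_leaders(segments)
assert len({alpha, beta, delta}) == 3
```

The segment and wolf names are illustrative only; in MOGWO the members would be archived solutions grouped by their hypercube index.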

4.4.6.4 MOGWO Block Diagram

See Fig. 4.20.

4.5 Evaluation

The coordination problem is solved using three of the previously discussed optimization techniques. The settings obtained during the optimization process directly affect the operation times and characteristics of the network relays. A reduced dial or a smaller pickup is not directly related to the reduction of the overall operation time; hence, the optimization is guided by the reduction of the operation times instead. The benchmark algorithm GA is used to compare performance among the ACO, DE and GWO (MOGWO) algorithms, to determine whether the proposed solution provides, on average, better operation times for the network relays. The two test systems, IEEE-14 and IEEE-30 bus, are complex interconnected systems that have proven difficult to coordinate using both manual and graphical techniques. Each test system contains relays that form more than one coordination pair with others in the network, making them even harder to coordinate.


Fig. 4.20 Grey Wolf optimizer algorithm flowchart

4.5.1 Evaluation Among ACO and DE

In this section, a comparison of the results of the ACO and DE-Tri algorithms is presented. The interconnected IEEE 14 and 30 bus systems are used, and the results of each algorithm are averaged over 20 simulation runs. The 14 bus system has been simulated at the maximum, medium and minimum load operating conditions to compare the performance of the algorithms, whereas the 30 bus system has been simulated only at the maximum load operating condition at 1000 and 5000 iterations (as stopping criteria) to focus on the performance of the DE family algorithms. DE has continuous dial and k settings, while ACO has discrete settings. Bear in mind that the AS-graph is constructed from the discrete step settings; as the step size becomes more continuous (smaller), the size of the AS-graph increases and therefore the execution time increases. So, for real-time coordination, the step sizes (dial and k) were chosen as presented in Table 4.3: not too big (which would cause poor-quality results) and not too small (which would cause long execution times).

4.5.1.1 IEEE 14 Bus System

The averaged results are presented in Tables 4.4, 4.5 and 4.6. From these tables it can be seen that DE-Tri has the minimum execution time for coordinating the 14 bus system, while ACO has the highest execution time, which is not desired. From Fig. 4.21 it can be seen that as the load operating condition decreases, the fitness value of the three algorithms increases. The notable difference is that the fitness of ACO increases aggressively, while the DE family algorithms do not suffer drastic changes.

Table 4.3 Parameter settings of ACO and DE

Parameters        ACO          DE
CTI               0.3          0.3
Dial              [0.05:2.0]   [0.05:10.0]
k                 [1.4:1.6]    [1.4:1.6]
Dial step         0.05         Continuous
k step            0.01         Continuous
Q                 100          –
R                 5            –
C                 –            0.5
F                 –            0.8
Cr                –            0.5
Individual-ants   500          500
Iterations        1000         1000


Table 4.4 Execution time, fitness, number of violations and standard deviation of ACO and DE-Tri for the 14 bus system

Algorithm   t (s)   f(x)     NV     t-SD     f(x)-SD   f(x)-max   f(x)-min
Maximum load operating condition at 1000 iterations
ACO         364     3.19     0.00   32.44    0.21      3.73       2.86
DE-Tri      94      0.99     0.00   9.10     0.01      1.01       0.98
Medium load operating condition at 1000 iterations
ACO         382     253.19   1.25   37.72    88.66     403.05     202.95
DE-Tri      102     1.36     0.00   6.23     0.02      1.42       1.34
Minimum load operating condition at 1000 iterations
ACO         418     662.57   3.30   143.58   114.09    802.69     403.11
DE-Tri      92      2.21     0.00   6.96     0.19      2.90       2.07

Table 4.5 Primary, backup operation time and CTI of ACO and DE-Tri for the 14 bus system

Algorithm   tp     tb     CTI
Maximum load operating condition
ACO         0.65   1.93   1.28
DE-Tri      0.21   0.57   0.36
Medium load operating condition
ACO         0.67   1.78   1.11
DE-Tri      0.32   0.71   0.39
Minimum load operating condition
ACO         0.58   1.28   0.71
DE-Tri      0.59   1.01   0.42

Table 4.6 Number of active relays, sensitive coordination pairs and total coordination pairs of the 14 bus system

Operating condition   Active relays   NCP   NCP Total
Maximum load          30              40    51
Medium load           31              45    57
Minimum load          31              50    57

From Fig. 4.22 it can be seen that the DE family algorithms are very capable of finding quality CTI results, whereas the results of ACO may be acceptable but are not as competitive as those of the DE family.

4.5.1.2 IEEE 30 Bus System

The averaged results over 20 simulation runs of each algorithm on the 30 bus system at the maximum load operating condition (1000 and 5000 iterations) are presented in Table 4.7, and the most important results are outlined in the following graphs.

Fig. 4.21 Graphical comparison of averaged fitness among ACO and DE-Tri for the 14 bus system

Fig. 4.22 Graphical comparison of averaged CTI among ACO and DE-Tri for the 14 bus system

Table 4.7 Execution time, fitness, number of violations and standard deviation of ACO and DE-Tri for the 30 bus system

Algorithm   t (s)    f(x)      NV     t-SD    f(x)-SD   f(x)-max   f(x)-min
Maximum load operating condition at 1000 iterations
ACO         675.19   1384.40   6.50   42.89   182.37    1804.90    1004.50
DE-Tri      159.78   2.98      0.00   13.20   0.24      3.55       2.69
Maximum load operating condition at 5000 iterations
DE-Tri      812.47   1.87      0.00   60.56   0.04      1.98       1.85

From Table 4.7 it can be seen that the averaged execution times of DE-Tri at 1000 and 5000 iterations compare very favorably with ACO's, and the standard deviation of the execution time of DE-Tri is the best. Regarding the optimization algorithms, ACO is a very popular metaheuristic in many fields, but it has shown deficiencies in handling the complicated and highly constrained DOCR coordination problem: poor execution time, fitness value and number of violations, and a lack of robustness to changes in the load operating condition. On the other hand, the DE-Tri family algorithms have shown competitive results on both the IEEE 14 and 30 bus systems at different load operating conditions and different iteration runs.

4.5.2 Evaluation of GWO and MOGWO Algorithms

This section presents the simulation results of the test cases used to determine whether any of the proposed algorithms, mainly the GWO and MOGWO formulations, provide better results than the benchmark algorithm GA. For the optimization process, a total of 500 chromosomes is considered for the GA on both test systems. Only 60 search agents are used for both the GWO and MOGWO algorithms on the IEEE-14 bus test network at maximum load, and a total of 100 search agents for the IEEE-30 bus test network at full load. A total of 20 repetitions of 1000 iterations is performed for each algorithm to determine its robustness. The test cases used to evaluate the implemented algorithms are as follows. The first test case determines whether the GWO and MOGWO algorithms outperform the benchmark GA on the base case, namely the maximum load scenario. The second and third test cases intend to demonstrate whether the inclusion of the Specific Gene Mutation (SGM) or the modification of the linear decrease of $\vec{a}$ to an exponential decline improves the results obtained with the GWO and MOGWO algorithms, respectively. The fourth test case determines whether the expansion of the TDS and PSM ranges improves the results of the GWO algorithm with regard to the number of coordination pair violations and the operation times of the DOCRs. The fifth case introduces the TCC as a third degree of freedom, to determine whether using different TCCs improves the results of the algorithms. Lastly, the sixth test case evaluates the objective function expanded to include two levels of fault current, namely the maximum near-end and minimum far-end fault currents, to determine whether coordination is maintained throughout the TCCs. The first three cases are primarily used to determine the best algorithm among the three implemented for this research, that is, the algorithm that provides the overall best performance on both test systems.
The last three test cases are used mainly to evaluate the best algorithm obtained from the previous tests, to determine whether the results improve when other scenarios are considered. Each table presented shows the compact results of each test case. Best is the average operation time of the DOCRs for the best repetition of the algorithm, Average is the mean value of all the mean operation times obtained for each algorithm repetition, and STD is the standard deviation of the repetitions. NV represents the number of coordination pairs that did not coordinate at the end of the optimization process, and TDS and PSM are the average TDS and PSM settings of the DOCRs obtained for the best algorithm repetition. In the figures presented, (a) shows the average operation times (tp, tb and CTI) of the best repetition and the average operation times (Atp, Atb and ACTI) of the 20 repetitions, (b) shows the number of violations as a percentage (%NV), the average TDS and PSM settings and the algorithm's standard deviation, while (c) shows a box plot of the algorithm's overall performance.

4.5.2.1 Case I—Algorithms Comparison

The first test case is the implementation and comparison of the GA, GWO and MOGWO formulations to solve the DOCRs coordination problem on both complex test systems. For this test case, the Specific Gene Mutation (SGM) is included in all the implemented algorithms, and the linear decrease of $\vec{a}$ is considered for both the GWO and MOGWO algorithms. The compact results obtained for this test case are shown in Table 4.8 and Fig. 4.23 for the IEEE-14 bus test system and in Table 4.9 and Fig. 4.24 for the IEEE-30 bus test system.

Table 4.8 Results for Test case I on IEEE 14-bus network system

          GA                          GWO                         MOGWO
          tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best      0.7858   1.0901   0.4138    0.5640   0.8649   0.3998    0.6471   1.1960   0.7291
Average   0.8197   1.1776   0.4541    0.6994   1.0138   0.4396    0.7165   1.2411   0.7416
Std.      0.0575   0.0665   0.0201    0.0951   0.1303   0.0333    0.1347   0.2695   0.1628
NV        3                           3                           4
TDS       1.021                       0.858                       0.987
PSM       1.445                       1.444                       1.448
Overall   10.3519                     9.5423                      12.2743

Bold numbers represent the best performance results

Fig. 4.23 Comparison of GA, GWO, and MOGWO on IEEE 14-bus system

Analyzing Table 4.8 and Fig. 4.23, it can be clearly observed that the GWO algorithm outperforms both the GA and MOGWO algorithms on the IEEE-14 bus test network. The average operation times obtained for the best repetition, and the average operation times of the 20 repetitions, were lower with GWO than with the other two algorithms. The final number of CP violations for the best algorithm repetition was three for both GA and GWO and four for MOGWO. The averaged TDS and PSM settings obtained with GWO were lower than those obtained with the GA and MOGWO algorithms. GA had a better standard deviation than the GWO, but overall it was not better than the GWO algorithm on this test system, as can be seen in the box plot presented.

Table 4.9 Results for Test case I on IEEE 30-bus network system

          GA                          GWO                         MOGWO
          tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best      0.7956   1.2041   0.5648    0.7336   1.0974   0.5710    0.8587   1.3507   0.7805
Average   0.8298   1.2270   0.5830    0.7571   1.0799   0.5849    0.7292   1.1929   0.7919
Std.      0.0426   0.0568   0.0326    0.0404   0.0462   0.0276    0.1360   0.2220   0.1445
NV        14                          14                          15
TDS       1.198                       1.118                       1.240
PSM       1.466                       1.468                       1.480
Overall   22.0001                     21.5250                     23.9267

Bold numbers represent the best performance results

Fig. 4.24 Comparison of GA, GWO, and MOGWO on IEEE 30-bus system

The results obtained for the IEEE-30 bus system also show that the overall performance of GWO is superior to that of the GA and MOGWO algorithms. The MOGWO did not perform well on either test system, with the exception of the average primary time obtained over all algorithm repetitions on the IEEE-30 bus. The best average primary and backup operation times and TDS settings of the DOCRs for the best repetition were achieved with the GWO algorithm. GA obtained the best CTI and average PSM settings for the best algorithm repetition, and the best average primary operation time for the 20 repetitions. On the other hand, GWO obtained the best average backup operation time and standard deviation of the 20 repetitions. GA and GWO had 14 CP violations, and MOGWO had 15.

The best fitness value is the fitness value obtained on the last iteration of the GA and GWO algorithms, and the best out of all the non-dominated solutions for the MOGWO algorithm. The Average Fit and the Fitness STD are the mean fitness value and the standard deviation obtained for the 20 algorithm repetitions. The execution time is the time each algorithm takes to execute 1000 iterations. The fitness and execution time of each algorithm for both test systems are provided in Table 4.10.

Table 4.10 Comparison of fitness and execution time for both test systems

                     IEEE-14 bus                     IEEE-30 bus
                     GA        GWO       MOGWO       GA        GWO       MOGWO
Best Fit (p.u)       77.1388   76.2728   111.095     363.692   361.898   409.083
Average Fit (p.u)    106.385   102.073   153.820     417.252   427.730   543.608
Fitness STD          16.7176   13.4035   24.6790     32.4300   37.3069   52.6065
Execution time (s)   898.310   65.0500   587.800     2249.32   159.750   724.220

On the IEEE-14 bus test network, the GWO algorithm outperformed the GA and MOGWO algorithms in all aspects displayed in Fig. 4.23. On the IEEE-30 bus, GWO obtained a better fitness value and execution time for the best algorithm repetition, but GA obtained the best STD and average operation times of the 20 repetitions. The MOGWO algorithm did not perform well on either test system. As an overall conclusion, the GWO algorithm provides better overall results than the benchmark GA algorithm, as was speculated in the hypothesis of this research. The exploration and exploitation mechanisms help the algorithm to search globally and provide better results using a smaller population than GA. The MOGWO algorithm, although having an independent Pareto-optimal search mechanism, does not improve the results as expected when compared to the GA and GWO algorithms. To further improve the GWO and MOGWO algorithms, the effects of the SGM and of the decrease of the component $\vec{a}$ are analyzed in the following test cases, along with the expansion of the TDS and PSM settings range and the two fault level method, referred to as the two-point fault method. The benchmark GA algorithm is not considered for further tests, as it did not perform better than the proposed GWO algorithm in this test case. The MOGWO is further analyzed to determine whether it obtains improvements in the following test cases.

4.5.2.2 Case II—Effects of the SGM on the GWO and MOGWO

The second test case intends to demonstrate the effect of the Specific Gene Mutation (SGM) operator included in the GWO and MOGWO formulations to solve the DOCRs coordination problem. The specific gene mutation is not part of the original formulation of GWO and MOGWO; hence, its effect on the results is analyzed. The compact results obtained are shown in Table 4.11 and Fig. 4.25 for the IEEE-14 bus test network and in Table 4.12 and Fig. 4.26 for the IEEE-30 bus system.

Table 4.11 Results for Test case II on IEEE 14-bus network system

          GWOl                        GWOls                       MOGWOl                      MOGWOls
          tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best      0.5694   0.8570   0.3775    0.5640   0.8649   0.3998    0.6458   0.9983   0.5179    0.6676   1.1315   0.6023
Average   0.6358   0.9228   0.4074    0.6994   1.0138   0.4396    0.6770   1.1533   0.6889    0.7102   1.2324   0.7465
Std.      0.0833   0.0897   0.0243    0.0951   0.1303   0.0333    0.0807   0.1509   0.0947    0.0923   0.1785   0.1303
NV        3                           3                           4                           4
TDS       1.209                       1.240                       1.251                       1.332
PSM       1.448                       1.444                       1.504                       1.463
Overall   9.9248                      9.6236                      11.7623                     12.2869

Bold numbers represent the best performance results

Fig. 4.25 Comparison of GWO and MOGWO for test II on IEEE-14 bus

The GWOl and MOGWOl represent the GWO and MOGWO algorithms with the original linear decrease of $\vec{a}$ and without the Specific Gene Mutation (SGM), while GWOls and MOGWOls represent the GWO and MOGWO algorithms with the original linear decrease of $\vec{a}$ and with the inclusion of the SGM. Analyzing the results for the IEEE-14 bus test system, it can be observed that the specific gene mutation does improve the results of the GWO algorithm, but not those of the MOGWO algorithm. The DOCRs operation times and settings, on average for the best repetition and for all 20 repetitions, are better with the GWOls and MOGWOl formulations. The MOGWOls obtains better PSM settings, while the STD is better in the GWOl results. The results obtained for the IEEE-30 bus system with the inclusion of the SGM clearly show that GWOls and MOGWOl have the better overall performance. The GWOls is considered to perform better because it reduces the number of violated CPs, which resulted in a slight increase in its DOCRs operation times, excluding the CTI for the best repetition and the averaged backup relay time and STD of the 20 algorithm repetitions. The increase in the operation times resulted in an increase in the TDS settings, but not in the PSM settings. Similar to the IEEE-14 bus test network, the MOGWOl outperformed the MOGWOls, which confirms that the SGM does not improve the results of the MOGWO algorithm: all DOCRs operation times and settings are significantly better without the SGM. The MOGWO algorithm is still not better than the GWO formulation because of the resulting times, settings and number of violated CPs. As a general conclusion to this test case, the SGM does favor the GWO algorithm formulation on both test systems, although it increased the operation times and average settings of the DOCRs for the IEEE-30 bus along with the reduction of the number of violated CPs. On the other hand, the MOGWO algorithm does not benefit from the inclusion of the SGM.

Table 4.12 Results for Test case II on IEEE 30-bus network system

          GWOl                        GWOls                       MOGWOl                      MOGWOls
          tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best      0.7222   1.0932   0.6155    0.7336   1.0974   0.5710    0.7306   1.2002   0.7388    0.8587   1.3507   0.7805
Average   0.7515   1.0801   0.5820    0.7571   1.0799   0.5849    0.6886   1.1207   0.7281    0.7292   1.1929   0.7919
Std.      0.0353   0.0482   0.0260    0.0404   0.0462   0.0276    0.0865   0.1534   0.0929    0.1360   0.2220   0.1445
NV        15                          14                          15                          15
TDS       1.068                       1.118                       1.105                       1.240
PSM       1.471                       1.468                       1.471                       1.480
Overall   22.4929                     21.5250                     23.1166                     23.9267

Bold numbers represent the best performance results

Fig. 4.26 Comparison of GWO and MOGWO for test II on IEEE-30 bus

4.5.2.3 Case III—Comparison of the Linear and Exponential Decrease of $\vec{a}$

The third test case compares the results obtained with the linear and exponential decrease of ~ a given by Eqs. 4.14 and 4.15 respectively. The decrease of the component ~ a changes the behavior of the GWO and MOGWO algorithm to determine how much iteration is dedicated to the exploration and exploitation process as described in Sect. 4.4. The compact results obtained for this test case is shown in Table 4.13 and Fig. 4.27 for the IEEE-14 bus test network and in Table 4.14 and Fig. 4.28 for the IEEE-30 bus network system. The overall performance of the GWO algorithm including the exponential decrease of ~ a (GWOes) does improve the overall performance of the GWO algorithm but does not improve MOGWO’s. The average backup relay’s operating time and the CTI result for the best repetition and the 20 repetitions along with their corresponding standard deviation were superior to the original formulation of GWO using the linear decrease of ~ a. The average primary relay operating time is less on the GWOl but on average for the 20 repetition was best on GWOes. For the MOGWO formulation the use of an exponential decrease of ~ a does not improves the result when compared to the original formulation of the MOGWO algorithm, similar to what occurred in the previous test case. On the IEEE-30 bus test network tabulated results, the benefits to using an exponential decrease of ~ a instead of a linear decrease can be observed clearly. For all the different averaged results, the GWOes showed significant improvements with the only exception of the averaged PSM setting. The DOCRs operating times and settings along with the algorithm STD were better on GWOes. The MOGWO formulation showed better results on this test network with the inclusion of the exponential decrease of ~ a when compared to the original

Table 4.13 Results for test case III on the IEEE 14-bus network system (bold numbers in the original represent the best performance results)

           GWOls                        GWOes                        MOGWOl                       MOGWOe
           tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best       0.5640   0.8649   0.3998    0.5694   0.8570   0.3775    0.5179   0.9983   0.6458    0.6023   1.1315   0.6676
Average    0.6994   1.0138   0.4396    0.6358   0.9228   0.4074    0.6889   1.1533   0.6770    0.7465   1.2324   0.7102
Std.       0.0951   0.1303   0.0333    0.0833   0.0897   0.0243    0.0947   0.1509   0.0807    0.1303   0.1785   0.0923
NV         3                           3                           4                            4
TDS        0.858                       0.851                       0.872                        0.999
PSM        1.444                       1.448                       1.504                        1.463
Overall    9.5423                      9.2659                      11.3833                      11.9538


Fig. 4.27 Comparison of GWO and MOGWO for test III on the IEEE-14 bus

formulation with the only exception being the average performance for the 20 algorithm repetitions. MOGWO still does not provide better results than the GWO.

4.5.2.4 Case V—Including the TCC as an Additional Setting

The fifth test case is intended to demonstrate whether the inclusion of the TCC as a third degree of freedom in the relay settings reduces both the operating times of the network relays and the number of coordination pairs (CPs) violated. The value of the TCC in this research varies from 1 to 3 to choose among the moderately, very, and extremely inverse time-current curve characteristics (TCC) given in Table 4.15. Previous test cases kept the TCC fixed to the very inverse time-current curve characteristic, as it is the characteristic commonly used on DOCRs installed in utility networks. The compact results are shown in Table 4.16 and Fig. 4.29 for the IEEE-14 bus and the IEEE-30 bus network systems. Analyzing the test results obtained for the IEEE-14 bus test system, it can be observed that the overall results obtained with the GWO algorithm without the third degree of freedom are more favorable to the reduction of the DOCR average operating times and settings than those of the formulation that includes it. However, the formulation that includes the third degree of freedom (GWOes3D) does favor the reduction of the number of CPs violated, although at the cost of an increase in the DOCR operating times and settings. The results obtained for the IEEE-30 bus test network are similar, except for the best and average results over the repetitions, where GWOes3D was slightly better. The number of CPs violated also improves with the inclusion of the TCC.

Table 4.14 Results for test case III on the IEEE 30-bus network system

           GWOls                        GWOes                        MOGWOl                       MOGWOe
           tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best       0.7336   1.0974   0.5710    0.6923   1.0292   0.5338    0.7306   1.2002   0.7388    0.7135   1.1764   0.7081
Average    0.7571   1.0799   0.5849    0.7386   1.0367   0.5372    0.6886   1.1207   0.7281    0.6999   1.1326   0.7439
Std.       0.0404   0.0462   0.0276    0.0352   0.0342   0.0179    0.0865   0.1534   0.0929    0.0742   0.1292   0.0881
NV         14                          14                          15                           15
TDS        1.118                       1.043                       1.105                        1.103
PSM        1.468                       1.484                       1.471                        1.454
Overall    21.5250                     21.1825                     23.1166                      23.0233


Fig. 4.28 Comparison of GWO and MOGWO for test III on IEEE-30 bus

Table 4.15 IEEE and IEC standard relay characteristic constants

Standard     Curve type                       A         B         P
ANSI/IEEE    MI—moderately inverse            0.0515    0.1140    0.02
             VI—very inverse                  19.61     0.491     2.0
             EI—extremely inverse             28.2      0.1217    2.0
             NI—normally inverse              5.95      0.18      2.0
             STI—short-time inverse           0.02394   0.01694   0.02
IEC-60255    SI—standard inverse (C1)         0.14      0         0.02
             VI—very inverse (C2)             13.5      0         1
             EI—extremely inverse (C3)        80        0         2
             STI—short-time inverse (C4)      0.05      0         0.04
             LTI—long-time inverse (C5)       120       0         1
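The constants in Table 4.15 enter the standard inverse-time characteristic t = TDS · (A / (M^P − 1) + B), where M is the multiple of pickup current (the PSM). A minimal sketch, assuming this standard IEEE/IEC equation form (the equation itself is not reproduced in this excerpt, and the dictionary keys are our own shorthand):

```python
# Standard inverse-time overcurrent relay characteristic:
#   t = TDS * (A / (M^P - 1) + B)
# where M is the ratio of fault current to pickup current (the PSM).
# The (A, B, P) constants below are taken from Table 4.15.

CURVES = {
    "IEEE-MI": (0.0515, 0.1140, 0.02),
    "IEEE-VI": (19.61, 0.491, 2.0),
    "IEEE-EI": (28.2, 0.1217, 2.0),
    "IEC-SI": (0.14, 0.0, 0.02),
    "IEC-VI": (13.5, 0.0, 1.0),
    "IEC-EI": (80.0, 0.0, 2.0),
}

def operating_time(tds: float, m: float, curve: str = "IEEE-VI") -> float:
    """Relay operating time in seconds for time-dial setting tds and current multiple m."""
    if m <= 1.0:
        raise ValueError("relay only operates for currents above pickup (M > 1)")
    a, b, p = CURVES[curve]
    return tds * (a / (m ** p - 1.0) + b)
```

For example, an IEC very inverse curve with TDS = 1 and M = 5 trips in 13.5/(5 − 1) = 3.375 s.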

4.6 On-Line Coordination

Usually, off-line coordination is degraded under the highly dynamic operation of electric networks, for example under minimal load conditions, topological changes, and unregulated source operation. Recomputing the coordination every few minutes can both reduce the loss of sensitivity and limit the increase in the operating times of the relays. For this purpose, it is important to determine the execution time of the coordination algorithm using complex interconnected systems for a better evaluation. On-line coordination consists in re-coordinating all DOCRs for every change of network topology and element operation. The advantages of doing so are minimum

Table 4.16 Results for test case V on the IEEE-14 and 30 bus network systems (bold numbers in the original represent the best performance results)

           GWO14es                      GWO14es3D                    GWO30es                      GWO30es3D
           tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)   tp (s)   tb (s)   CTI (s)
Best       0.5694   0.8570   0.3775    0.5924   1.0421   0.5035    0.6923   1.0292   0.5338    0.6545   1.0788   0.6139
Average    0.6358   0.9228   0.4074    0.6491   1.0150   0.4736    0.7386   1.0367   0.5372    0.6857   1.0510   0.6317
Std.       0.0833   0.0897   0.0243    0.0971   0.1299   0.0453    0.0352   0.0342   0.0179    0.0563   0.0753   0.0568
NV         3                           2                           14                           13
TDS        0.851                       1.470                       1.043                        1.483
PSM        1.448                       1.873                       1.484                        1.741
Overall    9.2659                      9.8907                      21.1825                      21.1280


Fig. 4.29 Results including the TCC on GWO for IEEE-14 and 30 buses

Table 4.17 Average fitness and execution time obtained for each algorithm on both test systems

                      Execution time (s)   Fitness
IEEE-14 bus   GA      898.3                9.10
              GWO     94                   32.44
              MOGWO   364                  13.40
              DE      65.05                16.72
IEEE-30 bus   GA      2249                 32.43
              GWO     259.78               13.20
              MOGWO   675.19               1384.40
              DE      159.8                37.31

relay operating times, increased sensitivity, and the ability to withstand another unknown contingency. Moreover, coordinating DOCRs on-line helps meet these fundamental requirements. Table 4.17 shows the comparison of the average fitness values and execution times obtained for each algorithm on both test systems. In the tests carried out on the electrical test systems, the GWO stood out among the other algorithms evaluated, since it had the shortest execution time. Its fitness results, although not the best, are competitive with those of the other algorithms.

4.7 Summary

The proposed coordination increased the sensitivity of DOCRs in some cases and reduced it in others. Likewise, relay operating times are reduced in some cases and increased in others. These variations are all consequences of the operation of the network. Despite any reduction of sensitivity or


increase of relay operating time, the settings reflect the latest network operating condition; therefore, the system is prepared for another unknown contingency, which keeps it robust at all times. Regarding the optimization algorithms, ACO is a very popular metaheuristic in many fields, but it has shown deficiencies in handling the complicated and highly constrained DOCR coordination problem: poor execution time, fitness value, and number of violations, and a lack of robustness under load operating condition changes. On the other hand, the DE-Tri family of algorithms has shown competitive results on both the IEEE 14- and 30-bus systems at different load operating conditions and over different runs.


Chapter 5

Bio-inspired Optimization Algorithms for Solving the Optimal Power Flow Problem in Power Systems

In this chapter, the solution of the Optimal Power Flow (OPF) problem based on bio-inspired optimization algorithms is presented, both with a single objective function and with multiple, competing objective functions. As a first approach, the Modified Flower Pollination Algorithm (MFPA) is used to show its potential for solving the OPF problem; then, the Normal Boundary Intersection (NBI) method is used in a complementary way to determine the Pareto front of the multi-objective OPF problem. To help in the decision-making process, several strategies to select the best compromise solution from the Pareto frontier are compared. To demonstrate the capabilities of the bio-inspired methods, designed test functions and different objective functions are tested and combined to calculate the Pareto front on the IEEE 30-bus test system. Finally, a visual tool is developed to display the OPF solution; this tool helps the user to intuitively visualize potential damage on the power system.

5.1 Introduction

Today more than ever, the deregulation of the power system and the increasing demand for electricity, in conjunction with a deficit of investment in infrastructure projects due to financial and political issues, are pushing the power system to work close to its operating limits. Therefore, proper operation and planning of the power system require considering different factors such as the reduction of generation costs, losses, and pollution, as well as improving its security, efficiency, and reliability. In this regard, Multi-Objective Optimal Power Flow (MOOPF) has become an important tool with potential applications in power system operation and planning [1]. The method commonly adopted for a multi-objective optimization problem is the Pareto front solution, which, instead of a single optimal solution, leads to a set of alternatives with different trade-offs among the conflicting objective functions. The MOOPF problem is non-linear, non-convex, large-scale,
© Springer Nature Switzerland AG 2019, E. Cuevas et al., Metaheuristics Algorithms in Power Systems, Studies in Computational Intelligence 822, https://doi.org/10.1007/978-3-030-11593-7_5


and involves a static optimization problem with both continuous and discrete control variables. To solve this challenging problem, diverse methods based on conventional and computational intelligence algorithms have been applied to determine the Pareto front [1]. Among the successful conventional multi-objective optimization techniques used to compute the Pareto front are the Weighted Sum Method (WSM) [2, 3], the ε-Constraint Method (ε-CM) [4–6], and the Normal Boundary Intersection (NBI) method [7, 8]. The first two are easy to implement; however, they are subject to a proper determination of weights and constraint thresholds, which has an important impact on the computed Pareto front, in particular when a non-convex characterization prevails in the MOOPF problem [9]. On the other hand, the NBI method has been reported to work well for non-convex problems, obtaining evenly distributed points of the Pareto front [10]. To circumvent these problems, modern tools based on computational intelligence have motivated significant research in the areas of Evolutionary Optimization Algorithms (EOAs) [9, 11–16] and Bio-inspired Optimization Algorithms (B-IOAs) [17–21]. The EOAs have strong robustness for non-convex and non-linear MOOPF problems and find multiple Pareto-optimal solutions in one single simulation run. On the other hand, the B-IOAs find the Pareto front by intensive search over the constrained solution space. Both approaches are mathematically flexible when handling different nonlinear objective functions (such as generation cost functions with valve-point effects), as well as the incorporation of discrete control variables (transformer taps and compensator limits) [1]. In spite of the advances in the development and application of computational intelligence algorithms to the MOOPF problem, aspects such as the search time and the convergence speed are still under intensive research [1].
Another aspect concerns the Pareto front computation and the selection of the best compromise solution, which becomes an important part of the decision-making process because it is not obtained automatically. Furthermore, a criterion to select the best compromise solution is fundamental when the power system operator has to make fast decisions. In this chapter, some a posteriori methods are used to complement the decision-making process. In this regard, diverse methods have been proposed in the literature; for example, the fuzzy membership approach [10] uses a normalized membership function to weight each non-dominated individual solution; a similar philosophy is followed by the entropy method [10]; the pseudo-weight vector approach [9] uses a weight vector based on the relative distance of the solution from the worst value in each objective function. In this chapter, bio-inspired optimization algorithms to solve the multi-objective OPF problem are compared. The performance of the techniques is tested on the standard IEEE 30-bus test system using different objective functions.

5.2 General Formulation of OPF Problem

Mathematically, a multi-objective OPF problem may be represented as a non-linear programming formulation as follows:

    min f_i(x, u)          i = 1, 2, ..., n
    s.t. h_j(x, u) = 0     j = 1, 2, ..., m
         g_k(x, u) ≤ 0     k = 1, 2, ..., p        (5.1)

where f_i is the i-th objective function (OF) to be minimized, and x is the vector of dependent state variables defined as:

    x^T = [P_Gref, V_L1, ..., V_Lnpq, Q_G1, ..., Q_Gnpv, S_L1, ..., S_Lntl]        (5.2)

where P_Gref is the active power output at the slack bus; V_Lnpq is the voltage magnitude at the PQ load buses; Q_Gnpv is the reactive power output of all generator units; S_Lntl is the transmission line loading or line flow; npq is the number of load buses; npv is the number of generator (PV) buses; and ntl is the number of transmission lines. Finally, u is the vector of independent control variables stated as:

    u^T = [P_G2, ..., P_Gng, V_G1, ..., V_Gng, Q_c1, ..., Q_cnc, T_1, ..., T_nt]        (5.3)

where P_Gng is the active power generation at the PV buses except the slack bus; V_Gng is the voltage magnitude at the PV buses; T is the tap setting of the transformers; Q_cnc is the reactive power of the shunt VAR compensators; ng is the number of generators; nc is the number of VAR compensators; and nt is the number of regulating transformers. In this chapter, four different objective functions are considered; these are described in the next section.

5.2.1 Objective Functions f_i(x, u)

Four different objective problems are considered to determine the effectiveness of the proposed algorithm, and their functions are used for the multi-objective purposes of this chapter.

1. Quadratic fuel cost. This is the characteristic quadratic cost-type function to calculate the total cost of generation for thermal units. It is the most common objective function in OPF problems. This function is expressed as:


    f_1 = Σ_{i=1}^{ng} (a_i P_Gi² + b_i P_Gi + c_i)        (5.4)

where P_Gi is the active power from generator i. The reference generator is included, but it is important to outline that it is not a control variable; it is implicitly a function of the control variables, and its value is drawn from the power flow equations in order to complete the cost function.

2. Quadratic cost of generation with valve-point effect. This function models the opening effect of the steam valve. The effect caused by this term is highly nonlinear and discontinuous, so it is not possible to apply gradient-based methods. This function is expressed by

    f_2 = Σ_{i=1}^{ng} (a_i P_Gi² + b_i P_Gi + c_i + z_i)        (5.5)

where z_i = |d_i sin(e_i (P_Gi^min − P_Gi))|; a_i, b_i, c_i are the fuel cost coefficients of the i-th generator; d_i and e_i are the coefficients of the i-th unit reflecting the valve-point effect; and P_Gi^min is the minimum active power generation limit of the i-th generator. Only the reference and bus 2 generators comply with this function.

3. Total active power losses. This function computes the line flow total active power losses and is defined as follows:

    f_3 = Σ_{k=1}^{ntl} r_k [V_i² + V_j² − 2 V_i V_j cos(θ_i − θ_j)]        (5.6)

where r_k is the conductance of the k-th transmission line between nodes i and j, for i ≠ j.

4. Voltage profile improvement. Usually this function is used to minimize the voltage deviations of the PQ buses, but the rest of the PV buses can be included. This function is expressed as:

    f_4 = q Σ_{i=1}^{Nb} |V_i − 1|        (5.7)

where q is a scale factor and Nb represents the total number of buses. The definition of equality constraints is usually related to typical load flow equations:

    P_Gi − P_Di − V_i Σ_{j=1}^{Nb} V_j (G_ij cos θ_ij + B_ij sin θ_ij) = 0

    Q_Gi − Q_Di − V_i Σ_{j=1}^{Nb} V_j (G_ij sin θ_ij − B_ij cos θ_ij) = 0        (5.8)

where P_Gi is the active power of the i-th generator except for the slack bus; Q_Gi is the reactive power output of the i-th generator; P_Di and Q_Di are the active and reactive power demand at the i-th bus; V_i and V_j are the voltages of the i-th and j-th bus, respectively; G_ij, B_ij, and θ_ij are the conductance, susceptance, and voltage phase difference between the i-th and j-th bus (θ_ij = θ_i − θ_j); and Nb is the number of buses.
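The power balance of Eq. (5.8) can be sketched as a mismatch computation; a feasible operating point drives both mismatches to zero (all function and variable names are our own, and the single-bus values in the usage below are illustrative only):

```python
import math

# Active/reactive power mismatch at each bus, following Eq. (5.8).
# G and B are the real and imaginary parts of the bus admittance matrix.

def power_mismatches(v, theta, g, b, p_inj, q_inj):
    """Return (dP, dQ) lists per bus.

    v, theta: bus voltage magnitudes and angles
    g, b: conductance and susceptance matrices (lists of lists)
    p_inj, q_inj: net injected powers P_G - P_D and Q_G - Q_D per bus
    """
    n = len(v)
    dp, dq = [], []
    for i in range(n):
        p_calc = sum(v[i] * v[j] * (g[i][j] * math.cos(theta[i] - theta[j])
                                    + b[i][j] * math.sin(theta[i] - theta[j]))
                     for j in range(n))
        q_calc = sum(v[i] * v[j] * (g[i][j] * math.sin(theta[i] - theta[j])
                                    - b[i][j] * math.cos(theta[i] - theta[j]))
                     for j in range(n))
        dp.append(p_inj[i] - p_calc)
        dq.append(q_inj[i] - q_calc)
    return dp, dq
```

For a single bus with self-conductance 2 p.u. at flat voltage, an injection of 2 p.u. balances exactly and the mismatches vanish.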

5.2.2 Inequality Constraints g_k(x, u)

Several inequality constraints are considered to restrict the operating conditions of the physical devices present in the power system, as well as the limits created to guarantee system security.

Generator limits:

    V_i^min ≤ V_i ≤ V_i^max
    P_Gi^min ≤ P_Gi ≤ P_Gi^max
    Q_Gi^min ≤ Q_Gi ≤ Q_Gi^max        i = 1, 2, ..., npv        (5.9)

Transformer tap limits:

    T_i^min ≤ T_i ≤ T_i^max        i = 1, 2, ..., nt        (5.10)

Compensator limits:

    Q_ci^min ≤ Q_ci ≤ Q_ci^max        i = 1, 2, ..., nc        (5.11)

Voltages at load buses:

    V_j^min ≤ V_j ≤ V_j^max        j = 1, 2, ..., npq        (5.12)

Line flow limits:

    S_k^min ≤ S_k ≤ S_k^max        k = 1, 2, ..., ntl        (5.13)


Next section describes the incorporation of Penalty Functions to the OPF formulation.

5.2.3 Penalty Functions

It is worth mentioning that control variables are self-constrained, but dependent state variables are not. The inequality constraints of dependent variables can be included as penalty functions which are only active if the corresponding constraint is violated. Thus, any unfeasible solution is declined, although there will be some degree of permissibility if the penalties are very low. Mathematically, the penalty functions added to the objective function can be expressed as follows:

    f_P = f_obj + Σ_{k=1}^{NC} ω_k (x_k − x_k^lim)²        (5.14)

where f_P is the augmented objective function, f_obj is the current objective function that depends on the method used, ω_k is the penalty factor of the k-th violated constraint, x_k^lim is the limit value of the k-th violated dependent variable x_k, and NC is the total number of active constraints. The limit value can be determined by the following rule:

    x_k^lim = x_k^max,  if x_k > x_k^max
    x_k^lim = x_k^min,  if x_k < x_k^min        (5.15)

where x_k^lim ∈ {x_k^min, x_k^max} is the violated limit of the k-th dependent variable, which can be either the lower (x_k^min) or the upper (x_k^max) bound, but not both. Each penalty function is active only if its respective constraint is violated; otherwise, it is zero. The penalty factors ω_k must be selected carefully because each constraint handles units of different magnitudes. If the constraints are normalized, one single penalty factor is enough; in this case, its value should be set according to how much the penalties will affect the augmented objective function (5.14). The quadratic penalty functions are intended to relax the tight constraints and therefore expand the solution space.
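A minimal sketch of the augmented objective of Eqs. (5.14)–(5.15), assuming a single common penalty factor ω as described above for normalized constraints (all names and the sample limits are illustrative, not from the book):

```python
# Augmented objective with quadratic penalties, following Eqs. (5.14)-(5.15):
# a penalty is active only for the violated bound, and is zero otherwise.

def augmented_objective(f_obj, deps, limits, omega=1000.0):
    """f_obj: raw objective value; deps: dependent-variable values;
    limits: (min, max) pairs per variable; omega: common penalty factor."""
    penalty = 0.0
    for x, (lo, hi) in zip(deps, limits):
        if x > hi:          # upper limit violated -> x_lim = x_max
            penalty += omega * (x - hi) ** 2
        elif x < lo:        # lower limit violated -> x_lim = x_min
            penalty += omega * (x - lo) ** 2
    return f_obj + penalty
```

For instance, a load-bus voltage of 1.08 p.u. against a [0.95, 1.05] band adds ω·(0.03)² to the objective, while a feasible solution is left unchanged.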

5.3 Flower Pollination Algorithm

The FPA, developed by Yang, is a recent meta-heuristic optimization technique inspired by the flower pollination process in nature. This process is divided into two mechanisms: cross-pollination and self-pollination. From diverse discussions about the FPA algorithm [22], four rules have been accepted to describe an idealized pollination process:


1. Cross-pollination is considered as a global pollination process with pollen-carrying pollinators (insects or animals) travelling over long distances, performing movements that can be modeled as Lévy flights [23, 24].
2. Self-pollination is considered as a local pollination process, which in nature is conducted by the wind or the rain.
3. Self-pollination or local pollination takes place among flowers of the same plant or flowers of the same species.
4. The choice between self- and cross-pollination is controlled by a switch probability Pa ∈ [0, 1]. Due to physical flower proximity and other factors such as wind or rain, local pollination can represent a significant fraction of the overall pollination process.

5.3.1 Description of the Flower Pollination Algorithm

From the above idealized rules, we can formulate the standard FPA algorithm as follows. The global pollination (cross-pollination) process is carried out by generating random numbers L(λ). As shown below, the position of the i-th flower u_i^t ∈ R^n is iteratively updated using its distance from the current best flower g*:

    u_i^{t+1} = u_i^t + L(λ) ⊙ (u_i^t − g*)        (5.16)

where ⊙ indicates the element-wise product. The step size L(λ) is drawn from a symmetric Lévy distribution, and the equation is called a Lévy flight that mimics the behavior of the pollinators. Generating Lévy random numbers is not an easy task; to overcome this drawback, Mantegna's approximation is used [25]. The step size vector is:

    s_i = z / |v|^{1/λ}        i = 1, 2, ..., i-th flower        (5.17)

where z and v are n-dimensional vectors and the divisions in (5.17) are element-wise. Each element z_i ~ N(0, σ_z²) and v_i ~ N(0, σ_v²) is drawn from a normal distribution, where:

    σ_z = [ Γ(1 + λ) sin(πλ/2) / ( Γ((1 + λ)/2) λ 2^{(λ−1)/2} ) ]^{1/λ}        (5.18)

The distribution factor λ is selected in the range 0.3 to 1.99; in this research, λ = 1.5. Finally, the step size is L(λ) = α s_i, where α is a scale factor set from 0.1 to 0.9 in this work.
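Mantegna's approximation of Eqs. (5.17)–(5.18) can be sketched as follows; σ_v = 1 is the usual choice in Mantegna's algorithm (the chapter only states the σ_z expression, so that part is an assumption here):

```python
import math, random

# Mantegna's approximation for Levy-distributed step sizes, Eqs. (5.17)-(5.18).

def levy_step(n, lam=1.5, alpha=0.5):
    """Return an n-dimensional Levy step L(lambda) = alpha * z / |v|^(1/lambda)."""
    sigma_z = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    step = []
    for _ in range(n):
        z = random.gauss(0.0, sigma_z)
        v = random.gauss(0.0, 1.0)   # sigma_v = 1, standard choice
        step.append(alpha * z / abs(v) ** (1 / lam))
    return step
```

Occasional very large components are expected: they are precisely the long jumps that give the Lévy flight its global exploration ability.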


On the other hand, local pollination is carried out using, as step sizes, a uniformly distributed random number vector ε lying between 0 and 1, which controls the magnitude of mutation of the elements of the next-generation flower:

    u_i^{t+1} = u_i^t + ε (u_j^t − u_k^t)        (5.19)

where t is the current generation (iteration), and u_j^t and u_k^t are the current pollen from different flowers of the same plant species. Mathematically, if u_j^t and u_k^t come from the same species or are selected from the same population, this equivalently becomes a local random walk. The local pollination is carried out by random walks using a uniform probability (5.19), whereas global pollination is carried out by Lévy flights which use an exponential Lévy distribution (5.16). The change between them is done by setting a switch probability parameter Pa. FPA switch probability pseudo code:

    if Pa > rand(0, 1)
        do Lévy flight:  u_i^{t+1} = u_i^t + L(λ) ⊙ (u_i^t − g*)
    else
        do random walk:  u_i^{t+1} = u_i^t + ε (u_j^t − u_k^t)
    end

where t represents the current generation, L(λ) is the Lévy distribution, and ε ∈ [0, 1] is a random number vector generated by a uniform distribution.

5.4 Modified Flower Pollination Algorithm

The backbone of the Flower Pollination Algorithm is related to the selection of the initial conditions and to the switching from the local to the global pollination process. Both have an important impact on the computational burden and on the convergence of the solution. In order to improve the algorithm's performance, two modifications are carried out in the next sections.

5.4.1 Improving the Initial Conditions Process

The first modification consists in starting with a closer (fitter) solution by simultaneously checking the opposite guesses. By doing this, the fitter of the two (guess or opposite guess) can be chosen as the initial best solution. In fact, according to probability theory, the likelihood that a guess is further from the solution than its opposite


guess is 50%. So, starting with the fitter of the two, guess or opposite guess, has the potential of accelerating convergence. Using the concept of quasi-oppositional based learning [25], it is possible to generate an initial population closer to the optimal solution. The quasi-oppositional solution point (or vector) û^Q ∈ R^N is defined by its elements as:

    if u_i < u_mi
        û_i^Q = u_mi + ε (û_i − u_mi)
    else
        û_i^Q = u_mi + ε (u_mi − û_i)
    end        (5.20)

where ε ∈ [0, 1] is a random number drawn from a uniform distribution; u_mi = (a_i + b_i)/2, with [a_i, b_i] the respective minimum and maximum limits of u_i in the current population (since this is an initial population, the limits [a_i, b_i] match the limits of the respective control variables [u_i^min, u_i^max]); and û_i = a_i + b_i − u_i is the oppositional value of u_i, for i = 1, 2, ..., N.
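A sketch of the quasi-oppositional initialization of Eq. (5.20): both branches sample between the interval midpoint u_m and the opposite point û = a + b − u, so a single expression u_m + ε(û − u_m) is used here (all names are illustrative):

```python
import random

# Quasi-oppositional candidate, following Eq. (5.20): for each dimension the
# result lies between the interval midpoint and the opposite point.

def quasi_opposite(u, bounds):
    """u: candidate vector; bounds: (a_i, b_i) pair per dimension."""
    q = []
    for ui, (a, b) in zip(u, bounds):
        um = (a + b) / 2.0          # interval midpoint
        u_hat = a + b - ui          # opposite point
        eps = random.random()
        q.append(um + eps * (u_hat - um))
    return q
```

In the MFPA initialization, both the original guess and its quasi-opposite would be evaluated, keeping the fitter of the two.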

5.4.2 Switching the Local to Global Pollination Process

The second modification consists in merging the Lévy flight and the local random walk equations into a single movement equation:

    u_i^{t+1} = u_i^t + w_1 L(λ) ⊙ (u_i^t − g*) + c w_2 ε (u_j^t − u_k^t)        (5.21)

where ε is a Gaussian distribution limited by a scale factor c, and w_1 and w_2 are adaptive dynamic weights defined as follows:

    w_1 = w_1^max − (w_1^max − w_1^min) t / t_max        (5.22)

    w_2 = min(F(t), F̄) / max(F(t), F̄)        (5.23)

where w_1^max and w_1^min are the upper and lower limits of w_1, respectively; t is the iteration counter with limit t_max; F(t) is the fitness function at iteration t; and F̄ is the average of the fitness functions of the current population at iteration t. These modifications eliminate the use of the probability switch.
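The merged update of Eqs. (5.21)–(5.23) can be sketched as follows; the Lévy step is replaced by a Gaussian placeholder so the fragment is self-contained (in the MFPA it would come from Mantegna's method), and all names and defaults are illustrative:

```python
import random

# One iteration of the modified movement rule, Eqs. (5.21)-(5.23).

def adaptive_weights(t, t_max, fit_t, fit_avg, w_max=0.9, w_min=0.1):
    w1 = w_max - (w_max - w_min) * t / t_max           # Eq. (5.22), linear decay
    w2 = min(fit_t, fit_avg) / max(fit_t, fit_avg)     # Eq. (5.23), in (0, 1]
    return w1, w2

def move(u_i, g_best, u_j, u_k, w1, w2, c=0.1):
    """Merged global/local update of Eq. (5.21), element-wise."""
    new = []
    for x, g, xj, xk in zip(u_i, g_best, u_j, u_k):
        levy = random.gauss(0.0, 1.0)   # placeholder for the Levy step L(lambda)
        eps = random.gauss(0.0, 1.0)    # Gaussian perturbation of Eq. (5.21)
        new.append(x + w1 * levy * (x - g) + c * w2 * eps * (xj - xk))
    return new
```

Early iterations (large w_1) emphasize the global Lévy term, while w_2 scales the local term by how the current fitness compares with the population average.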

5.5 Multi Objective Modified Flower Pollination Algorithm

In this section, we extend our single-objective MFPA variant [21] to solve the multi-objective OPF problem. The difference with respect to reference [26] is the application of the NBI method to guarantee evenly distributed points on the Pareto front; this modification simplifies the decision-making process.

5.5.1 Normal Boundary Intersection Method for Generation of Pareto Frontier

The NBI method is based on a geometric relationship between the Pareto optimal set and the evenly distributed weights of the so-called utopia line [7, 27]. The utopia line is related to the utopia point, which contains the minimum values f_i* = min{f_i(u_1*), f_i(u_2*), ..., f_i(u_n*)}, i = 1, 2, ..., n, where u_1* is the minimizer solution for f_1, u_2* is the minimizer of f_2, and so on for all the n objective functions involved in the problem. Figure 5.1 displays the geometric interpretation of the NBI method for the generation of the Pareto frontier. The utopia point is defined as:

    F* = [f_1*, f_2*, ..., f_n*]^T        (5.24)

The pay-off square matrix U expresses the relationship of the weights on the utopia line and the optimal points Fi at the Pareto front that form the convex hull of individual minima (CHIM), and is defined by its column vectors as:

Fig. 5.1 Geometric description of the NBI method for two and three objective functions

    U(i, :) = F_i − F*        i = 1, 2, ..., n        (5.25)

On the other hand, the nadir point contains the worst (maximum) values of the objective functions, and is defined as:

    F^N = [f_1^N, f_2^N, ..., f_n^N]^T        (5.26)

where f_i^N = max{f_i(u_1*), f_i(u_2*), ..., f_i(u_n*)}, i = 1, 2, ..., n, and u_1*, u_2*, ..., u_n* are the minimizer solutions of f_1, f_2, ..., f_n, respectively. Thus, the normalized objective functions are:

    f̄_i = (f_i − f_i*) / (f_i^N − f_i*)        i = 1, 2, ..., n        (5.27)
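The utopia point, nadir point, and normalization of Eqs. (5.24), (5.26), and (5.27) can be sketched as follows, assuming the individual minimizers have already been found (f_table[i][j] holds objective j evaluated at the minimizer of objective i; all names are illustrative):

```python
# Utopia point, nadir point and normalized objectives, Eqs. (5.24), (5.26), (5.27).

def utopia_nadir(f_table):
    n = len(f_table)
    utopia = [min(f_table[i][j] for i in range(n)) for j in range(n)]  # f_j*
    nadir = [max(f_table[i][j] for i in range(n)) for j in range(n)]   # f_j^N
    return utopia, nadir

def normalize(f, utopia, nadir):
    """Map each objective value into [0, 1] via Eq. (5.27)."""
    return [(fj - u) / (nd - u) for fj, u, nd in zip(f, utopia, nadir)]
```

The normalized objectives are what the pay-off matrix and the NBI subproblem below operate on, so that objectives with very different units become comparable.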

The original optimization problem (5.1) is turned into a set of parameterized single-scalar optimization problems, one for every point in the Pareto optimal set, as follows:

    min −d
    s.t.  Ū (w − d ē) = F̄(x, u)
          h_i(x, u) = 0        i = 1, 2, ..., m
          g_k(x, u) ≤ 0        k = 1, 2, ..., p        (5.28)

where d is the maximum distance from a point on the utopia line to a point on the real Pareto front and becomes a control variable added to the control vector (5.3); w is the vector of weights for the current individual problem; Ū is the normalized pay-off matrix obtained using (5.25) and (5.27); ē is a ones vector; and F̄ is the vector function that contains any combination of the normalized objective functions using (5.27). The problem stated above is the new mathematical optimization model, where h_i(x, u) contains the typical power flow equations and g_k(x, u) contains the system physical constraints. These constraints are the very same defined in the primitive optimization problem (5.1). To implement this model, the control variable d is used as f_obj in Eq. (5.14).

5.6 General Description of the Bio-inspired Multi-objective Optimization Procedure

The proposed multi-objective OPF approach must be applied to every point on the Pareto front; the number of points can be arbitrarily determined. The algorithm is described in Fig. 5.2.


Fig. 5.2 Geometric description of the NBI method for three objective functions

After the total number of points has been computed, any of the best compromise solution criteria can be applied.

5.7 Best Compromise Solution Criteria

After finding a non-dominated and diverse solution set, the question of selecting a single preferred solution for implementation in the MOOPF problem becomes important. Therefore, the decision maker has to select the best compromise solution; due to the imprecise nature of human judgment, several criteria have been implemented and compared in this chapter.


5.7.1 Fuzzy Membership Function Method

The fuzzy membership function is one of the most preferred methods in multi-criteria problems in power systems [9, 10]. Consider the matrix [μ_ij]_{N×n}, where N is the number of Pareto points and n is the number of objective functions. Every element of the matrix [μ_ij] is a membership value according to the following fuzzy function: μ_ij =

    K,  if  d_a(x_i, c_K^p) = min{ d_a(x_i, c_1^p), ..., d_a(x_i, c_K^p) }        (8.27)

Finally, it is examined whether the proposed solution lies within the feasible region Ω. To achieve this, the grouping conditions defined in (8.11)–(8.13) are adapted to guarantee that the encoded minimization vector agrees with the design criteria. The conditions (8.12) and (8.13) implicitly exist within the feature extraction and labelling defined in (8.25)–(8.27), while the adaptation of the conditions determined in (8.11) is conducted using the following set of equations:

    a_k = 0 if G_k = ∅,  a_k = 1 if G_k ≠ ∅,  ∀k ∈ {1, 2, ..., K}        (8.28)

    K′ = Σ_{k=1}^{K} a_k        (8.29)

    f(x, Z) = f(x, Z) if K′ = K;  f(x, Z) = f_PE if K′ < K        (8.30)

Equation (8.28) verifies the existence of groups, (8.29) confirms that the summation is equal to the coded number of clusters K, and (8.30) evaluates whether the particle meets all the restrictions.

8.4.4 Design Criteria Function for Clustering Data

The selected clustering functions have the final objective of evaluating the quality of the solution for each particle during the iterative process, searching for the vector x that optimizes the objective function f(x, Z) following the design criteria. Grouping guidelines include, in their definition, compactness and separation features: a compactness function minimizes the intragroup distance, while a separation function maximizes the intergroup distance. The function depicted in Eq. (8.31), which is employed in [10], tends to divide the search space into zones of groups with the same number of objects. The most popular separation measures also use the centroid as reference point in order to maximize the intergroup distance with the minimum number of operations, as illustrated in the following equations:

    f_1(x, Z) = Σ_{k=1}^{K} Σ_{i=1}^{|G_k|} d_a(x_i^k, c_k)        (8.31)

198

8

Clustering Representative Electricity Load Data …

Equation (8.32), proposed in [11], is used to maximize the intergroup distance weighted by the cardinality of the groups, which implicitly considers their compactness.

f_2(x̃, Z) = Σ_{k,l=1, k≠l}^{K} |G_k| |G_l| d_a(c̃_k, c̃_l)   (8.32)

Note that Eqs. (8.31) and (8.32) depend on the centroid calculations c̃_k and on the number of grouped objects, i.e. the cardinality |G_k|, expressed in the following form:

c̃_k = (1/|G_k|) Σ_{i=1}^{|G_k|} x_i^k,  ∀k ∈ {1, 2, …, K}   (8.33)

The quality of the solution delivered by PSO is evaluated using Eqs. (8.31) and (8.32).
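The criteria functions (8.31)–(8.33) can be sketched as below, using the Euclidean metric for d_a (an assumption; the function and variable names are illustrative):

```python
import numpy as np

def centroids_from_labels(X, labels, K):
    """Eq. (8.33): centroid of each group as the mean of its members."""
    return np.array([X[labels == k].mean(axis=0) for k in range(K)])

def f1_compact(X, labels, centroids):
    """Eq. (8.31): total intragroup (object-to-centroid) distance; minimized."""
    return sum(np.linalg.norm(X[labels == k] - c, axis=1).sum()
               for k, c in enumerate(centroids))

def f2_separate(labels, centroids):
    """Eq. (8.32): cardinality-weighted intergroup distance; maximized."""
    K = len(centroids)
    sizes = np.array([(labels == k).sum() for k in range(K)])
    total = 0.0
    for k in range(K):
        for l in range(K):
            if k != l:
                total += sizes[k] * sizes[l] * np.linalg.norm(centroids[k] - centroids[l])
    return total
```

For two groups of two points each, centered at (0, 1) and (10, 1), f_1 sums the four unit member-to-centroid distances, while f_2 accumulates the size-weighted centroid separation over both ordered pairs.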

8.5

Validation Index

The algorithm described above finds the clusters and data-set labels for a particular pre-chosen K. To determine the optimal number of clusters that best fits the dataset U, the quality of the clusters should be evaluated; therefore a clustering validity index is employed. In general, clustering validity indexes are defined by combining compactness and separation. The unsupervised Davies–Bouldin index (DBI), which averages all the cluster similarities, was selected based on its ability to examine the compactness and separation of the clusters [23]. The DBI formulation is defined as follows:

DBI(D, K) = (1/K) Σ_{i=1}^{K} max_{j≠i} { [d̂(D_i) + d̂(D_j)] / d(c_i, c_j) }   (8.34)

where d(c_i, c_j) and d̂(D_i) represent the cluster-to-cluster distance and the intra-set distance, respectively. The objective is to obtain clusters or groups with the minimum intra-cluster distances; therefore the minimum DBI value is taken as the optimal partition, indicating the best clustering. To synthesize the clustering procedure, K-means or PSO is run a number of times for different numbers of clusters. Then the respective curve of the validation index Gs(i) is outlined and an automatic search for the significant "knee" of the diagram is performed. The number of clusters at which the "knee" is observed indicates the optimum clustering for the selected data set. The index also yields the corresponding labels, which allow evaluating the quality of the compactness and separation of each cluster as well as providing visual information.
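Equation (8.34) can be sketched as follows; the intra-set distance d̂(D_i) is taken here as the mean member-to-centroid distance, one common choice and an assumption on our part:

```python
import numpy as np

def davies_bouldin(X, labels, centroids):
    """Davies-Bouldin index, Eq. (8.34): mean over clusters of the worst
    (intra_i + intra_j) / d(c_i, c_j) ratio; lower values are better."""
    K = len(centroids)
    # intra-set distance: average distance of members to their own centroid
    intra = np.array([np.linalg.norm(X[labels == k] - centroids[k], axis=1).mean()
                      for k in range(K)])
    dbi = 0.0
    for i in range(K):
        ratios = [(intra[i] + intra[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(K) if j != i]
        dbi += max(ratios)
    return dbi / K
```

Compact, well-separated groups drive every ratio toward zero, which is why the minimum of the DBI curve over K is taken as the optimal partition.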

8.6

General Description of Clustering Procedure

The general proposed framework comprises three stages, namely collecting data, dimensionality reduction and clustering validation.

Stage I: Collecting data
Ia. Organization of the collected load profile data in a matrix structure.
Ib. Normalization of the load profile data Ẑ.

Stage II: Dimensionality reduction
IIa. Selection of the dimensionality reduction technique (PCA, Isomap and SNE).
IIb. Computation of the low-dimensional map Z extracted from the load profile data Ẑ.
IIc. Setting the minimum and maximum number of clusters from the iVAT image results.

Stage III: Clustering validity
IIIa. Clustering validity index (DBI).
IIIb. Searching the number of clusters in the low-dimensional model Z.
IIIc. Clustering analysis.

Figure 8.1 shows a conceptual representation of the proposed procedure. The proposed procedure allows evaluating the performance of the approach based on DR

Fig. 8.1 Procedure for determination of the optimal number of clusters


techniques and PSO in terms of computing time, cluster visualization, clustering of the data and potential application to electric distribution systems. From the procedure described in Fig. 8.1, two alternatives may be distinguished. First, the low-dimensional data Z may be processed directly by the validation-index algorithm, as suggested in [6]. Second, the iVAT technique may be used to obtain the minimum (K_min) and maximum (K_max) number of clusters, significantly reducing the search and improving the accuracy of the validity indexes in determining the optimal number of clusters. Both alternatives may easily be extended to evaluate the performance of other DR clustering techniques. The results of this chapter are presented in the next section.

8.7

Results

In this section, the identification of groups in databases with low dimensionality is presented. First, the classical K-means approach is compared against the proposed PSO, and the performance of each algorithm is evaluated through different clustering functions.

8.7.1

Clustering of Low Dimensional Synthetic Data

In this example a matrix of synthetic data Z ∈ R^{n×m} is employed, which has been randomly created using a multivariate normal distribution. The proposed distribution has a density function that depends on the mean, denoted by μ ∈ R^{1×m}, and on a covariance matrix Σ ∈ R^{m×m}, both characterized by the dimensionality m of the data. This process allows the user to create synthetic databases under user-defined conditions using the following equations:

Z_k ~ N_m(μ_k, Σ_k)   (8.35)

f(Z_k; μ_k, Σ_k) = 1 / sqrt((2π)^m |Σ_k|) · exp(−(1/2)(Z_k − μ_k) Σ_k^{−1} (Z_k − μ_k)′)   (8.36)

μ_1 = [2 2],  μ_2 = [5 5],  μ_3 = [2 −2]   (8.37)

Σ_1 = [0.9 0.0255; 0.0255 0.9],  Σ_2 = [0.5 0; 0 0.9],  Σ_3 = [1 0; 0 0.3]   (8.38)
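Under the parameter values of Eqs. (8.37)–(8.38), the synthetic database of Eq. (8.35) can be generated as in the sketch below (the group size of 100 objects per distribution and the random seed are assumptions not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Means and covariances from Eqs. (8.37)-(8.38)
mus = [np.array([2.0, 2.0]), np.array([5.0, 5.0]), np.array([2.0, -2.0])]
sigmas = [np.array([[0.9, 0.0255], [0.0255, 0.9]]),
          np.array([[0.5, 0.0], [0.0, 0.9]]),
          np.array([[1.0, 0.0], [0.0, 0.3]])]

# Eq. (8.35): draw each synthetic group from its multivariate normal
groups = [rng.multivariate_normal(mu, sig, size=100) for mu, sig in zip(mus, sigmas)]
Z = np.vstack(groups)  # synthetic database Z with n = 300 objects, m = 2 features
```

Stacking the three draws yields the overlapping two-dimensional groups that the clustering algorithms are then asked to recover.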


The simulation conditions defined by Eqs. (8.35) and (8.36) are used together with the control parameters depicted in Table 8.2. To evaluate the quality of the clustering algorithms, an internal validation metric has been used: the Davies–Bouldin index (DBI) defined in (8.34). This metric evaluates the quality of the solution using a distance index that measures the compactness and separation of the data groups; small values of the DBI indicate compact and well-separated groups. The validation indexes of the solutions obtained after applying Eqs. (8.31) and (8.32) with PSO and K-means are depicted in Fig. 8.2. It can be noticed that the DBI value is relatively small at K = 3, while large values are presented for K = 5 and K = 8. The result for K = 3 indicates high-density, compact groups. To gain insight into the clustering analysis, the study focuses only on K = 3. Figure 8.3 shows three different distributions in red, blue and yellow colours, respectively. From Fig. 8.3, it can be observed that the groups in red and yellow

Table 8.2 Control parameters of the PSO algorithm for the low-dimensional database

Size | # Iterations | Acceleration coefficients | Inertia coefficient | # Clusters
50 | 50 | c1 = 2.0, c2 = 1.5 | ω = 1.0 : 0.05, nonlin = 10 | K = 3
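The inertia setting ω = 1.0 : 0.05 with nonlin = 10 suggests a nonlinearly decreasing inertia weight in the spirit of [22]; the exact law is not stated in the text, so the schedule below, and the illustrative `pso_step` helper around it, are assumptions:

```python
import numpy as np

def nonlinear_inertia(t, t_max, w_start=1.0, w_end=0.05, nonlin=10):
    """Nonlinearly decreasing inertia weight (after Chatterjee & Siarry [22])."""
    return ((t_max - t) / t_max) ** nonlin * (w_start - w_end) + w_end

def pso_step(x, v, pbest, gbest, t, t_max, c1=2.0, c2=1.5, rng=None):
    """One PSO velocity/position update with the Table 8.2 coefficients."""
    if rng is None:
        rng = np.random.default_rng()
    w = nonlinear_inertia(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

With nonlin = 10 the weight stays close to 1.0 for most iterations (favouring exploration) and drops sharply toward 0.05 near the end (favouring exploitation).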

Fig. 8.2 Evaluation of clustering quality on low dimensional data


Fig. 8.3 Clustering low dimensional synthetic data: a clustering with PSO using function (8.31), b clustering with PSO using function (8.32), c clustering with K-means using function (8.31), d comparison of convergence between PSO and K-means using function (8.31)

present high similarity among themselves and form compact, high-density groups, while the group in blue shows high disparity in relation to the other groups. Figure 8.3a–c present the groups identified with PSO using functions (8.31) and (8.32) and the results obtained with traditional K-means. The results show marginal differences; however, the convergence speed of the algorithms is not the same, as illustrated in Fig. 8.3d, where the effectiveness of PSO against conventional K-means is clearly observed for function (8.31). As a result of this comparative analysis based on the DB index, it can be concluded that the quality of the PSO solution depends on the grouping function employed. The user-defined features specified in each function directly affect the convergence speed and precision of the algorithm; using multiple agents for exploration and exploitation of the space considerably improves the convergence speed in relation to K-means. The validation indexes vary marginally with the number of groups obtained, due to objects lying in the superposition zone.


8.7.2


Clustering REL Data of ERCOT System

In this section, publicly available load profiles are used to provide more evidence of the PSO performance. The measurements were collected by smart meters every 15 min. The database of load profiles, created by the American company ERCOT in Texas, is divided into eight geographical areas [24]. In each area, eight different types of customers were selected, using a 15-min resolution during the summer season (from the 21st of June to the 20th of September 1997). The final database is Z ∈ R^{5888×96}, where the different customers are classified as industrial, commercial, residential and street-lighting load customers. Figure 8.4 visualizes the multi-scale behaviour associated with each type of ELD. The representative ELDs shown in Fig. 8.4 (eight different customers) are normalized in order to analyse multiple data with different types of consumers. To achieve this, the following steps were conducted:

• Compute the average of the total load profile to calculate the most representative ELD for each type of customer;
• Normalize the total load patterns:

x_i^Z = (x_i − μ_i) / σ_i,  ∀i ∈ {1, …, m}   (8.39)
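The normalization of Eq. (8.39) is a per-feature z-score; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def zscore_profiles(X):
    """Eq. (8.39): per-feature z-score, x_i^Z = (x_i - mu_i) / sigma_i,
    applied column-wise over the samples of each load pattern."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma
```

After this step every feature has zero mean and unit standard deviation, so customers measured in different metric units become directly comparable.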

Fig. 8.4 RLPs



Fig. 8.5 Normalized representative ELD

Table 8.3 Control parameters of the PSO algorithm for the high-dimensional database

Size | # Iterations | Acceleration coefficients | Inertia coefficient | # Clusters
250 | 250 | c1 = 2.0, c2 = 1.5 | ω = 1.0 : 0.05, nonlin = 10 | K = 2–10

The averaging and normalization simplify the comparison of different metric units, permit finding unconventional values and allow reducing the data to Z ∈ R^{64×96}. Figure 8.5 shows the summer load profiles after averaging and normalization. The proposed clustering algorithm is initialized with the control parameters depicted in Table 8.3. The next section follows the procedure described in Sect. 8.6 to evaluate the quality of the clustering.

8.7.2.1

Determination of the Optimum Number of Clusters

To determine the optimum number of clusters, the DR techniques (PCA, Isomap and SNE) process the data set to obtain a reduced representation in a transformed vector space, as discussed in the methodology given in Sect. 8.6. The cost-function parameter settings employed in this study are listed in Table 8.4. In the table, σ represents the perplexity of the conditional probability distribution induced by a Gaussian kernel, and g represents the number of nearest neighbors employed in a neighborhood graph. In the experiments with Isomap, only the data points corresponding to vertices in the largest connected component of the neighborhood graph are visualized.


Table 8.4 Cost function parameter settings for the study of the data

Technique | Cost optimization function parameter settings
ISOMAP | g = 500
SNE | α(t) = [0.2–0.8], g = 500, t_max = 1000, σ = 500

Fig. 8.6 Variance for PCA and Isomap

The dimensionality reduction techniques extract the most relevant information from the dataset, helping to improve the clustering and visualization of the load-profile problem. Figure 8.6 shows the total variance of the dataset Ẑ captured by the most significant principal components; the projections onto the output space Z of the PCA and Isomap techniques are obtained according to these components. Table 8.5 shows a comparative analysis of the techniques' performance. All techniques present good results in terms of computing time. The lowest cost-function error is reached by the SNE technique; however, PCA and Isomap present a relatively better CPU-time performance.

As a first approach to clustering the data, iVAT is used to determine the minimum and maximum number of clusters in the low-dimensional model data Ẑ ∈ R^{64×3}, corresponding to the data displayed in Fig. 8.5. The tendency in the number of clusters is displayed as dark blocks along the diagonal of the iVAT image, as shown in Fig. 8.7. From the results shown in Fig. 8.7a–c, around five groups are identified for the data under study.

Table 8.5 Comparison of DR performance

Technique | CPU-time (s) | Error (%) | Complexity
PCA | 0.28 | 0.85 | O(D^3)
ISOMAP | 0.42 | 0.95 | O(N^3)
SNE | 9.48 | 0.15 | O(N^2)


Fig. 8.7 iVAT result for the different DR techniques: PCA, Isomap and SNE respectively

Based on the visualization results, the search space for the clustering validity indexes is set as 2 ≤ K_opt ≤ 10. This information is used to reduce the search of the recursive cluster-validation procedure. Following stage III of the procedure presented in Sect. 8.6, the DBI is computed over the range 2 ≤ K_opt ≤ 10. In this case, the results of applying the PCA, Isomap and SNE DR techniques in conjunction with PSO and K-means are compared to determine the optimal number of groups hidden in the collected data shown in Fig. 8.5. The proposed clustering algorithm is initialized with the control parameters depicted in Table 8.3, following the procedure to evaluate the quality of the clustering. The results in Fig. 8.8 show small values for the DB index, indicating a good performance for the intergroup function (8.32) and the effect of overlapping load-profile data on the intragroup function (8.31). Figure 8.8 shows the results for the PCA, Isomap and SNE groups for the different objective functions. It is observed that the SNE technique helps PSO and K-means to correctly identify the eight groups hidden in the processed data, as initially expected. Note that PSO and K-means get stuck in local solutions for the function defined in (8.31), while PSO with function (8.32) presents better results.


Fig. 8.8 Validation index for functions (8.31) and (8.32): a clustering with PSO using function (8.31); b clustering with PSO using function (8.32); c clustering with K-means using function (8.31)

8.7.2.2

Cluster Visualization

A cluster visualization comparison is carried out in this section in order to verify the clustering action of the proposed methodology in a low-dimensional space. The visualization of the clustered data is based on a scatter plot with dimensional space d = 3. In the constructed 3-D scatter plot, each load profile is represented by one point; neighboring points correspond to similar profiles, whereas distant points correspond to dissimilar profiles. A data visualization for K_opt = 8 is shown in Fig. 8.9. In general, the SNE visualization results shown in Fig. 8.9a, c are better than the results obtained with PCA shown in Fig. 8.9b. Note that PCA groups the public-lighting load-profile data very well because these data present a high correlation. Since SNE showed superiority for group separation and visualization, Fig. 8.10 shows the clustering results obtained with this technique. Each cluster was successfully split according to the types of customers (commercial, industrial and residential). The groups presented in Fig. 8.10 show small variations in their conformation, in particular among groups #7, #7 and #3, corresponding to Fig. 8.10a–c, respectively.



Fig. 8.9 3-D scatter mapping of LPEC for the different DR techniques with PSO and K-means

8.8

Conclusions

The effectiveness of using DR techniques and PSO algorithms has been explored to assess the spatial organization of large data volumes and to find the optimal number of clusters of RLP data. The proposed methodology includes a comparison of different DR techniques, and its implementation allows efficiently identifying the clusters. However, the dimensionality reduction technique should be chosen based on the application at hand. For instance, if the data present a high correlation (e.g., public-lighting load-profile data), PCA is a good choice due to the low computational burden required to process the data. Furthermore, reducing the data set to three features allows storing a smaller amount of data characterizing each type of customer in the distribution service provider's database. These facts are relevant for distribution service providers because they can optimize their computational infrastructure and improve their analyses by speeding up the computational procedures. Finally, the performance study of a PSO approach with a centroid-based codification has been presented. The proposed scheme simplifies the exploration and exploitation of the continuous search space through multidimensional vector motion. From the presented results using synthetic data and RLP data with low dimensionality, it can be concluded that the PSO algorithm can be



Fig. 8.10 Groups of RLPs for the different DR techniques using PSO and K-means

adapted to different user-defined criteria functions for clustering data. PSO provides higher accuracy and precision than K-means for separated groups. Intragroup functions dominate intergroup functions; however, the proposed function performs similarly to the intragroup one. The accuracy of the clustering allows integrating this methodology into sophisticated machine-learning models that could be very useful for handling large volumes of data.

References

1. R. Granell, C.J. Axon, D.C.H. Wallom, Impacts of raw data temporal resolution using selected clustering methods on residential electricity load profiles. IEEE Trans. Power Syst. 30(6), 3217–3224 (2015)
2. G. Chicco, R. Napoli, P. Postolache, M. Scutriu, C. Toader, Customer characterization options for improving the tariff offer. IEEE Trans. Power Syst. 18(1), 381–387 (2003)
3. G. Chicco, Overview and performance assessment of the clustering methods for electrical load pattern grouping. Energy 42(1), 68–80 (2012)


4. A. Morán, J.J. Fuertes, M.A. Prada, S. Alonso, P. Barrientos, I. Díaz, Analysis of electricity consumption profiles by means of dimensionality reduction techniques, in EANN International Conference on Engineering Applications of Neural Networks, London, UK, pp. 152–161 (2012)
5. L. van der Maaten, E. Postma, J. van den Herik, Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10, 1–41 (2009)
6. G. Chicco, R. Napoli, F. Piglione, Comparisons among clustering techniques for electricity customer classification. IEEE Trans. Power Syst. 21(2), 933–940 (2006)
7. M. Sun, I. Konstantelos, G. Strbac, C-Vine copula mixture model for clustering of residential electrical load pattern data. IEEE Trans. Power Syst. 32(3), 2382–2393 (2017)
8. G. Chicco, O.M. Ionel, R. Porumb, Electrical load pattern grouping based on centroid model with ant colony clustering. IEEE Trans. Power Syst. 28(2), 1706–1715 (2013)
9. F. Lezama, A. Rodriguez, E. Muñoz de Cote, L. Sucar, Electrical load pattern shape clustering using ant colony optimization, in European Conference on the Applications of Evolutionary Computation (Springer, Cham, 2016), pp. 491–506
10. E.K. Cervera, E. Barocio, F.R. Sevilla Segundo, P. Korba, R.J. Betancourt, Particle swarm intelligence optimization approach for clustering of low and high dimensional databases. Submitted to IEEE General Meeting (2019)
11. J. Wu, Advances in K-Means Clustering (Springer Theses, 2012); H. Ward, Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58, 236–244 (1963)
12. J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms (Plenum, New York, 1981)
13. Y.H. Pao, D.J. Sobajic, Combined use of unsupervised and supervised learning for dynamic security assessment. IEEE Trans. Power Syst. 7(2), 878–884 (1992)
14. I.T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, 2nd edn. (New York, 1986), pp. 32–500
15. N.R. Sakthivel, B.B. Nair, M. Elangovan, V. Sugumaran, S. Saravanmurugan, Comparison of dimensionality reduction techniques for the fault diagnosis of mono block centrifugal pump using vibration signals. Eng. Sci. Technol. Int. J. (2014)
16. A. Arechiga, E. Barocio, J. Ayon, H. Garcia, Comparison of dimensionality reduction techniques for clustering and visualization of load profiles, in Proc. IEEE PES T&D-LA (2016)
17. J. de Leeuw, W. Heiser, Theory of multidimensional scaling, in Handbook of Statistics, vol. 2, pp. 285–316 (1982)
18. E.W. Dijkstra, A note on two problems in connexion with graphs. Numer. Math. 1, 269–271 (1959)
19. G. Hinton, S. Roweis, Stochastic neighbor embedding, in Advances in Neural Information Processing Systems, vol. 15, pp. 833–840 (2002)
20. L. Wang, T.V.U. Nguyen, J.C. Bezdek, C. Leckie, K. Ramamohanarao, iVAT and aVAT: enhanced visual analysis for cluster tendency assessment, in Proc. PAKDD, Hyderabad, India, June 2010
21. J.C. Bezdek, R.J. Hathaway, VAT: a tool for visual assessment of cluster tendency, in Proceedings of the 2002 International Joint Conference on Neural Networks, Honolulu, HI, pp. 2225–2230 (2002)
22. A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput. Oper. Res. 33, 859–871 (2006)
23. Q. Zhao, P. Fränti, WB-index: a sum-of-squares based index for cluster validity. Data Knowl. Eng. 92, 77–89 (2014)
24. http://www.ercot.com/mktinfo/loadprofile

Appendix A

Parameter Settings and Test System Data

The IEEE test systems were chosen to test and compare the performance of the coordination optimization algorithms. The power flow with n − 1 contingency analysis and the near-end fault currents are shown for each test system. The coordination pairs and the element-outage results of the sensitivity analysis are also shown. Although the system voltages may be higher than typical for overcurrent relay application, the main objective is to achieve DOCR coordination in larger and highly interconnected systems, since algorithm robustness is a very important aspect that must be analyzed.

6 Bus System

The 6-bus test system is shown in Fig. A.1. The system consists of 10 relays; the voltages were selected to be 69 kV for buses at the high-voltage side of transformers and 34.5 kV for buses at the low-voltage side of transformers. The X′d data for the generators are 0.10, 0.15 and 0.20, respectively (Table A.1).

Fault Current

The fault currents are calculated with the remote end opened. This is done for two reasons: to obtain the maximum fault current that the relay senses, and because of the very small probability that the remote-end relay mal-operates. Bear in mind that

© Springer Nature Switzerland AG 2019 E. Cuevas et al., Metaheuristics Algorithms in Power Systems, Studies in Computational Intelligence 822, https://doi.org/10.1007/978-3-030-11593-7


Fig. A.1 6 bus system

as elements' operation or network topology changes, the load flow and fault analysis must be computed again through a real-time algorithm (Table A.2).

IEEE 14 Bus System

The IEEE 14-bus test system is shown in Fig. A.2. The system consists of 32 relays; the voltages were selected to be 138 kV for buses at the high-voltage side of transformers and 34.5 kV for buses at the low-voltage side of transformers. The two lines between buses 1 and 2 have the same impedance value. Therefore, the relays [1 2 1], [1 2 2], [2 1 1] and [2 1 2] sense the same maximum load current of 328 A, but due to the n − 1 contingency analysis the maximum load current of these relays is 566 A. The current values are based on maximum-load operation. The X′d data for generators one and two are 0.01 and 0.03, respectively. The maximum load is the same as the original 14-bus load data, while the medium and minimum loads are 70 and 50% of the 14-bus load data (Tables A.3 and A.4).

IEEE 30 Bus System

The IEEE 30-bus test system was chosen to test and analyze the dynamic operations in the system. The system is shown in Fig. A.3. The system consists of 72 relays, not considering DOCRs as protections for transformers. The voltages were selected to be 34.5 kV for buses at the high-voltage side of transformers and 22 and 13.8 kV for buses at the low-voltage side of transformers. All relays are considered to have a very inverse time characteristic curve, as presented in Table A.5.


Table A.1 Load flow and contingency analysis of the 6 bus system at maximum load

Relays | Iload (A) | Iload (n − 1) | Device outage
1_4_1 | 306.47 | 460.54 | 0_2
4_1_1 | 306.47 | 460.54 | 0_2
1_5_1 | 497.36 | 497.36 | 0_3
5_1_1 | 497.36 | 497.36 | 0_3
1_6_1 | 220.17 | 298.67 | 1_4
6_1_1 | 220.17 | 298.67 | 1_4
4_6_1 | 177.13 | 422.34 | 1_4
6_4_1 | 177.13 | 422.34 | 1_4
5_6_1 | 223.10 | 295.89 | 1_4
6_5_1 | 223.10 | 295.89 | 1_4

Table A.2 Fault current of the 6 bus system

Primary relay | Backup relay | Primary Isc (A) | Backup Isc (A)
4_6_1 | 1_4_1 | 7914 | 2753
1_5_1 | 4_1_1 | 11922 | 1921
1_6_1 | 4_1_1 | 13009 | 1949
5_6_1 | 1_5_1 | 8061 | 4588
1_4_1 | 5_1_1 | 12407 | 2594
1_6_1 | 5_1_1 | 13009 | 2744
6_4_1 | 1_6_1 | 5145 | 2551
6_5_1 | 1_6_1 | 5954 | 2645
1_4_1 | 6_1_1 | 12407 | 1499
1_5_1 | 6_1_1 | 11922 | 1683
6_1_1 | 4_6_1 | 6140 | 3361
6_5_1 | 4_6_1 | 5954 | 3274
4_1_1 | 6_4_1 | 7956 | 2794
6_1_1 | 5_6_1 | 6140 | 2747
6_4_1 | 5_6_1 | 5145 | 2564
5_1_1 | 6_5_1 | 6056 | 2582

The two lines between buses 1 and 2 have the same impedance value. Therefore, the relays [1 2 1], [1 2 2], [2 1 1] and [2 1 2] sense the same maximum load current of 1502 A, but due to the n − 1 contingency analysis the maximum load current of these relays is 2576 A. The current values are based on minimum-load operation. The fault currents are calculated with the remote end opened. This is done for two reasons: to obtain the maximum fault current that the relay senses, and because of the very small probability that the remote-end relay mal-operates. Bear in mind that


Fig. A.2 IEEE 14 bus test system

Table A.3 Load flow and contingency analysis of the IEEE 14 bus system at maximum load

Relays | Iload (A) | Iload (n − 1) | Device outage
1_2_1 | 328.44 | 566.57 | 1_2
2_1_1 | 328.44 | 566.57 | 1_2
1_2_2 | 328.44 | 566.57 | 1_2
2_1_2 | 328.44 | 566.57 | 1_2
1_5_1 | 317.19 | 408.41 | 1_2
5_1_1 | 317.19 | 408.41 | 1_2
2_3_1 | 307.54 | 405.16 | 3_4
2_4_1 | 233.84 | 375.45 | 2_3
2_5_1 | 174.22 | 326.40 | 1_5
5_2_1 | 174.22 | 326.40 | 1_5
3_4_1 | 101.45 | 407.57 | 2_3
4_3_1 | 101.45 | 407.57 | 2_3
4_5_1 | 252.96 | 412.31 | 2_3
5_4_1 | 252.96 | 412.31 | 2_3
6_11_1 | 163.81 | 277.86 | 4_5
11_6_1 | 163.81 | 277.86 | 4_5
6_12_1 | 140.48 | 330.08 | 6_13
12_6_1 | 140.48 | 330.08 | 6_13
6_13_1 | 333.12 | 419.41 | 9_14
13_6_1 | 333.12 | 419.41 | 9_14
9_10_1 | 86.61 | 474.40 | 5_6
10_9_1 | 86.61 | 474.40 | 5_6
9_14_1 | 155.45 | 411.64 | 5_6
14_9_1 | 155.45 | 411.64 | 5_6
10_11_1 | 95.68 | 323.66 | 5_6
11_10_1 | 95.68 | 323.66 | 5_6
12_13_1 | 33.90 | 226.73 | 6_13
13_12_1 | 33.90 | 226.73 | 6_13
13_14_1 | 114.38 | 252.24 | 9_14
14_13_1 | 114.38 | 252.24 | 9_14

Table A.4 Fault current of the IEEE 14 bus system

Primary relay | Backup relay | Primary Isc (A) | Backup Isc (A)
2_1_2 | 1_2_1 | 18255 | 3037
2_3_1 | 1_2_1 | 20731 | 2843
2_4_1 | 1_2_1 | 20658 | 2848
2_5_1 | 1_2_1 | 20567 | 2855
1_2_2 | 2_1_1 | 45262 | 2547
1_5_1 | 2_1_1 | 46275 | 2298
2_1_1 | 1_2_2 | 18255 | 3037
2_3_1 | 1_2_2 | 20731 | 2843
2_4_1 | 1_2_2 | 20658 | 2848
2_5_1 | 1_2_2 | 20567 | 2855
1_2_1 | 2_1_2 | 45262 | 2547
1_5_1 | 2_1_2 | 46275 | 2298
5_2_1 | 1_5_1 | 4169 | 1731
5_4_1 | 1_5_1 | 4205 | 1729
3_4_1 | 2_3_1 | 2046 | 1871
4_3_1 | 2_4_1 | 4690 | 1879
4_5_1 | 2_4_1 | 3484 | 1921
5_1_1 | 2_5_1 | 4122 | 1851
5_4_1 | 2_5_1 | 4205 | 1994
5_1_1 | 4_5_1 | 4122 | 2065
4_3_1 | 5_4_1 | 4690 | 2497
11_10_1 | 6_11_1 | 3574 | 3531
6_12_1 | 11_6_1 | 7517 | 1583
6_13_1 | 11_6_1 | 7296 | 1689
12_13_1 | 6_12_1 | 3371 | 3316
6_13_1 | 12_6_1 | 7296 | 834
13_12_1 | 6_13_1 | 5546 | 3948
13_14_1 | 6_13_1 | 4840 | 3736
6_11_1 | 13_6_1 | 6526 | 1246
6_12_1 | 13_6_1 | 7517 | 1194
10_11_1 | 9_10_1 | 5396 | 5272
9_14_1 | 10_9_1 | 7750 | 1824
14_13_1 | 9_14_1 | 3425 | 3264
9_10_1 | 14_9_1 | 7368 | 1374
11_6_1 | 10_11_1 | 3272 | 3229
10_9_1 | 11_10_1 | 2605 | 2471
13_14_1 | 12_13_1 | 4840 | 973
14_9_1 | 13_14_1 | 2455 | 2284
13_6_1 | 14_13_1 | 3767 | 1701
13_12_1 | 14_13_1 | 5546 | 1456


Fig. A.3 IEEE 30 bus test system

Table A.5 Load flow and contingence analysis of IEEE 30 bus system at maximum load Relays

Iload (A)

Iload (n − 1)

Device outage

1_2_1 2_1_1 1_2_2 2_1_2 1_3_1 3_1_1 2_4_1 3_4_1 4_3_1 2_5_1 2_6_1 4_6_1

1502.31 1502.31 1502.31 1502.31 1395.32 1395.32 766.32 1306.59 1306.59 1389.12 1036.21 1209.63

2576.63 2576.63 2576.63 2576.63 1802.00 1802.00 1440.52 1895.18 1895.18 1798.40 1392.32 2077.68

1_2 1_2 1_2 1_2 1_2 1_2 1_3 4_6 4_6 1_2 2_6 2_6 (continued)

218

Parameter Settings and Test System Data

Table A.5 (continued)

Relay    Iload (A)  Iload (n−1) (A)  Device outage
6_4_1    1209.63    2077.68          2_6
5_7_1    295.61     807.60           2_5
7_5_1    295.61     807.60           2_5
6_7_1    628.73     1793.58          9_11
7_6_1    628.73     1793.58          9_11
6_8_1    498.13     739.96           6_10
9_11_1   410.89     410.89           3_4
9_10_1   747.97     855.43           12_15
10_9_1   747.97     855.43           12_15
12_13_1  270.07     468.63           19_20
12_14_1  215.90     395.33           10_20
14_12_1  215.90     395.33           10_20
12_15_1  502.85     657.79           10_20
15_12_1  502.85     657.79           10_20
12_16_1  208.73     657.79           19_20
16_12_1  208.73     657.79           19_20
14_15_1  45.25      867.05           9_11
15_14_1  45.25      867.05           9_11
16_17_1  103.18     3157.20          9_11
17_16_1  103.18     3157.20          9_11
15_18_1  164.34     271.57           10_22
18_15_1  164.34     271.57           10_22
18_19_1  75.55      307.23           9_11
19_18_1  75.55      307.23           9_11
19_20_1  189.77     355.95           6_9
20_19_1  189.77     355.95           6_9
10_20_1  254.66     822.65           12_14
20_10_1  254.66     822.65           12_14
10_17_1  182.46     631.78           9_11
17_10_1  182.46     631.78           9_11
10_21_1  487.02     797.33           9_11
21_10_1  487.02     797.33           9_11
10_22_1  231.28     598.24           9_11
22_10_1  231.28     598.24           9_11
21_22_1  64.62      519.44           9_11
22_21_1  64.62      519.44           9_11
15_23_1  152.46     380.91           9_11
23_15_1  152.46     380.91           9_11
22_24_1  165.44     245.51           4_6
24_22_1  165.44     245.51           4_6


Table A.5 (continued)

Relay    Iload (A)  Iload (n−1) (A)  Device outage
23_24_1  57.52      57.52            0_0
24_23_1  57.52      57.52            0_0
24_25_1  54.55      333.10           9_11
25_24_1  54.55      333.10           9_11
25_26_1  111.85     154.44           24_25
25_27_1  129.61     314.09           29_30
27_25_1  129.61     314.09           29_30
27_29_1  168.22     348.53           29_30
27_30_1  191.14     348.53           27_30
29_30_1  98.48      1696.60          9_11
8_28_1   40.81      272.54           4_6
28_8_1   40.81      272.54           4_6
6_28_1   353.68     886.20           6_10

As the elements in operation or the network topology change, the load flow and fault analyses must be recomputed by a real-time algorithm. The transient reactance X'd of generators one and two is 0.01 and 0.03, respectively. The maximum load is the same as in the original IEEE 30-bus load data (Table A.6).
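Because a relay's pickup must remain above the largest load current it can see, the n−1 contingency values of Table A.5 bound the pickup setting from below. A minimal sketch of that bound, assuming a hypothetical overload factor of 1.25 (not a value given in the text):

```python
# Sketch: contingency-aware lower bound on the pickup current, taking the
# worst of the normal and n-1 load currents from Table A.5. The overload
# factor OLF = 1.25 is an assumed, illustrative value.

OLF = 1.25  # assumed overload factor

# (relay, Iload, Iload_n1) rows taken from Table A.5
rows = [
    ("1_2_1", 1502.31, 2576.63),
    ("2_4_1", 766.32, 1440.52),
    ("29_30_1", 98.48, 1696.60),
]

for relay, i_load, i_load_n1 in rows:
    pickup = OLF * max(i_load, i_load_n1)
    print(f"{relay}: pickup >= {pickup:.1f} A")
```

Note how relay 29_30_1 illustrates the point of the table: its normal load current (98.48 A) is small, but the outage of line 9_11 raises it to 1696.60 A, so a pickup chosen from the normal case alone would trip under the contingency.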

Table A.6 Fault currents of the IEEE 30-bus system

Primary relay  Backup relay  Primary Isc (A)  Backup Isc (A)
2_1_2          1_2_1         73571            12461
2_4_1          1_2_1         83001            11696
2_5_1          1_2_1         83668            11649
2_6_1          1_2_1         83343            11671
1_2_2          2_1_1         181228           10441
1_3_1          2_1_1         185416           9381
2_1_1          1_2_2         73571            12461
2_4_1          1_2_2         83001            11696
2_5_1          1_2_2         83668            11649
2_6_1          1_2_2         83343            11671
1_2_1          2_1_2         181228           10441
1_3_1          2_1_2         185416           9381
3_4_1          1_3_1         8377             8390
4_3_1          2_4_1         16716            7431
4_6_1          2_4_1         17056            7974


Table A.6 (continued)

Primary relay  Backup relay  Primary Isc (A)  Backup Isc (A)
4_6_1          3_4_1         17056            6871
3_1_1          4_3_1         11928            11935
5_7_1          2_5_1         8201             7478
6_4_1          2_6_1         14345            7702
6_7_1          2_6_1         19042            7518
6_8_1          2_6_1         22081            7224
6_28_1         2_6_1         22118            7224
6_7_1          4_6_1         19042            9991
6_8_1          4_6_1         22081            9774
6_28_1         4_6_1         22118            9778
4_3_1          6_4_1         16716            8372
7_6_1          5_7_1         5380             5089
7_5_1          6_7_1         9882             9611
8_28_1         6_8_1         14377            13791
10_20_1        9_10_1        12987            5445
10_17_1        9_10_1        12392            5530
10_21_1        9_10_1        13398            5391
10_22_1        9_10_1        13496            5375
9_11_1         10_9_1        13652            5642
14_15_1        12_14_1       5379             5288
12_15_1        14_12_1       11600            1675
15_14_1        12_15_1       9914             5794
15_18_1        12_15_1       9184             5384
15_23_1        12_15_1       9078             5347
12_13_1        15_12_1       12377            2280
12_14_1        15_12_1       12286            2759
12_16_1        15_12_1       11053            2844
16_17_1        12_16_1       5831             5762
12_13_1        16_12_1       12377            2280
12_14_1        16_12_1       12286            2318
12_15_1        16_12_1       11600            2616
18_19_1        15_18_1       5016             4964
15_12_1        18_15_1       7491             2278
15_14_1        18_15_1       9914             1974
15_23_1        18_15_1       9078             2251
19_20_1        18_19_1       4108             3930
18_15_1        19_18_1       4204             4150
20_10_1        19_20_1       3711             3671
19_18_1        20_19_1       5376             5207


Table A.6 (continued)

Primary relay  Backup relay  Primary Isc (A)  Backup Isc (A)
20_19_1        10_20_1       6137             6102
17_16_1        10_17_1       8871             8670
10_9_1         17_10_1       9485             2363
10_20_1        17_10_1       12987            2475
10_21_1        17_10_1       13398            2191
10_22_1        17_10_1       13496            2166
21_22_1        10_21_1       9700             9300
10_22_1        21_10_1       13496            2210
22_21_1        10_22_1       9141             6278
22_24_1        10_22_1       9762             3749
10_21_1        22_10_1       13398            2016
22_24_1        21_22_1       9762             6014
21_10_1        22_21_1       8684             8265
23_24_1        15_23_1       5174             5106
15_12_1        23_15_1       7491             2306
15_14_1        23_15_1       9914             2046
15_18_1        23_15_1       9184             2316
24_23_1        22_24_1       7420             5181
24_25_1        22_24_1       7425             4804
22_10_1        24_22_1       10319            2704
22_21_1        24_22_1       9141             2886
24_22_1        23_24_1       5463             3075
24_25_1        23_24_1       7425             2480
23_15_1        24_23_1       4081             4010
25_26_1        24_25_1       6554             3308
25_27_1        24_25_1       3679             3585
24_22_1        25_24_1       5463             2247
24_23_1        25_24_1       7420             2096
27_29_1        25_27_1       7235             2473
27_30_1        25_27_1       7237             2473
25_24_1        27_25_1       3667             3586
25_26_1        27_25_1       6554             3295
29_30_1        27_29_1       3209             3163
28_8_1         6_28_1        12488            11819