Stochastic Global Optimization Methods and Applications to Chemical, Biochemical, Pharmaceutical and Environmental Processes
ISBN: 9780128173923, 0128173920
English | xx, 289 pages : illustrations; 24 cm | 2020


Table of contents:
Cover......Page 1
Stochastic Global Optimization Methods and Applications to Chemical, Biochemical, Pharmaceutical and Environmental Processes......Page 2
Copyright......Page 3
About the authors......Page 4
Preface......Page 5
1 - Basic features and concepts of optimization......Page 9
1.2.1 Optimization and its benefits......Page 10
1.2.2 Scope for optimization......Page 11
1.2.3 Illustrative examples......Page 11
1.2.4 Essential requisites for optimization......Page 14
1.3.1 Functions in optimization......Page 15
1.3.2 Interpretation of behavior of functions......Page 20
1.3.3 Maxima and minima of functions......Page 23
1.3.4 Region of search for constrained optimization......Page 26
1.4.1 Classification of optimization problems......Page 27
1.4.2 General procedure of solving optimization problems......Page 31
1.4.3 Bottlenecks in optimization......Page 31
1.5 Summary......Page 32
References......Page 33
2 - Analytical methods of optimization......Page 34
2.2 Statement of optimization problem......Page 35
2.3 Analytical methods for unconstrained single-variable functions......Page 36
2.3.2 Sufficient conditions for convexity and concavity of a function......Page 37
2.4 Analytical methods for unconstrained multivariable functions......Page 39
2.5.1 Direct substitution......Page 43
2.5.2 Penalty function approach......Page 45
2.5.3.1 Necessary condition for a basic problem......Page 47
2.5.3.2 Necessary condition for a general problem......Page 49
2.5.3.3 Sufficient conditions for a general problem......Page 50
2.6 Analytical methods for solving multivariable optimization problems with inequality constraints......Page 52
2.6.1 Kuhn–Tucker conditions for problems with inequality constraints......Page 53
2.6.2 Kuhn–Tucker conditions for problems with inequality and equality constraints......Page 55
References......Page 58
3 - Numerical search methods for unconstrained optimization problems......Page 60
3.2 Classification of numerical search methods......Page 61
3.3.1 Newton's method......Page 62
3.3.2 Quasi-Newton method......Page 65
3.3.3 Secant method......Page 67
3.4.1 Quadratic interpolation method......Page 70
3.4.2 Cubic interpolation method......Page 74
3.5 Multivariable direct search methods......Page 77
3.5.2 Hooke–Jeeves pattern search method......Page 78
3.5.2.2 Pattern move......Page 79
3.5.3 Powell's conjugate direction method......Page 81
3.5.4 Nelder–Mead simplex method......Page 83
3.6 Multivariable gradient search methods......Page 85
3.6.1 Steepest descent method......Page 86
3.6.2 Multivariable Newton's method......Page 88
3.6.3 Conjugate gradient method......Page 90
References......Page 92
4 - Stochastic and evolutionary optimization algorithms......Page 94
4.1 Introduction......Page 95
4.2.3 Genetic algorithm implementation procedure......Page 97
4.2.4 Genetic algorithm pseudocode......Page 99
4.3.1 Historical background......Page 100
4.3.2 Basic principle......Page 101
4.3.3 Simulated annealing implementation procedure......Page 102
4.3.4 Simulated annealing pseudocode......Page 103
4.4.2 Basic principle......Page 104
4.4.3 Differential evolution implementation procedure......Page 105
4.4.5 Advantages and limitations of differential evolution......Page 106
4.5.2 Basic principle......Page 107
4.5.3 Ant colony optimization implementation procedure......Page 108
4.5.5 Advantages and disadvantages of ant colony optimization......Page 110
4.6.3 Tabu search implementation procedure......Page 111
4.6.3.1 Neighbor generation and neighborhood search......Page 112
4.6.3.6 Stopping criteria......Page 113
4.6.5 Advantages and disadvantages of tabu search......Page 114
4.7.2 Basic principle......Page 115
4.7.3 Particle swarm optimization implementation procedure......Page 116
4.7.6 Applications of particle swarm optimization......Page 118
4.8.2 Basic principle......Page 119
4.8.3 Artificial bee colony algorithm implementation procedure......Page 120
4.8.4 Pseudocode of artificial bee colony algorithm......Page 121
4.9.1 Historical background......Page 122
4.9.3 Cuckoo search implementation procedure......Page 123
4.9.5 Advantages and limitations of cuckoo search algorithm......Page 125
References......Page 126
5.1 Introduction......Page 131
5.2 Examples of numerical functions......Page 132
5.3.1 Genetic algorithm implementation strategy......Page 132
5.3.2 Optimization results of genetic algorithm......Page 133
5.4.2 Optimization results of simulated annealing......Page 136
5.5.1 Differential evolution implementation strategy......Page 137
5.5.2 Optimization results of differential evolution......Page 139
5.8.1 Artificial bee colony implementation strategy......Page 144
5.10 Summary......Page 147
References......Page 151
6 - Application of stochastic evolutionary optimization techniques to chemical processes......Page 153
6.1 Introduction......Page 154
6.2.1 Optimal control and its importance in polymerization reactors......Page 155
6.2.3 Multistage dynamic optimization strategy......Page 156
6.2.4 The polymerization process and its mathematical representation......Page 157
6.2.5 Control objectives......Page 161
6.2.6 Multistage dynamic optimization of SAN copolymerization process using DE......Page 162
6.2.7 Analysis of results......Page 163
6.3.2 Multistage dynamic optimization of SAN copolymerization process using tabu search......Page 167
6.4 Optimization of multiloop proportional–integral controller parameters of a reactive distillation column using genetic algorithm......Page 169
6.4.1 The need of evolutionary algorithm for optimization of multiloop controller parameters......Page 170
6.4.3 Controller design using genetic algorithms......Page 172
6.4.3.1 Compositions estimation......Page 173
6.4.3.4 Desired response specifications for controller tuning......Page 174
6.4.3.5 Optimal tuning of controller parameters......Page 176
6.5.1 The need for stochastic optimization methods in design of nonlinear control strategies......Page 178
6.5.2.1 Polynomial ARMA model......Page 180
6.5.3 The process representation......Page 181
6.5.4.1 Optimal control policy computation using genetic algorithm......Page 183
6.5.4.2 Optimal control policy computation using SA......Page 184
6.5.4.2.1 Implementation procedure......Page 185
6.5.5 Analysis of results......Page 186
6.6 Summary......Page 189
References......Page 191
7 - Application of stochastic evolutionary optimization techniques to biochemical processes......Page 199
7.3 Media optimization of Chinese hamster ovary cells production process using differential evolution......Page 201
7.3.1 CHO cell cultivation process and its macroscopic state space model......Page 202
7.3.3 Analysis of results......Page 203
7.4.1 The lipopeptide biosurfactant process and its culture medium......Page 204
7.4.3 Development of response surface models for lipopeptide biosurfactant process......Page 206
7.4.4 RSM-ACO strategy for lipopeptide process optimization......Page 207
7.4.5 Analysis of results......Page 209
7.5.2 Experimental design and data generation......Page 210
7.5.4 Formulation of multiobjective optimization problem......Page 211
7.5.5 ANN-NSDE strategy for multiobjective optimization of rhamnolipid process......Page 212
7.5.5.2 ANN-NSDE with ε-constraint......Page 213
7.5.6 Analysis of results......Page 216
7.6 ANN-DE strategy for simultaneous optimization of rhamnolipid biosurfactant process......Page 217
7.6.3 ANN model for rhamnolipid process......Page 218
7.6.4 Simultaneous optimization of rhamnolipid biosurfactant process using ANN-DE......Page 219
7.6.4.2 Simultaneous optimization using a distance minimization function......Page 220
7.7 Summary......Page 223
References......Page 224
8 - Application of stochastic evolutionary optimization techniques to pharmaceutical processes......Page 228
8.2 Quantitative model–based pharmaceutical formulation......Page 229
8.3 Simultaneous optimization of pharmaceutical (trapidil) product formulation using radial basis function network methodology......Page 230
8.3.2 Radial basis function network and its automatic configuration......Page 231
8.3.3 Configuring RBFN to trapidil formulation......Page 234
8.3.4 Simultaneous optimization study......Page 235
8.4.1 Basic algorithms and essential components for formulation of multiobjective optimization strategy......Page 236
8.4.2 Configuring RBFN to pharmaceutical formulation......Page 237
8.4.3.2 RBFN-NSDE with ε constraint......Page 238
8.4.4 Analysis of results......Page 240
8.5.2 Response surface model for pharmaceutical formulation......Page 242
8.6 Multiobjective optimization of cytotoxic potency of a marine macroalgae on human carcinoma cell lines using nonsorting genetic algorithm......Page 243
8.6.1 Cytotoxic potency of marine macroalgae and necessity for its quantitative treatment......Page 244
8.6.2 Response surface model for evaluating the cytotoxic potency of marine macroalgae on human carcinoma cell lines......Page 245
8.6.4 NSGA-based multiobjective optimization strategy for enhancing cytotoxic potency of marine macroalgae on human carcinoma cell lines......Page 246
References......Page 249
9 - Application of stochastic evolutionary optimization techniques to environmental processes......Page 252
9.2 Modeling and optimization of different environmental processes......Page 253
9.2.1 Air pollution—significance of modeling and optimization......Page 254
9.3 Process model–based optimization of distillery industry wastewater treating fixed bed anaerobic biofilm reactor using ant colony optimization......Page 255
9.3.1 The importance of biofilm reactors in industry wastewater treatment......Page 256
9.3.2 Experimental biofilm reactor and data generation......Page 257
9.3.3 Mathematical and kinetic models......Page 258
9.3.3.1.1 One-dimensional model......Page 259
9.3.3.2.1 Monod model......Page 260
9.3.3.3 Film thickness......Page 261
9.3.5 Ant colony optimization–based inverse modeling of biofilm reactor......Page 262
9.3.5.2 Biofilm reactor with gravel stones......Page 263
9.3.6 Analysis of results......Page 264
9.4 Process model–based optimization of pharmaceutical industry wastewater treating a fixed bed anaerobic biofilm reactor using .........Page 266
9.4.2 Mathematical and kinetic models......Page 267
9.4.4 Analysis of results......Page 268
9.5.2 ACO implementation strategy......Page 270
9.5.3 Analysis of results......Page 271
9.6 Optimal estimation of wastewater treating biofilm reaction kinetics using hybrid mechanistic-neural network rate function model......Page 272
9.6.1 Configuration of hybrid mechanistic-neural network rate function model......Page 273
9.6.3 Design and implementation of hybrid neural network rate function model for optimal estimation of biofilm reaction kinetics......Page 274
9.6.4 Analysis of results......Page 277
References......Page 279
10 - Conclusions......Page 284
Index......Page 286
Back Cover......Page 295


Stochastic Global Optimization Methods and Applications to Chemical, Biochemical, Pharmaceutical and Environmental Processes Ch. Venkateswarlu B V Raju Institute of Technology, Narsapur, Andhra Pradesh, India; Formerly: Indian Institute of Chemical Technology (CSIR-IICT), Hyderabad, Telangana, India

Satya Eswari Jujjavarapu Department of Biotechnology, National Institute of Technology, Raipur, Chhattisgarh, India

Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2020 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-817392-3

For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Susan Dennis Acquisition Editor: Kostas Marinakis Editorial Project Manager: Andrea Dulberger Production Project Manager: Nirmala Arumugam Cover Designer: Greg Harris Typeset by TNQ Technologies

About the authors

Dr. Ch. Venkateswarlu, Director R&D at B V Raju Institute of Technology (BVRIT), Narsapur, Greater Hyderabad, India, earlier worked as Scientist, Senior Principal Scientist, and Chief Scientist at the Indian Institute of Chemical Technology (IICT), Hyderabad, a premier research and development (R&D) institute of the Council of Scientific and Industrial Research (CSIR, India). Before becoming Director R&D at BVRIT, he worked as Professor, Principal, and Head of the Chemical Engineering Department of the same institute. He graduated from Andhra University as well as from the Indian Institute of Chemical Engineers, and obtained his postgraduation and PhD in Chemical Engineering from Osmania University, Hyderabad, India. He has 35 years of R&D experience, along with 19 years of teaching experience and 2 years of industry experience. His research interests lie in the areas of dynamic process modeling and simulation, process identification and dynamic optimization, process monitoring and fault diagnosis, state estimation and soft sensing, statistical process control and advanced process control, applied engineering mathematics and evolutionary computing, artificial intelligence and expert systems, and bioprocess engineering and bioinformatics. He has published more than 100 research papers in peer-reviewed journals of repute, along with a few international and national proceedings publications. He is also credited with 150 technical paper presentations and invited lectures, and a few book chapters. He has executed several R&D projects sponsored by DST and industry. He is a reviewer for several international research journals and for many national and international research project proposals. He has guided several postgraduate and PhD students. He served as a long-term guest faculty member for premier institutes such as the Bhabha Atomic Research Centre Scientific Officer Training program, BITS Pilani MS (off-campus), and the IICT-CDAC Bioinformatics programs. He is a Fellow of the Andhra Pradesh Academy of Sciences and the Telangana State Academy of Sciences in India. He has received various awards in recognition of his R&D and academic contributions.

Dr. Satya Eswari Jujjavarapu is currently an Assistant Professor in the Biotechnology Department of the National Institute of Technology (NIT), Raipur, India. She received her MTech in Biotechnology from the Indian Institute of Technology (IIT), Kharagpur, and her PhD from IIT Hyderabad. During her research career, she worked as a DST woman scientist at the Indian Institute of Chemical Technology (IICT), Hyderabad. Her fields of specialization include bioinformatics, biotechnology, process modeling, evolutionary optimization, and artificial intelligence. She has gained considerable expertise in the application of mathematical and engineering tools to biotechnological processes. She has published more than 18 SCI/Scopus research papers and 25 international conference proceedings. She completed a DST woman scientist project and is currently handling a DST Early Career Research project and a CCOST project. She has more than 4 years of teaching experience and 3 years of research experience.

Preface

This book is addressed to students, researchers, and industry professionals working in multiple domains of engineering and technology. It covers fundamental, classical, and advanced optimization topics with a number of examples and case studies that help readers of different disciplines gain knowledge and apply it to the problems encountered in their domains. General readers of the initial chapters, which cover classical optimization topics, are expected to be familiar with the fundamentals of mathematics and calculus. Readers of the later chapters on advanced optimization topics are expected to be familiar with the fundamentals of optimization along with basic domain knowledge in engineering and science.

Optimization is of great interest, and it has widespread applications in engineering, science, and business. It has become a major technology contributor to the growth of industry. It is extensively used in solving a wide variety of problems in the design, operation, and analysis of engineering and technological processes. The initial chapters of this book emphasize various classical methods of optimization and their applications. The book mainly focuses on evolutionary, stochastic, and artificial intelligence optimization algorithms, with special emphasis on their design, analysis, and implementation for solving complex optimization problems, and it includes a number of real applications concerning chemical, biochemical, pharmaceutical, and environmental engineering processes. The formulation, design, and implementation of various advanced optimization strategies to solve a wide variety of base-case and real engineering problems make this book beneficial to researchers working in multiple domains.

Chapter 1 provides motivation for optimization by presenting its basic features along with its scope, examples of applications, and essential components. The chapter then describes the basic concepts of optimization in terms of functions, the behavior of functions, and the maxima and minima of functions. It also deals with the region of search within the constraints, the classification of optimization problems, the general solution procedure, and the obstacles of optimization.

Chapter 2 discusses classical analytical methods of optimization. The classical optimization techniques are analytical in nature and make use of differential calculus to solve problems involving continuous differentiable functions. These techniques, with their necessary and sufficient conditions, are employed to find the optimum of unconstrained single-variable functions and of multivariable functions with equality and inequality constraints.

Chapter 3 elaborates numerical search methods for unconstrained optimization problems. The classical analytical methods based on necessary conditions and analytical derivatives can yield exact solutions for functions that have no complex form of expression. They are usually difficult to apply to nonlinear functions whose analytical derivatives are hard to compute and to functions involving many variables. Most algorithms of unconstrained and constrained optimization therefore make use of numerical search techniques to locate the minimum (maximum) of single-variable and multivariable functions. These numerical search methods find the optimum by using the function f(x), and sometimes derivative values of f(x), at successive trial points of x. Chapter 3 discusses various gradient and direct search methods used to solve single-variable and multivariable optimization problems: one-dimensional gradient search methods, polynomial approximation methods, multivariable direct search methods, and multivariable gradient search methods, with examples.

Chapter 4 describes various stochastic and evolutionary optimization algorithms. Classical optimization methods fail to solve problems that pose difficulties concerning dimensionality, differentiability, multimodality, and nonlinearity in the objective function and constraints, and problems that have many local optima. There has been rapidly growing interest in advanced optimization algorithms over the last decade, and stochastic and evolutionary optimization methods are increasingly used to solve challenging optimization problems. These methods are typically inspired by phenomena from nature, and they are robust. They are capable of locating the global optimum of multimodal functions, and they offer flexibility and ease of operation. They do not require any gradient information and are even suitable for solving discrete optimization problems. They are extensively used in the analysis, design, and operation of systems that are highly nonlinear, high dimensional, and noisy, and for solving problems that are not easily solved by classical deterministic methods of optimization. Various stochastic and global optimization methods are now becoming industry standard. Chapter 4 mainly focuses on evolutionary and stochastic optimization algorithms such as the genetic algorithm, simulated annealing, differential evolution, ant colony optimization, tabu search, particle swarm optimization, the artificial bee colony algorithm, and the cuckoo search algorithm. These algorithms are described in detail with flow schemes and implementation procedures.

Implementation of stochastic global optimization methods on base-case problems involving continuous and discrete numerical functions gives intriguing insight into the efficacy of these methods for further implementation in real engineering applications. Chapter 5 provides different base-case applications and a performance evaluation of various stochastic global optimization methods.

Chapter 6 discusses the application of stochastic evolutionary optimization techniques to chemical processes. The chemical industry is experiencing significant changes because of global market competition, strict bounds on product specifications, pricing pressures, and environmental issues. Optimization is the most important approach addressing performance issues in several areas of chemical process engineering, including process design, process development, process modeling, process identification, process control, and real-time process operation. Optimization is also used in process synthesis, experimental design, planning, scheduling, distribution, and the integration of process operations. Most chemical engineering problems exhibit highly nonlinear dynamics and often present nonconvexity, discontinuity, and multimodality. Classical deterministic optimization methods are not effective in solving optimization problems of complex chemical processes and often require high computational time. Stochastic evolutionary optimization methods are robust numerical techniques widely used to solve complex chemical engineering problems that are not easily handled by classical deterministic methods. Chapter 6 deals with various real applications of stochastic and evolutionary optimization strategies to chemical processes that are highly nonlinear and high dimensional. In this chapter, different stochastic optimization-based multistage dynamic optimization, multiloop controller tuning, and nonlinear model predictive control strategies are designed and applied to complex and high-dimensional processes such as semibatch copolymerization reactors and reactive distillation columns.

Chapter 7 concentrates on the application of stochastic evolutionary optimization techniques to biochemical processes. Bioprocess technology plays a vital role in delivering innovative and sustainable products and processes to fulfill the needs of society. In the present situation of increasing energy demand, depleting natural resources, and ever-growing environmental awareness, bioprocesses occupy a unique position in converting a variety of resources into useful products. Modeling and optimization techniques are increasingly used to understand and improve cellular-based processes. The advantages of these techniques include reducing excessive experimentation, facilitating the most informative experiments, providing strategies to optimize and automate the processes, and reducing the cost and time of devising operational strategies. Model-based bioprocess optimization provides a quantitative and systematic framework to maximize process profitability, safety, and reliability. Chapter 7 mainly focuses on the application of stochastic evolutionary optimization techniques for modeling and optimization of biotechnological processes. Various mathematical, empirical, and artificial neural network model-based stochastic and evolutionary optimization strategies are designed and applied to different biotechnical processes such as Chinese hamster ovary (CHO) cell production and the lipopeptide and rhamnolipid biosurfactant processes.

Chapter 8 presents the application of evolutionary and artificial intelligence-based optimization techniques to pharmaceutical processes. Design, modeling, and optimization studies can lead to considerable benefits in pharmaceutical processes in terms of improved productivity and product quality as well as reduced energy consumption and environmental pollution. In this chapter, different strategies are derived by combining artificial neural networks, radial basis function networks, and statistical response surface models with differential evolution, nonsorting differential evolution, and nonsorting genetic algorithms, and these strategies are applied to the simultaneous optimization of pharmaceutical product formulation, multiobjective Pareto optimization of a pharmaceutical product formulation, and multiobjective optimization of the cytotoxic potency of marine macroalgae on human carcinoma cell lines.

Chapter 9 focuses on the application of stochastic evolutionary optimization techniques to environmental processes. Modeling and optimization studies can bring considerable benefits to environmental engineering systems in terms of efficiency improvement, energy reduction, and pollution control. A variety of optimization approaches are used for the solution of environmental problems in the areas of air pollution; solid, liquid, and industrial waste management; and energy management. This chapter mainly focuses on the application of stochastic and evolutionary optimization techniques to environmental processes concerning industrial wastewater treatment. Various process model- and artificial intelligence model-based strategies involving stochastic optimization algorithms such as ant colony optimization and tabu search are derived and applied for modeling and optimization of different wastewater treatment processes.

Chapter 10, given at the end of the book, presents the conclusions. It summarizes the essence of the book and its benefit to the readers. Many references covering a variety of classical and advanced optimization topics are included at the end of each chapter. These references will be immensely useful to readers who wish to advance their general and domain knowledge in the field of optimization.

CHAPTER 1
Basic features and concepts of optimization

Chapter outline
1.1 Introduction ... 2
1.2 Basic features ... 2
1.2.1 Optimization and its benefits ... 2
1.2.2 Scope for optimization ... 3
1.2.3 Illustrative examples ... 3
1.2.4 Essential requisites for optimization ... 6
1.3 Basic concepts ... 7
1.3.1 Functions in optimization ... 7
1.3.2 Interpretation of behavior of functions ... 12
1.3.3 Maxima and minima of functions ... 15
1.3.4 Region of search for constrained optimization ... 18
1.4 Classification and general procedure ... 19
1.4.1 Classification of optimization problems ... 19
1.4.2 General procedure of solving optimization problems ... 23
1.4.3 Bottlenecks in optimization ... 23
1.5 Summary ... 24
References ... 25

Optimization is the process of finding the set of conditions required to achieve the best solution in a given situation. Optimization is of great interest and finds widespread use in engineering, science, economics, and operations research. This introductory chapter presents the basic features and concepts that set the stage for the development of optimization methods in the subsequent chapters.

Stochastic Global Optimization Methods and Applications to Chemical, Biochemical, Pharmaceutical and Environmental Processes. https://doi.org/10.1016/B978-0-12-817392-3.00001-6 Copyright © 2020 Elsevier Inc. All rights reserved.


1.1 Introduction
A wide variety of problems in the design, operation, and analysis of engineering and technological processes can be resolved by optimization. This chapter provides the motivation for the topic of optimization by presenting its basic features along with its scope, examples of its applications, and its essential components. Furthermore, its basic concepts are described in terms of functions, behavior of functions, and maxima and minima of functions. This chapter further deals with the region of search within the constraints, the classification of problems in optimization, the general solution procedure, and the obstacles of optimization.

1.2 Basic features
Optimization, with its mathematical principles and techniques, is used to solve a wide variety of quantitative problems in many disciplines. In an industrial environment, optimization can be used to make decisions at different levels. It is useful to begin the subject of optimization with its basic features and concepts.

1.2.1 Optimization and its benefits
Optimization is the process of selecting the best course of action from the available resources. Optimization problems are made up of three basic components: an objective function, a set of unknowns or decision variables, and a set of constraints. An objective function can be of the maximization or minimization type. In an industrial system, decisions are made either to minimize cost or to maximize profit, and profit maximization or cost minimization is expressed by means of a performance index. Decision variables are the variables that engineers or managers adjust in a technological or managerial system to achieve the desired objective; optimization has to find the values of the decision variables that yield the best value of the performance criterion. Constraints are the restrictions imposed on the system within which the decision variables are chosen to maximize the benefit or minimize the effort.
Optimization has widespread applications in engineering and science and has become a major technology contributor to the growth of industry. In plant operations, optimization provides improved plant performance in terms of improved yields of valuable products, reduced energy consumption, and higher processing rates. Optimization can also benefit plants by means of reduced maintenance costs, less equipment wear, and better staff utilization. It helps in the planning and scheduling of efficient construction of plants. With the systematic identification of the objective, constraints, and degrees of freedom in processes or plants, optimization leads to improved quality of design, faster and more reliable troubleshooting, and faster decision-making. It helps in minimizing inventory charges and increases overall efficiency through the allocation of resources or services among various processes or activities. It also facilitates reduced transportation charges through strategic planning of distribution networks for products and the procurement of raw materials from different sources.


1.2.2 Scope for optimization
Optimization can be applied to an entire company, a plant, a process, a single unit operation, or a single piece of equipment. In a typical industrial environment, optimization can be used for making decisions at the management level, the plant design level, and the plant operations level [1].
Management level: At the management level, optimization helps in making decisions concerning project evaluation, product selection, corporate budget, investment in sales, research and development, and new plant construction. At this stage, the available information is qualitative and uncertain, as these decisions are made well in advance of the plant design level.
Plant design level: Decisions made at this level concern the choice of process (batch or continuous), nominal operating conditions, configuration of the plant, size of individual units, use of flow sheeting programs, and the aid of process design simulators.
Plant operations level: Decisions at this stage include allocation of raw materials on a weekly/daily basis, day-to-day optimization of a plant to minimize steam and cooling water consumption, operating controls for a given unit at certain temperatures and pressures, and costs toward shipping, transportation, and distribution of products.

1.2.3 Illustrative examples
The basic applications of optimization are explained in terms of different illustrative examples concerning industry.
(a) Optimum pipe diameter for pumping fluid
One typical example is the problem of determining the optimum pipe diameter for pumping a given amount of fluid from one point to another. The required amount of fluid can be pumped between the two points through pipes of different diameters. However, there is one particular pipe diameter that minimizes the total cost, representing the cost of pumping the liquid plus the cost of the installed piping system, as shown in Fig. 1.1 [2]. From the figure it can be observed that the pumping cost increases with decreasing pipe diameter because of frictional effects, while the fixed charges for the pipeline become lower with decreasing pipe diameter because of the reduced capital investment. The optimum diameter is located at point E, where the sum of the pumping costs and the fixed costs for the pipeline becomes a minimum.
FIGURE 1.1 Determination of optimum pipe diameter.
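The tradeoff in Fig. 1.1 is easy to reproduce numerically. The following is a minimal sketch, assuming a pumping cost that falls as D⁻⁵ (friction at fixed flow rate) and an installed-pipe charge that grows linearly with D; the coefficients a and b are invented for illustration and are not from the book.

```python
# Minimal sketch of the optimum-pipe-diameter tradeoff (assumed cost model).
# Pumping cost ~ a / D**5 (friction at fixed flow); capital charge ~ b * D.
from scipy.optimize import minimize_scalar

a, b = 2.0e-3, 40.0          # illustrative cost coefficients, $/(year.ft)

def total_cost(D):           # D: pipe diameter in ft
    return a / D**5 + b * D  # pumping cost + fixed charges

res = minimize_scalar(total_cost, bounds=(0.05, 2.0), method="bounded")
print(f"optimum diameter ~ {res.x:.3f} ft, minimum cost ~ {res.fun:.2f}")
```

For this assumed model the optimum also follows analytically from dC/dD = 0, i.e., D = (5a/b)^(1/6), which the numerical result reproduces.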

(b) Optimizing air supply to a combustion process
Combustion occurs when fuels such as natural gas, fuel oil, or coal react with the oxygen in air to form carbon dioxide (CO2) and generate heat that can be used in industrial processes [3]. In a combustion process involving a typical fuel,

when a small quantity of air is supplied to the burner, there is not enough oxygen in the air to react completely with all the carbon in the fuel to form CO2, and some oxygen combines with the carbon in the fuel to form carbon monoxide (CO). CO is a highly toxic gas associated with combustion, and its formation has to be minimized; the most efficient use of the fuel is achieved when the CO concentration in the exhaust is minimized. This happens only when there is sufficient oxygen (O2) in the air to react with all the carbon in the fuel. The theoretical air required for the combustion reaction depends on the fuel composition and the rate of fuel supply. As the air level is increased up to 100% of the theoretical air, the concentration of CO decreases rapidly to a minimum while the CO2 and O2 concentrations attain their maximum levels. A further increase in the air supply begins to dilute the exhaust gases, causing the CO2 concentration to decrease. These situations are depicted in Fig. 1.2. Optimizing the air supply, or the air–fuel ratio, is desired to increase the efficiency of the combustion process.

FIGURE 1.2 Efficiency of a combustion process as a function of air supply (% CO2, % O2, and % CO in the exhaust versus % theoretical air).

(c) Optimal dilution rate of a chemostat
A chemostat is a completely mixed continuous stirred tank reactor used for the cultivation of cells. The steady-state mass balances on substrate and cells in a chemostat are described by a Monod chemostat model [4]. Under sterile feed conditions, the initial cell mass concentration in the feed is zero. The dilution rate (D) is an important parameter characterizing the processing rate of the chemostat, and the steady-state values of the substrate concentration (s) and cell mass concentration (x) in the chemostat are influenced by it. When the dilution rate is low, such that D → 0 and s → 0, x increases to a high value. As D increases, s increases at a faster rate and x tends to decline. As D approaches its maximum rate Dmax, x becomes zero and s attains a high value. The condition of loss of cells at steady state, where x = 0 and D = Dmax, is called the near-washout condition; when D > Dmax, total washout of the cells occurs. Near washout, the chemostat is very sensitive to variations in D: a small change in D gives a relatively large shift in x and s. These situations are shown in Fig. 1.3. The rate of cell production per unit volume of the chemostat is expressed as Dx, which is a function of D as shown in Fig. 1.4.
FIGURE 1.3 Variation of x and s with respect to D.
FIGURE 1.4 Optimum dilution rate that maximizes cell production.
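For Monod kinetics, the optimum in Fig. 1.4 can be located numerically from the steady-state balances; the sketch below scans D for the production rate Dx. The kinetic and yield parameter values (mu_max, Ks, Y, sf) are assumed for illustration only.

```python
# Sketch: locate the dilution rate that maximizes cell production D*x in a
# chemostat at steady state (Monod kinetics; parameter values are assumed).
import numpy as np

mu_max, Ks, Y, sf = 1.0, 0.2, 0.5, 10.0   # 1/h, g/L, g cells/g substrate, g/L

D = np.linspace(1e-4, 0.98 * mu_max, 2000)   # stay below washout
s = D * Ks / (mu_max - D)                    # steady-state substrate
x = Y * (sf - s)                             # steady-state cell mass
valid = s < sf                               # physically meaningful states
Dx = np.where(valid, D * x, -np.inf)         # cell production rate

D_opt = D[np.argmax(Dx)]
print(f"D_opt ~ {D_opt:.3f} 1/h")
# Analytic check for Monod kinetics: D_opt = mu_max*(1 - sqrt(Ks/(Ks + sf)))
print(mu_max * (1 - np.sqrt(Ks / (Ks + sf))))
```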

(d) Optimizing the design condition to deal with model uncertainty
Model uncertainty is an important issue in product or process design, and failure to account for it may lead to degradation of process performance [5]. The inputs or parameters of the model are to be selected such that the output responses are robust, or insensitive, to variations in the model parameters. In a multiresponse optimization problem, the optimal operating condition chosen for the variables has to deal simultaneously with robustness and performance for multiple responses. In such a case, the multiple-response problem is converted into a single objective function expressed as a loss function, which takes into account the correlation among the responses and the process economics. This function represents the variability in the response, which in turn depends on the operating condition chosen for the variables. The response variability or model uncertainty can be illustrated in Fig. 1.5, where the

dashed curve represents the true response and the solid curve corresponds to the fitted model. If the goal is to maximize the response performance, the model should provide the optimal value of the design variable at point A. If the fitted model instead provides point B as the design variable, the response exhibits much lower performance than the true optimum. Thus, even a slight deviation of the fitted model from the true model might result in unacceptable performance: the difference between the true response and the predicted response is larger at design point B than at A.
FIGURE 1.5 Response behavior due to model uncertainty.
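The gap between points A and B is easy to demonstrate numerically. In the sketch below, both response curves are invented: a "true" response peaking at one design point and a "fitted" model whose peak is shifted. Optimizing the fitted model and then evaluating the true response at that point exposes the performance loss.

```python
# Sketch of the fitted-vs-true response issue in Fig. 1.5: optimizing a fitted
# model whose optimum (point B) is shifted from the true optimum (point A).
# Both response curves are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

def f_true(x):                      # "true" response (peak at x = 2.0)
    return np.exp(-(x - 2.0) ** 2)

def f_fit(x):                       # fitted model (peak shifted to x = 2.8)
    return 0.95 * np.exp(-(x - 2.8) ** 2 / 0.5)

# Maximize each by minimizing its negative over an assumed design range.
xA = minimize_scalar(lambda x: -f_true(x), bounds=(0, 5), method="bounded").x
xB = minimize_scalar(lambda x: -f_fit(x), bounds=(0, 5), method="bounded").x

print(f"true optimum A: x={xA:.2f}, true response={f_true(xA):.3f}")
print(f"fitted optimum B: x={xB:.2f}, true response={f_true(xB):.3f}")  # lower
```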

1.2.4 Essential requisites for optimization
The goal of optimization is to find the values of the decision variables that yield the best values of the performance criterion or objective function. The optimization approach considered for a particular application depends on the nature of the process, its mathematical description, its constraints, and the type of objective function. Thus, the basic components that are essential for the formulation of a mathematical optimization problem are the objective function, the decision variables, and the constraints.


Objective function: This is the mathematical function that has to be either minimized or maximized. For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost. In comparing the data provided by a user-defined model with observed data, the objective is to determine the decision variables that minimize the deviation between the model predictions and the observations. The objective function is usually defined by taking into account the type of optimization application.
Decision variables: These are the set of unknowns or variables that control the value of the objective function. In a manufacturing problem, the variables may include the amounts of different resources used or the time spent on each activity. In fitting data to a model, the unknowns can be the parameters of the model.
Constraints: These allow the unknowns or variables to take on certain values but exclude others. For example, in a manufacturing problem, one cannot spend a negative amount of time on any activity, so one constraint is that the "time" variables be nonnegative. The optimization problem is then to find values of the variables that minimize or maximize the objective function while satisfying the constraints.
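These three components map directly onto how general-purpose optimization libraries expect a problem to be posed. A minimal sketch using scipy.optimize follows; the two-variable cost function, the budget constraint, and the nonnegativity bounds are all invented toy elements.

```python
# The three essential components expressed in scipy.optimize terms
# (toy numbers; the cost model and budget are invented for illustration).
from scipy.optimize import minimize

def objective(v):                 # objective function to minimize
    x, y = v
    return (x - 3) ** 2 + (y - 2) ** 2

constraints = [{"type": "ineq",   # SLSQP convention fun(v) >= 0: x + y <= 4
                "fun": lambda v: 4.0 - v[0] - v[1]}]
bounds = [(0, None), (0, None)]   # decision variables must be nonnegative

res = minimize(objective, x0=[0.0, 0.0], bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x, res.fun)             # optimum lands on the budget line x + y = 4
```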

1.3 Basic concepts
The basic concepts of optimization are described in terms of functions, behavior of functions, maxima and minima of functions, and the regions of search.

1.3.1 Functions in optimization
(a) Continuous, discontinuous, and discrete functions
A function is continuous at some point x if

$f(x) = \lim_{h \to 0} f(x + h)$  (1.1)

A function of one variable may be represented by an unbroken continuous curve, as shown in Fig. 1.6. A discontinuous function is one in which a discontinuity occurs at some point x = x0. The common form of discontinuity is the jump discontinuity, which occurs at the limit of f(x + h) as h → 0: the function approaches $f(x_0^+)$ when h > 0 and a different value $f(x_0^-)$ when h < 0. A discontinuous function is shown in Fig. 1.7. Discrete functions are discontinuous functions that are valid only at discrete values. For example, the pressure drop (ΔP) of a fluid flowing at a fixed flow rate through a pipe of fixed length is a function of the pipe diameter (D), as shown in Fig. 1.8. This results in a discrete function because pipes are normally available only in standard diameters.
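Because ΔP = f(D) is defined only at standard sizes, selecting a pipe becomes a small discrete optimization. The sketch below assumes a ΔP ∝ D⁻⁵ scaling and invented values for the lumped constant, the available sizes, and the allowable pressure drop.

```python
# Discrete optimization over standard pipe sizes (all numbers assumed).
# At fixed flow rate, turbulent pressure drop scales roughly as D**-5.
K = 0.8                                   # lumped constant, assumed units
standard_D = [0.5, 0.75, 1.0, 1.5, 2.0]   # available diameters, inches
dp_limit = 0.5                            # allowable pressure drop

dp = {D: K / D**5 for D in standard_D}    # Delta-P defined only at these D
feasible = [D for D in standard_D if dp[D] <= dp_limit]
print(min(feasible), dp[min(feasible)])   # smallest pipe meeting the limit
```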

FIGURE 1.6 Continuous function: f(x) = (x + 5)(x − 5); f(x) = 0 for x = ±5.
FIGURE 1.7 Discontinuous function: f(x) = 1/(x − 5); f(x) → ∞ as x → 5.
FIGURE 1.8 Discrete function: ΔP = f(D), defined at standard diameters D1, D2, D3, D4.

(b) Monotonic functions
Monotonic functions can be of increasing, nondecreasing, decreasing, or nonincreasing type, as shown in Fig. 1.9. A function is termed monotonic increasing when f(x2) > f(x1) for x2 > x1, and monotonic nondecreasing when f(x2) ≥ f(x1) for x2 > x1. A function is termed monotonic decreasing when f(x2) < f(x1) for x2 > x1, and monotonic nonincreasing when f(x2) ≤ f(x1) for x2 > x1.
FIGURE 1.9 Monotonic functions: (A) monotonic increasing; (B) monotonic nondecreasing; (C) monotonic decreasing; (D) monotonic nonincreasing.


(c) Unimodal and multimodal functions
When the values of a function are plotted against its independent variable, the function may initially increase up to a maximum and then fall away; similarly, the function may fall to a minimum and then increase with changes in its variable values. Such functions are termed unimodal functions. They possess a local maximum or local minimum represented by a single peak, as shown in Fig. 1.10. Functions with two peaks representing maxima or minima are called bimodal functions, and functions with more than two peaks are referred to as multimodal functions. Such functions are shown in Fig. 1.11.
FIGURE 1.10 Unimodal functions: (i) one minimum; (ii) one maximum.
FIGURE 1.11 Multimodal functions: (i) bimodal function; (ii) multimodal function.
Unimodality: The interpretation of unimodality is useful in the application of numerical search techniques. For the one-dimensional case, unimodality for a maximum at $x_1^*$ is defined as

$f(x_{1,1}) < f(x_{1,2}) < f(x_1^*) \quad \text{if } x_{1,1} < x_{1,2} < x_1^*$
$f(x_1^*) > f(x_{1,3}) > f(x_{1,4}) \quad \text{if } x_1^* < x_{1,3} < x_{1,4}$  (1.2)

This definition satisfies unimodality for a maximum at $x_1^*$, as shown in Fig. 1.12. Similarly, unimodality for a minimum can be defined as shown in Fig. 1.13.
FIGURE 1.12 Unimodality for maximum.
FIGURE 1.13 Unimodality for minimum.
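Unimodality is precisely the property that interval-elimination search methods (taken up in Chapter 3) rely on: comparing the function at two interior points allows part of the interval to be discarded without any derivative information. A minimal golden-section sketch for a minimum follows, assuming f is unimodal on [a, b].

```python
# Golden-section search: interval elimination that relies only on unimodality.
# Assumes f has a single minimum in [a, b]; no derivatives are needed.
import math

def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

print(golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0))   # -> ~1.5
```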

(d) Convex and concave functions
Convex function: This function has a single peak denoting the minimum, as shown in Fig. 1.14. The function f(x) is said to be convex if, for any two points x1 and x2 over the convex set and for 0 ≤ α ≤ 1,

$f[\alpha x_2 + (1 - \alpha)x_1] \le \alpha f(x_2) + (1 - \alpha) f(x_1)$  (1.3)

FIGURE 1.14 Convex function.

A convex set is the set in which every point in the interval joining the points x1 and x2 for 0 ≤ α ≤ 1 is defined by x = (1 − α)x1 + αx2. A convex function passes below the straight line joining the points x1 and x2, as shown in Fig. 1.15.
FIGURE 1.15 Convex function over the convex set.
Concave function: This function has a single peak representing the maximum, as shown in Fig. 1.16. The function f(x) is said to be concave if, for any two points x1 and x2 over the set and for 0 ≤ α ≤ 1,

$f[\alpha x_2 + (1 - \alpha)x_1] \ge \alpha f(x_2) + (1 - \alpha) f(x_1)$  (1.4)

FIGURE 1.16 Concave function.
A concave set is the set in which every point in the interval joining the points x1 and x2 for 0 ≤ α ≤ 1 is defined by x = (1 − α)x1 + αx2. A concave function passes above the straight line joining the points x1 and x2, as shown in Fig. 1.17.
FIGURE 1.17 Representation of a concave function.
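Inequality (1.3) can be checked numerically for a known convex function such as f(x) = x²; the sketch below samples random point pairs and interpolation weights.

```python
# Numerical check of the convexity inequality (1.3) for f(x) = x**2:
# f(a*x2 + (1-a)*x1) <= a*f(x2) + (1-a)*f(x1) for all 0 <= a <= 1.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                          # a known convex function

x1, x2 = rng.uniform(-5, 5, (2, 1000))        # random point pairs
a = rng.uniform(0, 1, 1000)                   # random interpolation weights

lhs = f(a * x2 + (1 - a) * x1)
rhs = a * f(x2) + (1 - a) * f(x1)
print(bool(np.all(lhs <= rhs + 1e-12)))       # True: inequality holds
```

Flipping the inequality gives the corresponding check of (1.4) for a concave function such as f(x) = −x².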

1.3.2 Interpretation of behavior of functions
(a) Increasing function and its first derivative
A function y = f(x) is termed an increasing function if y increases with an increase in x. If x takes the values x1 and x2 in the given interval with x2 > x1, then for an increasing function f(x2) > f(x1), as shown in Fig. 1.18A. The first derivative of an increasing function can be depicted from Fig. 1.18B. Let x and x + Δx be two values of x, with corresponding function values y = f(x) and y + Δy = f(x + Δx); then Δy = f(x + Δx) − f(x). As x + Δx > x, Δx > 0. For an increasing function, f(x + Δx) > f(x), so that Δy > 0. As Δx and Δy are both positive, Δy/Δx > 0. The derivative of the function f(x) is the limit of Δy/Δx as Δx → 0:

$\lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} = \frac{dy}{dx}$  (1.5)

As dy/dx > 0, y′(x) = f′(x) ≥ 0; thus, if the increasing function is differentiable, f′(x) ≥ 0. Similarly, if Δx < 0, then x + Δx < x and f(x + Δx) < f(x); with Δx and Δy both negative, Δy/Δx > 0.
FIGURE 1.18 (A) Increasing function and (B) depiction of its first derivative.

(b) Decreasing function and its first derivative
A function y = f(x) is said to be a decreasing function over the domain of x if y decreases with an increase in x. For this case, x2 > x1 and f(x2) < f(x1), as shown in Fig. 1.19A. The first derivative of the decreasing function can be depicted from Fig. 1.19B. Let x and x + Δx be two values of x, with corresponding function values y = f(x) and y + Δy = f(x + Δx). For a decreasing function, Δx > 0 and f(x + Δx) < f(x); consequently, Δy < 0 and Δy/Δx < 0. In Eq. (1.5), the limit of Δy/Δx as Δx → 0 is defined as dy/dx. As Δy/Δx < 0, the ratio Δy/Δx is negative, and hence f′(x) ≤ 0.
FIGURE 1.19 (A) Decreasing function and (B) depiction of its first derivative.
(c) Increasing–decreasing function
A function y = f(x) is termed an increasing–decreasing function if f′(x) > 0 for all x in some interval and f′(x) < 0 for all x in some other interval, as shown in Fig. 1.20. For f′(x) > 0, the tangent to the graph at any point has a positive slope, tending upward to the right. If f′(x) < 0, the slope of the function lies downward to the right, indicating a decrease in y with an increase in x.

FIGURE 1.20 Depiction of increasing–decreasing function in terms of its first derivative.

1.3.3 Maxima and minima of functions
Finding the maximum or minimum of a function is important in optimization applications.
Local maximum: A function f(x) is said to have a local maximum at x = x0 if f(x0) > f(x) for values of x near x0. The point P in Fig. 1.21 corresponds to the local maximum of the function.
FIGURE 1.21 Local maximum: f(x0) > f(x1) and f(x0) > f(x2).
Local minimum: A function f(x) is said to have a local minimum at x = x0 if f(x0) < f(x) for values of x near x0. The point Q in Fig. 1.22 corresponds to the local minimum of the function.
FIGURE 1.22 Local minimum: f(x0) < f(x1) and f(x0) < f(x2).
Stationary point: The point at x = x0 where the tangent to the function y = f(x) is horizontal is referred to as a stationary point or critical point. For a continuous function, the condition for a critical point is f′(x0) = 0, as indicated in Fig. 1.23. The critical point refers to a well-defined function for which the derivative exists.
FIGURE 1.23 Stationary point of a function.
Saddle point: The point at which the function indicates neither a local maximum nor a local minimum is called a saddle point or inflection point. The point R at x = x0 in Fig. 1.24 refers to the saddle point condition, where the tangent is flat and the derivative of the function f′(x0) = 0.
FIGURE 1.24 Saddle point of a function.
The same optimal point for maximum and minimum: If the point x* represents the minimum value of a function f(x), the same x* represents the maximum value of the negative of the function, −f(x), as shown in Fig. 1.25. Optimization can thus be considered as a problem of minimization of a function,

and the maximum of a function can be obtained by seeking the minimum of the negative of the same function.
FIGURE 1.25 The same point representing the minimum and maximum of f(x).
First derivative test for optimum: A stationary point can be tested for a maximum or minimum by using the first derivative test.
(i) If f′(x) alters sign from negative to positive at x = x0, so that f′(x) < 0 for x < x0 and f′(x) > 0 for x > x0, then the function f(x) has a minimum at x = x0 (Fig. 1.26).
(ii) If f′(x) alters sign from positive to negative at x = x0, so that f′(x) > 0 for x < x0 and f′(x) < 0 for x > x0, then the function f(x) has a maximum at x = x0 (Fig. 1.27).
FIGURE 1.26 First derivative test for minimum of a function.
FIGURE 1.27 First derivative test for maximum of a function.

FIGURE 1.28 First derivative test for saddle point representation.

(iii) If f′(x) does not alter sign at x = x0, then the function f(x) exhibits an inflection point at x = x0 (Fig. 1.28).
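These three cases translate directly into a numerical check. The sketch below classifies a stationary point from the sign of a central-difference approximation of f′ on either side of x0; the step sizes are arbitrary small values chosen for illustration.

```python
# First derivative test via the sign of a numerical derivative around x0.
def fprime(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

def classify_stationary_point(f, x0, dx=1e-3):
    left, right = fprime(f, x0 - dx), fprime(f, x0 + dx)
    if left < 0 < right:
        return "minimum"          # f' changes from - to +
    if left > 0 > right:
        return "maximum"          # f' changes from + to -
    return "inflection point"     # no sign change

print(classify_stationary_point(lambda x: x ** 2, 0.0))   # minimum
print(classify_stationary_point(lambda x: -x ** 2, 0.0))  # maximum
print(classify_stationary_point(lambda x: x ** 3, 0.0))   # inflection point
```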

1.3.4 Region of search for constrained optimization
Any optimization problem consists of an objective function subject to constraints. The objective function, also termed the cost function or profit function, is expressed as

Maximize/Minimize $f(x)$  (1.6)

subject to the constraints

$g_j(x) \le 0, \quad j = 1, 2, \ldots, m$
$h_i(x) = 0, \quad i = 1, 2, \ldots, p$  (1.7)

where x = [x1, x2, …, xn]. The constraints gj(x), j = 1, …, m are inequality constraints, and hi(x), i = 1, …, p are equality constraints. Fig. 1.29 shows a two-dimensional design space that depicts the region of search, in which the feasible region is denoted by hatched lines. The set of values of x that satisfy the equation gj(x) = 0 forms a boundary surface in the design space called a constraint surface. The constraint surface divides the design space into two regions: (1) gj(x) < 0 and (2) gj(x) > 0. The constraints can be linear or nonlinear, and the design space may be bounded by curves as well. Points lying in the region where gj(x) < 0 are feasible or acceptable; points that lie on the constraint surface satisfy the constraints critically, whereas points lying in the region gj(x) > 0 are infeasible or unacceptable. A design point that lies on a constraint surface is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points. Design points in the acceptable or unacceptable regions can thus be classified as (i) free and acceptable, (ii) free and unacceptable, (iii) bound and acceptable, and (iv) bound and unacceptable (Fig. 1.29).
FIGURE 1.29 Search region for constrained optimization.

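The four point classes can be expressed as a small test against constraints written in the gj(x) ≤ 0 form; the two constraints and the tolerance that defines a "bound" point below are invented for illustration.

```python
# Classifying design points against inequality constraints g_j(x) <= 0
# (toy constraints; the tolerance defining a "bound" point is an assumption).
def classify(x, constraints, tol=1e-8):
    g = [gj(x) for gj in constraints]
    bound = any(abs(v) <= tol for v in g)          # on a constraint surface
    acceptable = all(v <= tol for v in g)          # inside the feasible region
    kind = "bound" if bound else "free"
    return f"{kind} and {'acceptable' if acceptable else 'unacceptable'} point"

g1 = lambda x: x[0] + x[1] - 4.0      # linear constraint
g2 = lambda x: x[0] ** 2 - x[1]       # nonlinear constraint

print(classify((1.0, 2.0), [g1, g2]))   # free and acceptable point
print(classify((2.0, 2.0), [g1, g2]))   # bound point (g1 = 0, g2 = 2 > 0)
```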

1.4 Classification and general procedure
This section covers the classification of optimization problems, the general solution procedure, and the bottlenecks in optimization.

1.4.1 Classification of optimization problems
Classifying an optimization problem is an important step in the optimization process, as algorithms for solving optimization problems are tailored to particular problem types. Optimization problems can be classified into different types, as discussed below [6].
(a) Continuous optimization versus discrete optimization
Optimization problems involving functions that can take on any real value of the variables are called continuous optimization problems. Certain functions make sense only if the variables take on values from a discrete set, and optimization problems involving such variables are referred to as discrete optimization problems. Continuous optimization problems are generally easier to solve than discrete optimization problems. However, discrete optimization problems can also be solved efficiently owing to improvements in algorithms coupled with advancements in computing technology.


(b) Unconstrained optimization versus constrained optimization
Another important classification is based on whether the problem involves constraints on the variables. Unconstrained optimization problems arise directly in many practical applications. They may also arise from the replacement of constraints by a penalty term in the objective function formulation of constrained optimization problems. Constrained optimization problems occur in applications in which there are explicit constraints on the variables. The constraints on the variables can vary widely, from simple bounds to systems of equalities and inequalities that model complex relationships among the variables. Constrained optimization problems can be further classified according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the functions (e.g., differentiable or nondifferentiable).

(c) Classification of constrained optimization problems
Based on the nature of the equations for the objective function and the constraints, optimization problems can be classified as linear, nonlinear, geometric, and quadratic programming problems. This classification is very useful from a computational point of view, because many special-purpose methods are available for the effective solution of a particular type of problem.

(i) Linear programming problem
If the objective function and all the constraints are linear functions of the design variables, the optimization problem is called a linear programming problem (LPP). A linear programming problem is often stated in the standard form:

$$\text{Find } X = [x_1, x_2, \ldots, x_n]^T \tag{1.8}$$

which maximizes

$$f(X) = \sum_{i=1}^{n} c_i x_i \tag{1.9}$$

subject to the constraints

$$\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \ldots, m; \qquad x_j \ge 0, \quad j = 1, 2, \ldots, n \tag{1.10}$$

where ci, aij, and bi are constants.
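To connect the standard form (1.8)–(1.10) to practice, the short sketch below solves a small illustrative LPP numerically. It assumes SciPy is available, and the coefficient values are examples chosen here, not taken from the text; since linprog minimizes by convention, the maximization objective is negated.

```python
# A minimal LPP sketch: maximize f(X) = 3x1 + 2x2
# subject to x1 + x2 <= 4, x1 + 3x2 <= 6, x1, x2 >= 0.
from scipy.optimize import linprog

c = [-3, -2]              # linprog minimizes, so negate the maximization objective
A = [[1, 1], [1, 3]]      # inequality coefficient matrix a_ij
b = [4, 6]                # right-hand sides b_i
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal X = (4, 0) and maximum f = 12
```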


(ii) Nonlinear programming problem
If any of the functions among the objective and constraint functions of the optimization problem is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general form of a programming problem, and all other problems can be considered as special cases of the NLP problem.

(iii) Geometric programming problem
A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as posynomials in X. A function h(X) is called a posynomial (with m terms) if h can be expressed as

$$h(X) = c_1 x_1^{a_{11}} x_2^{a_{21}} \cdots x_n^{a_{n1}} + c_2 x_1^{a_{12}} x_2^{a_{22}} \cdots x_n^{a_{n2}} + \cdots + c_m x_1^{a_{1m}} x_2^{a_{2m}} \cdots x_n^{a_{nm}} \tag{1.11}$$

where cj (j = 1, …, m) and aij (i = 1, …, n and j = 1, …, m) are constants with cj ≥ 0 and xi ≥ 0. Thus, GMP problems can be posed as follows: Find X, which minimizes

$$f(X) = \sum_{j=1}^{N_0} c_j \left( \prod_{i=1}^{n} x_i^{a_{ij}} \right), \quad c_j > 0, \ x_i > 0 \tag{1.12}$$

subject to

$$g_k(X) = \sum_{j=1}^{N_k} a_{jk} \left( \prod_{i=1}^{n} x_i^{q_{ijk}} \right) > 0, \quad a_{jk} > 0, \ x_i > 0, \ k = 1, 2, \ldots, m \tag{1.13}$$

where N0 and Nk denote the number of terms in the objective function and in the kth constraint function, respectively.

(iv) Quadratic programming problem
A quadratic programming problem is the best-behaved nonlinear programming problem, with a quadratic objective function and linear constraints, and it is concave (for maximization problems). It can be solved by suitably modifying linear programming techniques. It is usually formulated as follows:

$$f(X) = c + \sum_{i=1}^{n} q_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} x_i x_j \tag{1.14}$$

subject to

$$\sum_{i=1}^{n} a_{ij} x_i = b_j, \quad j = 1, 2, \ldots, m; \qquad x_i \ge 0, \quad i = 1, 2, \ldots, n$$

where c, qi, Qij, aij, and bj are constants.
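A small numerical illustration of a quadratic program of the form (1.14) is sketched below. The coefficients and the use of SciPy's SLSQP solver are assumptions chosen here for illustration, not part of the text.

```python
# Illustrative QP sketch: minimize f(X) = x1^2 + x2^2 + x1*x2 - 4x1 - 3x2
# subject to x1 + x2 = 2, x >= 0.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[1.0, 0.5], [0.5, 1.0]])   # symmetric matrix of the Q_ij
q = np.array([-4.0, -3.0])               # linear coefficients q_i

def f(x):
    return x @ Q @ x + q @ x             # quadratic objective

cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0}]   # linear equality
res = minimize(f, x0=[1.0, 1.0], bounds=[(0, None)] * 2,
               constraints=cons, method="SLSQP")
print(res.x, res.fun)                    # approximately (1.5, 0.5)
```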


(d) Deterministic optimization versus stochastic optimization
In deterministic optimization, the data of a given problem are known accurately. In many practical problems, however, the data cannot be known with certainty and are associated with random, noisy behavior. Problems involving such data are called stochastic optimization problems.

(e) Integer programming problem versus real-valued programming problem
If some or all of the design variables of an optimization problem are restricted to take only integer (or discrete) values, the problem is called an integer programming problem. A real-valued programming problem is one in which the values of real variables within an allowed set are systematically chosen to minimize or maximize a real function.

(f) No objective, one objective, and more objectives
Feasibility problems are problems in which the goal is to find values for the variables that satisfy the constraints of a model, with no particular objective to optimize; that is, the goal is to find a solution that satisfies the complementarity conditions. Such problems give rise to no-objective optimization problems. One objective is the case where a single objective function is involved in the optimization problem. Many applications in engineering and science involve multiple objectives, where some of the objectives are to be maximized while others are minimized. A set of optimal conditions needs to be established to satisfy all the objectives. Such problems are called multiobjective optimization problems.

(g) Classification based on separability of the functions
Optimization problems can be classified as separable and nonseparable programming problems based on the separability of the objective and constraint functions.

Separable programming problems: In this type of problem, the objective function and the constraints are separable. A function is said to be separable if it can be expressed as the sum of n single-variable functions f1(x1), f2(x2), …, fn(xn). A separable programming problem can be expressed in standard form as: Find X which minimizes

$$f(X) = \sum_{i=1}^{n} f_i(x_i) \tag{1.15}$$

subject to

$$g_j(X) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j, \quad j = 1, 2, \ldots, m$$

where bj is a constant.

Nonseparable programming problems: These problems are considerably more difficult to solve than separable problems. Nonseparable objective functions and/or constraints in a nonlinear programming problem can be approximately expressed as separable functions. Moreover, various classes of nonseparable problems, such as nonseparable convex continuous problems as well as nonseparable quadratic convex continuous problems, can be solved efficiently by using a collection of relevant techniques.


(h) Optimal control problems
An optimal control problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner. It is defined by two types of variables: the control (or design) variables and the state variables. The control variables define the system and govern how one stage evolves into the next. The state variables describe the behavior or status of the system at any stage. The problem is to find a set of control variables such that the total objective function (performance index) over all stages is minimized, subject to a set of constraints on the control and state variables.

1.4.2 General procedure of solving optimization problems

In the design and operation of an engineering system, it is necessary to obtain a suitable model to represent the system, to choose a suitable objective function to guide the decisions, and to select an appropriate method of optimization. Once the model is selected and the required solution is obtainable, an appropriate method of optimization needs to be chosen to determine the information required in the optimization problem. The following general procedure is used for the analysis and solution of optimization problems.

1. Examine the process and identify the process variables and process characteristics of interest. Make a list of all the variables.
2. Determine the criterion for optimization and define the objective function in terms of the process variables.
3. Develop a valid model of the process or equipment relating the input and output variables and parameters. Define the equality and inequality constraints. Identify the dependent and independent variables and obtain the number of degrees of freedom.
4. If the problem formulation is too large, break it up into manageable parts and simplify the objective function and the model.
5. Apply a suitable optimization method to solve the problem.
6. Determine the optimum solution for the system. Check the sensitivity of the results to changes in the parameters of the problem.

1.4.3 Bottlenecks in optimization

Many optimization problems in engineering and science are characterized by nonconvexity of the feasible domain, or the objective function may involve continuous and/or discrete variables. For problems with well-behaved objective functions and constraints, optimization presents no difficulty. However, for problems involving complicated objective functions and constraints, some optimization procedures may be inappropriate and may fail to provide the desired solution. The following characteristics can cause a failure in obtaining the desired optimal solution.


1. In certain optimization problems, gradients do not exist at every point or at the optimal point. A difference approximation to the gradient may not be useful and may lead to failure. The algorithm may not converge, or it may converge to a nonoptimal point.
2. The objective function or the constraint functions may be nonlinear functions of the variables. Linear approximation of nonlinear functions may lead to a loss in accuracy of the solution.
3. The objective function or the constraint functions may have finite discontinuities in the continuous parameter values. For example, the pressure drop of a fluid flowing at a fixed flow rate through a pipe of fixed length is not a continuous function of pipe diameter, as pipes are normally available only in standard diameters.
4. There are certain stiff problems that are analytically smooth but numerically nonsmooth. Such problems may pose difficulties in providing the optimal solution.
5. The objective function and the constraint functions may have complicated interactions among the variables. One such interaction is the temperature and pressure dependence in the design of a pressure vessel. Such interactions between the variables can prevent the determination of unique values of the variables.
6. The objective function and the constraint functions may exhibit almost flat behavior over some ranges of the variables. Such functions may not be sensitive to certain ranges of values of the variables.
7. For functions exhibiting multiple local optima, the solution obtained in some region may be less acceptable than the solution in another region. The better solution may be reached only by initiating the search for the optimum from a different starting point.

1.5 Summary

In this chapter, the basic features of optimization, along with its scope, illustrative examples, and prerequisites, are explained. The basic concepts of optimization are described in terms of functions, the behavior of functions, and the maxima and minima of functions. Furthermore, the region of search within the constraints, the classification of optimization problems, the general solution procedure, and the obstacles to optimization are illustrated. The features, concepts, and benefits of optimization described in this chapter motivate the reader to explore the classical and advanced optimization methods and their applications described in subsequent chapters of this book.


References

[1] T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of Chemical Processes, second ed., McGraw-Hill Higher Education, 2001.
[2] M.S. Peters, K.D. Timmerhaus, Plant Design and Economics for Chemical Engineers, fourth ed., McGraw-Hill, Inc., New York, 1991.
[3] Combustion Analysis Basics, TSI Incorporated, 2004.
[4] J.E. Bailey, D.F. Ollis, Biochemical Engineering Fundamentals, McGraw-Hill Book Company, New York, 1986.
[5] E. Qi, Q. Su, J. Shen, F. Wu, R. Dou (Eds.), Proceedings of the 5th International Asia Conference on Industrial Engineering and Management Innovation (IEMI 2014), Atlantis Press, 2015.
[6] S.S. Rao, Engineering Optimization: Theory and Practice, fourth ed., John Wiley & Sons, 2009.


CHAPTER 2

Classical analytical methods of optimization

Chapter outline
2.1 Introduction
2.2 Statement of optimization problem
2.3 Analytical methods for unconstrained single-variable functions
2.3.1 Necessary and sufficient conditions
2.3.2 Sufficient conditions for convexity and concavity of a function
2.4 Analytical methods for unconstrained multivariable functions
2.4.1 Necessary and sufficient conditions
2.4.2 Two-variable function
2.4.3 Multivariable function
2.5 Analytical methods for multivariable optimization problems with equality constraints
2.5.1 Direct substitution
2.5.2 Penalty function approach
2.5.3 Method of Lagrange multipliers
2.5.3.1 Necessary condition for a basic problem
2.5.3.2 Necessary condition for a general problem
2.5.3.3 Sufficient conditions for a general problem
2.6 Analytical methods for solving multivariable optimization problems with inequality constraints
2.6.1 Kuhn–Tucker conditions for problems with inequality constraints
2.6.2 Kuhn–Tucker conditions for problems with inequality and equality constraints
2.7 Limitations of classical optimization methods
2.8 Summary
References

Classical optimization techniques with necessary and sufficient conditions are employed to find the optimum of unconstrained single-variable and multivariable functions. These techniques are also used to find the optimum of multivariable problems with equality and inequality constraints. The analytical methods to find the optimum of single-variable and multivariable functions have been widely discussed in several books [1–5]. This chapter presents different classical optimization techniques to determine the optimum solution of unconstrained and constrained optimization problems.

2.1 Introduction

The classical optimization techniques are analytical in nature and make use of differential calculus to solve problems involving continuous differentiable functions. In engineering applications, optimization problems can have constraints that arise from the need to specify physical bounds on the variables, and the problems can also be governed by empirical relations and physical laws. Constraints are classified as equality constraints and inequality constraints; they can be linear and/or nonlinear. In this chapter, different classical optimization techniques are discussed in detail to solve nonlinear programming problems, which include (i) unconstrained single-variable optimization problems, (ii) unconstrained multivariable optimization problems, (iii) multivariable optimization problems with equality constraints, and (iv) multivariable optimization problems with inequality constraints.

2.2 Statement of optimization problem

An optimization problem with no constraints is called an unconstrained optimization problem, and a problem with constraints is termed a constrained optimization problem. The general mathematical form of a constrained optimization problem can be stated as follows.

$$\text{Minimize } f(x), \quad x = [x_1, x_2, \ldots, x_n]^T \tag{2.1}$$

subject to

$$g_i(x) \le 0, \quad i = 1, 2, \ldots, m; \qquad h_j(x) = 0, \quad j = 1, 2, \ldots, p \tag{2.2}$$

where x is an n-dimensional vector called the design vector or decision variable vector, f(x) is the objective function, and gi(x) and hj(x) denote the inequality and equality constraint functions, respectively. A vector x satisfying all the inequality and equality constraints is called a feasible solution. Any feasible solution that minimizes (maximizes) the objective function is called an optimal solution. The problem stated in the above equations is called a constrained optimization problem. If no constraints are specified, the problem is called an unconstrained optimization problem.


2.3 Analytical methods for unconstrained single-variable functions

Let f(x) be a continuous function of one variable x. The decision variable x* is to be found such that f(x) is a minimum or maximum. The definitions for local minimum, local maximum, and the saddle point condition are explained in Chapter 1. The conditions for a global optimum are defined as follows.

Global minimum: A single-variable function f(x) is said to have a global or absolute minimum at x = x* if f(x*) ≤ f(x) for all x.

Global maximum: A single-variable function f(x) is said to have a global or absolute maximum at x = x* if f(x*) ≥ f(x) for all x.

Stationary point: A point x* at which the first derivative of the function vanishes is called a stationary point; the function may have a maximum, a minimum, or neither there. A stationary point at which the function exhibits neither a maximum nor a minimum is called a saddle point or inflection point. At a maximum (minimum) point, the function changes from increasing to decreasing (decreasing to increasing), but at a point of inflection the function is increasing (or decreasing) on both sides of the point. Fig. 2.1 illustrates these situations. As shown in the figure, the function changes from increasing to decreasing at the maximum point and from decreasing to increasing at the minimum point, while the function is increasing on either side of the inflection point.

The classical optimization techniques used to solve nonlinear programming problems are mainly based on differential calculus. To seek the optimum by these techniques, the mathematical function defining the optimization problem must be differentiable.

FIGURE 2.1 Stationary points of a single-variable function.

2.3.1 Necessary and sufficient conditions

The necessary and sufficient conditions for locating the optimum of unconstrained single-variable functions are defined as follows.


Necessary condition: The point at which the first derivative of a function vanishes is called a stationary point of f(x). The condition for a stationary point is

$$f'(x_0) = 0 \tag{2.3}$$

Here the derivative df/dx of the function is denoted by f′.

Sufficient condition: To develop a criterion to determine whether a stationary point is a local minimum or a local maximum, we perform a Taylor series expansion of the function f(x) about the stationary point x0:

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2} f''(x_0)(x - x_0)^2 + \text{higher order terms} \tag{2.4}$$

Here f″ denotes d²f/dx². We select x sufficiently close to x0 such that the higher order terms are negligible compared with the second-order term. As the first derivative vanishes at the stationary point, the above equation becomes

$$f(x) = f(x_0) + \frac{1}{2} f''(x_0)(x - x_0)^2 \tag{2.5}$$

As (x − x0)² is always positive, we can determine whether the point x0 is a local minimum or maximum by examining the value of f″(x0). Thus

$$\begin{aligned} &\text{if } f''(x_0) > 0, \text{ then } f(x_0) \text{ is a minimum} \\ &\text{if } f''(x_0) < 0, \text{ then } f(x_0) \text{ is a maximum} \\ &\text{if } f''(x_0) = 0, \text{ then examine } f'''(x_0) \end{aligned} \tag{2.6}$$

If the second derivative is zero, it is necessary to examine higher order derivatives. In general, with f^(n)(x0) the first nonvanishing derivative at x0, the Taylor series expansion of the function can be expressed as

$$f(x) = f(x_0) + \frac{1}{n!} f^{(n)}(x_0)(x - x_0)^n \tag{2.7}$$

If n is even, (x − x0)^n is always positive. Thus

$$\begin{aligned} &\text{if } f^{(n)}(x_0) > 0, \text{ then } f(x_0) \text{ is a minimum} \\ &\text{if } f^{(n)}(x_0) < 0, \text{ then } f(x_0) \text{ is a maximum} \end{aligned} \tag{2.8}$$

If n is odd, then (x − x0)^n changes sign as x moves from x < x0 to x > x0, and the stationary point x0 is an inflection point. Thus, if n is odd and f^(n)(x0) does not vanish, the function exhibits neither a maximum nor a minimum at x0.

2.3.2 Sufficient conditions for convexity and concavity of a function

The sufficient conditions used to locate the maximum or minimum of a function can also specify the convexity and concavity of the function. Thus, with n the order of the first nonvanishing derivative at x0,

$$\begin{aligned} &\text{if } n \text{ is even and } f^{(n)}(x_0) > 0, \text{ then } f \text{ is convex at } x_0 \\ &\text{if } n \text{ is even and } f^{(n)}(x_0) < 0, \text{ then } f \text{ is concave at } x_0 \end{aligned} \tag{2.9}$$


If n is odd, then f at x0 is neither convex nor concave but exhibits a saddle. We illustrate unconstrained single-variable optimization problems with the following examples.

Example 1
Find the optimum of the function f(x) = x⁴ − 8x³ + 9.

Solution
The given function is f(x) = x⁴ − 8x³ + 9. The first derivative of the function is

f′(x) = 4x³ − 24x² = 0

The stationary points are x = 0 and x = 6. To find the extrema of the function, we evaluate the higher derivatives at the stationary points. The second derivative of the function is

f″(x) = 12x² − 48x

At x = 6, f″(x) = 144 > 0, so the function is a minimum at x = 6. At x = 0, f″(x) = 0. As the second derivative is zero at x = 0, we evaluate the third derivative:

f‴(x) = 24x − 48

At x = 0, f‴(x) = −48. The third derivative is an odd-order derivative and does not vanish at x = 0. Thus x = 0 is a point of inflection.
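The higher-order derivative test used in Example 1 can be automated symbolically. The following sketch assumes SymPy is available and simply scans successive derivatives at each stationary point until a nonvanishing one is found:

```python
# Symbolic check of Example 1: f(x) = x**4 - 8*x**3 + 9.
import sympy as sp

x = sp.symbols("x")
f = x**4 - 8*x**3 + 9
stationary = sp.solve(sp.diff(f, x), x)        # roots of f'(x): [0, 6]
for x0 in stationary:
    # scan derivatives of increasing order until one is nonzero at x0
    n, val = 2, sp.diff(f, x, 2).subs(x, x0)
    while val == 0:
        n += 1
        val = sp.diff(f, x, n).subs(x, x0)
    kind = ("minimum" if (n % 2 == 0 and val > 0) else
            "maximum" if (n % 2 == 0 and val < 0) else "inflection point")
    print(x0, n, kind)   # x=0: n=3, inflection point; x=6: n=2, minimum
```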

Example 2
A company produces x units of a specific product and generates an income I, which is expressed as a function of the number of units produced by

I = 900 − 3x²

The company's expenditure for the production of x units is given by

E = (2/3)x³ − 32x² + 200x + 1000

Find the number of units of the product that maximizes the company's profit.

Solution
Income: I(x) = 900 − 3x²
Expenditure: E(x) = (2/3)x³ − 32x² + 200x + 1000
Profit: P(x) = I(x) − E(x)

P(x) = −(2/3)x³ + 29x² − 200x − 100

The first derivative of the function is

P′(x) = −2x² + 58x − 200

The stationary points are obtained by setting P′(x) = 0, which gives x = 4 and x = 25. To find the optimum, we evaluate the second derivative of the function:

P″(x) = −4x + 58

At x = 4, P″(x) = 42 > 0, so the profit is a minimum at x = 4. At x = 25, P″(x) = −42 < 0, so the profit is a maximum at x = 25.

Example 3
Find the extrema of the objective function defined by

f(x) = 1 + 64x + 40x² − (20/3)x³ − (25/4)x⁴ + 2x⁵ − (1/6)x⁶

Solution
The first derivative is

f′(x) = 64 + 80x − 20x² − 25x³ + 10x⁴ − x⁵

The stationary points are obtained by setting f′(x) = 0:

f′(x) = (1 + x)²(4 − x)³ = 0

The stationary points are x = −1 and x = 4. To find the extrema of the objective function at these points, we evaluate the second derivative of the function:

f″(x) = 80 − 40x − 75x² + 40x³ − 5x⁴ = 5(1 − x²)(4 − x)²

At x = −1, f″(x) = 0; hence we need to investigate the third derivative:

f‴(x) = −10(4 − x)(1 + 4x − 2x²)

At x = −1, f‴(x) = 250. The third derivative is an odd-order derivative and does not vanish at x = −1. Thus x = −1 is a point of inflection. At x = 4, f‴(x) vanishes; hence the fourth derivative must be examined:

f⁗(x) = −60x² + 240x − 150

At x = 4, f⁗(x) = −150 < 0. Since n = 4 is even and the fourth derivative is negative, the function is a maximum at x = 4.

2.4 Analytical methods for unconstrained multivariable functions

The analytical methods used to solve unconstrained single-variable functions can be extended to unconstrained multivariable optimization problems.


2.4.1 Necessary and sufficient conditions

An analysis similar to that of single-variable functions can be performed to optimize functions involving two or more variables. We first apply the necessary and sufficient conditions to a two-variable function and then extend them to a function of more variables. We use the notation f_{xi} for the first partial derivative with respect to the ith variable and f_{xi xj} for the second partial derivative with respect to the ith and jth variables.

2.4.2 Two-variable function

We first consider a two-variable function:

$$f(x) = f(x_1, x_2) \tag{2.10}$$

Necessary condition: The condition for a stationary point is

$$f_{x_1} = f_{x_2} = 0 \tag{2.11}$$

Sufficient condition: Define the following determinants:

$$D_1 = f_{x_1 x_1}, \qquad D_2 = \begin{vmatrix} f_{x_1 x_1} & f_{x_1 x_2} \\ f_{x_2 x_1} & f_{x_2 x_2} \end{vmatrix} \tag{2.12}$$

The function is a minimum if D1 > 0 and D2 > 0. The function is a maximum if D1 < 0 and D2 > 0. If D2 < 0, the stationary point is neither a minimum nor a maximum but a saddle point.

2.4.3 Multivariable function

A multivariable function of n independent variables is given by

$$f(x) = f(x_1, x_2, \ldots, x_n) \tag{2.13}$$

Necessary condition: The condition for a stationary point is

$$f_{x_1} = f_{x_2} = \cdots = f_{x_n} = 0 \tag{2.14}$$

Sufficient condition: Define the determinants of the leading principal minors of the Hessian matrix:

$$D_i = \begin{vmatrix} f_{x_1 x_1} & \cdots & f_{x_1 x_i} \\ f_{x_2 x_1} & \cdots & f_{x_2 x_i} \\ \vdots & & \vdots \\ f_{x_i x_1} & \cdots & f_{x_i x_i} \end{vmatrix}, \quad i = 1, 2, \ldots, n \tag{2.15}$$

The function is a minimum if

$$D_i > 0 \quad \text{for } i = 1, 2, 3, \ldots, n$$

The function is a maximum if

$$D_i < 0 \ \text{for } i = 1, 3, 5, \ldots \quad \text{and} \quad D_i > 0 \ \text{for } i = 2, 4, 6, \ldots \tag{2.16}$$

Failure to satisfy these conditions indicates a saddle point. We illustrate unconstrained multivariable optimization problems with the following examples.

Example 4
Find the optimum of the function f(x1, x2) = −2x1² − 4x2² + 4x1x2 + 2x1 + 4x2.

Solution
The given function is

f(x1, x2) = −2x1² − 4x2² + 4x1x2 + 2x1 + 4x2

The first derivatives are

f_{x1} = −4x1 + 4x2 + 2
f_{x2} = 4x1 − 8x2 + 4

The first derivatives are set to zero to obtain the stationary point as x1 = 2 and x2 = 1.5. The second derivatives are evaluated as

f_{x1x1} = −4, f_{x1x2} = 4
f_{x2x1} = 4, f_{x2x2} = −8

$$D_1 = f_{x_1 x_1} = -4, \qquad D_2 = \begin{vmatrix} -4 & 4 \\ 4 & -8 \end{vmatrix} = 16$$

Since D1 < 0 and D2 > 0, the function is a maximum.

Example 5
Find the optimal solution of the function defined by

f(x, y, z) = −2x² − 2y² − 4z² + 2xy + 4yz + 8z

Solution
The first derivatives are

f_x = −4x + 2y
f_y = 2x − 4y + 4z
f_z = 4y − 8z + 8

By setting the first derivatives equal to zero and solving, the stationary point is obtained as x = 2, y = 4, z = 3. The second derivatives are evaluated as

f_xx = −4, f_xy = 2, f_xz = 0
f_yx = 2, f_yy = −4, f_yz = 4
f_zx = 0, f_zy = 4, f_zz = −8

The determinants are formed as

$$D_1 = -4, \qquad D_2 = \begin{vmatrix} -4 & 2 \\ 2 & -4 \end{vmatrix} = 12, \qquad D_3 = \begin{vmatrix} -4 & 2 & 0 \\ 2 & -4 & 4 \\ 0 & 4 & -8 \end{vmatrix} = -32$$

Since D1 < 0, D2 > 0, and D3 < 0, the signs alternate as required by Eq. (2.16), and the function is a maximum.
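The sign pattern of the determinants D1, D2, and D3 in Example 5 can be cross-checked numerically from the leading principal minors of the Hessian; a minimal sketch assuming NumPy:

```python
# Leading principal minors of the Hessian from Example 5.
import numpy as np

H = np.array([[-4.0,  2.0,  0.0],
              [ 2.0, -4.0,  4.0],
              [ 0.0,  4.0, -8.0]])
minors = [np.linalg.det(H[:i, :i]) for i in range(1, 4)]
print(minors)   # approximately [-4, 12, -32]: alternating signs -> maximum
```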

Example 6
If the total cost of a process is expressed by the function C = kx + l/(xy) + my + n, find the process parameters x and y that minimize the cost, considering k, l, m, and n as positive constants.

Solution
The cost function is C = kx + l/(xy) + my + n. The first derivatives are

C_x = k − l/(x²y)
C_y = m − l/(xy²)

To find the stationary point, we set C_x = C_y = 0. The stationary point is obtained as

x = (lm/k²)^(1/3) and y = (kl/m²)^(1/3)

To check for the minimum cost, we evaluate the second derivatives C_xx, C_xy, and C_yy and form the determinants:

$$D_1 = C_{xx} = \frac{2l}{x^3 y}, \qquad D_2 = \begin{vmatrix} \dfrac{2l}{x^3 y} & \dfrac{l}{x^2 y^2} \\ \dfrac{l}{x^2 y^2} & \dfrac{2l}{x y^3} \end{vmatrix} = \frac{3l^2}{x^4 y^4}$$

As D1 > 0 and D2 > 0, the optimized process parameters yield the minimum cost of the process.


2.5 Analytical methods for multivariable optimization problems with equality constraints

Different analytical methods to solve multivariable optimization problems have been discussed in the literature [6–8]. A multivariable optimization problem with equality constraints can be stated as follows [3].

$$\text{Minimize } f(x) \quad \text{subject to } g_j(x) = 0, \quad j = 1, 2, \ldots, m \tag{2.17}$$

where x = [x1, x2, …, xn]^T. Here f(x) is the objective function, and the gj(x) are the equality constraints. Any point x that satisfies the constraints is called a feasible point, and the set of all feasible points is called the feasible set. Different methods are used to solve problems with equality constraints; these include direct substitution, the penalty function approach, and the method of Lagrange multipliers.

2.5.1 Direct substitution

In a problem with n variables and m equality constraints, it is possible to express a set of m variables in terms of the remaining n − m variables. By substituting these expressions into the original objective function, an unconstrained optimization problem with a new objective function in n − m variables is formed. The reduced unconstrained optimization problem is then solved by using the classical analytical optimization methods. This method is useful for solving simpler optimization problems. However, many problems in practice involve nonlinear constraints, and it may not be possible to eliminate all the equality constraints. The following examples illustrate the method of direct substitution.

Example 7
Find the optimum solution of the constrained multivariable problem by direct substitution:

z = 2x1² + (x2 + 1)² + (x3 − 1)²

subject to x1 + 2x2 − x3 = 3.

Solution
The given function and constraint equations are

z = 2x1² + (x2 + 1)² + (x3 − 1)²   (2.i)
x1 + 2x2 − x3 = 3   (2.ii)

From Eq. (2.ii), we have

x3 = x1 + 2x2 − 3   (2.iii)

Substituting in Eq. (2.i) gives

z = 2x1² + (x2 + 1)² + (x1 + 2x2 − 4)²   (2.iv)

The partial derivatives of Eq. (2.iv) with respect to x1 and x2 give (after dividing through by 2)

z_{x1} = 3x1 + 2x2 − 4
z_{x2} = 2x1 + 5x2 − 7   (2.v)

Setting the above equations equal to zero and solving results in

x1 = 6/11 and x2 = 13/11

Eq. (2.iii) gives x3 = −1/11. Thus the stationary point is

(x1, x2, x3) = (6/11, 13/11, −1/11)

The determinants of Eq. (2.iv) evaluated with respect to x1 and x2 are

$$D_1 = 3, \qquad D_2 = \begin{vmatrix} 3 & 2 \\ 2 & 5 \end{vmatrix} = 11$$

The determinants D1 > 0 and D2 > 0, so the function is a minimum.

Example 8
Find the optimum solution of the following problem by direct substitution.

f(x1, x2, x3) = 2x1² + 2x2² + x3²

with the constraints

g1(x1, x2, x3) = x1 + 2x2 + 2x3 = 10
g2(x1, x2, x3) = x1 − x2 = 2

Solution
The given problem is

f = 2x1² + 2x2² + x3²   (2.i)
g1 = x1 + 2x2 + 2x3 = 10   (2.ii)
g2 = x1 − x2 = 2   (2.iii)

Eq. (2.iii) gives

x1 = x2 + 2   (2.iv)

From Eq. (2.ii),

x3 = 4 − 1.5x2   (2.v)

Substitution of x1 and x3 in Eq. (2.i) gives

f = 2(x2 + 2)² + 2x2² + (4 − 1.5x2)²   (2.vi)

Taking the partial derivative of Eq. (2.vi) with respect to x2, equating it to zero, and solving give

x2 = 8/25

The values of x1 and x3 are x1 = 58/25 and x3 = 88/25. Thus the stationary point is

(x1, x2, x3) = (58/25, 8/25, 88/25)

The second derivative of Eq. (2.vi), d²f/dx2² = 12.5, is greater than zero. This indicates the function is a minimum.
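The elimination steps of Example 8 can be reproduced symbolically. The sketch below assumes SymPy is available and solves the two equality constraints for x1 and x3 before minimizing the reduced single-variable function:

```python
# Direct substitution for Example 8: eliminate x1 and x3, minimize over x2.
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
f = 2*x1**2 + 2*x2**2 + x3**2
# solve the two equality constraints for x1 and x3 in terms of x2
sols = sp.solve([x1 + 2*x2 + 2*x3 - 10, x1 - x2 - 2], [x1, x3])
f_red = f.subs(sols)                            # reduced objective in x2 only
x2_star = sp.solve(sp.diff(f_red, x2), x2)[0]   # stationary point of f_red
print(x2_star, sols[x1].subs(x2, x2_star), sols[x3].subs(x2, x2_star))
# -> 8/25, 58/25, 88/25; the second derivative 25/2 > 0 confirms a minimum
```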

2.5.2 Penalty function approach

In the penalty function approach, the constrained problem is converted into an unconstrained one whose solution satisfies the constraint equations in the limit of a large penalty parameter. The minimization function in Eq. (2.17) is combined with the constraint equations to form a modified objective function given by

$$F(x) = f(x) + \sum_k P_k \, (g_k(x))^2, \quad k = 1, 2, \ldots, K \tag{2.18}$$

The second term in the above equation is always nonnegative, as gk is squared. For minimization, the value of Pk is chosen large. For maximization, the modified objective function is expressed as

$$F(x) = f(x) - \sum_k P_k \, g_k^2 \tag{2.19}$$

The following examples illustrate the penalty function approach.

Example 9
Apply the method of penalty functions to minimize the constrained problem given by

f(x) = 4x1² + 5x2²

subject to 2x1 + 3x2 = 6.

Solution
The given function and constraint equations are

f(x) = 4x1² + 5x2²   (2.i)
g(x) = 2x1 + 3x2 − 6 = 0   (2.ii)

The modified objective function is

F(x) = 4x1² + 5x2² + P(2x1 + 3x2 − 6)²   (2.iii)

Taking the partial derivatives of Eq. (2.iii) and equating them to zero results in

∂F/∂x1 = 8x1 + 4P(2x1 + 3x2 − 6) = 0   (2.iv)
∂F/∂x2 = 10x2 + 6P(2x1 + 3x2 − 6) = 0   (2.v)

Multiplying Eq. (2.iv) by 3 and Eq. (2.v) by 2 and dividing gives

x2 = (6/5)x1

Substituting in Eq. (2.iv) results in

x1(40 + 112P) = 120P

x1 = 120P/(40 + 112P) = 120/(40/P + 112)

When P → ∞, x1 = 15/14. This gives x2 as

x2 = 18/14

Thus f(x) has a stationary point at (x1, x2) = (15/14, 18/14). The sufficient condition indicates that the stationary point represents the minimum of the function.

Example 10
Minimize f(x) = (x1 − 2)² + (2x2 − 4)² subject to g(x) = x1 + x2 − 8 = 0.

Solution
The given function and constraint equations are

f(x) = (x1 − 2)² + (2x2 − 4)²   (2.i)
g(x) = x1 + x2 − 8 = 0   (2.ii)

The modified objective function is

F(x) = (x1 − 2)² + (2x2 − 4)² + P(x1 + x2 − 8)²   (2.iii)

Taking the partial derivatives of Eq. (2.iii) and equating them to zero, we have

∂F/∂x1 = 2(x1 − 2) + 2P(x1 + x2 − 8) = 0   (2.iv)
∂F/∂x2 = 4(2x2 − 4) + 2P(x1 + x2 − 8) = 0   (2.v)

Dividing Eq. (2.iv) by Eq. (2.v) and simplifying gives

x1 = 4x2 − 6

Substituting in Eq. (2.iv) results in

x2(8 + 10P) = 16 + 28P

x2 = (16 + 28P)/(8 + 10P) = (16/P + 28)/(8/P + 10)

When P → ∞, x2 = 14/5. This gives x1 as

x1 = 26/5

Thus the stationary point is (x1, x2) = (26/5, 14/5). The sufficient condition indicates that the stationary point represents the minimum of the function.
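The limiting behavior described in Examples 9 and 10 can be observed numerically by minimizing the penalized function for increasing values of P; a minimal sketch for Example 10, assuming SciPy is available:

```python
# Penalty-function sketch for Example 10: minimize (x1-2)^2 + (2x2-4)^2
# subject to x1 + x2 = 8, handled through the penalty term P*(x1 + x2 - 8)^2.
from scipy.optimize import minimize

def F(x, P):
    return (x[0] - 2)**2 + (2*x[1] - 4)**2 + P * (x[0] + x[1] - 8)**2

for P in (1, 10, 100, 1000):
    res = minimize(F, x0=[0.0, 0.0], args=(P,))
    print(P, res.x)   # approaches (26/5, 14/5) = (5.2, 2.8) as P grows
```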


2.5.3 Method of Lagrange multipliers

Consider a general problem of a continuous differentiable function with equality constraints, as given in Eq. (2.17). The basic feature of this method is explained initially by considering a problem in two variables with one constraint.

2.5.3.1 Necessary condition for a basic problem

Consider the basic problem:

$$\text{Minimize } f(x_1, x_2) \quad \text{subject to } g(x_1, x_2) = 0 \tag{2.20}$$

A necessary condition for the function f to have an optimum at some point (x1*, x2*) is that the total differential of f(x1, x2) must be zero, as expressed by

$$df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 = 0 \tag{2.21}$$

The constraint equation must be satisfied at the extreme point:

$$g(x_1^*, x_2^*) = 0 \tag{2.22}$$

The total differential of the constraint equation is

$$dg = \frac{\partial g}{\partial x_1} dx_1 + \frac{\partial g}{\partial x_2} dx_2 = 0 \tag{2.23}$$

From Eq. (2.23), we have

$$dx_2 = -\frac{\partial g / \partial x_1}{\partial g / \partial x_2} \, dx_1 \tag{2.24}$$

By substituting Eq. (2.24) into Eq. (2.21), we have

$$df = \left( \frac{\partial f}{\partial x_1} - \frac{\partial f / \partial x_2}{\partial g / \partial x_2} \cdot \frac{\partial g}{\partial x_1} \right) dx_1 = 0 \tag{2.25}$$

We can define the Lagrange multiplier λ from the gradient of the function and the gradient of the constraint as

$$\lambda = -\left( \frac{\partial f / \partial x_2}{\partial g / \partial x_2} \right) \Bigg|_{x_1^*, x_2^*} \tag{2.26}$$

Thus Eq. (2.25) leads to

$$\left( \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} \right) \Bigg|_{x_1^*, x_2^*} = 0 \tag{2.27}$$

Similarly, we have

$$\left( \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} \right) \Bigg|_{x_1^*, x_2^*} = 0 \tag{2.28}$$

Also, the constraint equation has to be satisfied at the extreme point:

$$g(x_1, x_2) \big|_{x_1^*, x_2^*} = 0 \tag{2.29}$$

Thus Eqs. (2.27)–(2.29) represent the necessary conditions for the point (x1*, x2*) to be an extreme point. These necessary conditions can be obtained by constructing a Lagrange function L of the form

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda g(x_1, x_2) \quad \text{or} \quad L(x, \lambda) = f(x) + \lambda g(x) \tag{2.30}$$

As L is expressed as a function of x1, x2, and λ, the necessary conditions for its extremum are given by

$$\frac{\partial L}{\partial x_1} = \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} = 0, \qquad \frac{\partial L}{\partial x_2} = \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} = 0, \qquad \frac{\partial L}{\partial \lambda} = g(x_1, x_2) = 0 \tag{2.31}$$

The compact representation of the above equations is

$$\nabla f(x^*) + \lambda \nabla g(x^*) = 0, \qquad g(x^*) = 0 \tag{2.32}$$

The geometric representation of the minimization of the function f(x) subject to a single constraint g(x) = 0 is shown in Fig. 2.2.

FIGURE 2.2 At the constrained minimum, ∇f = −λ∇g.


2.5.3.2 Necessary condition for a general problem

The procedure used to derive the necessary condition for a basic problem can be extended to a general problem of n variables with m equality constraints. The general problem can be formulated by extending the basic problem defined in Eq. (2.30). The Lagrange function L is formed with a Lagrange multiplier λj for each constraint gj(x) as

$$L(x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m) = f(x) + \lambda_1 g_1(x) + \lambda_2 g_2(x) + \cdots + \lambda_m g_m(x) \tag{2.33}$$

By treating L as a function of the (n + m) unknowns x1, x2, …, xn, λ1, λ2, …, λm, the necessary conditions for the extremum of L are given by

$$\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n \tag{2.34}$$

$$\frac{\partial L}{\partial \lambda_j} = g_j(x) = 0, \quad j = 1, 2, \ldots, m \tag{2.35}$$

Eqs. (2.34) and (2.35) represent (n + m) equations in terms of the (n + m) unknowns xi and λj. The solution of these equations gives

$$x^* = \begin{bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{bmatrix} \quad \text{and} \quad \lambda^* = \begin{bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{bmatrix} \tag{2.36}$$

The vector x* corresponds to the relative minimum of f(x), while the vector λ* provides the sensitivity information. The compact representation of Eqs. (2.34) and (2.35) is

$$\nabla L(x, \lambda) = \begin{bmatrix} \nabla_x L(x, \lambda) \\ \nabla_\lambda L(x, \lambda) \end{bmatrix} = \begin{bmatrix} \nabla f(x) + \sum_{j=1}^{m} \lambda_j \nabla g_j(x) \\ g_1(x) \\ \vdots \\ g_m(x) \end{bmatrix} = 0 \tag{2.37}$$


2.5.3.3 Sufficient conditions for a general problem

The Hessian of the Lagrangian in Eq. (2.37) is given by the bordered matrix

$$\nabla^2 L(x, \lambda) = \begin{bmatrix} \nabla^2 f(x) + \sum_{j=1}^{m} \lambda_j \nabla^2 g_j(x) & \nabla g_1(x) & \cdots & \nabla g_m(x) \\ \nabla g_1^T(x) & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ \nabla g_m^T(x) & 0 & \cdots & 0 \end{bmatrix} \tag{2.38}$$

The sufficient condition for the function to have an optimum is determined from the eigenvalues μ obtained by setting the following determinant to zero:

$$\begin{vmatrix} L_{11} - \mu & L_{12} & \cdots & L_{1n} & g_{11} & g_{21} & \cdots & g_{m1} \\ L_{21} & L_{22} - \mu & \cdots & L_{2n} & g_{12} & g_{22} & \cdots & g_{m2} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ L_{n1} & L_{n2} & \cdots & L_{nn} - \mu & g_{1n} & g_{2n} & \cdots & g_{mn} \\ g_{11} & g_{12} & \cdots & g_{1n} & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & 0 & \cdots & 0 \end{vmatrix} = 0 \tag{2.39}$$

The sufficient condition for the function to have an extremum at the point x* is satisfied if the eigenvalues μ obtained from Eq. (2.39) all have the same sign. If all the eigenvalues are negative, the function is a maximum, and if all the eigenvalues are positive, the function is a minimum. If the eigenvalues are zero or have different signs, the function possesses a saddle point. In the above matrix, Lij and gij denote ∂²L/∂xi∂xj and ∂gj/∂xi, respectively. The following examples illustrate the method of Lagrange multipliers.

Example 11
Solve the following constrained optimization problem using the method of Lagrange multipliers:

f(x) = 4x1² + 2x2²

subject to 2x1 + x2 = 2.

Solution
The given problem is

f(x) = 4x1² + 2x2²
g(x) = 2x1 + x2 − 2 = 0

The Lagrange function is

L = 4x1² + 2x2² + λ(2x1 + x2 − 2)

Setting the first derivatives of the Lagrange function to zero gives

8x1 + 2λ = 0
4x2 + λ = 0
2x1 + x2 − 2 = 0

The stationary point is obtained as (x1, x2, λ) = (2/3, 2/3, −8/3). The Hessian of the Lagrangian function is

$$\begin{bmatrix} 8 & 0 & 2 \\ 0 & 4 & 1 \\ 2 & 1 & 0 \end{bmatrix}$$

The sufficient condition for the function is determined from the determinant

$$\begin{vmatrix} 8 - \mu & 0 & 2 \\ 0 & 4 - \mu & 1 \\ 2 & 1 & 0 \end{vmatrix} = 0$$

The eigenvalue μ = 24/5 > 0 indicates that the function has a minimum at the stationary point.
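The stationarity system of Example 11 can be solved symbolically in one step; a sketch assuming SymPy is available:

```python
# Lagrange-multiplier system of Example 11 solved symbolically.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")
L = 4*x1**2 + 2*x2**2 + lam*(2*x1 + x2 - 2)
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]   # stationarity plus the constraint
print(sp.solve(eqs, [x1, x2, lam]))            # {x1: 2/3, x2: 2/3, lam: -8/3}
```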

Example 12
Find the optimum of the constrained optimization problem using the method of Lagrange multipliers:

f(x1, x2, x3) = 6x2 − 3x3

subject to

2x1 − 2x2 − x3 = 4
x1² + x2² = 2

Solution
The given problem is

f(x) = 6x2 − 3x3
g1(x) = 2x1 − 2x2 − x3 − 4 = 0
g2(x) = x1² + x2² − 2 = 0

The Lagrange function is

L(x, λ) = 6x2 − 3x3 + λ1(2x1 − 2x2 − x3 − 4) + λ2(x1² + x2² − 2)

Setting the first derivatives of the Lagrange function to zero gives

2λ1 + 2x1λ2 = 0
6 − 2λ1 + 2x2λ2 = 0
−3 − λ1 = 0
2x1 − 2x2 − x3 − 4 = 0
x1² + x2² − 2 = 0

The stationary points are obtained as

(x1, x2, x3) = (±4/√26, ∓6/√26, ±20/√26 − 4)

The maximum occurs at (−4/√26, 6/√26, −20/√26 − 4), and the minimum occurs at (4/√26, −6/√26, 20/√26 − 4).

2.6 Analytical methods for solving multivariable optimization problems with inequality constraints

Methods for solving multivariable optimization problems with inequality and equality constraints using the Kuhn–Tucker conditions have been discussed elsewhere [9–12]. A nonlinear programming problem with inequality constraints is defined as follows:

$$\text{Minimize } f(x) \quad \text{subject to } g_j(x) \le 0, \quad j = 1, 2, \ldots, m, \quad \text{where } x = (x_1, x_2, \ldots, x_n) \tag{2.40}$$

In this problem, the gj(x) refer to inequality constraints. The inequalities gj(x) ≤ 0 constrain the minimum of f(x). Any point x that satisfies the inequality constraints is called a feasible point, and the set of all feasible points is denoted the feasible set. An inequality constraint gj(x) ≤ 0 is said to be active at x* if gj(x*) = 0, and the constraint is said to be inactive if gj(x*) < 0. There is no loss of generality in treating the optimization problem as a minimization problem: maximization problems can easily be transformed into minimization problems, since maximizing f(x) is equivalent to minimizing −f(x). The contours of a typical inequality constraint function g(x) ≤ 0 in a two-dimensional plane are shown in Fig. 2.3. The contour g(x) = 0, representing the boundary of the feasible region, divides the plane into the feasible region and the infeasible region.

FIGURE 2.3 Contours of inequality constraint with feasible and infeasible regions.

The contours corresponding to g(x) > 0 represent the infeasible region, and the contours corresponding to g(x) < 0 denote the feasible region. The contours representing the minimum of a typical quadratic function f(x1, x2) with the inequality constraint g(x) ≤ 0 are shown in Fig. 2.4. The notations f3, f2, and f1 represent different values of f. This figure depicts the optimal solution of a constrained minimization problem.

FIGURE 2.4 Constrained optimization of a typical quadratic function.

2.6.1 Kuhn–Tucker conditions for problems with inequality constraints

The method of Lagrange multipliers is extended via the Kuhn–Tucker conditions to deal with inequality constraints [5]. For the problem defined in Eq. (2.40), the Lagrangian function L(x, λ) is defined as

2.6 Analytical methods for solving multivariable optimization problems

$$L(x, \lambda) = f(x) + \sum_{j=1}^{m} \lambda_j g_j(x) \tag{2.41}$$

where λj ≥ 0. The Kuhn–Tucker conditions define the first-order necessary conditions for the problem with inequality constraints. These conditions for x* = (x1*, x2*, …, xn*) in Eq. (2.40), in terms of the Lagrangian function of Eq. (2.41), are

$$\begin{aligned} &\frac{\partial L(x, \lambda)}{\partial x_i} \ge 0, \quad x_i \frac{\partial L(x, \lambda)}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n \\ &\frac{\partial L(x, \lambda)}{\partial \lambda_j} \le 0, \quad \lambda_j \frac{\partial L(x, \lambda)}{\partial \lambda_j} = 0, \quad j = 1, 2, \ldots, m \\ &x_i \ge 0, \quad \lambda_j \ge 0 \end{aligned} \tag{2.42}$$

The condition λj ∂L(x, λ)/∂λj = 0 in Eq. (2.42) implies that λj* gj(x*) = 0 for each j. That is,

$$\lambda_1^* g_1(x^*) + \cdots + \lambda_m^* g_m(x^*) = 0 \tag{2.43}$$

From this condition it can be observed that if gj(x*) < 0, then λj* = 0. This indicates that the Lagrange multipliers corresponding to inactive constraints are zero.

FIGURE 2.5 Graphical illustration of Kuhn–Tucker conditions for inequality constraints.


This can be illustrated graphically in Fig. 2.5 by considering a general set of inequality constraints gj(x*) ≤ 0, j = 1, 2, 3. As shown in the figure, the constraint g1(x*) ≤ 0 is inactive; here g1(x*) < 0 implies λ1* = 0. Also,

∇f(x*) = −λ2*∇g2(x*) − λ3*∇g3(x*)

where λ2* > 0 and λ3* > 0.

2.6.2 Kuhn–Tucker conditions for problems with inequality and equality constraints

A nonlinear programming problem with inequality and equality constraints is defined as follows:

$$\text{Minimize } f(x) \quad \text{subject to } g_j(x) \le 0, \ j = 1, 2, \ldots, m; \quad h_k(x) = 0, \ k = 1, 2, \ldots, l; \quad x = (x_1, x_2, \ldots, x_n) \tag{2.44}$$

Here gj(x) and hk(x) refer to the inequality and equality constraints, respectively. For the problem defined in Eq. (2.44), the Lagrangian function L(x, λ, μ) is defined as

$$L(x, \lambda, \mu) = f(x) + \sum_{j=1}^{m} \lambda_j g_j(x) + \sum_{k=1}^{l} \mu_k h_k(x) \tag{2.45}$$

The Kuhn–Tucker conditions for the above nonlinear programming problem are

$$\begin{aligned} &\frac{\partial L(x, \lambda, \mu)}{\partial x_i} \ge 0, \quad x_i \frac{\partial L(x, \lambda, \mu)}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n \\ &\frac{\partial L(x, \lambda, \mu)}{\partial \lambda_j} \le 0, \quad \lambda_j \frac{\partial L(x, \lambda, \mu)}{\partial \lambda_j} = 0, \quad j = 1, 2, \ldots, m \\ &\frac{\partial L(x, \lambda, \mu)}{\partial \mu_k} = 0, \quad k = 1, 2, \ldots, l \\ &x_i \ge 0, \quad \lambda_j \ge 0 \end{aligned} \tag{2.46}$$

with the μk unrestricted in sign, since they are the multipliers of equality constraints.


If the objective function is of the minimization type and the constraints are of the form gj(x) ≥ 0, then the λj must be nonpositive in Eqs. (2.45) and (2.46). If the objective function is of the maximization type and the constraints are of the form gj(x) ≥ 0, then the λj must be nonnegative. Sufficient conditions are obtained from the second-order requirements by examining the Hessian of the Lagrangian function.

Example 13
Maximize f(x, y) = 2x − 3y² subject to the constraint x² + 2y² ≤ 4 using the Kuhn–Tucker conditions.

Solution
The Lagrangian is formed as

L = 2x − 3y² − λ(x² + 2y² − 4)

The Kuhn–Tucker conditions are

∂L/∂x = 2 − 2λx ≤ 0, x(2 − 2λx) = 0
∂L/∂y = −6y − 4λy ≤ 0, y(−6y − 4λy) = 0
∂L/∂λ = −(x² + 2y² − 4) ≥ 0, λ(−x² − 2y² + 4) = 0
x, y, λ ≥ 0

The equations to be solved are

2 − 2λx = 0
−6y − 4λy = 0
x² + 2y² − 4 = 0

On solving the above equations, we have

x = 2, y = 0, λ = 1/2

Thus (x*, y*, λ*) = (2, 0, 1/2). The second-order sufficient condition for the Lagrangian is evaluated as

$$L_H = F(x^*) - \lambda^* G(x^*) = \begin{bmatrix} 0 & 0 \\ 0 & -6 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -8 \end{bmatrix}$$

L_H is negative definite. Hence (x*, y*) = (2, 0) is the optimal solution.

Example 14
Apply the Kuhn–Tucker conditions to minimize f(x1, x2) = 2x1² + x2² − 4x1 − 2x2 + 4 subject to the constraints

g1(x1, x2) = −4x1 − 2x2 + 8 ≤ 0
g2(x1, x2) = −2x1 − 4x2 + 8 ≤ 0

Solution
The Lagrangian is

L(x, λ) = 2x1² + x2² − 4x1 − 2x2 + 4 + λ1(−4x1 − 2x2 + 8) + λ2(−2x1 − 4x2 + 8)   (2.i)

The Kuhn–Tucker conditions are

∂L(x, λ)/∂x1 = 4x1 − 4 − 4λ1 − 2λ2 ≥ 0, x1(4x1 − 4 − 4λ1 − 2λ2) = 0   (2.ii)
∂L(x, λ)/∂x2 = 2x2 − 2 − 2λ1 − 4λ2 ≥ 0, x2(2x2 − 2 − 2λ1 − 4λ2) = 0   (2.iii)
∂L(x, λ)/∂λ1 = −4x1 − 2x2 + 8 ≤ 0, λ1(−4x1 − 2x2 + 8) = 0   (2.iv)
∂L(x, λ)/∂λ2 = −2x1 − 4x2 + 8 ≤ 0, λ2(−2x1 − 4x2 + 8) = 0   (2.v)
xi ≥ 0, λi ≥ 0, i = 1, 2

Taking both constraints as active, the equations to be solved are

4x1 − 4λ1 − 2λ2 − 4 = 0   (2.vi)
2x2 − 2λ1 − 4λ2 − 2 = 0   (2.vii)
−4x1 − 2x2 + 8 = 0   (2.viii)
−2x1 − 4x2 + 8 = 0   (2.ix)

From Eqs. (2.vi) and (2.vii),

x1 = λ1 + 0.5λ2 + 1
x2 = λ1 + 2λ2 + 1

Substituting in Eqs. (2.viii) and (2.ix), we have

−6λ1 − 6λ2 + 2 = 0
−6λ1 − 9λ2 + 2 = 0

On solving the above equations, we have

λ1 = 1/3, λ2 = 0, x1 = 4/3, x2 = 4/3

Thus (x*, λ*) = (4/3, 4/3, 1/3, 0). Solving for the second-order sufficient condition of the Lagrangian, we have

$$L_H(x^*, \lambda^*) = F(x^*) + \lambda_1^* G_1(x^*) + \lambda_2^* G_2(x^*) = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix} + \frac{1}{3} \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + 0 \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}$$

L_H(x*, λ*) is positive definite. Hence (x*) = (4/3, 4/3) is the optimal solution.
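The Kuhn–Tucker solution of Example 14 can be cross-checked with a general-purpose constrained solver. The sketch below assumes SciPy is available and restates the constraints in the form fun(x) ≥ 0 expected by SLSQP:

```python
# Numerical cross-check of Example 14: minimize 2x1^2 + x2^2 - 4x1 - 2x2 + 4
# subject to 4x1 + 2x2 >= 8, 2x1 + 4x2 >= 8, and x >= 0.
from scipy.optimize import minimize

def f(x):
    return 2*x[0]**2 + x[1]**2 - 4*x[0] - 2*x[1] + 4

cons = [{"type": "ineq", "fun": lambda x: 4*x[0] + 2*x[1] - 8},   # -g1 >= 0
        {"type": "ineq", "fun": lambda x: 2*x[0] + 4*x[1] - 8}]   # -g2 >= 0
res = minimize(f, x0=[0.0, 0.0], bounds=[(0, None)] * 2,
               constraints=cons, method="SLSQP")
print(res.x)   # approximately (4/3, 4/3), matching the Kuhn-Tucker solution
```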


2.7 Limitations of classical optimization methods

The classical analytical optimization techniques are useful in finding the optimum solution of unconstrained and constrained continuous and differentiable functions. These analytical methods make use of differential calculus to locate the optimum solution, assuming that the function is twice differentiable with respect to the design variables. Although analytical methods with necessary and sufficient conditions are easy to use, they are difficult to apply to functions that are not continuous and/or not differentiable, and they cannot deal with discrete random functions. The direct substitution and penalty function–based methods are not easy to apply to functions with a large number of variables. The method of Lagrange multipliers handles equality constraints, and the method of Kuhn–Tucker conditions deals with the inequality constraints of multivariable functions. However, in problems with many design variables, these methods lead to a set of simultaneous nonlinear equations that may be difficult to solve. The Kuhn–Tucker conditions comprise both the necessary and sufficient conditions for optimality of smooth convex problems, but many real problems do not satisfy the convexity assumptions. Although the classical analytical techniques have their merits and demerits, the study of these methods sets a basis for developing most of the numerical techniques that have evolved into advanced techniques for solving more practical problems.

2.8 Summary

In this chapter, the optimization problem is first stated in terms of the objective function with and without constraints. Classical analytical methods with necessary and sufficient conditions are employed to solve various single-variable and multivariable unconstrained optimization problems. Different methods, such as direct substitution, the penalty function approach, and Lagrange multipliers, are illustrated for solving multivariable optimization problems with equality constraints. The method of Kuhn–Tucker conditions, with its necessary and sufficient conditions, is derived and applied to solve multivariable optimization problems with equality and inequality constraints. All these methods are illustrated in terms of their applications to different nonlinear programming problems.

References

[1] G.S.G. Beveridge, R.S. Schechter, Optimization: Theory and Practice, McGraw-Hill Book Company, New York, 1970.
[2] A.D. Belegundu, T.R. Chandrupatla, Optimization Concepts and Applications in Engineering, Cambridge University Press, New York, 2011.
[3] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Applications, John Wiley & Sons, New Jersey, 2006.
[4] D.G. Luenberger, Linear and Nonlinear Programming, second ed., Addison-Wesley, Reading, Massachusetts, 1984.
[5] T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of Chemical Processes, second ed., McGraw-Hill Higher Education, New York, 2001.
[6] W.I. Zangwill, Nonlinear programming via penalty functions, Manag. Sci. 13 (5) (1967) 344–358.
[7] D.P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, Belmont, Massachusetts, 1996.
[8] H.P. Gavin, J.T. Scruggs, Constrained Optimization Using Lagrange Multipliers, CEE 201L: Uncertainty, Design, and Optimization, Duke University, Spring 2016.
[9] H.W. Kuhn, A.W. Tucker, Nonlinear programming, in: J. Neyman (Ed.), Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, CA, 1951, pp. 481–492.
[10] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[11] A. Antoniou, W.-S. Lu, Practical Optimization: Algorithms and Engineering Applications, Springer Science+Business Media, LLC, 2007.
[12] J.A. Snyman, An Introduction to Basic Optimization: Theory and Classical and New Gradient-Based Algorithms, Springer Science+Business Media, Inc., New York, 2005.

CHAPTER 3

Numerical search methods for unconstrained optimization problems

Chapter outline
3.1 Introduction
3.2 Classification of numerical search methods
3.2.1 Direct search methods
3.2.2 Gradient search methods
3.3 One-dimensional gradient search methods
3.3.1 Newton's method
3.3.2 Quasi-Newton method
3.3.3 Secant method
3.4 Polynomial approximation methods
3.4.1 Quadratic interpolation method
3.4.2 Cubic interpolation method
3.5 Multivariable direct search methods
3.5.1 Univariate search method
3.5.2 Hooke–Jeeves pattern search method
3.5.2.1 Exploratory move
3.5.2.2 Pattern move
3.5.3 Powell's conjugate direction method
3.5.4 Nelder–Mead simplex method
3.6 Multivariable gradient search methods
3.6.1 Steepest descent method
3.6.2 Multivariable Newton's method
3.6.3 Conjugate gradient method
3.7 Summary
References


3.1 Introduction

The analytical search methods based on necessary conditions and analytical derivatives discussed in the earlier chapter can yield exact solutions for functions whose expressions are not overly complex. Analytical methods are usually difficult to apply to nonlinear functions for which the analytical derivatives are difficult to compute and to functions involving many variables. Most algorithms for unconstrained and constrained optimization make use of numerical search techniques to locate the minimum (maximum) of single-variable and multivariable functions. These numerical search methods find the optimum by using the function f(x), and sometimes derivative values of f(x), at successive trial points of x. In this chapter we discuss various gradient and direct search methods that are used to solve single-variable and multivariable optimization problems. The search methods that are most effective in practice are described in detail with examples.

3.2 Classification of numerical search methods

Numerical search methods can be broadly classified into two groups: gradient methods and nongradient methods. The nongradient methods are also referred to as direct search methods.

3.2.1 Direct search methods
Direct search methods do not use the partial derivatives of the function. In these methods, only function values at different points are used to perform the search: the function is evaluated at a sequence of points and the values are compared to locate the optimum. Direct search methods are usually applied when the function is not differentiable or when its derivatives are complicated to compute. These methods are most suitable for problems that involve a relatively small number of variables [1]. Direct search methods can be employed to solve single-variable as well as multivariable optimization problems. Many optimization problems rely on efficient unidimensional search techniques to locate a local minimum (maximum) of a function of one variable. The region elimination methods such as sequential search, simultaneous search, dichotomous search, Fibonacci search, and golden section search can be classified as one-dimensional direct search methods. These methods work with direct function values and not with derivative information: the initially selected search region is progressively reduced toward the optimum through a sequence of calculations, with the function values at two points used to guide the search. These region elimination methods are not deliberated here as they are widely reported elsewhere [2–4]. The techniques such as random search, grid search, and univariate search, as well as the methods of Hooke–Jeeves, Nelder–Mead, Rosenbrock,


and Powell’s are the direct search methods used to solve multivariable optimization problems. The direct search methods that are mostly used in practice are discussed in this chapter.

3.2.2 Gradient search methods
Gradient-based methods make use of calculus and the derivatives of the objective function to search for the optimum. These methods are more efficient when gradient information is available for the functions being optimized. They cannot be applied to engineering problems where the objective function is not differentiable or where the variables have a discrete representation. There are various gradient search methods that are successfully employed to find the optimum of a function. Gradient descent search uses the negative gradient of the function to find a minimum, whereas gradient ascent search uses the positive gradient to find a maximum. In general, the methods that use gradient information are considered more efficient. The widely used one-dimensional gradient-based search algorithms are Newton's method, the quasi-Newton method, and the secant method. Multivariable gradient search extends the calculus of one-dimensional search to the optimization of functions of two or more variables. The commonly used multivariable gradient search methods are the steepest descent method, the multivariable Newton's method, and the conjugate gradient method. Polynomial approximation methods are another class of unidimensional methods for locating the optimum of a function. These methods find the optimum value of an independent variable by interpolation using polynomial approximation models of the function f(x). The quadratic interpolation method is a nongradient approach, whereas the cubic interpolation method is gradient based.

3.3 One-dimensional gradient search methods
In this section, we describe the one-dimensional gradient search methods that have proved effective in practice. The search algorithms discussed here include Newton's method, the quasi-Newton method, and the secant method. More details on these methods are available in the literature [5–9].

3.3.1 Newton's method
Newton's method implies a quadratic approximation of the function, obtained from the Taylor series expansion of f(λ) about λ = λᵢ:

$$f(\lambda) = f(\lambda_i) + f'(\lambda_i)(\lambda - \lambda_i) + \frac{1}{2} f''(\lambda_i)(\lambda - \lambda_i)^2 + \cdots \tag{3.1}$$


A quadratic model of the derivative is formed by differentiating f(λ) and dropping the higher order terms; setting this derivative to zero gives

$$f'(\lambda) = f'(\lambda_i) + f''(\lambda_i)(\lambda - \lambda_i) = 0 \tag{3.2}$$

The first-order necessary condition for a local minimum is f′(λ) = 0. Thus the stationary point is obtained as

$$\lambda = \lambda_i - \frac{f'(\lambda_i)}{f''(\lambda_i)} \tag{3.3}$$

The iterative form of Eq. (3.3) can be represented as

$$\lambda_{i+1} = \lambda_i - \frac{f'(\lambda_i)}{f''(\lambda_i)} \tag{3.4}$$

The iteration of Eq. (3.4) is assumed to converge when the derivative f′(λᵢ₊₁) is close to zero:

$$\left| f'(\lambda_{i+1}) \right| \leq \varepsilon \tag{3.5}$$

where ε is a prespecified tolerance. The convergence process of the method is shown in Fig. 3.1. Newton's method provides a locally convergent solution if f″(λ) ≠ 0. For a quadratic function, this method provides the minimum in one iteration. However, Newton's method has certain disadvantages. It requires the computation of both the first and second derivatives, and to obtain a converged solution the starting point must be close to the true solution; otherwise the Newton iterative process might diverge, as illustrated in Fig. 3.2.

FIGURE 3.1 Newton's method showing converged solution.


FIGURE 3.2 Newton's method: divergence in solution.

Example 1: Apply Newton's method to minimize f(x) = 2x⁴ − 14x³ + 60x² − 80x with the initial condition x₀ = 0.
Solution: Eq. (3.4) can be interpreted as

$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$$

where

$$f'(x) = 8x^3 - 42x^2 + 120x - 80, \qquad f''(x) = 24x^2 - 84x + 120$$

The iterative calculations are given as follows:

i    x_i      f′(x_i)
0    0.0000   −80.0000
1    0.6667   −16.2963
2    0.8849   −1.1553
3    0.9028   −0.0066
4    0.9029   0.0000
5    0.9029   0.0000

The method provided a converged solution in five iterations.

Example 2: Apply Newton's method to minimize f(x) = (1/3)x³ − 2 sin x with the initial condition x₀ = 0.5.
Solution: Eq. (3.4) can be interpreted as


$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$$

where

$$f'(x) = x^2 - 2\cos x, \qquad f''(x) = 2x + 2\sin x$$

The iterative calculations are given below.

i    x_i      f′(x_i)
0    0.5000   −1.5052
1    1.2684   1.0132
2    1.0405   0.0711
3    1.0218   0.0005
4    1.0217   0.0000

The method yielded a converged solution within five iterations.
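For illustration, a minimal Python sketch of the Newton iteration of Eq. (3.4) is given below; the routine name, the tolerance, and the iteration cap are our own illustrative choices rather than values from the text, and the call reproduces Example 1.

# A minimal sketch of the Newton update of Eq. (3.4); the helper name,
# tolerance, and iteration cap are illustrative assumptions.
def newton_min(df, d2f, x, tol=1e-6, max_iter=50):
    # Iterate x <- x - f'(x)/f''(x) until |f'(x)| <= tol (Eq. 3.5)
    for _ in range(max_iter):
        if abs(df(x)) <= tol:
            break
        x = x - df(x) / d2f(x)
    return x

# Example 1: f(x) = 2x^4 - 14x^3 + 60x^2 - 80x, starting at x0 = 0
df = lambda x: 8*x**3 - 42*x**2 + 120*x - 80
d2f = lambda x: 24*x**2 - 84*x + 120
print(newton_min(df, d2f, 0.0))  # approaches 0.9029, as in the table above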

3.3.2 Quasi-Newton method
This method uses finite difference approximations of the first and second derivatives of the function in Newton's iterative formula of Eq. (3.4). Different versions of quasi-Newton methods are used to optimize functions that are twice differentiable; the difference between these versions lies in the approximation used for the inverse Hessian matrix of Newton's method. Here we consider the quasi-Newton method in which the first and second derivatives are approximated by central difference formulas. The central difference approximation of the first derivative is expressed as

$$f'(\lambda_i) = \frac{f(\lambda_i + \Delta\lambda) - f(\lambda_i - \Delta\lambda)}{2\Delta\lambda} \tag{3.6}$$

The central difference approximation of the second derivative is given by

$$f''(\lambda_i) = \frac{f(\lambda_i + \Delta\lambda) - 2f(\lambda_i) + f(\lambda_i - \Delta\lambda)}{\Delta\lambda^2} \tag{3.7}$$

where Δλ is a small step size. Substitution of Eqs. (3.6) and (3.7) into Eq. (3.4) leads to

$$\lambda_{i+1} = \lambda_i - \frac{\Delta\lambda \left[ f(\lambda_i + \Delta\lambda) - f(\lambda_i - \Delta\lambda) \right]}{2 \left[ f(\lambda_i + \Delta\lambda) - 2f(\lambda_i) + f(\lambda_i - \Delta\lambda) \right]} \tag{3.8}$$

The quasi-Newton method uses Eq. (3.8) iteratively to obtain the optimal solution. The following criterion can be employed for the iterative convergence of the method:

$$\left| \frac{f(\lambda_{i+1} + \Delta\lambda) - f(\lambda_{i+1} - \Delta\lambda)}{2\Delta\lambda} \right| \leq \varepsilon \tag{3.9}$$

or

$$\left| f'(\lambda_{i+1}) \right| \leq \varepsilon \tag{3.10}$$

Procedure
Consider a function f(λ) that is twice differentiable.
1. Choose a starting point λ₀ and step size Δλ.
2. Compute the first-derivative approximation using Eq. (3.6).
3. Compute the second-derivative approximation using Eq. (3.7).
4. Determine the new λ value, λ₁, from Eq. (3.8).
5. Check for convergence using Eq. (3.9) or Eq. (3.10).
6. Repeat from step 2 until the optimum is established.

Though each iteration of this method is computationally cheaper, the approximation used for the Hessian leads to slower convergence and less precision.

Example 3: Apply the quasi-Newton method derived from central difference approximations to minimize the function f(x) = 2x⁴ − 14x³ + 60x² − 80x with the initial condition x₀ = 0 and Δx = 0.5.
Solution: Eq. (3.8) can be interpreted as

$$x_{i+1} = x_i - \frac{\Delta x \left[ f(x_i + \Delta x) - f(x_i - \Delta x) \right]}{2 \left[ f(x_i + \Delta x) - 2f(x_i) + f(x_i - \Delta x) \right]}$$

where f(x + Δx) = f(x + 0.5) and f(x − Δx) = f(x − 0.5). The iterative solution is evaluated as follows.

i    x_i      f′(x_i)
0    0.0000   −83.5000
1    0.6901   −16.6819
2    0.9141   −0.9628
3    0.9291   0.0105
4    0.9289   −0.0002
5    0.9289   0.0000

The method yielded a converged solution within six iterations.

Example 4: Apply the quasi-Newton method derived from central difference approximations to minimize the function f(x) = (1/3)x³ − 2 sin x with the initial condition x₀ = 0.5 and Δx = 0.5.
Solution: Eq. (3.8) can be interpreted as

$$x_{i+1} = x_i - \frac{\Delta x \left[ f(x_i + \Delta x) - f(x_i - \Delta x) \right]}{2 \left[ f(x_i + \Delta x) - 2f(x_i) + f(x_i - \Delta x) \right]}$$

where f(x + Δx) = f(x + 0.5) and f(x − Δx) = f(x − 0.5).


The iterative solution is obtained as follows.

i    x_i      f′(x_i)
0    0.5000   −1.3496
1    1.1960   0.8118
2    1.0034   0.0595
3    0.9871   0.0000
4    0.9869   0.0000
5    0.9869   0.0000

The method yielded a converged solution within five iterations.

In the quasi-Newton method, the step size has a considerable influence on the convergence of the solution. Though this method with an appropriate step size converges to the optimal solution about as fast as Newton's method, the solution is less accurate than that of Newton's method.
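As a sketch, the derivative-free update of Eq. (3.8) can be coded directly from function values; the helper name and settings below are our own illustrative choices.

# A minimal sketch of the quasi-Newton update of Eq. (3.8) using only
# function values; the helper name and settings are illustrative assumptions.
def quasi_newton_min(f, x, dx=0.5, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        num = f(x + dx) - f(x - dx)              # ~ 2*dx*f'(x), Eq. (3.6)
        den = f(x + dx) - 2*f(x) + f(x - dx)     # ~ dx^2*f''(x), Eq. (3.7)
        if abs(num / (2*dx)) <= tol:             # criterion of Eq. (3.9)
            break
        x = x - dx*num / (2*den)                 # Eq. (3.8)
    return x

f = lambda x: 2*x**4 - 14*x**3 + 60*x**2 - 80*x
print(quasi_newton_min(f, 0.0))  # about 0.929 with dx = 0.5, as in Example 3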

3.3.3 Secant method
This method approximates the second derivative in Newton's method by means of first-derivative values, so as to reduce the amount of computation per iteration. Setting the first derivative f′(λ) in Eq. (3.2) to zero, we have

$$f'(\lambda) = f'(\lambda_i) + f''(\lambda_i)(\lambda - \lambda_i) = 0$$

Denoting the second derivative f″(λᵢ) in the above equation by S, we have

$$f'(\lambda_i) + S(\lambda - \lambda_i) = 0 \tag{3.11}$$

On rearranging the above equation, we have

$$\lambda = \lambda_i - \frac{f'(\lambda_i)}{S} \tag{3.12}$$

The iterative form of Eq. (3.12) can be represented as

$$\lambda_{i+1} = \lambda_i - \frac{f'(\lambda_i)}{S} \tag{3.13}$$

The term S in the above equation is the slope of the secant line joining the two points (A, f′(A)) and (B, f′(B)), where A and B denote two different approximations to the correct solution λ*, as shown in Fig. 3.3. According to Fig. 3.3, the slope S can be expressed as

$$S = \frac{f'(B) - f'(A)}{B - A} \tag{3.14}$$


FIGURE 3.3 Convergence of solution by secant method.

Thus the new approximate solution of the secant method is expressed as

$$\lambda_{i+1} = A - \frac{f'(A)(B - A)}{f'(B) - f'(A)} \tag{3.15}$$

The secant method can be considered a form of elimination technique, since part of the interval (A, λᵢ₊₁) is eliminated at every iteration. The iterative process of the secant method can be implemented using the following step-by-step procedure:
1. Set λᵢ = A = 0 and evaluate f′(A). The value of f′(A) will be negative. Assume an initial trial step length λ₀. Set i = 1.
2. Evaluate f′(λ₀).
3. If f′(λ₀) < 0, set A = λᵢ = λ₀, f′(A) = f′(λ₀), new λ₀ = 2λ₀, and go to step 2.
4. If f′(λ₀) ≥ 0, set B = λ₀, f′(B) = f′(λ₀), and go to step 5.
5. Find the new approximate solution of the problem. According to Eq. (3.15), we have

$$\lambda_{i+1} = A - \frac{f'(A)(B - A)}{f'(B) - f'(A)}$$

6. Test for convergence according to Eq. (3.10): |f′(λᵢ₊₁)| ≤ ε, where ε is a small quantity. If step 6 is satisfied, take λ* = λᵢ₊₁ and stop the procedure. Otherwise, go to step 7.


7. If f′(λᵢ₊₁) ≥ 0, set the new B = λᵢ₊₁, f′(B) = f′(λᵢ₊₁), i = i + 1, and go to step 5.
8. If f′(λᵢ₊₁) < 0, set A = λᵢ₊₁, f′(A) = f′(λᵢ₊₁), i = i + 1, and go to step 5.

Example 5: Apply the secant method to minimize the function f(x) = 2x⁴ − 14x³ + 60x² − 80x.
Solution: Interpreting λᵢ₊₁ as xᵢ₊₁ and A and B as xᵢ and xⱼ, Eq. (3.15) can be written as

$$x_{i+1} = x_i - \frac{f'(x_i)(x_j - x_i)}{f'(x_j) - f'(x_i)}$$

The initial points that give negative and positive values of f′(x) are set as 0.5 and 1.0. The iterative solution is obtained as follows.

i    x_i      f′(x_i)
0    0.5000   −29.5000
1    0.9155   0.7962
2    0.9046   0.1035
3    0.9032   0.0134
4    0.9030   0.0017
5    0.9030   0.0002
6    0.9029   0.0000

The optimal solution is obtained in about six iterations.

Example 6: Apply the secant method to minimize the function f(x) = (1/3)x³ − 2 sin x.
Solution: Using the same iterative formula as in the above example, we have

$$x_{i+1} = x_i - \frac{f'(x_i)(x_j - x_i)}{f'(x_j) - f'(x_i)}$$

The initial points are set as 0.5 and 1.0. The iterative calculations that provide the solution are as follows.

i    x_i      f′(x_i)
0    0.5000   −1.5052
1    1.0283   0.0248
2    1.0197   −0.0074
3    1.0217   0.0000
4    1.0217   0.0000

The optimal solution is obtained within four iterations.
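A minimal sketch of the secant update of Eq. (3.15), with A and B maintained as in steps 7 and 8, might look as follows; the helper name and tolerance are our own assumptions.

# A minimal sketch of the secant iteration of Eq. (3.15) and steps 7-8;
# the helper name and tolerance are illustrative assumptions.
def secant_min(df, a, b, tol=1e-6, max_iter=100):
    x = a
    for _ in range(max_iter):
        x = a - df(a)*(b - a) / (df(b) - df(a))   # Eq. (3.15)
        if abs(df(x)) <= tol:                     # test of Eq. (3.10)
            break
        if df(x) >= 0:
            b = x                                 # step 7: new B
        else:
            a = x                                 # step 8: new A
    return x

df = lambda x: 8*x**3 - 42*x**2 + 120*x - 80
print(secant_min(df, 0.5, 1.0))  # about 0.9029, as in Example 5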


3.4 Polynomial approximation methods
Polynomial approximation methods are interpolation techniques which were originally developed as one-dimensional searches within multivariable optimization techniques [1,10]. These methods differ only in the choice of polynomial used to locally approximate the optimizing function. The quadratic and cubic interpolation methods involve polynomial approximations of the given function to find the optimum solution.

3.4.1 Quadratic interpolation method
This method employs an interpolating quadratic polynomial to find the optimum of a function, using function values only. It is therefore useful for optimizing a function f(λ) whose derivative with respect to λ is not available. In this method, the function f(λ) is approximated by a quadratic function h(λ), and the minimum λ* of h(λ) is found. If λ* is not sufficiently close to the true minimum, another stage is employed in which a new quadratic approximation h′(λ) is used to approximate f(λ), and a new value of λ* is obtained. This procedure is continued until a λ* sufficiently close to the true minimum of f(λ) is found. Let us assume the quadratic function h(λ) approximating f(λ) to be

$$h(\lambda) = a + b\lambda + c\lambda^2 \tag{3.16}$$

The necessary condition for the minimum of h(λ) is

$$\frac{dh}{d\lambda} = b + 2c\lambda = 0 \tag{3.17}$$

The solution of Eq. (3.17) gives

$$\lambda^* = -\frac{b}{2c} \tag{3.18}$$

The sufficient condition for the minimum of h(λ) is

$$\left. \frac{d^2 h}{d\lambda^2} \right|_{\lambda^*} > 0 \tag{3.19}$$

This condition shows that c > 0. To find the constants a, b, and c in Eq. (3.16), we evaluate the function f(λ) at three points, λ = X, λ = Y, and λ = Z, and denote the corresponding function values by fX, fY, and fZ:

$$f_X = a + bX + cX^2, \quad f_Y = a + bY + cY^2, \quad f_Z = a + bZ + cZ^2 \tag{3.20}$$


The solution of Eq. (3.20) gives

$$a = \frac{f_X YZ(Z - Y) + f_Y ZX(X - Z) + f_Z XY(Y - X)}{(X - Y)(Y - Z)(Z - X)} \tag{3.21}$$

$$b = \frac{f_X (Y^2 - Z^2) + f_Y (Z^2 - X^2) + f_Z (X^2 - Y^2)}{(X - Y)(Y - Z)(Z - X)} \tag{3.22}$$

$$c = -\frac{f_X (Y - Z) + f_Y (Z - X) + f_Z (X - Y)}{(X - Y)(Y - Z)(Z - X)} \tag{3.23}$$

From Eq. (3.18), the minimum of h(λ) can be found as

$$\lambda^* = -\frac{b}{2c} = \frac{f_X (Y^2 - Z^2) + f_Y (Z^2 - X^2) + f_Z (X^2 - Y^2)}{2\left[ f_X (Y - Z) + f_Y (Z - X) + f_Z (X - Y) \right]} \tag{3.24}$$

provided that c > 0. The initial solution for λ* is obtained as follows. Choosing the points X, Y, and Z as 0, t, and 2t, respectively, and solving Eqs. (3.21)–(3.23), we have

$$a = f_X \tag{3.25}$$

$$b = \frac{4f_Y - 3f_X - f_Z}{2t} \tag{3.26}$$

$$c = \frac{f_Z + f_X - 2f_Y}{2t^2} \tag{3.27}$$

where t is a prespecified trial step length. Eq. (3.18) gives

$$\lambda^* = \frac{(4f_Y - 3f_X - f_Z)\,t}{4f_Y - 2f_Z - 2f_X} \tag{3.28}$$

provided that

$$c = \frac{f_Z + f_X - 2f_Y}{2t^2} > 0 \tag{3.29}$$

The inequality in Eq. (3.29) is satisfied if

$$\frac{f_X + f_Z}{2} > f_Y \tag{3.30}$$

This implies that the function value fY should be smaller than the average of fX and fZ, which holds if fY lies below the line joining fX and fZ, as shown in Fig. 3.4. For the initial fit of the quadratic function, X can be set to zero, and Y and Z are chosen such that the inequality in Eq. (3.30) is satisfied. One possible test of whether the minimum λ* of the approximating quadratic polynomial h(λ) is sufficiently close to the true minimum of f(λ) is to compare f(λ*) with h(λ*) by the criterion

$$\left| \frac{h(\lambda^*) - f(\lambda^*)}{f(\lambda^*)} \right| \leq \varepsilon \tag{3.31}$$

where ε is a small quantity. If this convergence criterion is not satisfied, the following procedure can be employed. At each iteration, evaluate the function at the four points X, Y, Z, and λ* to find the function values fX, fY, fZ, and fλ*, and discard the point associated with the greatest f(λ) value. Several situations can then arise in selecting the best three points for the next quadratic fit; for instance, λ* may lie between Y and Z or between X and Y.
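As an illustration of the initial fit, a minimal sketch of one quadratic-interpolation step built from Eqs. (3.25)–(3.28) is given below; the trial step length t and the helper name are our own assumptions.

# A minimal sketch of one quadratic-interpolation step from the
# three-point fit X = 0, Y = t, Z = 2t of Eqs. (3.25)-(3.28).
def quad_interp_step(f, t):
    fX, fY, fZ = f(0.0), f(t), f(2*t)
    # c > 0 requires fZ + fX - 2*fY > 0 (Eqs. 3.29-3.30)
    assert fZ + fX - 2*fY > 0, "condition of Eq. (3.30) not met"
    return t*(4*fY - 3*fX - fZ) / (4*fY - 2*fZ - 2*fX)   # Eq. (3.28)

f = lambda x: 2*x**4 - 14*x**3 + 60*x**2 - 80*x
print(quad_interp_step(f, 0.5))  # one-step estimate near the true minimum 0.9029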

FIGURE 3.5 Representation of function derivatives at two points.


3.4.2 Cubic interpolation method

$$Z = \frac{3(f_X - f_Y)}{Y - X} + f_X' + f_Y'$$

The value of λ* in Eq. (3.43) can be computed to satisfy the condition

$$Q = \left( Z^2 - f_X' f_Y' \right)^{1/2} > 0 \tag{3.44}$$

This inequality is satisfied automatically since X and Y are selected such that f′X < 0 and f′Y > 0. The following convergence criteria can be used to obtain the optimum solution:

$$\left| \frac{h(\lambda^*) - f(\lambda^*)}{f(\lambda^*)} \right| \leq \varepsilon_1 \tag{3.45}$$

or

$$\left| \frac{df}{d\lambda} \right|_{\lambda^*} \leq \varepsilon_2 \tag{3.46}$$

where ε₁ and ε₂ are small numbers chosen according to the accuracy desired. If these convergence criteria are not satisfied, a new cubic equation of the form

$$h'(\lambda) = a' + b'\lambda + c'\lambda^2 + d'\lambda^3 \tag{3.47}$$

can be employed to approximate the function f(λ). The formula defined by Eq. (3.43) is then used to find the new optimal solution λ*. If f′(λ*) > 0, the new points X and Y are taken as X and λ*; if f′(λ*) < 0, they are taken as λ* and Y. This process is continued until convergence is obtained. If the interpolation function is not representative of the behavior of the function to be minimized within the interval of uncertainty, the minimum may fall outside the interval, or successive iterations may be too close to one another without achieving a significant improvement in the function value. In such cases, safeguarded procedures that combine polynomial interpolation with a simple bisection technique or the golden section search can be employed; at the end of the polynomial interpolation, the bisection technique is used to find the zero of the derivative of the function.

Example 8: Find the minimum of the function f(λ) = λ⁵ − 3λ³ − 12λ + 3 using the method of cubic interpolation.
Solution: The initial two points X and Y are set such that they provide negative and positive values of f′(λ). Thus the initial points are selected as X = 1.5 and Y = 3.0. Convergence is judged by the derivative f′(λ).


The iterative solution is given as follows.

i    λ_i      f′(λ_i)
1    1.7421   6.7388
2    1.6405   −0.0088
3    1.6406   0.0000
4    1.6406   0.0000

For this problem, cubic interpolation converges faster than the method of quadratic interpolation.
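Eq. (3.43) itself does not appear above, so the sketch below assumes the standard cubic-fit minimizer consistent with the Z and Q defined earlier, λ* = Y − (Y − X)(f′Y + Q − Z)/(f′Y − f′X + 2Q); with X = 1.5 and Y = 3.0 it returns about 1.742, matching the first iterate of Example 8.

import math

# A minimal sketch of one cubic-interpolation refit; the closed-form
# minimizer below is the standard cubic-fit formula and is assumed to
# play the role of Eq. (3.43), which is not reproduced in the text.
def cubic_interp_step(f, df, X, Y):
    Z = 3*(f(X) - f(Y)) / (Y - X) + df(X) + df(Y)
    Q = math.sqrt(Z**2 - df(X)*df(Y))        # requires Q > 0, Eq. (3.44)
    return Y - (Y - X)*(df(Y) + Q - Z) / (df(Y) - df(X) + 2*Q)

f = lambda x: x**5 - 3*x**3 - 12*x + 3
df = lambda x: 5*x**4 - 9*x**2 - 12
print(cubic_interp_step(f, df, 1.5, 3.0))    # about 1.742 (Example 8, i = 1)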

3.5 Multivariable direct search methods
Direct search methods solve optimization problems without using any gradient information of the function. Unlike many optimization algorithms, where information about the gradient or higher derivatives is required to search for an optimal point, a direct search algorithm examines a set of points around the current point and, for a minimization problem, moves to the one that yields the lowest function value. Direct search optimization has been receiving increasing attention over the years for solving optimization problems in engineering, science, and management domains. The multivariable direct search methods discussed here include the univariate search method, the Hooke–Jeeves method, Powell's method, and the Nelder–Mead method [11–15].

3.5.1 Univariate search method
In this method, for an n-variable function, we change only one variable at a time to generate a sequence of improved approximations to the minimum point. Starting at a base point x, we fix the values of n − 1 variables and vary the remaining variable in the ith direction. Since only one variable is changed at a time, the multidimensional problem is reduced to a sequence of one-dimensional problems. The search is then continued in a new direction, formed by varying one of the n − 1 variables that were fixed in the previous iteration. One cycle is complete after all n directions have been searched sequentially. The entire search process is repeated for sequential optimization of the given problem, and the procedure is continued until no further improvement in the objective function is possible in any of the n directions of a cycle. For a multivariable function, the search process is as follows.
1. Define the function f(x) = f(x₁, x₂, …, xₙ).


2. Choose the starting vector x⁰ = (x₁⁰, x₂⁰, …, xₙ⁰)ᵀ.
3. Consider a step size α₁ for x₁, evaluate the function f(x₁⁰ + α₁, x₂⁰, …, xₙ⁰), and find the best α₁.
4. Update x₁ as x₁¹ = x₁⁰ + α₁.
5. Consider a step size α₂ for x₂, evaluate the function f(x₁¹, x₂⁰ + α₂, x₃⁰, …, xₙ⁰), and update x₂.
6. Continue the procedure until all variables are updated. This completes one cycle, with the updated vector [x₁¹, x₂¹, …, xₙ¹].
7. Go to step 3 and continue the procedure sequentially and iteratively until convergence is obtained.

Example 9: Find the minimum of the function f(x) = 3x₁² + x₂² − 2x₁x₂ − 3x₁ − 4x₂ + 10 using the univariate search technique.
Solution: The procedure given in Section 3.5.1 is employed to obtain the solution. The iterative solution is computed as follows:

i    x₁       x₂       f(x)
0    1.0000   1.0000   5.0000
1    0.8333   2.8333   1.5556
2    1.4444   3.4444   0.0617
3    1.6481   3.6481   −0.1043
4    1.7160   3.7160   −0.1227
5    1.7387   3.7387   −0.1247
6    1.7462   3.7462   −0.1250
7    1.7496   3.7496   −0.1250
8    1.7499   3.7499   −0.1250
9    1.7500   3.7500   −0.1250
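A minimal sketch of this cyclic coordinate search is given below; the helper names and the use of scipy's minimize_scalar for each one-dimensional search are our own choices.

from scipy.optimize import minimize_scalar

# A minimal sketch of the univariate search of Section 3.5.1: minimize
# along one coordinate at a time, cycling until the values settle.
def univariate_search(f, x, cycles=20):
    x = list(x)
    for _ in range(cycles):
        for i in range(len(x)):
            step = minimize_scalar(lambda a: f(x[:i] + [x[i] + a] + x[i+1:])).x
            x[i] += step
    return x

f = lambda x: 3*x[0]**2 + x[1]**2 - 2*x[0]*x[1] - 3*x[0] - 4*x[1] + 10
print(univariate_search(f, [1.0, 1.0]))  # tends to [1.75, 3.75] (Example 9)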

3.5.2 Hooke–Jeeves pattern search method
The Hooke–Jeeves method is a direct pattern search technique that works by creating a set of search directions iteratively within the search space. For an n-dimensional problem, this method creates at least n linearly independent search directions. For a two-variable function, a minimum of two search directions are required for moving from one point to another in the search space. The search process involves two types of operations, called exploratory moves and pattern moves [16].


3.5.2.1 Exploratory move
This move starts by choosing an initial base point. From this base point, the algorithm conducts an exploratory move to analyze the search space around the base point. In this move, the variables are altered one at a time with a given step length for each variable. The algorithm evaluates the objective function when a variable is altered with a positive step length; if this does not give a better function value, the function is evaluated with a negative step length. This process is repeated for each of the variables. Thus the exploratory move is made systematically to find the best point. The algorithm performs the following steps:
1. Choose the base point x = xᵢ and step lengths Δxᵢ for the variables i = 1, …, n. Set x = x₀.
2. Evaluate the function values f = f(xᵢ), f⁺ = f(xᵢ + Δxᵢ), and f⁻ = f(xᵢ − Δxᵢ).
3. Find fmin = min(f, f⁺, f⁻) and set x corresponding to fmin.
4. If i ≠ n, set i = i + 1 and go to step 2; else go to step 5.
5. If x ≠ x₀, the move is a success; else it is a failure.
In the exploratory move, the current point is perturbed in positive and negative directions along each variable, one at a time, and the current point is changed to the best point at the end of each variable perturbation. If the point found best at the end of all variable perturbations differs from the original point, the exploratory move is a success.

3.5.2.2 Pattern move
The best point found at the end of the current exploratory move and the best point of the previous exploratory move are used to find a new point in the pattern move. This new point x_p^{k+1} is found by jumping from the current best point x^k along the direction connecting it with the previous best point x^{k−1} of the exploratory moves:

$$x_p^{k+1} = x^k + \left( x^k - x^{k-1} \right) \tag{3.48}$$

This new point obtained from the pattern move serves as the base point for the next exploratory move. The Hooke–Jeeves method thus comprises an iterative application of an exploratory move in the locality of the current point and a subsequent jump using the pattern move. If the pattern move does not take the solution to a better region, the pattern move is not accepted and the extent of the exploratory search is reduced. When exploratory moves around a base point do not find a better objective function value, the step length is reduced to half, and exploratory moves start again from the current base point. The algorithm terminates when the step length falls below a given tolerance; the current base point is then taken as the best result. The flowchart of the Hooke and Jeeves pattern search is shown in Fig. 3.6.


FIGURE 3.6 Flowchart of Hooke and Jeeves direct pattern search.

Example 10: Apply the Hooke–Jeeves method to find the minimum of the function f(x) = 3x₁² + x₂² − 2x₁x₂ − 3x₁ − 4x₂ + 10.
Solution: The stepwise procedure for the exploratory move and pattern move given in Section 3.5.2 is implemented to obtain the solution; for each set of exploratory moves, a pattern move is performed. The starting point is chosen as x₁ = 0 and x₂ = 0, and the incremental vector is set as Δx₁ = 0.1 and Δx₂ = 0.2. The results of the exploratory moves are given as follows.

Move no.   x₁       x₂       f(x)
1          0.1000   0.0000   9.7300
2          0.1000   0.2000   8.9300
3          0.3000   0.4000   7.6900
4          0.3000   0.6000   6.9700
5          0.6000   1.0000   5.0800
6          0.6000   1.2000   4.4800
7          1.0000   1.8000   2.4400
8          1.0000   2.0000   0.1200
9          1.8000   3.8000   0.1200

The solution is not accurate with the fixed step length chosen for the problem. Better accuracy in results may be obtained by varying the step size and performing additional moves.
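A minimal sketch of the two moves is given below; step halving on failure and the stopping tolerance are our own illustrative choices.

# A minimal sketch of the Hooke-Jeeves search: an exploratory move around
# the base point followed by the pattern move of Eq. (3.48).
def explore(f, x, dx):
    x = list(x)
    for i in range(len(x)):
        for step in (dx[i], -dx[i]):
            trial = x[:]
            trial[i] += step
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x, dx, tol=1e-6):
    dx = list(dx)
    while max(dx) > tol:
        xn = explore(f, x, dx)
        if f(xn) < f(x):
            xp = [2*a - b for a, b in zip(xn, x)]   # pattern move, Eq. (3.48)
            xe = explore(f, xp, dx)
            x = xe if f(xe) < f(xn) else xn
        else:
            dx = [d/2 for d in dx]                  # reduce the step length
    return x

f = lambda x: 3*x[0]**2 + x[1]**2 - 2*x[0]*x[1] - 3*x[0] - 4*x[1] + 10
print(hooke_jeeves(f, [0.0, 0.0], [0.1, 0.2]))      # near [1.75, 3.75]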


3.5.3 Powell's conjugate direction method
The method of Powell [17] uses conjugate directions in the search process. In this method, the history of previous solutions is used to create new search directions. The search process involves generating a set of n linearly independent directions and performing a series of unidirectional searches along each of them. Suppose we initiate the process at two different points x_a and x_b and perform unidirectional searches along parallel directions to arrive at the points x1 and x2. The direction of the line that connects x1 and x2 is referred to as a conjugate direction, and this direction is directed toward the optimum, as shown in Fig. 3.7A. The search process can also be initiated from a single point x_a, and the conjugate direction can be generated as shown in Fig. 3.7B. The sequence of computation points (x) along with the search directions (s) for the optimal solution of a typical two-dimensional problem is shown in Fig. 3.8. The search procedure for optimization of a two-dimensional problem by Powell's conjugate direction method is given in the following steps.
1. Choose a starting point x0 and two starting directions s1 = (1, 0) and s2 = (0, 1).
2. Starting at x0, perform a unidirectional (UD) search along s1 to find the point x1.
3. Starting at x1, perform a UD search along s2 to find the point x2.
4. From the point x2, perform a UD search along s1 to find the point x3.
5. Find the conjugate direction connecting x3 and x1 as d(d1, d2) = x3 − x1.
6. Obtain the new normalized conjugate direction as s2 = (d1, d2)ᵀ / ‖(d1, d2)ᵀ‖.
7. Starting at x3, perform a UD search along the new s2 to find the point x4.
8. Find the conjugate direction connecting x4 and x2 as d(d1, d2) = x4 − x2.
9. Obtain the new normalized conjugate direction as s1 = (d1, d2)ᵀ / ‖(d1, d2)ᵀ‖.
10. Starting at x4, perform a UD search along the new s1 to find the point x5 (if converged, x* = x5).
11. Continue the search process until convergence in the solution is obtained.
Powell's method thus performs multidimensional optimization using one-dimensional optimizations. For an n-dimensional problem, the procedure can be implemented by considering the solution point as x = (x1, x2, …, xn)ᵀ and the direction vectors as (s1, s2, …, sn)ᵀ. The initial direction vector s1 can be specified as s1 = (1, 0, 0, …, 0)ᵀ, and the other initial direction vectors can be specified in an analogous manner. The search procedure attains the minimum of a quadratic objective function of n variables in n iterations; for nonquadratic functions, the procedure is continued beyond n iterations.


FIGURE 3.7 Conjugate search direction connecting two points: (A) two starting points and (B) one starting point.

Example 11: Apply Powell's method of conjugate directions to find the minimum of the function f(x) = 3x₁² + x₂² − 2x₁x₂ − 3x₁ − 4x₂ + 10.
Solution: The given function is f(x) = 3x₁² + x₂² − 2x₁x₂ − 3x₁ − 4x₂ + 10.


FIGURE 3.8 Schematic of typical conjugate directions and successive computation points in Powell's direct search process.

The procedure given in Section 3.5.3 is used to solve this problem. The iterative solution is evaluated as follows.

Step no.   s                    x₁       x₂       f(x)
0          (0, 0)               0.5000   0.5000   7.0000
1          (1, 0)               0.6667   0.5000   6.9167
2          (0, 1)               0.6667   2.6667   2.2222
3          (1, 0)               1.3889   2.6667   0.6574
4          (0.3162, 0.9487)     1.3889   2.3889   0.6574
5          (0.7071, 0.7071)     1.7500   3.7500   −0.1250

This method provides the optimal solution in five iterations.
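A minimal two-variable sketch of steps 1–10 is given below; the line-search helper and the guard against a vanishing direction are our own additions.

import numpy as np
from scipy.optimize import minimize_scalar

# A minimal sketch of Powell-style conjugate directions for two variables,
# following steps 1-10 of Section 3.5.3; helper names are illustrative.
def line_search(f, x, s):
    t = minimize_scalar(lambda a: f(x + a*s)).x
    return x + t*s

def powell_2d(f, x0, cycles=5):
    x = np.asarray(x0, float)
    s1, s2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    for _ in range(cycles):
        x1 = line_search(f, x, s1)
        x2 = line_search(f, x1, s2)
        x3 = line_search(f, x2, s1)
        d = x3 - x1                          # conjugate direction (step 5)
        n = np.linalg.norm(d)
        if n < 1e-12:                        # already converged
            return x3
        s1, s2 = d/n, s1                     # normalize and cycle directions
        x = line_search(f, x3, s1)
    return x

f = lambda x: 3*x[0]**2 + x[1]**2 - 2*x[0]*x[1] - 3*x[0] - 4*x[1] + 10
print(powell_2d(f, [0.5, 0.5]))              # near [1.75, 3.75] (Example 11)

For this quadratic objective, the first conjugate direction already points at the optimum, so the sketch reaches [1.75, 3.75] within one cycle, consistent with the table above.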

3.5.4 Nelder–Mead simplex method
The Nelder–Mead simplex method is a widely used direct search method for solving unconstrained nonlinear optimization problems [18]. This method uses only values of the objective function in searching for the optimal solution. For an n-dimensional problem, it constructs n + 1 points as the vertices of a simplex, from which the search process is initiated. Each iteration begins with a simplex specified by its n + 1 vertices and the associated function values. Based on these function values, the worst point among the n + 1 points is found and the centroid of the remaining points is calculated. The search strategy follows the steps of reflection, expansion, outer contraction, and inner contraction, in order. The steps performed for a typical two-dimensional problem (2D simplex) are depicted in Fig. 3.9. The algorithm is given as follows:
1. Order the n + 1 vertices of the simplex by their function values such that f(x_l) ≤ … ≤ f(x_s) ≤ f(x_h), corresponding to the points x_l, …, x_s, x_h, where x_l is the best point, x_h


FIGURE 3.9 Steps performed on the simplex of a typical two-dimensional problem.

is the worst point, x_s is the second worst point, and n refers to the dimension of the design variables.
2. Obtain the reflection of x_h through the centroid of the other points. The reflection point x_r is given by

$$x_r = \bar{x} + \alpha(\bar{x} - x_h)$$

where x̄ is given by

$$\bar{x} = \frac{1}{n} \sum_{x_i \neq x_h} x_i$$

Compute f(x_r). If f(x_l) ≤ f(x_r) < f(x_s), accept the reflection point, replace x_h with x_r, and go to step 1. Else go to step 3 or step 4.
3. If f(x_r) < f(x_l), expand the simplex and find the expansion point x_e as

$$x_e = \bar{x} + \beta(x_r - \bar{x})$$

Compute f(x_e). If f(x_e) < f(x_r), accept the expansion point, replace x_h with x_e, and go to step 1. Else replace x_h with x_r and go to step 1.
4. If f(x_r) > f(x_s), contract the simplex.
(a) If f(x_s) ≤ f(x_r) < f(x_h), perform an outside contraction:

$$x_o = \bar{x} + \gamma(x_r - \bar{x})$$

Compute f(x_o). If f(x_o) ≤ f(x_r), replace x_h with x_o and go to step 1. Else go to step 5.


(b) If f(x_r) ≥ f(x_h), perform an inside contraction:

$$x_c = \bar{x} - \delta(\bar{x} - x_h)$$

Compute f(x_c). If f(x_c) < f(x_h), replace x_h with x_c and go to step 1. Else go to step 5.
5. Shrink the simplex around x_l and define the new vertices as x_i → x_l + s(x_i − x_l), for i = 2, …, n + 1. Go to step 1 and evaluate the function values at these points.
The coefficients can be set as α > 0, β > 1, 0 < γ < 1, and 0 < δ < 1.

  … greater than its parent Xi
  Then update Xi with Vi
  Else keep Xi unchanged
  End if
End for

4.8.5 Advantages and limitations of artificial bee colony algorithm
The advantages of the ABC algorithm are as follows:
• Simplicity, flexibility, and robustness.
• Ability to explore local solutions.
• Ability to handle objective cost.
• Ease of implementation.
The limitations of the ABC algorithm are as follows:
• Lack of use of secondary information.
• Requires new fitness tests for parameters.
• Requires a higher number of objective function evaluations.

4.8.6 Applications of artificial bee colony algorithm The areas of application of ABC algorithm include optimization, image processing, engineering design, scheduling, clustering, bioinformatics, and several others.

4.9 Cuckoo search algorithm

4.9.1 Historical background

The CS algorithm is a recently introduced metaheuristic optimization algorithm proposed by Yang and Deb [72,73]. The basis for this algorithm is an adaptation of the intelligent breeding behavior of cuckoos. The cuckoo is a bird that never builds its own nest; instead, it lays its eggs in the nests of other host birds. The host bird takes care of the eggs, presuming that they are its own. However, if the host bird


identifies the cuckoo eggs, it will either throw those eggs away from its nest or simply abandon the nest and build a new one in a different location. In a nest, each egg represents a solution, and a cuckoo egg represents a new and promising solution. The cuckoos improve and learn how to lay eggs that are more like those of the target host bird, while the host birds learn how to recognize the false eggs. A greater number of surviving eggs in a nest indicates the better suitability of that nest, and it receives more attention. In CS, a Lévy flight is performed to generate a new solution. This obligate brood parasitic behavior of cuckoos found in nature has led to the development of the CS algorithm, and the method has been studied by various researchers for solving different problems [74–81].

4.9.2 Basic principle
The CS method is developed by combining the cuckoo egg-laying behavior with the Lévy flight. The three idealized rules followed in devising this method are as follows: (i) each cuckoo lays one egg at a time and dumps it in a randomly chosen nest, (ii) the nest with the highest fitness will be carried over to the next generations, and (iii) the number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability p ∈ (0, 1). Depending on this probability, the host bird can either get rid of the egg or simply abandon the nest and build a completely new one. The algorithm divides the search process into two phases, a global phase and a local phase. In the global phase, the formation of new nests takes place, while in the local phase, a fraction of the worst nests is removed. Here, the global phase corresponds to exploration, whereas the local phase corresponds to exploitation. The global phase is governed by Lévy flight based random walks rather than simple Brownian or Gaussian walks. The local phase is governed by selecting two random solutions from the search space with a certain probability. The working of CS is controlled by three parameters: (i) the Lévy flight component that controls the exploration search equation, (ii) the exploitation or local search equation that controls the random solutions, and (iii) the probability that decides the extent of exploration and exploitation.

4.9.3 Cuckoo search implementation procedure
Fig. 4.8 shows a flowchart of the algorithm. The algorithm starts with an initial population of cuckoos. These initial cuckoos have some eggs to lay in the nests of host birds. Some of these eggs, which are more similar to the host bird's eggs, have the opportunity to grow up and become mature cuckoos; the other eggs are detected by the host birds and are killed. The grown eggs reveal the suitability of the nests in that area: the more eggs survive in an area, the more profit is gained. The CS algorithm uses a balanced combination of a local random walk and a global exploration random walk, controlled by a switching parameter p_a. The local random walk can be written as

$$x_i^{t+1} = x_i^t + \alpha s \otimes H(p_a - \varepsilon) \otimes \left( x_j^t - x_k^t \right) \tag{4.28}$$


FIGURE 4.8 Cuckoo search algorithm.

Here, x_i^t is the previous solution; x_j^t and x_k^t are two different solutions selected randomly by a random permutation; H(u) is a Heaviside function; ε is a random number drawn from a uniform distribution; s is the step size; and ⊗ denotes the entry-wise product of two vectors.


The global random walk is carried out using Lévy flights as

$$x_i^{t+1} = x_i^t + \alpha L(s, \lambda) \tag{4.29}$$

where

$$L(s, \lambda) = \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \, \frac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0) \tag{4.30}$$

Here α > 0 is the step size-scaling factor, which should be related to the scale of the problem of interest. Eq. (4.29) is a stochastic equation describing a random walk. In general, the random walk is a Markov chain whose next state/location depends only on the current location (the first term in the above equation) and the transition probability (the second term). However, a substantial fraction of the new solutions should be generated by far-field randomization, with locations far enough from the current best location; this ensures that the system will not be trapped in a local optimum. The initial solution is generated based on

$$x = L_b + (U_b - L_b) \times \mathrm{rand}(\mathrm{size}(L_b)) \tag{4.31}$$

where rand is a random number generator uniformly distributed in [0, 1], and U_b and L_b are the upper and lower bounds of the jth nest, respectively.

4.9.4 Pseudocode of cuckoo search algorithm
The pseudocode of CS via Lévy flights is given as follows:

begin
  Objective function F(x), x = (x1, ..., xd)^T
  Generate initial population of n host nests xi (i = 1, 2, ..., n)
  While (t < MaxGeneration) or (stop criterion)
    Get a cuckoo randomly by Levy flights and evaluate its fitness Fi
    Choose a nest among n (say, j) randomly
    If (Fi > Fj), replace j by the new solution i
    End if
    A fraction (pa) of worse nests are abandoned and new ones are built
    New solutions are generated by Eq. (4.28)
    Keep the best solutions (or nests with quality solutions)
    Rank the solutions and find the current best
    t = t + 1
  end while
  Postprocess results and visualization
end
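The text does not specify how the Lévy steps of Eqs. (4.29) and (4.30) are sampled in practice; a common choice is Mantegna's algorithm, sketched below with λ = 1.5 and the step scale α as our own assumptions.

import numpy as np
from math import gamma, sin, pi

# A minimal sketch of the Levy-flight global walk of Eq. (4.29), drawing
# steps with Mantegna's algorithm (an assumed sampler; lam = 1.5 and
# alpha are illustrative choices, not values from the text).
def levy_step(dim, lam=1.5):
    sigma = (gamma(1 + lam)*sin(pi*lam/2) /
             (gamma((1 + lam)/2)*lam*2**((lam - 1)/2)))**(1/lam)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v)**(1/lam)

def global_walk(x, alpha=0.01):
    return x + alpha*levy_step(x.size)           # Eq. (4.29)

def local_walk(x, xj, xk, pa=0.25, s=1.0):
    eps = np.random.rand(x.size)
    H = (pa - eps > 0).astype(float)             # Heaviside H(pa - eps)
    return x + s*H*(xj - xk)                     # Eq. (4.28), alpha absorbed in s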

4.9.5 Advantages and limitations of cuckoo search algorithm
The advantages of the CS algorithm are as follows:
• The algorithm is simple.
• It has only two parameters.
• The algorithm is easy to implement.


The limitations of the CS algorithm are as follows:
• It lacks a proper balance between exploration and exploitation.
• It requires a proper study of the probability that decides the extent of exploration and exploitation.

4.9.6 Applications of cuckoo search algorithm
CS has been successfully applied to solve various optimization problems such as speech recognition, job scheduling, engineering design, and global optimization.

4.10 Summary
This chapter discusses various stochastic and evolutionary optimization methods such as GA, SA, DE, ACO, TS, PSO, the ABC algorithm, and the CS algorithm. All these methods are illustrated with their algorithmic schemes and implementation strategies. They establish a basis for the following chapters of this book, where various base case problems as well as applications to different domains are presented.

References
[1] A. Fraser, Simulation of genetic systems by automatic digital computers, Aust. J. Biol. Sci. 10 (2) (1957) 492–499.
[2] H.J. Bremermann, The Evolution of Intelligence: The Nervous System as a Model of its Environment, Tech. Report 1, Dept. of Mathematics, Univ. of Washington, Seattle, 1958, Contract No. 477 17.
[3] A.S. Fraser, Simulation of genetic systems by automatic digital computers. IV. Epistasis, Aust. J. Biol. Sci. 13 (1960) 329–346.
[4] R.B. Hollstien, Artificial Genetic Adaptation in Computer Control Systems, PhD Thesis, University of Michigan, 1971.
[5] D.E. Goldberg, Simple genetic algorithms and the minimal deceptive problem, in: L. Davis (Ed.), Genetic Algorithms and Simulated Annealing, Morgan Kaufmann, Los Altos, CA, 1987, pp. 74–88 (Chapter 6).
[6] J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1975.
[7] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[8] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, 1991.
[9] M. Mitchell, Introduction to Genetic Algorithms, Prentice Hall, New Delhi, 1996.
[10] M. Gen, R. Cheng, Genetic Algorithms and Engineering Design, Wiley, New York, 1997.
[11] S.J. Louis, R. Tang, Interactive genetic algorithms for the traveling salesman problem, in: Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, Vol. 1, Morgan Kaufmann Publishers Inc., 1999.


[12] P.V. Paul, P. Dhavachelvan, R. Bhaskaran, A novel population initialization technique for genetic algorithm, in: 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), IEEE, 2013.
[13] M. Mitchell, An Introduction to Genetic Algorithms, second ed., A Bradford Book, 1996.
[14] K. Deb, An efficient constraint handling method for genetic algorithms, Comput. Methods Appl. Mech. Eng. 186 (2000) 311–338.
[15] D.W. Dyer, Evolutionary Computation in Java, 2010 (Online).
[16] S. Yang, Adaptive non-uniform crossover based on statistics for genetic algorithms, in: Proceedings of the 4th Annual Conference on Genetic and Evolutionary Computation, Morgan Kaufmann Publishers Inc., 2002.
[17] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671–680.
[18] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys. 21 (6) (1953) 1087–1092.
[19] W.B. Dolan, P.T. Cummings, M.D. LeVan, Process optimization via simulated annealing: application to network design, AIChE J. 35 (5) (1989) 725–736.
[20] S.P. Brooks, B.J. Morgan, Optimization using simulated annealing, J. R. Stat. Soc. 44 (2) (1995) 241–257.
[21] L. Herault, Rescaled simulated annealing: accelerating convergence of simulated annealing by rescaling the states energies, J. Heuristics 6 (2) (2000) 215–252.
[22] R. Storn, K. Price, Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Optim. 11 (1997) 341–359.
[23] L.S. Coelho, V.C. Martin, A hybrid method of differential evolution and SQP for solving the economic dispatch problem with valve point effect, in: Applications of Soft Computing, Adv. Intell. Soft Comput. 36 (2006) 311–320.
[24] D. Karaboga, S. Okdem, A simple and global optimization algorithm for engineering problems: differential evolution algorithm, Turk. J. Elec. Eng. 12 (1) (2004) 53–60.
[25] K.V. Price, R.M. Storn, J.A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, 2005.
[26] M.D. Kapadi, R.D. Gudi, Optimal control of fed-batch fermentation involving multiple feeds using differential evolution, Process Biochem. 39 (2004) 1709–1721.
[27] D. Kiranmai, A. Jyothirmai, C.V.S. Murthy, Determination of kinetic parameters in fixed-film bio-reactors: an inverse problem approach, Biochem. Eng. J. 23 (2005) 73–83.
[28] J.L. Deneubourg, S. Aron, S. Goss, J.M. Pasteels, The self-organizing exploratory pattern of the Argentine ant, J. Insect Behav. 3 (1990) 159–168.
[29] M. Dorigo, V. Maniezzo, A. Colorni, Positive feedback as a search strategy, Dipartimento di Elettronica, Politecnico di Milano, Italy, Tech. Rep. 91-016 (1991).
[30] M. Dorigo, Optimization, Learning and Natural Algorithms (in Italian), Ph.D. dissertation, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[31] M. Dorigo, V. Maniezzo, A. Colorni, Ant System: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. Part B 26 (1) (1996) 29–41.
[32] L.M. Gambardella, E.D. Taillard, M. Dorigo, Ant colonies for the quadratic assignment problem, J. Oper. Res. Soc. 50 (2) (1999) 167–176.
[33] V. Maniezzo, A. Colorni, The Ant System applied to the quadratic assignment problem, IEEE Trans. Data Knowl. Eng. 11 (5) (1999) 769–778.


[34] M. Dorigo, E. Bonabeau, G. Theraulaz, Ant algorithms and stigmergy, Future Gener. Comput. Syst. 16 (2000) 851–871.
[35] K.C. Abbaspour, R. Schulin, M.T. van Genuchten, Estimating unsaturated soil hydraulic parameters using ant colony optimization, Adv. Water Resour. 24 (2001) 827–841.
[36] T.I. Wang, K.T. Wang, Y.M. Huang, Using a style-based ant colony system for adaptive learning, Expert Syst. Appl. 34 (4) (2008) 2449–2464.
[37] F. Glover, Future paths for integer programming and links to artificial intelligence, Comput. Oper. Res. 5 (1986) 533–549.
[38] P. Hansen, The steepest ascent mildest descent heuristic for combinatorial programming, Numerical Methods in Combinatorial Programming Conference, Capri, Italy, 1986.
[39] D. Cvijovic, J. Klinowski, Taboo search: an approach to the multiple minima problem, Science 267 (1995) 664–666.
[40] F. Glover, M. Laguna, Tabu Search, Kluwer Academic Publishers, Boston, 1997.
[41] M. Gendreau, G. Laporte, F. Semet, A tabu search heuristic for the undirected selective travelling salesman problem, Eur. J. Oper. Res. 106 (1998) 539–545.
[42] C. Wang, H. Quan, X. Xu, Optimal design of multiproduct batch chemical process using tabu search, Comput. Chem. Eng. 23 (1999) 427–437.
[43] L. Cavin, U. Fischer, A. Mosat, K. Hungerbühler, Batch process optimization in a multipurpose plant using tabu search with a design-space diversification, Comput. Chem. Eng. 29 (2005) 1770–1786.
[44] B. Lin, D.C. Miller, Solving heat exchanger network synthesis problems with tabu search, Comput. Chem. Eng. 28 (2004) 1451–1464.
[45] B. Lin, D.C. Miller, Tabu search algorithm for chemical process optimization, Comput. Chem. Eng. 28 (2004) 2287–2306.
[46] G. Waligora, Tabu search for discrete–continuous scheduling problems with heuristic continuous resource allocation, Eur. J. Oper. Res. 193 (2009) 849–856.
[47] N. Fescioglu-Unver, M.M. Kokar, Self controlling tabu search algorithm for the quadratic assignment problem, Comput. Ind. Eng. 60 (2011) 310–319.
[48] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, 1995, pp. 1942–1948.
[49] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, Evol. Program. VII 98 (1998) 591–600, Springer-Verlag, New York.
[50] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (1) (2002) 58–73.
[51] R.C. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, Proc. IEEE Congr. Evol. Comput. 1 (2001) 27–30.
[52] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (3) (2004) 204–210.
[53] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons Ltd., Chichester, 2005.
[54] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization: an overview, Swarm Intell. 1 (1) (2007) 33–57.
[55] M. Salahi, A. Jamalian, A. Taati, Global minimization of multi-funnel functions using particle swarm optimization, Neural Comput. Appl. 23 (7–8) (2013) 2101–2106.


[56] S.Y. Sun, J.W. Li, Parameter estimation of methanol transformation into olefins through improved particle swarm optimization with attenuation function, Chem. Eng. Res. Des. 92 (11) (2014) 2083–2094.
[57] M. Khajeh, K. Dastafkan, Removal of molybdenum using silver nanoparticles from water samples: particle swarm optimization–artificial neural network, J. Ind. Eng. Chem. 20 (5) (2014) 3014–3018.
[58] Y.Z. Hsieh, M.C. Su, P.C. Wang, A PSO-based rule extractor for medical diagnosis, J. Biomed. Inform. 49 (2014) 53–60.
[59] D.H. Tungadio, B.P. Numbi, M.W. Siti, A.A. Jimoh, Particle swarm optimization for power system state estimation, Neurocomputing 148 (2015) 175–180.
[60] R.C. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, in: V.W. Porto, N. Saravanan, D. Waagen (Eds.), Evolutionary Programming VII, Springer, 1998, pp. 611–616.
[61] R.C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, Proc. Congr. Evol. Comput. (2000) 84–88.
[62] M. Clerc, The swarm and the queen: towards a deterministic and adaptive particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, IEEE Service Center, Piscataway, NJ, 1999, pp. 1951–1957.
[63] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, Kayseri, Türkiye, October 2005.
[64] V. Tereshko, A. Loengarov, Collective decision-making in honey bee foraging dynamics, Comput. Inf. Syst. J. 9 (3) (2005) 1–7.
[65] D. Teodorovic, M. Dell'Orco, Bee colony optimization: a cooperative learning approach to complex transportation problems, Adv. OR AI Methods Transp. (2005) 51–60.
[66] H.F. Wedde, M. Farooq, Y. Zhang, BeeHive: an efficient fault-tolerant routing algorithm inspired by honey bee behavior, in: International Workshop on Ant Colony Optimization and Swarm Intelligence, Springer, 2004.
[67] H. Drias, S. Sadeg, S. Yahi, Cooperative bees swarm for solving the maximum weighted satisfiability problem, in: International Work-Conference on Artificial Neural Networks, Springer, 2005.
[68] B. Yuce, M.S. Packianather, S. Michael, E. Mastrocinque, D.T. Pham, A. Lambiase, Honey bees inspired optimization method: the bees algorithm, Insects 4 (4) (2013) 646–662.
[69] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. 39 (3) (2007) 459–471.
[70] C. Xu, H. Duan, Artificial bee colony (ABC) optimized edge potential function (EPF) approach to target recognition for low-altitude aircraft, Pattern Recognit. Lett. 31 (13) (2010) 1759–1772.
[71] H. Zhang, Y. Zhu, W. Zou, X. Yan, A hybrid multi-objective artificial bee colony algorithm for burdening optimization of copper strip production, Appl. Math. Model. 36 (6) (2012) 2578–2591.
[72] X.S. Yang, S. Deb, Cuckoo search via Lévy flights, in: Proc. of World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE Publications, USA, 2009, pp. 210–214.


[73] X.S. Yang, S. Deb, Engineering optimization by cuckoo search, Int. J. Math. Model. Numer. Optim. 1 (2010) 330–343.
[74] R. Rajabioun, Cuckoo optimization algorithm, Appl. Soft Comput. 11 (2011) 5508–5518.
[75] M. Dhivya, M. Sundarambal, Cuckoo search for data gathering in wireless sensor networks, Int. J. Mob. Commun. 9 (6) (2011) 642–656.
[76] A.H. Gandomi, X.S. Yang, A.H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems, Eng. Comput. 29 (1) (2013) 17–35.
[77] V. Bhargava, S. Fateen, A. Bonilla-Petriciolet, Cuckoo search: a new nature-inspired optimization method for phase equilibrium calculations, Fluid Phase Equilib. 337 (2013) 191–200.
[78] P. Civicioglu, E. Besdok, A conceptual comparison of the cuckoo-search, particle-swarm optimization, differential evolution and artificial bee colony algorithms, Artif. Intell. Rev. 39 (4) (2013) 315–346.
[79] I.J. Fister, X.S. Yang, D. Fister, I. Fister, Cuckoo search: a brief literature review, in: Cuckoo Search and Firefly Algorithm, Studies in Computational Intelligence, vol. 516, Springer, 2014, pp. 49–62.
[80] M. Abdel-Baset, I.M. Hezam, Cuckoo search and genetic algorithm hybrid schemes for optimization problems, Appl. Math. Inf. Sci. 10 (3) (2016) 1185–1192.
[81] R. Salgotra, U. Singh, S. Saha, New cuckoo search algorithms with enhanced exploration and exploitation properties, Expert Syst. Appl. 95 (2018) 384–420.


CHAPTER 5 Application of stochastic and evolutionary optimization algorithms to base case problems

Chapter outline
5.1 Introduction
5.2 Examples of numerical functions
5.3 Application of genetic algorithm to base case problems
  5.3.1 Genetic algorithm implementation strategy
  5.3.2 Optimization results of genetic algorithm
5.4 Application of simulated annealing to base case problems
  5.4.1 Simulated annealing implementation strategy
  5.4.2 Optimization results of simulated annealing
5.5 Application of differential evolution to base case problems
  5.5.1 Differential evolution implementation strategy
  5.5.2 Optimization results of differential evolution
5.6 Application of ant colony optimization to base case problems
  5.6.1 Ant colony optimization implementation strategy
  5.6.2 Optimization results of ant colony optimization
5.7 Application of particle swarm optimization to base case problems
  5.7.1 Particle swarm optimization implementation strategy
  5.7.2 Optimization results of particle swarm optimization
5.8 Application of artificial bee colony algorithm to base case problems
  5.8.1 Artificial bee colony implementation strategy
  5.8.2 Optimization results of artificial bee colony optimization
5.9 Analysis of results
5.10 Summary
References

5.1 Introduction
In Chapter 4, various stochastic and evolutionary optimization algorithms such as genetic algorithm (GA), simulated annealing (SA), differential evolution (DE), ant colony optimization (ACO), tabu search (TS), particle swarm optimization


(PSO), artificial bee colony (ABC) algorithm, and cuckoo search (CS) algorithm are discussed along with their algorithmic schemes and implementation procedures. These methods are in general inspired by phenomena from nature, and they are robust and flexible, with ease of operation. Their main advantage is that they do not require any gradient information, which is unavailable in many complex optimization problems. Applying these algorithms to continuous and discrete optimization problems and evaluating their performance provides useful insight into the efficacy of these methods.

5.2 Examples of numerical functions
These stochastic and evolutionary optimization methods, in accordance with the algorithmic schemes and implementation procedures discussed in Chapter 4, are applied to solve different optimization problems considered as base case studies. The example problems considered here are given as follows.

Example 1. Nonlinear function
Find the optimum of the function

f(x1, x2, x3) = 2x1^2 + (x2 + 1)^2 + (x3 - 1)^2   (5.1)

within the search domain -0.5 <= xi <= 1, i = 1, 2, 3.

Example 2. Quadratic function
Optimize the function

f(x1, x2) = (x1 + 2x2 - 7)^2 + (2x1 + x2 - 5)^2   (5.2)

Search domain: -10 <= xi <= 10, i = 1, 2.

Example 3. Powell quartic function

f(x1, x2, x3, x4) = (x1 + 10x2)^2 + 5(x3 - x4)^2 + (x2 - 2x3)^4 + 10(x1 - x4)^4   (5.3)

Search domain: -4 <= xi <= 5, i = 1, 2, 3, 4.
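Since every method in this chapter is exercised on these same three functions, it is convenient to code them once. The following is a minimal MATLAB sketch (the handle and bound names f1-f3 and lb1-ub3 are ours, not from the text); the sketches in the later sections assume these definitions.

```matlab
% Objective functions of Eqs. (5.1)-(5.3) as MATLAB anonymous functions.
% Each handle takes a row vector x and returns the scalar objective value.
f1 = @(x) 2*x(1)^2 + (x(2) + 1)^2 + (x(3) - 1)^2;           % Eq. (5.1)
f2 = @(x) (x(1) + 2*x(2) - 7)^2 + (2*x(1) + x(2) - 5)^2;    % Eq. (5.2), Booth function
f3 = @(x) (x(1) + 10*x(2))^2 + 5*(x(3) - x(4))^2 ...
    + (x(2) - 2*x(3))^4 + 10*(x(1) - x(4))^4;               % Eq. (5.3), Powell function

% Search domains (lower/upper bounds) for the three examples.
lb1 = [-0.5 -0.5 -0.5];  ub1 = [1 1 1];        % Example 1
lb2 = [-10 -10];         ub2 = [10 10];        % Example 2
lb3 = [-4 -4 -4 -4];     ub3 = [5 5 5 5];      % Example 3
```

Evaluating f1 at (0, -0.5, 1), for instance, gives 0.25, which is the constrained minimum reported throughout this chapter.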

5.3 Application of genetic algorithm to base case problems
The GA discussed in Section 4.2, along with its implementation procedure, is used to solve the base case problems given in Section 5.2. Further information on GAs can be found elsewhere [1-6].

5.3.1 Genetic algorithm implementation strategy
A population of size 50 is generated initially within the search domain. At each step, the GA selects individuals stochastically from the current population, and the number of generations is set as 50. In the selection process, the population is modified so that the parents produce children for the next generation. Crossover is performed with a crossover constant of 0.8 to generate new solutions from the existing population, and mutation is applied with a mutation probability of 0.2 to maintain genetic diversity from one generation to the next. The population is thus modified to evolve toward the optimal solution. The GA Toolbox in MATLAB 2016 is used for building the GA optimization models for the three example problems given in Eqs. (5.1)-(5.3). The parameters chosen to solve these base case problems are shown in Table 5.1.

Table 5.1 Genetic algorithm (GA) parameters (extracted from GA Toolbox, MATLAB 2016).

Population type                                  Double vector
Population size                                  50
Creation function                                Constraint dependent
Scaling function                                 Rank
Selection function                               Stochastic uniform
Reproduction elite count                         0.05*population size
Cross constant                                   0.8
Mutation function                                Constraint dependent
Crossover function                               Constraint dependent
Migration (direction/fraction/interval)          Forward/0.2/20
Constraint parameters                            Augmented Lagrangian
Initial penalty/penalty function                 10/20
Hybrid function                                  None
Stopping criteria: generations                   100*number of variables
Time limit and fitness limit                     Inf and -Inf
Stall test                                       Average change
Function tolerance                               10^-6
Constraint tolerance                             10^-3
Evaluation of fitness and constraint functions   In serial
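As an illustration of how the Table 5.1 settings map onto the GA Toolbox, the following minimal sketch runs Example 1. It assumes the function handles and bounds defined in Section 5.2 and the option names of MATLAB R2016a or later (earlier releases use gaoptimset with slightly different names); it is not the authors' exact script.

```matlab
% GA run for Example 1 with settings mirroring Table 5.1.
opts = optimoptions('ga', ...
    'PopulationSize', 50, ...           % population of size 50
    'MaxGenerations', 50, ...           % generations set as 50
    'EliteCount', ceil(0.05*50), ...    % reproduction elite count
    'CrossoverFraction', 0.8, ...       % crossover constant of 0.8
    'FunctionTolerance', 1e-6);
nvars = 3;
[xbest, fbest] = ga(f1, nvars, [], [], [], [], lb1, ub1, [], opts);
% Expected converged solution: xbest near (0, -0.5, 1) with fbest near 0.25.
```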

5.3.2 Optimization results of genetic algorithm
The converged solutions of GA for Example 1, Example 2, and Example 3 of Section 5.2 are shown in Fig. 5.1. The iterative performance of GA leading to the optimal solution is shown in Table 5.2.

FIGURE 5.1 Optimization results of genetic algorithm for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), best fitness 0.250004 (mean 0.250015); (ii) Example 2, Eq. (5.2), best fitness 1.3499e-09 (mean 4.93223e-08); (iii) Example 3, Eq. (5.3), best fitness 0.000196393 (mean 0.000228686). Each panel plots best and mean fitness against generation together with the current best individual.

Table 5.2 Iterative performance of genetic algorithm (GA).

Example 1, Eq. (5.1)
Iteration number   1       10      20      30      40      50     60
x1                 0.1     0.01    0.01    0.01    0.01    0      0
x2                 -0.5    -0.5    -0.5    -0.5    -0.5    -0.5   -0.5
x3                 0.8     0.9     0.9     0.95    0.99    1      1
f(x)               0.382   0.272   0.259   0.256   0.253   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1       10      20      30      40      50   60
x1                 0.1     0.5     0.99    1       1       1    1
x2                 4       3.2     3.1     3       3       3    3
f(x)               5.683   0.319   0.001   9e-06   2e-08   0    0

Example 3, Eq. (5.3)
Iteration number   1       30       60        90        120      150   180
x1                 0.99    0.1      0.1       0         0        0     0
x2                 0.1     0.2      0.1       0.09      0        0     0
x3                 0.5     0.2      0.1       0.09      0        0     0
x4                 0.25    0.2      0.1       0.1       0        0     0
f(x)               30.13   0.0024   0.00097   0.00052   0.0004   0     0

The global minimum obtained by GA for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (0, 0, 0, 0), f(x*) = 0.0.

5.4 Application of simulated annealing to base case problems
The SA algorithm discussed in Section 4.3, along with its implementation procedure, is used to solve the base case problems given in Section 5.2. Further detail on SA can be found elsewhere [7-10].

5.4.1 Simulated annealing implementation strategy
The SA algorithm and its implementation procedure discussed in Section 4.3 are used to solve the three example problems given in Section 5.2. The SA Toolbox in MATLAB 2016 is used for building the SA optimization models for the three example problems given in Eqs. (5.1)-(5.3). The parameters chosen for SA implementation are shown in Table 5.3, and a hedged code sketch follows.
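A minimal sketch of the corresponding simulannealbnd call for Example 1 is shown below; the option names follow Table 5.3 as exposed in recent MATLAB releases (older releases use saoptimset), and the starting point x0 is our arbitrary choice.

```matlab
% SA run for Example 1 with settings mirroring Table 5.3.
opts = optimoptions('simulannealbnd', ...
    'InitialTemperature', 100, ...         % initial temperature
    'ReannealInterval', 100, ...           % reannealing interval
    'TemperatureFcn', @temperatureexp, ... % exponential temperature update
    'AnnealingFcn', @annealingfast, ...    % fast annealing
    'FunctionTolerance', 1e-6);
x0 = [0 0 0];                              % arbitrary feasible start
[xbest, fbest] = simulannealbnd(f1, x0, lb1, ub1, opts);
```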

5.4.2 Optimization results of simulated annealing
The optimization results of SA for the base case problems of Example 1, Example 2, and Example 3 given in Section 5.2 are shown in Fig. 5.2. The iterative performance of SA leading to the optimal solution is shown in Table 5.4.

Table 5.3 Simulated annealing (SA) parameters (extracted from SA Toolbox, MATLAB 2016).

Max iterations                   Inf
Maximum function evaluations     300*number of variables
Time limit                       Inf
Function tolerance               10^-6
Objective limit                  -Inf
Stall generations                500*number of variables
Annealing function               Fast annealing
Reannealing interval             100
Temperature update               Exponential temperature update
Initial temperature              100
Accepting criteria               Simulated annealing acceptance
Problem type                     Double
Hybrid function                  None
Display interval                 10

FIGURE 5.2 Optimization results of simulated annealing for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), best function value 0.250279; (ii) Example 2, Eq. (5.2), best function value 2.00311e-07; (iii) Example 3, Eq. (5.3), best function value 0. Each panel plots the function value against iteration together with the best point found.

5.5 Application of differential evolution to base case problems
The DE algorithm discussed in Section 4.4, along with its implementation procedure, is used to solve the base case problems given in Section 5.2. Further detail on DE can be found elsewhere [11-15].

Table 5.4 Iterative performance of simulated annealing (SA).

Example 1, Eq. (5.1)
Iteration number   1   400      800     1200   1600   2000   2277
x1                 0   0        0       0      0      0      0
x2                 0   -0.5     -0.49   -0.5   -0.5   -0.5   -0.5
x3                 0   1        1       1      1      1      1
f(x)               2   0.2531   0.253   0.25   0.25   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1        50      500       750        1000   1250   1445
x1                 10       0.99    1         1          1      1      1
x2                 2.44     3.003   3         3          3      3      3
f(x)               4.5516   2e-09   9.6e-20   1.45e-28   0      0      0

Example 3, Eq. (5.3)
Iteration number   1      300     600    900     1200    1500   2000
x1                 1.75   0.75    2      0.75    0.75    0.75   -0.1
x2                 0.01   0.01    0.01   0       0       0      0
x3                 0.02   0.02    0.01   0       0       0      0
x4                 0.01   0.01    0      0.02    0       0      0
f(x)               1.75   0.881   0.12   0.073   0.003   0.17   0.0013

The global minimum obtained by SA for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (-0.1, 0, 0, 0), f(x*) = 0.0013.

5.5.1 Differential evolution implementation strategy
The DE algorithm along with its implementation procedure discussed in Section 4.4 is used to solve the three example problems given in Section 5.2. The population is initialized randomly between the upper (XU) and lower (XL) bounds set for the variables. The total population size is specified as 50, and the maximum number of iterations is set as 1000. The crossover probability is set as 0.2, and the stopping criterion specifying the tolerance limit for convergence is set as 0.01. The DE optimization model is implemented in MATLAB code for the three example problems given in Eqs. (5.1)-(5.3), as sketched below.
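Since DE has no built-in MATLAB solver, the model is hand-coded. The following is a minimal DE/rand/1/bin sketch for Example 1 consistent with the settings above; the mutation factor F = 0.5 and the reading of the 0.01 tolerance as the spread of population costs are our assumptions.

```matlab
% Minimal DE/rand/1/bin loop for Example 1 (f1, lb1, ub1 from Section 5.2).
NP = 50; maxIter = 1000; CR = 0.2; F = 0.5; tol = 0.01;
fun = f1; lb = lb1; ub = ub1; D = numel(lb);
pop = repmat(lb,NP,1) + rand(NP,D).*repmat(ub-lb,NP,1);  % random initialization
cost = zeros(NP,1);
for i = 1:NP, cost(i) = fun(pop(i,:)); end
for iter = 1:maxIter
    for i = 1:NP
        r = randperm(NP,4); r(r==i) = []; r = r(1:3);    % donors distinct from i
        v = pop(r(1),:) + F*(pop(r(2),:) - pop(r(3),:)); % mutant vector
        mask = rand(1,D) < CR; mask(randi(D)) = true;    % binomial crossover
        u = pop(i,:); u(mask) = v(mask);
        u = min(max(u, lb), ub);                         % respect the domain
        fu = fun(u);
        if fu <= cost(i), pop(i,:) = u; cost(i) = fu; end % greedy selection
    end
    if max(cost) - min(cost) < tol, break; end           % convergence tolerance
end
[fbest, ib] = min(cost); xbest = pop(ib,:);
```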

5.5.2 Optimization results of differential evolution
The optimization results of DE for the base case problems of Example 1, Example 2, and Example 3 given in Section 5.2 are shown in Fig. 5.3, which plots the fitness function values against the number of iterations. The iterative performance of DE leading to the optimal solution is shown in Table 5.5.

5.6 Application of ant colony optimization to base case problems
The ACO discussed in Section 4.5, along with its implementation procedure, is used to solve the base case problems given in Section 5.2. Further detail on ACO can be found elsewhere [16-22].

5.6.1 Ant colony optimization implementation strategy
The ACO algorithm along with its implementation procedure discussed in Section 4.5 is used to solve the three example problems given in Section 5.2. The initial population is generated within the bounds specified for the variables in the problem. A population (archive) size of 10 and a sample size of 40 are chosen, and the maximum number of iterations is set as 1000. The intensification factor is set as 0.5, and the deviation-distance ratio (zeta) is initially set to unity and further tuned for each problem. New solutions are sampled from Gaussian random variables, with the Gaussian kernel selected by roulette-wheel selection; the merged solutions are sorted so that the best population is retained, and the converged solution with the optimal cost is identified. The ACO optimization model is implemented in MATLAB code for the three example problems given in Eqs. (5.1)-(5.3), along the lines of the sketch below.
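The Gaussian-kernel construction described above corresponds to the continuous-domain ACO (often called ACO_R). The following minimal sketch for Example 1 is our hedged reading of that scheme, with q denoting the intensification factor and zeta the deviation-distance ratio.

```matlab
% Continuous-domain ACO (ACO_R-style) sketch for Example 1.
k = 10; m = 40; maxIter = 1000; q = 0.5; zeta = 1;
fun = f1; lb = lb1; ub = ub1; D = numel(lb);
arch = repmat(lb,k,1) + rand(k,D).*repmat(ub-lb,k,1);   % solution archive
cost = zeros(k,1); for i = 1:k, cost(i) = fun(arch(i,:)); end
[cost, ord] = sort(cost); arch = arch(ord,:);           % keep archive sorted
w = exp(-((1:k)'-1).^2/(2*q^2*k^2)); p = w/sum(w);      % rank-based kernel weights
for iter = 1:maxIter
    ants = zeros(m,D); acost = zeros(m,1);
    for a = 1:m
        for d = 1:D
            l = find(rand <= cumsum(p), 1);             % roulette-wheel pick
            sigma = zeta*sum(abs(arch(:,d) - arch(l,d)))/(k-1);
            ants(a,d) = arch(l,d) + sigma*randn;        % Gaussian sampling
        end
        ants(a,:) = min(max(ants(a,:), lb), ub);        % clip to the domain
        acost(a) = fun(ants(a,:));
    end
    allx = [arch; ants]; allc = [cost; acost];          % merge, keep best k
    [allc, ord] = sort(allc);
    arch = allx(ord(1:k),:); cost = allc(1:k);
end
xbest = arch(1,:); fbest = cost(1);
```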

5.6.2 Optimization results of ant colony optimization
The optimization results of ACO for the base case problems of Example 1, Example 2, and Example 3 given in Section 5.2 are shown in Fig. 5.4, which plots the cost function values against iteration. The iterative performance of ACO leading to the optimal solution is shown in Table 5.6.

FIGURE 5.3 Optimization results of differential evolution for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), (ii) Example 2, Eq. (5.2), and (iii) Example 3, Eq. (5.3). Each panel plots the best cost against iteration.

Table 5.5 Iterative performance of differential evolution (DE).

Example 1, Eq. (5.1)
Iteration number   1        60     120    180    240    300    360
x1                 0.35     0.01   0.01   0.01   0.01   0      0
x2                 0.106    0      0      0      0      -0.5   -0.5
x3                 0.677    0.99   1      1      1      1      1
f(x)               0.3838   0.25   0.25   0.25   0.25   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1        200     300       500        700   900   1000
x1                 10       0.99    1         1          1     1     1
x2                 2.44     3.003   3         3          3     3     3
f(x)               4.5516   2e-09   9.6e-20   1.45e-28   0     0     0

Example 3, Eq. (5.3)
Iteration number   1        1500      3000      4500      6000       7500      10000
x1                 4.59     1.44      1.44      1.44      1.44       1.44      0.14
x2                 1.35     0.144     0.144     0.144     0.1442     0.1441    -0.0014
x3                 3.7      0.084     0.0785    0.0785    0.0613     0.0613    -0.0051
x4                 3.39     0.0845    0.0781    0.0781    0.0616     0.0616    -0.0079
f(x)               35.949   8.6e-07   3.4e-07   3.4e-07   4.84e-08   5.1e-09   0

The global minimum obtained by DE for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (0.14, -0.0014, -0.0051, -0.0079), f(x*) = 0.0.

FIGURE 5.4 Optimization results of ant colony optimization for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), (ii) Example 2, Eq. (5.2), and (iii) Example 3, Eq. (5.3). Each panel plots the best cost against iteration on a logarithmic scale.

Table 5.6 Iterative performance of ant colony optimization (ACO).

Example 1, Eq. (5.1)
Iteration number   1       15       30       60     90     150    200
x1                 0.33    0.0121   0.0074   0      0      0      0
x2                 0.71    -0.468   -0.5     -0.5   -0.5   -0.5   -0.5
x3                 0.14    0.91     1        1      1      1      1
f(x)               0.446   0.25     0.25     0.25   0.25   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1       15       30      60         90        150   200
x1                 3.8     3.3      0.56    1.03       0.998     1     1
x2                 8.63    0.82     2.87    2.97       3.007     3     3
f(x)               0.101   0.1017   0.031   1.03e-05   3.6e-08   0     0

Example 3, Eq. (5.3)
Iteration number   1        15      30       60       90        150       200
x1                 3.02     4.0     2.29     0.28     1.75      0         0
x2                 1.32     0.12    0.16     0.031    0.1759    0         0
x3                 0.36     0.084   0.007    0.153    0.069     0         0
x4                 0.63     0.96    0.0924   0.067    0.067     0         0
f(x)               23.752   0.076   0.0073   0.0018   3.8e-05   9.7e-06   0

The optimum solution obtained by ant colony optimization for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (0, 0, 0, 0), f(x*) = 0.0.

5.7 Application of particle swarm optimization to base case problems
The PSO algorithm discussed in Section 4.7 is used to solve the base case problems given in Section 5.2. Further detail on PSO can be found elsewhere [23-28].

5.7.1 Particle swarm optimization implementation strategy
The particle swarm optimization algorithm along with its implementation procedure discussed in Section 4.7 is used to solve the three example problems given in Section 5.2. The parameters chosen for implementation of PSO are c1 = 1 and c2 = 0.1. The population size is set as 100, and the maximum number of iterations is set as 100. The PSO optimization model is implemented in MATLAB code for the three example problems given in Eqs. (5.1)-(5.3); a minimal sketch is given below.
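The following hand-coded PSO sketch for Example 1 uses the parameters reported in the text (c1 = 1, c2 = 0.1, a swarm of 100, 100 iterations); the inertia weight w = 0.7 is our assumption, as the chapter does not report one.

```matlab
% Minimal PSO loop for Example 1 (f1, lb1, ub1 from Section 5.2).
N = 100; maxIter = 100; c1 = 1; c2 = 0.1; w = 0.7;
fun = f1; lb = lb1; ub = ub1; D = numel(lb);
X = repmat(lb,N,1) + rand(N,D).*repmat(ub-lb,N,1);  % particle positions
V = zeros(N,D);                                     % particle velocities
P = X; pcost = zeros(N,1);
for i = 1:N, pcost(i) = fun(X(i,:)); end            % personal bests
[gcost, ig] = min(pcost); G = P(ig,:);              % global best
for iter = 1:maxIter
    for i = 1:N
        V(i,:) = w*V(i,:) + c1*rand(1,D).*(P(i,:) - X(i,:)) ...
                          + c2*rand(1,D).*(G - X(i,:));
        X(i,:) = min(max(X(i,:) + V(i,:), lb), ub); % move and clip to bounds
        fx = fun(X(i,:));
        if fx < pcost(i), P(i,:) = X(i,:); pcost(i) = fx; end
        if fx < gcost,    G = X(i,:);      gcost = fx;     end
    end
end
```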

5.7.2 Optimization results of particle swarm optimization
The optimization results of PSO for the base case problems of Example 1, Example 2, and Example 3 given in Section 5.2 are shown in Fig. 5.5, which plots the cost function values against iteration. The iterative performance of PSO leading to the optimal solution is shown in Table 5.7.

5.8 Application of artificial bee colony algorithm to base case problems
The ABC optimization algorithm discussed in Section 4.8 is used to solve the three example problems given in Section 5.2. Further detail on the ABC algorithm can be found elsewhere [29-34].

5.8.1 Artificial bee colony implementation strategy
The ABC optimization algorithm along with its implementation procedure discussed in Section 4.8 is used to solve the three example problems given in Section 5.2. The initial population is created within the bounds specified for the example problems. The population (colony) size is set as 100, and the number of onlooker bees is set equal to the population size. The acceleration coefficient is set as unity, and the maximum number of iterations is specified as 200. The ABC optimization model is implemented in MATLAB code for the three example problems given in Eqs. (5.1)-(5.3), along the lines of the sketch below.
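The following minimal ABC sketch for Example 1 uses the colony size, onlooker count, acceleration coefficient, and iteration limit from the text; the scout "limit" of 50 trials is our assumption, as the chapter does not report one.

```matlab
% Minimal ABC loop for Example 1 (f1, lb1, ub1 from Section 5.2).
SN = 100; maxIter = 200; a = 1; limit = 50;
fun = f1; lb = lb1; ub = ub1; D = numel(lb);
X = repmat(lb,SN,1) + rand(SN,D).*repmat(ub-lb,SN,1);
cost = zeros(SN,1); for i = 1:SN, cost(i) = fun(X(i,:)); end
trial = zeros(SN,1);
for iter = 1:maxIter
    for i = 1:SN                                % employed bee phase
        k = randi(SN); while k == i, k = randi(SN); end
        j = randi(D); v = X(i,:);
        v(j) = v(j) + a*(2*rand - 1)*(X(i,j) - X(k,j));
        v = min(max(v, lb), ub); fv = fun(v);
        if fv < cost(i), X(i,:) = v; cost(i) = fv; trial(i) = 0;
        else, trial(i) = trial(i) + 1; end
    end
    fit = 1./(1 + cost);                        % fitness (objectives are >= 0)
    p = fit/sum(fit);
    for n = 1:SN                                % onlooker phase, roulette wheel
        i = find(rand <= cumsum(p), 1);
        k = randi(SN); while k == i, k = randi(SN); end
        j = randi(D); v = X(i,:);
        v(j) = v(j) + a*(2*rand - 1)*(X(i,j) - X(k,j));
        v = min(max(v, lb), ub); fv = fun(v);
        if fv < cost(i), X(i,:) = v; cost(i) = fv; trial(i) = 0;
        else, trial(i) = trial(i) + 1; end
    end
    for i = 1:SN                                % scout phase
        if trial(i) > limit
            X(i,:) = lb + rand(1,D).*(ub - lb);
            cost(i) = fun(X(i,:)); trial(i) = 0;
        end
    end
end
[fbest, ib] = min(cost); xbest = X(ib,:);
```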

FIGURE 5.5 Optimization results of particle swarm optimization for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), (ii) Example 2, Eq. (5.2), and (iii) Example 3, Eq. (5.3). Each panel plots the best cost against iteration over 100 iterations.

Table 5.7 Iterative performance of particle swarm optimization (PSO).

Example 1, Eq. (5.1)
Iteration number   1        15     30     45     60     75     100
x1                 0.0038   0.038  0      0      0      0      0
x2                 0.9042   -0.5   -0.5   -0.5   -0.5   -0.5   -0.5
x3                 0.1153   1      1      1      1      1      1
f(x)               0.359    0.25   0.25   0.25   0.25   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1        15     30        45         60   75   100
x1                 2.6      0.68   1         1          1    1    1
x2                 8.63     3.76   3         3          3    3    3
f(x)               3.5417   0.12   0.00056   8.08e-06   0    0    0

Example 3, Eq. (5.3)
Iteration number   1         15      30       45       60        75   100
x1                 4.51      4.0     4.43     4.43     0.91      0    0
x2                 1.62      0.12    0.42     0.42     0.087     0    0
x3                 1.37      0.084   0.429    0.146    0.146     0    0
x4                 2.79      0.96    0.554    0.148    0.148     0    0
f(x)               17.8941   0.068   0.0146   0.0066   0.1e-04   0    0

The optimum solution obtained by particle swarm optimization for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (0, 0, 0, 0), f(x*) = 0.0.


5.8.2 Optimization results of artificial bee colony optimization
The optimization results of the ABC optimization algorithm for the base case problems of Example 1, Example 2, and Example 3 given in Section 5.2 are shown in Fig. 5.6, which plots the cost function values against iteration. The iterative performance of the ABC algorithm leading to the optimal solution is shown in Table 5.8.

5.9 Analysis of results
The results of various stochastic and evolutionary optimization methods such as GA, SA, DE, ACO, PSO, and ABC are evaluated by applying them to three typical base case problems of different complexities. The optimization programs are executed in MATLAB 2016, and the computations are run on a Windows system with a 2.2 GHz AMD processor and 8 GB RAM. The converged solutions of these optimization methods are shown in Figs. 5.1-5.6. The iterative performance of these methods toward the optimal solution is given in Tables 5.2 and 5.4-5.8, and the comparative analysis of the methods is shown in Table 5.9. From these results, it is observed that almost all the optimization methods converge to the same optimal solution for each of the base case examples in Eqs. (5.1)-(5.3). However, the methods differ in speed of convergence and in the number of iterations required to obtain the final converged solution. The convergence times show the faster convergence of the ABC optimization algorithm, followed by the GA, SA, and ACO methods, for the three base case problems. The results on the number of iterations required to obtain a converged solution show that GA required the fewest iterations, followed by the PSO, ABC, and ACO methods. When model complexity is analyzed with respect to computational time, Eqs. (5.1) and (5.2) are solved faster than Eq. (5.3). From the results of the base case problems, it is observed that increasing model complexity increases the number of iterations required to obtain a converged solution for the GA, SA, and DE methods. However, for the ACO, PSO, and ABC methods, no significant increase in the number of iterations is observed with increasing model complexity.

5.10 Summary
This chapter provides a broad and relatively technical treatment of important topics at a level suitable for advanced students and for researchers with a background in engineering optimization applications. Various stochastic and evolutionary search methods such as GA, SA, DE, ACO, PSO, and ABC are evaluated by applying them to three different base case problems. The performance of these methods is assessed with respect to the speed of convergence, the number of iterations needed to obtain a converged solution, and the complexity of the model equation. The solution strategies and the analysis of results for the base case nonlinear optimization problems of this chapter pave the way for extending these methods to real and complex optimization problems in different domains.

FIGURE 5.6 Optimization results of artificial bee colony algorithm for the base case problems of Section 5.2: (i) Example 1, Eq. (5.1), (ii) Example 2, Eq. (5.2), and (iii) Example 3, Eq. (5.3). Each panel plots the best cost against iteration over 200 iterations.

Table 5.8 Iterative performance of artificial bee colony (ABC) algorithm.

Example 1, Eq. (5.1)
Iteration number   1         30       60     90     120    150    200
x1                 0.33      0.0074   0      0      0      0      0
x2                 0.71      -0.5     -0.5   -0.5   -0.5   -0.5   -0.5
x3                 0.146     1        1      1      1      1      1
f(x)               0.44612   0.25     0.25   0.25   0.25   0.25   0.25

Example 2, Eq. (5.2)
Iteration number   1       30       60        90        120       150   200
x1                 2.6     1.085    1.24      1.4       1.4       1     1
x2                 8.63    2.94     2.9       2.6       2.6       3     3
f(x)               4.106   0.0069   0.00023   2.5e-05   2.4e-05   0     0

Example 3, Eq. (5.3)
Iteration number   1       30       60        90        120       150       200
x1                 4.51    0.18     3.73      3.0       3.1       0         0
x2                 1.62    0.05     0.33      0.29      0.29      0         0
x3                 1.37    0.64     0.035     0.148     0.146     0         0
x4                 2.79    1.09     0.049     0.14      0.14      0         0
f(x)               16.91   0.0692   0.00022   2.4e-05   2.3e-05   2.4e-05   0

The optimum solution obtained by the ABC optimization algorithm for the three base case problems is as follows: Example 1: x* = (0, -0.5, 1), f(x*) = 0.25. Example 2: x* = (1, 3), f(x*) = 0.0. Example 3: x* = (0, 0, 0, 0), f(x*) = 0.0.

Table 5.9 Comparative analysis of different optimization algorithms.

Example 1, Eq. (5.1)
                       GA            SA            DE            ACO           PSO           ABC
Convergence speed      5.1 s         5.1 s         7.693 s       5.910 s       15.403 s      4.499 s
Best individual        (0,-0.5,1)    (0,-0.5,1)    (0,-0.5,1)    (0,-0.5,1)    (0,-0.5,1)    (0,-0.5,1)
Number of iterations   60            2277          360           200           100           200

Example 2, Eq. (5.2)
                       GA            SA            DE            ACO           PSO           ABC
Convergence speed      5.9 s         5.09 s        7.498 s       5.342 s       15.320 s      4.345 s
Best individual        (1,3)         (1,3)         (1,3)         (1,3)         (1,3)         (1,3)
Number of iterations   60            1445          1000          200           100           200

Example 3, Eq. (5.3)
                       GA            SA            DE                                ACO           PSO           ABC
Convergence speed      5.98 s        6.8 s         8.9425 s                          7.269 s       15.229 s      5.123 s
Best individual        (0,0,0,0)     (-0.1,0,0,0)  (0.14,-0.0014,-0.0051,-0.0079)    (0,0,0,0)     (0,0,0,0)     (0,0,0,0)
Number of iterations   180           2000          10000                             200           100           200

References
[1] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[2] L. Asadzadeh, Solving the job shop scheduling problem with a parallel and agent-based local search genetic algorithm, J. Theor. Appl. Inf. Technol. 62 (2) (2014) 317-324.
[3] F. Pezzella, G. Morganti, G. Ciaschetti, A genetic algorithm for the flexible job-shop scheduling problem, Comput. Oper. Res. 35 (10) (2008) 3202-3212.
[4] R.K. Phanden, Multi agents approach for job shop scheduling problem using genetic algorithm and variable neighborhood search method, Proc. 20th WMSCI (2016) 275-278.
[5] C.K.H. Lee, A review of applications of genetic algorithms in operations management, Eng. Appl. Artif. Intell. 76 (2018) 1-12.
[6] K. Höschel, V. Lakshminarayanan, Genetic algorithms for lens design: a review, J. Opt. 48 (1) (2019) 134-144.
[7] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671-680.
[8] R.W. Eglese, Simulated annealing: a tool for operational research, Eur. J. Oper. Res. 46 (3) (1990) 271-281.
[9] L. Liu, H. Mu, H. Luo, X. Li, A simulated annealing for multi-criteria network path problems, Comput. Oper. Res. 39 (12) (2012) 3119-3135.
[10] L. Liu, H. Mu, J. Yang, X. Li, F. Wu, A simulated annealing for multi-criteria optimization problem: DBMOSA, Swarm Evolut. Comput. 14 (2014) 48-65.
[11] R.M. Storn, K. Price, Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces, J. Global Opt. 11 (4) (1997) 341-359.
[12] S. Das, P.N. Suganthan, Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evolut. Comput. 15 (1) (2011) 4-31.
[13] P. Chaturvedi, P. Kumar, A cultivated differential evolution variant for molecular potential energy problem, Procedia Comput. Sci. 57 (2015) 1265-1272.
[14] M. Leon, N. Xiong, Greedy adaptation of control parameters in differential evolution for global optimization problems, in: 2015 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2015.
[15] A.W. Mohamed, A novel differential evolution algorithm for solving constrained engineering optimization problems, J. Intell. Manuf. 29 (3) (2017) 659-692.
[16] C. Blum, Ant colony optimization: introduction and recent trends, Phys. Life Rev. 2 (4) (2005) 353-373.
[17] J. Yang, X. Shi, M. Marchese, Y. Liang, An ant colony optimization method for generalized TSP problem, Prog. Nat. Sci. 18 (11) (2008) 1417-1422.
[18] B. Bullnheimer, R.F. Hartl, C. Strauss, A new rank-based version of the Ant System: a computational study, Cent. Eur. J. Oper. Res. Econ. 7 (1) (1999) 25-38.
[19] I. Brezina Jr., Z. Čičková, Solving the travelling salesman problem using the ant colony optimization, Manag. Inf. Syst. 6 (4) (2011) 10-14.
[20] N.Z. Naqvi, H.K. Matheru, K. Chadha, Review of ant colony optimization algorithms on vehicle routing problems and introduction to estimation-based ACO, in: International Conference on Environment Science and Engineering, IPCBEE, vol. 8, IACSIT Press, Singapore, 2011.
[21] K. Jun-man, Z. Yi, Application of an improved ant colony optimization on generalized traveling salesman problem, Energy Procedia 17 (2012) 319-325.
[22] H. Eldem, E. Ülker, The application of ant colony optimization in the solution of 3D traveling salesman problem on a sphere, Eng. Sci. Technol. Int. J. 20 (4) (2017) 1242-1248.
[23] A.P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons, 2007.
[24] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, IEEE, vol. 4, Nov. 1995, pp. 1942-1948.
[25] D.P. Rini, S.M. Shamsuddin, S.S. Yuhaniz, Particle swarm optimization: technique, system and challenges, Int. J. Comput. Appl. 14 (1) (2011) 19-26.
[26] M.R. Bonyadi, Z. Michalewicz, Particle swarm optimization for single objective continuous space problems: a review, Evol. Comput. 1530 (2014) 9304.
[27] X. Yan, Q. Wu, H. Liu, W. Huang, An improved particle swarm optimization algorithm and its application, Int. J. Comput. Sci. (2013) 316-324.
[28] Q. Bai, Analysis of particle swarm optimization algorithm, Comput. Inf. Sci. (2010) 180-184.
[29] X.S. Yang, Engineering optimization via nature-inspired virtual bee algorithms, Springer-Verlag GmbH, 2005, p. 317.
[30] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft. Comput. 8 (1) (2008) 687-697.
[31] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108-132.
[32] B. Akay, D. Karaboga, Solving integer programming problems by using artificial bee colony algorithm, Emergent Perspect. Artif. Intell. 5883 (2009) 355-364.
[33] F. Kang, J. Li, Z. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions, Inf. Sci. 181 (16) (2011) 3508-3531.
[34] I. Khan, M.K. Maiti, A swap sequence based artificial bee colony algorithm for traveling salesman problem, Swarm Evol. Comput. 44 (2019) 428-438.

CHAPTER 6

Application of stochastic evolutionary optimization techniques to chemical processes

Chapter outline
6.1 Introduction
6.2 Process model-based multistage dynamic optimization of a copolymerization reactor using differential evolution
6.2.1 Optimal control and its importance in polymerization reactors
6.2.2 Optimal control problem
6.2.3 Multistage dynamic optimization strategy
6.2.4 The polymerization process and its mathematical representation
6.2.5 Control objectives
6.2.6 Multistage dynamic optimization of SAN copolymerization process using DE
6.2.7 Analysis of results
6.3 Process model-based multistage dynamic optimization of a copolymerization reactor using tabu search
6.3.1 Preliminaries
6.3.2 Multistage dynamic optimization of SAN copolymerization process using tabu search
6.3.3 Analysis of results
6.4 Optimization of multiloop proportional-integral controller parameters of a reactive distillation column using genetic algorithm
6.4.1 The need of evolutionary algorithm for optimization of multiloop controller parameters
6.4.2 The process and its characteristics
6.4.3 Controller design using genetic algorithms
6.4.3.1 Compositions estimation
6.4.3.2 Multiloop proportional-integral controllers
6.4.3.3 Formulation of objective function
6.4.3.4 Desired response specifications for controller tuning
6.4.3.5 Optimal tuning of controller parameters
6.4.4 Analysis of results
6.5 Stochastic optimization-based nonlinear model predictive control of reactive distillation column
6.5.1 The need for stochastic optimization methods in design of nonlinear control strategies
6.5.2 Nonlinear empirical model, predictive model, and objective function
6.5.2.1 Polynomial ARMA model
6.5.2.2 Predictive model formulation
6.5.2.3 Objective function formulation
6.5.3 The process representation
6.5.4 Stochastic optimization methods for computation of optimal control policies in NMPC
6.5.4.1 Optimal control policy computation using genetic algorithm
6.5.4.2 Optimal control policy computation using SA
6.5.5 Analysis of results
6.6 Summary
References

6.1 Introduction
The chemical industry is experiencing significant changes due to global market competition, strict bounds on product specifications, pricing pressures, and environmental issues. This has forced the industry to enhance its performance through modifications in design and operating procedures and through the application of methods and tools aimed at reducing costs, improving efficiency, and increasing profitability. Optimization is the most important approach that addresses the performance issues related to several areas of chemical process engineering. The applications of optimization can be found in almost all areas of chemical engineering, including process design, process development, process modeling, process identification, process control, and real-time process operation. Optimization is also used in process synthesis, experimental design, planning, scheduling, distribution, and integration of process operations. Apart from the existing classical optimization methods, various global optimization methods have emerged to solve complex chemical engineering problems. Global optimization methods can be broadly classified into deterministic and stochastic methods [1]. The deterministic methods can guarantee finding the global optimum of the objective function [2-10]. However, these strategies often require high computational time, and in certain cases they may require reformulation of the optimization problem. Stochastic evolutionary optimization methods, on the other hand, are robust numerical techniques that require reasonable computational effort for solving multivariable optimization problems. Most chemical process engineering problems exhibit highly nonlinear dynamics and often present nonconvexity, discontinuity, and multimodality. Because of their robustness and efficiency, stochastic optimization techniques are widely used to solve complex engineering problems related to the analysis, design, modeling, and operation of chemical processes that are highly nonlinear and high dimensional, or problems that are not easily solved by classical deterministic methods of optimization. Diversified applications have been reported in the chemical engineering domain for various stochastic and evolutionary optimization methods such as genetic algorithm (GA) [11-29], simulated annealing (SA) [17,30-40], differential evolution (DE) [41-54], ant colony optimization [55-69], tabu search (TS) [47,70-78], particle swarm optimization [79-89], artificial bee colony (ABC) algorithm [90-96], and cuckoo search algorithm [97-102]. Various real applications of stochastic, evolutionary, and artificial intelligence-based optimization strategies to different chemical processes are presented in the subsequent sections of this chapter.

6.2 Process model-based multistage dynamic optimization of a copolymerization reactor using differential evolution
A multistage dynamic optimization strategy with a sequential implementation procedure is presented and implemented using DE for optimal control of a semibatch styrene acrylonitrile (SAN) copolymerization reactor.

6.2.1 Optimal control and its importance in polymerization reactors
The determination of open-loop time-varying control policies that maximize or minimize a given performance index is referred to as optimal control or dynamic optimization. In a process, the optimal control policies that ensure satisfaction of the product property requirements and the operational constraints can be calculated offline and implemented online, so that the process is operated in accordance with these policies. Ensuring product quality is a major issue in polymerization processes, as the molecular or morphological properties of a polymer product strongly affect its physical, chemical, thermal, rheological, and mechanical properties as well as its applications. In free-radical copolymerization, the different reactivities of the comonomers can cause composition drift unless the more reactive monomer is added to the reactor continuously to maintain a constant mole ratio. The copolymer composition influences the end-use properties of the final product with respect to its flexibility, strength, and glass transition temperature, whereas the molecular weight (MW) and molecular weight distribution (MWD) affect important end-use properties such as viscosity, elasticity, strength, toughness, and solvent resistance. Efficient operation of a polymerization reactor can produce a polymer with better product characteristics, and optimal control can determine the course of action for operating the process effectively. Hence, the determination of optimal control trajectories is extremely important for the design and operation of polymerization reactors. There has been great interest in the optimal control/dynamic optimization of polymerization reactors, and various methods have been reported in the past [103-107]. Most studies on the optimal control of polymerization reactors are based on classical methods of solution such as Pontryagin's maximum principle [108], which has been applied to solve the optimal control problems of different polymerization reactors [109-114]. However, the limitations of classical methods for optimal control of polymerization reactors have been discussed in the literature [115]. Stochastic and evolutionary optimization methods are advantageous over conventional gradient-based search procedures because they are capable of finding the global optima of multimodal functions and of searching design spaces with disjoint feasible regions. In this section, a multistage dynamic optimization methodology with a sequential implementation procedure is presented for optimal control of polymerization reactors. The evolutionary optimizing features of DE are exploited by implementing this methodology on a semibatch SAN copolymerization reactor.

6.2.2 Optimal control problem
The problem is to establish the time-varying open-loop control policies that maximize or minimize the objective function, which is expressed as a function of the process variables and their changes. The objective can be stated either as achieving a desired product quality or as maximizing the product yield of a process. The general open-loop optimal control problem of a lumped parameter batch/semibatch process with fixed terminal time can be stated as follows. Find a control vector u(t) over t ∈ [t0, tf] to maximize (minimize) a performance index J(x, u):

J(x, u) = ∫_{t0}^{tf} φ[x(t), u(t), t] dt   (6.1)

subject to

dx(t)/dt = f(x(t), u(t), t),  x(t0) = x0   (6.2)
h[x(t), u(t)] = 0   (6.3)
g[x(t), u(t)] ≤ 0   (6.4)
xL ≤ x(t) ≤ xU   (6.5)
uL ≤ u(t) ≤ uU   (6.6)

where J is the performance index, x is the vector of state variables, and u is the vector of control variables. Eq. (6.2) is the system of ordinary differential equations with its initial conditions, Eqs. (6.3) and (6.4) are the equality and inequality algebraic constraints, and Eqs. (6.5) and (6.6) impose lower and upper bounds on the state and control variables.

6.2.3 Multistage dynamic optimization strategy
A multistage problem arises as the natural extension of a single-stage optimization to a system of more stages in which the output from one stage is the input to the subsequent stage. The procedure for multistage optimization can be found elsewhere [116]. Multistage optimization problems require special techniques to break them into computationally manageable units. In multistage dynamic optimization, the optimal control problem is considered by dividing the entire batch duration into a finite number of time instants referred to as discrete stages. The control variables and the corresponding state variables that satisfy the objective function are evaluated in a stage-wise manner. The procedure for sequential implementation of the multistage dynamic optimization strategy is described as follows. Discretize the process into N stages and define f^i, u^i, and x^i as the objective function, control vector, and state vector, respectively, for stage i. The procedure adopted for solving the optimal control problem is similar to that of dynamic programming based on the principle of optimality [117], and it is briefed in the following steps:

1. The optimum value of the objective function, f^1[x^1], for stage 1, driven by the best control vector u^1 along with the state vector x^1, is represented as

f^1[x^1] = min over u^1 of f_o^1(x^1, u^1)   (6.7)

2. The value of the objective function, f^2[x^2], for stage 2 is determined based on the best control vector u^2 along with the state vector x^2 as given by

f^2[x^2] = f^1[x^1] + min over u^2 of f_o^2(x^2, u^2)   (6.8)

3. Recursive generalization of the above procedure for the kth stage is represented by

f^k[x^k] = f^(k-1)[x^(k-1)] + min over u^k of f_o^k(x^k, u^k)   (6.9)

In this procedure, f_o^k represents the performance index of stage k.

6.2.4 The polymerization process and its mathematical representation
In this work, the process of solution copolymerization of styrene and acrylonitrile in a semibatch reactor is considered for sequential implementation of the dynamic optimization strategy. Xylene and AIBN are used as the solvent and initiator. The feed is a mixture of monomers, solvent, and initiator and enters the reactor in semibatch mode. The initial volume of the reactor is 1.01 L, and the initial design parameters are the solvent mole fraction fs = 0.25 and initiator concentration I0 = 0.05 mol/L. The mole ratio of monomers in the feed, M1/M2, is 1.5, where M1 and M2 are the molar concentrations of the unreacted monomers (styrene and acrylonitrile). The following kinetic model is used to describe the homogeneous solution free-radical copolymerization of styrene with acrylonitrile [118].

Initiation:
I → 2R (kd)
R + M1 → P1,0 (ki1)
R + M2 → Q0,1 (ki2)   (6.10)

Propagation:
Pn,m + M1 → Pn+1,m (kp11)
Pn,m + M2 → Qn,m+1 (kp12)
Qn,m + M1 → Pn+1,m (kp21)
Qn,m + M2 → Qn,m+1 (kp22)   (6.11)

Combination termination:
Pn,m + Pr,q → Mn+r,m+q (ktc11)
Pn,m + Qr,q → Mn+r,m+q (ktc12)
Qn,m + Qr,q → Mn+r,m+q (ktc22)   (6.12)

Disproportionation termination:
Pn,m + Pr,q → Mn,m + Mr,q (ktd11)
Pn,m + Qr,q → Mn,m + Mr,q (ktd12)
Qn,m + Qr,q → Mn,m + Mr,q (ktd22)   (6.13)

Chain transfer:
Pn,m + M1 → Mn,m + P1,0 (kf11)
Pn,m + M2 → Mn,m + Q0,1 (kf12)
Qn,m + M1 → Mn,m + P1,0 (kf21)
Qn,m + M2 → Mn,m + Q0,1 (kf22)   (6.14)

Here Pn,m represents a growing polymer chain with n units of monomer 1 (styrene) and m units of monomer 2 (acrylonitrile) with monomer 1 at the chain end; similarly, Qn,m represents a growing copolymer chain with monomer 2 at the end, and Mn,m denotes an inactive (dead) polymer. The copolymer MW and MWD are computed using the three leading moments of the total number MWD of the copolymer. The instantaneous kth moment is given by

λkd = Σ(n=1 to ∞) Σ(m=1 to ∞) (n w1 + m w2)^k Mn,m,  k = 0, 1, 2, ...   (6.15)

where w1 and w2 are the MWs of styrene and acrylonitrile, respectively. The total number average chain length (Xn), the total weight average chain length (Xw), and the polydispersity index (PD) are expressed as

Xn = λ1d/λ0d,  Xw = λ2d/λ1d,  PD = Xw/Xn   (6.16)

The modeling equations of the semibatch copolymerization reactor are given as follows:

dM1/dt = u(M1f − M1)/V − [(kp11 + kf11)P + (kp21 + kf21)Q]M1
dM2/dt = u(M2f − M2)/V − [(kp22 + kf22)Q + (kp12 + kf12)P]M2
dI/dt = u(If − I)/V − kd I
dV/dt = u   (6.17)

The live polymer moments are given by

P = {2 f kd I/[(ktc11 + ktd11) + 2b(ktc12 + ktd12) + b^2(ktc22 + ktd22)]}^(1/2)   (6.18)

where b = (kp12 + kf12)/[(kp21 + kf21)F] and F = M1/M2. Pseudo-steady-state approximation for the live polymers leads to the following live polymer moment equations:

P1 = [w1 C1 a1 + a1 g Q1/r1 + w1(a1 P + a1 g Q/r1)]/(1 − a1)
Q1 = [w2 C2 a2 + a2 P1/(g r2) + w2(a2 Q + a2 P/(g r2))]/(1 − a2)
P2 = [w1^2 C1 a1 + a1 g Q2/r1 + 2w1 a1 P1 + 2w1 a1 g Q1/r1 + w1^2(a1 P + a1 g Q/r1)]/(1 − a1)
Q2 = [w2^2 C2 a2 + a2 P2/(g r2) + 2w2 a2 P1/(g r2) + 2w2 a2 Q1 + w2^2(a2 P/(g r2) + a2 Q)]/(1 − a2)   (6.19)

where

C1 = [(kf11 P + kf21 Q)M1]/(kf11 M1),  C2 = [(kf22 Q + kf12 P)M2]/(kf22 M2)
r1 = kp11/kp12,  r2 = kp22/kp21,  g = kp21/kp12
a1 = kp11 M1/[(kp11 + kf11)M1 + (kp12 + kf12)M2 + (ktc11 + ktd11)P + (ktc12 + ktd12)Q]
a2 = kp22 M2/[(kp22 + kf22)M2 + (kp21 + kf21)M1 + (ktc22 + ktd22)Q + (ktc12 + ktd12)P]

The moment equations for the dead polymers are given by

dλ0d/dt = (ktc11/2 + ktd11)P^2 + (ktc22/2 + ktd22)Q^2 + (ktc12 + 2ktd12)PQ + (kf11 M1 + kf12 M2)P + (kf22 M2 + kf21 M1)Q − (λ0d/V)u
dλ1d/dt = (ktc11 P + ktd11 P + ktc12 Q + ktd12 Q + kf11 M1 + kf12 M2)P1 + (ktc22 Q + ktd22 Q + ktc12 P + ktd12 P + kf22 M2 + kf21 M1)Q1 − λ1d u/V
dλ2d/dt = (ktc11 P + ktd11 P + ktc12 Q + ktd12 Q + kf11 M1 + kf12 M2)P2 + (ktc22 Q + ktd22 Q + ktc12 P + ktd12 P + kf22 M2 + kf21 M1)Q2 + ktc11 P1^2 + ktc22 Q1^2 + 2ktc12 P1 Q1 − λ2d u/V   (6.20)

The rate constants kd, kfij, kpij, ktcij, and ktdij are Arrhenius functions of temperature. The instantaneous copolymer composition F1 is determined by the relative reactivities of the monomers (r1 and r2) and the bulk-phase monomer mole fractions (f1 and f2) as given by

F1 = (r1 f1^2 + f1 f2)/(r1 f1^2 + 2 f1 f2 + r2 f2^2)   (6.21)

or, in terms of the monomer mole ratio,

F1 = (r1 F^2 + F)/(r1 F^2 + 2F + r2)   (6.22)

The conversion of monomer 1 (styrene) is defined as

x1 = [V0 M10 + ∫_0^t u(t) M1f dt − V M1(t)]/[V0 M10 + ∫_0^t u(t) M1f dt]   (6.23)

0

The notation in the above equations are given as If ¼ initiator concentration in feed; kd ¼ initiator decomposition rate constant; kfij ¼ chain transfer rate constant of species i,j; kpij ¼ propagation rate constant of species i,j; ktcij ¼ combination termination rate constant of species i,j; ktdij ¼ disproportionation termination rate constant of species i,j; Mif ¼ monomer concentration in reaction in feed of species i; P ¼ total growing polymer concentration of type 1 [mol/l]; Pi ¼ moment of total number MWD of radicals of type 1; Q ¼ total growing polymer concentration of type 2; Qi ¼ moment of total number MWD of radicals of species i; Rp ¼ copolymerization reaction rate; ri,j ¼ monomer reactivity ratio; ui ¼ manipulated variable of species i; V ¼ reactor volume; wi ¼ MW of monomer of species i; f ¼ molar ratio of monomer; ff ¼ monomer mole ratio in feed stream; fs ¼ desired value of molar ratio of monomer in reaction mixture, and ft ¼ crosstermination factor. This model is in the general form of Eq. (6.2), where the set of state variables X and the control vector U are given by XðtÞ ¼ ½M1 ðtÞ; M2 ðtÞ; IðtÞ; VðtÞ; l0 ðtÞ; l1 ðtÞ; l2 ðtÞT

(6.24)

T

and UðtÞ ¼ ½TðtÞ; uðtÞ In the above equations, I is the concentration of the unreacted initiator, V is the volume of the reaction mixture, lk (k ¼ 0, 1,.) is the kth moment of MWD of the dead copolymer, T is the temperature of the reaction mixture, and u is the volumetric flow rate of the feed mixture to the reactor. More details of reaction kinetics and numerical data pertaining to this system can be referred elsewhere [118].

6.2.5 Control objectives
The desired values of the copolymer composition (F1D) and the number-average molecular weight (MWD) are chosen as 0.58 and 30,000, respectively. The objectives are specified as minimizing the deviations of the copolymer composition, F1, and the molecular weight, MW, from their respective desired values over the entire span of the reaction. To satisfy these objectives, the monomer addition rate, u, and the reactor temperature, T, are selected as control variables. If only one polymer quality parameter is controlled by manipulating one control variable, the uncontrolled property parameters may deviate from their desired values as the reaction proceeds. The optimal control problem is therefore considered as optimizing the single objectives as well as both objectives simultaneously. The objectives of SAN copolymerization are formulated as

J1 = [1 − MW(t)/MWD]^2   (6.25)
J2 = [1 − F1(t)/F1D]^2   (6.26)
J3 = [1 − F1(t)/F1D]^2 + [1 − MW(t)/MWD]^2   (6.27)

Here MW and F1 are the molecular weight and copolymer composition, MWD and F1D are their respective desired values, and the notation t refers to discrete time. The hard constraints are set as

0 ≤ u(t) ≤ 0.07 (L/min)
320 ≤ T(t) ≤ 368 (K)
V(t) ≤ 4.0 L   (6.28)

These constraints on the operating variables are chosen based on the requirements for reaction rate, heat transfer limitation, and reactor safety. Determination of the temperature policy, T, to satisfy J1 (Eq. 6.25) and of the monomer feed policy, u, to satisfy J2 (Eq. 6.26) is considered as two single-objective optimization problems. Determination of both the u and T policies to satisfy J3 (Eq. 6.27) is considered as a problem of simultaneous optimization.

6.2.6 Multistage dynamic optimization of SAN copolymerization process using DE
DE is a population-based search algorithm for global optimization of real-valued functions. Its robustness and effectiveness have been demonstrated in a variety of applications [41-54]. The DE algorithm and its basic optimization applications are given in Sections 4.4 and 5.5 of this book. In this work, DE is designed and implemented for multistage dynamic optimization of the SAN copolymerization reactor. The optimal control problem of the SAN copolymerization reactor is considered as a problem of dynamic optimization by dividing the entire span of the reaction time into a finite number of time instants, referred to as discrete stages. The total duration of the reaction is fixed as 300 min. Thus, the u and T profiles within the ranges of their constraints for the entire duration of the reaction are equally discretized into 19 stages having 20 time points. These discrete control sequences for feed flow and temperature are given by

u = [u1, u2, ..., u20]^T   (6.29)
T = [T1, T2, ..., T20]^T   (6.30)

The sequential implementation of the DE strategy to compute the optimal control policies of the SAN copolymerization reactor is given in the following steps (a code sketch follows the list):

1. Divide the time interval tf into P stages, each of length L. Set the DE parameters D, NP, CR, and genmax, where D refers to the number of control inputs and genmax refers to the maximum number of generations.
2. Initialize the vectors of the population representing the control input randomly for all stages:

X(i,j) = rand_j(0,1) · (b(j,U) − b(j,L)) + b(j,L),  i = 1, ..., NP; j = 1, ..., D   (6.31)

where b(j,L) and b(j,U) are the lower and upper limits of each control input j.

3. Set the stage index P = 1 and the stage time t = t1.
4. Set the generation index q = 1.
5. Integrate the process model representing the polymerization reactor from the initial time to the end time of the current stage for each of the NP population members of the control input (u or T), and evaluate the objective function values at the end of the stage.
6. Perform mutation, crossover, and selection based on the objective function values of the trial and target vectors.
7. Set q = q + 1. Go to step 5 and continue the procedure.
8. Determine the optimal control input u*(P) based on convergence of the objective function or satisfaction of the specified number of generations. Store the converged values of the control input, state vector, and objective function at the end of the current stage for use in the next-stage calculation.
9. Set the stage number P = P + 1 and the stage time t = t2. Go to step 4 and continue the procedure. The evaluation of objective function values for this stage is based on the model integration using the optimal control input for the first-stage duration, along with the integration for the NP control input population of the second stage using the optimal state at the first-stage end as the starting point.
10. Go to step 4 and repeat the procedure for all stages.

The resulting control input of each stage represents the optimal trajectory. The implementation scheme for multistage dynamic optimization of the copolymerization reactor using DE is shown in Fig. 6.1.
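Structurally, steps 1-10 amount to wrapping a DE loop around a stage-wise model integration. The following MATLAB sketch shows that structure for the feed-rate policy only; the handles reactorODE (standing in for the model of Section 6.2.4) and stageCost (standing in for the objectives of Eqs. (6.25)-(6.27)), as well as the initial state x0, are hypothetical placeholders, not the authors' code.

```matlab
% Stage-wise DE loop for the feed-rate (u) policy; reactorODE, stageCost,
% and x0 are assumed placeholders for the process model and objective.
P = 19; L = 15;                          % 19 stages of 15 min each
NP = 20; F = 0.4; CR = 0.6; genmax = 50; % DE settings reported for the u policy
uL = 0; uU = 0.07;                       % feed-rate bounds from Eq. (6.28)
ustar = zeros(P,1); xstage = x0;         % optimal inputs and stage-start state
for p = 1:P
    U = uL + rand(NP,1)*(uU - uL);       % Eq. (6.31), one input per stage
    J = zeros(NP,1);
    for i = 1:NP
        [~, x] = ode45(@(t,x) reactorODE(t, x, U(i)), [0 L], xstage);
        J(i) = stageCost(x(end,:));      % objective at the stage end
    end
    for q = 1:genmax                     % steps 4-7: DE generations
        for i = 1:NP
            r = randperm(NP,4); r(r==i) = []; r = r(1:3);
            v = U(r(1)) + F*(U(r(2)) - U(r(3))); % mutation (crossover is
            v = min(max(v, uL), uU);             % degenerate for one input)
            [~, x] = ode45(@(t,x) reactorODE(t, x, v), [0 L], xstage);
            Jv = stageCost(x(end,:));
            if Jv <= J(i), U(i) = v; J(i) = Jv; end  % selection
        end
    end
    [~, ib] = min(J); ustar(p) = U(ib);  % step 8: best input for stage p
    [~, x] = ode45(@(t,x) reactorODE(t, x, ustar(p)), [0 L], xstage);
    xstage = x(end,:)';                  % carry optimal state to next stage
end
```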

6.2.7 Analysis of results
For implementation of the strategy, the control input at the beginning of the first stage is chosen at the lower bound of the input space. The process model is integrated for the duration of 15 min with a time step of 1 min from the beginning to the end of the first stage, and the objective function values are evaluated for all the control input population at the end of this stage. The optimal control policies are determined by setting the parameters of DE as NP = 100, CR = 0.6, and F = 0.7 for the T policy; NP = 20, CR = 0.6, and F = 0.4 for the u policy; and NP = 400, CR = 0.6, and F = 0.5 for the combined T and u policies. The DE operations of mutation, crossover, and selection are implemented iteratively to minimize the objective function and to determine the best control input for this stage. For the second-stage solution, a random population is generated for the control input at the end of the second stage. The model is integrated from the beginning to the end of the first stage based on the optimal control input of the first stage, and from the beginning to the end of the second stage based on the control input population of the second stage. Iterative convergence of the objective function yields the optimal control input for the second stage. This procedure is continued until the end of the last stage.

FIGURE 6.1 Multistage dynamic optimization of copolymerization reactor using differential evolution (DE) strategy.

FIGURE 6.2 Dual control policies and objectives: (A) and (C) are the optimal control policies (reactor temperature in K and feed rate in mL/min against time in seconds); (B) and (D) are the corresponding objectives (molecular weight in g/mol and copolymer composition against time in seconds).

The control input values thus determined at the end of each stage represent the optimal control policy. The single control policies for T and u are determined while satisfying the objective functions specified in Eqs. (6.25) and (6.26), respectively. For simultaneous optimization, the DE implementation at each stage considers the combination of the control input populations of T and u and determines the control policies that minimize the objective function defined in Eq. (6.27). The optimal solution for both the single-control and the dual-control policies is achieved within 50 iterations. The dual control policies and the corresponding objectives determined by this strategy are shown in Fig. 6.2, and the process responses of polydispersity and conversion corresponding to the dual control policies are shown in Fig. 6.3. These results show that both the temperature and feed rate policies of DE maintain MW and F1 close to their desired values. DE extracts distance and direction information from the current control input vectors and adds random deviation for diversity to generate new parameter vectors through the operations of mutation, crossover, and selection. These features make DE effective in determining the optimal control policies to achieve the desired objectives. The computational efficiencies and the normalized absolute error values of the DE strategy are shown in Table 6.1.


FIGURE 6.3 Process responses corresponding to dual control policies: (A) polydispersity index, (B) conversion.

Table 6.1 Computational efficiencies of the DE strategy.

Control policy | Convergence time (s) | Implementation time (s) | Normalized absolute error
T              | 4.68                 | 22.92                   | 0.0284
u              | 6.39                 | 35.62                   | 0.01374
T and u        | 10.72                | 68.13                   | 0.03607

From these results, it is observed that the execution times and the normalized absolute error values of the DE strategy are reasonably low. A comparison of the DE-based multistage dynamic optimization strategy with an iterative dynamic programming (IDP)-based strategy shows the better performance of the DE-based strategy for optimal control of the SAN copolymerization reactor [116].


6.3 Process model-based multistage dynamic optimization of a copolymerization reactor using tabu search

A multistage dynamic optimization strategy based on the metaheuristic TS is derived and evaluated by applying it to a semibatch SAN copolymerization reactor.

6.3.1 Preliminaries

The TS algorithm, along with its implementation procedure, is given in Section 4.6. The problem statement, the multistage dynamic optimization strategy, the polymerization process and its mathematical representation, and the control objectives are given in the respective subsections of Section 6.2. The strategy for design and implementation of multistage dynamic optimization using TS is explained in the following section.

6.3.2 Multistage dynamic optimization of SAN copolymerization process using tabu search

TS is a metaheuristic problem-solving approach that incorporates adaptive memory and responsive exploration, allowing implementation procedures that can search the solution space economically and effectively. The procedure for sequential implementation of the multistage dynamic optimization strategy is given in Section 6.2.3. For implementation of the strategy for the SAN copolymerization reactor, the entire span of the reaction time is divided into a finite number of time instants called discrete stages. The total duration of the reaction is fixed at 300 min, divided into 19 stages defined by 20 time points, each stage having a duration of 15 min. Thus, the u and T profiles are equally discretized within the ranges of their constraints specified in Eq. (6.28). The forms of the discrete control sequences are given in Eqs. (6.29) and (6.30), respectively. Initially, the control vector is specified as a constant value at each of the discrete stages.

TS explores the search space of feasible solutions by a sequence of moves. The control input at the beginning of the first stage is chosen at the lower bound of the input space. The elements of TS for computing the optimal control policies are set as follows; the notation for the parameters of the TS algorithm can be found in Section 4.6 of Chapter 4. The sizes of both the recency- and frequency-based tabu lists are fixed at 50, chosen such that they forbid revisiting of unpromising solutions in the search process. An intensification strategy in the form of a sine function is employed, with a parameter value q = 4.0001 controlling the oscillation period of the sine function. An aspiration criterion based on a sigmoid function is employed, with parameters kcenter = 0.3 and s = 7/M, where M is the specified number of iterations. For MW optimization, 10 neighbors are generated at the end of the first stage by introducing random changes in the search space of T with an incremental variation of −0.4 to 0.4. The number of neighbors for each control input of each stage is set as 10. The process model is integrated for a duration of 15 min with a time step of 1 min from the beginning to the end of the first stage, and the objective function values are evaluated for all the generated neighbors at the end of this stage. The iterative convergence of TS establishes the best control input (T) along with its objective function value. This becomes the optimal control point for the first stage, which is then used as the starting point for the second-stage solution. For the second-stage solution, random neighbors are generated at the end of the second stage around the optimal T of the first stage. The model is integrated from the beginning to the end of the first stage based on the initial control point (T), and from the beginning to the end of the second stage for each of the neighbors generated at the end of the second stage. The optimal control inputs for successive stages until the end of the last stage are established in a similar manner. The control input values thus determined at the end of each stage represent the optimal control policy for T. For F1 optimization, 10 neighbors are generated at the end of the first stage with an incremental variation of −1.0 × 10⁻⁶ to 1.0 × 10⁻⁶. In an analogous manner, the optimal control policy for u (F1) is determined by adapting the same TS procedure as for the T policy. For establishing the dual control policies for T and u, multistage dynamic optimization by TS is carried out by considering incremental variations in the neighbor generation of T and u within the limits of −0.4 to 0.4 and −1.0 × 10⁻⁶ to 1.0 × 10⁻⁶, respectively. This case requires the evaluation of the objective function values for 100 neighbor combinations at each control point representing T and u. The implementation of the TS strategy for optimal control of the SAN copolymerization reactor is shown in Fig. 6.4.

FIGURE 6.4 Multistage dynamic optimization of copolymerization reactor using tabu search.
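The stagewise move cycle described above can be sketched as follows. This is a simplified illustration: a plain recency-based tabu list stands in for the recency- and frequency-based lists, sine-function intensification, and sigmoid aspiration criterion of Section 4.6, and objective(u) is assumed to wrap the model integration up to the end of the current stage.

```python
import numpy as np

def tabu_stage_search(objective, u_init, step=0.4, n_neighbors=10,
                      tabu_size=50, max_iter=100, tol=1e-6, seed=0):
    """One stage of the TS-based multistage optimization: starting from the
    previous stage's optimum, explore random neighbors within +/- step,
    skip recently visited (tabu) points, and move to the best admissible
    neighbor."""
    rng = np.random.default_rng(seed)
    current = u_init
    best, best_cost = current, objective(current)
    tabu = []                                    # recency-based tabu list
    for _ in range(max_iter):
        neighbors = current + rng.uniform(-step, step, n_neighbors)
        admissible = [u for u in neighbors
                      if all(abs(u - t) > tol for t in tabu)]
        if not admissible:
            continue
        costs = [objective(u) for u in admissible]
        i_best = int(np.argmin(costs))
        current = admissible[i_best]             # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                          # forget the oldest entry
        if costs[i_best] < best_cost:
            best, best_cost = current, costs[i_best]
    return best, best_cost
```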

FIGURE 6.5 Dual control policies and objectives: (A) & (C) are optimal control policies (temperature, K; feed rate, mL/min); (B) & (D) are objectives (molecular weight; composition), plotted against time (s).

6.3.3 Analysis of results

TS is designed and applied to determine the optimal control policies that satisfy the individual and multiple objectives of the SAN copolymerization reactor. The dual control policies of T and u determined by TS for multistage dynamic optimization of the polymerization reactor, along with the objective function values, are shown in Fig. 6.5. The process responses of polydispersity and conversion corresponding to the dual control policies of this method are shown in Fig. 6.6. These results show that both the temperature and feed rate policies of TS maintain MW and F1 almost at their desired values. The computational efficiencies of TS are studied in terms of execution times, normalized absolute error values, and memory storage requirements, as shown in Table 6.2. A comparison of the TS-based multistage dynamic optimization strategy with the IDP-based strategy shows improved performance of the TS-based strategy for optimal control of the SAN copolymerization reactor [119].

FIGURE 6.6 Process responses due to dual control policies: (A) polydispersity index, (B) conversion.

6.4 Optimization of multiloop proportional–integral controller parameters of a reactive distillation column using genetic algorithm

In this work, a GA-based autotuning method is presented to design a decentralized proportional–integral (PI) control system for composition control of a highly interactive and nonlinear reactive distillation column. The objective of GA tuning is to account for the multivariable interactions and nonlinear dynamics of the process and to optimize a unique set of parameters for the control system that is robust to all kinds of disturbances.

Table 6.2 Computational efficiencies of the tabu search (TS) strategy.

Control policy | Convergence time (s) | Implementation time (s) | Normalized absolute error | Memory storage (k)
T              | 1.89                 | 16.83                   | 0.000566                  | 2224
u              | 2.19                 | 13.81                   | 0.01379                   | 2076
T and u        | 4.25                 | 19.21                   | 0.007535                  | 3084

6.4.1 The need for an evolutionary algorithm for optimization of multiloop controller parameters

PI and proportional–integral–derivative (PID) controllers are extensively used in industrial control systems, and decentralized (multiloop) PI/PID controllers are used in many multiinput–multioutput (MIMO) processes. The main advantages of these controllers are their simple structure, fast design, and easily understood principle, and they provide satisfactory performance for many MIMO systems. Most of the tuning techniques used for these controllers rely on transfer function models to obtain the response parameters, which are then substituted into Ziegler–Nichols-type tuning rules to determine the controller parameters. However, as the complexity of a multivariable process increases, its nonlinear dynamics change to a great extent, making conventional controller tuning extremely difficult. The true dynamics of processes that exhibit severe nonlinearities and loop interactions cannot be captured by transfer function models, and tuning procedures that rely on such models can lead to improper controller design. Because controller tuning plays a significant role in improving the performance of highly nonlinear and interactive processes, an efficient tuning method is required for optimizing the parameters of the controllers involved in such processes. Thus, it is desirable to design a controller for a complex nonlinear process by assessing control-relevant characteristics such as nonlinearities, interactions, and stability. Advanced/evolutionary optimization algorithms, with their global optimization ability, can be effective in optimizing the controller parameters of highly nonlinear MIMO processes.

6.4.2 The process and its characteristics

The process considered is olefin metathesis, which finds wide application in the petrochemical industry. In this process, an olefin is converted into lower and higher MW olefin products. When a reactive distillation column is designed for simultaneous reaction and separation of top and bottom products with high purities, the column typically exhibits very strong interactions and nonlinear behavior. Such high-purity reactive separations impose constraints on the column operation that make the controller design crucial. Therefore, it is important to explore the characteristics of the process operation in terms of its nonlinearities, interactions, and stability. The olefin metathesis reaction involves reacting 2 moles of 2-pentene to form 1 mole each of 2-butene and 3-hexene. The normal boiling points of the components in this reaction allow an easy separation between the reactant 2-pentene (310 K), the top product 2-butene (277 K), and the bottom product 3-hexene (340 K). The details concerning the nonlinear analysis, the interaction and stability aspects of the system, and the mathematical model of the nonlinear metathesis reactive distillation column are reported elsewhere [120]. The schematic of the composition control scheme for the metathesis reactive distillation column is shown in Fig. 6.7.

6.4.3 Controller design using genetic algorithms

The aim is to present a simple and effective composition control scheme for a nonlinear multivariable reactive distillation column involved in the production of high-purity components. Because controller tuning has a significant influence on the performance of the process, an automatic and robust tuning method based on GAs is presented here for designing the multiloop PI controllers. To find a unique set of parameters that satisfies the performance of the controllers under various disturbance conditions, the tuning problem is treated as an optimization problem by formulating an objective function and evaluating the parameters of the controllers by minimizing the function through the GA search procedure. The highly nonlinear and multimodal nature of the objective function, together with the lack of derivative information, motivates the use of GA to solve this optimization problem. Because of its high potential for global optimization, GA is recognized as a powerful tool in many control-oriented applications such as parameter identification and control system design [121,122].

6.4.3.1 Composition estimation

As shown in Fig. 6.7, the controller has to maintain the top and bottom product compositions at their desired values in spite of disturbances occurring in the process. Because composition measurements from instruments/analyzers are not available to the controllers in a timely manner, a state estimator based on an extended Kalman filter is designed to infer the compositions for the controllers. The state estimator, supported by a simplified dynamic model of the reactive distillation, takes the optimally configured temperature measurements as its inputs and provides estimates of the compositions at every time instant. The details of the composition estimator and the procedure for optimal configuration of temperature measurements to the estimator, based on an empirical observability gramian methodology, are reported elsewhere [120,123]. Based on this methodology, the temperatures of trays 3 and 12 are found to be the best measurements, and these are used for composition estimation in the metathesis reactive distillation column.

FIGURE 6.7 Inferential control scheme for metathesis reactive distillation.


6.4.3.2 Multiloop proportional–integral controllers

The PI controllers for the top and bottom product compositions of the reactive distillation column are defined by

$$R(t) = R_0 + k_{1D}\left(x_D(t) - x_{D,set}\right) + k_{2D}\int_0^{t}\left(x_D(s) - x_{D,set}\right)ds \tag{6.32}$$

$$Q_r(t) = Q_{r0} + k_{1B}\left(x_B(t) - x_{B,set}\right) + k_{2B}\int_0^{t}\left(x_B(s) - x_{B,set}\right)ds$$

where R(t) and Qr(t) are the manipulated reflux flow rate and reboiler heat load, with R0 and Qr0 as their initial steady-state values. The tuning parameters k1D and k2D are the proportional and integral constants of the top loop, and k1B and k2B are the corresponding parameters of the bottom loop.
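In discrete time, the control law of Eq. (6.32) can be sketched as below; the proportional and integral constants in the usage line are the GA-tuned top-loop values reported in Section 6.4.4, while the bias u0, set point, and sampling interval are illustrative assumptions.

```python
def pi_controller(k1, k2, u0, setpoint, dt):
    """Discrete-time form of the PI law in Eq. (6.32):
    u(t) = u0 + k1*(x - x_set) + k2 * integral of (x - x_set) dt."""
    integral = [0.0]                    # mutable accumulator for the integral
    def control(x):
        error = x - setpoint
        integral[0] += error * dt       # rectangular integration of the error
        return u0 + k1 * error + k2 * integral[0]
    return control

# illustrative top-loop (reflux) controller; u0, setpoint, and dt are assumed
reflux_control = pi_controller(k1=487.67, k2=5431.01, u0=200.0,
                               setpoint=0.95, dt=2.0)
```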

6.4.3.3 Formulation of objective function

The objective function f(x) for the GA search is formulated such that it quantifies the controller performance by accounting for the nonlinear dynamics of the process operating under different disturbance modes. For the PI controller tuning problem, x in the objective function f(x) represents the proportional and integral parameters of both the top and bottom controllers. Thus, the function f(x), denoted as $J_o$, is defined by

$$J_o = f(k_{1D}, k_{2D}, k_{1B}, k_{2B}) \tag{6.33}$$

For n process disturbance modes, $J_o$ is evaluated as

$$J_o = \sum_{i=1}^{n} J_i \tag{6.34}$$

where $J_i$ is the performance criterion corresponding to the ith disturbance mode, given by the weighted sum of the individual loop performances:

$$J_i = w_D J_i^D + w_B J_i^B \tag{6.35}$$

Here, $J_i^D$ and $J_i^B$ are the top and bottom loop objectives, and $w_D$ and $w_B$ are the weightages assigned to them.

6.4.3.4 Desired response specifications for controller tuning

The desired response for each disturbance condition is specified based on characteristics such as peak overshoot, undershoot, settling time, response time, and offset. The time-bound limits for the desired response trajectories are set such that any response falling within these bounds can be considered a desired response. The objective function in the GA controller tuning problem can then be expressed explicitly in terms of the response deviations from the prespecified time-bound limits for each disturbance condition. The individual objectives in Eq. (6.35) are evaluated as

$$J_i^D = f(k_{1D}, k_{2D}) = \sum_{t=0}^{t_{limit}} \left[\max\left(LL_i^D(t) - x_D(t),\, 0\right) + \max\left(x_D(t) - UL_i^D(t),\, 0\right)\right]\Delta t$$

$$J_i^B = f(k_{1B}, k_{2B}) = \sum_{t=0}^{t_{limit}} \left[\max\left(LL_i^B(t) - x_B(t),\, 0\right) + \max\left(x_B(t) - UL_i^B(t),\, 0\right)\right]\Delta t \tag{6.36}$$

Here, $LL_i^D$, $UL_i^D$, $LL_i^B$, and $UL_i^B$ are user-defined continuous functions specifying the lower and upper bounds for the top and bottom product compositions for the ith disturbance case, and their corresponding time limits are $t_1$, $t_2$, and $t_3$. These limits are specified by studying the response characteristics of the respective disturbance condition. Because of the interactive nature of the column, any process disturbance produces an opposite effect in the two responses. Based on the analysis of the responses resulting from the process model, the time-bound limits that specify the lower and upper boundaries for the responses of the individual disturbance cases can be set as

$$\begin{aligned}
UL(t) &= x_{set} + \lambda_1, & LL(t) &= x_{set} - \lambda_2, & t &< t_1 \\
UL(t) &= x_{set} + \lambda_3, & LL(t) &= x_{set} - \lambda_4, & t_1 &< t < t_2 \\
UL(t) &= x_{set} + \lambda_5, & LL(t) &= x_{set} - \lambda_6, & t_2 &< t < t_3 \\
UL(t) &= x_{set} + \lambda_7, & LL(t) &= x_{set} - \lambda_7, & t &> t_3
\end{aligned} \tag{6.37}$$

The above equations give the general representation of the time-bound upper and lower limits for the top and bottom loop responses. For the ith disturbance condition, these bounds for the desired responses of the top and bottom loops are denoted by $UL_i^D$, $LL_i^D$ and $UL_i^B$, $LL_i^B$, respectively. The magnitudes of $\lambda_1$–$\lambda_7$ and the time limits $t_1$–$t_3$ are chosen from the shape of the desired response curve. More specifically, $\lambda_1$–$\lambda_4$ depend on the allowable peak overshoot and undershoot, $\lambda_5$ and $\lambda_6$ signify the decay ratio, and $\lambda_7$ stands for the allowable offset value. The time limits $t_1$, $t_2$, and $t_3$ depend on the rise time, response time, and settling time of the desired response. These limits are specific to the individual response and vary for each disturbance case.
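A sketch of how the envelope of Eq. (6.37) and the excursion-based objective of Eq. (6.36) might be evaluated is given below; the function and variable names are illustrative.

```python
def bounds_profile(t, x_set, lams, t1, t2, t3):
    """Piecewise-constant envelope of Eq. (6.37); lams = (l1, ..., l7)."""
    l1, l2, l3, l4, l5, l6, l7 = lams
    if t < t1:
        return x_set - l2, x_set + l1
    if t < t2:
        return x_set - l4, x_set + l3
    if t < t3:
        return x_set - l6, x_set + l5
    return x_set - l7, x_set + l7

def loop_objective(times, x, x_set, lams, t1, t2, t3, dt):
    """Eq. (6.36): accumulate the excursion of the response x(t) outside
    the [LL, UL] envelope over the evaluation window."""
    J = 0.0
    for t, xt in zip(times, x):
        LL, UL = bounds_profile(t, x_set, lams, t1, t2, t3)
        J += (max(LL - xt, 0.0) + max(xt - UL, 0.0)) * dt
    return J
```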


6.4.3.5 Optimal tuning of controller parameters

The basic GA is described in Section 4.2, and its implementation to base case problems is illustrated in Section 5.3. With this background, GA is used to identify the parameters of the multiloop PI controllers of the metathesis reactive distillation column. For this parameter identification problem, each candidate solution in the GA population represents a set of four parameters of the two PI controllers. Binary coding is used for the parameter vector, with a string length of 48 bits comprising four substrings of 12 bits each. The initial population size is set as 40, and 200 generations are considered. The crossover and mutation probabilities are selected as 0.85 and 0.005, respectively. The population evolves through reproduction, crossover, and mutation operations to generate new populations with improved objective values. The GA procedure for computing the optimal tuning parameters of the PI controllers is illustrated as a flowchart in Fig. 6.8. The tuning procedure is carried out offline using the mathematical model of the metathesis reactive distillation. It takes into account the dynamic state information of the process for different disturbance conditions and incorporates this information in the performance function $J_o$ for evaluating the tuning parameters. For each set of updated controller parameters, evaluation of $J_o$ requires simulation of the closed-loop system for all disturbance conditions. For instance, in run i, corresponding to the ith disturbance case, the top and bottom responses are generated and the individual objectives $J_i$ are evaluated; $J_o$ is then obtained as the weighted summation of these individual objectives. For illustration, consider the response curves and time-bound limits in Fig. 6.9, which represent the case of a set point change in the top product composition. Subfigures (A) and (B) of Fig. 6.9 illustrate the set point tracking of the top loop response and the coupling rejection of the bottom loop response, respectively. The shaded area formed by the parts of the response curve that lie outside the specified bounds contributes to the objective function, which is minimized by the GA search procedure to determine the optimal parameters of the controllers. The shaded areas shown in subfigures (A) and (B) of Fig. 6.9 represent the individual loop objectives $J_i^D$ and $J_i^B$ for the respective disturbance condition; in the same way, the objectives can be obtained for a set point change in the bottom loop composition. These objectives quantify how far the actual response is from the desired response over the specified period of time.

FIGURE 6.8 Flowchart for genetic algorithm (GA)-based multiloop controller tuning.
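To make the binary coding concrete, the following sketch decodes a 48-bit chromosome into the four controller parameters; the parameter search ranges in the usage lines are assumed values for illustration only.

```python
import numpy as np

def decode_chromosome(bits, lower, upper, n_params=4, n_bits=12):
    """Decode a 48-bit binary string into four controller parameters
    (k1D, k2D, k1B, k2B): each 12-bit substring is mapped linearly onto
    the user-specified [lower, upper] search range of that parameter."""
    params = []
    for p in range(n_params):
        sub = bits[p * n_bits:(p + 1) * n_bits]
        value = int("".join(map(str, sub)), 2)     # 0 .. 2**12 - 1
        frac = value / (2 ** n_bits - 1)
        params.append(lower[p] + frac * (upper[p] - lower[p]))
    return params

# a random 48-bit chromosome decoded onto illustrative (assumed) ranges
rng = np.random.default_rng(0)
chromosome = rng.integers(0, 2, 48)
k1D, k2D, k1B, k2B = decode_chromosome(
    chromosome, lower=[0, 0, 0, 0], upper=[1e3, 1e4, 1e8, 1e8])
```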

FIGURE 6.9 Performance evaluation for GA tuning for xD set point change: (A) 2-butene, (B) 3-hexene. (Responses are shown against the upper and lower limit envelopes UL/LL of Eq. (6.37), together with the set value and the actual and desired responses.)


The function $J_o$, which is the sum of the individual objectives evaluated over the disturbance conditions, quantifies the deviation of the actual responses from the desired responses and is minimized through the GA search to establish the optimal values of the controller parameters. The parameters of the controllers are to be determined such that, for any disturbance condition, the process output response lies within the bounds specified for the desired response. Because the controller design accounts for the nonlinear dynamics of the process to establish a unique set of controller parameters under different disturbance conditions, the controller parameters are not specific to a particular disturbance and are valid for all types of disturbances.

6.4.4 Analysis of results

The GA search procedure identifies the tuning parameters of the top and bottom loop controllers as k1D = 487.67, k2D = 5431.01, k1B = 23,833,940, and k2B = 81,587,300. In the metathesis reactive distillation column, the composition of the top product, butene, is controlled by manipulating the reflux flow rate, and that of the bottom product, hexene, is controlled by manipulating the reboiler heat load. Fig. 6.10 compares the inferential measurement tracking efficiency of the GA-tuned PI controllers with the actual compositions for simultaneous set point changes in both the top and bottom product compositions. The servo performance, the regulatory performance, and the robustness of the GA-tuned PI controllers are also evaluated by considering uncertainties in the kinetic and thermodynamic model parameters. These results demonstrate the effectiveness and robustness of the GA-tuned PI controllers in the presence of uncertainties in the model parameters.

FIGURE 6.10 Closed loop responses of actual and estimated compositions for simultaneous set point changes in top and bottom product compositions: (A) & (B) are product compositions (xD and xB, mole fraction); (C) & (D) are manipulated variables (R, kmol/h; Q × 10⁵, kcal/h).

6.5 Stochastic optimization-based nonlinear model predictive control of reactive distillation column

In this work, stochastic optimization algorithms such as GA and simulated annealing (SA) are combined with a polynomial-type empirical process model to derive nonlinear model predictive control (NMPC) strategies. The performance of these strategies is evaluated by applying them to the single-input single-output control of an ethyl acetate reactive distillation column involving an esterification reaction with azeotropism.

6.5.1 The need for stochastic optimization methods in design of nonlinear control strategies

Conventional controllers are found to be inadequate for controlling highly nonlinear processes such as reactive distillation columns. In processes like reactive distillation, the interaction between the simultaneous reaction and distillation introduces much more complex behavior than in conventional processes and leads to challenging problems in design, optimization, and control. Advanced controllers such as nonlinear model predictive controllers can be effective for the control of complex nonlinear processes. However, the optimization algorithm is the most important factor dictating the performance of model predictive controllers used for nonlinear systems, and the practical usefulness of nonlinear predictive controllers is usually hampered by the unavailability of appropriate optimization algorithms. Although classical optimization algorithms exist to solve convex optimization problems, these problems often become nonconvex in the presence of nonlinear characteristics and constraints, leading to nonoptimal solutions. Moreover, classical optimization methods are sensitive to the initialization of the algorithm and usually yield unacceptable solutions due to convergence to local optima; they also cannot be expected to provide the global optimum for multimodal functions. Consequently, new optimization techniques are being explored to achieve efficient control performance. Stochastic search and optimization algorithms such as GA and SA, derived from the principles of natural phenomena, are useful for finding the global optimum of complex engineering problems. These algorithms are attractive because of their flexibility, ease of operation, and global perspective.

6.5.2 Nonlinear empirical model, predictive model, and objective function

Model predictive control involves a predictive model that predicts the process dynamics over a prediction horizon, enabling the controller to incorporate future set point changes or disturbances. It also involves an objective function, the minimization of which at every sampling instant provides the control inputs.

6.5.2.1 Polynomial ARMA model

The model considered in this study for identification of the nonlinear reactive distillation has a polynomial ARMA structure of the form

$$\hat{y}(k) = \theta_0 + \sum_{i=1}^{n_y} \theta_{1,i}\, y(k-i) + \sum_{i=1}^{n_u} \theta_{2,i}\, u(k-i) + \sum_{i=1}^{n_y+1} \theta_{3,i}\, y(k-i)\,u(k-i) + \sum_{i=1}^{n_u} \sum_{j=1}^{i+1} \theta_{4,ij}\, u(k-i)\,u(k-j) + \cdots \tag{6.38}$$

This model can be expressed as

$$\hat{y}(k) = f\left(y(k-1), \ldots, y(k-n_y),\, u(k-1), \ldots, u(k-n_u)\right) \tag{6.39}$$

Here k denotes the sampling instant, y and u are the output and input variables, and $n_y$ and $n_u$ are the numbers of output and input lags, respectively. This type of polynomial model structure has been used by various researchers for process control [124,125]. The model has the advantage that it represents the process nonlinearities in a structure that is linear in the model parameters, which can therefore be estimated using efficient parameter estimation methods such as recursive least squares. Thus, the model in Eq. (6.39) can be rearranged in the linear regression form

$$\hat{y}(k) = \theta^T(k-1)\,\varphi(k-1) + \varepsilon(k) \tag{6.40}$$

where $\theta$ is the parameter vector, $\varphi$ represents the input–output process information, and $\varepsilon$ is the estimation error.
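As an illustration of such an estimator, a textbook exponentially weighted recursive least squares update for the linear regression form of Eq. (6.40) is sketched below (a generic RLS, not code from the study). For the identified model of Eq. (6.53), the regressor would be φ(k−1) = [1, y(k−1), u(k−1), y(k−1)u(k−1), y(k−2)u(k−2), u(k−1)u(k−2)].

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially weighted RLS for the linear-in-parameters form of
    Eq. (6.40): y(k) = theta^T * phi(k-1) + eps(k)."""
    def __init__(self, n_params, forgetting=0.99, p0=1e4):
        self.theta = np.zeros(n_params)     # parameter estimates
        self.P = p0 * np.eye(n_params)      # covariance of the estimates
        self.lam = forgetting
    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        err = y - self.theta @ phi                            # a priori error
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        self.theta += K * err
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
        return self.theta
```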

6.5.2.2 Predictive model formulation

The polynomial input–output model provides a one-step-ahead prediction of the process output. By feeding back the model outputs and control inputs, the one-step-ahead predictive model can be recurrently cascaded to itself to generate future predictions of the process output. The N-step predictions are obtained as

$$\begin{aligned}
\hat{y}(k+1|k) &= f\left(y(k), \ldots, y(k+1-n_y),\, u(k), \ldots, u(k+1-n_u)\right) \\
\hat{y}(k+2|k) &= f\left(\hat{y}(k+1|k), \ldots, y(k+2-n_y),\, u(k+1), \ldots, u(k+2-n_u)\right) \\
&\;\;\vdots \\
\hat{y}(k+N|k) &= f\left(\hat{y}(k+N-1|k), \ldots, \hat{y}(k+N-n_y|k),\, u(k+M-1), \ldots, u(k+M-n_u)\right)
\end{aligned} \tag{6.41}$$

where N is the prediction horizon and M is the control horizon.
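The recurrent cascading of Eq. (6.41) can be sketched as follows; model_step is assumed to be the identified one-step-ahead model of Eq. (6.39), with lag vectors stored newest-last.

```python
def predict_horizon(model_step, y_hist, u_hist, u_future, N):
    """Cascade the one-step model of Eq. (6.39) to obtain the N-step
    predictions of Eq. (6.41). `y_hist`/`u_hist` hold the most recent n_y
    outputs and n_u inputs; inputs beyond the control horizon are held at
    the last element of `u_future`."""
    y_lags = list(y_hist)
    u_lags = list(u_hist)
    predictions = []
    for i in range(N):
        # move within the control horizon, then hold u(k+M-1) afterward
        u_now = u_future[min(i, len(u_future) - 1)]
        u_lags = u_lags[1:] + [u_now]
        y_next = model_step(y_lags, u_lags)      # one-step-ahead prediction
        y_lags = y_lags[1:] + [y_next]           # feed the prediction back
        predictions.append(y_next)
    return predictions
```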

6.5.2.3 Objective function formulation

The optimal control input sequence in NMPC is computed by minimizing an objective function based on a desired output trajectory over the prediction horizon:

$$\min_{u(k),\, u(k+1),\, \ldots,\, u(k+M-1)} \;\; \sum_{i=1}^{N} \lambda \left[w(k+i) - \hat{y}_p(k+i)\right]^2 + \sum_{i=1}^{M} \gamma\, \Delta u(k+i-1)^T \Delta u(k+i-1) \tag{6.42}$$

subject to the constraints

$$\begin{aligned}
y_{min} &\leq \hat{y}_p(k+i) \leq y_{max} \quad (i = 1, \ldots, N) \\
u_{min} &\leq u(k+i) \leq u_{max} \quad (i = 0, \ldots, M-1) \\
\Delta u_{min} &\leq \Delta u(k+i) \leq \Delta u_{max} \quad (i = 0, \ldots, M-1)
\end{aligned}$$

where $\hat{y}_p(k+i)$, i = 1, …, N, are the future process outputs predicted over the prediction horizon; w(k+i), i = 1, …, N, are the set points; and u(k+i), i = 0, …, M−1, are the future control signals. λ and γ represent the output and input weightings. The $u_{min}$ and $u_{max}$ are the minimum and maximum values of the manipulated inputs, and $\Delta u_{min}$ and $\Delta u_{max}$ represent the corresponding bounds on their changes. Computation of the future control signals involves minimization of the objective function so as to bring and keep the process output as close as possible to the given reference trajectory, even in the presence of load disturbances.

6.5.3 The process representation

Ethyl acetate is produced through an esterification reaction between acetic acid and ethyl alcohol:

$$\mathrm{CH_3COOH + C_2H_5OH \;\overset{H^+}{\rightleftharpoons}\; H_2O + CH_3COOC_2H_5} \tag{6.43}$$


The achievable conversion in this reversible reaction is limited by the equilibrium conversion. This quaternary system is highly nonideal and forms binary and ternary azeotropes, which complicate the separation by conventional distillation. Reactive distillation can provide a means of breaking the azeotropes by altering or eliminating the conditions for azeotrope formation, and it thus becomes an attractive alternative for the production of ethyl acetate. All the plates in the reactive distillation column are considered to be reactive, in the sense that reaction takes place on all plates, including the condenser and the reboiler. Vora and Daoutidis [126] presented a two-feed column configuration for ethyl acetate reactive distillation and found that feeding the two reactants countercurrently on different trays enhances the forward reaction on the trays and results in higher conversion and purity than the conventional configuration, which feeds the reactants on a single tray. In this work, the design and performance evaluation of the proposed stochastic optimization-based NMPCs is carried out using the double-feed column configuration of the ethyl acetate reactive distillation with the control structure shown in Fig. 6.11. The dynamic model representing the process involves mass and component balance equations with reaction terms and algebraic energy equations, supported by vapor–liquid equilibrium and physical properties [127,128].

FIGURE 6.11 Control structure of double-feed ethyl acetate reactive distillation column.

6.5.4 Stochastic optimization methods for computation of optimal control policies in NMPC

Stochastic optimization algorithms such as GA and SA are designed and implemented to solve the optimization problem in NMPC while accounting for the constraints on the outputs and inputs. The control signal u is manipulated within the control horizon and remains constant afterward, i.e., u(k+i) = u(k+M−1) for i = M, …, N−1. Only the first control move of the optimized control sequence is implemented on the process, and the output measurements are obtained. At the next sampling instant, the prediction and control horizons are moved ahead by one step, and the optimization problem is solved again using the updated measurements from the process. The mismatch $d_k$ between the process output y(k) and the model output $\hat{y}(k)$ is computed as

$$d_k = \beta\left(y(k) - \hat{y}(k)\right) \tag{6.44}$$

where β is a tunable parameter lying between 0 and 1. This mismatch is used to compensate the model predictions in Eq. (6.41):

$$\hat{y}_p(k+i) = \hat{y}(k+i) + d_k \quad (\text{for all } i = 1 \text{ to } N) \tag{6.45}$$

These predictions are incorporated in the objective function defined by Eq. (6.42) along with the corresponding set point values. The structure of the GA-based/SA-based NMPC is shown in Fig. 6.12.

6.5.4.1 Optimal control policy computation using genetic algorithm

The basic GA is described in Section 4.2, its implementation to base case problems is illustrated in Section 5.3, and its application to identify the parameters of the multiloop PI controllers of the metathesis reactive distillation column is given in Section 6.4. In the GA-based NMPC, the control input u is normalized and constrained within the specified limits. The number of input variables coded in a string is set equal to the control horizon, and the length of the string increases as the control horizon increases. If each Δu over the control horizon M is represented by a substring of s bits, then the string length over the control horizon becomes sM. A penalty function approach is used to satisfy the constraints on the input variables: a penalty term corresponding to the constraint violation is added to the objective function defined in Eq. (6.42). The violation of the constraints on the variables is thus accounted for by defining a penalty function of the form

$$P = \sum_{i=1}^{N} \mu\left(\Delta u(k+i)\right)^2 \tag{6.46}$$

where the penalty parameter μ is selected as a high value. The penalized objective function is then given by

$$f(x) = J + P \tag{6.47}$$

The fitness function is formed by transformation of the penalized objective function as per the GA computation. The operation of the GA begins with a population of random strings representing the manipulated input variables Δu(k+i) for i = 0, …, M−1. The population is then operated on through selection, crossover, and mutation to create a new population of points representing the future control actions over the specified control horizon. The updated values of u(k+i) are used to compute the model predictions explicitly at future time instants over the specified prediction horizon to evaluate the objective function. The procedure is continued iteratively until the objective function is minimized while satisfying the specified termination criteria. The first value of u(k+i) is implemented on the process, and the procedure is repeated for the next process measurement condition.

6.5.4.1.1 Implementation procedure

The implementation of the GA-based NMPC proceeds with the following steps:

1. Initialize the algorithm with the parameters used for processing the information. Choose the initial control vector for u and Δu. Specify the number of generations.
2. Find the process outputs and compute the model predictions using Eq. (6.41).
3. Perform the GA search to find the Δu that optimizes the cost function and satisfies the constraints. This is accomplished by performing selection, crossover, and mutation on the random population while evaluating the fitness of the solutions.
4. Update the control vector as

$$u(k+i) = u(k) + \Delta u(k) \tag{6.48}$$

5. Terminate the GA if the specified number of generations is reached.
6. Go to step 2 and repeat the procedure for every sampling point based on the updated control vector and its corresponding process output, using a random population for Δu.
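A compact sketch of one receding-horizon move combining these steps is given below. A simple real-coded GA (tournament selection, arithmetic crossover, Gaussian mutation) stands in for the binary-coded GA with roulette wheel selection used in the text, and predict(y_hist, u_hist, u_future, N) is assumed to return the N-step predictions of Eq. (6.41).

```python
import numpy as np

def ga_nmpc_step(y_meas, y_model, u_prev, y_hist, u_hist, setpoint,
                 predict, N=10, M=2, lam=1.0, gam=1.0, mu=1e5,
                 du_lim=0.0025, beta=1.0, pop=40, gens=100, seed=0):
    """One receding-horizon move of a GA-based NMPC (steps 1-6 above).
    Returns the first control move of the optimized sequence only."""
    rng = np.random.default_rng(seed)
    d_k = beta * (y_meas - y_model)              # plant/model mismatch, Eq. (6.44)
    def cost(du_seq):
        u_seq = u_prev + np.cumsum(du_seq)       # candidate inputs, Eq. (6.48)
        y_pred = np.asarray(predict(y_hist, u_hist, list(u_seq), N)) + d_k
        J = lam * np.sum((setpoint - y_pred) ** 2) + gam * np.sum(du_seq ** 2)
        penalty = mu * np.sum(np.maximum(np.abs(du_seq) - du_lim, 0.0) ** 2)
        return J + penalty                       # penalized objective, Eq. (6.47)
    P = rng.uniform(-du_lim, du_lim, (pop, M))   # initial random population
    for _ in range(gens):
        f = np.array([cost(ind) for ind in P])
        # binary tournament selection
        idx = [min(rng.integers(0, pop, 2), key=lambda j: f[j]) for _ in range(pop)]
        parents = P[idx]
        w = rng.random((pop, M))
        child = w * parents + (1.0 - w) * parents[::-1]    # arithmetic crossover
        mutate = rng.random((pop, M)) < 0.05               # mutation mask
        child = child + mutate * rng.normal(0.0, 0.1 * du_lim, (pop, M))
        P = np.clip(child, -du_lim, du_lim)
    best = min(P, key=cost)
    return u_prev + best[0]                      # implement the first move only
```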

6.5.4.2 Optimal control policy computation using SA

The basic SA is described in Section 4.4, and its implementation to base case problems is illustrated in Section 5.4. The SA-based NMPC requires specification of the energy function and of the random number selection for the control input calculation. The control input u is normalized and constrained within the specified limits. The number of random numbers used for the control input Δu is set equal to the length of the control horizon, and these numbers are generated so as to satisfy the constraints. Constraint violation is accounted for by using the penalty function procedure, as in the case of the GA controller. At any instant, the current control signal u(k) and the prediction output based on this control input, $\hat{y}(k+i)$, are used to compute the objective function J in Eq. (6.42) as the energy function E(k+i). The E(k+i) and the previously evaluated E(k) provide the energy change ΔE as

$$\Delta E(k) = E(k+i) - E(k) \tag{6.49}$$

Comparison based on ΔE and random numbers generated between 0 and 1 determines the probability of acceptance of u(k). If ΔE ≤ 0, all u(k) are accepted. If ΔE > 0, the u(k) are accepted with a probability of exp(−ΔE/T_A). If $n_m$ is the number of variables, $n_k$ the number of function evaluations, and $n_T$ the number of temperature reductions, then the total number of function evaluations required for every sampling condition is $n_T \times n_k \times n_m$.

6.5.4.2.1 Implementation procedure

The implementation of the SA-based NMPC proceeds with the following steps:

1. Set $T_A$ to a sufficiently high value and let $n_k$ be the number of function evaluations to be performed at a particular $T_A$. Specify the termination criterion ε. Choose the initial control vector u and obtain the process output predictions using Eq. (6.41). Evaluate the objective function, Eq. (6.42), as the energy function E(k).
2. Compute the incremental input vector Δu(k) stochastically and update the control vector as in Eq. (6.48). Calculate the objective function E(k+i) as the energy function based on this vector.
3. The u(k+i) is accepted unconditionally if the energy function satisfies the condition

$$E(k+i) \leq E(k) \tag{6.50}$$

Otherwise, u(k+i) is accepted with a probability according to the Metropolis criterion:

$$\exp\left(-\frac{E(k+i) - E(k)}{T_A'}\right) \geq r \tag{6.51}$$

where $T_A'$ is the current annealing temperature and r represents a random number. This step proceeds until the specified number of function evaluations, $n_k$, is completed.
4. The temperature reduction is carried out in the outer loop according to the decrement function

$$T_A' = \alpha T_A \tag{6.52}$$

where α is the temperature reduction factor. Terminate the algorithm if all the differences are less than the prespecified ε.
5. Go to step 2 and repeat the procedure for every measurement condition based on the updated control vector and its corresponding process output.
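The inner Metropolis loop and outer temperature reduction can be sketched as follows; cost(u) is assumed to evaluate the penalized objective of Eq. (6.47) for a candidate control input.

```python
import numpy as np

def sa_optimize(cost, u0, du_lim=0.0025, T0=500.0, alpha=0.5,
                n_inner=250, n_temp=8, seed=0):
    """Simulated-annealing search for the control move (steps 1-4 above):
    perturbations that lower the energy E = J are always accepted, others
    are accepted with probability exp(-dE/T) per the Metropolis criterion
    of Eq. (6.51); the temperature is reduced by Eq. (6.52)."""
    rng = np.random.default_rng(seed)
    u, E = u0, cost(u0)
    best_u, best_E = u, E
    T = T0
    for _ in range(n_temp):
        for _ in range(n_inner):
            u_new = u + rng.uniform(-du_lim, du_lim)        # stochastic move
            E_new = cost(u_new)
            dE = E_new - E
            if dE <= 0 or np.exp(-dE / T) >= rng.random():  # Eqs. (6.50)/(6.51)
                u, E = u_new, E_new
            if E < best_E:
                best_u, best_E = u, E
        T *= alpha                                          # Eq. (6.52)
    return best_u
```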


FIGURE 6.12 Structure of stochastic optimization-based nonlinear model predictive control (NMPC).

6.5.5 Analysis of results

The stochastic optimization-based NMPCs of this study are applied to control the ethyl acetate reactive distillation column in its double-feed configuration. The steady-state data considered for the column are D = 6.68 mol/s, B = 7.0825 mol/s, FAc = 6.9 mol/s, FEth = 6.865 mol/s, and Lo = 13.51 mol/s. Because ethyl acetate is produced in significant quantity and is withdrawn as a product in the distillate stream, controlling the purity of this main product is important in spite of disturbances in the column operation. This becomes the main control loop for the GA-based/SA-based NMPC, in which the reflux flow rate is used as the manipulated variable to control the purity of ethyl acetate. Because the reboiler and condenser holdups act as pure integrators, they also need to be controlled. These become the auxiliary control loops and are handled by conventional PI controllers: the distillate flow rate is the manipulated variable for the condenser molar holdup, and the bottom flow rate controls the reboiler molar holdup. The tuning parameters used for both PI controllers of the reflux drum and reboiler holdups are kc = 0.001 and τI = 1.99 × 10⁴ s [126].

The input–output data to construct the nonlinear empirical model are obtained by solving the model equations of the reactive distillation column using Euler integration with a step size of 2.0 s [128]. A PI controller with a series of step changes in the set point of the ethyl acetate composition is used for data generation. The input data (reflux flow) are normalized and used along with the outputs (ethyl acetate composition) in model building. The reflux flow rate is constrained within the limits of 20 and 5 mol/s. A total of 25,000 datasets are used to develop the model, whose parameters are determined using the well-known recursive least squares algorithm [129]. On assessing the model structure (Eq. 6.39) for different orders of $n_y$ and $n_u$, the model with $n_y$ = 2 and $n_u$ = 2 is found appropriate for designing and implementing the stochastic optimization-based NMPCs. Such a model has the form

$$\hat{y}(k) = \theta_0 + \theta_1 y(k-1) + \theta_2 u(k-1) + \theta_3 y(k-1)u(k-1) + \theta_4 y(k-2)u(k-2) + \theta_5 u(k-1)u(k-2) \tag{6.53}$$

The parameters of this model are determined as θ0 = 0.000774, θ1 = 1.000553, θ2 = 0.002943, θ3 = 0.003828, θ4 = 0.000766, and θ5 = 0.000117. This identified model is then used to derive future predictions of the process output by cascading the model to itself as in Eq. (6.41). The model predictions are corrected with the modeling error d(k) defined in Eq. (6.44), which is considered constant over the entire prediction horizon.

The weightings λ and γ in the objective function, Eq. (6.42), are set as 1.0 × 10⁷ and 7.5 × 10⁴, respectively, and the penalty parameter μ in Eq. (6.46) is assigned as 1.0 × 10⁵. The cost function used in the GA-based NMPC is the penalized objective function (Eq. 6.47), from which the fitness function in the GA search procedure is computed. The incremental input Δu in the GA search is constrained within the limits of −0.0025 and 0.0025. The actual input u involved in the optimization scheme is a normalized value constrained between 0 and 1. A crossover probability of 0.8 and a mutation probability of 0.05 are employed for the genetic operators, with roulette wheel selection, single-point crossover, and bitwise mutation used in the GA implementation. Each input variable used in the GA is coded as a substring of length 10; the length of the string depends on the control horizon and increases with the number of input variables. The size of the population is set equal to the string length, and the allowable number of generations is fixed at 100. The objective function and the constraints used in the SA-based NMPC are the same as in the GA-based NMPC. The energy function at each instant is evaluated as the objective function, Eq. (6.42). The initial temperature $T_A$ in the SA-based NMPC is chosen as 500, the number of iterations at each temperature is set as 250, and the temperature reduction factor α in Eq. (6.52) is set as 0.5. The control input determined by the stochastic optimizer is denormalized and implemented on the process. A sample time of 2 s is considered for implementation of the controllers.

The performance of the GA-based NMPC and SA-based NMPC is evaluated by applying them to the servo and regulatory control of the ethyl acetate reactive distillation column. The effect of the lengths of the prediction and control horizons of these controllers is expressed by means of the integral squared error (ISE) for a step change in the ethyl acetate composition set point. The ISE results show that increasing the prediction horizon with a fixed control horizon gradually improves the control performance up to a certain length of the prediction horizon, beyond which a marginal degradation is observed. The GA-based NMPC with a prediction horizon of around 10 and a control horizon of around 1 or 2 presents better performance, while the SA-based NMPC with a prediction horizon of around 10 and a control horizon of around 1 to 3 provides enhanced performance. A similar trend is observed in the ISE results of the GA-based NMPC and SA-based NMPC for a step change in the reboiler heat input. The GA-based NMPC and SA-based NMPC with the better-selected prediction and control horizons are evaluated for their servo performance by introducing a step change in the ethyl acetate composition set point from 0.6827 to 0.75, as shown in Figs. 6.13 and 6.14. The responses of the GA-based NMPC and SA-based NMPC for tracking multiple step changes in the set point of the controlled variable are shown in Fig. 6.15. The results exhibit the good set point tracking performance of the stochastic optimization methods. The computational efficiency of the stochastic optimization-based NMPCs is evaluated in terms of CPU time. The computational times of the GANMPC and SANMPC for different prediction and control horizons, on an X86 computer with 127 MB RAM using C programming, are shown in Table 6.3. These times correspond to a complete simulation period of 10 h with a sample time of 2 s for a step change in the ethyl acetate composition set point from 0.6827 to 0.75. The SANMPC is preferred for the control of the reactive distillation column due to its lower execution times, easier tuning, and lower computational effort.

FIGURE 6.13 Output and input profiles for step increase in ethyl acetate composition set point (GA-based NMPC).

FIGURE 6.14 Output and input profiles for step increase in ethyl acetate composition set point (SA-based NMPC).

FIGURE 6.15 Output responses for multiple step changes in ethyl acetate composition set point (GA-based and SA-based NMPC).

Table 6.3 Computational efficiency of nonlinear model predictive controllers (NMPCs).

Prediction horizon | Control horizon | Execution time of GANMPC (min) | Execution time of SANMPC (min)
10 | 1 |  66.15 | 21.41
10 | 2 |  92.11 | 22.73
10 | 3 | 118.27 | 24.16
10 | 4 | 144.43 | 25.83
10 | 5 | 170.06 | 27.31
 1 | 1 |  51.26 |  5.62
 2 | 1 |  52.16 |  7.91
 3 | 1 |  54.37 |  9.53
 4 | 1 |  55.30 | 11.37
 5 | 1 |  56.73 | 13.73
 6 | 1 |  59.04 | 15.17
 7 | 1 |  61.33 | 16.67
 8 | 1 |  63.10 | 18.19
 9 | 1 |  64.43 | 19.76

6.6 Summary

Stochastic and evolutionary optimization techniques are widely used to solve complex engineering problems related to the analysis, design, modeling, identification, operation, and control of chemical processes that are highly nonlinear and high dimensional, or problems that are not easily solved by classical deterministic optimization methods. Diversified applications of various stochastic and evolutionary optimization methods have been reported in the chemical engineering domain. This chapter deals with the design and implementation of various stochastic global optimization strategies for real engineering applications concerning the multistage dynamic optimization of polymerization reactors, the multiloop tuning of PI controllers accounting for multivariable interactions and nonlinear process dynamics in reactive distillation, and stochastic optimization-based nonlinear model predictive controllers for efficient control of highly nonlinear reactive distillation columns. The results evaluated for the different case studies show the advantages of stochastic global optimization methods for solving complex optimization problems in chemical processes.

References

[1] Z. Michalewicz, D.B. Fogel, How to Solve It? Modern Heuristics, Springer, 1999.
[2] H. Ratschek, J. Rokne, New Computer Methods for Global Optimization, Ellis Horwood, Chichester, England, 1988.
[3] C.A. Floudas, V. Visweswaran, A global optimization algorithm (GOP) for certain classes of nonconvex NLPs-I. Theory, Comput. Chem. Eng. 14 (1990) 1397.
[4] W.R. Esposito, C.A. Floudas, Global optimization in parameter estimation of nonlinear algebraic models via the error-in-variables approach, Ind. Eng. Chem. Res. 37 (1998) 1841.
[5] S.T. Harding, C.A. Floudas, Phase stability with cubic equation of state: global optimization approach, AIChE J. 46 (2000) 1422.
[6] M. Srinivas, G.P. Rangaiah, Implementation and evaluation of random tunneling algorithm for chemical engineering applications, Comput. Chem. Eng. 30 (2006) 1400.
[7] R. Horst, H. Tuy, Global Optimization: Deterministic Approaches, second ed., Springer-Verlag, Berlin, Germany, 1993.
[8] R. Vaidyanathan, M. El-Halwagi, Global optimization of nonconvex nonlinear programs via interval analysis, Comput. Chem. Eng. 18 (1994) 889.
[9] H.S. Ryoo, N.V. Sahinidis, Global optimization of nonconvex NLPs and MINLPs with applications in process design, Comput. Chem. Eng. 19 (1995) 551.
[10] C.S. Adjiman, I.P. Androulakis, C.D. Maranas, C.A. Floudas, A global optimization method, αBB, for process design, Comput. Chem. Eng. 20 (Suppl.) (1996) S419.
[11] H.M. Cartwright, R.A. Long, Simultaneous optimization of chemical flowshop sequencing and topology using genetic algorithms, Ind. Eng. Chem. Res. 32 (1993) 2706.
[12] E.S. Fraga, T.R. Matias, Synthesis and optimization of a nonideal distillation system using a parallel genetic algorithm, Comput. Chem. Eng. 20 (1996) 79.
[13] C. Onnen, R. Babuska, J.M. Kaymark, J.M. Sousa, H.B. Verbruggen, R. Isermann, Genetic algorithms for optimization in predictive control, Control Eng. Pract. 5 (10) (1997) 1363–1372.
[14] C.M. Castell, R. Lakshmanan, J.M. Skilling, Optimization of process plant layout using genetic algorithms, Comput. Chem. Eng. 22S (1998) 993–996.
[15] T.Y. Park, G.F. Froment, A hybrid genetic algorithm for the estimation of parameters in detailed kinetic models, Comput. Chem. Eng. 22S (1998) 103–110.
[16] K.F. Wang, Y. Qian, Y. Yuan, P.J. Yao, Synthesis and optimization of heat integrated distillation systems using an improved genetic algorithm, Comput. Chem. Eng. 23 (1998) 125–136.
[17] G.P. Rangaiah, Evaluation of genetic algorithms and simulated annealing for phase equilibrium and stability problems, Fluid Phase Equilib. 187–188 (2001) 83–109.
[18] S. Nandi, P. Mukherjee, S.S. Tambe, R. Kumar, B.D. Kulkarni, Reaction modeling and optimization using neural networks and genetic algorithms: case study involving TS-1 catalyzed hydroxylation of benzene, Ind. Eng. Chem. Res. 41 (2002) 2159–2169.
[19] R.B. Kasat, D. Kunzru, D.N. Saraf, S.K. Gupta, Multiobjective optimization of industrial FCC units using elitist nondominated sorting genetic algorithm, Ind. Eng. Chem. Res. 41 (2002) 4765–4776.
[20] R.B. Kasat, S.K. Gupta, Multi-objective optimization of an industrial fluidized-bed catalytic cracking unit (FCCU) using genetic algorithm (GA) with the jumping gene operators, Comput. Chem. Eng. 27 (2003) 1785–1800.
[21] J. Leboreiro, J. Acevedo, Processes synthesis and design of distillation sequences using modular simulators: a genetic algorithm framework, Comput. Chem. Eng. 28 (2004) 1223.
[22] U. Rodemerck, M. Baerns, M. Holena, Application of a genetic algorithm and a neural network for the discovery and optimization of new solid catalytic materials, Appl. Surf. Sci. 223 (2004) 168–174.
[23] M.K. Singh, T. Banerjee, A. Khanna, Genetic algorithm to estimate interaction parameters of multicomponent systems for liquid–liquid, Comput. Chem. Eng. 29 (2005) 1712.
[24] R.M. Pereira, F. Clerc, D. Farusseng, J.C. Waal, T. Maschmeyer, Effect of genetic algorithm parameters on the optimization of heterogeneous catalysts, QSAR Comb. Sci. 24 (2005) 45–57.
[25] A. Altinten, S. Erdogan, H. Hapoglu, F. Aliev, M. Alpbaz, Application of fuzzy control method with genetic algorithm to a polymerization reactor at constant set point, Chem. Eng. Res. Des. 84 (2006) 1012–1018.
[26] J. Causa, G. Karer, A. Nunez, D. Saez, I. Skrjanc, B. Zupancic, Hybrid fuzzy predictive control based on genetic algorithms for the temperature control of a batch reactor, Comput. Chem. Eng. 32 (2008) 3254–3263.
[27] C. Gutiérrez-Antonio, A. Briones-Ramírez, Pareto front of ideal Petlyuk sequences using a multiobjective genetic algorithm with constraints, Comput. Chem. Eng. 33 (2009) 454.
[28] J.A. Vázquez-Castillo, J.A. Venegas, J.G. Segovia-Hernández, H. Hernández-Escoto, S. Hernández, C. Gutiérrez-Antonio, A. Briones-Ramírez, Design and optimization, using genetic algorithms, of intensified distillation systems for a class of quaternary mixtures, Comput. Chem. Eng. 33 (2009) 1841.
[29] S. Palit, The future vision of the application of genetic algorithm in designing a fluidized catalytic cracking unit and chemical engineering systems-a far-reaching review, Int. J. Chem. Tech. Res. 7 (4) (2014–2015) 1665–1674.
[30] D. Vanderbilt, S.G. Louie, A Monte Carlo simulated annealing approach to optimization over continuous variables, J. Comput. Phys. 56 (1984) 259–271.
[31] W.B. Dolan, P.T. Cummings, M.D. Le Van, Process optimization via simulated annealing: application to network design, AIChE J. 35 (1989) 725–736.
[32] Y. Zhu, Z. Xu, A reliable prediction of the global phase stability for liquid–liquid equilibrium through the simulated annealing algorithm: application to NRTL and UNIQUAC equations, Fluid Phase Equilib. 154 (1999) 55.
[33] P. Li, K.H. Lowe, G. Arellano, G. Wozny, Integration of simulated annealing to a simulation tool for dynamic optimization of chemical processes, Chem. Eng. Process. 39 (2000) 357–363.
[34] M. Hanke, P. Li, Simulated annealing for the optimization of batch distillation process, Comput. Chem. Eng. 24 (2000) 1–8.
[35] Y. Zhu, H. Wen, Z. Xu, Global stability analysis and phase equilibrium calculations at high pressures using the enhanced simulated annealing algorithm, Chem. Eng. Sci. 55 (17) (2000) 3451–3459.
[36] N. Henderson, J.R. De Oliveira, H.P. Amaral Souto, R. Pitanga Marques, Modeling and analysis of the isothermal flash problem and its calculation with the simulated annealing algorithm, Ind. Eng. Chem. Res. 40 (2001) 6028.
[37] M.M. Ali, A. Törn, S. Viitanen, A direct search variant of the simulated annealing algorithm for optimization involving continuous variables, Comput. Oper. Res. 29 (2002) 87.
[38] R.L. Salcedo, R.P. Lima, M.F. Cardoso, Simulated annealing for the global optimization of chemical processes, Proc. Indian Natl. Sci. Acad. 69 (3–4) (2003) 359–401.
[39] M. Kundu, S.S. Bandyopadhyay, Modelling vapor–liquid equilibrium of CO2 in aqueous N-methyldiethanolamine through the simulated annealing algorithm, Can. J. Chem. Eng. 83 (2005) 344.
[40] B. Sankararao, S.K. Gupta, Multi-objective optimization of an industrial fluidized-bed catalytic cracking unit (FCCU) using two jumping gene adaptations of simulated annealing, Comput. Chem. Eng. 31 (2007) 1496–1515.
[41] M.H. Lee, C. Han, K.S. Chang, Dynamic optimization of a continuous polymer reactor using a modified differential evolution, Ind. Eng. Chem. Res. 38 (1999) 4825–4831.
[42] B.V. Babu, K.K.N. Sastry, Estimation of heat transfer parameters in a trickle bed reactor using differential evolution and orthogonal collocation, Comput. Chem. Eng. 23 (1999) 327–339.
[43] F.S. Wang, T.L. Su, H.J. Jang, Hybrid differential evolution for problems of kinetic parameter estimation and dynamic optimization of an ethanol fermentation process, Ind. Eng. Chem. Res. 40 (2001) 2876–2885.
[44] B.V. Babu, G.P.G. Chakole, J.H. Syed Mubeen, Multiobjective differential evolution (MODE) for optimization of adiabatic styrene reactor, Chem. Eng. Sci. 60 (2005) 4822–4837.
[45] B.V. Babu, R. Angira, A. Nilekar, Optimal design of auto thermal ammonia synthesis reactor using differential evolution, Comput. Chem. Eng. 29 (5) (2005) 1041–1045.
[46] R. Angira, B.V. Babu, Optimization of process synthesis and design problems: a modified differential evolution approach, Chem. Eng. Sci. 61 (2006) 4707–4721.
[47] M. Srinivas, G.P. Rangaiah, Differential evolution with tabu list for global optimization and its application to phase equilibrium and parameter estimation problems, Ind. Eng. Chem. Res. 46 (10) (2007) 3410–3421.
[48] Y. Wu, J. Lu, Y. Sun, An improved differential evolution for optimization of chemical process, Chin. J. Chem. Eng. 16 (2008) 228–234.
[49] C. Hu, X. Yan, A novel adaptive differential evolution algorithm with application to estimate kinetic parameters of oxidation in supercritical water, Eng. Optim. 41 (2009) 1051–1062.
[50] M.R. Rahimpour, P. Parvasi, P. Setoodeh, Dynamic optimization of a novel radial-flow, spherical-bed methanol synthesis reactor in the presence of catalyst deactivation using differential evolution (DE) algorithm, Int. J. Hydrogen Energy 34 (2009) 6221–6230.
[51] C. Zhao, Q. Xu, S. Lin, X. Li, Hybrid differential evolution for estimation of kinetic parameters for biochemical systems, Chin. J. Chem. Eng. 21 (2013) 155–162.
[52] B. Xu, R. Qi, W. Zhong, W. Du, F. Qian, Optimization of p-xylene oxidation reaction process based on self-adaptive multi-objective differential evolution, Chemometr. Intell. Lab. Syst. 127 (2013) 55–62.
[53] M. Vázquez-Ojeda, J.G. Segovia-Hernández, S. Hernández, A. Hernández-Aguirre, A.A. Kiss, Optimization of an ethanol dehydration process using differential evolution algorithm, Comput. Aided Chem. Eng. 32 (2013) 217–222.
[54] X. Nian, Z. Wang, F. Qian, A hybrid algorithm based on differential evolution and group search optimization and its application on ethylene cracking furnace, Chin. J. Chem. Eng. 21 (2013) 537–543.
[55] V.K. Jayaraman, B.D. Kulkarni, S. Karale, P. Shelokar, Ant colony framework for optimal design and scheduling of batch plants, Comput. Chem. Eng. 24 (8) (2000) 1901–1912.
[56] P.S. Shelokar, V.K. Jayaraman, B.D. Kulkarni, Ant algorithm for single and multiobjective reliability optimization problems, Qual. Reliab. Eng. Int. 18 (6) (2002) 497–514.
[57] P.S. Shelokar, V.K. Jayaraman, B.D. Kulkarni, Multiobjective optimization of reactor–regenerator system using ant algorithm, Pet. Sci. Technol. 21 (7–8) (2003) 1167–1184.
[58] P.S. Shelokar, V.K. Jayaraman, B.D. Kulkarni, An ant colony classifier system: application to some process engineering problems, Comput. Chem. Eng. 28 (9) (2004) 1577–1584.
[59] Q. Shen, J.H. Jiang, J.C. Tao, G.L. Shen, R.Q. Yu, Modified ant colony optimization algorithm for variable selection in QSAR modeling: QSAR studies of cyclooxygenase inhibitors, J. Chem. Inf. Model. 45 (4) (2005) 1024–1029.
[60] Y. He, D. Chen, W. Zhao, Ensemble classifier system based on ant colony algorithm and its application in chemical pattern classification, Chemometr. Intell. Lab. Syst. 80 (1) (2006) 39–49.
[61] F. Jalalinejad, F. Jalali-Farahani, N. Mostoufi, R. Sotudeh-Gharebagh, Ant colony optimization: a leading algorithm in future optimization of chemical process, in: 17th European Symposium on Computer Aided Process Engineering, Elsevier, Amsterdam, 2007, pp. 1–6.
[62] S.A. Asgari, M.R. Pishvaie, Dynamic optimization in chemical processes using region reduction strategy and control vector parameterization with an ant colony optimization algorithm, Chem. Eng. Technol. 31 (4) (2008) 507–512.
[63] Y. He, D. Chen, W. Zhao, Integrated method of compromise-based ant colony algorithm and rough set theory and its application in toxicity mechanism classification, Chemometr. Intell. Lab. Syst. 92 (1) (2008) 22–32.
[64] M. Atabati, K. Zarei, A. Borhani, Predicting infinite dilution activity coefficients of hydrocarbons in water using ant colony optimization, Fluid Phase Equilib. 293 (2) (2010) 219–224.
[65] M. Atabati, K. Zarei, M. Mohsennia, Prediction of λmax of 1,4-naphthoquinone derivatives using ant colony optimization, Anal. Chim. Acta 663 (1) (2010) 7–10.
[66] J.A. Fernández-Vargas, A. Bonilla-Petriciolet, J.G. Segovia-Hernández, An improved ant colony optimization method and its application for the thermodynamic modeling of phase equilibrium, Fluid Phase Equilib. 353 (2013) 121–131.
[67] K.V. Lakshmi Narayana, V. Naveen Kumar, M. Dhivya, R. Prejila Raj, Application of ant colony optimization in tuning a PID controller to a conical tank, Ind. J. Sci. Technol. 8 (S2) (2015) 217–223.
[68] S. Sukpancharoen, T.R. Srinophak, Application of the ant colony algorithm to optimize the synthesis of heat, Int. J. Mech. Eng. Technol. 9 (4) (2018) 514–530.
[69] Y. Hou, Parameter prediction model for signal reconstruction fluctuating operation state of chemical machinery equipment based on ant colony algorithm, Chem. Eng. Trans. 66 (2018) 727–732.
[70] C. Wang, H. Quan, X. Xu, Optimal design of multiproduct batch chemical process using tabu search, Comput. Chem. Eng. 23 (1999) 427–437.
[71] Y.S. Teh, G.P. Rangaiah, Tabu search for global optimization of continuous functions with application to phase equilibrium calculations, Comput. Chem. Eng. 27 (2003) 1665–1679.
[72] B. Lin, D.C. Miller, Tabu search algorithm for chemical process optimization, Comput. Chem. Eng. 28 (2004) 2287–2306.
[73] L. Cavin, U. Fischer, F. Glover, K. Hungerbühler, Multiobjective process design in multi-purpose batch plants using a tabu search optimization algorithm, Comput. Chem. Eng. 28 (2004) 459–478.
[74] B. Lin, D.C. Miller, Solving heat exchanger network synthesis problem with tabu search, Comput. Chem. Eng. 28 (2004) 1451–1464.
[75] A. Konak, S.K. Konak, Simulation optimization with tabu search: an empirical study, in: Proceedings of the Winter Simulation Conference, Orlando, Florida, 2005, pp. 2686–2692.
[76] M.J. Carnero, J. Hernández, M. Sánchez, Optimal sensor network design and upgrade using tabu search, Comput. Aided Chem. Eng. 20 (2005) 1447–1452.
[77] P.S. Shelokar, V.K. Jayaraman, B.D. Kulkarni, Multicanonical jump walk annealing assisted by tabu for dynamic optimization of chemical engineering processes, Eur. J. Oper. Res. 185 (3) (2008) 1213–1229.
[78] O. Exler, L.T. Antelo, J.A. Egea, A.A. Alonso, J.R. Banga, A tabu search-based algorithm for mixed-integer nonlinear problems and its application to integrated process and control system design, Comput. Chem. Eng. 32 (2008) 1877–1891.
[79] S. Nandi, S. Ghosh, S.S. Tambe, B.D. Kulkarni, Artificial neural-network-assisted stochastic process optimization strategies, AIChE J. 47 (2001) 126–141.
[80] C.O. Ourique, E.C. Biscaia, J.C. Pinto, The use of particle swarm optimization for dynamical analysis in chemical processes, Comput. Chem. Eng. 26 (2002) 1783–1793.
[81] P.S. Shelokar, P. Siarry, V.K. Jayaraman, B.D. Kulkarni, Particle swarm and ant colony algorithms hybridized for improved continuous optimization, Appl. Math. Comput. 188 (1) (2007) 129–142.
[82] Z.H. Liu, W. Qi, Z.M. He, H.L. Wang, Y.Y. Feng, PSO-based parameter estimation of nonlinear kinetic models for β-mannanase fermentation, Chem. Biochem. Eng. Q. 22 (2) (2008) 195–201.
[83] I. Rahman, A.K. Das, R.B. Mankar, B.D. Kulkarni, Evaluation of repulsive particle swarm method for phase equilibrium and phase stability problems, Fluid Phase Equilib. 282 (2) (2009) 65–67.

189

190

CHAPTER 6 Application of stochastic evolutionary optimization

[84] D.M. Prata, M. Schwab, E.L. Lima, J.C. Pinto, Nonlinear dynamic data reconciliation and parameter estimation through particle swarm optimization: application for an industrial polypropylene reactor, Chem. Eng. Sci. 64 (2009) 3953e3967. [85] A. Bonilla-Petricioletand, J.G. Segovia-Herna´ndez, A comparative study of particle swarm optimization and its variants for phase stability and equilibrium calculations in multicomponent reactive and non-reactive systems, Fluid Phase Equilib. 2899 (2) (2010) 110e121. [86] A.P. Marianoa, C.B.B. Costa, E.C.V. de Toledo, D.N.C. Meloa, R.M. Filho, in: S. Pierucci, G.B. Ferraris (Eds.), Optimization of a Three-phase Slurry Catalytic Reactor by Particle Swarm, 20th European Symposium on Computer Aided Process Engineering, Elsevier B.V., 2010. [87] V. Sakhre, S. Jain, Comparison of hybrid PSO-GA & cuckoo search (CS) algorithm for optimization of ETBE reactive distillation, Int. J. Innov. Adv. Comput. Sci. 3 (5) (2014) 12e21. [88] T.T. Nguyen, Z.Y. Li, S.W. Zhang, T.K. Truong, A Hybrid algorithm based on particle swarm and chemical reaction optimization, Expert Syst. Appl. 41 (5) (2014) 2134e2143. [89] N. Khanduja, CSTR control by using model reference adaptive control and PSO, World Academy of Science, Int. J. Mech. Mechatron. Eng. 8 (12) (2014) 2144e2149. [90] D.T. Pham, M. Castellani, The bees algorithm: modelling foraging behaviour to solve continuous optimization problems, Sage J 223 (2009) 2919e2938. [91] N. Stanarevic, M. Tuba, N. Bacanin, Modified artificial bee colony algorithm for constrained problems optimization, Intl. J. Math. Models Methods Appl. Sci. 5 (2011) 644e651. [92] B. Akay, D. Karaboga, Artificial bee colony algorithm for large-scale problems and engineering design optimization, J. Intell. Manuf. 21 (4) (2012) 1e14. [93] W.D. Chang, Nonlinear CSTR control system design using an artificial bee colony algorithm, Simul. Model. Pract. Theory 31 (2013) 1e9. [94] V. Kaliappan, M. Thathan, Enhanced ABC based PID controller for nonlinear control systems, Ind. J. Sci. Technol. 8 (S7) (2015) 48e56. [95] T. Chen, R. Xiao, Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm, Comput. Intell. Neurosci. 4 (3) (2014) 1e13. [96] A. Bagis, H. Senberber, ABC algorithm based PID controller design for higher order oscillatory systems, Elektronika Ir Elektrotechnika 23 (6) (2017) 1e9. [97] I.F.D. Fister, I. Fister, A comprehensive review of cuckoo search: variants and hybrids, Int. J. Math. Model. Numer. Optim. 4 (2013) 387e409. [98] V. Bhargava, S.E.K. Fateen, A. Bonilla-Petriciolet, Cuckoo Search: a new natureinspired optimization method for phase equilibrium calculations, Fluid Phase Equilib. 337 (2013) 191e200. [99] V. Sakhre, S. Jain, Comparison of hybrid PSOGSA & cuckoo search (CS) algorithm for optimization of ETBE reactive distillation, Int. J. Innov. Adv. Comput. Sci. 3 (5) (2014) 12e21. [100] H. Jiang, R. Qi, N. Meiling, W. Zhang, Modified Cuckoo Search algorithm and its application to optimization in multiple-effect evaporation, Comput. Appl. Chem. 31 (11) (2014) 1363e1368.

References

[101] J.E. Jaime-Leal, A. Bonilla-Petriciolet, V. Bhargava, S.E.K. Fateen, Nonlinear parameter estimation of e-NRTL model for quaternary ammonium ionic liquids using Cuckoo Search, Chem. Eng. Res. Des. 93 (2015) 464e472. [102] O. Roeva, V. Atanassova, Cuckoo search algorithm for model parameter identification, Int. J. Bioaut. 20 (4) (2016) 483e492. [103] G.Z.A. Wu, L.A. Denton, R.L. Laurence, Batch polymerization of styrene-optimal temperature histories, Polym. Eng. Sci. 22 (1) (1982). [104] K.Y. Choi, D.N. Butala, Synthesis of open loop controls for semi batch copolymerization reactors by inverse feedback control method, Automatica 25 (1989) 917e923. [105] G. Arzamendi, J.M. Asua, Monomer addition policies for copolymer composition control in semi-continuous emulsion polymerization, J. Appl. Polym. Sci. 38 (2019), 1989. [106] P.E. Gloor, R.J. Warner, Developing feed policies to maximize productivity in emulsion polymerization processes, Thermochim. Acta 289 (1996) 243. [107] V.M. Zavala, A.F. Tlacuahuac, E.V. Lima, Dynamic optimization of a semi-batch reactor for polyurethane production, Chem. Eng. Sci. 60 (2005) 3061e3307. [108] L.S. Pontryagin, V.G. Boltyanski, R.V. Gamkrelidze, E.F. Mishchenko, The Mathematical Theory of Optimal Processes, John Wiley & Sons, New York, 1962. [109] I.M. Thomas, C. Kiparissides, Computation of the near optimal temperature and initiator policies for batch polymerization reactors, Can. J. Chem. Eng. 62 (1984) 284e291. [110] S.R. Ponnuswamy, S.L. Shah, C.A. Kiparissides, Computer optimal control of batch polymerization reactors, Ind. Eng. Chem. Res. 26 (1987) 2229e2236. [111] A.R. Secchi, E.L. Lima, J.C. Pinto, Constrained optimal batch polymerization reactor control, Polym. Eng. Sci. 30 (1990) 1209e1219. [112] E.E. Ekpo, I.M. Mujtaba, Optimal control trajectories for a batch polymerization reactor, Int. J. Chem. React. Eng. 5 (2007) 1542e6580. [113] J.H. Chang, J.L. Lai, Computation of optimal temperature policy for molecular weight control in a batch polymerization reactor, Ind. Eng. Chem. Res. 31 (1992) 861e868. [114] D. Salhi, M. Daroux, C. Genetric, J.P. Corriou JP, F. Pla, M.A. Latifi, Optimal temperature-time programming in a batch copolymerization reactor, Ind. Eng. Chem. Res. 43 (2004) 7392e7400. [115] C. Venkateswarlu, Advances in Optimal Control of Polymerization Reactors, Book Chapter Publication in Polymer Science Book Entitled, Polymer Science: Research Advances, Practical Applications, Educational Aspects, Formatex Publishers, Spain, 2016. [116] P. Anand, M. Bhagvanth Rao, C. Venkateswarlu, Multistage dynamic optimization of a copolymerization reactor using differential evolution, Asia Pac. J. Chem. Eng. 8 (2013) 687e698. [117] R.E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, 2003, pp. 947e957. [118] D. Butala, K.Y. Choi, M.K.H. Fan, Multiobjective dynamic optimization of a semibatch free-radical copolymerization process with interactive CAD tools, Comput. Chem. Eng. 12 (1988) 1115e1127. [119] P. Anand, M. BhagvanthRao, C. Venkateswarlu, Dynamic optimization of copolymerization reactor using tabu search, ISA Trans. 55 (2015) 13e26.

191

192

CHAPTER 6 Application of stochastic evolutionary optimization

[120] C. Sumana, C. Venkateswarlu, Genetically tuned decentralized PI controllers for inferential control of reactive distillation, Ind. Eng. Chem. Res. 49 (3) (2010) 1297e1311, 2010. [121] P. Wang, D.P. Kwok, Optimal design of PID process controllers based on genetic algorithms, Contr. Eng. Pract. 2 (1994) 641. [122] D.R. Lewin, A.A. Parag, Constrained genetic algorithm for decentralized control system structure selection and optimization, Automatica 39 (2003) 1801. [123] C. Sumana, C. Venkateswarlu, Optimal selection of sensors for state estimation in a reactive distillation process, J. Proc. Contr. 19 (2009) 1024e1035. [124] J.D. Morningred, B.E. Paden, D.E. Seborg, D.A. Mellichamp, An adaptive nonlinear predictive controller, Chem. Eng. Sci. 47 (1992) 755e762. [125] E. Hernandez, Y. Arkun, Control of nonlinear systems using polynomial ARMA models, AIChE J. 39 (1993) 446e460. [126] N. Vora, P. Daoutidis, Dynamics and control of ethyl acetate reactive distillation column, Ind. Eng. Chem. Res. 40 (2001) 833e849. [127] K. Alejski, F. Duprat, Dynamic simulation of the multicomponent reactive distillation, Chem. Eng. Sci. 51 (1996) 4237e4252. [128] C. Venkateswarlu, D. Damodar Reddy, Nonlinear model predictive control of reactive distillation based on stochastic optimization, Ind. Eng. Chem. Res. 47 (2008) 6949e6960. [129] G.C. Goodwin, K.S. Sin, Adaptive Filtering, Prediction and Control, Prentice Hall, Englewood Cliffs, New Jersey, 1984.

CHAPTER 7 - Application of stochastic evolutionary optimization techniques to biochemical processes

Chapter outline
7.1 Introduction
7.2 Bioprocess engineering: significance of modeling and optimization
7.3 Media optimization of Chinese hamster ovary cells production process using differential evolution
    7.3.1 CHO cell cultivation process and its macroscopic state space model
    7.3.2 DE-coupled model-based strategy for cell culture medium optimization
    7.3.3 Analysis of results
7.4 Response surface model-based ant colony optimization of lipopeptide biosurfactant process
    7.4.1 The lipopeptide biosurfactant process and its culture medium
    7.4.2 Experimental design and data generation
    7.4.3 Development of response surface models for lipopeptide biosurfactant process
    7.4.4 RSM-ACO strategy for lipopeptide process optimization
    7.4.5 Analysis of results
7.5 ANN model-based multiobjective optimization of rhamnolipid biosurfactant process using NSDE strategy
    7.5.1 The rhamnolipid biosurfactant process and its culture medium
    7.5.2 Experimental design and data generation
    7.5.3 ANN model for rhamnolipid process
    7.5.4 Formulation of multiobjective optimization problem
    7.5.5 ANN-NSDE strategy for multiobjective optimization of rhamnolipid process
        7.5.5.1 ANN-NSDE with Naïve & Slow
        7.5.5.2 ANN-NSDE with ε-constraint
        7.5.5.3 ANN-NSDE with Naïve & Slow and ε-constraint
    7.5.6 Analysis of results


7.6 ANN-DE strategy for simultaneous optimization of rhamnolipid biosurfactant process
    7.6.1 The process and the culture medium
    7.6.2 Design of experiments for rhamnolipid process
    7.6.3 ANN model for rhamnolipid process
    7.6.4 Simultaneous optimization of rhamnolipid biosurfactant process using ANN-DE
        7.6.4.1 Individually optimized responses
        7.6.4.2 Simultaneous optimization using a distance minimization function
    7.6.5 Simultaneous optimization of rhamnolipid biosurfactant process using RSM-DE strategy
    7.6.6 Analysis of results
7.7 Summary
References

7.1 Introduction

Bioprocess technology plays a vital role in delivering innovative and sustainable products and processes to fulfill the needs of society. The major solutions offered by this field include the sustainable production of biofuels from renewable sources, the exploitation of the metabolic capabilities of microorganisms for the production of fine chemicals and pharmaceutical products, the identification of novel pathways for environmental bioremediation, the scale-up of laboratory experiments to commercial technological solutions, and the integration of bioprocess technology with chemical processes. In the present situation of increasing energy demand, depleting natural resources, and growing environmental awareness, bioprocesses occupy a unique position in converting a variety of resources into useful products. With the advances taking place in the biotechnology field and a more focused insight into the underlying control mechanisms of cellular growth, the field has surpassed many technological barriers encountered in industry in achieving efficient and cost-effective bioprocesses. Modeling and optimization techniques are increasingly used to understand and improve cellular-based processes. The advantages of these techniques include the reduction of excessive experimentation, facilitation of the most informative experiments, provision of strategies to optimize and automate the processes, and reduction of the cost and time in devising operational strategies. Model-based bioprocess optimization provides a quantitative and systematic framework to maximize process profitability, safety, and reliability. This chapter mainly focuses on the application of stochastic evolutionary optimization techniques for the modeling and optimization of biotechnological processes. Before these techniques are applied to real-life problems, a brief description of the modeling and optimization of bioprocesses provides better insight into the importance of model-based optimization in bioprocess engineering.


7.2 Bioprocess engineering: significance of modeling and optimization

Bioprocess engineering deals with the engineering aspects of biotechnical processes. This field comprises the development of processes and products related to biochemicals, pharmaceuticals, energetics, food, and agriculture. Process modeling and optimization has become a vital component in designing and improving the performance of biotechnical processes. The models used to represent bioprocesses are usually complex and are described by differential and algebraic equations. To reduce the effort and minimize the time required for solving such rigorous models, bioprocesses are also described by approximate versions of the complex models or by statistical multivariate regression models that relate the data of dependent and independent variables. Model-based optimization problems arising from biotechnological processes are usually nonconvex in nature and often exhibit multiple local optima. A variety of optimization problems concerning culture growth, process operation, productivity, and economics can arise in the bioprocess engineering field. Bioprocess engineering optimization problems are mostly solved using either deterministic or stochastic optimization techniques [1,2]. Different optimization techniques based on the deterministic approach have been reported for solving various bioprocess engineering problems [3-18]. Most bioprocess engineering problems exhibit highly nonlinear dynamics, and they often present nonconvexity, discontinuity, and multimodality. Solving such problems requires robust and efficient optimization techniques. Stochastic optimization techniques are widely used to solve complex bioprocess engineering problems. These methods have the ability to locate solutions in the vicinity of the global optimum and can handle problems that are discontinuous and noisy. Additional advantages of these methods include their flexibility and ease of operation. Because of these advantages, various stochastic global optimization methods have been applied to solve different nonlinear optimization problems in the field of bioprocess engineering [19-39]. Real-life applications of various stochastic evolutionary and artificial intelligence-based optimization strategies to different biotechnological processes are illustrated in the subsequent sections of this chapter.

7.3 Media optimization of Chinese hamster ovary cells production process using differential evolution

In this work, the experimentally validated macroscopic mathematical model of the cell cultivation process, identified from a metabolic viewpoint, is coupled with differential evolution (DE) to optimize the input space of the Chinese hamster ovary (CHO) cell cultivation process.


7.3.1 CHO cell cultivation process and its macroscopic state space model

CHO cells are the most widely used mammalian cells in bioprocessing and biopharmaceutical applications. The CHO cell cultivation process involves the cultivation of cells in batch mode. CHO cells are robust in culture and are able to produce a variety of therapeutic antibodies and recombinant glycoproteins. The performance of the CHO cell cultivation process is significantly affected by the composition of the culture medium. Glucose and glutamine are the measured extracellular substrate species in the process, and the significantly released metabolites are lactate, ammonia, and alanine. Identifying and optimizing the components of the culture medium is most important for maximizing the productivity of CHO cells. The metabolic network representing the central metabolism of CHO cells is used to develop the elementary flux modes of CHO cells. The dynamic mathematical model of the process involves the mass balances of the extracellular species inside the reactor, written in terms of the vector of species concentrations and the vector of the macroreaction rates. A state space model of the process incorporating Michaelis–Menten kinetics is described as follows [40]:

$$\frac{dG(t)}{dt} = -a_1\frac{GX}{K_{G1}+G} - a_2\frac{GX}{K_{G2}+G} - a_6\frac{GQX}{(K_{G6}+G)(K_{Q6}+Q)} - a_7\frac{GQX}{(K_{G7}+G)(K_{Q7}+Q)} \quad (7.1)$$

$$\frac{dQ(t)}{dt} = -a_3\frac{QX}{K_{Q3}+Q} - a_4\frac{QX}{K_{Q4}+Q} - a_5\frac{QX}{K_{Q5}+Q} - 3a_6\frac{GQX}{(K_{G6}+G)(K_{Q6}+Q)} - 2a_7\frac{GQX}{(K_{G7}+G)(K_{Q7}+Q)} \quad (7.2)$$

$$\frac{dL(t)}{dt} = 2a_1\frac{GX}{K_{G1}+G} + a_4\frac{QX}{K_{Q4}+Q} \quad (7.3)$$

$$\frac{dN(t)}{dt} = a_3\frac{QX}{K_{Q3}+Q} + 2a_4\frac{QX}{K_{Q4}+Q} + 2a_5\frac{QX}{K_{Q5}+Q} + a_6\frac{GQX}{(K_{G6}+G)(K_{Q6}+Q)} + a_7\frac{GQX}{(K_{G7}+G)(K_{Q7}+Q)} \quad (7.4)$$

$$\frac{dA(t)}{dt} = a_3\frac{QX}{K_{Q3}+Q} \quad (7.5)$$

where X, G, Q, L, N, and A are the concentrations of biomass, glucose, glutamine, lactate, ammonia, and alanine, respectively. The $a_i$ are the maximum specific reaction rates, and $K_{Gi}$ and $K_{Qi}$ are the Michaelis constants. The numerical values of $a_1, a_2, a_3, a_4, a_5, a_6,$ and $a_7$ are identified as 3.5956, 0.1736, 0.2686, 0.2038, 0, 0.1427, and 0.1427, respectively. The values of $K_{Gi}$ and $K_{Qi}$ ($i = 1, 2, \ldots, 7$) are all taken to be 0.1.
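For concreteness, the balances (7.1)-(7.5) can be stepped forward with the explicit Euler scheme that Section 7.3.3 reports (unit step size over an 80 h batch). The following is a minimal Python sketch, not the authors' code, using the parameter values quoted above; because the biomass balance of the full model in [40] is not reproduced in this excerpt, X is simply held at its initial value here for illustration.

import numpy as np

a = np.array([3.5956, 0.1736, 0.2686, 0.2038, 0.0, 0.1427, 0.1427])
KG, KQ = 0.1, 0.1  # all Michaelis constants K_Gi and K_Qi

def rhs(G, Q, X):
    """Right-hand sides of Eqs. (7.1)-(7.5)."""
    mg = G / (KG + G)            # glucose Michaelis-Menten term
    mq = Q / (KQ + Q)            # glutamine Michaelis-Menten term
    gq = mg * mq                 # coupled glucose-glutamine term
    dG = -(a[0] + a[1]) * mg * X - (a[5] + a[6]) * gq * X
    dQ = -(a[2] + a[3] + a[4]) * mq * X - (3 * a[5] + 2 * a[6]) * gq * X
    dL = 2 * a[0] * mg * X + a[3] * mq * X
    dN = (a[2] + 2 * a[3] + 2 * a[4]) * mq * X + (a[5] + a[6]) * gq * X
    dA = a[2] * mq * X
    return dG, dQ, dL, dN, dA

def simulate(Gi, Qi, Xi, t_end=80.0, dt=1.0):
    G, Q, L, N, A = Gi, Qi, 0.0, 0.0, 0.0
    X = Xi  # biomass balance not shown in this excerpt; X frozen (see [40])
    for _ in range(int(t_end / dt)):
        dG, dQ, dL, dN, dA = rhs(G, Q, X)
        G = max(G + dt * dG, 0.0)  # keep substrate concentrations nonnegative
        Q = max(Q + dt * dQ, 0.0)
        L, N, A = L + dt * dL, N + dt * dN, A + dt * dA
    return G, Q, L, N, A

print(simulate(16.0, 5.4, 0.3))  # heuristic medium from Section 7.3.3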

7.3.2 DE-coupled model-based strategy for cell culture medium optimization

In this work, the experimentally validated macroscopic mathematical model of the cell cultivation process defined by Eqs. (7.1)-(7.5) is coupled with DE to optimize the input space of the cell cultivation process [41]. DE is a stochastic optimization algorithm that operates on a population of potential solutions by applying the principle of survival of the fittest to generate an optimal solution. The effectiveness and robustness of DE have been demonstrated in a variety of applications [22,33]. The DE algorithm and its basic optimization applications are given in Sections 4.4 and 5.5 of this book. Optimization of the culture medium is most important in the CHO cell cultivation process for maximizing the cell productivity. The input space of the CHO cell cultivation process is represented by the initial culture medium compositions of glucose (Gi), biomass (Xi), and glutamine (Qi). The responses are the product concentrations of cell density (X), glutamine (Q), and lactate (L). Maximization of the cell density (X) is the desired objective. The flowchart of the DE-coupled model-based optimization scheme is shown in Fig. 7.1.

[Figure 7.1: Flow chart for media optimization of the Chinese hamster ovary (CHO) cell process using model-based DE.]

7.3.3 Analysis of results

The initial population in the input space of the culture medium is randomly generated within the ranges of 15-17 for Gi, 5.2-5.4 for Qi, and 0.25-0.35 for Xi. The population size (NP), crossover constant (CR), and mutation constant (F) involved in DE are heuristically selected and set as NP = 100, CR = 0.59, and F = 0.1. The population in the DE search space therefore has dimension 100 × 3. The batch duration is specified as 80 h. The model Eqs. (7.1)-(7.5) are solved using Euler's integration method with unit step size over the 80 h duration, and the cell concentration at the final time is taken as the objective function value for each member of the random population in each generation. The mutation, recombination, and selection operations of DE are performed iteratively to alter the initial population. On convergence, the model-based DE strategy provides the optimal medium composition of glucose (Gi), biomass (Xi), and glutamine (Qi) that maximizes the biomass concentration. The maximum cell density achieved by this strategy is 3.392397 mM, with the corresponding culture medium conditions Gi = 16.45752 mM, Qi = 5.337633 mM, and Xi = 0.3 mM. The responses of the process variables obtained from the model with these optimized initial medium conditions are shown in Fig. 7.2. For comparison, an optimum medium composition is also selected heuristically, without the optimization strategy, as Gi = 16.0 mM, Qi = 5.4 mM, and Xi = 0.3 mM; the maximum cell density obtained from the model with this heuristically selected composition is 3.0 mM. The results thus indicate the better performance of the process model-based DE strategy for optimization of the CHO cell cultivation process.

[Figure 7.2: Dynamic responses based on the optimized culture medium composition.]
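The coupling just described maps naturally onto an off-the-shelf DE implementation. The sketch below uses SciPy's differential_evolution as a stand-in for the hand-coded DE of Section 4.4; final_cell_density is a hypothetical placeholder (a smooth dummy surrogate keeps the script runnable) for the Euler simulation of the full model of [40] returning the cell density at 80 h. Note that SciPy's popsize is a per-parameter multiplier rather than an absolute NP.

import numpy as np
from scipy.optimize import differential_evolution

def final_cell_density(z):
    """Hypothetical stand-in for X(t = 80 h) from the CHO model of [40].
    Replace with the Euler simulator sketched in Section 7.3.1 once the
    biomass balance is included; this dummy surrogate only keeps the
    script runnable."""
    Gi, Qi, Xi = z
    return 3.4 - 0.05 * (Gi - 16.46)**2 - 0.5 * (Qi - 5.34)**2 + 0.2 * Xi

bounds = [(15.0, 17.0), (5.2, 5.4), (0.25, 0.35)]   # ranges of Gi, Qi, Xi

# DE minimizes, so the objective is negated to maximize the final cell density.
res = differential_evolution(lambda z: -final_cell_density(z), bounds,
                             mutation=0.1, recombination=0.59, seed=1)
print("optimal (Gi, Qi, Xi):", res.x, " max cell density:", -res.fun)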

7.4 Response surface model-based ant colony optimization of lipopeptide biosurfactant process

In this work, a response surface model (RSM) developed using the central composite design data of the biosurfactant process is integrated with ant colony optimization (ACO) to derive an optimization strategy denoted RSM-ACO, which is applied for optimization of the lipopeptide biosurfactant process.

7.4.1 The lipopeptide biosurfactant process and its culture medium

Biosurfactants are microbial compounds that exhibit pronounced surface and emulsifying activities. They are produced extracellularly, or as part of the cell membrane, by a variety of microorganisms from various carbohydrate sources, oils, and biomass wastes. Biosurfactants have special advantages over synthetic surfactants, such as biodegradability, lower toxicity, and greater diversity. Among the various biosurfactants, the lipopeptide surfactin produced by Bacillus subtilis is one of the most powerful biosurfactants [42], with potential biotechnological and biomedical applications. The lipopeptide biosurfactant fermentation process medium consists of components such as glucose, monosodium glutamate, yeast extract, MgSO4·7H2O, and K2HPO4. The process responses are the lipopeptide and biomass concentrations. A bacterial strain of B. subtilis is used as the microbe for lipopeptide production. The component concentrations of the McKeen medium used in the process are glucose (2.5%), monosodium glutamate (1%), yeast extract (0.3%), MgSO4·7H2O (0.1%), K2HPO4 (0.1%), and KCl (0.05%). Sterilization of the medium is performed at 121°C for 20 min. Cultivation experiments are conducted in the laboratory, and samples are drawn and analyzed for biomass concentration and lipopeptide activity.

7.4.2 Experimental design and data generation

The potential applications of a biosurfactant depend on whether it can be produced economically with the desired characteristics. Modeling and optimization studies can provide greater insight into improving the productivity of the process under economic considerations. To build an accurate optimization model, the experimental data used for it must be sufficient and representative of the process, and experimental design provides better insight into the impact of the potential variables affecting the process. In this work, a central composite rotatable design (CCRD) is used to design the experiments and to generate the data for lipopeptide production by B. subtilis. The five independent process variables chosen for the CCRD are glucose (x1), monosodium glutamate (x2), yeast extract (x3), MgSO4·7H2O (x4), and K2HPO4 (x5). Five levels are specified for each design variable: lowest, low, center, high, and highest. These levels, expressed in percent, are x1 (0.5, 1, 1.5, 2, 2.5), x2 (1, 2, 3, 4, 5), x3 (0.1, 0.2, 0.3, 0.4, 0.5), x4 (0.1, 0.2, 0.3, 0.4, 0.5), and x5 (0.1, 0.2, 0.3, 0.4, 0.5). This design resulted in 32 experiments. The measured response variables are the biomass concentration (Y1) and the lipopeptide concentration (Y2). Further details on the experimental design and the design data of the lipopeptide biosurfactant process are reported in the literature [43].

7.4.3 Development of response surface models for lipopeptide biosurfactant process

The objective of this study is to develop RSMs using the data of the designed experiments of the lipopeptide biosurfactant process and to couple these models with the ACO algorithm to optimize the biosurfactant process culture medium so as to achieve the desired process responses. The RSM, which establishes statistical regression relationships between input and output variables, is the most preferred approach for fermentation media optimization and has been studied for different biosurfactant processes [44,45]. Second-order polynomial models are constructed using the experimental design data of the lipopeptide biosurfactant process [43]. The coefficients of the models representing the biomass and lipopeptide concentrations are determined using the method of least squares. The polynomial regression models identified for the biomass and lipopeptide productivities are given as follows:

$$\begin{aligned} Y_{bio} ={}& 12.99681 + 0.260106X_1 + 0.15833X_2 + 0.975X_3 - 0.025X_4 - 0.19167X_5 \\ &- 0.0625X_1X_2 + 0.1625X_1X_3 - 0.1375X_2X_3 - 0.1125X_1X_4 + 0.0375X_2X_4 \\ &+ 0.1125X_3X_4 - 0.6875X_1X_5 - 0.1375X_2X_5 - 0.0125X_3X_5 - 0.0375X_4X_5 \\ &- 3.0507X_1^2 - 2.92X_2^2 - 1.771X_3^2 - 1.396X_4^2 - 0.321X_5^2 \end{aligned} \quad (7.6)$$

$$\begin{aligned} Y_{lipo} ={}& 8.80212 - 0.0234X_1 - 0.7166X_2 + 1.216607X_3 + 1.033333X_4 - 0.66667X_5 \\ &- 0.4X_1X_2 - 0.25X_1X_3 + 0.625X_2X_3 - 0.65X_2X_4 - 1.25X_3X_4 \\ &- 1.625X_1X_5 - 0.475X_2X_5 + 0.00001X_3X_5 + 0.001X_4X_5 \\ &- 0.640X_1^2 - 0.877X_2^2 - 0.277128X_3^2 + 0.9728X_4^2 + 1.1978X_5^2 \end{aligned} \quad (7.7)$$

The factors X1, X2, X3, X4, and X5 in the above equations represent the coded terms of the RSM methodology.
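Translated directly into Python, the fitted polynomials (7.6) and (7.7) become ordinary functions of the coded variables and can serve as fitness functions for the search methods discussed below. A minimal sketch:

import numpy as np

def y_bio(x):
    """Biomass model, Eq. (7.6); x holds the coded variables X1..X5."""
    x1, x2, x3, x4, x5 = x
    return (12.99681 + 0.260106*x1 + 0.15833*x2 + 0.975*x3 - 0.025*x4
            - 0.19167*x5 - 0.0625*x1*x2 + 0.1625*x1*x3 - 0.1375*x2*x3
            - 0.1125*x1*x4 + 0.0375*x2*x4 + 0.1125*x3*x4 - 0.6875*x1*x5
            - 0.1375*x2*x5 - 0.0125*x3*x5 - 0.0375*x4*x5 - 3.0507*x1**2
            - 2.92*x2**2 - 1.771*x3**2 - 1.396*x4**2 - 0.321*x5**2)

def y_lipo(x):
    """Lipopeptide model, Eq. (7.7); x holds the coded variables X1..X5."""
    x1, x2, x3, x4, x5 = x
    return (8.80212 - 0.0234*x1 - 0.7166*x2 + 1.216607*x3 + 1.033333*x4
            - 0.66667*x5 - 0.4*x1*x2 - 0.25*x1*x3 + 0.625*x2*x3 - 0.65*x2*x4
            - 1.25*x3*x4 - 1.625*x1*x5 - 0.475*x2*x5 + 0.00001*x3*x5
            + 0.001*x4*x5 - 0.640*x1**2 - 0.877*x2**2 - 0.277128*x3**2
            + 0.9728*x4**2 + 1.1978*x5**2)

print(y_bio(np.zeros(5)), y_lipo(np.zeros(5)))  # center-point predictions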

7.4.4 RSM-ACO strategy for lipopeptide process optimization

ACO is an efficient metaheuristic search algorithm used to solve combinatorial optimization problems. The ACO algorithm is described in Section 4.5, and its implementation for base case problems is given in Section 5.6. In this work, the ACO algorithm is coupled with the polynomial regression models of the biosurfactant process to optimize the process culture medium composition. The medium composition is represented by glucose (x1), monosodium glutamate (x2), yeast extract (x3), MgSO4·7H2O (x4), and K2HPO4 (x5). Ybio (Y1) and Ylipo (Y2) denote the responses of biomass and lipopeptide concentrations. An objective function is defined in terms of the actual and model-predicted concentrations of lipopeptide as

$$J = f(\theta) = \sum_{i=1}^{l} \left( Y_i - \hat{Y}_i \right)^2 \quad (7.8)$$

where J is the cost function, θ is the vector of parameters, l is the number of observations, $Y_i$ is the measured value of the ith variable, and $\hat{Y}_i$ is the corresponding predicted value. The optimization problem is stated as

$$\text{Maximize: } Y_{lipo}(x_1, x_2, x_3, x_4, x_5) \quad (7.9)$$

within the ranges of medium composition defined by

$$0.5 \le x_1 \le 2.5, \quad 1 \le x_2 \le 5, \quad 0.1 \le x_3 \le 0.5, \quad 0.1 \le x_4 \le 0.5, \quad 0.1 \le x_5 \le 0.5 \quad (7.10)$$

The notation used for the terms in the ACO algorithm can be referred to in Section 4.5. The dimension of the vector (θ) representing the medium compositions is 5.

[Figure 7.3: Graphical representation of the discretized parameter space and the ant pathway structure in ACO.]

The number of strata (M) is assigned as 3, and the number of ants is chosen as 20. The constants are appropriately tuned and set as Cc = 0.5, A = 1.0, and Cs = 0.3. During implementation of the ACO algorithm, each ant follows any one of the N pathways, e.g., pathway 29 (1,2,1,1,2) or pathway 99 (2,1,2,3,3), as shown in Fig. 7.3. The ants perform the tasks of selecting the pathways that they pass, remembering the parameter strata along the pathways, passing the parameter values to the model of the process, evaluating the value of the objective function for each pathway, and updating the pheromone based on the objective function values. The ACO algorithm is implemented iteratively until convergence in the solution is obtained. The number of pathways generated for the ants to travel is 243. The flowchart of the RSM-ACO strategy is shown in Fig. 7.4.

[Figure 7.4: Flowchart of the RSM-ACO strategy.]
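A minimal sketch of the stratified ACO search described above: each of the five parameters is discretized into M = 3 strata, each ant assembles a pathway by choosing one stratum per parameter with pheromone-weighted probabilities, and the pheromone on the visited strata is reinforced according to the fitness. The simple evaporation/reinforcement rule used here is a generic stand-in for the book's update involving Cc, A, and Cs, and the standard CCRD coding implied by the levels in Section 7.4.2 is assumed when evaluating Eq. (7.7).

import numpy as np

rng = np.random.default_rng(0)

lo = np.array([0.5, 1.0, 0.1, 0.1, 0.1])   # lower bounds, Eq. (7.10)
hi = np.array([2.5, 5.0, 0.5, 0.5, 0.5])   # upper bounds
M, n_ants, n_iter, rho = 3, 20, 200, 0.1   # strata, ants, iterations, evaporation

center = (lo + hi) / 2.0                   # CCRD center point
step = (hi - lo) / 4.0                     # one coded unit per factor

b1 = np.array([-0.0234, -0.7166, 1.216607, 1.033333, -0.66667])
b2 = np.array([-0.640, -0.877, -0.277128, 0.9728, 1.1978])
inter = {(0, 1): -0.4, (0, 2): -0.25, (1, 2): 0.625, (1, 3): -0.65,
         (2, 3): -1.25, (0, 4): -1.625, (1, 4): -0.475, (2, 4): 0.00001,
         (3, 4): 0.001}

def y_lipo(actual):
    """Eq. (7.7) evaluated after converting actual levels to coded variables."""
    x = (actual - center) / step
    return (8.80212 + b1 @ x + b2 @ x**2
            + sum(v * x[i] * x[j] for (i, j), v in inter.items()))

tau = np.ones((5, M))                        # pheromone per (parameter, stratum)
edges = np.linspace(lo, hi, M + 1, axis=1)   # stratum boundaries per parameter
best_x, best_f = None, -np.inf

for _ in range(n_iter):
    for _ in range(n_ants):
        # Each ant builds a pathway: one stratum per parameter, chosen with
        # pheromone-weighted probabilities, then a point within each stratum.
        s = np.array([rng.choice(M, p=tau[j] / tau[j].sum()) for j in range(5)])
        x = np.array([rng.uniform(edges[j, s[j]], edges[j, s[j] + 1])
                      for j in range(5)])
        f = y_lipo(x)
        tau[np.arange(5), s] += max(f, 0.0)  # reinforce the visited strata
        if f > best_f:
            best_x, best_f = x, f
    tau *= (1.0 - rho)                       # pheromone evaporation

print(best_x, best_f)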

7.4.5 Analysis of results

The RSM-ACO strategy is executed by writing the program in the C language. The optimized medium composition found by ACO is x1 = 1.098 g/L, x2 = 4.01 g/L, x3 = 0.426 g/L, x4 = 0.431 g/L, and x5 = 0.219 g/L. The maximum lipopeptide concentration (Ylipo) obtained with respect to the optimized medium composition is 1.501 g/L, and the corresponding biomass concentration (Ybio) is 4.291 g/L. The response surface contour plot of x5 versus x4 for lipopeptide productivity is shown in Fig. 7.5. A classical Nelder–Mead optimization (NMO) method [46-48] is also employed for comparison with ACO. The NMO is coupled with the polynomial RSM of the lipopeptide biosurfactant process to optimize the media composition. The tuning parameters involved in NMO are the reflection, expansion, and contraction coefficients, which are set as 1.0, 2.0, and 0.5, respectively. The optimum medium components found by RSM-NMO are x1 = 2 g/L, x2 = 3.8 g/L, x3 = 0.2 g/L, x4 = 0.4 g/L, and x5 = 0.4 g/L. The maximum lipopeptide concentration and the biomass concentration corresponding to this optimum culture medium composition are obtained as 1.327 g/L and 3.99 g/L, respectively. These results indicate the better performance of the RSM-ACO strategy over the RSM-NMO approach. The experimentally validated lipopeptide concentration based on the optimized medium of the RSM-ACO strategy is found to be 1.498 g/L.

[Figure 7.5: Response surface contour plot of x5 versus x4 on lipopeptide productivity.]
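For the comparison run, SciPy's built-in Nelder–Mead simplex can stand in for the classical NMO (recent SciPy versions accept simple bounds for this method). The sketch below reuses the coded-variable form of Eq. (7.7), repeated so that the script is self-contained:

import numpy as np
from scipy.optimize import minimize

center = np.array([1.5, 3.0, 0.3, 0.3, 0.3])   # CCRD center point (percent)
step = np.array([0.5, 1.0, 0.1, 0.1, 0.1])     # one coded unit per factor

b1 = np.array([-0.0234, -0.7166, 1.216607, 1.033333, -0.66667])
b2 = np.array([-0.640, -0.877, -0.277128, 0.9728, 1.1978])
inter = {(0, 1): -0.4, (0, 2): -0.25, (1, 2): 0.625, (1, 3): -0.65,
         (2, 3): -1.25, (0, 4): -1.625, (1, 4): -0.475, (2, 4): 0.00001,
         (3, 4): 0.001}

def neg_y_lipo(actual):
    x = (actual - center) / step   # convert to the coded variables of Eq. (7.7)
    return -(8.80212 + b1 @ x + b2 @ x**2
             + sum(v * x[i] * x[j] for (i, j), v in inter.items()))

res = minimize(neg_y_lipo, x0=center, method="Nelder-Mead",
               bounds=list(zip(center - 2 * step, center + 2 * step)))
print("optimum medium:", res.x, " max Y_lipo:", -res.fun)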

7.5 ANN model-based multiobjective optimization of rhamnolipid biosurfactant process using NSDE strategy

In this study, different multiobjective optimization strategies are derived by integrating an artificial neural network (ANN) model with DE involving Naïve & Slow and ε-constrained techniques, and they are applied for Pareto optimization of a rhamnolipid biosurfactant process.

7.5.1 The rhamnolipid biosurfactant process and its culture medium

Rhamnolipids are a class of biosurfactants produced by Pseudomonas aeruginosa. These biosurfactants are mainly used in bioremediation, pharmaceuticals, therapeutics, cosmetics, and detergents. Among the various methods of production, the process of rhamnolipid production by P. aeruginosa is of special interest because it can be implemented at larger scale. P. aeruginosa AT10 produces a mixture of surface-active rhamnolipids when cultivated on mineral medium with waste free fatty acids as the carbon source. The cell growth and the accumulation of metabolic products of the biosurfactant process are strongly influenced by medium components such as the carbon source, nitrogen sources, growth factors, and inorganic salt concentrations, as well as by environmental factors and growth conditions such as pH, temperature, agitation, and oxygen availability. Further literature on biosurfactants and rhamnolipid production can be found elsewhere [49-52].

7.5.2 Experimental design and data generation

Optimization of the media composition is important for design and scale-up of the process. Different methods are used to design the experiments and to generate the data needed for modeling and optimization. Statistical optimization of the medium concentrations and physical factors plays a very significant role in enhancing bioprocess productivity, and statistical experimental design data are very useful for building RSMs and analyzing the process. The components that are critical for the rhamnolipid process are the carbon source, the nitrogen source (NaNO3), the phosphate content (K2HPO4/KH2PO4), and the iron content (FeSO4·7H2O). The measured responses are the biomass concentration and the rhamnolipid concentration. The details of the experimental data and the culture medium used for the rhamnolipid process are reported by Abalos et al. [53]. The central composite design-based experimental data for the rhamnolipid process are given in the work of Satya Eswari et al. [54].


7.5.3 ANN model for rhamnolipid process

ANNs are computer systems consisting of a large number of computational units called neurons connected in a massively parallel structure. ANNs map the input data (X) to the output data (Y) through a nonlinear function f, i.e., Y = f(X). The ANN concepts and the development of ANN models for different applications have been explored by various researchers [30,55-57]. A feedforward network with a sigmoid activation function is used to build the ANN model representing the rhamnolipid process. Four factors, namely, carbon source (X1), nitrogen source (X2), phosphorous source (X3), and iron source (X4), are used as inputs to the ANN. The responses are the biomass concentration (Y1) and rhamnolipid concentration (Y2), which represent the network outputs. The ANN model structure representing the rhamnolipid process is shown in Fig. 7.6. The network input and output data and the training and validation procedure used to configure the optimal neural network for the rhamnolipid process are detailed by Satya Eswari et al. [54].

[Figure 7.6: ANN model structure for rhamnolipid production using Pseudomonas aeruginosa AT10.]

7.5.4 Formulation of multiobjective optimization problem

Many bioprocesses involve the production of more than one product during the course of fermentation. The conditions for optimization of such processes should be established such that they favor the more desirable product or products. An optimization problem involving simultaneous optimization of two or more objectives is referred to as multiobjective optimization. When the objectives are conflicting, achieving an optimum for one objective requires some compromise on the other objectives. Such nondominated objectives are called Pareto optimal or noninferior solutions.

The multiobjective optimization problem can be formulated as follows:

$$\text{Minimize/maximize } f_i(x), \quad i = 1, 2, \ldots, N_o \quad (7.11)$$

subject to

$$g_j(x) \le 0, \quad j = 1, 2, \ldots, J; \qquad h_k(x) = 0, \quad k = 1, 2, \ldots, K$$

where $f_i$ is the ith objective function, x is a decision vector, $N_o$ is the number of objectives, and J and K are the numbers of inequality and equality constraints, respectively. A reasonable solution to a multiobjective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. For a multiobjective optimization problem, any two solutions $x^1$ and $x^2$ can have one of two possibilities: one dominates the other, or neither dominates the other. In a minimization problem, without loss of generality, a solution $x^1$ dominates $x^2$ if the following two conditions are satisfied:

$$\forall i \in \{1, 2, \ldots, N_o\}: f_i(x^1) \le f_i(x^2), \qquad \exists j \in \{1, 2, \ldots, N_o\}: f_j(x^1) < f_j(x^2). \quad (7.12)$$

If either of these conditions is violated, the solution $x^1$ does not dominate the solution $x^2$. If $x^1$ dominates $x^2$, $x^1$ is called the nondominated solution within the set $\{x^1, x^2\}$. The solutions that are nondominated within the entire search space are denoted as Pareto optimal and constitute the Pareto-optimal set or Pareto-optimal front.
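The dominance test of Eq. (7.12) takes only a few lines of code, and the pairwise filter below is essentially what the Naïve & Slow technique of Section 7.5.5 performs in each ranking pass (quadratic in the population size). A minimal sketch, with the maximized objective negated so that both objectives are minimized:

import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization, Eq. (7.12))."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def nondominated(F):
    """Indices of the nondominated (Pareto) rows of F (points x objectives)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

# Example: minimize biomass Y1 and maximize rhamnolipid Y2 (negated here).
F = np.array([[3.0, -10.0], [2.5, -8.0], [4.0, -12.0], [3.5, -9.0]])
print(nondominated(F))  # -> [0, 1, 2]; point 3 is dominated by point 0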

7.5.5 ANN-NSDE strategy for multiobjective optimization of rhamnolipid process

In the rhamnolipid biosurfactant process, the carbon source, nitrogen source, phosphate ratio, and iron are the culture medium components, also referred to here as factors, and the responses are the biomass and rhamnolipid concentrations. The objective is to maximize the rhamnolipid concentration while minimizing the biomass concentration. No single optimal solution with respect to both objectives exists, as improving the performance of one objective deteriorates the performance of the other. For optimal rhamnolipid production, the best configurations of media components are to be established. In this study, the ANN model of the process is integrated with nondominated sorting differential evolution (NSDE) to derive the ANN-NSDE strategy for multiobjective optimization of the biosurfactant process. The DE algorithm involved in the NSDE strategy is given in Section 4.4, and its implementation for base case applications is given in Section 5.5. Depending on the techniques, such as Naïve & Slow and ε-constraint, employed with NSDE for Pareto optimization, different multiobjective optimization strategies are formed. These ANN model-coupled strategies are referred to here as ANN-NSDE with Naïve & Slow, ANN-NSDE with ε-constraint, and ANN-NSDE with Naïve & Slow and ε-constraint. The performance of these strategies is evaluated for Pareto optimization of the rhamnolipid biosurfactant process.

7.5.5.1 ANN-NSDE with Naïve & Slow

This strategy is derived by integrating the ANN model with NSDE and using the Naïve & Slow technique for finding the Pareto optimal solution. The flow scheme of ANN-NSDE with Naïve & Slow is shown in Fig. 7.7. The procedure for implementation of this strategy, along with its parameters, is described elsewhere [54]. The initial population for NSDE is randomly generated within the ranges of 10.0-50.0 g dm⁻³ for X1, 1.0-9.0 g dm⁻³ for X2, 1.0-7.0 g dm⁻³ for X3, and 1.0-21.0 g dm⁻³ for X4. The size of the initial population (NP) is selected to be 300. The well-trained ANN model is used to provide the fitness function values of the biomass concentration (Y1) and the rhamnolipid activity (Y2) with respect to the decision variables corresponding to the random population. The Naïve & Slow technique [54,58] is used for Pareto ranking of the solutions for biomass concentration and rhamnolipid activity. A total of 18 Pareto solutions are obtained from the initial population in 1750 generations, as shown in Fig. 7.8.

[Figure 7.7: Flow chart of ANN-NSDE with the Naïve & Slow technique.]

[Figure 7.8: Pareto optimal solutions of ANN-NSDE with Naïve & Slow.]

7.5.5.2 ANN-NSDE with ε-constraint

This strategy is derived by coupling the ANN model with NSDE using the ε-constraint technique for finding the optimal solution. The ε-constrained technique converts the multiobjective optimization problem into a single-objective optimization problem [59]. For the case of a problem in two objectives f(x) and φ(x), this method is defined as


$$\text{Maximize } f(x) \quad \text{subject to} \quad \varphi(x) \le \varepsilon(g) \quad (7.13)$$

where ε(g) denotes the ε-level used in the ε-level comparison. For the rhamnolipid process with two objectives, maximization of the rhamnolipid activity is the main objective, while the biomass concentration is restricted to lie within the ε-level. In this method, the initial population generation is the same as in the ANN-NSDE with Naïve & Slow method. The NSDE with ε-constraint technique is applied according to the procedure given elsewhere [54,58]. The flow scheme of ANN-NSDE with ε-constraint is shown in Fig. 7.9. The Pareto optimal solution obtained by this method is shown in Fig. 7.10.

[Figure 7.9: Flow chart of ANN-NSDE with the ε-constraint technique.]

[Figure 7.10: Results of ANN-NSDE with ε-constraint: (A) Pareto optimal solution; (B) ε-level plot.]
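One common way to realize Eq. (7.13) inside a single-objective DE run is to fold the ε-level comparison into the fitness as a penalty on constraint violation. The sketch below is a simplified stand-in for the ε-level control procedure of [54,58]: ann_predict is a hypothetical smooth surrogate for the trained ANN of Section 7.5.3, and a fixed ε-level replaces the scheduled ε(g).

import numpy as np
from scipy.optimize import differential_evolution

def ann_predict(x):
    """Hypothetical surrogate for the trained ANN of Section 7.5.3, mapping
    the medium (X1..X4) to (biomass Y1, rhamnolipid Y2); made-up smooth
    functions keep the script runnable."""
    x = np.asarray(x)
    y1 = 2.0 + 0.05 * x.sum()
    y2 = 18.0 - 0.002 * np.sum((x - np.array([50.0, 5.0, 1.4, 17.0]))**2)
    return y1, y2

def eps_fitness(x, eps=4.0, penalty=1e3):
    y1, y2 = ann_predict(x)
    violation = max(0.0, y1 - eps)      # phi(x) <= eps, cf. Eq. (7.13)
    return -y2 + penalty * violation    # maximize Y2 subject to Y1 <= eps

bounds = [(10.0, 50.0), (1.0, 9.0), (1.0, 7.0), (1.0, 21.0)]  # X1..X4 ranges
res = differential_evolution(eps_fitness, bounds, seed=1)
print(res.x, -res.fun)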

[Figure 7.11: Flow chart of ANN-NSDE with Naïve & Slow and ε-constraint.]

7.5.5.3 ANN-NSDE with Naïve & Slow and ε-constraint

The flow scheme of ANN-NSDE with Naïve & Slow and ε-constraint is shown in Fig. 7.11. In this strategy, the initial population and its size are chosen to be the same as in ANN-NSDE with ε-constraint. The ANN model is called to provide the fitness (objective) function values for the responses with respect to the decision variables. The NSDE with the Naïve & Slow technique followed by the ε-constraint is implemented to obtain Pareto optimal solutions for the rhamnolipid process [54]. The converged Pareto optimal set obtained by this strategy at the 1750th generation is plotted in Fig. 7.12.

7.5.6 Analysis of results

Different multiobjective optimization strategies are derived by integrating the ANN model with DE involving the Naïve & Slow and ε-constrained techniques, and they are applied for Pareto optimization of the rhamnolipid biosurfactant process. The well-trained ANN model based on the central composite design data of the biosurfactant process is used to provide the fitness function values for the ANN-NSDE strategies. The analysis of the results exhibits the effectiveness of the ANN-NSDE strategy with the Naïve & Slow and ε-constrained techniques for Pareto optimization of the biosurfactant process. The optimum medium composition obtained by this strategy is X1 = 49.858 g dm⁻³, X2 = 4.988 g dm⁻³, X3 = 1.416 g dm⁻³, and X4 = 17.119 g dm⁻³, with a corresponding maximum rhamnolipid activity of 18.0747 g dm⁻³.

[Figure 7.12: Pareto optimal solutions of ANN-NSDE with Naïve & Slow and ε-constraint.]


7.6 ANN-DE strategy for simultaneous optimization of rhamnolipid biosurfactant process

In this study, simultaneous optimization of the rhamnolipid process is carried out by combining a well-trained ANN model with DE and by employing a distance minimization function. In this strategy, individual ANN models are derived for the rhamnolipid and biomass concentrations. These models are then combined with DE to provide the individually optimized objective function values representing the rhamnolipid and biomass concentrations. Simultaneous optimization of the rhamnolipid process is then performed by using the response data of the DE-integrated ANN models in a quantitative function referred to as a distance minimization function. In contrast to the earlier work in Section 7.5, where the ANN model is derived based on the reported experimental design data of the rhamnolipid process [53], the ANN model configured for simultaneous optimization of the rhamnolipid biosurfactant process in this section is based on the experimental design data of Satya Eswari et al. [60].

7.6.1 The process and the culture medium

A lyophilized culture (2423) of P. aeruginosa, a bacterial strain obtained in powder form from IMTECH-MTCC Chandigarh, India, is activated in 5 mL of nutrient broth kept in laminar air flow. The nutrient broth is composed of beef extract (1 g/L), yeast extract (2 g/L), peptone (5 g/L), and sodium chloride (5 g/L). The broth culture is incubated in a rotary shaker (190 rpm) at 30°C, and it is used to prepare agar slants. The agar slants are subcultured, and the inoculum is used for cultivation in McKeen medium. P. aeruginosa is cultivated in 100 mL of McKeen medium with the composition of glucose (10.0 g/L), NH4NO3 (1.7 g/L), yeast extract (5.0 g/L), MgSO4·7H2O (0.2 g/L), KH2PO4 (3.0 g/L), Na2HPO4 (7.0 g/L), and 1 mL hexadecane. The McKeen medium is sterilized at 121°C for 20 min and incubated in a rotary shaker (160 rpm) for 5 days before use in cultivation. Cultivation experiments are conducted in 1000 mL conical flasks containing 500 mL of McKeen medium for 48 h on a rotary shaker operated at 160 rpm and 30°C. The samples are drawn and analyzed for biomass concentration and rhamnolipid activity.

Table 7.1 The levels of the five factors (nutrients) in the central composite rotatable design (levels in %).

Factor             | Lowest | Low | Center | High | Highest
-------------------|--------|-----|--------|------|--------
Glucose (x1)       | 7      | 12  | 17     | 22   | 27
NH4NO3 (x2)        | 0      | 1   | 2      | 3    | 4
Yeast extract (x3) | 0      | 5   | 10     | 15   | 20
MgSO4·7H2O (x4)    | 0.1    | 0.2 | 0.3    | 0.4  | 0.5
KH2PO4 (x5)        | 2      | 3   | 4      | 5    | 6

7.6.2 Design of experiments for rhamnolipid process

A CCRD is used to design the experiments for the rhamnolipid biosurfactant process. The levels for the factors (nutrients) used in the design are given in Table 7.1. A total of 32 experiments are conducted as per the design, and the results are analyzed for rhamnolipid activity and biomass concentration. The experimental design matrix for the rhamnolipid process is reported in the work of Satya Eswari et al. [60].

7.6.3 ANN model for rhamnolipid process

The modeling capabilities of ANNs are reported elsewhere [55-58]. In this work, a multilayer feedforward ANN is used to build the relationships between the input and output data of the rhamnolipid process. The input layer of the network has five nodes and a bias node, representing the input variables of glucose, NH4NO3, yeast extract, MgSO4·7H2O, and KH2PO4, together with the bias input. The network output layer has two nodes representing the biomass concentration and rhamnolipid activity. The number of nodes in the hidden layer is chosen as 12. The initial interconnection weights between the nodes of the input to hidden layer and of the hidden to output layer are arbitrarily assigned. The sigmoid function is used as the activation function for the hidden nodes. The learning rate and momentum factor involved in the ANN training algorithm are appropriately chosen. The ANN structure used for the rhamnolipid process is shown in Fig. 7.13.

[Figure 7.13: Artificial neural network architecture for the rhamnolipid biosurfactant process.]

The normalized input and output design data of the 32 experiments [60] are used sequentially and iteratively to train the network. The iterative convergence of the network with the updated optimum interconnection weights is established by minimizing the network error function defined by

$$E(w) = \sum_{p=1}^{P} E_p \quad (7.14)$$

where $E_p$ is the sum of squared errors for each training pattern,

$$E_p = \sum_{k=1}^{M} \left( d_{pk} - y_{pk} \right)^2$$

Here P refers to the number of training patterns, M refers to the number of network outputs, $d_{pk}$ denotes the model-predicted output, and $y_{pk}$ denotes the actual process output. For the case of the rhamnolipid process experimental data, P = 32 and M = 2. The minimization of $E_p$ is accomplished by training the network using a gradient descent technique called the generalized delta rule. The well-trained network can then be used to predict the output responses based on the input culture medium conditions.
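A minimal numpy sketch of the 5-12-2 sigmoid network with a generalized-delta-rule update is given below; the random arrays stand in for the normalized 32-pattern design data of [60], and the momentum term mentioned above is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 12, 2   # input, hidden, and output nodes (Section 7.6.3)

W1 = rng.normal(scale=0.5, size=(n_hid, n_in + 1))   # last column: bias weights
W2 = rng.normal(scale=0.5, size=(n_out, n_hid + 1))

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sig(W1 @ np.append(x, 1.0))   # hidden activations (sigmoid)
    y = sig(W2 @ np.append(h, 1.0))   # network outputs
    return h, y

def train_pattern(x, d, lr=0.5):
    """One generalized-delta-rule update for a single (input, target) pattern."""
    global W1, W2
    h, y = forward(x)
    dy = (y - d) * y * (1 - y)                # output-layer deltas
    dh = (W2[:, :-1].T @ dy) * h * (1 - h)    # hidden-layer deltas
    W2 -= lr * np.outer(dy, np.append(h, 1.0))
    W1 -= lr * np.outer(dh, np.append(x, 1.0))
    return np.sum((d - y) ** 2)               # E_p of Eq. (7.14)

X = rng.random((32, n_in))    # dummy normalized inputs (replace with [60] data)
D = rng.random((32, n_out))   # dummy normalized targets
for epoch in range(500):
    E = sum(train_pattern(x, d) for x, d in zip(X, D))
print("E(w) after training:", E)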

7.6.4 Simultaneous optimization of rhamnolipid biosurfactant process using ANN-DE

In the rhamnolipid biosurfactant process, the objective is to maximize the rhamnolipid activity while minimizing the biomass concentration. The culture medium composition that maximizes the rhamnolipid activity might differ from the one that minimizes the biomass concentration. A single set of medium conditions that satisfies both objectives is therefore needed. For this purpose, a criterion of optimization based on the distance between the predicted value of each response and the optimum value of each response is used. This criterion is represented by [61]

$$S = \sqrt{\sum_{i} \left( \frac{F_{Di} - F_{Oi}}{SD_i} \right)^2} \qquad (7.15)$$

where S is the distance function, SD_i is the standard deviation of the observed values for each response variable, F_Di is the ideal value of each response variable optimized individually over the experimental region, and F_Oi is the predicted value of each response variable for the same set of medium conditions. To apply this criterion, individually optimized values of the rhamnolipid activity and biomass concentration are required. For this purpose, the well-trained ANN model of the rhamnolipid process is combined with DE to evaluate the maximum value of rhamnolipid activity and the minimum value of biomass concentration, along with their respective sets of medium compositions.
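Eq. (7.15) itself reduces to a one-line computation. In the sketch below, the predicted responses and standard deviations are hypothetical placeholders; only the ideal values F_D correspond to the individually optimized responses reported in Section 7.6.4.1.

```python
import numpy as np

def distance_S(F_O, F_D, SD):
    """Distance criterion of Eq. (7.15): F_O are the model-predicted
    responses at a candidate medium, F_D the individually optimized
    ideal responses, SD the standard deviations of observed values."""
    F_O, F_D, SD = map(np.asarray, (F_O, F_D, SD))
    return float(np.sqrt(np.sum(((F_D - F_O) / SD) ** 2)))

# Hypothetical usage: F_D = (54.954, 8.02) g/L for rhamnolipid activity
# and biomass; the F_O and SD values here are placeholders.
print(distance_S(F_O=[50.0, 9.5], F_D=[54.954, 8.02], SD=[12.0, 1.5]))
```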

7.6.4.1 Individually optimized responses

The DE algorithm and its basic optimization applications are given in Sections 4.4 and 5.5 of this book. DE explores all regions of the solution space using a population of individuals. Initially, the population of individuals is generated randomly. Then mutation, crossover, and selection operations are performed in each generation to optimize the desired objective function. In this work, DE is combined with the well-trained ANN model to determine the medium composition that maximizes the rhamnolipid activity. This problem is stated as

$$\text{Maximize } Y_{rham}(x_1, x_2, x_3, x_4, x_5) \qquad (7.16)$$

within the ranges of medium composition in percentage:

$$7 \le x_1 \le 27, \quad 0 \le x_2 \le 4, \quad 0 \le x_3 \le 20, \quad 0.1 \le x_4 \le 0.5, \quad 2 \le x_5 \le 6 \qquad (7.17)$$

An initial population of size 30 is generated within these ranges of medium compositions, and the DE operations are performed in integration with the well-trained ANN model. The maximum activity of rhamnolipid (Yrham) is attained as 54.954 g/L. The corresponding culture medium composition in percentage is found to be glucose (x1) = 23.110, NH4NO3 (x2) = 3.210, KH2PO4 (x3) = 6.784, MgSO4·7H2O (x4) = 0.236, and yeast extract (x5) = 2.718. Similarly, DE is combined with the well-trained ANN model to determine the medium composition that minimizes the biomass concentration. This problem is stated as

$$\text{Minimize } Y_{bio}(x_1, x_2, x_3, x_4, x_5) \qquad (7.18)$$

The DE operations are performed in integration with the well-trained ANN model, with the initial population of size 30 generated within the ranges in Eq. (7.17). The minimum value of biomass concentration (Ybio) is attained as 8.02 g/L. The corresponding culture medium composition in percentage is found to be glucose (x1) = 11.680, NH4NO3 (x2) = 4.165, KH2PO4 (x3) = 0.399, MgSO4·7H2O (x4) = 0.421, and yeast extract (x5) = 5.099.
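A compact DE loop over the bounds of Eq. (7.17) could look like the sketch below. The trained ANN is replaced by a dummy `fitness` function, and the mutation factor F and generation count are illustrative assumptions; only the population size of 30 and the crossover rate of 0.7 used later in the chapter are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[7, 27], [0, 4], [0, 20], [0.1, 0.5], [2, 6]])  # Eq. (7.17)
NP, D, F, CR, GENS = 30, 5, 0.5, 0.7, 200        # F and GENS are illustrative

def fitness(x):
    # Placeholder for the trained ANN prediction of Yrham(x);
    # substitute the real model here (dummy objective below).
    return -np.sum((x - bounds.mean(axis=1)) ** 2)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], (NP, D))
fit = np.array([fitness(x) for x in pop])

for _ in range(GENS):
    for i in range(NP):
        # Mutation: combine three distinct individuals other than i
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
        # Uniform crossover with at least one gene from the mutant
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True
        trial = np.where(cross, mutant, pop[i])
        # Selection (maximization)
        f_trial = fitness(trial)
        if f_trial > fit[i]:
            pop[i], fit[i] = trial, f_trial

print("best Yrham (dummy):", fit.max(), "at", pop[fit.argmax()])
```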

7.6.4.2 Simultaneous optimization using a distance minimization function

The individually evaluated rhamnolipid and biomass responses, Yrham and Ybio, represent FDi (i = 1, 2) in Eq. (7.15). The DE in integration with the well-trained ANN model, along with the initial population generated in the above variable ranges of medium composition, is implemented to compute FOi (i = 1, 2) in Eq. (7.15) and to minimize the distance S, thus establishing a single set of medium conditions. A population size of 30, a mutation rate of 0.04, and a uniform crossover rate of 0.7 are considered for DE. The optimal responses of rhamnolipid activity and biomass concentration are obtained as 55.9 and 8.2 g/L, respectively. The optimized medium composition corresponding to these responses is obtained (in percentage) as glucose (x1) = 24.079394, NH4NO3 (x2) = 3.286996, KH2PO4 (x3) = 7.952117, MgSO4·7H2O (x4) = 0.242368, and yeast extract (x5) = 2.6915.

7.6.5 Simultaneous optimization of rhamnolipid biosurfactant process using RSM-DE strategy

Response surface methodology (RSM) uses experimental designs and regression analysis for analyzing and optimizing processes and productivities. In this work, second-order polynomial regression models are used to establish relations between the culture medium components and the responses of the rhamnolipid biosurfactant process. The polynomial models have the following form:

$$Y = b_0 + \sum_{i=1}^{k} b_i x_i + \sum_{i=1}^{k} b_{ii} x_i^2 + \sum_{i} \sum_{j>i} b_{ij} x_i x_j + \varepsilon \qquad (7.19)$$

where Y, x, and b denote the responses, inputs/factors, and parameters, respectively. The experimental design data of the biosurfactant process [60] are used to build the polynomial regression models in coded factors:

$$\begin{aligned} Y_{bio} = {} & 1.259574 + 0.444681X_1 + 0 \cdot X_2 + 0.2X_3 + 0.016667X_4 + 0.2X_5 \\ & + 0.475X_1X_2 + 0.475X_1X_3 + 0.525X_2X_3 + 0.575X_1X_4 + 0.825X_2X_4 \\ & + 0.625X_3X_4 - 0.1X_1X_5 - 0.2X_2X_5 + 0.2X_3X_5 + 0.3X_4X_5 \\ & + 0.543085X_1^2 + 0.840426X_2^2 + 0.290426X_3^2 + 0.015426X_4^2 + 0.065426X_5^2 \end{aligned} \qquad (7.20)$$

$$\begin{aligned} Y_{RHAM} = {} & 0.976596 + 0.632447X_1 - 3.575X_2 + 2.016667X_3 + 10.81667X_4 + 3.25X_5 \\ & - 0.775X_1X_2 + 0.3X_1X_3 + 1.575X_2X_3 + 4.625X_1X_4 + 3.175X_2X_4 \\ & + 5.225X_3X_4 - 2.0875X_1X_5 - 3.0625X_2X_5 - 3.7375X_3X_5 + 1.6125X_4X_5 \\ & + 8.319681X_1^2 + 7.310904X_2^2 + 1.548404X_3^2 + 1.423404X_4^2 + 1.385904X_5^2 \end{aligned} \qquad (7.21)$$

After eliminating the insignificant coefficients, the regression equations for the biomass and rhamnolipid concentrations in coded factors are expressed as


$$\begin{aligned} Y_{bio} = {} & 1.259574 + 0.444681X_1 + 0.016667X_4 + 0.2X_5 + 0.475X_1X_2 + 0.475X_1X_3 \\ & + 0.525X_2X_3 + 0.575X_1X_4 + 0.825X_2X_4 + 0.625X_3X_4 + 0.2X_3X_5 + 0.3X_4X_5 \\ & + 0.543085X_1^2 + 0.840426X_2^2 + 0.290426X_3^2 + 0.065426X_5^2 \end{aligned} \qquad (7.22)$$

$$\begin{aligned} Y_{RHAM} = {} & 0.976596 + 0.632447X_1 - 0.775X_1X_2 + 0.3X_1X_3 + 1.575X_2X_3 + 4.625X_1X_4 \\ & + 3.175X_2X_4 + 5.225X_3X_4 + 1.6125X_4X_5 \\ & + 8.319681X_1^2 + 7.310904X_2^2 + 1.548404X_3^2 \end{aligned} \qquad (7.23)$$

The DE is combined with the polynomial regression models in Eqs. (7.22) and (7.23) to determine the individual optimum responses for the biomass and rhamnolipid concentrations. The parameters used for DE are the same as in ANN-DE. The individually optimized rhamnolipid activity (Yrham) is obtained as 54.10 g/L. The corresponding culture medium composition (in percentage) is obtained as x1 = 11.01, x2 = 4.0, x3 = 0.4, x4 = 0.4, and x5 = 5. The individually optimized biomass concentration (Ybio) is obtained as 8.09 g/L. The corresponding culture medium composition is obtained as x1 = 10.99, x2 = 4.183, x3 = 0.409, x4 = 0.420, and x5 = 4.841. These individually evaluated rhamnolipid and biomass responses represent FDi (i = 1, 2) in Eq. (7.15). The DE in integration with the polynomial regression models, with the initial population generated in the ranges of medium composition in Eq. (7.17), is implemented to compute FOi (i = 1, 2) in Eq. (7.15) and to minimize the distance S, thus establishing a single set of medium conditions. A population size of 30, a mutation rate of 0.04, and a uniform crossover rate of 0.7 are considered for DE. This simultaneous optimization strategy has yielded the optimized responses of rhamnolipid and biomass concentrations as FO1 = 55.7 g/L and FO2 = 8.49 g/L.

FIGURE 7.14 Simultaneous optimization strategy by ANN-DE/RSM-DE. (Flow scheme: initialize a random population of culture medium compositions; use the well-trained ANN/RSM model with DE to obtain FD1 and FD2; run DE to minimize the distance function, yielding the optimized objectives FO1 and FO2 and the optimized culture medium composition.)

FIGURE 7.15 Experimental versus predicted rhamnolipid activities. RSM-DE panel: Y = 0.75X + 3.88, R² = 0.757; ANN-DE panel: Y = 0.955X + 4.116, R² = 0.914.

The single set of medium composition (in percentage) corresponding to these responses is x1 = 11.99, x2 = 4.011, x3 = 0.399, x4 = 0.41, and x5 = 4.999.
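The coded-factor models in Eqs. (7.20)–(7.23) are least-squares fits to the design data. The sketch below shows how such a quadratic model could be fit with NumPy: the design-matrix builder mirrors the term structure of Eq. (7.19), while the data in the example are synthetic placeholders rather than the published design data [60].

```python
import numpy as np
from itertools import combinations

def quad_design_matrix(X):
    """Columns: intercept, linear Xi, all two-factor interactions XiXj,
    and squared terms Xi^2, matching the structure of Eq. (7.19)."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# X_coded: a (32 x 5) coded design matrix, y: a measured response.
# Both are hypothetical placeholders here, not the data of [60].
rng = np.random.default_rng(2)
X_coded = rng.integers(-2, 3, (32, 5)).astype(float)
y = X_coded @ np.array([1.0, -0.5, 0.2, 0.1, 0.3]) + 0.1

beta, *_ = np.linalg.lstsq(quad_design_matrix(X_coded), y, rcond=None)
print(beta)   # fitted coefficients b0, bi, bij, bii
```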

7.6.6 Analysis of results

The overall flow scheme of ANN-DE/RSM-DE for simultaneous optimization of the rhamnolipid process is shown in Fig. 7.14. The experimental versus predicted rhamnolipid activities of ANN-DE and RSM-DE are plotted in Fig. 7.15. The ANN-DE (R² = 0.914) shows better correlation with the experimental rhamnolipid activity than the RSM-DE (R² = 0.7578). Similar results evaluated for the biomass concentration show better correlation of ANN-DE (R² = 0.799) over RSM-DE (R² = 0.7868). The experimental validation of these results confirms the better suitability of ANN-DE for optimization of the rhamnolipid process.

7.7 Summary

Bioprocess technology plays a vital role in delivering innovative and sustainable products and processes to fulfill the needs of society. Modeling and optimization techniques are increasingly used to understand and improve biochemical processes. The main advantages of these techniques include the reduction of excessive experimentation, devising strategies to optimize and automate processes, reducing cost, and ensuring safety and reliability. This chapter concentrates on the design and application of various stochastic evolutionary and artificial intelligence-based strategies for optimization of biotechnological processes. Various single-objective, simultaneous, and multiobjective optimization strategies based on optimization algorithms such as DE and ACO, coupled with mathematical models, RSM, and ANN models, are designed and implemented on different real-life applications concerning biochemical processes. The results evaluated for the different case studies


show the advantages of stochastic global optimization methods for solving complex optimization problems concerning biotechnological processes.

References

[1] M.M. Ali, C. Storey, A. Torn, Application of stochastic global optimization algorithms to practical problems, J. Optim. Theory Appl. 95 (3) (1997) 545–563.
[2] E. Balsa-Canto, A.A. Alonso, J.R. Banga, Dynamic optimization of bioprocesses: deterministic and stochastic strategies, in: C. Skjoldebremd (Ed.), ACoFop IV (Automatic Control of Food and Biological Processes), vol. 2, Goteborg, Sweden, 1998, pp. 1–6.
[3] S. Park, W.F. Ramirez, Optimal production of secreted protein in fed-batch reactors, AIChE J. 34 (1988) 1550–1558.
[4] M. de Tremblay, M. Perrier, C. Chavarie, J. Achambault, Optimization of fed-batch culture of hybridoma cells using dynamic programming: single and multi feed cases, Bioprocess Eng. 7 (1992) 229–234.
[5] P. Shukla, S. Pushpavanam, Optimisation of biochemical reactors: analysis of different approximations of fed-batch operation, Chem. Eng. Sci. 53 (1998) 341–352.
[6] P. Mendes, D. Kell, Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation, Bioinformatics 14 (10) (1998) 869–883.
[7] S. Dhir, K.J. Morrow, R.R. Rhinehart, T. Wiesner, Dynamic optimisation of hybridoma growth in a fed-batch bioreactor, Biotechnol. Bioeng. 67 (1999) 197–205.
[8] F.S. Wang, W.M. Cheng, Simultaneous optimization of feeding rate and operation parameters for fed-batch fermentation processes, Biotechnol. Prog. 15 (5) (1999) 949–952.
[9] D. Levisauskas, V. Galvanauskas, G. Zunda, S. Grigaskis, Model-based optimization of biosurfactant production in fed-batch culture, Biotechnol. Lett. 26 (14) (2004) 1141–1146.
[10] D. Levisauskas, T. Tekorius, Model-based optimization of fed-batch fermentation processes using predetermined type feed-rate time profiles: a comparative study, Inf. Technol. Control 34 (3) (2005) 231–236.
[11] W.R. Esposito, C. Floudas, Global optimization for the parameter estimation of differential-algebraic systems, Ind. Eng. Chem. Res. 39 (2000) 1291–1310.
[12] J.R. Banga, E. Balsa-Canto, C.G. Moles, A.A. Alonso, Dynamic optimization of bioprocesses: efficient and robust numerical strategies, J. Biotechnol. 117 (4) (2005) 407–419.
[13] H. Shin, H. Lim, Cell-mass maximization in fed-batch cultures – sufficient conditions for singular arc and optimal feed rate profiles, Bioproc. Biosyst. Eng. 29 (2006) 335–347.
[14] R. Karuppiah, A. Peschel, I.E. Grossmann, M. Martín, W. Martinson, L. Zullo, Energy optimization for the design of corn-based ethanol plants, AIChE J. 54 (6) (2008) 1499–1525.
[15] H.S. Shin, H.C. Lim, Optimal fed-batch operation of recombinant cells subject to plasmid instability and death, Bioproc. Biosyst. Eng. 31 (6) (2008) 655–665.
[16] M. Martín, E. Ahmetovic, I.E. Grossmann, Optimization of water consumption in second generation bioethanol plants, Ind. Eng. Chem. Res. 50 (7) (2011) 3705–3721.


[17] R. Morales-Rodriguez, A.S. Meyer, K.V. Gernaey, G. Sin, A framework for model-based optimization of bioprocesses under uncertainty: lignocellulosic ethanol production case, Comput. Chem. Eng. 42 (2012) 115–129.
[18] D. Levisauskas, T. Tekorius, An approach to identification of dynamic model for optimization of fed-batch fermentation processes, Inf. Technol. Control 42 (1) (2013) 15–20.
[19] A. Tholudur, W.F. Ramirez, Optimization of fed-batch bioreactors using neural network parameters, Biotechnol. Prog. 12 (1996) 302–309.
[20] P. Angelov, R. Guthke, A genetic-algorithm-based approach to optimization of bioprocesses described by fuzzy rules, Bioprocess Eng. 16 (1997) 299–303.
[21] J.P. Chiou, F.S. Wang, Hybrid method of evolutionary algorithms for static and dynamic optimization problems with application to a fed-batch fermentation process, Comput. Chem. Eng. 23 (9) (1999) 1277–1291.
[22] J.A. Roubos, G. van Straten, A.J. van Boxtel, An evolutionary strategy for fed-batch bioreactor optimization: concepts and performance, J. Biotechnol. 67 (1999) 173–187.
[23] V.K. Jayaraman, B.D. Kulkarni, K. Gupta, J. Rajesh, H.S. Kusumaker, Dynamic optimization of fed-batch bioreactors using the ant algorithm, Biotechnol. Prog. 17 (1) (2001) 81–88.
[24] F.S. Wang, T.L. Su, H.J. Jang, Hybrid differential evolution for problems of kinetic parameter estimation and dynamic optimization of an ethanol fermentation process, Ind. Eng. Chem. Res. 40 (13) (2001) 2876–2885.
[25] S. Nguang, L. Chen, X. Chen, Optimisation of fed-batch culture of hybridoma cells using genetic algorithms, ISA Trans. 40 (2001) 381–389.
[26] J.S. Cheema, N.V. Sankpal, S.S. Tambe, B.D. Kulkarni, Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation, Biotechnol. Prog. 18 (6) (2002) 1356–1365.
[27] J. Na, Y. Chang, B. Chung, H. Lim, Adaptive optimization of fed-batch culture of yeast by using genetic algorithms, Bioproc. Biosyst. Eng. 24 (2002) 299–308.
[28] M. Ronen, Y. Shabtai, H. Guterman, Optimization of feeding profile for a fed-batch bioreactor by an evolutionary algorithm, J. Biotechnol. 97 (2002) 253–263.
[29] J.R. Banga, C. Moles, A. Alonso, Global optimization of bioprocesses using stochastic and hybrid methods, in: C.A. Floudas, P.M. Pardalos (Eds.), Frontiers in Global Optimization – Nonconvex Optimization and its Applications, vol. 74, Kluwer Academic Publishers, Dordrecht, 2003, pp. 45–70.
[30] Y. Nagata, K.H. Chu, Optimization of a fermentation medium using neural networks and genetic algorithms, Biotechnol. Lett. 25 (21) (2003) 1837–1842.
[31] D. Sarkar, J. Modak, Optimisation of fed-batch bioreactors using genetic algorithms, Chem. Eng. Sci. 58 (2003) 2283–2296.
[32] D. Sarkar, J. Modak, Optimization of fed-batch bioreactors using genetic algorithm: multiple control variables, Comput. Chem. Eng. 28 (2004) 789–798.
[33] M. Kapadi, R. Gudi, Optimal control of fed-batch fermentation involving multiple feeds using differential evolution, Process Biochem. 39 (2004) 1709–1721.
[34] I. Kookos, Dynamic optimization of fed-batch bioreactors using the ant algorithm, Biotechnol. Prog. 20 (2004) 1285–1288.
[35] L. Chen, S.K. Nguang, X.D. Chen, X.M. Li, Modelling and optimization of fed-batch fermentation processes using dynamic neural networks and genetic algorithms, Biochem. Eng. J. 22 (1) (2004) 51–61.


[36] A. Maghsoudpour, A. Ghaffari, M. Teshnehlab, Development of a differential evolutionary algorithm application in optimizing microbial metabolic system, Int. J. Comput. Appl. 35 (9) (2011) 5–10.
[37] S. Da Ros, G. Colusso, T.A. Weschenfelder, T.L. de Marsillac, F. de Castilhos, M.L. Corazza, M. Schwaab, A comparison among stochastic optimization algorithms for parameter estimation of biochemical kinetic models, Appl. Soft Comput. 13 (5) (2013) 2205–2214.
[38] R.M. Mendes, O. Rocha, I. Rocha, E.C. Ferreira, Optimization of fed-batch fermentation processes with bio-inspired algorithms, Expert Syst. Appl. 41 (5) (2014) 2186–2195.
[39] M.Z.M. Zain, J. Kanesan, G. Kendall, J. Huang, Optimization of fed-batch fermentation processes using the backtracking search algorithm, Expert Syst. Appl. 91 (2018) 286–297.
[40] A. Provost, G. Bastin, Dynamic metabolic modelling under the balanced growth condition, J. Proc. Contrl. 14 (2004) 717–728.
[41] J.S. Eswari, C. Venkateswarlu, Optimization of culture conditions for Chinese hamster ovary (CHO) cells production using differential evolution, Int. J. Pharm. Pharm. Sci. 4 (1) (2012) 465–470.
[42] M. Morikawa, Y. Hirata, T. Imanaka, A study on the structure–function relationship of lipopeptide biosurfactants, BBA-Mol. Cell Biol. L. 1488 (3) (2000) 211–218.
[43] J. Satya Eswari, M. Anand, C. Venkateswarlu, Optimum culture medium composition for lipopeptide production by Bacillus subtilis using response surface model-based ant colony optimization, Sadhana 41 (1) (2016) 55–65.
[44] X. Gu, Z. Zheng, H. Yu, J. Wang, F. Liang, R. Liu, Optimization of medium constituents for a novel lipopeptide production by Bacillus subtilis MO-01 by a response surface method, Process Biochem. 40 (10) (2005) 3196–3201.
[45] S.R. Mutalik, B.K. Vaidya, R.M. Joshi, K.M. Desai, S.N. Nene, Use of response surface optimization for the production of biosurfactant from Rhodococcus spp. MTCC 2574, Bioresour. Technol. 99 (10) (2008) 7875–7880.
[46] J.L. Kuester, J.H. Mize, Optimization Techniques with Fortran, McGraw-Hill, New York, 1973.
[47] J. Satya Eswari, C. Venkateswarlu, Evaluation of anaerobic biofilm reactor kinetic parameters using ant colony optimization, Environ. Eng. Sci. 30 (9) (2013) 527–535.
[48] C. Venkateswarlu, K. Gangiah, Dynamic modeling and optimal state estimation using extended Kalman filter for a kraft pulping digester, Ind. Eng. Chem. Res. 31 (3) (1992) 848–855.
[49] D.G. Cooper, J.E. Zajic, Surface active compounds from microorganisms, Adv. Appl. Microbiol. 26 (1980) 229–253.
[50] R.S. Makkar, K.J. Rockne, Comparison of synthetic surfactants and biosurfactants in enhancing biodegradation of polycyclic aromatic hydrocarbons, Environ. Toxicol. Chem. 22 (2003) 2280–2292.
[51] M.J. Brown, Biosurfactants for cosmetic applications, Int. J. Cosmet. Sci. 13 (1991) 61–64.
[52] L. Rodrigues, I.M. Banat, J. Teixeira, R. Oliveira, Biosurfactants: potential applications in medicine, J. Antimicrob. Chemother. 57 (2006) 609–618.
[53] A. Abalos, F. Maximo, M.A. Manresa, J. Bastida, Utilization of response surface methodology to optimize the culture media for the production of rhamnolipids by Pseudomonas aeruginosa AT10, J. Chem. Technol. Biotechnol. 77 (2002) 777–784.


[54] J.S. Eswari, M. Anand, C. Venkateswarlu, Optimum culture medium composition for rhamnolipid production by Pseudomonas aeruginosa AT10 using a novel multiobjective optimization method, J. Chem. Technol. Biotechnol. 88 (2013) 271–279.
[55] M.F. Anjum, I. Tasadduq, K. Al-Sultan, Response surface methodology: a neural network approach, Eur. J. Oper. Res. 101 (1997) 65–73.
[56] A.K. Banerjee, K. Kiran, U.S.N. Murthy, C. Venkateswarlu, Classification and identification of mosquito species using ANN, Comput. Biol. Chem. 32 (2008) 442–447.
[57] C. Venkateswarlu, K. Kiran, J.S. Eswari, A hierarchical artificial neural system for genera classification and species identification in mosquitoes, Appl. Artif. Intell. 26 (2012) 903–920.
[58] G. Narsizi, Multi-objective Optimization: A Quick Introduction, Webpage Notes, Courant Institute of Mathematical Sciences, New York, 2008. http://cims.nyu.edu/gn387/glp/lec1.pdf.
[59] K. Chakraborty, Advances in Differential Evolution (Studies in Computational Intelligence), Springer-Verlag, Berlin/Heidelberg, 2008.
[60] J.S. Eswari, C. Venkateswarlu, Optimum culture medium response surface modeling and optimization of the medium conditions by artificial neural network linked differential evolution for rhamnolipid production, Indian J. Chem. Technol. 23 (2016) 1–13.
[61] P. Anand, B.V.N. Siva Prasad, C. Venkateswarlu, Modelling and optimization of a pharmaceutical formulation system using radial basis function network, Int. J. Neural Syst. 19 (2) (2009) 127–136.


CHAPTER 8

Application of stochastic evolutionary optimization techniques to pharmaceutical processes

Chapter outline
8.1 Introduction
8.2 Quantitative model-based pharmaceutical formulation
    8.2.1 Response surface methodology
    8.2.2 ANN and RBFN methodologies
8.3 Simultaneous optimization of pharmaceutical (trapidil) product formulation using radial basis function network methodology
    8.3.1 Trapidil product formulation and its design data
    8.3.2 Radial basis function network and its automatic configuration
    8.3.3 Configuring RBFN to trapidil formulation
    8.3.4 Simultaneous optimization study
    8.3.5 Analysis of results
8.4 Multiobjective Pareto optimization of a pharmaceutical product formulation using radial basis function network and differential evolution
    8.4.1 Basic algorithms and essential components for formulation of multiobjective optimization strategy
    8.4.2 Configuring RBFN to pharmaceutical formulation
    8.4.3 RBFN-NSDE strategies for Pareto optimization of trapidil formulation
        8.4.3.1 RBFN-NSDE with naïve and slow
        8.4.3.2 RBFN-NSDE with ε constraint
        8.4.3.3 RBFN-NSDE with naïve and slow and ε constraint
    8.4.4 Analysis of results
8.5 Multiobjective optimization of pharmaceutical formulation using response surface methodology and differential evolution
    8.5.1 Basic algorithms and essential components for formulation of multiobjective optimization strategy
    8.5.2 Response surface model for pharmaceutical formulation
    8.5.3 RSM-NSDE strategy for multiobjective optimization of pharmaceutical formulation
    8.5.4 Analysis of results


8.6 Multiobjective optimization of cytotoxic potency of a marine macroalgae on human carcinoma cell lines using nonsorting genetic algorithm
    8.6.1 Cytotoxic potency of marine macroalgae and necessity for its quantitative treatment
    8.6.2 Response surface model for evaluating the cytotoxic potency of marine macroalgae on human carcinoma cell lines
    8.6.3 Optimization of individual objectives in cytotoxic potency evaluation of marine macroalgae using genetic algorithm
    8.6.4 NSGA-based multiobjective optimization strategy for enhancing cytotoxic potency of marine macroalgae on human carcinoma cell lines
    8.6.5 Analysis of results
8.7 Summary
References

8.1 Introduction

Computational intelligence systems play a crucial role in solving real-world complex problems in industry. Computational intelligence systems employ one or more computational intelligence techniques such as neural networks, fuzzy logic, evolutionary algorithms, multiagent approaches, and rule-based systems. It may also be necessary to employ a hybrid system combining techniques to solve a problem. Pharmaceuticals are considered an important component of health care costs; hence, their production is deemed to be a potential source of cost reduction. Design, modeling, and optimization studies can lead to considerable benefits in pharmaceutical processes in terms of improvement in productivity and product quality as well as reduction in energy consumption and environmental pollution. Artificial intelligence is increasingly used in pharmaceutical technology with a better understanding of the relationships between different formulation and process parameters. Neural networks and fuzzy logic are rapidly growing technologies that can be applied to the formulation and processing of pharmaceutical products. Evolutionary algorithms are effectively used in combination with artificial neural networks (ANNs) for predicting and optimizing the formulation conditions. Various studies concerning the application of artificial intelligence, stochastic, and evolutionary techniques have been reported for design, modeling, and optimization of different pharmaceutical processes [1–13]. The significance of this chapter is highlighted with real-life applications of different artificial intelligence and evolutionary optimization techniques to pharmaceutical processes.

8.2 Quantitative model-based pharmaceutical formulation

Pharmaceutical formulations often face the challenge of finding the right combination of formulation variables that will produce a product with optimum properties [14]. A pharmaceutical optimization problem usually has two main objectives. The first objective is to determine and quantify the relationship between the formulation responses and formulation variables, and the other is to find the optimum values of the formulation variables that produce the best response values. Quantitative model-based pharmaceutical formulation is carried out using different approaches such as response surface methodology and the ANN and radial basis function network (RBFN) methodologies.

8.2.1 Response surface methodology

Traditionally, pharmaceutical product formulation has been performed through trial and error based on the previous experience, knowledge, and wisdom of the formulator. However, this type of heuristic approach involves numerous combinations of formulation variables, thus requiring a large number of experiments. A well-chosen experimental design can maximize the amount of information that can be obtained for a given amount of experimental effort [15]. The statistical design of experiments is an efficient procedure for planning the experiments in advance. The use of systematic experimental design along with a mathematical optimization approach called response surface methodology (RSM) is found useful to determine acceptable formulations of pharmaceutical products [16–18]. The RSM involves statistical design of experiments, establishing mathematical relations between the causal factors and response variables, and optimizing the formulation factors to satisfy the desired product characteristics.

8.2.2 ANN and RBFN methodologies

The RSM approach generally involves statistical regression models to evaluate the objectives with respect to the decision variables. The RSM is limited to low-level factor interactions and may not effectively account for factor interactions and nonlinearities in the data. To overcome these shortcomings, ANN-based modeling, prediction, and optimization have been employed in pharmaceutical formulation problems [19–24]. However, most ANN models intend to establish a static network configuration, and the network may not achieve the desired performance unless it has adequate computational units. Therefore, determination of the optimal parameters of the network through its automatic configuration is very useful, and the RBFN is used as an alternative modeling tool to the ANN. The advantage of the RBFN is that it automatically configures the network while updating its parameters. The RBFN has been used as an effective tool in pharmaceutical formulation applications [25–27].

8.3 Simultaneous optimization of pharmaceutical (trapidil) product formulation using radial basis function network methodology

This case study deals with the modeling of a pharmaceutical (trapidil) formulation with an RBFN and the simultaneous optimization of the formulation process using a distance function approach. This artificial intelligence-based optimization strategy enables a better understanding of the application of artificial intelligence model-based evolutionary algorithms to the pharmaceutical formulation problem in the next section.


8.3.1 Trapidil product formulation and its design data

The sustained-release trapidil tablet used in the treatment of angina pectoris is prepared by using microcrystalline cellulose (MCC) and hydroxypropyl methylcellulose (HPMC) as excipients and magnesium stearate as a lubricant [18]. The amounts of MCC and HPMC and the compression pressure are considered as the formulation variables. The values of the release order and rate constant at pH 1.2 and at pH 6.8 are the response variables of the formulation. The release behaviors of trapidil are greatly affected by the levels of the causal variables. To obtain the desired release responses, it is necessary to optimize the formulation conditions. The formulation should satisfy the maximization of the release order while minimizing the rate constant. The best configuration of formulation variables is to be chosen for optimal trapidil formulation. In this study, an RBFN approach based on statistical experimental design data is considered for modeling and optimization of the pharmaceutical formulation. The lower and upper levels of the trapidil formulation variables considered for factorial experimentation are MCC in the range of 32–112, HPMC in the range of 115–285, and pressure in the range of 65–135. The data of 18 sets of formulation conditions and response variables based on the three-factor spherical composite design [18,26] are used for developing the RBFN model. The amounts of MCC and HPMC and the compression pressure represent the causal factors in the design data. The release order (n) and the rate constant (k) at two different pH levels represent the response data of the trapidil formulation.

8.3.2 Radial basis function network and its automatic configuration

The RBFN consists of a single input layer, a single hidden layer, and a single output layer, as shown in Fig. 8.1. More details on the RBFN are available elsewhere [28].

FIGURE 8.1 Structure of single output node RBFN.


The hidden layer nodes are referred to as radial basis functions (RBFs), which are described by cluster centers, a distance measure, and a transfer function. Each cluster center in the input space is a center vector $m_i$ with elements $m_{ij}$ ($j = 1$ to $n$). The distance of the input vector $I$ with elements $I_j$ ($j = 1$ to $n$) from the center vector $m_i$ is computed as

$$d_i = \sqrt{\sum_{j=1}^{n} k_j^i \left( I_j - m_{ij} \right)^2} \qquad (8.1)$$

where $k_j^i$ is the $(i,j)$th element of the shape matrix $k$, defined as the inverse of the covariance matrix:

$$k_j^i = \frac{h_{ij}}{\left( s_{ij} \right)^2} \qquad (8.2)$$

where $h_{ij}$ is the correlation coefficient and $s_{ij}$ represents the marginal standard deviation. A Gaussian transfer function that transforms the Euclidean distance $d_i$ ($i = 1$ to $m$) to give an output for each node is defined by

$$\phi(d_i) = \exp\!\left( -\frac{d_i^2}{\gamma^2} \right) \qquad (8.3)$$

where $\gamma$ is a real value. The output of the network $O$ is a weighted sum of the outputs $\phi(d_i)$ from the hidden layer and is given by

$$O = w_0 + \sum_{i=1}^{m} w_i \, \phi(d_i) \qquad (8.4)$$

The RBFN is automatically configured by a hierarchically self-organizing learning (HSOL) algorithm [29], which automatically creates RBFs and modifies the parameter vectors of the RBFN. The progress of learning for a single-output network is measured by using the root mean square error, $E_{rms}$, for $N$ teaching patterns, defined by

$$E_{rms} = \sqrt{\frac{2}{N} \sum_{p=1}^{N} E_p} \qquad (8.5)$$

with

$$E_p = \frac{1}{2} \left( t_p - o_p(\nu) \right)^2 \qquad (8.6)$$

where $t_p$ represents the desired output value defined by the $p$th teaching pattern, $o_p$ represents the actual output value of the $p$th teaching pattern, and $\nu$ represents a column vector which is a collection of all parameters associated with the output. The parameter saturation vector $s$ is defined as

$$s(p) = \alpha \frac{\partial E_p}{\partial \nu} + (1 - \alpha)\, s(p - 1) \qquad (8.7)$$

where $\alpha$ is a positive constant between 0 and 1, and $p$ represents the $p$th teaching pattern presented to the network. The vector $s$ provides the weighted average of $\partial E_p / \partial \nu$ over the horizon of learning iterations. The saturation criterion $\rho$ is defined through the inverse of $\lVert s \rVert$ as

$$\rho(p) = \begin{cases} \rho(p-1) + \beta \dfrac{\sqrt{d_s}}{\lVert s(p) \rVert} & \text{if } p > p_0 \\ 0 & \text{otherwise} \end{cases} \qquad (8.8)$$

where $d_s$ is the dimension of $s$, $\beta$ is a small positive constant representing the increment rate of $\rho$, and $p_0$ is the delay factor defined as $1/\alpha$. The network parameters are updated using the negative gradient of the error function, Eq. (8.6), as follows. The incremental weight $\Delta wt_{ji}$ that updates the weight $wt_{ji}$ between the $j$th output and the $i$th RBF is given as

$$\Delta wt_{ji} = -\frac{\partial E_p}{\partial wt_{ji}} = (t_j - O_j)\, \phi_i \qquad (8.9)$$

The $j$th element of the incremental mean vector $\Delta mn$ that updates the mean vector $mn$ of the $i$th RBF is given as

$$\Delta mn_j^i = -\frac{\partial E_p}{\partial mn_j^i} = k_j^i \left( x_j - mn_j^i \right) \phi_i \sum_{k=1}^{M} (t_k - O_k)\, wt_{ki} \qquad (8.10)$$

The $j$th incremental marginal standard deviation $\Delta ss$ that updates the standard deviation $ss$ of the $i$th RBF is given as

$$\Delta ss_j^i = -\frac{\partial E_p}{\partial ss_j^i} = \frac{k_j^i \left( x_j - mn_j^i \right)^2}{ss_j^i}\, \phi_i \sum_{k=1}^{M} (t_k - O_k)\, wt_{ki} \qquad (8.11)$$

The $(j,k)$th incremental correlation coefficient $\Delta hs$ that updates the correlation coefficient $hs$ of the $i$th RBF is given as

$$\Delta hs_{jk}^i = -\frac{\partial E_p}{\partial hs_{jk}^i} = -\frac{1}{2} \frac{\left( x_j - mn_j^i \right)\left( x_k - mn_k^i \right)}{ss_j^i \, ss_k^i}\, \phi_i \sum_{k=1}^{M} (t_k - O_k)\, wt_{ki} \qquad (8.12)$$

The parameter vector of the output unit, $\nu = [wt^T, mn^T, ss^T, hs^T]^T$, is updated by

$$\nu^{new} = \nu^{old} + \eta\, \Delta\nu \qquad (8.13)$$

where $\eta$ is the positive constant called the learning rate, and $\Delta\nu = [\Delta wt^T, \Delta mn^T, \Delta ss^T, \Delta hs^T]^T$. In the above equations, $N$ and $M$ denote the number of inputs and outputs, $\phi$ is the output of the RBF, $t$ is the target output, and $O$ is the predicted output.
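For concreteness, Eqs. (8.1)–(8.4) amount to the short forward pass sketched below; the parameter values in the example are hypothetical and do not come from the HSOL-configured network.

```python
import numpy as np

def rbfn_output(I, centers, shapes, weights, w0, gamma):
    """Forward pass of the RBFN in Eqs. (8.1)-(8.4).
    I       : input vector, shape (n,)
    centers : RBF center vectors m_i, shape (m, n)
    shapes  : shape-matrix elements k_j^i, shape (m, n)
    weights : output weights w_i, shape (m,)
    """
    d = np.sqrt(np.sum(shapes * (I - centers) ** 2, axis=1))  # Eq. (8.1)
    phi = np.exp(-(d ** 2) / gamma ** 2)                      # Eq. (8.3)
    return w0 + weights @ phi                                 # Eq. (8.4)

# Tiny hypothetical example: 3 RBFs on 3 inputs (x1, x2, x3).
rng = np.random.default_rng(3)
out = rbfn_output(np.array([0.5, 0.2, 0.8]),
                  centers=rng.random((3, 3)), shapes=np.ones((3, 3)),
                  weights=rng.random(3), w0=0.1, gamma=1.0)
print(out)
```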

8.3.3 Configuring RBFN to trapidil formulation

RBFN models for the trapidil formulation are configured using four individual networks, as shown in Fig. 8.2. The three causal factors corresponding to the amounts of MCC (x1) and HPMC (x2) and the compression pressure (x3) form the inputs for each of these networks. Each of the response variables, the release order (n) and the rate constant (k) at the two pH levels, forms the output of one network.

FIGURE 8.2 Radial basis function network (RBFN) structure for trapidil configuration.


FIGURE 8.3 Comparison of predicted (RBFN) and actual responses of release order, n1, and rate constant, k1, at pH = 1.2, plotted against experiment number.

The individual RBFNs are trained by using the formulation conditions and response variable data of the 18 trapidil formulations reported in the literature [26]. The individual network parameters are iteratively updated so as to achieve convergence in the objective function. The HSOL algorithm provides comprehensive learning with automatic recruitment of RBFs while optimizing the network parameters. The trained RBFNs are found to provide good predictive performance for different formulation conditions. The generalization ability of the trained RBFNs is evaluated by using formulation data that are not involved in training. The generalized response data of the release order n and the rate constant k for the formulation conditions at pH 1.2 are compared with the actual response data in Fig. 8.3. The close agreement between the RBFN model responses and the actual ones shows the good generalization ability of the RBFN models [26].

8.3.4 Simultaneous optimization study

The best configuration of formulation variables is to be chosen to satisfy the conflicting objectives of release order and rate constant of the trapidil formulation. This requires a function that finds the optimum formulation conditions while satisfying the criterion of the individual objectives. For this purpose, a criterion of optimization based on the distance between the predicted value of each response and the optimum value of each response is used. This criterion is represented as

$$S = \sqrt{\sum_{i} \left( \frac{F_{Di} - F_{Oi}}{SD_i} \right)^2} \qquad (8.14)$$

where S is the distance function, SD_i is the standard deviation of the observed values for each response variable, F_Di is the ideal value of each response variable optimized individually over the experimental region, and F_Oi is the predicted value of each response variable for the same set of causal factors.


8.3.5 Analysis of results

In this study, an RBFN model-assisted simultaneous optimization strategy is presented for optimizing a pharmaceutical product (trapidil) formulation. The prediction and generalization results of the RBFN model are found to be in good agreement with the experimental formulation results. The distance function-based simultaneous optimization strategy is found effective in dealing with the multiple objectives of the trapidil formulation and establishing the formulation conditions that satisfy the criterion of the individual objectives. The minimization of the distance function has resulted in the optimal formulation conditions of MCC = 54 mg, HPMC = 266 mg, and compression pressure = 140.3 kgf/cm². The RBFN model responses corresponding to these conditions are obtained as k1 = 0.2662, n1 = 0.5884, k2 = 0.1684, and n2 = 0.7397.

8.4 Multiobjective Pareto optimization of a pharmaceutical product formulation using radial basis function network and differential evolution

In a multiobjective optimization problem involving multiple or conflicting objectives, achieving the optimum for one objective requires some compromise on the other objectives. Such nondominated solutions are called Pareto-optimal or noninferior solutions. The Pareto-optimal solutions constitute the Pareto set, each point of which corresponds to a different optimal operating condition. In a pharmaceutical formulation involving several composition factors and responses, optimal formulation requires the best configuration of formulation variables that satisfies the response characteristics of the individual objectives. This work presents a multiobjective optimization strategy formed by integrating an evolutionary optimization algorithm with an artificial intelligence model and evaluates its application to a pharmaceutical formulation problem.

8.4.1 Basic algorithms and essential components for formulation of multiobjective optimization strategy

Some of the algorithms and components described in earlier parts of this book are used to build this strategy. The RBFN algorithm and its automatic configuration are described in Section 8.3.2. The trapidil formulation data used for RBFN configuration are given in Section 8.3.3. The differential evolution (DE) algorithm and its basic optimization applications are given in Sections 4.4 and 5.5. The applications of DE to chemical and biochemical processes are given in Sections 7.2.3 and 7.5.3. The basic formulation of multiobjective optimization is given in Section 7.4.5. The naïve and slow and the ε constraint techniques required for Pareto optimization are briefed in Section 7.4.6. With this supporting information, we now describe the multiobjective Pareto optimization strategy based on the RBFN and DE for the pharmaceutical formulation problem.


8.4.2 Configuring RBFN to pharmaceutical formulation

The RBFN model structure and its configuration to the pharmaceutical formulation problem differ from those illustrated in Section 8.3.3. However, the data used for the development of the RBFN model of this study are the same as explained, along with the references, in Section 8.3.3. The optimization strategy of this study requires the development of an RBFN model for the case study of trapidil formulation. In the trapidil formulation, the amounts of MCC and HPMC and the compression pressure represent the formulation variables, and the values of the release order and rate constant at pH 1.2 and at pH 6.8 represent the response variables. The structure of the RBFN used to represent the trapidil formulation is shown in Fig. 8.4. The three factors corresponding to the amounts of MCC (x1) and HPMC (x2) and the compression pressure (x3) form the inputs to the network, and the responses of release order and rate constant (k1, n1, k2, n2) represent the network outputs. The experimental design data of 18 sets of formulation conditions and response variables reported in the literature [18,26] are used to develop the RBFN model. The normalized input data along with the actual outputs are used to train the RBFN. The automatic configuration and learning of the network is carried out by using the HSOL algorithm. The automatic configuration enables the network to recruit three RBFs, and convergence in the solution is achieved within 36,000 iterations.

FIGURE 8.4 Structure of radial basis function network (RBFN) representing the formulation model (input layer: x1, x2, x3; hidden layer of RBFs; output layer: k1, n1, k2, n2).


8.4.3 RBFN-NSDE strategies for Pareto optimization of trapidil formulation

The RBFN model is combined with nonsorted differential evolution (NSDE) involving the naïve and slow and ε constraint techniques to derive different Pareto optimization strategies. These techniques are elaborated in Section 7.4.6. The multiobjective Pareto optimization strategies are referred to here as RBFN-NSDE with naïve and slow, RBFN-NSDE with ε constraint, and RBFN-NSDE with naïve and slow and ε constraint.

8.4.3.1 RBFN-NSDE with naïve and slow

The flow scheme of RBFN-NSDE with naïve and slow is shown in Fig. 8.5. The initial DE population for the formulation variables is randomly generated within the ranges of the design data of x1, x2, and x3 [26]. The size of the initial population (NP) is selected to be 300. The well-trained RBFN model is used to provide the fitness function values of n1 and k1 at pH 1.2 and of n2 and k2 at pH 6.8 for the data corresponding to the random population. The NSDE with naïve and slow given in Section 7.4.6 is applied to compute the Pareto-optimal solutions for the formulations at both pH conditions; a sketch of the dominated-solution filter is given below. The Pareto-optimal solutions generated by the naïve and slow technique are plotted in Fig. 8.6.
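A minimal version of the naïve and slow filter, which keeps only the nondominated members of a population, is sketched below; objectives to be maximized (the release order) are negated so that every column is minimized.

```python
import numpy as np

def nondominated(F):
    """Pairwise 'naive and slow' filter: keep rows of F (one row per
    solution, one column per minimized objective) that no other row
    dominates."""
    N = len(F)
    keep = np.ones(N, dtype=bool)
    for i in range(N):
        for j in range(N):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False          # solution i is dominated by j
                break
    return keep

# Trapidil case: maximize release order n, minimize rate constant k;
# the maximized objective is negated so both columns are minimized.
F = np.array([[-0.60, 0.310], [-0.61, 0.309], [-0.59, 0.320]])
print(nondominated(F))   # only the second point is nondominated here
```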

8.4.3.2 RBFN-NSDE with ε constraint

The flow scheme of RBFN-NSDE with ε constraint is shown in Fig. 8.7. In this method, the initial population generation is the same as in RBFN-NSDE with naïve and slow. The NSDE with the ε constraint technique is applied according to the procedure given in Section 7.4.6. The Pareto-optimal solution obtained by this method is shown in Fig. 8.8.
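One common way to realize an ε-level comparison inside the DE selection step is sketched below; this is an illustrative reading of the ε constraint idea, not the exact procedure of Section 7.4.6, and the function name is hypothetical.

```python
def eps_better(f_a, viol_a, f_b, viol_b, eps):
    """ε-level comparison of two candidates (a sketch): when both
    candidates violate the constrained objective by no more than eps,
    compare the primary objectives; otherwise the smaller constraint
    violation wins. f_* are primary objective values (minimized) and
    viol_* are constraint violations of the second objective."""
    if viol_a <= eps and viol_b <= eps:
        return f_a < f_b          # ordinary objective comparison
    if viol_a == viol_b:
        return f_a < f_b
    return viol_a < viol_b
```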

FIGURE 8.5 Flowchart of RBFN-NSDE with the naïve and slow technique: initialize a random population; in each generation, evaluate the objective function values with the well-trained RBFN model, perform the DE operations (mutation, crossover, selection), and apply the naïve and slow technique to remove dominated solutions until the solution converges.

FIGURE 8.6 Pareto-optimal solutions by RBFN-NSDE with naïve and slow (rate constant k1 versus release order n1, and rate constant k2 versus release order n2).

FIGURE 8.7 Flowchart of RBFN-NSDE with the ε constraint technique: initialize a random population; set the ε level by the ε level control function; in each generation, evaluate the objective function values with the well-trained RBFN model, perform the DE operations, and update the ε level until the solution converges.

FIGURE 8.8 Pareto-optimal solution by RBFN-NSDE with the ε constraint technique (rate constant k1 versus release order n1, and rate constant k2 versus release order n2).

8.4.3.3 RBFN-NSDE with naïve and slow and ε constraint

The flow scheme of RBFN-NSDE with naïve and slow and ε constraint is shown in Fig. 8.9. In this method, the initial population generation is the same as that of RBFN-NSDE with naïve and slow. The RBFN model is called to provide the fitness (objective) function values for the responses with respect to the decision variables. The naïve and slow technique followed by the ε constraint is implemented to obtain the Pareto-optimal solutions for the formulation at pH 1.2 and at pH 6.8, as shown in Fig. 8.10.

8.4.4 Analysis of results

The performances of the three RBFN-NSDE-based Pareto optimization strategies are evaluated in terms of the mean squared error (MSE), which measures the spread of the optimum release-order values obtained in the final generation about their average value:

$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( Y_i - \bar{Y} \right)^2$$

where Yi denotes the individual responses, Ȳ the mean of all individual responses in each case, and N the size of the final population. The results show that RBFN-NSDE with the naïve and slow and ε constraint technique provides the best performance for the trapidil formulation, with the set of decision variables x1 = 2.102 mg, x2 = 212.2 mg, and x3 = 45.69 kgf/cm² and the response variables n2 = 0.8126 and k2 = 0.126 h⁻¹.
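A direct implementation of this metric is a one-liner; the sketch below assumes the release-order values of the final population have been collected into an array.

```python
import numpy as np

def population_mse(Y):
    """MSE used to compare the Pareto strategies: spread of the
    release-order values Y of the final generation about their mean."""
    Y = np.asarray(Y, dtype=float)
    return float(np.mean((Y - Y.mean()) ** 2))

print(population_mse([0.81, 0.80, 0.82]))   # hypothetical final values
```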

FIGURE 8.9 Flowchart of RBFN-NSDE with naïve and slow and ε constraint: initialize a random population; in each generation, evaluate the objective function values with the well-trained RBFN model, apply the naïve and slow technique, set and update the ε level by the ε level control function, and perform the DE operations (mutation, crossover, selection) until the solution converges.

FIGURE 8.10 Pareto-optimal solution by RBFN-NSDE with naïve and slow and ε constraint technique.


8.5 Multiobjective optimization of pharmaceutical formulation using response surface methodology and differential evolution

A response surface model (RSM)-based NSDE strategy is derived and applied for multiobjective optimization of a pharmaceutical formulation.

8.5.1 Basic algorithms and essential components for formulation of multiobjective optimization strategy

Some of the algorithms and components described in earlier parts of this book are used to build this strategy. The experimental design and response surface methodology are briefed in Section 7.3.2. The trapidil formulation data used for the RSM methodology are given in Section 8.3.3. The DE algorithm and its basic optimization applications are given in Sections 4.4 and 5.5. The applications of DE to chemical and biochemical processes are given in Sections 7.2.3 and 7.5.3. The basic formulation of multiobjective optimization is given in Section 7.4.5. The naïve and slow and the ε constraint techniques required for Pareto optimization are briefed in Section 7.4.6. With this supporting information, we now develop the multiobjective Pareto optimization strategy based on the response surface model (RSM) and DE for the pharmaceutical formulation problem.

8.5.2 Response surface model for pharmaceutical formulation

The relationship between the causal factors and the response variables of the trapidil formulation is expressed in the form of the general second-order polynomial equation

$$Y = b_0 + \sum_{i=1}^{3} b_i x_i + \sum_{i=1}^{3} b_{ii} x_i^2 + \sum_{i=1}^{3} \sum_{i \ne j} b_{ij} x_i x_j + \varepsilon \qquad (8.15)$$

where Y is the response variable, x the factor representation, b0 a constant, bi the linear term coefficients, bii the quadratic term coefficients, and bij the interaction coefficients. The second-order orthogonal design data of the trapidil formulation [26] are used to fit the polynomial model. The amounts of MCC (x1) and HPMC (x2) and the compression pressure (x3) represent the causal factors in the second-order orthogonal design data. The release order (n) and the rate constant (k) at the two pH levels denote the response variable data. The orthogonal design data along with the corresponding response variable data are used to fit the second-order polynomial equation.


The reduced order models of trapidil formulation on elimination of insignificant coefficients are represented by the following equations: k1 ¼ 0:363x0  0:07357x2  0:0139x3 þ 0:0299x22 þ 0:005149x23 n1 ¼ 0:5570x0 þ 0:03010x2 þ 0:01943x3  0:01315x22 k2 ¼ 0:223611x0  0:03891x2 þ 0:020293x22 þ 0:006005x23 þ 0:008375x2 x3 n2 ¼ 0:6835x0 þ 0:011008x1 þ 0:03947x2 þ 0:002825x21  0:01683x22 þ0:006795x23  0:0065x2 x3 (8.16)

8.5.3 RSM-NSDE strategy for multiobjective optimization of pharmaceutical formulation

The polynomial models representing the trapidil formulation in Eq. (8.16) are combined with NSDE with the naïve and slow and ε constraint technique to build the Pareto optimization strategy for the trapidil formulation. The flow scheme for this strategy is the same as in Fig. 8.9, with the RBFN model replaced by the regression models. The parameters chosen for the NSDE of this strategy are NP = 300, D = 4, F = 0.4, and CR = 0.11. By this strategy, the initial population shown in Fig. 8.11A corresponding to pH 1.2 is reduced to the Pareto-optimal solution shown in Fig. 8.11B.

8.5.4 Analysis of results

The validation of the results against the experimental results [18] shows that the optimal formulation results of RSM-NSDE are within the ranges of the experimental data. However, the RSM-NSDE strategy is found to exhibit lower performance than the RBFN-NSDE strategy.

8.6 Multiobjective optimization of cytotoxic potency of a marine macroalgae on human carcinoma cell lines using nonsorting genetic algorithm

The application of a multiobjective optimization strategy based on the nonsorting genetic algorithm (NSGA) in combination with response surface models is studied for Pareto-optimal evaluation of the cytotoxic potency of a marine macroalga on human carcinoma cell lines.


FIGURE 8.11 Pareto-optimal solution by RSM-NSDE with naïve and slow and ε constraint technique.

8.6.1 Cytotoxic potency of marine macroalgae and necessity for its quantitative treatment

Marine macroalgae are a relatively new source in the traditional medicine field, where their extracts can be sources of bioactive compounds with many therapeutic activities. The extracted compounds of marine algae can act as anticoagulant [30], antiviral [31], antioxidant [32], anticancer [33], and antiinflammatory [34] agents. The methanolic extract of a macroalga called Ulva fasciata Delile, along with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay, is used for the treatment of human colon carcinoma (HT-29), human hepatocyte carcinoma (Hep-G2), and human breast carcinoma (MCF-7) cell lines. In a cancer therapeutic study, the extract concentration and the cancer cell line affect the drug absorbance, cell survival, and cell inhibition. Hence, it is important to study the cytotoxic potency of the marine macroalga on human cancer based on the methanolic extract concentration and the cancer cell lines.


Quantitative model-based techniques that establish mathematical relations between the independent variables (factors) and the resulting output variables (responses) have received considerable interest in recent times. With these techniques, several responses concerning the effectiveness, safety, and usefulness of a treatment system must be optimized simultaneously. For effective treatment of cancer, a proper combination of the factors, i.e., the extract concentration and cancer cell lines, should be selected so as to minimize the absorbance and enhance the cell survival and the cell growth inhibition. The responses representing the desired objectives are thus conflicting. Hence, an efficient modeling and optimization strategy is needed to establish an optimal treatment system.

8.6.2 Response surface model for evaluating the cytotoxic potency of marine macroalgae on human carcinoma cell lines

The aim is to develop response surface models to evaluate the cytotoxic potency of marine macroalgae against cancer based on the data obtained from human colon carcinoma (HT-29), human hepatocyte carcinoma (Hep-G2), and human breast carcinoma (MCF-7) cell lines. The experimental design data for developing the quantitative models for the cancer treatment system are reported in the literature [35], where a Box-Behnken central composite design with 13 experimental trials is employed. These trials involve three different cancer cell lines, each at three levels of concentration. Response surface methodology requires the formulation and development of quantitative/polynomial relations between the causal factors and the resulting responses. In the cancer treatment system using marine macroalgae, the extract concentration (x1) and the cancer cell lines (x2) represent the causal factors, and the drug absorbance at 492 nm (y1), % cell survival (y2), and % cell inhibition (y3) represent the responses. The polynomial equations relating the factors and responses, developed using the experimental data of the cancer treatment system, are given as follows [36]:

$$\begin{aligned} y_1 &= 0.60 - 0.086x_1 + 0.023x_2 - 0.011x_1x_2 - 0.041x_1^2 - 0.10x_2^2 \\ y_2 &= 71.65 - 12.32x_1 - 0.51x_2 - 1.02x_1x_2 - 0.4x_1^2 - 0.66x_2^2 \\ y_3 &= 28.35 + 12.32x_1 + 0.51x_2 + 1.02x_1x_2 + 3.84x_1^2 + 0.66x_2^2 \end{aligned} \qquad (8.17)$$

The predictive performance of these models is evaluated for the 13 sets of experimental design data. The response data of drug absorbance (y1), % cell survival (y2), and % cell inhibition (y3) are compared with the experimental responses. Such a comparison for the model-predicted absorbance against the experimental absorbance is shown in Fig. 8.12 [36]. These results exhibit good agreement between the model predictions and experimental responses. Thus, the fitted models are found effective in evaluating the cytotoxic potency of the macroalgae.


FIGURE 8.12 Comparison of model predictions with experimental results.

8.6.3 Optimization of individual objectives in cytotoxic potency evaluation of marine macroalgae using genetic algorithm

The basic genetic algorithm (GA) is described in Section 4.2. Its implementation on base case problems is given in Section 5.3, and its application to real-life problems is illustrated in Sections 6.4 and 6.5. For the present case study, the GA in MATLAB Optimization Toolbox 2012 is used to optimize the individual responses specified in the cytotoxic potency evaluation of the marine macroalgae. A random population of size 20, corresponding to the input data of extract concentration (x1) and cancer cell lines (x2) in double vector type, is treated with the genetic operators of reproduction, crossover, and mutation while evaluating the fitness of individuals from the objective functions defined by the regression models in Eq. (8.17). The fitness scaling is done by the ranking method. A stochastic uniform method is used for selection, and an elite count of 2 is used in reproduction with a crossover constant of 0.8. The termination criterion is defined as an average change in the fitness value less than the function tolerance. The number of generations is set to 50, and the tolerance is specified as 10⁻⁶. The optimized individual responses of drug absorbance (y1), % cell survival (y2), and % cell inhibition (y3) are evaluated as 0.385, 17.93, and 53.30, respectively [36].
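To illustrate the kind of search loop involved (independently of the MATLAB toolbox used in the study), the sketch below is a bare-bones GA in Python minimizing the absorbance model y1 of Eq. (8.17); the coded-variable bounds, Gaussian mutation scale, and truncation-based parent selection are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def y1(x):
    # Drug absorbance model from Eq. (8.17), to be minimized
    x1, x2 = x
    return 0.60 - 0.086*x1 + 0.023*x2 - 0.011*x1*x2 - 0.041*x1**2 - 0.10*x2**2

POP, GENS, CR, MU = 20, 50, 0.8, 0.1   # population 20, 50 generations, CR 0.8 per the text
lo, hi = -1.0, 1.0                     # assumed coded-variable bounds
pop = rng.uniform(lo, hi, (POP, 2))

for _ in range(GENS):
    fit = np.array([y1(x) for x in pop])
    order = np.argsort(fit)            # rank-based: smaller y1 is fitter
    elite = pop[order[:2]].copy()      # elite count of 2, as in the text
    parents = pop[order[:POP // 2]]    # truncation selection (assumed)
    children = []
    while len(children) < POP - 2:
        a, b = parents[rng.integers(len(parents), size=2)]
        w = rng.random()
        child = w*a + (1-w)*b if rng.random() < CR else a.copy()  # blend crossover
        child += rng.normal(0, MU, 2)                             # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([elite] + children)

best = min(pop, key=y1)
print("min y1 (sketch):", y1(best), "at", best)
```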

8.6.4 NSGA-based multiobjective optimization strategy for enhancing cytotoxic potency of marine macroalgae on human carcinoma cell lines

The formulation of the multiobjective optimization problem and the concept of Pareto optimization are discussed in Section 7.4.5. The effectiveness of the cytotoxic potency of marine macroalgae against cancer can be realized by minimizing the absorbance while enhancing the cell survival and cell inhibition. No single optimal solution


exists with respect to satisfying these objectives, as improving the performance of one objective deteriorates the performance of the others. Optimizing the cytotoxic potency of marine macroalgae against cancer requires identifying the best combination of causal variables that satisfies the response characteristics of the individual objectives. For this purpose, a multiobjective optimization strategy is devised by integrating the response surface models in Eq. (8.17) with the evolutionary optimization algorithm GA. This strategy is referred to here as response surface model (RSM)-based NSGA. The multiobjective optimization problem for cytotoxic potency evaluation of marine macroalgae on human carcinoma cell lines is solved by using the NSGA in MATLAB Optimization Toolbox 2012 [36]. The flowchart of the NSGA strategy is shown in Fig. 8.13. The parameters used in the NSGA are a double vector type population, default type initial population, tournament selection with a tournament size of 2, intermediate type crossover function with a crossover constant of 0.8, constraint-dependent mutation, and a tolerance of 10⁻⁴. A random population of size 150 corresponding to the input data of extract concentration (x1) and cancer cell lines (x2) is used as the initial population. The NSGA is used to obtain the Pareto-optimal solutions. The strategy has resulted in as many as 30 nondominated solutions corresponding to the absorbance (y1), % cell survival (y2), and % cell inhibition (y3). The Pareto-optimal solutions obtained by this strategy are shown in Fig. 8.14. The scatter plot representing the three objectives is shown in Fig. 8.15.

FIGURE 8.13 Flowchart of the RSM-based NSGA strategy: the cytotoxic potency data of U. fasciata Delile (factors x1, extract concentration, and x2, human cell lines) from the Box-Behnken central composite design feed the regression models for y1 (absorbance), y2 (% cell survival), and y3 (% cell inhibition); a population is generated and evolved generation by generation until termination.