Recent Advances in Computational Optimization: Results of the Workshop on Computational Optimization WCO 2021 (Studies in Computational Intelligence, 1044) [1st ed. 2022] 3031068386, 9783031068386


Table of contents :
Organization
Preface
Contents
Learning to Optimize
1 Introduction
2 Related Work
3 Learning Emergence with Cartesian Genetic Programming
4 Results
5 Conclusion
References
Optimal Seating Assignment in the COVID-19 Era via Quantum Computing
1 Introduction
2 Quantum Computing Approach
3 Quantum Computing Solvers
3.1 Quadratic Unconstrained Binary Optimization (QUBO)
3.2 QBSolv
3.3 D-Wave Quantum-Classical Hybrid solver
4 A Case-Study: The Seating Arrangement Optimization Problem
4.1 Problem Description
4.2 Mathematical Programming Formulation
4.3 QUBO Formulation
4.4 Parametric Coefficients Calibration
5 Computational Results
5.1 QUBO model size
5.2 Quality of the solutions
5.3 Efficiency of the solvers
6 Conclusions
References
Hybrid Ant Colony Optimization Algorithms—Behaviour Investigation Based on Intuitionistic Fuzzy Logic
1 Introduction
2 Multiple Knapsack Problem
3 Ant Colony Optimization Algorithm
4 Local Search Procedure
5 InterCriteria Analysis
6 Computational Results and Discussion
6.1 Results of Application of μ-Biased ICrA
6.2 Results of Application of ν-Biased ICrA
6.3 Results of Application of Unbiased ICrA
6.4 Results of Application of Balanced ICrA
7 Conclusion
References
Scheduling Algorithms for Single Machine Problem with Release and Delivery Times
1 Introduction
2 IJR and ICA Scheduling Algorithms
2.1 Algorithm IJR
3 Combined Scheduling Algorithm ICA
4 Properties of the Schedule Constructed by the Algorithm ICA
5 Combined Scheduling for the Forward and the Inverse Problem FIICA
5.1 Algorithm FIICA
6 Computational Experiment
7 Conclusion
References
Key Performance Indicators to Improve e-Mail Service Quality Through ITIL Framework
1 Introduction
2 Problem Discussion
3 Service Dependencies and Available Approaches
4 Key Performance Indicators Design
4.1 Timeline of ITIL Implementation
4.2 KPIs Classification and Formulation
5 Group Decision Making for KPIs Selection
6 Conclusion
References
Contemporary Bioprocesses Control Algorithms for Educational Purposes
1 Introduction
2 Mathematical Model and Process Monitoring
2.1 Mathematical Model for Control Purposes
2.2 Observer Design
3 Adaptive Linearizing Control Design
3.1 Transformation of the Model for Control
3.2 Observer-Based Estimator of Unknown Kinetic Parameters
3.3 Adaptive Linearizing Control Design
4 Building-In the Control Algorithms in InSEMCoBio
5 Results and Discussion
6 Conclusion
References
Monitoring a Fleet of Autonomous Vehicles Through A* Like Algorithms and Reinforcement Learning
1 Introduction
2 The SPR: Shortest Path under Risk Model
2.1 Transit Network and Risk Function
2.2 Routing Strategies and the SPR Problem
2.3 Some Structural Results
2.4 A Consequence: Risk Versus Distance Reformulation of the SPR Model
2.5 Discussion About the Complexity
3 Algorithms
3.1 Decision Scheme
3.2 A Local Search Algorithm Involving Dynamic Programming
3.3 An A* Algorithm
3.4 Discussion: The Decision Set Λ
4 Speeding Algorithms through Statistical Learning Techniques
4.1 Bounding Decisions
4.2 Bounding States
5 Numerical Experiments
6 Conclusion
References
Rather “Good In, Good Out” Than “Garbage In, Garbage Out”: A Comparison of Various Discrete Subsampling Algorithms Using COVID-19 Data Without a Response Variable
1 Introduction
2 Proposed Research Methodology
2.1 Formal Description of a Dataset Used for Subsampling
2.2 Metrics for Controlling the Quality of the Subsampling
2.3 Subsampling Algorithms
3 Results
4 Conclusion
References
Using Temporal Dummy Players in Cost-Sharing Games
1 Introduction
1.1 Background
1.2 Notation and Problem Statement
1.3 Related Work
1.4 Our Results
2 Hardness Proof for General Networks
3 Cost-Sharing Games on Parallel Links
3.1 Preliminaries and Observations
3.2 The Naive Solution
4 Convergence to the Social Optimum
4.1 Max Cost-reduction Heuristic
4.2 Balancing Heuristic
4.3 Exhaustive Heuristic
4.4 Performance Measure Comparison
5 Exploit a Given Number of Dummy Players
5.1 Max Cost-reduction Heuristic
5.2 Balancing Heuristic
5.3 Exhaustive Heuristic
6 Experimental Results
7 Conclusions and Open Problems
References
Index-Matrix Interpretation of a Two-Stage Three-Dimensional Intuitionistic Fuzzy Transportation Problem
1 Introduction
2 Basic Definitions of the Concepts of Index Matrices and Intuitionistic Fuzzy Logic
2.1 Short Remarks on Intuitionistic Fuzzy (IF) Logic
2.2 Definition, Operations and Relations Over 3-D Intuitionistic Fuzzy IMs
3 Index-Matrix Interpretation of a Two-Stage Three-dimensional Intuitionistic Fuzzy TP
3.1 Algorithm for Finding of an Optimal Solution of the 2-S 3-D IFTP
3.2 An Application of the Optimal Algorithm of 2-S 3-D IFTP
4 Conclusion
References
Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making Method in Outsourcing Using a Software Program
1 Introduction
2 Basic Concepts of IMs and IVIF Logic
2.1 Interval-Valued Intuitionistic Fuzzy Logic
2.2 Interval-Valued Intuitionistic Fuzzy Index Matrices
3 An Optimal Interval-Valued Intuitionistic Fuzzy Selection for the Outsourcing Service Providers
3.1 Optimal IVIF Selection of the Providers
3.2 A Software Program for Optimal IVIF Selection of the Providers
3.3 A Case Study
4 Conclusion
References
Combinatorial Etudes and Number Theory
1 Morse Sequence
2 Formulation of the Main Results
3 Proof of the Main Results
3.1 Proof of Corollary 1
3.2 Proof of Proposition 1
3.3 Proof of Proposition 2
3.4 Proof of Proposition 3
3.5 Proof of Proposition 4
4 Arithmetic Progression Theorem
4.1 Product of Arithmetic Progressions
4.2 Generalization of Arithmetic Progression Theorem
References
Multicriteria Optimization of an Algorithm for Charging Energy Storage Elements
1 Introduction
2 A Short Description of the Studied Energy Storage System
3 The Basic Algorithm
4 The Optimized Algorithm
5 Simulation Results and Comparison of the Studied Algorithms
6 Conclusion
References
Optimized Nano Grid Approach for Small Critical Loads
1 Introduction
2 The Need of Photovoltaics
3 The Need of Microgrids
4 Determine the Loads
5 Numerical Example
6 Conclusion
References
Optimized Monte Carlo Methods for Sensitivity Analysis for Large-Scale Air Pollution Model
1 Introduction
2 The Description of the Danish Eulerian Model
3 Latin Hypercube Sampling
4 The van der Corput Sequence
5 Sensitivity Studies with Respect to Emission Levels
6 Sensitivity Studies with Respect to Chemical Reactions Rates
7 Conclusion
References
On a Full Monte Carlo Approach to Computational Finance
1 Introduction
2 Problem Settings and Motivation
3 Quasi-Monte Carlo Methods for Numerical Integration Based on Lattice Sequences
3.1 Lattice Sequence Based on Fibonacci Generating Vector
3.2 Polynomial Lattice Rule
3.3 Lattice Sequences Based on Optimal Vectors
4 Numerical Examples and Results
5 Conclusion
References
Advanced Monte Carlo Methods to Neural Networks
1 Introduction
2 Formulation of the Problem
3 The Description of the Stochastic Approaches
4 Numerical Results
5 Conclusion
References
An Efficient Adaptive Monte Carlo Approach for Multidimensional Quantum Mechanics
1 Introduction
2 The Wigner Monte Carlo Method
3 Numerical Examples
4 Conclusions
References
An Overview of Lattice and Adaptive Approaches for Multidimensional Integrals
1 Introduction
2 Adaptive Approach
3 Lattice Rules
4 Numerical Examples
5 Conclusion
References
Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations
1 Introduction
2 Formulation of the Problem
3 Biased Stochastic Approach
3.1 Monte Carlo Method for Integral Equations
3.2 Error Analysis of the Biased Stochastic Approach
3.3 Error Balancing Conditions for the Biased Stochastic Approach
4 Unbiased Stochastic Approach
4.1 Unbiased Approach for a Simplified Problem
4.2 Unbiased Approach for the General Problem
5 Numerical Examples and Discussion
5.1 Simple Case
5.2 Application to Biology
5.3 Application to Neural Networks
5.4 Application to Physics
5.5 A Comparison Between the Biased and the Unbiased Algorithm
6 Conclusion
References
Improving Performance of Low-Cost Sensors Using Machine Learning Calibration with a 2-Step Model
1 Introduction
2 Background of Air Quality Monitoring
2.1 Air Quality Monitoring Systems
2.2 Opportunities and Disadvantages of Wireless Low-Cost Stations
2.3 Effects from Humidity and Height
2.4 Reference Instrument
2.5 PM Sensors
2.6 The Wireless Network
3 Data Calibration Model
3.1 Datasets and Model
3.2 Methods
4 Application of the Model on Wireless Sensor Network
4.1 Dataset and Model
4.2 Modelling the Pressure Measurement
5 Results and Evaluation
5.1 Evaluation of the Results for Relative Humidity
5.2 Results of the Calibration Model
5.3 Improvements of the Model Through Unsupervised Learning
5.4 Results from the Application on the WSN
6 Conclusion and Future Research
References
Author Index


Studies in Computational Intelligence 1044

Stefka Fidanova   Editor

Recent Advances in Computational Optimization Results of the Workshop on Computational Optimization WCO 2021

Studies in Computational Intelligence Volume 1044

Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, selforganizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

Stefka Fidanova Editor

Recent Advances in Computational Optimization Results of the Workshop on Computational Optimization WCO 2021

Editor Stefka Fidanova Department Parallel Algorithms Institute of Information and Communication Technology Bulgarian Academy of Sciences Sofia, Bulgaria

ISSN 1860-949X ISSN 1860-9503 (electronic) Studies in Computational Intelligence ISBN 978-3-031-06838-6 ISBN 978-3-031-06839-3 (eBook) https://doi.org/10.1007/978-3-031-06839-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Organization

The Workshop on Computational Optimization (WCO 2021) was organized within the framework of the Federated Conference on Computer Science and Information Systems (FedCSIS) 2021.

Conference Co-chairs for WCO

Stefka Fidanova, IICT-BAS, Bulgaria
Antonio Mucherino, IRISA, Rennes, France
Daniela Zaharie, West University of Timisoara, Romania

Program Committee

Abud, Germano, Universidade Federal de Uberlândia, Brazil
Bonates, Tibérius, Universidade Federal do Ceará, Brazil
Breaban, Mihaela, University of Iasi, Romania
Gruber, Aritanan, Universidade Federal do ABC, Brazil
Hadj Salem, Khadija, University of Tours—LIFAT Laboratory, France
Hosobe, Hiroshi, National Institute of Informatics, Japan
Lavor, Carlile, IMECC-UNICAMP, Campinas, Brazil
Micota, Flavia, West University Timisoara, Romania
Muscalagiu, Ionel, Politehnica University Timisoara, Romania
Stoean, Catalin, University of Craiova, Romania
Zilinskas, Antanas, Vilnius University, Lithuania
Bremer, Jörg


Preface

Many real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. Every day we solve optimization problems. Optimization occurs in minimizing time and cost or in maximizing profit, quality and efficiency. Such problems are frequently characterized by non-convex, non-differentiable, discontinuous, noisy or dynamic objective functions and constraints, which call for adequate computational methods. This volume is a result of very vivid and fruitful discussions held during the Workshop on Computational Optimization. The participants agreed that the relevance of the conference topic and the quality of the contributions clearly suggested that a more comprehensive collection of extended contributions devoted to the area would be very welcome and would certainly contribute to a wider exposure and proliferation of the field and its ideas. The volume includes important real problems like modeling of physical processes, parameter settings for controlling different processes, transportation problems, machine scheduling, air pollution modeling, solving multiple integrals and systems of differential and integral equations which describe real processes, and solving engineering and financial problems. Some of them can be solved by applying traditional numerical methods, but others require a huge amount of computational resources. For the latter it is more appropriate to develop algorithms based on some metaheuristic method like evolutionary computation, ant colony optimization, particle swarm optimization, bee colony optimization, constraint programming, stochastic methods, etc.

March 2022

Stefka Fidanova Co-Chair, WCO’2021


Contents

Learning to Optimize (Jörg Bremer), 1
Optimal Seating Assignment in the COVID-19 Era via Quantum Computing (Ilaria Gioda, Davide Caputo, Edoardo Fadda, Daniele Manerba, Blanca Silva Fernández, and Roberto Tadei), 21
Hybrid Ant Colony Optimization Algorithms—Behaviour Investigation Based on Intuitionistic Fuzzy Logic (Stefka Fidanova, Maria Ganzha, and Olympia Roeva), 39
Scheduling Algorithms for Single Machine Problem with Release and Delivery Times (Natalia Grigoreva), 61
Key Performance Indicators to Improve e-Mail Service Quality Through ITIL Framework (Leoneed Kirilov and Yasen Mitev), 79
Contemporary Bioprocesses Control Algorithms for Educational Purposes (Velislava Lyubenova, Maya Ignatova, and Olympia Roeva), 95
Monitoring a Fleet of Autonomous Vehicles Through A* Like Algorithms and Reinforcement Learning (Mourad Baiou, Aurélien Mombelli, and Alain Quilliot), 111
Rather “Good In, Good Out” Than “Garbage In, Garbage Out”: A Comparison of Various Discrete Subsampling Algorithms Using COVID-19 Data Without a Response Variable (Lubomír Štěpánek, Filip Habarta, Ivana Malá, and Luboš Marek), 135
Using Temporal Dummy Players in Cost-Sharing Games (Ofek Dadush and Tami Tamir), 161
Index-Matrix Interpretation of a Two-Stage Three-Dimensional Intuitionistic Fuzzy Transportation Problem (Velichka Traneva and Stoyan Tranev), 187
Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making Method in Outsourcing Using a Software Program (Velichka Traneva, Stoyan Tranev, and Deyan Mavrov), 215
Combinatorial Etudes and Number Theory (Miroslav Stoenchev and Venelin Todorov), 233
Multicriteria Optimization of an Algorithm for Charging Energy Storage Elements (Krasimir Kishkin, Dimitar Arnaudov, and Venelin Todorov), 257
Optimized Nano Grid Approach for Small Critical Loads (Daniel Todorov and Venelin Todorov), 267
Optimized Monte Carlo Methods for Sensitivity Analysis for Large-Scale Air Pollution Model (Venelin Todorov, Ivan Dimov, Stefka Fidanova, Tzvetan Ostromsky, and Rayna Georgieva), 277
On a Full Monte Carlo Approach to Computational Finance (Venelin Todorov), 289
Advanced Monte Carlo Methods to Neural Networks (Venelin Todorov), 303
An Efficient Adaptive Monte Carlo Approach for Multidimensional Quantum Mechanics (Venelin Todorov and Ivan Dimov), 315
An Overview of Lattice and Adaptive Approaches for Multidimensional Integrals (Venelin Todorov), 333
Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations (Venelin Todorov, Ivan Dimov, and Rayna Georgieva), 349
Improving Performance of Low-Cost Sensors Using Machine Learning Calibration with a 2-Step Model (Petar Zhivkov), 373
Author Index, 387

Learning to Optimize

Jörg Bremer
University of Oldenburg, Uhlhornsweg 84, 26129 Oldenburg, Germany

Abstract With a growing availability of ambient computing power as well as sensor data, networked systems are emerging in all areas of daily life. Coordination and optimization in complex cyber-physical systems demand decentralized and self-organizing algorithms to cope with problem size and distributed information availability. Self-organization often relies on emergent behavior. Local observations and decisions aggregate to some global behavior without any apparent, explicitly programmed rule. Systematically designing algorithms with emergent behavior suitable for a new orchestration or optimization task is, at best, tedious and error prone. Appropriate design patterns are scarce so far. It is demonstrated that a machine learning approach based on Cartesian Genetic Programming is capable of learning the emergent mechanisms that are necessary for swarm-based optimization. Targeted emergent behavior can be evolved by evolution strategies. The learned swarm behavior is already significantly better than plain random search. The encountered pitfalls as well as the remaining challenges on the research agenda are discussed in detail. An additional fitness landscape analysis gives insight into obstructions during evolution and clues for future improvements.

Keywords Machine learning · Cartesian genetic programming · Swarm-based optimization heuristics

1 Introduction

Many future scenarios, especially but not exclusively within the smart city context, are going to cope with large, complex cyber-physical systems (CPS) where human users are supported by semi-automatic coordination of a large number of distributed entities within the Internet of Things (Zanella et al. 2014). Many applications of tomorrow will more and more be driven by social interactions (between man and machine as well as between machine and machine). Thus, autonomy and emergence will play an important role. Therefore, engineering methods will have to take these new dimensions into account (Di Marzo Serugendo 2003).


When large, decentralized scenarios come into play, agent-based or self-organizing coordination algorithms are widely seen as the most promising solution. Multi-agent systems are widely accepted to be a suitable approach for coordination and distributed optimization in cyber-physical systems with a large number of distributed entities or sensing or operational equipment (Calvaresi et al. 2018; Zhu 2013), especially for horizontally distributed control tasks (Nieße et al. 2012). Cyber-physical systems are equipped with a steadily increasing degree of autonomy (cf. Jipp and Ackerman 2016; Sheridan and Parasuraman 2016). The technical viability of cyber-physical systems that integrate human interaction has meanwhile achieved broad attention (see for example Parasuraman and Wickens 2008). Often, the autonomy in CPS emerges from self-organization principles that are used for coordination as well as from integrating artificial intelligence (AI)-enabled algorithms—as also stipulated in European Commission (2018) for the example of the European Union.

Today's cyber-physical systems already comprise a huge number of physical operation and sensing equipment that has to be orchestrated for secure and reliable operation—prominent examples are the electric energy grid, modern transportation systems, or environmental management systems (Rapp et al. 2011). As yet, human operators still often monitor and control a hierarchically organized CPS and aggregate information from lower-level subsystems. Supervisory control and data acquisition (SCADA) systems—as an example from the energy sector—provide a view on and allow for controlling a decentralized process and are thus a state-of-the-art means (Cardwell and Shebanow 2016). As the complexity and thus the size of optimization and coordination tasks steadily grows, more autonomy is desirable in future CPS (McKee et al. 2018; Platzer 2019). Due to limited predictability, adaptive control is desirable, and adaptive control can be achieved by self-organization. Thus, self-organization seems most promising for the design of adaptive systems (Collier 2003).

A targeted design of algorithms showing a desired emergent behavior needs specialized design patterns. A targeted design of a specific emergent behavior is difficult to achieve, especially when using just general-purpose programming languages (Parzyjegla et al. 2011). Indeed, design patterns like those in Dormans et al. (2012) and Parhizkar et al. (2019) may ease the design process. On the other hand, such patterns are often limited in applicability. There exist meta-heuristics like the combinatorial optimization heuristic for decentralized agents (Hinrichs and Sonnenschein 2017) that exhibit inherent self-organization principles, but they need to be manually adapted to each new use case. Having a systematic methodology that describes, at design time, the construction of self-organizing algorithms or systems step by step would be highly desirable but is hard to achieve for general applicability (Prehofer and Bettstetter 2005). Only a few paradigms and patterns exist, and often they use a rather high abstraction level. The individual adaptation to specific algorithmic or functional goals is mostly left to the designer (Prehofer and Bettstetter 2005), often with scarce advice.


Nevertheless, as soon as changes in the system occur or as a system's dynamics change, corrective adaptation might frequently be necessary. The concept of controlled self-organization addresses this issue by introducing an observer-controller architecture for automated correction of the self-organized behavior of the system. Controlled self-organization is a well-established concept if correction can be achieved by (re-)parameterizing self-organizing entities or agents according to changes in the environment. On the other hand, from time to time it might be necessary to change the algorithmic behavior on a level that needs a redesign because the goal might have changed. Ideally, agents can learn a new coordination behavior by themselves.

In Bremer and Lehnhoff (2021), a first approach for the automated design of emergent behavior using Cartesian genetic programming (CGP) (Miller and Harding 2008) was proposed. CGP is a form of genetic programming. In Bremer and Lehnhoff (2021) it is used to evolve a swarm behavior that is capable of finding good solutions to arbitrary problem instances of global optimization—like particle swarm optimization (PSO). By using CGP, a control program is evolved that guides individual particles through the search space of so far unseen optimization problems. In this way, a set of different particles jointly performs a swarm-based optimization; interaction takes place merely by mutual observation. With this approach, purposeful emergence in self-organizing systems is designed by using machine learning.

In multi-agent systems, machine learning is already often used for solving problems that are somehow difficult to solve with preprogrammed, fixed agent behavior. Instead, agents are supposed to discover a solution to the problem on their own, using machine learning (Buşoniu et al. 2010), often by reinforcement learning. In contrast, the approach from Bremer and Lehnhoff (2021) goes for automatically discovering mechanisms of emergent behavior and self-organization by genetic programming (Koza and Poli 2005). In this way, control programs are learned for individually acting entities in a decentralized system with the goal to jointly solve a specific problem.

One of the problems that have been observed in Bremer and Lehnhoff (2021) is premature convergence. Tuning of the evolution strategy that evolves the program became error prone. Thus, this contribution adds a fitness landscape analysis to the result analysis to have a closer look into the problem structure.

The rest of the paper is organized as follows. It starts with an extended review of related work with a focus on multi-agent reinforcement learning as a related technique and on CGP evolution issues. The approach from Bremer and Lehnhoff (2021) is discussed and several pitfalls as well as first results are presented. A fitness landscape analysis is conducted to research hindering aspects. Huge barriers are identified, calling for an island model with mutations of different magnitudes. Swarms with evolved emergent behavior are assessed by solving standard benchmark problems. The evolved swarms are compared with random search and real particle swarm optimization.


2 Related Work

In general, machine learning algorithms automatically build a mathematical model using sample data. These models are then used to make decisions without a need for specifically programming rules to make these decisions. Starting from the first works of Hebb (1949), many different algorithms and approaches have been developed. Among them are reinforcement learning (Kaelbling et al. 1996; Sutton and Barto 2018), classifiers like support vector machines (Pisner and Schnyer 2020), or artificial neural networks (Van Gerven and Bohte 2017), to name just a few. A good overview can, for example, be found in Burkov (2019).

Reinforcement learning is often applied to intelligent agents for learning to take appropriate actions based on observations from the environment that the agent interacts with (Sutton and Barto 2018). An extension is multi-agent reinforcement learning (MARL) (Busoniu et al. 2008). In MARL, many agents independently learn how to decide on the most rewarding action in a dynamic environment that is disturbed by the other agents. Many MARL algorithms are designed for static and thus stateless games (Buşoniu et al. 2010). But use cases for cooperative games have also been researched and may generate emergent behavior (Martinez-Gil et al. 2017; Tan 1993). Nevertheless, the application is limited as agents still just learn to choose from a predefined set of (singular) actions (Busoniu et al. 2008).

A subset of machine learning algorithms is made up by a special type of evolutionary algorithm: genetic programming (GP) is used to discover solutions to problems automatically by using evolutionary mechanisms like random mutation, crossover, a fitness function, and multiple generations of evolution. Alan Turing was probably the first to raise the question whether programs might be evolved by something like evolution (Turing 1950). After a first implementation by Forsyth (1981) for logical functions represented as tree-encoded programs, many improvements were made (Cramer 1985; Koza 1989, 1990, 1992). One of these improvements is the use of a special phenotype representation that allows leaving computational nodes unused. In general, Cartesian genetic programming (CGP) is a more efficient version of genetic programming; it encodes computer programs as a graph representation (Sotto et al. 2020) and evolves by rewiring the nodes. CGP is an enhancement of a method originally developed for evolving digital circuits (Miller 2003; Miller et al. 1997). CGP has already demonstrated its capabilities in synthesizing complex functions in several different use cases. For example, it has been successfully applied to image processing (Harding et al. 2013), neural network training (Khan et al. 2013), or to the synthesis of Bent functions for cryptography (Hrbacek and Dvorak 2014).

CGP uses an integer-based representation of a directed graph to encode the program. Integers encode addresses in data or functions by addresses in a look-up table. Later versions also experimented with float-based representations (Clegg et al. 2007; Harding et al. 2012). Cartesian genetic programs are mostly evolved using a (1 + λ)-ES with mutation as the only variation operator and a simple selection mechanism (Turner and Miller 2014b). Mutation works on all or just on the active genes (Goldman and Punch 2013). But genetic algorithms with crossover schemes are also in use. Although crossover was initially thought to be useless (Miller et al. 1999), it can help a lot if multiple chromosomes have independent fitness assessments (Walker et al. 2006, 2009).

3 Learning Emergence with Cartesian Genetic Programming

The goal was to automatically generate a swarm-based heuristic for optimization similar to the particle swarm algorithm, i.e. to derive a swarm of individually acting particles. Observations from neighboring particles are supposed to be included into a particle's own move decisions. To achieve this, particles were implemented that can be equipped with a control program learned by CGP. Figure 1 shows the general architecture. A swarm consists of an arbitrary number of particles. In each iteration during optimization, each particle is stepped by a global swarm control, just like in PSO. The global control is also responsible for ranking the particles and detecting the best one (in terms of fitness). When a particle is stepped, the CGP program that determines the new position of the particle is executed and the new fitness value is calculated.
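The following minimal sketch illustrates this stepping scheme. It is not the original implementation; the names (`Particle`, `step_swarm`), the initialization ranges, and the way the evolved CGP program is represented as an ordinary callable (mapping one coordinate, the current fitness, and the best particle's coordinate to a new coordinate) are assumptions made for illustration only.

```python
import random

class Particle:
    """One particle; its move rule is the evolved CGP control program."""
    def __init__(self, dim, program):
        self.position = [random.uniform(-5, 5) for _ in range(dim)]
        self.fitness = float("inf")
        self.program = program                    # evolved CGP control program

    def step(self, best_position, objective):
        # The same learned program is reused for every dimension of the problem.
        self.position = [self.program(x_d, self.fitness, b_d)
                         for x_d, b_d in zip(self.position, best_position)]
        self.fitness = objective(self.position)

def step_swarm(particles, objective, iterations=200):
    """Global swarm control: steps all particles and re-ranks them each iteration."""
    for p in particles:
        p.fitness = objective(p.position)
    best = min(particles, key=lambda p: p.fitness)
    for _ in range(iterations):
        for p in particles:
            p.step(best.position, objective)      # particles observe the best one
        best = min(particles, key=lambda p: p.fitness)
    return best
```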

Fig. 1 General architecture of the swarm and the incorporated particles with the embedded CGP program (cf. Bremer and Lehnhoff 2021)


Fig. 2 Computational graph and its genotype representation in Cartesian genetic programming; modified after Bremer and Lehnhoff (2021)

Experiments with different inputs to the CGP program and different internal particle states were conducted, as described later. Currently, only observations of other particles in the swarm are considered. Two succeeding stages of extension will be the integration of inter-entity coordination (1) by using stigmergy and (2) by communication through exchanging messages. Finally, we are opting for problem solving with multi-agent systems. When learning the control program by CGP, the same swarm setting is used. For each CGP solution candidate, several swarms were set up. Each particle was equipped with the solution candidate program. Each swarm was run for several iterations. Finally, the mean achieved optimization result evaluated the solution candidate.

Cartesian genetic programming is an advanced form of genetic programming (GP) designed to evolve acyclic graphs (Koza and Koza 1992). The nodes are indexed by their Cartesian coordinates and represent functions of a computational structure (the graph) (Miller and Smith 2006). Many traditional GP approaches suffered from the so-called bloat effect (Miller 2001)—programs steadily growing in complexity without any significant objective improvement (Turner and Miller 2014a). CGP does not suffer from this problem (Miller 2001). A chromosome comprising function genes as well as connection genes and output genes encodes the computational graph that represents the executable program. Figure 2 shows an example with six computational nodes, two inputs and two outputs. The gene of a function represents the index in an associated look-up table (0 to 3 in the example). Each computational node is encoded by a gene sequence consisting of the function look-up index and the connected inputs (or outputs of other computation nodes) that are fed into the function. Thus, the length of each function node gene sequence is n + 1, with n being the arity of the function.
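To make this encoding concrete, the following sketch decodes and executes a small integer genotype of this form. It is illustrative only: the function table and the example genotype are made up and do not reproduce Fig. 2; each node contributes one function gene followed by its connection genes.

```python
# Hypothetical function look-up table: (name, arity, implementation)
FUNCTIONS = [
    ("add", 2, lambda a, b: a + b),
    ("sub", 2, lambda a, b: a - b),
    ("mul", 2, lambda a, b: a * b),
    ("div", 2, lambda a, b: a / b if b != 0 else 1.0),
]

def run_cgp(genotype, outputs, inputs, n_nodes):
    """Decode and execute an integer CGP genotype.

    genotype: per node, one function gene followed by its connection genes;
    outputs:  addresses (inputs or node outputs) wired to the program outputs.
    """
    values = list(inputs)                  # addresses 0..len(inputs)-1: the inputs
    genes = iter(genotype)
    for _ in range(n_nodes):
        name, arity, func = FUNCTIONS[next(genes)]
        args = [values[next(genes)] for _ in range(arity)]
        values.append(func(*args))         # the node's output gets the next address
    # Node addresses that are never referenced correspond to "inactive" nodes; here
    # they are still computed but simply not read; a real implementation skips them.
    return [values[o] for o in outputs]

# Example: two inputs, two nodes, one output reading node address 3,
# i.e. the program (x0 + x1) * x0:
print(run_cgp([0, 0, 1, 2, 2, 0], outputs=[3], inputs=[2.0, 3.0], n_nodes=2))
```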


The graph in traditional CGP is acyclic. Parameters that are fed into a computation node may only be collected from previous nodes or from the inputs into the system. Outputs are connected to any computation node output or directly to any input. Not all outputs of computational nodes are used as input for other functions. In fact, usually many such unused computational nodes occur in evolved CGP (Miller 2003). These nodes are considered inactive, do not contribute to the encoded program's output, and are thus not executed during interpretation of the program. In this way, phenotypes are of variable length whereas the size of the chromosome is static.

A computational graph in CGP is typically evolved using a (1 + λ)-evolution strategy, i.e. with probabilistic mutation but no crossover (Turner and Miller 2014c). CGP allows for unused nodes; thus, the maximum number of nodes is specified a priori. It has been shown to be advantageous to evolution to overestimate the number of nodes due to an induced higher genetic drift (Miller and Smith 2006). For the experiments in Bremer and Lehnhoff (2021) as well as for the landscape analysis in this contribution, an extension to the ECJ toolkit (Miller and Series 2011; Scott and Luke 2019) has been used. In addition to the traditional integer representation as in Fig. 2, the ECJ version also supports a real-valued representation. For each gene, alleles are allowed to range over [0, 1]. Prior to executing the program, all real values are rounded back to integers for interpretation as described above. With real-valued encoding, it becomes possible to apply a real-valued crossover operator. In this way, the convergence performance is significantly improved, at least for regression tasks (Clegg et al. 2007). In the integer-encoded case, crossover is usually left unused. Nevertheless, more operators are possible with continuous encoding, and discovering improved genetic operators for other problems remains an open area of research. The authors chose to use real-valued encoding.

For learning the internal particle control, setting up a standard CGP scenario gives a good start. As function set, the authors in Bremer and Lehnhoff (2021) chose to start with the four basic arithmetic operations, a generator for normally distributed random numbers, the classical if-then-else statement, and the set of standard order relations. The current position (in search space), the current objective value, and the position of the best particle are provided as input to the particle control program. The output of the program was set to be the new particle position. Initially, an additional parameter v, meant to be comparable to the velocity in particle swarm optimization (Kennedy and Eberhart 1995), was introduced; it was output and fed back as input to the next iteration. In this way, it was meant to enable the particle to have a more complex inner state apart from the mere position. But CGP seems to make no targeted use of it. Thus, it was changed to an automatic increment of the current position. Instead, Bremer and Lehnhoff (2021) extended the function set by a function that is able to determine the current rank of the particle (compared with all others). Moreover, the numbers 0–9 were given as constant functions. In many training processes it was observed that CGP learned to construct needed constants by itself. This was for example achieved by using the if-statement to construct a 1 and then adding it up several times. Usually, this seemed to be a waste of necessary evolutions as well as of needed computation nodes.


With the constants introduced, CGP could use the numbers directly. A further improvement was to reuse the same learned program for all dimensions of a multi-variate problem. Figure 1 shows the final architecture of a particle and its embedding in the swarm.

The next challenge was the decision for the objective function. First, a single optimization problem was used for evaluating the fitness of a swarm. The swarm solves each problem several times and the mean achieved residual distance to the known global optimum was taken to evaluate the performance of the swarm in solving the problem. This approach failed, because CGP learned to solve the given optimization problem directly and made the swarm output the problem solution hard-coded. So, no generalization was achieved. Actually, this was to be expected. With the next try, a set of different optimization problems with optimal solutions at different positions was used. Without different optimum positions, it would have resulted in a directly learned result again. With a given set of objectives that are all to be solved independently by the swarm, a sort of swarm behavior could already be generated—but not as expected. The swarm learned to move along a trace that passes through all the optima of the different problems. Again, this was not optimization. In order to tackle this problem, a random offset was added. For each problem instance f_i, a random offset r uniformly sampled from the problem domain was generated and added to x. The offset is fixed for one training episode. So the swarm solves f_i(x + r), resulting in a randomly translated optimum x*. Now it was possible to observe a first real optimization behavior within the trained swarms.

When just using the goodness of the optimization results as criterion for training, the achieved swarm behavior resembles more or less a random search. As the goal was to generate a swarm behavior that exhibits some emergent characteristics and shows self-organization, further criteria evaluating these characteristics needed to be added. Criteria for quantifying emergence are for example known from biology (Balaban et al. 2018) or neuroscience (Hoel et al. 2013a). Applications in computing science are scarce. An example for detecting emergence in technical systems can be found in Shahbakhsh and Nieße (2019). Many fractal analysis tools from chaos theory have a rather high computational complexity, at least for calling them within an objective function for training that has to be evaluated millions of times. The experiments in Bremer and Lehnhoff (2021) were continued by integrating the so-called correlation length as known from fitness landscape analysis (Watson 2010). When analyzing fitness landscapes, the correlation length is a criterion that measures the number of iterations after which the majority of succeeding solutions is statistically no longer correlated. It is calculated by λ = −1 / ln(ρ(1)) from the autocorrelation

E[fk fk+σ ] − E[fk ]E[fk+σ ] V [fk ]

(1)


of a series of consecutively sampled objective values {f_k}_{k=1}^m. When using the inverse version, one can maximize this distance. As an additional indicator for the desired swarm behavior, the following improvement relation was added as a criterion:

r_imp = ( 1 + (n_dec − n_inc) / (n_dec + n_inc) ) / 2    (2)

to maximize the number of improvements n_inc over decreasing optimization steps n_dec. Finally, the eventually reached swarm diameter was also integrated in order to measure contraction. All criteria were combined in a scalarization approach.
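The following sketch shows how these training criteria might be computed for one evaluated swarm run. It only illustrates Eqs. (1) and (2) and the swarm-diameter criterion; the weights and the exact way the criteria enter the scalarized training fitness are not reported in the text, so the combination at the end is a placeholder assumption.

```python
import math
import statistics as stats

def autocorrelation(f, lag=1):
    """Empirical autocorrelation rho(lag) of a fitness series, cf. Eq. (1)."""
    n = len(f) - lag
    mean_a, mean_b = stats.mean(f[:n]), stats.mean(f[lag:lag + n])
    cov = sum((f[k] - mean_a) * (f[k + lag] - mean_b) for k in range(n)) / n
    return cov / stats.pvariance(f)

def correlation_length(f):
    """lambda = -1 / ln(rho(1)); larger values indicate a smoother fitness series."""
    rho = abs(autocorrelation(f, 1))
    return float("inf") if rho >= 1.0 else -1.0 / math.log(rho)

def improvement_ratio(f):
    """r_imp as in Eq. (2), from the numbers of decreasing/increasing steps."""
    n_dec = sum(1 for a, b in zip(f, f[1:]) if b < a)
    n_inc = sum(1 for a, b in zip(f, f[1:]) if b > a)
    total = n_dec + n_inc
    return 0.5 if total == 0 else (1 + (n_dec - n_inc) / total) / 2

def swarm_diameter(positions):
    """Largest pairwise distance of the final particle positions (contraction)."""
    return max(math.dist(p, q)
               for i, p in enumerate(positions) for q in positions[i + 1:])

def training_fitness(fitness_series, final_positions, weights=(1.0, 1.0, 1.0, 1.0)):
    # Placeholder scalarization; the actual weights and signs are not given in the text.
    w1, w2, w3, w4 = weights
    return (w1 * fitness_series[-1]
            + w2 / (1 + correlation_length(fitness_series))
            + w3 * (1 - improvement_ratio(fitness_series))
            + w4 * swarm_diameter(final_positions))
```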

4 Results

For the experiments, an island model for CGP training was used with two (μ + λ)-ES instances. One was set to μ = 20 and λ = 100 with a mutation probability of 0.04. The other was set to μ = 8 and λ = 16 with a mutation probability of 0.4. In a first approximation, the values were determined empirically. This approach results in a rather steadily evolving ES sending individuals every 1000 iterations and a small, rather fast-paced, fluctuating one sending every 100 iterations, thus ensuring liveliness in exploration. The number of nodes was set to 20. As training optimization problems, classical benchmark functions like Rosenbrock, Bohachevsky, Alpine or Booth (Jamil 2013) were used. Because each candidate has to be evaluated several times for each of these functions, the number of swarm iterations during the learning phase was limited to 200. The number of particles was set to 5 during training due to performance issues.

Tables 1 and 2 show the best results. The learned optimization algorithm was compared with a random search and with a real PSO. Random search was used as the bottom line that needs to be beaten. Table 1 compares the performance of the swarm, achieved with the number of particles set to 10 and with a budget of 10,000 objective evaluations. The performance was tested on six different objective functions, three of which had not been used for learning.
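A compact sketch of such an island model with two differently parameterized (μ + λ)-ES instances is given below. It is not the original ECJ-based setup: the genome is simply the real-valued CGP chromosome (alleles in [0, 1]), `fitness` stands for the expensive evaluation of the decoded swarm program on the training problems, and migration via plain in/out lists is a hypothetical simplification.

```python
import random

def mutate(genome, p_mut, sigma=0.1):
    """Gaussian mutation of a real-valued CGP genome; alleles stay in [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
            if random.random() < p_mut else g for g in genome]

def es_island(pop, fitness, mu, lam, p_mut, send_every, inbox, outbox, generations):
    """One (mu + lambda)-ES island; emits its best individual every send_every steps."""
    for gen in range(1, generations + 1):
        offspring = [mutate(random.choice(pop), p_mut) for _ in range(lam)]
        pop = sorted(pop + offspring, key=fitness)[:mu]      # plus-selection
        if inbox:                                            # adopt an incoming migrant
            pop[-1] = inbox.pop()
            pop.sort(key=fitness)
        if gen % send_every == 0:
            outbox.append(list(pop[0]))
    return pop

# The two configurations reported above would roughly correspond to:
#   island A: mu=20, lam=100, p_mut=0.04, send_every=1000
#   island B: mu=8,  lam=16,  p_mut=0.4,  send_every=100
# Tiny smoke test with a dummy fitness (sum of alleles) standing in for the
# expensive swarm evaluation:
pop0 = [[random.random() for _ in range(60)] for _ in range(20)]
a_to_b, b_to_a = [], []
best = es_island(pop0, fitness=sum, mu=20, lam=100, p_mut=0.04,
                 send_every=1000, inbox=b_to_a, outbox=a_to_b, generations=20)[0]
```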

Table 1 Comparison of the best learned algorithm with random search and PSO when using a budget of 10,000 objective evaluations (cf. Bremer and Lehnhoff 2021)

f | Learned algorithm | Random | PSO
1 | 5.555 × 10^-3 ± 2.865 × 10^-3 | 1.638 × 10^-2 ± 1.667 × 10^-2 | 4.692 × 10^-5 ± 1.251 × 10^-4
2 | 2.928 × 10^-1 ± 1.824 × 10^-1 | 1.06 × 10^-1 ± 1.187 × 10^-1 | 2.351 × 10^-1 ± 1.045 × 10^0
3 | 1.822 × 10^-1 ± 3.984 × 10^-1 | 7.27 × 10^-3 ± 6.257 × 10^-3 | 2.335 × 10^-4 ± 4.163 × 10^-4
4 | −1.013 × 10^0 ± 1.242 × 10^-2 | −9.927 × 10^-1 ± 4.269 × 10^-2 | −1.031 × 10^0 ± 9.789 × 10^-4
5 | 3.837 × 10^0 ± 5.941 × 10^0 | 2.714 × 10^-2 ± 2.603 × 10^-2 | 3.529 × 10^-4 ± 1.4 × 10^-3
6 | 3.478 × 10^-5 ± 5.81 × 10^-5 | 4.114 × 10^-4 ± 8.626 × 10^-4 | 1.365 × 10^-9 ± 3.471 × 10^-9


Table 2 Comparison of the best learned algorithm with random search and PSO when using a budget of 200,000 objective evaluations (cf. Bremer and Lehnhoff 2021)

f | Learned algorithm | Random | PSO
1 | 4.971 × 10^-7 ± 3.86 × 10^-7 | 7.158 × 10^-4 ± 7.738 × 10^-4 | 1.2 × 10^-12 ± 4.33 × 10^-12
2 | 1.668 × 10^-5 ± 1.902 × 10^-5 | 6.514 × 10^-3 ± 6.835 × 10^-3 | 5.476 × 10^-10 ± 1.347 × 10^-9
3 | 7.914 × 10^-4 ± 1.199 × 10^-3 | 1.329 × 10^-3 ± 6.917 × 10^-4 | 6.947 × 10^-8 ± 1.532 × 10^-7
4 | −1.032 × 10^0 ± 1.798 × 10^-6 | −1.03 × 10^0 ± 1.453 × 10^-3 | −1.032 × 10^0 ± 2.323 × 10^-12
5 | 1.553 × 10^-6 ± 1.293 × 10^-6 | 1.042 × 10^-3 ± 1.079 × 10^-3 | 1.127 × 10^-11 ± 3.635 × 10^-11
6 | 3.85 × 10^-13 ± 6.305 × 10^-13 | 4.378 × 10^-7 ± 8.11 × 10^-7 | 5.304 × 10^-20 ± 2.652 × 10^-19

Compared with the pure random search, the learned optimization algorithm already behaves rather well, except for the Booth function. For the Rosenbrock function (4-dimensional) and the Six Hump Camel Back function (2-dimensional), it is already competitive with the PSO. Table 2 shows the results when using a budget of 200,000 objective evaluations, demonstrating that the learned algorithm is significantly better than random search.

In order to detect emergent behavior, or at least to distinguish it from pure random behavior in the system, a quick analysis was conducted using two criteria: the correlation dimension (Grassberger and Procaccia 1983) and the Hurst exponent (Hurst 1956, 1957). The correlation dimension is a characteristic measure describing the geometry of chaotic attractors. One of the main applications of the Grassberger–Procaccia algorithm is to distinguish between stochastic and deterministically chaotic time sequences (Grassberger et al. 1991). It was used to analyze the fitness sequence generated along the path of particles. Table 3 shows example results for some test runs, revealing that the particles of the learned algorithm behave similarly to the ones from PSO when attracted by good solutions. Each run reflects a different objective function. However, when attacking function 4 from De Jong's test suite (De Jong 1975), which incorporates noise, the learned particle behavior seems to be attracted by more local optima at the same time (larger correlation dimension). The random approach shows no attraction behavior at all.

The Hurst exponent is a measure for the long-term memory of a time series. In this way, the long-term statistical dependencies (excluding dependencies from cycles) seen in the series are evaluated (Weron 2002). A Hurst exponent of 0.5 denotes white noise. Larger values denote positive dependency, smaller values negative dependency. The results in Table 4 suggest that the PSO as well as the learned algorithm show a behavior of systematically improving solutions, whereas the random search (as expected) exhibits mostly white noise.
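The chapter does not state which estimators were used for these two measures. As an illustration of the kind of analysis involved, the sketch below computes a simple rescaled-range (R/S) estimate of the Hurst exponent for a fitness series; the function names and window sizes are assumptions made here, not the original tooling.

```python
import math
import random

def rescaled_range(series):
    """R/S statistic of one window: range of the cumulative mean-adjusted sums
    divided by the standard deviation of the window."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cums, s = [], 0.0
    for d in devs:
        s += d
        cums.append(s)
    std = math.sqrt(sum(d * d for d in devs) / n)
    return (max(cums) - min(cums)) / std if std > 0 else 0.0

def hurst_exponent(series, window_sizes=(8, 16, 32, 64, 128)):
    """Slope of log(mean R/S) versus log(window size); ~0.5 indicates white noise."""
    xs, ys = [], []
    for w in window_sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        rs = [rescaled_range(c) for c in chunks if len(c) == w]
        rs = [v for v in rs if v > 0]
        if rs:
            xs.append(math.log(w))
            ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)      # least-squares slope
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# White noise should yield an exponent close to 0.5:
print(hurst_exponent([random.gauss(0, 1) for _ in range(1024)]))
```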


Table 3 Fractal correlation dimension as criterion to distinguish stochastic and deterministic behavior (cf. Bremer and Lehnhoff 2021)

Function | Learned algorithm | Random | PSO
Sphere | 2.805 | 1.135 × 10^-15 | 2.659
Alpine | 0.081 | −6.107 × 10^-16 | 1.447
DeJong f4 | 2.295 | 2.928 × 10^-16 | 0.276

Table 4 Hurst exponent as indicator for long-term memory of the swarm's dynamic system (cf. Bremer and Lehnhoff 2021)

Function | Learned algorithm | Random | PSO
Sphere | 0.936 | 0.556 | 0.872
Alpine | 0.933 | 0.544 | 0.901
DeJong f4 | 0.949 | 0.497 | 0.758

For the experiments, two instances of evolution strategies were used in an island model for evolving the programs, because observations showed a strong bias towards premature convergence. Escaping local optima seemed to be a hard task. To get to the bottom of the problem in more detail, an additional fitness landscape analysis was conducted. Fitness landscape analysis is a tool for scrutinizing the structure and characteristics of—especially non-linear—optimization problems (Watson 2010). One aspect of fitness landscape analysis is information analysis. Analyzing random paths on a given fitness landscape gives a first impression of the underlying problem structure and possible obstacles during optimization. An extended analysis of entropy measures based on a series of fitness values {f_k} was proposed in Vassilev et al. (2000). The series of fitness values is sampled by a random walk. Based on ideas from algorithmic information theory (Chaitin 1987) and Shannon's entropy concept (Shannon 1948), a characterization of the distribution and the number of local optima along the path can be derived as a measure of the complexity of the fitness landscape. Fitness values along random paths on the landscape are interpreted as an ensemble of three basic elements:

• flat areas (neighboring points have similar fitness),
• isolated points (surrounded merely by better, or worse points), and
• slope points (neither isolated nor flat).

First, each path is transformed into a sequence of tokens S(ε) = s_1 s_2 ... s_n over the alphabet {−1, 0, 1} by

S_i = −1, if f_i − f_{i−1} < −ε;   S_i = 0, if |f_i − f_{i−1}| ≤ ε;   S_i = 1, if f_i − f_{i−1} > ε    (3)


for a predefined ε ∈ [0, max f_k], denoting a percentage of the maximum difference between any two fitness values along the path (cf. Vassilev et al. 2000). A token string S(ε) then contains information on the structure of the landscape along a randomly chosen path. One can further refine this representation and define objects by two successive tokens. For example, the sequence −1 1 models a change from downslope to upslope and thus a trough. Often, an entropy measure for such an ensemble of objects is calculated according to Vassilev et al. (2000) as a measure of the diversity of objects. But also the modality can be derived. The modality must be measured by a classification of objects in order to determine the frequency or absolute number of (local) optima. First, the partial information content is determined (Vassilev et al. 2000). To achieve this, the string S(ε) is transformed into S'(ε) = o_1 o_2 ... o_μ over the alphabet {−1, 1}. S'(ε) is calculated such that it contains the shortest string that represents the alternations from uphill to downhill changes along the path, by recursively removing successive identical objects and zeros. The partial information content is defined as (Vassilev et al. 2000):

M(ε) = |S'(ε)| / |S(ε)| ∈ [0, 1].    (4)

A value of 1 denotes the maximum modality. The absolute number of (local) minima (according to a given ε) can be derived by ⌊n · M(ε) / 2⌋, with n = |S(ε)|. All these measures are sensitive to the choice of ε, which determines the magnitude of fitness difference that is still interpreted as a change. Small values lead to a higher sensitivity to changes in fitness between neighboring solutions.

An experiment was conducted with different values for ε. As length of the path, 500 was chosen. One observation was that obviously a series of infinitely high barriers exists that separate the basins of attraction of different local optima. Figure 3 shows a single random example series of fitness values. These barriers can be explained by intermediate solutions that encode nonsense control programs leading to no evaluable swarm optimization behavior at all. These barriers are far more frequent than expected and obviously surround rugged funnels to a wide variety of local optima. This can be seen in Fig. 4, which shows the means and variations of the modality of 100 different random paths for a variety of ε.
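A compact sketch of this information analysis is given below, using −1 for the downhill token. It merely illustrates Eqs. (3) and (4) and the derived count of local minima on a random fitness walk; the function names and the example walk are made up for illustration.

```python
import random

def tokenize(fitness_walk, eps):
    """Eq. (3): map consecutive fitness differences to the tokens -1, 0, 1."""
    tokens = []
    for prev, cur in zip(fitness_walk, fitness_walk[1:]):
        d = cur - prev
        tokens.append(-1 if d < -eps else (1 if d > eps else 0))
    return tokens

def partial_information_content(tokens):
    """Eq. (4): |S'(eps)| / |S(eps)| after removing zeros and repeated symbols."""
    reduced = []
    for t in tokens:
        if t != 0 and (not reduced or reduced[-1] != t):
            reduced.append(t)
    return len(reduced) / len(tokens)

def estimated_local_minima(tokens):
    """Number of local minima along the walk, floor(n * M(eps) / 2)."""
    return int(len(tokens) * partial_information_content(tokens)) // 2

# Illustrative use on a random walk of 500 fitness values:
walk = [random.uniform(0.0, 1.0) for _ in range(500)]
eps = 0.05 * (max(walk) - min(walk))     # threshold: 5% of the maximum difference
tokens = tokenize(walk, eps)
print(partial_information_content(tokens), estimated_local_minima(tokens))
```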

Fig. 3 Example series of fitness values along a random path on the fitness landscape. Values are cut above 10000 to avoid numeric problems with NaN-values


Fig. 4 Mean number of optima along random paths of length 500 for different difference thresholds (as percentage of the maximum difference in fitness values)

The barriers often also exhibit a considerable width. In order not to run into problems with non-evaluable solutions, fitness values were cut at 10000. Previous experiments suggested that such high values seem to inevitably lead to a NaN-valued region of fitness. Figure 4 shows that on a scale of 25% or 50% of the fitness difference threshold, a notable ruggedness still exists. Larger values for ε should actually lead to a flat function. For smaller values, local minima exist with a wide range of magnitudes.

Against this background, the chosen procedure of two islanded evolution strategy instances with different magnitudes of mutation seems to be correct. To cope with such characteristics of the fitness function, one needs an optimization algorithm that, on the one hand, explores a wide range of the objective function across barriers that can only be overcome by jumps (induced by large mutations) and, on the other hand, copes with escaping local minima in a small-scale exploration inside the separated, highly rugged basins. Another promising approach could be the use of Lévy distributions for sampling offspring solutions or a combination of different heuristics (Nápoles et al. 2014).

Looking at the mid-term agenda, many research challenges still have to be addressed. Among them are the following open questions.

• During evolution, program candidates have to be evaluated also based on the achieved emergent behavior that the swarm exhibits. Some first approaches already deal with the question of detecting emergence in distributed systems (Chen et al. 2007; Mihaylov and Spallanzani 2016; Singh et al. 2017; Spaanenburg 2007). Nevertheless, the question for the appropriate approach as well as the best way to describe the wanted behavior a priori is still open.

• As this research is supposed to address the evolution of emergent behavior in the end, integration of a means for quantifying emergence is also needed in order to decide which of two solution candidates exhibits a more promising behavior.


For emergence quantification, even less groundwork is as yet available. First attempts for modeling and measuring causal relations on a spatial (of interest here for positions on a fitness landscape) and temporal (of interest for the relation between succeeding solutions in a swarm) level can for example be found in Hoel et al. (2013b) and Hovda (2008). The persistence of information (Ball et al. 2010) or borrowings from biology (Balaban et al. 2018) could also point into promising directions.

• The presented evolution problem is actually a multi-objective problem that so far is tackled by a scalarization approach (Jahn 1985). Some first specialized use cases for multi-objective CGP can already be found that harness concepts from the famous NSGA-II approach (Deb et al. 2002) for program evolution (Dourado and Pedrino 2020), but basically for finding smaller programs (Kalkreuth and Rudolph 2016; Kaufmann and Platzner 2018). Thus, the swarm control programs could be evolved by a multi-objective algorithm. On the other hand, this would most likely generate severe performance problems, as for each iteration a complete population of solution candidates had to be evaluated. Performance is already an issue in the single-objective case. For each evaluation of a solution candidate, an optimization procedure has to be run several times due to inherent stochasticity, and different evaluation criteria have to be calculated.

• Finally, the question for the best set of functions is still open. Ideally, this set can be divided into an always-necessary base set and problem-specific extensions. In agent-based systems, also negotiation-specific functions (messaging, acknowledgement, etc.) need to be integrated. Most likely, CGP versions with integrated recurrent execution cycles (Turner and Miller 2014c) or self-modification capabilities (Harding et al. 2011) are most suitable for this use case.

Further steps will comprise the inclusion of (1) stigmergy concepts (Theraulaz and Bonabeau 1999) and (2) message-based information exchange (Poslad 2007). First simple tests of evolving agent-based negotiations via messages are already promising for two agents.

5 Conclusion

A constantly advancing interconnectedness is producing a growing number of use cases for cyber-physical systems and for the necessary coordination and optimization tasks. Many of these use cases are likely to need self-organization mechanisms. As there is still a lack of widely applicable and generalizable design patterns, Bremer and Lehnhoff (2021) presented a first example for automatically evolving emergent swarm behavior for optimization. This can be seen as a first demonstration that emergent behavior can in general be learned by machine learning. Although the observations that are fed into the particles are still rather limited, Cartesian Genetic Programming was already able to come up with solutions that are significantly better than mere random search, and even with some (admittedly rare) instances that are on the verge of becoming competitive with established particle swarm optimization.


than mere random search and even with some (admittedly rare) instances that are on the verge of becoming competitive with established particle swarm optimization. A fitness landscape analysis of the evolution task has brought to light the major obstacle that hinders the evolution process and pointed to future research directions to overcome current limitations. Recently, recurrent CGP has been developed to foster the evolution of recurrent artificial neural networks. For some other use cases, the recurrent version also showed superior performance (Turner and Miller 2017). Thus, follow-up research will also integrate these newer extensions to CGP. Extensions like recurrent CGP that rather affect the interpretation or execution of the phenotype may already now be evolved by the distributed approach with just an adaptation of the possible choices of other agents' outputs as inputs for the own node. In the same way, different levels-back parametrizations could be handled (Inácio et al. 2016). So far, the results are promising and the extension by a fitness landscape analysis has shown additional future research directions to overcome premature convergence and to enable a broader and more efficient exploration of the search space even for learning more complex optimization procedures.

References Balaban, V., Lim, S., Gupta, G., Boedicker, J., Bogdan, P.: Quantifying emergence and selforganisation of enterobacter cloacae microbial communities. Sci. Rep. 8(1), 1–9 (2018) Ball, R.C., Diakonova, M., MacKay, R.S.: Quantifying emergence in terms of persistent mutual information. Adv. Complex Syst. 13(03), 327–338 (2010) Bremer, J., Lehnhoff, S.: Towards evolutionary emergence. Ann. Comput. Sci. Inf. Syst. 26, 55–60 (2021) Burkov, A.: The Hundred-page Machine Learning Book, vol. 1. Andriy Burkov Canada (2019) Busoniu, L., Babuska, R., De Schutter, B.: A comprehensive survey of multiagent reinforcement learning. IEEE Trans. Syst. Man Cybernet. Part C (Applications and Reviews) 38(2), 156–172 (2008) Bu¸soniu, L., Babuška, R., De Schutter, B.: Multi-agent reinforcement learning: an overview. Innovations in Multi-agent Systems and Applications-1, pp. 183–221 (2010) Calvaresi, D., Appoggetti, K., Lustrissimini, L., Marinoni, M., Sernani, P., Dragoni, A.F., Schumacher, M.: Multi-agent systems’ negotiation protocols for cyber-physical systems: results from a systematic literature review. In: ICAART (1), pp. 224–235 (2018) Cardwell, L., Shebanow, A.: The efficacy and challenges of scada and smart grid integration. J. Cyber Secur. Inf. Syst. 1(3), 1–7 (2016) Chaitin, G.J.: Algorithmic information theory. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, Cambridgeshire, New York (1987) Chen, C.C., Nagl, S.B., Clack, C.D.: Specifying, detecting and analysing emergent behaviours in multi-level agent-based simulations. In: Summer Computer Simulation Conference 2007, SCSC’07, Part of the 2007 Summer Simulation Multiconference, SummerSim’07, vol. 2, pp. 969–976. ACM: Association for Computing Machinery (2007) Clegg, J., Walker, J.A., Miller, J.F.: A new crossover technique for cartesian genetic programming. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, pp. 1580–1587 (2007) Collier, J.: Fundamental properties of self-organization. Causality, emergence, self-organisation, pp. 287–302 (2003)


Cramer, N.L.: A representation for the adaptive generation of simple sequential programs. In: Grefenstette, J.J. (ed.), Proceedings of an International Conference on Genetic Algorithms and the Applications, pp. 183–187. Carnegie-Mellon University, Pittsburgh, PA, USA (1985) De Jong, K.A.: Analysis of the behavior of a class of genetic adaptive systems. Ph.D. Thesis, University of Michigan, Ann Arbor (1975) Di Marzo Serugendo, G.: Engineering emergent behaviour: a vision. In: Hales, D., Edmonds, B., Norling, E., Rouchier, J. (eds.) Multi-Agent-Based Simulation III, pp. 1–7. Springer, Berlin (2003) Dormans, J., et al.: Engineering emergence: applied theory for game design. Universiteit van Amsterdam [Host] (2012) Dourado, A.M.B., Pedrino, E.C.: Multi-objective cartesian genetic programming optimization of morphological filters in navigation systems for visually impaired people. Appl. Soft Comput. 89, 106,130 (2020) European Commission: Draft Ethics Guidelines for Trustworthy AI. Technical Report, European Commission (2018) Forsyth, R.: BEAGLE a darwinian approach to pattern recognition. Kybernetes 10(3), 159–166 (1981). https://doi.org/10.1108/eb005587 Goldman, B.W., Punch, W.F.: Reducing wasted evaluations in cartesian genetic programming. In: European Conference on Genetic Programming, pp. 61–72. Springer (2013) Grassberger, P., Procaccia, I.: Characterization of strange attractors. Phys. Rev. Lett. 50(5), 346 (1983) Grassberger, P., Schreiber, T., Schaffrath, C.: Nonlinear time sequence analysis. Int. J. Bifurc. Chaos 1(03), 521–547 (1991) Harding, S., Graziano, V., Leitner, J., Schmidhuber, J.: Mt-cgp: Mixed type cartesian genetic programming. In: Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, GECCO ’12, pp. 751–758. Association for Computing Machinery, New York, NY, USA (2012). https://doi.org/10.1145/2330163.2330268 Harding, S., Leitner, J., Schmidhuber, J.: Cartesian genetic programming for image processing. In: Genetic programming theory and practice X, pp. 31–44. Springer (2013) Harding, S.L., Miller, J.F., Banzhaf, W.: Self-modifying cartesian genetic programming. In: Cartesian Genetic Programming, pp. 101–124. Springer (2011) Hebb, D.O.: The organization of behavior; a neuropsycholocigal theory. A Wiley Book in Clinical Psychology, vol. 62, p. 78 (1949) Hinrichs, C., Sonnenschein, M.: A distributed combinatorial optimisation heuristic for the scheduling of energy resources represented by self-interested agents. Int. J. Bio-Inspir. Comput. 10(2), 69–78 (2017) Hoel, E.P., Albantakis, L., Tononi, G.: Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. 110(49), 19790–19795 (2013a). https://doi.org/10.1073/pnas. 1314922110. https://www.pnas.org/content/110/49/19790 Hoel, E.P., Albantakis, L., Tononi, G.: Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. 110(49), 19790–19795 (2013b) Hovda, P.: Quantifying weak emergence. Mind. Mach. 18(4), 461–473 (2008) Hrbacek, R., Dvorak, V.: Bent function synthesis by means of cartesian genetic programming. In: International Conference on Parallel Problem Solving from Nature, pp. 414–423. Springer (2014) Hurst, H.E.: The problem of long-term storage in reservoirs. Hydrol. Sci. J. 1(3), 13–27 (1956) Hurst, H.E.: A suggested statistical model of some time series which occur in nature. 
Nature 180(4584), 494–494 (1957) Inácio, T., Miragaia, R., Reis, G., Grilo, C., Fernandéz, F.: Cartesian genetic programming applied to pitch estimation of piano notes. In: 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7. IEEE (2016) Jahn, J.: Scalarization in multi objective optimization. In: Mathematics of Multi Objective Optimization, pp. 45–88. Springer (1985) Jamil, M., Yang, X.S., Zepernick, H.J.: 8 - test functions for global optimization: a comprehensive survey. In: Yang, X.S., Cui, Z., Xiao, R., Gandomi, A.H., Karamanoglu, M. (eds.), Swarm Intel-


ligence and Bio-Inspired Computation, pp. 193–222. Elsevier, Oxford (2013). https://doi.org/10. 1016/B978-0-12-405163-8.00008-9 Jipp, M., Ackerman, P.L.: The impact of higher levels of automation on performance and situation awareness. J. Cognit. Eng. Dec. Making 10(2), 138–166 (2016). https://doi.org/10.1177/ 1555343416637517 Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. J. Artif. Intell. Res. 4, 237–285 (1996) Kalkreuth, R., Rudolph, G., Krone, J.: More efficient evolution of small genetic programs in cartesian genetic programming by using genotypie age. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 5052–5059. IEEE (2016) Kaufmann, P., Platzner, M.: Combining local and global search: a multi-objective evolutionary algorithm for cartesian genetic programming. In: Inspired by Nature, pp. 175–194. Springer (2018) Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95-International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995) Khan, M.M., Ahmad, A.M., Khan, G.M., Miller, J.F.: Fast learning neural networks using cartesian genetic programming. Neurocomputing 121, 274–289 (2013) Koza, J.R.: Hierarchical genetic algorithms operating on populations of computer programs. In: Sridharan, N.S. (ed.) Proceedings of the Eleventh International Joint Conference on Artificial Intelligence IJCAI-89, vol. 1, pp. 768–774. Morgan Kaufmann, Detroit, MI, USA (1989) Koza, J.R.: Non-linear genetic algorithms for solving problems. United States Patent 4935877 (1990). Filed may 20, 1988, issued june 19, 1990, 4,935,877. Australian patent 611,350 issued september 21, 1991. Canadian patent 1,311,561 Issued December 15, 1992 Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992) Koza, J.R., Koza, J.R.: Genetic programming: on the programming of computers by means of natural selection, vol. 1. MIT Press (1992) Koza, J.R., Poli, R.: Genetic programming. In: Search Methodologies, pp. 127–164. Springer (2005) Martinez-Gil, F., Lozano, M., Fernandez, F.: Emergent behaviors and scalability for multi-agent reinforcement learning-based pedestrian models. Simul. Model. Pract. Theory 74, 117–133 (2017) McKee, D.W., Clement, S.J., Almutairi, J., Xu, J.: Survey of advances and challenges in intelligent autonomy for distributed cyber-physical systems. CAAI Trans. Intell. Technol. 3(2), 75–82 (2018) Mihaylov, G., Spallanzani, M.: Emergent behaviour in a system of industrial plants detected via manifold learning. Int. J. Progn. Health Manag. 7(4) (2016) Miller, J.: What bloat? Cartesian genetic programming on Boolean problems. In: 2001 Genetic and Evolutionary Computation Conference Late Breaking Papers, pp. 295–302. San Francisco, California, USA (2001) Miller, J.: Cartesian Genetic Programming, vol. 43. Springer (2003). https://doi.org/10.1007/9783-642-17310-3 Miller, J., Series, N.C.: Resources for cartesian genetic programming. Cartesian Genetic Programming, p. 337 (2011) Miller, J.F., Harding, S.L.: Cartesian genetic programming. In: Proceedings of the 10th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 2701–2726 (2008) Miller, J.F., Smith, S.L.: Redundancy and computational efficiency in cartesian genetic programming. IEEE Trans. Evol. Comput. 10(2), 167–174 (2006) Miller, J.F., Thomson, P., Fogarty, T.: Designing electronic circuits using evolutionary algorithms. Arithmetic circuits: a case study. 
Genetic Algorithms and Evolution Strategies in Engineering and Computer Science, pp. 105–131 (1997) Miller, J.F., et al.: An empirical study of the efficiency of learning Boolean functions using a cartesian genetic programming approach. In: Proceedings of the Genetic and Evolutionary Computation Conference, vol. 2, pp. 1135–1142 (1999) Nápoles, G., Grau, I., Bello, M., Bello, R.: Towards swarm diversity: Random sampling in variable neighborhoods procedure using a lévy distribution. Computación y Sistemas 18(1), 79–95 (2014)


Nieße, A., Lehnhoff, S., Tröschel, M., Uslar, M., Wissing, C., Appelrath, H.J., Sonnenschein, M.: Market-based self-organized provision of active power and ancillary services: an agentbased approach for smart distribution grids. In: 2012 Complexity in Engineering (COMPENG). Proceedings, pp. 1–5 (2012). https://doi.org/10.1109/CompEng.2012.6242953 Parasuraman, R., Wickens, C.D.: Humans: still vital after all these years of automation. Hum. Factors 50(3), 511–520 (2008). https://doi.org/10.1518/001872008X312198 Parhizkar, M., Serugendo, G.D.M., Hassas, S.: Leaders and followers: a design pattern for secondorder emergence. In: 2019 IEEE 4th International Workshops on Foundations and Applications of Self* Systems (FAS* W), pp. 269–270. IEEE (2019) Parzyjegla, H., Schröter, A., Seib, E., Holzapfel, S., Wander, M., Richling, J., Wacker, A., Heiß, H.U., Mühl, G., Weis, T.: Model-driven development of self-organising control applications. In: Organic Computing – A Paradigm Shift for Complex Systems, pp. 131–144. Springer (2011) Pisner, D.A., Schnyer, D.M.: Support vector machine. In: Machine Learning, pp. 101–121. Elsevier (2020) Platzer, A.: The logical path to autonomous cyber-physical systems. In: International Conference on Quantitative Evaluation of Systems, pp. 25–33. Springer (2019) Poslad, S.: Specifying protocols for multi-agent systems interaction. ACM Trans. Auton. Adap. Syst. (TAAS) 2(4), 15–es (2007) Prehofer, C., Bettstetter, C.: Self-organization in communication networks: principles and design paradigms. IEEE Commun. Mag. 43(7), 78–85 (2005). https://doi.org/10.1109/MCOM.2005. 1470824 Rapp, B., Solsbach, A., Mahmoud, T., Memari, A., Bremer, J.: It-for-green: Next generation cemis for environmental, energy and resource management. In: Pillmann, W., Schade, S., Smits, P. (eds.), EnviroInfo 2011 - Innovations in Sharing Environmental Observation and Information, Proceedings of the 25th EnviroInfo Conference ‘Environmental Informatics’, pp. 573–581. Shaker Verlag (2011) Scott, E.O., Luke, S.: Ecj at 20: toward a general metaheuristics toolkit. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1391–1398 (2019) Shahbakhsh, A., Nieße, A.: Modeling multimodal energy systems. Automatisierungstechnik?: AT 67(11), 893–903 (2019) Shannon, C.E.: A mathematical theory of communication. Bell Syst. Techn. J. 27(379–423), 623– 656 (1948) Sheridan, T.B., Parasuraman, R.: Human-automation interaction. Rev. Human Factors Ergon. 1(1), 89–129 (2016). https://doi.org/10.1518/155723405783703082 Singh, S., Lu, S., Kokar, M.M., Kogut, P.A., Martin, L.: Detection and classification of emergent behaviors using multi-agent simulation framework (wip). In: Proceedings of the Symposium on Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems, MSCIAAS ’17. Society for Computer Simulation International, San Diego, CA, USA (2017) Sotto, L.F.D.P., Kaufmann, P., Atkinson, T., Kalkreuth, R., Basgalupp, M.P.: A study on graph representations for genetic programming. In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference, GECCO ’20, pp. 931–939. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3377930.3390234 Spaanenburg, L.: Early detection of abnormal emergent behaviour. European Signal Processing Conference (2007) Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (2018) Tan, M.: Multi-agent reinforcement learning: Independent vs. cooperative agents. 
In: Proceedings of the Tenth International Conference on Machine Learning, pp. 330–337 (1993) Theraulaz, G., Bonabeau, E.: A brief history of stigmergy. Artif. Life 5(2), 97–116 (1999) Turing, A.M.: Computing machinery and intelligence. Mind 49(236), 433–460 (1950) Turner, A.J., Miller, J.F.: Cartesian genetic programming: Why no bloat? In: European Conference on Genetic Programming, pp. 222–233. Springer (2014a)


Turner, A.J., Miller, J.F.: Recurrent cartesian genetic programming. In: Bartz-Beielstein, T., Branke, J., Filipiˇc, B., Smith, J. (eds.) Parallel Problem Solving from Nature - PPSN XIII, pp. 476–486. Springer International Publishing, Cham (2014b) Turner, A.J., Miller, J.F.: Recurrent cartesian genetic programming. In: International Conference on Parallel Problem Solving from Nature, pp. 476–486. Springer (2014c) Turner, A.J., Miller, J.F.: Recurrent cartesian genetic programming of artificial neural networks. Genet. Program Evolvable Mach. 18(2), 185–212 (2017) Van Gerven, M., Bohte, S.: Artificial neural networks as models of neural information processing. Front. Comput. Neurosci. 11, 114 (2017) Vassilev, V.K., Fogarty, T.C., Miller, J.F.: Information characteristics and the structure of landscapes. Evol. Comput. 8(1), 31–60 (2000). https://doi.org/10.1162/106365600568095 Walker, J.A., Miller, J.F., Cavill, R.: A multi-chromosome approach to standard and embedded cartesian genetic programming. In: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 903–910 (2006) Walker, J.A., Völk, K., Smith, S.L., Miller, J.F.: Parallel evolution using multi-chromosome cartesian genetic programming. Genet. Program Evolvable Mach. 10(4), 417 (2009) Watson, J.P.: An introduction to fitness landscape analysis and cost models for local search. In: Handbook of Metaheuristics, pp. 599–623. Springer (2010) Weron, R.: Estimating long-range dependence: finite sample properties and confidence intervals. Physica A 312(1–2), 285–299 (2002) Zanella, A., Bui, N., Castellani, A., Vangelista, L., Zorzi, M.: Internet of things for smart cities. IEEE Internet Things J. 1(1), 22–32 (2014). https://doi.org/10.1109/JIOT.2014.2306328 Zhu, Q., Bushnell, L., Ba¸sar, T.: Resilient distributed control of multi-agent cyber-physical systems. In: Control of Cyber-Physical Systems, pp. 301–316. Springer (2013)

Optimal Seating Assignment in the COVID-19 Era via Quantum Computing Ilaria Gioda, Davide Caputo, Edoardo Fadda, Daniele Manerba, Blanca Silva Fernández, and Roberto Tadei

Abstract In recent years, researchers have oriented their studies towards new technologies based on quantum physics that should allow the resolution of complex problems currently considered to be intractable. This new research area is called Quantum Computing. What makes Quantum Computing so attractive is the particular way with which quantum technology operates and the great potential it can offer to solve real-world problems. This work focuses on solving combinatorial optimization problems, specifically assignment problems, by exploiting this novel computational approach. A case-study, denoted as the Seating Arrangement Optimization problem, is considered. It is modeled through the Quadratic Unconstrained Binary Optimization (QUBO) paradigm and solved through two tools made available by the D-Wave Systems company, QBSolv and a quantum-classical hybrid system. The obtained experimental results are compared in terms of solution quality and computational efficiency.

Keywords Quantum computing · Seating arrangement optimization problem · Quadratic unconstrained binary optimization

I. Gioda · E. Fadda (B) · R. Tadei: Department of Control and Computer Engineering, Politecnico di Torino, 10129 Torino, Italy, e-mail: [email protected]; I. Gioda e-mail: [email protected]; R. Tadei e-mail: [email protected]
D. Caputo · B. S. Fernández: Data Reply s.r.l., 10126 Torino, Italy, e-mail: [email protected]; B. S. Fernández e-mail: [email protected]
D. Manerba: Department of Information Engineering, Università degli Studi di Brescia, 25123 Brescia, Italy, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_2


1 Introduction Combinatorial Optimization (CO) is one of the most studied research fields in the area of optimization. The application of this research area extends to many sectors, and more and more research is devoted to modeling and solving effectively and efficiently the problems belonging to this category. Among others, one of the most recent and innovative modelling approaches allowing a CO problem to be formulated is the so-called Quadratic Unconstrained Binary Optimization (QUBO) paradigm. During the last few decades, many studies have been carried out regarding this mathematical formulation, also known as Unconstrained Binary Quadratic Programming (UBQP). An extensive survey on UBQP research can be found in [14], in which the authors group in chronological order a good percentage of works addressing UBQP problems up to 2014. Given the potential of this quadratic formulation, Anthony et al. in [2] focus on the quadratization of arbitrary functions, by reporting a systematic study on the lower and upper bounds on the number of additional auxiliary variables needed to trace back to the quadratic pseudo-Boolean optimization case. The literature contains several works that are dedicated to the formulation of traditional problems in the QUBO form to understand the benefits of this formulation; many of them try to model problems directly in the QUBO form, while others recast already studied and implemented problems from their original formulations into this novel form. The most famous case in the literature is the one conducted by Lucas [15]. He studies how to rewrite many well-known problems from their classical form to the QUBO and Ising one (an equivalent formulation, mainly used in physics). Another example is proposed by Alidaee et al. in [1], where the authors show how the QUBO paradigm can effectively be used to formulate and solve set packing problems. Moreover, the authors of [19] propose the formulation of the TSP with Time Windows problem in the QUBO version. Subsequently, research has been oriented towards the identification of efficient methods for solving problems written in this formulation. For instance, Wang et al. in [22] propose the first two tabu search with path relinking algorithms for solving UBQP problems by generating new solutions through the exploration of trajectories that connect high-quality solutions. Among the various approaches for solving combinatorial optimization problems in the QUBO form, in recent years researchers have begun to be oriented towards a new computational frontier, namely Quantum Computing. Borowski et al. in [5] outline four quantum annealing-based algorithms to solve Vehicle Routing (VRP) and Capacitated Vehicle Routing (CVRP) problems and compare their performance with well-known classical algorithms. A more practical case-study is Volkswagen's Traffic Flow Optimization [18], which deals with a real-world application for managing and minimizing congestion on road segments within a particular city. The authors of this work, by considering the number of cars and their routes, formulated the problem as QUBO and solved it using quantum annealing technologies.


This paper focuses on the analysis of this new computational approach, specifically for the resolution of assignment problems. We analyzed the Seating Arrangement Optimization problem as a case-study, which was first formulated as a QUBO problem and then solved through the use of some tools made available by D-Wave Systems, a Canadian company specializing in quantum computing. The remaining part of this paper is organized as follows. Section 2 outlines the basic concepts of the novel computational approach of quantum computing and, specifically, introduces quantum annealing. Section 3 reports details of the quantum technologies offered by D-Wave Systems to solve combinatorial optimization problems. First, the Quadratic Unconstrained Binary Optimization (QUBO) paradigm is described and, then, the dedicated solvers for problems in this particular form are shown. Section 4 presents the case-study considered, the Seating Arrangement Optimization problem. The problem is first described and modeled as a quadratic problem, then an equivalent QUBO formulation is derived. Section 5 describes and compares the computational results of the experimental analysis. Section 6 provides conclusions and a brief discussion on possible future work.

2 Quantum Computing Approach In recent years, research has been directed towards studying new technologies that enable a new computational approach for solving complex problems from different areas. Quantum computing is an innovative research field for processing information and algorithms based on a technology that exploits quantum physics instead of the classical one. The fundamental element on which quantum technologies are based is the quantum bit, or qubit. Like a classical computer bit, a qubit can assume one of the two binary values 0 or 1, but the novelty of this element lies in the fact that, during the elaboration process, the qubit finds itself in a third further state, called superposition. In this part of the process, it can assume any state, but its final value is established only during the measurement process, which determines its collapse to one of the classical states. Nowadays, several companies such as Google, IBM, NASA, and Rigetti are involved worldwide in the quantum computing research field. They have been building and, only in recent years, granting access to quantum technology for the resolution of small-to-medium-sized algorithms. In the next few years, quantum computers are expected to surpass the traditional ones in solving classical complex algorithms, which are now intractable or require too much time to be executed. Research has been conducted in several directions, as the technologies are different and therefore require different paradigms. However, we can identify two main categories of quantum computers, which differ in the type of structure and the applications for which they are designed [16]. The first category includes universal or gate-model quantum computers. The systems belonging to this class of quantum machines are equipped with a particular circuitry that manipulates the qubits' state. Due to the many limitations concerning the technology they are built with, universal


quantum computers offer only a number of qubits of the order of dozens, and currently they are used to solve small instances of problems in various areas such as machine learning [20] and chemistry [17]. The second type of quantum computers, named quantum annealers, is equipped with hardware with a higher number of qubits (in the order of thousands), and combinatorial optimization is the most important type of elaboration they can manage. Quantum annealers owe their name to the process called quantum annealing, which physically takes place within them and which, by exploiting the intrinsic effects of quantum physics, can be used to solve a specific combinatorial optimization problem. In detail, quantum annealing is a heuristic way to solve NP-hard problems by finding the global minimum of a function with many local minima describing a physical system's energy. The search process is based on quantum fluctuations. Quantum annealing exploits the physical concept that everything in nature tends to evolve towards equilibrium. Since quantum physics follows this reasoning, the quantum process is used to find equilibrium states corresponding to optimal solutions for the problem under observation. Quantum annealing and the quantum systems implementing it are based on characteristics that make their computation principle unique and innovative. As previously introduced, the first and most important is the qubit. At the beginning of the annealing process, each qubit finds itself in the superposition state, and its value can assume with equal probability the 0 or 1 state. This probability can be modified and directed towards one of the two states following an external magnetic field, called bias. The second important concept exploited in quantum annealing is entanglement, i.e., a linking process that involves qubits, which do not work alone but take part in a more complex process. Entanglement binds qubits together, making the state of each qubit dependent on the states of some other qubits. At the hardware level, this linking is done through a procedure called coupling. The purpose of this procedure is to create a sort of single entity formed by pairs of qubits so that these two assume either the same value (both 0 or 1) or the opposite one (one 0 and the other 1). Therefore, this unique object has four states (00, 01, 10, 11), whose energies depend on the biases of the two single qubits that compose it and the coupling strength between them. What a user of a quantum computer is allowed to do is to program the biases and the coupler strengths in such a way as to create an energy landscape whose equilibrium, reached during the annealing process, corresponds to a solution of the optimization problem. Thanks to this way of operating, quantum annealers' technologies are explicitly designed to solve complex combinatorial optimization problems.
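As a small illustration of how biases and a coupling strength define the energy landscape over the four states of a coupled pair, the sketch below enumerates E(x1, x2) = a·x1 + b·x2 + J·x1·x2 for all binary assignments. The numerical values are arbitrary illustrative assumptions, and the energy is written in QUBO form over binary variables rather than in the hardware's Ising spin convention, which is an intentional simplification.

```python
from itertools import product

# Illustrative bias and coupling values (assumptions, not taken from the chapter).
a, b = -1.0, -1.0   # biases of the two binary variables / qubits
J = 2.0             # coupling strength between them

def energy(x1, x2):
    # QUBO-style energy of the coupled two-variable system.
    return a * x1 + b * x2 + J * x1 * x2

states = {(x1, x2): energy(x1, x2) for x1, x2 in product((0, 1), repeat=2)}
for state, e in sorted(states.items(), key=lambda kv: kv[1]):
    print(state, e)
# With these values, (0, 1) and (1, 0) are tied at the minimum energy -1:
# the positive coupling J penalizes switching both variables on together.
```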


3 Quantum Computing Solvers The leading company working with quantum annealers is D-Wave Systems Inc. In particular, this organization deals with building and studying quantum technologies and, for some years, has allowed external users to use its quantum annealers to solve specific commercial problems, especially combinatorial optimization ones. In this section, we show the particular type of problems that quantum annealers can solve (QUBO problems) and, in particular, the tools that the quantum company makes available for their resolution.

3.1 Quadratic Unconstrained Binary Optimization (QUBO) Quantum annealers are designed to solve complex combinatorial optimization problems in a particular formulation, the Quadratic Unconstrained Binary Optimization (QUBO) one. The goal of a QUBO model is to find an optimal solution by minimizing an objective function in the form

$$\min \; \sum_{i \in N} Q_{i,i}\, x_i \; + \sum_{i,j \in N \,|\, i \neq j} Q_{i,j}\, x_i x_j \qquad (1)$$

where
• $N$ is the set of indexes associated with a variable;
• $x_i$ is the $i$-th binary variable of the problem;
• $Q_{i,i}$ is the diagonal (or linear) cost coefficient for the $i$-th variable;
• $Q_{i,j}$ is the off-diagonal (or quadratic) cost coefficient for the association of the $i$-th variable with the $j$-th one.

The first characteristic of a QUBO problem is the kind of variables that describe it. They are binary and appear in the objective function both in the linear form, i.e., individually $x_i$, and in the quadratic form, i.e., one multiplied by the other $x_i x_j$. Alternatively, since for binary variables it holds $x_i = x_i^2$ for any $i \in N$, a QUBO problem can be described in matricial notation as follows

$$\min_{x \in \{0,1\}^{|N|}} x^{T} Q\, x \qquad (2)$$

where $x$ is a column vector of binary variables of size $|N|$ and $Q$ an upper-triangular $|N| \times |N|$ matrix, called QUBO matrix. Clearly, not all the optimization problems come in this form. However, many of them can be rewritten as a QUBO model. The constraints identified for the problem must be readjusted and converted into penalties to form the actual objective function (1) that has to be minimized. Specifically, as in classical Lagrangean relaxations, the purpose of these penalties is to prevent the optimizer from choosing solutions that


violate the constraints. They involve the addition of a positive quantity, therefore not favorable to the minimization objective in the case of infeasible solutions [13]. Some standard ways of creating this translation from classical constraints can be found in [13] and are reported in Table 1.

Table 1 Some known penalty terms. All the variables are supposed to be binary and p is a positive scalar value (from [13])

Classical constraint | Equivalent penalty
x + y ≤ 1 | p(xy)
x + y ≥ 1 | p(1 − x − y + xy)
x + y = 1 | p(1 − x − y + 2xy)
x ≤ y | p(x − xy)
x1 + x2 + x3 ≤ 1 | p(x1x2 + x1x3 + x2x3)
x = y | p(x + y − 2xy)

Starting from the obtained objective function, it is then possible to construct the QUBO matrix Q. This matrix maintains, in the form of constant values, the numerical relationships between all variables, also originating from the constraints to which the problem must be subjected. This element is also essential at the computational level since its dimensions impact the performance of the problem itself in terms of quality and solution time (see, e.g., [6] or [3]). The QUBO matrix comes in the form of a square |N| × |N| matrix and can be found both as a symmetric and as an upper-triangular matrix [13] (in the latter case, the second summation of (1) must be restricted to i < j). Interestingly enough, it is possible to identify a direct correspondence between the physical implementation of a Quantum Processing Unit (QPU), the hardware processor of a quantum annealer, and a QUBO problem. Given a generic QUBO model, each binary variable is represented in the QPU by a qubit, whose possible states are called spin up and spin down, while the linear and quadratic coefficients Q_{i,i} and Q_{i,j} correspond respectively to the biases and coupling strengths on a QPU. Despite this correspondence, usually, QUBO problems cannot be directly mapped to the QPU due to the physical interconnections between qubits and the number of qubits available on the hardware (in general, much smaller than the actual number of binary variables in a realistic-size problem). So, in practice, they undergo a translation process called Minor Embedding, which allows the mapping of each logical variable of the problem into a chain of physical qubits.
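To illustrate the constraint-to-penalty translation of Table 1 and the role of the Q matrix, the following toy sketch (all numbers and the penalty weight p are illustrative assumptions, not values from the chapter) encodes "maximize x1 + x2 subject to x1 + x2 ≤ 1" as min −x1 − x2 + p·x1·x2 and enumerates all assignments of the resulting 2 × 2 QUBO matrix.

```python
import itertools
import numpy as np

p = 4.0                                   # penalty weight (assumed large enough)
# Objective: max x1 + x2  ->  min -x1 - x2; constraint x1 + x2 <= 1 -> + p*x1*x2 (Table 1)
Q = np.array([[-1.0, p],
              [0.0, -1.0]])               # upper-triangular QUBO matrix

def qubo_value(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)               # x^T Q x, using x_i^2 = x_i for binary variables

for x in itertools.product((0, 1), repeat=2):
    print(x, qubo_value(x))
# (0,0) -> 0; (1,0) and (0,1) -> -1 (feasible optima); (1,1) -> p - 2 = 2 (penalized)
```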

3.2 QBSolv Over the last few years, D-Wave Systems Inc. has made available useful tools involving quantum technologies to solve optimization problems written in the QUBO formulation. In particular, it has made available a software development kit, the Ocean


SDK, containing a series of open-source Python tools for solving hard problems with quantum computers [9]. One of these tools, useful in our case, is QBSolv [10]. QBSolv is an open-source solver released in January 2017, which runs on the CPU just like the traditional solvers. Its goal is to solve large QUBO problems with high connectivity. The solver strategy consists in partitioning significantly-large QUBO problems into smaller components and applying a specified sampling method (the classical Tabu Search algorithm, by default) independently to each of these pieces to find the minimum value required for the optimization. Further technical details on QBSolv can be found in [4].
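A minimal usage sketch is shown below. It assumes the (now legacy) dwave-qbsolv Python package is installed and reuses a QUBO expressed as a dictionary mapping index pairs to coefficients; it only indicates the call pattern and is not the authors' implementation.

```python
# Sketch of solving a QUBO dictionary with QBSolv on the local CPU.
# Assumes `pip install dwave-qbsolv` (legacy Ocean package).
from dwave_qbsolv import QBSolv

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 4.0}   # toy QUBO from the previous sketch

response = QBSolv().sample_qubo(Q)
print("samples:", list(response.samples()))
print("energies:", list(response.data_vectors["energy"]))
```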

3.3 D-Wave Quantum-Classical Hybrid solver Another important tool made available by D-Wave Systems Inc. allows one to submit and solve a problem modeled as QUBO on a remote quantum computer. To do this, in 2018, the computing company made available to users a cloud service, D-Wave's Leap, and a set of Python APIs, the Solver API (SAPI), that allow any developer to access and submit any problem to the D-Wave Quantum System. At the time of writing this paper, there are two types of solvers made available by the D-Wave Systems company: a solver that directly uses a QPU (to be chosen between an Advantage system and a D-Wave 2000Q lower-noise system) [11] and one that exploits a quantum-classical hybrid system [7]. Both types of sampler accept as input a so-called Binary Quadratic Model (BQM), a specific format that can be constructed starting from the Q matrix of a common QUBO problem. Since quantum technologies are still maturing, due to the noise and the limited number of qubits available, using a hybrid system can be required for solving realistic and large problems. Specifically, a quantum-classical hybrid system uses a computational approach that exploits both classical and quantum resources. It analyzes and subdivides large BQM problems using classical technology, breaking them up into subproblems and establishing which of them should be solved with classical algorithms and which can be effectively performed on a QPU, interacting directly with it for their resolution [21].
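The corresponding call pattern can be sketched as follows; the snippet assumes the dwave-system and dimod packages and a configured Leap API token, and it is only an indicative example rather than the authors' code.

```python
# Sketch of submitting a QUBO to the Leap quantum-classical hybrid solver.
# Assumes `pip install dwave-system` and a configured Leap token (e.g., via `dwave config create`).
import dimod
from dwave.system import LeapHybridSampler

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 4.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)    # wrap the QUBO as a Binary Quadratic Model

sampleset = LeapHybridSampler().sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)
```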

4 A Case-Study: The Seating Arrangement Optimization Problem The considered case-study focuses on passenger transport on high-speed trains considering the Italian Government’s new regulations on social distancing due to the COVID-19 pandemic. The railway companies have currently adapted their passenger positioning strategies by embracing a seating arrangement as a “checkerboard” pattern, i.e., with the


allocation of passengers to alternate seats, to counter the spread of the COVID-19 virus. Nevertheless, with the adoption of this strategy, the filling capacity of the wagons has dropped to 50% of the total capacity, leading to a drastic reduction in the high-speed rail operators' earnings due to the mismatch between the costs necessary for the activation of the railway transport lines and the revenues obtained from ticket sales. From now on, we will denote the examined case study as the Seating Arrangement Optimization problem. In the research conducted in this paper, we investigate a new seat allocation criterion for allocating people on high-speed trains by exploiting the social relationships between them. We aim to maximize the number of transported passengers, still complying with the rules for protecting the customers' health. The problem has been modeled first as a non-linear program, then through the novel QUBO formulation, and finally solved using the tools provided by D-Wave Systems, i.e., QBSolv and the Quantum-Classical Hybrid solver.

4.1 Problem Description The objective of the Seating Arrangement Optimization problem is to fill the train wagon as much as possible within the restrictions on social distancing due to the COVID-19 health emergency. Still, it aims to maximize the number of passengers belonging to the same family or living group in adjacent seats. Although the focus of the problem can be extended to the entire train, the study refers to only one wagon. Then, for a multi-wagon train the procedure will be run for each wagon separately. Furthermore, we assumed a static situation: just one train segment, i.e., trip between two adjacent stations, is considered so that the number of passengers and their social relationships are known beforehand without any changes during the travel. Some fundamental elements characterize the Seating Arrangement Optimization problem. A set of passengers that has to be transported on a high-speed train is given. During the ticket reservation procedure, each passenger is associated with a unique identifier, the booking ID, which can be shared or not with other passengers. The important assumption of the problem is that people with the same booking ID belong to the same family or living group. This condition, therefore, assumes they can be excluded from the social distancing impositions prescribed by the regulations against the spread of the COVID-19 virus. A high-speed train’s wagon is then considered. The wagon has a certain number of seats. Each seat is represented by a pair of coordinates, a row and a column number, which collocate it into a sort of grid. A graphic representation of a typical high-speed train’s wagon can be seen in Fig. 1. When modeling the problem, it is necessary to consider the following constraints:


Fig. 1 Graphical representation of seats with row and column numbers inside a high-speed train’s wagon

• allocation of one and only one seat to each one of the considered passengers (avoiding that a passenger has more than one seat assigned);
• allocation of at most one passenger to each seat (avoiding that different passengers are assigned the same seat);
• allocation of non-adjacent (in front/behind/left/right) seats to people belonging to different families (identified by different booking IDs).

4.2 Mathematical Programming Formulation In this section, we provide a natural non-linear programming formulation of the Seating Arrangement Optimization problem in order to formally define it. Let us consider the following sets and parameters:
• $R = \{1, 2, \ldots, r_{\max}\}$: set of seats row numbers;
• $C = \{1, 2, \ldots, c_{\max}\}$: set of seats column numbers;
• $K$: set of booking IDs;
• $n_k$: number of passengers with the same booking ID $k \in K$.

Moreover, let us define the variable

$$x_{(r,c),k} := \begin{cases} 1 & \text{if a passenger with booking ID } k \text{ is assigned to seat with row and column } (r,c) \\ 0 & \text{otherwise} \end{cases}$$

for each row $r \in R$, column $c \in C$, and booking ID $k \in K$. Then, a natural quadratic programming model for the problem can be stated as:

$$\max \; \sum_{k} \sum_{(r,c)} x_{(r,c),k} \cdot x_{(r+1,c),k} \; + \; \sum_{k} \sum_{(r,c)} x_{(r,c),k} \cdot x_{(r,c+1),k} \qquad (3)$$

subject to

$$\sum_{(r,c)} x_{(r,c),k} = n_k, \quad k \in K \qquad (4)$$

$$\sum_{k} x_{(r,c),k} \leq 1, \quad r \in R,\ c \in C \qquad (5)$$

$$x_{(r,c),k} \cdot x_{(r+1,c),k'} = 0, \quad r \in R \setminus \{r_{\max}\},\ c \in C,\ k, k' \in K,\ k \neq k' \qquad (6)$$

$$x_{(r,c),k} \cdot x_{(r,c+1),k'} = 0, \quad r \in R,\ c \in C \setminus \{c_{\max}\},\ k, k' \in K,\ k \neq k' \qquad (7)$$

$$x_{(r,c),k} \in \{0,1\}, \quad r \in R,\ c \in C,\ k \in K. \qquad (8)$$

The objective function (3) maximizes the number of passengers with the same booking ID assigned to adjacent seats. Constraints (4) state that each passenger with a certain booking ID is assigned to one seat, while constraints (5) state that each seat is assigned to at most one passenger with a certain booking ID. Constraints (6) ensure that two seats, one next to the other (in the same column), are not assigned to passengers with different booking IDs, while constraints (7) ensure that two seats, one in front of the other (in the same row), are not assigned to passengers with different booking IDs. Finally, binary conditions on the variables are stated in (8).
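A direct way to read constraints (4)-(7) is as a feasibility check on a candidate assignment. The sketch below is a minimal rendering of that reading for a hypothetical toy instance (booking IDs and n_k values are invented for illustration); it is not the authors' code and ignores wagon details such as the aisle.

```python
# Feasibility check for an assignment {seat (r, c) -> booking ID}, per constraints (4)-(7).
from collections import Counter

n = {"A": 2, "B": 3}                        # toy bookings: booking ID -> n_k (assumed values)

def feasible(assignment):
    """assignment: dict {(r, c): booking_id} containing only occupied seats."""
    # (5) is implicit: a dict can hold at most one booking per seat key.
    # (4): each booking ID k receives exactly n_k seats.
    counts = Counter(assignment.values())
    if any(counts.get(k, 0) != nk for k, nk in n.items()):
        return False
    # (6)-(7): vertically / horizontally adjacent seats never host different bookings.
    for (r, c), k in assignment.items():
        for nb in ((r + 1, c), (r, c + 1)):
            if nb in assignment and assignment[nb] != k:
                return False
    return True

ok = {(0, 0): "A", (1, 0): "A", (0, 3): "B", (0, 4): "B", (1, 4): "B"}
bad = {(0, 0): "A", (0, 1): "B", (1, 0): "A", (0, 3): "B", (0, 4): "B"}
print(feasible(ok), feasible(bad))          # True False
```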

4.3 QUBO Formulation Since the QUBO paradigm asks for an unconstrained model, as the one in (2), the constraints (4)–(7) and the cost function (3) are relaxed and aggregated into a single objective function through non-negative parameters λ’s, to be calibrated (see later). In particular, we chose to set these parametric coefficients as numerical, and to associate each of them with a specific group of constraints presented in model (3)– (8). We decided to adopt this modeling choice in order to minimize the number of λ parameters needed, as they represent a non-negligible obstacle during the model calibration. To do this relaxation, we built a penalty term for each of the identified constraints by following the approach from [13]. Hence, a QUBO formulation for the Seating Arrangement Optimization problem becomes:

$$\min \; \lambda_A H_A + \lambda_B H_B + \lambda_C H_C + \lambda_D H_D - H_E \qquad (9)$$

where
• the penalty term associated with constraints (4) is
$$H_A = \sum_{k} \Big(n_k - \sum_{(r,c)} x_{(r,c),k}\Big)^2$$
• the penalty term associated with constraints (5) is
$$H_B = \sum_{(r,c)} \sum_{k,k'} x_{(r,c),k} \cdot x_{(r,c),k'}$$
• the penalty term associated with constraints (6) is
$$H_C = \sum_{(r,c)} \sum_{k,k'} x_{(r,c),k} \cdot x_{(r+1,c),k'}$$
• the penalty term associated with constraints (7) is
$$H_D = \sum_{(r,c)} \sum_{k,k'} x_{(r,c),k} \cdot x_{(r,c+1),k'}$$
• the penalty term associated with the objective function (3) is
$$H_E = \sum_{k} \sum_{(r,c)} x_{(r,c),k} \cdot x_{(r+1,c),k} \; + \; \sum_{k} \sum_{(r,c)} x_{(r,c),k} \cdot x_{(r,c+1),k}$$

Note that, unlike the other penalties, a squaring has been introduced in H A as it is necessary to be able to grasp the relationship between the values assumed by different variables within the solution. Starting from (9), the Q matrix of model (2) has been derived. To do that, we need to identify the relationship between the problem’s variables, i.e., the multiplicative coefficients of the equation (1). First of all, the single QUBO terms of the function (9) need to be expanded. Then, after the coefficients have been found, they are multiplied by the parametric coefficients λ A , λ B , λC and λ D , whose purpose is to give more or less weight to each QUBO penalty such that the constraints are imposed when searching for the solution.
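The expansion of (9) into a Q matrix can be sketched as follows for a hypothetical tiny instance (a 2 × 3 seat grid and two booking IDs; the λ values are arbitrary, the iteration over distinct booking-ID pairs is a simplification, and exhaustive search is only viable at this toy size). The constant n_k² terms of H_A are dropped because they do not affect the minimizer. This is only an illustrative reconstruction, not the authors' implementation.

```python
import itertools
from collections import defaultdict

# Hypothetical toy instance (not from the paper): 2 x 3 seat grid, two booking IDs.
seats = [(r, c) for r in range(2) for c in range(3)]
n = {"A": 2, "B": 3}                                   # booking ID -> n_k
lam = {"A": 2.0, "B": 2.0, "C": 2.0, "D": 2.0}         # penalty weights (assumed values)

variables = [(r, c, k) for (r, c) in seats for k in n]
idx = {v: i for i, v in enumerate(variables)}
Q = defaultdict(float)

def add(i, j, w):                                      # accumulate on an upper-triangular Q
    Q[(min(i, j), max(i, j))] += w

# H_A: (n_k - sum_s x_{s,k})^2 expanded with x^2 = x (constant n_k^2 dropped)
for k, nk in n.items():
    vs = [idx[(r, c, k)] for (r, c) in seats]
    for i in vs:
        add(i, i, lam["A"] * (1 - 2 * nk))
    for i, j in itertools.combinations(vs, 2):
        add(i, j, lam["A"] * 2)

# H_B: two different bookings must not share a seat
for (r, c) in seats:
    for k1, k2 in itertools.combinations(n, 2):
        add(idx[(r, c, k1)], idx[(r, c, k2)], lam["B"])

# H_C / H_D: different bookings on adjacent seats are penalized; -H_E rewards same-booking adjacency
for (r, c) in seats:
    for (dr, dc, key) in ((1, 0, "C"), (0, 1, "D")):
        nb = (r + dr, c + dc)
        if nb not in seats:
            continue
        for k1 in n:
            for k2 in n:
                if k1 != k2:
                    add(idx[(r, c, k1)], idx[(nb[0], nb[1], k2)], lam[key])
        for k in n:
            add(idx[(r, c, k)], idx[(nb[0], nb[1], k)], -1.0)

def energy(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

best = min(itertools.product((0, 1), repeat=len(variables)), key=energy)
print("minimum energy:", energy(best))
for v, bit in zip(variables, best):
    if bit:
        print("seat", v[:2], "-> booking", v[2])
```

On a realistic instance the same Q dictionary would instead be handed to QBSolv or to the hybrid sampler, as sketched in Sect. 3.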


4.4 Parametric Coefficients Calibration We now provide some indications on how the calibration of the numerical parametric coefficients λ_A, λ_B, λ_C, and λ_D has been carried out. Expressed as pseudo-code, the high-level steps that have been followed can be found in Algorithm 1.

Algorithm 1 λ's coefficients calibration
1: Initialize λ_A, λ_B, λ_C, and λ_D with small real random values and choose a small-sized instance of the problem
2: repeat
3:   repeat
4:     Run the code, get a particular solution from the solver (QBSolv) and check it
5:     if infeasible solution, i.e., solution that does not respect at least one of the constraints (4)-(7) then
6:       Check the solution's energy
7:       if the solution's energy > the energy of a feasible solution already found then
8:         The solver has failed to find the solution with the lowest energy
9:       else if the solution's energy < the energy of a feasible solution already found then
10:        The current configuration of the λ parameters is not working
11:        Increase the λ parameter for which the solution violates the associated constraint
12:      end if
13:    end if
14:  until only feasible solutions for the current instance of the problem are found
15:  Increase the size of the considered instance
16: until only feasible solutions are found
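The calibration can also be summarized as a nested loop; the skeleton below is only a schematic rendering of Algorithm 1 with hypothetical helper functions (solve_qubo, violated_penalties) and an assumed update rule, not the authors' code.

```python
# Schematic rendering of Algorithm 1 (hypothetical helpers, illustrative only).
# `solve_qubo(lam, instance)` returns (solution, energy); `violated_penalties(solution, instance)`
# returns the subset of {"A", "B", "C", "D"} whose associated constraints are broken.
def calibrate(lam, instances, solve_qubo, violated_penalties):
    for instance in instances:                 # outer repeat: grow the instance size
        while True:                            # inner repeat: tune lam on this instance
            solution, energy = solve_qubo(lam, instance)
            violated = violated_penalties(solution, instance)
            if not violated:
                break                          # only feasible solutions found -> next size
            # Simplification of lines 5-13: whenever an infeasible solution appears, strengthen
            # the violated penalty weights (Algorithm 1 additionally distinguishes solver
            # failures from weights that are genuinely too small).
            for key in violated:
                lam[key] *= 2.0                # assumed update rule; the paper only says "increase"
    return lam
```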

5 Computational Results The model has been implemented via software through the Python programming language. This section reports the results obtained by executing several instances of the Seating Arrangement Optimization problem written as QUBO using the two tools offered by D-Wave Systems, QBSolv and D-Wave Leap's cloud-based quantum-classical hybrid solver (from now on referred to as D-Wave Hybrid Solver). Initially, the problem size in terms of number of variables is reported. Then, the two solvers are compared in terms of optimal solutions and computational times. An ad-hoc data-set containing simulated test instances about seats, passengers, and bookings was created for the performed experiments. The input that we provided to our QUBO model has been created based on an indicative estimate of realistic data of a high-speed train. In particular, it was decided to use a wagon consisting of 80 seats, placed in a 4 × 20 grid made up of 4 rows and 20 columns. Taking as a reference a credible number of passengers for a high-speed train, 1000 passengers have been created, but only a small subset of them was used for our restricted experimental analysis. In particular, for the experiments reported in Table 2, the maximum number of people that has been tested is 52. Since it was necessary to associate a specific booking ID to each passenger, we decided to use 300 distinct booking IDs to make a reasonably homogeneous assignment. The final range of assigned booking IDs is as follows:
• 290 booking IDs
• minimum number of passengers with the same booking ID: 1
• maximum number of passengers with the same booking ID: 8
All the experiments have been carried out on a desktop computer with a 1.8 GHz Intel Core i7-8550U processor.

Fig. 2 Number of variables in the QUBO model

5.1 QUBO model size The QUBO formulation involves a not-so-high number of variables, thus allowing complex optimization problems to be modeled effectively. In Fig. 2 we can see the number of variables vs. the number of booking IDs of the considered problem instance. It can be noted that, by maintaining the number of available seats fixed at 80, the total number of variables increases almost linearly as the number of considered booking IDs grows.
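For orientation (an assumption following directly from the variable definition in Sect. 4.2, not a value read off Fig. 2): if one binary variable x_{(r,c),k} is kept for every seat-booking pair, the model size is

$$|R| \cdot |C| \cdot |K| = 4 \cdot 20 \cdot |K| = 80 \cdot |K|,$$

e.g., 80 × 15 = 1200 variables for the largest instance of Table 2, which is consistent with the (almost) linear growth in the number of booking IDs described above.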

5.2 Quality of the solutions The quality of the D-Wave Systems solvers is now analyzed. The numerical results for the Seating Arrangement Optimization problem’s various instances can be seen in Table 2. For each problem instance (“Seats”, “Passengers”, “Distinct booking IDs” columns), the optimal solution with the minimum energy (i.e., the minimum value of


the expression (9), reported in the “Total minimum energy” column) that each solver (“Solver” column) was able to find is indicated. Specifically, for each solution, the number of passengers allocated to seats inside the train wagon (fifth column) and the number of people with the same booking ID correctly assigned to adjacent seats (sixth column) are reported. After having calibrated the λ parameters in model (9), by using the Python APIs of the Ocean SDK, the two solvers were used to solve different instances of the analyzed problem.

Table 2 Optimal solutions obtained by running the QUBO model instances with the D-Wave Systems solvers

Seats | Passengers | Distinct booking IDs | Solver | Nb. of passengers with an assigned seat | Nb. of passengers with same booking ID assigned to adjacent seats | Total minimum energy
80 | 11 | 3 | QBSolv | 11 | 10 | −464.300
80 | 11 | 3 | D-Wave Hybrid | 10 | 11 | −464.300
80 | 16 | 4 | QBSolv | 16 | 15 | −682.800
80 | 16 | 4 | D-Wave Hybrid | 16 | 15 | −682.800
80 | 19 | 5 | QBSolv | 19 | 18 | −762.000
80 | 19 | 5 | D-Wave Hybrid | 19 | 18 | −762.000
80 | 23 | 7 | QBSolv | 23 | 21 | −849.400
80 | 23 | 7 | D-Wave Hybrid | 23 | 21 | −849.400
80 | 28 | 8 | QBSolv | 28 | 26 | −1067.900
80 | 28 | 8 | D-Wave Hybrid | 28 | 26 | −1067.900
80 | 34 | 9 | QBSolv | 34 | 32 | −1382.000
80 | 34 | 9 | D-Wave Hybrid | 34 | 32 | −1382.000
80 | 39 | 11 | QBSolv | 39 | 37 | −1496.700
80 | 39 | 11 | D-Wave Hybrid | 39 | 37 | −1496.700
80 | 44 | 13 | QBSolv | 44 | 41 | −1646.900
80 | 44 | 13 | D-Wave Hybrid | 44 | 41 | −1646.900
80 | 50 | 14 | QBSolv | 50 | 45 | −1947.500
80 | 50 | 14 | D-Wave Hybrid | 50 | 47 | −1958.300
80 | 51 | 15 | QBSolv | 51 | 46 | −1953.000
80 | 51 | 15 | D-Wave Hybrid | 51 | 47 | −1961.100


The two optimizers seem to perform well, most of the time reaching the goal of allocating people with the same booking ID to adjacent seats. Furthermore, an improvement compared to the current passenger transport situation has been achieved. Both solvers manage to find at least an acceptable seating arrangement up to 15 booking IDs for a total of 51 passengers, therefore reaching a seat filling percentage of up to 63.75% (instead of the classical 50%). For most of the instances, the D-Wave Hybrid Solver finds solutions with the same energy as those found by the QBSolv optimizer. This means that the solver running on the CPU performs well in terms of solution quality, even without quantum hardware usage. However, there are two cases, i.e., the ones corresponding to the instances with 14 and 15 distinct booking IDs (respectively 50 and 51 passengers), where the D-Wave Hybrid Solver finds lower-energy, better solutions than those found by the QBSolv sampler.

5.3 Efficiency of the solvers The last comparison refers to the computational times of the two solvers. In Table 3, each row reports the times required by the QBSolv ("QBSolv" column) and by the D-Wave Hybrid ("LeapHybridSampler" column) optimizers to solve a specific problem instance formed by a certain number of seats, passengers and booking IDs ("Seats", "Passengers", "Distinct booking IDs" columns). As we can see in this table, the time increases as the size of the considered instance of the Seating Arrangement Optimization problem grows. Moreover, both solvers require a very limited amount of time (just some seconds) to solve the problem. The difference between them lies in the way in which they work. The first optimizer, QBSolv, works locally on the CPU. The second one, the D-Wave Hybrid Solver, requires remote access via the Internet to a physically remote system shared between multiple users. For this reason, using this type of system involves additional time-consuming steps: first of all, the Internet latency required to access the remote machine; then the problem management time spent by the classical resources of the solver; and finally the wait for execution on the QPU due to instructions of other users which must be run on the same quantum processor. More details on QPU access times can be found in the D-Wave Systems documentation [12]. Therefore, all these conditions lead to an overall time larger than that of the solver that uses the CPU locally. The real advantage of the system that uses quantum technology is the speed with which its quantum processing element works. The column of Table 3 called "QPU access time" [8] shows the actual QPU usage time. The QPU executes instructions in microseconds instead of seconds. Still, such a computational advantage cannot be fully exploited yet since it is obscured by the additional time needed to resolve the problem on the remote system.


Table 3 Comparison of computational times in seconds of the D-Wave Systems solvers

Seats | Passengers | Distinct booking IDs | QBSolv (D-Wave CPU solver) | LeapHybridSampler (D-Wave quantum-classical hybrid solver) | QPU access time (D-Wave quantum-classical hybrid solver)
80 | 11 | 3 | 1.267 | 6.505 | 0.040579
80 | 16 | 4 | 1.567 | 7.792 | 0.041579
80 | 19 | 5 | 1.801 | 9.865 | 0.041494
80 | 23 | 7 | 4.561 | 15.776 | 0.042629
80 | 28 | 8 | 5.327 | 18.214 | 0.042623
80 | 34 | 9 | 5.976 | 18.525 | 0.042694
80 | 39 | 11 | 8.680 | 19.593 | 0.042617
80 | 44 | 13 | 10.560 | 20.819 | 0.042623
80 | 50 | 14 | 17.527 | 20.968 | 0.042624
80 | 51 | 15 | 18.848 | 22.041 | 0.042695

6 Conclusions In this paper, we have analyzed how combinatorial optimization problems can be effectively solved through quantum technology tools. Specifically, we aimed to investigate this innovative computation technique, quantum computing, and analyze the advantages and disadvantages that derive from it. For this reason, a brief overview of the branch of quantum computing of interest was presented. First, quantum annealers, i.e., quantum systems suitable for solving optimization problems, were presented. We then described quantum annealing, i.e., the process through which problems are solved on quantum annealers, its key elements, the qubits, and the quantum property of entanglement. We, therefore, presented the Quadratic Unconstrained Binary Optimization (QUBO) formulation. Moreover, we introduced two solvers offered by D-Wave Systems for solving this kind of problem on a Central Processing Unit (CPU) and on quantum-classical hybrid remote systems which make use of a Quantum Processing Unit (QPU). We then considered a specific case-study concerning the allocation of passengers to seats on high-speed trains under the recent hygiene and health regulations on social distancing due to the COVID-19 pandemic. We modeled the problem through the QUBO paradigm. Then, we compared the results obtained through the use of the D-Wave Systems' tools, QBSolv and Leap's cloud-based quantum-classical hybrid solvers, on several instances of the analyzed problem. In our future work, we intend to use the same case-study, the Seating Arrangement Optimization problem, to compare this novel computational approach to a more classical one. In particular, we aim to model the problem through a Mixed-Integer Linear Programming (MILP) formulation and to solve it through the use of a state-of-the-art solver, such as Gurobi or CPLEX. A noteworthy aspect that could also be explored in the future could be to set the λ coefficients as vector parameters,


in order to be able to weigh each constraint in a different way. A final interesting development could be the study of the QUBO model with the introduction of an additional α parameter to determine the weight that the H_E term has within the objective function (9). With the introduction of this factor and with the addition of a multiplicative coefficient (1 − α) in front of the remaining part of (9), we could study the relationships and the weights that the constraints and the objective function have within the QUBO model. Acknowledgements This study was derived from a proposal and with the contribution of the Quantum Computing Team of Data Reply S.r.l., Torino (Italy).


Hybrid Ant Colony Optimization Algorithms—Behaviour Investigation Based on Intuitionistic Fuzzy Logic Stefka Fidanova, Maria Ganzha, and Olympia Roeva

Abstract The local search procedure is a method for hybridization and improvement of the main algorithm when complex problems are solved. It helps to avoid local optima and to find the global one faster. The theory of intuitionistic fuzzy logic, which is the basis of InterCriteria analysis (ICrA), is used to study the proposed hybrid algorithms for ant colony optimization (ACO). Different algorithms for ICrA implementation are applied to the results obtained by hybrid ACO algorithms for the Multiple Knapsack Problem. The behavior of the hybrid algorithms is compared to that of the traditional ACO algorithm. Based on the numerical results obtained from the algorithms' performance and from the ICrA, the efficiency and effectiveness of the proposed hybrid ACO algorithms, combined with an appropriate local search procedure, are confirmed.
Keywords Local search · Ant colony optimization · Hybrid algorithm · Intercriteria analysis algorithms · Knapsack problem · Index matrix · Intuitionistic fuzzy sets

S. Fidanova (B) Institute of Information and Communication Technology, Bulgarian Academy of Sciences, Sofia, Bulgaria e-mail: [email protected] M. Ganzha System Research Institute, Polish Academy of Sciences, Warsaw and Management Academy, Warsaw, Poland e-mail: [email protected] O. Roeva Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Sofia, Bulgaria e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_3


1 Introduction
Engineering applications normally lead to complex decision-making problems. Large-scale problems cannot be solved with traditional numerical methods. It is a challenge to develop new techniques which have a simple structure and easy application and can find a near-optimal solution even when the information about the problem is incomplete. In most cases these problems are NP-hard. Nature-inspired methods are more appropriate for solving NP-hard optimization problems than other methods, because they are flexible and use less computational resources. They are based on stochastic search. The most popular methods are Evolutionary algorithms Goldberg et al. (1989); Vikhar (2016), which simulate the Darwinian evolutionary concept, Simulated Annealing Kirkpatrick et al. (1983), the Gravitation search algorithm Mosavi et al. (2019), Tabu Search Osman (1993) and Interior Search Ravakhah et al. (2017). The ideas for swarm-intelligence-based algorithms come from the behavior of animals in nature. Representatives of this type of algorithms are Ant Colony Optimization Bonabeau et al. (1999), Bee Colony Optimization Karaboga and Basturk (2007), the Bat algorithm Yang (2010), the Firefly algorithm Yang (2008), Particle Swarm Optimization Kennedy and Eberhart (1995), the Grey Wolf algorithm Mirjalili et al. (2014) and so on.
Among the best methods for solving combinatorial optimization problems is Ant Colony Optimization (ACO). The inspiration for this method comes from the behavior of real ants, which always find the shortest path from the food to the nest. The ants leave a trail called a pheromone and follow the trail with the most concentrated pheromone. The problem is represented with the help of a graph and the solutions are paths in the graph. The optimal solution for minimization problems is a shortest path, and for maximization problems a longest path in the graph. The solution construction starts from a random node of the graph and next nodes are included by applying a probabilistic rule. The pheromone is imitated by numerical information corresponding to the quality of the solution. ACO has been applied to many types of optimization problems.
The idea of applying ant behavior to solve combinatorial optimization problems was proposed by Marco Dorigo twenty-five years ago Birattari et al. (2002); Bonabeau et al. (1999); Dorigo and Stutzle (2004). At the beginning it was applied to the traveling salesman problem. Later it was successfully applied to a lot of complex optimization problems. Over the years, various variants of the ACO methodology were proposed: ant system Dorigo and Stutzle (2004); elitist ants Dorigo and Stutzle (2004); ant colony system Dorigo and Gambardella (1996); max-min ant system Stutzle and Hoos (2000); rank-based ant system Dorigo and Stutzle (2004); ant algorithm with additional reinforcement Fidanova (2003). They differ in pheromone updating. For some of them it has been proven that they converge to the global optimum Dorigo and Stutzle (2004). Fidanova et al. (2011, 2011, 2012) proposed a semi-random start of the ants, comparing several start strategies. The method can be adapted to dynamic changes of the problem in some


complex biological problems Fidanova and Lirkov (2009); Fidanova (2010); Stutzle and Hoos (2000).
Sometimes the metaheuristic algorithm cannot avoid local optima. An appropriate Local Search (LS) procedure can help to escape them and to improve the algorithm efficiency. We apply ACO to the Multiple Knapsack Problem (MKP). A local search procedure, related to the specifics of the MKP, is constructed and combined with ACO to improve the algorithm performance and to avoid local optima Fidanova et al. (2012). Different algorithms for InterCriteria analysis (ICrA) Roeva et al. (2016) are applied to the numerical results obtained by the traditional ACO and the hybrid ACO in order to estimate the algorithms' behavior. The ICrA approach Atanassov et al. (2014) has been applied to a large area of problems, e.g. Atanassova et al. (2014); Angelova et al. (2015); Fidanova et al. (2016); Antonov (2020, 2019); Zaharieva et al. (2020). Published results show the applicability of the ICrA and the correctness of the approach.
The rest of the paper is organized as follows: the definition of the MKP is given in Sect. 2. The ACO algorithm is presented in Sect. 3. The local search procedure is described in Sect. 4. Short notes on the ICrA approach and different algorithms for InterCriteria relations calculation are presented in Sect. 5. Numerical results and a discussion are given in Sect. 6. Concluding remarks are made in Sect. 7.

2 Multiple Knapsack Problem
In the knapsack problem a set of items with fixed weights and values is given. The aim is to maximize the sum of the values of the items in the knapsack, while remaining within the capacity of the knapsack. Each item can be selected only once. The Multiple Knapsack Problem (MKP) is a generalization of the single knapsack problem: instead of having only one knapsack, there are many knapsacks with diverse capacities. Each item is assigned to at most one of the knapsacks without violating any of the knapsack capacities. The purpose is to maximize the total profit of the items in the knapsacks.
MKP is a special case of the generalized assignment problem Leguizamon and Michalevich (1999). It is a representative of the subset problems. Economical, industrial and other types of problems can be represented by MKP. Resource allocation in distributed systems, capital budgeting, cargo loading and cutting stock problems Kellerer et al. (2004) are some of the applications of the problem. One important real problem which is represented as MKP is patient scheduling Arsik et al. (2017). MKP is related to the bin packing problem, where the size of the bins can be variable Murgolo (1987), and to the cutting stock problem for cutting raw materials Kellerer et al. (2004). Another application is multi-processor scheduling on uniformly related machines Lawler et al. (1993). Another difficult problem which leads to MKP is crypto-systems and generating keys Kellerer et al. (2004). One early application of MKP is test generation Feuer-


man and Weiss (1973). MKP models a large set of binary problems with integer coefficients Leguizamon and Michalevich (1999); Kochenberger et al. (1974). MKP is an NP-hard problem and normally is solved with some metaheuristic method such as a genetic algorithm Liu et al. (2014), tabu search Woodcock and Wilson (2010), swarm intelligence Krause et al. (2013), or an ACO algorithm Fidanova (2003, 2021).
We will define MKP as a resource allocation problem, where m is the number of resources (the knapsacks) and n is the number of objects. Object j has a profit p_j. Each resource i has its own budget (knapsack capacity) c_i, and r_{ij} is the consumption of resource i by object j. The purpose is maximization of the profit within the limited budget. The mathematical formulation of MKP can be stated as follows:

max Σ_{j=1}^{n} p_j x_j

subject to

Σ_{j=1}^{n} r_{ij} x_j ≤ c_i,  i = 1, ..., m,    (1)

x_j ∈ {0, 1},  j = 1, ..., n.

There are m constraints in this problem, so MKP is also called the m-dimensional knapsack problem. Let I = {1, ..., m} and J = {1, ..., n}, with c_i ≥ 0 for all i ∈ I. A well-stated MKP assumes that p_j > 0 and r_{ij} ≤ c_i ≤ Σ_{j=1}^{n} r_{ij} for all i ∈ I and j ∈ J. Note that the [r_{ij}]_{m×n} matrix and the [c_i]_m vector are both non-negative. A partial MKP solution is represented by S = {i_1, i_2, ..., i_j}, and the last element included in S, i_j, is not used in the selection process for the next element. Thus the solutions of MKP do not have a fixed length.
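As an illustration of formulation (1), the following small C++ sketch (hypothetical helper names, not part of the authors' software) checks the feasibility of a 0/1 assignment vector against all m knapsack constraints and computes its total profit.

```cpp
#include <vector>

// Total profit of assignment x (x[j] in {0,1}), or -1 if some capacity is violated.
// p[j]: profit of object j; r[i][j]: consumption of resource i by object j; c[i]: capacity of knapsack i.
long long mkpValue(const std::vector<int>& p,
                   const std::vector<std::vector<int>>& r,
                   const std::vector<int>& c,
                   const std::vector<int>& x) {
    const std::size_t m = c.size(), n = p.size();
    for (std::size_t i = 0; i < m; ++i) {          // check every constraint of (1)
        long long load = 0;
        for (std::size_t j = 0; j < n; ++j) load += static_cast<long long>(r[i][j]) * x[j];
        if (load > c[i]) return -1;                // infeasible assignment
    }
    long long profit = 0;
    for (std::size_t j = 0; j < n; ++j) profit += static_cast<long long>(p[j]) * x[j];
    return profit;                                 // objective value to be maximized
}
```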

3 Ant Colony Optimization Algorithm
NP-hard problems require the use of huge resources and therefore cannot be solved by exact or traditional numerical methods, especially when they are large scale. We apply a metaheuristic method aiming to find an approximate solution using reasonable resources Dorigo and Stutzle (2004); Fidanova (2021). Marco Dorigo first applied ideas coming from ant behavior to solve complicated optimization problems about 30 years ago Bonabeau et al. (1999). Some modifications were proposed by him and by other authors for algorithm improvement; the modifications concern pheromone updating Dorigo and Stutzle (2004). The algorithm is problem dependent. Very important is the representation of the problem by a graph: the solutions then correspond to paths in the graph. The ants look for an optimal path, taking into account the problem constraints. The transition probability P_{i,j} is a product of the heuristic information η_{i,j} and the pheromone trail level τ_{i,j} related to the selection of node j if the previously selected node is i, where i, j = 1, ..., n:

P_{i,j} = (τ_{i,j}^a · η_{i,j}^b) / Σ_{k ∈ Unused} (τ_{i,k}^a · η_{i,k}^b),    (2)

where Unused is the set of unused nodes. At the beginning the pheromone is initialized with a small constant value τ_0, 0 < τ_0 < 1. Every time the ants build a solution, the pheromone is brought up to date Dorigo and Stutzle (2004). The elements of the graph with more pheromone are more attractive to the ants. The main update rule for the pheromone is:

τ_{i,j} ← ρ · τ_{i,j} + Δτ_{i,j},    (3)

where the parameter ρ decreases the value of the pheromone, just as evaporation in nature decreases the quantity of old pheromone. Δτ_{i,j} is the newly deposited pheromone, which depends on the value of the objective function corresponding to this solution.
The first step, when ACO is applied to some combinatorial optimization problem, is the representation of the problem by a graph. In our case the items correspond to the nodes of the graph and the edges fully connect the nodes. The pheromone is deposited on the arcs of the graph. The second step is the construction of appropriate heuristic information. This step is very important, because the heuristic information is the main part of the transition probability function and the search process depends mainly on it. Normally the heuristic information is a combination of problem parameters. Let s_j = Σ_{i=1}^{m} r_{ij}. For the heuristic information we use:

η_{ij} = p_j^{d1} / s_j^{d2}   if s_j ≠ 0,
η_{ij} = p_j^{d1}              if s_j = 0,    (4)

where d1 > 0 and d2 > 0 are parameters. Hence the objects with greater profit and lower average expenses are more desirable. This increases the probability of including more items, and the most profitable ones, which can lead to maximization of the total profit, the objective of this problem.
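As a rough sketch of how Eqs. (2)–(4) fit together, the snippet below (illustrative names, not the authors' C++ code) computes the heuristic values η and the transition probabilities over the unused items for a fixed current item i.

```cpp
#include <cmath>
#include <vector>

// Heuristic information (4) for object j: p_j^d1 / s_j^d2, or p_j^d1 when s_j == 0.
double eta(double p_j, double s_j, double d1, double d2) {
    return (s_j != 0.0) ? std::pow(p_j, d1) / std::pow(s_j, d2) : std::pow(p_j, d1);
}

// Transition probabilities (2) from the current node i to every unused node.
// tau[i][k]: pheromone on arc (i, k); etaRow[k]: heuristic value of node k.
std::vector<double> transitionProbabilities(int i,
                                            const std::vector<std::vector<double>>& tau,
                                            const std::vector<double>& etaRow,
                                            const std::vector<int>& unused,
                                            double a, double b) {
    std::vector<double> prob(etaRow.size(), 0.0);
    double denom = 0.0;
    for (int k : unused) denom += std::pow(tau[i][k], a) * std::pow(etaRow[k], b);
    for (int k : unused)
        if (denom > 0.0) prob[k] = std::pow(tau[i][k], a) * std::pow(etaRow[k], b) / denom;
    return prob;
}
```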

4 Local Search Procedure
At times, hybridization of the chosen method is used to improve the algorithm performance. The goal is to avoid some disadvantages of the main method. One possibility for hybridization is for one of the methods to be the basic one while the other only helps to improve the solutions. The most common hybridization manner is local improvement, or applying some problem-dependent local search procedure at the end of each iteration.


The Local Search (LS) procedure is used to perturb the current solution and to generate neighbor solutions Schaffer and Yannakakis (1991). LS generates neighbor solutions in a local set of neighbors. The best solution from the set is compared with the current solution; if it is better, it is accepted as the new current solution. An LS procedure which is consistent with MKP has been developed and combined with the ACO algorithm in our previous work Fidanova (2021). The MKP solution is represented by a binary string where 0 corresponds to an item that is not chosen and 1 corresponds to an item included in the solution. Two positions are randomly chosen. If the value of one of the positions is 0 we replace it with 1, and if the value of the other position is 1 we replace it with 0, and vice versa. The feasibility of the new solution is verified. If the solution is feasible we compare it with the current (original) solution. The perturbed solution is accepted if its objective function value is greater than that of the original one. We apply this LS procedure once per iteration to each solution, regardless of whether the newly constructed solution is better than the current one or not. Thus the proposed LS works without a significant increase in the used computational resources. A sketch of this swap move is given below.
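The following minimal C++ sketch (assumed helper names; mkpValue is the hypothetical routine sketched in Sect. 2) illustrates the described one-swap neighborhood move: flip a randomly chosen 0-position to 1 and a randomly chosen 1-position to 0, and keep the neighbor only if it is feasible and strictly better.

```cpp
#include <random>
#include <vector>

// Declared in the Sect. 2 sketch: total profit of x, or -1 if infeasible.
long long mkpValue(const std::vector<int>& p, const std::vector<std::vector<int>>& r,
                   const std::vector<int>& c, const std::vector<int>& x);

// One local-search move on a 0/1 solution x; returns true if x was improved.
bool localSearchMove(std::vector<int>& x,
                     const std::vector<int>& p,
                     const std::vector<std::vector<int>>& r,
                     const std::vector<int>& c,
                     std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, x.size() - 1);
    std::size_t a = pick(rng), b = pick(rng);         // two random positions
    if (x[a] == x[b]) return false;                   // need one 0 and one 1 to swap
    std::vector<int> y = x;
    y[a] = 1 - y[a];                                  // flip both positions
    y[b] = 1 - y[b];
    long long newVal = mkpValue(p, r, c, y);          // -1 if infeasible
    long long oldVal = mkpValue(p, r, c, x);
    if (newVal > oldVal) { x = y; return true; }      // accept only a strict improvement
    return false;
}
```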

5 InterCriteria Analysis
Based on the apparatuses of index matrices (IMs) Atanassov (1987, 2010a, b, 2014) and intuitionistic fuzzy sets (IFSs) Atanassov (2012, 1983, 2016), the authors in Atanassov et al. (2014) propose a new approach named InterCriteria analysis. Briefly presented, an intuitionistic fuzzy pair (IFP) Atanassov et al. (2013) is an ordered pair of real non-negative numbers ⟨a, b⟩, where a, b ∈ [0, 1] and a + b ≤ 1, that is used as an evaluation of some object or process. According to Atanassov et al. (2013), the components a and b of an IFP might be interpreted as degrees of “membership” and “non-membership” to a given set, degrees of “agreement” and “disagreement”, etc.
Let the initial IM be presented in the form of Eq. (5), where, for every p, q (1 ≤ p ≤ m, 1 ≤ q ≤ n), C_p is a criterion taking part in the evaluation, O_q is an object to be evaluated, and C_p(O_q) is a real number (the value assigned by the p-th criterion to the q-th object):

            O_1        ...  O_q        ...  O_n
    C_1     C_1(O_1)   ...  C_1(O_q)   ...  C_1(O_n)
    ...     ...        ...  ...        ...  ...
A = C_p     C_p(O_1)   ...  C_p(O_q)   ...  C_p(O_n)      (5)
    ...     ...        ...  ...        ...  ...
    C_m     C_m(O_1)   ...  C_m(O_q)   ...  C_m(O_n)

Let O denote the set of all objects being evaluated, and let C(O) be the set of values assigned by a given criterion C (i.e., C = C_p for some fixed p) to the objects, i.e.,


O =_def {O_1, O_2, O_3, ..., O_n},
C(O) =_def {C(O_1), C(O_2), C(O_3), ..., C(O_n)}.

Let x_i = C(O_i). Then the following set can be defined:

C*(O) =_def {⟨x_i, x_j⟩ | i ≠ j & ⟨x_i, x_j⟩ ∈ C(O) × C(O)}.

Further, if x = C(O_i) and y = C(O_j), we will write x ≺ y if i < j. In order to find the agreement of different criteria, the vectors of all internal comparisons for each criterion are constructed, whose elements fulfil one of the three relations R, R̄ and R̃. The nature of the relations is chosen such that for a fixed criterion C and any ordered pair ⟨x, y⟩ ∈ C*(O):

⟨x, y⟩ ∈ R ⇔ ⟨y, x⟩ ∈ R̄,    (6)
⟨x, y⟩ ∈ R̃ ⇔ ⟨x, y⟩ ∉ (R ∪ R̄),    (7)
R ∪ R̄ ∪ R̃ = C*(O).    (8)
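As a small worked illustration (hypothetical numbers, assuming the common ICrA encoding in which each internal comparison is stored as 1, 0 or −1 for “<”, “=”, “>” respectively, consistent with the vectors V̂(C) used in Algorithms 1–4 below): for three objects with C(O_1) = 2, C(O_2) = 5, C(O_3) = 5, the n(n−1)/2 = 3 ordered pairs give V̂(C) = (1, 1, 0). For a second criterion C' with values 7, 9, 3 we obtain V̂(C') = (1, −1, −1). The difference vector V = V̂(C) − V̂(C') = (0, 2, 1) has one zero entry (a matching comparison, i.e. agreement) and one entry with absolute value 2 (opposite comparisons, i.e. disagreement), so under the μ-biased rule μ_{C,C'} = 1/3, ν_{C,C'} = 1/3 and π_{C,C'} = 1/3.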

For example, if R is the relation “<”, then R̄ is the relation “>”, and vice versa. When comparing two criteria, the degree of “agreement” is determined as the number of matching components of the respective vectors (divided by the length of the vector for normalization purposes). Let the respective degrees of “agreement” and “disagreement” be denoted by μ_{C,C'} and ν_{C,C'}. In most of the obtained pairs ⟨μ_{C,C'}, ν_{C,C'}⟩, the sum μ_{C,C'} + ν_{C,C'} is equal to 1. However, there may be some pairs for which this sum is less than 1. The difference

π_{C,C'} = 1 − μ_{C,C'} − ν_{C,C'}    (9)

is considered as a degree of “uncertainty”. In this investigation four different algorithms for the calculation of μ_{C,C'} and ν_{C,C'} are used, based on the ideas presented in Atanassov et al. (2015). The following rules for defining the ways of estimating the degrees of “agreement” and “disagreement”, with respect to the type of data, are proposed:
• μ-biased: This algorithm follows the rules presented in (Atanassov et al., 2015, Table 3), where the rule for =, = for two criteria C and C' is assigned to μ_{C,C'}. An example pseudocode is presented below as Algorithm 1.
• Balanced: This algorithm follows the rules in (Atanassov et al., 2015, Table 2), where the rule for =, = for two criteria C and C' is assigned half to both μ_{C,C'} and ν_{C,C'}. It should be noted that in such a case a criterion compared to itself does not necessarily yield ⟨1, 0⟩. An example pseudocode is presented below as Algorithm 2.


• ν-biased: In this case the rule for =, = for two criteria C and C' is assigned to ν_{C,C'}. It should be noted that in such a case a criterion compared to itself does not necessarily yield ⟨1, 0⟩. An example pseudocode is presented below as Algorithm 3.
• Unbiased: This algorithm follows the rules in (Atanassov et al., 2015, Table 1). It should be noted that in such a case a criterion compared to itself does not necessarily yield ⟨1, 0⟩ either. An example pseudocode is presented below as Algorithm 4.
As a result of applying any of the proposed algorithms to the IM A (Eq. (5)), the following IM is constructed:

           C_2                            ...  C_m
C_1        ⟨μ_{C_1,C_2}, ν_{C_1,C_2}⟩      ...  ⟨μ_{C_1,C_m}, ν_{C_1,C_m}⟩
...        ...                            ...  ...
C_{m−1}                                   ...  ⟨μ_{C_{m−1},C_m}, ν_{C_{m−1},C_m}⟩

that determines the degrees of “agreement” and “disagreement” between criteria C_1, ..., C_m.

Algorithm 1: μ-biased
Require: Vectors V̂(C) and V̂(C')
function DegreesOfAgreementAndDisagreement(V̂(C), V̂(C'))
  V ← V̂(C) − V̂(C')
  μ ← 0; ν ← 0
  for i ← 1 to n(n−1)/2 do
    if V_i = 0 then
      μ ← μ + 1
    else if abs(V_i) = 2 then        ▷ abs(V_i): the absolute value of V_i
      ν ← ν + 1
    end if
  end for
  μ ← 2μ / (n(n−1))
  ν ← 2ν / (n(n−1))
  return μ, ν
end function


Algorithm 2: Balanced
Require: Vectors V̂(C) and V̂(C')
function DegreesOfAgreementAndDisagreement(V̂(C), V̂(C'))
  P ← V̂(C) ʘ V̂(C')                 ▷ ʘ denotes the Hadamard (entrywise) product
  V ← V̂(C) − V̂(C')
  μ ← 0; ν ← 0
  for i ← 1 to n(n−1)/2 do
    if V_i = P_i then
      μ ← μ + 1/2
      ν ← ν + 1/2
    else if V_i = 0 and P_i ≠ 0 then
      μ ← μ + 1
    else if abs(V_i) = 2 then        ▷ abs(V_i): the absolute value of V_i
      ν ← ν + 1
    end if
  end for
  μ ← 2μ / (n(n−1))
  ν ← 2ν / (n(n−1))
  return μ, ν
end function

Algorithm 3: ν-biased
Require: Vectors V̂(C) and V̂(C')
function DegreesOfAgreementAndDisagreement(V̂(C), V̂(C'))
  P ← V̂(C) ʘ V̂(C')                 ▷ ʘ denotes the Hadamard (entrywise) product
  V ← V̂(C) − V̂(C')
  μ ← 0; ν ← 0
  for i ← 1 to n(n−1)/2 do
    if V_i = 0 and P_i ≠ 0 then
      μ ← μ + 1
    else if V_i = P_i or abs(V_i) = 2 then   ▷ abs(V_i): the absolute value of V_i
      ν ← ν + 1
    end if
  end for
  μ ← 2μ / (n(n−1))
  ν ← 2ν / (n(n−1))
  return μ, ν
end function


Algorithm 4: Unbiased
Require: Vectors V̂(C) and V̂(C')
function DegreesOfAgreementAndDisagreement(V̂(C), V̂(C'))
  P ← V̂(C) ʘ V̂(C')                 ▷ ʘ denotes the Hadamard (entrywise) product
  V ← V̂(C) − V̂(C')
  μ ← 0; ν ← 0
  for i ← 1 to n(n−1)/2 do
    if V_i = 0 and P_i ≠ 0 then
      μ ← μ + 1
    else if abs(V_i) = 2 then        ▷ abs(V_i): the absolute value of V_i
      ν ← ν + 1
    end if
  end for
  μ ← 2μ / (n(n−1))
  ν ← 2ν / (n(n−1))
  return μ, ν
end function
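For concreteness, a minimal C++ sketch of the μ-biased rule of Algorithm 1 (assuming each internal-comparison vector stores −1, 0 or 1 per ordered pair, as in the small example above; names are illustrative and not from the authors' software):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// mu-biased degrees of agreement/disagreement for two internal-comparison vectors
// (entries in {-1, 0, 1}, one per ordered pair of objects).
std::pair<double, double> muBiasedDegrees(const std::vector<int>& vC,
                                          const std::vector<int>& vC2) {
    int mu = 0, nu = 0;
    for (std::size_t i = 0; i < vC.size(); ++i) {
        int v = vC[i] - vC2[i];
        if (v == 0) ++mu;                 // matching comparisons -> agreement
        else if (std::abs(v) == 2) ++nu;  // opposite comparisons -> disagreement
    }
    double len = static_cast<double>(vC.size());   // equals n(n-1)/2
    return {mu / len, nu / len};
}
```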

6 Computational Results and Discussion
The proposed hybrid ACO algorithm for MKP is tested on 10 MKP test instances from the Operational Research Library “OR-Library”, available at http://people.brunel.ac.uk/~mastjjb/jeb/info.html. Every test problem consists of 100 items and 10 constraints/knapsacks. We prepared software which realizes our hybrid algorithm. The software is coded in the C++ programming language and is run on a Pentium desktop computer at 2.8 GHz with 4 GB of memory. The ACO algorithm parameters are fixed experimentally as follows:
• Number of iterations = 300, Number of ants = 20;
• ρ = 0.5, τ_0 = 0.5;
• a = 1, b = 1 and d1 = 1.
We perform 30 independent runs on every one of the test instances, because the algorithm is stochastic, in order to guarantee the robustness of the average results. We apply an ANOVA test for statistical analysis and thus guarantee the significance of the difference between the average results. The names of the test instances are presented in Table 1. In Tables 2 and 3 the observed numerical results for all 30 runs are listed. In Table 4 the average results for every one of the test instances over 30 runs are reported. We compare the ACO algorithm combined with the local search procedure (hybrid ACO) with the traditional ACO algorithm. In the last row the average computational time, in seconds, of the two variants of the ACO algorithm is reported.
Table 4 shows that for eight of the ten instances the hybrid ACO algorithm outperforms the traditional one. For the instances MKP 100 × 10–02 and MKP 100 × 10–10

Table 1 Test instances

Instance            Name (Hybrid ACO)   Name (Traditional ACO)
MKP 100 × 10–01     P1h                 P1t
MKP 100 × 10–02     P2h                 P2t
MKP 100 × 10–03     P3h                 P3t
MKP 100 × 10–04     P4h                 P4t
MKP 100 × 10–05     P5h                 P5t
MKP 100 × 10–06     P6h                 P6t
MKP 100 × 10–07     P7h                 P7t
MKP 100 × 10–08     P8h                 P8t
MKP 100 × 10–09     P9h                 P9t
MKP 100 × 10–10     P10h                P10t

the results are statistically the same. The main problem with hybrid algorithms, when some global method is combined with a local search procedure, is the increase in computational time. We try to propose an efficient and, at the same time, less time-consuming local search. We only change a randomly chosen position in a solution to 0, if it is 1, and another randomly chosen position to 1, if it is 0. Thus only one neighbor solution is generated. If this solution is better than the current one, it is accepted and used for pheromone updating instead of the solution constructed by the ant. We apply this procedure to each of the solutions. As is seen from Table 4, the increase in computational time when our local search is applied is only 2.34%. Thus we can conclude that the proposed local search procedure is efficient and effective. The algorithm performance is improved without a significant increase in the computational time.
To support these claims, the obtained numerical results were analysed by ICrA using four different algorithms for intercriteria relations calculation. The input matrix for ICrA has the form of the index matrix in Table 5:

6.1 Results of Application of μ-Biased ICrA
The cross-platform software for the ICrA approach, ICrAData, is used Ikonomov et al. (2018). The input index matrices for ICrA have the form of Tables 2 and 3.

6.2 Results of Application of ν-Biased ICrA
See Tables 9, 10 and 11.


Table 2 Traditional ACO performance P1t P2t P3t P4t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9 Run10 Run11 Run12 Run13 Run14 Run15 Run16 Run17 Run18 Run19 Run20 Run21 Run22 Run23 Run24 Run25 Run26 Run27 Run28 Run29 Run30

22089 21954 21935 22030 21875 21970 21974 22041 21893 21984 21787 21916 22031 22188 21889 22009 21880 21958 22015 22054 22093 22074 22003 22169 22091 21945 22086 21926 21929 22030

22452 22055 21912 21914 21990 21999 21990 22120 21924 22104 21950 22675 22027 21975 22119 22101 21990 22102 21963 22027 22027 22065 22027 22005 22106 21899 22196 21975 22177 21931

20936 20966 21023 20732 21120 21114 21085 21032 21187 20822 21042 21182 20869 21203 21204 20877 21085 20872 20841 21007 20833 20960 20976 21003 20808 21103 21069 21003 20799 20925

21481 21318 21318 21556 21451 21619 21740 21918 21335 21719 21661 21629 21490 21736 21740 21705 21531 21335 21861 21815 21607 21701 21759 21490 21964 21335 21774 21437 21681 22061

P5t

P6t

P7t

P8t

P9t

P10t

21751 21606 21463 21519 21903 21736 21641 21811 21716 21767 21673 21818 21843 21811 21716 21952 21736 21811 21581 21679 21811 21711 21685 21735 21622 21417 21944 21922 21736 21434

21810 21864 21912 21903 21970 21756 21654 22053 21864 21864 22047 21834 22123 21864 21824 21779 21713 22053 21864 22140 21816 21903 21864 21898 21840 22053 22241 21864 21907 21864

21537 21659 21526 21470 21360 21426 21522 21584 21587 21509 21595 21426 21466 21601 21509 21409 21394 21596 21624 21392 21509 21509 21522 21511 21434 21426 21590 21531 21479 21509

21634 21596 21516 21337 21689 21729 21515 21550 21725 21550 22067 21550 21508 21550 21573 21506 21579 21550 21729 21729 21496 21496 21339 21520 21614 21516 21629 21503 21530 21366

22213 22398 22065 22191 22152 22125 22398 22109 22398 22078 22398 22101 22398 22398 22086 22039 22398 22105 22324 22398 22039 22398 22039 22156 22398 22059 22398 22398 22398 22398

40594 40701 40647 40617 40489 40646 40714 40550 40581 40594 40659 40646 40584 40627 40515 40498 40680 40589 40404 40367 40496 40737 40594 40317 40664 40319 40728 40636 40498 40756

6.3 Results of Application of Unbiased ICrA
See Tables 12, 13 and 14.

Table 3 Hybrid ACO performance P1h P2h P3h P4h Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9 Run10 Run11 Run12 Run13 Run14 Run15 Run16 Run17 Run18 Run19 Run20 Run21 Run22 Run23 Run24 Run25 Run26 Run27 Run28 Run29 Run30

22206 21968 21970 22130 22107 22138 22367 22051 21867 21910 22133 22164 21949 22067 21889 21999 21926 21914 22008 21840 21993 22130 21958 22014 22072 22088 21928 21933 21970 21993

22047 22186 22074 22028 22168 22027 22027 22028 22104 22027 22027 22074 22044 22102 22027 22213 22065 22027 22028 22151 22155 22106 22060 22027 22027 22104 22027 22110 22027 22027

21292 21089 21233 21687 21020 21222 21416 21261 20861 21090 20758 20848 21090 20954 21236 21185 21017 21097 20953 21166 20839 21134 20881 21373 20925 20857 20934 20916 21281 21064

21885 21962 22139 21701 21885 21736 21962 21420 21780 21666 21854 21885 22099 21921 21801 21561 22174 21542 21490 21893 21656 21962 21885 22023 21953 22052 21864 21885 21793 21951


P5h

P6h

P7h

P8h

P9h

P10h

21811 21811 21716 21811 21811 21798 21811 21790 21811 21811 21844 21798 21736 21798 21811 21811 21914 21855 21796 21804 21811 21798 21811 21811 21811 21974 21811 21811 21832 21811

21940 21957 21934 22024 22047 21980 21900 22048 21985 21987 22011 22053 21987 21987 22113 21900 21937 21864 22053 21987 21891 22063 21987 21924 21987 21987 21987 22121 21987 22050

21509 21509 21530 21415 21522 21522 21437 21509 21531 21537 21655 21330 21409 21587 21509 21509 21509 21509 21624 21418 21584 21533 21426 21511 21509 21509 21509 21509 21509 21509

22004 21616 21550 21729 21729 21551 21729 21648 21729 21729 21729 21653 21729 21729 21729 21650 21550 21550 21683 21729 21550 21658 21729 21729 21697 21550 21550 21689 21729 21550

22270 22097 22087 22294 22285 22257 22140 22125 22398 22398 22398 22154 22398 22117 22069 22429 22191 22247 22398 22193 22398 22479 22152 22152 22123 22429 22218 22398 22398 22479

40647 40557 40598 40679 40647 40522 40538 40710 40710 40647 40583 40662 40664 40683 40689 40636 40489 40714 40677 40728 40565 40742 40503 40514 40650 40751 40658 40499 40598 40540

6.4 Results of Application of Balanced ICrA
The results obtained by ICrA based on the four different algorithms for intercriteria relations calculation (μ-biased ICrA, ν-biased ICrA, Unbiased ICrA and Balanced ICrA) are listed in Tables 6, 9, 12 and 15 (μ-values) and Tables 7, 10, 13 and 16 (ν-values). The π-values are also presented (see Tables 8, 11, 14 and 17). The results between the same instances but for the different ACO algorithms are presented.


Table 4 Comparison of ACO performance

Instance              Hybrid ACO    Traditional ACO
MKP 100 × 10–01       22022.73      21989.43
MKP 100 × 10–02       22071.46      22081.36
MKP 100 × 10–03       21089.3       21027.63
MKP 100 × 10–04       21846         21635.3
MKP 100 × 10–05       21814.3       21717.3
MKP 100 × 10–06       21989.26      21869.73
MKP 100 × 10–07       21506.26      21477.3
MKP 100 × 10–08       21672.53      21606.43
MKP 100 × 10–09       22272.36      22257
MKP 100 × 10–10       40626.66      40623.26
computational time    64.052 s      65.552 s

Table 5 Index matrix for ICrA

        Run1           Run2           ...   Run30
P1t     val_{P1t,1}    val_{P1t,2}    ...   val_{P1t,30}
P2t     val_{P2t,1}    val_{P2t,2}    ...   val_{P2t,30}
...     ...            ...            ...   ...
P10t    val_{P10t,1}   val_{P10t,2}   ...   val_{P10t,30}
P1h     val_{P1h,1}    val_{P1h,2}    ...   val_{P1h,30}
P2h     val_{P2h,1}    val_{P2h,2}    ...   val_{P2h,30}
...     ...            ...            ...   ...
P10h    val_{P10h,1}   val_{P10h,2}   ...   val_{P10h,30}

For example, the relations between P1t–P1h, P2t–P2h, P3t–P3h, etc., are considered for further analysis (presented in bold in Tables 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 and 17). According to Atanassov et al. (2015), the results show that the considered criteria pairs are in dissonance or in strong dissonance. This means that the two compared ACO algorithms (the hybrid and the traditional one) performed differently on all 10 instances. The results obtained from ICrA are correct and reliable, taking into account the observed π_{C,C'}-values. Only for the relations between P5t–P5h, P8t–P8h and P9t–P9h are there somewhat high π_{C,C'}-values, respectively 0.31, 0.25 and 0.24.


Table 6 Degree of agreement—μC,C ' -values (μ−biased ICrA) μ

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.55 0.42 0.57 0.47 0.25 0.4 0.41 0.4 0.45 0.55

0.46 0.37 0.57 0.39 0.34 0.45 0.3 0.42 0.38 0.52

0.51 0.48 0.44 0.55 0.33 0.5 0.44 0.38 0.35 0.5

0.5 0.3 0.5 0.4 0.27 0.57 0.38 0.42 0.44 0.52

0.45 0.48 0.46 0.35 0.32 0.43 0.4 0.4 0.49 0.46

0.38 0.39 0.46 0.47 0.36 0.5 0.38 0.37 0.49 0.63

0.45 0.36 0.46 0.4 0.35 0.44 0.52 0.39 0.39 0.48

0.46 0.37 0.43 0.41 0.34 0.46 0.51 0.42 0.39 0.54

0.37 0.34 0.39 0.51 0.34 0.46 0.36 0.42 0.42 0.38

0.55 0.36 0.47 0.61 0.34 0.48 0.38 0.29 0.41 0.38

Table 7 Degree of disagreement—νC,C ' -values (μ−biased ICrA) ν

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.44 0.44 0.43 0.49 0.46 0.51 0.41 0.37 0.47 0.43

0.51 0.46 0.4 0.55 0.36 0.45 0.5 0.35 0.53 0.44

0.47 0.38 0.55 0.42 0.38 0.41 0.37 0.39 0.57 0.48

0.48 0.56 0.49 0.55 0.43 0.33 0.43 0.35 0.48 0.45

0.52 0.37 0.52 0.59 0.37 0.46 0.41 0.36 0.41 0.5

0.54 0.4 0.47 0.43 0.35 0.36 0.4 0.36 0.4 0.29

0.52 0.48 0.5 0.54 0.37 0.44 0.27 0.37 0.52 0.47

0.5 0.46 0.54 0.53 0.34 0.42 0.28 0.33 0.51 0.42

0.4 0.38 0.4 0.26 0.27 0.31 0.34 0.24 0.34 0.39

0.43 0.49 0.51 0.36 0.37 0.43 0.43 0.47 0.51 0.6

Table 8 Degree of uncertainty—πC,C ' -values (μ−biased ICrA) π

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.01 0.14 0 0.04 0.29 0.09 0.18 0.23 0.08 0.02

0.03 0.17 0.03 0.06 0.3 0.1 0.2 0.23 0.09 0.04

0.02 0.14 0.01 0.03 0.29 0.09 0.19 0.23 0.08 0.02

0.02 0.14 0.01 0.05 0.3 0.1 0.19 0.23 0.08 0.03

0.03 0.15 0.02 0.06 0.31 0.11 0.19 0.24 0.1 0.04

0.08 0.21 0.07 0.1 0.29 0.14 0.22 0.27 0.11 0.08

0.03 0.16 0.04 0.06 0.28 0.12 0.21 0.24 0.09 0.05

0.04 0.17 0.03 0.06 0.32 0.12 0.21 0.25 0.1 0.04

0.23 0.28 0.21 0.23 0.39 0.23 0.3 0.34 0.24 0.23

0.02 0.15 0.02 0.03 0.29 0.09 0.19 0.24 0.08 0.02


Table 9 Degree of agreement—μC,C ' -values (ν−biased ICrA) μ

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.55 0.42 0.57 0.47 0.25 0.40 0.41 0.40 0.45 0.55

0.46 0.37 0.57 0.39 0.33 0.44 0.30 0.41 0.38 0.52

0.51 0.48 0.44 0.55 0.33 0.50 0.44 0.38 0.35 0.50

0.50 0.30 0.50 0.40 0.27 0.57 0.38 0.41 0.44 0.52

0.45 0.48 0.46 0.35 0.32 0.43 0.39 0.40 0.49 0.46

0.38 0.39 0.46 0.47 0.32 0.49 0.36 0.36 0.47 0.63

0.45 0.36 0.46 0.40 0.33 0.44 0.52 0.38 0.38 0.48

0.46 0.37 0.43 0.41 0.34 0.46 0.50 0.42 0.39 0.54

0.37 0.30 0.39 0.50 0.29 0.42 0.31 0.37 0.39 0.38

0.55 0.36 0.47 0.60 0.34 0.47 0.38 0.29 0.41 0.38

Table 10 Degree of disagreement—νC,C ' -values (ν−biased ICrA) ν

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.44 0.44 0.43 0.49 0.47 0.51 0.41 0.37 0.47 0.43

0.51 0.46 0.40 0.55 0.36 0.46 0.50 0.36 0.53 0.44

0.47 0.38 0.55 0.42 0.38 0.41 0.37 0.39 0.57 0.48

0.48 0.56 0.49 0.55 0.43 0.33 0.43 0.36 0.48 0.45

0.52 0.37 0.52 0.59 0.37 0.46 0.41 0.36 0.41 0.50

0.54 0.40 0.47 0.43 0.38 0.36 0.41 0.37 0.41 0.29

0.52 0.49 0.50 0.54 0.38 0.44 0.27 0.37 0.52 0.47

0.50 0.46 0.54 0.53 0.34 0.42 0.29 0.33 0.51 0.42

0.40 0.41 0.40 0.27 0.32 0.34 0.38 0.30 0.37 0.39

0.43 0.49 0.51 0.36 0.38 0.43 0.43 0.48 0.51 0.60

Table 11 Degree of uncertainty—πC,C ' -values (ν−biased ICrA) π

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.01 0.14 0.00 0.03 0.29 0.09 0.18 0.23 0.07 0.01

0.03 0.16 0.03 0.05 0.30 0.10 0.20 0.23 0.09 0.03

0.01 0.14 0.01 0.03 0.29 0.09 0.19 0.23 0.08 0.02

0.02 0.14 0.02 0.04 0.30 0.10 0.19 0.23 0.09 0.03

0.03 0.16 0.03 0.05 0.31 0.11 0.20 0.24 0.09 0.03

0.08 0.21 0.08 0.10 0.29 0.15 0.23 0.27 0.12 0.09

0.03 0.16 0.03 0.06 0.29 0.12 0.21 0.25 0.09 0.04

0.04 0.17 0.04 0.06 0.32 0.12 0.21 0.25 0.11 0.05

0.22 0.29 0.22 0.23 0.39 0.23 0.30 0.33 0.24 0.23

0.02 0.15 0.01 0.04 0.28 0.09 0.19 0.23 0.08 0.02


Table 12 Degree of agreement—μC,C ' -values (Unbiased ICrA) μ

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.55 0.42 0.57 0.47 0.25 0.40 0.41 0.40 0.45 0.55

0.46 0.37 0.57 0.39 0.33 0.44 0.30 0.41 0.38 0.52

0.51 0.48 0.44 0.55 0.33 0.50 0.44 0.38 0.35 0.50

0.50 0.30 0.50 0.40 0.27 0.57 0.38 0.41 0.44 0.52

0.45 0.48 0.46 0.35 0.32 0.43 0.39 0.40 0.49 0.46

0.38 0.39 0.46 0.47 0.32 0.49 0.36 0.36 0.47 0.63

0.45 0.36 0.46 0.40 0.33 0.44 0.52 0.38 0.38 0.48

0.46 0.37 0.43 0.41 0.34 0.46 0.50 0.42 0.39 0.54

0.37 0.30 0.39 0.50 0.29 0.42 0.31 0.37 0.39 0.38

0.55 0.36 0.47 0.60 0.34 0.47 0.38 0.29 0.41 0.38

Table 13 Degree of disagreement—νC,C ' -values (Unbiased ICrA) ν

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.44 0.44 0.43 0.49 0.46 0.51 0.41 0.37 0.47 0.43

0.51 0.46 0.40 0.55 0.36 0.45 0.50 0.35 0.53 0.44

0.47 0.38 0.55 0.42 0.38 0.41 0.37 0.39 0.57 0.48

0.48 0.56 0.49 0.55 0.43 0.33 0.43 0.35 0.48 0.45

0.52 0.37 0.52 0.59 0.37 0.46 0.41 0.36 0.41 0.50

0.54 0.40 0.47 0.43 0.35 0.36 0.40 0.36 0.40 0.29

0.52 0.48 0.50 0.54 0.37 0.44 0.27 0.37 0.52 0.47

0.50 0.46 0.54 0.53 0.34 0.42 0.28 0.33 0.51 0.42

0.40 0.38 0.40 0.26 0.27 0.31 0.34 0.24 0.34 0.39

0.43 0.49 0.51 0.36 0.37 0.43 0.43 0.47 0.51 0.60

Table 14 Degree of uncertainty—πC,C ' -values (Unbiased ICrA) π

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.01 0.14 0.00 0.03 0.29 0.09 0.18 0.23 0.07 0.01

0.03 0.16 0.03 0.05 0.31 0.10 0.20 0.24 0.09 0.03

0.01 0.14 0.01 0.03 0.29 0.09 0.19 0.23 0.08 0.02

0.02 0.15 0.02 0.04 0.30 0.10 0.19 0.23 0.09 0.03

0.03 0.16 0.03 0.05 0.31 0.11 0.20 0.25 0.09 0.03

0.08 0.21 0.08 0.10 0.33 0.15 0.24 0.29 0.13 0.09

0.04 0.16 0.03 0.06 0.30 0.12 0.21 0.25 0.10 0.04

0.04 0.17 0.04 0.06 0.32 0.12 0.21 0.26 0.11 0.05

0.22 0.32 0.22 0.24 0.45 0.27 0.35 0.39 0.26 0.23

0.02 0.15 0.01 0.04 0.29 0.10 0.19 0.24 0.08 0.02


Table 15 Degree of agreement—μC,C ' -values (Balanced ICrA) μ

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.55 0.42 0.57 0.47 0.25 0.40 0.41 0.40 0.45 0.55

0.46 0.37 0.57 0.39 0.34 0.45 0.30 0.41 0.38 0.52

0.51 0.48 0.44 0.55 0.33 0.50 0.44 0.38 0.35 0.50

0.50 0.30 0.50 0.40 0.27 0.57 0.38 0.41 0.44 0.52

0.45 0.48 0.46 0.35 0.32 0.43 0.39 0.40 0.49 0.46

0.38 0.39 0.46 0.47 0.34 0.49 0.37 0.36 0.48 0.63

0.45 0.36 0.46 0.40 0.34 0.44 0.52 0.39 0.38 0.48

0.46 0.37 0.43 0.41 0.34 0.46 0.50 0.42 0.39 0.54

0.37 0.32 0.39 0.50 0.31 0.44 0.34 0.40 0.41 0.38

0.55 0.36 0.47 0.61 0.34 0.47 0.38 0.29 0.41 0.38

Table 16 Degree of disagreement—νC,C ' -values (Balanced ICrA) ν

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.44 0.44 0.43 0.49 0.47 0.51 0.41 0.37 0.47 0.43

0.51 0.46 0.40 0.55 0.36 0.46 0.50 0.35 0.53 0.44

0.47 0.38 0.55 0.42 0.38 0.41 0.37 0.39 0.57 0.48

0.48 0.56 0.49 0.55 0.43 0.33 0.43 0.36 0.48 0.45

0.52 0.37 0.52 0.59 0.37 0.46 0.41 0.36 0.41 0.50

0.54 0.40 0.47 0.43 0.36 0.36 0.40 0.37 0.40 0.29

0.52 0.48 0.50 0.54 0.37 0.44 0.27 0.37 0.52 0.47

0.50 0.46 0.54 0.53 0.34 0.42 0.28 0.33 0.51 0.42

0.40 0.40 0.40 0.27 0.30 0.33 0.36 0.27 0.36 0.39

0.43 0.49 0.51 0.36 0.38 0.43 0.43 0.47 0.51 0.60

Table 17 Degree of uncertainty—πC,C ' -values (Balanced ICrA) π

P1t

P2t

P3t

P4t

P5t

P6t

P7t

P8t

P9t

P10t

P1h P2h P3h P4h P5h P6h P7h P8h P9h P10h

0.01 0.14 0.00 0.03 0.29 0.09 0.18 0.23 0.07 0.01

0.03 0.16 0.03 0.05 0.30 0.10 0.20 0.23 0.09 0.03

0.01 0.14 0.01 0.03 0.29 0.09 0.19 0.23 0.08 0.02

0.02 0.14 0.02 0.04 0.30 0.10 0.19 0.23 0.09 0.03

0.03 0.16 0.03 0.05 0.31 0.11 0.20 0.24 0.09 0.03

0.08 0.21 0.08 0.10 0.29 0.15 0.23 0.27 0.12 0.09

0.03 0.16 0.03 0.06 0.29 0.12 0.21 0.25 0.09 0.04

0.04 0.17 0.04 0.06 0.32 0.12 0.21 0.25 0.11 0.05

0.22 0.29 0.22 0.23 0.39 0.23 0.30 0.33 0.24 0.23

0.02 0.15 0.01 0.04 0.28 0.09 0.19 0.23 0.08 0.02


For these pairs the obtained estimates for the degree of agreement and the degree of disagreement have a higher degree of uncertainty. The conclusion about the distinct performance of the considered hybrid ACO algorithm in all 10 cases is unambiguously confirmed by the four different algorithms for calculating the intercriteria relations. There are some observed differences in the case of P9 and P6 (marked with underlining in Tables 9, 10, 12, 14, 15 and 16) that are negligibly small and can be ignored.

7 Conclusion
In this paper a hybrid ACO algorithm for solving MKP is proposed. The hybrid algorithm is a combination of a traditional ACO algorithm and a local search procedure. The proposed algorithm is tested on 10 MKP benchmarks. The achieved results show the efficiency and effectiveness of the proposed local search procedure. The hybrid algorithm performs better than the traditional one, while the computation time increases by only 2.34%.
The obtained results are analyzed by the ICrA approach. Four different algorithms performing ICrA (μ-biased, Balanced, ν-biased and Unbiased) are used and applied in order to analyze the results. The analysis shows that the two algorithms perform differently on the considered 10 instances, i.e. the behavior of the proposed hybrid ACO is substantially different from that of the traditional ACO algorithm; in other words, the local search procedure significantly perturbs the search process. Through the application of the ICrA approach the efficiency and effectiveness of the proposed hybrid ACO algorithm are confirmed.
Author Contributions: The authors contributed equally to the work.
Acknowledgements The development of the proposed hybrid ACO algorithm with local search procedure has been funded by the Grant DFNI KP-06-N52/5 and Grant No BG05M2OP001-1.0010003, financed by the Science and Education for Smart Growth Operational Program and co-financed by the European Union through the European Structural and Investment Funds. The study of ACO algorithms behavior based on the ICrA approach has been funded by the Bulgarian National Science Fund, Grant KP-06-N22/1 “Theoretical Research and Applications of InterCriteria Analysis”.

References Angelova, M., Roeva, O., Pencheva, T.: InterCriteria analysis of crossover and mutation rates relations in simple genetic algorithm. In: Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, vol. 5, pp. 419–424 (2015) Antonov, A.: Analysis and detection of the degrees and direction of correlations between key indicators of physical fitness of 10–12-year-old hockey players. Int. J. Bioautomation 23(3), 303–314 (2019). https://doi.org/10.7546/ijba.2019.23.3.000709

58

S. Fidanova et al.

Antonov, A.: Dependencies between model indicators of the basic and the specialized speed in hockey players aged 13–14. Trakia J. Sci. 18(Suppl. 1), 647–657 (2020) Arsik, I., Keskinocak, P., Coppola, J., Hampapur, K., He, Y., Jiang, H., Regala, D., Tailhardat, N., Goin, K.: Effective and equitable appointment scheduling in rehabilitation centers. INFORMS Annu. Meet. 2017, 22–25 (Oct 2017) Atanassov, K.: Index matrices: towards an augmented matrix calculus. Springer International Publishing Switzerland (2014) Atanassov, K.: Intuitionistic fuzzy sets: VII ITKR session, sofia. Int. J. Bioautomation 20(S1), S1–S6 (2016), 20–23 June 1983 Atanassov, K.: Generalized index matrices. Comptes rendus de l’Academie bulgare des Sci. 40(11), 15–18 (1987) Atanassov, K.: On intuitionistic fuzzy sets theory. Springer, Berlin (2012) Atanassov, K.: On index matrices, part 1: standard cases. Adv. Stud. Contemp. Math. 20(2), 291–302 (2010) Atanassov, K.: On index matrices, part 2: intuitionistic fuzzy case. Proc. Jangjeon Math. Soc. 13(2), 121–126 (2010) Atanassov, K.: Review and new results on intuitionistic fuzzy sets, mathematical foundations of artificial intelligence seminar, sofia, 1988, Preprint IM-MFAIS-1-88. Int. J. Bioautomation 20(S1), S7–S16 (2016) Atanassov, K., Mavrov, D., Atanassova, V.: InterCriteria decision making: a new approach for multicriteria decision making, based on index matrices and intuitionistic fuzzy sets. Issues Intuit. Fuzzy Sets Gen. Nets 11, 1–8 (2014) Atanassov, K., Szmidt, E., Kacprzyk, J.: On intuitionistic fuzzy pairs. Notes Intuit. Fuzzy Sets 19(3), 1–13 (2013) Atanassov, K., Atanassova, V., Gluhchev, G.: InterCriteria analysis: ideas and problems. Notes Intuit. Fuzzy Sets 21(1), 81–88 (2015) Atanassova, V., Mavrov, D., Doukovska, L., Atanassov, K.: Discussion on the threshold values in the intercriteria decision making approach. Notes Intuit. Fuzzy Sets 20(2), 94–99 (2014) Birattari, M., Stutzle, T., Paquete, L., Varrentrapp, K.: A racing algorithm for configuring metaheuristics. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 11–18 (2002) Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999) Dorigo, M., Gambardella, L.: Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1, 53–66 (1996) Dorigo, M., Stutzle, T.: Ant Colony Optimization. MIT Press (2004) Fidanova, S., Lirkov, I.: 3D protein structure prediction. Analele Universitatii de Vest Timisoara, vol. XLVII, pp. 33–46 (2009) Fidanova, S.: An improvement of the grid-based hydrophobic-hydrophilic model. Int. J. Bioautomation 14, 147–156 (2010) Fidanova, S.: ACO algorithm with additional reinforcement. In: International Conference from Ant Colonies to Artificial Ants. Lecture Notes in Computer Science, vol. 2463, pp. 292–293 (2003) Fidanova, S., Atanassov, K., Marinov, P.: Generalized Nets and Ant Colony Optimization. Bulg. Academy of Sciences Pub, House (2011) Fidanova, S., Atanassov, K., Marinov, P.: Start strategies of ACO applied on subset problems. In: Numerical Methods and Applications. Lecture Notes Computer Science, vol. 6046, pp. 248–255 (2011) Fidanova, S., Atanassov, K., Marinov, P.: Intuitionistic fuzzy estimation of the ant colony optimization starting points. In: Large Scale Scientific Computing. Lecture Notes in Computer Science, vol. 7116, pp. 
219–226 (2012) Fidanova, S., Roeva, O., Paprzycki, M.: InterCriteria analysis of ACO start strategies. In: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, vol. 8, pp. 547–550 (2016)

Hybrid Ant Colony Optimization Algorithms—Behaviour …

59

Fidanova, S.: Hybrid ant colony optimization algorithm for multiple knapsack problem. In: 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE), pp. 1–5. IEEE (2021). https://doi.org/10.1109/ICRAIE51050.2020.9358351 Feuerman, M., Weiss, H.: A mathematical programming model for test construction and scoring. Manage. Sci. 19(8), 961–966 (1973) Goldberg, D.E., Korb, B., Deb, K.: Messy genetic algorithms: motivation analysis and first results. Complex Syst. 5(3), 493–530 (1989) Ikonomov, N., Vassilev, P., Roeva, O.: ICrAData-software for intercriteria analysis. Int. J. Bioautomation 22(1), 1–10 (2018). https://doi.org/10.7546/ijba.2018.22.1.1-10 Karaboga, D., Basturk, B.: Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In: Advances in Soft Computing: Foundations of Fuzzy Logic and Soft Computing, LNCS, vol. 4529, pp. 789–798 (2007) Kellerer, H., Pferschy, U., Pisinger, D.: Multiple knapsack problems. In: Knapsack Problems. Springer, Berlin (2004) Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of IEEE International Conference on Neural Networks, vol. IV, pp. 1942–1948 (1995) Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing science. New York, N.Y., vol. 13 , issure 220, pp. 671–680 (1983) Kochenberger, G., McCarl, G., Wymann, F.: An heuristic for general integer programming. Decis. Sci. 5, 34–44 (1974) Krause, J., Cordeiro, J., Parpinelli, R.S., Lopes, H.S.: A survey of swarm algorithms applied to discrete optimization problems. In: Swarm Intelligence and Bio-Inspired Computation, pp. 169– 191. Elsevier (2013) Lawler, E.L., Lenstra, J.K., Kan, A.H.R., Shmoys, D.B.: Sequencing and scheduling: algorithms and complexity. In: Graves, S.C., et al. (eds.) Handbooks in OR and MS, vol. 4, pp. 445–522. Elsevier Science Publishers (1993) Leguizamon, G., Michalevich, Z.: A new version of ant system for subset problems. In: International Conference on Evolutionary Computations, vol. 2, pp. 1459–1464 (1999) Liu, Q., Odaka, T., Kuroiwa, J., Shirai, H., Ogura, H.: A new artificial fish swarm algorithm for the multiple knapsack problem. IEICE Trans. Inf. Syst. 97(3), 455–468 (2014) Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014) Mosavi, M.R., Khishe, M., Parvizi, G.R., Naseri, M.J., Ayat, M.: Training multi-layer perceptron utilizing adaptive best-mass gravitational search algorithm to classify sonar dataset. Arch. Acoust. 44(1), 137–151 (2019) Murgolo, F.D.: An efficient approximation scheme for variable-sized bin packing. SIAM J. Comput. 16(1), 149–161 (1987) Schaffer, A.A., Yannakakis, M.: Simple local search problems that are hard to solve. Soc. Ind. Appl. Math. J. Comput. 20, 56–87 (1991) Osman, I.H.: Metastrategy simulated annealing and tabue search algorithms for the vehicle routing problem. Ann. Oper. Res. 41(4), 421–451 (1993) Ravakhah, S., Khishe, M., Aghababaee, M., Hashemzadeh, E.: Sonar false alarm rate suppression using classification methods based on interior search algorithm. Int. J. Comput. Sci. Netw. Secur. 17(7), 58–65 (2017) Roeva, O., Vassilev, P., Angelova, M., Pencheva, T., Su, J.: Comparison of different algorithms for Intercriteria relations calculation. In: 2016 IEEE 8th International Conference on Intelligent Systems (IS), pp. 567–572 (2016). https://doi.org/10.1109/IS.2016.7737481 Stutzle, T., Hoos, H.: Max min ant system. Futur. Gener. Comput. Syst. 
16, 889–914 (2000) Vikhar, P.A.: Evolutionary algorithms: a critical review and its future prospects. In:, Proceedings of the 2016 International Conference on Global Trends in Signal Processing. Information Computing and Communication (ICGTSPICC), pp. 261–265. Jalgaon (2016) Woodcock, A.J., Wilson, J.M.: A hybrid tabue search/branch and bound approach to solving the generalized assignment problem. Eur. J. Oper. Res. 207(2), 566–578 (2010)


Yang, X.S.: A new metaheuristic bat-inspired algorithm, nature inspired cooperative strategies for optimization, studies in computational. Intelligence 284, 65–74 (2010) Yang, X.S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press (2008) Zaharieva, B., Doukovska, L., Ribagin, S., Radeva, I.: InterCriteria analysis of data obtained from patients with Behterev’s disease. Int. J. Bioautomation 24(1), 5–14 (2020). https://doi.org/10. 7546/ijba.2020.24.1.000507

Scheduling Algorithms for Single Machine Problem with Release and Delivery Times Natalia Grigoreva

Abstract The problem of minimizing the maximum delivery time while scheduling jobs on a single processor is a classical combinatorial optimization problem. Each job has a release time, a processing time and a delivery time. The objective is to minimize the time by which all jobs are delivered. This problem, denoted by 1|r_j, q_j|C_max, has many applications, and it is NP-hard in the strong sense. The problem is useful in solving flowshop and jobshop scheduling problems. The goal of this paper is to propose a new 3/2-approximation algorithm, which runs in O(n log n) time, for the scheduling problem 1|r_j, q_j|C_max. We present an example which shows that the bound of 3/2 is tight. To compare the effectiveness of the proposed algorithms we tested randomly generated problems of up to 5000 jobs.
Keywords Single-machine scheduling problem · Release and delivery times · Approximation algorithm · Worst-case performance ratio

1 Introduction
The problem of minimizing the maximum delivery time while scheduling jobs on a single processor is a classical combinatorial optimization problem. We consider a set of jobs U = {i_1, i_2, ..., i_n}. Each job i must be processed without interruption for t(i) time units on the processor, which can process at most one job at a time. Each job i has a release time r(i), when the job is ready for processing, and a delivery time q(i). The delivery of each job begins immediately after its processing has been completed. The objective is to minimize the time by which all jobs are delivered. In

N. Grigoreva (B) St. Petersburg State University, Universitetskaja nab. 7/9, 199034 St. Petersburg, Russia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_4


the notation of Graham et al. (1979) this problem is denoted by 1|r_j, q_j|C_max, and it has many applications. It is required to construct a schedule, that is, to find for each job i ∈ U a start time τ(i), provided that r(i) ≤ τ(i). The characteristic of the quality of the schedule is the delivery time of the last job, which is equal to C_max = max{τ(i) + t(i) + q(i) | i ∈ U}. The goal is to construct a schedule that minimizes C_max. In Lenstra (1977) it is shown that the problem is NP-hard in the strong sense, but there are exact polynomial algorithms for some special cases.
Some authors considered an equivalent formulation of the problem, in which instead of the delivery time a due date D(i) = K − q(i) is known for each job, where K is a constant, and the objective function is the maximum lateness L_max = max{τ(i) + t(i) − D(i) | i ∈ U}. This formulation of the problem is denoted as 1|r_i|L_max. The advantage of the model with delivery times is that the value of the objective function is always positive, while the maximum lateness can be negative or equal to zero. If we swap the delivery times and the release times, we get an inverse problem with the property that a solution of the direct problem S = (i_1, i_2, ..., i_n) is optimal if and only if the permutation S_inv = (i_n, i_{n−1}, ..., i_1) is an optimal solution of the inverse problem.
The 1|r_j, q_j|C_max problem is the main subproblem in many important models of scheduling theory, such as multiprocessor scheduling and flowshop and jobshop problems. The study of this problem is of theoretical interest and is useful in practical industrial applications (Artigues and Feillet 2008; Chandra et al. 2014; Sourirajan and Uzsoy 2007).
Several approximation algorithms are known for solving the problem 1|r_i, q_i|C_max. The first algorithm for constructing an approximate schedule was the Schrage heuristic (Schrage 1971), an extended Jackson rule, which is formulated as follows: each time the processor is free, a ready job with the maximum delivery time is assigned to it (a sketch of this rule is given at the end of this section). The computational complexity of the Schrage heuristic is O(n log n). Kise et al. (1979) have shown that it is a 2-approximation algorithm. Potts (1980) proposed an algorithm in which the extended Jackson's rule algorithm is repeated n times. Before each new start, a new precedence constraint is added to the set of tasks. The best schedule is selected from the n constructed schedules. The computational complexity of the Potts algorithm is O(n^2 log n), and its worst-case performance ratio is equal to 3/2.
Hall and Schmois (1992) considered, along with the direct problem, the inverse problem, in which the release times of the tasks and the delivery times are swapped. The authors developed a scheduling method in which the Potts algorithm is applied to the direct and the inverse problem. A case in which there are two big tasks is considered separately. In total, the algorithm builds 4n schedules and chooses the best one. The computational complexity of the algorithm is O(n^2 log n), and the worst-case performance ratio is equal to 4/3.
Novitsky and Smutnitsky (1994) proposed a 3/2-approximation algorithm which creates only two permutations. First the Jackson rule is applied, then the interference job is determined and the set of tasks is divided into two sets: tasks that should be performed before the interference job and tasks that should be


performed after it. Tasks from the first set are performed in the order of their release times, and tasks from the second set are performed in non-increasing order of their delivery times. The best schedule is selected from the two built ones. The computational complexity is O(n log n). All the mentioned algorithms use the greedy list Schrage algorithm as a basic heuristic.
The works of Baker (1974), Carlier (1982), Grabowski et al. (1986), McMahon and Florian (1975) and Pan and Shi (2006) developed branch and bound algorithms for the single processor scheduling problem using different branching rules and bounding techniques. The most efficient one is the algorithm by Carlier, which optimally solves instances with up to a thousand jobs. This algorithm constructs a full solution by the extended Jackson's rule in each node of the search tree. The branch and bound algorithms proposed in Chandra et al. (2014), Baker (1974), Grabowski et al. (1986) and Liu (2010) can be applied to the problem with precedence constraints.
One way to improve the performance of the branch and bound method is to use efficient approximation algorithms to obtain upper bounds. Such algorithms should have a good approximation ratio and low computational complexity. One of the popular scheduling tools are list algorithms that build non-delayed schedules. In a list algorithm, at each step the task with the highest priority is selected from the set of ready tasks. But the optimal schedule may not belong to the class of non-delayed schedules. IIT schedules (IIT, inserted idle time) were defined in Kanet and Sridharan (2000) as feasible schedules in which the processor can be idle when there are jobs ready to run. The author considered an IIT 2-approximation algorithm for a single-machine scheduling problem (Grigoreva 2016) and developed branch and bound algorithms for the single processor scheduling problem with release and delivery times (Grigoreva 2018) and for the problem with precedence constraints (Grigoreva 2019). The main idea of greedy algorithms for solving these problems is the choice at each step of the highest priority task, before the execution of which the processor may be idle.
In this paper, we propose an approximation algorithm ICA for solving the problem, which creates two permutations: one by the Schrage method, and the second by an algorithm with inserted idle time. The construction of each permutation requires O(n log n) operations. We prove that the worst-case performance ratio of the algorithm ICA is equal to 3/2 and that the bound of 3/2 is tight.
The article is organized as follows: Sect. 2 presents a new approximate scheduling algorithm IJR. In Sect. 3 we propose the combined ICA algorithm, which builds two permutations and chooses the best one. The theoretical study of the proposed algorithm in Sect. 4 contains five lemmas and one theorem. We prove that the worst-case performance ratio of the ICA algorithm is equal to 3/2. An example is given showing that the bound of 3/2 is tight. In Sect. 5 we propose to apply the ICA algorithm to the direct and the inverse problem. The results of the computational experiment, which showed the speed and practical accuracy of the algorithm, are given in Sect. 6. In conclusion, the main results obtained in the article are formulated.
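As a sketch of the extended Jackson (Schrage) rule referred to above (illustrative C++, not the author's implementation): jobs are considered in order of their release times, and whenever the machine is free the ready job with the largest delivery time q(i) is started; the makespan max{τ(i) + t(i) + q(i)} is returned.

```cpp
#include <algorithm>
#include <queue>
#include <vector>

struct Job { long long r, t, q; };   // release, processing and delivery times

// Extended Jackson (Schrage) rule: returns Cmax of the constructed schedule.
long long schrage(std::vector<Job> jobs) {
    std::sort(jobs.begin(), jobs.end(),
              [](const Job& a, const Job& b) { return a.r < b.r; });
    // Ready jobs ordered by largest delivery time first.
    auto cmp = [](const Job& a, const Job& b) { return a.q < b.q; };
    std::priority_queue<Job, std::vector<Job>, decltype(cmp)> ready(cmp);
    long long time = 0, cmax = 0;
    std::size_t next = 0;
    while (next < jobs.size() || !ready.empty()) {
        if (ready.empty()) time = std::max(time, jobs[next].r);        // forced idle time
        while (next < jobs.size() && jobs[next].r <= time) ready.push(jobs[next++]);
        Job u = ready.top();
        ready.pop();
        time += u.t;                                  // start u at the current time
        cmax = std::max(cmax, time + u.q);            // delivery completion of u
    }
    return cmax;
}
```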


2 IJR and ICA Scheduling Algorithms

First, we describe the IJR scheduling algorithm. Its main idea is that it is sometimes better to place a high-priority job on the machine even if this leads to some idle time of the processor. In the IJR algorithm two tasks are considered: the highest-priority task overall and the highest-priority task among the ready ones. The paper establishes special conditions under which it is advantageous to organize unforced idle time of the processor; these conditions allow choosing between the two tasks. The algorithm IJR is a greedy algorithm, but not a list algorithm, and can be used as a basic heuristic for various scheduling models and for constructing a branch and bound method. We introduce the following notation: Sk = (i1, i2, ..., ik) is the partial schedule; time := max{τ(i) + t(i) | i ∈ Sk−1} is the time at which the processor becomes free after the execution of the already scheduled tasks. Ready tasks are stored in a priority queue Q1, where the priority of a job is its delivery time.

2.1 Algorithm IJR

1. Sort all tasks in non-descending order of release times: r(j1) ≤ r(j2) ≤ ... ≤ r(jn). Let the list be H = (j1, j2, ..., jn).
2. Define rmin = min{r(i) | i ∈ U}.
3. Define qmin = min{q(i) | i ∈ U}.
4. Define two lower bounds of the objective function: LB1 = rmin + Σ_{i=1}^{n} t(i) + qmin and LB2 = max{r(i) + t(i) + q(i) | i ∈ U}.
5. Define the lower bound of the objective function LB = max{LB1, LB2}.
6. Set time := rmin, Q1 := ∅, l := 1, S0 := ∅.
7. For k = 1 to n do
   a. Add the ready tasks ji with r(ji) ≤ time to the queue Q1, beginning with jl.
   b. Let l be the number of the first job in the list H for which r(jl) > time.
   c. If there is no ready task and Q1 = ∅, then time := min{r(i) | i ∉ Sk−1} and go to step (a).
   d. Select the ready task u ∈ Q1 with the maximum delivery time q(u) = max{q(i) | i ∈ Q1}.
   e. Set rup := time + t(u).
   f. While there are jobs jl such that time < r(jl) < rup do
      i. If q(jl) ≤ LB/2, then jl is added to the queue Q1; set l := l + 1 and go to step (f).


      ii. If q(jl) > LB/2, define a possible idle time of the processor idle(jl) = r(jl) − time.
      iii. If q(jl) − q(u) ≥ idle(jl), then place the task jl on the processor: set τ(jl) := r(jl), time := τ(jl) + t(jl), Sk := Sk−1 ∪ {jl}, l := l + 1, and go to step 7.
      iv. Otherwise add the task jl to the queue Q1, set l := l + 1, and go to step (f).
   g. Place the job u on the processor: set τ(u) := time, time := τ(u) + t(u), Sk := Sk−1 ∪ {u}. Delete u from the queue Q1 and go to step 7.
8. The schedule Sn is constructed. Find the value of the objective function Cmax(Sn) = max{τ(i) + t(i) + q(i) | i ∈ U}.
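The listing above maps almost directly onto a heap-based implementation. The following Python sketch is our illustration of Algorithm IJR, not code from the paper; jobs are assumed to be given as (r, t, q) triples, and the name ijr_schedule is ours.

```python
import heapq

def ijr_schedule(jobs):
    """Sketch of Algorithm IJR. jobs: list of (r, t, q) triples.
    Returns (order, Cmax)."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])      # step 1
    r_min = min(r for r, _, _ in jobs)
    q_min = min(q for _, _, q in jobs)
    lb1 = r_min + sum(t for _, t, _ in jobs) + q_min             # step 4
    lb2 = max(r + t + q for r, t, q in jobs)
    lb = max(lb1, lb2)                                           # step 5

    time, l, ready, order, start = r_min, 0, [], [], {}          # step 6
    while len(order) < n:                                        # step 7
        while l < n and jobs[by_release[l]][0] <= time:          # (a), (b)
            j = by_release[l]
            heapq.heappush(ready, (-jobs[j][2], j))              # max-q queue
            l += 1
        if not ready:                                            # (c)
            time = jobs[by_release[l]][0]
            continue
        _, u = heapq.heappop(ready)                              # (d)
        r_up = time + jobs[u][1]                                 # (e)
        while l < n and jobs[by_release[l]][0] < r_up:           # (f)
            j = by_release[l]
            idle = jobs[j][0] - time
            if jobs[j][2] > lb / 2 and jobs[j][2] - jobs[u][2] >= idle:
                # (f.iii): accept the idle time and run the waiting job instead
                heapq.heappush(ready, (-jobs[u][2], u))          # u stays available
                u, time = j, jobs[j][0]
                l += 1
                break
            heapq.heappush(ready, (-jobs[j][2], j))              # (f.i), (f.iv)
            l += 1
        start[u] = time                                          # (g) / (f.iii)
        time = start[u] + jobs[u][1]
        order.append(u)

    cmax = max(start[j] + jobs[j][1] + jobs[j][2] for j in range(n))  # step 8
    return order, cmax
```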

3 Combined Scheduling Algorithm ICA

1. Construct the schedule SJR by the Schrage algorithm and denote its makespan by Cmax(SJR).
2. Construct the schedule S by the IJR algorithm and denote its makespan by Cmax(S).
3. Choose the schedule SA with the smaller value of the objective function: Cmax(SA) = min{Cmax(S), Cmax(SJR)}.

The JR algorithm does not allow the processor to be idle if there is a ready job, even if the priority of that job is low. The IJR algorithm allows the processor to be idle while waiting for a higher-priority job; additional conditions are checked (step f of the IJR algorithm) under which it is profitable to wait for the higher-priority job while keeping the idle time small. The ICA algorithm selects the better of the two solutions.
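For completeness, a matching sketch of the Schrage (extended Jackson rule) schedule and of the ICA combination could look as follows. It reuses ijr_schedule from the sketch above and, like it, is our hedged illustration rather than the paper's code.

```python
import heapq

def schrage_schedule(jobs):
    """Extended Jackson rule: whenever the machine becomes free,
    start the ready job with the largest delivery time."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])
    time = jobs[by_release[0]][0]
    l, ready, start = 0, [], {}
    while len(start) < n:
        while l < n and jobs[by_release[l]][0] <= time:
            j = by_release[l]
            heapq.heappush(ready, (-jobs[j][2], j))
            l += 1
        if not ready:                       # no ready job: jump to next release
            time = jobs[by_release[l]][0]
            continue
        _, u = heapq.heappop(ready)
        start[u] = time
        time += jobs[u][1]
    return max(start[j] + jobs[j][1] + jobs[j][2] for j in range(n))

def ica_makespan(jobs):
    """ICA: build both schedules and keep the better makespan."""
    return min(schrage_schedule(jobs), ijr_schedule(jobs)[1])
```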

4 Properties of the Schedule Constructed by the Algorithm ICA

The properties of the schedule created by the combined algorithm ICA proposed in Sect. 3 are formulated and proved in the following lemmas. Let the IJR algorithm construct a schedule SI with objective function value Cmax(SI) and critical sequence J(SI) = (ja, ja+1, ..., jc), and let the schedule SJR be constructed by the JR algorithm, with critical sequence J(SJR) = (za, za+1, ..., zc) and objective function value Cmax(SJR). We first recall some definitions that were introduced in Potts (1980) for schedules constructed according to Jackson's rule and that are important characteristics for IIT schedules as well.


Definition 1 (Potts 1980) A critical job is a job jc such that Cmax(SI) = τ(jc) + t(jc) + q(jc). If there are several such jobs, we choose the earliest one in the schedule S.

Definition 2 (Potts 1980) A critical sequence in a schedule S is a sequence of jobs J(S) = (ja, ja+1, ..., jc) such that jc is the critical job and there is no processor idle time in the schedule from the start of the job ja until the job jc ends. The job ja is either the first job in the schedule, or the processor is idle before it.

Definition 3 (Potts 1980) A job ju in a critical sequence is called an interference job if q(ju) < q(jc) and q(ji) ≥ q(jc) for all i > u.

Proposition 1 (Potts 1980) If for all jobs of the critical sequence it is true that r(ji) ≥ r(ja) and q(ji) ≥ q(jc), then the schedule is optimal.

Let us introduce a definition of a delayed job, which can be encountered in IIT schedules.

Definition 4 A job jv from a critical sequence J(S) = (ja, ja+1, ..., jc) is called a delayed job if r(jv) < r(ja). An interference job can be a delayed job.

Let us formulate and prove two properties of the schedule, similar to the properties of schedules built according to Jackson's rule (Potts 1980).

Lemma 1 If there is an interference job ju in the critical sequence J(SI), then Cmax(SI) − Cmax(Sopt) ≤ t(ju) − idle, where idle > 0 is a possible idle time of the processor.

Proof Let T(J(SI)) = Σ_{i=a}^{c} t(ji) be the total execution time of the jobs from the critical sequence J(SI). Then Cmax(SI) = r(ja) + T(J(SI)) + q(jc). If the critical sequence contains an interference job ju, then this sequence can be represented as J(SI) = (S1, ju, S2), where S1 is the sequence of jobs before ju and S2 is the sequence of jobs after ju. Then at the time t1 = r(ja) + T(S1) there are no ready jobs from the sequence S2. Let rmin(S2) = min{r(ji) | ji ∈ S2}; then the inequality t1 < rmin(S2) is satisfied. For an optimal schedule it is true that Cmax(Sopt) ≥ rmin(S2) + T(S2) + q(jc), hence Cmax(SI) − Cmax(Sopt) ≤ r(ja) + T(J(SI)) + q(jc) −


− rmin(S2) − T(S2) − q(jc) = r(ja) + T(S1) + t(ju) − rmin(S2) = t(ju) − idle, where idle = rmin(S2) − t1 > 0 is the minimum processor idle time if, instead of the interference job ju, we schedule some job from the sequence S2. ⟁

This lemma refines the property of the Jackson schedule proved in Potts (1980) and shows that, by removing the interference job, we can reduce the length of the schedule by no more than t(ju) − idle.

Lemma 2 If there are no delayed jobs in the critical sequence, then Cmax(SI) − Cmax(Sopt) ≤ q(jc).

The lemma was formulated and proved by Potts (1980); for the considered algorithm it is only necessary to add the clarification about the absence of delayed jobs.

Lemma 3 Let there be an interference job ju in the critical sequence J(SI) = (S1, ju, S2). If the job ju is executed after the sequence S2 in an optimal schedule, then Cmax(SI)/Cmax(Sopt) ≤ 3/2.

Proof If t(ju) ≤ Cmax(Sopt)/2, then Lemma 3 is true by Lemma 1. Let t(ju) > Cmax(Sopt)/2 and let rmin(S2) = min{r(ji) | ji ∈ S2}. If the interference job ju is executed after all jobs of the sequence S2 in an optimal schedule, then Cmax(Sopt) ≥ rmin(S2) + T(S2) + t(ju) + q(ju). Then Cmax(SI) − Cmax(Sopt) ≤ r(ja) + T(S1) + t(ju) + T(S2) + q(jc) − rmin(S2) − T(S2) − t(ju) − q(ju) = r(ja) + T(S1) − rmin(S2) + q(jc) − q(ju) = −idle + q(jc) − q(ju). Choose a job v ∈ S2 such that r(v) = rmin(S2). Then idle = idle(v) = rmin(S2) − r(ja) − T(S1) > q(v) − q(ju) ≥ q(jc) − q(ju), or q(jc) ≤ q(v) < LB/2. Then Cmax(SI) − Cmax(Sopt) < LB/2. ⟁


Lemma 4 Let the schedule SJR be constructed by the JR algorithm and let there be an interference job ju in the critical sequence J(SJR) = (F1, ju, F2). If the interference job ju is executed before all jobs of the sequence F2 in an optimal schedule, then Cmax(SJR)/Cmax(Sopt) ≤ 3/2.

Proof If t(ju) ≤ Cmax(Sopt)/2, then Lemma 4 is true by Lemma 1. If the delivery time q(zc) of the critical job zc does not exceed Cmax(Sopt)/2, then Lemma 4 is true by Lemma 2. Let q(zc) > Cmax(Sopt)/2. If the job ju is executed before all jobs of the sequence F2 in an optimal schedule Sopt, then Cmax(Sopt) ≥ r(za) + t(ju) + T(F2) + q(zc). Then Cmax(SJR) − Cmax(Sopt) ≤ T(F1) < LB/2. ⟁



Theorem 1 The algorithm ICA constructs a schedule SA for which Cmax(SA)/Cmax(Sopt) ≤ 3/2. The computational complexity of the ICA algorithm is O(n log n).

Proof Let the schedule SI be constructed by the IJR algorithm, with objective function value Cmax(SI) and critical sequence J(SI) = (ja, ja+1, ..., jc), and let the schedule SJR be constructed by the JR algorithm, with critical sequence J(SJR) = (za, za+1, ..., zc). We consider all possible cases.

Case 1. There are no interference and no delayed jobs in the critical sequence J(SI), or there is no interference job in J(SJR). In this case, the corresponding algorithm has constructed an optimal schedule.

Case 2. There are interference jobs in both critical sequences. It is required to consider the case in which the two interference jobs are the same large job, with t(ju) > Cmax(Sopt)/2. The length of the schedule SJR is Cmax(SJR) = r(za) + T(J(SJR)) + q(zc). If the delivery time of the critical job zc does not exceed Cmax(Sopt)/2, then the theorem is true by Lemma 2. Let q(zc) > Cmax(Sopt)/2. By virtue of Lemmas 3 and 4, it suffices to consider the case in which, in the optimal schedule, the job ju is carried out after all jobs of the sequence F2 and before the jobs of the sequence S2. This can only happen if rmin(S2) > r(zc) + t(zc). Let us show that if the job ju is performed in the schedule SI before all jobs of the sequence F2, then the accuracy estimate of the IJR algorithm is 3/2. Since q(j) > Cmax(Sopt)/2 for all j ∈ F2, we have r(j) < Cmax(Sopt)/2 for all j ∈ F2, hence all jobs from the sequence F2 compete with ju. In the schedule SI, the job ju can be performed before all jobs of the sequence F2 only if the potential idle time idle(u*) before the beginning of a job u* ∈ F2 is large and the inequality q(u*) − q(ju) < idle(u*) holds.


Then all jobs from the sequence F2 must be included in the sequence S2 of the critical sequence J(SI). Choose a job v ∈ S2 such that r(v) = rmin(S2) = rmin(F2). Then idle = idle(v) = rmin(F2) − r(ja) − T(S1) > q(v) − q(ju), and by Lemma 1, Cmax(SI) − Cmax(Sopt) ≤ t(ju) − idle. Let us show that in this case Cmax(SI) − Cmax(Sopt) ≤ Cmax(Sopt)/2. Suppose, on the contrary, that Cmax(Sopt)/2 < Cmax(SI) − Cmax(Sopt) ≤ t(ju) − idle; then Cmax(Sopt)/2 < t(ju) − idle. Hence q(v) − q(ju) + Cmax(Sopt)/2 < t(ju) and Cmax(Sopt) < t(ju) + q(ju) < LB. We obtain a contradiction; therefore, if the IJR algorithm has placed the job ju before F2, then Cmax(SI)/Cmax(Sopt) ≤ 3/2.

If the IJR algorithm puts the job ju after the sequence F2, then two cases are possible.

First case: there is no idle time in the schedule SI after the job zc and until the critical job jc; then in the optimal schedule the job ju is performed between ja and jc. If, in the optimal schedule, the job ju is performed between jobs ja and jc, then Cmax(Sopt) ≥ r(ja) + t(ju) + T(S2) + q(jc). Therefore Cmax(SI) − Cmax(Sopt) ≤ r(ja) + T(S1) + t(ju) + T(S2) + q(jc) − r(ja) − t(ju) − T(S2) − q(jc) = T(S1) ≤ LB/2. In this case, the IJR algorithm constructs a 3/2-approximation schedule.

Second case: r(ja) > max{τ(zc) + t(zc), rmin(F2) + T(F2)}. According to the properties of the IJR algorithm, the processor is idle until the time r(ja) and q(ja) > LB/2. Then in the optimal schedule the job ju can be performed between jobs ja and jc or before the job ja. It remains to consider the case when, in the optimal schedule, ju is executed after zc and before ja. Then Cmax(Sopt) ≥ rmin(F2) + T(F2) + t(ju) + t(ja) + q(ja). Hence Cmax(SJR) − Cmax(Sopt) ≤ r(za) + T(J(SJR)) + q(zc) −


− rmin(F2) − T(F2) − t(ju) − t(ja) − q(ja) = r(za) + T(F1) + q(zc) − rmin(F2) − t(ja) − q(ja) < LB/2. In this case, the Schrage algorithm constructs a 3/2-approximation schedule.

Case 3. There is an interference job in the critical sequence J(SJR) = (za, za+1, ..., zc) and there are some delayed jobs in J(SI) = (ja, ja+1, ..., jc). If there is no interference job in the critical sequence J(SI), then q(ji) ≥ q(jc) for all jobs ji ∈ J(SI), but the critical sequence contains jobs that can be started before the job ja. Let r(J(SI)) = min{r(ji) | ji ∈ J(SI)}. Then Cmax(Sopt) ≥ r(J(SI)) + T(J(SI)) + q(jc). Hence Cmax(SI) − Cmax(Sopt) ≤ r(ja) + T(J(SI)) + q(jc) − r(J(SI)) − T(J(SI)) − q(jc) = r(ja) − r(J(SI)) < LB/2, since, according to the properties of the ICA algorithm, r(ja) < LB/2 holds. We have proven that the worst-case performance ratio of the ICA algorithm is equal to 3/2.

Consider now the complexity of the ICA scheduling algorithm. The algorithm constructs two permutations: one by the Schrage algorithm, whose computational complexity is O(n log n), and one by the IJR algorithm. Let us show that the computational complexity of the IJR algorithm is also O(n log n). First, we sort all jobs in non-descending order of their release times, r(j1) ≤ r(j2) ≤ ... ≤ r(jn); this step requires O(n log n) actions. The main operation is to select a job from the set of ready jobs. We store the ready jobs in a priority queue Q1, which can be organized as a binary heap, where the priority of job j is its delivery time q(j). At step (a) of the algorithm we add the new ready tasks with r(ji) ≤ time to the queue Q1; adding each job requires O(log n) actions. The job u with the highest priority is selected in O(1) actions. At step (f) we add to the queue Q1 the new tasks ji for which r(ji) < time + t(u). If there is a task jl for which all conditions are met, then we place it on the processor; otherwise we look through all candidates, placing them in the queue Q1, and place the task u on the processor at step (g).

Table 1  Release, processing and delivery times of the jobs

Job    r(i)        t(i)        q(i)
x      ε           ε           M − 2ε
a      M/2 − ε     ε           M/2
u      0           M/2 + ε     ε... (see below)

Job    r(i)        t(i)        q(i)
x      ε           ε           M − 2ε
a      M/2 − ε     ε           M/2
u      0           M/2 + ε     0
c      M/2 + ε     ε           M/2 − 2ε

Table 2  IJR schedule (segment durations)

Segment:   idle   x   idle        a   u         c
Duration:  ε      ε   M/2 − 3ε    ε   M/2 + ε   ε

Table 3  JR schedule (segment durations)

Segment:   u         x   a   c
Duration:  M/2 + ε   ε   ε   ε

For scheduling, the task jl or u is selected, which requires O(1) actions, and the selected task is deleted from the queue Q1, which requires O(log n) actions. Each job can be added to the queue at most once, so maintaining the binary heap requires O(n log n) actions in total. The total computational complexity of the IJR algorithm is therefore O(n log n), and the computational complexity of the ICA algorithm is O(n log n). ⟁

Lemma 5 There is an example for which the ratio Cmax(SA)/Cmax(Sopt) tends to 3/2.

Proof Consider a system of four tasks U = {x, a, u, c}. The data for the system of tasks are given in Table 1, where M is a constant. The lower bound for the objective function is equal to LB = M. The IJR algorithm constructs the schedule S = (x, a, u, c) (see Table 2). The processor is idle M/2 − ε time units before starting the job a; c is the critical job, the execution of job c ends at M + 2ε, and the delivery of job c ends at 3/2 M. The objective function is equal to Cmax(S) = M/2 − ε + ε + M/2 + ε + ε + M/2 − 2ε = 3/2 M. The JR algorithm constructs the schedule SJR = (u, x, a, c) (see Table 3). The critical job is x, which ends at M/2 + 2ε, and the delivery of job x ends at 3/2 M. The objective function is equal to Cmax(SJR) = M/2 + 2ε + M − 2ε = 3/2 M. The optimal schedule is Sopt = (x, u, a, c) (see Table 4), for which the value of the objective function is equal to Cmax(Sopt) = 2ε + M/2 + ε + M/2 − 2ε = M + ε. If ε tends to zero, then the ratio Cmax(SA)/Cmax(Sopt) tends to 3/2. ⟁


Table 4  Optimal schedule (segment durations)

Segment:   idle   x   u         a   c
Duration:  ε      ε   M/2 + ε   ε   ε

5 Combined Scheduling for the Forward and the Inverse Problem FIICA

In this section we consider, along with the direct problem, the inverse problem, in which the release times and the delivery times of the jobs are exchanged. It is known that a solution of the direct problem S = (i1, i2, ..., in) is optimal if and only if the permutation Sinv = (in, in−1, ..., i1) is an optimal solution of the inverse problem. Hall and Shmoys (1992) developed a scheduling method in which the Potts algorithm is applied to the direct and the inverse problem; the case in which there are two big jobs is considered separately. In total, the algorithm builds 4n schedules and chooses the best one. The computational complexity of the algorithm is O(n² log n), and the worst-case performance ratio is equal to 4/3. We run the algorithm ICA for the forward and the inverse problem. In the inverse problem we exchange the roles of the release time r_j and the delivery time q_j for each job. This algorithm constructs 4 or 8 schedules and returns the best one.

5.1 Algorithm FIICA

1. If there are two jobs u1, u2 such that t(u1), t(u2) > LB/3, then we consider two cases.
2. Case 1. Set u1 ≺ u2 and change the release time of job u2 and the delivery time of job u1: set r(u2) := max{r(u2), r(u1) + t(u1)} and q(u1) := max{q(u1), q(u2) + t(u2)}. Call algorithm ICA for the forward and the inverse problem.
   Case 2. Set u2 ≺ u1 and change the release time of job u1 and the delivery time of job u2: set r(u1) := max{r(u1), r(u2) + t(u2)} and q(u2) := max{q(u2), q(u1) + t(u1)}. Call algorithm ICA for the forward and the inverse problem.
   In this case the algorithm FIICA builds 8 permutations and selects the best one.
3. If there are no two jobs u1, u2 such that t(u1), t(u2) > LB/3, then call algorithm ICA for the forward and the inverse problem. In this case the algorithm FIICA builds four permutations and selects the best one.

The guaranteed accuracy estimate of this algorithm is unknown.
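The symmetric part of this scheme is easy to express in code. The sketch below (our naming, reusing ica_makespan from the earlier sketch) shows only step 3 of FIICA: ICA is applied to the original instance and to the inverse instance obtained by exchanging release and delivery times; the branching on two long jobs from steps 1 and 2 is omitted.

```python
def fiica_basic(jobs):
    """Run ICA on the forward problem and on the inverse problem
    (release and delivery times exchanged) and keep the best makespan.
    A permutation for the inverse instance corresponds to the reversed
    permutation on the original instance, so the makespans are comparable."""
    inverse = [(q, t, r) for (r, t, q) in jobs]
    return min(ica_makespan(jobs), ica_makespan(inverse))
```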


6 Computational Experiment

To find out the practical efficiency of the algorithm, a computational experiment was carried out. The goals of the computational experiment were:

1. Checking the performance and efficiency of the algorithm and experimentally evaluating its accuracy using random test examples.
2. Comparing the accuracy of the IJR algorithm with the accuracy of the JR Schrage algorithm, and comparing the accuracy of the combined ICA algorithm with the accuracy of the NS algorithm of Nowicki and Smutnicki.

The initial data were generated by the method described by Carlier (1982); the same method of generating test examples was used by Nowicki and Smutnicki when they compared their proposed algorithm with the algorithms of Hall and Shmoys and of Schrage. For each task i ∈ 1 : n, three integer values were chosen with uniform distribution: q(i) between 1 and qmax, r(i) between 1 and rmax, and t(i) between 1 and tmax. The following values were set: tmax = 50, rmax = qmax = nK. The values of K from 10 to 25 were chosen, which were noted by Carlier as the most difficult for the problem under consideration. For each value of n and K, we considered 100 instances. Three groups of examples were considered; the processing times t(j) of the jobs in each group were selected from the following intervals:

1. Type A: t(j) from [1, tmax];
2. Type B: t(j) from [1, tmax/2] for j ∈ 1 : n − 1 and t(jn) from [n·tmax/8, 3n·tmax/8];
3. Type C: t(j) from [1, tmax/3] for j ∈ 1 : n − 2 and t(jn−1), t(jn) from [n·tmax/12, 3n·tmax/12].

Type B groups contain instances with one long job and Type C groups contain instances with two long jobs. Each group contains 100 instances. The value of the objective function Cmax was compared with the optimal value of the objective function Copt, which was obtained by the branch and bound method (Grigoreva 2019). In all tables, n is the number of tasks in the instance. In Table 5, the number of jobs in the instance changes from 50 to 5000, and for all tests the value K = 20 is chosen. Table 5 shows the results of the approximation algorithms IJR, JR and NS for the single-machine scheduling problem for groups of Type A. Columns 2, 3 and 4 of Table 5 show the number of tests (in percent) for which optimal solutions were generated by the algorithms IJR, JR and NS, respectively. The next three columns contain the average ratio R = Cmax/Copt for the IJR, JR and NS algorithms, respectively, over all tests. It can be seen from Table 5 that the IJR algorithm generates more optimal solutions than the NS and JR algorithms; the JR algorithm very rarely obtains an optimal solution. The average relative error of the solution is small for all algorithms and decreases with increasing n. The average relative error is at most 0.03% for the IJR algorithm, at most 0.97% for the JR algorithm and 0.2% for the NS algorithm.
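The Carlier-style instance generation just described can be reproduced in a few lines of code. The sketch below is ours (not from the paper) and uses the paper's parameter names tmax, rmax, qmax and K.

```python
import random

def generate_instance(n, K, group="A", t_max=50):
    """Random instance in the style of Carlier (1982):
    r(i), q(i) uniform in [1, nK], t(i) uniform in [1, t_max],
    with one (Type B) or two (Type C) long jobs added."""
    r_max = q_max = n * K
    r = [random.randint(1, r_max) for _ in range(n)]
    q = [random.randint(1, q_max) for _ in range(n)]
    t = [random.randint(1, t_max) for _ in range(n)]
    if group == "B":    # one long job
        t[:-1] = [random.randint(1, t_max // 2) for _ in range(n - 1)]
        t[-1] = random.randint(n * t_max // 8, 3 * n * t_max // 8)
    elif group == "C":  # two long jobs
        t[:-2] = [random.randint(1, t_max // 3) for _ in range(n - 2)]
        t[-2] = random.randint(n * t_max // 12, 3 * n * t_max // 12)
        t[-1] = random.randint(n * t_max // 12, 3 * n * t_max // 12)
    return list(zip(r, t, q))
```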


Table 5  Type A. Performance of the algorithms according to the variation of n (K = 20)

n      N_IJR   N_JR   N_NS   R_IJR     R_JR      R_NS
50     81      5      74     1.00030   1.00970   1.00250
100    97      6      94     1.00006   1.00910   1.00010
300    96      0      88     1.00005   1.0015    1.00009
500    80      0      56     1.00004   1.00090   1.00030
1000   96      0      93     1.00000   1.00091   1.00002
2000   92      1      88     1.00001   1.00020   1.00002
5000   96      2      91     1.00000   1.00010   1.00001

Table 6  Type A. Performance of the algorithms according to the variation of K

n     K    N_IJR   N_JR   N_NS   R_IJR     R_JR    R_NS
100   10   98      2      88     1.00001   1.003   1.0005
100   14   95      0      91     1.00002   1.004   1.0001
100   15   92      1      89     1.00007   1.002   1.0008
100   16   93      2      91     1.00006   1.005   1.0001
100   18   94      0      87     1.00002   1.003   1.0006
100   20   97      4      91     1.00001   1.009   1.0001
100   22   93      2      88     1.00005   1.003   1.0005

Table 6 shows the results of experiments in which we change the constant K, the value of which is given in the second column of the table; the other columns of Table 6 are similar to those of the previous table. Table 6 shows that changing the constant K from 10 to 22 does not significantly affect the results of the algorithms for instances of Type A. The theoretical analysis of the algorithms shows that the most difficult examples occur when there are one or two long tasks. Such tests were generated in the groups of Type B and Type C. Tables 7 and 8 show the results of the comparison of the algorithms for tests of Type B. For these groups of tests we considered the combined algorithm ICA, in which the best of the two solutions obtained by the JR and IJR algorithms is chosen. The numbers of optimal solutions in percent are given in Table 7; N_ICA is the number of optimal solutions in percent for the combined algorithm, and it is given in the last column. Table 7 shows that for tests of Type B the number of optimal solutions for the IJR and NS algorithms decreases, while for the JR algorithm it increases. For tests with one long job (Table 7) the number of optimal solutions for the ICA algorithm is greater than the number of optimal solutions for the NS algorithm. The values of the average relative error of the algorithms for tests of Type B are given in Table 8. The average relative error of the combined algorithm ICA, R_ICA = Cmax(SA)/Copt, is given in the last column.

Table 7  Type B. The number of optimal solutions

n     K    N_IJR   N_JR   N_NS   N_ICA
100   10   51      23     25     58
100   14   29      47     48     62
100   15   25      48     49     59
100   16   53      21     34     69
100   18   46      46     49     71
100   20   24      15     16     33
100   22   44      29     33     57

Table 8  Type B. The average relative error of the algorithms

n     K    R_IJR   R_JR   R_NS   R_ICA
100   10   1.02    1.05   1.04   1.005
100   14   1.06    1.03   1.03   1.004
100   15   1.05    1.04   1.04   1.007
100   16   1.01    1.04   1.01   1.004
100   18   1.05    1.04   1.03   1.005
100   20   1.05    1.06   1.03   1.007
100   22   1.02    1.03   1.03   1.006

Table 9  Type C. The number of optimal solutions

n     K    N_IJR   N_JR   N_NS   N_ICA
100   10   48      35     42     59
100   14   55      29     29     68
100   15   25      43     44     57
100   18   40      40     41     72
100   20   42      25     26     58
100   22   24      14     16     36

The relative error of the solution increases for all algorithms JR, IJR and NS; it is from 1 to 6% on average. The combined ICA algorithm has a significant advantage: it combines the strengths of the Schrage algorithm, which does not allow unforced idle time, and of the IJR algorithm, which allows it. The average relative error of the solution for the ICA algorithm is from 1.004 to 1.007, while the relative error of the NS algorithm is from 1.01 to 1.04. Tables 9 and 10 show the results of the comparison of the algorithms for tests of Type C. The combined ICA algorithm significantly diminishes the relative error of the solution: for the combined algorithm it ranges from 0.4 to 0.8%.


Table 10  Type C. The average relative error of the algorithms

n     K    R_IJR   R_JR   R_NS   R_ICA
100   10   1.04    1.05   1.04   1.006
100   14   1.02    1.04   1.04   1.005
100   15   1.06    1.03   1.02   1.006
100   16   1.05    1.04   1.04   1.005
100   18   1.06    1.05   1.05   1.004
100   20   1.08    1.01   1.01   1.005
100   22   1.06    1.07   1.07   1.008

The worst solutions for the JR algorithm had a relative error of 23%, for the IJR algorithm 19%, and for the combined ICA algorithm only 7%. No test was encountered during the experiments for which both the JR and the IJR algorithm constructed a solution with a large relative error. The number of optimal solutions also increases for the combined ICA algorithm. The combined algorithm generates two permutations, just like the NS algorithm, but its average relative error is significantly smaller and the number of optimal solutions obtained is greater.

7 Conclusion

The paper considers the problem of scheduling jobs with release and delivery times on a single processor. The goal is to minimize the delivery time of the last job, Cmax. The paper proposes the IJR algorithm with computational complexity O(n log n), in which the priority of a job is taken into account first and the processor can be idle when certain conditions are met. We propose the combined algorithm ICA that generates two schedules (one by the extended Jackson rule algorithm, the other by the algorithm IJR) and selects the best solution. We prove that the worst-case performance ratio of the algorithm ICA is equal to 3/2 and that the bound of 3/2 is tight. The computational experiment has confirmed the practical efficiency of the ICA algorithm.

References

Artigues, C., Feillet, D.: A branch and bound method for the job-shop problem with sequence-dependent setup times. Ann. Oper. Res. 159, 135–159 (2008)
Baker, K.R.: Introduction to Sequencing and Scheduling. Wiley, New York (1974)
Carlier, J.: The one machine sequencing problem. Eur. J. Oper. Res. 11, 42–47 (1982)
Chandra, C., Liu, Z., He, J., Ruohonen, T.: A binary branch and bound algorithm to minimize maximum scheduling cost. Omega 42, 9–15 (2014)


Grabowski, J., Nowicki, E., Zdrzalka, S.: A block approach for single-machine scheduling with release dates and due dates. Eur. J. Oper. Res. 26, 278–285 (1986)
Graham, R.L., Lawler, E.L., Rinnooy Kan, A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Disc. Math. 5(10), 287–326 (1979)
Grigoreva, N.S.: Branch and bound algorithm for single machine scheduling problem with release and delivery times. In: 2018 IX International Conference on Optimization and Applications (OPTIMA 2018), Supplementary Volume. https://doi.org/10.12783/Dtcse/optim2018/27919
Grigoreva, N.S.: Single machine scheduling with precedence constraints, release and delivery times. In: Proceedings of 40th Anniversary International Conference on Information Systems Architecture and Technology, ISAT 2019, Part III (Advances in Intelligent Systems and Computing, vol. 1052), pp. 188–198
Grigoreva, N.: Single machine inserted idle time scheduling with release times and due dates. In: Proceedings of DOOR 2016, Vladivostok, Russia, Sep. 19–23, 2016. CEUR-WS, vol. 1623, pp. 336–343 (2016)
Hall, L.A., Shmoys, D.B.: Jackson's rule for single-machine scheduling: making a good heuristic better. Math. Oper. Res. 17(1), 22–35 (1992)
Kanet, J., Sridharan, V.: Scheduling with inserted idle time: problem taxonomy and literature review. Oper. Res. 48(1), 99–110 (2000)
Kise, H., Ibaraki, T., Mine, H.: Performance analysis of six approximation algorithms for the one-machine maximum lateness scheduling problem with ready times. J. Oper. Res. Soc. Jpn. 22, 205–224 (1979)
Lenstra, J.K., Rinnooy Kan, A.H.G., Brucker, P.: Complexity of machine scheduling problems. Ann. Disc. Math. 1, 343–362 (1977)
Liu, Z.: Single machine scheduling to minimize maximum lateness subject to release dates and precedence constraints. Comput. Oper. Res. 37, 1537–1543 (2010)
McMahon, G.B., Florian, M.: On scheduling with ready times and due dates to minimize maximum lateness. Oper. Res. 23(3), 475–482 (1975)
Nowicki, E., Smutnicki, C.: An approximation algorithm for a single-machine scheduling problem with release times and delivery times. Discret. Appl. Math. 48, 69–79 (1994)
Pan, Y., Shi, L.: Branch and bound algorithm for solving hard instances of the one-machine sequencing problem. Eur. J. Oper. Res. 168, 1030–1039 (2006)
Potts, C.N.: Analysis of a heuristic for one machine sequencing with release dates and delivery times. Oper. Res. Int. J. 28(6), 445–462 (1980)
Schrage, L.: Optimal Solutions to Resource Constrained Network Scheduling Problems (unpublished manuscript) (1971)
Sourirajan, K., Uzsoy, R.: Hybrid decomposition heuristics for solving large-scale scheduling problems in semiconductor wafer fabrication. J. Sched. 10, 41–65 (2007)

Key Performance Indicators to Improve e-Mail Service Quality Through ITIL Framework

Leoneed Kirilov and Yasen Mitev

Abstract The e-Mail service plays a significant part in corporate collaboration. This is due to its natural benefits such as unification, traceability and ease of use. To ensure that such a fundamental service is functioning and being maintained properly, appropriate methods for measuring its efficiency and reliability have to be applied. In this paper we propose a comprehensive set of Key Performance Indicators (KPIs) for assessing the quality of the e-Mail service. The KPIs can be used by the IT management staff for measuring the operational performance of the service in a specific organization. A group decision support approach was applied for evaluating the relevance of the KPIs. All these steps are done within the Information Technology Infrastructure Library (ITIL) framework.

Keywords e-Mail service · Group Decision Making · ITIL (Information Technology Infrastructure Library) · KPI (Key Performance Indicators) · Information Technology Service Management (ITSM) · IT Processes

1 Introduction

The e-Mail service is one of the most important business functions in most enterprises. It is commonly accepted as the principal communication channel in most of them, for both productivity and legal reasons. The Service Level Agreement is one of the key subjects in the Service Design volume of ITIL (Information Technology Infrastructure Library) (Hunnebeck 2011; Rubio and Camazón 2018). It is a set of processes that aims to describe the deliverables that should be achieved in order to have the service available at the expected level. The Key Performance Indicators (KPI) are parameters that quantitatively describe the SLA (Service Level Agreement). For example, for the e-Mail service the following KPIs (Talla and Valverde


2013) may be defined: 99% of all e-Mail messages to be delivered in less than a minute within the organization; the e-Mail servers to be reachable for at least 97.5% of the time; all priority 1 service requests to be resolved within 90 min; etc. In this research we cover the KPIs that fall under the Service Operations volume of ITIL (Steinberg 2011; Talla and Valverde 2013; Trinkenreich and Santos 2015). Our scenario includes the cases where the e-Mail service is already integrated and running in normal operations mode. Key performance indicators can also be used to measure the efficiency of the integration of the service, or from a financial perspective in order to assess the financial efficiency. The SLA parameters can be monitored at a periodic interval (daily, weekly, monthly), but during the last years there has been a big demand for monitoring these data live and comparing them to historical data (Kosinski et al. 2008; Talla and Valverde 2013; Trinkenreich and Santos 2015).

2 Problem Discussion

The usage of the ITIL framework for improving and optimizing the level of the email service has been proven a successful approach (Guo and Wang 2009; Spremic et al. 2008; Talla and Valverde 2013). This framework does not give exact rules itself, nor does it specify exact measurables for success. Therefore, the development and application of appropriate methods is a relevant task (Borissova 2016; Kirilov et al. 2013; Petrov 2021; Popchev et al. 2021; Rubio and Arcilla 2020; Rubio and L. 2021; Tsenov 2015; Weichbroth 2018). The case study (Valverde et al. 2014) examines how the service is perceived before and after the ITIL framework implementation, based on simple KPIs defined in Eikebrokk and Iden (2017), and in that way shows the benefits and difficulties of implementing ITIL. It has always been a challenge to compose a service level agreement (SLA) that corresponds to the expectations of the customer. A key part of this SLA are the mentioned KPIs, which describe quantitatively what the customer considers good service and outlines in his requirements. How customer satisfaction is evaluated is described in Xiaozhong et al. (2015), where an IT service level evaluation system based on ITIL is constructed. Figure 1 shows that the KPIs have a key effect on the customer's perception of the quality of the IT service. The quality of the ITIL service depends both on the customer perceptions and on the KPIs from the ITIL-based service evaluation framework. How the efficiency of the ITIL implementation can be evaluated can also be interpreted from Fig. 1: it depends, on one side, on the feedback from the customer, received on the basis of a predefined evaluation system, and, on the other side, on the results from ITIL measured by KPIs that describe the performance of the IT environment and the corresponding processes. Choosing the proper KPIs meets the following two challenges:


Fig. 1 Satisfaction evaluation for the IT Service (Xiaozhong et al. 2015)

• Usage of the proper set of KPIs: since there are collaterals suggesting very large lists of KPIs that can describe the properties of the service, it is a responsible task to choose the ones that represent the customer's expectations and priorities. It is a common issue to choose irrelevant KPI metrics that are then monitored. In that scenario the companies suffer from low customer satisfaction despite positive and optimistic indicator values (for example, service uptime above the expected 98% of the month). This results in spending company resources on getting better at tasks that do not add significant value to the overall quality of the service, while other important activities are neglected.
• Setting up the right values for the chosen KPIs: the most frequent problem here is that, after choosing the KPIs that are going to be monitored, they are not assigned proper values. This leads to committing to objectives that cannot be met by the service supplier. Also, delivering a higher value of a service requires more effort, materials and funding, so choosing the right values has an economic dimension as well.

3 Service Dependencies and Available Approaches

The e-Mail server, and respectively the service that it provides, depends on different technologies in the IT environment (Stidley and Jagott 2010). It means that the email service can be impacted by the different technologies that interact with it. In large organizations with 500 or more users there are different engineers or even teams responsible for these adjacent technologies. In that scenario, the best practice according to ITIL is to have different KPIs defined for each technology. This is needed in order to have a clear definition of the service performance and quality in the context of each technology. It means that in our case the quality of the email service can be affected by underperformance according to a KPI of a different technology. For example, that can be a large number of network outages due to a faulty switch, or even poor facility service in the server room/data center, such as a failed rack ventilation system.


When there are such dependencies and no relation is established, there is a risk of ending up in a situation where all the defined KPIs for email service performance report a healthy condition, but complaints about underperformance are still received. To illustrate such an example, we can take the following situation from practice. Suppose we have a Microsoft Exchange server installed on a physical instance running the Windows Server operating system. Both the Windows operating system and the Exchange server have a tolerance of 1 h of unplanned downtime during the month according to the defined KPIs. A hardware failure occurs on the server; it has to be switched off and the faulty part is replaced, which takes 50 min. Until the operating system boots, an additional 8 min pass. At this point we have a fully functioning operating system, but the related roles and applications are still initializing: until the affected Exchange server role starts to deliver email messages, let us assume that an additional 12 min are necessary. It means that in total the downtime is 50 min for the hardware instance, 58 min for the platform, and 1 h and 10 min for the Exchange server. In this example the main purpose of the whole server is to deliver e-Mails: the most significant KPI, defining the uptime of the Exchange service, is not met, while the KPIs for the platform are within the healthy norms. To solve this dependency issue, we suggest the following approach to be applied as a best practice: all the KPIs of services that are parents of the email service have to have higher values. It means that if a parent service of the email one is running at the minimum allowed KPI performance, the dependent email service will still be able to fulfill the expected level of performance. An example of that approach is shown in Fig. 2. According to Fig. 2, the Exchange CAS (Client Access Server) server is a Windows server role responsible for fulfilling all the client access requests to the Exchange email server. It is critical for the email service to be running, but this role is dependent on the Windows Server platform on which it is installed. The Exchange CAS role cannot be running while the Windows platform is not initialized, so we define a higher availability KPI for the platform. On the other hand, this Windows platform can host other server applications as well. Respectively, the Windows server cannot deliver the service for which it is intended if it does not have a proper connection to the Internet and the internal network. In that case we have to ensure that the data center/server room network connection uptime is higher than the one defined for the Windows server. It is also important because in the same server room/DC (Datacenter) usually there

Fig. 2 Parent dependencies for the Exchange server email service: Datacenter network uptime 99.5% expected → Windows platform uptime 99% expected → Exchange CAS server uptime 98% expected


are different servers running, and the impact of a network outage would be much more significant for the company than an outage of the Exchange CAS server alone. Below we list and describe the technologies that underlie the email service.

Server platform. The email service is delivered by a server application, which in turn has to run on a server operating system. That can be a Windows Server or a UNIX-based platform. Our example above describes a case with Microsoft Windows Server, according to the Exchange installation requirements.

Network. Proper connectivity to the Internet and the local network is needed in order to ensure that the mail flow is delivered to the designated users. In that scope we also consider the proper network topology and DNS setup.

Backup. The backup is mainly needed for two purposes. The first one is to prevent data loss for the email server application due to software/hardware failures or misconfiguration. The second one is to prevent loss of user data, for example if there is a corrupted mailbox or the user simply deletes important email messages or calendar entries by mistake. We should clarify that in our example, apart from the standard file-level backup that is performed, the Exchange server needs an additional backup solution to perform backups and restores at the mailbox database level.

The basic solutions currently used by the companies to solve the problems formulated above are (Pollard and Cater-Steel 2009):

• Usage of the proper set of KPIs: most of the companies rely on a standard set of KPIs included in their offering plans. These sets differ between companies and correspond to their strengths and maturity. This limits the service to being relevant only for particular types of business needs. When non-standard requests come to the implementer/developer, custom KPIs need to be created to measure the bespoke service.
• Setting up the right values for the chosen KPIs: the goal is to determine thresholds for the different KPIs. They need to correspond to the understanding of acceptable quality of service by both customer and supplier. When obtaining these thresholds, a detailed assessment is made of the available support resources as well as the supported environment. It is also good practice to include an additional warning threshold which flags that high attention is needed for the indicator in order for it to continue functioning as expected. An example of KPI thresholds is shown in Table 1.

Table 1  KPIs thresholds

KPI                                                Expected (%)   Threshold (%)
Service uptime                                     99             97
Service requests fulfilled for one business day    90             50
User incidents resolved within the agreed time     95             90
Amount of wrongly executed tickets                 3              5
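The layering rule illustrated in Fig. 2 and thresholds such as those in Table 1 can be verified mechanically. The sketch below is only an illustration with hypothetical numbers: it reuses the downtime figures from the hardware-failure example above (50, 58 and 70 min) and assumes monthly downtime budgets that are not prescribed by ITIL (the 90-min hardware budget in particular is invented for the example).

```python
MIN_PER_MONTH = 30 * 24 * 60

# (layer, assumed monthly downtime budget in minutes, observed downtime in minutes)
# mirroring the worked example: 50 min hardware, 58 min platform, 70 min Exchange.
layers = [
    ("Server hardware",  90, 50),   # illustrative budget, not from the example
    ("Windows platform", 60, 58),   # 1 h tolerance from the example
    ("Exchange service", 60, 70),   # 1 h tolerance from the example
]

for name, budget, observed in layers:
    uptime = 100.0 * (MIN_PER_MONTH - observed) / MIN_PER_MONTH
    verdict = "within KPI" if observed <= budget else "KPI breached"
    print(f"{name}: {observed} min used of {budget} min budget "
          f"(uptime {uptime:.2f}%) -> {verdict}")
```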


In practice the above solutions are not applied according to the exact customer's needs. Instead, every supplier sets higher KPI values, driven by market trends and the competition to offer better solutions. This may lead to customer dissatisfaction with a service described by excellent KPI values that at the same time does not respond to specific requirements; it is a result of incompletely or improperly defined KPI values. The above situation can be mitigated by using a more systematic approach that focuses on the exact business needs. We consider that the suggested approach (see Fig. 2) is applicable as an additional step that can be performed to achieve the design of more accurate KPIs. Here are a few cases where this is applicable. One example is migrations towards the ITIL framework (Cardoso et al. 2018), where the ITIL framework and the selection of KPIs are used to support a cloud-based migration. In Othman et al. (2018) a decision support system based on the AHP (Analytic Hierarchy Process) methodology is described; the methodology is used to measure the biggest challenges in the migration to ITIL. A successful approach to implementing KPI measurements is presented in Proctor and Anderson (2017); it emphasizes the key role of the higher management, which needs to support the process to achieve success. For a successful ITIL-based IT service transformation, the organization needs to adapt its culture in order to adopt the proposed approach (Eikebrokk and Iden 2016). That is confirmed as a success factor for better results in the conducted quality management (Eikebrokk and Iden 2017).

4 Key Performance Indicators Design

The design of KPIs is not a one-time activity. There are specific occasions when KPIs need to be implemented, applied, followed up and updated. The IT service lifecycle describes the stages where there is interaction with the KPIs. Some key principles of ITIL when executing migrations with the support of the Service Design and Service Transition chapters can be found in Gërvalla et al. (2018), Kubiak and Rass (2018), Wegmann et al. (2008).

4.1 Timeline of ITIL Implementation

As per the ITIL structure there are four main stages of the service integration. They represent universal processes that can be quickly tailored and implemented in any industry (Marrone et al. 2014; Marrone and Hammerle 2017). These stages are listed below.

Service strategy. This is a group of activities that need to be done before the integration of the service. Their goal is to define the purpose of the service as well as its short-term and long-term


targets (Cannon 2011). The service strategy is not a one-time activity. It is a process that triggers activities during the whole service lifecycle. This is needed because the strategy has to be updated according to technology changes, market trends and changes in the core business of the organization.

Service design. The next group of processes that can be triggered is described in the ITIL chapter "Service design" (Hunnebeck 2011). The stage begins with an assessment of the current IT environment and processes, if available. During the assessment it is decided what is going to be kept and what is going to be changed. The following important decisions are taken during the service design:

• The exact technologies that are going to be used and the details about their implementation: their design, the stages of the implementation and their timeframes.
• The amount and the profiles of the personnel that is going to fulfill the transition and the ongoing maintenance afterwards.
• The ITIL roles that are needed to support the environment from a process perspective.
• The KPIs that will be used to measure the environment performance.

At this stage the technology that is going to be used is decided, for example, in our case: "Environment description" → "Microsoft Exchange 2010". It is also decided how many servers need to be installed, where they should be located, what else is needed in order to have the environment running, and which KPIs can adequately measure the service performance.

Service transition. During this period (Rance 2011) the actual build of the service takes place. It can be a new environment installation from scratch or a modification of an existing setup. During this period the IT equipment is delivered and physically installed; the applications are also installed and set up. All these activities are coordinated through the change management process, which avoids overlaps and ensures that the production environment as well as business operations will not be affected during the installation.

Service operations. This is the period of the service (Steinberg 2011) when all the transition activities are completed and the FMO (Future Mode of Operations) is active. That means that the environment is installed, tested and provided for usage by the business. For example, in our case it means that the email messages are routed through the Exchange 2010 environment. The defined KPIs should provide an effective representation of the performance during that period. The whole timeline of the e-Mail service is shown in Fig. 3.


Fig. 3 Service lifecycle representation (Hunnebeck 2011): Service strategy → Service design → Service transition → Service operation

4.2 KPIs Classification and Formulation

The smooth functioning of the service operations is measured by the respective KPI indicators representing that operation. For the example with the e-Mail service, the main KPIs are the service uptime, the resolution rate, the quality of the service support teams, etc. On the other hand, the financial management chapter is mainly focused on the distributed price per user for the service, the return on investment, the amount of penalties paid, etc. Generally, the goal is to have most of the KPIs easily interpreted and easily measured, for example in percentages. This approach is used in order to have better visibility of the performance and a unified base for comparison. For the email service there are several groups of KPIs that can be defined. Depending on the business needs, only some of the KPIs may be chosen and specific ones may be added. In some companies it is extremely important to have a high level of data privacy (banking, military), while in others the service reliability and the uptime are the most important (logistics, sales), so different business areas have different business requirements for the email service. That leads to a different usage of KPIs for successfully measuring the level of the support service. It should also be noted that there are different groups of KPIs for the different chapters of ITIL (Brenner 2006). With all this in mind, we formulate the following set of eighteen KPIs for e-Mail service assessment, arranged into five groups (a small computational sketch for several of these indicators is given after the list):

• Service availability KPIs
  – Uptime percentage of the service: the amount of time for which the email service was provided with no interruptions.
  – Count of complete unplanned service outages: the number of service interruption events. Their total duration, as well as whether they occurred during office hours, can be measured with another KPI.
  – Count of service degradation outages: the count of the service outages when the service was only partially available.

• Service request management KPIs
  – Average time for completing the service requests: the average value over all cases.
  – Percentage of service requests completed within the agreed SLA.
  – Percentage of service requests completed in one shot: the percentage of service requests completed without additionally contacting the requestor for more information or testing.
  – Percentage of complaints: the percentage of complaints about service requests not completed as described.

• Incident management KPIs
  – Average time for starting work on the problem: the average time between the ticket creation and the assignment of a technical support engineer.
  – Average time for resolution: the average time between the ticket opening and its complete resolution.
  – Percentage of incidents resolved within the SLA timeframes.
  – Percentage of incidents completed in one shot: the percentage of incidents completed without additionally contacting the requestor for more information or testing.
  – Percentage of incidents with proper initial assessment of the impact by the first-line support engineer.
  – Percentage of complaints: the percentage of complaints about incidents not completed as described.

• Change management KPIs
  – Percentage of successful changes.
  – Number of failed changes.
  – Number of unauthorized changes.

• Capacity SLA KPIs
  – Consumed disc storage per user.
  – Supported users per FTE: the number of users successfully supported by one full-time-equivalent engineer.
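Several of the operational indicators above reduce to simple ratios over ticket or outage records. The following sketch is a hypothetical illustration (the Ticket fields and function names are ours, not an ITIL artifact) of how a few of the incident management KPIs could be computed.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    resolved_within_sla: bool
    one_shot: bool            # closed without going back to the requestor
    minutes_to_start: float   # time until an engineer was assigned

def incident_kpis(tickets):
    """Compute a few of the incident management KPIs listed above."""
    n = len(tickets)
    return {
        "avg_minutes_to_start": sum(t.minutes_to_start for t in tickets) / n,
        "pct_within_sla": 100.0 * sum(t.resolved_within_sla for t in tickets) / n,
        "pct_one_shot": 100.0 * sum(t.one_shot for t in tickets) / n,
    }

# Hypothetical usage
sample = [Ticket(True, True, 12), Ticket(True, False, 45), Ticket(False, True, 180)]
print(incident_kpis(sample))
```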

Note that there are different approaches to how a set of KPIs can be collected in order to be significant for a wide range of companies. How to collect a set of KPIs related to ISO 9001:2008 (Quality Management) is presented in Gacic et al. (2015). The authors' approach is to create a questionnaire for 142 manufacturing companies; based on the answers given by the managers, they chose the most relevant KPIs for the strategy subprocesses for all manufacturing companies. Another approach is described in Trinkenreich and Santos (2015). It aims to identify adequate metrics to be used by organizations deploying IT service maturity models, and the relationship between those metrics and the processes of IT service maturity models or standards. The authors propose a systematic mapping review protocol; the protocol and its results are evaluated by a specialist on systematic mapping reviews and IT service maturity models. The basic approaches that are currently being used by the companies are (Pollard and Cater-Steel 2009):


• Usage of the proper set of KPIs: most of the companies rely on a standard set of KPIs included in their offering plans. These sets differ between companies and correspond to their strengths and maturity. This limits the service to being relevant only for particular types of business needs. When non-standard requests come to the implementer/developer, custom KPIs need to be created to measure the bespoke service.
• Setting up the right values for the chosen KPIs: the goal is to determine thresholds for the different KPIs. They need to correspond to the understanding of acceptable quality of service by both customer and supplier. When obtaining these thresholds, a detailed assessment is made of the available support resources as well as the supported environment. It is also good practice to include an additional warning threshold which flags that high attention is needed for the indicator in order for it to continue functioning as expected.

5 Group Decision Making for KPIs Selection

The evaluation and selection of the corresponding KPIs is the next step of integrating the IT service. We apply a Group Decision Making approach for this purpose. The process is summarized in two steps:

I. The group of experts creates a list with all the key performance indicators that may be included in the SLA for the customer.
II. The group of experts evaluates the feasibility of the collected KPIs one by one.

We demonstrate the proposed approach on the following real-life problem. There is a need to improve the quality of the IT service in a large national university with ~24 000 students. Five experts have been engaged to solve the problem according to the selected KPIs. The experts are part of the university IT department and have the following roles according to ITIL: IT Director; SLA Manager; Incident Manager; Problem Manager; Change Manager. An additional clarification has been made that the university email service is provided only to the teachers and the personnel, not to the students. The experts rank each of the indicators with a score between 1 and 10, where 1 means that the KPI will not be supportive at all and 10 means that such a KPI will strongly support measuring the organization's performance. Each KPI is evaluated from three aspects: whether it is going to support the service uptime, the user satisfaction and the user productivity. This will help the process managers to gain a clear overview of the purposes for which the KPIs can be used during the service operations. The evaluation is made on the basis of the IT environment of the university, described by the technical infrastructure overview, the business goals and the ongoing issues. Technical infrastructure overview: the university occupies six buildings in one campus, connected with a high-broadband WAN network. The provided e-Mail service is available only for the teachers and the administrative personnel; there are 850 mailboxes created in total. A specific feature is that there is a very large

Key Performance Indicators …

89

Table 2 Group decision making for KPIs selection

number of external mail contacts stored in the active directory ~32 000. That is due to the reason that for each student a mail contact is created. These contacts are part of different public distribution lists that describe the different classes and learning groups. Technically the environment is hosted on premise in a dedicated server room. Microsoft Exchange 2010 servers with full redundancy deliver the service. Business goals: It has been planned to upgrade its environment to Exchange 2013 in order to use the features of the latest version. The goal is to have 0% outages for the email service during the weekdays. Another goal is to implement the laboratories booking trough the Exchange calendar feature. Ongoing issues: currently the personnel is complaining that the support desk is engaged with a big delay after the issue is reported—sometimes on the next business day. Another identified issue is the data loss for email items—a big number of the requested mailbox restores are not successful. Based on the provided description the experts have put their ratings. Consolidated view of group decision making model can be seen on Table 2. The columns correspond to the KPIs. The rows correspond to the DMs: DM1 = IT Director, DM2 = SLA Manager and so on. The values a(i,j) in the matrix are the scores of the Decision Maker (i) according to the KPI(j). The above model is solved using the group decision support method according to Krapohl (2012). It provides structured, transparent decision making within a group based on statistical methods. The approach employs a weighted decision matrix with authoritative attributes which leads to an individual decision outcome. The weighting coefficients are used to represent the depth of knowledge for the experts about the area of particular KPI. The solution process consists of three stages: I— Group factor identification; II—Individual scoring; III—Facilitator complies results. The output includes the following data: Disagreement and Agreement heat map; Points of contention; Optimistic/Pessimistic Disagreement; Optimistic/Pessimistic support of the final score—see Table 3.
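
A minimal sketch of the weighted decision matrix aggregation used in this kind of group decision support is shown below; the scores, the weighting coefficients and the use of the standard deviation as a crude stand-in for the agreement/disagreement heat map are illustrative assumptions and do not reproduce the data of Table 2:

# Minimal sketch of a weighted decision matrix in the spirit of Krapohl (2012).
# All numbers are hypothetical.
from statistics import stdev

kpis = ["Service availability", "Avg. incident resolution time", "Supported users per FTE"]
# scores[d][k] = rating of decision maker d for KPI k (scale 1..10)
scores = [
    [9, 6, 3],   # DM1 = IT Director
    [10, 7, 4],  # DM2 = SLA Manager
    [8, 4, 5],   # DM3 = Incident Manager
    [9, 5, 2],   # DM4 = Problem Manager
    [9, 6, 3],   # DM5 = Change Manager
]
# Weighting coefficients expressing each expert's depth of knowledge (assumed).
weights = [1.0, 0.9, 0.8, 0.8, 0.7]

for k, name in enumerate(kpis):
    col = [row[k] for row in scores]
    weighted = sum(w * s for w, s in zip(weights, col)) / sum(weights)
    disagreement = stdev(col)   # larger value = stronger disagreement
    print(f"{name:32s} weighted score = {weighted:4.1f}, disagreement = {disagreement:4.2f}")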


Table 3 Top 5 scored KPIs; Level of agreement and disagreement

Table 3 shows the top 5 scored KPI indicators for each of the three purposes of the feedback session: support for the service uptime; support for the end-user satisfaction; support for the end-user productivity. It also displays the levels of agreement and disagreement between the experts about the relevancy of a particular KPI, according to the heat maps. A more intensive color corresponds to a higher level of agreement (disagreement) between the experts and vice versa. Further, it can be seen that the service availability KPIs (the first three columns of Table 2) have major importance for the 3 measured aspects. This is also aligned with a high level of agreement between the experts. The level of disagreement between the experts is relatively high for the top 5 chosen KPIs for measuring the end user productivity and satisfaction (Table 3, last two columns). That can be explained by the different points of view that the different experts have on the IT service. Another interesting result is that the experts are confident and have a high level of agreement for the KPIs that are scored low (see Table 3, agreement heat map). That means that we can confidently confirm which KPIs are not relevant. Namely:
• Will support the service uptime: Percentage of service requests completed within the agreed SLA; Average time for completing the service requests; Average time for starting work on case;
• Will support the end user satisfaction: Number of unauthorized changes; Consumed disc storage per user; Supported users per FTE;
• Will support the end user productivity: Percentage of service requests completed within one shot; Supported users per FTE; Percentage of service requests completed within the agreed SLA.

6 Conclusion

We have proposed a methodology for improving the e-Mail service quality that ensures better client satisfaction. This is realized with the help of the ITIL framework. To do that, a comprehensive catalogue of eighteen Key Performance Indicators was


presented. They are grouped into five groups as follows: Service availability, Service request management, Incident management, Change management and Capacity SLA, respectively. The selection of the proper KPIs is done by means of a Group Decision Making approach. For this purpose, a model for evaluation and selection of suitable KPIs is solved with the participation of a group of experts. This methodology allows the management department of an organization to have a structured approach for choosing the proper KPIs for measuring its business goals. The methodology is demonstrated on a real-life example for enhancing the quality of the e-Mail service in a large organization.

Acknowledgements This work was funded by the Bulgarian NSF under the grant № KP-06-N52/7 “Mathematical models, methods and algorithms for solving hard optimization problems to achieve high security in communications and better economic sustainability” and the grant № KP-06-N52/5 “Efficient methods for modeling, optimization and decision making”.

References Borissova, D.: Group decision making for selection of k-best alternatives. Comptes rendus de l’Acad´emie bulgare des Sciences 69 (2) 183–190 (2016) Brenner, M.: Classifying ITIL processes; a taxonomy under tool support aspects. In: Proceedings of the First IEEE/IFIP International Workshop on Business-Driven IT management (BDIM 2006). 19–28 (2006) https://doi.org/10.1109/BDIM.2006.1649207. Cannon, D.: ITIL Service Strategy, 2011th edn. The Stationery Office, London (2011) Cardoso, A., Moreira, F., Escudero, D.F.: Information technology infrastructure library and the migration to cloud computing. Univ. Access Inf. Soc. Univ. Access Inf. Soc. 17(3), 503–515 (2018). https://doi.org/10.1007/s10209-017-0559-3 Eikebrokk, T., Iden, J.: Enabling a culture for IT services; the role of the IT infrastructure library. Int. J. Inf. Technol. Manage. 15(1), 14–40 (2016) Eikebrokk, T., Iden, J.: Strategising IT service management through ITIL implementation: model and empirical test. Total Qual. Manag. Bus. Excell. 28(3–4), 238–265 (2017) Gacic, M., Nestic, S., Zahar, M., Stefanovic, M.: A model for ranking and optimization of key performance indicators of the strategy process. Int. J. Ind. Eng. Manag. (IJIEM) 6(1), 7–14 (2015) Gërvalla, M., Preniqi, N., Kopacek, P.: IT Infrastructure Library (ITIL) framework approach to IT Governance. In: Proceedings of the 8th IFAC Conference on Technology, Culture and International Stability TECIS 2018: Baku, Azerbaijan. IFAC-PapersOnLine, vol. 51(30), pp. 181–185 (2018) Guo, W., Wang, Y.: An incident management model for SaaS application in the IT Organization. In: Proceedings of the International Conference on Research Challenges in Computer ScienceICRCCS’09, pp. 137–140 (2009). https://doi.org/10.1109/ICRCCS.2009.42 Hunnebeck, L.: ITIL Service Design. The Stationery Office, London (2011) Kirilov, L., Guliashki, V., Genova, K., Vassileva, M., Staykov, B.: Generalized scalarizing model GENS in DSS WebOptim. Int. J. Decis. Supp. Syst. Technol. (IJDSST) vol. 5(3), pp. 1–11 (2013). https://doi.org/10.4018/jdsst.2013070101 Kosinski, J., Nawrocki, P., Radziszowski, D., Zielinski, K., Zielinski, S., Przybylski, G., Wn˛ek, P.: SLA Monitoring and Management Framework for Telecommunication Services. In: Networking and Services, 2008. ICNS 2008; pp. 170–175 (2008). ISBN: 978–0–7695–3094–9


Krapohl, D.: A Structured Methodology for Group Decision Making (2012). http://www.augmen tedintel.com/content/articles/group_strategic_decision_making_with_weighted_decision_matr ix.asp. Aaccessed May 2021 Kubiak, P.: Rass St. An Overview of Data-Driven Techniques for IT-Service-Management, IEEE Access, Issue 6, 63664–63688 (2018) Marrone, M., Gacenga, F., Cater-Steel, A., Kolbe, L.: IT service management: a crossnational study of ITIL adoption. Commun. Assoc. Inf. Syst. 34(1), 865–892 (2014) Marrone, M., Hammerle, M.: Relevant research areas in IT service management: an examination of academic and practitioner literatures. Commun. Assoc. Inf. Syst. 41, 517–543 (2017). https:// doi.org/10.17705/1CAIS.04123 Othman, M., Pee, N., Rahim, Y., Sulaiman, H., Othman, M.A., Abd. Aziz, M.: Using analytical hierarchy process (AHP) to evaluate barriers in adopting formal IT governance practices. J. Telecommun. Electron. Comput. Eng. (JTEC) 10(1–6), 35–40 (2018) Petrov, I.: Methodology advances in Information Theory: adjusting entropy, innovating hierarchy. In: Proceedings of the 7th IEEE International Conference “Big Data, Knowledge and Control Systems Engineering” (BdKCSE’2021). Sofia, Bulgaria. e-ISBN:978–1–6654–1042–7. IEEE. https://doi.org/10.1109/BdKCSE53180.2021.9627287. 1–23 (2021) Pollard, C., Cater-Steel, A.: Justifications, strategies, and critical success factors in successful ITIL implementations in US and Australian companies: an exploratory study. Inf. Syst. Manag. 26(2), 164–175 (2009) Popchev, I., Radeva, I.R., Nikolova, I.R.: Aspects of the evolution from risk management to enterprise global risk management. Eng. Sci. LVIII(1), 16–30 (2021) Proctor, P., Anderson, J.: Digital business KPIs: defining and measuring success. Gartner Research. Gartner. ID G00341667 (2017). https://www.gartner.com/en/documents/3803509/digital-bus iness-kpis-defining-and-measuring-success Rance, S.: ITIL Service Transition. The Stationery Office, London (2011) Rubio, J.L., Camazón, R.A.: literature review about sequencing ITIL processes. In: DATA ‘18: Proceedings of the First International Conference on Data Science, E-learning and Information Systems. Article No.8, pp. 1–7. (2018) https://doi.org/10.1145/3279996.3280004 Rubio, J., Arcilla, M.: How to optimize the implementation of ITIL through a process ordering algorithm. Appl. Sci. 10(1), 34 (2020). https://doi.org/10.3390/app10010034 Rubio Sánchez, J.L.: Model to optimize the decision making on processes in IT departments. Mathematics 9(9), 983. (2021). https://doi.org/10.3390/math9090983 Spremic, M., Zmirak, Z., Kraljevic, K.: IT and business process performance management: case study of ITIL implementation in finance service industry. In: Proceedings of the International Conference on Information Technology Interfaces, 23–26 June 2008, Dubrovnik, pp. 243–250 (2008) Steinberg, R.A.: ITIL Service Operation. The Stationery Office, London (2011) Stidley, J., Jagott, S.: Microsoft Exchange Server 2010 Best Practices. Microsoft Press, USA (2010) Talla, M., Valverde, R.: An implementation of ITIL guidelines for IT support process in a service organization. Int. J. Inf. Electron. Eng. 3(3), 334–340 (2013). https://doi.org/10.7763/IJIEE.2013. V3.329 Trinkenreich, B., Santos, G.: Metrics to support IT service maturity models—a case study. In: Hammoudi, S., Maciaszek, L., Teniente, E. (eds.), Proceedings of the 17th International Conference on Enterprise Information Systems (ICEIS), Barcelona, Spain, vol. 2, pp. 
330–338 (2015) Tsenov, A.: Approaches for Improvement of IT Systems Management. International Journal of Innovative Science and Modern Engineering. 3(6), 95–98 (2015) Valverde, R., George Saade, R., Talla, M.: ITIL-based IT service support process reengineering. Intell. Decis. Technol. 8(2), 111–130 (2014) IDT-130182. https://doi.org/10.3233/IDT-130182 Wegmann, A.l., Regev, G., Garret, G.-A., Maréchal, F.: Specifying sServices for ITIL service management. In: Proceedings of: 2008 International Workshop on Service-Oriented Computing:


Consequences for Engineering Requirements, pp. 8–14 (2008). https://doi.org/10.1109/SOC CER.2008.7 Weichbroth, P.: Mining e-mail message sequences from log data. In: Ganzha, M., Maciaszek, L., Paprzycki, M. (eds.), Proceedings of the 2018 Federated Conference on Computer Science and Information Systems. ACSIS, vol. 15, pp. 845–848 (2018). http://dx.doi.org/https://doi.org/10. 15439/2018F325 Xiaozhong, Y., Jian, L., Yong, Y.: Study on the IT service evaluation system in ITIL-based small and medium-sized commercial banks. Int. J. Hybrid Inf. Technol. 8(4), 233–242 (2015)

Contemporary Bioprocesses Control Algorithms for Educational Purposes

Velislava Lyubenova, Maya Ignatova, and Olympia Roeva

Abstract The development of interactive software systems is a modern approach to the training of biotechnologists. A system that is currently being developed is the interactive system for education in modelling and control of bioprocesses (InSEMCoBio). The purpose of this study is to derive modern algorithms for monitoring and control of a process (gluconic acid production), built into the system InSEMCoBio. The algorithms allow users to compare the results of the application of different control strategies to the same object. For the algorithms design the General Dynamical Model approach is used. As a result, the considered process is monitored and its adaptive control is realized based on the synthesized software sensors of immeasurable process variables. A scheme of the whole training system is proposed, clearly showing the place of the derived algorithms and their connections with the other functions of the system.

Keywords Adaptive control algorithms · Gluconic acid production · Continuous and fed-batch processes · Interactive training system

1 Introduction

The application of new strategies in biotechnological production imposes modern requirements for student education (Hass 2015). In the last years, computer-based simulations integrated into training methodologies have been developed as cost- and time-efficient tools expanding specialists’ teaching (Diwakar et al. 2011). In the cases of representing a real laboratory or plant these computer-based tools are called operator training simulators (OTS) (Hass 2015). They could be used to extend knowledge


about the process and for the development of control strategies. Nowadays, very few OTS are realized and they are limited to simulations related to biogas, bioethanol and biopharmaceutical production in some small-scale applications (Hass 2015; Gerlach 2015; Gerlach et al. 2013). Modelling of biotechnological processes is the first step in the investigation of these processes. They are characterized by complex nature, non-stationarity, nonlinearity and lack of reproducibility of experiments (Dochain 2013; Simeonov and Chorukova 2018; Vasileva et al. 2009). For this reason, the developed training simulators include models describing different phases of the investigated processes. The software tools for model simulation proposed in the literature differ in the concrete bioprocess they address, due to the multiformity of bioprocesses, as well as in the modelling method, depending on its application. Due to the exceptional complexity and variability of the considered processes, the known theoretical results related to the general presentation of their nonlinear models, identifiability, stability analysis, etc., can be used as a methodological basis for exhaustive investigations only in a specific case. In Farza and Chéruy (1991), the proposed software package named CAMBIO is directed to modelling of a class of bioprocesses. CAMBIO provides the user with the possibility to interactively design a functional diagram that exhibits the most relevant components with their related interactions through biological and physico-chemical reactions. CAMBIO generates in an automatic way the mass-balance process models in the form of algebraic-differential systems using the developed functional diagrams. In Birol et al. (2002), the developed software tool uses a detailed unstructured model for penicillin production in a fed-batch mode of cultivation. The simulations aim at monitoring and fault diagnosis of this process and can be used for education. In Ferrer et al. (2008) the Biological Nutrient Removal Model No. 1 for wastewater treatment plants is implemented. It allows optimizing the process by calculating the performance, under steady or transient state, of whole treatment schemes. In Blesgen and Hass (2010) an interactive simulator of the process of anaerobic digestion is developed. It is based on four interacting submodels which describe the biological, physico-chemical, bioreactor and plant subsystems of the overall bioprocess. The simulator implements various control loops, data acquisition systems and graphical user interfaces. It can be used for process optimization by validation of different controllers built into the system and for industrial and academic education. An interactive system for student teaching developed by using the 20-sim modelling and simulation software environment is presented in Roman et al. (2011, 2013). The system uses friendly graphical user interfaces and comprises sets of different experiments. The designed simulators are based on the bond graph modelling method and organized in libraries. The sets of modelling and simulation experiments are grouped into a teaching system, which was implemented successfully at the University of Craiova (Bioengineering Master course). In Lyubenova et al. (2018) a first stage of an interactive user-friendly software system for bioprocess model identification is proposed. In the core of the system is an evolutionary algorithm (EA) for optimization. The first results refer to kinetics identification of alcohol fermentation models as the main subject in student education


in the Department of Technology of Wine and Brewing at the University of Food Technology in Plovdiv, Bulgaria. A students’ training programme is proposed. The training programme will extend the education with contemporary facilities. This system is the core of the developed interactive system for education in modelling and control of bioprocesses (InSEMCoBio), in which at this stage modern algorithms for model identification are built in, such as genetic algorithms, EA and some hybrid algorithms (Angelova et al. 2020; Roeva et al. 2021). The next stage is to build in different algorithms for process monitoring and control. The monitoring and control of bioprocesses is an important stage in the development of new technologies (Hadj-Abdelkader and Hadj-Abdelkader 2019; Bouyahia et al. 2020; Bzioui and Channa 2020; Rehman et al. 2021). They are mandatory elements to be developed and built into the system InSEMCoBio. The purpose of this study is to propose a training system that allows users to compare the results of the application of four different adaptive control algorithms, applied to the same object. For this purpose, the process of gluconic acid production is chosen. Gluconic acid finds extensive uses in the food, pharmaceutical, and other industrial fields. In the textile industries gluconic acid is used as an acid catalyst. Nowadays, gluconic acid is industrially produced by microorganisms of the Aspergillus and Pseudomonas genera, where glucose is the main substrate. Once glucose is depleted, the microorganism begins to use the synthesized gluconic acid as an alternative carbon source. Since gluconic acid is the target product of the considered process, its consumption should be avoided. This has been achieved through the application of the control laws synthesized here. Some preliminary theoretical research related to the development of control algorithms for such processes is reported in Ignatova et al. (2021).

2 Mathematical Model and Process Monitoring

A biotechnological model of gluconic acid production is derived based on a reaction scheme discussed in Ignatova et al. (2021). This model is too complex for the synthesis of process control algorithms, so a specific mathematical model for control purposes is proposed here.

2.1 Mathematical Model for Control Purposes

According to Bastin and Dochain (1990), the General Dynamical Model (GDM) of a biotechnological process can be presented in matrix form as follows:

dξ/dt = Kϕ − Dξ + F    (1)


where ξ is a vector-column of process variables; K is a matrix of yield coefficients; ϕ is a vector-column of reaction rates; D is the dilution rate; F is the mass feed rate in the reactor. In the considered case, the reaction scheme is as follows:

G →(ϕ1) X
G + O2 →(ϕ2) GA    (2)

Following the rules proposed in Bastin and Dochain (1990), the model for control is derived fully automatically on the basis of reaction scheme (2). For the fermentation process control, the operational model (1) is presented as follows:

d/dt [X, G, O2, GA]^T = [[1, 0], [−k1, −k2], [0, −k3], [0, 1]] [ϕ1, ϕ2]^T − D [X, G, O2, GA]^T + [0, D·Gin, KLa(O2* − O2), 0]^T    (3)

To derive control laws, all variables must be measured or observed. The available on-line process information consists of measurements of the variables glucose (G) and dissolved oxygen (O2), the known transport dynamics of oxygen (KLa(O2* − O2)) and the glucose feed rate F = D·Gin, where Gin is the glucose concentration in the feed. The process variables biomass (X) and gluconic acid (GA) are not measured on-line and have to be observed.
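
As a minimal illustration of how model (3) can be simulated, the sketch below codes its right-hand side in Python; the yield coefficients, the oxygen transfer parameters, the feed concentration and the kinetic rate functions ϕ1, ϕ2 are placeholder assumptions, not the identified values used in this chapter:

# Sketch of the operational model (3) as an ODE right-hand side.
import numpy as np

k1, k2, k3 = 2.0, 1.1, 0.5     # assumed yield coefficients
KLa, O2_sat = 100.0, 0.0075    # assumed oxygen transfer rate and saturation value
G_in = 200.0                   # glucose concentration in the feed (g/l)

def rhs(state, t, D, phi1, phi2):
    """Right-hand side of model (3); state = [X, G, O2, GA]."""
    X, G, O2, GA = state
    p1, p2 = phi1(state, t), phi2(state, t)   # reaction rates (user-supplied models)
    dX  = p1 - D * X
    dG  = -k1 * p1 - k2 * p2 - D * G + D * G_in
    dO2 = -k3 * p2 - D * O2 + KLa * (O2_sat - O2)
    dGA = p2 - D * GA
    return np.array([dX, dG, dO2, dGA])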

2.2 Observer Design

The model transformation is done by applying the basic property of the GDM (Bastin and Dochain 1990). According to Bastin and Dochain (1990), there exists a state transformation

Z = A0 ξa + ξb    (4)

where A0 is the unique solution to the matrix equation

A0 Ka + Kb = 0    (5)

such that the state-space model (1) is equivalent to




dξa/dt = Ka ϕ(ξa, ξb)    (6a)

dZ/dt = A0 Fa + Fb    (6b)

State variables of model (3) are divided into unmeasured variables ξa and measured ones ξb, and the corresponding matrices of yield coefficients are defined:

ξa = [X, GA]^T,  ξb = [G, O2]^T,  Ka = [[1, 0], [0, 1]],  Kb = [[−k1, −k2], [0, −k3]],
Fa = [0, 0]^T,  Fb = [D·Gin, KLa(O2* − O2)]^T    (7)

According to (5),

A0 = −Kb Ka^(−1) = [[k1, k2], [0, k3]]    (8)

and, according to (6b),

Z = A0 ξa + ξb = [[k1, k2], [0, k3]] [X, GA]^T + [[1, 0], [0, 1]] [G, O2]^T

The expressions of the auxiliary variables Z are as follows:

Z1 = k1·X + k2·GA + G    (9a)

Z2 = k3·GA + O2    (9b)

As the values of the kinetic parameters k1, k2 and k3 are known, the values of X and GA can be obtained from (9a) and (9b) using the expressions

Xe = (1/k1)·Z1 − (k2/(k1·k3))·(Z2 − O2) − (1/k1)·G    (10a)

GAe = (1/k3)·(Z2 − O2)    (10b)

where the values of auxiliary variables can be obtained on the basis of known process transport dynamics by the system:


dZ1/dt = −D·Z1 + D·Gin    (11a)

dZ2/dt = −D·Z2 + KLa(O2* − O2)    (11b)

Fig. 1 Estimation results of unmeasured variables

In this way the necessary information to derive adaptive laws for control is available. The resulting observation of immeasured variables, X and GA are shown in Fig. 1. These results could be accepted as good because the estimates (points) of biomass and gluconic acid follow exactly the model data (lines).
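
The software sensor defined by (10a)–(11b) can be sketched in a few lines of Python; the Euler discretization, the parameter values and the shape of the measurement series are assumptions of this illustration rather than the implementation used in InSEMCoBio:

# Software sensor: Euler integration of (11a)-(11b) and reconstruction (10a)-(10b).
k1, k2, k3 = 2.0, 1.1, 0.5          # assumed yield coefficients
KLa, O2_sat = 100.0, 0.0075          # assumed oxygen transfer rate and saturation

def observe(G_meas, O2_meas, D_profile, G_in, dt, Z1_0, Z2_0):
    """Return on-line estimates Xe, GAe from measured G and O2 time series."""
    Z1, Z2 = Z1_0, Z2_0
    Xe_hist, GAe_hist = [], []
    for G, O2, D in zip(G_meas, O2_meas, D_profile):
        # (11a), (11b): auxiliary dynamics, driven only by known transport terms
        Z1 += dt * (-D * Z1 + D * G_in)
        Z2 += dt * (-D * Z2 + KLa * (O2_sat - O2))
        # (10b), (10a): algebraic reconstruction of the unmeasured variables
        GAe = (Z2 - O2) / k3
        Xe = (Z1 - k2 * GAe - G) / k1
        Xe_hist.append(Xe)
        GAe_hist.append(GAe)
    return Xe_hist, GAe_hist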

3 Adaptive Linearizing Control Design

3.1 Transformation of the Model for Control

The adaptive linearizing control algorithm is designed under the following conditions:
– all process variables are considered to be measured (glucose and dissolved oxygen) or observed (biomass and gluconic acid) by (10a and 10b), (11a and 11b);
– the process transport dynamics is known;
– the yield coefficients are unknown;
– the initial conditions of the process variables are known;
– the glucose concentration in the feed, Gin, is known;
– the reaction rates are unknown;


– the dilution rate, D (or the glucose feed rate F), is the control system input;
– the gluconic acid or the glucose concentration in the reactor is the control system output.

The main idea of the control model transformation is to summarize all known process information in separate terms. At the same time, the unknown process kinetics has to be presented with unknown time-varying parameters. The transformation of model (3) is presented in detail in Ignatova et al. (2021) and the following linear regression form is accepted:

dXe/dt = Xe·G·θ1 − D·Xe    (12)

dG/dt = −Xe·G·θ2 − G·O2·θ3 − D·(G − Gin)    (13)

dO2/dt = −G·O2·θ4 − D·O2 + KLa(O2* − O2)    (14)

dGAe/dt = G·O2·θ5 − D·GAe    (15)

where θ 1 -θ 5 are unknown kinetic parameters.

3.2 Observer-Based Estimator of Unknown Kinetic Parameters

The new unknown kinetic parameters θ1–θ5 can be estimated by an observer-based estimator:

dX̂e/dt = Xe·G·θ̂1 − D·Xe + ω1·(Xe − X̂e)    (16)

dĜ/dt = −Xe·G·θ̂2 − G·O2·θ̂3 − D·(G − Gin) + ω2·(G − Ĝ)    (17)

dÔ2/dt = −G·O2·θ̂4 − D·O2 + KLa(O2* − O2) + ω3·(O2 − Ô2)    (18)

dĜAe/dt = G·O2·θ̂5 − D·GAe + ω4·(GAe − ĜAe)    (19)

dθ̂1/dt = Xe·G·γ1·(Xe − X̂e)    (20)

dθ̂2/dt = −Xe·G·γ2·(G − Ĝ)    (21)

dθ̂3/dt = −G·O2·γ2·(G − Ĝ)    (22)

dθ̂4/dt = −G·O2·γ3·(O2 − Ô2)    (23)

dθ̂5/dt = G·O2·γ4·(GAe − ĜAe)    (24)

where the hat denotes the on-line estimate of the corresponding variable or parameter, and ω1, ω2, ω3, ω4 and γ1, γ2, γ3, γ4 are the estimator design parameters. The values of ω1, ω2, ω3, ω4 are set to 1. The values of the design parameters γ1, γ2, γ3, γ4 are calculated as follows:

γ1 = ω1² / [4(Xe·G)²]    (25)

γ2 = ω2² / {4[(Xe·G)² + (G·O2)²]}    (26)

γ3 = ω3² / [4(G·O2)²]    (27)

γ4 = ω4² / [4(G·O2)²]    (28)

The relationships (25)–(28) are derived taking into account the requirements of stability of observation and tracking error dynamics.
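
For illustration, one Euler step of the estimator (16)–(24) with the gain choice (25)–(28) might look as follows in Python; the dictionary-based state, the epsilon guard against division by zero and the sampling scheme are assumptions of this sketch:

def estimator_step(est, meas, D, G_in, KLa, O2_sat, dt, w=(1.0, 1.0, 1.0, 1.0)):
    """One Euler step of the observer-based estimator (16)-(24).
    est: observer states Xh, Gh, O2h, GAh and parameter estimates th1..th5.
    meas: current values of Xe, G, O2, GAe (measured or from the software sensor)."""
    Xe, G, O2, GAe = meas["Xe"], meas["G"], meas["O2"], meas["GAe"]
    w1, w2, w3, w4 = w
    eps = 1e-9  # guard against division by zero (assumption of this sketch)
    # design gains (25)-(28)
    g1 = w1**2 / (4 * (Xe * G)**2 + eps)
    g2 = w2**2 / (4 * ((Xe * G)**2 + (G * O2)**2) + eps)
    g3 = w3**2 / (4 * (G * O2)**2 + eps)
    g4 = w4**2 / (4 * (G * O2)**2 + eps)
    # state observers (16)-(19)
    est["Xh"]  += dt * (Xe * G * est["th1"] - D * Xe + w1 * (Xe - est["Xh"]))
    est["Gh"]  += dt * (-Xe * G * est["th2"] - G * O2 * est["th3"]
                        - D * (G - G_in) + w2 * (G - est["Gh"]))
    est["O2h"] += dt * (-G * O2 * est["th4"] - D * O2
                        + KLa * (O2_sat - O2) + w3 * (O2 - est["O2h"]))
    est["GAh"] += dt * (G * O2 * est["th5"] - D * GAe + w4 * (GAe - est["GAh"]))
    # parameter adaptation (20)-(24)
    est["th1"] += dt * (Xe * G * g1 * (Xe - est["Xh"]))
    est["th2"] += dt * (-Xe * G * g2 * (G - est["Gh"]))
    est["th3"] += dt * (-G * O2 * g2 * (G - est["Gh"]))
    est["th4"] += dt * (-G * O2 * g3 * (O2 - est["O2h"]))
    est["th5"] += dt * (G * O2 * g4 * (GAe - est["GAh"]))
    return est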

3.3 Adaptive Linearizing Control Design

The current research is aimed at undergraduate and doctoral students of biotechnology who are not familiar with modern algorithms for identification (Angelova et al. 2020; Roeva et al. 2021), monitoring and control of biotechnological processes. This requires the considered algorithms to be pre-developed and built into the training system InSEMCoBio. Their activation has to be realized by simple user actions, without theoretical knowledge in these scientific fields. Four adaptive linearizing algorithms for control of the gluconic acid production process are derived. These algorithms have the potential for practical application, as they are related to the possibility of increasing the productivity of the target product of the process—gluconic acid. This productivity can be increased either in fed-batch or in continuous mode of the cultivation process. On the other hand, users can choose the


controlled variable to be either the concentration of the carbon source (glucose) or the target product (gluconic acid). To provide these capabilities, four control algorithms are developed. Each of the control laws is derived in three steps. The first one is the choice of the input/output model of the closed-loop system from model (13) or (15). The second step is to select a stable reference model of the tracking error of the controlled variable, where λ is a control tuning parameter. The third step is the substitution of the input/output model (13) or (15) into the corresponding reference model. The control development process is shown below for each case.

Continuous Control of Glucose Concentration.
First step:
dG/dt = −Xe·G·θ2 − G·O2·θ3 − D·(G − Gin)    (29a)
Second step:
λ·(G* − G) = dG/dt    (29b)
Third step:
D = [−λ·(G* − G) − Xe·G·θ2 − G·O2·θ3] / (G − Gin)    (29c)

Fed-Batch Control of Glucose Concentration.
First step:
dG/dt = −Xe·G·θ2 − G·O2·θ3 − (F/V)·(G − Gin)    (30a)
Second step:
λ·(G* − G) = dG/dt    (30b)
Third step:
F = Gin·[−λ·(G* − G) − Xe·G·θ2 − G·O2·θ3] / (G − Gin)    (30c)
dV/dt = F

Continuous Control of Gluconic Acid Concentration.
First step:
dGAe/dt = G·O2·θ5 − D·GAe    (31a)
Second step:
λ·(GA* − GAe) = dGAe/dt    (31b)
Third step:
D = [−λ·(GA* − GAe) + G·O2·θ5] / GAe    (31c)

Fed-Batch Control of Gluconic Acid Concentration.
First step:
dGAe/dt = G·O2·θ5 − (F/Gin)·GAe    (32a)
Second step:
λ·(GA* − GAe) = dGAe/dt    (32b)
Third step:
F = V·[λ·(GA* − GAe) − G·O2·θ5] / GAe    (32c)
dV/dt = F

The synthesized algorithms are adaptive because the unknown kinetic parameters θ2, θ3 and θ5 (21), (22), (24), as well as the biomass or gluconic acid concentration, Xe or GAe (10a), (10b), are estimated on-line. As a result, the information necessary to apply the adaptive control laws is available.
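
As an illustration of how one of these laws could be applied at each sampling instant, the following Python sketch evaluates the continuous glucose control law (29c); the saturation bounds on D, the default feed concentration and the argument names are assumptions of this sketch:

def dilution_rate(G, G_ref, Xe, O2, th2, th3, lam, G_in=200.0, D_min=0.0, D_max=0.5):
    """Adaptive linearizing control law (29c) for the dilution rate D."""
    D = (-lam * (G_ref - G) - Xe * G * th2 - G * O2 * th3) / (G - G_in)
    # keep the input inside a physically admissible range (assumed bounds)
    return min(max(D, D_min), D_max)

# Hypothetical use at one sampling instant:
# D = dilution_rate(G=3.2, G_ref=3.0, Xe=4.1, O2=0.005, th2=0.8, th3=12.0, lam=1.0)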

4 Building-In the Control Algorithms in InSEMCoBio

The architecture scheme of the system InSEMCoBio is given in Fig. 2. The algorithms for identification, monitoring and control are shown in yellow. The three developed optimization algorithms for the purposes of model identification are already built into the system. The next step is to incorporate the algorithms for the software sensors (10a and 10b), (11a and 11b) and for the adaptive control (29a, 29b, 29c)–(32a, 32b, 32c). The blocks shown in blue are related to the choice of:
• the process and the corresponding experimental data, as well as the model structure for the selected process;


Fig. 2 Architecture scheme of the system InSEMCoBio

• the identification algorithm.

In the considered case, these functions are already realized and the results are:
• a set of optimal models for the batch mode of the cultivation process;
• investigation of the mathematical models;
• choice of the best mathematical model.

The best model is used instead of the real process in simulations. Moreover, the model for control in linear regression form (12)–(15) is based on the obtained mathematical model. For education purposes the user has the opportunity to get acquainted with the possibilities of bioprocess control by activating either fed-batch or continuous mode of cultivation. In the considered case there are two more options depending on the controlled variable—the carbon source (G) or the target product (GA). Since in the considered process only two of the variables are measured on-line (G and O2), the system allows monitoring of the dynamics of the unmeasurable variables (X and GA). Therefore, software sensors are synthesized here (10a and 10b), (11a and 11b).


5 Results and Discussion

The control scheme, including the nonlinear bioreactor represented by the best model, the observer of the unmeasured biomass and gluconic acid (10a and 10b), (11a and 11b), the estimator of the unknown kinetic parameters (16)–(24), as well as the adaptive controller (29a, 29b, 29c)–(32a, 32b, 32c), is investigated by numerical simulations. The simulations are realized under the following conditions. The fermentation is started in batch mode. The initial values of the process variables are: X(0) = 0.1 g/l; G(0) = 0.01 g/l; O2(0) = 0.0075 g/l; GA(0) = 0.01 g/l. The values of the parameters of the best mathematical model are given in Ignatova et al. (2021). The value of the glucose concentration in the feed, Gin, chosen from an expert’s point of view, is set to 200 g/l. The initial values of the biomass and gluconic acid estimators (10a and 10b), (11a and 11b) are set to the initial values of the real experimental data (values known at the beginning of the fermentation process). The simulation investigation of the control algorithms (29a, 29b, 29c)–(32a, 32b, 32c) is shown in Figs. 3, 4, 5 and 6. Model outcomes are given with lines. The results of the gluconic acid and biomass control for all cases are shown with dashed lines. Experimental data of the batch phase for glucose and biomass are given with stars ‘*’ and for gluconic acid with plus signs ‘+’. In all subfigures a the curves of glucose and gluconic acid are shown. In all subfigures b the biomass dynamics is given. In subfigures c, concerning continuous processes, the dissolved oxygen dynamics is shown, and in subfigures c concerning fed-batch processes, the volume variation is presented. In all subfigures d the control input—dilution rate or feed rate—is given. The aim of the control is to suppress the consumption of the target product, gluconic acid, as a carbon source for biomass growth. The presented results show that the goal has been achieved in all cases of control. Two approaches are considered:
• The first is to maintain the substrate (G) in the culture medium at some low value of 0 or 3 g/l.
• In the second one, a constant concentration of the target product (GA), which corresponds to its maximum productivity, is maintained.

These two approaches are implemented through continuous (Figs. 3 and 5) and fed-batch control (Figs. 4 and 6). Comparing the results obtained up to 80 h of fermentation in Figs. 3a and 4a, it was found that the fed-batch control (Fig. 4a) achieves a higher concentration of the target product compared to the continuous control (Fig. 3a). It should be noted that the fed-batch process must be stopped when the volume reaches the maximum working volume (80 l), while in continuous mode of cultivation the process may take longer. This does not give grounds for a definite conclusion about which cultivation mode will accumulate a larger amount of target product.

Fig. 3 Continuous control of glucose concentration
Fig. 4 Fed-batch control of glucose concentration
Fig. 5 Continuous control of gluconic acid concentration
Fig. 6 Fed-batch control of gluconic acid concentration


Maintaining a constant concentration of the target product leads to stabilization of the other process variables too (biomass and dissolved oxygen, etc.) as can be seen in Figs. 5 and 6. Unfortunately, with fed-batch stabilization of gluconic acid, the working volume is reached too quickly (about 30 h of fermentation, Fig. 6d). So this control mode does not lead to good results.

6 Conclusion

The currently developed system InSEMCoBio is presented herewith. An architecture scheme of the whole training system is proposed, clearly showing the place of the synthesized algorithms and their connections with the other functionalities of the system. Modern algorithms for monitoring and control of a gluconic acid production process, built into InSEMCoBio, are derived. The proposed algorithms, using the General Dynamical Model approach, allow users to compare the results of the application of different control strategies. The considered gluconic acid production process is monitored and an adaptive control is realized based on the synthesized software sensors of immeasurable process variables. The control algorithms proposed in this study will be integrated into the system InSEMCoBio together with similar algorithms developed for other food production processes. The choice of a process, as well as the activation of the functions related to identification, monitoring and control, will be realized by the users through simple actions. In this way, users will be able to get acquainted with the results of modern algorithms without being familiar with the theory of their development, or with the software of their implementation. The presented results define the currently evolving system InSEMCoBio as an interactive and user-friendly training system.

Acknowledgements This research has been supported by the National Scientific Fund of Bulgaria under the Grant KP-06-H32/3 “Interactive System for Education in Modelling and Control of Bioprocesses (InSEMCoBio)”.

References Angelova, M., Vassilev, P., Pencheva, T.: Genetic Algorithm and Cuckoo Search Hybrid Technique for Parameter Identification of Fermentation Process Model. Int J Bioautom. 24(3), 277–288 (2020). https://doi.org/10.7546/ijba.2020.24.3.000707 Bastin, G., Dochain, D.: On-line estimation and adaptive control of bioreactors. Elsevier, Amsterdam, Oxford, New York, Tokyo (1990) Birol, G., Ündey, C., Cinar, A.: A modular simulation package for fed-batch fermentation: penicillin production. Comput. Chem. Eng. 26(11), 1553–1565 (2002)


Blesgen, A., Hass, V.: Efficient biogas production through process simulation. Energy Fuels 24(9), 4721–4727 (2010) Bouyahia, S., Semcheddine, S., Talbi, B., Boutalbi, O.: High-performance control for a nonlinear biotechnological process based-on adaptive gain sliding mode strategy. Int J Bioautom. 24(2), 103–116 (2020). https://doi.org/10.7546/ijba.2020.24.2.000595 Bzioui, S., Channa, R.: Robust tracking control for the non-isothermal continuous stirred tank reactor. Int J Bioautom. 24(2), 131–142 (2020). https://doi.org/10.7546/ijba.2020.24.2.00061520 Diwakar, S., Achuthan, K., Nedungadi, P., Nair, B.: Enhanced facilitation of biotechnology education in developing nations via virtual labs: analysis, implementation and case-studies. Int. J. Comput. Theory Eng. 3(1), 1–8 (2011) Dochain, D.: Automatic Control of Bioprocesses. Wiley (2013) Farza, M., Chéruy, A.: CAMBIO: software for modelling and simulation of bioprocesses. Bioinformatics 7(3), 327–336 (1991) Ferrer, J., Seco, A., Serralta, J., Ribes, J., Jl, M., et al.: DESASS: a software tool for designing, simulating and optimising WWTPs. Envir. Mod. Softw. 23(1), 19–26 (2008) Gerlach, I.: Operator training simulators towards industrial biotechnology. Doctoral dissertation, Linköping University Electronic Press (2015) Gerlach, I., Hass, V., Brüning, C., Mandenius, C.: Virtual bioreactor cultivation for operator training and simulation: application to ethanol and protein production. J. Chem. Technol. Biotechnol. 88(2), 2159–2168 (2013) Hass, V.: Operator training simulators for bioreactors. Bioreactors: Design, Operation and Novel Applications, pp. 453–483. Weinheim, Wiley-VCH (2015) Hadj-Abdelkader, O., Hadj-Abdelkader, A.: Estimation of substrate and biomass concentrations in a chemostat using an extended Kalman Filter. Int J Bioautom. 23(2), 215–232 (2019). https://doi. org/10.7546/ijba.2019.23.2.000551 Ignatova, M., Lyubenova, V., Zlatkova, A.: Adaptive Control for Maximum Productivity of Continuous Bioprocesses. Materials, Methods and Technologies, vol. 15, 40–49 p. (2021). ISSN 1314–7269 Lyubenova, V., Ignatova, M., Kostov, G., Shopska, V., Petre, E., Roman, M.: An Interactive teaching system for kinetics modelling of biotechnological processes. In: IEEE 22nd International Conference on System Theory, Control and Computing (ICSTCC), pp. 366–371 (2018). Rehman, K., Lin Zhu, X., Wang, B., Shahzad, M., Ahmad, H., Abubakar, M., Ajmal, M.: Soft Sensor model based on IBA-LSSVM for photosynthetic bacteria fermentation process. Int. J. Bioautom. 25(2), 145–158 (2021). https://doi.org/10.7546/ijba.2021.25.2.000783 Roman, M., Popescu, D., Selisteanu, D.: An interactive teaching system for bond graph modeling and simulation in bioengineering. J. Educ. Technol. Soc. 16(4), 17–31 (2013) Roman, M., Sendrescu, ¸ D., Boba¸su, E., Petre, E., Popescu, D.: Teaching system for modelling and simulation of bioprocesses via bond graphs. In: 22nd Annual Conference on EAEEIE, Maribor, Slovenia, pp. 192–199 (2011) Roeva, O., Zoteva, D., Castillo, O.: Joint set-up of parameters in genetic algorithms and the artificial bee colony algorithm: an approach for cultivation process modelling. Soft. Comput. 25(3), 2015– 2038 (2021) Simeonov, I., Chorukova, E.: Anaerobic digestion modelling with artificial neural networks. C. r. Acad. Bulg. Sci. 61(4), 505–512 (2018) Vasileva, E., Petrov, K., Beschkov, V.: Fed batch strategy for biodegradation of monochloroacetic acid by immobilized Xantobacter Autotrophicus GJ10 in polyacrylamid gel. C. r. Acad. Bulg. Sci. 
62(10), 1241–1246 (2009)

Monitoring a Fleet of Autonomous Vehicles Through A* Like Algorithms and Reinforcement Learning

Mourad Baiou, Aurélien Mombelli, and Alain Quilliot

Abstract We deal here with a fleet of autonomous vehicles which is required to perform internal logistics tasks inside some protected area. This fleet is supposed to be ruled by a hierarchical supervision architecture which, at the top level, distributes and schedules Pick up and Delivery tasks and, at the lowest level, ensures safety at the crossroads and controls the trajectories. We focus here on the top level, while introducing a time-dependent estimation of the risk induced by the traversal of any arc at a given time. We set a model, state some structural results, and design, in order to route and schedule the vehicles according to a well-fitted compromise between speed and risk, a bi-level algorithm and an A* algorithm which both rely on a reinforcement learning scheme.

Keywords Group decision support · IT management · Key performance indicators

1 Introduction

Intelligent vehicles, provided with an ability to move with some level of autonomy, recently became a hot spot in the mobility field. Still, determining what can exactly be done with new generations of autonomous or semi-autonomous vehicles able to follow their own way without being physically tied to any kind of track (cable, rail, …) remains an issue. Most people are doubtful about the prospect of seeing such vehicles moving without any external control inside crowded urban areas. Instead they foresee that the use of those vehicles is likely to be restricted to protected areas for specific purposes: relocation of free access vehicles inside large parking areas, rural


Fig. 1 An autonomous vehicle

or urban logistics inside closed areas, pick up and delivery transactions inside warehouses (see Amazon.com 2013; Ren et al. 2017), rescue or repair interventions in a context of natural disaster. This point of view raises the general challenge of monitoring a fleet of such vehicles, required to perform internal logistics tasks while safely interacting with other players: workers, machines and standard vehicles. Related decision problems are at the intersection of Robotics and Operations Research (Fig. 1). When it comes to the management of autonomous vehicle fleets, the current trend is the implementation of a 3-level supervision architecture:
• The first level, or embedded level, is defined by the monitoring and sensing devices which are embedded inside the vehicles, with the purpose of controlling trajectories in real time and adapting them to the possible presence of obstacles: currently, most effort from the robotics community remains devoted to this embedded level (see Chen and Englund 2016), which mostly involves optimal control and artificial perception techniques.
• The second one, or middle one, is in charge of small tricky areas, like for instance crossroads (see Fig. 2). It sends signals and instructions to the vehicles in order to regulate their transit and avoid them colliding when they get through those areas (see Chen and Englund 2016; Philippe et al. 2019).
• The third one, or global one, refers to the dynamic planning and routing of the fleet, in order to make this fleet achieve some internal logistics requests (see Le-Anh and De Koster 2006; Vis 2006; Koes et al. 2005).

A true challenge is the synchronization of those monitoring levels and the control of the communication processes which will allow them to interact. We deal here with the global control level, while assuming that this level is in charge of vehicle routing and scheduling decisions. At a first glance, one may think of the related problem as a kind of PDP: Pick up and Delivery Problem (see Berbeglia et al. 2007), since an elementary task will consist for a vehicle in moving from some


Fig. 2 Hierarchical supervision architecture

origin to some destination, performing some loading or unloading transaction and keeping on. But some specific features impose new challenges:
• The time horizon for autonomous or semi-autonomous vehicles is usually short and decisions have to be taken on line, which means that decisional processes must take into account the communication infrastructure (see Vivaldini et al. 2013) and the way the global supervisor can be provided, at any time, with a representation of the current state of the system and its short-term evolution;
• As soon as autonomous vehicles are involved, safety is at stake (see Ryan et al. 2020; Pimenta et al. 2016). The global supervisor must compute and schedule routes in such a way that not only tasks are going to be efficiently performed, but also that local and embedded supervisors will perform their job more easily.

Taking care of safety requires quantifying the risk induced by the introduction into the system of any additional vehicle. Addressing this issue means turning real-time collected traffic data into risk estimators (see Ryan et al. 2020; Zhang et al. 2008). We do not do it here. Instead, we focus on the way the resulting estimators may be used in order to take safe routing and scheduling decisions. So we assume that, at the time when we are trying to route and schedule a given vehicle V, we are provided with a procedure which, for any arc e = (x, y) of the transit network and any time value t, can compute a rough estimation of the risk related to making V run on e at time t. Then our goal becomes to schedule the route Γ that V is going to follow, in such a way that its arrival time is minimal and the induced risk estimation remains bounded by some threshold. For the sake of simplicity, we limit ourselves to one vehicle V and one origin/destination move (o, d). Our problem may then be viewed as the search for a constrained shortest path (see Lozano and Medaglia 2013). But two features make it significantly more difficult:


• We must deal with a time-dependent network (see Franceschetti et al. 2017; Krumke et al. 2014; Park et al. 2009);
• The on-line context keeps us from relying on heavy machinery like that related to mathematical programming.

According to this purpose, we proceed in 3 steps:
• The first one is devoted to the setting of our SPR: Shortest Path under Risk problem and to a discussion about its structural properties.
• The second step is devoted to the design of algorithms for a static context: almost exact algorithms which adapt the well-known A* algorithm for path searching in a large state space (see Nilsson 1975); and local search heuristic algorithms, which estimate the quality of a given route Γ through the application of a filtered dynamic programming procedure. In both cases, we try several notions of arc traversal decisions, relying respectively on risk versus time, risk versus distance and distance versus time estimations.
• The last step deals with the on-line issue. We turn the above-mentioned algorithms, designed according to a static paradigm, into reactive algorithms for on-line contexts. According to this prospect, we apply statistical learning and auto-adaptive reinforcement learning techniques, in order to associate ad hoc arc traversal decisions with any current traffic pattern.

So the paper is organized as follows: in Section II we formally describe our model and state some structural results. In Section III we describe the global structure of an exact A* algorithm and a local search heuristic, and present the ways the notion of arc traversal decision may be implemented. In Section IV, we address the on-line issue and explain how statistical learning techniques may be used in order to turn static tree search or dynamic programming algorithms into fast decision-rule-based algorithms. Section V is about numerical experiments.

2 The SPR: Shortest Path under Risk Model

We refer here to a fleet of autonomous vehicles, which evolves throughout the time inside some kind of industrial infrastructure, for instance a warehouse, with the purpose of achieving internal logistic tasks (item storing and retrieving, maintenance, inventory, …). Those tasks have to be performed in a safe way under small standard costs (time, energy, …). It comes that, in order to set the SPR model, we first need to formalize here the safety notion.


Fig. 3 A warehouse like transit network

2.1 Transit Network and Risk Function

We suppose that our fleet of vehicles moves inside a simple, almost planar transit network G = (N, A), N denoting the node set and A the arc set. This network G is likely to represent for instance a warehouse (see Fig. 3), or any kind of similar industrial or rural restricted area. To any arc e = (x, y) correspond a length Le and a maximal speed vmaxe: an autonomous vehicle traversing e is not allowed to go faster than vmaxe while moving along e. We denote by L* the shortest path distance induced by the values Le, e ∈ A. We suppose that at time t = 0, when the global supervisor of the fleet must take a decision about a target vehicle V, he is provided with some knowledge about the routes which are followed by the other vehicles, their schedules, and the tasks that they are going to perform. This knowledge allows him to derive a risk estimation function Πe(t) whose meaning comes as follows: for any small value dt, Πe(t)·dt is an estimation of the Expected Damage in case V moves at maximal speed vmaxe along e between time t and time t + dt. Obtaining the functions Πe is not part of this study: it requires experimental data analysis. But, since the global supervisor must maintain those risk estimation functions all along the process, those functions must be expressed according to a simple format. So we make the assumption that any function Πe is piecewise linear (see Fig. 4). As a matter of fact, if the additional vehicle V moves across arc e at a speed v less than the maximal speed vmaxe, the induced risk will decrease. We assume that, if V traverses arc e during some interval [t, t + dt] at speed v ≤ vmaxe, then the related Expected Damage is given by Eq. 1:

Risk(v, t) = Φ(v/vmaxe)·Πe(t)·dt    (1)

where Φ is an increasing convex function with values in [0, 1], such that for any u, Φ(u) ≪ u, which means that Φ(u) is significantly smaller than u. The meaning


Fig. 4 A piecewise function Πe

of the condition Φ(u) ≪ u is that, since going slower implies for vehicle V a larger traversal time, Φ(u) ≪ u also implies that the risk induced by the traversal of e decreases while the speed decreases, even if the traversal time increases. In the next sections, we shall set Φ(u) = u². It comes that if vehicle V moves across arc e between time T and time T + δ, according to a speed function t → v(t), then the related Expected Damage is given by Eq. 2:

∫_T^{T+δ} Φ(v(t)/vmaxe)·Πe(t) dt    (2)

Speed Normalization: We only care here about traversal times of arcs e ∈ A, and not about their true length, in the geometric sense. So we suppose here that, for any arc e, vmaxe = 1. According to this we deal with reduced speed values u ∈ [0, 1] and L e means the minimal traversal time for arc e.
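
The piecewise-linear risk estimator of Eq. (1) and the expected damage of Eq. (2) can be sketched numerically in Python; the breakpoints, the risk values and the constant-speed assumption in the example call are illustrative, not data from the paper:

import numpy as np

def pi_e(t, breakpoints, values):
    """Piecewise linear risk estimator Pi_e(t) (linear interpolation between breakpoints)."""
    return np.interp(t, breakpoints, values)

def expected_damage(T, delta, u, breakpoints, values, n=200):
    """Numerical value of Eq. (2) for a constant reduced speed u, with Phi(u) = u**2."""
    ts = np.linspace(T, T + delta, n)
    return float(np.trapz(u ** 2 * pi_e(ts, breakpoints, values), ts))

# Hypothetical profile: the risk ramps up between t = 2 and t = 4, then stays high.
bp, val = [0.0, 2.0, 4.0, 10.0], [0.1, 0.1, 2.0, 2.0]
print(expected_damage(T=1.0, delta=1.0, u=1.0, breakpoints=bp, values=val))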

2.2 Routing Strategies and the SPR Problem

Let us suppose now that an origin o and a destination d are given, which are both nodes of the transit network G = (N, A). A routing strategy from o to d for the additional vehicle V is going to be defined by a pair (Γ, u), where:
• Γ is a path from o to d in the network G;
• u is a speed function which, to any time value t ≥ 0, makes correspond the reduced speed u(t) ≤ 1 of the vehicle V.

Notice that if we refer to the previously described speed normalization process, the true speed v(t) of V is going to depend on the arc e where V is located. Such a routing strategy (Γ, u) being given, path Γ may be viewed in a standard way as a sequence e1, …, en of arcs of G. If we set t0 = 0 and denote by ti the time when V arrives at the end-node of ei, then the values ti are completely determined by the speed function t → u(t). Then we set:


• Time(Γ, u) = tn = global duration induced by the routing strategy (Γ, u);
• Risk(Γ, u) = Σ_i ∫_{t(i−1)}^{t(i)} Φ(u(t))·Πei(t) dt = global risk induced by (Γ, u).

Then the SPR: Shortest Path Under Risk comes in a natural way as follows: {SPR: Shortest Path Under Risk: Given origin o and destination d, together with some threshold Rmax , compute a routing strategy (Γ, u)such that Risk(Γ, u) ≤ Rmax and T ime(Γ, u) is the smallest possible}.

2.3 Some Structural Results

As it is stated, SPR looks more like an optimal control problem than like a combinatorial one. But, as we are going to show now, we may impose restrictions on the speed function u which are going to make the SPR model get closer to a discrete decision model.

Proposition 1 An optimal solution (Γ, u) of SPR may be chosen in such a way that u is piecewise constant, with breakpoints related to the times ti when vehicle V arrives at the end-nodes of the arcs ei, i = 1, …, n, and to the breakpoints of the functions Πei, i = 1, …, n.

Proof Let us suppose that V is moving along some arc e = ei, and that δ1, δ2 are 2 consecutive breakpoints in the above sense. If u(t) is not constant between δ1 and δ2 then we may replace u(t) by the mean value u* of the function t → u(t) between δ1 and δ2. The time value Time(Γ, u) remains unchanged, while the risk value Risk(Γ, u) decreases because of the convexity of the function Φ. So we conclude.

Proposition 2 If an optimal SPR trajectory (Γ, u) is such that u(t) = 1 at some t, then Risk(Γ, u) = Rmax.

Proof Let us suppose that path Γ is a sequence e1, …, en of arcs of G. We proceed by induction on n.
• First case: n = 1. Let us suppose the above assertion to be false. The breakpoints of e = e1 may be written t0 = 0, t1, …, tQ = Time(Γ, u), and we may set:
  ◦ q0 = the largest q such that u < 1 between tq and tq+1;
  ◦ u0 = the related speed; l0 = the distance covered by V at time tq0.
Let us increase u0 by ε > 0, such that u0 + ε ≤ 1 and such that the induced additional risk taken between tq0 and tq0+1 does not exceed Rmax − Risk(Γ, u). Then, at time tq0+1, vehicle V has covered a distance l > l0. If l < Le, then it keeps on at speed u = 1, and so arrives at the end of e before time tQ, without having exceeded the risk threshold Rmax. We conclude.


• Second case: n > 1. Let us suppose the above assertion to be false and denote by R1 the risk taken at the end of arc e1, and by t1 the related time value. Induction applied to the arcs e2, …, en and the risk threshold Rmax − R1 implies that the speed of V is equal to 1 all along the arcs e2, …, en. Let us denote by τ0 = 0, τ1, …, τQ the breakpoints of e1 which are between 0 and t1, let us set τQ+1 = t1, and:
  ◦ q0 = the largest q such that u < 1 between τq and τq+1;
  ◦ u0 = the related speed; l0 = the distance covered by V at time τq0+1.
Then we increase u0 by ε > 0, such that u0 + ε ≤ 1 and such that the induced additional risk taken between τq0 and τq0+1 does not exceed (Rmax − Risk(Γ, u))/2. While moving at speed u0 + ε along e1, vehicle V faces 2 possibilities: either it arrives at the end of e1 before time τq0+1, or it may keep on moving from time τq0+1 on along e1 at speed u = 1. In any case, it reaches the end of e1 at some time t1 − β, β > 0, with an additional risk no larger than (Rmax − Risk(Γ, u))/2. So, for any i = 2, …, n we compute a speed value ui such that moving along ei at speed ui between ti−1 − β and ti−1 does not induce an additional risk of more than (Rmax − Risk(Γ, u))/2n. So we apply to V the following strategy: move as described above on arc e1 and next, for any i = 2, …, n, move along ei at speed ui between ti−1 − β and ti−1 and next at speed 1 until the end of ei. The additional risk induced by this strategy cannot exceed Rmax − Risk(Γ, u). On the other side, this strategy makes vehicle V achieve its trip strictly before time tn. We conclude.


Remark 1 In case Φ(u) = u², the above equality Φ'(uq)·Π^ei_q = λ becomes uq·Π^ei_q = λ/2, where uq·Π^ei_q is the instantaneous risk per distance value dR/dL at the times when V moves along ei between δq and δq+1.

2.4 A Consequence: Risk Versus Distance Reformulation of the SPR Model

Remark 1 leads us to define the Risk versus Distance coefficient of an arc ei as the value Φ'(uq)·Π^ei_q / 2 involved in Proposition 3 and Remark 1. This remark, combined with Proposition 1, allows us to significantly simplify SPR. We define a risk versus distance strategy as a pair (Γ, λ^RD) where:
• Γ is a path, that means a sequence {e1, ..., en} of arcs, which connects the origin node o to the destination node d;
• λ^RD associates, with any arc e in Γ, a Risk versus Distance coefficient λ^RD_e = Φ'(u)·Π_e / 2. In case Φ(u) = u², this coefficient equals u·Π_e, i.e. the amount of risk per distance unit induced on arc e, at any time t such that u(t) < 1, by any trajectory (Γ, u) which satisfies Proposition 3.
Let us suppose that we follow a trajectory (Γ, u) which meets Proposition 3 and that we know the value λ^RD_e for any arc e of Γ. Since Φ is supposed to be convex and increasing, its derivative Φ' admits a reciprocal function Φ'^(-1). Then, at any time t when vehicle V is inside arc e, we are able to reconstruct the speed value as Φ'^(-1)(2λ^RD_e / Π^e(t)) whenever this value does not exceed 1, and as 1 otherwise.

2.5 Discussion About the Complexity

Consider an example instance with Rmax = 3/4 and the function Φ : u → Φ(u) = u². Then we see that vehicle V must go fast all along the arc e1, in order to get out of e1 before this arc becomes very risky. That means that its speed is equal to 1 on e1 and that its risk per distance value is equal to 1/2. Next it puts on the brake in the risk sense: its speed remains equal to 1, but its risk per distance value decreases to 1/4. It is easy to check that this routing strategy is the best one, with Risk(Γ, u) = 3/4 and Time(Γ, u) = 2. Still, identifying the complexity of SPR is not that simple, since we are dealing with continuous variables. As a matter of fact, the complexity also depends on the function Φ. We conjecture that:

Conjecture 1 If Φ(u) = u², then SPR is in NP and is NP-hard.

3 Algorithms

Our algorithms all rely on the notions of state and decision. A state is a 3-tuple (i, T, R), where:
• i is a node of G where vehicle V is currently located;
• T is the time spent in order to reach i, and R is the amount of risk induced by this process of moving from the origin o to the node i.
Then a decision will consist in:
• choosing the arc e = (i, i^o) along which the vehicle is going to move;
• choosing some parameter λ which is going to determine the speed function u along the arc e.
The previous Sect. 2 suggests the use of the risk versus distance coefficient λ^RD_e as the decision parameter λ. But other choices are possible. We restrict ourselves to the case when Φ(u) = u².
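To fix ideas, here is a minimal, illustrative sketch of how such states and decisions may be represented; it is our own sketch (class and field names are ours), not the chapter's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    node: int     # node i of the transit network G where vehicle V currently is
    time: float   # T: time spent to reach node i from the origin o
    risk: float   # R: risk accumulated while moving from o to i

@dataclass(frozen=True)
class Decision:
    arc: tuple    # arc e = (i, i_o) along which the vehicle will move next
    lam: float    # parameter fixing the speed function u along e
    mode: str     # "RD" (risk/distance), "SP" (mean speed) or "RS" (risk speed)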

3.1 Decision Scheme

As told above, a natural approach is to refer to Proposition 3 and consider λ = λ^RD as expressing the mean Risk versus Distance coefficient Φ'(u)Π^e / 2. But another intuitive approach is to consider λ = λ^SP as expressing the mean speed of V along e, and to deduce this way the arrival time at i^o in a straightforward way. Finally, we may also


consider that λ = λ^RS expresses the mean Risk Speed of V along e, which means the amount of risk vehicle V takes per time unit as it advances along e. We are going to describe here those 3 possibilities, together with the way the resulting state (i^o, T^o, R^o) may be deduced from λ and (i, T, R).
• First approach: the Risk versus Distance approach. Since Φ(u) = u², we have Φ'(u(t))Π^e(t)/2 = u(t)Π^e(t) for any t during the traversal of e. It comes that, if we fix λ^RD, the speed value u(t) is given by u(t) = Inf(1, λ^RD/Π^e(t)). The resulting state (i^o, T^o, R^o) will be obtained from λ^RD and (i, T, R) through the following iterative process:
Risk_Distance Transition procedure:
Let us set t0 = T, and let us denote by t1, ..., tQ the breakpoints of Π^e which are larger than T and by Π^e_0, ..., Π^e_Q the related Π^e values.
Initialization: t ← t0; r ← R; L ← 0; q ← 0; NotStop;
While NotStop do
  π ← Π^e_q; q ← q + 1; δ ← tq − t; u ← Inf(1, λ^RD/π);
  If Le > L + uδ then L ← L + uδ; r ← r + Φ(u)πδ; t ← tq;
  Else δ ← (Le − L)/u; t ← t + δ; r ← r + Φ(u)πδ; L ← Le; Stop;
R^o ← r; T^o ← t; If R^o > Rmax then Fail else Success;
• Second approach: the Mean Speed approach. Fixing λ^SP means fixing the time T^o as T^o = T + Le/λ^SP. In order to determine the function t → u(t) and the value R^o, we solve the following quadratic program:
Mean_Speed Program: Let us denote by t0 = T, t1, ..., tQ = T^o the breakpoints of Π^e which belong to [T, T^o] and by Π^e_1, ..., Π^e_Q the related Π^e values. Then we must compute speed values u1, ..., uQ ∈ [0, 1] which satisfy Σ_q uq·(tq − tq−1) = Le and which minimize Σ_q uq²·Π^e_q·(tq − tq−1).
This quadratic convex program may be solved through a direct application of the first-order Kuhn-Tucker conditions for local optimality. Then we get R^o by setting R^o = R + Σ_q uq²·Π^e_q·(tq − tq−1). If R^o > Rmax, then the Mean Speed transition related to λ^SP yields a Fail result.
• Third approach: the Risk Speed approach. Since Φ(u) = u², at any time t during the traversal of e the related risk speed dR/dt is equal to u(t)²·Π^e(t). It comes that, if we fix λ^RS, we get u(t) = Inf(1, (λ^RS/Π^e(t))^(1/2)).


Fig. 5 Detour operator

The resulting state (i^o, T^o, R^o) will be obtained from λ^RS and (i, T, R) through the same iterative process as for the Risk versus Distance approach.
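As an illustration of the Risk_Distance transition just described, the following minimal Python sketch (our own, hypothetical code assuming Φ(u) = u² and a piecewise-constant Π^e given by its breakpoints; it is not the chapter's implementation) computes (T^o, R^o) from λ^RD and (T, R):

def risk_distance_transition(L_e, pi_breaks, lam_rd, T, R, R_max):
    """Traverse arc e with risk/distance coefficient lam_rd > 0, starting at time T
    with accumulated risk R.  pi_breaks is a list of (t_q, pi_q) pairs giving the
    piecewise-constant values of Pi^e from time t_q on; it must cover [T, +inf).
    Returns (T_o, R_o, success)."""
    t, r, covered = T, R, 0.0
    for q, (t_q, pi) in enumerate(pi_breaks):
        t_next = pi_breaks[q + 1][0] if q + 1 < len(pi_breaks) else float("inf")
        if t_next <= t:                                # breakpoint already passed
            continue
        u = min(1.0, lam_rd / pi) if pi > 0 else 1.0   # speed on this piece
        delta = t_next - t
        if covered + u * delta < L_e:                  # arc not finished yet
            covered += u * delta
            r += (u ** 2) * pi * delta                 # Phi(u) = u^2
            t = t_next
        else:                                          # arc ends inside this piece
            delta = (L_e - covered) / u
            t += delta
            r += (u ** 2) * pi * delta
            covered = L_e
            break
    return t, r, r <= R_max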

3.2 A Local Search Algorithm Involving Dynamic Programming

This local search heuristic LS_SPR works in 2 steps:
LS_SPR Algorithm:
Initialize Γ as the shortest path according to L from o to d; NotStop;
While NotStop do
  First step: Evaluate Γ, and get the arrival time Ti of vehicle V at any node i of Γ;
  Second step: Update Γ;
Keep the best path solution Γ ever obtained.
Several kinds of controls may be applied to the above process: one may, for instance, perform a random-walk descent. In any case, we need to discuss both the Update and the Evaluate steps.
• Update step: It relies on a pre-process which is applied to the transit network G and involves some proximity threshold S_Prox. For any two nodes i, j of G such that L*_{i,j} ≤ S_Prox, we pre-compute a collection Path_{i,j} of elementary paths from i to j. This provides us with an operator Detour, which acts on any path Γ through parameters i, j, γ as follows (Fig. 5):
◦ i, j are nodes of Γ such that i precedes j in Γ; γ is some path in Path_{i,j};
◦ Detour(Γ, i, j, γ) replaces the restriction Γ_{i,j} of Γ from i to j by the path γ.
Since Detour may admit a rather large number of parameter values (i, j, γ), we first identify pairs of nodes (i, j) in Γ such that the slowdown coefficient (Tj − Ti)/L*_{i,j} is large, and pick up such a pair (i, j). Next we choose a path γ in Path_{i,j} under the


condition that it is not very crowded between times Ti and Tj, that means such that the sum, over the arcs e of γ, of the mean Π^e(t) values between times Ti and Tj is small.
• Evaluation step: This evaluation step relies on a dynamic programming procedure DP_Evaluate whose main features come as follows:
◦ Let us denote by e1, ..., en the arcs of Γ and by i0, ..., in the related nodes. The time space of DP_Evaluate then comes in a natural way as the set {0, 1, ..., n}, and a state at time q = 0, 1, ..., n is a pair (T, R), where T means the time when vehicle V arrives at iq, and R the cumulative risk at this time. Clearly, the initial state is (0, 0) and a final state should be any pair (T, R) such that R ≤ Rmax.
◦ Then a decision at time q becomes a value λ (λ^RD, λ^SP or λ^RS) in the sense of Sect. 3.1, and such a decision induces a transition (q, R, T) → (q + 1, R^o, T^o) as described in Sect. 3.1, with cost R^o − R. This decision is feasible if it does not induce a Fail result.
According to this, the Bellman principle may be applied: the algorithm DP_Evaluate scans the time space {0, 1, ..., n} and, for any q = 0, 1, ..., n, computes the related state set State[q], according to the following instructions:
◦ Initialize State[0] as {(0, 0)} and State[q] as Nil for any q > 0;
◦ For q = 0, ..., n − 1 do
  Generate the decision set Λ; (I1)
  For any λ in Λ and any state (T, R) in State[q] do
    Compute (in case λ is feasible) the resulting state (T^o, R^o);
    If there does not exist (t1, R1) in State[q + 1] such that t1 ≤ T^o and R1 ≤ R^o, then insert (T^o, R^o) into State[q + 1] and remove from State[q + 1] any (t1, R1) such that t1 ≥ T^o and R1 ≥ R^o.
In case we are already provided with some feasible SPR solution (Γ*, u*) with value T*, then we may apply the following filtering rule:
◦ Lower Bound Based Filtering Rule: Let (T^o, R^o) be the state involved in instruction (I1). If T^o + L*_{i(q+1),d} ≥ T*, then the state (T^o, R^o) may be killed: we do not insert it into State[q + 1], since we cannot expect it to be extended into a better solution than the current solution (Γ*, u*).

Remark 2 We turn the LS_SPR algorithm into a greedy algorithm by removing the update step and by generating Λ in such a way that its cardinality is 1.
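A minimal sketch of the DP_Evaluate recursion with Bellman/Pareto filtering could look as follows; this is our own illustrative code (the transition function is assumed to behave like the Risk_Distance transition sketched above), not the authors' implementation:

def dp_evaluate(arcs, decisions, transition, R_max):
    """arcs: consecutive arcs e_1..e_n of the path Gamma.
    decisions: finite decision set Lambda (values of lambda).
    transition(arc, lam, T, R) -> (T_o, R_o, ok).
    Returns the non-dominated set of final states (T, R)."""
    states = [(0.0, 0.0)]                      # initial state at node i_0
    for arc in arcs:
        nxt = []
        for (T, R) in states:
            for lam in decisions:
                T_o, R_o, ok = transition(arc, lam, T, R)
                if not ok or R_o > R_max:
                    continue                   # Fail: decision not feasible
                # keep only non-dominated (T_o, R_o) pairs (Bellman principle)
                if any(t1 <= T_o and r1 <= R_o for (t1, r1) in nxt):
                    continue
                nxt = [(t1, r1) for (t1, r1) in nxt
                       if not (t1 >= T_o and r1 >= R_o)] + [(T_o, R_o)]
        states = nxt
    return states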

3.3 An A* Algorithm

The A* algorithm (Nilsson 1975) was designed in order to deal with path search for robots evolving in very large (possibly infinite) state spaces. It can be adapted to our


problem, since solving SPR means searching for a shortest path in a risk-expanded network whose nodes are the triples (i, T, R), i ∈ N, 0 ≤ R ≤ Rmax, T ≥ 0, and whose arcs correspond to the transitions ((i, T, R) → decision λ → (i^o, T^o, R^o)) described in Sect. 3.1. In the present case, it relies on the following data structures:
• An expansion list LE, which contains states (i, T, R) ordered according to increasing optimistic estimation value W. The optimistic estimation value of a state (i, T, R) is equal to T + L*_{i,d} and provides us with a lower bound on the best possible value of an SPR solution (Γ, u) which would extend the path that allowed us to reach state (i, T, R).
• A pivot list LPivot, which contains the states (i, T, R), together with their optimistic estimation value W, which already appeared as the first element (Head) of LE.
An element (i, T, R) which is dominated by another element (i, T1, R1) of LPivot ∪ LE, that means such that T1 ≤ T and R1 ≤ R, should not exist in LE. Then the A*_SPR algorithm may be described as follows:
A*_SPR Algorithm:
Initialize LPivot as Nil and LE as {(o, 0, 0)}; NotStop;
While (NotStop) ∧ (LE ≠ Nil) do
  (i, T, R) ← Head(LE);
  If i = d then Stop; Retrieve the SPR solution Γ related to (i, T, R);
  Else
    Remove (i, T, R) from LE and insert it into LPivot;
    (I2) Generate the λ-decision set Λ;
    For any arc e = (i, i^o) and any λ in Λ do
      Compute the resulting state (i^o, T^o, R^o) together with the value W^o = T^o + L*_{i^o,d};
      If R^o ≤ Rmax and there does not exist (i^o, T1, R1) in LPivot ∪ LE such that T1 ≤ T^o and R1 ≤ R^o, then insert (i^o, T^o, R^o) into LE and remove from LE any (i^o, T1, R1) such that T1 ≥ T^o and R1 ≥ R^o. Do it in such a way that LE remains ordered according to the optimistic estimation values W;

Remark 3 If we are able to generate all decisions likely to appear inside a given optimal decision sequence, then the above algorithm A*_SPR is optimal.

Remark 4 We turn the A*_SPR algorithm into a standard shortest path algorithm by reducing Λ to 1 element.
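A compact sketch of A*_SPR in Python may help fix ideas; it is ours and simplified (in particular, dominance checks against the expansion list LE itself are omitted), under the assumption that a transition function and a shortest-distance lower bound dist_lb are available:

import heapq

def a_star_spr(origin, dest, out_arcs, decisions, transition, dist_lb, R_max):
    """out_arcs(i): arcs leaving node i; dist_lb(i): L*-distance from i to dest.
    transition(arc, lam, T, R) -> (node_o, T_o, R_o, ok).
    Returns the best arrival time found, or None."""
    # expansion list LE ordered by the optimistic estimation W = T + dist_lb(i)
    le = [(dist_lb(origin), origin, 0.0, 0.0)]
    pivots = []                                   # states already expanded (LPivot)
    while le:
        W, i, T, R = heapq.heappop(le)
        if i == dest:
            return T                              # optimal if Lambda is rich enough
        pivots.append((i, T, R))
        for arc in out_arcs(i):
            for lam in decisions:
                i_o, T_o, R_o, ok = transition(arc, lam, T, R)
                if not ok or R_o > R_max:
                    continue
                # dominance check against LPivot (checks against LE omitted here)
                dominated = any(i1 == i_o and T1 <= T_o and R1 <= R_o
                                for (i1, T1, R1) in pivots)
                if not dominated:
                    heapq.heappush(le, (T_o + dist_lb(i_o), i_o, T_o, R_o))
    return None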

3.4 Discussion: The Decision Set Λ

Both above algorithms rely on an instruction "Generate the λ-decision set Λ". But the λ values are continuous ones, so we must decide about the way we generate a finite λ-decision set Λ.


The simplest case is the one when we deal with Risk versus Distance decisions λ^RD, since in such a case Propositions 2 and 3 suggest that a mean value for λ^RD is given by λ^RD_mean = Rmax / L*_{o,d}. Then a natural way to generate Λ is to fix an odd number 2K + 1 of λ^RD values and a geometric step value δ > 0, and to set:

Λ = {λ^RD_mean} ∪ {(1 + δ)^k · λ^RD_mean, k = 1, ..., K} ∪ {(1 + δ)^(−k) · λ^RD_mean, k = 1, ..., K}   (4)

According to this, Λ is determined by K and δ. We may consider K as a flexible parameter. As for the choice of the value δ, it becomes determined by K and by the minimal and maximal values λ^RD_min = (1 + δ)^(−K) λ^RD_mean and λ^RD_max = (1 + δ)^K λ^RD_mean which we want to assign to λ^RD. If we want to allow the vehicle to move with a speed twice as large as the speed suggested by λ^RD_mean, then we see that we must choose a value λ^RD_max = 2λ^RD_mean. That means that δ is determined by the acceleration coefficient ρ we may want to apply to the vehicle in order to make it possible for it to reach the end of an arc before some risky situation occurs on this arc. We shall test for instance ρ ∈ {2, 3, 4}. In the case of Risk Speed and Mean Speed decisions λ^RS and λ^SP, we must arbitrarily fix mean values λ^RS_mean and λ^SP_mean and try to learn them throughout the computational process. This opens the way to the next section.
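For illustration, formula (4) may be generated as in the following sketch (our own code; the relation (1 + δ)^K = ρ is an assumption consistent with the role of the acceleration coefficient described above):

def decision_set(R_max, L_star, K, rho):
    """Generate the 2K+1 risk-versus-distance decisions of Eq. (4).
    The geometric step delta is chosen so that (1 + delta)**K = rho,
    i.e. the largest decision equals rho times the mean one."""
    lam_mean = R_max / L_star                 # lambda^RD_mean = Rmax / L*_{o,d}
    delta = rho ** (1.0 / K) - 1.0
    return sorted(lam_mean * (1.0 + delta) ** k for k in range(-K, K + 1))

# Example with hypothetical values: K = 10 and acceleration coefficient rho = 4
# Lambda = decision_set(R_max=40.0, L_star=100.0, K=10, rho=4.0)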

4 Speeding Up Algorithms Through Statistical Learning Techniques

We consider here two ways of speeding up our algorithms in order to fit dynamic contexts. The first one imposes a small number K of possible decisions, and the second one bounds the number of states (T, R) related to any node i. We do it while focusing on the case λ = λ^RD.

4.1 Bounding Decisions

Once the acceleration parameter ρ has been tuned, controlling the size of the decision set Λ means fixing the value K. If we set K = 1 (greedy algorithm in the case of DP_Evaluate, and shortest path algorithm in the case of A*_SPR), then the choice is about λ^RD_mean, which, in a first approach, should be equal to Rmax / L*_{o,d}. If K > 1, then we apply the following statistical learning process:
• We apply DP_Evaluate to instances which fit the parameter ρ, while using some reference decision number K_ref. For any instance I, we retrieve the optimal decision sequence {λ1, ..., λn}; every decision λi is related to some number ki in {−K_ref, ..., 0, ..., K_ref}.


• Then we compute, for every value k in {−K_ref, ..., 0, ..., K_ref}, the percentage τ(k) of occurrences of k in those decision sequences.
• Finally, K being the target decision number, we split the decision range [λ^RD_min, λ^RD_max] for possible decisions λ^RD into 2(K + 1) intervals corresponding to the same percentages of decisions λi. For instance, if K = 1, we split the interval [λ_min, λ_max] into 4 intervals [λ_min, λ^(1/4)], [λ^(1/4), λ^(1/2)], [λ^(1/2), λ^(3/4)] and [λ^(3/4), λ_max] in such a way that:
◦ 1/4 of the decisions λi belong to the interval [λ_min, λ^(1/4)];
◦ 1/4 of the decisions λi belong to the interval [λ^(1/4), λ^(1/2)];
◦ 1/4 of the decisions λi belong to the interval [λ^(1/2), λ^(3/4)];
◦ 1/4 of the decisions λi belong to the interval [λ^(3/4), λ_max].
Then the restricted Λ becomes the set {λ^(1/4), λ^(1/2), λ^(3/4)}.
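As a sketch (our own code, assuming the optimal decisions have been collected beforehand with some large reference number K_ref), this quantile-based restriction may be written as:

import numpy as np

def restricted_decisions(observed_lams, K):
    """Learn a small decision set from decisions observed in optimal sequences.
    The observed decisions are split into 2(K+1) equal-mass intervals and the
    2K+1 interior quantiles are kept, e.g. for K = 1 the three quartiles."""
    probs = [m / (2.0 * (K + 1)) for m in range(1, 2 * K + 2)]
    return list(np.quantile(observed_lams, probs))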

4.2 Bounding States

In order to filter the state set State[i] related to a given node i and impose a pre-fixed upper bound S on the cardinality of State[i], several techniques may be applied. One may, for instance, consider as equivalent two states (T, R) and (T', R') if |T − T'| + |R − R'| does not exceed some rounding threshold. We are not going to follow this approach, which does not guarantee that the cardinality of State[i] stays below the imposed threshold S. Instead, we are going to act as if there existed a natural conversion rate ω which turns risk into time. According to this, we rank the pairs (T, R) of State[i] according to increasing values of ωT + R and keep the S best ones according to this ordering, while killing the others. The key issue here is the value of ω. Intuitively, ω should be equal to Rmax/T^o, where T^o is the optimal SPR value, and we should be able to learn this value as a function of the main characteristics of SPR instances: the most relevant characteristics seem to be the risk threshold Rmax, the (expected) length L* of the path Γ, the mean value Δ of the functions Π^e, e ∈ A, and the frequency B of the breakpoints of those functions. We may notice that in case all functions Π^e are constant and equal to some value Δ, the speed u is going to be constant and equal to Rmax/(L*·Δ), so that the time value T^o will be equal to L*/u = Δ·L*²/Rmax. This leads us to initialize ω as ω = Rmax²/(Δ·L*²). In order to refine this initial choice for ω we retrieve, for any instance, the optimal decision sequence {λ1, ..., λn} and the related state sequence s1 = (T1, R1), ..., sn = (Tn, Rn). Then, for any such instance, we look for the value ω which statistically makes the states si always be among the best ones for the ranking related to the quantity ωT + R. As a matter of fact, while performing the numerical experiments, we focus on an estimation of the optimal value T^o as a function of Rmax, L*, B and Δ, and next test the ranking of the states si among the state subsets State[i] for ω = Rmax/T^o.
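A minimal sketch of this state-bounding rule follows (our own code; the initialization of ω shown in the comment assumes constant risk functions Π^e ≡ Δ, as discussed above):

def bound_states(states, S, omega):
    """Keep at most S states (T, R) per node, ranked by the aggregated
    cost omega*T + R, omega being a learned risk-to-time conversion rate."""
    return sorted(states, key=lambda s: omega * s[0] + s[1])[:S]

# A natural initial value under constant risk functions of mean value Delta:
# omega = R_max / T_o  with  T_o = Delta * L_star**2 / R_max,
# i.e. omega = R_max**2 / (Delta * L_star**2)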

4.2.1 Bounding States Through Reinforcement Learning

Still, this way of performing learning may induce distortions. A lack of flexibility in the pruning procedure, associated with a not fully well-fitted value ω, may yield, for a given node i, a collection State[i] that is poorly balanced, in the sense that one would expect the related values (T, R) to distribute themselves as a wide Pareto set. More precisely, we may qualify a pair (T, R) as risky if R / Σ_{j≥i+1} L_j is large with respect to Rmax/L*, or as cautious if the converse holds. Then it may happen that our pruning technique yields pairs (T, R) which, taken as a whole, are either too risky or too cautious. In order to control this kind of side-effect, we make the ω value auto-adaptive. More precisely, we start, as previously explained, from some pre-learned ω value and make it evolve through Reinforcement Learning, that means throughout one (or several) execution(s) of DP_Evaluate (or A*_SPR or LS_SPR). In order to explain it better, we focus on the case of the DP_Evaluate algorithm, while supposing that the state threshold S has been fixed and that the initial value ω has been computed as described above. So let us suppose that, at some time during the process, we just dealt with arc ei and so computed the current state set State[i], while updating the value ω. Applying the decisions of Λ and filtering the resulting states (T, R) through the Bellman principle provides us with a state subset State[i + 1] whose size is likely to exceed S. Then we rank the states (T, R) of State[i + 1] according to the ωT + R values. Ideally, the S best states (T, R) ordered this way should be balanced, in the sense that risky states should get along with cautious ones, or, in other words, that the ratio R/Rmax should be centered around the ratio L*_{0,i}/L*. If, for instance, those values are centered significantly above this ratio, then we are moving in a too risky way and must make ω decrease. Conversely, if those best values are centered below this ratio, then we are too careful and must make ω increase. We implement this principle by performing a kind of statistical analysis of those best values in State[i + 1], in order to derive, from those S best states (T, R), an indicator Risk_Balance, which takes the symbolic values {Risky, Normal, Cautious} depending on the way the mean R/Rmax value is located with respect to L*_{0,i}/L*. Then our filter&learn procedure Filter_Learn works as follows:
Filter_Learn Procedure:
Rank the states (T, R) of State[i + 1] according to the ωT + R values;
Select the S best pairs (T, R) according to this ranking and compute Risk_Balance;
If Risk_Balance = Normal then keep only the S best states in State[i + 1];
If Risk_Balance = Risky then split State[i + 1] into 2 subsets S1 and S2 with the same size: S1 is made of the best states (T, R) according to our ranking and S2 = State[i + 1] − S1; keep only the S/2 best states of S1 and the S/2 best states of S2 in State[i + 1]; make ω decrease;
If Risk_Balance = Cautious then proceed as in the previous case, while making ω increase.
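The following sketch illustrates one possible rendering of Filter_Learn (our own code; the tolerance and the update step for ω are hypothetical tuning constants not specified in the text):

def filter_learn(states, S, omega, progress, r_max, tol=0.1, step=0.1):
    """Adaptive state filtering.  states: candidate (T, R) pairs for node i+1;
    progress ~ L*_{0,i+1} / L*, the fraction of the trip already covered."""
    ranked = sorted(states, key=lambda s: omega * s[0] + s[1])
    best = ranked[:S]
    mean_ratio = sum(R for (_, R) in best) / (len(best) * r_max)   # mean R / Rmax
    half = len(ranked) // 2
    if mean_ratio > progress * (1 + tol):          # Risky: decrease omega, diversify
        omega *= 1 - step
        kept = ranked[:S // 2] + ranked[half:half + S // 2]
    elif mean_ratio < progress * (1 - tol):        # Cautious: increase omega, diversify
        omega *= 1 + step
        kept = ranked[:S // 2] + ranked[half:half + S // 2]
    else:                                          # Normal: plain truncation
        kept = best
    return kept, omega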

4.2.2 Adaptation of the State Bounding Scheme to the A*_SPR Algorithm

In the case of the A*_SPR algorithm, we apply the same principle, with the idea that, in the list LE ∪ LPivot, the number of states (i, T, R) related to a given i should not exceed some target threshold S. We structure the elements of LE and LPivot according to lists LE[i] and LPivot[i], where each list LE[i] and LPivot[i] is a list of states (T, R) in the DP_Evaluate sense, and we apply the filter&learn process only to the LE[i] lists.

5 Numerical Experiments

Goal: We perform numerical experiments with the purpose of studying
• the behavior of the static DP_Evaluate, LS_SPR and A*_SPR algorithms of Sect. 3. We pay special attention to the dependence of those algorithms on the choice of the decision mode (Mean Speed, Risk Speed, Risk versus Distance) and on the characteristics of the decision set Λ;
• the way we may efficiently turn those static algorithms into efficient dynamic algorithms through the use of the statistical and reinforcement learning techniques described in Sect. 4.
Technical Context: The algorithms were implemented in Python 3.7.6 in a Docker environment on an Intel i5-9500 CPU at 4.1 GHz.
Instances: We generated the networks (N, A) as connected symmetric partial grids, which means n*n grids modified through the removal of 25% of their arcs. Those partial grids are summarized through their number |N| of nodes and their number |A| of arcs. The length values L_e, e ∈ A, are uniformly distributed between 3 and 10. The function Φ is taken as the function u → Φ(u) = u². The functions Π^e are generated by fixing a time horizon Tmax, a mean frequency B of the breakpoints t^e_i, and an average value Δ for Π^e(t): more precisely, the values Π^e are generated within the finite set {2Δ, 3Δ/2, Δ, Δ/2, 0}. As for the threshold Rmax, we notice that if the functions Π^e are constant with value Δ and if we follow a path Γ with length L_diam, where L_diam is the diameter of the network G, at speed 1/2 = vmax/2, then the expected risk is Δ·L_diam/2. It comes that we generate Rmax as a quantity α·Δ·L_diam/2, where α is a number between 0.2 and 2. Finally, since an instance is also determined by an origin/destination pair (o, d), we denote by L* the value L*_{o,d}. Table 1 presents a package of 12 instances with their characteristics.
Outputs related to the behavior of the procedure DP_Evaluate: For every instance, the L* value is the length of the path Γ in the L sense. We apply DP_Evaluate while testing the role of the parameters λ = λ^RD, λ^SP, λ^RS, as well as K and δ. So, for every instance, we compute:
• in Table 2: the risk value R_DP^mode, the time value T_DP^mode, and the CPU times (in s) CPU^mode induced by the application of DP_Evaluate with λ^mode = λ^RD, λ^SP, λ^RS, K = 10, ρ = 4;


Table 1 Instances' characteristics

Instance  |N|  |A|  B  Δ     α    L*
1         16   88   3  2.02  0.2  34.6
2         16   60   3  2.04  1    35.7
3         16   76   3  2.02  2    42.7
4         16   80   9  1.98  0.2  32.3
5         16   76   9  2.00  1    30.2
6         16   76   9  2.00  2    43.3
7         100  560  3  1.99  0.2  108.6
8         100  580  3  1.97  1    109.5
9         100  544  3  2.01  2    113.9
10        100  520  9  2.01  0.2  124.2
11        100  528  9  2.01  1    107.6
12        100  548  9  2.00  2    104.5

Table 2 Impact of λ^mode, with K = 10 and ρ = 4

Instance  R^RD    T^RD   cpu^RD  R^SP    T^SP   cpu^SP  R^RS    T^RS   cpu^RS
1         6.74    104.0  0.48    6.90    117.7  0.73    4.58    112.6  0.47
2         36.10   37.2   0.62    35.66   38.7   0.52    34.00   38.9   0.78
3         85.48   41.3   0.72    77.57   44.7   0.86    81.28   41.7   1.02
4         5.42    99.4   0.51    6.41    148.0  1.57    3.96    102.5  0.49
5         30.07   41.5   0.97    30.18   45.0   1.99    30.18   43.7   2.03
6         84.62   34.5   1.10    79.93   37.1   1.89    86.67   36.1   2.22
7         7.99    362.8  4.02    21.58   390.2  5.66    6.27    370.1  5.28
8         107.73  144.6  5.88    107.68  177.6  7.27    91.08   164.1  9.05
9         227.87  98.3   6.61    227.57  99.6   8.47    228.39  98.2   9.26
10        11.44   410.5  8.42    74.17   423.1  13.07   8.52    388.7  13.59
11        106.38  143.7  10.16   107.61  167.6  20.06   88.08   157.5  19.35
12        203.65  87.5   10.55   202.81  91.7   19.53   207.34  86.3   23.54

• in Table 3: For the specific mode λ R D , related number State of states per node i, together with time value T R D , when K = 1, 3, 5, 7, 10 and ρ = 4; • in Table 4: For the specific mode λ R D , related number State of states per node i, together with time value T R D , when K = 10 and ρ = 1.5, 2, 3, 4, 8. CPU times are in seconds. Outputs related to the behavior of A*_SPR and LS_SPR. We test the ability A*_SPR and LS_SPR. to catch optimal solution, and observe the characteristics of resulting path. We rely on λ = λ R D , K = 10 and ρ = 4. For every instance, we compute:


Table 3 Impact of K , with ρ = 4 K 1 3 Instance T R D States T R D 1 2 3 4 5 6 7 8 9 10 11 12

111 46 43 99 51 43 362 148 111 420 171 103

10.67 17.00 13.00 13.50 15.83 15.83 23.17 25.17 25.06 24.10 28.22 26.11

108 38 41 98 44 35 365 178 107 392 147 85

5 States T R D

7 States T R D

10 States T R D

States

28.33 38.17 38.17 24.83 37.50 31.50 34.50 41.11 46.39 48.00 50.72 47.67

36.67 47.50 49.17 35.50 48.17 49.83 49.56 57.83 63.50 60.90 60.67 66.06

44.00 59.67 62.67 42.00 60.33 56.67 52.89 63.50 75.50 74.80 70.22 74.83

60.50 66.00 72.33 49.00 78.33 68.83 65.78 103.83 91.50 94.20 107.44 94.11

128 40 41 100 42 35 365 175 99 466 164 88

Table 4 Impact of ρ, with K = 10 ρ 1.5 2 3 R D R D States T States T R D Instance T 1 2 3 4 5 6 7 8 9 10 11 12

132 54 43 113 53 38 416 182 136 474 202 103

21.50 31.83 46.83 18.67 45.33 41.83 22.39 49.28 53.39 40.40 52.61 55.11

120 50 42 107 52 36 411 179 109 387 195 87

39.33 42.33 49.50 27.33 42.83 50.67 37.06 53.44 59.22 63.40 53.83 62.83

128 40 41 100 42 35 365 175 99 466 164 88

130 37 41 100 52 35 366 175 99 466 157 84

4

104 37 41 99 42 34 363 145 98 410 144 88

States

T RD

8 States T R D

States

36.67 47.50 49.17 35.50 48.17 49.83 49.56 57.83 63.50 60.90 60.67 66.06

108 40 41 96 46 36 355 172 104 368 163 95

42.17 47.33 45.50 37.67 45.17 42.50 55.39 60.28 57.56 74.85 73.06 61.94

37.83 42.83 42.33 34.83 42.00 39.50 51.78 55.11 56.72 72.15 61.89 55.06

112 46 42 96 49 37 348 168 128 367 163 113



• in Table 5: The time value T _ A∗, the risk value R_ A∗, CPU time (in s.) C PU A , ∗ the number Node of visited nodes, the number State A of generated states, and A∗ the deviation Dev between the length of resulting path Γ and L ∗ , induced by A*_SPR with λmode = λ R D , K = 10 and ρ = 4; • in Table 6: The time value TL S, CPU time (in s.) C PU L S , the number Trial of trials, the number State of generated states, and the deviation Dev L S between the length of resulting path Γ and L ∗ , which derives from applying LS_SPR with λmode = λ R D , K = 10 and ρ = 4.


Table 5 Behavior of A*_SPR, with λ^mode = λ^RD, K = 10 and ρ = 4

Instance  T_A*   R_A*    Node  cpu^A*  States^A*  Dev^A*
1         94.1   5.64    14    0.41    17.00      4.26
2         38.9   35.95   5     0.20    14.60      0.00
3         41.4   84.21   5     0.04    1.80       0.00
4         99.5   6.39    12    0.62    14.42      0.00
5         42.9   30.14   9     0.63    11.89      0.00
6         36.9   82.30   5     0.06    1.40       0.00
7         310.4  20.58   97    7.80    18.98      10.94
8         132.3  107.55  44    5.53    16.09      5.08
9         98.3   228.03  17    0.42    1.88       0.00
10        349.2  24.88   96    14.16   18.65      1.30
11        132.1  106.97  45    12.81   14.24      0.41
12        85.5   208.41  17    0.81    1.06       0.00

Table 6 Behavior of LS_SPR, with λ^mode = λ^RD, K = 10 and ρ = 4

Instance  T_LS   R_LS    Trial  cpu^LS  States^LS  Dev^LS
1         102.5  5.60    2      0.35    31.33      5.09
2         39.5   34.85   1      0.31    47.50      0.00
3         41.4   84.07   1      0.35    49.17      0.00
4         99.6   4.72    1      0.30    35.50      0.00
5         41.8   29.04   2      0.86    47.00      0.00
6         34.6   84.14   1      0.61    49.83      0.00
7         365.0  6.28    4      7.48    46.32      0.00
8         160.9  107.72  10     22.47   58.31      18.50
9         99.1   227.33  1      3.39    63.50      0.00
10        458.8  9.81    5      20.11   60.61      2.48
11        164.3  88.26   1      5.43    60.67      0.00
12        88.3   205.68  1      5.93    66.06      0.00

CPU times are in seconds.
Comments: We see that A*_SPR and LS_SPR are very close to each other with λ^mode = λ^RD. Also, they often use the shortest path, but not always, especially when working with small Rmax values.
Outputs related to the characteristics of the solutions (learning pre-process): We apply the learning devices of Sect. 4 and test the behavior of DP_Evaluate with no more than 3 possible decisions and 6 possible states for every i = 1, ..., n. For every instance, we compute in Table 7 the related risk value R_Fast_DP, the time value T_Fast_DP, and the CPU time CPU_Fast_DP. CPU times are in seconds.


Table 7 Behavior of Fast-DP_Evaluate

Instance  R_Fast_DP  T_Fast_DP  T_DP    cpu_Fast_DP
1         5.05       113.20     113.20  0.01
2         34.43      46.10      46.10   0.02
3         81.64      43.70      43.70   0.02
4         2.65       106.00     106.00  0.02
5         26.11      51.80      51.80   0.04
6         84.70      41.20      41.20   0.03
7         7.09       414.40     427.60  0.09
8         71.59      177.50     179.00  0.11
9         192.39     140.70     135.60  0.16
10        9.02       468.40     476.40  0.22
11        62.21      195.70     195.70  0.37
12        161.48     116.60     107.20  0.35

Comments: We can see that Fast-DP_Evaluate is, by far, the best method, not because of the solution it returns but because of how fast it computes a nearly equal solution.

6 Conclusion

We dealt here with a shortest path problem with risk constraints, which we handled under the prospect of fast, reactive and interactive computational requirements. In practice, however, a vehicle is scheduled in order to perform some kind of pick-up and delivery trajectory while performing retrieval and storing tasks. It follows that a challenge is to adapt the previously described models and algorithms to such a more general context. Also, there exists a demand from industrial players to use our models in order to estimate the best-fitted size of an AGV fleet and the number of autonomous vehicles inside this fleet. We plan to address those issues in the next months.

References Amazon.com, inc. Amazon Prime Air (2013). http://www.amazon.com/primeair Berbeglia, B., Cordeau, J-F., Gribkovskaïa, I., Laporte, G.: Static pick up and delivery problems: a classification scheme and survey. TOP: An Off. J. Spanish Soc. Stat. Oper. Res. 15, 1–31 (2007) Chen, L., Englund, C.: Cooperative intersection management: a survey. IEEE Trans. Intell. Transp. Syst. 17–2, 570–586 (2016) Franceschetti, A., Demir, E., Honhon, D., Van Woensel, T., Laporte, G., Stobbe, M.: A metaheuristic for the time dependent pollution-routing problem. Eur. J. Oper. Res. 259(3), 972–991 (2017)


Koes, M. et al.: Heterogeneous multi-robot coordination with spatial and temporal constraints. In: International Conferences on Artificial Intelligence, pp. 1292–1297 (2005) Krumke, S.O., Quilliot, A., Wagler, A., Wegener, J.T.: Relocation in Carsharing Systems using Flows in Time-Expanded Networks, vol. 8504, pp. 87–98. LNCS (Special Issue SEA 2014) (2014) Le-Anh, T., De Koster, M.B.: A review of design and control of automated guided vehicle systems. Eur. J. Oper. Res. 171, 1–23 (2006) Lozano, L., Medaglia, A.L.: On an exact method for the constrained shortest path problem. Comput. Oper. Res. 40, 378–384 (2013) Nilsson, J.: Artificial Intelligence. Wiley Ed, NY (1975) Park, I., Jang, G.U., Park, S., Lee, J.: Time dependent optimal routing in micro-scale emergency situations. In: 10th International Conferences on Mobile Data Management, pp. 714–719. IEEE (2009) Philippe, C., Adouane, L., Tsourdos, A., Shin, H.S., Thuilot, B.: Probability collective algorithm applied to decentralized coordination of autonomous vehicles. In: 2019 IEEE Intelligent Vehicles Symposium, pp. 1928–1934. IEEE, Paris (2019) Pimenta, V., Quilliot, A., Toussaint, H., Vigo, D.: Models and algorithms for reliability oriented DARP with autonomous vehicles. Eur. J. Operat. Res. 257(2), 601–613 (2016) Ren, Q. et al.: Cooperation of multi-robots for disaster rescue. In: Proceedings of the ISOCC Conferences, p. 133134 (2017) Ryan, C., Murphy, F., Mullins, F.: Spatial risk modelling of behavioural hotspots: risk aware paths planning for autonomous vehicles. Transp. Res. A 134, 152–163 (2020) Vivaldini, G. Tamashiro, J. Martins Junior, Becker, M.: Communication infrastructure in the centralized management system for intelligent warehouses. In: Neto, P., Moreira, A.P. et al. (eds.), WRSM 2013. CCIS, vol. 371, pp. 127–136. Springer, Heidelberg (2013) Vis, I.F.: Survey of research in the design and control of AGV systems. Eur. J. Oper. Res. 170, 677–709 (2006) Zhang, M., Batta, R., Nagi, R.: Modeling of workflow congestion and optimization of flow routing in a manufacturing/warehouse facility. Manag. Sci. 55, 267–280 (2008)

Rather “Good In, Good Out” Than “Garbage In, Garbage Out”: A Comparison of Various Discrete Subsampling Algorithms Using COVID-19 Data Without a Response Variable Lubomír Štˇepánek, Filip Habarta, Ivana Malá, and Luboš Marek

Abstract When having more initial data than needed, an appropriate selection of a subpopulation from the given dataset, i.e. subsampling, follows, usually intending to ensure that all categorical variables' levels are near-equally covered and that all numerical variables are well balanced. If a response variable in the original data is missing, the popular propensity scoring cannot be performed. This study addresses subsampling with a missing response variable that is about to be collected later, using a COVID-19 dataset (N = 698) with 18 variables of interest, reduced to a subdataset (n = 400). The quality of subpopulation selection was measured using minimization of several metrics, such as Shannon entropy or a sum of squares of each variable's category frequency, and a few others, always averaged over all variables. An exhaustive method selecting all possible combinations of n = 400 observations from the initial N = 698 observations was used as a benchmark. Choosing, from a random set of subsamples, the one that minimized the metrics returned similar results as a "forward" subselection reducing the original dataset observation-by-observation at each step by permanently lowering the metrics. Finally, k-means clustering (with a random number of clusters) of the original dataset's observations and choosing observations for a subsample from each cluster, proportionally to its size, also lowered the metrics compared to the random subsampling. All four latter approaches showed better results than a single random subsampling, considering the metric minimization. However, while the exhaustive sampling is very greedy and time-consuming, the forward one-by-one reduction of the original dataset, picking up the subsample minimizing the metric, and subsampling the clusters are feasible for selecting a well-balanced subdataset.

Keywords Subsampling · Discrete subsampling · Exhaustive subsampling · Random subsampling · Unsupervised learning · Clustering

L. Štěpánek (B) · F. Habarta · I. Malá · L. Marek
Department of Statistics and Probability, Faculty of Informatics and Statistics, University of Economics, nám. W. Churchilla 4, 130 67 Prague, Czech Republic
e-mail: [email protected]; [email protected]
F. Habarta, e-mail: [email protected]
I. Malá, e-mail: [email protected]
L. Marek, e-mail: [email protected]
L. Štěpánek
Institute of Biophysics and Informatics, First Faculty of Medicine, Charles University, Salmovská 1, Prague, Czech Republic

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_8

1 Introduction Subsampling is a method that reduces a size of a dataset by selecting a subset from the original dataset. However, in many areas, including biomedicine and many others, we often face a kind of opposite problem, i.e. we obtain a sample of only insufficient size and would need to enlarge its size. That can be done, e.g., by one of the resampling methods such as bootstrapping or others, or we need to use various inference methods to estimate properties of the entire population that our dataset comes from. While such a data size reduction could not sound meaningful for the first impression, there are various situations where subsampling makes sense or is even necessary. Usually, we can distinguish between two kinds of subsampling. Firstly, when we do the subsampling, we cannot even in theory collect all possible observations of an entire population. Or, secondly, we can gain all possible observations or, furthermore, we have already got them, but for some reason, we have to reduce the number of observations that will be utilized. A typical example of the first subsampling kind is one of the large fields of statistics, called sampling, where subsampling as a method of choice deals with an idea of an entire population and its parameters but is limited to an option of gaining data of only a (small) subset coming from the population. Then, regardless of whether the population is more or less virtual, getting the sample that belongs to the population is still a problem fulfilling the subsampling definition. The motivations for the subsampling could also be different and usually arise from any impossibility to utilize the entire original dataset, as may be true for the latter family of the subsampling problems. Thus, the rank of those motivations varies from the lack of (computational) power to analyze the entire original dataset to the lack of economic sources, making it impossible to collect all values for each observation of the original sample, e.g. populating a new (important) variable is considered to enrich the original dataset but can be done only for a limited number of observations (of the sub-selected dataset). As a motivation for our study, using online surveys, we collected an original dataset of patients suffering from COVID-19 and undergoing anti-COVID-19 vaccination. To study a time development of COVID-19 antibodies after the vaccination, it is necessary to check the blood levels of the patients’ antibodies from time to time. However, no matter how helpful would be the checking of antibodies for each patient,


our financial sources were limited (and the antibody kits for laboratory serology tests are relatively expensive), so we had to select a subset of patients from the original dataset, no greater than a maximal number of laboratory tests funded by our financial sources. Furthermore, since the subsample can be done in many ways, we wanted to keep all categories of all categorical variables well balanced, i.e., to keep their frequencies in the final subsample equal or at least near-equal. All the motivations share the demand on the quality by which the subsampling is done. As is naturally feasible, we usually want to avoid the “garbage in, garbage out”, also known as the GIGO paradigm, which means that we cannot expect great outputs whenever the inputs are of low quality. The same logic applies to subsampling if followed by whatever kind of another analysis uses the subsample as an input. Thus, the authors suggest replacing the “garbage in, garbage out” paradigm more positively with “great in, great out”. However, regardless of the primary motivation why do subsampling, there is always a demand to keep the data homogeneity in the sub-selected sample, corresponding to the original data. More technically spoken, assuming the dataset contains only categorical variables, the homogeneity means that all categories of all categorical variables are near-equally represented in the final subsample. In case there is a response variable included in the dataset, a popular and wellestablished method called propensity scoring (or propensity matching) is usually performed to identify the “best” subset of a given size that harmonizes effect sizes of individual explanatory variables (Austin 2011). Nevertheless, when a response variable is missing in the data because e.g. is planned to measure its values rather only for observations in the subsample than for the entire original sample, the logistic regression model behind the propensity scoring could not be built at all (since the response variable is not available). In such a case, the methodology that could be used for subsampling differ from naive approaches such as random sampling, even-odd sampling (Pathical and Serpen 2010), to more intuitive, rather manual than automated sampling based on matching the observations so that they are balanced in pairs (or larger groups than pairs) (Stuart 2010). In other words, when a response variable, commonly participating as a key part of the subsampling quality checking, is not available in the dataset, it could be “substituted” by a metric that might control for the quality of the subsampling process. To check how balanced the subsample is, some metrics could be used (Sahney et al. 2010). They usually assume that numerical variables—if any—were prior transformed to categorical ones following more or less complex categorization rule. There are several commonly used metrics describing the rate of the categorical variables’ levels balance in a final sample (MacKay 2003) such as entropy, mutability, Gini impurity, Simpson index, Shannon-Wiener index, and other diversity metrics. A sum of squares of categories’ frequencies also becomes very popular; it is somewhat similar to Shannon entropy but is scaled, so it cannot be greater than 1.0 at maximum. Based on the metric choice, the lower (or, the higher) is the metric’s value; the better balanced is the subsample. Thus, for example, considering Shannon entropy (or sum of squares of categories’ frequencies, respectively), a higher (a lower) value


means better balancing the subsample; i.e. the frequencies of the categories in the subsample are equal or at least near-equal. In fact, the subsampling itself is a discrete optimization task since the selection of a final subsample from the original sample may be made using a finite number of ways, but some of them are better than others, taking into account there is a given metric, checking the subsampling quality (categorical variables’ levels well balancing) that is about to be minimized (or maximized). In this study, we selected a subpopulation (n = 400) from a COVID-19 dataset (or original size N = 698) with a missing response variable, which was up to be collected later. Whereas the response variable was not available, there were 18 more (explanatory) variables of interest. First, numerical variables were categorized. We derived a general point of view on the metrics that control the the quality of the subsampling and using this approach, we refined the metrics in the same polarity, i.e. each of them should be either minimized to detect the subsampling is of good quality, or maximized to guarantee the same. The quality of a categorical variable’s levels balancing within a subsample was measured using the following metrics: a modified Shannon entropy calculated using the variable’s category frequencies, a sum of squares of the variable’s category frequency, the number of presented variable’s categories in the subsample, and by a value of the maximally populated variable’s category in the subsample. Since there were multiple variables in the sample, the metrics’ values were averaged over all variables within a given subsample. Minimizing the metric reflects the demand for keeping all the variables’ categories numerically balanced, i.e. of similar sizes. Several subset-selecting strategies were applied. Besides a single random subsampling, an exhaustive method selecting all possible combinations of n = 400 observations from initial N = 698 observations was performed, choosing the subsample that grand totally minimized the metric. Similarly, a “forward” subselection, reducing the original dataset by one observation per each step, permanently lowering the metric, was done. A repeated random subsampling enabled to model a prior distribution of the metric and helped estimate its empirical minimum, determining one given subsample. Finally, k-means clustering (with a random number of clusters) of the original dataset’s observations and choosing for a subsample from each cluster, proportionally to its size, and also based on a joint occurrence of each pair in one cluster, also lowered the metric compared to the random subsampling. The aim of this study is to demonstrate that all the approaches except for a single random subsample offer a valid alternative to exhaustive sampling grant-totally minimizing the chosen metric, and the choice of the metric itself is not so significant for the final subsample’s quality.

2 Proposed Research Methodology

The overall research methodology, the formal description of the dataset, the metrics chosen for controlling the quality of the subsampling, and the proposed subsampling methods are discussed in the following subsections. Furthermore, we derive a general framework describing how the metrics for the subsampling quality can be derived and compared.

2.1 Formal Description of a Dataset Used for Subsampling

The original dataset consists of N rows containing one observation per row and k categorical variables in columns. The subsampling task means selecting a subset of n rows and k columns, where n < N. Thus, the sampling is applied to rows, not to columns. For each i ∈ {1, 2, 3, ..., k}, the variable i contains exactly n_i categories, and the frequency of the category j is n_{i,j}. We can easily show that, for the original dataset and for the subsample,

$$\sum_{j=1}^{n_i} n_{i,j} = N \quad\text{and}\quad \sum_{j=1}^{n_i} n_{i,j} = n,$$

respectively, so the sum of the frequencies of a given variable i's categories is equal to N in the original dataset and to n in the subsample, depending on the context.
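For illustration, the frequencies n_{i,j} may be computed from a candidate subsample as in this small sketch (our own code, assuming each observation is a tuple of k category labels):

from collections import Counter

def category_frequencies(rows, k):
    """Return, for each of the k categorical variables, a Counter mapping
    each category j to its frequency n_{i,j} in the (sub)sample rows."""
    return [Counter(row[i] for row in rows) for i in range(k)]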

2.2 Metrics for Controlling the Quality of the Subsampling

The metrics for quality control enable us to quantitatively check how well the categories of a categorical variable are balanced in the final subset output by the subsampling. Since there is more than one variable in the final subsample, we consider all the variables' metric values by averaging them over the variables. We also introduce a generalized approach for deriving several metrics from one concept, which also makes it possible to get insight into some properties of the metrics, their mutual comparisons, and their respective advantages and disadvantages. The derived metrics are similar to some well-established ones such as Shannon entropy or Gini impurity (also known as the Gini index). Let us assume that the probability $p_{i,j}$ of a category $j$, for each $j \in \{1, 2, 3, \ldots, n_i\}$, of a variable $i$ containing $n_i$ categories in a final subsample of size $n$ may be estimated as $\hat{p}_{i,j} = n_{i,j}/n$. Then, let us define a variable

$$R_{i,a}(p_{i,j}) \equiv \frac{p_{i,j}^{a}}{a} \qquad (1)$$

for a given probability $p_{i,j} \in \langle 0, 1 \rangle$ and a parameter $a \in \mathbb{R} \cup \{-\infty, +\infty\}$. Finally, let us define a metric

$$M_{i,a} \equiv \sum_{j=1}^{n_i} p_{i,j}\, R_{i,a}(p_{i,j}) \qquad (2)$$

for a variable i and a parameter a ∈ ℝ ∪ {−∞, +∞}. Using the above-defined terms, in the following sections we may build the framework of metrics depicting the quality of the categorical variables' level balance in the final subset returned by the subsampling. We may also show that minimizing the metric from formula (2) within the subsampling algorithm closely corresponds to well-balanced frequencies of all n_i levels of variable i in the final subsample, as is illustrated in more detail below. The derived metrics and some of their properties follow: using formulas (1) and (2), we first modify the well-known Shannon entropy, then derive a sum of squares of each variable's category frequencies, then the number of all the variable's categories present in a subsample, and, finally, simply the absolute frequency of a variable's modal category, respectively.
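To make the framework concrete, the following sketch (our own code, not the authors' implementation) evaluates M_{i,a} for the special values of a derived in the next subsections, and averages it over the variables of a candidate subsample:

import math

def metric_m(freqs, a):
    """M_{i,a} of formula (2) for one variable; freqs = list of counts n_{i,j}."""
    n = sum(freqs)
    p = [f / n for f in freqs if f > 0]
    if a == 0:                         # modified Shannon entropy, Eq. (3)
        return sum(pj * math.log(pj) for pj in p)
    if a == float("inf"):              # absolute frequency of the modal category
        return max(freqs)
    if a == -1:                        # minus the number of represented categories
        return -len(p)
    return sum(pj * (pj ** a) / a for pj in p)   # e.g. a = 1: sum of squares

def averaged_metric(all_freqs, a):
    """Average of M_{i,a} over the k variables of a candidate subsample."""
    return sum(metric_m(f, a) for f in all_freqs) / len(all_freqs)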

2.2.1 A Modified Shannon Entropy

The Shannon entropy is defined as

$$H_i = -\sum_{j=1}^{n_i} p_{i,j} \log p_{i,j},$$

where $p_{i,j}$ is the probability of a category $j$, for each $j \in \{1, 2, 3, \ldots, n_i\}$, in a sample with $n_i$ categories of a variable $i$ (Simpson 1949). Using Jensen's inequality, we can easily prove that the upper bound of the entropy $H_i$ defined by this formula depends on the probabilities $p_{i,j}$. Also, the formula may struggle with zero probabilities, i.e. when $\exists j \in \{1, 2, 3, \ldots, n_i\}$ such that $p_{i,j} = 0$, since the term $\log p_{i,j}$ is not defined for $p_{i,j} = 0$. Assume that $a = 0$ in formula (1), i.e. $R_{i,0}(p_{i,j}) = \frac{p_{i,j}^{0}}{0} = \lim_{a \to 0} \frac{p_{i,j}^{a}}{a}$, which is in fact an undefined term. Applying l'Hospital's rule, we get

$$R_{i,0}(p_{i,j}) = \lim_{a \to 0} \frac{p_{i,j}^{a}}{a} \overset{\text{l'H}}{=} \lim_{a \to 0} \frac{p_{i,j}^{a} \log p_{i,j}}{1} = 1 \cdot \log p_{i,j} = \log p_{i,j}.$$

Subsampling Algorithms Using Data With No Response Variable

Mi,0 =

ni Σ

pi, j Ri,0 ( pi, j ) =

j=1

ni Σ

141

pi, j log pi, j =

j=1

ni Σ n i, j j=1

n

log

n i, j . n

(3)

We can easily show that Mi,0 is a negatively signed version of Shannon entropy, Mi,0 = −Hi . From theory, whereas the higher value of Shannon entropy Hi means the higher variability or diversity, i.e. the variable i’s category frequencies balance is higher, the lower value of the modified Shannon entropy Mi,0 has to mean the same. Let us derive the lower and upper bound of the modified Shannon entropy for the variable i. (i) Firstly, let us consider the first possible extreme scenario—within the variable i, the sample is populated by only one category. More technically, there is ∃ j ∗ ∈ {1, 2, 3, . . . , n i } so that n i, j ∗ = n i . Then, ∀ j ∈ {1, 2, 3, . . . , n i } \ j ∗ is n ∗ n n i, j = 0 and, eventually, i,nj = 1 and ni, j = 0. The modified Shannon entropy Mi,0 then follows the term Mi,0 =

ni Σ n i, j j=1

n

log

n i, j = n

n i, j ∗ n i, j ∗ = log + n n

Σ j∈{1,2,...,n i }\ j ∗

n i, j n i, j log = n n

= 1 · log 1 + (n i − 1) · 0 · lim log pi, j ≈ pi, j →0

≈ 1 · 0 + (n i − 1) · 0 · (−∞) ≈ ≈ 0. Thus, we derived the maximum value of the of the modified Shannon entropy Mi,0 for the variable i is equal to 0. (ii) Now suppose the other extreme scenario—all categories are equally populated in the sample and no one of the categories occurred more than once. So, in other words, n i,1 n i,2 n i,ni 1 = = ··· = = n n n n and also n i = n. The modified Shannon entropy Mi,0 then follows as Mi,0 =

ni Σ n i, j j=1

i Σ n i, j 1 1 Σ1 1 log = log = log = n n n n n n j=1 j=1

n

n

1 1 1 log = log = n n n = − log n. =n·

So, we derived the minimum value of the modified Shannon entropy Mi,0 for the variable i is equal to − log n, where n is the size of a sample containing only n categories of the variable i.

142

L. Štˇepánek et al.

Thus, we derived that for each variable i and its sample size n is the modified Shannon entropy Mi,0 of the variable’s category frequencies lower than 0 and greater than or equal to − log n, more formally − log n ≤ Mi,0 < 0. Since the subsample contain k variables, the modified Shannon entropy Mi,0 of each variable i so that i ∈ {1, 2, 3, . . . , k} should be as low as possible. To address this issue, we consider an arithmetic mean of the metrics Mi,0 over all k variables, so ni ni k k k n i, j n i, j 1Σ 1 ΣΣ 1 ΣΣ Mi,0 = pi, j log pi, j = (4) log . M¯ 0 = k i=1 k i=1 j=1 k i=1 j=1 n n

2.2.2

A Sum of Squares of Each Variable’s Category Frequency

To overcome some of difficulties of the Shannon entropy, mainly non-existing upper bound and struggling with logarithms of zeroes, we also used a sum of squares of each variable’s category frequency, inspired by Gini impurity, but may be derived using the same fashion and formulas (1) and (2) similarly as for the modified Shannon entropy. Using the finite samples, probabilities are only estimated by their frequencies, n therefore we will replace the probability pi, j by its unbiased estimate πi, j = ni, j = pˆ i, j , where n i, j is a number of occurrence of category j of variable i in the sample of size n. The Gini of the variable i’s category frequencies (Simpson 1949) then follows as ni ni ni ( Σ Σ Σ n i, j )2 Gi = 1 − πi,2 j = 1 − pˆ i,2 j = 1 − . n j=1 j=1 j=1 Assume that a = 1 in formula (1), i.e. Ri,a ( pi, j ) =

pi,a j a

= Ri,1 ( pi, j ) =

pi,1 j 1

= pi, j ,

and using the formula (2), we get a formula for the sum of squares of each variable’s category frequency, Mi,1 =

ni Σ j=1

pi, j Ri,1 ( pi, j ) =

ni Σ j=1

pi, j · pi, j =

ni Σ j=1

pi,2 j =

ni ( Σ n i, j )2 j=1

n

.

(5)

Eventually, what worth to be mentioned, is a comparison of each variable’s sum of squares Mi,1 given by formula (5) and Gini impurity. Obviously, there is a relation following Mi,1 = 1 − G i . That being written, using the Gini impurity G i in this study instead of the sum of squares Mi,1 would return exactly the same results (as far as the sign of Gini impurity is opposite than the one of the sum of squares and shifted by 1.0). The higher value of Gini impurity G i , i.e. the lower value of the the sum

Subsampling Algorithms Using Data With No Response Variable

143

of squares Mi,0 means higher variability or diversity, i.e. the variable i’s category frequencies are better balanced. Finally, when there is more than one variable, i.e. i ∈ {1, 2, 3, . . . , k} then in order to take into account for each variable’s sum of squares given by formula (5), we can calculate the average value M¯ 1 of the sums of squares for individual variables, so i i ( n i, j )2 1 ΣΣ 1 ΣΣ 1Σ Mi,1 = pi,2 j = . M¯ 1 = k i=1 k i=1 j=1 k i=1 j=1 n

k

k

n

k

n

(6)

Let us derive the lower and upper bound of the sum of squares Mi,1 for the variable i. (i) Firstly, let us consider one of the two possible extreme scenarios—the sample is populated by only one category. More technically, let us assume that ∃ j ∗ ∈ {1, 2, 3, . . . , n i } so that n i, j ∗ = n i . Then, ∀ j ∈ {1, 2, 3, . . . , n i } \ j ∗ is n i, j = 0 n ∗ n and, eventually, i,nj = 1 and ni, j = 0. The sum of squares Mi,1 then follows the term Mi,1 =

ni ( Σ n i, j )2

n ( n ∗ )2 i, j + = n

=

j=1

(n

Σ j∈{1,2,...,n i }\ j ∗

i, j

)2

n

=

= 12 + (n i − 1) · 02 = = 1. Thus, we derived the maximum value of the sum of squares Mi,1 for the variable i is equal to 1. (ii) Now suppose the other extreme scenario—all categories are equally populated in the sample and no one of the categories occurred more than once. So, in other words, n i,1 n i,2 n i,ni 1 = = ··· = = n n n n and also n i = n. The sum of squares Mi,1 then follows as Mi,1 =

ni ( Σ n i, j )2

n ( )2 1 =n· = n 1 = . n j=1

=

n i ( )2 Σ 1 j=1

n

=

n ( )2 Σ 1 j=1

n

=

144

L. Štˇepánek et al.

So, we derived the minimum value of the sum of squares Mi,1 for the variable i is equal to n1 , where n is the size of a sample containing only categories of the variable i. Concluding this up, we derived that for each variable i and its sample size n is the sum of squares Mi,1 of the variable’s category frequencies lower then or equal to 1 and greater than or equal to n1 , more formally n1 ≤ Mi,1 ≤ 1. Going back to the idea of a well-balanced subsample, all category frequencies of all variables in the subsample should be of (near) equal sizes. That is a situation very close to scenario (ii) with balanced frequencies n1 above—on the other hand, the frequencies in scenario (i) are imbalanced. Assuming this, the sum of squares Mi,1 of the variable’s category frequencies in the well-balanced subsample should be as low as possible and should approach the n1 . Finally, if all the variable would minimize their sums of squares, then also the average value M¯ 1 of all the sums of squares should be minimal. Keeping the subsample well balanced, i.e. ensuring the categories of all the variables in the subsample are of (near) equal frequencies, means lowering the average value M¯ 1 of the sums of squares as much as possible. In theory, the minimal possible value of the average value M¯ 1 of the sums of squares is i ( n i, j )2 1 Σ 1 1 ΣΣ 1 k 1 M¯ 1 = ≥ = · = . k i=1 j=1 n k i=1 n k n n

n

k

k

In practise, assuming the categories are well balanced Σ for each variable, i.e. for i n i, j = n ≈ n i · n i, j each i ∈ {1, 2, 3, . . . , k} is n i,1 ≈ n i,2 ≈ · · · ≈ n i,ni , then nj=1 n i, j n/n i 1 and so n ≈ n ≈ ni , we can expect rather ) ni ( ni ( k k n i, j )2 1 ΣΣ 1 ΣΣ 1 2 ¯  ≈ M1 = k i=1 j=1 n k i=1 j=1 n i 1Σ ≈ ni k i=1 k

2.2.3

(

1 ni

)2

1 Σ ni 1Σ 1 ≈ . 2 k i=1 n i k i=1 n i k



k

A Count of all Variable’s Categories

Using various values of parameter a ∈ R in formulas (1) and (2), we may get different metrics for subsampling quality checking with interesting interpretation. While the value a = 0 tends to the modified Shannon entropy, the value a = 1 outputs the sum of squares of each variable’s category frequency. Putting a = −1 in formula (1), we get

$$R_{i,a}(p_{i,j}) = \frac{p_{i,j}^{a}}{a} = R_{i,-1}(p_{i,j}) = \frac{p_{i,j}^{-1}}{-1} = -\frac{1}{p_{i,j}},$$

and by substituting into formula (2), we get a formula for the count of all variable $i$'s categories present in the final subsample,

$$M_{i,-1} = \sum_{j=1}^{n_i} p_{i,j} R_{i,-1}(p_{i,j}) = \sum_{j=1}^{n_i} p_{i,j}\cdot\left(-\frac{1}{p_{i,j}}\right) = \sum_{j=1}^{n_i}(-1) = n_i\cdot(-1) = -n_i. \qquad (7)$$

The $M_{i,-1}$ metric is relatively simple and easy to interpret. When the count of variable $i$'s categories in the final subset is high, i.e. $n_i$ is high, the categories tend to be well balanced since all of them are present in the subsample. Consequently, the higher $n_i$ is, the lower is $M_{i,-1} = -n_i$. So, again, to get properly distributed categories of a given variable in the final subset, we need to minimize $M_{i,-1}$. When there is more than one variable, i.e. $i \in \{1, 2, 3, \ldots, k\}$, we can calculate the average value $\bar{M}_{-1}$ of the individual variables' metrics $M_{i,-1}$, so

$$\bar{M}_{-1} = \frac{1}{k}\sum_{i=1}^{k} M_{i,-1} = \frac{1}{k}\sum_{i=1}^{k}(-n_i). \qquad (8)$$

Let us derive the lower and upper bound of the count of all variable's categories $M_{i,-1}$ for the variable $i$.

(i) As the first extreme scenario, let the sample be populated by only one category, so $n_i = 1$. The count of all variable's categories $M_{i,-1}$ is then $M_{i,-1} = -n_i = -1$. Thus, we derived that the maximum value of the count of all variable's categories $M_{i,-1}$ for the variable $i$ is equal to $-1$.

(ii) Now suppose the other extreme scenario—all categories are equally populated in the sample and none of the categories occurred more than once. So, in other words, $n_{i,1} = n_{i,2} = \cdots = n_{i,n_i} = 1$. Since $\sum_{j=1}^{n_i} n_{i,j} = n$, it is also $\sum_{j=1}^{n_i} 1 = n_i = n$, and the minimum value of the count of all variable's categories $M_{i,-1}$ is equal to $M_{i,-1} = -n_i = -n$, where $n$ is the size of a sample containing only categories of the variable $i$.

Thus, we derived that for each variable $i$ and its sample size $n$, the count of all variable's categories $M_{i,-1}$ is lower than or equal to $-1$ and greater than or equal to $-n$, more formally $-n \leq M_{i,-1} \leq -1$.
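A minimal R sketch of this metric (with illustrative names) counts only the categories actually present in the subsample for every variable and averages the negated counts.

# Sketch: M_{i,-1} = minus the number of categories present, averaged over variables
count_metric <- function(x) -as.numeric(length(unique(x)))
averaged_count_metric <- function(data) {
  mean(vapply(data, count_metric, numeric(1)))     # averaged metric M_{-1}-bar
}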

2.2.4 An Absolute Frequency of a Variable's Modal Category

Finally, if $a = \infty$, one can show that in formula (1), i.e. $R_{i,a}(p_{i,j}) = \frac{p_{i,j}^{a}}{a}$, the numerator is most affected by the highest value of $p_{i,j}$ over all $j \in \{1, 2, 3, \ldots, n_i\}$, while the denominator is infinitely high. More formally,

$$R_{i,a}(p_{i,j}) = \frac{p_{i,j}^{a}}{a} = R_{i,\infty}(p_{i,j}) = \lim_{a\to\infty}\frac{p_{i,j}^{a}}{a} \propto \max_{j\in\{1,2,3,\ldots,n_i\}} p_{i,j} = \max_{j\in\{1,2,3,\ldots,n_i\}}\frac{n_{i,j}}{n},$$

and since $n$ is constant, it is also

$$R_{i,\infty}(p_{i,j}) \propto \max_{j\in\{1,2,3,\ldots,n_i\}}\frac{n_{i,j}}{n} \propto \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j}.$$

Eventually, using Eq. (2), we get a formula for the absolute frequency of a variable's modal category,

$$M_{i,\infty} = \sum_{j=1}^{n_i} p_{i,j} R_{i,\infty}(p_{i,j}) \propto \sum_{j=1}^{n_i} p_{i,j}\cdot\max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} = \left(\sum_{j=1}^{n_i} p_{i,j}\right)\cdot\max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} = 1\cdot\max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} = \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j}. \qquad (9)$$

The metric $M_{i,\infty}$ works in the same fashion as the previous ones. The lower $M_{i,\infty} = \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j}$ is, i.e. the lower the occurrence of the most frequent category in the final subset is, the higher the frequencies of the remaining categories can be, which tends towards their frequency balance. Let us estimate the lower and upper bound of the absolute frequency of a variable's modal category $M_{i,\infty}$ for the variable $i$.

(i) Firstly, let the sample be populated by only one category. More technically, let us assume that $\exists j^{\ast} \in \{1, 2, 3, \ldots, n_i\}$ so that $n_{i,j^{\ast}} = n$. Then, $\forall j \in \{1, 2, 3, \ldots, n_i\} \setminus \{j^{\ast}\}$ it is $n_{i,j} = 0$. The absolute frequency of a variable's modal category $M_{i,\infty}$ then follows the term

$$M_{i,\infty} = \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} = n_{i,j^{\ast}} = n.$$

Thus, we derived that the maximum value of the absolute frequency of a variable's modal category $M_{i,\infty}$ for the variable $i$ is equal to $n$.

(ii) Now suppose all categories are equally populated in the sample and none of the categories occurred more than once. So, in other words,


$n_{i,1} = n_{i,2} = \cdots = n_{i,n_i} = 1$, and the absolute frequency of a variable's modal category $M_{i,\infty}$ then follows as

$$M_{i,\infty} = \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} = \max_{j\in\{1,2,3,\ldots,n_i\}} 1 = 1.$$

So, the minimum value of the absolute frequency of a variable's modal category $M_{i,\infty}$ for the variable $i$ is equal to 1. Hence, for each variable $i$ and its sample size $n$, the absolute frequency of a variable's modal category $M_{i,\infty}$ is lower than or equal to $n$ and greater than or equal to 1, more formally $1 \leq M_{i,\infty} \leq n$. For more than one variable, i.e. $i \in \{1, 2, 3, \ldots, k\}$, the average value $\bar{M}_\infty$ of the individual variables' metrics $M_{i,\infty}$ is

$$\bar{M}_\infty = \frac{1}{k}\sum_{i=1}^{k} M_{i,\infty} = \frac{1}{k}\sum_{i=1}^{k}\max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j}. \qquad (10)$$
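Putting the four metrics together, the averaged metric $\bar{M}_a$ for $a \in \{0, 1, -1, \infty\}$ can be evaluated directly from the category frequencies, e.g. by the following R sketch (the function and argument names are illustrative, not taken from our implementation).

# Sketch: averaged metric M_a-bar for a data frame of categorical variables
# a = 0 -> modified Shannon entropy, a = 1 -> sum of squares,
# a = -1 -> minus the category count, a = Inf -> modal category frequency
variable_metric <- function(x, a) {
  counts <- table(x)
  counts <- counts[counts > 0]                 # categories present in the sample
  p <- counts / sum(counts)
  if (a == 0)          return(sum(p * log(p)))              # M_{i,0}
  if (a == 1)          return(sum(p^2))                     # M_{i,1}
  if (a == -1)         return(-as.numeric(length(counts)))  # M_{i,-1}
  if (is.infinite(a))  return(as.numeric(max(counts)))      # M_{i,Inf}
  stop("unsupported value of a")
}
averaged_metric <- function(data, a) {
  mean(vapply(data, variable_metric, numeric(1), a = a))
}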

2.3 Subsampling Algorithms

We introduce various subsampling algorithms in the following parts and, furthermore, to select the optimal one, we compare their asymptotic time complexities.

2.3.1 Single Random Subsampling Without Replacement

The term random subsampling without replacement means that each observation of the original dataset has only one chance to be selected into the subsample. If we subsample the original dataset of size $N$ to a dataset of size $n$ only once, there are in theory $\binom{N}{n}$ options for how to do the random subsampling. Assuming one of the subsamples¹ minimizes the averaged metric $\bar{M}_a$, where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$, the probability of randomly hitting such a subsample is about $1/\binom{N}{n} \cong 0$ for large $N > n$. The expected value of the averaged metric $\bar{M}_a$, calculated using the obtained subsample, is between the expected values of the worst-case and the best-case scenario, so (i) for the averaged modified Shannon entropy $-\log n \leq E(\bar{M}_0) < 0$, (ii) for the averaged sums of squares of all categories' frequencies $\frac{1}{n} \leq E(\bar{M}_1) \leq 1$,


(iii) for the count of all variable's categories $-n \leq E(\bar{M}_{-1}) \leq -1$, and (iv) for the absolute frequency of a variable's modal category $1 \leq E(\bar{M}_\infty) \leq n$, respectively. The asymptotic time complexity is easy to derive, $\Theta(1)$, assuming that generating the random subset costs one unit of time.

¹ Theoretically, there could be more than one subsample with the same but minimal value of the averaged metric $\bar{M}_a$, so that $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$.
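A single random subsample and its quality check then take only a couple of lines of R; the sketch below reuses the illustrative averaged_metric() helper introduced above.

# Sketch: one random subsample of size n without replacement, scored by M_a-bar
single_random_subsample <- function(data, n, a = 1) {
  subsample <- data[sample(nrow(data), n), , drop = FALSE]
  list(subsample = subsample, metric = averaged_metric(subsample, a))
}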

2.3.2 Repeated Random Subsampling Without Replacement

Similarly to the previous approach, here we repeat the random subsampling $m > 1$ times. The repetition of the random subsampling enables us to estimate the expected value $\hat{E}(\bar{M}_a)$ of the averaged metric $\bar{M}_a$, where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$, and the standard deviation $\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}$, using the values from the $m$ obtained subsamples. Assuming Ljapunov's version of the central limit theorem, the derived variable $\frac{\bar{M}_a - \hat{E}(\bar{M}_a)}{\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}}$ follows the standard normal distribution, formally $\frac{\bar{M}_a - \hat{E}(\bar{M}_a)}{\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}} \sim N(0, 1^2)$. That helps us to estimate the minimum value of the averaged metric $\bar{M}_a$ in the following way. Supposing $\frac{\bar{M}_a - \hat{E}(\bar{M}_a)}{\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}} \sim N(0, 1^2)$ holds, we know that

$$P\left(\frac{\bar{M}_a - \hat{E}(\bar{M}_a)}{\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}} \leq u_{0.01}\right) = 0.01,$$

where $u_{0.01}$ is the 0.01-quantile of the standard normal distribution. Continuing the derivation, we get

$$P\left(\bar{M}_a \leq \hat{E}(\bar{M}_a) - |u_{0.01}|\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}\right) = 0.01, \qquad (11)$$

so, approximately, the minimum value of $\bar{M}_a$ is very likely close to the term $\hat{E}(\bar{M}_a) - |u_{0.01}|\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}$. Utilizing this piece of information, we can not only estimate the minimum value of the averaged metric $\bar{M}_a$, but can also highlight the subsample approaching this minimum value (surely it is the subsample with the minimal value of the averaged metric $\bar{M}_a$ in the set of all $m$ generated subsamples—somewhat close, from the positive direction, to the subtraction $\hat{E}(\bar{M}_a) - |u_{0.01}|\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}$). The asymptotic time complexity of the ($m$ times) repeated random subsampling without replacement is $\Theta(m)$, again assuming that generating a random subset costs one unit of time. The pseudocode of the repeated random subsampling process is in Algorithm 1.


Algorithm 1: Repeated random subsampling without replacement and estimation of the minimum value of the averaged metric $\bar{M}_a$, together with highlighting of the subsample minimizing the averaged metric $\bar{M}_a$, where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$

Data: an original dataset of size $N$ containing $k$ variables
Result: a set of $m$ random subsamples of size $n < N$, an estimate of the minimum value of the averaged metric $\bar{M}_a$, and highlighting of the subsample minimizing the averaged metric $\bar{M}_a$

    $N$            // size of the original dataset
    $n$            // size of the subsample
    $m$            // number of repetitions of subsampling
    $S$            // a tuple of subsamples of size $n$
    $A$            // a tuple of averaged metrics $\bar{M}_a$
    for $l = 1:m$ do
        generate a random subsample $s$ of size $n$ without replacement from the original dataset of size $N$ and calculate its averaged metric $\bar{M}_a$;
        $S = \{S, s\}$;
        $A = \{A, \bar{M}_a\}$;
    end
    find the minimum of $A$ and a corresponding subsample with $\bar{M}_a = \min\{A\}$;
    calculate the estimates $\hat{E}(\bar{M}_a)$ and $\widehat{\mathrm{var}}(\bar{M}_a)$;
    calculate the estimated minimum of $\bar{M}_a$ as $\hat{E}(\bar{M}_a) - |u_{0.01}|\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}$;
    compare $\min\{A\}$ and $\hat{E}(\bar{M}_a) - |u_{0.01}|\sqrt{\widehat{\mathrm{var}}(\bar{M}_a)}$;
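Under the same assumptions (the illustrative averaged_metric() helper and a chosen metric index a), Algorithm 1 can be sketched in R as follows.

# Sketch: repeated random subsampling without replacement (Algorithm 1)
repeated_random_subsampling <- function(data, n, m = 100, a = 1) {
  subsamples <- vector("list", m)
  metrics <- numeric(m)
  for (l in seq_len(m)) {
    subsamples[[l]] <- data[sample(nrow(data), n), , drop = FALSE]
    metrics[l] <- averaged_metric(subsamples[[l]], a)
  }
  estimated_min <- mean(metrics) - abs(qnorm(0.01)) * sd(metrics)   # formula (11)
  list(best_subsample = subsamples[[which.min(metrics)]],
       observed_min   = min(metrics),
       estimated_min  = estimated_min)
}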

2.3.3 Exhaustive Subsampling

The method of exhaustive subsampling is based on greedily generating all possible subsamples of size $n$ from the original dataset of size $N > n$. In theory, there are $\binom{N}{n} = \frac{N!}{n!(N-n)!}$ ways in which a subsample of size $n$ could be sampled from the dataset of size $N$. It implies there are also $\binom{N}{n}$ values of the averaged metric $\bar{M}_a$ (one value per subsample), where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$, but the values are not necessarily different. Regardless of that, this approach enables conveniently picking the subsample with the minimum possible value of the averaged metric $\bar{M}_a$ (no other subsample could practically have a lower value of the averaged metric $\bar{M}_a$). However, there is an obvious trade-off between the possibility to reach the practical minimum of the value of the averaged metric $\bar{M}_a$ and the asymptotic time complexity, which is enormous, $\Theta\left(\binom{N}{n}\right) = \Theta\left(\frac{N!}{n!(N-n)!}\right)$, assuming that generating a random subset costs one unit of time.
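For completeness, a brute-force R sketch of the exhaustive search is given below; it is feasible only for very small $N$ and $n$, which is exactly the trade-off discussed above.

# Sketch: exhaustive subsampling (combinatorial blow-up for realistic N and n)
exhaustive_subsampling <- function(data, n, a = 1) {
  index_sets <- combn(nrow(data), n)          # all N-choose-n index combinations
  metrics <- apply(index_sets, 2, function(idx)
    averaged_metric(data[idx, , drop = FALSE], a))
  best <- which.min(metrics)
  list(subsample = data[index_sets[, best], , drop = FALSE], metric = metrics[best])
}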

2.3.4 Subsampling by Forwarding Step-by-Step Size Reduction of the Original Dataset

The logic of the step-by-step size reduction of the original dataset by permanently lowering the value of the averaged metric $\bar{M}_a$, where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$, is based on the random selection of an observation such that its removal from the original dataset tends to decrease (or at least not increase) the value of the averaged metric $\bar{M}_a$. Thus, we also call this approach one-by-one observation's sample reduction or row-by-row observation's sample reduction. Let us define the size of the dataset after $\tau$ steps, i.e. after removing $\tau$ observations, as $n(\tau)$, and the averaged metric $\bar{M}_a$ after $\tau$ steps as $\bar{M}_a(\tau)$. Evidently, $n(0) = N$, $n(1) = N - 1$, $n(2) = N - 2$, ..., $n(N - n) = N - (N - n) = n$. Analogously, we require $\bar{M}_a(\tau + 1) \leq \bar{M}_a(\tau)$ for each $\tau \in \{0, 1, 2, \ldots, N - n - 1\}$. It is easy to demonstrate that $\bar{M}_a(N - n) \leq \bar{M}_a(0)$, i.e. the averaged metric $\bar{M}_a$ after $N - n$ steps (when the dataset size is $n$) is lower than or equal to the value of the averaged metric $\bar{M}_a$ at the beginning. Assuming the initial original dataset is not well balanced, then $\bar{M}_a(N - n) < \bar{M}_a(0)$, or even $\bar{M}_a(N - n) \ll \bar{M}_a(0)$.

Algorithm 2: Subsampling by forwarding step-by-step (one-by-one) size reduction of the original dataset, using the averaged metric $\bar{M}_a$

    while $n_t > n$ do
        while $\bar{M}_a$ after removing the random observation $\geq \bar{M}_a$ do
            pick another random observation from the current dataset of size $n_t$ (# of observations)
        end
        remove the picked observation from the dataset;
        $n_t = n_t - 1$;
        update $\bar{M}_a$;
    end
    use the subsample of size $n$;
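A possible R sketch of this one-by-one reduction (again relying on the illustrative averaged_metric() helper) removes, in each step, a randomly chosen observation whose removal does not increase the averaged metric; the max_tries guard is our own addition that prevents an endless inner loop on already well-balanced data.

# Sketch: forward one-by-one (row-by-row) reduction of the dataset (Algorithm 2)
one_by_one_reduction <- function(data, n, a = 1, max_tries = 50) {
  current_metric <- averaged_metric(data, a)
  while (nrow(data) > n) {
    for (try in seq_len(max_tries)) {
      candidate <- sample(nrow(data), 1)
      new_metric <- averaged_metric(data[-candidate, , drop = FALSE], a)
      if (new_metric <= current_metric) break    # accept a non-increasing removal
    }
    data <- data[-candidate, , drop = FALSE]     # after max_tries, remove the last candidate anyway
    current_metric <- new_metric
  }
  data
}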

2.3.5 Subsampling Using Clustering

The idea behind the subsampling based on clustering, i.e. on unsupervised learning, is to utilize the fact that observations within each cluster are similar enough, while observations from different clusters are different enough. Thus, when we require subsamples with well-balanced category frequencies for each variable, we should consider observations from different clusters when creating the final subsample. A key question is therefore how to pick the observations from the different clusters so that the final subsample of a given size is well balanced. The paper's authors suggest several ideas on how to use clusters for subsampling and, particularly, how to draw the observations from the existing clusters when the final subsample is constructed. Firstly, regardless of whether the observations are picked randomly or following some pattern from $d$ clusters of sizes $|c_1|, |c_2|, \ldots, |c_d|$, the number of observations picked from the cluster $\delta \in \{1, 2, \ldots, d\}$ should be proportional to its size $|c_\delta|$. Let us assume that the number of category frequencies of a variable $i$ that are greater than zero is $\eta_i$ in a given cluster $\delta$. The total count of categories of a variable $i$ is, following the previous notation, $n_i$, and the mean frequency of an average category is


about $|c_\delta|/n_i$. As we can see, the mean frequency is proportional to the cluster size $|c_\delta|$. In other words, the larger the cluster is (the larger $|c_\delta|$ is), the more categories get a non-zero frequency. Assuming that the count of the variable $i$'s categories with non-zero frequency in a given cluster $\delta$ is $\eta_i$, that $\eta_i$ is proportional to $|c_\delta|$, $\eta_i \propto |c_\delta|$, and that those frequencies are roughly similar, i.e. $n_{i,j} \approx n/\eta_i \approx |c_\delta|/\eta_i$, we can derive

(i) for the modified Shannon entropy $M_{i,0}$,

$$M_{i,0} = \sum_{j=1}^{n_i}\frac{n_{i,j}}{n}\log\frac{n_{i,j}}{n} \propto \sum_{j=1}^{\eta_i}\frac{|c_\delta|/\eta_i}{|c_\delta|}\log\frac{|c_\delta|/\eta_i}{|c_\delta|} \propto \sum_{j=1}^{\eta_i}\frac{1}{\eta_i}\log\frac{1}{\eta_i} \propto \sum_{j=1}^{\eta_i}\frac{1}{|c_\delta|}\log\frac{1}{|c_\delta|} \propto \sum_{j=1}^{|c_\delta|}\frac{1}{|c_\delta|}\log\frac{1}{|c_\delta|} \propto |c_\delta|\cdot\frac{1}{|c_\delta|}\log\frac{1}{|c_\delta|} \propto \log\frac{1}{|c_\delta|}, \qquad (12)$$

(ii) for the averaged sums of squares of all categories' frequencies $M_{i,1}$,

$$M_{i,1} = \sum_{j=1}^{n_i}\left(\frac{n_{i,j}}{n}\right)^2 \propto \sum_{j=1}^{\eta_i}\left(\frac{|c_\delta|/\eta_i}{|c_\delta|}\right)^2 \propto \sum_{j=1}^{\eta_i}\frac{1}{\eta_i^2} \propto \sum_{j=1}^{\eta_i}\frac{1}{|c_\delta|^2} \propto \sum_{j=1}^{|c_\delta|}\frac{1}{|c_\delta|^2} \propto |c_\delta|\cdot\frac{1}{|c_\delta|^2} \propto \frac{1}{|c_\delta|}, \qquad (13)$$

(iii) for the count of all variable's categories $M_{i,-1}$,

$$M_{i,-1} = -n_i \propto -\eta_i \propto -|c_\delta|, \qquad (14)$$

(iv) and for the absolute frequency of a variable's modal category $M_{i,\infty}$,

$$M_{i,\infty} = \max_{j\in\{1,2,3,\ldots,n_i\}} n_{i,j} \propto -\eta_i \propto -|c_\delta| \quad\text{(approx.)}, \qquad (15)$$

which supports our suggestion to draw observations from the clusters proportionally² to their sizes³, i.e. the larger the cluster is, the more observations should be picked from it towards the final subsample in order to minimize the metric $M_{i,a}$, where $M_{i,a} \in \{M_{i,0}, M_{i,1}, M_{i,-1}, M_{i,\infty}\}$.

² The exact inversely proportional relation between the metric $M_{i,a}$ and the cluster size $|c_\delta|$ may differ according to the given metric, as we can see in formulas (12), (13), (14), and (15).

³ The proportional equations (12), (13), (14), and (15) might be confusingly understood as a suggestion to pick a maximum of observations (towards the final subsample) from the largest cluster, since this would lead to the minimization of the sum of squares for the given variable. However, such a subsample would be constructed using almost only one of the clusters—the largest one—and thus would tend to include very similar observations, which could break the demand for well-balanced category frequencies over all variables.


Secondly, we also propose an experimental approach that requires further ongoing research. Considering that the (not necessarily k-means) clustering is repeated $m$ times, with a random number of clusters in each of the $m$ iterations, we can construct a symmetric square matrix $T$ of dimensions $N \times N$ whose $p$-th row and $q$-th column describe the number of times the $p$-th observation of the original dataset was in the same cluster as the $q$-th observation of the original dataset. The matrix $T$ follows the form

$$T = \begin{pmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,N} \\ t_{2,1} & t_{2,2} & \cdots & t_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ t_{N,1} & t_{N,2} & \cdots & t_{N,N} \end{pmatrix}, \qquad (16)$$

where $t_{p,q}$ stands for the number of times both the $p$-th and the $q$-th observation of the original dataset were together in the same cluster. Once we want to construct a subsample of size $n$ from the original dataset of size $N$, we require keeping each category frequency of all variables balanced with the other frequencies, so the final subsample should include all categories of all variables with (near) similar frequencies, if possible. Drawing observations that were together in the same clusters many times within the multiple clustering procedure would result in a final subsample containing too many similar observations, which would reduce the native variability of the variables. Consequently, the final subsample should be constructed using observations that are mutually non-similar. A way of constructing such a subsample is to pick two original observations with a minimum value of $t_{p,q}$ and then add new observations to the subsample one by one (until the subsample size is sufficient) such that each new one (the $q$-th) has the minimum value of

$$\sum_{\forall p\in\{\text{observations in subsample}\}} t_{p,q}, \quad\text{so that}\quad q = \operatorname*{argmin}_{q\in\{1,2,\ldots,N\}}\sum_{\forall p\in\{\text{observations in subsample}\}} t_{p,q}, \qquad (17)$$

which minimizes the chance of getting a subsample with too many similar observations. While this approach may look completely deterministic, it contains a part based on randomness, namely the clustering part. The time complexity of the $m$ times repeated k-means clustering with a small number of clusters is $\Theta(m \cdot N \cdot k)$ (Pakhira 2014), and, after the construction of the matrix $T$ (16), the subsequent part using formula (17) takes on average $\Theta(n^2)$ time units. Whereas the clustering algorithm itself could vary (it is not necessary to apply only the k-means algorithm), it is worth mentioning that—since the variables in the


original dataset are categorical (or transformed into categorical ones)—the Gower distance was chosen for the clustering as it handles categorical variables well (Gower 1971). The pseudocode of the subsampling by clustering the original dataset is in Algorithm 3.

Algorithm 3: Subsampling by clustering the original dataset, using the matrix $T$ of mutual occurrences in the same clusters as in (16) and the averaged metric $\bar{M}_a$, where $\bar{M}_a \in \{\bar{M}_0, \bar{M}_1, \bar{M}_{-1}, \bar{M}_\infty\}$

Data: an original dataset of size $N$ containing $k$ variables and the matrix $T$ of mutual occurrences in the same clusters as in (16)
Result: a subsample minimizing the averaged metric $\bar{M}_a$

    $n$            // size of the subsample
    $n_t$          // current size of the dataset
    $T$            // matrix of mutual occurrences in the same clusters
    $\bar{M}_a$    // current averaged metric $\bar{M}_a$
    $S$            // current subsample
    populate the subsample $S$ with the first two observations corresponding to the row and column indices of the minimum of $T$;
    $n_t = 2$;
    while $n_t < n$ do
        pick the $q$-th observation such that $q = \operatorname*{argmin}_{q\in\{1,2,\ldots,N\}\setminus S}\sum_{\forall p\in S} t_{p,q}$, where $t_{p,q}$ is the value in the $p$-th row and $q$-th column of the matrix $T$;
        $n_t = n_t + 1$;
        update $\bar{M}_a$;
    end
    use the subsample $S$ of size $n$;
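An R sketch of this clustering-based procedure is given below; it assumes the cluster package (daisy() for the Gower distance, pam() for the clustering), it draws the number of clusters at random in each of the $m$ repetitions, and the helper names are illustrative.

# Sketch: clustering-based subsampling via the co-occurrence matrix T, formulas (16)-(17)
library(cluster)
clustering_subsample <- function(data, n, m = 100, max_clusters = 10) {
  N <- nrow(data)
  distances <- daisy(data, metric = "gower")
  T_matrix <- matrix(0, N, N)
  for (iter in seq_len(m)) {
    d <- sample(2:max_clusters, 1)                              # random number of clusters
    membership <- pam(distances, k = d, diss = TRUE)$clustering
    T_matrix <- T_matrix + outer(membership, membership, "==")  # co-membership counts
  }
  selected <- as.integer(arrayInd(which.min(T_matrix), dim(T_matrix)))  # two least co-clustered rows
  while (length(selected) < n) {
    candidates <- setdiff(seq_len(N), selected)
    scores <- colSums(T_matrix[selected, candidates, drop = FALSE])     # formula (17)
    selected <- c(selected, candidates[which.min(scores)])
  }
  data[selected, , drop = FALSE]
}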

3 Results

We used COVID-19 survey data of our own provenience for the application of the proposed methods. The original dataset contains $N = 698$ rows corresponding to observations and $k = 18$ columns related to variables. Since the dataset comes from a questionnaire with closed-format questions, the vast majority of the variables are categorical. A few numerical variables were categorized following experts' suggestions or natural logic; e.g. age was categorized into intervals of length ten years, starting and ending at ages divisible by 10, etc. Applying this approach, there are only categorical variables in the original dataset before the subsampling. The reason why the response variable, i.e. the serology levels of COVID-19 antibodies, is missing in the original dataset is that patients involved in the study


were planned to undergo relatively expensive serology tests; thus, the original size ($N = 698$) had to be reduced significantly ($n = 400$) to keep the costs of the serology testing manageable. The task was to get a subsample of $n = 400$ rows from the original dataset, containing the original number of $k = 18$ variables. All the computations were performed using the R programming language and environment (R Core Team 2017). There are more numerical applications of the R language to various fields in (Štěpánek et al. 2018, 2019a, 2019b, 2019c, 2019d, 2020; Jones et al. 2009). We applied all the algorithms mentioned above to do the subsampling and compared the results of the methods using all the metrics controlling the quality of the subsampling. The metrics of the subsampling quality, depicting particularly how well the category frequencies of all the variables are balanced, are the averaged modified Shannon entropy $\bar{M}_0$ following formula (4), the averaged sum of squares $\bar{M}_1$ as defined in (6), the count of all variable's categories $\bar{M}_{-1}$ as in (8), and the absolute frequency of a variable's modal category $\bar{M}_\infty$, following (10). Besides the single random subsampling without replacement, we started with the repeated random subsampling without replacement. Repeating the random subsampling multiple times ($m = 100$) enables modeling the prior distribution of the averaged metrics and was also used for the estimation of the minimum value of the averaged metrics using formula (11). The next method, subsampling by forwarding one-by-one reduction of the original dataset, was also performed $m = 100$ times. The subsampling by clustering the original dataset was performed $m = 100$ times, too. The final subsample was designed using the $T$ matrix (16) and created from scratch using the logic of formula (17). Table 1 shows the minimum values of the averaged metrics for each subsampling algorithm.

Table 1 A table with obtained minimum values of the averaged metrics for each subsampling algorithm

Algorithm                                          $\bar{M}_0$   $\bar{M}_1$   $\bar{M}_{-1}$   $\bar{M}_\infty$
Repeated random subsampling without replacement      -0.420        0.247         -2.480            2.975
Subsampling by forwarding one-by-one reduction       -0.411        0.242         -2.520            3.050
Subsampling by clustering                            -0.391        0.245         -2.525            3.025


Fig. 1 Histogram of the prior distributions of the averaged metrics, i.e. from the left M¯ 0 , M¯ 1 , M¯ −1 , and M¯ ∞ , respectively, calculated for the repeated (m = 100) random subsampling without replacement

Fig. 2 Histogram of the prior distributions of the averaged metrics, i.e. from the left M¯ 0 , M¯ 1 , M¯ −1 , and M¯ ∞ , respectively, calculated for the repeated (m = 100) subsampling by forwarding one-by-one reduction of the original dataset

Also, there are histograms of the prior distributions of the averaged metrics calculated for the repeated ($m = 100$) random subsampling without replacement in Fig. 1, for the repeated ($m = 100$) subsampling by forwarding one-by-one reduction of the original dataset in Fig. 2, and for the repeated ($m = 100$) subsampling by clustering the original dataset in Fig. 3, respectively. As we can see, all three applied methods return similar subsampling quality considering the minimization of the averaged metrics. The choice of the metric does not seem to be crucial, since the differences between the obtained minimum values of the metrics are not substantial. A formal comparison of the statistical differences between the mean values of the averaged metrics for the repeated ($m = 100$) random subsampling without replacement, the repeated ($m = 100$) subsampling by forwarding one-by-one reduction of the original dataset, and the repeated ($m = 100$) subsampling by clustering the original dataset could be performed using one-way analysis of variance (ANOVA). However, considering Figs. 1, 2 and 3, the practical differences are minimal. What practically differs is the asymptotic time complexity of the mentioned techniques, as discussed before.
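If such a formal comparison were of interest, a minimal R sketch (assuming a long-format data frame results with a numeric column metric and a factor column method, both illustrative) would be:

# Sketch: one-way ANOVA comparing the averaged metric across the three methods
summary(aov(metric ~ method, data = results))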

4 Conclusion

Subsampling may be an important task when the original dataset is larger than required. If there is a response variable available in the dataset, then the


Fig. 3 Histogram of the prior distributions of the averaged metrics, i.e. from the left M¯ 0 , M¯ 1 , M¯ −1 , and M¯ ∞ , respectively, calculated for the repeated (m = 100) subsampling by clustering the original dataset

methodology used for the subsampling is well established; the popular propensity scoring is used to extract a subsample from the original data that harmonizes the size effects of all predictors using a logistic regression model. When the response variable is, for some reason or another, missing, e.g. it is planned to be collected later, the methodology of the subsampling is not so straightforward. Many different, more or less ad hoc methods are used, based on different approaches—from totally random subsampling to manually matched pairs of observations with all variables' category frequencies balanced. In this study, we proposed several metrics—the modified Shannon entropy, the averaged sum of squares, the count of all variable's categories, and the absolute frequency of a variable's modal category—that enable controlling the quality of the subsampling; moreover, one of the metrics, the averaged sum of squares of all variables' category frequencies, is in theory scaled to an interval that does not depend on the input data, as was proven. Furthermore, we compared several methods; some of them are novel and proposed by this paper. While the repeated random subsampling without replacement is a relatively fast method, it can reach the minimum of the averaged metrics only approximately. The subsampling using one-by-one reduction of the original sample is a bit slower than the repeated random subsampling, but still feasibly applicable; it, too, can approach the minimum of the averaged metrics only approximately. The exhaustive subsampling is the only method that can numerically calculate the exact minimum value of the averaged metrics; however, its execution time is enormously high. Finally, the subsampling by clustering is an innovative method that is relatively fast if implemented using standard algorithms and mature computational environments, and, furthermore, it offers a way to keep control over the mutual occurrences of every two observations in the same clusters when the final subsample is constructed. The subsampling by clustering also approached the minimum of the averaged metrics relatively closely. As illustrated using the COVID-19 data, the choice of the metric controlling the quality of the subsampling is only a secondary issue and does not much affect the outcome of the subsampling algorithms' comparison.


All the proposed methods, i.e. the repeated random subsampling without replacement, the subsampling using one-by-one reduction of the original dataset, and the subsampling by clustering, seem to be valid alternatives to the exhaustive subsampling.

Acknowledgements This paper is supported by the grant OP VVV IGA/A, CZ.02.2.69/0.0/0.0/19_073/0016936 with no. 18/2021, which has been provided by the Internal Grant Agency of the Prague University of Economics and Business.

References

Austin, P.C.: An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivar. Behav. Res. 46, 399–424 (2011)
Gower, J.C.: A general coefficient of similarity and some of its properties. Biometrics 27, 857 (1971)
Jones, O., Maillardet, R., Robinson, A.: Introduction to Scientific Programming and Simulation Using R. Chapman and Hall/CRC (2009). https://doi.org/10.1201/9781420068740
MacKay, D.: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge, UK, New York (2003). ISBN 0-521-64298-1
Pakhira, M.K.: A linear time-complexity k-means algorithm using cluster shifting. In: 2014 International Conference on Computational Intelligence and Communication Networks. IEEE (2014). https://doi.org/10.1109/cicn.2014.220
Pathical, S., Serpen, G.: Comparison of subsampling techniques for random subspace ensembles. In: 2010 International Conference on Machine Learning and Cybernetics. IEEE (2010). https://doi.org/10.1109/icmlc.2010.5581032
R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2017). https://www.R-project.org
Sahney, S., Benton, M.J., Ferry, P.A.: Links between global taxonomic diversity, ecological diversity and the expansion of vertebrates on land. Biol. Lett. 6, 544–547 (2010)
Simpson, E.H.: Measurement of diversity. Nature 163, 688 (1949)
Štěpánek, L., Habarta, F., Malá, I., Marek, L., Pazdírek, F.: A machine-learning approach to survival time-event predicting: initial analyses using stomach cancer data. In: 2020 International Conference on e-Health and Bioengineering (EHB). IEEE (2020). https://doi.org/10.1109/ehb50910.2020.9280301
Štěpánek, L., Habarta, F., Malá, I., Marek, L.: A random forest-based approach for survival curves comparing: principles, computational aspects and asymptotic time complexity analysis. In: Annals of Computer Science and Information Systems. IEEE (2021). https://doi.org/10.15439/2021f89
Štěpánek, L., Habarta, F., Malá, I., Marek, L.: Analysis of asymptotic time complexity of an assumption-free alternative to the log-rank test. In: Proceedings of the 2020 Federated Conference on Computer Science and Information Systems. IEEE (2020). https://doi.org/10.15439/2020f198
Štěpánek, L., Kasal, P., Měšťák, J.: Evaluation of facial attractiveness for purposes of plastic surgery using machine-learning methods and image analysis. In: 2018 IEEE 20th International Conference on e-Health Networking, Applications and Services (Healthcom). IEEE (2018). https://doi.org/10.1109/healthcom.2018.8531195
Štěpánek, L., Kasal, P., Měšťák, J.: Evaluation of facial attractiveness after undergoing rhinoplasty using tree-based and regression methods. In: 2019 E-Health and Bioengineering Conference (EHB). IEEE (2019). https://doi.org/10.1109/ehb47216.2019.8969932
Štěpánek, L., Kasal, P., Měšťák, J.: Machine-learning and R in plastic surgery—evaluation of facial attractiveness and classification of facial emotions. In: Advances in Intelligent Systems and Computing, pp. 243–252. Springer International Publishing (2019). https://doi.org/10.1007/978-3-030-30604-5_22
Štěpánek, L., Kasal, P., Měšťák, J.: Machine-learning at the service of plastic surgery: a case study evaluating facial attractiveness and emotions using R language. In: Proceedings of the 2019 Federated Conference on Computer Science and Information Systems. IEEE (2019). https://doi.org/10.15439/2019f264
Stuart, E.A.: Matching methods for causal inference: a review and a look forward. Statist. Sci. 25 (2010). https://doi.org/10.1214/09-sts313

Using Temporal Dummy Players in Cost-Sharing Games Ofek Dadush and Tami Tamir

Abstract We consider cost-sharing games in which resources' costs are fairly shared by their users. In this type of game, the total players' cost in a Nash equilibrium profile may be significantly higher than the social optimum. We compare and analyze several methods to lead the players to a good Nash equilibrium by the temporal addition of dummy players. The dummy players create artificial load on some resources, which encourages other players to change their strategies. This best-response (BR) dynamic continues until a Nash equilibrium profile is reached. We show that it is NP-hard to calculate an optimal strategy for the dummy players. We then focus on symmetric singleton games, where each player needs exactly one resource. We suggest several heuristics for the problem, based on the resources' costs and the initial loads on the resources. The heuristics are simulated and their performance is evaluated, distinguishing between the following measures: the social cost of the final profile, the number of dummies used, the length of the BR-sequence till convergence, and the number of times the dummy players move. Our main conclusion is that the use of dummy players may significantly improve the equilibrium inefficiency. Keywords Temporal dummy players · Nash Equilibrium · Cost-sharing games

1 Introduction

1.1 Background

In resource allocation applications, a centralized authority is assigning the clients to different resources. For example, in job-scheduling applications, jobs are assigned to servers to be processed; in communication or transportation networks, traffic is assigned to network links to be routed. The centralized utility is aware of all clients' requests and determines the assignment. Classical computational


optimization problems study how to utilize the system in the best possible way. In practice, many resource-allocation services lack a central authority, and are often managed by multiple strategic users, whose individual payoff is affected by the assignment of other users. As a result, game theory has become an essential tool in the analysis of resource-allocation services. In the corresponding game, every client corresponds to a selfish player who aims at maximizing its own utility. Naturally, suboptimal players will keep changing their strategy, and the dynamic continues as long as the profile is not stable. A pure Nash equilibrium (NE) is the most popular solution concept in games. A strategy profile is a NE if no player has a beneficial deviation. It is well known that decentralized decision-making may lead to sub-optimal solutions from the point of view of the society as a whole. On the other hand, the system cannot control the decisions made by the players. In this work we propose to analyze the power of adding dummy players controlled by the system. The goal of the dummy players is to direct the players to a high-quality solution, while still keeping their freedom to act selfishly and select their own strategy. The addition of dummy players is temporal, that is, the final configuration consists of the initial set of players. Since the final configuration must be stable, the goal is to lead the players to a good Nash equilibrium. Many real-life applications can benefit from adopting this approach. For example, users of navigation apps receive information about the current status of the traffic and act accordingly; the provider can adjust the information presented to the users in favor of improving the balance of cars over the roads (and thereby avoid the creation of traffic jams). Similarly, in communication networks, the delay of using a link can be artificially increased in order to encourage users to use alternative links.

1.2 Notation and Problem Statement

For an integer $n \in \mathbb{N}$, let $[n] = \{1, \ldots, n\}$. A network-formation game (NFG, for short) Anshelevich et al. (2008) is $\mathcal{N} = \langle N, G, c, \{\langle s_j, t_j\rangle\}_{j\in[n]}\rangle$, where $N$ is a set of $n$ players, $G = \langle V, E\rangle$ is a weighted graph with an edge-cost function $c$, and for each $j \in [n]$, the pair $\langle s_j, t_j\rangle$ describes the objective of Player $j$, namely forming a path from its source vertex $s_j \in V$ to its target vertex $t_j \in V$. A pure strategy of a player $j \in N$ is a path from $s_j$ to $t_j$. A profile in $\mathcal{N}$ is a tuple $p = \langle p_1, \ldots, p_n\rangle$ of strategies for the players, that is, $p_j$ is a path from $s_j$ to $t_j$. Consider a profile $p$. Recall that $c$ maps each edge to a cost, intuitively standing for the cost of its formation. The cost of an edge is shared equally by the players that use it. The players aim at fulfilling their objective with minimal cost. For a profile $p$, let $n_e(p)$ denote the load on edge $e$ in $p$, that is, the number of players that include $e$ in their path. The cost of player $j$ in profile $p$ is defined to be

$$cost_j(p) = \sum_{e\in p_j} \frac{c_e}{n_e(p)}.$$


The cost of a profile $p$ is the total players' cost, that is, $cost(p) = \sum_{j\in N} cost_j(p)$. For a profile $p$ and a strategy $p'_j$ of player $j \in [n]$, let $[p_{-j}, p'_j]$ denote the profile obtained from $p$ by replacing the strategy of Player $j$ by $p'_j$. Given a strategy profile $p$, the best response (BR) of player $j$ is $BR_j(p) = \arg\min_{p'_j\in P_j} cost_j(p'_j, p_{-j})$; i.e., the set of strategies that minimize player $j$'s cost, fixing the strategies of all other players. Player $j$ is said to be suboptimal in $p$ if it can reduce its cost by a unilateral deviation, i.e., if $p_j \notin BR_j(p)$. If no player is suboptimal in $p$, then $p$ is a Nash equilibrium (NE). Given an initial strategy profile $p^0$, a BR-sequence from $p^0$ is a sequence $\langle p^0, p^1, \ldots\rangle$ in which for every $T = 0, 1, \ldots$ there exists a player $j \in N$ such that $p^{T+1} = (p'_j, p^T_{-j})$, where $p'_j \in BR_j(p^T_{-j})$. We restrict attention to games in which every BR sequence is guaranteed to converge to a NE. A game $\mathcal{N}'$ with $k$ dummy players is an extension of $\mathcal{N}$ obtained by adding $k$ dummy players that are controlled by the system. The dummies have no reachability objective of their own. Every dummy player is assigned on a single edge and increases the load on it, thus making it more attractive for the other players. Practically, a profile of the game $\mathcal{N}'$ is given by the strategies of $N$ and the locations of the $k$ dummies. The dummy players are added to a given initial profile $p^0$. Due to their addition, some of the players will become suboptimal, and a BR-sequence will be initiated. The system can control which suboptimal player is selected to perform its BR. After a finite number of BR-steps, the dummy players leave the network, and the players may continue the BR-sequence until convergence to a NE. It is well known that NE profiles may be sub-optimal. Let $OPT(\mathcal{N})$ denote the social optimum of a game $\mathcal{N}$, that is, the minimal possible social cost of a feasible assignment of $N$, i.e., $OPT(\mathcal{N}) = \min_p cost(p)$. The inefficiency incurred due to self-interested behavior is quantified according to the price of anarchy (PoA) (Koutsoupias and Papadimitriou 2009; Papadimitriou 2001) and price of stability (PoS) Anshelevich et al. (2008) measures. The PoA is the worst-case inefficiency of a pure Nash equilibrium, while the PoS measures the best-case inefficiency of a pure Nash equilibrium. Formally, $PoA(\mathcal{N}) = \max_{p\in NE(\mathcal{N})} cost(p)/OPT(\mathcal{N})$, and $PoS(\mathcal{N}) = \min_{p\in NE(\mathcal{N})} cost(p)/OPT(\mathcal{N})$. The goal of the dummy addition is to initiate a BR-sequence in which the players converge to a NE whose cost is as close as possible to the cost of the best NE. Some of our results refer to symmetric singleton games. These games fit several practical environments such as scheduling on parallel machines, or routing on parallel links Koutsoupias and Papadimitriou (2009). A network formation game that corresponds to a symmetric singleton game is given by $m$ parallel $(s-t)$-links $(e_1, \ldots, e_m)$ and a vector of positive link costs $(c_1, \ldots, c_m)$, where $c_i$ is the activation cost of link $i$. All the players have the same objective—a path from $s$ to $t$—and thus the symmetric strategy space is simply the set of edges. A profile $p$ of the game is given by a vector of loads $(n^p_1, \ldots, n^p_m)$, where $n^p_i$ is the number of players on $e_i$ in profile $p$. Let $n = \sum_i n^p_i$. We assume, w.l.o.g., that $c_1 \leq c_2 \leq \cdots \leq c_m$. Clearly, the social optimum profile of such a game is simply assigning all the players on the cheapest link $e_1$.
On the other hand, it is well known that the price of anarchy is n even

164

O. Dadush and T. Tamir

for a simple network with only two parallel links having costs c1 = 1 and c2 = n. Indeed, if all the players are assigned on e2 , then each of them pays n/n = 1 and would not benefit from deviating to e1 . Note that for this network, a single dummy assigned on e1 is sufficient to encourage the players to deviate to e1 .

1.3 Related Work Many modern systems provide service to multiple strategic users, whose individual payoff is affected by the decisions made by other users of the system. As a result, non-cooperative game theory has become an essential tool in the analysis of this kind of systems, in particular, routing in networks and job scheduling systems (Rosenthal 1973; Koutsoupias and Papadimitriou 2009; Vöcking 2007; Caragiannis et al. 2011; Harks and Klimm 2012; Bilò and Vinci 2017; Anshelevich et al. 2008). The addition of dummy players will make some of the players suboptimal, and will cause them to change their strategy. Other player will act in response. Thus, our work is closely related to the study of best-response dynamics. Work on BR dynamics advanced in three main avenues: The first studies whether BR dynamics converge to a NE, if one exists (e.g., Milchtaich 1996; Harks and Klimm 2012 and references therein). It is well known that BR dynamics does not always converge to a NE, even if one exists. However, for the class of finite potential games (Rosenthal 1973; Monderer and Shapley 1996), a pure NE always exists, and BR dynamics is guaranteed to converge to one. The second avenue explores how fast it takes until BR dynamics converges to a NE, e.g., Anshelevich et al. (2008), Even-Dar and Mansour (2005), Ieong et al. (2005). For some games, such as network formation games, the convergence time may be exponential, while for some games, such as singleton congestion games, fast convergence is guaranteed. The third avenue studies how the quality of the resulting NE is affected by the choice of the deviating player. Specifically, the order in which players are chosen to perform their best response moves is crucial to the quality of the equilibrium reached Feldman et al. (2017). Other related work deal with games in which some of the players are not selfish. In Stackelberg games Roughgarden (2004), Korilis et al. (1997), Bhaskar et al. (2011), Fanelli et al. (2010), Fotakis (2010), a centralized authority selects a fraction of players, denoted leaders, and assigns them to appropriately selected strategies, this is called the Stackelberg strategy. Each of the remaining players, denoted followers, selects its strategy selfishly trying to minimize its cost. The behavior of selfish players leads to a Stackelberg Nash equilibrium in which none of the selfish players has a beneficial migration. The goal is to design Stackelberg strategies that will lead the players to a high quality NE. In Roughgarden (2004), it is shown that finding an optimal Stackelberg strategy in job scheduling games is NP-hard, and approximation algorithms are presented. In congestion games on parallel links network the usage of a centrally controlled player can lead to the network optimum if its weight is above certain

Using Temporal Dummy Players in Cost-Sharing Games

165

threshold Korilis et al. (1997). In parallel networks, under some constraints, there are even optimal Stackelberg strategies Krichene et al. (2014). Our model differs from Stackelberg games as we do not assume that some players obey the system. That is, all the players act selfishly. We may add dummy players, however, they are temporal, and the system should reach a NE after they vanish. The idea of adding a temporal dummy player in order to change the final equilibrium was first presented in Tamir (2020). The paper analyzes the potential damage a single dummy player can cause to the social optimum in job scheduling games.

1.4 Our Results Let p ∗ be the cheapest NE profile. By assigning a sufficiently large number of dummies on the paths in p ∗ , these paths would become attractive enough, so that the BR of every player would be to join its path in p ∗ . Since p ∗ is a NE, the players will remain on these paths after the dummy players depart. Thus, if the number of dummy players is not limited, then it is possible to guarantee convergence to the best NE. We consider two problems: 1. What is the minimal number of dummy players required to reach the best NE? 2. Given a budget of k dummy players, what is the minimal cost NE that can be reached? A solution for each of these problems involves also an algorithm for utilizing the dummy players. Specifically, for every profile on the BR-sequence, the algorithm should decide (i ) on which links the dummy players are assigned, and (ii ) which suboptimal player is activated next to perform its best-response. In Sect. 2 we prove NP-hardness of both problems for general networks. Specifically, we present a game with two NE profiles, p ∗ and p such that cost ( p)/cost ( p ∗ ) = Θ(n), it is NP-hard to utilize two dummy players in a way that leads the players to p ∗ , while it is straightforward to do it with three dummy players. In Sect. 3 we define formally the game on m parallel links and provide several basic observations and properties of BR-sequences. In Sect. 4 we present our heuristics for convergence into the social optimum. In sect. 5 we presents our heuristics for a given number of dummies, and in Sect. 6 we presents our experimental results. The addition of dummy players is one possible temporal perturbation of a game. We conclude in Sect. 7 where we introduce additional perturbations and suggest some directions for future work.

2 Hardness Proof for General Networks Let p ★ be a min-cost NE profile. By assigning a large enough number of dummy players on the edges of p ★ , it is clearly possible to attract the players to p ★ . We show

166

O. Dadush and T. Tamir

that calculating the minimal number of dummies required for this task is NP-hard. Our hardness proof is based on the hardness proof in Feldman et al. (2017) that considers a problem of determining the order according to which players perform BRD. Theorem 2.1 The problem of leading the players to the lowest cost NE using the minimal number of dummies is NP-hard. Proof Given a game, an initial NE strategy profile, and a value k, the associated decision problem is whether k dummies are sufficient to lead the players to the lowest cost NE. We show a reduction Σ from the Partition problem: Given a set of numbers {a1 , a2 , ..., an } such thatΣ i∈[n] ai = Σ2, where ∀i∈[n] ai < 1, the goal is to find a subset I ⊆ [n] such that i∈I ai = i∈[n]\I ai = 1. Given an instance of Partition, consider the network depicted in Fig. 1, with the following initial strategy profile, p 0 of 4n + 2 players: • 3n partition players, i 1 , i 2 , i 3 for all i ∈ [n]. The objective of every triplet i l , is a -path. For all i ∈ [n], the three corresponding partition players has two strategies: an upper edge of cost 420ai and a lower edge of cost 300ai . In p 0 , all the partition players use the upper edges. • Players 1' , 2' , . . . , n ' : n players whose objective is an -path. These players have two strategies: the edge (s ' , t ' ) of cost 300n, and the path through (u 1 , u 2 ), whose cost is 1200 − ∈. In p 0 , they all use the edge (s ' , t ' ). • Player a whose objective is an -path. Player a has two strategies: The upper path, and the path through v0 , v1 , . . . , vn . In p 0 , Player a uses the upper path. • Player b whose objective is an -path. Player b has three strategies: The upper path, the path through sa , v0 , v1 , . . . , vn , and the path through the edge (u 1 , u 2 ). In p 0 , Player b uses the upper path. ∎ i Observe that p 0 is a NE. Specifically, each of the partition player has cost 420a and 3 a deviation to the lower edge will lead to cost 300ai , Player a’s current cost is 248 + 2∈ . Deviating to the path through v0 , would result in cost 420 · 2 + 204 = 414. Player 4 b’s current cost is 248 + 2∈ . Its alternative would cost 154 + 414 = 568 (through v0 ) or 1200 − ∈ (through u 1 ). Finally, every player on the lower edge has current cost 300, while its alternative through u 1 costs 1200 − ∈. The following additional observations limit the possible BRD sequences of the game:

1. Not only that the initial profile is a NE, but it is also stable in the presence of a single i > 140ai . For player a, placing the dummy dummy. For the partition players, 300a 2 on (vn , ta,b ) is most efficient, since 204 − 204 is the highest cost reduction that can 2 i be achieved by a single dummy along a’s alternative path (compared to 420a − 4 420ai ). When a dummy is placed on (v , t ), then a’s alternative strategy would n a,b 5

Using Temporal Dummy Players in Cost-Sharing Games

167

Fig. 1 The network constructed for a given Partition instance. Every edge is labeled by its cost, and (in brackets) the number of players using it in p 0

cost 420 · 2 + 204 = 312, which is more than its cost in p 0 . Similarly, player’s b 4 2 alternatives would cost 154 + 312 = 466 through v0 (dummy on (vn , ta,b )) , or 1200−∈ through u 1 (dummy on (u 1 , u 2 )). For a player i ' , the alternative would cost 2 1200−∈ through u 1 . All the above deviations are cost increasing. 2 2. In order to initiate a deviation of a partition player i l , two dummies should be i i ≤ 420a = 140ai . placed on the lower (vi−1 , vi )-edge, as 100ai = 300a 3 3 ' ' 3. The n players currently on (s , t ) would benefit from a deviation only after the edge (u 1 , u 2 ) is utilized by three other player. The following profile p ∗ is the minimal cost NE of this game and also its social optimum: • For all i ∈ [n], l ∈ [3], the partition player i l uses the lower (vi−1 , vi ) edge of cost 300ai . • Players 1' , 2' , . . . , n ' are on the path through u 1 use the 1200 − ∈ edge. • Player a is on the path through v0 and use the lower edges uses the lower (vi−1 , vi ) edges. • Player b is on the path through u 1 . The social optimum cost is 300 · 2 + 204 + 1200 − ε = 2004 − ε. The main claim of the reduction is based on the above properties. Claim 2.2 Two dummies can guarantee convergence to p ∗ if and only if a partition of exists. Σ Proof Assume that a partition exists, and let I be a subset of [n] such that i∈I ai = 1. We present a BR-sequence with 2 dummies that ends up at p ∗ . Initially, (i ) for every i ∈ I , the dummies are placed on the lower edge connecting (vi−1 , vi ), encouraging the corresponding players to deviate down. Next, (ii ), the two dummies move to (vn , ta,b ), which encourage Player a to deviate. Next, (iii ) the dummies move to

168

O. Dadush and T. Tamir

(u 1 , u 2 ), which encourage Player b to deviate to the path through this edge, and (i v) / I , the dummies are placed the players on (s ' , t ' ) follow. Finally, (v), for every i ∈ on the lower edge connecting (vi−1 , vi ), encouraging the corresponding players and Player a to deviate down. We show that each of these steps is indeed a best-response. The deviations in i i ≤ 420a . Once the players corresponding to I are on phase (i ) are BR sinceΣ300a 3 3 the lower edges, since i∈I ai = 1, the players in I utilize lower edges of total cost 300 · 1 = 300. In phase (ii ), Player a indeed benefit from the migration: its share for the lower edges, 420 for on the sub-path consisting of the partition edges is 300 4 4 204 the upper edges and 3 for (vn , ta,b ), as the dummies are on this last edge. The total cost is 248, which is less than its current cost, 248 + 2∈ . For phase (iii ), consider the possible strategies of player b once the dummies are on (u 1 , u 2 ). Its share on the path + 420 + 204 = 400. its BR is therefore the lower path through s1 would be 154 + 300 5 5 2 1200−∈ through u 1 since the dummies makes it cost 3 = 400 − 3∈ =< 400. After Player b deviates, all the players using the expensive (s ' , t ' )-edge join Player b. All These 300n < n−k+1 . deviations in phase (i v) are BR since for every k > 0 it holds that 1200−∈ 3+k 300ai 420ai Finally, the deviations in phase (v) are BR since 3 < 4 . Also note that the above deviations are the only beneficial deviations possible. Moreover, once the players are in p ∗ , the dummies can depart, without hurting the profile’s stability. For the other direction of the reduction, assume that two dummies can guarantee convergence to p ∗ . We show that a partition exists. As we achieve the social optimum the i ' players must abandon the expensive edge. In order for their alternative path through u 1 to became attractive for some i ' player, the load on the (u 1 , u 2 ) edge must be at least 3. Since there are only two dummies, it must be that Player b also uses the edge (u 1 , u 2 ). In order for Player b to deviate to the (u 1 , u 2 ) path, Player a must have deviated from the upper path, as the initial cost of Player b is 248 + 2∈ < 400 − 3∈ . In order to encourage Player a to deviate, some partition players must deviate to make some of the lower (vi−1 , vi ) edges active. For every i ∈ [n], note that the three partition players corresponding to ai are symmetric, and therefore they use the same edge in every stable profile. Thus, in every BR sequence that the dummies should cope with, exactly one of the upper or lower (vi−1 , vi ) edge is utilized. Let Da (Db ) be the set of lower (vi−1 , vi )-edges utilized by the partition players before Player a (b) deviated. Let xa (xb ) be the total sum of partition elements corresponding to D(a) (D(b). Since Player a must deviate before Player b, it holds that xa ≤ xb . In every (vi−1 , vi )-segment, Player a’s best response is to join the utilized (upper or lower) edge. In order for the deviation to be · xa + 420 (2 − xa ) + 204 < 248 + 2∈ . We profitable for Player a, it must be that 300 4 4 3 ∈ conclude that xa > 1 − 60 . In order for Player b to deviate to the path through u 1 , it must be more beneficial than deviating to the path through sa . In particular, the dummies must be assigned on the (u 1 , u 2 ) edge. Player b’s share on the partition path consists of cost (sb , sa ) = 154 < and its share in the sub-path from sa to ta,b . Therefore, it must be that 1200−∈ 3 420 204 ∈ 154 + 300 . 
· x + (2 − x ) + . We conclude that x < 1 + b b b 5 5 2 72

Using Temporal Dummy Players in Cost-Sharing Games

169

∈ Combining with the condition for the deviation of Player a, we get that 1 − 60 ∈ 11 xa ≤ xb < 1 + 72 . This implies that for ∈ < 360 mini ai , we have xa = xb = 1. other words, the set of players that utilize the lower edges before the deviation Player a, correspond to a set I ⊂ [n] of sum 1, which induces a partition.

< In of ∎

3 Cost-Sharing Games on Parallel Links

In light of the hardness result for general networks, we consider a network of parallel links. Recall that the network is given by m parallel links (e_1, . . . , e_m) and a vector of positive link costs (c_1, . . . , c_m). A profile p of the game is given by a vector of loads (n_1^p, . . . , n_m^p), where n_i^p is the number of players on e_i in profile p. With fair cost-sharing, the cost of a player assigned on e_i in profile p is c_i/n_i^p. We assume, w.l.o.g., that c_1 ≤ c_2 ≤ . . . ≤ c_m. Denote by p_{a^+} the profile obtained from p by adding k dummy players on e_a. Given a profile p, the best response of a player on e_i is denoted BR_i(p). A link e_j ∈ BR_i(p) if and only if c_j/(n_j + 1) < c_i/n_i and, for every e_l ≠ e_i, c_j/(n_j + 1) ≤ c_l/(n_l + 1). In particular, e_a ∈ BR_i(p_{a^+}) if it is possible to attract a player from e_i to migrate to e_a by adding the dummy players on e_a. Let n_i^0 be the load on e_i in the initial profile p^0. Let e_1 be a cheapest link, where ties are broken in favor of highly loaded links in p^0. That is, for every i > 1, either c_1 < c_i, or c_1 = c_i and n_1^0 ≥ n_i^0. Since the game is symmetric, the social optimum cost is c_1.

We present heuristics for solving the following problems: given a network of parallel links and an initial configuration p^0, (i) what is the minimal number of dummies required to reach the social optimum, and (ii) what is the lowest social cost we can achieve with a given number of dummies. For both problems we assume that the algorithm can move the dummy players among the links, and can select the deviating suboptimal player in each step. Players that get the right to deviate select their best-response move.

Performance Measures: Assume that some heuristic is performed on an initial profile p^0. The quality of a solution will be measured by 4 parameters.

1. The social cost of the final profile.
2. Number of dummies used.
3. Length of the BR-sequence till convergence.
4. Number of times the dummy players move.

In our experiments some of these measures are fixed. For example, we tested the social cost achieved by various heuristics with a given number of dummies, or the number of dummies required to converge to the social optimum, e_1.
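To make the best-response rule above concrete, the following minimal sketch (our own illustration, not code from the paper) evaluates it directly: a player on link i prefers link j iff c_j/(n_j + 1) < c_i/n_i and c_j/(n_j + 1) ≤ c_l/(n_l + 1) for every other link l. All names are ours.

def best_response(costs, loads, i):
    """Links that are a best response for a player currently on link i (loads[i] >= 1)."""
    m = len(costs)
    current = costs[i] / loads[i]
    joining = [costs[j] / (loads[j] + 1) for j in range(m)]
    cheapest = min(joining[l] for l in range(m) if l != i)
    return [j for j in range(m)
            if j != i and joining[j] < current and joining[j] <= cheapest]

# Example: three links of costs 3, 8, 9 with loads 1, 3, 4; the player on the
# first link would migrate to the third link, whose joining cost 9/5 = 1.8 is lowest.
print(best_response([3, 8, 9], [1, 3, 4], 0))   # [2]

Dummies placed on a link are modeled simply by inflating that link's load before calling the function.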



3.1 Preliminaries and Observations

We start by introducing some notation and stating a few important observations and claims. For a profile p, let E_min^p = argmin_{i∈E} c_i/(n_i^p + 1) be the set of all most attractive links. Let e_min^p be a link in E_min^p with a highest cost, breaking ties arbitrarily. Let price_min^p = c_{e_min^p}/(n_{e_min^p}^p + 1) be the cost to be paid by a player that joins the most attractive link.

Observation 3.1 In every NE profile, all the players are assigned on the same link.

Proof Assume by contradiction that there exists a NE in which, for two links e_a and e_b, both n_a and n_b are positive. Assume w.l.o.g. that c_a/(n_a + 1) < c_a/n_a ≤ c_b/n_b. Clearly, a migration from e_b to e_a is beneficial, contradicting the stability of the profile. ∎

Claim 3.2 If e_a ∈ BR_{e_b}(p) for some link e_b, then we can guarantee convergence of BRD to e_a.

Proof We show that e_a is the BR as long as the dummy players do not change their location. e_a ∈ BR_{e_b}(p) if and only if c_a/(n_a^p + 1) < c_b/n_b^p and c_a/(n_a^p + 1) ≤ c_i/(n_i^p + 1) for every i ≠ b. After one player moves from e_b to e_a, the load on e_a is n_a + 1. Now, e_a ∈ BR_{e_b}(p^{+1}), since for every i ≠ a

c_a/(n_a^{p+1} + 1) = c_a/(n_a^p + 2) < c_a/(n_a^p + 1) ≤ c_i/(n_i^p + 1) = c_i/(n_i^{p+1} + 1)

and

c_a/(n_a^{p+1} + 1) = c_a/(n_a^p + 2) < c_a/(n_a^p + 1) < c_b/n_b^p < c_b/(n_b^p − 1) = c_b/n_b^{p+1}.

Thus, independent of the order in which the players are activated, as long as the dummy players are not changing their location, every BR sequence converges to e_a. ∎

Observation 3.3 For any a, b, x, y, k > 0, if a ≤ x and a/b ≤ x/y, then a/(b + k) ≤ x/(y + k).

Observation 3.3 implies that if a link is less attractive than e_1, then it will never get players during a sequence that converges to e_1, since attracting players to it requires at least the same number of dummies as making e_1 the BR of some link directly. Our next claim states that if a link e_b is a best-response of some player, then it is also a best-response of the players on e_min^p. Note that if the link e_min^p is empty, then it must be that e_min^p = e_1 and convergence to e_1 is possible even without dummy players.

Claim 3.4 For a given profile p and a link e_b ≠ e_min^p, if ∃e_a s.t. e_b ∈ BR_a(p), then e_b ∈ BR_{e_min^p}(p).



Proof Assume that ∃e_a s.t. e_b ∈ BR_a(p). If e_a = e_min^p then clearly e_b ∈ BR_{e_min^p}(p). Otherwise, we show that e_b can attract players also from e_min^p. It must be that c_b/(n_b^p + 1) < c_a/n_a^p and, for every e_i ≠ e_a, c_b/(n_b^p + 1) ≤ c_i/(n_i^p + 1). In particular, c_b/(n_b^p + 1) ≤ c_{e_min^p}/(n_{e_min^p}^p + 1), so e_b is also the BR of the players on e_min^p. Finally, assume |E_min^p| > 1, let e_u, e_v ∈ E_min^p, and assume e_a ∈ BR_{e_u}(p). It must be that c_a/(n_a + 1) < c_u/n_u and c_a/(n_a + 1) ≤ c_v/(n_v + 1) < c_v/n_v. Since c_u/(n_u + 1) = c_v/(n_v + 1), we conclude e_a ∈ BR_{e_v}(p) for any other link in E_min^p. ∎

3.2 The Naive Solution

Before presenting the more complicated heuristics, we present a naive solution that is based on directly making e_1 the best-response of some player. By Claim 3.2, once e_1 is the BR of some link, we can guarantee convergence to e_1, and based on Claim 3.4, we can calculate the number of dummies required to directly make e_1 the BR of e_min^p. Recall that e_1 ∈ BR_{e_min^p}(p_{1^+}) if and only if c_1/(n_1^p + k + 1) undercuts the share on e_min^p and does not exceed the joining cost of any other link, that is, if and only if

k ≥ c_1 · n_{e_min^p}^p / c_{e_min^p} − (n_1 + 1)   and   k ≥ c_1 · max_{i ≠ e_min^p} ((n_i^p + 1)/c_i) − (n_1 + 1).   (1)

Denote by k_naive(p) the minimal k satisfying (1). Specifically, k_naive(p^0) is the number of dummies required by the naive solution. Table 1 presents an instance demonstrating that the naive solution is suboptimal. Moreover, the number of dummies it needs is higher by a factor of about 1.5 than the optimum. The network consists of 8 links whose costs are listed in the first row. The loads in the initial profile p^0 are listed in the second row. The additional rows specify, for each link, the cost per player in p^0, given by c_e/n_e^0, and the cost per player if one player joins e, given by c_e/(n_e^0 + 1). Links e_2 − e_7 all have the same cost and initial load.



Table 1 Initial profile p^0 of instance I_1

Link               e_1     e_2 − e_7   e_8
Cost               3000    3100        6000
Load               0       200         600
c_e/n_e^0          –       15.5        10
c_e/(n_e^0 + 1)    3000    15.42       9.98

Table 2 The profile achieved after phase 1

Link               e_1     e_2 − e_4   e_5 − e_7   e_8
Cost               3000    3100        3100        6000
Load               0       227         226         441
c_e/n_e^p          –       13.65       13.71       13.60
c_e/(n_e^p + 1)    3000    13.59       13.65       13.57

We can easily see that without dummies, regardless of the activation order, the players will converge into e_8, which is the most expensive link. Using the naive solution, the number of dummies required to converge into e_1 is k_naive(p^0) = 300. We show that convergence to e_1 can be achieved using k = 220 dummies. By assigning 220 dummies on e_2 and activating a player on e_8, a migration from e_8 to e_2 is performed; we then assign the dummies on e_3 and activate a player on e_8, which creates a migration from e_8 to e_3. We continue in a round-robin fashion on links e_2 − e_7 until 159 players leave e_8 and the profile of Table 2 is reached. The BR-sequence proceeds after the 220 dummies are moved to e_1 and a player on e_8 is activated. Since the cost of a player who joins e_1 is 3000/(0 + 220 + 1) = 13.57, a player on e_8 will choose e_1 as its BR, and by Claim 3.2 convergence to e_1 is guaranteed. We conclude that convergence to e_1 can be achieved with only 220 dummy players, while the naive solution requires 300 dummies.
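The value k_naive(p^0) = 300 reported above can be checked with the following short sketch (our own code, not the authors' implementation); it simply searches for the smallest k that lets e_1 attract a player from the currently most attractive link, using the best-response rule of Sect. 3.

def k_naive(costs, loads):
    # most attractive link: argmin c_i / (n_i + 1)
    e_min = min(range(len(costs)), key=lambda i: costs[i] / (loads[i] + 1))
    k = 0
    while True:
        join_e1 = costs[0] / (loads[0] + k + 1)          # joining cost of e_1 with k dummies on it
        attracts = join_e1 < costs[e_min] / loads[e_min]
        undercut = all(join_e1 <= costs[i] / (loads[i] + 1)
                       for i in range(1, len(costs)) if i != e_min)
        if attracts and undercut:
            return k
        k += 1

# Instance I_1 of Table 1: e_1, six copies of e_2 - e_7, and e_8.
costs = [3000] + [3100] * 6 + [6000]
loads = [0] + [200] * 6 + [600]
print(k_naive(costs, loads))   # 300, matching the value reported in the text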

4 Convergence to the Social Optimum

In this section we present our heuristics for convergence into the best NE. In a network of parallel links, the best NE is also the social optimum and is simply e_1, the cheapest edge in the network. In Observation 3.3 we showed that making a link that is less attractive than e_1 the BR of some link is at least as demanding as making e_1 the BR of the same link; therefore, in all our heuristics we do not use such links as the BR of any link. Furthermore, for a given instance I, let I' be an instance with the same set of links and a load vector in which n_i' ≥ n_i for the links fulfilling e_i ∈ BR_{e_min^p}(p_{i^+}). Intuitively, I'



is more challenging than I, since the links that are more attractive than e_1 become even more attractive. Such links will also not be used in the heuristics we present. The heuristics we present consist of two phases. In the first phase, players are encouraged to migrate such that the players' cost on the links that are more attractive than e_1 is more balanced compared to p^0; the naive solution is then applied on the more balanced profile. The goal is to use fewer than k_naive(p^0) dummies for the balancing phase, as well as to reach a profile p where k_naive(p) < k_naive(p^0).

4.1 Max Cost-reduction Heuristic

The first heuristic we present balances the players' costs on links that are more attractive than e_1 by migrating players out of the most attractive link, e_min^p, into a link that gains a maximal cost-reduction by the addition of one player; formally, a link for which c_i/n_i − c_i/(n_i + 1) is maximal. Intuitively, we want each migration to be as significant as possible. The algorithm gets as input a profile p^0 and the number k of dummies, and returns a binary indicator stating whether the max cost-reduction heuristic can be used to lead the players to e_1. The minimal number of required dummies can therefore be computed by binary search in the range [0, k_naive(p^0)].

Let e_mcr^p be a link for which c_i/n_i − c_i/(n_i + 1) is maximal among the links that can attract players from e_min^p. In every iteration the algorithm moves a player from the current e_min^p to the current e_mcr^p, until the profile is balanced enough to enable a migration to e_1 (step 3), or until it identifies that k dummies are not sufficient even as a naive solution from the most balanced profile (this is detected in step 11, by having a loop in the balancing phase). When the max cost-reduction heuristic is applied on the instance I_1 presented in Table 1 with k = 220, it is able to reach the social optimum. In fact, the BR-sequence performed is exactly the one described in Table 2. An illustrative sketch of this heuristic appears after the listing.

Algorithm 1 Max Cost-reduction Heuristic (decision version)
1: repeat
2:    Calculate E_min^p. Let e_min^p ∈ E_min^p be a link with max cost.
3:    if e_1 ∈ E_min^p or e_1 ∈ BR_{e_min^p}(p_{1^+}) then
4:       return true
5:    else
6:       Let e_mcr^p be a link in BR_{e_min^p}(p_{e_mcr^+}) for which (c_i/n_i − c_i/(n_i + 1)) is maximal.
7:       place k dummies on e_mcr^p.
8:       activate a player from e_min^p (creates a migration from e_min^p to e_mcr^p).
9:       remove k dummies from e_mcr^p.
10:   end if
11: until a loop has been detected (profile p = p^{−2})
12: return false
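The sketch below is our own simplified rendering of Algorithm 1, with two deviations that we flag explicitly: a target link only has to undercut the current share on e_min^p (the full BR test against all other links is omitted), and loop detection uses a set of previously seen profiles rather than the p = p^{−2} test. On the instance of Table 1 with k = 220 it returns True, in line with the example above.

def joining_cost(costs, loads, i, extra=0):
    # cost a newcomer would pay on link i if `extra` dummies already sit there
    return costs[i] / (loads[i] + extra + 1)

def e_min_of(costs, loads):
    # most attractive link; ties broken toward the higher-cost link, as in Sect. 3.1
    best = min(joining_cost(costs, loads, i) for i in range(len(costs)))
    return max((i for i in range(len(costs))
                if joining_cost(costs, loads, i) == best), key=lambda i: costs[i])

def e1_is_reachable(costs, loads, e_min, k):
    # e_1 is already most attractive, or k dummies on e_1 make it the BR of e_min
    if e_min == 0:
        return True
    j = joining_cost(costs, loads, 0, extra=k)
    return j < costs[e_min] / loads[e_min] and all(
        j <= joining_cost(costs, loads, l) for l in range(len(costs)) if l != e_min)

def max_cost_reduction_decision(costs, loads, k):
    loads, seen = list(loads), set()
    while tuple(loads) not in seen:
        seen.add(tuple(loads))
        e_min = e_min_of(costs, loads)
        if e1_is_reachable(costs, loads, e_min, k):
            return True
        # simplified target test: k dummies on e_i undercut the current share on e_min
        targets = [i for i in range(1, len(costs)) if i != e_min and
                   joining_cost(costs, loads, i, extra=k) < costs[e_min] / loads[e_min]]
        if not targets:
            return False
        e_mcr = max(targets, key=lambda i: (costs[i] / loads[i] - costs[i] / (loads[i] + 1))
                    if loads[i] else float("inf"))
        loads[e_min] -= 1          # the activated player migrates
        loads[e_mcr] += 1
    return False                   # a profile repeated: k dummies do not suffice

costs = [3000] + [3100] * 6 + [6000]
loads = [0] + [200] * 6 + [600]
print(max_cost_reduction_decision(costs, loads, 220))   # True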



Table 3 Profile p_bal of instance I_1

Link               e_1     e_2 − e_5   e_6 − e_7   e_8
Cost               3000    3100        3100        6000
Load               0       227         226         440
c_e/n_e^p          –       13.65       13.71       13.63
c_e/(n_e^p + 1)    3000    13.59       13.65       13.60

4.2 Balancing Heuristic

The second heuristic we present calculates a target load vector in which the marginal costs on the links are balanced. The dummy players are used to achieve this load vector. The naive algorithm is then performed on the balanced profile. The load vector is one that maximizes the marginal cost on the most attractive link and on the second most attractive link. This way the attractiveness of the competitors of e_1 is as low as possible. For a profile p, let c_min1^p = min_{i ∈ E_min^p} c_i/n_i^p, and let e_min1^p be a link determining c_min1^p. Also, let c_min2^p = min_{i ∈ E \ {e_min1^p}} c_i/(n_i^p + 1).

The idea is to balance the load on the links such that the minimum of these two values is maximal. Intuitively, this way, by activating a player on e_min1^p, the attractiveness of the competitors of e_1 is as low as possible. Calculating the exact load vector achieving maximal min{c_min1^p, c_min2^p} is computationally hard. In order to simplify the calculations, we calculate instead p_bal, a load vector that approximates the optimal one. p_bal is defined in the following way: let n_1̄ = Σ_{i>1} n_i^0 be the number of players that are not assigned on e_1 in p^0 and let c_1̄ = Σ_{i>1} c_i be the total cost of the edges except e_1. In p_bal we determine the assignment of the n_1̄ players that are not on e_1. We first determine load ⌊(c_i/c_1̄) · n_1̄⌋ on every link e_i for i > 1; we then add the remaining players iteratively, each time adding a player on a link with maximal c_i/(n_i^p + 1). For example, the profile p_bal of the instance I_1 introduced in Table 1 is given in Table 3.

Once p_bal is calculated, we would like to reach this profile from p^0 using the lowest possible number of dummies. Given p and p_bal, let E_drop^p = {e_i | n_i^p > n_i^{p_bal}} be the set of edges whose load is higher than their load in p_bal, and let E_gain^p = {e_i | n_i^p < n_i^{p_bal}} be the set of edges whose load is lower than their load in p_bal. Given p, E_drop^p and E_gain^p, let k_migration be the minimal number of dummies required to achieve a migration from a link e_a ∈ E_drop^p to a link e_b ∈ E_gain^p, and let the source and target links be e_drop and e_gain, respectively. The algorithm iteratively calculates e_drop and e_gain and performs the corresponding migrations. The number of dummies required may increase during the algorithm, and k is updated accordingly. When k is large enough to enable a migration to e_1, that is, when k_migration ≥ k_naive(p), convergence to e_1 is guaranteed.



Algorithm 2 Balancing Heuristic (min k version)
1: Calculate p_bal
2: set k = 0
3: repeat
4:    Calculate k_migration, e_gain, e_drop.
5:    if k_migration ≥ k_naive(p) then
6:       return max{k, k_naive(p)}.
7:    end if
8:    k = max{k, k_migration}.
9:    place k dummies on e_gain
10:   while n_{e_gain}^{p_bal} − n_{e_gain}^p > 0 do
11:      activate a player from some e ∈ E_drop^p (creates a migration into e_gain).
12:   end while
13: until E_gain^p = ∅
14: return max{k, k_naive(p)}

4.3 Exhaustive Heuristic

The third heuristic we consider balances the links that are more attractive than e_1 by migrating players out of the most attractive link e_min^p into the least attractive possible link, while making sure that price_min does not decrease. Formally, for a profile p, let E_t^p = {e_a | e_a ∈ BR_{e_min^p}(p_{a^+}) and c_a/(n_a^p + 2) > price_min^p} be the group of target links; they are the BR of e_min^p if the dummy players are added on them, and they do not lower price_min^p if a migration from e_min^p to them occurs. Let e_t^p be the link in E_t^p with maximal c_a/(n_a^p + 2). Our algorithm is based on moving players out of e_min^p into e_t^p. The algorithm gets as input a profile p^0 and the number k of dummies, and returns a binary indicator stating whether the exhaustive heuristic leads the players to e_1. The minimal number of required dummies can therefore be computed by binary search in the range [0, k_naive(p^0)]. A sketch of the target-set computation follows the listing.

Algorithm 3 Exhaustive Heuristic (decision version)
1: repeat
2:    Calculate E_min^p, let e_min^p ∈ E_min^p be a link with max cost, and let price_min^p = c_{e_min^p}/(n_{e_min^p}^p + 1).
3:    if e_1 ∈ E_min^p or e_1 ∈ BR_{e_min^p}(p_{1^+}) then
4:       return true
5:    else if E_t^p ≠ ∅ then
6:       place k dummies on e_t^p.
7:       activate a player from e_min^p (creates a migration from e_min^p to e_t^p).
8:       remove k dummies from e_t^p.
9:    end if
10: until no player has been activated
11: return false
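The following small helper is our own sketch (not from the paper) of the target set E_t^p and of e_t^p. It assumes e_min^p carries at least one player; when e_min^p is empty, the algorithm has already terminated at step 3.

def exhaustive_targets(costs, loads, e_min, k):
    m = len(costs)
    price_min = costs[e_min] / (loads[e_min] + 1)
    targets = []
    for a in range(m):
        if a == e_min:
            continue
        j = costs[a] / (loads[a] + k + 1)            # joining cost with k dummies on e_a
        is_br = j < costs[e_min] / loads[e_min] and all(
            j <= costs[l] / (loads[l] + 1) for l in range(m) if l != e_min)
        if is_br and costs[a] / (loads[a] + 2) > price_min:
            targets.append(a)
    # e_t^p: the admissible target with maximal c_a/(n_a + 2), i.e. the least attractive one
    e_t = max(targets, key=lambda a: costs[a] / (loads[a] + 2)) if targets else None
    return targets, e_t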



Table 4 Initial profile of instance I_2

Link               e_1     e_2     e_3      e_4      e_5     e_6    e_7    e_8
Cost               3000    3100    3100     3100     3100    6000   6000   10000
Load               0       188     180      179      175     700    620    600
c_e/n_e^p          –       16.48   17.22    17.127   17.71   8.57   9.67   16.66
c_e/(n_e^p + 1)    3000    16.40   17.127   17.22    17.61   8.55   9.66   16.63

Table 5 Results for instance I_2

Algorithm           Naive    Max CR   Balancing   Exhaustive
# of dummies        350      251      310         240
# of steps          2,642    2,956    2,874       3,043
# of dummy moves    1        312      5           312

4.4 Performance Measure Comparison

In Sect. 3 we listed the performance measures according to which the quality of our heuristics is evaluated. Table 4 describes an initial profile of instance I_2 with 8 links. It is easy to see that BRD would converge into e_6 if no dummies are used. Table 5 summarizes the strengths and weaknesses of the different heuristics when applied on I_2. Clearly, the naive solution is the most efficient in terms of BR-steps and dummy moves; on the other hand, the number of dummies it requires to reach e_1 is significantly higher. The max cost-reduction heuristic needs more dummies than the other algorithms but dominates the number of steps. The balancing algorithm moves the dummies only once for every link in E_gain, so the number of dummy moves is very low, and the number of dummies is lower than in the naive solution. Finally, the exhaustive heuristic achieves the social optimum using the least number of dummies.

5 Exploit a Given Number of Dummy Players

In this section we present our heuristics for finding the lowest achievable social cost using a limited budget of k dummies. Notice that if we know that e_1 cannot become the BR of any link using the given k, then, as explained in the previous section, migrating players out of e_1 cannot be helpful.



We modify the algorithms presented in Sect. 4 for the new goal. As we elaborate below, the naive approach and the balancing heuristic are only slightly modified: their destination link may be more expensive than e_1. The two other heuristics, namely max cost-reduction and exhaustive, have a different version for the new goal. Recall that in the naive solution (see Sect. 3.2) the algorithm locates the dummies on the target link. When the number of dummies is limited, we simply calculate the minimal l such that c_l/(n_l + k + 1) < c_{e_min^p}/n_{e_min^p}^p and c_l/(n_l + k + 1) ≤ min_{i ≠ e_min^p} c_i/(n_i + 1).

The corresponding link e_l is the solution of the naive algorithm with a budget of k dummies.
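A short sketch of this budget-constrained naive rule (our own code and variable names): scan the links in cost order and return the first one that k dummies can turn into a best response of e_min^p. It assumes the costs are sorted non-decreasingly, as in the model; the loop always terminates by the time it reaches e_min^p itself.

def naive_with_budget(costs, loads, k):
    m = len(costs)
    joining = [costs[i] / (loads[i] + 1) for i in range(m)]
    e_min = max((i for i in range(m) if joining[i] == min(joining)),
                key=lambda i: costs[i])
    if loads[e_min] == 0:            # the most attractive link is empty:
        return e_min                 # players converge there with no dummies needed
    for l in range(m):               # links are sorted: c_1 <= ... <= c_m
        j = costs[l] / (loads[l] + k + 1)
        if j < costs[e_min] / loads[e_min] and all(
                j <= joining[i] for i in range(m) if i != e_min):
            return l                 # cheapest destination achievable with k dummies
    return e_min

costs = [3000] + [3100] * 6 + [6000]
loads = [0] + [200] * 6 + [600]
print(naive_with_budget(costs, loads, 150))   # 1, i.e. e_2 is the best naive destination
print(naive_with_budget(costs, loads, 300))   # 0, i.e. e_1 itself becomes reachable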

5.1 Max Cost-reduction Heuristic

Recall that in this heuristic, players are migrated out of e_min^p into a link in BR_{e_min^p}. With a given budget of k dummies we run the same algorithm, and keep track of the lowest-cost link that was a target of some migration during the run.

Algorithm 4 Max Cost-reduction Heuristic (optimization version)
1: repeat
2:    Calculate E_min^p. Let e_min^p ∈ E_min^p be a link with max cost.
3:    if e_1 ∈ E_min^p or e_1 ∈ BR_{e_min^p}(p_{1^+}) then
4:       return e_1
5:    else
6:       Let e_mcr^p be a link for which (c_i/n_i − c_i/(n_i + 1)) is maximal.
7:       place k dummies on e_mcr^p.
8:       activate a player from e_min^p (creates a migration from e_min^p to e_mcr^p).
9:       remove k dummies from e_mcr^p.
10:   end if
11: until a loop has been detected (profile p = p^{−2})
12: return the minimal i s.t. e_i ∈ E_min^p or e_i ∈ BR_{e_min^p}(p_{i^+}) in any seen p

5.2 Balancing Heuristic

Next, we describe how the balancing heuristic is tuned for the budget problem. Recall that the idea is to calculate a target load vector and lead the players on {e_2, . . . , e_n} to the corresponding configuration. With a given number of dummies, we calculate the minimal l such that it is possible to balance the players on {e_{l+1}, . . . , e_n}, thus



leading the players to e_l. That is, the original algorithm is applied only on a subset of the links. Recall that once e_l is a BR of some link, then all other players would benefit from joining it, in particular those on e_1, . . . , e_{l−1}.

5.3 Exhaustive Heuristic

In its decision version (Algorithm 3), the exhaustive heuristic is used to decide whether convergence to e_1 is possible with k dummies. We now show that without changing the algorithm we can answer the optimization question, namely, what is the lowest-cost link we can converge to using k dummies.

Algorithm 5 Exhaustive Heuristic (optimization version)
1: repeat
2:    Calculate E_min^p, let e_min^p ∈ E_min^p be a link with max cost, and let price_min^p = c_{e_min^p}/(n_{e_min^p}^p + 1).
3:    if e_1 ∈ E_min^p or e_1 ∈ BR_{e_min^p}(p_{1^+}) then
4:       return 1
5:    else if E_t^p ≠ ∅ then
6:       place k dummies on e_t^p.
7:       activate a player from e_min^p (creates a migration from e_min^p to e_t^p).
8:       remove k dummies from e_t^p.
9:    end if
10: until no player has been activated
11: return the minimal i s.t. e_i ∈ E_min^p or e_i ∈ BR_{e_min^p}(p_{i^+})

Based on Claim 3.2, when the algorithm returns i such that e_i ∈ E_min^p or e_i ∈ BR_{e_min^p}(p_{i^+}), then we can converge to e_i. We show that links that were the BR of some link will always be able to attract a player from some link, and that links that were not the BR of any link, until we exhausted our effort to make e_1 the BR of some link, will not be able to become the BR of any link. This implies that the returned link is indeed the best achievable link. The following claims and observations will be used in the analysis of the exhaustive heuristic. The first claim shows that price_min is monotonically increasing during the algorithm.

Claim 5.1 price_min^{p+1} ≥ price_min^p.

Proof Consider any profile p. If the algorithm does not terminate after profile p, then a migration into a link e_t^p s.t. e_a ∈ BR_{e_min^p}(p_{a^+}) and c_a/(n_a^p + 2) > price_min^p is performed. In the resulting profile p^{+1}:
1. n_{e_min^p}^{p+1} = n_{e_min^p}^p − 1, therefore c_{e_min^p}/(n_{e_min^p}^{p+1} + 1) > c_{e_min^p}/(n_{e_min^p}^p + 1) = price_min^p.
2. n_a^{p+1} = n_a^p + 1, therefore c_a/(n_a^{p+1} + 1) = c_a/(n_a^p + 2) > price_min^p.
3. For every e_i ∉ {e_a, e_min^p}, n_i^{p+1} = n_i^p, therefore c_i/(n_i^{p+1} + 1) = c_i/(n_i^p + 1) ≥ price_min^p.
We conclude that price_min^{p+1} ≥ price_min^p.

∎ p

The next claim shows that the load on any link e_m ∈ E_min^p monotonically decreases.

p

Claim 5.2 For a profile p and e_m ∈ E_min^p, n_m monotonically decreases after profile p.

p

Proof Consider any profile p and e_m ≠ e_1, e_m ∈ E_min^p. We show that e_m ∉ E_t^{p+i} for any i ∈ N. Let profile p^{+j}, where j ∈ N, be the first profile where e_m is not e_min^{p+j}. If e_min^p = e_m then j > 0, one player migrated out of it in every iteration until profile p^{+j}, meaning n_m^{p+j} < n_m^p and c_m/(n_m^{p+(j−1)} + 2) ≤ price_min^{p+(j−1)} = price_min^{p+j}. Else j = 0 and profile p^{+j} is p, meaning n_m^{p+j} = n_m^p and c_m/(n_m^p + 2) < c_m/(n_m^p + 1) = price_min^p.

Since pricemin is monotonically increasing (Using Claim 5.1) em ∈ / E t , for p+z p+i any j ≤ l < z where em ∈ E min . Therefore, we showed that em will not be in E t p p for any i ∈ N, when it is the emin , and after it is the emin , meaning no player will ∎ migrate to em making it monotonically decreasing from profile p. We now combine Claims 5.1 and 5.2 to show that, pricemin strictly increases p every |E min | iterations and at the worst case after n − 1 iterations. p

p+|E min |

Observation 5.3 For a given profile p, pricemin

p

> pricemin

Proof From the structure of the algorithm, in every iteration a player migrates out ce p p p p p a > pricemin . If |E min | = 1, then p+1min = of emin to a link ea = et where n pc+2 a

ce p

n

>

min p p −1+1 emin

p

ea , emin

ci p +1

ni

ce p

n

=

min p p +1 emin

+1

=

p pricemin

ci p n i +1

n

ca p +1

and

na

+1

=

p

p

p

p

p pricemin , p {ea } ∪ E min p+1

ca p +1

na

ci p +1 n i +1 p

+1

=

>

+1

p +1 emin

and

p

∀ei / = p

> pricemin , Therefore pricemin > pricemin . Else |E min | > p

p+1

1, using Claim 5.2 ea ∈ / E min and emin ∈ / E min . pricemin and

ca p n a +2

p pricemin

=

ca p n a +2

ci p n i +1

p

ce p min p +1 n p +1 emin p

=

ce p min p n p −1+1 emin p

> pricemin and ∀ei ∈ E min \ emin

ci p +1

ni

>

+1

ce p min p n p +1 emin

=

ci p n i +1

= =

and ∀ei ∈ / p > pricemin . Therefore we are in the same case only p

|E min | = |E min | − 1, meaning after |E min | iteration pricemin will strictly increase. ∎ The next claim shows that if ea can attract players in some profile p, then it can attract additional players in any profile p ' that succeeds p.



Claim 5.4 For a profile p and profile p +i (where i > 0) reached in a later stage of p+i p ( p + ), then either e ∈ B R +i ( p +i + ) or e = e our algorithm. If ea ∈ B Remin p a a a a min . e min

p ( p + ) but a proProof Assume by contradiction that for some profile p, ea ∈ B Remin a

p+i

/ B Re p+i ( p +i a + ) and ea /= emin . Using Claims file p +i can be reached where ea ∈ p+i

p

5.1 and 5.2, n e p < n e p min

min

p+i

min

p+i

p

and pricemin ≥ pricemin . Furthermore, if n a

p

≥ na ,

p ( p +i + ) and using Claim 5.3 meaning he only gained players, then ea ∈ B Remin a

p+i

ea ∈ B Re p+i ( p +i a + ) or ea = emin . min +i

Else n a < n a , then for some profile p + j , where 0 < j < i reached between p+ j p+i p and p +i emin = ea . If ea is still the minimum then clearly ea = emin , else +i p it is not the minimum and as seen in Claim 5.2 p+ica+k+1 < pricemin meaning p

p

na

ea ∈ B Re p+i ( p +i a + ).



min

We turn to show that if a link was not the BR of any link at no point of the algorithm it cannot become the BR using k dummies. p ( p + ) or Claim 5.5 For a profile p +i (i ≥ 0) where not e1 ∈ E min ore1 ∈ B Remin 1 p p p ( p + ) then we cannot converge into e / E min and ea ∈ B Remin E t / = ∅. If ea ∈ a a

p

p ( p + ) but a proProof Assume by contradiction that for some profile p, ea ∈ B Remin a

p+i

/ B Re p+i ( p +i a + ) and ea /= emin . Using Claims file p +i can be reached where ea ∈ p+i

p

5.1 and 5.2, n e p < n e p min

min

p+i

min

p+i

p

and pricemin ≥ pricemin . Furthermore, if n a

p

≥ na ,

p ( p +i + ) and using Claim 5.3 meaning he only gained players, then ea ∈ B Remin a

p+i

ea ∈ B Re p+i ( p +i a + ) or ea = emin . min +i

Else n a < n a , then for some profile p + j , where 0 < j < i reached between p+ j p+i p and p +i emin = ea . If ea is still the minimum then clearly ea = emin , else +i p it is not the minimum and as seen in Claim 5.2 p+ica+k+1 < pricemin meaning p

p

ea ∈ B Re p+i ( p +i a + ). min

na



6 Experimental Results

In this section we present some experimental results, achieved by simulating the heuristics presented in Sects. 4 and 5. The heuristics were performed on random instances with random initial profiles. We created a test-base consisting of four classes. The first class, denoted Random, includes instances with a random number of links (between 6 and 20); for each link there was a randomly generated cost between 2,000 and 100,000 and a load between



Fig. 2 Percentage of random profiles in which the addition of dummy players is beneficial, and the heuristics are more efficient than the naive solution

0 and 10,000. All values were drawn assuming a uniform distribution over their range. Figure 2 shows that in the majority of the random profiles the heuristics are redundant, but in the profiles in which they are not, they can decrease the needed number of dummies significantly. In order to emphasize the differences between the heuristics, we included in our test-base only instances for which, in at least two heuristics, the required number of dummies is lower than the number of dummies required by the naive solution. The second class of instances is the most challenging one. It is denoted “Beat The Competitor”. In the initial profiles of this class, e_2 has the highest initial load; therefore, the social optimum, e_1, has an attractive competitor. The third class of instances is denoted “Beat The Giants”. In the initial profiles of this class, the initial load on e_1 is very low. There are a few heavily loaded links whose cost is several times the cost of e_1. Without the use of dummies, any BRD will converge to one of them. In addition, there is a larger number of contender links that are cheaper than the heavy links and less attractive than them, but they can attract players from the heavy links with fewer dummies than e_1. The last class of instances is denoted “Beat The Median”; it is a generalization of the above class. It is similar to Beat The Giants with the addition of a few links that are even heavier than the heavy links of Beat The Giants, and the players on those links pay around the same cost as the players on the contender links. We first present our results for the problem considered in Sect. 4: what is the number of dummies required to converge to the social optimum. Thus, we fix the social cost of the final profile and measure the 3 other parameters characterizing the quality of a solution. Figure 3 shows the number of dummies required to converge to the social optimum, scaled relative to the naive solution and averaged over all instances in the test-base. As shown in the figure, the heuristics can achieve the social optimum with significantly fewer dummies than the naive solution. In particular, the exhaustive heuristic is always the solution that uses the least number of dummies.



Fig. 3 Number of dummies required in order to converge to the social optimum, compared to the naive solution

Fig. 4 Length of the BR-sequence, measured by the average number of migrations performed by each player that is not on e_1 in p^0

Only in 1% of the random profiles was the balancing algorithm better than the naive solution. On the other hand, for the classes Beat The Giants/Median, the balancing phase is essential. Figure 4 compares the length of the BR-sequence till convergence to e_1, scaled by n_1̄, the number of players that need to migrate to e_1; that is, how many times each player is activated on average. Recall that in the naive solution, every player, except for the players assigned on e_1 in p^0, migrates exactly once. Clearly, this is our lower bound. We can see that all the heuristics perform well with respect to this measure. Specifically, even in the longest sequences, players migrate on average at most 1.125 times. In the more specific profiles Beat The Giants/Median the length of the BR-sequence is larger than in the naive solution, while for Random and Beat The Competitor the lengths for all heuristics are close to the optimal solution. The next measure we consider is the number of times the dummy players are moved. Again, the naive heuristic provides the lower bound, as the dummy players are assigned exactly once, on e_1. Figure 5 presents the results for this measure,



Fig. 5 Number of dummy moves, scaled by the number of links

Fig. 6 “Beat The Giants”: social cost by algorithm and percentage of dummies relative to the naive solution, divided by the social optimum

scaled by the number of links in the network. The naive solution assigns the dummy players only once, and the balancing heuristic moves the dummies at most |E_gain| times. In contrast, the exhaustive and max cost-reduction heuristics may migrate the dummies many times. We turn to present our results for the problem considered in Sect. 5: what is the lowest-cost achievable link, given a limited budget k of dummy players. Thus, the interesting measure of a heuristic is the resulting social cost. Figure 6 presents the social cost achieved by each heuristic, where the given k is a parameter expressed as a percentage of k_naive(p^0) (naive is the minimal i we can achieve using one assignment of dummies on some link). In the chart, the social cost is divided by the social optimum, showing how far the result is from the social optimum. We can see that in Random and “Beat The Competitor” the separation from the naive solution appears only at the higher percentages of dummies, while in Beat The Giants/Median there is a difference even at the lower percentages.



7 Conclusions and Open Problems

In this work we demonstrated the power of a temporal addition of dummy players to a game. The dummy players initiate a dynamic in which the players are encouraged to reach a Nash equilibrium profile of better quality. We suggested several heuristics for operating the dummies, and analyzed their quality, distinguishing between the number of dummy players, the value of the final solution, and the convergence time. Our main message is that the use of dummy players may significantly improve the equilibrium inefficiency. In general, finding an optimal algorithm for exploiting the dummies is NP-hard. However, as we show, even simple heuristics may need 30% fewer dummies than a naive solution, while the length of the BR-sequence increases only by a factor of 1.06. Practically, in real-world applications such as routing, the possibility of creating a controlled fake load, or of temporarily disabling the use of some resources, can help in leading the players to routes that improve the global system's performance. The addition of dummy players is only one possible perturbation of a stable solution. It would be interesting to study the power of additional temporal perturbations in resource allocation games that refer not only to the set of participating clients, but also to the set of resources, e.g., temporal closure or addition of resources, or a temporal change in the resources' activation cost.





Index-Matrix Interpretation of a Two-Stage Three-Dimensional Intuitionistic Fuzzy Transportation Problem

Velichka Traneva and Stoyan Tranev

Abstract The transportation problem (TP) is a special type of linear programming problem where the objective is to minimise the cost of distributing a product from a number of sources or origins to a number of destinations. In the classical TP, the values of the transportation costs, availability and demand of the products are clearly defined. The pandemic situation caused by Covid-19 and rising inflation lead to unclear and rapidly changing values of the TP parameters. Uncertain values can be represented by fuzzy sets (FSs), proposed by Zadeh. But there is a more flexible tool for modeling a vague information environment: the intuitionistic fuzzy sets (IFSs) proposed by Atanassov, which, in comparison with fuzzy sets, also have a degree of hesitancy. In this paper we present an index-matrix approach for modeling and solving a two-stage three-dimensional transportation problem (2-S 3-D IFTP), extending the two-stage two-dimensional problem proposed in Traneva and Tranev (2021), in which the transportation costs, supply and demand values are intuitionistic fuzzy pairs (IFPs) depending on locations, diesel prices, road conditions, weather, time and other factors. Additional constraints are included in the problem: limits for the transportation costs. Its main objective is to determine the quantities of delivery from producers and resellers to buyers that maintain the supply and demand requirements at each time (location, etc.) at the cheapest intuitionistic fuzzy transportation cost, extending the 2-S 2-D IFTP from Traneva and Tranev (2021). The solution algorithm is demonstrated by a numerical example.

Keywords Intuitionistic fuzzy sets · Index matrices · Transportation problem

Work on Sects. 1 and 3.1 is supported by the Asen Zlatarov University through project Ref. No. NIX-440/2020 “Index matrices as a tool for knowledge extraction”. Work on Sects. 2 and 3.2 is supported by the Asen Zlatarov University through project Ref. No. NIX-449/2021 “Modern methods for making management decisions”.

V. Traneva (B) · S. Tranev
Prof. Asen Zlatarov University, Prof. Yakimov Blvd, Bourgas 8000, Bulgaria
e-mail: [email protected]
S. Tranev
e-mail: [email protected]




1 Introduction The basic TP was originally developed by Hitchcock Hitchcock (1941). In some TPs, consumers cannot get all the required quantity of product due to limited storage capacity. In this case, the necessary quantities of products are sent to the destinations in two stages. Initially, the minimum destination requirements are sent from the sources to the destinations. Once part of the entire initial shipment has been used up, they are ready to receive the remaining quantity in the second stage. This type of transportation problem is known as two-stage TPs (2-S TPs). Its purpose is to transport the items from the origins to the destinations in two stages such way that the total transportation costs in the two stages are minimum Malhotra and Malhotra (2002). The pandemic situation caused by Covid-19 and rising inflation determine the unclear and rapidly changing values of TP parameters. The values of transportation cost, the demanded and offered quantities of the product are uncertain due to climatic, road conditions, pandemy, inflation or other market conditions. Zadeh proposed the fuzzy set (FS) theory Zadeh (1965) in 1963 to represent uncertain values. Ituitionistic fuzzy sets (IFSs), which have hesitancy degree, were proposed by Atanassov in 1983 Atanassov (1983) as an extension of FSs. In the next exposition of the paper, a brief historical overview of the methods for solving the fuzzy and intuitionistic fuzzy transportation problem is outlined. Chanas et al., in 1984, proposed a fuzzy linear programming model for solving TPs with clear transportation costs, fuzzy supply and demand values Chanas et al. (1984). Chanas and Kuchta Chanas and Kuchta (1996), in 1996, described a method to find the crisp optimal solution of such fuzzy transportation problems in which all the cost parameters are represented by LR type fuzzy numbers. Gen et al. have given a genetic algorithm for finding an optimal solution of a bicriteria solid TP with fuzzy numbers (FNs) Gen et al. (1995). Jimenez and Verdegay, in 1999, researched fuzzy Solid TP with trapezoidal FNs and presented a genetic approach for solving FTP Jimenez and Verdegay (1999). Kikuchi Kikuchi (2022) has represented all the parameters of FTPs by triangular fuzzy numbers and used the fuzzy linear programming approach to find the set of values such that the smallest membership grade among them is maximized. Liu and Kao Liu and Kao (2004) have demostrated a method, based on Zadeh’s extension principle, to find the optimal solution of the trapezoidal FTPs. Liu and Kao Liu and Kao (2004) have extended the existing method proposed in their paper Liu and Kao (2004) to find a fuzzy solution of such FTPs in which all the parameters are represented by trapezoidal fuzzy numbers. Dinagar and Palanivel Dinagar and Palanivel (2009) have described fuzzy Vogel’s approximation method and modified distribution method for determining an initial solution of trapezoidal FTPs. Pandian and Natarajan, in 2010, described zero point method for solution for FTP with trapezoidal fuzzy parameters Pandian and Natarajan (2010). Improved zero point methods were described in Karthy and Ganesan (2019), Pandian and Natarajan (2010), Samuel (2012), Samuel (2014) for solving trapezoidal and triangular FTP. Guzel Guzel (2010) proposed a method to find the crisp optimal solution of FTPs with triangular fuzzy parameters. Kaur and Kumar,



in 2012, introduced fuzzy least cost method, fuzzy north west corner rule and fuzzy Vogel approximation method for determining of an optimal solution of FTP Kaur and Kumar (2012). Basirzadeh Basirzadeh (2011) has found a fuzzy optimal solution of fully FTPs by transforming the fuzzy parameters into the crisp parameters using classical algorithms. Venkatachalapathy Samuel (2014) proposed a new method, a modified Vogel’s approximation method, for finding a fuzzy optimal solution of fully fuzzy transportation problems. A comparative analysis on the FTPs Mohideen and Kumar (2010) was made and the conclusion has given that the zero point method is better than both the modified distribution method and Vogel’s Approximation method. Patil and Chandgude, in 2012, performed “Fuzzy Hungarian approach” for TP with trapezoidal FNs Patil and Chandgude (2012). Jahihussain and Jayaraman, in 2013, presented a zero suffix method for obtaining an optimal solution for FTPs with triangular and trapezoidal FNs (see Jahirhussain and Jayaraman 2013a, b). Fully FTPs is resolved in Dhanasekar et al. (2017), in 2017, using a new method, based on the Hungarian and MODI algorithm. Two new methods for finding a fuzzy optimal solution of TPs with the LR flat fuzzy numbers are proposed by Kaur, Kacprzyk and Kumar Kaur et al. (2020), based on the tabular representation and on the fuzzy linear programming formulation. Aggarwal and Gupta, in 2013, described an procedure for solving intuitionistic fuzzy TP (IFTP) with trapezoidal IFNs via ranking method Gupta et al. (2016). In 2012 Gani et al. (2012) introduced the solution of IFTP by using zero suffix algorithms. Gani and Abbas, in 2014 Gani and Abbas (2014), and Kathirvel, and Balamurugun, in 2012 (see Kathirvel and Balamurugan 2012, 2013), proposed a method for solving TP in which the quantities demanded and offered are represented in the form of the trapezoidal intuitionistic FNs. Antony et al. used Vogel’s approximation method for solving triangular IFTP in 2014 Antony (2014). “PSK method” for finding an optimal solution to IFTPs was presented by Kumar and Hussain in 2015 Kumar and Hussain (2015). Fuzzy methods of 2-S time minimizing TPs are presented in Gani and Razak (2006), Kaur and Kumar (2012). 2-S time minimizing TP have considered in Bharati and Malhotra (2017) over triangular intuitionistic fuzzy (IF) numbers. Trapezoidal and triangular IFSs are special cases of IFSs. In our previous works Traneva (2016), Traneva et al. (2016), Traneva and Tranev (2020a, b), we have proposed for the first time an intuitionistic fuzzy modified distribution algorithm, a zero suffix and a zero point method to determine an optimal solution of the IFTP, interpreted by the IFSs and IMs Atanassov (1983, 1987) concepts. In Kamini and Sharma (2020) is proposed the solution of the transportation problem (TP) in the form of intuitionistic fuzzy logic by using Zero point maximum allocation method. In Traneva and Tranev (2020a), we have proposed for the first time intuitionistic fuzzy zero point method (IFZPM) to solve optimally a type of IFTP. The constraints are formulated to the problem additionally: limits to the transportation costs. In this paper we present an index-matrix approach for finding of an optmal solution of a two-stage three-dimensional transportation problem (2-S 3-D IFTP), extending the two-stage two-dimensional problem proposed in Traneva and Tranev (2021), in which the transportation parameters are intuitionistic fuzzy pairs (IFPs). 
Additional constraints are included in the problem: limits for the trans-

190

V. Traneva and S. Tranev

portation costs. In the scientific literature has not been studied so far similar type of TP. The paper is structured as follows: Sect. 2 performs the basic concept of the concepts of the IMs and the IFPs. In Sect. 3 proposes 2-S 3-D IFTP and also an algorithm for its optimal solution using the concepts of IMs and IFSs. The reliability of the proposed approach is demonstrated by an example in the same section. Section 5 outlines the conclusion and some directions for future research.

2 Basic Definitions of the Concepts of Index Matrices and Intuitionist Fuzzy Logic 2.1 Short Remarks on Intuitionistic Fuzzy (IF) logic Let us denote an ordered pair ⟨ a , b⟩ = ⟨ μ( p), ν( p)⟩ , where a, b ∈ [0, 1] and a + b ≤ 1 intuitionistic fuzzy pair (IFP), that is used as an evaluation of a proposition p (see Atanassov (2017), Atanassov et al. (2013)). μ( p) and ν( p) respectively determine the “truth degree” (degree of membership) and “falsity degree” (degree of nonmembership). We recall some basic operations and relations with two IFPs x = ⟨ a , b⟩ and y = ⟨ c, d⟩ Atanassov (2012), Atanassov et al. (2013), Atanassov (2018-2019), De et al. (2000), Riecan and Atanassov (2010), Szmidt and Kacprzyk (2009): ¬x = ⟨ b, a⟩ ; x ∧1 y = ⟨ min(a, c), max(b, d)⟩ ; x ∨1 y = ⟨ max(a, c), min(b, d)⟩ ; x ∧2 y = x + y = ⟨ a + c − a.c, b.d⟩ ; x ∨2 y = x.y = ⟨ a .c, b + d − b.d⟩ ; α.x = ⟨ 1 − (1 − a)α , bα ⟩ (for α = n or 1/n (n ∈ N )); x − y = ⟨ max(0, a − c), min(1, b + d, 1 − a + c)⟩ .

x:y=

(1)

⎧ ⎨

⟨ min(1, a/c), min(max(0, 1 − a/c), max(0, (b − d)/(1 − d)))if c = 0 &d = 1 ⎩ ⟨ 0, 1⟩ otherwise

x ≥ y iff a ≥ c and b ≤ d; x ≤ y iff a ≤ c and b ≥ d; x ≤ y iff a ≤ c; x ≥ y iff a ≥ c; x ≤ y iff b ≥ d; x ≥ y iff b ≤ d; x=y iff a = c and b = d iff R⟨ a ,b⟩ ≤ R⟨ c,d⟩ , x ≥R y where R⟨ a ,b⟩ = 0.5(2 − a − b)(1 − a) Szmidt and Kacprzyk (2009).

(2)

(3)

Index-Matrix Interpretation of a Two-Stage …

191

2.2 Definition, Operations and Relations Over 3-D Intuitionistic Fuzzy IMs The concept of an index matrix (IM) was introduced in 1987 in Atanassov (1987) to enable two matrices with different dimensions to be summed. This apparatus was extended with different operations, relations and operators over IM. Their properties were studied in a series of papers and were summirized in the books Atanassov (2014), Traneva and Tranev (2017). One of the basic types IMs are intuitionistic fuzzy IMs (IFIMs) whose elements are IFPs Atanassov (2017). 3-dimensional intuitionistic fuzzy index matrices (3-D IFIMS) were introduced in Atanassov (2014). Let I be a fixed set. By three-dimensional intuitionistic fuzzy index matrix (3-D IFIM) [K , L , H, {⟨ μki ,l j ,h g , νki ,l j ,h g ⟩ }] with index sets K , L and H (K , L , H ⊂ I), we denote the object: l1 hg ∈ H ⟨ μk1 ,l1 ,h g , νk1 ,l1 ,h g ⟩ k1 ≡ .. .. . . km

... lj . . . ⟨ μk1 ,l j ,h g , νk1 ,l j ,h g ⟩ .. .. . .

... ln . . . ⟨ μk1 ,ln ,h g , νk1 ,ln .h g ⟩ , .. .. . .

⟨ μkm ,l1 ,h g , νkm ,l1 ,h g ⟩ . . . ⟨ μkm ,l j ,h g , νkm ,l j ,h g ⟩ . . . ⟨ μkm ,ln ,h g , νkm ,ln ,h g ⟩

where for every 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ g ≤ f : 0 ≤ μki ,l j ,h g , νki ,l j ,h g , μki ,l j ,h g + νki ,l j ,h g ≤ 1. Let us be given IMs A = [K , L , H, {⟨ μki ,l j ,h g , νki ,l j ,h g ⟩ }] and B = [P, Q, E, {⟨ ρ pr ,qs ,ed , σ pr ,qs ,ed ⟩ }]. Some basic operations with IMs have the following forms Atanassov (2014), Traneva (2014): Negation ¬A = [K , L , H, {⟨ νki ,l j ,h g , μki ,l j ,h g ⟩ }]. Termwise subtraction-(max,min): A −(max,min) B = A ⊕(max,min) ¬B. Termwise multiplication-(min, max) A ⊗(min,max) B = [K ∩ P, L ∩ Q, H ∩ R, {⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩ }], where ⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩ = ⟨ min(μki ,l j ,h g , ρ pr ,qs ,ed ), max(νki ,l j ,h g , σ pr ,qs ,ed )⟩ . The definition of the operations A ⊗(min,min) B and A ⊗(min,max) B are similar. Multiplication: A (◦,∗) B=[K ∪ (P − L), Q ∪ (L − P){⟨ φtu ,vw , ψtu ,vw ⟩ }], where ⟨ φtu ,vw , ψtu ,vw ⟩ is defined in Atanassov (2014) and ⟨ ◦, ∗⟩ ∈ {⟨ m ax, min⟩ , ⟨ m in, max⟩ , ⟨ ∧2 , ∨2 ⟩ }. Addition-(◦, ∗) A ⊕(◦,∗) B=[K ∪ P, L ∪ Q, H ∪ E, {⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩ }], where ⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩

192

V. Traneva and S. Tranev

⎧ ⎪ ⎪⟨ μki ,l j ,h g , νki ,l j ,h g ⟩ , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨⟨ ρ pr ,qs ,ed , σ pr ,qs ,ed ⟩ ,

if tu = ki ∈ K , vw = l j , x y = h g ∈ H − E or tu = ki ∈ K , vw = l j ∈ L − Q, x y = h g ∈ H or tu = ki ∈ K − P, vw = l j ∈ L , x y = h g ∈ H if tu = pr ∈ P, vw = qs ∈ Q, x y = ed ∈ E − H or tu = pr ∈ P, vw = qs ∈ Q − L , x y = ed ∈ E = ⎪ ⎪ or tu = pr ∈ P − K , vw = qs ∈ Q, x y = ed ∈ E ⎪ ⎪ ⎪ ⎪ , ρ ), if t ⟨ ◦ (μ ⎪ k ,l ,h p ,q ,e u = ki = pr ∈ K ∩ P, vw = l j = qs ∈ L ∩ Q i j g r s d ⎪ ⎪ ⎪ , σ )⟩ , and x y = h g = ed ∈ H ∩ E ∗(ν ⎪ k ,l ,h p ,q ,e i j g r s d ⎪ ⎩ ⟨ 0, 1⟩ , otherwise where ⟨ ◦, ∗⟩ ∈ {⟨ max, min⟩ , ⟨ min, max⟩ , ⟨ average,average⟩ }. Transposition: A′ is the transposed IM of A. Reduction: We use symbol “⊥” for lack of some component in the separate definitions. In some cases, it is suitable to change this symbol with “0". The operation (k, ⊥, ⊥)-reduction of a given IM A is defined by: A(k,⊥,⊥) = [K − {k}, L , H, ⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩ ], where ⟨ φtu ,vw ,x y , ψtu ,vw ,x y ⟩ = ⟨ μki ,l j ,h g , νki ,l j ,h g ⟩ for tu = ki ∈ K − {k}, vw = l j ∈ L and x y = h g ∈ H. The definitions of the (⊥, l, ⊥)- and (⊥, ⊥, h)-reductions are analogous. Projection: Let M ⊆ K , N ⊆ L and U ⊆ H. Then, pr M,N ,U A = [M, N , U, {bki ,l j ,h g }], where for each ki ∈ M, l j ∈ N and h g ∈ U, bki ,l j ,h g = aki ,l j ,h g . Substitution: Let the IM A = [K , L , H, {ak,l,h }] be given. Local substitution over A is defined for the pair of indices ( p, k) by ⏋ ⎡ p ⏋ ⎡ ; ⊥; ⊥ A = (K − {k}) ∪ { p}, L , H, {ak,l,h } . k Index type operations: Index type operations were proposed in Traneva (2015) and the forms of some of them are as follows: AG I ndex{(min / max)/(min / max )/(min / max )(min R / max R )}(⊥)(∈F) ( A) = ⟨ ki , l j , h g ⟩ / presents the index of the minimum/ maximum element between the elements of A, whose indexes ∈ / F, with no empty value in accordance with the relations (3). I ndex{(min / max)/(min / max )/(min / max )(min R / max R )}(⊥),ki ,h g (A) = {⟨ ki , lv1 , h g ⟩ , . . . , ⟨ ki , lvx , h g ⟩ , . . . , ⟨ ki , lvV , h g ⟩ }, where ⟨ ki , lvx , h g ⟩ (for 1 ≤ i ≤ m, 1 ≤ v ≤ n, 1 ≤ x ≤ V , 1 ≤ g ≤ f ) are the indices of the minimum/ maximum IFFP of the ki -th index of the dimension K

Index-Matrix Interpretation of a Two-Stage …

193

of A for a fixed h g with no empty value in accordance with the relations (3). I ndex(⊥)/(1) ( A) = {⟨ k1 , lv1 , h g ⟩ , . . . , ⟨ ki , lvi , h g ⟩ , . . . , ⟨ km , lvm , h g ⟩ }, where ⟨ ki , lvi , h g ⟩ (for 1 ≤ i ≤ m, 1 ≤ g ≤ f ) are the indices of the elements of A, whose cell are full or equal to 1. I ndex(max μ(ν)),ki ,h g (A) = {⟨ ki , lv1 , h g ⟩ , . . . , ⟨ ki , lvx , h g ⟩ , . . . , ⟨ ki , lvV ,h g ⟩ }, where ⟨ ki , lvx , h g ⟩ (for 1 ≤ i ≤ V , 1 ≤ x ≤ n, 1 ≤ g ≤ f ) are the indices of the IFFP of the ki -th row of A for a fixed h g , for which μ(ν)ki ,lvx ,h g is maximum. Aggregation operations Let us use the operations #q , (q ≤ i ≤ 3) from Traneva et al. (2019) for scaling aggregation operations over two IFPs x = ⟨ a , b⟩ and y = ⟨ c, d⟩ : x#1 y = ⟨ m in(a, c), max(b, d)⟩ ; x#2 y = ⟨ a verage(a, c), average(b, d)⟩ ; x#3 y = ⟨ m ax(a, c), min(b, d)⟩ . The following inequality holds: x#1 y ≤ x#2 y ≤ x#3 y Traneva et al. (2019). / K will be fixed index. The definition of the aggregation operation by the Let k0 ∈ dimension K is Atanassov (2014), Traneva et al. (2019): hg ∈ H α K ,#q (A, k0 ) =

k0

l1

...

ln

m

m

i=1

i=1

#q ⟨ μki ,l1 ,h g , νki ,l1 ,h g ⟩ . . . #q ⟨ μki ,ln ,h g , νki ,ln ,h g ⟩

,

where 1 ≤ q ≤ 3. Aggregate global internal operation Traneva (2015): AG I O⊕(max,min) (A) . This operation finds the addition of all elements of A. Internal subtraction of the components of the IM A (Traneva 2015; Traneva and Tranev 2017; Traneva et al. 2019): ⟨ ⟩ I O−(min,max) ( ki , l j , h g , A , ⟨ pr , qs , ed , B⟩ ) = [K , L , H, {⟨ γtu ,vw ,z y , δtu ,vw ,z y ⟩ }], where ki ∈ K , l j ∈ L, h g ∈ H ; pr ∈ P, qs ∈ Q, ed ∈ E and ⟨ γtu ,vw ,z y , δtu ,vw ,z y ⟩ ⎧ ⟨ μtu ,vw ,z y , νtu ,vw ,z y ⟩ , ⎪ ⎪ ⎪ ⎪ ⎨ = ⎪ ⎪ ⎪ ⟨ max(0, μki ,l j ,h g − ρ pr ,qs ,ed ), ⎪ ⎩ min(1, νki ,l j ,h g + σ pr ,qs ,ed , 1 − μki ,l j ,h g + ρ pr ,qs ,ed )⟩ ,

if tu = ki ∈ K , vw = l j ∈ L , z y = h g ∈ H if tu = ki ∈ K , vw = l j ∈ L , z y = h g ∈ H.

194

V. Traneva and S. Tranev

The non-strict relation “inclusion about value” The form of this type of relations between two IMs A and B is as follows: A ⊆v B iff (K = P)&(L = Q)&(H = R)&(∀k ∈ K )(∀l ∈ L)(∀h ∈ H )(ak,l,h ≤ bk,l,h ).

3 Index-Matrix Interpretation of a Two-Stage Three-dimensional Intuitionistic Fuzzy TP In this section we will extend the 2-S IFTP, proposed from us in Traneva and Tranev (2021) to a three-dimensional one (2-S 3-D IFTP): A transportation company supplies a product to different companies after it has been delivered by different manufacturers. The global pandemic situation and rising inflation caused by Covid-19 are leading to unclear and rapidly changing parameters. Destinations cannot get all the required quantity of product in a time-moment due to limited storage capacity. In this case, the necessary quantities of products are sent to the destinations in two stages in a time-moment. Initially, the minimum destination requirements are sent from the sources to the destinations in a time-moment. Once part of the entire initial shipment has been used up, they are ready to receive the remaining quantity in the second stage in a time-moment. The company want to find an optimal solution for the 2-S 3-D IFTP. First stage A transportation company supplies a product to n different companies (consumers) {l1 , . . . , l j , . . . , ln } in a time-moment h g (g = 1, ..., f ) after delivery of that product from different m manifacturers (producers) {k1 , . . . , ki , . . . , km } in quantities cki ,R,h g (for i = 1, ..., m; g = 1, ..., f ). H is a fixed time scale and h g is its element. The interpretation of H may also be different such as location or other. Let the consumers (destinations) in a time-moment h g (from a location h g ) need this product in quantities of c Q,l j ,h g (for i = 1, ..., m; g = 1, ..., f ). Let cki ,l j ,h g be intuitionistic fuzzy cost for transporting one unit quantity of the product from the ki -th producer to the l j -th consumer at time h g ; xki ,l j ,h g - the number of units of the product, transported from the ki -th source to the l j -th destination at time h g and IFPs c pl,l j ,h g (for j = 1, ..., n; g = 1, ..., f ) are limits for the transportation costs of a delivery a product from the ki -th manifacturer to the l j -th destination at h g . cki ,l j ,T - the quantity of units of the product transported by the ki -th producer to the l j -th consumer for the defined time period. Second stage Let some of the buyers RS = {l1∗ , . . . , l ∗j ∗ , . . . , ln∗∗ } (RS ⊂ L) become resellers in a time-moment h g (g=1, ..., f ). The resellers {l1∗ , . . . , l ∗j ∗ , . . . , ln∗∗ } want to sell quantities of the product not only purchased, but also from own production or stocks at a surplus charge cl∗∗∗ ,q ∗ ,h g for an product unit to other consumers j {u 1 , . . . , u e , . . . , u w }, in quantities cl∗∗∗ ,R ∗ ,h g (for 1 ≤ j ∗ ≤ n ∗ ) in a time-moment h g . j Users need this product in an amount of c∗Q ∗ ,u e ,h g (for 1 ≤ e ≤ w, 1 ≤ g ≤ f ). Let cl∗∗∗ ,u e ,h g (for 1 ≤ j ∗ ≤ n ∗ , 1 ≤ e ≤ w, 1 ≤ g ≤ f ) be the total cost for the purj chase of one unit quantity of the product from the l ∗j ∗ -th reseller to u e -th destination in a time-moment h g (1 ≤ g ≤ f ); xl∗∗∗ ,u e ,h ∗∗ – the number of units of the prodj

g

Index-Matrix Interpretation of a Two-Stage …

195

uct, transported from the l ∗j ∗ -th reseller to u e -th destination in a time-moment h g ; cl∗∗∗ , pu ∗ ,h g ( for 1 ≤ j ∗ ≤ n ∗ , 1 ≤ g ≤ f ) – is the price of a product unit of the l ∗j ∗ -th j reseller; c∗pl ∗ ,u e ,h g ( for 1 ≤ e ≤ w, 1 ≤ g ≤ f ) – upper limit of the price at which the u e -th consumer wish to purchase the product in a time-moment h g . The values of the specified parameters are IFPs. For estimating the parameters of 2-S 3-D IFTP in the form of IFPs, we can use the expert approach described in detail in Atanassov (2012). Each expert needs to evaluate at least a part of the alternatives in terms of their performance with respect to each defined criterion. The experts is not sure about the transportation costs due the climatic and traffic conditions, or economic factors. He hesitates in prediction of the transportation cost due to changes in some uncontrollable factors. The transportation costs are evaluated as intuitionistic fuzzy numbers after a thorough discussion, interpreted by the intuitionistic fuzzy concept: these numbers express a “positive” and a “negative” evaluations, respectively. The reliability of the expert assessment (confidence in her/his evaluation with respect to each criterion) may be involved in the evaluation process. The purpose of the TP is how to meet the requests of all users {l1 , . . . , l j , . . . , lm } and {u 1 , . . . , u e , . . . , u w } from the two stages so that the intuitionistic fuzzy transportation cost is minimum for the entire period of time according to (3).

3.1 Algorithm for Finding an Optimal Solution of the 2-S 3-D IFTP

Here, we propose an algorithm for finding an optimal solution of this type of 2-S 3-D IFTP in an uncertain environment by means of index-matrix tools.

3.1.1 Solution of the First Stage of the 2-S 3-D IFTP

Step 1. At the start of the algorithm for solving the 2-S 3-D IFTP, we construct the cost IM

  C[K, L, H, {c_{k_i,l_j,h_g}}],  c_{k_i,l_j,h_g} = ⟨μ_{k_i,l_j,h_g}, ν_{k_i,l_j,h_g}⟩,

whose rows are indexed by K = {k_1, k_2, ..., k_m, Q, pl, pu_1}, whose columns are indexed by L = {l_1, l_2, ..., l_n, R, pu} and whose third dimension is indexed by H = {h_1, h_2, ..., h_f, T}. For 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ g ≤ f, all elements {c_{k_i,l_j,h_g}, c_{k_i,R,h_g}, c_{k_i,pu,h_g}, c_{pl,l_j,h_g}, c_{pl,R,h_g}, c_{pl,pu,h_g}, c_{Q,l_j,h_g}, c_{Q,R,h_g}, c_{Q,pu,h_g}, c_{pu_1,l_j,h_g}, c_{pu_1,R,h_g}, c_{pu_1,pu,h_g}} are IFPs. Let us denote by |K| = m + 3 the number of elements of the set K; then |L| = n + 2 and |H| = f + 1. The algorithm creates the IM


  X[K_I, L_I, H_I, {x_{k_i,l_j,h_g}}],  x_{k_i,l_j,h_g} = ⟨ρ_{k_i,l_j,h_g}, σ_{k_i,l_j,h_g}⟩,

where K_I = {k_1, k_2, ..., k_m}, L_I = {l_1, l_2, ..., l_n}, H_I = {h_1, h_2, ..., h_f}, and 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ g ≤ f. Then we create the following auxiliary index matrices:

(1) S = [K, L, H, {s_{k_i,l_j,h_g}}] such that S = C, i.e. s_{k_i,l_j,h_g} = c_{k_i,l_j,h_g} ∀k_i ∈ K, ∀l_j ∈ L, ∀h_g ∈ H;
(2) D[K_I, L_I, H_I, {d_{k_i,l_j,h_g}}], where for i = 1, ..., m; j = 1, ..., n; g = 1, ..., f: d_{k_i,l_j,h_g} ∈ {1, 2} depending on whether the element s_{k_i,l_j,h_g} of S is crossed out with 1 or 2 lines;
(3) RC[K_I, e_0, h_g, {rc_{k_i,e_0,h_g}}], where for 1 ≤ i ≤ m: rc_{k_i,e_0,h_g} ∈ {0, 1} depending on whether the k_i-th index of K_I of the matrix S is crossed out at time h_g;
(4) CC[r_0, L_I, h_g, {cc_{r_0,l_j,h_g}}], where for 1 ≤ j ≤ n: cc_{r_0,l_j,h_g} ∈ {0, 1} depending on whether the l_j-th index of L_I of the matrix S is crossed out at time h_g;
(5) RM[K_I, pu, H_I] = pr_{K_I,pu,H_I} C and CM[pu_1, L_I, H_I] = pr_{pu_1,L_I,H_I} C;
(6) U[K_I, L_I, H_I, {u_{k_i,l_j,h_g}}], where for 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ g ≤ f:

  u_{k_i,l_j,h_g} = 0, if c_{k_i,l_j,h_g} < c_{pl,l_j,h_g};  u_{k_i,l_j,h_g} = 1, otherwise;

(7) When starting the algorithm, rc_{k_i,e_0,h_g} = cc_{r_0,l_j,h_g} = 0, u_{k_i,l_j,h_g} = 1, x_{k_i,l_j,h_g} = ⟨0, 1⟩, c_{k_i,pu,h_g} = ⟨⊥, ⊥⟩, c_{pu_1,l_j,h_g} = ⟨⊥, ⊥⟩ (∀k_i ∈ K_I, ∀l_j ∈ L_I, ∀h_g ∈ H_I). Go to Step 2.
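To make the bookkeeping of Step 1 concrete, the following hedged Python sketch stores the 3-D cost IM and the auxiliary structures as dictionaries keyed by (row, column, time) triples. It reuses the IFP class (and its assumed rank()) from the sketch after the problem statement; the helper name and the use of plain dicts instead of genuine index matrices are illustrative assumptions only.

```python
from typing import Dict, Tuple

Key = Tuple[str, str, str]   # (k_i, l_j, h_g) -- producer, consumer, time moment

def build_auxiliary(C: Dict[Key, "IFP"], K_I, L_I, H_I):
    """Step 1: working copy S of the cost IM plus the bookkeeping matrices."""
    S = dict(C)                                                   # (1) S = C
    D = {(k, l, h): 0 for k in K_I for l in L_I for h in H_I}     # (2) 0/1/2 covering lines
    RC = {(k, h): 0 for k in K_I for h in H_I}                    # (3) crossed-out rows
    CC = {(l, h): 0 for l in L_I for h in H_I}                    # (4) crossed-out columns
    # (6) u = 0 iff the cost is strictly below the limit c_{pl,l_j,h_g};
    # a "smaller" IFP has the LARGER ranking value (relation (3))
    U = {(k, l, h): 0 if C[(k, l, h)].rank() > C[("pl", l, h)].rank() else 1
         for k in K_I for l in L_I for h in H_I}
    # (7) empty transportation plan: every delivery starts as <0, 1>
    X = {(k, l, h): IFP(0.0, 1.0) for k in K_I for l in L_I for h in H_I}
    return S, D, RC, CC, U, X
```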


Step 2. For solving the first stage of the 2-S 3-D IFTP we can apply one of the algorithms outlined in our papers Traneva et al. (2016), Traneva and Tranev (2020a, b). Let us apply the intuitionistic fuzzy zero point method for determining the optimal solution of the TP with intuitionistic fuzzy costs, demand and supply (Traneva and Tranev 2020a). The proposed procedure is described by pseudocode and a mathematical formulation. Let us create the 3-D IFIM C for the given problem and then convert it into a balanced one (Σ_{i=1}^m c_{k_i,R,h_g} = Σ_{j=1}^n c_{Q,l_j,h_g}), if it is not balanced. The program executes the following operations:

– We define the 3-D IMs S1[Q, L*, h_g] = pr_{Q,L*,h_g} C and S2[K*, R, h_g] = pr_{K*,R,h_g} C, and let {k_{m+1}, l_{n+1}} ∉ K ∪ L. We need the following IM operations:
– If α_{K,#q}(S1, l_{n+1}) ⊃_v [Q/R; ⊥](α_{L,#q}(S2, l_{n+1}))′ (i.e. Σ_{i=1}^m c_{k_i,R,h_g} > Σ_{j=1}^n c_{Q,l_j,h_g}), then introduce a dummy column l_{n+1} having all its costs equal to ⟨0, 1⟩ and execute operations for finding the demand at this dummy destination: c_{Q,l_{n+1},h_g} = Σ_{i=1}^m c_{k_i,R,h_g} − Σ_{j=1}^n c_{Q,l_j,h_g}.
{Let us define 2-D IMs S3, S4, S5 such that S3 = α_{K,#q}(S1, l_{n+1}) −_(max,min) [Q/R; ⊥](α_{L,#q}(S2, l_{n+1}))′; S4 = [K_I, l_{n+1}, h_g, {⟨0, 1⟩}]; S5 = [K, l_{n+1}, h_g, {c_{k_i,l_{n+1},h_g}}] = S3 ⊕_(max,min) S4. The new matrix of costs is obtained by carrying out the operation "matrix addition": C := C ⊕_(max,min) S5; go to Step 3.}
– If [⊥; R/Q](α_{K,#q}(S1, k_{m+1}))′ ⊂_v (α_{L,#q}(S2, k_{m+1}))′ (i.e. Σ_{i=1}^m c_{k_i,R,h_g} < Σ_{j=1}^n c_{Q,l_j,h_g}), then introduce a dummy row k_{m+1} having all its costs equal to ⟨0, 1⟩ and execute operations for finding the supply at this dummy source: c_{k_{m+1},R,h_g} = Σ_{j=1}^n c_{Q,l_j,h_g} − Σ_{i=1}^m c_{k_i,R,h_g}.
{Let us define 2-D IMs S3, S4, S5 such that S3 = α_{K,#q}(S2, k_{m+1}) −_(max,min) [⊥; R/Q](α_{L,#q}(S1, k_{m+1}))′; S4 = [k_{m+1}, L_I, h_g, {⟨0, 1⟩}]; S5 = [k_{m+1}, L*, h_g, {c_{k_{m+1},l_j,h_g}}] = S3 ⊕_(max,min) S4; C := C ⊕_(max,min) S5; go to Step 3.}

Step 3. Checking the conditions for limiting the transportation costs
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   {If [k_i/pl; ⊥] pr_{pl,l_j,h_g} C ⊆_v pr_{k_i,l_j,h_g} C, then u_{k_i,l_j,h_g} = 1.}
E_G = Index_{(1)}(U) = {⟨k_{i_1}, l_{j_1}, h_{g_1}⟩, ⟨k_{i_2}, l_{j_2}, h_{g_2}⟩, ..., ⟨k_{i_φ}, l_{j_φ}, h_{g_φ}⟩};


For each ⟨k_i, l_j, h_g⟩ ∈ E_G, let the element s_{k_i,l_j,h_g} of S be equal to ⟨1, 0⟩ (Lalova et al. 1980); go to Step 4.

Step 4. Determination of zero membership value – K_I level
For each index k_i of K_I at a fixed h_g (g = 1, ..., f) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (j = 1, ..., n) is found in accordance with relations (3) and is saved to the right of the row, in the column pu. It is then subtracted from all elements s_{k_i,l_j,h_g}, for j = 1, 2, ..., n.
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   If c_{k_i,pu,h_g} = ⟨⊥, ⊥⟩, then
    {AGIndex_{(min)/(min_□)/(min_◊)/(min_R)}(pr_{k_i,L_I,h_g} S) = ⟨k_i, l_{v_j}, h_g⟩;
     If u_{k_i,l_{v_j},h_g} = 0, then {S6[k_i, l_{v_j}, h_g] = pr_{k_i,l_{v_j},h_g} S; S7 = [⊥; pu/l_{v_j}] S6; S := S ⊕_(max,min) S7.}}
After the subtraction we get the matrix S with reduced prices.
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   {IO_{−(max,min)}(⟨k_i, l_j, h_g, S⟩, ⟨k_i, pu, h_g, pr_{K_I,pu,H_I} S⟩)};
Go to Step 5.

Step 5. Determination of zero membership value – L_I level
For each index l_j of L_I at a fixed h_g (g = 1, ..., f) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (i = 1, ..., m) is found. It is saved at the bottom of the column, in the row pu_1, and is subtracted from all elements s_{k_i,l_j,h_g}, for i = 1, 2, ..., m.
for g = 1 to f
 for j = 1 to n
  for i = 1 to m
   If c_{pu_1,l_j,h_g} = ⟨⊥, ⊥⟩, then
    {AGIndex_{(min)/(min_□)/(min_◊)/(min_R)}(pr_{K_I,l_j,h_g} S) = ⟨k_{w_i}, l_j, h_g⟩;
     If u_{k_i,l_j,h_g} = 0, then {we create two 3-D IMs S6 and S7: S6[k_{w_i}, l_j, h_g] = pr_{k_{w_i},l_j,h_g} S; S7 = [pu_1/k_{w_i}; ⊥] S6; S := S ⊕_(max,min) S7;}}
We get the matrix S with reduced costs.
for j = 1 to n
 for g = 1 to f
  for i = 1 to m
   {IO_{−(max,min)}(⟨k_i, l_j, h_g, S⟩, ⟨pu_1, l_j, h_g, pr_{pu_1,L_I,H_I} S⟩)};
Go to Step 6.

Step 6. Determination of zero membership value – H_I level
For each index k_i of K_I at a fixed l_j (j = 1, ..., n) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (g = 1, ..., f) is found and is recorded as the value of the element s_{k_i,l_j,T}. It is subtracted from all elements s_{k_i,l_j,h_g}, for g = 1, 2, ..., f, and we go to Step 7.


for j = 1 to n
 for i = 1 to m
  for g = 1 to f
   If c_{k_i,l_j,T} = ⟨⊥, ⊥⟩, then
    {AGIndex_{(min)/(min_□)/(min_◊)/(min_R)}(pr_{k_i,l_j,H_I} S) = ⟨k_i, l_j, h_{z_g}⟩;
     If u_{k_i,l_j,h_g} = 0, then {we create two 3-D IMs S6 and S7: S6[k_i, l_j, h_{z_g}] = pr_{k_i,l_j,h_{z_g}} S; S7 = [⊥; ⊥; T/h_{z_g}] S6; S := S ⊕_(+) S7}}.
For each index k_i of K_I of S at a fixed l_j (j = 1, ..., n) we subtract the smallest element s_{k_i,l_j,h_{z_g}} from all elements s_{k_i,l_j,h_g} (g = 1, ..., f). We create the IM B = pr_{K_I,l_j,T} S.
for i = 1 to m
 for j = 1 to n
  for g = 1 to f
   {IO_{−(min,max)}(⟨k_i, l_j, h_g, S⟩, ⟨k_i, l_j, T, B⟩)}.

Step 7. Optimality criterion
(1) Check whether each quantity of product offered at time h_g is less than or equal to the total quantity of product required whose reduced costs have zero membership degrees.
for g = 1 to f
 for i = 1 to m
  {Index_{(min μ),k_i}(S) = {⟨k_i, l_{v_1}, h_g⟩, ..., ⟨k_i, l_{v_x}, h_g⟩, ..., ⟨k_i, l_{v_V}, h_g⟩};
  We create 3-D IMs as follows: G_{v_1}[k_i, l_{v_1}, h_g] = pr_{k_i,l_{v_1},h_g} C, ..., G_{v_V}[k_i, l_{v_V}, h_g] = pr_{k_i,l_{v_V},h_g} C and G[k_i, R, h_g] = pr_{k_i,R,h_g} C;
  If G[k_i, R, h_g] ⊆_v G_{v_1} +_(max,min) ... +_(max,min) G_{v_x} + ... +_(max,min) G_{v_V}, then go to Step 7(2), else {RM[k_i, pu, h_g] = 1 and go to Step 8.}}
(2) Check whether each required quantity at time h_g is less than or equal to the total offered quantity whose reduced costs have zero membership degrees.
for g = 1 to f
 for j = 1 to n
  {Index_{(min μ),l_j,h_g}(S) = {⟨k_{w_1}, l_j, h_g⟩, ..., ⟨k_{w_y}, l_j, h_g⟩, ..., ⟨k_{w_W}, l_j, h_g⟩};
  We define 3-D IMs as follows: G_{w_1}[k_{w_1}, l_j, h_g] = pr_{k_{w_1},l_j,h_g} C, ..., G_{w_W}[k_{w_W}, l_j, h_g] = pr_{k_{w_W},l_j,h_g} C, and G[Q, l_j, h_g] = pr_{Q,l_j,h_g} C;
  If G[Q, l_j, h_g] ⊆_v G_{w_1} +_(max,min) ... +_(max,min) G_{w_y} + ... +_(max,min) G_{w_W}, then go to Step 10, else {CM[pu_1, l_j, h_g] = 1 and go to Step 8.}}

Step 8. Revise the cost IM
All elements ⟨0, 1⟩ in S are crossed out with a minimum number of lines (horizontal, vertical or both). If there is no element ⟨0, 1⟩ in a given row or column at time h_g (for g = 1, ..., f), then the element with the minimum degree of membership is crossed out from that row or column in the cost


IM S obtained in Step 7. This step introduces the IM D[K_I, L_I, H_I], which has the same dimensions as the matrix X. We use it to mark whether an element of S is crossed out with a line in the direction of the dimension K_I or L_I at time h_g, for g = 1, ..., f. If d_{k_i,l_j,h_g} = 1 (for 1 ≤ i ≤ m; 1 ≤ j ≤ n; 1 ≤ g ≤ f), then s_{k_i,l_j,h_g} is crossed out with one line; if d_{k_i,l_j,h_g} = 2, then the element s_{k_i,l_j,h_g} is covered with two lines. We create the IMs CC[r_0, L_I, h_g] and RC[k_i, e_0, h_g], in which it is recorded that some indices k_i (i = 1, ..., m) of K_I at time h_g, or l_j (j = 1, ..., n) of L_I at a fixed h_g, are covered with a line in the matrix S.
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   – If {s_{k_i,l_j,h_g} = ⟨0, 1⟩ (or ⟨k_i, l_j, h_g⟩ ∈ Index_{(min μ),k_i,h_g}(S)), rm_{k_i,pu,h_g} = 0 and d_{k_i,l_j,h_g} = 0}, then {rc_{k_i,e_0,h_g} = 1; d_{k_i,l_j,h_g} = 1 ∀l_j; pr_{K_I,L_I,h_g} S_{(k_i,⊥,⊥)}};
   – If {s_{k_i,l_j,h_g} = ⟨0, 1⟩ (or ⟨k_i, l_j, h_g⟩ ∈ Index_{(min μ),l_j,h_g}(S)), cm_{pu_1,l_j,h_g} = 0 and d_{k_i,l_j,h_g} = 1}, then {cc_{r_0,l_j,h_g} = 1; d_{k_i,l_j,h_g} = 2 ∀k_i; pr_{K_I,L_I,h_g} S_{(⊥,l_j,⊥)}}.

Step 9. Develop the new revised cost IM
We select the minimum IF cost of S at time h_g, using relations (3), that is not crossed by the lines from Step 8, subtract it from each of its uncovered elements at time h_g, and add it to each of its elements that is covered by two lines at time h_g. We then return to Step 7.
for g = 1 to f
 {AGIndex_(min,max)(pr_{K_I,L_I,h_g} S) = ⟨k_x, l_y, h_g⟩ (this operation finds the index of the smallest element among the elements of the matrix pr_{K_I,L_I,h_g} S).
 Subtract s_{k_x,l_y,h_g} from each uncrossed element of the IM pr_{K_I,L_I,h_g} S with reduced prices: IO_{−(max,min)}(⟨S⟩, ⟨k_x, l_y, h_g, pr_{K_I,L_I,h_g} S⟩).
 We add it to each element of pr_{K_I,L_I,h_g} S that is crossed out by two lines, i.e. d_{k_i,l_j,h_g} = 2:
 for i = 1 to m
  for j = 1 to n
   {if d_{k_i,l_j,h_g} = 2, then create S1 = pr_{k_x,l_y,h_g} C; S2 = pr_{k_i,l_j,h_g} C ⊕_(max,min) [k_i/k_x; l_j/l_y; ⊥] S1; S := S ⊕_(max,min) S2;
    if d_{k_i,l_j,h_g} = 1, then S := S ⊕_(+) pr_{k_i,l_j,h_g} C}.}
Go to Step 7.

Step 10. Determination of a cell for allocation
For g = 1 to f {
(1) Use relations (3) to select the largest IF cost in the IM pr_{K_I,L_I,h_g} S at time h_g (g = 1, ..., f). If a tie exists, use an arbitrary tie-breaking choice. Let us denote this cell by c_{k_{x*},l_{y*},h_g}: AGIndex_(max,min)(pr_{K_I,L_I,h_g} S) = ⟨k_{x*}, l_{y*}, h_g⟩;
(2) Select a single cost with zero degree of membership for allocation, corresponding to the k_{x*}-th producer and/or the l_{y*}-th consumer at time h_g if it exists, and assign the most


possible to that cost cell, and strike out the satisfied IF supply or IF demand. Let s_{k_e,l_{e*},h_g} = min(s_{Index_{(min μ),k_{x*}}(pr_{K*,L*,h_g})}, s_{Index_{(min μ),l_{y*}}(pr_{K*,L*,h_g})}). Then the minimum of the required and offered quantities is assigned to the corresponding cell s_{k_e,l_{e*},h_g}, and the row/column with exhausted required or offered quantity at time h_g is deleted, for g = 1, ..., f. In this way we find the reduced IM S.
We find the minimum between the elements s_{k_e,R,h_g} and s_{Q,l_{e*},h_g} by the following operations. We create the IMs S8[k_e, R, h_g] = pr_{k_e,R,h_g} S and S9[Q, l_{e*}, h_g] = pr_{Q,l_{e*},h_g} S.
– If S8 ⊆_v [k_e/l_{e*}; Q/R; ⊥](S9)′ (i.e. min(s_{k_e,R,h_g}, s_{Q,l_{e*},h_g}) = s_{k_e,R,h_g}), then
{X := X ⊕_(max,min) [⊥; l_{e*}/R; ⊥] S8. We obtain a new form of the IM pr_{K_I,L_I,h_g} S with dimensions (m + 2) × (n + 2) × 1 by deleting the k_e-th index of K_I from pr_{K_I,L_I,h_g} S using the operation "reduction": pr_{K_I,L_I,h_g} S_{(k_e,⊥,⊥)}. Let us create the IM S10 as follows: S10[Q, l_{e*}, h_g] = S9 −_(max,min) [Q/R; l_{e*}/k_e; ⊥](S8)′. Then S := S ⊕_(max,min) S10;}
– If S8 ⊇_v [k_i/l_j; Q/R; ⊥](S9)′ (i.e. min(s_{k_i,R,h_g}, s_{Q,l_j,h_g}) = s_{Q,l_j,h_g}), then
{the IM X changes with X := X ⊕_(max,min) [k_e/Q; ⊥; ⊥] S9. We obtain a new form of pr_{K_I,L_I,h_g} S with dimensions (m + 3) × (n + 1) × 1 by reduction of the l_{e*}-th column of pr_{K_I,L_I,h_g} S at time h_g. Let us construct the IM S11 as follows: S11[k_e, R, h_g] = S8 −_(max,min) [k_e/l_{e*}; Q/R; ⊥](S9)′; S := S ⊕_(max,min) S11;}
Repeat (2) and (3) until |pr_{K_I,L,h_g} S| = 6 (all the required quantities are satisfied and all the offered quantities are exhausted at time h_g), i.e. until pr_{K_I,L_I,h_g} S is reduced to the form S[K_r, L_r, h_g] (columns R, pu):
  Q:    ⟨μ_{Q,R,h_g}, ν_{Q,R,h_g}⟩       ⟨μ_{Q,pu,h_g}, ν_{Q,pu,h_g}⟩
  pl:   ⟨μ_{pl,R,h_g}, ν_{pl,R,h_g}⟩     ⟨μ_{pl,pu,h_g}, ν_{pl,pu,h_g}⟩
  pu_1: ⟨μ_{pu_1,R,h_g}, ν_{pu_1,R,h_g}⟩ ⟨μ_{pu_1,pu,h_g}, ν_{pu_1,pu,h_g}⟩
}
Go to Step 11.

Step 11.
For g = 1 to f {
D_I = Index_⊥(pr_{K_I,L_I,h_g} X) = {⟨k_{i*_1}, l_{j*_1}, h_g⟩, ..., ⟨k_{i*_f}, l_{j*_f}, h_g⟩, ..., ⟨k_{i*_φ}, l_{j*_φ}, h_g⟩}.
If the intuitionistic fuzzy feasible solution at time h_g is degenerate, i.e. it contains fewer than m + n − 1 occupied cells (the total number of producers and consumers decreased by 1 at time h_g) in pr_{K_I,L_I,h_g} X, that is |D_I| < m + n − 1 (Atanassov 1994), then we increase the basic cells x_{k_i,l_j,h_g} of pr_{K_I,L_I,h_g} X by one cell, namely the one to which the minimum transportation cost at time h_g corresponds; let the recorded delivery of this cell be ⟨0, 1⟩. The IM operations are:
If |D_I| < (m + n − 1) then


{AGIndex_{{(min/max)/(min_□/max_□)/(min_◊/max_◊)/(min_R/max_R)}(⊥)(∉ D_I)}(pr_{K_I,L_I,h_g} C) = ⟨k_α, l_β, h_g⟩; x_{k_α,l_β,h_g} = ⟨0, 1⟩}. }
Go to Step 12.

Step 12.
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   {If ⟨k_i, l_j, h_g⟩ ∈ E_G and x_{k_i,l_j,h_g} ≠ ⟨⊥, ⊥⟩, then {the problem has no solution (Atanassov 1994) and the algorithm stops; go to Step 13.}}
Otherwise all the required and offered quantities are exhausted and the algorithm stops. The optimal basic solution X_opt[K_I, L_I, H_I, {x_{k_i,l_j,h_g}}] is obtained.
for g = 1 to f
 for i = 1 to m
  for j = 1 to n
   If x_{k_i,l_j,h_g} = ⟨⊥, ⊥⟩, then x_{k_i,l_j,h_g} = ⟨0, 1⟩.
The optimal intuitionistic fuzzy transportation cost is
  AGIO_{⊕_1(max,min)}(C_{({Q,pl,pu_1},{R,pu},T)} ⊗_(min,max) X_opt)
or
  AGIO_{⊕_2(∨_2)}(C_{({Q,pl,pu_1},{R,pu},T)} ⊗_(∧_2) X_opt),
where ∨_2 and ∧_2 are the operations from (1). Go to Step 14.

Step 13. End.
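Steps 4–6 and 10 repeatedly look up the smallest or the largest IFP in a row, a column or a time slice of S (the AGIndex operations). A hedged helper pair for this search, built on the IFP sketch from the beginning of Sect. 3 and its assumed ranking; the function names are ours:

```python
from typing import Dict, Tuple

Key = Tuple[str, str, str]

def arg_min_ifp(cells: Dict[Key, "IFP"]) -> Key:
    """Index of the smallest IFP in the sense of relation (3):
    the pair with the LARGEST ranking value (farthest from <1, 0>)."""
    return max(cells, key=lambda idx: cells[idx].rank())

def arg_max_ifp(cells: Dict[Key, "IFP"]) -> Key:
    """Index of the largest IFP: the pair with the smallest ranking value."""
    return min(cells, key=lambda idx: cells[idx].rank())

# e.g. the Step 4 row minimum for producer k_i at time h_g could be located as
#   arg_min_ifp({(k_i, l, h_g): S[(k_i, l, h_g)] for l in L_I})
```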

3.1.2 Solution of the Second Stage of the 2-S 3-D IFTP

To find the optimal solution of the second stage of the problem, we propose the following algorithm, described by pseudocode and a mathematical formulation.

Step 14. Let us create the cost IFIM

  C*[L*, U, H, {c*_{l*_{j*},u_e,h_g}}],

whose rows are indexed by L* = {l*_1, ..., l*_{j*}, ..., l*_{n*}, Q*, pl*, pu*_1} (L* ⊂ L), whose columns are indexed by U = {u_1, ..., u_e, ..., u_w, R*, q*, pu*} and whose third dimension is indexed by H = {h_1, ..., h_g, ..., h_f, T}. For 1 ≤ j* ≤ n*, 1 ≤ e ≤ w, 1 ≤ g ≤ f, the elements {c*_{l*_{j*},u_e,h_g}, c*_{l*_{j*},R*,h_g}, c*_{l*_{j*},q*,h_g}, c*_{l*_{j*},pu*,h_g}, c*_{pu*_1,q*,h_g}, c*_{pu*_1,u_e,h_g}, c*_{pu*_1,R*,h_g}, c*_{Q*,u_e,h_g}, c*_{pl*,u_e,h_g}} and c*_{pu*_1,pu*,h_g} are IFPs with the meanings given in the 2-S 3-D IFTP.


We also define the IFIM

  X*[L*_J, U_J, H_J, {x*_{l*_{j*},u_e,h_g}}],  x*_{l*_{j*},u_e,h_g} = ⟨ρ_{l*_{j*},u_e,h_g}, σ_{l*_{j*},u_e,h_g}⟩,

where L*_J = {l*_1, l*_2, ..., l*_{n*}}, U_J = {u_1, u_2, ..., u_w}, H_J = {h_1, ..., h_g, ..., h_f} and, for 1 ≤ j* ≤ n*, 1 ≤ e ≤ w, 1 ≤ g ≤ f, x*_{l*_{j*},u_e,h_g} is the number of units of the product transported from the l*_{j*}-th reseller to the u_e-th destination at a time moment h_g. Go to Step 15.

Step 15. Let us construct the IFIM C1 = pr_{RS,R*,H_I}((α_{K_I,#q}(X_opt, R*, H_I))^T). Then C* := C* ⊕_(◦,∗) C1. Thus the quantities of the product purchased in this way by the resellers from the set RS at a time moment h_g are set in the column R* of the matrix C*. The elements {c*_{l*_{j*},q*,h_g}, c*_{Q*,u_e,h_g}, c*_{pl*,u_e,h_g}} (for 1 ≤ j* ≤ n*, 1 ≤ e ≤ w, 1 ≤ g ≤ f) are also introduced in C*. Then the IM E[K/{Q, pl, pu_1}, L/{R, pu}, H_I] = C_{({Q,pl,pu_1},{R,pu},{T})} ⊗_(min,max) X_opt is created. Go to Step 16.

Step 16. Through the following operations we find the average price at which the l*_{j*}-th reseller ∈ RS has purchased a unit quantity of the product at a time moment h_g. Let us construct the IM C2a = (α_{K_I,#2}(E, pu*, H_I))^T.
For 1 ≤ j* ≤ n* do: {Construct the matrices C2b[l*_{j*}, R*, H_I, {C2a_{l*_{j*},pu*,h_g} / C*_{l*_{j*},R*,h_g}}]; C* := C* ⊕_(◦,∗) [⊥; pu*/R*; ⊥] C2b.} Go to Step 17.

Step 17. The following operations reflect in the column pu* of the matrix C* the final selling prices of one unit quantity of the product together with its surplus charge above the purchase price. Let us construct the matrices C3 = [⊥; pu*/q*; ⊥]{pr_{RS,q*,H_I} C*} and C4 = pr_{RS,pu*,H_I} C*. Then the operation C* := C* ⊕_(◦,∗) C3 ⊗ C4 is executed. Go to Step 18.

Step 18. Through the following operations, the elements c*_{l*_{j*},u_e,h_g} (for 1 ≤ j* ≤ n*, 1 ≤ e ≤ w, 1 ≤ g ≤ f) of the matrix C* will contain the final selling price per unit of the product, including the unit price and the transportation price from the l*_{j*}-th reseller to the u_e-th destination at a time moment h_g. For 1 ≤ j* ≤ n*, 1 ≤ e ≤ w, 1 ≤ g ≤ f do the following:
{C*_{l*_{j*},u_e,h_g} = {pr_{l*_{j*},u_e,h_g} C*} ⊕_(◦,∗) [⊥; u_e/pu*; ⊥]{pr_{l*_{j*},pu*,h_g} C*};


C* := C* ⊕_(◦,∗) C*_{l*_{j*},u_e,h_g}.} Go to Step 19.

Step 19. Determine the optimal plan of the second stage of the 2-S 3-D IFTP – X*[L_J, U, H_J, {x*_{l*_{j*},u_e,h_g}}] – by executing one of the algorithms presented in Traneva et al. (2016), Traneva and Tranev (2020a, b) with the obtained cost IFIM C*. The optimal intuitionistic fuzzy transportation cost at the second stage is calculated by
  AGIO_{⊕_1(max,min)}(C*_{({Q*,pl*,pu*_1},{R*,q*,pu*},T)} ⊗_(min,max) X*_opt)
or
  AGIO_{⊕_2(∧_2)}(C*_{({Q*,pl*,pu*_1},{R*,q*,pu*},T)} ⊗_(∨_2) X*_opt),
where ∨_2 and ∧_2 are the operations from (1). Go to Step 20.

Step 20. The optimal intuitionistic fuzzy transportation cost for the whole problem is calculated by
  AGIO_{⊕_1(max,min)}(C_{({Q,pl,pu_1},{R,pu},T)} ⊗_(min,max) X_opt) ⊕_(max,min) AGIO_{⊕_1(max,min)}(C*_{({Q*,pl*,pu*_1},{R*,q*,pu*},T)} ⊗_(min,max) X*_opt)
or
  AGIO_{⊕_2(∧_2)}(C_{({Q,pl,pu_1},{R,pu},T)} ⊗_(∨_2) X_opt) ⊕_(max,min) AGIO_{⊕_2(∧_2)}(C*_{({Q*,pl*,pu*_1},{R*,q*,pu*},T)} ⊗_(∨_2) X*_opt),
where ∨_2 and ∧_2 are the operations from (1).
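The overall control flow of Steps 1–20 can be summarised by the following sketch. It is not the authors' code: `solve_iftp` stands for any of the cited 3-D IFTP solvers (e.g. the IF zero point method) and `build_second_stage_costs` for Steps 14–18; both are passed in as callables because their internals are problem-specific.

```python
from typing import Callable, Tuple

def solve_2s_3d_iftp(C,
                     solve_iftp: Callable,
                     build_second_stage_costs: Callable) -> Tuple:
    """Hedged two-stage driver for the 2-S 3-D IFTP."""
    X1, cost1 = solve_iftp(C)                  # first stage (Steps 1-13)
    C_star = build_second_stage_costs(C, X1)   # resellers' supplies and prices (Steps 14-18)
    X2, cost2 = solve_iftp(C_star)             # second stage (Step 19)
    total = cost1.add_max_min(cost2)           # Step 20: (max, min) combination of IF costs
    return X1, X2, total
```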

3.2 An Application of the Optimal Algorithm for the 2-S 3-D IFTP

In this subsection we extend the problem of Traneva and Tranev (2021): A transportation company supplies a product to 4 different companies {l_1, l_2, l_3, l_4} at a time moment h_g (g = 1, 2, 3) after delivery of that product from m = 3 different manufacturers (producers) {k_1, k_2, k_3} in quantities c_{k_i,R,h_g} (for i = 1, 2, 3; g = 1, 2, 3). Let the companies {l_1, l_2, l_3, l_4} demand this product in quantities c_{Q,l_j,h_g} (for j = 1, ..., 4; g = 1, 2, 3) at time h_g, and let c_{pl,l_j,h_g} (for 1 ≤ j ≤ 4) be the intuitionistic fuzzy limits on the transportation costs of delivering the product from the k_i-th source to the l_j-th destination. Let some of the buyers RS = {l_1, l_2, l_3} (RS ⊂ L) become resellers. The resellers {l_1, l_2, l_3} want to sell quantities of the product – not only the purchased ones, but also from their own production or stocks – at a surplus charge c*_{l*_{j*},q*,h_g} (for 1 ≤ j* ≤ 3, 1 ≤ g ≤ 3) per product unit to other consumers {u_1, u_2, u_3, u_4}, in quantities c*_{l*_{j*},R*,h_g} (for 1 ≤ j* ≤ 3, 1 ≤ g ≤ 3). The consumers need this product in amounts c*_{Q*,u_e,h_g} (for 1 ≤ e ≤ 4, 1 ≤ g ≤ 3). Let c*_{l*_{j*},u_e,h_g} (for 1 ≤ j* ≤ 3, 1 ≤ e ≤ 4, 1 ≤ g ≤ 3) be the total cost for the purchase of one unit quantity of the product from the l*_{j*}-th reseller by the u_e-th destination at a time moment h_g; x*_{l*_{j*},u_e,h_g} – the number of units of the product transported from the l*_{j*}-th reseller to the u_e-th destination at h_g; c*_{l*_{j*},pu*,h_g} (for 1 ≤ j* ≤ 3) – the price of a product unit of the l*_{j*}-th reseller; c*_{pl*,u_e,h_g} (for 1 ≤ e ≤ 4) – the upper limit of the price at which the u_e-th consumer wishes to purchase the product at the moment h_g. The purpose of the 2-S 3-D IFTP is to meet the requests of all users {l_1, ..., l_4} and {u_1, ..., u_4} in the studied time interval so that the intuitionistic fuzzy transportation cost is minimal.

3.2.1 Solution of the First Stage of the 2-S 3-D IFTP

Step 1. The initial form of the cost IM C[K, L, H/{T}] is given slice by slice (columns l_1, l_2, l_3, l_4, R, pu):

Slice h_1:
  k_1:  ⟨0.6,0.2⟩   ⟨0.7,0.1⟩   ⟨0.3,0.1⟩   ⟨0.8,0.1⟩    ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.5,0.3⟩   ⟨0.4,0.1⟩   ⟨0.5,0.1⟩   ⟨0.3,0.2⟩    ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.4,0.2⟩   ⟨0.3,0.2⟩   ⟨0.6,0.1⟩   ⟨0.7,0.2⟩    ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.4,0.2⟩   ⟨0.5,0.3⟩   ⟨0.6,0.2⟩   ⟨0.06,0.02⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.65,0.3⟩  ⟨0.6,0.4⟩   ⟨0.75,0.2⟩  ⟨0.75,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩       ⟨⊥,⊥⟩       ⟨⊥,⊥⟩       ⟨⊥,⊥⟩        ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_2:
  k_1:  ⟨0.8,0.1⟩   ⟨0.28,0.55⟩  ⟨0.24,0.70⟩  ⟨0.7,0.1⟩   ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.5,0.4⟩   ⟨0.32,0.55⟩  ⟨0.32,0.60⟩  ⟨0.7,0.1⟩   ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.7,0.3⟩   ⟨0.28,0.60⟩  ⟨0.20,0.65⟩  ⟨0.6,0.2⟩   ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.5,0.3⟩   ⟨0.4,0.2⟩    ⟨0.06,0.02⟩  ⟨0.6,0.2⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩  ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩       ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩       ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_3:
  k_1:  ⟨0.7,0.2⟩    ⟨0.5,0.2⟩  ⟨0.12,0.73⟩  ⟨0.8,0.2⟩   ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.6,0.1⟩    ⟨0.7,0.1⟩  ⟨0.10,0.82⟩  ⟨0.7,0.2⟩   ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.7,0.2⟩    ⟨0.5,0.3⟩  ⟨0.12,0.76⟩  ⟨0.7,0.3⟩   ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.06,0.02⟩  ⟨0.4,0.2⟩  ⟨0.6,0.2⟩    ⟨0.5,0.3⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩  ⟨0.85,0.1⟩   ⟨0.85,0.1⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩      ⟨⊥,⊥⟩        ⟨⊥,⊥⟩       ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

where K = {k_1, k_2, k_3, Q, pl, pu_1}, L = {l_1, l_2, l_3, l_4, R, pu}, H/{T} = {h_1, h_2, h_3}, and all elements are IFPs (for 1 ≤ i ≤ 3, 1 ≤ j ≤ 4, 1 ≤ g ≤ 3). Let x_{k_i,l_j,h_g} be the number of units of the product transported from the k_i-th producer to the l_j-th destination at a moment h_g (for 1 ≤ i ≤ 3, 1 ≤ j ≤ 4 and 1 ≤ g ≤ 3); it is an element of the IFIM X with initial elements ⟨⊥, ⊥⟩.

Step 2. The problem is balanced according to the algorithm in Sect. 3.1.

Step 3. Checking the conditions for limiting the transportation costs according to the 2-S 3-D approach defined in Sect. 3.1, the IM C[K, L, H/{T}] is transformed into (columns l_1, l_2, l_3, l_4, R, pu):

Slice h_1:
  k_1:  ⟨0.6,0.2⟩   ⟨1.0,0.0⟩  ⟨0.3,0.1⟩   ⟨1.0,0.0⟩    ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.5,0.3⟩   ⟨0.4,0.1⟩  ⟨0.5,0.1⟩   ⟨0.3,0.2⟩    ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.4,0.2⟩   ⟨0.3,0.2⟩  ⟨0.6,0.1⟩   ⟨0.7,0.2⟩    ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.4,0.2⟩   ⟨0.5,0.3⟩  ⟨0.6,0.2⟩   ⟨0.06,0.02⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.65,0.3⟩  ⟨0.6,0.4⟩  ⟨0.75,0.2⟩  ⟨0.75,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩       ⟨⊥,⊥⟩      ⟨⊥,⊥⟩       ⟨⊥,⊥⟩        ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_2:
  k_1:  ⟨0.32,0.55⟩  ⟨0.28,0.55⟩  ⟨0.24,0.7⟩   ⟨0.7,0.1⟩   ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.2,0.7⟩    ⟨0.32,0.55⟩  ⟨0.32,0.6⟩   ⟨0.7,0.1⟩   ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.28,0.65⟩  ⟨0.28,0.6⟩   ⟨0.2,0.65⟩   ⟨0.6,0.2⟩   ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.5,0.3⟩    ⟨0.4,0.2⟩    ⟨0.06,0.02⟩  ⟨0.6,0.2⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩       ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_3:
  k_1:  ⟨0.21,0.6⟩   ⟨0.15,0.68⟩  ⟨0.12,0.73⟩  ⟨0.8,0.2⟩   ⟨0.4,0.5⟩  ⟨⊥,⊥⟩
  k_2:  ⟨0.18,0.55⟩  ⟨0.21,0.64⟩  ⟨0.1,0.82⟩   ⟨0.7,0.2⟩   ⟨0.5,0.2⟩  ⟨⊥,⊥⟩
  k_3:  ⟨0.21,0.6⟩   ⟨0.15,0.72⟩  ⟨0.12,0.76⟩  ⟨0.7,0.3⟩   ⟨0.7,0.1⟩  ⟨⊥,⊥⟩
  Q:    ⟨0.06,0.02⟩  ⟨0.4,0.2⟩    ⟨0.6,0.2⟩    ⟨0.5,0.3⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩       ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Let us define the IM S = [K, L, H, {s_{k_i,l_j,h_g}}] such that S = C.
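The effect of Step 3 on the h_1 slice can be checked directly with the IFP sketch from the beginning of Sect. 3: a cell is forbidden (set to ⟨1, 0⟩) exactly when its cost is not smaller, in the ranking order of relation (3), than the corresponding limit in the pl row. The comparison below uses our assumed ranking formula and the h_1 data of producer k_1 only.

```python
# h_1 slice: costs of producer k_1 against the column limits in row pl
limits = {"l1": IFP(0.65, 0.3), "l2": IFP(0.6, 0.4), "l3": IFP(0.75, 0.2), "l4": IFP(0.75, 0.1)}
k1_row = {"l1": IFP(0.6, 0.2), "l2": IFP(0.7, 0.1), "l3": IFP(0.3, 0.1), "l4": IFP(0.8, 0.1)}

for l, cost in k1_row.items():
    forbidden = cost.rank() <= limits[l].rank()   # cost >= limit in the ranking order
    print(l, "-> <1,0>" if forbidden else "-> unchanged")
# l1 and l3 stay unchanged, while l2 and l4 exceed their limits and become <1, 0>,
# exactly as in the transformed h_1 matrix shown above
```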

Step 4. Determination of zero membership value – K_I level
For each index k_i of K_I at a fixed h_g (g = 1, ..., f) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (j = 1, ..., n) is found in accordance with relations (3): ⟨a, b⟩ ≤_R ⟨c, d⟩ iff R_{⟨a,b⟩} ≥ R_{⟨c,d⟩}, and is saved to the right of the row, in the column pu. It is subtracted from all elements s_{k_i,l_j,h_g}, for j = 1, 2, ..., n; go to Step 5.

Step 5. Determination of zero membership value – L_I level
For each index l_j of L_I at a fixed h_g (g = 1, 2, 3) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (i = 1, 2, 3) is found. It is saved at the bottom of the column, in the row pu_1, and is subtracted from all elements s_{k_i,l_j,h_g}, for i = 1, 2, 3; go to Step 6.

Step 6. Determination of zero membership value – H_I level
For each index k_i of K at a fixed l_j (j = 1, ..., n) of the matrix S, the smallest element among the elements s_{k_i,l_j,h_g} (g = 1, ..., f) is found and is recorded as the value of the element s_{k_i,l_j,T}. It is subtracted from all elements s_{k_i,l_j,h_g}, for g = 1, 2, ..., f; go to Step 7.

The IM ST = S[K, L, H/{T}] takes the following form after the computations of Steps 4, 5 and 6 (columns l_1, l_2, l_3, l_4, R, pu):

Slice h_1:
  k_1:  ⟨0.0,0.6⟩    ⟨0.67,0.17⟩  ⟨0.0,1.0⟩   ⟨0.64,0.36⟩  ⟨0.5,0.2⟩  ⟨0.3,0.1⟩
  k_2:  ⟨0.2,0.8⟩    ⟨0.0,0.6⟩    ⟨0.2,0.8⟩   ⟨0.0,0.8⟩    ⟨0.7,0.1⟩  ⟨0.3,0.2⟩
  k_3:  ⟨0.92,0.02⟩  ⟨0.0,0.8⟩    ⟨0.3,0.7⟩   ⟨0.4,0.6⟩    ⟨0.4,0.5⟩  ⟨0.3,0.2⟩
  Q:    ⟨0.4,0.2⟩    ⟨0.5,0.3⟩    ⟨0.6,0.2⟩   ⟨0.06,0.02⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.65,0.3⟩   ⟨0.6,0.4⟩    ⟨0.75,0.2⟩  ⟨0.75,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.1,0.4⟩    ⟨0.0,0.4⟩    ⟨0.0,0.2⟩   ⟨0.0,0.4⟩    ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_2:
  k_1:  ⟨0.0,1.0⟩   ⟨0.01,0.99⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩   ⟨0.7,0.1⟩  ⟨0.24,0.7⟩
  k_2:  ⟨0.0,1.0⟩   ⟨0.02,0.98⟩  ⟨0.12,0.88⟩  ⟨0.1,0.9⟩   ⟨0.5,0.2⟩  ⟨0.2,0.7⟩
  k_3:  ⟨0.0,1.0⟩   ⟨0.08,0.92⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩   ⟨0.4,0.5⟩  ⟨0.2,0.65⟩
  Q:    ⟨0.5,0.3⟩   ⟨0.4,0.2⟩    ⟨0.06,0.02⟩  ⟨0.6,0.2⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩  ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.0,1.0⟩   ⟨0.04,0.96⟩  ⟨0.0,1.0⟩    ⟨0.4,0.6⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_3:
  k_1:  ⟨0.0,1.0⟩    ⟨0.0,0.14⟩   ⟨0.12,0.88⟩  ⟨0.04,0.96⟩  ⟨0.4,0.5⟩  ⟨0.12,0.73⟩
  k_2:  ⟨0.08,0.92⟩  ⟨0.01,0.99⟩  ⟨0.0,1.0⟩    ⟨0.02,0.98⟩  ⟨0.5,0.2⟩  ⟨0.1,0.82⟩
  k_3:  ⟨0.01,0.99⟩  ⟨0.03,0.97⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.7,0.1⟩  ⟨0.12,0.76⟩
  Q:    ⟨0.06,0.02⟩  ⟨0.4,0.2⟩    ⟨0.6,0.2⟩    ⟨0.5,0.3⟩    ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.08,0.92⟩  ⟨0.03,0.97⟩  ⟨0.0,1.0⟩    ⟨0.58,0.42⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Step 7. Optimality criterion
(1) Check whether each quantity of product offered at time h_g is less than or equal to the total quantity of product required whose reduced costs have zero membership degrees.
(2) Check whether each required quantity at time h_g is less than or equal to the total quantity offered whose reduced costs have zero membership degrees.
(3) If 7(1) and 7(2) are satisfied, then go to Step 10; else go to Step 8.

Step 8. Revise the cost IM
All elements ⟨0, 1⟩ in S are crossed out with a minimum number of lines (horizontal, vertical or both). If there is no element ⟨0, 1⟩ in a given row or column at time h_g (for g = 1, ..., f), then the element with the minimum degree of membership is crossed out from that row or column in the cost IM S obtained in Step 6.

Step 9. Develop the new revised cost IM
We select the minimum IF cost of ST at time h_g, using relations (3), that is not crossed by the lines from Step 8, subtract it from each of its uncovered elements at time h_g, and add it to each of its elements covered by two lines at time h_g. We return to Step 7.

Steps 7, 8 and 9 are executed twice, and then the algorithm proceeds to Step 10. The IM ST takes the following form after these steps (columns l_1, l_2, l_3, l_4, R, pu):

Slice h_1:
  k_1:  ⟨0.0,1.0⟩   ⟨0.5,0.5⟩  ⟨0.0,0.4⟩    ⟨0.6,0.4⟩    ⟨0.5,0.2⟩  ⟨0.3,0.1⟩
  k_2:  ⟨0.0,1.0⟩   ⟨0.0,1.0⟩  ⟨0.28,0.45⟩  ⟨0.0,0.8⟩    ⟨0.7,0.1⟩  ⟨0.3,0.2⟩
  k_3:  ⟨0.0,0.8⟩   ⟨0.0,0.8⟩  ⟨0.43,0.41⟩  ⟨0.46,0.54⟩  ⟨0.4,0.5⟩  ⟨0.3,0.2⟩
  Q:    ⟨0.4,0.2⟩   ⟨0.5,0.3⟩  ⟨0.6,0.2⟩    ⟨0.06,0.02⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.55,0.3⟩  ⟨0.6,0.4⟩  ⟨0.75,0.2⟩   ⟨0.65,0.3⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.1,0.4⟩   ⟨0,0.4⟩    ⟨0,0.2⟩      ⟨0,0.4⟩      ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_2:
  k_1:  ⟨0.02,0.98⟩  ⟨0.01,0.99⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.7,0.1⟩  ⟨0.24,0.7⟩
  k_2:  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.1,0.9⟩    ⟨0.08,0.92⟩  ⟨0.5,0.2⟩  ⟨0.2,0.7⟩
  k_3:  ⟨0.02,0.98⟩  ⟨0.08,0.92⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.4,0.5⟩  ⟨0.2,0.65⟩
  Q:    ⟨0.5,0.3⟩    ⟨0.4,0.2⟩    ⟨0.06,0.02⟩  ⟨0.6,0.2⟩    ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.0,1.0⟩    ⟨0.04,0.96⟩  ⟨0.0,1.0⟩    ⟨0.4,0.6⟩    ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Slice h_3:
  k_1:  ⟨0.0,1.0⟩    ⟨0.0,0.14⟩   ⟨0.13,0.8⟩   ⟨0.04,0.96⟩  ⟨0.4,0.5⟩  ⟨0.12,0.73⟩
  k_2:  ⟨0.07,0.93⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.01,0.99⟩  ⟨0.5,0.2⟩  ⟨0.1,0.82⟩
  k_3:  ⟨0.01,0.99⟩  ⟨0.03,0.97⟩  ⟨0.01,0.99⟩  ⟨0.0,1.0⟩    ⟨0.7,0.1⟩  ⟨0.12,0.76⟩
  Q:    ⟨0.06,0.02⟩  ⟨0.4,0.2⟩    ⟨0.6,0.2⟩    ⟨0.5,0.3⟩    ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pl:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩   ⟨0.85,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩
  pu_1: ⟨0.08,0.92⟩  ⟨0.03,0.97⟩  ⟨0.0,1.0⟩    ⟨0.58,0.42⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩

Step 10. Determination of a cell for allocation
For g = 1 to f {
(1) Use relations (3) to select the largest IF cost in the IM pr_{K*,L*,h_g} S at time h_g (g = 1, 2, 3). Let us denote this cell by c_{k_{x*},l_{y*},h_g}.
(2) Select a single cost with zero degree of membership for allocation, corresponding to the k_{x*}-th index and/or the l_{y*}-th index at time h_g if it exists, assign to that cost cell as much as possible, and strike out the satisfied IF supply or IF demand.
(3) Then the minimum of the required and offered quantities is assigned to the corresponding cell s_{k_e,l_{e*},h_g}, and the row/column with exhausted required or offered quantity at time h_g is deleted, for g = 1, 2, 3.
Repeat (1), (2) and (3) until |pr_{K,L,h_g} S| = 6 (all the required quantities are satisfied and all the offered quantities are exhausted at time h_g). }

Step 11. The intuitionistic fuzzy optimal solution, presented by the IM X_opt, is non-degenerate: it includes 6 × 3 occupied cells. The optimal solution X_opt[K*, L*, H*, {x_{k_i,l_j,h_g}}] has the following form (columns l_1, l_2, l_3, l_4):

Slice h_1:
  k_1: ⟨0.4,0.2⟩  ⟨0.0,1.0⟩  ⟨0.1,0.4⟩    ⟨0.0,1.0⟩
  k_2: ⟨0.0,1.0⟩  ⟨0.5,0.3⟩  ⟨0.14,0.42⟩  ⟨0.06,0.02⟩
  k_3: ⟨0.0,1.0⟩  ⟨0.0,1.0⟩  ⟨0.4,0.5⟩    ⟨0.0,1.0⟩

Slice h_2:
  k_1: ⟨0.0,1.0⟩  ⟨0.4,0.6⟩  ⟨0.06,0.02⟩  ⟨0.2,0.7⟩
  k_2: ⟨0.5,0.3⟩  ⟨0.0,0.5⟩  ⟨0.0,1.0⟩    ⟨0.0,1.0⟩
  k_3: ⟨0.0,1.0⟩  ⟨0.0,1.0⟩  ⟨0.0,1.0⟩    ⟨0.4,0.5⟩

Slice h_3:
  k_1: ⟨0.06,0.22⟩  ⟨0.34,0.52⟩  ⟨0.0,1.0⟩  ⟨0.0,1.0⟩
  k_2: ⟨0.0,1.0⟩    ⟨0.0,1.0⟩    ⟨0.5,0.2⟩  ⟨0.0,1.0⟩
  k_3: ⟨0.0,1.0⟩    ⟨0.06,0.72⟩  ⟨0.1,0.4⟩  ⟨0.5,0.3⟩

Step 12. The optimal intuitionistic fuzzy transportation cost is
  AGIO_{⊕_1(max,min)}(C_{({Q,pl,pu_1},{R,pu},T)} ⊗_(min,max) X_opt) = ⟨0.50, 0.42⟩.
The degree of membership (acceptance) of this optimal solution is equal to 0.5 and its degree of non-membership (non-acceptance) is equal to 0.42. The ranking function R, defined in (3), can be used to rank alternatives in a decision-making process. For the optimal solution obtained by the IFZPM, the distance between the optimal solution and the pair ⟨1, 0⟩ is equal to R_{⟨0.5;0.42⟩} = 0.27.
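Both numerical claims above – that the plan is non-degenerate and that the optimal cost ⟨0.50, 0.42⟩ lies at distance 0.27 from ⟨1, 0⟩ – can be checked with the IFP sketch from the beginning of Sect. 3 (recall that the ranking formula there is our assumption):

```python
# occupied cells of the h_1 slice of X_opt (a cell is occupied if it differs from <0, 1>)
h1_plan = [IFP(0.4, 0.2), IFP(0.1, 0.4),                      # k_1 -> l_1, l_3
           IFP(0.5, 0.3), IFP(0.14, 0.42), IFP(0.06, 0.02),   # k_2 -> l_2, l_3, l_4
           IFP(0.4, 0.5)]                                     # k_3 -> l_3
assert len(h1_plan) >= 3 + 4 - 1      # m + n - 1 = 6 occupied cells: non-degenerate

print(IFP(0.50, 0.42).rank())         # approx. 0.27, the distance of the stage-1 cost from <1, 0>
```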

3.2.2 Solution of the Second Stage of the 2-S 3-D IFTP

Step 13. The following cost IFIM C*[L*, U, H/{T}] is created (columns u_1, u_2, u_3, u_4, R*, q*, pu*):

Slice h_1:
  l*_1:  ⟨0.27,0.73⟩  ⟨0.23,0.77⟩  ⟨0.19,0.81⟩   ⟨0.65,0.35⟩  ⟨0.4,0.2⟩  ⟨0.1,0⟩   ⟨0.5,0.3⟩
  l*_2:  ⟨0.17,0.83⟩  ⟨0.29,0.71⟩  ⟨0.29,0.71⟩   ⟨0.67,0.33⟩  ⟨0.5,0.3⟩  ⟨0.1,0⟩   ⟨0.31,0.27⟩
  l*_3:  ⟨0.24,0.65⟩  ⟨0.24,0.6⟩   ⟨0.2,0.65⟩    ⟨0.56,0.1⟩   ⟨0.6,0.2⟩  ⟨0.15,0⟩  ⟨0.25,0⟩
  Q*:    ⟨0.45,0.3⟩   ⟨0.4,0.2⟩    ⟨0.15,0.013⟩  ⟨0.2,0.013⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩
  pl*:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩    ⟨0.85,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩         ⟨⊥,⊥⟩        ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩

Slice h_2:
  l*_1:  ⟨0.27,0.73⟩  ⟨0.23,0.77⟩  ⟨0.19,0.81⟩  ⟨0.65,0.35⟩  ⟨0.5,0.3⟩    ⟨0.1,0⟩  ⟨0.5,0.4⟩
  l*_2:  ⟨0.17,0.83⟩  ⟨0.29,0.71⟩  ⟨0.29,0.71⟩  ⟨0.67,0.33⟩  ⟨0.4,0.5⟩    ⟨0.1,0⟩  ⟨0.28,0.55⟩
  l*_3:  ⟨0.24,0.65⟩  ⟨0.24,0.6⟩   ⟨0.2,0.65⟩   ⟨0.56,0.1⟩   ⟨0.06,0.02⟩  ⟨0.1,0⟩  ⟨0.2,0.65⟩
  Q*:    ⟨0.45,0.3⟩   ⟨0.25,0.2⟩   ⟨0.15,0.01⟩  ⟨0.08,0.01⟩  ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pl*:   ⟨0.88,0.1⟩   ⟨0.82,0.1⟩   ⟨0.80,0.1⟩   ⟨0.80,0.1⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩

Slice h_3:
  l*_1:  ⟨0.27,0.73⟩  ⟨0.23,0.77⟩  ⟨0.19,0.81⟩  ⟨0.65,0.35⟩  ⟨0.06,0.22⟩  ⟨0.1,0⟩  ⟨0.7,0.2⟩
  l*_2:  ⟨0.17,0.83⟩  ⟨0.29,0.71⟩  ⟨0.29,0.71⟩  ⟨0.67,0.33⟩  ⟨0.34,0.52⟩  ⟨0.1,0⟩  ⟨0.5,0.2⟩
  l*_3:  ⟨0.24,0.65⟩  ⟨0.24,0.6⟩   ⟨0.2,0.65⟩   ⟨0.56,0.1⟩   ⟨0.5,0.2⟩    ⟨0.1,0⟩  ⟨0.1,0.82⟩
  Q*:    ⟨0.25,0.53⟩  ⟨0.05,0.04⟩  ⟨0.21,0.53⟩  ⟨0.4,0.08⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pl*:   ⟨0.79,0.1⟩   ⟨0.82,0.1⟩   ⟨0.83,0.1⟩   ⟨0.84,0.1⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩

where L* = {l*_1, l*_2, l*_3, Q*, pl*, pu*_1}, U = {u_1, ..., u_4, R*, q*, pu*} and all elements are IFPs. The quantities of the product purchased at the first stage by the resellers from the set RS are set in the column R* of the matrix C*. We also define the IM X[L_J, U, H_J]:

  l_1: ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩
  l_2: ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩
  l_3: ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩  ⟨0,1⟩
(with columns u_1, u_2, u_3, u_4; the same initial form holds for each of h_1, h_2 and h_3),

where L_J = {l_1, l_2, l_3}, U = {u_1, u_2, u_3, u_4}, H_J = {h_1, h_2, h_3} and, for 1 ≤ j* ≤ 3, 1 ≤ e ≤ 4, 1 ≤ g ≤ 3, x_{l*_{j*},u_e,h_g} = ⟨ρ_{l*_{j*},u_e,h_g}, σ_{l*_{j*},u_e,h_g}⟩ is the number of units of the product transported from the l*_{j*}-th reseller to the u_e-th destination at a time moment h_g.

Step 14. The column pu* of the matrix C* contains the final selling prices of one unit quantity of the product together with its mark-up above the purchase price. The elements c*_{l*_{j*},u_e,h_g} (for 1 ≤ j* ≤ 3, 1 ≤ e ≤ 4, 1 ≤ g ≤ 3) of the matrix C* contain the final selling price per unit of the product, including the unit price and the transportation price from the l*_{j*}-th reseller to the u_e-th destination at a time moment h_g. C* obtains the following form (columns u_1, u_2, u_3, u_4, R*, q*, pu*):

Slice h_1:
  l*_1:  ⟨0.32,0.55⟩  ⟨0.28,0.55⟩  ⟨0.24,0.7⟩    ⟨0.7,0.1⟩    ⟨0.4,0.2⟩  ⟨0.1,0⟩   ⟨0.05,0.3⟩
  l*_2:  ⟨0.2,0.7⟩    ⟨0.32,0.55⟩  ⟨0.32,0.6⟩    ⟨0.7,0.1⟩    ⟨0.5,0.3⟩  ⟨0.1,0⟩   ⟨0.03,0.27⟩
  l*_3:  ⟨0.28,0.65⟩  ⟨0.28,0.6⟩   ⟨0.2,0.65⟩    ⟨0.6,0.1⟩    ⟨0.6,0.2⟩  ⟨0.15,0⟩  ⟨0.038,0⟩
  Q*:    ⟨0.45,0.3⟩   ⟨0.4,0.2⟩    ⟨0.15,0.013⟩  ⟨0.2,0.013⟩  ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩
  pl*:   ⟨0.82,0.1⟩   ⟨0.8,0.1⟩    ⟨0.85,0.1⟩    ⟨0.85,0.1⟩   ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩         ⟨⊥,⊥⟩        ⟨⊥,⊥⟩      ⟨⊥,⊥⟩     ⟨⊥,⊥⟩

Slice h_2:
  l*_1:  ⟨0.31,0.30⟩  ⟨0.27,0.31⟩  ⟨0.23,0.32⟩  ⟨0.67,0.14⟩  ⟨0.5,0.3⟩    ⟨0.1,0⟩  ⟨0.5,0.4⟩
  l*_2:  ⟨0.19,0.46⟩  ⟨0.31,0.39⟩  ⟨0.31,0.39⟩  ⟨0.68,0.18⟩  ⟨0.4,0.5⟩    ⟨0.1,0⟩  ⟨0.28,0.55⟩
  l*_3:  ⟨0.26,0.42⟩  ⟨0.26,0.39⟩  ⟨0.22,0.42⟩  ⟨0.57,0.07⟩  ⟨0.06,0.02⟩  ⟨0.1,0⟩  ⟨0.2,0.65⟩
  Q*:    ⟨0.45,0.3⟩   ⟨0.25,0.2⟩   ⟨0.15,0.01⟩  ⟨0.08,0.01⟩  ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pl*:   ⟨0.88,0.1⟩   ⟨0.82,0.1⟩   ⟨0.80,0.1⟩   ⟨0.85,0.1⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩

Slice h_3:
  l*_1:  ⟨0.32,0.15⟩  ⟨0.28,0.15⟩  ⟨0.25,0.16⟩  ⟨0.67,0.07⟩  ⟨0.06,0.22⟩  ⟨0.1,0⟩  ⟨0.7,0.2⟩
  l*_2:  ⟨0.21,0.17⟩  ⟨0.33,0.14⟩  ⟨0.33,0.14⟩  ⟨0.69,0.07⟩  ⟨0.34,0.52⟩  ⟨0.1,0⟩  ⟨0.5,0.2⟩
  l*_3:  ⟨0.25,0.53⟩  ⟨0.01,0.82⟩  ⟨0.21,0.53⟩  ⟨0.56,0.08⟩  ⟨0.5,0.2⟩    ⟨0.1,0⟩  ⟨0.1,0.82⟩
  Q*:    ⟨0.25,0.53⟩  ⟨0.05,0.04⟩  ⟨0.21,0.53⟩  ⟨0.4,0.08⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pl*:   ⟨0.79,0.1⟩   ⟨0.82,0.1⟩   ⟨0.83,0.1⟩   ⟨0.84,0.1⟩   ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩
  pu*_1: ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩        ⟨⊥,⊥⟩    ⟨⊥,⊥⟩

Step 15. The problem is balanced. Then the requirements for an upper limit on the price at which the consumers are able to purchase the necessary quantities of the product are checked. After execution of the algorithm presented in Sect. 3.1 with the obtained IFIMs C* and X*, we obtain the following optimal plan X*[L_J, U, H_J, {x*_{l*_{j*},u_e,h_g}}] for the second stage of the problem (columns u_1, u_2, u_3, u_4):

Slice h_1:
  l*_1: ⟨0,1⟩       ⟨0.35,0.65⟩  ⟨0,1⟩         ⟨0,1⟩
  l*_2: ⟨0.45,0.3⟩  ⟨0.05,0.6⟩   ⟨0,1⟩         ⟨0,1⟩
  l*_3: ⟨0,1⟩       ⟨0.15,0.23⟩  ⟨0.15,0.013⟩  ⟨0.2,0.013⟩

Slice h_2:
  l*_1: ⟨0.05,0.8⟩  ⟨0.25,0.2⟩  ⟨0.15,0.01⟩  ⟨0.05,0.95⟩
  l*_2: ⟨0.4,0.5⟩   ⟨0,1⟩       ⟨0,1⟩        ⟨0,1⟩
  l*_3: ⟨0,1⟩       ⟨0,1⟩       ⟨0.06,0.02⟩  ⟨0,1⟩

Slice h_3:
  l*_1: ⟨0,1⟩        ⟨0,1⟩        ⟨0,1⟩        ⟨0.06,0.22⟩
  l*_2: ⟨0.25,0.53⟩  ⟨0,1⟩        ⟨0,1⟩        ⟨0.09,0.91⟩
  l*_3: ⟨0,1⟩        ⟨0.05,0.04⟩  ⟨0.21,0.53⟩  ⟨0.24,0.76⟩

The intuitionistic fuzzy optimal solution, presented by the IM X*_opt, is non-degenerate: it includes 6 occupied cells for each h_g. The optimal intuitionistic fuzzy transportation cost at the second stage is calculated by
  AGIO_{⊕_1(max,min)}(C*_{({Q*,pl*,pu*_1},{R*,q*,pu*})} ⊗_(min,max) X*_opt) = ⟨0.25, 0.2⟩.

Step 16. The optimal intuitionistic fuzzy transportation cost for the whole problem is calculated by ⟨0.5, 0.42⟩ ⊕_(max,min) ⟨0.25, 0.2⟩ = ⟨0.5, 0.2⟩. The degree of membership (acceptance) of this optimal solution is equal to 0.5 and its degree of non-membership (non-acceptance) is equal to 0.2. For the obtained optimal solution of the 2-S 3-D IFTP, the distance between the optimal solution and the pair ⟨1, 0⟩ is equal to R_{⟨0.5;0.2⟩} = 0.32. The example illustrates the reliability of the algorithm proposed in Sect. 3.1 for the studied 2-S 3-D IFTP.
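As a quick consistency check, the Step 16 combination and the quoted distance can be reproduced with the IFP sketch from the beginning of Sect. 3 (again under our assumed ranking formula):

```python
stage1, stage2 = IFP(0.50, 0.42), IFP(0.25, 0.20)
total = stage1.add_max_min(stage2)   # <max(0.5, 0.25), min(0.42, 0.2)> = <0.5, 0.2>
print(total.mu, total.nu)            # 0.5 0.2, as in Step 16
print(total.rank())                  # 0.325 with the assumed ranking; quoted as 0.32 above
```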

4 Conclusion

In this work it is proposed, for the first time, to extend the 2-S IFTP of Traneva and Tranev (2021) to a three-dimensional one in order to find an optimal solution of a type of 2-S 3-D IFTP using the concepts of IMs and IFSs. The formulated IFTP has additional constraints: upper limits on the transportation costs. The proposed algorithm for the solution of the 2-S 3-D IFTP is illustrated with a numerical example. In the future, we will extend the 2-S 3-D IFZPM to two-stage interval-valued intuitionistic fuzzy TPs (Atanassov and Gargov 1989) and will apply the proposed approach to various types of interval-valued IF optimization problems.

References Antony, R., Savarimuthu, S., Pathinathan, T.: Method for solving the transportation problem using triangular intuitionistic fuzzy number. Int. J. Comput. Alg. 03, 590–605 (2014) Atanassov, B.: Quantitative methods in business management. Publ. house TedIna, Varna (1994). (in Bulgarian) Atanassov K. T.: Intuitionistic Fuzzy Sets, VII ITKR Session, Sofia, 20-23 June 1983 (Deposed in Centr. Sci.-Techn. Library of the Bulg. Acad. of Sci., 1697/84) (in Bulgarian). Reprinted: Int. J. Bioautomation, 20(S1), S1–S6 (2016) Atanassov, K.: Generalized index matrices. Comptes rendus de l’Academie Bulgare des Sciences 40(11), 15–18 (1987) Atanassov, K.: On Intuitionistic Fuzzy Sets Theory. STUDFUZZ, vol. 283. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29127-2 Atanassov, K.: Index Matrices: Towards an Augmented Matrix Calculus. Studies in Computational Intelligence, vol. 573. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10945-9 Atanassov, K.: Intuitionistic Fuzzy Logics. Studies in Fuzziness and Soft Computing, vol. 351. Springer (2017). https://doi.org/10.1007/978-3-319-48953-7 Atanassov, K., Gargov, G.: Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 31 (3), 343–349 (1989) Atanassov, K., Szmidt, E., Kacprzyk, J.: On intuitionistic fuzzy pairs. Notes on Intuitionistic Fuzzy Sets 19(3), 1–13 (2013) Atanassov, K.: Remark on an intuitionistic fuzzy operation “division”. Issues in IFSs and GNs 14, 113-116 (2018-2019) Bharati, S.K., Malhotra, R.: Two stage intuitionistic fuzzy time minimizing TP based on generalized Zadeh’s extension principle. Int. J. Syst. Assur. Eng. Manag. 8, 1142–1449 (2017). https://doi. org/10.1007/s13198-017-0613-9 Basirzadeh, H.: An approach for solving fuzzy transportation problem. Appl. Math. Sci. 5, 1549– 1566 (2011) Chanas, S., Kolodziejckzy, W., Machaj, A.: A fuzzy approach to the transportation problem. Fuzzy Sets Syst. 13, 211–221 (1984) Chanas, S., Kuchta, D.: A concept of the optimal solution of the transportation problem with fuzzy cost coefficients. Fuzzy Sets Syst. 82, 299–305 (1996) De, S.K., Bisvas, R., Roy, R.: Some operations on IFSs. Fuzzy Sets Syst. 114(4), 477–484 (2000) Dhanasekar, S., Hariharan, S., Sekar, P.: Fuzzy Hungarian MODI algorithm to solve fully fuzzy transportation problems. Int. J. Fuzzy Syst. 19(5), 1479–1491 (2017) Dinagar, D., Palanivel, K.: On trapezoidal membership functions in solving transportation problem under fuzzy environment. Int. J. Comput. Phys. Sci. 1, 1–12 (2009) Gani, A., Abbas, S.: A new average method for solving intuitionistic fuzzy transportation problem. Int. J. Pure Appl. Math. 93(4), 491–499 (2014) Gani, A.N., Abbas, S.: Solving intuitionistic fuzzy transportation problem using zero suffix algorithm. Int. J. Math. Sci. Eng. Appl. 6(3), 73–82 (2012) Gani, A., Razak, K.: Two stage fuzzy transportation problem. J. Phys. Sci. 10, 63–69 (2006) Gen, M., Ida, K., Li, Y., Kubota, E.: Solving bicriteria solid transportation problem with fuzzy numbers by a genetic algorithm. Comput. Ind. Eng. 29(1), 537–541 (1995)


Gupta, G., Kumar, A., Sharma, M.: A note on a new method for solving fuzzy linear programming problems based on the fuzzy linear complementary problem (FLCP). Int. J. Fuzzy Syst. 1–5 (2016) Guzel, N.: Fuzzy transportation problem with the fuzzy amounts and the fuzzy costs. World Appl. Sci. J. 8, 543–549 (2010) Hitchcock, F.: The distribution of a product from several sources to numerous localities. J. Math. Phys. 20, 224–230 (1941) Jahirhussain, R., Jayaraman, P.: Fuzzy optimal transportation problem by improved zero suffix method via robust rank techniques. Int. J. Fuzzy Math. Syst. (IJFMS) 3(4), 303–311 (2013a) Jahirhussain, R., Jayaraman, P.: A new method for obtaining an optinal solution for fuzzy transportation problems. Int J. Math. Arch. 4(11), 256–263 (2013b) Jimenez, F., Verdegay, J.: Solving fuzzy solid transportation problems by an evolutionary algorithm based parametric approach. Eur. J. Oper. Res. 117(3), 485–510 (1999) Kamini, Sharma, M.K.: Zero-Point maximum allocation method for solving intuitionistic fuzzy transportation problem. Int. J. Appl. Comput. Math. 6, 115 (2020). https://doi.org/10.1007/ s40819-020-00867-6 Karthy, T., Ganesan, K.: Revised improved zero point method for the trapezoidal fuzzy transportation problems. AIP Conf. Proce. 2112(020063), 1–8 (2019) Kathirvel, K., Balamurugan, K.: Method for solving fuzzy transportation problem using trapezoidal fuzzy numbers. Int. J. Eng. Res. Appl. 2(5), 2154–2158 (2012) Kathirvel, K., Balamurugan, K.: Method for solving unbalanced transportation problems using trapezoidal fuzzy numbers. Int. J. Eng. Res. Appl. 3(4), 2591–2596 (2013) Kaur, A., Kumar, A.: A new approach for solving fuzzy transportation problems using generalized trapezoidal fuzzy numbers. Appl. Soft Comput. 12(3), 1201–1213 (2012) Kaur, P., Dahiya, K.: Two-stage interval time minimization transportation problem with capacity constraints. Innov. Syst. Des. Eng. 6, 79–85 (2015) Kaur, A., Kacprzyk, J., Kumar, A.: Fuzzy transportation and transshipment problems. Stud. Fuziness soft Comput. 385 (2020) Kikuchi, S.: Amethod to defuzzify the fuzzy number: transportation problem application. Fuzzy Kumar, P., Hussain, R.: A method for solving unbalanced intuitionistic fuzzy transportation problems. Notes on Intuitionistic Fuzzy Sets 21(3), 54–65 (2015) Lalova, N., Ilieva, L., Borisova, S., Lukov, L., Mirianov, V.: A Guide to Mathematical Programming. Science and Art Publishing House, Sofia (1980). (in Bulgarian) Liu, S., Kao, C.: Solving fuzzy transportation problems based on extension principle. Eur. J. Oper. Res. 153, 661–674 (2004) Liu, S., Kao, C., Network flow problems with fuzzy arc lengths. IEEE Trans. Syst. Man Cybern. Part B Cybern. 34, 765–769 (2004) Malhotra, S., Malhotra, R.: A polynomial Algorithm for a Two—Stage Time Minimizing Transportation Problem. Opsearch 39, 251–266 (2002) Mohideen, S., Kumar, P.: A comparative study on transportation problem in fuzzy environment. Int. J. Math. Res. 2(1), 151–158 (2010) Pandian, P., Natarajan, G.: A new algorithm for finding a fuzzy optimal solution for fuzzy transportation problems. Appl. Math. Sci. 4, 79–90 (2010) Pandian, P., Natarajan, G.: An optimal more-for-less solution to fuzzy transportation problems with mixed constraints. Appl. Math. Sci. 4, 1405–1415 (2010) Patil, A., Chandgude, S.: Fuzzy Hungarian approach for transportation model. Int. J. Mech. Ind. Eng. 2(1), 77–80 (2012) Riecan, B., Atanassov, A.: Operation division by n over intuitionistic fuzzy sets. 
NIFS 16(4), 1–4 (2010) Samuel, A.: Improved zero point method. Appl. Math. Sci. 6(109), 5421–5426 (2012) Samuel, A., Venkatachalapathy, M.: Improved zero point method for unbalanced FTPs. Int. J. Pure Appl. Math. 94(3), 419–424 (2014)


Szmidt, E., Kacprzyk, J.: Amount of information and its reliability in the ranking of Atanassov’s intuitionistic fuzzy alternatives. in: Rakus-Andersson, E., Yager, R., Ichalkaranje, N., Jain, L.C. (eds.), Recent Advances in Decision Making, SCI, Springer, Heidelberg vol. 222, pp. 7–19 (2009). https://doi.org/10.1007/978-3-642-02187-9_2 Traneva, V.: On 3-dimensional intuitionistic fuzzy index matrices. Notes on Intuitionistic Fuzzy Sets 20(4), 59–64 (2014) Traneva, V.: Internal operations over 3-dimensional extended index matrices. Proc. Jangjeon Math. Soc. 18(4), 547–569 (2015) Traneva, V.: One application of the index matrices for a solution of a transportation problem. Adv. Stud. Contemp. Math. 26(4), 703–715 (2016) Traneva, V., Marinov, P., Atanassov, K.: Index matrix interpretations of a new transportation-type problem. Comptes rendus de l’Academie Bulgare des Sciences 69(10), 1275–1283 (2016) Traneva, V., Tranev, S.: Index Matrices as a Tool for Managerial Decision Making. Publ, House of the Union of Scientists, Bulgaria (2017).(in Bulgarian) Traneva, V., Tranev, S.: Intuitionistic Fuzzy Transportation Problem by Zero Point Method. In: Proceedings of the 15th Conference on Computer Science and Information Systems (FedCSIS), Sofia, Bulgaria, pp. 345–348 (2020a). https://doi.org/10.15439/2020F6 Traneva, V., Tranev, S.: An Intuitionistic fuzzy zero suffix method for solving the transportation problem. in: Dimov I., Fidanova S. (eds) Advances in High Performance Computing. HPC 2019. Studies in Computational Intelligence, vol. 902. Springer, Cham (2020b). https://doi.org/10.1007/ 978-3-030-55347-0_7 Traneva, V., Tranev, S.: Two-Stage Intuitionistic Fuzzy Transportation Problem through the Prism of Index Matrices. Position and Communication papers of the Proceedings of the 16th Conference on computer Science and Information Systems (FedCSIS), Sofia, Bulgaria, pp. 89–96 (2021). https://doi.org/10.15439/2021F76 Traneva, V., Tranev, S., Atanassova, V.: An Intuitionistic Fuzzy Approach to the Hungarian Algorithm, Springer Nature Switzerland AG, G. Nikolov et al. (eds.), NMA 2018, LNCS 11189, pp. 1–9 (2019) https://doi.org/10.1007/978-3-030-10692-8_19 Traneva, V., Tranev, S., Stoenchev, M., Atanassov, K.: Scaled aggregation operations over two- and three-dimensional index matrices. Soft Comput. 22, 5115–5120 (2019). https://doi.org/10.1007/ 00500-018-3315-6 Zadeh, L.: Fuzzy Sets. Inf. Control 8(3), 338–353 (1965)

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making Method in Outsourcing Using a Software Program

Velichka Traneva, Stoyan Tranev, and Deyan Mavrov

Abstract Selecting a suitable candidate for an outsourcing service provider is a challenging problem that requires discussion among a group of experts and the consideration of multiple criteria. Problems of this type belong to the area of multi-criteria decision-making. The imprecision in this problem may arise from the nature of the characteristics of the candidates for the service providers, which can be unavailable or indeterminate. It may also derive from the inability of the experts to formulate a precise evaluation. Interval-valued intuitionistic fuzzy sets (IVIFSs), which are an extension of fuzzy sets, are a stronger tool for modeling uncertain problems than fuzzy sets. In this paper, which is an extension of Traneva et al. (2021), we further outline the principles of a software program for the automated solution of an optimal interval-valued intuitionistic fuzzy multi-criteria decision-making problem (IVIFIMOA) in outsourcing for the selection of the most appropriate candidates. As an example of a case study, an application of the algorithm to example company data is demonstrated.

Keywords Intuitionistic fuzzy sets · Multicriteria decision-making · Index matrices

Work on Sects. 2, 3.1 and 3.2 is supported by the Asen Zlatarov University through project Ref. No. NIX-440/2020 “Index matrices as a tool for knowledge extraction”. Work on Sects. 1, 3.3 and 4 is supported by the Asen Zlatarov University through project Ref. No. NIX-449/2021 “Modern methods for making management decisions”. V. Traneva (B) · S. Tranev · D. Mavrov Prof. Dr. Asen Zlatarov University, 1 Prof. Yakimov Blvd, 8000 Bourgas, Bulgaria e-mail: [email protected] S. Tranev e-mail: [email protected] D. Mavrov e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_11


1 Introduction

Multi-criteria decision-making (MCDM) is a procedure that structures and solves decision problems by combining the estimation of each alternative against multiple decision criteria and proposes a compromise choice (Belton and Stewart 2002). The aim of MCDM is to determine an optimal alternative with the highest degree of desirability with respect to all relevant goals (Zimmerman 1987). Most decisions are not made on the basis of exact data, and there is much ambiguity and uncertainty in decision-making problems (Riabacke 2006). The fuzzy logic of Zadeh (1965) emerged to help model this vague environment. The uncertainty in an MCDM problem may be caused by unavailable or indeterminate characteristics of the candidates for outsourcing service providers, or by the inability of the experts to formulate a precise evaluation (Yager 1993). Atanassov, in 1983, introduced the notion of an intuitionistic fuzzy set (IFS; Atanassov 2012, 2016) as an extension of fuzzy sets. In addition to the membership degree of each element in ordinary fuzzy sets, each element of an IFS has degrees of membership and non-membership. Later, in 1989, Atanassov and Gargov proposed the concept of interval-valued intuitionistic fuzzy sets (IVIFS; Atanassov 1989).

There are many papers on the application of IVIF theory to MCDM problems. In Chen et al. (2018), a method on the basis of IVIF logic and linear programming was developed. In Chen and Han (2019), an MCDM method based on nonlinear programming, particle swarm optimization and IVIFVs was presented. In Kumar and Garg (2018), an IVIF TOPSIS method was developed. A method based on the IVIFS weighted averaging operator was proposed in Chen et al. (2012). In Wang and Chen (2017), an MCDM method was developed on the basis of IVIFSs, LP techniques and the extended TOPSIS. In Zadeh et al. (2020), an MCDM method based on IVIFVs, nonlinear programming and the TOPSIS method was given. In Kumar and ShyiMing (2021), a new MCDM method based on converted decision matrices, probability density functions and IVIFSs was proposed.

MCDM methods focused on selecting an appropriate outsourcing service provider include Kahraman et al. (2020), Wan et al. (2015), the Analytic Hierarchy Process (AHP) (Modak et al. 2017; Wang and Yang 2007), the Analytic Network Process (ANP) (Tjader et al. 2014), PROMETHEE (Bottani and Rizzi 2006; Senvar et al. 2014), the balanced scorecard (Tjader et al. 2014), TOPSIS (Bottani and Rizzi 2006), VIKOR (Prakash and Barua 2016), DEMATEL (Uygun et al. 2015), grey relation, SWOT analysis (Tavana et al. 2016) and the TODIM approach (Wang and Zhang 2016). Wang and Yang (2007) proposed a method using AHP and PROMETHEE for outsourcing decisions. Kahraman et al. (2016) proposed an approach using hesitant linguistic term sets for the supplier selection problem. Araz et al. (2007) described an outsourcing model in which the PROMETHEE method and then fuzzy goal programming are used. Liu and Wang (2006) developed a fuzzy linear programming approach integrating the Delphi method. Hsu et al. (2013) described a hybrid decision model for the selection of outsourcing companies, based on DEMATEL, ANP and grey relation methods. In Modak et al. (2017), a decision process

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

217

of outsourcing problem of a mining organisation was proposed based on a balanced scorecard and fuzzy AHP. Wang et al. (2016) have used a TODIM approach with fuzzy linguistic information for selecting logistics outsourcing providers. The model proposed in Tavana et al. (2016) for outsourcing reverse logistics has used SWOT analysis to define and classify the criteria, the intuitionistic fuzzy AHP to determine the importance weights and fuzzy preference programming to give weights for rank the alternatives. Prakash and Barua (2016) have used AHP for the ranking of selection criteria and the VIKOR method for the selection of logistics providers. Uygun et al. (2015) have applied DEMATEL to determine the criteria and then fuzzy ANP is used to obtain the final providers. Senvar et al. (2014) and Chen et al. (2011) have used fuzzy PROMETHEE for outsourcing MCDM-model. Li and Wan (2014), have used fuzzy linear programming to determine the weights of the criteria and to rank the alternatives. Tjader et al. (2014) have developed MCDM-model by using a balanced scorecard approach and ANP method. Hsu et al. (2013) have used DEMATEL, ANP, and grey relation methods in a model for selection of outsourcing companies. Liou and Chuang (2010) have presented a MCDM model for outsourcing selection using DEMATEL, ANP, and VIKOR methods. In Wan et al. (2015) some disadvantages of the methods are presented as follows: 1. The researches are used exact numbers (Tjader et al. 2014; Wang and Yang 2007) or fuzzy sets (Liu and Wang 2006) to represent the information. IFSs (Atanassov 2012) have a hesitancy degree and they are more flexible than fuzzy sets (Zadeh 1965) in dealing with uncertainty. 2. The methods (Liu and Wang 2006; Wang and Yang 2007) only consider a single expert decision maker. 3. The fuzzy linear programming methods (Liu and Wang 2006) only considered the degree of satisfaction (membership) and overlook the degree of dissatisfaction begininlineMathnon-membership). An intuitionistic fuzzy (IF) linear programming method is described for outsourcing problems in Wan et al. (2015). The model proposed in Tavana et al. (2016) for outsourcing reverse logistics has used SWOT analysis to define and classify the criteria and then intuitionistic fuzzy AHP to determine the importance weights. In Kahraman et al. (2020), an interval-valued IF AHP and TOPSIS concepts are proposed to select of the best outsource manufacturers. Büyüközkan and Göçer (2017) have provided a methodology for solving supplier selection problem in an intuitionistic fuzzy environment. In Traneva and Tranev (2021a, b) we have proposed an IF algorithm for the selection of outsourcing service providers using the concepts of IMs (Atanassov 1987, 2014) and IF logic (Atanassov 2012). In the study (Traneva et al. 2021), was formulated an optimal generalized MCDM-approach (IVIFIMOA) for selecting the most appropriate outsourcing providers over IVIF data. Here, will be proposed a software program, which performs the calculations of IVIFIMOA.

218

V. Traneva et al.

The rest of the paper contains the following sections: Sect. 2 presents aome basic definitions of IVIFSs and IMs. Section 3 formulates an optimal IVIF problem for the selection of outsourcing providers, gives an algorithm for its solution and describes the software implementation. The optimal IVIFMOA takes into account the rating of the experts and the scores of the candidates, as well as the weight of the criteria for the respective outsourcing positions. A case study for selecting the best outsourcing service providers in a refinery is also given as an example. Section 4 concludes the work and gives future suggestions.

2 Basic Concepts of IMs and IVIF Logic This section recalls some basic concepts on interval-valued intuitionistic fuzzy pairs (IVIFPs) from Atanassov (2020a), Atanassov et al. (2013) and on the index matrix apparatus from Atanassov (2014, 2020b) and Traneva and Tranev (2017).

2.1 Interval-Valued Intuitionistic Fuzzy Logic The concept of IVIFPs was introduced in Atanassov et al. (2013). The IVIFP ⟨ M , N ⟩ , ]where M, N ⊆ [0, 1] are closed sets, M = [is an object of ] the form [ inf M, sup M , N = inf N , sup N and sup M + sup N ≤ 1, that is used as an evaluation of some object or process and whose components (M and N ) are interpreted as intervals of degrees of membership and non-membership, or intervals of degrees of validity and non-validity, etc. Let us have two IVIFPs x = ⟨ M , N ⟩ and y = ⟨ P, Q⟩ . In Atanassov (2020a); Atanassov et al. (2013) are defined the operations classical negation, conjunction, disjunction, multiplication with constant, and difference.

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

219

¬x = ⟨ N , M⟩ ; x ∧1 y =

[ ] [⟨ min(inf M, inf P), min(sup M, sup P)] , [max(inf N , inf Q), max(sup N , sup Q) ]⟩ ; x ∨1 y = ⟨ [ max(inf M, inf P), max(sup M, sup P) ] , min(inf N , inf Q), min(sup N , sup Q) ⟩ ; x ∧2 y = x + y = ⟨ [inf M + inf P − inf M inf P,] sup [ M + sup P − sup M sup] P , [inf N inf Q, sup N sup Q ]⟩ ; ⟨ inf M inf P, sup M sup P , x ∨2 y = x.y = [inf N + inf Q − inf N inf Q,] sup N [ + sup Q − sup N sup] Q ⟩ ; x ∨3 y =



[

inf M+inf P sup M+sup P , , 2 2 ] inf P+inf Q sup P+sup Q , ⟩ . 2 2 α

⟨ α.x = ⟨ [1 − (1 − inf M) , 1 − (1 − sup, )α ], [inf N α , sup N α ]⟩ (α ∈ R) (α = n or α = 1/n, where n ∈ N ) x − y = ⟨ [max(0, min(inf M − sup P, 1 − sup N + sup Q)), max(0, min(sup M − inf P, 1 − sup N + sup Q))], min(inf N + inf Q, 1 − sup M + inf P)) [min(1, min(1, min(sup N[ + sup Q, 1 − sup M + inf P))]⟩ x−y= ⟨ max(0, inf M − sup C), ] max(0, sup M − inf P) , [min(1, inf N + inf Q, 1 − inf M + sup P), min(1, sup N + sup Q, 1 − sup M + inf P)]⟩ .

(1)

The forms of the relations with IVIFPs are the following x ≥ y iff x ≥□ y iff x ≥ y iff x = y iff x=□ y iff x= y iff x ≥ R y iff

inf M ≥ inf P and and inf N ≤ inf Q inf M ≥ inf P and inf N ≤ inf Q and

sup M ≥ sup P and sup N ≤ sup Q sup M ≥ sup P sup N ≤ sup Q

inf M = inf P and sup M = sup P and inf Q = inf N and sup Q = sup N inf M = inf P and sup M = sup P inf N = inf Q and sup N = sup Q Rx ≤ R y .

(2)

where Rx = 0.5(2 − inf M − sup M + inf N + sup N ) is equal to Hamming’s distances from ⟨ [1, 1][0, 0]⟩ (Atanassov 2020a).

220

V. Traneva et al.

2.2 Interval-Valued Intuitionistic Fuzzy Index Matrices Let I be a fixed set. Three-dimensional interval-valued intuitionistic fuzzy index matrix (3-D IVIFIM) with index sets K , L and H (K , L , H ⊂ I), we denote the object (Atanassov 2014, 2020b): [K , L , H, {⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ }] l1 hg ∈ H k1 ⟨ Mk1 ,l1 ,h g , Mk1 ,l1 ,h g ⟩ ≡ .. .. . . ⟨ Mkm ,l1 ,h g , Nkm ,l1 ,h g ⟩ km

... ln . . . ⟨ Mk1 ,ln ,h g , Nk1 ,ln .h g ⟩ , .. .. . . . . . ⟨ Mkm ,ln ,h g , Nkm ,ln ,h g ⟩

(3)

where for every 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ g ≤ f : Mki ,l j ,h g ⊆ [0, 1], Nki ,l j ,h g ⊆ [0, 1], sup Mki ,l j ,h g + sup Nki ,l j ,h g ≤ 1. Let us be given two fixed operations over IVIFPs “*” and “◦” and let ⟨ M , N ⟩ ∗ ⟨ P, Q⟩ = ⟨ M ∗l P, N ∗r Q⟩ , where “∗l ” and “∗r ” are determined by the form of the operation “*” (1). Let us recall some operations over a two 3-D IVIFIMs A = and B = [P, Q, E, {⟨ R pr ,qs ,ed , S pr ,qs ,ed ⟩ }] [K , L , H, {⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ }] (see Atanassov 2014, 2020b; Traneva and Tranev 2020b): Addition-(*): A ⊕(∗l ,∗r ) B = [K ∪ P, L ∪ Q, H ∪ E, {⟨ Φtu ,vw ,x y , Ψtu ,vw ,x y ⟩ }], where ⟨ Φtu ,vw ,x y , Ψtu ,vw ,x y ⟩ ⎧ ⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⟨ R pr ,qs ,ed , S pr ,qs ,ed ⟩ , ⎪ ⎨

if tu = ki ∈ K , vw = l j ∈ L − Q, xy = hg ∈ H − E or tu = ki ∈ K , vw = l j ∈ L − Q, xy = hg ∈ H or tu = ki ∈ K − P, vw = l j ∈ L , xy = hg ∈ H if tu = pr ∈ P, vw = qs ∈ Q, x y = ed ∈ E − H = or tu = pr ∈ P, vw = qs ∈ Q − L , ⎪ ⎪ ⎪ ⎪ x y = ed ∈ E ⎪ ⎪ ⎪ ⎪ or tu = pr ∈ P − K , vw = qs ∈ Q, ⎪ ⎪ ⎪ ⎪ x y = ed ∈ E ⎪ ⎪ ⎪ ⎪ ⟨ ∗l (Mki ,l j ,h g , R pr ,qs ,ed ), if tu = ki = pr ∈ K ∩ P, ⎪ ⎪ ⎪ ⎪ ∗r (Nki ,l j ,h g , S pr ,qs ,ed )⟩ , vw = l j = qs ∈ L ∩ Q ⎪ ⎪ ⎪ ⎪ and x y = h g = ed ∈ H ∩ E ⎪ ⎪ ⎩ ⟨ [0, 0], [1, 1]⟩ , otherwise Multiplication - (◦, ∗):

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

221

A ⊙(◦,∗) B = [K ∪ (P − L), Q ∪ (L − P), H ∪ R, {⟨ Φtu ,vw ,x y , Ψtu ,vw ,x y ⟩ }], where

⟨ Φtu ,vw ,x y , Ψtu ,vw ,x y ⟩ =

⎧ ⎪ ⎪⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⟨ R pr ,qs ,rd , S pr ,qs ,rd ⟩ , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨

if tu = ki ∈ K , vw = l j ∈ L − P − Q x y = h g ∈ H or tu = k i ∈ K − P − Q vw = l j ∈ L xy = hg ∈ H ; if tu = pr ∈ P vw = qs ∈ Q − K − L x y = rd ∈ R or tu = pr ∈ P − L − K vw = q s ∈ Q x y = rd ∈ R;

⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ◦l (∗l (Mki ,l j ,h g , R pr ,qs ,rd )),if tu = ki ∈ K ⎪⟨ ⎪ ⎪ ⎪ ⎪ l j = pr ∈L∩P ⎪ ⎪ ⎪ vw = q s ∈ Q ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ◦r (∗r (Nki ,l j ,h g , S pr ,qs ,rd ))⟩ ,& x y = h g ⎪ ⎪ ⎪ ⎪ ⎪ l j = pr ∈L∩P ⎪ ⎪ ⎪ & = rd ∈ H ∩ R; ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ ⟨ [0, 0], [1, 1]⟩ , otherwise.

where “*” and “◦” are two fixed operations from (1) over IVIFPs. Transposition: A T is the transposed IM of A. Multiplication with a constant α: If α = ⟨ A, B⟩ is an IVIFP, then ⟨ [ ] α⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ = inf A inf Mki ,l j ,h g , sup A sup Mki ,l j ,h g , [

inf B + inf Nki ,l j ,h g − inf B inf Nki ,l j ,h g , sup B + sup Nki ,l j ,h g − sup B sup Nki ,l j ,h g

] ⟩

.

Aggregation operation by one dimension (Traneva et al. 2019): Let us have two IVIFPs x = ⟨ A, B⟩ and y = ⟨ C , D⟩ . We use following three operations #q , (q ≤ i ≤ 3) for scaling the aggregation evaluations: [ ] [ ] x#1 y =⟨ min(inf A, inf C), min(sup A, sup C) , max(inf B, inf D), max(sup B, sup D) ⟩ ; [ ] ⟨ average(inf A, inf C), average(sup A, sup C) , x#2 y = [ ] average(inf B, inf D), average(sup B, sup D) ⟩ ; [ ] [ ] x#3 y =⟨ max(inf A, inf C), max(sup A, sup C) , min(inf B, inf D), min(sup B, sup D) ⟩ .

Let h 0 ∈ / H be a fixed index and A = [K , L , H, {⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ }] is an IVIFIM.

222

V. Traneva et al.

The definition of the aggregation operation α H,#q (A, h 0 ) by a dimension H is Traneva and Tranev (2020b): h0

l1

...

ln

f

f

k1 #q ⟨ Mk1 ,l1 ,h g , Nk1 ,l1 ,h g ⟩ . . . .. .

g=1

.. .

..

#q ⟨ Mk1 ,ln ,h g , Nk1 ,ln ,h g ⟩ g=1

.. .

.

f

f

ki #q ⟨ Mki ,l1 ,h g , Nki ,l1 ,h g ⟩ . . . #q ⟨ Mki ,ln ,h g , Nki ,ln ,h g ⟩ .. .

g=1

.. .

..

g=1

.

f

f

g=1

g=1

,

(4)

.. .

km #q ⟨ Mkm ,l1 ,h g , Nkm ,l1 ,h g ⟩ . . . #q ⟨ Mkm ,ln ,h g , Nkm ,ln ,h g ⟩ where 1 ≤ q ≤ 3. Projection: Let W ⊆ K , V ⊆ L and U ⊆ H. Then, pr W,V ,U A = [W, V, U, {⟨ R pr ,qs ,ed , S pr ,qs ,ed ⟩ }], where for each ki ∈ W, l j ∈ V and h g ∈ U, ⟨ R pr ,qs ,ed , S pr ,qs ,ed ⟩ = ⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ . Substitution: Local substitution over A is defined for the couple of indices (ki , kt ) by ] [ [ ] kt ; ⊥; ⊥ A = (K − {ki }) ∪ {kt }, L , H, {⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ } ki A level operator for decreasing the number of elements of IVIFM: Let ⟨ α, β⟩ is an IVIFP, then according to Atanassov et al. (2021) > (A) = [K , L , H, {⟨ Rki ,l j ,h g , Pki ,l j ,h g ⟩ }], Nα,β

where ⟨ Rki ,l j ,h g , Pki ,l j ,h g ⟩ ⎧ =

⟨ Mki ,l j ,h g , Nki ,l j ,h g ⟩ if ⟨ Rki ,l j ,h g , Pki ,l j ,h g ⟩ > ⟨ α, β⟩ ⟨ [0, 0], [1, 1]⟩ otherwise

(5)

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

223

3 An Optimal Interval-Valued Intuitionistic Fuzzy Selection for the Outsourcing Service Providers Here, we formulate an optimal IVIF problem with application in outsourcing by IMs concept from Traneva et al. (2021) and will propose for the first time a software program for its automated solution, based on our previous libraries. The management team of a company has selected the following activities ve (1 ≤ e ≤ u) to be offered for outsourcing in order to increase the profitability of the enterprise. An expert team, consisting of experts {r1 , . . . , rs , . . . , r D } has proposed an evaluation system, giving each candidate {k1 , . . . , ki , . . . , km } (for i = 1, . . . , m) for the respective outsourced service ve (1 ≤ e ≤ u), an evaluation by each criterion {c1 , . . . , c j , . . . , cn } (for j = 1, . . . , n). The weight coefficients of each assessment criteria c j (for j = 1, . . . , n) according to their priority for the service ve are given in the form of IVIFPs - pk c j ,ve (for j = 1, . . . , n). Each expert has an IVIFP rating rs = ⟨ Δs , εs ⟩ (1 ≤ s ≤ D). Let the number of his/her own participations in previous outsourcing procedures be equal to Γs (s = 1, . . . , D), respectively. All applicants need to be evaluated by the team of experts according to the established criteria in the company at the current time point h f for their application for each outsourced service ve (1 ≤ e ≤ u), and their evaluations ev ki ,c j ,ds (for 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ s ≤ D) are IVIFPs. Now we need to find the optimal assignment of candidates.

3.1 Optimal IVIF Selection of the Providers To solve this problem, we propose a new approach - IVIFIMOA, described with mathematical notation and pseudocode: Step 1. This step creates an expert 3-D evaluation IM E V . It is possible for the experts to include assessments for the same candidates from a previous evaluation IM at time points h 1 , . . . , h g , . . . , h f −1 . The team of experts needs to evaluate the candidates for the services according to the approved criteria in the company at the current time moment h f . The experts are uncertain about their evaluations due to changes in some uncontrollable factors. The evaluations must either be IVIFPs, or converted to such. It is possible that some of the experts’ assessments are incorrect from an IVIF point of view. In Atanassov (1989), different ways for altering incorrect experts’ estimations are discussed. Let us propose that, the estimations of the Ds (1 ≤ s ≤ D) expert are correct and described by the IVIFIM E Vs = [K , C, H, {ev ki ,c j ,ds ,h g }] as follows: hg ∈ H c1 ... cn k.1 ⟨ Mk1 ,c1 ,ds ,h g ,. Nk1 ,c1 ,ds ,h g ⟩ .. . . ⟨ Mk1 ,cn ,ds ,h g ,. Nk1 ,cn ,ds ,h g ⟩ , .. .. .. .. km ⟨ Mkm ,c1 ,ds ,h g , Nkm ,c1 ,ds ,h g ⟩ . . . ⟨ Mkm ,cn ,ds ,h g , Nkm ,cn ,ds ,h g ⟩

(6)

224

V. Traneva et al.

} { where K = {k1 , k2 , . . . , km }, C = {c1 , c2 , . . . , cn } , H = h 1 , h 2 , . . . , h f and IVIFP {ev ki ,c j ,ds ,h g } is the estimate of the ds th expert for the ki th candidate by the c j th criterion at a moment h g . Let us apply the α H th aggregation operation (4) α E Vs ,#q to find the evaluation of the ds th expert (s = 1, . . . , D), where 1 ≤ q ≤ 3. We get the 3-D IVIFIM E V [K , C, E, {ev ki ,c j ,ds }] with the evaluations of all experts for all candidates: E V = α E V1 ,#q (H, d1 ) ⊕(max,min) . . . ⊕(max,min) α E VD ,#q (H, d D )

(7)

Go to Step 2. Step 2. Let the score (rating) rs of the ds th expert (ds ∈ E) be specified by an IVIFP ⟨ δs , εs ⟩ . δs and εs are interpreted respectively as his degree of competence and of incompetence. Then we create E V ∗ [K , C, E, {ev ∗ ki ,c j ,ds }]: E V ∗ = r1 pr K ,C,d1 E V . . . ⊕(max,min) r D pr K ,C,d D E V ; ∗

(8)

evk∗i ,l j ,ds ,

∀ki ∈ K , ∀l j ∈ L , ∀ds ∈ E). E V := E V (evki ,l j ,ds = Then α E th aggregation operation is applied to find the aggregated assessment R = α E,#q (E V , h f ) (1 ≤ q ≤ 3) of the ki th candidate against the c j th criterion at /E: the moment h f ∈ ⎫ ⎧ hf ⎪ ⎪ ⎪ ⎪ cj ∈ C ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ D ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ ⎨ k1 #q ⟨ Mk1 ,c j ,ds , Nk1 ,c j ,ds ⟩ ⎪ (9) R = α E,#q (E V , h f ) = , s=1 . . ⎪ ⎪ .. . ⎪ ⎪ ⎪ ⎪ . ⎪ ⎪ D ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ km #q ⟨ Mkm ,c j ,ds , Nkm ,c j ,ds ⟩ ⎪ ⎪ ⎪ ⎪ ⎭ ⎩ s=1

(1 ≤ q ≤ 3). If q is 2 or 3, then the evaluation of the candidates is more optimistic as outsourcing service provider. Go to Step 3. Step 3. Let us define the 3-D IFIM P K [C, V, h f , { pk c j ,ve ,h f }] of the weight coefficients of the assessment criterion according to its priority to the outsourcing service ve (1 ≤ e ≤ u): hf v1 c1 pk c1 ,v1 ,h f .. .. . . P K [C, V , h f , { pk c j ,ve ,h f }] = c j pk c j ,v1 ,h f .. .. . . cn pk cn ,v1 ,h f

... ve . . . pk c1 ,ve ,h f .. .. . . . . . pk c j ,ve ,h f .. .. . . . . . pk cn ,ve ,h f

... vu . . . pk c1 ,vu ,h f .. .. . . , . . . pk c j ,vu ,h f .. .. . . . . . pk cn ,vu ,h f

(10)

where C = {c1 , . . . , cn }, V = {v1 , . . . , vu } and all elements pk c j ,ve ,h f are IVIFPs. The transposed IM of R is founded under the form R T [K , C, h f ] and is calculated 3-D IVIFIM

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

225

B[K , V , h f , {bki ,ve ,h f }] := R T ⊙(◦,∗) P K ,

(11)

which contains the cumulative estimates of the ki th candidate (for 1 ≤ i ≤ m) for the ve th outsourcing service. If a candidate ki (1 ≤ i ≤ m) does not wish to participate in the competition to provide an outsourcing service ve , then the element bki ,ve ,h f is equal to ⟨ [0, 0], [1, 1]⟩ . Go to Step 4. Step 4. The aggregation operation α K ,#q (B, k0 ) is applied by the dimension K to find the most suitable candidate for the outsourcing service ve , as follows: v1

...

vu

m

m

i=1

i=1

k0 #q ⟨ Mki ,v1 , Nki ,v1 ⟩ . . . #q ⟨ Mki ,vu , Nki ,vu ⟩

,

(12)

where k0 ∈ / K , 1 ≤ q ≤ 3. If the company requires a different candidate for each service, then it is necessary to apply the IVIF Hungarian algorithm (Traneva and Tranev 2020a) to the data contained in the IVIFIM B and then the optimal allocation of the candidates will be found. It is possible to reduce the candidates with an overall score lower than the IVIFP ⟨ α, β⟩ applying the level-operator (5) to IVIFIM B before the algorithm is implemented. Go to Step 5. Step 5. This step optimizes the criteria of the outsourcing service evaluation system. Let us assume, that the experts offer for use in concrete procedure different criteria. At this step of the algorithm, we need to determine whether there are correlations between some of the evaluation criteria (Atanassov et al. 2017). The procedure of IVIF-form of ICrA (IVIFICrA), based on the intercriteria analysis (Atanassov et al. 2014) is discussed in Atanassov et al. (2019). Let IVIFP ⟨ α, β⟩ be given. The criteria Ck and Cl are in: strong (α, β)-positive consonance, if inf MCk ,Cl > α and sup NCk ,Cl < β; weak (α, β)-positive consonance, if sup MCk ,Cl > α and inf NCk ,Cl < β; strong (α, β)-negative consonance, if sup MCk ,Cl < α and inf NCk ,Cl > β; weak (α, β)-negative consonance, if inf MCk ,Cl < α and sup NCk ,Cl > β; (α, β)-dissonance, otherwise. After application of the IVIFICrA over IFIM R we determine which criteria are in consonance. Then, we can evaluate their complexity with respect to time, or price, or resources needed for their evaluation and more expensive or slower criteria can be removed from the evaluation system. If O = {O1 , . . . , OV } are the criteria that can be omitted, then we can reduce R by IM-operation R∗ = R(O,⊥) . Go to Step 6. Step 6. The last step determinates the new rating coefficients of the experts. Let the expert ds (s = 1, . . . , D) participate in Γs procedures, on the basis of which his score rs = ⟨ Δs , εs ⟩ is determined, then after his participation in (Γs + 1)th procedure his score will be determined by Atanassov (2012):

226

V. Traneva et al. ⎧ inf Δ.Γ +1 sup Δ.Γ +1 inf ε.Γ sup ε.Γ ⟨ [ Γ +1 , Γ +1 ], [ Γ +1 , Γ +1 ]⟩ , ⎪ ⎪ ⎪ ⎪ ( if the expert’s estimation is correct) ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ sup Δ.Γ inf ε.Γ sup ε.Γ ' ' ⟨ [ infΓ Δ.Γ +1 , Γ +1 ], [ Γ +1 , Γ +1 ]⟩ , ⟨ Δs , εs ⟩ = ⎪ ( if the expert had not given any estimation) ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ inf Δ.Γ sup Δ.Γ inf ε.Γ +1 sup ε.Γ +1 ⎪ ⎪ ⎩ ⟨ [ Γ +1 , Γ +1 ], [ Γ +1 , Γ +1 ]⟩ , (if the expert’s estimation is incorrect)

(13)

The complexity of the algorithm without step 5 is O(Dmn) (the complexity of the ICrA in the step 5 is O(m 2 n 2 ) Atanassova and Roeva 2018).

3.2 A Software Program for Optimal IVIF Selection of the Providers In order to apply IVIFIMOA algorithm on actual data more easily, we have developed a command line utility, written in C++. In our previous works, we have used an IM template class (IndexMatrix⟨ T ⟩ ), which implements the basic IM operations (Mavrov 2019/2020). Since the operations of the algorithm require 3-dimensional index matrices, a new class named IndexMatrix3D⟨ T ⟩ has been implemented using an extended IndexMatrix⟨ T ⟩ class to represent each layer of the 3-D matrix. The templated nature of these classes allows the substitution of the template type T with any class, as long as it implements certain operations on the current object and between two objects (e.g. arithmetic operations, comparison, etc.), so that they can be substituted in the already prepared IM operation methods. As part of a previous work on intuitionistic fuzzy ANOVA (Traneva et al. 2020), we developed a class representing IFPs. Using the work done previously, for this project we have developed a class for IVIF pairs and real intervals, which are used to construct the required index matrices. The use of IM and 3-D IM operations helps ensure the correctness of the algorithms’s each step. The program takes as its input the experts’ evaluations, a matrix of the experts’ rating coefficients and a matrix of the weight coefficients of each criterion for each service. The expert evaluations can be given either directly as an IVIF index matrix, or as a matrix of interval of marks. In the latter case, the minimum and maximum mark that a expert can give must be specified, as they cannot be discerned from the intervals. For example, this is how the program would be called with an input matrix of intervals:

ivifimoa -interval 0 10 TEST_EVAL.ixm3 TEST_RATING.ixm3 TEST_COEFFICIENTS.ixm3

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

227

And this is how it would be called for a matrix with IVIF pairs: ivifimoa TEST_EVAL.ixm3 TEST_RATING.ixm3

ivifimoa TEST_COEFFICIENTS.ixm3

3.3 A Case Study In this section, the proposed IVIFIMOA approach is applied to an example case study of an oil refinery with the help of the “IVIFIMOA” software utility. Let us say that the studied refinery wants to adopt an outsourcing model, and after the restructuring, the following activities are to remain outside the company, and will be offered for outsourcing (Tranev 2020): v1 —trade and distribution of high quality fuels, polymers and petrochemicals; v2 —engineering activity, specialized in consulting, preparation of technical and economic opinions, detailed projects with author’s supervision; v3 — transport service for public transport of goods and passengers, as well as services with construction machinery; v4 —aviation fuel distributor. For this purpose, the refinery invites a team of the experts d1 , d2 and d3 to evaluate the candidates ki (for 1 ≤ i ≤ 4) for the outsourced refinery services. The evaluation system of outsourcing providers selection is determined on the basis of 5 criteria as follows: C1 —compliance of the outsourcing service provider with its corporate culture; C2 —understanding of the outsourcing service by the provider; C3 —necessary resources of the outsourcing provider for the implementation of the service; C4 —price of the provided service; C5 —opportunity for strategic development of the outsourcing service together with the outsourcing-assignor. The weight coefficients for the service ve - pk c j ,ve as compared to the criteria c j (for j = 1, . . . , 5) according to their priority for the service ve (e = 1, 2, 3, 4) and the ratings of the experts {r1 , r2 , r3 } will be given under the form of IVIFPs. The aim of the problem is to optimally select the outsourcing providers. An optimal solution of the problem: Step 1. A 3-D expert evaluation IM of intervals E I [K , C, E, {ei ki ,c j ,ds }] is created, where each interval {ei ki ,c j ,ds } (for 1 ≤ i ≤ 4, 1 ≤ j ≤ 5, 1 ≤ s ≤ 3) is the estimate of the ds th expert for the ki th candidate by the c j th criterion, and it has the form [a, b] and A ≤ a ≤ b ≤ B, where A is is the minimal mark an expert can give and B is the maximum. ⎧ d1 ⎪ ⎪ ⎪ ⎨ k1 k2 ⎪ ⎪ ⎪ k3 ⎩ k4

c1 [3, 7] [1, 4] [4, 8] [0, 1]

c2 [2, 5] [4, 6] [1, 3] [2, 3]

c3 [6, 8] [4, 5] [2, 6] [2, 2]

c4 [4, 6] [2, 3] [6, 8] [2, 4]

c5 [1, 4] [4, 6] , [6, 7] [4, 5]

228

V. Traneva et al. d2 k1 k2 k3 k4 d3 k1 k2 k3 k4

c1 c2 c3 c4 c5 [4, 6] [1, 3] [7, 9] [5, 7] [2, 4] [2, 2] [3, 5] [6, 8] [4, 5] [5, 8] , [3, 6] [3, 4] [1, 3] [6, 9] [5, 8] [2, 2] [1, 4] [2, 4] [1, 5] [4, 6] ⎫ c1 c2 c3 c4 c5 ⎪ ⎪ [1, 3] [2, 3] [4, 6] [2, 4] [0, 2] ⎪ ⎬ [1, 2] [3, 4] [2, 4] [1, 3] [2, 5] ⎪ [3, 5] [2, 3] [3, 4] [4, 7] [4, 5] ⎪ ⎪ ⎭ [1, 2] [0, 2] [1, 3] [1, 4] [2, 4]

The 3-D IM of intervals is then transformed from to IVIF data using the method described in Atanassov (1989), using A and B as defined above and taking the set of all evaluations of each candidate-criterion pair separately. From this we get the 3-D IM of IVIF pairs E V [K , C, E, {ev ki ,c j ,ds }], where the above intervals are converted to IVIFP. ⎧ c1 c2 d1 ⎪ ⎪ ⎪ ⎨ k1 ⟨ [0.1, 0.3], [0.3, 0.3]⟩ ⟨ [0.1, 0.2], [0.5, 0.5]⟩ k2 ⟨ [0.1, 0.1], [0.6, 0.6]⟩ ⟨ [0.3, 0.4], [0.4, 0.4]⟩ ⎪ ⎪ ⎪ k3 ⟨ [0.3, 0.4], [0.2, 0.2]⟩ ⟨ [0.1, 0.1], [0.6, 0.7]⟩ ⎩ ⟨ [0, 0.2], [0.6, 0.7]⟩ k4 ⟨ [0, 0], [0.8, 0.9]⟩ c3 c4 c5 ⟨ [0.4, 0.6], [0.1, 0.2]⟩ ⟨ [0.2, 0.4], [0.3, 0.4]⟩ ⟨ [0, 0.1], [0.6, 0.6]⟩ ⟨ [0.2, 0.4], [0.2, 0.5]⟩ ⟨ [0.1, 0.2], [0.5, 0.7]⟩ ⟨ [0.2, 0.4], [0.2, 0.4]⟩ , ⟨ [0.1, 0.2], [0.4, 0.4]⟩ ⟨ [0.4, 0.6], [0.1, 0.2]⟩ ⟨ [0.4, 0.6], [0.2, 0.3]⟩ ⟨ [0.1, 0.2], [0.6, 0.8]⟩ ⟨ [0.1, 0.2], [0.5, 0.6]⟩ ⟨ [0.2, 0.4], [0.4, 0.5]⟩ c1 c2 d2 k1 ⟨ [0.1, 0.4], [0.3, 0.4]⟩ ⟨ [0.1, 0.1], [0.5, 0.7]⟩ k2 ⟨ [0.1, 0.2], [0.6, 0.8]⟩ ⟨ [0.3, 0.3], [0.4, 0.5]⟩ k3 ⟨ [0.3, 0.3], [0.2, 0.4]⟩ ⟨ [0.1, 0.3], [0.6, 0.6]⟩ k4 ⟨ [0, 0.2], [0.8, 0.8]⟩ ⟨ [0, 0.1], [0.6, 0.6]⟩ c3 c4 c5 ⟨ [0.4, 0.7], [0.1, 0.1]⟩ ⟨ [0.2, 0.5], [0.3, 0.3]⟩ ⟨ [0, 0.2], [0.6, 0.6]⟩ ⟨ [0.2, 0.6], [0.2, 0.2]⟩ ⟨ [0.1, 0.4], [0.5, 0.5]⟩ ⟨ [0.2, 0.5], [0.2, 0.2]⟩ , ⟨ [0.1, 0.1], [0.4, 0.7]⟩ ⟨ [0.4, 0.6], [0.1, 0.1]⟩ ⟨ [0.4, 0.5], [0.2, 0.2]⟩ ⟨ [0.1, 0.2], [0.6, 0.6]⟩ ⟨ [0.1, 0.1], [0.5, 0.5]⟩ ⟨ [0.2, 0.4], [0.4, 0.4]⟩ c1 c2 d3 k1 ⟨ [0.1, 0.1], [0.3, 0.7]⟩ ⟨ [0.1, 0.2], [0.5, 0.7]⟩ k2 ⟨ [0.1, 0.1], [0.6, 0.8]⟩ ⟨ [0.3, 0.3], [0.4, 0.6]⟩ k3 ⟨ [0.3, 0.3], [0.2, 0.5]⟩ ⟨ [0.1, 0.2], [0.6, 0.7]⟩ ⟨ [0, 0], [0.6, 0.8]⟩ k4 ⟨ [0, 0.1], [0.8, 0.8]⟩ ⎫ c3 c4 c5 ⎪ ⎪ ⟨ [0.4, 0.4], [0.1, 0.4]⟩ ⟨ [0.2, 0.2], [0.3, 0.6]⟩ ⟨ [0, 0], [0.6, 0.8]⟩ ⎪ ⎬ ⟨ [0.2, 0.2], [0.2, 0.6]⟩ ⟨ [0.1, 0.1], [0.5, 0.7]⟩ ⟨ [0.2, 0.2], [0.2, 0.5]⟩ ⎪ ⟨ [0.1, 0.3], [0.4, 0.6]⟩ ⟨ [0.4, 0.4], [0.1, 0.3]⟩ ⟨ [0.4, 0.4], [0.2, 0.5]⟩ ⎪ ⎪ ⎭ ⟨ [0.1, 0.1], [0.6, 0.7]⟩ ⟨ [0.1, 0.1], [0.5, 0.6]⟩ ⟨ [0.2, 0.2], [0.4, 0.6]⟩

Step 2. Let the experts have the following rating coefficients respectively: {r1 , r2 , r3 }={⟨ [0.7, 0.8], [0.0, 0.1]⟩ , ⟨ [0.6, 0.7], [0.0, 0.1]⟩ , ⟨ [0.8, 0.9], [0.0, 0.1]⟩ }. We create E V ∗ [K , C, E, {ev ∗ }] = r1 pr K ,C,d1 E V ⊕(max,min) r2 pr K ,C,d2 E V ⊕(max,min) r3 pr K ,C,d3 E V . Then, E V := E V ∗ .

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

229

Let us apply the optimistic aggregation operation α E,(max,min) (E V, h f ) = R[K , h f , C] to find the aggregated value of the ki th candi/ D. date against the c j th criterion in a current time-moment h f ∈ Step 3. The 3-D IFIM P K [C, V , h f , { pk c j ,ve ,h f }] of the weight coefficients of the assessment criterion according to its priority to the service ve (e = 1, 2, 3, 4) has the following form:

c1 c2 c3 c4 c5

v1 ⟨ [0.8, 0.9], [0.0, 0.1]⟩ ⟨ [0.7, 0.8], [0.0, 0.1]⟩ ⟨ [0.5, 0.6], [0.1, 0.2]⟩ ⟨ [0.8, 0.9], [0.0, 0.1]⟩ ⟨ [0.7, 0.8], [0.1, 0.2]⟩

v2 ⟨ [0.7, 0.8], [0.1, 0.2]⟩ ⟨ [0.5, 0.6], [0.1, 0.2]⟩ ⟨ [0.8, 0.9], [0.0, 0.1] ⟨ [0.8, 0.9], [0.0, 0.1]⟩ ⟨ [0.8, 0.9], [0.0, 0.1]⟩

v3 ⟨ [0.5, 0.6], [0.1, 0.2]⟩ ⟨ [0.6, 0.7], [0.0, 0.2]⟩ ⟨ [0.4, 0.5], [0.2, 0.3]⟩ ⟨ [0.7, 0.8], [0.0, 0.1]⟩ ⟨ [0.8, 0.9], [0.0, 0.1]⟩

v4 ⟨ [0.6, 0.7], [0.1, 0.2]⟩ ⟨ [0.7, 0.8], [0.1, 0.2]⟩ ⟨ [0.6, 0.7], [0.1, 0.2]⟩ ⟩ ⟨ [0.6, 0.7], [0.1, 0.2]⟩ ⟨ [0.5, 0.6], [0.3, 0.4]⟩

where C = {c1 , . . . , c5 }, V = {v1 , . . . , v4 } and for 1 ≤ j ≤ 5, 1 ≤ e ≤ 4 : pk c j ,ve ,h f are IVIFPs. We construct B = R T ⊙(◦,∗) P K = k1 k2 k3 k4

v1 ⟨ [0.352792, 0.725031], [0.005472, 0.0279586]⟩ ⟨ [0.404508, 0.738049], [0.009408, 0.0371671]⟩ ⟨ [0.577245, 0.826309], [0.0015456, 0.01553]⟩ ⟨ [0.202079, 0.561224], [0.070656, 0.136313]⟩

v2 ⟨ [0.41206, 0.768346], [0.003663, 0.0251806]⟩ ⟨ [0.40876, 0.771331], [0.005888, 0.0298147]⟩ ⟨ [0.586175, 0.84119], [0.0014336, 0.014802]⟩ ⟨ [0.236044, 0.581198], [0.062976, 0.126003]⟩

v3 v4 ⟨ [0.292319, 0.654518], [0.009324, 0.0402332]⟩ ⟨ [0.34357, 0.687293], [0.0103004, 0.0434512]⟩ ⟨ [0.366846, 0.701505], [0.009216, 0.0420116]⟩ ⟨ [0.372876, 0.699069], [0.0199485, 0.0623336]⟩ , ⟨ [0.531802, 0.791404], [0.0017472, 0.0179122]⟩ ⟨ [0.477876, 0.752455], [0.00689132, 0.0342835]⟩ ⟨ [0.203173, 0.535386], [0.066912, 0.139423]⟩ ⟨ [0.1662, 0.498859], [0.107143, 0.187742]⟩

which contains the cumulative optimistic estimates of the ki th candidate (for 1 ≤ i ≤ 4) for the ve th vacancy (for 1 ≤ e ≤ 4). Step 4. We apply the optimistic aggregation operation. We can conclude that k3 is the optimal outsourcing provider for all services, if one operator is to take all services, respectively: v1 —with degree of acceptance (d.a.) ∈ [0.577245, 0.826309]; v2 —with d.a. ∈ [0.586175, 0.84119]; v3 —with d.a. ∈ [0.531802, 0.791404] and v4 —with d.a. ∈ [0.477876, 0.752455].

4 Conclusion Using our software program for automatic calculations of IVIFIMOA approach based on the theories of IVIFSs and IMs we can select the most eligible candidates for the provision of an outsourcing service. The usefulness of the software was demonstrated by an example problem for finding the most suitable outsourcing candidates in an

230

V. Traneva et al.

oil refinery. The proposed algorithm can be easily generalized for multidimensional IF data (Atanassov 2018) using n-dimensional IMs. In the future we will further improve the software by adding more features and will apply it over more datasets.

References Araz, C., Ozfirat, P., Ozkarahan, I.: An integrated multicriteria decision-making methodology for outsourcing management. Comput. Oper. Res. 34, 3738–3756 (2007) Atanassov, K.T.: Intuitionistic Fuzzy Sets, VII ITKR Session, Sofia, 20-23 June 1983 (Deposed in Centr. Sci.-Techn. Library of the Bulg. Acad. of Sci., 1697/84) (in Bulgarian). Reprinted: Int. J. Bioautomation 20(S1), S1–S6 (2016) Atanassov, K.: Generalized index matrices. Comptes rendus de l’Academie Bulgare des Sciences 40(11), 15–18 (1987) Atanassov, K.: On Intuitionistic Fuzzy Sets Theory. STUDFUZZ, vol. 283. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29127-2 Atanassov, K.: Index Matrices: Towards an Augmented Matrix Calculus. Studies in Computational Intelligence, vol. 573. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10945-9 Atanassov, K.: Interval-valued intuitionistic fuzzy sets. Studies in Fuzziness and Soft Computing, vol. 388. Springer (2020a) Atanassov, K., Gargov, G.: Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 31(3), 343–349 (1989) Atanassov, K.: Extended interval valued intuitionistic fuzzy index matrices. In: Atanassov, K., et al. (eds.) Uncertainty and Imprecision in Decision Making and Decision Support: New Challenges, Solutions and Perspectives, IWIFSGN 2018, Advances in Intelligent Systems and Computing, vol. 1081. Springer, Cham (2020b) Atanassov, K.: n-Dimensional extended index matrices Part 1. Adv. Stud. Contemp. Math. 28(2), 245–259 (2018) Atanassov, K., Mavrov, D., Atanassova, V.: Intercriteria decision making: a new approach for multicriteria decision making, based on index matrices and intuitionistic fuzzy sets. Issues IFSs General. Nets 11, 1–8 (2014) Atanassov, K., Szmidt, E., Kacprzyk, J., Atanassova, V.: An approach to a constructive simplication of multiagent multicriteria decision making problems via ICrA. Comptes rendus de lAcademie bulgare des Sciences 70(8), 1147–1156 (2017) Atanassov, K., Vassilev, P., Kacprzyk, J., Szmidt, E.: On interval-valued intuitionistic fuzzy pairs. J. Univ. Math. 1(3), 261–268 (2013) Atanassov, K., Marinov, P., Atanassova, V.: InterCriteria analysis with interval-valued intuitionistic fuzzy evaluations. In: Cuzzocrea, A., Greco, S., Larsen, H., Saccà, D., Andreasen, T., Christiansen, H. (eds.), Flexible Query Answering Systems, FQAS 2019, Lecture Notes in Computer Science, vol. 11529, pp. 329–338. Springer, Cham (2019) Atanassov, K., Vassilev, P., Roeva, O.: Level operators over intuitionistic fuzzy index matrices. Mathematics 9, 366 (2021) Atanassova, V., Roeva, O.: Computational complexity and influence of numerical precision on the results of intercriteria analysis in the decision making process. Notes Intuit. Fuzzy Sets 24(3), 53–63 (2018) Belton, V., Stewart, T.J.: Multiple Criteria Decision Analysis. Springer Science and Business Media LLC, New York (2002) Bottani, E., Rizzi, A.: A fuzzy TOPSIS methodology to support outsourcing of logistics services. Supply Chain Manag.: Int. J. 11, 294–308 (2006)

Application of an Interval-Valued Intuitionistic Fuzzy Decision-Making …

231

Büyüközkan, G., Göçer, F.: Application of a new combined intuitionistic fuzzy MCDM approach based on axiomatic design methodology for the supplier selection problem. Appl. Soft Comput. 52, 1222–1238 (2017) Chen, Y.H., Wang, T.C., Wu, C.Y.: Strategic decisions using the fuzzy PROMETHEE for IS outsourcing. Expert Syst. Appl. 38, 13216–13222 (2011) Chen, S.M., Han, W.H.: A new multiattribute decision making method based on multiplication operations of interval-valued intuitionistic fuzzy values and linear programming methodology. Inf. Sci. 429, 421–432 (2018) Chen, S.M., Lee, L.W., Liu, H.C., Yang, S.W.: Multiattribute decision making based on intervalvalued intuitionistic fuzzy values. Expert Syst. Appl. 39(12), 10343–10351 (2012) Chen, S.M., Han, W.H.: Multiattribute decision making based on nonlinear programming methodology, particle swarm optimization techniques and interval-valued intuitionistic fuzzy values. Inf. Sci. 471, 252–268 (2019) Hsu, C., Liou, J.H., Chuang, Y.C.: Integrating DANP and modified grey relation theory for the selection of an outsourcing provider. Expert Syst. Appl. 40, 2297–2304 (2013) Kahraman, C., Öztaysi, B., Çevik Onar, C.: A multicriteria supplier selection model using hesitant fuzzy linguistic term sets. Multiple-Valued Logic Soft Comput. 26, 315–333 (2016) Kahraman, C., Öztaysi, B., Çevik Onar, S.: An integrated intuitionistic fuzzy AHP and TOPSIS approach to evaluation of outsource manufacturers. J. Intell. Syst. 29(1), 283–297 (2020) Kumar, K., Shyi-Ming, Ch.: Multiattribute decision making based on interval-valued intuitionistic fuzzy values, score function of connection numbers, and the set pair analysis theory. Inf. Sci. 551, 100–112 (2021) Kumar, K., Garg, H.: TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment. Comput. Appl. Math. 37(2), 1319–1329 (2018) Li, D.F., Wan, S.P.: A fuzzy inhomogenous multiattribute group decision making approach to solve outsourcing provider selection problems. Knowl.-Based Syst. 67, 71–89 (2014) Liou, J.J.H., Chuang, Y.T.: Developing a hybrid multi-criteria model for selection of outsourcing providers. Expert Syst. Appl. 37, 3755–3761 (2010) Liu, H., Wang, W.: An integrated fuzzy approach for provider evaluation and selection in third-party logistics. Expert Syst. Appl. 36, 4387–4398 (2006) Mavrov, D.: An Application for Performing 2-D Index Matrix Operations. Annual of the “Informatics” Section of the Union of Scientists in Bulgaria, Vol. 10, 2019/2020, pp. 66–80 (in Bulgarian) Modak, M., Pathak, K., Ghosh, K.K.: Performance evaluation of outsourcing decision using a BSC and fuzzy AHP approach: a case of the Indian coal mining organization. Resour. Policy 52, 181–191 (2017) Prakash, C., Barua, M.K.: A combined MCDM approach for evaluation and selection of third-party reverse logistics partner for Indian electronics industry. Sustain. Prod. Consump. 7, 66–78 (2016) Riabacke, A.: Managerial decision making under risk and uncertainty. IAENG Int. J. Comput. Sci. 32, 453–459 (2006) Senvar, O., Tuzkaya, G., Kahraman, C.: Multicriteria supplier selection using fuzzy PROMETHEE method. In: Oztaysi, K.A. (ed.) Supply Chain Management Under Fuzziness, pp. 21–34. Springer, Berlin (2014) Tavana, M., Zareinejad, M., Di Caprio, D., Kaviani, M.A.: An integrated intuitionistic fuzzy AHP and SWOT method for outsourcing reverse logistics. Appl. Soft Comput. 
40, 544–557 (2016) Tjader, Y., May, J.H., Shang, J., Vargas, L.G., Gao, N.: Firm-level outsourcing decision making: a balanced scorecard-based analytic network process model. Int. J. Prod. Econ. 147, 614–623 (2014) Tranev, S.: Outsourcing in the oil refining company - modeling the choice of supplier in an intuitionistic fuzzy environment, Publishing House of BAS “Prof. Marin Drinov”, Sofia (2020) Traneva, V., Tranev, S.: Index Matrices as a Tool for Managerial Decision Making. Publ, House of the Union of Scientists, Bulgaria (2017). (in Bulgarian)

232

V. Traneva et al.

Traneva, T.S.: An Interval-Valued Intuitionistic Fuzzy Approach to the Assignment Problem. In: Kahraman, C., et al. (eds.), Intelligent and Fuzzy Techniques in Big Data Analytics and Decision Making, INFUS 2019, Advances in Intelligent Systems and Computing, vol. 1029, pp. 1279– 1287. Springer, Cham (2020a) Traneva, V., Tranev, S.: Intuitionistic Fuzzy Index-Matrix Selection for the Outsourcing Providers at a Refinery. INFUS,: Advances in Intelligent Systems and Computing, p. 2021. Springer, Cham (2021a) Traneva, V., Tranev, S.: Intuitionistic Fuzzy Approach for Outsourcing Provider Selection in a Refinery. In: Margenov, S., Lirkov, I. (eds.), Proceedings of LSSC 2021, Sozopol, Bulgaria. Lecture Notes in Computer Science. Springer, Cham (2021b). (in press) Traneva, V., Tranev, S., Atanassova, V.: Three-Dimensional Interval-Valued Intuitionistic Fuzzy Appointment Model, In: Fidanova, S. (eds.), Recent Advances in Computational Optimization. Studies in Computational Intelligence, vol. 838, pp. 181–199. Springer, Cham (2020b) Traneva, V., Mavrov, D., Tranev, S.: Fuzzy Two-Factor Analysis of COVID-19 Cases in Europe. In: IEEE 10th International Conference on Intelligent Systems, IS 2020 - Proceedings, pp. 533–538 (2020) Traneva V., Tranev S., Mavrov, D.: Interval-valued intuitionistic fuzzy decision-making method using index matrices and application in outsourcing. In: Proceedings of the 16th Conference on Computer Science and Information Systems (FedCSIS), Sofia, Bulgaria, 2021, vol. 25, pp. 251–255. https://doi.org/10.15439/2021F77 Traneva, V., Tranev, S., Stoenchev, M., Atanassov, K.: Scaled aggregation operations over two- and three-dimensional index matrices. Soft. Comput. 22, 5115–5120 (2019). https://doi.org/10.1007/ 00500-018-3315-6 Uygun, Ö., Kaçamak, H., Kahraman, U.: An integrated DEMATEL and fuzzy ANP techniques for evaluation and selection of outsourcing provider for a telecommunication company. Comput. Indust. Eng. 86, 137–146 (2015) Wan, S.-P., Wang, F., Lin, L.-L., Dong, J.-Y.: An IF linear programming method for logistics outsourcing provider selection. Knowl.-Based Syst. (2015). https://doi.org/10.1016/j.knosys.2015. 02.027 Wang, J., Yang, D.: Using a hybrid multi-criteria decision aid method for information system outsourcing. Comput. & Oper. Res. 34, 3691–3700 (2007) Wang, J., Zhang, H.: A likelihood-based TODIM approach based on multi-hesitant fuzzy linguistic information for evaluation in logistics outsourcing. Comput. & Ind. Eng. 99, 287–299 (2016). https://doi.org/10.1016/j.cie.2016.07.023 Wang, C.Y., Chen, S.M.: Multiple attribute decision making based on interval-valued intuitionistic fuzzy sets, linear programming methodology, and the extended TOPSIS method. Inf. Sci. 397, 155–167 (2017) Yager, R.: Non-numeric multi-criteria multi-person decision making. Int. J. Group Dec. Making Negot. 2, 81–93 (1993) Zadeh, L.: Fuzzy Sets. Inf. Control 8(3), 338–353 (1965) Zeng, S., Chen, S.M., Fan, K.Y.: Interval-valued intuitionistic fuzzy multiple attribute decision making based on nonlinear programming methodology and TOPSIS method. Inf. Sci. 506, 424– 442 (2020) Zimmerman, H.J.: Fuzzy Sets, Decision Making, and Expert Systems. Kluwer Academic Publishers, Boston (1987)

Combinatorial Etudes and Number Theory Miroslav Stoenchev and Venelin Todorov

Abstract The goal of the paper is to consider a special class of combinatorial problems, the solution of which is realized by constructing finite sequences of ±1. For example, for fixed p ∈ N, is well known the existence of n p ∈ N with the property: any set of n p consecutive natural numbers can be divided into 2 sets, with equal sums of its pth-powers. The considered property remains valid also for sets of finite arithmetic progressions of integers, real or complex numbers. The main observation here is the generalization of the results for arithmetic progressions with elements of complex field C to elements of arbitrary associative, commutative algebra. Keywords Prouhet–Tarry–Escot problem · Finite arithmetic progressions · Number theory

1 Morse Sequence For every positive integer m, let us denote with ϑ(m) and 𝔢(m) respectively the number of occurrences of digit 1 in the binary representation of m, and the position of first digit 1 in the binary representation of m. The Morse sequence {am }∞ m=1 is defined by am = (−1)ϑ(m)+𝔢(m)−2 .

M. Stoenchev Technical University of Sofia, 8 Kliment Ohridski blvd., 1000 Sofia, Bulgaria e-mail: [email protected] V. Todorov (B) Department of Information Modeling, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. Georgi Bonchev Str., Block 8, 1113 Sofia, Bulgaria Department of Parallel Algorithms, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Block 25 A, 1113 Sofia, Bulgaria e-mail: [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_12

233

234

M. Stoenchev and V. Todorov

The following properties are derived directly: a2k = (−1)k and a2k +l = −al for l = 1, 2 . . . , 2k ; k ∈ N. The problem of finding a number n p , such that the set An p = {1, 2, . . . , n p } is represented as disjoint union of two subsets, say B and C, with the property: Σ

bp =

b∈B

Σ

c p,

c∈C

1 p+1 is solved by the sequence {am }∞ m=1 . Elementary proof is given below and n p = 2 has the desired property, with

B = {m ∈ An p : am = 1}, C = An p \B = {m ∈ An p : am = −1}. This result can be generalized to arbitrary arithmetic progressions of complex numbers. As example, if a, d ∈ C, d /= 0 and An p = {a + kd : k = 0, 1, . . . , n p − 1}, then n p = 2 p+1 and B = {a + kd ∈ An p : ak+1 = 1}.

2 Formulation of the Main Results Let us define {Hn,m (z)}∞ n,m=1 , by 2 ∞ Σ Σ l

Hn,m (z) =

ak (P(z) + k.Q(z))m ,

l=n k=1

where P, Q ∈ C[z] are complex polynomials. Proposition 1 If m = 0, 1, . . . , n − 1 then Hn,m ≡ 0, while if n = 2t the following equality is satisfied Hn,n (z) = n!2

n 2 −n 2

Q n (z).

Proposition 2 Let n ∈ N be a even number and α1 , α2 , . . . , αn are complex numbers, then 2n Σ n 2 −n ak (α1 + k)(α2 + k) · · · (αn + k) = n!2 2 . k=1

Proposition 3 If P ∈ C[z] is a complex polynomial, then 1 Similar solutions and generalizations of the Prouhet–Tarry–Escot problem are considered in Borwein and Ingalls (1994), Borwein (2002), Dorwart and Brown (1937), Hardy and Wright (1979), Wakhare and Vignat (2022).

Combinatorial Etudes and Number Theory

235

P 21+deg Σ

ak P(k) = 0.

k=1

Proposition 4 Let p and k be positive integers. Then there exist n ∈ N, n ≤ 2 p[log2 k] and distinct square-free positive integers xi j , i = 1, 2, . . . , k; j = 1, 2, . . . , n with the property: n Σ j=1

x1r j =

n Σ

x2r j = · · · =

n Σ

j=1

xkr j , ∀r = 1, 2, . . . , p.

j=1

Proposition 5 Let n and m be positive integers. Then there exists an integer s = s(n, m) with the property: every s-element subset of Γn , where k runs through s consecutive integers, can be represented as disjoint union of m subsets, with equal sums of the elements in each one. The proof of each of the formulated above propositions,2 with the exception for Proposition 5, is based on the following lemma: Lemma 1 Set a, d ∈ C, d /= 0, p ∈ N and A2 p+1 = {a + kd : k = 0, 1, . . . , 2 p+1 − 1}. Then there are sets B ∩ C = ∅, B ∪ C = A2 p+1 such that Σ

bp =

b∈B

Σ

c p.

c∈C

Corollary 1 Under assumptions of Lemma 1, it holds Σ

br =

b∈B

Σ

cr , r = 0, 1, . . . , p.

c∈C

To prove the Lemma 1 and its consequence, we define a sequence of polynomials: {Ts, p (z)}∞ s=0 , through which we will gradually calculate the differences between the sums of equal powers of the elements in B = {a + kd ∈ A2 p+1 : ak+1 = 1} and C = {a + kd ∈ A2 p+1 : ak+1 = −1}. For s ≥ 0 set Ts, p (z) =

4s+1 −1 Σ

ak+1 (z + kd) p

k=0

and we calculate Ts, p (z) =

Σ 0≤k≤4s+1 −1;

2

(z + kd) p − ak+1 =1

Σ 0≤k≤4s+1 −1;

(z + kd) p . ak+1 =−1

This paper is continuation of Stoenchev and Todorov (2022), and contains complete proofs of the formulated propositions.

236

M. Stoenchev and V. Todorov

When s ≤

set z = a to obtain

p−1 , 2

Σ

Ts, p (a) =

Σ

bp −

b≤a+(4s+1 −1)d

c p,

c≤a+(4s+1 −1)d

where summation is by b ∈ B, c ∈ C. Set p = 2m + r, r ∈ {0, 1}. Here and everywhere below the summations are performed on all b ∈ B and c ∈ C, which satisfy the corresponding inequalities. When r = 1 we obtain Σ Σ bp − cp = Tm, p (a) = b≤a+(4m+1 −1)d

Σ

Σ

bp −

b≤a+(2 p+1 −1)d

c≤a+(4m+1 −1)d

cp =

c≤a+(2 p+1 −1)d

Σ

bp −

b∈B

Σ

c p.

c∈C

When r = 0: Σ

Tm−1, p (a) =

b≤a+(22m −1)d

On the other hand Σ

p 2Σ −1

Σ

bp −

b≤a+(2 p −1)d

Σ

cp =

a+2 p d≤c≤a+(2 p −1)d

p 2Σ −1

am+1 (a + md) p =

2 p ≤m≤2 p+1 −1

k=0

(

)

ak+1 (a + 2 d) + kd = − p

2m 2Σ −1

( ) ak+1 (a + 2 p d) + kd =

k=0

= −Tm−1, p (a + 2 p d). Therefore, for p = 2m we obtain Σ b∈B

bp −

Σ c∈C

c p;

c≤a+(2 p −1)d

p

k=0

Summarized:

Σ

2Σ −1 ) ) ( ( a2 p +k+1 a + (2 p + k)d = − ak+1 a + (2 p + k)d =

k=0

=−

Σ

cp =

c≤a+(22m −1)d

bp −

a+2 p d≤b≤a+(2 p+1 −1)d

=

Σ

bp −

c p = Tm−1, p (a) − Tm−1, p (a + 2 p d).

Combinatorial Etudes and Number Theory

Σ

bp −

b∈B

Σ

⎧ cp =

c∈C

237

Tm, p (a), for p = 2m + 1 Tm−1, p (a) − Tm−1, p (a + 2 p d), for p = 2m

3 Proof of the Main Results Lemma 1 follows directly from : Proposition 6

⎧ Tm−1, p (z) =

p!2

p2 − p 2

0, for p = 2m − 1 d p , for p = 2m

Proof Let us determine the polynomials {Ts, p (z)}∞ s=0 by finding recurrent formula. Since a1 = a4 = 1, a2 = a3 = −1, then T0, p (z) = (z + 3d) p − (z + 2d) p − (z + d) p + z p We will prove that for all s ≥ 1 is valid Ts, p (z) = Ts−1, p (z + 3.4s d) − Ts−1, p (z + 2.4s d) − Ts−1, p (z + 4s d) + Ts−1, p (z) For example, if s = 1 then: T1, p (z) =

15 Σ

ak+1 (z + kd) p =

k=0

+

11 Σ

+

3 Σ

ak+1 (z + kd) p +

k=0

ak+1 (z + kd) p +

k=8

3 Σ

15 Σ

= T0, p (z) −

3 Σ

3 Σ

a22 +m+1 ((z + 4d) + md) p +

a23 +22 +m+1 ((z + 3.4d) + md) p =

m=0 3 Σ

am+1 ((z + 4d) + md) p −

m=0

+

3 Σ m=0

k=12

m=0

ak+1 (z + kd) p +

k=4

ak+1 (z + kd) p = T0, p (z) +

a23 +m+1 ((z + 2.4d) + md) p +

7 Σ

3 Σ

am+1 ((z + 2.4d) + md) p +

m=0

am+1 ((z + 3.4d) + md) p = T0, p (z) − T0, p (z + 4d) − T0, p (z + 2.4d) + T0, p (z + 3.4d)

m=0

The proof is similar in the general case:

238

M. Stoenchev and V. Todorov

Ts, p (z) =

4s+1 −1 Σ

ak+1 (z + kd) = p

k=0

s 4Σ −1

ak+1 (z + kd) + p

+

ak+1 (z + kd) + p

k=2.4s

ak+1 (z + kd) p +

k=4s

k=0 s 3.4 −1 Σ

s 2.4 −1 Σ

4s+1 −1 Σ

ak+1 (z + kd) p =

k=3.4s

= Ts−1, p (z) +

s 4Σ −1

a4s +m+1 ((z + 4s d) + md) p +

m=0

+

s 4Σ −1

a2.4s +m+1 ((z + 2.4 d) + md) + s

p

m=0

s 4Σ −1

a3.4s +m+1 ((z + 3.4s d) + md) p =

m=0

= Ts−1, p (z + 3.4s d) − Ts−1, p (z + 2.4s d) − Ts−1, p (z + 4s d) + Ts−1, p (z), whereby the necessary [recurrent formula is established. ] In the case 1 ≤ s ≤ 2p − 1, we prove that Ts, p (z) has the type: Ts, p (z) =

=

p−2 Σ

iΣ 1 −2

iΣ 2 −2

i 1 =2s i 2 =2(s−1) i 3 =2(s−2)

···

( ) i s −2 ( )( )( ) Σ p i1 i2 is ··· d p−is+1 L s, p z is+1 , i i i i 1 2 3 s+1 i =0 s+1

where L s, p = = (3 p−i1 − 2 p−i1 − 1)(3i1 −i2 − 2i1 −i2 − 1) . . . (3is −is+1 − 2is −is+1 − 1)4i1 +i2 +···+is −sis+1

Indeed, when s = 0 follows: T0, p (z) = (z + 3d) p − (z + 2d) p − (z + d) p + z p = p−2 ( ) Σ p (3 p−i1 − 2 p−i1 − 1)d p−i1 z i1 ; i 1 i =0 1

T1, p (z) = T0, p (z) − T0, p (z + 4d) − T0, p (z + 2.4d) + T0, p (z + 3.4d) =

Combinatorial Etudes and Number Theory

=

p−2 ( ) Σ p i 1 =0

=

i1

239

( ) (3 p−i1 − 2 p−i1 − 1)d p−i1 (z + 12d)i1 − (z + 8d)i1 − (z + 4d)i1 + z i1 =

p−2 ( ) ( ) Σ p (3 p−i1 − 2 p−i1 − 1)d p−i1 (z + 12d)i1 − (z + 8d)i1 − (z + 4d)i1 + z i1 = i1

i 1 =2

=

( ) ( ) ∑ p−2 ( p ) p−i1 ∑ − 2 p−i1 − 1)d p−i1 z i1 + ii12 =0 ii21 (3i1 −i2 − 2i1 −i2 − 1)(4d)i1 −i2 z i2 = i 1 =2 i 1 (3

=

(∑ ) ∑ p−2 ( p ) p−i1 i 1 −2 (i 1 ) i 1 −i 2 − 2 p−i1 − 1)d p−i1 − 2i1 −i2 − 1)(4d)i1 −i2 z i2 = i 2 =0 i 2 (3 i 1 =2 i 1 (3

p−2 i 1 −2 ( )( ) Σ Σ p i1 (3 p−i1 − 2 p−i1 − 1)(3i1 −i2 − 2i1 −i2 − 1)4i1 −i2 d p−i2 z i2 = = i i 1 2 i =2 i =0 1

2

p−2 i 1 −2 ( )( ) Σ Σ p i1 d p−i2 L 1, p z i2 , = i i 1 2 i =2 i =0 1

2

whereby the assertion is established for s = 1.Suppose that for some s ≥ 2, Ts−1, p (z) satisfies the recurrent formula and denote ( ) ( )( )( ) is p i1 i2 s, p ··· d p−is+1 L s, p , when s ≥ 1. G i1 ,i2 ,...,is+1 = i s+1 i1 i2 i3 Direct calculation shows: Ts, p (z) = Ts−1, p (z + 3.4s d) − Ts−1, p (z + 2.4s d) − Ts−1, p (z + 4s d) + Ts−1, p (z) =

=

iΣ 1 −2

p−2 Σ

i s−1 −2

iΣ 2 −2

···

Σ

s−1, p

G i1 ,i2 ,...,is ×

i s =0

i 1 =2(s−1) i 2 =2(s−2) i 3 =2(s−3)

) ( × (z + 3.4s d)is − (z + 2.4s d)is − (z + 4s d)is + z is =

=

p−2 Σ

iΣ 1 −2

i 1 =2(s−1) i 2 =2(s−2)

i s−1 −2

···

Σ i s =0

s−1, p

G i1 ,i2 ,...,is ×

240

M. Stoenchev and V. Todorov



⎞ ) is ( Σ i s × ⎝z i s + (3is −is+1 − 2is −is+1 − 1)4s(is −is+1 ) d is −is+1 z is+1 ⎠ = i s+1 i =0 s+1

i s−1 −2

iΣ 1 −2

p−2 Σ

=

···

i 1 =2s i 2 =2(s−1)

Σ

s−1, p

G i1 ,i2 ,...,is ×

i s =2

) i s −2 ( Σ is × (3is −is+1 − 2is −is+1 − 1)4s(is −is+1 ) d is −is+1 z is+1 = i s+1 i =0 s+1

=

iΣ 1 −2

p−2 Σ

···

i s−1 −2 i s −2 Σ Σ

i 1 =2s i 2 =2(s−1)

( s−1, p

G i1 ,i2 ,...,is

is

)

i s+1

i s =2 i s+1 =0

×

) ( × 3is −is+1 − 2is −is+1 − 1 4s(is −is+1 ) d is −is+1 z is+1 =

=

iΣ 1 −2

p−2 Σ

···

i s−1 −2 i s −2 Σ Σ

i 1 =2s i 2 =2(s−1)

s, p

G i1 ,i2 ,...,is+1 z is+1 ,

i s =2 i s+1 =0

which prove that Ts, p (z) satisfies the recurrent formula. Let us determine the degree of Ts, p (z), s ≥ 0. According to the derived formula we find i s+1 ≤ i s − 2 ≤ i s−1 − 4 ≤ · · · ≤ i 1 − 2s ≤ p − 2(s + 1), as equality is reached everywhere. Therefore deg Ts, p (z) = p − 2(s + 1). If p = 2m + r, r ∈ {0, 1}, then deg Tm−1, p (z) = p − 2m = r. For r = 0 we obtain that Tm−1, p (z) is a constant, equal to p!2 p−2 Σ

Tm−1, p (z) =

2(m−1) Σ

2(m−2) Σ

···

i 1 =2(m−1) i 2 =2(m−2)

(

m−1, p

= G p−2, p−4, p−6...,2,0 =

= p!2

p2 − p 2

p p−2

)(

d p . Indeed

i m−2 −2 i m−1 −2

iΣ 1 −2

···

i 1 =2(m−1) i 2 =2(m−2)

=

p2 − p 2

Σ Σ

m−1, p

G i1 ,i2 ,...,is+1 z im =

i m−1 =2 i m =0 0 2 Σ Σ

m−1, p

G i1 ,i2 ,...,is+1 z im =

i m−1 =2 i m =0

) ( )( ) p!d p p−2 4 2 p ··· d L m−1, p = m L m−1, p = 2 p−4 2 0

d p =⇒ Tm−1, p (z) = p!2

p2 − p 2

d p , when p = 2m.

Combinatorial Etudes and Number Theory

241

In the case r = 1, we will prove that Tm, p (z) = 0: Tm, p (z) = Tm−1, p (z + 3.4m d) − Tm−1, p (z + 2.4m d)− Tm−1, p (z + 4m d) + Tm−1, p (z) = iΣ 1 −2

p−2 Σ

i m−1 −2

···

Σ

m−1, p

G i1 ,i2 ,...,im

i m =0

i 1 =2(m−1) i 2 =2(m−2)

((z + 3.4m d)im − (z + 2.4m d)im − (z + 4m d)im + z im ) = 0, the last equations is valid, since the summation index i m takes values 0 and 1. Thus the Proposition 6 is proved. ❏

3.1 Proof of Corollary 1 Proof According to Proposition 6 for every m ∈ N is valid Tm,2m+1 (z) ≡ 0 and 2 Tm−1,2m (z) = (2m)!22m −m d 2m . Then Tm,2m (z) ≡ 0, due to Tm,2m (z) = Tm−1,2m (z + 3.4m d) − Tm−1,2m (z + 2.4m d)− −Tm−1,2m (z + 4m d) + Tm−1,2m (z) = 0. Using the recurrent formula we obtain: Ts,2k (z) = Ts,2k+1 (z) ≡ 0, for all s ≥ k, i.e. [ ] k Ts,k (z) ≡ 0, for all s ≥ . 2 There are two cases, first case: p = 2m + 1 and 0 ≤ r ≤ p − 1. Then Σ b∈B

b − r

Σ c∈C

c = r

2 p+1 Σ−1 k=0

ak+1 (a + kd) = r

4m+1 Σ−1

ak+1 (a + kd)r = Tm,r (a)

k=0

If r = 2r1 , then 2r1 ≤ 2m =⇒ r1 ≤ m and consequently Tm,r (z) = Tm,2r1 (z) = 0. When r = 2r1 + 1, then 2r1 + 1 ≤ 2m =⇒ r1 < m and again Tm,r (z) = Tm,2r1 +1 (z) = 0. The second case p = 2m and 0 ≤ r ≤ 2m − 1. Then

242

Σ

M. Stoenchev and V. Todorov

br −

b∈B

Σ

22m+1 Σ−1

cr =

c∈C

ak+1 (a + kd)r =

k=0

= Tm−1,r (a) +

m 4Σ −1

m −1 4Σ

ak+1 (a + kd)r +

m 2.4 Σ−1

ak+1 (a + kd)r =

k=4m

k=0

a22m +k+1 ((a + 4m d) + kd)r = Tm−1,r (a) − Tm−1,r (a + 4m d)

k=0

If r = 2r1 , then 2r1 ≤ 2m − 1 =⇒ r1 ≤ m − 1 and therefore Tm−1,r (z)=Tm−1,2r1 (z) = 0. If r = 2r1 + 1, then 2r1 + 1 ≤ 2m − 1 =⇒ r1 ≤ m − 1 and again Tm−1,r (z) = ❏ Tm−1,2r1 +1 (z) = 0, thus Corollary 1 is proved.

3.2 Proof of Proposition 1 It is clear from the proof of Lemma 1 that the constants a and d can be replaced by elements of an arbitrary field, having characteristic 0. We will consider the field of rational functions with complex coefficients, replacing a and d with rational functions P(z) and Q(z). Set Rk (z) = P(z) + k Q(z) and when n = 2s, m = 0, 1, . . . , 2s − 1, then simple calculation gives 2 ∞ Σ Σ l

Hn,m (z) = H2s,m (z) =

ak (P(z) + k.Q(z)) =

l=2s k=1

=

∞ Σ



l=s

=

∞ Σ

m ak+1 Rk+1 (z) +

22l+1 Σ−1

k=0

⎛ ⎝Tl−1,m (R1 (z)) +

l=s

=

2l 2Σ −1

=

m ak+1 Rk+1 (z)⎠ =

m ak+1 Rk+1 (z) +

⎛ ⎝2Tl−1,m (R1 (z)) +

l=s



k=0

k=0 ∞ Σ

m ak+1 Rk+1 (z) =

l=2s k=0



2l 2Σ −1

∞ 2Σ −1 Σ l

m

22l+1 Σ−1

⎞ m ak+1 Rk+1 (z)⎠ =

k=22l 2l 2Σ −1

⎞ a22l +k+1 R2m2l +k+1 (z)⎠ =

k=0

∞ Σ (

) 2Tl−1,m (R1 (z)) − Tl−1,m (R22l +1 (z)) = 0,

l=s

[

] [ ] 2s − 1 m since l − 1 ≥ s − 1 = ≥ 2 2

Combinatorial Etudes and Number Theory

243

In the case n = 2s + 1, m = 0, 1, . . . , 2s, we have ∞ 2Σ −1 Σ l

Hn,m (z) = H2s+1,m (z) =

m ak+1 Rk+1 (z) =

l=2s+1 k=0

=

∞ Σ



22(l+1) Σ−1



l=s

=

∞ Σ

m ak+1 Rk+1 (z) +

22l+1 Σ−1

k=0

⎛ ⎝Tl,m (R1 (z)) +

l=s

=

2l 2Σ −1

m ak+1 Rk+1 (z) +

22l+1 Σ−1

⎞ m ak+1 Rk+1 (z)⎠ =

k=22l

⎛ ⎝2Tl,m (R1 (z)) +

l=s

=

m ak+1 Rk+1 (z)⎠ =

k=0

k=0 ∞ Σ



2l 2Σ −1

⎞ a22l +k+1 R2m2l +k+1 (z)⎠ =

k=0 ∞ Σ (

) 2Tl,m (R1 (z)) − Tl,m (R22l +1 (z)) = 0,

l=s

since l ≥ s ≥

[m ] 2

Now set n = m = 2s. According to calculations above and Proposition 6, we have Hn,m (z) = H2s,2s (z) =

∞ Σ (

) 2Tl−1,2s (R1 (z)) − Tl−1,2s (R22l +1 (z)) =

l=s

= 2Ts−1,2s (R1 (z)) − Ts−1,2s (R22s +1 (z)) = (2s)!22s

2

−s

Q 2s (z) = n!2

n 2 −n 2

Q n (z), ❏

thus Proposition 1 is proved.

3.3 Proof of Proposition 2 Let n = m = 2s and set γn = n!2

n 2 −n 2

Hn,n (z) = γn Q n (z) =⇒

. According to Proposition 1 we have

d Hn,n (z) = nγn Q n−1 (z)Q ' (z) =⇒ dz

244

M. Stoenchev and V. Todorov 2 ∞ Σ Σ l

=⇒ γn Q

n−1

'

(z)Q (z) =

2 ∞ Σ Σ 2l

ak Rkn−1 (z)Rk' (z)

=

l=2s k=1 2 Σ

ak Rkn−1 (z)Rk' (z) =

l=s k=1

2s

=

ak Rkn−1 (z)Rk' (z).

k=1

and by induction, we prove that 2 Σ 2s

'

''

γn Q(z)Q (z)Q (z) . . . Q

(n−1)

(z) =

ak Rk (z)Rk' (z)Rk'' (z) . . . Rk(n−1) (z) =⇒

k=1 2 Σ

(

2s

=⇒ γn =

k=1

ak

P(z) +k Q(z)

)(

P ' (z) +k Q ' (z)

)(

) ( (n−1) ) (z) P '' (z) P + k ... +k Q '' (z) Q (n−1) (z)

The last equality holds for all polynomials P and those Q for which deg Q ≥ n − 1. For arbitrary α1 , α2 , . . . , αn ∈ C, we can choose P and Q, such that for fixed z 0 to satisfy the equalities: P(z 0 ) − α1 Q(z 0 ) = P ' (z 0 ) − α2 Q ' (z 0 ) = P '' (z 0 ) − α3 Q '' (z 0 ) = · · · = = P (n−1) (z 0 ) − αn Q (n−1) (z 0 ) = 0, where Q (i) (z 0 ) /= 0, i = 0, 1, . . . , n − 1. Therefore 2 Σ n

γn =

ak (α1 + k)(α2 + k) · · · (αn + k).

k=1

Remark 1 For even n and arbitrary rational functions S1 (z), S2 (z), . . . , Sn (z) is valid: 2n Σ ak (S1 (z) + k)(S2 (z) + k) · · · (Sn (z) + k), γn = k=1

which is stronger than proved above. Corollary 2 For every polynomial P ∈ C[z] from even degree n = deg P and leading coefficient α is valid deg P 2Σ n 2 −n ak P(k) = α.n!2 2 k=1

Combinatorial Etudes and Number Theory

245

3.4 Proof of Proposition 3

We can describe a proof based on the results obtained above for the sequence $\{H_{n,m}(z)\}$, but we prefer another approach. Let us introduce the polynomial sequence $\{\widetilde{T}_{s,p}(z_1,\ldots,z_p)\}$ by

$$
\widetilde{T}_{s,p}(z_i) = \widetilde{T}_{s,p}(z_1, z_2, \ldots, z_p) = \sum_{k=0}^{4^{s+1}-1} a_{k+1}\prod_{i=1}^{p}(z_i + kd),
$$

which is a generalization of $\{T_{s,p}(z)\}$. The following recurrence formula is derived similarly as above:

$$
\widetilde{T}_{s,p}(z_i) = \widetilde{T}_{s-1,p}(z_i + 3\cdot 4^{s} d) - \widetilde{T}_{s-1,p}(z_i + 2\cdot 4^{s} d) - \widetilde{T}_{s-1,p}(z_i + 4^{s} d) + \widetilde{T}_{s-1,p}(z_i).
$$

We will prove that $\widetilde{T}_{k,p}(z_1, z_2, \ldots, z_p) \equiv 0$ for $k \ge \left[\frac{p}{2}\right]$, while in the case $1 \le k \le \left[\frac{p}{2}\right]-1$ the following holds:

$$
\widetilde{T}_{k,p}(z_1, z_2, \ldots, z_p) = \sum_{i_1=2k}^{p-2}\;\sum_{i_2=2(k-1)}^{i_1-2}\cdots\sum_{i_{k+1}=0}^{i_k-2}\;\sum_{\sigma(k+1)\subset\sigma(k)\subset\cdots\subset\sigma(1)} L_{k,p}\, d^{\,p-i_{k+1}}\, z_{\sigma_1(k+1)}\cdots z_{\sigma_{i_{k+1}}(k+1)},
$$

where $L_{k,p}$ is defined as above and $z_{\sigma_0(k+1)}$ is set equal to 1. Assuming that $z_{i_0} = 1$, we calculate

$$
\widetilde{T}_{0,p}(z_j) = z_1 z_2 \cdots z_p + \sum_{s=0}^{p}\;\sum_{0\le j_1<\cdots<j_s\le p} (3^{p-s} - 2^{p-s} - 1)\, d^{\,p-s}\, z_{j_1} z_{j_2}\cdots z_{j_s} = \cdots
$$

… > 0. Thus ϕ is a continuous bijection from [0, 1] to [0, 1]. Since we use a bijection for this lattice rule, we call it the bijective lattice rule and use the notation LATB.

3.3 Lattice Sequences Based on Optimal Vectors

If we change the generating vector to be optimal in the way described in Kuo and Nuyens (2016), we obtain an improved lattice sequence. At the beginning of the algorithm the input is the dimensionality s and the number of samples N. At the first step of the algorithm the s-dimensional optimal generating vector

$$
z = (z_1, z_2, \ldots, z_s)   \qquad (7)
$$

is generated by the fast construction method described by Dirk Nuyens (Kuo and Nuyens 2016). At the second step of the algorithm the points of the lattice rule are generated by the formula

$$
x_k = \left\{\frac{k}{N}\, z\right\}, \quad k = 1, \ldots, N.   \qquad (8)
$$

At the third and last step of the algorithm an approximate value $I_N$ of the multidimensional integral is evaluated by the formula

$$
I_N = \frac{1}{N}\sum_{k=1}^{N} f\!\left(\left\{\frac{k}{N}\, z\right\}\right).   \qquad (9)
$$


Fig. 1 The flowchart of the optimized lattice algorithms

The special choice of this optimal generating vector is definitely more efficient than the Fibonacci generating vector, which is optimal only in the two-dimensional case (Wang and Hickernell 2000). Our improved lattice rule satisfies (Kuo and Nuyens 2016)

$$
D_N^{*} = O\!\left(\frac{\log^{s} N}{N}\right).   \qquad (10)
$$

The steps of the algorithm are shown in the flowchart in Fig. 1. We use the notation LATO for this algorithm with an optimal generating vector.
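To make the three steps concrete, here is a minimal Python sketch of a rank-1 lattice rule estimator. The generating vector is assumed to be given (for example, produced offline by the component-by-component construction of Kuo and Nuyens); the hard-coded vector below is an illustrative placeholder, not an actual optimal vector.

```python
import numpy as np

def lattice_rule(f, z, n):
    """Approximate the integral of f over [0,1]^s with an N-point rank-1 lattice rule.

    f : vectorized integrand taking an (N, s) array of points
    z : generating vector of length s (assumed supplied, e.g. from a CBC construction)
    n : number of lattice points N
    """
    z = np.asarray(z, dtype=np.int64)
    k = np.arange(1, n + 1).reshape(-1, 1)
    points = np.mod(k * z / n, 1.0)       # x_k = {k z / N}, formula (8)
    return np.mean(f(points))             # I_N = (1/N) sum f(x_k), formula (9)

# Example: 5-dimensional test integrand exp(x_1 * ... * x_5), as in (11).
f = lambda x: np.exp(np.prod(x, axis=1))

# Placeholder generating vector for s = 5 (illustration only, NOT an optimal vector).
z = [1, 275, 451, 129, 307]
print(lattice_rule(f, z, 2**12))
```

The only difference between LATO, LATB and FIBO in this setting is how the vector z (and, for LATB, the point mapping) is chosen; the estimator itself is identical.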

4 Numerical Examples and Results

The numerical study includes high performance computing of the multidimensional integrals

$$
I_s = \int_{[0,1]^s} \exp\!\left(\prod_{i=1}^{s} x_i\right) dx.   \qquad (11)
$$

We use the expansion of the exponential function in Taylor series and integrate the terms $(x_1\cdots x_s)^n$:

$$
\int_{[0,1]^s} \exp\!\left(\prod_{i=1}^{s} x_i\right) dx = \sum_{n=0}^{\infty}\frac{1}{(n+1)^{s}\, n!} = {}_{s}F_{s}(1,\ldots,1;\,2,\ldots,2;\,1),
$$

where

$$
{}_{p}F_{q}(a_1,\ldots,a_p;\,b_1,\ldots,b_q;\,x) = \sum_{n=0}^{\infty}\frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{x^{n}}{n!}
$$

is the generalized hypergeometric function and $(c)_n = c(c+1)\cdots(c+n-1)$ is the Pochhammer symbol. Consider the case of a 50-dimensional integral:

$$
I_{50} = \int_{[0,1]^{50}} \exp\!\left(\prod_{i=1}^{50} x_i\right) dx.   \qquad (12)
$$

We calculate its reference value by expanding the exponential function in Taylor series and integrating the terms $(x_1\cdots x_{50})^n$:

$$
\int_{[0,1]^{50}} \exp\!\left(\prod_{i=1}^{50} x_i\right) dx = \sum_{n=0}^{\infty}\frac{1}{(n+1)^{50}\, n!} = {}_{50}F_{50}(1,\ldots,1;\,2,\ldots,2;\,1).
$$
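Because the terms $1/((n+1)^s\, n!)$ decay factorially, the reference value above can be reproduced by a short partial summation; the truncation at 60 terms in the sketch below is an arbitrary but safe choice.

```python
from math import factorial

def reference_value(s, terms=60):
    """Partial sum of sum_{n>=0} 1/((n+1)^s * n!) for the integral of exp(x_1*...*x_s)."""
    return sum(1.0 / ((n + 1) ** s * factorial(n)) for n in range(terms))

print(reference_value(5))    # 5-dimensional reference value
print(reference_value(50))   # 50-dimensional reference value, approximately 1 + 2**-50
```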

The results are given in the tables below. Each table contains information about the MC approach which is applied, the obtained relative error, the required CPU time and the number of points. The numerical experiments show that the worst approach is the bijective polynomial lattice rule, which is also at least two times slower than the Fibonacci based lattice rule and the optimized lattice rule; the latter two are nearly identical in speed. For the 5-dimensional integral the optimized lattice approach LATO and the Fibonacci based lattice rule FIBO give close relative errors (2.83e-05 and 5.76e-05 for N = 2^16), about one order better than the bijective polynomial lattice rule LATB, which gives 5.11e-04; LATO is also two times faster than LATB (see Table 1). For the 10-dimensional integral the best results are produced by the optimized lattice approach LATO: for N = 2^20 it gives relative error 1.39e-07, which is one order better than the Fibonacci based lattice rule FIBO (1.39e-06) and three orders better than the bijective polynomial lattice rule LATB (1.54e-03); it is also two times faster than LATB (see Table 2). For the 15-dimensional integral the best results are produced by the optimized lattice approach

Table 1 Algorithmic comparison of the relative error for the 5-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^6    2.19e-02  0.001     2.03e-01  0.002     6.70e-03  0.001
2^8    2.03e-04  0.005     3.22e-02  0.008     4.98e-03  0.003
2^10   1.32e-04  0.02      1.65e-03  0.03      1.58e-04  0.01
2^12   2.39e-04  0.08      1.01e-03  0.13      5.94e-04  0.05
2^14   1.15e-05  0.33      5.55e-04  0.54      2.38e-05  0.17
2^16   2.83e-05  0.61      5.11e-04  2.17      5.76e-05  0.63


Table 2 Algorithmic comparison of the relative error for the 10-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^10   4.64e-05  0.001     1.49e-02  0.04      3.87e-04  0.035
2^12   3.56e-05  0.09      2.63e-02  0.16      1.76e-04  0.062
2^14   9.24e-06  0.34      7.50e-03  0.64      7.55e-05  0.23
2^16   2.45e-06  1.52      2.15e-03  2.54      1.99e-05  0.78
2^18   5.76e-06  6.11      2.21e-03  10.08     1.33e-04  2.54
2^20   1.39e-07  21.77     3.75e-04  40.2      1.39e-06  6.03

Table 3 Algorithmic comparison of the relative error for the 15-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^15   4.89e-07  0.73      9.39e-03  1.59      3.67e-05  0.50
2^16   1.22e-06  1.27      1.25e-02  2.91      1.04e-05  1.04
2^17   9.34e-07  2.23      1.29e-02  5.82      2.15e-05  1.66
2^18   1.15e-06  3.64      1.16e-02  10.80     1.67e-05  3.32
2^19   5.31e-07  5.47      1.89e-03  23.13     1.49e-05  7.11
2^20   4.45e-08  9.27      1.54e-03  44.46     1.08e-05  16.43

Table 4 Algorithmic comparison of the relative error for the 20-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^10   1.51e-05  0.03      2.83e+00  0.05      9.53e-05  0.009
2^14   8.51e-08  0.48      1.62e-01  0.81      9.53e-05  0.13
2^20   5.14e-10  18.93     1.26e-02  19.05     2.83e-06  17.72
2^21   2.72e-09  28.77     1.19e-02  92.32     1.63e-06  24.74
2^22   7.29e-10  33.80     1.09e-02  108.7     1.82e-06  52.58

LATO: for N = 2^20 it gives relative error 4.45e-08, which is three orders better than the Fibonacci based lattice rule FIBO (1.08e-05) and five orders better than the bijective polynomial lattice rule LATB (1.54e-03), see Table 3. For the 20-dimensional integral the best results are again produced by LATO: for N = 2^22 it gives relative error 7.29e-10 and clearly outperforms the other two, being four orders better than FIBO (1.82e-06) and eight orders better than LATB (1.10e-02), see Table 4. For the 25-dimensional integral LATO gives, for N = 2^22, relative error 7.51e-10, which is better than FIBO (2.98e-08) and four orders better than LATB (1.19e-02), see Table 5. For the 30-


Table 5 Algorithmic comparison of the relative error for the 25-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^12   2.15e-09  0.13      5.94e+00  0.22      2.93e-08  0.036
2^16   1.84e-09  2.08      4.06e-01  3.62      2.98e-08  0.57
2^20   8.28e-10  31.61     1.83e-03  55.57     2.98e-08  9.27
2^22   7.51e-10  109.37    1.19e-02  92.32     2.98e-08  36.72

Table 6 Algorithmic comparison of the relative error for the 30-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^10   2.26e-05  0.003     1.86e+02  0.06      9.31e-05  0.01
2^12   8.92e-05  0.009     4.64e+01  0.25      1.31e-05  0.03
2^16   1.92e-05  2.16      2.93e+00  4.01      1.01e-05  0.60
2^20   1.69e-06  16.03     1.61e-01  60.0      9.31e-06  9.92

Table 7 Algorithmic comparison of the relative error for the 50-dimensional integral

N      LATO      Time, s   LATB      Time, s   FIBO      Time, s
2^10   7.88e-6   0.05      1.75e+02  0.27      6.23e-4   0.08
2^12   1.88e-6   0.17      1.00e+01  0.92      1.55e-4   0.35
2^16   8.44e-8   2.14      8.72e-01  16.3      9.72e-5   5.21
2^20   4.28e-8   17.65     1.97e-01  108       6.08e-5   32.76

dimensional integral the best results are produced by LATO: for N = 2^20 it gives relative error 1.69e-06, one order better than the Fibonacci based lattice rule FIBO (9.31e-06) and five orders better than the bijective polynomial lattice rule LATB, see Table 6. For the 50-dimensional integral with N = 2^20 the Fibonacci approach gives 6.08e-5, which is better than the bijective rule LATB (1.97e-01) but worse by at least three orders than the optimized lattice rule, which gives a relative error of 4.28e-8, see Table 7. This is very high accuracy, 3-4 orders better than the Fibonacci based lattice rule and 7 orders better than the bijective rule. We thus demonstrate the advantages of the new lattice method and its capability to achieve very high accuracy in less than a minute on a laptop with a quad-core CPU.


5 Conclusion

A numerical study of a lattice rule with an optimal generating vector, a bijective lattice rule and a Fibonacci based lattice rule has been presented in this paper. The optimized lattice rule described here is not only one of the best available algorithms for high dimensional integrals but also one of the few practical methods, because deterministic algorithms need a huge amount of time for the evaluation of such multidimensional integrals, as discussed in this paper. The numerical tests show that the improved lattice rule is efficient for multidimensional integration, especially for computing integrals of very high dimension, up to 50. The novelty is that such a comparison has been made for the first time and that the proposed optimized method gives very high accuracy in less than a minute on a laptop even for the 50-dimensional integral. This is important because it may be crucial for a more reliable interpretation of results for European style options, which are foundational in computational finance.

Acknowledgements Venelin Todorov is supported by the Bulgarian National Science Fund under Projects KP-06-N52/5 “Efficient methods for modeling, optimization and decision making” and KP-06-N52/2 “Perspective Methods for Quality Prediction in the Next Generation Smart Informational Service Networks”.

References Antonov, I.A., Saleev, V.M.: An economic method of computing L Pτ -sequences. U.S.S.R. Comput. Math. Math. Phys.19(1), 252–256 (1980) Bahvalov, N.: On the approximate computation of multiple integrals. Vestnik Moscow State Univ. 4, 3–18 (1959) Bakhvalov, N.: On the approximate calculation of multiple integrals. J. Complex. 31(4), 502–516 (2015) Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 81(3), 637–659 (1973) Boyle, P.P.: Options: a Monte Carlo approach. J. Financ. Econ. 4(3), 323–338 (1977) Boyle, P.P., Lai, Y., Tan, K.: Using lattice rules to value low-dimensional derivative contracts (2001) Broadie, M., Glasserman, P.: Pricing American-style securities using simulation. J. Econ. Dyn. Control 21(8–9), 1323–1352 (1997) Caflish, R.E.: Monte Carlo and quasi-Monte Carlo methods. Acta Numer 7, 1–49 (1998). https:// doi.org/10.1017/S0962492900002804 Caffisch, R.E., Morokoff, W., Owen, A.: Valuation of mortgage-backed securities using Brownian bridges to reduce effective dimension. J. Comput. Finan. 1(1), 27–46 (1997) Centeno, V., Georgiev, I.R., Mihova, V., Pavlov, V.: Price forecasting and risk portfolio optimization. In: AIP Conference Proceedings, vol. 2164, Issue 1, p. 060006. AIP Publishing LLC (2019) Chance, D.M., Brook, R.: An Introduction to Derivatives and Risk Management, 8th edn. SouthWestern College Pub (2009) Cox, J.C., Rubinstein, M.: Options Markets, 1st edn. Prentice Hall (1985) Cox, J.C., Ross, S.A., Rubinstein, M.: Option pricing: a simplified approach. J. Financ. Econ. 7(3), 229–263 (1979)


Dimov, I.: Monte Carlo Methods for Applied Scientists. World Scientific (2008) Dimov, I., Karaivanova, A.: Error analysis of an adaptive Monte Carlo method for numerical integration. Math. Comput. Simul. 47, 201–213 (1998) Duffie, D.: Dynamic Asset Pricing Theory, 3rd edn. Princeton University Press (2001) Duffie, D.: Security Markets: Stochastic Models. Emerald Group Publishing Limited (1988) Fox, B.: Algorithm 647: implementation and relative efficiency of quasirandom sequence generators. ACM Trans. Math. Softw. 12(4), 362–376 (1986) Gemmill, G.: Options Pricing. McGraw-Hill Publishing Co. (1992) Harrison, J.M., Pliska, S.M.: Martingales and stochastic integrals in the theory of continuous trading. Stoch. Process. their Appl. 11(3), 215–260 (1981) Hua, L.K., Wang, Y.: Applications of Number Theory to Numerical Analysis. Springer (1981) Hull, J.C.: Options, Futures, and Other Derivative Securities, 2nd edn. Prentice-Hall (1992) Joy, C., Boyle, P.P., Tan, K.S.: Quasi-Monte Carlo methods in numerical finance. Manage. Sci. 42(6), 926–938 (1996) Korobov, N.M.: The approximate computation of multiple integrals. Dokl. Akad. Nauk SSSR 124, 1207–1210 (1959) Korobov, N.M.: Properties and calculation of optimal coefficients. Sov. Math. Dokl. 1, 696–700 (1960) Korobov, N.M.: Number-Theoretical Methods in Approximate Analysis. Fizmatgiz, Moscow (1963) Kuo, F.Y., Nuyens, D.: Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients - a survey of analysis and implementation. Found. Comput. Math. 16(6), 1631–1696 (2016) Lai, Y., Spanier, J.: Applications of Monte Carlo/Quasi-Monte Carlo methods in finance: option pricing. In: Proceedings of the Claremont Graduate University Conference (1998) Niederreite, H.: Existence of good lattice points in the sense of Hlawka. Monatsh. Math. 86, 203–219 (1978) Niederreiter, H., Talay, D.: Monte Carlo and Quasi-Monte Carlo methods. Springer (2002) Owen, A.: Monte Carlo and Quasi-Monte Carlo methods in Scientific Computing, vol. 106, Lecture Notes in Statistics, pp. 299–317 (1995) Paskov, S.H.: Computing high dimensional integrals with applications to finance, Technical report CUCS-023-94, Columbia University (1994) Paskov, S.H., Traub, J.F.: Faster valuation of financial derivatives. J. Portf. Manag. 22(1), 113–120 (1995) Raeva, E., Georgiev, I. R.: Fourier approximation for modeling limit of insurance liability. In: AIP Conference Proceedings, vol. 2025, Issue 1, p. 030006. AIP Publishing LLC (2018) Sharygin, I.F.: A lower estimate for the error of quadrature formulas for certain classes of functions. Zh. Vychisl. Mat. i Mat. Fiz. 3, 370–376 (1963) Sloan, I.H., Kachoyan, P.J.: Lattice methods for multiple integration: theory, error analysis and examples. SIAM J. Numer. Anal. 24, 116–128 (1987) Sloan, I.H., Joe, S.: Lattice Methods for Multiple Integration. Oxford University Press (1994) Tan, K.S., Boyle, P.P.: Applications of randomized low discrepancy sequences to the valuation of complex securities. J. Econ. Dyn. Control 24, 1747–1782 (2000) Tanaka, H., Nagata, H.: Quasi-random number method for the numerical integration. Suppl. Prog. Theor. Phys. 56, 121–131 (1974) Tezuka, S.: Uniform Random Numbers: Theory and Practice. Kluwer Academic Publishers (1995) Wang, Y., Hickernell, F.J.: An historical overview of lattice point sets, in Monte Carlo and QuasiMonte Carlo Methods 2000. 
In: Proceedings of a Conference held at Hong Kong Baptist University, China (2000) Wilmott, P., Dewynne, J., Howison, S.: Option Pricing: Mathematical Models and Computation. Oxford University Press (1995)

Advanced Monte Carlo Methods to Neural Networks Venelin Todorov

Abstract The paper presents an approach for computing multidimensional integrals that arise in neural networks, namely a rank-1 lattice sequence able to resolve multidimensional integrals up to 100 dimensions. The theoretical background is presented in detail, and the results of the proposed approach are compared with those of three other stochastic techniques applied to the same problem. The proposed method behaves better and has significant advantages over the others. Keywords Neural networks · Monte Carlo methods · Multidimensional integrals

1 Introduction

The Monte Carlo method is known to be accurate only with a tremendous number of scenarios, since its rate of convergence is O(1/√N) (Dimov 2008). In the last few years new approaches have been developed that outperform standard Monte Carlo in terms of numerical efficiency. It has been found that there can be efficiency gains in using deterministic sequences rather than the random sequences which are a feature of standard Monte Carlo (Boyle et al. 2001). These deterministic sequences are carefully selected so that they are well dispersed throughout the region of integration. Sequences with this property are known as low discrepancy sequences. Quasi-Monte Carlo methods use deterministic sequences that have better uniformity properties, measured by discrepancy. They are usually superior to the Monte Carlo method as they have a convergence rate of O((log N)^s / N), where N is the number of samples and s is the dimensionality of the problem.


Monte Carlo methods have been developed over many years in a range of different fields, but have only recently been applied to the problems in Bayesian statistics (Lin et al. 2009). As well as providing a consistent framework for statistical pattern recognition, the stochastic approach offers a number of practical advantages including a solution to the problem for higher dimensions (Lin 2011). Multidimensional integrals arise in algebraic analysis for nonidentifiable learning machines (Watanabe 2001). The accurate evaluation of marginal likelihood integrals is a difficult fundamental problem in Bayesian inference that has important applications in machine learning (Song et al. 2017). Conventional approaches to network training are based on the minimization of an error function, and are often motivated by some underlying principle such as maximum likelihood (Bendavid 2017). The paper is organized as follows. The problem setting and motivation is presented in Sect. 2. The theoretical aspects of the methods are presented in Sect. 3. Numerical study and discussions are given in Sect. 4. The conclusions are given in Sect. 5.

2 Formulation of the Problem

A fundamental problem in neural networks is the accurate evaluation of multidimensional integrals. We will primarily be interested in two kinds of integrals. The first has the form

$$
\int_{\Omega} p_1^{u_1}(x)\cdots p_s^{u_s}(x)\, dx,   \qquad (1)
$$

where $\Omega \subseteq \mathbb{R}^s$, $x = (x_1, \ldots, x_s)$, the $p_i(x)$ are polynomials and the $u_i$ are integers. The second kind of integral has the form

$$
\int_{\Omega} e^{-N f(x)}\,\varphi(x)\, dx,   \qquad (2)
$$

where $f(x)$ and $\varphi(x)$ are s-dimensional polynomials and N is a natural number. These integrals are investigated by Shaowei Lin in Lin (2011), Lin et al. (2009). The asymptotics of such integrals is well understood for regular statistical models, but little was known for singular models until a breakthrough in 2001 due to Sumio Watanabe (Watanabe 2001). His insight was to put the model in a suitable standard form by employing the technique of resolution of singularities from algebraic geometry. These integrals have been evaluated unsatisfactorily with deterministic (Watanabe 2001) and algebraic methods (Song et al. 2017) up to now, and it is known that Monte Carlo methods (Dimov 2008; Pencheva et al. 2021) outperform these methods, especially in high dimensions. Consider the problem of approximate integration of the multiple integral

$$
\int_{[0,1)^s} f(x)\, dx = \int_0^1 dx^{(1)} \int_0^1 dx^{(2)} \cdots \int_0^1 dx^{(s)}\; f(x^{(1)}, x^{(2)}, \ldots, x^{(s)}) = \theta,   \qquad (3)
$$


where $x = (x^{(1)}, \ldots, x^{(s)}) \in [0,1)^s$ and $|\theta| < 1$. Since the variance is finite, the function f must be square-integrable in $[0,1)^s$. For small values of s, numerical integration methods such as Simpson’s rule or the trapezoidal rule can be used to approximate the integral. These methods, however, suffer from the so-called curse of dimensionality and become impractical as s increases beyond 3 or 4. One viable technique for larger values of s is to use sampling methods, so that the estimator of θ becomes $\hat{\theta} = \frac{1}{N}\sum_{n=0}^{N-1} f(x_n)$, where $P_N = \{x_0, x_1, \ldots, x_{N-1}\} \subset [0,1)^s$ is some point set. Different techniques are available for selecting these point sets. When the integration nodes $P_N$ are N independently and identically distributed random points in $[0,1)^s$, the above sampling method becomes the standard Monte Carlo integration method and the resulting estimator $\hat{\theta}_{MC}$ is known as the Monte Carlo estimator. Important properties of $\hat{\theta}_{MC}$ are as follows:

• It is an unbiased estimator of θ with variance σ²/N, i.e. $E[\hat{\theta}_{MC}] = \theta$ and $Var[\hat{\theta}_{MC}] = \sigma^2/N$.
• The Strong Law of Large Numbers asserts that $\hat{\theta}_{MC}$ converges to θ almost surely.
• The Central Limit Theorem guarantees that the distribution of $\hat{\theta}_{MC}$ converges asymptotically to a normal distribution with mean θ and variance σ²/N as N → ∞. In other words, the error $|\theta - \hat{\theta}_{MC}|$ converges probabilistically at a rate of $O(N^{-1/2})$.
• The rate of convergence $O(N^{-1/2})$ is independent of s.

We will now give a brief explanation which demonstrates the strength of the Monte Carlo and quasi-Monte Carlo approach (Dimov 2008). Following Dimov (2008), choose 100 nodes on each of the coordinate axes in the s-dimensional cube G = E^s; then about 10^100 values of the function f(x) have to be evaluated. Assume a time of 10^-7 s is necessary for calculating one value of the function (Dimov 2008). Then a time of the order of 10^93 s would be necessary for the computation of the integral, while one year has 31536 × 10^3 s. The MC approach consists of generating N pseudo-random values (points) (PRV) in G, evaluating the values of f(x) at these points, and averaging the computed values of the function. For each uniformly distributed random (UDR) point in G we have to generate 100 UDR numbers in [0, 1]. Assume that the expression in front of h^-6 is of order 1 (Dimov 2008). Here h = 0.1, so N ≈ 10^6 and it is necessary to generate 100 × 10^6 = 10^8 PRV. Usually 2 operations are sufficient to generate a single PRV, and according to Dimov (2008) the time required to generate one PRV is the same as that for computing one value of f(x). So, in order to solve the task with the same accuracy, a time of 10^8 × 2 × 10^-7 ≈ 20 s is necessary. We conclude that in the case of a 100-dimensional integral the Monte Carlo approach is about 5 × 10^91 times faster than the deterministic one. This motivates our study of new highly efficient stochastic approaches for the problem under consideration.
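A minimal sketch of the crude Monte Carlo estimator just described is given below, together with the usual sample-based error estimate σ̂/√N; the integrand and sample size are illustrative choices only.

```python
import numpy as np

def crude_mc(f, s, n, seed=None):
    """Crude Monte Carlo estimate of the integral of f over [0,1)^s with N i.i.d. points."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, s))                 # N i.i.d. uniform points in [0,1)^s
    vals = f(x)
    est = vals.mean()                      # theta_hat = (1/N) * sum f(x_n)
    err = vals.std(ddof=1) / np.sqrt(n)    # statistical error ~ sigma / sqrt(N)
    return est, err

# Placeholder integrand: exp(prod x_i) in s = 4 dimensions.
theta_hat, stderr = crude_mc(lambda x: np.exp(np.prod(x, axis=1)), s=4, n=10**5, seed=0)
print(theta_hat, stderr)
```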


3 The Description of the Stochastic Approaches

For more information about the theoretical aspects of lattice rules see the monographs of Sloan and Kachoyan (1987), Hua and Wang (1981) and Sloan and Joe (1994), which provide comprehensive expositions of the theory of integration lattices. We will use the rank-1 lattice sequence (Wang and Hickernell 2002)

$$
x_k = \left\{\frac{k}{N}\, z\right\}, \quad k = 1, \ldots, N,   \qquad (4)
$$

where N ≥ 2 is an integer, $z = (z_1, z_2, \ldots, z_s)$ is the generating vector and $\{\cdot\}$ denotes the fractional part. For the definitions of $E_s^{\alpha}(c)$ and $P_{\alpha}(z, N)$ see Wang and Hickernell (2002); for more details see also Bahvalov (1959).

Definition 1 Consider the point set $X = \{x_i \mid i = 1, 2, \ldots, N\}$ in $[0,1)^s$, N > 1. Denote $x_i = (x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(s)})$ and $J(v) = [0, v_1) \times [0, v_2) \times \cdots \times [0, v_s)$. Then the discrepancy of the set is defined as

$$
D_N^{*} := \sup_{0 \le v_j \le 1}\left| \frac{\#\{x_i \in J(v)\}}{N} - \prod_{j=1}^{s} v_j \right|.   \qquad (5)
$$

In 1959 Bahvalov proved (Bahvalov 1959) that there exists an optimal choice of the generating vector z:

$$
\left| \frac{1}{N}\sum_{k=1}^{N} f\!\left(\left\{\frac{k}{N}\, z\right\}\right) - \int_{[0,1)^s} f(u)\, du \right| \le c\, d(s,\alpha)\, \frac{(\log N)^{\beta(s,\alpha)}}{N^{\alpha}},   \qquad (6)
$$

for functions $f \in E_s^{\alpha}(c)$, $\alpha > 1$, where $d(s,\alpha)$ and $\beta(s,\alpha)$ do not depend on N. A generating vector z which satisfies (6) is called an optimal generating vector (Wang and Hickernell 2002). While the existence of optimal generating vectors is guaranteed by this theoretical result, the main bottleneck lies in their construction, especially in very high dimensions (Dimov 2008). The first generating vector in our study is built from the generalized Fibonacci numbers of the corresponding dimension: $z = (1, F_n^{(s)}(2), \ldots, F_n^{(s)}(s))$, where

$$
F_n^{(s)}(j) := F_{n+j-1}^{(s)} - \sum_{i=0}^{j-2} F_{n+i}^{(s)}   \qquad (7)
$$


and $F_{n+l}^{(s)}$ (l = 0, ..., j − 1; j is an integer, 2 ≤ j ≤ s) is the corresponding term of the s-dimensional Fibonacci sequence (Wang and Hickernell 2002). If we change the generating vector to be optimal in the way described in Kuo and Nuyens (2016), we obtain the improved lattice sequence. We now describe the steps of our algorithm. At the beginning of the algorithm the input is the dimensionality s and the number of samples N. At the first step an s-dimensional optimal generating vector

$$
z = (z_1, z_2, \ldots, z_s)   \qquad (8)
$$

is generated by the fast construction method described by Dirk Nuyens (Kuo and Nuyens 2016). At the second step the points of the lattice rule are generated by the formula

$$
x_k = \left\{\frac{k}{N}\, z\right\}, \quad k = 1, \ldots, N.   \qquad (9)
$$

At the third and last step an approximate value $I_N$ of the multidimensional integral is evaluated by the formula

$$
I_N = \frac{1}{N}\sum_{k=1}^{N} f\!\left(\left\{\frac{k}{N}\, z\right\}\right).   \qquad (10)
$$

The special choice of this optimal generating vector is definitely more efficient than the Fibonacci generating vector, which is optimal only in the two-dimensional case (Wang and Hickernell 2002). Our improved lattice rule satisfies (Kuo and Nuyens 2016)

$$
D_N^{*} = O\!\left(\frac{\log^{s} N}{N}\right).   \qquad (11)
$$
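The Fibonacci generating vector of formula (7) can be constructed as in the following sketch. It assumes the usual order-s recurrence for the s-dimensional generalized Fibonacci numbers (each term is the sum of the previous s terms, starting from s − 1 zeros and a one); this indexing convention is an assumption made here for illustration.

```python
def fibonacci_vector(s, n):
    """Generating vector z = (1, F_n^{(s)}(2), ..., F_n^{(s)}(s)) from formula (7),
    using the order-s recurrence F_k = F_{k-1} + ... + F_{k-s} (assumed convention)."""
    fib = [0] * (s - 1) + [1]              # F_0 = ... = F_{s-2} = 0, F_{s-1} = 1
    while len(fib) < n + s + 1:
        fib.append(sum(fib[-s:]))
    z = [1]
    for j in range(2, s + 1):
        z.append(fib[n + j - 1] - sum(fib[n + i] for i in range(j - 1)))  # formula (7)
    return z

# The number of lattice points is taken to be the generalized Fibonacci number F_n^{(s)}.
print(fibonacci_vector(3, 10))
```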

4 Numerical Results

We considered different examples of 4-, 7-, 10-, 30- and 100-dimensional integrals, respectively, for which we have computed reference values.

Example 1 (s = 4):

$$
\int_{[0,1]^4} x_1 x_2^2\, e^{x_1 x_2} \sin(x_3)\cos(x_4)\, dx \approx 0.108975.   \qquad (12)
$$

Example 2 (s = 7):

$$
\int_{[0,1]^7} e^{\,1-\sum_{i=1}^{3}\sin\left(\frac{\pi}{2}x_i\right)} \cdot \arcsin\!\left(\sin(1) + \frac{\sum_{j=1}^{7} x_j}{200}\right) dx \approx 0.7515.   \qquad (13)
$$

Example 3 (s = 10):

$$
\int_{[0,1]^{10}} \frac{4 x_1 x_3^2\, e^{2 x_1 x_3}}{(1+x_2+x_4)^2}\; e^{x_5+\cdots+x_{10}}\, dx \approx 14.808435.   \qquad (14)
$$

Example 4 (s = 30):

$$
\int_{[0,1]^{30}} \frac{4 x_1 x_3^2\, e^{2 x_1 x_3}}{(1+x_2+x_4)^2}\; e^{x_5+\cdots+x_{20}}\; x_{21}\cdots x_{30}\, dx \approx 3.244.   \qquad (15)
$$

We also consider the 100-dimensional integral defined in the following way.

Example 5 (s = 100):

$$
I_{100} = \int_{[0,1]^{100}} \exp\!\left(\prod_{i=1}^{100} x_i\right) dx,   \qquad (16)
$$

whose reference value is calculated by expanding the exponential function in Taylor series and integrating the terms $(x_1\cdots x_{100})^n$, namely

$$
\int_{[0,1]^{100}} \exp\!\left(\prod_{i=1}^{100} x_i\right) dx = \sum_{n=0}^{\infty}\frac{1}{(n+1)^{100}\, n!} = {}_{100}F_{100}(1,\ldots,1;\,2,\ldots,2;\,1),
$$

where

$$
{}_{p}F_{q}(a_1,\ldots,a_p;\,b_1,\ldots,b_q;\,x) = \sum_{n=0}^{\infty}\frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{x^{n}}{n!}
$$

is the generalized hypergeometric function and $(c)_n = c(c+1)\cdots(c+n-1)$ is the Pochhammer symbol. We make a comparison between the optimized lattice sequence with an optimal generating vector (OPT), Fibonacci lattice sets (FIBO), Latin hypercube sampling (LHS) (Minasny and McBratney 2006, 2010) and the scrambled Sobol sequence


Table 1 Algorithm comparison of the REs for the 4-dimensional integral for different numbers of points

# of points   OPT       t, s    FIBO      t, s   LHS       t, s   SOBOL     t, s    CRUDE     t, s   ADAPT     t, s
1490          6.11e-4   0.002   1.01e-3   0.004  8.16e-4   0.005  3.78e-3   0.47    5.25e-3   0.002  2.08e-4   0.07
10671         2.13e-5   0.01    8.59e-5   0.02   6.11e-4   0.01   6.10e-4   1.59    1.83e-3   0.01   2.98e-4   1.09
20569         6.56e-6   0.02    3.89e-5   0.03   5.01e-5   0.02   1.97e-5   4.54    6.59e-4   0.02   2.44e-4   1.74
39648         9.14e-7   0.06    3.01e-5   0.07   4.18e-5   7.09   9.67e-6   8.26    1.04e-3   0.06   8.26e-5   4.58
147312        4.78e-7   0.15    3.71e-6   0.24   2.19e-5   0.28   1.40e-6   27.91   3.78e-3   0.15   7.03e-5   11.98

Table 2 Algorithm comparison of the REs for the 4-dimensional integral for a preliminarily given time

t, s   OPT       FIBO      LHS       SOBOL     CRUDE     ADAPT
1      5.66e-7   5.62e-6   1.54e-5   6.32e-4   1.48e-3   2.85e-4
5      3.12e-7   5.38e-7   9.18e-6   1.23e-5   6.62e-4   9.18e-5
10     5.14e-8   3.77e-7   6.51e-6   8.48e-6   2.52e-4   1.36e-5
20     3.18e-8   2.67e-8   2.31e-6   1.16e-6   1.58e-4   2.08e-5

Table 3 Algorithm comparison of the REs for the 7-dimensional integral for different numbers of points

# of points   OPT       t, s    FIBO      t, s   LHS       t, s   SOBOL     t, s    CRUDE     t, s   ADAPT     t, s
2000          6.39e-4   0.14    2.81e-3   0.23   5.45e-3   0.25   2.51e-3   1.42    6.39e-3   0.14   4.44e-4   1.62
7936          3.23e-4   0.64    1.38e-3   0.87   2.11e-3   0.91   1.16e-3   3.08    8.51e-3   0.64   8.04e-4   6.90
15808         1.23e-5   0.95    9.19e-4   1.73   8.31e-4   1.81   7.58e-4   5.89    2.58e-3   0.95   1.98e-4   11.26
62725         3.15e-6   2.54    2.78e-5   3.41   6.22e-4   3.5    3.11e-4   15.64   2.55e-3   2.54   2.38e-4   29.27
124946        1.12e-6   6.48    6.87e-5   6.90   4.34e-4   7.1    8.22e-5   31.41   2.22e-3   6.48   1.47e-4   76.46

(SOBOL) (Paskov 1994). For lower dimensions we have included the Adaptive approach (ADAPT) (Dimov and Georgieva, 2010; Dimov and Karaivanova, 1998; Dimov et al., 2003) and the simplest possible plain approach (CRUDE) (Dimov 2008). Each Table below contains information about the stochastic approach which is applied, the obtained relative errors (REs), the needed CPU-time in seconds and the number of points. Note that when the FIBO method is tested, the number of sampled points are always generalized Fibonacci numbers of the corresponding dimensionality. The computer working architecture is Core i7-4710MQ at 2.50 GHz and 8 GB of RAM. We performs 10 algorithmic runs using MATLAB on CPU Core i7-4710MQ for the algorithms to validate our assumptions of experimentation. As well as providing a consistent framework for statistical pattern recognition, the stochastic approach offers a number of practical advantages including a solution to the problem for higher dimensions. For 4, 7 and 10-dimensional integral we have included the Adaptive approach and the plain (Crude) Monte Carlo algorithm. If the computational time is fixed he advantage of Fibonacci lattice sets in


Table 4 Algorithm comparison of the REs for the 7-dimensional integral for a preliminarily given time

t, s   OPT       FIBO      LHS       SOBOL     CRUDE     ADAPT
0.1    7.38e-4   2.38e-3   6.65e-3   8.37e-3   2.38e-2   3.11e-2
1      1.17e-5   6.19e-4   3.05e-3   1.37e-3   8.87e-3   2.88e-3
5      2.32e-6   8.81e-5   4.89e-4   8.38e-4   5.16e-3   3.76e-3
10     9.11e-7   1.88e-5   2.16e-4   4.78e-4   1.28e-3   6.71e-4
20     7.43e-7   3.87e-6   8.56e-5   9.87e-5   2.03e-3   4.28e-4

terms of relative error in comparison with Adaptive Monte Carlo and Crude Monte Carlo is clearly seen (see Figs. 1, 2, 3). Adaptive approach, due to its high computational cost, is not applicable for the 30 and 100 dimensional cases. Numerical results show significant advantage for the optimized lattice sets algorithm based on an optimal generating vector in comparison with FIBO, LHS and SOBOL scramble sequence (1–2 orders). For the 4th dimensional integral the best approach is produced by the optimized method OPT—a relative error 4.78e − 7 for N = 147312—see Table 1 and for 20s the best approach is FIBO—2.67e − 8 in Table 2 with two orders better results than both SOBOL and LHS. For the 7th dimensional integral the best approach is produced by the optimized method OPT—a relative error 1.12e−6 for N = 124946—see Table 3 and for 20s the best approach is OPT—7.43e−7 in Table 4 with one order better REs than FIBO and two order better REs than both SOBOL and LHS. For the 10th dimensional integral the best approach is produced by the optimized method OPT for N = 3524578 the RE is 5.32e−8—see Table 5 and for 20s the best approach is again OPT—9.13e−9 in Table 6 with two order better REs than FIBO and 3–4 order better REs than both SOBOL and LHS. For the 30th dimensional integral the best approach is produced by the optimized method OPT for N = 1048576—the relative error is 8.81e−5—see Table 7 and for 20s the best approach is again OPT—2.33e−5 in Table 8 with one order better REs than both SOBOL and LHS and 3 order better REs than both FIBO, which shows that FIBO becomes inefficient for high dimensions. Finally, for the 100th dimensional integral the best approach is produced by the optimized method OPT for N = 220 the RE is 6.38e−5—see Table 9 and for 20s the best approach is OPT—8.86e−6 in Table 10 with 4 order better REs than FIBO and 3 orders better REs than SOBOLS and LHS. From all Tables we can conclude that the optimized lattice sequence OPT, used for the first time for the evaluation of this type of multidimensional integrals up to 100 dimensions, gives the best results compared to the other stochastic approaches with increasing the dimensionality of the multidimensional integral.


Table 5 Algorithm comparison of the REs for the 10-dimensional integral for different numbers of points

# of points   OPT       t, s    FIBO      t, s   LHS       t, s   SOBOL     t, s    CRUDE     t, s   ADAPT     t, s
1597          3.14e-4   0.002   4.39e-3   0.003  7.31e-3   0.01   1.46e-3   0.05    2.38e-2   0.002  2.43e-3   1.18
17711         6.21e-5   0.02    1.81e-3   0.04   4.45e-3   0.07   1.83e-4   0.21    1.61e-2   0.02   8.27e-4   1.07
121393        4.34e-6   0.15    1.20e-3   0.16   7.23e-4   0.21   3.12e-5   1.47    8.84e-3   0.15   4.42e-4   9.45
832040        4.11e-7   0.75    1.19e-5   0.70   3.11e-4   0.83   8.25e-6   14.41   3.74e-3   0.75   5.48e-5   77.21
3524578       5.32e-8   6.35    2.63e-6   6.45   8.57e-5   6.7    7.71e-7   139.1   5.12e-3   6.35   8.33e-6   256.37

Table 6 Algorithm comparison of the REs for the 10-dimensional integral for a preliminarily given time

t, s   OPT       FIBO      LHS       SOBOL     CRUDE     ADAPT
0.1    4.95e-6   9.19e-6   4.13e-3   4.19e-4   8.95e-3   2.61e-3
1      8.10e-7   5.63e-6   2.55e-4   1.21e-4   1.10e-3   2.90e-3
5      3.56e-8   2.15e-6   1.23e-4   7.21e-5   1.43e-3   1.50e-3
10     4.31e-8   1.79e-6   7.17e-5   3.51e-5   3.56e-3   5.94e-4
20     9.13e-9   8.61e-7   3.42e-5   7.09e-6   1.13e-3   8.82e-4

Table 7 Algorithm comparison of the REs for the 30-dimensional integral for different numbers of points

# of points   OPT       t, s   SOBOL     t, s    LHS       t, s   FIBO      t, s
1024          1.21e-2   0.02   5.78e-2   0.53    5.68e-2   0.03   8.81e-1   0.02
16384         4.11e-3   0.16   1.53e-2   5.69    8.60e-3   0.18   6.19e-1   0.14
131072        5.24e-4   1.34   1.35e-3   42.1    5.38e-3   1.2    2.78e-1   1.16
1048576       8.81e-5   9.02   6.78e-4   243.9   9.31e-4   8.9    9.86e-2   8.61

Table 8 Algorithm comparison of the REs for the 30-dimensional integral for a preliminarily given time

t, s   OPT       SOBOL     LHS       FIBO
1      3.48e-3   2.38e-2   7.21e-3   2.38e-1
5      4.23e-4   5.46e-3   5.16e-3   1.81e-1
10     8.91e-5   1.25e-3   8.21e-4   9.48e-2
20     2.33e-5   6.11e-4   4.35e-4   7.87e-2


Fig. 1 Comparison of the RE for the 4-dimensional integral with different stochastic methods

Fig. 2 Comparison of the RE for the 7-dimensional integral with different stochastic methods

Fig. 3 Comparison of the RE for the 10-dimensional integral with different stochastic methods


Table 9 Algorithm comparison of the REs for the 100-dimensional integral for different numbers of points

# of points   OPT       t, s   FIBO      t, s   LHS       t, s   SOBOL     t, s
2^10          5.18e-3   0.05   4.13e-1   0.06   5.18e-2   0.08   6.31e-2   18
2^12          3.18e-3   0.17   1.15e-1   0.18   3.22e-2   0.2    1.23e-2   34
2^16          1.44e-4   9.1    6.12e-2   9.2    8.32e-3   9.7    2.31e-3   170
2^20          6.38e-5   57.6   3.18e-2   58.7   4.51e-3   60     2.34e-4   861

Table 10 Algorithm comparison of the REs for the 100-dimensional integral for a preliminarily given time

t, s   OPT       FIBO      LHS       SOBOL
1      2.14e-3   7.18e-2   2.83e-2   9.31e-2
2      1.56e-3   6.02e-2   1.17e-2   8.66e-2
10     2.58e-4   4.12e-2   8.34e-3   6.94e-2
100    8.86e-6   1.13e-2   1.18e-3   3.88e-3

5 Conclusion

In this work we investigate advanced stochastic methods for solving a specific multidimensional problem related to neural networks. A comprehensive experimental study of an optimized lattice rule, Fibonacci lattice sets, the Sobol sequence, Latin hypercube sampling, an Adaptive approach and the plain Monte Carlo algorithm has been performed on several case test functions. The improved lattice rule based on an optimal generating vector is one of the best available algorithms for high dimensional integrals and one of the few practical methods, because deterministic algorithms need a huge amount of time for the evaluation of the multidimensional integral, as discussed in this paper. At the same time the new method can handle 100-dimensional problems in less than a minute on a laptop. This is important since it may be crucial for a more reliable interpretation of the results in Bayesian statistics, which is foundational in neural networks, artificial intelligence and machine learning.

Acknowledgements Venelin Todorov is supported by the Bulgarian National Science Fund under Projects KP-06-N52/5 “Efficient methods for modeling, optimization and decision making” and KP-06-N52/2 “Perspective Methods for Quality Prediction in the Next Generation Smart Informational Service Networks”.


References Bahvalov, N.: On the approximate computation of multiple integrals. Vestnik Moscow State Univ. 4, 3–18 (1959) Bendavid, J.: Efficient Monte Carlo Integration Using Boosted Decision Trees and Generative Deep Neural Networks (2017). arXiv:1707.00028 Boyle P., Lai Y., Tan K.: Using Lattice Rules to Value Low-Dimensional Derivative Contracts (2001) Dimov, I.: Monte Carlo Methods for Applied Scientists, p. 291. World Scientific, New Jersey, London, Singapore (2008) Dimov, I., Georgieva, R.: Monte Carlo algorithms for evaluating Sobol’ sensitivity indices. Math. Comput. Simul. 81(3), 506–514 (2010) Dimov, I., Karaivanova, A.: Error analysis of an adaptive Monte Carlo method for numerical integration. Math. Comput. Simul. 47, 201–213 (1998) Dimov, I., Karaivanova, A., Georgieva, R., Ivanovska, S.: Parallel importance separation and adaptive Monte Carlo algorithms for multiple integrals. Springer Lect. Notes Comput. Sci. 2542, 99–107 (2003) Hua, L.K., Wang, Y.: Applications of Number Theory to Numerical Analysis. Springer (1981) Kuo, F.Y., Nuyens, D.: Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients - a survey of analysis and implementation. Found. Comput. Math. 16(6), 1631–1696 (2016) Lin, S.: Algebraic Methods for Evaluating Integrals in Bayesian Statistics, Ph.D. dissertation, UC Berkeley (2011) Lin, S., Sturmfels, B., Xu, Z.: Marginal likelihood integrals for mixtures of independence models. J. Mach. Learn. Res. 10, 1611–1631 (2009) Minasny, B., McBratney, B.: A conditioned Latin hypercube method for sampling in the presence of ancillary information. J. Comput. Geosci. Arch. 32(9), 1378–1388 (2006) Minasny, B., McBratney, B.: Conditioned Latin Hypercube Sampling for Calibrating Soil Sensor Data to Soil Properties, Chapter: Proximal Soil Sensing, Progress in Soil Science, pp. 111–119 (2010) Paskov, S.H.: Computing high dimensional integrals with applications to finance, Technical report CUCS-023-94, Columbia University (1994) Pencheva, V., Georgiev, I., Asenov, A.: Evaluation of passenger waiting time in public transport by using the Monte Carlo method. In: AIP Conference Proceedings, vol. 2321. Issue 1. AIP Publishing LLC (2021) Sloan, I.H., Joe, S.: Lattice Methods for Multiple Integration. Oxford University Press (1994) Sloan, I.H., Kachoyan, P.J.: Lattice methods for multiple integration: Theory, error analysis and examples. SIAM J. Numer. Anal. 24, 116–128 (1987) Song, J., Zhao, S., Ermon, S., A-nice-mc: adversarial training for mcmc. In: Advances in Neural Information Processing Systems, pp. 5140–5150 (2017) Wang Y., Hickernell F., An Historical Overview of Lattice Point Sets (2002) Watanabe, S.: Algebraic analysis for nonidentifiable learning machines. NeuralComput. 13, 899– 933 (2001)

An Efficient Adaptive Monte Carlo Approach for Multidimensional Quantum Mechanics Venelin Todorov and Ivan Dimov

Abstract In this work we study an efficient adaptive approach for computing multidimensional problem in quantum mechanics. Richard Feynman’s problem for Wigner kernel (Feynman 1948; Wigner 1932) evaluation is well known multidimensional problem in quantum mechanics and nowadays there is renewing interest in finding solutions to this problem. We propose some improvements over the standard adaptive approach for the multidimensional integrals representing the Wigner kernel under consideration. The developed full Monte Carlo adaptive approach represents an important advantage in the context of quantum many-body simulations and can be important in the field of quantum chemistry. Keywords Multidimensional quantum mechanics · Monte Carlo methods · Wigner kernel

1 Introduction

The Monte Carlo approach to numerical problems has proven to be remarkably efficient for very large computational problems, since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods (Cerpeley 2010; Ermakov 1985; Hammond et al. 1994; Putteneers and Brosens 2012; Shao et al. 2011) are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we develop an adaptive Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles


(Garcia-Garcia et al. 1998; Jacoboni et al. 2001; Querlioz and Dollfus 2010; Shifren and Ferry 2002a, b; Szabo and Ostlund 1982). In particular we introduce a stochastic technique, based on the adaptive strategy, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem in which complexity is known to grow exponentially with the dimensions of the problem). Adaptive technique (Berntsen et al. 1991; Dimov 2003; Dimov and Georgieva 2010; Pencheva et al. 2021) is well known method for computing multidimensional integrals, and most efficient when the integrand function has peculiarities and peaks. Adaptive Monte Carlo methods proposed by Lautrup (Davis and Rabinowitz 1984) use a “sequence of refinements” of the original area and in this case the evaluations are concentrated in subdomains with computational difficulties. There are numerous adaptive strategies depending on the techniques of adaptation (Dimov 2008). The developed adaptive method is based on a recursive partitioning of the integration area using a posteriori error information for the current partition.

2 The Wigner Monte Carlo Method

The theory is based on a novel generalized physical interpretation of the mathematical Wigner Monte Carlo method, which is able to simulate the time-dependent single- and many-body Wigner equation, following the ideas of J.M. Sellier (Sellier 2015, 2016; Sellier and Dimov 2015; Sellier et al. 2015). We give here the three postulates which completely define the new mathematical formulation of quantum mechanics, taken from Baraniuk and Jones (1993), Sellier (2015), Sellier and Dimov (2016), Sellier and Dimov (2014), Sellier et al. (2015), Shao and Sellier (2015), Xiong et al. (2016).

Postulate I. Physical systems can be described by means of (virtual) Newtonian particles, i.e. provided with a position x and a momentum p simultaneously, which carry a sign which can be positive or negative.

Postulate II. A signed particle, evolving in a potential V = V(x), behaves as a field-less classical point-particle which, during the time interval dt, creates a new pair of signed particles with a probability γ(x(t))dt, where

$$
\gamma(x) = \int_{-\infty}^{+\infty} dp'\; V_W^{+}(x; p') \equiv \lim_{\Delta p' \to 0^{+}} \sum_{M=-\infty}^{+\infty} V_W^{+}(x; M\Delta p'),
$$

$\hbar = h/2\pi$ is the reduced Planck (or Dirac) constant, $M = (M_1, M_2, \ldots, M_d)$ is a set of d integers and $V_W^{+}(x; p)$ is the positive part of the quantity

$$
V_W(x; p) = \frac{i}{\pi^{d}\hbar^{d+1}} \int_{-\infty}^{+\infty} dx'\; e^{-\frac{2i}{\hbar}x'\cdot p}\,\bigl[V(x + x') - V(x - x')\bigr],   \qquad (1)
$$


known as the Wigner kernel (in a d-dimensional space) (Wigner 1932). If, at the moment of creation, the parent particle has sign s, position x and momentum p, the new particles are both located in x, have signs +s and −s, and momenta p + p' and p − p' respectively, with p' chosen randomly according to the (normalized) probability $V_W^{+}(x; p')/\gamma(x)$.

Postulate III. Two particles with opposite sign and same phase-space coordinates (x, p) annihilate.

By introducing a momentum step $\Delta p = \hbar\pi/L_C$ (with $L_C$ a given length), it is possible to adapt Postulate II to the case of a finite and semi-discrete phase-space; the many-body signed particle with phase-space coordinates

$$
(x_1(t), \ldots, x_n(t);\; M_1(t)\Delta p, \ldots, M_n(t)\Delta p)   \qquad (2)
$$

and evolving in a potential $V = V(x_1, \ldots, x_n)$, behaves as a field-less classical point-particle which, during the time interval dt, creates a new pair of signed particles with a probability $\gamma(x_1(t), \ldots, x_n(t))\,dt$, where

$$
\gamma(x_1(t), \ldots, x_n(t)) = \sum_{M_1=-M_{max}}^{+M_{max}}\cdots\sum_{M_n=-M_{max}}^{+M_{max}} V_W^{+}(x_1, \ldots, x_n;\; M_1\Delta p_1, \ldots, M_n\Delta p_n),   \qquad (3)
$$

$M_{max}$ is an integer representing the finiteness of the phase-space, and

$$
V_W^{+}(x_1, \ldots, x_n;\; p_1, \ldots, p_n)   \qquad (4)
$$

is the positive part of the many-body Wigner kernel (1). If, at the moment of creation, the parent particle has sign s, position $x = (x_1, \ldots, x_n)$ and momentum $p = (p_1, \ldots, p_n)$, the new particles are both located in x, have signs +s and −s, and momenta p + p' and p − p' respectively, with p' chosen randomly according to the (normalized) probability

$$
\frac{V_W^{+}(x_1, \ldots, x_n;\; p_1, \ldots, p_n)}{\gamma(x_1, \ldots, x_n)}.   \qquad (5)
$$

The transition from the quantum to the classical regime for signed particles in the presence of a given external potential V = V(x) is obtained in the limit ħ → 0; for simplicity we restrict this investigation to the 1D case. By introducing the change of variables $x' = \frac{p}{2m}t'$, the Wigner kernel (1) reads

$$
V_W(x; p) = \frac{1}{\pi\hbar^{2}}\int_{-\infty}^{+\infty}\frac{p}{m}\,dt'\; e^{-\frac{i p^{2} t'}{\hbar m}}\left[V\!\left(x + \frac{p}{2m}t'\right) - V\!\left(x - \frac{p}{2m}t'\right)\right].   \qquad (6)
$$

Now, the fact that the Wigner kernel is a real function (Wigner 1932) implies that the kernel can be written as


$$
V_W(x; p) = \frac{2p}{\pi\hbar^{2}m}\sum_{n=0,\,even}^{\infty}\int_{-\infty}^{+\infty} dt'\,\cos\!\left(\frac{p^{2}t'}{\hbar m}\right)\frac{V^{(n)}(x)}{n!}\left(\frac{p t'}{2m}\right)^{n} =
$$
$$
= \frac{2p}{\pi\hbar^{2}m}\int_{-\infty}^{+\infty} dt'\,\cos\!\left(\frac{p^{2}t'}{\hbar m}\right)V(x) + \frac{2p}{\pi\hbar^{2}m}\int_{-\infty}^{+\infty} dt'\,\cos\!\left(\frac{p^{2}t'}{\hbar m}\right)\frac{1}{2}\frac{\partial^{2}V}{\partial x^{2}}\left(\frac{p t'}{2m}\right)^{2} + O(\hbar^{4}).
$$

Now one obtains

$$
V_W(x; p) \approx \frac{2V(x)}{\pi p \hbar}\left[\sin\!\left(\frac{p^{2}t'}{\hbar m}\right)\right]_{t'=-\infty}^{+\infty} = \frac{2V(x)}{\pi p \hbar}\,2\lim_{t'\to+\infty}\sin\!\left(\frac{p^{2}t'}{\hbar m}\right) = \frac{4V(x)}{\pi p \hbar}\sin\!\left(\frac{p^{2}T}{\hbar m}\right).
$$

The function γ(x) can be expressed as

$$
\gamma(x) = \lim_{\Delta p \to 0^{+}}\sum_{M=-\infty}^{+\infty}\left[\frac{4V(x)}{\pi\hbar (M\Delta p)}\sin\!\left(\frac{(M\Delta p)^{2}T}{\hbar m}\right)\right]^{+},   \qquad (7)
$$

where $[\cdot]^{+}$ is the positive part of the quantity in brackets. If we now suppose, for simplicity (the generalization being trivial), that V(x) > 0 for all x, this brings the condition

$$
0 \le M \le \frac{\hbar m\pi}{\Delta p^{2}\, T},
$$

which for ħ → 0 simply becomes 0 ≤ M ≤ 0, therefore M = 0. One concludes that in the classical limit the function γ is identically zero, γ(x) = 0.

A 1D quantum system in the presence of a potential barrier is mathematically expressed as $V(x) = V_B$ for all $x \in [x_A, x_B]$ and V(x) = 0 otherwise, where $V_B$ is the energy of the barrier and $[x_A, x_B]$ represents the spatial interval where the barrier is


placed. In particular, one starts by rewriting the kernel (1) as the contribution of two terms, i.e.

$$
V_W(x; p) = \frac{i}{\pi\hbar^{2}}\left[\int_{-\infty}^{+\infty} dx'\, e^{-\frac{2i}{\hbar}x'p}\, V(x + x') - \int_{-\infty}^{+\infty} dx'\, e^{-\frac{2i}{\hbar}x'p}\, V(x - x')\right],   \qquad (8)
$$

which, using the fact that V(x) = 0 for all $x \notin [x_A, x_B]$, reduces to the sum of two simple integrals

$$
V_W(x; p) = \frac{V_B}{\pi\hbar^{2}}\left[\int_{x_A-x}^{x_B-x} dx'\,\sin\!\left(\frac{2x'p}{\hbar}\right) - \int_{-(x_B-x)}^{-(x_A-x)} dx'\,\sin\!\left(\frac{2x'p}{\hbar}\right)\right].   \qquad (9)
$$

Finally, by performing the integrations, one obtains the Wigner kernel in the case of a potential barrier:

$$
V_W(x; p) = \frac{V_B}{\pi\hbar^{2}}\,\frac{\hbar}{p}\left[\cos\!\left(\frac{2(x_A-x)p}{\hbar}\right) - \cos\!\left(\frac{2(x_B-x)p}{\hbar}\right)\right].   \qquad (10)
$$
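As a quick sanity check of (9) and (10), the following sketch compares a direct quadrature of the two sine integrals with the closed form; the values of ħ, V_B, x_A and x_B are arbitrary illustrative choices, and this is not the simulation code used for the experiments below.

```python
import numpy as np

hbar, V_B, x_A, x_B = 1.0, 0.3, -0.5, 0.5   # illustrative values only

def kernel_quadrature(x, p, n=20001):
    """Numerical evaluation of the two integrals in (9) with the trapezoidal rule."""
    t1 = np.linspace(x_A - x, x_B - x, n)
    t2 = np.linspace(-(x_B - x), -(x_A - x), n)
    i1 = np.trapz(np.sin(2 * t1 * p / hbar), t1)
    i2 = np.trapz(np.sin(2 * t2 * p / hbar), t2)
    return V_B / (np.pi * hbar**2) * (i1 - i2)

def kernel_closed_form(x, p):
    """Closed-form barrier kernel (10) obtained after integrating (9)."""
    return (V_B / (np.pi * hbar**2)) * (hbar / p) * (
        np.cos(2 * (x_A - x) * p / hbar) - np.cos(2 * (x_B - x) * p / hbar))

x, p = 0.1, 0.7
print(kernel_quadrature(x, p), kernel_closed_form(x, p))
```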

Algorithm 1.
Input data: total number of points N1, constant M > 2 (the initial number of subregions taken), constant ε (maximal admissible value of the variance in each subregion), constant δ (maximal admissible number of subregions), d — dimensionality of the initial region/domain, f — the function of interest.
1.1. Calculate the number of points to be taken in each subregion, N = N1/δ.
2. For j = 1, ..., M^d:
2.1. Calculate the approximation of I_j and the variance D_j in subdomain Ω_j based on N independent realizations of the random variable θ_N;
2.2. If (D_j ≥ ε) then
2.2.1. Choose the axis direction along which the partition will be performed,
2.2.2. Divide the current domain into two (G_{j1}, G_{j2}) along the chosen direction,
2.2.3. If the length of the obtained subinterval is less than δ then go to step 2.2.1, else set j = j1 (G_{j1} is the current domain) and go to step 2.1;
2.3. Else if (D_j < ε) but an approximation of I_{G_{j2}} has not been calculated yet, then set j = j2 (G_{j2} is the current domain along the corresponding direction) and go to step 2.1;
2.4. Else if (D_j < ε) but there are subdomains along the other axis directions, then go to step 2.1;
2.5. Else accumulate the result in the approximation I_N of I.
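A compact Python sketch of the adaptive refinement idea behind Algorithm 1 follows. It is a simplified recursive variant (bisect a subdomain along its longest axis whenever its sample variance exceeds ε), not a line-by-line implementation of the algorithm above, and all thresholds and the test integrand are illustrative defaults.

```python
import numpy as np

def adaptive_mc(f, bounds, n=2000, eps=1e-4, max_depth=8, seed=None):
    """Simplified adaptive Monte Carlo integration over a box.

    bounds : array of shape (d, 2) with lower/upper limits of the box
    n : points per subdomain, eps : variance threshold, max_depth : max refinements
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    vol = np.prod(hi - lo)
    x = lo + rng.random((n, len(lo))) * (hi - lo)
    vals = f(x)
    var = vals.var(ddof=1) / n * vol**2          # variance of this subdomain's estimate
    if var < eps or max_depth == 0:
        return vol * vals.mean()                 # accumulate the local estimate
    # otherwise bisect along the longest axis and recurse into both halves
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    left, right = bounds.copy(), bounds.copy()
    left[axis, 1] = mid
    right[axis, 0] = mid
    return (adaptive_mc(f, left, n, eps, max_depth - 1, rng) +
            adaptive_mc(f, right, n, eps, max_depth - 1, rng))

# Example: a peaked integrand on [0,1]^3; refinement concentrates points near the peak,
# mirroring the variance-driven subdivision of Algorithm 1.
f = lambda x: np.exp(-50 * np.sum((x - 0.8) ** 2, axis=1))
print(adaptive_mc(f, [[0, 1]] * 3, seed=1))
```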


Computational Complexity. For the simple two-dimensional case, at the first step of the optimized adaptive approach we have M = 4 subdomains and

$$
\hat{\theta}_N = \frac{1}{N_1}\sum_{i=1}^{N_1}\theta_i + \frac{1}{N_2}\sum_{i=1}^{N_2}\theta_i + \frac{1}{N_3}\sum_{i=1}^{N_3}\theta_i + \frac{1}{N_4}\sum_{i=1}^{N_4}\theta_i,
$$

where $N_1 + N_2 + N_3 + N_4 = N$, so evaluating an approximation of $I_{G_j}$ requires the same number of operations as the Crude Monte Carlo method, whose computational complexity is linear (Dimov 2008). We then choose only O(1) subdomains in which the variance is greater than the parameter ε, and this choice is independent of N. When we divide the domain at every step, adaptiveness is applied not in all subdomains but only in O(1) of them. At the beginning we have to choose $N/k_0$ random points. After that, when dividing the domain into $2^d$ subdomains, we choose only O(1) subdomains, and this choice is again independent of N; in these subdomains we choose $N/k_1$ points. On the j-th step of the Adaptive approach we choose O(1) subdomains with $N/k_j$ points, where $\sum_{j=0}^{i} 1/k_j = 1$. Therefore for the computational complexity we obtain

$$
\frac{N}{k_0} + O(1)\frac{N}{k_1} + \cdots + O(1)\frac{N}{k_i} = N\, O(1)\left(\sum_{j=0}^{i}\frac{1}{k_j}\right) = N\, O(1) = O(N).
$$

In this way we conclude that the computational complexity of the optimized Adaptive algorithm is linear.

3 Numerical Examples

The infinite domain of integration can be mapped into the s-dimensional unit hypercube using the transformation $y = \frac{1}{2} + \frac{1}{\pi}\arctan(x)$, applied componentwise, which maps (−∞, ∞) to (0, 1). We want to compute (1) in the 3-, 6-, 9- and, for the first time, in the 12-dimensional case,

$$
V_W(x, p) = \int e^{-\frac{2i}{\hbar}\sum_{k=1}^{n} x_k' p_k} \times
$$

[V (x1 + x1, , . . . xn + xn, ) − V (x1 − x1, , . . . xn − xn, )]d x1 . . . d xn , where the Wigner potential is V = V (x) = {x1 . . . xn , x , , x, p, x + x , , x − x , ∈ [0, 1]}. It is well known that Wigner kernel has real values (Wigner 1932). We also be interested in one-body problem (1D), two-body problem (2D) and three-body problem (3D). We first simulate a quantum system consisting of one electron moving in the presence of a potential with three different shapes, i.e. a single barrier, a double barrier, and a Gaussian barrier. The Wigner kernel for these experiments is computed numerically by means of both a deterministic trapezoidal rule and adaptive Monte Carlo approach. On Fig. 1 is given 1D problem in the presence of a single barrier: (top) Wigner kernel VW = VW(x; p) computed by deterministic approach, (middle) Wigner kernel computed by stochastic approach, and (bottom) difference between the stochastic and deterministic kernel. On Fig. 2 is given 1D problem in the presence of a double barrier: (top) Wigner kernel computed by midrectangular approach, (middle) Wigner kernel computed by adaptive approach, and (bottom) difference between the adaptive and midrectangular kernel. On Fig. 3 is given 1D problem in the presence of a Gaussian barrier: (top) Wigner kernel computed by midrectangular approach, (middle) Wigner kernel computed by adaptive approach, and (bottom) difference between the stochastic and deterministic kernel. On Fig. 4 is given 1D problem in the presence of a single barrier: evolution in time of the probability density in the real space, corresponding to 20 fs (top), 30 fs (middle) and 40 fs (bottom). On Fig. 5 is given 2D problem in the presence of a single barrier: evolution in time of the probability density in the real space, corresponding to 20 fs (top), 30 fs (middle) and 40 fs (bottom). On Fig. 6 is given 3D problem in the presence of a single barrier: evolution in time of the probability density in the real space, corresponding to 20 fs (top), 30 fs (middle) and 40 fs (bottom). On Fig. 7 is given 4D problem in the presence of a single barrier: evolution in time of the probability density in the real space, corresponding to 20 fs (top), 30 fs (middle) and 40 fs (bottom). In Table 1 is given the relative error obtained the basic Adaptive approach and other Monte Carlo methods—the plain Monte Carlo method (Crude) (Dimov 2008), the standard lattice method based on Fibonacci generator (Fibo) (Wang and Hickernell 2002), and the quasi-Monte Carlo method uses the well-known Sobol sequence (Sobol) Sobol (1989). The advantage of the Adaptive algorithm versus

Fig. 1 1D comparison—single barrier
Fig. 2 1D comparison—double barrier
Fig. 3 1D comparison—Gaussian barrier
Fig. 4 1D problem—single barrier
Fig. 5 2D problem—single barrier

other stochastic approaches can be clearly seen: the difference in relative error is about 3 orders of magnitude, which is an essential improvement. In Table 2 the relative errors obtained with the standard Adaptive approach (Adapt), the developed optimized Adaptive approach (OptAdapt) and the deterministic method based on the mid-rectangular rule are given; here, too, the advantage of the optimized Adaptive approach is clearly seen. The numerical results, including relative errors and computational times for the algorithms under consideration, are presented below and the efficiency of the algorithms is discussed. A numerical comparison for a given number of samples


Fig. 6 3D problem—single barrier

between the adaptive approach, the Sobol sequence and the Fibonacci based lattice sequences and the new optimized Adaptive approach for different dimensions (3–18) has been given in Tables 3, 4, 5, 6, 7 and 8. From the all experiments it can be clearly seen that the optimized adaptive approach gives relative errors with at least 1 or 2 orders better than those produced by the basic adaptive approach, because of the increased number of subregions taken in every subdomain M. The optimized Adaptive MC approach outperforms the other two approaches Fibo and Sobol QMC by at least 3 orders even for 18 dimensional case, see Table 8. We should emphasize here that the efficiency of the optimized adaptive MC algorithm under consideration is high when computational peculiarities of the integrand occur only in comparatively small subregion of the initial integration domain as it is in the case of the Wigner kernel.


Fig. 7 3D problem—single barrier

Table 1 Relative errors for the 3-, 6- and 9-dimensional integrals
s    Time (s)   Crude      Adapt      Fibo       Sobol
3    0.1        6.99e-02   8.73e-04   8.12e-03   1.01e-02
3    1          9.56e-03   4.05e-05   5.42e-03   7.27e-03
3    10         2.41e-03   9.12e-06   2.11e-03   7.83e-04
3    100        1.53e-03   8.18e-07   9.50e-04   2.18e-04
6    0.1        6.65e-02   6.81e-04   8.11e-03   2.13e-02
6    1          4.54e-02   9.09e-05   9.25e-04   3.31e-03
6    10         8.65e-03   8.13e-06   5.11e-04   9.34e-04
6    100        6.15e-03   9.08e-07   1.05e-04   1.27e-04
9    0.1        7.23e-02   1.73e-03   1.35e-03   5.42e-02
9    1          5.61e-02   8.05e-05   8.72e-04   5.59e-03
9    10         9.18e-03   6.32e-06   6.51e-04   5.84e-03
9    100        7.44e-03   7.58e-07   3.70e-04   6.39e-04

Table 2 Relative error of the optimized Adaptive approach, the Adaptive approach and the deterministic midrectangular method
s    N              determ.    t (s)      OptAdapt   t (s)     Adapt      t (s)
3    32^2 x 50      8.51e-03   0.2        1.47e-03   0.1       2.71e-03   0.1
3    32^2 x 100     8.21e-03   0.5        1.14e-04   0.21      3.42e-04   0.2
3    64^2 x 50      5.76e-03   1          5.12e-0    0.6       7.52e-05   0.55
3    64^2 x 100     4.89e-03   1.9        7.11e-06   1.4       1.21e-05   1.3
6    8^4 x 50^2     1.16e-02   41.2       8.64e-05   19.5      9.09e-04   18.1
6    8^4 x 100^2    9.75e-03   160.6      5.21e-06   59.4      1.52e-05   57.9
6    16^4 x 50^2    7.84e-03   635.2      3.21e-05   321       4.37e-04   311.5
6    16^4 x 100^2   2.12e-03   2469.1     2.13e-05   1001.6    3.80e-04   987.1
9    6^6 x 16^3     1.75e-03   835.5      5.11e-05   345       7.62e-05   330.5
9    6^6 x 32^3     1.35e-03   5544.1     1.41e-05   2133.6    2.73e-05   2225.1
9    6^6 x 40^3     1.12e-03   10684.4    1.67e-06   4531.5    8.12e-06   4491.5

Table 3 Relative error for the 3-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   5.36e-03   3.21e-04   3.72e-02   1.07e-02
10^4   4.84e-04   4.02e-05   7.06e-03   8.77e-03
10^5   2.51e-05   3.33e-06   3.40e-03   8.57e-04
10^6   1.76e-05   1.56e-07   1.01e-03   6.73e-04
10^7   6.26e-06   5.67e-08   1.80e-04   5.98e-05

Table 4 Relative error for the 6-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   6.72e-03   1.07e-04   7.82e-03   2.42e-02
10^4   9.10e-04   2.11e-05   5.01e-03   5.02e-03
10^5   5.26e-05   3.02e-06   6.88e-03   4.60e-04
10^6   2.70e-06   3.21e-07   7.68e-04   3.59e-04
10^7   1.03e-06   5.13e-08   4.12e-04   8.11e-05

Table 5 Relative error for the 9-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   4.92e-02   6.11e-04   2.03e-02   5.42e-02
10^4   9.09e-04   1.12e-05   2.02e-03   6.02e-03
10^5   3.32e-05   9.88e-07   9.16e-04   3.57e-03
10^6   6.46e-06   2.01e-07   7.13e-04   8.02e-04
10^7   1.21e-06   5.84e-08   4.84e-04   5.19e-04

Table 6 Relative error for the 12-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   3.91e-03   1.11e-04   1.33e-02   2.85e-02
10^4   5.04e-04   9.01e-06   1.34e-03   4.04e-03
10^5   2.76e-04   3.14e-06   5.51e-04   1.77e-03
10^6   4.14e-05   1.12e-07   4.43e-04   4.07e-04
10^7   2.31e-06   2.83e-08   2.57e-04   2.7e-04

Table 7 Relative error for the 15-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   6.21e-03   6.22e-04   8.76e-04   2.31e-02
10^4   4.45e-04   4.51e-05   5.56e-04   5.45e-03
10^5   5.43e-05   3.56e-06   3.34e-04   4.11e-03
10^6   1.23e-05   4.16e-07   1.34e-04   6.45e-04

Table 8 Relative error for the 18-dimensional integral
N      Adapt      OptAdapt   Fibo       Sobol
10^3   8.32e-03   1.73e-03   1.42e-03   5.32e-02
10^4   1.05e-03   8.05e-05   7.33e-04   6.31e-03
10^5   2.42e-03   6.32e-06   7.42e-04   4.73e-03
10^6   5.45e-05   7.58e-07   2.71e-04   5.54e-04

4 Conclusions

In this work, we presented a full Monte Carlo approach to the simulation of single- and many-body quantum systems in the context of the signed particle formulation of quantum mechanics. In particular, we introduced a stochastic method, based on an adaptive approach, for computing the Wigner kernel, and this method is not affected by the curse of dimensionality; indeed, we have shown that its complexity grows linearly with the typical dimensions of a problem. This work shows that the improved adaptive Monte Carlo algorithm under consideration gives the most accurate results among the stochastic approaches for computing the Wigner kernel, and that it has lower computational complexity than the existing deterministic approaches. This makes the developed optimized stochastic approach of great importance for high-dimensional problems in quantum mechanics. To summarize, the developed optimized adaptive MC algorithm is a new successful solution (in terms of robustness and reliability) to Richard Feynman's problem of Wigner kernel evaluation.


Acknowledgements Venelin Todorov is supported by the Bulgarian National Science Fund under Project KP-06-M32/2 - 17.12.2019 “Advanced Stochastic and Deterministic Approaches for LargeScale Problems of Computational Mathematics” and Project KP-06-N52/2 “Perspective Methods for Quality Prediction in the Next Generation Smart Informational Service Networks”. The work is also supported by the Bulgarian National Science Fund KP-06-N52/5 “Efficient methods for modeling, optimization and decision making” and by the Bulgarian National Science Fund under the Bilateral Project KP-06-Russia/17 “New Highly Efficient Stochastic Simulation Methods and Applications”.

References

Baraniuk, R.G., Jones, D.L.: A signal-dependent time-frequency representation: optimal kernel design. IEEE Trans. Signal Process. 41(4) (1993)
Berntsen, J., Espelid, T.O., Genz, A.: An adaptive algorithm for the approximate calculation of multiple integrals. ACM Trans. Math. Softw. 17, 437–451 (1991)
Ceperley, D.M.: An overview of quantum Monte Carlo methods, theoretical and computational methods in mineral physics. Rev. Mineral. Geochem. 71, 129–135 (2010)
Davis, P.J., Rabinowitz, P.: Methods of Numerical Integration, 2nd edn. Academic, London (1984)
Dimov, I.: Monte Carlo Methods for Applied Scientists, 291 p. World Scientific, New Jersey, London, Singapore (2008). ISBN-10 981-02-2329-3
Dimov, I., Karaivanova, A., Georgieva, R., Ivanovska, S.: Parallel importance separation and adaptive Monte Carlo algorithms for multiple integrals. Springer Lect. Notes Comput. Sci. 2542, 99–107 (2003)
Dimov, I., Georgieva, R.: Monte Carlo algorithms for evaluating Sobol' sensitivity indices. Math. Comput. Simul. 81(3), 506–514 (2010)
Ermakov, S.M.: Monte Carlo Methods and Mixed Problems. Nauka, Moscow (1985)
Feynman, R.P.: Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys. 20 (1948)
Garcia-Garcia, J., Martin, F., Oriols, X., Sune, J.: Quantum Monte Carlo simulation of resonant tunneling diodes based on the Wigner distribution function formalism. Appl. Phys. Lett. 73, 3539 (1998)
Jacoboni, C., Bertoni, A., Bordone, P., Brunetti, R.: Wigner-function formulation for quantum transport in semiconductors: theory and Monte Carlo approach. Math. Comput. Simul. 55, 67–78 (2001)
Hammond, B.J., Lester, W.A., Reynolds, P.J.: Monte Carlo Methods in Ab-initio Quantum Chemistry. World Scientific, Singapore (1994). ISBN 981-02-0321-7
Pencheva, V., Georgiev, I., Asenov, A.: Evaluation of passenger waiting time in public transport by using the Monte Carlo method. In: AIP Conference Proceedings, vol. 2321, issue 1. AIP Publishing LLC (2021)
Putteneers, K., Brosens, F.: Monte Carlo implementation of density functional theory. Phys. Rev. B 86, 085115 (2012)
Querlioz, D., Dollfus, P.: The Wigner Monte Carlo Method for Nanoelectronic Devices: A Particle Description of Quantum Transport and Decoherence. ISTE-Wiley (2010)
Sellier, J.M.: A signed particle formulation of non-relativistic quantum mechanics. J. Comput. Phys. 297, 254–265 (2015)
Sellier, J.M.: Nano-archimedes (2016). www.nano-archimedes.com. Accessed 13 May 2016
Sellier, J.M., Dimov, I.: On a full Monte Carlo approach to quantum mechanics. Physica A 463, 45–62 (2016)
Sellier, J.M., Dimov, I.: The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations. J. Comput. Phys. 273, 589–597 (2014)


Sellier, J.M., Dimov, I.: A sensitivity study of the Wigner Monte Carlo method. J. Comput. Appl. Math. 277, 87–93 (2015)
Sellier, J.M., Nedjalkov, M., Dimov, I.: An introduction to applied quantum mechanics in the Wigner Monte Carlo formalism. Phys. Rep. 577, 1–34 (2015)
Shao, S., Lu, T., Cai, W.: Adaptive conservative cell average spectral element methods for transient Wigner equation in quantum transport. Commun. Comput. Phys. 9, 711–739 (2011)
Shao, S., Sellier, J.M.: Comparison of deterministic and stochastic methods for time-dependent Wigner simulations. J. Comput. Phys. 300, 167–185 (2015)
Shifren, L., Ferry, D.K.: A Wigner function based ensemble Monte Carlo approach for accurate incorporation of quantum effects in device simulation. J. Comput. Electron. 1, 55–58 (2002)
Shifren, L., Ferry, D.K.: Wigner function quantum Monte Carlo. Phys. B 314, 72–75 (2002)
Sobol, I.M.: Quasi-Monte Carlo methods. In: Sendov, Bl., Dimov, I.T. (eds.) International Youth Workshop on Monte Carlo Methods and Parallel Algorithms, pp. 75–81. World Scientific, Singapore (1989)
Szabo, A., Ostlund, N.S.: Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory. Dover Publications Inc., New York (1982)
Xiong, Y., Chen, Z., Shao, S.: An advective-spectral-mixed method for time-dependent many-body Wigner simulations. SIAM J. Sci. Comput., to appear (2016). arXiv:1602.08853
Wang, Y., Hickernell, F.J.: An historical overview of lattice point sets. In: Fang, K.T., Niederreiter, H., Hickernell, F.J. (eds.) Monte Carlo and Quasi-Monte Carlo Methods 2000, pp. 158–167. Springer, Berlin, Heidelberg (2002)
Wigner, E.: On the quantum correction for thermodynamic equilibrium. Phys. Rev. 40, 749 (1932)

An Overview of Lattice and Adaptive Approaches for Multidimensional Integrals Venelin Todorov

Abstract This paper presents an overview and comparison of two different stochastic approaches aiming to improve the convergence of the standard plain Monte Carlo method for computing multidimensional integrals of various dimensions. The original contribution is an extended overview of the lattice and adaptive methods, together with an experimental comparison carried out for the first time on the multidimensional problems under consideration. Some of the integrals have applications in environmental safety and control theory. Keywords Multidimensional integrals · Lattice method · Monte Carlo method

1 Introduction

Monte Carlo methods (Dimov 2008; Kostadinova et al. 2021; Kuo and Nuyens 2016; Lai and Spanier 1998; Mikhov et al. 2022, 2020; Pencheva et al. 2021) are among the most commonly used numerical methods. Their advantages become more pronounced as the dimensionality increases. For this reason, they are a major tool for numerically solving classes of problems in such important areas as particle physics, engineering chemistry, molecular dynamics, and financial mathematics. A major scientific challenge in the development of modern Monte Carlo methods is their relatively slow rate of convergence, which in many cases is asymptotically O(N^{-1/2}), where N is the sample size. There are two approaches to improve the convergence: reducing the variance of the estimated value and reducing the discrepancy of the sequence used. The adaptive strategy and lattice rules are two different ways to improve the convergence, and they have never been compared on this type of multidimensional integrals before.
V. Todorov (B) Department of Information Modeling, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. Georgi Bonchev Str., Block 8, 1113 Sofia, Bulgaria e-mail: [email protected]; [email protected]
Department of Parallel Algorithms, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Block 25 A, 1113 Sofia, Bulgaria
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_19


The adaptive strategy (Berntsen et al. 1991; Dimov and Georgieva 2010; Dimov et al. 2003; Karaivanova 2012) is a well-known method for the evaluation of multidimensional integrals, especially when the integrand function has peculiarities and peaks. It is well established as an efficient and reliable tool for multidimensional integration of integrand functions with computational peculiarities such as peaks (Dimov 2008). Adaptive Monte Carlo methods proposed by Lautrup (Davis and Rabinowitz 1984) use a "sequence of refinements" of the original area, selected so that the computations are concentrated in subdomains with computational difficulties. There are various adaptive strategies depending on the technique of adaptation (Dimov and Georgieva 2010). Our adaptive algorithm (a simple adaptive Monte Carlo algorithm) does not use any a priori information about the smoothness of the integrand, but it uses a posteriori information about the variance obtained during the calculations. The main idea is a concentration of random points in the subregions where the variance is large (in terms of a preliminarily given accuracy), i.e. the approach is based on a recursive partitioning of the integration area using a posteriori error information for the current partition. Lattice rules are based on the use of deterministic rather than random sequences. They are a special type of so-called low discrepancy sequences. It is known that as long as the integrand is sufficiently regular, lattice rules generally outperform not only basic Monte Carlo, but also other types of low discrepancy sequences. Up to now, there has been no full comparison between the adaptive and lattice methods, but only with the Monte Carlo algorithm based on Sobol sequences (Antonov and Saleev 1979; Bratley and Fox 1988; Dimov et al. 2010; Dimov et al. 2013; Paskov 1994). The monographs of Sloan and Kachoyan (1987), Niederreiter (1978, 2002), Hua and Wang (1981) and Sloan and Joe (1994) provide comprehensive expositions of the theory of integration lattices. The paper is organized as follows. The adaptive approach is described in Sect. 2. Some basic notations and theorems for lattice rules are given in Sect. 3. The numerical study and discussion are given in Sect. 4. The conclusions are given in Sect. 5.

2 Adaptive Approach

Let $p_j$ and $I_{\Omega_j}$ be the following expressions:
$$p_j = \int_{\Omega_j} p(x)\,dx \quad \text{and} \quad I_{\Omega_j} = \int_{\Omega_j} f(x)\,p(x)\,dx.$$
Consider now a random point $\xi^{(j)} \in \Omega_j$ with density function $p(x)/p_j$. In this case
$$I_{\Omega_j} = E\left[\frac{p_j}{N}\sum_{i=1}^{N} f(\xi_i^{(j)})\right] = E\,\theta_N.$$
This adaptive algorithm gives an approximation with an error $\varepsilon \le c\,N^{-1/2}$, where $c \le 0.6745\,\sigma(\theta)$ ($\sigma(\theta)$ is the standard deviation).


Algorithm 1.
Input data: total number of points N1; constant M (the initial number of subregions taken); constant ε (maximal admissible value of the variance in each subregion); constant δ (maximal admissible number of subregions); d, the dimensionality of the initial region/domain; f, the function of interest.
1.1. Calculate the number of points to be taken in each subregion, N = N1/δ.
2. For j = 1, ..., M^d:
2.1. Calculate the approximation of I_{Ω_j} and the variance D_{Ω_j} in the subdomain Ω_j based on N independent realizations of the random variable θ_N;
2.2. If D_{Ω_j} ≥ ε, then
2.2.1. choose the axis direction along which the partition will be performed,
2.2.2. divide the current domain into two subdomains (G_{j1}, G_{j2}) along the chosen direction,
2.2.3. if the length of the obtained subinterval is less than δ, go to step 2.2.1; otherwise set j = j1 (G_{j1} becomes the current domain) and go to step 2.1;
2.3. Else, if D_{Ω_j} < ε but an approximation of I_{G_{j2}} has not been calculated yet, set j = j2 (G_{j2} becomes the current domain along the corresponding direction) and go to step 2.1;
2.4. Else, if D_{Ω_j} < ε but there are subdomains left along the other axis directions, go to step 2.1;
2.5. Else, accumulate the contribution into the approximation I_N of I.

Computational Complexity
First let us briefly describe the Crude Monte Carlo algorithm. Plain (Crude) Monte Carlo is the most widely known MC approach for computing multidimensional integrals (Dimov 2008). We are interested in the approximate computation of the integral $I[f] = \int_\Omega f(x)\,p(x)\,dx$. Let the random variable $\theta = f(\xi)$ be such that $E\theta = \int_\Omega f(x)\,p(x)\,dx$, where the random points $\xi_1, \xi_2, \ldots, \xi_N$ are independent realizations of the random point $\xi$ with probability density function (p.d.f.) $p(x)$, and $\theta_1 = f(\xi_1), \ldots, \theta_N = f(\xi_N)$. The approximate value of the integral is then $\bar\theta_N = \frac{1}{N}\sum_{i=1}^{N}\theta_i$, which defines the Plain MC algorithm. One can see that the computational complexity of the Plain MC algorithm is linear, because we have to choose N random points from the domain and every such choice costs O(1) operations. In the adaptive Monte Carlo algorithm we perform the same number of operations as in the Crude Monte Carlo algorithm (Dimov 2008). In the simple two-dimensional case, on the first step of the optimized Adaptive approach we have M = 4 subdomains and
$$\hat\theta_N = \frac{1}{N_1}\sum_{i=1}^{N_1}\theta_i + \frac{1}{N_2}\sum_{i=1}^{N_2}\theta_i + \frac{1}{N_3}\sum_{i=1}^{N_3}\theta_i + \frac{1}{N_4}\sum_{i=1}^{N_4}\theta_i,$$


where $N_1 + N_2 + N_3 + N_4 = N$, so we perform the same number of operations as the Crude Monte Carlo, whose computational complexity is linear (Dimov 2008), to evaluate an approximation of $I_{G_j}$. We choose only O(1) subdomains where the variance is greater than the parameter ε, and this choice is independent of N. When we divide the domain, at every step the adaptive refinement is applied not in all subdomains, but only in O(1) of them. At the beginning we choose $N/k_0$ random points. After that, when dividing the domain into subdomains, we again choose only O(1) subdomains, and this choice is independent of N; in these subdomains we take $N/k_1$ points. On the j-th step of the Adaptive approach we choose O(1) subdomains with $N/k_j$ points each, where $\sum_{j=0}^{i} 1/k_j = 1$. Therefore, for the computational complexity we obtain
$$\frac{N}{k_0} + O(1)\frac{N}{k_1} + \cdots + O(1)\frac{N}{k_i} = N\,O(1)\left(\sum_{j=0}^{i}\frac{1}{k_j}\right) = N\,O(1) = O(N).$$
In this way we can conclude that the computational complexity of the optimized Adaptive algorithm is linear.
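To make the scheme above concrete, the following Python sketch implements a simplified variant of Algorithm 1 (a minimal sketch, not the authors' implementation): it recursively bisects a subdomain whenever the error estimate of the plain estimator exceeds a tolerance, and otherwise accepts the plain estimate. The function names, the splitting rule (longest edge), the tolerance eps and the depth limit are illustrative assumptions.

```python
import numpy as np

def crude_mc(f, lo, hi, n, rng):
    """Plain (Crude) MC estimate of the integral of f over the box [lo, hi]."""
    d = len(lo)
    x = lo + (hi - lo) * rng.random((n, d))
    vals = f(x)
    vol = np.prod(hi - lo)
    return vol * vals.mean(), vol * vals.std() / np.sqrt(n)

def adaptive_mc(f, lo, hi, n=1000, eps=1e-4, depth=12, rng=None):
    """Simplified adaptive MC: bisect a subdomain while its error estimate
    exceeds eps, otherwise accept the plain estimate (cf. Algorithm 1)."""
    rng = rng or np.random.default_rng(1)
    est, err = crude_mc(f, lo, hi, n, rng)
    if err < eps or depth == 0:
        return est
    axis = int(np.argmax(hi - lo))          # split along the longest edge
    mid = 0.5 * (lo[axis] + hi[axis])
    hi1, lo2 = hi.copy(), lo.copy()
    hi1[axis], lo2[axis] = mid, mid
    return (adaptive_mc(f, lo, hi1, n, eps, depth - 1, rng) +
            adaptive_mc(f, lo2, hi, n, eps, depth - 1, rng))

# Example: a 3D integrand with a sharp peak near the origin
f = lambda x: np.exp(-100.0 * x[:, 0] * x[:, 1] * x[:, 2])
lo, hi = np.zeros(3), np.ones(3)
print(adaptive_mc(f, lo, hi))
```

The recursion concentrates the sampling effort in the subdomains where the variance (and hence the error estimate) is large, which is exactly the behaviour exploited by the optimized Adaptive algorithm.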

3 Lattice Rules

We will use the following lattice point sets:
$$x_n = \left\{\frac{n}{N}\,z\right\}, \quad n = 0, \ldots, N-1,$$
where $z = (z_1, \ldots, z_s)$ is the generating vector with dimensionality $s$. By a lattice rule we mean a rule of the form
$$I_N(f) = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j), \quad (1)$$
where $x_0, \ldots, x_{N-1}$ are all points of a lattice $L$ in the unit hypercube. Let us deal with the problem of approximate evaluation of the integral
$$I(f) = \int_{[0,1)^s} f(x)\,dx,$$


where $f$ is a real function on $[0,1)^s$. We consider the case when $f$ has a periodic extension $\tilde f$ in $\mathbb{R}^s$: $\tilde f(x) = f(x)$ for $x \in [0,1)^s$ and $\tilde f(x + z) = \tilde f(x)$ for $x \in \mathbb{R}^s$, $z \in \mathbb{Z}^s$. Let
$$I_N(f) = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j), \quad (2)$$
where $x_0, x_1, \ldots, x_{N-1}$ are points from the lattice $L \subset [0,1)^s$. We define the dual lattice as
$$L^\perp = \{m \in \mathbb{R}^s : m \cdot x \in \mathbb{Z} \ \text{for all} \ x \in L\}.$$
In the case when the lattice has rank 1,
$$L^\perp = \{m \in \mathbb{Z}^s : m \cdot z \equiv 0 \ (\mathrm{mod}\ N)\}.$$
Let $f$ be represented by its Fourier series
$$f(x) = \sum_{m \in \mathbb{Z}^s} a(m)\, e^{2\pi i\, m \cdot x}, \quad x \in [0,1)^s,$$
where
$$a(m) = \int_{[0,1)^s} e^{-2\pi i\, m \cdot x} f(x)\,dx,$$
and the scalar product is $m \cdot x = m_1 x_1 + m_2 x_2 + \cdots + m_s x_s$. Let $E_s^\alpha(c)$, for $\alpha > 1$ and $c > 0$, be the class of functions $f$ whose Fourier coefficients satisfy
$$|a(m)| \le \frac{c}{(\bar m_1 \ldots \bar m_s)^\alpha}, \quad (3)$$
where $\bar m = |m|$ for $|m| \ge 1$ and $\bar m = 1$ for $m = 0$. We define the Zaremba index (Wang and Hickernell 2000) as
$$\rho = \min_{m \in L^\perp,\, m \ne 0} (\bar m_1 \ldots \bar m_s).$$
The following theorems are key points in analysing the error of integration for the lattice rule:


Theorem 1 (Sloan and Kachoyan 1987) Let $L$ be a lattice with points $x_0, x_1, \ldots, x_{N-1}$ in $[0,1)^s$ and let $m \in \mathbb{Z}^s$. Then
$$\frac{1}{N}\sum_{j=0}^{N-1} e^{2\pi i\, m \cdot x_j} = 1 \ \text{ if } m \in L^\perp, \qquad \frac{1}{N}\sum_{j=0}^{N-1} e^{2\pi i\, m \cdot x_j} = 0 \ \text{ if } m \notin L^\perp.$$

Theorem 2 (Wang and Hickernell 2000) Let $L$ be a lattice with points $x_0, x_1, \ldots, x_{N-1}$ in $[0,1)^s$. Then for the error of integration it holds that
$$I_N(f) - I(f) = \sum_{m \in L^\perp,\, m \ne 0} a(m).$$
Indeed, when we substitute $f(x) = \sum_{m \in \mathbb{Z}^s} a(m)\,e^{2\pi i\, m \cdot x}$ in $I_N(f) = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j)$ we obtain
$$I_N(f) = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j) = \frac{1}{N}\sum_{j=0}^{N-1}\sum_{m \in \mathbb{Z}^s} a(m)\,e^{2\pi i\, m \cdot x_j} = \sum_{m \in \mathbb{Z}^s} a(m)\,\frac{1}{N}\sum_{j=0}^{N-1} e^{2\pi i\, m \cdot x_j} = \sum_{m \in L^\perp} a(m),$$
and using the fact that $I(f) = a(0)$ we obtain
$$I_N(f) - I(f) = \sum_{m \in L^\perp,\, m \ne 0} a(m).$$

Theorem 3 (Wang and Hickernell 2000) Let $L$ be a lattice with points $x_0, x_1, \ldots, x_{N-1}$ in $[0,1)^s$ and let $f \in E_s^\alpha(c)$, $\alpha > 1$. Then
$$|I_N(f) - I(f)| \le c \sum_{m \in L^\perp,\, m \ne 0} (\bar m_1 \ldots \bar m_s)^{-\alpha}.$$
The proof follows directly from the previous theorem:
$$|I_N(f) - I(f)| = \Big|\sum_{m \in L^\perp,\, m \ne 0} a(m)\Big| \le c \sum_{m \in L^\perp,\, m \ne 0} \frac{1}{(\bar m_1 \ldots \bar m_s)^\alpha}.$$

Theorem 4 (Wang and Hickernell 2000) Let $L$ be a lattice with points $x_0, x_1, \ldots, x_{N-1}$ in $[0,1)^s$, let $f \in E_s^\alpha(c)$, $\alpha > 1$, and let $\rho \ge 2$ be the Zaremba index. Then
$$|I_N(f) - I(f)| \le c\,d(s,\alpha)\,\rho^{-\alpha}(\log\rho)^{s-1}.$$
Here we define the numbers $R_l$, $l = 1, 2, \ldots$, where $R_l$ is the number of points $m \in L^\perp$ for which $\bar m_1 \ldots \bar m_s < l\rho$, and we use the lemma proven by Hua and Wang (1981):
$$R_l \le e(s)\,l\,(\log 3l\rho)^{s-1}, \quad l = 1, 2, \ldots,$$
where $e(s)$ depends only on $s$. From Theorem 3,
$$|I_N(f) - I(f)| \le c \sum_{m \in L^\perp,\, m \ne 0} \frac{1}{(\bar m_1 \ldots \bar m_s)^\alpha}.$$

The sum over $m$ can be broken into sums over the sets $E_1, E_2, \ldots$, where $E_l$ is defined by the inequalities $l\rho \le \bar m_1 \ldots \bar m_s < (l+1)\rho$, $l = 1, 2, \ldots$. By the definition of $R_l$ we have the following inequalities:
$$\sum_{m \in L^\perp,\, m \ne 0} \frac{1}{(\bar m_1 \ldots \bar m_s)^\alpha} \le \sum_{l=1}^{\infty} \frac{R_{l+1} - R_l}{(l\rho)^\alpha} \le \frac{1}{\rho^\alpha}\sum_{l=1}^{\infty} R_{l+1}\left(\frac{1}{l^\alpha} - \frac{1}{(l+1)^\alpha}\right).$$
We have that
$$\frac{1}{l^\alpha} - \frac{1}{(l+1)^\alpha} = \alpha\int_l^{l+1} x^{-\alpha-1}\,dx \le \frac{\alpha}{l^{\alpha+1}},$$
and using the lemma of Hua and Wang,
$$|I_N(f) - I(f)| \le \frac{c\alpha}{\rho^\alpha}\sum_{l=1}^{\infty}\frac{R_{l+1}}{l^{\alpha+1}} \le \frac{c\alpha e(s)}{\rho^\alpha}\sum_{l=1}^{\infty}\frac{(l+1)(\log 3(l+1)\rho)^{s-1}}{l^{\alpha+1}} \le c\,d(s,\alpha)\,\rho^{-\alpha}(\log\rho)^{s-1},$$

where $d(s,\alpha)$ depends only on $s$ and $\alpha$. This proves Theorem 4.
In the theory of integration lattice rules a key role is played by the functions $f_\alpha$, $\alpha = 2, 4, \ldots$. Every function $f_\alpha$ is the worst function (Wang and Hickernell 2000) for the corresponding class $E_s^\alpha(1)$. These functions are defined by
$$f_\alpha(x) = \sum_{m \in L^\perp} \frac{1}{(\bar m_1 \ldots \bar m_s)^\alpha}\, e^{2\pi i\, m \cdot x}.$$
Furthermore $f_\alpha \in E_s^\alpha(1)$ and $I(f_\alpha) = 1$. Let $P_\alpha(z, N) = P_\alpha$ denote the error in approximating $I(f_\alpha)$. From Theorem 2,
$$P_\alpha(z, N) = I_N(f_\alpha) - I(f_\alpha) = \sum_{m \in L^\perp,\, m \ne 0} \frac{1}{(\bar m_1 \ldots \bar m_s)^\alpha}.$$
Now for $f \in E_s^\alpha(c)$, according to Theorem 3 the error is bounded by
$$|I_N(f) - I(f)| \le c\,P_\alpha(z, N), \quad (4)$$
where $\alpha = 2, 4, \ldots$, and the bound is attained when $f = f_\alpha$. The values of $P_\alpha(z, N)$ for fixed $\alpha$ are used as an indication of the relative quality of the particular lattice. In the case of a rank-1 lattice,
$$P_\alpha(z, N) = \sum_{z \cdot a \equiv 0\ (\mathrm{mod}\ N),\ a \ne 0} \frac{1}{(\bar a_1 \ldots \bar a_s)^\alpha}.$$
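For small N and s the figure of merit $P_\alpha(z, N)$ can be evaluated by brute force, which is a convenient way to compare candidate generating vectors. The following sketch is an illustrative helper, not part of the original paper: it enumerates the dual-lattice condition $z \cdot a \equiv 0 \ (\mathrm{mod}\ N)$ over a truncated index range, so the result is an approximation of the full sum.

```python
import itertools
import numpy as np

def p_alpha(z, N, alpha=2, mmax=20):
    """Brute-force approximation of P_alpha(z, N) for a rank-1 lattice:
    sum over nonzero a with z.a = 0 (mod N) of 1/(abar_1...abar_s)^alpha,
    truncated to |a_i| <= mmax."""
    z = np.asarray(z)
    total = 0.0
    for a in itertools.product(range(-mmax, mmax + 1), repeat=len(z)):
        a = np.asarray(a)
        if not a.any():
            continue                      # skip a = 0
        if int(a @ z) % N == 0:           # dual lattice condition
            abar = np.maximum(np.abs(a), 1)
            total += 1.0 / np.prod(abar.astype(float)) ** alpha
    return total

# Fibonacci lattice in s = 2: N = F_8 = 21, z = (1, F_7 = 13)
print(p_alpha([1, 13], 21, alpha=2))
```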

Bakhvalov (Wang and Hickernell 2000) proves:

Theorem 5 If $P$ is a lattice point set with an optimal generating vector $z$, then for the error of integration we have
$$\left|\frac{1}{N}\sum_{k=0}^{N-1} f\!\left(\left\{\frac{k}{N}z\right\}\right) - \int_{[0,1)^s} f(x)\,dx\right| \le C\,d(s,\alpha)\,\frac{(\log N)^{\beta(s,\alpha)}}{N^\alpha} \quad (5)$$
for $f \in E_s^\alpha(c)$, $\alpha > 1$, where $d(s,\alpha)$ and $\beta(s,\alpha)$ do not depend on $N$.

Korobov (1959, 1960, 1963) proves:

Theorem 6 If $N$ is a prime number, there exists a generating vector $z$ such that
$$D(N) = O(N^{-1}\log^s N), \qquad P_\alpha(z, N) = O(N^{-\alpha}\log^{\alpha s} N).$$

Niederreiter shows (Sloan and Joe 1994) that if $N$ is not a prime number, there exist lattice point sets for which
$$P_\alpha(z, N) = O\!\left(N^{-\alpha}(\log N)^{\alpha(s-1)+1}\,\frac{N}{\varphi(N)}\right), \quad s \ge 2,$$
$$P_\alpha(z, N) = O\!\left(N^{-\alpha}(\log N)^{\alpha}\left(\frac{N}{\varphi(N)} + \frac{\tau(N)}{\log N}\right)\right), \quad s = 2,$$
$$P_\alpha(z, N) = O\!\left(N^{-\alpha}(\log N)^{\alpha(s-1)}\left(1 + \frac{\tau(N)}{\log^{s-1} N}\right)\right), \quad s \ge 3,$$
where $\varphi(N)$ is Euler's totient function and $\tau(N)$ is the number of divisors of $N$. For a prime $N$ it follows from these formulas that there exists $z$ for which $P_\alpha(z, N) = O(N^{-\alpha}\log^{\alpha(s-1)} N)$. The following theorem of Sharygin (1963) holds:

Theorem 7 For a given lattice rule it holds that
$$P_\alpha(z, N) \ge O(N^{-\alpha}\log^{s-1} N). \quad (6)$$

342

V. Todorov

which is optimal according to Schmidt (1972) (Sloan and Kachoyan 1987). It is important that for finding Fl are necessary only O(log Fl ) elementary operations. , There are different techniques for optimal constructions when s ≥ 2. Let s = p−1 2 where p ≥ 5 is a prime number. If we have the set Q(2 cos 2πp ), which is an algebraic field of degree s with basis functions 2 cos(2π j/ p) | j = 1, . . . , s, we construct the sequence ηl , l = 1, 2, . . . , which satisfies: ( j)

cs−1 el < ηl < cs el , cs−1 e−l/(s−1) ≤ |ηl | ≤ cs−1 e−l/(s−1) , j = 2, . . . , s, where cs is a constant and η( j ) is the conjugate of η. Define the generating vector by: ηl =

s Σ

( j)

ηl , h (l) j = [ηl 2 cos(2π j/ p)], j = 2, . . . , s,

j=1

where ηl is the number of points and [.] is a function whole part. With such a choice of z Hua and Wang show that 1 − 21 − 2(s−1)+ε

D(ηl ) = O(ηl

α − α2 − 2(s−1)+ε

), Pα (z, N ) = O(ηl

),

where ε is a preliminary given positive number. We will construct a lattice L with the following optimized generating vector. for positive number n: (7) z = (1, Fn (2), . . . , Fn (s)) It is fulfilled that Fn ( j) = Fn+ j−1 − Fn+ j−2 − . . . − Fn , where Fi are the generalized Fibonacci numbers with dimensionality s: Fl+s = Fl + Fl+1 + ... + Fl+s−1 , l = 0, 1, . . .

(8)

with initial condition: F0 = F1 = . . . = Fs−2 = 0, Fs−1 = 1,

(9)

z = (1, Fn−1 + Fn−2 + . . . + Fn−s+1 , . . . , Fn−1 + Fn−2 , Fn−1 ).

(10)

for l = 0, 1, . . .. After simplifying:

The advantage of the algorithm described above is that the number of calculation required to obtain the generating vector is O(log Nl ) (Cull and Holloway 1989). The advantage of the lattice method is the linear computational complexity and reduced time for calculating the multidimensional integrals. The generation of a new point requires constant number of operations thus to obtain a lattice set of the described kind consisting of Nl points, O(Nl ) number of operations are necessary.


4 Numerical Examples

We will test the optimized lattice rule on the following examples.

Example 1. $s = 3$:
$$\int_{[0,1]^3} \exp(x_1 x_2 x_3)\,dx \approx 1.14649907. \quad (11)$$

Example 2. $s = 4$:
$$\int_{[0,1]^4} x_1 x_2^2\,e^{x_1 x_2}\sin(x_3)\cos(x_4)\,dx \approx 0.1089748630. \quad (12)$$

Example 3. $s = 5$:
$$\int_{[0,1]^5} \exp(-100\,x_1 x_2 x_3)\,(\sin(x_4) + \cos(x_5))\,dx \approx 0.1854297367. \quad (13)$$

Example 4. $s = 7$:
$$\int_{[0,1]^7} e^{1 - \sum_{i=1}^{3}\sin\left(\frac{\pi}{2}x_i\right)}\,\arcsin\!\left(\sin(1) + \frac{\sum_{j=1}^{7} x_j}{200}\right) dx \approx 0.75151101. \quad (14)$$

Example 5. $s = 15$:
$$\int_{[0,1]^{15}} \left(\sum_{i=1}^{10} x_i^2\right)\left(x_{11} - x_{12}^2 - x_{13}^3 - x_{14}^4 - x_{15}^5\right)^2 dx \approx 1.96440666. \quad (15)$$

344

V. Todorov

Table 1 Relative error for 3 dimensional integral N Crude t,s Adapt 103 104 105 106 107

3.62e-2 1.67e-3 8.60e-4 5.12e-4 3.15e-4

0.007 0.07 0.74 6.12 60.1

4.82e-3 1.07e-3 1.52e-4 5.11e-5 2.34e-5

t,s

Lattice

t,s

0.17 1.44 10.9 131 1094

1.21e-3 5.04e-4 5.34e-6 7.85e-7 8.89e-8

0.006 0.07 0.66 7.02 79.7

Table 2 Relative error for 3-dimensional integral for preliminary given time Time in sec. Crude Adapt Lattice 1.05e-3 6.84e-4 4.79e-4 1.57e-4

1 5 10 100

7.96e-3 8.14e-4 1.82e-4 7.04e-5

Table 3 Relative error for 4-dimensional integral N Crude t,s Adapt 104 105 106 107

9.31e-3 4.37e-3 7.87e-4 4.31e-5

0.08 0.78 5.86 50.1

1.11e-3 1.44e-4 5.63e-5 9.11e-6

2.34e-6 8.47e-7 4.89e-7 6.53e-9

t,s

Opt. lattice

t,s

1.97 20.1 210 2035

8.61e-5 3.69e-5 2.86e-6 3.38e-7

0.07 0.99 5.22 58

Table 4 Relative error for 4-dimensional integral for preliminary given time Time in sec. Crude Adapt Opt. lattice 5 20 100

8.61e-4 2.31e-4 2.21e-5

5.24e-3 1.44e-4 8.21e-5

8.47e-7 4.89e-7 4.53e-8

is the basic and simplest possible Monte Carlo approach. Such kind of applications appear also in some important problems in control theory. It can be seen that by increasing the dimension, the optimized lattice rule gives the best results especially for lower dimensions (see Tables 1, 3, 5), and the advantage is more clearly pronounced for a preliminary given computational time (see Tables 2 and 4). Even for higher dimensions the lattice method is with 1-2 orders better than the adaptive approach (see Tables 6 and 7). The lattice method is not applicable to functions with singularities as we will see from the numerical experiments in this section. Let the following model function be given:

An Overview of Lattice and Adaptive Approaches … Table 5 Relative error for 5-dimensional integral N Crude t,s Adapt 103 104 105 106 107

2.10e-2 4.52e-3 1.19e-3 9.47e-4 6.38e-4

0.007 0.07 0.64 6.06 59.9

2.15e-3 2.01e-3 8.91e-4 2.92e-4 8.21e-5

Table 6 Relative error for 7-dimensional integral N Crude t,s Adapt 104 105 106 107

1.47e-2 8.26e-3 1.76e-3 9.85e-4

0.11 1.02 10.1 96.3

1.07e-3 7.51e-4 6.30e-5 2.34e-5

Table 7 Relative error for 15-dimensional integral N Crude t,s Adapt 103 104 105 106

6.31e-2 4.30e-2 2.77e-2 2.13e-3

0.09 0.95 9.70 95.8

3.16e-3 1.49e-3 5.76e-4 1.29e-4

f (x) = (1 +

d Σ

345

t,s

Opt. lattice

t,s

0.27 2.43 25.2 219.5 2043

1.75e-4 1.28e-5 9.50e-6 5.47e-7 7.71e-8

0.007 0.06 0.61 5.98 58.4

t,s

Opt. lattice

t,s

2.07 19.3 194 1861

2.19e-4 6.87e-5 7.39e-6 8.89e-7

0.11 0.99 9.81 94.2

t,s

Opt. lattice

t,s

9.24 88 847 8235

5.34e-2 1.22e-3 3.08e-4 1.37e-5

0.08 0.93 9.65 96.9

ai xi )−(s+1) .

(16)

i=1

The class of test functions in question belongs to a package proposed by Genz (1984). Each individual class of the package is characterized by a peculiarity in computational terms. The selected set of functions has a single local maximum near one of the vertices of the multidimensional single cube, similar to some model functions describing the change in the concentrations of pollutants in the air. The 1 1 ; 1 − 20 ], parameters ai are evaluated, using variables ai, , uniformly distributed in [ 20 , and the relation a = c a . The constant c is parameter of computational complexity (Berntsen et al. 1991), selected so that the “sharpness” of the local maximum is . The adaptive approach is effective for controlled by the following norm ||a||1 = 600 s2 such a class of functions—functions with computational features in a local subdomain of the field of integration. The results obtained after applying the simple(crude) and adaptive Monte Carlo algorithm for integrals of 5 and 18 are given in Tables 8 and 9, respectively. The efficiency of the adaptive and lattice algorithms is studied.

346

V. Todorov

Table 8 Relative error for s = 5, I [ f ] = 2.12e-06, a = (5, 5, 5, 5, 4) Adapt N

Crude IN [ f ]

Opt. lattice

(s)

N

IN [ f ]

(s)

N

IN [ f ]

(s)

102

3.7735e-03

0.33

105

5.4858e-02

0.27

1346269

9.7135e-02

0.38

103

1.2877e-03

1.44

106

3.8207e-02

1.22

3524578

6.7594e-02

1.32

104

4.2452e-04

10.75

107

3.3962e-03

12.3

14930352

1.5377e-02

15.07

105

4.7169e-05

142.18 108

9.4339e-04

124.2

102334155

2.9245e-03

134.58

 Table 9 Relative error for s = 18, I [ f ] = 9.919e-06, a = ) 1 2 2 1 1 4 1 1 9 , 27 , 27 , 9 , 9 , 27 , 9 , 9 Adapt

Crude

1 2 2 1 2 1 1 4 2 1 , , , , , , , , , , 9 27 27 9 27 9 9 27 27 9 Opt. lattice

N

IN [ f ]

(s)

N

IN [ f ]

(s)

N

IN [ f ]

(s)

10

9.2341e-04

15.7

107

4.5367e-03

13.6

14930352

7.1579e-02

14.7

102

8.0653e-05

142

108

2.0163e-03

140

102334155

1.3096e-02

144.1

1408

109

5.0480e-04

1353.5 1134903170

7.8883e-03

1344.3

103

1.0081e-05

In both tables, the value N denotes the total number of realizations of the random variable over the entire domain for the ordinary algorithm, for the adaptive algorithm, and for the lattice-type algorithm. The comparison of the presented results is based on an equal total number of realizations and approximately the same time for calculating the integrals. The numbers of realizations of the random variable have been chosen so that the times for obtaining an approximate value of the integral are close (see Table 8). The obtained results confirm the reduction of the variance: the adaptive algorithm needs far fewer realizations and gives more accurate results than the ordinary Monte Carlo and the lattice-type algorithm, but it is significantly slower (see Table 9).

5 Conclusion

In this work we make a comparison between lattice and adaptive stochastic approaches for multidimensional integrals of different dimensions. A comprehensive experimental study has been carried out for multidimensional integrals with applications in ecology. The numerical experiments show that the optimized lattice rule is more efficient for multidimensional integrals of smooth functions. The adaptive approach is more efficient for multidimensional integrals with peculiarities and peaks, which have applications in air pollution modelling.

An Overview of Lattice and Adaptive Approaches …

347

Acknowledgements Venelin Todorov is supported by the Bulgarian National Science Fund under Project KP-06-M32/2 - 17.12.2019 “Advanced Stochastic and Deterministic Approaches for LargeScale Problems of Computational Mathematics”. The work is also supported by the Bulgarian National Science Fund under Projects KP-06-N52/5 “Efficient methods for modeling, optimization and decision making” and by Bilateral Project KP-06-Russia/17 “New Highly Efficient Stochastic Simulation Methods and Applications”.

References Antonov, I., Saleev, V.: An economic method of computing L Pτ -sequences. USSR Comput. Math. Phy. 19, 252–256 (1979) Berntsen, J., Espelid, T.O., Genz, A.: An adaptive algorithm for the approximate calculation of multiple integrals. ACM Trans. Math. Softw. 17, 437–451 (1991) Bratley, P., Fox, B.: Algorithm 659: implementing Sobol’s Quasirandom sequence generator. ACM Trans. Math. Softw. 14(1), 88–100 (1988) Cull, P., Holloway, J.L.: Computing Fibonacci numbers quickly. Inf. Process. Lett. 32(3), 143–149 (1989) Davis, P.J., Rabinowitz, P.: Methods of Numerical Integration, 2nd edn. Academic, London (1984) Dimov, I.: Monte Carlo Methods for Applied Scientists 291 p. World Scientific, New Jersey, London, Singapore (2008). ISBN-10 981-02-2329-3 Dimov, I., Georgieva, R.: Monte Carlo algorithms for evaluating Sobol’ sensitivity indices. Math. Comput. Simul. 81(3), 506–514 (2010) Dimov I., Georgieva R., Ivanovska S., Ostromsky Tz., Zlatev Z.: Studying the sensitivity of pollutants’ concentrations caused by variations of chemical rates. J. Comput. Appl. Math. 235(2), 391–402 (2010) Dimov, I., Georgieva, R., Ostromsky, Tz., Zlatev, Z.: Advanced algorithms for multidimensional sensitivity studies of large-scale air pollution models based on Sobol sequences. Comput. Math. Appl. 65(3), 338–351 (2013) Dimov, I., Karaivanova, A., Georgieva, R., Ivanovska, S.: Parallel importance separation and adaptive Monte Carlo algorithms for multiple integrals. Springer Lect. Notes Comput. Sci. 2542, 99–107 (2003) Genz, A.: Testing Multidimensional Integration Routines. Methods and Languages for Scientific and Engineering Computation, Tools, pp. 81–94 (1984) Hua, L.K., Wang, Y.: Applications of Number Theory to Numerical analysis (1981) Karaivanova, A.: Statistical Numerical Methods and Simulations. Demetra, Sofia (2012). (In Bulgarian) Kostadinova, V., Georgiev, I., Mihova, V., Pavlov, V.: An application of Markov chains in stock price prediction and risk portfolio optimization. In: AIP Conference Proceedings, vol. 2321, Issue 1, p. 030018. AIP Publishing LLC (2021) Korobov, N.M.: The approximate computation of multiple integrals. Dokl. Akad. Nauk SSSR 124, 1207–1210 (1959) Korobov, N.M.: Properties and calculation of optimal coefficients. Sov. Math. Dokl. 1, 696–700 (1960) Korobov, N.M.: Number-Theoretical Methods in Approximate Analysis. Fizmatgiz, Moscow (1963) Kuo, F.Y., Nuyens, D.: Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients - a survey of analysis and implementation. Found. Comput. Math. 16(6), 1631–1696 (2016) Lai, Y., Spanier J.: Applications of Monte Carlo/Quasi-Monte Carlo methods in finance: option pricing. In: Proceedings of the Claremont Graduate University conference (1998)

348

V. Todorov

Mikhov, R., Myasnichenko, V., Kirilov, L., Sdobnyakov, N., Matrenin, P., Sokolov, D., Fidanova, S.: On the Problem of Bimetallic Nanostructures Optimization: An Extended Two-Stage Monte Carlo Approach. In: Fidanova, S. (ed.) Recent Advances in Computational Optimization. WCO 2020. Studies in Computational Intelligence, vol. 986. Springer, Cham (2022) Mikhov, R., Myasnichenko, V., Kirilov, L., Sdobnyakov, N., Matrenin, P., Sokolov, D., Fidanova, S.: A two-stage Monte Carlo approach for optimization of bimetallic nanostructures. In: 2020 15th Conference on Computer Science and Information Systems (FedCSIS), pp. 285–288. IEEE (2020) Niederreite, H.: Existence of good lattice points in the sense of Hlawka. Monatsh. Math 86, 203–219 (1978) Niederreiter, H., Talay, D.: Monte Carlo and Quasi-Monte Carlo Methods. Springer (2002) Paskov, S.: Computing High Dimensional Integrals with Applications to Finance. preprint Columbia Univ. (1994) Pencheva, V., Georgiev, I., Asenov, A.: Evaluation of passenger waiting time in public transport by using the Monte Carlo method. In: AIP Conference Proceedings, vol. 2321, Issue 1. AIP Publishing LLC (2021) Sharygin, I.F.: A lower estimate for the error of quadrature formulas for certain classes of functions. Zh. Vychisl. Mat. i Mat. Fiz. 3, 370–376 (1963) Sloan, I.H., Kachoyan, P.J.: Lattice methods for multiple integration: theory, error analysis and examples, SIAM. J. Numer. Anal. 24, 116–128 (1987) Sloan, I.H., Joe, S.: Lattice Methods for Multiple Integration. Oxford University Press, Oxford (1994) Wang, Y., Hickernell, F.J.: An historical overview of lattice point sets. In: Monte Carlo and QuasiMonte Carlo Methods 2000, Proceedings of a Conference held at Hong Kong Baptist University, China (2000)

Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations Venelin Todorov, Ivan Dimov, and Rayna Georgieva

Abstract In this study we investigate and analyse an advanced balanced biased Monte Carlo method for solving a class of integral equations, namely the second kind Fredholm integral equations. The biased stochastic approach is based on the balancing of systematic and stochastic errors which is very important for the quality of the presented algorithm. Error balancing conditions of both stochastic and systematic errors have been obtained and applied for the advanced biased algorithm. A comparison with other biased and unbiased stochastic approaches has been done. Extensive numerical experiments have been done to support the theoretical studies regarding the convergence rate of the different Monte Carlo methods under consideration. Keywords Fredholm integral equations · Monte Carlo methods · Biased stochastic approach

1 Introduction By definition Monte Carlo methods are methods of approximation of the solution to problems of computational mathematics, by using random processes for each such problem, with the parameters of the process equal to the solution of the problem. The method can guarantee that the error of Monte Carlo approximation is smaller than V. Todorov (B) Department of Information Modeling, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. Georgi Bonchev Str., Block 8, 1113 Sofia, Bulgaria e-mail: [email protected]; [email protected] V. Todorov · I. Dimov · R. Georgieva Department of Parallel Algorithms, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Block 25 A, 1113 Sofia, Bulgaria e-mail: [email protected] R. Georgieva e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3_20

349

350

V. Todorov et al.

a given value with a certain probability (Sobol 1973). Monte Carlo methods always produce an approximation to the solution of the problem or to some functional of the solution, but one can control the accuracy in terms of the probability error (Dimov 2008). Monte Carlo methods is a widely used tool in many fields of science. It is well known that Monte Carlo methods give statistical estimates for the functional of the solution by performing random sampling of a certain random variable whose mathematical expectation is the desired functional (Dimov 2006). An important advantage of the Monte Carlo methods is that they are suitable for solving multi-dimensional problems, since the computational complexity increases polynomially, but not exponentially with the dimensionality. Another advantage of the method is that it allows to compute directly functionals of the solution with the same complexity as to determine the solution. In such a way this class of methods can be considered as a good candidate for treating innovative problems related to modern areas in quantum physics. The simulation of innovative engineering solutions that may be treated as candidates for quantum computers need reliable fast Monte Carlo methods for solving the discrete version of the Wigner equation (Dimov et al. 2015b, c; Nedjalkov et al. 2013). High quality algorithms are needed to treat complex, time-dependent and fully quantum problems in nano-physics and nano-engineering. These methods use estimates of functionals defined on discrete Markov chains defined on properly chosen subspaces. Integral equations are of high applicability in various areas of applied mathematics, physics, and engineering (Arnold 1978; Kalos and Whitlock 2008; Kress 1999; Mikhov et al. 2022; Wilmott et al. 1995). In particular, they are widely used in mechanics, geophysics, kinetic theory of gases, quantum mechanics, mathematical economics and queuing theory. That is why it is reasonable to develop and study efficient and reliable approaches to solve integral equations. In order to be able to analyze the quality of biased algorithms for integral equations we need to introduce several definitions related to probability error, discrete Markov chains and algorithmic computational cost. Definition 1 If J is the exact solution of the problem, then the probability error is the least possible real number R N , for which: P = Pr {|ξ N − J | ≤ R N } where 0 < P < 1. If P = 21 , then the probability error is called the probable error. The probable error is the value r N for which: Pr {|ξ N − J | ≤ R N } =

1 = Pr {|ξ N − J | ≥ R N } 2

In our further considerations we will be using the above defined probable error taking into account that the chosen value of the probability P only changes the constant, but not the rate of convergence in the stochastic error estimates.

Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations

351

There are two main directions in the development and study of Monte Carlo algorithms (Dimov and Atanassov 2007). The first is Monte Carlo simulation and the second is the Monte Carlo numerical algorithms. In Monte Carlo numerical algorithms we construct a Markov process and prove that the mathematical expectation of the process is equal to the unknown solution of the problem (Dimov 2008). A Markov process is a stochastic process that has the Markov property. Often, the term Markov chain (Kalos and Whitlock 2008; Veleva et al. 2020) is used to mean a discrete-time Markov process. A finite discrete Markov chain Tk is defined as a finite set of states {α1 , α2 , . . . , αk }. A state is called absorbing if the chain terminates in this state with probability one. Iterative Monte Carlo algorithms can be defined as terminated Markov chains: T = αt0 → αt1 → αt2 . . . αtk ,

(1)

where αtq , q = 1, . . . , i is one of the absorbing states. Computational cost of a randomized iterative algorithm A R is defined by cost (A, x, w) = n E(q)t0 , where E(q) is the mathematical expectation of the number of transitions in the sequence (1) and t0 is the mean time needed for value of one transition. The computational cost is an important measure of the efficiency of Monte Carlo algorithms. If one has a set of algorithms solving a given problem with the same accuracy, the algorithm with the smallest computational cost can be considered as the most efficient algorithm. In the latter case there is a systematic error. The Monte Carlo algorithm consists in simulating the Markov process and computing the values of the random variable. It is clear, that in this case a stochastic error also appears. The error estimates are important issue in studying Monte Carlo algorithms. In order to formalize our consideration we need to define the standard iterative process. Define an iteration of degree j as (Dimov 1991, 2008) u k+1 = Fk ( A, b, u k , u k−1 , . . . , u k− j+1 ) where u k is obtained from the kth iteration. It is desired that u k → u = A−1 b as k → ∞ The degree of j should be small because of efficiency requirement, but also in order to save computer memory storage. The iteration is called stationary if Fk = F for all k, that is, Fk is independent of k. The iterative Monte Carlo process is said to be linear if Fk is a linear function of u k , . . . , u k− j+1 . In this work for integral equations we shall consider iterative stationary linear Monte Carlo algorithms only and will analyze both systematic and stochastic errors.

352

V. Todorov et al.

We will put a special effort to set-up the parameters of the algorithm in order to have a good balance between the two errors mentioned above (Curtiss 1954). The paper is organized as follows. The problem setting is given in Sect. 2. A description of biased stochastic approaches applied to solve the problem under consideration is presented in Sect. 3. In this section some error analysis based on the concept of balancing both stochastic and systematic errors is done. The unbiased stochastic approach on the idea of Sylvain Maire is presented in Sect. 4. The numerical examples including a comparison between the biased and unbiased approach are given in Sect. 5. Some final conclusions are drawn in Sect. 6.

2 Formulation of the Problem Let X be a Banach space of real-valued functions. Let f = f (x) ∈ X and u k = u (xk ) ∈ X be defined in Rd and K = K(u) be a linear operator defined on X. Consider the second kind Fredholm integral equation: ( u(x) =

Ω

k(x, x, )u(x, )dx, + f (x)

(2)

or u = Ku + f (K is an integral operator), where k(x, x, ) ∈ L 2 (Ω × Ω), f (x) ∈ L 2 (Ω) are given functions and u(x) ∈ L 2 (Ω) is an unknown function, x, x, ∈ Ω ⊂ Rn (Ω is a bounded domain). Consider the sequence u 1 , u 2 , . . . , defined by the recursion formula u k = K (u k−1 ) + f, k = 1, 2, . . .

(3)

The formal solution of (3) is the truncated Liouville-Neumann series u k = f + K ( f ) + · · · + Kk−1 ( f ) + Kk ( f ) , k > 0,

(4)

where Kk means the kth iterate of (K. As)an example consider the integral iterations. Let u(x) ∈ X, x ∈ Ω ⊂ Rd and k x, x , be a function defined for x, x , ∈ Ω. The integral transformation ( Ku(x) =

Ω

k(x, x , )u(x , )d x ,

maps the function u(x) into the function Ku(x), and is called an iteration of u(x) by the integral transformation kernel k(x, x , ). The second integral iteration of u(x) is denoted by KKu(x) = K2 u(x).

Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations

Obviously

( ( K2 u(x) =

Ω

Ω

353

k(x, x , )k(x , , x ,, )d x , d x ,, . k→∞

In this way K3 u(x), . . . , Ki u(x), . . . can be defined. When u (k) −−−→ u then u = ∞ Σ Ki f ∈ X and i=0

u = K(u) + f.

(5)

The truncation error of (5) is u k − u = K(u 0 − u). Random variables θi , i = 1, 2, . . . , k are defined on spaces Ti+1 , where Ti+1 = Rd × Rd× . . . Rd , i = 1, 2, . . . , k, i times

and it is fulfilled Eθ0 = J (u 0 ), E(θ1 /θ0 ) = J (u 1 ) . . . , E(θk /θ0 ) = J (u k ). An approximate value of linear functional J (u k ) that is to be calculated is set up as: J (u k ) ≈

N 1 Σ {θk }s , N i=1

(6)

where {θk }s is the sth realization of θk . We consider the case when K is an ordinary integral transform ( K(u) =

k(x, y)u(y)dy Ω

and u(x) and f (x) are functions. Equation (5) becomes ( u(x) =

Ω

k(x, y)u(y)dy + f (x) or u = Ku + f.

(7)

We want to evaluate the linear functionals of the solution of the following type ( J (u) =

Ω

ϕ(x)u(x) = (ϕ(x), u(x)) .

(8)

354

V. Todorov et al.

In fact (8) defines an inner product of a given function ϕ(x) ∈ X with the solution of the integral equation (7). The adjoint equation is v = K∗ v + ϕ,

(9)

where v, ϕ ∈ X∗ , K ∈ [X∗ → X∗ ], X∗ is the dual function space and K∗ is an adjoint operator. We will prove that J = (ϕ, u) = ( f, v) . If we multiply (5) by v and (9) by u and integrate, then: ) ( (v, u) = (v, Ku) + (v, f ) and (v, u) = K∗ v, u + (ϕ, u) From ( ∗ ) K v, u = ( ( =

Ω

Ω

( Ω



( (

K v(x)u(x)d x =

k(x , , x)u(x)v(x , )d xd x , =

we obtain that

Ω

( Ω

Ω

k ∗ (x, x , )v(x , )u(x)d xd x ,

Ku(x , )v(x , )d x , = (v, Ku) ,

) ( ∗ K v, u = (v, Ku) .

Therefore (ϕ, u) = ( f, v) . That is very important, because in practice it happens the solution of the adjoint problem to be simple than this of the original one, and they are equivalent as we have proved it above. Usually it is assumed that ϕ(x) ∈ L 2 (Ω), because k(x, x , ) ∈ L 2 (Ω × Ω), f (x) ∈ L 2 (Ω). In a more general setting k(x, x , ) ∈ X (Ω × Ω), f (x) ∈ X (Ω), where X is a Banach space. Then the given function ϕ(x) should belong to the adjoint Banach space X∗ , and the problem under consideration may by formulated in an alternative way: (10) v = K∗ v + ϕ, ∗ ∗ ∗ where v, ϕ ∈ X∗ (Ω), and ( K (Ω × Ω) ∈ [X → X ]. In such a way one may compute the value J (v) = f (x)v(x)dx = ( f, v) instead of (8). An important case for practical computations is the case when X ≡ L 1 , where L 1 is defined in a standard way: ( (

∥ f ∥L 1 =

Ω

| f (x)|d x; ∥ K ∥ L 1 ≤ sup x

Ω

|k(x, x , )|d x , .

Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations

355

In this case the function ϕ(x) from the functional (8) belongs to L ∞ , i.e. ϕ(x) ∈ L ∞ since L ∗1 ≡ L ∞ . It is also easy to see that (K∗ v, u) = (v, Ku), and also (ϕ, u) = ( f, v). This fact is important for practical computations since often the computational complexity for solving the adjoint problem is smaller than the complexity for solving the original one. The above consideration shows that if ϕ(x) is a Dirac δ–function δ(x − x0 ), then the functional J (u) = (ϕ(x), u(x)) = (δ(x − x0 ), u(x)) = u(x0 ) is the solution of the integral equation at the point x0 , if u ∈ L ∞ . Consider the case when X = X∗ = L 2 . We define: (( ∥ f ∥L 2 =

Ω

(( ∥K∥ L 2 ≤ sup

,

Ω

x

) 21

( f (x))2 d x

(k(x, x )) d x 2

; ) 21

.

If ϕ(x), u(x) ∈ L 2 then (8) is finite, because | ( |( [( } 21 ( | | 2 2 | ϕ(x)u(x)d x | ≤ |ϕ(x)u(x)|d x ≤ ϕ(x) d x u d x < ∞. | | Ω

Ω

Ω

Ω

If u(x) ∈ L 2 and k(x, x , ) ∈ L 2 (Ω × Ω) then Ku(x) ∈ L 2 : [( |Ku(x)| ≤ 2

Ω

|ku|d x

,

}2

( ≤

,

k (x, x )d x 2

Ω

,

( Ω

u 2 (x , )d x , .

Let us integrate the last inequality with respect to x: ( (

( Ω

|Ku|2 d x ≤

Ω

Ω

k 2 (x, x , )d x , d x

( Ω

u 2 (x , )d x , < ∞.

From the last inequality it follows that K2 u(x), . . . , Ki u(x), . . . also belongs to ∞ Σ L 2 (Ω). If ∥Kn ∥ < 1, n ∈ N then u = Ki f converges. i=0

An approximation of the unknown value (ϕ, u) can be obtained using a truncated Liouville–Neumann series (4) for a sufficiently large k: (ϕ, u (k) ) = (ϕ, f ) + (ϕ, K f ) + . . . + (ϕ, K(k−1) f ) + (ϕ, K(k) f ). So, we transform the problem for solving integral equations into a problem for approximate evaluation of a finite number of multidimensional integrals (Sobol 1989). We will use the following denotation (ϕ, K(k) f ) = I (k), where I (k) is a value, obtained after integration over Ω j+1 = Ω × . . . × Ω, j = 0, . . . , k. It is obvious that the calculation of the estimate (ϕ, u (k) ) can be replaced by evaluation of a sum of linear functionals of iterative functions of the following type

356

V. Todorov et al.

(ϕ, K( j) f ), j = 0, . . . , k, which can be presented as: (ϕ, K( j ) f ) =

( (Ω

ϕ(t0 )K( j ) f (t0 )dt0 = ϕ(t0 )k(t0 , t1 ) . . . k(t j−1 , t j ) f (t j )dt0 . . . dt j ,

=

(11)

G

where t = (t0 , . . . , t j ) ∈ G ≡ Ω j+1 ⊂ Rd( j+1) . If we denote by Fk (t) the integrand function F(t) = ϕ(t0 )k(t0 , t1 ) . . . k(t j−1 , t j ) f (t j ), t ∈ Ω j +1 , then we will obtain the following expression for (11): I ( j) = (ϕ, K

( j)

(

Fk (t)dt, t ∈ G ⊂ Rn ( j +1) .

f) =

(12)

G

Thus, we will consider the problem for approximate calculation of multiple integrals of the type (12). The above consideration shows that there two classes of possible stochastic approaches. The first one is the so-called biased approach when one is looking for a random variable which mathematical expectation is equal to the approximation of the solution problem by a truncated Liouville–Neumann series (4) for a sufficiently large k. An unbiased approach assumes that the formulated random variable is such that its mean value approaches the true solution of the problem. Obviously, in the first class of approaches there are two errors: a systematic one (a truncation error) rk and a stochastic (a probabilistic) one, namely r N , which depends on the number N of values of the random variable,or the number of chains used in the estimate. In the case of unbiased stochastic methods one should analyse the only probabilistic error. In the case of biased stochastic methods more careful error analysis is needed: balancing of both systematic and stochastic error should be done in order to minimize the computational complexity of the methods (for more details, see Dimov 2008).

3 Biased Stochastic Approach 3.1 Monte Carlo Method for Integral Equations It is known (Dimov 2008; Sobol 1973) that the approximate calculation of linear functionals of the solution of an integral equation (ϕ, u) brings to the calculation of a finite sum of linear functionals of iterative functions (ϕ, K j f ), j = 0, . . . , i. First, we construct a random trajectory (Markov chain) Tk of length k starting from state x0 in the domain Ω: Tk : x0 −→ x1 −→ . . . −→ xk

Advanced Biased Stochastic Approach for Solving Fredholm Integral Equations

357

according to the initial π(x) and transition p(x, x, ) probabilities. The functions π(x) and p(x, x, ) satisfy the requirements for non-negativeness,( to be acceptϕ(x) and the kernel k(x, x, ) respectively and Ω π(x)dx = 1, (able to function , , n Ω p(x, x )dx = 1 for any x ∈ Ω ⊂ R . The Markov chain transition probability , p(x, x ) is chosen to be proportional to |k(x, x , )| following (Dimov 2008). It means that p(x, x , ) = c|k(x, x , )|, and the constant c is computed such that (( c=

Ω

|k(x, x , )|d x ,

)−1

= 1 for any x ∈ Ω.

From the above supposition on the kernel k(x, x, ) and the well-known facts that ϕ(x0 ) Σ W j f (x j ), π(x0 ) j=0 k

Eθk [ϕ] = (ϕ, u (k) ), where θk [ϕ] =

and W0 = 1, W j = W j−1

k(x j−1 , x j ) , p(x j−1 , x j )

j = 1, . . . , k

it follows that the corresponding estimation by the Monte Carlo algorithm for integral equations (BSA) of (ϕ, u (k) ) can be presented in the following form: (ϕ, u (k) ) ≈

N 1 Σ θk [ϕ]l . N l=1

Therefore, the random variable $\theta_k[\varphi]$ can be considered as a biased estimate of the desired value $(\varphi, u)$ for $k$ sufficiently large, with a statistical error of order $O(N^{-1/2})$, where $N$ is the number of chains and $\theta_k[\varphi]_l$ is the value of $\theta_k[\varphi]$ taken over the $l$-th chain. It is very important that the same trajectories of the type $T_k$ can be used for a biased approximate evaluation of $(\varphi, u^{(k)})$ for various functions $\varphi(x)$. Furthermore, they can be used for various integral equations with the same kernel $k(x, x')$, but with different right-hand sides $f(x)$. If we consider (Sobol 1973)

$$p(x, x') = \frac{|k(x, x')|}{\int_\Omega |k(x, x')|\,dx'}, \qquad \pi(x) = \frac{|\varphi(x)|}{\int_\Omega |\varphi(x)|\,dx},$$

then the algorithm is called an Almost Optimal Monte Carlo algorithm with almost optimal probability densities, and it reduces the variance. We use almost optimal densities instead of the optimal ones, because the optimal densities are very expensive to construct (Dimov 2008).
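To make the construction concrete, the following Python sketch implements a BSA-type estimator for the setting used in the numerical examples below, where $\varphi(x) = \pi(x) = \delta(x - x_0)$ and the transition density is proportional to $|k(x, \cdot)|$ on $\Omega = [0, 1]$. This is only an illustration under these assumptions (the function name, the rejection sampling of the transition density, and the grid used for the normalization constant are ours), not the authors' implementation.

```python
import numpy as np

def bsa_estimate(k, f, x0, k_trunc, n_chains, seed=0):
    """Biased (BSA) Monte Carlo estimate of (phi, u^(k)) with
    phi(x) = pi(x) = delta(x - x0), i.e. an estimate of u(x0).
    The transition density p(x, .) is proportional to |k(x, .)| on Omega = [0, 1]."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 1001)
    total = 0.0
    for _ in range(n_chains):
        x, w, theta = x0, 1.0, f(x0)            # W_0 = 1, first term is W_0 f(x_0)
        for _ in range(k_trunc):
            vals = np.array([abs(k(x, y)) for y in grid])
            norm = vals.mean()                  # ~ integral of |k(x, y)| dy over [0, 1]
            bound = vals.max()
            while True:                         # rejection sampling from p(x, .) ~ |k(x, .)|
                y = rng.random()
                if rng.random() * bound <= abs(k(x, y)):
                    break
            w *= np.sign(k(x, y)) * norm        # W_j = W_{j-1} * k(x, y) / p(x, y)
            x = y
            theta += w * f(x)                   # theta_k accumulates W_j f(x_j)
        total += theta
    return total / n_chains
```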


3.2 Error Analysis of the Biased Stochastic Approach

In this subsection the Balanced Biased Stochastic Approach (BBSA) will be described. First, we start with the error estimates, which are the key element in the construction of the BBSA Monte Carlo algorithm.

The probabilistic error is $r_N \le 0.6745\,\sigma(\theta)\,\frac{1}{\sqrt{N}}$ (Dimov 1994), where $N$ is the number of realizations of the random variable $\theta$ and $\sigma(\theta) = (D\theta)^{1/2}$ is the standard deviation of the random variable $\theta$, for which

$$\mathrm{E}\,\theta_k[\varphi] = \big(\varphi, u^{(k)}\big) = \sum_{j=0}^{k} \big(\varphi, \mathcal{K}^{(j)} f\big),$$

where for a point $x = (x_0, \dots, x_j) \in G \equiv \Omega^{j+1} \subset \mathbb{R}^{d(j+1)}$, $j = 1, \dots, k$:

$$\big(\varphi, \mathcal{K}^{(j)} f\big) = \int_\Omega \varphi(x_0)\,\mathcal{K}^{(j)} f(x_0)\,dx_0 = \int_G \varphi(x_0)\,k(x_0, x_1)\cdots k(x_{j-1}, x_j)\,f(x_j)\,dx_0\,dx_1 \cdots dx_j = \int_G F(x)\,dx,$$

where $F(x) = \varphi(x_0)\,k(x_0, x_1)\cdots k(x_{j-1}, x_j)\,f(x_j)$, $x \in G \subset \mathbb{R}^{d(j+1)}$. Taking into account that

$$\Big[ D\Big( \sum_{j=0}^{k} \theta_k^{(j)} \Big) \Big]^{1/2} \le \sum_{j=0}^{k} \big( D\theta_k^{(j)} \big)^{1/2},$$

and using the variance properties, we have the following inequalities (Dimov 2008):

$$r_N \le \frac{0.6745}{\sqrt{N}} \sum_{j=0}^{k} \left\{ \int_G \big( \mathcal{K}^{(j)} \varphi f \big)^2 p\,dx - \Big( \int_G \mathcal{K}^{(j)} \varphi f\, p\,dx \Big)^2 \right\}^{1/2} \le \frac{0.6745}{\sqrt{N}} \sum_{j=0}^{k} \left( \int_G \big( \mathcal{K}^{(j)} \varphi f \big)^2 p\,dx \right)^{1/2} \le \frac{0.6745}{\sqrt{N}} \sum_{j=0}^{k} \big\| \mathcal{K}^{(j)} \big\|_{L_2}\, \|\varphi\|_{L_2}\, \|f\|_{L_2}.$$

In this case the estimate simply involves the $L_2$ norm of the integrand. Finally, we obtain the following estimate for the probable error:


$$r_N \le \frac{0.6745\,\|f\|_{L_2}\,\|\varphi\|_{L_2}}{\sqrt{N}\,\big(1 - \|\mathcal{K}\|_{L_2}\big)}.$$

Consider the sequence $u^{(1)}, u^{(2)}, \dots$ defined by the recursion formula $u^{(k)} = \mathcal{K}u^{(k-1)} + f$, $k = 1, 2, \dots$. Let us stress again that the formal solution of Eq. (3) is the truncated Neumann series $u^{(k)} = f + \mathcal{K}f + \dots + \mathcal{K}^{(k-1)}f + \mathcal{K}^{(k)}u^{(0)}$, $k > 0$, where $\mathcal{K}^{(k)}$ means the $k$th iteration of $\mathcal{K}$, i.e. $u^{(k)} = \sum_{i=0}^{k-1} \mathcal{K}^{(i)} f + \mathcal{K}^{(k)} u^{(0)}$.

We define the $k$-residual vector of the systematic error $r^{(k)}$: $r^{(k)} = f - (I - \mathcal{K})u^{(k)} = (I - \mathcal{K})\big(u - u^{(k)}\big)$. By the definition of $r^{(k)}$: $r^{(k)} = f - u^{(k)} + \mathcal{K}u^{(k)} = u^{(k+1)} - u^{(k)}$ and $r^{(k+1)} = u^{(k+2)} - u^{(k+1)} = \mathcal{K}u^{(k+1)} + f - \mathcal{K}u^{(k)} - f = \mathcal{K}\big(u^{(k+1)} - u^{(k)}\big) = \mathcal{K}r^{(k)}$. We have $r^{(0)} = u^{(1)} - u^{(0)} = \mathcal{K}u^{(0)} + f - u^{(0)} = \mathcal{K}f$, and $r^{(k+1)} = \mathcal{K}r^{(k)} = \mathcal{K}^{(2)}r^{(k-1)} = \dots = \mathcal{K}^{(k+1)}r^{(0)}$. So we obtain $u^{(k+1)} = u^{(k)} + r^{(k)} = u^{(k-1)} + r^{(k-1)} + r^{(k)} = \dots = u^{(0)} + r^{(0)} + \dots + r^{(k)} = u^{(0)} + \big(r^{(0)} + \mathcal{K}r^{(0)} + \mathcal{K}^{(2)}r^{(0)} + \dots + \mathcal{K}^{(k)}r^{(0)}\big) = u^{(0)} + \big(I + \mathcal{K} + \dots + \mathcal{K}^{(k)}\big) r^{(0)}$ (Curtiss 1954).

If $\|\mathcal{K}\|_{L_2} < 1$ then the Neumann series $u = \sum_{i=0}^{\infty} \mathcal{K}^{(i)} f$ is convergent and $u^{(k+1)} \to u$ as $k \to \infty$; therefore from $u^{(k+1)} = u^{(0)} + \big(I + \mathcal{K} + \dots + \mathcal{K}^{(k)}\big) r^{(0)}$ and $k \to \infty$ we have $u = u^{(0)} + (I - \mathcal{K})^{-1} r^{(0)}$. After simple transformations $u = \mathcal{K}u + f = \mathcal{K}u^{(0)} + \mathcal{K}(I - \mathcal{K})^{-1} r^{(0)} + f = u^{(1)} + \mathcal{K}(I - \mathcal{K})^{-1} r^{(0)}$. Doing this $k$ times we obtain $u = u^{(k)} + \mathcal{K}^{(k)}(I - \mathcal{K})^{-1} r^{(0)}$. Using the Cauchy–Schwarz inequality:

$$r^{(k)} = \big\| u - u^{(k)} \big\|_{L_2} \le \frac{\|\mathcal{K}\|_{L_2}^{k}\,\|r^{(0)}\|_{L_2}}{1 - \|\mathcal{K}\|_{L_2}} \le \frac{\|\mathcal{K}\|_{L_2}^{k}\,\|f\|_{L_2}\,\|\mathcal{K}\|_{L_2}}{1 - \|\mathcal{K}\|_{L_2}} = \frac{\|\mathcal{K}\|_{L_2}^{k+1}\,\|f\|_{L_2}}{1 - \|\mathcal{K}\|_{L_2}}.$$

Finally, we obtain the following estimate for the systematic error:

$$\big| (\varphi, u) - \big(\varphi, u^{(k)}\big) \big| \le \|\varphi\|_{L_2}\,\big\| u - u^{(k)} \big\|_{L_2} \le \frac{\|\varphi\|_{L_2}\,\|f\|_{L_2}\,\|\mathcal{K}\|_{L_2}^{k+1}}{1 - \|\mathcal{K}\|_{L_2}}.$$


3.3 Error Balancing Conditions for the Biased Stochastic Approach

Theorem 1  Let $\delta$ be the preliminary given error for solving the problem under consideration (2). We suppose that

$$r_N \le \frac{0.6745\,\|\varphi\|_{L_2}\,\|f\|_{L_2}}{\sqrt{N}\,\big(1 - \|\mathcal{K}\|_{L_2}\big)} \le \frac{\delta}{2}, \qquad r_k \le \frac{\|\varphi\|_{L_2}\,\|f\|_{L_2}\,\|\mathcal{K}\|_{L_2}^{k+1}}{1 - \|\mathcal{K}\|_{L_2}} \le \frac{\delta}{2}.$$

Then the lower bounds for $N$ and $k$ for the balanced biased stochastic approach are:

$$N \ge \left( \frac{1.349\,\|\varphi\|_{L_2}\,\|f\|_{L_2}}{\delta\,\big(1 - \|\mathcal{K}\|_{L_2}\big)} \right)^{2}, \qquad k \ge \frac{\ln \dfrac{\delta\,\big(1 - \|\mathcal{K}\|_{L_2}\big)}{2\,\|\varphi\|_{L_2}\,\|f\|_{L_2}\,\|\mathcal{K}\|_{L_2}}}{\ln \|\mathcal{K}\|_{L_2}}.$$

The proof of the two inequalities follows directly from the above assumption that the probable and systematic errors are each bounded by half of the preliminary given error. Namely, we find the following optimal ratio between $k$ and $N$:

$$k \ge \frac{\ln \dfrac{0.6745}{\sqrt{N}\,\|\mathcal{K}\|_{L_2}}}{\ln \|\mathcal{K}\|_{L_2}}.$$

Now if we choose $k$ to be close to its lower bound in the error balancing conditions, then the optimal ratio between $N$ and $k$ is:

$$N \ge \frac{0.455}{\|\mathcal{K}\|_{L_2}^{2k+2}}.$$

In fact, the two obtained lower bounds for $k$ are equivalent if $N$ is equal to the smallest natural number for which the first inequality in the theorem holds, because in this case one can easily see that the estimate for the systematic error is smaller than the estimate for the probable error, and the corollary follows directly. The following statement now follows easily:

Corollary 1  For the balanced biased stochastic algorithm one can choose the following values for $N$ and $k$:

$$k = \left\lceil \frac{\ln \dfrac{\delta\,\big(1 - \|\mathcal{K}\|_{L_2}\big)}{2\,\|\varphi\|_{L_2}\,\|f\|_{L_2}\,\|\mathcal{K}\|_{L_2}}}{\ln \|\mathcal{K}\|_{L_2}} \right\rceil, \qquad N = \left\lceil \frac{0.455}{\|\mathcal{K}\|_{L_2}^{2k+2}} \right\rceil; \qquad (13)$$

$$N = \left\lceil \left( \frac{1.349\,\|\varphi\|_{L_2}\,\|f\|_{L_2}}{\delta\,\big(1 - \|\mathcal{K}\|_{L_2}\big)} \right)^{2} \right\rceil, \qquad k = \left\lceil \frac{\ln \dfrac{0.6745}{\sqrt{N}\,\|\mathcal{K}\|_{L_2}}}{\ln \|\mathcal{K}\|_{L_2}} \right\rceil. \qquad (14)$$
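As an illustration of how (13) and (14) translate into concrete parameter choices, the following minimal Python sketch computes the lower bound for k from (13) and the lower bound for N from (14), given δ and the three L2 norms; the function name is ours, and the pairing of the two bounds follows the way the values are reported in the tables below.

```python
import math

def balanced_parameters(delta, norm_K, norm_phi, norm_f):
    """Lower bounds for the truncation level k (from (13)) and the number of
    chains N (from (14)) of the balanced biased stochastic approach (BBSA)."""
    # k: smallest k for which the systematic error is bounded by delta / 2
    k = math.ceil(math.log(delta * (1.0 - norm_K) /
                           (2.0 * norm_phi * norm_f * norm_K)) / math.log(norm_K))
    # N: smallest N for which the probable error is bounded by delta / 2
    N = math.ceil((1.349 * norm_phi * norm_f / (delta * (1.0 - norm_K))) ** 2)
    return k, N
```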

4 Unbiased Stochastic Approach

In this section the Unbiased Stochastic Approach (USA), based on an idea of Sylvain Maire (Dimov and Maire 2019), will be described.

4.1 Unbiased Approach for a Simplified Problem

We start with a simple case, assuming that $0 \le k(x, x') \le 1$ for any $x, x' \in \Omega \equiv [0,1]^d$, $x = (x_1, \dots, x_d)$, $x' = (x_1', \dots, x_d')$. Assume that $Y \equiv U[0,1]^d$ is a uniformly distributed random variable in $[0,1]^d$. Then we use the following stochastic representation:

$$u(x) = \mathrm{E}\big\{ k(x, Y)\,u(Y) + (1 - k(x, Y))\cdot 0 \big\} + f(x),$$

which suggests a representation of the function $u(x)$ based on the same ideas used in Dimov et al. (2015a) for solving linear systems of equations. The probability of absorption is $1 - k(x, Y)$, the score is the function $f(x)$, and if the trajectory is not absorbed it continues at the point $Y$. We consider a random trajectory (Markov chain) $T_i$ of length $i$ starting from state $X = x = x_0$ in the domain $\Omega$, with $\partial \equiv \mathbb{R}^d \setminus \Omega$, i.e.

$$T_i : x_0 \longrightarrow x_1 \longrightarrow \dots \longrightarrow x_i,$$

where $P(x_{i+1} \in [a,b] \mid x_i \ne \partial) = k(x, U_i)\,P(U_i \in [a,b])$, $P(x_{i+1} = \partial \mid x_i \ne \partial) = 1 - k(x, U_i)$, $P(x_{i+1} = \partial \mid x_i = \partial) = 1$, and $f(\partial) = 0$. Then we have:


$$u(x) = \mathrm{E}\left\{ \sum_{i=0}^{\infty} f(x_i) \,\Big|\, x_0 = x \right\} = \mathrm{E}\left\{ \sum_{i=0}^{\tau} f(x_i) \,\Big|\, x_0 = x \right\}, \quad \text{where } \tau = \inf_{i \ge 0}\{\, i : x_i = \partial \,\}.$$

Obviously, one can write

$$u(x) = \mathrm{E}\left\{ \sum_{i=0}^{\infty} f(x_{i+1}) \,\Big|\, x_0 = x \right\} + f(x) = \mathrm{E}\left\{ \mathrm{E}\left\{ \sum_{i=0}^{\infty} f(x_{i+1}) \,\Big|\, x_1 \right\} \right\} + f(x) = \int_\Omega k(x, y)\,u(y)\,dy + f(x). \qquad (15)$$

The algorithm in this simplified case may be described as follows.

Algorithm 1: Unbiased stochastic algorithm for 0 ≤ k(x, y) ≤ 1, x, y ∈ [0, 1]^d

1. Initialization. Input initial data: the kernel k(x, y), the function f(x), and the number of random trajectories N.
2. Calculations:
   2.1. Set score = 0
   2.2. Do j = 1, N
        Set x0 = x, test = 1
        Do While (test ≠ 0)
           score = score + f(x0)
           U = rand[0, 1]^d, V = rand[0, 1]
           If k(x0, U) < V then
              test = 0
           else
              x0 = U
           Endif
        Endwhile
      Enddo
3. Compute the solution: u(x) = score / N
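A minimal Python sketch of Algorithm 1 is given below. It assumes that the kernel k and the right-hand side f accept a point of Ω = [0, 1]^d as a NumPy array; the function name and signature are illustrative only.

```python
import numpy as np

def unbiased_simple(k, f, x, n_traj, dim=1, seed=0):
    """Unbiased stochastic algorithm (Algorithm 1) for 0 <= k(x, y) <= 1 on
    Omega = [0, 1]^dim: the walk is absorbed with probability 1 - k(x, U)."""
    rng = np.random.default_rng(seed)
    score = 0.0
    for _ in range(n_traj):
        x0 = np.asarray(x, dtype=float)
        while True:
            score += f(x0)
            U = rng.random(dim)
            if k(x0, U) < rng.random():   # absorption: stop this trajectory
                break
            x0 = U                        # otherwise continue from the point U
    return score / n_traj
```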

4.2 Unbiased Approach for the General Problem

Now we may describe the method for the case when the kernel $k(x, y)$ may be negative and we also allow $|k(x, y)| \ge 1$, provided that $k(x, y) \in L_2(\Omega \times \Omega)$ and the Neumann series converges. If $-1 \le k(x, Y) \le 0$, then $k(x, Y)$ is no longer the probability to continue. Instead, $|k(x, Y)|$ is used as the probability to continue, but now one needs to compute $-u(Y)$ instead of $u(Y)$; in this case, the score is $-f(x)$ instead of $f(x)$. If $|k(x, Y)| \ge 1$, then the random walk always continues and the score is $k(x, Y)\,f(x)$. The above described procedure leads to the following method, which is a modification of


the previous approach. Assume we deal with the Markov chain $T_i$ of length $i$ starting from state $X = x = x_0$ in the domain $\Omega$, with $\partial \equiv \mathbb{R}^d \setminus \Omega$:

$$T_i : x_0 \longrightarrow x_1 \longrightarrow \dots \longrightarrow x_i,$$

where in this case $P(x_{i+1} \in [a,b] \mid x_i \ne \partial) = \min(|k(x_i, x_{i+1})|, 1)$, $P(x_{i+1} = \partial \mid x_i \ne \partial) = 1 - \min(|k(x_i, x_{i+1})|, 1)$, and $P(x_{i+1} = \partial \mid x_i = \partial) = 1$. Then we have:

$$u(x) = f(x) + \mathrm{E}\left\{ \sum_{k=1}^{\infty} \left( \prod_{i=1}^{k} \hat{k}(x_{i-1}, x_i) \right) f(x_k) \,\Big|\, x_0 = x \right\},$$

where $\hat{k}(x, x')$ is defined in the following way:

$$\hat{k}(x, x') = \begin{cases} k(x, x') & \text{if } |k(x, x')| \ge 1, \\ 1 & \text{if } 0 \le k(x, x') \le 1, \\ -1 & \text{if } -1 \le k(x, x') \le 0. \end{cases}$$

Now we may present the algorithm in this more general case.

Algorithm 2: Unbiased stochastic algorithm for the general case

1. Initialization. Input initial data: the kernel k(x, y), the function f(x), and the number of random trajectories N.
2. Calculations:
   2.1. Set score = 0
   2.2. Do j = 1, N
   2.3. Set x0 = x, test = 1, prod = 1
        Do While (test ≠ 0)
           score = score + f(x0) * prod
           U = rand[0, 1]^d
           If |k(x0, U)| > 1 then
              prod = prod * k(x0, U)
              x0 = U
           else
              V = rand[0, 1]
              If |k(x0, U)| < V then
                 test = 0
              else
                 prod = prod * sign(k(x0, U)), x0 = U
              Endif
           Endif
        Endwhile
      Enddo
3. Compute the solution: u(x) = score / N
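The general case can be sketched in Python in the same way; the only changes with respect to the previous sketch are the weight prod and the handling of |k| ≥ 1, following the definition of k̂ above. Names and signature are again illustrative.

```python
import numpy as np

def unbiased_general(k, f, x, n_traj, dim=1, seed=0):
    """Unbiased stochastic algorithm (Algorithm 2) for a general kernel:
    if |k| >= 1 the walk always continues and the weight is multiplied by k;
    otherwise it continues with probability |k| and the weight keeps sign(k)."""
    rng = np.random.default_rng(seed)
    score = 0.0
    for _ in range(n_traj):
        x0, prod = np.asarray(x, dtype=float), 1.0
        while True:
            score += f(x0) * prod
            U = rng.random(dim)
            kv = k(x0, U)
            if abs(kv) > 1.0:
                prod *= kv                 # deterministic continuation
                x0 = U
            elif abs(kv) >= rng.random():
                prod *= np.sign(kv)        # continue, keep only the sign of k
                x0 = U
            else:
                break                      # absorbed
    return score / n_traj
```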

5 Numerical Examples and Discussion

To illustrate the strength of BBSA we give some examples representing Fredholm integral equations of the second kind. In the first four examples a comparison between the BBSA and BSA methods is analyzed, and in the last example we show a comparison between the BBSA and USA approaches.

5.1 Simple Case

First we start with the following simple test case:

$$u(x) = \int_\Omega k(x, x')\,u(x')\,dx' + f(x),$$

$\Omega \equiv [0, 1]$, $k(x, x') = \frac{1}{6}\,e^{x + x'}$, $f(x) = 6x - e^x$, $\varphi(x) = \delta(x)$. The exact solution is $u(x) = 6x$. We want to find the solution at one point, the middle of the interval. In order to apply the theorem we evaluate the $L_2$ norms: $\|\varphi\|_{L_2} = 1$, $\|\mathcal{K}\|_{L_2} = 0.5324$, $\|f\|_{L_2} = 1.7873$. Our Monte Carlo algorithm starts from $x_0 = 0.5$, so the exact solution is 3, and $\pi(x) = \delta(x)$. In Table 1 we present different values of $\delta$ and the corresponding estimates of $N$ and $k$ given by the theorem. The values of $N$ and $k$ are set to the smallest natural numbers for which the inequalities in the theorem hold. The first two columns with the relative error and the computational time measured in seconds are for the BSA, and the last two columns are for the case when BBSA is used. One can easily see that the BBSA method gives better results.
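Assuming the two helper sketches given earlier (balanced_parameters and bsa_estimate, both hypothetical illustrations rather than the authors' code), the first row of Table 1 can be reproduced as follows.

```python
import numpy as np

# Sect. 5.1 test case: k(x, x') = exp(x + x') / 6, f(x) = 6x - exp(x), u(x) = 6x.
kernel = lambda x, y: np.exp(x + y) / 6.0
rhs = lambda x: 6.0 * x - np.exp(x)

k_trunc, N = balanced_parameters(delta=0.1, norm_K=0.5324, norm_phi=1.0, norm_f=1.7873)
print(k_trunc, N)                                   # 6 and 2659, as in the first row of Table 1
print(bsa_estimate(kernel, rhs, 0.5, k_trunc, N))   # should be close to the exact value u(0.5) = 3
```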

Table 1  Relative error and computational time

δ        N         k    Expected rel. error   BSA rel. error   Time (s)   BBSA rel. error   Time (s)
0.1      2659      6    0.0333                0.0137           11         0.0132            5
0.03     29542     8    0.01                  0.0039           62         0.0036            42
0.02     66468     9    0.0067                0.0022           140        0.0020            70
0.0075   472659    10   0.0025                0.001            1167       9.3671e-04        529
0.007    542593    11   0.00233               6.9639e-04       1562       6.3582e-04        614
0.005    1063482   11   0.00167               6.4221e-04       4412       6.2479e-04        2202

5.2 Application to Biology

The next example is an analytically tractable model taken from biology (Doucet et al. 2010):

$$u(x) = \int_\Omega k(x, x')\,u(x')\,dx' + f(x),$$

$\Omega \equiv [0, 1]$, $k(x, x') = \frac{1}{3}\,e^{x - x'}$, $f(x) = \frac{2}{3}\,e^{x}$, $\varphi(x) = \delta(x)$. The exact solution is $u(x) = e^x$. We want to find the solution at one point, the middle of the interval. In order to apply the theorem we evaluate the $L_2$ norms: $\|\varphi\|_{L_2} = 1$, $\|\mathcal{K}\|_{L_2} = 0.3917$, $\|f\|_{L_2} = 1.1915$. Our Monte Carlo algorithm starts from $x_0 = 0.5$, so the exact solution is 1.6487, and $\pi(x) = \delta(x)$. In Table 2 we give different values of the preliminary given error $\delta$ and the corresponding estimates of $N$ and $k$ given by the theorem. One can see that the BBSA method gives slightly better results than the BSA Monte Carlo algorithm, and that the results obtained by the two Monte Carlo methods are closer when the initial probability is the $\delta$-function. We also see that the experimental relative error confirms the expected relative error.

Table 2  Relative error and computational time

δ        N        k   Expected rel. error   BSA rel. error   Time (s)   BBSA rel. error   Time (s)
0.23     132      3   0.1395                0.0123           0.5        0.0121            0.2
0.037    5101     4   0.0224                0.0041           11         0.0040            7
0.025    11172    5   0.0152                0.0014           16         0.0012            9
0.014    35623    6   0.0085                4.5725e-04       56         4.0010e-04        34
0.0055   230809   7   0.0033                1.5242e-04       424        9.8811e-05        346
0.0045   344788   7   0.0027                1.5242e-04       605        1.4893e-04        592


5.3 Application to Neural Networks

We study the following example taken from neural networks (Doucet et al. 2010):

$$u(x) = \int_\Omega k(x, x')\,u(x')\,dx' + f(x),$$

$\Omega \equiv [-2, 2]$, $k(x, x') = \dfrac{0.055}{1 + e^{-3x}} + 0.07$, $f(x) = 0.02\,(3x^2 + e^{-0.35x})$, $\varphi(x) = 0.7\,((x+1)^2 \cos(5x) + 20)$. We want to find $(\varphi, u)$, where $\varphi(x) = 0.7((x+1)^2 \cos(5x) + 20)$. The exact solution is 8.98635750518 (Georgieva 2003). This integral equation describes the procedure of training neural networks (Curtiss 1954). In order to apply the theorem we evaluate the $L_2$ norms: $\|\varphi\|_{L_2} = 27.7782$, $\|\mathcal{K}\|_{L_2} = 0.2001$, $\|f\|_{L_2} = 0.2510$. The $L_2$ norm of the kernel is smaller than in the previous example, so we can expect that smaller values of $k$ will be needed to obtain the balancing of the errors. In Table 3 we give different values of the preliminary given error $\delta$ and the estimates of $N$ and $k$ given by the theorem, where $N$ is equal to the smallest natural number for which the first inequality in the theorem holds and $k$ is slightly bigger than the smallest natural number for which the second inequality in the theorem is fulfilled. The expected relative error is obtained by dividing the preliminary given error by the exact value. One can easily see that the BBSA method gives much better results than the BSA Monte Carlo algorithm for larger values of $N$ and $k$. For smaller values the BSA Monte Carlo method gives better results, but the results obtained with the BBSA method are closer to the expected theoretical error. Using the BBSA method we see that the experimental relative error confirms the expected relative error. We also see that in the BBSA algorithm the computational time is bigger, because we use the acceptance-rejection method (or method of selection) for modeling the initial probabilities. In our example this is not necessary for the transition probabilities, because the kernel is a function of only one variable, and when we set the transition probabilities to be proportional to the kernel we obtain that they are equal to constant functions.

Table 3  Relative error and computational time

δ       N        k   Expected rel. error   BSA rel. error   Time (s)   BBSA rel. error   Time (s)
0.4     865      3   0.0445                0.0052           3          0.0239            5
0.2     3457     4   0.0223                0.0094           9          0.0121            23
0.1     13827    4   0.0111                0.0113           28         0.0086            46
0.05    55306    5   0.00556               0.0177           132        0.0032            222
0.028   176357   5   0.00312               0.0176           448        0.0031            540
0.02    345659   6   0.00233               0.0202           901        0.0013            1090


5.4 Application to Physics

It is interesting to see whether the considered algorithm can be applied to non-linear integral equations with polynomial non-linearity. One may expect that, if the non-linearity is not too strong, instead of considering functionals on branching Markov processes (see the analysis presented in Dimov 2008) one may use the presented balanced biased algorithm. We consider the following non-linear integral equation with a quadratic non-linearity (Dimov and Gurov 1997):

$$u(x) = \int_\Omega \int_\Omega k(x, y, z)\,u(y)\,u(z)\,dy\,dz + f(x), \qquad (16)$$

where $\Omega = E \equiv [0, 1]$ and $x \in \mathbb{R}^1$. The kernel $k(x, y, z)$ is non-negative. In our test $k(x, y, z) = \dfrac{x\,(a_2 y - z)^2}{a_1^2}$ and $f(x) = c - a_3 x$, where $a_1 > 0$, $a_3 \ne 0$, $a_2$ and $c$ are constants. We evaluate:

$$K_2 = \max_{x \in [0,1]} \int_0^1 \!\! \int_0^1 |k(x, y, z)|\,dy\,dz = \frac{2a_2^2 - 3a_2 + 2}{6a_1^2}.$$

The process converges when the following condition is fulfilled: $K_2$
80%) the tracked performance of the sensor was higher than that of the normal instrument (Jayaratne et al. 2018). Surface pollutant concentrations are diluted through convective mixing of the air mass caused by solar heating of the ground. Elevated pollutant plumes (e.g., from an industrial chimney, or long-range transport of dust at higher levels) may increase near-surface concentrations if the Mixing Layer Height (MLH) increases above the level of the dust/smoke plumes. Several studies identify that barometric pressure is important for modeling particulate matter because complicated wind flows may occur, resulting


in stagnant/stationary conditions with little circulation. Pollutants accumulate near the ground as a result of these conditions (Murthy et al. 2020). When barometric pressure was included in the model, the connection between particulate matter and cardiovascular mortality was marginally strengthened (Janssen et al. 2013). Finding associations between MLH and near-surface pollutant concentrations representative for a city like Berlin (flat terrain) appears to be impractical, particularly when traffic emissions are dominant (Geiß et al. 2017).

2.4 Reference Instrument

This research used five air quality control stations with traditional measuring methods as a reference. A beta-attenuation monitor (BAM) was used to measure PM10. Impactors, cyclones, detection parts, and a dynamic heating system are among the components of the standard instruments. They are situated in Sofia, Bulgaria, in the areas of Mladost, Druzhba, Nadezhda, Hipodruma, and Krasno Selo. Only one of these stations measures PM2.5, and for this reason PM2.5 is not used as a reference in this research.

2.5 PM Sensors

The PM sensor used in this research was the NovaFitness SDS011 laser particle sensor. The sensor operates on the laser diffraction principle: as air flows through the photosensitive region of the sensor, the laser illuminates the suspended particles and the light dispersed at a certain angle is captured. A particle size continuum is created by classifying these pulse signals into various particle size intervals in order to measure the mass concentration of the particles (Koehler and Peters 2015). Table 1 lists the parameters of the SDS011. It is important to note that the maximum humidity for accurate measurements is 70%. This means that the manufacturer assures accuracy up to this limit, yet it does not necessarily mean that above 70% humidity the measurements are not accurate.

2.6 The Wireless Network

In this research, the Wireless Sensor Network (WSN) of Luftdaten was used. It was made up of 300 fixed sensors covering Sofia. Each sensor was installed in a plastic tube that could be mounted on walls, balconies, street light poles, and other structures. Certain guidelines were developed to aid in obtaining the best representation of PM emissions in the city with the smallest possible number of sensors. The WSN used


Table 1  Characteristics and datasheet of SDS011-TRF

Parameter                   Value
Output                      PM10, PM2.5
Monitoring range            0–999.9 µg/m³
Supply voltage              5 V (4.7–5.3 V)
Power consumption (work)    70 mA ± 10 mA
Work temperature            −10 to +50 °C
Humidity (work)             Max. 70%
Accuracy                    70% for 0.3 µm, 98% for 0.5 µm
Pressure                    300–1100 hPa

fixed sensors that were mounted in 1 km grids to ensure that the majority of the downtown area was covered with adequate density. The following section describes the two-step calibration model proposed to improve the data quality of the low-cost sensors used in this network.

3 Data Calibration Model

This section describes the two-step calibration model for low-cost sensors, which uses a combination of supervised and unsupervised ML techniques. The model uses measurement data from low-cost sensors and from standard air monitoring stations as a reference instrument. Models for low-cost sensor data correction have already been shown to deliver good outcomes when trained against a reference instrument (Considine et al. 2021).

3.1 Datasets and Model

The model uses the five standard air monitoring stations located in Sofia and five low-cost sensors with the specifications shown in the previous section. The input variables are relative humidity (RH), atmospheric pressure (AP), and temperature; the value being calibrated is PM10. The input data come from both the standard instruments and the low-cost air quality sensors. Since the sensors' time resolution differed from that of the regular instruments, the hourly mean was used for calculation and evaluation. Because temperature, air pressure, and humidity affect the values of many air quality sensors, measurements of these variables are frequently recorded on-site and can be utilized in correction models (Holstius et al. 2014; Zusman et al. 2020).


3.2 Methods

A two-step model calibration method was designed in this analysis, as seen in Fig. 1. In the first step of the model, five supervised ML techniques are used and their results evaluated. The best performing one is then used in the second step, which includes anomaly detection (unsupervised ML) for removing outliers and investigating whether this improves the results. Here is a short introduction to the supervised ML techniques used in the first step of the calibration model. A decision tree (DT) is a decision-making model that employs a tree-like model of decisions and their potential consequences, including the implications of chance events (Safavian and Landgrebe 1991). The Gradient Boosting Decision Tree (GBDT) algorithm is an iterative decision tree algorithm made up of multiple decision trees (Kwok and Carter 1990); to obtain the final answer, the conclusions of all trees are added together. Random Forest (RF) is a combination of tree predictors in which each tree is based on the values of a random variable sampled independently and with the same distribution for all trees in the forest (Breiman 2001). In this research, an RF with 10 trees is applied. The Artificial Neural Network (ANN) is a mathematical model that simulates neuronal behavior and is automatically adjusted by back-propagation of errors (Gevrey et al. 2003). RF and ANN are stochastic techniques; therefore, several runs were performed in order to obtain objective results. Anomaly detection is a method of detecting unusual objects or occurrences in data sets that are out of the ordinary (Savage et al. 2014). The anomaly detector was used to remove outliers from the training dataset. As this is unsupervised learning, an evaluation with the same ANN setup was made before and after cleaning the dataset, to identify whether unsupervised learning is appropriate for this dataset.

Fig. 1 Model for finding anomalies, cleaning dataset, and evaluating the two ANN models


The learning process of the model was divided into two stages: training and testing. The raw data was split into two data sets at random, with 80% for training and 20% for testing. The model was first trained using the training data, and then its output was evaluated on the test data set.
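A minimal sketch of this first, supervised step is given below. It assumes the hourly averaged data sit in a pandas DataFrame with columns pm10_sensor, rh, temp, ap and a reference column pm10_ref; these column names, the ANN layer sizes, and the remaining hyperparameters are illustrative assumptions, while the 80/20 split and the 10-tree random forest follow the description above.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def first_step(df: pd.DataFrame):
    """Train the five supervised models on an 80/20 split and report MAE, MSE, R^2."""
    X = df[["pm10_sensor", "rh", "temp", "ap"]]
    y = df["pm10_ref"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    models = {
        "LR": LinearRegression(),
        "DT": DecisionTreeRegressor(random_state=42),
        "GBDT": GradientBoostingRegressor(random_state=42),
        "RF": RandomForestRegressor(n_estimators=10, random_state=42),
        "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=42),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        scores[name] = (mean_absolute_error(y_te, pred),
                        mean_squared_error(y_te, pred),
                        r2_score(y_te, pred))
    return models, scores
```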

4 Application of the Model on the Wireless Sensor Network

After calibrating the five sensors situated right next to the standard instruments, we apply the same model to other sensors from the Luftdaten network. Measurement data from 13 other sensors located within 500 meters of the regulatory stations are added to the analysis. The purpose of this adjustment is to predict the "true" PM10 concentration at a location, given the measured PM10 concentration, as precisely as possible.

4.1 Dataset and Model

For the purposes of this research we take data from low-cost sensors that are operated by citizens. The percentage of missing data among the low-cost sensors was 9.4%, as a result of broken sensors (no data captured), failures in box operation (e.g., unplugged), time-related issues (no valid time variable), and inability to transmit data. The average completeness of data per sensor was 89.2%, with a median of 96.7%. The same five supervised ML techniques were used and evaluated in the first step of the analysis: LR, DT, GBDT, RF, and ANN. The coefficient of determination R² is used as the performance indicator for choosing and assessing the models. In terms of variable selection, we only selected variables in the validation stage that appeared to enhance the findings.

4.2 Modelling the Pressure Measurement

The WSN consists of sensors installed at different heights. For evaluating the pressure measurements from the low-cost sensors and identifying any outliers, we use the barometric formula and the Pearson correlation coefficient. Both the values from the low-cost PM sensors and from the reference instruments were used as data inputs. The installation height of every low-cost sensor is known and added to the dataset. Therefore, we can also identify precisely the height difference between the sensors and analyze the atmospheric pressure measurements.


The barometric formula (1) is used to model how the pressure of the air changes with altitude, and it is as follows:

$$P = P_b \left[ \frac{T_b + L_b\,(h - h_b)}{T_b} \right]^{-\frac{g_0 M}{R^{*} L_b}} \qquad (1)$$

where:
P_b = reference pressure (Pa)
T_b = reference temperature (K)
L_b = temperature lapse rate (K/m) in ISA
h = height at which pressure is calculated (m)
h_b = height of reference level b (m; e.g., h_b = 11 000 m)
g_0 = gravitational acceleration: 9.80665 m/s²
R* = universal gas constant: 8.3144598 J/(mol·K)
M = molar mass of Earth's air: 0.0289644 kg/mol

The comparison statistical methods fall into two categories: parametric and nonparametric. Parametric comparisons are based on the premise that the variable is continuous and normally distributed. Nonparametric approaches are used where data is continuous with a non-normal distribution, or for any form of data other than continuous variables. As our calculation model involves normally distributed data and a strict sample size, a parametric method is chosen. Moreover, parametric methods are better at measuring the difference between groups than their nonparametric equivalents. To evaluate the AP results from the low-cost sensors, a correlation method was used. The parametric Pearson correlation coefficient (2) is used for comparing the two sources of data. It provides a measure of the linear association between two continuous variables (usually just referred to as the correlation coefficient). Correlation coefficients for each (x, y) pair are determined to carry out the evaluation; the test yields a correlation coefficient ranging from −1 to 1:

$$r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} \qquad (2)$$
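A small Python sketch of this evaluation step is shown below: the expected pressure at a sensor's installation height is computed with formula (1) and compared with the sensor readings through formula (2). The ISA troposphere lapse rate L_b = −0.0065 K/m is assumed here, and the function names are illustrative.

```python
import numpy as np

def barometric_pressure(P_b, T_b, h, h_b, L_b=-0.0065):
    """Pressure at height h from reference (P_b, T_b, h_b) via formula (1).
    L_b = -0.0065 K/m is the assumed ISA troposphere lapse rate."""
    g0, R_star, M = 9.80665, 8.3144598, 0.0289644
    return P_b * ((T_b + L_b * (h - h_b)) / T_b) ** (-g0 * M / (R_star * L_b))

def pearson_r(x, y):
    """Pearson correlation coefficient, formula (2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
```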

5 Results and Evaluation

Table 2 shows the results of the five supervised ML techniques. For a better evaluation of the low-cost sensors, the Mean Absolute Error (MAE) and the Mean Squared Error (MSE) are calculated together with the coefficient of determination (R squared).

Table 2  Results from supervised learning models

Type of model                       MAE     MSE      R²
Linear Regression                   11.19   288.12   0.77
Decision Tree                       8.89    170.03   0.86
Gradient Boosting Decision Tree     8.68    145.22   0.89
Random Forest                       7.96    125.57   0.90
Artificial Neural Network           6.27    83.90    0.94

The mean value of R squared between PM sensors and standard instruments without calibration was 0.62. The LR model showed the worst result after calibration with a mean R 2 of 0.77. The best correlation for PM10 came from the ANN model. The mean value of the R 2 was 0.94 (PM10), which matched the findings of previous studies (Rai et al. 2017; Qin et al. 2020). For long-term comparison, the low-cost sensor and the regular instrument were put in the same location, which was a common approach for sensor evaluation in previous studies (Hagan et al. 2018; Holstius et al. 2014; Jiao et al. 2016).

5.1 Evaluation of the Results for Relative Humidity

Relative humidity (RH) and temperature are considered to be the most important impact factors on particle sensor efficiency. High RH has been shown in previous studies to cause hygroscopic growth of particles and to modify their optical properties, resulting in substantial interference for PM sensors (Liu et al. 2020). Moreover, RH turned out to be of the highest importance in the RF and ANN models for the PM10 values (Table 3). In this evaluation, the low-cost sensors had nearly identical values compared to the standard instruments when RH was below 40%, while PM10 had a worse correlation when RH was above 80%. Unfortunately, the PM2.5/PM10 ratio could not be compared against the standard instruments because, as mentioned above, only one of the five standard instruments measures PM2.5.

Table 3  Field importance based on RF

Variable                Importance (%)
Relative Humidity       46.62
Temperature             30.71
Atmospheric Pressure    22.67


5.2 Results of the Calibration Model

Table 2 presents the statistical outcomes of each model's testing, where the Mean Absolute Error (MAE), Mean Squared Error (MSE), and R² were determined. The outputs of the five separate models showed that the ANN model performed best, with the RF model showing slightly worse results. The R² of PM10 increased from 0.62 to 0.90 and 0.94 for RF and ANN, respectively. Since the ANN model performed best out of the five models, slightly better than the RF model, it was chosen for the second step, which adds anomaly detection.

5.3 Improvements of the Model Through Unsupervised Learning

The ANN model was used as an autonomous model with sensor data and environmental variables as inputs. With the output values of the five independent models as inputs, the final ANN model was constructed after filtering the dataset for anomalies. The anomaly detector was set up to find the top 20 anomalous instances within a forest with 256 trees. The highest anomaly score of a tree was 68%. These 20 outliers were removed from the training set, the ANN was retrained on the cleaned dataset, and the model was evaluated again. The comparison showed an improvement, with R² increasing from 0.94 to 0.95. In addition, the MAE and MSE decreased by 5.16% and 14.69%, respectively (see Table 4). Therefore, the use of unsupervised learning in this study is considered useful and improves the result of the calibration. In conclusion, the final ANN model (ANN with anomaly detection) had the best calibration score, with the largest R squared and the best correlation, indicating that the proposed two-step model was more accurate than a single model in the calibration of low-cost sensors (Fig. 2).

Table 4  Results of the ANN model before and after anomaly detection

Type of model                 MAE             MSE               R²
ANN                           6.27            83.90             0.94
ANN with anomaly detection    5.62 (↓5.16%)   65.37 (↓14.69%)   0.95
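The text does not name a specific anomaly-detection algorithm, so the sketch below uses an isolation forest with 256 trees, removes the 20 highest-scoring training points, and retrains the ANN, as one plausible realization of this second step. X_tr, y_tr, X_te, y_te are assumed to be NumPy arrays from the split described in Sect. 3.2; all other choices are illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def second_step(X_tr, y_tr, X_te, y_te, n_outliers=20):
    """Remove the most anomalous training points, retrain the ANN, re-evaluate."""
    data = np.column_stack([X_tr, y_tr])
    forest = IsolationForest(n_estimators=256, random_state=42).fit(data)
    anomaly = -forest.score_samples(data)          # higher value = more anomalous
    keep = np.argsort(anomaly)[:-n_outliers]       # drop the n_outliers worst points
    ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=42)
    ann.fit(X_tr[keep], y_tr[keep])
    pred = ann.predict(X_te)
    return (mean_absolute_error(y_te, pred),
            mean_squared_error(y_te, pred),
            r2_score(y_te, pred))
```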


Fig. 2 ANN model combined with anomaly detection

5.4 Results from the Application on the WSN

Results for AP from 13 sensors installed at different heights within a 500 m radius were evaluated using the barometric formula and the data from the reference instrument. The installation heights of the sensors were 3, 6, 8, 12, and 18 m, while the height of the reference instruments is 2 m. The calculations showed a correlation between the sensors and the standard instruments with a mean value of the Pearson correlation coefficient of r = 0.91, which can be considered a very high correlation. However, one sensor, situated at a height of 6 m close to the Pavlovo station, showed a lower correlation with the standard instrument. This sensor showed higher AP after the correction and is left for further evaluation as an outlier. We evaluated the results after calibrating with the five ML techniques, and again the model with the ANN was the best performing one; thus this model was used for the second step. In the first step, after calibrating the 13 stations using the ANN, the R² was 0.92. After applying anomaly detection, the results improved and the R squared increased to R² = 0.93. The same sensor that had a lower correlation with the standard stations for the atmospheric pressure measurement had the lowest value of R² = 0.87. It can be concluded that the model can not only calibrate the data but also find outliers. The reasons for this sensor's disparity in correlation with the rest of the sensors that participated in the model should be investigated further. Once again, RH was the most important factor for the calibration results.


6 Conclusion and Future Research

The efficiency of the PM sensors was measured by comparing them with standard instruments using the wireless sensor network. To calibrate the fixed sensors, a two-step process was developed, and the model's results were evaluated. The major conclusions are the following. The findings of the two-step model were satisfactory: the R² of the fixed PM10 sensors increased from 0.62 to 0.95. The ANN model had the strongest impact of the five independent models, followed by the RF model, while the LR model was ineffective. Anomaly detectors can be an unsupervised alternative to classifiers in an unbalanced dataset; with them, unwanted sensor behavior is detected and removed from the dataset. Anomaly detection improved the final results in this study. Relative humidity turned out to be the most important factor for the calibration results. This was expected, as high humidity is the condition under which low-cost sensors show the greatest weaknesses in data quality, and in this study it had a higher impact than temperature and atmospheric pressure. The atmospheric pressure values of the standard stations and the sensors were evaluated using calculations with the barometric formula. The correlation was strong, which means that low-cost sensors may be considered a good source for modeling air pollution in vertical planning in further research. This type of calibration, using supervised and unsupervised ML, demonstrates the ability to improve the results of low-cost sensors. Furthermore, it might also be used for the assessment of outliers. Further investigation can be carried out to understand whether these outliers are damaged sensors, incorrectly installed ones, or sensors that work well but are exposed to hyper-local changes in atmospheric and air quality conditions. This novel study is useful to environmental specialists, scientists, policymakers, and medical physicists. Besides improving air quality, policymakers should focus on mechanisms for informing citizens about upcoming days with high air pollution. To do so they need high-resolution surveillance based on reliable monitoring results at lower costs for equipment and maintenance. Having only one standard instrument for measuring PM2.5 is highly insufficient for a city like Sofia, with nearly 1.5 million inhabitants, yet there is a WSN with nearly 300 sensors that measure PM2.5. By improving the reliability of PM sensors, studies like this one can be helpful for the successful control and management of the air quality in large cities like Sofia. Further studies will be beneficial in incorporating gas sensors into the WSN. In addition, it would be useful to analyze automotive emissions with mobile sensors integrated in vehicles, using the model improvements from this research.

Acknowledgements  The author would like to thank air.bg, luftdaten.info, Sofia Municipality, Air for Health, and Air Solutions for cooperating with data for this research. The work is supported by the National Scientific Fund of Bulgaria under grant DFNI KP-06-N52/5.


References Brantley, H., Hagler, G., Kimbrough, E., Williams, R., Mukerjee, S., Neas, L.: Mobile air monitoring data-processing strategies and effects on spatial air pollution trends. Atmosp. Meas. Tech. 7(7), 2169–2183 (2014) Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001) Chin, J.Y., Steinle, T., Wehlus, T., Dregely, D., Weiss, T., Belotelov, V.I., Stritzker, B., Giessen, H.: Nonreciprocal plasmonics enables giant enhancement of thin-film faraday rotation. Nat. Commun. 4(1), 1–6 (2013) Chow, J.C.: Measurement methods to determine compliance with ambient air quality standards for suspended particles. J. Air Waste Manag. Assoc. 45(5), 320–382 (1995) Collier-Oxandale, A., Casey, J.G., Piedrahita, R., Ortega, J., Halliday, H., Johnston, J., Hannigan, M.P.: Assessing a low-cost methane sensor quantification system for use in complex rural and urban environments. Atmosp. Meas. Tech. 11(6), 3569–3594 (2018) Considine, E.M., Reid, C.E., Ogletree, M.R., Dye, T.: Improving accuracy of air pollution exposure measurements: statistical correction of a municipal low-cost airborne particulate matter sensor network. Environ. Pollut. 268, 115833 (2021) Geiß, A., Wiegner, M., Bonn, B., Schäfer, K., Forkel, R., Schneidemesser, E.v., Münkel, C., Chan, K.L., Nothard, R.: Mixing layer height as an indicator for urban air quality? Atmosp. Meas. Tech. 10(8), 2969–2988 (2017) Gevrey, M., Dimopoulos, I., Lek, S.: Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecol. Model. 160(3), 249–264 (2003) Hagan, D.H., Isaacman-VanWertz, G., Franklin, J.P., Wallace, L.M., Kocar, B.D., Heald, C.L., Kroll, J.H.: Calibration and assessment of electrochemical air quality sensors by co-location with regulatory-grade instruments. Atmosp. Meas. Tech. 11(1), 315–328 (2018) Holstius, D.M., Pillarisetti, A., Smith, K., Seto, E.: Field calibrations of a low-cost aerosol sensor at a regulatory monitoring site in California. Atmos. Meas. Tech. 7(4), 1121–1131 (2014) Janssen, N., Fischer, P., Marra, M., Ameling, C., Cassee, F.: Short-term effects of pm2. 5, pm10 and pm2. 5–10 on daily mortality in The Netherlands. Sci. Total Environ. 463, 20–26 (2013) Jayaratne, R., Liu, X., Thai, P., Dunbabin, M., Morawska, L.: The influence of humidity on the performance of a low-cost air particle mass sensor and the effect of atmospheric fog. Atmosp. Meas. Tech. 11(8), 4883–4890 (2018) Jiao, W., Hagler, G., Williams, R., Sharpe, R., Brown, R., Garver, D., Judge, R., Caudill, M., Rickard, J., Davis, M., et al.: Community air sensor network (cairsense) project: evaluation of low-cost sensor performance in a suburban environment in the southeastern united states. Atmosp. Meas. Tech. 9(11), 5281–5292 (2016) Kampa, M., Castanas, E.: Human health effects of air pollution. Environ. Pollut. 151(2), 362–367 (2008) Koehler, K.A., Peters, T.M.: New methods for personal exposure monitoring for airborne particles. Curr. Environ. Health Rep. 2(4), 399–411 (2015) Kwok, S.W., Carter, C.: Multiple decision trees. In: Machine Intelligence and Pattern Recognition, vol. 9, pp. 327–335. Elsevier (1990) Liu, X., Jayaratne, R., Thai, P., Kuhn, T., Zing, I., Christensen, B., Lamont, R., Dunbabin, M., Zhu, S., Gao, J., et al.: Low-cost sensors as an alternative for long-term air quality monitoring. Environ. Res. 
185, 109438 (2020) Mead, M., Popoola, O., Stewart, G., Landshoff, P., Calleja, M., Hayes, M., Baldovi, J., McLeod, M., Hodgson, T., Dicks, J., et al.: The use of electrochemical sensors for monitoring urban air quality in low-cost, high-density networks. Atmos. Environ. 70, 186–203 (2013) Mouzourides, P., Kumar, P., Neophytou, M.K.A.: Assessment of long-term measurements of particulate matter and gaseous pollutants in south-east mediterranean. Atmos. Environ. 107, 148–165 (2015) Mukherjee, A., Agrawal, M.: World air particulate matter: sources, distribution and health effects. Environ. Chem. Lett. 15(2), 283–309 (2017)



Murthy, B., Latha, R., Tiwari, A., Rathod, A., Singh, S., Beig, G.: Impact of mixing layer height on air quality in winter. J. Atmos. Solar Terr. Phys. 197, 105157 (2020) Pope, C.A., III., Dockery, D.W.: Health effects of fine particulate air pollution: lines that connect. J. Air Waste Manag. Assoc. 56(6), 709–742 (2006) Qin, X., Hou, L., Gao, J., Si, S.: The evaluation and optimization of calibration methods for lowcost particulate matter sensors: inter-comparison between fixed and mobile methods. Sci. Total Environ. 715, 136791 (2020) Rai, A.C., Kumar, P., Pilla, F., Skouloudis, A.N., Di Sabatino, S., Ratti, C., Yasar, A., Rickerby, D.: End-user perspective of low-cost sensors for outdoor air pollution monitoring. Sci. Total Environ. 607, 691–705 (2017) Rasyid, A.R., Bhandary, N.P., Yatabe, R.: Performance of frequency ratio and logistic regression model in creating gis based landslides susceptibility map at lompobattang mountain, indonesia. Geoenviron. Disasters 3(1), 1–16 (2016) Safavian, S.R., Landgrebe, D.: A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 21(3), 660–674 (1991) Savage, D., Zhang, X., Yu, X., Chou, P., Wang, Q.: Anomaly detection in online social networks. Soc. Netw. 39, 62–70 (2014) Spinelle, L., Aleixandre, M., Gerboles, M.: Protocol of evaluation and calibration of low-cost gas sensors for the monitoring of air pollution. Publication Office of the European Union, Luxembourg (2013) Sun, L., Wei, J., Duan, D., Guo, Y., Yang, D., Jia, C., Mi, X.: Impact of land-use and land-cover change on urban air quality in representative cities of china. J. Atmos. Solar Terr. Phys. 142, 43–54 (2016) Wang, Y., Li, J., Jing, H., Zhang, Q., Jiang, J., Biswas, P.: Laboratory evaluation and calibration of three low-cost particle sensors for particulate matter measurement. Aerosol Sci. Technol. 49(11), 1063–1077 (2015) Zheng, B., Tong, D., Li, M., Liu, F., Hong, C., Geng, G., Li, H., Li, X., Peng, L., Qi, J., et al.: Trends in china’s anthropogenic emissions since 2010 as the consequence of clean air actions. Atmos. Chem. Phys. 18(19), 14095–14111 (2018) Zikova, N., Masiol, M., Chalupa, D.C., Rich, D.Q., Ferro, A.R., Hopke, P.K.: Estimating hourly concentrations of pm2. 5 across a metropolitan area using low-cost particle monitors. Sensors 17(8), 1922 (2017) Zusman, M., Schumacher, C.S., Gassett, A.J., Spalt, E.W., Austin, E., Larson, T.V., Carvlin, G., Seto, E., Kaufman, J.D., Sheppard, L.: Calibration of low-cost particulate matter sensors: model development for a multi-city epidemiological study. Environ. Int. 134, 105329 (2020)

Author Index

A
Arnaudov, Dimitar, 257

B
Baiou, Mourad, 111
Bremer, Jörg, 1

C
Caputo, Davide, 21

D
Dadush, Ofek, 161
Dimov, Ivan, 277, 315, 349

F
Fadda, Edoardo, 21
Fernández, Blanca Silva, 21
Fidanova, Stefka, 39, 277

G
Ganzha, Maria, 39
Georgieva, Rayna, 277, 349
Gioda, Ilaria, 21
Grigoreva, Natalia, 61

H
Habarta, Filip, 135

I
Ignatova, Maya, 95

K
Kirilov, Leoneed, 79
Kishkin, Krasimir, 257

L
Lyubenova, Velislava, 95

M
Malá, Ivana, 135
Manerba, Daniele, 21
Marek, Luboš, 135
Mavrov, Deyan, 215
Mitev, Yasen, 79
Mombelli, Aurélien, 111

O
Ostromsky, Tzvetan, 277

Q
Quilliot, Alain, 111

R
Roeva, Olympia, 39, 95

S
Štěpánek, Lubomír, 135

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
S. Fidanova (ed.), Recent Advances in Computational Optimization, Studies in Computational Intelligence 1044, https://doi.org/10.1007/978-3-031-06839-3


Stoenchev, Miroslav, 233

T
Tadei, Roberto, 21
Tamir, Tami, 161
Todorov, Daniel, 267

Todorov, Venelin, 233, 257, 267, 277, 289, 303, 315, 333, 349
Tranev, Stoyan, 187, 215
Traneva, Velichka, 187, 215

Z Zhivkov, Petar, 373