Theory and Methods of Vector Optimization (Volume One)

By Yu. K. Mashunin

This book first published 2020

Cambridge Scholars Publishing
Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2020 by Yu. K. Mashunin

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-5275-4831-7
ISBN (13): 978-1-5275-4831-2
TABLE OF CONTENTS
INTRODUCTION

VOLUME 1. THE THEORY AND METHODS OF VECTOR OPTIMIZATION

CHAPTER 1. VECTOR PROBLEMS IN MATHEMATICAL PROGRAMMING (VPMP)
1.1. Problems in defining vector optimization 6
1.2. A case study of vector optimization 8

CHAPTER 2. THE THEORETICAL BASES OF VECTOR OPTIMIZATION
2.1. The basic concepts and definitions of vector optimization 14
2.1.1. The normalization of criteria in vector problems 14
2.1.2. The relative evaluation of a criterion 15
2.1.3. The relative deviation 16
2.1.4. The relative level 17
2.1.5. Prioritizing one criterion over others in VPMP 18
2.1.6. The vector criterion prioritized over other criteria in VPMP 18
2.1.7. The job of the vector in prioritizing one criterion over another 19
2.2. The axiomatics of vector optimization 20
2.3. Principles of optimality in a VPMP solution 22
2.4. Theoretical results that are bound to the axiomatics of vector optimization 24
2.4.1. The properties of one-criterion and vector problems in linear programming 25
2.4.2. Definitions according to the theory of continuous and convex functions 26
2.4.3. Theoretical results of vector optimization 27
2.5. A geometrical interpretation of the axiomatics and principles of optimality in a VPMP solution 34
2.6. Conclusions on the theoretical foundations of vector optimization 38
CHAPTER 3. METHODS FOR SOLVING PROBLEMS OF VECTOR OPTIMIZATION
3.1. Geometrical solutions to linear vector problems 39
3.2. The algorithm and the practice of solving vector problems in mathematical programming (VPMP) with equivalent criteria 43
3.2.1. Algorithm 1. Solving a vector problem in mathematical programming with equivalent criteria 43
3.2.2. The practice of solving a vector problem in linear programming 46
3.2.3. The practice of solving a vector problem of fractional programming 53
3.2.4. Practicing the solution to a vector problem in nonlinear programming 57
3.3. The methodology for solving a problem in vector optimization with the given prioritized criterion 59
3.3.1. Solving a problem in vector optimization with the given priority 59
3.3.2. Algorithm 2. Solving a problem in vector optimization with the given priority 61
3.3.3. Theoretical analysis of the maximin problem with criterion priorities and weight coefficients 66
3.3.4. The practice of solving VPLP with the given priority 71
3.4. The methodology for solving the problem of vector optimization with a given value of the objective function 74
3.4.1. Algorithm 3. Select a point from the Pareto set with the given value of the objective function
3.4.2. An example of the choice of a point from the Pareto set with the given value of the objective function 76
3.5. Test examples of vector problems in linear programming 79

CHAPTER 4. RESEARCH AND ANALYSIS OF APPROACHES TO PROBLEM-SOLVING IN VECTOR OPTIMIZATION
4.1. Solutions to vector problems in mathematical programming (VPMP) based on the folding of criteria 83
4.1.1. Introduction to the study of vector problems in mathematical programming 83
4.1.2. Solutions to VPMP based on the folding of criteria (methods of the first type) 84
4.1.3. VPMP solution methods using criteria restrictions (methods of the second type) 86
4.1.4. Methods of target programming (methods of the third type) 87
4.1.5. Methods based on searching for a compromise solution (methods of the fourth type) 88
4.1.6. Methods based on human-machine procedures of decision-making (methods of the fifth type) 89
4.2. Analysis of the results of the solution in a test example and solution methods for VPMP 90
4.2.1. An analysis of the results of the solution in a test example 90
4.2.2. An analysis of the results of testing solution methods for VPMP 91
4.2.3. Conclusions on the methods for solving vector problems 94

CHAPTER 5. THE THEORY OF VECTOR PROBLEMS IN MATHEMATICAL PROGRAMMING WITH INDEPENDENT CRITERIA
5.1. A definition of vector problems of mathematical programming with independent criteria 96
5.2. Vector problems in linear programming with independent criteria 97
5.2.1. Vector linear-programming problems with independent criteria 97
5.2.2. The practice of solving vector problems in linear programming with independent criteria 100
5.2.3. Vector problems in linear programming with independent criteria, in modelling economic hierarchical systems 104
5.3. Two-level hierarchical systems developing in dynamics uniformly and proportionally 107
5.3.1. The theory of two-level hierarchical systems developing in dynamics uniformly and proportionally 107
5.3.2. The practical solution to a two-level hierarchical system which develops in dynamics uniformly and proportionally 111

CHAPTER 6. THE DUALITY OF VECTOR PROBLEMS IN LINEAR PROGRAMMING (VPLP)
6.1. The duality of a problem in linear programming 115
6.2. VPLP with a maximum vector of the objective function (VPLPmax) and the duality problem 117
6.2.1. Construction of dual VPLPmax 117
6.2.2. The algorithm of decision PLPmin with a set of restrictions 119
6.2.3. Algorithm 4. The solution to PLP with a set of restrictions 120
6.2.4. An algorithm for solving PLP in a set of restrictions with a restriction priority 122
6.2.5. Algorithm 5. The solution to PLP in a set of restrictions with a restriction priority 123
6.2.6. Duality theorems in VPLPmax 125
6.2.7. Duality VPLPmax in test examples 127
6.2.8. The solution to direct and dual VPLP with a prioritized criterion (MATLAB) 131
6.2.9. An analysis of dual problems on the basis of a Lagrange function 134
6.3. VPLP with a minimum vector of the objective function (VPLPmin) and a problem dual to it 137
6.3.1. Construction of dual VPLPmin 137
6.3.2. The algorithm for decision PLPmax with a set of restrictions 139
6.3.3. Algorithm 6. Decision PLPmax in a set of restrictions 140
6.3.4. Algorithm 7. The solution to PLP in a set of restrictions with a restriction priority 141
6.3.5. Duality theorems in VPLPmin 141
6.3.6. Duality VPLPmin in test examples 143
6.4. Duality in VPLP with a set of restrictions 145
6.4.1. An analysis of duality in VPLP with a set of restrictions 145
6.4.2. Algorithm 8. The solution to direct VPLP with equivalent criteria 146
6.4.3. Algorithm 9. The solution to dual VPLP with equivalent criteria 149
6.4.4. An analysis of duality in VPLP with a set of restrictions on the basis of a Lagrange function 154

CHAPTER 7. THE THEORY OF MANAGEMENT DECISION-MAKING BASED ON VECTOR OPTIMIZATION
7.1. Management decision-making: problems and methods 156
7.1.1. Problems with the theory of management decision-making 156
7.1.2. The definition of a management decision 158
7.1.3. The classification of management decisions 159
7.1.4. Decision-making 161
7.2. A model of decision-making under uncertainty 162
7.2.1. The conceptual formulation of the problem of decision-making 162
7.2.2. An analysis of the current ("simple") methods of decision-making 164
7.2.3. Transforming a decision-making problem into a vector problem of mathematical programming 165
7.3. The technology of decision-making under conditions of certainty and uncertainty 167

Bibliography 179
MAIN DESIGNATIONS
N – the set of natural numbers.
R – the set of real numbers; the numerical straight line.
R^n – arithmetic real n-dimensional space; Euclidean n-dimensional space.
{a, b, c, x, y, …} – the set consisting of the elements a, b, c, x, y, …
∀ – the universal quantifier: "for all."
∃ – the existential quantifier: "there exists."
∅ – the empty set.
∈ – the sign of membership of a set.
⊂ – the sign of inclusion of a set.
A×B – the product of the sets A and B.
A∪B – the union of the sets A and B.
X = {x1, …, xj, …, xN} – the set consisting of N elements, also written X = {xj, j = 1, …, N}, where j is the number of the index (object), N is the number of the last index, and N = {1, …, N} is the set of indices.¹
≡ – identically equal.
lim – a limit.
max X – the greatest (maximal) element of the set X.
min X – the least (minimum) element of the set X.
max_{x∈X} f(X) – the greatest (maximal) value of the function f on the set X.
min_{x∈X} f(X) – the least (minimum) value of the function f on the set X.
max_{x∈X} f(X) ≡ F – the greatest (maximal) value of the function f, to which the functional value F on the set X is identically assigned.

¹ A number, for example N = 100, can designate both a quantity N and a set of N indices; in writing, the set is distinguished by a bold italic N.
min_{x∈X} f(X) ≡ F – the least (minimum) value of the function f, to which the functional value F on the set X is identically assigned.
max F(X) = {max fk(X), k = 1, …, K} – the maximizing vector criterion, in which each component is maximized; K is the number of criteria, and K ≡ {1, …, K} is the set of criterion indices.
max F1(X) = {max fk(X), k = 1, …, K1} – the maximizing vector criterion in which each component is maximized; K1 is a number, and K1 ≡ {1, …, K1} is a subset of criterion indices, K1 ⊂ K.
min F2(X) = {min fk(X), k = K1+1, …, K} – the minimizing vector criterion; K2 ≡ {K1+1, …, K} ≡ {1, …, K2} is a subset of criterion indices, K2 is a number, K2 ⊂ K, K1 ∪ K2 = K.
INTRODUCTION
At the beginning of the 20th century, during research into commodity exchange, Vilfredo Pareto [1] mathematically formulated the criterion of optimality, the purpose of which is to estimate whether a proposed change improves common welfare in an economy. Pareto's criterion claims that any change which does not inflict loss on anyone and brings benefit to some is an improvement. Despite some imperfections, Pareto's criterion broadly makes sense (e.g., in the creation of development plans for an economic system when the interests of its constituent subsystems or groups of economic objects are considered).

According to Pareto's theory, the distribution of resources is optimal under the conditions of a perfectly competitive market structure. In other words, perfectly competitive markets guarantee that an economy will automatically reach points of optimality. However, a Pareto-optimal distribution of resources is not always socially optimal, as a society can choose (by means of the state's economic policy) to limit itself to any accessible point of usefulness, and this may or may not be the point answering to social optimality. Resources can be distributed effectively (according to Pareto) even in situations of extreme inequality. This is promoted, as a rule, by the economic policy of a state which provides benefits to one group of the population at the expense of another.

The Pareto criterion was later transferred to optimization problems with a set of criteria, where problems were considered in which optimization meant improving one or more indicators (criteria), provided that the others did not deteriorate. Thus multicriteria optimization problems arose. As a rule, the set of criteria was represented as a vector of criteria; hence vector optimization problems, or vector problems in mathematical programming (VPMP).
It immediately became clear that Pareto-optimal points exist in VPMP but, as a rule, there are fewer of them than admissible points. Further interest in the problems of vector optimization increased in connection with the development and widespread use of computer technology in the work of economists and mathematicians. The functioning of the majority of economic systems depends on a set of indicators (criteria), i.e., the substance of economic systems includes
multiple criteria; only the lack of mathematical methods for solving the problems of vector optimization (the cornerstone of such models) has constrained their use, both in theory and in practice. It later became clear that multicriteria (vector) problems arise not only in the economy, but also in technology, e.g., in the design of technical systems, the optimal design of chips, and in military science.

The solution of a vector optimization problem raises a number of difficulties which, apart from their conceptual character, come down to understanding what it means to solve a problem of vector optimization (i.e., to create the principle of optimality showing why one decision is better than another, and to define a choice rule for the best decision). The aim of this book is to find a solution to this problem.

This monograph presents a systemic analysis of the theory and methods of vector optimization and their practical applications, first in the modelling and forecasting of the development of economic systems, and secondly in problems of the design and modelling of technical systems. These models are used during the development and adoption of management decisions, on the basis of software developed in the MATLAB system for solving linear and nonlinear vector problems.

The monograph includes two volumes. The first volume, The Theory and Methods of Vector Optimization, includes seven chapters and considers the methods for solving vector problems in linear and nonlinear programming. The main focus is on the author's theory and methods of vector optimization. The bases and design methods for solving vector (multicriteria) problems in mathematical programming are also presented. The main difference from the standard approaches to solving vector problems is that these methods are constructed on axiomatics and principles of an optimal solution. This demonstrates the way in which one decision is more optimal than another.
As the decision is carried out on a set of several (system) criteria, systems analysis and systemic decision-making are together built into the decision algorithm. The algorithm allows us to solve linear and nonlinear problems with equivalent criteria and with a given priority of a criterion. Theoretical issues in the duality of vector problems in linear programming are investigated, and the interrelation between the theory of the adoption of an administrative decision and vector optimization is also presented. Problem-solving in decision-making is shown under conditions of certainty and uncertainty. The majority of
mathematical methods are followed not only by concrete numerical examples but also by their implementation in the MATLAB system.

The second volume, Vector Optimization Modelling of Economic and Technical Systems, presents the practical use of the theory and methods of vector optimization in the field of mathematical modelling and simulations of economic and technical systems. The second volume of the work is divided into two parts: economic systems and technical systems.

In the first part of the second volume, "Vector Optimization in the Modelling of Economic Systems," the following is considered: the theory, modelling, forecasting and adoption of administrative decisions at the level of production, market and regional systems. This part is divided into three chapters, where questions around the creation of mathematical models at the level of the firm, the market and the region are explored. An analysis is carried out of the "theories of the firm," on the basis of which a mathematical model of prediction and decision-making across many criteria (purposes) of the development of the firm is constructed. Modelling and the adoption of production decisions on the basis of such models can be carried out for small, medium-sized and major companies. At the level of the market, a mathematical model is constructed which includes the purposes of all consumers and producers together, in the form of a vector problem in linear programming. The constructed mathematical market model allows research to be conducted into the structure of the market and helps make decisions while taking purposefulness into account. Such a model resolves issues of equality of supply and demand in the dynamics of a competitive economy. At the level of the region, a mathematical model is constructed which includes economic targets for all sectors of the region and defines the dynamics of the development of the regional economy within investment processes.
In the second part, “Vector Optimization in the Modelling of Technical Systems,” questions are considered around the theory, modelling, development practice and adoption of management decisions in technical systems. This is presented in three chapters. The complexity of modelling technical systems is defined by the fact that, in the functioning of a technical object, a system is defined by a set of characteristics that depend on the parameters of a technical system. An improvement in one of these characteristics leads to a deterioration in another. There is a problem in determining such parameters which would improve all functional characteristics of the technical system at the same time, i.e., the solution to a vector (multicriteria) problem is necessary. These problems are now being solved at both technical (experimental) and mathematical (model) levels. The costs associated with the experimental
level are much higher than those at the information level. The methods being offered also solve this problem. The concept of optimal design in a technical system is developed, as well as the organization of this under conditions of certainty and uncertainty. Theoretical modelling problems are accompanied by numerical simulations of technical systems.

The two volumes of this monograph are based on research and analysis of the literature in the field of the theory and methods of vector optimization [1-126]. The first volume is constructed on the basis of research into foreign literature [1, 2, 26-59], domestic authors [3-10], and the author's own developments [11-23]. In the second volume, analysis and research are conducted first in the field of economics, i.e., the theory of the firm, market theory, and decision-making in the regional economy [59-96]; secondly in the field of technical research and decision-making [97-126]; and thirdly from experience of teaching both the theory of management and the development of management decisions at Far Eastern Federal University.

The book is designed for students, graduate students, scientists and experts dealing with theoretical and practical issues in the use of vector optimization, the development of models and predictions of developments in economic systems, and the design and modelling of technical systems.
VOLUME 1. THE THEORY AND METHODS OF VECTOR OPTIMIZATION This volume presents a theory and methods for solving vector optimization problems; the common difficulties surrounding the definition of vector optimization; a development of the axiomatics of vector optimization on the basis of which the principles of optimality in solving vector problems are formulated; a consideration of the theoretical questions related to the principles of optimality; methods for solving vector problems in mathematical programming, allowing for solutions at equivalent criteria and with the given prioritized criterion; and an investigation into the theory of duality in vector problems of linear programming. Further, the Appendix presents a comparison of the known approaches with the developed method, which is based on a normalization of criteria and the principle of a guaranteed result.
CHAPTER 1 VECTOR PROBLEMS IN MATHEMATICAL PROGRAMMING (VPMP)
1.1. Problems in defining vector optimization

A vector problem in mathematical programming (VPMP) is a standard mathematical-programming problem that includes a set of criteria which, together, represent a vector of criteria. It is important to distinguish between uniform and nonuniform VPMP: a uniform maximizing VPMP is a vector problem in which each criterion is directed towards maximization; a uniform minimizing VPMP is a vector problem in which each criterion is directed towards minimization; a nonuniform VPMP is a vector problem in which the set of criteria is shared between two subsets (vectors) of criteria, of maximization and minimization respectively, i.e., a nonuniform VPMP combines the two types of uniform problems.

According to these definitions, we will present a vector problem in mathematical programming with nonuniform criteria [11, 27] in the following form:

Opt F(X) = {max F1(X) = {max fk(X), k = 1, …, K1},   (1.1.1)
min F2(X) = {min fk(X), k = 1, …, K2}},   (1.1.2)
G(X) ≤ B,   (1.1.3)
X ≥ 0,   (1.1.4)

where X = {xj, j = 1, …, N} is a vector of material variables in the N-dimensional Euclidean space R^N (the designation j = 1, …, N is equivalent to j = 1,...,N); F(X) is a vector function (vector criterion) having K component functions (K is the power of the set K), F(X) = {fk(X), k = 1, …, K}. The set K consists of the subset K1 of maximizing components and the subset K2 of minimizing components, K = K1 ∪ K2; we therefore introduce the designation of the operation "opt," which includes max and min (a definition of the operation "opt" is given in section 2.3);
F1(X) = {fk(X), k = 1, …, K1} – the maximizing vector criterion; K1 is the number of such criteria, and K1 ≡ {1, …, K1} is the set of maximizing criteria (the problem (1.1.1), (1.1.3), (1.1.4) represents a VPMP with homogeneous maximizing criteria). Let us further assume that fk(X), k = 1, …, K1, are continuous concave functions (we will sometimes call them the maximizing criteria);

F2(X) = {fk(X), k = K1+1, …, K} – the vector criterion in which each component is minimized; K2 ≡ {K1+1, …, K} ≡ {1, …, K2} is the set of minimization criteria, and K2 is their number (the problems (1.1.2)-(1.1.4) are a VPMP with homogeneous minimization criteria). We assume that fk(X), k = K1+1, …, K, are continuous convex functions (we will sometimes call these the minimization criteria); thus K1 ∪ K2 = K, K1 ⊂ K, K2 ⊂ K;

G(X) ≤ B, X ≥ 0 – standard restrictions: gi(X) ≤ bi, i = 1,...,M, where the bi are real numbers, and the gi(X) are assumed continuous and convex;

S = {X ∈ R^N | X ≥ 0, G(X) ≤ B} ≠ ∅, i.e., the set of admissible points defined by restrictions (1.1.3)-(1.1.4) is not empty and represents a compact set.
The vector minimization function (criterion) F2(X) can be transformed into a vector maximization function (criterion) by multiplying each component of F2(X) by minus one. The vector criterion F2(X) is introduced into VPMP (1.1.1)-(1.1.4) to show that, in one problem, there are two subsets of criteria, K1 and K2, with essentially different directions of optimization.
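This sign-change transformation can be sketched in a few lines. The following is an illustrative Python fragment (the book's own computations use MATLAB); the criterion `f2` and the candidate points are hypothetical examples, not taken from the text:

```python
# Any minimization criterion can be recast as a maximization criterion
# by negating it: min f(X) over S is attained at the same point as
# max -f(X) over S. A hypothetical example criterion:

def f2(x):
    # example minimization criterion f2(X) = x1 + x2
    return x[0] + x[1]

def negated(f):
    # wrap a criterion so that maximizing the wrapper
    # is equivalent to minimizing the original
    return lambda x: -f(x)

g = negated(f2)

# the minimizer of f2 on a finite candidate set equals
# the maximizer of its negation
candidates = [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0), (0.0, 1.0)]
x_min = min(candidates, key=f2)
x_max = max(candidates, key=g)
assert x_min == x_max  # same point: (0.0, 0.0)
```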
We assume that the optimum points received by each criterion do not coincide for at least two criteria. If the optimum points coincide for all criteria, then we consider the problem trivial.
1.2. A case study of vector optimization

The information analysis of VPMP (1.1.1)-(1.1.4) that we will carry out assumes that the problem can be solved separately for each component of the vector criterion.

1) We show that exact upper and lower bounds exist for every criterion k ∈ K on the admissible set S. Indeed:

a) In accordance with the Weierstrass theorem (a continuous function attains its exact bounds on a compact set), for the non-empty set of admissible points S ≠ ∅ an optimum (best) point exists for each k-th component (k = 1, …, K) of the vector criterion: X*k ∈ R^N is the optimum point, and f*k ≡ fk(X*k) is the value of the objective function (criterion) at this point. These are obtained by solving VPMP (1.1.1)-(1.1.4) separately for each component of the vector criterion: the criteria with k ∈ K1 are, of course, solved for a maximum, and the criteria with k ∈ K2 for a minimum.

b) According to the same Weierstrass theorem, on the point set S it is possible to find the worst point for each criterion k = 1, …, K: X0k ∈ R^N is the worst point, and f0k ≡ fk(X0k) is the value of the criterion at this point. These are obtained by solving VPMP (1.1.1)-(1.1.4) separately for each component of the vector criterion in the opposite direction: the problems with k ∈ K1 are solved at a minimum, f0k ≡ min_{X∈S} fk(X), and those with k ∈ K2 at a maximum, f0k ≡ max_{X∈S} fk(X).

From here it follows that each criterion k ∈ K1 ⊂ K on the admissible set S can change from f0k ≡ min fk to f*k ≡ max fk:

f0k ≤ fk(X) ≤ f*k, k = 1, …, K1,   (1.2.1)

and each criterion k ∈ K2 ⊂ K on the admissible set S varies from the maximum value f0k = max fk, k = 1, …, K2, down to f*k, k = 1, …, K2:

f0k ≥ fk(X) ≥ f*k, k = 1, …, K2.   (1.2.2)

In (1.2.1), (1.2.2), f0k, k = 1, …, K, is what we call the worst value of the k-th criterion.
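When the K scalar problems cannot be solved in closed form, the best and worst values f*k and f0k can be approximated numerically. A rough Python sketch (the book's own computations use MATLAB), assuming a toy admissible set S = {x ≥ 0, x1 + x2 ≤ 1} and three hypothetical criteria, replaces the exact scalar optimizations by brute force over a grid:

```python
# Approximate the best value f*_k and worst value f0_k of each criterion
# on the admissible set S = {x >= 0, x1 + x2 <= 1}, by brute force over
# a grid (a rough stand-in for solving the K scalar problems exactly).

def admissible(x1, x2):
    return x1 >= 0 and x2 >= 0 and x1 + x2 <= 1

# hypothetical criteria: two maximizing, one minimizing
criteria = {
    "f1": (lambda x1, x2: 2 * x1 + x2, "max"),
    "f2": (lambda x1, x2: x1 + 2 * x2, "max"),
    "f3": (lambda x1, x2: x1 + x2, "min"),
}

n = 200
grid = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)
        if admissible(i / n, j / n)]

bounds = {}
for name, (f, sense) in criteria.items():
    values = [f(x1, x2) for x1, x2 in grid]
    best = max(values) if sense == "max" else min(values)
    worst = min(values) if sense == "max" else max(values)
    bounds[name] = (worst, best)  # (f0_k, f*_k)

print(bounds)  # f1 and f2 range over [0, 2], f3 over [0, 1]
```

Every fk(X) on S then lies between the two stored values, as in (1.2.1)-(1.2.2).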
2) We recall the definition of the set of Pareto-optimal points.

Definition (condition of Pareto optimality of a point): In a VPMP, a point X° ∈ S is Pareto-optimal if it is admissible and there is no other point X' ∈ S for which:

fk(X') ≥ fk(X°), k = 1, …, K1,
fk(X') ≤ fk(X°), k = 1, …, K2,

with strict inequality for at least one criterion. The set of points S° for which this condition of Pareto optimality is satisfied is called the Pareto-optimal point set, S° ⊂ S. It is also called the set of "non-improvable points."

Theorem (on the existence of Pareto-optimal points): In VPMP (1.1.1)-(1.1.4), if the set of admissible points S is not empty and represents a compact set, the components of the vector criterion (1.1.1) are concave functions, and the components of the vector criterion (1.1.2) are convex functions, then the Pareto-optimal point set S° is not empty: S° ≠ ∅, S° ⊂ S. The proof is in [24].

The Pareto-optimal point set lies, in a sense, between the optimum points received as a result of solving the VPMP separately for each criterion. For example, in a two-criteria VPMP, as shown in Fig. 1.1, S° represents some curve X*1 X*2.
Fig. 1.1. Geometrical interpretation of an example of the solution to a two-criteria VPMP: X*1, X*2 – the optimum points in the solution to the VPMP for the first and second criteria; X01, X02 – the worst points, respectively; X° – the decision at equivalent criteria; S – the admissible point set of the VPMP; S° – the Pareto-optimal points (subset); S1, S2 – the admissible point subsets in which the first and second criterion, respectively, have priority.
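The Pareto condition above translates directly into code. A minimal Python sketch (the criteria and the finite candidate set are illustrative, not from the book):

```python
# A direct transcription of the Pareto-optimality condition: X0 is
# Pareto-optimal in S if no admissible X' is at least as good on every
# criterion (>= for maximizing, <= for minimizing) and strictly better
# on at least one.

def dominates(fx, fy, senses):
    # does a point with criterion values fx dominate one with values fy?
    at_least_as_good = all(
        (a >= b) if s == "max" else (a <= b)
        for a, b, s in zip(fx, fy, senses))
    strictly_better = any(
        (a > b) if s == "max" else (a < b)
        for a, b, s in zip(fx, fy, senses))
    return at_least_as_good and strictly_better

def pareto_set(points, criteria, senses):
    vals = [tuple(f(p) for f in criteria) for p in points]
    return [p for p, v in zip(points, vals)
            if not any(dominates(w, v, senses) for w in vals)]

# two maximizing criteria on a few admissible points of x1 + x2 <= 1
f = [lambda p: 2 * p[0] + p[1], lambda p: p[0] + 2 * p[1]]
pts = [(1, 0), (0, 1), (0.5, 0.5), (0, 0), (0.25, 0.25)]
print(pareto_set(pts, f, ["max", "max"]))
# -> [(1, 0), (0, 1), (0.5, 0.5)]; the interior points below the
# line x1 + x2 = 1 are dominated
```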
Let us note that the limits (bounds) of change of any criterion on a Pareto set can be found as follows. For a maximizing criterion q ∈ K1, its minimum value is defined by viewing all optimum points over the set of maximizing criteria of the first type:

f_q^min = min_{k∈K1} fq(X*k), q = 1, …, K1,

where X*k is the optimum point received in the solution of the VPMP separately for the k-th criterion. For the minimization criteria of the second type there is a maximal value, defined by viewing all optimum points over the corresponding set of minimization criteria:

f_q^max = max_{k∈K2} fq(X*k), q = K1+1, …, K.

Thus, the values of a criterion fq(X) received on a Pareto set lie within the borders:

f_q^min ≤ fq(X) ≤ f*q, q = 1, …, K1,   (1.2.3)

f_q^max ≥ fq(X) ≥ f*q, q = K1+1, …, K.   (1.2.4)

Let us notice that f_q^min, q ∈ K1, can differ from the very worst value f0q, i.e., f0q ≤ f_q^min; similarly, f_q^max ≤ f0q, q ∈ K2.
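A small Python sketch of the bounds (1.2.3): only the criterion-wise optimum points X*k are scanned, not the whole Pareto set. The optimum points below are those of the two-criterion problem used in Example 1.1 further on and are assumed known; this is an illustration, not the book's MATLAB code:

```python
# The bounds (1.2.3)-(1.2.4) are computed by scanning the K
# criterion-wise optimum points X*_k only. Illustrative two-criterion
# maximizing example on the segment x1 + x2 = 1.

# optimum points X*_k of each maximizing criterion (assumed known)
optima = {
    1: (1.0, 0.0),   # X*_1 maximizes f1 = 2*x1 + x2
    2: (0.0, 1.0),   # X*_2 maximizes f2 = x1 + 2*x2
}
f = {1: lambda x: 2 * x[0] + x[1], 2: lambda x: x[0] + 2 * x[1]}

# f_q^min = min over k in K1 of f_q(X*_k)
f_min = {q: min(f[q](x) for x in optima.values()) for q in f}
# f*_q = f_q(X*_q), the best value of criterion q
f_star = {q: f[q](optima[q]) for q in f}

print(f_min, f_star)  # -> {1: 1.0, 2: 1.0} {1: 2.0, 2: 2.0}
# on the Pareto set each f_q varies between f_min[q] and f_star[q]
```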
3) There is a natural question: can a point, merely because it is Pareto-optimal, be considered the solution to VPMP (1.1.1)-(1.1.4)? The answer is no. Generally speaking, in VPMP (1.1.1)-(1.1.4) the point set S° is commensurable with, or can even coincide with, the set of admissible points S. Below are two examples illustrating this premise.

Example 1.1. Let us consider a vector problem with two linear criteria:

max F(X) = {max f1(X) ≡ 2x1 + x2,   (1.2.5)
max f2(X) ≡ x1 + 2x2},
x1 + x2 = 1, x1, x2 ≥ 0.

The Pareto set S° and the set of admissible points S are equal to each other and represent the set of points lying on the straight line X*1 X*2 (see Fig. 1.2a) with coordinates X*1 = {x*1 = 1, x*2 = 0}, X*2 = {x*1 = 0, x*2 = 1}. The points X*1 and X*2 are the solutions to VPMP (1.2.5), separately, for the corresponding criterion. Looking ahead, we note that the solution to VPMP (1.2.5) on the basis of criteria normalization and the maximin principle (a guaranteed result), under the condition of equivalence of the criteria, has the following form: λ° = 0.5, X° = {x1 = 0.5, x2 = 0.5}.

Example 1.2. Let us consider a vector problem with three linear criteria:

opt F(X) = {max F1(X) = {max f1(X) ≡ 2x1 + x2,
max f2(X) ≡ x1 + 2x2},
min F2(X) = {min f3(X) ≡ x1 + x2}},   (1.2.6)
x1 + x2 ≤ 1, x1, x2 ≥ 0.

The sets S° and S are equal to each other and represent the set of points lying in the plane region X*1 X*2 X*3 (see Fig. 1.2b) with coordinates X*1 = {x*1 = 1, x*2 = 0}, X*2 = {x*1 = 0, x*2 = 1}, X*3 = {x*1 = 0, x*2 = 0}. The points X*1, X*2 and X*3 represent the solutions to VPMP (1.2.6) for each criterion respectively. The result of the solution to the vector problem (1.2.6) at equivalent criteria: λ° = 0.43, X° = {x1 = 0.285, x2 = 0.285}, where λ° = λ1(X°) = λ2(X°) = λ3(X°) = 0.429.
Fig. 1.2. Geometrical interpretation of the distribution of point sets in VPMP: a) with two, and b) with three criteria. (In both cases the Pareto set S° coincides with the admissible set S, and X° marks the decision at equivalent criteria.)
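The equivalent-criteria solutions quoted in Examples 1.1 and 1.2 can be checked numerically. A grid-search sketch in Python (the book solves such problems exactly in MATLAB; here each relative level λk is assumed to be the normalization (fk − f0k)/(f*k − f0k), with the criterion-wise bounds of problem (1.2.6)):

```python
# Sketch of the criteria-normalization / maximin (guaranteed result)
# principle from Examples 1.1-1.2: each criterion is rescaled to a
# relative level in [0, 1] and the minimum level is maximized by grid
# search over the admissible set of problem (1.2.6).

def levels(x1, x2):
    # relative levels lambda_k(X) = (f_k - f0_k) / (f*_k - f0_k),
    # with f*_k, f0_k taken from the criterion-wise optima of (1.2.6)
    l1 = (2 * x1 + x2) / 2.0   # f1: f0 = 0, f* = 2 (maximized)
    l2 = (x1 + 2 * x2) / 2.0   # f2: f0 = 0, f* = 2 (maximized)
    l3 = 1.0 - (x1 + x2)       # f3: f0 = 1, f* = 0 (minimized)
    return min(l1, l2, l3)

n = 400
best = max(((levels(i / n, j / n), i / n, j / n)
            for i in range(n + 1) for j in range(n + 1)
            if i / n + j / n <= 1.0))
lam, x1, x2 = best
print(round(lam, 3), round(x1, 3), round(x2, 3))
# approaches lambda = 3/7 = 0.429 at x1 = x2 = 2/7 = 0.286,
# matching the values quoted in Example 1.2
```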
The given examples show that if Pareto-optimal points are found in VPMP (1.2.5) and (1.2.6), then only admissible points are found, and no more. The answer to the question of why such a point is better than the other points of the Pareto set remains open. In general, for VPMP (1.1.1)-(1.1.4), there is a problem not only with the choice of a Pareto-optimal point X° ∈ S°, but also with the definition of why a point X° ∈ S° ⊂ S is "more optimal" than another point X ∈ S°, X ≠ X°, i.e., with the choice of the principle of optimality on a Pareto set. Let us consider an example of a VPMP with two criteria and the question of the preferences (or priorities) of the person making the decisions.
Apparently, if the decision-maker considers the first criterion to be the priority, then it has its greatest priority at the optimum point X1*; the further from X1*, the more the priority of the first criterion over the second decreases. If the decision-maker considers the second criterion to be the priority, then it has its greatest priority at the point X2*; the further from X2*, the more the priority of the second criterion over the first decreases. At some distance from X1* and X2* there has to be a compromise point X° at which neither the first nor the second criterion has priority, i.e., they are equivalent. Thus, the question of the preferences (or priorities) of the decision-maker demands a more precise definition of the areas of priority of this or that criterion, and of the areas where the criteria are equivalent.
Let us now formulate, in general, the problem of finding the solution to VPMP (1.1.1)-(1.1.4). In our view, the problem of finding the solution to VPMP consists of the ability to solve three problems:
• first, to select a point from the set S° ⊆ S and to show its optimality with respect to the other points belonging to the Pareto set;
• secondly, to show by which criterion q ∈ K this point is prioritized over the other criteria k = 1,…,K, and by how much;
• thirdly, if the limits within which the prioritized criterion changes on the Pareto set S° are known (and these are easy to state in ratios (1.2.3), (1.2.4)), then, for a given numerical value of the criterion, to be able to find a point at which the error does not exceed a given bound.
The efforts of most vector-optimization researchers have been directed towards solving this problem in general, as well as its separate parts. Over the last three decades, a large number of articles and monographs have been devoted to methods for solving vector (multicriteria) problems. These have detailed theoretical research and methods of the following kinds:
1. VPMP solution methods based on the folding of criteria;
2. VPMP solution methods using criteria restrictions;
3. Methods of target programming;
4. Methods based on searching for a compromise solution;
5. Methods based on human-machine procedures for decision-making.
Research into and analysis of these methods is presented in Chapter Four. The analysis is carried out by comparing the results of the solution to the test
example using these methods, to a method based on the normalization of criteria and the principle of a guaranteed result [11, 14, 17], which is the cornerstone of this book.
CHAPTER 2
THE THEORETICAL BASES OF VECTOR OPTIMIZATION
This chapter presents the basic concepts and definitions used in the creation of methods to solve the problems of vector optimization; the principles of optimality in solving vector problems; the theoretical results characterizing the formulated principles of optimality in vector-optimization problem-solving; and conclusions on the theoretical bases and methods of vector optimization.
2.1. The basic concepts and definitions of vector optimization
To develop the principles of optimality and the methods for solving problems of vector optimization, we will look at the following concepts:
• the relative estimate;
• the relative deviation;
• the relative level;
• a criterion prioritized in VPMP over other criteria;
• a vector of the priority of a criterion in VPMP over other criteria;
• the given vector of the priority of a criterion in VPMP over other criteria [11, 27].
From this, we will derive a number of definitions that allow us to formulate the principles of optimality in solving vectoroptimization problems.
2.1.1. The normalization of criteria in vector problems. The normalization of criteria (the mathematical operation of a shift plus a scaling) is a one-to-one mapping of the function fk(X), k = 1,…,K, into the one-dimensional space R^1 (the function fk(X), k ∈ K, represents a function of
transformation from the N-dimensional Euclidean space R^N into R^1). To normalize criteria in vector problems, linear transformations are used:
fk(X) = ak·f'k(X) + ck, k ∈ K, (2.1.1)
or
fk(X) = (f'k(X) + ck)/ak, k ∈ K, (2.1.2)
where f'k(X), k = 1,…,K is the old (pre-normalization) value of the criterion; fk(X), k = 1,…,K is the normalized value; and ak, ck are constants. Normalizing the criteria by (2.1.2) in an optimization problem does not influence the result of the decision, i.e., the optimum point Xk*, k = 1,…,K is the same for the non-normalized and the normalized problem. There are two basic requirements on the mathematical operation known as "the normalization of criteria" when applied to vector problems in mathematical programming: a) the criteria should be measured in the same units; b) at the optimum points Xk*, k = 1,…,K, all criteria must have the same value (e.g., equal to 1, or 100%). These requirements are reflected in the following definitions.
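A small sketch, with a toy one-dimensional criterion of our own choosing (an assumption for illustration, not taken from the book), shows that a positive linear transformation of the form (2.1.1)-(2.1.2) leaves the optimum point unchanged:

```python
# Sketch: a positive linear transformation (2.1.1)-(2.1.2) of a criterion does
# not move its optimum point. The criterion f below is a toy example chosen
# for illustration, maximized over a grid on [0, 1].
xs = [i / 1000 for i in range(1001)]
f = lambda x: -(x - 0.3) ** 2            # original criterion f'(x)
g = lambda x: (f(x) + 5.0) / 2.0         # normalized criterion, a = 2, c = 5
x_f = max(xs, key=f)
x_g = max(xs, key=g)
print(x_f, x_g)  # -> 0.3 0.3
```

Both the raw and the normalized criterion attain their maximum at the same point, which is exactly the invariance claimed above.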
2.1.2. The relative evaluation of a criterion.
Definition 1. In the vector problem in mathematical programming (1.1.1)-(1.1.4), the relative estimate λk(X) is the normalized criterion fk(X), k ∈ K, at a point X ∈ S, with a normalization of the following type:
λk(X) = (fk(X) − fk°)/(fk* − fk°), k ∈ K, (2.1.3)
where fk(X) is the value of the k-th criterion at the point X ∈ S; fk* is the value of the k-th criterion at the optimum point Xk* ∈ S, received by solving the vector problem (1.1.1)-(1.1.4) separately for the k-th criterion; and fk° is the worst value of the k-th criterion on the admissible set S of the vector problem (1.1.1)-(1.1.4). It follows from normalization (2.1.3) that any relative estimate of a point X ∈ S by the k-th criterion conforms to both requirements imposed on normalization: first, the criteria are measured in relative units; secondly, the relative estimate λk(X), k ∈ K, on the admissible set
changes from zero at the point Xk°, k ∈ K (lim_{X→Xk°} λk(X) = 0) to unity at the optimum point Xk*, k ∈ K (lim_{X→Xk*} λk(X) = 1):
∀k ∈ K, 0 ≤ λk(X) ≤ 1, ∀X ∈ S. (2.1.4)
2.1.3. The relative deviation.
Definition 1a. The relative deviation λ̄k(X), k = 1,…,K is also a normalized criterion k ∈ K at a point X ∈ S of VPMP (1.1.1)-(1.1.4), but with a normalization of the type:
λ̄k(X) = (fk* − fk(X))/(fk* − fk°), k ∈ K, (2.1.5)
where the values fk(X), fk*, fk° defining the k-th criterion are as described above. From (2.1.5) it follows that any relative deviation λ̄k(X) at a point X ∈ S by the k-th criterion also conforms to both requirements of normalization:
∀k ∈ K, lim_{X→Xk*} λ̄k(X) = 0; ∀k ∈ K, lim_{X→Xk°} λ̄k(X) = 1. (2.1.6)
Between λk(X) and λ̄k(X), k = 1,…,K, X ∈ S there exists a one-to-one association:
λk(X) = 1 − λ̄k(X), k = 1,…,K, X ∈ S. (2.1.7)
The relative estimates and deviations with the corresponding types of normalized criteria, taking into account the types of VPMP and the types of restrictions, are given in Table 2.1, where λk(X), k = 1,…,K designates the relative estimate by the k-th criterion, and λ̄k(X), k = 1,…,K the relative deviation of X ∈ S from the optimum by the k-th criterion.
Table 2.1: The normalization of criteria in problems of vector optimization.

Type of VPMP | Type of restrictions | Type of normalized criteria | Limits of the normalized criterion
Homogeneous maximization criteria (1.1.1), (1.1.3)-(1.1.4) | 0 ≤ fk(X) ≤ fk* | λk(X) = fk(X)/fk*; λ̄k(X) = (fk* − fk(X))/fk* | 0 ≤ λk(X) ≤ 1; 1 ≥ λ̄k(X) ≥ 0
 | fkmin ≤ fk(X) ≤ fk* | λk(X) = (fk(X) − fkmin)/(fk* − fkmin); λ̄k(X) = (fk* − fk(X))/(fk* − fkmin) | 0 ≤ λk(X) ≤ 1; 1 ≥ λ̄k(X) ≥ 0
Homogeneous minimization criteria (1.1.2)-(1.1.4) | fk(X) ≥ fk* > 0 | λk(X) = fk(X)/fk*; λ̄k(X) = (fk(X) − fk*)/fk* | λk(X) ≥ 1; λ̄k(X) ≥ 0
 | fkmax ≥ fk(X) ≥ fk* | λk(X) = (fk(X) − fkmax)/(fk* − fkmax); λ̄k(X) = (fk(X) − fk*)/(fkmax − fk*) | 0 ≤ λk(X) ≤ 1; 1 ≥ λ̄k(X) ≥ 0
2.2. The axiomatics of vector optimization
Axiom 1 (regarding Pareto-optimal points in VPMP). In the vector problem in mathematical programming (1.1.1)-(1.1.4), a point X° ∈ S is Pareto-optimal if there is no other point X ∈ S that is at least as good by all criteria and strictly better by at least one of them; for maximized criteria the Pareto set is thus
S° = {X° ∈ S | ¬∃X ∈ S: ∀k ∈ K fk(X) ≥ fk(X°), ∃k ∈ K fk(X) > fk(X°)}, S° ⊆ S. (2.2.1)
The point set S°, which is Pareto-optimal, lies between the optimum points Xk* received from the solution of VPMP (1.1.1)-(1.1.4) separately for each criterion k = 1,…,K, K = K1 ∪ K2 (see Fig. 1.1). In a specific case, the Pareto-optimal point set S° can be equal to the admissible point set, as shown by the examples in sections 1.2 and 1.3, i.e., S° = S. The point set S° is allocated from S on the basis of the natural (i.e., given in the statement of the problem) criteria fk(X), k = 1,…,K, but the axiom is equally true for the normalized criteria:
λk(X) = (fk(X) − fk°)/(fk* − fk°), k = 1,…,K.
Axiom 2 (regarding the equality and equivalence of criteria at admissible points of VPMP). In the problem of vector optimization (1.1.1)-(1.1.4), two criteria with the indexes k ∈ K and q ∈ K are regarded as equal at a point X ∈ S if the relative estimates by the k-th and q-th criteria are equal to each other at this point: λk(X) = λq(X), k, q ∈ K, X ∈ S.
Criteria are equivalent in the vector problem (1.1.1)-(1.1.4) if, at any point X ∈ S, a comparison of the numerical values of the relative estimates λk(X), k = 1,…,K among themselves is possible, while at the same time no side conditions concerning criterion priority are imposed on the criteria fk(X), k = 1,…,K or, respectively, on the relative estimates λk(X), k = 1,…,K.
Axiom 3 (regarding a subset of points prioritized by a criterion in a vector problem). In a problem of vector optimization, a subset of points Sq ⊆ S is called the area of priority of the q-th criterion over the other criteria if ∀X ∈ Sq: λq(X) ≥ λk(X), k ∈ K, q ≠ k.
The definition of Axiom 3 also extends to the Pareto-optimal point set S° ⊆ S.
Axiom 3a (regarding points and criterion priorities in a Pareto set in VPMP). In a vector problem in mathematical programming, a subset of points Sq° ⊆ S° ⊆ S is called the area (subset) of points prioritized by the criterion q ∈ K over the other criteria if ∀X ∈ Sq°: λq(X) ≥ λk(X), k ∈ K, q ≠ k.
Let us provide some explanations. Axioms 3 and 3a allow us to break up, in VPMP (1.1.1)-(1.1.4), the admissible point set S, including the Pareto-optimal subset of points S° ⊆ S, into subsets:
• The first subset of points S' ⊆ S, in which the criteria are equivalent. The intersection of S' with the Pareto-optimal subset S° allocates the subset of Pareto-optimal points under the condition of equivalence of the criteria, S°° = S' ∩ S°, which, as we will show further, consists of one point X° ∈ S, i.e., X° = S°° = S' ∩ S°, S' ⊆ S, S° ⊆ S.
• The K subsets of points in which each criterion q = 1,…,K is prioritized over the other criteria k = 1,…,K, q ≠ k. On the one hand, the set of all admissible points S breaks into the subsets Sq ⊆ S, q = 1,…,K; on the other, the Pareto-optimal point set S° breaks into the subsets Sq° ⊆ S° ⊆ S, q = 1,…,K.
From here, the following ratios are correct:
S' ∪ (∪_{q∈K} Sq°) = S°, Sq° ⊆ S° ⊆ S, q = 1,…,K.
Let us note that a subset of points Sq°, in the area where the criterion q ∈ K is prioritized over the other criteria, belongs, first, to the subset
Sq° ⊆ Sq ⊆ S, (2.2.2)
and, secondly, to the Pareto-optimal subset of points:
Sq° ⊆ S° ⊆ S. (2.2.3)
Axiom 3 and the numerical expression of a prioritized criterion (2.1.10) allow us to identify each point X by means of the vector
Pq(X) = {pk^q(X) = λq(X)/λk(X), k = 1,…,K}: (2.2.4)
a) from the subset of points Sq with the prioritized criterion q ∈ K, which belongs to the set of admissible points S:
∀q ∈ K, X ∈ Sq ⊆ S. (2.2.5)
Let us note that ratios (2.2.4) and (2.2.5) can also be used in problems of clustering, but that is beyond the scope of this book;
b) from the subset of points Sq° with the prioritized criterion q ∈ K, which belongs to the Pareto-optimal point set S°: ∀q ∈ K, X ∈ Sq° ⊆ S°.
Thus, the complete identification of all admissible points, including those which are Pareto-optimal, is executed in a vector problem in the following sequence:
Set of admissible points X ∈ S → subset of Pareto-optimal points X ∈ S° ⊆ S → subset of points by prioritized criterion X ∈ Sq° ⊆ S° ⊆ S → isolated point X° ∈ Sq° ⊆ S° ⊆ S.
This is the most important result: it will allow us to derive the principle of optimality and to develop methods enabling any point from a Pareto set to be chosen [11, 27].
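The identification sequence can be illustrated with a small sketch (our own, using the criteria of Example 1.2 of Chapter 1 and a sample point chosen for illustration): a point is assigned to the priority area Sq of the criterion with the largest relative estimate, and the vector (2.2.4) expresses that priority numerically.

```python
# Sketch of the identification of Axiom 3 (criteria taken from Example 1.2 of
# Chapter 1, sample point chosen for illustration): a point X is assigned to
# the priority area S_q of the criterion with the largest relative estimate,
# and P_q(X) = {lambda_q / lambda_k} is the priority vector (2.2.4).
def estimates(x):
    # relative estimates for Example 1.2 on S = {x1 + x2 <= 1, x >= 0}
    return [(2 * x[0] + x[1]) / 2.0,   # lam1: f1 maximized, f1* = 2, f1^0 = 0
            (x[0] + 2 * x[1]) / 2.0,   # lam2: f2 maximized, f2* = 2, f2^0 = 0
            1.0 - (x[0] + x[1])]       # lam3: f3 minimized, f3* = 0, f3^0 = 1

x = (0.7, 0.1)
lam = estimates(x)
q = max(range(3), key=lambda k: lam[k])      # index of the priority criterion
P_q = [lam[q] / lam[k] for k in range(3)]    # priority vector (2.2.4)
print(q + 1, [round(p, 2) for p in P_q])  # -> 1 [1.0, 1.67, 3.75]
```

At the sample point the first criterion dominates, so X lies in S1, and every component of P1(X) is at least 1, as Axiom 3 requires.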
2.3. Principles of optimality in a VPMP solution
The principles of optimality follow from the analysis of the reasoning about the problem of vector optimization (1.1.1)-(1.1.4), the definitions and axioms of which were explained in section 2.2.
Principle of optimality 1: the solution to VPMP with equivalent criteria. A vector problem in mathematical programming with equivalent criteria is solved if the point X° ∈ S and the maximum level λ° are
found (the upper index "°" denotes an optimum) among all relative estimates, such that:
λ° = max_{X∈S} min_{k∈K} λk(X). (2.3.1)
Using the interrelation of expressions (2.1.8) and (2.1.9), we transform the maximin problem (2.3.1) into an extremum problem:
λ° = max_{X∈S} λ, (2.3.2)
λ ≤ λk(X), k = 1,…,K. (2.3.3)
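As an illustrative sketch (our own brute-force check over a grid, not the book's method of solution; for linear criteria the λ-problem is a linear program and would normally be handed to an LP solver), the maximin principle applied to Example 1.2 of Chapter 1 reproduces the solution λ° ≈ 0.429 quoted there:

```python
# Brute-force sketch of the lambda-problem (2.3.2)-(2.3.3) for Example 1.2 of
# Chapter 1: f1, f2 maximized, f3 minimized on S = {x1 + x2 <= 1, x >= 0}.
n = 200
grid = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1) if i + j <= n]

crit = [
    (lambda x: 2 * x[0] + x[1], max),  # f1, maximized
    (lambda x: x[0] + 2 * x[1], max),  # f2, maximized
    (lambda x: x[0] + x[1], min),      # f3, minimized
]

# Relative estimates lambda_k(X) = (f_k(X) - f_k^0)/(f_k^* - f_k^0),
# where f_k^* is the best and f_k^0 the worst value of f_k on S.
rel = []
for f, sense in crit:
    vals = [f(p) for p in grid]
    best = max(vals) if sense is max else min(vals)
    worst = min(vals) if sense is max else max(vals)
    rel.append(lambda p, f=f, b=best, w=worst: (f(p) - w) / (b - w))

x_opt = max(grid, key=lambda p: min(l(p) for l in rel))
lam_opt = min(l(x_opt) for l in rel)
print(round(lam_opt, 3), x_opt)  # close to 3/7 ~ 0.429 at x1 = x2 ~ 0.286
```

The exact solution is x1 = x2 = 2/7 with λ° = 3/7 ≈ 0.4286; the grid search recovers it to within the grid step.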
In the problem (2.3.2)-(2.3.3), all functions λk(X), k = 1,…,K are concave, including the functions of minimization with indexes from K2. This is bound to the fact that, for the normalized criteria
λk(X) = (fk(X) − fk°)/(fk* − fk°), k = 1,…,K, X ∈ S,
first, for k ∈ K1 we have (fk(X) − fk°) > 0 and (fk* − fk°) > 0, therefore λk(X) > 0; and, secondly, for k ∈ K2 we have (fk(X) − fk°) < 0 and (fk* − fk°) < 0, therefore again λk(X) > 0.
The problem (2.3.2)-(2.3.3) is called the λ-problem, in accordance with the terminology of the vector linear-programming problem from [5]. The λ-problem (2.3.2)-(2.3.3) has the dimension (N+1), as the result of its solution is an optimum vector X° ∈ R^(N+1), the (N+1)-th component of which is, in essence, the size λ°, i.e., X° = {x1°, x2°, ..., xN°, x(N+1)°}, where x(N+1)° = λ°, the (N+1)-th component of the vector X° being distinguished by its specificity. The received pair {λ°, X°} = X° characterizes the optimal solution to the λ-problem (2.3.2)-(2.3.3) and, accordingly, to VPMP (1.1.1)-(1.1.4) with equivalent criteria, solved on the basis of the normalization of criteria and the principle of a guaranteed result. In the optimal solution X° = {X°, λ°} we call X° the optimum point and λ° the maximum level. The point X° and the level λ° satisfy the restrictions (2.3.3), which can be written down as:
λ° ≤ λk(X°), k = 1,…,K. (2.3.4)
These restrictions are the main means of evaluating, in practice, the correctness of the results of solving vector optimization problems.
Principle of optimality 2: the VPMP solution with the given priority of one criterion.
VPMP with the given priority pk^q, k = 1,…,K of the q-th criterion is considered solved if the point X° and the maximum level λ°, among all relative estimates, are found such that:
λ° = max_{X∈S} min_{k∈K} pk^q·λk(X), q ∈ K. (2.3.5)
Using the interrelation (2.1.12) and (2.1.13), we similarly transform the maximin problem (2.3.5) into an extremum problem of the form:
λ° = max_{X∈S} λ, (2.3.6)
λ ≤ pk^q·λk(X), k = 1,…,K. (2.3.7)
We will call the problem (2.3.6)-(2.3.7) the λ-problem with the q-th criterion priority. The point X° = {X°, λ°} is the result of the solution to this λ-problem. It is also the result of the solution to VPMP (1.1.1)-(1.1.4) with the given prioritized criterion, solved on the basis of the normalization of criteria and the principle of a guaranteed result. In the optimal solution X° = {X°, λ°} we call X° the optimum point and λ° the maximum level. The point X° and the level λ° satisfy the restrictions (2.3.7), which can be written down as:
λ° ≤ pk^q·λk(X°), k = 1,…,K. (2.3.8)
These restrictions are a basis for evaluating, in practice, whether the results of solutions to vector optimization problems are correct or not.
The opportunity to formulate a concept for the operation "opt" follows from definitions 1 and 2 of the principles of optimality.
Definition 7. The mathematical operation "opt". The mathematical operation "opt" in VPMP (1.1.1)-(1.1.4) consists of the determination of the point X° and the maximum lower level λ° to which all criteria, measured in relative units, are raised:
λ° ≤ λk(X°) = (fk(X°) − fk°)/(fk* − fk°), k = 1,…,K, (2.3.9)
i.e., all criteria λk(X°), k = 1,…,K are equal to or greater than the level λ° (therefore λ° is also called the guaranteed result).
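A minimal sketch of the effect of a priority (reusing Example 1.1 of Chapter 1, an assumption made for illustration): weighting the second relative estimate in the maximin, as in (2.3.5)-(2.3.7) with q = 1, moves the solution from the equivalent-criteria point toward the optimum of the prioritized first criterion.

```python
# Sketch of principle of optimality 2 on Example 1.1 of Chapter 1 (assumed
# setup: S = {x1 + x2 = 1, x >= 0}, so lam1(x) = x1 and lam2(x) = 1 - x1).
# Prioritizing the first criterion weights the second relative estimate by
# p = p_2^1 >= 1 in the maximin, shifting the solution toward X1* = (1, 0).
def solve(p):
    xs = [i / 10000 for i in range(10001)]
    return max(xs, key=lambda x: min(x, p * (1 - x)))

x_equiv = solve(1.0)     # equivalent criteria
x_prior = solve(1.5)     # first criterion prioritized
print(x_equiv, x_prior)  # -> 0.5 0.6
```

Raising p further keeps shifting the solution along the Pareto set toward X1*, which is exactly the behaviour formalized by Theorem 7 below.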
2.4. Theoretical results that are bound to the axiomatics of vector optimization
We now need a number of classical results on solving problems in linear programming and from the theory of continuous and convex functions, which we will further use to prove the main theorems of vector optimization.
2.4.1. The properties of one-criterion and vector problems in linear programming. A common (one-criterion, or scalar) problem in linear programming is the problem of determining the maximal (minimum) value of the function
F* = max Σ_{j=1}^N cj·xj, (2.4.1)
under the restrictions
Σ_{j=1}^N aij·xj {≤, =, ≥} bi, i = 1,…,M, (2.4.2)
xj ≥ 0, j = 1,…,N, (2.4.3)
where cj, aij, bi are given constant values; X = {xj, j = 1,…,N} is the vector of unknowns, with N the number of unknowns; the set of admissible points (decisions) formed by restrictions (2.4.2)-(2.4.3) is designated S. In general, the problem of linear programming (2.4.1)-(2.4.3) is characterized by the following properties:
1) It may have no admissible decision, i.e., there is no X belonging to the set of admissible points S formed by restrictions (2.4.2)-(2.4.3).
2) It can have a single admissible optimal solution.
3) It can have several admissible optimal solutions (the convex hull of the set of optimum points Xi is then defined by Σ_i αi·Xi, Σ_i αi = 1, αi ≥ 0).
4) It can have a set of admissible decisions on which the target function grows without limit.
These properties carry over to the vector problem in linear programming which, with a maximized vector target function, takes the following form:
max F(X) = {Σ_{j=1}^N cj^k·xj, k = 1,…,K}, (2.4.4)
Σ_{j=1}^N aij·xj ≤ bi, i = 1,…,M, (2.4.5)
xj ≥ 0, j = 1,…,N. (2.4.6)
Let us further consider vector problems in linear programming for which the set of admissible decisions is not empty and there is a single
admissible optimal solution for each criterion. Thus, the studied class of vector problems in linear programming is narrowed to the second property above, but this class covers the majority of linear-programming problems [17].
2.4.2. Definitions according to the theory of continuous and convex functions
We present a number of classical results from the theory of continuous and convex functions, which we formulate as propositions.
Proposition 1. Weierstrass's theorem. A continuous function on a compact set reaches its minimum and maximal values.
Proposition 2. The sign-preservation of continuous functions. If at some point a continuous function is strictly positive (negative), then there is a vicinity of the specified point in which the function is also strictly positive (negative).
We call the lower (upper) envelope of some set of functions their minimum (maximum) on an admissible set.
Proposition 3. The lower (upper) envelope of a finite number of continuous functions is continuous.
Proposition 4. The lower envelope of a finite number of (strictly) concave functions on a convex set S is also (strictly) concave on the same set.
For completeness of the statement, we provide the proof of this known fact. Let us consider the case of strict concavity of all the functions fk(X), k = 1,…,K, i.e.,
fk(αX + βY) > α·fk(X) + β·fk(Y), k = 1,…,K; X, Y ∈ S, (2.4.7)
where α + β = 1, α, β ≥ 0. The inequality will not change if, in its right-hand side, we pass to the minimum over all indices k:
fk(αX + βY) > α·f(X) + β·f(Y), k = 1,…,K; X, Y ∈ S,
where f(X) = min_{k∈K} fk(X). As the right-hand side does not depend on k, in the left-hand side it is possible to pass to the minimum over k without changing the sense of the inequality:
f(αX + βY) > α·f(X) + β·f(Y); X, Y ∈ S,
which proves strict concavity. For merely concave functions the proof is demonstrated in the same way.
Proposition 5.
The upper envelope of a finite number of (strictly) convex functions is also (strictly) convex. The proof of this proposition is similar to the proof of the previous one.
Proposition 6. The theorem of sufficient conditions for a global maximum. If the admissible set S is not empty, compact and convex, and the continuous function f(X) is concave on S, then: 1) a local maximum is global; 2) the point set on which the maximum is reached is convex.
Proposition 7. If, in addition, the function f(X) is strictly concave, then the decision is unique, i.e., there is a single global maximum. For the proof see [18-27].
Proposition 8. The linear transformation f → af + b with a > 0 does not change the direction of convexity (concavity) and also keeps the strictness of the convexity (concavity).
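Proposition 4 can be illustrated numerically with a sketch (three concave functions of our own choosing; a random spot-check of the defining inequality, not a proof):

```python
# Numerical illustration of Proposition 4: the lower envelope
# f(X) = min_k f_k(X) of concave functions is concave. We spot-check the
# defining inequality f(a*X + b*Y) >= a*f(X) + b*f(Y) on random pairs of
# points for three concave functions (two downward parabolas and a line).
import random

fs = [lambda x: -(x - 1) ** 2, lambda x: -(x - 3) ** 2 + 1, lambda x: 2 - 0.5 * x]
envelope = lambda x: min(f(x) for f in fs)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(0, 4), random.uniform(0, 4)
    a = random.random(); b = 1 - a
    assert envelope(a * x + b * y) >= a * envelope(x) + b * envelope(y) - 1e-12
print("concavity inequality holds on all samples")
```

This is exactly the property that makes the λ-problem (2.3.2)-(2.3.3) well behaved: the minimum of the concave relative estimates is itself concave.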
2.4.3. Theoretical results of vector optimization
Using the properties and propositions explained in sections 2.4.1 and 2.4.2, we present a number of theorems bound to the principles of optimality in problems of vector optimization.
Theorem 1. On the existence of an optimal solution of VPMP with equivalent criteria. In a convex maximizing VPMP with continuous criteria, solved on the basis of the normalization of criteria and the principle of a guaranteed result, there is an optimal solution X° = {λ°, X°}.
We demonstrate the proof with the example of the maximizing VPMP (1.1.1), (1.1.3)-(1.1.4). The function λ = min_{k∈K} λk(X) represents the lower envelope of the continuous functions λk(X), k = 1,…,K and therefore, according to Proposition 3, is continuous on the admissible set S. As the set S is, by definition, compact, as a result of the solution to the problem
λ° = max_{X∈S} λ, where λ = min_{k∈K} λk(X), or λ ≤ λk(X), k = 1,…,K,
there exists a maximum point X° ∈ S. Therefore the point X° = {λ°, X°} is the required optimal solution to VPMP, and at this point:
λ° = min_{k∈K} λk(X°), or λ° ≤ λk(X°), ∀k ∈ K.
Theorem 2. The theorem of the most contradictory criteria in VPMP with equivalent criteria. In a convex VPMP with equivalent criteria, solved on the basis of the normalization of criteria and the principle of a guaranteed result, at the optimum point X° = {λ°, X°} there are always two criteria, which we designate by the indexes q ∈ K and p ∈ K (in a certain sense they are the most contradictory of all the criteria k = 1,…,K), for which the equality holds:
λ° = λq(X°) = λp(X°), q, p ∈ K, X ∈ S. (2.4.8)
The other criteria are defined by the inequalities:
λ° ≤ λk(X°), ∀k ∈ K, q ≠ p ≠ k. (2.4.9)
Proof. We demonstrate the proof of the theorem in two stages, using the example of the maximizing vector problem in mathematical programming (1.1.1), (1.1.3)-(1.1.4), whose solution, according to (2.3.4), is presented in the form λ° ≤ λk(X°), ∀k ∈ K.
At the first stage, we prove the existence of at least one criterion q ∈ K for which, in ratio (2.4.8), the equality λq(X°) = λ° holds. At the second stage, we prove the existence of two criteria for which the equality λ° = λq(X°) = λp(X°), q, p ∈ K, X ∈ S holds.
Let us assume that there is no criterion q ∈ K for which λq(X°) = λ°; then from (2.4.9) it follows that λ° < λk(X°) ∀k ∈ K, so the level λ° could be increased, which contradicts the maximality of λ° in (2.3.2); hence such a criterion q ∈ K exists. Assuming further that only this one criterion reaches the level λ°, one can choose a step ΔX > 0 from the point X° that increases λq(X°+ΔX), q ∈ K, and decreases the minimum criterion λp(X°+ΔX), p ∈ K, q ≠ p (if the criterion p ∈ K also increased, all the K criteria together would aspire to unity), which is equal to:
λp(X°+ΔX) = min_{k∈K} λk(X°+ΔX), p ∈ K,
and for which there is a strict inequality λq(X°+ΔX) < …
… fq(X) > fr(X°)]. (2.4.20)
Proof. Similar to Theorem 5.
Theorem 7. The theorem of the limits of the change of the size of a priority in VPMP. In the maximizing VPMP (1.1.1), (1.1.3)-(1.1.4), if the criterion q ∈ K has priority over the other criteria, and λq(X) > 0, ∀X ∈ S, then, when moving from the point X° (received on the basis of the normalization of criteria and the maximin principle, i.e., the guaranteed result, with equivalent criteria) to the point Xq* (received by solving the problem for the q-th criterion alone), the size of the priority pk^q, k = 1,…,K changes within the limits:
λq(X°)/λk(X°) ≤ pk^q ≤ λq(Xq*)/λk(Xq*), k = 1,…,K, ∀q ∈ K. (2.4.21)
Proof. We demonstrate the proof in two stages, using the example of the maximizing vector problem in mathematical programming (1.1.1), (1.1.3)-(1.1.4).
At the first stage, we prove that inequality (2.4.21) is valid for a VPMP with two criteria. If the first criterion has priority over the second, then the vector of priorities, proceeding from Axiom 3, is defined as follows: P1(X) = {p1^1(X), p2^1(X)}, X ∈ S, where p1^1(X) = λ1(X)/λ1(X) = 1. According to the definition of the principle of optimality, the optimum point X° and the level λ° are such that:
λ° = max_{X∈S} min (p1^1·λ1(X), p2^1·λ2(X)). (2.4.22)
We transform the received maximin problem into a λ-problem:
λ° = max_{X∈S} λ, (2.4.23)
λ − p1^1·λ1(X) ≤ 0, (2.4.24)
λ − p2^1·λ2(X) ≤ 0. (2.4.25)
Let us write down (2.4.23) in the form of an equality and, having added slack variables in (2.4.24) and (2.4.25), subtract (2.4.24) from (2.4.23) and (2.4.25). As a result, we receive a problem of optimization with respect to the first criterion under the condition:
λ1(X) ≤ p2^1·λ2(X). (2.4.26)
If the criteria are equivalent, then at the point X°, according to Theorem 2, p2^1(X°) = λ1(X°)/λ2(X°) = 1. If p2^1 < 1, the condition of the priority of the first criterion over the second is violated.
If p2^1 > p2^1(X1*) = 1/λ2(X1*), then inequality (2.4.26) has no effect as X° approaches X1*. Thus, the inequality can have an effect when the priority is set within the limits:
1 ≤ p2^1 ≤ 1/λ2(X1*).
It is similarly proved that, if in VPMP K > 2, the vector Pq can be set within the limits:
pk^q(X°) ≤ pk^q ≤ pk^q(Xq*), k = 1,…,K. (2.4.27)
The theorem is proved.
Conclusions from Theorem 7. According to Theorem 7, in the solution to VPMP (1.1.1), (1.1.3)-(1.1.5) with the various pk^q, k = 1,…,K satisfying (2.4.27), we receive the point set:
Sq° = {X° ∈ R^N | X° ≥ 0, λ° ≤ pk^q·λk(X°); pk^q(X°) ≤ pk^q ≤ pk^q(Xq*), k = 1,…,K; G(X) ≤ B}, q ∈ K. (2.4.28)
The point set Sq° represents the trajectory of movement (in the general case, an area) from the point X°, received when the criteria are equivalent, to the point Xq*, where the q-th criterion has the greatest priority over the other criteria. We will call such a trajectory an optimum trajectory. If any one of the criteria k ∈ K has priority, then X° → Xk*, k ∈ K, is an optimum trajectory. If a set of criteria Q ⊂ K has priority over the others, then X° → XQ° is an optimum trajectory, where XQ° is the optimum point received by solving the VPMP with the criteria of Q ⊂ K taken as equivalent.
Lemma (the interrelation of a prioritized criterion and Pareto points). Theorems 5, 6 and 7 define the following property of VPMP: to each vector of priorities pk^q, k = 1,…,K, q ∈ K lying within the limits (2.4.27), pk^q(X°) ≤ pk^q ≤ pk^q(Xq*), k = 1,…,K, there corresponds a point from the Pareto set S°, i.e., these theorems characterize the VPMP's direct property. The principle of completeness says that the inverse property must also be valid.
Theorem 8. The theorem of the given size of a prioritized criterion. In VPMP (1.1.1)-(1.1.4) with a priority, e.g., of the criterion q ∈ K, for any value fq lying within fq(X°) ≤ fq ≤ fq(Xq*), q ∈ K, there is always an optimum point Xq° ∈ Sq° ⊆ S° and a vector of priorities
Pq(Xq°) = {pk^q(Xq°) = λq(Xq°)/λk(Xq°), k = 1,…,K}, q ∈ K,
such that Pq(X°) ≤ Pq(Xq°) ≤ Pq(Xq*), q ∈ K.
2.5. A geometrical interpretation of the axiomatics and principles of optimality in a VPMP solution
Let us show a geometrical interpretation of the received theoretical results using the examples of two convex problems of vector optimization, with two and with four criteria.
Example 2.1. VPMP with two homogeneous criteria:
opt F(X) = {max f1(X), max f2(X)}, (2.5.1)
g(X) ≤ b, X = {x1, x2}, x1 ≥ 0, x2 ≥ 0, (2.5.2)
where the set of admissible points S is not empty. A geometrical interpretation of the solution of a two-criteria VPMP is presented in a general view in Fig. 1.1 (Chapter 1). In this problem: S is the set of admissible points formed by restrictions (2.5.2); X1*, X2* are the optimum points received by solving VPMP (2.5.1)-(2.5.2) for the first and second criterion respectively; f1* = f1(X1*), f2* = f2(X2*) are the values of the objective functions at the optimum points; X1°, X2° and f1° = f1(X1°), f2° = f2(X2°) are the points and values of the objective functions received by solving VPMP (2.5.1)-(2.5.2) for the minimum, i.e., the worst points; λk(X) = (fk(X) − fk°)/(fk* − fk°), k = 1, 2 are the relative estimates of a point X ∈ S by the first and second criteria respectively. At the optimum points X1*, X2*: λ1(X1*) = 1, λ2(X1*) < 1; λ2(X2*) = 1, λ1(X2*) < 1. Upon transition from the point X1* to X2* along any trajectory there will always be a point X ∈ S at which λ1(X) = λ2(X). The subset of such points,
S' = {X ∈ R^N | X ≥ 0, g(X) ≤ b, λ1(X) = λ2(X)},
defines the subset of points S' ⊆ S where the criteria are equivalent (for two criteria the relative estimates are equal). The set S' contains the point X° at which λ° is the maximum level and, according to Theorem 2, λ° = λ1(X°) = λ2(X°). The point X° also belongs to the Pareto-optimal point set S°, which lies between the points X1* and X2*. See Fig. 1.1. The set S' divides:
First, the set of admissible points S into two subsets S1 and S2; secondly, the Pareto-optimal point set S° into two subsets S1° and S2°. The subsets S1 and S1°, according to definition 5, are the area of priority of the first criterion over the second (characterized by λ1(X) > λ2(X), X ∈ S1° ⊆ S1 ⊆ S), and vice versa: the subsets S2 and S2° are the area of priority of the second criterion over the first (λ2(X) > λ1(X), X ∈ S2° ⊆ S2 ⊆ S). To analyze the choice of a point from the priority area S1°, for example, we construct the functions λ1(X), λ2(X) on the Pareto area S°, i.e., for X ∈ S°. See Fig. 2.1.
Fig. 2.1. A geometrical interpretation of the choice of a point from the priority area in VPMP with two criteria.
At a point X1° ∈ S1° ⊆ S° ⊆ S, the relative estimate by the first criterion reaches the level λ1(X1°). To lift λ2(X1°) to this level, it should be multiplied by p2^1 > 1, so that λ1(X1°) = p2^1(X1°)·λ2(X1°). From Fig. 2.1 we can see that the size of the priority changes from the point X°, where p2^1(X°) = λ1(X°)/λ2(X°) = 1, to p2^1(X1*) = λ1(X1*)/λ2(X1*) > 1. These reasonings can be transferred to the general case. Let us assume that in VPMP the criterion with the index q ∈ K has priority over the other criteria. Then, according to definition 5, at a point X ∈ Sq ⊆ S: λq(X) > λk(X), k ∈ K, q ∈ K, q ≠ k, and the numerical expression of the priority pk^q(X), k = 1,…,K corresponds to the numerical equalities
pk^q(X) = λq(X)/λk(X), q ∈ K, k = 1,…,K. We define the vector Pq = {pk^q, k = 1,…,K} such that pk^q ≥ pk^q(X), k = 1,…,K, q ∈ K, q ≠ k; and if strict inequality holds for even one component, the equalities (2.1.10) then take the form of inequalities:
λq(X) ≤ pk^q·λk(X), k = 1,…,K, q ∈ K.
Thus, in vector problems of linear and of convex programming with two criteria, the point X° received on the basis of the normalization of criteria and the principle of a guaranteed result is Pareto-optimal.
Example 2.2. The consideration of VPMP with four homogeneous criteria. The level lines of the criteria are circles, and the restrictions imposed on the variables are linear, so the problem admits a clear geometrical interpretation:
opt F(X) = {min f1(X) ≡ (x1 − 2)² + (x2 − 2)², (2.5.3)
min f2(X) ≡ (x1 − 2)² + (x2 + 1)², (2.5.4)
min f3(X) ≡ (x1 + 1)² + (x2 + 1)², (2.5.5)
min f4(X) ≡ (x1 + 1)² + (x2 − 2)²}, (2.5.6)
under the restrictions
0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1. (2.5.7)
A geometrical interpretation of this example is presented in Fig. 2.2. To solve the problem (2.5.3)-(2.5.7) for each criterion, as well as the subsequent λ-problem, the MATLAB system is used (the function fmincon(…), the solution to a nonlinear problem of optimization) [41].
Fig. 2.2. A geometrical interpretation of VPMP (2.5.3)(2.5.7)
Fig. 2.3. The results of the solution to VPMP: The optimum points; the relative estimates
The results of the solution for each criterion in the field of restrictions (2.5.7) are presented in Fig. 2.2 at the salient points:
Criterion 1: X1* = {x1 = 1, x2 = 1}, f1* = min f1(X) = f1(X1*) = 2; X1⁰ = {x1 = 0, x2 = 0}, f1⁰ = max f1(X) = f1(X1⁰) = 8.
Criterion 2: X2* = {x1 = 1, x2 = 0}, f2* = min f2(X) = f2(X2*) = 2; X2⁰ = {x1 = 0, x2 = 1}, f2⁰ = max f2(X) = f2(X2⁰) = 8.
Criterion 3: X3* = {x1 = 0, x2 = 0}, f3* = min f3(X) = f3(X3*) = 2; X3⁰ = {x1 = 1, x2 = 1}, f3⁰ = max f3(X) = f3(X3⁰) = 8.
Criterion 4: X4* = {x1 = 0, x2 = 1}, f4* = min f4(X) = f4(X4*) = 2; X4⁰ = {x1 = 1, x2 = 0}, f4⁰ = max f4(X) = f4(X4⁰) = 8.
The Pareto set lies between the optimum points X1*, X2*, X3*, X4*, i.e., the area of admissible points S formed by the restrictions (2.5.7) coincides with the Pareto-optimal point set S^o: S^o = S. At the optimum points Xk*, k = 1,…,4, all the relative estimates (the normalized criteria) are equal to unity:
λk(Xk*) = (fk(Xk*) − fk⁰)/(fk* − fk⁰) = 1, k = 1,…,4.
At the (worst) anti-optimum points Xk⁰, k = 1,…,4, all the relative estimates are equal to zero:
λk(Xk⁰) = (fk(Xk⁰) − fk⁰)/(fk* − fk⁰) = 0, k = 1,…,4.
From here, ∀k ∈ K, ∀X ∈ S, 0 ≤ λk(X) ≤ 1. Since in the problem (2.5.3)-(2.5.7) the criteria are symmetric, the point of the guaranteed result, X^o = {x1 = 0.5, x2 = 0.5}, is easily defined: it lies in the centre of the square, with the maximal relative assessment λ^o = 0.5833. Indeed, for the first criterion:
λ1(X^o) = (f1(X^o) − f1⁰)/(f1* − f1⁰) = ((0.5 − 2)² + (0.5 − 2)² − 8)/(2 − 8) = 0.5833,
and similarly for the other criteria: λ^o = λ1(X^o) = λ2(X^o) = λ3(X^o) = λ4(X^o). This is shown very clearly in Fig. 2.3. In Figs. 2.2 and 2.3 we can see that the area (point set) S1 = {X1* X12 X^o X41} is characterized by λ1(X) ≥ λk(X), k = 2,…,4, X ∈ S1 (Fig. 2.3 shows that λ1 ≥ λ2, λ3, λ4), i.e., the first criterion is prioritized there. In this area, the priority of the first criterion is always greater than or equal to unity: p^1_k(X) = λ1(X)/λk(X) ≥ 1, X ∈ S1.
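The guaranteed-result point X^o = (0.5, 0.5) with λ^o = 0.5833 can be cross-checked numerically. The book performs such computations in MATLAB (fmincon); the brute-force grid scan below is only an illustrative Python sketch, using the normalization λk(X) = (8 − fk(X))/6 derived above.

```python
import numpy as np

# Grid over the admissible square 0 <= x1, x2 <= 1 (restrictions (2.5.7)).
x1, x2 = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))

# The four criteria (2.5.3)-(2.5.6), all minimized; best f* = 2, worst f0 = 8.
f = [(x1 - 2)**2 + (x2 - 2)**2,
     (x1 - 2)**2 + (x2 + 1)**2,
     (x1 + 1)**2 + (x2 + 1)**2,
     (x1 + 1)**2 + (x2 - 2)**2]

# Relative estimates: lambda_k = (f_k - f0)/(f* - f0) = (8 - f_k)/6.
lam = [(8.0 - fk) / 6.0 for fk in f]

# Maximin: the guaranteed-result point maximizes the minimum relative estimate.
worst = np.minimum.reduce(lam)
i, j = np.unravel_index(np.argmax(worst), worst.shape)
print(x1[i, j], x2[i, j], worst[i, j])   # 0.5 0.5 0.58333...
```

Because the criteria are symmetric, the scan lands exactly in the centre of the square, matching the value obtained analytically above.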
The areas (point sets) prioritized by the corresponding criteria are shown similarly. Together they give the Pareto-optimal point set S^o, which (for this example) is equal to the set of admissible points: S^o = X^o ∪ S1 ∪ S2 ∪ S3 ∪ S4 = S. If we solve the problem (2.5.3)-(2.5.7) with two criteria, for example the third and the fourth, then the Pareto-optimal point set lies on the segment X3*X4*, and the point X^{oo} defines the result of the decision, the maximum level λ^{oo} = λ3(X^{oo}) = λ4(X^{oo}) = 0.7917, in accordance with Theorem 1. Thus, Fig. 2.3 shows all the situations which arise from the analysis of Theorems 1-8 and the principles of optimality 1 and 2.
2.6. Conclusions on the theoretical foundations of vector optimization
The theoretical foundations for solving vector optimization problems proposed in this chapter are based on the axiomatics of vector optimization, from which the principles of optimality in solving vector problems of mathematical programming are derived, both for equivalent criteria and for a given criterion priority. The theoretical results are proven in a number of theorems. Together, the axiomatics and the principles of optimality of vector optimization provide the prerequisites for creating constructive algorithms for solving vector problems in mathematical programming, both with equivalent criteria and with a given prioritized criterion.
CHAPTER 3 METHODS FOR SOLVING PROBLEMS OF VECTOR OPTIMIZATION
This chapter is devoted to methods for solving vector problems in mathematical programming. A geometrical solution and three methods for solving vector problems are considered: 1. Solving vector optimization problems with equivalent criteria. 2. Solving vector optimization problems with a given prioritized criterion. 3. Methods for choosing any point from the Pareto set for a given value of the objective function, with a given accuracy [11, 27].
3.1. Geometrical solutions to linear vector problems
Geometrical interpretations of, and solutions to, vector problems in linear programming (VPLP) can only be presented with two variables. Consider a problem of the form:
max F(X) = {max fk(X) ≡ (c_k1·x1 + c_k2·x2), k = 1,…,K}, (3.1.1)
a_i1·x1 + a_i2·x2 ≤ b_i, i = 1,…,M, (3.1.2)
x1 ≥ 0, x2 ≥ 0. (3.1.3)
The solution to VPLP (3.1.1)-(3.1.3) comes down to the definition of an optimum point X^o and the maximal relative assessment λ^o such that λ^o ≤ λk(X^o), k = 1,…,K, X^o ∈ S, where λk(X^o) = (fk(X^o) − fk⁰)/(fk* − fk⁰), k = 1,…,K. A problem with restrictions of the "≤" type is considered, so fk⁰ = 0, k ∈ K, and hence λk(X^o) = fk(X^o)/fk*, k ∈ K. The solution can be broken down into three stages: the creation of the area of admissible decisions; the definition of the optimum points for each criterion; and the definition of the optimum point of the VPLP and the relative estimates.
Method of construction
Stage 1. The creation of the area of admissible decisions.
1. We build the straight lines whose equations result from replacing the inequality signs in the restrictions (3.1.2) with exact equalities:
a_i1·x1 + a_i2·x2 = b_i, i = 1,…,M. (3.1.4)
2. We define the half-planes for each restriction of VPLP (3.1.4).
3. We build the polygon of admissible decisions.
Stage 2. The definition of the optimum points for each criterion.
4. We construct the objective of criterion k = 1 as a straight line (c_k1·x1 + c_k2·x2) = h passing through the origin or through the polygon of decisions. For this straight line we build the vector d = (c_k1, c_k2), k = 1.
5. We move the straight line (c_k1·x1 + c_k2·x2) = h in the direction of the vector d and, as a result, find the point Xk* at which the target function reaches its maximal value, or establish that the target function is unbounded from above.
6. We determine the coordinates of the maximum point of the target function and calculate its value: Xk*, fk*, k = 1.
7. We carry out steps 4, 5 and 6 for all criteria k = 1,…,K. As a result, for every criterion we obtain an optimum point and the value of the target function at it: Xk*, fk* = fk(Xk*), k = 1,…,K.
Stage 3. The calculation of the relative estimates and the definition of the optimum point of the VPLP.
8. We select the points in the field of restrictions lying between the extreme points Xk*, k = 1,…,K. The index of such points is designated l, and their set L (L will contain at least two points).
9. At each selected point we calculate the relative estimates:
λk(Xl) = fk(Xl)/fk*, k = 1,…,K, l = 1,…,L. (3.1.5)
Note that fk⁰ = 0.
10. We construct a diagram over the coordinate x1: along the axis the points Xl, l = 1,…,L are plotted, and above them the relative estimates λk(Xl), k = 1,…,K, l = 1,…,L.
11. In these coordinates we plot the relative estimates (3.1.5), passing from one extreme point to the other; at each extreme point the corresponding maximal assessment is equal to 1: λk(Xk*) = 1, k = 1,…,K.
12. The maximal size λ^o, obtained at the crossing of the segments of relative estimates, is the result of the decision along the x1 axis.
13. Parallel to the x1 axis we draw the x2 axis in the corresponding scale, and determine the coordinates of the relative assessment λ^o on x1 and x2.
We will illustrate these stages in a numerical example.
Example 3.1.
max F(X) = {max f1(X) = (12x1 + 15x2), max f2(X) = (20x2)},
restrictions: 4x1 + 3x2 ≤ 12, 2x1 + 5x2 ≤ 10, x1 ≥ 0, x2 ≥ 0.
Decision. Stage 1. The creation of the area of admissible decisions. The results are presented in Fig. 3.1.
Fig. 3.1. A geometrical solution to a vector problem in linear programming.
Stage 2. The definition of the optimum points for each criterion.
2.1. Criterion 1. The optimum is X1* = {x1 = 2 1/7, x2 = 1 1/7}, f1* = f1(X1*) = 42 6/7.
2.2. Criterion 2. X2* = {x1 = 0, x2 = 2}, f2* = f2(X2*) = 40.
Stage 3. Calculating the relative estimates and defining the optimum point of the VPLP. We select the points in the field of restrictions; there are two extreme points, X1* and X2*. Let us calculate the relative estimates:
at the point X1*: λ1(X1*) = f1(X1*)/f1* = 1, λ2(X1*) = f2(X1*)/f2* = 4/7 = 0.57;
at the point X2*: λ1(X2*) = f1(X2*)/f1* = 0.7, λ2(X2*) = f2(X2*)/f2* = 1.
Let us plot these relative estimates for the transition from the point X2* to X1*, during which the coordinates change within the limits 0 ≤ x1 ≤ 2 1/7, 2 ≥ x2 ≥ 1 1/7. See Fig. 3.2.
Fig. 3.2. A solution to a vector problem in linear programming: X1*, X2*, X^o = {x1 = 0.88, x2 = 1.65}, λ^o = 0.82.
Let us connect the corresponding relative estimates at the points X2* and X1*. As a result, we obtain the functions λ1(X) and λ2(X). At the cross point of these functions we obtain the maximal relative assessment λ^o = 0.82 and the coordinate x1 = 0.88. The relative estimates λ2(X1*), λ^o, λ1(X2*) form the "lower envelope", on which λ^o = 0.82 is the maximum relative assessment. To determine the coordinate x2 we construct the x2 axis parallel to x1 in the scale: one division of x2 = (x2(X2*) − x2(X1*))/(the length of the segment X1*X2* along the x1 axis), i.e., one division of x2 = (2 − 1.14)/10 = 0.086. After that we determine the coordinate x2 = 1.65.
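The graphical construction can be verified numerically: the λ-problem of Example 3.1 is itself a small linear program in (x1, x2, λ), max λ subject to λ ≤ fk(X)/fk*. The sketch below uses SciPy's linprog, which is not the book's tool (the book works in MATLAB), purely as a cross-check.

```python
from scipy.optimize import linprog

f1_star, f2_star = 300.0 / 7.0, 40.0       # criterion optima: 42 6/7 and 40

# Variables (x1, x2, lam); maximize lam <=> minimize -lam.
c = [0.0, 0.0, -1.0]
A_ub = [
    [-12.0 / f1_star, -15.0 / f1_star, 1.0],  # lam <= (12x1 + 15x2)/f1*
    [0.0, -20.0 / f2_star, 1.0],              # lam <= 20x2/f2*
    [4.0, 3.0, 0.0],                          # 4x1 + 3x2 <= 12
    [2.0, 5.0, 0.0],                          # 2x1 + 5x2 <= 10
]
b_ub = [0.0, 0.0, 12.0, 10.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(round(x1, 2), round(x2, 2), round(lam, 2))   # 0.88 1.65 0.82
```

The exact solution is x1 = 15/17, x2 = 28/17, λ = 14/17, which rounds to the values read off the drawing above.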
3.2. The algorithm and the practice of solving vector problems in mathematical programming (VPMP) with equivalent criteria
3.2.1. Algorithm 1. Solving a vector problem in mathematical programming with equivalent criteria
The design algorithm for the solution of the vector problem (1.1.1)-(1.1.4) with equivalent criteria follows from Axiom 2 and the principle of optimality 1. We present it in the form of a number of steps [11, 27].
Step 1. The vector problem (1.1.1)-(1.1.4) is solved separately for each criterion. For each criterion k ∈ K1, K1 ⊂ K the maximum is determined, and for each criterion k ∈ K2, K2 ⊂ K the minimum. As a result of the decision for each criterion we receive:
Xk* - the optimum point for criterion k, k = 1,…,K, K1 ∪ K2 = K;
fk* = fk(Xk*) - the value of the k-th criterion at this point, k = 1,…,K.
The values fk*, k ∈ K can serve as targets for the studied object, determine the orientation for each criterion, and serve as information for a systems analysis of the object.
Step 2. In the vector problem (1.1.1)-(1.1.4) we define the worst value of each criterion (the anti-optimum): fk⁰, k = 1,…,K (superscript "zero"). For this purpose the vector problem (1.1.1)-(1.1.4) for the criteria k = 1,…,K1 is solved for a minimum:
fk⁰ = min fk(X), k = 1,…,K1, G(X) ≤ B, X ≥ 0.
The vector problem (1.1.1)-(1.1.4) for the criteria k = 1,…,K2 is solved for a maximum:
fk⁰ = max fk(X), k = 1,…,K2, G(X) ≤ B, X ≥ 0.
As a result of the decision, for both subsets of criteria, we receive:
Xk⁰ = {xj, j = 1,…,N} - the anti-optimum point for the corresponding criterion, k = 1,…,K;
fk⁰ = fk(Xk⁰) - the value of the k-th criterion at this point.
Step 3. A systems analysis of the Pareto point set. At the optimum points Xk*, k = 1,…,K, received at the first step, the matrices of criteria and of relative estimates are defined:
F(X*) = ‖fq(Xk*)‖, q = 1,…,K, k = 1,…,K; Λ(X*) = ‖λq(Xk*)‖, q = 1,…,K, k = 1,…,K, (3.2.1)
λk(X) = (fk(X) − fk⁰)/(fk* − fk⁰), k = 1,…,K, (3.2.2)
where λk(X) is the relative assessment of the k-th criterion at the point X ∈ S; in the formula (3.2.2), fk* is the best decision for the criterion k ∈ K, received at the first step, and fk⁰, k = 1,…,K is the worst decision, received at the second step; K = K1 ∪ K2.
a) In a maximization problem, k ∈ K1, the difference between any current value fk(X), X ∈ S and the worst value fk⁰ is always greater than zero, (fk(X) − fk⁰) > 0, as is (fk* − fk⁰) > 0; therefore the relative assessment λk(X) = (fk(X) − fk⁰)/(fk* − fk⁰) > 0.
b) In a minimization problem, k ∈ K2, the difference between any current value fk(X), X ∈ S and the worst value fk⁰ is always less than zero, (fk(X) − fk⁰) < 0, as is (fk* − fk⁰) < 0; therefore the relative assessment is again λk(X) > 0.
What is required: to solve the vector problem in linear programming (3.4.5)-(3.4.6) and determine the optimum point with the second criterion prioritized.
Decision. The algorithm is represented in the MATLAB system as a sequence of steps in accordance with Algorithm 3 from section 3.4.1.
Steps 1 and 2. The VPMP (3.4.5)-(3.4.6) is solved with equivalent criteria. The results of the decision for each criterion:
f1* = 81470, f1⁰ = 40000, X1 = {x1 = 409.4, x2 = x3 = 0};
f2* = 197300, f2⁰ = 90000, X2 = {x1 = 17.2, x2 = 0, x3 = 392.2};
f3* = 50800, f3⁰ = 27780, X3 = {x1 = 17.2, x2 = 0, x3 = 392.2};
f4* = 81470, f4⁰ = 35700, X4 = {x1 = 409.4, x2 = x3 = 0};
f5* = 1136, f5⁰ = 2047, X5 = {x1 = 147.1, x2 = 0, x3 = 80};
f6* = 3570, f6⁰ = 8147, X6 = {x1 = x2 = 0, x3 = 299}.
Step 4. We construct the λ-problem:
max λ, (3.4.7)
λ − p^2_1·(199x1 + 143.8x2 + 133.8x3 − f1⁰)/(f1* − f1⁰) ≤ 0,
λ − p^2_2·(346x1 + 360x2 + 487.8x3 − f2⁰)/(f2* − f2⁰) ≤ 0,
λ − p^2_3·(121.2x1 + 122.9x2 + 124.2x3 − f3⁰)/(f3* − f3⁰) ≤ 0,
λ − p^2_4·(199x1 + 131.4x2 + 119.4x3 − f4⁰)/(f4* − f4⁰) ≤ 0,
λ − p^2_5·(5x1 + 5x2 + 5x3 − f5⁰)/(f5* − f5⁰) ≤ 0,
λ − p^2_6·(19.9x1 + 13.14x2 + 11.94x3 − f6⁰)/(f6* − f6⁰) ≤ 0,
with the restrictions
5x1 + 5x2 + 5x3 ≤ 2047,
19.9x1 + 13.14x2 + 11.94x3 ≤ 9400,
1.12x2 + 1.28x3 ≤ 502,
199x1 + 143.8x2 + 133.8x3 ≥ 40000,
346x1 + 360x2 + 487.8x3 ≥ 90000,
x1, x2, x3 ≥ 0. (3.4.8)
The results of the solution to the λ-problem (3.4.7)-(3.4.8), i.e., the VPMP with equivalent criteria, p^2_1 = p^2_2 = p^2_3 = p^2_4 = p^2_5 = p^2_6 = 1.0:
λ^o = 0.4181, X^o = {x1 = 195.6, x2 = 0, x3 = 137.6};
f1(X^o) = 57340, f2(X^o) = 134850, f3(X^o) = 40800, f4(X^o) = 55360, f5(X^o) = 1666, f6(X^o) = 5536;
λ1(X^o) = 0.4181, λ2(X^o) = 0.4181, λ3(X^o) = 0.5656, λ4(X^o) = 0.4295, λ5(X^o) = 0.4181, λ6(X^o) = 0.5704; λ^o ≤ λk(X^o), k = 1,…,6.
The obtained result, in the form of the point X^o and the maximal level λ^o, is the optimal solution to the vector problem (3.4.5)-(3.4.6) with equivalent criteria: {λ^o, X^o}. It corresponds to the principle of optimality, λ^o ≤ λk(X^o), k = 1,…,6, and, in accordance with Theorem 2, criteria one, two and five are the most contradictory.
Step 2. We select the second criterion as the priority criterion. This is reported to the display. The computer processes the data associated with the second criterion and displays:
fk(X2*) ≤ fk(X) ≤ fk(X^o), λk(X2*) ≤ λk(X) ≤ λk(X^o):
55900 ≤ f1(X) ≤ 57340, 0.3834 ≤ λ1(X) ≤ 0.4181;
197260 ≥ f2(X) ≥ 134850, 1.0 ≥ λ2(X) ≥ 0.4181;
50800 ≥ f3(X) ≥ 40800, 1.0 ≥ λ3(X) ≥ 0.5656;
50250 ≤ f4(X) ≤ 55360, 0.318 ≤ λ4(X) ≤ 0.4295;
2050 ≥ f5(X) ≥ 1666, 0.4810 ≥ λ5(X) ≥ 0.4181;
5025 ≤ f6(X) ≤ 5536, 0.682 ≥ λ6(X) ≥ 0.5704.
These data are the basic information for the decision-maker (DM). From the second column follow the limits of the priority vector p^2_k(X) = λ2(X)/λk(X), k = 1,…,6: p^2_k(X^o) ≤ p^2_k(X) ≤ p^2_k(X2*), k = 1,…,6.
Step 3. The DM indicates the desired value of the second criterion: f2 = 175200.
Step 4. λ2 = (f2 − f2⁰)/(f2* − f2⁰) = 0.888.
Step 5. ρ = (λ2 − λ2(X^o))/(1 − λ2(X^o)) = 0.8077.
Step 6. P^2 = {p^2_1 = 2.29, p^2_2 = 1.0, p^2_3 = 0.95, p^2_4 = 2.72, p^2_5 = 0.8·10⁷, p^2_6 = 1.325}.
Step 7. The construction of the λ-problem with the prioritized criterion. For this, the priority vector P^2 = {p^2_k, k = 1,…,6} is inserted into the λ-problem (3.4.7)-(3.4.8).
Step 8. The solution of the λ-problem. As a result of the decision we receive:
λ^o = 0.965, X^o = {x1 = 28.68, x2 = 0, x3 = 376.5};
f1(X^o) = 57340, f2(X^o) = 137850, f3(X^o) = 40800, f4(X^o) = 5536, f5(X^o) = 1666, f6(X^o) = 5535.
Step 9. The selection error is δf ≈ 20%. At subsequent iterations it drops to 9%.
Steps 10 and 11 are performed as needed.
Step 12. The end.
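Since all six criteria and all restrictions are linear, the equal-criteria λ-problem (3.4.7)-(3.4.8) is one ordinary linear program in (x1, x2, x3, λ). The book solves it in MATLAB; the SciPy sketch below is only an illustrative cross-check of the level λ^o ≈ 0.418 (the f2 coefficient 487.8 is taken from the restrictions, treating the 482.8 that appears once in the criterion list as a misprint).

```python
import numpy as np
from scipy.optimize import linprog

# Criteria coefficients and (best, worst) values from Steps 1-2;
# k = 1..4 are maximized, k = 5..6 are minimized (their f* < f0).
C = np.array([[199.0, 143.8, 133.8],
              [346.0, 360.0, 487.8],
              [121.2, 122.9, 124.2],
              [199.0, 131.4, 119.4],
              [5.0, 5.0, 5.0],
              [19.9, 13.14, 11.94]])
f_best = np.array([81470.0, 197300.0, 50800.0, 81470.0, 1136.0, 3570.0])
f_worst = np.array([40000.0, 90000.0, 27780.0, 35700.0, 2047.0, 8147.0])

# lam <= (C_k x - f0_k)/(f*_k - f0_k)  ->  -(C_k/den) x + lam <= -f0_k/den.
den = f_best - f_worst
A_lam = np.hstack([-C / den[:, None], np.ones((6, 1))])
b_lam = -f_worst / den

A_res = np.array([[5.0, 5.0, 5.0, 0.0],            # f5 <= 2047
                  [19.9, 13.14, 11.94, 0.0],       # f6 <= 9400
                  [0.0, 1.12, 1.28, 0.0],          # resource restriction <= 502
                  [-199.0, -143.8, -133.8, 0.0],   # f1 >= 40000
                  [-346.0, -360.0, -487.8, 0.0]])  # f2 >= 90000
b_res = np.array([2047.0, 9400.0, 502.0, -40000.0, -90000.0])

res = linprog(c=[0.0, 0.0, 0.0, -1.0],             # maximize lam
              A_ub=np.vstack([A_lam, A_res]),
              b_ub=np.concatenate([b_lam, b_res]),
              bounds=[(0, None)] * 3 + [(0, 1)])
x1, x2, x3, lam = res.x
print(round(lam, 4), round(x1, 1), round(x2, 1), round(x3, 1))
```

At the optimum the relative estimates of the three most contradictory criteria (one, two and five) are equal, which is exactly the structure reported in the text.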
3.5. Test examples of vector problems in linear programming
Task 1. Solve a vector problem in linear programming geometrically (construct the domain of solutions of the system of linear inequalities on the plane; determine the optimum points for each criterion; calculate the relative estimates and determine the optimum point of the VPLP).
1) opt F(X) = {max f1(X) = (12x1 + 15x2), max f2(X) = (5x1 + 3x2)}, x1 + x2 ≤ 6, x2 ≤ 4, 2x1 + x2 ≤ 10, x1 ≥ 0, x2 ≥ 0.
2) opt F(X) = {max f1(X) = (3x1 + 3x2), max f2(X) = (8x1 + 3x2)}, 3x1 + 4x2 ≤ 12, 5x1 + 3x2 ≤ 15, x1 − 2x2 ≤ 2, x1 ≥ 0, x2 ≥ 0.
3) opt F(X) = {max f1(X) = (10x1 + 6x2), max f2(X) = (4x1 + 6x2)}, 3x1 + 5x2 ≤ 30, 4x1 + 3x2 ≤ 24, x1 ≤ 4, x1 ≥ 0, x2 ≥ 0.
4) opt F(X) = {max f1(X) = (24x1 + 30x2), max f2(X) = (28x1 + 15x2)}, x1 + x2 ≤ 12, 3x2 ≤ 25, 2x1 + x2 ≤ 20, x1 ≥ 0, x2 ≥ 0.
5) opt F(X) = {max f1(X) = (30x1 + 18x2), max f2(X) = (8x1 + 12x2)}, 3x1 + 6x2 ≤ 22, 5x1 + 3x2 ≤ 35, 2x2 ≤ 19, x1 ≥ 0, x2 ≥ 0.
6) opt F(X) = {max f1(X) = (7x1 + 7x2), max f2(X) = (16x1 + 7x2)}, 3x1 + 12x2 ≤ 28, 15x1 + 9x2 ≤ 32, 2x2 ≤ 5, x1 ≥ 0, x2 ≥ 0.
7) opt F(X) = {max f1(X) = (12x1 + 13x2), max f2(X) = (8x1 + 3x2)}, 13x1 + 14x2 ≤ 36, 15x1 + 13x2 ≤ 45, x2 ≤ 3, x1 ≥ 0, x2 ≥ 0.
8) opt F(X) = {max f1(X) = (15x1 + 16x2), max f2(X) = (24x1 + 9x2)}, 4.5x1 + 6x2 ≤ 25, 7.5x1 + 4.3x2 ≤ 32, x2 ≤ 4, x1 ≥ 0, x2 ≥ 0.
9) opt F(X) = {max f1(X) = (21x1 + 23x2), max f2(X) = (25x1 + 11x2)}, 12x1 + 16x2 ≤ 37, 20x1 + 12x2 ≤ 47, x2 ≤ 2, x1 ≥ 0, x2 ≥ 0.
10) opt F(X) = {max f1(X) = (27x1 + 28x2), max f2(X) = (50x1 + 27x2)}, 4.5x1 + 6x2 ≤ 24, 7.5x1 + 4.5x2 ≤ 28, x2 ≤ 3, x1 ≥ 0, x2 ≥ 0.
Task 2. Solve the VPLP using the simplex method (linprog(), MATLAB) for each variant.
1) opt F(X) = {max f1(X) = (7x1 + 8x2 + 6x3 + 9x4), max f2(X) = (10x1 + 5x2 + 11x3 + 7x4)},
x1 + 3x2 + 2x3 + x4 ≤ 19.5, 4.5x1 + 1.5x2 + 3x3 + x4 ≤ 31.0, 1.5x1 + 3x2 + 1.5x3 − x4 ≤ 11.0, x1, x2, x3, x4 ≥ 0.
2) opt F(X) = {max f1(X) = (10x1 + 7x2 + 6x3 + 8x4), max f2(X) = (60x1 + 36x2 + 28x3 + 54x4)},
3x1 + 4.5x2 + 6x3 + 2x4 ≤ 42.0, 3x1 + 2x2 + x3 − 2x4 ≤ 6.0, 2x1 + x2 + 3x3 − x4 ≤ 9.5, x1, x2, x3, x4 ≥ 0.
3) opt F(X) = {max f1(X) = (17x1 + 18x2 + 16x3 + 19x4), max f2(X) = (14x1 + 28x2 + 6x3 + 24x4)},
2x1 + 2x2 + 4x3 + 2x4 ≤ 22.0, 4x1 + x2 + 2x3 + x4 ≤ 21.0, x1 + 5x2 − 2x3 + 2x4 ≤ 24.0, x1, x2, x3, x4 ≥ 0.
4) opt F(X) = {max f1(X) = (20x1 + 27x2 + 26x3 + 22x4), max f2(X) = (10x1 + 7x2 + 16x3 + 8x4)},
3x1 + 5x2 + 3x3 + 2x4 ≤ 35.0, 2x1 + 3x2 + 3x3 + 3x4 ≤ 29.0, 5x1 + 3x2 + x3 + 2x4 ≤ 29.0, x1, x2, x3, x4 ≥ 0.
5) opt F(X) = {max f1(X) = (27x1 + 28x2 + 26x3 + 22x4), max f2(X) = (7x1 + 8x2 + 6x3 + 9x4)},
2x1 + 3x2 + 2x3 + 2x4 ≤ 26.0, 6x1 + 2x2 + x3 − 2x4 ≤ 14.0, 4x1 + x2 + 3x3 + x4 ≤ 25.0, x1, x2, x3, x4 ≥ 0.
6) opt F(X) = {max f1(X) = (25x1 + 32x2 + 31x3 + 27x4), max f2(X) = (100x1 + 77x2 + 36x3 + 58x4)},
6x1 + 4.5x2 + 6x3 + 2x4 ≤ 27.0, 9x1 + 1.5x2 + 3x3 − x4 ≤ 15.0, 3x1 + 3x2 + 1.5x3 + 4x4 ≤ 28.5, x1, x2, x3, x4 ≥ 0.
7) opt F(X) = {max f1(X) = (20x1 + 21x2 + 23x3 + 22x4), max f2(X) = (37x1 + 28x2 + 16x3 + 19x4)},
3x1 + 2x2 + 4x3 + 2x4 ≤ 19, 3x1 + 3x2 + 3x3 − 2x4 ≤ 17, 2x1 + 5x2 − 2x3 + 3x4 ≤ 22, x1, x2, x3, x4 ≥ 0.
8) opt F(X) = {max f1(X) = (30x1 + 37x2 + 36x3 + 32x4), max f2(X) = (20x1 + 47x2 + 16x3 + 38x4)},
4x1 + 5x2 − 3x3 + x4 ≤ 18, 5x1 + x2 + 2x3 − 2x4 ≤ 13, 6x1 + 3x2 + 2x3 − 3x4 ≤ 15, x1, x2, x3, x4 ≥ 0.
9) opt F(X) = {max f1(X) = (47x1 + 48x2 + 46x3 + 52x4), max f2(X) = (27x1 + 18x2 + 26x3 + 9x4)},
4x1 + 2x2 + 4x3 − x4 ≤ 25.0, 4x1 + 2x2 + x3 + 3x4 ≤ 38.0, 4x1 + 6x2 − 2x3 + 2x4 ≤ 25.0, x1, x2, x3, x4 ≥ 0.
10) opt F(X) = {max f1(X) = (50x1 + 77x2 + 66x3 + 52x4), max f2(X) = (21x1 + 18x2 + 7x3 + 18x4)},
2x1 + 6x2 + 4x3 − 2x4 ≤ 22.0, 2x1 − 3x2 + 6x3 + 4x4 ≤ 41.0, 5x1 + 4.5x2 + 2x3 − 4x4 ≤ 15.0, x1, x2, x3, x4 ≥ 0.
Task 3. Solve a VPLP with six criteria in the MATLAB system.
Task 3.1.
opt F(X) = {max f1(X) = 199.9x1 + 143.8x2 + 133.8x3,
max f2(X) = 346.0x1 + 360x2 + 487.8x3,
max f3(X) = 121.2x1 + 122.9x2 + 124.2x3,
max f4(X) = 199.9x1 + 131.4x2 + 119.4x3,
min f5(X) = 5.0x1 + 5.0x2 + 5.0x3,
min f6(X) = 19.9x1 + 13.14x2 + 11.94x3},
at the restrictions:
5x1 + 5x2 + 5x3 ≤ 2047.0,
19.9x1 + 13.14x2 + 11.94x3 ≤ 9400.0,
1.12x2 + 1.28x3 ≤ 502.0,
199x1 + 143.8x2 + 133.8x3 ≥ 40000.0,
346x1 + 360x2 + 487.8x3 ≥ 90000.0,
x1, x2, x3 ≥ 0.
Task 3.2. (A mathematical model of the market 2×2.)
opt F(X) = {max f1(X) = p1x1 + p3x3, % manufacturer 1
max f2(X) = p2x2 + p4x4, % manufacturer 2
min f3(X) = c1x1 + c2x2, % consumer 1
min f4(X) = c3x3 + c4x4}, % consumer 2
at the restrictions:
b1_min ≤ c1x1 + c2x2 ≤ b1_max, b2_min ≤ c3x3 + c4x4 ≤ b2_max,
a1x1 + a3x3 ≤ b1, a2x2 + a4x4 ≤ b2, x1, x2, x3, x4 ≥ 0.
Solve this vector problem in linear programming in the MATLAB system, substituting the data from Table 3.1.
Table 3.1. Variants of the task data

Variant:       1     2     3     4     5     6     7     8     9    10
c1:           10    12    12    20    30    15    20    32    40    38
c2:           10    12    10    24    36    15    22    30    44    36
c3:           10    12    12    20    30    15    20    32    40    38
c4:           10    12    10    24    36    15    22    30    44    36
a1:            8    10    10    10    12     8    10     8    10    12
a2:            8     8     9    10    10     8     8    10    10    10
a3:            8    10    10    10    12     8    10     8    10    12
a4:            8     8     9    10    10     8     8    10    10    10
p1 = (c1−a1):  2     2     2    10    18     7    10    24    30    26
p2 = (c2−a2):  2     2     1    12    16     7    14    20    34    20
p3 = (c3−a3):  2     2     2    10    18     7    10    24    30    26
p4 = (c4−a4):  2     2     1    12    16     7    14    20    34    20
b1_min:      700   700   700   700   700   800   800   800   800   800
b2_min:      700   700   700   700   700   800   800   800   800   800
b1_max:     1000  1100  1200  1300  1400  1000  1100  1200  1300  1400
b2_max:     1000  1100  1200  1300  1400  1000  1100  1200  1300  1400
b1:         1000  1100  1200  1300  1400  1000  1100  1200  1300  1400
b2:         1000  1100  1200  1300  1400  1000  1100  1200  1300  1400
Conclusions. In this chapter, on the basis of the axiomatics of the equality, equivalence and priority of criteria in vector problems, constructive methods (algorithms) for solving vector problems in mathematical programming have been developed, both for equivalent criteria and for a given prioritized criterion. For the first time, a method is proposed for selecting from the Pareto set any point that is defined by a predetermined value of the prioritized criterion; the accuracy of the selection is determined by a predetermined error. The methods are illustrated with test examples of solving vector problems in mathematical programming.
CHAPTER 4 RESEARCH AND ANALYSIS OF APPROACHES TO PROBLEMSOLVING IN VECTOR OPTIMIZATION
4.1. Solutions to vector problems in mathematical programming (VPMP) based on the folding of criteria
4.1.1. Introduction to the study of vector problems in mathematical programming
Over the last three decades, a large number of monographs and individual articles have been devoted to methods of solving vector (multicriteria) problems, largely because of the extensive use of these methods in solving practical problems. An analysis of the methods and algorithms for solving multicriteria problems, in accordance with their classification, is presented in a number of papers [4, 10, 11, 48, 50]. Following the classification in [11], we analyse: VPMP decision methods based on the folding of criteria; VPMP solutions using criteria constraints; target-programming methods; methods based on finding a compromise solution; and methods based on man-machine decision procedures. The analysis is carried out by comparing the results these methods give on a test example with the result obtained by the method based on the normalization of criteria and the principle of a guaranteed result [11, 27]. The following nonlinear vector-optimization problem will be used as the test example.
Example 4.1.
opt F(X) = opt {max f1(X) ≡ (x1 − 3)² + (x2 − 3)², (4.1.1)
max f2(X) ≡ x1 + 2x2}, (4.1.2)
with the restrictions
(x1 − 1)² + (x2 − 1)² ≤ 1, (4.1.3)
0 ≤ x1 ≤ 2, 0 ≤ x2 ≤ 2. (4.1.4)
To solve the problem (4.1.1)-(4.1.4) for each criterion, and then by each method, programs were written in the MATLAB system, which at the final stage use the function for solving a nonlinear optimization problem. The results of the decision for each criterion:
Criterion 1: f1* = max f1(X) = f1(X1*) = 14.66, X1* = {x1 = 0.293, x2 = 0.293}; f1⁰ = min f1(X) = f1(X1⁰) = 3.343, X1⁰ = {x1 = 1.707, x2 = 1.707}.
Criterion 2: f2* = max f2(X) = f2(X2*) = 5.236, X2* = {x1 = 1.447, x2 = 1.894}; f2⁰ = min f2(X) = f2(X2⁰) = 0.764, X2⁰ = {x1 = 0.553, x2 = 0.105}.
The Pareto set lies on the boundary of the circle defined by the constraint (4.1.3), between the points X1* and X2*. The test case (4.1.1)-(4.1.4) was solved using each of the above methods. The results of the decisions are discussed below and summarized in Table 4.1, which also includes the results of the decision for each criterion. The resulting optimum points are shown in Fig. 4.1. An analysis of the methods is carried out in sections 4.6 and 4.7.
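The book obtains these values with MATLAB's fmincon. Because the feasible set is the unit disk centred at (1, 1), which lies entirely inside the box 0 ≤ x1, x2 ≤ 2, and each criterion here attains its extrema on the disk boundary, a simple boundary scan in Python reproduces them; this scan is an illustrative cross-check, not the book's code.

```python
import numpy as np

# Points on the boundary of the circle (x1-1)^2 + (x2-1)^2 = 1 (restriction (4.1.3)).
t = np.linspace(0.0, 2.0 * np.pi, 400001)
x1, x2 = 1.0 + np.cos(t), 1.0 + np.sin(t)

f1 = (x1 - 3)**2 + (x2 - 3)**2     # criterion (4.1.1)
f2 = x1 + 2 * x2                   # criterion (4.1.2)

print(round(f1.max(), 2), round(f1.min(), 3))   # 14.66 3.343
print(round(f2.max(), 3), round(f2.min(), 3))   # 5.236 0.764
i = np.argmax(f1)
print(round(x1[i], 3), round(x2[i], 3))         # 0.293 0.293
```

The extrema agree with the closed forms: f1* = (2√2 + 1)² = 14.66 at the boundary point farthest from (3, 3), and f2* = 3 + √5 = 5.236 in the direction of the gradient (1, 2).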
4.1.2. Solutions to VPMP based on the folding of criteria (methods of the first type)
These methods use the strategy of a weighted sum (convolution) of criteria: the vector problem is reduced to the scalar case by forming a sum of the functions with weight coefficients that reflect the degree of each criterion's importance. This direction, being outwardly simple enough, has been the most widely used in the works of domestic and foreign authors [2, 4, 50, 53, 54]. In this case the solution to VPMP (1.1.1)-(1.1.4), for a uniform set of maximization problems, is reduced to finding:
F* = max_{X∈S} Σ_{k=1}^{K} ak·fk(X), (4.1.5)
where ak, k = 1,…,K are the weight coefficients characterizing the importance of the k-th criterion in the additive convolution. The normalization condition Σ_{k=1}^{K} ak = 1 is usually assumed, with ak ≥ 0 ∀k ∈ K. As a result of solving the problem (4.1.5), the optimum point X* = {xj*, j = 1,…,N} and the value of the objective function F* = Σ_{k=1}^{K} ak·fk(X*) are obtained.
In [2], S. Karlin showed that the optimum point X* ∈ S obtained by solving the problem (4.1.5) is Pareto-optimal. This raises three questions:
1) Is it possible to select any point from the Pareto set with the help of the additive convolution?
2) In the solution to the problem (4.1.5), do the terms of the sum F* = Σ_{k=1}^{K} ak·fk(X*) correspond to the given weight coefficients ak, k = 1,…,K (i.e., does the equality ak = ak·fk(X*)/F*, k ∈ K hold)?
3) How do the answers to these two questions compare with the result obtained by the method based on the normalization of criteria and the principle of a guaranteed result?
To make the content of these questions concrete, consider a simple numerical example of solving a vector linear-programming problem with two criteria.
Example 4.2.
max F(X) = {max f1(X) ≡ x1, max f2(X) ≡ x2}, (4.1.6)
x1 + x2 ≤ 1, x1 ≥ 0, x2 ≥ 0.
After the convolution of criteria (4.1.5) we put:
fx = max_{X∈S} (a1·f1(X) + a2·f2(X)), a1 + a2 = 1.
In this task, with coefficients a1 > a2, we get the optimum point X1* = (x1 = 1, x2 = 0), which coincides with the solution of (4.1.6) by the first criterion; if a2 > a1, conversely, X2* = (x1 = 0, x2 = 1) is the optimum point by the second criterion. However, at a1 = a2 = 0.5 the solution is not unique and consists of the set of points of the plane located on the segment with ends at the points X1* and X2*, and it is not clear which of these points should be preferred. When the simplex method is used with a1 = a2 = 0.5, the result of the solution can be either of the points X1* or X2*. In general, when solving a vector linear-programming problem with criteria k = 1,…,K, the result of solving the problem (4.1.5) is one of the corner points belonging to the Pareto set. Therefore, generally speaking, the answer to the first question is: with the help of the additive convolution in a vector linear-programming problem, it is impossible to select an arbitrary point from the Pareto set S^o ⊂ S. The answer to the second question is more complicated, since none of the papers has posed the problem of the correspondence of the obtained result F* to the given weight coefficients ak, k = 1,…,K. For example, in the
problem (4.1.6): how many units of the result F* come from the solution by the first criterion, and how many from the second? From the reasoning above on the first question it follows that the result can only be one of the points X1* or X2*, so F* cannot be an accumulation of the sum of the criteria; consequently, the answer to the second question in the case under consideration is negative. Moreover, near the set of coefficients a1 = a2 = 0.5, a small increment of one of the weight coefficients produces a large jump of the solution. For example, if a1 = 0.5 + 10⁻²⁰, the solution is the optimum point by the first criterion, X1*, and if a1 = 0.5 − 10⁻²⁰, the optimum point by the second criterion, X2*. Thus the solution to the problem (4.1.6) is unstable. Therefore, in the general case, the solution to a vector problem of linear programming built on an additive convolution is unstable when passing from one corner point to another. Nevertheless, some works propose using the additive convolution to solve more complex vector problems, e.g., to solve and analyse discrete multicriteria problems, to approximate the Pareto set, etc.
Let us now consider the solution to the test example (4.1.1)-(4.1.4) based on the method of criteria convolution (4.1.5). Assuming that both weighting factors are equal to 0.5 (i.e., the criteria are equivalent), the aggregated criterion has the form:
fx = max {0.5[(x1 − 3)² + (x2 − 3)²] + 0.5[x1 + 2x2]}, (4.1.7)
under the restrictions (4.1.3)-(4.1.4). The results of the decision:
fx = f(X*) = 7.8, X*1 = {x1 = 0.168, x2 = 0.445}.
The decision by Method 1 is reflected in Table 4.1 (line 1). An analysis of the method is carried out in section 4.2.
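The instability of the convolution around a1 = a2 = 0.5 in Example 4.2 can be demonstrated with any LP solver; the sketch below uses SciPy's linprog and is an illustration, not the book's code.

```python
from scipy.optimize import linprog

def solve(a1):
    """Maximize a1*x1 + (1 - a1)*x2 subject to x1 + x2 <= 1, x >= 0 (problem (4.1.6))."""
    res = linprog(c=[-a1, -(1.0 - a1)], A_ub=[[1.0, 1.0]], b_ub=[1.0],
                  bounds=[(0, None), (0, None)])
    return res.x

# A vanishingly small perturbation of the weights flips the answer
# between the two corner points X1* = (1, 0) and X2* = (0, 1).
print(solve(0.5 + 1e-6))   # optimum of the first criterion, X1*
print(solve(0.5 - 1e-6))   # optimum of the second criterion, X2*
```

The jump between the two vertices under an arbitrarily small change of a1 is exactly the instability described in the text.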
4.1.3. VPMP solution methods using criteria restrictions (methods of the second type)
These methods include: a) the method of successive concessions; b) the method of the leading criterion [29]; c) in the foreign literature, methods for solving VPMP using criteria restrictions are widespread in the form of a transition from the original problem to an optimization problem with the k-th criterion under ε-restrictions [50]. All these methods provide for the selection of one criterion as the main objective function, e.g., k ∈ K, while the other criteria participate in the optimization procedure in the form of inequalities:
fx = fk(X) → max, (4.1.8)
fq(X) ≤ εq, q = 1,…,K, q ≠ k, (4.1.9)
X ∈ S, (4.1.10)
where fk(X) is the main objective function, k ∈ K, and εq, q ∈ K is the value of the restriction imposed on the remaining q-th criteria. The main disadvantage of these methods is that the value of the restriction εq, q ∈ K imposed on the q-th criterion is not comparable with the restrictions on the other criteria εk, k = 1,…,K. The result of such a decision of the vector problem at the first stage requires a proof of Pareto optimality, since imposing constraints of the form (4.1.9), or fq(X) − fq* ≤ εq, q ∈ K, truncates the Pareto-optimal point set. These weaknesses persist if a Lagrangian based on the problem (4.1.8)-(4.1.10) is considered [43]. For the test case, solved via (4.1.8)-(4.1.10), we take the first criterion as the main objective function. The value of the second criterion should be greater than or equal to 90% of its maximum (5), i.e., f2(X) ≥ 4.5. As a result we pass to the problem:
fx = max f1(X) ≡ (x1 − 3)² + (x2 − 3)²,
at the restriction x1 + 2x2 ≥ 4.5 and the restrictions (4.1.3)-(4.1.4). Solution results: fx = max f1(X) = f1(X*) = 6.727, X*2 = (x1 = 0.637, x2 = 1.932). The decision by Method 2 is reflected in Table 4.1 (line 2). An analysis of this method is carried out in section 4.2.
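Method 2's result can be cross-checked with the same boundary scan used for the criteria optima: maximize f1 over the boundary points of the circle that also satisfy the ε-restriction x1 + 2x2 ≥ 4.5 (since f1 is convex, its maximum over the truncated disk is attained on the circle). As before, this is an illustrative check, not the book's MATLAB code.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 400001)
x1, x2 = 1.0 + np.cos(t), 1.0 + np.sin(t)
f1 = (x1 - 3)**2 + (x2 - 3)**2
feasible = (x1 + 2 * x2) >= 4.5            # the epsilon-restriction (4.1.9) on f2

# Maximize f1 over the feasible boundary points only.
i = np.argmax(np.where(feasible, f1, -np.inf))
print(round(f1[i], 3), round(x1[i], 3), round(x2[i], 3))   # 6.727 0.637 1.932
```

The maximum sits at the endpoint of the feasible arc, where the restriction f2 = 4.5 is active, matching the point X*2 reported above.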
4.1.4. Methods of target programming (methods of the third type)
This group of methods uses so-called target programming [42, 51]; in the foreign literature they are often referred to as goal-attainment methods. A scalar target vector is introduced whose components are the desired criteria values: F̄ = {f̄k, k = 1,…,K}; deviation from them is controlled by weight coefficients. In general, the problem statement has two variants.
1) In the first variant, the solution is reduced to the problem of minimizing the sum of deviations from the goal with normalized weights:
d(F(X), F̄) = (Σ_{k=1}^{K} Wk·|fk(X) − f̄k|^p)^{1/p} → min, X ∈ S, (4.1.11)
where W
^W
k
`
1, K  a vector of normalized weights,
,k
¦
K k
1
W k =1;
the factor ɪ lies in the range 1 d ɪ< f; clearly that d ( F ( X ), F ) is a certain distance between F(X) and F . This method is analyzed in section 4.2. 2) In the second variant, the solution is reduced to the problem of minimizing the maximum deviation with normalized weights: (4.1.12) fx = min max ^/ k ` , X S
kK
where ; / k ( f k ( X ) f k* ) / wk* , k 1, K ; fk(X) is the kth criterion; f k*  the desired value of the kth criterion (goal); w k*  weight coefficient; K is the dimension of the vector criteria. Let us test the second option. Taking into account that, in problem (4.1.1)(4.1.4), the desired criteria values are their optimal (maximum) values (14 and 5 for the first and second criteria, respectively), and that it is necessary to find a solution in which the features of the two criteria are most fully taken into account, we take w1=0.5 and w2=0.5. As a result, we have: (4.1.13) fx = min max ^/ , / ` , 1 2 X S
1, 2
where / 1 (( x1 3) 2 ( x 2 3) 2 14 ) / 0.5 ; / 2 ( x 1 2 x 2 5 ) / 0 .5 . As in previous cases, trivial constraints on variables are added. Solution results: fx =5.46, X*3=(x1=0.029, x2=1.24). Solutions using Method 3 are presented in Table. 4.1 (line 3). This method is analyzed in section 4.2.
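The goal-attainment variant (4.1.12)–(4.1.13) can be sketched by grid search. Two assumptions are made here, since neither is fully shown in this excerpt: the feasible set is taken as the disk (x1 − 1)² + (x2 − 1)² ≤ 1 (consistent with the extreme points in Table 4.1), and Λk is read as an absolute weighted deviation |fk(X) − fk*|/wk. For these reasons the sketch will not necessarily reproduce the book's reported fx = 5.46:

```python
import numpy as np

# Hedged sketch of goal-attainment variant (4.1.12)-(4.1.13) by grid search.
# ASSUMPTIONS: the feasible set (4.1.3)-(4.1.4), not shown in this excerpt,
# is the disk (x1-1)^2 + (x2-1)^2 <= 1, and the deviation is read as an
# absolute deviation |f_k(X) - f_k*| / w_k.

goals = (14.0, 5.0)     # desired criteria values f1*, f2*
weights = (0.5, 0.5)    # w1, w2

def lam_max(x1, x2):
    """Largest weighted deviation max{L1, L2} at a point."""
    f1 = (x1 - 3) ** 2 + (x2 - 3) ** 2
    f2 = x1 + 2 * x2
    L1 = abs(f1 - goals[0]) / weights[0]
    L2 = abs(f2 - goals[1]) / weights[1]
    return max(L1, L2)

# Dense grid over the bounding box of the disk, filtered to feasible points.
xs = np.linspace(0.0, 2.0, 401)
best = min(
    ((x1, x2) for x1 in xs for x2 in xs
     if (x1 - 1) ** 2 + (x2 - 1) ** 2 <= 1.0),
    key=lambda p: lam_max(*p),
)
print(best, lam_max(*best))
```

The grid optimum improves on any interior point such as the disk center, illustrating the minimax mechanics of the method.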
4.1.5. Methods based on searching for a compromise solution (methods of the 4th type)
These methods use the principle of a guaranteed result (maximin and minimax), and were first proposed by S. Karlin [2]. Where maximin is used, the criteria are presented in natural units; since the criteria are measured in different units, it is then impossible to compare the criteria with each other and conduct joint optimization. This direction was further developed in a number of works [8, 9, 10], as well as in the works of the author [11]. In accordance with [10], the solution of VPMP is reduced to the form:
max X∈S min k∈K ρk·(fk(X) − fk0)/(fk* − fk0), (4.1.14)
where fk*, fk0 are the optimal and the worst (anti-optimum) values of the kth criterion, respectively, and ρk are weight coefficients that satisfy the usual conditions ρk > 0, Σk∈K ρk = 1, expressing the preferences among the criteria. Denote λk(X) = (fk(X) − fk0)/(fk* − fk0), where λk(X) is the relative estimate of the point X ∈ S by the kth criterion. The difference between (4.1.14) and the approach developed by the author lies in how ρk, k ∈ K is determined. In (4.1.14), ρk, k ∈ K expresses the weight of the criterion, i.e., the intuitive notion of the decision-maker about the importance of the kth criterion; in [11], ρkq, q ∈ K is determined from the ratios:
λq(Xo)/λk(Xo) ≤ ρkq ≤ λq(Xq*)/λk(Xq*), k = 1,…,K, (4.1.15)
where ρkq is the value showing, according to the DM, how many times the relative estimate of the qth criterion is greater than the relative estimate of the kth criterion. A more detailed study of the solution obtained from (4.1.14) with approach (4.1.15) is presented in section 3.3.3, where the weights ρq are denoted by wq, q ∈ K.
We solve the test example (4.1.1)–(4.1.4) by the algorithm developed by the author, based on the normalization of criteria and the principle of a guaranteed result, with equivalent criteria. Solution result: λo = 0.5801, Xo = (x1 = 0.189, x2 = 1.585), λo = λ1(Xo) = λ2(Xo) = 0.5801. The solution by this method is given in Table 4.1 (line 4).
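The guaranteed-result solution for the test example can be approximated by a grid search. As before, this sketch assumes the feasible set (4.1.3)–(4.1.4), not reproduced in this excerpt, is the disk (x1 − 1)² + (x2 − 1)² ≤ 1 (the extreme points in Table 4.1 lie on this circle). The best/worst criterion values are estimated from the same grid, so the code also doubles as a check of the f*, f0 values in Table 4.1:

```python
import numpy as np

# Hedged sketch of the guaranteed-result (maximin) solution by grid search.
# ASSUMPTION: the feasible set is the disk (x1-1)^2 + (x2-1)^2 <= 1.
xs = np.linspace(0.0, 2.0, 601)
X1, X2 = np.meshgrid(xs, xs)
feasible = (X1 - 1) ** 2 + (X2 - 1) ** 2 <= 1.0

F1 = (X1 - 3) ** 2 + (X2 - 3) ** 2   # first criterion
F2 = X1 + 2 * X2                     # second criterion

# Normalization: relative estimate lam_k(X) = (f_k(X) - f_k^0)/(f_k^* - f_k^0),
# with f_k^*, f_k^0 estimated as the best/worst criterion values over S.
def relative(F):
    lo, hi = F[feasible].min(), F[feasible].max()
    return (F - lo) / (hi - lo)

L = np.minimum(relative(F1), relative(F2))   # min_k lam_k(X)
L[~feasible] = -np.inf
i, j = np.unravel_index(np.argmax(L), L.shape)
print(X1[i, j], X2[i, j], L[i, j])   # about x = (0.189, 1.585), lam = 0.58
```

Under the disk assumption the grid maximin reproduces the equivalent-criteria solution λo = λ1(Xo) = λ2(Xo) ≈ 0.58 quoted above.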
4.1.6. Methods based on human-machine procedures of decision-making (methods of the 5th type)
These methods assume a dialogue mode of operation between the DM and the computer. The computer processes information from the DM about the preference (priority) of the criteria in a vector problem. This information is used to formulate a new optimization problem and obtain another intermediate solution, and so on. The result is an interactive procedure for selecting the optimal solution. We will not consider these methods further.
4.2. Analysis of the results of the solution in a test example and solution methods for VPMP

4.2.1. An analysis of the results of the solution in a test example
The results obtained by solving the test example (4.1.1)–(4.1.4) using the four above methods are summarized in Table 4.1 and illustrated in figs. 4.1 and 4.2. In addition, Table 4.1 presents these results for analysis in a normalized form, i.e., with relative estimates of the criteria:
λ1(X) = (f1(X) − f1(X1min))/(f1(X1max) − f1(X1min)),
λ2(X) = (f2(X) − f2(X2min))/(f2(X2max) − f2(X2min)),
computed from the optimal points X1max, X1min and X2max, X2min, which are obtained by solving problem (4.1.1)–(4.1.4) to the maximum and to the minimum with the first and second criteria, respectively.

Table 4.1: The results of solving a vector optimization problem using four methods.

Method | x1opt  | x2opt | f1(Xopt) | f2(Xopt) | λ1(Xopt) | λ2(Xopt)
1      | 0.168  | 0.445 | 14.55    | 1.058    | 0.99     | 0.0659
2      | 0.637  | 1.932 | 6.72     | 4.5      | 0.299    | 0.835
3      | 0.029  | 1.24  | 11.925   | 2.5      | 0.76     | 0.39
4      | 0.1887 | 1.585 | 9.906    | 3.3582   | 0.5801   | 0.5801
Values of the 1st and 2nd criteria at max and min:
1 max  | 0.293  | 0.293 | 14.657   | 0.8787   | 1.0      | 0.0257
1 min  | 1.707  | 1.707 | 3.343    | 5.1213   | 0.0      | 0.9748
2 max  | 1.447  | 1.894 | 3.6334   | 5.2361   | 0.0257   | 1.0
2 min  | 0.553  | 0.105 | 14.367   | 0.764    | 0.974    | 0.0
Fig. 4.1 depicts the graphical solution to the problem using these methods on the constraint domain (4.1.3)–(4.1.4). Fig. 4.2 depicts the results obtained on the surfaces of the first and second objective functions.
Fig. 4.1. Displaying the results of solving the problem of multicriteria optimization (4.1.1)(4.1.4).
Fig. 4.2. Solution results in {x1, x2, O} coordinates of the first and second objective function in Pareto region.
In Fig. 4.1, points X1* and X1o (points 1 and 6) correspond to the maximum and minimum values of the first criterion, respectively; X2* and X2o (points 5 and 8) correspond to the maximum and minimum values of the second criterion, respectively; X*1 (point 2) is the optimum obtained by the first method; X*2 (point 4) is the optimum obtained by the second method; X*3 (point 7) is the optimum obtained by the third method; Xo (point 3) is the optimum obtained by the fourth method (the λ-problem). Fig. 4.2 presents the location of the solutions to the multi-objective problem (4.1.1)–(4.1.4) obtained by the four methods in the Pareto region, as well as the first and second objective functions in relative units: points 1, 2, 3 and 4 are the results of the first, second, third and fourth (the λ-problem) methods, respectively.
4.2.2. An analysis of the results of test solution methods for VPMP
The first method. An analysis of the solution to the vector mathematical-programming problem (4.1.1)–(4.1.4) shows that, in general, when solving a VPMP with k = 1,…,K criteria, the solution can only be one of the corner points belonging to the Pareto set. In this case, intermediate points fall out of consideration.
An analysis of the solution to the test example (the nonlinear vector problem (4.1.1)–(4.1.4)) will answer the two questions formulated in section 1.2. The result obtained on the basis of the additive convolution of the criteria is presented in Table 4.1 and Fig. 4.1, and shows that the optimal solution is close to the optimum point of the first criterion. This is also indicated by the relative estimates presented in the sixth and seventh columns of the first row. In a nonlinear vector problem, the result can be any point from the Pareto set. This implies the following answer to the first question: with the method of additive convolution in a vector problem of mathematical programming, the choice of a point from So ⊂ S with given weight coefficients cannot be made. As in the linear case, the answer to the second question is more complicated. For example, from the process of solving problem (4.1.1)–(4.1.4), it is not known how many units of the first or second criterion are contained in the sum fo. In this case, the solution to problem (4.1.5) implies that, with fo = f(Xopt), there can be no accumulation of the sum of the criteria and, therefore, the answer to the second question is negative. This is what the test case shows.
The second method. The result of solving the problem with ε-constraints (4.1.8)–(4.1.10) shows that the optimal solution is close to the optimum point of the second criterion. This is also indicated by the relative estimates presented in the sixth and seventh columns of the second row. The main disadvantage of this group of methods is that the restriction εq, q ∈ K, imposed on the qth criterion is incommensurable with the restrictions εk, k = 1,…,K, on the other criteria, since the criteria are measured in different units. In addition, the result of solving the vector problem requires a proof of Pareto optimality because, as a result of imposing constraints of the form (4.1.9), or fq(X) − fq* ≤ εq, q ∈ K, the set of Pareto-optimal points is truncated. The same drawbacks remain when the problem is considered using the Lagrange function.
The third method. Methods for achieving the goal were considered in the two variants described in section 4.1.4. For the second variant, the solution of (4.1.13) shows that the optimal solution is close to the optimum point of the first criterion. This is indicated by the relative estimates λ1(Xopt) = 0.76, λ2(Xopt) = 0.39, presented in the sixth and seventh columns of the third row of the table, showing the absence of equivalence. The first variant will be given a little more attention. For simplicity, we assume that the coefficient p = 1. We will check the method for three cases: 1) two objective functions are measured in natural units; 2) two
objective functions are measured in relative units; 3) three objective functions are measured in relative units. In the first case, the change in the criteria is represented on the phase plane, i.e., the plane on which the values of the vector function F are located. Denote by Φ the set of values of F on the admissible set S: Φ = F(S). We call the set Φ the phase set. On this plane, the optimal values f1*, f2* are determined for each criterion, and the point with these coordinates is the target point. The point of the phase set lying at the shortest distance from the target point is the result of the solution. See Fig. 4.3.
Fig. 4.3. Phase curves: a) with original criteria; b) with normalized criteria (in relative estimates).
From Fig. 4.3 (a), it is clear that if the criteria are measured in natural units, then the diagonal is closer to the criterion that is greater in absolute value, i.e., equivalence is clearly absent. In the second case, we convert Fig. 4.3 (a) to 4.3 (b), where the axes of the criteria are measured in relative units. Drawing the diagonal corresponding to the additional condition λ1 = λ2 (the condition of criteria equivalence), we get the point Xo. This point coincides with the solution obtained on the basis of the normalization of criteria and the principle of a guaranteed result with equivalent criteria. At this point, the relative estimates are λ1(Xo) = λ2(Xo) = 0.5801. In principle, this is indicated by Theorem 3 in Chapter 3 (regarding the most contradictory criteria), which states that, in the optimal solution, there are always two criteria that are equal to the maximum level λo, while the rest are not worse. In the third case, we add a third criterion to problem (4.1.1)–(4.1.4), for example, f3(X) = (x1 − 3)² + (x2 − 1)² → max. We construct a three-dimensional
domain from the normalized criteria, that is, in the coordinates λ1, λ2, λ3, and draw the diagonal (see Fig. 4.4). The diagonal intersects the range of feasible solutions at the point λ = {λ1 = 0.47, λ2 = 0.47, λ3 = 0.47}, which corresponds to the point X = {x1 = 0.8179, x2 = 1.024}. When solving VPMP (4.1.1)–(4.1.4) with three criteria on the basis of the normalization of criteria and the principle of a guaranteed result, we get the point Xo = {x1 = 0.18875, x2 = 1.5847}, at which λ = {λ1 = 0.5801, λ2 = 0.9056, λ3 = 0.5801}.
Fig. 4.4. The solution to a vector optimization problem with three criteria.
This result shows that the previous point is not even Pareto-optimal, although all three relative estimates are equal to each other; i.e., methods for solving a vector problem constructed from such geometric considerations are not suitable in the general case.
4.2.3. Conclusions on the methods for solving vector problems
In general, assessing the results of modelling economic systems and the study of modern methods for solving vector problems, we arrive at the following conclusions: 1) most economic and technical systems have many goals, and their modelling requires solving the vector problems underlying these models; 2) in the majority of modern methods (or rather, approaches) for solving vector problems, the problem of the commensurability of criteria in VPMP is not solved; 3) the problem of choosing the optimal solution has not been solved; that is, an optimality principle has not been constructed to indicate why one solution is better than another, either with equivalent criteria or with a given criterion priority. In this book, the listed problems are solved in the following sequence: normalization of criteria, axiomatics, principles of optimality, and constructive methods for solving vector problems in mathematical programming with equivalent criteria and with a given prioritized criterion.
CHAPTER 5 THE THEORY OF VECTOR PROBLEMS IN MATHEMATICAL PROGRAMMING WITH INDEPENDENT CRITERIA
In this chapter, we consider vector problems in mathematical programming which have two subsets of criteria: Q and K1, where Q is a subset of independent criteria (the definition is given below) and K1 is a subset of criteria dependent among themselves. Vector problems are considered in statics (section 5.2) and dynamics (section 5.3).
5.1. A definition of vector problems in mathematical programming with independent criteria
We investigate a vector problem in mathematical programming (VPMP) which has two subsets of criteria, Q and K1, where Q is a subset of independent criteria (the definition is given below), K1 is a subset of criteria dependent among themselves, and Q∪K1 = K is the criteria set of the VPMP:
opt F(X(t)) = {{F1(X(t)) = max fq(Xq(t)), q = 1,…,Q}, (5.1.1)
{F2(X(t)) = max fk(X(t)), k = 1,…,K1}}, (5.1.2)
Gi(t)X(t) ≤ bi(t), i = 1,…,M, (5.1.3)
Giq(t)Xq(t) ≤ biq(t), i = 1,…,Mq, q = 1,…,Q, (5.1.4)
xj(t) ≥ 0, j = 1,…,N, (5.1.5)
where {Xq = {xj, j = 1,…,Nq}, q = 1,…,Q} is a vector of real variables, i.e., a vector from the N-dimensional Euclidean space RN, which includes, for q ∈ Q, the subset Nq ⊂ N of unknowns of the qth criterion, and Q is the set of independent criteria. In general, problem (5.1.1)–(5.1.5) belongs to the class of convex vector problems in mathematical programming. (5.1.3) gives the restrictions imposed on all variables X; (5.1.4) gives the restrictions imposed only on the variables Xq.
Definition 1 (a vector problem in mathematical programming with independent criteria). If, in VPMP (5.1.1)–(5.1.5), the set of indices Nq ⊂ N of the unknowns entering the qth criterion, q ∈ K, does not intersect the index set Nk ⊂ N of the unknowns of any other criterion k ∈ K, i.e.,
∀q, k ∈ K, q ≠ k: Nq∩Nk = ∅, Nq ⊂ N, Nk ⊂ N,
then we call such a vector problem a VPMP with independent criteria. Vector problems of mathematical programming with independent criteria (5.1.1)–(5.1.5) arise in problems of the analysis and synthesis of multilevel hierarchical systems (HS): economic HS (vector problems in linear programming) and technical HS (vector problems in nonlinear programming). For example, as applied to economic hierarchical systems, the vector of unknowns X = {Xq, q = 1,…,Q} defines the types (nomenclature) and volumes of production both of the HS as a whole (X) and of the local subsystems Xq of which the HS is composed, with N = Σq∈Q Nq.
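Definition 1 amounts to a pairwise-disjointness check on the per-criterion index sets. A minimal sketch (the index sets below are illustrative, not taken from the book):

```python
from itertools import combinations

def criteria_independent(index_sets):
    """Return True if every pair of per-criterion variable-index sets is
    disjoint (Definition 1: N_q and N_k do not intersect for q != k)."""
    return all(a.isdisjoint(b) for a, b in combinations(index_sets, 2))

# Illustrative index sets: criterion 1 uses variables x1, x2; etc.
independent = [{1, 2}, {3, 4}, {5, 6}]
dependent = [{1, 2}, {2, 3}]        # criteria share variable x2

print(criteria_independent(independent))  # True
print(criteria_independent(dependent))    # False
```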
5.2. Vector problems in linear programming with independent criteria

5.2.1. Vector linear-programming problems with independent criteria
Under investigation here are vector linear-programming problems with independent criteria which have two subsets of criteria, Q and K1, where Q is a subset of independent criteria:
opt F(X(t)) = {{F1(X(t)) = max fq(X(t)) = Σj=1..Nq cjq·xj(t), q = 1,…,Q}, (5.2.1)
{F2(X(t)) = max fk(X(t)) = Σj=1..N cjk·xj(t), k = 1,…,K1}}, (5.2.2)
Σj=1..N aij(t)xj(t) ≤ bi(t), i = 1,…,M, (5.2.3)
Σj=1..N aijq(t)xj(t) ≤ biq(t), i = 1,…,Mq, q = 1,…,Q, (5.2.4)
xj(t) ≥ 0, j = 1,…,N, (5.2.5)
where X = {Xq = {xj, j = 1,…,Nq}, q = 1,…,Q} is a vector of real variables, that is, a vector from the N-dimensional Euclidean space RN, which includes, for q ∈ Q, the subset Nq ⊂ N of unknowns of the qth criterion, the set Q of independent criteria satisfying the condition ∀q, k ∈ K, q ≠ k: Nq∩Nk = ∅, Nq ⊂ N, Nk ⊂ N. F(X) is a vector function (vector criterion) F(X) = {fk(X), k = 1,…,K} with K = Q∪K1 maximizing components (K also denotes the power of the set K); Q ⊂ K is the set of independent criteria, and K1 is the set of dependent criteria, i.e., the vector criterion F2(X(t)) includes variables of various independent criteria. (5.2.3)–(5.2.4) are the restrictions imposed on the variables of the dependent and independent criteria. We assume that the set of admissible points defined by restrictions (5.2.3)–(5.2.5) is not empty and represents a compact: S = {X ∈ RN | X ≥ 0, G(X) ≤ B} ≠ ∅.
For the solution of VPLP (5.2.1)–(5.2.5) we use the algorithm based on the normalization of criteria and the principle of a guaranteed result. As a result of the solution we obtain: the optimum points Xk*, k = 1,…,K, and the values fk*, k = 1,…,K, of the objective functions at these points, obtained by solving problem (5.2.1)–(5.2.5) with the kth criterion; the worst, unchangeable values fk0, k = 1,…,K, obtained by solving problem (5.2.1)–(5.2.5) with the kth criterion on min; Xo, the optimum point of the solution to the VPMP with equivalent criteria, i.e., the solution of the maximin problem and of the λ-problem constructed on its basis; and λo, the maximum relative estimate, which is the guaranteed level for all criteria in relative units. At the point Xo we calculate the criteria values fk(Xo) and the relative estimates
λk(Xo) = (fk(Xo) − fk0)/(fk* − fk0), k = 1,…,K,
which satisfy the inequality λo ≤ λk(Xo), k = 1,…,K. At the point Xo, all criteria in relative units are greater than or equal to λo; at any other point X ∈ So, the smallest relative estimate λ = min k∈K λk(X) is always less than λo.
Theoretical results.
Theorem 1. If, in a VPMP, for any pair of indices q, k ∈ Q the subsets of variable indices do not intersect, i.e., the criteria are independent:
∀q, k ∈ K, q ≠ k: Nq∩Nk = ∅, Nq ⊂ N, Nk ⊂ N, (5.2.6)
then at the optimum point Xo, obtained on the basis of the normalization of criteria and the principle of a guaranteed result, all relative estimates of the independent criteria are equal among themselves and equal to the maximum relative level λo:
λo = λq(Xo), q = 1,…,Q, (5.2.7)
and for the other criteria the following inequality holds:
λo ≤ λk(Xo), k = 1,…,K. (5.2.8)
The proof of Theorem 1 is presented in [11, p. 30].
Theorem 2. If, in a VPMP with independent criteria (relations (5.2.6) hold), one of the criteria q ∈ Q has priority over the others k = 1,…,Q, then at the optimum point Xo, obtained on the basis of the normalization of criteria and the principle of a guaranteed result, all relative estimates, increased by the priority vector, are equal among themselves and equal to the maximum relative level λo:
λo = pkq·λq(Xo), q ∈ Q, k = 1,…,Q, (5.2.9)
where pkq, k = 1,…,Q, is the given priority of the qth criterion in relation to the other k = 1,…,Q criteria. The proof of Theorem 2 is presented in [11, p. 31].
Properties of vector problems in linear programming with independent criteria, measured in relative units λq(Xo) = (fq(Xo) − fq0)/(fq* − fq0), q = 1,…,Q:
1. In the solution to VPLP (5.2.1)–(5.2.5) with one criterion q ∈ Q, at the optimum point Xq*, q ∈ Q, the values of all other criteria k ∈ K, and consequently their relative estimates, are equal to zero:
fk(Xq*) = 0, λk(Xq*) = 0, q, k ∈ K, q ≠ k. (5.2.10)
2. At the optimum point Xq*, the priority of the qth criterion over the other k ∈ K criteria, under the condition fq0 = 0, q ∈ Q, is equal to ∞:
pkq(Xq*) = λq(Xq*)/λk(Xq*) = ∞.
Therefore, when moving from Xo to Xq*, the priority vector pkq, k = 1,…,Q, of the criterion lies within the limits:
1 ≤ pkq ≤ ∞, q ∈ Q, k ∈ Q, q ≠ k.
5.2.2. The practice of solving vector problems in linear programming with independent criteria
As an example, a vector problem in linear programming with independent criteria is considered.
Example 5.1. The solution to a vector problem in linear programming with independent criteria.
What is given. A vector problem in linear programming with six independent criteria and two standard maximizing criteria:
opt F(X(t)) = {max f1(X1(t)) = 800x1(t) + 1000x2(t),
max f2(X2(t)) = 400x3(t) + 2000x4(t),
max f3(X3(t)) = 600x5(t) + 300x6(t),
max f4(X4(t)) = 1050x7(t) + 1000x8(t),
max f5(X5(t)) = 500x9(t) + 600x10(t),
max f6(X6(t)) = 1000x11(t) + 500x12(t),
max f7(X7(t)) = 800x1(t) + 1000x2(t) + 400x3(t) + 2000x4(t) + 600x5(t) + 300x6(t) + 1050x7(t) + 1000x8(t) + 500x9(t) + 600x10(t) + 1000x11(t) + 500x12(t),
max f8(X8(t)) = 80x1(t) + 100x2(t) + 40x3(t) + 200x4(t) + 60x5(t) + 30x6(t) + 105x7(t) + 100x8(t) + 500x9(t) + 60x10(t) + 100x11(t) + 50x12(t)}, (5.2.11)
under the restrictions:
1000x1(t) + 230x2(t) + 570x3(t) + 2000x4(t) + 830x5(t) + 4000x8(t) + 2680x10(t) + 3090x11(t) ≤ 16000,
3000x1(t) + 2130x2(t) + 1850x5(t) + 3030x7(t) + 2580x8(t) + 2000x9(t) + 1000x10(t) + 1260x11(t) + 2050x12(t) ≤ 21500,
2000x2(t) + 760x3(t) + 1000x5(t) + 1640x6(t) + 1060x8(t) + 2320x9(t) + 1000x10(t) + 1188x12(t) ≤ 12300,
680x1(t) + 1740x3(t) + 1250x6(t) + 960x8(t) + 1070x9(t) + 2630x10(t) + 1380x11(t) + 3050x12(t) ≤ 14600,
1040x1(t) + 930x3(t) + 840x4(t) + 1030x5(t) + 250x6(t) + 530x8(t) + 610x10(t) + 300x12(t) ≤ 8700,
550x1(t) + 1060x2(t) + 1340x4(t) + 860x5(t) + 1000x6(t) + 1040x8(t) + 260x10(t) + 1000x12(t) ≤ 9000,
1000x2(t) + 5000x4(t) + 1000x5(t) + 2000x6(t) + 1620x7(t) + 170x8(t) + 1000x9(t) + 330x10(t) + 1200x11(t) + 1000x12(t) ≤ 11400,
300x1(t) + 210x2(t) + 1000x3(t) + 2000x4(t) + 1500x5(t) + 620x6(t) + 700x7(t) + 140x8(t) + 980x9(t) + 1040x10(t) + 1000x11(t) + 2000x12(t) ≤ 18800, (5.2.12)
2000x1(t) + 3000x2(t) ≤ 18000,
1000x3(t) + 2000x4(t) ≤ 17000,
1000x5(t) + 800x6(t) ≤ 18000,
2000x7(t) + 1500x8(t) ≤ 24000,
800x9(t) + 1300x10(t) ≤ 21000,
3000x11(t) + 4000x12(t) ≤ 29000, (5.2.13)
0 ≤ x1(t) ≤ 1000, 0 ≤ x2(t) ≤ 1000, …, 0 ≤ x12(t) ≤ 1000, (5.2.14)
where the last lines of restrictions (5.2.14), given in the application to hierarchical systems, are connected with market research and the minimum and maximum values of the variables.
What is required: to solve the vector problem in linear programming (5.2.11)–(5.2.14) and define an optimum point with a prioritized second criterion.
Decision. The algorithm is presented in the MATLAB system as the sequence of steps according to Algorithm 3, section 3.4.1. In this VPLP (5.2.11)–(5.2.14), the following is formulated: it is required to find a non-negative solution x1, …, x12 to the system of inequalities (5.2.12)–(5.2.14) at which the functions f1(X), …, f8(X) take the greatest possible values.
The solution to the VPLP with equivalent criteria in the MATLAB system is representable as the following sequence of steps.
Step 0. To solve the created vector problem in linear programming, we enter the basic data in the MATLAB system. The vector criterion function is created in the form of a matrix:
Cvec = [800 1000 0 0 0 0 0 0 0 0 0 0; … ; 0 0 0 0 0 0 0 0 0 0 1000 500; 800 1000 400 2000 … 1000 500; 80 100 40 200 … 100 50];
the matrix of linear restrictions:
a = [1000 230 570 2000 830 0 0 4000 0 2680 3090 0; … ; 0 0 0 0 0 0 0 0 0 0 3000 4000];
the vector containing the restrictions bi:
b = [16000 21500 12300 14600 8700 9000 11400 18800 18000 17000 18000 24000 21000 29000];
equality restrictions: Aeq = []; beq = [];
the restrictions imposed on the lower and upper bounds of the variables:
lb = [0 0 0 0 0 0 0 0 0 0 0 0]; ub = [10000 10000 … 10000];
Step 1. The problem is solved for each criterion separately.
1) Solution for the first criterion:
[x1, f1] = linprog(Cvec(1,:), a, b, Aeq, beq, lb, ub)
where x1 is the vector of optimum values of the variables for the first criterion, and f1 is the value of the criterion function at this point.
x1 = X1* = {x1(1) = 5519.0, x2(1) = 2320.7} is the optimum point,
f1 = f1* = 6735900 is the optimum value of the first criterion at the point X1*. The results for the other criteria k = 2,…,8 are obtained similarly:
2) x2 = X2* = {x3(1) = 7295.5, x4(1) = 2280}, f2 = f2* = 7473600;
3) x3 = X3* = {x5(1) = 8038.7, x6(1) = 1680.7}, f3 = f3* = 5327400;
4) x4 = X4* = {x7(1) = 3689.8, x8(1) = 4000}, f4 = f4* = 7874300;
5) x5 = X5* = {x9(1) = 3527.5, x10(1) = 4116.2}, f5 = f5* = 4233500;
6) x6 = X6* = {x11(1) = 5178.0, x12(1) = 2444.1}, f6 = f6* = 6400000;
7) x7 = X7* = {x1(1) = 289.1, x2(1) = 4072.7, x3(1) = 5363.6, x4(1) = 792.9, x5(1) = 0, x6(1) = 0, x7(1) = 334, x8(1) = 73.9, x9(1) = 0, x10(1) = 0, x11(1) = 234.1, x12(1) = 0}, f7 = f7* = 12882000;
8) x8 = X8* = {x1(1) = 289.1, x2(1) = 4072.7, x3(1) = 5363.6, x4(1) = 792.9, x5(1) = 0, x6(1) = 0, x7(1) = 334, x8(1) = 73.9, x9(1) = 0, x10(1) = 0, x11(1) = 234.1, x12(1) = 0}, f8 = f8* = 1288200.
Step 2. This step is not carried out because, for this class of problems (restrictions of the form ≤), the anti-optimum value of each criterion is equal to zero: fk0 = 0, k = 1,…,K. As a result, the normalization of criteria reduces to the form:
λk(X) = (fk(X) − fk0)/(fk* − fk0) = fk(X)/fk*, k = 1,…,K.
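Step 1 can be reproduced outside MATLAB, e.g. with SciPy's linprog. A hedged sketch for the first criterion only, keeping just the constraint rows in which x1 or x2 appears (the remaining variables are zero at this optimum). Note that the printed right-hand sides bi reproduce the book's optimum only if they are interpreted as thousands; the 10³ scaling below is our assumption, not stated in the text:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of Step 1 for criterion f1 = 800*x1 + 1000*x2 (other variables
# set to zero). Constraint rows of (5.2.12)-(5.2.13) restricted to x1, x2.
# ASSUMPTION: the b_i are interpreted as thousands (scaled by 1e3); the
# printed values do not admit the book's optimum x1=5519, x2=2320.7 otherwise.
A = np.array([
    [1000, 230],   # global restriction row 1
    [3000, 2130],  # row 2
    [0, 2000],     # row 3
    [680, 0],      # row 4
    [1040, 0],     # row 5
    [550, 1060],   # row 6
    [0, 1000],     # row 7
    [300, 210],    # row 8
    [2000, 3000],  # local restriction of subsystem 1
], dtype=float)
b = 1e3 * np.array([16000, 21500, 12300, 14600, 8700,
                    9000, 11400, 18800, 18000], dtype=float)
c = np.array([800.0, 1000.0])

# linprog minimizes, so negate c to maximize f1.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x_opt, f1_star = res.x, -res.fun
print(x_opt, f1_star)   # about x = (5519.0, 2320.7), f1* = 6735900
```

Under the stated scaling assumption, the binding restrictions are rows 2 and 9, and the vertex they define matches the book's X1*.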
Step 3. A system analysis of the criteria in the VPLP: at the optimum points X1*, X2*, …, X8*, the values of the criterion functions F(X*) = {{fq(Xk*), q = 1,…,K}, k = 1,…,K} and the relative estimates λ(X*) = {{λq(Xk*), q = 1,…,K}, k = 1,…,K} are determined:

F(X*) = | f1(X1*) f2(X1*) … f8(X1*) |   | 6736000 1149000 … 806000  |
        | f1(X2*) f2(X2*) … f8(X2*) | = | 0       7478000 … 748000  |   (5.2.15)
        | …                         |   | …                         |
        | f1(X8*) f2(X8*) … f8(X8*) |   | 6385000 373100  … 1288000 |

To simplify the calculation of the relative estimates λk(X) = (fk(X) − fk0)/(fk* − fk0), k = 1,…,K, on the admissible set S, we calculate the deviations of the criteria dk = fk* − fk0, k = 1,…,K, which appear in the denominator of λk(X). Since fk0 = 0, k = 1,…,K, we have dk = fk*. In problem (5.2.11)–(5.2.14), the deviations of the criteria are equal to: d1 = 6735900; d2 = 7478200; d3 = 5327400; d4 = 7874300; d5 = 4233500; d6 = 6400000; d7 = 12882000; d8 = 1288200.
The result of the system analysis is presented with functions F(X*) in (5.2.15) and a matrix of relative estimates: *
*
*
*
O(X*)= O1 ( X 1 ) O 2 ( X 1 ) ... O 7 ( X 1 ) O8 ( X 1 ) = O1 ( X 2* ) O 2 ( X 2* ) ... O 7 ( X 2* ) O 8 ( X 2* ) ...
O1 ( X 7* ) O 2 ( X 7* ) ... O 7 ( X 7* ) O 8 ( X 7* ) * 8
* 8
* 8
* 8
O1 ( X ) O 2 ( X ) ... O 7 ( X ) O 8 ( X )
1.0 0.0
0.1537 ... 0.6258 0.6258 1.0 ... 0.5805 0.5805
, (5.2.16)
... 0.9480 0.4990 ... 1.0
1.0
0.9480 0.4990
1.0
... 1.0
where Ok(X) = (fk(X)  f 0k )/(f *k  f 0k ), k= 1, K , f *k =fk(X *k ). The result of the system analysis — matrix O(X*) showed that, at optimum points, the first to the eighth criterion reach the optimum sizes: {Ok(X *k )=1, k= 1, K }. It is necessary to define a point of Xo in which all criteria are closest to the unit. The Oproblem is also directed towards solving such a problem. Step 4. Formation of a Oproblem. The basic data of a Oproblem in the MATLAB system are formed: vector criterion (objective) function – Cvec0; the matrix of linear restrictions is a0; the vector containing restrictions (bi): b0 equality restrictions: Aeq= []; beq= []; lower and upper bounds of variables: lb0, ub0. Step 5. Solutions to the Oproblem. Address: [x0, L0] =linprog (Cvec0, a0, b0, Aeq, beq, lb0, ub0). As a result of the decision we will receive: x0=Xo={x1(1)=0.2818, x2(1)= 764.8, x3(1)= 1286.4, x4(1)=4147.7, x5(1)= 224.2, x6(1)= 2502.2, x7(1)=0.0, x8(1)= 1725.2, x9(1)= 407.5, x10(1)= 1111.4, x11(1)= 1062.2, x12(1)= 1803.6, x13(1)= 0.0} (5.2.17)  optimum point; (5.2.18) L0=Oo = 0.2818 – the maximum relative assessment at the Xo optimum point; Oo is the minimum relative assessment (or the guaranteed level). Oo=0.2818 shows that all independent criteria are raised up to this size concerning the optimum by f1* ,...,f6*. Checking the results of the decision. We define values of criteria at an optimum point: fk(Xo), k= 1,8 . f1(Xo)=1898000, f2(Xo)=2107000, f3(Xo)=1501000, f4(Xo)=2219000, f5(Xo)=1193000, f6(Xo)=1804000, f7(Xo)=10723000, f8(Xo)=1072000. and relative estimates in Ok(Xo), k= 1,8 : O1(Xo)=0.2818, O2(Xo)=0.2818, O3(Xo)=0.2818, O4(Xo)=0.2818, O5(Xo)=0.2818, O6(Xo)=0.2818, O7(Xo)=0.8324, O8(Xo)=0.8324. (5.2.19)
Thus, the maximum relative estimate λo is equal to the relative estimates of the independent criteria, in accordance with (5.2.7): λo = λq(Xo(t)), q = 1,…,6, and it also satisfies conditions (5.2.8) for the seventh and eighth criteria: λo ≤ λk(Xo(t)), k = 7, 8. Thus, the conditions of Theorem 1 are satisfied and confirmed by the test example.
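The check above can be scripted directly from the printed values of fk(Xo) and the deviations dk = fk*. A sketch using only the numbers quoted in this section:

```python
# Verify Theorem 1 on the printed results: lam_k(Xo) = f_k(Xo)/f_k*
# (here f_k^0 = 0), using only values quoted in this section.
f_star = [6735900, 7478200, 5327400, 7874300,
          4233500, 6400000, 12882000, 1288200]   # deviations d_k = f_k*
f_at_xo = [1898000, 2107000, 1501000, 2219000,
           1193000, 1804000, 10723000, 1072000]  # f_k(Xo)

lam = [fo / fs for fo, fs in zip(f_at_xo, f_star)]
lam_o = min(lam)

# Independent criteria q = 1..6: relative estimates equal to lam_o (5.2.7);
# dependent criteria k = 7, 8: lam_k(Xo) >= lam_o (5.2.8).
print([round(v, 4) for v in lam], round(lam_o, 4))
```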
5.2.3. Vector problems in linear programming with independent criteria in modelling economic hierarchical systems
A vector problem in linear programming with independent criteria (5.2.1)–(5.2.5) is a mathematical model of an economic hierarchical system (HS). For this purpose, we consider a hierarchical economic system which includes Q, a set of local subsystems (LS). We analyze the numerical indicators obtained as a result of the solution to a vector problem with independent criteria (5.2.1)–(5.2.5) as applied to an HS:
{X* = {Xq* = {xj, j = 1,…,Nq}, q = 1,…,Q}, fq* = fq(Xq*), q = 1,…,Q; λ = min q∈Q λq(X);
Xo = {Xqo = {xj, j = 1,…,Nq}, q = 1,…,Q}, λo = min q∈Q λq(Xo)}. (5.2.20)
These indicators carry definite economic meaning. First, the indicator Xq*, obtained as a result of the solution at the first step, represents the volume of products produced by the qth local subsystem. Accordingly, the indicator fq* = fq(Xq*), q = 1,…,Q, is obtained from problem (5.2.1)–(5.2.5) under the assumption that the qth LS is given all the global resources of the hierarchical system (5.2.3); in practice, only its own restrictions (5.2.4) affect fq*. Thus, fq*, q = 1,…,Q, can serve as an optimum indicator of the development of an LS (e.g., divisions in a firm; branches on a national scale, etc.). At the first step of the algorithm, the highest controlling subsystem of the HS, using simulation modelling, treats each LS as if it were provided with unlimited resources. As a result of the solution, it obtains the limits fq*, q = 1,…,Q, to which all local subsystems should aspire under their joint optimization.
Secondly, λ = min q∈Q λq(X) is the level reached by the economy of the hierarchical system at the output X ∈ S of production volumes, in relation to the optimum indicators: λq(X) = (fq(X) − fq0)/(fq* − fq0), q = 1,…,Q.
Thirdly, λo = min q∈Q λq(Xo) = max X∈S min q∈Q λq(X) is the maximum relative level which the economy of the hierarchical system reaches at the output Xo = {Xqo, q = 1,…,Q} of production volumes, relative to the optimum indicators.
Fourthly, all these indicators of the hierarchical system (5.2.20) are necessary for making an optimal decision.
An analysis of the numerical indicators of the solution to (5.2.1)–(5.2.5) shows that, as applied to the hierarchical system, two economic problems are solved.
Problem 1. The choice of optimum production volumes for all divisions of the firm. We examine the economic indicators obtained in the solution to the numerical mathematical model (5.2.11)–(5.2.14). x1 = X1* = {x1(1) = 5519.0, x2(1) = 2320.7} is the optimum point, which presents the production volumes released by the first local subsystem of the economic HS (note that the other variables x3(1), …, x12(1) are equal to zero). f1 = f1* = 6735900 is the optimum value of the first criterion at the point X1*; for the HS, this is the sales volume received by the first division from the realization of the production volumes X1*. In this case the sales volumes coincide with the corresponding components of the generalized criteria: f7(X1*) = 6735900, f8(X1*) = 673590. The economic indicators for the second to the sixth LS are presented similarly. The system analysis (Step 3) showed that, at its own optimum point, each of the first to the eighth criteria reaches its optimum value, λk(Xk*) = 1, k = 1,…,K, while at the other points the criteria are much less than unity. It is necessary to define the point Xo (production volumes) at which all criteria are closest to unity. The λ-problem is directed towards solving exactly this task. As a result of the solution to the λ-problem, we obtained λo, the maximum relative level which all local subsystems (divisions) of the
Chapter 5
106
o
hierarchical system at the release of Xo={X q , q= 1, Q } reached production volumes. Problem 2. The optimum distribution of resources in divisions of HS for the purpose of receiving optimum economic indicators. We present the solution to Problem 2 with a sequence of steps. Step 1. The vector problem (5.2.1)(5.2.5) is solved at equivalent criteria. As a result of the decision received: (5.2.20) X o ={X oq ={xj, j= 1, N q}, q= 1, Q ; Oo= min Oq(Xo). qQ
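The relative estimates used throughout this section can be computed directly from the criterion values. A minimal sketch in Python (the numbers are illustrative, not the book's data):

```python
# Relative estimate lambda_q(X) = (f_q(X) - f0_q) / (f*_q - f0_q); the system-wide
# level lambda(X) is the minimum over subsystems (the guaranteed level).

def relative_levels(f, f_best, f_worst):
    return [(fq - f0) / (fb - f0) for fq, fb, f0 in zip(f, f_best, f_worst)]

# Illustrative values: two subsystems; the worst level f0_q = 0 for this class
# of restrictions.
f_at_X = [4.0, 9.0]        # criterion values at some plan X
f_star = [10.0, 10.0]      # optima of the separate one-criterion problems
f_zero = [0.0, 0.0]

lams = relative_levels(f_at_X, f_star, f_zero)
level = min(lams)          # lambda(X): the level guaranteed to every subsystem
print(lams, level)         # [0.4, 0.9] 0.4
```

The λ-problem then searches for the plan X° that makes this guaranteed minimum as large as possible.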
Step 2. The loading of resources on each local subsystem is checked, both for the global resources (5.2.12) and for its own resources (5.2.13):

r^q_i = A_q X°_q, i = 1,…,M, q = 1,…,6.   (5.2.21)

[Matrix (5.2.21): the resource expenses r^q_i(X°) of each of the six subsystems for the fourteen resources; the totals over the subsystems are given in Step 3.]
Step 3. Then the total expenses of the global resources are defined:

R_i(X°) = Σ_{q=1}^{6} r^q_i = A X°, i = 1,…,M, M = 14:

R_1(X°)=16000, R_2(X°)=21500, R_3(X°)=12300, R_4(X°)=14600, R_5(X°)=8282, R_6(X°)=4937, R_7(X°)=11400, R_8(X°)=14111; and similarly the expenses for the local subsystems.
Step 4. The received expenses of the global resources are compared with the opportunities of the firm (concern) for their acquisition b_i, i = 1,…,M:

d_i = b_i − R_i(X°), i = 1,…,M.   (5.2.22)

Step 5. Analysis of the deviations (5.2.22) d_i:
If R_i < b_i, then ΔR_i = b_i − R_i > 0 characterizes the value of the underutilization of the i-th resource;
if R_i > b_i, then ΔR_i = b_i − R_i < 0 and characterizes the size of a missing resource (such a situation can arise only from a wrong solution of the task, or artificially);
and if R_i = b_i, then ΔR_i = b_i − R_i = 0, and the loading of the i-th resource is full.

The Theory of Vector Problems in Mathematical Programming with Independent Criteria

We define the deviations (5.2.22) from the optimum solution of the problem (5.2.20), d_i = b_i − R_i, i = 1,…,8, for the global resources: d_1=0, d_2=0, d_3=0, d_4=0, d_5=417.8, d_6=4063.4, d_7=0, d_8=4688.9. From these ratios it follows that the resources i = 1, 2, 3, 4 and 7 are loaded completely; they constrain the further growth of the vector criterion F(X). The modelling of the annual plan is carried out by changing (as a rule, increasing) the global resources b_i, i = 1, 2, 3, 4, 7 and recalculating. A set of such decisions represents the set of alternatives for the acceptance of the final decision on the annual plan of the concern.
We define the deviations d_i = b_i − R_i, i = 9,…,14, for the local resources (5.2.13). In this HS they are sufficient. In general, these resources form the basis for the development of a vector of management for each LS.
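The deviation analysis of Steps 4–5 is elementary to automate. A sketch using the first-year global-resource figures from the text (R_i rounded to whole units, so d_5, d_6, d_8 appear rounded as well):

```python
# Deviation analysis of Steps 4-5: d_i = b_i - R_i(X°); d_i = 0 marks a fully
# loaded (binding) global resource that constrains further growth of F(X).
b = [16000, 21500, 12300, 14600, 8700, 9000, 11400, 18800]   # availability b_i
R = [16000, 21500, 12300, 14600, 8282, 4937, 11400, 14111]   # expenses R_i(X°), rounded
d = [bi - ri for bi, ri in zip(b, R)]
binding = [i + 1 for i, di in enumerate(d) if di == 0]
print(d)        # [0, 0, 0, 0, 418, 4063, 0, 4689]
print(binding)  # [1, 2, 3, 4, 7] -- the completely loaded resources
```

The binding list reproduces the conclusion of Step 5: resources 1, 2, 3, 4 and 7 are the candidates for increase when the annual plan is re-modelled.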
5.3. Two-level hierarchical systems developing in dynamics uniformly and proportionally

5.3.1. The theory of two-level hierarchical systems developing in dynamics uniformly and proportionally

We now consider two-level economic hierarchical systems which develop in dynamics during the years t ∈ T, where T is the set of indexes of the years of the planned period. At the initial moment of time, the model of the two-level HS can be presented similarly to VPMP (5.2.1)–(5.2.5) with independent criteria:

opt F(X(t)) = {{F_1(X(t)) = max f_q(X(t)) = Σ_{j=1}^{N_q} c^q_j x_j(t), q = 1,…,Q},   (5.3.1)

{F_2(X(t)) = max f_k(X(t)) = Σ_{j=1}^{N} c^k_j x_j(t), k = 1,…,K_1}},   (5.3.2)

Σ_{j=1}^{N} a_ij(t) x_j(t) ≤ b_i(t), i = 1,…,M,   (5.3.3)

Σ_{j=1}^{N_q} a^q_ij(t) x_j(t) ≤ b^q_i(t), i = 1,…,M_q, q = 1,…,Q,   (5.3.4)

x_j(t) ≥ 0, j = 1,…,N, t = 1,…,T,   (5.3.5)
where X(t) = {X_q(t) = {x_j(t), j = 1,…,N_q}, q = 1,…,Q} is a vector of real variables; the designations are similar to those of problem (5.2.1)–(5.2.5); Q is the number of independent criteria, i.e., criteria satisfying the condition: ∀q, k ∈ K, N_q ∩ N_k = ∅, N_q ⊂ N, N_k ⊂ N.
On the basis of VPLP (5.3.1)–(5.3.5), a mathematical model of managing the two-level HS developing in dynamics can be represented, in which the components of the VPLP depend on time. We introduce the designations of VPLP (5.3.1)–(5.3.5) as an HS: X(t) = {X_q(t) = {x_j(t), j = 1,…,N_q}, q = 1,…,Q} is the vector of unknowns determining the volumes of the products turned out by the HS as a whole, and X_q(t) = {x_j(t), j = 1,…,N_q} by its local subsystems, during the discrete periods t = 1,…,T; (5.3.3) are the global restrictions imposed on the two-level HS as a whole during the planning period t ∈ T; (5.3.4) are the restrictions imposed on the functioning of each subsystem q = 1,…,Q.
As a result of the solution of VPMP (5.3.1)–(5.3.5) we receive: X*_q(t), f_k(X*_q(t)), k = 1,…,K_q, q = 1,…,Q — the optimum points for the separate criteria and the values of all criteria at these points; X°(t), X°_q(t), λ°_q(t) — the optimum functioning points of the HS and of the q-th LS and the maximum relative assessment, such that

λ°(t) = min_{q∈Q} λ_q(X°(t)) = max_{X∈S} min_{q∈Q} λ_q(X(t)).   (5.3.6)

But as the LS are independent, according to Theorem 1 (section 5.2), at the point X°(t) all relative estimates of the independent criteria are equal among themselves and equal to λ°(t) = λ_q(X°_q(t)), q = 1,…,Q. These facts are the starting point for the proof of the theorems below.
Theoretical results.
Theorem 3 (on hierarchical systems developing uniformly and proportionally). If in VPMP (5.3.1)–(5.3.5) for any pair of indexes q, k ∈ Q the intersection of the subsets of indexes of variables is empty, i.e., the criteria are independent: ∀q, k ∈ K, N_q ∩ N_k = ∅, N_q ⊂ N, N_k ⊂ N, then at the optimum point X°(t), received as a result of the solution of the VPLP on the basis of the normalization of criteria and the principle of the guaranteed result in each period t = 1,…,T, with an increase of the restrictions B(t) by ΔB(t+1), all local subsystems develop uniformly, i.e., the relative estimates are equal among themselves:

λ°(t) = λ_1(X°_1(t)) = … = λ_q(X°_q(t)) = … = λ_Q(X°_Q(t)),   (5.3.7)

and proportionally, i.e., the increment Δλ is the same for all LS:

Δλ = λ°(t+1) − λ°(t), ∀t, (t+1) ∈ T.   (5.3.8)

Proof. Let's solve VPLP (5.3.1)–(5.3.5) at the initial time point t = 1. The result of the decision for independent criteria is presented by the ratios (5.3.7). At the optimum point X°(t), part of the global restrictions (5.3.3) can be inequalities:

Σ_{q=1}^{Q} A_q X°_q(t) < B(t),   (5.3.9)

and the others strict equalities:

Σ_{q=1}^{Q} A_q X°_q(t) = B(t).   (5.3.10)

Part of the profit received from the production volumes X°(t) goes to reproduction. At the same time, the resources which are completely spent, i.e., those described by the restrictions with strict equalities (5.3.10) or by inequalities (5.3.9) close to them, are increased. Thus the resources (5.3.10) in the planned year (t+1) ∈ T will increase by ΔB(t+1), and the restrictions take the form:

A(t)X(t) ≤ B(t) + ΔB(t+1).   (5.3.11)

If we replace (5.3.3) with (5.3.11) and solve VPMP (5.3.1)–(5.3.5), we will receive a new optimum point X°(t+1) at which the maximum relative assessment λ°(t+1) increases by some size Δλ(t+1) relative to λ°(t): λ°(t+1) = λ°(t) + Δλ(t+1). According to Theorem 1, at the point X°(t+1) the relative estimates of all LS will be equal among themselves:

λ°(t+1) = λ_1(X°_1(t+1)) = … = λ_q(X°_q(t+1)) = … = λ_Q(X°_Q(t+1)).   (5.3.12)
From here it follows that at the end of the period Δt = (t+1) − t all LS q = 1,…,Q have developed evenly, i.e., the relative estimates at the point X°(t+1) are equal among themselves and equal to λ°(t+1). The development of the LS happens proportionally, i.e., the increment of the relative assessment of each LS

Δλ = λ°(t+1) − λ°(t) = λ_1(X°_1(t+1)) − λ_1(X°_1(t)) = … = λ_Q(X°_Q(t+1)) − λ_Q(X°_Q(t))

is identical for all LS, ∀t, (t+1) ∈ T, with the change of resources during the period (t+1) ∈ T. Thus, for the first period Δt = (t+1) − t the theorem is proved.
Let's consider the solution of VPLP (5.3.1)–(5.3.5) for a new year (t+τ) ∈ T, where τ is the size of the period, lying within 1 ≤ τ ≤ (T − t) and measured in years. In this case, the global resources (5.3.3) will increase by ΔB(t+τ):

A(t)X(t) ≤ B(t) + ΔB(t+τ).   (5.3.13)

Let's replace (5.3.3) with (5.3.13) and solve VPLP (5.3.1)–(5.3.5). As a result, we will receive a new optimum point X°(t+τ) at which the maximum relative assessment λ°(t+τ) increases by some size Δλ(t+τ) relative to λ°(t): λ°(t+τ) = λ°(t) + Δλ(t+τ); according to Theorem 1, the relative estimates of all LS at the point X°(t+τ) will be equal among themselves, so that

Δλ = λ°(t+τ) − λ°(t) = λ_1(X°_1(t+τ)) − λ_1(X°_1(t)) = … = λ_Q(X°_Q(t+τ)) − λ_Q(X°_Q(t)).

From here it follows that at the end of the period Δτ = (t+τ) − t all LS q = 1,…,Q have developed evenly, i.e., the relative estimates at the point X°(t+τ) are equal among themselves and equal to λ°(t+τ), and their development happens proportionally, i.e., the increment of the relative assessment Δλ is identical for all LS, ∀t, (t+τ) ∈ T, with the change of resources in (t+τ) ∈ T. Thus, for any period Δt = (t+τ) − t the theorem is proved.
Theorem 4 (on hierarchical systems developing evenly and proportionally with a set prioritized criterion). If in VPLP (5.3.1)–(5.3.5):
• for any pair of indexes q, k ∈ Q the intersection of the subsets of indexes of variables is empty, i.e., the criteria are independent;
• one of the criteria q ∈ Q has priority over the others,
then at the optimum point X°(t), received on the basis of the normalization of criteria and the principle of the guaranteed result in each period t = 1,…,T, all LS develop:
1) uniformly, i.e., the relative estimates, weighted by the priorities, are equal among themselves and equal to λ°:

λ° = p^q_k λ_k(X°), q ∈ Q, k = 1,…,Q,   (5.3.14)

where p^q_k, q ∈ Q, k = 1,…,Q is the set priority of the q-th criterion in relation to the other criteria k = 1,…,Q;
2) proportionally, i.e., the increment Δλ(t+τ) = λ°(t+τ) − λ°(t) is identical for all LS, ∀t, (t+τ) ∈ T, with the change of resources in τ ∈ T concerning the period t ∈ T:

Δλ = p^q_k(X°(t+τ)) λ_k(X°(t+τ)) − p^q_k(X°(t)) λ_k(X°(t)), k = 1,…,Q, q ∈ K.

The proof is similar to that of Theorem 1.
5.3.2. The practical solution of a two-level hierarchical system which develops in dynamics uniformly and proportionally

We will show the modelling of a long-term plan on a test example. As a basis we take the problem of the formation of the annual plan of a firm with six divisions, whose basic data are presented in VPLP (5.2.11)–(5.2.14), and consider its decision in dynamics for five years.
Step 1. The problem of the formation of the firm's annual plan is solved. The results of the decision for the first year of planning are presented in section 5.2.2.
Step 2. Problem 2, the optimum distribution of resources among the divisions of the HS for the purpose of receiving optimum economic indicators, is solved. The total expenses of the global resources (5.2.13) are defined:

R_i(X°) = Σ_{q=1}^{6} r^q_i = A X°, i = 1,…,M, M = 14:

R_1(X°)=16000, R_2(X°)=21500, R_3(X°)=12300, R_4(X°)=14600, R_5(X°)=8211, R_6(X°)=5152, R_7(X°)=11400, R_8(X°)=13929.
The received results of the calculation of resources are collected in Table 5.1, in which every line contains: the number of the resource i = 1,…,M; the availability of the resource b_i(t); the resource expenses

r_i(t) = Σ_{q=1}^{Q} r^q_i(t) = Σ_{q=1}^{Q} A_q X°_q(t), i = 1,…,M, t = 1;

and the balance of the resource Δb_i = b_i − r_i(t), i = 1,…,M, t = 1.
Table 5.1: Calculation of resources for the constraints of the hierarchical system

                 Resources in the 1st year of planning   Resources in the 2nd year of planning
Name of resource  Availability   Costs    Remainder       Availability   Costs    Remainder
Resource 1        16000          16000    0               16800          16800    0
Resource 2        21500          21500    0               22575          22575    0
Resource 3        12300          12300    0               12915          12915    0
Resource 4        14600          14600    0               15330          15330    0
Resource 5         8700           8282    418              8700           8696    3.7
Resource 6         9000           4937    4063             9000           5183    3816
Resource 7        11400          11400    0               11970          11970    0
Resource 8        18800          14111    4689            18800          14817    3983
Resource 9        18000           5325    12675           18000           5629    12371
Resource 10       17000           4535    12465           17000           4770    12230
Resource 11       18000           2499    15501           18000           2619    15381
Resource 12       24000           4104    19896           24000           4299    19701
Resource 13       21000           2316    18684           21000           2443    18557
Resource 14       29000           5405    23595           29000           5662    23338
Let's establish the restrictions on the resources b_i(t), i = 1,…,M for the next planning period t = t+1 (the second period). The global restrictions 5, 6 and 8 are not completely spent, i.e., ΔB_i = B_i − R_i > 0, i = 5, 6, 8; therefore we leave them unchanged (see the column "resource availability"). In general, there should be a norm of resource expenses, and the comparison should be made not with zero but with this norm. The restrictions 1, 2, 3, 4 and 7 are completely spent, i.e., ΔB_i = B_i − R_i = 0; therefore these restrictions are increased. As the coefficient of the increase in resources we accept 5%, i.e., the new values of the resources for the second year will be b_i(t+1) = b_i(t) + 0.05 b_i(t), i = 1, 2, 3, 4, 7. As a result they become equal to: b_1 = 16000 + 800 = 16800, b_2 = 21500 + 1075 = 22575, b_3 = 12300 + 615 = 12915, b_4 = 14600 + 730 = 15330, b_7 = 11400 + 570 = 11970.
The results of the calculation for the second planning period: λ° = 0.2959. The value of the optimum point: X° = {x_1(2)=803, x_2(2)=1350.7, x_3(2)=4355, x_4(2)=235.4, x_5(2)=2627.3, x_6(2)=0.0, x_7(2)=1811.5, x_8(2)=427.9, x_9(2)=1167.0, x_10(2)=1115.3, x_11(2)=1893.8, x_12(2)=0.0};
The values of the criteria at the optimum point f_k(X°(2)), k = 1,…,8: f_1(X°(2))=1993000, f_2(X°(2))=2213000, f_3(X°(2))=1576000, f_4(X°(2))=2330000, f_5(X°(2))=1253000, f_6(X°(2))=1894000, f_7(X°(2))=11259000, f_8(X°(2))=1126000; and the relative estimates λ_k(X°(2)), k = 1,…,8: λ_1(X°(2))=0.2959, λ_2(X°(2))=0.2959, λ_3(X°(2))=0.2959, λ_4(X°(2))=0.2959, λ_5(X°(2))=0.2959, λ_6(X°(2))=0.2959, λ_7(X°(2))=0.874, λ_8(X°(2))=0.874.
Thus, the maximum relative assessment λ° is equal to the values of the independent criteria in compliance with (5.3.7): λ° = λ_q(X°(2)), q = 1,…,6, and also satisfies the conditions (5.2.8) for the seventh and eighth criteria: λ° ≤ λ_k(X°(2)), k = 7, 8.
Analysis of the results of the decision and its acceptance. The analysis of the results is carried out similarly to the first year. The loading of resources on each LS, both its own and in terms of all global resources, is checked: r^q_i = A_q X°_q, i = 1,…,M, q = 1,…,6. Then the total expenses of the global resources are defined:

R_i = Σ_{q=1}^{6} r^q_i = A X°, i = 1,…,8:

R_1(X°)=16800, R_2(X°)=22575, R_3(X°)=12915, R_4(X°)=15330, R_5(X°)=8696, R_6(X°)=5183, R_7(X°)=11970, R_8(X°)=14817; and the expenses of the local subsystems: R_9(X°)=5629, R_10(X°)=4770, R_11(X°)=2619, R_12(X°)=4299, R_13(X°)=2443, R_14(X°)=5662.
The received expenses of the global resources are compared with the opportunities of the firm (concern) for their acquisition b_i, i = 1,…,M. Let's define the deviations ΔR_i = b_i − R_i, i = 1,…,8, for the global resources: ΔR_1=0, ΔR_2=0, ΔR_3=0, ΔR_4=0, ΔR_5=3.7, ΔR_6=3816.6, ΔR_7=0, ΔR_8=3983.3; and the deviations ΔR_i = b_i − R_i, i = 9,…,14, for the local resources: ΔR_9=12371, ΔR_10=12230, ΔR_11=15381, ΔR_12=19701, ΔR_13=18557, ΔR_14=23338.
These resources form the basis of the development of a vector of resource management and of the formation of the loading of resources shown in Table 5.1.
Calculations for the subsequent planning periods are carried out similarly.
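The yearly planning loop of this section — solve the λ-problem, grow the fully used resources by 5%, re-solve — can be sketched numerically. A toy two-subsystem model with hypothetical data (not the book's six-division example), using `scipy.optimize.linprog`; Theorem 3's uniform growth shows up as the single level λ shared by both subsystems in each year:

```python
# Toy illustration of section 5.3.2: solve the lambda-problem for year t, increase
# the binding restrictions by 5%, and re-solve for year t+1.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],   # x1 <= 3  (own restriction of subsystem 1)
              [0.0, 1.0],   # x2 <= 3  (own restriction of subsystem 2)
              [1.0, 1.0]])  # x1 + x2 <= b3 (global resource)
C = np.array([[1.0, 0.0],   # independent criteria: f1 = x1,
              [0.0, 1.0]])  #                       f2 = x2
b = np.array([3.0, 3.0, 4.0])

def solve_lambda(C, A, b):
    # f*_k: optima of the separate one-criterion problems (linprog minimizes)
    f_star = np.array([-linprog(-ck, A_ub=A, b_ub=b).fun for ck in C])
    K, (M, N) = len(C), A.shape
    # variables z = (x_1..x_N, lam); maximize lam
    A_ub = np.zeros((K + M, N + 1))
    A_ub[:K, :N] = -C / f_star[:, None]   # lam - sum_j (c_j^k / f*_k) x_j <= 0
    A_ub[:K, N] = 1.0
    A_ub[K:, :N] = A                      # A x <= b
    b_ub = np.concatenate([np.zeros(K), b])
    res = linprog(np.r_[np.zeros(N), -1.0], A_ub=A_ub, b_ub=b_ub)
    return res.x[:N], res.x[N]

x1, lam1 = solve_lambda(C, A, b)          # year 1: x = (2, 2), lam = 2/3
used = np.isclose(A @ x1, b)              # which restrictions are fully loaded
b2 = np.where(used, 1.05 * b, b)          # grow the binding resources by 5%
x2, lam2 = solve_lambda(C, A, b2)         # year 2: both LS grow to the same level
print(lam1, lam2)                         # ~0.6667  ~0.7
```

Only the global restriction is binding in year 1, so only it is increased; the resulting increment Δλ = 0.7 − 2/3 is the same for both subsystems, as Theorem 3 asserts.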
CHAPTER 6
THE DUALITY OF VECTOR PROBLEMS IN LINEAR PROGRAMMING (VPLP)
The concept of duality plays a fundamental role in the theory of linear programming, and many authors have tried to transfer the ideas of dual problems in linear programming to vector problems in linear programming. At first, the vector problem in linear programming (VPLP) was investigated from the standpoint of duality mainly in connection with game theory [42, 43]. Further, numerous attempts were made to develop a theory of duality for VPLP [46, 47, 49, 50]. A review of the foreign literature in this direction is presented rather widely in [47], and of the domestic literature in [11, 14, 17]. A shortcoming of these works is the absence of an axiomatics of vector optimization and, as a result, of a principle of optimality defining one decision as better than another. In this chapter, to study the problems of duality in VPLP, the axioms of vector optimization of Chapter 2 and the methods of solving VPLP constructed on their basis in Chapter 3 are used.
6.1. The duality of a problem in linear programming

We briefly describe the main results of the duality theory in linear programming in accordance with [45], which will be used below. A pair of dual problems in linear programming (PLP) is considered.
Direct problem:

F° = max Σ_{j=1}^{N} c_j x_j,   (6.1.1)
Σ_{j=1}^{N} a_ij x_j ≤ b_i, i = 1,…,M,   (6.1.2)
x_j ≥ 0, j = 1,…,N.   (6.1.3)

Dual problem:

U° = min Σ_{i=1}^{M} b_i y_i,   (6.1.4)
Σ_{i=1}^{M} a_ij y_i ≥ c_j, j = 1,…,N,   (6.1.5)
y_i ≥ 0, i = 1,…,M.   (6.1.6)

Or, in matrix form:

max CX, AX ≤ B, X ≥ 0   ↔   min BY, YA ≥ C, Y ≥ 0.

The results of the solutions are X* = {x*_j, j = 1,…,N} with the value CX*, and Y* = {y*_i, i = 1,…,M} with the value BY*; the result of duality is CX* = BY*.
Three theorems are connected with the duality of a problem in linear programming.
Theorem 6.1 (theorem of existence). In order that a problem in linear programming has a solution, it is necessary and sufficient that the admissible sets of both the direct and the dual problems are not empty.
Theorem 6.2 (theorem of duality). An admissible vector X* is optimal if and only if there is an admissible vector Y* such that

CX* = BY*,   (6.1.7)

and, conversely, Y* is optimal if and only if there is a vector X* for which the same equality holds.
Theorem 6.3 (theorem of complementary slackness, or the second theorem of duality). In order that admissible vectors X* and Y* are solutions of the dual pair of problems, it is necessary and sufficient that they meet the conditions of complementary slackness:

(C − Y*A) X* = 0,   (6.1.8)
Y* (B − AX*) = 0.   (6.1.9)

For the study of PLP, Lagrange's function is used, which for the direct problem is defined as follows:

L(X, Y) = CX + Y(B − AX).   (6.1.10)

Then, according to the Kuhn–Tucker theorem, X* is the solution of the problem (6.1.1)–(6.1.3) if there is a row vector Y* such that (X*, Y*) satisfies the Kuhn–Tucker conditions [45]:

∂L(X, Y)/∂X = C − Y*A ≤ 0, X* ∂L(X, Y)/∂X = (C − Y*A) X* = 0, X* ≥ 0,
∂L(X, Y)/∂Y = B − AX* ≥ 0, Y* ∂L(X, Y)/∂Y = Y*(B − AX*) = 0, Y* ≥ 0.

Similarly, Lagrange's function and the Kuhn–Tucker conditions for the dual problem (6.1.4)–(6.1.6) are presented.
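The one-criterion duality results above are easy to verify numerically. A sketch on a small illustrative pair of dual PLP (the data are not from the book), using `scipy.optimize.linprog`:

```python
# Numerical check of Theorems 6.2 and 6.3 on a small pair of dual PLP.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# direct problem (6.1.1)-(6.1.3): max CX, AX <= B, X >= 0 (linprog minimizes, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b)
# dual problem (6.1.4)-(6.1.6): min BY, A'Y >= C, Y >= 0 (">=" rewritten as "<=" by negation)
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print(-primal.fun, dual.fun)          # 36.0 36.0 -- Theorem 6.2: CX* = BY*
# Theorem 6.3 (complementary slackness): both products vanish at the optimum
print((c - dual.x @ A) * primal.x)    # ~[0, 0]
print(dual.x * (b - A @ primal.x))    # ~[0, 0, 0]
```

Any LP solver that returns both primal and dual solutions would do; the point is only that the equalities (6.1.7)–(6.1.9) hold at the computed optimum.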
6.2. VPLP with a maximum vector of the objective function (VPLP-max) and the duality problem

6.2.1. Construction of the dual VPLP-max

A vector problem in linear programming with a maximizing vector criterion function (VPLP-max) is considered as the direct problem:

max F(X) = {max f_k(X) ≡ Σ_{j=1}^{N} c^k_j x_j, k = 1,…,K},   (6.2.1)
Σ_{j=1}^{N} a_ij x_j ≤ b_i, i = 1,…,M,   (6.2.2)
x_j ≥ 0, j = 1,…,N.   (6.2.3)

We assume that the set of admissible points is not empty, S ≠ ∅. For the solution of this class of problems, the standard normalization of criteria, axioms 1 and 2, the principle of optimality 1, and the algorithm of the solution of VPLP with equivalent criteria constructed on this basis are used (see chapters 2 and 3). As a result of the solution of VPLP (6.2.1)–(6.2.3), the following is received:
• the optimum points X*_k, k = 1,…,K and the values of the criterion functions at these points f*_k, k = 1,…,K, which represent the boundary of the Pareto set;
• the worst, unchangeable part of each criterion f^0_k, k = 1,…,K; for this class of restrictions f^0_k = 0, k = 1,…,K;
• X° — the optimum point of the solution of the VPMP with equivalent criteria, and λ° — the maximum relative assessment, which is the guaranteed level for all criteria in relative units.
For each component f_k(X), k = 1,…,K of the vector criterion (6.2.1) we present a dual problem.
For the first criterion, k = 1:

U° = min Σ_{i=1}^{M} b_i y_i,
Σ_{i=1}^{M} a_ij y_i ≥ c^1_j, j = 1,…,N,
y_i ≥ 0, i = 1,…,M.

As a result of solving this task we receive Y*_1 — the point of the optimum, and U*_1 = B Y*_1 — the value of the criterion function at this point. At the same time, from the duality of one-criterion problems it follows that f*_1 = U*_1, k = 1 ∈ K. It is similarly possible to construct the dual problems for the other criteria k = 2,…,K; for the final, K-th criterion we present the dual problem:

U° = min Σ_{i=1}^{M} b_i y_i,
Σ_{i=1}^{M} a_ij y_i ≥ c^K_j, j = 1,…,N,
y_i ≥ 0, i = 1,…,M,

where K is the index of the last criterion and K is the set of criteria. As a result of the decision we receive Y*_K — the optimum point, and U*_K = B Y*_K — the value of the criterion function at this point. At the same time, from the duality of one-criterion problems it follows that f*_K = U*_K, K ∈ K.
These K problems differ from each other only in the vector-column of restrictions {{c^k_j, j = 1,…,N}', k = 1,…,K}, and the values of the objective functions are determined by the same equation:

U_1 = … = U_K = Σ_{i=1}^{M} b_i y_i.

Therefore, uniting them, we receive a dual problem for VPLP-max (6.2.1)–(6.2.3):

min Σ_{i=1}^{M} b_i y_i,   (6.2.4)
Σ_{i=1}^{M} a_ij y_i ≥ {{c^k_j, j = 1,…,N}', k = 1,…,K},   (6.2.5)
y_i ≥ 0, i = 1,…,M.   (6.2.6)

As a result of the solution of the problem (6.2.4)–(6.2.6), dual to VPLP-max (6.2.1)–(6.2.3), we receive:
Y*_k = {y*_i, i = 1,…,M} — the optimum point for each k-th restriction, k = 1,…,K;
U*_k = B Y*_k — the values of the objective function (criterion) at these points, which represent the boundary of the Pareto set.
At the same time, the duality condition for each criterion remains: f*_k = U*_k, k ∈ K. Thus, VPLP-max (6.2.1)–(6.2.3) corresponds to the dual one-criterion PLP (6.2.4)–(6.2.6), but with a criterion which is defined on a set of restrictions K; the number of such restrictions is equal to the number of criteria of the problem (6.2.1)–(6.2.3), k = 1,…,K. Let's call the PLP (6.2.4)–(6.2.6) a "problem in linear programming with a set of restrictions". The existence of such a class of problems is mentioned in [78] as "a problem of optimization with a set of conditions".
There is a question of whether the problems (6.2.1)–(6.2.3) and (6.2.4)–(6.2.6) are really dual in relation to each other. The result of the decision of VPLP-max is the set of Pareto-optimal points. These points are identified by axioms 1–3, and they can be defined by the principles of optimality. Therefore, in the dual problem (6.2.4)–(6.2.6) there has to be a set of points similar to the Pareto set of the problem (6.2.1)–(6.2.3), which is also the result of the decision. The solution of this question requires the proof of duality theorems similar to those for the one-criterion problems of linear programming in section 6.1:
the theorem of existence defines that the solutions of the direct and dual problems are not empty;
the theorem of duality defines the optimality of the received decisions;
the second theorem of duality defines the conditions of complementary slackness.
For the proof of the presented theorems as applied to VPLP-max, it is necessary to consider an algorithm of the solution of the dual problem (6.2.4)–(6.2.6) — the PLP on a set of restrictions — with equivalent restrictions and with a set prioritized restriction.
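The per-criterion duality f*_k = U*_k can be checked numerically before any of the theorems below. A sketch on illustrative data with two criteria (not the book's example): each vector-restriction c^k defines its own one-criterion dual, whose optimum B Y*_k reproduces the primal optimum f*_k.

```python
# Check f*_k = U*_k, k in K, on a toy VPLP-max with two criteria.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, 3.0, 4.0])
C = np.array([[1.0, 0.0],    # criterion 1: f1(X) = x1
              [0.0, 1.0]])   # criterion 2: f2(X) = x2

results = []
for ck in C:
    f_star = -linprog(-ck, A_ub=A, b_ub=b).fun     # primal: max c^k x, Ax <= b, x >= 0
    U_star = linprog(b, A_ub=-A.T, b_ub=-ck).fun   # dual:   min b y, A'y >= c^k, y >= 0
    results.append((f_star, U_star))

print(results)   # [(3.0, 3.0), (3.0, 3.0)]: f*_k = U*_k = B Y*_k for every k
```

The K duals share the objective Σ b_i y_i and differ only in the right-hand side c^k, which is exactly what motivates uniting them into the problem (6.2.4)–(6.2.6).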
6.2.2. The algorithm of the decision of PLP-min with a set of restrictions

For the creation of an algorithm of the solution of a problem in linear programming with a set of restrictions, we consider the λ-problem, which is the cornerstone of the algorithm of the solution of VPLP (6.2.1)–(6.2.3) with equivalent criteria, and the task dual to it.
The λ-problem:

λ° = max λ,   (6.2.7)
λ − Σ_{j=1}^{N} (c^k_j / f*_k) x_j ≤ 0, k = 1,…,K,   (6.2.8)
Σ_{j=1}^{N} a_ij x_j ≤ b_i, i = 1,…,M,   (6.2.9)
λ ≥ 0, x_j ≥ 0, j = 1,…,N,   (6.2.10)

where f*_k, k = 1,…,K is the value of the objective function at the optimum point X*_k, obtained by solving VPLP (6.2.1)–(6.2.3) with the k-th criterion, k = 1,…,K. As a result of the solution of the λ-problem, we receive X° — the optimum point of the solution of the VPLP with equivalent criteria, and λ° — the maximum relative assessment, which is the guaranteed level for all criteria in relative units.
Considering λ as the (N+1)-th variable in the problem (6.2.7)–(6.2.10), we construct the problem dual to the λ-problem, which takes the form:

U° = min Σ_{i=1}^{M} b_i y_i,   (6.2.11)
Σ_{k=1}^{K} β_k ≥ 1,   (6.2.12)
−Σ_{k=1}^{K} (c^k_j / f*_k) β_k + Σ_{i=1}^{M} a_ij y_i ≥ 0, j = 1,…,N,   (6.2.13)
β_k ≥ 0, k = 1,…,K, y_i ≥ 0, i = 1,…,M.   (6.2.14)

As a result of the decision we receive Y° = {y°_i, i = 1,…,M} — the optimum point, and U° = B Y° — the value of the criterion function at this point; the duality condition gives λ° = U° = B Y°. The resulting problem (6.2.11)–(6.2.14) is dual to the λ-problem (6.2.7)–(6.2.10) and is the basis of the algorithm.
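The λ-problem and its dual can be set up and solved side by side. A sketch on the same illustrative two-criteria data as above (not the book's example); by LP duality, the dual optimum B Y° equals the guaranteed level λ°:

```python
# The lambda-problem (6.2.7)-(6.2.10) and its dual (6.2.11)-(6.2.14) on toy data.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([3.0, 3.0, 4.0])
C = np.array([[1.0, 0.0], [0.0, 1.0]])               # two criteria
f_star = np.array([-linprog(-ck, A_ub=A, b_ub=b).fun for ck in C])  # f*_k = (3, 3)

K, (M, N) = len(C), A.shape
# direct lambda-problem: variables (x, lam), maximize lam
Aub = np.block([[-C / f_star[:, None], np.ones((K, 1))],   # lam - (c^k/f*_k) x <= 0
                [A, np.zeros((M, 1))]])                    # A x <= b
lp = linprog(np.r_[np.zeros(N), -1.0], A_ub=Aub, b_ub=np.r_[np.zeros(K), b])
lam_opt = lp.x[N]                                          # lambda° = 2/3

# dual problem: variables (beta, y); min b·y subject to
#   sum_k beta_k >= 1  and  -(c_j^k/f*_k)·beta + (A'y)_j >= 0, j = 1..N
Dub = np.block([[-np.ones((1, K)), np.zeros((1, M))],      # -(sum beta) <= -1
                [(C / f_star[:, None]).T, -A.T]])          # (c^k/f*_k)beta - A'y <= 0
dl = linprog(np.r_[np.zeros(K), b], A_ub=Dub, b_ub=np.r_[-1.0, np.zeros(N)])
print(lam_opt, dl.fun)    # ~0.6667 ~0.6667 : lambda° = B·Y°
```

The optimal β° splits the unit weight between the two normalized restrictions, which is exactly the role the β_k play in the G-problem of the next subsection.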
6.2.3. Algorithm 4. The solution of PLP with a set of restrictions

Step 1. Solve the problem (6.2.4)–(6.2.6) for each vector-restriction {{c^k_j, j = 1,…,N}', k = 1,…,K}. As a result we receive Y*_k = {y*_i, i = 1,…,M} — the optimum point, and U*_k = B Y*_k — the value of the objective function (criterion) at this point, for all k = 1,…,K vector constraints.
Step 2. The normalization of the restrictions (6.2.5) is carried out:

{c^k_j / U*_k, j = 1,…,N}', k = 1,…,K.

Then the problem (6.2.4)–(6.2.6) takes the form:

min U(Y) = Σ_{i=1}^{M} b_i y_i,   (6.2.15)
Σ_{i=1}^{M} a_ij y_i ≥ {{c^k_j / U*_k, j = 1,…,N}', k = 1,…,K},   (6.2.16)
y_i ≥ 0, i = 1,…,M.   (6.2.17)

Note that the solution of the problem (6.2.15)–(6.2.17) under any one normalized vector-restriction of (6.2.16) gives the optimum point Y*_k with U*_k = 1, k = 1,…,K.
Step 3. The G-problem is constructed. We introduce into the structure of the normalized matrix of restrictions K additional variables β_k, k = 1,…,K, such that Σ_{k=1}^{K} β_k ≥ 1, and move the restrictions to the left side. The result is the one-criterion problem:

min U(Y) = Σ_{i=1}^{M} b_i y_i,   (6.2.18)
−Σ_{k=1}^{K} (c^k_j / U*_k) β_k + Σ_{i=1}^{M} a_ij y_i ≥ 0, j = 1,…,N,   (6.2.19)
Σ_{k=1}^{K} β_k ≥ 1, β_k ≥ 0, k = 1,…,K, y_i ≥ 0, i = 1,…,M.   (6.2.20)

Let's call this the G-problem.
Step 4. The solution of the G-problem. The G-problem is a standard problem in linear programming and, for its decision, standard methods such as the simplex method are used. As a result of the solution of the G-problem we receive {β°, Y°} — the optimum point of the solution of the PLP with equivalent restrictions, where Y° = {y°_i, i = 1,…,M} are the coordinates of the optimum point dual to X°, β° = {β°_k, k = 1,…,K} is the vector of additional variables with Σ_{k=1}^{K} β°_k ≥ 1, and B Y° is the value of the objective function at the optimum point Y°.
6.2.4. An algorithm for solving PLP in a set of restrictions with a restriction priority

To create an algorithm for solving a problem in linear programming in a set of restrictions with a restriction priority, we use Algorithm 2 — the solution of VPLP with a prioritized criterion. The algorithm serves for choosing any point of the Pareto set, S°_q ⊂ S° ⊂ S. At the fifth step of that algorithm, a λ-problem (a direct problem) similar to (6.2.7)–(6.2.10) is created which, with a prioritized q-th criterion, takes the form:

λ° = max λ,   (6.2.21)
λ − Σ_{j=1}^{N} p^q_k (c^k_j / f*_k) x_j ≤ 0, k = 1,…,K, q ∈ K,   (6.2.22)
Σ_{j=1}^{N} a_ij x_j ≤ b_i, i = 1,…,M,   (6.2.23)
λ ≥ 0, x_j ≥ 0, j = 1,…,N,   (6.2.24)

where the set vector of priorities p^q_k, k = 1,…,K, q ∈ K lies within the limits:

p^q_k(X°) ≤ p^q_k ≤ p^q_k(X*_q), k = 1,…,K, q ∈ K,   (6.2.25)

where X*_q is the optimum point for the q-th criterion, X° is the optimum point of the solution of the VPLP with equivalent criteria, and p^q_k(X°) = λ_q(X°)/λ_k(X°), p^q_k(X*_q) = λ_q(X*_q)/λ_k(X*_q), k = 1,…,K, q ∈ K.
As a result of the solution of the λ-problem (6.2.21)–(6.2.24), we receive the optimum point X° and the maximum relative assessment λ°, such that λ° ≤ p^q_k λ_k(X°), k = 1,…,K, q ∈ K, where the vector of priorities p^q_k, k = 1,…,K corresponds to the set concepts of the decision-makers concerning the priority of the q-th criterion over the others.
Considering λ as the (N+1)-th variable in the problem (6.2.21)–(6.2.24), we construct the problem dual to this λ-problem, which we will call the G-problem with a prioritized q-th restriction:

U° = min Σ_{i=1}^{M} b_i y_i,   (6.2.26)
Σ_{k=1}^{K} β_k ≥ 1,   (6.2.27)
−Σ_{k=1}^{K} p^q_k (c^k_j / f*_k) β_k + Σ_{i=1}^{M} a_ij y_i ≥ 0, j = 1,…,N,   (6.2.28)
β_k ≥ 0, k = 1,…,K, y_i ≥ 0, i = 1,…,M,   (6.2.29)

where the vector of priorities p^q_k, k = 1,…,K corresponds to the specified concepts of the DM about the priority of the q-th restriction q ∈ K over the other k = 1,…,K constraints. We have obtained a dual problem with a priority restriction, and this is the basis of the algorithm.
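The effect of the priority vector p^q can be seen on the same illustrative two-criteria data as before (hypothetical, not the book's example): with the limiting priorities computed at a chosen optimum point of the q-th criterion, the λ-problem (6.2.21)–(6.2.24) moves the solution from the "centre" of the Pareto set towards that boundary point.

```python
# The lambda-problem with a prioritized criterion (6.2.21)-(6.2.24) on toy data.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([3.0, 3.0, 4.0])
C = np.array([[1.0, 0.0], [0.0, 1.0]])
f_star = np.array([3.0, 3.0])              # optima of the one-criterion problems

q = 0                                      # criterion 1 is given priority
x_q = np.array([3.0, 1.0])                 # a chosen optimum point X*_q of criterion q
lam_at_xq = (C @ x_q) / f_star             # relative estimates at X*_q: (1, 1/3)
p = lam_at_xq[q] / lam_at_xq               # p^q_k = lam_q(X*_q)/lam_k(X*_q) = (1, 3)

K, (M, N) = len(C), A.shape
Aub = np.block([[-(p[:, None] * C / f_star[:, None]), np.ones((K, 1))],
                [A, np.zeros((M, 1))]])    # lam - p^q_k (c^k/f*_k) x <= 0; A x <= b
res = linprog(np.r_[np.zeros(N), -1.0], A_ub=Aub, b_ub=np.r_[np.zeros(K), b])
print(res.x[:N], res.x[N])                 # x ~ (3, 1), lambda° ~ 1: the point X*_1
```

With p^q at its lower limit p^q_k(X°) the same code reproduces the equivalent-criteria solution, so varying p^q within (6.2.25) sweeps a family of Pareto-optimal points.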
6.2.5. Algorithm 5. The solution to PLP in a set of restrictions with a restriction priority Step 1. Let's solve PLP (6.2.4)(6.2.6) at equivalent restrictions with Algorithm 4. As a result of the solution to the problem we will receive: Y *k ={y *i , i= 1 , M } — the optimum point for each kth restriction, k= 1, K ;
U *k =BY *k — values of the objective functions (criterion) in these points which represent a border of the Pareto set; Yo={Eo, Yo} — an optimum point of the solution to PLP at equivalent restrictions, Yo={y io , i= 1 , M } – coordinates of an optimum point of dual Xo, and Eo={E ok , k= 1, K } – a vector of additional variables such that K
¦ k
E ok t 1.
1
BYo – the value of the objective function at the Yo optimum point. At each point of Y *k , k= 1, K we will calculate the sizes of all criteria: {Uq(Y *k ), q= 1 , K }, k= 1, K , which show the sizes of each of the qK criteria upon transition from one optimum point to another (from one vector of restriction of ck={c kj , j= 1 , N }c to another k= 1, K ). At a point of Yo we will calculate the sizes of criteria Uk(Yo), k= 1, K , i.e., the point of YoSoS represents a kind of "centre" of a greater number of Pareto in relative units. This information is also the basis for further study of the structure of a greater number of Pareto. The data of the Oproblem (6.2.18)(6.2.20) are remembered. Step 2. On the basis of the obtained information, the prioritized restriction (e.g., with qK) is chosen.
Chapter 6
124
Step 3. The limits to a change in the size of a prioritized restriction of qK are defined for what, at points of Yo and X *q are priorities of the qK restriction in relation to other criteria: P qk (Xo) = Uq(Yo)/ Uk(Yo), k= 1, K , P qk (X *q ) = Uq(Y *q )/Uk(Y *q ), k= 1, K . The limits to a change in the set vector of priorities are also set: P qk (Yo) d P qk d P qk (Y *q ), k= 1, K , qK.
(6.2.30)
Step 4. The Gproblem is under construction for all normalized restrictions developed with coefficients of Ek , k= 1, K , such that
¦
K
Ek t 1.
k 1
Then, problem (6.2.18)(6.2.20) will take the form: M
min U(Y)= ¦ b y , i i
(6.2.31)
i 1
M
¦a
ij
q k * y i t { p k { c j / U k } , j= 1 , N }c, k= 1, K },
(6.2.32)
i 1
(6.2.33) yi t 0, i= 1 , M , Let's enter into the structure of the normalized matrix of restrictions in K addition with Ek, k= 1, K , a variable such that ¦ Ek t 1, and we will k 1
transfer restrictions to the left part. The problem will then turn out: M
min U(Y)= ¦ bi y i ,
(6.2.34)
i 1
K
p kq (c kj / U k* ) E k
¦
k 1
¦a
ij
y i t 0 , j= 1 , N ,
(6.2.35)
i 1
K
¦ k
M
E
k
t 1 , y i t 0 , i= 1 , M , Ek, k= 1, K .
(6.2.36)
1
Let's call it the Gproblem with a restriction priority. Step 5. The solution to the Gproblem. The Gproblem is a standard problem in linear programming and for its decision, standard methods such as the simplex method are used. As a result of the solution to the Gproblem with a restriction priority we will receive: Yo={Eo, Yo} is an optimum point of the solution to PLP at equivalent restrictions, where Yo={y io , i= 1 , M } – coordinates of an optimum point
The Duality of Vector Problems in Linear Programming (VPLP)
125
dual to X°, and β° = {β_k°, k = 1,K} is the vector of additional variables, Σ_{k=1}^{K} β_k° ≥ 1;
BY° — the value of the criterion function at the optimum point Y°.
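Steps 4 and 5 can be sketched numerically. The function below is my illustration, not the book's program: it assembles the G-problem (6.2.34)–(6.2.36) for given data (C, A, B), optimum values U*, and a priority vector p, and solves it with SciPy's `linprog` in place of the MATLAB solver used in the book. The data of Example 6.1 (section 6.2.7) serve purely as an illustration; with all priorities equal to 1, the optimum must equal the λ-problem value 100/107.

```python
# A sketch (not the author's code): building the G-problem (6.2.34)-(6.2.36)
# min B'y  s.t.  -sum_k p_k (c_j^k/U_k*) beta_k + sum_i a_ij y_i >= 0  (j = 1..N),
#                sum_k beta_k >= 1,  beta, y >= 0.
import numpy as np
from scipy.optimize import linprog

def solve_g_problem(C, A, B, U_star, p):
    """Variables are (beta_1..beta_K, y_1..y_M); U_star holds f_k*, p the priorities."""
    K, N = C.shape          # K criteria, N primal variables
    M = A.shape[0]          # M primal restrictions
    # Row j: -sum_k p_k*(C[k,j]/U_star[k])*beta_k + sum_i A[i,j]*y_i >= 0
    G = np.hstack([-(p[:, None] * C / U_star[:, None]).T, A.T])   # shape (N, K+M)
    norm = np.hstack([np.ones(K), np.zeros(M)])                   # sum_k beta_k >= 1
    A_ub = -np.vstack([G, norm])            # linprog works with '<=' rows
    b_ub = np.hstack([np.zeros(N), -1.0])
    c = np.hstack([np.zeros(K), B])         # minimize B'y
    return linprog(c, A_ub=A_ub, b_ub=b_ub) # default bounds are x >= 0

# Illustration with the data of Example 6.1, equivalent restrictions (p = 1):
C = np.array([[2.0, 1.0], [1.0, 3.0]]); A = np.array([[8.0, 5.0], [2.0, 5.0]])
B = np.array([40.0, 20.0]); U_star = np.array([10.0, 12.0])
res = solve_g_problem(C, A, B, U_star, p=np.ones(2))
print(res.fun)   # about 100/107 = 0.9346
```

With a non-unit priority vector, e.g. `p = np.array([1.0, 1.7])`, the same function reproduces the G-problem with a restriction priority of Step 4.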
6.2.6. Duality theorems in VPLPmax
Theorem 6.4 (Theorem of the existence of VPLPmax). For the maximizing vector problem in linear programming — VPLPmax (6.2.1)–(6.2.3) — to have a solution, it is necessary that the admissible sets of both the direct problem (6.2.1)–(6.2.3) and the dual problem (6.2.4)–(6.2.6) are not empty.
Proof. By assumption, the admissible set of points of VPLPmax (6.2.1)–(6.2.3) is not empty and represents a compact set. Therefore, there is a set of optimal solutions k = 1,K of VPLPmax (6.2.1)–(6.2.3). In the construction of the dual problem (6.2.4)–(6.2.6) as a set of dual problems, there are also k = 1,K optimal solutions. Therefore, the admissible set of points of the PLP with the set of restrictions (6.2.4)–(6.2.6) is also not empty, which meets the conditions of the theorem. This completes the proof.
Theorem 6.5 (Theorem of the duality of VPLPmax). The admissible vector X° = (λ°, X°) of problem (6.2.1)–(6.2.3) is optimal only if there is an admissible vector Y° = (β°, Y°) of problem (6.2.4)–(6.2.6) such that λ° = BY°; conversely, Y° = (β°, Y°) is optimal only if there is a vector X° = (λ°, X°) for which the same equality holds.
Proof. The duality theorem for a one-criterion problem of linear programming rests on the proof for the single optimum point of the direct and dual problems. In VPLPmax (6.2.1)–(6.2.3) there is a set of optimum points which, in total, form the Pareto set, and the proof of duality is necessary for each point. We will carry out the proof of duality for the set of Pareto-optimal points in three stages: the proof of the duality of the boundary points (K such points); the proof of the duality of the point at equivalent criteria (one such point); and the proof of the duality of the points at a set priority of criteria (K such subsets).
Stage 1. The duality of the boundary points of the Pareto set (K such points) follows from the duality of the K one-criterion problems received at the creation of problem (6.2.4)–(6.2.6).
At the solution of the direct problem (6.2.1)–(6.2.3) for each criterion k = 1,K, the optimum points X_k* = {x_j*, j = 1,N}, k = 1,K, are received. At the solution of the PLP with the set of restrictions (6.2.4)–(6.2.6), the optimum points Y_k* = {y_i*, i = 1,M}, k = 1,K, are received. For each pair of direct and dual problems at the points X_k*, Y_k*, k∈K, according to the duality theorem for a one-criterion problem:
C_k X_k* = BY_k*, k = 1,K. (6.2.37)
Stage 2. The duality of the point X° received at the solution of VPLPmax (6.2.1)–(6.2.3) at equivalent criteria and of the optimum point Y° received from the solution of the PLP with the set of restrictions (6.2.4)–(6.2.6) follows from Algorithm 4 for the solution of this problem. Comparing the G-problem (6.2.18)–(6.2.20), received at Step 3, with the dual of the λ-problem (6.2.11)–(6.2.14), we see that these problems coincide, since the values f_k* and U_k* are received from dual problems and are equal to each other. From here, the optimum points X° = (λ°, X°) and Y° = (β°, Y°) are dual and
λ° = BY°. (6.2.38)
Stage 3. The duality of the point received at the solution of VPLPmax (6.2.1)–(6.2.3) with the set prioritized criterion and of the point of the solution of the PLP with the set of restrictions (6.2.4)–(6.2.6) at the set restriction priority follows from Algorithm 5 for the solution of this problem. At Step 3, we received the G-problem, which is dual to the λ-problem. From here, the optimum points X° = (λ°, X°) and Y° = (β°, Y°) are dual. By changing the vector of priorities in (6.2.30), it is possible to receive any point X° from the subset of Pareto-optimal points with the prioritized qth criterion, S_q^o ⊂ S° ⊂ S, q∈K. By changing the prioritized criterion q = 1,K, we can receive similar optimum points, and those dual to them, on all subsets of Pareto-optimal points with the prioritized qth criterion, X° ∈ S_q^o ⊂ S° ⊂ S, q∈K. In total, all these points form the set (∪_{X°∈S_q^o} X°) = S_q^o. From here, for any optimum point X° ∈ S_q^o ⊂ S° of the direct problem and the corresponding point Y° ∈ S_q^o ⊂ S° of the dual problem, the duality condition is satisfied:
λ° = BY°. (6.2.39)
Uniting all the points of the direct and dual problems (6.2.37)–(6.2.39) received at each stage, we receive the sets of Pareto-optimal points for the direct problem:
{X_k*, k = 1,K} ∪ X° ∪ {S_q^o, q = 1,K} = S° (6.2.40)
and the dual problem:
{Y_k*, k = 1,K} ∪ Y° ∪ {S_q^o, q = 1,K} = S°.
These sets of Pareto-optimal points are dual, and for each of them the duality condition λ° = BY° is satisfied. The theorem is proved.
6.2.7. Duality of VPLPmax in test examples
We show the properties of Theorems 6.4 and 6.5 in a numerical example of the solution of dual VPLP with a maximum vector criterion function (6.2.1)–(6.2.3).
Example 6.1.
max F(X) = {max f1(X) = 2x1 + x2, (6.2.41)
max f2(X) = x1 + 3x2}, (6.2.42)
8x1 + 5x2 ≤ 40, (6.2.43)
2x1 + 5x2 ≤ 20, x1, x2 ≥ 0. (6.2.44)
The solution of this example with equivalent criteria is shown in section 3.2.2. The geometric interpretation of the admissible set of solutions is presented in Fig. 3.3. The admissible set of solutions of VPLP (6.2.43)–(6.2.44), the relative estimates of the criteria λ1(X), λ2(X), and the optimum point X° and λ° are shown in Fig. 3.4. We construct and solve the dual problem. The text of the program for the solution of the direct and dual problems in the MATLAB system is presented in the annex to this section. The results of the solutions of the direct and dual problems are shown in Table 6.1.
Table 6.1: Results of the solutions of the direct and dual problems.

Direct problem:
max F(X) = {max f1(X) = 2x1 + x2, max f2(X) = x1 + 3x2},
8x1 + 5x2 ≤ 40, 2x1 + 5x2 ≤ 20, x1, x2 ≥ 0.
Dual problem:
min U(Y) = 40y1 + 20y2,
8y1 + 2y2 ≥ {2, 1}', 5y1 + 5y2 ≥ {1, 3}', y1, y2 ≥ 0.

Result of the solution, by each criterion c_k, k = 1, 2:
X1* = (x1 = 5, x2 = 0), f1* = f1(X1*) = 10; X2* = (x1 = 0, x2 = 4), f2* = f2(X2*) = 12.
Result of the solution, on each restriction:
Y1* = (y1 = 0.25, y2 = 0), U1* = u1(Y1*) = 10; Y2* = (y1 = 0, y2 = 0.6), U2* = u2(Y2*) = 12.
Result of duality: f1* = U1* = 10; f2* = U2* = 12 (confirmation of Stage 1: the duality of the boundary points).

λ-problem: λ° = max λ,
λ − 0.2x1 − 0.1x2 ≤ 0, λ − (1/12)x1 − (1/4)x2 ≤ 0,
8x1 + 5x2 ≤ 40, 2x1 + 5x2 ≤ 20, λ, x1, x2 ≥ 0.
G-problem: min U(Y) = 40y1 + 20y2,
−0.2β1 − (1/12)β2 + 8y1 + 2y2 ≥ 0, −0.1β1 − (1/4)β2 + 5y1 + 5y2 ≥ 0,
β1 + β2 ≥ 1, β1, β2, y1, y2 ≥ 0.

Result of the solution:
X° = {x1° = 360/107, x2° = 280/107}, λ° = 100/107 = 0.9346;
Y° = (β1° = 95/107, β2° = 12/107, y1° = 5/214, y2° = 0), U°(Y°) = 100/107 = 0.9346.

Verification of the solution:
f1° = 2x1° + x2° = 1000/107, λ1° = f1°/f1* = 100/107;
f2° = x1° + 3x2° = 1200/107, λ2° = f2°/f2* = 100/107;
Y° = (y1 = 5/214, y2 = 0), U°(Y°) = 100/107.
Result of duality: λ° = BY° = 100/107 = 0.9346 (confirmation of Stage 2: the duality of the point at equivalent criteria).

Explanations of Table 6.1. Checking the direct problem. As a result of the solution of the λ-problem (i.e., of VPLPmax), we received the maximum relative assessment λ° and the optimum point X°: λ° = 100/107, X° = {x1° = 360/107, x2° = 280/107}.
Check:
f1° = 2x1° + x2° = 1000/107, λ1° = f1°/f1* = 100/107;
f2° = x1° + 3x2° = 1200/107, λ2° = f2°/f2* = 100/107.
Thus, the result corresponds to the principle of optimality λ° ≤ λk(X°), k = 1, 2, and, according to the corollary of Theorem 2.2, for a two-criteria problem strict equality holds at the optimum point X°: λ° = λ1(X°) = λ2(X°).
Checking the dual problem. Let's normalize the restrictions of the dual problem presented in the first block, having entered the additional variables β1, β2 such that β1 + β2 ≥ 1. We receive:
min U(Y) = 40y1 + 20y2, (6.2.46)
−0.2β1 − (1/12)β2 + 8y1 + 2y2 ≥ 0, (6.2.47)
−0.1β1 − (1/4)β2 + 5y1 + 5y2 ≥ 0, (6.2.48)
β1 + β2 ≥ 1, (6.2.49)
β1, β2, y1, y2 ≥ 0. (6.2.50)
It is easy to notice that the received G-problem is the dual of the λ-problem. It is required to find the variables β1 and β2, for which β1 + β2 ≥ 1, and the dual variables y1, y2, at which the criterion function U(Y) = 40y1 + 20y2 reaches a minimum. As a result of the solution we receive the optimum point:
Y° = (β1 = 95/107, β2 = 12/107, y1 = 5/214, y2 = 0), where U°(Y°) = 100/107.
Indeed, substituting β1 and β2 into the G-problem, we get:
min U(Y) = 40y1 + 20y2,
8y1 + 2y2 ≥ 20/107, 5y1 + 5y2 ≥ 25/214, y1, y2 ≥ 0.
At the solution of this problem, the optimum point is Y° = (y1 = 5/214, y2 = 0), U°(Y°) = 100/107. Thus, the solutions of the direct VPLP and the dual PLP with a set of restrictions are equal to each other, i.e., the conditions of Theorem 6.5 are confirmed in a numerical example. We construct and solve the dual problem with a set prioritized criterion. The results of the solutions of the direct and dual problems are shown in Table 6.2.
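These checks are easy to reproduce outside of MATLAB. The sketch below is mine, not the book's program: it solves the λ-problem of Table 6.1 and the G-problem (6.2.46)–(6.2.50) with SciPy's `linprog` and confirms λ° = BY° = 100/107 ≈ 0.9346.

```python
# Sketch: verifying lambda° = BY° = 100/107 for Example 6.1 (not the author's code).
from scipy.optimize import linprog

# lambda-problem: variables (lam, x1, x2); maximize lam => minimize -lam.
res_direct = linprog(
    c=[-1.0, 0.0, 0.0],
    A_ub=[[1.0, -0.2, -0.1],        # lam - 0.2 x1 - 0.1 x2 <= 0
          [1.0, -1/12, -3/12],      # lam - (1/12) x1 - (1/4) x2 <= 0
          [0.0, 8.0, 5.0],          # 8 x1 + 5 x2 <= 40
          [0.0, 2.0, 5.0]],         # 2 x1 + 5 x2 <= 20
    b_ub=[0.0, 0.0, 40.0, 20.0])

# G-problem (6.2.46)-(6.2.50): variables (beta1, beta2, y1, y2).
res_dual = linprog(
    c=[0.0, 0.0, 40.0, 20.0],        # min 40 y1 + 20 y2
    A_ub=[[0.2, 1/12, -8.0, -2.0],   # -0.2 b1 - (1/12) b2 + 8 y1 + 2 y2 >= 0
          [0.1, 1/4, -5.0, -5.0],    # -0.1 b1 - (1/4)  b2 + 5 y1 + 5 y2 >= 0
          [-1.0, -1.0, 0.0, 0.0]],   # b1 + b2 >= 1
    b_ub=[0.0, 0.0, -1.0])

print(-res_direct.fun, res_dual.fun)  # both 100/107 = 0.93457...
```

Note that `linprog` minimizes, so the maximization is entered as min(−λ) and every "≥" row is negated into "≤" form.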
Table 6.2: Results of the solutions of the direct and dual problems.

Direct problem: max f1(X) = 2x1 + x2, max f2(X) = x1 + 3x2,
8x1 + 5x2 ≤ 40, 2x1 + 5x2 ≤ 20, x1, x2 ≥ 0.
Dual problem: min U(Y) = 40y1 + 20y2,
8y1 + 2y2 ≥ {2, 1}', 5y1 + 5y2 ≥ {1, 3}', y1, y2 ≥ 0.
Result of the solution at equivalent criteria: presented in the left part of Table 6.1; at equivalent restrictions: presented in the right part of Table 6.1. Result of duality: CX* = BY*.

The priority of criterion 1 over the second is set: q = 1. Calculation of the priority of criterion 1 at the points X°, Xopt:
p12(X°) = L1x0/L2x0 = 1; p12(Xopt) = 1/L(1,2) = 2.4.
Choice of the priority of criterion 1, for example, in the middle: p12z = (p12(X°) + p12(Xopt))/2 = 1.7.

λ-problem: λ° = max λ,
λ − 0.2x1 − 0.1x2 ≤ 0, λ − p12z·((1/12)x1 + (1/4)x2) ≤ 0,
8x1 + 5x2 ≤ 40, 2x1 + 5x2 ≤ 20, λ, x1, x2 ≥ 0.
G-problem: min U(Y) = 40y1 + 20y2,
−0.2β1 − p12z·(1/12)β2 + 8y1 + 2y2 ≥ 0, −0.1β1 − p12z·(1/4)β2 + 5y1 + 5y2 ≥ 0,
β1 + β2 ≥ 1, β1, β2, y1, y2 ≥ 0.

Result of the solution:
X° = X0p = {x1° = 4.495, x2° = 0.807}, L0p = λ° = 0.9798;
Y° = (β1° = 0.9308, β2° = 0.0692, y1° = 0.0245, y2° = 0), U°(Y°) = 0.9798.

Verification of the solution:
f1° = 2x1° + x2° = 9.7983, λ1° = f1°/f1* = 0.98;
f2° = x1° + 3x2° = 6.9164, λ2° = f2°/f2* = 0.5764; p12z·λ2° = 1.7·0.5764 = 0.9798.
Restrictions: cvec0 = [0.2·yo(1) + p12z·(1/12)·yo(2), 0.1·yo(1) + p12z·(1/4)·yo(2)] = [0.1960, 0.1225];
Y° = (y1 = 0.0245, y2 = 0), U°(Y°) = 0.9798.
Result of duality: λ° = BY° = 0.9798. By changing the vector of priorities p12z, we can receive any point X° ∈ S_1^o ⊂ S° ⊂ S. By changing the prioritized criterion, we can receive similar optimum points, and those dual to them: X° ∈ S_q^o ⊂ S° ⊂ S, q = 1,K. (Confirmation of Stage 3: the duality of the points at the set prioritized criteria.)
Thus, the test example confirms the conclusions (proof) of Theorem 6.5.
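The priority case of Table 6.2 can be verified in the same way. The sketch below is mine, not the book's program: it solves the λ-problem and G-problem with the priority p12z = 1.7 using SciPy's `linprog` and confirms λ° = BY° ≈ 0.9798.

```python
# Sketch (not the book's program): the lambda-problem and G-problem of Table 6.2
# with the priority p12z = 1.7 of criterion 1 over criterion 2.
from scipy.optimize import linprog

p = 1.7
res_direct = linprog(
    c=[-1.0, 0.0, 0.0],                      # maximize lambda
    A_ub=[[1.0, -0.2, -0.1],                 # lam - 0.2 x1 - 0.1 x2 <= 0
          [1.0, -p / 12, -p / 4],            # lam - p*((1/12) x1 + (1/4) x2) <= 0
          [0.0, 8.0, 5.0],
          [0.0, 2.0, 5.0]],
    b_ub=[0.0, 0.0, 40.0, 20.0])

res_dual = linprog(
    c=[0.0, 0.0, 40.0, 20.0],                # min 40 y1 + 20 y2
    A_ub=[[0.2, p / 12, -8.0, -2.0],         # -0.2 b1 - p*(1/12) b2 + 8 y1 + 2 y2 >= 0
          [0.1, p / 4, -5.0, -5.0],          # -0.1 b1 - p*(1/4)  b2 + 5 y1 + 5 y2 >= 0
          [-1.0, -1.0, 0.0, 0.0]],           # b1 + b2 >= 1
    b_ub=[0.0, 0.0, -1.0])

print(-res_direct.fun, res_dual.fun)  # both about 0.9798
```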
6.2.8. The solution of direct and dual VPLP with a prioritized criterion (MATLAB)

% The solution of the vector linear programming problem
% Author: Mashunin Yury Konstantinovich
% The algorithm and program are intended for use in education and
% research; for commercial use contact:
% [email protected]
% opt F(X)={max f1(X)=2x1+x2; max f2(X)=x1+3x2};
% under constraints: 8x1+5x2<=40; 2x1+5x2<=20; x1, x2 >= 0.
disp('*** Source data block ***')
cvec=[2. 1.; 1. 3.]
a=[8. 5.; 2. 5.]
b=[40. 20.]
Aeq=[]; beq=[]; x0=[0. 0.];
disp('*** Decision on the 1st criterion ***')
[X1,f1]=linprog(-cvec(1,:),a,b,Aeq,beq,x0)
% [x1min,f1x1min]=linprog(cvec(1,:),a,b,Aeq,beq,x0)
disp('*** Dual problem: criterion 1 ***');
y0=zeros(2,1);
OPTIONS=optimset('Display','final','GradObj','on');
[y,g1]=linprog(b,-a',-cvec(1,:),Aeq,beq,y0,[],[],OPTIONS) % LB,UB,X0,OPTIONS
g1y=b*y
disp('*** Decision on the 2nd criterion ***')
[X2,f2]=linprog(-cvec(2,:),a,b,Aeq,beq,x0)
% [x2min,f2x2min]=linprog(cvec(2,:),a,b,Aeq,beq,x0)
disp('*** Dual problem: criterion 2 ***');
y0=zeros(2,1);
OPTIONS=optimset('Display','final','GradObj','on');
[y,g1]=linprog(b,-a',-cvec(2,:),Aeq,beq,y0,[],[],OPTIONS)
g1y=b*y
f1=-f1; f2=-f2;
disp('*** The unit of analysis ***')
Krit=[cvec(1,:)*X1 cvec(2,:)*X1; cvec(1,:)*X2 cvec(2,:)*X2] % analysis
L=[Krit(1,1)/f1 Krit(1,2)/f2; Krit(2,1)/f1 Krit(2,2)/f2]
disp('*** Solution of the L-problem ***')
cvec0=[-1. 0. 0.];
a0=[1. -2./f1 -1./f1; 1. -1./f2 -3./f2; 0. 8. 5.; 0. 2. 5.];
b0=[0 0 40. 20.];
x00=[0. 0. 0.];
[X0,L0]=linprog(cvec0,a0,b0,Aeq,beq,x00)
Cvec1=[x00(1:2)' cvec] % Cvec1=[0. 2. 1.; 0. 1. 3.];
f1x0=Cvec1(1,:)*X0
f2x0=Cvec1(2,:)*X0
L1x0=f1x0/f1
L2x0=f2x0/f2
disp('*** The solution of the dual L-problem ***');
y0=zeros(4,1);
OPTIONS=optimset('Display','final','GradObj','on');
[yo,g1o]=linprog(b0,-a0',cvec0,Aeq,beq,y0,[],[],OPTIONS)
g1yo=b0*yo
disp('*** Dual L-problem check ***');
% min U(Y)= 40y1 + 20y2,
% -0.2*yo(1)-(1/12)*yo(2) + 8*y1 + 2*y2 >= 0,
% -0.1*yo(1)-(1/4)*yo(2) + 5*y1 + 5*y2 >= 0,
% yo(1) + yo(2) >= 1,
% yo(1), yo(2), y1, y2 >= 0.
cvec0=[0.2*yo(1)+(1/12)*yo(2) 0.1*yo(1)+(1/4)*yo(2)]
y0=zeros(2,1);
b0=[40. 20.];
OPTIONS=optimset('Display','final','GradObj','on');
[yo,g1o]=linprog(b0,-a',-cvec0,Aeq,beq,y0,[],[],OPTIONS)
g1yo=b0*yo
disp('*** The decision of VPLP with the priority of criterion 1 ***');
q=1 % the priority of criterion 1 is set
p12Xo=L1x0/L2x0
p12Xopt=1/L(1,2) % the priority of criterion 1 at the point Xopt
p12z=(p12Xo+p12Xopt)/2 % the priority of criterion 1 chosen, e.g., in the middle
cvec0=[-1. 0. 0.];
a0=[1. -2./f1 -1./f1; 1. -p12z*(1./f2) -p12z*(3./f2); 0. 8. 5.; 0. 2. 5.];
b0=[0 0 40. 20.];
x00=[0. 0. 0.];
[X0p,L0p]=linprog(cvec0,a0,b0,Aeq,beq,x00)
Cvec1=[x00(1:2)' cvec]
f1x0p=Cvec1(1,:)*X0p
f2x0p=Cvec1(2,:)*X0p
L1x0p=f1x0p/f1
L2x0p=f2x0p/f2
L2x0p1=L2x0p*p12z
disp('*** The dual L-problem with the priority of criterion 1 ***');
y0=zeros(4,1);
OPTIONS=optimset('Display','final','GradObj','on');
[yo,g1o]=linprog(b0,-a0',cvec0,Aeq,beq,y0,[],[],OPTIONS)
g1yo=b0*yo
disp('*** Verification of duality: the L-problem with the priority of criterion 1 ***');
cvec0=[0.2*yo(1)+p12z*(1/12)*yo(2) 0.1*yo(1)+p12z*(1/4)*yo(2)]
y0=zeros(2,1);
b0=[40. 20.];
OPTIONS=optimset('Display','final','GradObj','on');
[yo,g1o]=linprog(b0,-a',-cvec0,Aeq,beq,y0,[],[],OPTIONS)
g1yo=b0*yo

The result of the solution of the vector problem of linear programming with equivalent criteria:

*** Source data block ***
cvec = 2 1; 1 3
a = 8 5; 2 5
b = 40 20
*** Decision on the 1st criterion ***
Optimization terminated successfully.
X1 = 5.0000 0.0000, f1 = -10.0000
*** Dual problem: criterion 1 ***
Optimization terminated successfully.
y = 0.2500 0.0000, g1 = 10.0000, g1y = 10.0000
*** Decision on the 2nd criterion ***
Optimization terminated successfully.
X2 = 0.0000 4.0000, f2 = -12.0000
*** Dual problem: criterion 2 ***
Optimization terminated successfully.
y = 0.0000 0.6000, g1 = 12.0000, g1y = 12.0000
*** The unit of analysis ***
Krit = 10.0000 5.0000; 4.0000 12.0000
L = 1.0000 0.4167; 0.4000 1.0000
*** Solution of the L-problem ***
Optimization terminated successfully.
X0 = 0.9346 3.3645 2.6168, L0 = -0.9346
Cvec1 = 0 2 1; 0 1 3
f1x0 = 9.3458, f2x0 = 11.2150, L1x0 = 0.9346, L2x0 = 0.9346
*** The solution of the dual L-problem ***
Optimization terminated successfully.
yo = 0.8879 0.1121 0.0234 0.0000, g1o = 0.9346, g1yo = 0.9346
*** Dual L-problem check ***
cvec0 = 0.1869 0.1168
Optimization terminated successfully.
yo = 0.0234 0.0000, g1o = 0.9346, g1yo = 0.9346
*** The decision of VPLP with the priority of criterion 1 ***
q = 1, p12Xo = 1.0000, p12Xopt = 2.4000, p12z = 1.7000
Optimization terminated successfully.
X0p = 0.9798 4.4957 0.8069, L0p = -0.9798
Cvec1 = 0 2 1; 0 1 3
f1x0p = 9.7983, f2x0p = 6.9164, L1x0p = 0.9798, L2x0p = 0.5764, L2x0p1 = 0.9798
*** The dual L-problem with the priority of criterion 1 ***
Optimization terminated successfully.
yo = 0.9308 0.0692 0.0245 0.0000, g1o = 0.9798, g1yo = 0.9798
*** Verification of duality: the L-problem with the priority of criterion 1 ***
cvec0 = 0.1960 0.1225
Optimization terminated successfully.
yo = 0.0245 0.0000, g1o = 0.9798, g1yo = 0.9798
6.2.9. An analysis of dual problems on the basis of the Lagrange function
For further analysis of the main properties of VPLP (6.2.1)–(6.2.3) and the problem dual to it (6.2.4)–(6.2.6), we will use the Lagrange function (6.1.10) for the λ-problem (6.2.7)–(6.2.10) and the G-problem (6.2.18)–(6.2.20).
The Lagrange function of the λ-problem (6.2.7)–(6.2.10) is defined as follows:

L(X,Y) = λ + Σ_{k=1}^{K} β_k(−λ + Σ_{j=1}^{N} (c_j^k/f_k*)x_j) + Σ_{i=1}^{M} y_i(b_i − Σ_{j=1}^{N} a_ij x_j) =
= λ(1 − Σ_{k=1}^{K} β_k) + Σ_{k=1}^{K} Σ_{j=1}^{N} β_k(c_j^k/f_k*)x_j + Σ_{i=1}^{M} y_i(b_i − Σ_{j=1}^{N} a_ij x_j). (6.2.51)

Similarly, the Lagrange function of the G-problem (6.2.18)–(6.2.20):

L(X,Y) = Σ_{i=1}^{M} y_i b_i + Σ_{j=1}^{N} x_j(−Σ_{i=1}^{M} a_ji y_i + Σ_{k=1}^{K} (c_j^k/f_k*)β_k) + λ(1 − Σ_{k=1}^{K} β_k), (6.2.52)
where λ is the additional, (N+1)th, variable — the multiplier imposed on the additional restriction (6.2.18). Removing the brackets in expression (6.2.52) and changing the order of summation, we see that the Lagrange functions (6.2.51) and (6.2.52), which describe problems (6.2.1)–(6.2.3) and (6.2.12)–(6.2.13), are equal to each other. Therefore, we will continue the reasoning with the example of the Lagrange function (6.2.51). Let's present it in matrix form:

L(X,Y) = λ(1 − β) + β(C/f*)X + Y(B − AX), (6.2.53)

where β = {β_k, k = 1,K} is the vector of variables at the coefficients of the criterion functions. This function differs from the standard Lagrange function (6.1.10) in the presence of the member

β(C/f*)X = Σ_{k=1}^{K} Σ_{j=1}^{N} β_k(c_j^k/f_k*)x_j, (6.2.54)

representing the sum of all the relative criteria taken with the variables β_k, k = 1,K, such that Σ_{k=1}^{K} β_k ≥ 1. If K = 1, i.e., the problem (6.1.1)–(6.1.3) is one-criterion, then expression (6.2.54) takes the form Σ_{j=1}^{N} (c_j^k/f_k*)x_j, β_k = 1.
According to the Kuhn–Tucker theorem, X* = (λ°, X°) will be the solution of problem (6.2.4)–(6.2.5) and, respectively, of (6.2.1)–(6.2.3), if there is a vector Y* = (β°, Y°) such that X*, Y* satisfy the Kuhn–Tucker conditions:
∂L(X,Y)/∂X = 1 − β° + β°(C/f*) − Y°A ≤ 0, (6.2.55)
X*·∂L(X,Y)/∂X = (1 − β°)λ° + (β°(C/f*) − Y°A)X° = 0, (6.2.56)
λ°, X° ≥ 0, (6.2.57)
∂L(X,Y)/∂Y = −λ° + (C/f*)X° + B − AX° ≥ 0, (6.2.58)
Y*·∂L(X,Y)/∂Y = β°(−λ° + (C/f*)X°) + Y°(B − AX°) = 0, (6.2.59)
β°, Y° ≥ 0. (6.2.60)
Analyzing expressions (6.2.56) and (6.2.59), we see that at the optimum point X*, Y*
λ°·Σ_{k=1}^{K} β_k° = BY°. (6.2.61)
From the duality theorem, λ° = BY°; from here:
λ° = Σ_{k=1}^{K} λ_k(X°)·β_k°. (6.2.62)
We will check the received result in a numerical example.
Example 6.2. Let's review Example 6.1. As a result of the solution of the direct problem: λ° = λ1(X°) = λ2(X°) = 100/107; of the dual problem: BY° = 100/107, Y° = (β1° = 95/107, β2° = 12/107, y1° = 5/214, y2° = 0); then λ1(X°)β1° + λ2(X°)β2° = 100/107, i.e., condition (6.2.62) is satisfied.
Theorem 6.6 (Theorem of complementary slackness). In VPLP the following relations hold:
(1 − β°)λ° + (β°(C/f*) − Y°A)X° = 0,
β°(−λ° + (C/f*)X°) + Y°(B − AX°) = 0.
Proof. The necessity directly follows from the Kuhn–Tucker conditions (6.2.55)–(6.2.60); the sufficiency follows from the duality theorem (Theorem 6.5). We write the conditions of complementary slackness in expanded form:

(1 − Σ_{k=1}^{K} β_k°)λ° = 0;
β_k°(−λ° + Σ_{j=1}^{N} (c_j^k/f_k*)x_j°) = 0, k = 1,K;
(−Σ_{k=1}^{K} β_k°(c_j^k/f_k*) + Σ_{i=1}^{M} a_ji y_i°)x_j° = 0, j = 1,N;
y_i°(b_i − Σ_{j=1}^{N} a_ij x_j°) = 0, i = 1,M.
By analyzing these expressions and comparing them with the constraints of the direct and dual problems, we obtain:

if x_j° = 0, then Σ_{i=1}^{M} a_ji y_i° > Σ_{k=1}^{K} (c_j^k/f_k*)β_k°, j = 1,N1, (6.2.65)
if x_j° > 0, then Σ_{i=1}^{M} a_ji y_i° = Σ_{k=1}^{K} (c_j^k/f_k*)β_k°, j = 1,N2, N1∪N2 = N; (6.2.66)
if y_i° = 0, then Σ_{j=1}^{N} a_ij x_j° < b_i, i = 1,M1, (6.2.67)
if y_i° > 0, then Σ_{j=1}^{N} a_ij x_j° = b_i, i = 1,M2, M1∪M2 = M; (6.2.68)
if β_k° = 0, then λ° > Σ_{j=1}^{N} (c_j^k/f_k*)x_j°, k = 1,K1, (6.2.69)
if β_k° > 0, then λ° = Σ_{j=1}^{N} (c_j^k/f_k*)x_j°, k = 1,K2, K1∪K2 = K. (6.2.70)

And from Theorem 1 (section 2.4) it follows that there are always at least two equations of type (6.2.70).
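Conditions (6.2.62) and (6.2.65)–(6.2.70) can be verified by direct arithmetic at the known optimum of Example 6.1 (X° = (360/107, 280/107), λ° = 100/107, β° = (95/107, 12/107), y° = (5/214, 0)). The exact-fraction check below is my illustration, not part of the book.

```python
# Verifying (6.2.62) and the complementary-slackness classes (6.2.66), (6.2.68), (6.2.70)
# at the known optimum of Example 6.1 (illustrative check, not the author's code).
from fractions import Fraction as F

x = [F(360, 107), F(280, 107)]; lam = F(100, 107)
beta = [F(95, 107), F(12, 107)]; y = [F(5, 214), F(0)]
C = [[2, 1], [1, 3]]; fstar = [10, 12]; A = [[8, 5], [2, 5]]; b = [40, 20]

# (6.2.62): lambda° = sum_k lambda_k(X°)*beta_k°, with lambda_k = f_k(X°)/f_k*.
lam_k = [sum(C[k][j] * x[j] for j in range(2)) / fstar[k] for k in range(2)]
assert sum(lam_k[k] * beta[k] for k in range(2)) == lam

# (6.2.70): beta_k° > 0 implies lambda° = sum_j (c_j^k/f_k*) x_j°.
for k in range(2):
    if beta[k] > 0:
        assert lam_k[k] == lam

# (6.2.66): x_j° > 0 implies sum_i a_ji y_i° = sum_k (c_j^k/f_k*) beta_k°.
for j in range(2):
    lhs = sum(A[i][j] * y[i] for i in range(2))
    rhs = sum(F(C[k][j], fstar[k]) * beta[k] for k in range(2))
    if x[j] > 0:
        assert lhs == rhs

# (6.2.67)-(6.2.68): y_i° > 0 implies the i-th restriction is active.
for i in range(2):
    lhs = sum(A[i][j] * x[j] for j in range(2))
    assert lhs == b[i] if y[i] > 0 else lhs <= b[i]
print("complementary slackness confirmed")
```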
6.3. VPLP with a minimum vector of the objective function (VPLPmin) and the problem dual to it

6.3.1. Construction of dual VPLPmin
VPLP with a minimum vector objective function — VPLPmin — is considered as the direct problem:
min U(Y) = {U_l(Y) = Σ_{i=1}^{M} b_i^l y_i, l = 1,L}, (6.3.1)
Σ_{i=1}^{M} a_ji y_i ≥ c_j, j = 1,N, (6.3.2)
y_i ≥ 0, i = 1,M. (6.3.3)
We assume that the set of points S = {Y∈R^M | Y ≥ 0, AY ≥ C} ≠ ∅. For the solution of problem (6.3.1)–(6.3.3) we use:
a) the normalization of criteria (2.1.5), λ_l(Y) = (U_l(Y) − U_l^0)/(U_l* − U_l^0), since for this problem the criterion also decreases, i.e., U_l* ≤ U_l(Y);
b) Axiom 2, the principle of optimality 1, and the algorithm of the solution of VPLP with equivalent criteria made on its basis.
As a result of the solution of VPLP (6.3.1)–(6.3.3) we receive: the optimum points Y_l*, l = 1,L, and the values of the criterion functions at these points, U_l*, which represent the boundary of the Pareto set; the worst, unchangeable part of each criterion, U_l^0, l = 1,L (for this class of restrictions U_l^0 = 0, l∈L); Y° — the optimum point of the solution of VPLP at equivalent criteria; and λ° — the minimum relative assessment, which is the guaranteed level for all criteria in relative units.
For every component of the vector criterion (6.3.1), as well as in the VPLPmax problem, we will consider the set of L dual problems constructed for each criterion. The L problems differ from each other only in the restriction column vectors {{b_i^l, i = 1,M}', l = 1,L}, while the criterion functions are defined by the same equation: F1 = … = FL = Σ_{j=1}^{N} c_j x_j. Therefore, uniting them, we receive, in relation to VPLPmin (6.3.1)–(6.3.3), the dual problem:
max F(X) = Σ_{j=1}^{N} c_j x_j, (6.3.4)
Σ_{j=1}^{N} a_ji x_j ≤ {b_i^l, l = 1,L}', i = 1,M, (6.3.5)
x_j ≥ 0, j = 1,N, (6.3.6)
i.e., to VPLPmin (6.3.1)–(6.3.3) there corresponds a dual one-criterion PLPmax, the criterion of which is defined on a set of restriction vectors.
As a result of the solution of the dual problem (6.3.4)–(6.3.6) we can receive: X_l* = {x_j*, j = 1,N} — the optimum point for each lth restriction, l = 1,L; F_l* = CX_l* — the values of the objective function (criterion) at these points, which represent the boundary of the Pareto set. At the same time, the duality condition for each criterion holds: U_l* = F_l*, l∈L. Thus, to VPLP (6.3.1)–(6.3.3) there corresponds the dual one-criterion PLP (6.3.4)–(6.3.6), with the criterion defined on the set of L restrictions; the number of such restrictions is equal to the number of criteria of problem (6.3.1)–(6.3.3), l = 1,L.
Let's look at the problem of linear programming (6.3.4)–(6.3.6) — PLPmax with a set of restrictions. There is a question over whether problems (6.3.1)–(6.3.3) and (6.3.4)–(6.3.6) are really dual to each other. The set of Pareto-optimal points is the result of the solution of VPLPmin; accordingly, these points are identified by Axioms 1–3 and can be defined by the principles of optimality. Therefore, the dual problem (6.3.4)–(6.3.6) has to contain a set of points similar to the Pareto set of problem (6.3.1)–(6.3.3) as the result of its solution.
To answer the question of whether problems (6.3.1)–(6.3.3) and (6.3.4)–(6.3.6) are really dual to each other, proofs of the duality theorems are necessary (similarly to the one-criterion problems of linear programming in section 6.1 and VPLPmax in section 6.2): the existence theorem, the duality theorem, and the second duality theorem. To prove these theorems, it is necessary to consider the algorithm for solving the dual problem (6.3.4)–(6.3.6) — the PLP with a set of constraints — a) with equivalent criteria, and b) with a given prioritized criterion.
6.3.2. The algorithm for the solution of PLPmax with a set of restrictions
For the creation of an algorithm for the solution of a problem of linear programming with a set of restrictions, we present the minimization λ-problem constructed at the third step of the algorithm for the solution of VPLP (6.3.1)–(6.3.3) at equivalent criteria:
λ° = min λ, (6.3.7)
λ − Σ_{i=1}^{M} (b_i^l/U_l*)y_i ≥ 0, l = 1,L, (6.3.8)
Σ_{i=1}^{M} a_ji y_i ≥ c_j, j = 1,N, (6.3.9)
λ ≥ 0, y_i ≥ 0, i = 1,M, (6.3.10)
where U_l*, l = 1,L, are the values of the objective functions at the optimum points. As a result of the solution of the λ-problem we received: Y° = {λ°, Y°} — the optimum point of the solution of the VPMP at equivalent criteria; and λ° — the maximum relative assessment, which is the guaranteed level for all criteria in relative units. Considering λ as the (M+1)th variable of problem (6.3.7)–(6.3.10), we construct the problem dual to the λ-problem, which takes the form:
max F(X) = Σ_{j=1}^{N} c_j x_j, (6.3.11)
−Σ_{l=1}^{L} (b_i^l/U_l*)α_l + Σ_{j=1}^{N} a_ji x_j ≤ 0, i = 1,M, (6.3.12)
Σ_{l=1}^{L} α_l ≤ 1, α_l ≥ 0, l = 1,L, (6.3.13)
x_j ≥ 0, j = 1,N, (6.3.14)
where α_l is the dual variable corresponding to the variable λ on the l = 1,L restrictions of problem (6.3.11)–(6.3.14), and x_j, j = 1,N, are the standard dual variables. We present the algorithm for the solution of a problem of linear programming with a set of restrictions (6.3.4)–(6.3.6), used for the purposes of the dual problem (6.3.11)–(6.3.14). In structure, the steps of the algorithm are identical to those of Algorithm 4; therefore, we present them briefly.
6.3.3. Algorithm 6. The solution of PLPmax with a set of restrictions
Step 1. We solve problem (6.3.4)–(6.3.6) for every restriction vector.
Step 2. The normalization of restrictions (6.3.5) is carried out, and the problem with normalized restrictions is formed.
Step 3. The G-problem is constructed.
Step 4. The solution of the G-problem.
To create an algorithm for the solution of a problem of linear programming with a set of restrictions and a restriction priority, we use Algorithm 2 of the solution of VPLP with a prioritized criterion (section 6.3). Algorithm 2 serves for the choice of any point of the Pareto set, S_q^o ⊂ S° ⊂ S. At the fifth step of this algorithm, the λ-problem (the direct problem), similar to (6.3.7)–(6.3.10), to which the prioritized criterion is added, is created, similarly to problem (6.2.21)–(6.2.24). As a result, inequality (6.3.8) takes the form:
λ − Σ_{i=1}^{M} p_l^q (b_i^l/U_l*) y_i ≥ 0, l = 1,L, q∈L. (6.3.8a)
Considering λ as the (M+1)th variable, the problem dual to the λ-problem with the prioritized qth criterion is constructed, similar to problem (6.3.11)–(6.3.14), in which inequalities (6.3.12) take the form:
−Σ_{l=1}^{L} p_l^q (b_i^l/U_l*) α_l + Σ_{j=1}^{N} a_ji x_j ≤ 0, i = 1,M, (6.3.12a)
where the vector of priorities corresponds to the set concepts of the decision-maker concerning the priority of the qth restriction, q∈L, over the other l = 1,L criteria. The received dual problem with a prioritized criterion is also the cornerstone of the algorithm.
6.3.4. Algorithm 7. The solution of PLP with a set of restrictions and a restriction priority
Step 1. Let's solve PLP (6.3.4)–(6.3.6) at equivalent restrictions with Algorithm 6. The results of the solution are similar to Step 1 of Algorithm 4.
Step 2. The prioritized restriction is chosen on the basis of the obtained information.
Step 3. The limits to a change in the size of the restriction priority are defined.
Step 4. The G-problem is constructed.
Step 5. The solution of the G-problem. The G-problem is a standard problem of linear programming, and standard methods (e.g., the simplex method) are used for its solution. As a result of the solution of the G-problem with a restriction priority we receive: X° = {α°, X°} — the optimum point of the solution of the PLP at equivalent restrictions, where X° = {x_j°, j = 1,N} are the coordinates of the optimum point dual to Y°, and α° = {α_l°, l = 1,L} is the vector of additional variables, Σ_{l=1}^{L} α_l° ≤ 1;
CX° — the value of the criterion function at the optimum point X°.
6.3.5. Duality theorems in VPLPmin
Theorem 6.7 (Theorem of the existence of VPLPmin). For the vector problem in linear programming with minimization (6.3.1)–(6.3.3) to have a solution, it is necessary that the admissible sets of both the direct (6.3.1)–(6.3.3) and the dual problem (6.3.4)–(6.3.6) are not empty. The proof is similar to the proof of Theorem 6.4.
Theorem 6.8 (Theorem of the duality of VPLPmin). The admissible vector Y° = (λ°, Y°) of problem (6.3.1)–(6.3.3) is optimal only if there is an admissible vector X° = (α°, X°) of problem (6.3.4)–(6.3.6) such that λ° = CX°; conversely, X° = (α°, X°) is optimal only if there is a vector Y° = (λ°, Y°) for which the same equality holds.
Proof. In VPLPmin (6.3.1)–(6.3.3), as well as in VPLPmax (6.2.1)–(6.2.3), there is a set of optimum points which, in total, form the Pareto set, and the proof of duality is necessary for each point.
We will carry out the proof of duality for the set of points in three stages: the proof of the duality of the boundary points; the proof of the duality of the point at equivalent criteria (one such point); and the proof of the duality of the points at the set prioritized criteria (L such subsets).
Stage 1. The duality of the boundary points of the Pareto set (L such points) follows from the duality of the L one-criterion problems received at the creation of problem (6.3.4)–(6.3.6).
Stage 2. The duality of the point Y° received at the solution of VPLPmin (6.3.1)–(6.3.3) at equivalent criteria and of the optimum point X° received in the solution of the PLP with a set of restrictions (6.3.4)–(6.3.6) follows from Algorithm 6 of the solution of this problem: λ° = CX°.
Stage 3. The duality of the point received at the solution of VPLPmin (6.3.1)–(6.3.3) with the set prioritized criterion and of the point of the solution of the PLP with a set of restrictions (6.3.4)–(6.3.6) at the set restriction priority follows from Algorithm 7 of the solution of this problem. At Step 3, we received the G-problem, which is dual to the λ-problem. From here, the optimum points Y° = (λ°, Y°) and X° = (α°, X°) are dual. By changing the vector of priorities in (6.2.30), it is possible to receive any point Y° from the set of Pareto-optimal points with a prioritized lth criterion, S_l^o ⊂ S° ⊂ S, l∈L. By changing the prioritized criterion l = 1,L, we can receive similar optimum points, and those dual to them, on all subsets of Pareto-optimal points with a prioritized criterion, S_l^o ⊂ S° ⊂ S, l = 1,L. From here, for any optimum point Y° ∈ S_l^o ⊂ S° of the direct problem and the corresponding point X° ∈ S_l^o ⊂ S° of the dual problem, the duality condition λ° = CX° is satisfied.
Uniting the points of the three stages of the direct and dual problems, we receive the sets of Pareto-optimal points for the direct problem:
{Y_l*, l = 1,L} ∪ Y° ∪ {S_q^o, q = 1,L} = S°
and the dual problem:
{X_l*, l = 1,L} ∪ X° ∪ {S_q^o, q = 1,L} = S°.
These sets of Pareto-optimal points are dual and, for each of them, the duality condition λ° = CX° is satisfied. The theorem is proved.
6.3.6. Duality of VPLPmin in test examples
Example 6.3. A numerical example of the solution of dual VPLP with a minimum vector criterion function:
min U(Y) = {U1(Y) = 3y1 + y2, U2(Y) = 2y1 + 1.5y2,
U3(Y) = 4y1 + 9y2, U4(Y) = 10y1 + 50y2}, (6.3.15)
2y1 + y2 ≥ 12, 5y1 + 16y2 ≥ 81, 3y1 + 4y2 ≥ 36, y1, y2 ≥ 0. (6.3.16)
The solution: by the 1st criterion, U1* = 12, Y1* = {y1 = 0, y2 = 12};
by the 2nd criterion, U2* = 15.6, Y2* = {y1 = 2.4, y2 = 7.2};
by the 3rd criterion, U3* = 56.25, Y3* = {y1 = 9, y2 = 2.25};
by the 4th criterion, U4* = 162, Y4* = {y1 = 16.2, y2 = 0}.
Let's create the λ-problem:
λ° = min λ, (6.3.17)
λ − (3/12)y1 − (1/12)y2 ≥ 0,
λ − (2/15.6)y1 − (1.5/15.6)y2 ≥ 0,
λ − (4/56.25)y1 − (9/56.25)y2 ≥ 0,
λ − (10/162)y1 − (50/162)y2 ≥ 0,
2y1 + y2 ≥ 12, 5y1 + 16y2 ≥ 81,
3y1 + 4y2 ≥ 36, λ, y1, y2 ≥ 0. (6.3.18)
As a result of the solution of the λ-problem, we received the minimum relative assessment λ° and the optimum point Y°:
λ° = 1.814, Y° = {y1 = 5.676, y2 = 4.743}.
Check:
U1° = 3y1 + y2 = 21.77, λ1° = U1°/U1* = 1.814;
U2° = 2y1 + 1.5y2 = 18.47, λ2° = U2°/U2* = 1.18;
U3° = 4y1 + 9y2 = 65.39, λ3° = U3°/U3* = 1.16;
U4° = 10y1 + 50y2 = 293.91, λ4° = U4°/U4* = 1.814.
Thus, the result corresponds to the principle of optimality: λ° is the minimum upper bound of all the relative estimates, λ° ≥ λ_l(Y°), l = 1, 2, 3, 4. According to Theorem 2.1, criteria 1 and 4 are the most contradictory, and for them the equality λ° = λ1(Y°) = λ4(Y°) is satisfied. Consider the problem dual to VPLP (6.3.15)–(6.3.16):
max f(X) = 12x1 + 81x2 + 36x3, (6.3.19)
2x1 + 5x2 + 3x3 ≤ {3, 2, 4, 10}′,
x1 + 16x2 + 4x3 ≤ {1, 1.5, 9, 50}′, x1, x2, x3 ≥ 0. (6.3.20)
We solve the problem separately for every restriction vector:
b1: f(X1*) = 12, X1* = {x1 = 1, x2 = 0, x3 = 0};
b2: f(X2*) = 15.6, X2* = {x1 = 0.7, x2 = 0, x3 = 0.2};
b3: f(X3*) = 56.25, X3* = {x1 = 0, x2 = 11/28, x3 = 19/28};
b4: f(X4*) = 162, X4* = {x1 = 0, x2 = 2, x3 = 0}. (6.3.21)
Let's perform the normalization of the restrictions in the problem. We introduce the additional variables αl, l = 1,…,L, L = 4, such that α1 + α2 + α3 + α4 ≤ 1, and transfer these variables to the left-hand side. As a result, we receive the maximizing σ-problem:
max f(X) = 12x1 + 81x2 + 36x3, (6.3.22)
−(3/12)α1 − (2/15.6)α2 − (4/56.25)α3 − (10/162)α4 + 2x1 + 5x2 + 3x3 ≤ 0,
−(1/12)α1 − (1.5/15.6)α2 − (9/56.25)α3 − (50/162)α4 + x1 + 16x2 + 4x3 ≤ 0,
α1 + α2 + α3 + α4 ≤ 1, α1, α2, α3, α4, x1, x2, x3 ≥ 0. (6.3.23)
The received problem is dual to the λ-problem (6.3.17)–(6.3.18). It is required to find the variables α1, α2, α3, α4, with α1 + α2 + α3 + α4 ≤ 1, and the dual variables x1, x2, x3, at which the objective function f(X) reaches its maximum.
The solution to the σ-problem: f(X°) = 1.814, X° = {α1 = 0.475, α2 = α3 = 0, α4 = 0.525, x1 = x2 = 0, x3 = 0.0504}.
Substituting αl, l = 1,…,L, L = 4 into the problem (6.3.22)–(6.3.23), we obtain the following problem:
max f(X) = 12x1 + 81x2 + 36x3,
2x1 + 5x2 + 3x3 ≤ 0.151, x1 + 16x2 + 4x3 ≤ 0.202, x1, x2, x3 ≥ 0.
Solving this problem, we receive f(X°) = 1.814, X° = {x1 = x2 = 0, x3 = 0.0504}.
Thus, the solutions to the direct VPLP with a minimum vector objective function and to the dual PLP with a set of restrictions are equal to each other, i.e., Theorem 6.8 is confirmed by this numerical example. From this example we can see that the algorithm for the solution to the σ-problem is similar to the algorithm given in section 6.2. The difference is that, at the third step, the sum of the variables αl, l = 1,…,L, L = 4 is less than or equal to one.
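Both halves of this example are ordinary linear programs, so the reported values can be rechecked with any LP solver. The sketch below is an illustration, not part of the book's text; it assumes SciPy's `linprog` is available. It solves the λ-problem (6.3.17)–(6.3.18) and the reduced problem obtained after substituting α°, and compares the two optimal values:

```python
from scipy.optimize import linprog

# lambda-problem (6.3.17)-(6.3.18): variables z = (lam, y1, y2), minimize lam.
# Each criterion gives  lam - U_l(Y)/U_l* >= 0, rewritten as <= for linprog.
U_star = [12.0, 15.6, 56.25, 162.0]            # per-criterion minima U_l*
crit = [[3, 1], [2, 1.5], [4, 9], [10, 50]]    # coefficients of U_l(Y)
A_ub = [[-1.0, c1 / u, c2 / u] for (c1, c2), u in zip(crit, U_star)]
A_ub += [[0, -2, -1], [0, -5, -16], [0, -3, -4]]   # 2y1+y2>=12 etc., as <=
b_ub = [0, 0, 0, 0, -12, -81, -36]
res = linprog([1, 0, 0], A_ub=A_ub, b_ub=b_ub)     # min lam, z >= 0 by default
lam_o, y1, y2 = res.x
print(round(lam_o, 3))                             # ~1.814

# Reduced problem after substituting alpha° = (0.475, 0, 0, 0.525)
# into (6.3.22)-(6.3.23):
r1 = (3 / 12) * 0.475 + (10 / 162) * 0.525         # ~0.151
r2 = (1 / 12) * 0.475 + (50 / 162) * 0.525         # ~0.202
res2 = linprog([-12, -81, -36], A_ub=[[2, 5, 3], [1, 16, 4]], b_ub=[r1, r2])
print(round(-res2.fun, 3))                         # ~1.814
```

The two optimal values agree to about three decimal places, which is the numerical confirmation of Theorem 6.8 given in the text.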
An analysis of the dual problems (6.3.1)–(6.3.2) and (6.3.9)–(6.3.10) on the basis of the Lagrange function is similar to that given in section 6.2. Let's only show the Lagrange function of the λ-problem and the conclusions following from it. The Lagrange function of the λ-problem (6.3.3)–(6.3.5) is defined as:
L(X, Y) = λ + Σ_{l=1}^{L} α_l(−λ + Σ_{i=1}^{M} (b_i^l/U_l*)y_i) + Σ_{j=1}^{N} x_j(c_j − Σ_{i=1}^{M} a_ij y_i).
The resulting Lagrange function is identical to (6.2.51). Therefore, the conclusions are also similar to those provided in section 6.2. If (λ°, Y°), (α°, X°) is an optimal solution of the dual pair of problems, then
λ° = λ°α° = CX°. (6.3.24)
Let's check the received result with a numerical example.
Example 6.4. As a result of the solution to the direct problem from Example 6.3, we have: λ° = 1.814, λ1° = 1.814, λ2° = 1.18, λ3° = 1.16, λ4° = 1.814. As a result of the solution to the dual problem, we have: CX° = 1.814, X° = {α1° = 0.475, α2° = α3° = 0, α4° = 0.525, x1° = x2° = 0, x3° = 0.0504}; then λ° = λ°α° = (λ1°α1° + λ2°α2° + λ3°α3° + λ4°α4°) = 1.814, i.e., the duality condition (6.3.24) is satisfied.
Theorem 6.9 (The theorem of complementary slackness in VPLP with a vector minimum as the objective function). In order that the admissible vectors (λ°, Y°), (α°, X°) be solutions to the dual problems, it is necessary and sufficient that they obey the conditions of complementary slackness:
(1 − α°)λ° + (α°B/U* − AX°)Y° = 0,
(−λ° + (B/U*)Y°)α° + (C − Y°A)X° = 0.
The necessity immediately follows from the Kuhn–Tucker conditions, similarly to (6.2.55)–(6.2.58). The sufficiency follows from the duality theorem (Theorem 6.8).
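The duality condition (6.3.24) from Example 6.4 can be checked by direct arithmetic. The snippet below is an illustrative check, not part of the original text; it multiplies the relative estimates λ_l° (recomputed at the optimum point of Example 6.3) by the dual weights α_l°. Only α1° and α4° are nonzero, so only the two extreme criteria contribute to the sum:

```python
# Relative estimates lambda_l(Y°) from Example 6.3 and weights alpha° from
# the sigma-problem of Example 6.4 (only alpha_1 and alpha_4 are nonzero).
lam = (1.814, 1.18, 1.16, 1.814)
alpha = (0.475, 0.0, 0.0, 0.525)
lam_o = sum(l * a for l, a in zip(lam, alpha))
print(round(lam_o, 3))   # 1.814, i.e. lambda° = lambda°*alpha° = C*X°
```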
6.4. Duality in VPLP with a set of restrictions

6.4.1. An analysis of duality in VPLP with a set of restrictions

The analysis of the dual PLP considered above leads to the formulation of a more general pair of dual VPLP with a set of restrictions. Let's give it its full name: "VPLP with homogeneous criteria and a set of vector restrictions" (as this section discusses only such problems, we will abbreviate this to VPLP).
Let's formulate the direct VPLP:
max F(X) = {f_k(X) ≡ Σ_{j=1}^{N} c_j^k x_j, k = 1,…,K}, (6.4.1)
Σ_{j=1}^{N} a_ij x_j ≤ {b_i^l, i = 1,…,M}′, l = 1,…,L, x_j ≥ 0, j = 1,…,N, (6.4.2)
where {f_k(X) = C^k X = Σ_{j=1}^{N} c_j^k x_j, k = 1,…,K} is the matrix of criteria of VPLP (6.4.1)–(6.4.2), of dimension N×K; A = {a_ij, i = 1,…,M, j = 1,…,N} is the matrix of conditions, of dimension N×M; B = {{b_i^l, i = 1,…,M}′, l = 1,…,L} is the matrix of restrictions, of dimension M×L. Here K, M, N, L are the index sets of the criteria, the conditions of the problem, the variables, and the vector restrictions, respectively.
The VPLP dual to it:
min U(Y) = {U_l(Y) = Σ_{i=1}^{M} b_i^l y_i, l = 1,…,L}, (6.4.3)
Σ_{i=1}^{M} a_ji y_i ≥ {c_j^k, j = 1,…,N}′, k = 1,…,K, y_i ≥ 0, i = 1,…,M, (6.4.4)
where Y = {y_i, i = 1,…,M} are the dual variables; B = {b_i^l, i = 1,…,M, l = 1,…,L} is the matrix of criteria, of dimension M×L; A = {a_ji, j = 1,…,N, i = 1,…,M} is the matrix of conditions, of dimension M×N; C = {{c_j^k, j = 1,…,N}′, k = 1,…,K} is the matrix of restrictions, of dimension N×K.
It is clear that problems (6.1.1)–(6.1.3) and (6.1.4)–(6.1.6), (6.2.1)–(6.2.3) and (6.2.4)–(6.2.6), (6.3.1)–(6.3.3) and (6.3.4)–(6.3.6) are special cases of problems (6.4.1)–(6.4.2) and (6.4.3)–(6.4.4). For the solution to this class of dual problems we will use the arguments given in sections 6.2 and 6.3.
6.4.2. Algorithm 8. The solution to the direct VPLP with equivalent criteria

The algorithm is shown for the problem (6.4.1)–(6.4.2).
Step 1. The λ-problem stage. VPLP (6.4.1)–(6.4.2) is solved at equivalent criteria for each vector of restrictions l = 1,…,L. For the solution to this problem, Algorithm 1 is used (see section 3.2). As a result of the solution for all vector restrictions {b_i^l, i = 1,…,M}′, l = 1,…,L, we receive: X_l° = {λ_l°, x_l°}, l = 1,…,L, the optimum point, and λ_l°, the optimum relative assessment at this point.
Fig. 6.1 presents the two parallel transformation chains: the direct, maximizing VPLP (6.4.1)–(6.4.2) passes through the normalization of criteria, the maximin problem and the λ-problem to the result λ°; the dual, minimizing VPLP (6.4.3)–(6.4.4) passes through the normalization of restrictions, the σ-problem (6.4.11) and its one-criterion form to the result σ°; the result of duality is λ° = σ°.

Fig. 6.1. The scheme of interrelations of direct and dual VPLP (with a matrix of restrictions).
Step 2. A normalization of the restrictions is carried out:
{b_i^l/λ_l°, i = 1,…,M}′, l = 1,…,L. (6.4.5)
Let's notice that in the solution to VPLP (6.4.1)–(6.4.2) with the normalized restrictions (6.4.5), lim_{X→X°} λ_l(X) = 1 for all l ∈ L.
Step 3. The σ-problem stage. A VPLP with one restriction is constructed, using the approach developed in section 6.2. For this purpose, we introduce the additional variables α_l, l = 1,…,L, such that Σ_{l=1}^{L} α_l ≤ 1:
max F(X) = {Σ_{j=1}^{N} c_j^k x_j, k = 1,…,K}, (6.4.6)
Σ_{j=1}^{N} a_ij x_j ≤ Σ_{l=1}^{L} (b_i^l/λ_l°)α_l, i = 1,…,M,
Σ_{l=1}^{L} α_l ≤ 1, α_l ≥ 0, l = 1,…,L, x_j ≥ 0, j = 1,…,N. (6.4.7)
Step 4. VPLP (6.4.6)–(6.4.7) at equivalent criteria is solved.
4.1. The problem (6.4.6)–(6.4.7) is solved for each criterion.
4.2. The normalization of the criteria is carried out.
4.3. The maximin problem is constructed:
λ° = max_{X∈S} min_{k∈K} λ_k(X), (6.4.8)
where the point set S is defined by the restrictions (6.4.7).
4.4. The maximin problem (6.4.8) is transformed into the one-criterion λ-problem:
λ° = max λ, (6.4.9)
λ − Σ_{j=1}^{N} (c_j^k/f_k*)x_j ≤ 0, k = 1,…,K,
−Σ_{l=1}^{L} (b_i^l/λ_l°)α_l + Σ_{j=1}^{N} a_ij x_j ≤ 0, i = 1,…,M, (6.4.10)
Σ_{l=1}^{L} α_l ≤ 1, α_l ≥ 0, l = 1,…,L, x_j ≥ 0, j = 1,…,N.
The dimension of the λ-problem is (M+K+1)×(N+L+1), where (M+K+1) is the number of restrictions and (N+L+1) is the number of variables. The received problem can be compared with the λ-problem of the preceding VPLP.
Step 5. The solution to the λ-problem. As a result of the solution we receive: X° = {λ°, x°, α°}, the optimum point, and λ°, the maximal relative assessment, for which λ° ≤ λ_k(X°), k ∈ K. At the same time, the optimum point X° is defined in the set of restrictions such that AX° ≤ Bα°, where Σα° ≤ 1; i.e., λ° is the maximal lower bound for all relative estimates defined in this set of restrictions.
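Steps 1–5 reduce to a λ-problem solver that is applied twice: once per restriction vector (Step 1) and once to the extended system with the variables α_l (Steps 3–5). The sketch below is an illustrative implementation, not the book's code; it assumes SciPy's `linprog`, the helper names `lambda_problem` and `algorithm8` are our own, and the data of Example 6.5 below serve as a smoke test:

```python
import numpy as np
from scipy.optimize import linprog

def lambda_problem(C, A, b):
    """max_x min_k C_k x / f_k*  s.t.  A x <= b, x >= 0  (Algorithm 1)."""
    K, N = C.shape
    # per-criterion maxima f_k* (Step 4.1)
    f_star = np.array([-linprog(-C[k], A_ub=A, b_ub=b).fun for k in range(K)])
    # lambda-problem: z = (lam, x); maximize lam subject to
    # lam - C_k x / f_k* <= 0  and  A x <= b
    A_ub = np.vstack([np.hstack([np.ones((K, 1)), -C / f_star[:, None]]),
                      np.hstack([np.zeros((len(b), 1)), A])])
    b_ub = np.concatenate([np.zeros(K), b])
    cost = np.zeros(N + 1); cost[0] = -1.0
    return -linprog(cost, A_ub=A_ub, b_ub=b_ub).fun

def algorithm8(C, A, B):
    """Steps 1-5 for a VPLP with restriction vectors B = [b^1, ..., b^L]."""
    lam_l = [lambda_problem(C, A, b) for b in B]          # Step 1
    Bn = np.array([b / l for b, l in zip(B, lam_l)])      # Step 2
    L, M = Bn.shape
    K, N = C.shape
    # Steps 3-4: extended variables z = (alpha_1..alpha_L, x);
    # constraints  A x - sum_l alpha_l b^l/lam_l° <= 0  and  sum alpha <= 1
    A_ext = np.vstack([np.hstack([-Bn.T, A]),
                       np.hstack([np.ones((1, L)), np.zeros((1, N))])])
    b_ext = np.concatenate([np.zeros(M), [1.0]])
    C_ext = np.hstack([np.zeros((K, L)), C])
    return lam_l, lambda_problem(C_ext, A_ext, b_ext)     # Step 5

# Data of Example 6.5: two criteria, two restriction vectors
C = np.array([[2.0, 3.0], [1.0, 1.0]])
A = np.array([[1.5, 2.0], [1.2, 1.0]])
B = [np.array([12.0, 6.0]), np.array([9.0, 7.2])]
lam_l, lam_o = algorithm8(C, A, B)
print([round(l, 3) for l in lam_l], round(lam_o, 4))   # [1.0, 0.923] 0.9398
```

The per-vector estimates λ_l° = (1, 0.923) and the final λ° ≈ 0.9398 reproduce the values obtained by hand in Example 6.5.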
6.4.3. Algorithm 9. The solution to the dual VPLP with equivalent criteria

We show the algorithm for the solution to the dual VPLP using the example of the problem (6.4.3)–(6.4.4).
Step 1. The σ-problem stage. VPLP (6.4.3)–(6.4.4) with a set of restrictions is solved for each criterion l = 1,…,L (for the decision algorithm see section 5.3). For each criterion l ∈ L we solve the problem into which we introduce the additional variables β_k, k = 1,…,K, such that Σ_{k=1}^{K} β_k ≥ 1. As a result, the σ-problem of the following form is constructed (see Fig. 6.1):
min U(Y) = {U_l(Y) = Σ_{i=1}^{M} b_i^l y_i, l = 1,…,L}, (6.4.11)
Σ_{i=1}^{M} a_ji y_i ≥ Σ_{k=1}^{K} (c_j^k/U_k*)β_k, j = 1,…,N,
Σ_{k=1}^{K} β_k ≥ 1, y_i ≥ 0, i = 1,…,M, β_k ≥ 0, k = 1,…,K. (6.4.12)
As a result of the solution to the problem (6.4.11)–(6.4.12) for each criterion l = 1,…,L, we receive U_l* = U_l(Y_l*), l = 1,…,L.
Step 2. The normalization of the criteria, σ_l(Y) = B^l Y/U_l(Y_l*), l = 1,…,L, is carried out.
Step 3. The σ-problem is constructed.
Step 4. The σ-problem with a set of restrictions is solved.
4.1. The solution for each restriction k = 1,…,K. As a result we receive X_k°, σ_k°, k = 1,…,K.
4.2. The normalization of the restrictions by c_j^k/σ_k°, j = 1,…,N, k = 1,…,K is carried out.
4.3. The σ-problem is constructed (the last block in Fig. 6.1):
σ° = min_{Y∈S} σ, σ ≥ σ_l(Y), l = 1,…,L,
where the point set S is defined by the normalized restrictions taken with the variables β_k, k = 1,…,K, such that Σ_{k=1}^{K} β_k ≥ 1. For this purpose, the σ-problem is transformed into the one-criterion PLP:
σ° = min σ, (6.4.14)
σ − Σ_{i=1}^{M} (b_i^l/U_l*)y_i ≥ 0, l = 1,…,L, (6.4.15)
−Σ_{k=1}^{K} (c_j^k/σ_k°)β_k + Σ_{i=1}^{M} a_ji y_i ≥ 0, j = 1,…,N,
Σ_{k=1}^{K} β_k ≥ 1, y_i ≥ 0, i = 1,…,M, β_k ≥ 0, k = 1,…,K. (6.4.16)
We will call the received problem (6.4.14)–(6.4.16) the σ-problem. The dimension of the σ-problem is (L+N+1)×(K+M+1), where (L+N+1) is the number of restrictions and (K+M+1) is the number of variables, Y = {β_1,…,β_K, σ, y_1,…,y_M}.
Step 5. The solution to the σ-problem. As a result of the solution we receive the optimum point Y° = {β°, σ°, y°}, for which σ° ≥ σ_l(Y°), l ∈ L, and which is defined in the set of restrictions
c_j = Σ_{k=1}^{K} β_k(c_j^k/σ_k°), j = 1,…,N,
i.e., σ° is the minimum upper bound for all relative estimates σ_l(Y°), l = 1,…,L, defined in the set of restrictions k = 1,…,K.
Theorem 6.10 (Existence theorem). Problems (6.4.1)–(6.4.2) and (6.4.3)–(6.4.4) are dual to each other.
Proof. The problem (6.4.9)–(6.4.10) is received from the problem (6.4.1)–(6.4.2) using the normalization of criteria and restrictions and the maximin principle. The problem (6.4.14)–(6.4.16) is received from (6.4.3)–(6.4.4) using the same normalization and the minimax principle. We show that problems (6.4.9)–(6.4.10) and (6.4.14)–(6.4.16) are dual. We write down the problem which is dual to the problem (6.4.9)–(6.4.10):
σ° = min σ, (6.4.17)
σ − Σ_{i=1}^{M} (b_i^l/λ_l°)y_i ≥ 0, l = 1,…,L, (6.4.18)
−Σ_{k=1}^{K} (c_j^k/f_k*)β_k + Σ_{i=1}^{M} a_ji y_i ≥ 0, j = 1,…,N, (6.4.19)
Σ_{k=1}^{K} β_k ≥ 1, y_i ≥ 0, i = 1,…,M, β_k ≥ 0, k = 1,…,K.
Comparing the received problem (6.4.17)–(6.4.19) with the σ-problem (6.4.14)–(6.4.16), we see that they differ, in restrictions (6.4.15) and (6.4.18), in the normalizing quantities λ_l° and U_l*, l = 1,…,L, and, in restrictions (6.4.16) and (6.4.19), in the coefficients σ_k° and f_k*, k = 1,…,K. We show that these are pairwise equal. The quantity λ_l° is the value of the normalized objective function at the optimum point X_l° obtained from the solution to VPLP (6.4.1)–(6.4.2) with the k = 1,…,K maximizing criteria, for each restriction vector l = 1,…,L. The quantity U_l* is the value of the objective function at the optimum point Y_l* obtained through the solution to the dual problem (6.4.3)–(6.4.4) with each criterion l = 1,…,L but with the set of restriction vectors k = 1,…,K, integrated with the variables β_k such that Σ_{k=1}^{K} β_k ≥ 1; i.e., λ_l° and U_l*, l = 1,…,L, are the optimal solutions to dual problems similar to (6.2.1)–(6.2.3) and (6.2.12)–(6.2.13), and then, according to Theorem 6.5, these quantities are equal: λ_l° = U_l*, l = 1,…,L. Similarly, σ_k° and f_k*, k = 1,…,K, are the optimal solutions to dual problems similar to (6.4.1)–(6.4.2) and (6.4.9)–(6.4.10); then, according to Theorem 6.8, these quantities are equal: σ_k° = f_k*, k = 1,…,K. Hence, problems (6.4.14)–(6.4.16) and (6.4.17)–(6.4.19) are identical. But problem (6.4.17)–(6.4.19) is the dual of problem (6.4.9)–(6.4.10), and therefore problems (6.4.9)–(6.4.10) and (6.4.14)–(6.4.16) are also dual. Therefore, problems (6.4.1)–(6.4.2) and (6.4.3)–(6.4.4) are dual, as was to be shown.
Theorem 6.11 (Duality theorem).
The admissible vector X° = (λ°, α_1°,…,α_L°, x_1°,…,x_N°) in VPLP (6.4.1)–(6.4.2) and the corresponding λ-problem (6.4.9)–(6.4.10) is optimal if and only if there is an admissible vector Y° = (σ°, β_1°,…,β_K°, y_1°,…,y_M°) in VPLP (6.4.3)–(6.4.4) and the corresponding σ-problem (6.4.14)–(6.4.16) such that
λ° = σ°. (6.4.20)
Conversely, Y° is optimal if and only if there is a vector X° for which the same equality holds.
Proof. As problems (6.4.9)–(6.4.10) and (6.4.14)–(6.4.16) are one-criterion and dual, according to Theorem 6.2, the equality (6.4.20) holds at the optimum points X° and Y°.
We show the validity of Theorems 6.10 and 6.11 with two numerical examples of VPLP (6.4.1)–(6.4.2) and (6.4.3)–(6.4.4), taken from [30], where these examples were not solved.
Example 6.5. Given: a vector problem with two criteria and two vector constraints [30]:
max F(X) = {max f1(X) ≡ 2x1 + 3x2, max f2(X) ≡ x1 + x2}, (6.4.21)
under the constraints
1.5x1 + 2x2 ≤ {12, 9}, 1.2x1 + x2 ≤ {6, 7.2}, x1, x2 ≥ 0. (6.4.22)
Solve the vector problem (6.4.21)–(6.4.22) on the basis of the normalization of criteria and the principle of the guaranteed result.
Step 1. We solve the problem (6.4.21)–(6.4.22) for the first vector of constraints, b1 = {12, 6}T.
Solution for the first criterion: f1* = f1(X1*) = 18, X1* = {x1 = 0, x2 = 6}; for the second criterion: f2* = f2(X2*) = 6, X2* = {x1 = 0, x2 = 6}.
Solution to the λ-problem: λ° = 1, X° = {x1° = 0, x2° = 6}.
Step 2. We solve the problem (6.4.21)–(6.4.22) for the second vector of constraints, b2 = {9, 7.2}T.
Solution for the first criterion: f1* = f1(X1*) = 13.5, X1* = {x1 = 0, x2 = 4.5}; for the second criterion: f2* = f2(X2*) = 6, X2* = {x1 = 6, x2 = 0}.
Solution to the λ-problem: λ° = 0.923, X° = {x1° = 4.154, x2° = 1.385}.
Step 3. Build and solve the problem:
max F(X) = {max f1(X) = 2x1 + 3x2, max f2(X) = x1 + x2}, (6.4.23)
−(12/1)α1 − (9/0.923)α2 + 1.5x1 + 2x2 ≤ 0,
−(6/1)α1 − (7.2/0.923)α2 + 1.2x1 + x2 ≤ 0,
α1 + α2 ≤ 1, α1, α2, x1, x2 ≥ 0. (6.4.24)
Solution for the 1st criterion: f1* = f1(X1*) = 18, X1* = {α1 = 1, α2 = 0, x1 = 0, x2 = 6}; for the 2nd criterion: f2* = f2(X2*) = 6.5, X2* = {α1 = 0, α2 = 1, x1 = 6.5, x2 = 0}.
Build the λ-problem and put it into the simplex table:
λ    α1    α2     x1      x2      b
1    0     0      −0.111  −0.167  0
1    0     0      −0.154  −0.154  0
0    −12   −9.75  1.5     2       0
0    −6    −7.80  1.2     1       0
0    1     1      0       0       1     (6.4.25)
Solution to the λ-problem: λ° = 0.93976, X° = {α1 = 0.783, α2 = 0.217, x1° = 1.41, x2° = 4.70}.
We present the problem dual to the problem (6.4.21)–(6.4.22):
min U(Y) = {min U1(Y) = 12y1 + 6y2, min U2(Y) = 9y1 + 7.2y2}, (6.4.26)
under the constraints
1.5y1 + 1.2y2 ≥ {2, 1}, y1 + y2 ≥ {3, 1}, y1, y2 ≥ 0. (6.4.27)
Solve the vector problem (6.4.26)–(6.4.27) on the basis of the normalization of criteria and the principle of the guaranteed result.
Step 1. We solve the problem (6.4.26)–(6.4.27) with the first criterion and the two vector constraints. We receive the solutions with respect to the vector constraints:
for the first restriction: U1* = U1(Y1*) = 18, Y1* = {y1 = 1.5, y2 = 0};
for the second restriction: U2* = U2(Y2*) = 6, Y2* = {y1 = 0, y2 = 1}.
σ-problem: σ° = 1, Y° = {β1 = 0, β2 = 1, y1 = 0.0833, y2 = 0}.
Step 2. We solve the problem (6.4.26)–(6.4.27) with the second criterion and the two vector constraints:
for the first restriction: U1* = 13.5, Y1* = {y1 = 1.5, y2 = 0};
for the second restriction: U2* = 6, Y2* = {y1 = 2/3, y2 = 0}.
σ-problem: σ° = 0.923, Y° = {β1 = 0.693, β2 = 0.307, y1 = 0.1026, y2 = 0}.
Step 3. Normalize the criteria (6.4.26) in the dual problem; construct the σ-problem and solve it with the first and second vector constraints:
for the first restriction: σ1* = 18, Y1* = {y1 = 1.5, y2 = 0};
for the second restriction: σ2* = 6.5, Y2* = {y1 = 0.333, y2 = 0.417};
σ-problem: σ° = 0.9397, Y° = {β1 = 0.217, β2 = 0.783, y1 = 0.0482, y2 = 0.06}.
Note that the simplex table of the σ-problem is the transposed table (6.4.25) of the direct problem. Thus, the numerical result confirms the duality theorem: λ° = σ°.
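The σ-problem that Step 3 of the dual solution arrives at is itself an ordinary LP and can be solved directly. The sketch below is an illustration, not the book's code; it assumes SciPy's `linprog` and uses the already-computed normalizing constants λ_l° = (1, 0.923077) and σ_k° = f_k* = (18, 6.5), which Theorem 6.10 identifies with U_l* and σ_k°:

```python
from scipy.optimize import linprog

# sigma-problem (6.4.17)-(6.4.19) for Example 6.5.
# Variables z = (sigma, beta1, beta2, y1, y2), minimize sigma.
lam_l = (1.0, 12.0 / 13.0)      # lambda_1°, lambda_2° = 0.923077
f_star = (18.0, 6.5)            # sigma_k° = f_k*
A_ub = [
    # sigma >= U_l(Y)/lambda_l°, rewritten as <=
    [-1, 0, 0, 12 / lam_l[0], 6 / lam_l[0]],
    [-1, 0, 0, 9 / lam_l[1], 7.2 / lam_l[1]],
    # A^T y >= sum_k (c_j^k/f_k*) beta_k, rewritten as <=
    [0, 2 / f_star[0], 1 / f_star[1], -1.5, -1.2],
    [0, 3 / f_star[0], 1 / f_star[1], -2.0, -1.0],
    # beta1 + beta2 >= 1
    [0, -1, -1, 0, 0],
]
b_ub = [0, 0, 0, 0, -1]
res = linprog([1, 0, 0, 0, 0], A_ub=A_ub, b_ub=b_ub)
print(round(res.fun, 4))        # ~0.9398 = sigma° = lambda°
```

The optimal value σ° ≈ 0.9398 coincides with the λ° of the direct problem, as the duality theorem requires.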
6.4.4. An analysis of duality in VPLP with a set of restrictions on the basis of a Lagrange function

Let's present the Lagrange function for VPLP with a set of restrictions (6.4.1)–(6.4.2) and the λ-problem (6.4.9)–(6.4.10) corresponding to it:
L(X, Y) = λ + Σ_{k=1}^{K} (−λ + Σ_{j=1}^{N} (c_j^k/f_k*)x_j)β_k + Σ_{i=1}^{M} (Σ_{l=1}^{L} (b_i^l/λ_l°)α_l − Σ_{j=1}^{N} a_ij x_j)y_i + (1 − Σ_{l=1}^{L} α_l)σ =
= (1 − Σ_{k=1}^{K} β_k)λ + Σ_{k=1}^{K} Σ_{j=1}^{N} (c_j^k/f_k*)x_j β_k + Σ_{i=1}^{M} (Σ_{l=1}^{L} (b_i^l/λ_l°)α_l − Σ_{j=1}^{N} a_ij x_j)y_i + (1 − Σ_{l=1}^{L} α_l)σ, (6.4.28)
where X = {λ, α_1,…,α_L, x_1,…,x_N} is the vector of unknowns of the direct problem, and Y = {σ, β_1,…,β_K, y_1,…,y_M} is the vector of unknowns of the dual problem. The Lagrange function for the dual problem looks similar. Let's write down (6.4.28) in matrix form:
L(X, Y) = λ(1 − β) + β(CX/f*) − YAX + αYB/λ° + (1 − α)σ. (6.4.29)
The analysis of this Lagrange function is similar to that of section 6.3. As a result of the analysis, at the optimum points X° = {λ°, α_1°,…,α_L°, x_1°,…,x_N°} and Y° = {σ°, β_1°,…,β_K°, y_1°,…,y_M°}, we receive:
λ° = σ°, (6.4.30)
λ° = Σ_{k=1}^{K} λ_k(X°)β_k°, Σ_{k=1}^{K} β_k° = 1, (6.4.31)
σ° = Σ_{l=1}^{L} σ_l(Y°)α_l°, Σ_{l=1}^{L} α_l° = 1. (6.4.32)
We check the received results with a numerical example of the solution to the generalized VPLP.
Example 6.7. From the solution to the direct problem (6.4.21)–(6.4.22) of Example 6.5 we have:
X° = {λ° = 0.93976, α1° = 0.783, α2° = 0.217, x1° = 1.41, x2° = 4.70},
and from the solution to the dual problem (6.4.26)–(6.4.27):
Y° = {σ° = 0.93976, β1° = 0.217, β2° = 0.783, y1° = 0.0482, y2° = 0.06}.
Check of the ratio (6.4.31): λ° = Σ_{k=1}^{K} λ_k(X°)β_k° = (f1(X°)/f1*)β1° + (f2(X°)/f2*)β2° = 0.940.
Check of the ratio (6.4.32): σ° = Σ_{l=1}^{L} σ_l(Y°)α_l° = (U1(Y°)/U1*)α1° + (U2(Y°)/U2*)α2° = 0.940.
Check of the ratio (6.4.30): λ° = σ° = 0.94.
Theorem 6.12 (The theorem of complementary slackness in VPLP with a set of restrictions). In order that the admissible vectors
X° = {λ°, α_1°,…,α_L°, x_1°,…,x_N°}, Y° = {σ°, β_1°,…,β_K°, y_1°,…,y_M°}
be the solutions to the dual problems (6.4.1)–(6.4.2) and (6.4.3)–(6.4.4) and to the corresponding λ-problem (6.4.9)–(6.4.10) and σ-problem (6.4.14)–(6.4.16), it is necessary and sufficient that they satisfy the conditions of complementary slackness:
(1 − β°)λ° + (β°C/f* − Y°A)X° + ((B/λ°)Y° − σ°)α° = 0,
σ°(1 − α°) + Y°(α°B/λ° − X°A) + β°((C/f*)X° − λ°) = 0.
Proof. The necessity immediately follows from the Kuhn–Tucker conditions for the Lagrange function (6.4.29). The sufficiency follows from the duality theorem (Theorem 6.11).
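Ratio (6.4.31) for Example 6.7 can be re-derived by hand from the reported optimum point. The snippet below is an illustrative check, not part of the original text; it uses the optimum point x° = (1.41, 4.70) and the per-criterion maxima f* = (18, 6.5) of the combined problem:

```python
# Direct problem of Example 6.5: f1 = 2x1+3x2, f2 = x1+x2, with the
# per-criterion maxima f* = (18, 6.5) on the combined region.
x1, x2 = 1.41, 4.70                   # optimum point X° of the lambda-problem
beta = (0.217, 0.783)                 # dual weights beta° of the sigma-problem
lam = ((2*x1 + 3*x2) / 18.0, (x1 + x2) / 6.5)   # relative estimates
lam_o = lam[0]*beta[0] + lam[1]*beta[1]         # ratio (6.4.31)
print(round(lam[0], 3), round(lam[1], 3), round(lam_o, 3))   # 0.94 0.94 0.94
```

Both relative estimates equal 0.94 at the optimum, so the weighted sum reproduces λ° = σ° ≈ 0.94, confirming (6.4.30)–(6.4.31).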
CHAPTER 7
THE THEORY OF MANAGEMENT DECISION-MAKING BASED ON VECTOR OPTIMIZATION
The theory of management decision-making (as a science) studies the technology of the development, acceptance, and implementation of management decisions. This chapter presents, in full, the characteristics of decision-making; the research, analysis and problems of defining decision-making under conditions of uncertainty; and a decision-making methodology under conditions of certainty and uncertainty, presented with an example of a numerical model of a static system [60].
7.1. Management decision-making: problems and methods

7.1.1. Problems of the theory of management decision-making

Decision-making theory is a science (a set of scientific disciplines) directed towards solving the problem of defining the best (optimum) ways of making decisions and of actions directed towards the achievement of goals, for various problems of the management of, for example, social, economic, and technical systems [6, 9, 12, 13, 23, 27]. The concept of "management decision-making" includes: a) the technology of the development (the definition of the problem) of management decisions in various fields of knowledge; b) intuitive or mathematical methods for solving the stated management problems; c) the creation of informational and mathematical models of the studied objects and processes; d) the making of the decision for action and the implementation of the received decision.
Thus, the theory of management decision-making is a constituent part (function) of the theory of management, singled out by its importance and specificity. The decision-making function is a problem which is constantly solved in management processes. The interpretation of "decision-making" as a problem allows its content to be formulated more clearly, and the technology and methods of decision-making to be defined. The task of "decision-making" is directed towards defining the best (optimum) actions for the achievement of goals. The purpose is understood as an ideal representation of the desirable state or result of the activity of an object of management. If the actual state does not correspond to the desirable one, then a problem arises. The development of an action plan for the elimination of a problem is, in its substance, the decision-making problem. Problems of "decision-making" can arise in the following cases:

- The functioning of the system in the future does not provide for the achievement of its goals (the purposes of organizational systems in general). In this case, the scheduling contour is involved: for the next planning period it has to develop a new version of the plan and, after its acceptance, it defines the lines of conduct of the system for that planning period.
- The functioning of the system does not currently provide for the achievement of the set (intermediate) objectives in time. In this case, the monitoring contour, which reveals the deviations (a problem), is involved, and develops actions for the elimination of these deviations. After the approval (acceptance) of these actions, operations aimed at their elimination are performed.
- Changes to the purposes of the activity are necessary (i.e., the creation of new goods or services). In this case, both the scheduling and the monitoring contours, which form the business's plan for the creation of new goods and services, are involved in developing actions for the production and sales of the goods.
The problem is always bound to particular conditions which are generally called the situation. The combination of a problem and a situation forms a problem situation. The identification and description of a problem situation provides the initial information for defining the decision-making problem. The subject of any decision is the person known as the decision-maker (DM). The concept of the decision-maker is collective: it can be one person (an individual decision-maker) or a group of people developing a collective decision (the decision-making group). To help the decision-maker in
collecting and analysing information, there are also experts, i.e., specialists in the problem that needs to be solved. The concept of an expert in the theory of decision-making is interpreted in a broad sense and includes the employees of the management involved in preparing the decision, as well as scientists and practitioners. Decision-making occurs within a certain time period, and therefore the concept of a decision-making process is introduced. This process consists of a sequence of stages and procedures and is directed towards eliminating the problem situation. In the decision-making process, alternative (mutually exclusive) versions of decisions are formed and their preferences are estimated. A preference is an integral evaluation of a decision based on an objective analysis (knowledge, experience, calculations and experiments) and a subjective comprehension of the value and effectiveness of the decision. To choose the best decision, the individual DM defines the choice criterion. The group DM makes the choice on the basis of a principle of coordination. The end result of the decision-making problem is the decision, which represents an instruction for action.
7.1.2. The definition of a management decision

The decision has three semantic meanings: the process of solving the problem; the choice of an object from a set of objects; and the result of the implementation of a procedure (task). Decisions which are accepted and implemented in a management process are called management decisions (unlike engineering, design or technology solutions). A management decision is the result of the implementation of a separate procedure, task, function, or a complex of functions or management processes in general, which is obtained on the basis of processing information concerning the condition of an object, the external environment, and the purposes of the organizational system. The object of a management decision is a problem that arises in a particular management function and requires a solution. The subject of the management decision can be the person who is the decision-maker (DM) or the management subsystem as a whole (a collective body). Management decisions are substantive: ways of action, plans of work, alternative designs, etc. The solution has its own characteristics, which include: the existence of the choice from a set of possible alternative decisions; the choice being
focused on the conscious achievement of the goals, i.e., the purposefulness of the choice; and the choice being based on a formed disposition and readiness for action. The decision is called admissible if it satisfies resource, legal, moral and ethical restrictions. The decision is called optimum (best) if it provides an extremum (a maximum or minimum) of the choice criterion for the individual decision-maker, or satisfies the principle of coordination for the group decision-maker. The generalized characteristic of the decision is its effectiveness. This characteristic combines the effect of the decision, i.e., the extent to which it achieves the goals, with the costs of achieving them: the decision is the more effective, the greater the extent of the achievement of the goals and the smaller the costs of their realization.
7.1.3. The classification of management decisions

In organizations, a large number of various types of management decisions (MD) are made. First, in realizing the purposes of one level or another, management decisions carry the signs of the organizational system's purposes that are inherent to them; these are shown in the first column of Table 7.1. Secondly, management decisions have their own signs, which differ according to their content, validity periods, development, informational security, etc. In general, these signs concern the organizational side of the development and realization of management decisions; they are shown in the second column of Table 7.1.

Table 7.1. The classification of management decisions (MD)

Classification of MD by the purposes of the organizational system (OS):
1. Depending on the level of management in the OS, MD are developed:
1.1. At the level of the state or the region: a) for the realization of public purposes; b) for the realization of political goals (social and economic strategies, etc.).
1.2. At the level of the organization: a) MD are developed for the realization of purposes bound to the results of the work of the organization, such as the achievement of given sales volumes, the reduction of expenses, increases in profit, profitability, the quality of production, etc.; b) for the purposes of a newly created (projected) system; c) for purposes bound to the characteristics of production; d) for derivative purposes connected with the impact that the organization exerts on the external environment due to its economic or social content.
1.3. At the level of the person: MD bound to the realization of personal purposes are formed in terms of increases in living standards, education, etc.
2. Depending on the time period, MD are subdivided into: a) long-term (strategic); b) current (tactical); c) operational.
3. Depending on the functional orientation, MD are constructed for separate functions of a control system: a) prediction, b) scheduling, c) decision-making, d) accounting, e) monitoring, f) analysis, and g) regulation.
4. Depending on the organizational structure, MD are developed for each structural division.
5. Depending on the stage of the life cycle of the system, MD related to its behaviour in a particular segment of the market (a strategic zone of business) are constructed.

Classification of MD by organizational signs:
1. Depending on the type of solvable tasks, MD differ in the conditions of the production of the decision and in the factors involved, which can be social, political, economic, organizational, technical, scientific, etc.
2. According to the principles of decision-making, they are divided into algorithmic (highly structured) and heuristic (weakly structured) types.
3. According to the methods of decision substantiation, there are analytical, statistical, mathematical-programming, and gaming types.
4. Depending on the character of the initial information, decisions are made either under certainty (with full information) or under uncertainty (with incomplete information).
5. Depending on the quantity of the purposes, MD are subdivided into single-target (scalar) and multipurpose (vector) types.
6. In decision-making, MD can be presented in the form of a law, order, contract, plan, etc.
7.1.4. Decisionmaking From a technological point of view, it is possible to present decisionmaking processes as sequences of stages and procedures which have, among themselves, straight lines and feedback in their form. Feedback reflects the iterated cyclic nature of dependence between the stages and procedures, both as a scheduling contour and monitoring. Iterations, which are formed during the execution of individual elements of the decisionmaking process, need to clarify and adjust the data to perform subsequent procedures. From an informational point of view, in the decisionmaking process there is a decrease in indeterminacy. The formulation of a problem situation generates the question: "what to do?" Followup of the procedures of decisionmaking processes leads to an answer to this question in the form of "what to do and how to do it." The decisionmaking procedures can be performed by thinking of the DM and its experts, i.e., creatively, in an informal image, and with the application of formal tools (mathematical methods and the computer). In the decisionmaking process, the problems of searching, discernment, classification, streamlining and choice are solved. To solve these problems, methods of analysis and synthesis, induction and deduction, and comparison and generalization are used. The formal procedures consist of carrying out calculations for particular algorithms for the purpose of analyzing a solutioncandidate in the assessment of necessary resources, narrowing the set of solutioncandidates, etc. The implementation of formal procedures is carried out by the decisionmaker, experts, technicians and through technical means. A representation of the decisionmaking process as a logicallyordered set of informal and formal procedures can be described in a flow diagram of the realization of this process. In the decisionmaking process, the following stages are allocated: 
- The definition of the problem (the development of the management decision).
- The formation of the decision (scheduling, model operation, and forecasting).
- Management decision-making and implementation.
- Analysis of the results of the management decision and the developed mathematical model, and its adjustment in the future.
Together, these stages represent a methodology for development and decisionmaking on the basis of informational and economicmathematical models [14, 17].
7.2. A model of decision-making under uncertainty

We consider the decision-making problem with known data on some sets of discrete values of the indices that characterize the engineering system (uncertainty conditions, i.e., incomplete information). We solve the decision-making problem in four stages: we state the decision-making problem; we analyse the methods currently used for solving it; we transform the decision-making problem into a vector problem; and we solve the VPMP, leading to the optimal decision.
7.2.1. The conceptual formulation of the decision-making problem

This can be seen in its general form in [41, p. 79]. We introduce the respective designations: a_i, i = 1,...,M, for the admissible decision-making alternatives, and A = (a1, a2, ..., aM)^T for the vector of the set of admissible alternatives. We match each alternative a ∈ A to K numerical indices (criteria) f1(a), ..., fK(a) that characterize the system. We can assume that this set of indices maps each alternative into a point of the K-dimensional space of outcomes (consequences) of the decisions made: F(a) = {fk(a), k = 1,...,K}^T, a ∈ A. We use the same symbol fk(a), k ∈ K, both for the criterion and for the function that performs estimation with respect to this criterion. Note that we cannot directly compare the variables fv(a) and fk(a), v,k ∈ K, v ≠ k, at any point F(a) of the K-dimensional space of consequences, since this would make no real sense: these criteria are generally measured in different units.

Using these data, we can state the decision-making problem. The decision-maker is to choose the alternative a ∈ A that yields the most suitable result, i.e., F(a) → min. This definition means that the required estimating function should reduce the vector F(a) to a scalar preference or "value" criterion. In other words, it is equivalent to setting a scalar function V given in the space of consequences and possessing the following property:

V(f1(a), ..., fK(a)) ≥ V(f1(a′), ..., fK(a′)) ⟺ (f1(a), ..., fK(a)) ≽ (f1(a′), ..., fK(a′)),

where the symbol ≽ means "no less preferable than" [41]. We call the function V(F(a)) the value function. In different publications, the name of this function may vary from an order value function to a
preference function to a value function. Thus, the decision-maker is to choose a ∈ A such that V(F(a)) is maximal. The value function serves as an indirect comparison of the importance of certain values of the various criteria of the system. That said, the matrix F(a) of admissible outcomes of the alternatives takes the following form (a problem of the first type):

         a1 | f1(a1), ..., fK(a1) |   | f11, ..., f1K |
         ...| ...                 |   | ...           |
    Ψ =  ai | f1(ai), ..., fK(ai) | = | fi1, ..., fiK |     (7.2.1)
         ...| ...                 |   | ...           |
         aM | f1(aM), ..., fK(aM) |   | fM1, ..., fMK |
All alternatives in it are represented by the vector of indices F(a). For the sake of certainty, we assume that the first criterion (any criterion can be the first) is arranged in increasing (decreasing) order, with the alternatives renumbered i = 1,...,M. A problem of the first type implies that the decision-maker is to choose the alternative such that it will yield the "most suitable (optimal) result" [17]. At present, the leximin rule, which is in better agreement with the system ideology, is used to evaluate the quality of the chosen alternative [4, 5].

For an engineering system, we can represent each alternative ai by the N-dimensional vector Xi = {xj, j = 1,...,N} of its parameters, and its outcomes by the K-dimensional vector criterion. Taking this into account, the matrix of outcomes (7.2.1) takes the following form (a problem of the second type):

         a1 | X1  f11(X1), ..., f1K(X1) |
         ...| ...                       |
    Ψ =  ai | Xi  fi1(Xi), ..., fiK(Xi) |     (7.2.2)
         ...| ...                       |
         aM | XM  fM1(XM), ..., fMK(XM) |
In comparison, matrix (7.2.1) has N = 1 parameters, i.e., the alternatives act as the values of a single parameter. A problem of the second type implies that the decision-maker is to choose the set of design parameters of the system, X°, to obtain the optimal result [41]. The informative class of the engineering system — the experimental data that can be represented by matrix (7.2.2) — can come in sufficient numbers from various industries, such as the electrical-engineering, aerospace, and metallurgical (choosing the optimal structure of a substance) industries. Although in this work we consider them as static, one
can also consider engineering systems as dynamic, using differential-difference transformation methods [12] and conducting the research over a small discrete instant Δt ∈ T.
7.2.2. An analysis of the current ("simple") methods of decision-making

At present, problems (7.2.1) and (7.2.2) are solved by a number of "simple" methods based on forming special criteria, such as those of Wald, Savage, Hurwitz, and Bayes-Laplace, which are the basis for decision-making [41].

The Wald criterion of maximizing the minimal component helps make the optimal decision that ensures the maximal gain among the minimal ones:

    a = max_{k∈K} min_{i∈M} f_ik.

The Savage minimal-risk criterion chooses the optimal strategy so that the value of the risk r_ik is minimal among the maximal values of the risks over the columns:

    a = min_{i∈M} max_{k∈K} r_ik.

The value of the risk r_ik is the difference between the decision that yields the maximal profit, max_{i∈M} f_ik, k∈K, and the current value f_ik:

    r_ik = {(max_{i∈M} f_ik) − f_ik, i = 1,...,M, k = 1,...,K},

with their set being the matrix of risks.

The Hurwitz criterion helps choose the strategy that lies between the absolutely pessimistic one and the optimistic one (i.e., the one with the most considerable risk):

    a = max_{i∈M} (α min_{k∈K} f_ik + (1 − α) max_{k∈K} f_ik),

where α is the pessimism coefficient, chosen in the interval 0 ≤ α ≤ 1.

The Bayes-Laplace criterion takes into account each possible consequence of all decision options, given their probabilities p_k:

    a = max_{i∈M} Σ_{k=1}^{K} f_ik p_k.
All these and other methods are sufficiently widely described in publications on decision-making [41, 50, 56]. They all have certain drawbacks. For instance, if we analyze the Wald maximin criterion, we can see that, under the problem's hypothesis, all the criteria are measured in different units. Hence, the first step, which is to choose the minimal component f_k^min = min_{i∈M} f_ik, k∈K, is quite reasonable; but all the f_k^min, k = 1,...,K, are measured in different units, and therefore the second step, which is to maximize the minimal component, max_{k∈K} f_k^min, is pointless.
Although it brings us slightly closer to the solution, a criteria measurement scale fails to solve the problem, since the chosen criteria scales are judgemental. We believe that, to solve problems (7.2.1) and (7.2.2), we need to form a measure that would allow the evaluation of any decision that could be made, including the optimal one. In other words, we need to construct axiomatics that show, based on the set of K criteria, what makes one alternative better than another. The axiomatics can assist in deriving a principle that helps establish whether the chosen alternative is optimal. The optimality principle should become the basis for constructive methods of choosing optimal decisions. We propose such an approach for vector mathematical-programming problems, which are essentially close to the decision-making problems (7.2.1) and (7.2.2).
7.2.3. Transforming a decision-making problem into a vector problem of mathematical programming

We compare problems (7.2.1) and (7.2.2) with the vector mathematical-programming problem. Table 7.2 shows the comparison results. Its first, second and third rows show that all three problems have the common objective to "make the best (optimal) decision." Both types of decision-making problem (row 4) have some uncertainty: the functional dependences of the criteria and the restrictions on the problem's parameters are not known. Many mathematical methods of regression analysis have been developed that are implemented in software (such as MATLAB [31]) and allow the use of a set of initial data (7.2.2), f_ik(Xi), i = 1,...,M, k∈K, to construct the functional dependences f_k(X). For this reason, we use regression methods, including multiple regression, to construct the criteria and restrictions in decision-making problems of both types (row 5) [15, 17]. Combining the criteria and restrictions, we represent decision-making problems of both types as the vector mathematical-programming problem (row 6).

We perform these transformations, using the methods of regression analysis for (7.2.1) and multiple regression for (7.2.2), to transform each k-th column of the matrix Ψ into the criterion function fk(X), k∈K. We put them together in a vector function (1.1.1) to be optimized (1.1.2):

    max F1(X) = {max fk(X), k = 1,...,K1}, min F2(X) = {min fk(X), k = K1+1,...,K}.   (7.2.3)

The criteria are bound by the inequalities f_k^min ≤ fk(X) ≤ f_k^max, k = 1,...,K, where f_k^min = min_{i∈M} f_ik(X) and f_k^max = max_{i∈M} f_ik(X), k∈K, are the minimal and maximal values of each function (1.1.3), and the parameters are bound by the minimal and maximal values of each of them (1.1.4): x_j^min ≤ xj ≤ x_j^max, j = 1,...,N, where x_j^min = min_{i∈M} x_ij and x_j^max = max_{i∈M} x_ij, j∈N,
are the minimal and maximal values of each parameter.

Table 7.2: Comparing VPMP to decision-making problems (DMP).

1. Formulation.
VPMP: opt F(X) = {max F1(X) = {max fk(X), k = 1,...,K1}, min F2(X) = {min fk(X), k = K1+1,...,K}}, G(X) ≤ B, X ≥ 0.
DMP of the 1st type: the matrix A = ||f_i^k||, i = 1,...,M, k = 1,...,K, of criteria values, as in (7.2.1).
DMP of the 2nd type: the matrix A = ||f_i^k(Xi)||, i = 1,...,M, k = 1,...,K, of criteria values, as in (7.2.2).

2. Common objective: making the best decision.

3. Objective.
VPMP: find the vector X from the admissible set such that the vector criterion F(X) = {fk(X), k = 1,...,K} is optimal.
DMP of the 1st type: find the alternative ai, i = 1,...,M, such that the set of criteria f_i^k, k = 1,...,K, is optimal.
DMP of the 2nd type: find the alternative Xi, i = 1,...,M, such that the set of criteria F(Xi) is optimal.

4. Initial information.
VPMP: the vector of parameters X = {xj, j = 1,...,N} and the dependence of the criteria F(X) = {fk(X), k = 1,...,K} on it are given completely; the set of admissible points is given by the inequalities G(X) ≤ B, X^min ≤ X ≤ X^max.
DMP of the 1st type: the parameters are not given; the criteria are represented as a finite set of values; the set of admissible alternatives is finite.
DMP of the 2nd type: the parameters are given; the criteria are represented as separate values, so that the functional dependence between them is not given; the set of admissible points is finite.

5. Transforming the decision-making problem into a VPMP.
DMP of the 1st type: using regression analysis, we transform each k-th set of criteria values f_i^k, i = 1,...,M, into the criterion function fk(X), k = 1,...,K; the vector of criteria is F(X) = {fk(X), k = 1,...,K}^T.
DMP of the 2nd type: using multiple regression, we transform each k-th set of values fk(Xi), i = 1,...,M, into the criterion function fk(X); the vector of criteria is F(X) = {fk(X), k = 1,...,K}^T, subject to f_k^min ≤ fk(X) ≤ f_k^max, k = 1,...,K, X^min ≤ X ≤ X^max.

6. The problems are equivalent to the VPMP:
opt F(X) = {max F1(X) = {max fk(X), k = 1,...,K1}, min F2(X) = {min fk(X), k = K1+1,...,K}},
subject to f_k^min ≤ fk(X) ≤ f_k^max, k = 1,...,K, and x_j^min ≤ xj ≤ x_j^max, j = 1,...,N.

We end up with the VPMP (1.1.1)-(1.1.4) in the form:

    opt F(X) = {max F1(X) = {max fk(X), k = 1,...,K1},   (7.2.4)
    min F2(X) = {min fk(X), k = K1+1,...,K}},            (7.2.5)
    f_k^min ≤ fk(X) ≤ f_k^max, k = 1,...,K,              (7.2.6)
    x_j^min ≤ xj ≤ x_j^max, j = 1,...,N,                 (7.2.7)

where f_k^min = min_{i∈M} f_ik(X) and f_k^max = max_{i∈M} f_ik(X), k∈K,
and use the same methods (based on the normalizing of criteria and the maximin principle) as used for the engineering system model under complete certainty, to solve it for equivalent criteria.
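A minimal sketch of how the bounds in (7.2.6) and (7.2.7) are extracted from an experimental matrix; the sample values below are illustrative, not the book's data:

```python
import numpy as np

# Rows are alternatives; Xs holds their parameter vectors as in (7.2.2),
# Fs the corresponding criterion values.  Illustrative numbers only.
Xs = np.array([[20.0, 20.0, 20.0],
               [40.0, 60.0, 20.0],
               [80.0, 40.0, 60.0]])
Fs = np.array([[330.0,  957.8],
               [450.0, 2518.6],
               [510.0, 1984.6]])

f_min, f_max = Fs.min(axis=0), Fs.max(axis=0)  # f_k^min, f_k^max in (7.2.6)
x_min, x_max = Xs.min(axis=0), Xs.max(axis=0)  # x_j^min, x_j^max in (7.2.7)
```

The column-wise minima and maxima become the box constraints of the resulting VPMP, regardless of how many experimental rows are available.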
7.3. The technology of decision-making under conditions of certainty and uncertainty

Consider a decision-making problem that has data for a certain set of discrete values of the parameters characterizing a system, i.e., under conditions of uncertainty in its operation. The operation of the system depends on three parameters, X = {x1, x2, x3} (a vector of dimension N = 3 > 2). The initial data for decision-making represent four characteristics, {h1(X), h2(X), h3(X), h4(X)}, whose values
depend on the vector X. The numerical values of these characteristics are presented in Table 7.3 (uncertainty conditions).

Table 7.3: Numerical values of parameters and characteristics of the system.

x1  x2  x3  h1(X)→max  h2(X)→min  h3(X)→max  h4(X)→min
20  20  20    330    957.8    72.2    20.2
20  20  40    350    986.2    80.6    28.4
20  20  60    370   1114.6    69.0    38.5
20  20  80    390   1043.0    67.4    44.6
20  40  20    250   1785.8   168.2    16.6
20  40  40    270   1814.2   186.6    24.8
20  40  60    290   1842.6   165.0    32.9
20  40  80    310   1871.0   163.4    41.0
20  60  20    170   2461.8   328.2    13.0
20  60  40    190   2290.2   326.6    20.2
20  60  60    210   2518.6   355.0    29.3
20  60  80    230   2547.0   323.4    37.4
20  80  20     10   2985.8   552.2     9.40
20  80  40     30   3014.2   550.6    17.60
20  80  60     50   3042.6   549.0    25.7
20  80  80     70   3071.0   547.4    33.8
40  20  20    410    996.2    75.4    44.0
40  20  40    430   1043.0    72.2    57.2
40  20  60    450   1099.8    69.0    73.4
40  20  80    470   1156.6    65.8    89.7
40  40  20    410   1814.2   191.4    33.8
40  40  40    430   1871.0   168.2    50.0
40  40  60    450   1927.8   165.0    66.2
40  40  80    470   1984.6   161.8    82.5
40  60  20    330   2490.2   331.4    26.6
40  60  40    350   2547.0   328.2    42.8
40  60  60    370   2703.8   325.0    69.0
40  60  80    390   2660.6   321.8    75.3
40  80  20    170   3014.2   595.4    19.4
40  80  40    190   3071.0   552.2    35.6
40  80  60    210   3127.8   549.0    51.8
40  80  80    230   3184.6   545.8    68.1
60  20  20    490   1014.6    81.8    72.5
60  20  40    510   1099.8    77.0    96.8
60  20  60    530   1185.0    72.2   121.2
60  20  80    550   1270.2    67.4   145.6
60  40  20    490   1842.6   177.8    61.70
60  40  40    510   1927.8   173.0    86.00
60  40  60    530   2013.0   168.2   110.4
60  40  80    550   2098.2   163.4   134.8
60  60  20    410   2518.6   337.8    50.90
60  60  40    430   2603.8   333.0    75.20
60  60  60    450   2689.0   328.2    99.60
60  60  80    470   2774.2   323.4   124.0
60  80  20    250   3042.6   561.8    40.10
60  80  40    270   3127.8   557.0    64.40
60  80  60    290   3213.0   552.2    88.80
60  80  80    310   3298.2   547.4   113.2
80  20  20    490   1043.0    91.4   114.8
80  20  40    510   1156.6    85.0   147.3
80  20  60    530   1270.2    78.6   179.8
80  20  80    550   1383.8    72.2   212.2
80  40  20    490   1871.0   187.4   100.4
80  40  40    510   1984.6   181.0   132.9
80  40  60    530   2098.2   174.6   165.4
80  40  80    550   2211.8   168.2   197.8
80  60  20    410   2547.0   347.4    86.00
80  60  40    430   2660.6   341.0   118.5
80  60  60    450   2774.2   334.6   151.0
80  60  80    470   2887.8   328.2   183.4
80  80  20    250   3071.0   571.4    71.60
80  80  40    270   3184.6   565.0   104.1
80  80  60    290   3298.2   558.6   186.6
80  80  80    310   3411.8   552.2   169.0
Here, we have the constraints: the functional constraint 1000 ≤ h2(X) ≤ 3100 and the parametrical constraints 20 ≤ x1 ≤ 80, 20 ≤ x2 ≤ 80, and 20 ≤ x3 ≤ 80. In making the decision, it is desirable to obtain values of the first and third characteristics that are as high as possible (max), while the values of the second and fourth characteristics are as low as possible (min). According to the considered technology, we present the solution to the problem in two stages.

Stage 1. Constructing a VPMP in the MATLAB system using the initial data in Table 7.3.
Step 1. For each set of experimental data fk, k = 1,...,4, a regression function is constructed in the MATLAB system by the least-squares method. For this purpose, a polynomial that determines the interconnection of the factors x1, x2, and x3 is formed. The result for each criterion is the system of its coefficients, A = {a0, a1, a2, a3, a4, a5} (see Table 7.4). The determination coefficients RRj = [0.9835 0.9985 0.9986 0.9871] are computed for each criterion, respectively (for more detail, see [17, p. 394]).

Table 7.4: Coefficients of the regression functions of the system's characteristics.

Coefficient      h1         h2         h3         h4
a0              45.0     -46.6937    43.0125    17.2937
a1              11.61      0.8339    -0.0375    -0.2747
a2              -0.0875   -0.0082     0.0040     0.0150
a3               4.45     51.8453     0.1031    -0.1019
a4              -0.0875   -0.1818     0.0792    -0.0061
a5               0.0167    0.0719    -0.0048     0.0207
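This least-squares step can be sketched as follows (the book uses MATLAB; numpy stands in here). The basis {1, x1, x1², x2, x2², x1·x3} mirrors the coefficient layout a0,...,a5 of Table 7.4; the sample below is synthetic, generated from known coefficients, so the fit recovers them exactly rather than reproducing the book's table:

```python
import numpy as np

def fit_quadratic(X, h):
    """Least-squares fit of h ~ a0 + a1*x1 + a2*x1^2 + a3*x2 + a4*x2^2 + a5*x1*x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    B = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2, x1 * x3])
    a, *_ = np.linalg.lstsq(B, h, rcond=None)
    # Determination coefficient R^2 of the fitted polynomial.
    r2 = 1.0 - ((h - B @ a) ** 2).sum() / ((h - h.mean()) ** 2).sum()
    return a, r2

# Synthetic sample on the box 20 <= x_j <= 80 (stand-in for Table 7.3).
rng = np.random.default_rng(0)
X = rng.uniform(20.0, 80.0, size=(40, 3))
true_a = np.array([45.0, 11.6, -0.09, 4.45, -0.09, 0.017])
basis = np.column_stack([np.ones(40), X[:, 0], X[:, 0]**2,
                         X[:, 1], X[:, 1]**2, X[:, 0] * X[:, 2]])
a, r2 = fit_quadratic(X, basis @ true_a)
```

With real, noisy data the same call returns the coefficient vector together with an R² below one, which is what the determination coefficients RRj above report.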
Step 2. Using the coefficients of the regression functions from Table 7.4, we construct the problem (7.2.4)-(7.2.7):

    f1(X) = −(45.0 + 11.6167x1 − 0.0875x1² + 4.45x2 − 0.0875x2² + 0.0167x1x3),   (7.3.1)
    f2(X) = −46.6937 + 0.8339x1 − 0.0082x1² + 51.8453x2 − 0.1818x2² + 0.0719x1x3,   (7.3.2)
    f3(X) = −(43.01 − 0.0375x1 + 0.004x1² + 0.1031x2 + 0.0792x2² − 0.0048x1x3),   (7.3.3)
    f4(X) = 17.2937 − 0.2747x1 + 0.015x1² − 0.1x2 − 0.0061x1x2 + 0.0207x1x3,   (7.3.4)
    1000 ≤ f2(X) ≤ 3100, 20 ≤ x1 ≤ 80, 20 ≤ x2 ≤ 80, 20 ≤ x3 ≤ 80.   (7.3.5)

Stage 2. Choosing the value of the prioritized criterion and computing the parameters.

2.1. Solving the VPMP with equivalent criteria.

Step 1. Problem (7.3.1)-(7.3.5) is solved for each criterion separately, using MATLAB's fmincon function [2]; its call is considered in [9]. As a result, we obtain the optimal points X_k*, k = 1,...,K, and the criteria values at the optimal points, f_k* = fk(X_k*), i.e., the best solution for each criterion:
X1* = {80.0, 25.4286, 80.0}, f1* = 577.7946;
X2* = {21.1709, 20.5619, 28.6039}, f2* = 1000.0;
X3* = {80.0, 80.0, 20.0}, f3* = 573.0605;
X4* = {20.0, 80.0, 20.0}, f4* = 8.1677.

The constraints and the optimal points in the coordinates {x1, x2} are partially presented in Fig. 7.1.
Fig. 7.1. The Pareto set S° ⊂ S in a two-dimensional system of coordinates.
Step 2. Determining the worst invariant part of each criterion (the anti-optimum) in problem (7.3.1)-(7.3.5):

X1^0 = {20.0, 80.0, 20.0}, f1^0 = 45.014;
X2^0 = {56.6058, 79.9087, 35.3173}, f2^0 = 3100;
X3^0 = {52.6875, 20.0, 80.0}, f3^0 = 65.6506;
X4^0 = {80.0, 20.0, 80.0}, f4^0 = 211.9997.
Step 3. System analysis of the set of Pareto-optimal points. For this purpose, at the optimal points X* = {X1*, X2*, X3*, X4*}, we determine the matrix F(X*) = ||f_q(X_k*)||, q = 1,...,K, k = 1,...,K, of the values of the targeted functions, the matrix λ(X*) = ||λ_q(X_k*)|| of the relative estimates, and the vector D = (d1, d2, d3, d4)^T of deviations d_k = f_k* − f_k^0, k = 1,...,4, of each criterion on the admissible set S:

            | 577.8  1628.5   88.7  208.8 |          |  532.7806 |
    F(X*) = | 316.3  1000.0   76.7   26.0 | ,   D =  | -2100.0   | ,
            | 237.1  3066.7  573.1   77.2 |          |  507.4099 |
            |  45.0  2979.6  557.1    8.2 |          | -203.832  |

            | 1.0     0.7007  0.0455  0.0157 |
    λ(X*) = | 0.5093  1.0     0.0218  0.9126 | ,   where λk(X) = (fk(X) − f_k^0)/d_k.
            | 0.3605  0.0159  1.0     0.6611 |
            | 0       0.0573  0.9685  1.0    |
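The normalization λk(X) = (fk(X) − f_k^0)/d_k can be checked numerically. The sketch below uses the rounded values printed above, so the computed estimates agree with the matrix λ(X*) to about three decimal places:

```python
import numpy as np

# Criteria values at the four per-criterion optima X_1*,...,X_4* (rows),
# rounded as printed in the text.
F = np.array([
    [577.8, 1628.5,  88.7, 208.8],
    [316.3, 1000.0,  76.7,  26.0],
    [237.1, 3066.7, 573.1,  77.2],
    [ 45.0, 2979.6, 557.1,   8.2],
])
f_best  = np.array([577.8, 1000.0, 573.1,     8.2])       # f_k* (best value of each criterion)
f_worst = np.array([ 45.0, 3100.0,  65.6506, 211.9997])   # f_k^0 (anti-optimum)

D = f_best - f_worst      # deviations d_k = f_k* - f_k^0
L = (F - f_worst) / D     # relative estimates lambda_k at each optimum, in [0, 1]
```

Because the same deviation d_k rescales both maximized and minimized criteria, each diagonal entry of L is exactly 1 and the anti-optimal entry of each column is 0, as the axiomatics requires.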
Step 4. Constructing the λ-problem. This is done in two stages. Initially, we build the maximin optimization problem with the normalized criteria (7.3.1)-(7.3.4):

    λ° = max_{X∈S} min_{k∈K} λk(X),

which at the second stage is transformed into a standard mathematical-programming problem:

    λ° = max λ, subject to λ ≤ λk(X), k = 1,...,K, X∈S,   (7.3.6)

where the dimension of the vector of unknowns is four: X = {x1, x2, x3, λ}. In this case, we have the following constraints: 1000 ≤ f2(X) ≤ 3100, 0 ≤ λ ≤ 1, 20 ≤ x1 ≤ 80, 20 ≤ x2 ≤ 80, 20 ≤ x3 ≤ 80. As a result of solving VPMP (7.3.1)-(7.3.5) with equivalent criteria and the corresponding λ-problem (7.3.6), we obtain the following results:
- the optimal point X° = {X°, λ°} = {41.5253, 52.6174, 20.0, 0.4009};
- the criteria values and the relative estimates fk(X°) and λk(X°), k = 1,...,4;
- the maximum lower level among all the relative estimates, measured in relative units: λ° = 0.4009.

2.2. The transition to two-dimensional space and choosing a priority characteristic.
Step 1. Constructing a two-dimensional space of parameters, i.e., preparing for decision-making. At each point X_k*, k = 1,...,K, we determine the values of all the criteria and relative estimates:

            | 577.8  1628.5   88.7  208.8 |            | 1.0     0.7007  0.0455  0.0157 |
    F(X*) = | 316.3  1000.0   76.7   26.0 | ,  λ(X*) = | 0.5093  1.0     0.0218  0.9126 | .   (7.3.7)
            | 237.1  3066.7  573.1   77.2 |            | 0.3605  0.0159  1.0     0.6611 |
            |  45.0  2979.6  557.1    8.2 |            | 0       0.0573  0.9685  1.0    |

We determine the values of all the criteria and the corresponding relative estimates at the point X°:

    f(X°) = {382.3, 2258.1, 269.1, 30.3};   (7.3.8)
    λ(X°) = {0.6330, 0.4009, 0.4009, 0.8916}.   (7.3.9)

We project these characteristics onto a two-dimensional surface. For this purpose, from the entire set of N = 3 parameters, we choose the two that (in the opinion of the DM) have the most significant impact on the set of characteristics, e.g., j = 1 and j = 2. The dimensions (the upper and lower bounds) of these parameters correspond to (7.3.5): 20 ≤ x1 ≤ 80 and 20 ≤ x2 ≤ 80. The variables x1 and x2 form the base of a two-dimensional coordinate system onto which the optimal points X° and X_k*, k = 1,...,K, are projected (the boundary of the Pareto set). We present them in Fig. 7.1. The admissible set of points S, which is formed by the constraints, and the optimal points X1*, X2*, X3* and X4*, combined in a contour, represent the set of Pareto-optimal points S° ⊂ S.

We determine additional points that refine the boundary of the Pareto set. For this purpose, the λ-problem is solved with two criteria. When solving the λ-problem with the first and second criteria, we obtain the set of Pareto-optimal points lying on the segment X1*X2*, where the optimal point X°12 is the result of the solution with the two equivalent criteria and λ°(X°12) is the maximum level; here, λ1(X°12) = λ2(X°12) = 0.8522. Analogously, with the other criteria:
X°12 = {56.2578, 24.8958, 20.0000, 0.8518}, λ°(X°12) = 0.8518;
X°13 = {80.0163, 63.0154, 19.9837, 0.6337}, λ°(X°13) = 0.6337;
X°34 = {35.9432, 79.9631, 20.0028, 0.9500}, λ°(X°34) = 0.9500;
X°42 = {20.5000, 27.6837, 38.7756, 0.9020}, λ°(X°42) = 0.9020.
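Such a two-criteria λ-problem can be sketched numerically as follows (a toy instance, not the book's problem, with scipy's SLSQP standing in for MATLAB's fmincon): for the normalized criteria λ1(x) = x and λ2(x) = 1 − x on 0 ≤ x ≤ 1, the maximin point is x = 0.5 with guaranteed level λ° = 0.5.

```python
from scipy.optimize import minimize

# Unknowns z = [x, lam]; maximize lam subject to lam <= lambda_k(x).
res = minimize(
    lambda z: -z[1],                  # maximize lam -> minimize -lam
    x0=[0.2, 0.0],
    method="SLSQP",
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    constraints=[
        {"type": "ineq", "fun": lambda z: z[0] - z[1]},          # lam <= lambda_1(x) = x
        {"type": "ineq", "fun": lambda z: (1.0 - z[0]) - z[1]},  # lam <= lambda_2(x) = 1 - x
    ],
)
x_opt, lam_opt = res.x
```

The active constraints at the solution identify the two most conflicting criteria, which is exactly how the points X°12, X°13, X°34 and X°42 above are obtained pairwise.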
We present them in Fig. 7.1. The Pareto set S° ⊂ S lies among the optimal points X1*, X°12, X2*, X°13, X3*, X°34, X4*, X°42; i.e., the set of Pareto-optimal points S° is included in the domain of admissible points S formed by constraints (7.3.5). We show the coordinates of all the obtained points and relative estimates in Fig. 7.1, and in three-dimensional space in Fig. 7.2, where the third axis, λ, is a relative estimate.
Fig. 7.2. Solution to the λ-problem in a three-dimensional system of coordinates x1, x2 and λ.
At the optimal points X_k*, k = 1,...,K, of problem (7.3.1)-(7.3.5), all the relative estimates are equal to one: λk(X_k*) = 1. At the anti-optimal points X_k^0, all the relative estimates are equal to zero: λk(X_k^0) = 0. Hence, for every k ∈ {1,...,4} and X ∈ S, 0 ≤ λk(X) ≤ 1. All four functions λk(X), k = 1,...,4, in Fig. 7.2 are constructed with allowance for the absence of the third coordinate, which is represented by a constant; in particular, x3 = 20.0, as at the point X° obtained with the equivalent criteria. For example, the criterion f1(X) from (7.3.1)-(7.3.5) takes the form f1′(X) = f1(X)|x3=20.0, and analogously for the other criteria.

The approximation error is easily determined; e.g., at the optimal point X1*, in a three-dimensional (in the general case, N-dimensional) coordinate system, λ1(X1*) = 1.0, whereas in the two-dimensional coordinate system f1′(X1*) = 522.8, and the corresponding relative estimate λ1′(X1*) = 0.8967 is presented in Fig. 7.2. The approximation error is determined as follows: Δλ1(X1*) = λ1(X1*) − λ1′(X1*) = 0.1033.

The optimal point X° characterizes, on the one hand, the optimal parameters X° and, on the other hand, the guaranteed result λ° = 0.4009. However, it is not seen in Fig. 7.2, because λ° is occluded by the first criterion λ1(X) and the fourth criterion λ4(X), whose relative estimate λ4(X°) = 0.8916 is shown in Fig. 7.2. In general, Fig. 7.2 shows the interrelation of all the criteria measured in relative units.

We can also see in Figs. 7.1 and 7.2 that the domain bounded by the points S1° = {X1* X°12 X° X°13 X1*} is characterized by the inequality λ1(X) ≥ λk(X), k = 2,...,4, X ∈ S1° (shown in Fig. 7.1 as λ1 ≥ λ2, λ3, λ4); i.e., this domain is prioritized for the first criterion, S1° ⊂ S°. In this domain, the priority of the first criterion over the other criteria is greater than or equal to one: p_k^1(X) ≥ 1, X ∈ S1°. The domains with priority of a corresponding criterion k are constructed analogously; together, they represent the set of Pareto-optimal points: S° = X° ∪ S1° ∪ S2° ∪ S3° ∪ S4° ⊂ S. Using the concept of criterion priority, we can choose any point from the Pareto set.

Hence, Fig. 7.2 presents, in a two-dimensional coordinate system, the basic situations that arise in analyzing the set of Pareto-optimal points in problems of simulating technical systems in which the model is presented as a VPMP, with decision-making based on these data. The theorem demonstrates that at the optimal point X° there are always two most conflicting criteria, q and p, for which the exact equality λ° = λq(X°) = λp(X°) holds in relative units (for the other criteria, the inequalities are true). In problem (7.3.1)-(7.3.5), these are the second and third criteria: λ° = λ2(X°) = λ3(X°) = 0.4009. The following messages are displayed:
"Criteria at the optimal point X°: FXo = 382.3 2258.1 269.1 30.3";
"Relative estimates at X°: LXo = 0.6330 0.4009 0.4009 0.8916".
As a rule, from this pair, the criterion is chosen that the DM would like to improve (the prioritized criterion).
The following prompt is displayed: q = input('Input a priority criterion (number) q = '). We input q = 2.

Step 3. From matrices (7.3.7) and (7.3.8), which show the values of all the criteria measured in natural units at the points X° and X_k*, the limits of the change of the prioritized criterion q in natural units, and the corresponding limits of the relative estimates, are determined and displayed:

    f2(X°) = 2258.1 ≥ f2(X) ≥ 1000 = f2(X2*),   (7.3.10)
    λ2(X°) = 0.4009 ≤ λ2(X) ≤ 1 = λ2(X2*).   (7.3.11)

The following prompt is displayed: fq = input('Input the value of the priority criterion fq = '). We input fq = 1500.

We present the most conflicting criteria (2 and 3) in Fig. 7.3. Using them, we demonstrate the reverse transition to three-dimensional space (an analog of N-dimensional space). Fig. 7.3 is an analog of Fig. 7.2; in Fig. 7.3, the functions λ1(X) and λ4(X) are deleted, and the points X° and λ° = 0.3788 used for the construction are seen.
Fig. 7.3. Choice of the criterion in the λ-problem and its solution.
Step 4. For the chosen value of the prioritized criterion, the relative estimate λq = (1500 − 3100)/(1000 − 3100) = 0.7619 is determined; according to (7.3.11), in the transition from X° to X it lies in the following range: λ2(X°) = 0.4009 ≤ λ2(X) = 0.7619 ≤ 1 = λ2(X2*). The limits of the change of the relative estimates of the other criteria are determined analogously; the corresponding lines are shown in Fig. 7.3:

    λ1(X°) = 0.7332 ≥ λ1 ≥ 0.5093 = λ1(X2*),
    λ3(X°) = 0.3788 ≥ λ3 ≥ 0.0218 = λ3(X2*),
    λ4(X°) = 0.7459 ≤ λ4 ≤ 0.9126 = λ4(X2*).

Step 5. We determine the coefficient of proportionality:

    ρ = (0.7619 − 0.4009)/(1 − 0.4009) = 0.6026, q = 2.

Step 6. Using the obtained coefficient of proportionality ρ, we determine, by the formula, all the components of the vector of priorities Pq:

    P2 = [1.9636 1.0 45.8817 1.09].   (7.3.12)

2.3. Constructing the λ-problem with criterion priority.

The obtained vector of priorities P2 is introduced into the λ-problem (7.3.6):

    λ° = max λ, subject to λ ≤ p_k^q(X)·λk(X), k = 1,...,K, X ∈ S.   (7.3.13)

2.4. Solving the λ-problem with criterion priority.

Step 1. As a result of the solution of the λ-problem (7.3.13) in the MATLAB system, we obtain the following results:
- the optimal point X°° = [X° = {24.1562, 31.4471, 20.0}, λ°° = 0.7838];
- the values of the targeted functions f1(X°°) = 336.1, f2(X°°) = 1454.1, f3(X°°) = 123.7, f4(X°°) = 21.6;
- the values of the relative estimates λ1(X°°) = 0.5452, λ2(X°°) = 0.7838, λ3(X°°) = 0.1144, λ4(X°°) = 0.9342.   (7.3.14)

The obtained relative estimates are presented in Fig. 7.3 at the point X°°. The maximum relative estimate λ°° has the property [9] that the following inequalities hold:

    λ°° ≤ p_k^q(X°°)·λk(X°°), k = 1,...,4,

where the p_k^q(X°°) are taken from (7.3.12) and the λk(X°°) from (7.3.14).
We have λ°° = 0.7838 ≤ {0.7838, 0.7838, 3.2077, 0.7838}; i.e., λ°° is really a guaranteed result, and the decision X°° obtained, with f2(X°°) = 1454.1, is the optimal decision.

Step 2. Analyzing the results. The determined value f2(X°°) = 1454.1 is usually not equal to the assigned value fq = 1500. The error in choosing fq depends on the error of the linear approximation (in changing the relative estimates λk(X), k = 1,...,K, and the vector of priorities p_k^q(X), k = 1,...,K) performed in steps 5 and 6. If the error Δfq = f2(X°°) − fq = 1454.1 − 1500 = −45.9 (Δfq% = 3.1%) is smaller than the assigned error Δf, then go to step 6; if Δfq ≥ Δf, then the next step is executed. Steps 3-5 are executed if necessary.

Step 6. The end.
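The effect of the priority vector in (7.3.13) can be seen in miniature on the same toy two-criteria instance used earlier (an illustration, not the book's data): with λ1(x) = x, λ2(x) = 1 − x and an assumed priority vector P1 = [1, 2] favouring criterion 1, the constraints λ ≤ pk·λk(x) shift the maximin point from x = 0.5 towards criterion 1's optimum, to x = 2/3.

```python
from scipy.optimize import minimize

P = [1.0, 2.0]  # assumed priority vector of criterion 1 over criterion 2

res = minimize(
    lambda z: -z[1],                   # z = [x, lam]; maximize lam
    x0=[0.5, 0.4],
    method="SLSQP",
    bounds=[(0.0, 1.0), (0.0, 2.0)],
    constraints=[
        {"type": "ineq", "fun": lambda z: P[0] * z[0] - z[1]},          # lam <= p_1 * lambda_1(x)
        {"type": "ineq", "fun": lambda z: P[1] * (1.0 - z[0]) - z[1]},  # lam <= p_2 * lambda_2(x)
    ],
)
x_opt, lam_opt = res.x
```

Relaxing the non-prioritized criterion's constraint by a factor greater than one is what lets the guaranteed level rise while the solution moves along the Pareto boundary towards the prioritized criterion, exactly as X° moves to X°° above.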
Conclusions

The problem of optimal decision-making in a complex system characterized by a set of functional characteristics is one of the most important tasks of systems analysis and design. This work presents a new technology for optimal decision-making in vector optimization problems with a prioritized criterion. The methodology for optimal decision-making with an assigned criterion priority is based on axioms using the normalization of criteria and the maximin principle. The accuracy of choosing such a point depends on a predetermined error. This methodology is systemic in nature: it can be used in studying, simulating, and making optimal decisions for a broad class of technical systems in various branches, as well as in simulating economic systems.
BIBLIOGRAPHY
[1] Pareto V. Cours d'Economie Politique. Lausanne: Rouge, 1896.
[2] Karlin S. Mathematical Methods and Theory in Games, Programming, and Economics. Moscow: Mir, 1964, p. 837.
[3] Tamm M.I. A compromise solution of a linear programming problem with several objective functions // Economics and Mathematical Methods. 1973. Vol. 9, No. 2, pp. 328–330.
[4] Yemelyanov S.V., Borisov V.I., Malevich A.A., Cherkashin A.M. Models and methods of vector optimization // Izv. Academy of Sciences of the USSR. Tekh. Kibern. 1973. No. 6, pp. 386–448.
[5] Homenyuk V.V., Mashunin Yu.K. A multicriteria problem of linear programming // Information and Management. Vladivostok: IAPU DVNTs Academy of Sciences of the USSR. Issue 13, 1974, pp. 134–141.
[6] Germeier Yu.B. Non-Antagonistic Games. Moscow: Nauka, 1976; Springer, Netherlands, 1986.
[7] Zeleny M. Multicriteria simplex method: a Fortran routine // Lect. Notes Econ. and Math. Syst. 1976. Vol. 123, pp. 323–345.
[8] Butrim B.I. A modified solution of the bargaining problem // Journal of Computational Mathematics and Mathematical Physics. 1976. No. 2, pp. 340–350.
[9] Zak Yu.A. Multistage decision-making processes in the problem of vector optimization // Avtomatika i Telemekhanika. 1976. No. 6, pp. 41–45.
[10] Mikhailevich V.S., Volkovich V.L. Computational Methods of Research and Design of Complex Systems. Moscow: Nauka, 1979, p. 319.
[11] Mashunin Yu.K. Methods and Models of Vector Optimization. Moscow: Nauka, 1986 [in Russian].
[12] Mashunin Yu.K., Levitskii V.L. Methods of Vector Optimization in Analysis and Synthesis of Engineering Systems. Monograph. Vladivostok: DVGAEU, 1996 [in Russian].
[13] Mashunin Yu.K. Solving composition and decomposition problems of synthesis of complex engineering systems by vector optimization methods // Comput. Syst. Sci. Int. 1999. Vol. 38, p. 421.
[14] Mashunin Yu.K. Theoretical Bases and Methods of Vector Optimization in the Management of Economic Systems. Moscow: LOGOS, 2001, p. 248.
[15] Mashunin Yu.K. Market Theory and Simulation Based on Vector Optimization. Moscow: Universitetskaya Kniga, 2010, p. 352 [in Russian].
[16] Mashunin Yu.K. Engineering system modeling on the base of a vector problem of nonlinear optimization // Control Applications of Optimization. Preprints of the Eleventh IFAC International Workshop, CAO 2000, July 3–6, 2000. Saint Petersburg, 2000, pp. 145–149.
[17] Mashunin Yu.K. Control Theory. The Mathematical Apparatus of Management of the Economy. Moscow: Logos, 2013, p. 448 [in Russian].
[18] Mashunin K.Yu., Mashunin Yu.K. Simulation of engineering systems under uncertainty and optimal decision making // Journal of Comput. Syst. Sci. Int. 2013. Vol. 52, No. 4, pp. 519–534.
[19] Mashunin Yu.K. Modeling of Investment Processes in the Regional Economy. Monograph. LAMBERT Academic Publishing, 2014, p. 353.
[20] Mashunin Yu.K., Mashunin K.Yu. Modeling of technical systems on the basis of vector optimization (1. At equivalent criteria) // International Journal of Engineering Sciences & Research Technology. 2014. Vol. 3, No. 9, pp. 84–96.
[21] Mashunin Yu.K., Mashunin K.Yu. Modeling of technical systems on the basis of vector optimization (2. With a criterion priority) // International Journal of Engineering Sciences & Research Technology. 2014. Vol. 3, No. 10, pp. 224–240.
[22] Mashunin Yu.K., Mashunin K.Yu. Simulation and optimal decision making in the design of technical systems // American Journal of Modeling and Optimization. 2015. Vol. 3, No. 3, pp. 56–67.
[23] Mashunin Yu.K., Mashunin K.Yu. Simulation and optimal decision making in the design of technical systems (The decision with a criterion priority) // American Journal of Modeling and Optimization. 2016. Vol. 4, No. 2, pp. 51–66.
[24] Mashunin Yu.K. Vector optimization in the system of optimal decision making in the design of economic and technical systems // International Journal of Emerging Trends & Technology in Computer Science. 2017. Vol. 7, No. 1, pp. 42–57.
[25] Mashunin Yu.K. Concept of technical systems optimum designing (mathematical and organizational statement) // International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM 2017), Proceedings 8076394. Saint Petersburg, Russia. WOS: 000414282400287. ISBN: 978-1-5090-5648-4.
[26] Mashunin Yu.K. Optimum designing of the technical systems concept (numerical realization) // International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM 2017), Proceedings 8076395. Saint Petersburg, Russia. WOS: 000414282400288. ISBN: 978-1-5090-5648-4.
[27] Mashunin K.Yu., Mashunin Yu.K. Vector optimization with equivalent and priority criteria // Journal of Comput. Syst. Sci. Int. 2017. Vol. 56, No. 6, pp. 975–996. https://rdcu.be/bhZ8i
[28] Torgashov Yu., Krivosheev V.P., Mashunin Yu.K., Holland Ch.D. Calculation and multiobjective optimization of static modes of mass-exchange processes by the example of absorption in gas separation // Izv. Vyssh. Uchebn. Zaved., Neft' Gaz. 2001. No. 3, pp. 82–86.
[29] Polishchuk A.I. Analysis of Multicriteria Economic and Mathematical Models. Novosibirsk: Nauka, Siberian Branch, 1989, p. 239.
[30] Mathematical Encyclopedia / Editor-in-Chief I.M. Vinogradov. In 5 volumes. Moscow: Soviet Encyclopedia, 1982.
[31] Ketkov Yu.L., Ketkov A.Yu., Shul'ts M.M. MATLAB 6.x: Numerical Programming. St. Petersburg: BKhV-Peterburg, 2004, p. 672 [in Russian].
[32] Wagner H. Fundamentals of Operations Research. Moscow: Mir, 1972. Vols. 1–3.
[33] Intriligator M. Mathematical Methods and Economic Theory. Moscow: Progress, 1975.
[34] Himmelblau D. Applied Nonlinear Programming. Moscow: Mir, 1975, p. 583.
[35] Reklaitis G., Ravindran A., Ragsdell K. Optimization in Engineering. Moscow: Mir, 1986, p. 349.
[36] Operations Research / Ed. J. Moder and S. Elmaghraby. Moscow: Mir, 1981. In 2 volumes. Vol. 1: Methodological Foundations and Mathematical Methods, p. 712; Vol. 2: Models and Applications, p. 677.
[37] Murtagh B. Modern Linear Programming. Moscow: Mir, 1984, p. 224.
[38] Gill P., Murray W., Wright M. Practical Optimization. Moscow: Mir, 1985, p. 509.
[39] Taha H. Introduction to Operations Research. Moscow: Mir, 1986. In 2 books: Book 1, 479 pp.; Book 2, 496 pp.
[40] Bertsekas D. Constrained Optimization and Lagrange Multiplier Methods. Moscow: Radio i Svyaz', 1987, p. 400.
[41] Keeney R.L., Raiffa H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley, 1976; Moscow: Radio i Svyaz', 1981.
[42] Steuer R. Multiple Criteria Optimization: Theory, Computation, and Application / Translated from English. Moscow: Radio i Svyaz', 1992, p. 504.
[43] Eddowes M., Stansfield R. Decision Making Methods / Translated from English, edited by corresponding member of the RAN I.I. Yeliseyeva. Moscow: Audit, UNITY, 1997, p. 590.
[44] Li D., Haimes Y.Y. Hierarchical generating method // Journal of Optimization Theory and Applications. 1987. Vol. 54, No. 2, pp. 303–333.
[45] Gale D., Kuhn H.W., Tucker A.W. Linear programming and the theory of games // Activity Analysis of Production / Ed. T.C. Koopmans. New York: Wiley, 1951, pp. 317–329.
[46] Isermann H. Duality in multiple objective linear programming // Multiple Criteria Problem Solving / Ed. S. Zionts. Berlin; New York, 1978, pp. 274–285.
[47] Kornbluth J.S.H. The fuzzy dual: information for the multiple objective decision-maker // Comput. and Oper. Res. 1977, pp. 265–273.
[48] Multiple Criteria Problem Solving / Ed. S. Zionts. Berlin etc.: Springer, 1978, p. 481 (Lect. Notes Econ. Math. Syst.).
[49] Rodder W. A satisfying aggregation of objectives by duality // Lect. Notes Econ. Math. Syst. 1980. Vol. 177, pp. 389–399.
[50] Szidarovszky F., Gershon M., Duckstein L. Techniques for Multiobjective Decision Making in Systems Management. 1986, p. 498.
[51] Coleman T., Branch M.A., Grace A. Optimization Toolbox for Use with MATLAB. The MathWorks, Inc., January 1999.
[52] Balali V., Zahraie B., Roozbahani A. Integration of ELECTRE III and PROMETHEE II decision-making methods with an interval approach: application in selection of appropriate structural systems // ASCE Journal of Computing in Civil Engineering. 2012. Vol. 28, No. 2, pp. 297–314.
[53] Jahn J. Vector Optimization: Theory, Applications, and Extensions. Berlin, Heidelberg, New York: Springer-Verlag, 2010, p. 460.
[54] Ansari Q.H., Yao J.-C. Recent Developments in Vector Optimization. Heidelberg, Dordrecht, London, New York: Springer, 2010, p. 550.
[55] Nakayama H., Yun Y., Yoon M. Sequential Approximate Multiobjective Optimization Using Computational Intelligence. Berlin, Heidelberg: Springer-Verlag, 2009, p. 197.
[56] Shankar R. Decision Making in the Manufacturing Environment: Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. Springer-Verlag, 2007, p. 373.
[57] Cooke T., Lingard H., Blismas N. The development and evaluation of a decision support tool for health and safety in construction design // Engineering, Construction and Architectural Management. 2008. Vol. 15, No. 4, pp. 336–351.
[58] Mirdehghan S.M., Rostamzadeh H. Finding the efficiency status and efficient projection in multiobjective linear fractional programming: a linear programming technique // Journal of Optimization. Vol. 2016, Article ID 9175371, 8 pages. http://dx.doi.org/10.1155/2016/9175371
[59] Peng G., Fang Y., Chen S., Peng W., Yang D. A hybrid multiobjective discrete particle swarm optimization algorithm for cooperative air combat DWTA // Journal of Optimization. Vol. 2017, Article ID 8063767, 12 pages. https://doi.org/10.1155/2017/8063767
[60] Mashunin Yu.K. Mathematical apparatus of optimal decision-making based on vector optimization // Appl. Syst. Innov. 2019. Vol. 2, 32. https://doi.org/10.3390/asi2040032