Eiselt • von Frajer Operations Research Handbook
Horst A. Eiselt • Helmut von Frajer
Operations Research Handbook
Standard Algorithms and Methods with Examples

Walter de Gruyter • Berlin • New York • 1977
CIP-Kurztitelaufnahme der Deutschen Bibliothek

Eiselt, Horst A.: Operations research handbook: standard algorithms and methods with examples / Horst A. Eiselt; Helmut von Frajer. 1. Aufl. Berlin, New York: de Gruyter, 1977. ISBN 3-11-007055-3. NE: Frajer, Helmut von:
Library of Congress Cataloging in Publication Data

Eiselt, Horst A., 1950-
Operations research handbook. Bibliography: p. 1. Operations research -- Handbooks, manuals, etc. I. Frajer, Helmut von, 1948-, joint author. II. Title. T57.6.E37 001.4'24 77-5572 ISBN 3-11-007055-3
© Copyright 1977 by Walter de Gruyter & Co., Berlin 30. All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced in any form - by photoprint, microfilm, or any other means - nor transmitted nor translated into a machine language without written permission from the publisher. Printing: Karl Gerike, Berlin. Binding: Lüderitz & Bauer, Buchgewerbe GmbH, Berlin. Cover design: Armin Wernitz, Berlin. Printed in Germany.
Preface

In writing this handbook the authors had in mind not only students of economics, the social sciences, and mathematics, but also professionals in all fields engaged in optimization problems and the control of projects. The most important and most often utilized algorithms for the solution of these problems have been collected and described in a uniform and, hopefully, understandable manner. With this goal set, each section was divided as follows:

a) Hypotheses: Here the problem is formulated and the prerequisites are explained.

b) Principle: The general concept is briefly presented.

c) Description: In this section each step of the algorithm or method is explained in a standard format.

d) Example: For each paragraph an example is completely solved. An effort has been made in selecting the examples to find those which illustrate even the special properties of certain algorithms and methods.
The reader should already be familiar with the basic concepts of differentiation, integration, and statistics. A summary of the necessary topics in linear algebra is provided at the beginning of the book. Because this book was conceived as a desk reference, the corresponding theory and necessary proofs, which can be found in the myriad of textbooks on Operations Research, have been omitted. For details concerning the material in each chapter, beyond the outright applications, it is essential that the reader become familiar with the technical literature.
To this purpose a selection of the standard literature and advanced research studies is included in the bibliography. The solid arrows in the sketch below indicate the relationships of the chapters of the book to each other, as well as the minimum mathematics suggested for full understanding of each chapter. The dotted arrows show other relationships that exist between the material in the chapters, although they will not be expounded upon further. It was never the intention of the authors to convey through this sketch exactly how the field of Operations Research is structured, either in theory or in practice. Nevertheless it does illustrate certain general relationships between the various areas of Operations Research. The authors are very grateful to Prof. Dr. H. Noltemeier, University of Goettingen, for the many invaluable suggestions that he has offered from time to time during the formulation of this book. The authors also thank Cpt. Lee Dewald, who devoted much of his last semester in Goettingen to assisting in the translation of the original manuscript, and Mrs. Schillings, who arduously sorted through the translation and typed the final copy. Last but not least the authors acknowledge Walter de Gruyter & Co. as partners in publishing.
Goettingen, in spring 1977
Horst A. Eiselt Helmut von Frajer
Contents

Definitions and Symbols  15

0. Summary of Matrix Algebra and Allied Topics
0.1 Definitions  17
0.2 Elementary Operations  18

1. Linear Programming
1.1 General Methods
1.1.1 The Primal Simplex-Algorithm  22
1.1.2 The Two-Phase Method  25
1.1.3 The Primal Simplex-Algorithm without Explicit Identity Matrix  30
1.1.4 The Dual Simplex-Algorithm  34
1.1.5 Sensitivity Analysis and Parametric Programming (S.A. and P.P.)  38
1.1.5.1 S.A. and P.P. with Expected Alterations  39
1.1.5.2 S.A. and P.P. with Unexpected Alterations
1.1.5.2.1 Subsequent Alterations of the Restriction Vector  43
1.1.5.2.2 Subsequent Alterations of Coefficients of the Objective Function  45
1.2 Shortened Methods
1.2.1 The Transportation Problem  48
1.2.1.1 The Northwest-Corner Rule  49
1.2.1.2 The Row Minimum Method  51
1.2.1.3 The Column Minimum Method  54
1.2.1.4 The Matrix Minimum Method  57
1.2.1.5 The Double Preference Method  60
1.2.1.6 VOGEL's Approximation Method (VAM)  66
1.2.1.7 The Frequency Method  71
1.2.1.8 The Stepping-Stone Method  72
1.2.2 The Hungarian Method (Kuhn)  75
1.2.3 The Decomposition Principle (Dantzig; Wolfe)  81
1.2.4 FLOOD's Technique  89
1.3 Theorems and Rules
1.3.1 The Dual Problem  91
1.3.2 Theorems of Duality  93
1.3.3 The Lexicographic Selection Rule  94

2. Integer Programming
2.1 Cutting Plane Methods
2.1.1 The GOMORY-I All-Integer Method  96
2.1.2 The GOMORY-II All-Integer Method  100
2.1.3 The GOMORY-III Mixed-Integer Method  103
2.1.4 The GOMORY-III Mixed-Integer Method with Intensified Cuts  106
2.1.5 The Primal Cutting Plane Method (Young; Glover; Ben-Israel; Charnes)  108
2.2 Branch and Bound Methods
2.2.1 The Method of LAND and DOIG  111
2.2.2 The Method of DAKIN  118
2.2.3 The Method of DRIEBEEK  122
2.2.4 The Additive Algorithm (Balas)  129
2.3 Primal-Dual Methods
2.3.1 A Partitioning Procedure for Mixed Integer Problems (Benders)  134

3. Theory of Graphs
3.0.1 Definitions  143
3.0.2 The Determination of Rank in Graphs  146
3.0.3 The Number of Paths in a Graph  148
3.0.4 The Determination of the Strongly Connected Components of a Graph  149
3.1 Shortest Paths in Graphs
3.1.1 The Algorithm of DIJKSTRA  151
3.1.2 The Algorithm of DANTZIG  154
3.1.3 The FORD Algorithm I (shortest path(s))  159
3.1.4 The FORD Algorithm II (longest path(s))  160
3.1.5 The Triple Algorithm  162
3.1.6 The HASSE Algorithm  166
3.1.7 The Cascade Algorithm  168
3.1.8 The Algorithm of LITTLE  169
3.1.9 The Method of EASTMAN  174
3.2 Flows in Networks
3.2.1 The Algorithm of FORD and FULKERSON  178
3.2.2 The Algorithm of BUSACKER and GOWEN  183
3.2.3 The Method of KLEIN  187
3.2.4 The Out-of-Kilter Algorithm (Ford; Fulkerson)  191
3.3 Shortest Spanning Subtrees of a Graph
3.3.1 The Method of KRUSKAL  200
3.3.2 The Method of SOLLIN  203
3.3.3 The Method of WOOLSEY  205
3.3.4 The Method of BERGE  207
3.4 Gozinto Graphs  210
3.4.1 The Method of VAZSONYI  210
3.4.2 The Method of TISCHER  212
3.4.3 The Method of FLOYD  213
3.4.4 The Gozinto List Method  214

4. Planning Networks
4.0.1 The Critical Path Method (CPM)  217
4.0.2 The CPM Project Acceleration  220
4.0.3 The Program Evaluation and Review Technique (PERT)  224
4.0.4 The Metra Potential Method (MPM)  227
4.0.5 The Graphical Evaluation and Review Technique (GERT)  230

5. Game Theory
5.1 Non-Matrix Games
5.1.1 The Normal Form  236
5.1.2 NASH's Solution of the Bargaining Problem  241
5.1.3 The Extensive Form  242
5.2 Matrix Games
5.2.1 A Method for Determining Pure Strategy Pairs for Two-Person Zero-Sum Games  251
5.2.2 A Method for Solving Two-Person Zero-Sum Games with the Simplex-Algorithm  253
5.2.3 An Approximation Method for Two-Person Zero-Sum Games ("learning method"; Gale; Brown)  256
5.2.4 The LEMKE-HOWSON Algorithm for the Solution of Bimatrix Games  260
5.3 Decisions under Uncertainty (games against nature)  264
5.3.1 The Solution of WALD  265
5.3.2 The Solution of HURWICZ  266
5.3.3 The Solution of SAVAGE and NIEHANS  266
5.3.4 The Solution of BAYES  267
5.3.5 The Solution of LAPLACE  268
5.3.6 The Solution of HODGES and LEHMANN  268

6. Dynamic Programming
6.0.1 The n-Period Model  270
6.0.2 The Infinite-Period Model (policy iteration routine)  276

7. Queueing Models  280
7.0.1 The 1-Channel, 1-Stage Model  282
7.0.2 The 1-Channel, r-Phase Model  284
7.0.3 The k-Channel, 1-Stage Model  285

8. Nonlinear Programming
8.1 Theorems and Special Methods
8.1.1 The Theorem of KUHN and TUCKER  288
8.1.2 The Method of LAGRANGE  289
8.1.3 A Method for the Optimization of Nonlinear Separable Objective Functions under Linear Constraints  292
8.2 General Methods
8.2.1 The Method of WOLFE (short form)  296
8.2.2 The Method of FRANK and WOLFE  302
8.2.3 The Method of BEALE  307
8.2.4 An Algorithm for the Solution of Linear Complementarity Problems (Lemke)  312
8.2.5 The Gradient Projection Method (Rosen)  315

9. Generation of Random Numbers (Simulation)  321
9.0.1 The AWF-Cubes (Graf)  321
9.0.2 The Midsquare Method (J. v. Neumann)  322
9.0.3 A Mixed Congruence Method  323
9.0.4 A Multiplicative Congruence Method  324

10. Replacement Models
10.1 Replacement Models with Respect to Gradually Increasing Maintenance Costs
10.1.1 A Model Disregarding the Rate of Interest  325
10.1.2 A Model Regarding the Rate of Interest  326
10.2 Replacement Models with Respect to Sudden Failure
10.2.1 A Model Disregarding the Rate of Interest  328
10.2.2 A Model Regarding the Rate of Interest  331

11. Inventory Models  333
11.0.1 The Classical Inventory Model (Andler)  333
11.0.2 An Inventory Model with Penalties for Undersupplied Demands  334
11.0.3 An Inventory Model with Terms for Delivery  335
11.0.4 An Inventory Model with Damage to Stock  337
11.0.5 An Inventory Model with Rebates (different price intervals)  338
11.0.6 An Inventory Model with Respect to Transportation Capacity  340

12. Sequencing Models
12.0.1 JOHNSON's Algorithm for Two Machines  343
12.0.2 JOHNSON's Algorithm for Three Machines (special case)  345
12.0.3 A Heuristic Solution for a Sequencing Problem  347

13. Plant Location Models
13.1 Exact Methods
13.1.1 The Optimal Plant Location in a Transportation Network I  350
13.1.2 The Optimal Plant Location in a Transportation Network II  351
13.1.3 The Optimal Plant Location on a Straight Line  353
13.1.4 The Optimal Plant Location with Respect to Rectangular Transportation Movements  354
13.2 Heuristic Methods
13.2.1 The Center of Gravity Method  356
13.2.2 A Solution by Vector Summation  358
13.2.3 An Iterative Method  363

Appendix  367
Table 1: q^k = (1+i)^k  367
Table 2: q^(-k) = (1+i)^(-k)  367
Table 3: e^(-k)  368
Table 4: Random numbers with an equal distribution  370
Table 5: Area under the standardized normal distribution function  372

Bibliography  373
Definitions and Symbols

A := (a_ij) : [m x n]-dimensional matrix
I : identity matrix
Θ : null matrix
A^T : transpose of the matrix A
A^(-1) : inverse of the matrix A
E^n : n-dimensional real euclidean space

If not otherwise defined, a problem P is given as follows:

  P: max π = c·x
     A·x ≤ b
     x ≥ Θ

where A : [m x n]; x : [n x 1]; b : [m x 1].

x ∈ [a;b] : a ≤ x ≤ b, where b ≥ a (closed interval)
x ∈ (a;b] : a < x ≤ b, where b > a (half-closed interval)
x ∈ [a;b) : a ≤ x < b, where b > a (half-closed interval)
x ∈ (a;b) : a < x < b, where b > a (open interval)
a := a + b : valuation (the value a + b is assigned to a)
∃ : there is
∀ : for all
⇔, iff : equivalence relation
⇒ : implication
[a] : largest integer smaller than a
⌈a⌉ : smallest integer larger than a
∅ : empty set
|a| : absolute value of a
|M| = |{m_i}| : the number of elements in M
M1 ∪ M2 : union of the sets M1 and M2
M1 ∩ M2 : intersection of the sets M1 and M2
a ∨ b : a or b (alternative)
a ∧ b : a and b (conjunctive)
Σ_{j: x_j bv} a_j : sum over all elements a_j for whose indices j the variable x_j is a basic variable
∂f(x)/∂x_j : partial derivative of f(x) with respect to x_j
grad f(x) : gradient of f(x) = vector of the partial first derivatives of the function f(x)
df(x)/dx : total derivative of f(x) with respect to the vector x

A vector a := (a_1;...;a_j;...;a_n) is lexicographically positive (in symbols: a ≻ 0) ⇔ a_k > 0 for k = min {j | a_j ≠ 0}.
A vector a is lexicographically greater than a vector b (in symbols: a ≻ b) ⇔ (a − b) ≻ 0.
The vector a^(r) is the lexicographic maximum of a set of vectors (in symbols: a^(r) = lex max {a^(i)}) ⇔ a^(r) ≽ a^(i) ∀ i ≠ r.
(The definitions of "lexicographically smaller" and of the lexicographic minimum are analogous.)

k-min : the k-th smallest element of a set, for k = 1,2,...
rk(A) : rank of the matrix A
bv : basic variable
nbv : non-basic variable
inf : infimum
sup : supremum
q.u. : quantity unit (i.e. ea.; doz.; gal.; etc.)
m.u. : monetary unit (i.e. $; etc.)
d.u. : distance unit (i.e. mi.; km.; in.; ft.; etc.)
t.u. : time unit (i.e. hr.; min.; sec.; etc.)
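The lexicographic relations defined above can be stated directly in code. The following sketch is purely illustrative (the helper names `lex_positive`, `lex_greater`, and `lex_max` are not from the book):

```python
def lex_positive(a):
    """a is lexicographically positive: its first nonzero component is > 0."""
    for ak in a:
        if ak != 0:
            return ak > 0
    return False  # the null vector is not lexicographically positive


def lex_greater(a, b):
    """a is lexicographically greater than b  <=>  (a - b) is lex. positive."""
    return lex_positive([ai - bi for ai, bi in zip(a, b)])


def lex_max(vectors):
    """The lexicographic maximum of a set of vectors."""
    best = vectors[0]
    for v in vectors[1:]:
        if lex_greater(v, best):
            best = v
    return best
```

For instance, (0; 3; −5) is lexicographically positive because its first nonzero component, 3, is positive, and the lexicographic maximum of {(0;1), (0;2), (−1;9)} is (0;2).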
0. Summary of Matrix Algebra and Allied Topics

0.1 Definitions

Definition 1: A vector in which all elements belong to one row is called a row vector and written a := (a_j) := (a_1,...,a_n). A column vector is defined in the corresponding manner.

Definition 2: An n-dimensional vector e^(i) := (e_j) is called the i-th unit vector, if e_i = 1 and e_j = 0 ∀ j = 1,...,n; j ≠ i.

Definition 3: An n-dimensional vector e := (e_j) is called a summing vector, if e_j = 1 ∀ j = 1,...,n.

Definition 4: An (m x n)-dimensional matrix A is an ordered set of m·n elements. It can be represented as an m-dimensional column vector whose elements are n-dimensional row vectors, or as an n-dimensional row vector whose elements are m-dimensional column vectors:

  A := (a_ij) :=  ( a_11 ... a_1n )
                  (  .  .......  . )
                  ( a_m1 ... a_mn )

Definition 5: Let A^T := (a_ij)^T := (a_ji) be the transpose of the matrix A, i.e. row and column indices are exchanged. The transpose of a column vector is a row vector and vice versa.

Definition 6: An (m x n)-dimensional matrix A is called quadratic, if m = n.

Definition 7: A quadratic matrix A is called the identity matrix, if a_ii = 1 ∀ i and a_ij = 0 ∀ i ≠ j.

Definition 8: A quadratic matrix A is called a diagonal matrix, if a_ii = e_i ∀ i with e_i ∈ R, and a_ij = 0 ∀ i ≠ j.

Definition 9: A quadratic matrix A is called triangular, if a_ij ∈ R ∀ i ≥ j and a_ij = 0 ∀ i < j.

Definition 10: An (m x n)-dimensional matrix A is called the null matrix, if a_ij = 0 ∀ i,j.
0.2 Elementary Operations

0.2.1 Vector Addition
The vector c is the sum of two equal-dimensional vectors a and b, if c_i := a_i + b_i ∀ i.

0.2.2 Vector Subtraction
The vector c is the difference of two equal-dimensional vectors a and b, if c_i := a_i − b_i ∀ i.

0.2.3 Multiplication of a Vector with a Scalar
Let e ∈ R be a scalar and let a be an n-dimensional vector; then c := e·a is an n-dimensional vector, if c_i := e·a_i ∀ i = 1,...,n.

0.2.4 Inner Vector Product
The inner vector product e ∈ R of an n-dimensional row vector a and an n-dimensional column vector b is defined as e := Σ_{i=1}^{n} a_i·b_i.

0.2.5 Dyadic Vector Product
The dyadic vector product C of an m-dimensional column vector a and an n-dimensional row vector b is defined as the (m x n)-dimensional matrix C, so that C := (c_ij) := (a_i·b_j) ∀ i,j.

0.2.6 Matrix Addition
The sum C of two equal-dimensional matrices A and B is defined as C := (c_ij) := (a_ij + b_ij) ∀ i,j.

0.2.7 Matrix Subtraction
The difference C of two equal-dimensional matrices A and B is defined as C := (c_ij) := (a_ij − b_ij) ∀ i,j.

0.2.8 Multiplication of a Scalar with a Matrix
Let e ∈ R be a scalar and A a matrix; then C := e·A, if C := (c_ij) := (e·a_ij) ∀ i,j.

0.2.9 Multiplication of a Row Vector with a Matrix
An m-dimensional row vector c is the product of an n-dimensional row vector b and an (n x m)-dimensional matrix A, if c := (c_i) := (Σ_{j=1}^{n} b_j·a_ji) ∀ i = 1,...,m.

0.2.10 Multiplication of a Matrix with a Column Vector
An m-dimensional column vector c is the product of an (m x n)-dimensional matrix A and an n-dimensional vector b, if c := (c_i) := (Σ_{j=1}^{n} a_ij·b_j) ∀ i = 1,...,m.

0.2.11 Matrix Multiplication
An (m x k)-dimensional matrix C is the product of an (m x n)-dimensional matrix A and an (n x k)-dimensional matrix B, if C := (c_ij) := (Σ_{ν=1}^{n} a_iν·b_νj) ∀ i,j.

0.2.12 Matrix Multiplication by Covering
An (m x n)-dimensional matrix C is the "cover product" of two (m x n)-dimensional matrices A and B, if C := (c_ij) := (a_ij·b_ij) ∀ i,j.

0.2.13 Calculation of the Inverse
As the quotient C of two quadratic, regular (n x n)-dimensional matrices A and B is not defined, consider the equation C := A·B^(-1), where B^(-1) is the inverse of the matrix B. B^(-1) is calculated as follows:

Step 1: Form the (n x 2n)-dimensional matrix B~ := (B;I) and set the index τ := 1.

Step 2: Determine
  b~*_τj := b~_τj / b~_ττ ∀ j;
  b~*_ij := b~_ij − (b~_iτ · b~_τj) / b~_ττ ∀ i ≠ τ, ∀ j;
(the pivot-column τ thereby becomes a unit vector) and set τ := τ + 1.

Step 3: Is τ = n + 1 ?
If yes: Stop; the matrix now reads B~ = (I;B^(-1)), and B^(-1) is the inverse of B.
If no : Set b~ := b~* and go to step 2.
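The scheme of 0.2.13 is easy to state in code. The sketch below (not the book's own code) uses exact rational arithmetic and, like the text, pivots on the diagonal without row exchanges, so it assumes a regular matrix whose diagonal pivots never vanish:

```python
from fractions import Fraction

def inverse(B):
    """Invert a regular n x n matrix by the scheme of 0.2.13: transform the
    augmented matrix (B | I) until it reads (I | B^-1)."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)]
         + [Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for t in range(n):                       # tau = 1, ..., n (step 2)
        piv = M[t][t]
        if piv == 0:                         # the scheme assumes nonzero pivots
            raise ZeroDivisionError("zero diagonal pivot; a row exchange would be needed")
        M[t] = [v / piv for v in M[t]]
        for i in range(n):
            if i != t and M[i][t] != 0:
                f = M[i][t]
                M[i] = [a - f * p for a, p in zip(M[i], M[t])]
    return [row[n:] for row in M]            # step 3: the right half is B^-1
```

For example, the matrix ((2,1),(1,1)) has the inverse ((1,−1),(−1,2)).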
1. Linear Programming

1.1 General Methods

1.1.1 The Primal Simplex-Algorithm

Hypotheses
Given the following problem:

  P: max π = c·x
     A·x ≤ b
     x ≥ Θ

Note: Each problem of the form min π = c·x can be transformed into the maximization problem max (−π) = −c·x.

Principle
The algorithm begins with an initial feasible solution x = (0;...;0), determining in each iteration a new basic feasible solution (graphically speaking: moving from one corner of the convex set described by the constraints to another) in which the value of the objective function increases or as a minimum does not decrease. The algorithm terminates in an optimal basic solution or detects an unbounded solution.

Description
Step 1: Add slack variables y to the system, so that A·x + y = b, and set up the initial tableau:

         x    y  |  1
       ----------------
         A    I  |  b
       ----------------
        −c    Θ  |  π

For the following calculations let Ã := (A,I); c̃ := (−c,Θ); b̃ := b.

Step 2: ∃ c̃_j < 0 ?
If yes: Go to step 3.
If no : Stop, the current solution is optimal.

Step 3: (selection of the pivot-column s)
Determine c̃_s := min {c̃_j | c̃_j < 0}.
(In principle any negative element c̃_j will do.)

Step 4: ∃ ã_is > 0 ?
If yes: Go to step 5.
If no : Stop, P has an unbounded solution.

Step 5: (selection of the pivot-row r)
Determine the pivot-row r as follows:
  b̃_r / ã_rs := min_{i: ã_is > 0} { b̃_i / ã_is } ;
ã_rs is the pivot-element.

Step 6: (iteration, tableau-transformation)
Compute the new tableau:
a) pivot-row r:
   ã*_rj := ã_rj / ã_rs ∀ j = 1,...,n+m;   b̃*_r := b̃_r / ã_rs
b) pivot-column s:
   ã*_is := 1, if i = r; 0 otherwise;   c̃*_s := 0
c) all other rows and columns:
   ã*_ij := ã_ij − (ã_is·ã_rj)/ã_rs;   b̃*_i := b̃_i − (ã_is·b̃_r)/ã_rs;
   c̃*_j := c̃_j − (c̃_s·ã_rj)/ã_rs;    π* := π − (c̃_s·b̃_r)/ã_rs
Go to step 2.

Note: A variable x_k or y_k is called a basic variable (bv), if its column is a unit vector, i.e. ã_ik = 1 for exactly one row i = l and ã_ik = 0 otherwise. The actual values are x_k = b̃_l or y_k = b̃_l. All variables which are not basic variables are called non-basic variables (nbv); then x_j = 0 ∀ x_j nbv and y_j = 0 ∀ y_j nbv.

Example
Given the following problem P:

P: max π = 5·x1 + 6·x2
(1)  3·x1 − 2·x2 ≤ 9          (1)  3·x1 − 2·x2 + y1 = 9
(2) −5·x1 + 5·x2 ≤ 15         (2) −5·x1 + 5·x2 + y2 = 15
(3) 12·x1 + 3·x2 ≤ 12         (3) 12·x1 + 3·x2 + y3 = 12
     x1, x2 ≥ 0

T(1):    x1   x2   y1   y2   y3 |  1
          3   −2    1    0    0 |  9
         −5    5    0    1    0 | 15
         12    3    0    0    1 | 12
         −5   −6    0    0    0 |  0

T(2):    x1   x2   y1    y2   y3 |  1
          1    0    1    2/5    0 | 15
         −1    1    0    1/5    0 |  3
         15    0    0   −3/5    1 |  3
        −11    0    0    6/5    0 | 18

T(3):    x1   x2   y1     y2     y3  |   1
          0    0    1   11/25  −1/15 |  74/5
          0    1    0    4/25   1/15 |  16/5
          1    0    0   −1/25   1/15 |   1/5
          0    0    0   19/25  11/15 | 101/5

Recap of the calculations:
T(1): x = (0;0); π = 0
T(2): x = (0;3); π = 18
T(3): x = (1/5; 16/5); π = 101/5
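Steps 1 to 6 can be sketched as a small exact-arithmetic routine. This is an illustration, not the book's own code; it assumes b ≥ Θ so that x = (0;...;0) is an initial feasible solution:

```python
from fractions import Fraction as F

def primal_simplex(A, b, c):
    """Primal Simplex sketch for  max pi = c.x,  A.x <= b,  x >= 0,
    with b >= 0.  Returns (x, pi) as exact fractions."""
    m, n = len(A), len(A[0])
    # step 1: tableau (A | I | b) with objective row (-c | 0 | 0)
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    z = [-F(cj) for cj in c] + [F(0)] * (m + 1)
    basis = list(range(n, n + m))                  # the slack variables y
    while True:
        s = min(range(n + m), key=lambda j: z[j])  # step 3: pivot-column
        if z[s] >= 0:                              # step 2: optimality reached
            break
        rows = [i for i in range(m) if T[i][s] > 0]
        if not rows:                               # step 4: unbounded
            raise ArithmeticError("P has an unbounded solution")
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])  # step 5: pivot-row
        piv = T[r][s]                              # step 6: iteration
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        z = [a - z[s] * p for a, p in zip(z, T[r])]
        basis[r] = s
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, z[-1]
```

Applied to the example above, it reproduces the optimum x = (1/5; 16/5) with π = 101/5.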
1.1.2 The Two-Phase Method

Hypotheses
Given the following problem:

  P: max π = c·x
     A·x ≤ b, A·x = b, or A·x ≥ b (restrictions of any of these types)
     b ≥ Θ
     x ≥ Θ

Principle
The method starts with a non-feasible basic solution (i.e. x = (0;...;0) is not a feasible corner of the given convex set). At the end of the first phase a feasible solution is obtained, if one exists. Beginning with this feasible solution, the second phase determines, as in 1.1.1, an optimal basic solution.

Description

Phase I (determination of a feasible solution)

Step 1: Transform the restrictions as follows:
  a·x ≤ b  →  a·x + y = b
  a·x = b  →  a·x + z = b
  a·x ≥ b  →  a·x − y + z = b
where y are slack variables and z are artificial variables. The basic variables in the last k rows are artificial variables. Formulate the initial tableau without the objective function.

Step 2: Determine the coefficients d_j of the artificial objective function as follows:
  d_j := − Σ_{i=m−k+1}^{m} ã_ij ∀ j, nbv;
i.e. calculate the negated sum of the last k elements in each of the columns belonging to a nbv. The value δ of this objective function is
  δ := − Σ_{i=m−k+1}^{m} b̃_i .
The complete initial tableau is

         x     y    z |  1
       ---------------------
         A    ±I    I |  b
       ---------------------
         d     0    0 |  δ

(the y-columns carry +1 for ≤-restrictions and −1 for ≥-restrictions).

Step 3: Is d_j ≥ 0 ∀ j ?
If yes: Go to step 5.
If no : Go to step 4.

Step 4: Select pivot-column and -row and do one iteration as in step 6, Primal Simplex-Algorithm. Go to step 3.

Step 5: Are all artificial variables nbv ?
If yes: Go to step 7.
If no : Go to step 6.

Step 6: Are all artificial variables which are still in the basis equal to zero ?
If yes: Eliminate all rows which have "1" in the columns of these variables and then eliminate the corresponding columns. Go to step 8.
If no : Stop, P has no feasible solution.

Step 7: Eliminate all columns belonging to artificial variables.

Step 8: Replace the artificial objective function in the remaining tableau with the actual objective function, in which the nbv's are retained and the bv's are expressed as linear combinations of nbv's.

Phase II (determination of an optimal solution)

Step 9: Apply the Primal Simplex-Algorithm to the tableau obtained in step 8, beginning with step 2.

Note: Step 8 can be omitted when the actual objective function is added to the initial tableau at the beginning of the first phase and its coefficients are calculated in the usual manner, but they shall not determine the pivot-column. At the end of the first phase the artificial objective function must also be eliminated. The second phase begins with the remaining tableau. The M-Method, described in some publications, is, apart from some formal modifications, identical to the Two-Phase Method.
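Both phases can be sketched compactly in code. The following is an illustration under the stated hypotheses (b ≥ Θ), with freely chosen names; as suggested in the note above, the actual objective function is carried along during phase I, and the degenerate case of an artificial variable remaining basic at level zero (step 6) is not handled:

```python
from fractions import Fraction as F

def two_phase_simplex(A, rel, b, c):
    """Two-Phase Method sketch for  max pi = c.x,  A.x (rel[i]) b,  x >= 0,
    with b >= 0 and rel[i] in ('<=', '=', '>=').  Returns (x, pi) or None."""
    m, n = len(A), len(c)
    cols = []                                  # step 1: (row, sign, artificial?)
    for i, r in enumerate(rel):
        if r == '<=':
            cols.append((i, 1, False))         # slack y
        elif r == '>=':
            cols.append((i, -1, False))        # surplus -y
            cols.append((i, 1, True))          # artificial z
        else:                                  # '='
            cols.append((i, 1, True))          # artificial z
    N = n + len(cols)
    T = [[F(A[i][j]) for j in range(n)] + [F(0)] * len(cols) + [F(b[i])]
         for i in range(m)]
    basis, art = [None] * m, set()
    for k, (i, sign, is_art) in enumerate(cols):
        T[i][n + k] = F(sign)
        if sign == 1:
            basis[i] = n + k                   # y or z starts as basic variable
        if is_art:
            art.add(n + k)
    crow = [-F(cj) for cj in c] + [F(0)] * (len(cols) + 1)   # actual objective
    drow = [F(0)] * (N + 1)                    # artificial objective d (step 2)
    for i in range(m):
        if basis[i] in art:
            drow = [dj - t for dj, t in zip(drow, T[i])]
    allowed = [j for j in range(N) if j not in art]

    def phase(obj):                            # primal simplex iterations
        while True:
            s = min(allowed, key=lambda j: obj[j])
            if obj[s] >= 0:
                return
            r = min((i for i in range(m) if T[i][s] > 0),
                    key=lambda i: T[i][-1] / T[i][s])
            piv = T[r][s]
            T[r] = [v / piv for v in T[r]]
            for row in [T[i] for i in range(m) if i != r] + [crow, drow]:
                f = row[s]
                if f:
                    row[:] = [a - f * p for a, p in zip(row, T[r])]
            basis[r] = s

    phase(drow)                                # phase I
    if drow[-1] != 0:                          # artificial variables remain positive
        return None                            # P has no feasible solution
    phase(crow)                                # phase II
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, crow[-1]
```

On the problem of the example below (max π = −6x1 + 6x2 with 2x1 + 3x2 ≤ 6, −5x1 + 9x2 = 15, −6x1 + 3x2 ≥ 3), this sketch returns x = (0; 5/3) with π = 10.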
Example
Given the following problem P:

P: max π = −6·x1 + 6·x2
(1)  2·x1 + 3·x2 ≤ 6          (1)  2·x1 + 3·x2 + y1 = 6
(2) −5·x1 + 9·x2 = 15         (2) −5·x1 + 9·x2 + z1 = 15
(3) −6·x1 + 3·x2 ≥ 3          (3) −6·x1 + 3·x2 − w1 + z2 = 3
     x1, x2 ≥ 0                    x_i, y1, w1, z_i ≥ 0

The optimal solution is x = (0; 5/3) with π = 10; the complete tableau sequence for this problem is carried out in the example to 1.1.3.

1.1.3 The Primal Simplex-Algorithm without Explicit Identity Matrix

Hypotheses
Given the same type of problem as in 1.1.2.

Description
The restrictions are transformed and the slack and artificial variables are introduced as in steps 1 and 2 of the Two-Phase Method; here, however, the identity matrix is not carried explicitly, and in each iteration a basic variable is exchanged against a non-basic variable. The coefficients of the artificial objective function are
  d_j := − Σ_{i: z_i bv} ã_ij ∀ j .

Step 4: Formulate the initial simplex-tableau as follows: the columns belong to the non-basic variables, the rows to the basic variables y and z; below the body stand the rows of the actual and the artificial objective function:

           x    |  1
        ---------------
  y,z      A    |  b
        ---------------
          −c    |  π
           d    |  δ

Step 5: Determine the pivot-column s, so that d_s := min {d_j | d_j < 0}, or, if the d-vector does not exist, so that c_s := min {c_j | c_j < 0}.

Step 6: Determine the pivot-row r, so that b_r / a_rs := min_{i: a_is > 0} { b_i / a_is }. The pivot-element a_rs is hereby determined.

Step 7: Exchange the variable in row r for the variable in column s and transform the tableau as follows:
  a*_rs := 1/a_rs ;   a*_rj := a_rj/a_rs ∀ j ≠ s ;   a*_is := −a_is/a_rs ∀ i ≠ r ;
  a*_ij := a_ij − (a_is·a_rj)/a_rs ∀ i ≠ r, j ≠ s ;
  b*_r := b_r/a_rs ;   b*_i := b_i − (a_is·b_r)/a_rs ∀ i ≠ r ;
  c*_s := −c_s/a_rs ;  c*_j := c_j − (c_s·a_rj)/a_rs ∀ j ≠ s ;  π* := π − (c_s·b_r)/a_rs ;
  d*_s := −d_s/a_rs ;  d*_j := d_j − (d_s·a_rj)/a_rs ∀ j ≠ s ;  δ* := δ − (d_s·b_r)/a_rs .
If the d-vector does not exist, go to step 10; otherwise go to step 8.

Step 8: Is d_j ≥ 0 ∀ j ?
If yes: Eliminate row d and all columns belonging to the artificial variables z and go to step 9.
If no : Go to step 5.

Step 9: Are there still rows in the current tableau corresponding to the artificial variables z ?
If yes: Stop, P has no feasible solution.
If no : Go to step 10.

Step 10: Is c_j ≥ 0 ∀ j ?
If yes: Stop, the current solution is optimal.
If no : Go to step 5.
If yes: Stop, the current solution is optimal. If no : Go to step 5. Example Given the following problem P: P:
max (1)
tt = ( - 6-x^ + 6-x 2 ) 2-x 1 + 3-x 2
-6-Xj + 3-X 2 - w 1 + z 2 =
XpX,, >
3 0 .
x i ,zi ,y 1 ,w 1 > 0
3
v i = 1,2.
Section 1.1.3
I*1):
T(2): w
x^
i
z
2
1
X
2
3
0
6
-5
9
0
15
Z
1
-6
3
-1
3
x
2
0
0
1
h z
i
I
6 - 6
t(3).
1
Z
1
x
Z
1/8
W
2
X
- 1 3 / 8 - 1 1 / 8 11/8
9/8
W
1/4
1/12-1/12
7/4
3/4
5/4 - 5 / 4
33/4
X
x
1 2
1
1
z
2
Z
1
1
1
11/3
0
-1/3
1
13/3
-1
1/3
2
-5/9
0
1/9
5/3
8/3
0
1/2
10
R e c a p o f the
calculations:
X = ( 0 ; 0 ) ; TT = 0 (2). x = ( 0 ; 1 ) ; tt = 6 (3). X = (3/8 ;7/4); TT = 33/4 (4).
X = (3/11 ; 2 0 / l l ) ; tt =
(5). X = ( 0 ; 5 / 3 ) ; i = 10 .
1
1 x2
t(5).
W
-1
1
3
13
-3
3
6
-2
1/3 - 1 / 3
z
3/8
2
-1/8
1
1
1/8
1
8
2
2
1
1
-2
6
T(4): h
X
z
-6
W
1
1
102/11
2
Z
1
1
3/11
0
-1/11
3/11
-13/11
-1
8/11
9/11
5/33
0
2/33 20/11
-8/11
0
10/11 1 0 2 / 1 1
33
34
Linear
Programming
1.1.4 The Dual Simplex-Algorithm

Hypotheses
Given the following problem:

  P: max π = c·x
     A·x ≤ b
     x ≥ Θ
     b ∈ R^m

All systems of constraints may be written in this form through the appropriate transformations as follows:
  a·x ≥ b  →  −a·x ≤ −b
  a·x = b  →  a·x ≤ b  and  −a·x ≤ −b

Principle
The algorithm begins with a primal non-feasible basic solution x = (0;...;0). Maintaining the dual feasibility in each iteration (i.e. non-negative coefficients in the objective function row), the algorithm determines new solutions which do not increase the value of the objective function. The first primal feasible solution discovered in this manner is necessarily an optimal solution.

Description
Step 1: Add the slack variables y_i to the inequalities and set up the initial tableau. (In a maximizing problem, the coefficients of the objective function are negated.)

Step 2: Is c_j ≥ 0 ∀ j ?
If yes: Go to step 5.
If no : Go to step 3.

Step 3: Add one column and one row to the initial tableau; the additional constraint is
  Σ_{j=1}^{n} x_j + y = C ,
where C is any very large positive number and y is the additional slack variable.

Step 4: If c_s := min {c_j | c_j < 0} and the additional row is the (m+1)-th row, then ã_{m+1,s} is the pivot-element. Do one iteration as in step 6, Primal Simplex-Algorithm, without the "go to"-statement.
Result: c_j ≥ 0 ∀ j.

Step 5: Is b̃_i ≥ 0 ∀ i = 1,...,m(+1) ?
If yes: Stop, the current solution is optimal.
If no : Go to step 6.

Step 6: (selection of the pivot-row r)
Select any row with b̃_i < 0 as the pivot-row.
Note: Preferably select the row r with b̃_r := min {b̃_i}.

Step 7: (selection of the pivot-column s)
From all negative elements of the pivot-row r determine
  c_s / |a_rs| := min_{j: a_rj < 0} { c_j / |a_rj| } ;
a_rs is the pivot-element. (If a_rj ≥ 0 ∀ j, P has no feasible solution.) Do one iteration as in step 6, Primal Simplex-Algorithm, and go to step 5.

Example 1
Given the following problem P:

P: min π' = 2·x1 + 3·x2, i.e. max π = −2·x1 − 3·x2       transformed:
(1)  2·x1 + x2 ≥ 4                                       −2·x1 − x2 ≤ −4
(2)  −x1 + x2 ≥ 2                                          x1 − x2 ≤ −2
      x1, x2 ≥ 0                                           x1, x2 ≥ 0

T(1):    x1   x2   y1   y2 |  1
         −2   −1    1    0 | −4
          1   −1    0    1 | −2
          2    3    0    0 |  0

T(2):    x1    x2    y1   y2 |  1
          1   1/2  −1/2    0 |  2
          0  −3/2   1/2    1 | −4
          0     2     1    0 | −4

T(3):    x1   x2    y1    y2 |     1
          1    0  −1/3   1/3 |   2/3
          0    1  −1/3  −2/3 |   8/3
          0    0   5/3   4/3 | −28/3

Recap of the calculations:
T(1): x = (0;0); π = 0
T(2): x = (2;0); π = −4
T(3): x = (2/3; 8/3); π = −28/3

Example 2
Given the following problem P:

P: max π = 2·x1 + x2             transformed:
(1) 2·x1 − x2 ≥ 5                −2·x1 + x2 ≤ −5
(2)   x1 + 3·x2 ≤ 6                x1 + 3·x2 ≤ 6
      x1, x2 ≥ 0

T(1):    x1   x2   y1   y2 |  1
         −2    1    1    0 | −5
          1    3    0    1 |  6
         −2   −1    0    0 |  0

Since c_1, c_2 < 0, the additional constraint x1 + x2 + y = C (C very large and positive) is appended:

T(2):    x1   x2   y1   y2    y |  1
         −2    1    1    0    0 | −5
          1    3    0    1    0 |  6
          1    1    0    0    1 |  C
         −2   −1    0    0    0 |  0

With c_s = min {−2; −1} = −2 the pivot-element is ã_{3,1} = 1:

T(3):    x1   x2   y1   y2    y |     1
          0    3    1    0    2 | −5+2C
          0    2    0    1   −1 |  6−C
          1    1    0    0    1 |     C
          0    1    0    0    2 |    2C

For large C the pivot-row is the second row (b̃_2 = 6−C < 0); its only negative element stands in the y-column:

T(4):    x1   x2   y1   y2    y |   1
          0    7    1    2    0 |   7
          0   −2    0   −1    1 | C−6
          1    3    0    1    0 |   6
          0    5    0    2    0 |  12

Recap of the calculations:
T(1): x = (0;0); π = 0
T(2): x = (0;0); π = 0
T(3): x = (C;0); π = 2C
T(4): x = (6;0); π = 12
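Steps 5 to 7 can be sketched in code as well. The routine below is illustrative only; it assumes the objective row is already dual feasible, so that the additional constraint of steps 3 and 4 is not needed:

```python
from fractions import Fraction as F

def dual_simplex(A, b, c):
    """Dual Simplex sketch for  max pi = c.x,  A.x <= b,  x >= 0,
    assuming -c >= 0 (dual feasible start).  Returns (x, pi)."""
    m, n = len(A), len(A[0])
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    z = [-F(cj) for cj in c] + [F(0)] * (m + 1)
    assert all(v >= 0 for v in z[:-1]), "objective row must be dual feasible"
    basis = list(range(n, n + m))
    while True:
        r = min(range(m), key=lambda i: T[i][-1])      # step 6: pivot-row
        if T[r][-1] >= 0:                              # step 5: primal feasible
            break
        cand = [j for j in range(n + m) if T[r][j] < 0]
        if not cand:                                   # step 7: no negative element
            raise ArithmeticError("P has no feasible solution")
        s = min(cand, key=lambda j: z[j] / -T[r][j])   # step 7: ratio test
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        z = [a - z[s] * p for a, p in zip(z, T[r])]
        basis[r] = s
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, z[-1]
```

On Example 1 it reproduces x = (2/3; 8/3) with π = −28/3.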
1.1.5 Sensitivity Analysis and Parametric Programming (S A and P.P.) Hypotheses Given any of the following problems: P ^ :
max TT (X) = c(x)-x
P
x ^ :=
A-x < b X > 0
max tt
77 =
c where
b^x):
0, = b.
R
= c-x
A(x)•x < b X > 0
In general we examine with the sensitivity analysis how far certain elements of an optimal tableau can be changed under the assumption that the tableau remains optimal. With parametric programming we determine a new optimal tableau, when one or more elements have been changed so that the original optimal tableau no longer holds for the new optimum. Note: When changing more than one element, indexed parameters x^ must be utilized, thereby increasing considerably the difficulty of determining the intervals.
Section
1.1.5.1
39
1.1.5.1 S A a n d P.P. with Expected Alterations Hypotheses Given one of the problems P in which a subsequent alteration of one ^ A element by x
is to be accomplished.
Note: The mark "-" over an element will indicate the value of that element in the optimal tableau. Description Step
1: Solve the problem by one of the simplex-methods, complete with the parameter
x , but without influence in selecting
the pivot-column or -row. Result: For
x = 0
the computed tableau is optimal (if a
feasible solution does exist). Step
2: (sensitivity analysis) Determine the interval ^ e [ x ; x ]
Step
3: Is
[ X ; x ], so that for all
Cj(I) > 0
X * e [ X ;x ]
v j; 6 ^ 3 0 > 0 v i .
?
If yes: Stop, the optimal tableau computed in step 1 re£
mains optimal even after alteration of X parameters are calculated setting
X : = X
. The new .
If no : Go to step 4. Step
4: Is
X* >
X
?
If yes: Go to step 5. If no : Go to step 8. Step
5: Is
X
determined by the element c g (x)
If yes: Select column If no : Select row
6:3
ais > 0 ?
for pivot-column . Go to step 6 .
r , that determined
Go to step 7. Step
s
? X , for pivot-row.
40
Linear
Programming
If yes: Determine the pivot-element in column
s
as in
the Primal Simplex-Algorithm and do one iteration. Go to step 2. If no : Stop, for
x = x
there is no feasible solution
of P x . Step
7:
3
arj < 0
?
If yes: Determine the pivot-element in row
r
as in the
Dual Simplex-Algorithm and do one iteration. Go to step 2. If no : Stop, for
x
=
x*
there is no feasible solution
of P x . step
8:
Is
x_ determined by the element c $ (x)
If yes: Select column If no : Select row
s
?
for pivot-column. Go to step 9.
r , that determined x^ , for pivot-row.
Go to step 10. Step
9:
3
âis > 0
?
If yes: Determine the pivot-element in column
s
as in
the Primal Simplex-Algorithm and do one iteration. Go to step 2. If no : Stop, for
x
=
X*
there is no feasible solu-
tion of P^ . Step 10:
3 a .< 0 rj If yes: Determine the pivot-element in row
r
as in the
Dual Simplex-Algorithm and do one iteration. Go to step 2. If no : Stop, for of P x .
X
=
X * there is no feasible solution
Seat-ion 1.1.5.1
41
Example 1 Given the f o l l o w i n g problem p j ^ : P^1):
max
tt(x) = 3-Xj + (2 + x)-x 2
2-Xj +
x2 < 4
X j - 2-X 2 < 6 XpX2
> 0
r(l).
T(2).
X1
x2
y
l
H
1
X1
x2
h
1
2
1
1
0
4
2
1
l
0
4
1
-2
0
1
6
5
0
2
1
14
•3
-2-X
0
0
0
1+2X
0
2+X
0
8+4 X
[ X ; x ] = [-1/2;-) a) I f
x
= 10, then the tableau remains optimal and we have:
i ( x * ) £= c ( x * ) - x = 48 b) I f t
A
= - 1 , then the column belonging to x^ becomes pivot-column.
(3). X1
1
X2
1
1/2
1/2
0
2
0
-5/2
-1/2
1
4
0
-1/2-X
3/2
0
6
[ X;X
] = (—;-l/2]
. Then: x = ( 2 ; 0 ) ; i ( x * )
Example 2 Given the f o l l o w i n g problem P ^ :
max
tt = 3-x 1 + 2-x 2
2-Xj +
x2 < 4 + x
x 1 - 2-x 2 < 6 x, , x ,
> 0
:
= c(x*)-x = 6
42
Linear
Programming
T^(1):                           T^(2):
  x₁   x₂   y₁   y₂ |   1          x₁   x₂   y₁   y₂ |    1
   2    1    1    0 |  4+λ          2    1    1    0 |   4+λ
   1   -2    0    1 |   6           5    0    2    1 | 14+2λ
  -3   -2    0    0 |   0           1    0    2    0 |  8+2λ

[λ₋;λ₊] = [-4;∞)
a) If λ* = 5 , then the tableau remains optimal and we have:
   x(λ*) = (0;9);  π(λ*) = c·x(λ*) = 18 .
b) If λ* = -5 , then the first row becomes pivot-row, but because ā₁ⱼ ≥ 0 ∀ j no pivot-element exists. For λ* < -4 there is no feasible solution of P_λ^(2) .
Example 3
Given the following problem P_λ^(3):

P_λ^(3):  max π = 3·x₁ + 2·x₂
          2·x₁ + λ·x₂ ≤ 4
            x₁ - 2·x₂ ≤ 6
           x₁ , x₂ ≥ 0

T^(1):                           T^(2):
  x₁   x₂   y₁   y₂ |  1           x₁      x₂       y₁   y₂ |  1
   2    λ    1    0 |  4            1      λ/2      1/2    0 |  2
   1   -2    0    1 |  6            0    -2-λ/2    -1/2    1 |  4
  -3   -2    0    0 |  0            0   -2+(3/2)λ   3/2    0 |  6

[λ₋;λ₊] = [4/3;∞)
a) If λ* = 3 , then the tableau remains optimal and we have:
   x(λ*) = (2;0);  π(λ*) = c·x(λ*) = 6 .
b) If λ* = 1/2 , then the second column becomes pivot-column.

T^(3):
    x₁     x₂   y₁    y₂ |    1
   2/λ      1   1/λ    0 |  4/λ
  1+4/λ     0   2/λ    1 | 6+8/λ
 -3+4/λ     0   2/λ    0 |  8/λ

[λ₋;λ₊] = (0;4/3] . For λ* = 1/2 the optimal solution is x(λ*) = (0;8);
π(λ*) = c·x(λ*) = 16 .
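The parameter break-points found above can be checked numerically. The following sketch (Python; the helper `best_vertex` is ours, not part of the handbook) enumerates the corner points of the feasible region of Example 1 and evaluates the parametric objective at each of them, since a linear objective attains its maximum at a vertex:

```python
# Feasible region of  2x1 + x2 <= 4,  x1 - 2x2 <= 6,  x >= 0 :
# its corner points are (0;0), (2;0) and (0;4).

def best_vertex(lam):
    vertices = [(0.0, 0.0), (2.0, 0.0), (0.0, 4.0)]
    obj = lambda v: 3 * v[0] + (2 + lam) * v[1]
    return max(vertices, key=obj), max(obj(v) for v in vertices)

# lam = 10: tableau stays optimal, x = (0;4), pi = 48
# lam = -1: the basis changes,    x = (2;0), pi = 6
print(best_vertex(10))   # -> ((0.0, 4.0), 48)
print(best_vertex(-1))   # -> ((2.0, 0.0), 6)
```

This confirms that the optimum switches vertices exactly at the break-point λ = -1/2 of the sensitivity interval.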
1.1.5.2 SA and P.P. with Unexpected Alterations
1.1.5.2.1 Subsequent Alterations of the Restriction Vector

Hypotheses
Given a problem P , for which the optimal tableau is known, determine what subsequent alterations of the element b_i can be accomplished without destroying the optimality of the tableau. If it is not possible for a given parameter λ* , then a new restriction vector must be calculated. The slack variable y_i , belonging to the i-th restriction, can be found in the (n+i)-th column; an alteration λ* has to be accomplished. (The mark "‾" over an element will indicate the value of that element in the optimal tableau.)

Description
Step  1: (sensitivity analysis) Determine the interval [λ₋;λ₊] , so that
         b̄_k + λ·ā_{k,n+i} ≥ 0  ∀ k  holds for all λ ∈ [λ₋;λ₊] , i.e.
         λ₋ := max { -b̄_k / ā_{k,n+i} | ā_{k,n+i} > 0 }  and
         λ₊ := min { -b̄_k / ā_{k,n+i} | ā_{k,n+i} < 0 } .
Step  2: Is  λ₋ ≤ λ* ≤ λ₊ ?
         If yes: The tableau remains optimal; the new values of the basic variables are  b̄_k + λ*·ā_{k,n+i} .
         If no : Introduce the new restriction vector and reoptimize the tableau with the Dual Simplex-Algorithm.
Example
Given three stores with the inventory of (10;4;6) units, and three places with the demand of (5;12;8) units, with the transportation cost matrix

        2   5   7
C =     3   6   1   .
        9   6   4

Since the inventory does not cover the demand, a dummy store is introduced (row 4, with inventory 5 and costs of 50). The new vectors are:
z = (10;4;6;5);  w = (5;12;8);  I = {1;2;3;4};  J = {1;2;3} .

r = min {1;2;3;4} = 1;  s = min {1;2;3} = 1;  ε = min {z₁;w₁} = min {10;5} = 5;
x₁₁ = 5;  z₁ = 10 - 5 = 5;  w₁ = 5 - 5 = 0;  z = (5;4;6;5);  w = (0;12;8);
I = {1;2;3;4};  J = {2;3};

r = 1;  s = min {2;3} = 2;  ε = min {z₁;w₂} = min {5;12} = 5;
x₁₂ = 5;  z₁ = 0;  w₂ = 12 - 5 = 7;  z = (0;4;6;5);  w = (0;7;8);
I = {2;3;4};  J = {2;3};

r = min {2;3;4} = 2;  s = 2;  ε = min {z₂;w₂} = min {4;7} = 4;
x₂₂ = 4;  z₂ = 0;  w₂ = 7 - 4 = 3;  z = (0;0;6;5);  w = (0;3;8);
I = {3;4};  J = {2;3};

r = min {3;4} = 3;  s = 2;  ε = min {z₃;w₂} = min {6;3} = 3;
x₃₂ = 3;  z₃ = 6 - 3 = 3;  w₂ = 0;  z = (0;0;3;5);  w = (0;0;8);
I = {3;4};  J = {3};

r = 3;  s = min {3} = 3;  ε = min {z₃;w₃} = min {3;8} = 3;
x₃₃ = 3;  z₃ = 0;  w₃ = 8 - 3 = 5;  z = (0;0;0;5);  w = (0;0;5);
I = {4};  J = {3};

r = min {4} = 4;  s = 3;  ε = min {z₄;w₃} = min {5;5} = 5;
x₄₃ = 5;  z₄ = 0;  w₃ = 0;  z = (0;0;0;0);  w = (0;0;0);

              5  5  0
              0  4  0
Stop,  X  =   0  3  3    is a feasible transportation plan.
              0  0  5
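The allocation loop of the rule can be sketched in a few lines. The following Python function (our sketch, not the handbook's notation) always allocates in the upper-left cell that still has both supply and demand left:

```python
# Northwest-Corner Rule sketch: walk from the upper-left cell,
# moving right when a column is exhausted and down when a row is.

def northwest_corner(z, w):
    z, w = list(z), list(w)                 # remaining supply / demand
    X = [[0] * len(w) for _ in z]
    i = j = 0
    while i < len(z) and j < len(w):
        eps = min(z[i], w[j])               # eps := min{z_r; w_s}
        X[i][j] = eps
        z[i] -= eps
        w[j] -= eps
        if z[i] == 0:
            i += 1                          # row exhausted
        else:
            j += 1                          # column exhausted
    return X

print(northwest_corner([10, 4, 6, 5], [5, 12, 8]))
# -> [[5, 5, 0], [0, 4, 0], [0, 3, 3], [0, 0, 5]]
```

Running it on the example data reproduces the plan computed above.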
1.2.1.2 The Row Minimum Method

Hypotheses
The index-sets I and T are defined as: I := {i} ; T := ∅ ; for the initial transportation matrix X : X_[m×n] ,  x_ij = 0  ∀ i,j  holds.
Description
Step  1: Determine  r := min {i | i ∈ I} .
Step  2: Determine  c_rs := min {c_rj | (r,j) ∉ T}  and  ε := min {z_r;w_s} .
Step  3: Set  x_rs := ε ;  z_r := z_r - ε ;  w_s := w_s - ε .
Step  4: Define
         I := I - {r} , if z_r = 0 ;  I , otherwise
         T := T ∪ {(i,s)} ∀ i , if w_s = 0 ;  T , otherwise .
Step  5: Is  I = ∅ ?
         If yes: Stop, the current matrix X yields a feasible transportation plan.
         If no : Go to step 1.

Example
Given the following problem:
        2   5   7
C =     3   6   1    ;   z = (10;4;6;5);   w = (5;12;8);
        9   6   4
       50  50  50

(See also the example "Northwest-Corner Rule".)
I = {1;2;3;4};  T = ∅;

r = min {1;2;3;4} = 1;  c₁ₛ = min {c₁ⱼ | (1,j) ∉ T} = min {2;5;7} = c₁₁ → s = 1;
ε = min {z₁;w₁} = min {10;5} = 5;  x₁₁ = 5;
z₁ = 10 - 5 = 5;  w₁ = 5 - 5 = 0;  z = (5;4;6;5);  w = (0;12;8);
I = {1;2;3;4};  T = ∅ ∪ {(1,1);(2,1);(3,1);(4,1)} = {(1,1);(2,1);(3,1);(4,1)};

r = 1;  c₁ₛ = min {c₁₂;c₁₃} = min {5;7} = c₁₂ → s = 2;
ε = min {z₁;w₂} = min {5;12} = 5;  x₁₂ = 5;
z₁ = 0;  w₂ = 12 - 5 = 7;  z = (0;4;6;5);  w = (0;7;8);  I = {2;3;4};

r = min {2;3;4} = 2;  c₂ₛ = min {c₂₂;c₂₃} = min {6;1} = c₂₃ → s = 3;
ε = min {z₂;w₃} = min {4;8} = 4;  x₂₃ = 4;
z₂ = 0;  w₃ = 8 - 4 = 4;  z = (0;0;6;5);  w = (0;7;4);  I = {3;4};

r = min {3;4} = 3;  c₃ₛ = min {c₃₂;c₃₃} = min {6;4} = c₃₃ → s = 3;
ε = min {z₃;w₃} = min {6;4} = 4;  x₃₃ = 4;
z₃ = 6 - 4 = 2;  w₃ = 0;  z = (0;0;2;5);  w = (0;7;0);
T = {(1,1);(1,3);(2,1);(2,3);(3,1);(3,3);(4,1);(4,3)};

r = 3;  c₃ₛ = min {c₃₂} = c₃₂ → s = 2;
ε = min {z₃;w₂} = min {2;7} = 2;  x₃₂ = 2;
z₃ = 0;  w₂ = 5;  z = (0;0;0;5);  w = (0;5;0);  I = {4};

r = min {4} = 4;  c₄ₛ = min {c₄ⱼ | (4,j) ∉ T} = min {c₄₂} = c₄₂ → s = 2;
ε = min {z₄;w₂} = min {5;5} = 5;  x₄₂ = 5;
z₄ = 0;  w₂ = 0;  z = (0;0;0;0);  w = (0;0;0);  I = ∅;

              5  5  0
              0  0  4
Stop,  X  =   0  2  4    is a feasible transportation plan.
              0  5  0
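The row-by-row greedy choice above can be sketched compactly in Python (the function name and the `blocked` set standing in for the index-set T are our choices, not the handbook's):

```python
# Row Minimum Method sketch: process rows in index order; within the
# current row always pick the cheapest column not yet blocked.

def row_minimum(C, z, w):
    z, w = list(z), list(w)
    m, n = len(z), len(w)
    X = [[0] * n for _ in range(m)]
    blocked = set()                          # the index-set T
    for i in range(m):
        while z[i] > 0:
            j = min((c for c in range(n) if (i, c) not in blocked),
                    key=lambda c: C[i][c])   # c_rs := min{c_rj | (r,j) not in T}
            eps = min(z[i], w[j])
            X[i][j] = eps
            z[i] -= eps
            w[j] -= eps
            if w[j] == 0:                    # block exhausted column
                blocked |= {(k, j) for k in range(m)}
    return X

C = [[2, 5, 7], [3, 6, 1], [9, 6, 4], [50, 50, 50]]
print(row_minimum(C, [10, 4, 6, 5], [5, 12, 8]))
# -> [[5, 5, 0], [0, 0, 4], [0, 2, 4], [0, 5, 0]]
```

Exchanging the roles of rows and columns in this loop yields the Column Minimum Method of the next section.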
1.2.1.3 The Column Minimum Method

Hypotheses
The index-sets J and T are defined as: J := {j} ; T := ∅ ; for the initial transportation matrix X : X_[m×n] ,  x_ij = 0  ∀ i,j  holds.

Description
Step  1: Determine  s := min {j | j ∈ J} .
Step  2: Determine  c_rs := min {c_is | (i,s) ∉ T}  and  ε := min {z_r;w_s} .
Step  3: Set  x_rs := ε ;  z_r := z_r - ε ;  w_s := w_s - ε .
Step  4: Define
         J := J - {s} , if w_s = 0 ;  J , otherwise
         T := T ∪ {(r,j)} ∀ j , if z_r = 0 ;  T , otherwise .
Step  5: Is  J = ∅ ?
         If yes: Stop, the current matrix X yields a feasible transportation plan.
         If no : Go to step 1.
Example
Given the following problem:

        2   5   7
C =     3   6   1    ;   z = (10;4;6;5);   w = (5;12;8);
        9   6   4
       50  50  50

(See also the example "Northwest-Corner Rule".)
J = {1;2;3};  T = ∅;

s = min {1;2;3} = 1;  c_r1 = min {c_i1 | (i,1) ∉ T} = min {2;3;9;50} = 2 → r = 1;
ε = min {z₁;w₁} = min {10;5} = 5;  x₁₁ = 5;
z₁ = 5;  w₁ = 5 - 5 = 0;  z = (5;4;6;5);  w = (0;12;8);  J = {2;3};

s = min {2;3} = 2;  c_r2 = min {c₁₂;c₂₂;c₃₂;c₄₂} = min {5;6;6;50} = 5 → r = 1;
ε = min {z₁;w₂} = min {5;12} = 5;  x₁₂ = 5;
z₁ = 0;  w₂ = 12 - 5 = 7;  z = (0;4;6;5);  w = (0;7;8);
T = ∅ ∪ {(1,1);(1,2);(1,3)} = {(1,1);(1,2);(1,3)};

s = 2;  c_r2 = min {c₂₂;c₃₂;c₄₂} = min {6;6;50} = 6 → r = 3;
ε = min {z₃;w₂} = min {6;7} = 6;  x₃₂ = 6;
z₃ = 0;  w₂ = 1;  z = (0;4;0;5);  w = (0;1;8);
T = {(1,1);(1,2);(1,3);(3,1);(3,2);(3,3)};

s = 2;  c_r2 = min {c₂₂;c₄₂} = min {6;50} = 6 → r = 2;
ε = min {z₂;w₂} = min {4;1} = 1;  x₂₂ = 1;
z₂ = 3;  w₂ = 0;  z = (0;3;0;5);  w = (0;0;8);  J = {3};

s = min {3} = 3;  c_r3 = min {c₂₃;c₄₃} = min {1;50} = 1 → r = 2;
ε = min {z₂;w₃} = min {3;8} = 3;  x₂₃ = 3;
z₂ = 0;  w₃ = 5;  z = (0;0;0;5);  w = (0;0;5);
T = {(1,1);(1,2);(1,3);(2,1);(2,2);(2,3);(3,1);(3,2);(3,3)};

s = 3;  c_r3 = min {c₄₃} = c₄₃ = 50 → r = 4;
ε = min {z₄;w₃} = min {5;5} = 5;  x₄₃ = 5;
z₄ = 0;  w₃ = 0;  z = (0;0;0;0);  w = (0;0;0);  J = ∅;

              5  5  0
              0  1  3
Stop,  X  =   0  6  0    is a feasible transportation plan.
              0  0  5
1.2.1.4 The Matrix Minimum Method

Hypotheses
The index-sets I , J and T are defined as: I := {i} ; J := {j} ; T := ∅ ; for the initial transportation matrix X : X_[m×n] ,  x_ij = 0  ∀ i,j  holds.

Description
Step  1: Determine  c_rs := min {c_ij | (i,j) ∉ T} .
Step  2: Set  x_rs := ε := min {z_r;w_s} ;  z_r := z_r - ε ;  w_s := w_s - ε .
Step  3: Define
         I := I - {r} , if z_r = 0 ;  I , otherwise
         J := J - {s} , if w_s = 0 ;  J , otherwise
         T := T ∪ {(r,j)} ∀ j , if z_r = 0 ;  T ∪ {(i,s)} ∀ i , if w_s = 0 .
Step  4: Is  (I = ∅) ∨ (J = ∅) ?
If yes: Stop, the current matrix X yields a feasible transportation plan.
If no : Go to step 1.

Example
Given the following problem:

        2   5   7
C =     3   6   1    ;   z = (10;4;6;5);   w = (5;12;8);
        9   6   4
       50  50  50

(See also the example "Northwest-Corner Rule".)
I = {1;2;3;4};  J = {1;2;3};  T = ∅;

c_rs = min {2;5;7;3;6;1;9;6;4;50;50;50} = 1 → r = 2; s = 3;
ε = min {z₂;w₃} = min {4;8} = 4;  x₂₃ = 4;
z₂ = 0;  w₃ = 4;  z = (10;0;6;5);  w = (5;12;4);  I = {1;3;4};
T = ∅ ∪ {(2,1);(2,2);(2,3)} = {(2,1);(2,2);(2,3)};

c_rs = min {c₁₁;c₁₂;c₁₃;c₃₁;c₃₂;c₃₃;c₄₁;c₄₂;c₄₃} = min {2;5;7;9;6;4;50;50;50} = 2 → r = 1; s = 1;
ε = min {z₁;w₁} = min {10;5} = 5;  x₁₁ = 5;
z₁ = 5;  w₁ = 0;  z = (5;0;6;5);  w = (0;12;4);  J = {2;3};
T = {(1,1);(2,1);(2,2);(2,3);(3,1);(4,1)};

c_rs = min {c₁₂;c₁₃;c₃₂;c₃₃;c₄₂;c₄₃} = min {5;7;6;4;50;50} = 4 → r = 3; s = 3;
ε = min {z₃;w₃} = min {6;4} = 4;  x₃₃ = 4;
z₃ = 2;  w₃ = 0;  z = (5;0;2;5);  w = (0;12;0);  J = {2};
T = {(1,1);(1,3);(2,1);(2,2);(2,3);(3,1);(3,3);(4,1);(4,3)};

c_rs = min {c₁₂;c₃₂;c₄₂} = min {5;6;50} = 5 → r = 1; s = 2;
ε = min {z₁;w₂} = min {5;12} = 5;  x₁₂ = 5;
z₁ = 0;  w₂ = 7;  z = (0;0;2;5);  w = (0;7;0);  I = {3;4};
T = {(1,1);(1,2);(1,3);(2,1);(2,2);(2,3);(3,1);(3,3);(4,1);(4,3)};

c_rs = min {c₃₂;c₄₂} = min {6;50} = 6 → r = 3; s = 2;
ε = min {z₃;w₂} = min {2;7} = 2;  x₃₂ = 2;
z₃ = 0;  w₂ = 5;  z = (0;0;0;5);  w = (0;5;0);  I = {4};

c_rs = min {c₄₂} = c₄₂ = 50 → r = 4; s = 2;
ε = min {z₄;w₂} = min {5;5} = 5;  x₄₂ = 5;
z₄ = 0;  w₂ = 0;  z = (0;0;0;0);  w = (0;0;0);  I = ∅;  J = ∅;

              5  5  0
              0  0  4
Stop,  X  =   0  2  4    is a feasible transportation plan.
              0  5  0
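A sketch of this globally greedy variant in Python (our sketch, not the handbook's pseudo-code): repeatedly allocate in the cheapest cell whose row and column are both still open.

```python
# Matrix Minimum Method sketch: pick the globally cheapest open cell,
# allocate as much as possible, and close exhausted rows/columns.

def matrix_minimum(C, z, w):
    z, w = list(z), list(w)
    m, n = len(z), len(w)
    X = [[0] * n for _ in range(m)]
    open_cells = {(i, j) for i in range(m) for j in range(n)}
    while open_cells:
        r, s = min(open_cells, key=lambda ij: C[ij[0]][ij[1]])
        eps = min(z[r], w[s])
        X[r][s] = eps
        z[r] -= eps
        w[s] -= eps
        if z[r] == 0:
            open_cells -= {(r, j) for j in range(n)}
        if w[s] == 0:
            open_cells -= {(i, s) for i in range(m)}
    return X

C = [[2, 5, 7], [3, 6, 1], [9, 6, 4], [50, 50, 50]]
print(matrix_minimum(C, [10, 4, 6, 5], [5, 12, 8]))
# -> [[5, 5, 0], [0, 0, 4], [0, 2, 4], [0, 5, 0]]
```

On the example data this reproduces the plan derived above (which happens to coincide with the Row Minimum plan).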
1.2.1.5 The Double Preference Method

Hypotheses
The index-set T is defined as: T := ∅ ; for the initial transportation matrix X : X_[m×n] ,  x_ij = 0  ∀ i,j  holds.

Description
Step  1: (determination of the row-minima) Determine
         c_is := min {c_ij}  ∀ i = 1,...,m  and set  Q := {c_is} .
Step  2: (determination of the column-minima) Determine
         c_rj := min {c_ij}  ∀ j = 1,...,n  and set  S := {c_rj} .
Step  3: Define
         M⁽²⁾ := {c_ij | (c_ij ∈ Q) ∧ (c_ij ∈ S)}
         M⁽¹⁾ := {c_ij | [(c_ij ∈ Q) ∧ (c_ij ∉ S)] ∨ [(c_ij ∉ Q) ∧ (c_ij ∈ S)]}
         M⁽⁰⁾ := {c_ij | (c_ij ∉ Q) ∧ (c_ij ∉ S)}
         and set the running indices  k := 1 ;  l := 2 .
Step  4: (assignment of reference numbers) Is  M⁽ˡ⁾ = ∅ ?
         If yes: Go to step 6.
         If no : Go to step 5.
Step  5: Determine  c_rs := min {c_ij | c_ij ∈ M⁽ˡ⁾} , define  c_rs⁽ᵏ⁾ := c_rs  and set
         M⁽ˡ⁾ := M⁽ˡ⁾ - {c_rs} ;  k := k + 1 .  Go to step 4.
Step  6: Is  l = 0 ?
         If yes: Set the running index  p := 1  and go to step 7.
         If no : Set  l := l - 1  and go to step 4.
Step  7: Determine the element  c_rs⁽ᵖ⁾ .
Step  8: Is  (r,s) ∈ T ?
         If yes: Set  p := p + 1  and go to step 7.
         If no : Go to step 9.
Step  9: Determine  ε := min {z_r;w_s}  and define
         x_rs := ε ;  z_r := z_r - ε ;  w_s := w_s - ε ;
         T := T ∪ {(r,j)} ∀ j , if z_r = 0 ;  T ∪ {(i,s)} ∀ i , if w_s = 0 ;
         and set  p := p + 1 .
Step 10: Is  (z = 0) ∧ (w = 0) ?
         If yes: Stop, the current matrix X yields a feasible transportation plan.
         If no : Go to step 7.
Example
Given the following problem:

        2   5   7
C =     3   6   1    ;   z = (10;4;6;5);   w = (5;12;8);
        9   6   4
       50  50  50

(See also the example "Northwest-Corner Rule".)

Row-minima:    c₁ₛ = min {2;5;7} = 2 = c₁₁ ;  c₂ₛ = min {3;6;1} = 1 = c₂₃ ;
               c₃ₛ = min {9;6;4} = 4 = c₃₃ ;  c₄ₛ = min {50;50;50} = 50 = c₄₁ = c₄₂ = c₄₃ ;
               Q = {c₁₁;c₂₃;c₃₃;c₄₁;c₄₂;c₄₃} .
Column-minima: c_r1 = min {2;3;9;50} = 2 = c₁₁ ;  c_r2 = min {5;6;6;50} = 5 = c₁₂ ;
               c_r3 = min {7;1;4;50} = 1 = c₂₃ ;  S = {c₁₁;c₁₂;c₂₃} .

M⁽²⁾ = {c₁₁;c₂₃} ;  M⁽¹⁾ = {c₁₂;c₃₃;c₄₁;c₄₂;c₄₃} ;  M⁽⁰⁾ = {c₁₃;c₂₁;c₂₂;c₃₁;c₃₂} .

Assignment of reference numbers:
l = 2:  c_rs = min {2;1} = 1 → c_rs⁽¹⁾ = c₂₃ ;  c_rs⁽²⁾ = c₁₁ ;  k = 3;
l = 1:  c_rs = min {5;4;50;50;50} = 4 → c_rs⁽³⁾ = c₃₃ ;  c_rs⁽⁴⁾ = c₁₂ ;
        c_rs⁽⁵⁾ = c₄₁ ;  c_rs⁽⁶⁾ = c₄₂ ;  c_rs⁽⁷⁾ = c₄₃ ;  k = 8;
l = 0:  c_rs = min {7;3;6;9;6} = 3 → c_rs⁽⁸⁾ = c₂₁ ;  c_rs⁽⁹⁾ = c₃₂ ;
        c_rs⁽¹⁰⁾ = c₂₂ ;  c_rs⁽¹¹⁾ = c₁₃ ;  c_rs⁽¹²⁾ = c₃₁ ;  p := 1 .

Allocation:
p = 1:  (2,3):  ε = min {4;8} = 4;  x₂₃ = 4;  z₂ = 0 → T = {(2,1);(2,2);(2,3)};  w₃ = 4;
p = 2:  (1,1):  ε = min {10;5} = 5;  x₁₁ = 5;  w₁ = 0 → T = T ∪ {(1,1);(2,1);(3,1);(4,1)};  z₁ = 5;
p = 3:  (3,3):  ε = min {6;4} = 4;  x₃₃ = 4;  w₃ = 0 → T = T ∪ {(1,3);(2,3);(3,3);(4,3)};  z₃ = 2;
p = 4:  (1,2):  ε = min {5;12} = 5;  x₁₂ = 5;  z₁ = 0 → T = T ∪ {(1,1);(1,2);(1,3)};  w₂ = 7;
p = 5:  (4,1) ∈ T ;
p = 6:  (4,2):  ε = min {5;7} = 5;  x₄₂ = 5;  z₄ = 0;  w₂ = 2;
p = 7:  (4,3) ∈ T ;  p = 8:  (2,1) ∈ T ;
p = 9:  (3,2):  ε = min {2;2} = 2;  x₃₂ = 2;  z₃ = 0;  w₂ = 0;  (z = 0) ∧ (w = 0) →

              5  5  0
              0  0  4
Stop,  X  =   0  2  4    is a feasible transportation plan.
              0  5  0
1.2.1.6 VOGEL's Approximation Method

Hypotheses
The index-set D is defined as: D := {c_ij} ; for the initial transportation matrix X : X_[m×n] ,  x_ij = 0  ∀ i,j  holds.

Description
Step  1: Determine  c_is := min {c_ij | c_ij ∈ D}  and  c_ik := min {c_ij | c_ij ∈ D; j ≠ s}  ∀ i = 1,...,m .
         (If c_is (c_ik) does not exist, set c_is = 0 (c_ik = 0).)
Step  2: Calculate  δ'_i := |c_ik - c_is|  ∀ i = 1,...,m .
Step  3: Determine  c_sj := min {c_ij | c_ij ∈ D}  and  c_kj := min {c_ij | c_ij ∈ D; i ≠ s}  ∀ j = 1,...,n .
         (If c_sj (c_kj) does not exist, set c_sj = 0 (c_kj = 0).)
Step  4: Calculate  δ''_j := |c_kj - c_sj|  ∀ j = 1,...,n .
Step  5: Determine  δ := max {δ'_i ; δ''_j} .
Step  6: Is  δ ∈ {δ'_i} ?
         If yes: Go to step 7.
         If no : Go to step 8.
Step  7: Let  δ = δ'_r ; determine  c_rs := min {c_rj | c_rj ∈ D} .  Go to step 9.
Step  8: Let  δ = δ''_s ; determine  c_rs := min {c_is | c_is ∈ D} .  Go to step 9.
Step  9: Determine  ε := min {z_r;w_s}  and define
         x_rs := ε ;  z_r := z_r - ε ;  w_s := w_s - ε ;
         D := D - {c_rj} ∀ j , if z_r = 0 ;  D - {c_is} ∀ i , if w_s = 0 .
Step 10: Is  D = ∅ ?
         If yes: Stop, the current matrix X yields a feasible transportation plan.
         If no : Go to step 1.

Example
Given the following problem:
        2   5   7
C =     3   6   1    ;   z = (10;4;6;5);   w = (5;12;8);
        9   6   4
       50  50  50

(See also the example "Northwest-Corner Rule".)
D = {c₁₁;c₁₂;c₁₃;c₂₁;c₂₂;c₂₃;c₃₁;c₃₂;c₃₃;c₄₁;c₄₂;c₄₃};

δ'₁ = |5-2| = 3;  δ'₂ = |3-1| = 2;  δ'₃ = |6-4| = 2;  δ'₄ = |50-50| = 0;
δ''₁ = |3-2| = 1;  δ''₂ = |6-5| = 1;  δ''₃ = |4-1| = 3;
δ = max {(3;2;2;0);(1;1;3)} = 3 = δ'₁ → r = 1;  c_rs = min {2;5;7} = c₁₁ → s = 1;
ε = min {z₁;w₁} = min {10;5} = 5;  x₁₁ = 5;  z₁ = 5;  w₁ = 0;
z = (5;4;6;5);  w = (0;12;8);  D = D - {c₁₁;c₂₁;c₃₁;c₄₁};

δ = max {(2;5;2;0);(0;1;3)} = 5 = δ'₂ → r = 2;  c_rs = min {6;1} = c₂₃ → s = 3;
ε = min {z₂;w₃} = min {4;8} = 4;  x₂₃ = 4;  z₂ = 0;  w₃ = 4;
z = (5;0;6;5);  w = (0;12;4);  D = D - {c₂₂;c₂₃};

δ = max {(2;0;2;0);(0;1;3)} = 3 = δ''₃ → s = 3;  c_rs = min {7;4;50} = c₃₃ → r = 3;
ε = min {z₃;w₃} = min {6;4} = 4;  x₃₃ = 4;  z₃ = 2;  w₃ = 0;
z = (5;0;2;5);  w = (0;12;0);  D = {c₁₂;c₃₂;c₄₂};

δ = max {(5;0;6;50);(0;1;0)} = 50 = δ'₄ → r = 4;  c_rs = min {c₄₂} = c₄₂ → s = 2;
ε = min {z₄;w₂} = min {5;12} = 5;  x₄₂ = 5;  z₄ = 0;  w₂ = 7;
z = (5;0;2;0);  w = (0;7;0);  D = {c₁₂;c₃₂};

δ = max {(5;0;6;0);(0;1;0)} = 6 = δ'₃ → r = 3;  c_rs = min {c₃₂} = c₃₂ → s = 2;
ε = min {z₃;w₂} = min {2;7} = 2;  x₃₂ = 2;  z₃ = 0;  w₂ = 5;
z = (5;0;0;0);  w = (0;5;0);  D = {c₁₂};

c₁₂ is the remaining free element, so:
ε = min {z₁;w₂} = min {5;5} = 5;  x₁₂ = 5;  z₁ = 0;  w₂ = 0;  D = ∅;

              5  5  0
              0  0  4
Stop,  X  =   0  2  4    is a feasible transportation plan.
              0  5  0
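The penalty idea can be sketched as follows in Python (our sketch; the `penalty` helper mirrors the convention that a missing second minimum counts as 0, so a row or column with a single open cell gets that cell's cost as its difference):

```python
# VOGEL sketch: allocate greedily in the row or column whose two
# cheapest open cells differ the most.

def vogel(C, z, w):
    z, w = list(z), list(w)
    m, n = len(z), len(w)
    X = [[0] * n for _ in range(m)]
    D = {(i, j) for i in range(m) for j in range(n)}

    def penalty(costs):
        if not costs:
            return -1                              # line already closed
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]  # |c_ik - c_is|

    while D:
        rows = [(penalty([C[i][j] for j in range(n) if (i, j) in D]), 'r', i)
                for i in range(m)]
        cols = [(penalty([C[i][j] for i in range(m) if (i, j) in D]), 'c', j)
                for j in range(n)]
        _, kind, k = max(rows + cols)
        if kind == 'r':
            r, s = k, min((j for j in range(n) if (k, j) in D),
                          key=lambda j: C[k][j])
        else:
            s, r = k, min((i for i in range(m) if (i, k) in D),
                          key=lambda i: C[i][k])
        eps = min(z[r], w[s])
        X[r][s] = eps
        z[r] -= eps
        w[s] -= eps
        if z[r] == 0:
            D -= {(r, j) for j in range(n)}
        if w[s] == 0:
            D -= {(i, s) for i in range(m)}
    return X

C = [[2, 5, 7], [3, 6, 1], [9, 6, 4], [50, 50, 50]]
print(vogel(C, [10, 4, 6, 5], [5, 12, 8]))
# -> [[5, 5, 0], [0, 0, 4], [0, 2, 4], [0, 5, 0]]
```

On the example data the sketch reproduces the plan derived above.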
1.2.1.7 The Frequency Method

Hypotheses
The Frequency Method generates from the cost matrix C a more realistic pseudo-cost matrix Ĉ . In order to formulate a feasible transportation plan, any of the above described methods may be utilized on Ĉ .

Description
Step  1: Calculate
         r_i := (1/n) · Σ_{j=1}^{n} c_ij    ∀ i = 1,...,m ;
         s_j := (1/m) · Σ_{i=1}^{m} c_ij    ∀ j = 1,...,n ;
         ĉ_ij := (r_i + s_j) - c_ij         ∀ i,j .
Step  2: Is  ĉ_ij > 0  ∀ i,j ?
         If yes: Stop, the matrix Ĉ is on hand. Work any other of the above methods using Ĉ .
         If no : Go to step 3.
Step  3: Let ĉ_rs be the minimal element of the matrix Ĉ . Construct the constant matrix K : K_[m×n] , so that K := (|ĉ_rs|) , and compute  Ĉ := Ĉ + K .  Stop, Ĉ is on hand.
Example
Given the following matrix C :

        2   5   7
C =     3   6   1    .
        9   6   4
       50  50  50

We find:  r₁ = 14/3;  r₂ = 10/3;  r₃ = 19/3;  r₄ = 50;
          s₁ = 16;  s₂ = 67/4;  s₃ = 31/2;

        56/3   197/12    79/6
Ĉ =     49/3   169/12   107/6
        40/3   205/12   107/6
        16      67/4     31/2

ĉ_ij > 0  ∀ i,j  holds  →  Stop
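The pseudo-cost transformation of step 1 (and the shift of step 3) can be sketched with exact rational arithmetic; the function name is ours:

```python
# Frequency Method sketch:  c^_ij := (r_i + s_j) - c_ij  with
# row means r_i and column means s_j; shift by |min| if needed.

from fractions import Fraction

def frequency_pseudo_costs(C):
    m, n = len(C), len(C[0])
    r = [Fraction(sum(row), n) for row in C]
    s = [Fraction(sum(C[i][j] for i in range(m)), m) for j in range(n)]
    Chat = [[r[i] + s[j] - C[i][j] for j in range(n)] for i in range(m)]
    smallest = min(min(row) for row in Chat)
    if smallest <= 0:                     # step 3: add the constant matrix K
        Chat = [[c + abs(smallest) for c in row] for row in Chat]
    return Chat

C = [[2, 5, 7], [3, 6, 1], [9, 6, 4], [50, 50, 50]]
print(frequency_pseudo_costs(C)[0])
# -> [Fraction(56, 3), Fraction(197, 12), Fraction(79, 6)]
```

This reproduces the first row 56/3, 197/12, 79/6 of the matrix Ĉ above.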
1.2.1.8 The Stepping-Stone Method

Hypotheses
Given a feasible solution X to the transportation problem, find an optimal (minimal cost) transportation plan.

Description
Step  1: (test for optimality of the current solution) Compute the matrix C* including the values u_i and v_j : the values v₁,...,v_n are written above the columns, the values u₁,...,u_m beside the rows, and the elements c*_ij fill the matrix. Consider the current transportation matrix X . If x_ij > 0 , then  u_i + v_j := c_ij  and  c*_ij := 0 .
Step  2: Set  v₁ := 0  and compute all u_i and v_j from the equations of step 1. If it is not possible to determine all u_i and v_j in this manner, then set c*_ij := 0 for suitably chosen additional elements, so that all u_i and v_j can be computed. At the end of this calculation there are at least (m + n - 1) elements  c*_ij = 0 .
Step  3: Compute the remaining elements of the matrix C* as follows:
         c*_ij := u_i + v_j - c_ij .
         Result: C* , which is called opportunity-cost matrix.
Step  4: In the transportation matrix X mark with "#" all elements x_ij for which  x_ij = c*_ij = 0 . (If this occurs, the problem is called degenerate.)
Step  5: Determine  c*_rs := max {c*_ij} .
Step  6: Is  c*_rs ≤ 0  for all non-basic elements ?
         If yes: Stop, the current transportation plan X is optimal. The total costs are defined as
         C_tot := Σ_{i=1}^{m} Σ_{j=1}^{n} x_ij·c_ij  [m.u.]
         Here the costs from/to the dummies are omitted.
         If no : Go to step 7.
Step  7: (determination of a new solution)
         a) Label the element x_rs with "+" .
         b) Find an element greater than zero in row r , in whose corresponding column is at least one more element marked with "#" or "greater than zero" . This element is labelled with "-" .
         c) In a so determined column find an element "#" or "greater than zero" , in whose corresponding row is at least one more element "greater than zero" . This element is labelled with "+" . Continue this labelling procedure, until one element in column s is labelled with "-" .
Step  8: Determine the element with minimal absolute value among those elements labelled with "-" . Let this element be x_kl ; then the new transportation matrix reads as follows:
         x_ij + x_kl , if x_ij is labelled with "+"
         x_ij - x_kl , if x_ij is labelled with "-"
         x_ij , otherwise .
         Delete all labels and go to step 1.
Example
Given three stores with the inventory of (10;4;6) units, and three places with the demand of (5;12;8) units. The transportation costs per unit are given in the matrix C of the example "Northwest-Corner Rule" (row 4 is the dummy store). The initial feasible solution is the Northwest-Corner solution:

          5  5  0                     v:     0    3    1
          0  4  0                   u₁= 2:   0    0   -4
X⁽¹⁾ =    0  3  3        C*⁽¹⁾ =    u₂= 3:   0    0    3
          0  0  5                   u₃= 3:  -6    0    0
                                    u₄=49:  -1    2    0

c*_rs = c*₂₃ = 3 > 0 . Labelling: x₂₃ "+", x₂₂ "-", x₃₂ "+", x₃₃ "-";
x_kl = min {4;3} = x₃₃ = 3 :

          5  5  0                     v:     0    3   -2
          0  1  3                   u₁= 2:   0    0   -7
X⁽²⁾ =    0  6  0        C*⁽²⁾ =    u₂= 3:   0    0    0
          0  0  5                   u₃= 3:  -6    0   -3
                                    u₄=52:   2    5    0

c*_rs = c*₄₂ = 5 > 0 . Labelling: x₄₂ "+", x₄₃ "-", x₂₃ "+", x₂₂ "-";
x_kl = min {5;1} = x₂₂ = 1 :

          5  5  0                     v:     0    3    3
          0  0  4                   u₁= 2:   0    0   -2
X⁽³⁾ =    0  6  0        C*⁽³⁾ =    u₂=-2:  -5   -5    0
          0  1  4                   u₃= 3:  -6    0    2
                                    u₄=47:  -3    0    0

c*_rs = c*₃₃ = 2 > 0 . Labelling: x₃₃ "+", x₃₂ "-", x₄₂ "+", x₄₃ "-";
x_kl = min {6;4} = x₄₃ = 4 :

          5  5  0                     v:     0    3    1
          0  0  4                   u₁= 2:   0    0   -4
X⁽⁴⁾ =    0  2  4        C*⁽⁴⁾ =    u₂= 0:  -3   -3    0
          0  5  0                   u₃= 3:  -6    0    0
                                    u₄=47:  -3    0   -2

c*_ij ≤ 0  ∀ i,j → in X⁽⁴⁾ the optimal assignment of the goods to the transportation routes under the given conditions is obtained. Omitting the dummy row, the total costs are
C_tot = 5·2 + 5·5 + 4·1 + 2·6 + 4·4 = 67 m.u.
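Steps 1-6 (the optimality test) can be sketched as follows. The sketch is ours and assumes a non-degenerate plan whose basic cells form a connected set, so that all u_i and v_j can be propagated from v₁ = 0:

```python
# Compute the potentials u_i, v_j from the basic cells
# (u_i + v_j = c_ij) and the opportunity costs c*_ij = u_i + v_j - c_ij.
# The plan is optimal if every c*_ij is <= 0.

def opportunity_costs(C, X):
    m, n = len(C), len(C[0])
    u, v = [None] * m, [None] * n
    v[0] = 0
    basic = {(i, j) for i in range(m) for j in range(n) if X[i][j] > 0}
    while None in u or None in v:
        for (i, j) in basic:                  # propagate u_i + v_j = c_ij
            if u[i] is None and v[j] is not None:
                u[i] = C[i][j] - v[j]
            elif v[j] is None and u[i] is not None:
                v[j] = C[i][j] - u[i]
    return [[u[i] + v[j] - C[i][j] for j in range(n)] for i in range(m)]

C = [[2, 5, 7], [3, 6, 1], [9, 6, 4], [50, 50, 50]]
X = [[5, 5, 0], [0, 0, 4], [0, 2, 4], [0, 5, 0]]   # plan X(4) from above
Cstar = opportunity_costs(C, X)
print(all(c <= 0 for row in Cstar for c in row))    # -> True
```

For the final plan of the example, every opportunity cost is non-positive, confirming optimality.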
1.2.2 The Hungarian Method (Kuhn)

Hypotheses
Given n elements z_i , i = 1,...,n (for example workers) and n elements w_j , j = 1,...,n (for example machines). Exactly one z_i has to be assigned to each w_j and exactly one w_j has to be assigned to each z_i . The cost for each assignment is given in the matrix A : A_[n×n] , with
a_ij : cost of assigning z_i to w_j ;  a_ij ≥ 0  ∀ i,j .
(If necessary a "constant" matrix, one consisting of the same value in all elements, must be added to A.) The problem is to find a feasible assignment with minimal assignment costs.
Define x_ij , so that
         x_ij = 1 , if z_i is assigned to w_j
         x_ij = 0 , if z_i is not assigned to w_j .
Now the problem formally reads:
         min π = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij · x_ij
         Σ_{i=1}^{n} x_ij = 1  ∀ j ;   Σ_{j=1}^{n} x_ij = 1  ∀ i ;   x_ij = 0 ∨ 1 .
Principle
Beginning with a lower bound of assignment costs which cannot be undercut, attempt to determine a complete feasible assignment. If it is impossible, a new lower bound is established by adding the minimal additional assignment costs to the old lower bound and trying again. Repeat the procedure as long as necessary.

Description
Step  1: Apply FLOOD's Technique to the matrix A . Result: matrix Ã .
Step  2: Define  M := {b_ij} , the set of the zero elements  b_ij := (ã_ij | ã_ij = 0)  ∀ i,j , and set  P := Q := S := ∅ .
Step  3: (assignment) Determine the row r , for which  |{b_rj}| := min |{b_ij ∈ M}| , and select one element b_rs .
Step  4: Set
         Q := Q ∪ {b_rs}
         S := {b_is ∈ M | i ≠ r} ∪ {b_rj ∈ M | j ≠ s}
         P := P ∪ S
         M := M - S - {b_rs} .
Step  5: Is  M = ∅ ?
         If yes: Go to step 6.
         If no : Go to step 3.
Step  6: Is  |Q| = n ?
         If yes: Go to step 12.
         If no : Go to step 7.
Step  7: Define the index sets J and L , so that
         J := {j | b_ij ∈ P} ;   L := {l | ∄ b_lj ∈ Q} .
Step  8: (labelling of rows and columns) Define the index sets H and L* , so that
         H := {h | ∃ b_lh with l ∈ L and h ∈ J}
         L* := L ∪ {l | ∃ b_lh ∈ Q with h ∈ H} .
Step  9: Is  |L*| = |L| ?
         If yes: Go to step 10.
         If no : Set  L := L*  and go to step 8.
Step 10: Consider the matrix Ã = (ã_ij) and define the sets K⁽¹⁾ and K⁽²⁾ , so that
         K⁽¹⁾ := {ã_ij | i ∈ L; j ∈ H} ∪ {ã_ij | i ∉ L; j ∉ H}
         K⁽²⁾ := {ã_ij | i ∉ L; j ∈ H} .
Step 11: (matrix alteration) Determine  a_rs := min {ã_ij | ã_ij ∉ (K⁽¹⁾ ∪ K⁽²⁾)}  and calculate the matrix Ã* , so that
         ã*_ij := ã_ij , if ã_ij ∈ K⁽¹⁾
         ã*_ij := ã_ij + a_rs , if ã_ij ∈ K⁽²⁾
         ã*_ij := ã_ij - a_rs , otherwise .
         Set  Ã := Ã*  and go to step 2.
Step 12: The optimal (minimal cost) assignment is given by the set Q with
         x_ij := 1 , if b_ij ∈ Q ;  x_ij := 0 , otherwise .
         The assignment costs are  π := Σ_{j=1}^{n} Σ_{i=1}^{n} a_ij · x_ij .

Note: If each assignment yields a profit, then a feasible assignment must be found that maximizes profit. In this case transform the matrix A into Â . Let  a_kl := max {a_ij}  and  A⁽ᵏˡ⁾ := (a_kl)  the appropriate "constant" matrix; then  Â := A⁽ᵏˡ⁾ - A .
Start the above method as explained but using Â .

Example
Given the matrix A ; FLOOD's Technique yields Ã :

         w₁  w₂  w₃  w₄                  0  1  1  6
    z₁    2   3   5   8                  0  6  6  5
A = z₂    1   7   9   6    ;    Ã  =     3  0  0  0
    z₃    9   6   8   6                  0  2  3  2
    z₄    2   4   7   4

M = {b₁₁;b₂₁;b₃₂;b₃₃;b₃₄;b₄₁};
1 = min {1;1;3;1} → row r := row 1;  b_rs = b₁₁;  Q = {b₁₁};  S = {b₂₁;b₄₁};  P = {b₂₁;b₄₁};  M = {b₃₂;b₃₃;b₃₄};
3 = min {3} → row r := row 3;  b_rs = b₃₂;  Q = {b₁₁;b₃₂};  S = {b₃₃;b₃₄};  P = {b₂₁;b₃₃;b₃₄;b₄₁};  M = ∅;
|Q| < n → J = {1;3;4};  L = {2;4};
H = {1};  L* = {1;2;4};  |L*| ≠ |L| → L := {1;2;4};  H = {1};  L* = {1;2;4} = L →
K⁽¹⁾ = {ã₁₁;ã₂₁;ã₄₁;ã₃₂;ã₃₃;ã₃₄};  K⁽²⁾ = {ã₃₁};  a_rs = ã₁₂ = 1 :

        0  0  0  5
        0  5  5  4
Ã* =    4  0  0  0
        0  1  2  1

M = {b₁₁;b₁₂;b₁₃;b₂₁;b₃₂;b₃₃;b₃₄;b₄₁};
1 = min {3;1;3;1} → row r := row 2;  b_rs = b₂₁;  Q = {b₂₁};  S = {b₁₁;b₄₁};
2 = min {2;3} → row r := row 1;  b_rs = b₁₂;  Q = {b₁₂;b₂₁};  S = {b₁₃;b₃₂};
2 = min {2} → row r := row 3;  b_rs = b₃₃;  Q = {b₁₂;b₂₁;b₃₃};  S = {b₃₄};  M = ∅;
|Q| < n → P = {b₁₁;b₁₃;b₃₂;b₃₄;b₄₁};  J = {1;2;3;4};  L = {4};
H = {1};  L* = {2;4};  |L*| ≠ |L| → L := {2;4};  H = {1};  L* = {2;4} = L →
K⁽¹⁾ = {ã₁₂;ã₁₃;ã₁₄;ã₂₁;ã₃₂;ã₃₃;ã₃₄;ã₄₁};  K⁽²⁾ = {ã₁₁;ã₃₁};  a_rs = ã₄₂ = 1 :

        1  0  0  5
        0  4  4  3
Ã* =    5  0  0  0
        0  0  1  0

M = {b₁₂;b₁₃;b₂₁;b₃₂;b₃₃;b₃₄;b₄₁;b₄₂;b₄₄};
1 = min {2;1;3;3} → row r := row 2;  b_rs = b₂₁;  Q = {b₂₁};  S = {b₄₁};
2 = min {2;3;2} → row r := row 1;  b_rs = b₁₃;  Q = {b₁₃;b₂₁};  S = {b₁₂;b₃₃};
2 = min {2;2} → row r := row 3;  b_rs = b₃₄;  Q = {b₁₃;b₂₁;b₃₄};  S = {b₃₂;b₄₄};  M = {b₄₂};
1 = min {1} → row r := row 4;  b_rs = b₄₂;  Q = {b₁₃;b₂₁;b₃₄;b₄₂};  M = ∅;
|Q| = n → Stop, an optimal solution has been found; the assignment is
x₁₃ = 1;  x₂₁ = 1;  x₃₄ = 1;  x₄₂ = 1;
the assignment costs are  π = 5 + 1 + 6 + 4 = 16 m.u.
1.2.3
81
1.2.3 The Decomposition Principle (Dantzig; Wolfe)

Hypotheses
Given a linear programming problem of the following form:

P₁:  min π = Σ_{k=1}^{r} c⁽ᵏ⁾·x⁽ᵏ⁾
     Σ_{k=1}^{r} V⁽ᵏ⁾·x⁽ᵏ⁾ = b⁽⁰⁾
     A⁽ᵏ⁾·x⁽ᵏ⁾ = b⁽ᵏ⁾    ∀ k = 1,...,r
     x⁽ᵏ⁾ ≥ 0            ∀ k = 1,...,r ,

where the m₀ coupling constraints V⁽ᵏ⁾·x⁽ᵏ⁾ link the blocks and each block k has its own constraints A⁽ᵏ⁾·x⁽ᵏ⁾ = b⁽ᵏ⁾ . From feasible basic solutions x_ik of the blocks the master problem P₂ is formed with the variables w_ik :

P₂:  min Σ_k Σ_i (c⁽ᵏ⁾·x_ik)·w_ik
     Σ_k Σ_i (V⁽ᵏ⁾·x_ik)·w_ik = b⁽⁰⁾
     Σ_i w_ik = 1    ∀ k = 1,...,r
     w_ik ≥ 0        ∀ i,k ,

and solve this problem by the Two-Phase Method.
8: Let the current optimal solution of solution u = (u';u), ^ u : U [1 x r] • Compute
Step
where
[c (k > + u'-v( k ) ] •
9: Set the index
p: = 1
u
1
P^
be
: u'r, L
Vk=
w, the dual
-i (jJ
1
r .
and
83
84
Linear
Programming
P^1^ :
Step 10: Set up the problems P3:
[c
+
u
'Vk>].x
A (k). x (k) =
x( k )
• v k = 1,... ,r .
b (k)
>
0
Step 11: Solve the problem P g ^
•
Result: An optimal solution Step 12: Is
d ^ :
min
x ^ .
= ([c(P) +
+ up) < 0
?
If yes: Go to step 15. If no : Go to step 13. Step 13: Is
p < r
?
If yes: Set p: = p + 1 and go to step 11. If no : Go to step 14. Step 14: Compute the optimal solution of i: = (x x:
where
;(k).: ==
z i
x w 1-|(P) e^"'
1
: = x ^
and
1 ,, : = V ^ - x ,, » q+l»p q+l,p
, „ .P [m o x 1]
be the unit vector of dimension [r x 1]
with
"1" on p-th position and B ^ the coefficient matrix of the artificial variables of the current optimal tableau of the problem
P^
•
Determine the column vector
B
p". e ^ )
this vector with the corresponding variable
and add and
Section
1.2.3
85
with the corresponding coefficient of the objective function
to problem
P2>
Determine the pivot-ele-
ment in this new column and do one iteration according to the Primal Simplex-Algorithm. Eliminate from the problem that variable x. which leaves the basis, along with the J
corresponding column vector.
Go to step 8 .
Example Given the following problem P^:
P^:
min w = 4-Xj + 3-x 2 + 12-x^ + 6-x^ + 10-Xj 2-x^ + x 2 + 4-x 3 + x 4 + 2-x 5 = 13 12
2-Xj + 3-X 2 + 6-X 3
4-x 1 + 10-x 2 + 14-x 3 = 28 2-x. + 4
xc 5
=
4
Xj > 0
V j = 1,.
,5 .
Me have: c { 1 ) = (4;3;12);
A
d)
.
2
3
4
10
= 13;
c ( 2 ) = (6;10); i(2)
14 12
b( l )
V ( 1 ) = (2;1;4);
(2;1)
b ( 2 > = 4;
28
;
m
l;
Solution of the first subproblem: X
1
X
2
X
3
1
X
1
X
2
X
3
1
2
3
6
12
1/3
1/2
1
2
4
10
14
28
-2/3
3
0
0
4
3
12
0
0
-3
0
-24
V ( 2 ) = (1;2);
86
Linear
1
X
x
2
X
0
1
1/2
1
1
0
9/4
9/2
0
-3
0
-24
0
0
3/2
-21
F e a s i b l e basic solutions
4
X
xgl
=
1
5
X
(9/2;l;0);
4
X
2
1
4
2
6
10
0
-14
1
5 1
X
=
1 2
;x
2 2
y W - x ^
1
1
0 -40
0
;
= (2 ; 1 ; 4 ) • SJ = i
ï k=l
= (2;1 ;4)•
J1)
2
1/2
7 -12
solutions:
= {(0;0;2) ;(0;4) ;(2;0)}
Now we s e l e c t :
V(k^-x
1
5
are:
be t h e n e c e s s a r y b a s i c }
X
(2;0);
SJ = { x ^ - . x ^ }
)
4
4
Feasible basic solutions
= (0;4);
Now we s e l e c t :
'22
are:
o f t h e second s u b p r o b l e m :
Q = ixn;x
v12
1
3
2
L e t r + mQ = 3
il
X
0
Optimal!
n
2
0
Solution
v
X
1
= (0;0;2);
S k=l
1
-9/2
n
x12
X
1
Optimal!
X
1
3
2
0
x
Programming
x
we h a v e :
+ (1 ; 2)n
'
.
w e
J
= 10 < 13
h a v e :
0] ^ = 16 > 13 ;
+ ( 1 ;2 ) -
1 K
= ( 2 ; 1 ; 4) •
= (1;2)' (o!
(0 0
8;
2
c
n
24;
= (4;3;12) •
8;
c 1 2 = (6 ; 1 0 ) •
2;
c22 = (6;10)•
^ 2]
= 40 ; = 12
;
Section
Problem W
P9
w
12
Z
22
1
z
z
2
1
3
8
8
2
1
0
0
13
1
0
0
0
1
0
1
0
1
1
0
0
1
1
-9
-9
-3
0
0
0
-15
24
40
12
0
0
0
0
W
w
After three iterations we have the following optimal tableau:

[tableau not reproduced]

(w11; w12; w22) = (1; 1/2; 1/2);   G = (u'; u0) = (-14/3; (40/3; -8/3));

((4;3;12) + (-14/3)·(2;1;4))·x^(1) = ((4;3;12) - (28/3;14/3;56/3))·x^(1) = (-16/3; -5/3; -20/3)·x^(1) ,

so that the subproblem P^(1), with objective coefficients (-16/3; -5/3; -20/3) and the constraints

 (1)  2·x1 +  3·x2 +  6·x3 ≤ 12
 (2)  4·x1 + 10·x2 + 14·x3 ≤ 28 ,

is solved next. After two iterations we have the following optimal tableau:

[tableau not reproduced]

x̄^(1) = (9/2; 1; 0);
d^(1) = (-16/3 · 9/2 - 5/3 · 1 - 20/3 · 0) + 40/3 = -37/3 < 0;
x^21 := x̄^(1) = (9/2; 1; 0);
l^21 := V^(1)·x^21 = (2;1;4)·(9/2;1;0) = 10 .

[next master tableau not reproduced]

(w21; w12; w22) = (1; 1/6; 5/6);   G = (u'; u0) = (-14/3; (77/3; -8/3));

((4;3;12) + (-14/3)·(2;1;4))·x^(1) = (-16/3; -5/3; -20/3)·x^(1) .

In relation to the last iteration P^(1) is unchanged, therefore:
x̄^(1) = (9/2; 1; 0);
d^(1) = (-16/3 · 9/2 - 5/3 · 1 - 20/3 · 0) + 77/3 = 0 .

((6;10) + (-14/3)·(1;1))·x^(2) = ((6;10) - (14/3;14/3))·x^(2) = (4/3; 16/3)·x^(2) ;

[tableau of P^(2) not reproduced]

x̄^(2) = (2; 0);
d^(2) = (4/3 · 2 + 16/3 · 0) + (-8/3) = 0 .

Computation of the optimal solution of P:
x^(1) := w21·x^21 = 1 · (9/2; 1; 0) = (9/2; 1; 0);
x^(2) := w12·x^12 + w22·x^22 = 1/6 · (0; 4) + 5/6 · (2; 0) = (5/3; 2/3);
x = (x^(1); x^(2)) = (9/2; 1; 0; 5/3; 2/3);
control: (2·9/2 + 1·1 + 4·0 + 1·5/3 + 2·2/3) = 13;
 (2·9/2 + 3·1 + 6·0) = 12;
 (4·9/2 + 10·1 + 14·0) = 28;
 (2·5/3 + 1·2/3) = 4;
π = (4·9/2 + 3·1 + 12·0 + 6·5/3 + 10·2/3) = 113/3 .
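The recombination of the subproblem solutions and the control computations above can be checked numerically; a minimal sketch with exact fractions (the variable names follow the example, not the book's notation):

```python
from fractions import Fraction as F

# Optimal weights and subproblem vertices from the example.
w21, w12, w22 = F(1), F(1, 6), F(5, 6)
x21 = [F(9, 2), F(1), F(0)]
x12 = [F(0), F(4)]
x22 = [F(2), F(0)]

x1 = [w21 * v for v in x21]                          # (9/2; 1; 0)
x2 = [w12 * a + w22 * b for a, b in zip(x12, x22)]   # (5/3; 2/3)
x = x1 + x2

# Control computations from the text.
assert sum(c * v for c, v in zip([2, 1, 4, 1, 2], x)) == 13
assert sum(c * v for c, v in zip([2, 3, 6], x1)) == 12
assert sum(c * v for c, v in zip([4, 10, 14], x1)) == 28
assert sum(c * v for c, v in zip([2, 1], x2)) == 4
pi = sum(c * v for c, v in zip([4, 3, 12, 6, 10], x))
print(pi)  # 113/3
```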
1.2.4 FLOOD's Technique

Hypotheses
Given a matrix A : A[m×n], find a matrix Ā : Ā[m×n] which contains at least one element equal to zero in each column and in each row.

Description
Step 1: Determine the vector of the row-minima a^(r), so that
 a^(r)T := (a1^(r); ...; am^(r)) , where ai^(r) := min_j {aij} .
Step 2: Determine the matrix A^(r) : A^(r)[m×n], so that A^(r) := (aij^(r)), where
 aij^(r) := ai^(r)  ∀ j = 1,...,n; ∀ i = 1,...,m ,
and compute the matrix Ã, so that à := A − A^(r) .

Step 3: Determine the vector of the column-minima ã^(c), so that
 ã^(c) := (ã1^(c); ...; ãn^(c)) , where ãj^(c) := min_i {ãij} .

Step 4: Determine the matrix Ã^(c) : Ã^(c)[m×n], so that Ã^(c) := (ãij^(c)), where
 ãij^(c) := ãj^(c)  ∀ i = 1,...,m; ∀ j = 1,...,n ,
and compute the matrix Ā, so that Ā := à − Ã^(c) .

The reduction constant
 r0 := Σ_{i=1}^{m} ai^(r) + Σ_{j=1}^{n} ãj^(c)
has the following value:
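The four steps can be sketched directly; the test matrix below is hypothetical (chosen only so that its row minima (2;1;2;1) and column minima (1;0;0) reproduce the reduction constant r0 = 7 of the example — the book's own matrix is not legible in this copy):

```python
def flood_reduce(A):
    """FLOOD's technique: subtract row minima, then column minima,
    so that every row and every column of the result contains a zero.
    Returns (A_bar, r0), r0 being the reduction constant."""
    # Steps 1-2: row minima.
    row_min = [min(row) for row in A]
    A_tilde = [[a - m for a in row] for row, m in zip(A, row_min)]
    # Steps 3-4: column minima of the row-reduced matrix.
    col_min = [min(col) for col in zip(*A_tilde)]
    A_bar = [[a - m for a, m in zip(row, col_min)] for row in A_tilde]
    return A_bar, sum(row_min) + sum(col_min)

A = [[3, 4, 2],
     [3, 1, 2],
     [4, 2, 3],
     [2, 1, 3]]
A_bar, r0 = flood_reduce(A)
print(r0)  # 7
assert all(0 in row for row in A_bar)
assert all(0 in col for col in zip(*A_bar))
```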
Example

[the matrices A, A^(r), Ã = A − A^(r), Ã^(c) and Ā = Ã − Ã^(c) are not reproduced]

a^(r) = (2; 1; 2; 1);   ã^(c) = (1; 0; 0);
r0 = ((2 + 1 + 2 + 1) + (1 + 0 + 0)) = 7 .
1.3 Theorems and Rules

1.3.1 The Dual Problem
In some cases it is convenient not to solve the given primal problem, but the corresponding dual problem. The dual problem can be determined from the primal problem as follows:

Description
Step 1: Transform the primal problem into the following form:

P: max π = Σ_{j=1}^{n} cj·xj
 Σ_{j=1}^{n} aij·xj ≤ bi  ∀ i = 1,...,k
 Σ_{j=1}^{n} aij·xj = bi  ∀ i = k+1,...,m , where k ≤ m
 xj ≥ 0       ∀ j = 1,...,l
 xj unbounded  ∀ j = l+1,...,n , where l ≤ n

or in matrix-notation:

P: max π = c·x
 A'·x ≤ b'
 A''·x = b''
 x ≥ θ (for the constrained variables).

Step 2: Construct the dual problem P_D, so that

P_D: min π = Σ_{i=1}^{m} ui·bi
 Σ_{i=1}^{m} aij·ui ≥ cj  ∀ j = 1,...,l
 Σ_{i=1}^{m} aij·ui = cj  ∀ j = l+1,...,n , where l ≤ n
 ui ≥ 0       ∀ i = 1,...,k
 ui unbounded  ∀ i = k+1,...,m , where k ≤ m .

This means:
a) A dual constrained variable ui ≥ 0 is assigned to each type I inequality in the primal problem.
b) A dual unbounded variable ui is assigned to each equality in the primal problem.
c) A dual type II inequality is assigned to each constrained primal variable xj ≥ 0.
d) A dual equality is assigned to each unbounded primal variable xj.
e) The primal objective function max π = c·x is replaced by the dual objective function min π = u·b .
Note: A type I inequality is of type "≤", a type II inequality of type "≥".

Example
Given the following problem P':

P': min π = 3·x1 − 4·x2     ⟹  P: max π' = −3·x1 + 4·x2
 (1) 2·x1 +  x2 ≥  6         (1) −2·x1 −  x2 ≤ −6
 (2)  x1 − 2·x2 ≤ 10         (2)   x1 − 2·x2 ≤ 10
 (3) 5·x1 + 7·x2 = 13        (3)  5·x1 + 7·x2 = 13
 x1 ≥ 0; x2 unbounded        x1 ≥ 0; x2 unbounded

P_D: min π = −6·u1 + 10·u2 + 13·u3
 (1) −2·u1 +   u2 + 5·u3 ≥ −3
 (2)  −u1 − 2·u2 + 7·u3 =  4
 u1, u2 ≥ 0; u3 unbounded

P'_D: max π = 6·u1 − 10·u2 − 13·u3
 (1) 2·u1 −   u2 − 5·u3 ≤ 3
 (2)  −u1 − 2·u2 + 7·u3 = 4
 u1, u2 ≥ 0; u3 unbounded
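The construction rules a)–e) can be written down mechanically; a minimal sketch (the function name and the 0-based index convention are ours, not the book's):

```python
def dual(c, A, b, k, l):
    """Dual of  max pi = c*x  with A[i]*x <= b[i] for i < k,
    A[i]*x == b[i] for i >= k, x[j] >= 0 for j < l, x[j] unbounded
    for j >= l (0-based).  Returns the data of  min pi = u*b :
    (objective, transposed constraint matrix, right-hand side c,
     number of '>=' rows, number of sign-constrained dual variables)."""
    m, n = len(A), len(c)
    At = [[A[i][j] for i in range(m)] for j in range(n)]  # columns -> rows
    # Row j < l is an inequality  sum_i a_ij*u_i >= c_j, row j >= l an
    # equality; u_i >= 0 for i < k, u_i unbounded for i >= k.
    return b, At, c, l, k

# The example's primal in standard form: max pi' = -3*x1 + 4*x2.
c = [-3, 4]
A = [[-2, -1], [1, -2], [5, 7]]
b = [-6, 10, 13]
obj, At, rhs, n_ineq, n_sign = dual(c, A, b, k=2, l=1)
print(obj)            # [-6, 10, 13]  -> min pi = -6*u1 + 10*u2 + 13*u3
print(At[0], rhs[0])  # [-2, 1, 5] -3 -> -2*u1 + u2 + 5*u3 >= -3
print(At[1], rhs[1])  # [-1, -2, 7] 4 -> -u1 - 2*u2 + 7*u3 = 4
```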
1.3.2 Theorems of Duality
The following relationships exist between the primal problem P and the dual problem P_D that was developed in 1.3.1:
a) For each linear programming problem P there exists a corresponding dual problem P_D .
b) (P_D)_D = P .
c) ∃ optimal solution of P ⟺ ∃ optimal solution of P_D .
d) P has an unbounded solution ⟹ ∄ feasible solution of P_D .
e) P_D has an unbounded solution ⟹ ∄ feasible solution of P .
f) For the initial and the optimal tableau the following conditions hold:
 1. the primal values of P correspond to the dual values of P_D ;
 2. the dual values of P correspond to the primal values of P_D .
1.3.3 The Lexicographic Selection Rule

Hypotheses
Given a primal feasible simplex-tableau with column s selected as pivot-column, we have
 b_{r1}/a_{r1,s} = b_{r2}/a_{r2,s} = ... = b_{rp}/a_{rp,s} = min_i { bi/a_{is} | a_{is} > 0 } ,
i.e. there is more than one row equally eligible for selection as the pivot-row.

Description
Step 1: Let a_{rν} be the rν-th row vector of the above simplex-tableau. Compute the vectors āν, so that
 āν := (1/a_{rν,s}) · a_{rν}  ∀ ν = 1,...,p .
Step 2: Determine āν' := lex min_ν {āν}, i.e. the lexicographic minimum of all alternative row vectors āν. The pivot-element is given by a_{rν',s} .

Note: a) When more than one column is equally eligible for pivot-column in the dual problem, proceed in the corresponding way as described above.
b) The pivot-row could also be chosen at random. (Proof of convergence in [1.B.7])
Example
Given the following simplex-tableau; column 1 is selected as pivot-column, and the first three rows are tied with ratio bi/a_{i,1} = 1:

[tableau not reproduced]

a1 = (2; −1; 5; 1; 0; 0; 0; 2);   ā1 = (1; −1/2; 5/2; 1/2; 0; 0; 0; 1)
a2 = (5; 0; 7/2; 0; 1; 0; 0; 5);   ā2 = (1; 0; 7/10; 0; 1/5; 0; 0; 1)
a3 = (3; 3; 2; 0; 0; 1; 0; 3);   ā3 = (1; 1; 2/3; 0; 0; 1/3; 0; 1)

ā1 is the lexicographic minimum, a11 = 2 is the pivot-element.
2. Integer Programming

2.1 Cutting Plane Methods

2.1.1 The GOMORY-I All Integer Method

Hypotheses
Given the following problem P:

P: min ∨ max π = c·x
 A·x ≤ b
 x ∈ N0^n .

Principle
Beginning with a continuous optimal solution, if it exists, new solutions are determined in each iteration by adding new restrictions (cuts) to the problem.

Description
Step 1: Solve the problem without the integer condition by one of the simplex-methods.
Result: A solution x with the value of the objective function π = c·x (if a feasible solution exists).
Set the running index p := 1 .
Step 2: Is xj ∈ N0 ∀ j = 1,...,n ?
If yes: Stop, x is an optimal solution of P .
If no : Go to step 3 .
Step 3: Determine row q:
 rq := min {ri | ri := bi − [bi]; ri > 0} ;
q is called the source row .
Step 4: Form a new restriction (a so-called cut) from the row q with the additional slack variable yp*:
 yp* − Σ_{j:nbv} rqj·xj = −rq , where rqj := aqj − [aqj] .
Note: The summation-index j holds for all nbv.
Add the cut to the current tableau; determine the pivot-element according to the Dual Simplex-Algorithm, and perform as many dual simplex-iterations as necessary.
Step 5: Does a feasible solution of the expanded problem exist ?
If yes: Let x be the solution. Set p := p + 1 and go to step 2 .
If no : Stop, there is no feasible integer solution of P .

Note: If a slack variable yp* is basis-variable for the second time, then the row and column corresponding to yp* can be eliminated. Therefore the maximal size of the tableau is (m + n + 1) rows and (m + 2·n + 1) columns.
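The cut of Steps 3–4 is pure fractional-part arithmetic; a minimal sketch (function name ours), applied to the source row of the example that follows:

```python
from fractions import Fraction as F
from math import floor

def gomory_cut(row, rhs):
    """From a source row with fractional right-hand side build the cut
    y* - sum_j r_qj * x_j = -r_q ,  with  r_qj = a_qj - [a_qj]."""
    r_q = rhs - floor(rhs)
    r = [a - floor(a) for a in row]
    # coefficients of the nonbasic variables, and the new right-hand side
    return [-rj for rj in r], -r_q

# Source row of the example (nonbasic columns x2, y2):
#   x1 + 3/4*x2 + 1/4*y2 = 5/2
coeffs, cut_rhs = gomory_cut([F(3, 4), F(1, 4)], F(5, 2))
print(coeffs, cut_rhs)  # the cut  y* - 3/4*x2 - 1/4*y2 = -1/2
```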
Example
Given the following problem P:

P: max π = 2·x1 + x2
 (1)  x1 −  x2 ≤  5
 (2) 4·x1 + 3·x2 ≤ 10
 x1, x2 ∈ N0 .

T(1):                      T(2):
 x1  x2  y1  y2 |  b        x1   x2  y1   y2 |  b
  1  −1   1   0 |  5         0 −7/4   1 −1/4 | 5/2
  4   3   0   1 | 10         1  3/4   0  1/4 | 5/2
 −2  −1   0   0 |  0         0  1/2   0  1/2 |  5

x = (5/2; 0);  π = 5;
x1 does not fulfill the integer condition, the second row is the source row.
Gomory-I-cut:  y1* − 3/4·x2 − 1/4·y2 = −1/2 .
T(3):                          T(4):
 x1   x2  y1   y2  y1* |   b    x1  x2  y1   y2  y1* |    b
  0 −7/4   1 −1/4   0  |  5/2     0   0   1  1/3 −7/3 | 11/3
  1  3/4   0  1/4   0  |  5/2     1   0   0    0    1 |    2
  0 −3/4   0 −1/4   1  | −1/2     0   1   0  1/3 −4/3 |  2/3
  0  1/2   0  1/2   0  |   5      0   0   0  1/3  2/3 | 14/3

Current solution: x = (2; 2/3); the integer condition is not fulfilled by x2 → the third row is the source row.
Gomory-I-cut:  y2* − 1/3·y2 − 2/3·y1* = −2/3 .

T(5):
 x1  x2  y1    y2  y1*  y2* |    b
  0   0   1   1/3 −7/3   0  | 11/3
  1   0   0     0    1   0  |    2
  0   1   0   1/3 −4/3   0  |  2/3
  0   0   0  −1/3 −2/3   1  | −2/3
  0   0   0   1/3  2/3   0  | 14/3

Here both elements, −1/3 as well as −2/3, may be selected for pivot-element. The sixth tableau corresponds to selecting −1/3 for pivot, the seventh tableau corresponds to selecting −2/3 for pivot.
T(6):
 x1  x2  y1  y2  y1*  y2* | b
  0   0   1   0   −3    1 | 3
  1   0   0   0    1    0 | 2
  0   1   0   0   −2    1 | 0
  0   0   0   1    2   −3 | 2
  0   0   0   0    0    1 | 4

First optimal integer solution: x = (2; 0);  π = c·x = 4 .

T(7):
 x1  x2  y1    y2  y1*  y2* | b
  0   0   1   3/2    0 −7/2 | 6
  1   0   0  −1/2    0  3/2 | 1
  0   1   0     1    0   −2 | 2
  0   0   0   1/2    1 −3/2 | 1
  0   0   0     0    0    1 | 4

Second optimal integer solution: x = (1; 2);  π = c·x = 4 .
2.1.2 The GOMORY-II All Integer Method

Hypotheses
Given the following problem P:

P: min π = c·x
 Ã·x ≤ b̃
 x ∈ N0^n ;  b̃ ∈ R^m .

Principle
The method consists of the repeated application of the Dual Simplex-Algorithm on an integer tableau. If it is not guaranteed that the tableau will remain integer, a new restriction (cut) is added.

Description
Step 1: Is (ãij ∈ Z ∀ i,j) ∧ (b̃i ∈ Z ∀ i) ?
If yes: Define aij := ãij; bi := b̃i ∀ i,j and go to step 3 .
If no : Go to step 2 .
Step 2: Consider the coefficients of the i-th restriction, ∀ i = 1,...,m, and determine their common denominator, say qi. Compute
 aij := ãij · qi ;  bi := b̃i · qi  ∀ i,j .
Step 3: Set up a simplex-tableau for problem P:

P: min π = c·x
 A·x + y = b
 x ∈ N0^n ; y ∈ N0^m ; b ∈ Z^m ,

and set the running index p := 1 .
Step 4: Is bi ≥ 0 ∀ i = 1,...,m ?
If yes: Stop, the optimal integer solution for P has been found.
If no : Go to step 5 .
Step 5: Determine the provisional pivot-element a_{r's} according to the Dual Simplex-Algorithm .
Step 6: ∃ a_{r's} < 0 ?
If yes: Go to step 7 .
If no : Stop, there is no feasible solution of P .
Step 7: Is a_{r's} = −1 ?
If yes: Set a_{rs} := a_{r's} and go to step 9 .
If no : Go to step 8 .
Step 8: Form a new restriction (a so-called cut) from the row r' with the additional slack variable yp*:
 Σ_{j:nbv} [a_{r'j}/λ]·xj + yp* = [b_{r'}/λ] , where λ := min_{j:nbv} {a_{r'j}} ,
and add this cut to the current tableau. Let this be the r-th row. Disregard the provisional pivot-element a_{r's} and determine the pivot-element a_{rs} according to the Dual Simplex-Algorithm. Set p := p + 1 .
Step 9: Perform one dual simplex-iteration and go to step 4 .
Example
Given the following problem P:

P: min π = 5·x1 + 10·x2   ⟹  min π = 5·x1 + 10·x2
 (1) x1 + 4·x2 ≥ 2          (1) −x1 − 4·x2 ≤ −2
 (2) x1 − 3·x2 ≥ ...         (2) −x1 + 3·x2 ≤ ...
 x1, x2 ∈ N0 .

[the tableaus of this example are not reproduced]
Optimal continuous solution: x = (5/2; 13/8; 0);  π = c·x = 79/8;
x1 does not fulfill the integer condition; the second row is the source row with rq = 1/2 .
Intensified Gomory-III-cut:
 −1/4·x3 − 1/4·y2 + y1* = −1/2 .

T(4):
[tableau not reproduced]

T(5):
[tableau not reproduced]

The optimal mixed-integer solution is: x = (2; 7/4; 0);  π = c·x = 37/4 .
2.1.5 The Primal Cutting Plane Method (Young; Glover; Ben-Israel; Charnes)

Hypotheses
Given the following problem P:

P: max π = c·x
 A·x ≤ b
 x ∈ N0^n .

Description
Step 1: Set up a primal feasible integer starting tableau and set the running index p := 1 .
Step 2: Is the current tableau optimal, i.e. are all reduced costs nonnegative ?
If yes: Stop, an optimal solution of P has been found.
If no : Go to step 3 .
Step 3: Select the pivot-column s as well as the provisional pivot-row r' according to the Primal Simplex-Algorithm. Then the provisional pivot-element is given by a_{r's} .
Step 4: ∃ a_{r's} > 0 ?
If yes: Go to step 5 .
If no : Stop, P has no feasible integer solution .
Step 5: Is a_{r's} = 1 ?
If yes: Define a_{rs} := a_{r's}. Go to step 7 .
If no : Go to step 6 .
Step 6: Form a new restriction (a so-called cut) from the row r' with the additional slack variable yp*:
 Σ_{j:nbv} [a_{r'j}/λ]·xj + yp* = [b_{r'}/λ] , where λ := max_{j:nbv} {a_{r'j} | a_{r'j} > 0} .
Note: The summation-index j holds for all nbv .
Add the cut to the current tableau. Let this be the r-th row, then the pivot-element a_{rs} = 1 is uniquely defined. Set p := p + 1 .
Step 7: Perform one primal simplex-iteration. Go to step 2 .

Note: The note pertaining to the hypotheses of the GOMORY-III Mixed Integer Method holds in this case also.
Example
Given the following problem P:

P: max π = x1 + 3·x2
 (1) 2·x1 + 5·x2 ≤ ...

π̄(1) = 44/3;  x1(1) ∉ N0;  J(1) = x2;  t = 3;
π̄(2) = (−∞);  M := ∅ ∪ {1} = {1};  I(2) = {x2};  J(2) = x2;  M ≠ ∅;
P(2): max π = 2·x1 + 6·x2 ;
P(3): max π = 14;  (1) 3/2·x1 + 4·x2 ≤ 10 ;
 xj ≥ 0  ∀ j = k+1,...,n , where k ≤ n .

M and π* are defined as: M := ∅; t := 1; π* := (−∞); the running index p := 1. The problem P(0) is given by the problem P without the integer condition.

Principle
See 2.2.1 .

Description
Step 1: Solve the problem P(0) by one of the simplex-methods.
Result: A solution x(0) with the value of the objective function π(0) = c·x(0) (if a feasible solution exists).
Step 2: Is xj(0) ∈ N0 ∀ j = 1,...,k ?
If yes: Stop, x(0) is an optimal solution of P .
If no : Go to step 3 .
Step 3: Define x(τ) := x(0); select the variable xs(τ) ∉ N0, where s ∈ [1;k] .
Step 4: Set up the problem P(t) as follows:
 P(t) := P(τ) ∪ {xs ≤ [xs(τ)]} .
This means: Add the restriction
 −Σ_{j:nbv} āij·yj + yp* = [xs(τ)] − b̄i
to the optimal tableau of the problem P(τ) and do as many dual simplex-iterations as necessary.
Step 5: Set up the problem P(t + 1) as follows:
 P(t + 1) := P(τ) ∪ {xs ≥ [xs(τ)] + 1} .
This means: Add the restriction
 Σ_{j:nbv} āij·yj + yp* = b̄i − [xs(τ)] − 1
to the optimal tableau of the problem P(τ) and do as many dual simplex-iterations as necessary.
Step 6: ∃ feasible solutions x(l) ∀ l = t, t + 1 ?
If yes: Go to step 7. If no : Go to step 11.
Step 7: Is π̄(l) > π* ∀ l = t, t + 1 ?
If yes: Go to step 8. If no : Go to step 11.
Step 8: Is (xj(l) ∈ N0 ∀ j = 1,...,k) ∀ l = t, t + 1 ?
If yes: Go to step 9. If no : Go to step 10.
Step 9: Determine
 π(r) := max {π̄(l) | xj(l) ∈ N0 ∀ j = 1,...,k}
and set x* := x(r); π* := π(r) .
Step 10: Define M := M ∪ {l | xj(l) ∉ N0 for at least one j = 1,...,k; l = t, t + 1} .
Step 11: Is M = ∅ ?
If yes: Go to step 15. If no : Go to step 12.
Step 12: Determine π̄(τ) := max {π̄(l) | l ∈ M} .
Step 13: Is π̄(τ) ≤ π* ?
If yes: Set M := M − {τ}. Go to step 11. If no : Go to step 14.
Step 14: Select the variable xs(τ) ∉ N0, where s ∈ [1;k]; set t := t + 2; p := p + 1. Go to step 4.
Step 15: Is π* = (−∞) ?
If yes: Stop, P has no feasible solution.
If no : Stop, x* is an optimal solution of P; the value of the objective function is π* = c·x* .
Example
Given the following problem P:

P: max π = x1 + 3·x2
 (1) 4·x1 − ...

P(1) := P(0) ∪ {xs ≤ 4}, i.e., the additional restriction
 2/3·x1 + 1/3·x2 + y* = −2/3 ...

−x12 + y12 = 0 ;
 x2 + y1 = 4 ;
−1/2·x12 + x2 + y2 = 5/2 ;
 ... ;
 x1p, x2, y1, y2, y1p ≥ 0 ∀ p .

After two iterations we obtain the optimal tableau T_opt(cont) of P̄:

[tableau not reproduced]

The solution is: x10 = 1; x11 = 3/4; x12 = 0; x2 = 27/8; c̄ = 41/8;
 x1 := Σ_p x1p = 7/4 .

Cost penalty table:
 s10 = (1/4 − 1)·0 = 0;
 s11 = (3/4 − 1)·(−1/2) = 1/8;
 s12 = (0 − 1)·0 = 0 .

Select: s10 = 0; Δb^T = (0;0;0;0;1); b̂ is the new right-hand-side in T_opt(cont); after two dual simplex-iterations we obtain the following integer solution:
 x10 = 1; x11 = 1; x12 = 1; x2 = 3/2; x1 = Σ_p x1p = 3; c(1) = 9/2;
 x* = (3; 3/2); π* = 9/2 .

Select: s12 = 0; Δb^T = (0;0;1;0;0); b̂ is the new right-hand-side in T_opt(cont); after two dual simplex-iterations we obtain the following integer solution:
 x10 = 1; x11 = 0; x12 = 0; x2 = 3; x1 = 1; c = 4;
this solution is not as good as the other, so it is rejected.

Select: s11 = 1/8; Δb^T = (0;0;0;1;0); b̂ is the new right-hand-side in T_opt(cont); after one dual simplex-iteration we obtain the following integer solution:
 x10 = 1; x11 = 1; x12 = 0; x1 = 2; c(2) = 5;
 x* = (2; 3); π* = 5 .

Select: s = 3; s ≥ c̄ − π* → Stop, an optimal mixed-integer solution of P has been found.
2.2.4 The Additive Algorithm (Balas)

Hypotheses
Given the following problem P:

P: min π = c·x
 A·x ≥ b
 xj = 0 ∨ 1  ∀ j = 1,...,n .
Principle The a l g o r i t h m e n u m e r a t e s
( a f t e r an e v e n t u a l s o r t i n g and i n d e x i n g )
l u t i o n s o f t h e p r o b l e m . As soon as a f e a s i b l e s o l u t i o n i s f o u n d ,
socer-
t a i n s e t s o f s o l u t i o n s w i t h s p e c i a l p r o p e r t i e s a r e r e j e c t e d . The r e maining set of s o l u t i o n s is s y s t e m a t i c a l l y
examined.
130
Integer
Programming
Description
Step
1: I s
c, >
If
yes:
If
no
0 Set
V j = 1 £•: = J to step
: Go
i
Step
2: D e f i n e
X
ri
cv J 2 .
h j
?
j
=
-
i f
Step
4:
C.j < Cj
If
yes:
If
no
Define
c
,
V i < j
Set
x^ : =
: Go
to
:
4
b
Xj
=
0
e =
v
I .
j
=
v
running
Q
so
>
=
that
1
r
=
1
n
=
v
if
Q:
defines
5.
P
r
is
=
l,...,n;
given
j
=
r
:
.
:
j
=
1 , . . . ,n
,
a.j
=
0
< 0
v
1:
= 0;
i
=
l,...,m;
k:
=
0;
define
the
.
rejecting)
{j n
step
l,...,n;
l,...,n;
indices
M ^ { x ^
S
to
0
1 1
>| (x^11)
S
e
n
)
A
(E
x /
1
" *
^ x ^ M x }
the
= 1
1
1 } 3 J ) . .
n-dimensional
( P )
V
i
x
l i
) = l)}-
J
j
where
Go
0
J
problem
0,
Xj :
the
(vertical
.
.
const.
set
Define
3
c-x-e
Define
,
step
.
v j
r
Now
where
Step
=
if
r-min{c.} v j
Xj-:
to
?
x^.
step
=
Go
J
^
3: I s
.
•'
= \ ^
J
Step
1 . . . . ,n
P
|x(P)eQ}
jx*11) | xj1 ^ shift-space.
=
, O v l v j }
set
Section 2.2.4
Step
7: Is M ^
= 0 ?
If yes: Go to step 13. If no : Go to step Step
8.
8: Determine x ^ : = lex max { x ^ ^ l x ^ 1 1 ^ e M^1 b . i
Step
9: Is R ^ : = A - x ^ - b > 0 ? If yes: Go to step 11. If no : Define M ^ :
Step 10: Is M ^
= M ^ - i x ^ } . Go to step 10.
= 0 ?
If yes: Set 1: = 1 + 1 ; k: = k + 1. Go to step 6. If no : Set k: = k + 1. Go to step 8. Step 11: (horizontal rejecting) Determine the index-vectors J ( k ) : = ((j a )!( x j (k) = ^ ( J ' a < V ^ » - ( ( y l i ' ^ H n ) - ^ define Md): = Step 12: Is M ^
« < 6)); » A ( x < " > = 1)A
(ja < J'e v « < ft)), Q: = Q u ; M d ) _ {x (k) } _ {x (li),j(k) _ 3 (li) , = 0
0
y
i}
?
If yes: Set 1: = 1 + 1 ; k: = k + 1. Go to step 6. If no : Set k: = k + 1. Go to step 8. Step 13: Is Q = 0
?
If yes: Stop, f* has no feasible solution. If no : Go to step 14. Step 14: Determine the solution vector x ^ , so that t t ^ : = min {ir^ j
e Q} . Return
to the form
and determine the value of the objective function a. ij r) IT = C-Xv ' . Example P:
Given the following problem x
P: min π = 2·x1 − x2 + x3 − ...
 (1) 5·x1 − 3·x2 +  x3 + 5·x4 ≥ 1
 (2) −2·x1 +  x2 + 6·x3 − 4·x4 ≥ 1
 (3)  x1 + 3·x2 − 2·x3 − 2·x4 ≥ 0
 xj = 0 ∨ 1 ∀ j .

After the transformation x̃j := 1 − xj for the variables with negative cost-coefficients and a re-indexing by nondecreasing costs, and since all coefficients of x4 are then negative so that x4 can be set to 0, the problem reads:

P̃: min π̃ = x1 + 2·x2 + 3·x3 − 3
 (1)  x1 + 5·x2 + 3·x3 ≥  4
 (2) 6·x1 − 2·x2 −  x3 ≥  0
 (3) −2·x1 +  x2 − 3·x3 ≥ −3 .

l = 0; k = 0; Q = ∅; M^(0) = {x^(01)}; x^(01) = (0;0;0); M^(0) := M^(0) − {x^(01)} = ∅; k = 1; l = 1;
M^(1) = {x^(11); x^(12); x^(13)} = {(1;0;0); (0;1;0); (0;0;1)};
x^(1) = x^(11) = (1;0;0);  R^(1) = (−3; 6; 1);   M^(1) := M^(1) − {x^(11)}; k = 2;
x^(2) = x^(12) = (0;1;0);  R^(2) = (1; −2; 4);   M^(1) := M^(1) − {x^(12)}; k = 3;
x^(3) = x^(13) = (0;0;1);  R^(3) = (−1; −1; 0);  M^(1) := M^(1) − {x^(13)} = ∅; k = 4; l = 2;
M^(2) = {x^(21); x^(22); x^(23)};  x^(21) = (1;1;0);
x^(4) = (1;1;0);  R^(4) = (2; 4; 2) ≥ 0;  Q = {x^(4)};
the horizontal rejecting with J^(4) empties M^(2);  Q ≠ ∅:
the optimal solution (in the original variables) is x = (1;1;1;0); the value of the objective function is π = c·x = 0 .
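The level-wise search of the example can be sketched generically. This is only a hedged sketch of implicit enumeration over 0-1 vectors with a simple level bound — not Balas's exact rejection bookkeeping — applied to the reduced three-variable problem from the example (`enumerate_01` is our name):

```python
from itertools import combinations

def enumerate_01(c, A, b):
    """Implicit-enumeration sketch for  min c*x, A*x >= b, x_j in {0,1},
    with nonnegative costs (as after the sign transformation above).
    Level l holds the vectors with exactly l ones; a level is skipped
    when even its cheapest member cannot beat the incumbent."""
    n = len(c)
    order = sorted(range(n), key=lambda j: c[j])
    best = None
    for l in range(n + 1):
        lower = sum(c[j] for j in order[:l])   # cheapest possible at level l
        if best is not None and lower >= best[0]:
            continue
        for ones in combinations(range(n), l):
            x = [1 if j in ones else 0 for j in range(n)]
            if all(sum(a * v for a, v in zip(row, x)) >= bi
                   for row, bi in zip(A, b)):
                val = sum(c[j] for j in ones)
                if best is None or val < best[0]:
                    best = (val, x)
    return best

# Reduced problem of the example (x4 already fixed to 0):
val, x = enumerate_01([1, 2, 3],
                      [[1, 5, 3], [6, -2, -1], [-2, 1, -3]],
                      [4, 0, -3])
print(x, val - 3)  # the transformed objective carries the constant -3
```

Levels 0 and 1 fail exactly the feasibility checks R^(1), R^(2), R^(3) listed in the example, and level 2 finds (1;1;0) with π̃ = 3 − 3 = 0.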
2.3 Primal-Dual Methods

2.3.1 A Partitioning Procedure for Mixed Integer Problems (Benders)

Hypotheses
Given the following problem P1:

P1: min π = c·x + g·w
 A·x + D·w ≥ b
 x ≥ 0
 w ∈ N0^{n−k} ,

where c : c[1×k]; g : g[1×(n−k)]; A : A[m×k]; D : D[m×(n−k)]; b : b[m×1]; x : x[k×1]; w : w[(n−k)×1] .
Note: The Partitioning Procedure may only be applied to mixed-integer problems!

Principle
The mixed-integer problem is partitioned into a continuous problem and a pure integer programming problem by this method. Both the integer subproblem (apply the dual of the all-integer problem) and the continuous problem are solved. The solution of the original problem is the union of the two sub-solutions.

Description
Step 1: Determine a feasible solution u of the problem P2:
P2: u·A ≤ c
 u ≥ 0  (no objective function!)
Step 2: Does at least one feasible solution u of P2 exist ?
If yes: Set the running index q := 1. Go to step 3.
If no : Stop, P1 has no finite solution.
Step 3: Compute
 pj^(q) := gj − u·Dj ∀ j;  z^(q) := u·b ,
where Dj is the j-th column of the matrix D .
Step 4: Solve the problem P3:
P3: min z
 w·p^(p) − z ≤ −z^(p)  ∀ p = 1,...,q
 w ∈ N0^{n−k} ; z ∈ R .
Let the solutions be given by w̄ and z̄. If z is unbounded from below, then set z̄ := (−∞).
Step 5: Calculate the vector b̂, so that b̂i := bi − Di·w̄ , where Di is the i-th row of D .
Step 6: Solve the problem P4 by the Dual Simplex-Algorithm:
P4: max u·b̂
 u·A ≤ c
 u ≥ θ .
Let the optimal solution of P4 be ū .
Step 7: Does P4 have an unbounded solution ?
If yes: Formulate a new problem P4 with the additional restriction Σi ui ≤ C, where C is a large positive constant. Go to step 6.
If no : Go to step 8.
Step 8: Is z̄ = g·w̄ + ū·b̂ ?
If yes: Go to step 9.
If no : Set q := q + 1. Go to step 3.
Step 9: Solve the problem P5:
P5: min c·x
 A·x ≥ b̂
 x ≥ θ .
Let the solution of P5 be x̄. Stop, (x̄; w̄) is an optimal mixed-integer solution of P1 .

Example
Given the following problem P1:

P1: min π = x + w
 (1) 4·x + 5·w ≥ 20
 (2) 5·x + 3·w ≥ 15
 x ≥ 0 ; w ∈ N0 .
P2: 4·u1 + 5·u2 ≤ 1
 u1, u2 ≥ 0 .

A feasible solution of P2 is u = (1/4; 0); q = 1;
p^(1) = 1 − (1/4; 0)·(5; 3) = −1/4;  z^(1) = (1/4; 0)·(20; 15) = 5;

P3: min z
 −1/4·w − z ≤ −5
 w ∈ N0 .

The solution of P3 is w̄ = 20; z̄ = 0 .
b̂ = (20; 15) − (5; 3)·20 = (−80; −45);

P4: max (−80·u1 − 45·u2)  ⟺  min (80·u1 + 45·u2)
 4·u1 + 5·u2 ≤ 1
 u1, u2 ≥ 0 .

The optimal solution of P4 is ū = (0; 0);
z̄ = 0 ≠ g·w̄ + ū·b̂ = 1·20 + (0; 0)·(−80; −45) = 20;  q = 2;
p^(2) = 1 − (0; 0)·(5; 3) = 1;  z^(2) = (0; 0)·(20; 15) = 0;

P3: min z
 −1/4·w − z ≤ −5
 w − z ≤ 0 ;
ū = (0; 1/5);  (0; 1/5)·(20; 15) = 3; ...
152
Theory of Graphs
Principle Starting with the beginning node n r , the shortest paths from this node to arbitrary other nodes are determined successively until the shortest path to the terminal n $ is f o m d . Description Step
1: Determine all paths from n r to all successor nodes n^ and calculate each length c
Step
2: Let p 1 and p^ be two di fferent paths from n^ to n^ . Is
l( P l ) * l(p 2 )
?
If yes: Go to step 3. If no : Go to step 4. Step
3: Determine 1(p g ): = max {1(p^); 1(P 2 )> ; set M: = M
Step
4: Determine
Step
5: 3
l(p p )
l(p n ): = p
U
(p g ).
min i 1 (p^.)| p t t M} . t L x
?
I f yes: Go to step 6. If no : Stop, there is no cost-minimal path from n r to n $ . Step
6: Let l(p p ) be the length of the path p p = (n r Is
ni
= ns
ni) .
?
If yes: Stop, the shortest or cost-minimal path from n r to n s is given by p p with length l(p p ) • If no : Go to step 7. Step
7: Determine all paths from n. to each successor node n- , 1 J q which does not lie on the path p p and calculate its length 1 (n r > ... ,ni .n^ ): = l(p ) + c ^ Set M: = M u { p p } ;
go to step 2 .
Note: If the given graph is not a digraph, then transform it into one.
Section 3.1.1 Example Given the following graph jfr ,
M = 0; Pi = ( n i > n 2 ) »
1(PX) = 3 ;
p2 = ( n 1 ; n 4 ) ;
Pp = P l ; p3 = (n1;n2;n3);
1 (P 3 ) = 3 +
p4 = ( n 1 ; n 2 ; n 4 ) ;
l(p4) = 3 + 3
p5 = ( n i ; n 2 ; n 5 ) ;
1(p5)
M = ip1;p4)
;
4=7; = 6;
= 3 + 2 = 5 ;
pp = p 2 ;
P6 = ( n 1 ; n 4 ; n 3 ) ;
l ( p g ) = 4 + 4 = 8;
p7 = ( v n 4
i ( P 7 ) = 4 + 2 = 6;
; n
6
) ;
P8 = ( n 1 ; n 4 ; n 7 ) ;
l(pg) = 4 + 6 = 10;
M = {Pl;p2;p4;p6}
;
pp = p 5 ;
P9 = ( n 1 ; n 2 ; n 5 ; n 4 ) ;
l(pg) = 5 +
M = {p1;p2;p4;p5;p6;p9} PlO
=
P
= (n1;n4;n6;n7)
n
(
W
V
V
;
; 1
;
1=6;
pp = p 7 ;
(Pio) l(pn)
=6
+
4
=
1 0
'
= 6 + 3 = 9;
l ( p 2 ) = 4;
153
154
Theory of Graphs
M = {p1;p2;p4;p5;p6;p7;p8;p10};  pp = p3;
p12 = (n1;n2;n3;n6); l(p12) = 7 + 6 = 13;
M = {p1;p2;p3;p4;p5;p6;p7;p8;p10;p12};  pp = p11;  n7 ∈ p11 → Stop.
p11 = (n1;n4;n6;n7) is the shortest path from n1 to n7 with length l(p11) = 9 .

3.1.2 The Algorithm of DANTZIG

Hypotheses
The same as 3.1.1; initially define the set M as M := {nr} .

Principle
See 3.1.1 .

Description
Step 1: Formulate the following table, which will be utilized in the calculations displaying the results at each step:

 nj               | n1 | n2 | ... | nn
 pj = (nr,...,nj) |    |    |     |
 l(pj)            |    |    |     |
 Ns(nj)           |    |    |     |

Step 2: Determine all nodes n_{k(j)} ∈ Ns(nj), so that n_{k(j)} ∉ M ∀ nj ≠ ns .
(The index (j) refers to the predecessor node nj.)
Step 3: Determine all paths p_{k(j)} = (pj, n_{k(j)}) ∀ nj ∈ M and their lengths
 l(p_{k(j)}) := l(pj) + c_{j,k(j)} .
Step 4: Determine l(pq) := min_{k,j} {l(p_{k(j)})} and write the path pq = (nr,...,np,nq) as well as its length l(pq) in the above table under node nq. Set M := M ∪ {nq} .
Step 5: Is nq = ns ?
If yes: Stop, the shortest path from nr to ns is given by pq with length l(pq) .
If no : Eliminate node nq ∈ Ns(nj) ∀ nj ≠ ns. Go to step 6.
Step 6: Is Ns(nj) = ∅ ∀ nj ?
If yes: Stop, there is no cost-minimal path from nr to ns .
If no : Go to step 3.
Note: If the given graph is not a digraph, then transform it into one.

Example
Given the following graph 𝒩,
(l)
=
(nl;n2);
P4
( l ) = ( n 1l ; 4r Y > ;
= 3 = : =
4
;
156
Theory
n
j
p
j
n
of
Graphs
n
l
2
n
3
n
n
6
n
4
n
5
n
6
n
4
n
5
n
7
n
7
n
7
(n 1 ;n 2 ) 3
l(pj) Ns(nj)
n
2
n
3
n
4
n
4
n
6
n
5
n
7
3
M = {nx ; n2 > ; P4 = (ni;n 4 ); l(p 4 ) = 4 =: l(p ) ; 4 1 4 4 q (1) (1) (P) Po 6
P4
= (n-, 1 ;n c ? ;n,); J
{2)
=
(2)
?r
(2)
1(p
1
n
"j
L
5
n
l
2(4)>
n
2
3
(n 1 ;n 2 )
J
=
6;
n
n
4
4
(2)
n
n
n
3
n
4
n
6
n
5
n
7
= (ni1 ;n£? ;n,); J
5
n
6
n
4
n
5
n
7
4
6
3
M = {n 1 ;n 2 ;n 4 ) ; Po
n
(iVn4)
3
KPj) N^nj)
) = 7 ;
l(p 5 ) = 5; °(2)
= (n, ;n„;iv); (2)
P
("lin2sn4>;
l(p, J
1 (p.
(2)
) = 7;
Section 3.1.2
P5 p
3
(nl 'n2
(2)
(n1;n4;n3) ;
l(p
1(P6
n
"J
) =
6;
1 (P7
) =
10;
(4)
n2
l
n
3
n
(n1 ;n2 )
J
)
n
3
n
5
4
n
( ' V ' V
3
l(Pj) ( n j
8;
(4)
=
%«> *
S
) =
;
(P)
(4)
( n i ;n4'>n7) ;
N
) q
3
(4) >
P
l(p
(2)
(n1 ; n 4 ; n 6 ) ; V
) = 5 =:
l(p5
'
5
("l
;n
6
; n
6
n
7
5)
5
4
n
2
n
n
3
n
5
n
6
n
7
"7
M = {rij;n2;n4;n5}
(n1;n2;n3) ;
(2) Pc
(4)
p7
;
=
(ni in4inf¡) » MPC 1
4
6
= (n1;n.;n7);
J
P
j
1
n
4
n j
)
(4)
l(p7
p3
= (n1;n4;n3);
l(p3
) = 6 =:
1 (p q
) =
) = (4)
(4) (p)
);
10;
(4)
7
l
n
2
n
3
n
4
(n1;n2)
(n1;n4)
3
4
MPji Ns(
) = 7; (2)
(4)
n
l(p3
n
3
n
6
n
3
6 ru / n
n
5
(n1;n2;n5)
n
6
n
(n1;n4;n6) 6
5 n
7
7
157
158
Theory
of
Graphs
M = -Cn1 ; r i 2 ; n 4 ; n 5 ; n 6 } p
3/0,= J (2)
( n i » n 2 ' n 3)' x "
(4) p7 p
x
^
1(P3
"
= (n, ; n . ; n 7 ) ; 1 4 '
(6)
= (n, ; n . ; n , ; n 7 ) ; 1 4 b /
j
p
j
n
l
1 (p7
n
= (n-,
p (6)
1
;n7); 4
n
l
( n j
3
n2
n
n
5
n
3
n
7
6
n
7
( V V V 6
5
4
1 (p,
n
7
) = 10;
1(p7
) = 9 (6)
n
3
q
n
4
n
4
7
P7 = (n1;n4;n6;n7)
with length
5
n
6
n
(nl
n
7
from n 1 t o n y i s g i v e n
l(p7)
= 9 .
7 ''n4;n6;n7) 9
6
5
7
the s h o r t e s t path
(p)
(nx ; n 2 ; n 5 )
(rij ; n ' 2 ; n 3 ) ( n x ; n 4 )
n
Stop,
4
;
)
nq = ny
n
(4)
3
l(Pj)
(P)
) = 9;
3
7
(n1;n2)
J
/(g)
7
= (n, ; n . ; n , ; n y ) ; 1 4 b '
"j
q
( n 1 ; n 2 ; n 3 ) (nx ; n 4 ) (nx ; n 2 ; n 5 )
;n2;n3;n4;n5;ng}
(4)
S
l(py
3
n
P7
KP
) = 10;
2
(n1;n2)
M=
) = 1
(4)
n
l(pj)
N
(2)
"(4)
(4)
n
P
;
by
Seat-ion 3.1.2
159
3.1.3 The FORD Algorithm I (shortest path(s))

Hypotheses
Given a valued graph 𝒢 with nonnegative arc-values cij < ∞ ∀ aij ∈ A, determine the length of the shortest path from nr to each other node nj, ∀ j ≠ r.

Principle
The algorithm determines in an iterative process a sequence of labels, whose value is monotonic non-increasing, to all nodes, until no new labels can be assigned. The label of a node nj is then equal to the length of the shortest path from the beginning node nr to this node nj.

Description
Step 1: Define the labels M(nr) := 0; M(nj) := ∞ ∀ j ≠ r .
Step 2: Determine Q := {aij | M(nj) − M(ni) > cij} .
Step 3: Is Q = ∅ ?
If yes: Stop, M(nj) is the length of the shortest path from nr to nj ∀ j .
If no : Go to step 4 .
Step 4: Select one arc ast ∈ Q and define M(nt) := M(ns) + cst. Go to step 2.
Hint: In step 4 select first the arcs ast with indices s and t having values similar to r .

Example
Determine the length of the shortest paths from n1 to all other nodes.

[graph not reproduced]

M(n1) = 0;  M(n2) = M(n3) = M(n4) = M(n5) = ∞;
Q = {a12; a13};       ast = a12;  M(n2) = 0 + 2 = 2;
Q = {a13; a24; a25};  ast = a13;  M(n3) = 0 + 3 = 3;
Q = {a24; a25; a34};  ast = a34;  M(n4) = 3 + 5 = 8;
Q = {a24; a25; a45};  ast = a45;  M(n5) = 8 + 4 = 12;
Q = {a24; a25};       ast = a24;  M(n4) = 2 + 4 = 6;
Q = {a25; a45};       ast = a25;  M(n5) = 2 + 1 = 3;
M(n1) = 0; M(n2) = 2; M(n3) = 3; M(n4) = 6;
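The labeling loop of Steps 1–4 can be sketched directly; the arc values below are read off from the example's label computations (the graph figure is not reproduced), and nodes n1..n5 are 0..4 here:

```python
def ford_shortest(n, arcs, r=0):
    """FORD Algorithm I: repeatedly pick an arc that violates
    M(n_t) <= M(n_s) + c_st and relabel; labels only decrease.
    arcs is a dict {(s, t): c_st}; node r is the beginning node."""
    INF = float("inf")
    M = [INF] * n
    M[r] = 0
    while True:
        Q = [(i, j) for (i, j), c in arcs.items() if M[j] - M[i] > c]
        if not Q:
            return M           # M[j] = length of the shortest path r -> j
        s, t = Q[0]
        M[t] = M[s] + arcs[(s, t)]

arcs = {(0, 1): 2, (0, 2): 3, (1, 3): 4, (1, 4): 1, (2, 3): 5, (3, 4): 4}
print(ford_shortest(5, arcs))  # [0, 2, 3, 6, 3]
```

Reversing the sign convention (labels 0 everywhere, relabel while M(nt) − M(ns) < cst) gives the FORD Algorithm II for longest paths of Section 3.1.4.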
i Q = 0 The ->- S tFo pO. R D Algorithm II (longest path(s)) 3.1.4 Hypotheses Given a valued graph
yr
with arc-values
c-jj
< ™V
a
ij
e
d e t e r m i n e the l e n g t h o f t h e l o n g e s t p a t h from n r t o each o t h e r n jr .i n cv i p lj e • r . P The algorithm determines
i n an i t e r a t i v e p r o c e s s a s e q u e n c e
of
node
Section 3.1.4
161
labels, whose value is monotonic non-decreasing, to all nodes, until no new labels can be assigned. The label of a node Pj is then the length of the longest path from the beginning node n f to this node
Description Step
1: Define the labels M(rij): = 0 V j = l,...,n .
Step
2: Determine Q: = { a . J M(n,) - M(n,) < c-•} .
Step
3: Is
IJ
Q = 0
J
I
' J
?
If yes: Stop, M(nj) is the length of the longest path from n r to n^
V j .
If no : Go to step 4 . Step
4: Select one arc
ast £ Q
and define M(n t ): = M(n s ) + c g t .
Go to step 2. Hint: In step 4 select first the arcs a s t
with indices s and t
having values similar to r . Example
M(n1) = M(n2) = M(n3) = M(n4) = M(n5) = 0; Q = {a12;a13;a24;a25;a32;a34;a35;a45}; a_st = a12;
M(n2) = 0 + 2 = 2; M(n1) = M(n3) = M(n4) = M(n5) = 0; Q = {a13;a24;a25;a34;a35;a45}; a_st = a24;
M(n4) = 2 + 4 = 6; M(n1) = 0; M(n2) = 2; M(n3) = M(n5) = 0; Q = {a13;a25;a34;a35;a45}; a_st = a45;
M(n5) = 6 + 4 = 10; M(n1) = 0; M(n2) = 2; M(n3) = 0; M(n4) = 6; Q = {a13}; a_st = a13;
M(n3) = 0 + 3 = 3; M(n1) = 0; M(n2) = 2; M(n4) = 6; M(n5) = 10; Q = {a32;a34}; a_st = a32;
M(n2) = 3 + 2 = 5; M(n1) = 0; M(n3) = 3; M(n4) = 6; M(n5) = 10; Q = {a24;a34}; a_st = a24;
M(n4) = 5 + 4 = 9; M(n1) = 0; M(n2) = 5; M(n3) = 3; M(n5) = 10; Q = {a45}; a_st = a45;
M(n5) = 9 + 4 = 13; M(n1) = 0; M(n2) = 5; M(n3) = 3; M(n4) = 9; Q = ∅  →  Stop.
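The longest-path variant differs from Algorithm I only in the start labels and the direction of the inequality. In the sketch below the arc values for a25, a34 and a35 are assumptions chosen to be consistent with the label sequence of the example above:

```python
# Sketch of the FORD Algorithm II: all labels start at 0, and an arc a_st
# with M(n_t) - M(n_s) < c_st triggers the update M(n_t) := M(n_s) + c_st.
def ford_longest(n, arcs, r=1):
    M = {j: 0 for j in range(1, n + 1)}
    while True:
        Q = [(s, t) for (s, t), c in arcs.items() if M[t] - M[s] < c]
        if not Q:                      # Q = empty set -> Stop
            return M
        s, t = Q[0]
        M[t] = M[s] + arcs[(s, t)]

arcs = {(1, 2): 2, (1, 3): 3, (2, 4): 4, (2, 5): 1,
        (3, 2): 2, (3, 4): 5, (3, 5): 2, (4, 5): 4}
M = ford_longest(5, arcs)
# M == {1: 0, 2: 5, 3: 3, 4: 9, 5: 13}, the final labels of the example
```

Note that this terminates only because the graph contains no cycle of positive length, the same hypothesis the algorithm itself needs.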
3.1.5 The Tripel Algorithm
Hypotheses
Given a valued graph 𝒩 without isolated nodes, where the values c_ij represent the distance (cost) from n_i to n_j per arc, determine the shortest path from each node n_i to each other node n_j, j ≠ i.
This graph can be represented by a distance matrix C^(0) = (c_ij^(0))_[n×n] with
    c_ii^(0) := 0      ∀ i = 1,...,n;
    c_ij^(0) := ∞ ,    if a_ij ∉ A;
    c_ij^(0) := c_ij   otherwise.
Principle
In this algorithm a sequence of matrices is constructed. In each iteration one node is considered as a detour-node.
Description
Step 1: Set up the "detour-matrix" B^(0) = (b_ij^(0))_[n×n] with
            b_ij^(0) := j , if c_ij^(0) < ∞ and i ≠ j;
            b_ij^(0) := 0 , otherwise.
        Initially set the running index k := 1.
Step 2: Determine the matrices C^(1) and B^(1), so that
            c_ij^(1) := min {c_ij^(r); c_il^(r) + c_lj^(s)}  over all detour-nodes n_l,
        where the elements are computed row by row in the sequence c_11^(1), c_12^(1), ..., c_nn^(1) and
            r := 0, if l > i;  r := 1, if l < i;
            s := 0, if l > j;  s := 1, if l < j;
        i.e. the upper indices r and s indicate that the elements c_ij^(1) which have already been computed are the ones used in determining the later elements.
Step 3: Compute the matrix C^(2). The elements of this matrix are determined in the reverse sequence
            c_nn^(2), c_n,n-1^(2), ..., c_n1^(2), c_n-1,n^(2), ..., c_11^(2) ,
        so that
            c_ij^(2) := min {c_ij^(r); c_il^(r) + c_lj^(s)} , where
            r := 1, if l > i;  r := 2, if l < i;
            s := 1, if l > j;  s := 2, if l < j.
Result: The matrix C^(2) includes the lengths of the shortest paths between all nodes n_i and n_j.
Note 1: Determining the matrix C^(2), the maximal number of elementary operations is approximately 4·n³.
Note 2: If the original distance matrix C^(0) is symmetric, then C^(1) and C^(2) are symmetric too.
Note 3: If a topologically sorted graph without cycles is given, then C^(2) = C^(1).
Example
Given the following distance matrix C^(0), determine the total-distance matrix. For the matrix of the original example one obtains C^(1) = C^(2).
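The triple operation c_ij := min{c_ij, c_il + c_lj} over every detour-node is the same operation the classical single-sweep variant (Floyd-Warshall) performs; a minimal sketch with a small assumed matrix:

```python
import math

# All-pairs shortest paths via the triple operation: for each detour-node
# n_l, try to improve every entry c_ij through the detour i -> l -> j.
def tripel(C):
    n = len(C)
    D = [row[:] for row in C]
    for l in range(n):                 # detour-node n_l
        for i in range(n):
            for j in range(n):
                if D[i][l] + D[l][j] < D[i][j]:
                    D[i][j] = D[i][l] + D[l][j]
    return D

INF = math.inf
C0 = [[0, 1, INF],
      [INF, 0, 2],
      [4, INF, 0]]
# tripel(C0) == [[0, 1, 3], [6, 0, 2], [4, 5, 0]]
```

The book's two-sweep element ordering reduces the number of passes; the result is the same total-distance matrix.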
3.1.8 The Algorithm of LITTLE
Hypotheses
Given a strongly connected simple graph 𝒩 and the matrix C consisting of the arc-values c_ij ≥ 0 ∀ i,j = 1,...,n, representing the distances (costs) between the nodes (c_ii = ∞ ∀ i = 1,...,n), determine a cost-minimal tour which includes all nodes only once (Hamiltonian circuit).
Principle
In this algorithm pairs of node-indices (direct connections) are successively determined so that the minimal detour not using the corresponding arc is as long as possible. Allowing no subtours, further index-pairs are determined, beginning with the partial path with minimal costs, until a sequence of index-pairs is found which forms a Hamiltonian circuit and has minimal cost.
Description
Step 1: Use FLOOD's Technique on the matrix C.
        Result: A matrix C' and the reduction-constant r_0. The index-set Q is defined as Q := ∅; furthermore let R := r_0.
Step 2: For every element c_ij = 0 assign an index u, so that
            u := min {c_ik | k ≠ j} + min {c_lj | l ≠ i} .
Step 3: Determine the element c_rs, so that u_rs = max {u}.
Step 4: (inclusion of (r,s)) Set Q := Q ∪ {r}, where Q is the current index-set. Eliminate row r and column s and set up the matrix C_1, so that c_sr := ∞ and all other elements c_ij remain unchanged.
Step 5: Use FLOOD's Technique on the matrix C_1.
        Result: A matrix C̄ and the reduction-constant r_1.
        Set R := R + r_1; eliminate all assigned indices u.
Step 6: ((r,s) is not included) Set c_rs := ∞ in the matrix C from step 1 and then use FLOOD's Technique on the matrix C.
        Result: A matrix C̃ and the reduction-constant r_2. Set R̃ := R + r_2.
        Result: Two more end-nodes of the solution tree with the values R and R̃.
Step 7: Select that node from all end-nodes of the solution tree which has the minimal value (either R or R̃). For the index-set Q at this node, is |Q| = (n - 1) ?
        If yes: Stop, the optimal tour has been found; the n-th assignment is uniquely determined by the current matrix C.
        If no : Let the appropriate matrix C̄ or C̃ be the new matrix C, and R be the current node-value. Go to step 2.
Note 1: If a maximum problem is given, then the matrix C has to be transformed using the "constant" matrix as explained in 1.2.2. The algorithm then begins with the transformed matrix.
Note 2: The algorithm is also applicable to problems concerning the working-sequence of products on machines, assuming cyclic production.
Example
Given the matrix C; it describes completely the corresponding graph 𝒩:

     to    1    2    3    4    5
from
  1        ∞    3    9    8    2
  2        2    ∞    9    4    5
  3        1    7    ∞    9    5
  4        3    5    6    ∞    4
  5        1    6    8    4    ∞

FLOOD's Technique yields the reduction-constant r_0 = 15, so R := r_0 = 15. The zero element with the maximal index u is c_rs = c_31, so the pair (3,1) is examined first.
stage 1: (3,1) included: Q = {3}; r_1 = 1; R = 16.
         (3,1) excluded: Q = ∅;   r_2 = 4; R̃ = 19.
stage 2: (4,3) included: Q = {3;4}; R = 16; next element c_rs = c_12 = 0.
         (4,3) excluded: Q = {3};   r_2 = 4; R̃ = 20.
stage 3: (1,2) included: Q = {1;3;4}; r_1 = 3; R = 19; next element c_rs = c_25.
         (1,2) excluded: Q = {3;4};   r_2 = 3; R̃ = 19.
stage 4: continuing at the end-node with R̃ = 19 from stage 3 yields r_1 = 7 and R = 26; therefore the end-node with R = 19 from stage 3 is selected instead.
stage 5: (2,5) included: Q = {1;2;3;4}; R = 19 - optimal!
         (2,5) excluded: r_2 = ∞; R̃ = ∞.
The optimal tour is given by (3,1);(4,3);(1,2);(2,5);(5,4), i.e. n1 → n2 → n5 → n4 → n3 → n1. The associated cost is 19 m.u.
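FLOOD's Technique of step 1 (subtract the row minima, then the column minima) can be sketched directly; the matrix is the one of the example, and the sum of all subtracted values is the reduction-constant r_0, a lower bound on every tour cost:

```python
import math

# Row reduction followed by column reduction; returns the reduced matrix
# and the reduction-constant r0.
def flood(C):
    n = len(C)
    C = [row[:] for row in C]
    r0 = 0
    for i in range(n):                                 # row reduction
        m = min(C[i])
        r0 += m
        C[i] = [c - m for c in C[i]]
    for j in range(n):                                 # column reduction
        m = min(C[i][j] for i in range(n))
        r0 += m
        for i in range(n):
            C[i][j] -= m
    return C, r0

INF = math.inf
C = [[INF, 3, 9, 8, 2],
     [2, INF, 9, 4, 5],
     [1, 7, INF, 9, 5],
     [3, 5, 6, INF, 4],
     [1, 6, 8, 4, INF]]
Cr, r0 = flood(C)
# r0 == 15, the root value R := r0 of the solution tree in the example
```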
3.1.9 The Method of EASTMAN
Hypotheses
See 3.1.8. Set the running index t := 1 and furthermore let π := ∞.
Principle
First a set of assignments is determined. If at least two subtours are produced, successively eliminate those index-pairs belonging to a subtour and determine new assignments. The method terminates when a tour is produced, using all nodes only once.
Description
Step 1: Completely solve the problem with the Hungarian Method.
        Result: An assignment z with assignment costs π.
Step 2: Does the assignment z form a tour, in which all n nodes are included ?
        If yes: Stop, z is an optimal tour with costs π.
        If no : Go to step 3.
Step 3: At this point there are k subtours S_i, i = 1,...,k. Each subtour S_i can be represented by an ordered set of r_i index-pairs:
            S_i = ((j_1,j_2),(j_2,j_3),...,(j_{r_i},j_1)) .
        Set up the problems
            z_il^(t) := z - {(j_l,j_{l+1})}  ∀ i = 1,...,k; l = 1,...,r_i; where (r_i + 1) := 1,
        with the appropriate matrices C_il^(t), setting the element c_{j_l,j_{l+1}} := ∞ in the current matrix C.
Step 4: Use the Hungarian Method on the matrices C_il^(t).
        Result: Assignments z_il^(t) with assignment costs π_il^(t).
Step 5: Define M := {z_il^(τ) | z_il^(τ) forms a tour with all n nodes, τ = 1,...,t}.
Step 6: Is M = ∅ ?
        If yes: Go to step 8.
        If no : Go to step 7.
Step 7: Determine π* := min {π_il^(τ) | z_il^(τ) ∈ M} and set z* accordingly. If π* is minimal among the costs of all end-nodes of the solution tree: Stop, z* is an optimal tour with costs π*. Otherwise go to step 8.
Step 8: Select the non-tour assignment z_gh^(τ) with minimal costs, set t := t + 1 and form its subproblems as in step 3, setting the appropriate element c_{j_l,j_{l+1}} := ∞ in the current matrix C_gh^(t-1). Go to step 4.
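Step 3 needs the subtours of an assignment; a small sketch that decomposes an assignment, viewed as a successor map, into its cycles:

```python
# Decompose a permutation (successor map) into its cycles: each cycle is
# one subtour S_i of the assignment.
def subtours(succ):
    seen, cycles = set(), []
    for start in succ:
        if start in seen:
            continue
        cycle, j = [], start
        while j not in seen:
            seen.add(j)
            cycle.append(j)
            j = succ[j]
        cycles.append(cycle)
    return cycles

z = {1: 2, 2: 1, 3: 4, 4: 3}          # the first assignment of the example
# subtours(z) == [[1, 2], [3, 4]] -> the two subtours S_1 and S_2
```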
Example
Given the matrix C; it describes completely the corresponding graph 𝒩:

     to    1    2    3    4
from
  1        ∞    1    5    6
  2        2    ∞    6    4
  3        4    7    ∞    3
  4        5    5    1    ∞

t = 1; with the Hungarian Method the following assignment is found:
    z = ((1,2),(2,1),(3,4),(4,3));  π = 7;
it contains two subtours: S_1 = ((1,2),(2,1)); S_2 = ((3,4),(4,3));
    z_11^(1) = z - {(1,2)};  z_12^(1) = z - {(2,1)};  z_21^(1) = z - {(3,4)};  z_22^(1) = z - {(4,3)}.
The matrices C_11^(1),...,C_22^(1) arise from the current matrix C by setting c_12 := ∞, c_21 := ∞, c_34 := ∞ and c_43 := ∞, respectively. The Hungarian Method yields:
    z_11^(1) = ((1,3),(3,4),(4,2),(2,1));  π_11^(1) = 15;
    z_12^(1) = ((1,2),(2,4),(4,3),(3,1));  π_12^(1) = 10;
    z_21^(1) = ((1,2),(2,4),(4,3),(3,1));  π_21^(1) = 10;
    z_22^(1) = ((1,2),(2,3),(3,4),(4,1));  π_22^(1) = 15;
M = {z_12^(1); z_21^(1)};  π_12^(1) = π_21^(1) = 10;  π* = min {15; 10; 10; 15} = 10;
→ Stop, z* = ((1,2),(2,4),(4,3),(3,1)) is an optimal tour with costs π* = 10 m.u.
3.2 Flows in Networks
3.2.1 The Algorithm of FORD and FULKERSON
Hypotheses
Given a simple graph 𝒩 = (G,K), find the maximal flow from the source n1 to the sink n_n. The arc values k_ij ∈ K define the capacities of the arcs a_ij ∈ A. Let x_ij be the flow in arc a_ij.
The problem can be stated as one in Linear Programming as follows:

    max f = Σ_j x_1j
    Σ_j x_ij - Σ_j x_ji = 0   ∀ i = 2,...,n-1
    Σ_i x_in - Σ_j x_1j = 0
    x_ij ≤ k_ij               ∀ i,j
    x_ij ≥ 0                  ∀ i,j

Initially set x = (x_ij) = (0,...,0); furthermore let f* := 0.
Principle
The algorithm determines in each iteration a set of paths from source to sink and the corresponding capacity. Superimposing the flows over the various paths, a flow pattern with maximal flow is determined.
Description
Step 1: Define T := ∅; R⁺ := R⁻ := ∅; P := {n1}; M(n1) := (n1,∞).
Step 2: Define I_P := {i | n_i ∈ P}, determine r := max {i | (i ∈ I_P) ∧ (i ∉ T)} and set T := T ∪ {r}.
Step 3: (labelling phase) Determine P⁺ := {n_j | (n_j ∉ P) ∧ (k_rj > x_rj)} and set
            M(n_j) := (n_r,ε_j)  ∀ n_j ∈ P⁺;  R⁺ := R⁺ ∪ {a_rj | n_j ∈ P⁺},
        where ε_j := min {ε_r; k_rj - x_rj}.
Step 4: Determine P⁻ := {n_j | (n_j ∉ P) ∧ (x_jr > 0)} and set
            M(n_j) := (n_r,ε_j)  ∀ n_j ∈ P⁻;  R⁻ := R⁻ ∪ {a_jr | n_j ∈ P⁻},
        where ε_j := min {ε_r; x_jr}.
Step 5: Is (P⁺ ∪ P⁻ = ∅) ∧ (|T| = |P|) ?
        If yes: Stop, x = (x_ij) is the maximal flow in 𝒩, its value is f_max(𝒩) := f*.
        If no : Set P := P ∪ P⁺ ∪ P⁻. Go to step 6.
Step 6: Is n_n ∈ P ?
        If yes: Go to step 7.
        If no : Go to step 2.
Step 7: Beginning at node n_n, determine a path p = (n1,...,n_i,n_j,...,n_n) by retracing the labels M(n_j).
Step 8: (flow-alteration phase) Set ε := ε_n and determine
            x_ij := x_ij + ε , if (a_ij ∈ p) ∧ (a_ij ∈ R⁺);
            x_ij := x_ij - ε , if (a_ij ∈ p) ∧ (a_ij ∈ R⁻);
            x_ij unchanged   , otherwise;
        f* := f* + ε. Eliminate all labels M(n_j). Go to step 1.
Example
Given the following network 𝒩, find the maximal flow in 𝒩.

f* = 0; R⁺ = R⁻ = ∅; P = {n1}; M(n1) = (n1,∞); I_P = {1}; r = 1; T = {1};
P⁺ = {n2;n3;n4}; M(n2) = (n1,3); M(n3) = (n1,4); M(n4) = (n1,4);
R⁺ = {a12;a13;a14}; P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3;n4};
n5 ∉ P; I_P = {1;2;3;4}; r = 4; T = {1;4}; P⁺ = {n5}; M(n5) = (n4,4);
R⁺ = {a12;a13;a14;a45}; P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3;n4;n5};
n5 ∈ P; p = (n1;n4;n5); ε = ε_5 = 4; x14 = x45 = 4; f* = 4;

T = ∅; R⁺ = R⁻ = ∅; P = {n1}; M(n1) = (n1,∞); I_P = {1}; r = 1; T = {1};
P⁺ = {n2;n3}; M(n2) = (n1,3); M(n3) = (n1,4); R⁺ = {a12;a13}; P⁻ = ∅; R⁻ = ∅;
(P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3}; n5 ∉ P; I_P = {1;2;3}; r = 3; T = {1;3};
P⁺ = {n4;n5}; M(n4) = (n3,4); M(n5) = (n3,3); R⁺ = {a12;a13;a34;a35};
P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3;n4;n5};
n5 ∈ P; p = (n1;n3;n5); ε = ε_5 = 3; x13 = 3; x35 = 3; f* = 7;

T = ∅; R⁺ = R⁻ = ∅; P = {n1}; M(n1) = (n1,∞); I_P = {1}; r = 1; T = {1};
P⁺ = {n2}; M(n2) = (n1,3); R⁺ = {a12}; P⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2};
n5 ∉ P; I_P = {1;2}; r = 2; T = {1;2}; P⁺ = {n5}; M(n5) = (n2,3);
R⁺ = {a12;a25}; P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n5};
n5 ∈ P; p = (n1;n2;n5); ε = ε_5 = 3; x12 = 3; x25 = 3; f* = 10;

T = ∅; R⁺ = R⁻ = ∅; P = {n1}; M(n1) = (n1,∞); I_P = {1}; r = 1; T = {1};
P⁺ = {n3}; M(n3) = (n1,1); R⁺ = {a13}; P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ →
P = {n1;n3}; n5 ∉ P; I_P = {1;3}; r = 3; T = {1;3}; P⁺ = {n2;n4};
M(n2) = (n3,1); M(n4) = (n3,1); R⁺ = {a13;a32;a34}; P⁻ = ∅; R⁻ = ∅;
(P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3;n4}; n5 ∉ P; I_P = {1;2;3;4}; r = 4;
T = {1;3;4}; P⁺ = {n5}; M(n5) = (n4,1); R⁺ = {a13;a32;a34;a45};
P⁻ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻) ≠ ∅ → P = {n1;n2;n3;n4;n5}; n5 ∈ P;
p = (n1;n3;n4;n5); ε = 1; x13 = 4; x34 = 1; x45 = 5; f* = 11;

T = ∅; R⁺ = R⁻ = ∅; P = {n1}; M(n1) = (n1,∞); I_P = {1}; r = 1; T = {1};
P⁺ = ∅; R⁺ = ∅; R⁻ = ∅; (P⁺ ∪ P⁻ = ∅) ∧ (|T| = |P|) → Stop,
f_max(𝒩) = f* = 11 is the value of the maximal flow in 𝒩; the corresponding flow pattern is shown above.
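The labelling and flow-alteration phases are in essence a search for augmenting paths. Below is a breadth-first sketch (not the book's exact node-selection rule); the capacities are reconstructed from the example trace, with k32 and k34 as assumptions:

```python
from collections import deque

# Augmenting-path maximum flow: a breadth-first labelling phase finds a
# source-sink path in the residual network, the flow-alteration phase
# pushes eps along it; repeat until no path is left.
def max_flow(n, cap, s, t):
    flow = {a: 0 for a in cap}
    f = 0
    while True:
        pred = {s: None}
        queue = deque([s])
        while queue and t not in pred:           # labelling phase
            u = queue.popleft()
            for (a, b), k in cap.items():
                if a == u and b not in pred and flow[(a, b)] < k:
                    pred[b] = (a, b, +1); queue.append(b)   # forward arc
                elif b == u and a not in pred and flow[(a, b)] > 0:
                    pred[a] = (a, b, -1); queue.append(a)   # backward arc
        if t not in pred:
            return f, flow                       # no path left -> maximal
        path, v = [], t                          # retrace the labels
        while pred[v] is not None:
            a, b, sg = pred[v]
            path.append((a, b, sg))
            v = a if sg == +1 else b
        eps = min(cap[(a, b)] - flow[(a, b)] if sg == +1 else flow[(a, b)]
                  for a, b, sg in path)
        for a, b, sg in path:                    # flow-alteration phase
            flow[(a, b)] += sg * eps
        f += eps

cap = {(1, 2): 3, (1, 3): 4, (1, 4): 4, (3, 2): 2,
       (3, 4): 2, (2, 5): 3, (3, 5): 3, (4, 5): 5}
f, _ = max_flow(5, cap, 1, 5)
# f == 11, the value f_max of the example
```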
3.2.2 The Algorithm of BUSACKER and GOWEN
Hypotheses
Given a simple graph 𝒩 = (G,K,C), find a minimal-cost flow from the source n1 to the sink n_n with a given flow-value f. The values k_ij ∈ K define the arc-capacities and the values c_ij ∈ C are the transportation costs per unit from n_i to n_j ∀ a_ij ∈ A. Let x_ij be the flow in arc a_ij. The problem can be stated as one in Linear Programming as follows:

    min π = Σ_ij c_ij·x_ij
    Σ_j x_ij - Σ_j x_ji = 0   ∀ i = 2,...,n-1
    Σ_j x_1j = f
    x_ij ≤ k_ij               ∀ i,j
    x_ij ≥ 0                  ∀ i,j

Initially set x = (x_ij) = (0,...,0); furthermore let f* := 0.
Principle
In the algorithm a sequence of incremental graphs is constructed in which the shortest paths are determined. For each path the maximum feasible flow is determined. Superimposing the appropriate flow over the corresponding path yields the required flow. The algorithm terminates when the required flow-value is reached or when no feasible flows exist.
Description
Step 1: Construct an incremental graph I(𝒩) = (N,A*,γ), which is associated to the current flow pattern x, so that A* := A' ∪ A'', where
            a_ij ∈ A'  with γ_ij := c_ij ,  if x_ij < k_ij ;
            a_ij ∈ A'' with γ_ij := -c_ji , if x_ji > 0 .
Step 2: Determine a shortest path p from n1 to n_n in I(𝒩) with respect to the cost γ.
Step 3: Is there such a shortest path p in I(𝒩) ?
        If yes: Go to step 4.
        If no : Stop, a minimal-cost flow with the given flow-value f does not exist.
Step 4: Let δ' := f - f* and let δ'' be the minimal capacity still available on the arcs of p. Set ε := min {δ'; δ''} and alter the flow along p by ε; f* := f* + ε. If f* = f: Stop, x is the required flow. Otherwise go to step 1.
M(n1) = (n1,0); M(n2) = M(n3) = M(n4) = (·,∞); P = {n1}; T = ∅; I_P = {1}; s = 1; T = {1};
M(n1) = (n1,0); M(n2) = (n1,3); M(n3) = (n1,2); M(n4) = (n4,∞); P = {n1;n2;n3}; s = 3; T = {1;3};
M(n2) = (n3,-2); M(n4) = (n3,7); P = {n1;n2;n3;n4}; s = 2; T = {1;2;3};
M(n1) = (n2,-5); M(n4) = (n3,7); s = 4; T = {1;2;3;4};
p = (n1;n3;n2;n4); δ' = 2; δ'' = min {1;4} = 1; ε = min {2;1} = 1.
The resulting cost matrix of the incremental graph,

     0    3    2    7
    -3    0   -1    4
    -2    1    0    5
    -7   -4   -5    0 ,

contains no cycle of negative length → Stop, x = (x_ij) is a minimal-cost flow with the value f = 12. The costs are: π = Σ c_ij·x_ij = 63 m.u.
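The incremental-graph idea can be sketched as successive shortest paths; because backward arcs carry negative costs, the shortest path must be found with a Bellman-Ford-style relaxation. The small network below is an illustrative assumption, not the book's example:

```python
import math

# Minimal-cost flow by successive shortest paths in the residual network.
def min_cost_flow(n, cap, cost, s, t, f_req):
    flow = {a: 0 for a in cap}
    f = 0
    while f < f_req:
        dist = {v: math.inf for v in range(1, n + 1)}
        dist[s], pred = 0, {}
        for _ in range(n - 1):                   # Bellman-Ford relaxation
            for (a, b), k in cap.items():
                if flow[(a, b)] < k and dist[a] + cost[(a, b)] < dist[b]:
                    dist[b] = dist[a] + cost[(a, b)]; pred[b] = (a, b, +1)
                if flow[(a, b)] > 0 and dist[b] - cost[(a, b)] < dist[a]:
                    dist[a] = dist[b] - cost[(a, b)]; pred[a] = (a, b, -1)
        if dist[t] == math.inf:
            return None                          # required flow infeasible
        path, v = [], t                          # retrace the path
        while v != s:
            a, b, sg = pred[v]
            path.append((a, b, sg))
            v = a if sg == +1 else b
        # push eps = min(delta', delta'') along the path
        eps = min([f_req - f] + [cap[(a, b)] - flow[(a, b)] if sg == +1
                                 else flow[(a, b)] for a, b, sg in path])
        for a, b, sg in path:
            flow[(a, b)] += sg * eps
        f += eps
    pi = sum(cost[a] * x for a, x in flow.items())
    return f, pi

cap = {(1, 2): 2, (1, 3): 2, (2, 4): 2, (3, 4): 2}
cost = {(1, 2): 1, (1, 3): 2, (2, 4): 1, (3, 4): 1}
# min_cost_flow(4, cap, cost, 1, 4, 3) == (3, 7)
```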
3.2.4 The Out-of-Kilter Algorithm (Ford; Fulkerson)
Hypotheses
Given a simple graph 𝒩 = (G,K,C), determine a minimal-cost circulation. Establish an artificial arc a_n1 with c_n1 = 0, so that x_n1 = 0 (or set x_n1 = k_n1 = f, if a flow value f is given). Let x_ij be the flow in arc a_ij.
The problem can be stated as one in Linear Programming as follows:

    min π = c·x
    I*(G)·x = 0
    x ≤ K
    x ≥ 0

Let u_i ∈ Z be the values of the nodes n_i, i = 1,...,n. Initially let u_i = 0 ∀ i = 1,...,n; x := (x_ij) := (0,...,0); f* := 0 and set the running index t := 1.
Note: The algorithm may also start with an arbitrary u_i ∈ Z and an arbitrary circulation x with the value f < ∞.
Principle
The algorithm determines tours using a labelling procedure, beginning with any circulation. If a tour exists, the flow is altered; if not, the "dual variables" (node-values) are altered, and a test for optimality according to the complementary-slackness theorem is performed.
Description
Step 1: Set up the network 𝒩^(t) = (G,x,d,k,u), where
            d_ij := c_ij + u_i - u_j  ∀ a_ij ∈ A .
Step 2: An arc a_ij is "in kilter", if
            (d_ij > 0 ∧ x_ij = 0) ∨ (d_ij = 0 ∧ 0 ≤ x_ij ≤ k_ij) ∨ (d_ij < 0 ∧ x_ij = k_ij) .
        Are all arcs in kilter ?
        If yes: Stop, x is a minimal-cost circulation.
        If no : Select an out-of-kilter arc a_qp and label a path from n_p to n_q, beginning with M(n_p) := (n_p,ε_p).
Step 3: (labelling phase) A labelled node n_s may pass a label M(n_j) := (n_s,ε_j) to an unlabelled node n_j via
            the arc a_sj, if (d_sj > 0) ∧ (x_sj < 0):      ε_j := min {ε_s; -x_sj};        R⁺ := R⁺ ∪ {a_sj};
            the arc a_sj, if (d_sj ≤ 0) ∧ (x_sj < k_sj):   ε_j := min {ε_s; k_sj - x_sj};  R⁺ := R⁺ ∪ {a_sj};
            the arc a_js, if (d_js ≥ 0) ∧ (x_js > 0):      ε_j := min {ε_s; x_js};         R⁻ := R⁻ ∪ {a_js};
            the arc a_js, if (d_js < 0) ∧ (x_js > k_js):   ε_j := min {ε_s; x_js - k_js};  R⁻ := R⁻ ∪ {a_js}.
Step 4: (breakthrough) Is n_q labelled ?
        If yes: A flow-augmenting cycle has been found; alter the flow by ε := ε_q (+ε on the arcs in R⁺, -ε on the arcs in R⁻) and go to step 1.
        If no : If a further node can be labelled, repeat step 3; otherwise go to step 5.
Step 5: (node-value alteration) Let P be the set of labelled nodes and define
            R_1 := {a_ij | (n_i ∈ P) ∧ (n_j ∉ P) ∧ (d_ij > 0) ∧ (x_ij ≤ k_ij)};
            R_2 := {a_ij | (n_i ∉ P) ∧ (n_j ∈ P) ∧ (d_ij < 0) ∧ (x_ij ≥ 0)};
            δ := min {min {d_ij | a_ij ∈ R_1}; min {-d_ij | a_ij ∈ R_2}} .
        Set u_i := u_i + δ ∀ n_i ∉ P; t := t + 1. Go to step 1.
so that two of those subgraphs are connected. Include this arc in the current subgraph. Set A' := A' ∪ {a_rs}; go to step 6.
Example
Given the following graph 𝒩, find the shortest spanning subtree 𝒩'.

A' = ∅; k = 1; a_kl = a12 (= a21); A' = {a12}; k = 2;
a_kl = a32; A' = {a12;a32}; k = 3;
a_kl = a45 (= a54); A' = {a12;a32;a45}; k = 5;
k = n = 5; |A'| ≠ 4; a_rs = a25 is the arc with the least value c_rs connecting two of the subgraphs;
A' = {a12;a25;a32;a45}; |A'| = 4;
→ Stop, 𝒩' has been found, its value is Σ_{a_ij ∈ A'} c_ij = 8 m.u.

3.3.3 The Method of WOOLSEY
Hypotheses
See 3.3.1. Define N̄ := A' := ∅; set the running index t := 1.
Principle
See 3.3.1.
Description
Step 1: Determine c_kl = min {c_ij} and connect the nodes n_k and n_l. Set N̄ := N̄ ∪ {n_k;n_l}; A' := A' ∪ {a_kl}.
Step 2: Determine c_kl := min {c_ij | (n_i ∈ N̄) ∧ (n_j ∉ N̄)} and connect the nodes n_k and n_l. Set N̄ := N̄ ∪ {n_l}; A' := A' ∪ {a_kl}; t := t + 1.
Step 3: Is t = (n - 1) ?
        If yes: Stop, the shortest spanning subtree 𝒩' has been found, its value is given by Σ_{a_ij ∈ A'} c_ij.
        If no : Go to step 2.
Example
Given the following graph 𝒩, find the shortest spanning subtree 𝒩'.

c_kl = c12; N̄ = {n1;n2}; A' = {a12}; t = 1;
c_kl = c13; N̄ = {n1;n2;n3}; A' = {a12;a13}; t = 2;
c_kl = c25; N̄ = {n1;n2;n3;n5}; A' = {a12;a13;a25}; t = 3;
c_kl = c45; N̄ = {n1;n2;n3;n4;n5}; A' = {a12;a13;a25;a45}; t = (n - 1) = 4;
→ Stop, 𝒩' has been found, its value is Σ_{a_ij ∈ A'} c_ij = 8 m.u.

3.3.4 The Method of BERGE
Hypotheses
See 3.3.1. Define Ã := A' := A and set the running index t := |A|.
Principle
Cost-maximal arcs are eliminated from the given graph as long as the subgraph remains connected.
Description
Step 1: Determine c_kl := max {c_ij | a_ij ∈ Ã}.
Step 2: Is it possible to eliminate the arc a_kl from the current subgraph without destroying its connectivity ?
        If yes: Go to step 3.
        If no : Set Ã := Ã - {a_kl}. Go to step 1.
Step 3: Eliminate the arc a_kl from the current subgraph. Set Ã := Ã - {a_kl}; A' := A' - {a_kl}; t := t - 1.
Step 4: Is t = (n - 1) ?
        If yes: Stop, the shortest spanning subtree has been found, its value is given by Σ_{a_ij ∈ A'} c_ij.
        If no : Go to step 1.
Example
Given the following graph 𝒩, find the shortest spanning subtree 𝒩'.

t = 8; Ã = A' = {a12;a13;a23;a24;a25;a34;a35;a45};
c_kl = c35: eliminate a35; t = 7;
successively the remaining cost-maximal arcs are eliminated in the same way. The arc a25 may not be eliminated, because the resulting subgraph would not be connected: Ã := Ã - {a25}. The eliminations continue until t = (n - 1) = 4;
→ Stop, 𝒩' has been found, its value is Σ_{a_ij ∈ A'} c_ij = 8 m.u.
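The growing-tree rule of 3.3.3, steps 1-2 (always add the cheapest arc leaving the connected node set N̄) can be sketched as follows. The edge values are assumptions chosen so that the tree value of 8 m.u. from the examples is reproduced:

```python
import math

# Grow a spanning tree from one node by always adding the cheapest edge
# that leaves the set of already connected nodes (Prim's rule).
def woolsey_mst(n, c):
    def w(i, j):                       # symmetric edge-value lookup
        return c.get((i, j), c.get((j, i), math.inf))
    inside = {min(min(e) for e in c)}  # start at the smallest node index
    tree, value = [], 0
    while len(inside) < n:
        k, l = min(((i, j) for i in inside for j in range(1, n + 1)
                    if j not in inside), key=lambda e: w(*e))
        tree.append((k, l)); value += w(k, l); inside.add(l)
    return tree, value

c = {(1, 2): 1, (1, 3): 2, (2, 3): 2, (2, 4): 4,
     (2, 5): 2, (3, 4): 5, (3, 5): 4, (4, 5): 3}
tree, value = woolsey_mst(5, c)
# value == 8 m.u., as in the examples; ties may yield a different tree of
# the same value, which is why 3.3.2 and 3.3.3 produce different A' sets.
```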
3.4 Gozinto Graphs
Hypotheses
Given a simple, valued graph 𝒩 = (N,A,c) without cycles so that (n_i,n_j) ∈ A → i < j (if necessary this can be reached by determining the rank and topological sorting), in which the arc values c_ij ∈ ℕ indicate how many units of the i-th product (represented by the node n_i) are necessary to produce one unit of the j-th product (represented by the node n_j), determine the vector z_s [n×1] of the total requirement of each product, which is necessary to satisfy a given demand vector z_0 [n×1] ∈ ℕ_0^n, where the i-th component of z_0 represents the demand for the i-th product in q.u. Let the matrix C [n×n] be the direct demand-matrix (also called the "next assembly quantity matrix"), so that
    c_ij := c_ij , if a_ij ∈ A ;
    c_ij := 0 ,    if a_ij ∉ A .
3.4.1 The Method of VAZSONYI
Description
Step 1: Formulate the matrix
            C̃ [(n+1)×(n+1)] = ( C  z_0 )
                              ( 0   0  ) .
Step 2: Determine (with an appropriate method (Gauss-Jordan; Gauss)) the matrix D := (I - C̃)^(-1).
Step 3: Compute the matrix G̃ := D - I.
        Stop, the first n components of the (n+1)-th column vector of G̃ form the vector z_s, the total requirements-vector.
Example
Given the following Gozinto-graph 𝒩 and the demand vector z_0ᵀ = (0; 80; 0; 490; 720), find the vector z_s.

D = (I - C̃)^(-1) =

    1   2   10   13   132   101,570
    0   1    3    4    38    29,400
    0   0    1    0    10     7,200
    0   0    0    1     2     1,930
    0   0    0    0     1       720
    0   0    0    0     0         1

G̃ = D - I =

    0   2   10   13   132   101,570
    0   0    3    4    38    29,400
    0   0    0    0    10     7,200
    0   0    0    0     2     1,930
    0   0    0    0     0       720
    0   0    0    0     0         0

The total requirements-vector z_s is:
    z_sᵀ = (101,570; 29,400; 7,200; 1,930; 720) .
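Because the Gozinto graph has no cycles, (I - C)^(-1)·z_0 equals the finite series z_0 + C·z_0 + C²·z_0 + ..., so the inverse of step 2 is not needed explicitly. A sketch; the matrix C is a reconstruction recovered from D = (I - C̃)^(-1) of the example and is an assumption:

```python
# Total requirements z_s = (I - C)^(-1) z0 = sum of C^r z0 over r >= 0;
# the series breaks off because C is nilpotent (no cycles).
def total_requirements(C, z0):
    z_s, z = list(z0), list(z0)
    while any(z):
        z = [sum(C[i][j] * z[j] for j in range(len(z))) for i in range(len(z))]
        z_s = [a + b for a, b in zip(z_s, z)]
    return z_s

C = [[0, 2, 4, 5, 6],
     [0, 0, 3, 4, 0],
     [0, 0, 0, 0, 10],
     [0, 0, 0, 0, 2],
     [0, 0, 0, 0, 0]]
z0 = [0, 80, 0, 490, 720]
# total_requirements(C, z0) == [101570, 29400, 7200, 1930, 720]
```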
3.4.2 The Method of TISCHER
Hypotheses
Set the running index k := 1.
Description
Step 1: Determine the vector z_k, so that z_k = C·z_{k-1}.
Step 2: Is z_k = z_{k-1} ?
        If yes: Go to step 3.
        If no : Set k := k + 1. Go to step 1.
Step 3: Determine z_s, the total requirements-vector, so that
            z_s = Σ_{r=0}^{k-1} z_r .
Example
Given the following Gozinto-graph 𝒩 and the demand-vector z_0ᵀ = (0; 80; 0; 490; 720):
z_1ᵀ = C·z_0 = (6,930; 1,960; 7,200; 1,440; 0)
z_2ᵀ = C·z_1 = (39,920; 27,360; 0; 0; 0)
z_3ᵀ = C·z_2 = (54,720; 0; 0; 0; 0)
z_4ᵀ = C·z_3 = (0; 0; 0; 0; 0)
z_5ᵀ = C·z_4 = (0; 0; 0; 0; 0)
z_5 = z_4 ;  z_s = Σ_{r=0}^{4} z_r ;  z_sᵀ = (101,570; 29,400; 7,200; 1,930; 720) .
3.4.3 The Method of FLOYD
Hypotheses
Set the running index k := 1.
Description
Step 1: For every node n_i define the counter d_0(n_i), the number of direct successors of n_i, and the value S_0(n_i) := z_0i (less the on-hand balance L_i, if one is given). Mark every node n_i with d_0(n_i) = 0 with (*).
Step 2: Select a node n_j(k) with d_{k-1}(n_j(k)) = 0.
Step 3: For every direct predecessor n_i of n_j(k) compute
            S_k(n_i) := S_{k-1}(n_i) + c_{i,j(k)}·S_{k-1}(n_j(k)) ;
            d_k(n_i) := d_{k-1}(n_i) - 1 ;
        all other values remain unchanged. Mark n_j(k) with (*).
Step 4: Is d_k(n_i) = 0 ∨ (*) ∀ n_i ?
        If yes: Stop, z_s := (S_k(n_i)) is the total requirements-vector (the vector of shortages).
        If no : Set k := k + 1. Go to step 2.
Example
Given the following Gozinto-graph 𝒩 and the demand-vector z_0ᵀ = (0; 80; 0; 490; 720); in this case the on-hand balance of each product is zero, i.e. Lᵀ = (0;...;0). Find the vector z_s.

          d_0  S_0   |  d_1  S_1    |  d_2  S_2     |  d_3  S_3     |  d_4  S_4
                     |  j(1) = 5    |  j(2) = 4     |  j(3) = 3     |  j(4) = 2
    n1    4    0     |  3    4,320  |  2    13,970  |  1    42,770  |  0    101,570
    n2    2    80    |  2    80     |  1    7,800   |  0    29,400  |  (*)  29,400
    n3    1    0     |  0    7,200  |  0    7,200   |  (*)  7,200   |  (*)  7,200
    n4    1    490   |  0    1,930  |  (*)  1,930   |  (*)  1,930   |  (*)  1,930
    n5    0    720   |  (*)  720    |  (*)  720     |  (*)  720     |  (*)  720

d_4(n_i) = 0 ∨ (*) ∀ n_i → the vector of shortages is:
    z_sᵀ = (101,570; 29,400; 7,200; 1,930; 720) .
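The bookkeeping of the table above can be sketched as follows: repeatedly pick a product whose successor counter has reached 0, add c_ij·S(n_j) to every direct predecessor and decrease its counter. The data are the same (reconstructed) C and z_0 as before:

```python
# Successor-counting total-requirements computation in the style of 3.4.3.
def gozinto_requirements(C, z0):
    n = len(z0)
    S = list(z0)
    d = [sum(1 for j in range(n) if C[i][j] > 0) for i in range(n)]
    done = [False] * n
    while not all(done):
        j = next(i for i in range(n) if d[i] == 0 and not done[i])
        done[j] = True                       # mark n_j with (*)
        for i in range(n):
            if C[i][j] > 0:                  # n_i is a direct predecessor
                S[i] += C[i][j] * S[j]
                d[i] -= 1
    return S

C = [[0, 2, 4, 5, 6],
     [0, 0, 3, 4, 0],
     [0, 0, 0, 0, 10],
     [0, 0, 0, 0, 2],
     [0, 0, 0, 0, 0]]
z0 = [0, 80, 0, 490, 720]
# gozinto_requirements(C, z0) == [101570, 29400, 7200, 1930, 720]
```

The order in which zero-counter nodes are picked does not change the result, since a node's value is only propagated once it is final.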
4. Planning Networks

4.0.1 The Critical Path Method (CPM)
Hypotheses
Given a simple network 𝒩 = (N,A,d) without cycles, with one source n1 and one sink n_n, where the arcs represent activities, determine the critical path(s) cp and the project duration. (If necessary, add an artificial source and/or an artificial sink to the given network.) The arc-values d_ij ∈ ℕ denote the durations of the activities (n_i,n_j) ∈ A ∀ i,j = 1,...,n. If the information is given in a table like the one below, the corresponding network must first be constructed.

    Description of the activity | Number of the activity | Duration d_ij | Direct predecessor activity | Direct successor activity

Principle
In this method the longest paths from the source to the sink and, using the corresponding reverse arcs, the shortest paths from the sink to the source are determined. A "covering" of these paths yields the critical path(s) cp.
Description
Step 1: Determine the longest paths from the source n1 to all other nodes n_j, j ≠ 1, with the FORD Algorithm II.
        Result: The labels M(n_j) ∀ j = 1,...,n.
Step 2: Define ES(n_j) := M(n_j) ∀ j = 1,...,n; LS(n_n) := ES(n_n). Here ES(n_j) denotes the earliest possible time the result n_j can be obtained (the earliest event start time) and LS(n_j) denotes the latest acceptable time the result n_j can be obtained (the latest allowable event start time).
Step 3: Determine LS(n_j) := min {(LS(n_k) - d_jk) | (n_k ∈ N^s(n_j)) ∧ (∃ LS(n_k) ∀ n_k ∈ N^s(n_j))} ∀ j = 1,...,n-1.
Step 4: Define M := {n_j | ES(n_j) = LS(n_j)} and determine
            {cp} := {cp_l = (n1,...,n_i,n_j,...,n_n) | (n_i ∈ M) ∧ (n_j ∈ M) ∧ (ES(n_j) - ES(n_i) = d_ij) ∀ l = 1,...,q} .
        Here q ∈ ℕ critical paths cp exist. The project duration is ES(n_n) t.u.
Note: The slack of an activity (n_i,n_j) determines to what extent its beginning may be delayed without changing ES(n_n). We distinguish between the following types of slack:
 - total slack S_tot(n_i,n_j) := LS(n_j) - (ES(n_i) + d_ij); it denotes the maximal increase of d_ij, assuming that all other activities begin at the most favorable moments with respect to this activity.
 - free slack S_f(n_i,n_j) := ES(n_j) - (ES(n_i) + d_ij); it denotes the maximal increase of d_ij, assuming that all events n_k start at ES(n_k).
 - independent slack S_ind(n_i,n_j) := max {0; ES(n_j) - (LS(n_i) + d_ij)}; it denotes the maximal increase of d_ij, assuming that all other activities begin at the most unfavorable moments with respect to this activity.
 - interfering slack S_int(n_j) := LS(n_j) - ES(n_j); it denotes the interval in which the event n_j can happen.
Example
Project: Renovation of an apartment

    Description of the activity                  Number  d_ij  Direct predecessor  Direct successor
    clear out the apartment                      A       3     -                   C, D, E
    buy wall papers                              B       2     -                   F, G
    take down the curtains                       C       1     A                   F, G
    mix the paint                                D       6     A                   H
    lay out the rug                              E       9     A                   I
    wash the curtains                            F       4     B, C               H
    paper the apartment                          G       3     B, C               I
    paint the ceiling                            H       2     D, F               I
    clean the apartment; return the furniture    I       8     E, H, G            -

The corresponding network 𝒩 carries the labels ES(n_j)|LS(n_j) at every node; the sink carries 20|20.
M = {n1;n2;n5;n6}; {cp} = {cp_1}, where cp_1 = (n1,n2,n5,n6); i.e. the activities A, E and I form the only critical path cp. The project duration is ES(n6) = 20 t.u.
Determine the slack for the activity F:
    S_tot(F) = 10 - (4 + 4) = 2;   S_f(F) = 9 - (4 + 4) = 1;
    S_ind(F) = max {0; 9 - (6 + 4)} = 0;   S_int(n3) = 6 - 4 = 2.
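Steps 1-3 amount to one forward longest-path pass and one backward pass. The sketch below uses an event network (arcs = activities with durations) reconstructed from the slack values of the example; the exact arc list is an assumption:

```python
# ES by forward longest-path relaxation, LS by backward relaxation;
# events with ES == LS form the set M of critical events.
def cpm(n, arcs):
    ES = {j: 0 for j in range(1, n + 1)}
    for _ in range(n):                         # longest paths (FORD II style)
        for (i, j), (_, d) in arcs.items():
            ES[j] = max(ES[j], ES[i] + d)
    LS = {j: ES[n] for j in range(1, n + 1)}
    for _ in range(n):                         # latest allowable start times
        for (i, j), (_, d) in arcs.items():
            LS[i] = min(LS[i], LS[j] - d)
    critical = [j for j in range(1, n + 1) if ES[j] == LS[j]]
    return ES, LS, critical

arcs = {(1, 2): ("A", 3), (1, 3): ("B", 2), (2, 3): ("C", 1),
        (2, 4): ("D", 6), (2, 5): ("E", 9), (3, 4): ("F", 4),
        (3, 5): ("G", 3), (4, 5): ("H", 2), (5, 6): ("I", 8)}
ES, LS, critical = cpm(6, arcs)
# ES[6] == 20 (project duration) and critical == [1, 2, 5, 6]
```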
4.0.2 The CPM Project Acceleration
Hypotheses
Given:
(1) A CPM-planning network, for which the critical path(s) cp and the project duration ES(n_n) have already been determined.
(2) Costs Δc_ij > 0, which occur when the duration d_ij of the activity (n_i,n_j) is shortened by one time-unit. (Note that the activity (n_i,n_j) needs at least λ_ij ∈ ℕ time-units, so that d_ij ≥ λ_ij.)
Accelerate the project completion with minimal costs, so that the moment of completion x < ES(n_n) is realized. Define C := 0.
Principle
In this method a sequence of networks is constructed in which the duration of the activities which have not yet reached their lower bound λ_ij is reduced on all critical paths, so that the reduction is cost-minimal.
Description
Step 1: Is ES(n_n) = x ?
        If yes: Stop, the optimal solution has been obtained.
        If no : Go to step 2.
Step 2: Let {cp} := {cp_l | l = 1,...,q} be the set of the current critical paths. Determine an activity (n_r,n_s)^(l) ∀ l = 1,...,q, so that
        (a) d_rs > λ_rs;
        (b) the number of the critical paths is not decreasing for d_rs := d_rs - 1;
        (c) Δc_rs is minimal with respect to (a) and (b).
Step 3: Is |{(n_r,n_s)^(l)}| = q ?
        If yes: Go to step 4.
        If no : Stop, the moment of completion ES(n_n) = x can not be realized.
Step 4: Define d_rs^(l) := d_rs^(l) - 1 ∀ l = 1,...,q; C := C + Σ_{l=1}^{q} Δc_rs^(l); determine the new values ES(n_j), LS(n_j) ∀ j = 1,...,n and the new critical paths cp. Go to step 1.
Note: The number of the critical paths is monotone non-decreasing. The cost-function C := f(ES(n_n)) is convex and piecewise linear.
222
Planning
Networks
Exampl e Given the f o l l o w i n g CPM-planning network n 6 ) ;
= (n2,n5); (nr>ns)(2) = (n2,n4); d
25)=
7
"
1
= 6;
d
24 }
Ac^
= 2;
= 6 - 1 = 5 ;
+ 2 + 3=12
ES(rig) i
16;
cp^^ = (n 1 , n 2 , n 5 , n 6 ) ;
cp 3 = ( n l s n 2 , n 3 , n 4 , n 5 , n 6 ) ; (nr,ns)(2) Ac^
cp 2 = (n 1 , n 2 , n 4 , n 5 , r i g ) ;
= 2;
(n^n^1)
= (n2,n4); (nr,ns)(3) AC= 3;
Ac^
cp 2 = (n 1 , n 2 , n 4 , n 5 , n 6 ) ;
= 1;
= (n2>n5);
= (n3,n4);
223
224
Planning
ES(rig) = 16
Nstworks
—»-
Stop, the project is terminated after
ES(rig)=x=16 t.u., the corresponding accelerations cost is C=18 m.u. The corresponding cost-function C=f(ES(rig)): C
4.0.3 The Program Evaluation and Review Technique (PERT) Hypotheses Under the same hypotheses as in 4.0.1 , determine the probability for the termination of the project after x
t.u.
Note: The difference between PERT and CPM is that in PERT the following three time-estimations are used: k ^ : duration of the activity (n^.n^.) under favorable conditions (optimistic estimation) m ^ : most probable durat ion of the activity (n_j,n.) 1 ^ : duration of the activity (n^.nj) under unfavorable conditions (pessimistic estimation), where
k-j < m.-S 1.. v
i ,j .
Principle After computing the average project duration, the method is the same as 4.0.1 . As a result we obtain the critical path with
Section 4.0.3
225
respect to time, not necessarily the actual critical path. Description Step
l: Compute the average duration d
ij:
=
^
k
ij
+4,m
ij
+
1
ij)vU
2 1 2 a ^ : = -jg-U-jj ~ k-jj) v
and the variance
i'J-
Note: Step 1 is valid under the assumption that the variables have an Euler-Beta distribution. Step
2: Using the known
Step
d.. 'J
from step 1 determine with 4.0.1 the un-
ES(nj); LS(nj) v j = l,...,n
paths {cp} . 2 3: Compute a^: =
? ' a.^
and the set of critical
v 1=1,...,q
a
and determine
Step
4: Determine
ij € c p l 2 2 a : = max {a,}. 1 1
x-ES(n ) P(ES(n n ) < x): = $ (—,, )
the appendix.
from
table 5 in
a
Example Given the following PERT-planning network JP = (N,A,k,m,l), determine the probability for the termination of the project after x = 25 (20) t.u.
226
Planning
Networks
Step 1 yields the following planning network 𝒩' = (N,A,d,σ²):

[Figure: the network with arc values d_ij , σ²_ij (e.g. 17/6 , 49/36) and node values ES ; LS (45/2 ; 45/2 — 59/3 ; 59/3 — 43/3 ; 43/3); the critical path is represented by the dotted line.]

σ² = Σ_{(n_i,n_j) ∈ φ1} σ²_ij ;   σ = 3.0583 ;   ES(n6) = 45/2 = 22.5 ;

α) P(ES(n6) ≤ 25) = Φ((25 − 22.5)/3.0583) ≈ 0.7939 ≈ 79.39 % ;
β) P(ES(n6) ≤ 20) = Φ((20 − 22.5)/3.0583) ≈ 0.2061 ≈ 20.61 % .
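Steps 1–4 can be sketched in a few lines. This is a minimal illustration, not the book's tabular procedure: the three-point triples below are hypothetical data for a single critical path with mean duration 22.5 t.u., and Φ is computed with `math.erf` instead of table 5.

```python
import math

def pert_estimates(k, m, l):
    """Step 1: average duration and variance of one activity."""
    d = (k + 4 * m + l) / 6
    var = ((l - k) / 6) ** 2
    return d, var

def phi(z):
    """Standard normal distribution function (replaces table 5)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pert_probability(x, critical_activities):
    """Steps 3-4: P(ES(n_n) <= x) along a given critical path."""
    mean = sum(pert_estimates(*a)[0] for a in critical_activities)
    var = sum(pert_estimates(*a)[1] for a in critical_activities)
    return phi((x - mean) / math.sqrt(var))

# Hypothetical critical path: mean durations 7 + 7.5 + 8 = 22.5 t.u.
path = [(4, 7, 10), (6, 7.5, 9), (5, 8, 11)]
```

With these data, `pert_probability(22.5, path)` is exactly 0.5, since x equals the mean project duration.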
4.0.4 The Metra Potential Method (MPM)

Hypotheses
Given a node-oriented network 𝒩' = (N',A,l,d) (i.e. the (n−1) activities n_j, j = 1,…,n−1, are represented by the nodes), determine the minimal project duration. The arc values l_ij represent the time relations between the activities n_i and n_j: the earliest possible beginning of the activity n_j is l_ij t.u. after the beginning of the activity n_i (analogously, a latest acceptable beginning may be prescribed). Logically no cycles with strictly positive length may occur. The node values d_j ≥ 0 represent the durations of the activities n_j, j = 1,…,n−1.
Define, for each activity n_j:
ES(n_j): earliest possible start time
EF(n_j): earliest possible finish time
LS(n_j): latest allowable start time
LF(n_j): latest allowable finish time.

Principle
In this method an artificial sink is included; then the longest paths from the source to this sink are determined, as well as the longest paths from the sink to the source in the inverted graph.

Description
Step 1: Adjoin the node n_n and the arcs (n_j,n_n) ∀ j = 1,…,n−1 to the given network. Let l_jn := d_j ∀ j = 1,…,n−1; d_n := 0.
Result: A planning network 𝒩 = (N,A,l). Construct the following table, which is completed during the course of the procedure:

n_j | d_j | ES(n_j) | EF(n_j) | LS(n_j) | LF(n_j) | S_tot(n_j)

[Steps 2–6, which fill this table by the longest-path computations described in the Principle, are not legible in the source.]

Result: The minimal project duration is LF(n_n) t.u.
Step 7: Determine the total slack of each activity:
    S_tot(n_j) := LS(n_j) − ES(n_j)   ∀ j = 1,…,n.
Example
Given the following MPM planning network, determine the minimal project duration.

Step 1 yields the following network 𝒩:

[Figure: the extended network with the artificial sink n5.]

n_j | d_j | ES(n_j) | EF(n_j) | LS(n_j) | LF(n_j) | S_tot(n_j)
n1  |  1  |    0    |    1    |    0    |    1    |    0
n2  |  7  |    2    |    9    |    2    |    9    |    0
n3  |  7  |    6    |   13    |    6    |   13    |    0
n4  |  3  |    3    |    6    |    4    |    7    |    1
n5  |  0  |   13    |   13    |   13    |   13    |    0

M(n1) = 0; M(n2) = 2; M(n3) = 6; M(n4) = 3; M(n5) = 13;

Step 4 yields the following network 𝒩:

[Figure: the network with the computed node potentials.]

The minimal project duration is LF(n5) = 13 t.u.
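The forward and backward longest-path passes behind the table can be sketched as below. The arcs are hypothetical, chosen so that the table of the example is reproduced (the actual arcs of the figure are not recoverable from the source); a source node 1 with ES = 0 is assumed.

```python
def mpm(durations, arcs):
    """ES/LS start times for a node-oriented (MPM) network.

    durations: {node: d_j}; arcs: list of (i, j, l_ij) meaning
    "n_j may start at the earliest l_ij t.u. after the start of n_i".
    Returns (ES, LS, minimal project duration).
    """
    sink = max(durations) + 1                       # step 1: artificial sink
    arcs = arcs + [(j, sink, durations[j]) for j in durations]
    nodes = list(durations) + [sink]
    es = {n: 0 for n in nodes}
    for _ in nodes:                                 # longest paths from source
        for i, j, l in arcs:
            es[j] = max(es[j], es[i] + l)
    ls = {n: es[sink] for n in nodes}               # backward pass from sink
    for _ in nodes:
        for i, j, l in arcs:
            ls[i] = min(ls[i], ls[j] - l)
    return es, ls, es[sink]

# Hypothetical arc data reproducing the example's table:
es, ls, duration = mpm({1: 1, 2: 7, 3: 7, 4: 3},
                       [(1, 2, 2), (1, 4, 3), (2, 3, 4), (4, 3, 2)])
```

The repeated relaxation passes are sufficient here because the network is acyclic, so every longest path has fewer arcs than there are nodes.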
4.0.5 The Graphical Evaluation and Review Technique (GERT)

Hypotheses
Given a GERT decision network 𝒩 = (N,A,p,t) with the following node types:

Input \ Output   | Deterministic | Probabilistic
"AND"            | [node symbol] | [node symbol]
inclusive "OR"   | [node symbol] | [node symbol]
exclusive "OR"   | [node symbol] | [node symbol]

The arc values are defined as:
p_ij ∈ (0;1]: probability that the activity (n_i,n_j) will be realized;
t_ij ∈ ℝ+: (constant) duration of the activity (n_i,n_j).
Note: Σ_j p_ij = 1 ∀ probabilistic nodes n_i.

Reduce the given network to only one arc. The value of that arc indicates with what probability the total project will be realized and what time will be required.

Description
The reduction of the decision network is done with the following reduction rules (the arc value x means x = (p_x ; t_x)):
(1) Arcs in series:
    p_z := p_a · p_b ;   t_z := t_a + t_b ;

(2) Parallel arcs with "AND" relation:
    p_z := p_a · p_b ;   t_z := max {t_a ; t_b} ;

(3) Parallel arcs with inclusive "OR" relation:
    p_z := p_a + p_b − p_a·p_b ;
    t_z := (p_a·t_a + p_b·t_b + (min {t_a ; t_b} − t_a − t_b)·p_a·p_b) / p_z ;

(4) Parallel arcs with exclusive "OR" relation:
    p_z := p_a + p_b ;   t_z := (t_a·p_a + t_b·p_b) / (p_a + p_b) ;

(5) Loops with unlimited repetition and inclusive "OR" relation (loop arc c, exit arc b):
    p_z := p_b / (1 − p_c) ;   t_z := t_b + (p_c·t_c) / (1 − p_c) ;

(6) Loops with unlimited repetition and exclusive "OR" relation:
    p_z := p_b / (p_b + p_c) ;   t_z := t_b + (p_c·t_c) / (p_b + p_c) ;

(7) Loops with limited repetition: a loop that can be repeated at most l times may be represented by the following graph, where x_i = (p_{x_i} ; t_{x_i}) and p_{b_i} = 1 − p_{c_i} ∀ i = 1,…,l:

[Figure: the loop unrolled into a chain of test nodes; p_z and t_z then follow by repeated application of rules (1) and (4) to this chain.]

This loop may be interpreted as follows: a product comes over arc a to the first test at node n_{j1}. If accepted, it goes over arc c1 to node n_k; if refused, it goes over arc b1 to node n_{j2}, where another test is conducted. After at most l tests the product is either accepted or rejected, i.e. it ends in the accept node or in the reject node, respectively.

Example 1
Given the following decision network 𝒩, reduce it to only one arc.
[Figure: the network; legend: an arc label p,t stands for the pair (p_ij ; t_ij).]

1st step: p_z = 0.2 + 0.3 − 0.2·0.3 = 0.44;  t_z = max {2;4} = 4;
2nd step: p_z = 0.5·1 = 0.5;  t_z = 1 + 3 = 4;
3rd step: p_z = 0.44 + 0.5 − 0.44·0.5 = 0.72;  t_z = max {4;4} = 4;
4th step: p_z = 1·0.72 = 0.72;  t_z = 3 + 4 = 7;
the reduced network consists of the single arc (0.72 ; 7).
Example 2
Given the following decision network 𝒩, reduce it to only one arc.

[Figure: the network with arc values 0.7,1; 0.2,4; 0.7,3; 1,3; … .]

1st step: [reduction shown only graphically in the source];
2nd step: p_z = 0.4·1 = 0.4;  t_z = 1 + 2 = 3;
3rd step: p_z = 0.5 + 0.4 − 0.5·0.4 = 0.7;  t_z = max {4;3} = 4;
4th step: p_z = 0.7·0.7 = 0.49;  t_z = 2 + 4 = 6;
5th step: p_z = 0.49 + 0.3 − 0.49·0.3 = 0.643;  t_z = max {6;3} = 6;
6th step: p_z = 0.643·1 = 0.643;  t_z = 6 + 3 = 9;
the reduced network consists of the single arc (0.643 ; 9).
Note: In decision networks of the most general type, such as GERT planning networks, the durations t of the activities may be random variables with some density function; each random variable may be distributed differently. For the case when t is a random variable, there are modified reduction rules derived from moment-generating functions.
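The elementary reduction rules can be written as small functions, an arc being the pair (p, t). This is a sketch, not a general GERT engine; following the worked examples, the inclusive-"OR" time is taken as max{t_a ; t_b} here, whereas the general rule (3) weights the times by probabilities.

```python
def series(a, b):
    """Rule (1): arcs in series."""
    return (a[0] * b[0], a[1] + b[1])

def parallel_and(a, b):
    """Rule (2): parallel arcs joining in an 'AND' node."""
    return (a[0] * b[0], max(a[1], b[1]))

def parallel_incl_or(a, b):
    """Rule (3), as applied in the examples: t_z = max{t_a; t_b}."""
    return (a[0] + b[0] - a[0] * b[0], max(a[1], b[1]))

def parallel_excl_or(a, b):
    """Rule (4): exclusive 'OR'; t_z is the probability-weighted mean."""
    p = a[0] + b[0]
    return (p, (a[0] * a[1] + b[0] * b[1]) / p)

# The four steps of Example 1:
z = parallel_incl_or((0.2, 2), (0.3, 4))   # (0.44, 4)
w = series((0.5, 1), (1.0, 3))             # (0.5, 4)
z = parallel_incl_or(z, w)                 # (0.72, 4)
z = series((1.0, 3), z)                    # (0.72, 7)
```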
5. Game Theory
5.1 Non-Matrix Games

5.1.1 The Normal Form

Hypotheses
Given two suppliers of one product, let x_i be the quantity supplied, with cost function C_i(x_i), and L_i the maximum marketable quantity of each supplier i = 1,2. Furthermore the price function p(x) corresponding to the total supply x = x1 + x2 is known. This defines a duopoly model. For i = 1 let ī = 2, and vice versa.

Principle
The optimal strategies of both players are computed with respect to different objectives.

Description
Step 1: Set up the payoff function A_i := p(x)·x_i − C_i(x_i) for each player i = 1,2.
Step 2: Determine the Bayes strategy x_i* for the player i = 1,2 from ∂A_i/∂x_i = 0 → x_i* = f(x_ī), and the payoff A_i* for the player i playing the Bayes strategy against ī:
    A_i*(x_i*(x_ī)) := p(x_i* + x_ī)·x_i* − C_i(x_i*).
Step 3: Determine the maximum payoff for the player i = 1,2 under the assumption that player i is monopolist (i.e. x = x_i; x_ī = 0), from ∂A_i/∂x_i = 0 with A_i = p(x_i)·x_i − C_i(x_i) → x_i(0). It follows: A_i(x_i(0)) := p(x_i(0))·x_i(0) − C_i(x_i(0)).
Step 4: Determine the equilibrium strategies x̄_i, i = 1,2, by solving the system of equations determined in step 2: x1* = f(x2); x2* = f(x1). The resulting payoffs for both players in the equilibrium point are A_i(x̄1;x̄2) := p(x̄)·x̄_i − C_i(x̄_i) ∀ i = 1,2.
Step 5: Let inf A_i(x_i;x_ī) := p(x_i;L_ī)·x_i − C_i(x_i). Determine the maximal guarantee for the player i = 1,2 from ∂(inf A_i(x_i;x_ī))/∂x_i = 0. The values x̂_i represent the minimax strategies of the two players i = 1,2. Assuming ruinous competition of the opponent ī, the guarantee for the player i = 1,2 is the value inf A_i(x̂_i;x_ī).
Example
Given two suppliers of one product, with cost functions
    C1(x1) = 5·x1 + 8;  L1 = 18;
    C2(x2) = 6·x2 + 2;  L2 = 20.
The price function is given by
    p(x) := 20 − (3/4)·x  if 0 ≤ x ≤ 80/3;   p(x) := 0  if x > 80/3;

A1 = [20 − 3/4·(x1 + x2)]·x1 − 5·x1 − 8  if x ≤ 80/3;   A1 = −5·x1 − 8  if x > 80/3;
A2 = [20 − 3/4·(x1 + x2)]·x2 − 6·x2 − 2  if x ≤ 80/3;   A2 = −6·x2 − 2  if x > 80/3.
The Bayes strategies are:
∂A1/∂x1 = ∂(20·x1 − 3/4·x1² − 3/4·x1·x2 − 5·x1 − 8)/∂x1 = 0  →  x1* = 10 − 1/2·x2 ;
∂A2/∂x2 = ∂(20·x2 − 3/4·x1·x2 − 3/4·x2² − 6·x2 − 2)/∂x2 = 0  →  x2* = 28/3 − 1/2·x1 ;

A1*(x1*(x2)) = [20 − 3/4·(10 − 1/2·x2 + x2)]·(10 − 1/2·x2) − 5·(10 − 1/2·x2) − 8
             = 67 − 15/2·x2 + 3/16·x2² ;
Is A1* maximal?  ∂A1*/∂x2 = −15/2 + 3/8·x2 < 0 for x2 < 20  →  A1* is maximal.

A2*(x2*(x1)) = [20 − 3/4·(x1 + 28/3 − 1/2·x1)]·(28/3 − 1/2·x1) − 6·(28/3 − 1/2·x1) − 2
             = 190/3 − 7·x1 + 3/16·x1² ;
Is A2* maximal?  ∂A2*/∂x1 = −7 + 3/8·x1 < 0 for x1 < 56/3  →  A2* is maximal.
If player 1 is monopolist: x = x1; x2 = 0;
A1(x1;0) = (20 − 3/4·x1)·x1 − 5·x1 − 8 = −3/4·x1² + 15·x1 − 8;
∂A1/∂x1 = −3/2·x1 + 15 = 0  →  x1(0) = 10;
A1(10;0) = [20 − 3/4·(10 + 0)]·10 − 5·10 − 8 = 67.

If player 2 is monopolist: x = x2; x1 = 0;
A2(0;x2) = (20 − 3/4·x2)·x2 − 6·x2 − 2 = −3/4·x2² + 14·x2 − 2;
∂A2/∂x2 = −3/2·x2 + 14 = 0  →  x2(0) = 28/3;
A2(0;28/3) = (20 − (3/4)·(28/3))·(28/3) − 6·(28/3) − 2 = 190/3.

The equilibrium strategies follow from
    x1 = 10 − 1/2·x2;  x2 = 28/3 − 1/2·x1  →  x̄1 = 64/9;  x̄2 = 52/9;
the payoffs in the equilibrium point are:
A1(64/9;52/9) = [20 − 3/4·(64/9 + 52/9)]·64/9 − 5·(64/9) − 8 ≈ 29.926;
A2(64/9;52/9) = [20 − 3/4·(64/9 + 52/9)]·52/9 − 6·(52/9) − 2 ≈ 23.037.

Guarantee for player 1:
inf A1(x1;x2) = inf {[20 − 3/4·(x1 + x2)]·x1 − 5·x1 − 8} = −3/4·x1² − 8;
maximal guarantee: ∂(inf A1(x1;x2))/∂x1 = −3/2·x1 = 0  →  x1 = 0;
x1 = 0; x2 = 20;  A1(0;20) = [20 − 3/4·20]·0 − 5·0 − 8 = −8.
Guarantee for player 2:
inf A2(x1;x2) = inf {[20 − 3/4·(x1 + x2)]·x2 − 6·x2 − 2} = 1/2·x2 − 3/4·x2² − 2;
maximal guarantee: ∂(inf A2(x1;x2))/∂x2 = 1/2 − 3/2·x2 = 0  →  x2 = 1/3;
x1 = 18; x2 = 1/3;  A2(18;1/3) = (20 − (3/4)·(55/3))·(1/3) − 2 − 2 = −23/12.

The guarantee point of the game is defined by the maximum guaranteed payoffs A1 and A2:
    (A1;A2) = (−8; −23/12).
For each pair of strategies (x1;x2) there exists exactly one pair of payoffs (A1;A2). These payoffs can be illustrated in a payoff diagram:

[Figure: the payoff diagram.] In the diagram the payoff points A(A1;A2) represent:
A(29.926 ; 23.037)  the equilibrium point;
A(67 ; −2)          the monopoly point for player 1;
A(−8 ; 190/3)       the monopoly point for player 2;
A(−8 ; −23/12)      the guarantee point;
A(29.46 ; 35.54)    Nash's solution of the bargaining problem (see 5.1.2).
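The equilibrium of the example can be checked numerically. The sketch below iterates the Bayes (best-response) strategies derived above to their fixed point; the payoff function assumes the price formula is valid, i.e. x ≤ 80/3, which holds near the equilibrium.

```python
def price(x):
    """Price function of the example (valid for x <= 80/3)."""
    return 20 - 0.75 * x

def payoff(i, x1, x2):
    """A_i = p(x) * x_i - C_i(x_i) with the example's cost functions."""
    cost = 5 * x1 + 8 if i == 1 else 6 * x2 + 2
    qty = x1 if i == 1 else x2
    return price(x1 + x2) * qty - cost

def best_response(i, other):
    """Bayes strategies derived in the text."""
    return 10 - 0.5 * other if i == 1 else 28 / 3 - 0.5 * other

# Simultaneous best-response iteration contracts to the equilibrium:
x1, x2 = 0.0, 0.0
for _ in range(200):
    x1, x2 = best_response(1, x2), best_response(2, x1)
```

The iteration converges because each best-response map has slope −1/2, so two rounds shrink any deviation by a factor of four.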
5.1.2 NASH's Solution of the Bargaining Problem

Hypotheses
Given a duopoly model as in 5.1.1, find a solution of the game dividing the maximum total payoff A as evenly as possible.

Description
Step 1: Determine A := A1 + A2 := max {[A2(0;x2*) + A1(0;x2*)] ; [A1(x1*;0) + A2(x1*;0)]} and set A1 := A − A2.
Step 2: Solve the optimization problem
    max w = f(A2) = (A − A2 − A1(x̂1;L2))·(A2 − A2(L1;x̂2))
by differentiation, i.e. df(A2)/dA2 = 0.
Result: the payoffs A1* and A2*. This solution is of course always Pareto-optimal.
Nash's solution of the bargaining problem may also be applied to bimatrix games, using the fixed-threat point instead of the guarantee point.

Example
Given the example of 5.1.1:
A = A1 + A2 = max {[190/3 − 8] ; [67 − 2]} = 65, i.e. A1 = 65 − A2;
max w = f(A2) = (65 − A2 − (−8))·(A2 − (−23/12)) = (73 − A2)·(A2 + 23/12);
df(A2)/dA2 = 73 − 2·A2 − 23/12 = 0  →  A2* = 853/24 ≈ 35.54;  A1* ≈ 29.46.
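The maximization of the Nash product in step 2 is a one-line computation for this example; the guarantee (threat) point (−8; −23/12) and the total payoff A = 65 are taken from 5.1.1.

```python
A, g1, g2 = 65.0, -8.0, -23.0 / 12.0    # total payoff and threat point

def nash_product(a2):
    """f(A2) = (A - A2 - A1_threat) * (A2 - A2_threat)."""
    return (A - a2 - g1) * (a2 - g2)

a2_star = (A - g1 + g2) / 2             # df/dA2 = 0  ->  853/24
a1_star = A - a2_star                   # player 1's share
```

The closed form follows because f is a downward parabola in A2, so its vertex is the unique maximizer.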
5.1.3 The Extensive Form

Hypotheses
Given a game for a finite number of players. Each player j has at most t_j < ∞ moves; the game thus consists of k := Σ_j t_j moves, which are taken in order one after the other. Before the i-th move, the player whose turn it is may select from s_i < ∞ different strategies. After all moves have been taken, a certain course of the game has been determined; for each possible course of the game a corresponding payoff function is given.

Procedure
Represent the game as a digraph G = (N,A). The possible strategies are the nodes, each connected by a directed arc from "its" source. Starting with an artificial source n0, determine the nodes (strategies) of the first move and the arcs to each of these nodes. The digraph so defined is a tree, which is successively extended by considering the nodes of the i-th move as the sources for the nodes of the (i+1)-th move, ∀ i = 1,…,k−1. After all moves are completed, there exists a tree with N^(k) := Π_{i=1}^{k} s_i terminal nodes; the complete tree, also called a game tree, consists of
    N := Σ_{χ=1}^{k} Π_{i=1}^{χ} s_i
nodes. These formulas only hold if the number of successors is the same for all nodes that can be reached after μ moves, ∀ μ = 1,…,k, i.e. the game ends after exactly k moves and the resulting game tree is symmetric. Otherwise the number of nodes is N := 1 + |∪_{i=1}^{k} N_i|, where N_i is the set of nodes resulting from the i-th move. For each terminal node compute the payoff of each player according to the payoff function.

Extension (solution of a game with total information)
If each player knows the current state of the game at any time (total information), then it is possible to determine one (or more) equilibrium point(s) of the game.

Procedure
The player who takes the (k−1)-th move regards each node n_i ∈ N_{k−1}; if there are alternative strategies at any node n_i ∈ N_{k−1}, then that strategy is selected which leads to the maximal payoff for himself (if several strategies lead to the same maximal payoff, all of them are selected). All other strategies are eliminated from the current game tree, along with their successor nodes and arcs. Then the player who takes the (k−2)-th move regards the nodes n_i ∈ N_{k−2}, etc. This procedure continues until the first player has selected the optimal strategy of the first move.
Result: The remaining game tree determines a unique solution of the game (equilibrium point) with a fixed payoff for all players. In the case of degeneracy several solutions may exist, but with the same payoff.
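The elimination procedure is ordinary backward induction and can be sketched recursively. The tree below is hypothetical (it is not the NIM example); ties are resolved by `max`, i.e. only one of several equally good strategies is kept, whereas the text keeps all of them.

```python
def backward_induction(node):
    """Solve a game tree with total information by working backwards.

    A terminal node is a payoff tuple, e.g. (1, -1); an inner node is
    (player, [children]) where `player` indexes the payoff tuple.
    """
    if not isinstance(node[1], list):
        return node                                   # terminal payoff
    player, children = node
    results = [backward_induction(c) for c in children]
    return max(results, key=lambda r: r[player])      # best for the mover

# Hypothetical two-move tree: player 0 moves first, then player 1.
tree = (0, [
    (1, [(3, 1), (1, 4)]),    # left branch: player 1 will pick (1, 4)
    (1, [(2, 2), (0, 0)]),    # right branch: player 1 will pick (2, 2)
])
```

Here player 0 prefers the right branch, since after player 1's reply it yields him 2 instead of 1.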
Example 1 (game without total information)
Two players (i.e. n = 2) each have two cards (the seven and eight of hearts) lying face down in front of them. Both select a card at random and place it face down on the table before them (i.e. s1 = s2 = 2). The first player guesses how many hearts are lying on the table, then the second player does. Let M be the set of all possible numbers of hearts, M = {14; 15; 16}; then y_i ∈ M is the number the i-th player guessed. Double guesses are not permitted, which means that each player must guess a different number. So we have:
    y1 ∈ {14; 15; 16};  s3 = 3;
    y2 ∈ {14; 15; 16} − {y1};  s4 = 2.
The following payoff function is given: whoever guesses correctly receives 1 m.u. from the pot; whoever guesses wrong pays 1 m.u. to the pot. If both players guess wrong, no one pays.
Note: This problem can be solved by determining the maximal expected value.

[Figure: the game tree.]

Note: To avoid confusion the node numbers have not been included. The data in the nodes indicate the strategies selected, i.e.
1st level: player 1 selects "7" or "8"
2nd level: player 2 selects "7" or "8"
3rd level: player 1 guesses y1
4th level: player 2 guesses y2.
The terminal nodes are labelled with the payoffs of player 1 and player 2, respectively.

Example 2 (NIM game with total information)
Given three players and a pot with six beans. Each player may take one or two beans out of the pot when it is his turn. The player who takes the last bean has to pay 1 m.u. to the player who would have taken the next bean(s) if the game had proceeded.
[Figure: the game tree for the NIM game and its reduction.]
6. Dynamic Programming
6.0.1 The n-Period Model

Hypotheses
Given the following problem: the development of an enterprise is considered for n < ∞ periods. The enterprise works with only one product, given in a definite quantity at the beginning as well as at the end. During the periods the on-hand balance may change through sales, purchases, etc. Normally a finite storage capacity is given, as well as a limit on the quantity that can be marketed. Under these assumptions a given (generally nonlinear) objective function must be optimized.

Definitions
n: number of periods to be considered, n < ∞
X_j: set of feasible states (= warehouse storage capacity) in the j-th period
Y_j: set of feasible actions (= market capacity) in the j-th period
x_j: stored products at the end of the j-th period, x_j ∈ X_j
y_j: the action taken in the j-th period, y_j ∈ Y_j
v_j(x_{j-1};y_j): transition function (= quantitative change in the product from the (j−1)-th to the j-th period)
u_j(x_{j-1};y_j): objective function.
Let x_n := 0.

Principle
In this algorithm the optimal decisions (actions) are first determined recursively, starting with the given final state x_n. When this is accomplished, the optimal decisions are executed beginning with the given initial state; thus the various interim states are determined.

Description
Step 1: Set f*_{n+1}(x_n) := 0;  j := n.
Step 2: Compute BELLMAN's functional equation (BFE) f*_j for the j-th period:
    f*_j(x_{j-1}) := max_{y_j} { u_j(x_{j-1};y_j) + f*_{j+1}(x_j) }.
    Result: f*_j(x_{j-1}) is dependent on x_j and y_j.
Step 3: Replace x_j by the transition function v_j(x_{j-1};y_j).
    Result: f*_j(x_{j-1}) is dependent on x_{j-1} and y_j.
Step 4: Determine the maximum value of the BFE (if necessary by differentiating) and solve the equation for y_j.
    Result: The optimal decision in the j-th period, ŷ_j, has been determined; this optimal point is dependent on x_{j-1}.
Step 5: Replace y_j in the current BFE by the value of the optimal point.
    Result: f*_j is dependent only on x_{j-1}.
Step 6: Is j = 1? If yes: go to step 7. If no: set j := j−1 and go to step 2.
Step 7: ŷ1 is now given as a function of x0. Because x0 is known, compute ŷ1; with the transition function compute x1. With x1 and the next optimal point, ŷ2 is uniquely determined, and so on. In general: x_{j-1} is known; put it into the corresponding optimal point and ŷ_j is determined; x_j is then determined by placing the values for x_{j-1} and ŷ_j into the transition function. When all optimal decisions ŷ_j are executed, the value of the objective function is fully determined.
Dynamic
Programming
Example 1 A hamster farmer must plan his breeding for the following four periods. His farm may handle a maximum of 350,000 hamsters. As he cannot buy any animals, he can at most sell as many animals as he owns. In the beginning he owns 3,000 hamsters and at the end of the four periods all the hamsters shall be sold. The hamsters will
in-
crease five-fold with births and deaths balancing out. The profit in the j-th period is
2- ]/yT
m.u. Under the neither humane nor
realistic assumption of arbitrary divisibility of an animal, determine the optimal sales-policy and the total profit over the four periods. n = 4; Xj =[0; 350,000]; vj(xj_1;yj) = U
j(xj-l;yj)
=
Yj =[0;
x^];
5.(x._1-y.)=xj; 2
'
'
x
o
=
3
'000
;
fj(x 4 ) = 0; f*(x 3 ) = max { u 4 ( x 3 ; y 4 ) + fg(x 4 )} x4 = 5 x3 - 5 y4; =
x
x4 = 0
= 2-
;
-> 5x 3 - 5y 4 = 0;
3
fj(x 3 ) = 2 - l/x^ ; f*(x 2 ) = max {u 3 (x 2 ;y 3 ) + f*(x 3 )} = max x 3 = 5 x^ - 5 y 3 3-P3
3 y
+
^ 3
f*(x 2 ) =
+ 2--|/xJ};
->• f^fxg) = max {2- -/y^ + 2- ^ 5 x 2 - 5 y 3 > = : P
-5 |/5X2 - 5y 3
= 0
y 3 = 1/6 x 2
2-1/1/6 x 2 + 2-]/5X2 - (5/6)X 2 = 2-V/6x^~
;
f 2 (Xj) = max{u 2 (x 1 ;y 2 ) + 2-]/Sx^} = max {2--|/y^ + 2-j/5x^} = max {2Yy^"
+ 2-/6(5x 1 - 5y 2 )} = max {2-]/y^ + 2-j/30x1 - 30y 2 > =: (
Section 6.0.1 273
-30
3 P 3 y
2
= 0
->•
y 2 = 1/31 xj
l/30x r 30y 2
f ^ X j ) = 2-i/(l/31)-x1 + 2-l/30x 1 - 30-(l/31)-x 1 = 2- / 3 l j Ç ;
f * ( x o ) = m a x { u 1 ( x Q ; y 1 ) + 62- 1/1/31 Xj} = m a x { 2 - y y ^ + 62- l/(l/31)-x 1 } = max{2-^+ 3 P 8 y
1
=
l
62(-5/31)
+
i h
f*(xQ)
62- ^(5/31)-x q - ( 5 / 3 1 ) ^ } =:p ;
2-/5/31 x -5/31
= 0 y
= 1/156 x Q
l
= 2-1/(1/156)-xo + 62-1/(5/31) -x q - (5/31 )-( 1/156) -x q =
determination of the sales-policy: x Q = 3,000 ;
y
l
X
1
3,000 T56 =
5(x
1Q 19
o " yl>
y 2 = 1/31 x
f
„ '23
;
=
5(3
1 4
^
«
'000
0 4
14,903.85 ;
» 480.77 ;
x 2 = 5(x : - y 2 ) « 5(14,904 - 481) « 72,115.4 ; y3 »
(1/6)-72,115.4 «
x 3 = 5(x 2 - y 3 ) «
12,019.23 ;
300,480.85 ;
y 4 « 300,480.85 ; x4 = 0 total profit: ir = 2-|/yJ +
+ 2 - ^
+
» 1,368.21 m.u.
this is equivalent to *
=
=
l^ 6 2 4
x
o
.368.21
m.u.
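The closed-form policy of Example 1 can be executed numerically (step 7 of the description). The optimal points y1 = x0/156, y2 = x1/31, y3 = x2/6, y4 = x3 are the ones derived above; the code merely verifies that they reproduce the stated profit.

```python
import math

def policy(x0):
    """Step 7 for the hamster example: execute the optimal decisions."""
    y1 = x0 / 156; x1 = 5 * (x0 - y1)     # sell y_j, quintuple the rest
    y2 = x1 / 31;  x2 = 5 * (x1 - y2)
    y3 = x2 / 6;   x3 = 5 * (x2 - y3)
    y4 = x3                               # sell everything at the end
    profit = 2 * sum(math.sqrt(y) for y in (y1, y2, y3, y4))
    return (y1, y2, y3, y4), (x1, x2, x3), profit

ys, xs, profit = policy(3000)
```

The computed profit agrees with the closed form f*1(x0) = 2·√(156·x0), and the largest interim stock (≈ 300,481 hamsters) stays below the 350,000 capacity.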
Example 2
A speculator wants to invest 200 m.u. in such a way that his long-term profit is maximized. The following three investments are available:
(1) participation in a skyscraper construction plan: upon investing z1 m.u., a long-term profit of π1 = 4·z1 − 2 m.u. will be realized;
(2) financing of a very promising film project: when z2 m.u. are invested, a long-term profit of π2 = 1/16·z2² − 1/5·z2 + 2 m.u. will be realized;
(3) buying properties in the suburban area: investments of z3 m.u. bring long-term profits of π3 = 3·z3 + 1 m.u.
In order to distribute the risk let:
    z1 ≤ b1 = 95 m.u.;  z2 ≤ b2 = 50 m.u.;  z3 ≤ b3 = 100 m.u.
The problem now reads:
max u = π1 + π2 + π3 = 4·z1 + 1/16·z2² − 1/5·z2 + 3·z3 + 1
    z1 + z2 + z3 = 200;   z1 ≤ 95;  z2 ≤ 50;  z3 ≤ 100;  z1, z2, z3 ≥ 0.
[A gap in the source: the solution of Example 2 and the beginning of the following section on the stationary model are missing.]

… n → ∞. Furthermore assume that the transition function is stationary; that is, if the process is in a state x, then the action y defines a transition to the state x', independent of the period. For this reason the period index for the set of feasible states, the set of feasible actions, the transition function and the objective function is unnecessary.

Definitions
X: set of feasible states, |X| < ∞
Y: set of feasible actions, |Y| < ∞
[…]

[End of the section's example, the body of which is missing:] … → y = 5 helpers; x = 5,600 meals → y = 6 helpers are hired. An average profit per day of π = 10,310 m.u. results.
7. Queueing Models
Hypotheses
Given the following situation: there exists a system with k parallel serving stations having the same function and capabilities (channels). At certain intervals customers enter the system, are served at the first available station (waiting if necessary), and depart the system. If the service is accomplished in several stages, then a customer may depart the system only after each of the stages has been visited in series. In general the system may be visualized as below:

[Figure: customers in a queue → channels 1,…,k → stages 1, 2, …, m → exit.]

Suppose the arrival of the customers is Poisson-distributed (no peak loads). The following rule pertains to all customers: first come, first served. Assume the service times are exponentially distributed. We distinguish the following models:
(1) 1-channel, 1-stage model
(2) 1-channel, r-phase model; r ∈ ℕ
(3) k-channel, 1-stage model
(4) k-channel, m-stage model; k > 1; m > 1.
Case (4) will not be considered here, because the mathematical formulas and the required calculations are beyond the scope of this book; case (4) models are generally solved with simulation.
A system has r phases iff the service is accomplished in more than one step, and the next customer may be served only after the preceding customer has completely departed the system.
Special case: if the service times are not exponentially distributed random variables but exactly determined, then the r-phase model with r = ∞ is applicable.

Definitions
λ: average number of customers entering the system per time unit (arrival rate)
μ: average number of customers departing the system per time unit (service rate)
t_a = 1/λ: average time interval between the arrivals of two customers
t_b = 1/μ: average service time
ρ = λ/μ: capacity utilization of one serving station
κ = λ/(k·μ): capacity utilization of the system
For 1-channel models ρ < 1 must hold; for k-channel models only κ < 1 must hold.
P_n: probability that the length of the queue is equal to n (persons), n = 0,1,…
P(w): probability that a customer must wait
n̄: average length of the queue including the customer being served
n̄_s: average length of the queue excluding the customer being served
t_w: average waiting time for a customer
t_tot: average waiting-plus-service time for a customer
n̄|n > 0: average length of the queue including the customer being served, under the assumption that a queue with more than 0 elements (customers) exists
t_w|n > 0: average waiting time for a customer under the assumption that a queue with more than 0 elements (customers) exists.

Principle
Formulas are given with which special states in a queueing system can be determined.
7.0.1 The 1-Channel, 1-Stage Model

Formulas
P_n = ρ^n·(1 − ρ);   P_0 = 1 − ρ;
P(w) = ρ;
P(t_w ≤ t) = 1 − ρ·e^(−(μ−λ)·t);   P(t_w > t) = ρ·e^(−(μ−λ)·t);
P(t_tot ≤ t) = 1 − e^(−(μ−λ)·t);   P(t_tot > t) = e^(−(μ−λ)·t);
n̄ = ρ/(1 − ρ) = λ/(μ − λ);
n̄_s = ρ²/(1 − ρ) = n̄·ρ = λ²/(μ·(μ − λ));
n̄|n > 0 = 1/(1 − ρ) = μ/(μ − λ);
t_w = ρ/(μ − λ) = λ/(μ·(μ − λ));
t_w|n > 0 = 1/(μ − λ);
t_tot = 1/(μ − λ).

Example
8 customers per hour enter the shop of a retailer in a little village. The arrivals are Poisson-distributed. Let the service time be exponentially distributed with mean 5 minutes.
λ = 8/hr.;  μ = 12/hr.;  ρ = 2/3;
(a) Determine the probability that a customer must wait.
    P(w) = ρ = 2/3 ≈ 66.7 %.
(b) What is the average length of the queue including the customer being served?
    n̄ = ρ/(1 − ρ) = (2/3)/(1/3) = 2.
(c) What is the average total time that a customer must spend in the shop?
    t_tot = 1/(μ − λ) = 1/(12 − 8) hr. = 1/4 hr. = 15 min.
(d) If the retailer determines that a customer must wait on the average more than 15 minutes, he will hire a helper to ensure that customers do not leave his shop without having bought anything. How many more customers must arrive per hour before the retailer hires help?
    t_w = ρ/(μ − λ) = (2/3)/4 hr. = 1/6 hr. = 10 min.;
    t_w' = λ'/(μ·(μ − λ')) = 1/4 hr.  →  λ' = 9/hr.;
    Δλ = λ' − λ = 1/hr., i.e. the number of customers must increase by 1 customer per hour.
(e) What is the probability that a customer remains longer than 15 minutes in the shop?
    P(t_tot > 1/4) = e^(−(12−8)·1/4) = e^(−1) ≈ 0.3679 ≈ 36.79 %.
(f) What is the average waiting time for a customer under the assumption that he must wait?
    t_w|n > 0 = 1/(μ − λ) = 1/(12 − 8) hr. = 1/4 hr. = 15 min.
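The 1-channel, 1-stage formulas are direct to evaluate; the sketch below reproduces the retailer example (answers (a)–(e), all times in hours).

```python
import math

lam, mu = 8.0, 12.0                      # rates of the retailer example
rho = lam / mu                           # utilization, must be < 1

P_w    = rho                             # (a) probability of waiting
n_bar  = rho / (1 - rho)                 # (b) queue incl. served customer
t_tot  = 1 / (mu - lam)                  # (c) time in system
t_w    = rho / (mu - lam)                # (d) waiting time
p_late = math.exp(-(mu - lam) * 0.25)    # (e) P(t_tot > 15 min)
```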
7.0.2 The 1-Channel, r-Phase Model

Formulas
n̄_s = (r+1)/(2·r) · λ²/(μ·(μ − λ))
n̄   = (r+1)/(2·r) · λ²/(μ·(μ − λ)) + λ/μ
t_w = (r+1)/(2·r) · λ/(μ·(μ − λ))
t_tot = (r+1)/(2·r) · λ/(μ·(μ − λ)) + 1/μ
Note: If the durations of the r phases are 1/μ_j, then 1/μ := Σ_{j=1}^{r} 1/μ_j.
Example 1
A small motor-vehicle repair shop receives on the average 3 cars per day. The foreman, who has no other help, must write up the defects of these cars. Because he has devised a system for this diagnosis, his job may be divided into 8 parts per car, each lasting 15 minutes and accomplished one after the other before starting another car. The foreman works 8 hours per day.
r = 8;  λ = 3/day;  μ' = 4 phases/hr. ≙ 32 phases/day;  μ = 32/8 = 4 cars/day.
(a) How many cars (except for the one being inspected) are on the average in the repair shop?
    n̄_s = (8+1)/(2·8) · 3²/(4·(4 − 3)) = (9/16)·(9/4) = 81/64 ≈ 1.2656.
(b) What is the average time until a customer receives his car again?
    t_tot = (8+1)/(2·8) · 3/(4·(4 − 3)) + 1/4 = 43/64 days ≈ 322.5 min.

Example 2
A modern car wash receives on the average one car every 5 minutes for servicing. The service time at this station is exactly 3 minutes. What is the average waiting time for a customer?
t_a = 1/12 hr.;  t_b = 1/20 hr.;  λ = 12/hr.;  μ = 20/hr.;  r = ∞:
t_w = (r+1)/(2·r) · λ/(μ·(μ − λ)) = (1/2) · 12/(20·(20 − 12)) = 0.0375 hr. = 2.25 min.
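Both r-phase examples can be checked with one helper; r = ∞ (deterministic service) is approximated below by a very large r, which drives the factor (r+1)/(2r) to 1/2.

```python
def r_phase(lam, mu, r):
    """1-channel, r-phase model: returns (n_s, t_w, t_tot)."""
    f = (r + 1) / (2 * r)                 # Erlang-phase factor
    n_s = f * lam ** 2 / (mu * (mu - lam))
    t_w = f * lam / (mu * (mu - lam))
    return n_s, t_w, t_w + 1 / mu

n_s1, _, t_tot1 = r_phase(3, 4, 8)        # Example 1 (rates per day)
_, t_w2, _ = r_phase(12, 20, 10 ** 9)     # Example 2, r -> infinity
```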
7.0.3 The k-Channel, 1-Stage Model

Formulas
P_0 = [ Σ_{j=0}^{k−1} (λ/μ)^j / j!  +  (λ/μ)^k·μ / ((k−1)!·(k·μ − λ)) ]^(−1)
P(w) = (λ/μ)^k·μ / ((k−1)!·(k·μ − λ)) · P_0
n̄_s = (λ/μ)^k·λ·μ / ((k−1)!·(k·μ − λ)²) · P_0
n̄ = n̄_s + λ/μ
t_w = (λ/μ)^k·μ / ((k−1)!·(k·μ − λ)²) · P_0
t_tot = t_w + 1/μ

Example
A post office has k = 3 parallel counters; customers arrive at the rate λ = 30/hr., and each counter serves μ = 12/hr.
P_0 = [1 + 5/2 + (5/2)²/2! + (5/2)³·12/(2!·(3·12 − 30))]^(−1) = 4/89;
(a) What is the probability that a customer must wait?
    P(w) = (5/2)³·12/((3−1)!·(3·12 − 30))·(4/89) = 125/178 ≈ 0.7022;
(b) What is the average length of the queue including the customers being served?
    n̄ = (5/2)³·30·12/((3−1)!·(3·12 − 30)²)·(4/89) + 5/2 = 535/89 ≈ 6.0112;
(c) What is the average total time that a customer must spend in the post office?
    t_tot = (5/2)³·12/((3−1)!·(3·12 − 30)²)·(4/89) + 1/12 = 107/534 ≈ 0.2004 hr.
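The k-channel formulas can be packed into one function; the call below reproduces the post-office numbers (κ = 30/36 < 1, so the system is stable).

```python
from math import factorial

def mmk(lam, mu, k):
    """k-channel, 1-stage model: returns (P_0, n_bar, t_tot)."""
    a = lam / mu
    p0 = 1 / (sum(a ** j / factorial(j) for j in range(k))
              + a ** k * mu / (factorial(k - 1) * (k * mu - lam)))
    n_s = a ** k * lam * mu / (factorial(k - 1) * (k * mu - lam) ** 2) * p0
    return p0, n_s + a, n_s / lam + 1 / mu

p0, n_bar, t_tot = mmk(30, 12, 3)       # the post-office example
```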
8. Nonlinear Programming
8.1 Theorems and Special Methods

8.1.1 The Theorem of KUHN and TUCKER

Hypotheses
Given a convex minimization problem P:
P:  min π = f(x)
    g_i(x) ≤ 0   ∀ i = 1,…,m
    x_j ≥ 0      ∀ j = 1,…,n.
The partial derivatives exist for both the (nonlinear) objective function f(x) and the restrictions g_i(x). The Lagrange function L(x,λ) is defined as
    L(x,λ) := f(x) + Σ_{i=1}^{m} λ_i·g_i(x).
A point (x̄,λ̄) is called a saddlepoint of L(x,λ) if
    L(x̄,λ) ≤ L(x̄,λ̄) ≤ L(x,λ̄)
or, equivalently, if the following six Kuhn-Tucker conditions are fulfilled:
(1) ∂L(x̄,λ̄)/∂x_j ≥ 0          ∀ j = 1,…,n
(2) ∂L(x̄,λ̄)/∂x_j · x̄_j = 0    ∀ j = 1,…,n
(3) x̄_j ≥ 0                     ∀ j = 1,…,n
(4) ∂L(x̄,λ̄)/∂λ_i ≤ 0          ∀ i = 1,…,m
(5) ∂L(x̄,λ̄)/∂λ_i · λ̄_i = 0    ∀ i = 1,…,m
(6) λ̄_i ≥ 0                     ∀ i = 1,…,m.

Theorem 1
If (x̄,λ̄) is a saddlepoint of L(x,λ), then x̄ is an optimal solution of P.

Theorem 2
Let x̄ be an optimal solution of P. If there exists at least one feasible solution x' of P such that g_i(x') < 0 ∀ i = 1,…,m (SLATER's condition), then there exists a λ̄ ≥ 0 such that (x̄,λ̄) is a saddlepoint of L(x,λ).
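The six conditions are mechanical to check once the partial derivatives of L are known. The sketch below verifies them for a small hypothetical problem, min (x−2)² subject to x − 1 ≤ 0 and x ≥ 0, whose saddlepoint is x̄ = 1, λ̄ = 2 (from 2·(x̄−2) + λ̄ = 0).

```python
def kt_conditions(dLdx, dLdl, x, lam, eps=1e-9):
    """Check the six Kuhn-Tucker conditions at a candidate point.

    dLdx, dLdl: partial derivatives of L w.r.t. x_j and lambda_i,
    evaluated at (x, lam).
    """
    return (all(d >= -eps for d in dLdx)                      # (1)
            and all(abs(d * v) <= eps for d, v in zip(dLdx, x))   # (2)
            and all(v >= 0 for v in x)                        # (3)
            and all(d <= eps for d in dLdl)                   # (4)
            and all(abs(d * l) <= eps for d, l in zip(dLdl, lam)) # (5)
            and all(l >= 0 for l in lam))                     # (6)

# min (x-2)^2  s.t.  x - 1 <= 0, x >= 0;  L = (x-2)^2 + lam*(x-1)
x, lam = [1.0], [2.0]
dLdx = [2 * (x[0] - 2) + lam[0]]    # = 0 at the saddlepoint
dLdl = [x[0] - 1]                   # = 0: the restriction is active
```

At a non-optimal point such as x = 0.5 (with the same λ), condition (1) fails, so the check rejects it.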
8.1.2 The Method of LAGRANGE

Hypotheses
Given the following problem P:
P:  opt π = f(x)
    A·x = b
where rk(A) < n, i.e. the system of equations must not be fully determined.

Principle
In this method the Lagrange function is formulated as the sum of the objective function and the weighted restrictions. Each partial derivative of this function is set equal to zero, resulting in a system of equations.

Description
Step 1: Formulate the Lagrange function
    L(x,λ) := f(x) + Σ_{i=1}^{m} λ_i·( Σ_{j=1}^{n} a_ij·x_j − b_i ).
Step 2: Determine the gradient of the Lagrange function and set it equal to zero:
    grad L(x,λ) := ∂L/∂(x_j,λ_i) := 0   ∀ i,j.
Step 3: Solve the so-determined system of equations (with one of the well-known methods: algorithm of Gauss, method of Gauss-Jordan, etc.).
Example
Given the following problem P:
max π = 3·x1² + 2·x2² + 5·x1·x2 + 3·x1 + 4·x3 − x1·x3
    2·x1 + 3·x2 = 10
      x2 +   x3 =  8

The Lagrange function is
L(x,λ) = 3·x1² + 2·x2² + 5·x1·x2 + 3·x1 + 4·x3 − x1·x3 + λ1·(2·x1 + 3·x2 − 10) + λ2·(x2 + x3 − 8);

              ( 6·x1 + 5·x2 + 3 − x3 + 2·λ1 )
              ( 5·x1 + 4·x2 + 3·λ1 + λ2     )
grad L(x,λ) = ( −x1 + 4 + λ2                ) = 0.
              ( 2·x1 + 3·x2 − 10            )
              ( x2 + x3 − 8                 )

Solution of the system of equations with the method of Gauss-Jordan:
Note: Tableau transformations are executed until the matrix of coefficients A is the identity matrix I.

  x1    x2    x3    λ1    λ2  |   b
   6     5    −1     2     0  |  −3
   5     4     0     3     1  |   0
  −1     0     0     0     1  |  −4
   2     3     0     0     0  |  10
   0     1     1     0     0  |   8

   1   5/6  −1/6   1/3     0  | −1/2
   0  −1/6   5/6   4/3     1  |  5/2
   0   5/6  −1/6   1/3     1  | −9/2
   0   4/3   1/3  −2/3     0  |  11
   0     1     1     0     0  |   8

   1     0     4     7     5  |  12
   0     1    −5    −8    −6  | −15
   0     0     4     7     6  |   8
   0     0     7    10     8  |  31
   0     0     6     8     6  |  23

   1     0     0     0    −1  |   4
   0     1     0   3/4   3/2  |  −5
   0     0     1   7/4   3/2  |   2
   0     0     0  −9/4  −5/2  |  17
   0     0     0  −5/2    −3  |  11

   1     0     0     0    −1  |   4
   0     1     0     0   2/3  |  2/3
   0     0     1     0  −4/9  | 137/9
   0     0     0     1  10/9  | −68/9
   0     0     0     0  −2/9  | −71/9

   1     0     0     0     0  |  79/2
   0     1     0     0     0  | −23
   0     0     1     0     0  |  31
   0     0     0     1     0  | −47
   0     0     0     0     1  |  71/2

The optimal solution is x = (79/2; −23; 31); λ = (−47; 71/2); the value of the objective function is π = 214.25.
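The tableau transformations above can be replayed mechanically. The sketch below is a generic Gauss-Jordan reduction in exact rational arithmetic (so the fractions 79/2, −68/9, etc. come out exactly), applied to the gradient system of the example.

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce the augmented tableau [A | b] to [I | x] and return x."""
    M = [[Fraction(v) for v in row] for row in M]
    n = len(M)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]                  # row swap
        M[col] = [v / M[col][col] for v in M[col]]       # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:              # eliminate column
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

# grad L = 0 of the example, variables (x1, x2, x3, l1, l2):
system = [
    [ 6, 5, -1, 2, 0,  -3],
    [ 5, 4,  0, 3, 1,   0],
    [-1, 0,  0, 0, 1,  -4],
    [ 2, 3,  0, 0, 0,  10],
    [ 0, 1,  1, 0, 0,   8],
]
solution = gauss_jordan(system)
```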
8.1.3 A Method for the Optimization of Nonlinear Separable Objective Functions under Linear Constraints Hypotheses Given the following problem: P:
min
} max J
f(x) = c-x e v ' A-x
| b > e
x > 0 B
6
K
the object is to linearize the nonlinear objective function and solve the corresponding problem with one of the simplex-methods. For maximization problems (minimization problems) the objective function must be concave (convex). Principle In this method the nonlinear objective function is replaced by linear parts, i.e. each variable is replaced by a set of new variables, which are valid in a specified interval. The coefficients of the new objective function are determined from the gradient of the chord of the nonlinear function in each interval. The solution of the problem is obtained with one of the simplex-methods. Description Step
l: Determine in which interval the variable x^. may assume feasible values, v j=l,...,n
(e.g. using the restrictions).
Let xⱼ ∈ [0; uⱼ], where uⱼ is the upper bound of the feasible region of xⱼ.

Step 2: Determine appropriate (equidistant or non-equidistant) intervals of "length" a_jr and split up the variable xⱼ as follows:

    xⱼ = Σ_{r=1}^{k_j} x_{jr} ,   ∀ j=1,…,n .

(The value kⱼ for the upper summation-index is determined by uⱼ and the selected size of the intervals. For equidistant intervals of length aⱼ we have kⱼ = uⱼ/aⱼ.)
Let u_{jr} be the upper bound of the r-th interval of the variable xⱼ, with u_{j0} = 0; then u_{jr} = u_{j,r−1} + a_{jr} ,  r=1,…,kⱼ.

For the variables x_{jr} holds:

    0 ≤ x_{jr} ≤ u_{jr} − u_{j,r−1} ,   ∀ j=1,…,n;  r=1,…,kⱼ .
Step 3: Determine the coefficients c_{jr} of the objective function for the variables x_{jr}, so that

    c_{jr} := [f(xⱼ = u_{jr}) − f(xⱼ = u_{j,r−1})] / (u_{jr} − u_{j,r−1}) ,   ∀ j=1,…,n;  r=1,…,kⱼ .
Step 4: Formulate the problem P̃:

P̃:  min (max)  f̃(x) = Σ_{j=1}^{n} Σ_{r=1}^{k_j} c_{jr}·x_{jr}

     s.t.  Σ_{j=1}^{n} a_{ij} · Σ_{r=1}^{k_j} x_{jr} ≤ bᵢ ,   ∀ i=1,…,m ;

           0 ≤ x_{jr} ≤ u_{jr} − u_{j,r−1} ,   ∀ j=1,…,n;  r=1,…,kⱼ .
Step 5: Solve the problem P̃ with one of the simplex-methods.

Result: The solution vector x' (if a feasible solution exists); the solution of the original problem is obtained from

    xⱼ = Σ_{r=1}^{k_j} x'_{jr} ,   ∀ j=1,…,n .
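Steps 1–3 — choosing breakpoints and taking the chord gradients as the new objective coefficients — can be sketched as follows. This is a minimal illustration, not from the book, assuming a hypothetical convex term fⱼ(xⱼ) = xⱼ² and equidistant intervals:

```python
def chord_coefficients(f, u, k):
    """Steps 1-3 for one variable: split [0, u] into k equidistant intervals
    and return the chord gradients c_r = (f(u_r) - f(u_{r-1})) / a."""
    a = u / k                                  # interval length
    breaks = [r * a for r in range(k + 1)]     # u_0 = 0, ..., u_k = u
    return [(f(breaks[r]) - f(breaks[r - 1])) / a for r in range(1, k + 1)]

# hypothetical convex term f_j(x_j) = x_j**2 on [0, 4] with 4 intervals
coeffs = chord_coefficients(lambda x: x * x, 4.0, 4)
print(coeffs)   # [1.0, 3.0, 5.0, 7.0] -- increasing, as convexity requires
```

For a convex minimization problem the coefficients increase with r, so the simplex method fills the interval variables x_{jr} in the correct order automatically.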
Note: The artificial variables w are only used if type-II inequalities appear.

Step 2 (1st phase): Minimize the sum of the artificial variables w with the first phase of the Two-Phase Method.
Result: A feasible solution of P (if one exists).

Step 3 (2nd phase): Minimize the sum of the artificial variables z with the first phase of the Two-Phase Method.
Result: An optimal solution of the problem. The value of the objective function can be determined by substituting the optimal values x̄ for x in the objective function.
Note: It is important to remember that no two variables xⱼ and vⱼ may be basic variables with current values greater than zero.

Example

Given the following problem:

P:  min π = 1/2·x₁² + 1/2·x₂² − 10·x₁ − 5·x₂

    (1)  x₁ + 2·x₂ ≤ 12
    (2)  5·x₁ + x₂ ≤ 15
         x₁, x₂ ≥ 0
C = (1/2  0; 0  1/2);  A = (1  2; 5  1);  b = (12; 15);  d = (−10; −5).
The first phase is carried out in tableaus T⁽¹⁾–T⁽³⁾ over the columns x₁, x₂, y₁, y₂, v₁, …, v₄, u₁, u₂, z₁, …, z₄, w₁, w₂ with right-hand side t (tableaus omitted). Here the first phase ends; the artificial variables w can be eliminated from the tableau, as they are both nbv.

The second phase is carried out in tableaus T⁽⁴⁾–T⁽⁸⁾ (tableaus omitted).

The optimal solution is: x̄ᵀ = (30/13; 45/13), with π = x̄ᵀ·C·x̄ + dᵀ·x̄ ≈ −31.7.
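The optimality of the stated solution can also be checked independently of the tableaus via the Karush-Kuhn-Tucker conditions. This check (not part of the book's method; the multiplier value 20/13 is read off from the final tableau) uses exact arithmetic:

```python
from fractions import Fraction as F

# example data: min 1/2 x1^2 + 1/2 x2^2 - 10 x1 - 5 x2
# s.t. x1 + 2 x2 <= 12, 5 x1 + x2 <= 15, x >= 0
A = [[1, 2], [5, 1]]
b = [12, 15]
x = [F(30, 13), F(45, 13)]       # solution stated above
u = [F(0), F(20, 13)]            # multipliers read off from the final tableau

grad = [x[0] - 10, x[1] - 5]     # gradient of the objective at x
# stationarity: grad f + A^T u = 0 (the nonnegativity bounds are inactive)
for j in range(2):
    assert grad[j] + sum(A[i][j] * u[i] for i in range(2)) == 0
# primal feasibility and complementary slackness
slack = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
assert all(s >= 0 for s in slack)
assert all(u[i] * slack[i] == 0 for i in range(2))

obj = (x[0] ** 2 + x[1] ** 2) / 2 - 10 * x[0] - 5 * x[1]
print(obj, float(obj))   # -825/26, approx -31.73
```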
8.2.2 The Method of FRANK and WOLFE

Hypotheses

See 8.2.1. Note: If type-II inequalities are present, start with a basic feasible solution.

Principle

See 8.2.1.
Description

Step 1: Construct the initial tableau Z⁽¹⁾:

          x     y     v     u     w   |  t
         A     I     0     0     0   |  b
        2·C    0    −I    Aᵀ     I   | −d

where t_{[(m+n)×1]} with tᵀ = (b; −d).
Step 2 (1st phase): Minimize the sum of the artificial variables w with the first phase of the Two-Phase Method.

Step 3: Define the vector p, p_{[(2m+2n)×1]}, so that pᵀ = (x; y; v; u); then the vector p̃ with p̃ᵀ = (v; u; x; y) is adjunctive to p.
Is pᵀ·p̃ = 0?
If yes: Stop, the optimal solution has been obtained.
If no: Go to step 4.

Step 4: Let pᵢ ∈ p be the basic variable in the i-th row of the current tableau; then x̄ᵢ := pᵢ. Determine the values x̄ᵢ ∀ i=1,…,m+n.

Step 5: Let the new objective function be given by the coefficients

    z̄ⱼ := Σ_{i=1}^{m+n} z_{ij}·x̄ᵢ ,   ∀ j : nbv ,  with right-hand side  Σ_{i=1}^{m+n} tᵢ·x̄ᵢ ;

then the initial tableau of the next phase is given by the last tableau of the preceding phase with the new objective function.

Step 6 (next phase): Execute one tableau-transformation according to the Primal Simplex-Algorithm.
Result: A solution x*; y*; v*; u* with the value π* of the objective function.

Step 7: Define the vector p*, so that p*ᵀ = (x*; y*; v*; u*); then the vector p̃* with p̃*ᵀ = (v*; u*; x*; y*) is adjunctive to p*.
Is p*ᵀ·p̃* = 0?
If yes: Stop, the optimal solution has been obtained.
If no: Go to step 8.

Step 8: Is π* < π/2?
If yes: Go to step 9.
If no: Go to step 6.

Step 9: Compute

    μ := min{ 1 ; [(p − p*)ᵀ·p̃] / [(p̃* − p̃)ᵀ·(p* − p)] } ;   p̄ := p + μ·(p* − p) .

Step 10: Is μ = 1?
If yes: Set p := p*; π := π*/2. Go to step 6 with the current solution.
If no: Set p := p̄ and determine the vector p̃ which is adjunctive to p. Go to step 4 with the current solution.

Note: If in step 6 more than one column may be selected as the pivot-column, then select that one which assures the following:

    xⱼ·vⱼ = 0  ∀ j=1,…,n ;   yᵢ·uᵢ = 0  ∀ i=1,…,m .
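The loop of steps 4–10 is, in substance, a Frank-Wolfe iteration: linearize the objective at the current point, minimize the linearization over the feasible region, and step toward that minimizer. A minimal sketch (not the book's tableau formulation) for the example problem of 8.2.1, solving the linear subproblem by enumerating the known vertices of the feasible region instead of the simplex method:

```python
def frank_wolfe(grad, vertices, x, iters=100):
    """Minimize a convex quadratic with identity Hessian over conv(vertices)
    by the Frank-Wolfe scheme with exact line search."""
    for _ in range(iters):
        g = grad(x)
        # linear minimization "oracle": best vertex for the linearized objective
        s = min(vertices, key=lambda v: g[0] * v[0] + g[1] * v[1])
        d = (s[0] - x[0], s[1] - x[1])
        gap = -(g[0] * d[0] + g[1] * d[1])   # Frank-Wolfe duality gap
        if gap <= 1e-12:
            break
        # exact line search for f(x + t d): t* = gap / (d^T d) since Hessian = I
        t = min(1.0, gap / (d[0] * d[0] + d[1] * d[1]))
        x = (x[0] + t * d[0], x[1] + t * d[1])
    return x

# example of 8.2.1: min 1/2 x1^2 + 1/2 x2^2 - 10 x1 - 5 x2, gradient (x1-10, x2-5);
# vertices of {x1 + 2 x2 <= 12, 5 x1 + x2 <= 15, x >= 0}, enumerated by hand
verts = [(0.0, 0.0), (3.0, 0.0), (0.0, 6.0), (2.0, 5.0)]
x_opt = frank_wolfe(lambda x: (x[0] - 10.0, x[1] - 5.0), verts, (0.0, 0.0))
print(x_opt)   # close to (30/13, 45/13), i.e. about (2.3077, 3.4615)
```

On this tiny instance the iteration reaches the optimum on the edge between (3,0) and (2,5) after two line searches; on larger problems the linear subproblem would of course be solved by a simplex method, as the text describes.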
Example

Given the following problem:

P:  min π = 1/2·x₁² + 1/2·x₂² − 10·x₁ − 5·x₂
    x₁ + 2·x₂ ≤ 12
    5·x₁ + x₂ ≤ 15
    x₁, x₂ ≥ 0

(the same as in 8.2.1)

Tableaus Z⁽¹⁾–Z⁽³⁾ (first phase; tableaus omitted). Here the first phase ends; the artificial variables w can be eliminated from the following tableaus.

pᵀ = (0; 0; 12; 15; 15; 0; 0; 5);  p̃ᵀ = (15; 0; 0; 5; 0; 0; 12; 15);  pᵀ·p̃ ≠ 0;
x̄₁ = 0;  x̄₂ = 5;  x̄₃ = 15;  x̄₄ = 0;  π = −150;

Tableaus Z⁽⁴⁾ and Z⁽⁵⁾ (tableaus omitted). After Z⁽⁵⁾:

p*ᵀ = (18; 0; 0; 5; 3; 0; 9; 0);  p̃*ᵀ = (3; 0; 9; 0; 18; 0; 0; 5);  p*ᵀ·p̃* ≠ 0;  π* = −75 = π/2;

μ = min{ 1 ; [(−3; 0; 3; 15; −3; 0; 0; 0)·(15; 0; 0; 5; 0; 0; 12; 15)ᵀ] / [(3; 0; −3; −15; 3; 0; 0; 0)·(3; 0; 0; 0; 3; 0; −3; −15)ᵀ] } = min{1; 5/3} = 1;

μ = 1 → p := p*;  π := π*/2 = −75/2;

Tableau Z⁽⁶⁾ (tableau omitted):

p*ᵀ = (30/13; 45/13; 36/13; 0; 0; 0; 0; 20/13);  p̃*ᵀ = (0; 0; 0; 20/13; 30/13; 45/13; 36/13; 0);
p*ᵀ·p̃* = 0 → Stop, x̄ᵀ = (30/13; 45/13) is an optimal solution with the value of the objective function π ≈ −31.7.

Note: In this example the pivot-elements have been selected in such a way that steps 4–10 were necessary. By selecting other pivots, an optimal solution for this problem could have been obtained after step 3.
8.2.3 The Method of BEALE

Hypotheses

Given the following problem:

P:  min π = xᵀ·C·x + dᵀ·x
    A·x ≤ b ,  x ≥ 0 ,

where C_{[n×n]}, d_{[n×1]}; A, x, b as defined. The matrix C must be positive semidefinite.

Principle

Starting with a basic feasible solution, an alternating sequence of intermediate tableaus and end tableaus is constructed by simplex-iterations. Whereas the coefficients of the "simplex"-portion of an end tableau are already computed for the next basic solution in the intermediate tableau, those of the "objective"-portion are only partially determined: they must undergo another pivoting.

Description

Step 1: Formulate the matrix

    C̄ := ( 0    dᵀ/2 ;  d/2   C ) ,   C̄_{[(n+1)×(n+1)]} ,   so that  π = (1; x)ᵀ·C̄·(1; x) .

Step 2: Add slack variables y to the system of inequalities and solve the system for y.
Step 3: Construct the initial tableau Z: the upper ("simplex") portion contains the restriction rows (−A | b), the lower ("objective") portion contains the matrix C̄, bordered by the variables 1, x₁, …, xₙ and closed by the objective-row.

Step 4: Does an element c̄_{n+1,s} ≠ 0 exist in the objective-row in a column s corresponding to a "free" variable u?
If yes: Select this column s for the pivot-column. Go to step 6.
If no: Go to step 5.

Step 5: Is c̄_{n+1,j} ≥ 0 for every column j that corresponds to a variable x or y?
If yes: Stop, an optimal solution has been obtained.
If no: Select one of the columns with c̄_{n+1,j} < 0 for the pivot-column (let this be column s). Go to step 6.

Step 6: Determine the row r, so that

    q_r := min { bᵢ/|a_{is}|  (sign a_{is} = sign c̄_{n+1,s}) ;  |c̄_{n+1,s}|/c̄_{ss}  (c̄_{ss} > 0) } .
Herewith the pivot-element z_{rs} is given.

Step 7: Is z_{rs} ∈ A (i.e. does the pivot lie in the simplex-portion)?
If yes: Go to step 8.
If no: Go to step 9.

Step 8: Replace the variable at the top of column s of the tableau Z by the variable corresponding to row r. Go to step 10.

Step 9: Replace the variable at the top of column s of the tableau Z by a "free" variable u. (If necessary, these variables must be indexed.)

Step 10: Construct the intermediate tableau Z*, so that

    z*_{is} := z_{is}  ∀ i ;   z*_{rj} := 0  ∀ j ≠ s ;   z*_{ij} := z_{ij}  otherwise.

Step 11: Consider row s̄ in the "objective"-portion of tableau Z*. (It is always determined by the intersection of column s with the main diagonal in the objective-tableau.) Replace the variable in row s̄ at the side of tableau Z* by the variable corresponding to column s.

Step 12: Construct the tableau Z**, so that

    z**_{ij} := z*_{ij}  for  z*_{ij} ∈ {simplex-tableau} ;
    z**_{s̄j} := z*_{s̄j} / z_{rs}  ∀ j in row s̄ of the objective-tableau ;
    z**_{ij} := z*_{ij} − z_{ri}·z**_{s̄j}  otherwise, with i ≠ s̄ .

Set Z := Z**; go to step 4.

Note: For a problem with restrictions A·x ≤ b, start with a basic feasible solution.
Example

Given the following problem:

P:  min π = 1/2·x₁² + 1/2·x₂² − 10·x₁ − 5·x₂
    (1)  x₁ + 2·x₂ ≤ 12
    (2)  5·x₁ + x₂ ≤ 15
         x₁, x₂ ≥ 0

(the same as in 8.2.1; tableau sequence omitted)

First iteration: pivot-column s = column 1;  q_r = min{ 12/|−1| ; 15/|−5| ; |−5|/(1/2) } = 3  →  pivot-row r = row 4.

Second iteration: pivot-column s = column 2; the minimum ratio leads to pivot-row r = row 6.

After this no pivot-column can be selected → Stop, an optimal solution has been obtained: x̄ = (30/13; 45/13); the value of the objective function is π = −825/26 ≈ −31.7.
8.2.4 An Algorithm for the Solution of Linear Complementarity Problems (LEMKE)

Hypotheses

Given the following problem:

P':  min π = xᵀ·C·x + dᵀ·x
     A·x ≥ b ,  x ≥ 0 ,

where C_{[n×n]}, d_{[n×1]}; C must be positive semidefinite.

Principle

In this algorithm a special simplex-tableau is used, which derives from the formulation of the nonlinear problem as a linear complementarity problem. An iterative process determines a complementary basic solution.

Description

Step 1: Formulate the linear complementarity problem P from P':

P:  y = A·x − b + e_m·z₀
    v = d + 2·C·x − Aᵀ·u + e_n·z₀ ,

or in matrix notation:

    (y; v) = (−b; d) + ( 0  A ; −Aᵀ  2·C )·(u; x) + z₀·(e_m; e_n) ,

or in tableau notation:

        y    v    u      x     z₀  |  t
        I    0    0     −A   −e_m  | −b
        0    I    Aᵀ  −2·C   −e_n  |  d

where:
x : problem variables, y : slack variables of P';  x, y ≥ 0
u : dual variables, v : dual slack variables of P';  u, v ≥ 0
z₀ : artificial variable, z₀ ≥ 0
e_k : k-dimensional summing vector
t_{[(m+n)×1]} ;  tᵀ = (−b; d) ;
the variables (yᵢ, uᵢ) and (vⱼ, xⱼ) are called "pairs of complementary variables".
Step 2: Is tᵢ ≥ 0 ∀ i=1,…,m+n?
If yes: Stop, the current solution x is optimal; the value of the objective function is π = f(x).
If no: Go to step 3.

Step 3: Determine t_r := min_i {tᵢ}. The pivot-element is given by row r and column z₀. Execute one tableau-transformation according to the Primal Simplex-Algorithm.
Result: z₀ has become a bv; one variable yᵢ or vⱼ has become nonbasic. Now the following holds:

    (z₀ > 0) ∧ (yᵢ·uᵢ = 0 ∀ i=1,…,m) ∧ (vⱼ·xⱼ = 0 ∀ j=1,…,n) ,

i.e. an "almost" complementary basic solution has been obtained.

Step 4: Is z₀ = 0?
If yes: Stop, the current solution x is a complementary basic solution; the value of the objective function is π = f(x).
If no: Go to step 5.

Step 5: Select that column for the pivot-column whose complementary variable became nonbasic in the last iteration. The pivot-row is determined according to the Primal Simplex-Algorithm.

Step 6: Does a feasible pivot-element still exist?
If yes: Execute one tableau-transformation. Go to step 4.
If no: Stop, P' has an unbounded solution.
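Whether a proposed point solves the linear complementarity problem of step 1 can be verified directly from the definitions. A sketch with exact arithmetic, using the data and the solution of the following example (this is only the final check, not Lemke's pivoting itself):

```python
from fractions import Fraction as F

# data of the following example, in the form A x >= b
A = [[-1, -2], [-5, -1]]
b = [-12, -15]
d = [-10, -5]
x = [F(30, 13), F(45, 13)]       # candidate solution
u = [F(0), F(20, 13)]            # candidate dual variables

# step 1 with z0 = 0:  y = A x - b,  v = d + 2C x - A^T u  (here 2C = I)
y = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
v = [d[j] + x[j] - sum(A[i][j] * u[i] for i in range(2)) for j in range(2)]

# complementary basic solution: everything nonnegative, pairs complementary
assert all(t >= 0 for t in x + y + u + v)
assert all(y[i] * u[i] == 0 for i in range(2))
assert all(v[j] * x[j] == 0 for j in range(2))
print(y, v)   # y = (36/13, 0), v = (0, 0)
```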
Example

Given the following problem:

P':  min π = 1/2·x₁² + 1/2·x₂² − 10·x₁ − 5·x₂
     −x₁ − 2·x₂ ≥ −12
     −5·x₁ − x₂ ≥ −15
     x₁, x₂ ≥ 0

(the same problem as in 8.2.1), i.e.

A = (−1  −2; −5  −1);  b = (−12; −15);  2·C = (1  0; 0  1);  d = (−10; −5).

The tableau sequence of the complementary pivoting is omitted here.

→ Stop, x̄ = (30/13; 45/13) is an optimal solution; the value of the objective function is π ≈ −31.7.
8.2.5 The Gradient Projection Method (ROSEN)

Hypotheses

Given the following problem:

P:  min π
    A·x
… 500 → Stop; C_tot = $4,000; x̄ = 2,000; B = 5 is the optimal inventory policy.

Example 2

Given the example of (11.0.1) with a delivery time of 90 days (assuming 360 days per year).

t_d = 90/360 = 1/4;  D(t_d) = (1/4)·10,000 = 2,500;  x = 2,000 < 2,500 → Stop;
x̄ = 2,500;  B = 10,000/2,500 = 4;  C_tot = … is the optimal inventory policy.
11.0.4 An Inventory Model with Damage to Stock

Hypotheses

Given the Classical Inventory Model (11.0.1), in which damages to the stock are considered.

Definitions

D : number of items bought per period
ΔD : number of items damaged per period
p : cost per item.

Let D̄ := D − ΔD ;  B := D̄/x .

Procedure

Determine the optimal number x̄ to order, so that

    x̄ := √( p²·ΔD²/c_i² + 2·D̄·c_o/c_i ) − p·ΔD/c_i ;

the optimal number of orders B̄, so that

    B̄ := D̄ · √( (c_i·x̄ + 2·p·ΔD) / (2·c_o·D̄·x̄) ) ;

the corresponding total cost C_tot, so that

    C_tot := B̄·c_o + c_i·x̄/2 + p·ΔD .
Example

A pocket computer company will sell 500 items per month. Because of a lack of qualified help, 50 items will be damaged beyond repair each month. The costs per order and restocking of the store are $3,506.25; the inventory costs are $10 per item and month; the cost for one pocket computer is $100. What is the optimal number of pocket computers to order, and how many orders should be placed per month? What is the minimum price for one pocket computer if the company does not wish to incur any loss for the damaged items?

D̄ = 500;  ΔD = 50;  D = 550;  c_o = 3,506.25;  c_i = 10;  p = 100;

x̄ = √( 100²·50²/10² + 2·500·3,506.25/10 ) − 100·50/10 = √600,625 − 500 = 275;

B̄ = 500 · √( (10·275 + 2·100·50) / (2·3,506.25·500·275) ) ≈ 1.82  (≈ 2 orders);

C_tot = 2·3,506.25 + 10·275/2 + 100·50 = $13,387.5;

C_tot/D̄ = 13,387.5/500 = 26.775 → the minimum price for one pocket computer is $100 + 26.775 = $126.775.
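The three formulas can be evaluated in a few lines. A minimal sketch (rounding the number of orders to a whole number, as in the worked example above, is an assumption of this sketch):

```python
from math import sqrt

def damaged_stock_policy(d_bar, delta_d, p, c_o, c_i):
    """Order quantity, number of orders, and total cost per 11.0.4.
    d_bar: usable items per period; delta_d: items damaged per period."""
    x = sqrt((p * delta_d / c_i) ** 2 + 2 * d_bar * c_o / c_i) - p * delta_d / c_i
    b = d_bar / x                       # equals the B-formula of the text
    orders = round(b)                   # whole orders, as in the worked example
    c_tot = orders * c_o + c_i * x / 2 + p * delta_d
    return x, b, c_tot

x, b, c_tot = damaged_stock_policy(500, 50, 100, 3506.25, 10)
print(x, round(b, 2), c_tot)   # 275.0 1.82 13387.5
```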
11.0.5 An Inventory Model with Rebates (different price intervals)

Hypotheses

Given the Classical Inventory Model (11.0.1), in which rebates are considered. A rebate is granted on the price when an order exceeds a specified amount.

Definitions

rⱼ : minimal selling quantity at the j-th price quotation;  r₁ < r₂ < … < rₙ
εⱼ : price reduction per item at the j-th price quotation;  ε₁ < ε₂ < … < εₙ
c_ij : inventory cost per item at the j-th price quotation;  c_i0 > c_i1 > … > c_in .

Description

Step 1: Determine

    C_tot⁽⁰⁾ := √( 2·c_o·c_i0·D ) ;
    C_tot⁽ʲ⁾ := c_o·D/rⱼ + c_ij·rⱼ/2 − εⱼ·D ,   ∀ j=1,…,n .

Step 2: Determine

    C_tot⁽ˢ⁾ := min_{j∈[0;n]} { C_tot⁽ʲ⁾ } .

Step 3: Is s = 0?
If yes: Go to step 4.
If no: Go to step 5.

Step 4: Stop; the optimal number x̄ to order is x̄ = √(2·c_o·D/c_i0); the optimal number of orders is B̄ = √(D·c_i0/(2·c_o)); the corresponding total cost is C_tot := C_tot⁽⁰⁾.

Step 5: Stop; the optimal number x̄ to order is x̄ := r_s; the optimal number of orders is B̄ := D/r_s; the corresponding total cost is C_tot := C_tot⁽ˢ⁾.
Example

A wine bottler requires 30,000 bottles every 3 months. The supplier offers the following rebates: on orders of 4,000 bottles, $0.25 off; on orders of 5,000 bottles, $0.50 off; on orders of 10,000 bottles, $0.6 off. The cost per order is $1,000. The inventory costs are $10 without rebate and $8, $6 or $5 with increasing number of bottles ordered. How many orders should be placed, and what is the total cost?

D = 30,000;  c_o = 1,000;  c_i0 = 10;
r₁ = 4,000;  ε₁ = 1/4;  c_i1 = 8;
r₂ = 5,000;  ε₂ = 1/2;  c_i2 = 6;
r₃ = 10,000;  ε₃ = 2/3;  c_i3 = 5;

C_tot⁽⁰⁾ = √(2·1,000·10·30,000) ≈ $24,494.9;
C_tot⁽¹⁾ = 1,000·30,000/4,000 + 8·4,000/2 − (1/4)·30,000 = $16,000;
C_tot⁽²⁾ = 1,000·30,000/5,000 + 6·5,000/2 − (1/2)·30,000 = $6,000;
C_tot⁽³⁾ = 1,000·30,000/10,000 + 5·10,000/2 − (2/3)·30,000 = $8,000;

C_tot⁽ˢ⁾ = C_tot⁽²⁾ → s = 2;  x̄ = r₂ = 5,000;  B̄ = 30,000/5,000 = 6;  C_tot = $6,000.
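The comparison of the no-rebate EOQ cost with each quotation (steps 1–5) can be sketched as:

```python
from math import sqrt

def best_rebate_policy(D, c_o, c_i0, quotes):
    """11.0.5: quotes = [(r_j, eps_j, c_ij), ...] with growing r_j.
    Returns order quantity, number of orders, and total cost."""
    x_best = sqrt(2 * c_o * D / c_i0)          # classical lot size (s = 0)
    c_best = sqrt(2 * c_o * c_i0 * D)
    for r, eps, c_i in quotes:
        cost = c_o * D / r + c_i * r / 2 - eps * D
        if cost < c_best:
            c_best, x_best = cost, r
    return x_best, D / x_best, c_best

x, B, c_tot = best_rebate_policy(30000, 1000, 10,
                                 [(4000, 0.25, 8), (5000, 0.5, 6), (10000, 2/3, 5)])
print(x, B, c_tot)   # 5000 6.0 6000.0
```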
11.0.6 An Inventory Model with Respect to Transportation Capacity

Hypotheses

Given the Classical Inventory Model (11.0.1), in which the capacity of the conveyance (e.g. truck, airplane, etc.) is considered; that is, because of limitations in the transporting capacity, more than one vehicle or more than one trip may be necessary to deliver one order.

Definitions

x̂ : capacity of the vehicle
p : number of required trips per order
C_f : fixed transportation cost per order
c_v : variable transportation cost per trip
C : transportation cost per order;  C := C_f + c_v·p , if (p−1)·x̂ < x ≤ p·x̂ .

Description

Step 1: Calculate

    p₁ := [ D·c_v + √( D²·c_v² + 2·c_i·x̂²·C_f·D ) ] / ( c_i·x̂² )

and set p⁽¹⁾ := [p₁] + 1.

Step 2: Calculate

    p₂ := √( 2·C_f·D / (c_i·x̂²) )

and set p⁽²⁾ := [p₂];  p⁽³⁾ := [p₂] + 1.

Step 3: Calculate

    C⁽¹⁾ := √( 2·c_i·D·(C_f + c_v·p⁽¹⁾) ) ;
    C⁽ʲ⁾ := C_f·D/(x̂·p⁽ʲ⁾) + c_v·D/x̂ + c_i·x̂·p⁽ʲ⁾/2 ,   j = 2, 3 .

Step 4: Determine

    C_tot := min_{j=1,2,3} { C⁽ʲ⁾ } .

Step 5: Determine the optimal number x̄ to order, so that x̄ := x̂·p⁽ʲ⁾ if C_tot = C⁽ʲ⁾ (j=1,2,3); the optimal number of orders is B̄ := D/x̄; per order, p⁽ʲ⁾ trips are required.

Note: In general B̄ ∉ ℕ (rounding off is necessary).
Example

A furniture company sells 400 wall cabinets yearly. The firm owns only one type of truck, which can transport only 5 cabinets per trip to the company warehouse. The inventory cost per item and year is $60; the transportation cost function is C = 1,800 + 250·p. How many orders should be placed? How many trips are necessary? What is the total cost?

D = 400;  x̂ = 5;  c_i = 60;  C_f = 1,800;  c_v = 250;

p₁ = [400·250 + √(400²·250² + 2·60·5²·1,800·400)] / (60·5²) ≈ 140.18  →  p⁽¹⁾ = 141;
p₂ = √(2·1,800·400/(60·5²)) ≈ 30.98  →  p⁽²⁾ = 30;  p⁽³⁾ = 31;

C⁽¹⁾ = √(2·60·400·(1,800 + 250·141)) ≈ $42,171;
C⁽²⁾ = 1,800·400/(5·30) + 250·400/5 + 60·5·30/2 = $29,300;
C⁽³⁾ = 1,800·400/(5·31) + 250·400/5 + 60·5·31/2 ≈ $29,295.16;

C_tot = C⁽³⁾ ≈ $29,295.16;  x̄ = 5·31 = 155;  B̄ = 400/155 ≈ 2.58;

B̄ ∉ ℕ, so it is necessary to round off:
(1) B = 2 → x = 200; p = 40; the total cost then is $29,600;
(2) B = 3 → x₁ = 135; x₂ = 135; x₃ = 130 (Σ xᵢ = 400); p₁ = 27; p₂ = 27; p₃ = 26 (Σ pⱼ = 80); the total cost then is ≈ $29,450.
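The three candidate trip counts and their costs (steps 1–5) can be computed directly:

```python
from math import sqrt, floor

def transport_policy(D, cap, c_i, C_f, c_v):
    """11.0.6: evaluate the three candidate trip counts p(1), p(2), p(3)."""
    p1 = (D * c_v + sqrt((D * c_v) ** 2 + 2 * c_i * cap ** 2 * C_f * D)) / (c_i * cap ** 2)
    p2 = sqrt(2 * C_f * D / (c_i * cap ** 2))
    cand = [floor(p1) + 1, floor(p2), floor(p2) + 1]
    costs = [sqrt(2 * c_i * D * (C_f + c_v * cand[0])),
             C_f * D / (cap * cand[1]) + c_v * D / cap + c_i * cap * cand[1] / 2,
             C_f * D / (cap * cand[2]) + c_v * D / cap + c_i * cap * cand[2] / 2]
    j = min(range(3), key=lambda i: costs[i])
    return cand[j], cap * cand[j], costs[j]

p, x, c_tot = transport_policy(400, 5, 60, 1800, 250)
print(p, x, round(c_tot, 2))   # 31 155 29295.16
```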
12. Sequencing Models
12.0.1 JOHNSON's Algorithm for Two Machines

Hypotheses

Given a matrix D_{[m×2]}, where d_{ij} is the working time of order Aᵢ on machine Mⱼ; d_{ij} > 0 ∀ i=1,…,m; j=1,2. Under the scheduling rule that each order is placed on M₁ first and then on M₂, determine a planning schedule so that the total duration for all orders is minimal.

Principle

In the algorithm a minimal element is determined in the matrix of "working time requirements", and the corresponding order is placed as close as possible to the beginning or the end of the schedule. Such orders are eliminated from the matrix, and the next order is positioned in the same manner. The total time requirement is determined in a GANTT-diagram.

Description

Step 1: Define S := {d_{ij}} and set the running-indices p := 1; q := m.

Step 2: Determine d_{rs} := min {d_{ij} | d_{ij} ∈ S}.
Note: If more than one element d_{ij} is minimal, select one of them at random.

Step 3: Is s = 1?
If yes: Set A⁽ᵖ⁾ := A_r; p := p + 1. Go to step 4.
If no: Set A⁽q⁾ := A_r; q := q − 1. Go to step 4.

Step 4: Define S := S − {d_{rj}} ∀ j=1,2.

Step 5: Is S = ∅?
If yes: Go to step 6.
If no: Go to step 2.

Step 6: The sequence (A⁽¹⁾, …, A⁽ᵐ⁾) is an optimal planning schedule. Determine (e.g. in a GANTT-diagram) the total duration for all orders.

Example

Given the following matrix D:

         M₁   M₂
    A₁    2    4
    A₂    7    5
    A₃    6    4
    A₄    9    8
    A₅    8    2
    A₆    7    1
    A₇    3    9
    A₈    5    1

S = {d_{ij}} ∀ i=1,…,8; j=1,2;  p = 1;  q = 8;

d_{rs} = d₈₂, s = 2 → A⁽⁸⁾ = A₈;  q = 7;
d_{rs} = d₆₂, s = 2 → A⁽⁷⁾ = A₆;  q = 6;
d_{rs} = d₅₂, s = 2 → A⁽⁶⁾ = A₅;  q = 5;
d_{rs} = d₁₁, s = 1 → A⁽¹⁾ = A₁;  p = 2;
d_{rs} = d₇₁, s = 1 → A⁽²⁾ = A₇;  p = 3;
d_{rs} = d₃₂, s = 2 → A⁽⁵⁾ = A₃;  q = 4;
d_{rs} = d₂₂, s = 2 → A⁽⁴⁾ = A₂;  q = 3;
S = {d₄₁, d₄₂};  d_{rs} = d₄₂, s = 2 → A⁽³⁾ = A₄;  q = 2;

S = ∅ → the optimal sequence of the orders is (A₁, A₇, A₄, A₂, A₃, A₅, A₆, A₈); the corresponding GANTT-diagram is omitted here. The total duration for all orders is 48 t.u.; machine M₂ has a total idle-time of 14 t.u.
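JOHNSON's rule is often stated in an equivalent sorted form: orders whose M₁-time does not exceed their M₂-time go to the front in increasing M₁-order, the rest to the back in decreasing M₂-order. A sketch reproducing the example (the sorted formulation, not the element-wise elimination of the text, but it yields the same optimal schedules):

```python
def johnson_two_machines(jobs):
    """JOHNSON's rule: jobs = {name: (t_m1, t_m2)}; returns an optimal sequence."""
    front = sorted((it for it in jobs.items() if it[1][0] <= it[1][1]),
                   key=lambda it: it[1][0])          # short M1 jobs first
    back = sorted((it for it in jobs.items() if it[1][0] > it[1][1]),
                  key=lambda it: -it[1][1])          # short M2 jobs last
    return [name for name, _ in front + back]

def makespan(seq, jobs):
    """Simulate the two machines and return the total duration."""
    end1 = end2 = 0
    for name in seq:
        t1, t2 = jobs[name]
        end1 += t1                      # M1 processes the orders back to back
        end2 = max(end2, end1) + t2     # M2 waits for M1 and for itself
    return end2

jobs = {'A1': (2, 4), 'A2': (7, 5), 'A3': (6, 4), 'A4': (9, 8),
        'A5': (8, 2), 'A6': (7, 1), 'A7': (3, 9), 'A8': (5, 1)}
seq = johnson_two_machines(jobs)
print(seq, makespan(seq, jobs))   # ['A1', 'A7', 'A4', 'A2', 'A3', 'A5', 'A6', 'A8'] 48
```

Ties (here A₆ and A₈, both with 1 t.u. on M₂) may be broken either way without changing the optimal total duration of 48 t.u.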
12.0.2 JOHNSON's Algorithm for Three Machines (special case)

Hypotheses

Given a matrix D_{[m×3]}, where d_{ij} is the working time of order Aᵢ on machine Mⱼ; d_{ij} > 0 ∀ i=1,…,m; j=1,2,3. Each order requires all machines; the sequence is given with M₁, M₂, M₃. Determine a planning schedule so that the total duration for all orders is minimal.

Principle

If the maximum time requirement on the second machine is less than or equal to the minimum time requirements on the first and third machines, then the time requirements of the first and second as well as those of the second and third machine are combined, and the problem is handled as in 12.0.1.

Description

Step 1: Is (max_i {d_{i2}} ≤ min_i {d_{i1}}) ∧ (max_i {d_{i2}} ≤ min_i {d_{i3}})?
If yes: Go to step 2.
If no: Stop, the algorithm cannot determine an optimal solution for this problem.

Step 2: Set up the matrix D̃_{[m×2]}, so that

    d̃_{i1} := d_{i1} + d_{i2} ;   d̃_{i2} := d_{i2} + d_{i3} ,   ∀ i=1,…,m ,

and apply (12.0.1) to the problem corresponding to the matrix D̃.
Result: An optimal sequence of orders (A⁽¹⁾, …, A⁽ᵐ⁾); the total duration for all orders is determined in a GANTT-diagram, using the data from matrix D.

Example

Given a matrix D for six orders A₁, …, A₆ and the combined matrix D̃ (the individual entries and the GANTT-diagram are omitted here). (12.0.1) applied to the matrix D̃ yields an optimal sequence of the orders (A⁽¹⁾, …, A⁽⁶⁾); the total duration for all orders is 39 t.u.; machine M₂ has a total idle-time of 19 t.u., M₃ a total idle-time of 8 t.u.
12.0.3 A Heuristic Solution for a Sequencing Problem

Hypotheses

Given the matrix D_{[m×n]}, where d_{ij} is the working time of order Aᵢ on machine Mⱼ; d_{ij} > 0 ∀ i=1,…,m; j=1,…,n; n > 3. Each order requires all machines; the sequence is given with M₁, …, Mₙ. Determine a planning schedule so that the total duration for all orders is as near to the minimum time as possible.

Principle

In this method the following rule is obeyed: the order with the shortest working time requirement (on machine M₁) comes first, the one with the second shortest comes second, etc. If this rule does not lead to a unique sequence, then the time requirement on the next machine determines which order has priority.

Description

Step 1: Define S := {d_{ij}}; Q := S; set the running-index p := 1.

Step 2: Set the running-index s := 1.

Step 3: Determine d_{rs} := min {d_{is} | (d_{is} ∈ S) ∧ (d_{is} ∈ Q)}.

Step 4: ∃ (d_{is} = d_{rs} | i ≠ r)?
If yes: Go to step 5.
If no: Go to step 7.

Step 5: Define Q := Q − {d_{kj} | d_{ks} > d_{rs}} ∀ j=1,…,n and set s := s + 1.

Step 6: Is s ≤ n?
If yes: Go to step 3.
If no: Go to step 8.

Step 7: Set A⁽ᵖ⁾ := A_r; go to step 9.

Step 8: Set A⁽ᵖ⁾ := A_r, where r := min {i | d_{is} ∈ Q}.

Step 9: Define S := S − {d_{rj}} ∀ j=1,…,n;  Q := S.

Step 10: Is S = ∅?
If yes: Go to step 11.
If no: Set p := p + 1; go to step 2.

Step 11: The sequence (A⁽¹⁾, …, A⁽ᵐ⁾) approximates the optimal schedule for the orders. Determine the total duration for all orders in a GANTT-diagram.

Note: If at some point two or more orders are waiting for the same machine, choose as next the order which has the shorter working time on that machine. To be exact, solve the remaining subproblem with one of the preceding algorithms.

Example

Given the following matrix D:

         M₁   M₂   M₃   M₄
    A₁    6    6    4    4
    A₂    1    3    5    1
    A₃    3    3    2    6
    A₄    2    4    6    3
    A₅    3    1    2    6

S = Q = {d_{ij}} ∀ i=1,…,5; j=1,…,4;  p = 1;
d_{rs} = d₂₁ → A⁽¹⁾ = A₂;  p = 2;
d_{rs} = d₄₁ → A⁽²⁾ = A₄;  p = 3;
d_{rs} = d₃₁ = d₅₁ (tie);  s = 2:  d_{rs} = d₅₂ → A⁽³⁾ = A₅;  p = 4;
d_{rs} = d₃₁ → A⁽⁴⁾ = A₃;  p = 5;
d_{rs} = d₁₁ → A⁽⁵⁾ = A₁;
S = ∅ → an approximate optimal sequence of the orders is (A₂, A₄, A₅, A₃, A₁).

k := min {j | Σ_{i=1}^{j} xᵢ > T/2}  determines the (cost-minimal) location for the enterprise, where T := Σ_{j=1}^{n} xⱼ.

Result: The location n_k is optimal. The distances between the stations have no influence on the solution. The total transportation cost is

    C_tot := c_t · Σ_{j=1}^{n} a_{jk}·xⱼ   m.u.

Example

Given five locations, in which the supplies/markets of an enterprise are located. The enterprise demands/supplies the following quantity in each location: xⱼ = (50; 30; 20; 60; 40). Let the distances between the stations be a₁₂ = 15; a₂₃ = 20; a₃₄ = 10; a₄₅ = 15; and c_t = 2.5. What is the optimal location for the enterprise?

T = Σ xⱼ = 200; at k = 3 (or, when beginning from the other side, at k = 4) half of the total quantity to be transported is reached; here two optimal locations exist, n₃ and n₄; the total transportation cost is

C_tot(n₃) = 2.5·(35·50 + 20·30 + 0·20 + 10·60 + 25·40) = 9,875 m.u.
C_tot(n₄) = 2.5·(45·50 + 30·30 + 10·20 + 0·60 + 15·40) = 9,875 m.u.
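The median rule of this section can be sketched as follows; station positions are taken as cumulative distances, which — as stated above — do not influence the choice of the optimal station, only the cost:

```python
def line_location(positions, amounts, c_t):
    """13.1.3: choose the station where the cumulative transport quantity
    first reaches half of the total; then price all movements to it."""
    total = sum(amounts)
    cum = 0
    for k, a in enumerate(amounts):
        cum += a
        if 2 * cum >= total:        # half of the total quantity reached
            break
    cost = c_t * sum(abs(positions[j] - positions[k]) * amounts[j]
                     for j in range(len(amounts)))
    return k, cost

# stations n1..n5 at cumulative distances 0, 15, 35, 45, 60 (from a12, a23, a34, a45)
k, cost = line_location([0, 15, 35, 45, 60], [50, 30, 20, 60, 40], 2.5)
print(k + 1, cost)   # 3 9875.0  (location n3; the tie with n4 gives the same cost)
```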
13.1.4 The Optimal Plant Location with Respect to Rectangular Transportation Movements

Hypotheses
Given n locations n_k, k = 1,...,n, in which the suppliers and customers of an enterprise are located. The locations are identified by their co-ordinates (y_1k; y_2k); to each of these locations corresponds a known quantity x_k of sales/procurement. Find the optimal location (y_1*; y_2*) for the enterprise under the assumption that transportation movements are only possible parallel to the two co-ordinate axes. Transportation costs of c_t m.u./q.u. and d.u. are assumed.

Principle
The method determines an optimum with respect to each co-ordinate. The point of intersection of both optima yields the optimal station site.

Description
Step 1: Determine the total quantity to be transported

   T := Σ (k=1,...,n) x_k .

Step 2: (changing the indices) Assign the index i to the y_1-co-ordinate (abscissa) and the index j to the y_2-co-ordinate (ordinate); the location n_k with the p-smallest y_1-value receives the index i = p, ∀ p = 1,...,n; the location n_k with the q-smallest y_2-value receives the index j = q, ∀ q = 1,...,n. Result: The location n_k is now represented by n_ij; the corresponding transportation amount is now x_ij.

Step 3: Determine the index r, so that

   r := min { i | Σ (i'=1,...,i) Σ (j) x_i'j > T/2 } .

Step 4: Determine the index s, so that

   s := min { j | Σ (j'=1,...,j) Σ (i) x_ij' > T/2 } .

Result: The location n_rs with the co-ordinates (y_1*; y_2*) = (y_1r; y_2s) is the optimal station site; the total transportation cost is

   C_tot := c_t · Σ (k=1,...,n) ( |y_1* − y_1k| + |y_2* − y_2k| ) · x_k   m.u.

Example
Given the following five locations n_k with the co-ordinates (y_1k; y_2k) and the transportation quantity x_k:

   n_k | y_1k | y_2k | x_k
   n_1 |  2   |  6   |  80
   n_2 |  5   |  5   |  20
   n_3 |  8   |  3   |  40
   n_4 |  3   |  2   |  50
   n_5 |  6   |  9   |  60

c_t = 1;  T = Σ (k=1,...,5) x_k = 250;

re-indexing yields n_1 = n_14, n_2 = n_33, n_3 = n_52, n_4 = n_21, n_5 = n_45. The cumulative quantities along the y_1-axis are 80, 130, 150, 210, 250; along the y_2-axis they are 50, 90, 110, 190, 250; T/2 = 125.

r = 2; s = 4; n_24 with the co-ordinates (y_1*; y_2*) = (3; 6) is the optimal location; the total transportation cost is

C_tot = [(|3−2|+|6−6|)·80 + (|3−5|+|6−5|)·20 + (|3−8|+|6−3|)·40 + (|3−3|+|6−2|)·50 + (|3−6|+|6−9|)·60] = 1,020 m.u.
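The median rule of 13.1.4 is easy to program. The following Python sketch (the function names are ours, not the handbook's) reproduces the example above:

```python
def rectangular_location(points, quantities):
    """Median method of 13.1.4: on each axis, take the co-ordinate at
    which the cumulative transported quantity first exceeds T/2."""
    T = sum(quantities)

    def median_coord(axis):
        # step 2: sort the locations by the co-ordinate on this axis
        order = sorted(range(len(points)), key=lambda k: points[k][axis])
        cum = 0
        for k in order:                 # steps 3/4: cumulative scan
            cum += quantities[k]
            if cum > T / 2:
                return points[k][axis]

    return median_coord(0), median_coord(1)

def rectangular_cost(site, points, quantities, c_t):
    # C_tot = c_t * sum over k of (|y1*-y1k| + |y2*-y2k|) * x_k
    return c_t * sum((abs(site[0] - p[0]) + abs(site[1] - p[1])) * x
                     for p, x in zip(points, quantities))

pts = [(2, 6), (5, 5), (8, 3), (3, 2), (6, 9)]
qty = [80, 20, 40, 50, 60]
site = rectangular_location(pts, qty)            # (3, 6)
cost = rectangular_cost(site, pts, qty, c_t=1)   # 1020
```

Because only cumulative sums along each axis are needed, the method runs in O(n log n) time, dominated by the two sorts.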
13.2 Heuristic Methods

13.2.1 The Center of Gravity Method

Hypotheses
Given n locations n_j, j = 1,...,n, in which the suppliers and customers of an enterprise are located. The locations are identified by their co-ordinates (y_1j; y_2j); to each of these locations corresponds a known quantity x_j of sales/procurement; a direct connection exists between all locations. Assuming transportation costs of c_t m.u./q.u. and d.u., determine an optimal location (y_1*; y_2*) for the enterprise.

Principle
This method determines the "center of gravity" of the n-sided polygon.

Procedure
Determine

   y_1* := [ Σ (j=1,...,n) x_j^k · y_1j ] / [ Σ (j=1,...,n) x_j^k ] ;

   y_2* := [ Σ (j=1,...,n) x_j^k · y_2j ] / [ Σ (j=1,...,n) x_j^k ] ,   where k ∈ N;

the total transportation cost is

   C_tot := c_t · Σ (j=1,...,n) x_j · √( (y_1* − y_1j)² + (y_2* − y_2j)² )   m.u.

Example
Given the following five locations n_j with the co-ordinates (y_1j; y_2j) and the transportation quantity x_j:

   n_j | y_1j | y_2j | x_j
   n_1 |  2   |  6   |  80
   n_2 |  5   |  5   |  20
   n_3 |  8   |  3   |  40
   n_4 |  3   |  2   |  50
   n_5 |  6   |  9   |  60

c_t = 2.5;

(a) k = 2:

   y_1* = (80²·2 + 20²·5 + 40²·8 + 50²·3 + 60²·6) / (80² + 20² + 40² + 50² + 60²) ≈ 3.9103 ;

   y_2* = (80²·6 + 20²·5 + 40²·3 + 50²·2 + 60²·9) / (80² + 20² + 40² + 50² + 60²) ≈ 5.6966 ;

the location with the co-ordinates (y_1*; y_2*) = (3.9103; 5.6966) is optimal;

   C_tot = 2.5·(80·1.92 + 20·1.3 + 40·4.91 + 50·3.81 + 60·4.79) = 2,135.55 m.u.;

(b) k = 1:

   y_1* = (80·2 + 20·5 + 40·8 + 50·3 + 60·6) / (80 + 20 + 40 + 50 + 60) = 4.36 ;

   y_2* = (80·6 + 20·5 + 40·3 + 50·2 + 60·9) / (80 + 20 + 40 + 50 + 60) = 5.36 ;

the location with the co-ordinates (y_1*; y_2*) = (4.36; 5.36) is optimal;

   C_tot = 2.5·(80·2.47 + 20·0.72 + 40·6.1 + 50·3.61 + 60·3.94) = 2,183.05 m.u.
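The center-of-gravity formulas translate directly into code. A Python sketch (function names are ours) reproducing cases (a) and (b):

```python
import math

def center_of_gravity(points, quantities, k=1):
    """Weighted centroid of 13.2.1: weights are the k-th powers of the
    transportation quantities (k a natural number)."""
    w = [x ** k for x in quantities]
    y1 = sum(wj * p[0] for wj, p in zip(w, points)) / sum(w)
    y2 = sum(wj * p[1] for wj, p in zip(w, points)) / sum(w)
    return y1, y2

def euclidean_cost(site, points, quantities, c_t):
    # C_tot = c_t * sum of x_j * straight-line distance to location j
    return c_t * sum(x * math.hypot(site[0] - p[0], site[1] - p[1])
                     for p, x in zip(points, quantities))

pts = [(2, 6), (5, 5), (8, 3), (3, 2), (6, 9)]
qty = [80, 20, 40, 50, 60]
site_b = center_of_gravity(pts, qty, k=1)   # case (b): ≈ (4.36, 5.36)
site_a = center_of_gravity(pts, qty, k=2)   # case (a): ≈ (3.9103, 5.6966)
```

Note that the centroid minimizes the sum of weighted *squared* distances, not the weighted distances themselves, which is why 13.2.2 and 13.2.3 can still improve on it.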
13.2.2 A Solution by Vector Summation

Hypotheses
See 13.2.1. In addition, a (small) value ε is given, which denotes the tolerated distance of the solution location from the optimal location.

Principle
The method determines a sequence of resultants (whose length is monotone decreasing), in whose direction the current suboptimal solution is shifted. If the length of the resultant becomes less than or equal to the given value ε, the current suboptimum is considered "almost optimal".

Description
Step 1: Assume the convex hull of the locations has been constructed through a certain connection of the locations. Select an arbitrary point S = (y_1; y_2) inside the convex hull for the initial solution.
Note: If any of the given locations lie inside the convex hull, it is convenient to select the location (inside the convex hull) with the greatest transportation quantity.

Step 2: Determine

   tan α^(j) := (y_2j − y_2) / (y_1j − y_1) ,   j = 1,...,n .

Step 3: Determine the points S^j = (ŷ_1j; ŷ_2j), j = 1,...,n, lying on the ray from S toward n_j at the distance λ·x_j from S:

   ŷ_1j := y_1 + λ·x_j·(y_1j − y_1) / √( (y_1j − y_1)² + (y_2j − y_2)² ) ;

   ŷ_2j := y_2 + tan α^(j)·(ŷ_1j − y_1) ,   where λ ∈ (0; 1] .

(A location coinciding with S contributes S^j = S.)

Step 4: Determine the endpoint P = (P_1; P_2) of the resultant by vector summation:

   P_1 := Σ (j=1,...,n) ŷ_1j − (n−1)·y_1 ;   P_2 := Σ (j=1,...,n) ŷ_2j − (n−1)·y_2 .

Step 5: Is γ := √( (P_1 − y_1)² + (P_2 − y_2)² ) ≤ ε ?
If yes: Stop, S is "almost optimal".
If no: Go to step 6.

Step 6: Determine δ := +1, if P_1 > y_1; δ := −1, if P_1 < y_1.

Step 7: Determine the point S' = (y_1'; y_2') with tan β := (P_2 − y_2)/(P_1 − y_1), so that

   y_1' := y_1 + δ·(γ/4)·1/√( 1 + (tan β)² ) ;

   y_2' := y_2 + tan β·(y_1' − y_1) ,

and the new initial solution S = (y_1; y_2), where y_1 := y_1'; y_2 := y_2' + 0.1. Go to step 2.

Note 1: Instead of completing step 6 and step 7, a new initial solution can be determined by selecting any point between S and P.
Note 2: The vector summation can also be done graphically. This procedure is convenient when solving smaller problems by hand.

Example
Given the following five locations n_j with the co-ordinates (y_1j; y_2j) and the transportation quantity x_j:

   n_j | y_1j | y_2j | x_j
   n_1 |  2   |  6   |  80
   n_2 |  5   |  5   |  20
   n_3 |  8   |  3   |  40
   n_4 |  3   |  2   |  50
   n_5 |  6   |  9   |  60

S = n_2 = (5; 5);  c_t = 2.5;  ε = 0.3;  λ = 0.1;

(1) tan α^(1) = −1/3;  S^1 = (−2.59; 7.53);
    tan α^(2) undefined (n_2 = S);  S^2 = S = (5; 5);
    tan α^(3) = −2/3;  S^3 = (8.33; 2.78);
    tan α^(4) = 3/2;   S^4 = (2.23; 0.84);
    tan α^(5) = 4;     S^5 = (6.46; 10.82);
    P = (−0.57; 6.97);  γ = 5.908 > ε;  tan β = −0.358;  δ = −1;
    S' = (3.61; 5.5);  S = (3.61; 5.6);

(2) tan α^(1) = −0.248;  S^1 = (−4.154; 7.525);
    tan α^(2) = −0.432;  S^2 = (5.446; 4.807);
    tan α^(3) = −0.592;  S^3 = (7.052; 3.563);
    tan α^(4) = 5.92;    S^4 = (2.775; 0.655);
    tan α^(5) = 1.423;   S^5 = (7.06; 10.509);
    P = (3.739; 4.659);  γ = 0.95 > ε;  tan β = −7.295;  δ = 1;
    S' = (3.64; 5.36);  S = (3.64; 5.46);

(3) tan α^(1) = −0.329;  S^1 = (−3.959; 7.96);
    tan α^(2) = −0.338;  S^2 = (5.535; 4.82);
    tan α^(3) = −0.564;  S^3 = (7.124; 3.495);
    tan α^(4) = 5.406;   S^4 = (2.731; 0.544);
    tan α^(5) = 1.5;     S^5 = (6.968; 10.452);
    P = (3.839; 5.431);  γ = 0.201 < 0.3 = ε  →  Stop, S = (3.64; 5.46) is an optimal location for ε = 0.3;

C_tot = 2.5·(80·1.73 + 20·1.44 + 40·5.01 + 50·3.52 + 60·4.25) = 1,996.5 m.u.

Graphical solution: (figure omitted; note: 10 q.u. are equivalent to 1 d.u.)
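The vector-summation idea can also be sketched in code. The Python version below follows Note 1 — instead of steps 6 and 7 it simply moves S a fixed fraction of the way toward P on every pass; the fraction, the iteration cap, and the function names are our choices, not the handbook's:

```python
import math

def vector_summation(points, quantities, start, eps,
                     lam=0.1, frac=0.1, max_iter=500):
    """Heuristic of 13.2.2: from S, lay off a vector of length lam*x_j
    toward every location, sum them to get the resultant S->P, and
    (per Note 1) shift S part-way toward P until the resultant's
    length gamma is at most eps."""
    y1, y2 = start
    for _ in range(max_iter):
        r1 = r2 = 0.0
        for (a, b), x in zip(points, quantities):
            d = math.hypot(a - y1, b - y2)
            if d > 0:                    # a location coinciding with S drops out
                r1 += lam * x * (a - y1) / d
                r2 += lam * x * (b - y2) / d
        if math.hypot(r1, r2) <= eps:    # gamma = length of the resultant
            break
        y1 += frac * r1                  # move S toward P = S + resultant
        y2 += frac * r2
    return y1, y2

pts = [(2, 6), (5, 5), (8, 3), (3, 2), (6, 9)]
qty = [80, 20, 40, 50, 60]
site = vector_summation(pts, qty, start=(5, 5), eps=0.3)   # ≈ (3.8, 5.4)
```

The resultant is (up to the factor λ) the negative gradient of the weighted distance sum, so each pass is a small descent step; the monotone shrinking of γ mentioned in the Principle is exactly the gradient vanishing near the optimum.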
13.2.3 An Iterative Method

Hypotheses
See 13.2.1. In addition, a (small) value ε is given, which denotes the minimal shift of a solution P^(k−1) to P^(k). Set the running index k := 0.

Principle
The method determines a sequence of points in the plane, until the length of the straight line between two consecutive points becomes less than or equal to the given value ε. The last generated co-ordinates are considered "almost optimal".

Description
Step 1: Select an arbitrary point P^(0) = (y_1^(0); y_2^(0)) for the initial solution.
Note: It is convenient to select the point (y_1*; y_2*) of (13.2.1) for the initial solution.

Step 2: Compute, with  d_j^(k) := √( (y_1^(k) − y_1j)² + (y_2^(k) − y_2j)² ),  j = 1,...,n :

   y_1^(k+1) := [ Σ (j=1,...,n) x_j·y_1j / d_j^(k) ] / [ Σ (j=1,...,n) x_j / d_j^(k) ] ;

   y_2^(k+1) := [ Σ (j=1,...,n) x_j·y_2j / d_j^(k) ] / [ Σ (j=1,...,n) x_j / d_j^(k) ] .

Step 3: Is γ_(k+1) := √( (y_1^(k+1) − y_1^(k))² + (y_2^(k+1) − y_2^(k))² ) ≤ ε ?
If yes: Stop, the point P^(k+1) = (y_1^(k+1); y_2^(k+1)) is optimal with respect to the given value ε. The total transportation cost is

   C_tot := c_t · Σ (j=1,...,n) x_j · √( (y_1^(k+1) − y_1j)² + (y_2^(k+1) − y_2j)² )   m.u.

If no: Set k := k + 1; go to step 2.

Example
Given the following five locations n_j with the co-ordinates (y_1j; y_2j) and the transportation quantity x_j:

   n_j | y_1j | y_2j | x_j
   n_1 |  2   |  6   |  80
   n_2 |  5   |  5   |  20
   n_3 |  8   |  3   |  40
   n_4 |  3   |  2   |  50
   n_5 |  6   |  9   |  60

c_t = 2.5;  ε = 1/20;  P^(0) = (y_1^(0); y_2^(0)) = (4; 3);

y_1^(1) = [80·2/√((4−2)²+(3−6)²) + 20·5/√((4−5)²+(3−5)²) + 40·8/√((4−8)²+(3−3)²) + 50·3/√((4−3)²+(3−2)²) + 60·6/√((4−6)²+(3−9)²)] / [80/√((4−2)²+(3−6)²) + 20/√((4−5)²+(3−5)²) + 40/√((4−8)²+(3−3)²) + 50/√((4−3)²+(3−2)²) + 60/√((4−6)²+(3−9)²)] ≈ 4.03;

y_2^(1) is computed analogously with the y_2-co-ordinates in the numerator: y_2^(1) ≈ 4.6.

To simplify, collect the results in a table as follows:

        | y_1^(k) | y_2^(k) |  γ_k
  P^(0) |  4      |  3      |   —
  P^(1) |  4.03   |  4.6    | ≈ 8/5
  P^(2) |  3.98   |  5.11   | ≈ 1/2
  P^(3) |  3.93   |  5.3    | ≈ 1/5
  P^(4) |  3.87   |  5.37   | ≈ 9/100
  P^(5) |  3.83   |  5.4    | ≈ 1/20

γ_5 ≤ ε = 1/20  →  Stop, P^(5) = (3.83; 5.4) is an optimal location for ε = 1/20;

C_tot = 2.5·(80·1.92 + 20·1.24 + 40·4.81 + 50·3.5 + 60·4.2) = 1,994.5 m.u.
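Step 2's update is a fixed-point iteration: the new point is the distance-weighted average of the locations (it coincides with the well-known Weiszfeld procedure). A Python sketch (function names are ours):

```python
import math

def iterative_location(points, quantities, start, eps):
    """Iterative method of 13.2.3: replace the current point by the
    average of the locations weighted by x_j / d_j, stopping once two
    consecutive points are closer than eps.  (If an iterate ever falls
    exactly on a location, d_j = 0 and the update is undefined; that
    does not occur with this data.)"""
    y1, y2 = start
    while True:
        inv = [x / math.hypot(y1 - a, y2 - b)        # x_j / d_j^(k)
               for (a, b), x in zip(points, quantities)]
        n1 = sum(w * a for w, (a, b) in zip(inv, points)) / sum(inv)
        n2 = sum(w * b for w, (a, b) in zip(inv, points)) / sum(inv)
        gamma = math.hypot(n1 - y1, n2 - y2)         # shift gamma_(k+1)
        y1, y2 = n1, n2
        if gamma <= eps:
            return y1, y2

pts = [(2, 6), (5, 5), (8, 3), (3, 2), (6, 9)]
qty = [80, 20, 40, 50, 60]
site = iterative_location(pts, qty, start=(4, 3), eps=1/20)  # ≈ (3.8, 5.4)
```

With exact arithmetic the iterates differ slightly from the hand-rounded values tabulated above, but they settle on the same neighbourhood of (3.8; 5.4), confirming the example's stopping point.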
Appendix

Table 1:  q^(−k) = (1 + i)^(−k)

 i \ k    1      2      3      4      5      6      7      8      9     10
 0.01  0.9901 0.9803 0.9706 0.9610 0.9515 0.9421 0.9327 0.9234 0.9143 0.9053
 0.02  0.9804 0.9612 0.9423 0.9239 0.9057 0.8879 0.8705 0.8535 0.8368 0.8203
 0.03  0.9709 0.9426 0.9152 0.8885 0.8626 0.8375 0.8131 0.7894 0.7664 0.7441
 0.04  0.9615 0.9246 0.8890 0.8548 0.8219 0.7903 0.7599 0.7307 0.7026 0.6756
 0.05  0.9524 0.9070 0.8639 0.8227 0.7835 0.7462 0.7107 0.6768 0.6446 0.6139
 0.06  0.9434 0.8900 0.8396 0.7921 0.7473 0.7050 0.6651 0.6274 0.5919 0.5584
 0.07  0.9346 0.8734 0.8163 0.7629 0.7130 0.6664 0.6227 0.5820 0.5439 0.5083
 0.08  0.9259 0.8573 0.7938 0.7350 0.6806 0.6302 0.5835 0.5403 0.5003 0.4632
 0.09  0.9174 0.8417 0.7722 0.7084 0.6499 0.5963 0.5470 0.5019 0.4604 0.4224
 0.10  0.9091 0.8264 0.7513 0.6830 0.6209 0.5645 0.5132 0.4665 0.4241 0.3855
 0.11  0.9009 0.8116 0.7312 0.6587 0.5934 0.5346 0.4816 0.4339 0.3909 0.3522
 0.12  0.8929 0.7972 0.7118 0.6355 0.5658 0.5066 0.4523 0.4039 0.3606 0.3220
 0.13  0.8850 0.7831 0.6930 0.6133 0.5428 0.4803 0.4251 0.3762 0.3329 0.2946
 0.14  0.8772 0.7695 0.6750 0.5921 0.5194 0.4556 0.3996 0.3506 0.3075 0.2697
 0.15  0.8696 0.7561 0.6575 0.5718 0.4972 0.4323 0.3759 0.3269 0.2843 0.2472
Table 2:  q^k = (1 + i)^k

 i \ k    1      2      3      4      5      6      7      8      9     10
 0.01  1.0100 1.0201 1.0303 1.0406 1.0510 1.0615 1.0721 1.0829 1.0937 1.1046
 0.02  1.0200 1.0404 1.0612 1.0824 1.1041 1.1262 1.1487 1.1717 1.1951 1.2190
 0.03  1.0300 1.0609 1.0927 1.1255 1.1593 1.1941 1.2299 1.2668 1.3048 1.3439
 0.04  1.0400 1.0816 1.1249 1.1699 1.2167 1.2653 1.3159 1.3686 1.4233 1.4802
 0.05  1.0500 1.1025 1.1576 1.2155 1.2763 1.3401 1.4071 1.4775 1.5513 1.6289
 0.06  1.0600 1.1236 1.1910 1.2625 1.3382 1.4185 1.5036 1.5938 1.6895 1.7908
 0.07  1.0700 1.1449 1.2250 1.3108 1.4026 1.5007 1.6058 1.7182 1.8385 1.9672
 0.08  1.0800 1.1664 1.2597 1.3605 1.4693 1.5869 1.7138 1.8509 1.9990 2.1589
 0.09  1.0900 1.1881 1.2950 1.4116 1.5386 1.6771 1.8280 1.9926 2.1719 2.3674
 0.10  1.1000 1.2100 1.3310 1.4641 1.6105 1.7716 1.9487 2.1436 2.3579 2.5937
 0.11  1.1100 1.2321 1.3676 1.5181 1.6851 1.8704 2.0762 2.3045 2.5580 2.8394
 0.12  1.1200 1.2544 1.4049 1.5735 1.7623 1.9738 2.2107 2.4760 2.7731 3.1058
 0.13  1.1300 1.2769 1.4429 1.6305 1.8424 2.0820 2.3526 2.6584 3.0040 3.3946
 0.14  1.1400 1.2996 1.4815 1.6890 1.9254 2.1950 2.5023 2.8526 3.2519 3.7072
 0.15  1.1500 1.3225 1.5209 1.7490 2.0114 2.3131 2.6600 3.0591 3.5179 4.0456
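The entries of Tables 1 and 2 can be regenerated from the defining formulas; a short Python sketch (function names are ours):

```python
def discount_factor(i, k):
    """Table 1 entry q^-k = (1+i)^-k: present value of one m.u.
    due after k periods at interest rate i."""
    return (1 + i) ** -k

def compound_factor(i, k):
    """Table 2 entry q^k = (1+i)^k: value of one m.u. after
    k periods of compound interest at rate i."""
    return (1 + i) ** k

# spot checks against the tables above
t1 = round(discount_factor(0.05, 10), 4)   # 0.6139
t2 = round(compound_factor(0.10, 7), 4)    # 1.9487
```

Rounding to four decimals reproduces the tabulated values.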
Table 3:  e^(−k)

  k    e^-k     k    e^-k     k    e^-k     k    e^-k
 .01  .9900   .26  .7711   .51  .6005   .76  .4677
 .02  .9802   .27  .7634   .52  .5945   .77  .4630
 .03  .9704   .28  .7558   .53  .5886   .78  .4584
 .04  .9608   .29  .7483   .54  .5827   .79  .4538
 .05  .9512   .30  .7408   .55  .5769   .80  .4493
 .06  .9418   .31  .7334   .56  .5712   .81  .4449
 .07  .9324   .32  .7261   .57  .5655   .82  .4404
 .08  .9231   .33  .7189   .58  .5599   .83  .4360
 .09  .9139   .34  .7118   .59  .5543   .84  .4317
 .10  .9048   .35  .7047   .60  .5488   .85  .4274
 .11  .8958   .36  .6977   .61  .5434   .86  .4232
 .12  .8869   .37  .6907   .62  .5379   .87  .4190
 .13  .8781   .38  .6839   .63  .5326   .88  .4148
 .14  .8694   .39  .6771   .64  .5273   .89  .4107
 .15  .8607   .40  .6703   .65  .5220   .90  .4066
 .16  .8521   .41  .6637   .66  .5169   .91  .4025
 .17  .8437   .42  .6570   .67  .5117   .92  .3985
 .18  .8353   .43  .6505   .68  .5066   .93  .3946
 .19  .8270   .44  .6440   .69  .5016   .94  .3906
 .20  .8187   .45  .6376   .70  .4966   .95  .3867
 .21  .8106   .46  .6313   .71  .4916   .96  .3829
 .22  .8025   .47  .6250   .72  .4868   .97  .3791
 .23  .7945   .48  .6188   .73  .4819   .98  .3753
 .24  .7866   .49  .6126   .74  .4771   .99  .3716
 .25  .7788   .50  .6065   .75  .4724  1.00  .3679

  k    e^-k     k     e^-k
 1.1  .3329   3.1  .04505
 1.2  .3012   3.2  .04076
 1.3  .2725   3.3  .03688
 1.4  .2466   3.4  .03337
 1.5  .2231   3.5  .03020
 1.6  .2019   3.6  .02732
 1.7  .1827   3.7  .02472
 1.8  .1653   3.8  .02237
 1.9  .1496   3.9  .02024
 2.0  .1353   4.0  .01832
 2.1  .1225   4.1  .01657
 2.2  .1108   4.2  .01500
 2.3  .1003   4.3  .01357
 2.4  .09072  4.4  .01228
 2.5  .08208  4.5  .01111
 2.6  .07427  4.6  .01005
 2.7  .06721  4.7  .009095
 2.8  .06081  4.8  .008230
 2.9  .05502  4.9  .007447
 3.0  .04979  5.0  .006738
Table 4:  Random numbers with a uniform distribution

 2049 9135 6601 5112 5266 6728 2188 3846 3734 4017
 7087 2825 8667 8831 1617 7239 9622 1622 0409 5822
 6187 0189 5748 0380 8820 3606 7316 4297 2160 8973
 4178 3758 0191 5361 9605 9605 1526 8370 4969 0463
 3844 4187 2145 8365 5964 0501 9196 4456 6573 9751
 4932 7312 3579 7867 4000 1056 8830 1662 0869 2580
 7032 7117 8283 0767 1074 5571 8464 8057 1698 7343
 9058 0455 2463 1812 7991 6371 6881 5540 2774 2163
 8476 5802 9722 0325 1001 1337 2042 9695 5242 8808
 5749 8807 5926 7406 0537 2788 4704 8574 8016 9034
 4829 5817 7136 8337 9009 9304 4010 5817 7126 9175
 0634 5168 4614 4769 3246 7559 8221 8030 6603 4735
 2829 7794 9246 6385 4330 3750 1255 2157 9415 4161
 6851 0071 7014 9558 9956 2151 9359 6002 6458 8926
 4262 8457 9217 8257 5462 2705 9735 2004 2800 6913
 8660 7450 8436 9606 9996 9287 0972 4065 4140 9403
 1962 4877 2789 9528 4654 2356 6478 7438 6292 1122
 3100 1200 2648 0649 2406 4507 4102 2498 7771 6545
 4658 6444 6375 6053 4427 1425 1938 9075 2627 4471
 9335 9644 5599 6565 8249 6899 8688 7347 9876 0079
 5688 9609 0763 1792 4400 3490 8251 6512 1296 5359
 9981 6570 9188 3064 5262 7601 4111 7835 1855 4210
 2041 9584 5237 8034 5720 4701 9016 8033 5941 4710
 7470 7973 3232 5128 8315 7261 5645 1775 1428 1486
 9053 1349 4317 1495 9279 0474 5898 7622 0344 2412
 3358 7831 2562 1115 1940 8754 4528 0335 0755 3294
 3768 2114 2675 4256 9672 9264 3236 5791 8289 0402
 7082 0519 5926 7306 2429 0199 3925 7661 6604 4570
 7176 1045 9291 1734 5984 3088 0943 9323 7545 8128
 8817 4047 7333 7390 2280 7320 5015 7812 6053 4372
 8572 8448 9060 0079 5633 0388 9623 1694 6614 2802
 7245 8673 9770 8346 9333 9368 4390 5368 8324 6634
 0787 2616 6460 9258 4275 9127 7982 4834 4933 7102
 5476 8770 7390 2335 2677 4597 7797 8760 5522 0374
 7715 3563 4950 3707 8933 3102 1587 7336 7943 2301
 3454 5165 5122 7100 5089 1244 5316 2230 3731 4669
 5173 2842 5529 0841 7762 4943 5279 4453 6010 7884
 6982 3868 0176 8023 7819 4782 5676 7465 8792 7513
 0130 3536 0034 6191 0704 7602 3990 2271 8877 6844
 1198 3035 9335 9699 4403 3048 8234 1416 3706 9143
 4999 4950 4053 6294 0680 3117 4294 2768 1003 1568
 3922 9964 3487 8903 6533 5209 2952 5523 0274 9608
 0974 3689 7763 5119 6602 4891 5275 5181 2128 5327
 4153 8232 9981 9184 2291 5232 6985 4320 2048 9300
 3392 6048 5311 1391 8125 9314 5933 6146 7525 2079
 3621 5593 7559 8211 6141 8419 3933 7992 6591 6890
 7087 2714 8663 8057 1587 7347 9831 0485 7876 3919
 9456 8382 2860 2270 9033 5050 5825 5589 8277 1817
Table 5:  Area under the standardized normal distribution function

   Φ(x) = 1/√(2π) · ∫ from −∞ to x of e^(−t²/2) dt ;   Φ(−x) = 1 − Φ(x)

  x     0     1     2     3     4     5     6     7     8     9
 0.0  .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
 0.1  .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
 0.2  .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
 0.3  .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
 0.4  .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
 0.5  .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
 0.6  .7258 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7518 .7549
 0.7  .7580 .7612 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
 0.8  .7881 .7910 .7939 .7967 .7996 .8023 .8051 .8078 .8106 .8133
 0.9  .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
 1.0  .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
 1.1  .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
 1.2  .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
 1.3  .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
 1.4  .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
 1.5  .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
 1.6  .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
 1.7  .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
 1.8  .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
 1.9  .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
 2.0  .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
 2.1  .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
 2.2  .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
 2.3  .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
 2.4  .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
 2.5  .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
 2.6  .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
 2.7  .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
 2.8  .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
 2.9  .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
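Values of Table 5 can also be computed instead of looked up; a short Python sketch (the function name is ours) using the error function:

```python
import math

def phi(x):
    """Standard normal distribution function Phi(x), as tabulated in
    Table 5, expressed through the error function:
    Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

p1 = round(phi(1.0), 4)    # 0.8413, matching the table row x = 1.0
p2 = round(phi(1.96), 4)   # 0.9750
p3 = phi(-1.0)             # equals 1 - phi(1.0), per Phi(-x) = 1 - Phi(x)
```

The identity Φ(−x) = 1 − Φ(x) stated above the table follows from the symmetry of the integrand.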
Bibliography

A triple [a,b,c] shall precede each reference in the bibliography, so that:
a: chapter
b: B, if it is a book; A, if it is an article or other special research
c: running index in alphabetical order according to author.
The books and articles cited in the general bibliography encompass broad aspects of Operations Research and as such are not listed according to the chapters of this handbook. The most important journals and their abbreviations are:
CACM
: Communications of the Association for Computing Machinery
JACM
: Journal of the Association for Computing Machinery
MS
: Management Science
NRLQ
: Naval Research Logistics Quarterly
ORQ
: Operational Research Quarterly
ORSA
: Journal of the Operations Research Society of America
Op.Res.Verf.: Operations Research Verfahren
SIAM
: Journal of the Society for Industrial and Applied Mathematics
Ufo
: Unternehmensforschung
ZOR
: Zeitschrift für Operations Research
Chapter 0 [O.B.I.] Ackoff, R.L., Sasieni, M.W.; Operations Research, Verlag Kunst und Wissen, Stuttgart 1970 [O.B.2.] Angermann, A.; Entscheidungsmodelle, F. Nowack Verlag, Frankfurt/Main, 1963 [O.B.3.] Brockhoff, K.; Unternehmensforschung, W. de Gruyter, Berlin - New York 1973
374
Bibliography
[O.B.4.] Churchman, C.W.; Ackoff, R.L.; Arnoff, E.L.; Operations Research, R. Oldenbourg Verlag, Wien - München 1961 [O.B.5.] Desbazeille, G.; Unternehmensforschung, Berliner Union/ Kohlhammer, Stuttgart 1970 [O.B.6.] Dück, W.; Bliefernich, M.; ( eds.) Operationsforschung, Deutscher Verlag der Wissenschaften, Berlin 1972 [O.B.7.] Henn, R. ; Klinzi, H.P.; Einführung in die Unternehmensforschung, Band I und II, Heidelberger Taschenbücher, Bde. 38 und 39, Springer Verlag, Berlin - Heidelberg New York 1968 [O.B.8.] Kaufmann, A.; Faure, R.; Methoden des Operations Research, W. de Gruyter, Berlin - New York 1974 [O.B.9.] Körth, H.; Otto, C.; Runge, W.; Schoch, M.; (Hrsg.), Lehrbuch der Mathematik für Wirtschaftswissenschaften, Westdeutscher Verlag Opladen 1973 [O.B.IQ] Kulhavy, E.; Operations Research, Betriebswirtschaftlicher Verlag Dr. Gabler, Wiesbaden 1963 [O.B.II] Kwak, N.K.; Mathematical Programming with Business Applications, Mc Graw Hill, New York 1973 [O.B.12] Mc Millan, Jr. C.; Mathematical Programming, J. Wiley & Sons, New York 1970 [0.B.13.] Mitchell, G.H.; Operational Research: Techniques and Examples, The English Universities Press Ltd., London 1972 [O.B.14.] Müller-Merbach, H.; Operations Research, Verlag F. Vahlen, München 1973 [O.B.15.] Neumann, K. ; Operations Research A - F (working papers), Karlsruhe 1972 [O.B.16.] Sasieni, M.W.; Yaspan, A.; Friedman, L.; Operations Research: Methods and Problems, J. Wiley & Sons Inc., New York 1959 [O.B.17.] Schneeweiß, H.; Entscheidungskriterien bei Risiko, in: Ökonometrie und Unternehmensforschung Band VI, Springer Verlag, Berlin - Heidelberg - New York 1967
Bibliography
375
[O.B.18.] Stahlknecht, P.; Operations Research, in: Schriften zur Datenverarbeitung, Band 3, F. Vieweg & Sohn, Braunschweig 1970 [O.B.19.] Theil, H.; Boot, J.C.G.; Kloek, T.; Operations Research and Quantitative Economics, Mc Graw - Hill, New York 1965 [O.B.20.] Weber, H.H.; Einführung ins Operations Research, Akademische Verlagsgesellschaft, Frankfurt/Main 1972 [O.B.21.] Autorenkollektiv, Mathematische Standardmodelle der Operationsforschung, Verlag Die Wirtschaft, Berlin 1972 [O.A.1.]
Stahlknecht, P.; OR - Ein Leitfaden für Praktiker, Teil 1: Input - Output - Modelle/Optimierungsverfahren, in: Elektronische Datenverarbeitung, Beiheft 6, F. Vieweg & Sohn, Braunschweig 1965
[O.A.2.]
Stahlknecht,P.; OR - Ein Leitfaden für Praktiker, Teil 2: Simulationsmethoden/Ablauf - und Terminplanung, in: Elektronische Datenverarbeitung, Beiheft 7, F. Vieweg & Sohn, Braunschweig 1966
Chapter 1 [l.B.l.]
Arrow, K.J.; Hurwicz, L.; Uzawa, H.; Studies in Linear and Nonlinear Programming, Stanford University Press, Stanford (Calif.) 1958
[I.B.2.]
Berge, C.; Ghouila-Houri, A.; Programming, Games and Transportation Networks, J. Wiley & Sons Inc. New York 1965
[I.B.3.]
Bloech, J.; Lineare Optimierung für Wirtschaftswissenschaftler, Westdeutscher Verlag Opladen 1974
[I.B.4.]
Charnes, A; Cooper, W.W.; Henderson, A.; An Introduction
[I.B.5.]
Collatz, L.; Wetterling, W.; Optimierungsaufgaben,
to Linear Programming, J. Wiley & Sons, New York 1953 Heidelberger Taschenbücher, Band 15, Springer Verlag, Berlin - Heidelberg - New York 1966 [I.B.6.]
Cooper,L.; Steinberg, D.; Introduction to Methods of Optimization, W.B. Saunders Co., Philadelphia-London-Toronto 1970
376
Bibliography
[1.B.7.] Dantzig, G.B.; Linear Programming and Extensions, Princeton University Press, Princeton N.J. 1962
[1.B.8.] Dinkelbach, W.; Sensitivitätsanalysen und parametrische Programmierung, in: Ökonometrie und Unternehmensforschung, Band XII, Springer Verlag, Berlin - Heidelberg - New York 1969
[1.B.9.] Ferguson, R.D.; Sargent, L.F.; Linear Programming: Fundamentals and Applications, Mc Graw - Hill Comp., New York 1958
[1.B.10.] Gal, T.; Betriebliche Entscheidungen und parametrische Programmierung, W. de Gruyter, Berlin - New York 1972
[1.B.11.] Gale, D.; The Theory of Linear Economic Models, Mc Graw - Hill Book Comp., New York - Toronto - London 1960
[1.B.12.] Gass, S.I.; Linear Programming, Mc Graw - Hill Book Comp., New York 1969
[1.B.13.] Glicksman, A.M.; An Introduction to Linear Programming and the Theory of Games, J. Wiley & Sons Inc., New York - London 1963
[1.B.14.] Hadley, G.; Linear Programming, Addison Wesley Publ. Comp., Reading (Mass.) 1969
[1.B.15.] Judin, D.B.; Golstein, E.G.; Lineare Optimierung I, Akademie - Verlag, Berlin 1968
[1.B.16.] Judin, D.B.; Golstein, E.G.; Lineare Optimierung II, Akademie - Verlag, Berlin 1970
[1.B.17.] Karlin, S.; Mathematical Methods and Theory in Games, Programming and Economics, Vol. II: The Theory of Infinite Games, Addison Wesley, Reading (Mass.) 1959
[1.B.18.] Kaufmann, A.; Methods and Models of Operations Research, Prentice Hall Inc., Englewood Cliffs N.J. 1963
[1.B.19.] Krekó, B.; Lehrbuch der linearen Optimierung, Deutscher Verlag der Wissenschaften, Berlin 1964
[1.B.20.] Kromphardt, W.; Henn, R.; Förstner, K.; Lineare Entscheidungsmodelle, Springer Verlag, Berlin - Göttingen - Heidelberg 1962
[1.B.21.] Kuhn, H.W.; Tucker, A.W.; (eds.), Linear Inequalities and Related Systems, Annals of Mathematics Study No. 38, Princeton University Press, Princeton N.J. 1956
[1.B.22.] Luenberger, D.G.; Introduction to Linear and Nonlinear Programming, Addison Wesley Publ. Comp., Reading (Mass.) 1973
[1.B.23.] Naylor, H.; Byrne, E.T.; Linear Programming, Wadsworth Publ. Comp. Inc., Belmont (Calif.) 1963
[1.B.24.] Piehler, J.; Einführung in die lineare Optimierung, Verlag Harri Deutsch, Zürich - Frankfurt/Main 1969
[1.B.25.] Saaty, T.L.; Mathematical Methods of Operations Research, Mc Graw - Hill Book Comp., New York 1959
[1.B.26.] Sakarovitch, M.; Notes on Linear Programming, Van Nostrand Reinhold Comp., New York 1972
[1.B.27.] Simonnard, M.; Linear Programming, Prentice Hall Inc., Englewood Cliffs N.J. 1966
[1.B.28.] Vajda, S.; Mathematical Programming, Addison Wesley Publ. Comp., Reading (Mass.) 1961
[1.B.29.] Vajda, S.; The Theory of Games and Linear Programming, Methuen & Comp. Ltd. and Science Paperbacks, London 1970
[1.B.30.] Vazsonyi, A.; Scientific Programming in Business and Industry, J. Wiley & Sons, New York 1958
[1.B.31.] Zimmermann, H.-J.; Zielinski, J.; Lineare Programmierung, W. de Gruyter, Berlin - New York 1971
[1.A.1.]
Beale, E.M.L.; "An Alternate Method of Linear Programming", in: Proc. Cambridge Phil. Soc. 50/4, 1954
[1.A.2.] Beale, E.M.L.; "Cycling in the Dual Simplex Algorithm", in: NRLQ 2, 1955
[1.A.3.] Dantzig, G.B.; Ford Jr., L.R.; Fulkerson, D.R.; "A Primal - Dual Algorithm for Linear Programs", in: [1.B.21.]
[1.A.4.] Dantzig, G.B.; Wolfe, P.; "Decomposition Principle for Linear Programs", in: ORSA 8/1, 1960
[1.A.5.] Eaves, B.C.; "The Linear Complementarity Problem", in: MS 17/9, 1971
[1.A.6.] Frisch, R.; "The Multiplex Method for Linear Programming", in: Memorandum fra Universitetets Socialøkonomiske Institutt, Oslo 1955
[1.A.7.] Gal, T.; Nedoma, J.; "Methode zur Lösung mehrparametrischer linearer Programme", in: Op.Res.Verf. XII, 1972
[1.A.8.] Habr, J.; "Die Frequenzmethode zur Lösung des Transportproblems und verwandter linearer Programmierungsprobleme", in: Wissenschaftliche Zeitschrift der Universität Dresden 10/5, 1961
Chapter 2 [2.B.I.]
Brauer, K.M.; Binäre Optimierung, C. Heymanns Verlag, Köln 1969
[2.B.2.]
Burkard, R.E.; Methoden der ganzzahligen Optimierung, Springer Verlag, Wien - New York 1972
[2.B.3.] Cooper, L.; Steinberg, D.; [1.B.6.]
[2.B.4.] Greenberg, H.; Integer Programming, in: Mathematics in Science and Engineering, vol. 76, Academic Press, New York - London 1971
Hu, T.C.; Integer Programming and Network Flows; Addison Wesley Publ. Co. Reading (Mass.), 1970
[2.B.6.]
Korbut, A.A.; Finkelstein, J.J.; Diskrete Optimierung, Akademie - Verlag, Berlin 1971
[2.B.7.]
Saaty, T.L.; [1.B.25.]
[2.B.8.]
Saaty, T.L.; Optimization in Integers and Related Extremal Problems, Mc. Graw - Hill Book Comp., New York 1970
[2.A.I.]
Balas, E.; "An Additive Algorithm for Solving Linear Programs with Zero - One Variables", in: ORSA 13/4, 1965
[2.A.2.]
Balas, E.; "Discrete Programming by the Filter Method", in: ORSA 15/5, 1967
[2.A.3.1
Benders, J.F.; "Partitioning Procedures for Solving Mixed - Variables Programming Problems", in: Numerische Mathematik 4, 1962
[2.A.4.] Ben-Israel, A.; Charnes, A.; "On some Problems of Diophantine Programming", in: Cahiers Centre Etud. Rech. Opérat. 4, 1962
Dakin, R.J. ; "A Tree Search Algorithm for Mixed Integer
[2.A.6.]
Driebeek, N.J.; "An Algorithm for the Solution of Mixed
Programming Problems", in: The Computer Journal 8/3, 1965
Integer Programming Problems", in: MS 12/7, 1966 [2.A.7.]
Glover, F.; "A Multiphase Dual Algorithm for the Zero One Integer Programming Problem", in: ORSA 13/6, 1965
[2.A.8.3
Glover, F.; "A New Foundation for a Simplified Primal
In-
teger Programming Algorithm", in: ORSA 16/4; 1968 [2.A.9.1
Gomory, R.E.; "An all - integer programming Algorithm", in: Industrial Scheduling, Englewood Cliffs, N.J., Prentice Hall
1963
[2.A.10.] Gomory, R.E.; "Solving Linear Programming Problems in Integers", in: Proc.Sympos.Appl.Math. vol. X, 1960
380
Bibliography
[2.A.11.] Gomory, R.E.; "An Algorithm for Integer Solutions to Linear Programs", in: Graves, R.L.; Wolfe, P . ( H r s g . ) ; Recent Advances in Mathematical Programming, Mc Graw - H i l l Book Comp., New York 1963 [2.A.12.] Kolesar, P . J . ; "A Branch and Bound Algorithm for the Knapsack Problem", in: MS 13/9, 1967 [2.A.13.] Körte, B.; "Ganzzahlige Programmierung - Ein Überblick", in: Unternehmensforschung heute, Lecture Notes in Operations Research and Mathematical Systems Bd. 50, Springer Verlag, Berlin - Heidelberg - New York, 1971 [2.A.14.] Körte, B.; K r e l l e , W.; Oberhofer, W.; "Ein lexikographischer Suchalgorithmus zur Lösung allgemeiner ganzzahliger Programmierungsaufgaben", in: Ufo 13, 1969 (2 a r t i c l e s ) [2.A.15.] Land, A.H.; Doig, A.G.; "An Automatic Method of Solving Discrete Programming Problems", in: Econometrica 28/3, 1960 [2.A.16.] Lemke, C.E.; Spielberg, K.; "Direct Search Algorithm for Zero - One and Mixed - Integer Programming", in: ORSA 15/5, 1967 [2.A.17-] Noltemeier, H.; "Ganzzahlige Programme und kürzeste Wege in Graphen", in: 0p. Res. Verf. XIV, 1972 [2.A.18-] Noltemeier, H.; " S e n s i t i v i t ä t s a n a l y s e bei diskreten l i n e aren Optimierungsproblemen", in: Lecture Notes in Operations Research and Mathematical Systems vol. 30; Springer Verlag, Berlin - Heidelberg - New York, 1970 [2.A.19.] Noltemeier, H.; "Zum asymptotischen Algorithmus", in: 0p. Res. Verf. X V I I , 1973 [2.A.20.] Picard, J . - C . ; R a t l i f f , H.D.; "A Graph - Theoretic Equivalence for Integer Programs", in: ORSA 21/1; 1973 [2.A.21.] Young, R.D.; "A Simplified Primal (All Integer) Integer Programming Algorithm", in: ORSA 16/4, 1968
Bibliography
381
Chapter 3 [3.B.I.]
Bellman, R. ; Cooke, K.L.; Lockett, J.A.; Algorithms, Graphs and Computers;in: Mathematics in Science and Engineering, vol. 62, Academic Press, New York - London 1970
[3.B.2.]
Berge, C.; Ghouila - Houri, A., [I.B.2.]
[3.B.3.]
Berge, C.; The Theory of Graphs and its Applications, J. Wiley & Sons Inc.,New York 1962
[3.B.4.]
Busacker, R.G.; Saaty, T.L.; Finite Graphs and Networks, Mc Graw Hill, New York 1965
[3.B.5.]
Domschke, W.; Kürzeste Wege in Graphen: Algorithmen, Verfahrensvergleiche; Mathematical Systems in Economics, Bd. 2, Hain Verlag, Anton Hain, Meisenheim/Glan 1972
[3.B.6.]
Ford, Jr. L.R.; Fulkerson, D.R.; Flows in Networks, Princetown University Press, Princetown N.J. 1962
[3.B.7.]
Harary, F.; Graph Theory; Addison Wesley Pubi. Comp., Reading (Mass.) 1972
[3.B.8.]
Hu, T.C.; [2.B.5.]
[3.B.9.]
Kaufmann, A.; Einführung in die Graphentheorie; in: Orientierung und Entscheidung; R. Oldenbourg, München Wien 1971
[3.B.10.] Knödel, W. ; Graphentheoretische Methoden und ihre Anwendungen, in: Ökonometrie und Unternehmensforschung, Band XIII, Springer Verlag; Berlin - Heidelberg - New York 1969 [3.B.11.] König, D.; Theorie der endlichen und unendlichen Graphen, New York, Chelsea Pubi. Comp. 1950; (reprint; Leipzig, 1936) [3.B.12.] Marshall, C.W.; Applied Graph Theory; J. Wiley & Sons Inc., New York 1971 [3.B.13.] Noltemeier, H.; Graphentheorie, W. de Gruyter; Berlin New York 1975 [3.B.14.] Ore, 0.; Graphs and their Uses; Random House; The L.W. Singer Comp., New York 1963
[3.A.1.] Breyer, F.; "Ein Algorithmus zur Konstruktion kostenminimaler Zirkulationen in Netzen", in: Op. Res. Verf. XV, 1973
[3.A.2.]
Busacker, R.G.; Gowen, P . J . ; "A Procedure for Determining a Family of Minimal - Cost Network Flow P a t t e r n s " ; ORO Technical Report 15, Operations Research O f f i c e , Johns Hopkins U n i v e r s i t y , 1961
[3.A.3.]
Dantzig, G.B.; "On the Shortest Route Through a Network", i n : MS 6/2, 1960
[3.A.4.]
Dantzig, G.B.; Fulkerson, D.R.; "Computation of Maximal Flows in Networks", i n : NRLQ 2/4, 1955
[3.A.5.]
D i j k s t r a , E.W.; "A Note on Two Problems in Connection with Graphs", i n : Numerische Mathematik 1, 1959
[3.A.6.]
Domschke, W.; " E i n neues Verfahren zur Bestimmung kostenminimaler Flüsse in Kapazitätendigraphen", i n : 0p. Res. Verf. XVI, 1973
[3.A.7.]
Edmonds, J.; Karp, R.M.; "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems", in: JACM
Elmaghraby, S . E . ; "Some Network Models in Management S c i e n c e " , i n : Lecture Notes in Operations Research and Mahtematical Systems, vol. 29, Springer Verlag, B e r l i n Heidelberg - New York 1970
[3.A.9.]
Farbey, B.A.; Land, A.H.; Murchland, J . D . ; "The Cascade Algorithm for Finding a l l Shortest Distances in a Directed Graph", i n : MS 14/1, 1967
[3.A.10.] Flood, M.M.; "The Traveling Salesman Problem", i n : ORSA 4/1, 1956 [3.A.11.] Floyd, R.W.; "Algorithm 97: Shortest Path", i n : CACM 5/6, 1962 [3.A.12.] Ford, J r . L . R . ; Fulkerson, D.R.; "A Simple Algorithm for Finding Maximal Network Flows and An A p p l i c a t i o n to the Hitchcook Problem", i n : Canadian Journal of Mathematics 9/2, 1957
[3.A.13.] Fulkerson, D.R.; "An out - of - kilter Method for Minimal Cost Flow Problems", in: SIAM 9/1, 1961
[3.A.14.] Hasse, M.; "Über die Behandlung graphentheoretischer Probleme unter Verwendung der Matrizenrechnung", in: Wissenschaftliche Zeitschrift der Technischen Universität Dresden 10, 1961
[3.A.15.] Hässig, K.; "Theorie verallgemeinerter Flüsse und Potentiale", in: Op. Res. Verf. XXI, 1975
[3.A.16.] Hoffman, A.J.; "A Generalization of Max Flow - Min Cut", in: Math. Progr. 6/3, 1974
[3.A.17.] Klein, M.; "A Primal Method for Minimal Cost Flows with Applications to the Assignment and Transportation Problems", in: MS 14/3, 1967
[3.A.18.] Kruskal Jr., J.B.; "On the Shortest Spanning Tree of a Graph and the Traveling Salesman Problem", in: Proc. Am. Math. Soc. 7, 1956
[3.A.19.] Land, A.H.; Stairs, S.W.; "The Extension of the Cascade Algorithm to Larger Graphs", in: MS 14/1, 1967
[3.A.20.] Little, J.D.C.; Murty, K.G.; Sweeny, D.W.; Karel, C.; "An Algorithm for the Traveling Salesman Problem", in: ORSA 11/5, 1963
[3.A.21.] Noltemeier, H.; [2.A.17.]
[3.A.22.] Noltemeier, H.; [2.A.19.]
[3.A.23.] Noltemeier, H.; "The Distribution of Minimal Total Duration in Networks with high Arc Density", in: Op. Res. Verf. V, 1968
[3.A.24.] Noltemeier, H.; "An Algorithm for the Determination of Longest Distances in a Graph", in: Math. Progr. 9, 1975
[3.A.25.] Picard, J.-C.; Ratliff, H.D.;
[ 2.A.20. ]
[3.A.26.] Steckhan, H.; "Güterströme in Netzen"; in: Lecture Notes in Operations Research and Mathematical Systems, vol.88, Springer Verlag, Berlin - Heidelberg - New York 1973
Chapter 4
[4.B.1.] Berbig, R.; Franke, R.; Netzplantechnik, Verlag für Bauwesen, Berlin 1973
[4.B.2.] Federal Electric Corporation; A Programmed Introduction to PERT, J. Wiley & Sons Inc., New York 1965
[4.B.3.] Gewald, K.; Kasper, K.; Schelle, H.; Netzplantechnik, Band 2: Kapazitätsoptimierung, R. Oldenbourg, München - Wien 1972
[4.B.4.] Götzke, H.; Netzplantechnik, Fachbuchverlag Leipzig 1969
[4.B.5.] Küpper, W.; Lüder, K.; Streitferdt, L.; Netzplantechnik, Physica Verlag, Würzburg - Wien 1975
[4.B.6.] Miller, R.W.; PERT, R.v. Decker's Verlag G. Schenk, Hamburg - Berlin 1965
[4.B.7.] Riester, W.F.; Schwinn, R.; Projektplanungsmodelle, Physica Verlag, Würzburg - Wien 1970
[4.B.8.] Thumb, N.; Grundlagen und Praxis der Netzplantechnik, Verlag Moderne Industrie, München 1968
[4.B.9.] Voigt, J.-P.; Fünf Wege der Netzplantechnik, Verlagsgesellschaft R. Müller, Köln - Braunsfeld 1971
[4.B.10.] Wille, H.; Gewald, K.; Weber, H.D.; Netzplantechnik, Band I: Zeitplanung, R. Oldenbourg, München - Wien 1967
[4.B.11.] Zimmermann, H.J.; Netzplantechnik, Sammlung Göschen, Bd. 4011, W. de Gruyter, Berlin - New York 1971
[4.A.1.] Clingen, C.T.; "A Modification of Fulkerson's PERT Algorithm", in: ORSA 12/4, 1964
[4.A.2.] Kelley, Jr. J.E.; "Critical Path Planning and Scheduling: Mathematical Basis", in: ORSA 9/3, 1961
[4.A.3.] Lambourn, S.; "Resource Allocation and Multiproject Scheduling (RAMPS) - A New Tool in Planning and Control", in: The Computer Journal 5, 1963
[4.A.4.] Martin, J.J.; "Distribution of the Time Through a Directed Acyclic Network", in: ORSA 13/1, 1965
[4.A.5.] Meyer, H.; "Stochastische Netzwerke vom Typ GERT", in: Op. Res. Verf. XVII, 1973
[4.A.6.] Neumann, K.; "Entscheidungsnetzpläne", in: Op. Res. Verf. XIII, 1972
[4.A.7.] Noltemeier, H.; "Netzplantechnik", in: Management Lexikon, Gabler Verlag, Wiesbaden 1972
[4.A.8.] Pritsker, A.A.B.; "GERT Networks", in: The Production Engineer 1968
[4.A.9.] van Slyke, R.M.; "Monte Carlo Methods and the PERT Problem", in: ORSA 11/5, 1963
[4.A.10.] Welsh, D.J.A.; "Super-critical Arcs of PERT-Networks", in: ORSA 14/1, 1966
Chapter 5
[5.B.1.] Abt, C.C.; Serious Games, The Viking Press, New York 1970
[5.B.2.] Berge, C.; Ghouila-Houri, A.; [1.B.2.]
[5.B.3.] Burger, E.; Einführung in die Theorie der Spiele, W. de Gruyter, Berlin 1966
[5.B.4.] Collatz, L.; Wetterling, W.; [1.B.5.]
[5.B.5.] Davis, M.D.; Game Theory, Basic Books Inc., New York - London 1970
[5.B.6.] Dresher, M.; Strategische Spiele, Verlag Industrielle Organisation, Zürich 1961
[5.B.7.] Glicksman, A.M.; [1.B.13.]
[5.B.8.] Karlin, S.; [1.B.17.]
[5.B.9.] Kromphardt, W.; Henn, R.; Förstner, K.; [1.B.20.]
[5.B.10.] Luce, R.D.; Raiffa, H.; Games and Decisions, J. Wiley & Sons Inc., London - Sydney 1957
[5.B.11.] McKinsey, J.; Introduction to the Theory of Games, McGraw-Hill, New York 1952
[5.B.12.] Morgenstern, O.; Spieltheorie und Wirtschaftswissenschaft, R. Oldenbourg, Wien - München 1963
[5.B.13.] v. Neumann, J.; Morgenstern, O.; Theory of Games and Economic Behaviour, Princeton University Press, Princeton N.J. 1953
[5.B.14.] Owen, G.; Game Theory, W.B. Saunders Comp., Philadelphia 1968
[5.B.15.] Vajda, S.; [1.B.29.]
[5.A.1.] Bohnenblust, H.F.; "The Theory of Games", in: Modern Mathematics for the Engineer (ed. Beckenbach), New York 1970
[5.A.2.] Brown, R.H.; "The Solution of a Certain Two-Person Zero-Sum Game", in: ORSA 5/1, 1957
[5.A.3.] Lemke, C.E.; "Bimatrix Equilibrium Points and Mathematical Programming", in: MS 11, 1965 (ser. A)
[5.A.4.] Nash, J.F.; "Equilibrium Points in N-Person Games", in: Proc. Nat. Ac. Sc. 36, 1950
[5.A.5.] Nash, J.F.; "Non-cooperative Games", in: Annals of Mathematics 54, 1951
[5.A.6.] Neumann, K.; "Kooperative und nichtkooperative Zweipersonenspiele", in: Op. Res. Verf. IX, 1971
[5.A.7.] Rosenmüller, J.; "On a Generalization of the Lemke - Howson Algorithm to Noncooperative n-Person Games", in: SIAM 21/1, 1971
[5.A.8.] Rosenmüller, J.; "Kooperative Spiele und Märkte", in: Lecture Notes in Economics and Mathematical Systems, vol. 53, 1971
Chapter 6
[6.B.1.] Beckmann, M.J.; Dynamic Programming of Economic Decisions, Springer Verlag, Berlin - Heidelberg - New York 1968
[6.B.2.] Bellman, R.; Dynamic Programming, Princeton University Press, Princeton N.J. 1962
[6.B.3.] Bellman, R.; Dreyfus, S.; Applied Dynamic Programming, Princeton 1962
[6.B.4.] Hadley, G.; Non-Linear and Dynamic Programming, Addison Wesley Publ. Comp., Reading (Mass.) 1964
[6.B.5.] Howard, R.A.; Dynamic Programming and Markov Processes, J. Wiley & Sons Inc., New York 1960
[6.B.6.] Jacobs, O.L.R.; An Introduction to Dynamic Programming: The Theory of Multistage Decision Processes, London 1967
[6.B.7.] Nemhauser, G.L.; Einführung in die Theorie der dynamischen Programmierung, R. Oldenbourg, München - Wien 1969
[6.B.8.] Neumann, K.; Dynamische Optimierung, BI Hochschultaschenbücher Bd. 714a, Mannheim 1969
[6.B.9.] Schneeweiß, C.; Dynamisches Programmieren, Physica Verlag, Würzburg - Wien 1974
[6.B.10.] Vazsonyi, A.; [1.B.30.]
[6.A.1.] Bellman, R.; "Notes on the Theory of Dynamic Programming III: Equipment Replacement Policy", RAND Report P 632, 1955
[6.A.2.] Bellman, R.; "Dynamic Programming and the Numerical Solution of Variational Problems", in: ORSA 5/2, 1957
[6.A.3.] Bellman, R.; Dreyfus, S.; "Dynamic Programming and the Reliability of Multicomponent Devices", in: ORSA 6/2, 1958
[6.A.4.] Dreyfus, S.E.; "Computational Aspects of Dynamic Programming", in: ORSA 5/3, 1957
[6.A.5.] Ferschl, F.; "Grundzüge des Dynamic Programming", in: Ufo 3, 1959
[6.A.6.] Held, M.; Karp, R.M.; "A Dynamic Programming Approach to Sequencing Problems", in: SIAM 10, 1962
[6.A.7.] Künzi, H.P.; Müller, O.; Nievergelt, E.; "Einführungskurs in die dynamische Programmierung", in: Lecture Notes in Operations Research and Mathematical Systems, vol. 6, Springer Verlag, Berlin - Heidelberg - New York 1968
[6.A.8.] Mitten, L.G.; Nemhauser, G.L.; "Multistage Optimization", in: Chemical Engineering Progress 59, 1963
Chapter 7
[7.B.1.] Feller, W.; An Introduction to Probability Theory and its Applications I, J. Wiley & Sons Inc., New York 1957
[7.B.2.] Ferschl, F.; Zufallsabhängige Wirtschaftsprozesse, Physica Verlag, Würzburg - Wien 1964
[7.B.3.] Kaufmann, A.; [1.B.18.]
[7.B.4.] Khintchine, A.Y.; Mathematical Methods in the Theory of Queueing, in: Griffin's Statistical Monographs and Courses, No. 7
[7.B.5.] Lee, A.M.; Applied Queueing Theory, Mc Millan & Co. Ltd., London 1966
[7.B.6.] Morse, P.M.; Queues, Inventories and Maintenance, J. Wiley & Sons Inc., New York 1967
[7.B.7.] Newell, G.F.; Applications of Queueing Theory, London 1971
[7.B.8.] Page, E.; Queueing Theory in OR, Butterworth, London 1972
[7.B.9.] Prabhu, N.U.; Queues and Inventories, J. Wiley & Sons Inc., New York 1965
[7.B.10.] Ruiz-Pala, E.; Avila-Beloso, K.; Wartezeit und Warteschlange, Verlag Anton Hain, Meisenheim/Glan 1967
[7.B.11.] Saaty, T.L.; Elements of Queueing Theory, McGraw-Hill Book Comp., New York 1961
[7.B.12.] Schassberger, R.; Warteschlangen, Springer Verlag, Wien - New York 1973
[7.B.13.] Takács, L.; Introduction to the Theory of Queues, Oxford University Press, New York 1962
[7.A.1.] Doig, A.; "A Bibliography of the Theory of Queues", in: Biometrika 44, 1957
[7.A.2.] Erlang, A.K.; "Solution of Some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges", in: The Post Office Electrical Engineer's Journal 10, 1917/18
[7.A.3.] Förstner, K.; "Über das Auftreten und die Berechnung von Warteschlangen", in: Op. Res. Verf. I, 1963
[7.A.4.] Luchak, G.; "The Solution of the Single-Channel Queueing Equations Characterized by a Time-Dependent Poisson-Distributed Arrival Rate and a General Class of Holding Times", in: ORSA 4/6, 1956
[7.A.5.] Luchak, G.; "The Distribution of the Time Required to Reduce to Some Preassigned Level a Single-Channel Queue Characterized by a Time-Dependent Poisson-Distributed Arrival Rate and a General Class of Holding Times", in: ORSA 5/2, 1957
[7.A.6.] Meisling, T.; "Discrete-Time Queueing Theory", in: ORSA 6/1, 1958
[7.A.7.] Phipps, Jr. T.E.; "Machine Repair as a Priority Waiting-Line Problem", in: ORSA 4/1, 1956
[7.A.8.] Saaty, T.L.; "Résumé of Useful Formulas in Queueing Theory", in: ORSA 5/2, 1957
[7.A.9.] Scott, M.; Ulmer, M.B.; "Some Results for a Simple Queue with Limited Waiting Room", in: ZOR 16, ser. A, 1972
[7.A.10.] Stange, K.; "Zwei in Reihe geordnete Schalter (mit einer Warteschlange dazwischen) bei Exponentialverteilung der Ankünfte und Abgänge", in: Ufo 6, 1962
[7.A.11.] Takács, L.; "Priority Queues", in: ORSA 12/1, 1964
Chapter 8
[8.B.1.] Abadie, J.M. (ed.); Non-Linear Programming, North-Holland Publ. Comp., Amsterdam 1967
[8.B.2.] Arrow, K.J.; Hurwicz, L.; Uzawa, H.; [1.B.1.]
[8.B.3.] Collatz, L.; Wetterling, W.; [1.B.5.]
[8.B.4.] Cooper, L.; Steinberg, D.; [1.B.6.]
[8.B.5.] Karlin, S.; [1.B.17.]
[8.B.6.] Künzi, H.P.; Krelle, W.; Nichtlineare Programmierung, Heidelberger Taschenbücher Band 172, Springer Verlag, Berlin - Heidelberg - New York 1975 (reprint; 1962)
[8.B.7.] Luenberger, D.G.; [1.B.22.]
[8.B.8.] Mangasarian, O.L.; Nonlinear Programming, McGraw-Hill Book Comp., New York 1969
[8.B.9.] Zangwill, W.I.; Nonlinear Programming: A Unified Approach, Prentice Hall Inc., Englewood Cliffs, N.J. 1969
[8.B.10.] Zoutendijk, G.; Methods of Feasible Directions, Elsevier Publ. Comp., Amsterdam 1960
[8.A.1.] Beale, E.M.L.; "On Quadratic Programming", in: NRLQ 6/3, 1959
[8.A.2.] Beale, E.M.L.; "On Minimizing a Convex Function Subject to Linear Inequalities", in: J. Roy. Stat. Soc. 17 B, 1955
[8.A.3.] Czap, H.; "Dualität bei nichtlinearen Optimierungsproblemen und exakten Penalty-Funktionen", in: Op. Res. Verf. XXI, 1975
[8.A.4.] Fiacco, A.V.; McCormick, G.P.; "The Sequential Unconstrained Minimization Technique for Nonlinear Programming, a Primal-Dual Method", in: MS 10/2, 1964
[8.A.5.] Frank, M.; Wolfe, P.; "An Algorithm for Quadratic Programming", in: NRLQ 3/1 & 2, 1956
[8.A.6.] Kuhn, H.W.; Tucker, A.W.; "Non-Linear Programming", in: Proc. of the Second Berkeley Symp. on Math. Stat. and Probab., Berkeley (Calif.), University Press 1950
[8.A.7.] Lemke, C.E.; "A Method of Solution for Quadratic Programs", in: MS 8/4, 1962
[8.A.8.] Markowitz, H.M.; "The Optimization of a Quadratic Function Subject to Linear Constraints", in: NRLQ 3/1 & 2, 1956
[8.A.9.] Moeschlin, O.; "Zur Stützpunktwahl beim Schnittverfahren der nichtlinearen Programmierung, Teil I und II", in: Ufo 14, 1970
[8.A.10.] Rosen, J.B.; "Gradient Projection Method for Non-Linear Programming: Part I, Linear Constraints", in: SIAM 8/1, 1960
[8.A.11.] Rosen, J.B.; "The Gradient Projection Method for Non-Linear Programming: Part II", in: SIAM 9/4, 1961
[8.A.12.] Wolfe, P.; "Methods of Non-Linear Programming", in: Graves, R.L.; Wolfe, P. (eds.); Recent Advances in Mathematical Programming, McGraw-Hill Book Comp., New York 1963
[8.A.13.] Wolfe, P.; "The Simplex-Method for Quadratic Programming", in: Econometrica 27, 1959
[8.A.14.] Zangwill, W.I.; "Non-Linear Programming via Penalty Functions", in: MS 13/5, 1967
Chapter 9
[9.B.1.] Buxton, J.N. (ed.); Simulation Programming Languages, in: Proceedings of the IFIP Working Conference on Simulation Programming Languages, North Holland Publ. Comp., Amsterdam 1968
[9.B.2.] Hammersley, J.M.; Handscomb, D.C.; Monte Carlo Methods, Methuen, London 1964
[9.B.3.] Koxholt, R.; Die Simulation - ein Hilfsmittel der Unternehmensforschung, R. Oldenbourg, München - Wien 1967
[9.B.4.] Krüger, S.; Simulation, W. de Gruyter, Berlin - New York 1975
[9.B.5.] Meyer, R.C.; Newell, W.T.; Pazer, H.L.; Simulation in Business and Economics, Prentice Hall Inc., Englewood Cliffs, N.J. 1969
[9.B.6.] Mertens, P.; Simulation, Poeschel Verlag, Stuttgart 1969
[9.B.7.] Mihram, G.A.; Simulation: Statistical Foundations and Methodology, in: Mathematics in Science and Engineering, vol. 92, Academic Press, New York - London 1972
[9.B.8.] Naylor, T.H.; Balintfy, J.L.; Burdick, D.S.; Chu, K.; Computer Simulation Techniques, J. Wiley & Sons Inc., New York 1966
[9.B.9.] Niemeyer, G.; Die Simulation von Systemabläufen mit Hilfe von Fortran IV, W. de Gruyter, Berlin - New York 1972
[9.B.10.] Niemeyer, G.; Systemsimulation, Akademische Verlagsgesellschaft, Frankfurt/Main 1973
[9.B.11.] Tocher, K.D.; The Art of Simulation, The English Universities Press Ltd., London 1967
[9.A.1.] Buxton, J.N.; Laski, J.G.; "Control and Simulation Language", in: The Computer Journal 5/3, 1963
[9.A.2.] Clark, C.E.; "The Utility of Statistics of Random Numbers", in: ORSA 8/2, 1960
[9.A.3.] Clark, C.E.; "Importance Sampling in Monte Carlo Analysis", in: ORSA 9/5, 1961
[9.A.4.] Conway, R.W.; "Some Tactical Problems in Digital Simulation", in: MS 10/1, 1963
[9.A.5.] Dimsdale, B.; Markowitz, H.M.; "A Description of the SIMSCRIPT Language", in: IBM Systems Journal 3/1, 1964
[9.A.6.] Harling, J.; "Simulation Techniques in Operations Research: A Review", in: ORSA 6/3, 1958
[9.A.7.] Jacoby, J.H.; Harrison, S.; "Multi-Variable Experimentation and Simulation Models", in: NRLQ 9/2, 1962
[9.A.8.] Namneck, P.; "Vergleich von Zufallszahlen-Generatoren", in: Elektronische Rechenanlagen 8, 1966
[9.A.9.] Schneeweiß, H.; "Simulationstechnik", in: AKOR-Schrift Nr. 5, Berlin - Köln - Frankfurt/Main 1971
[9.A.10.] van Slyke, R.M.; [4.A.9.]
Chapter 10
[10.B.1.] Barlow, R.E.; Proschan, F.; Mathematical Theory of Reliability, J. Wiley & Sons Inc., New York 1965
[10.B.2.] Bussmann, K.F.; Mertens, P.; Operations Research und Datenverarbeitung bei der Instandhaltungsplanung, Poeschel Verlag, Stuttgart 1968
[10.B.3.] Cox, D.R.; Renewal Theory, London 1962
[10.B.4.] Lloyd, D.K.; Lipow, M.; Reliability: Management, Methods and Mathematics, Prentice Hall Inc., Englewood Cliffs 1962
[10.B.5.] Proschan, F.; Pólya Type Distributions in Renewal Theory with an Application to an Inventory Problem, Prentice Hall Inc., Englewood Cliffs N.J. 1960
[10.B.6.] Roberts, N.H.; Mathematical Methods in Reliability Engineering, New York 1962
[10.A.1.] Bellman, R.; [6.A.1.]
[10.A.2.] Chung, Kai Lai; Pollard, H.; "An Extension of Renewal Theory", in: Proc. Am. Math. Soc. 3, 1952
[10.A.3.] Derman, C.; Sacks, J.; "Replacement of Periodically Inspected Equipment", in: NRLQ 7/4, 1960
[10.A.4.] Dreyfus, S.E.; "A Generalized Equipment Replacement Study", in: RAND Report 1957
[10.A.5.] Ehrenfeld, S.; "Interpolation of the Renewal Function", in: ORSA 14/1, 1966
[10.A.6.] Marathe, V.P.; Nair, K.P.K.; "Multistage Planned Replacement Strategies", in: ORSA 14/5, 1966
[10.A.7.] Pyke, R.; "Markov Renewal Processes with Finitely Many States", in: Ann. Math. Statist. 32/4, 1961
[10.A.8.] Sasieni, M.W.; "A Markov Chain Process in Industrial Replacement", in: ORQ 7/1, 1956
[10.A.9.] Smith, W.L.; "Renewal Theory and its Ramifications", in: J. Roy. Statist. Soc. Ser. B 20/2, 1958
[10.A.10.] Vergin, R.C.; "Optimal Renewal Policies for Complex Systems", in: NRLQ 15/4, 1968
Chapter 11
[11.B.1.] Andler, K.; Rationalisierung der Fabrikation und optimale Losgröße, Oldenbourg, München - Berlin 1929
[11.B.2.] Buffa, E.S.; Taubert, W.H.; Production-Inventory Systems: Planning and Control, Richard D. Irwin Inc., 1972
[11.B.3.] Kaufmann, A.; [1.B.18.]
[11.B.4.] Müller, E.; Simultane Lagerdisposition und Fertigungsablaufplanung bei mehrstufiger Mehrproduktfertigung, W. de Gruyter, Berlin - New York 1972
[11.B.5.] Naddor, E.; Lagerhaltungssysteme, Verlag Harri Deutsch, Frankfurt/Main - Zürich 1971
[11.B.6.] Proschan, F.; [10.B.5.]
[11.B.7.] Vazsonyi, A.; [1.B.30.]
[11.B.8.] Whitin, T.M.; The Theory of Inventory Management, Princeton University Press, Princeton N.J. 1953
[11.A.1.] Beckmann, M.J.; "An Inventory Model for Arbitrary Interval and Quantity Distributions of Demand", in: MS 8/1, 1961
[11.A.2.] Beckmann, M.J.; "An Inventory Policy for Repair Parts", in: NRLQ 6/3, 1959
[11.A.3.] Bishop, J.L. Jr.; "Experience with a Successful System for Forecasting and Inventory Control", in: ORSA 22/6, 1974
[11.A.4.] Fromovitz, S.; "A Class of One-Period Inventory Models", in: ORSA 13/5, 1965
[11.A.5.] Haehling von Lanzenauer, Ch.; "Optimale Lagerhaltung bei mehrstufigen Produktionsprozessen", in: Ufo 11, 1967
[11.A.6.] Henn, R.; "Fließbandfertigung und Lagerhaltung bei mehreren Gütern", in: Ufo 9, 1965
[11.A.7.] Kao, E.P.C.; "A Discrete Time Inventory Model with Arbitrary Interval and Quantity Distributions of Demand", in: ORSA 23/6, 1975
[11.A.8.] Morse, P.M.; "Solutions of a Class of Discrete-Time Inventory Problems", in: ORSA 7/1, 1959
[11.A.9.] Ohse, D.; "Zur Bestimmung der wirtschaftlichen Bestellmenge bei Preisstaffeln", in: Ufo 14/3, 1970
[11.A.10.] Popp, E.; "Einführung in die Theorie der Lagerhaltung", in: Lecture Notes in Economics and Mathematical Systems, vol. 7, Springer Verlag, Berlin - Heidelberg - New York 1968
[11.A.11.] Riepl, R.J.; "Ein deterministisches Mehrprodukt-Lagerhaltungsmodell", in: Op. Res. Verf. XVII, 1973
[11.A.12.] Scarf, H.; "The Optimality of (S,s) Policies in the Dynamic Inventory Problem", in: Mathematical Methods in the Social Sciences; Arrow, K.; Karlin, S.; Suppes, P. (eds.), Stanford University Press, Stanford (Calif.) 1960
[11.A.13.] Sivazlian, B.D.; "A Continuous-Review (s,S) Inventory System with Arbitrary Interarrival Distribution between Unit Demand", in: ORSA 22/1, 1974
[11.A.14.] Veinott, Jr. A.F.; "The Optimal Inventory Policy for Batch Ordering", in: ORSA 13/3, 1965

Chapter 12
[12.B.1.] Baker, K.R.; Introduction to Sequencing and Scheduling, J. Wiley & Sons Inc., New York 1974
[12.B.2.] Conway, R.W.; Maxwell, W.L.; Miller, L.W.; Theory of Scheduling, Addison Wesley Publ. Comp., Reading - Palo Alto - London 1967
[12.B.3.] Kern, W.; Optimierungsverfahren in der Ablauforganisation, Girardet Verlag, Essen 1967
[12.B.4.] Mensch, G.; Ablaufplanung, Westdeutscher Verlag, Köln und Opladen 1968
[12.B.5.] Müller, E.; [11.B.4.]
[12.B.6.] Müller-Merbach, H.; Optimale Reihenfolgen, in: Ökonometrie und Unternehmensforschung, Band XV, Springer Verlag, Berlin - Heidelberg - New York 1970
[12.B.7.] Muth, J.F.; Thompson, G.L.; Industrial Scheduling, Prentice Hall Inc., Englewood Cliffs 1963
[12.B.8.] Roy, B.; Ablaufplanung, Oldenbourg Verlag, München - Wien 1968
[12.B.9.] Siegel, T.; Optimale Maschinenbelegungsplanung, E. Schmidt Verlag, Berlin 1974
[12.A.1.] Bowman, E.H.; "The Schedule-Sequencing Problem", in: ORSA 7/5, 1959
[12.A.2.] Elmaghraby, S.E.; "The Machine Sequencing Problem", in: NRLQ 15/2, 1968
[12.A.3.] Giffler, B.; Thompson, G.L.; "Algorithms for Solving Production-Scheduling Problems", in: ORSA 8/4, 1960
[12.A.4.] Johnson, S.M.; "Discussion: Sequencing n Jobs on Two Machines with Arbitrary Time Lags", in: MS 5/3, 1959
[12.A.5.] Johnson, S.M.; "Optimal Two- and Three-Stage Production Schedules with Setup Times Included", in: NRLQ 1/1, 1954
[12.A.6.] Korte, B.; Oberhofer, W.; "Zwei Algorithmen zur Lösung eines komplexen Reihenfolgeproblems", in: Ufo 12/4, 1968
[12.A.7.] Lomnicki, Z.A.; "A 'Branch and Bound' Algorithm for the Exact Solution of the Three-Machine Scheduling Problem", in: ORQ 16/1, 1965
[12.A.8.] Manne, A.S.; "On the Job-Shop Scheduling Problem", in: ORSA 8/2, 1960
[12.A.9.] Noltemeier, H.; "Produktionsablaufplanung - ausgewählte Probleme und Methoden", in: Informationssysteme im Produktionsbereich, Oldenbourg, München - Wien 1975
[12.A.10.] Szwarc, W.; "Mathematical Aspects of the 3 x n Job-Shop Sequencing Problem", in: NRLQ 21/1, 1974
[12.A.11.] Szwarc, W.; "Optimal Elimination Methods in the m x n Flow-Shop Scheduling Problem", in: ORSA 21/6, 1973
Chapter 13
[13.B.1.] Behrens, K.C.; Allgemeine Standortbestimmungslehre, Westdeutscher Verlag, Köln und Opladen 1961
[13.B.2.] Bloech, J.; Optimale Industriestandorte, Physica Verlag, Würzburg - Wien 1970
[13.B.3.] Bloech, J.; Ihde, G.-B.; Betriebliche Distributionsplanung, Physica Verlag, Würzburg - Wien 1972
[13.B.4.] Christaller, W.; Die zentralen Orte in Süddeutschland, Jena 1933
[13.B.5.] Grundmann, W.; et al.; Mathematische Methoden zur Standortbestimmung, Verlag Die Wirtschaft, Berlin 1968
[13.B.6.] Lösch, A.; Die räumliche Ordnung der Wirtschaft, Jena 1944
[13.B.7.] Thünen, J.H.v.; Der isolirte Staat in Beziehung auf Landwirthschaft und Nationalökonomie, part 1, 3rd printing, Schumacher-Zarchlin (ed.), Berlin 1875
[13.B.8.] Weber, A.; Über den Standort der Industrien, Part 1: Reine Theorie des Standorts, Tübingen 1922
[13.A.1.] Beckmann, M.J.; "Über den optimalen Standort eines Verkehrsnetzes", in: Zeitschrift für Betriebswirtschaft 35, 1965
[13.A.2.] Böventer, E.v.; "Die Struktur der Landschaft, Versuch einer Synthese und Weiterentwicklung der Modelle J.H. von Thünens, W. Christallers und A. Löschs, optimales Wachstum und optimale Standortverteilung", in: Schneider, E. (ed.), Schriften des Vereins für Socialpolitik, N.F., Bd. 27, Berlin 1962
[13.A.3.] Davis, P.L.; Ray, T.L.; "A Branch-Bound Algorithm for the Capacitated Facilities Location Problem", in: NRLQ 16/3, 1969
[13.A.4.] Domschke, W.; "Modelle und Verfahren zur Bestimmung betrieblicher und innerbetrieblicher Standorte - Ein Überblick", in: Discussion Paper Nr. 24, Institut für Wirtschaftstheorie und Operations Research, Universität Karlsruhe 1973, and in: ZOR 19/2, ser. B, 1975
[13.A.5.] Efroymson, M.A.; Ray, T.L.; "A Branch-Bound Algorithm for Plant Location", in: ORSA 14/3, 1966
[13.A.6.] Eiselt, H.A.; von Frajer, H.; "Der optimale Industriestandort - Ein Verfahren mit den Kriterien Transportkosten, Arbeitsmarkt, Boden", in: Neues Archiv für Niedersachsen 24/4, 1975
[13.A.7.] Feldman, E.; Lehrer, F.A.; Ray, T.L.; "Warehouse Location under Continuous Economies of Scale", in: MS 12, 1966
[13.A.8.] Khumawala, B.M.; "An Efficient Branch and Bound Algorithm for the Warehouse Location Problem", in: MS 18, 1972
[13.A.9.] Kuehn, A.A.; Hamburger, M.J.; "A Heuristic Program for Locating Warehouses", in: MS 9, 1963
[13.A.10.] Launhardt, W.; "Die Bestimmung des zweckmäßigsten Standortes einer gewerblichen Anlage", in: Zeitschrift des Vereines Deutscher Ingenieure 26, 1882
[13.A.11.] Manne, A.S.; "Plant Location under Economies of Scale - Decentralization and Computation", in: MS 11, 1964
[13.A.12.] Müller-Merbach, H.; "Modelle der Standortbestimmung", in: Kosiol (ed.), Handwörterbuch des Rechnungswesens, Stuttgart 1970
[13.A.13.] Spielberg, K.; "Algorithms for the Simple Plant-Location Problem with Some Side Conditions", in: ORSA 17, 1969
[13.A.14.] Spielberg, K.; "Plant Location with Generalized Search Origin", in: MS 16, 1969
[13.A.15.] Warszawski, A.; "Pseudo-Boolean Solutions to Multidimensional Location Problems", in: ORSA 22/5, 1974