These are the lecture notes from Shapley's course at UCLA (Math 486). Some of the homework problems will be from the book.

MATHEMATICS 486
GAME THEORY
Notes by L. S. Shapley
UCLA
Department of Mathematics, April 1994
Chapter 1

Games in Extensive Form

Introduction

The theory of games has been called the mathematics of competition and cooperation. It is applied widely in economics, in operations research, in military and political science, in organization theory, and in the study of bargaining and negotiation. First formulated in the 1920s independently by the famous mathematicians Émile Borel and John von Neumann, it did not become well known until the 1944 publication of a monumental work, Theory of Games and Economic Behavior, by von Neumann and the economist Oskar Morgenstern. Since that time, many others have joined in extending and applying the theory.

Although the terminology of players, moves, rules and payoffs might suggest a preoccupation with sports or recreation, the theory of games has seldom been applied effectively to the playing of real games. Some would say this is because game theory is based on idealized players having clear motives and unlimited skill and computational resources, whereas real games are interesting only when they challenge the limited abilities of real players. Others would suggest that the application exists, but goes the other way: that sports and games in the real world have supplied ideas and terminology that give direction to the development of the mathematical theory and its application to other areas of human competition and cooperation.

Throughout most of this course we shall be studying games that are
given in strategic form or, later on, in coalitional form. These forms are concise mathematical abstractions, well enough suited for the applications that we shall present. But by their nature they suppress many descriptive features that we would expect to see in a theory that claims to draw its inspiration from real-life competition. In the definition of the strategic form, for example, one looks in vain for any mention of moves or position or timing or chance events or the giving or suppressing of information. Instead, all such descriptive material is bundled up and hidden inside the general concept of "strategy," an elegant mathematical abstraction introduced by Borel in 1921.

Accordingly, we open these notes with an introduction to another, more descriptive game model, called the extensive form. Its premise is that games are sequential, multimove, interactive affairs that proceed from position to position in response to the wills of the players or the whims of Chance. It is possible to omit Chapter 1 when using these notes without impairing the logical or mathematical development of the following chapters. But the extensive form is an interesting topic in and of itself, and it serves also to give meaning and depth to the "strategy" concept that will play a conspicuous role in the sequel. Indeed, while we shall see that a game in the extensive form can always be reduced to strategic form (and the strategic form to coalitional form, when there are many players), a wealth of descriptive detail is lost in the process. These reductions limit our ability to analyze many questions of a tactical nature. On the other hand, the results we do obtain in the more general and abstract models will gain in significance, precisely because they are not so concerned with procedural details or clever maneuvers, but consider instead the broader possibilities in the situation and the motivations of the players.
Assorted readings, light and heavy.
J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton, 1944, 1947. A book to be marvelled at rather than read straight through; a masterful combination of insight and technical brilliance.

E. Borel, three early papers translated into English with commentaries by Maurice Fréchet and John von Neumann, in Econometrica, Vol. 21, 101-115, 1953. Who invented the theory of games?
J. D. Williams, The Compleat Strategyst, being a primer on the theory of games of strategy, McGraw-Hill, rev. edition, 1965. A somewhat playful, copiously illustrated, elementary, and nevertheless perfectly accurate introduction to the theory and application of matrix games.

H. Kuhn, "Extensive Games and the Problem of Information," in Annals of Mathematics Study 28: Contributions to the Theory of Games, II, 193-216, 1953.
L. Shapley, "Game Theory," in Academic American Encyclopedia, Vol. G, 27-29, 1980+.
G. Owen, Game Theory, 2nd edition, The Free Press, 197x. An excellent undergraduate textbook, including much of the material covered in this course.
J. Nash, (1) "The Bargaining Problem," in Econometrica, Vol. 18, 155-162, 1950; (2) "Equilibrium Points in n-Person Games," in Proceedings of the National Academy of Sciences, Vol. 36, 48-49, 1950; (3) "Two-person Cooperative Games," in Econometrica, Vol. 21, 128-140, 1953. A brilliant young mathematician makes his mark.
R. Dawkins, The Selfish Gene, Oxford University Press, 1976.

January 10, 1994
1.1 The Kuhn Tree
Many formal descriptions of multimove games have been introduced since von Neumann and Morgenstern coined the term "extensive form" in their 1944 classic, Theory of Games and Economic Behavior. Some were aimed at special situations that seemed to invite direct mathematical treatment, e.g. positional games, games of pursuit and evasion, games with information lag, games of fair division, etc. But one of the earliest such ventures, by Harold Kuhn in the early 1950s, sought to capture the full generality and precision of the unwieldy, abstract formulation of von Neumann and Morgenstern in an intuitive and easy-to-understand diagrammatic model. It was an immediate success, and to this day it remains the most widely used general-purpose model for games in extensive form. In this chapter our goal is to give the reader a working familiarity with Kuhn trees, stressing four basic concepts that lie at the heart of the subject:
• information sets,
• pure strategies,
• strategic equilibrium,
• backward induction.
The limited range of this chapter should not be taken to imply that the extensive form is only of peripheral interest to game theory. On the contrary, it remains an active field of research with an extensive literature, replete with subtle ideas and challenging mathematics. In the restricted compass of a short course, however, we can do no more than scratch the surface.
Graphs and trees. First, some graph-theoretic terminology will be useful. Both the terms and the ideas they define are highly intuitive, but we shall nevertheless aim for mathematical precision in their use. When a technical term is first used or defined it will be set in boldface.
A graph G is a figure made up of nodes and edges. Every edge has exactly two ends, at each of which there is a node. Edge ends having the same node are said to be attached. The number of ends at a node is called the degree of the node, which may be zero or any positive integer. In a directed graph each edge is assigned a direction, indicated by an arrow. A node having no in-pointing edges is called initial; one having no out-pointing edges is called terminal. Figure 1-1 shows some of the possibilities. The little digits indicate the degrees of the nodes.
Figure 1-1. Examples of graphs. [Panels: 6 nodes, 4 edges; 4 nodes, 2 edges; 4 nodes, 6 edges; 5 nodes, 8 edges; 1 node, 4 edges.]
A path from node A to node B in G is a sequence of one or more edges laid end to end. This simple idea hardly needs elaboration, but a careful mathematical definition would have to specify that the ends of each edge in the sequence can be tagged "−" and "+," in such a way that the − end of the first edge is A, the + end of the first edge is attached to the − end of the second edge, the + end of the second edge is attached to the − end of the third edge, and so on until, finally, the + end of the last edge is B. If it happens that A = B the path is called a cycle. A graph G is called acyclic if it contains no cycles. A node is said to occur in a path from A to B if it is A or B, or if it attaches the + end of an edge of the sequence to the − end of the next edge. The same node may occur more than once in a path. A path in G in which no node occurs more than once is called simple. Thus a cycle cannot be simple, nor any path with a repeated edge. A graph G is connected if there is a path between every two nodes of G. A path in a directed graph is directed if the tags "−" and "+" (see
above) can be made to correspond to the arrow-tails and arrowheads, respectively, on every edge of the sequence. Figure 1-2 illustrates. This time the little digits indicate the order of the edges in the path sequence; note the repeated edges in two of the examples.

Figure 1-2. Examples of paths.
A tree is a connected, acyclic graph. This chapter mostly concerns trees. In particular, it concerns rooted trees, in which one node, the root, is distinctively marked. In such a graph we can orient the edges by pointing them away from the root, giving us a directed graph. The root is the only initial node, since all other nodes have in-pointing edges, but there are in general many terminal nodes. All terminal nodes of a rooted tree have degree 1; all others, with the possible exception of the root, are of higher degree. Figure 1-3 (next page) illustrates a variety of rooted trees, with root denoted by the open dot. By inspection, you can easily verify that:
(i) the number of nodes is one more than the number of edges;
(ii) all nodes other than the root have precisely one "in-pointing" edge;
(iii) every directed path is simple; and
(iv) the number of maximal directed paths1 is equal to the number of terminal nodes.

These statements are actually true for all rooted trees; the proofs follow easily from the definitions. (See Exercise 1.1 below.)

1 I.e., directed paths not contained in a longer directed path.
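Properties (i), (ii) and (iv) can be checked mechanically on any rooted tree. A minimal sketch in Python, using a small hypothetical tree (ours, not from the text) encoded as a child-to-parent map:

```python
# A hypothetical rooted tree, encoded as child -> parent; "root" is the root.
parent = {"a": "root", "b": "root", "c": "a", "d": "a", "e": "b"}
nodes = {"root", *parent, *parent.values()}
edges = list(parent.items())

# (i) the number of nodes is one more than the number of edges
assert len(nodes) == len(edges) + 1

# (ii) every node other than the root has exactly one in-pointing edge
assert all(n in parent for n in nodes - {"root"})

# (iv) maximal directed paths end at terminal nodes (nodes with no children)
terminals = nodes - set(parent.values())
assert len(terminals) == 3  # c, d, e
```

Since each non-root node records exactly one parent, property (ii) holds by construction; the dictionary encoding itself is a proof sketch of that clause.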
Figure 1-3. Examples of trees (root = o). [Four examples, with 1, 10, 9, and 2 terminal nodes respectively.]
The game tree. When a rooted tree is used to define a game, the nodes are interpreted as states or positions and the edges as moves. Each nonterminal position belongs either to a player, responsible for choosing the next move, or to "Chance" (see below). The initial node (root) starts the play, and any of the terminal nodes can end it. Each maximal directed path therefore describes a possible course of play. Accordingly, we write at each terminal node some indication of the outcome of the play; typically this will take the form of a numerical payoff vector, telling how much money (or other measure of value) each player wins or loses.
In the following example the game is first described in words, and then expressed in tree form.

PICK UP BRICKS. Five bricks have been stacked on the ground. The two players take turns picking up either one or two bricks from the pile. The player who picks up the last brick loses; the other player wins.
Figure 1-4. Kuhn tree for the "Bricks" game. [Numbers in brackets represent the number of bricks remaining.]
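The Bricks game is small enough to solve mechanically by the backward-induction idea that will reappear in Section 1.5. A minimal sketch (the function name and encoding are ours, not the text's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    """True if the player about to move can force a win with n bricks left.
    Taking the last brick loses, so any move that empties the pile is losing."""
    return any(n - k >= 1 and not wins(n - k) for k in (1, 2) if k <= n)

# With five bricks the first player can force a win: taking one brick leaves
# the opponent with four, which is a losing position.
assert wins(5) and not wins(4)
```

Working up from the single-brick position (a loss for the mover) reproduces the labels one would attach to the tree in Figure 1-4 by hand.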
Chance moves. Many real games involve chance. Rolling the dice (Backgammon), taking cards from a shuffled deck (Gin Rummy, Blackjack, etc.), tossing a coin to determine who goes first (Chess, Football): these are typical examples of chance moves that play significant roles in the game. In the Kuhn tree, the probabilities assigned to the edges leading out of a "Chance" node should always be displayed, as they form part of the rules of the game, known to all the players. If there are two or more chance moves, their probabilities are assumed to be uncorrelated. It follows that if several chance moves occur in sequence, the probabilities along a path must be multiplied together to determine the probability of the outcome.
    Outcome      Probability
    A            .36
    B            .54
    C            .03
    D            .04
    E            .01
    F            .02
    Total        1.00

Figure 1-5. A probability tree.
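Because successive chance moves are independent, an outcome's probability is just the product of the branch probabilities along its path. For instance, outcome A's .36 could arise from a .9 branch followed by a .4 branch; this is our hypothetical reconstruction of the tree, since only the products survive in the figure:

```python
import math

def path_probability(branch_probs):
    """Probability of reaching an outcome: the product of the chance
    probabilities on the edges along its path."""
    return math.prod(branch_probs)

# assumed branches .9 then .4 multiply to outcome A's probability .36
assert abs(path_probability([0.9, 0.4]) - 0.36) < 1e-9
```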
In game theory, the players are usually assumed to be risk neutral: they are willing to accept any gamble at fair odds. Given an equal chance of winning $50 or losing $10, they would regard this opportunity equal in value to the prospect of winning a sure $20, because .5 × 50 + .5 × (−10) = 20. If there were 1 chance in 20,000 of losing a million dollars in some venture, they would be willing to insure themselves for any premium up to $50, but no more. Even if a game has nonmonetary payoffs, the assumption is that "utility numbers" can be assigned to the possible outcomes in such a way that, if chance moves occur, the players' objectives are truly represented by the expected utility of the outcome, in the sense of mathematical expectations.
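The two gambles in the paragraph above reduce to simple expected-value computations:

```python
def expected_value(gamble):
    """Expected value of a gamble given as (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

# an equal chance of winning $50 or losing $10 is worth a sure $20
assert expected_value([(0.5, 50), (0.5, -10)]) == 20

# a 1-in-20,000 chance of losing a million dollars costs $50 in expectation,
# so a risk-neutral player accepts any insurance premium up to $50
assert abs(expected_value([(1 / 20000, -1_000_000), (19999 / 20000, 0)]) + 50) < 1e-9
```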
Discussion of an example. To see how this works in a simple case, consider the left-hand tree in Figure 1-6.
Figure 1-6. [Two game trees with chance moves; the payoff vectors, including (10, 0), are largely illegible in this copy.]
pay1(D) > pay1(B), where D denotes (D1, B2, ..., Bn). Every edge in D1 that is not in B1 will be called a defection. The only defections that really matter are those that lie along pat(D), which may coincide with pat(B) over an initial segment out of START. But at the first defection the two paths divide, never to meet again. Consider now the last defection along the path pat(D). Figure 5-3 (next page) gives a schematic view of our argument. The arrowheads represent BI choices, the darkened segments represent defections.
Figure 5-3. A defection against B. [Schematic: pat(B) and pat(D) leave START together and then diverge; the last defection along pat(D) occurs at node n0, from which a B-path through x leads to the payoff T = pay(D′).]
Only relevant paths are shown; the blank areas may contain many other nodes and edges. In the figure, the last defection along pat(D) occurs at n0, which of course belongs to Player 1. At n0, B1 calls for x, whereas D1 calls for y. A short B-path runs through x to a terminal node with payoff T = (T1, ..., Tn), as shown. But the path through y to pay(D) is also a B-path after y, there being no more defections. So the fact that BI chose x implies that

    T1 ≥ pay1(D) > pay1(B).

If we let D′1 denote the pure strategy D1 with y replaced by x, then pay(D′) is T, so D′1 also works as a counterexample. So far no contradiction. But note that there is one less defection on pat(D′) than on pat(D). This is rather remarkable when you think about it. What we have shown is that for every pure strategy that improves on B1's response to (B2, ..., Bn), there is another such improving strategy with fewer defections on its path. This means that some improving strategy, say D*1, will actually have zero defections along its path. In other words, pat((D*1, B2, ..., Bn)) = pat((B1, B2, ..., Bn)). Hence the payoffs are the same and D*1 cannot be an improvement after all! This is the desired contradiction.
1.5. BACKWARD INDUCTION. PERFECT SEs
Subgame perfection. In a game of perfect information, it is possible to take any nonterminal node and make it the starting point of a new game, called a subgame of the original. In the "tree" metaphor, you simply break off a branch and stick it into the ground, where it takes root. Moreover, any full pure strategy of the original game determines a full pure strategy of the subgame: we simply ignore references to edges that are outside the subgame. So any BI-generated strategy selection determines a PSE in every subgame. A PSE with this property is termed perfect,13 abbreviated PPSE.

(2): We now return to our second question: Given perfect information, is every PSE attainable by backward induction? The simple answer is no. Indeed, as a general rule the larger the tree the more numerous the PSEs, and most of them are not perfect.14 The game of Exercise 5.2 is fairly typical for its size, and even the little example that closes this section, though about as small as a game can be, nevertheless has room to display two highly contrasting PSEs, only one of which is perfect. But in view of our latest definition, we can rephrase question (2) to make it more interesting, namely: Given perfect information, is every PPSE attainable by backward induction? The answer this time is an unqualified yes. In fact, it is true almost by definition, after (1) has been established of course.
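For games of perfect information, the BI process itself is short enough to sketch. In the encoding below (ours, not the text's), a terminal node is a tuple of payoffs and a decision node is a list [player, children]; ties are broken by taking the first maximizer, so only one PPSE payoff is reported:

```python
def backward_induct(node):
    """Return the payoff vector backward induction assigns to `node`.
    A terminal node is a tuple of payoffs; a decision node is a list
    [player, children]."""
    if isinstance(node, tuple):
        return node                                 # terminal: the payoffs themselves
    player, children = node
    values = [backward_induct(child) for child in children]
    return max(values, key=lambda v: v[player])     # the mover maximizes own payoff

# A hypothetical two-player tree: Player 0 moves first, then Player 1.
tree = [0, [[1, [(3, 6), (4, 8)]],
            [1, [(2, 2), (5, 0)]]]]
assert backward_induct(tree) == (4, 8)
```

To enumerate all PPSEs rather than one, the `max` would have to branch on every tie, following every maximizing child.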
Uniqueness. In a certain sense, almost all games of perfect information have exactly one PPSE. The BI process is at heart a straightforward, determinate process, despite the fact that there are usually several different "last-decision nodes" available at any given time. A little reflection reveals that the order of attack makes no difference to the end result. So nonuniqueness of the PPSE can only come about as a result of ties, i.e., equal payoffs to the decision-maker when a choice is to be made. Let's return to the game of Figure 5-1 and consider the
13 Or sometimes "subgame perfect." The term "perfect" here means "complete" or "total," not "extremely good" or "excellent." Perfect equilibria are sometimes very bad.
14 For a notable exception, see Exercise 5.4.
subgame reproduced below:

Figure 5-4. A tie situation.

When the decision points for II and III have been resolved, I is asked to compare (3, 1, 1) and (3, 2, 2). Which move should we have her choose? In the austere world of noncooperative game theory players are supposed to be totally indifferent to the fate of their fellow participants, unless of course it affects their own score. Thus, when ties occur at crucial points we must accept the fact of multiple solutions.
If our only aim is to demonstrate the existence of at least one PSE, it doesn't matter which edge we have the player choose: the pure strategies we get by continuing the backward induction will yield a PPSE regardless. But if our aim is to find all PPSEs, we must follow every branch whenever the algorithm splits, because every branch necessarily produces a different PPSE.
A paradoxical example. In some games with multiple PSEs the perfect one happens to be the best of the lot from the point of view of the players' payoffs. In other games, the direct opposite is true.15 In

15 It would be an error in interpretation to imagine that the players can somehow choose the PSE when there are more than one. That would imply a context of bargaining and coordinated action in which, among other things, the players would have no reason to limit their consideration to PSE outcomes. Strategic equilibria,
our final example, both players have reason to welcome I's use of an "irrational" threat.
Figure 5-5. A paradoxical example. [The two trees show the payoffs (3, 6) and (4, 8) at their terminal nodes.]
The PPSE is shown at left (arrows). It is moderately bad for Player I and disastrous for Player II. In the only other PSE, shown at right, the players do the exact opposite of the "perfect" strategy selection. The result, ironically, is moderately good for Player II and excellent for Player I. The moral? "Perfect" is just a word, not a value judgment. In game theory, the choice of solution concept always depends on context. If the players live under a rule of law, so that contracts and commitments can be made and enforced, then "perfection" in a PSE has no particular virtue. On the other hand, if no reliable legal system can be brought to bear, as in a dispute among mistrustful sovereign nations, then a perfect equilibrium may be the only chance for a stable settlement.
like some other economic solutions, "just happen." They are influenced, if at all, by invisible manipulators outside the model. (See the discussion of competition vs. cooperation in Chapter 3.)
EXERCISES

Exercise 5.1 Find all the PPSEs of the game diagrammed in Figure 5-1 on page 35.

Exercise 5.2 Reduce the following game to strategic form and find all the PSEs. Which one corresponds to the PPSE?
Exercise 5.3 Explain how to handle a chance move in the BI algorithm. Illustrate by performing backward induction on the game tree in Exercise 3.2 on page 28.

Exercise 5.4 Determine all the PSEs and the PPSEs of the "Mutual Admiration Society" of Exercise 1.4 on pages 14 and 15. HINT: While a small calculation may be revealing, no calculation is required for the answer.

Exercise 5.5
a. Determine the unique PPSE of the game at the top of page 43.
b. Now modify the game by altering II's payoffs, and nothing else, in such a way that the following conditions are satisfied:
(i) every payoff to II is increased;
(ii) there is a unique PPSE; and
(iii) II gets less at this PPSE than he gets at the PPSE of the original game.
[Game tree for Exercise 5.5; the START node and one terminal payoff (9, 3) are legible.]

Exercise 5.6 Determine all of the PPSEs of the following three-person game in extensive form.

[Game tree with terminal payoffs (3, 7, 9), (4, 7, 6), (3, 4, 9), (5, 3, 9), (6, 2, 4), (4, 3, 8).]
CHAPTER II MATRIX GAMES*
1. Introduction. The Use of Mixed Strategies.

A two-person zero-sum game in strategic form can be regarded as a function of two variables, ...

... = 1/12. To sum up, I has a way of assuring an expected gain of at least 1/12, and II can assure that it will be no more than that.* The number 1/12 is called the value of the game, and the mixed strategies that assure it are called optimal strategies. Taken together, the value and optimal strategies are referred to as the minimax solution of the game.
* Here II's equalizing probabilities are the same as I's only because the payoff matrix happens to be symmetrical; in general the numbers will be different.
The nature of the mixed strategies.

Before going further we should remark that the above argument does not require that the game be played many times, but only that the players be risk-neutral with respect to the unit of payoff. That is to say, each player is assumed to be indifferent between a gamble that pays $m1 with probability p and $m2 with probability 1 − p, and a sure prospect of winning the amount $(pm1 + (1 − p)m2). ...

... is called an "m × n matrix game" because all the information can be displayed in an m × n matrix A = (a_ij). To play the game, Player I chooses a row, i, and Player II a column, j. The matrix entry a_ij at the intersection point is then the amount (positive or negative) that I wins and II loses.
Occasionally a matrix game has a very simple solution. It may happen that some pair (i, j) has the property that a_ij is simultaneously the minimum of row i and the maximum of column j. Then the pair is called a saddle point.
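Finding such pairs is a mechanical scan of the matrix. A sketch (the function name is ours):

```python
def saddle_points(A):
    """All pairs (i, j) where A[i][j] is both the minimum of row i and the
    maximum of column j."""
    return [(i, j)
            for i, row in enumerate(A)
            for j, entry in enumerate(row)
            if entry == min(row) and entry == max(r[j] for r in A)]

assert saddle_points([[1, 2], [0, -1]]) == [(0, 0)]  # 1 is row-min and column-max
assert saddle_points([[0, 1], [1, 0]]) == []         # no saddle point here
```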
Suppose now that the 2 × 2 matrix game

    a   b
    c   d

has no saddle point, and suppose first that a > b (strictly). Then we must have d > b, or else (1, 2) would be a saddle point. Then we must have d > c, or else (2, 2) would be a saddle point. Finally, we must have a > c, or else (2, 1) would be a saddle point. Thus a and d end up as the two largest entries of the matrix. The argument for b > a is the same, except that b and c end up as the two largest entries rather than a and d.
Exercise 10. ... an end of the unit interval. Solve the following matrix games:

(b)
     4   1   4   ...

Exercise 11.

    20    5   11    4
     4    1    1   12

(c)
     4    3    5    5
     6    4    4    8

Exercise 12. Solve the game at right by the graphical method:

     1    5    4    4    6    3
     1    6    5    4    9    2

Note that there are two essential submatrices, and at least two optimal strategies for the ... player.

Exercise 13. Solve the game at right by the graphical method. Note that there are two essential submatrices, and at least two optimal strategies for the first player.
Domin at. i c:;,n.
4.
Sometimes large m~trix ;ames c~n be reduced in si:e
~
deletin; rows and columns that are obviouslw bad for the
Definition. The ith row of a.nm x n tame m•trh dominat@s the kth
• , • , n•
1,
rO\&I
(fer Plawer I>
if
•,jk •lt
(••J)
for &11 j •
S i mi l • r l w, t he j  t h col um n dot'h i n • t ts the h th
than• for Plawer II (bec•use II'• ~cal i• to ~inimize>. It is intuitivelw obvious that • dominated ro~ or cclu11D c:an be delet.d from• ;ame_11natri• without c:han;inc; the v•lue
.
of the ;~me.
pla~er ~i;ht be •ble to
Indeed, •nwthinc; •
•c:compl ish bw using a domin•ted row or c:olurnn, he er •he c:an •ccomp l i sh just •• we 11 or better bw uai n,; the row or column
that domin~tes it.
followinc; sequence illustrates:
     1  2  3         1  2  3         1  2          1  2
     2  0  4    →    2  0  4    →    2  0    →     4  1
     3  1  0         4  1  2         4  1
     4  1  2
(The curved arrows mark the dominations.) To finish this example, we observe that the 2 × 2 submatrix that survives has separated diagonals. So we apply the formulas of Section 2, insert 0s for the dominated rows and column, and obtain the solution

    p = (3/4, 0, 0, 1/4),    q = (1/4, 3/4, 0),    v = 7/4.

The reader should check this against the original matrix.
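The Section 2 formulas for a 2 × 2 game without a saddle point amount to the following computation (a sketch; the function name is ours, and the test uses the surviving submatrix of the example above):

```python
def solve_2x2(a, b, c, d):
    """Minimax solution of the 2x2 matrix game [[a, b], [c, d]], assuming it
    has no saddle point, via the standard equalizing-strategy formulas."""
    denom = (a + d) - (b + c)
    p = (d - c) / denom          # probability Player I puts on the first row
    q = (d - b) / denom          # probability Player II puts on the first column
    v = (a * d - b * c) / denom  # the value of the game
    return p, q, v

p, q, v = solve_2x2(1, 2, 4, 1)
assert (p, q, v) == (3/4, 1/4, 7/4)   # matches the v = 7/4 found above
```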
In the next example, Player II has several optimal mixed strategies; three of them are shown written above the matrix. But when we delete a dominated column, only one of them survives:

    1/2   1/4   1/4
    1/2   1/8   3/8
    1/2    0    1/2

     4     0     0
     0     4     4

Deleting the (weakly dominated) second column leaves only the strategy (1/2, 0, 1/2), which becomes (1/2, 1/2) on the reduced matrix

     4     0
     0     4
This phenomenon depends on the fact that domination was defined in terms of weak inequalities. Thus in comparing columns 2 and 3 we have a_i2 ≤ a_i3 for all i, but not a_i2 < a_i3 for all i. But if we restrict ourselves to strict domination, then all of the optimal strategies survive.
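Iterated deletion of weakly dominated rows and columns can be sketched directly; note that of two mutually (weakly) dominating columns, one is deleted and the other kept, which is exactly the phenomenon discussed above. The test matrices reproduce the two examples of this section as we have reconstructed them:

```python
def delete_dominated(A):
    """Repeatedly delete any row weakly dominated by another row (for I,
    bigger is better) and any column weakly dominated by another column
    (for II, smaller is better)."""
    A = [row[:] for row in A]
    changed = True
    while changed:
        changed = False
        m, n = len(A), len(A[0])
        for i in range(m):      # row i dominated by some other row k?
            if any(k != i and all(A[k][j] >= A[i][j] for j in range(n))
                   for k in range(m)):
                del A[i]
                changed = True
                break
        if changed:
            continue
        for j in range(n):      # column j dominated by some other column h?
            if any(h != j and all(row[h] <= row[j] for row in A)
                   for h in range(n)):
                for row in A:
                    del row[j]
                changed = True
                break
    return A

assert delete_dominated([[1, 2, 3], [2, 0, 4], [3, 1, 0], [4, 1, 2]]) == [[1, 2], [4, 1]]
assert delete_dominated([[4, 0, 0], [0, 4, 4]]) == [[4, 0], [0, 4]]
```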
... each row i > s′ is dominated by some i′ ≤ s′, and each column j > s″ is dominated by some j″ ≤ s″. Show that the values of A and A* are the same, and that every optimal strategy in A* (for either player), extended by zeros to i > s′ or j > s″, is optimal in A as well.

(b) In each matrix below, find a way of permuting the rows and/or columns so that the hypothesis of (a) will be satisfied for some s′, s″. (Hint: Begin your search with small values of s′ and s″ and work up.) Then solve the resulting A* and B* and extend the solutions to the full games.
A:
     7   4   1   0
     3   1   2   3
     0   2   1   2
     2   4   5   8
     0   3   0   1
     3   2   6   4

B:
     0   1   1   0   1
     1   1   0   0   0
     0   1   0   1   0
     0   0   0   1   1
     1   0   1   1   0
5. Mixed Strategies in the General Case.

We shall need some further notation in order to deal properly with the general properties of matrix games. By a probability simplex we mean the geometric figure, in any number of dimensions, that is obtained by taking the unit vectors (1, 0, ..., 0), (0, 1, ..., 0), etc. as vertices and forming their convex hull. The actual dimension of a probability simplex is one less than the number of its vertices. For example: ...

... Let F denote the probability simplex of I's mixed strategies p, and G the probability simplex of II's mixed strategies q. Thus, the p_i are nonnegative numbers adding up to 1; likewise the q_j.
Next, for each row i define

    h_i(q) = Σ_j a_ij q_j,    i = 1, ..., m,

the expected payoff to I when he plays row i against II's mixed strategy q. The lower and upper values of the game are

    v̲(A) = max over p in F of ( min over q in G of pAq ),
    v̄(A) = min over q in G of ( max over p in F of pAq ),

and when the two coincide we define the value of the matrix A to be v(A).

The following lemma may seem rather obvious, but it is worth stating it explicitly even though we forgo the proof.
LEMMA 1. The lower and upper values of any matrix game are well defined.* In other words, each player has at least one safety strategy.

Figure 2-4 is a three-dimensional version of Figure 2-2 in Section 3. It depicts the mixed strategy simplex G and the functions h_i(q) for an m × 3 matrix game, and enables one to see how II's safety strategy q* picks out the minimum of the upper envelope of the h_i.

Figure 2-4. [The functions h_i over the simplex G; q* lies at the minimum of their upper envelope.]
It is easy to demonstrate that the lower value of a matrix game can never exceed the upper value: if II can be sure of not losing more than v̄, then there is no way that I can be sure of winning some amount greater than v̄. It is worth stating this "obvious" fact as a lemma too.

* Merely writing the word "max" or "min" is no guarantee that these operations are well defined, if the domain of the variable is not finite.
LEMMA 2. For every matrix A, v̲(A) ≤ v̄(A).

For j = 1, ..., n we have ..., and so by taking ε sufficiently small we can guarantee that ..., establishing the lemma.

... has a saddle point and we raise all entries to the same odd power?
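In the special case where both players are restricted to pure strategies, the lower and upper values reduce to max-min and min-max over the matrix entries, and the inequality of Lemma 2 is easy to check numerically (a sketch for illustration only; the mixed-strategy statement is the one the text proves):

```python
def pure_lower_upper(A):
    """Pure-strategy lower value (max-min) and upper value (min-max) of A."""
    lower = max(min(row) for row in A)                               # I's guarantee
    upper = min(max(row[j] for row in A) for j in range(len(A[0])))  # II's guarantee
    return lower, upper

lo, up = pure_lower_upper([[1, 2], [4, 1]])
assert (lo, up) == (1, 2) and lo <= up   # Lemma 2 in its pure-strategy form
```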
APPENDIX C
LINEAR PROGRAMMING FORMULATION OF MATRIX GAMES*
In this appendix we shall show how to solve any matrix game via linear programming. In addition, we will prove the minimax theorem, i.e.:

Theorem: Let v̲ be the highest possible expected payoff that Player I (using either pure or mixed strategies) can guarantee (no matter what Player II does). Similarly, let v̄ be the lowest possible payoff that Player II (using either pure or mixed strategies) can guarantee to lose no more than. Then v̲ = v̄, i.e., every matrix game has a value.
To accomplish these objectives, we will use the following strategy. First, we shall write the calculation of v̲ as a "maxmin" problem over the set of mixed strategies for I and II. Then we will reformulate this problem (using mathematical "tricks") as a linear program. When we do the same thing for v̄, the minimax theorem will be an easy consequence of the duality theorem of linear programming. First let us state the following Lemma:
Lemma: Let f(y1, ..., yn) = a1 y1 + ... + an yn be a linear function. Then the minimum of f taken over all nonnegative vectors y with y1 + ... + yn = 1 is equal to a_j*, where j* is the index which minimizes a_j. Similarly, the maximum of f is a_j#, where j# maximizes a_j.

Proof: Easy, once you think about it.

Now suppose we are given a matrix game, in which Player I has strategies i ∈ 1, ..., m and II has strategies j ∈ 1, ..., n. Also, let a_ij be the payoff from II to I if I chooses i and II chooses j. Thus, if I plays the mixed strategy x = (x1, ..., xm) and II plays the mixed strategy y = (y1, ..., yn), the expected payoff from II to I is Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij x_i y_j.

* This appendix presupposes the student is familiar with linear programming. If not, a good undergraduate-level reference is An Introduction to Linear Programming and Game Theory, second edition, by Paul Thie, published by John Wiley and Sons.
The solution of Player I's maxmin problem is:

    v̲ = max over mixed strat. x ( min over mixed strat. y ( Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij x_i y_j ) )
      = max over mixed strat. x ( min over mixed strat. y ( Σ_{j=1}^{n} ( Σ_{i=1}^{m} a_ij x_i ) y_j ) )
      = max over mixed strat. x ( min_{j=1,...,n} Σ_{i=1}^{m} a_ij x_i ).

The last equality follows from the Lemma. Now let x0 = min_{j=1,...,n} Σ_{i=1}^{m} a_ij x_i. Using this notation, we can write the following constrained maximization problem for v̲:

(P1)    max x0
        s.t.  x0 = min_{j=1,...,n} Σ_{i=1}^{m} a_ij x_i        (1)
              x1 + ... + xm = 1                                 (2)
              x_i ≥ 0 for i = 1, ..., m                         (3)

Constraints (2) and (3) arise because x is a mixed strategy. Now consider (1). Suppose we relax this constraint, requiring instead only that x0 ≤ Σ_{i=1}^{m} a_ij x_i for all j. Certainly this would imply x0 ≤ min_{j=1,...,n} Σ_{i=1}^{m} a_ij x_i. In fact, in problem (P1) above, since we are maximizing x0, this would imply x0 is equal to min_{j=1,...,n} Σ_{i=1}^{m} a_ij x_i, i.e., we would not be changing the problem. So we can rewrite (P1) as:
(P2)    max over x0, x1, ..., xm of x0
        s.t.  x0 ≤ Σ_{i=1}^{m} a_ij x_i for j = 1, ..., n
              x1 + ... + xm = 1
              x_i ≥ 0 for i = 1, ..., m
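Problem (P2) is now a standard linear program. The sketch below builds its data in the usual "minimize c·x subject to A_ub x ≤ b_ub, A_eq x = b_eq" form; the variable names follow the convention of solvers such as scipy.optimize.linprog, but the output is plain data that any LP solver could consume (constraint (3) would be passed as variable bounds: x0 free, the other variables nonnegative):

```python
def matrix_game_lp(A):
    """Data of linear program (P2) for an m x n matrix game A.
    Variables are (x0, x1, ..., xm); maximizing x0 becomes minimizing -x0."""
    m, n = len(A), len(A[0])
    c = [-1.0] + [0.0] * m
    # relaxed constraint (1):  x0 - sum_i a_ij x_i <= 0  for each column j
    A_ub = [[1.0] + [-float(A[i][j]) for i in range(m)] for j in range(n)]
    b_ub = [0.0] * n
    # constraint (2):  x1 + ... + xm = 1
    A_eq = [[0.0] + [1.0] * m]
    b_eq = [1.0]
    return c, A_ub, b_ub, A_eq, b_eq

c, A_ub, b_ub, A_eq, b_eq = matrix_game_lp([[1, 2], [4, 1]])
assert A_ub == [[1.0, -1.0, -4.0], [1.0, -2.0, -1.0]]
```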
... and the solution is (Ub, Ya, Wd, Xc, ...). If the girls propose, the solution is almost the same, with ... left out again: (Vd, ..., Wb, Xc, Ye, Za). Note that the last seven "moves" are devoted to ... running out her string, without success.
THEOREM 1. The matchings obtained by the deferred-acceptance algorithm are stable, in the sense that no pair not matched by the algorithm would prefer to have been matched.

Proof. Without loss of generality we consider only the boys-propose version. Suppose that John prefers Jane to the wife he gets from the algorithm. That means he proposed to Jane during the procedure but was rejected in favor of a boy higher on her list. Eventually, Jane either marries that boy or marries another boy whom she likes even better. So there is no instability, since Jane does not prefer John to her husband.
Call a boy [girl] feasible for a girl [boy] if there is a stable matching in which they are paired.
THEOREM 2. The matchings obtained by the deferred-acceptance algorithm are uniformly optimal for the proposers, in the sense that each proposer gets the best feasible mate.

Proof. Without loss of generality, we may consider only the girls-propose version. The theorem will be proved if we can show that in the course of the algorithm no girl is ever rejected by a boy who is feasible for her. Suppose, on the contrary, that Mary is the first girl to be rejected by a feasible boy, Jack, who rejects her in favor of Kate. Since Jack is feasible for Mary, there is a stable matching in which Jack and Mary are paired; in it, Kate is married to some other boy, Larry. Now Kate, who proposes down her own list, has reached Jack without yet being rejected by any boy feasible for her; hence Larry, who does not reject Kate, is below Jack on Kate's list. So Jack and Kate would gladly dump Mary and Larry, respectively, contradicting the assumed stability of that matching. Hence Jack isn't feasible for Mary after all, and the theorem follows.
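The deferred-acceptance procedure itself can be sketched in a few lines. The preference lists below are invented for illustration (the names echo the proofs above, not any example in these notes):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """One-to-one deferred acceptance: proposers propose, receivers hold."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next receiver each p will try
    held = {}                                      # receiver -> proposer she holds
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in held:
            held[r] = p                            # r tentatively accepts p
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])                   # r trades up, rejecting old hold
            held[r] = p
        else:
            free.append(p)                         # r rejects p outright
    return held

boys = {"John": ["Jane", "Mary"], "Larry": ["Jane", "Mary"]}
girls = {"Jane": ["Larry", "John"], "Mary": ["John", "Larry"]}
match = deferred_acceptance(boys, girls)           # boys-propose version
```

By Theorems 1 and 2, the result is stable and optimal for the proposing side.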
EXERCISES

Exercise 1. Find the stable assignments that are "boy-best" and "girl-best" in each of the following marriage games:

[The four preference tables, (a) through (d), are omitted.]

Exercise 2. Show that if there are several stable matchings, the same player or set of players gets left out.

Exercise 3. Show that the example on pages 4.10 and 4.11 has just two matchings in its core.
3. The College Admissions Game.

In essence, the "college admissions" game is the "marriage" game with polygamy. A player of one type can "marry" more than one player of the other type, up to some stated capacity. Preferences are based on simple orderings of the players of the opposite type; this involves the assumption that applicants rank colleges, while colleges rank only individual applicants.* In the example, the small letters denote the applicants, the capitals the colleges, and the numerals the capacities.

[The preference tables for this example are omitted.]

* In effect, the college's ranking of applicants extends to sets "lexicographically", i.e., the top person in a set determines its position in the order. Thus, a college with room for three applicants is considered to prefer the two it ranks 1 and 2 to the three it ranks 2, 3 and 4.
The solution criterion is easily stated. An assignment of applicants to colleges is deemed unstable if it sends an applicant to one college when there is another college that is preferred by the applicant, and that either has room for him or could make room by rejecting someone it likes less.* For example, if j goes to V while i goes elsewhere, but i prefers V and V prefers i to j, the situation is unstable, because both would prefer that V reject j and accept i instead. The core of the game is defined to be the set of all assignments that stay within the capacity limits and are not unstable.

The solution of the example proceeds as shown below. The commas separate off the holdovers from the preceding "day", reordered for convenience according to the college's preference list.

* The opposite philosophy might be more realistic for the case of the "athlete recruiting" game!
[The day-by-day run of the procedure, showing each college's tentative holds and rejections on the 1st through 10th "days", is omitted.]

The applicant-best solution is therefore …, with … losing out.*

* As with the marriage problem, it can be shown that in any stable assignment the same four applicants would have been rejected.
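The capacitated, applicant-proposing version of deferred acceptance can be sketched as follows; the applicants, colleges, and capacities here are invented for illustration, not taken from the example above:

```python
def college_admissions(applicant_prefs, college_prefs, capacity):
    """Applicant-proposing deferred acceptance with college capacities."""
    rank = {c: {a: i for i, a in enumerate(p)} for c, p in college_prefs.items()}
    next_try = {a: 0 for a in applicant_prefs}
    held = {c: [] for c in college_prefs}      # each college's tentative admits
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_try[a] >= len(applicant_prefs[a]):
            continue                           # applicant has run out her string
        c = applicant_prefs[a][next_try[a]]
        next_try[a] += 1
        held[c].append(a)
        held[c].sort(key=lambda x: rank[c][x]) # best applicants first
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())         # reject the least preferred held
    return held

apps = {"u": ["X", "Y"], "v": ["X", "Y"], "w": ["X", "Y"]}
cols = {"X": ["u", "v", "w"], "Y": ["w", "v", "u"]}
result = college_admissions(apps, cols, {"X": 2, "Y": 1})
```

Over-subscribed colleges here reject their least-preferred tentative admit, exactly the "make room by rejecting someone it likes less" rule of the stability criterion.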
The traders can transfer ownership amongst themselves in any way they please, except that at the end no one is allowed to own more than one house. The traders …

TABLE OF PREFERENCES

[The preference lists of the six traders A through F are omitted.]

… likes her own house best; the others have possibilities of … for all of its members by trading (from the beginning) only among themselves. Moreover, we shall see that while there may be several outcomes in the core, the "strict core" is unique (see the footnote on page 4.…). That is, there is only one way to redistribute the houses so that no coalition could have bettered even one member's position without worsening another's.

The following algorithm, due to David Gale, is based on the idea of "top trading cycles" and produces that unique strict-core outcome. We shall use the above example to illustrate it.
Step 1. Make a directed graph, with each trader represented by a vertex from which an edge points to the owner of his/her top-ranked house. [Figure omitted.]

Step 2. Find the top trading cycle(s), or TTCs, by starting at any vertex and following the arrows until the path loops back on itself. In the example, starting at A, C or E yields the cycle (C E D), and starting at B or F yields the cycle (B).

Step 3. Delete from the preference table all mention of the traders appearing in the TTCs discovered in Step 2, and return to Step 1 if any traders are left.

Step 4. When every trader has been assigned to a TTC in this manner, execute all the indicated trades. In other words, award to each trader the house originally owned by his successor in the cycle. Thus, in the example C gets e, E gets d, B gets b, etc. The final allocation is (Aa, Bb, Ce, Dc, Ed, Ff).
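Steps 1 through 4 can be sketched as follows (a minimal version; the three-trader preference lists are invented for illustration and are not the example in the text):

```python
def top_trading_cycles(prefs):
    """Gale's top-trading-cycles algorithm.

    prefs[t]: owners ranked best-first by trader t; trader t starts owning
    his/her own house. Returns trader -> trader whose original house he gets.
    """
    active = set(prefs)
    allocation = {}
    while active:
        # Step 1: each active trader points to the owner of his top remaining house.
        target = {t: next(h for h in prefs[t] if h in active) for t in active}
        # Step 2: follow arrows from any vertex until the path loops back.
        t = next(iter(active))
        seen = []
        while t not in seen:
            seen.append(t)
            t = target[t]
        cycle = seen[seen.index(t):]           # the loop itself is a TTC
        # Step 4 (per cycle): each trader gets his successor's house.
        for trader in cycle:
            allocation[trader] = target[trader]
        active -= set(cycle)                   # Step 3: delete the cycle's traders
    return allocation

# Illustrative market: 0-indexed by name; A and B want each other's houses.
prefs = {"A": ["B", "A", "C"], "B": ["A", "C", "B"], "C": ["C", "A", "B"]}
alloc = top_trading_cycles(prefs)
```

Here A and B form a two-cycle and swap, while C, who likes her own house best, forms a cycle by herself and keeps it.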
THEOREM 3. The allocation a obtained by the "top trading cycles" algorithm is stable, in the sense that no coalition of traders, trading only with each other, could have achieved any allocation in which all members would be better off.

Proof. Let a be the allocation determined by the TTC algorithm. Clearly, a is a feasible outcome. But suppose that some coalition S could have traded only amongst themselves to arrive at an allocation a′ in which every member has a better house than in a. Let i₀ be the first member of S to be assigned to a TTC during the algorithm. Then a gives i₀ the best house still available at that stage. If he is to do better in a′, he must be getting a house that was assigned at an earlier stage; and since S trades only within itself, that house must belong to a member of S who entered a TTC before i₀ did. But no such trader exists in S, since i₀ was the first to be assigned. So S can't improve after all, and it follows that a is in the core, as claimed.
THEOREM 4. The allocation a obtained by the "top trading cycles" algorithm is strongly stable, in the sense that no coalition of traders, trading only with each other, can achieve an allocation in which at least one member is better off and none are worse off. Moreover, it is the only allocation of this kind.

For an example of a core allocation that is not strongly stable, see Exercise 8 below. Curiously enough, …
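The uniqueness claim of Theorem 4 can be checked by brute force on a tiny market. The preferences below are invented for illustration; weakly_blocked tests exactly the "weak improvement" of the theorem:

```python
from itertools import combinations, permutations

# prefs[t]: owners ranked best-first by trader t; trader t starts owning house t.
prefs = {0: [1, 0, 2], 1: [0, 2, 1], 2: [2, 0, 1]}
traders = sorted(prefs)

def rank(t, owner):                 # smaller is better
    return prefs[t].index(owner)

def weakly_blocked(alloc):
    """Can some coalition, redistributing only its own endowments, make one
    member strictly better off and none worse off?"""
    for m in range(1, len(traders) + 1):
        for S in combinations(traders, m):
            for perm in permutations(S):       # a redistribution of S's houses
                alt = dict(zip(S, perm))
                if all(rank(t, alt[t]) <= rank(t, alloc[t]) for t in S) and \
                   any(rank(t, alt[t]) < rank(t, alloc[t]) for t in S):
                    return True
    return False

strict_core = [dict(zip(traders, p)) for p in permutations(traders)
               if not weakly_blocked(dict(zip(traders, p)))]
# Exactly one allocation survives: 0 and 1 swap, 2 keeps her own house.
```

Enumerating all six allocations confirms that the strict core is a single point, the TTC outcome.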
4.23
5. Games in Characteristic Function Form.

Having studied several special "NTU" models, together with the combinatorial algorithms that led us to their core solutions, we now turn back to the "TU" branch of cooperative game theory. The central idea is the characteristic function, introduced by von Neumann some fifty years ago as a means of applying his 2-person minimax theorem to games with n > 2 players. The core, like several other cooperative solution concepts, was defined first for TU games in characteristic function form, and only later extended to other types of games.

As a mathematical object, the characteristic function is simply a real-valued function, defined on the subsets of a finite set. It plays a role in cooperative n-person game theory something like that of the matrix game in 2-person zero-sum game theory. Like the matrix game, it adapts to a wide variety of applications partly because of its abstract quality, its lack of structural detail. Characteristic functions have been used to study the effects of collusion and cartels in economic markets, to analyze the distribution of power in political constitutions, to allocate overhead costs among departments sharing a common facility, to determine equitable charges for telephone service, etc. Most systematic treatments of cooperative game theory use the characteristic function as a starting point.
4.24

Notation. We shall assume familiarity with the standard set-theoretical notations: "∪" for union, "∩" for intersection, "⊆" for weak inclusion, "⊂" for strict inclusion, "−" for set-subtraction, "∈" for membership, and "∅" for the empty set. The players in an n-person game are usually named by the integers 1, 2, …, n, though we may sometimes use other names; in any case, the symbol "N" will be reserved for the set of all players, or grand coalition as it is sometimes called. For specific subsets of N we shall employ the following concise notation, consisting of the names of the members run together with a bar on top: 123, 2436, BC, 1…k, 3, A. For generic subsets of N, capital letters like S, T, … will be used. The complement of S is written N−S, and the number of elements of S is written |S|.

The characteristic function of a cooperative TU game, conventionally denoted by "v", is a function that assigns to each subset S of N a real number v(S) which is meant to represent the worth of S. Intuitively, v(S) is the maximum total payoff that the members of S can guarantee if they pool their interests and act as a coalition against the rest of the game. This is another instance of the conservative or "worst-case" assumption that gave us the "safety level" and "maxmin" concepts in Chapters II and III. In other words, if we are given an n-person game in strategic form, then for each nonempty subset S of N we can construct a two-person game.
EXERCISES

Exercise 7. Solve the following house-swapping games by the method of top trading cycles:

[The preference tables for parts (a) and (b) are omitted.]

Exercise 8. [The preference table for this four-trader game is omitted.]

(a) Show that the allocation a = (Ab, Ba, Cd, Dc) is in the core.

(b) Show that it is weakly dominated by the allocation that gives all traders their first choice.

(c) Formulate the game in which the same four traders start with the allocation a′, and determine the core and strict core.
APPENDIX ON THE STRICT CORE

Proof that a is Strongly Stable. We know that a is in the core. But suppose that some coalition S could make a "weak" improvement; that is, could, by trading only within the coalition, achieve an allocation a′ which makes some of the members better off than in a and the rest no worse off. Call the better-off members "improvers" and the rest "non-improvers", and let i₀ be the first improver to enter a TTC when the algorithm is run.* Since a gives i₀ the best house available on that round, a′ can't be an improvement unless it gives him the house of some j₀ in S who entered a TTC on an earlier round, say T* = (… j₀ j₁ j₂ …), and who must therefore be a non-improver. Since there are no ties in the rankings, she must get the same house in a as in a′. So her successor in T*, j₁, also belongs to S, and is a non-improver too. By the same token, j₂ and all the other traders in T* are also non-improvers in S. In particular, j₀'s predecessor in T* is a non-improver in S, and so gets j₀'s house in both a and a′. But this contradicts the assumption that a′ gives j₀'s house to i₀. So S cannot "weakly improve" after all, and it follows that a is in the strict core.

Proof of Uniqueness. Suppose a′ is an allocation different from the a of the algorithm. Let T₁, T₂, … be the TTCs of the algorithm, arranged in order of their construction,* and let Tₖ be the first one that does not agree with the trades made in a′. Let S = T₁ ∪ … ∪ Tₖ. Then the coalition S can improve weakly on a′ by simply redistributing houses according to the trading cycles T₁, …, Tₖ. Indeed, the members of T₁ ∪ … ∪ Tₖ₋₁ will get the same houses as in a and a′, while the members of Tₖ will be getting the best houses available. This represents a weak improvement for S, because at least one member of Tₖ was getting something else in a′, and hence something worse. So a′ is not in the strict core.

* If several TTCs are constructed in the same round it won't matter how they are ordered in this list.
4.25

In this two-person game, S and N−S have been condensed into single players. Then v(S) is the "pessimistic" maxmin value of this game to the "S" player, which in turn is equal to the minimax value of the related 0-sum game in which the payoff to the "N−S" player is replaced by the negative of the "S" player's payoff.

To complete the definition, we take v(N), the worth of the grand coalition, to be the maximum possible total payoff when everyone cooperates (generalizing the corresponding definition in Chapter III), and we take v(∅), the worth of the empty coalition, to be zero.

It is possible in principle to calculate the complete characteristic function of any n-person game that is given to us in finite strategic form. In practice, however, this may be a prohibitive calculation, both because coalitions tend to have very large numbers of pure strategies and because of the huge number of coalitions that are possible when n is large. Fortunately, there are many situations where a verbal description of the game leads directly to the characteristic function, without passing through any intermediate explicit strategic form. If we focus on the best that a coalition can do, we can often escape the need to specify strategic possibilities in detail. Here is an example that we shall return to in subsequent sections.
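The construction just described can be carried out mechanically for a toy 3-player game in strategic form. The payoff rule below is invented for illustration, and the coalition's guarantee is computed over pure strategies only, a simplification of the mixed-strategy definition in the text:

```python
from itertools import combinations, product

strategies = {0: (0, 1), 1: (0, 1), 2: (0, 1)}   # two pure strategies per player

def payoff(profile):
    # toy rule: a player earns 1 by playing the majority action, else 0
    maj = 1 if sum(profile) >= 2 else 0
    return tuple(1 if move == maj else 0 for move in profile)

def worth(S):
    """v(S): the total payoff S can guarantee, with N-S acting jointly against it."""
    S = tuple(sorted(S))
    T = tuple(p for p in strategies if p not in S)
    if not S:
        return 0
    best = float("-inf")
    for joint_s in product(*(strategies[p] for p in S)):
        worst = float("inf")
        for joint_t in product(*(strategies[p] for p in T)):
            profile = [0] * len(strategies)
            for p, move in zip(S, joint_s):
                profile[p] = move
            for p, move in zip(T, joint_t):
                profile[p] = move
            worst = min(worst, sum(payoff(profile)[p] for p in S))
        best = max(best, worst)
    return best

v = {S: worth(S) for m in range(len(strategies) + 1)
     for S in combinations(sorted(strategies), m)}
# e.g. any two players can guarantee 2 by voting together, so v[(0, 1)] == 2
```

Even in this tiny example the double loop over coalition and counter-coalition strategies hints at why the calculation becomes prohibitive for large n.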
4.26
THE CATTLE DRIVE. Rancher "A" has some cattle ready for market, and he foresees a profit of $1200 on the sale. But two other ranches lie between his ranch and the market town, and their owners "B" and "C" can deny passage through their land, or demand a transit fee. The question, of course, is what would be a suitable fee?

      S     v(S)
     ABC    1200
     AB     1200
     AC     1200
     BC        0
     A         0
     B         0
     C         0

Figure 4-2. The cattle drive. [Map of the ranches and the town omitted.]
The characteristic function is tabulated above at the right. Observe that the coalition BC, despite its strong bargaining position, is technically "worthless". This is because B and C cannot turn a profit without dealing with A. So the blocking power of BC, such as it is, is reflected only in the fact that v(A) is zero.*

* For some exercises on this section, see the "(a)" parts of the exercises at the end of Section 6.
4.27

6. The Core of a Characteristic Function Game.

The principle that underlies the "core" solution is direct and compelling: no group of players will accept an outcome if, by forming a coalition, they could do better. Applied to games in characteristic function form, this principle requires of a payoff vector x = (x_1, …, x_n) that

(1)  Σ_{i∈N} x_i ≤ v(N),

(2)  Σ_{i∈S} x_i ≥ v(S), all S ⊆ N.

The core is then the set of all payoff vectors that are acceptable in this sense. Several remarks on this definition are in order.

First. Since N is included among the S's of (2), conditions (1) and (2) taken together imply that

(3)  Σ_{i∈N} x_i = v(N).

This is a basic postulate of cooperative game theory, known as the efficiency axiom.*

Second. Since the "singletons" ī are included among the S's of (2), we have

(4)  x_i ≥ v(ī), all i ∈ N.

This condition is called individual rationality, another basic axiom of cooperative game theory. It is of course a special case of the "coalitional rationality" embodied in (2), but it rests on a stronger intuitive foundation. We cannot be sure which coalitions will form, but we do expect the individual players to be forever alert to their own interests.

Third. Does every game have a core? Unfortunately, no. The linear inequalities in (2) always have a feasible solution, since they only impose lower bounds on the variables x_i. But (1) adds an upper bound to the constraint set, and this can render the whole system of linear inequalities infeasible and leave us with an empty core.** This is a serious drawback to the core concept. All too often we find that characteristic function games which have high-scoring middle-sized coalitions also have empty cores. A simple everyday example is the game of voting by majority rule (see Ex. 9, below). Indeed, the validity of the core concept itself is called into question by such examples. Whether we like it or not, we must face the fact that many perfectly reasonable interactive situations will necessarily have some sets of players accepting less than what they could obtain in coalition.

The opposite phenomenon can also be troublesome: the core may be "too big". Indeed, in games where coalitions of intermediate size are relatively unprofitable, the core may be so large as to be virtually useless, either as a predictive or a normative solution. In the extreme case of "pure bargaining" (see Ex. 10, below), the core consists of all efficient, individually rational payoff vectors, and tells us nothing at all about the outcome.

These drawbacks notwithstanding, the core remains of great importance in the study of cooperative games.*** Even when another type of solution is in use, we will still want to ask whether it has the core property, since the interpretation of the other solution will depend on the answer.

Let us return to the example of the "Cattle Drive" (see p. 4.26). This is a case in which the core is neither too big nor too small. In fact, it is just a single payoff vector and so makes a definite prediction. But this prediction reveals another, more subtle difficulty with the core concept: its lack of sensitivity to bargaining power. Applying conditions (1) and (2) to the characteristic function on page 4.26, we see that a payoff vector x is in the core if and only if

(a)  x_A + x_B + x_C = 1200
(b)  x_A + x_B ≥ 1200
(c)  x_A + x_C ≥ 1200
(d)  x_A ≥ 0
(e)  x_B ≥ 0
(f)  x_C ≥ 0.

Conditions (a), (b) and (f), taken together, imply that x_C is zero.**** Similarly, (a), (c) and (e) together imply that x_B is zero. So the core consists of just the point (1200, 0, 0). In other words, the only feasible outcome that all coalitions will accept is for B or C to allow A to drive his cattle through for free!

Intuitively, this seems to be wrong. Or, perhaps, it's the right answer to a wrong question. To be sure, one can imagine a kind of "price war", in which B and C take turns cutting their "cattle transit fees", until in the end the fees are essentially zero. Some economic equilibrium models do make predictions of this kind. But our example is not a large, anonymous market, and B and C would surely be aware of the advantages of combining forces in order to wring concessions from A. And even if they were somehow convinced that $0 is the theoretically "correct" fee, B and C would still have no incentive, apart from pure altruism,***** to let the cattle pass through.

In short, B and C are in a good bargaining position vis-à-vis A, and yet the core stubbornly refuses to recognize this fact. We shall return to this example in Section 7, armed with another solution concept which is designed to take better account of bargaining power.

* Called Pareto optimality in economics.

** When it is not empty, the core of an n-person game (N, v) is a convex polyhedron, usually of dimension n−1.

*** In economic theory, it can be shown that the classic market equilibrium, i.e. a set of prices and transactions that balance supply and demand while fulfilling individual desires, always has the core property when viewed as a cooperative multi-person game: no subset of traders, by trading only among themselves, can improve upon what they get in the general equilibrium. Moreover, the core in this type of system tends to be quite small, and hence a good predictor. These theorems were first discovered by F. Y. Edgeworth in 1881, making him at least the spiritual originator of the core concept, 72 years before it was identified and given a name by game theorists. Although Edgeworth was a well-known economist, his results in this field were virtually ignored until their revival by Martin Shubik in the 1950s.

**** Proof: Use (a) to eliminate the variables x_A and x_B from (b), then compare the result with (f).

***** If A's welfare makes a difference to B or C (positive or negative), this fact ought to be reflected in their payoff functions.
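Conditions (1) and (2) are mechanical enough to check in a few lines of code; the characteristic function below is the cattle-drive one tabulated on page 4.26:

```python
from itertools import chain, combinations

players = ("A", "B", "C")
v = {(): 0, ("A",): 0, ("B",): 0, ("C",): 0,
     ("A", "B"): 1200, ("A", "C"): 1200, ("B", "C"): 0,
     ("A", "B", "C"): 1200}

def in_core(x):
    """Check feasibility (1) and coalitional rationality (2) for a payoff vector."""
    if sum(x[p] for p in players) > v[players]:
        return False                         # violates (1)
    coalitions = chain.from_iterable(
        combinations(players, m) for m in range(1, len(players) + 1))
    return all(sum(x[p] for p in S) >= v[S] for S in coalitions)   # (2)

assert in_core({"A": 1200, "B": 0, "C": 0})        # the unique core point
assert not in_core({"A": 1100, "B": 50, "C": 50})  # coalition AB can object
```

Any attempt to give B or C a positive transit fee is objected to by one of the coalitions AB or AC, confirming that (1200, 0, 0) is the whole core.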
4.32

EXERCISES

Exercise 9. MAJORITY RULE. A group of n players are offered a sum of money, say $100, on condition that they agree how to distribute it among themselves. A majority of the group must OK the distribution, else the offer is void.

(a) Determine the characteristic function.*
(b) Show that if n > 2 then the core is empty. [Hint: Take the core inequalities (2) for all the coalitions S that have n−1 members, add them all together, and show that an impossible condition on x is obtained.]

Exercise 10. PURE BARGAINING. As in Exercise 9, except that the proposed distribution must be approved unanimously.

(a) Determine the characteristic function.*
(b) Show that the core contains every individually rational allocation of the $100. [Hint: Show that if (2) holds for every singleton S, then it holds for every S.]

Exercise 11. POST OFFICE. There are n players, n > 2. Each player must select one of the other players and mail him or her a dollar bill.

(a) Determine the characteristic function for n = 3 and 4.*
(b) Show that the core is empty.

* Note that when a game is symmetric in the players, the characteristic function depends only on the size of the coalition. So we may set v(S) = f(|S|) and specify the game completely by the numbers f(k), k = 0, …, n.
4.33

Exercise 12. A ROAD NETWORK. Three farms are connected with each other and with "civilization" by rough trails, as shown in the figure below. Each farmer would benefit from a paved road connecting his farm to the highway; the amount of the benefit is given along with the cost of paving each segment of the trail network in the figure (in $1000's). [Figure omitted; some pages are missing here.]

… should get nothing in the value. (iii) If two characteristic functions are added, then the sum of their values should be the value of their sum. Certain other "plausible" properties, like individual rationality (4), are logical consequences of these axioms, while others, like coalitional rationality (2), are logically inconsistent with them.
4.36

To prove this claim, consider a particular pair S₀, i₀, with i₀ ∈ S₀. In how many of the possible orders will the set of player i₀'s predecessors be precisely S₀ − i₀? Picture a random order:

(8)   [ members of S₀ − i₀ | i₀ | members of N − S₀ ]

Clearly the members of S₀ − i₀ can be permuted at will; likewise the members of N − S₀. This gives us |S₀ − i₀|! times |N − S₀|! different orders. But any other permutation would destroy the situation depicted above in (8). So the coefficient to be attached to the term v(S₀) − v(S₀ − i₀) in the average is |S₀ − i₀|! |N − S₀|! / n!, as was to be proved.
In the "cattle drive" problem, the construction of the value is not difficult, since there are only 6 orders of the players. (See page 4.26 for the characteristic function.)

    Order      A      B      C
    ABC        0    1200      0
    ACB        0       0   1200
    BAC      1200      0      0
    BCA      1200      0      0
    CAB      1200      0      0
    CBA      1200      0      0
    ------------------------------
    Total    4800    1200   1200

    Value:  φ = (800, 200, 200)

Thus, according to the Shapley value, rancher A should end up collecting $1200 for his herd and paying $400 for the transit rights, which B and C will split equally.
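The averaging over orders can also be done mechanically. The characteristic function below is the cattle-drive one, and the computation reproduces the value (800, 200, 200):

```python
from itertools import permutations

players = ("A", "B", "C")
v = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
     frozenset("AB"): 1200, frozenset("AC"): 1200, frozenset("BC"): 0,
     frozenset("ABC"): 1200}

def shapley(players, v):
    """Average each player's marginal contribution over all n! orders."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]   # marginal contribution
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

phi = shapley(players, v)
```

For three players the six orders are enumerated directly, exactly as in the table above; for large n one would sample orders or use the coefficient formula instead.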