### Chapter One

**Static Games of Complete Information**

In this chapter we consider games of the following simple form:
first the players simultaneously choose actions; then the players
receive payoffs that depend on the combination of actions just chosen.
Within the class of such static (or simultaneous-move) games,
we restrict attention to games of *complete information*. That is, each
player's payoff function (the function that determines the player's
payoff from the combination of actions chosen by the players) is
common knowledge among all the players. We consider dynamic
(or sequential-move) games in Chapters 2 and 4, and games of
incomplete information (games in which some player is uncertain
about another player's payoff function, as in an auction where
each bidder's willingness to pay for the good being sold is unknown
to the other bidders) in Chapters 3 and 4.

In Section 1.1 we take a first pass at the two basic issues in
game theory: how to describe a game and how to solve the resulting
game-theoretic problem. We develop the tools we will use
in analyzing static games of complete information, and also the
foundations of the theory we will use to analyze richer games in
later chapters. We define the *normal-form representation* of a game
and the notion of a *strictly dominated strategy*. We show that some
games can be solved by applying the idea that rational players
do not play strictly dominated strategies, but also that in other
games this approach produces a very imprecise prediction about
the play of the game (sometimes as imprecise as "anything could
happen"). We then motivate and define *Nash equilibrium*, a solution
concept that produces much tighter predictions in a very
broad class of games.

In Section 1.2 we analyze four applications, using the tools developed in the previous section: Cournot's (1838) model of imperfect competition, Bertrand's (1883) model of imperfect competition, Farber's (1980) model of final-offer arbitration, and the problem of the commons (discussed by Hume [1739] and others). In each application we first translate an informal statement of the problem into a normal-form representation of the game and then solve for the game's Nash equilibrium. (Each of these applications has a unique Nash equilibrium, but we discuss examples in which this is not true.)

In Section 1.3 we return to theory. We first define the notion
of a *mixed strategy*, which we will interpret in terms of one
player's uncertainty about what another player will do. We then
state and discuss Nash's (1950) Theorem, which guarantees that a
Nash equilibrium (possibly involving mixed strategies) exists in a
broad class of games. Since we first present basic theory in Section
1.1, then applications in Section 1.2, and finally more theory
in Section 1.3, it should be apparent that mastering the additional
theory in Section 1.3 is not a prerequisite for understanding the
applications in Section 1.2. On the other hand, the ideas of a mixed
strategy and the existence of equilibrium do appear (occasionally)
in later chapters.

This and each subsequent chapter concludes with problems, suggestions for further reading, and references.

**1.1 Basic Theory: Normal-Form Games and Nash Equilibrium**

**1.1.A Normal-Form Representation of Games**

In the normal-form representation of a game, each player simultaneously
chooses a strategy, and the combination of strategies
chosen by the players determines a payoff for each player. We
illustrate the normal-form representation with a classic example:
*The Prisoners' Dilemma*. Two suspects are arrested and charged
with a crime. The police lack sufficient evidence to convict the suspects,
unless at least one confesses. The police hold the suspects in
separate cells and explain the consequences that will follow from
the actions they could take. If neither confesses then both will be
convicted of a minor offense and sentenced to one month in jail.
If both confess then both will be sentenced to jail for six months.
Finally, if one confesses but the other does not, then the confessor
will be released immediately but the other will be sentenced
to nine months in jail: six for the crime and a further three for
obstructing justice.

The prisoners' problem can be represented in the following bi-matrix, in which Prisoner 1 chooses the row and Prisoner 2 chooses the column. (Like a matrix, a bi-matrix can have an arbitrary number of rows and columns; "bi" refers to the fact that, in a two-player game, there are two numbers in each cell: the payoffs to the two players.)

|          | Mum    | Fink   |
|----------|--------|--------|
| **Mum**  | -1, -1 | -9, 0  |
| **Fink** | 0, -9  | -6, -6 |

In this game, each player has two strategies available: confess (or fink) and not confess (or be mum). The payoffs to the two players when a particular pair of strategies is chosen are given in the appropriate cell of the bi-matrix. By convention, the payoff to the so-called row player (here, Prisoner 1) is the first payoff given, followed by the payoff to the column player (here, Prisoner 2). Thus, if Prisoner 1 chooses Mum and Prisoner 2 chooses Fink, for example, then Prisoner 1 receives the payoff -9 (representing nine months in jail) and Prisoner 2 receives the payoff 0 (representing immediate release).
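
The logic of these payoffs can be checked mechanically. The following sketch (illustrative only; the encoding and function name are my own, with the payoff numbers taken from the story above) stores the bi-matrix as a Python dictionary and computes Prisoner 1's best response to each choice Prisoner 2 might make:

```python
# Payoffs for the Prisoners' Dilemma, indexed by (row strategy, column strategy).
# Each cell holds (payoff to Prisoner 1, payoff to Prisoner 2).
PD = {
    ("Mum", "Mum"): (-1, -1),
    ("Mum", "Fink"): (-9, 0),
    ("Fink", "Mum"): (0, -9),
    ("Fink", "Fink"): (-6, -6),
}

def best_response_row(opponent_strategy):
    """Prisoner 1's best response to a fixed strategy of Prisoner 2."""
    return max(("Mum", "Fink"), key=lambda s: PD[(s, opponent_strategy)][0])

print(best_response_row("Mum"))   # Fink
print(best_response_row("Fink"))  # Fink
```

Both calls return Fink: whatever Prisoner 2 does, Prisoner 1 does strictly better by finking, which is exactly the dominance argument developed below.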

We now turn to the general case. The *normal-form representation*
of a game specifies: (1) the players in the game, (2) the strategies
available to each player, and (3) the payoff received by each player
for each combination of strategies that could be chosen by the
players. We will often discuss an *n*-player game in which the
players are numbered from 1 to *n* and an arbitrary player is called
player *i*. Let $S_i$ denote the set of strategies available to player *i*
(called *i's strategy space*), and let $s_i$ denote an arbitrary member of
this set. (We will occasionally write $s_i \in S_i$ to indicate that the
strategy $s_i$ is a member of the set of strategies $S_i$.) Let $(s_1, \ldots, s_n)$ denote a combination of strategies, one for each player, and let
$u_i$ denote player *i*'s *payoff function*: $u_i(s_1, \ldots, s_n)$ is the payoff to player *i* if the players choose the strategies $(s_1, \ldots, s_n)$. Collecting all of this information together, we have:

**Definition** *The normal-form representation of an n-player game specifies
the players' strategy spaces $S_1, \ldots, S_n$ and their payoff functions
$u_1, \ldots, u_n$. We denote this game by $G = \{S_1, \ldots, S_n; u_1, \ldots, u_n\}$.*

Although we stated that in a normal-form game the players
choose their strategies simultaneously, this does not imply that the
parties necessarily *act* simultaneously: it suffices that each choose
his or her action without knowledge of the others' choices, as
would be the case here if the prisoners reached decisions at arbitrary
times while in their separate cells. Furthermore, although
in this chapter we use normal-form games to represent only static
games in which the players all move without knowing the other
players' choices, we will see in Chapter 2 that normal-form representations
can be given for sequential-move games, but also that
an alternative-the *extensive-form* representation of the game-is
often a more convenient framework for analyzing dynamic issues.

**1.1.B Iterated Elimination of Strictly Dominated
Strategies**

Having described one way to represent a game, we now take a first pass at describing how to solve a game-theoretic problem. We start with the Prisoners' Dilemma because it is easy to solve, using only the idea that a rational player will not play a strictly dominated strategy.

In the Prisoners' Dilemma, if one suspect is going to play Fink,
then the other would prefer to play Fink and so be in jail for six
months rather than play Mum and so be in jail for nine months.
Similarly, if one suspect is going to play Mum, then the other
would prefer to play Fink and so be released immediately rather
than play Mum and so be in jail for one month. Thus, for prisoner
*i*, playing Mum is strictly dominated by playing Fink: for each strategy
that prisoner *j* could choose, the payoff to prisoner *i* from playing
Mum is less than the payoff to *i* from playing Fink. (The same
would be true in any bi-matrix in which the payoffs 0, -1, -6,
and -9 above were replaced with payoffs *T, R, P,* and *S*, respectively,
provided that *T > R > P > S* so as to capture the ideas
of temptation, reward, punishment, and sucker payoffs.) More
generally:

**Definition** *In the normal-form game $G = \{S_1, \ldots, S_n; u_1, \ldots, u_n\}$, let
$s_i'$ and $s_i''$ be feasible strategies for player i (i.e., $s_i'$ and $s_i''$ are members of
$S_i$). Strategy $s_i'$ is strictly dominated by strategy $s_i''$ if for each feasible
combination of the other players' strategies, i's payoff from playing $s_i'$ is
strictly less than i's payoff from playing $s_i''$:*

$$u_i(s_1, \ldots, s_{i-1}, s_i', s_{i+1}, \ldots, s_n) < u_i(s_1, \ldots, s_{i-1}, s_i'', s_{i+1}, \ldots, s_n) \tag{DS}$$

*for each $(s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_n)$ that can be constructed from the other
players' strategy spaces $S_1, \ldots, S_{i-1}, S_{i+1}, \ldots, S_n$.*
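
Condition (DS) can be tested directly by enumerating every combination of the other players' strategies. Below is a minimal sketch, assuming a game is stored as a dictionary mapping strategy profiles to payoff tuples; the function and variable names are illustrative, not from the text:

```python
from itertools import product

def strictly_dominates(game, strategy_spaces, i, s_double, s_prime):
    """Return True if s_double strictly dominates s_prime for player i,
    i.e. condition (DS) holds for every combination of the other players'
    strategies. game[profile] is the tuple of payoffs at that profile."""
    others = [space for j, space in enumerate(strategy_spaces) if j != i]
    for combo in product(*others):
        # Rebuild full profiles by slotting player i's strategy into position i.
        profile_prime = combo[:i] + (s_prime,) + combo[i:]
        profile_double = combo[:i] + (s_double,) + combo[i:]
        if game[profile_prime][i] >= game[profile_double][i]:
            return False  # (DS) fails against this combination
    return True

# Check in the Prisoners' Dilemma: Fink strictly dominates Mum for player 1.
PD = {("Mum", "Mum"): (-1, -1), ("Mum", "Fink"): (-9, 0),
      ("Fink", "Mum"): (0, -9), ("Fink", "Fink"): (-6, -6)}
spaces = [("Mum", "Fink"), ("Mum", "Fink")]
print(strictly_dominates(PD, spaces, 0, "Fink", "Mum"))  # True
```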

Rational players do not play strictly dominated strategies, because there is no belief that a player could hold (about the strategies the other players will choose) such that it would be optimal to play such a strategy. Thus, in the Prisoners' Dilemma, a rational player will choose Fink, so (Fink, Fink) will be the outcome reached by two rational players, even though (Fink, Fink) results in worse payoffs for both players than would (Mum, Mum). Because the Prisoners' Dilemma has many applications (including the arms race and the free-rider problem in the provision of public goods), we will return to variants of the game in Chapters 2 and 4. For now, we focus instead on whether the idea that rational players do not play strictly dominated strategies can lead to the solution of other games.

Consider the abstract game in Figure 1.1.1. Player 1 has two
strategies and player 2 has three: $S_1$ = {Up, Down} and $S_2$ =
{Left, Middle, Right}. For player 1, neither Up nor Down is strictly
dominated: Up is better than Down if 2 plays Left (because 1 > 0),
but Down is better than Up if 2 plays Right (because 2 > 0). For
player 2, however, Right is strictly dominated by Middle (because
2 > 1 and 1 > 0), so a rational player 2 will not play Right.
Thus, if player 1 knows that player 2 is rational then player 1 can
eliminate Right from player 2's strategy space. That is, if player
1 knows that player 2 is rational then player 1 can play the game
in Figure 1.1.1 *as if* it were the game in Figure 1.1.2.

In Figure 1.1.2, Down is now strictly dominated by Up for
player 1, so if player 1 is rational (and player 1 knows that player 2
is rational, so that the game in Figure 1.1.2 applies) then player 1
will not play Down. Thus, if player 2 knows that player 1 is rational,
*and* player 2 knows that player 1 knows that player 2 is
rational (so that player 2 knows that Figure 1.1.2 applies), then
player 2 can eliminate Down from player 1's strategy space, leaving
the game in Figure 1.1.3. But now Left is strictly dominated
by Middle for player 2, leaving (Up, Middle) as the outcome of
the game.
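
The elimination steps just traced can be automated. The figure itself is not reproduced here, so the payoff numbers below are illustrative choices consistent with every comparison quoted in the text (1 > 0 and 2 > 0 for player 1; 2 > 1 and 1 > 0 for Middle over Right; once Down is gone, Middle beats Left):

```python
def iterated_elimination(payoffs, spaces):
    """Iterated elimination of strictly dominated strategies (two players).
    payoffs[(s1, s2)] is (payoff to player 1, payoff to player 2)."""
    s1, s2 = list(spaces[0]), list(spaces[1])
    changed = True
    while changed:
        changed = False
        for player, own, other in ((0, s1, s2), (1, s2, s1)):
            def pay(mine, theirs):
                key = (mine, theirs) if player == 0 else (theirs, mine)
                return payoffs[key][player]
            for a in list(own):
                for b in own:
                    # b strictly dominates a if it does strictly better
                    # against every surviving strategy of the opponent.
                    if b != a and all(pay(b, c) > pay(a, c) for c in other):
                        own.remove(a)
                        changed = True
                        break
    return s1, s2

# Illustrative payoffs in the spirit of Figure 1.1.1 (not from the text).
g = {("Up", "Left"): (1, 0), ("Up", "Middle"): (1, 2), ("Up", "Right"): (0, 1),
     ("Down", "Left"): (0, 3), ("Down", "Middle"): (0, 1), ("Down", "Right"): (2, 0)}
print(iterated_elimination(g, [("Up", "Down"), ("Left", "Middle", "Right")]))
# (['Up'], ['Middle'])
```

The run deletes Right first, then Down, then Left, leaving (Up, Middle), matching the order of the argument above.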

This process is called *iterated elimination of strictly dominated
strategies*. Although it is based on the appealing idea that rational
players do not play strictly dominated strategies, the process
has two drawbacks. First, each step requires a further assumption
about what the players know about each other's rationality. If
we want to be able to apply the process for an arbitrary number
of steps, we need to assume that it is *common knowledge* that the
players are rational. That is, we need to assume not only that all
the players are rational, but also that all the players know that all
the players are rational, and that all the players know that all the
players know that all the players are rational, and so on, *ad infinitum*.
(See Aumann [1976] for the formal definition of common
knowledge.)

The second drawback of iterated elimination of strictly dominated strategies is that the process often produces a very imprecise prediction about the play of the game. Consider the game in Figure 1.1.4, for example. In this game there are no strictly dominated strategies to be eliminated. (Since we have not motivated this game in the slightest, it may appear arbitrary, or even pathological. See the case of three or more firms in the Cournot model in Section 1.2.A for an economic application in the same spirit.) Since all the strategies in the game survive iterated elimination of strictly dominated strategies, the process produces no prediction whatsoever about the play of the game.

We turn next to Nash equilibrium, a solution concept that produces much tighter predictions in a very broad class of games. We show that Nash equilibrium is a stronger solution concept than iterated elimination of strictly dominated strategies, in the sense that the players' strategies in a Nash equilibrium always survive iterated elimination of strictly dominated strategies, but the converse is not true. In subsequent chapters we will argue that in richer games even Nash equilibrium produces too imprecise a prediction about the play of the game, so we will define still-stronger notions of equilibrium that are better suited for these richer games.

**1.1.C Motivation and Definition of Nash Equilibrium**

One way to motivate the definition of Nash equilibrium is to argue
that if game theory is to provide a unique solution to a game-theoretic
problem then the solution must be a Nash equilibrium,
in the following sense. Suppose that game theory makes a unique
prediction about the strategy each player will choose. In order
for this prediction to be correct, it is necessary that each player be
willing to choose the strategy predicted by the theory. Thus, each
player's predicted strategy must be that player's best response
to the predicted strategies of the other players. Such a prediction
could be called *strategically stable or self-enforcing*, because no single
player wants to deviate from his or her predicted strategy. We will
call such a prediction a Nash equilibrium:

**Definition** *In the n-player normal-form game $G = \{S_1, \ldots, S_n; u_1, \ldots, u_n\}$, the strategies $(s_1^*, \ldots, s_n^*)$ are a Nash equilibrium if, for each player
i, $s_i^*$ is (at least tied for) player i's best response to the strategies specified
for the $n-1$ other players, $(s_1^*, \ldots, s_{i-1}^*, s_{i+1}^*, \ldots, s_n^*)$:*

$$u_i(s_1^*, \ldots, s_{i-1}^*, s_i^*, s_{i+1}^*, \ldots, s_n^*) \ge u_i(s_1^*, \ldots, s_{i-1}^*, s_i, s_{i+1}^*, \ldots, s_n^*) \tag{NE}$$

*for every feasible strategy $s_i$ in $S_i$; that is, $s_i^*$ solves*

$$\max_{s_i \in S_i} u_i(s_1^*, \ldots, s_{i-1}^*, s_i, s_{i+1}^*, \ldots, s_n^*).$$
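
Condition (NE) can be checked mechanically: enumerate the strategy profiles and test whether any player has a strictly profitable deviation. A minimal sketch (the names and data layout are my own; the loop handles any number of players):

```python
from itertools import product

def pure_nash_equilibria(payoffs, spaces):
    """Return every pure-strategy profile satisfying condition (NE):
    each player's strategy is a best response to the others' strategies."""
    equilibria = []
    for profile in product(*spaces):
        is_ne = True
        for i, space in enumerate(spaces):
            # Best payoff player i can get holding the others' strategies fixed.
            best = max(payoffs[profile[:i] + (s,) + profile[i + 1:]][i]
                       for s in space)
            if payoffs[profile][i] < best:  # i has a profitable deviation
                is_ne = False
                break
        if is_ne:
            equilibria.append(profile)
    return equilibria

# In the Prisoners' Dilemma, (Fink, Fink) is the unique pure Nash equilibrium.
PD = {("Mum", "Mum"): (-1, -1), ("Mum", "Fink"): (-9, 0),
      ("Fink", "Mum"): (0, -9), ("Fink", "Fink"): (-6, -6)}
print(pure_nash_equilibria(PD, [("Mum", "Fink"), ("Mum", "Fink")]))
# [('Fink', 'Fink')]
```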

To relate this definition to its motivation, suppose game theory
offers the strategies $(s_1', \ldots, s_n')$ as the solution to the normal-form
game $G = \{S_1, \ldots, S_n; u_1, \ldots, u_n\}$. Saying that $(s_1', \ldots, s_n')$ is *not*
a Nash equilibrium of $G$ is equivalent to saying that there exists
some player $i$ such that $s_i'$ is *not* a best response to $(s_1', \ldots, s_{i-1}', s_{i+1}', \ldots, s_n')$. That is, there exists some $s_i''$ in $S_i$ such that

$$u_i(s_1', \ldots, s_{i-1}', s_i', s_{i+1}', \ldots, s_n') < u_i(s_1', \ldots, s_{i-1}', s_i'', s_{i+1}', \ldots, s_n').$$

Thus, if the theory offers the strategies (*[s'.sub.1], ..., [s'.sub.n]*) as the solution
but these strategies are not a Nash equilibrium, then at least one
player will have an incentive to deviate from the theory's prediction,
so the theory will be falsified by the actual play of the game.
A closely related motivation for Nash equilibrium involves the
idea of convention: if a convention is to develop about how to
play a given game then the strategies prescribed by the convention
must be a Nash equilibrium, else at least one player will not
abide by the convention.
