January 11, 2007: Evo Game Theory
Second-half lecture [Dan Friedman]
Notes transcribed by Ryan Shelby and edited by DF
Outline
 Technical points on mixed strategies, linearity, etc.
 Desiderata for solutions (desired concept form)
 DS, IEDS, BI
{Dominant Strategy, Iterated Elimination of Dominated Strategies, Backwards Induction}
 BR, NE, ESS
{Best Response/Reply, Nash Equilibrium, Evolutionary Stable Strategy}
Technical Points on Mixed Strategies and Payoff Matrices
f_{i}(s_{i}, s_{-i}) is a general payoff function for a normal form game, where s_{i} ∈ S_{i} denotes the strategy chosen by player i and s_{-i} denotes the strategy (combination) chosen by the other player(s).
Mixed strategies are vectors of probabilities that sum to 1 and are usually denoted by σ.
The payoff function can be represented by a matrix when there are two players each with a finite number of pure strategies. Let
a_{ij} = f_{1}(purestrat_{i}, purestrat_{j}),
and form the matrix A = ((a_{ij})). A is the payoff matrix for the first player in a two player game. Similarly, let
b_{ij} = f_{2}(purestrat_{j}, purestrat_{i}).
Then the second player’s payoff function is represented by the matrix B = ((b_{ij})).
Two special cases should be noted.
In a symmetric two player game, B = A' (Why the transpose? Because the Column Player doesn't know he's second; from his point of view he is the row [first] player, so we have to transpose columns to rows.) The more general statement is that in a symmetric Normal form game f_{i} = f_{j} ∀ i, j. Of course, for the functions to be the same their domains, the strategy sets, have to be the same, so S_{i} = S_{j} = S. For example, S = {Up, Down}. A symmetric game looks the same to all players.
In a Zero Sum Game a_{ij} + b_{ij} = 0. For the symmetric case, this is just a_{ij} + a_{ji} = 0. Constant sum is strategically equivalent to zero sum.
We will usually look at nonzero sum games, but often will look at symmetric two player games. Often, to save space, we will just write the A matrix in this case, since the B matrix is redundant.
Matrices are especially helpful when we consider probabilistic mixtures of pure strategies.
 Mixed Strategies: if x, y ∈ S, then p.x + (1−p).y is a mixed strategy, where p ∈ [0,1] is the probability that x is chosen. In populations p may be the frequency.
 Expected Payoff: of a mixed strategy is given by:
f(p.x + (1−p).y, s_{-i}) = p.f(x, s_{-i}) + (1−p).f(y, s_{-i})
Linearity is always assumed in the first argument. For general games the payoff may not be linear in the second argument. Usually linearity there is also assumed and then we can use the matrix representation of the payoff. For example, if I chose the mixed strategy (p_{1}, p_{2}, ..., p_{m}) = P and my opponent chooses (q_{1}, q_{2}, ..., q_{m}) = Q, then my (expected) payoff is given by:
f(P,Q) = P.A.Q'
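As a quick numerical sketch (not from the notes), the expected payoff P.A.Q' can be computed directly from the matrix. The A matrix below uses the standard Hawk-Dove entries with V = 2, C = 4, i.e. (V−C)/2 and V in the first row, 0 and V/2 in the second:

```python
def expected_payoff(P, A, Q):
    """Expected payoff P.A.Q' of mixed strategy P against mixed strategy Q."""
    return sum(P[i] * A[i][j] * Q[j]
               for i in range(len(P)) for j in range(len(Q)))

# Standard Hawk-Dove payoff matrix with V = 2, C = 4
A = [[-1.0, 2.0],
     [0.0, 1.0]]
P = [0.5, 0.5]  # mix: half Hawk, half Dove
Q = [0.5, 0.5]
```

Here expected_payoff(P, A, Q) averages the four cells with weights P[i]·Q[j], giving 0.5 for these equal mixes; pure strategies are just the degenerate vectors, e.g. P = [1, 0] recovers the matrix entries.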
Desiderata
What do you want from a static solution concept? (Next week, dynamic solution concepts will be covered).
 Sensible, intuitive, plausible
 Exists for wide class of games
 Unique for wide class of games
 Empirically useful
 Easy to compute
Unfortunately there is no solution with all these desirable properties. We'll see how close we can get.
DS, IEDS, and BI
Dominant Strategy (DS)
Recall the Hawk-Dove 2x2 matrix.
If V > C, then no matter what the opponent is doing, it is better to play the first pure strategy, Hawk (H). Thus H is a dominant strategy. The general definition is that D ∈ S_{i} is a dominant strategy if f(D, x) ≥ f(y, x) ∀ x ∈ S_{-i}, ∀ y ∈ S_{i}. D is a strictly dominant strategy if the inequality is strict: f(D, x) > f(y, x) ∀ x, ∀ y ≠ D.
Game theory is interesting because dominant strategies do not always exist. For example, in the Hawk-Dove game, there is no dominant strategy when C > V.
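A small sketch (not from the notes) makes the V versus C comparison concrete: the helper below scans a payoff matrix for a strictly dominant row strategy, using the standard Hawk-Dove payoffs.

```python
def strictly_dominant_row(A):
    """Return the index of a strictly dominant row strategy, or None.
    A[i][j] is the row player's payoff for pure strategies (i, j)."""
    m, n = len(A), len(A[0])
    for d in range(m):
        if all(A[d][j] > A[y][j]
               for y in range(m) if y != d
               for j in range(n)):
            return d
    return None

def hawk_dove(V, C):
    # Standard Hawk-Dove payoffs: H vs H gives (V-C)/2, H vs D gives V,
    # D vs H gives 0, D vs D gives V/2.
    return [[(V - C) / 2, V], [0, V / 2]]
```

With V = 4 > C = 2 the function finds Hawk (row 0) dominant; with V = 2 < C = 4 it finds no dominant strategy, matching the claim above.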
Iterated Elimination of Dominated Strategies (IEDS)
Consider the asymmetric game below, where the row player has two pure strategies and the column player has three. For the row player, DOWN is always better than UP (5 > 4 against Left, 5 > 2 against Center, 4 > 0 against Right), so UP is a dominated strategy: eventually the row player stops playing it, and the UP row gets eliminated. In the reduced game the column player compares payoffs 5 (Left), 1 (Center), and 2 (Right); Left dominates both alternatives, so Center and Right get eliminated in turn. So the IEDS solution is (DOWN, Left) with payoff (5,5).
         Left    Center  Right
UP       4,3     2,7     0,4
DOWN     5,5     5,1     4,2
The following general procedure is useful in solving games in normal form. Look for a pure strategy for either player that always (no matter what the other players do) gives her lower payoffs than some other strategy (pure or mixed) available to her. If you find such a dominated strategy, eliminate it from that player's strategy set to form a reduced game. Then do the same thing in the reduced game. Iterate until no dominated pure strategies remain. If only one strategy remains for each player, this strategy combination is the IEDS solution. If several players still have several undominated strategies, then at least you have reduced the game to the essential parts and can proceed with other solution concepts.
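The procedure above can be sketched in code (a rough illustration, not from the notes). To honor the "pure or mixed" clause, the dominance check below also tries mixtures of two other strategies over a coarse probability grid:

```python
def find_dominated(M, own, opp):
    """Return an own-strategy in `own` strictly dominated by some other
    strategy (pure, or a grid-sampled mixture of two others), where
    M[i][j] is the player's payoff from own strategy i against opp j."""
    grid = [k / 100 for k in range(101)]
    for z in own:
        others = [s for s in own if s != z]
        for a in others:
            for b in others:          # a == b covers pure-strategy domination
                for p in grid:
                    if all(p * M[a][j] + (1 - p) * M[b][j] > M[z][j]
                           for j in opp):
                        return z
    return None

def ieds(A, B):
    """Iterated elimination of strictly dominated strategies."""
    rows, cols = list(range(len(A))), list(range(len(A[0])))
    # Column player's payoffs from his own point of view (transposed).
    Bt = [[B[i][j] for i in range(len(B))] for j in range(len(B[0]))]
    while True:
        r = find_dominated(A, rows, cols)
        if r is not None:
            rows.remove(r)
            continue
        c = find_dominated(Bt, cols, rows)
        if c is not None:
            cols.remove(c)
            continue
        return rows, cols

A = [[4, 2, 0], [5, 5, 4]]   # row player's payoffs (UP, DOWN)
B = [[3, 7, 4], [5, 1, 2]]   # column player's payoffs (Left, Center, Right)
```

On the example game, ieds(A, B) reduces to row index 1 (DOWN) and column index 0 (Left), the IEDS solution found above. The grid search over mixtures is a crude stand-in for the linear program one would use in general.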
Backwards Induction (BI) for EFG
Starting from the payoffs (terminal nodes), identify the player choosing at the immediately previous node and eliminate all branches except those that give that player the highest payoff. Now that node is a terminal node in the reduced game. Working back towards the root a node at a time, we eventually solve the game (if there are no nontrivial information sets and no ties for the best payoff at any node). Look at Watson for details. BI and IEDS are actually the same thing only in different forms. Thus IEDS or BI works for games that are equivalent to extensive form games of perfect information. For more general games, the IEDS/BI solution does not exist, unfortunately.
Note: Backwards Induction is much more complicated with imperfect information, i.e., information sets. We'll discuss it a bit in a later lecture.
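The node-by-node procedure can be sketched for a perfect-information tree. This is an illustrative toy (the tree below is hypothetical, not an example from the lecture): a leaf is a payoff tuple, an internal node is a pair (mover, children).

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree by backwards induction.
    A leaf is a payoff tuple; an internal node is (player, [children])."""
    if not isinstance(node[1], list):   # leaf: plain payoff tuple
        return node
    player, children = node
    # The mover keeps only the branch whose solved payoff is best for her.
    return max((backward_induction(c) for c in children),
               key=lambda payoffs: payoffs[player])

# Hypothetical two-stage game: player 0 moves at the root, player 1 after.
tree = (0, [
    (1, [(2, 1), (0, 0)]),   # after the first branch, player 1 picks (2,1)
    (1, [(1, 2), (3, 0)]),   # after the second branch, player 1 picks (1,2)
])
```

Working from the leaves up, player 1 chooses (2,1) and (1,2) at her nodes, and player 0 then prefers the branch worth 2 to him, so the solution is (2,1). Ties or information sets would break this simple recursion, as the note above warns.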
Nash Equilibrium (NE)
x ∈ S_{i} is a best response to s_{-i} ∈ S_{-i} if f_{i}(x, s_{-i}) ≥ f_{i}(y, s_{-i}) ∀ y ∈ S_{i}. That means: given what everyone else is doing (s_{-i}), you do your best (x).

Definition: s* = (s*_{1}, s*_{2}, ..., s*_{n}) is a Nash Equilibrium (NE) if s*_{i} is a best response to s*_{-i} for all i.

Theorem (Nash, 1950): For any normal form game with a finite number of players (or player populations) each with a finite number of pure strategies, there exists a NE.
The NE is possibly in mixed strategies.
There may be more than one NE. Sometimes there are lots and lots.
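As a sketch (not from the notes), pure-strategy NE of a bimatrix game can be found by brute force: a cell is a NE exactly when each player's strategy is a best response to the other's. Applied to the 2x3 IEDS example above:

```python
def pure_nash(A, B):
    """All pure-strategy NE of a bimatrix game: cells where the row entry
    is maximal in its column of A and the column entry is maximal in its
    row of B (mutual best responses)."""
    m, n = len(A), len(A[0])
    return [(i, j) for i in range(m) for j in range(n)
            if A[i][j] == max(A[k][j] for k in range(m))
            and B[i][j] == max(B[i][l] for l in range(n))]

A = [[4, 2, 0], [5, 5, 4]]   # row payoffs from the IEDS example
B = [[3, 7, 4], [5, 1, 2]]   # column payoffs
```

Here pure_nash(A, B) returns only the cell (1, 0), i.e. (DOWN, Left), agreeing with the IEDS solution. Mixed-strategy NE require more machinery (e.g. support enumeration), which this sketch does not attempt.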
Evolutionary Stable Strategy (ESS)
Read Weibull Chapter 2. s* is an ESS of a symmetric game (with payoff function f) if, once established, it is not invadable by a small number of mutants.

Definition: The strategy s* is an ESS if ∀ x ∈ S, x ≠ s*, either:
 (a) f(s*, s*) > f(x, s*), that is, you get a higher payoff playing s* yourself rather than playing the mutant strategy x; or
 (b) f(s*, s*) = f(x, s*), but (tiebreaker) f(s*, x) > f(x, x).

Improved Definition: f(s*, ε.x + (1−ε).s*) > f(x, ε.x + (1−ε).s*), ∀ x ≠ s*, ∀ ε > 0 sufficiently small.
Note: a NE might be invadable; ESS is a stronger condition. Every ESS is a NE of the symmetric game, but not every NE is an ESS. The improved definition also works for nonlinear payoff functions, so it deals more accurately with nonlinearity in games.
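As a numerical sketch (not from the notes), the improved definition can be checked for the standard Hawk-Dove game with V = 2 < C = 4, where the usual candidate ESS is the mixed strategy playing Hawk with probability V/C:

```python
V, C = 2, 4
A = [[(V - C) / 2, V], [0, V / 2]]   # standard Hawk-Dove payoff matrix

def f(p, q):
    """Expected payoff of mixed strategy (p, 1-p) against (q, 1-q)."""
    P, Q = [p, 1 - p], [q, 1 - q]
    return sum(P[i] * A[i][j] * Q[j] for i in (0, 1) for j in (0, 1))

s_star, eps = V / C, 0.01            # candidate ESS and small mutant share
invadable = False
for k in range(21):                   # mutant strategies x on a coarse grid
    x = k / 20
    if x == s_star:
        continue
    mix = eps * x + (1 - eps) * s_star  # post-invasion population mixture
    if f(s_star, mix) <= f(x, mix):
        invadable = True
```

Every grid mutant earns strictly less than s* against the post-invasion mixture, so the check leaves invadable as False, consistent with V/C Hawk being the ESS. A grid of mutants and one fixed ε is of course only suggestive, not a proof.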