Game Theory (Part 1)
ADITYA KASHYAP
[email protected]
INTRODUCTION

Strategic behavior
◦ Strategic interdependence: your choices will impact other people’s choices, and vice versa
◦ Need to consider “he-thinks-I-think” scenarios (multiple iterations)
◦ Examples = oligopoly, poker, war, most of life
Strategic thinking:
Efficient markets hypothesis
Consider: everybody knows that the price of Reliance shares will double
next Tuesday…
Strategic thinking:
Random class quiz
We will have a quiz in class, but the quiz will be on a day that you don’t
expect…
Brief history of
Game Theory
Pre-history of game theory
◦ Letters from Waldegrave (1713) about a card game
◦ Games studied by Cournot (1838), Bertrand (1883) and Edgeworth (1925) in the context of
oligopolistic pricing
Post-Nash world
◦ Reinhard Selten (1965) proposed sub-game perfect equilibria
◦ John Harsanyi (1967-68) introduced techniques to solve static games of incomplete information where players are unsure about one another’s payoffs
Cooperative &
non-cooperative games
Cooperative games
◦ The focus of early game theory – optimal strategies for groups of individuals,
presuming that they can enforce agreements
Non-cooperative games
◦ Most common use of game theory is in situations where you have to make
your decision without cooperation/enforcement
◦ Your optimal choice may depend on forecasting opponent’s choice
◦ Does not mean you are working “against” the other player(s)
◦ This subject will only consider non-cooperative games
Practical uses of
game theory
Game theory allows us to make predictions about the outcome of strategic
situations
◦ When predictions are wrong, we learn about preferences (behavioural economics)
◦ Can also work backwards… create the rules of a game to get a preferred outcome (market design, institutional economics)
Different equilibriums
[Table: equilibrium concepts classified by complete vs incomplete information]
The prisoners’
dilemma actually
happened
Over 50 years ago, Perry Smith & Dick Hickock robbed and murdered a
family in Kansas for $50
◦ Caught in Las Vegas six weeks later
◦ Hard evidence for lesser crimes (parole violation & fraud)
◦ Weak evidence for the murders
◦ They were interrogated separately, not knowing what the other said
GAME (PD)  (rows = Player A, columns = Player B)
               Not confess    Confess
Not confess    (-10, -10)     (-50, -5)
Confess        (-5, -50)      (-30, -30)
Solution for
prisoners’ dilemma
Consider the options for Player A:
◦ If Player B “not confess”, then A can “not confess” (-10) or “confess” (-5)
◦ If Player B “confess”, then A can “not confess” (-50) or “confess” (-30)
◦ In both situations, Player A does better by choosing “confess” (a quick check of this is sketched below)
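As an illustration (not from the original slides), here is a minimal Python sketch that encodes the GAME (PD) payoffs above and checks that “confess” strictly dominates “not confess” for Player A; the dictionary representation and names are illustrative choices.

    # Sketch: GAME (PD) as a dict of (Player A payoff, Player B payoff),
    # then a check that "confess" beats "not confess" for A against every B strategy.
    PD = {
        ("not confess", "not confess"): (-10, -10),
        ("not confess", "confess"):     (-50, -5),
        ("confess",     "not confess"): (-5, -50),
        ("confess",     "confess"):     (-30, -30),
    }
    B_STRATEGIES = ["not confess", "confess"]

    def a_payoff(a, b):
        return PD[(a, b)][0]

    # Dominance for A: "confess" pays strictly more whatever B does
    dominates = all(a_payoff("confess", b) > a_payoff("not confess", b) for b in B_STRATEGIES)
    print(dominates)  # True: -5 > -10 and -30 > -50 (the game is symmetric, so the same holds for B)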
Dominant &
weakly dominant
Dominant strategy
◦ A strategy is dominant if the payoff for that strategy is always better than
alternative strategies, no matter what the other player does
GAME (Dominant)  (rows = Player A, columns = Player B)
        Good       Bad
Good    (10, 7)    (9, 2)
Bad     (3, 6)     (2, 1)
Easy games:
Dominant & dominated
The prisoners’ dilemma had a dominant strategy
◦ In all situations, “confess” gives a better result than “not confess”
Easy games:
Dominant & dominated
Solve by iterative removal of dominated strategies
◦ For player B, “centre” dominates “right” – so “right” is dominated
◦ Therefore, we can remove that column from the game
◦ We now have a (2 x 2) matrix
◦ Note: still no dominant strategy for player B

GAME (1)  (rows = Player A, columns = Player B)
        Left      Centre    Right
Up      (1, 0)    (1, 2)    (0, 1)
Down    (0, 3)    (0, 1)    (2, 0)
Easy games:
Dominant & dominated
Solve by iterative removal of dominated strategies
◦ Now just considering the smaller (2 x 2) matrix
◦ For player A, “up” dominates “down” (can also say “down” is dominated)
◦ Therefore, we can remove “down” from the game
◦ We now only have two options left = (up, left) or (up, centre)

GAME (1)  (rows = Player A, columns = Player B)
        Left      Centre    Right
Up      (1, 0)    (1, 2)    (0, 1)
Down    (0, 3)    (0, 1)    (2, 0)
Easy games:
Dominant & dominated
Solve by iterative removal of dominated strategies
◦ Two options remaining = (up, left) or (up, centre)
◦ For player B, “centre” dominates “left”
◦ Solution = (up, centre) with payoff (1, 2) – a sketch of this elimination procedure follows below
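A minimal Python sketch (not part of the slides) of the iterative-removal procedure just described, applied to GAME (1). It assumes the payoffs shown above; the function and variable names are illustrative.

    # Sketch: iterated elimination of strictly dominated strategies for GAME (1).
    # Player A picks rows, Player B picks columns.
    A_payoff = {("Up", "Left"): 1, ("Up", "Centre"): 1, ("Up", "Right"): 0,
                ("Down", "Left"): 0, ("Down", "Centre"): 0, ("Down", "Right"): 2}
    B_payoff = {("Up", "Left"): 0, ("Up", "Centre"): 2, ("Up", "Right"): 1,
                ("Down", "Left"): 3, ("Down", "Centre"): 1, ("Down", "Right"): 0}

    def eliminate(rows, cols):
        """Repeatedly remove strictly dominated rows (for A) and columns (for B)."""
        changed = True
        while changed:
            changed = False
            # remove a row r that is strictly dominated by some other row s
            for r in rows:
                if any(all(A_payoff[(s, c)] > A_payoff[(r, c)] for c in cols)
                       for s in rows if s != r):
                    rows = [x for x in rows if x != r]; changed = True; break
            # remove a column c that is strictly dominated by some other column d
            for c in cols:
                if any(all(B_payoff[(r, d)] > B_payoff[(r, c)] for r in rows)
                       for d in cols if d != c):
                    cols = [x for x in cols if x != c]; changed = True; break
        return rows, cols

    print(eliminate(["Up", "Down"], ["Left", "Centre", "Right"]))
    # (['Up'], ['Centre'])  -- matching the (up, centre) solution above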
Easy games:
Another example
Is there a dominant strategy?
◦ Below, player A prefers “up” sometimes and “down” sometimes
◦ Below, player B prefers “right”, “left” & “centre” in different scenarios
◦ No dominant strategy

GAME (2)  (rows = Player A, columns = Player B)
          Left       Centre    Right
Up        (4, 11)    (3, 6)    (5, 12)
Middle    (3, 4)     (2, 8)    (4, 6)
Down      –          –         –
Easy games:
Another example
Solve by iterative removal of dominated strategies
◦ For player A, “middle” is dominated by “up”; so “middle” is removed
◦ Still leaves no dominant strategy for player A or player B
◦ Next round…

GAME (2)  (rows = Player A, columns = Player B)
          Left       Centre    Right
Up        (4, 11)    (3, 6)    (5, 12)
Middle    (3, 4)     (2, 8)    (4, 6)
Down      –          –         –
Easy games:
Another example
Solve by iterative removal of dominated strategies
◦ For player B, “centre” is dominated; that column can be removed
◦ Leaves (2 x 2) matrix
◦ Next round…

GAME (2)  (rows = Player A, columns = Player B)
          Left       Centre    Right
Up        (4, 11)    (3, 6)    (5, 12)
Middle    (3, 4)     (2, 8)    (4, 6)
Down      –          –         –
Easy games:
Another example
Solve by iterative removal of dominated strategies
◦ For player A, “down” is dominated by “up”; so “down” can be removed
◦ Leaves only two options = (up, left) and (up, right)
◦ Next round…

GAME (2)  (rows = Player A, columns = Player B)
          Left       Centre    Right
Up        (4, 11)    (3, 6)    (5, 12)
Middle    (3, 4)     (2, 8)    (4, 6)
Down      –          –         –
Easy games:
Another example
Solve by iterative removal of dominated strategies
◦ Only two options remaining = (up, left) and (up, right)
◦ For Player B, “right” dominates “left”
◦ Solution = (up, right) with payoff (5, 12)
Too easy…
The above games had dominated strategies
◦ Relatively easy to solve with iterative elimination of dominated strategies
◦ However, in many instances there are no dominated strategies
◦ Can still find the “Nash equilibrium”
Problem:
no dominated strategy
In the below example there is no dominant or dominated strategy
◦ Player A prefers “up”, “middle” & “down” in different scenarios
◦ Player B prefers “left”, “centre” & “right” in different scenarios
◦ Cannot use iterative elimination to find equilibrium
◦ … but an equilibrium still exists

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
Nash equilibrium
A Nash equilibrium exists when all players pick the best strategy given the
other players’ strategies
◦ No incentive to unilaterally change strategy
◦ The “no regret” or “stable state” strategy
Nash equilibrium
w/o dominant
strategy
Game (3) has no dominant or dominated strategy
◦ Cannot use iterative elimination to find equilibrium
◦ Nash equilibrium is (down, right) with payoff (6, 6)
◦ From (down, right), player A has no incentive to change to up or middle
◦ From (down, right), player B has no incentive to change to left or
centre
GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
How to find
Nash equilibrium
The manual approach…
◦ Check every payoff individually to see whether either player has an incentive to
move. No incentive to move = Nash equilibrium.
◦ For small games (2 x 2) this is easy; for large games it is time consuming (a brute-force sketch of this check follows below)
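The brute-force check described above can be sketched in a few lines of Python (an illustration, not part of the course material): for every cell, test whether either player could gain by switching unilaterally.

    # Sketch: brute-force search for pure-strategy Nash equilibria.
    # game[(row, col)] = (Player A payoff, Player B payoff); A picks rows, B picks columns.
    def pure_nash(game):
        rows = sorted({r for r, _ in game})
        cols = sorted({c for _, c in game})
        equilibria = []
        for r in rows:
            for c in cols:
                a, b = game[(r, c)]
                # A has no strictly better row against c, and B no strictly better column against r
                a_best = all(game[(r2, c)][0] <= a for r2 in rows)
                b_best = all(game[(r, c2)][1] <= b for c2 in cols)
                if a_best and b_best:
                    equilibria.append((r, c))
        return equilibria

    GAME_3 = {("Up", "Left"): (0, 4),     ("Up", "Centre"): (4, 0),     ("Up", "Right"): (5, 3),
              ("Middle", "Left"): (4, 0), ("Middle", "Centre"): (0, 4), ("Middle", "Right"): (5, 3),
              ("Down", "Left"): (3, 5),   ("Down", "Centre"): (3, 5),   ("Down", "Right"): (6, 6)}

    print(pure_nash(GAME_3))  # [('Down', 'Right')]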
“If-then” approach to
find Nash equilibrium
For player A, what is the best response for each strategy by player B?
◦ If player B chooses “left”, player A should choose “middle”
◦ Underline the relevant payoff

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
find Nash equilibrium
For player A, what is the best response for each strategy by player B?
◦ If player B chooses “centre”, player A should choose “up”
◦ Underline the relevant payoff

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
find Nash equilibrium
For player A, what is the best response for each strategy by player B?
◦ If player B chooses “right”, player A should choose “down”
◦ Underline the relevant payoff

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
find Nash equilibrium
For player B, what is the best response for each strategy by player A?
◦ If player A chooses “up”, player B should choose “left”
◦ Underline the relevant payoff

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
find Nash equilibrium
For player B, what is the best response for each strategy by player A?
◦ If player A chooses “middle”, player B should choose “centre”
◦ Underline the relevant payoff

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
find Nash equilibrium
For player B, what is the best response for each strategy by player A?
◦ If player A chooses “down”, player B should choose “right”
◦ Underline the relevant payoff
◦ After all “if-then” answers, there is one solution that was chosen by both players: (down, right) with payoff (6, 6) = Nash equilibrium

GAME (3)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (0, 4)    (4, 0)    (5, 3)
Middle    (4, 0)    (0, 4)    (5, 3)
Down      (3, 5)    (3, 5)    (6, 6)
“If-then” approach to
Nash: another example
Game (4) = similar game with different payoffs
◦ Confirm that there is no dominant (or dominated) strategy
◦ For each player, find the best response for each strategy of the other
player, and underline that payoff
GAME (4)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (7, 2)    (1, 3)    (3, 4)
Middle    (4, 2)    (6, 4)    (4, 0)
Down      –         –         –
“If-then” approach to
Nash: another example
For player A, what is the best response for each strategy by player B?
◦ If player B chooses “left”, player A chooses “up”
◦ If player B chooses “centre”, player A chooses “middle”
◦ If player B chooses “right”, player A chooses “down”
◦ Underline the relevant payoffs

GAME (4)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (7, 2)    (1, 3)    (3, 4)
Middle    (4, 2)    (6, 4)    (4, 0)
Down      –         –         –
“If-then” approach to
Nash: another example
For player B, what is the best response for each strategy by player A?
◦ If player A chooses “up”, player B chooses “right”
◦ If player A chooses “middle”, player B chooses “centre”
◦ If player A chooses “down”, player B chooses “left”
◦ Underline the relevant payoffs
◦ Nash equilibrium = (middle, centre) with payoff = (6, 4)

GAME (4)  (rows = Player A, columns = Player B)
          Left      Centre    Right
Up        (7, 2)    (1, 3)    (3, 4)
Middle    (4, 2)    (6, 4)    (4, 0)
Down      –         –         –
Multiple equilibriums
Nash concept can generate multiple equilibriums
Multiple equilibriums
In this game there are two Nash equilibriums
◦ Either (Ace, Ace) or (King, King)
◦ Simple to show using manual approach or “if-then” approach
◦ There is actually a third equilibrium (mixed strategy = discussed later)
Pareto dominance
◦ An equilibrium Pareto-dominates another equilibrium if at least one player would be better off while no other player is worse off… here, (Ace, Ace) – checked in the sketch below

GAME (Cards)  (rows = Player A, columns = Player B)
        Ace       King
Ace     (2, 2)    (0, 0)
King    (0, 0)    (1, 1)
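A short Python sketch (illustrative only) that finds the two pure equilibria of GAME (Cards) and checks that (Ace, Ace) Pareto-dominates (King, King); the helper repeats the brute-force check sketched earlier so the snippet is self-contained.

    # Sketch: two pure equilibria in GAME (Cards), plus a Pareto-dominance check.
    CARDS = {("Ace", "Ace"): (2, 2), ("Ace", "King"): (0, 0),
             ("King", "Ace"): (0, 0), ("King", "King"): (1, 1)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    def pareto_dominates(p, q):
        """True if payoff profile p is at least as good for everyone and strictly better for someone."""
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

    print(sorted(pure_nash(CARDS)))                                          # (Ace, Ace) and (King, King)
    print(pareto_dominates(CARDS[("Ace", "Ace")], CARDS[("King", "King")]))  # True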
Non-existence of pure
equilibrium
Nash equilibrium concept might not find a “pure-strategy” equilibrium
◦ “Pure strategy” means players must choose only one strategy
◦ Will consider “mixed strategies” later
Non-existence of pure
equilibrium
In this game (Pennies), there is no “pure strategy” Nash equilibrium (see the sketch below)
◦ Simple to show using manual approach or “if-then” approach
◦ Consider starting at (heads, heads)… then player B would switch to “tails” resulting in (heads, tails)… then player A would switch to “tails” resulting in (tails, tails), etc = no stable strategy
◦ There is a mixed strategy Nash equilibrium = discussed later

GAME (Pennies)  (rows = Player A, columns = Player B)
         Heads      Tails
Heads    (1, -1)    (-1, 1)
Tails    (-1, 1)    (1, -1)
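Running the brute-force check on the Pennies matrix above returns no pure-strategy equilibrium, consistent with the cycling argument (the helper is repeated here so the sketch stands alone).

    # Sketch: matching pennies has no pure-strategy Nash equilibrium.
    PENNIES = {("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1),
               ("tails", "heads"): (-1, 1), ("tails", "tails"): (1, -1)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    print(pure_nash(PENNIES))  # [] -- no stable pure strategy pair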
Application 1:
Free riding & public goods
Since at least Hume (1740), political philosophers have known about
the “free rider” problem
◦ “Public goods” and “common resources” are non-excludable, so the incentive for each person is to over-use and/or under-pay
Application 1:
Free riding & public goods
The game (security) is the same as the prisoners’ dilemma
◦ Two players = {player A, player B} & two strategies = {buy, not buy}
◦ If both buy: benefit is $80, cost is $50, net benefit is $30 each
◦ If neither buys: no benefit or cost, net benefit is $0 each
◦ If one buys: the buyer’s benefit is $80, cost is $100, net benefit is -$20
◦ If one buys: the non-buyer’s benefit is $80, cost is $0, net benefit is $80
◦ Best outcome is (buy, buy) with payoff (30, 30)

GAME (Security)  (rows = Player A, columns = Player B)
           Buy          Not buy
Buy        (30, 30)     (-20, 80)
Not buy    (80, -20)    (0, 0)
Application 1:
Free riding & public goods
Consider the incentives
◦ If player B “buy”, then player A should “not buy” ($80 v $30)
◦ If player B “not buy”, then player A should “not buy” ($0 v -$20)
◦ Nash equilibrium is (not buy, not buy) – check manually or using “if-then” (see the sketch below)
◦ Conclusion… people will try to “free ride” on other people buying public goods, and therefore public goods will be under-supplied

GAME (Security)  (rows = Player A, columns = Player B)
           Buy          Not buy
Buy        (30, 30)     (-20, 80)
Not buy    (80, -20)    (0, 0)
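An illustrative Python check of the free-rider conclusion, using the Security payoffs above (the helper repeats the earlier brute-force sketch).

    # Sketch: the free-rider outcome in the Security game.
    SECURITY = {("buy", "buy"): (30, 30),     ("buy", "not buy"): (-20, 80),
                ("not buy", "buy"): (80, -20), ("not buy", "not buy"): (0, 0)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    print(pure_nash(SECURITY))  # [('not buy', 'not buy')]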
Application 1:
Free riding & public goods
Mixed evidence from reality & experiments
◦ Historically, some public goods have been provided privately
◦ When people are given public good “prisoner dilemma” scenarios in experiments, some
people chose to “buy” – altruism? Ignorance?
◦ More communication with (or concern for) other players = more people “buy”
Application 2:
Chicken game & Hawk/Dove
Chicken game
◦ Two players drive towards each other at high speed
◦ Two strategies {swerve, don’t swerve}
◦ If neither swerves, they crash and suffer injuries (-3, -3)
◦ If both swerve, they are safe but didn’t “win” (2, 2)
◦ If one swerves, the swerver loses (0) and the straight driver wins (3)

GAME (Chicken)  (rows = Player A, columns = Player B)
                Swerve    Don’t swerve
Swerve          (2, 2)    (0, 3)
Don’t swerve    (3, 0)    (-3, -3)
Application 2:
Chicken game & Hawk/Dove
Chicken game outcome
◦ Two Nash equilibriums = (swerve, don’t swerve) & (don’t swerve, swerve)
◦ The worst outcome is a crash; both players have an incentive to change
◦ Mutual swerving might give a good outcome (2, 2), but if you think the other person will swerve then you get a higher benefit by not swerving, and so you have an incentive to change
◦ Each player would prefer to be “don’t swerve” playing against “swerve”

GAME (Chicken)  (rows = Player A, columns = Player B)
                Swerve    Don’t swerve
Swerve          (2, 2)    (0, 3)
Don’t swerve    (3, 0)    (-3, -3)
Application 2:
Chicken game & Hawk/Dove
Hawk/Dove (HD) game
◦ A variant of the “chicken game” originally used in biology
◦ Two players {A, B} are competing for a resource & can be aggressive or passive, so there are two strategies {hawk, dove}
◦ If both are aggressive (hawks), they fight and suffer injuries (-5, -5)
◦ If both are passive (doves), they are safe and share the benefit (10, 10)
◦ If they differ (hawk, dove), the hawk wins (20) & the dove misses out (0)

GAME (HD)  (rows = Player A, columns = Player B)
        Hawk        Dove
Hawk    (-5, -5)    (20, 0)
Dove    (0, 20)     (10, 10)
Application 2:
Chicken game & Hawk/Dove
Hawk/Dove outcome = chicken game outcome
◦ Two Nash equilibriums = (hawk, dove) & (dove, hawk)
◦ Worst outcome is (hawk, hawk); both players have an incentive to change
◦ Cooperation (dove, dove) might give a good outcome, but if you think the other person will choose “dove” then you get a higher benefit by choosing “hawk”, and so you have an incentive to change
◦ Each player would prefer to be “hawk” playing against “dove” (see the sketch below)

GAME (HD)  (rows = Player A, columns = Player B)
        Hawk        Dove
Hawk    (-5, -5)    (20, 0)
Dove    (0, 20)     (10, 10)
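A quick illustrative check (not from the slides) that Hawk/Dove has exactly the two pure equilibria listed above, using the earlier brute-force helper.

    # Sketch: Hawk/Dove has two pure equilibria, one per "aggressor".
    HD = {("hawk", "hawk"): (-5, -5), ("hawk", "dove"): (20, 0),
          ("dove", "hawk"): (0, 20),  ("dove", "dove"): (10, 10)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    print(sorted(pure_nash(HD)))  # [('dove', 'hawk'), ('hawk', 'dove')]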
Application 2:
Hawk/Dove & Prisoners’ dilemma
Note: careful not to change the game
◦ For the “hawk/dove” game, the outcome from fighting (hawk, hawk) must be worse than the outcome from being a dove against a hawk
◦ If the fighting outcome is better than the outcome from being a dove against a hawk, then the “hawk/dove” game turns into a “prisoners’ dilemma”
◦ See below that if (hawk, hawk) has payoff (1, 1) then it becomes the sole Nash equilibrium in a prisoners’ dilemma (a quick check follows the matrix)

GAME (HD as PD)  (rows = Player A, columns = Player B)
        Hawk       Dove
Hawk    (1, 1)     (20, 0)
Dove    (0, 20)    (10, 10)
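The same check on the modified matrix (illustrative sketch only) shows that raising (hawk, hawk) to (1, 1) leaves mutual aggression as the sole pure equilibrium.

    # Sketch: Hawk/Dove with (hawk, hawk) = (1, 1) behaves like a prisoners' dilemma.
    HD_AS_PD = {("hawk", "hawk"): (1, 1),  ("hawk", "dove"): (20, 0),
                ("dove", "hawk"): (0, 20), ("dove", "dove"): (10, 10)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    print(pure_nash(HD_AS_PD))  # [('hawk', 'hawk')]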
Application 2:
Hawk/Dove & Brinkmanship
In “hawk/dove”, both players would prefer the other player to
choose “dove”
◦ Communication, reputation & credible threats become important
◦ If the other player is certain you will play “hawk” then their optimal strategy
is to play “dove”
◦ In experiments where one player was able to make credible threats (lock in
the “hawk” strategy) they gained an advantage
◦ More generally, if you can make your opponent think you are irrational, crazy
or suicidal, then they are more likely to play “dove”
Application 2:
Modified hawk/dove game
Modified Hawk/Dove (MHD) game
◦ Modified version of hawk/dove has the same payoffs for all strategies except (dove, dove), which now has a higher payoff for “pacifist” player A
◦ There is now a single Nash equilibrium (dove, hawk) with payoff (0, 20); this is an ideal situation for player B, but player A would prefer (dove, dove)
◦ The “pacifist problem”: player A can try to create a credible threat of choosing “hawk”, to scare player B into choosing “dove”, but everybody knows they are really a pacifist. Is the threat credible?

GAME (MHD): same matrix as GAME (HD), except the (dove, dove) cell gives player A a higher payoff
Application 2:
Modified hawk/dove game
Modified Hawk/Dove (MHD) game
◦ For the threat to be credible, there must be at least a chance of war. This could be achieved by ambiguity (or randomness) in one of the payoffs – for example below, the (dove, dove) payoff is ( ? , 10). This is an example of “incomplete information”, which we will discuss later
◦ The “peace loving warrior”: with ambiguity, player A can try to create a credible threat of choosing “hawk”, to scare player B into choosing “dove” (threaten war to try to prevent war). Is the threat credible now?

GAME (MHD)  (rows = Player A, columns = Player B)
        Hawk        Dove
Hawk    (-5, -5)    (20, 0)
Dove    (0, 20)     ( ? , 10)
Application 2:
Modified HD – Golden Balls
British game show “Golden Balls”
◦ Two players, who each have two strategies {steal, split}
◦ Similar to “hawk/dove” except the conflict option (steal, steal) is costless, so no incentive to change. Therefore, three (weak) Nash equilibriums (illustrated in the sketch below with hypothetical numbers).
◦ Cooperation (split, split) might give a good outcome, but if you think the other person will choose “split” then you get a higher benefit by choosing “steal”, and so you have an incentive to change

GAME (Balls): matrix with strategies {steal, split} for each player; (steal, steal) = (0, 0)
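An illustrative sketch with hypothetical numbers (the slide does not give the actual prize values): assuming a jackpot of 100 so that (split, split) = (50, 50), a lone stealer takes everything, and (steal, steal) = (0, 0), the weak-inequality check below finds the three (weak) equilibria mentioned above.

    # Sketch with HYPOTHETICAL payoffs: jackpot of 100 assumed for illustration.
    BALLS = {("steal", "steal"): (0, 0),   ("steal", "split"): (100, 0),
             ("split", "steal"): (0, 100), ("split", "split"): (50, 50)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    print(sorted(pure_nash(BALLS)))
    # [('split', 'steal'), ('steal', 'split'), ('steal', 'steal')] -- three weak equilibria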
Application 2:
Modified HD – Golden Balls
British game show “Golden Balls”
◦ Each player would prefer the other player to choose “split”
◦ An altruistic player might get benefit (+60) from sharing, but they face the problem that their opponent might not feel the same (pacifist problem)
◦ Players have two minutes to discuss what they should do… what would you do?
◦ https://www.youtube.com/watch?v=S0qjK3TWZE8
Application 2:
Brinkmanship summary
In “anti-coordination” games like hawk/dove, there are generally two Nash equilibriums
◦ Either (hawk, dove) or (dove, hawk)
◦ Both players prefer to be the “hawk” and the opponent to be the “dove”
◦ Players want to provide a credible threat (communication or reputation) that they will play “hawk”, to force the other player to choose “dove”
Hawk/dove – threats as a fighting strategy
◦ During the Cuban Missile Crisis, the US government took actions that indicated an
intention to invade Cuba (threat of “hawk”) to encourage the Russians to remove
their missiles from Cuba
◦ One argument in favour of the 2003 Iraq war was that it showed the US was a
dangerous and unpredictable “hawk”, therefore giving other nations an
incentive to play “dove” in future disagreements
◦ A “tough guy” can build a reputation for fighting, which forces other people to
the “dove” position, therefore winning future conflict without needing to fight
Application 3:
Coordination game (easy)
Recall the earlier game (Cards):
◦ Two players have two cards (King, Ace) and must choose one card
◦ If both play Ace, both receive $2… if both play King, both receive $1
◦ If they play different cards, both receive $0
◦ Two Nash equilibriums = (Ace, Ace) & (King, King)
◦ One solution (Ace, Ace) is Pareto dominant; higher payoff for both

GAME (Cards)  (rows = Player A, columns = Player B)
        Ace       King
Ace     (2, 2)    (0, 0)
King    (0, 0)    (1, 1)
Application 3:
Coordination games
Battle of the sexes (1950s version)
◦ Two players (A = wife, B = husband) choose where to go for the evening
◦ Two strategies {ballet, boxing}
◦ They want to be together, so opposite choices give no benefit (0, 0)
◦ If they both go to the ballet, then the payoff is (2, 1)
◦ If they both go to the boxing, then the payoff is (1, 2)

GAME (battle)  (rows = Player A: wife, columns = Player B: husband)
          Ballet    Boxing
Ballet    (2, 1)    (0, 0)
Boxing    (0, 0)    (1, 2)
Application 3:
Coordination games
Battle of the sexes outcome
◦ Two Nash equilibriums = (ballet, ballet) & (boxing, boxing)
◦ The worst outcome is picking opposite strategies; incentive to change
◦ Wife would prefer (ballet, ballet) and husband would prefer (boxing, boxing)
◦ But no Pareto dominant solution… so how to choose? (see the sketch below)

GAME (battle)  (rows = Player A: wife, columns = Player B: husband)
          Ballet    Boxing
Ballet    (2, 1)    (0, 0)
Boxing    (0, 0)    (1, 2)
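An illustrative check (not from the slides) that the battle of the sexes has the two equilibria above and that neither payoff pair Pareto-dominates the other.

    # Sketch: two pure equilibria, and neither Pareto-dominates the other.
    BATTLE = {("ballet", "ballet"): (2, 1), ("ballet", "boxing"): (0, 0),
              ("boxing", "ballet"): (0, 0), ("boxing", "boxing"): (1, 2)}

    def pure_nash(game):
        rows = {r for r, _ in game}; cols = {c for _, c in game}
        return [(r, c) for r in rows for c in cols
                if all(game[(r2, c)][0] <= game[(r, c)][0] for r2 in rows)
                and all(game[(r, c2)][1] <= game[(r, c)][1] for c2 in cols)]

    def pareto_dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

    print(sorted(pure_nash(BATTLE)))                                          # [('ballet', 'ballet'), ('boxing', 'boxing')]
    print(pareto_dominates((2, 1), (1, 2)), pareto_dominates((1, 2), (2, 1)))  # False False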
Application 3:
Coordination games
Two good equilibriums but no way to choose a priori
◦ This is the heart of coordination problems
◦ Pure coordination games: meeting a friend in NYC but you don’t know where, or two people trying to guess the same number for a shared prize, but don’t know which number
◦ Minority games: people want to go to a bar, but only if it’s not too crowded
◦ Anti-coordination games: chicken and hawk/dove as discussed above

No easy solution
◦ Can be solved in sequential games = discussed later
◦ There is also a mixed strategy = discussed later
◦ In static pure games = need a signal, heuristic, reputation, or focal point
◦ A focal point (Schelling point) is an assumed outcome in the absence of communication, such as friends in NYC guessing that they will meet at the same place they met last time
Application 4:
Stag hunt & trust games
Stag hunt
◦ Two players are hunting for food & have two strategies {stag, rabbit}
◦ Catching the stag requires two people – if both people hunt the stag then they both
get the biggest reward (4, 4)
◦ If one person hunts the stag while the other chases a rabbit, the stag-hunter gets
nothing, while the rabbit-chaser gets some benefit (0, 3)
◦ If both people chase the rabbit, they both get a small benefit (2, 2)
GAME (stag)  (rows = Player A, columns = Player B)
                Hunt stag    Chase rabbit
Hunt stag       (4, 4)       (0, 3)
Chase rabbit    (3, 0)       (2, 2)
Application 4:
Stag hunt & trust games
Stag hunt outcome
◦ Two Nash equilibriums = (stag, stag) & (rabbit, rabbit)
◦ The worst outcome for each player is hunting the stag when the other person is
chasing the rabbit; incentive to change
◦ Optimal outcome is (stag, stag), but it requires trust in the other person
◦ This is a type of coordination game, but there is a temptation to betray (choose “rabbit” when the other chooses “stag”) if players are risk averse & lack trust

GAME (stag)  (rows = Player A, columns = Player B)
                Hunt stag    Chase rabbit
Hunt stag       (4, 4)       (0, 3)
Chase rabbit    (3, 0)       (2, 2)
Application 4:
Stag hunt, risk & trust
Stag hunt & risk aversion
◦ Choosing “stag” gives possible outcomes of (4) or (0) – risky
◦ Choosing “rabbit” gives possible outcomes of (3) or (2) – low risk
◦ Without a view on the other player’s strategy, “rabbit” is the safer option
◦ We can say that (stag, stag) is “payoff dominant”, while (rabbit, rabbit) is “risk dominant” (a small numerical check is sketched below)

GAME (stag)  (rows = Player A, columns = Player B)
                Hunt stag    Chase rabbit
Hunt stag       (4, 4)       (0, 3)
Chase rabbit    (3, 0)       (2, 2)
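One simple way to see the risk point numerically (an illustrative sketch; the 50/50 belief about the other player is an assumption for illustration, not from the slides): under that belief, “rabbit” has the higher expected payoff.

    # Sketch: expected payoff of each choice if the other player is believed to
    # hunt the stag with probability p (p = 0.5 is an illustrative assumption).
    def expected_stag(p):    # payoff 4 if the other hunts, 0 if they chase the rabbit
        return 4 * p + 0 * (1 - p)

    def expected_rabbit(p):  # payoff 3 if the other hunts, 2 if they chase the rabbit
        return 3 * p + 2 * (1 - p)

    p = 0.5
    print(expected_stag(p), expected_rabbit(p))  # 2.0 2.5 -> "rabbit" is the safer bet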
Application 4:
Stag hunt, risk & trust
Stag hunt, social cooperation & safety
◦ (stag, stag) = “payoff dominant”; (rabbit, rabbit) = “risk dominant”
◦ Players need to choose whether they will “risk trusting other people in search of the biggest benefit” or “not trust and take the safe result”
◦ This is sometimes interpreted as a choice between “social cooperation” and “safety”. More trust → more cooperation → better outcome.
Application 4:
Stag hunt & prisoners’ dilemma
Note: careful not to change the game
◦ For the “stag hunt”, the outcome from betrayal (choosing “rabbit” when the other chooses “stag”) must be worse than the outcome from (stag, stag)
◦ If the outcome from betrayal is better than the outcome from (stag, stag) cooperation, then the “stag hunt” game turns into a “prisoners’ dilemma”
◦ See below that if the benefit from betrayal increases to (5), then (rabbit, rabbit) becomes the sole Nash equilibrium of the prisoners’ dilemma

GAME (stag as PD)  (rows = Player A, columns = Player B)
                Hunt stag    Chase rabbit
Hunt stag       (4, 4)       (0, 5)
Chase rabbit    (5, 0)       (2, 2)
Application 4:
Changing games on purpose
The prisoners’ dilemma equilibrium is sub-optimal
◦ Nash equilibrium is (confess, confess) with payoff (-30, -30), even though (not confess, not confess) would give both players a better payoff (-10, -10)

GAME (PD)  (rows = Player A, columns = Player B)
               Not confess    Confess
Not confess    (-10, -10)     (-50, -5)
Confess        (-5, -50)      (-30, -30)
Phew…
The above games show how we can find a Nash equilibrium in some
situations
◦ Prisoners’ dilemma; hawk & dove; cooperation; stag hunt
◦ Simple games = still relatively easy to solve
Mixed strategies
◦ Allow players to chose a mix of strategies instead of a “pure” strategy
◦ Existence of Nash equilibrium
Dynamic games
◦ Repeated games (finite & infinite)
◦ Sequential games & extensive form (game tree)
◦ Sequential games with imperfect information
Incomplete information
◦ Reasons for incomplete information (mixed strategies, genuine uncertainty, private information,
deception)
◦ Modeling incomplete information and imperfect information
◦ Bayesian updating & Bayesian equilibrium