
Replicator Dynamics




Presentation Transcript


  1. Replicator Dynamics

  2. Nash makes sense (arguably) if players are… • Uber-rational • Calculating

  3. Such as Auctions…

  4. Or Oligopolies…

  5. But why would game theory matter for our puzzles?

  6. Norms/rights/morality are not chosen; rather… We believe we have rights! We feel angry when someone uses a service but doesn’t pay

  7. But… From where do these feelings/beliefs come?

  8. In this lecture, we will introduce replicator dynamics. The replicator dynamic is a simple model of evolution and prestige-biased learning in games. Today, we will show that the replicator dynamic leads to Nash

  9. We consider a large population, N, of players. Each period, a player is randomly matched with another player, and they play a two-player game

  10. Each player is assigned a strategy. Players cannot choose their strategies

  11. We can think of this in a few ways, e.g.: • Players are “born with” their mother’s strategy (ignore sexual reproduction) • Players “imitate” others’ strategies

  12. Note: Rationality and consciousness don’t enter the picture.

  13. Suppose there are two strategies, A and B. We start with: Some number, n_A, of players assigned strategy A and some number, n_B, of players assigned strategy B

  14. We denote the proportion of the population playing strategy A as x_A, so: x_A = n_A / N (and likewise x_B = n_B / N)

  15. The state of the population is given by x = (x_A, x_B), where x_A ≥ 0, x_B ≥ 0, and x_A + x_B = 1

  16. Since players interact with another randomly chosen player in the population, a player’s EXPECTED payoff is determined by the payoff matrix and the proportion of each strategy in the population.

  17. For example, consider the coordination game:

           A       B
      A    a, a    b, c
      B    c, b    d, d

      And the following starting frequencies:

  18. The payoff for a player who is playing A is f_A = a·x_A + b·x_B. Since f_A depends on x_A and x_B, we write f_A(x)
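This computation is easy to check numerically. The payoff entries below (a = 2, b = 0, c = 0, d = 1) are illustrative values satisfying a > c and b < d, not numbers from the lecture:

```python
# Fitness of each strategy in a 2x2 game, given population frequencies.
# Payoff entries are illustrative, not taken from the slides.
a, b, c, d = 2, 0, 0, 1   # coordination game: a > c, b < d

x_A = 0.4                 # fraction of the population playing A
x_B = 1 - x_A

f_A = a * x_A + b * x_B   # expected payoff (fitness) of an A-player
f_B = c * x_A + d * x_B   # expected payoff (fitness) of a B-player

print(f_A, f_B)           # 0.8 0.6
```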

  19. We interpret payoffs as rate of reproduction (fitness).

  20. The average fitness, f̄, of the population is the weighted average of the two fitness values: f̄(x) = x_A·f_A(x) + x_B·f_B(x)

  21. How fast do x_A and x_B grow? Recall x_A = n_A / N. First, we need to know how fast n_A grows. Let N = n_A + n_B. Each individual playing A reproduces at a rate f_A, and there are n_A of them. So: dn_A/dt = n_A·f_A. Next we need to know how fast N grows. By the same logic: dN/dt = n_A·f_A + n_B·f_B = N·f̄. By the quotient rule, and with a little simplification: dx_A/dt = x_A·(f_A − f̄)

  22. This is the replicator equation: dx_A/dt = x_A·(f_A(x) − f̄(x)). Here x_A is the current frequency of strategy A, and (f_A − f̄) is its own fitness relative to the average
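A minimal simulation sketch of this equation, using forward-Euler steps and the same illustrative coordination payoffs (a = 2, b = 0, c = 0, d = 1). Starting above the interior steady state, the share of A-players climbs to 1:

```python
# Forward-Euler integration of the replicator equation
#   dx_A/dt = x_A * (f_A - f_bar)
# for a 2x2 coordination game with illustrative payoffs.
a, b, c, d = 2, 0, 0, 1

def step(x_A, dt=0.01):
    x_B = 1 - x_A
    f_A = a * x_A + b * x_B
    f_B = c * x_A + d * x_B
    f_bar = x_A * f_A + x_B * f_B      # average fitness
    return x_A + dt * x_A * (f_A - f_bar)

x = 0.4                                # start with 40% A-players
for _ in range(10_000):                # integrate to t = 100
    x = step(x)
print(round(x, 3))                     # 1.0 -- everyone ends up playing A
```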

  23. Growth rate of A: dx_A/dt = x_A·(f_A − f̄). The first factor, x_A, is the current frequency of strategy A — because that’s how many As can reproduce. The second factor, (f_A − f̄), is A’s own fitness relative to the average. This is our key property: more successful strategies grow faster

  24. If: x_A > 0 (the proportion of As is non-zero) and f_A > f̄ (the fitness of A is above average), then: dx_A/dt > 0: A will be increasing in the population

  25. We are interested in states where dx_A/dt = 0, since these are places where the proportions of A and B are stationary. We will call such points steady states.

  26. The steady states are such that x_A = 0, x_A = 1, or f_A(x) = f_B(x)

  27. Recall the payoffs of our (coordination) game:

           A       B
      A    a, a    b, c
      B    c, b    d, d

      with a > c and b < d

  28. “Asymptotically stable” steady states: steady states such that the dynamics point toward them

  29. What were the pure Nash equilibria of the coordination game?

  30.
           A       B
      A    a, a    b, c
      B    c, b    d, d

      Since a > c and d > b, the pure Nash equilibria are (A, A) and (B, B)

  31. [Phase line of the dynamics on the interval from x_A = 0 to x_A = 1]

  32. And the mixed strategy equilibrium is: x_A* = (d − b) / ((a − c) + (d − b))
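The interior steady state can be checked numerically: at x_A*, the two strategies must earn equal fitness. Payoff entries are again illustrative:

```python
# Mixed (interior) steady state of the coordination game:
# solve a*x + b*(1-x) = c*x + d*(1-x)  =>  x* = (d - b) / ((a - c) + (d - b))
a, b, c, d = 2, 0, 0, 1   # illustrative payoffs with a > c, b < d

x_star = (d - b) / ((a - c) + (d - b))
f_A = a * x_star + b * (1 - x_star)   # fitness of A at the steady state
f_B = c * x_star + d * (1 - x_star)   # fitness of B at the steady state

print(x_star, abs(f_A - f_B) < 1e-9)  # fitnesses are equal at x*
```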

  33. Replicator teaches us: We end up at Nash (…if we end up anywhere), AND not just any Nash (e.g., not the mixed Nash in the coordination game)

  34. Let’s generalize this to three strategies:

  35. Now… n_R is the number playing R, n_P is the number playing P, n_S is the number playing S

  36. Now… x_R = n_R / N is the proportion playing R, x_P = n_P / N is the proportion playing P, x_S = n_S / N is the proportion playing S

  37. The state of the population is x = (x_R, x_P, x_S), where x_R ≥ 0, x_P ≥ 0, x_S ≥ 0, and x_R + x_P + x_S = 1

  38. For example, consider the Rock-Paper-Scissors game:

           R    P    S
      R    0   -1    1
      P    1    0   -1
      S   -1    1    0

      with starting frequencies x_R, x_P, x_S:

  39. Fitness for a player playing R is: f_R = 0·x_R + (−1)·x_P + 1·x_S = x_S − x_P

  40. In general, fitness for players with strategy R is: f_R(x) = π(R, R)·x_R + π(R, P)·x_P + π(R, S)·x_S, where π(R, j) is R’s payoff against strategy j

  41. The average fitness, f̄, of the population is: f̄(x) = x_R·f_R(x) + x_P·f_P(x) + x_S·f_S(x)

  42. The replicator equation is still: dx_i/dt = x_i·(f_i(x) − f̄(x)), where x_i is the current frequency of strategy i and (f_i − f̄) is its own fitness relative to the average

  43. Notice the interior steady state (1/3, 1/3, 1/3) is not asymptotically stable: the dynamics cycle around it
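The cycling can be illustrated with a rough Euler sketch (step size, horizon, and starting point are arbitrary choices): the population’s distance from (1/3, 1/3, 1/3) stays bounded away from zero instead of shrinking.

```python
# Replicator dynamics for standard Rock-Paper-Scissors (win +1, lose -1).
M = [[ 0, -1,  1],   # payoffs of R against R, P, S
     [ 1,  0, -1],   # payoffs of P
     [-1,  1,  0]]   # payoffs of S

def step(x, dt=0.001):
    f = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
    f_bar = sum(x[i] * f[i] for i in range(3))
    return [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(3)]

x = [0.4, 0.3, 0.3]            # arbitrary starting frequencies
for _ in range(20_000):        # integrate to t = 20
    x = step(x)

# Distance from (1/3, 1/3, 1/3) has not shrunk to zero: the orbit cycles.
dist = sum((xi - 1 / 3) ** 2 for xi in x) ** 0.5
print([round(xi, 2) for xi in x], round(dist, 3))
```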

  44. Now change the payoffs so that a win pays 2:

           R    P    S
      R    0   -1    2
      P    2    0   -1
      S   -1    2    0

  45. Note that now the interior steady state (1/3, 1/3, 1/3) is asymptotically stable
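Repeating the same Euler sketch with the modified payoffs shows the difference: the trajectory now spirals into the interior steady state.

```python
# Replicator dynamics for the modified game (win +2, lose -1):
# the interior steady state (1/3, 1/3, 1/3) is now asymptotically stable.
M = [[ 0, -1,  2],
     [ 2,  0, -1],
     [-1,  2,  0]]

def step(x, dt=0.001):
    f = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
    f_bar = sum(x[i] * f[i] for i in range(3))
    return [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(3)]

x = [0.4, 0.3, 0.3]
for _ in range(200_000):       # integrate to t = 200
    x = step(x)
print([round(xi, 3) for xi in x])   # converges toward [0.333, 0.333, 0.333]
```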
