Game Theory: One Encoding of Many

Presentation Transcript

  1. Game Theory: One Encoding of Many It should be clear from the discussions of Chapter I that a theory of rational behavior – i.e., of the foundations of economics and of the main mechanisms of social organization – requires a thorough study of the “games of strategy”… in the process of this analysis it will be technically advantageous to rely on pictures and examples which are rather remote from the field of economics proper, and belong strictly to the field of games of the conventional variety. Thus the discussions which follow will be dominated by illustrations from Chess, “Matching Pennies”, Poker, Bridge, etc., and not from the structure of cartels, markets, oligopolies, etc. (Von Neumann and Morgenstern, pp. 46-47). • What does game theory require to encode a problem? • The problem must be represented by an extensive form – i.e., a tree • Explicit utility functions that assign values to terminal nodes (strategies) • The solution algorithm is backwards induction; Nash equilibrium and its refinements • Does this work for all games of interest?
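The encode-then-solve recipe above can be made concrete. Below is a minimal backwards-induction sketch over a hand-rolled extensive form; the tree, actions, and payoffs are illustrative only, not taken from the slides.

```python
def backward_induction(node):
    """Return (value, best_path) by backwards induction.

    A node is either a terminal payoff (a number, from player 1's view)
    or a tuple (player, {action: child}); player 1 maximizes, player 2 minimizes.
    """
    if isinstance(node, (int, float)):       # terminal node: a utility value
        return node, []
    player, children = node
    best = max if player == 1 else min
    value, action, path = best(
        (backward_induction(child)[0], a, backward_induction(child)[1])
        for a, child in children.items()
    )
    return value, [action] + path

# Player 1 chooses L or R, player 2 replies; leaves are player 1's payoffs.
tree = (1, {"L": (2, {"l": 3, "r": 1}), "R": (2, {"l": 2, "r": 4})})
```

Here player 1 prefers R because player 2's best reply to L (playing r for payoff 1) is worse for player 1 than the best reply to R (playing l for payoff 2).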

  2. Misguided Critiques of Game Theory • Where Green and Shapiro (and others) have gone wrong… • Conflation of “game theory” with “rational”, and of “boundedly rational” with human (and the ordering this implies) • Kahneman-and-Tversky-style experiments show that we lack the assumed computational power and knowledge • Attacks upon the nature of assumptions in mathematical modeling, and an implicit comparison to historiography / case studies • Yet we outperform “rational choice” players in almost all contexts • History is not the answer – a KKV sensibility on case selection and spanning the parameter space

  3. Detour 1: Machine Chess Does Deep Blue use artificial intelligence? The short answer is "no." Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. So Deep Blue's designers have gone "back to the future." Deep Blue relies more on computational power and a simpler search and evaluation function. (Deep Blue FAQ, http://www.research.ibm.com/deepblue/meet/html/d.3.3.html) • Argument so far: game theory does not deserve the name • Counter-argument: machine chess • Similarities: • Deep Blue uses an extensive form to encode the game • Deep Blue uses backwards induction (actually, alpha-beta pruning) to “solve” the game • Deep Blue looks for minimax strategies • Deep Blue can beat humans at a complex game
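Alpha-beta pruning is backwards induction with cutoffs: a branch is abandoned as soon as it provably cannot change the minimax value. A sketch over the same kind of nested-tuple tree as above (a toy illustration, not IBM's implementation):

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf):
    """Minimax value of a tree whose nodes are either terminal payoffs
    or (player, {action: child}); player 1 maximizes, player 2 minimizes."""
    if isinstance(node, (int, float)):
        return node
    player, children = node
    if player == 1:
        value = -math.inf
        for child in children.values():
            value = max(value, alphabeta(child, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this branch
                break
        return value
    value = math.inf
    for child in children.values():
        value = min(value, alphabeta(child, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cutoff: the maximizer avoids this branch
            break
    return value
```

The pruned search returns the same value as full backwards induction; it just skips subtrees whose bounds already exclude them.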

  4. Zermelo Was Wrong • Differences between Deep Blue and game theory • The extensive form is not complete – one cannot associate strategies with the chess payoffs {win, lose, draw} • Idiosyncratic utility function • Many terms – e.g., position, material, pawn structure • Term weights chosen by OLS and hillclimbing • The data set for the fit was the distance between the idiosyncratic utility's evaluations and the games of masters • Thus, terms + data were generated by human experts • Component games • Opening-move libraries used extensively – convention? • The middle game relied upon a different idiosyncratic utility function • Endgames solved by brute force • Tailored encoding – a chess-specific approach that does not generalize to other games (Go, Diplomacy, poker)
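The idiosyncratic utility described here is a weighted sum of hand-chosen terms, fit against expert judgments. The sketch below is a stand-in for that idea: the feature names are hypothetical, and the random-restart hillclimbing is a simple proxy for the OLS/hillclimbing step the slide mentions.

```python
import random

# Hypothetical evaluation terms, in the spirit of position/material/pawn structure.
FEATURES = ("material", "mobility", "pawn_structure", "king_safety")

def evaluate(position, weights):
    """Linear evaluation: a weighted sum of hand-chosen features."""
    return sum(weights[f] * position[f] for f in FEATURES)

def hill_climb(positions, targets, steps=2000, step_size=0.1, seed=0):
    """Tune weights to shrink the squared distance between the evaluation
    and expert judgments (`targets`) -- the human-generated data the slide
    points to."""
    rng = random.Random(seed)
    weights = {f: 0.0 for f in FEATURES}

    def loss(w):
        return sum((evaluate(p, w) - t) ** 2 for p, t in zip(positions, targets))

    best = loss(weights)
    for _ in range(steps):
        f = rng.choice(FEATURES)            # perturb one weight at a time
        trial = dict(weights)
        trial[f] += rng.uniform(-step_size, step_size)
        trial_loss = loss(trial)
        if trial_loss < best:               # keep only improving moves
            weights, best = trial, trial_loss
    return weights, best
```

The point of the sketch is the slide's point: both the terms and the fitting targets come from human experts, so the "autonomous" evaluation is human knowledge in disguise.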

  5. Detour 2: Why Do We Care About Models? • Colonel Blotto and Non-Dyadic War • Setup: Conflict Regions 1, 2, and 3; Alliance A has 10 units to assign; Alliance B has 10 units to assign • Rules: • Simultaneous moves – A and B each assign their 10 units across the three regions • Objective function: win a majority of the regions • Empirical problems: • Dyadic observations are likely not IID • This is not something you can simply correct for (empirically) without understanding the underlying model
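The Blotto setup is small enough to enumerate exactly. A sketch, with one assumption of mine: the payoff is +1 for winning a majority of regions, -1 for losing, 0 on balance.

```python
from itertools import combinations

def allocations(units=10, regions=3):
    """All ways to split `units` among `regions` (stars-and-bars enumeration)."""
    for dividers in combinations(range(units + regions - 1), regions - 1):
        prev, alloc = -1, []
        for d in dividers:
            alloc.append(d - prev - 1)
            prev = d
        alloc.append(units + regions - 2 - prev)
        yield tuple(alloc)

def payoff(a, b):
    """+1 if A wins a majority of the regions, -1 if B does, 0 otherwise."""
    wins = sum((x > y) - (x < y) for x, y in zip(a, b))
    return (wins > 0) - (wins < 0)
```

Even with 10 units and 3 regions each side has 66 pure strategies, so the full game matrix has 66 x 66 entries; this combinatorial blow-up is exactly why the dyadic shortcut is tempting and wrong.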

  6. Signorino Was Right Given the requirement for theoretical precision, how are we to specify and test strategic theories without doing so formally? …although the call for increased formalization of theories may be welcomed by many positivists, the importance of structures also seems to cut the other way. Consider the typical derivation and analysis of a positive theory. One major assumption generally held – indeed, held throughout this article – is that the structure of the model remains constant across all observations in the data… It does not seem unreasonable to suspect, however, that the true game structure changes over time and place. If even small changes in structure can make a large difference in likely outcomes, and if the true structure of the strategic interaction changes from observation to observation in our data, then what are we to make of any statistical results predicated on the assumption of a fixed game?” (Signorino, pp. 294-295). • A Negative Result I Really Like: • Structure matters – one cannot change payoffs, strategies, etc.; there are no equivalence classes of games • “Brittle” encoding – if one maps strategies to measures, one cannot make gross analogies • Game theory is precise, but structure makes it brittle: no analogies, given the ugly combinatorics of the strategy space in complex games

  7. Example from Security Studies: Fearon’s Brittle Game • Those Terrible Assumptions (borrowed from labor economics) • 1) War is costly? • For whom? See Goemans. • By what measure? See the macroeconomics-of-peace literature. • 2) War is dyadic? • 3) War is a single-shot game? • Garfinkel and Skaperdas show an iterated game • Equilibrium outcome: war in the first play • 4) A single, continuous resource? • Niou and Lacy; de Marchi and Goemans

  8. Example from Agent Based Modeling: the New IPD • The lack of specific encodings is not solely a problem for game theory. • Cederman: state formation (International Studies Quarterly, 1994) • Axelrod: dissemination of culture (Journal of Conflict Resolution, 1997) • Lustick: collective identity (Journal of Artificial Societies and Social Simulation, 1999) • How does one empirically test (almost) identical models that “solve” different problems? • All models share the following elements: • Adaptive agents situated on a lattice (derived from Ising models in physics) • Agents adapt to their neighbors • Different metrics for different results • i. Cederman and Axelrod use a Von Neumann neighborhood w/ range = 1 • ii. Lustick uses a Moore neighborhood w/ range = 1 (includes diagonals)
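The two neighborhood definitions differ only in the distance metric: Von Neumann counts orthogonal steps (Manhattan distance), Moore also admits diagonals (Chebyshev distance). A sketch on a wrap-around lattice; the toroidal boundary is my assumption, and the papers' boundary handling may differ.

```python
def von_neumann(x, y, size, r=1):
    """Orthogonal neighbors within Manhattan distance r on a size x size torus."""
    return {((x + dx) % size, (y + dy) % size)
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)
            if 0 < abs(dx) + abs(dy) <= r}

def moore(x, y, size, r=1):
    """Neighbors, diagonals included, within Chebyshev distance r on a torus."""
    return {((x + dx) % size, (y + dy) % size)
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)
            if (dx, dy) != (0, 0)}
```

With range = 1 a Von Neumann neighborhood has 4 cells and a Moore neighborhood has 8, so Lustick's agents simply see twice as many neighbors per step, one concrete sense in which "identical" models differ.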

  9. Example from Agent Based Modeling (cont.)

  10. An Alternative Methodology I • Constraint Satisfaction Problems and Combinatorial Game Theory (Berlekamp, Conway, and Guy) • These alternatives use game-specific encodings / solution concepts • The objective is not to find equilibrium play; rather, it is to match or beat human performance • Definitions: • Component game: a component of the game in Figure I is any proper, connected subgraph of G • Component (idiosyncratic) utility function: • strategies are not necessarily finite (even in a component game) • one needs to assign payoffs to actions taken in the component • Two problems: one needs to choose an interval for analyzing payoffs, and one needs to choose a function that represents “progress” in the component
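The component-game definition can be operationalized directly: enumerate the proper, connected subgraphs of G within a size band. A brute-force sketch over an adjacency-dict graph; the example graph in the test is a toy path, not Figure I.

```python
from itertools import combinations

def connected_components_of_size(graph, lo=2, hi=5):
    """Enumerate proper, connected subgraphs of `graph` (an adjacency dict)
    with between `lo` and `hi` vertices -- the candidate component games."""
    nodes = sorted(graph)
    found = []
    for k in range(lo, min(hi, len(nodes) - 1) + 1):   # proper: k < |V|
        for subset in combinations(nodes, k):
            s = set(subset)
            # Depth-first search restricted to the subset tests connectedness.
            frontier, seen = [subset[0]], {subset[0]}
            while frontier:
                v = frontier.pop()
                for w in graph[v]:
                    if w in s and w not in seen:
                        seen.add(w)
                        frontier.append(w)
            if seen == s:
                found.append(subset)
    return found
```

Brute force is fine at the 2-5 vertex scale the slides use; for larger components the combinatorics would force a smarter search, which is exactly the opening for the genetic algorithm on the next slide.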

  11. An Alternative Methodology II • Computational Modeling Meets Combinatorial Game Theory • The alliance game is encoded on a graph • A population of agents selects components (proper subgraphs with 2–5 vertices) • Agents assign idiosyncratic utility functions to their components • Components / idiosyncratic utilities that perform well are assessed empirically • A genetic algorithm is relied upon to search the space • The optimization boils down to two components • Jiggle: crossover • Jump: mutation
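"Jiggle" and "jump" can be made concrete with a toy genetic algorithm. Everything below is an illustrative stand-in: the bitstring encoding, truncation selection, and elitism are my choices, not the actual representation of components and utilities.

```python
import random

def evolve(fitness, length=12, pop_size=20, generations=50,
           crossover_rate=0.7, mutation_rate=0.02, seed=0):
    """Tiny genetic algorithm over bitstrings:
    'jiggle' = one-point crossover, 'jump' = bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                            # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2) # truncation selection
            child = list(a)
            if rng.random() < crossover_rate:         # jiggle: recombine parents
                cut = rng.randrange(1, length)
                child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # jump: random flips
                     for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Crossover exploits structure already present in good solutions (jiggle), while mutation injects novelty that selection alone cannot reach (jump); the balance between the two rates governs exploration versus exploitation.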

  12. Figure I: A Simple Diplomacy Game (a graph of territories for players I, II, III, and IV)

  13. Component 1: Friends and Enemies, Together Forever (the subgraph containing III and IV) • Is this an equilibrium? • What happens if the other side does not achieve the same equilibrium?

  14. Component 2: For Better or For Worse? (the subgraph containing III and IV) • How does one choose a component utility function? • Answer: computationally! The space is quite large…

  15. Component Utility Function 1 • Candidate 1: u1() = {-1, 0, +1}. This represents the delta in territories after the Fall move. It is plausible, insofar as gaining territory has to be seen as “good” and losing territory as “bad”. • Component game with u1 – Nash equilibria: [hold, hold]; [attack III’s home, hold]; [attack IV’s home, hold]; [attack IV’s home, attack open territory]; [attack IV’s home, ½ attack open territory and ½ attack IV’s home]

  16. Component Utility Function 2 • Candidate 2: u2() = {-1, 0, +1}, where -1 = your opponent has more territory than you do at turn end; 0 = equal territory; and +1 = you have more territory. Again, this is plausible, as it incorporates some notion of relative gains. • Component game with u2 – Nash equilibrium: [1/3 hold and 2/3 attack IV’s home, 2/3 hold and 1/3 attack IV’s home] • QUESTION: Which component utility function is “better”?
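A fully mixed equilibrium like the one on this slide comes from the standard indifference conditions for a 2x2 zero-sum game. The routine below is generic; because the slides do not give the payoff matrix of the component game under u2, the test uses matching pennies as a stand-in matrix.

```python
def mixed_equilibrium_2x2(A):
    """Fully mixed equilibrium of a 2x2 zero-sum game with row-player
    payoff matrix A. Returns (p, q): p = Pr(row plays action 0),
    q = Pr(column plays action 0). Assumes an interior equilibrium exists."""
    # Each player mixes so the opponent is indifferent between her two actions:
    #   row:    q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
    #   column: p*A[0][0] + (1-p)*A[1][0] == p*A[0][1] + (1-p)*A[1][1]
    denom = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    p = (A[1][1] - A[1][0]) / denom
    q = (A[1][1] - A[0][1]) / denom
    return p, q
```

Given the actual payoff matrix induced by u2, the same indifference algebra would reproduce (or refute) the [1/3, 2/3] mix the slide reports, which is one way to adjudicate which candidate utility is "better".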

  17. Component 3: Tough Choices… (the full graph of territories I, II, III, and IV)