
Machiavellian Intelligence and the Evolution of Cooperation


Presentation Transcript


  1. Machiavellian Intelligence and the Evolution of Cooperation Metternich, on hearing that Talleyrand had died: “What does he mean by that?”

  2. Collaborators… • Nicholas Allen, Psychologist, University of Melbourne; • James Hanley, Political Scientist, Workshop for Political Theory and Policy Analysis, Indiana University; • Jason Hartwig, Political Scientist, University of Oregon; • Tomonori Morikawa, Political Scientist, Center for International Education, Waseda University; • John Orbell, Political Scientist, University of Oregon.

  3. I will: • Show how cooperative dispositions can emerge and be sustained as a product of cognitive evolution on Machiavellian intelligence—independent of kin selection, reciprocity, and group selection; • Work within the PD (Prisoner’s Dilemma) paradigm, but with players having alternative ways of “making a living” beyond such games.

  4. Questions that cognitively “well designed” social animals must be able to answer rapidly and accurately in an “ecology of diverse social games”… • 1. What game is being played—or offered—in this encounter? • 2. What are the stakes in the game? • 3. What resources and intentions do others bring to the game? • 4. What resources and intentions do I bring to the game? • 5. What payoff can I expect from playing this game with this partner? • 6. What are the alternatives to playing this game with this partner?

  5. In short, as Schmitt & Grammer (1997) put it: “Costs and benefits have to be assessed, risks and chances inferred, both in the case of failure and success, both for ego and others. The other’s mind has to be read, his/her knowledge, intentions and behavioral capabilities have to be considered. Finally, the most promising tactic has to be selected from a host of available procedures.”

  6. This predicts… • 1. (Presumably) modular capacities for “Mindreading” and “Manipulation,” where… • “...the difference is that mind-reading involves exploiting the victim’s behavior as it spontaneously emerges from the victim, while manipulation involves actively changing the victim’s behavior.” (Krebs and Dawkins 1999) • 2. Mechanisms for calculating the most adaptive choice to make—between and within games.

  7. Basic structure of the simulation… • 1. Some number of individuals encounter each other… • 2. When they do, they must decide between… • Playing a potentially cooperative (but risky) PD game… • Choosing some other way of making a living (the “No Play,” or NP, alternative)… • 3. Each individual chooses between PD and NP based on an expected value (EV) calculation. Accumulated wealth determines whether they reproduce at the end of a generation, or suffer “genotypic death.” • 4. Agents have a Probability of Cooperation (PC)—the probability with which they cooperate in a given PD game.
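
A minimal sketch of this structure, for concreteness. The class, the method names, and the payoff and NP values here are illustrative; the slides do not give the actual implementation (the payoff parameters used in the runs appear on slide 15, and an NP = 4 payoff is mentioned on slide 17).

```python
import random

# Illustrative values only, for this sketch.
PAYOFFS = {"CC": 5, "CD": -15, "DC": 15, "DD": -5}  # focal player's payoffs
NP_PAYOFF = 4.0  # value of the "No Play" alternative


class Agent:
    def __init__(self, pc):
        self.pc = pc       # Probability of Cooperation in a PD game
        self.wealth = 0.0  # accumulated wealth; decides reproduction vs. "genotypic death"

    def expected_pd_value(self, partner_pc_estimate):
        """EV of a PD against a partner believed to cooperate with this probability."""
        p, q = self.pc, partner_pc_estimate
        return (p * q * PAYOFFS["CC"] + p * (1 - q) * PAYOFFS["CD"]
                + (1 - p) * q * PAYOFFS["DC"] + (1 - p) * (1 - q) * PAYOFFS["DD"])

    def choose_game(self, partner_pc_estimate):
        """Accept the PD only if its EV beats the No-Play alternative."""
        return "PD" if self.expected_pd_value(partner_pc_estimate) > NP_PAYOFF else "NP"

    def act_in_pd(self):
        return "C" if random.random() < self.pc else "D"
```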

  8. BUT information about others’ intentions is needed… and “One of the most important things to realize about systems of animal communication is that they are not systems for the dissemination of truth. An animal selected to signal to another animal may be selected to convey correct information, misinformation, or both.” Robert Trivers (1985) • 5. …agents can lie (“manipulate”) with some success, but can also penetrate lies (“mindread”), meaning that they are equipped with appropriate “cognitive capacities”… • …that are transmitted from parent to offspring, subject to mutation…

  9. 6. Mutation happens… • …with some magnitude and frequency, on the interval-level variables from which the probability variables are constructed… • For example: • Mindread plus = positive capacity to read others’ intentions—an integer above zero… • Mindread minus = negative capacity to read others’ intentions—an integer above zero… • And the proportion Mindread = Mindread plus / (Mindread plus + Mindread minus). • 7. There is a “carrying capacity” constraint, with those whose accumulated wealth places them below that constraint dying without reproducing…
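
A sketch of how such mutation could be written. The mutation rate and step size are hypothetical; the slide says only that mutation acts with some magnitude and frequency on the interval-level variables, from which the probability-level variable is then rebuilt.

```python
import random

MUTATION_RATE = 0.05  # hypothetical per-variable frequency of mutation
MUTATION_STEP = 3     # hypothetical maximum magnitude of a single mutation


def mutate(value, rate=MUTATION_RATE, step=MUTATION_STEP):
    """Mutate one interval-level variable, keeping it an integer above zero."""
    if random.random() < rate:
        value += random.randint(-step, step)
    return max(1, value)


def inherit_mindreading(parent_plus, parent_minus):
    """Offspring inherit the interval-level counts (subject to mutation);
    the probability-level 'Mindread' variable is rebuilt from them."""
    plus = mutate(parent_plus)
    minus = mutate(parent_minus)
    return plus, minus, plus / (plus + minus)
```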

  10. 8. Agents each play the role of “sender” and “receiver” of messages about their “cooperative dispositions.” We assume: • Agents always convey the message “I will always cooperate”—viz, “my PC is 1.0.” • 9. Their actual PC will (normally) fall short of that. Thus: • Agents send 100 “bits” of “I will always cooperate” messages, some of which are (normally) false. • And… • Their “true” and “false” bits will vary in the “believability” that the agent can muster… • …with “believability” defined as a probability between 0.0 and 1.0…
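
One way the sending step could look. The Gaussian draw around an agent-specific mean believability for truths and for lies, and its spread, are assumptions made for this sketch; the slide fixes only that there are 100 bits and that believability is a probability between 0.0 and 1.0.

```python
import random

N_BITS = 100  # each sender emits 100 "I will always cooperate" bits


def send_messages(pc, truth_believability, lie_believability, spread=0.1):
    """Return a list of (is_true, believability) message bits.

    Roughly pc * N_BITS of the bits are true and the rest are lies; each
    bit's believability is drawn around the sender's mean believability for
    truths or for lies and clamped to [0.0, 1.0].
    """
    bits = []
    for _ in range(N_BITS):
        is_true = random.random() < pc
        mean = truth_believability if is_true else lie_believability
        believability = min(1.0, max(0.0, random.gauss(mean, spread)))
        bits.append((is_true, believability))
    return bits
```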

  11. [Figure: a sender’s message “bits” arrayed along a believability axis from 0.0 to 1.0, marking the mean believability of the sender’s truths, the mean believability of the sender’s lies, and the regions of messages not believed vs. messages believed.] • Example: • Sender has modest PC, truths somewhat more believable than lies; • Receiver has quite high level of Mistrust; • Before Mindreading, neither true nor false messages believed.

  12. NOW ADDING MINDREADING OF SENDERS… …which makes senders’ false messages less believable for receivers, and their true messages more believable. Mindreading is a proportion between 0.0 and 1.0—with 0.0 accepting messages (true and false) as sent, and 1.0 recognizing and accepting all true messages, and recognizing and rejecting all false messages…
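
A sketch of that adjustment. Linear interpolation between the two stated endpoints (0.0: take the bit as sent; 1.0: full recognition of truths and lies) is an assumption; the slide specifies only the endpoints.

```python
def apply_mindreading(is_true, believability, mindreading):
    """Adjust one bit's believability for the receiver's mindreading.

    With mindreading = 0.0 the bit is taken as sent; with mindreading = 1.0
    every true bit becomes fully believable and every lie fully unbelievable.
    """
    target = 1.0 if is_true else 0.0
    return believability + mindreading * (target - believability)
```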

  13. [Figure: the same believability axis from 0.0 to 1.0, now marking the mean believability of lies after mindreading and the mean believability of truths after mindreading.] • …after mindreading when (e.g.) the receiver has .7 mindreading capacity: • Now a slight majority of true “bits” are above receiver’s mistrust threshold, thus are believed by receiver; • Those few accepted “bits” now the basis for receiver’s EV calculation…
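
A sketch of the receiving side, combining the mindreading adjustment with the Mistrust threshold. Using the accepted share of all 100 bits as the receiver’s estimate of the sender’s PC, which then feeds the EV calculation, is my assumption; the slide says only that the accepted bits are the basis of that calculation.

```python
def estimate_partner_pc(bits, mistrust, mindreading):
    """Filter a sender's (is_true, believability) bits through mindreading
    and the receiver's Mistrust threshold; return the accepted share as a
    stand-in estimate of the sender's PC."""
    accepted = 0
    for is_true, believability in bits:
        target = 1.0 if is_true else 0.0
        adjusted = believability + mindreading * (target - believability)
        if adjusted > mistrust:
            accepted += 1
    return accepted / len(bits)
```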

  14. [Figure: simulation outcomes plotted against the value of the NP alternative, with points s, d-d, zero, c-c, and t marked on the axis (apparently the sucker’s, mutual-defection, zero, mutual-cooperation, and free-riding payoffs). Labeled regions: “Low PC dominates, society dies”; “‘Spikes’ to cooperative equilibrium frequent”; “Higher PC at equilibrium”; “More unstable ‘spikes’”; “No Play dominates.”]

  15. DEVELOPING THIS: We ran 90 simulations, using the parameters: Free riding = 15; Mutual cooperation = 5; Mutual defection = -5; Sucker’s payoff = -15; with ten simulations run at each .5 interval of ALT between 0 and 5 (the mutual-cooperation payoff).
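
The same setup laid out as a configuration sketch. Reading “between 0 and 5” as excluding the endpoints is consistent with 90 total runs and the nine ALT rows of Table 1 below.

```python
# Payoff parameters for the 90 runs, plus the ALT sweep.
FREE_RIDING = 15    # temptation payoff (defect against a cooperator)
MUTUAL_COOP = 5
MUTUAL_DEFECT = -5
SUCKER = -15

ALT_VALUES = [round(0.5 * k, 1) for k in range(1, 10)]  # 0.5, 1.0, ..., 4.5
RUNS_PER_ALT = 10                                       # 9 values x 10 runs = 90 simulations
```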

  16. Table 1. Cooperative transitions within the parameter range 0 < ALT < c (Mutual coop = 5; 100 runs of the simulation, ten at each .5 value of ALT; run to the 20,000th generation).

  ALT value | Cooperative transitions | Mean generation where transition starts | Predicted PC threshold* | Observed PC threshold
  4.5 | 9/10 | 6535 | 0.975 | 0.981
  4.0 | 10/10 | 5923 | 0.950 | 0.965
  3.5 | 10/10 | 5983 | 0.925 | 0.940
  3.0 | 9/10 | 5924 | 0.900 | 0.926
  2.5 | 9/10 | 6046 | 0.875 | 0.898
  2.0 | 10/10 | 4474 | 0.850 | 0.897
  1.5 | 8/10 | 3398 | 0.825 | 0.858
  1.0 | 8/10 | 4040 | 0.800 | 0.833
  0.5 | 8/10 | 5566 | 0.775 | 0.815

  *Where the EV of a PD equals exactly ALT.
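
The predicted thresholds are consistent with reading the footnote as the partner PC at which a cooperator’s expected payoff from the PD, PC x 5 + (1 - PC) x (-15), equals ALT, giving PC = (ALT + 15)/20. That reading is my inference from the numbers, not stated on the slide; a quick check:

```python
MUTUAL_COOP, SUCKER = 5, -15


def predicted_pc_threshold(alt):
    """PC at which a cooperator's EV from the PD, PC*5 + (1-PC)*(-15), equals ALT."""
    return (alt - SUCKER) / (MUTUAL_COOP - SUCKER)


for alt in [4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5]:
    print(alt, predicted_pc_threshold(alt))
# 4.5 -> 0.975, 4.0 -> 0.950, ..., 0.5 -> 0.775, matching Table 1's predicted column.
```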

  17. In this case… • One {high PC + high mindreading} genealogy drifted to a PC of 1.0, its successive members reproducing by rejecting others’ offers of PD play—and “living off” the solitary NP = 4 payoff. • For the previous 20 generations, members were offered PD play on 77% of encounters—viz, when mindreading “pushed” the 100 true messages sent by members above receivers’ Mistrust thresholds. • In general, offers of PD play increased with an agent’s PC… • BUT…

  18. Table 2. Multiple regression beta weights: proportion of offered PD games accepted—by mistrust, mindreading, and PC (100 generations prior to the cooperative transition).

  Coefficients
  Intercept                   1.2216
  Mistrust                   -0.6821*
  Mindreading                -0.9414*
  Probability of cooperating -0.0641*

  R² = .557; * p < .001; N = 4978; observations are on 50 agents in each of the 100 generations; missing cases had zero offers.

  19. …all of those “approaches” were rejected by the member’s own high Mindreading—which pushed the many lies (average PC = .25) below their own Mistrust thresholds. • But these two came into contact with each other, recognized each other as a good bet for a PD (EV > ALT), and played with each other; • Within fifteen generations, their descendants had eliminated the less cooperative agents… • …and were now competing with each other for “slots” in the ecology… producing… • Downward pressure on mistrust; by generation 5350, mean PC was .33, compared with the .5 of the two founding parents…

  20. Generally, the higher MUTUAL COOPERATION is relative to ALTERNATIVE, the lower the equilibrium level of cooperation after the TRANSITION… WHY? The “flypaper theorem”: • The OPTIMUM LEVEL OF COOPERATION (given an alternative way of making a living) will be… • HIGH ENOUGH TO ATTRACT COMPETENT MINDREADERS INTO PLAYING PD GAMES… • BUT • LOW ENOUGH TO MAXIMIZE GAINS FROM DEFECTION IN JOINED PD GAMES…

  21. Conclusions (1)… • The availability of an “alternative way of making a living”—within appropriate parameters—makes it possible for “mutant cooperators” to avoid being exploited in a nasty world… • But for them to stay alive, they must also have a fairly high capacity for Mindreading, which originated in the danger posed by low-PC agents’ greater willingness to offer PD games… • And mindreading protects the cooperative equilibrium from invasion by low-PC types.

  22. Conclusions (2)… In the ancestral past, cooperative dispositions evolved to their highest levels when the payoff from mutual cooperation was closest to the alternative, but did not exceed it…

  23. Conclusions (3) – in particular, for Political Scientists who argue about “rationality”: • We can distinguish between: • Rationality in action, and • Rationality in design. • This analysis suggests that a “rational design” for highly social animals such as ourselves involves (1) a well-developed capacity for mindreading; (2) modest levels of mistrust; and (3) quite high cooperative dispositions. • NOTE that rationality in design thus resolves many of the empirical “anomalies” of rationality in action.
