
The Menu for Choice


Presentation Transcript


  1. The Menu for Choice: How do States Make Decisions?

  2. I. Descriptive Realism and its Assumptions • RISK • Lord Palmerston: “His Majesty’s Government has no permanent friends, only permanent interests.” • Winston Churchill: “If Hitler invaded hell, I would make at least a favorable reference to the devil in the House of Commons.”

  3. A. States are the relevant actors in world politics • Evidence: Most wars are fought by one or more states • Counter-evidence: Trade is not state-to-state (but reduces interstate conflict), >50% of wars involve non-state actors, and IGO membership reduces interstate conflict

  4. B. States behave “as if” unitary, rational actors • Evidence: Even liberal states often practice “power politics” – e.g., US intervention in Latin America, British colonialism, Chinese hegemony in Vietnam, etc. • Counter-evidence: the “democratic peace”

  5. C. States actually pursue the “national interest” • Evidence: Rational calculation appears to occur (Lord Palmerston quote) • Counter-evidence: Leaders matter, need for foreign policy advice, voluntary losses of sovereignty (EU, Czechoslovakia)

  6. II. Prescriptive Realism • A. Seek a “Balance of Power” • Logic: Since it’s a dog-eat-dog world, bigger states are expected to conquer smaller ones (unless the small one gets allies) • Problem: Strong evidence suggests that imbalances of power (disparity) are less war-prone than balances of power (parity)!

  7. B. Si vis pacem, para bellum (If you want peace, prepare for war) • Logic: Deterrence theory holds that the stronger you are, the less likely people are to attack you. • Problem: Power politics increases war risk: signing an outside alliance, building up arms (whether mutual or unilateral). States that prepare for war tend to fight a lot.

  8. C. “War Is the Health of the State” • Logic: War unites nations, expands the successful ones, and generates growth • Problem: War is sub-optimal • a. Bargaining without war: Side A and Side B are arguing over something. Expressing each side’s share as a proportion, A gets x of the disputed resources or territory and B gets 1-x, so A’s share plus B’s share = 1 (i.e., 100%). This is called Pareto Optimality (nothing is left on the table).

  9. b. Compare to War • Each side has a chance of winning and losing. One side’s chance of winning is the other side’s chance of losing. • Winner gets everything (100% of disputed resources), loser gets nothing (0%) • Both sides suffer costs (economic, social, military, etc.)

  10. The Math • Represent A’s probability of winning as p. Then B’s probability of winning is 1-p. • A’s payoff for war = p*1 + (1-p)*0 – CostsA • Simplify: p – CostsA • B’s payoff for war = (1-p)*1 + p*0 – CostsB • Simplify: 1 – p – CostsB • The total return on war is (p – CostsA) + (1 – p – CostsB) • = p – CostsA + 1 – p – CostsB • = 1 – CostsA – CostsB • Since bargaining gives a total return of 1, and 1 > 1 – CostsA – CostsB, war is inefficient. Not Pareto Optimal.
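
As a check on the algebra above, here is a minimal Python sketch of the war-versus-bargaining comparison; the particular values of p, CostsA, and CostsB are illustrative assumptions, not figures from the slides.

```python
# Sketch of the war-vs-bargaining payoffs above; p and the cost terms are
# illustrative assumptions.

def war_payoffs(p, cost_a, cost_b):
    """Expected war payoffs: winner takes everything, both sides pay costs."""
    payoff_a = p * 1 + (1 - p) * 0 - cost_a      # simplifies to p - cost_a
    payoff_b = (1 - p) * 1 + p * 0 - cost_b      # simplifies to 1 - p - cost_b
    return payoff_a, payoff_b

p, cost_a, cost_b = 0.6, 0.10, 0.15
a, b = war_payoffs(p, cost_a, cost_b)
print(f"War:     A = {a:.2f}, B = {b:.2f}, total = {a + b:.2f}")   # 0.50, 0.25, 0.75
print("Bargain: any split x, 1 - x totals 1.00, so both sides can do better")
# e.g. the deal x = 0.6 gives A 0.60 > 0.50 and B 0.40 > 0.25 -- both prefer it to war.
```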

  11. D. Don’t be a Sucker 1. Logic • Prisoners’ Dilemma: Used to model “Security Dilemmas” -- Efforts to increase one’s own security make others less secure (arms races, etc.) • Both players end up worse off, even though each plays rationally!
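
A minimal sketch of the one-shot Prisoners’ Dilemma logic follows; the payoff numbers are the usual textbook values, an assumption rather than something given on the slide.

```python
# One-shot Prisoners' Dilemma; payoffs are (row player, column player),
# C = cooperate, D = defect. The numbers are standard textbook values.
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def best_reply(opponent_move):
    """Defecting pays more no matter what the opponent does -- D is dominant."""
    return max(("C", "D"), key=lambda move: PAYOFFS[(move, opponent_move)][0])

print(best_reply("C"), best_reply("D"))               # D D -> both defect
print(PAYOFFS[("D", "D")], "<", PAYOFFS[("C", "C")])  # (1, 1) < (3, 3): both worse off
```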

  12. 2. Problems • Repeated play: Axelrod’s “tournament” establishes that “always defect” is suboptimal! Superior: tit-for-tat (TFT) • Not all games are PD; some have cooperative outcomes • EARTH simulation: Establishes that the best alliance strategy is: never initiate war, never ally with the initiator, always ally with the target. “Collective security states” do best!
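
Below is a toy iterated version in the spirit of Axelrod’s tournament, reusing the payoff numbers from the sketch above; it is only an illustration of why “always defect” does poorly against itself, not a reconstruction of Axelrod’s actual simulation.

```python
# Toy iterated Prisoners' Dilemma (a sketch, not Axelrod's actual code).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (600, 600): sustained cooperation
print(play(always_defect, always_defect))   # (200, 200): mutual defection pays far less
```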

  13. 3. Summary • No matter what the outcome of a war is, the two sides could always have found some agreement that BOTH would have preferred to war – IF both of them agreed on how the war was likely to turn out. • Example: Both sides in a war would ALWAYS be better off by simply adopting the war’s outcome (without the costs of the actual fighting) as a pre-war bargain. • So why do people fight?

  14. III. Arrow’s Theorem against the National Interest • A. Focus: How to aggregate individual interests into a social or national interest • B. Setting and question • Three or more citizens • Three or more outcomes or objectives they must rank. Example: economic growth, human rights, and military security. • Is there a reasonable way for society as a whole to rank the outcomes? The method could be anything – voting, polling, mind-reading, etc. Is there any system at all that would be reasonable?

  15. C. Notation • Choices or outcomes are indicated by capital letters: A, B, C, etc. • Preferences are indicated by the letters p, i, or r: • Strong preference: If someone prefers one option to another, we write A p B • Indifference: If someone thinks A and B are about equal, we write A i B • Weak preference: If A p B or A i B, then A r B. So A r B means “A is at least as good as B”

  16. 2. A minimal definition of rationality • Preferences are connected: Given any pair of options, someone can relate them with p, i, or r. • Preferences are transitive: If A r B and B r C then A r C.
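
These two conditions can be stated mechanically; the Python sketch below checks connectedness and transitivity of the weak relation r induced by an arbitrary illustrative ranking (the ranking itself is an assumption, not from the slides).

```python
# Check that a strict ranking induces a connected, transitive weak relation r.
from itertools import permutations

ranking = ["A", "B", "C"]   # illustrative: A p B p C

def r(x, y):
    """A r B: x is at least as good as y under the ranking."""
    return ranking.index(x) <= ranking.index(y)

# Connected: any pair of options can be compared one way or the other.
connected = all(r(x, y) or r(y, x) for x, y in permutations(ranking, 2))

# Transitive: x r y and y r z imply x r z.
transitive = all(r(x, z) for x, y, z in permutations(ranking, 3) if r(x, y) and r(y, z))

print(connected, transitive)   # True True
```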

  17. D. Characteristics of a desirable aggregation technique • 1. Universality: Our technique should apply to any group of rational people, regardless of their specific preferences about A, B, or C.

  18. 2. Non-Dictatorship • If Bob says: A p B • But everyone else says B p A • then… • We should not conclude that for society, A p B

  19. 3. Unanimity • If everyone agrees that A p B • then… • We should conclude that for society, A p B

  20. 4. Collective Rationality • If individuals are rational, our technique should create social preferences that are rational • Remember what this means: connected and transitive preferences

  21. 5. Independence of Irrelevant Alternatives • Suppose I have the options A, B, and C. I can rank these however I want. One example: A p B p C • Now suppose a new option is available: D. • I must not change the order of A, B, and C relative to each other. • Starting with the example above: • D p A p B p C → OK • A p D p B p C → OK • A p B p D p C → OK • A p B p C p D → OK • D p B p A p C → Not OK (B and A swapped places) • Restaurant analogy: Waiter offers chicken or fish. I like chicken better. Waiter comes back and explains there is also beef. I now decide I want the fish. (Not OK)
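
The same check can be written out in code: the sketch below restricts each expanded ranking back to the original options and tests whether their relative order survived (the helper name relative_order is mine, purely illustrative).

```python
# Independence of Irrelevant Alternatives: adding D must not reshuffle A, B, C.
def relative_order(ranking, original_options):
    """The ranking restricted to the original options, in order."""
    return [x for x in ranking if x in original_options]

before = ["A", "B", "C"]            # A p B p C
ok     = ["A", "B", "D", "C"]       # D slotted in; A, B, C order preserved
not_ok = ["D", "B", "A", "C"]       # B and A swapped places

print(relative_order(ok, before) == before)       # True  -> OK
print(relative_order(not_ok, before) == before)   # False -> violates IIA
```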

  22. D. Characteristics of a desirable aggregation technique (revisited) • Universality: Applies to people with different values or beliefs • Non-Dictatorship: No one person’s preference outweighs everyone else together • Unanimity: If everyone prefers one option to another, then so should society as a whole • Collective Rationality: Should produce a transitive ranking of options • Independence of Irrelevant Alternatives: New options don’t change the relative ranks of earlier options

  23. E. Conclusion and Implications • Arrow proved these conditions cannot all be satisfied at once! • Implications • There are times when there is no single “national interest,” “general will” or “will of the people” • Rational individuals may not make a rational collectivity • Preference cycles and the power of agenda-setting • Voter 1: A p B p C • Voter 2: B p C p A • Voter 3: C p A p B • SOCIETY: • A p B • B p C • C p A!
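
The preference cycle on this last slide can be reproduced with pairwise majority voting; the short Python sketch below uses the three voters’ rankings exactly as listed.

```python
# Pairwise majority voting over the three rankings above yields a cycle.
voters = [
    ["A", "B", "C"],   # Voter 1: A p B p C
    ["B", "C", "A"],   # Voter 2: B p C p A
    ["C", "A", "B"],   # Voter 3: C p A p B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Society: {x} p {y}? {majority_prefers(x, y)}")
# All three are True: A p B, B p C, and C p A -- the social preference cycles,
# so it is not transitive even though every individual's ranking is.
```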
