
Why You Don’t Tell The Truth: The Irrationality of Disagreement


Presentation Transcript


  1. Why You Don’t Tell The Truth: The Irrationality of Disagreement. Robin Hanson, Associate Professor of Economics, George Mason University

  2. We Disagree, Knowingly (sincerely, on factual topics) • Stylized facts: • We argue in science/politics and bet on stocks/sports • Especially regarding ability, when claims are hard to check • Less on “There’s another tree” • We dismiss the dumber, but do not defer to the smarter • Disagreeing does not embarrass us, though its absence can • Given any random pair of people, one can find many topics on which they disagree • Even people who think rational agents should not disagree, disagree • Precise version: we can publicly predict the direction of the other’s next opinion, relative to what we say

  3. Life Without Knowing Disagreement • Less war from mutual optimism • Little “for their own good” paternalism • Few “not invented here” innovation barriers • No belief-based identities: religious, political, academic, national, sporting, artistic • Far less speculative trade, and surveys would be harder • But prices and surveys would be more influential • Debaters would take on roles, like actors

  4. How to Respond to a Differing Peer? (one with nearly the same quality of evidence and reasoning ability) Philosophers weigh in:
  Hold (nearly) firm: P. van Inwagen ’96, A. Plantinga ’00, G. Rosen ’01, R. Foley ’01, T. Kelly ’05, ’07, P. Pettit ’06, B. Weatherson ’07
  (Near) equal weight: Sextus Empiricus, K. Lehrer ’76, H. Sidgwick ’81, R. Feldman ’04, A. Elga ’06, B. Frances ’07, D. Christensen ’07

  5. “Bayesian” ≡ Probability / Info Theory • As in statistics, computer science, physics, economics • Over possible worlds, it can express: taking a view on a claim, knowing, info, merged & common info, degrees of belief (or sets of them) “if you think about it enough” • Ideal constraints, not a recipe: P(A) + P(not A) = 1 does not say how to fix a failure • If constrained enough, rational beliefs are unique • NOT necessarily: anything goes, actual mental states, offering bets on everything, exact beliefs, perfect logic, sure evidence, Bayes’ rule, the prior as first beliefs, a symmetric prior, a common prior, accept/confirm
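To make the “ideal constraints” concrete, here is a minimal sketch (mine, not from the talk) of degrees of belief over possible worlds obeying the coherence constraint and Bayes’ rule; the worlds, numbers, and names are invented for illustration.

```python
# Minimal sketch (not from the slides): Bayesian degrees of belief
# over possible worlds, obeying P(A) + P(not A) = 1, updated by
# Bayes' rule.

worlds = ["rain", "no_rain"]
prior = {"rain": 0.3, "no_rain": 0.7}          # sums to 1 (coherence)

def posterior(prior, likelihood):
    """Bayes' rule: P(w|e) = P(e|w) P(w) / sum over w' of P(e|w') P(w')."""
    joint = {w: likelihood[w] * prior[w] for w in prior}
    total = sum(joint.values())
    return {w: joint[w] / total for w in joint}

# Evidence: a wet sidewalk, more likely under rain.
like = {"rain": 0.9, "no_rain": 0.2}
post = posterior(prior, like)
print(post)  # {'rain': ~0.66, 'no_rain': ~0.34}; still sums to 1
```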

  6. We Can’t Agree to Disagree. Aumann (1976) Annals of Statistics (Nobel Prize 2005; his most cited paper, by 2x) assumed: • any finite information • of possible worlds • common knowledge • of exact E1[x], E2[x] • would say next • for Bayesians • with common priors • if they seek truth, and do not lie, josh, or misunderstand [Diagram: Agent 1’s info set, Agent 2’s info set, and the common knowledge set]
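Aumann’s result has a well-known dynamic version (Geanakoplos & Polemarchakis) in which agents alternately announce their conditional expectations and must converge to agreement. A minimal sketch, mine rather than the talk’s, using a standard textbook-style nine-state example; the partitions and the estimated quantity are chosen only for illustration.

```python
# Sketch (not from the talk): two agents with a common prior and
# different partitions alternately announce E[x | own info + all
# public announcements]. The announcements refine the public event
# until the expectations agree.

from fractions import Fraction

states = set(range(1, 10))                     # nine possible worlds
prior = {w: Fraction(1, 9) for w in states}    # common uniform prior
A = {3, 4}                                     # estimate x = P(A)
x = {w: 1 if w in A else 0 for w in states}
part1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]      # agent 1's private info
part2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]      # agent 2's private info

def cell(partition, w):
    return next(c for c in partition if w in c)

def expect(event):
    tot = sum(prior[w] for w in event)
    return sum(prior[w] * x[w] for w in event) / tot

def announce(partition, K, true_w):
    # Agent states E[x | own cell and public event K]; everyone then
    # rules out the states of K where that announcement would differ.
    e = expect(cell(partition, true_w) & K)
    return e, {w for w in K if expect(cell(partition, w) & K) == e}

true_w = 1
K = set(states)                                # public event so far
for rnd in range(5):
    e1, K = announce(part1, K, true_w)
    e2, K = announce(part2, K, true_w)
    print(f"round {rnd}: E1 = {e1}, E2 = {e2}")
    if e1 == e2:
        break   # expectations are now common knowledge, hence equal
# Output: round 0: E1 = 1/3, E2 = 1/2
#         round 1: E1 = 1/3, E2 = 1/3
```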

  7. We Can’t Agree to Disagree
  Aumann in 1976 assumed: • Any information • Of possible worlds • Common knowledge • Of exact E1[x], E2[x] • Would say next • For Bayesians • With common priors • If seek truth, not lie or misunderstand
  Since generalized to: • Impossible worlds • Common belief • A f(•, •), or who maximizes • Last ±(E1[x] - E1[E2[x]]) • At core, or wannabe • Symmetric prior origins

  8. We Can’t Agree to Disagree (roadmap slide repeated; see slide 7)

  9. Common Belief [Diagram: events E and C, with belief sets B1 and B2] • Common belief = in C, agents q-agree that E • 2(1-q) = the maximum share by which agents can q-agree to disagree regarding X. Monderer & Samet (1989) Games and Economic Behavior

  10. We Can’t Agree to Disagree (roadmap slide repeated; see slide 7)

  11. We Can’t Foresee To Disagree. Hanson (2002) Economics Letters

  12. Strong Bayesian Response [Chart: opinion Ei[X] over time. Once E2[X] is clearly told, the full Bayesian response moves E1[X] to E1[E2[X]], for any future date; an “equal weight” response moves only partway, and “hold firm” barely moves.]

  13. Joint Random Walk [Chart: opinion Ei[X] over time] For any Bayesians (or wannabes) with a common prior, announced opinions follow a joint random walk.
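The random-walk claim is the martingale property of Bayesian beliefs: conditional on today’s announced opinion, the expected next opinion equals today’s, so no one can foresee the direction of the next move. A small Monte Carlo sketch (mine, not from the talk; the coin parameters are invented) illustrates it.

```python
# Sketch (not from the talk): a Bayesian's announced opinion path is
# a martingale. One agent tracks P(coin is the "high" coin) as flips
# arrive; averaged over many runs, the next belief equals the current.

import random

def opinion_path(n_flips=20, p_high=0.7, p_low=0.3):
    high = random.random() < 0.5          # nature picks the coin
    p = p_high if high else p_low
    belief = 0.5                          # prior P(high)
    path = [belief]
    for _ in range(n_flips):
        heads = random.random() < p
        l_high = p_high if heads else 1 - p_high   # P(flip | high)
        l_low = p_low if heads else 1 - p_low      # P(flip | low)
        belief = belief * l_high / (belief * l_high + (1 - belief) * l_low)
        path.append(belief)
    return path

# Martingale check: starting from belief 0.5, the average belief after
# one flip stays ~0.5 -- the direction of the move is unforeseeable.
random.seed(0)
steps = [opinion_path(1)[-1] for _ in range(100_000)]
print(sum(steps) / len(steps))            # ~0.5
```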

  14. Experiment Shows Disagreement. E.g.: what % of U.S. say dogs are better pets than cats? • A gets a clue on X; A1 = A’s guess of X; A is told Sign(B2 - B1); A2 = A’s guess of X; A’s loss: (A1 - X)² + (A2 - X)² • B gets a clue on X; B is told A1; B1 = B’s guess of X; B2 = B’s guess of A2; B’s loss: (B1 - X)² + (B2 - A2)² • Example sequence over time: 30%, 70%, 40%, “low”, 40% • Result: A neglects the clue from B, and B reliably predicts this neglect
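A hedged sketch of why a Bayesian A should move with the message rather than neglect it. This is not the actual experiment: the clue distributions, B’s pooling rule, and the “low/high” message are simplified stand-ins for Sign(B2 - B1), invented for illustration.

```python
# Sketch (hypothetical parameters, not the actual experiment): under
# quadratic loss the optimal second guess is E[X | own clue, message],
# which here is estimated by conditioning a large simulated sample.

import random

random.seed(1)
samples = []
for _ in range(200_000):
    x = random.random()                       # true fraction X
    clue_a = x + random.gauss(0, 0.15)        # A's noisy clue
    clue_b = x + random.gauss(0, 0.15)        # B's noisy clue
    a1 = clue_a                               # A's first guess
    b1 = (clue_b + a1) / 2                    # B pools A1 with own clue
    sign = "low" if b1 < a1 else "high"       # proxy for Sign(B2 - B1)
    samples.append((a1, sign, x))

# A's Bayesian second guess given A1 ~ 0.40 and the message "low":
near = [x for a1, s, x in samples if abs(a1 - 0.40) < 0.02 and s == "low"]
print(sum(near) / len(near))   # noticeably below 0.40: A should move down
```

Experimental subjects in the A role barely move at all, which is the disagreement the slide reports.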

  15. We Can’t Foresee To Disagree (roadmap slide, with the assumptions and generalizations of slide 7)

  16. Generalized Beyond Bayesians • Possibility-set agents: if balanced (Geanakoplos ’89), or if they “know that they know” (Samet ’90), … • Turing machines: if they can prove all computable in finite time (Megiddo ’89, Shin & Williamson ’95) • Ambiguity-averse agents: max over acts of min over p in S of Ep[Uact], e.g. Maccheroni, Marinacci & Rustichini (2006) Econometrica • Many specific heuristics … • Bayesian wannabes
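The ambiguity-aversion rule in the third bullet, max over acts of the minimum expected utility over a set S of priors, is easy to spell out. A toy sketch with invented priors and payoffs:

```python
# Sketch (not from the talk): maxmin expected utility. The agent does
# not trust a single prior; it holds a set S and evaluates each act by
# its worst-case expected utility over S, then picks the best act.

priors = [                       # S: candidate distributions over 3 states
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
]
utility = {                      # U[act][state], invented payoffs
    "safe":  [1.0, 1.0, 1.0],
    "risky": [3.0, 0.5, 0.0],
}

def worst_case_eu(act):
    """min over p in S of E_p[U_act]"""
    return min(sum(p[s] * utility[act][s] for s in range(3)) for p in priors)

best = max(utility, key=worst_case_eu)
print(best, worst_case_eu(best))   # ambiguity aversion favors "safe"
```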

  17. Bayesian Wannabe • A general model of computation-limited agents • They try to be Bayesian but fail, and so have belief errors • They reason about their own Error = Actual - Bayesian • Can consider arbitrary meta-evidence (see Kelly ’05) • Assume two Bayesian wannabes agree to disagree (A.D.) and maintain a few easy-to-compute belief relations: • an A.D. regarding any state-dependent Xw implies an A.D. regarding a state-independent Yw = Y • since info is irrelevant to estimating Y, any A.D. implies a pure error-based A.D. • so if a pure-error A.D. is irrational, all are. Hanson (2003) Theory & Decision

  18. Error Differences Are Like Prior Differences
  Source of disagreement: Priors | Info | Errors
  Pure agree-to-disagree possible? Yes | No | Yes
  Either combination implies the pure version! Ex: E1[π] ≈ 3.14, E2[π] ≈ 22/7

  19. Theorem in English • If two Bayesian wannabes • nearly agree to disagree about any X, • nearly agree that each thinks himself nearly unbiased, • nearly agree that one agent’s estimate of the other’s bias is nearly consistent with a simple algebraic relation, • then they nearly agree to disagree about Y, one agent’s average error regarding X. (Y is state-independent, so info is irrelevant.) Hanson (2003) Theory & Decision

  20. We Can’t Foresee To Disagree (roadmap slide, with the assumptions and generalizations of slide 7)

  21. Which Priors Are Rational? Prior = the counterfactual belief one would have given the same minimal info • Extremes: all priors are rational, vs. only one is • One can claim the rational prior is unique even if no one can construct it (yet) • It is common to say these should have the same prior: • left & right brain halves; me-today & me-Sunday • priors with bad origins, e.g. random brain changes • You must think your differing prior is special, but • standard genetics says the DNA process is the same for everyone • standard sociology says your culture process is similar. Hanson (2006) Theory & Decision

  22. Standard Bayesian Model [Diagram: a single prior, Agent 1’s info set, Agent 2’s info set, and the common knowledge set]

  23. An Extended Model [Diagram: multiple standard models, each with different priors]

  24. Standard Bayesian Model [Diagram]

  25. Extending the State Space [Diagram: each prior assignment treated as an event]

  26. An Extended Model [Diagram]

  27. My Differing Prior Was Made Special My prior and any ordinary event E are informative about each other. Given my prior, no other prior is informative about any E, nor is E informative about any other prior. Hanson (2006) Theory & Decision

  28. Corollaries • My prior only changes if events become more or less likely. • If an event is just as likely in situations where my prior is switched with someone else’s, then those two priors assign the same chance to that event. • Only common priors satisfy these conditions together with symmetric prior origins.
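The second corollary translates directly into symbols. A hedged sketch in my own notation (not from the slides): write P for the extended-model distribution, and treat “agent i was assigned prior p” as an event in the extended state space.

```latex
% My notation, not from the slides: a direct symbolization of the
% second corollary. P is the extended-model distribution; "p_1 = p"
% is the event that agent 1 was assigned prior p.
\text{If } P(E \mid p_1 = p,\, p_2 = q) = P(E \mid p_1 = q,\, p_2 = p)
\text{ for an ordinary event } E, \text{ then } p(E) = q(E).
```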

  29. A Tale of Two Astronomers • They disagree on whether the universe is open or closed • To justify this via priors, each must believe: • “Nature could not have been just as likely to have switched our priors, both if open and if closed” • “If I had a different prior, I would be in a situation of different chances” • “Given my prior, the fact that he has a particular prior says nothing useful” • All false for genetic influences on brothers’ priors!

  30. We Can’t Foresee To Disagree (roadmap slide, with the assumptions and generalizations of slide 7)

  31. We Disagree, Knowingly (the stylized facts of slide 2, repeated)

  32. Why Do We Disagree? • Theory or data wrong? They seem robust. • Few know the theory? Then a big change is coming. • Infeasible to apply? It needs just a few additions. • We lie? Exploring issues? Misunderstandings? We usually think not, and the effect is linear. • We not seek truth? Each has the prior “I reason better”? But we complain of this in others.

  33. An Answer: We Self-Deceive • We are biased to think we are better drivers, lovers, …: “I am less biased, with better data & analysis” • Evolutionary origin: self-deception helps us deceive • The mind “leaks” beliefs via the face, tone of voice, … • It leaks less if the conscious mind really believes • Beliefs are like clothes: function in harsh weather, fashion in mild • When made to see their self-deception, people still disagree • So at some level we accept that we do not seek truth

  34. How Few Meta-Rationals (MR)? Meta-rational = seeks truth, does not lie, has no self-favoring prior, knows the basics of disagreement theory • Rational beliefs are linear in the chance that the other is MR • Two MR who meet and talk long should see that they are MR? • Their joint opinion path becomes a random walk • We see virtually no such pairs, so few are MR! • If N people each talk with 2T others, that makes ~N·T·(%MR)² such pairs • With 2 billion people each talking to 100, if 1 in 10,000 were MR, we would see 1000 pairs (arithmetic checked below) • We see none, even among those who accept that disagreement is irrational
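The slide’s pair count is straight arithmetic; a quick check, using the values given on the slide:

```python
# Check of the slide's arithmetic on meta-rational (MR) pairs:
# N people each talking with 2T others yields ~N*T pairs, and a pair
# is MR-MR with probability (fraction MR)^2.

N = 2_000_000_000      # people
two_T = 100            # conversation partners each
f_mr = 1 / 10_000      # fraction meta-rational

pairs = N * (two_T / 2)            # ~N*T conversational pairs
mr_pairs = pairs * f_mr ** 2       # expected MR-MR pairs
print(f"{mr_pairs:.0f}")           # ~1000, as on the slide
```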

  35. When Are You Justified in Disagreeing? When others disagree, so must you • Key: relative meta-rationality/self-deception, before IQ/info • The psychology literature gives self-deception clues: • it shows less in skin response; it is harder regarding one’s own overt behaviors; older kids hide it better; self-deceivers have more self-esteem and less psychopathology/depression • Possible clues: IQ/idiocy, self-interest, emotional arousal, formality, unwillingness to analyze/consider • But we select which clues to use self-deceptively • Needed: data on who tends to be right when people disagree! • Tetlock shows “hedgehogs” are wrong more often on foreign events • One media analysis favors: longer articles, in news vs. editorial style, by men, non-book on web or air, in topical publications with more readers and awards

  36. We Can’t Agree to Disagree (roadmap slide repeated; see slide 7)

  37. Implications • Self-deception is ubiquitous! • Facts may not resolve political/social disputes, even if we share basic values • Models of academics should allow non-truth-seekers • A new goal for info institutions: reduce self-deception • Speculative markets do well here; use them more? • Self-doubt for supposed truth-seekers: “First cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother's eye.” Matthew 7:5

  38. Life Without Knowing Disagreement (repeats slide 3)

  39. Common Concerns • “I’m smarter, and understand my reasons better” • “My prior is more informed” • Different models/assumptions/styles • Lies, ambiguities, misunderstandings • Logical omniscience, action non-linearities • Disagreement explores issues and motivates effort • We disagree about disagreement itself • A Bayesian “reductio ad absurdum”
