This article explores how probability distributions are judged, discussing normative approaches such as Bayesian updating and heuristic processes used in behavioral decision making. It covers topics like attribute substitution, prototype heuristics, partition dependence, optimism and overconfidence, mis-Bayesian approaches, probabilities expressed in markets, and research frontiers.
Probability judgment (Ec101, Caltech) • How are probability distributions judged? • Normative: optimal inference using the laws of statistics • Bayesian updating: P(H1|data)/P(H2|data) = [P(H1)/P(H2)] × [P(data|H1)/P(data|H2)] • posterior odds = prior odds × likelihood ratio • Behavioral: TK “heuristics-and-biases” program • c. ‘74: Heuristic processes substitute for explicit calculation • Representativeness, availability, anchoring • Heuristics can be identified by the “biases” (deviations) from optimality • c. ‘03: System 1 is heuristics, system 2 is rational override • Note: Controversial! (see Shafir-LeBoeuf AnnRevPsych ’02)
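A minimal numerical sketch of the odds-form update above; the hypotheses, priors, and likelihoods are made-up illustration values, not from the lecture.

```python
# Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio.
def posterior_odds(prior_h1, prior_h2, p_data_h1, p_data_h2):
    return (prior_h1 / prior_h2) * (p_data_h1 / p_data_h2)

# Illustrative numbers: H1 starts out twice as likely as H2,
# but the observed data are three times as likely under H2.
odds = posterior_odds(prior_h1=2/3, prior_h2=1/3, p_data_h1=0.2, p_data_h2=0.6)
print(odds)               # 0.666...: the data shift the odds from 2:1 toward H2
print(odds / (1 + odds))  # P(H1|data) = 0.4, assuming H1 and H2 exhaust the hypotheses
```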
Examples • P(event) • Will Atty General Alberto Gonzales resign? • Will Tomomi return to Japan after Fulbright? • Will the TimeWarner-AOL merger succeed? • Numerical quantities • Box office gross of “Spiderman 3” • Inflation rate next year • Where do I rank compared to others?
Major topics 1. System 1-2 2. Attribute substitution 3. Prototype heuristics 4. Partition-dependence 5. Optimism and overconfidence 6. Mis-Bayesian approaches 7. Probabilities expressed in markets 8. Research frontiers
Systems 1 and 2 in action • “Mindless” behavior (Langer helping studies) • A bat and ball together cost $11 • The bat costs $10 more than the ball • How much does the bat cost?
Systems 1 and 2 in action • A bat and ball together cost $11 • The bat costs $10 more than the ball • How much does the bat cost? • System 1 guess $10 • System 2 checks constraint satisfaction • x+y=11 • x-y=10 • System 1 “solves” x+y=11, x=10, y=1 • System 2 notices that 10-1 ≠ 10
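A tiny sketch of the system 2 check on the bat-and-ball constraints, using only the numbers on the slide.

```python
# Constraints: bat + ball = 11 and bat - ball = 10.
# System 1's answer: bat = 10, ball = 1 (fits the sum, not the difference).
bat_guess, ball_guess = 10, 1
print(bat_guess - ball_guess == 10)   # False: 10 - 1 = 9

# Solving the two equations exactly: bat = 10.5, ball = 0.5.
bat = (11 + 10) / 2
ball = (11 - 10) / 2
print(bat, ball)                      # 10.5 0.5 satisfies both constraints
```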
Provides a tool to study individual differences • Cognitive reflection test (CRT, Frederick JEP 05)
2. “Attribute substitution” by system 1 • “...when an individual assesses a specified target attribute of a judgment object by substituting another property of that object—the heuristic attribute—which comes more readily to mind.” (Kahneman 03 p 1460) • Like the politician’s rule: answer the question you wish had been asked, not the one that was actually asked.
Attribute substitution: Examples • “Risk as feelings” (Loewenstein et al): • Question: “Is it likely to kill you?” • Substitute: “Does it scare you?” • Role of lack of control and catastrophe • Mad cow disease (labelling, Heath Psych Sci), terrorism?, flying vs driving (post 9-11, Gigerenzer 04 Psych Sci) • Personal interviews (notoriously unreliable): • Question: “Will this person do a good job?” • Substitute: “Do you like them, are they glib, etc?”
Example: Competence judgment by outsiders highly correlated with actual Senate election votes (Todorov et al Sci 05)
3. Prototype heuristics • Tom W: Neglect of the college-major base rate when prototype matching is accessible
Conjunction fallacy • Stereotypes can violate the conjunction law: P(feminist bank teller) > P(bank teller) • Easily corrected by “Of 100 people like Linda...”
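A short sketch of why the “Of 100 people like Linda” framing helps: with counts, the conjunction is visibly a subset. The specific counts are made up for illustration.

```python
# Hypothetical counts among 100 people fitting Linda's description.
n_people = 100
n_bank_tellers = 5            # assumed count, illustration only
n_feminist_bank_tellers = 3   # necessarily a subset of the bank tellers

# The conjunction rule in frequency form: the subset can never outnumber the set.
assert n_feminist_bank_tellers <= n_bank_tellers
p_teller = n_bank_tellers / n_people
p_feminist_teller = n_feminist_bank_tellers / n_people
print(p_feminist_teller <= p_teller)   # True: P(feminist bank teller) <= P(bank teller)
```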
What is the area? (top) What is the total line length? (bottom) • Implies that displays matter • Role for supply-side “marketing” • Which attribute is substituted depends on “accessibility” (cf. availability)
Availability (c. 1974) • Is r more likely to occur as 1st or 3rd letter? • “Illusory correlation” (Chapman ’67 J AbnPsych) • E.g. gay men draw muscular or effeminate people in D-A-P test • Resistant to feedback because of encoding bias and availability • Confirmation bias: Overweight the +/+ cell in a 2x2 matrix • Secret to astrology • “Something challenging is going on in your life…” • D. Wayne Lukas, horse trainer: “winningest” in the Breeders’ Cup (and also the losingest)
Mystique of horse trainer D. Wayne Lukas - Has been the dominant trainer in the Breeders' Cup and is the only trainer to have at least one starter every year. Is the career leader in purse money won, starters and victories. Ten of his record 18 wins have come with 2-year-olds: five colts and five fillies. Has also won the Classic once, the Distaff four times, the Mile once, the Sprint twice …(ntra.com) • http://www.ntra.com/images/2006_MEDIA_Historical_trainers.pdf
Illusions of transparency • Hindsight bias: • Ex post recollections of ex ante guesses tilted in the direction of actual events • Curse of knowledge • Hard to imagine others know less (Piaget on teaching, computer manuals…) • Other illusions of transparency • Speaking English louder in a foreign country • Gilovich “Barry Manilow” study
4. Partition-dependence • Events and numerical ranges are not always naturally “partitioned” • E.g.: Car failure “fault tree” • Risks of death • Income ranges (e.g. economic surveys) • {Obama, Clinton, Republican} or {Democrat, Republican} • The presented partition is “accessible” • Can influence judged probability when system 2 does not override • Corollary: Difficult to create the whole tree • “When you do a crime there are 50 things that can go wrong. And you’re not smart enough to think of all 50” (Mickey Rourke character, Body Heat)
Probabilities are sensitive to the partition of sets of events (Fox-Clemen 05 Mgt Sci; Ss are Duke MBAs judging starting salaries)
Prediction markets for economic statistics (Wolfers and Zitzewitz JEcPersp 06)
Debiasing partition-dependent forecasts improves their accuracy (Sonneman, Fox, Camerer, Langer, unpub’d) • Suppose the observed F(x) is a mixture of an unbiased B(x) and a diffuse prior (1/n): F(x) = αB(x) + (1−α)(1/n) • Invert to compute B*(x|α) = (1/α)[F(x) − (1−α)(1/n)] • (e.g. B(.6<retail<.7 | α=.6) = (1/.6)[.094 − (.4)(1/18)] = .120) • Are the inferred B(x) more accurate than the observed F(x)? • Yes: α=.6 mean abs error .673 (α=1: .682) • Correction w/ α=.99 improves forecasts 58.2% (n=153, p<.01)
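A minimal sketch of the correction formula, assuming the judged distribution F assigns a probability to each of n interval bins; the .094, n = 18, and α = .6 are the slide’s example values.

```python
def debias(f_x, alpha, n_bins):
    """Invert F(x) = alpha*B(x) + (1 - alpha)*(1/n) to recover the unbiased B(x)."""
    return (f_x - (1 - alpha) * (1.0 / n_bins)) / alpha

# Slide example: observed F(.6 < retail < .7) = .094 over 18 bins, alpha = .6
print(debias(0.094, alpha=0.6, n_bins=18))   # ~0.120, matching the slide
```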
Hindsight Bias Across the Life Span
Anchoring • An “anchor” is accessible • System 2 must work hard to override it • E.g. % of African nations in the UN (TK 74) • (visibly random) anchor 10 → median 25; anchor 65 → median 45 • “Tom Sawyer” pricing
“Tom Sawyer” pricing of poetry reading (±$5 anchor) (Ariely et al)
5. Optimism and overconfidence • Some evidence of two “motivated cognition” biases: • Optimism (good things will happen) • Overconfidence • Confidence intervals too narrow (e.g. Amazon river) • P(relative rank) biased upward (“Lake Wobegon effect”)
Relative overconfidence and competitor neglect • Entry game paradigm (Camerer-Lovallo ‘99 AER): • 12 entrants, capacity C (2, 4, 6, 8, 10) • The top-ranked C entrants earn $50 total; the bottom-ranked earn −$10 • Ranks are random, or based on skill (trivia) • Entrants also guess the number of other entrants
Entrants earn more $ when ranks are random than when they are based on skill (they aren’t thinking about competitors’ skill)
Competitor neglect • Joe Roth (Walt Disney studio head) on why big movies compete on holidays: • They substitute the attribute “Is my movie good?” for the target “Is my movie better?”
“Hubris” of CEOs correlated with merger premiums (Roll ’86 JBus; Hayward-Hambrick 97 ASQ)
Narrow confidence intervals • Surprising fact: 90% confidence intervals are too narrow (50% miss) • Feb. 7, Defense Secretary Donald Rumsfeld, to U.S. troops in Aviano, Italy: “It is unknowable how long that conflict will last. It could last six days, six weeks. I doubt six months.” • Paul Wolfowitz, the deputy secretary of defense at the time, was telling Congress that the upper range [60-95 B$] was too high and that Iraq’s oil wealth would offset some of the cost. “To assume we’re going to pay for it all is just wrong,” Wolfowitz told a congressional committee. • Nonfatal examples: lost golf ball or contact lens, project costs/time (planning fallacy) • But ask: How many will be wrong? People correctly say 5 of 10 (Sniezek et al)!
Inside/outside view (Kahneman-Lovallo 93 Mgt Sci, HBR 03) • How could each decision be wrong but aggregate statistics be right? • Inside view: • Rich, emotive, narrative, rosy • Outside view: • Abstract, acausal, statistical • Examples: Marriage, merger • No formal model of this yet
6. ‘Mis-’Bayesian approaches • Start with Bayesian structure • Relax one or more assumptions • Choose structure so resulting behavior matches stylized facts • Find new predictions
‘Mis-’Bayesian approaches (cont’d) • Confirmation bias (Rabin-Schrag, QJE 97) • Law of small numbers (Rabin, QJE 01?) • Overconfidence (Van den Steen, AER; Santos-Pinto & Sobel, AER) • Preference for optimistic beliefs (Brunnermeier-Parker AER) • …many more
Confirmation bias • States A and B; signals a and b • Signal a is more indicative of A (P(a|A) > P(a|B)) • When P(A) > P(B), a is perceived correctly but b is misperceived as a with some probability (“you see it when you believe it”) • The mistaken belief can persist a long time
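A rough simulation sketch of this mechanism, in the spirit of Rabin-Schrag; the signal precision p_match, misreading rate q, and prior are illustrative assumptions, not parameters from the paper.

```python
import random

def confirmation_bias_sim(n_signals=100, p_match=0.7, q=0.6, prior_A=0.6, seed=0):
    """True state is B, but the agent starts out favoring A. Each signal that
    contradicts the currently favored state is misread as supporting it with
    probability q; updating is Bayesian on the *perceived* signals."""
    rng = random.Random(seed)
    belief_A = prior_A
    for _ in range(n_signals):
        # True state is B, so signal 'b' arrives with probability p_match.
        true_signal = 'b' if rng.random() < p_match else 'a'
        favored = 'A' if belief_A >= 0.5 else 'B'
        contradicts = (favored == 'A') == (true_signal == 'b')
        perceived = true_signal
        if contradicts and rng.random() < q:
            perceived = 'a' if favored == 'A' else 'b'   # misread as confirming
        # Bayesian update on the perceived signal (agent unaware of the misreading).
        like_A = p_match if perceived == 'a' else 1 - p_match
        like_B = p_match if perceived == 'b' else 1 - p_match
        belief_A = belief_A * like_A / (belief_A * like_A + (1 - belief_A) * like_B)
    return belief_A

print(confirmation_bias_sim(q=0.0))   # unbiased perception: belief in A ends up near 0
print(confirmation_bias_sim(q=0.6))   # heavy misreading: belief in A can stay near 1
```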
Relative overconfidence • People differ in production functions based on skill bundles (Santos-Pinto, Sobel AER) • Invest in skills to maximize ability • Compare themselves to others using their own production function • i.e., implicitly overrate the importance of what they are best at (cf. Dunning et al 93 JPSP) • Prediction: Narrowly-defined skill will erase optimism; it does
7. Probability estimates implicit in markets • Do markets eliminate biases? Pro: Specialization • Market is a dollar-weighted average opinion • Uninformed traders follow informed ones • Bankruptcy of mistaken traders • Con: Investors may not be selected for probability judgment • Short-selling constraints • Confidence (and trade size) may be uncorrelated with information • Herd behavior (and relative-performance incentives) may reduce the capacity to correct mistakes (Zwiebel, LTCM tale of woe) • Example study: Camerer (1987) • Experience reduces pricing biases but *increases* allocation biases
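A toy sketch of the “dollar-weighted average opinion” idea, and of why the “con” points matter; the traders’ beliefs and dollar amounts are made up.

```python
# Hypothetical traders: (subjective probability of the event, dollars traded).
traders = [(0.9, 100.0), (0.2, 1000.0), (0.6, 400.0)]

# Dollar-weighted average belief: big traders move the implied price most.
total = sum(w for _, w in traders)
implied_price = sum(p * w for p, w in traders) / total
print(round(implied_price, 3))   # 0.353, dominated by the large pessimistic trader

# If trade size is uncorrelated with information, the dollar-weighted price need
# not be closer to the truth than the unweighted average opinion.
print(round(sum(p for p, _ in traders) / len(traders), 3))   # 0.567
```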
Representativeness bias is perceptible but diminishes with experience (bottom; Camerer 87 AER)
8. Research frontiers • Aggregation in markets • Novices’ deliberate system 2 calculations are replaced by experts’ system 1 “intuitions” (cf. “Blink”) • How? And are they accurate? • Reconciling mis-Bayesian and system 1-2 approaches • Applications to field data • Where are systems 1-2 in the brain? • System 1-2 approach predicts the fragility of some results (e.g. Linda the bank teller)… how can system 2 be turned on?