Presentation Transcript


  1. FF8 Fortnight Analysis of Discrete Choice Data, Sheffield, September 2007. Can we use RUM and don't get DRUNK? Jorge E. Araña, University of Las Palmas de Gran Canaria. Collaborators: Carmelo J. León (ULPGC), W. Michael Hanemann (UC Berkeley)

  2. Outline
  • RUM and DC experiments
  • Sources of mistakes in citizens' choices
  • An extended frame: Bayesian modelling
  • Example: heuristics and DCE
    4.1. STUDY 1: Is it really a practical problem? A verbal protocol analysis.
    4.2. STUDY 2: A Bayesian finite mixture model in the WTP space. The effects of complexity and emotional load on the use of heuristics.
    4.3. STUDY 3: Heuristics heterogeneity and preference reversals in choice-ranking: an alternative explanation.
    4.4. STUDY 4: Can we use RUM and don't get DRUNK? A Monte Carlo study.
  • Discussion and further research

  3. DCE and Non-Market Valuation
  Valuation of health requires appropriate methods.
  • DCE are increasingly used and accepted.
  • They connect individual preferences to coherent results for CBA or CEA in the decision-making process.

  4. The Underlying Economic Theory
  • Morishima (Metroeconomica, 1959) – value from characteristics
  • Lancaster (JPE, 1966)
  B = f(P, E), where B = observed/stated choices, P = preferences (fundamental value), E = random term (context).
  THE TWO MAIN ISSUES
  1. MEASURING PREFERENCES (defining P):
    i) experienced vs. choice utility
    ii) absolute vs. relative utility (prospect theory)
    iii) …
  2. LINKING CHOICES AND PREFERENCES: f(.)

  5. The Departing Point
  From the economic theory point of view:
  • Lancaster (1966) – value from characteristics
  MAIN ISSUES
  1. MEASURING PREFERENCES (defining P):
    i) experienced vs. choice utility
    ii) direct utility
    iii) absolute vs. relative utility (prospect theory)
    iv) happiness vs. utility
    …
  2. LINKING CHOICES AND PREFERENCES: f(.)

  6. “Individuals have a single set of well-defined goals, and their behavior is driven by the choice of the best way to achieve those goals.”
  How can we link choices and preferences? The traditional answer: RUM, a specification of f(.) that is general, simple, intuitive, and gives an accurate explanation of agents' choices in a wide range of situations.

  7. However…
  There is strong and extensive evidence that citizens do not choose what makes them happy. Why?
  • Failures in predicting future experiences – projection bias, distinction bias, memory bias, belief bias, impact bias
  • Failures in following one's own predictions – procrastination, self-control bias, overconfidence, anchoring effects, simplifying decision rules, …

  8. However…
  • Preference reversals (Slovic and Lichtenstein, 1971, 1973)
  • Framing effects (Tversky and Kahneman, 1981, 1986)
  • …
  Does f(.) exist, or is it just B = ε? Our belief: YES, f(.) does exist.
  The challenge: defining f(.) in a way that can accommodate these deviations.
  Research strategy: thinking in terms of a hyper-rationality concept. Context matters… but so do fundamental values (McFadden, 2001; Grether & Plott, 1979; Slovic, 2002; …).

  9. Solutions NEED to be…
  • Multidisciplinary – economic theory, social psychology, statistics, cognitive psychology, neurology, political science, …
  • We need an extended frame that integrates contributions from these different areas.

  10. Why not Bayesian?
  • An elegant and robust way of integrating multidisciplinary contributions to DC theory and data analysis: Bayesian econometrics.

  11. Potential Bayesian Contributions to DCE
  • Can use prior information (there is a lot of prior info available: previous research, experts, benefit transfer, optimal designs, …)
  • Able to tackle more complex/sophisticated models
  • More accurate results (e.g., exact inference in finite samples)
  • More informative results (reports full posterior distributions instead of just one or two moments)
  • Sample means are inefficient and sensitive to outliers (especially important when studying heterogeneity in behaviour; the role of the tails has long been ignored)
  • Bayesian methods can quantify and account for several kinds of uncertainty
  • More interpretable inferences (posterior probabilities rather than confidence statements, …)

  12. EXAMPLE: Heterogeneous Decision Rules and DC

  13. The Heterogeneity in Decision Rules Argument
  • Decision making requires an information process (Simon, 1956; Tversky and Kahneman, 1974).
  • Individuals have a set of decision strategies h1, h2, …, hH at their disposal that vary in terms of:
    – Effort (EC): how much cognitive work is necessary to make the decision using that strategy
    – Accuracy (EU): the ability of that strategy to produce a good outcome

  14. Literature on Heuristics
  • The Adaptive Decision Maker (Payne, Bettman, and Johnson, 1993) – a toolbox of possible choice heuristics in multi-attribute choice (three of them are sketched in code below):
  • WADD: weighted additive rule
  • EQW: equal-weight heuristic
  • SAT: satisficing rule (Simon, 1955)
  • LEX: lexicographic heuristic
  • EBA: elimination by aspects (Tversky, 1972)
  • ANC: anchoring heuristic (Tversky and Kahneman, 1974)
  • MCD: majority of confirming dimensions (Russo & Dosher, 1983)
  • ADDIF: additive difference model (Tversky, 1969)
  • FRQ: frequency of good and bad features (Alba and Marmorstein, 1987)
  • AH: affect heuristic (Slovic, 2002)
  • Combined strategies
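To make the contrast between compensatory and non-compensatory rules concrete, here is a minimal Python sketch of three of the rules above applied to one choice set. The attribute matrix, weights, and cutoffs are invented for illustration and are not from the talk.

```python
import numpy as np

def wadd(X, w):
    """WADD: weighted additive rule -- the compensatory RUM benchmark.
    Picks the alternative with the highest weighted attribute sum."""
    return int(np.argmax(X @ w))

def lex(X, attr_order):
    """LEX: lexicographic heuristic. Compare alternatives on the most
    important attribute; break ties with the next attribute, and so on."""
    alive = np.arange(X.shape[0])
    for a in attr_order:
        best = X[alive, a].max()
        alive = alive[X[alive, a] == best]
        if alive.size == 1:
            break
    return int(alive[0])

def eba(X, cutoffs, attr_order):
    """EBA: elimination by aspects (Tversky, 1972). Drop alternatives that
    fail the cutoff on each attribute, taken in order of importance."""
    alive = np.arange(X.shape[0])
    for a in attr_order:
        survivors = alive[X[alive, a] >= cutoffs[a]]
        if survivors.size == 0:
            break                      # nobody passes: keep current set
        alive = survivors
        if alive.size == 1:
            break
    return int(alive[0])

# Three alternatives (rows) by three attributes (columns); higher is better.
X = np.array([[3.0, 1.0, 2.0],
              [2.0, 3.0, 1.0],
              [1.0, 2.0, 3.0]])
print(wadd(X, w=np.array([0.5, 0.3, 0.2])))                             # -> 0
print(lex(X, attr_order=[1, 0, 2]))                                     # -> 1
print(eba(X, cutoffs=np.array([2.0, 2.0, 0.0]), attr_order=[0, 1, 2]))  # -> 1
```

WADD trades attributes off against one another, whereas LEX and EBA never let good values elsewhere compensate for a bad value on an important attribute; this is exactly the kind of respondent heterogeneity the mixture model later tries to capture.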

  15. Choosing How to Choose (CHTC)
  A TWO-STEP PROCESS:
  STEP 1. Choosing how to choose (choice of the decision rule).
  STEP 2. Applying the decision rule.
  Applications: Manski (1977), Gensch (1987), Chiang et al. (1999), Gilbride and Allenby (2004), Beach and Potter (1992), Swait and Adamowicz (2001), Amaya and Ryan (2004), Araña, Hanemann and León (2005).

  16. The Theoretical Model
  For a well-behaved preference map, a general indirect utility function of individual i, given an alternative j, is U_ij = I(x_j; β_i) + ε_ij. If the individual faces a multi-attribute discrete choice problem, the researcher will observe that individual i chooses alternative j* such that U_ij* ≥ U_ik for all k ≠ j*.
  • Different specifications of I(.) make the model collapse to alternative decision rules.
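As a benchmark special case: when I(.) is linear in the attributes and the ε_ij are iid type-I extreme value, the model collapses to the conditional logit (the compensatory RUM benchmark labelled M1 later in the talk). A minimal sketch of the implied choice probabilities, with illustrative names:

```python
import numpy as np

def logit_choice_probs(V):
    """Conditional logit probabilities for one choice set.
    V: (J,) deterministic utilities I(x_j; beta_i); errors iid Gumbel.
    P(j) = exp(V_j) / sum_k exp(V_k)."""
    e = np.exp(V - V.max())   # subtract the max for numerical stability
    return e / e.sum()

# Example: three alternatives with utilities 1.0, 0.5, -0.2
print(logit_choice_probs(np.array([1.0, 0.5, -0.2])))
```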

  17. Different Heuristics

  18. Non-Regularity
  Problem 1: the likelihood surface for a single heuristic is discontinuous, and therefore global concavity cannot be guaranteed.
  Solution: rewrite the choice probability as the product of the second step of the choice process and a marginal heuristic probability, that is, P(i chooses j) = Σ_h P(i chooses j | h) · P(h). Adding the likelihood contributions over the different decision rules yields a globally concave likelihood surface: f(.) is a mixture distribution.
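In code, that decomposition is just a weighted average per respondent. A sketch with hypothetical names (cond_probs would come from evaluating each rule's choice probability, e.g. the logit for M1 and near-degenerate probabilities for deterministic heuristics):

```python
import numpy as np

def mixture_loglik(pi, cond_probs):
    """Log-likelihood of a finite mixture over H decision rules.
    pi:         (H,) marginal heuristic probabilities P(h), summing to 1.
    cond_probs: (n, H) matrix with entry (i, h) = P(choice_i | rule h).
    Each respondent contributes log sum_h P(choice_i | h) * P(h)."""
    mix = cond_probs @ pi   # (n,) marginal choice probabilities
    return float(np.sum(np.log(mix)))
```

Averaging over rules smooths out the discontinuities of any single deterministic heuristic, which is what restores a well-behaved likelihood surface.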

  19. Evaluating an Intractable Function
  Problem 2: from Bayes' theorem, p(θ | data) ∝ f(data | θ) · p(θ), and this posterior distribution is intractable and difficult to evaluate.
  Solution: we deal with that complication by employing MCMC methods, as proposed for discrete choice by Albert and Chib (1993), combining…
  • the Gibbs Sampling (GS) algorithm (Geman and Geman, 1984)
  • the Data Augmentation (DA) technique (Tanner and Wong, 1987)
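To show how GS and DA fit together, here is a compact sketch of the Albert and Chib (1993) sampler for the simplest case, a binary probit with a normal prior. This is a deliberate simplification of the mixture model in the talk; the function name and the prior variance tau2 are illustrative:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_probit(y, X, n_iter=5000, tau2=100.0, seed=0):
    """Albert & Chib (1993): data-augmentation Gibbs sampler for binary probit.
    y: (n,) array of 0/1 choices; X: (n, k) design matrix.
    Prior: beta ~ N(0, tau2 * I)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    V = np.linalg.inv(np.eye(k) / tau2 + X.T @ X)  # posterior cov of beta | z
    beta, draws = np.zeros(k), np.empty((n_iter, k))
    for it in range(n_iter):
        # DA step: latent utilities z_i | beta, y_i ~ N(x_i'beta, 1),
        # truncated to z_i > 0 if y_i = 1 and z_i <= 0 if y_i = 0.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # GS step: beta | z ~ N(V X'z, V)  (conjugate normal update)
        beta = rng.multivariate_normal(V @ (X.T @ z), V)
        draws[it] = beta
    return draws  # full posterior draws, not just one or two moments
```

In the full model of the talk, the same augmentation idea plausibly extends to sampling each respondent's latent heuristic indicator, which is how individuals end up assigned to decision rules in the tables that follow.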

  20. Prior Distributions

  21. MCMC Algorithm

  22. MCMC Algorithm

  23. MCMC Algorithm

  24. Different studies discussed during FF8
  Study 1: determinants of choosing decision rules (task complexity, emotional load, …)
  Study 2: heuristics and preference reversals in ranking vs. choice
  Study 3: testing the validity of the model to screen out heuristics
  Study 4: Monte Carlo simulation study
  Study 5: verbal protocol and emotional load

  25. STUDY 1: The Data
  Good to be valued: a set of programmes designed to improve health care conditions for the elderly on the island of Gran Canaria.
  Survey process (June 2004 to April 2005): 2 focus groups, 3 pre-test questionnaires, final questionnaire.
  Sample size: 550 individuals.
  Survey design:
  • D-optimal design method (Huber & Zwerina, 1996)
  • Elicitation technique: choice experiment
  • Scenarios were successfully tested in prior research

  26. Testing complexity effects on CHTC
  TWO SPLIT SAMPLES:
  • SAMPLE I: 2 pairs of alternatives + status quo
  • SAMPLE II: 4 pairs of alternatives + status quo

  27. Testing emotional load effects on CHTC
  Emotions shape both content (what we remember) and process (how we reason).
  MEASURING EMOTIONS: the Emotional Intensity Scale (EIS). Emotional intensity feeds into mood experience, which in turn feeds into individual decision making.
  Definition of emotional intensity: “stable individual differences in the strength with which individuals experience their emotions” (Larsen and Diener, 1987). We use the EIS-R (Geuens and De Pelsmacker, 2002).

  28. Results & Discussion

  29. TEST I: COMPLEXITY AND VALUATION RESULTS
  Table 3. Welfare estimation results for M1 (€)
  RESULT 1: Complexity seems to affect the absolute values of the welfare estimates, but does NOT affect the ranking of the programmes.

  30. TEST I: COMPLEXITY AND VALUATION RESULTS
  Table 3. Welfare estimation results for M1 (€)
  RESULT 2: Complexity makes people focus on the most appreciated attributes, which leads to higher valuations for the most valued programme (HOSPITAL) and lower valuations for the less valued programme (DAY CARE).

  31. TEST II: Complexity and Choosing How to Choose
  RESULT 3: The proportion of people responding in a totally random way is low.

  32. TEST II: Complexity and Choosing How to Choose
  RESULT 4: Deviations from M1 are widespread in the sample (55%), although M1 still accounts for the largest single share.

  33. TEST II: Complexity and Choosing How to Choose
  RESULT 5: Complexity does increase the likelihood that individuals follow non-compensatory decision rules.


  35. TEST III: Emotional Intensity and Choosing How to Choose
  Table 5. Individuals assigned to non-compensatory rules according to the degree of EIS (%)
  RESULT 6: Emotional sensitivity does affect the use of alternative decision rules.

  36. TEST III: Emotional Intensity and Choosing How to Choose
  Table 5. Individuals assigned to non-compensatory rules according to the degree of EIS (%)
  RESULT 7: Extreme EIS (high or low) induces a larger departure from M1 than average EIS.

  37. STUDY 3: RK-Choice Preference Reversals
  Summary of results: decision rules differ between choice and ranking. When the ranking responses below the status quo are taken out of the sample, decision rules and mean WTP are very similar (although variances are lower in RK, since ranking uses more information).

  38. The Data
  Good to be valued: a set of environmental actions in a vast rural park on the island of Gran Canaria called “The Guiniguada Valley”.
  Population: the population of Gran Canaria.
  Survey process (14 months in total): 3 focus groups, pre-test questionnaire, 1 focus group, final questionnaire.
  Sample size: 540 individuals.
  Survey design:
  • D-optimal design method (Huber and Zwerina, 1996)
  • Elicitation techniques: choice and ranking
  • Scenarios (verbal and photos) were tested in prior research

  39. Results. Table 3. Welfare estimations from M1 (RUM) for choice and ranking

  40. Table 4. Proportion of individuals assigned to each decision rule in each model



  43. Results. Table 5. Welfare estimations from the aggregated model for choice and ranking

  44. Conclusions
  • In this application, EBA is the predominant heuristic (over the FLC).
  • A small percentage of subjects follow the completely random heuristic.
  • Heuristics heterogeneity differs between choice and ranking (in particular for ranking responses below the SQ).
  • When heuristics heterogeneity is incorporated into the model, the gap between choice and ranking is drastically reduced.

  45. GENERAL DISCUSSION AND FURTHER RESEARCH
  • The model seems to do a good job of detecting people who use these heuristics (average efficiency of 85% in the MC study).
  • It can be used to test whether a specific DCE is valid enough to be used in PUBLIC POLICY (user-friendly code will be available soon).
  • Results from these studies can also help to decide several aspects of DCE design (number of attributes, levels, …).
  • A first line of further research is to use this information in the DCE design through a Bayesian approach, so that the accuracy of the results can be improved (respondent efficiency vs. statistical efficiency).
  • Results also have implications for benefit transfer: it is possible to reduce the cost of these studies by transferring results from previous studies to new ones. The Bayesian framework seems to be the most adequate approach to do so.

  46. Thanks!

  47. STUDY 3: Testing the Validity of the Model to screen out Heuristics


  49. STUDY 3: Testing the Validity of the Model to Screen Out Heuristics
  Average efficiency: 85%.
  Notes: no prior information and no respondent-efficient design were applied.

  50. STUDY 4: Monte Carlo Study
  People follow alternative heuristics… so what are the consequences?
  • A conventional conditional logit model and a hierarchical Bayes model are estimated on 900 samples, following the same idea as Study 2 (a minimal sketch of the sample generation follows below).
  • Samples differ in the percentage of citizens following each decision rule (10, 20, 30, 40, 50, 60, 70, 80, 90%).
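A minimal sketch of how one such synthetic sample could be generated. The attribute matrix, the parameters, and the choice of a lexicographic rule as the contaminating heuristic are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sample(n_resp, share_lex, X, beta):
    """One synthetic sample: a fraction share_lex of respondents choose
    lexicographically on the first attribute; the rest follow a RUM logit."""
    V = X @ beta
    p_rum = np.exp(V - V.max())
    p_rum /= p_rum.sum()                   # logit choice probabilities
    lex_choice = int(np.argmax(X[:, 0]))   # best on the top attribute
    is_lex = rng.random(n_resp) < share_lex
    rum_draws = rng.choice(len(p_rum), size=n_resp, p=p_rum)
    return np.where(is_lex, lex_choice, rum_draws)

# Vary the heuristic share from 10% to 90%, as in the study design
X = np.array([[3.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
beta = np.array([0.8, 0.5])
for share in np.arange(0.1, 1.0, 0.1):
    y = simulate_sample(550, share, X, beta)
    # ...fit the conditional logit and the mixture model to y, then
    # compare welfare estimates as the heuristic share grows
```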
