
Ranking and Rating Data in Joint RP/SP Estimation


Presentation Transcript


  1. Ranking and Rating Data in Joint RP/SP Estimation by JD Hunt, University of Calgary M Zhong, University of Calgary PROCESSUS Second International Colloquium Toronto ON, Canada June 2005

  2. Overview • Introduction • Context • Motivations • Definitions • Revealed Preference Choice • Stated Preference Rankings • Revealed Preference Ratings • Stated Preference Ratings • Estimation Testbed • Concept • Synthetic Data Generation • Results • Conclusions

  3. Overview • Introduction • Context • Motivations • Definitions • Revealed Preference Choice • Stated Preference Rankings • Revealed Preference Ratings • Stated Preference Ratings • Estimation Testbed • Concept • Synthetic Data Generation • Results – so far • Conclusions – so far

  4. Introduction • Context • Common task: estimating logit model utility functions for mode alternatives that do not yet exist • Joint RP/SP estimation is available • Good for sensitivity coefficients • Problems with alternative-specific constants (ASCs) • Motivation • Improve the situation regarding ASCs • Seeking to expand on joint RP/SP estimation • Add rating information • 0 to 10 scores • Direct utility • Increase understanding of issues regarding ASCs generally

  5. Definitions • Revealed Preference Choice • Stated Preference Ranking • Revealed Preference Ratings • Stated Preference Ratings • Linear-in-parameters logit utility function $U_m = \sum_k \alpha_{m,k} x_{m,k} + \beta_m$

  6. Definitions • Revealed Preference Choice • Stated Preference Ranking • Revealed Preference Ratings • Stated Preference Ratings • Linear-in-parameters logit utility function $U_m = \sum_k \alpha_{m,k} x_{m,k} + \beta_m$, where $\alpha_{m,k}$ is the sensitivity coefficient and $\beta_m$ is the ASC
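
As a reference point for the definitions above, here is a minimal Python sketch (the original work uses ALOGIT and MINITAB, not Python) of the linear-in-parameters utility and the resulting logit choice probabilities; the array shapes and example values are illustrative, not the presentation's true parameters.

```python
import numpy as np

def utilities(x, alpha, beta):
    """Linear-in-parameters utility: U_m = sum_k alpha[m, k] * x[m, k] + beta[m].

    x, alpha : (M, K) arrays of attribute values and sensitivity coefficients
    beta     : (M,) array of alternative-specific constants (ASCs)
    """
    return (alpha * x).sum(axis=1) + beta

def logit_probabilities(u):
    """Multinomial logit choice probabilities P_m = exp(U_m) / sum_n exp(U_n)."""
    e = np.exp(u - u.max())   # subtract the max for numerical stability
    return e / e.sum()

# Illustrative values only (not the presentation's true parameters)
x = np.array([[1.0, 2.0], [0.5, 3.0], [2.0, 1.0]])
alpha = np.array([[-0.4, -0.1]] * 3)
beta = np.array([0.0, 0.5, -0.2])
print(logit_probabilities(utilities(x, alpha, beta)))
```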

  7. Revealed Preference Choice • Actual behaviour • Best alternative choice from existing • Attribute values determined separately • Indirect utility measure – observe outcome $U^r_m = \lambda^r \left[ \sum_k \alpha_{m,k} x_{m,k} + \beta_m \right] + \beta^r_m$

  8. Revealed Preference Choice • Disaggregate estimation provides $U^r_m = \sum_k \alpha'^{r}_{m,k} x_{m,k} + \beta'^{r}_m$ with $\alpha'^{r}_{m,k} = \lambda^r \alpha_{m,k}$ and $\beta'^{r}_m = \lambda^r \beta_m + \beta^r_m$
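
A gloss on what these identities imply for the ASC issue raised in the Introduction (reasoning added here, not stated on the slide): the sensitivity coefficients are recovered only up to the scale $\lambda^r$, so their ratios are identified, but the estimated constant mixes the true ASC with the RP-specific constant and the two cannot be separated from RP choice data alone:

\[
\frac{\alpha'^{r}_{m,k}}{\alpha'^{r}_{m,j}} = \frac{\alpha_{m,k}}{\alpha_{m,j}},
\qquad
\beta'^{r}_m = \lambda^r \beta_m + \beta^r_m \quad \text{(confounds } \beta_m \text{ with } \beta^r_m\text{)}.
\]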

  9. Stated Preference Ranking • Stated behaviour • Ranking alternatives from presented set • Attribute values indicated • Indirect utility measure – observe outcome $U^s_m = \lambda^s \left[ \sum_k \alpha_{m,k} x_{m,k} + \beta_m \right] + \beta^s_m$

  10. Stated Preference Ranking • Disaggregate (exploded) estimation provides $U^s_m = \sum_k \alpha'^{s}_{m,k} x_{m,k} + \beta'^{s}_m$ with $\alpha'^{s}_{m,k} = \lambda^s \alpha_{m,k}$ and $\beta'^{s}_m = \lambda^s \beta_m + \beta^s_m$
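
A hedged sketch of what "exploded" estimation means in likelihood terms (Python, purely illustrative; the original estimation used ALOGIT): the observed ranking is decomposed into a sequence of best-of-the-remaining logit choices, and each stage contributes a standard logit term.

```python
import numpy as np

def exploded_logit_loglik(u, ranking):
    """Log-likelihood of one ranked observation under the exploded (rank-ordered) logit.

    u       : (M,) array of utilities
    ranking : list of alternative indices, best first
    """
    ll = 0.0
    remaining = list(ranking)
    for best in ranking[:-1]:          # the last remaining alternative is implied
        v = u[remaining]
        m = v.max()                    # log-sum-exp with a stability shift
        ll += u[best] - m - np.log(np.exp(v - m).sum())
        remaining.remove(best)
    return ll

# Example: four alternatives ranked 3 > 0 > 1 > 2
u = np.array([1.2, 0.3, -0.5, 2.0])
print(exploded_logit_loglik(u, [3, 0, 1, 2]))
```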

  11. Revealed Preference Ratings • Stated values for selected and perhaps also unselected alternatives • Providing 0 to 10 score with associated descriptors: 10 = excellent; 5 = reasonable; 0 = terrible • Attribute values determined separately • Direct utility measure (scaled?) $R^g_m = \theta^g \left[ \sum_k \alpha_{m,k} x_{m,k} + \beta_m \right] + \beta^g_m$

  12. Revealed Preference Ratings • Regression estimation provides $R^g_m = \sum_k \alpha'^{g}_{m,k} x_{m,k} + \beta'^{g}_m$ with $\alpha'^{g}_{m,k} = \theta^g \alpha_{m,k}$ and $\beta'^{g}_m = \theta^g \beta_m + \beta^g_m$
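
A hedged sketch of how such a ratings regression could be set up (the presentation reports using MINITAB; the Python below is illustrative only). For brevity it treats the sensitivity coefficients as generic across alternatives; alternative-specific $\alpha_{m,k}$ would require interaction columns.

```python
import numpy as np

def fit_ratings(X, alt_index, ratings, n_alts):
    """OLS regression of ratings on attributes plus one dummy per alternative,
    i.e. R = sum_k a'_k x_k + b'_m + error; returns (alpha_prime_g, beta_prime_g).

    X         : (N, K) attribute values of the rated alternatives
    alt_index : (N,) integer index of the alternative each rating refers to
    ratings   : (N,) stated 0-10 scores
    """
    dummies = np.eye(n_alts)[alt_index]            # ASC dummy columns (no separate intercept)
    design = np.hstack([X, dummies])
    coefs, *_ = np.linalg.lstsq(design, ratings, rcond=None)
    return coefs[: X.shape[1]], coefs[X.shape[1]:]
```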

  13. Stated Preference Ratings • Stated values for each of a set of alternatives • Providing 0 to 10 score with associated descriptors: 10 = excellent; 5 = reasonable; 0 = terrible • Attribute values indicated • Provides verification of rankings • Direct utility measure (scaled?) $R^h_m = \theta^h \left[ \sum_k \alpha_{m,k} x_{m,k} + \beta_m \right] + \beta^h_m$

  14. Stated Preference Ratings • Regression estimation provides $R^h_m = \sum_k \alpha'^{h}_{m,k} x_{m,k} + \beta'^{h}_m$ with $\alpha'^{h}_{m,k} = \theta^h \alpha_{m,k}$ and $\beta'^{h}_m = \theta^h \beta_m + \beta^h_m$

  15. Estimation Testbed • Specify true parameter values ($\alpha_{m,k}$ and $\beta_m$) • Generate synthetic observations • Assume attribute values and error distributions • Sample to get specific error values • Calculate utility values using attribute values, true parameter values and error values • Develop RP choice observations and SP ranking observations using utility values • Develop RP ratings observations and SP ratings observations by scaling utility values to fit within a 0 to 10 range • Test estimation techniques in terms of recovering the true parameter values

  16. True Utility Function $U_m = \sum_k \alpha_{m,k} x_{m,k} + \beta_m + e_m$

  17. True Parameter Values (table of true $\alpha_{m,k}$ and $\beta_m$ values not reproduced in this transcript)

  18. Attribute Values sampled from $N(\mu_{m,k}, \sigma_{m,k})$, with the table of $\mu_{m,k}$ and $\sigma_{m,k}$ values not reproduced in this transcript

  19. Error Values • Sampled from $N(\mu = 0, \sigma_m)$ • $\sigma_m$ varies by observation type: • RP Choice: $\sigma_m = \sigma^r_m = 2.4$ • SP Rankings: $\sigma_m = \sigma^s_m = 1.5$ • RP Ratings: $\sigma_m = \sigma^g_m = 2.1$ • SP Ratings: $\sigma_m = \sigma^h_m = 1.8$

  20. Generated Synthetic Samples • Each of the 4 observation types • 7 alternatives for each observation ($m = 1, \dots, 7$) • A set of 15,000 observations per observation type • In some cases, subsets of the alternatives were considered, with overlap across observation types, as indicated below
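
A hedged Python sketch of the generation procedure described on slides 15-20. The true-parameter and attribute-distribution tables are not reproduced in this transcript, so those values below are placeholders; only the observation count, the number of alternatives, and the error standard deviations come from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, N = 7, 2, 15_000                     # 7 alternatives, 15,000 observations (slide 20); K is a placeholder
alpha = rng.uniform(-1.0, -0.2, (M, K))    # placeholder true sensitivities (slide 17 table not reproduced)
beta = rng.uniform(-2.0, 2.0, M)           # placeholder true ASCs
mu_x, sigma_x = 2.0, 1.0                   # placeholder attribute mean / spread (slide 18 table not reproduced)

sigma_err = {"rp_choice": 2.4, "sp_rank": 1.5, "rp_rate": 2.1, "sp_rate": 1.8}   # slide 19

def synthesize(obs_type):
    """Return attribute values plus the observation implied by the true utilities."""
    x = rng.normal(mu_x, sigma_x, (N, M, K))             # sampled attribute values
    e = rng.normal(0.0, sigma_err[obs_type], (N, M))     # sampled error values
    u = (alpha * x).sum(axis=2) + beta + e               # true utilities with error (slide 16)
    if obs_type == "rp_choice":
        return x, u.argmax(axis=1)                       # chosen (best) alternative
    if obs_type == "sp_rank":
        return x, np.argsort(-u, axis=1)                 # full ranking, best first
    lo, hi = u.min(), u.max()
    return x, 10.0 * (u - lo) / (hi - lo)                # ratings: utilities scaled into 0-10 (slide 15)
```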

  21. Testbed Estimations • RP Choice • SP Rankings • Joint RP/SP Data • Ratings • Combined RP/SP Data and Ratings

  22. RP Choice • Used ALOGIT software • Set $\beta'^{r}_{m=1} = 0$ to avoid over-specification • Provides: • $\alpha'^{r}_{m,k} = \lambda^r \alpha_{m,k}$ • $\beta'^{r}_m = \lambda^r \beta_m + \beta^r_m$ • Know that $\lambda^r = \pi / (\sqrt{6}\, \sigma^r_m) = 0.534$
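
For reference, the quoted scale values follow from the standard relation between the logit scale and the standard deviation of the error term, using the $\sigma$ values from slide 19 (the SP value reappears on slide 29):

\[
\lambda^r = \frac{\pi}{\sqrt{6}\,\sigma^r_m} = \frac{\pi}{\sqrt{6} \times 2.4} \approx 0.534,
\qquad
\lambda^s = \frac{\pi}{\sqrt{6}\,\sigma^s_m} = \frac{\pi}{\sqrt{6} \times 1.5} \approx 0.855.
\]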

  23. RP Choice estimation results (table not reproduced): $\rho^2_0 = 0.1834$, $\rho^2_c = 0.6982$

  24. RP Choice: selection frequencies and ASC estimates (chart not reproduced)

  25. RP Choice 2: selection frequencies and ASC estimates (chart not reproduced)

  26. RP Choice 2 estimation results (table not reproduced): $\rho^2_0 = 0.3151$, $\rho^2_c = 0.1683$

  27. RP Choice 3: selection frequencies and ASC estimates (chart not reproduced)

  28. RP Choice 3 estimation results (table not reproduced): $\rho^2_0 = 0.1852$, $\rho^2_c = 0.1736$

  29. SP Rankings • Used ALOGIT software • Set $\beta'^{s}_{m=1} = 0$ to avoid over-specification • Provides: • $\alpha'^{s}_{m,k} = \lambda^s \alpha_{m,k}$ • $\beta'^{s}_m = \lambda^s \beta_m + \beta^s_m$ • Know that $\lambda^s = \pi / (\sqrt{6}\, \sigma^s_m) = 0.855$

  30. SP Rankings • More information with the full ranking • Also confirm against the RP results above • A 'ranking version' of the estimation is available • Estimate using the full ranking

  31. RP Rankings: Estimates vs True Values with 15,000 observations (scatter plot of estimated against observed parameter values not reproduced)

  32. SP vs RP Rankings • The ASCs are translated en bloc to some extent

  33. SP Rankings: Role of $\sigma_{m,k}$ • Impact of changing the $\sigma_{m,k}$ used when synthesizing attribute values • Sampling from $N(\mu_{m,k}, \sigma_{m,k})$ • Different $\sigma_{m,k}$ means different spreads of attribute values • Impacts the relative size of $\sigma^s_m$ • Implications for SP survey design

  34. Attribute Values sampled from $N(\mu_{m,k}, \sigma_{m,k})$, with the table of $\mu_{m,k}$ and $\sigma_{m,k}$ values not reproduced in this transcript

  35. SP Rankings: Role of $\sigma_{m,k}$ • Increasing $\sigma_{m,k}$ improves the estimators • Roughly proportional • Ratio of $\beta_m$ to $\alpha_{m,k}$ maintained • Use $1.00 \cdot \alpha_{m,k}$ in the remaining work here • Implications for SP survey design • More variation in attribute values is better
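
A one-line rationale for the "roughly proportional" improvement, added here rather than taken from the slide: it is the standard linear-regression result, which carries over approximately to logit, that the sampling variance of an estimated sensitivity coefficient falls with the squared spread of the attribute,

\[
\operatorname{Var}\!\left(\hat{\alpha}_{m,k}\right) \;\approx\; \frac{\sigma^2_{\text{error}}}{N\,\sigma^2_{m,k}},
\]

so doubling $\sigma_{m,k}$ roughly halves the standard error of $\hat{\alpha}_{m,k}$, consistent with the implication for SP survey design noted above.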

  36. Joint RP/SP Data • Two basic approaches for $\alpha_{m,k}$ • Sequential (Hensher): first estimate $\alpha'^{s}_{m,k}$ using SP observations; then estimate $\alpha'^{r}_{m,k}$ using RP observations, also forcing the ratios among the $\alpha'^{r}_{m,k}$ to match those obtained first for the $\alpha'^{s}_{m,k}$ • Simultaneous (Ben-Akiva; Morikawa; Daly; Bradley): estimate $\alpha'^{r}_{m,k}$ using the RP observations, the SP observations and $(\lambda^s/\lambda^r)$ all together, where $(\lambda^s/\lambda^r)\,\alpha'^{r}_{m,k}$ is used in place of $\alpha'^{s}_{m,k}$ • Little consensus on the approach for $\beta_m$

  37. Joint RP/SP Data • Used ALOGIT software • Set $\beta'^{s}_{m=1} = 0$ and $\beta'^{r}_{m=1} = 0$ to avoid over-specification • Provides: • $\alpha'^{s}_{m,k} = \lambda^s \alpha_{m,k}$ and $\alpha'^{r}_{m,k} = \lambda^r \alpha_{m,k}$ • $\beta'^{s}_m = \lambda^s \beta_m + \beta^s_m$ and $\beta'^{r}_m = \lambda^r \beta_m + \beta^r_m$ • $\lambda^r/\lambda^s$ • Know that $\lambda^r = 0.855$ and $\lambda^s = 1.166$
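
A hedged sketch of the simultaneous approach in Python (the original estimation used ALOGIT): RP and SP observations share one set of sensitivity coefficients, an SP/RP scale ratio multiplies the SP systematic utility, and constants are data-source specific with the first one in each source fixed at zero. For brevity the SP side uses only the best-ranked alternative rather than the full exploded ranking; all names and shapes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def joint_loglik(params, x_rp, y_rp, x_sp, y_sp, M, K):
    """Joint RP/SP logit log-likelihood with shared coefficients and an SP/RP scale ratio."""
    a = params[:K]                       # shared sensitivity coefficients (in RP scale)
    b_rp = np.r_[0.0, params[K:K + M - 1]]             # RP constants, first fixed at 0
    b_sp = np.r_[0.0, params[K + M - 1:K + 2 * (M - 1)]]  # SP constants, first fixed at 0
    mu = params[-1]                      # scale ratio lambda_s / lambda_r

    def ll(x, y, b, scale):
        u = scale * (x @ a) + b          # x: (N, M, K) -> utilities (N, M)
        u = u - u.max(axis=1, keepdims=True)
        logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
        return logp[np.arange(len(y)), y].sum()

    return ll(x_rp, y_rp, b_rp, 1.0) + ll(x_sp, y_sp, b_sp, mu)

def fit_joint(x_rp, y_rp, x_sp, y_sp, M, K):
    p0 = np.zeros(K + 2 * (M - 1) + 1)
    p0[-1] = 1.0                         # start the scale ratio at 1
    res = minimize(lambda p: -joint_loglik(p, x_rp, y_rp, x_sp, y_sp, M, K), p0, method="BFGS")
    return res.x
```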

  38. Joint RP/SP Ranking Estimation for the full set of RP and SP 15,000 observations (7 alternatives for each): scatter plot of estimated against observed parameter values not reproduced

  39. Joint RP/SP Ranking Estimation with 15,000 RP observations for Alternatives 1-4 and 15,000 SP observations for Alternatives 4-7: scatter plot of estimated against observed parameter values not reproduced

  40. RP Ratings • Two potential interpretations of ratings • The value provided is a (scaled?) direct utility • The value provided is 10 × the probability of selection • Issue of reference • 'excellent' in terms of other people's travel • 'excellent' relative to the other alternatives for the respondent specifically • Related to the interpretation above • Here: use the direct utility interpretation, so the reference is in terms of other people's travel

  41. RP Ratings • Used MINITAB MLE • Provides: • $\alpha'^{g}_{m,k} = \theta^g \alpha_{m,k}$ • $\beta'^{g}_m = \theta^g \beta_m + \beta^g_m$

  42. Estimation of Plotted RP Ratings Values • $\theta^g$ is found by minimizing the mean square error between the estimated sensitivities ($\theta^g \alpha_{m,k}$) and the true values $\alpha_{m,k}$ • The estimated values for $\beta_m$ are then found using $(\beta'^{g}_m - \beta^{g}_{m\,\min})/\theta^g$ with the above-determined value for $\theta^g$
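
A hedged sketch of this recovery step (Python, illustrative only). The scale $\theta^g$ has a closed-form least-squares solution when the true sensitivities are known, as they are in the testbed; the constant recovery follows one reading of the slide's $(\beta'^{g}_m - \beta^{g}_{m\,\min})/\theta^g$, interpreting the subtracted term as the smallest estimated constant, which is not fully clear from the transcript.

```python
import numpy as np

def recover_theta(alpha_prime_g, alpha_true):
    """Scale factor minimizing || alpha_prime_g - theta * alpha_true ||^2 (closed form)."""
    a = np.ravel(alpha_true)
    b = np.ravel(alpha_prime_g)
    return float(a @ b / (a @ a))

def recover_beta(beta_prime_g, theta_g):
    """Recover ASCs up to a common shift: subtract the smallest estimated constant, then unscale.
    (One reading of the slide's (beta'_g,m - beta_g,m min) / theta_g.)"""
    bp = np.asarray(beta_prime_g, dtype=float)
    return (bp - bp.min()) / theta_g
```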

  43. SP Ratings • Used MINITAB • Provides: • $\alpha'^{h}_{m,k} = \theta^h \alpha_{m,k}$ • $\beta'^{h}_m = \theta^h \beta_m + \beta^h_m$
