
Obtaining Better Data in Stated Preference Choice-Based Conjoint






Presentation Transcript


  1. Obtaining Better Data in Stated Preference Choice-Based Conjoint Bryan Orme and Rich Johnson, Sawtooth Software, Inc. May, 2007

  2. Example CBC Question

  3. CBC’s Popularity • CBC (Choice-Based Conjoint) is the most commonly used stated preference modeling technique among market researchers. • It has advantages over ratings-based conjoint methods: • Choices are more realistic • Models can be more sophisticated: (alternative-specific effects, availability, cross-effects) • Practitioners have felt that predictions are more accurate • Rigorous error theory behind MNL
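The "rigorous error theory behind MNL" refers to the multinomial logit choice rule, under which choice probabilities are a softmax over concept utilities. A minimal sketch (the utility values are hypothetical, purely for illustration):

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities: P(i) = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three concepts with hypothetical total utilities
probs = mnl_probs([1.2, 0.4, -0.5])
print([round(p, 3) for p in probs])  # [0.613, 0.275, 0.112]
```

The probabilities always sum to one, and higher-utility concepts receive proportionally higher choice shares.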

  4. CBC: Typical Practice • CBC designs usually feature: • Level Balance (each level shown an equal number of times) • Orthogonality (independence of the attributes) • Minimal Overlap (avoid showing the same level more than once within a choice task whenever possible) • Such designs assume compensatory behavior and support near-optimally efficient estimation of main effects. • Typical estimation methods are hierarchical Bayes (HB), Latent Class, or aggregate logit.
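Two of these design properties can be checked mechanically. A sketch over a toy design (the attribute and level names are illustrative, not from the study):

```python
from collections import Counter

def level_counts(design, attr):
    """Level balance diagnostic: how often each level of an attribute
    appears across all concepts in all tasks of a design."""
    return Counter(concept[attr] for task in design for concept in task)

def has_minimal_overlap(task, attr):
    """True if no level of `attr` repeats within a single choice task."""
    levels = [concept[attr] for concept in task]
    return len(levels) == len(set(levels))

# Hypothetical two-task design, three concepts each, one 'brand' attribute
design = [
    [{"brand": "Acer"}, {"brand": "Dell"}, {"brand": "HP"}],
    [{"brand": "Toshiba"}, {"brand": "Acer"}, {"brand": "Dell"}],
]
print(level_counts(design, "brand"))
print(all(has_minimal_overlap(t, "brand") for t in design))  # True
```

A perfectly level-balanced design would show equal counts for every level; here the toy design is only approximately balanced.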

  5. Minimal Overlap: Example • Consider Memory, with four levels (512MB, 1GB, 2GB, and 4GB). • The task below reflects level overlap, and would contribute less than full information with respect to Memory.

  6. Minimal Overlap: Discussion • Minimal Overlap is good for statistical efficiency. • But it is bad in the sense that it encourages simplification behavior and shallow processing of information. • Consider a person who requires a Dell laptop (non-compensatory behavior). • There is only one possible product to choose from in each task (or the “None”)!

  7. Other Problems with CBC • The interview doesn’t seem very focused to the respondent. Product concepts are often “all over the map” and sometimes unrealistic (e.g., bad features at a high price). • Respondents often answer very quickly (16 seconds per task for internet respondents answering the example CBC tasks shown earlier). • How can they possibly be giving thoughtful answers that reflect how they’d behave in the real world? • CBC questionnaires typically involve 12 or more repeated tasks. Many respondents find this boring and tedious.

  8. Evidence of Simplification Behavior • Gilbride et al. (2004) and Hauser et al. (2006) showed evidence that choices could be fit by non-compensatory models in which only a few attribute levels are taken into account per respondent. • Our experience with our current CBC project supports these conclusions.

  9. Our View • We believe CBC is an effective method that has been of genuine value to marketing researchers, but especially in cases with about five or more attributes, it can be improved. • Current brand-package-price research with standard CBC seems to be working well in our opinion. • We believe the greatest need at this point is not for better models, but rather for better data.

  10. Our Objectives for This Research • Provide a more stimulating experience that will encourage more engagement in the interview than conventional CBC questionnaires. • Mimic actual shopping experiences, which may involve non-compensatory as well as compensatory behavior. • Screen a wide variety of product concepts, but focus on a subset of most interest to the respondent. • Provide more information to estimate individual partworths than is obtainable from conventional CBC analysis.

  11. The Adaptive-CBC Questionnaire • Involves three core stages: • BYO (“Build Your Own” Exercise) • Screening Exercise • Choice Tasks • Facilitated by an attractive interviewer, who seems to be engaged in a shopping experience with the respondent.

  12. Introductory Screen

  13. BYO Exercise

  14. Purpose of BYO Exercise • Based on the BYO-constructed product concept, we generate a pool of “near-neighbor” concepts for the respondent to consider. • Each new concept is generated by altering 2 to 4 attributes of the BYO concept, following a near-orthogonal design. • The price for each concept is determined by summing the prices for its features, then varying the result by +/- 7% or 20%. • We used a pool of 40 concepts for each respondent in this study.
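A sketch of this pool-generation step. The feature names, feature prices, and base price are hypothetical, and simple random sampling stands in for the near-orthogonal plan described above:

```python
import random

# Hypothetical features and feature prices -- illustrative, not the study's design.
FEATURE_PRICES = {
    "brand":   {"Acer": 0, "Dell": 0, "Toshiba": 50, "HP": 100},
    "memory":  {"512MB": 0, "1GB": 40, "2GB": 90, "4GB": 180},
    "screen":  {"13in": 0, "15in": 60, "17in": 120},
    "battery": {"4-cell": 0, "6-cell": 30, "9-cell": 70},
}
BASE_PRICE = 500  # assumed base price before feature add-ons

def near_neighbor(byo):
    """Alter 2 to 4 attributes of the BYO concept to create a near-neighbor.
    (The study drew alterations from a near-orthogonal plan; plain random
    sampling is used here for brevity.)"""
    concept = dict(byo)
    for attr in random.sample(list(FEATURE_PRICES), random.randint(2, 4)):
        concept[attr] = random.choice(
            [lvl for lvl in FEATURE_PRICES[attr] if lvl != concept[attr]]
        )
    return concept

def concept_price(concept):
    """Summed pricing: base price plus feature prices, varied by +/-7% or +/-20%."""
    total = BASE_PRICE + sum(FEATURE_PRICES[a][concept[a]] for a in concept)
    return round(total * (1 + random.choice([-0.20, -0.07, 0.07, 0.20])))

byo = {"brand": "Dell", "memory": "2GB", "screen": "15in", "battery": "6-cell"}
pool = [near_neighbor(byo) for _ in range(40)]  # the study used a 40-concept pool
```

Every concept in the pool differs from the BYO concept on at least 2 and at most 4 attributes, which keeps the screened concepts in the respondent's region of interest.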

  15. Screening Exercise

  16. Purpose of Screening Exercise • Identify (by looking across past responses) possible non-compensatory rules (cutoffs) that the respondent may be using. • Identify product concepts that are highly relevant to this respondent, to take forward to a final evaluation.

  17. Must Haves/Unacceptables • After respondent has evaluated a few screens of concepts, we scan those answers and suggest possible non-compensatory rules:
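One simple way to suggest such rules is to flag levels that have appeared several times but never in any concept the respondent marked as a possibility. A sketch, with an illustrative threshold and hypothetical responses:

```python
def candidate_unacceptables(evaluated, min_seen=3):
    """Suggest levels the respondent may find unacceptable: levels seen at
    least `min_seen` times but never in a concept marked 'a possibility'."""
    seen, in_accepted = {}, set()
    for concept, is_possibility in evaluated:
        for attr_level in concept.items():
            seen[attr_level] = seen.get(attr_level, 0) + 1
            if is_possibility:
                in_accepted.add(attr_level)
    return [al for al, n in seen.items() if n >= min_seen and al not in in_accepted]

# Hypothetical respondent who rejected every concept containing Acer
evaluated = [
    ({"brand": "Acer", "memory": "1GB"}, False),
    ({"brand": "Acer", "memory": "4GB"}, False),
    ({"brand": "Acer", "memory": "2GB"}, False),
    ({"brand": "Dell", "memory": "1GB"}, True),
]
print(candidate_unacceptables(evaluated))  # [('brand', 'Acer')]
```

A candidate rule is only a suggestion; the respondent still confirms or declines it before any concepts are removed from the pool.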

  18. Paring the Pool • If respondent confirms a non-compensatory decision rule, we scan all not-yet-evaluated products in the pool, and automatically mark as “not a possibility” any concepts with the unacceptable level(s). • In our project, an average of 8 of the 40 products in the pool were deleted due to non-compensatory rules.
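Once a rule is confirmed, paring the pool is a straightforward filter. A minimal sketch with hypothetical concepts:

```python
def pare_pool(pool, unacceptables):
    """Split the not-yet-evaluated pool: any concept containing an
    unacceptable level is automatically marked 'not a possibility'."""
    kept, dropped = [], []
    for concept in pool:
        hit = any(concept.get(attr) == level for attr, level in unacceptables)
        (dropped if hit else kept).append(concept)
    return kept, dropped

pool = [
    {"brand": "Acer", "memory": "2GB"},
    {"brand": "Dell", "memory": "4GB"},
    {"brand": "HP", "memory": "512MB"},
]
kept, dropped = pare_pool(pool, [("brand", "Acer")])
print(len(kept), len(dropped))  # 2 1
```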

  19. Choice Tasks Section • Products marked as “possibilities” in the Screening Exercise are taken into a choice tournament to identify the overall winner:

  20. Optional Calibration Section • Additional Likert-scale rating questions (we used 5) can be asked to estimate a “None” utility threshold.

  21. Empirical Study • N=600 Calibration Respondents. • 300 standard CBC (18 tasks; 4 concepts + None) • 300 Adaptive CBC (ACBC) • N=900 Holdout Respondents. • Completing a 12-task CBC exercise (4 concepts without None) • Identical attributes and levels as before • Data collected using “Opinion Outpost Internet Panel”.

  22. Qualitative Results • Respondents took longer to complete the ACBC exercise (ACBC=11.6 minutes; CBC=5.4 minutes). • But, given how respondents speed through CBC surveys, is it a bad thing that they spent twice as long with ACBC? • Despite the increased interview length, relative to CBC respondents, ACBC respondents said their interview (p<.05)… • Was an overall better interview experience • Presented more realistic product concepts • Was less boring • Offered a better format to reflect what they’d do if actually buying a laptop • Made them more likely to want to slow down and make careful choices

  23. Estimating Partworths for ACBC • We used two methods to estimate partworths: • HB • Monotone regression (a non-parametric technique that produces partworths that can predict the rank-order relationships suggested by the choices) • Both were successful, but HB even more so. • We employed HB estimation for the CBC respondents as well.

  24. Partworth Utility Estimation • Data from each section in ACBC can be rearranged in real (or synthetic) choice sets: • BYO: one task per attribute (choice of one of K levels at given prices) • Screening: one task per chosen concept (each task reflecting the chosen concept preferred to all rejected concepts) • Choice Tasks: per standard CBC practice • We recognize that we are pooling choice sets that involve different error levels, and look to academic HB experts to propose a more appropriate handling of this.

  25. BYO Exercise (Review)

  26. X Matrix for BYO Task • We format the data with as many “tasks” as attributes (except Price). • For example, Brand has four levels: • Acer +$0 • Dell +$0 • Toshiba +$50 • HP +$100 • We code information on the chosen brand as a single choice task with four alternatives (Brand is effects-coded, with HP as the omitted reference level; the other 8 attributes are all zero):

   Brand Effects    Other 8 Attributes    Price
    1   0   0        0 … 0                   0
    0   1   0        0 … 0                   0
    0   0   1        0 … 0                  50
   -1  -1  -1        0 … 0                 100
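The coding above can be sketched as follows; the zero columns for the other 8 attributes are left out for brevity:

```python
def byo_brand_task(levels, prices, chosen):
    """One BYO 'task': one row per brand alternative, effects-coded
    (the last level is the omitted reference, coded -1), plus a price column.
    Returns the design rows and the index of the chosen alternative."""
    rows = []
    for i, level in enumerate(levels):
        if i < len(levels) - 1:
            effects = [1 if j == i else 0 for j in range(len(levels) - 1)]
        else:
            effects = [-1] * (len(levels) - 1)
        rows.append(effects + [prices[level]])
    return rows, levels.index(chosen)

X, choice = byo_brand_task(
    ["Acer", "Dell", "Toshiba", "HP"],
    {"Acer": 0, "Dell": 0, "Toshiba": 50, "HP": 100},
    chosen="Dell",
)
for row in X:
    print(row)
# [1, 0, 0, 0]
# [0, 1, 0, 0]
# [0, 0, 1, 50]
# [-1, -1, -1, 100]
```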

  27. Screening Exercise (Review)

  28. X Matrix for Screening Questions • We have choices for each respondent on 40 product concepts: “Chosen” or “Rejected.” • Assume respondent indicates 20 of the 40 product concepts are “possibilities.” • We code as if the respondent had answered 20 standard CBC tasks, each with 21 alternatives (each chosen concept versus all 20 rejected concepts).
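A sketch of this coding, using placeholder strings in place of real concept rows:

```python
def screening_tasks(chosen, rejected):
    """Code the Screening Exercise as synthetic CBC tasks: one task per
    chosen concept, pitting it (at index 0) against every rejected concept."""
    return [([c] + list(rejected), 0) for c in chosen]

# 20 chosen and 20 rejected concepts (placeholders for real concept rows)
chosen = [f"chosen_{i}" for i in range(20)]
rejected = [f"rejected_{i}" for i in range(20)]
tasks = screening_tasks(chosen, rejected)
print(len(tasks), len(tasks[0][0]))  # 20 21
```

Each synthetic task thus asserts that the chosen concept was preferred to all 20 rejected concepts, matching the 20-tasks-of-21-alternatives coding described above.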

  29. Choice Tasks Section (Review) • Products marked as “possibilities” from the pool are taken into a choice tournament to identify the overall winner:

  30. X Matrix for Choice Tasks • All concepts marked as “possibilities” in the Screening Section (up to a maximum of 20) are taken forward to a Choice Tasks “tournament.” • The concepts are shown in triples. • Winning concepts are carried forward to subsequent rounds of the “tournament.” • Since each task eliminates two concepts, it takes about t/2 tasks to identify the overall winner from among t concepts. • Choice tasks are coded the same way as traditional CBC tasks (exploded to account for first and second choice within each triple).
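The tournament mechanics can be sketched as below, where `ask` stands in for showing a triple to the respondent. Because each full triple eliminates two concepts, reducing t concepts to one winner takes about t/2 tasks:

```python
def tournament(concepts, ask):
    """Run a choice 'tournament' in triples; winners carry forward until one
    overall winner remains. Returns the winner and the number of tasks asked."""
    pool, n_tasks = list(concepts), 0
    while len(pool) > 1:
        shown, pool = pool[:3], pool[3:]   # final round may show only a pair
        pool.append(ask(shown))            # winner carries forward
        n_tasks += 1
    return pool[0], n_tasks

# With a deterministic 'respondent' who always picks the highest-utility concept:
winner, n_tasks = tournament(range(20), ask=max)
print(winner, n_tasks)  # 19 10
```

With 20 possibilities, the tournament resolves in 10 tasks, consistent with the t/2 rule above.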

  31. Quantitative Results • ACBC contains about triple the information content of the CBC interview with 18 tasks. • ACBC performed better than CBC on: • Hit rates within the same respondents (internal reliability) • Share predictions for choices made by holdout respondents (external validity) • ACBC was at its best (vis-à-vis CBC) at predicting respondents who took the most time completing the holdout questionnaire (and presumably giving more careful answers).

  32. Methods Bias • The fact that ACBC partworths could predict CBC-formatted holdouts even better than partworths estimated from CBC calibration tasks is indeed impressive. • ACBC beat CBC at CBC’s own game • Methods bias was strongly in favor of CBC.

  33. Is Simplification a Problem? • Most choice researchers admit that task simplification at the individual level must exist in complex CBCs. • Many have believed that the aggregate effect of hundreds of respondents (each employing different simplification strategies) should counteract this problem and fairly accurately reflect the more careful information processing of real-world decisions. • Our results suggest that respondents who take more time to complete CBC questionnaires provide different aggregate shares, and that a data collection technique that encourages greater depth of processing may produce more accurate share predictions.

  34. More Information • A live example of the ACBC interview described here may be found at: www.sawtoothsoftware.com/test/byo/byologn.htm • A working paper with much more detail than this presentation is available upon request (bryan@sawtoothsoftware.com).
