
Use of eye-tracking for studying survey response processes



  1. Use of eye-tracking for studying survey response processes Mirta Galesic Roger Tourangeau Fred Conrad Mick Couper September 10, 2009

  2. Why eye-tracking? • To get another perspective on results of our web experiments • To resolve some ambiguities in the data • To test some emerging hypotheses • To gain additional insight into the answering process • In Fall 2005: small pretest (N=24) • This year: larger study (N=117) • These are the first results

  3. Acknowledgments • We are grateful to Scott Fricker, Duane Gilbert, Ting Yan, and Cong Ye for their help in conducting this study. • This research was supported by a grant from the National Institute of Child Health and Human Development (R01 HD041386-01A1) to Roger Tourangeau, Mick Couper, Fred Conrad, and Reg Baker. The National Institute of Child Health and Human Development is not responsible for the conclusions presented here.

  4. Eye-tracking technology TOBII: ClearView analysis software + hardware • Unobtrusive eye-tracking • Uses near-infrared beams and video images to capture the respondent’s eye movements • No need for helmets, lenses... • Easy calibration • Good accuracy • Frame rate: 50 Hz • Margin of error: +/- 3 ms (time), +/- 0.5-1 degree (position) • Data: fixations and durations
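The slides report only the outputs of the ClearView software (fixations and durations). For readers unfamiliar with how such outputs are derived, the following is a minimal sketch of a dispersion-threshold (I-DT style) fixation filter over raw 50 Hz gaze samples. The function name, thresholds, and input layout are illustrative assumptions, not the actual processing performed by TOBII/ClearView.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=30.0, min_duration=0.1):
    """Group raw gaze samples into fixations with a dispersion threshold (I-DT).

    x, y : gaze coordinates in pixels, one sample per frame (~50 Hz)
    t    : sample timestamps in seconds
    Returns a list of (start_time, duration, mean_x, mean_y) tuples.
    Thresholds are illustrative, not ClearView's actual settings.
    """
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    fixations, start, n = [], 0, len(t)
    while start < n - 1:
        end = start + 1
        # Grow the window while the samples stay within the dispersion limit
        while end < n:
            wx, wy = x[start:end + 1], y[start:end + 1]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            end += 1
        duration = t[end - 1] - t[start]
        if duration >= min_duration:
            fixations.append((t[start], duration,
                              float(x[start:end].mean()), float(y[start:end].mean())))
            start = end          # continue after the fixation
        else:
            start += 1           # window too short to count: slide forward
    return fixations
```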

  5. Procedure • Lab experiment • Sample • N=117 – recruited through advertisements on campus, ads in local newspapers, flyers in libraries, at bus stops, etc. • Age: 48% 18-24, 34% 25-34, 17% 35-64 • Sex: 50% male, 50% female • Education: most have at least some college • Most (80%) use the Internet every day; 59% consider themselves advanced or expert users • Most (77%) had already participated in at least one Web survey • Questionnaire • A combination of previously used web experiments on visual context effects, response order, question format, and definitions

  6. 1. Response order effects

  7. Rationale • Order of response options can affect the results • Possible underlying mechanisms... • Diminishing attention to later options—respondents may not even read all the options • Low threshold for acceptance—consider each option in turn for acceptability; stop when an answer is good enough (“satisficing” in the original sense) • Either way, time spent reading the first options should be longer than time spent reading the last options on the list • This study: several questions with varying order of response options

  8. Desirable qualities of a child Which one of these qualities is the most desirable for a child to have? • That he has good manners • That he tries hard to succeed • That he is honest • That he is neat and clean • That he has good sense and sound judgment • That he has self-control • That he acts like a boy or she acts like a girl • That he gets along well with other children • That he obeys his parents well • That he is responsible • That he is considerate of others • That he is interested in how and why things happen (Versions A and B, with different option orders)

  9. Other questions: Crime, Police, Morality Q7. Some say individuals are more to blame than social conditions for crime and lawlessness in this country. Others say the contrary—social conditions are more to blame than individuals for crime and lawlessness in this country. Which one of these two statements comes closest to your opinion on this issue? • Individuals are more to blame. • Social conditions are more to blame. Q8. Next, we would like you to think about the amount of trust you have that the police officers in your area will always do what is right. Would you say you have— • A great deal of trust • A moderate amount of trust • Equal amounts of trust and distrust • A moderate amount of distrust • A great deal of distrust Q9A. In your opinion, should government (federal, state, or local) have some responsibility for preventing the breakdown of morality, or should private organizations and individuals be entirely responsible for preventing the breakdown of morality? • Government is responsible • Private organizations and individuals are responsible

  10. Different response styles • Considering all options and choosing the best answer: web14 • Selecting the first option, then going through the list and updating the response: web03 • Reading only part of the list, then selecting the answer: web01

  11. Hot spot analysis: answer in the first half

  12. Hot spot analysis: answer in the second half

  13. Fixations at response options – Top vs. Bottom half • For all questions: more fixations in the top part…

  14. Time spent looking – Top vs. Bottom half • ...and more time spent looking at the options in the top part. Notes: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001). T-tests calculated on log-transformed data.
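As a concrete illustration of the analysis described in the notes (t-tests on log-transformed, click-corrected gaze times), here is a minimal sketch in Python. The function and its inputs are hypothetical; it assumes one top-half and one bottom-half gaze time per respondent, with the 200 msec click-time correction already applied.

```python
import numpy as np
from scipy import stats

def compare_top_vs_bottom(top_ms, bottom_ms):
    """Paired t-test on log-transformed per-respondent gaze times.

    top_ms, bottom_ms : gaze time (msec) on the top and bottom half of the
    option list, one value per respondent, assumed already corrected for
    the 200 msec click time.
    """
    log_top = np.log(np.asarray(top_ms, dtype=float))
    log_bottom = np.log(np.asarray(bottom_ms, dtype=float))
    return stats.ttest_rel(log_top, log_bottom)   # (t statistic, p value)
```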

  15. Proportion of time and answers in the first half. Notes: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001). T-tests calculated on log-transformed data.

  16. Relationship between gazing time and answers • Respondents who spend more time looking at the top part are more likely to choose an answer from that part: Percentage of answers in the first half, by time spent looking at the first half. χ²(4) = 22.59, p < .01. Note: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001).
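A sketch of the kind of cross-tabulation behind the χ²(4) statistics on this and the next three slides, assuming the per-respondent share of gaze time on the first half is cut into five bins (hence 4 degrees of freedom against a binary answer location). Variable names and the binning are illustrative, not taken from the original analysis.

```python
import pandas as pd
from scipy import stats

def gaze_time_vs_answer(pct_time_first_half, answered_first_half, n_bins=5):
    """Chi-square test of answer location by binned gaze-time share.

    pct_time_first_half : per-respondent share of gaze time on the first half
    answered_first_half : per-respondent boolean, answer chosen from first half
    With 5 bins and a binary outcome the table has (5-1)*(2-1) = 4 df.
    """
    bins = pd.qcut(pct_time_first_half, q=n_bins, labels=False, duplicates="drop")
    table = pd.crosstab(bins, answered_first_half)
    chi2, p, dof, _ = stats.chi2_contingency(table)
    return chi2, p, dof
```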

  17. Relationship between gazing time and answers Percentage of answers in the first half, by time spent looking at the first half. χ²(4) = 26.79, p < .01. Note: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001).

  18. Relationship between gazing time and answers Percentage of answers in the first half, by time spent looking at the first half. χ²(4) = 33.66, p < .01. Note: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001).

  19. Relationship between gazing time and answers Percentage of answers in the first half, by time spent looking at the first half. χ²(4) = 8.27, p < .01. Note: Times corrected for the time needed to click on an answer (200 msec) (cf. Kieras, 2001).

  20. 2. Visual format of response scale: Radio buttons vs. Drop down lists

  21. Rationale • Previous studies showed that people choose initially visible response options more often than options they need to uncover with additional mouse clicks • For example, in some drop-down lists, several options are initially visible; the others appear only after an additional click

  22. Rationale • Typical result: options at the top are chosen more often

  23. Experimental design • 3 different question formats: radio buttons, drop-down list with 5 options initially visible, and drop-down list with 0 options initially visible • Two questions: on breakfast cereal and automobiles; 10 response options each • Order of response options also systematically varied

  24. Question formats 1. RADIO BUTTONS 2. DROP DOWN LIST – 5 OPTIONS INITIALLY VISIBLE 3. DROP DOWN LIST – NO OPTIONS INITIALLY VISIBLE

  25. Effects of question format: % of time spent on top part • If the top options are the only ones shown initially, respondents look at them much longer. Overall F(2,105) = 10.76, p < .01. The drop-down list with 5 options initially visible differs significantly from the other formats, and from 50%.
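A minimal sketch of the tests reported on this slide: a one-way ANOVA across the three formats and a one-sample test of the "5 options initially visible" drop-down group against chance (50%). The function and input layout are assumptions for illustration, not the original analysis code.

```python
from scipy import stats

def format_effect_on_top_gaze(radio, drop5, drop0):
    """radio, drop5, drop0 : per-respondent percentage of gaze time spent on
    the top half, one array per question format."""
    anova = stats.f_oneway(radio, drop5, drop0)    # e.g. F(2, 105)
    vs_chance = stats.ttest_1samp(drop5, 50.0)     # drop-down-5 group vs. 50%
    return anova, vs_chance
```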

  26. Effects of question format: % of answers in the top part • Top options (especially when they are the only ones shown initially) are chosen somewhat more often. For both questions, the overall χ² is n.s.

  27. Relationship between gazing time and answers Percentage of answers in the top half, by time spent looking at the top half. χ²(4) = 29.28, p < .01

  28. Relationship between gazing time and answers Percentage of answers in the top half, by time spent looking at the top half. χ²(4) = 18.70, p < .01

  29. 3. Definitions

  30. Why definitions • Survey concepts are not always understood as intended – definitions can be useful • But – do people read the definitions? • Rarely: only 14%-22% clicked on a link to a definition in recent experiments (Conrad et al., 2005) • Substantially more likely to do so for technical than for ordinary concepts • More likely when the definition is more accessible • Reading the definition may change answers • Conclusions of those studies were based on indirect data: number of requests for definitions, response times

  31. This study • Questions about consumption of 8 food items: Page 1: Fat, Dietary supplements, Grain products, Poultry Page 2: Vegetables, Dairy products, Cholesterol, Calcium • Each item was accompanied by a definition • Presentation of the definitions: • Always on • On mouse roll-over • We measured the time spent looking at each definition
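The slides do not show how gaze time per definition was computed; a common approach is to sum the durations of fixations that land inside each definition's area of interest (AOI). Below is a minimal sketch under that assumption, using a hypothetical bounding-box format.

```python
def gaze_time_in_aoi(fixations, aoi):
    """Total fixation time inside one definition's area of interest.

    fixations : iterable of (start_time, duration, x, y) tuples
                (e.g. output of the fixation filter sketched earlier)
    aoi       : (left, top, right, bottom) bounding box of the definition text
    """
    left, top, right, bottom = aoi
    return sum(duration for _, duration, x, y in fixations
               if left <= x <= right and top <= y <= bottom)
```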

  32. Example of “Always on” definitions:

  33. Example of “Mouse roll-over” definitions:

  34. Some respondents appeared to read the definitions…

  35. … and some didn’t

  36. Reading definitions on the first page

  37. Reading definitions on the second page (the same respondent)

  38. Do they read? • In silent reading, a typical fixation lasts about 225 msec and covers about 8 letters (Rayner, 1998) • Only 54% of the respondents read at least 16 letters (about two words) of the definitions • i.e., spent at least 2 × 225 msec looking at the definitions • Big difference between definitions that were always on and those that were opened by mouse roll-over: • Only 20% of the respondents in the mouse roll-over condition opened at least one definition, and only 9% kept it open long enough to fixate at least two words • Vs. 96% in the always-on condition who fixated at least two words in any definition
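A small sketch of the reading heuristic described on this slide: a respondent counts as having read at least two words if their total gaze time on the definitions covers at least 16 letters at typical silent-reading rates (225 msec and 8 letters per fixation, per Rayner, 1998). The constants come from the slide; the function itself is illustrative.

```python
FIXATION_MSEC = 225        # typical silent-reading fixation (Rayner, 1998)
LETTERS_PER_FIXATION = 8   # typical letters covered per fixation

def read_at_least(total_definition_gaze_msec, n_letters=16):
    """True if total gaze time on the definitions is long enough to cover
    n_letters (16 letters ~ two words ~ 2 x 225 msec)."""
    required_fixations = n_letters / LETTERS_PER_FIXATION
    return total_definition_gaze_msec >= required_fixations * FIXATION_MSEC
```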

  39. Total gaze time, by format • Total gaze time (for all definitions together) is much longer in the always-on condition: t(63) = 7.93, p < .01, for log-transformed data

  40. Gaze time per definition, by format • Gaze time per definition is also much longer in the always-on condition: t(63) = 5.25, p < .01, for log-transformed data

  41. Reading time and estimates of consumption: Dietary supplements DEFINITION: A multivitamin supplement taken daily is recommended to help insure adequate levels of necessary vitamins and micronutrients. In addition, dietary supplements help protect cells against aging, improve sexual performance and reduce stress, among other benefits. r = - 0.19, p<.01

  42. Reading time and estimates of consumption: Grain products DEFINITION: Bread and foods made with bread, including muffins, French toast, stuffing, popcorn, and pre-sweetened cereals. Include also pasta, rice, and drinks such as beer. r = 0.20, p<.01
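For completeness, a sketch of the correlations reported on these two slides: a Pearson r between time spent reading a definition and the reported consumption estimate for that item. Names and inputs are hypothetical, and any transformation applied to gaze times in the original analysis is not shown on the slides.

```python
import numpy as np
from scipy import stats

def reading_time_vs_estimate(definition_gaze_msec, reported_consumption):
    """Pearson correlation between per-respondent gaze time on a definition
    and the reported consumption estimate for that item."""
    return stats.pearsonr(np.asarray(definition_gaze_msec, dtype=float),
                          np.asarray(reported_consumption, dtype=float))
```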

  43. Summary RESPONSE OPTIONS • People read from top to bottom; the answer depends on how far down they go • Different answering styles • More time is spent looking at the top part • The more time respondents spend reading the top part, the more likely they are to choose an answer from that part

  44. Summary QUESTION FORMAT • Radio buttons vs. drop-down lists • When only some options are initially shown, respondents • read them longer and • are more likely to select their answer among them DEFINITIONS • Mouse roll-over definitions are rarely read • When read, they can affect results

  45. The End
