Evaluating Search Interfaces


Presentation Transcript


  1. Evaluating Search Interfaces • Marti Hearst, UC Berkeley • Enterprise Search Summit West, Search UI Design Panel

  2. Evaluating Search Interfaces • This is very hard to do well • First, a recap on iterative design and evaluation. • Then I’ll present some do’s and don’ts.

  3. Interface design is iterative • A repeating cycle of Design, Prototype, and Evaluate.

  4. Discount Testing vs. Formal Testing • Discount testing: fast; a small number of participants (5); test mock-ups and prototypes in addition to finished designs; learn about what doesn’t work, a bit about what does, maybe new good ideas for future iterations. • Formal testing: more time-consuming; need many participants (often still too few); test particular components or principles to be used by others; learn if something is better than something else, and by how much.

  5. Qualitative Semi-Formal Studies • After the design has been mocked up, evaluated, redesigned several times, • Evaluate the system holistically or in parts with a large user base • Watch the participants use the system on their own queries • Use Likert scales to get subjective responses to different features • Find bugs • Find features/tasks that need to be streamlined • Determine next round of useful features • Refine and test again.
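
A minimal sketch of how Likert responses from such a study round might be summarized per feature, assuming a 1–5 scale; the feature names and scores are hypothetical:

```python
# Summarize 1-5 Likert responses per feature from one study round.
# Feature names and scores below are hypothetical placeholders.
from statistics import mean, median

responses = {
    "faceted navigation": [4, 5, 3, 4, 4, 2, 5],
    "query suggestions":  [3, 3, 4, 2, 3, 3, 4],
    "result previews":    [5, 4, 4, 5, 3, 4, 5],
}

for feature, scores in responses.items():
    # Median is the safer summary for ordinal data; mean and range add context.
    print(f"{feature:20s} n={len(scores)} median={median(scores)} "
          f"mean={mean(scores):.2f} range={min(scores)}-{max(scores)}")
```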

  6. Do Use Motivated Participants • Participants need to know and care about the search goal (Jared Spool, UIE.com).

  7. Do Longitudinal Studies • Have people use the system for their own needs for several weeks or months. • Observe changes in behavior and in subjective preferences.
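
One way such longitudinal data might be examined, assuming a simple hypothetical session log of (participant, date, queries issued), is to bucket sessions into study weeks and watch the trend:

```python
# Bucket logged search sessions into study weeks to observe behavior change.
# The log format (participant, session date, queries issued) is hypothetical.
from collections import defaultdict
from datetime import date

study_start = date(2007, 1, 8)  # assumed first day of the study

sessions = [            # toy data standing in for real logs
    ("p1", date(2007, 1, 9), 6),
    ("p1", date(2007, 2, 12), 3),
    ("p2", date(2007, 1, 10), 5),
    ("p2", date(2007, 2, 14), 2),
]

queries_per_week = defaultdict(list)
for participant, day, queries in sessions:
    week = (day - study_start).days // 7
    queries_per_week[week].append(queries)

for week in sorted(queries_per_week):
    counts = queries_per_week[week]
    print(f"week {week}: avg queries/session = {sum(counts) / len(counts):.1f}")

# A falling trend alone is ambiguous (faster success vs. giving up), which is
# why behavioral logs are paired with subjective preference questions.
```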

  8. Do Add New Features Gradually • If you’re doing something new with search, start simple, see what works, then add more features, using additional evaluations as you go.

  9. Beware of Query Sensitivity • In search engine comparisons, variability between queries/tasks can be greater than variability between systems.
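
A rough way to check for this, assuming each system has one score per query (the systems, queries, and scores here are hypothetical), is to compare the across-query spread with the per-query paired difference:

```python
# Compare across-query variability with the between-system difference.
# Systems, queries, and scores are hypothetical placeholders.
from statistics import mean, stdev

scores = {                      # one score per query/task, same query order
    "system A": [0.9, 0.4, 0.8, 0.3, 0.7],
    "system B": [0.8, 0.5, 0.7, 0.4, 0.6],
}

for system, per_query in scores.items():
    print(f"{system}: mean={mean(per_query):.2f} "
          f"across-query stdev={stdev(per_query):.2f}")

# Pair the scores by query, since query difficulty dominates the raw numbers.
diffs = [a - b for a, b in zip(scores["system A"], scores["system B"])]
print(f"mean per-query difference (A-B) = {mean(diffs):+.2f}, "
      f"stdev of differences = {stdev(diffs):.2f}")
# If the across-query spread dwarfs the mean difference, the comparison is
# being driven by query choice rather than by the systems themselves.
```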

  10. Beware of Cool vs. Usable • Some things are eye-catching, but serve best to draw the user in. Will users really like them over time? Or, if they don’t like something at first, will they learn to like it? (rarer)

  11. Do Compare Against a Strong Baseline • Compare your new idea against the best, most popular current solution. • A good test: “How often would you use this system?”
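
If per-participant preferences are collected alongside that question, one simple paired check against the baseline is a sign test; the preference data below is hypothetical, and the test is only one reasonable choice:

```python
# Two-sided sign test on per-participant preferences: new interface vs. baseline.
# The preference data is hypothetical.
from math import comb

# +1 = preferred the new interface, -1 = preferred the baseline
preferences = [+1, +1, -1, +1, +1, +1, -1, +1, +1, +1]

wins = sum(1 for p in preferences if p > 0)
n = len(preferences)

# Probability of a split at least this lopsided if participants were indifferent.
k = max(wins, n - wins)
p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)

print(f"{wins}/{n} preferred the new interface (sign test p = {p_value:.3f})")
```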

  12. Subjective vs. Quantitative Measures • Time to complete the task can be a misleading metric. • Subjective impressions are key for determining search interface success.

  14. Summary • Search evaluation is hard because of huge variations in • Information needs • Searchers’ knowledge and skills • Collection contents • A good strategy is to: • Add a few features at a time, test as you add • Obtain subjective preference information • Measure over time using longitudinal studies.
