Toolkit Support for Usability Evaluation



  1. Toolkit Support for Usability Evaluation 05-830 Spring 2013 – Karin Tsai

  2. Overview • Motivation • Definitions • Background from Literature • Examples of Modern Tools

  3. Motivation • To improve or validate usability • To compare products (A/B tests, etc.) • To measure progress • To verify adherence to guidelines or standards • To discover features of human cognition

  4. Usability Attributes • Learnability – easy to learn • Efficiency – efficient to use • Memorability – easy to remember how to use • Errors – low error rate; easy to recover • Satisfaction – pleasant to use and likable

  5. Evaluation Categories • Predictive • psychological modeling techniques • design reviews • Observational • observations of users interacting with the system • Participative • questionnaires • interviews • “think aloud” user-testing

  6. Challenges and Tradeoffs • Quality vs. Quantity • “Quality” defined as abstraction, interpretability, etc. • User testing – high quality; low quantity • Counting mouse clicks – low quality; high quantity • Observing Context • Abstraction • Event reporting in applications places burden on developers • Complicates software evolution
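
The abstraction burden mentioned above usually shows up as explicit logging calls scattered through application code. A minimal, hypothetical sketch (the logger and event names are made up for illustration):

```python
import json
import time

def log_event(name, **properties):
    """Append a usability event to a local log file.
    (Hypothetical sink; real evaluation tools ship events to a collection server.)"""
    record = {"event": name, "time": time.time(), **properties}
    with open("usability_events.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Every interaction the team wants to analyze needs an explicit call like these,
# which is the instrumentation burden (and software-evolution cost) the slide refers to.
log_event("button_click", widget="save", screen="editor")
log_event("task_completed", task="export_pdf", duration_s=41.3)
```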

  7. CogTool • Evaluation Type: Predictive • Description: Uses a predictive human performance model (“cognitive crash dummy”) to evaluate designs.
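
CogTool's predictions come from keystroke-level / ACT-R style modeling of skilled performance. As a rough illustration only (not CogTool's actual engine), a keystroke-level estimate can be sketched with the commonly cited KLM operator times:

```python
# Simplified Keystroke-Level Model estimate, illustrating the flavor of
# predictive evaluation; operator times are the commonly cited KLM
# approximations in seconds (this is not CogTool's ACT-R model).
KLM_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators: str) -> float:
    """Sum operator times for a sequence such as 'MHPBB'."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: think, move hand to mouse, point at the Save button, click it.
print(predict_time("MHPBB"))  # about 3.05 s predicted expert execution time
```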

  8. CogTool Overall Score: 6.5/10

  9. Mixpanel • Evaluation Type: Observational • Description: Aggregates developer-defined event data in useful ways.
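
For context, Mixpanel's client libraries revolve around tracking named events with arbitrary properties, which Mixpanel then aggregates into funnels, retention reports, and similar views. A minimal sketch with the Python library (the token, user id, and event names are placeholders):

```python
from mixpanel import Mixpanel  # pip install mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder project token

# Each call records one developer-defined event with arbitrary properties.
mp.track("user_123", "Signed Up", {"Plan": "Free", "Referrer": "search"})
mp.track("user_123", "Completed Task", {"Task": "export_pdf"})
```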

  10. Mixpanel Overall Score: 9.5/10

  11. Chartbeat • Evaluation Type: Observational • Description: Real-time data visualization.
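
Chartbeat's headline metric is how many visitors are on a page right now. The sketch below is not Chartbeat's API; it only illustrates the sliding-window style of computation behind such a real-time number, assuming a 30-second ping window:

```python
import time

PING_WINDOW_S = 30  # assumed window: a visitor counts as concurrent if they pinged in the last 30 s

last_ping = {}  # visitor id -> timestamp of most recent heartbeat ping

def record_ping(visitor_id, now=None):
    last_ping[visitor_id] = time.time() if now is None else now

def concurrent_visitors(now=None):
    now = time.time() if now is None else now
    return sum(1 for t in last_ping.values() if now - t <= PING_WINDOW_S)

record_ping("visitor_a")
record_ping("visitor_b")
print(concurrent_visitors())  # 2
```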

  12. Chartbeat Overall Score: 7/10

  13. User Testing • Evaluation Type: Participative • Description: Watch a user complete a task on your system while thinking aloud.

  14. User Testing Overall Score: 8.5/10

  15. Questions?
