
Intro to Evaluation

This article provides an overview of evaluation techniques for assessing the usability of software, including formative and summative assessment methods. It explores different evaluation techniques such as interviews, questionnaires, observation, experiments, heuristic evaluation, cognitive walkthrough, diary studies, and more. The article also discusses the importance of gathering relevant, diagnostic, credible, and corroborated data as evidence in usability studies.


Presentation Transcript


  1. Intro to Evaluation See how (un)usable your software really is…

  2. Why is evaluation done? • Summative: assess an existing system; judge whether it meets some criteria • Formative: assess a system being designed; gather input to inform the design • Summative or formative? Depends on the maturity of the system and on how the evaluation results will be used • The same technique can be used for either

  3. Other distinctions • Form of results obtained: quantitative vs. qualitative • Who is experimenting with the design: end users vs. HCI experts • Approach: experimental, naturalistic, or predictive

  4. Evaluation techniques • Predictive evaluation • Interviews • Questionnaires • Observation • Experiment • Discount evaluation techniques: • Heuristic evaluation • Cognitive walkthrough

  5. Predictive Evaluation • Predict user performance and usability • Rules or formulas based on experimentation • Quantitative • Predictive • In a bit…

  6. Interviews & Questionnaires • Ask users what they think about your prototype / design / ideas • Qualitative & quantitative • Subjective • End users or other stakeholders • Often accompanies other methods to get subjective feedback

  7. Observation • Watch users perform tasks with your interface • Qualitative & quantitative • Objective • Experimental or naturalistic • Variations • Think-aloud • Cooperative evaluation

  8. Experiments • Test hypotheses about your interface • Quantitative • Objective • Experimental • Examine effects of independent variables on dependent variables • Often used to compare two designs or compare performance between groups • Next week…

  9. Discount usability techniques • Fast and cheap method to get broad feedback • Use HCI experts instead of users • Qualitative mostly • Heuristic evaluation • Several experts examine interface using guiding heuristics (like the ones we used in design) • Cognitive Walkthrough • Several experts assess learnability of interface for novices

  10. And still more techniques • Diary studies • Users relate experiences on a regular basis • Can write down, call in, etc. • Experience Sampling Technique • Interrupt users with a very short questionnaire on a random-ish basis • Good for getting an idea of regular and long-term use in the field (real world)

  11. A “typical” usability study • Bring users into a lab • Introduce them to your interface • Give them a script or several tasks and ask to complete them • Look for errors & problems, performance, etc. • Interview or questionnaire after to get additional feedback

  12. Usability Lab • Large viewing area through a one-way mirror, which includes an angled sheet of glass that improves light capture and prevents sound transmission between rooms • Doors for the participant and observation rooms are located such that participants are unaware of observers’ movements in and out of the observation room • http://www.surgeworks.com/services/observation_room2.htm

  13. A “typical” usability study • Questionnaire (biographical data) • Observation of several tasks • Sometimes as part of an experiment • Interview (for additional feedback)

  14. Evaluation is Detective Work • Goal: gather evidence that can help you determine whether your usability goals are being met • Evidence (data) should be: • Relevant • Diagnostic • Credible • Corroborated

  15. Data as Evidence • Relevant • Appropriate to address the hypotheses • e.g., Does measuring “number of errors” provide insight into how effectively your new air traffic control system supports the users’ tasks? • Diagnostic • Data unambiguously provide evidence one way or the other • e.g., Does asking for users’ preferences clearly tell you if the system performs better? (Maybe)

  16. Data as Evidence • Credible • Are the data trustworthy? • Gather data carefully; gather enough data • Corroborated • Do more than one source of evidence support the hypotheses? • e.g. Both accuracy and user opinions indicate that the new system is better than the previous system. But what if completion time is slower?

  17. General Recommendations • Identify evaluation goals • Include both objective & subjective data • e.g. “completion time” and “preference” • Use multiple measures, within a type • e.g. “reaction time” and “accuracy” • Use quantitative measures where possible • e.g. preference score (on a scale of 1-7) Note: Only gather the data required; do so with minimum interruption, hassle, time, etc.

  18. Evaluation planning • Decide on techniques, tasks, materials • What are the usability criteria? • How much authenticity is required? • How many people, and for how long? • How to record data, how to analyze data • Prepare materials – interfaces, storyboards, questionnaires, etc. • Pilot the entire evaluation • Test all materials, tasks, questionnaires, etc. • Find and fix problems with wording, assumptions • Get a good feel for the length of the study

  19. Recruiting Participants • Various “subject pools” • Volunteers • Paid participants • Students (e.g., psych undergrads) for course credit • Friends, acquaintances, family, lab members • “Public space” participants - e.g., observing people walking through a museum • Email, newsgroup lists • Must fit user population (validity) • Note: Ethics, Consent apply to *all* participants, including friends & “pilot subjects”

  20. Performing the Study • Be well prepared so the participant’s time is not wasted • Explain procedures without compromising results • The session should not be too long, and the subject can quit at any time • Never express displeasure or anger • Data must be stored anonymously, securely, and/or destroyed • Expect anything and everything to go wrong!! (a little story)

  21. Consent • Why important? • People can be sensitive about this process and issues • Errors will likely be made, participant may feel inadequate • May be mentally or physically strenuous • What are the potential risks (there are always risks)?

  22. Data Analysis • Start by just looking at the data • Were there outliers, people who fell asleep, anyone who tried to mess up the study, etc.? • Identify issues: • Overall, how did people do? • “5 W’s” (Where, what, why, when, and for whom were the problems?) • Compile aggregate results and descriptive statistics, as in the sketch below
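
A minimal Python sketch of this first pass, using hypothetical completion times (not data from any real study); the two-standard-deviation outlier rule is just one common convention:

    # First-pass look at (hypothetical) task completion times, in seconds.
    from statistics import mean, median, stdev

    times = [48.2, 52.7, 45.1, 61.3, 49.9, 112.4, 50.5]

    m, s = mean(times), stdev(times)
    print(f"mean={m:.1f}s  median={median(times):.1f}s  sd={s:.1f}s")

    # Flag possible outliers (more than 2 SDs from the mean) for a closer
    # look -- a distracted participant, someone gaming the study, etc.
    outliers = [t for t in times if abs(t - m) > 2 * s]
    print("possible outliers:", outliers)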

  23. Making Conclusions • Where did you meet your criteria? Where didn’t you? • What were the problems? How serious are these problems? • What design changes should be made? • But don’t make things worse… • Prioritize and plan changes to the design • Iterate on entire process

  24. Example: Heather’s study • Software: MeetingViewer interface fully functional • Criteria – learnability, efficiency, see what aspects of interface get used, what might be missing • Resources – subjects were students in a research group, just me as evaluator, plenty of time • Wanted completely authentic experience

  25. Heather’s software

  26. Heather’s evaluation • Task: answer questions from a recorded meeting, using my software as desired • Think-aloud • Videotaped, plus software logs • Also had a post-study questionnaire • Wrote my own code for log analysis • Watched video and matched behavior to software logs

  27. Example materials

  28. Data analysis • Basic data compiled: • Time to answer a question (or give up) • Number of clicks on each type of item • Number of times audio played • Length of audio played • User’s stated difficulty with task • User’s suggestions for improvements • More complicated: • Overall patterns of behavior in using the interface • User strategies for finding information
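
A hedged sketch of this kind of log analysis in Python. The log format here (timestamp, event, detail) is hypothetical, not the actual MeetingViewer format:

    # Compile click counts per item type and total audio play length
    # from (hypothetical) software logs.
    from collections import Counter

    log_lines = [
        "12.4 click target",
        "15.1 click timeline",
        "15.9 audio_play 8.2",
        "30.0 click target",
    ]

    clicks = Counter()
    audio_seconds = 0.0
    for line in log_lines:
        _, event, detail = line.split()
        if event == "click":
            clicks[detail] += 1             # clicks on each type of item
        elif event == "audio_play":
            audio_seconds += float(detail)  # length of audio played

    print("clicks per item type:", dict(clicks))
    print(f"audio played: {audio_seconds:.1f}s")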

  29. Data representation example

  30. Data presentation

  31. Some usability conclusions • Need fast forward and reverse buttons (minor impact) • Audio too slow to load (minor impact) • Target labels are confusing, need something different that shows dynamics (medium impact) • Need more labeling on timeline (medium impact) • Need different place for notes vs. presentations (major impact)

  32. Your turn: during break • In your project groups • Which usability goals are important for you? • How might you measure each one? • Which techniques would help get those measurements?

  33. Reminder: (some) usability goals • Learnability • Predictability • Synthesizability • Familiarity • Generalizability • Consistency • Error prevention • Recoverability • Observability • Responsiveness • Task conformance • Flexibility • Customizability • Substitutivity • Satisfying • Engaging • Motivating • Efficient • Aesthetic

  34. Predictive Models • Translate empirical evidence into theories and models that can influence design. • Performance measures • Quantitative • Time prediction • Working memory constraints • Competence measures • Focus on certain details, others obscured

  35. Two Types of User Modeling • Stimulus-Response • Power law of practice • Fitts’ law • Cognitive – human as interpreter/predictor – based on the Model Human Processor (MHP) • Keystroke-Level Model • Low-level, simple • GOMS (and similar) models • Higher-level (Goals, Operators, Methods, Selection rules)

  36. Power law of practice • Tn = T1 · n^(-a) • The time Tn to complete the nth trial is the time T1 for the first trial times n to the power -a; a is about 0.4, typically between 0.2 and 0.6 • Models skilled behavior – stimulus-response and routine cognitive actions • Typing speed improvement • Learning to use a mouse • Pushing buttons in response to stimuli • NOT learning

  37. Power Law: Tn = T1 · n^(-a) • If the first trial (T1) takes 5 seconds, how long will future trials take? When will improvements level off? (a = 0.4)
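
A quick Python sketch answering the slide’s question with the values given (T1 = 5 s, a = 0.4); the criterion-time calculation previews the use described on the next slide:

    # Power law of practice: Tn = T1 * n**(-a)
    T1, a = 5.0, 0.4

    for n in (1, 2, 5, 10, 50, 100):
        print(f"trial {n:3d}: {T1 * n ** -a:.2f} s")
    # trial 1: 5.00 s, trial 10: 1.99 s, trial 100: 0.79 s -- each doubling
    # of practice multiplies the time by 2**-0.4 (about 0.76), so the
    # improvements flatten out quickly.

    # Trials needed to reach a criterion time Tc:
    # T1 * n**(-a) <= Tc  =>  n >= (T1 / Tc)**(1 / a)
    Tc = 1.0
    print(f"trials to reach {Tc} s: {(T1 / Tc) ** (1 / a):.0f}")  # ~56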

  38. Uses for the Power Law of Practice • Use measured time T1 on trial 1 to predict whether time with practice will meet usability criteria, after a reasonable number of trials • How many trials are reasonable? • Predict how many practice trials will be needed for the user to meet usability criteria • Determine whether the usability criteria are realistic

  39. Fitts’ Law • Models movement times for selection tasks • Paul Fitts: war-time human factors pioneer • Basic idea: Movement time for a well-rehearsed selection task • Increases as the distance to the target increases • Decreases as the size of the target increases

  40. Moving • Move from START to STOP • Index of Difficulty: ID = log2(2D/W), in unitless bits, where D is the distance to the target and W is the width of the target

  41. Movement Time • MT = a + b·ID, or MT = a + b·log2(2D/W) • Empirical measurement establishes the constants a and b • Different for different devices and for different ways the same device is used

  42. Questions • What do you do in 2D? • For a w × l rectangle, one way is ID = log2(D/min(w, l) + 1) • Should take into account direction of approach
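
A small Python sketch of these formulas; the constants a and b below are hypothetical placeholders, since real values must be fit from measured movement times for a particular device:

    from math import log2

    def fitts_id(D, W):
        """Index of difficulty in bits: ID = log2(2D / W)."""
        return log2(2 * D / W)

    def fitts_id_2d(D, w, l):
        """The 2D rectangle variant above: ID = log2(D / min(w, l) + 1)."""
        return log2(D / min(w, l) + 1)

    def movement_time(D, W, a=0.1, b=0.15):  # a, b: hypothetical constants
        """MT = a + b * ID, in seconds if a and b are in seconds."""
        return a + b * fitts_id(D, W)

    # Doubling the distance or halving the width adds exactly one bit of
    # difficulty, and hence a constant b seconds of movement time.
    print(f"{fitts_id(200, 20):.2f} bits")          # 4.32
    print(f"{movement_time(200, 20):.2f} s")        # 0.75
    print(f"{fitts_id_2d(200, 20, 40):.2f} bits")   # 3.46 (2D variant)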

  43. Applications • When does it apply? • How used in interface design?

  44. GOMS • Goals, Operators, Methods, Selection Rules Card, Moran, & Newell (1983) • Assumptions • Human activity is problem solving • Decompose into subproblems • Determine goals to “attack” problem • Know sequence of operations used to achieve the goals • Timing values for each operation

  45. GOMS: Components • Goals • State to be achieved • Operators • Elementary perceptual, cognitive, motor acts • Not so fine-grained as the Model Human Processor • Methods • Procedures for accomplishing a (sub)goal • e.g., move cursor via mouse or keys • Selection Rules • if-then rules that determine which method to use
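
A toy Python sketch of these components, built around the slide’s cursor-movement example; the method bodies are stubs and the threshold in the selection rule is invented for illustration:

    # Goal: move the cursor to a target. Two methods, each a sequence of
    # elementary operators (indicated here only as comments).
    def move_cursor_via_mouse(target):
        pass  # operators: home hand on mouse, point at target, click

    def move_cursor_via_keys(target):
        pass  # operators: press arrow keys repeatedly until at target

    def select_method(distance_in_chars):
        """Selection rule: an if-then rule choosing between the methods."""
        if distance_in_chars > 8:  # hypothetical threshold
            return move_cursor_via_mouse
        return move_cursor_via_keys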

  46. GOMS: Limitations • GOMS is not so well suited for: • Tasks where steps are not well understood • Inexperienced users • Why?

  47. GOMS: Application • NYNEX telephone operator workstation • GOMS analysis used to determine the critical path and the time to complete a typical task • Determined that the new system would actually be slower • Abandoned, saving millions of dollars

  48. Keystroke Level Model (KLM) • Chapter 12.5 • Low-level GOMS variant • Also developed by Card, Moran, and Newell (1983) • Skilled users performing routine tasks • Assumes error-free performance • Analyze only observable behaviors • Keystrokes, mouse movements • Assigns times to basic human operations - experimentally verified

  49. KLM accounts for • Keystroking: TK • Mouse button press: TB • Pointing (typically with mouse): TP • Hand movement between keyboard and mouse: TH • Drawing straight line segments: TD • “Mental preparation”: TM • System response time: TR

  50. Step One: MS Word Find Command • Use the Find command to locate a six-character word • H (home hand on mouse) • P (point to Edit) • B (click mouse button – press/release) • P (point to Find) • B (click mouse button) • H (home hand on keyboard) • 6K (type six characters into the Find dialogue box) • K (Return key in the dialogue box starts the find)
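
A worked Python estimate of this sequence’s total time, using commonly cited KLM operator values (K assumes a good typist; B here counts a full press-plus-release click). Note the slide’s sequence omits M (mental preparation) operators, which a full analysis would insert before each cognitive chunk:

    # Commonly cited KLM operator times in seconds (values vary by source).
    TIMES = {"K": 0.2, "B": 0.2, "P": 1.1, "H": 0.4}

    # H(ome on mouse), P(oint to Edit), B (click), P(oint to Find),
    # B (click), H(ome on keyboard), six keystrokes, K (Return)
    sequence = ["H", "P", "B", "P", "B", "H"] + ["K"] * 6 + ["K"]

    total = sum(TIMES[op] for op in sequence)
    print(f"predicted completion time: {total:.1f} s")  # ~4.8 s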
