
  1. CS3205: HCI in SW Development Evaluation (Return to…) We’ve had an introduction to evaluation. Now for more details on…

  2. Topics to Cover • Types of evaluation • paradigm, categories of techniques, specific techniques • How to design an evaluation • ID-book’s DECIDE framework • Using experts for inspections and walkthroughs

  3. Reminder: Formative vs. Summative Eval. • Summative • After the item is complete • Does it “meet spec”? • Formative • As the item is being developed • Begin as early as possible • For the purpose of affecting the process and the item being developed

  4. High-level Categories of Techniques • observing users • asking users for their opinions • asking experts for their opinions • testing users’ performance • modeling users’ task performance

  5. Evaluation paradigm • Any kind of evaluation is guided explicitly or implicitly by a set of beliefs, which are often underpinned by theory. These beliefs and the methods associated with them are known as an ‘evaluation paradigm’

  6. Four evaluation paradigms • ‘quick and dirty’ • usability testing • field studies • predictive evaluation

  7. Quick and dirty • ‘quick & dirty’ evaluation describes the common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in line with users’ needs and are liked. • Quick & dirty evaluations can be done at any time. • The emphasis is on fast input to the design process rather than carefully documented findings.

  8. Quick and Dirty cont’d • Applies to various other approaches. E.g., could either be: • small number of experts (i.e. predictive evaluation) • small number of subjects (i.e. usability testing) • How many subjects? Not as many as you think! • See Nielsen’s data on why 5 users is probably enough: http://www.useit.com/alertbox/20000319.html • About 15 users are needed to find essentially 100% of the problems; 5 users find about 85%. • Better three studies with 5 users than one with 15?
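
The arithmetic behind the five-user claim above follows the problem-discovery model described by Nielsen and Landauer: the share of problems found after n test users is roughly 1 - (1 - L)^n, where L is the proportion of problems a single user reveals (about 31% in their data). A minimal sketch of that curve, treating the 31% figure as an illustrative assumption:

```python
# Sketch of the Nielsen/Landauer problem-discovery curve: found(n) = 1 - (1 - L)**n.
# L = 0.31 (share of problems one user reveals) is an illustrative assumption;
# the true rate varies from project to project.

def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected fraction of all usability problems seen after n_users sessions."""
    return 1.0 - (1.0 - discovery_rate) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
# 5 users -> ~84%, 15 users -> ~99.6%, in line with the figures on the slide above.
```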

  9. Usability testing • Usability testing involves recording typical users’ performance on typical tasks in controlled settings. Field observations may also be used. • As the users perform these tasks they are watched & recorded on video & their key presses are logged. • This data is used to calculate performance times, identify errors & help explain why the users did what they did. • User satisfaction questionnaires & interviews are used to elicit users’ opinions.
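
A hypothetical sketch of how such logged session data might be reduced to the performance measures mentioned above (task completion time and error counts); the event format and the values are invented purely for illustration:

```python
# Hypothetical log format: (timestamp_seconds, participant, task, event_kind),
# where event_kind is "start", "error" or "done". Invented for illustration only.
from collections import defaultdict

events = [
    (0.0,  "P1", "book_flight", "start"),
    (12.4, "P1", "book_flight", "error"),
    (41.9, "P1", "book_flight", "done"),
    (0.0,  "P2", "book_flight", "start"),
    (28.3, "P2", "book_flight", "done"),
]

def summarize(log):
    """Per (participant, task): completion time in seconds and error count."""
    stats = defaultdict(lambda: {"start": None, "done": None, "errors": 0})
    for t, who, task, kind in log:
        rec = stats[(who, task)]
        if kind == "error":
            rec["errors"] += 1
        else:
            rec[kind] = t
    return {key: {"time": v["done"] - v["start"], "errors": v["errors"]}
            for key, v in stats.items()
            if v["start"] is not None and v["done"] is not None}

print(summarize(events))
```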

  10. Usability Engineering • Term coined by staff at Digital Equipment Corp. around 1986 • Concerned with: • Techniques for planning, achieving and verifying objectives for system usability • Measurable goals must be defined early • Goals must be assessed repeatedly • Note verification above • Definition by Christine Faulkner (2000): • “UE is an approach to the development of software and systems which involves user participation from the outset and guarantees the usefulness of the product through the use of a usability specification and metrics.”
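
The “measurable goals” above are often written down as a usability specification: for each usability attribute, a measuring method plus worst-acceptable, planned, and best-case levels that are re-checked at each assessment. A hypothetical sketch of one such entry (the attribute and the numeric levels are invented for this example):

```python
# Illustrative usability-specification entry; the attribute and the numeric
# levels are invented for this sketch, not taken from the lecture.
spec_row = {
    "attribute": "time for a novice to book a flight",
    "measuring_method": "lab task, timed from task start to confirmation screen",
    "worst_acceptable": 300,   # seconds
    "planned_target": 180,
    "best_possible": 90,
    "observed": None,          # filled in after each evaluation round
}

def meets_spec(row) -> bool:
    """Verification step: the observed value must be no worse than the
    worst-acceptable level."""
    return row["observed"] is not None and row["observed"] <= row["worst_acceptable"]

spec_row["observed"] = 240     # result of the latest assessment
print("meets spec:", meets_spec(spec_row))
```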

  11. Field studies • Field studies are done in natural settings • The aim is to understand what users do naturally and how technology impacts them. • In product design, field studies can be used to: - identify opportunities for new technology - determine design requirements - decide how best to introduce new technology - evaluate technology in use.

  12. Predictive evaluation • Experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems. • Heuristic evaluation • Walkthroughs • Another approach involves theoretically based models. • Predicting time, errors: • GOMS and Fitts’ Law formula • A key feature of predictive evaluation is that users need not be present • Relatively quick & inexpensive
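
On the modeling side, Fitts’ Law predicts pointing time from target distance and width, commonly written as MT = a + b * log2(D/W + 1); a short sketch, where the device constants a and b are placeholder values that would normally be fitted from measured data:

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted pointing time in seconds (Shannon form of Fitts' Law).
    a and b are device/user constants normally estimated by regression;
    the defaults here are placeholder values for illustration only."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A target 400 px away and 20 px wide: ID = log2(21) ~ 4.4 bits -> about 0.76 s
print(f"{fitts_movement_time(400, 20):.2f} s")
```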

  13. How to Plan an Evaluation? • ID-book’s DECIDE framework • captures many important practical issues • works with all categories of study

  14. DECIDE: A framework to guide evaluation • Determine the goals the evaluation addresses. • Explore the specific questions to be answered. • Choose the evaluation paradigm and techniques to answer the questions. • Identify the practical issues. • Decide how to deal with the ethical issues. • Evaluate, interpret and present the data.
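
One lightweight way to keep all six DECIDE items in view while planning a study is to record the plan as a small structured object; the field names and example values below simply mirror the framework and are illustrative, not something the ID-book prescribes:

```python
# Minimal plan record mirroring DECIDE; fields and example values are illustrative.
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    goals: list[str]              # Determine the goals
    questions: list[str]          # Explore the specific questions
    paradigm: str                 # Choose the paradigm and techniques
    techniques: list[str]
    practical_issues: list[str]   # Identify the practical issues
    ethical_issues: list[str]     # Decide how to deal with ethics
    analysis_notes: str = ""      # Evaluate, interpret and present the data

plan = EvaluationPlan(
    goals=["Improve the usability of the existing e-ticket purchase flow"],
    questions=["Are customers concerned about security?",
               "Is the interface for obtaining e-tickets poor?"],
    paradigm="usability testing",
    techniques=["task-based lab sessions", "satisfaction questionnaire"],
    practical_issues=["recruit five frequent flyers", "stay on budget and schedule"],
    ethical_issues=["informed consent form", "anonymize recordings"],
)
print(plan.paradigm, "-", len(plan.questions), "questions to answer")
```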

  15. Determine the goals • What are the high-level goals of the evaluation? • Who wants it and why? The goals influence the paradigm for the study • Some examples of goals: • Identify the best metaphor on which to base the design. • Check to ensure that the final interface is consistent. • Investigate how technology affects working practices. • Improve the usability of an existing product.

  16. Explore the questions • All evaluations need goals & questions to guide them so time is not wasted on ill-defined studies. • For example, the goal of finding out why many customers prefer to purchase paper airline tickets rather than e-tickets can be broken down into sub-questions: - What are customers’ attitudes to these new tickets? - Are they concerned about security? - Is the interface for obtaining them poor? • What questions might you ask about the design of a cell phone?

  17. Choose the evaluation paradigm & techniques • The evaluation paradigm strongly influences the techniques used and how data is analyzed and presented. • E.g. field studies do not involve testing or modeling

  18. Identify practical issues For example, how to: • select users • stay on budget • stay on schedule • find evaluators • select equipment

  19. Decide on ethical issues • Develop an informed consent form • See example(s) in text, Web site, etc. • Study these. • Participants have a right to: - know the goals of the study - know what will happen to the findings - privacy of personal information - not be quoted without their agreement - leave when they wish - be treated politely

  20. Evaluate, interpret & present data • How data is analyzed & presented depends on the paradigm and techniques used. • The following also need to be considered: - Reliability: can the study be replicated? - Validity: is it measuring what you thought? - Biases: is the process creating biases? - Scope: can the findings be generalized? - Ecological validity: is the environment of the study influencing it? e.g. the Hawthorne effect

  21. Pilot studies • A small trial run of the main study. • The aim is to make sure your plan is viable. • Pilot studies check:- that you can conduct the procedure- that interview scripts, questionnaires, experiments, etc. work appropriately • It’s worth doing several to iron out problems before doing the main study. • Ask colleagues if you can’t spare real users.

  22. Key points • An evaluation paradigm is an approach that is influenced by particular theories and philosophies. • Five categories of techniques were identified: observing users, asking users, asking experts, user testing, modeling users. • The DECIDE framework has six parts: - Determine the overall goals - Explore the questions that satisfy the goals - Choose the paradigm and techniques - Identify the practical issues - Decide on the ethical issues - Evaluate ways to analyze & present data • Do a pilot study

  23. Overview: Evaluation By Experts • Several approaches • Heuristic evaluation • Walkthroughs (several flavors) • In general: • Inexpensive and quick compared to asking users • “Discount evaluation” • Experts may suggest solutions (users probably don’t)

  24. Asking experts • Experts use their knowledge of users & technology to review software usability • Expert critiques (crits) can be formal or informal reports • Heuristic evaluation is a review guided by a set of heuristics • Walkthroughs involve stepping through a pre-planned scenario noting potential problems

  25. Heuristic evaluation • Developed by Jakob Nielsen in the early 1990s • Based on heuristics distilled from an empirical analysis of 249 usability problems • These heuristics have been revised for current technology, e.g., HOMERUN for the web • Heuristics still needed for mobile devices, wearables, virtual worlds, etc. • Design guidelines form a basis for developing heuristics

  26. Nielsen’s “general” heuristics (Remember these?) • Visibility of system status • Match between system and real world • User control and freedom • Consistency and standards • Help users recognize, diagnose, recover from errors • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help and documentation

  27. Nielsen HOMERUN • Derived from general heuristics • More specific for commercial Web sites: • High-quality content • Often updated • Minimal download time • Ease of use • Relevant to users’ needs • Unique to the online medium • Netcentric corporate culture

  28. Heuristic Categories from ID-Book • Visibility of system status • Match between system and real world • User control and freedom • Consistency and standards • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help users recover from errors • Help and documentation • Navigation • Structure of information • Physical constraints • Extraordinary users

  29. Heuristics are not Bullet Items • The bullet items shown in these slides are not heuristics usable for evaluation, i.e. they are not checklists • For a better example, see: http://www.stcsig.org/usability/topics/articles/he-checklist.html
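
To make the distinction concrete, here is an illustrative sketch of what one heuristic looks like once expanded into yes/no questions an evaluator can actually answer; the questions are written for this example and are not taken from the linked checklist:

```python
# Illustrative only: heuristics expanded into concrete checklist questions.
checklist = {
    "Visibility of system status": [
        "Does every user action produce some visible feedback?",
        "Is a progress indicator shown for operations longer than a second or two?",
        "Is the current mode (e.g. edit vs. view) always displayed?",
    ],
    "Error prevention": [
        "Are destructive actions confirmed or undoable?",
        "Are input values constrained or validated before submission?",
    ],
}

for heuristic, questions in checklist.items():
    print(heuristic)
    for q in questions:
        print("  [ ]", q)
```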

  30. Discount evaluation • Heuristic evaluation is referred to as discount evaluation when 5 evaluators are used. • Empirical evidence suggests that on average 5 evaluators identify 75-80% of usability problems. • Note how similar to “quick and dirty” user studies results

  31. 3 stages for doing heuristic evaluation • Briefing session to tell experts what to do • Evaluation period of 1-2 hours in which: - Each expert works separately - Take one pass to get a feel for the product - Take a second pass to focus on specific features • Debriefing session in which experts work together to prioritize problems
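
During the evaluation and debriefing stages, each expert’s findings are usually captured as (problem, heuristic violated, severity) records that the debrief then merges and ranks; a hypothetical sketch of that bookkeeping (the problems, severity scale, and values are invented for illustration):

```python
# Hypothetical findings log for heuristic evaluation; severity 0 = cosmetic,
# 4 = catastrophic. All entries are invented for illustration.
from collections import defaultdict

findings = [
    ("E1", "No feedback after pressing Save",    "Visibility of system status", 3),
    ("E2", "No feedback after pressing Save",    "Visibility of system status", 2),
    ("E1", "Jargon in the error dialog",         "Match between system and real world", 2),
    ("E3", "Delete has no confirmation or undo", "User control and freedom", 4),
]

def prioritize(records):
    """Merge duplicate problems and rank them by mean severity, highest first."""
    by_problem = defaultdict(list)
    for _expert, problem, heuristic, severity in records:
        by_problem[(problem, heuristic)].append(severity)
    ranked = sorted(by_problem.items(),
                    key=lambda item: sum(item[1]) / len(item[1]), reverse=True)
    return [(p, h, sum(s) / len(s), len(s)) for (p, h), s in ranked]

for problem, heuristic, mean_severity, reports in prioritize(findings):
    print(f"{mean_severity:.1f}  ({reports} report(s))  {problem}  [{heuristic}]")
```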

  32. Advantages and problems • Few ethical & practical issues to consider • Can be difficult & expensive to find experts • Best experts have knowledge of application domain & users • Biggest problems: - important problems may get missed - many trivial problems are often identified • One study has shown: • For each true problem, 1.2 false alarms and 0.6 missed problems

  33. Evaluating Websites • Heuristics like Nielsen’s “general list” are less applicable to websites • Nielsen’s HOMERUN • Important: Heuristics become more useful when they become guidelines for development • Analysis of outcomes might lead to particular suggestions. E.g. for one set of published results: • (1) layout of pages, (2) arrangement of topics, (3) depth of navigation

  34. Preece’s Web Heuristics/Guidelines • Navigation • (1) avoid long pages with wasted space, (2) provide navigation support such as a site map, (3) good menu structures, (4) standard link colors, (5) consistent look and feel • Access • (1) avoid complex URLs, (2) avoid long download times • Information Design (both content and comprehension) • (1) avoid outdated or incomplete info, (2) good graphical design, (3) good use of color, (4) avoid gratuitous graphics and animation, (5) consistency

  35. Topic: Heuristics for… • What are a good set of heuristics for a cellphone’s UI? • Status • Phone should use color or borders to support status. • Phone should clearly display battery life / signal strength / time. • Phone should display call status (ringing, connected, disconnected) • Navigation • Never be more than 2-3 steps from making a call. • If many steps are needed to reach the bottom of a screen, one step should return to the top. • Easy to go back to previous screen / application (undo a mistaken selection) • See active applications and move to running apps • Error prevention • Efficiency

  36. Topic: Heuristics for… • What are a good set of heuristics for a cellphone’s UI? • Status • status of call should be visible (call, connection, roaming, battery) • mode (vibrate, etc.) • unread text messages, voice mails • Navigation • one-button access to phonebook numbers • consistent navigation button for back, etc. • Error prevention • prevent accidental button presses in pocket, backpack, purse • Efficiency
