
Evaluation, cont’d



Presentation Transcript


  1. Evaluation, cont’d

  2. Two main types of evaluation • Formative evaluation is done at different stages of development to check that the product meets users’ needs. • Summative evaluation assesses the quality of a finished product. Our focus is on formative evaluation.

  3. What to evaluate • Iterative design & evaluation is a continuous process that examines: • Early ideas for conceptual model • Early prototypes of the new system • Later, more complete prototypes Designers need to check that they understand users’ requirements.

  4. Tog says … “Iterative design, with its repeating cycle of design and testing, is the only validated methodology in existence that will consistently produce successful results. If you don’t have user-testing as an integral part of your design process you are going to throw buckets of money down the drain.”

  5. When to evaluate • Throughout design • From the first descriptions, sketches, etc. of users’ needs through to the final product • Design proceeds through iterative cycles of ‘design-test-redesign’ • Evaluation is a key ingredient for a successful design.

  6. Another example - development of “HutchWorld” • Many informal meetings with patients, carers & medical staff early in design • Early prototype informally tested on site • Designers learned a lot • language of designers & users was different • asynchronous communication was also needed • Redesigned to produce the portal version

  7. Usability testing • User tasks investigated: - how users’ identity was represented - communication - information searching - entertainment • User satisfaction questionnaire • Triangulation to get different perspectives

  8. Findings from the usability test • The back button didn’t always work • Users didn’t pay attention to navigation buttons • Users expected all objects in the 3-D view to be clickable. • Users did not realize that there could be others in the 3-D world with whom to chat. • Users tried to chat to the participant list.

  9. Key points • Evaluation & design are closely integrated in user-centered design. • Some of the same techniques are used in evaluation & requirements but they are used differently (e.g., interviews & questionnaires) • Triangulation involves using a combination of techniques to gain different perspectives • Dealing with constraints is an important skill for evaluators to develop.

  10. A case in point … • “The Butterfly Ballot: Anatomy of disaster”. See http://www.asktog.com/columns/042ButterflyBallot.html

  11. An evaluation framework

  12. The aims • Explain key evaluation concepts & terms. • Describe the evaluation paradigms & techniques used in interaction design. • Discuss the conceptual, practical and ethical issues that must be considered when planning evaluations. • Introduce the DECIDE framework.

  13. Evaluation paradigm • Any kind of evaluation is guided explicitly or implicitly by a set of beliefs, which are often underpinned by theory. These beliefs and the methods associated with them are known as an ‘evaluation paradigm’.

  14. User studies • User studies involve looking at how people behave in their natural environments, or in the laboratory, both with old technologies and with new ones.

  15. Four evaluation paradigms • ‘quick and dirty’ • usability testing • field studies • predictive evaluation

  16. Quick and dirty • ‘quick & dirty’ evaluation describes the common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in line with users’ needs and are liked. • Quick & dirty evaluations are done any time. • The emphasis is on fast input to the design process rather than carefully documented findings.

  17. Usability testing • Usability testing involves recording typical users’ performance on typical tasks in controlled settings. Field observations may also be used. • As the users perform these tasks they are watched & recorded on video & their key presses are logged. • This data is used to calculate performance times, identify errors & help explain why the users did what they did. • User satisfaction questionnaires & interviews are used to elicit users’ opinions.
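The slides themselves contain no code, but a small sketch may help make the logging step concrete. The Python fragment below is only an illustration: the log format, participant IDs, task names, and event labels are all invented, and a real usability lab would use its own logging tools.

```python
# Minimal sketch: reduce a (hypothetical) interaction log to task
# completion times and error counts per participant.
from collections import defaultdict
from datetime import datetime

# Each record: (participant, task, ISO timestamp, event) - all made up.
log = [
    ("P1", "search", "2024-01-10T10:00:00", "task_start"),
    ("P1", "search", "2024-01-10T10:00:42", "error"),
    ("P1", "search", "2024-01-10T10:01:30", "task_end"),
    ("P2", "search", "2024-01-10T10:05:00", "task_start"),
    ("P2", "search", "2024-01-10T10:06:10", "task_end"),
]

times = {}                  # (participant, task) -> start time, then seconds
errors = defaultdict(int)   # (participant, task) -> error count

for participant, task, ts, event in log:
    key = (participant, task)
    t = datetime.fromisoformat(ts)
    if event == "task_start":
        times[key] = t
    elif event == "task_end":
        times[key] = (t - times[key]).total_seconds()
    elif event == "error":
        errors[key] += 1

for key, seconds in times.items():
    print(key, f"{seconds:.0f}s", errors[key], "errors")
```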

  18. Field studies • Field studies are done in natural settings • The aim is to understand what users do naturally and how technology impacts them. • In product design field studies can be used to: - identify opportunities for new technology - determine design requirements - decide how best to introduce new technology - evaluate technology in use.

  19. Predictive evaluation • Experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems. • Another approach involves theoretically based models. • A key feature of predictive evaluation is that users need not be present • Relatively quick & inexpensive

  20. Overview of techniques • observing users, • asking users their opinions, • asking experts their opinions, • testing users’ performance, • modeling users’ task performance

  21. DECIDE: A framework to guide evaluation • Determine the goals the evaluation addresses. • Explore the specific questions to be answered. • Choose the evaluation paradigm and techniques to answer the questions. • Identify the practical issues. • Decide how to deal with the ethical issues. • Evaluate, interpret and present the data.
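As an illustration only (the DECIDE framework itself is a checklist, not code), the six steps can be recorded as fields of a simple evaluation plan. The Python sketch below uses invented field names and example values.

```python
# Illustrative sketch: the six DECIDE steps as fields of an evaluation plan.
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationPlan:
    goals: List[str]              # Determine the goals
    questions: List[str]          # Explore the specific questions
    paradigm: str                 # Choose the evaluation paradigm ...
    techniques: List[str]         # ... and techniques
    practical_issues: List[str]   # Identify the practical issues
    ethical_issues: List[str]     # Decide how to deal with ethical issues
    analysis_plan: str            # Evaluate, interpret and present the data

plan = EvaluationPlan(
    goals=["Improve the usability of an existing product"],
    questions=["Why do customers prefer paper tickets to e-tickets?"],
    paradigm="usability testing",
    techniques=["user testing", "satisfaction questionnaire"],
    practical_issues=["recruit users", "stay on budget and schedule"],
    ethical_issues=["informed consent form", "anonymize personal data"],
    analysis_plan="task times, error counts, questionnaire summary",
)
print(plan.paradigm, plan.techniques)
```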

  22. Determine the goals • What are the high-level goals of the evaluation? • Who wants it and why? • The goals influence the paradigm for the study • Some examples of goals: • Identify the best metaphor on which to base the design. • Check to ensure that the final interface is consistent. • Investigate how technology affects working practices. • Improve the usability of an existing product.

  23. Explore the questions • All evaluations need goals & questions to guide them so time is not wasted on ill-defined studies. • For example, the goal of finding out why many customers prefer to purchase paper airline tickets rather than e-tickets can be broken down into sub-questions: - What are customers’ attitudes to these new tickets? - Are they concerned about security? - Is the interface for obtaining them poor? • What questions might you ask about the design of a cell phone?

  24. Choose the evaluation paradigm & techniques • The evaluation paradigm strongly influences the techniques used and how data is analyzed and presented. • E.g., field studies do not involve testing or modeling.

  25. Identify practical issues For example, how to: • select users • stay on budget • stay on schedule • find evaluators • select equipment

  26. Decide on ethical issues • Develop an informed consent form • Participants have a right to: - know the goals of the study - know what will happen to the findings - privacy of personal information - not be quoted without their agreement - leave when they wish - be treated politely

  27. Evaluate, interpret & present data • How data is analyzed & presented depends on the paradigm and techniques used. • The following also need to be considered: - Reliability: can the study be replicated? - Validity: is it measuring what you thought? - Biases: is the process creating biases? - Scope: can the findings be generalized? - Ecological validity: is the environment of the study influencing it? e.g., the Hawthorne effect.

  28. Pilot studies • A small trial run of the main study. • The aim is to make sure your plan is viable. • Pilot studies check:- that you can conduct the procedure- that interview scripts, questionnaires, experiments, etc. work appropriately • It’s worth doing several to iron out problems before doing the main study. • Ask colleagues if you can’t spare real users.

  29. Key points • An evaluation paradigm is an approach that is influenced by particular theories and philosophies. • Five categories of techniques were identified: observing users, asking users, asking experts, user testing, modeling users. • The DECIDE framework has six parts: - Determine the overall goals - Explore the questions that satisfy the goals - Choose the paradigm and techniques - Identify the practical issues - Decide on the ethical issues - Evaluate ways to analyze & present data • Do a pilot study

  30. Observing users

  31. The aims • Discuss the benefits & challenges of different types of observation. • Describe how to observe as an on-looker, a participant, & an ethnographer. • Discuss how to collect, analyze & present observational data. • Examine think-aloud, diary studies & logging. • Provide you with experience in doing observation and critiquing observation studies.

  32. What and when to observe • Goals & questions determine the paradigms and techniques used. • Observation is valuable any time during design. • Quick & dirty observations early in design • Observation can be done in the field (i.e., field studies) and in controlled environments (i.e., usability studies) • Observers can be: - outsiders looking on - participants, i.e., participant observers - ethnographers

  33. Frameworks to guide observation • A simple framework: - The person. Who? - The place. Where? - The thing. What? • The Goetz and LeCompte (1984) framework: - Who is present? - What is their role? - What is happening? - When does the activity occur? - Where is it happening? - Why is it happening? - How is the activity organized?

  34. The Robinson (1993) framework • Space. What is the physical space like? • Actors. Who is involved? • Activities. What are they doing? • Objects. What objects are present? • Acts. What are individuals doing? • Events. What kind of event is it? • Goals. What do they want to accomplish? • Feelings. What is the mood of the group and of individuals?

  35. You need to consider • Goals & questions • Which framework & techniques • How to collect data • Which equipment to use • How to gain acceptance • How to handle sensitive issues • Whether and how to involve informants • How to analyze the data • Whether to triangulate

  36. Observing as an outsider • As in usability testing • More objective than participant observation • In a usability lab, equipment is in place • Recording is continuous • Analysis & observation almost simultaneous • Care needed to avoid drowning in data • Analysis can be coarse or fine grained • Video clips can be powerful for telling the story

  37. Participant observation & ethnography • Debate about differences • Participant observation is key component of ethnography • Must get co-operation of people observed • Informants are useful • Data analysis is continuous • Interpretivist technique • Questions get refined as understanding grows • Reports usually contain examples

  38. Data collection techniques • Notes & still camera • Audio & still camera • Video • Tracking users: - diaries - interaction logging

  39. Data analysis • Qualitative data - interpreted & used to tell the ‘story’ about what was observed. • Qualitative data - categorized using techniques such as content analysis. • Quantitative data - collected from interaction & video logs. Presented as values, tables, charts, graphs and treated statistically.
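A minimal sketch of the statistical treatment mentioned above, using made-up completion times and Python's standard statistics module; in practice the numbers would come from interaction or video logs.

```python
# Summarize (invented) task completion times before presenting them
# as a table or chart.
import statistics

task_times = {"P1": 90, "P2": 70, "P3": 115, "P4": 82, "P5": 98}  # seconds

values = list(task_times.values())
print(f"n       = {len(values)}")
print(f"mean    = {statistics.mean(values):.1f}s")
print(f"median  = {statistics.median(values):.1f}s")
print(f"std dev = {statistics.stdev(values):.1f}s")
```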

  40. Interpretive data analysis • Look for key events that drive the group’s activity • Look for patterns of behavior • Test data sources against each other - triangulate • Report findings in a convincing and honest way • Produce ‘rich’ or ‘thick’ descriptions • Include quotes, pictures, and anecdotes • Software tools can be useful, e.g., NUDIST, Ethnograph (URLs will be provided)

  41. Looking for patterns • Critical incident analysis • Content analysis • Discourse analysis • Quantitative analysis - i.e., statistics

  42. Key points • Observe from outside or as a participant • Analyzing video and data logs can be time-consuming. • In participant observation collections of comments, incidents, and artifacts are made. Ethnography is a philosophy with a set of techniques that include participant observation and interviews. • Ethnographers immerse themselves in the culture that they study.

  43. Asking users & experts

  44. The aims • Discuss the role of interviews & questionnaires in evaluation. • Teach basic questionnaire design. • Describe how to do interviews, heuristic evaluation & walkthroughs. • Describe how to collect, analyze & present data. • Discuss strengths & limitations of these techniques.

  45. Interviews • Unstructured - are not directed by a script. Rich but not replicable. • Structured - are tightly scripted, often like a questionnaire. Replicable but may lack richness. • Semi-structured - guided by a script but interesting issues can be explored in more depth. Can provide a good balance between richness and replicability.

  46. Basics of interviewing • Remember the DECIDE framework • Goals and questions guide all interviews • Two types of questions: - ‘closed questions’ have a predetermined answer format, e.g., ‘yes’ or ‘no’ - ‘open questions’ do not have a predetermined format • Closed questions are quicker and easier to analyze
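To illustrate why closed questions are quicker and easier to analyze, the hypothetical fragment below simply tallies fixed-format answers; open answers would first need qualitative coding before they could be counted. All responses are invented.

```python
# Closed-question answers have a fixed format and can be tallied directly.
from collections import Counter

# "Was the interface easy to use? (yes/no)" - invented responses
closed_responses = ["yes", "yes", "no", "yes", "no", "yes"]
print(Counter(closed_responses))   # Counter({'yes': 4, 'no': 2})

# Open-question answers need coding (e.g., content analysis) before counting.
open_responses = [
    "I liked the layout but the back button confused me.",
    "Couldn't find the chat area at first.",
]
```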

  47. Things to avoid when preparing interview questions • Long questions • Compound sentences - split into two • Jargon & language that the interviewee may not understand • Leading questions that make assumptions e.g., why do you like …? • Unconscious biases e.g., gender stereotypes

  48. Components of an interview • Introduction - introduce yourself, explain the goals of the interview, reassure about the ethical issues, ask to record, present an informed consent form. • Warm-up - make first questions easy & non-threatening. • Main body - present questions in a logical order • A cool-off period - include a few easy questions to defuse tension at the end • Closure - thank interviewee, signal the end, e.g., switch recorder off.

  49. The interview process • Use the DECIDE framework for guidance • Dress in a similar way to participants • Check recording equipment in advance • Devise a system for coding names of participants to preserve confidentiality. • Be pleasant • Ask participants to complete an informed consent form

  50. Probes and prompts • Probes - devices for getting more information, e.g., ‘would you like to add anything?’ • Prompts - devices to help interviewee, e.g., help with remembering a name • Remember that probing and prompting should not create bias. • Too much can encourage participants to try to guess the answer.
