
IS 4800 Empirical Research Methods for Information Science Class Notes March 2, 2012


Presentation Transcript


  1. IS 4800 Empirical Research Methods for Information Science Class Notes March 2, 2012 Instructor: Prof. Carole Hafner, 446 WVH hafner@ccs.neu.edu Tel: 617-373-5116 Course Web site: www.ccs.neu.edu/course/is4800sp12/

  2. Outline • Finish discussion of usability testing • Hypothesis testing review • Sampling, power, and effect size • Chi-square – review and SPSS application • Correlation – review and SPSS application • Begin t-test if time permits

  3. UI/Usability evaluation • What are the three approaches? • What are the advantages and disadvantages of each? • Explain a usability experiment that is within-subjects • Explain a usability experiment that is between-subjects • What are the advantages and disadvantages of each?

  4. What is a Usability Experiment? • Usability testing in a controlled environment • There is a test set of users • They perform pre-specified tasks • Data is collected (usually quantitative and qualitative) • Take mean and/or median value of quantitative attributes • Compare to goal or another system • Contrasted with “expert review” and “field study” evaluation methodologies • The growth of usability groups and usability laboratories

  5. Usability Experiment • Defining the variables to collect? • Techniques for data collection? • Descriptive statistics to use • Potential for inferential statistics • Basis for correlational vs. experimental claims • Reliability and validity

  6. Experimental factors • Subjects: representative, sufficient sample • Independent variable (IV): characteristic changed to produce different conditions, e.g. interface style, number of menu items • Dependent variable (DV): characteristics measured in the experiment, e.g. time taken, number of errors

  7. Experimental factors (cont.) • Hypothesis: prediction of the outcome framed in terms of IV and DV; the null hypothesis states no difference between conditions, and the aim is to disprove it • Within-groups design: each subject performs the experiment under each condition; transfer of learning possible; less costly and less likely to suffer from user variation • Between-groups design: each subject performs under only one condition; no transfer of learning; more users required; variation can bias results

  8. Summative Analysis: What to measure? (and its relationship to a usability goal) • Total task time • User “think time” (dead time?) • Time spent not moving toward the goal • Ratio of successful actions to errors • Commands used/not used • Frequency of user expressions of confusion, frustration, satisfaction • Frequency of reference to manuals/help system • Percent of time such reference provided the needed answer
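
As a rough illustration only (not part of the slides), here is a minimal Python sketch of turning logged measurements like these into the mean/median summaries mentioned on slide 4; the task times and event counts below are invented.

```python
# Sketch: summarizing hypothetical usability-test logs into simple measures.
import statistics

task_times = [182, 240, 205, 310, 198, 225]   # seconds per participant (invented)
successes, errors = 41, 9                     # invented event counts

print("mean task time:", statistics.mean(task_times))
print("median task time:", statistics.median(task_times))
print("success/error ratio:", successes / errors)
```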

  9. Measuring User Performance • Measuring learnability: time to complete a set of tasks; learnability/efficiency trade-off • Measuring efficiency: time to complete a set of tasks; how to define and locate “experienced” users • Measuring memorability: the most difficult, since “casual” users are hard to find for experiments; memory quizzes may be misleading

  10. Measuring User Performance (cont.) • Measuring user satisfaction: Likert scale (agree or disagree); semantic differential scale; physiological measures of stress • Measuring errors: classification of minor vs. serious

  11. Reliability and Validity • Reliability means repeatability; statistical significance is a measure of reliability • Validity means the results will transfer to a real-life situation; it depends on matching the users, task, and environment • Reliability is difficult to achieve because of high variability in individual user performance

  12. Formative Evaluation: What is a Usability Problem? • Unclear – the planned method for using the system is not readily understood or remembered (info. design level) • Error-prone – the design leads users to stray from the correct operation of the system (any design level) • Mechanism overhead – the mechanism design creates awkward work flow patterns that slow down or distract users • Environment clash – the design of the system does not fit well with the users’ overall work processes (any design level); e.g. an incomplete transaction cannot be saved

  13. Qualitative methods for collecting usability problems • Thinking-aloud studies: difficult to conduct; experimenter prompting should be non-directive; alternatives include constructive interaction, the coaching method, and retrospective testing; output is notes on what users did and expressed – goals, confusions or misunderstandings, errors, reactions • Questionnaires: should be usability-tested beforehand • Focus groups, interviews

  14. Observational Methods – Think Aloud • The user is observed performing a task and asked to describe what he is doing and why, what he thinks is happening, etc. • Advantages: simplicity – requires little expertise; can provide useful insight; can show how the system is actually used • Disadvantages: subjective; selective; the act of describing may alter task performance

  15. Observational Methods – Cooperative evaluation • A variation on think-aloud in which the user collaborates in the evaluation; both user and evaluator can ask each other questions throughout • Additional advantages: less constrained and easier to use; user is encouraged to criticize the system; clarification possible

  16. Observational Methods – Protocol analysis • Paper and pencil: cheap, limited to writing speed • Audio: good for think-aloud, difficult to match with other protocols • Video: accurate and realistic, but needs special equipment and is obtrusive • Computer logging: automatic and unobtrusive, but large amounts of data are difficult to analyze • User notebooks: coarse and subjective, useful insights, good for longitudinal studies • Mixed use in practice; transcription of audio and video is difficult and requires skill, though some automatic support tools are available

  17. Query Techniques – Interviews • Analyst questions the user on a one-to-one basis, usually based on prepared questions; informal, subjective, and relatively cheap • Advantages: can be varied to suit the context; issues can be explored more fully; can elicit user views and identify unanticipated problems • Disadvantages: very subjective; time consuming

  18. Query Techniques – Questionnaires • Set of fixed questions given to users • Advantages: quick and reaches a large user group; can be analyzed more rigorously • Disadvantages: less flexible; less probing

  19. Laboratory studies: Pros and Cons • Advantages: specialist equipment available; uninterrupted environment • Disadvantages: lack of context; difficult to observe several users cooperating • Appropriate if the actual system location is dangerous or impractical, or to allow controlled manipulation of use

  20. Steps in a usability experiment • The planning phase • The execution phase • Data collection techniques • Data analysis

  21. The planning phase (your proposal) • Who, what, where, when and how much? • Who are test users, and how will they be recruited? • Who are the experimenters? • When, where, and how long will the test take? • What equipment/software is needed? • How much will the experiment cost? <not required> • Prepare detailed test protocol • *What test tasks? (written task sheets) • *What user aids? (written manual) • *What data collected? (include questionnaire) • How will results be analyzed/evaluated? • Pilot test protocol with a few users <one user>

  22. Execution Phase: Designing Test Tasks • Tasks are representative • Cover the most important parts of the UI • Don’t take too long to complete • Goal- or result-oriented (possibly with a scenario) • Not frivolous or humorous (unless part of the product goal) • First task should build confidence • Last task should create a sense of accomplishment

  23. Detailed Test Protocol • What tasks? Criteria for completion? • User aids • What will users be asked to do (thinking-aloud studies)? • Interaction with the experimenter • What data will be collected? • All materials to be given to users as part of the test, including a detailed description of the tasks

  24. Execution phase • Prepare environment, materials, software • Introduction should include: purpose (evaluating the software); voluntary and confidential; explain all procedures – recording, question-handling; invite questions • During the experiment: give the user written task description(s), one at a time; only one experimenter should talk • De-briefing

  25. Execution phase: ethics of human experimentation applied to usability testing • Users feel exposed using unfamiliar tools and making errors • Guidelines: • Re-assure that individual results not revealed • Re-assure that user can stop any time • Provide comfortable environment • Don’t laugh or refer to users as subjects or guinea pigs • Don’t volunteer help, but don’t allow user to struggle too long • In de-briefing • answer all questions • reveal any deception • thanks for helping

  26. Data collection – usability labs and equipment • Pad and paper: the only absolutely necessary data collection tool! • Observation areas (for other experimenters, developers, customer reps, etc.) – should be shown to users • Videotape (may be overrated) – users must sign a release • Video display capture • Portable usability labs • Usability kiosks

  27. Analysis of data • Before you start to do any statistics: look at the data; save the original data • Choice of statistical technique depends on the type of data and the information required • Type of data: discrete – finite number of values; continuous – any value

  28. Testing usability in the field (6 things you can do) • 1. Direct observation in actual use: discover new uses; take notes, don’t help, chat later • 2. Logging actual use: objective, not intrusive; great for identifying errors and which features are/are not used; privacy concerns

  29. Testing Usability in the Field (cont.) • 3. Questionnaires and interviews with real users: ask users to recall critical incidents; questionnaires must be short and easy to return • 4. Focus groups: 6-9 users; skilled moderator with a pre-planned script; computer conferencing? • 5. On-line direct feedback mechanisms: initiated by users; may signal a change in user needs; trust but verify • 6. Bulletin boards and user groups

  30. Field Studies: Pros and Cons • Advantages: natural environment; context retained (though observation may alter it); longitudinal studies possible • Disadvantages: distractions; noise • Appropriate for “beta testing” where context is crucial, and for longitudinal studies

  31. Statistical Thinking (samples and populations) • H1 (research hypothesis): Population 1 is different from Population 2 • H0 (null hypothesis): no difference between Pop 1 and Pop 2 • State the test criteria (α, tails) • Compute p = P(observed difference | H0), the probability that the observed difference is due to random variation • If p < α, reject H0 and accept H1 • α, the criterion, is typically set to 0.05 for most work; p is called the (actual) “level of significance”
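
A minimal sketch of this decision rule in Python, assuming SciPy is available; the two samples and the choice of an independent-means t-test are invented for illustration, not taken from the course materials.

```python
# Sketch of "reject H0 if p < alpha" with invented data.
from scipy import stats

alpha = 0.05                         # the criterion, set before looking at the data
group1 = [12, 15, 14, 10, 13, 16]    # hypothetical scores sampled from Population 1
group2 = [9, 11, 10, 8, 12, 10]      # hypothetical scores sampled from Population 2

t_stat, p = stats.ttest_ind(group1, group2)   # p = P(observed difference | H0)
if p < alpha:
    print(f"p = {p:.3f} < alpha: reject H0, accept H1")
else:
    print(f"p = {p:.3f} >= alpha: do not reject H0")
```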

  32. Relationship between alpha, beta, and power (decision vs. “The Truth”) • Reject H0 and accept H1, H1 true: correct decision, p = power • Reject H0 and accept H1, H1 false: Type I error, p = α • Do not reject H0 and do not accept H1, H1 true: Type II error, p = β • Do not reject H0 and do not accept H1, H1 false: correct decision, p = 1 - α

  33. Relationship Between Population and Samples When a Treatment Had No Effect

  34. Relationship Between Population and Samples When a Treatment Had An Effect

  35. Some Basic Concepts • Sampling distribution: the distribution of every possible sample (of size n) taken from a population • Sampling error: the difference between a sample mean and the population mean, M - μ • The standard error of the mean is a measure of sampling error (the standard deviation of the distribution of means)
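
A small sketch, using only the Python standard library, of estimating the standard error of the mean from a single made-up sample.

```python
# Sketch: estimate the standard error of the mean from one hypothetical sample.
import math
import statistics

sample = [3.2, 4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2]   # invented data
n = len(sample)
M = statistics.mean(sample)
S = statistics.stdev(sample)          # unbiased estimate (divides SS by n - 1)
SE = S / math.sqrt(n)                 # standard error of the mean
print(f"M = {M:.2f}, estimated standard error = {SE:.2f}")
```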

  36. Some Basic Concepts • Degrees of freedom: the number of scores in a sample with a known mean that are free to vary, defined as n - 1; used to find the appropriate tabled critical value of a statistic • Parametric statistics make assumptions about the nature of an underlying population • Nonparametric statistics make no assumptions about the nature of an underlying population

  37. Sampling • Population (mean μ, variance σ²) • A sample of size N • The mean values from all possible samples of size N, aka the “distribution of means”, with μM = μ, σ²M = σ²/N, and ZM = (M - μ)/σM
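
A sketch of these formulas with invented numbers, assuming SciPy for the normal-curve probability; the values of μ, σ, N, and M below are hypothetical.

```python
# Sketch: Z for a sample mean when mu and sigma are known (values invented).
import math
from scipy.stats import norm

mu, sigma, N = 100.0, 15.0, 25        # hypothetical population and sample size
M = 106.0                             # hypothetical observed sample mean

sigma_M = sigma / math.sqrt(N)        # std dev of the distribution of means
Z_M = (M - mu) / sigma_M
p_one_tailed = norm.sf(Z_M)           # P(a mean this high or higher | H0)
print(f"Z_M = {Z_M:.2f}, one-tailed p = {p_one_tailed:.4f}")
```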

  38. Estimating the Population Variance • S² is an estimate of σ²: S² = SS/(N-1) for one sample (take the square root for S) • For two independent samples, use the “pooled estimate”: S² = (df1/dfTotal)·S1² + (df2/dfTotal)·S2², where dfTotal = df1 + df2 = (N1 - 1) + (N2 - 1) • From this, calculate the variance of the sample means, S²M = S²/N, needed to compute the t statistic
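
The pooled-estimate formulas above, written out as a short Python sketch with two invented samples.

```python
# Sketch of the pooled variance formulas above, using made-up samples.
def ss(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)      # sum of squared deviations

s1 = [5, 7, 6, 8, 9]      # hypothetical sample 1
s2 = [4, 6, 5, 5, 7, 6]   # hypothetical sample 2

df1, df2 = len(s1) - 1, len(s2) - 1
df_total = df1 + df2
S2_1, S2_2 = ss(s1) / df1, ss(s2) / df2               # S^2 = SS / (N - 1)
S2_pooled = (df1 / df_total) * S2_1 + (df2 / df_total) * S2_2
S2_M1, S2_M2 = S2_pooled / len(s1), S2_pooled / len(s2)   # variances of the sample means
print(S2_pooled, S2_M1, S2_M2)
```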

  39. Z tests and t-tests • t is like Z: Z = (M - μ)/σM, while t = (M - 0)/SM for change scores • We use a stricter criterion (t) instead of Z because SM is based on an estimate of the population variance, while σM is based on a known population variance

  40. t-test with paired samples • Given information about the population of change scores (μ = 0) and the sample size we will be using (N), we can compute the distribution of means • Now, given a particular sample of change scores of size N: compute its mean M; estimate σ² from the sample as S² = SS/df, with df = N - 1; compute S²M = S²/N; and finally determine the probability that this mean occurred by chance
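
A sketch of the paired-samples test, assuming SciPy; the before/after scores are invented. `scipy.stats.ttest_rel` is equivalent to testing the mean of the change scores against μ = 0, as described above.

```python
# Sketch: paired-samples t-test on hypothetical before/after scores.
from scipy import stats

before = [22, 25, 17, 24, 16, 29, 20]
after  = [26, 27, 20, 23, 20, 32, 25]

t_stat, p = stats.ttest_rel(after, before)    # tests the mean change against mu = 0
df = len(before) - 1
print(f"t({df}) = {t_stat:.2f}, p = {p:.4f}")
```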

  41. t test for independent samples • Given two samples: estimate the population variances (assumed the same); estimate the variances of the distributions of means; estimate the variance of the distribution of differences between means (mean = 0) • This is now your comparison distribution

  42. t test for independent samples, continued • The distribution of differences between means is your comparison distribution; it is NOT normal but a ‘t’ distribution whose shape changes depending on df, where df = (N1 - 1) + (N2 - 1) • Compute t = (M1 - M2)/SDifference • Determine whether it is beyond the cutoff score for the test parameters (df, significance level, tails) from a lookup table
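
A sketch, assuming SciPy, that computes t for two invented samples along with the table cutoff for a two-tailed test at α = .05.

```python
# Sketch: independent-samples t-test, plus the cutoff a t table would give.
from scipy import stats

g1 = [34, 38, 29, 40, 35, 37]   # hypothetical condition 1 scores
g2 = [30, 28, 33, 27, 31, 29]   # hypothetical condition 2 scores

t_stat, p = stats.ttest_ind(g1, g2, equal_var=True)   # uses the pooled estimate
df = (len(g1) - 1) + (len(g2) - 1)
cutoff = stats.t.ppf(1 - 0.05 / 2, df)                # two-tailed, alpha = .05
print(f"t({df}) = {t_stat:.2f}, cutoff = +/-{cutoff:.2f}, p = {p:.4f}")
```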

  43. Effect size • The amount of change seen in the DVs • A test can be statistically significant yet have a small effect size
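
One common effect-size measure is Cohen's d (the standardized mean difference); the slides reference the Aron tables rather than this formula, so treat the sketch below, with invented numbers, as an illustration only.

```python
# Sketch: Cohen's d as one common effect-size measure (invented numbers).
import math

M1, M2 = 34.5, 32.8          # hypothetical group means
S2_pooled = 12.3             # hypothetical pooled variance estimate
d = (M1 - M2) / math.sqrt(S2_pooled)
print(f"d = {d:.2f}")        # about 0.48 for these made-up numbers
```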

  44. Power Analysis • Power increases with effect size, increases with sample size, and decreases as alpha is made smaller (stricter) • You should determine the number of subjects you need ahead of time by doing a ‘power analysis’ • Standard procedure: fix alpha and beta (power); estimate the effect size from prior studies; categorize it based on Table 13-8 in Aron (small/medium/large); determine the number of subjects you need • For chi-square, see Table 13-10 in the Aron reading
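
Instead of the Aron table lookup, the same kind of power analysis can be run in software. The sketch below assumes the statsmodels package is installed and uses a "medium" effect size of 0.5 purely as an example.

```python
# Sketch: solve for subjects per group given alpha, power, and an assumed effect size.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"~{n_per_group:.0f} subjects per group")   # roughly 64 for these inputs
```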

  45. χ² tests – for nominal measures; can apply to a single measure (goodness of fit) • Correlation tests – for two numeric measures • t-test for independent means – for a categorical IV and a numeric DV
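
A sketch of the first two tests in SciPy (the course itself uses SPSS; this is only an illustration with invented data).

```python
# Sketch: matching each test to its data type, with invented data.
from scipy import stats

# Nominal measure: chi-square goodness of fit (e.g., Coke vs Pepsi counts)
observed = [38, 22]                              # hypothetical preference counts
chi2, p_chi = stats.chisquare(observed)          # default: equal expected counts
print(f"chi-square = {chi2:.2f}, p = {p_chi:.4f}")

# Two numeric measures: Pearson correlation
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 3.2, 4.8, 5.1, 6.3]
r, p_r = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p_r:.4f}")

# Categorical IV, numeric DV: independent-means t-test (see the earlier sketch for slide 42)
```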

  46. Categorical Examples • Observational study/descriptive claim: Do NU students prefer Coke or Pepsi? • Study with correlational claim: Is there a difference between males and females in Coke or Pepsi preference? • Experimental study with causal claim: Does exposure to advertising affect Coke or Pepsi preference? (students assigned to treatments)

  47. Understanding numeric measures • Sources of variance: the IV, plus other uncontrolled factors (“error variance”) • The Central Limit Theorem: if (many) independent, random variables with the same distribution are added, the result is approximately a normal curve
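
A quick simulation sketch of the Central Limit Theorem, using only the Python standard library: sums of many uniform random variables come out approximately normal.

```python
# Sketch: CLT by simulation (uniform(0,1) variables; sizes are arbitrary choices).
import random
import statistics

sums = [sum(random.random() for _ in range(30)) for _ in range(10_000)]
# The 10,000 sums of 30 uniform(0,1) draws are approximately normal:
print(statistics.mean(sums), statistics.stdev(sums))   # near 15 and sqrt(30/12) ~ 1.58
```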

  48. The most important parts of the normal curve (for testing) [figure: normal curve with the top 5% of the area shaded, cutoff at Z = 1.65]

  49. The most important parts of the normal curve (for testing) [figure: normal curve with 2.5% shaded in each tail, cutoffs at Z = -1.96 and Z = +1.96]
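
These cutoffs can be recovered directly from the normal distribution; a quick check, assuming SciPy:

```python
# Sketch: recover the cutoffs on the two slides above from the normal distribution.
from scipy.stats import norm

print(norm.ppf(0.95))     # one-tailed, alpha = .05  -> about 1.645
print(norm.ppf(0.975))    # two-tailed, alpha = .05  -> about 1.96
```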

  50. Hypothesis testing – one-tailed • Hypothesis: a sample (of 1) will be significantly greater than the known population distribution; the population is completely known (not an estimate) • Example – WizziWord experiment: H1: μWizziWord > μWord; α = 0.05 (one-tailed); population (Word users): μWord = 150, σ = 25 • What level of performance do we need to see before we can accept H1?
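
A sketch of the arithmetic for this example, assuming SciPy: with a sample of size 1, the standard deviation of the comparison distribution is just σ, so the one-tailed cutoff is μ + 1.645·σ, about 191.

```python
# Sketch of the WizziWord cutoff: with a sample of 1, sigma_M = sigma.
from scipy.stats import norm

mu_word, sigma, alpha = 150, 25, 0.05
z_cut = norm.ppf(1 - alpha)              # about 1.645 (one-tailed)
cutoff = mu_word + z_cut * sigma
print(f"accept H1 if the observed score exceeds about {cutoff:.1f}")   # ~191.1
```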
