
Analyzing and Presenting Results: Establishing a User Orientation



Presentation Transcript


1. Analyzing and Presenting Results: Establishing a User Orientation
Alfred Kobsa, University of California, Irvine

2. Tabulating and analyzing data
• Tabulate data in spreadsheet(s) per user and per task
• Include both quantitative and qualitative data (e.g., comments)
• Compute totals per user and averages per task
• Find outlier values in the raw data and try to explain them:
  • go back to the original data source to check for transcription errors
  • look at the time sheet / protocol and the video recording
• Outliers may point to infrequent usability problems, or they may derive from “accidental” characteristics of the respective test user. In the latter case:
  • disregard outlier values if this can be justified, or use the median instead of the average
  • [remove subjects with many outlier values completely if this can be justified (very few subjects only!)]
• Look at means/medians and possibly standard deviations to
  • determine whether usability concerns are confirmed by the data
  • discover surprises in the data, and determine whether they point to usability problems
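A minimal sketch of this tabulation and outlier check, assuming task-completion times collected per user and per task. The sample values, the pandas dependency, and the 3-MAD (median absolute deviation) outlier rule are illustrative assumptions, not part of the slides:

    import pandas as pd

    # Rows = users, columns = tasks, values = task-completion times (seconds)
    times = pd.DataFrame(
        {"task1": [45, 52, 48, 210, 50],
         "task2": [80, 75, 90, 85, 78]},
        index=["u1", "u2", "u3", "u4", "u5"])

    print(times.sum(axis=1))      # totals per user
    print(times.mean(axis=0))     # averages per task
    print(times.median(axis=0))   # medians, robust to outliers

    # Flag values far from the per-task median (median-based rule; with the
    # tiny samples typical here, z-scores would let a single outlier hide
    # itself by inflating the standard deviation)
    med = times.median()
    mad = (times - med).abs().median()
    outliers = times[(times - med).abs() > 3 * mad]
    print(outliers.dropna(how="all"))   # u4's 210 s on task1 stands out

The per-task medians in the sketch also illustrate the slide's advice to use the median instead of the average when outliers cannot be discarded outright.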

3. Analyzing video and audio recordings
• Unless subjects were asked to “think aloud”, it is generally easier to analyze video data with concrete questions in mind rather than merely “watching out for usability problems”
• This applies less to audio, since subjects often verbalize the problems they encounter
• Observations should be written down (with time stamps)
• Categories for observations may already exist, or can be created during the observation process
• It is often advisable to use two independent observers who afterwards compare their notes (and go back to the recordings to resolve disputes)
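As a hedged illustration of how two observers' notes might be compared, the sketch below computes simple percent agreement and Cohen's kappa over per-event category labels. The slides do not prescribe an agreement statistic; the kappa choice, the category names, and the sample labels are all assumptions:

    from collections import Counter

    # Each observer assigned one category per timestamped event
    obs_a = ["nav", "nav", "label", "error", "label", "nav"]
    obs_b = ["nav", "label", "label", "error", "label", "nav"]

    n = len(obs_a)
    p_obs = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    ca, cb = Counter(obs_a), Counter(obs_b)
    p_chance = sum(ca[c] * cb[c] for c in ca) / n ** 2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    print(f"agreement: {p_obs:.2f}  kappa: {kappa:.2f}")
    # Disagreements (here, event 2) mark the spots worth re-watching

Low agreement on particular events is exactly the cue the slide mentions for going back to the recordings to resolve disputes.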

4. Statistical presentation and analysis
• Results of usability tests are usually presented using
  • tabulated raw values
  • descriptive statistics (means, medians, standard deviations)
  • visualizations of raw values and statistical values
• In rare cases, inferential statistics can be used
  • specifically for comparing two competing prototypes, or the “old” and the “new” system
• This should be done with extreme caution, since
  • the preconditions for the applicability of statistical tests are often not met (random sampling of subjects and random assignment to conditions, normal distribution of the data)
  • sample sizes are often very small
  • statistical significance of a difference does not mean that the difference is important
  • decision makers often do not know how to interpret the results of a statistical test (and are not familiar with the preconditions and limits of such tests)
  • testers are often not well trained in statistics and may not know which test is appropriate
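If inferential statistics are used at all, a test with weaker preconditions is the safer sketch. Below, a nonparametric Mann-Whitney U test compares task times for two hypothetical prototypes; the data, the scipy dependency, and the choice of this particular test are assumptions, and the caveats above still apply:

    from statistics import mean, median, stdev
    from scipy.stats import mannwhitneyu

    old_ui = [61, 55, 70, 66, 58]   # task times (s), old prototype
    new_ui = [48, 52, 45, 50, 47]   # task times (s), new prototype

    # Descriptive statistics first: these are what most reports should show
    for name, data in (("old", old_ui), ("new", new_ui)):
        print(name, mean(data), median(data), round(stdev(data), 1))

    # Mann-Whitney U does not assume normally distributed data, but the
    # other caveats (tiny samples, significance vs. importance) remain
    stat, p = mannwhitneyu(old_ui, new_ui, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")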

5. Identifying usability problems
• Involve the designers / programmers (particularly if they are going to perform the revisions)
• Focus on global problems, since they often affect many aspects of an interface
  • global problems are more difficult to pinpoint and to correct
• Rank problems by level of severity
  • Level 1: the problem may prevent the successful completion of a task
  • Level 2: the problem may create significant delay and frustration
  • Level 3: the problem has a minor effect on usability
  • Level 4: a possible enhancement that can be added in the future
• Recommend changes (and test those changes later)
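As a small, hypothetical illustration of the four severity levels above, findings can be recorded alongside their level and sorted so that task-blocking problems surface first; the example problems are invented:

    SEVERITY = {1: "prevents task completion",
                2: "significant delay / frustration",
                3: "minor effect on usability",
                4: "possible future enhancement"}

    findings = [("icon labels are inconsistent", 3),
                ("checkout button is hidden below the fold", 1),
                ("search ignores plural forms", 2),
                ("add keyboard shortcuts", 4)]

    # List the most severe findings first when reporting
    for desc, level in sorted(findings, key=lambda f: f[1]):
        print(f"Level {level} ({SEVERITY[level]}): {desc}")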

6. Communicating the results
• Preparing a report / reports
  • see Dumas and Redish, Chapter 22, and Courage and Baxter, Chapter 14
• Preparing a PowerPoint presentation
• Preparing a stand-alone video/multimedia presentation
  • see Dumas and Redish, Chapter 23

7. Changing the product and the process
• Collaborate with designers/developers throughout the evaluation process (and possibly with management)
• Prioritize and motivate your recommendations for re-design
• Collaborate on finding feasible ways to fix the problems
• Make suggestions to improve the design process, such as
  • earlier involvement of users
  • earlier testing of designs and prototypes
  • hiring HCI staff
  • developing design guidelines
