
Usability Research: Unexpected Results



Presentation Transcript


  1. Usability Research: Unexpected Results

  2. Overview • User feedback and user performance • Unexpected results in usability research • Small-scale research accuracy

  3. Product: College Writing Handbook – How it is tested • Textbook publishers – expert review • Computer industry – usability tests

  4. Product: College Writing Handbook Research Goals • Convince publishers of value of usability testing • Compare visual against traditional version • Help find/fix problems in new handbook • Usability research: new thinking about old products

  5. Delivery of Material – Old way: prose instruction • Traits: traditional vocabulary • Information listed in bullets • Minimal visual emphasis – “^” marks and italics

  6. Delivery of Material – New way: artwork • Traits: minimal grammar terminology • Color coding • Simpler language (rather than phrasing like “adding a medial modifier to an independent clause”)

  7. User Profiles: 12 Participants • All 18 or 19 years old • First semester, first composition course • Majors in Engineering, Business, etc. • 6 students from a two-year college – 4 male, 2 female • 6 students from a four-year college – 3 male, 3 female

  8. Scenarios 1. “Put complex source into correct MLA style” 2. “Identifying non-trivial comma errors” 3. Evaluate source acceptability for assignment/audience • Talk out loud while using textbook • Point at text while reading • Prompted after 5 seconds of silence • Videotaped

  9. Scenario 1: Citation • Two sources provided • Use both handbooks – alternate handbook used first • Ratings: • Very useful • Useful • Rarely Useful • Not useful • Explain the rating

  10. Scenario 2: Punctuation • Evaluate a paragraph “pregnant with comma errors” • Comma required? • Comma optional? • Ease of use • Explain the rating

  11. Scenario 3: Acceptability of Source • Users given a research topic and audience • Users given possible sources • Acceptable? • Unacceptable? • More information required? • Ease of use

  12. “Print Quality Bias” • Prototype vs. finished product • Color copies of the prototype vs. color excerpts of the finished text • Both texts plastic comb bound • Both texts referred to as prototypes

  13. Findings • Visual product preferred by users • Verbal product rated slightly more difficult

  14. “Users failed at tasks, but didn’t realize it” • Ease of use does not equal usability • Works cited – both prototypes failed users • 12 unsatisfactory works cited entries • Minor omissions: “Press” or “Inc” • Critical omissions: authors, title, edition number, pages • Punctuation – 11 of 12 students misused the comma • Source acceptability (Howard 10)

  15. Creating a citation • Positive responses • 12/12 user failure • Problem areas • Large font, highlighting, underlining • Users misled – other info required for citation

  16. “In America* it is quite possible to live in a cocoon.” • Correct response: comma optional at * • Visual prototype – page 433 • Verbal prototype – page 236 • User results • 11/12 gave “required” as the response – both books • Visual prototype – 12/12 cited the correct page • Verbal – 12/12 cited an incorrect page

  17. Explanation of failures • “Readers scanned pages for examples that matched mental models” • “They thought the problem was simple and didn’t look beyond the first solution…” • Relied on bold headings, skipping paragraphs (Howard 11)

  18. Explanation of failures cont’d • “Visual manual tried to combine too much information in one graphic.” • “Authors of the manuals didn’t understand their users’ mental models.” • One text failed: possible delivery problem? (Howard 11)

  19. First Simple Solution • Users appeared to focus on bold headings • Scanned examples • Looked for examples to match pattern of task • Rarely read prose paragraphs

  20. Additional Issues • Visual is too complex • User comments: “tangled up”, “messy”, “too busy” • “Too much effort” • Skipped it

  21. Preferred page – why? • Users scan for syntax patterns • “Does not…combine elements into one visual”

  22. Acceptability of Source • Step-by-step instructions: too simple • Provide context or “If, Then” scenarios • Visual book used “stories” • Pedro, Aaliyah with respective assignments • Both students evaluate the same source • Story shows decision making process • User-centered design? User-experience!

  23. Context • Both texts made assumptions • Knowledge of corporate authors, reference books, etc. • How to determine the type of source • Fixes • Task environment • “How do I quote or paraphrase in my text?” • “How do I format an entry for the works cited, reference list, etc.?” (Howard 14)

  24. Other Results • Fixes for handbooks – visuals, complexity indicators • Total client focus can be bad: “I like/want this!” • Focus on task completion AND decision making • Make users aware of complexity – context • Usability test/final product • Model problem-solving behavior in the usability test

  25. Small Scale Research Accuracy

  26. Small Numbers? • Revised handbook • Usability results vs actual user results? • Task success • Oversimplified results • Extreme results

  27. Confidence Interval • 95% – by convention • 95% of the time, the interval will contain the true completion rate • Width depends on sample size and success rate • 5 users – large margin of error • 100 users – smaller margin of error

  28. Confidence Interval • 5 users – 95% of the time, task completion will fall between 48% and 100%
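The slide does not say how the 48%–100% range was computed; it is consistent with an exact (Clopper-Pearson) binomial interval for 5 successes out of 5 trials. A minimal sketch of that calculation, assuming SciPy is available:

```python
# Exact (Clopper-Pearson) confidence interval for a small-sample completion rate.
# Illustrative only: the slide does not name the method behind its 48%-100% figure.
from scipy.stats import beta

def exact_ci(successes, n, confidence=0.95):
    """Exact binomial interval (default 95%) for `successes` out of `n` trials."""
    alpha = 1 - confidence
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper

# 5 of 5 users complete the task: roughly (0.478, 1.0), i.e. about 48%-100%.
print(exact_ci(5, 5))
```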

  29. Adjustment – Maximum Likelihood Estimate (MLE) • p = x/n, where x = successful attempts and n = total attempts • Example: 4/5 = .80 or 80%

  30. Adjustment – Jeffreys Method • p = (x + .5)/(n + 1) • Unadjusted: 4/5 = .80 or 80% • Adjusted: 4.5/6 = .75 or 75%

  31. Adjustment – Laplace Method • p = (x + 1)/(n + 2) • Unadjusted: 4/5 = .80 or 80% • Adjusted: 5/7 = .714 or 71.4%

  32. Adjustment – Wilson Method • p = (x + 2)/(n + 4) • Unadjusted: 4/5 = .80 or 80% • Adjusted: 6/9 = .667 or 66.7%

  33. Adjustment Review • MLE: 4/5 = .80 (80%) • Jeffreys: 4.5/6 = .75 (75%) • Laplace: 5/7 = .714 (71.4%) • Wilson: 6/9 = .667 (66.7%) • 5/5 = 100% – really?
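The four adjustments on slides 29–32 differ only in how many notional successes and trials they add before dividing. A short sketch that reproduces the numbers in the review slide for 4 successes out of 5 attempts:

```python
# Small-sample point-estimate adjustments from slides 29-32 (x successes in n attempts).
def mle(x, n):      return x / n                 # maximum likelihood estimate: x/n
def jeffreys(x, n): return (x + 0.5) / (n + 1)   # Jeffreys: (x + .5)/(n + 1)
def laplace(x, n):  return (x + 1) / (n + 2)     # Laplace: (x + 1)/(n + 2)
def wilson(x, n):   return (x + 2) / (n + 4)     # Wilson-style: (x + 2)/(n + 4)

x, n = 4, 5
for name, fn in [("MLE", mle), ("Jeffreys", jeffreys),
                 ("Laplace", laplace), ("Wilson", wilson)]:
    print(f"{name:8s} {fn(x, n):.3f}")
# MLE 0.800, Jeffreys 0.750, Laplace 0.714, Wilson 0.667 -- the review-slide values
```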

  34. Sample smaller than 20? Use an adjustment method • www.measuringusability.com/wald • 4 of 5 users succeed: adjusted estimate of 71.4% • Keep it simple (Lewis and Sauro 2-15)
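The calculator's URL points to the adjusted-Wald interval, which Lewis and Sauro recommend for small-sample confidence intervals. As an illustration only (the linked calculator may differ in its details), a standard adjusted-Wald computation looks like this:

```python
# Adjusted-Wald (Agresti-Coull style) interval for x successes in n trials.
# A sketch of the general technique, not a reimplementation of the linked calculator.
from math import sqrt

def adjusted_wald(x, n, z=1.96):           # z = 1.96 for a 95% interval
    p_adj = (x + z * z / 2) / (n + z * z)  # adjusted proportion
    margin = z * sqrt(p_adj * (1 - p_adj) / (n + z * z))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 4 of 5 users succeed: roughly (0.36, 0.98).
print(adjusted_wald(4, 5))
```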

  35. Summary • Writing Handbook • Unexpected results – usable, but not useful • Adjusting for small samples • http://www.upassoc.org/upa_publications/jus/ • www.measuringusability.com/wald
