
EDG Evaluation Phase 1


Presentation Transcript


1. EDG Evaluation Phase 1 (12/08/03)
Xin Chen, Javier Tuason, Dennis Frezzo

2. Exploring Different Evaluation Methods as a Pilot
• Heuristic Evaluation
• Empirical Timing + Observation
• Selected-Response Format (multiple choice)
• Constructed-Response Format (short answer)
• Goal: guide further testing
• Goal: guide iterations of the SBD/UCD process
• N = 3 thus far: Instructor #1 (observed), Student #1 (self-report), Expert #1 (self-report)
• More direct in-person testing needed (to be done this week)

3. Heuristic Evaluation
• Heuristic evaluation by simple ranking was not helpful; discussions were required
• Participants noted that the heuristic design guidelines heightened their sensitivity to UI issues and informed (usually increased) their critiques
• Non-representative sample: all participants were more web-savvy than typical users

4. Empirical Testing
• More testing to be done this week
• Early quantitative testing revealed various issues (parts of the maze not working, glitches after submitting question answers, etc.) that were fixed, so one iteration of testing was done prior to these first 3 users
• Based on limited data we can estimate reasonable expected task times
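For later rounds, the timing measurements could be made repeatable with a small logging harness. Below is a minimal sketch in Python; the task names, participant IDs, and CSV layout are illustrative assumptions, not details from the actual study.

```python
import csv
import time

class TaskTimer:
    """Records elapsed time per task for one participant."""

    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.records = []   # list of (task, elapsed_seconds)
        self._task = None
        self._start = None

    def start(self, task):
        self._task = task
        self._start = time.monotonic()  # monotonic clock is immune to wall-clock changes

    def stop(self):
        elapsed = time.monotonic() - self._start
        self.records.append((self._task, elapsed))
        return elapsed

    def save(self, path):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["participant", "task", "seconds"])
            for task, secs in self.records:
                writer.writerow([self.participant_id, task, f"{secs:.1f}"])

# Hypothetical session: time one participant on two maze tasks.
timer = TaskTimer("instructor-1")
timer.start("navigate maze section")
# ... participant works on the task ...
timer.stop()
timer.start("answer embedded question")
# ... participant works on the task ...
timer.stop()
timer.save("timings.csv")
```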

5. Survey
• Too few samples to do any meaningful statistics
• Still identifying redundant items so the survey can be shortened
• We still feel that, with a larger sample size, this would be an interesting measure of various aspects of user satisfaction
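With a larger sample, per-item descriptive statistics would be the natural first pass at the satisfaction data. A minimal sketch, assuming 1-5 Likert-style ratings; the item names and scores below are invented for illustration.

```python
from statistics import mean, median, stdev

# Hypothetical 1-5 Likert ratings, one value per participant per item.
# With N = 3 these numbers mean little; the point is the shape of the analysis.
responses = {
    "maze navigation": [4, 3, 5],
    "question interface": [2, 3, 3],
    "overall satisfaction": [4, 4, 5],
}

for item, scores in responses.items():
    print(f"{item}: mean={mean(scores):.2f}  "
          f"median={median(scores)}  sd={stdev(scores):.2f}  n={len(scores)}")
```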

6. Open-ended Response
• Received extremely valuable design ideas, though this reflects the high expertise level of this first group of testers
• All participants liked the chance to imagine making the game better
• Several questions were clearly redundant, and/or fatigue was a factor: answers were often "not applicable," "same as before," or "nothing to add"

7. Recommendations
• Further Testing
  • N = 3 is obviously too small a sample; more testing this week
  • Despite the small sample size, much was learned in terms of a streamlined testing procedure
  • Especially in a game, empirical testing is crucial
• Further Iteration of SBD?
  • Explore the Warriors of the Net metaphor
  • Re-visit the activity, information, and interaction scenarios
  • Open-ended responses helped here
• Further Iteration of UCD?
  • Narrowly, to improve the GUI of the maze
  • Broadly, to improve interactions
  • Heuristic evaluation and the survey helped here
