User Interface Design - PowerPoint PPT Presentation

Presentation Transcript

  1. User Interface Design Southern Methodist University CSE 8316 Spring 2003

  2. Temporal Relations and Usability Specifications

  3. Introduction • Previous chapter discussed low-level primitives • Now focus on abstraction and the relative timing of events • Issues such as interruptibility and interleavability should be part of interaction design and not driven by constructional design.

  4. Introduction • UAN can be used to specify: • Sequence • Iteration • Optionality • Repeating choice • Order independence • Interruptibility • Interleavability • Concurrency • Waiting

  5. Sequencing and Grouping • Sequence • Sequence: One task is performed in its entirety before the next task is begun • Represent in the UAN by grouping (horizontally or vertically) without any intervening operators • Grouping • Tasks can be grouped together using various operators to form new tasks • Definition is similar to that for regular expressions

  6. Abstraction • Have only seen UAN describing articulatory actions -- primitive tasks performed by the user. • In this form, describing an entire interaction design would be overly complex and difficult • Introduce abstraction by allowing groups of tasks to be named. • As with procedures, a reference to the name is equivalent to performing all the tasks described by that name

  7. Abstraction • To aid reusability, allow task references to be parameterized • Reusing tasks promotes logical decomposition, providing for a consistent system model • Abstraction hides details, but also hides user feedback. This information can be listed at one or both levels. • With task naming, can now perform top-down design.
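
The slides describe abstraction purely in UAN terms. As a rough illustration that is not part of the original presentation, the same idea of named, parameterized, reusable tasks can be sketched in Python; the Task class, the select_icon parameter, and all task names below are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        """A named task composed of lower-level subtasks (UAN-style abstraction)."""
        name: str
        subtasks: list = field(default_factory=list)

        def expand(self, indent=0):
            # A reference to the task name stands for performing all of its subtasks.
            print(" " * indent + self.name)
            for sub in self.subtasks:
                sub.expand(indent + 2)

    # Hypothetical parameterized task: one definition is reused for any icon.
    def select_icon(icon_name):
        return Task(f"select_icon({icon_name})",
                    [Task("move_cursor_to_icon"),
                     Task("press_mouse_button"),
                     Task("release_mouse_button")])

    delete_file = Task("delete_file",
                       [select_icon("report.txt"), Task("drag_icon_to_trash")])
    delete_file.expand()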

  8. Task Operators • Choice • Simple choice is represented in UAN with the vertical bar, `|'. • Repeating choice is formed by adding the iterators `*' and `+'.

  9. Task Operators • Order Independence • Set of tasks that must be completed before continuing, but order of completion of the subtasks is not important. • Represented by the `&'.

  10. Task Operators • Interruption • Interruption occurs when one task is suspended while another task is started • Since UAN describes what can happen, you cannot specify an interruption, but rather what can be interrupted (interruptibility) • To specify that A can interrupt B, use A --> B.

  11. Task Operators • Uninterruptible Tasks • Assume all primitive actions are uninterruptible (e.g. pressing a mouse button). • Specify the uninterruptibility of higher-level tasks (e.g. modality) by enclosing them in brackets, `<A>'.

  12. Task Operators • Interleavability • If two tasks can interrupt each other, they are considered interleavable. • Assume that the operator is transitive. • Represented with the double arrow, A <--> B.

  13. Task Operators • Concurrency • If two tasks can be performed in parallel (e.g. by two different users), then the tasks are concurrent • Represented with `||'.

  14. Task Operators • Intervals and Waiting • Can add explicit time intervals between two events. • Two forms: • If task B must be completed within n seconds of task A: `A (t<n) B' • If task B may begin only after n seconds have elapsed since task A: `A (t>n) B'
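
As a rough sketch that is not from the slides, the temporal operators above can be recorded as a small expression structure so that a design tool could keep track of them. The operator symbols follow the UAN conventions listed on slides 8-14; the Python representation (Rel) and all task names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Rel:
        """One UAN-style temporal relation between two tasks or sub-expressions."""
        symbol: str   # '|', '&', '-->', '<-->', '||', '(t<n)', ...
        left: object
        right: object

        def __str__(self):
            return f"({self.left} {self.symbol} {self.right})"

    # Hypothetical tasks combined with the operators from slides 8-14.
    choice        = Rel("|",     "save_file", "discard_file")          # simple choice
    order_indep   = Rel("&",     "enter_name", "enter_address")        # both required, in any order
    interruptible = Rel("-->",   "answer_alert", "edit_document")      # answer_alert may interrupt editing
    interleavable = Rel("<-->",  "edit_text", "adjust_zoom")           # tasks may interrupt each other
    concurrent    = Rel("||",    "author_edits", "reviewer_comments")  # performed in parallel
    timed         = Rel("(t<5)", "press_button", "confirm_dialog")     # confirm within 5 seconds

    for expr in (choice, order_indep, interruptible, interleavable, concurrent, timed):
        print(expr)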

  15. Other Representations • Screen Pictures and Scenarios • UAN describes user actions, but does not describe the format/display of screens • Should supplement UAN with screen layouts and scenarios. • State Transition Diagrams • Typical interface contains various states • To provide a global view of how states are related, add a state transition diagram to the UAN
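
A state transition diagram that supplements the UAN can also be kept in tabular form. The minimal sketch below is not taken from the presentation; the states and events of the file-open dialog are hypothetical.

    # Hypothetical states and events for a simple file-open dialog.
    # Each (state, event) pair maps to the next interface state.
    transitions = {
        ("idle",          "open_dialog"): "browsing",
        ("browsing",      "select_file"): "file_selected",
        ("browsing",      "cancel"):      "idle",
        ("file_selected", "confirm"):     "idle",
        ("file_selected", "cancel"):      "idle",
    }

    def next_state(state, event):
        # Events with no listed transition leave the state unchanged.
        return transitions.get((state, event), state)

    state = "idle"
    for event in ("open_dialog", "select_file", "confirm"):
        state = next_state(state, event)
        print(f"{event} -> {state}")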

  16. Design Rationale • Basic role of UAN is communication • Important to provide reasons behind various decisions • Gives motivation and goals and helps prevent later duplication of mistakes

  17. Usability Specifications

  18. Usability Specifications • Quantitative, measurable goals for knowing when the interface is good enough • Often overlooked, but provide insurance that multiple iterations are converging • For this reason, should be established early

  19. Usability Specification Table • Convenient method for indicating parameters • Contains the following information: • Usability Attribute • Measuring Instrument • Value to be Measured • Current Level • Worst Acceptable Level • Planned Target Level • Best Possible Level • Observed Results
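
One convenient way, not prescribed by the slides, to keep such a table alongside the design is as structured data. The sketch below mirrors the columns listed above; the class name, the benchmark task, and every number are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UsabilitySpec:
        """One row of a usability specification table (columns as listed on slide 19)."""
        attribute: str                   # usability attribute being measured
        instrument: str                  # measuring instrument (benchmark task, questionnaire, ...)
        value_measured: str              # value to be measured, e.g. time on task
        current_level: float
        worst_acceptable_level: float
        planned_target_level: float
        best_possible_level: float
        observed_result: Optional[float] = None   # filled in after user testing

    # Hypothetical example row; all values are invented.
    initial_performance = UsabilitySpec(
        attribute="Initial performance",
        instrument="Benchmark task: add an appointment to the calendar",
        value_measured="Time to complete the task (seconds)",
        current_level=25,
        worst_acceptable_level=20,
        planned_target_level=15,
        best_possible_level=10)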

  20. Usability Attribute • Represents the usability characteristic being measured • Must determine classes of intended users • For each class determine realistic set of tasks • Goal is to determine what user performance will be acceptable

  21. Usability Attributes • Typical attributes include: • Initial Performance: User's performance during the first few uses. • Long-term Performance: User's performance after extended use of the product • Learnability: How quickly the user learns the system • Retainability: How quickly knowledge of how to use the system dissipates

  22. Usability Attributes • Advanced Feature Usage: Usability of sophisticated features • First Impression: Subjective user feelings at first glance • Long-term User Satisfaction: User's opinion after extended use

  23. Measuring Instrument • Method to find a value for a usability attribute • Quantitative, but may be objective or subjective • Objective: based on user task performance • Subjective: deal with user opinion (questionnaires) • Both types are needed to effectively evaluate

  24. Benchmark Tasks • User is asked to perform a task using the interface • Most common objective measure • Task should exercise a single, specific interface feature • Description should be clearly worded without describing how to do it

  25. Questionnaire • Quantitative measure for subjective feelings • Creating a survey that provides useful data is not trivial • Recommend use of a scientifically developed questionnaire (e.g. QUIS)

  26. Values To Be Measured • The metric in which the data value is expressed • Typical metrics include: • Time for task completion • Number of errors • Average scores/ratings on questionnaire • Percentage of task completed in a given time • Ratio of successes to failures • Time spent in errors and recovery

  27. Values To Be Measured • Number of commands/actions used to perform task • Frequency of help/documentation use • Number of repetitions of failed commands • Number of available commands not invoked • Number of times user expresses frustration or satisfaction
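
For concreteness, a few of the metrics listed on slides 26 and 27 can be computed from a session log. The log format and every value below are invented for illustration only.

    # Hypothetical log of one benchmark-task session.
    session = {
        "task_time_s": 148.0,            # time for task completion
        "errors": 3,                     # number of errors
        "time_in_errors_s": 32.0,        # time spent in errors and recovery
        "commands_used": 17,             # commands/actions used to perform the task
        "help_lookups": 2,               # frequency of help/documentation use
        "questionnaire_ratings": [6, 7, 5, 6, 8],   # e.g. ratings on a 1-9 scale
    }

    mean_rating = sum(session["questionnaire_ratings"]) / len(session["questionnaire_ratings"])
    error_share = session["time_in_errors_s"] / session["task_time_s"]

    print(f"Average questionnaire rating: {mean_rating:.1f}")
    print(f"Share of time spent in errors and recovery: {error_share:.0%}")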

  28. Setting Levels • Having determined what to measure and how to measure it, need to set acceptable levels • These levels will be used to determine when the interface has reached the appropriate level of usability • Important to be specific enough so that levels can be reasonably set

  29. Current Level • Present level of the value to be measured • Values can be determined from a manual system, the current automated system, or prototypes • Proof that the usability attribute can be measured • Baseline against which the new system will be judged

  30. Worst Acceptable Level • Lowest acceptable level of user performance • This level must be attained for the product to be considered complete • Not a prediction of how the user will perform, but rather the worst performance that is considered acceptable

  31. Worst Acceptable Level • Tendency/pressure is to set the values too low • Good rule of thumb is to set them at or near the current levels

  32. Planned Target Level • The level of unquestioned usability, the ideal situation • Serves to focus attention on those aspects needing the most work (now or later) • May be based on competitive systems

  33. Best Possible Level • State-of-the-art upper limit • Provides goals for future versions • Gives an indication of the improvement that is possible • Frequently determined by measuring an expert user

  34. Observed Results • Actual values obtained from user testing • Provides quick comparison with projected levels
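
The comparison between observed results and the projected levels can be done mechanically. The following is a minimal sketch, assuming a metric for which lower values are better (such as task time); the function name and the numbers are hypothetical.

    def assess(observed, worst_acceptable, planned_target, lower_is_better=True):
        """Classify an observed result against the specified usability levels."""
        if not lower_is_better:
            # Flip signs so the comparisons below also work for "higher is better" metrics.
            observed, worst_acceptable, planned_target = -observed, -worst_acceptable, -planned_target
        if observed > worst_acceptable:
            return "below the worst acceptable level -- another iteration is needed"
        if observed <= planned_target:
            return "planned target level met"
        return "acceptable, but the planned target has not yet been reached"

    # Hypothetical observed task time of 18 s against a worst acceptable level of 20 s
    # and a planned target level of 15 s (the numbers from the earlier example row).
    print(assess(observed=18, worst_acceptable=20, planned_target=15))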

  35. Setting Levels • There are various methods for estimating the levels: • Existing systems or previous versions of new system • Competitive systems • Performing task manually • Developer performing with prototype • Marketing input based on observations of user performance on existing systems

  36. Setting Levels • The context of the task is important in determining these levels

  37.-39. Example usability table (three slides of example usability specification tables; the tables themselves are not reproduced in the transcript)

  40. Cautions • Each usability attribute should be (realistically) measurable • User classes need to be clearly specified • The number of attributes to be measured should be reasonable. Start small and add as experience grows • All project members should agree on the values

  41. Cautions • The values should be reasonable • If found to be too low, then increase them on the next iteration • If they appear too high, it may be that they were not set realistically, or that the interface needs a lot of work. It is a judgement call.

  42. Expert Reviews, Usability Testing, Surveys, and Continuing Assessment

  43. Introduction • Designers can become so entranced with their creations that they may fail to evaluate them adequately • Experienced designers have attained the wisdom and humility to know that extensive testing is a necessity

  44. Introduction • The determinants of the evaluation plan include: • stage of design (early, middle, late) • novelty of project (well defined vs. exploratory) • number of expected users • criticality of the interface (life-critical medical system vs. museum exhibit support)

  45. Introduction • costs of product and finances allocated for testing • time available • experience of the design and evaluation team

  46. Introduction • The range of evaluation plans might be from an ambitious two-year test to a test lasting only a few days. • The range of costs might be from 10% of a project budget down to 1%.

  47. Expert Reviews • While informal demos to colleagues or customers can provide some useful feedback, more formal expert reviews have proven to be effective. • Expert reviews entail one-half day to one week of effort, although a lengthy training period may sometimes be required to explain the task domain or operational procedures.

  48. Expert Reviews • There are a variety of expert review methods to choose from: • Heuristic evaluation • Guidelines review • Consistency inspection • Cognitive walkthrough • Formal usability inspection

  49. Expert Reviews • Expert reviews can be scheduled at several points in the development process when experts are available and when the design team is ready for feedback. • Different experts tend to find different problems in an interface, so 3-5 expert reviewers can be highly productive, as can complementary usability testing.

  50. Expert Reviews • The danger with expert reviews is that the experts may not have an adequate understanding of the task domain or user communities.