
Testing Interactive Software: A Challenge for Usability and Reliability








  1. Testing Interactive Software: A Challenge for Usability and Reliability • Sandra Basnyat, LIIHS-IRIT, University Toulouse 3, 31062 Toulouse, France, basnyat@irit.fr • Regina Bernhaupt, ICT&S-Center, Universität Salzburg, 5020 Salzburg, Austria, Regina.Bernhaupt@sbg.ac.at • Ronald Boring, Idaho National Laboratory, Idaho Falls 83415, Idaho, USA, ronald.boring@inl.gov • Philippe Palanque, LIIHS-IRIT, University Toulouse 3, 31062 Toulouse, France, palanque@irit.fr • Chris Johnson, Dept. of Computing Science, University of Glasgow, Glasgow, G12 8QQ, Scotland, johnson@dcs.gla.ac.uk • Special Interest Group, CHI 2006, Montréal, 22nd April 2006

  2. Outline of the SIG • Short introduction to the SIG (10 min) • Short presentations (20 min) • Software engineering testing for reliability (Philippe) • Human reliability for interactive systems testing (Ron) • Incident and accident analysis and reporting for testing (Sandra) • HCI testing for usability (Regina) • Gathering feedback from the audience (10 min) • Presentation of some case studies (20 min) • Listing of issues and solutions for interactive systems testing (20 min) • Discussion and summary (10 min)

  3. Introduction • What are interactive applications? • What is interactive applications testing? • Coverage testing • Non-regression testing • Usability versus reliability • What about usability testing of a non-reliable interactive application? • What about reliable applications with poor usability?

  4. Interactive Systems

  5. A paradigm switch • Control flow is in the hands of the user • The interactive application sits idle, waiting for input from the users • Code is sliced • Execution is influenced by internal and external states • Nothing new but …

  6. Classical Behavior • Flowchart: Read Input → Process Input → Exit? → End (looping back to Read Input while the answer is no); see the sketch below
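
To make the flowchart concrete, here is a minimal Python sketch (ours, not from the slides) of the classical read-process loop, in which the program rather than the user owns the control flow:

```python
def classical_main():
    """Classical behavior: the program drives the dialogue with the user."""
    while True:
        data = input("> ")              # Read Input: block until the user answers
        if data.strip().lower() == "exit":
            break                       # Exit? yes -> End
        print(data.upper())             # Process Input (placeholder processing)

if __name__ == "__main__":
    classical_main()
```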

  7. Event-based Functioning • Diagram: at startup, the application registers its event handlers (Event Handler 1 … Event Handler n) with the window manager • At runtime, the window manager places events in an event queue; the application gets the next event, dispatches it to the matching handler, then waits for the next event until the application is finished; see the sketch below
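
By contrast, an event-driven application registers handlers at startup and then cedes control: the window manager decides which code runs and when. The sketch below is a hypothetical stand-in using only the Python standard library (not a real window-manager API) that mimics the registration/dispatch cycle pictured on the slide:

```python
import queue

handlers = {}                                # event-handler registration table

def register(event_type, handler):
    """At startup: register an event handler with the 'window manager'."""
    handlers[event_type] = handler

def event_loop(events):
    """At runtime: get next event, dispatch it, wait for the next one."""
    while True:
        event_type, payload = events.get()   # Get next event (blocks when idle)
        if event_type == "quit":
            break                            # Finished: the application exits
        handler = handlers.get(event_type)
        if handler:
            handler(payload)                 # Dispatch event to its handler

# Usage: the user, via the window manager, decides what runs and when.
register("click", lambda pos: print("clicked at", pos))
register("key", lambda ch: print("key pressed:", ch))

q = queue.Queue()
for ev in [("click", (10, 20)), ("key", "a"), ("quit", None)]:
    q.put(ev)
event_loop(q)
```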

  8. Safety Critical Systems • Software engineers (system centered): reliability, safety requirements (certification), formal specification, verification / proof, waterfall model / structured process, archaic interaction techniques • Interactive systems / usability experts (user centered): usability, human factors, task analysis & modeling, evaluation, iterative process / prototyping, novel interaction techniques • Safety-critical interactive systems require both: reliability & usability

  9. Some Well-known Examples (1/2)

  10. Some Well-known Examples (2/2)

  11. The Shift from Reliability to Fault-Tolerance • Failures will occur • Mitigate failures • Reduce the impact of a failure • A small demo …

  12. Informal Description of a Civil Cockpit Application • The working mode • The tilt selection mode: AUTO or MANUAL (default: AUTO); the CTRL push-button swaps between the two modes • The stabilization mode: ON or OFF; the CTRL push-button swaps between the two modes; access to the button is forbidden when in AUTO tilt selection mode • The tilt angle: a numeric edit box allows its value to be selected in the range [-15°; 15°]; modifications are forbidden when in AUTO tilt selection mode (see the sketch below)
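
For illustration, the informal rules above translate into a small state model. This is our own sketch (the real application is specified formally; the class name `TiltPanel` is invented):

```python
class TiltPanel:
    """Illustrative model of the tilt-control panel described above."""

    def __init__(self):
        self.tilt_mode = "AUTO"        # AUTO or MANUAL (default: AUTO)
        self.stabilization = "ON"      # ON or OFF
        self.tilt_angle = 0.0          # degrees, constrained to [-15, +15]

    def ctrl_tilt_mode(self):
        """CTRL push-button: swap between AUTO and MANUAL tilt selection."""
        self.tilt_mode = "MANUAL" if self.tilt_mode == "AUTO" else "AUTO"

    def ctrl_stabilization(self):
        """CTRL push-button: swap ON/OFF; forbidden in AUTO tilt mode."""
        if self.tilt_mode == "AUTO":
            raise PermissionError("stabilization button disabled in AUTO mode")
        self.stabilization = "OFF" if self.stabilization == "ON" else "ON"

    def set_tilt_angle(self, degrees):
        """Numeric edit box: modifications forbidden in AUTO tilt mode."""
        if self.tilt_mode == "AUTO":
            raise PermissionError("tilt angle is read-only in AUTO mode")
        if not -15.0 <= degrees <= 15.0:
            raise ValueError("tilt angle must lie in [-15, +15] degrees")
        self.tilt_angle = degrees
```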

  13. Various perspectives of this Special Interest Group • Software engineering testing for reliability • Human reliability testing • Incident and accident analysis and reporting for testing • HCI testing for usability

  14. What do we mean by human error? • Consequence: inconvenience • Consequence: danger

  15. Conceptualizing error • Humans are natural “error emitters” • On average we make around 5-6 errors every hour • Under stress and fatigue that rate can increase dramatically • Most errors are inconsequential or mitigated • Many mistakes have no consequences or impact • Where there may be consequences, defenses and recovery mechanisms often prevent serious accidents

  16. Human Reliability Analysis (HRA) • Classic definition • The use of systems engineering and human factors methods to render a complete description of the human contribution to risk and to identify ways to reduce that risk • What’s missing • HRA can be used to predict human performance issues and to identify human contributions to incidents before they occur • Can be used to design safe and reliable systems

  17. Performance Shaping Factors (PSFs) • Are environmental, personal, or task-oriented factors that influence the probability of human error • Are an integral part of error modeling and characterization • Are evaluated and used during quantification to obtain a human error rate applicable to a particular set of circumstances • Specifically, the basic human error probabilities obtained for generic circumstances are modified (adjusted) for the specific situation; see the worked sketch below
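
As a worked illustration of that last bullet (a simplified sketch, not a full HRA method such as SPAR-H, which adds further corrections): the nominal human error probability is multiplied by each applicable PSF and capped at 1.

```python
def adjusted_hep(nominal_hep, psf_multipliers):
    """Adjust a generic (nominal) human error probability by PSF multipliers.

    Multipliers > 1 degrade performance (stress, poor ergonomics, ...),
    multipliers < 1 improve it; 1.0 is nominal. This is deliberately
    simplified: real methods keep the result a valid probability with
    additional corrections.
    """
    hep = nominal_hep
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)              # a probability can never exceed 1

# Example: nominal HEP of 0.001, degraded by stress (x2) and a poor
# human-machine interface (x10), improved by strong procedures (x0.5).
print(adjusted_hep(0.001, [2, 10, 0.5]))   # 0.01
```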

  18. Example: SPAR-H PSFs

  19. Maximizing Human Reliability • Increasingly, human reliability needs to go beyond being a diagnostic tool to become a prescriptive tool • NRC and the nuclear industry are looking at new designs for control rooms and want plants designed with human reliability in mind, not simply verified after the design is completed • NASA has issued strict Human-Rating Requirements (NPR 8705.2): all space systems designed to come in contact with humans must demonstrate that they impose minimal risk, are safe for humans, and maximize human reliability in the operation of that system • How do we make reliable human systems? Design and Test (“classic” human factors); Model (human reliability analysis)

  20. Best Achievable Practices for HR • The Human Reliability Design Triptych

  21. Concluding Thoughts • Human error is ubiquitous • Pressing need to design ways to prevent human error • Impetus comes from safety-critical systems • Lessons learned from safety-critical systems potentially apply across the board, even to designing usable consumer software • Designing for human reliability requires a merger of two fields • Human factors/HCI for design and testing • Human reliability for modeling

  22. Incidents and Accidents as a Support for Testing • Aim: contribute to a design method for safer safety-critical interactive systems • Inform a formal system model • Ultimate goals • Embedding reliability, usability, efficiency and error tolerance within the end product • While ensuring consistency between models

  23. The Approach (1/2) • Addresses the issue of system redesign after the occurrence of an incident or accident • 2 techniques • Events and Causal Factors Analysis • Marking graphs extracted from a system model • 2 purposes • Ensure the current system model accurately models the sequence of events that led to the accident • Reveal further scenarios that could eventually lead to similar adverse outcomes

  24. The Approach (2/2) • Diagram of the whole process, split into two parts • Incident & accident investigation part: Accident Report → Safety ECF Model → Case Analysis • System design part: System Analysis → Re-model the System → Formal ICO System Model Including Erroneous Events → Extraction of Relevant Marking Graph → Scenarios Analysis → Re-design System Model to Make It Accident-Tolerant

  25. ECFA Chart of the Accident

  26. Marking Trees & Graphs • A marking tree identifies the entire set of reachable states • It is a form of state transition diagram • Analysis support tools are available • However, it can impose considerable overheads for complex systems such as the one in the case study; see the sketch below
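
At its core, building a marking graph is a reachability search over markings. A minimal sketch, assuming a simple `fire` relation supplied by the caller (not the API of any particular analysis tool):

```python
from collections import deque

def marking_graph(initial, enabled_transitions, fire):
    """Breadth-first exploration of the set of reachable markings.

    initial:              the initial marking (must be hashable)
    enabled_transitions:  marking -> iterable of enabled transitions
    fire:                 (marking, transition) -> successor marking
    Returns the reachable markings and the arcs between them.
    """
    seen, arcs = {initial}, []
    frontier = deque([initial])
    while frontier:                   # state-space explosion lurks here
        m = frontier.popleft()
        for t in enabled_transitions(m):
            m2 = fire(m, t)
            arcs.append((m, t, m2))
            if m2 not in seen:
                seen.add(m2)
                frontier.append(m2)
    return seen, arcs

# Toy example: a counter that can increment up to 2 or reset to 0.
states, arcs = marking_graph(
    0,
    lambda m: (["inc"] if m < 2 else []) + (["reset"] if m > 0 else []),
    lambda m, t: m + 1 if t == "inc" else 0,
)
print(sorted(states))   # [0, 1, 2]
```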

  27. The Approach Not Simplified

  28. Usability Evaluation Methods (UEM) • UEMs conducted by experts • Usability inspection methods, guideline reviews, … • Any type of interactive system • UEMs involving the user • Empirical evaluation, observations, … • Any type of interactive system (from low-fi prototypes to deployed applications)

  29. Usability Evaluation Methods (UEM) • Computer-supported UEMs • Automatic testing based on guidelines, … • Task-model-based evaluations, metrics-based evaluations, … • Applications with standardized interaction techniques (Web, WIMP)

  30. Issues of Reliability and Usability • Testing the usability of a non-reliable system? • Constructing reliable systems without considering usability? • Possible ways to enhance, extend, or enlarge UEMs to address these needs?

  31. Gathering feedback from the audience through case studies • Do we need to integrate methods OR develop new methods? • In favor of integration: joint meetings (including software developers) through brainstorming + rapid prototyping (more problems of non-usable reliable systems) • Problems • Some issues are also related to system reliability (ATMs): the problem of testing a prototype versus testing the system • Issues of development time rather than application type • Application type has an impact on the processes selected for development • We don’t know how to build a reliable interactive system … whatever time we have • How can reliability-oriented methods support usability-oriented methods?

  32. Gathering feedback from the audience through case studies • How to design for testability (both the reliability of the software and its usability)? • Is testing enough or do we need proof? • Usability testing operates at a higher level of abstraction (goal-oriented) while software testing operates at a lower level (function-oriented) • Is there an issue with interaction techniques (do we need a precise description of interaction techniques, and is it useful for usability testing)? • Automated testing through simulated user events (how to understand how the user would react?) • Reliability with respect to the intention of the user, not only the reliability of the system per se • Going beyond one instance of use: reproducing the use many times

  33. Gathering feedback from the audience and case studies • Control room (Ron) • Home/mobile: testing in non-traditional environments (Regina) • Mining case study (Sandra)

  34. First Case Study: Control Room

  35. Advanced Control Room Design: Transitioning to New Domains of Human-System Interaction • Typical design: hybrid controls • PBMR conceptual design • Problem: next-generation nuclear power plants, with advanced instrumentation and controls (I&C), increased levels of automation and onboard intelligence, all coupled with large-scale hydrogen production, present unique operational challenges.

  36. Example Software Interface with: • Cumbersome dialog box • No discernible exits • Good shortcuts

  37. Example • UCC = 0.1 × 2 = 0.2 • Table of multipliers from the slide figure: 10, 1, 1, 1, 10, 0.1, 1, 1, 1, 0.1

  38. Second Case Study: Mobile interfaces

  39. Testing Mobile Interfaces • Lab or field • Method selection • Data gathering / analysis • Problematic area: testing in non-traditional environments

  40. Non-Traditional Environments • Combine and balance different UEMs according to usability/reliability issues • Combine lab and field • Select UEMs according to development phase

  41. Third Case Study: Mining Accident

  42. Reminder

  43. Events & Causal Factors Analysis (ECFA) • Provides the scenario of events and causal factors that contributed to the accident • Chronologically sequential representation • Provides an overall picture • Relation between factors • Gain an overall perspective of causal factors such as conditions (pressure, temperature…) and the evolution of system states

  44. Analysing the accident • Fatal mining accident involving human operators, a piping system & a control system • Operators decided to switch from North to South • Fuel didn’t arrive at the plant kilns • Pipes were bled while motors were in operation • Motor speed auto-increased due to low pressure • Fuel hammer effect • Grinder exploded (see the illustrative encoding below)
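
For illustration only, this event sequence can be written down as a chronologically ordered chain with the conditions attached to each event, in the spirit of an ECFA chart (a hypothetical encoding, not the notation used in the actual chart):

```python
# Hypothetical, simplified encoding of the accident's ECFA chain:
# each entry is (event, causal factors / conditions attached to it).
ecfa_chain = [
    ("Operators switch fuel supply from North to South", []),
    ("Fuel does not arrive at the plant kilns", []),
    ("Operators bleed the pipes", ["motors still in operation"]),
    ("Motor speed auto-increases", ["low pressure detected by control system"]),
    ("Fuel hammer effect in the piping", []),
    ("Grinder explodes", []),
]

for i, (event, factors) in enumerate(ecfa_chain, 1):
    suffix = f"   [conditions: {', '.join(factors)}]" if factors else ""
    print(f"{i}. {event}{suffix}")
```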

  45. ECFA Chart of the Accident

  46. Listing of issues and solutions for interactive systems testing

  47. Hybrid methods • Heuristic evaluation refined (prioritisation of heuristics) • Remote usability testing • Task analysis + system modelling • Cognitive walkthrough (as is)

  48. Towards Solutions • Formal models for supporting usability testing • Formal models for incidents and accidents analysis • Usability and human reliability analysis

  49. Usability Heuristics • Heuristics are key factors that comprise a usable interface (Nielsen & Molich, 1990) • Useful in identifying usability problems • Obvious cost savings for developers • 9 heuristics identified for use in the present study • In our framework, these usability heuristics are used as “performance shaping factors” to constitute a usability error probability (UEP); see the hypothetical sketch below
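
The slides do not spell out how the UEP is computed; by analogy with the PSF adjustment sketched earlier, one hypothetical composition (invented names and weights, not the study's actual method) could look like this:

```python
def usability_error_probability(nominal_uep, heuristic_multipliers):
    """Hypothetical UEP: treat each usability heuristic as a PSF-style
    multiplier (> 1 when the heuristic is violated, 1.0 when satisfied)."""
    uep = nominal_uep
    for heuristic, m in heuristic_multipliers.items():
        uep *= m                      # each violated heuristic raises the UEP
    return min(uep, 1.0)              # keep the result a valid probability

# Example with two of Nielsen & Molich's heuristics violated:
print(usability_error_probability(0.01, {
    "visibility of system status": 2.0,    # violated: no progress feedback
    "user control and freedom": 5.0,       # violated: no discernible exits
    "consistency and standards": 1.0,      # satisfied
}))  # 0.1
```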

  50. Heuristic Evaluation and HRA • “Standard” heuristic evaluation versus HRA-based heuristic evaluation
