
Reasoning about human error with interactive systems based on formal models of behaviour


Presentation Transcript


  1. Reasoning about human error with interactive systems based on formal models of behaviour Paul Curzon Queen Mary, University of London

  2. Acknowledgements • Ann Blandford (UCL) • Rimvydas Rukšėnas (QMUL) • Jonathan Back (UCL) • George Papatzanis (QMUL) • Dominic Furniss (UCL) • Simon Li (UCL) • …+ various QMUL/UCL students

  3. Background • The design of computer systems (including safety-critical systems) has historically focused on the hardware and software components of an interactive system • People have typically been left outside the boundary of the system considered for verification

  4. Can we bring users into the development process? • In a way that talks at the same level of abstraction as established software development • That accounts for cognitive causes of error • That doesn’t require historical data to establish probabilities • That doesn’t demand a strong cognitive science background of the analyst

  5. The Human Error Modelling (HUM) project • Systematic investigations of human error and its causes • Formalise results in a user model included in the “system” for verification • Model of cognitively plausible behaviour • Investigate ways of “informalising” the knowledge to make it usable in practice • focus on dynamic context-aware systems • Improve understanding of actual usability design practice

  6. Systematic Errors • Many errors are systematic • They have cognitive causes • NOT due to lack of knowledge of what should be done • If we understand the patterns of such errors, then we can minimise their likelihood through better design • Formalise the behaviour from which they emerge and we can develop verification tools to identify problems

  7. Post-completion errors (PCEs) • Characterised by there being a clean-up or confirmation operation after achievement of the main goal • Infrequent but persistent • Examples: • Leaving the original on the photocopier • Leaving the petrol filler cap behind at the petrol station • …etc.

  8. Experiments: e.g. Fire engine dispatch

  9. Call prioritization

  10. The structure of specifications

  11. Cognitive principles: • Non-determinism • Relevance • Salience • Mental vs. physical • Pre-determined goals • Reactive behaviour • Voluntary completion • Forced termination. Generic user model in SAL:

      UserModel{goals, actions, …} = …
      TRANSITION
        ([] (g, slc): Commit_Action: …)
        [] ([] a: Perform_Action: …)
        [] Exit_Task: …
        [] Abort_Task: …
        [] Idle: …
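
The slide only gestures at the SAL encoding. As a rough illustration of its overall shape, the sketch below fills in a hypothetical goal set and trivial guards; GoalIndex, committed, perceivedGoalAchieved and the rest are invented names, not the HUM project's actual model:

      % A minimal sketch, NOT the actual HUM user model: the goal set,
      % guards and variable names are invented for illustration.
      userSketch: CONTEXT =
      BEGIN
        GoalIndex: TYPE = [1..3];   % hypothetical: three abstract goals

        user: MODULE =
        BEGIN
          INPUT perceivedGoalAchieved : BOOLEAN
          OUTPUT committed : ARRAY GoalIndex OF BOOLEAN
          OUTPUT finished : BOOLEAN
          OUTPUT aborted : BOOLEAN
          INITIALIZATION
            committed = [[g: GoalIndex] FALSE];
            finished = FALSE;
            aborted = FALSE
          TRANSITION
          [
            % commit to some goal g (relevance/salience guard left abstract)
            ([] (g: GoalIndex): Commit_Action:
                NOT committed[g] --> committed'[g] = TRUE)
            []
            % voluntary completion: stop once the goal seems achieved
            Exit_Task: perceivedGoalAchieved --> finished' = TRUE
            []
            % forced termination: give up without achieving the goal
            Abort_Task: NOT perceivedGoalAchieved --> aborted' = TRUE
            []
            Idle: ELSE -->
          ]
        END;
      END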

  12. Recent Work: salience and cognitive load • Our early work suggested the importance of salience and cognitive load… • Humans rely on various cues to correctly perform interactive tasks: • procedural cues are internal; • sensory cues are provided by interfaces; • sensory cues can strengthen procedural cueing (Chung & Byrne, 2004). • Cognitive load can affect the strength of sensory & procedural cues.

  13. Aims • To determine the relationship between salience and cognitive load; • To extend (refine) our cognitive architecture with salience and load rules; • To assess the formalisation by modelling the task used in the empirical studies; • To highlight further areas where empirical studies are needed.

  14. Approach • Use fire engine dispatch to develop an understanding of the link between cognitive load and salience • Re-analyse all previous experiments to refine and validate understanding, identifying the load and salience of individual elements • Informally devise a rule for the relationship • Formalise the informal rule in the user model • Model and verify one detailed experimental scenario - fire engine dispatch • Compare the model’s predicted results with those from the experiment.

  15. Experimental setting • Hypothesis: slip errors are more likely when the salience of cues is not sufficient to influence attentional control. • Variables: intrinsic and extraneous cognitive load.

  16. Fire engine dispatch

  17. Results

  18. Interpretation of empirical data • High intrinsic load reduces the salience of procedural cues. • High intrinsic & extraneous load may reduce the salience of sensory cues.

  19. Formal salience and load rules • Types: Salience ∈ {High, Low, None}; Load ∈ {High, Low} • Procedural: if default = High ∧ intrinsic = High then procedural = Low else procedural = default • Sensory: if default = High ∧ intrinsic = High ∧ extraneous = High then sensory ∈ {High, Low} else sensory = default
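
These rules translate almost directly into SAL. The following is a minimal sketch, assuming invented type and variable names (defaultProc and defaultSens stand for the default salience a designer assigns to a cue); it is not the project's actual encoding. SAL's nondeterministic definition form x IN {…} captures the "may reduce" in the sensory rule:

      % A sketch of the two rules; all names here are hypothetical.
      salienceRules: CONTEXT =
      BEGIN
        Salience: TYPE = {sHigh, sLow, sNone};
        Load: TYPE = {lHigh, lLow};

        rules: MODULE =
        BEGIN
          INPUT defaultProc, defaultSens : Salience
          INPUT intrinsic, extraneous : Load
          OUTPUT procedural, sensory : Salience
          DEFINITION
            % high intrinsic load reduces procedural-cue salience
            procedural = IF defaultProc = sHigh AND intrinsic = lHigh
                         THEN sLow ELSE defaultProc ENDIF;
            % high intrinsic AND extraneous load MAY reduce sensory-cue
            % salience: modelled as a nondeterministic choice over {High, Low}
            sensory IN IF defaultSens = sHigh AND intrinsic = lHigh
                          AND extraneous = lHigh
                       THEN {s: Salience | s = sHigh OR s = sLow}
                       ELSE {s: Salience | s = defaultSens} ENDIF
        END;
      END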

  20. Levels of overall salience • HighestSalience(…) ≡ … procedural = High ∨ (procedural = Low ∧ sensory = High) • HighSalience(…) ≡ … procedural = None ∧ sensory = High • LowSalience(…) ≡ …

  21. Choice priorities

      [] (g, slc): Commit_Action:
         (  HighestSalience(g, …)
          ∨ (HighSalience(g, …) ∧ NOT(∃ h: HighestSalience(h, …)))
          ∨ (LowSalience(g, …) ∧ NOT(∃ h: HighestSalience(h, …) ∨ HighSalience(h, …))))
         ∧ …
         → commit[…]′ = committed; status …
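
Rendered as a concrete SAL guard, the priority scheme might look like the fragment below; it reuses the GoalIndex type and committed variable from the earlier sketch and assumes boolean predicates highestSal, highSal and lowSal (hypothetical names) that encode the salience levels of slide 20:

      % Sketch of the prioritised commit guard: a goal is chosen only if
      % no goal of strictly higher overall salience is available.
      ([] (g: GoalIndex): Commit_Action:
          (  highestSal(g)
          OR (highSal(g) AND
              NOT (EXISTS (h: GoalIndex): highestSal(h)))
          OR (lowSal(g) AND
              NOT (EXISTS (h: GoalIndex): highestSal(h) OR highSal(h))))
          --> committed'[g] = TRUE)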

  22. Correctness verification • Use model checking to reason about properties of the combined user model and fire engine dispatch system • Compare to the actual results from the experiment

  23. Correctness verification • Functional correctness: System ⊨ EVENTUALLY(Perceived Goal Achieved) • ‘Decide mode’ goal: System ⊨ ALWAYS(Route Constructed ⇒ Mode chosen)
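
In SAL, such properties are stated as LTL theorems over the composed user/device model and checked with sal-smc, SAL's symbolic model checker. A minimal sketch, with invented module and observer names:

      % Hypothetical SAL theorems; user, dispatchDevice and the boolean
      % observers are illustrative names, not the project's real model.
      system: MODULE = user || dispatchDevice;

      functional: THEOREM
        system |- F(perceivedGoalAchieved);      % eventually the goal holds

      decideMode: THEOREM
        system |- G(routeConstructed => modeChosen);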

  24. Formal verification & empirical data

  25. Results (again)

  26. Summary • Abstract (simple) formalisation of salience & load: • close correlation with empirical data for some errors; • Initialisation error - match • Mode error - false positives • Termination error - one condition gave a false negative • further refinement of salience & load rules requires new empirical studies. • Demonstrates how empirical studies and formal modelling can feed each other.
