Reasoning about human error with interactive systems based on formal models of behaviour Paul Curzon Queen Mary, University of London
Acknowledgements • Ann Blandford (UCL) • Rimvydas Rukšėnas (QMUL) • Jonathan Back (UCL) • George Papatzanis (QMUL) • Dominic Furniss (UCL) • Simon Li (UCL) • …+ various QMUL/UCL students
Background • The design of computer systems (including safety critical systems) has historically focused on the hardware and software components of an interactive system • People have typically been left outside the scope of the system as considered for verification
Can we bring users into the development process? • In a way that works at the same level of abstraction as established software development • That accounts for the cognitive causes of error • That doesn’t require historical data to establish error probabilities • That doesn’t demand a strong cognitive science background of the analyst
The Human Error Modelling (HUM) project • Systematic investigation of human error and its causes • Formalise the results in a user model included in the “system” for verification • A model of cognitively plausible behaviour • Investigate ways of “informalising” the knowledge to make it usable in practice • Focus on dynamic, context-aware systems • Improve understanding of actual usability design practice
Systematic Errors • Many errors are systematic • They have cognitive causes • They are NOT due to a lack of knowledge of what should be done • If we understand the patterns of such errors, we can minimise their likelihood through better design • If we formalise the behaviour from which they emerge, we can develop verification tools to identify problems
Post-completion errors (PCEs) • Characterised by a clean-up or confirmation operation that remains after the main goal has been achieved • Infrequent but persistent • Examples: • Leaving the original on the photocopier • Leaving the petrol filler cap at the petrol station • …etc.
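The post-completion error pattern above can be illustrated as a tiny task state machine. This is a hedged sketch, not from the slides: the step names (`place_original`, `take_copies`, and so on) are hypothetical, and the "error" is simply modelled as terminating once the main goal is achieved, so the clean-up step after the goal is omitted.

```python
# Illustrative sketch of a post-completion error in a photocopying task.
# Step names are made up for illustration; they do not come from the slides.
from dataclasses import dataclass, field

@dataclass
class CopierTask:
    steps_done: list = field(default_factory=list)
    goal_achieved: bool = False  # main goal: copies collected

    def place_original(self): self.steps_done.append("place_original")
    def press_copy(self): self.steps_done.append("press_copy")
    def take_copies(self):
        self.steps_done.append("take_copies")
        self.goal_achieved = True  # the main goal is reached here
    def take_original(self):
        # Clean-up step AFTER the main goal: the locus of the PCE.
        self.steps_done.append("take_original")

def run(terminates_on_goal: bool) -> list:
    """Simulate one run; `terminates_on_goal=True` models the slip."""
    task = CopierTask()
    task.place_original()
    task.press_copy()
    task.take_copies()
    if not task.goal_achieved or not terminates_on_goal:
        task.take_original()
    return task.steps_done

# The correct run includes the clean-up step; the erroneous run omits it.
assert "take_original" in run(False)
assert "take_original" not in run(True)
```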
Cognitive principles: • Non-determinism • Relevance • Salience • Mental vs. physical • Pre-determined goals • Reactive behaviour • Voluntary completion • Forced termination
UserModel{goals, actions, …} = …
TRANSITION
([] g, slc: Commit_Action: … )
[] ([] a: Perform_Action: … )
[] Exit_Task: …
[] Abort_Task: …
[] Idle: …
Generic user model in SAL
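The guarded, non-deterministic transition style of the SAL user model can be sketched in Python. This is an assumption-laden analogue, not the actual SAL semantics: the transition names mirror the outline above, but the guards (when each transition is enabled) are invented for illustration.

```python
# Python analogue of a non-deterministic guarded-transition user model.
# Guards are illustrative assumptions, not the slides' actual conditions.
import random

def step(state):
    """Fire one enabled transition, chosen non-deterministically."""
    transitions = {
        "Commit_Action": lambda s: dict(s, committed=True),
        "Perform_Action": lambda s: dict(s, performed=True),
        "Exit_Task": lambda s: dict(s, finished=True),
        "Abort_Task": lambda s: dict(s, finished=True, aborted=True),
        "Idle": lambda s: s,
    }
    # Illustrative guards: actions are performed only once committed to;
    # voluntary completion (Exit_Task) only once the goal seems achieved;
    # forced termination (Abort_Task) and Idle are always possible.
    enabled = ["Commit_Action", "Idle", "Abort_Task"]
    if state.get("committed"):
        enabled.append("Perform_Action")
    if state.get("goal_perceived"):
        enabled.append("Exit_Task")
    name = random.choice(enabled)  # non-deterministic choice
    return name, transitions[name](state)

state = {"committed": False, "goal_perceived": False}
name, state2 = step(state)
assert name in {"Commit_Action", "Idle", "Abort_Task"}
```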
Recent Work: salience and cognitive load • Our early work suggested the importance of salience and cognitive load… • Humans rely on various cues to correctly perform interactive tasks: • procedural cues are internal; • sensory cues are provided by interfaces; • sensory cues can strengthen procedural cueing (Chung & Byrne, 2004). • Cognitive load can affect the strength of sensory & procedural cues.
Aims • To determine the relationship between salience and cognitive load; • To extend (refine) our cognitive architecture with salience and load rules; • To assess the formalisation by modelling the task used in the empirical studies; • To highlight further areas where empirical studies are needed.
Approach • Use fire engine dispatch to develop an understanding of the link between cognitive load and salience • Re-analyse all previous experiments to refine and validate this understanding, identifying the load and salience of individual elements • Informally devise a rule for the relationship • Formalise the informal rule in the user model • Model and verify one detailed experimental scenario: fire engine dispatch • Compare the model’s predicted results with those from the experiment.
Experimental setting • Hypothesis: slip errors are more likely when the salience of cues is not sufficient to influence attentional control. • Variables: intrinsic and extraneous cognitive load.
Results
Interpretation of empirical data • High intrinsic load reduces the salience of procedural cues. • High intrinsic & extraneous load may reduce the salience of sensory cues.
Formal salience and load rules • Types: Salience = {High, Low, None}; Load = {High, Low} • Procedural rule: if default = High ∧ intrinsic = High then procedural = Low else procedural = default • Sensory rule: if default = High ∧ intrinsic = High ∧ extraneous = High then sensory ∈ {High, Low} else sensory = default
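The two rules above can be sketched executably. This is a hedged reading of the slide: the connectives between conditions are assumed to be conjunctions (consistent with the interpretation slide's "high intrinsic & extraneous load"), and the non-deterministic sensory outcome is represented as a set of possible values.

```python
# Sketch of the salience/load rules; conjunctive guards are an assumption.
HIGH, LOW, NONE = "High", "Low", "None"

def procedural_salience(default: str, intrinsic: str) -> str:
    # High intrinsic load degrades an otherwise-High procedural cue to Low.
    if default == HIGH and intrinsic == HIGH:
        return LOW
    return default

def sensory_salience(default: str, intrinsic: str, extraneous: str) -> set:
    # Under high intrinsic AND extraneous load, a High sensory cue MAY be
    # weakened: the rule is non-deterministic, so return the set of outcomes.
    if default == HIGH and intrinsic == HIGH and extraneous == HIGH:
        return {HIGH, LOW}
    return {default}

assert procedural_salience(HIGH, HIGH) == LOW
assert procedural_salience(HIGH, LOW) == HIGH
assert sensory_salience(HIGH, HIGH, HIGH) == {HIGH, LOW}
assert sensory_salience(LOW, HIGH, HIGH) == {LOW}
```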
Levels of overall salience • HighestSalience(…) ≡ … procedural = High ∨ (procedural = Low ∧ sensory = High) • HighSalience(…) ≡ … procedural = None ∧ sensory = High • LowSalience(…) ≡ …
Choice priorities
[] g, slc: Commit_Action:
(HighestSalience(g, …)
∨ (HighSalience(g, …) ∧ ¬(∃h: HighestSalience(h, …)))
∨ (LowSalience(g, …) ∧ ¬(∃h: HighestSalience(h, …) ∨ HighSalience(h, …))))
∧ … → commit[…] = committed; status = …
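The priority scheme sketched above can be phrased as a filter: actions at the highest overall salience level present are the ones that may be committed to; HighSalience actions fire only when no HighestSalience action exists, and LowSalience actions only when neither does. The predicate definitions below are assumptions reconstructed from the (partly garbled) slide, not the model's exact formulas.

```python
# Sketch of salience-based choice priorities; predicates are assumed forms.
HIGH, LOW, NONE = "High", "Low", "None"

def highest_salience(a: dict) -> bool:
    return a["procedural"] == HIGH or (
        a["procedural"] == LOW and a["sensory"] == HIGH)

def high_salience(a: dict) -> bool:
    return a["procedural"] == NONE and a["sensory"] == HIGH

def committable(actions: list) -> list:
    """Return the actions the user model may commit to, by priority level."""
    if any(highest_salience(a) for a in actions):
        return [a for a in actions if highest_salience(a)]
    if any(high_salience(a) for a in actions):
        return [a for a in actions if high_salience(a)]
    return actions  # fall through to the low-salience choices

acts = [
    {"name": "strong_cue", "procedural": HIGH, "sensory": LOW},
    {"name": "weak_cue", "procedural": NONE, "sensory": HIGH},
]
# The procedurally cued action outranks the purely sensory-cued one.
assert [a["name"] for a in committable(acts)] == ["strong_cue"]
```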
Correctness verification • Use model checking to reason about properties of the combined user model and fire engine dispatch system • Compare to the actual results from the experiment
Correctness verification • Functional correctness: System ⊨ EVENTUALLY(Perceived Goal Achieved) • ‘Decide mode’ goal: System ⊨ ALWAYS(Route Constructed → Mode chosen)
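The ‘Decide mode’ property is an invariant (ALWAYS) over reachable states, which a model checker verifies by exhaustive state-space exploration. As a toy illustration only, and on the assumption that the real verification was done with the SAL model checker on a far richer model, here is that idea on a two-variable state space made up for this sketch.

```python
# Toy invariant check by exhaustive reachability, illustrating the idea
# behind ALWAYS(route_constructed -> mode_chosen). The state space and
# transition relation are invented for illustration.
from collections import deque

def check_invariant(initial, successors, holds):
    """Breadth-first search of reachable states; return a counterexample
    state violating `holds`, or None if the invariant is verified."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not holds(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# State: (route_constructed, mode_chosen)
def successors(s):
    route, mode = s
    nxt = set()
    if not mode:
        nxt.add((route, True))   # the user first chooses the mode
    if mode and not route:
        nxt.add((True, mode))    # the route is constructed only afterwards
    return nxt

invariant = lambda s: (not s[0]) or s[1]  # route_constructed -> mode_chosen
assert check_invariant((False, False), successors, invariant) is None
```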
Results (again)
Summary • An abstract (simple) formalisation of salience & load: • close correlation with empirical data for some errors: • Initialisation error - match • Mode error - false positives • Termination error - one condition gives a false negative • further refinement of the salience & load rules requires new empirical studies. • Demonstrates how empirical studies and formal modelling can feed each other.