The Field Guide to Human Error Investigations

Presentation Transcript


  1. The Field Guide to Human Error Investigations Chapters 7 – 13 “The New View of Human Error” AST 425

  2. The New View • Human Error is a symptom of trouble deeper inside a system • To explain failure, do not try to explain where people went wrong • Instead, investigate how people’s assessments and actions would have made sense at the time, given the circumstances that surrounded them

  3. Chapter 7- New View • Human error is not the cause, it is the effect or symptom of deeper trouble • Human error is not random, it is systematically connected to features of people’s tools, tasks and operating environment • Human error is not the conclusion of an investigation, it is the beginning

  4. New View • Safety is never the only goal in systems that people operate. Goals are multiple (schedules, economics, competition, etc.) • Trade-offs between safety and other goals often must be made under uncertainty and ambiguity; people decide to “borrow” from the safety goal to accomplish these other goals • Systems are not basically safe; people create safety by adapting under pressure and acting under uncertainty

  5. New View- People • People are vital to “negotiating” safety under these circumstances • Under these conditions, human error should be expected

  6. New View of Error • Errors/Failures should be treated as: • A window on a problem which might happen again • A red flag in the everyday operation of a system and an opportunity to learn about the conditions which created the potential for failure

  7. New View Recommendations • Seldom focus on individuals- everyone is potentially vulnerable • Do not focus on tightening procedures- individuals need discretion to deal with complex operations • Do not get trapped in the promise of new technology (which will present new opportunities for error) • Speak in systemic terms- organizational conditions, operational conditions, or technological features

  8. Chapter 8- Human Data, Fault Finding • Traditional investigations have gathered Human Factors data by: • Interviewing peers or others who give their opinion about the people under scrutiny • Scrutinizing training or other relevant records • Documenting what people did leading up to the accident • This approach fuels the Bad Apple Theory

  9. Human Data • The problem with the previous method lies in human memory: • Memory is not like a tape which can be rewound • Often it is impossible to separate actual events and cues which were observed from later inputs • Human memory tends to order and structure events more neatly than they actually occurred- we add plausibility to fill in gaps

  10. Human Data • Participants should be allowed to tell their own story, with the investigator asking questions such as: • What were you seeing? • What were you focusing on? • What were you expecting to happen? • What pressures were you experiencing? • Were you making any operational trade-offs? • Were you trained to deal with this situation? • Were you reminded of any previous experience?
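
  These questions lend themselves to a reusable debrief checklist. Below is a minimal sketch in Python; the DEBRIEF_QUESTIONS list and the run_debrief function are illustrative names, not something from the book or any investigation standard.

      # Debrief checklist built from the questions above (illustrative names).
      DEBRIEF_QUESTIONS = [
          "What were you seeing?",
          "What were you focusing on?",
          "What were you expecting to happen?",
          "What pressures were you experiencing?",
          "Were you making any operational trade-offs?",
          "Were you trained to deal with this situation?",
          "Were you reminded of any previous experience?",
      ]

      def run_debrief(participant: str) -> dict[str, str]:
          """Let the participant tell their story first, then record
          free-form answers keyed by question."""
          print(f"Debrief with {participant}: let them tell their story.")
          return {q: input(f"{q}\n> ") for q in DEBRIEF_QUESTIONS}

  The point of fixing the questions in one place is consistency: every participant gets the same open-ended prompts, rather than questions shaped by what the investigator already believes went wrong.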

  11. Chapter 9, Reconstructing the Unfolding Mindset • Lay out the sequence of events in time • Divide the sequence of events into episodes • Find the data you now know to have been available to people during each episode- was the right data available? Was it complete? • Identify what was observed during the episode and why it made sense at the time (particularly harsh or salient cues will attract attention even if they are little understood)- this is the hard part
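
  One way to make this reconstruction concrete is a timeline of cues grouped into episodes, recording for each cue whether it was available in the situation and whether it was actually observed. A minimal sketch in Python follows; all class and function names are illustrative, not taken from the book.

      from dataclasses import dataclass, field

      @dataclass
      class Cue:
          description: str
          available: bool   # was the data present in the situation?
          observed: bool    # did the people involved actually notice it?

      @dataclass
      class Episode:
          label: str                            # e.g. "descent initiated"
          cues: list[Cue] = field(default_factory=list)

      def unnoticed_cues(episodes: list[Episode]) -> list[Cue]:
          """Cues that were available but never observed: the gap between
          what you now know existed and the unfolding mindset at the time."""
          return [c for ep in episodes for c in ep.cues
                  if c.available and not c.observed]

  Separating “available” from “observed” keeps the hindsight data apart from the participants’ view, which is exactly the distinction the chapter asks investigators to preserve.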

  12. Chapter 10- Patterns of Failure • Technology- new technology doesn’t eliminate human error, it changes it- attention shifts from managing the process to managing the automation interface • Automation relies on monitoring- something humans aren’t good at for infrequent events • Many automated systems provide too little feedback for operators to detect discrepancies

  13. Ch. 10 • Pilots often interpret their automation based on what they believe they have told it to do, and not on the (often weaker, more ambiguous) cues as to what is actually happening • It takes a very compelling cue to get pilots to change this mindset.

  14. Ch. 10- drift • Accidents don’t just occur, they are the result of an erosion of margins that went unnoticed- less defended systems are more vulnerable (e.g. a J-3 Cub in someone’s barn vs. a 747) • Often the absence of adverse consequences of violations leads people down the wrong path- “the normalization of deviance”- to understand why, we need to understand the complexity behind the violation • Recognize that safety is not a constant- what causes an accident today may not tomorrow

  15. Ch. 10 • Real progress in safety lies in seeing the similarities between events, which may highlight particular patterns toward breakdown (e.g. the Airbus being in vertical speed mode rather than flight path angle mode)

  16. Chapter 11- Writing Recommendations • Can be “high end” (recommending the reallocation of resources) or “low end” (changing a procedure) • The easier a recommendation can be sold, the less effective it will be- true solutions are seldom simple and are usually costly • Recommendations should focus on change, not “diagnosis”

  17. Chapter 12- Learning from Failure • Use outside “objective” auditors • Avoid accepting errors as “just human” • Avoid “making an example” of individual failures- this just makes people avoid reporting errors • Avoid compartmentalization- seek to find commonalities in failure • Avoid passing the buck- safety is everyone’s problem

  18. Ch. 12 • Those making safety decisions should never divorce themselves totally from day-to-day operations, lest they become immersed in an idealized world

  19. Chapter 13- In Summary • You cannot use the outcome of a sequence of events to assess the quality of the decisions and actions that led up to it • Don’t mix elements from your own reality now into the reality that surrounded people at the time; resituate performance in the circumstances that brought it forth

  20. Summary • Don’t present the people you investigate with a shopping bag full of epiphanies (“it should have been so clear!”), as this is seldom the way the evidence presented itself • Recognize that consistencies, certainties and clarities are products of your hindsight, not data available to those in the situation

  21. Summary • To understand human performance, you must understand how the situation unfolded around people at the time- try to understand how their actions made sense then • Remember that the point of a human error investigation is to understand why people did what they did, not to judge them for what they did not do.

  22. Finally • Remember the fundamental difference between “explaining” and “excusing” human performance- some people always need to bear the brunt of a system’s failure; usually it’s those on the blunt end of a system (managers, supervisors, etc.)
