Living with High-Risk Systems

Presentation Transcript


  1. Living with High-Risk Systems Michael S. Tashbook Department of Computer Science University of Virginia September 23, 2002

  2. Categories of Risk • Not all high-risk systems are created equal • We can partition the set of high-risk systems into three classes: • Hopeless Cases • Salvageable Systems • Self-Correcting Systems

  3. Hopeless Cases • This category is composed of systems where the (inevitable) risks far outweigh any reasonable benefits • These systems should just be abandoned — at least in Perrow’s view • Examples: • Nuclear weapons • Nuclear power

  4. Salvageable Systems • Salvageable systems are • systems that we can’t do without, but that can be made less risky with considerable effort, or • systems where the expected benefits are so great that some risks should be run • Examples: • Some marine transport • DNA research

  5. Self-Correcting Systems • This category contains systems that are already self-correcting to some degree, though not completely • Only modest efforts are needed to improve these systems further • Examples: • Chemical plants • Airplanes/Air Traffic Control

  6. Is Abandonment the Answer? • Should systems in the “Hopeless Cases” category be abandoned summarily? • Should drastic modifications be made for other high-risk systems (namely, those in the “Salvageable” category)? • Not necessarily; Perrow’s argument makes several assumptions that may not be true

  7. Perrow’s Assumptions • Current risk assessment theory is flawed • The public is adequately equipped to make rational decisions, and its opinions should be respected by policy experts • Organizational changes will have little effect in increasing system safety

  8. 1. Risk Assessment • Analysis of the risks and benefits offered by new systems — examination of the tradeoffs (if any) • Modern risk assessors work to: • inform and advise on the risks and benefits of new systems • legitimate the risks and reassure the public • second-guess regulatory agencies’ actions

  9. How Safe is Safe Enough? • More precisely, how do we model risk? • Risk is generally modeled mathematically • The problem with this kind of analysis is that it measures only what can be quantified • How much is your life worth?
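
  For concreteness, here is a minimal sketch in Python of the expected-loss style of model this slide criticizes. The probability and dollar figures are invented for illustration; the point is that anything that cannot be priced never enters the calculation.

    # Minimal sketch of a quantitative risk model; all figures are illustrative assumptions.
    def expected_annual_loss(probability_per_year, cost_if_it_happens):
        """Expected loss = probability x consequence -- both must be quantified."""
        return probability_per_year * cost_if_it_happens

    # A hypothetical plant accident: a 1-in-10,000 chance per year, $2 billion in damages.
    print(expected_annual_loss(1e-4, 2e9))  # about $200,000 of expected loss per year

    # There is no term here for dread, inequity, or the value of a life:
    # whatever cannot be quantified drops out of the analysis entirely.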

  10. Biased Interpretations • Problem of systematic biases and public opinion • Does every death have the same impact? • Is a death from diabetes or cancer as bad as a murder? • The public doesn’t seem to think so. • Are fifty thousand annual highway deaths really equivalent to a single nuclear catastrophe?

  11. Systematic Biases • Risk assessment differentiates between voluntary risks and involuntary risks • However, the system doesn’t discriminate between the imposition of risks and the acceptance of risks • This dispassionate cost-benefit approach often leads to “the tyranny of the bean-counters”

  12. Cost-Benefit Analysis (CBA) • CBA ignores the distribution of wealth in society • Risk assessments ignore the social class distribution of risks • CBA relies heavily on current market prices • Thus, low-paid employees are valued less when risks are weighed
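
  A worked sketch of that last point, assuming the common "human capital" approach in which a life is priced as discounted future earnings. The wages, horizon, and discount rate below are invented for illustration.

    # Wage-based valuation of a statistical life -- all figures are illustrative assumptions.
    def discounted_earnings(annual_wage, years=30, discount_rate=0.03):
        """Present value of future earnings, a common CBA proxy for the 'worth' of a life."""
        return sum(annual_wage / (1 + discount_rate) ** t for t in range(1, years + 1))

    high_paid = discounted_earnings(120_000)   # about 2.35 million
    low_paid = discounted_earnings(25_000)     # about 0.49 million
    print(round(high_paid), round(low_paid))
    # The identical fatality risk "costs" almost five times less when it falls on the low earner,
    # purely because the valuation is anchored to current market wages.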

  13. More CBA Assumptions • New risks need only be no higher than risks we have already accepted • Corollary: if existing systems become riskier, safety levels on new systems can be lowered • Competitive markets require risky endeavors

  14. More RA/CBA Criticisms • RA/CBA does not distinguish between: • Addiction and free choice • Active risks and passive risks • This isn’t just a matter of voluntary versus involuntary risk — it’s a question of control • Risk assessors would prefer to exclude the public from decisions that affect their interests

  15. 2. Decision-Making • Risk assessors assert that the public is ill-equipped to make decisions on its own behalf, and cognitive psychologists agree • Humans don’t reason well: • We magnify some dangers while minimizing others • We don’t calculate odds “properly”

  16. Three Types of Rationality • Absolute rationality • Risks and benefits are calculated exactly, offering a clear view of what to do • Bounded rationality • Employs heuristics to make decisions • Social and cultural rationality • Limited rationality has social benefits

  17. Bounded Rationality • People don’t make absolutely rational decisions, possibly due to: • neurological limitations • memory/attention limits • lack of education • lack of training in statistics and probability • Instead, we tend to use hunches, rules of thumb, estimates, and guesses

  18. More on Bounded Rationality “There are two reasons for perfect or deductive rationality to break down under complication. The obvious one is that beyond a certain complicatedness, our logical apparatus ceases to cope—our rationality is bounded. The other is that in interactive situations of complication, agents can not rely upon the other agents they are dealing with to behave under perfect rationality, and so they are forced to guess their behavior. This lands them in a world of subjective beliefs, and subjective beliefs about subjective beliefs. Objective, well-defined, shared assumptions then cease to apply. In turn, rational, deductive reasoning—deriving a conclusion by perfect logical processes from well-defined premises—itself cannot apply. The problem becomes ill-defined.” — W. Brian Arthur, “Inductive Reasoning and Bounded Rationality” (1994)

  19. The Efficiency of Heuristics • Heuristics are useful; they save time, even if they are wrong on occasion • Heuristics: • prevent decision-making “paralysis” • drastically reduce search costs • improve (are refined) over time • facilitate social life • work best in loosely-coupled (slack, buffered) environments
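
  As a toy illustration of the search-cost point, the sketch below (an invented knapsack-style problem) compares exhaustive search, which checks every subset, with a greedy rule of thumb that inspects each item once. The heuristic is far cheaper and usually lands on or near the best answer, though not always.

    # Toy comparison of exhaustive search vs. a heuristic -- an invented example.
    import itertools
    import random

    random.seed(1)
    items = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(15)]  # (value, weight)
    CAPACITY = 30

    def exhaustive_best(items, capacity):
        """Try every subset: guaranteed optimal, but 2**n evaluations."""
        best = 0
        for r in range(len(items) + 1):
            for combo in itertools.combinations(items, r):
                if sum(w for _, w in combo) <= capacity:
                    best = max(best, sum(v for v, _ in combo))
        return best

    def greedy_heuristic(items, capacity):
        """Rule of thumb: take the densest items first -- fast, occasionally suboptimal."""
        total_value = used = 0
        for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if used + weight <= capacity:
                total_value += value
                used += weight
        return total_value

    print(exhaustive_best(items, CAPACITY), greedy_heuristic(items, CAPACITY))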

  20. Pitfalls of Heuristics • Heuristics rely on the problem context; if this is wrong, then the resulting action will be inappropriate • Context definition is subtle and difficult • Heuristics are related to intuitions • Intuitions are a form of heuristic • Intuitions may be held even in the face of contrary evidence

  21. Rationality and TMI • The TMI (Three Mile Island) accident occurred shortly after the plant was put into service • Absolute rationality acknowledges that a problem was bound to happen eventually; it just happened sooner rather than later • Is this comparable to the “1 × 10⁻⁹ standard”?
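
  If the “1 × 10⁻⁹ standard” refers to the civil-aviation target of at most 10⁻⁹ catastrophic failures per flight hour, a back-of-the-envelope reading (fleet size and utilization below are assumptions, not data) shows how "bound to happen eventually" and "extremely rare" can both be true:

    # Rough reading of a 1e-9-per-flight-hour failure target.
    # Fleet size and annual utilization are illustrative assumptions.
    failure_rate_per_hour = 1e-9
    fleet_size = 5_000                 # aircraft (assumed)
    hours_per_aircraft_year = 3_000    # flight hours per aircraft per year (assumed)

    fleet_hours_per_year = fleet_size * hours_per_aircraft_year      # 15 million hours
    expected_events_per_year = failure_rate_per_hour * fleet_hours_per_year
    print(expected_events_per_year)    # about 0.015 -> roughly one expected event every ~67 years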

  22. Rationality and TMI (cont’d) • This may be true, but is it the point? • TMI was a new type of system, and no heuristics existed for it at the time • Even though problems may be rare, they can be very serious • Experts predicted that an accident like TMI was unlikely, yet it happened; could they have been wrong?

  23. Bounded Rationality vs. TMI • The logic of the public response to TMI was technically faulty; even so, it was efficient and understandable • Experts have been wrong before; it’s efficient to question them • Bounded rationality is efficient because it avoids extensive effort • Can John Q. Public make a truly informed decision about nuclear power?

  24. Social and Cultural Rationality • Our cognitive limits are a blessing rather than a curse • There are two reasons for this: • Individuals vary in their relative cognitive abilities (multiple intelligences theory) • These differences encourage social bonding • Individual limitations or abilities lead to different perspectives on (and solutions to) a given problem

  25. Risk Assessment Studies • Clark University study of experts and the lay public • The two groups disagreed on how to judge the risk of some activities • Disaster potential seemed to explain the discrepancy between perceived and actual risk • For the public, dread/lethality ratings were accurate predictors of risk assessments • A subsequent study identified three “factors” (clusters of interrelated judgments)

  26. Dread Risk • Associated with: • lack of control over activity • fatal consequences • high catastrophic potential • reactions of dread • inequitable risk-benefit distribution • belief that risks are not reducible • Correlation with interactively complex, tightly-coupled systems

  27. Unknown Risk • This factor includes risks that are: • unknown • unobservable • new • delayed in their manifestation • This factor is not as closely related, conceptually, to interaction and coupling as dread risk is

  28. Societal/Personal Exposure • This factor measures risks based on: • the number of people exposed • the rater’s personal exposure to the risk in question • Of all three factors, dread risk was the best predictor of perceived risk

  29. Thick vs. Thin Descriptions • A “thin description” is quantitative, precise, logically consistent, economical, and value-free • A “thick description” recognizes subjective dimensions and cultural values, and expresses a skepticism about human-made systems

  30. 3. Organizational Solutions • In general, risky enterprises are organizational enterprises • Tightly controlled, highly centralized, authoritarian organizations should be put into place to run risky systems and eliminate “operator error” • But does this really help things?

  31. Suggested Organization Types

  32. Where Does the Problem Lie? • Technology? • “[W]e are in the grip of a technological imperative that threatens to wipe out cultural values….” • Capitalism? • Private profits lead to short-run concerns • Social costs are borne by everyone • Greed? • Private gain versus the public good

  33. The Problem of Externalities • Externalities are the social costs of an activity (pollution, injuries, anxieties) that are not reflected in its price • Social costs are often borne by those who receive no benefit from the activity, or who are even unaware of it • Systems with identifiable/predictable victims are more likely to consider externalities
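
  A minimal sketch of the pricing point, with invented figures: the market price tracks the producer's private cost, so a pollution cost borne by neighbors never appears in it.

    # Externalities: costs that never enter the price -- all figures are illustrative assumptions.
    private_cost = 100.0                  # what the producer pays per unit produced
    external_cost = 30.0                  # pollution/injury cost borne by third parties
    market_price = private_cost + 10.0    # private cost plus the producer's margin; no externality term

    social_cost = private_cost + external_cost
    print(market_price, social_cost)      # 110.0 130.0 -- the activity looks cheaper than it really is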

  34. A New Cost-Benefit Analysis • How risky are the systems we have been considering, judged solely in terms of catastrophic potential? • How costly are the alternative ways (if any) of producing the same outputs?

  35. The Final Analysis • Systems are human constructs, whether carefully designed or unplanned emergences • These systems are resistant to change • System catastrophes are warning signals, but not the ones we think • These signals come not from individual errors, but from the systems themselves

  36. Living with High-Risk Systems Michael S. Tashbook Department of Computer Science University of Virginia September 23, 2002
