
Traditional and Current Risk Analysis






Presentation Transcript


  1. Traditional and Current Risk Analysis Tony Cox MORS Workshop April 14, 2009

  2. Traditional Risk Analysis • Risk assessment: How bad is it? • Hazard identification: What could go wrong? How? • How likely is it? • Fault trees, event trees, PRA, Monte Carlo simulation • So what? • Consequence modeling and evaluation • Risk communication: What to say about it? • Document and compare risks • Risk = Threat x Vulnerability x Consequence (?) • Red, yellow, green? • Risk management: What to do about it? • Request/allocate resources to biggest risks first? • (Risk attribution: Who to blame for it, how much?)

  3. Two ways to manage risks • Choose actions to optimize risk reduction (subject to constraints) • Budget constraints • Interactions among threats, vulnerabilities, consequences, countermeasures • Minimax: Anticipate attacker’s response • Identify, document and rank concerns, then tackle biggest ones first.

  4. Two ways to manage risks • Choose actions to optimize risk reduction (subject to constraints) • Budget constraints • Interactions among threats, vulnerabilities, consequences, countermeasures • Minimax: Anticipate attacker’s response • Identify, document and rank concerns, then tackle biggest ones first. • This talk: First way is better. Second can be surprisingly bad.

  5. Terrorism risk assessment • Risk matrices • Red, yellow, green; high, medium, low • Risk scoring formulas • Risk ranking and priority lists • Risk simulation models • Risk optimization models • Attacker-defender models, game theory • Optimize resource allocations

  6. Risk Matrices

  7. MIL-STD-882c, January, 1993 http://www.weibull.com/mil_std/mil_std_882c.pdf

  8. Source: FAA, 2007 www.faa.gov/airports_airtraffic/airports/resources/advisory_circulars/media/150-5200-37/150_5200_37.doc

  9. Now, everyone’s doing it • National and international standards • Guidance documents • Computer and IT security • Threat, vulnerability, consequence ratings for terrorism threats • Compliance programs • Training • Certification programs

  10. Example risk matrices Swedish Rescue Service U.S. FHA Supply Chain Digest Australian Government

  11. Stop!

  12. How well does this work?

  13. Should B outrank A? [Figure: two points, A and B, plotted on a risk matrix]

  14. Not necessarily! [Figure: points A and B with an iso-risk contour]

  15. How bad can misrankings be? • Misrankings always exist • for any coloring and smooth, downward-sloping indifference-curve contours • Unavoidable • Up to 100% of points can be misranked (!) • if frequency and severity are negatively correlated • More than three colors: Spurious resolution • inconsistent with any quantitative model, increases misrankings • Common in practice
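
A minimal sketch of how such a misranking arises, using hypothetical 3x3 category cutoffs and a typical green/yellow/red coloring (none of these numbers come from the slides). Point B has the higher frequency and lower severity, lands in a red cell, yet carries less quantitative risk than the yellow-cell point A:

```python
# Illustrative sketch only: cutoffs and coloring are assumed, not from any standard.

def category(x, cuts=(1/3, 2/3)):
    """Map a value in [0, 1] to category 0 (low), 1 (medium), or 2 (high)."""
    return sum(x > c for c in cuts)

def color(prob, severity):
    """A typical 3x3 coloring: green toward the lower-left, red toward the upper-right."""
    p, s = category(prob), category(severity)
    if p + s <= 1:
        return "green"
    if p + s >= 3:
        return "red"
    return "yellow"

def quantitative_risk(prob, severity):
    return prob * severity

# Two hypothetical points, negatively correlated in frequency and severity.
A = (0.64, 0.64)   # medium probability, medium severity
B = (0.70, 0.35)   # high probability, medium-low severity

for name, (p, s) in [("A", A), ("B", B)]:
    print(name, color(p, s), round(quantitative_risk(p, s), 3))
# A yellow 0.41
# B red    0.245  -> the matrix ranks B above A, reversing the quantitative order
```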

  16. 100% misclassified

  17. Some other problems • Ambiguous (and mysterious) rating scales • “Frequency” is not well-defined • “Severity” is not well-defined • “Risk” (and risk reduction) are not well-defined • Recommended decisions are often bad • Budget constraints? • Diversification? • Optimization?

  18. Mysterious definitions “Almost certain: Is expected to occur in most circumstances Likely: Will probably occur in most circumstances Possible: Might occur at some time Unlikely: Could occur at some time Rare: May occur only in exceptional circumstances” www.health.gov.au/internet/main/publishing.nsf/Content/mental-pubs-n-safety-toc~mental-pubs-n-safety-5~mental-pubs-n-safety-5-7

  19. Some other problems • Ambiguous (and mysterious) rating scales • “Frequency” is not well-defined • Is MTBF ~ U[0, 8] more frequent than MTBF = 4? • No way to define “frequency” so that smaller is always better. (MTBF = 4 is preferred if mission life is 3 years, but not if it is 5 years) • “Severity” is not well-defined • “Risk” (and risk reduction) are not well-defined • Recommended decisions are bad • Budget constraints? • Diversification? • Optimization?
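
A worked sketch of the MTBF bullet, under the simple reading that the time to failure equals the MTBF exactly (a deterministic lifetime); other lifetime models change the numbers, but the qualitative point stands: whether MTBF = 4 or MTBF ~ U[0, 8] is preferable depends on the mission length, so "more frequent" is not well-defined on its own:

```python
# Sketch only: assume the time to first failure equals the MTBF.
# P(mission succeeds) = P(failure time > mission length).

def p_survive_fixed(mission_years, mtbf=4.0):
    """Failure occurs exactly at mtbf years."""
    return 1.0 if mtbf > mission_years else 0.0

def p_survive_uniform(mission_years, lo=0.0, hi=8.0):
    """Failure time is uniformly distributed on [lo, hi]."""
    return max(0.0, min(1.0, (hi - mission_years) / (hi - lo)))

for mission in (3.0, 5.0):
    print(mission, p_survive_fixed(mission), p_survive_uniform(mission))
# 3.0 1.0 0.625  -> MTBF = 4 is preferred for a 3-year mission
# 5.0 0.0 0.375  -> MTBF ~ U[0, 8] is preferred for a 5-year mission
```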

  20. Ambiguous rating scales and definitions

  21. Ambiguous rating scales and definitions • Descriptions are not mutually exclusive • Suppose that controls are in place to prevent an attack, but they are ineffective during snow storms

  22. Suppose Pr(H) = Pr(L) = 1/2? • Ratings do not handle uncertainty • Example: How to rate “likelihood” of an event judged equally likely to be “H” or “L” • (Risk matrix-based standards never address this question.)

  23. Limitations of “Likelihood” ratings • Example: Suppose likelihood ratings are: • Low (L): 0 ≤ p ≤ 0.4 • Medium (M): 0.4 < p < 0.6 • High (H): 0.6 ≤ p ≤ 1 • Then the “likelihood” of an event that is equally likely to be “H” or “L” should be… • “L” if the two equally likely values for p are (0, 0.7) • “M” if the two equally likely values for p are (0.3, 0.7) • “H” if the two equally likely values for p are (0.3, 1) • Need numbers to know what to do.
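
A small sketch of slide 23's point, using the expected probability as one reasonable way to rate the 50/50 mixture (the slide itself prescribes no rule, which is exactly the problem):

```python
# Rating bands from slide 23: L = [0, 0.4], M = (0.4, 0.6), H = [0.6, 1].

def label(p):
    if p <= 0.4:
        return "L"
    if p < 0.6:
        return "M"
    return "H"

cases = [(0.0, 0.7), (0.3, 0.7), (0.3, 1.0)]   # each pair of p values is equally likely
for p_lo, p_hi in cases:
    expected_p = 0.5 * p_lo + 0.5 * p_hi
    print((p_lo, p_hi), "->", label(expected_p))
# (0.0, 0.7) -> L   (expected p = 0.35)
# (0.3, 0.7) -> M   (expected p = 0.50)
# (0.3, 1.0) -> H   (expected p = 0.65)
# In every case the two underlying values are labeled "L" and "H",
# yet the rating of the mixture can be L, M, or H: the numbers are needed.
```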

  24. From likelihood and impact ratings to risk ratings

  25. 100% probability of a loss of 28 vs. 40% probability of a loss of 40: Why should an expected loss of 0.40 × 40 = 16 outrank a sure loss of 28?

  26. 100% probability of a loss of 28 vs. 40% probability of a loss of 40: No way to objectively classify highly uncertain impacts.

  27. From risk rating to risk management: Setting risk management priorities without costs or budgets. (Costs? Budget?)

  28. Why is 100% probability of minor impact rated the same as 100% probability of critical impact? (Range compression)

  29. How should we rate a risk with 5% probability of moderate impact and 95% probability of minor impact?

  30. Summary so far… • Risk matrices have some problems • “Frequency” is usually undefined • “Severity” is usually ambiguous • Risk ratings can be worse than useless • Misranking can be worse-than-random • Recommendations are often nonsensical • Give higher management priority to smaller risks

  31. Can risk formulas do better? • Remove artificial discretization • Allow smooth indifference curves … but the fundamental problems remain

  32. Risk formulas Examples: • Risk = Threat × Vulnerability × Consequence • All values are expected values (RAMCAP) • Risk = frequency × severity • Risk = Σj wj xj, where • xj = level of bad attribute j • wj = importance weight for attribute j • Risk (expected loss) = f(attributes)

  33. Example: Additive scoring rule Bioterrorism risk scoring (Macintyre, 2006): • Probability of attack = ease of procurement + ease of weaponization + history of use • Impact = lack of preventability of disease + lack of treatability • Score each factor as: 0 = no, 1 = low, 2 = high • Priority score = Probability + Impact

  34. Example: Additive scoring rule Implications of: • Priority score = Probability score + Impact score • (0 + 2 = 2 + 0) • (unobtainable agent, untreatable effect) ~ (obtainable agent, treatable effect) • Zero risk ~ positive risk → bad advice!
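
A minimal sketch of the additive rule and its pathology; the factor scores use the slides' 0/1/2 scale, but the two agents and their scores are hypothetical:

```python
# Additive bioterrorism scoring rule from slides 33-34.
# Factor scores: 0 = no, 1 = low, 2 = high. Agents X and Y are invented examples.

def priority_score(procurement, weaponization, history, not_preventable, not_treatable):
    probability = procurement + weaponization + history
    impact = not_preventable + not_treatable
    return probability + impact

# Agent X: essentially unobtainable (all probability factors 0) but untreatable (impact 2).
# Agent Y: easily obtained (probability 2) but fully preventable and treatable (impact 0).
x = priority_score(0, 0, 0, 2, 0)
y = priority_score(2, 0, 0, 0, 0)
print(x, y)   # 2 2 -> a zero-risk agent gets the same priority as a positive-risk one
```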

  35. Plotting  - Establish probability scale on y-axis  - Establish impact scale on x-axis  - Priority regions are set by the risk assessors Red - Highest PriorityYellow - Medium PriorityGreen - Low Priority http://www.mitre.org/work/sepo/toolkits/risk/procedures/RiskPlotting.htm l

  36. TVC paradigm • Risk = threat x vulnerability x consequence • Threat = relative probability of attack • Reflects attacker’s intent, capability, decisions • Budget and resource constraints? Opportunity costs? • Vulnerability = probability that attack succeeds, if attempted • Partial degrees of success? • Consequence = defender’s loss from successful attack • Sum over multiple risks to get total risk • Risk management: Allocate resources to defend biggest risks first • TVC priority list
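
A sketch of what the TVC prescription amounts to in code: score each asset by T x V x C, then defend in descending order of score. The asset names and numbers are hypothetical; the slides that follow show why this scalar ranking can break down:

```python
# Hypothetical assets scored by the TVC formula on slide 36.

assets = {
    # name: (threat, vulnerability, consequence)
    "chemical plant": (0.20, 0.8, 500),
    "rail bridge":    (0.05, 0.9, 900),
    "stadium":        (0.10, 0.5, 300),
}

def tvc_risk(t, v, c):
    return t * v * c

priority_list = sorted(assets, key=lambda a: tvc_risk(*assets[a]), reverse=True)
print(priority_list)
# ['chemical plant', 'rail bridge', 'stadium']  (risks 80.0, 40.5, 15.0)
```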

  37. E(T)E(V)E(C) comparison

  38. E(T)E(V)E(C) is irrelevant for risk

  39. E(T)E(V)E(C) ≠ E(TVC)
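
A small Monte Carlo sketch of why the product of expectations is not the expected product when T, V, and C share a common driver (here a hypothetical "capable adversary" indicator); the distributions are invented for illustration:

```python
# Sketch of slides 37-39: with correlated T, V, C, E(T)E(V)E(C) != E(TVC).
import random

random.seed(0)
samples = []
for _ in range(100_000):
    capable = random.random() < 0.5          # one common driver of all three
    t = 0.8 if capable else 0.2              # threat (probability of attack)
    v = 0.9 if capable else 0.3              # vulnerability (probability attack succeeds)
    c = 100 if capable else 10               # consequence of a successful attack
    samples.append((t, v, c))

n = len(samples)
e_t = sum(t for t, _, _ in samples) / n
e_v = sum(v for _, v, _ in samples) / n
e_c = sum(c for _, _, c in samples) / n
e_tvc = sum(t * v * c for t, v, c in samples) / n

print(round(e_t * e_v * e_c, 1))   # ~16.5, the product of expectations
print(round(e_tvc, 1))             # ~36.3, the expected risk, more than twice as large
```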

  40. No other scalar summaries can work, either

  41. Summing risks • “The risk associated with one asset can be added to others to obtain the aggregate risk for an entire facility… [and] can be aggregated and/or compared across whole industries and economic sectors. This is precisely the goal of DHS.” (RAMCAP Framework) • Is this a good idea? • No: Risks should not be added! • T1V1C1 + T2V2C2 is not valid

  42. Risks are not additive • Let success prob. for “attack via front door” = V1 = 0.5; and let V2 = 0.5 for “attack via back door”. (Let T = C = 1 for both.) • If these two vulnerabilities, 0.5 and 0.5, are independent, then what is the total risk? • It is not T1V1C1 + T2V2C2 = 0.5 + 0.5 = 1. • It is 1 − 0.5 × 0.5 = 0.75 • If the two vulnerabilities are dependent, then total risk can be anywhere between 0.5 (if they are perfectly positively correlated) and 1 (if they are perfectly negatively correlated)
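
The same arithmetic as a sketch, with the dependence cases spelled out (the extreme cases match the slide's bounds of 0.5 and 1):

```python
# Sketch of slide 42: with T = C = 1, "total risk" is P(at least one attack
# succeeds), which depends on how the two vulnerabilities are related.

v1 = v2 = 0.5

naive_sum   = v1 + v2                   # 1.0, the (invalid) additive answer
independent = 1 - (1 - v1) * (1 - v2)   # 0.75
perfect_pos = max(v1, v2)               # 0.50, both doors succeed or fail together
perfect_neg = min(1.0, v1 + v2)         # 1.00, exactly one door always succeeds

print(naive_sum, independent, perfect_pos, perfect_neg)
# 1.0 0.75 0.5 1.0 -> total risk lies between 0.5 and 1 depending on dependence;
# it equals the sum only in the extreme negative-dependence case.
```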

  43. Doubts about additivity • Two attackers; each can grab either the $10 nearest it, or the $8 in the middle (layout: attacker A | $10 | $8 | $10 | attacker B) • Which asset should we protect, if we can only afford to protect one?

  44. Doubts about additivity (layout: attacker A | $10 | $8 | $10 | attacker B) “Relative attractiveness” solution: • For each attacker, Pr(grab $8) = relative value of asset = 8/18 = 4/9. • Pr(grab $10) = 10/18 = 5/9 • Sum of threats: • 4/9 + 4/9 = 8/9 for the $8 > 5/9 for each $10. • So, defend the $8!

  45. Limitations of scoring and ranking

  46. Fundamental limitations of risk scoring and ranking Risk formulas and ranks do not exploit • Correlations • Diversification • Interactions, portfolios of countermeasures • Scale of risk management decisions • Distributed risk management • Budget, resource constraints

  47. Risk scores fail to diversify • Any risk formula that ranks or scores identical prospects identically fails to optimally diversify portfolios of risk-reducing measures

  48. Risk scores fail to diversify • Example: We can afford to protect 10 sites. • Benefits depend on unknown state of world • Two equally likely states (attacks on A vs. B sites): • Protecting a type A site yields (0 or 100) benefits • Protecting a type B site yields (100 or 0) benefits • Optimal risk-averse decision: Protect 5 of each type of site • Benefit = 500 (variance = 0) • Risk scoring: Protect 10 of the higher-scoring type • Benefit = 0 or 1000 (maximum variance)
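
A sketch of the arithmetic behind slide 48, comparing the two portfolios on expected benefit and variance (the state names are just labels for the two equally likely states):

```python
# Slide 48: diversified 5+5 protection vs. putting all 10 on the higher-scoring type.
from statistics import mean, pvariance

def benefit(n_a, n_b, state):
    """Each protected site of the attacked type yields 100; the other type yields 0."""
    per_a, per_b = (100, 0) if state == "attacks_on_A" else (0, 100)
    return n_a * per_a + n_b * per_b

states = ["attacks_on_A", "attacks_on_B"]   # equally likely

for label, (n_a, n_b) in [("diversified 5+5", (5, 5)), ("all 10 on one type", (10, 0))]:
    outcomes = [benefit(n_a, n_b, s) for s in states]
    print(label, "expected =", mean(outcomes), "variance =", pvariance(outcomes))
# diversified 5+5      expected = 500  variance = 0
# all 10 on one type   expected = 500  variance = 250000  (outcomes 0 or 1000)
```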

  49. Risk scores ignore interactions • Risk formulas that score each countermeasure based on how much it reduces risk do not create optimal portfolios of risk-reducing measures. • Budget constraints imply that best portfolio cannot necessarily be represented as a priority list.
