
Comparing two techniques for intrusion visualization

Vikash Katta 1,3, Peter Karpati 1, Andreas L. Opdahl 2, Christian Raspotnig 2,3 & Guttorm Sindre 1



Presentation Transcript


1. Comparing two techniques for intrusion visualization
Vikash Katta 1,3, Peter Karpati 1, Andreas L. Opdahl 2, Christian Raspotnig 2,3 & Guttorm Sindre 1
1) Norwegian University of Science and Technology, Trondheim; 2) University of Bergen, Norway; 3) Institute for Energy Technology, Halden, Norway
Andreas.Opdahl@uib.no

2. The ReqSec Project
Method and tool support for security requirements engineering:
- involving non-experts
- lightweight
- integrated, add-on
- industrially evaluated
Funded by the Norwegian Research Council (NFR), 2008-2012.
Many techniques proposed, e.g., anti-behaviours...

3. Perspective
System security models:
- black-box models of monolithic systems
- single systems
- security analysis and specification
Security architecture models:
- high-level organisational views
- enterprise architecture for security
Need for intermediate solutions:
- security modelling for SOA
- white-box models of service collaborations
- bordering organisation and technology

4. Misuse Case Maps (MUCM)
Inspired by Use Case Maps (R.J.A. Buhr, D. Amyot...)

5. Misuse Case Maps (MUCM)
Use case maps: components, scenario paths, responsibilities.
Misuse case maps: vulnerabilities, exploit paths, vulnerable responsibilities.
Preliminary evaluations:
- good for architectural overviews
- need better visualisation of attack-step sequences

  6. Misuse Sequence Diagrams (MUSD)

7. Misuse Sequence Diagrams (MUSD)
Sequence diagrams: actor, object/component, action, event/message.
Misuse sequence diagrams: attacker, vulnerability, exploit action and event/message.
Initial evaluation: can MUSD complement MUCM? How do the two techniques compare w.r.t.:
- understanding
- performance
- perception

8. Comparison
Controlled experiment with 42 subjects; Latin-squares organisation, random assignment.
Treatment (independent variables):
- technique: MUCM, MUSD
- task: bank intrusion (BAN), penetration test (PEN)
Measures (dependent variables):
- understanding (UND)
- performance (VULN, MITIG, VUMI)
- perception (PER)
Control (control variables):
- background (KNOW, STUDY, JOB)
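The counterbalanced Latin-squares assignment described on this slide can be sketched in code. This is an illustrative reconstruction, not the study's actual procedure: the subject labels, seeding, and helper names are invented for the example.

```python
# Sketch of a counterbalanced (Latin-squares) design: each group works
# with both techniques and both tasks, but in a different pairing/order,
# so order and pairing effects are balanced across groups.
import itertools
import random

TECHNIQUES = ["MUCM", "MUSD"]
TASKS = ["BAN", "PEN"]  # bank intrusion, penetration test

# The four possible (first session, second session) assignments:
# each session pairs one technique with one task, and neither the
# technique nor the task repeats across a subject's two sessions.
orders = []
for t1, t2 in itertools.permutations(TECHNIQUES):
    for k1, k2 in itertools.permutations(TASKS):
        orders.append([(t1, k1), (t2, k2)])

def assign(subjects, seed=0):
    """Randomly distribute subjects over the four group orderings."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return {s: orders[i % len(orders)] for i, s in enumerate(shuffled)}

groups = assign([f"S{i:02d}" for i in range(42)])
```

With 42 subjects and four orderings, this yields groups of 10-11, matching the group sizes reported later in the deck.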

9. Hypotheses
H1: MUCM better on architectural questions
H2: MUSD better for temporal-sequence questions
H3: Either technique better on the neutral questions
H4: Either technique better overall
H5: Different numbers of vulnerabilities identified
H6: Different numbers of mitigations identified
H7: Different total numbers of vulnerabilities and mitigations identified
H8: Usefulness perceived differently
H9: Ease of use perceived differently
H10: Intentions to use perceived differently
H11: MUCM and MUSD perceived differently

10. Procedure
4 groups of 10-11 second-year computer science students; 10 steps:
1. Filling in the pre-experiment questionnaire (2 min)
2. Reading a short introduction to the experiment (1 min)
3. First technique on first task:
   - introduction to the technique (9 min)
   - reading about the task, looking at diagrams (12 min)
   - 20 true/false questions about the case (8 min)
   - finding vulnerabilities and mitigations (11 min)
   - post-experiment questionnaire (4 min)
4. Easy physical exercise (2 min)
5. Repeat for the second technique and task (44 min)

11. Results
Backgrounds:
- No sig. differences between groups (Kruskal-Wallis H test): 2-4 semesters of ICT studies; 2.07 months of job experience (three outliers)
- Sig. knowledge differences across groups (Wilcoxon signed-rank tests):
  - KNOW_MOD > KNOW_SEC, p = .000
  - KNOW_SD > KNOW_UCM, p = .003
  - KNOW_MUSD ≈ KNOW_MUCM
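For readers unfamiliar with these tests, the two non-parametric checks named above could be run with SciPy roughly as follows. The scores below are made-up placeholders purely to show the API, not the study's data.

```python
# Illustrative use of the two tests named on the slide; data are fabricated.
from scipy.stats import kruskal, wilcoxon

# Between-group background check (Kruskal-Wallis H test): compares
# independent samples, e.g. semesters of ICT studies per group.
g1, g2, g3, g4 = [3, 2, 4, 3], [2, 3, 3, 4], [4, 3, 2, 3], [3, 4, 3, 2]
H, p_groups = kruskal(g1, g2, g3, g4)  # large p => no sig. difference

# Paired knowledge comparison (Wilcoxon signed-rank test): compares two
# related ratings per subject, e.g. modelling vs. security knowledge.
know_mod = [4, 5, 3, 4, 5, 4, 3, 5]
know_sec = [2, 3, 2, 3, 2, 3, 2, 3]
W, p_know = wilcoxon(know_mod, know_sec)  # small p => sig. difference
```

Kruskal-Wallis suits the between-group check because the four groups are independent samples; the Wilcoxon signed-rank test suits the knowledge comparisons because each subject rates both areas, making the samples paired.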

12. Understanding
- Wilcoxon signed-rank tests
- H1 & H2 accepted; H3 & H4 rejected
- Medium effect size (Cohen)
- No impact of technique or task order

13. Performance
- Two blank outliers removed (from the 11-student groups)
- H5, H6 & H7 rejected
- No impact of technique order
- More identifications for the bank task

14. Perception
- H8, H9, H10 & H11 accepted
- Medium to large effect sizes (Cohen)
- Only one insignificant statement ("would be useless")
- More positive perception of the first technique used

15. Conclusion
The techniques are complementary; each facilitates understanding better for its "intended use":
- MUCM best for architectural issues
- MUSD best for temporal sequences
They are equal in performance; the bank task was more productive.
MUSDs were perceived more positively; the first technique used was perceived more positively.
Further work: simpler MUCMs, qualitative analysis, more techniques, industrial subjects, notation and method integration, industrial case studies and action research...

16. Central concepts (RFC 2828)
- vulnerability: a weakness in a system ... that can be exploited to violate its security policy
- threat: a potential for violation of security ... that could cause harm
- countermeasure: something that reduces a threat or attack by eliminating ... preventing ... minimizing the harm ... or by reporting it to enable corrective action
