
CS463.12 Intrusion Detection


Presentation Transcript


  1. CS463.12 Intrusion Detection Fall 2005

  2. Project Announcements • Now is the peer-review session of the project • Projects rotated between groups • Evaluation due next Friday

  3. Evaluation Details • Evaluation at three levels • Requirements • Is the proposed functionality interesting, useful, novel, … • Design • How well does the design satisfy the requirements? • Implementation • Quality of the code • Quality of the demo

  4. Evaluation Criteria • Evaluate the three levels with respect to: • Costs • Benefits • Threat model • Risks • Simplicity • Effectiveness

  5. Group Communications • Evaluated group will supply • Design document (updated) • Code • Anything else that helps answer the questions • Extra communication between groups is OK • Evaluating group will produce a Target of Evaluation document, containing: • All materials provided by evaluated group • Record of extra communications, demos • Completeness of TOE will affect evaluated group’s grade

  6. Evaluation Deliverables • TOE document • A presentation of results, with slides • Presentations will be on Friday, Dec 9 in half hour slots • 20 minutes presentation • 10 minutes questions & rebuttal • Evaluated group encouraged to attend • Everyone else is welcome to attend, too

  7. Group Rotation • Each group evaluates the preceding group • E.g. Group 2 evaluates Group 1 • Projects: • Group 1: Hasan, Bakht [Prohori] • Group 2: Doshi, da Silva, Chen • Group 3: LeMay, Fatemieh, Katasani • Group 4: Fliege, Dutta, Pai [I2CS] • Group 5: Sliva, Tabriz • Group 6: Maestro, Chan, Bui [ConDor]

  8. Presentation Schedule (Tentative)

  9. Final Exam • Take-home exam • Handed out end of last class, Dec 8 • Due 5pm, Dec 13 • Covers the entire course • Should take 3-4 hours to do if you remember & understand the slides & text • Longer if you need to look things up & study

  10. Intrusion Detection • Definition • Models • Intrusion Response • Network Intrusion Detection • Firewalls • Internet Worms

  11. Readings • Chapter 25, 25.1-25.3, 25.5, 25.6 • Exercises: • 1,2,5,7-10

  12. Intrusion Characteristics • Main idea: a compromised system has different characteristics than a normal one • Statistical patterns of activity • Suspicious activity • Specifications

  13. Adversarial IDS model • Attackers want to mask traces of intrusion • Slow down response • Exploit compromised system for longer • Tools to do this are called “root kits” • E.g. Sony DRM, Spyware • Main goal of IDS: detect root kits • Main goal of root kits: avoid IDS

  14. Root kit techniques • Hide presence of intrusions • Alter log files • Change monitoring software • New versions of ls, ps, netstat, … • Change kernel • Virtualization / sand boxes

  15. Root Kit difficulties • A system has many monitoring facilities • Resource utilization, timings, etc. • E.g. detecting presence of a new file • ls, find, “echo *”, du, df, … • A determined effort to find a root kit will probably succeed • Root kit cannot cover all bases • “No perfect crime”

  16. IDS difficulties • IDS only monitors a finite number of parameters • Root kits can cover all the bases the IDS knows • IDS is an alarm system, not forensics • Arms race ensues

  17. IDS goals • Detect wide range of intrusions • Including previously unknown attacks • Detect intrusions quickly • Allow timely response • A good IDS can be used for intrusion prevention • Explain intrusions well • Allow intelligent response • Detect accurately

  18. Accuracy • False negatives • Fail to detect an intrusion • False positives • Alert an intrusion when there isn’t one • Most designs allow a trade-off between the two • E.g. 0% false positives is easy to achieve with 100% false negatives

  19. False Positives & Rare Events • E.g. fingerprint matches • False positive rate = 0.01% • Number of fingerprints on record = 1 million • Suppose a fingerprint at the scene matches someone in the database • Odds are 100-1 that person is innocent! • Intrusions are rare • False positive rate must be very low to be usable
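
The 100-to-1 odds on this slide can be checked with a few lines of arithmetic. This is a sketch of the slide's hypothetical scenario, assuming exactly one true match in the database:

```python
# Base-rate check for the fingerprint example.
false_positive_rate = 0.0001      # 0.01%
database_size = 1_000_000

# Expected number of innocent people who match by chance:
expected_false_matches = false_positive_rate * database_size   # 100.0
true_matches = 1                  # assumption: one genuine match exists

# Given that a fingerprint matched, odds the person is innocent:
odds_innocent = expected_false_matches / true_matches
print(odds_innocent)              # 100.0 -> "odds are 100-1"
```

The same arithmetic explains why an IDS needs a very low false positive rate: with intrusions rare, even a small per-event error rate swamps the true alarms.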

  20. Anomaly Models • Manual models • Describe what behavior is correct or anomalous • Statistical models • Learn what normal behavior looks like

  21. Statistical Models • Monitor system in normal state • Learn patterns of activity • Various statistical models to do this • Decide an intrusion threshold • E.g. 2 standard deviations from normal • Adapt over time (optional)

  22. Simple Model (Normal) • Measure values of parameters • E.g. network load • Calculate mean & standard deviation • Set a threshold based on a confidence interval • E.g. 2 standard deviations =~ 95% • 3 standard deviations =~ 99.7% • Alert for values outside the threshold
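
A minimal sketch of this detector, using hypothetical network-load samples; the function and variable names are illustrative, not from any particular IDS:

```python
import statistics

def train(samples):
    """Learn mean and standard deviation from normal-state measurements."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=2):
    """Alert when a value falls outside mean +/- k standard deviations."""
    return abs(value - mean) > k * stdev

# Hypothetical network-load samples (requests/sec) observed in the normal state.
normal_load = [100, 102, 98, 101, 99, 103, 97, 100]
mu, sigma = train(normal_load)

print(is_anomalous(101, mu, sigma))   # typical value -> False
print(is_anomalous(150, mu, sigma))   # far outside 2 sigma -> True
```

Raising k trades false positives for false negatives, which is exactly the accuracy trade-off from the earlier slide.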

  23. Markov Models • Consider anomalous sequences of operations • Usually system calls • Markov models: next operation depends on current one • E.g. read follows open • Transition probabilities computed by training • Can classify likelihood of sequences
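
A first-order model of this kind can be sketched as follows; the training traces and operation names are hypothetical:

```python
from collections import defaultdict

def train_markov(traces):
    """First-order Markov model: P(next op | current op) from training traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            counts[cur][nxt] += 1
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {op: n / total for op, n in nxts.items()}
    return model

def sequence_likelihood(model, trace):
    """Product of transition probabilities; 0 if any transition was never seen."""
    p = 1.0
    for cur, nxt in zip(trace, trace[1:]):
        p *= model.get(cur, {}).get(nxt, 0.0)
    return p

# Hypothetical training traces of system calls.
traces = [["open", "read", "write", "close"],
          ["open", "read", "close"]]
model = train_markov(traces)

print(sequence_likelihood(model, ["open", "read", "close"]))   # 0.5
print(sequence_likelihood(model, ["open", "exec"]))            # 0.0, unseen transition
```

An intrusion detector would flag traces whose likelihood falls below a chosen threshold.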

  24. Higher Order Markov Models • First order Markov models consider only the previous state • I.e. likelihood of each digram of operations • E.g. if training set is: • how is it going? • the sky is blue. • Then the sentence “how is blue” falls within the model • Higher order Markov models consider several previous states

  25. n-grams • Another way to think about previous states is with n-grams • Sequence: open read write open mmap write fchmod close • Its 3-grams are: open read write • read write open • write open mmap • open mmap write • mmap write fchmod • write fchmod close
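
The sliding-window extraction is straightforward; this sketch reproduces the slide's 3-grams:

```python
def ngrams(trace, n=3):
    """Slide a window of length n over a trace of operations."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

trace = ["open", "read", "write", "open", "mmap",
         "write", "fchmod", "close"]
for g in ngrams(trace):
    print(" ".join(g))
# open read write
# read write open
# write open mmap
# open mmap write
# mmap write fchmod
# write fchmod close
```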

  26. Statistical Models • Pros: • No need to know what is “normal” in advance • Flexibility between installations • Adaptive • Control of false positive rates

  27. Statistical Models • Cons: • Statistical model may be wrong • E.g. not normally distributed data • Training set may be inadequate • Same problem as testing • Alerts difficult to explain • Attacks may be able to get around them

  28. Misuse specification • Look for patterns of activity that shouldn’t happen • E.g. swiping many doors in Siebel • E.g. control transfer to a randomized location • E.g. traffic with internal address coming from outside • Usually very low false positive rate • But only detects known attacks

  29. Specification-based Detection • Specify correct operation, everything else an attack • E.g. rdist specification • open world readable files • open non-world readable files rdist creates • create files in /tmp • chown/chmod files it creates • Any other filesystem operation is an error

  30. Manual Specification
      <valid_op> -> open_r_worldread
                  | open_r_not_worldread
                      { if !Created(F) then violation(); fi; }
                  | open_rw
                      { if !Dev(F) then violation(); fi; }
                  | …

  31. Automated Specification • Manual specification labor-intensive and error-prone • Idea: take specification from source code • Static analysis to build model of system calls • Different models considered: • FSA, PDA, n-gram • Advantage: no false positives • Disadvantage: • Only detects control flow hijacking • Mimicry attacks

  32. Mimicry Attacks • Tailor attack specifically to an IDS • E.g. pad system calls sequences to look legitimate • Normal sequence: open read write close open fchmod close exec • Naïve attack: open read exec • Mimicry attack (digrams): open read write close exec
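
The digram check on this slide, and why the padded sequence evades it, can be sketched directly (trace contents are taken from the slide; function names are illustrative):

```python
def digrams(trace):
    """Set of adjacent operation pairs in a trace."""
    return set(zip(trace, trace[1:]))

# Digrams learned from the slide's normal sequence.
normal = ["open", "read", "write", "close", "open", "fchmod", "close", "exec"]
allowed = digrams(normal)

def detected(trace):
    """Alarm iff the trace contains a digram never seen in training."""
    return not digrams(trace) <= allowed

naive_attack   = ["open", "read", "exec"]                    # (read, exec) unseen
mimicry_attack = ["open", "read", "write", "close", "exec"]  # padded to look normal

print(detected(naive_attack))    # True  -> caught
print(detected(mimicry_attack))  # False -> slips through
```

Every digram in the padded sequence already occurs in the training data, so a digram-level model cannot distinguish it from legitimate behavior.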

  33. Mimicry Attacks • More precise models better defend against mimicry • Mimicry exploits similarity between attack and detection • Makes attack sequences look non-anomalous • Continues arms race

  34. Network Intrusion Detection • Most attacks come from the outside network • Monitoring outside link(s) easier than monitoring all systems in an enterprise • Network Intrusion Detection Systems (NIDS) a popular tool

  35. NIDS challenges • NIDS Challenges • Volume of traffic • Attacks on the monitor • Uncertainty about host behavior

  36. Volume of Traffic • Organizations can easily have 100 Mbps to Gbps links to the outside world • NIDS must examine all traffic • Reconstruct communications context • Keep state about connections

  37. Attacks on Monitor • Deliberate attacks on monitor can compromise detection • Step 1: • Overload monitor • Cause it to crash • Step 2: • Carry out attack • Performance becomes an adversarial task

  38. Speed of Processing • Discard things that aren’t interesting • Packet filters • Fast rules for selecting interesting packets • Flow rules • Ignore flow after it’s deemed safe / uninteresting • E.g. look at first 1000 bytes of connection • Parallelize • Can work to a limited extent
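
The flow-cutoff idea above can be sketched as a small bookkeeping class; the class and flow-id format are hypothetical, and a real NIDS would implement this in its fast packet path:

```python
class FlowCutoff:
    """Inspect only the first `cutoff` bytes of each flow (simplified sketch).

    After the cutoff, packets for that flow bypass expensive analysis,
    reducing load at the cost of missing attacks that appear late.
    """
    def __init__(self, cutoff=1000):
        self.cutoff = cutoff
        self.seen = {}   # flow id -> bytes inspected so far

    def should_inspect(self, flow_id, payload_len):
        done = self.seen.get(flow_id, 0)
        if done >= self.cutoff:
            return False
        self.seen[flow_id] = done + payload_len
        return True

nids = FlowCutoff(cutoff=1000)
flow = "10.0.0.1:1234->10.0.0.2:80"
print(nids.should_inspect(flow, 600))   # True  (0 bytes seen so far)
print(nids.should_inspect(flow, 600))   # True  (600 < 1000)
print(nids.should_inspect(flow, 600))   # False (1200 >= 1000)
```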

  39. Memory Usage • Keep state as small as possible • Ideally, no state at all, but this impacts accuracy • Delayed state allocation • E.g. don’t create state for half-open TCP connections • Careful use of expensive analyzers • E.g. HTTP analyzer might use a lot of RAM • Attacker can cause many HTTP requests to crash the NIDS

  40. Subterfuge • NIDS reconstructs state at the host • What packets it saw • How it interpreted them • Reconstruction may be imperfect • Different packet lifetimes at NIDS and at host • Unexpected semantics

  41. IP fragments • IP has an option to split packets into fragments • Not used often, ignored by early NIDS • Attackers use fragments to hide their attacks

  42. Overlapping Fragments • Fragment 1: "login: roger" • Fragment 2 (overlapping the tail of Fragment 1): "ot\n rm -rf /" • Does the packet get reconstructed as: • login: roger… • login: root…
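
The two possible reconstructions can be simulated by replaying the fragments under different overlap policies. This is a simplified byte-level sketch, not a full IP reassembler; real stacks differ in which copy of overlapping data they keep, which is exactly the ambiguity the NIDS faces:

```python
def reassemble(fragments, policy="first"):
    """Reassemble fragments given as (offset, data) pairs.

    policy="first": earlier fragment's bytes win on overlap.
    policy="last":  later fragment's bytes overwrite on overlap.
    """
    buf = {}
    for offset, data in fragments:
        for i, byte in enumerate(data):
            pos = offset + i
            if policy == "last" or pos not in buf:
                buf[pos] = byte
    return "".join(buf[i] for i in sorted(buf))

# The slide's example: fragment 2 overlaps the tail of fragment 1.
frags = [(0, "login: roger\n"), (9, "ot\n rm -rf /")]
print(repr(reassemble(frags, "first")))  # host logs in as roger
print(repr(reassemble(frags, "last")))   # host sees root, then the rm command
```

If the NIDS resolves the overlap one way and the host the other, the attack payload is never inspected.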

  43. TCP retransmits • Packet 1: "login: ro" • Packet 2: "ger\n" • Packet 3, then a retransmit of Packet 3 with different contents: "ot\n" • NIDS and host may accept different copies of the retransmitted data

  44. Network Tricks • Time-to-Live (TTL) field • Set TTL low enough so that NIDS sees the packet, but host doesn’t • NIDS may be able to detect this, but only if it knows distance to all hosts • Don’t Fragment (DF) flag • If a link between the NIDS & the host has small MTU size, DF flag could cause the packet to be dropped • …

  45. Resolving Ambiguity • How to resolve the ambiguity? • It depends! • Implementation on the host • Network topology • Congestion

  46. Bifurcation • Solution 1: Split Analysis • Spawn two threads, each making an alternate choice • Watch host response, kill any thread that’s inconsistent with host behavior • Expensive • May be exploited by attackers • Generates false alarms

  47. Normalization • Actively modify traffic going through the NIDS • Normalize it to resolve ambiguities • Reassemble fragments • Reset TTL • Clear DF flag • Expensive • May violate semantics

  48. Mapping • Learn host behavior by probing • Learn network topology • Probe how ambiguities are resolved by implementations • Partial solution • May disturb hosts • E.g. “is this host vulnerable to ping-of-death”?

  49. Non-Solution • Treat ambiguities as attacks • Generates too many false alarms • Experience in practice sees all of these in normal usage • DF packets • Inconsistent IP fragments • Inconsistent retransmits

  50. Worms • History of computer intrusions • Manual (- early 90’s) • Root kits (mid 90’s - 00’s) • Worms (now)
