
Intrusion Detection






Presentation Transcript


  1. Intrusion Detection Somesh Jha University of Wisconsin

  2. Intrusion Detection Goal: Discover attempts to maliciously gain access to a system J. Giffin and S. Jha

  3. Network Intrusion Detection Systems (NIDS) • Inspects packets at certain vantage points • For example, behind the routers • Looks for malicious or anomalous behavior • Much more fine-grained than firewalls • Example: drop a packet whose payload “matches” a certain string J. Giffin and S. Jha

  4. Nomenclature • Packet Classification • Typically only look at packet headers • Enough to determine which firewall rules apply • Deep packet inspection (DPI) • Have to also inspect packet payloads • Needed for intrusion detection systems (IDS) and intrusion prevention systems (IPS) J. Giffin and S. Jha

  5. Classification of NIDS • Signature-based • Also called misuse detection • Establish a database of malicious patterns • If a sequence of packets “matches” one of the patterns, raise an alarm • Positives • Good attack libraries • Easy to understand the results • Negatives • Unable to detect new attacks or variants of old attacks • Examples • Cisco, Snort, Bro, TippingPoint, NFR, … J. Giffin and S. Jha
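
The signature-matching idea above can be sketched in a few lines: scan each payload for known malicious byte patterns and report the matching signatures. The patterns and alert messages here are hypothetical examples for illustration, not a real attack library or Snort's actual engine.

```python
# Minimal sketch of signature-based detection: scan each packet
# payload for known malicious byte patterns. The patterns below are
# hypothetical examples, not a real attack library.
SIGNATURES = {
    b"/cgi-bin/phf": "PHF probe",
    b"/bin/sh": "possible shellcode",
}

def match_signatures(payload: bytes):
    """Return the alert messages whose pattern occurs in the payload."""
    return [msg for pat, msg in SIGNATURES.items() if pat in payload]

match_signatures(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0")
# -> ["PHF probe"]
```

Real engines use multi-pattern algorithms (e.g., Aho-Corasick) rather than a linear scan per pattern, but the matching semantics are the same.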

  6. Network Intrusion Prevention System (IPS) • NIDS are generally “passive” • Raise alerts if something suspicious happens • IPS are active • Drop suspicious-looking packets • Route certain packets for further inspection • Main challenge: have to work at line speeds J. Giffin and S. Jha

  7. Classification of NIDS • Anomaly-based • Establish a statistical profile of normal traffic • If monitored traffic deviates “sufficiently” from the established profile, raise an alarm • Positives • Can detect new attacks • Negatives • High false alarm rate • High variability in normal traffic • Intruder can go under the “radar” • Examples • Mostly research systems J. Giffin and S. Jha

  8. Classification of NIDS • Stateless • Need to keep no state • Example: raise an alarm if you see a packet that contains the pattern “melissa” • Positives • Very fast • Negatives • For some attacks need to keep state J. Giffin and S. Jha

  9. Evasion Attacks • “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection” by T.H. Ptacek and T.N. Newsham • Fragment the attack packet to break the keyword • Reorder packets • Any valid transformation that TCP allows J. Giffin and S. Jha
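
The fragmentation evasion above can be sketched directly: a stateless detector that matches per packet misses a keyword split across two TCP segments, while matching on the (simplified) reassembled stream does not. The fragments here are an invented example.

```python
# Sketch of the fragmentation evasion from Ptacek & Newsham:
# per-packet matching misses a signature split across segments.
SIGNATURE = b"/cgi-bin/phf"

def per_packet_match(packets):
    """Stateless check: look for the signature in each packet alone."""
    return any(SIGNATURE in p for p in packets)

def stream_match(packets):
    """Stateful check: match against the reassembled byte stream."""
    return SIGNATURE in b"".join(packets)

fragments = [b"GET /cgi-b", b"in/phf HTTP/1.0"]
per_packet_match(fragments)  # False: keyword split across packets
stream_match(fragments)      # True
```

Real reassembly must also handle out-of-order delivery, retransmissions, and overlapping fragments, which is what makes it hard in practice.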

  10. Classification of NIDS • Stateful • Keeps state • Sometimes need to do reassembly • Reassemble packets that belong to the same connection, e.g., packets that belong to the same ssh session • Quite hard! (out-of-order delivery) • Positives • Can detect more attacks • Negatives • Requires too much memory J. Giffin and S. Jha

  11. Typical Stages of a NIDS • Limited TCP reassembly • Handle out-of-order packets and fragmentation • Protocol Parsing and Normalization • Various fields of a protocol (e.g., HTTP) • Normalize inputs (e.g., convert all URL names to a standard form) • Signature matching • Simple keywords (fast algorithms exist) • Full regular expression matching (slow) J. Giffin and S. Jha
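
The normalization stage above can be sketched as percent-decoding and case-folding URLs so that differently encoded requests hit the same signature. The helper name is ours, and real normalizers handle far more (double encoding, path traversal, Unicode); this only illustrates the idea.

```python
# Sketch of input normalization: percent-decode and lowercase URLs so
# "/CGI-BIN/%70hf" and "/cgi-bin/phf" match the same signature.
from urllib.parse import unquote

def normalize_url(url: str) -> str:
    """Convert a URL to a canonical form before signature matching."""
    return unquote(url).lower()

normalize_url("/CGI-BIN/%70hf")  # -> "/cgi-bin/phf"
```

Without this step, an attacker trivially evades a literal signature by percent-encoding a single character of the keyword.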

  12. Snort logs, alerts, ... malicious patterns Filteredpacketstream libpcap J. Giffin and S. Jha

  13. libpcap • Library for packet capture • Takes the “raw” packet stream • Parses the packets and presents them as a filtered packet stream • Website for more details: http://www-nrg.ee.lbl.gov/ J. Giffin and S. Jha

  14. Malicious Pattern Example alert tcp any any -> 10.1.1.0/24 80 (content: “/cgi-bin/phf”; msg: “PHF probe!”;) • Rule anatomy: action (alert; other actions are log and pass), protocol (tcp), source address (any), source port (any), destination address (10.1.1.0/24), destination port (80) J. Giffin and S. Jha

  15. Malicious Patterns Example • content: “/cgi-bin/phf” • Matches any packet whose payload contains the string “/cgi-bin/phf” • Look at http://www.cert.org/advisories/CA-1996-06.html • msg: “PHF probe!” • Generate this message if a match happens J. Giffin and S. Jha

  16. More Examples alert tcp any any -> 10.1.1.0/24 6000:6010 (msg: “X traffic”;) alert tcp !10.1.1.0/24 any -> 10.1.1.0/24 6000:6010 (msg: “X traffic”;) J. Giffin and S. Jha

  17. How to generate new patterns? • Buffer overrun found in Internet Message Access Protocol (IMAP) • http://www.cert.org/advisories/CA-1997-09.html • Run exploit in a test network and record all traffic • Examine the content of the attack packet J. Giffin and S. Jha

  18. Notional "IMAP buffer overflow" packet
052499-22:27:58.403313 192.168.1.4:1034 -> 192.168.1.3:143
TCP TTL:64 TOS:0x0 DF ***PA* Seq: 0x5295B44E Ack: 0x1B4F8970 Win: 0x7D78
90 90 90 90 90 90 90 90 90 90 90 90 90 90 EB 3B  ...............;
5E 89 76 08 31 ED 31 C9 31 C0 88 6E 07 89 6E 0C  ^.v.1.1.1..n..n.
B0 0B 89 F3 8D 6E 08 89 E9 8D 6E 0C 89 EA CD 80  .....n....n.....
31 DB 89 D8 40 CD 80 90 90 90 90 90 90 90 90 90  1...@...........
90 90 90 90 90 90 90 90 90 90 90 E8 C0 FF FF FF  ................
2F 62 69 6E 2F 73 68 90 90 90 90 90 90 90 90 90  /bin/sh.........
J. Giffin and S. Jha

  19. Alert rule for the new buffer overflow alert tcp any any -> 192.168.1.0/24 143 (content:"|E8C0 FFFF FF|/bin/sh"; msg:"New IMAP Buffer Overflow detected!";) Can mix hex-formatted bytecode and text J. Giffin and S. Jha
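
The mixed hex/text content string can be sketched as a small parser: bytes between `|` delimiters are hex-encoded, and everything else is literal text. This is our own illustrative parser, not Snort's code, and it ignores the escaping rules a full implementation would need.

```python
# Sketch of parsing a Snort-style content string, where bytes between
# '|' delimiters are hex-encoded and the rest is literal text.
def parse_content(content: str) -> bytes:
    out, in_hex = b"", False
    for part in content.split("|"):
        if in_hex:
            out += bytes.fromhex(part.replace(" ", ""))  # hex segment
        else:
            out += part.encode()                          # literal text
        in_hex = not in_hex
    return out

parse_content("|E8C0 FFFF FF|/bin/sh")
# -> b'\xe8\xc0\xff\xff\xff/bin/sh'
```

The resulting byte string is what gets matched against packet payloads, exactly as with a plain-text content pattern.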

  20. Advantages of Snort • Lightweight • Small footprint • Focused monitoring: highly tuned Snort for the SMTP server • Malicious patterns easy to develop • Large user community • Consider the IRDP denial-of-service attack • Rule for this attack available on the same day the attack was announced • Commercial company (Sourcefire) behind it J. Giffin and S. Jha

  21. Disadvantages • Does not perform full stream reassembly • Attackers can use that to “fool” Snort • Break one attack packet into a stream • Pattern matching is expensive • Matching patterns in payloads is expensive (avoid it!) • Rule development methodology is ad hoc J. Giffin and S. Jha

  22. Host-based ID • Monitor interaction between a specific program and OS • Raise an alarm if suspicious “system calls” are observed • Unlike NIDS, monitoring happens at the end hosts • Need to model • Unusual behavior • Normal behavior J. Giffin and S. Jha

  23. Goal: Discover attempts to maliciously gain access to a system
• Misuse Detection: Specify patterns of attack or misuse; ensure misuse patterns do not arise at runtime. Example: Snort. Drawback: rigid, cannot adapt to novel attacks.
• Anomaly Detection: Learn typical behavior of application; variations indicate potential intrusions. Example: IDES. Drawback: high false alarm rate.
• Specification-Based Monitoring: Specify constraints upon program behavior; ensure execution does not violate specification. Example: our work; Ko et al. Drawback: specifications can be cumbersome to create.
J. Giffin and S. Jha

  24. Specification-Based Monitoring • Two components: • Specification: Indicates constraints upon program behavior • Enforcement: How the specification is verified at runtime or from audit data J. Giffin and S. Jha

  25. [Diagram: specification sources — analyst or administrator, training sets, static source code analysis, static binary code analysis — paired with enforcement strategies: execution matches model of application, or execution obeys static ruleset] J. Giffin and S. Jha

  26. Representative Work by Ko et al. • Specification: Programmers or administrators specify correct program behavior • Enforcement: At runtime, only allow actions that match the specified policy
PROGRAM fingerd
  read(X) :- worldreadable(X);
  bind(79);
  write(“/etc/log”);
  exec(“/usr/ucb/finger”);
END
J. Giffin and S. Jha

  27. [Specification/enforcement roadmap diagram shown again: analyst or administrator, training sets, static source code analysis, static binary code analysis; execution matches model of application, or execution obeys static ruleset] J. Giffin and S. Jha

  28. Representative Work by Forrest et al. • Specification: Learn correct program behavior with training • Record sequences of system calls • Enforcement: Only accept behaviors similar to learned patterns • Example system: STIDE J. Giffin and S. Jha

  29. Training • Repeatedly run the program, varying the input • For some n, record all sequences of n system calls observed • n depends upon the program • End result: database of n-tuples of system calls J. Giffin and S. Jha

  30. cat (print file contents)
• Observed trace: geteuid, getuid, getegid, getgid, fstat, open, fstat, lseek, mmap, read, memcntl, write, lseek, munmap, lseek, close, close, exit
• Resulting database of pairs (n = 2): geteuid→getuid; getuid→getegid; getegid→getgid; getgid→fstat; fstat→open / lseek; open→fstat; lseek→mmap / munmap / close; mmap→read; read→memcntl; memcntl→write; write→lseek; munmap→lseek; close→close / exit
J. Giffin and S. Jha
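
The training step can be sketched as sliding a window of size n over each recorded trace and collecting every n-gram into a set. This is our own simplified rendering of the STIDE-style approach, not its actual implementation.

```python
# Sketch of STIDE-style training: slide a window of size n over each
# recorded system-call trace and collect all n-grams into a database.
def train(traces, n=2):
    db = set()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            db.add(tuple(trace[i:i + n]))
    return db

train([["open", "read", "write", "close"]], n=2)
# -> {("open", "read"), ("read", "write"), ("write", "close")}
```

Running this over many executions of cat with varying inputs would produce exactly the pair database shown on the slide above.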

  31. Enforcement • Monitor system calls generated by application • Ensure that the last n calls match a sequence in the database • Option: Allow slight deviation from database • Training set may have been incomplete J. Giffin and S. Jha

  32. cat (print file contents)
• Training trace: geteuid, getuid, getegid, getgid, fstat, open, fstat, lseek, mmap, read, memcntl, write, lseek, munmap, lseek, close, close, exit
• Pair database: geteuid→getuid; getuid→getegid; getegid→getgid; getgid→fstat; fstat→open / lseek; open→fstat; lseek→mmap / munmap / close; mmap→read; read→memcntl; memcntl→write; write→lseek; munmap→lseek; close→close / exit
• Accepts incorrect system call sequences, e.g.: geteuid, getuid, getegid, getgid, fstat, lseek, close, exit
J. Giffin and S. Jha
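
The weakness on this slide is easy to reproduce: the bogus trace was never observed, but every length-2 window of it appears in the database built from the real cat trace, so window-based checking accepts it.

```python
# Sketch of the window-matching weakness: stitch the pair database
# from the real cat trace, then check a trace cat never produces.
TRACE = ["geteuid", "getuid", "getegid", "getgid", "fstat", "open",
         "fstat", "lseek", "mmap", "read", "memcntl", "write",
         "lseek", "munmap", "lseek", "close", "close", "exit"]
DB = {tuple(TRACE[i:i + 2]) for i in range(len(TRACE) - 1)}

def accepts(trace, n=2):
    """Accept a trace iff every length-n window is in the database."""
    return all(tuple(trace[i:i + n]) in DB
               for i in range(len(trace) - n + 1))

bogus = ["geteuid", "getuid", "getegid", "getgid",
         "fstat", "lseek", "close", "exit"]
accepts(bogus)  # True, although cat never makes these calls in this order
```

Each adjacent pair of the bogus trace (fstat→lseek, lseek→close, close→exit, ...) occurs somewhere in the training trace, which is exactly the ambiguity the next slide calls out.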

  33. Drawbacks • Accepts incorrect call sequences • Due to window-based approach with ambiguity • Opportunity for attack sequence to go undetected • Only learn behaviors exercised in training set • Not all execution paths followed • Users must construct valid training sets • Users must determine window size J. Giffin and S. Jha

  34. Drawbacks • Specification may over-fit the data • If training on real data, training set may contain exploits • Learn exploit pattern as normal J. Giffin and S. Jha

  35. [Specification/enforcement roadmap diagram shown again: analyst or administrator, training sets, static source code analysis, static binary code analysis; execution matches model of application, or execution obeys static ruleset] J. Giffin and S. Jha

  36. CFI: Control-Flow Integrity [Abadi et al.] • Main idea: pre-determine control flow graph (CFG) of an application • Static analysis of source code • Static binary analysis (the approach CFI takes) • Execution profiling • Explicit specification of security policy • Execution must follow the pre-determined control flow graph

  37. CFI: Binary Instrumentation • Use binary rewriting to instrument code with runtime checks • Inserted checks ensure that the execution always stays within the statically determined CFG • Whenever an instruction transfers control, destination must be valid according to the CFG

  38. CFI (Continued) • Goal: prevent injection of arbitrary code and invalid control transfers (e.g., return-to-libc) • Secure even if the attacker has complete control over the thread’s address space Somesh Jha

  39. CFG Example [figure not included in transcript]

  40. CFI: Control Flow Enforcement • For each control transfer, determine statically its possible destination(s) • Insert a unique bit pattern at every destination • Two destinations are equivalent if CFG contains edges to each from the same source • This is imprecise (why?) • Use same bit pattern for equivalent destinations
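
The equivalence rule above can be sketched as a union-find over destinations: any two destinations reachable from a common source are merged, and merging is transitive, which is where the imprecision comes from. The CFG edges below are a hypothetical example, not taken from any real binary.

```python
# Sketch of CFI tag assignment: destinations sharing a source must
# share a tag, and equivalence closes transitively.
from collections import defaultdict

def assign_tags(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    by_src = defaultdict(list)
    for src, dst in edges:
        by_src[src].append(dst)
    for dsts in by_src.values():
        for d in dsts[1:]:
            union(dsts[0], d)     # same source => same tag
    return {d: find(d) for _, d in edges}  # representative = tag

tags = assign_tags([("A", "C"), ("B", "C"), ("B", "D")])
# C and D get the same tag, so A may (imprecisely) also reach D
```

This reproduces the imprecision the slide asks about: because B reaches both C and D, and A reaches C, all three transfers end up checked against one shared tag.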

  41. CFI Enforcement • Insert binary code that, at runtime, checks whether the bit pattern at the target instruction matches the pattern of possible destinations Somesh Jha

  42. CFI: Example of Instrumentation • Original code • Instrumented code: abuse an x86 assembly instruction to insert the “12345678” tag into the binary; jump to the destination only if the tag at the target is equal to “12345678”

  43. CFI: Preventing Circumvention • Unique IDs • Bit patterns chosen as destination IDs must not appear anywhere else in the code memory except ID checks • Non-writable code • Program should not modify code memory at runtime • What about run-time code generation and self-modification?

  44. CFI: Preventing Circumvention • Non-executable data • Program should not execute data as if it were code • Enforcement: hardware support + prohibit system calls that change protection state + verification at load-time Somesh Jha

  45. Improving CFI Precision • Suppose a call from A goes to C, and a call from B goes to either C or D (when can this happen?) • CFI will use the same tag for C and D, but this allows an “invalid” call from A to D • Possible solution: duplicate code or inline • Possible solution: multiple tags • Function F is called first from A, then from B; what’s a valid destination for its return? • CFI will use the same tag for both call sites, but this allows F to return to B after being called from A • Solution: shadow call stack
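
The shadow call stack solution can be sketched as a protected stack of return addresses: push on every call, and on return require the actual target to equal the popped address. The names and addresses here are illustrative; a real implementation keeps the shadow stack in memory the attacked code cannot write.

```python
# Sketch of a shadow call stack: a return is valid only if it goes
# back to the address recorded at the matching call.
shadow = []

def on_call(return_addr):
    """Instrumentation at a call site: record where F must return."""
    shadow.append(return_addr)

def on_return(target_addr):
    """Instrumentation at a return: check against the recorded address."""
    return shadow.pop() == target_addr

on_call("ret_in_A")
on_return("ret_in_B")  # False: F was called from A, must return into A
```

Unlike the shared-tag check, this distinguishes the two call sites of F, closing the return-path imprecision described above.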

  46. CFI: Security Guarantees • Effective against attacks based on illegitimate control-flow transfer • Stack-based buffer overflow, return-to-libc exploits, pointer subterfuge • Does not protect against attacks that do not violate the program’s original CFG • Incorrect arguments to system calls • Substitution of file names • Other data-only attacks

  47. Questions? J. Giffin and S. Jha
