
Automated Signature and Policy Generation



Presentation Transcript


  1. Automated Signature and Policy Generation Douglas S. Reeves MURI Annual Meeting October 29, 2013

  2. Past Work: NSDMiner • Automated discovery of network service dependencies, based on passive observation of network traffic

  3. Recent Work: MetaSymploit • Goals for malware analysis • Faster signature generation (less time from release of exploit to availability of signature) • High quality signatures + efficient pattern matching

  4. Script-Based Attack “Factories” • All-in-one frameworks with built-in components provide rich attack-generation capabilities • Written in scripting languages (Ruby, Python, PHP…) • Development / deployment of attacks + variants + combinations is much faster and easier than development of patches

  5. Ex: Metasploit

  6. Script Example • Probe Target • Port scanning, fingerprinting, etc. • Compose Attack Payload • Includes shellcode, junk, target-specific vulnerability bytes, etc. • Send Payload • Trigger vulnerability • Post Exploit • Wait for shellcode to be executed, backdoor channel created, etc.
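The four phases above can be sketched as a small, self-contained Ruby script. All names here (AttackScriptSketch, probe_target, the placeholder bytes and return address) are illustrative assumptions, not real Metasploit code; an actual module would subclass the framework's exploit classes and use a real socket.

```ruby
# Hypothetical sketch of the probe / compose / send / post-exploit phases
# of a script-based attack. Placeholder values stand in for real exploit data.
class AttackScriptSketch
  SHELLCODE = ("\xcc" * 8).b  # 8 placeholder bytes standing in for shellcode

  def probe_target
    # In a real script: port scan / banner grab to fingerprint the service.
    5  # pretend the probe reported service version 5
  end

  def compose_payload(version)
    junk = "A" * 16                 # filler to reach the return address
    ret  = [0xdeadbeef].pack("V")   # hypothetical version-specific address
    version == 5 ? SHELLCODE + junk + ret : nil  # only version 5 is attackable
  end

  def run
    payload = compose_payload(probe_target)
    payload && payload.bytesize     # stand-in for sock.put(payload)
  end
end
```

The point of the sketch is the structure: the payload layout (shellcode, junk, version-specific bytes) depends on what the probe reports, which is exactly what MetaSymploit later treats symbolically.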

  7. MetaSymploit • First system for attack script analysis • Automatic IDS signature generation from source code • Features • Based on symbolic execution • Only a few minutes to analyze new attack scripts and generate signatures: “day-one defenses” • Flow: Attack Scripts → MetaSymploit → IDS Signatures

  8. Architecture

  9. Symbolic Execution Layer • “Symbolize” APIs related to environment and dynamic content • Record behavioral APIs and attack branching conditions • Hook output API to capture the entire attack payload
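The "symbolize" idea above can be illustrated with a minimal sketch: environment-dependent APIs return tagged symbolic values instead of concrete data, and a log records which behavioral APIs fired. The class and method names (SymValue, LOG, the stand-ins for rand_alpha and shellcode) are assumptions for illustration, not MetaSymploit's actual engine.

```ruby
# Minimal sketch: a symbolic value carries a name and a (possibly symbolic)
# length, so the payload structure survives even when the bytes are unknown.
class SymValue
  attr_reader :name, :length
  def initialize(name, length = :sym_integer)
    @name, @length = name, length
  end
  def to_s
    "<#{@name}, len=#{@length}>"
  end
end

LOG = []  # behavioral-API log, filled as the script "executes"

def rand_alpha(n)   # symbolized stand-in for a random-filler API
  LOG << [:rand_alpha, n]
  SymValue.new(:sym_rand_alpha, n)
end

def shellcode       # symbolized stand-in for a shellcode-generation API
  LOG << [:shellcode]
  SymValue.new(:sym_shellcode)
end

payload = [shellcode, rand_alpha(2917)]
```

Running the script under these stand-ins yields both a symbolic payload (`<sym_shellcode, len=sym_integer>`, `<sym_rand_alpha, len=2917>`) and a behavior log, mirroring the two outputs this layer records.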

  10. Script Example • Symbolic APIs: probe_ver(), shellcode(), rand_alpha() • Behavior & constraint logging: probe_ver(), sym_ver == 5, shellcode() & get_target_ret() • Hooked output API: sock.put(payload)

  11. Signature Generation Layer • Extract signature patterns for specific attack payload (e.g., constant bytes, length, offset) • Refine patterns to filter out benign/trivial patterns, avoid duplicates • Derive semantic context of patterns by analyzing behaviors and constraints
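The pattern-extraction step above can be sketched as a filter over the captured payload: constant byte runs become candidate signature patterns, while symbolic placeholders and short, non-distinctive runs are dropped. The 4-byte minimum length here is an assumed refinement threshold, not MetaSymploit's actual rule.

```ruby
# Illustrative pattern extraction: walk a payload of mixed concrete byte
# strings and symbolic placeholders; keep concrete runs long enough to be
# distinctive as candidate signature patterns.
def extract_patterns(payload, min_len: 4)
  payload.select { |part| part.is_a?(String) && part.bytesize >= min_len }
end

payload = [:sym_shellcode, "\xe9\x38\x6c\xfb", :sym_rand_alpha, "ab"]
extract_patterns(payload)  # keeps only the 4-byte constant run
```

The symbolic parts are not discarded entirely: their logged lengths and constraints supply the offsets and semantic context mentioned above.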

  12. Example of Signature Line 23: payload => [ <sym_shellcode, len=sym_integer>, <sym_rand_alpha, len=(1167-sym_integer)>, <"\xe9\x38\x6c\xfb\xff\xff\xeb\xf9\xad\x32\xaa\x71", 12>, <sym_rand_alpha, 2917> ] alert tcp any any -> any 617 ( msg:"script: type77.rb (Win), target_version: 5, behavior: probe_version, stack_overflow, JMP to Shellcode with vulnerable_ret_addr"; content:"|e9 38 6c fb ff ff eb f9 ad 32 aa 71|"; pcre:"/[.]{1167}\xe9\x38\x6c\xfb\xff\xff\xeb\xf9\xad\x32\xaa\x71[a-zA-Z]{2917}/"; classtype:shellcode-detect; sid:5000656; ) (in the original slide, red marks symbolic values and green marks concrete values)
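A small helper shows how the Snort `content:` option in the rule above can be emitted from an extracted constant-byte pattern. This is a sketch of the formatting step only; the real generator also derives the rule header, pcre, and msg metadata from the logged behaviors and constraints.

```ruby
# Format a raw byte string as a Snort hex content match, e.g. the 12-byte
# JMP-to-shellcode pattern from the slide.
def snort_content(bytes)
  hex = bytes.unpack1("H*").scan(/../).join(" ")  # "e9386cfb..." -> "e9 38 6c fb ..."
  "content:\"|#{hex}|\";"
end

snort_content("\xe9\x38\x6c\xfb")  # => content:"|e9 38 6c fb|";
```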

  13. Implementation • We developed a lightweight symbolic execution engine for Ruby • No modification to Ruby interpreter required • Integrated MetaSymploit into Metasploit Console as a simple command • Output is Snort rules (signatures) • Gecode/R & HAMPI used as constraint solvers • Currently applied to 10 popular built-in components in Metasploit: Tcp, Udp, Ftp, Http, Imap, Exe, Seh, Omelet, Egghunter, Brute

  14. Evaluation: Speed and Completion • Tested 548 attack scripts • Average symbolic execution time: less than 1 minute

  15. Evaluation: Detection Rate

  16. Evaluation: Detection Rate • Tested signatures on 45 Metasploit attack scripts targeting 45 vulnerable applications from exploit-db.com • Results • 100% detection rate with generated signatures • 0% false positive rate on “normal” network traffic (collected in our department)

  17. Evaluation: Comparison with Public Ruleset • From the 11/2012 Snort ruleset, only 22 out of 45 scripts had corresponding official Snort rules (based on CVE analysis) • Pattern comparison between 53 MetaSymploit-generated rules and 50 official Snort rules for 22 Metasploit attack scripts

  18. Evaluation: Comparison (cont’d) • Snort ruleset 07/2013 has more rules to cover Metasploit-generated exploits • Including Meterpreter shellcode • Example specific rules: exploit-kit.rules, malware-tools.rules • Good news?

  19. Discussion • Fast, successful, accurate automated signature generation for scripting-based exploits • Limitations • Requires source code • Standard limitations of symbolic execution: loops, path explosion, constraint solvers • Cannot handle multi-threaded attacks

  20. Future Work: Test-Driven Security Policy Generation • SEAndroid is currently being merged into AOSP • Goal is to reduce the attack surface using least-privilege policy • Challenge: (human) effort required to write suitable MAC policies for a particular platform and applications

  21. Current status of SEAndroid Policies • Current policy ruleset is manually written by NSA SEAndroid team • 793 allow rules • Categorizes apps in a very coarse-grained way for simplicity • Difficult to adapt rules for new platforms (ex.: The current ruleset breaks “Enforcing” mode for Nexus 7) • The community often argues whether a new rule is correctly written

  22. Proposed Approach • Automatically generate MAC policy from functional tests provided by the developers • Not intended to be comprehensive ruleset; instead, a major head start on creating rules • Writing test cases is already an essential step in app deployment; policy generation is “free” • Test cases exercise expected use and correct behavior of an app • System apps and middleware framework are already equipped with rich tests in AOSP

  23. SEAndroid Test Runner (Proposed Workflow) • JUnit test suites exercise the tested app on the Android middleware / Linux kernel, with SEAndroid in audit mode • A static parser extracts the semantics of the test cases; the audit trace captures runtime behaviors of middleware/kernel APIs • The SEAndroid policy rule generator combines both to produce auto-generated SEAndroid policy rules for this app

  24. Assumptions • Developers are benign, and conscientiously provide test cases with high coverage • Should be true for system and platform developers, not necessarily true for 3rd party application developers • Generated policies should be sound, won’t be complete, but… • Too many policy rules?

  25. Example • We processed the test suite of the Gallery app that invokes Camera functionality to take and store photos • The test suite CameraTestRunner contains 3 test classes (13 test methods) • These tests cover all camera activities, including image storage

  26. Example (cont'd) • The test code and audit trace logs were analyzed to generate SEAndroid policy (only partially automated): • allow gallery3d_app mediaserver:binder call; • allow gallery3d_app servicemanager:binder call; • allow gallery3d_app system_server:binder { transfer call }; • allow gallery3d_app media_app:binder { transfer call }; • allow gallery3d_app media_app:fd use; • …(29 rules generated)
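The core of turning audit-mode traces into allow rules like those above can be sketched as a log parser: group the denied permissions by (source domain, target type, class), then emit one rule per group. The avc line format is the standard kernel audit one; the aggregation here is a deliberate simplification of the proposed generator, which also uses test-case semantics.

```ruby
# Parse SELinux/SEAndroid audit-mode "avc: denied" lines and aggregate them
# into allow rules. Simplified sketch; real denial lines carry more fields.
AVC = /avc:\s+denied\s+\{\s*(?<perms>[^}]+)\}.*\bscontext=u:r:(?<src>\w+):.*\btcontext=u:(?:r|object_r):(?<tgt>\w+):.*\btclass=(?<cls>\w+)/

def rules_from_audit(lines)
  grouped = Hash.new { |h, k| h[k] = [] }
  lines.each do |line|
    m = AVC.match(line) or next           # skip non-denial lines
    grouped[[m[:src], m[:tgt], m[:cls]]] |= m[:perms].split
  end
  grouped.map do |(src, tgt, cls), perms|
    body = perms.size == 1 ? perms.first : "{ #{perms.join(' ')} }"
    "allow #{src} #{tgt}:#{cls} #{body};"
  end
end
```

Feeding it denials for gallery3d_app against mediaserver and system_server reproduces rules of the shape shown on the slide, e.g. `allow gallery3d_app system_server:binder { transfer call };`.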

  27. Challenges • How to distinguish runtime contexts between the execution of test code and the target app • How to handle mock / fake / isolated Content / ContentProvider used in test cases • How to aggregate / generalize policy rules derived from test cases (reduce ruleset size)
