
Midterm



  1. Midterm

  2. Quiz 2 Posted on DEN • Same as quiz 1 • Due by Wed 3/16 • Should be taken after you complete your Firewalls lab • Grading: If you take both quizzes I’ll just use the higher grade. If you skip one I’ll average both grades.

  3. Human Behavior Modeling¹ • Goal: defend against flash-crowd attacks on Web servers • Model human behavior along three dimensions • Dynamics of interaction with the server (trained) • Detect aggressive clients as attackers • Semantics of interaction with the server (trained) • Detect clients that browse unpopular content or use unpopular paths as attackers • Processing of visual and textual cues • Detect clients that click on invisible or uninteresting links as attackers ¹ “Modeling Human Behavior for Defense Against Flash Crowd Attacks”, Oikonomou, Mirkovic, 2009.
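
A minimal sketch (Python) of the first dimension above, flagging clients whose request dynamics are too aggressive to be human. The window length and per-window limit are hypothetical values for illustration, not thresholds from the paper.

# Sketch of the "dynamics of interaction" check: flag clients whose request
# rate exceeds what a trained human model would allow. WINDOW_S and
# MAX_REQUESTS are hypothetical, not values from the paper.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_S = 10.0        # sliding-window length in seconds
MAX_REQUESTS = 20      # hypothetical per-window limit learned from human traces

recent: dict[str, deque] = defaultdict(deque)

def record_request(client_ip: str, now: Optional[float] = None) -> bool:
    """Return True if the client still looks human, False if it should be flagged."""
    now = time.time() if now is None else now
    q = recent[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:    # drop requests that fell out of the window
        q.popleft()
    return len(q) <= MAX_REQUESTS

# Example: a bot issuing 30 requests within one second gets flagged.
flags = [record_request("10.0.0.1", now=1000.0 + i * 0.03) for i in range(30)]
print("flagged as attacker:", not flags[-1])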

  4. Can It Work? • Attackers can bypass detection if they • Act non-aggressively • Use each bot for just a few requests, then replace it • But this forces the attacker to use many bots • Tens to hundreds of thousands • Beyond the reach of most attackers • Other flooding attacks will still work

  5. Advantages And Limitations • Transparent to users • Low false positives and false negatives • Requires server modification • Server must store data about each client • Will not work against other flooding attacks • May not protect services where humans do not generate traffic, e.g., DNS

  6. Worms

  7. Viruses vs. Worms • Viruses don’t break into your computer – they are invited in by you • They cannot spread unless you run an infected application or click on an infected attachment • Early viruses spread between applications on your computer • Contemporary viruses spread as e-mail attachments and will mail themselves to people from your address book • Worms break into your computer using some vulnerability, install malicious code and move on to other machines • You don’t have to do anything to make them spread

  8. What is a Worm? • A program that: • Scans the network for vulnerable machines • Breaks into machines by exploiting the vulnerability • Installs some piece of malicious code – a backdoor, a DDoS tool • Moves on • Unlike viruses: • Worms don’t need any user action to spread – they spread silently and on their own • Worms don’t attach themselves to other programs – they exist as separate code in memory • Sometimes you may not even know your machine has been infected by a worm

  9. Why Are Worms Dangerous? • They spread extremely fast • They are silent • Once they are out, they cannot be recalled • They usually install malicious code • They clog the network

  10. First Worm Ever – Morris Worm • Robert Morris, a PhD student at Cornell, was interested in network security • He created the first worm in Nov. 1988, with the goal of having a program live on the Internet • The worm was supposed only to spread, fairly slowly • It was supposed to use just a little bit of resources so as not to draw attention to itself • But things went wrong … • The worm was supposed to avoid duplicate copies by asking a computer whether it was already infected • To avoid false “yes” answers, it was programmed to duplicate itself every 7th time it received a “yes” answer • This turned out to be too much

  11. First Worm Ever – Morris Worm • It exploited four vulnerabilities to break in • A bug in sendmail • A bug in the finger daemon • The trusted-hosts feature (.rhosts / hosts.equiv) • Password guessing • The worm replicated at a much faster rate than anticipated • At that time the Internet was small and homogeneous (Sun and VAX workstations running BSD UNIX) • It infected around 6,000 computers, a tenth of the then-Internet, in a day

  12. First Worm Ever – Morris Worm • People quickly devised patches and distributed them (the Internet was small then) • A week later all systems were patched and the worm code had been removed from most of them • No lasting damage was caused • Robert Morris paid a $10,000 fine, was placed on probation and did some community work • The worm exposed not only vulnerabilities in UNIX but also weaknesses in how the Internet was organized • Users didn’t know whom to contact to report an infection or where to look for patches

  13. First Worm Ever – Morris Worm • In response to the Morris Worm, DARPA formed CERT (Computer Emergency Response Team) in November 1988 • Users report incidents to CERT and get help in handling them • CERT publishes security advisory notes informing users of new vulnerabilities that need to be patched and how to patch them • CERT facilitates security discussions and advocates better system management practices

  14. Code Red • Spread on July 12 and 19, 2001 • Exploited a vulnerability in Microsoft Internet Information Server (a service turned on by default) that allows an attacker to get full access to the machine • Two variants – both probed random machines; one used a static seed for its RNG, the other (CRv2) a random seed • CRv2 infected more than 359,000 computers in less than 14 hours • It doubled in size every 37 minutes • At the peak of the infection more than 2,000 hosts were infected each minute
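
A back-of-the-envelope check of the numbers on this slide (Python). It assumes uninterrupted doubling from a single host, which overstates real growth, but it shows why the observed count reflects saturation rather than unlimited exponential spread.

# Back-of-the-envelope check of the Code Red v2 numbers above (illustrative only).
# Assumes uninterrupted doubling from a single host, which overstates real growth:
# the worm slows down once most vulnerable hosts are already infected.
doubling_time_min = 37          # "doubled in size every 37 minutes"
duration_min = 14 * 60          # "in less than 14 hours"

doublings = duration_min / doubling_time_min
unconstrained_hosts = 2 ** doublings
print(f"{doublings:.1f} doublings -> about {unconstrained_hosts:,.0f} hosts if growth never slowed")
# ~22.7 doublings, about 6.8 million hosts; the observed 359,000 is far smaller
# because the pool of vulnerable IIS servers was exhausted well before that.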

  15. Code Red v2

  16. Code Red v2 • 43% of infected machines were in the US • 47% of infected machines were home computers • The worm was programmed to stop spreading at midnight, then attack www1.whitehouse.gov • It had a hardcoded IP address, so the White House was able to thwart the attack by simply changing the name-to-IP-address mapping • Estimated damage ~$2.6 billion

  17. Sapphire/Slammer Worm • Spread on January 25, 2003 • The fastest computer worm in history • It doubled in size every 8.5 seconds • It infected more than 90% of vulnerable hosts within 10 minutes • It infected about 75,000 hosts overall • Exploited a buffer overflow vulnerability in Microsoft SQL Server, discovered 6 months earlier

  18. Sapphire/Slammer Worm • No malicious payload • The aggressive spread had severe consequences • It created a DoS effect • It disrupted backbone operation • Airline flights were canceled • Some ATM machines failed

  19. Sapphire/Slammer Worm

  20. Why Was Slammer So Fast? • Both Slammer and Code Red v2 use random scanning • Code Red uses multiple threads that establish TCP connections through the 3-way handshake – each must wait for the other party to reply or for the TCP timeout to expire • Slammer packs its code into a single UDP packet – its speed is limited only by how many UDP packets a machine can send • Could we do the same trick with Code Red? • Slammer’s authors tried to use a linear congruential generator to generate random addresses for scanning, but programmed it incorrectly
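
A rough model (Python) of the speed difference described above: connect-and-wait scanning is limited by threads and timeouts, single-packet scanning only by outgoing bandwidth. All parameters below are illustrative assumptions, not measurements from the worms themselves.

def tcp_scan_rate(threads: int, avg_wait_s: float) -> float:
    """Scans/sec when each probe must wait for a reply or a TCP timeout."""
    return threads / avg_wait_s

def udp_scan_rate(uplink_bps: float, packet_bytes: int) -> float:
    """Scans/sec when the only limit is how fast packets can be pushed out."""
    return uplink_bps / (packet_bytes * 8)

# Hypothetical numbers: 100 scanner threads with a 1 s average wait per probe,
# versus a 1 Mbps uplink sending a ~400-byte worm packet (Slammer's was about that size).
print(f"TCP-style scanning: {tcp_scan_rate(100, 1.0):.0f} scans/sec")
print(f"UDP-style scanning: {udp_scan_rate(1_000_000, 400):.0f} scans/sec")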

  21. Sapphire/Slammer Worm • 43% of infected machines were in the US • 59% of infected machines were home computers • Response was fast – after an hour, sites started filtering packets for the SQL Server port

  22. BGP Impact of Slammer Worm

  23. Stuxnet Worm • Discovered in June/July 2010 • Targets industrial equipment • Uses Windows vulnerabilities (known and new) to break in • Installs a PLC (Programmable Logic Controller) rootkit and reprograms the PLC • Without the physical schematic it is impossible to tell what the ultimate effect is • Spreads via USB drives • Updates itself either by reporting to a server or by exchanging code with a newer copy of the worm

  24. Scanning Strategies • Many worms use random scanning • This works well only if machines have very good RNGs with different seeds • Getting a large initial population is a problem • Then the infection rate skyrockets • The infection eventually reaches saturation since all machines are probing the same addresses “Warhol Worms: The Potential for Very Fast Internet Plagues”, Nicholas C. Weaver
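
An illustration (Python) of the seeding point above: with a fixed seed every worm instance walks the same target sequence and keeps re-probing the same hosts (Code Red v1's flaw), while per-instance random seeds spread probes across the space. This generates addresses only; nothing is probed.

# Address generation only, no probing. Shows static vs. per-instance seeding.
import random

def scan_targets(seed: int, count: int) -> list[str]:
    rng = random.Random(seed)
    return [f"{rng.randrange(1, 224)}.{rng.randrange(256)}."
            f"{rng.randrange(256)}.{rng.randrange(256)}" for _ in range(count)]

static = scan_targets(seed=1234, count=5)   # identical list on every worm instance
fresh = scan_targets(seed=random.SystemRandom().randrange(2**32), count=5)
print("static seed :", static)
print("random seed :", fresh)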

  25. Random Scanning

  26. Scanning Strategies • A worm can get a large initial population with hitlist scanning • Assemble a list of potentially vulnerable machines prior to releasing the worm – a hitlist • E.g., through a slow scan • When the scan finds a vulnerable machine, the hitlist is divided in half and one half is communicated to that machine upon infection • This guarantees very fast spread – under one minute!
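
A sketch (Python) of the hitlist-splitting idea: each infection hands off half of the remaining list, so the work fans out like a binary tree. Purely illustrative; the "infect" step here is just a recursive call, not an exploit.

def spread(hitlist: list[str], generation: int = 0) -> None:
    if not hitlist:
        return
    target, rest = hitlist[0], hitlist[1:]
    half = len(rest) // 2
    print(f"gen {generation}: infect {target}, hand off {half} addresses")
    spread(rest[:half], generation + 1)   # half of the list goes to the newly infected machine
    spread(rest[half:], generation + 1)   # the original keeps working on the rest

spread([f"host{i}" for i in range(8)])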

  27. Hitlist Scanning

  28. Scanning Strategies • A worm can prevent die-out at the end with permutation scanning • All machines share a common pseudorandom permutation of the IP address space • Infected machines continue scanning just after their own point in the permutation • If they encounter an already infected machine they continue from a random point • Partitioned permutation scanning is a combination of permutation and hitlist scanning • In the beginning the permutation space is halved between instances; later, scanning is a simple permutation scan
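
A sketch (Python) of permutation scanning over a toy 16-bit address space. An affine map x -> (A*x + B) mod 2^16 with odd A is a bijection, so all instances walk the same shared pseudorandom ordering. The constants and the "already infected" set are illustrative; nothing is probed.

import random

SPACE = 2 ** 16
A, B = 0x9E37, 0x79B9          # shared constants; A is odd, so the map is a permutation

def next_addr(x: int) -> int:
    return (A * x + B) % SPACE

already_infected: set[int] = set()   # stand-in for "this probe found an infected host"

def scan(start: int, steps: int) -> None:
    x = start
    for _ in range(steps):
        x = next_addr(x)                  # next probe point in the shared permutation
        if x in already_infected:
            x = random.randrange(SPACE)   # that stretch was already covered; jump elsewhere
            continue
        already_infected.add(x)           # pretend the probe succeeded

scan(start=random.randrange(SPACE), steps=1000)
print(f"{len(already_infected)} distinct addresses covered")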

  29. Permutation Scanning

  30. Scanning Strategies • A worm can get behind the firewall, or notice the die-out, and then switch to subnet scanning • It goes sequentially through the subnet address space, trying every address

  31. Infection Strategies • Several ways to download malicious code • From a central server • From the machine that performed infection • Send it along with the exploit in a single packet

  32. Worm Defense • Three factors define worm spread: • Size of vulnerable population • Prevention – patch vulnerabilities, increase heterogeneity • Rate of infection (scanning and propagation strategy) • Deploy firewalls • Distribute worm signatures • Length of infectious period • Patch vulnerabilities after the outbreak

  33. How Well Can Containment Do? • This depends on several factors: • Reaction time • Containment strategy – address blacklisting and content filtering • Deployment scenario – where the response is deployed • Evaluate the effect of containment 24 hours after the onset “Internet Quarantine: Requirements for Containing Self-Propagating Code”, Proceedings of INFOCOM 2003, D. Moore, C. Shannon, G. Voelker, S. Savage

  34. How Well Can Containment Do? (Code Red) • Idealized deployment: everyone deploys defenses after a given period

  35. How Well Can Containment Do? (Depending on Worm Aggressiveness) • Idealized deployment: everyone deploys defenses after a given period

  36. How Well Can Containment Do? (Depending on Deployment Pattern)

  37. How Well Can Containment Do? • Reaction time needs to be within minutes, if not seconds • We need to use content filtering • We need extensive deployment at key points in the Internet

  38. Detecting and Stopping Worm Spread • Monitor outgoing connection attempts to new hosts • When the rate exceeds 5 per second, put the remaining requests in a queue • When the number of requests in the queue exceeds 100, stop all communication “Implementing and testing a virus throttle”, Proceedings of Usenix Security Symposium 2003, J. Twycross, M. Williamson
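
A minimal sketch (Python) of the throttling logic described above, using the slide's thresholds (5 new destinations per second, full stop at a backlog of 100). This is not the authors' implementation, just the control logic.

from collections import deque

RATE_LIMIT = 5          # new-host connections allowed per second
QUEUE_LIMIT = 100       # backlog size that triggers a full stop

class VirusThrottle:
    def __init__(self) -> None:
        self.recent_hosts: set = set()    # hosts contacted in the current 1 s window
        self.backlog: deque = deque()
        self.blocked = False

    def request(self, host: str) -> str:
        if self.blocked:
            return "dropped: host is quarantined"
        if host in self.recent_hosts or len(self.recent_hosts) < RATE_LIMIT:
            self.recent_hosts.add(host)
            return "allowed"
        self.backlog.append(host)
        if len(self.backlog) > QUEUE_LIMIT:
            self.blocked = True           # likely worm: stop all communication
        return "queued"

    def tick(self) -> None:
        """Called once per second: release one queued host and reset the window."""
        self.recent_hosts = {self.backlog.popleft()} if self.backlog else set()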

  39. Detecting and Stopping Worm Spread

  40. Detecting and Stopping Worm Spread

  41. Cooperative Strategies for Worm Defense • Organizations share alerts and worm signatures with their “friends” • Severity of alerts increases as more infection attempts are detected • Each host has a severity threshold after which it deploys a response • Alerts spread just like the worm does • They must spread faster than the worm to overtake it • After some time with no new infections detected, alerts are removed “Cooperative Response Strategies for Large-Scale Attack Mitigation”, Proceedings of DISCEX 2003, D. Nojiri, J. Rowe, K. Levitt
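
A toy model (Python) of the alert-sharing logic above: severity grows with locally observed infection attempts and with alerts forwarded by friends, and past a threshold the host deploys its response. All numbers are hypothetical, not taken from the paper.

SEVERITY_THRESHOLD = 3.0
DECAY = 0.5                      # severity decays when nothing new is seen

class Host:
    def __init__(self, name: str, friends: list) -> None:
        self.name = name
        self.friends = friends
        self.severity = 0.0
        self.responding = False

    def observe_attempt(self) -> None:
        self._raise(1.0)
        for friend in self.friends:          # share the alert with friends
            friend._raise(0.5)

    def _raise(self, amount: float) -> None:
        self.severity += amount
        if self.severity >= SEVERITY_THRESHOLD and not self.responding:
            self.responding = True
            print(f"{self.name}: deploying response (severity {self.severity:.1f})")

    def quiet_interval(self) -> None:
        self.severity = max(0.0, self.severity - DECAY)   # alerts age out over time

a = Host("A", friends=[])
b = Host("B", friends=[a])
for _ in range(4):
    b.observe_attempt()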

  42. Cooperative Strategies for Worm Defense • As the number of friends increases, the response is faster • Propagating false alarms is a problem

  43. Early Worm Detection • Early detection would give time to react before the infection has spread widely • The goal of this work is to devise techniques that detect new worms just as they start spreading • Monitoring: • Monitor and collect worm scan traffic • Observation data is very noisy, so we have to separate new worm scans from • Old worms’ scans • Port scans by hacking toolkits C. C. Zou, W. Gong, D. Towsley, and L. Gao, “The Monitoring and Early Detection of Internet Worms,” IEEE/ACM Transactions on Networking.

  44. Early Worm Detection • Detection: • Traditional anomaly detection is threshold-based • Check for traffic bursts (short-term or long-term) • Difficulty: false alarm rate • “Trend detection” • Measure the number of infected hosts and use it to detect the worm’s exponential growth trend at the beginning

  45. Assumptions • Worms uniformly scan the Internet • No hitlists, but subnet scanning is allowed • The address space scanned is IPv4

  46. Worm Propagation Model • Simple epidemic model [figure: infection curve; the worm should be detected here, early in the exponential-growth phase]
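
For reference, the “simple epidemic model” named on this slide is commonly written as the differential equation below (standard textbook notation, not necessarily the symbols used on the original figure):

\[
  \frac{dI(t)}{dt} = \beta \, I(t)\,\bigl(N - I(t)\bigr)
\]

where $I(t)$ is the number of infected hosts at time $t$, $N$ the size of the vulnerable population, and $\beta$ the pairwise infection rate. Early in the outbreak $I(t) \ll N$, so $I(t) \approx I(0)\,e^{\beta N t}$, which is the exponential growth trend the detector looks for.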

  47. Monitoring System

  48. Monitoring System • Provides comprehensive observation data on a worm’s activities for the early detection of the worm • Consists of: • Malware Warning Center (MWC) • Distributed monitors • Ingress scan monitors – monitor incoming traffic going to unused addresses • Egress scan monitors – monitor outgoing traffic

  49. Monitoring System • Ingress monitors collect: • Number of scans received in an interval • IP addresses of infected hosts that have sent scans to the monitors • Egress monitors collect: • Average worm scan rate • Malware Warning Center (MWC) monitors: • Worm’s average scan rate • Total number of scans monitored • Number of infected hosts observed

  50. Worm Detection • MWC collects and aggregates reports from the distributed monitors • If the total number of scans stays over a threshold for several consecutive intervals, the MWC activates a Kalman filter and begins to test the hypothesis that the number of infected hosts follows an exponential growth trend
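
A simplified stand-in (Python) for the trend test: fit the logarithm of the infected-host counts against time and flag exponential growth when the fitted rate is clearly positive. The paper uses a Kalman filter to estimate the infection rate; this least-squares fit only illustrates the idea, and the counts and threshold below are made-up example values.

import math

def exponential_growth_rate(counts: list) -> float:
    """Least-squares slope of log(count) vs. monitoring-interval index."""
    xs = list(range(len(counts)))
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

observed = [120, 180, 260, 410, 600, 930]     # example infected-host counts per interval
rate = exponential_growth_rate(observed)
if rate > 0.2:                                # hypothetical detection threshold
    print(f"exponential trend detected (rate ~ {rate:.2f} per interval)")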
