
Distributed Denial of Service Attacks CS 236 Advanced Computer Security Peter Reiher May 20, 2008


Presentation Transcript


  1. Distributed Denial of Service Attacks CS 236 Advanced Computer Security Peter Reiher May 20, 2008

  2. Groups for This Week • Golita Behnoodi, Andrew Castner, Yu-Yuan Chen • Darrel Carbajal, Chia-Wei Chang, Faraz Zahabian • Chien-Chia Chen, Michael Cohen, Mih-Hsieh Tsai • Jih-Chung Fan, Zhen Huang, Nikolay Laptev • Vishwa Goudar, Abishek Jain, Kuo-Yen Lo • Michael Hall, Chen-Kuei Lee, Peter Peterson • Chieh-Ning Lien, Hootan Nikbakht, Peter Wu • Jason Liu, Sean McIntyre, Ionnis Pefkianakis

  3. Distributed Denial of Service (DDoS) Attacks • Goal: Prevent a network site from doing its normal business • Method: overwhelm the site with attack traffic • Response: ?

  4. The Problem

  5. Characterizing the Problem • An attacker compromises many hosts • Usually spread across the Internet • He orders them to send garbage traffic to a target site • The combined packet flow overwhelms the target • Perhaps its machine • Perhaps its network link • Perhaps its ISP’s network link

  6. Why Are These Attacks Made? • Generally to annoy • Sometimes for extortion • If directed at infrastructure, might cripple parts of Internet • So who wants to do that . . .?

  7. Attack Methods • Pure flooding • Of network connection • Or of upstream network • Overwhelm some other resource • SYN flood • CPU resources • Memory resources • Application level resource • Direct or reflection
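The SYN flood bullet above can be illustrated with a toy model: each SYN that never completes the handshake occupies a slot in the server's finite listen backlog until it times out, so a modest stream of spoofed SYNs denies service to legitimate clients. The backlog size and timeout below are made-up illustrative values, not real kernel defaults.

```python
from collections import deque

BACKLOG = 128          # hypothetical listen-queue size
TIMEOUT = 30           # hypothetical seconds before a half-open entry expires

class SynBacklog:
    """Toy model of a TCP listen queue under a SYN flood."""
    def __init__(self, size=BACKLOG):
        self.size = size
        self.half_open = deque()   # (arrival_time, client) entries

    def syn(self, now, client):
        # Expire stale half-open entries first.
        while self.half_open and now - self.half_open[0][0] > TIMEOUT:
            self.half_open.popleft()
        if len(self.half_open) >= self.size:
            return False           # backlog full: SYN dropped
        self.half_open.append((now, client))
        return True

q = SynBacklog()
# Attacker sends 128 spoofed SYNs at t=0 and never completes the handshake.
for i in range(128):
    q.syn(0, f"spoofed-{i}")
# A legitimate client's SYN at t=1 now finds the backlog full.
print(q.syn(1, "legit"))   # False: service denied
```

Note how cheap the attack is for the sender: the spoofed SYNs cost almost nothing, while each one pins server state for the full timeout, which is exactly the resource asymmetry the slide's "memory resources" bullet refers to.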

  8. Why “Distributed”? • Targets are often highly provisioned servers • A single machine usually cannot overwhelm such a server • So harness multiple machines to do so • Also makes defenses harder

  9. Yahoo Attack • Occurred in February 2000 • Resulted in intermittent outages for nearly three hours • Attacker caught and successfully prosecuted • Other companies (eBay, CNN, Microsoft) attacked in the same way at around the same time

  10. DDoS Attack on DNS Root Servers • Concerted ping flood attack on all 13 of the DNS root servers in October 2002 • Successfully halted operations on 9 of them • Lasted for 1 hour • Turned itself off, was not defeated • Did not cause major impact on Internet • DNS uses caching aggressively • Another (less effective) attack in February 2007

  11. DDoS Attack on Estonia • Occurred April-May 2007 • Estonia moved a statue that Russians liked • Then somebody launched large DDoS attack on Estonian gov’t sites • Took much of Estonia off-line for ~ 3 weeks • Recently, DDoS attack on Radio Free Europe sites in Belarus

  12. How to Defend? • A vital characteristic: • Don’t just stop a flood • ENSURE SERVICE TO LEGITIMATE CLIENTS!!! • If you deliver a manageable amount of garbage, you haven’t solved the problem

  13. Complicating Factors • High availability of compromised machines • At least tens of thousands of zombie machines out there • Internet is designed to deliver traffic • Regardless of its value • IP spoofing allows easy hiding • Distributed nature makes legal approaches hard • Attacker can choose all aspects of his attack packets • Can be a lot like good ones

  14. Basic Defense Approaches • Overprovisioning • Dynamic increases in provisioning • Hiding • Tracking attackers • Legal approaches • Reducing volume of attack

  15. Overprovisioning • Be able to handle more traffic than attacker can generate • Works pretty well for Microsoft and Google • Not a suitable solution for Mom and Pop Internet stores

  16. Dynamic Increases in Provisioning • As attack volume increases, increase your resources • Dynamically replicate servers • Obtain more bandwidth • Not always feasible • Probably expensive • Might be easy for attacker to outpace you

  17. Hiding • Don’t let most people know where your server is • If they can’t find it, they can’t overwhelm it • Possible to direct your traffic through other sites first • Can they be overwhelmed . . .? • Not feasible for sites that serve everyone

  18. Tracking Attackers • Almost trivial without IP spoofing • With IP spoofing, more challenging • Big issue: • Once you’ve found them, what do you do? • Not clear tracking actually does much good • Loads of fun for algorithmic designers, though

  19. Legal Approaches • Sic the FBI on them and throw them in jail • Usually hard to do • FBI might not be interested in “small fry” • Slow, at best • Very hard in international situations • Generally only feasible if extortion is involved • By following the money

  20. Reducing the Volume of Traffic • Addresses the core problem: • Too much traffic coming in, so get rid of some of it • Vital to separate the sheep from the goats • Unless you have good discrimination techniques, not much help • Most DDoS defense proposals are variants of this

  21. Approaches to Reducing the Volume • Give preference to your “friends” • Require “proof of work” from submitters • Detect difference between good and bad traffic • Drop the bad • Easier said than done
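The "proof of work" bullet can be sketched with a hashcash-style puzzle (an illustration, not a scheme from the slides): the server hands out a challenge, and only clients that find a nonce whose hash has enough leading zero bits get served. Solving is expensive for the submitter; verifying is one hash for the server. The difficulty value below is an arbitrary small choice.

```python
import hashlib

def solve(challenge: bytes, difficulty: int = 12) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has
    `difficulty` leading zero bits."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = 12) -> bool:
    """Server-side check: a single hash computation."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

nonce = solve(b"server-challenge-123")
print(verify(b"server-challenge-123", nonce))   # True
```

With 12 bits of difficulty a client does roughly 4,096 hash attempts on average; raising the difficulty under attack makes each request proportionally more expensive for zombies to submit.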

  22. Some Sample Defenses • D-Ward • Pushback • DefCOM • SOS

  23. D-WARD • Core idea is to leverage a difference between DDoS traffic and good traffic • Good traffic responds to congestion by backing off • DDoS traffic responds to congestion by piling on • Look for the sites that are piling on, not backing off

  24. The D-Ward Approach • Deploy D-Ward defense boxes at exit points of networks • Use ingress filtering here to stop most spoofing • Observe two-way traffic to different destinations • Throttle “poorly behaved” traffic • If it continues to behave badly, throttle it more • If it behaves well under throttling, back off and give it more bandwidth
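The throttling logic can be sketched as follows. This is a loose illustration of the idea, not D-Ward's actual algorithm: the thresholds, adjustment factors, and the sent/acked ratio used as the behavior signal are all made up.

```python
class FlowThrottle:
    """Per-destination rate limit adjusted by observed two-way behavior,
    in the spirit of D-Ward (all parameters are illustrative)."""
    MAX_LIMIT = 1000.0                     # packets/sec ceiling

    def __init__(self):
        self.limit = self.MAX_LIMIT

    def update(self, sent, acked):
        """Called each observation interval with packets sent toward the
        destination and responses observed coming back."""
        ratio = acked / sent if sent else 1.0
        if ratio < 0.1:          # "piling on": traffic ignores congestion
            self.limit *= 0.5    # throttle more aggressively
        elif ratio > 0.5:        # behaves like TCP: backs off under loss
            self.limit = min(self.limit * 1.2, self.MAX_LIMIT)
        return self.limit

t = FlowThrottle()
print(t.update(sent=2000, acked=50))    # 500.0 -- flood-like, limit halved
print(t.update(sent=500, acked=400))    # 600.0 -- well-behaved, limit restored
```

The key property matches the slide: misbehaving flows are squeezed harder over time, while a flow that responds well under throttling earns its bandwidth back.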

  25. D-WARD in Action • [Diagram: D-WARD boxes at the exit points of source networks; requests and replies flow normally while attack traffic is stopped]

  26. A Sample of D-Ward’s Effectiveness

  27. The Problem With D-Ward • D-Ward defends other people from your network’s DDoS attacks • It doesn’t defend your network from other people’s DDoS attacks • So why would anyone deploy it? • No one did, even though, if fully deployed, it could stop DDoS attacks

  28. Pushback • Goal: Drop attack traffic to relieve congestion • Detect congestion locally • Drop traffic from high-bandwidth aggregates • Push back the rate limits to the routers sending those aggregates • Who will then iterate • Rate limits pushed towards attack sites • Or other sites with high volume
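The local detection step might look like the sketch below. Real Pushback identifies high-bandwidth aggregates more generally; here the aggregate key (a destination /24-style prefix), the drop threshold, and the "majority of drops" test are all illustrative choices.

```python
from collections import Counter

DROP_THRESHOLD = 0.1   # hypothetical: react when >10% of packets are dropped

def find_attack_aggregate(dropped_packets, total_packets, total_drops):
    """Identify the destination prefix responsible for most drops.
    In Pushback, a rate limit on that aggregate would then be pushed
    to the upstream routers sending it, which iterate the same logic."""
    if total_drops / total_packets <= DROP_THRESHOLD:
        return None                      # no congestion: nothing to do
    by_prefix = Counter(p.rsplit(".", 1)[0] for p in dropped_packets)
    prefix, count = by_prefix.most_common(1)[0]
    # Only act if one aggregate dominates the drops.
    return prefix if count > total_drops / 2 else None

drops = ["10.0.0.1"] * 80 + ["192.168.1.5"] * 5
print(find_attack_aggregate(drops, total_packets=500, total_drops=85))
# "10.0.0" -- rate-limit this aggregate, then push the limit upstream
```

This also makes the collateral-damage problem on the next slide concrete: every packet matching the aggregate is rate-limited, including legitimate traffic that happens to share the prefix.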

  29. Can Pushback Work? • Even a few core routers are able to control high-volume attacks • But issues of partial deployment • Only traffic for the victim is dropped • Drops affect a portion of traffic that contains the attack traffic • But will inflict collateral damage on legitimate traffic • Traffic sharing controlled links with attack traffic likely to be harmed

  30. DefCOM • Different network locations are better for different elements • Near source good for characterizing traffic • Core nodes can filter effectively with small deployments • Near target it’s easier to detect and characterize an attack • DefCOM combines defense in all locations

  31. DefCOM in Action • [Diagram: an alert generator near the target, classifier nodes near the sources, and core nodes between them] • Classifiers can assure priority for good traffic • DefCOM instructs core nodes to apply rate limits • Core nodes use information from classifiers to prioritize traffic

  32. Benefits of DefCOM • Provides effective DDoS defense • Without ubiquitous deployment • Able to handle higher volume attacks than target end defenses • Offers deployment incentives for those who need to deploy things

  33. DefCOM Performance

  34. SOS • A hiding approach • Don’t let the attackers send packets to the possible target • Use an overlay network to deliver traffic to the destination • Filter out bad stuff in the overlay • Which can be highly provisioned

  35. How SOS Defends • Clients are authenticated at the overlay entrance • A few source addresses are allowed to reach the protected node • All other traffic is filtered out • Several overlay nodes designated as “approved” • Nobody else can route traffic to protected node • Good traffic tunneled to “approved” nodes • They forward it to the server
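The filtering step can be sketched as a firewall in front of the protected server that admits only traffic tunneled through the currently approved overlay nodes, with the approved set rotated if a node is attacked. The addresses and function names below are illustrative, not from SOS itself.

```python
# Hypothetical firewall state in front of the protected server: only a few
# secret, rotating "approved" overlay nodes may forward traffic to it.
approved_nodes = {"203.0.113.7", "203.0.113.42"}

def firewall_accept(src_addr: str) -> bool:
    """Drop everything not tunneled through an approved overlay node."""
    return src_addr in approved_nodes

def rotate(old: str, new: str) -> None:
    """Replace an approved node that has come under attack."""
    approved_nodes.discard(old)
    approved_nodes.add(new)

print(firewall_accept("203.0.113.7"))    # True: approved overlay node
print(firewall_accept("198.51.100.9"))   # False: direct attack traffic dropped
rotate("203.0.113.7", "203.0.113.99")
print(firewall_accept("203.0.113.7"))    # False after rotation
```

The defense rests on the approved set being both small (cheap to filter on) and secret (so attackers cannot flood the approved nodes directly), which is why the next slide asks what happens if the attacker targets the overlay itself.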

  36. Can SOS Work? • Should successfully protect communication with a private server: • Access points distinguish legitimate from attack communications • Overlay protects traffic flow • Firewall drops attack packets • What about attacking overlay? • Redundancy and secrecy might help

  37. SOS Advantages and Limitations • Ensures communication of “confirmed” user with the victim • Resilient to overlay node failure • Resilient to DoS • Does not work for public service • Clients must be aware of overlay and use it to access the victim • Traffic routed through suboptimal path • Still allows brute force attack on links entering the filtering router in front of client • If the attacker can find it

  38. How Do We Test DDoS Defense? • What are the core claims about each defense? • Which of those are least plausible or most risky? • How do we prioritize among many things we could test?

  39. Performance Questions • How well does each defend against attacks? • Does it damage performance of normal traffic? • Can it run fast enough for realistic cases? • How much does partial deployment pattern matter? • Does regular traffic pattern matter? • Does attack traffic pattern matter? • Can the defense be used as an attack tool?

  40. How Do We Test? • Let’s concentrate first on the core issue of whether the system defends • Using DefCOM as an example • How do we propose to test that?

  41. Basic Approach • What is our basic testing approach? • Set up a four-machine testbed like so: Traffic source → Classifier → Rate limiter → Target

  42. Or One Like This? • [Diagram: a larger topology with an alert generator, classifier nodes, and core nodes, as in the DefCOM diagram]

  43. Or One Like This?

  44. If It’s Not the Simple One . . . • What is the topology? • How many edge nodes? • Organized into how many subnets? • How many core nodes? • Connected how? • And how do we arrange the routing?

  45. Is the Base Case Full Deployment? • And what does that mean in terms of where we put classifiers and filtering nodes? • If it’s not full deployment, what is the partial deployment pattern? • A single pattern? • Or treat that as a factor in experiment?

  46. Metrics • What metric or metrics should we use to decide if DefCOM successfully defends against DDoS attacks? • Utilization of the bottleneck link? • Percentage of dropped attack packets? • Percentage of legitimate packets delivered? • Something else?
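The last two candidate metrics can be computed directly from per-class packet counts, as in this sketch (the counts are made-up example numbers):

```python
def ddos_metrics(legit_sent, legit_delivered, attack_sent, attack_delivered):
    """Two candidate success metrics for a DDoS defense experiment."""
    return {
        # Higher is better: did legitimate clients still get service?
        "legit_delivery_pct": 100.0 * legit_delivered / legit_sent,
        # Higher is better: how much attack traffic was removed?
        "attack_drop_pct": 100.0 * (attack_sent - attack_delivered) / attack_sent,
    }

m = ddos_metrics(legit_sent=10_000, legit_delivered=9_200,
                 attack_sent=500_000, attack_delivered=25_000)
print(m)   # {'legit_delivery_pct': 92.0, 'attack_drop_pct': 95.0}
```

Reporting both numbers matters: a defense that drops 100% of attack packets but also most legitimate ones has, per slide 12, not solved the problem.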

  47. Workload • Probably two components: • Legitimate traffic • Attack traffic • Where do we get them from? • If we’re not using the simple topology, where do we apply them?

  48. The Attack Workload • Basically, something generating a lot of packets • But is there more to it? • Do we care about kind of packets? • Pattern of their creation? • Contents? • Header? • Payload? • Do attack dynamics change during attack? • Which nodes generate attack packets?

  49. The Legitimate Workload • What is it? • How realistic must it be? • How do we get it? • Where is it applied? • Is it responsive to what happens at the target? • Cross-traffic?

  50. How Much Work Must We Do? • Do we just define one set of conditions and test DefCOM there? • If not, what gets varied? • Deployment pattern? • Attack size in packets? • Number of attacking nodes? • Legitimate traffic patterns? • Size of target’s bottleneck link? • Accuracy of classification? • Something else?
