
Minimizing Collateral Damage by Proactive Surge Protection

Explore a proactive approach to limit network damage from large-scale attacks. Learn about bandwidth isolation techniques and surge protection mechanisms.





Presentation Transcript


  1. Minimizing Collateral Damage by Proactive Surge Protection. Jerry Chou, Bill Lin (University of California, San Diego); Subhabrata Sen, Oliver Spatscheck (AT&T Labs-Research)

  2. Problem • Large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of the network before reactive defenses can respond • All traffic sharing links with attack routes suffers collateral damage, even if its origin-destination (OD) pair is not under direct attack

  3. Problem • The potential for large-scale bandwidth-based DDoS attacks exists • e.g. large botnets with more than 100,000 bots exist today that, combined with the prevalence of high-speed Internet access, can give attackers multiple tens of Gb/s of attack capacity • Moreover, core networks are oversubscribed (e.g. some core routers in Abilene have more than 30 Gb/s of incoming traffic from access networks, but only 20 Gb/s of outgoing capacity to the core)

  4. Problem • Router-based defenses like Random Early Drop (RED, RED-PD, etc.) can prevent congestion by dropping packets early, before queues overflow • But they may drop normal traffic indiscriminately, causing responsive TCP flows to degrade severely • Approximate fair-dropping schemes aim to provide fair sharing between flows • But attackers can launch many seemingly legitimate TCP connections with spoofed IP addresses and port numbers • Both aggregate-based and flow-based router defense mechanisms can be defeated

  5. Problem • In general, defenses based on unauthenticated header information, such as IP addresses and port numbers, may not be reliable

  6. Example Scenario • Under normal conditions, Seattle/NY traffic is 3 Gb/s and Sunnyvale/NY traffic is 3 Gb/s • Their combined traffic stays under the 10 Gb/s link capacity [Map: Abilene topology with 10G links — Seattle, Sunnyvale, Kansas City, Indianapolis, Houston, Atlanta, New York]

  7. Example Scenario • Suppose a sudden 10 Gb/s attack between Houston and Atlanta, while Seattle/NY and Sunnyvale/NY still carry 3 Gb/s each • Congested links suffer a high rate of packet loss • Serious collateral damage on crossfire OD pairs [Map: same Abilene topology as slide 6]

  8. Impact on Collateral Damage • OD pairs are classified into 3 types with respect to the attack traffic • Even a small percentage of attack flows can affect substantial parts of the network
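The three-way classification can be sketched as follows. This is a minimal sketch assuming the types are: pairs carrying attack traffic, "crossfire" pairs whose routes share a link with an attack route, and unaffected pairs; the type names, routes, and OD-pair labels below are illustrative, not from the slides.

```python
def classify_od_pairs(routes, attack_pairs):
    """Classify OD pairs relative to a set of attacked OD pairs.

    routes: dict mapping OD pair -> list of links on its path
    attack_pairs: set of OD pairs carrying attack traffic
    """
    # every link traversed by some attack flow is a potential congestion point
    attack_links = {link for od in attack_pairs for link in routes[od]}
    classes = {}
    for od, path in routes.items():
        if od in attack_pairs:
            classes[od] = "attack"
        elif any(link in attack_links for link in path):
            classes[od] = "crossfire"    # shares a link with an attack route
        else:
            classes[od] = "unaffected"
    return classes
```

Even one attacked pair whose path crosses a busy backbone link marks every pair using that link as crossfire, which is why a small fraction of attack flows can affect much of the network.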

  9. Our Solution • Provide bandwidth isolation between OD pairs, independent of IP spoofing or number of TCP/UDP connections • We call this method Proactive Surge Protection (PSP) as it aims to proactively limit the damage that can be caused by sudden demand surges, e.g. sudden bandwidth-based DDoS attacks

  10. Basic Idea: Bandwidth Isolation • Reserve bandwidth for expected OD-pair demand • Meter and tag packets on ingress as HIGH or LOW • Drop LOW packets under congestion inside the network • Example: Seattle/NY — limit 3.5 Gb/s, actual 3 Gb/s, all admitted as HIGH; Sunnyvale/NY — limit 3.5 Gb/s, actual 3 Gb/s, all admitted as HIGH; Houston/Atlanta — limit 3 Gb/s, actual 10 Gb/s, so 3 Gb/s admitted as HIGH and 7 Gb/s as LOW • Traffic received in NY: Seattle 3 Gb/s, Sunnyvale 3 Gb/s, … [Map: same Abilene topology as slide 6]

  11. Basic Idea: Bandwidth Isolation • Unlike conventional admission control, packets are still admitted into the network even when the reserved bandwidth has been exceeded • The proposed mechanism is readily available in modern routers
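The two data-plane steps — differential tagging at the ingress and preferential dropping in the core — can be sketched with a simple token-bucket meter. The rates, burst size, and class names here are illustrative; a real deployment would use the routers' built-in policers and priority-based dropping rather than this code.

```python
class IngressMeter:
    """Per-OD-pair token bucket: packets within the reserved rate are
    tagged HIGH, excess packets LOW.  Units: bytes and seconds."""

    def __init__(self, reserved_rate, burst):
        self.rate = reserved_rate   # reserved bandwidth for this OD pair
        self.burst = burst          # bucket depth (allowed burstiness)
        self.tokens = burst
        self.last = 0.0

    def tag(self, pkt_size, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_size:
            self.tokens -= pkt_size
            return "HIGH"
        return "LOW"    # still forwarded, but dropped first under congestion


def drain_link(packets, capacity_bytes):
    """Preferential dropping at a congested link: HIGH packets are served
    first; LOW packets only use whatever capacity remains."""
    sent, used = [], 0
    for prio in ("HIGH", "LOW"):
        for tag, size in packets:
            if tag == prio and used + size <= capacity_bytes:
                sent.append((tag, size))
                used += size
    return sent
```

Note that, as the slide stresses, the meter never rejects a packet outright: exceeding the reservation only lowers a packet's drop priority inside the network.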

  12. Architecture [Block diagram] Policy plane: Forecaster → Forecast Matrix → Bandwidth Allocator → Bandwidth Allocation Matrix. Data plane: arriving packets → Differential Tagging (deployed at the network perimeter) → tagged packets (high or low priority) → Preferential Dropping (deployed at network routers), which forwards packets and drops low-priority packets under congestion

  13. Forecasting and Allocation • We use historical network measurements as a forecast of expected normal traffic • e.g. average weekday traffic demand at 3pm EDT over the past 2 months • More sophisticated forecasting methods (e.g. Bayesian schemes) are possible, but simple forecasting already gives good results • To account for forecasting inaccuracies and to provide headroom for traffic burstiness, the forecast matrix is proportionally scaled to fully allocate the available network capacity
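The simple historical forecast described above amounts to a per-OD-pair average over matched time slots (e.g. one sample per weekday at 3pm). A sketch, with hypothetical OD-pair keys:

```python
from statistics import mean

def forecast_matrix(history):
    """history: list of demand matrices (dict: OD pair -> Gb/s), one per
    matched historical time slot.  The forecast is the per-pair average."""
    ods = history[0].keys()
    return {od: mean(h[od] for h in history) for od in ods}
```

Usage: feed in the demand matrices for the same hour over the measurement window, then hand the result to the proportional-scaling step.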

  14. Proportional Scaling • Iteratively scale the bandwidth allocation in a "water-filling" manner • Example: three nodes A, B, C connected in a line (links A–B and B–C, 10G in each direction)

Forecast Matrix (Gb/s):
       A     B     C
  A    1    1.5    1
  B   0.5    2    0.5
  C   1.5    1     1

Bandwidth Allocation (Gb/s):
       A     B     C
  A    ∞     6     4
  B    4     ∞     6
  C    6     4     ∞

[Bar charts: per-link bandwidth (0–10G) on links AB, BC, CB, BA after the 1st and 2nd rounds of water-filling]
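The iterative water-filling can be sketched as follows: in each round, all still-growing OD-pair allocations are scaled up by a common factor until some link saturates; pairs crossing a saturated link are frozen, and the rest keep growing. This is a sketch of the slide's idea, not the authors' implementation; routing is assumed shortest-path, so A↔C traffic traverses both links of the A–B–C line.

```python
def water_fill(forecast, routes, capacity, eps=1e-9):
    """Proportionally scale `forecast` (dict: OD pair -> Gb/s) so that the
    network capacity is fully allocated, water-filling style.

    routes:   dict mapping OD pair -> list of links on its path
    capacity: dict mapping link -> capacity (Gb/s)
    """
    alloc = dict(forecast)
    frozen = set()
    while True:
        used = {l: 0.0 for l in capacity}   # bandwidth held by frozen pairs
        grow = {l: 0.0 for l in capacity}   # bandwidth of still-growing pairs
        for od, links in routes.items():
            for l in links:
                (used if od in frozen else grow)[l] += alloc[od]
        active = [l for l in capacity if grow[l] > eps]
        if not active:
            return alloc
        # largest common growth factor before some active link fills up
        theta = min((capacity[l] - used[l]) / grow[l] for l in active)
        for od in alloc:
            if od not in frozen:
                alloc[od] *= theta
        # freeze every pair that crosses a newly saturated link
        saturated = {l for l in active
                     if used[l] + theta * grow[l] >= capacity[l] - eps}
        for od, links in routes.items():
            if od not in frozen and any(l in saturated for l in links):
                frozen.add(od)
```

On the slide's example (off-diagonal forecast entries, A↔C routed through B), links AB and CB fill in the first round, freezing A→B, A→C, C→A, and C→B at 6, 4, 6, and 4 Gb/s; the remaining pairs keep growing until BA and then BC fill, reproducing the allocation matrix on the slide.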

  15. Networks • Abilene • US public academic network • 11 nodes, 14 links (10 Gb/s) • Traffic data: 10/01/06–12/06/06 • US Backbone • US private tier-1 ISP backbone network • 700 nodes, 2000 links (1.5 Mb/s – 10 Gb/s) • Traffic data: 09/01/06–11/17/06 • Europe Backbone • European private tier-1 ISP backbone network • 900 nodes, 3000 links (1.5 Mb/s – 10 Gb/s) • Traffic data: 11/18/06–12/18/06

  16. DDoS Attack Data • Abilene • Bottleneck links: Denver → Chicago, Kansas City → Chicago, Indianapolis → Chicago (5G each) • US Backbone • Commercial anomaly-detection alarm: pick the alarm with the most flows and scale their demand by 1000× • Europe Backbone • Synthetic attack-flow generator: randomly generate attack flows among 0.1% of OD pairs [Map: Abilene topology with the three bottleneck links into Chicago]
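The synthetic generator for the Europe backbone can be sketched as random selection of 0.1% of OD pairs, each assigned an attack demand. The per-pair rate and the seed below are arbitrary illustrations; the slide only specifies the 0.1% fraction.

```python
import random

def synthetic_attack(od_pairs, fraction=0.001, rate_gbps=10.0, seed=0):
    """Pick `fraction` of the OD pairs at random as attack sources and
    assign each one an attack demand of `rate_gbps`."""
    rng = random.Random(seed)              # fixed seed for reproducibility
    k = max(1, int(len(od_pairs) * fraction))
    return {od: rate_gbps for od in rng.sample(list(od_pairs), k)}
```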

  17. Packet Drop Rate Comparison Abilene

  18. Packet Drop Rate Comparison US

  19. Packet Drop Rate Comparison Europe

  20. Behavior Under Scaled Attacks • Packet drop rate under attack demand scaled by factors from 0× to 3× • PSP provides greater improvement as the attack scale increases Abilene

  21. Behavior Under Scaled Attacks • Packet drop rate under attack demand scaled by factors from 0× to 3× • PSP provides greater improvement as the attack scale increases US

  22. Behavior Under Scaled Attacks • Packet drop rate under attack demand scaled by factors from 0× to 3× • PSP provides greater improvement as the attack scale increases Europe

  23. Summary of Contributions • The proposed proactive solution gives network operators a first line of defense when sudden DDoS attacks occur • The solution does not depend on unauthenticated header information, and is thus robust to IP and TCP spoofing • Collateral damage is minimized by providing bandwidth isolation between OD pairs • The solution is readily deployable using existing router mechanisms • Simulation results show that, without protection, up to 95.5% of the network could suffer collateral damage; our solution reduced collateral damage by 60.5–97.8%

  24. Questions?
