
Enterprise Networks under Stress




Presentation Transcript


  1. Enterprise Networks under Stress

  2. [Chart annotation: = 60% growth/year. Source: Vern Paxson, ICIR, “Measuring Adversaries”]

  3. “Background” Radiation -- dominates traffic in many of today’s networks
     [Chart annotation: = 596% growth/year. Source: Vern Paxson, ICIR, “Measuring Adversaries”]

  4. Some Observations
     • Internet reasonably robust to point problems like link and router failures (“fail stop”)
     • Successfully operates under a wide range of loading conditions and over diverse technologies
     • During 9/11/01, the Internet worked well under heavy traffic conditions and despite major facilities failures in Lower Manhattan

  5. The Problem
     • Networks awash in illegitimate traffic: port scans, propagating worms, p2p file swapping
     • Legitimate traffic starved for bandwidth
     • Essential network services (e.g., DNS, NFS) compromised
     • Needed: better network management of services/applications to achieve good performance and resilience even in the face of network stress
       • Self-aware network environment
       • Observing and responding to traffic changes
       • While sustaining the ability to control the network

  6. From the Frontlines
     • Berkeley Campus Network
       • Unanticipated traffic surges render the network unmanageable (and may cause routers to fail)
       • Denial-of-service attacks, the latest worm, and the newest file-sharing protocol are largely indistinguishable
       • The in-band control channel is starved, making it difficult to manage and recover the network
     • Berkeley EECS Department Network (12/04)
       • Suspected denial-of-service attack against DNS
       • Poorly implemented/configured spam appliance adds to the DNS overload
       • Traffic surges render it impossible to access the Web or mount file systems
       • Network problems contribute to brittleness of distributed systems

  7. Why and How Networks Fail
     • Complex phenomenology of failure
     • Traffic surges break enterprise networks
     • “Unexpected” traffic as deadly as high net utilization
       • Cisco Express Forwarding: random IP addresses --> flood the route cache --> force traffic through the slow path --> high CPU utilization --> dropped routing table updates
       • Route summarization: a powerful misconfigured peer overwhelms a weaker peer with too many routing table entries
       • SNMP DoS attack: overwhelm SNMP ports on routers
       • DNS attack: response-response loops in DNS queries generate traffic overload
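
The Cisco Express Forwarding case above boils down to traffic that sprays packets across a huge number of destinations. As a rough, hypothetical sketch (not from the slides), an observation point could flag such sources from flow records; the /24 aggregation and the 500-prefix threshold are arbitrary assumptions:

```python
# Hypothetical sketch: flag sources that touch an unusually large number of
# distinct destination prefixes -- the kind of random-address surge that can
# thrash a route cache and push packets onto the slow forwarding path.
import ipaddress
from collections import defaultdict

def prefix24(addr: str) -> str:
    """Collapse an IPv4 address to its /24 prefix."""
    return str(ipaddress.ip_network(addr + "/24", strict=False))

def suspicious_sources(flows, max_prefixes=500):
    """flows: iterable of (src_ip, dst_ip) seen in one measurement window.
    Returns sources that sprayed packets across too many /24s."""
    seen = defaultdict(set)
    for src, dst in flows:
        seen[src].add(prefix24(dst))
    return {src: len(p) for src, p in seen.items() if len(p) > max_prefixes}

# Example: a scanner hitting 1000 distinct /24s stands out immediately.
scanner = [("10.0.0.5", f"198.{i // 256}.{i % 256}.1") for i in range(1000)]
normal = [("10.0.0.9", "192.0.2.10")] * 1000
print(suspicious_sources(scanner + normal))   # {'10.0.0.5': 1000}
```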

  8. Technology Trends
     [Figure labels: Traffic Shaping, Load Balancing]
     • Integration of servers, storage, switching, and routing
       • Blade servers, stateful routers, Inspection-and-Action Boxes (iBoxes)
     • Packet flow manipulations at L4-L7
       • Inspection/segregation/accounting of traffic
       • Packet marking/annotating
     • Building blocks for network protection
       • Pervasive observation and statistics collection
       • Analysis, model extraction, statistical correlation and causality testing
       • Actions for load balancing and traffic shaping
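
To make the observe / analyze / act building blocks concrete, here is a minimal Python sketch of that loop. The traffic classes, baseline rates, and the shape_traffic() hook are all invented for illustration; a real iBox would act on live L4-L7 flows rather than a list of tuples.

```python
# Minimal observe -> analyze -> act loop, in the spirit of the iBox building
# blocks on this slide. All names and thresholds here are assumptions.
from collections import Counter

BASELINE_BPS = {"web": 50e6, "mail": 10e6, "dns": 1e6}   # assumed normal rates

def observe(packets):
    """Accounting step: bytes per traffic class in one interval."""
    counts = Counter()
    for cls, nbytes in packets:          # packets: iterable of (class, size)
        counts[cls] += nbytes
    return counts

def analyze(counts, interval_s=1.0, factor=5.0):
    """Flag classes running far above their baseline rate."""
    return [c for c, b in counts.items()
            if b / interval_s > factor * BASELINE_BPS.get(c, 1e6)]

def act(anomalous, shape_traffic):
    """Action step: hand anomalous classes to a shaping/marking hook."""
    for cls in anomalous:
        shape_traffic(cls)

counts = observe([("mail", 80_000_000), ("web", 30_000_000), ("dns", 200_000)])
act(analyze(counts), shape_traffic=lambda cls: print("shaping", cls))
```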

  9. Scenario: Traffic Surge Inhibiting Network Services
     • DNS server swamped by excessive request traffic
     • Observe: DNS timeouts, Web access traffic slowed, but also higher-than-normal mail delivery latency implying a busy server edge (correlation between Mail Server and DNS Server utilization?)
     • Root cause: high DNS request rates generated by the Spam Appliance, triggered by a mail surge
     [Diagram: Internet edge, distribution tier, server edge (primary & secondary DNS servers, Mail Server, Spam Appliance), access edge; iBoxes I-I, I-S, I-A at the respective edges]
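
The slide's parenthetical question (is Mail Server utilization correlated with DNS Server utilization?) is, at its simplest, a correlation test over two utilization time series an iBox might export. A small sketch with made-up sample values:

```python
# Pearson correlation between mail-server and DNS-server CPU utilization.
# The sample numbers are fabricated for illustration only.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mail_util = [0.20, 0.25, 0.40, 0.60, 0.75, 0.90]   # mail server CPU
dns_util = [0.10, 0.15, 0.35, 0.55, 0.70, 0.88]    # DNS server CPU
print(f"MS/DNS utilization correlation: {pearson(mail_util, dns_util):.2f}")
# A value near 1.0 supports the hypothesis, but correlation alone cannot say
# which service is driving the other -- hence the experiments on later slides.
```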

  10. Scenario Continued
     • How diagnosed?
       • I-S detects high link utilization but abnormally high DNS traffic
       • Stats from I-I: high mail traffic, low outgoing web traffic, inbound traffic high but link utilization not high
       • Stats from I-A: lower web traffic, no unusual mail origination
       • Problem localized to the server edge, but visibility is limited
     [Diagram as on slide 9]
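
As a toy illustration of how the three iBox reports might be combined to localize the problem to the server edge, here is a hedged sketch. The observation fields and the rules are assumptions for illustration, not the deck's actual diagnosis logic.

```python
# Rule-of-thumb localization from per-iBox observations (hypothetical fields).
def localize(i_i, i_s, i_a):
    """Each argument is a dict of boolean observations from one iBox."""
    if (i_s["dns_traffic_high"]
            and not i_a["web_traffic_high"]
            and not i_i["link_util_high"]):
        return "server edge"       # anomaly visible only behind the server edge
    if i_i["link_util_high"]:
        return "internet edge"
    return "unknown"

report = localize(
    i_i={"mail_traffic_high": True, "link_util_high": False},
    i_s={"dns_traffic_high": True},
    i_a={"web_traffic_high": False, "mail_origination_unusual": False},
)
print("Problem localized to:", report)   # -> server edge
```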

  11. Scenario Continued
     • Possible action responses
       • Experiment: redirect local DNS requests to the secondary DNS server; if these complete, we can infer the server is the problem, not the network
       • Throttle: given the MS-DNS correlation, block/slow email traffic at the server edge; we should expect reduced DNS server utilization
     [Diagram as on slide 9]
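
The first response is essentially a probe: send a few lookups via the secondary DNS server and see whether they complete. A minimal sketch, with resolve() standing in for whatever lookup mechanism is actually available:

```python
# Probe the secondary DNS server to separate "server problem" from "network
# problem". resolve() is a hypothetical hook, not part of the deck.
def probe_secondary(names, resolve, timeout_s=2.0):
    ok = sum(1 for n in names if resolve(n, timeout=timeout_s))
    if ok == len(names):
        return "secondary answers: primary DNS server, not the network, is the likely problem"
    if ok == 0:
        return "secondary also fails: suspect the network path or a wider outage"
    return "mixed results: keep observing"

# Toy resolver standing in for real lookups against the secondary server.
fake_resolve = lambda name, timeout: True
print(probe_secondary(["www.example.com", "mail.example.com"], fake_resolve))
```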

  12. Scenario
     [Diagram: Internet edge, distribution tier, server edge (MS, FS, DNS, Spam Filter), access edge (PCs)]

  13. Observed Operational Problems
     • User-visible services:
       • NFS mount operations time out
       • Web access also fails intermittently due to timeouts
     • Failure causes:
       • Independent or correlated failures?
       • Problem in the access, server, or Internet edge?
       • File server failure?
       • Internet denial-of-service attack?

  14. Network Dashboard
     [Time-series panels: ingress bandwidth consumed (In Web, Email, Out Web) -- gentle rise, mail traffic growing; FS CPU utilization -- no unusual pattern; MS CPU utilization; DNS CPU utilization -- unusual step jump in DNS transaction rates; access edge bandwidth consumed -- decline]
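
The "unusual step jump" callout on the DNS panel suggests a simple change detector over the transaction-rate series. One possible sketch; the window size and threshold are arbitrary assumptions:

```python
# Step-jump detector: compare the mean of the latest window with the mean of
# the preceding history. Values below are fabricated for illustration.
def step_jump(series, window=5, factor=3.0):
    if len(series) < 2 * window:
        return False
    recent = series[-window:]
    before = series[:-window]
    base = sum(before) / len(before)
    return sum(recent) / window > factor * max(base, 1e-9)

dns_xact_rate = [40, 42, 38, 41, 39, 40, 43, 39, 160, 170, 165, 158, 162]
print("step jump detected:", step_jump(dns_xact_rate))   # True
```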

  15. Network Dashboard (continued)
     [Same panels as slide 14, now annotated: CERT Advisory! DNS Attack!]

  16. Observed Correlations
     • Mail traffic up
     • MS CPU utilization up
       • Service time up, service load up, service queue longer, latency longer
     • DNS CPU utilization up
       • Service time up, request rate up, latency up
     • Access edge bandwidth down
     [Annotations: Causality -- no surprise! How does mail traffic cause DNS load?]

  17. Run Experiment: Shape Mail Traffic
     [Time-series panels: MS CPU utilization, DNS CPU utilization, access edge bandwidth consumed (In Web, Email, Out Web) -- mail traffic limited, DNS down, access edge bandwidth returns]
     • Root cause:
       • Spam appliance --> DNS lookups to verify sender domains
       • Spam attack hammers internal DNS, degrading other services: NFS, Web
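
The experiment on this slide amounts to: throttle mail, wait, and check whether DNS utilization falls. A hedged sketch of that procedure; apply_shaping() and read_dns_utilization() are hypothetical hooks into the iBoxes, and the 30% drop criterion is an assumption:

```python
# Active experiment: shape mail traffic, then compare DNS utilization before
# and after. A clear drop supports the "mail -> DNS load" causal hypothesis.
import time

def shaping_experiment(apply_shaping, read_dns_utilization,
                       settle_s=60, drop_fraction=0.3):
    before = read_dns_utilization()
    apply_shaping("mail", rate_limit_mbps=1)
    time.sleep(settle_s)                      # let queues drain
    after = read_dns_utilization()
    causal = after < (1 - drop_fraction) * before
    return before, after, causal

# Dry run with canned readings instead of live iBox hooks:
readings = iter([0.92, 0.35])
print(shaping_experiment(lambda cls, rate_limit_mbps: None,
                         lambda: next(readings), settle_s=0))
```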

  18. Policies and Actions Restore the Network
     • Shape mail traffic
       • Mail delay acceptable to users?
       • Can’t do this forever unless mail is filtered at the Internet edge
     • Load balance DNS services
       • Increase resources faster than the incoming mail rate
       • Actually done: a dedicated DNS server for the Spam Appliance
     • Other actions? Traffic priority, QoS knobs
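
"Shape mail traffic" is typically implemented with something like a token bucket. A minimal, self-contained sketch; the rate and burst values are placeholders rather than the policy the deck actually used:

```python
# Token-bucket shaper for the mail class (illustrative parameters only).
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate, self.capacity = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True          # forward the packet
        return False             # drop or queue it -- mail can tolerate the delay

mail_shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=64_000)  # ~1 Mbit/s
passed = sum(mail_shaper.allow(1500) for _ in range(100))
print(passed, "of 100 back-to-back packets pass immediately")
```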

  19. Analysis
     • Root causes difficult to diagnose
       • Transitive and hidden causes
     • Key is pervasive observation
       • iBoxes provide the needed infrastructure
     • Observations to identify correlations
     • Perform active experiments to “suggest” causality

  20. Many Challenges
     • Policy specification: how to express it? Service Level Objectives?
     • Experimental plan
       • Distributed vs. centralized development
       • Controlling the experiments … when the network is stressed
       • Sequencing matters, to reveal “hidden” causes
     • Active experiments
       • Making things worse before they get better
       • Stability, convergence issues
     • Actions
       • Beyond shaping of classified flows, load balancing, server scaling?

  21. Implications for Network Operations and Management
     • Processing-in-the-Network is real
       • Enables pervasive monitoring and actions
     • Statistical models to discover correlations and to detect anomalies
     • Automated experiments to reveal causality
     • Policies drive actions to reduce network stress

  22. Datacenter Networks

  23. Networks Under Stress
