
Scalability & Stability of the Internet Infrastructure


Presentation Transcript


  1. Scalability & Stability of the Internet Infrastructure Farnam Jahanian Department of EECS University of Michigan <farnam@umich.edu>

  2. Context LIGHTHOUSE: Survivable Network Infrastructure (joint projects between U. Michigan & Merit Network). [Component diagram:] • Network Infrastructure: routers, name servers, critical services • Active Response Capabilities: protocol scrubbers, replication schemes, countermeasures • Anomalous Network Events: network attacks, operational faults, S/H failures • Analysis Engines: Netflow statistics, event aggregation, data mining • Coarse and Fine Grained Measurement Tools: Windmill probes

  3. Motivation • Increasing reliance of financial and national utility infrastructures on interconnected IP-based networks • Explosive growth in both size and topological complexity of the underlying communication infrastructure • Reliance on off-the-shelf infrastructure & shrink-wrapped code • Network infrastructure is vulnerable: • inherent instability and transient oscillations • delayed convergence and long failover • coordinated denial of service attacks on network resources • hardware and software failures • operational faults and misconfigurations

  4. Imminent Collapse of the Internet? [Timeline graphic: predicted collapse of the Internet, marked against "Now"]

  5. Internet Growth Explosive growth in both size and topological complexity • Internet end-system growth • Traffic volume & characteristics • Infrastructure topological evolution

  6. Infrastructure Topological Evolution Between 1995-1999: • Decentralization: from a single backbone network to a conglomeration of 100s of backbone networks and 1000s of ISPs. • Loss of hierarchy and abstraction: from a strictly hierarchical network to an increasingly full-mesh interconnection. • Significant bandwidth increase: from single T3 (45 Mbps) circuits and T1 (1.5 Mbps) links to multiple OC48 (2.5 Gbps) circuits and OC12 (622 Mbps) lines between nodes.

  7. Internet Evolution: NSFNet [Diagram: hierarchical network with a single central NSFNet backbone; regional networks attach via Hello/EGP, and campus networks attach to the regionals]

  8. Internet Evolution: Today [Diagram: full-mesh interconnection of ISP backbones (AS1-AS4) and customers (C1-C4)]

  9. Impact of Instability & Failures • Increased end-to-end Loss/Latency • Increased delay in convergence & network reachability • Backbone infrastructure CPU/Memory requirements • Backbone “route flap storms” • Network management complexity

  10. Background: Internet Architecture [Diagram: autonomous systems interconnected by BGP peering sessions]

  11. Background: Internet Routing • Two major categories • Inter-domain (BGP between autonomous systems) • Intra-domain (OSPF, IS-IS, IGRP inside an AS) • BGP • Incremental: announcements and withdraws • Updates include policy attributes (e.g. MED, AS path) • Maintains multiple possible routes

  12. Background: BGP Routing Protocol • BGP is an incremental protocol that sends update information only upon changes in network topology or routing policy. • Two forms of messages: • announcements: a new network is accessible, or another route to a destination is now preferred • withdrawals: a destination network is no longer accessible • Route selection is driven by routing policies, not shortest hop count
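As a concrete illustration of the incremental model, here is a minimal sketch (not the routers' implementation; the prefix and attribute names are invented) of a routing table absorbing the two message types:

```python
# Minimal sketch of a BGP-style RIB absorbing incremental updates.
# An announcement installs/replaces the route for a prefix; a withdrawal
# removes it. Attribute names here are illustrative, not wire-format fields.

rib = {}  # prefix -> route attributes

def apply_update(update):
    if update["type"] == "announce":
        # New network accessible, or a preferred route to an existing one.
        rib[update["prefix"]] = {"as_path": update["as_path"]}
    elif update["type"] == "withdraw":
        # Destination network is no longer accessible.
        rib.pop(update["prefix"], None)

apply_update({"type": "announce", "prefix": "192.0.2.0/24", "as_path": [64512, 64513]})
apply_update({"type": "withdraw", "prefix": "192.0.2.0/24"})
print(rib)  # {} -- the route came and went
```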

  13. Background: Internet Core • Networks aggregated into CIDR (Classless Inter-Domain Routing) prefixes • A prefix represents a set of destination IP addresses • Routers at the Internet "core" are "default-free": they maintain explicit routes to all destinations rather than a default route • Originally 5 major Internet Exchange Points (IXPs) • In 1996, approximately 30,000 default-free routes
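To make "prefix as a set of addresses" concrete, the sketch below uses Python's standard ipaddress module to do a longest-prefix match against a toy table; the prefixes are illustrative, not routes from the study.

```python
# CIDR illustration: a prefix denotes a set of destination addresses; a
# default-free router forwards on the longest (most specific) matching prefix.
import ipaddress

table = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24")]
dst = ipaddress.ip_address("10.1.2.3")

# Longest-prefix match: among prefixes containing dst, take the most specific.
best = max((n for n in table if dst in n), key=lambda n: n.prefixlen)
print(best)  # 10.1.2.0/24
```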

  14. Roadmap • Study of stability of routing in the Internet backbone • Transient oscillations, pathological redundant updates • Congestion collapse and correlation to network usage • SIGCOMM '97 and INFOCOM '99 • Study of route availability and failover rates • Long-term availability of Internet backbone routes • Case study of a regional provider • FTCS '99 • Study of convergence behavior of routing protocols • Injection of route changes into the Internet backbone • Impact of convergence delay on end-to-end paths • 18-month study & ongoing

  15. Internet Exchange Points Deployed probe machines at five public exchange points Collected all routing updates at the IXPs over a four-year period

  16. Internet Routing Instability Results • Number of BGP routing updates exchanged per day in the Internet core is orders of magnitude larger than expected. • Most routing information is dominated by pathological, or redundant updates, which do not directly reflect changes in routing policy or topology. • Instability and redundant updates exhibit a specific periodicity of 30 and 60 seconds. • Instability and redundant updates show a surprising correlation to network usage and exhibit corresponding daily and weekly cyclic trends.
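The reported 30/60-second periodicity is the kind of signal a simple spectral analysis exposes. A hedged sketch on synthetic data (not the IXP traces):

```python
# Synthetic illustration of detecting a 30-second periodicity: bin update
# arrivals into a per-second count series and look for a low-frequency
# spectral peak. The arrival data here is fabricated.
import numpy as np

rng = np.random.default_rng(0)
arrivals = rng.uniform(0, 3600, 2000)                            # background updates (toy)
arrivals = np.concatenate([arrivals, np.arange(0, 3600, 30.0)])  # inject a 30 s component

counts, _ = np.histogram(arrivals, bins=np.arange(0, 3601))      # updates per second
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=1.0)                      # cycles per second

band = (freqs > 0) & (freqs < 0.05)                   # look at periods longer than 20 s
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant period ~ {1 / peak:.0f} s")          # ~30 s for this toy input
```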

  17. Instability Results (Continued) • Instability is not dominated by a small set of autonomous systems or routes. • Instability is not disproportionately dominated by prefixes of specific lengths, i.e. it is independent of aggregation. • Discounting policy fluctuation and pathological behavior, there remains a significant level of Internet forwarding instability. • Details: SIGCOMM '97 & INFOCOM '99

  18. Growth in Routing State [Chart: linear growth in routing table size]

  19. Initial Findings (SIGCOMM '97) • Up to 60 million BGP updates/day for only 30,000 default-free routes! • On average 2-6 million withdraws per day (mostly duplicates) • e.g., ISP A had 259 routes but withdrew 2.4 million routes • All state changes well distributed across prefix lengths and autonomous systems • Unexpected frequency components • 30 second inter-arrival time between updates • Daily/weekly components

  20. More Initial Observations • Most routing updates pathological (millions!) • Some due to misconfiguration • Private networks • Host routes • Multicast routes • Majority duplicate updates • Duplicate withdraws (WWDup > 99.99%) • Duplicate announcements (AADup)
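A simplified version of that duplicate taxonomy as code (the exact attribute comparison used in the study is an assumption here): a withdraw for a route that is already down counts as WWDup, and an announcement identical to the route already in place counts as AADup.

```python
# Simplified duplicate-update tagging in the spirit of the study's taxonomy.

last_state = {}  # prefix -> ("A", attrs) or ("W", None)

def classify(prefix, kind, attrs=None):
    prev = last_state.get(prefix)
    last_state[prefix] = (kind, attrs)
    if kind == "W" and (prev is None or prev[0] == "W"):
        return "WWDup"      # duplicate withdraw: route was not announced
    if kind == "A" and prev == ("A", attrs):
        return "AADup"      # duplicate announcement: nothing changed
    return "legitimate"

print(classify("192.0.2.0/24", "A", "64512 64513"))  # legitimate
print(classify("192.0.2.0/24", "A", "64512 64513"))  # AADup
print(classify("192.0.2.0/24", "W"))                 # legitimate
print(classify("192.0.2.0/24", "W"))                 # WWDup
```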

  21. BGP Updates [Chart: BGP update volume]

  22. 30 Second Frequency Components [Frequency-domain plot of BGP update arrivals, 1997]

  23. Origins of Pathological Updates (INFOCOM '99) • Majority stem from two router software implementation issues: • stateless BGP withdraws • non-transitive attribute filtering • Frequency components due to non-jittered router timers • lack of precise specification • Other sources of pathologies: • BGP/IBGP misconfiguration • CSU/DSU oscillation • the underlying distance-vector algorithm

  24. After Initial Publication of Results • One popular vendor validated our conjectures and released updated software in 1997 • Software rapidly deployed by ISPs • Stateful BGP reduced updates by orders of magnitude • Addition of random intervals to timers diminished frequency components
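The timer fix is easy to state in code: jitter each interval by a random factor so that routers desynchronize. RFC 4271 later codified this style of jitter as multiplying the base interval by a value drawn uniformly from [0.75, 1.0]; a sketch:

```python
# Sketch of jittering a periodic protocol timer. Without jitter, every router
# fires at exactly `base` seconds, producing the synchronized 30/60 s spikes
# seen in the measurements.
import random

def jittered_interval(base=30.0):
    # Draw the next firing interval uniformly from [0.75 * base, base].
    return base * random.uniform(0.75, 1.0)

print([round(jittered_interval(), 1) for _ in range(5)])
```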

  25. BGP Announcements and Withdraws [Time series: update volume falling after the NANOG presentation, the early "ISP geeks" software release, and the mainline release]

  26. Frequency Components [Frequency-domain plots, 1997 vs. 1998]

  27. BGP Failures: Congestion Collapse (BGP Frequency)

  28. A Short Story The SIGCOMM '97 findings were puzzling: bandwidth utilization correlated with instability. Hypothesis: • Congestion causes the underlying TCP connection to back off • BGP-level timers expire, causing session termination

  29. Border Gateway Protocol (BGP) • Interdomain protocol between Autonomous Systems • Routing peers exchange reachability information incrementally • BGP uses TCP as the transport protocol between peer routers [Diagram: BGP peering between two providers, e.g. Sprint and MCI]

  30. BGP Congestion Collapse Hypothesis • Congestion causes the underlying TCP connection to back off • BGP-level timers expire, causing session termination • Interaction between BGP and TCP leads to router congestion collapse • High bandwidth utilization → BGP instability • Validated using the Windmill tool (SIGCOMM '98)
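A toy model of the hypothesis, using typical default timer values (90 s hold time against 30 s keepalives) rather than values from the study: KEEPALIVEs share the session's TCP connection, so congestion-induced backoff delays them, and a gap longer than the hold time tears the session down.

```python
# Toy simulation of BGP hold-timer expiry under TCP backoff.
HOLD_TIME = 90.0   # seconds; typical BGP hold timer
KEEPALIVE = 30.0   # nominal keepalive interval

def session_survives(extra_delays):
    """extra_delays[i] = queuing/backoff delay added to the i-th keepalive."""
    last_heard = 0.0
    for i, delay in enumerate(extra_delays, start=1):
        arrival = i * KEEPALIVE + delay
        if arrival - last_heard > HOLD_TIME:
            return False    # hold timer expired: session torn down
        last_heard = arrival
    return True

print(session_survives([0, 1, 2]))        # lightly loaded link: stays up
print(session_survives([0, 5, 70, 80]))   # congested link: session dies
```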

  31. What about Failures? • Some state changes due to policy changes & network failures • Cannot distinguish between policy, intra-domain and inter-domain failures • Methodology: • Measure long-term rate of failure for Internet backbone routes • Case study of regional provider

  32. Internet Infrastructure Failures (FTCS '99) • The Internet is significantly less reliable and available than the PSTN telephone network. • After a network becomes unreachable, in most cases it takes longer than 5 minutes before it is reachable again. • Even for transient oscillations, convergence of backbone routing state may be on the order of minutes! • Route failover (re-routing of traffic to a given network) occurs on average once every three days or more. • A small fraction of network paths contribute disproportionately to the number of long-term outages

  33. Definitions • Route Failure: Prefix destination unavailable for 30 or more minutes • Route Repair: A failed route becomes available • Route Failover: A route replaced with one associated with a different path
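These definitions translate directly into a classifier over one prefix's update stream; the sketch below is a simplification of the paper's methodology, using the 30-minute threshold from the definition above.

```python
# Classify events for one prefix. Updates are (time_s, kind, as_path) with
# kind "A" (announce) or "W" (withdraw).
FAILURE_THRESHOLD = 30 * 60   # 30 minutes, per the route-failure definition

def classify_events(updates):
    events, down_since, current_path = [], None, None
    for t, kind, path in updates:
        if kind == "W":
            down_since, current_path = t, None
        elif down_since is not None:   # announce while the route was down
            label = "failure repaired" if t - down_since >= FAILURE_THRESHOLD else "repair"
            events.append((t, label))
            down_since, current_path = None, path
        else:
            if current_path is not None and path != current_path:
                events.append((t, "failover"))   # route replaced by a different path
            current_path = path
    return events

print(classify_events([(0, "A", "1 2"), (100, "A", "1 3"),      # failover
                       (200, "W", None), (2200, "A", "1 2")]))  # failure repaired
```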

  34. Route Failures: How long before a network is unreachable?

  35. Route Repairs: How long before a network is reachable again?

  36. Failover: How long before traffic is re-routed?

  37. Conventional Wisdom on Convergence • Internet is highly redundant • Just reroute around in a few milliseconds • Routing protocol convergence takes only a few ???? • “Bad news travels fast” • Fast withdraw propagation valid goal • Announcements slower because bundled • BGP has great convergence properties • Path vector solved the convergence and counting to infinity (looping) problems • All my customers are multi-homed, triple-homed • Convergence -- what, me worry? Not True!

  38. 18-Month Study of Convergence Behavior • Instrument the Internet • Inject routes into geographically and topologically diverse provider BGP peering sessions (Japan, Michigan, US Exchange Points, Canada, UK) • Periodically fail and change these routes (i.e. send withdraws or new attributes) • Time events using ICMP ping and NTP synchronized BGP “routeviews” monitoring machines • Wait 18 months… (50,000 routing events)
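A toy version of that injection loop: alternately withdraw and re-announce a probe prefix, logging each event with a synchronized clock for later correlation with the update logs. The announce/withdraw functions are stubs standing in for the actual BGP speaker used in the study.

```python
import time, random

def announce_route(prefix): print("ANNOUNCE", prefix)   # stub for the BGP speaker
def withdraw_route(prefix): print("WITHDRAW", prefix)   # stub for the BGP speaker

def inject_faults(prefix, period=7200, cycles=4):
    """Alternately fail and repair `prefix`, logging event timestamps."""
    up = True
    for _ in range(cycles):
        time.sleep(period + random.uniform(0, 60))      # jitter event spacing
        (withdraw_route if up else announce_route)(prefix)
        print(time.time(), prefix, "W" if up else "A")  # event log record
        up = not up

# inject_faults("198.51.100.0/24")   # would run for ~8 hours at these settings
```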

  39. Passive & Active Measurement Infrastructure [Diagram: fault-injection servers at BGP stub ASes announce and withdraw routes through upstream ISPs (ISP1-ISP6) into the Internet; a RouteViews data-collection machine peers with the ISPs via BGP, and a probe host sends ICMP echoes along the measured paths]

  40. Terminology • Tdown: A previously available route is withdrawn. This is a route failure. • Tup: A previously unavailable route is announced as available. This is a route repair. • Tshort: A route is replaced with another route having a shorter path. This is a route failover. • Tlong: A route is replaced by another route with a longer path. This is also a route failover.
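In code, the four categories reduce to comparing the route's state before and after an injected event; treating an equal-length replacement as Tlong is an assumption of this sketch.

```python
def label_event(before_path, after_path):
    """Label one injected event given the AS path before and after it."""
    if before_path is not None and after_path is None:
        return "Tdown"    # available route withdrawn: failure
    if before_path is None and after_path is not None:
        return "Tup"      # unavailable route announced: repair
    if len(after_path) < len(before_path):
        return "Tshort"   # failover to a shorter path
    return "Tlong"        # failover to a longer path

print(label_event([1, 2, 3], None))      # Tdown
print(label_event(None, [1, 2]))         # Tup
print(label_event([1, 2, 3], [1, 2]))    # Tshort
```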

  41. Avg. number of messages generated by each ISP following a routing update event • Tdown and Tlong generated more messages than Tup and Tshort • Significant variation among ISPs within each category of message

  42. Withdraw Convergence (Tdown) After a BGP route is withdrawn, barring other failures, how long does it take Internet routing tables to reach steady-state?
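One plausible way to extract this from RouteViews-style logs: take a peer's last update for the probe prefix before a sustained quiet period. The quiet threshold below is an assumption, not the study's parameter.

```python
QUIET = 120.0   # seconds of silence taken to mean steady state (assumption)

def convergence_delay(t0, update_times):
    """t0: injection time; update_times: sorted timestamps of the peer's
    updates for the probe prefix. Returns the convergence latency."""
    last = t0
    for t in update_times:
        if t - last > QUIET:
            break            # table was quiet long enough: converged at `last`
        last = t
    return last - t0

print(convergence_delay(0.0, [2.1, 31.5, 62.0, 93.4, 600.0]))  # -> 93.4
```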

  43. Withdraw Convergence [Chart: convergence delay after a Tdown event]

  44. Withdraw Convergence • Different providers exhibit different behavior • 70% of withdraws from most ISPs take more than a minute to converge • For one ISP in Canada, 20% of withdraws took more than three minutes to converge • Observed latencies of up to 10 minutes for certain events • No correlation between convergence latency and geographic or topological location (except for MichNet)

  45. Failovers and Repairs What are the relative convergence latencies for failovers and repairs? Does bad news (withdraws) travel faster?

  46. Failures, Failovers and Repairs Bad News Does Not Travel Fast!

  47. Failures, Failovers and Repairs • Bad news does not travel fast… • Repairs (Tup) exhibit convergence properties similar to long → short path failovers (Tshort) • Failures (Tdown) and short → long failovers (Tlong) are also similar to each other • Slower than Tup (e.g. a repair) • 60% take longer than two minutes • Failover times degrade with greater degrees of multi-homing!

  48. End2End Connectivity What is the impact of delayed convergence on end-to-end connectivity? After a failover, how long before my site is reachable? • Modified ICMP pings sent once a second • Source IP addresses drawn from the pseudo-AS's address block • 100 randomly chosen web sites from cache logs
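A sketch of the loss statistic shown on the next slide, computed from (timestamp, replied) ping results; the bin edges and window size are assumptions.

```python
def loss_by_minute(event_time, pings, window_min=10):
    """Per-minute loss fractions in a window around one routing event."""
    bins = {}   # minute offset from event -> (sent, lost)
    for t, replied in pings:
        offset = int((t - event_time) // 60)
        if -window_min <= offset < window_min:
            sent, lost = bins.get(offset, (0, 0))
            bins[offset] = (sent + 1, lost + (not replied))
    return {m: lost / sent for m, (sent, lost) in sorted(bins.items())}

# Synthetic run: one ping per second, with an outage from t=600 s to t=720 s.
pings = [(t, t < 600 or t > 720) for t in range(0, 1200)]
print(loss_by_minute(600.0, pings))
```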

  49. Impact of Convergence Delay on End-to-End Path Avg. packet loss to 100 web sites (1-minute bins in the ten minutes preceding and following a routing update)

  50. What is Happening? • Non-deterministic ordering of BGP update messages leads to transient oscillations • Each change in the FIB adds delay (CPU, BGP bundling timer) • At the extreme, convergence triggers BGP route-flap dampening
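Route-flap dampening (RFC 2439) is what gets triggered at the extreme: each flap adds a penalty that decays exponentially with a configured half-life, and a route is suppressed while its penalty exceeds a threshold. The constants below are common defaults, not values from the study.

```python
import math

PENALTY_PER_FLAP = 1000.0   # penalty added per flap
SUPPRESS = 2000.0           # suppress routes whose penalty exceeds this
HALF_LIFE = 900.0           # seconds for the penalty to halve

def penalty_after(flap_times, now):
    """Total decayed penalty at time `now` given the times of past flaps."""
    decay = lambda age: math.exp(-age * math.log(2) / HALF_LIFE)
    return sum(PENALTY_PER_FLAP * decay(now - t) for t in flap_times)

p = penalty_after([0, 30, 60], now=60)   # three flaps in quick succession
print(round(p), "suppressed" if p > SUPPRESS else "ok")
```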
