
Demystifying and Controlling the Performance of Data Center Networks






Presentation Transcript


  1. Demystifying and Controlling the Performance of Data Center Networks

  2. Why are Data Centers Important? • Internal users • Line-of-Business apps • Production test beds • External users • Web portals • Web services • Multimedia applications • Chat/IM

  3. Why are Data Centers Important? • Poor performance → loss of revenue • Understanding traffic is crucial • Traffic engineering is crucial

  4. Road Map • Understanding Data center traffic • Improving network level performance • Ongoing work

  5. Canonical Data Center Architecture • Core (L3) → Aggregation (L2) → Edge (L2, Top-of-Rack) → Application servers

  6. Dataset: Data Centers Studied • 10 data centers across 3 classes: universities, private enterprise, clouds • Univ/priv: internal users, small, local to campus • Clouds: external users, large, globally diverse

  7. Dataset: Collection • SNMP • Poll SNMP MIBs • Bytes-in/bytes-out/discards • > 10 Days • Averaged over 5 mins • Packet Traces • Cisco port span • 12 hours • Topology • Cisco Discovery Protocol
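
A minimal sketch of the post-processing such SNMP counters imply (hypothetical names, not the authors' tooling): byte counters are cumulative, so utilization comes from differencing successive polls.

```python
POLL_INTERVAL_S = 300  # the dataset averages over 5-minute SNMP polls

def link_utilization(prev_octets, curr_octets, link_speed_bps):
    """Fraction of link capacity used during one poll interval.
    Counter wrap-around handling is omitted for brevity."""
    delta_bits = (curr_octets - prev_octets) * 8
    return delta_bits / (POLL_INTERVAL_S * link_speed_bps)

# Example: a 1 Gbps link that moved 15 GB in 5 minutes is 40% utilized.
print(link_utilization(0, 15 * 10**9, 10**9))  # -> 0.4
```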

  8. Canonical Data Center Architecture • Core (L3) → Aggregation (L2) → [Packet Sniffers] → Edge (L2, Top-of-Rack) → Application servers

  9. Analyzing Packet Traces • Transmission patterns of the applications • Properties of packets are crucial for understanding the effectiveness of techniques • ON-OFF traffic at the edges • Binned at 15 and 100 ms • We observe that the ON-OFF pattern persists → routing must react quickly to overcome bursts
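
To make the binning concrete, here is a minimal sketch (assuming the input is an array of packet-arrival timestamps in seconds; this is not the authors' analysis code) that bins a trace at 15 ms and extracts the ON/OFF run lengths:

```python
import numpy as np

def on_off_runs(arrivals_s, bin_width_s=0.015):
    """Bin packet arrivals and return the lengths (in bins) of consecutive
    ON runs (bins containing packets) and OFF runs (empty bins)."""
    bins = (np.asarray(arrivals_s) / bin_width_s).astype(int)
    active = np.bincount(bins) > 0                # True where a bin saw traffic
    change = np.flatnonzero(active[1:] != active[:-1]) + 1
    starts = np.concatenate(([0], change))        # first bin of each run
    runs = np.diff(np.concatenate((starts, [active.size])))
    return runs[active[starts]], runs[~active[starts]]  # ON runs, OFF runs
```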

  10. Data-Center Traffic is Bursty • Understanding the arrival process • Range of acceptable models • What is the arrival process? • Heavy-tailed for all 3 distributions: ON times, OFF times, inter-arrival times • Lognormal across all data centers • Different from the Pareto distribution seen in WANs → Need new models to generate data-center traffic
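
A sketch of how the distribution claim could be checked with SciPy (the array of measured ON periods is an assumed input, e.g. from the binning sketch above):

```python
import numpy as np
from scipy import stats

on_times_s = np.loadtxt("on_times.txt")  # hypothetical file of ON periods

# Fit both candidate distributions with the location pinned at zero, then
# compare goodness of fit via Kolmogorov-Smirnov (smaller statistic = better).
logn = stats.lognorm.fit(on_times_s, floc=0)
par = stats.pareto.fit(on_times_s, floc=0)

print("lognormal:", stats.kstest(on_times_s, "lognorm", args=logn))
print("pareto:   ", stats.kstest(on_times_s, "pareto", args=par))
```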

  11. Canonical Data Center Architecture • Core (L3) → Aggregation (L2) → Edge (L2, Top-of-Rack) → Application servers

  12. Intra-Rack Versus Extra-Rack • Quantify the amount of traffic using the interconnect • Perspective for interconnect analysis • Extra-Rack = Sum of Uplinks • Intra-Rack = Sum of Server Links − Extra-Rack
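
The two definitions on this slide translate directly into code; a tiny sketch with made-up byte counters for a single ToR switch:

```python
def split_rack_traffic(uplink_bytes, server_link_bytes):
    """Apply the slide's definitions: extra-rack traffic is what leaves via
    the uplinks; intra-rack is everything the servers sent minus that."""
    extra_rack = sum(uplink_bytes)
    intra_rack = sum(server_link_bytes) - extra_rack
    return intra_rack, extra_rack

intra, extra = split_rack_traffic([40, 35], [100, 90, 60])  # made-up counters
print(intra / (intra + extra))  # fraction of traffic staying inside the rack
```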

  13. Intra-Rack Versus Extra-Rack Results • Clouds: most traffic stays within a rack (75%) • Colocation of apps and dependent components • Other DCs: > 50% leaves the rack • Un-optimized placement

  14. Extra-Rack Traffic on DC Interconnect • Utilization: core > agg > edge • Aggregation of traffic from many links onto few • Tail of core utilization differs • Hot-spots → links with > 70% utilization • Prevalence of hot-spots differs across data centers

  15. Persistence of Core Hot-Spots • Low persistence: PRV2, EDU1, EDU2, EDU3, CLD1, CLD3 • High persistence/low prevalence: PRV1, CLD2 • 2-8% of links are hot-spots more than 50% of the time • High persistence/high prevalence: CLD4, CLD5 • 15% of links are hot-spots more than 50% of the time

  16. Prevalence of Core Hot-Spots • Low persistence: very few concurrent hot-spots • High persistence: few concurrent hot-spots • High prevalence: < 25% of links are hot-spots at any time • [Figure: fraction of hot-spot links over a 50-hour trace] → Smart routing can better utilize the core and avoid hot-spots
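
Persistence and prevalence as used on these slides reduce to row and column means over a boolean hot-spot matrix; a minimal sketch (the utilization matrix is an assumed input):

```python
import numpy as np

HOTSPOT_THRESHOLD = 0.70  # slide definition: hot-spot = link > 70% utilized

def hotspot_stats(util):
    """util: links x time-bins array of core-link utilizations.
    Persistence: per link, the fraction of time it is a hot-spot.
    Prevalence: per time bin, the fraction of links that are hot-spots."""
    hot = np.asarray(util) > HOTSPOT_THRESHOLD
    return hot.mean(axis=1), hot.mean(axis=0)  # persistence, prevalence
```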

  17. Insights Gained • 75% of traffic stays within a rack (Clouds) • Applications are not uniformly placed • Traffic is bursty at the edge • At most 25% of core links highly utilized • Effective routing algorithm to reduce utilization • Load balance across paths and migrate VMs

  18. Road Map • Understanding Data center traffic • Improving network level performance • Ongoing work

  19. Options for TE in Data Centers? • Currently supported techniques • Equal-Cost MultiPath (ECMP) • Spanning Tree Protocol (STP) • Proposed • Fat-Tree, VL2 • Other existing WAN techniques • COPE, …, OSPF link tuning

  20. How do we evaluate TE? • Simulator • Input: Traffic matrix, topology, traffic engineering • Output: Link utilization • Optimal TE • Route traffic using knowledge of future TM • Data center traces • Cloud data center (CLD) • Map-reduce app • ~1500 servers • University data center (UNV) • 3-Tier Web apps • ~500 servers
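
A skeleton of the evaluation loop this slide describes (the `topology` and `te_scheme.route` interfaces are assumptions, not the authors' simulator):

```python
def simulate(traffic_matrices, topology, te_scheme):
    """For each traffic matrix in the trace, ask the TE scheme how every
    (src, dst) demand is split across links; report per-link utilization."""
    for tm in traffic_matrices:
        load = {link: 0.0 for link in topology.links}
        for (src, dst), demand in tm.items():
            # route() yields (link, fraction-of-demand) pairs -- assumed API.
            for link, fraction in te_scheme.route(src, dst, topology):
                load[link] += demand * fraction
        yield {link: load[link] / topology.capacity[link] for link in load}
```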

  21. Drawbacks of Existing TE • STP does not use multiple paths • 40% worse than optimal • ECMP does not adapt to burstiness • 15% worse than optimal

  22. Design Goals for Ideal TE

  23. Design Requirements for TE • Calculate paths & reconfigure the network • Use all network paths • Use a global view • Avoid local optima • Must react quickly • React to burstiness • How predictable is traffic? …

  24. Is Data Center Traffic Predictable? • YES! 27% or more of the traffic matrix is predictable • Manage predictable traffic more intelligently • [Figure: predictable fraction per data center, ranging from 27% to 99%]

  25. How Long is Traffic Predictable? • Different patterns of predictability • 1 second of historical data is able to predict the near future • [Figure: prediction horizons of 1.5-5.0 s and 1.6-2.5 s]

  26. MicroTE

  27. MicroTE: Architecture • Monitoring component, routing component, and network controller • Global view: created by the network controller • React to predictable traffic: routing component tracks demand history • All network paths: routing component creates routes using all paths

  28. Architectural Questions • Efficiently gather network state? • Determine predictable traffic? • Generate and calculate new routes? • Install network state?

  29. Architectural Questions • Efficiently gather network state? • Determine predictable traffic? • Generate and calculate new routes? • Install network state?

  30. Monitoring Component • Efficiently gather TM • Only one server per ToR monitors traffic • Transfer changed portion of TM • Compress data • Tracking predictability • Calculate EWMA over TM (every second) • Empirically derived alpha of 0.2 • Use time-bins of 0.1 seconds
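
The tracking step reduces to one EWMA update per traffic-matrix entry; a sketch using the parameters from this slide (the traffic-matrix dictionaries are assumed data structures):

```python
ALPHA = 0.2  # empirically derived smoothing weight, per the slide
BIN_S = 0.1  # time-bin width in seconds, per the slide

def update_ewma(smoothed_tm, sampled_tm):
    """One EWMA step over the ToR-to-ToR traffic matrix. An entry whose
    smoothed demand stays close to its samples is treated as predictable."""
    for pair, demand in sampled_tm.items():
        previous = smoothed_tm.get(pair, demand)
        smoothed_tm[pair] = ALPHA * demand + (1 - ALPHA) * previous
    return smoothed_tm
```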

  31. Routing Component • On each new global view: determine predictable ToRs → calculate network routes for predictable traffic → set ECMP for unpredictable traffic → install routes

  32. Routing Predictable Traffic • LP formulation • Constraints • Flow conservation • Capacity constraint • Use K equal-length paths • Objective • Minimize link utilization • Bin-packing heuristic (sketch below) • Sort flows in decreasing order • Place each on the link with the greatest remaining capacity
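
A sketch of the bin-packing heuristic as this slide states it (the flow and path data structures are assumptions): sort flows by decreasing demand, then give each one the candidate path with the most remaining bottleneck capacity.

```python
def pack_flows(demands, candidate_paths, residual):
    """demands: {flow: rate}; candidate_paths: {flow: [path, ...]} where a
    path is a list of links; residual: {link: remaining capacity}."""
    placement = {}
    for flow, rate in sorted(demands.items(), key=lambda kv: -kv[1]):
        # Among the K equal-length candidates, pick the path whose
        # bottleneck link has the greatest capacity left, then commit.
        best = max(candidate_paths[flow],
                   key=lambda path: min(residual[link] for link in path))
        for link in best:
            residual[link] -= rate
        placement[flow] = best
    return placement
```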

  33. Implementation • Changes to data center • Switch • Install OpenFlow firmware • End hosts • Add kernel module • New component • Network controller • C++ NOX modules

  34. Evaluation

  35. Evaluation: Motivating Questions • How does MicroTE compare to optimal? • How does MicroTE perform under varying levels of predictability? • How does MicroTE scale to large DCNs? • What overhead does MicroTE impose?

  36. Evaluation: Motivating Questions • How does MicroTE compare to optimal? • How does MicroTE perform under varying levels of predictability? • How does MicroTE scale to large DCNs? • What overhead does MicroTE impose?

  37. How do we evaluate TE? • Simulator • Input: Traffic matrix, topology, traffic engineering • Output: Link utilization • Optimal TE • Route traffic using knowledge of future TM • Data center traces • Cloud data center (CLD) • Map-reduce app • ~1500 servers • University data center (UNV) • 3-Tier Web apps • ~500 servers

  38. Performing Under Realistic Traffic • Significantly outperforms ECMP • Slightly worse than optimal (1%-5%) • Bin-packing and LP of comparable performance

  39. Performance Versus Predictability • Low predictability → performance is similar to ECMP

  40. Performance Versus Predictability • Low predictability → performance is similar to ECMP • High predictability → performance is comparable to Optimal • MicroTE adjusts according to predictability

  41. Conclusion • Study existing TE • Found them lacking (15-40%) • Study data center traffic • Discovered traffic predictability (27% for 2 secs) • Developed guidelines for ideal TE • Designed and implemented MicroTE • Brings state of the art within 1-5% of Ideal • Efficiently scales to large DC (16K servers)

  42. Road Map • Understanding Data center traffic • Improving network level performance • Ongoing work

  43. Looking forward • Stop treating the network as a carrier of bits • Bits in the network have a meaning • Applications know this meaning • Can applications control networks? • E.g., Map-Reduce: scheduler performs network-aware task placement and flow placement
