
VL2: A Scalable and Flexible Data Center Network


Presentation Transcript


  1. Networking the Cloud Presenter: b97901184 姜慧如 (Electrical Engineering, Year 3) VL2: A Scalable and Flexible Data Center Network

  2. “Shared” Data Center for Cloud • Data centers holding tens to hundreds of thousands of servers. • Concurrently supporting a large number of distinct services. • Economies of scale. • Dynamically reallocate servers among services as workload patterns change. • High utilization is needed → Agility!

  3. Agility is Important in Data Centers • Agility: any machine can play any role. • “Any service, any server.”

  4. But… utilization tends to be low • Ugly secret: 30% utilization is considered good. • Uneven application fit: -- Each server has CPU, memory, and disk; most applications exhaust one resource, stranding the others. • Long provisioning timescales: -- New servers are purchased quarterly at best. • Uncertainty in demand: -- Demand for a new service can spike quickly. • Risk management: -- Not having spare servers to meet demand brings failure just when success is at hand. • Session state and storage constraints: -- If the world were stateless servers….

  5. How to achieve agility? • Workload management -- Means for rapidly installing a service’s code on a server. → Virtual machines, disk images. • Storage management -- Means for a server to access persistent data. → Distributed filesystems. • Network -- Means for communicating with other servers, regardless of where they are in the data center. But: • Today’s data center networks prevent agility in several ways.

  6. Conventional DCN Problems

  7. Conventional DCN Problems • Static network assignment • Fragmentation of resources • Poor server-to-server connectivity • Traffic flows affect each other • Poor reliability and utilization [Figure: conventional tree of core routers (CR), aggregation routers (AR), and switches (S) down to servers (A), with oversubscription ratios of roughly 1:5 at the rack, 1:80 at aggregation, and 1:240 at the core; “I want more” / “I have spare ones, but…” illustrates stranded capacity.]

  8. Conventional DCN Problems (cont.) • Scale is achieved by assigning servers topologically related IP addresses and dividing servers among VLANs. • → Limited utility of VMs: they cannot migrate out of their original VLAN while keeping the same IP address. • → Fragmentation of the address space. • → Reconfiguration is needed when servers are reassigned to different services.

  9. Virtual Layer 2 Switch The illusion of one huge L2 switch offering: 1. L2 semantics 2. Uniform high capacity 3. Performance isolation [Figure: the entire data center (CR/AR/S layers and all servers) abstracted as a single virtual layer-2 switch with every server plugged in.]

  10. Network Objectives Developers want a mental model where all their servers, and only their servers, are plugged into an Ethernet switch. 1. Layer-2 semantics -- Flat addressing, so any server can have any IP address. -- Server configuration is the same as in a LAN. -- A VM keeps the same IP address even after migration. 2. Uniform high capacity -- Capacity between servers is limited only by their NICs. -- No need to consider topology when adding servers. 3. Performance isolation -- Traffic of one service should be unaffected by others.

  11. VL2 Goals and Solutions
  Objective | Approach | Solution
  1. Layer-2 semantics | Employ flat addressing | Name-location separation & resolution service
  2. Uniform high capacity between servers | Guarantee bandwidth for hose-model traffic | Flow-based random traffic indirection (Valiant LB)
  3. Performance isolation | Enforce hose model using existing mechanisms only | TCP
  (“Hose”: each node has ingress/egress bandwidth constraints.)
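The hose model mentioned in this table constrains each node's total ingress and egress demand rather than individual pairwise demands. A minimal sketch of that admissibility check, with illustrative rates and names that are not from VL2's code:

```python
# Minimal sketch (not VL2's implementation): a traffic matrix fits the hose
# model if every server's total egress and total ingress demand stay within
# its NIC rate.

def fits_hose_model(traffic_matrix, nic_rate_gbps):
    """traffic_matrix[i][j] = demand from server i to server j, in Gb/s."""
    n = len(traffic_matrix)
    for i in range(n):
        egress = sum(traffic_matrix[i][j] for j in range(n) if j != i)
        ingress = sum(traffic_matrix[j][i] for j in range(n) if j != i)
        if egress > nic_rate_gbps or ingress > nic_rate_gbps:
            return False
    return True

# Example: three servers with 1 Gb/s NICs.
tm = [[0.0, 0.4, 0.5],
      [0.3, 0.0, 0.2],
      [0.1, 0.6, 0.0]]
print(fits_hose_model(tm, nic_rate_gbps=1.0))  # True
```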

  12. Reminder: Layer 2 vs. Layer 3 • Ethernet switching (layer 2) • Cheaper switch equipment • Fixed addresses and auto-configuration • Seamless mobility, migration, and failover • IP routing (layer 3) • Scalability through hierarchical addressing • Efficiency through shortest-path routing • Multipath routing through equal-cost multipath • So, like in enterprises… • Data centers often connect layer-2 islands by IP routers

  13. Measurements and Implications of DCN • Data-center traffic analysis: • DC traffic != Internet traffic • The ratio of server-to-server traffic to traffic entering/leaving the data center is 4:1 • Demand for bandwidth between servers is growing faster • The network is the bottleneck of computation • Flow distribution analysis: • The majority of flows are small; the biggest flows are about 100 MB • The distribution of internal flows is simpler and more uniform • About 50% of the time a server has roughly 10 concurrent flows; at least 5% of the time it has more than 80
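As a rough illustration of how such concurrent-flow statistics could be derived from flow logs (the record format and values here are invented, not the paper's dataset):

```python
# Hypothetical sketch: given flow records (server, start, end), count how many
# flows are active at each server at chosen sample instants.
from collections import defaultdict

def concurrent_flow_counts(flows, sample_times):
    """flows: list of dicts with 'server', 'start', 'end' (seconds)."""
    by_server = defaultdict(list)
    for f in flows:
        by_server[f['server']].append((f['start'], f['end']))
    counts = []
    for t in sample_times:
        for server, intervals in by_server.items():
            counts.append(sum(1 for s, e in intervals if s <= t < e))
    return counts

flows = [
    {'server': 'A', 'start': 0, 'end': 5},
    {'server': 'A', 'start': 1, 'end': 3},
    {'server': 'B', 'start': 2, 'end': 4},
]
print(concurrent_flow_counts(flows, sample_times=[2]))  # [2, 1]
```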

  14. Measurements and Implications of DCN • Traffic matrix analysis: • Traffic patterns are poorly summarized by a few representative matrices • Traffic patterns are unstable over time • Failure characteristics: • Pattern of networking equipment failures: 95% < 1 min, 98% < 1 hr, 99.6% < 1 day, 0.09% > 10 days • No obvious way to eliminate all failures from the top of the hierarchy

  15. VL2’s Methods • Flat addressing: allow service instances (e.g., virtual machines) to be placed anywhere in the network. • Valiant Load Balancing: (randomly) spread network traffic uniformly across network paths. • End-system-based address resolution: scale to large server pools without introducing complexity into the network control plane.

  16. Virtual Layer Two Networking (VL2) • Design principles: • Randomizing to cope with volatility: • Use Valiant Load Balancing (VLB) for destination-independent traffic spreading across multiple intermediate nodes • Building on proven networking technology: • Use IP routing and forwarding technologies available in commodity switches • Separating names from locators: • Use a directory system to maintain the mapping between names and locations • Embracing end systems: • A VL2 agent runs at each server

  17. VL2 Topology: Clos Network

  18. Clos Network Topology Offers huge aggregate capacity and multiple paths at modest cost. [Figure: VL2 Clos topology with intermediate (Int), aggregation (Aggr), and ToR layers; K aggregation switches with D ports each, 20 servers per ToR, supporting 20·(DK/4) servers in total.]
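A small back-of-the-envelope helper for the capacity formula on this slide; the port counts in the example are arbitrary, chosen only to show the arithmetic:

```python
# Sketch of the slide's capacity formula: with K aggregation switches of D
# ports each, there are DK/4 ToR switches, and 20 servers per ToR give
# 20 * (D * K / 4) servers in total.

def clos_server_count(d_ports, k_aggr_switches, servers_per_tor=20):
    tor_switches = d_ports * k_aggr_switches // 4
    return servers_per_tor * tor_switches

# Example (illustrative numbers): D = 144-port aggregation switches, K = 72.
print(clos_server_count(d_ports=144, k_aggr_switches=72))  # 51840 servers
```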

  19. Use Randomization to cope with Volatility

  20. Valiant Load Balancing: Indirection Cope with arbitrary traffic matrices with very little overhead. • [ECMP + IP Anycast] • Harness huge bisection bandwidth • Obviate esoteric traffic engineering or optimization • Ensure robustness to failures • Work with switch mechanisms available today Requirements: 1. Must spread traffic 2. Must ensure destination independence [Figure: traffic from ToRs T1 to T6 is bounced off a randomly chosen intermediate switch (anycast address IANY) on the up path, then forwarded down to the destination ToR via equal-cost multipath (ECMP) forwarding.]
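A rough sketch of the per-flow randomization idea, with hypothetical switch names rather than VL2's actual data plane: hashing a flow's 5-tuple to pick one intermediate switch keeps all packets of a flow on one path while spreading flows across intermediates independently of their destinations.

```python
# Sketch: per-flow choice of an intermediate switch, mimicking ECMP-style
# hashing. The switch names are made up for illustration.
import hashlib

INTERMEDIATE_SWITCHES = ['int1', 'int2', 'int3']  # hypothetical names

def pick_intermediate(flow_five_tuple):
    """Deterministic per-flow choice: same flow, same intermediate."""
    digest = hashlib.md5(repr(flow_five_tuple).encode()).digest()
    return INTERMEDIATE_SWITCHES[digest[0] % len(INTERMEDIATE_SWITCHES)]

flow = ('10.0.0.1', '10.0.1.7', 6, 51234, 80)  # src, dst, proto, sport, dport
print(pick_intermediate(flow))  # the same flow always maps to the same switch
```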

  21. Random Traffic Spreading over Multiple Paths [Figure: the same indirection in action: flows from servers x, y, and z take different randomly chosen up paths through IANY intermediates and then come down to destination ToRs T3 and T5.]

  22. Separating names from locations • How smart servers use dumb switches: encapsulation. • Commodity switches have simple forwarding primitives. • Complexity is moved to the servers, which compute the headers (sketched below).
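A hypothetical sketch of that header computation, with made-up field and variable names: the agent looks up the locator of the destination's ToR and wraps the original application-address (AA) packet in outer headers carrying locator addresses (LAs).

```python
# Illustrative only: the VL2 agent encapsulates an AA packet with the anycast
# LA of an intermediate switch and the LA of the destination ToR.

def encapsulate(packet, aa_to_tor_la, intermediate_anycast_la):
    """packet: dict with 'src_aa', 'dst_aa', 'payload' (AAs are flat names)."""
    dst_tor_la = aa_to_tor_la[packet['dst_aa']]   # result of a directory lookup
    return {
        'outer_dst': intermediate_anycast_la,     # bounce off a random intermediate
        'inner_dst': dst_tor_la,                  # then down to the destination ToR
        'original': packet,                       # AA header and payload, untouched
    }

mapping = {'20.0.0.55': '10.1.2.1'}               # AA of destination -> LA of its ToR
pkt = {'src_aa': '20.0.0.11', 'dst_aa': '20.0.0.55', 'payload': b'...'}
print(encapsulate(pkt, mapping, intermediate_anycast_la='10.0.0.200'))
```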

  23. VL2 Directory System [Figure: agents query a pool of directory servers (DS), which are backed by a small cluster of replicated state machine (RSM) servers.] Lookup path: 1. Lookup → 2. Reply (served by any directory server). Update path: 1. Update → 2. Set → 3. Replicate across the RSM servers → 4. Ack → 5. Ack to the agent → (6. Disseminate to the directory servers).
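A toy sketch of how the two paths on this slide differ; the class names are invented and this is not the actual implementation: lookups are served from a directory server's cached copy, while updates are acknowledged only after the replicated state machine has accepted them and are then disseminated back to the caches.

```python
# Toy model of the lookup/update split in the VL2 directory system.

class ReplicatedStateMachine:
    def __init__(self):
        self.mapping = {}           # AA -> LA, the authoritative copy
    def set(self, aa, la):
        self.mapping[aa] = la       # real system: replicate to RSM peers, then ack
        return 'ack'

class DirectoryServer:
    def __init__(self, rsm):
        self.rsm = rsm
        self.cache = {}             # read-optimized copy that serves lookups
    def lookup(self, aa):
        return self.cache.get(aa)   # 1. Lookup -> 2. Reply
    def update(self, aa, la):
        ack = self.rsm.set(aa, la)  # 2. Set -> 3. Replicate -> 4./5. Ack
        self.cache[aa] = la         # 6. Disseminate (lazily, in the real system)
        return ack

rsm = ReplicatedStateMachine()
ds = DirectoryServer(rsm)
ds.update('20.0.0.55', '10.1.2.1')
print(ds.lookup('20.0.0.55'))       # '10.1.2.1'
```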

  24. VL2 Addressing and Routing Switches run link-state routing and maintain only the switch-level topology, addressed with location addresses (LAs). Servers use flat names, the application addresses (AAs). The directory service maintains the AA-to-ToR mapping (e.g. x → ToR2, y → ToR3, z → ToR4), which agents obtain via Lookup & Response and use to encapsulate packets toward the destination ToR. [Figure: a packet from x to y is encapsulated with ToR3's LA, forwarded there, decapsulated, and delivered to y.]

  25. Embracing End Systems • Data center OSes are already heavily modified for VMs, storage clouds, etc. • No changes to apps or to clients outside the DC.

  26. Evaluation • Uniform high capacity: • All-to-all data shuffle stress test: • 75 servers, each delivering 500 MB to every other server • Maximal achievable goodput is 62.3 Gbps • VL2 achieves an aggregate goodput of 58.8 Gbps, a network efficiency of 58.8/62.3 = 94%
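A one-line sanity check of the efficiency figure, assuming the goodput numbers on the slide are aggregate values in Gb/s:

```python
# Back-of-the-envelope check of the quoted efficiency.
measured_goodput_gbps = 58.8   # aggregate goodput reported on the slide
max_achievable_gbps = 62.3     # maximum achievable goodput for the shuffle
print(f'efficiency = {measured_goodput_gbps / max_achievable_gbps:.0%}')  # 94%
```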

  27. Evaluation • Performance isolation: • Two services share the network: • Service one: 18 servers performing a continuous single TCP transfer the whole time • Service two (first experiment): 19 servers, each starting an 8 GB TCP transfer every 2 seconds • Service two (second experiment): 19 servers generating bursts of short TCP connections

  28. Evaluation • Convergence after link failures • 75 servers • All-to-all data shuffle • Disconnect links between intermediate and aggregation switches
