
DCell: A Scalable and Fault-Tolerant Network Structure for Data Centers


Presentation Transcript


  1. DCell: A Scalable and Fault-Tolerant Network Structure for Data Centers Chuanxiong Guo, Haitao Wu, Kun Tan, Lei Shi, Yongguang Zhang, Songwu Lu. Microsoft Research Asia, Tsinghua University, UCLA

  2. Background • Data Center • Data Center Networking (DCN): the networking infrastructure inside a data center, which connects a large number of servers via high-speed links and switches.

  3. DCN Motivation • Ever-increasing scale • Google had 450,000 servers in 2006 • Microsoft doubles its number of servers every 14 months • The expansion rate exceeds Moore's Law • Network capacity: bandwidth-hungry, data-centric applications • Data shuffling in MapReduce/Dryad • Data replication/re-replication in distributed file systems • Index building in search • Fault tolerance: as data centers scale, failures become the norm • Cost: scaling up with high-end switches/routers is costly

  4. DCN Structure • The current DCN practice is to connect all the servers using a tree hierarchy of switches, core switches, or core routers, which cannot meet the design requirements above. • This work proposes a novel network structure called DCell.

  5. DCell Structure • DCell is a recursively defined structure in which a high-level DCell is constructed from many low-level DCells, and DCells at the same level are fully connected with one another. • Scalable: the number of servers scales doubly exponentially with the server node degree • Fault-tolerant: no single point of failure; tolerates various failures • High capacity: traffic is distributed evenly, with no severe bottleneck

  6. DCell Physical Structure • DCell_k (k >= 0) denotes a level-k DCell • DCell_0 is the building block used to construct larger DCells. It has n servers and a mini-switch, and all n servers are connected to the mini-switch. • A level-1 DCell_1 is constructed from n + 1 DCell_0s. In a DCell_1, each DCell_0 is connected to every other DCell_0 with one link. • Assign each server a 2-tuple [a1, a0], where a1 and a0 are the level-1 and level-0 IDs, respectively. Then the two servers [i, j-1] and [j, i] are connected with a link for every i and every j > i.
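
To make the level-1 wiring rule concrete, here is a minimal sketch (my illustration, not the authors' code) that enumerates the level-1 links of a DCell_1:

```python
# Enumerate the level-1 links of a DCell_1 built from n+1 DCell_0s, each with
# n servers addressed as [level1_id, level0_id]; server [i, j-1] links to [j, i].
def dcell1_links(n):
    links = []
    for i in range(n):                      # i indexes the "lower" DCell_0
        for j in range(i + 1, n + 1):       # j > i indexes the "higher" DCell_0
            links.append(([i, j - 1], [j, i]))
    return links

print(dcell1_links(2))   # [([0, 0], [1, 0]), ([0, 1], [2, 0]), ([1, 1], [2, 1])]
```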

  7. DCell: the Construction [Figure: a DCell_1 with n = 2, built from n + 1 = 3 DCell_0s; each DCell_0 (k = 0) has n servers connected to a mini-switch.]

  8. DCell Physical Structure (Cont.) • Level-2 and higher DCell_k are built recursively, in the same way as the DCell_1 construction above. • If we have built DCell_{k-1} and each DCell_{k-1} has t_{k-1} servers, then we can create at most t_{k-1} + 1 DCell_{k-1}s. • Again we treat each DCell_{k-1} as a virtual node and fully connect these virtual nodes to form a complete graph. • g_k: the number of DCell_{k-1}s in a DCell_k; t_k: the number of servers in a DCell_k. g_k = t_{k-1} + 1, t_k = g_k * t_{k-1}
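
A quick way to see the doubly exponential growth is to iterate the two recurrences, starting from t_0 = n; the sketch below is plain arithmetic (my illustration):

```python
# Iterate g_k = t_{k-1} + 1 and t_k = g_k * t_{k-1}, starting from t_0 = n.
def dcell_sizes(n, levels):
    t, sizes = n, [n]
    for _ in range(levels):
        t = (t + 1) * t
        sizes.append(t)
    return sizes

print(dcell_sizes(4, 3))   # [4, 20, 420, 176820]: doubly exponential in k
```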

  9. Build a DCell network • A server in a DCell_k is assigned a (k+1)-tuple [a_k, a_{k-1}, ..., a_1, a_0], where a_i < g_i (0 < i <= k) indicates which DCell_{i-1} the server is located in, and a_0 < n is the index of the server within its DCell_0. • We further denote [a_k, a_{k-1}, ..., a_{i+1}] (i > 0) as the prefix that identifies the DCell_i the server belongs to.

  10. Build a DCell network (Cont.) The BuildDCells procedure: • end the recursion by building DCell_0 directly • recursively build the sub-DCell_{k-1}s • connect the sub-DCell_{k-1}s to form a complete graph (a recursive sketch follows below).
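
A minimal recursive sketch of this construction (my simplification of the BuildDCells idea, not the paper's pseudocode), which returns the server addresses and collects the level-1..k links:

```python
# Build a DCell_k under `prefix`; servers are (k+1)-tuples, links are appended
# to `links`. Sub-DCells are fully connected by the rule: server j-1 of
# sub-DCell i links to server i of sub-DCell j, for every j > i.
def build_dcell(prefix, n, k, links):
    if k == 0:
        return [prefix + [i] for i in range(n)]        # n servers on one mini-switch
    first = build_dcell(prefix + [0], n, k - 1, links)
    g = len(first) + 1                                  # g_k = t_{k-1} + 1 sub-DCells
    subs = [first] + [build_dcell(prefix + [i], n, k - 1, links) for i in range(1, g)]
    for i in range(g - 1):
        for j in range(i + 1, g):
            links.append((subs[i][j - 1], subs[j][i]))
    return [s for sub in subs for s in sub]

links = []
servers = build_dcell([], 2, 1, links)
print(len(servers), len(links))    # 6 servers, 3 level-1 links, as in the figure
```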

  11. Routing in a DCell • Cannot use a global link-state routing scheme • Cannot use hierarchical OSPF • Use the DCell Fault-tolerant Routing protocol (DFR) • First, routing without failures • Second, the broadcast scheme • Finally, DFR itself

  12. Routing without Failure • DCellRouting: DCell uses a simple and efficient single-path routing algorithm for unicast that exploits the recursive structure of DCell. • To route from src to dst in a DCell_k: 1. calculate the intermediate link (n1, n2) that interconnects the two DCell_{k-1}s containing src and dst; 2. the problem then divides into finding the two sub-paths, from src to n1 and from n2 to dst.

  13. Routing without Failure (Cont.) GetLink: let s_{k-m} and d_{k-m} (s_{k-m} < d_{k-m}) be the indices of the two sub-DCells. Based on BuildDCells, the link that interconnects these two sub-DCells is ([s_{k-m}, d_{k-m} - 1], [d_{k-m}, s_{k-m}]).
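
Restricted to a single DCell_1 for brevity, a minimal sketch (my reconstruction, not the paper's pseudocode) of GetLink and the resulting DCellRouting path looks like this:

```python
# Level-1 link joining DCell_0 s1 and DCell_0 d1 (s1 < d1): ([s1, d1-1], [d1, s1]).
def get_link(s1, d1):
    return ([s1, d1 - 1], [d1, s1])

# DCellRouting within one DCell_1: cross the single link joining the two
# DCell_0s, and stay inside each DCell_0 otherwise (via its mini-switch).
def dcell1_route(src, dst):
    if src[0] == dst[0]:                   # same DCell_0
        return [src, dst] if src != dst else [src]
    lo, hi = sorted((src[0], dst[0]))
    n1, n2 = get_link(lo, hi)
    if src[0] != n1[0]:                    # orient the link from src's side to dst's side
        n1, n2 = n2, n1
    return [src] + ([n1] if n1 != src else []) + ([n2] if n2 != dst else []) + [dst]

# Example (n = 2): the route from [0, 1] to [1, 1] crosses link ([0, 0], [1, 0]):
print(dcell1_route([0, 1], [1, 1]))    # [[0, 1], [0, 0], [1, 0], [1, 1]]
```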

  14. [Figure: DCellRouting example; the path from src to dst crosses the intermediate link (n1, n2) returned by GetLink.]

  15. Routing without Failure (Cont.)

  16. Broadcast • A spanning tree? Not fault-tolerant! • DCellBroadcast: when broadcasting a packet in a DCell_k, the sender delivers the packet to all of its k+1 neighbors. Upon receiving a broadcast packet, a receiver first checks whether the packet has been received before; it drops a duplicate but forwards a new packet on its other k links. • DCellBroadcast is fault-tolerant in that a broadcast packet reaches all receivers as long as the network is connected.
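
The flood-with-duplicate-suppression behavior can be sketched over an abstract neighbor map (my illustration, not the paper's implementation):

```python
import collections

# Flood a packet from `origin`: each server forwards a new packet on all of its
# other links and silently drops packets it has already seen.
def dcell_broadcast(neighbors, origin):
    """neighbors: dict mapping each server to the servers on its k+1 links."""
    seen = {origin}
    queue = collections.deque((origin, nbr) for nbr in neighbors[origin])
    while queue:
        sender, receiver = queue.popleft()
        if receiver in seen:
            continue                                   # duplicate: drop
        seen.add(receiver)                             # deliver the packet here
        queue.extend((receiver, nbr) for nbr in neighbors[receiver] if nbr != sender)
    return seen    # covers every server as long as the graph of live links is connected
```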

  17. Fault-tolerant Routing • DFR uses DCellRouting and DCellBroadcast as building blocks. • DFR handles three types of failures: server failure, rack failure, and link failure. • Solutions: • local-reroute -> link failure (to bypass failed links) • local link-state -> server failure (local-reroute alone can loop on a failed server) • jump-up -> rack failure (to bypass a whole failed rack)

  18. DFR: DCell Fault-tolerant Routing [Figure: a DFR example path from src to dst across several DCell_b's; servers in the same DCell_b share local link-state, and proxies are selected at level L or L+1 (jump-up) to bypass failures.]

  19. Local-reroute and Proxy • Routing from src to dst (in the same DCell_k). • First compute a path from src to dst using DCellRouting. • Now assume an intermediate link (n1, n2) on that path has failed. • Local-reroute (to bypass the failed link): • 1. n1 calculates the level of (n1, n2), denoted l. n1 and n2 are then known to be in the same DCell_l but in two different DCell_{l-1}s. • 2. n1 can always choose another DCell_{l-1} (e.g., the one nearest to n1 but different from the one n2 is in). There must exist a link, denoted (p1, p2), that connects this DCell_{l-1} to the one where n1 resides. • 3. n1 then chooses p2 as its proxy and re-routes the packet to p2; p2 simply uses DCellRouting to route the packet to dst.

  20. Local-reroute and Proxy (Cont.) • Problem! With pure local-reroute, if a node on the path to dst fails, we may never be able to reroute the packet to dst. • Local-reroute alone cannot completely address node failures, because it is based purely on the DCell topology and does not use any link or node state. • Example: suppose the DCellRouting path from src to dst contains the sub-path {(q1, q2), (q2, q3)}, where the level of (q1, q2) is 1 and the level of (q2, q3) is 3. Now q1 finds that (q1, q2) is down (while actually q2 has failed). Then, no matter how we re-route inside this DCell_2, the packet is routed back to the failed node q2!

  21. Local Link-state • In each DCell_b, every node uses DCellBroadcast to broadcast the status of all of its (k+1) links, periodically and when it detects a link failure. A node therefore knows the status of all outgoing/incoming links within its DCell_b. • Intra-DCell_b routing: link-state routing (Dijkstra's algorithm) • Inter-DCell_b routing: DCellRouting and local-reroute
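
The intra-DCell_b step is standard shortest-path routing over the locally known link-state; a minimal sketch (my illustration, assuming dst is reachable in the local graph):

```python
import heapq

def dijkstra(adj, src, dst):
    """adj: dict node -> list of (neighbor, cost); returns the shortest path src..dst."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                                   # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path = [dst]                                       # reconstruct (dst assumed reachable)
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```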

  22. Jump-up for Rack Failure • In the DFR example figure (slide 18): upon receiving the re-routed packet (implying that (n1, n2) has failed), p2 checks whether (q1, q2) has also failed. If (q1, q2) has failed as well, it is a good indication that the whole DCell i2 has failed. p2 then chooses a proxy from a DCell at a higher level (i.e., it jumps up). With jump-up, the failed DCell i2 can therefore be bypassed. • To remove an undeliverable packet: 1. a retry count is added to the packet header; 2. each packet carries a time-to-live (TTL) field.

  23. DFR (DCell Fault-tolerant Routing) • Combines DCellRouting, local-reroute, and local link-state. • 1. perform DCellRouting • 2. take the first link on the path (the highest-level link) • 3. if that link has failed, perform local-reroute (and then apply DCellRouting recursively) • 4. if it has not failed, route toward it using local link-state (Dijkstra routing). A simplified sketch follows below.
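
A minimal sketch of these steps (my simplification, restricted to one DCell_1, reusing get_link and dcell1_route from the slide-13 sketch above; not the paper's DFR pseudocode):

```python
# Try the DCellRouting path first; if its level-1 link is down, locally reroute
# via a proxy in a third DCell_0. In the full protocol the proxy would repeat
# the same check for the remaining hops.
def dfr_dcell1(src, dst, n, failed):
    """failed: set of failed level-1 links, each a frozenset of two server tuples."""
    down = lambda a, b: frozenset((tuple(a), tuple(b))) in failed
    if src[0] == dst[0]:
        return dcell1_route(src, dst)                  # stays inside one DCell_0
    n1, n2 = get_link(*sorted((src[0], dst[0])))       # steps 1-2: highest-level link
    if not down(n1, n2):
        return dcell1_route(src, dst)                  # step 4: link alive, use DCellRouting
    for proxy_cell in range(n + 1):                    # step 3: link failed, local-reroute
        if proxy_cell in (src[0], dst[0]):
            continue
        p1, p2 = get_link(*sorted((src[0], proxy_cell)))
        if down(p1, p2):
            continue
        if p1[0] != src[0]:
            p1, p2 = p2, p1                            # p2 becomes the proxy server
        return dcell1_route(src, p1) + [p2] + dcell1_route(p2, dst)[1:]
    return None                                        # src's DCell_0 is cut off

failed = {frozenset({(0, 0), (1, 0)})}
print(dfr_dcell1([0, 1], [1, 1], 2, failed))   # [[0, 1], [2, 0], [2, 1], [1, 1]]
```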

  24. DFR (Cont.)

  25. Incremental Expansion • Requirements: 1. re-wiring is not allowed; 2. addresses of existing machines should not change. • Bottom-up (fill DCell_0, then DCell_1, then DCell_2, ...): not fault-tolerant! • Top-down: when constructing a DCell_k, start by building many incomplete DCell_{k-1}s and make them fully connected.

  26. Simulation

  27. Simulation (Cont.)

  28. Implementation • The DCN protocol suite serves as the network layer for DCell-based data centers, similar to the role of IP in the Internet.

  29. Implementation (Cont.) • Layer-2.5 DCN prototyping • Applications only see TCP/IP • Routing is handled in the DCN layer • More than 13,000 lines of C code.

  30. Experimental Environment • A DCell_1 with over 20 server nodes. • This DCell_1 is composed of 5 DCell_0s, each of which has 4 servers (Figure 1). • Each server is a DELL 755DT desktop with an Intel 2.33 GHz dual-core CPU, 2 GB DRAM, and a 160 GB hard disk, plus an Intel PRO/1000 PT Quad Port Ethernet adapter. • The Ethernet switches used to form the DCell_0s are D-Link DGS-1008D 8-port Gigabit switches (each costing about $50).

  31. Experimental Result

  32. Thank You!
