
Server-Storage Virtualization: Integration and Load Balancing in Data Centers


Presentation Transcript


  1. Server-Storage Virtualization: Integration and Load Balancing in Data Centers Aameek Singh, Madhukar Korupolu (IBM Almaden), Dushmanta Mohapatra (Georgia Tech)

  2. Overview • Motivation • Virtualization is common in datacenters • Both compute and storage • New degrees of freedom for load balancing • Integrating compute & storage mgmt is important • Multiple resource dimensions complicate solution • Hierarchical data flows must be considered

  3. Harmony • A system for virtual server and storage monitoring and control • Monitoring & migration are off-the-shelf • Employs VectorDot, a heuristic algorithm for balancing load in systems with multidimensional and hierarchical constraints • Inspired by the Toyoda method for the multidimensional knapsack problem

  4. Harmony overview • Architecture figure: Harmony's Configuration and Performance Manager, Trigger Detection, and Optimization Planning (VectorDot) components produce migration recommendations, which a Virtualization Orchestrator carries out through the Server Virtualization Mgmt and Storage Virtualization Mgmt layers that manage the servers and storage

  5. Cluster testbed

  6. Load balancing input • Record system state as per-node utilizations, capacities, and thresholds • Any node whose utilization exceeds its threshold along any dimension is called a trigger
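A minimal Python sketch of this trigger test (names are illustrative, not from the paper): a node becomes a trigger as soon as any resource dimension's usage fraction exceeds its threshold.

```python
# Hypothetical helper, assuming per-node load fractions and thresholds are
# stored as equal-length lists (one entry per resource dimension).
def is_trigger(load_frac, thresholds):
    """True if any dimension's utilization fraction exceeds its threshold."""
    return any(f > t for f, t in zip(load_frac, thresholds))

# Example: a server at <cpu, mem, net, io> = <0.7, 0.3, 0.2, 0.1> with
# thresholds <0.6, 0.8, 0.8, 0.8> is a trigger because of its CPU dimension.
print(is_trigger([0.7, 0.3, 0.2, 0.1], [0.6, 0.8, 0.8, 0.8]))  # True
```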

  7. Multidimensionality and hierarchy constraints • Virtual items (vitems) and nodes have multidimensional resources • E.g., a VM requires 100 MHz CPU, 50 MB RAM, 0.5 Mbps network, 0.2 Mbps of storage IO • A server offers 2 GHz of CPU, 512 MB RAM, 2 Mbps network, 2 Mbps storage IO • VMs also use switch resources, determined by their paths to the root switch • Path vectors encode the path from a node to the root • What if a flow doesn’t go all the way to the root?

  8. Node load and virtual item fraction vectors • Usage fraction & threshold for each resource • For a server: • <cpuU/cpuCap, memU/memCap, netU/netCap, ioU/ioCap>, <cpuT, memT, netT, ioT> • For a storage node: • <spaceU/spaceCap, ioU/ioCap>, <spaceT, ioT> • For a switch: • <ioU/ioCap>, <ioT> • Requirement vectors for VMs and vdisks • VM: <cpuU, memU, netU, ioU> • Vdisk: <spaceU, ioU>
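A short Python sketch of these vectors, assuming a simple dataclass holds the per-dimension usage fractions and thresholds; the class, helpers, and the vdisk space value are illustrative, not the paper's API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NodeVectors:
    load_frac: List[float]   # usage / capacity, one entry per resource dimension
    thresholds: List[float]  # configured threshold per dimension

def server_vectors(cpu_u, cpu_cap, mem_u, mem_cap, net_u, net_cap,
                   io_u, io_cap, cpu_t, mem_t, net_t, io_t):
    # <cpuU/cpuCap, memU/memCap, netU/netCap, ioU/ioCap>, <cpuT, memT, netT, ioT>
    return NodeVectors(
        [cpu_u / cpu_cap, mem_u / mem_cap, net_u / net_cap, io_u / io_cap],
        [cpu_t, mem_t, net_t, io_t])

def storage_vectors(space_u, space_cap, io_u, io_cap, space_t, io_t):
    # <spaceU/spaceCap, ioU/ioCap>, <spaceT, ioT>
    return NodeVectors([space_u / space_cap, io_u / io_cap], [space_t, io_t])

def switch_vectors(io_u, io_cap, io_t):
    # <ioU/ioCap>, <ioT>
    return NodeVectors([io_u / io_cap], [io_t])

# Requirement vectors for virtual items are plain absolute usages:
vm_req = [100, 50, 0.5, 0.2]   # MHz CPU, MB RAM, Mbps network, Mbps storage IO
vdisk_req = [10, 0.2]          # GB space (illustrative value), Mbps IO
```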

  9. Imbalance scores • The imbalance score penalizes nodes for being above threshold • IBscore(f, T) = 0 if f < T, e^((f − T)/T) otherwise • Exponential weighting penalizes nodes that are further over threshold • E.g., it distinguishes between node loads (3T, T) and (2T, 2T) • Total imbalance sums the scores over all dimensions and all nodes
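A hedged sketch of this score, assuming the exponent is the relative overshoot (f − T)/T as written above; function names are illustrative and the node objects reuse the earlier NodeVectors sketch.

```python
import math

def ib_score(f, t):
    """Per-dimension penalty: 0 below the threshold, exponential above it."""
    return 0.0 if f < t else math.exp((f - t) / t)

def total_imbalance(nodes):
    """Sum the per-dimension scores over all nodes.
    Each node exposes load_frac and thresholds lists (see the earlier sketch)."""
    return sum(ib_score(f, t)
               for n in nodes
               for f, t in zip(n.load_frac, n.thresholds))

# Why exponential weighting matters: with a common threshold T, two nodes at
# loads (3T, T) score e^2 + e^0 ~ 8.4, while (2T, 2T) scores 2e ~ 5.4, so one
# badly overloaded node is penalized more than two mildly overloaded ones.
```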

  10. Path vectors • FlowPath(u) for a node u is the path from u to the storage virtualizer
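A minimal sketch of how a path vector could be assembled, assuming each node keeps a parent pointer toward the storage virtualizer and a per-node load-fraction list; flow_path, parent, and load_frac are illustrative names, not the paper's API.

```python
def flow_path(node, parent):
    """Walk parent pointers from a node up to the storage virtualizer (the root)."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def path_load_frac(node, parent, load_frac):
    """Concatenate the load-fraction vectors of every node on the flow path,
    so placing an item on a server also charges the switches along its path."""
    return [f for n in flow_path(node, parent) for f in load_frac[n]]

# Example topology: server s1 -> edge switch e1 -> storage virtualizer svc.
parent = {"s1": "e1", "e1": "svc"}
load_frac = {"s1": [0.4, 0.2, 0.4, 0.2], "e1": [0.3], "svc": [0.1]}
print(path_load_frac("s1", parent, load_frac))  # [0.4, 0.2, 0.4, 0.2, 0.3, 0.1]
```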

  11. VectorDot • Scores the mapping of a virtual item to a node • Start with the simple dot product of PathLoadFracVec(u) (Au) and ItemPathLoadFracVec(vi, u) (Bu(vi)) • Example: • Au = <0.4, 0.2, 0.4, 0.2, 0.2> • Aw = <0.2, 0.4, 0.2, 0.4, 0.2> • Bu(vi) = Bw(vi) = <0.2, 0.05, 0.2, 0.05, 0.2> • Au · Bu(vi) = 0.22 > Aw · Bw(vi) = 0.16, so assign vi to w; a lower dot product means the item's heavy dimensions fall on the node's lighter dimensions
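The basic score can be reproduced with a plain dot product; the sketch below (illustrative names) picks the candidate with the smallest product, matching the corrected example above.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pick_node(node_path_load_frac, item_path_frac):
    """Return the candidate node with the lowest dot-product score."""
    return min(node_path_load_frac,
               key=lambda n: dot(node_path_load_frac[n], item_path_frac[n]))

A = {"u": [0.4, 0.2, 0.4, 0.2, 0.2],
     "w": [0.2, 0.4, 0.2, 0.4, 0.2]}
B = {"u": [0.2, 0.05, 0.2, 0.05, 0.2],
     "w": [0.2, 0.05, 0.2, 0.05, 0.2]}
# Au . Bu(vi) = 0.22, Aw . Bw(vi) = 0.16 -> vi is assigned to w.
print(pick_node(A, B))  # 'w'
```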

  12. Extended vector product (EVP) • Extensions account for thresholds and imbalance scores, and avoid oscillations • First: smooth PathLoadFracVec(u) with respect to PathThresholdVec(u) • Similar to the exponential penalization in the imbalance score • E.g., a component at utilization 0.6 with a threshold of 0.4 gets a higher value than the same 0.6 with a threshold of 0.8 • Second: avoid oscillations by also scoring the post-move load vectors
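A hedged sketch of the threshold-aware smoothing idea: each load component is re-weighted by an exponential in its distance from the threshold before the dot product is taken. The particular smoothing function below is my own stand-in chosen to match the 0.6-vs-threshold example, not the paper's exact formula, and all names are assumptions.

```python
import math

def smooth(frac, threshold):
    """Emphasize a component more the closer it is to (or over) its threshold.
    (Illustrative smoothing function, not the paper's.)"""
    return frac * math.exp(frac / threshold)

def evp(path_load_frac, path_thresholds, item_path_frac):
    """Extended vector product: dot product of the smoothed node path vector
    with the item's path vector (lower is still better)."""
    smoothed = [smooth(f, t) for f, t in zip(path_load_frac, path_thresholds)]
    return sum(s * b for s, b in zip(smoothed, item_path_frac))

# A component at utilization 0.6 counts for more under a 0.4 threshold than
# under a 0.8 threshold:
print(smooth(0.6, 0.4) > smooth(0.6, 0.8))  # True
```

The oscillation check (re-scoring with the post-move load vectors of source and destination before committing a move) is omitted from this sketch.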

  13. Using EVP • Identify trigger nodes: those whose load fraction exceeds the threshold along any dimension • Search trigger nodes in descending IBScore order • Four selection criteria for choosing a destination, traversing nodes in static order (i.e., by name) for the first three: • FirstFit • BestFit • WorstFit • RelaxedBestFit • Visits nodes in random order until N feasible nodes are found, then chooses the one with minimum EVP
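An illustrative Python sketch of the four strategies, with `feasible` and `evp_score` standing in for the feasibility check (capacity and threshold constraints along the path) and the extended vector product; all names and the default sample size are assumptions.

```python
import random

def first_fit(item, nodes, feasible, evp_score):
    """First feasible destination in static (name) order."""
    for n in sorted(nodes):
        if feasible(item, n):
            return n
    return None

def best_fit(item, nodes, feasible, evp_score):
    """Feasible destination with minimum EVP."""
    cands = [n for n in sorted(nodes) if feasible(item, n)]
    return min(cands, key=lambda n: evp_score(item, n), default=None)

def worst_fit(item, nodes, feasible, evp_score):
    """Feasible destination with maximum EVP (used as a comparison point)."""
    cands = [n for n in sorted(nodes) if feasible(item, n)]
    return max(cands, key=lambda n: evp_score(item, n), default=None)

def relaxed_best_fit(item, nodes, feasible, evp_score, sample_size=8):
    """Visit nodes in random order until `sample_size` feasible ones are found,
    then pick the one among them with minimum EVP."""
    order = list(nodes)
    random.shuffle(order)
    cands = []
    for n in order:
        if feasible(item, n):
            cands.append(n)
            if len(cands) == sample_size:
                break
    return min(cands, key=lambda n: evp_score(item, n), default=None)
```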

  14. Migration overheads • Simple experiment: live migration of a VM running the PostMark benchmark, along with its vdisk • Migration incurs some overhead

  15. Evaluation: Simulation • Built a simulator to generate topologies and system and node configurations • Simple ratios between numbers of components • E.g., 500 VMs with 1 vdisk per VM, mapped onto 100 physical hosts, 33 storage nodes, 10 edge switches, and 4 core switches • No details on what these ratios are beyond the example • Load capacities and resource requirements drawn from normal distributions • No details on the parameters, other than the defaults for α and β being 0.55, although they claim to vary them… • Generate VMs, vdisks, servers, switches, and storage nodes; do an initial mapping; then balance with VectorDot

  16. Results: Imbalance • BestFit and RelaxedBestFit achieve low imbalance scores

  17. Results: Moves from initial state • BestFit and RelaxedBestFit require fewest moves to reach balance • At no point does the # of triggers or imbalance score increase

  18. Results: Convergence • ??

  19. Results: Running time • Basic allocation takes at most 35 seconds • Better initial placement = faster load balancing • Figure compares time for initial placement alone vs. initial placement plus load balancing

  20. Evaluation: Real data center • Figure 1 • 3 servers, 3 switches, 3 storage nodes • 6 vms, 6 storage volumes • Disabled caching? • Workload generators – lookbusy, IOMeter

  21. Results: Single server overload • Figure 11b

  22. Results: Multi-server overload

  23. Results: Server+storage overload

  24. Results: Switch overload

  25. Summary • Virtual server and virtual storage load balancing, handled together • Harmony: a system for monitoring, planning, and executing server & storage load balancing • They just use off-the-shelf software… • VectorDot: heuristics for multidimensional and hierarchical load balancing • Does this generalize back to other problems? • Evaluation with simulated & “real” datacenters • The “real” evaluation seems too small (3 servers, 3 switches, 3 storage nodes)
