
Dynamic Resource Allocation using Virtual Machine for Cloud Computing Environment



  1. ChinaSys2012 Talk Dynamic Resource Allocation using Virtual Machine for Cloud Computing Environment Zhen Xiao, Weijia Song, and Qi Chen Dept. of Computer Science Peking University http://zhenxiao.com/

  2. Introduction • Background: cloud computing allows business customers to scale their resource usage up and down based on need. It uses virtualization technology to multiplex hardware resources among a large group of users. • Problem: how can a cloud service provider best multiplex virtual resources onto physical resources?

  3. Amazon EC2-Style Service [diagram: users run virtual machines that a virtualization layer (the IaaS service) multiplexes onto physical servers]

  4. Amazon EC2-Style Service [diagram: four VMs with demand vectors VM1 (cpu 0.1, net 0.1, mem 0.61), VM2 (cpu 0.1, net 0.1, mem 0.3), VM3 (cpu 0.6, net 0.6, mem 0.05), and VM4 (cpu 0.3, net 0.3, mem 0.2) are mapped onto physical machines PM1-PM3]

  5. Goals and objectives • Our goals: • overload avoidance: the capacity of a PM should be sufficient to satisfy the resource needs of all VMs running on it. Otherwise, the PM is overloaded and can lead to degraded performance of its VMs. • green computing: the number of PMs used should be minimized as long as they can still satisfy the needs of all VMs. Idle PMs can be put to sleep to save energy.

  6. Overview of the rest of the talk • System overview • Details of the algorithm • Simulation results • Experiment results • Conclusion

  7. System Overview [architecture diagram: a VM Scheduler containing a Predictor plugin, a Hotspot Solver, and a Coldspot Solver emits migration lists to the Usher CTRL; each physical node runs the Xen Hypervisor with Dom 0 and Dom U, an Usher LNM, an MM Adjustor, and a WS Prober]

  8. System overview • How do we collect the resource usage statistics of each VM? • CPU and network usage can be calculated by monitoring the scheduling events in Xen (XenStat). • Memory usage is estimated by a working set prober (WS Prober) on each hypervisor, which estimates the working set sizes of the VMs. (We use the random page sampling technique as in the VMware ESX Server; a toy sketch follows.)
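A toy illustration of the random page sampling idea, for intuition only; the real prober reads page access information inside the hypervisor, and `page_accessed` below is a hypothetical stand-in for that mechanism:

```python
import random

def estimate_working_set(total_pages, sample_size, page_accessed):
    """Estimate a VM's working set by random page sampling: pick a
    small random subset of guest pages, count how many were touched
    during the sampling interval, and scale that fraction up to the
    whole address space. `page_accessed(page)` stands in for the
    hypervisor mechanism (e.g. clearing and re-reading page access
    bits) that reports whether a page was used in the interval.
    Returns the estimated working set size in pages."""
    sample = random.sample(range(total_pages), sample_size)
    touched = sum(1 for page in sample if page_accessed(page))
    return total_pages * touched / sample_size
```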

  9. System overview • How do we predict the future resource demands of VMs and the future load of PMs? • VMware ESX Server uses an EWMA (exponentially weighted moving average). • Our system uses FUSD (Fast Up and Slow Down), which raises its estimate quickly when the observed load goes up but lowers it only slowly when the load drops (sketched below).
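A minimal sketch of the two predictors, assuming the standard EWMA form E(t) = a*E(t-1) + (1-a)*O(t); the parameter values and the exact way FUSD chooses its smoothing factors are illustrative, not the system's actual settings:

```python
def ewma(prev, observed, alpha=0.7):
    """Exponentially weighted moving average: E(t) = a*E(t-1) + (1-a)*O(t)."""
    return alpha * prev + (1 - alpha) * observed

def fusd(prev, observed, alpha_up=0.2, alpha_down=0.7):
    """Fast Up and Slow Down: use a small smoothing factor when the
    observed load rises (the estimate jumps up quickly) and a large
    one when it falls (the estimate decays slowly), which keeps
    overload prediction on the conservative side."""
    alpha = alpha_up if observed > prev else alpha_down
    return alpha * prev + (1 - alpha) * observed

# A load spike is tracked almost immediately, but the estimate
# lingers after the spike subsides:
estimate = 0.3
for load in [0.3, 0.9, 0.9, 0.3, 0.3]:
    estimate = fusd(estimate, load)
    print(round(estimate, 3))  # 0.3, 0.78, 0.876, 0.703, 0.582
```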

  10. Overview of the rest of the talk • System overview • Details of the algorithm • Simulation results • Experiments • Conclusion

  11. Our Algorithm • Definitions (1) • We define a server as a hot spot if the utilization of any of its resources is above a hot threshold. This indicates that the server is overloaded, and hence some VMs running on it should be migrated away. • We define a server as a cold spot if the utilizations of all its resources are below a cold threshold. This indicates that the server is mostly idle and a potential candidate to turn off or put to sleep to save energy.

  12. Parameters of the algorithm • Hot threshold • Cold threshold • Green computing threshold • Warm threshold • Consolidation limit

  13. Our Algorithm • Definitions (2) • We use skewness to quantify the unevenness in the utilization of multiple resources on a server. • Let n be the number of resources we consider and r_i be the utilization of the i-th resource. We define the resource skewness of a server p as skewness(p) = \sqrt{\sum_{i=1}^{n} \left( \frac{r_i}{\bar{r}} - 1 \right)^2}, where \bar{r} is the average utilization of all resources for server p.
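The definition translates directly into code; a minimal sketch:

```python
from math import sqrt

def skewness(utilizations):
    """Resource skewness of a server: sqrt(sum_i (r_i / r_bar - 1)^2),
    where r_i is the utilization of the i-th resource and r_bar is
    the average utilization across all resources. Lower skewness
    means the resources are used more evenly."""
    r_bar = sum(utilizations) / len(utilizations)
    return sqrt(sum((r / r_bar - 1) ** 2 for r in utilizations))

# A CPU/network-heavy but memory-light server is more skewed than a
# balanced one:
print(skewness([0.6, 0.6, 0.05]))  # ~1.08 (uneven)
print(skewness([0.4, 0.4, 0.4]))   # 0.0   (perfectly even)
```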

  14. Our Algorithm • Definitions (3) • We define the temperature of a hot spot p as the square sum of its resource utilizations beyond the hot threshold. • Let R be the set of overloaded resources in server p and r_t be the hot threshold for resource r. The temperature is defined as temperature(p) = \sum_{r \in R} (r - r_t)^2.
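Again a direct translation of the definition; resources below their threshold contribute nothing:

```python
def temperature(utilizations, hot_thresholds):
    """Temperature of a hot spot: the square sum of the amounts by
    which the overloaded resources exceed their hot thresholds."""
    return sum((r - t) ** 2
               for r, t in zip(utilizations, hot_thresholds)
               if r > t)

# With a hot threshold of 0.9 on every resource:
print(temperature([0.95, 0.5, 0.92], [0.9, 0.9, 0.9]))  # 0.05^2 + 0.02^2 = 0.0029
```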

  15. Layout of the hot spot mitigation flow [flowchart]: • Take the predicted results and initialize the hot threshold. • Generate the hot spots and sort them by their temperature. • While there are hot spots left to solve, choose a hot PM and try to solve it. • Done when none remain.

  16. Our Algorithm • For each hot spot: • Which VM should be migrated away? • Where should it be migrated to?

  17. Solving a hot spot [flowchart]: • Sort the VMs on the hot PM by the temperature the PM would have if that VM were migrated away. • For each VM in turn, find the PMs that can accept it without becoming hot spots themselves, and put them on a target list. • If the target list is empty, try the next VM; if no VM is left to try, we fail to solve this hot spot. • Otherwise, choose the PM with the maximum decrease in skewness after accepting the VM as the destination, migrate the VM, and the hot spot is solved (or at least cooled). (A code sketch follows.)
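A hedged sketch of that flow, reusing the `skewness` and `temperature` helpers above. The PM/VM interfaces (`vms`, `util`, `util_with`, `util_without`) are illustrative assumptions, not the system's real API:

```python
def is_hot(utilizations, hot_thresholds):
    """A PM is a hot spot if any resource exceeds its hot threshold."""
    return any(r > t for r, t in zip(utilizations, hot_thresholds))

def solve_hotspot(hot_pm, pms, hot_thresholds):
    """Pick one migration that relieves (or at least cools) the hot PM.
    VMs are tried in the order that most reduces the hot PM's
    temperature; the destination is the PM whose skewness drops the
    most without itself becoming a hot spot."""
    candidates = sorted(
        hot_pm.vms,
        key=lambda vm: temperature(hot_pm.util_without(vm), hot_thresholds))
    for vm in candidates:
        targets = [pm for pm in pms
                   if pm is not hot_pm
                   and not is_hot(pm.util_with(vm), hot_thresholds)]
        if not targets:
            continue  # this VM fits nowhere safely; try the next one
        dest = max(targets,
                   key=lambda pm: skewness(pm.util) - skewness(pm.util_with(vm)))
        return vm, dest  # migration decision
    return None  # failed to solve this hot spot
```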

  18. [Worked example: PM1 is a memory hot spot (mem > 0.9). VM demands: VM1 cpu 0.1, net 0.1, mem 0.61; VM2 cpu 0.1, net 0.1, mem 0.3; VM3 cpu 0.6, net 0.6, mem 0.05; VM4 cpu 0.3, net 0.3, mem 0.2. Each candidate migration is scored by the temperature decrease Dt and the skewness change Ds of the destination (e.g. Dt 0.0036 / Ds -0.337, Dt 0.0036 / Ds 0.106, Dt 0.0035 / Ds -1.175); the move with the largest skewness decrease is chosen, relieving PM1.]

  19. Green computing [flowchart]: • If green computing is not needed, we are done. • Otherwise, generate the cold spot and non-cold-spot lists, and sort the cold spots by used RAM. • While there are cold spots left and the number solved does not exceed the consolidation limit, choose a cold PM and try to solve it. • If it can be solved, append the moves to the migration list and move the cold PM to the non-cold-spot list.

  20. Solving a cold spot [flowchart]: • For each VM on the cold PM: from all non-cold spots, find the PMs whose resource utilization stays below the warm threshold after accepting the VM, and build a target list. • If that list is empty, repeat the search over the remaining cold spots. • If it is still empty, we fail to solve this cold spot. • Otherwise, choose the PM with the maximum decrease in skewness after accepting the VM as the destination; if the destination thereby becomes a non-cold spot, move it to the non-cold-spot list. • We succeed once every VM has a destination. (A code sketch follows.)
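A matching sketch of cold spot consolidation, under the same illustrative PM/VM interface as the hot spot sketch (for brevity it omits re-classifying a destination that stops being a cold spot):

```python
def solve_coldspot(cold_pm, non_coldspots, other_coldspots, warm_thresholds):
    """Try to empty the cold PM entirely. Every VM must land on a PM
    that stays below the warm threshold after accepting it; non-cold
    spots are preferred, other cold spots are the fallback. Returns
    the list of (vm, destination) moves, or None on failure."""
    decisions = []
    for vm in list(cold_pm.vms):
        for pool in (non_coldspots, other_coldspots):
            targets = [pm for pm in pool
                       if all(r <= t for r, t in
                              zip(pm.util_with(vm), warm_thresholds))]
            if targets:
                dest = max(targets,
                           key=lambda pm: skewness(pm.util)
                                          - skewness(pm.util_with(vm)))
                decisions.append((vm, dest))
                break
        else:
            return None  # some VM cannot be placed anywhere
    return decisions  # the cold PM can now be put to sleep
```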

  21. [Worked example: cold threshold = 0.25, warm threshold = 0.65. VM demands: VM1 cpu 0.1, net 0.1, mem 0.1; VM2 cpu 0.1, net 0.1, mem 0.1; VM3 cpu 0.2, net 0.2, mem 0.25; VM4 cpu 0.5, net 0.5, mem 0.5. The lightly loaded PMs among PM1-PM3 are identified as cold spots and their VMs are consolidated onto fewer servers, without pushing any destination above the warm threshold, so the emptied PMs can be put to sleep.]

  22. Analysis of the algorithm • The skewness algorithm consists of three parts (let n and m be the numbers of PMs and VMs in the system): • Load prediction: O(n + m) ~ O(n) • Hot spot mitigation: O(n^2) • Green computing: O(n^2) (if we restrict the number of cold spots solved per run, it can be brought down to O(n)) • The overall complexity of the algorithm is bounded by O(n^2).

  23. Overview of the rest of the talk • System overview • Details of the algorithm • Simulation results • Experiments • Conclusion

  24. Simulation setup • Workloads: • traces collected from a variety of servers in our university, including our faculty mail server, the central DNS server, the syslog server of our IT department, the index server of our P2P storage project, and many others • a synthetic workload created to examine the performance of our algorithm in more extreme situations: it mimics the shape of a sine function (positive part only) and ranges from 15% to 95% with a 20% random fluctuation (a generator sketch follows) [Figure: load traces of the DNS server and the mail server]
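A sketch of such a generator, assuming a multiplicative +/-20% fluctuation (the talk does not specify whether the fluctuation is additive or multiplicative, and `period` is an illustrative parameter):

```python
import math
import random

def synthetic_load(t, period=100.0, fluctuation=0.20):
    """Positive part of a sine wave scaled into [15%, 95%], with a
    random fluctuation applied on top; clamped to a valid
    utilization in [0, 1]."""
    base = max(0.0, math.sin(2 * math.pi * t / period))    # positive half of the sine
    load = 0.15 + base * (0.95 - 0.15)                     # scale into [0.15, 0.95]
    load *= 1 + random.uniform(-fluctuation, fluctuation)  # +/- 20% fluctuation
    return min(max(load, 0.0), 1.0)
```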

  25. Parameters in the simulation • Hot threshold: 0.9 • Cold threshold: 0.25 • Warm threshold: 0.65 • Green computing threshold: 0.4 • Consolidation limit: 0.05

  26. Effect of thresholds on the number of active PMs (APMs) [figures: (a) different thresholds; (b) #APM with the synthetic load]

  27. Scalability of the algorithm [figures: (a) average decision time; (b) total number of migrations; (c) number of migrations per VM]

  28. Effect of load prediction

  29. Overview of the rest of the talk • System overview • Details of the algorithm • Simulation results • Experiments • Conclusion

  30. Experiments • Environment: • 30 servers, each with two Intel Xeon E5420 CPUs and 24 GB of RAM, running Xen 3.3 and Linux 2.6.18. • The servers are connected over Gigabit Ethernet to three centralized NFS storage servers. • Workload: the TPC-W benchmark, a transactional web e-commerce benchmark.

  31. Algorithm effectiveness

  32. Application Performance

  33. Load Balancing

  34. Overview of the rest of the talk • System overview • Details of the algorithm • Simulation results • Experiments • Conclusion

  35. Conclusion • We have presented a resource management system for Amazon EC2-style cloud computing services. • We use the skewness metric to combine VMs with different resource characteristics appropriately so that the capacities of servers are well utilized. • Our algorithm achieves both overload avoidance and green computing for systems with multi-resource constraints.

  36. Thank You!
