
VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems


Presentation Transcript


  1. VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems Paper by: Ripal Nathuji & Karsten Schwan Georgia Institute of Technology ACM SOSP ‘07 Presented by Joshua Ferguson

  2. Presentation Outline • Problem Description • Technological Background • Authors’ Solution • Evaluation Results • Conclusions

  3. Virtualization • A simple reminder: virtualization adds layers of abstraction. [Figure: layers of abstraction stacked above a hypervisor, which controls access to the raw hardware] • Abstraction: “a general concept formed by extracting common features from specific examples” [1] 1) http://www.thefreedictionary.com/abstraction

  4. Virtualization continued . . . • Which servers can utilize energy saving techniques? [Figure: three non-virtualized servers at 70%, 50%, and 90% workload, compared with virtualized servers running Xen hypervisors whose VMs carry 20%, 20%, 30%, and 80% workloads]

  5. Virtualization continued . . . • Which servers can utilize energy saving techniques? Is it possible for a virtualized environment to utilize power management techniques similar to those of non-virtualized environments? • In a non-virtualized system, the OS has direct control of the hardware, so application-specific knowledge can be utilized to implement power management policies. • Hypervisors trap ACPI power management instructions. How should these instructions be coordinated and used, if used at all? [Figure: the same workloads as before, with question marks over power management in the virtualized case]

  6. Presentation Outline • Problem Description • Technological Background • Authors’ Solution • Evaluation Results • Conclusions

  7. Xen Hypervisor • DomU (unprivileged): the Guest VMs. • Dom0: the first VM image booted, serving as the administrative interface to the Xen hypervisor. Dom0 has direct access to hardware, and even performs tasks such as creating the virtual device images used by Guest VMs. • ACPI requests from DomU (e.g., “set CPU to C2”) are trapped by Xen and, most of the time, ignored. [Figure: a DomU issues “Set CPU to C2”; Xen traps and ignores the request]

  8. Xen Hypervisor continued . . . • xenpm can sample and set C- and P-states, as well as the hypervisor-level governor [2] • cpufreq (P-state) governors: • ondemand – dynamically scales P-states (decision rule sketched below) • performance – highest frequency • powersave – lowest frequency • userspace – accepts ACPI calls from Guest VMs • All of these techniques either 1) ignore application-aware knowledge from Guest VMs or 2) provide no coordination, limiting their applicability. 2) http://wiki.xensource.com/xenwiki/xenpm
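
To make the governor behavior concrete, here is a minimal sketch of the decision rule an ondemand-style governor applies; this is illustrative, not Xen's actual code, and read_cpu_utilization() and set_pstate() are hypothetical helpers:

    # Minimal sketch of an ondemand-style cpufreq governor: jump to the
    # fastest P-state under high load, step down gradually when load falls.
    # Illustrative only; not Xen's implementation.
    import time

    PSTATES_GHZ = [3.2, 2.8]      # available frequencies, fastest first (P0, P1)
    UP_THRESHOLD = 0.80           # above this utilization, jump to P0
    DOWN_THRESHOLD = 0.30         # below this, step one P-state slower

    def read_cpu_utilization():   # hypothetical helper
        return 0.5                # placeholder: fraction of busy cycles

    def set_pstate(index):        # hypothetical helper
        print(f"P{index}: {PSTATES_GHZ[index]} GHz")

    def ondemand_governor(samples=10, period_s=0.1):
        current = len(PSTATES_GHZ) - 1         # start at the slowest state
        for _ in range(samples):
            util = read_cpu_utilization()
            if util > UP_THRESHOLD:
                current = 0                    # ondemand jumps straight to max
            elif util < DOWN_THRESHOLD and current < len(PSTATES_GHZ) - 1:
                current += 1                   # back off one state at a time
            set_pstate(current)
            time.sleep(period_s)

    ondemand_governor()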

  9. Problem Description • How can we use and coordinate the workload knowledge of Guest VMs to perform more effective power management, all without sacrificing the benefits of virtualization? [Figure: several Guest VMs, each holding quality, application-specific workload characterization, with ‘???’ standing between that knowledge, the hardware, and effective power management]

  10. Presentation Outline • Problem Description • Technological Background • Authors’ Solution • Evaluation Results • Conclusions

  11. VirtualPower • The VPM Channel, VPM Policies, and VPM Mechanisms together provide coordinated power management using application-specific workload knowledge, while maintaining Guest VM independence from the underlying hardware (idea sketched below). [Figure: Guest VMs with application-specific workload characterizations issue ACPI state requests; a VPM policy in Dom0 polls this state over the VPM channel and drives VPM mechanisms on the hardware]
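
A minimal sketch of the VPM channel idea, assuming (as the paper describes) that guest ACPI requests update "soft" virtualized state that a Dom0 policy polls as hints; the class and method names here are hypothetical, not the authors' code:

    # Guest P-state requests are trapped into soft state; a Dom0 policy polls
    # that state over the VPM channel and derives one hardware decision.
    class GuestVM:
        def __init__(self, name):
            self.name = name
            self.requested_pstate = 0          # guest-visible (virtual) P-state

        def acpi_set_pstate(self, p):
            # Trapped by the hypervisor: updates soft state only, never hardware.
            self.requested_pstate = p

    class VPMPolicy:
        def __init__(self, guests):
            self.guests = guests

        def poll(self):
            # Treat requests as hints: run the hardware at the fastest state
            # any guest still asks for (P0 is the highest-performance state).
            return min(g.requested_pstate for g in self.guests)

    vms = [GuestVM("web"), GuestVM("db")]
    vms[0].acpi_set_pstate(2)                  # idle web server asks for P2
    vms[1].acpi_set_pstate(0)                  # busy database wants full speed
    print("hardware P-state:", VPMPolicy(vms).poll())   # -> 0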

  12. VirtualPower – Global Policy • The authors go a step beyond the single-machine virtualization problem and suggest a global coordination scheme. [Figure: three Xen/Dom0 stacks, each with its own VPM channel, polling, VPM rules, and VPM mechanisms, all coordinated by a shared set of VPM Global Rules]

  13. VirtualPower – Global Policy cont. • Similarities between VM workloads can be exploited to enter energy-saving ACPI states more frequently: VMs with similar, long periods of CPU idle can share coordinated deep C-state idles, while VMs whose SLAs include energy caps (necessitating low P-states) can share energy-saving P-states. VPM-G dictates the migrations that group such VMs together. • Additionally, heterogeneous data center hardware can lead to VM consolidation based on device power profiling. [Figure: VPM Global Rules migrate VMs ‘a’ and ‘b’ between virtualized Xen-with-VPM hosts so that similar workloads are co-located]

  14. VirtualPower - Mechanisms • Hardware Scaling • A VPM policy can set hardware states with a hypercall to Xen: “VPM_SET_PSTATE” • Soft Scaling • Hypervisor emulation of a requested ACPI state (sketched below) • Consolidation • Matching effectively paired workloads on a common server • Consolidating workloads onto fewer devices within a server, allowing others to idle (putting the full workload on a single core, for instance, or a single disk) [Figure: the VPM policy issues a VPM hypercall to Xen, which drives the hardware]
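
A minimal sketch of soft scaling, assuming the hypervisor emulates a slow "soft" P-state by running the nearest real P-state and capping the VM's CPU allocation for the remainder; the frequencies match slide 17, but the helper itself is hypothetical:

    # Emulate a guest-requested soft P-state with (real P-state, CPU cap).
    # Hypothetical helper; frequencies follow slide 17.
    HW_FREQS = [3.2, 2.8]                       # real hardware P-states (GHz)
    SOFT_FREQS = [3.2, 2.8, 2.0, 1.6, 0.8]      # soft states exposed to VMs

    def soft_scale(soft_state):
        target = SOFT_FREQS[soft_state]
        # Pick the slowest hardware frequency that still meets the target...
        hw = min(f for f in HW_FREQS if f >= target)
        # ...then emulate the rest by capping the VM's share of CPU time.
        return HW_FREQS.index(hw), target / hw

    for s in range(len(SOFT_FREQS)):
        hw_idx, cap = soft_scale(s)
        print(f"soft P{s}: run at hardware P{hw_idx} with a {cap:.0%} CPU cap")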

  15. Presentation Outline • Problem Description • Technological Background • Authors’ Solution • Evaluation Results • Conclusions

  16. Evaluation Methodology Details • Dual-core Pentium 4s with identical hardware components • Two P-states: 3.2 GHz and 2.8 GHz • Extech 380801 – power measurement “at the wall,” before DC conversion • Transactional workload • SPEC CPU2000 – proprietary transaction processing modules • Emulating batch workloads traced by the authors’ corporate sponsor, Delta Air Lines • Web service workload • RUBiS – benchmark implementing the core functionality of an auction site • Client requests generated by a separate server via gigabit Ethernet

  17. Soft Scaling • The authors report a reduction in energy usage, but fail to report energy usage over the entire job. • Soft states exposed by Xen: {800 MHz, 1.6 GHz, 2.0 GHz, 2.8 GHz, 3.2 GHz} • Hardware P-states: {2.8 GHz, 3.2 GHz}

  18. Soft Scaling continued . . . • Soft states exposed by Xen: {800 MHz, 1.6 GHz, 2.0 GHz, 2.8 GHz, 3.2 GHz} • Hardware P-states: {2.8 GHz, 3.2 GHz} • The degree to which soft scaling can emulate a native P-state without error is more relevant than non-normalized power consumption. This, combined with hardware scaling, is where VirtualPower derives most of its energy savings.

  19. Coordinated Scaling: PM-Local • By providing a wider selection of P-states, VirtualPower dramatically changes the schedule of ACPI calls. [Figure: ACPI call schedules for the web server and database VMs] • These are confusing results, though. Unless there was a clerical error, they indicate that VirtualPower enables the web server to run in much higher performance states than it does without power management.

  20. Coordinated Scaling: PM-Local • Suitable virtualization-level policies: • PM-Lmin – minimize power consumption while keeping application performance degradation minimal • PM-Lthrottle – power capping, for responding to perceived thermal events or power source policies • PM-Lplan – prediction-based scheduling, for large transactional workloads

  21. PM-Lmin Results • The authors present the PM-Lmin results optimistically. • They claim, “These measurements establish that our approach can capture and then effectively use VM-specific power/performance tradeoffs via its VPM channels and mechanisms.” • This is true, but using just this technique causes an overall energy increase, and the data shows it. • Re-graphing the data makes it apparent that a 20% reduction in processing speed does not result in a 20% reduction in energy usage (a rough worked example follows below). [Figure: power (W) vs. normalized desired processing rate]
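
A rough worked example of why a 20% rate reduction need not save energy, using assumed numbers (not the paper's measurements): static/idle power dominates, and a fixed-size job simply runs longer.

    Assume a fixed job of 100 s at full rate, P_static = 150 W,
    and P_dynamic = 50 W at the full processing rate.

      Full rate:  E = (150 + 50) W x 100 s              = 20.0 kJ
      80% rate:   runtime grows to 100 s / 0.8 = 125 s;
                  even if dynamic power falls linearly to 40 W,
                  E = (150 + 40) W x 125 s              = 23.75 kJ

    The static term is paid for 25% longer, so total energy rises
    even though instantaneous power fell by 5%.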

  22. PM-Lthrottle Results • Throttling is the act of reducing overall resource usage (and usually efficiency) for the sake of remaining below a certain threshold. • Throttling has some distinctly new uses in today’s data centers: • Reducing power usage during hours of peak power cost • Enabling data centers to run on renewable yet volatile sources of energy • Conforming to “Smart Grid” management • Experimentation using RUBiS showed an overall reduction in energy usage in exchange for a modest loss of performance for the transition from P0 to P2, for both the database and the web server: 9% performance degradation for a 10% energy reduction. • PM-Lthrottle shows an overall energy reduction for this test web application; much better than the PM-Lmin results for transactional workloads. (A sketch of a throttle loop follows below.)
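
A minimal sketch of a PM-Lthrottle-style power-capping loop, assuming the policy steps P-states down while measured wall power exceeds the cap and restores them when there is headroom; read_power_watts() and set_pstate() are hypothetical helpers with made-up numbers:

    # Power-capping throttle loop: trade P-states against a wall-power cap.
    # Hypothetical helpers and made-up numbers, for illustration only.
    import random

    NUM_PSTATES = 3                  # P0 (fastest) .. P2 (slowest)
    POWER_CAP_W = 180.0
    HEADROOM_W = 20.0

    def read_power_watts(pstate):    # stand-in for an Extech-style wall meter
        return 200.0 - 15.0 * pstate + random.uniform(-5.0, 5.0)

    def set_pstate(p):
        print(f"-> P{p}")

    pstate = 0
    for _ in range(10):
        draw = read_power_watts(pstate)
        if draw > POWER_CAP_W and pstate < NUM_PSTATES - 1:
            pstate += 1              # over the cap: give up performance
        elif draw < POWER_CAP_W - HEADROOM_W and pstate > 0:
            pstate -= 1              # comfortable headroom: restore performance
        set_pstate(pstate)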

  23. PM-Lplan Results • This policy is somewhat deceptively named. • The overall benefit shown by this result is that historical data can be used for more effective throttling. • The VPM planning policy predicts that at time 700, a large batch job will arrive. • The policy plans accordingly, giving the other VMs on the machine higher performance states beforehand. • This is done to maintain performance and power requirements once the batch arrives (a toy version is sketched below). It is easy to see that PM-Lplan can help maintain power usage within throttling caps.
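
A toy sketch of the planning idea with assumed numbers: given a batch arrival time predicted from history, boost the other VMs beforehand so they can be throttled once the batch lands and the power budget binds; the schedule and names are hypothetical.

    # History-based planning: pre-boost interactive VMs before a predicted
    # batch arrival, then throttle them once the batch job is running.
    # The arrival time and P-state schedule are assumed, not measured.
    PREDICTED_BATCH_ARRIVAL_S = 700

    def planned_pstate(vm, t):
        if t < PREDICTED_BATCH_ARRIVAL_S:
            # Pre-batch window: let the other VMs run fast and get ahead.
            return 0 if vm != "batch" else 2
        # Batch running: throttle the others to stay within the power budget.
        return 1 if vm == "batch" else 2

    for t in (600, 650, 700, 750):
        print(t, {vm: planned_pstate(vm, t) for vm in ("web", "db", "batch")})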

  24. PM-Lplan Results continued . . . • Without the actual data this can’t be proven, but a rough estimate of each trace’s mean suggests that PM-Lplan likely uses slightly more energy, over the lifetime of the workload, than running without planning.

  25. PM-G Policies • Given a set of Guest VMs that can enter ACPI sleep, and others that must keep operating, consolidation through PM-G-managed migration shows dramatic energy efficiency improvement (decision rule sketched below). • PM-G monitors PM-L-level Guest VM requests and, based on policy, consolidates. • The results graph shows the consolidation, over time, of Guest VMs from Pentium 4 systems to the newer Intel Core microarchitecture. • The Pentium 4 systems are left to idle. • This is one of the most convincing arguments made in the paper. Though the authors don’t mention it, workload characterization has a large effect on the applicability of this benefit.
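
A minimal sketch of the consolidation decision as greedy first-fit packing, assuming each host has a fixed normalized capacity and that emptied hosts can then sleep; the capacities and loads are illustrative, not from the paper:

    # PM-G-style consolidation as greedy first-fit: pack VM loads onto as few
    # hosts as possible so the emptied hosts can enter ACPI sleep.
    # Capacities and loads are illustrative, not from the paper.
    HOST_CAPACITY = 1.0

    def consolidate(vm_loads, num_hosts):
        used = [0.0] * num_hosts
        placement = {}
        # Place the largest loads first to keep the packing tight.
        for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
            for h in range(num_hosts):
                if used[h] + load <= HOST_CAPACITY:
                    used[h] += load
                    placement[vm] = h
                    break
        idle = [h for h, u in enumerate(used) if u == 0.0]
        return placement, idle

    placement, idle = consolidate({"a": 0.2, "b": 0.2, "c": 0.3, "d": 0.8}, 4)
    print(placement)                    # {'d': 0, 'c': 1, 'a': 0, 'b': 1}
    print("hosts free to sleep:", idle) # [2, 3]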

  26. Presentation Outline • Problem Description • Technological Background • Authors’ Solution • Evaluation Results • Conclusions

  27. Other Approaches • Virtual energy throttling and virtual device energy accounting • Energy Management for Hypervisor-Based Virtual Machines, 2007. • J. Stoess, System Architecture Group, University of Karlsruhe, Germany • Currency system extension for VirtualPower • VPM tokens: virtual machine-aware power budgeting in datacenters, 2009. • Ripal Nathuji, Microsoft Research

  28. Conclusions • The problem description again: how can we use and coordinate the workload knowledge of Guest VMs to perform more effective power management, all without sacrificing the benefits of virtualization? • From the results shown, it is evident that the hypervisor (or the OS controlling it) should, in some fashion, collect power management requests from Guest VMs. • Given the history of power management requests from Guest VMs: • strategies for intelligently consolidating workloads can bring about some level of overall energy savings, and • strategies for throttling can be implemented to meet ‘Green Energy’ or ‘Smart Grid’ needs.
