Energy-efficient Resource Management for HPC Applications on Virtual Systems
Can Hankendi, Ayse K. Coskun
Electrical and Computer Engineering Department, Boston University, MA, USA
{hankendi, acoskun}@bu.edu



Presentation Transcript


Abstract

As multi-threaded workloads start to emerge on the cloud, providing energy-efficient consolidation strategies for these high-performance computing (HPC)-type loads is becoming an important research problem. This work proposes an adaptive resource provisioning technique for multi-threaded workloads to improve the energy efficiency of a virtualized multi-core server. The proposed technique adjusts the resources available to a virtual machine (VM) based on the application's power efficiency, while delivering the desired performance guarantees. Experiments on a real-life multi-core server show that the proposed technique improves the system throughput-per-watt by 15% on average (and by up to 21%) over existing co-scheduling techniques.

Motivation

• Energy consumption of computing clusters is increasing by 15% per year.
• Energy efficiency and budget/cost control are the major challenges for data centers.
• HPC on the cloud: HPC applications are expected to shift towards cloud resources, and the nature of HPC applications differs from that of traditional cloud workloads (enterprise loads vs. HPC).

Performance Isolation on Virtual Systems

• Consolidating multiple workloads can degrade performance due to resource contention.
• CPU binding and NUMA balancing can mitigate the performance variation (see the first code sketch after the transcript).
• A VM with the NUMA balancer and with CPU binding provides performance and performance isolation comparable to the best case.

Classifying Applications for Power Efficiency

• The IPC × CPU-utilization metric shows a strong correlation with power efficiency.
• We use a density-based clustering algorithm (DBSCAN) to determine application groups (classes); the second code sketch after the transcript illustrates this step.

Experimental Setup

• 12-core AMD MagnyCours: two 6-core processors in a single package.
• Applications: PARSEC 2.1 parallel benchmarks [3].

Runtime Implementation & Results

Each application is initially executed with equal resources. Our technique (sketched in the last code example after the transcript):
(1) monitors IPC, CPU utilization, and throughput;
(2) accesses a lookup table (LUT) to check application phases, allocating more resources to higher-class (i.e., scalable) applications and fewer resources to lower-class applications;
(3) monitors throughput gains and losses due to the resource adjustments and, if the gains are higher, continues to adjust resources.

[Figures: throughput constraint; runtime behavior w/o throughput constraints; runtime behavior w/ throughput constraints; application selection based …]

For 50 randomly generated workload sets, the proposed technique improves throughput-per-watt by 15% on average, reaching up to 21%.

References

[1] C. Hankendi, A. Coskun, "Adaptive Energy-Efficient Resource Sharing for Multi-threaded Workloads in Virtualized Systems". In CHANGE-DAC'12.
[2] C. Hankendi, A. Coskun, "Reducing the Energy Cost of Computing Through Efficient Co-Scheduling of Parallel Workloads". In DATE'12.
[3] C. Bienia et al., "The PARSEC Benchmark Suite: Characterization and Architectural Implications". In PACT, 2008.

*This work is partially funded by VMware, Inc. and MGHPCC.
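The CPU-binding step from the performance-isolation panel can be reproduced with standard Linux affinity calls. Below is a minimal sketch, assuming a Linux host and Python 3; the core list for one MagnyCours die is an assumption about the core numbering on the test machine, not taken from the poster.

```python
import os

# Assumption: cores 0-5 belong to one 6-core MagnyCours die (one NUMA
# node); verify the actual topology with `numactl --hardware`.
NODE0_CORES = {0, 1, 2, 3, 4, 5}

def bind_to_node(pid: int, cores: set) -> None:
    """Pin the given process to the given set of cores (Linux only)."""
    os.sched_setaffinity(pid, cores)

if __name__ == "__main__":
    bind_to_node(os.getpid(), NODE0_CORES)
    print("running on cores:", sorted(os.sched_getaffinity(os.getpid())))
```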

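The classification step can be illustrated with scikit-learn's DBSCAN implementation. A minimal sketch: the benchmark names come from PARSEC, but the IPC × utilization values and the eps/min_samples settings are made-up placeholders, not measurements from the poster.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# One row per application; the single feature is IPC * CPU utilization,
# the power-efficiency proxy named on the poster. Values are illustrative.
apps = ["blackscholes", "bodytrack", "canneal", "dedup", "x264"]
ipc_util = np.array([[1.8], [1.7], [0.5], [0.6], [1.1]])

# DBSCAN labels dense groups 0, 1, ... and marks outliers as -1.
labels = DBSCAN(eps=0.2, min_samples=2).fit_predict(ipc_util)
for app, cls in zip(apps, labels):
    print(f"{app}: class {cls}")
```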
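Finally, the three-step runtime loop can be summarized in code. A self-contained sketch, not the authors' implementation: monitor() and set_resources() are hypothetical stand-ins for the performance-counter and hypervisor (vCPU allocation) interfaces, and the LUT is reduced to a single threshold.

```python
import random

def monitor(vm):
    # Stub: a real implementation would read hardware counters (IPC),
    # CPU utilization, and the application's reported throughput.
    return {"ipc": random.uniform(0.5, 2.0),
            "util": random.uniform(0.5, 1.0),
            "throughput": random.uniform(50.0, 100.0)}

def set_resources(vm, delta_cores):
    # Stub: a real implementation would resize the VM's core allocation
    # through the hypervisor's management interface.
    print(f"{vm}: cores {delta_cores:+d}")

def lookup_class(ipc_times_util):
    # Stub LUT: a single threshold stands in for the per-phase table.
    return "high" if ipc_times_util > 1.0 else "low"

def control_epoch(vms, prev_tput):
    for vm in vms:
        m = monitor(vm)                                    # (1) monitor
        step = 1 if lookup_class(m["ipc"] * m["util"]) == "high" else -1
        set_resources(vm, step)                            # (2) LUT-driven adjust
        tput = monitor(vm)["throughput"]
        if tput < prev_tput.get(vm, 0.0):                  # (3) roll back on loss
            set_resources(vm, -step)
        prev_tput[vm] = tput

if __name__ == "__main__":
    history = {}
    for _ in range(3):                                     # three control epochs
        control_epoch(["vm-a", "vm-b"], history)
```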