A novel energy-optimizing list scheduling approach that considers both dynamic and static energy consumption, based on real measurements. The precise energy model applies to CPUs and to accelerators equipped with power sensors, and accounts for the fact that tasks executed on accelerators also induce energy consumption on the host system.
reMinMin: A Novel Static Energy-Centric List Scheduling Approach Based on Real Measurements Achim Lösch and Marco Platzner {achim.loesch, platzner}@upb.de
Heterogeneous Compute Node
Contribution: a novel energy-optimizing list scheduling approach for single heterogeneous compute nodes, based on real measurements
Energy Scheduling
Related Work:
• Energy-minimizing list schedulers, e.g., Energy-Aware MinMin [1] and Minimum Energy-Minimum Energy [2]
  – Do not consider the energy consumed by idling resources
• MINMIN [3] adjusts the estimated energy consumption between a minimum and a maximum value, depending on the number of cores allocated to a task
  – Considers the energy consumed by idling resources, but its energy model is not applicable to non-CPU architectures
Our approach:
• Considers both dynamic and static energy consumption
• Energy model is more precise than in related work
• Applicable to CPUs and to accelerators with power sensors
• Accounts for the fact that tasks executed on accelerators induce energy consumption on the host
• Energy data is measured on a real system instead of estimated
Energy Model – Determining Idle Power
[Figure: power traces of rCPU, rGPU, and rFPGA while a sleep task τSLEEP runs on rCPU for duration T(τSLEEP, rCPU); the marked area ① is Etotal(R|τSLEEP, rCPU)]
R = {rCPU, rGPU, rFPGA}
Measured idle powers:
Pidle(rCPU) ≈ 16.9 W
Pidle(rGPU) ≈ 26.9 W
Pidle(rFPGA) ≈ 23.8 W
Pidle(R) ≈ 67.7 W
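The node's static power Pidle(R) is obtained from the per-resource idle powers; a minimal sketch (the dictionary and function names are mine, not from the poster). Note the slide reports Pidle(R) ≈ 67.7 W from a direct measurement, slightly above the sum of the rounded per-resource values:

```python
# Per-resource idle powers (watts) as reported on the slide.
P_IDLE = {"rCPU": 16.9, "rGPU": 26.9, "rFPGA": 23.8}

def idle_power(resources):
    """Pidle(R): the node's static power, taken here as the sum of the
    idle powers of all resources in R (measured while running τSLEEP)."""
    return sum(P_IDLE[r] for r in resources)

print(idle_power(["rCPU", "rGPU", "rFPGA"]))
```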
Energy Model – Task-induced Energy
[Figure: power traces of rCPU, rGPU, and rFPGA while task τi runs on rGPU for duration T(τi, rGPU); marked areas ① Etotal, ② Eidle, ③ Etask]
R = {rCPU, rGPU, rFPGA}
Etask(R|τi, rGPU) = Etotal(R|τi, rGPU) − Eidle(R|τi, rGPU)
Eidle(R|τi, rGPU) = T(τi, rGPU) · Pidle(R)
Energy Model – Total Energy
[Figure: power traces of rCPU, rGPU, and rFPGA while task τi runs on rGPU; marked areas ①–③ correspond to the scheduler inputs below]
R = {rCPU, rGPU, rFPGA}
Etotal(R|τi, rGPU) = Etask(R|τi, rGPU) + T(τi, rGPU) · Pidle(R)
Measure N tasks on M resources (N·M task-resource pairs).
Scheduler input:
① Pidle(R)
② ETC[N][M] (Expected Time for Completion)
③ Etask[N][M]
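The two identities above are each other's rearrangement: measured total energy minus the node's static energy over the task's runtime yields the task-induced energy, and vice versa. A minimal sketch with hypothetical numbers (the 390.8 J total is my own illustrative value, not a measurement from the poster):

```python
def task_energy(e_total, t, p_idle_node):
    """Etask = Etotal - T * Pidle(R): subtract the node's static energy
    accrued over the task's runtime from the measured total energy."""
    return e_total - t * p_idle_node

def total_energy(e_task, t, p_idle_node):
    """Etotal = Etask + T * Pidle(R): the energy the scheduler accounts
    for when a task occupies the node for T seconds."""
    return e_task + t * p_idle_node

# Hypothetical τi on rGPU: 390.8 J measured total over 4 s with
# Pidle(R) = 67.7 W leaves 120.0 J of task-induced energy.
print(task_energy(390.8, 4.0, 67.7))
```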
reMinMin Approach
repeat
  for each task-resource pair:
    • Calculate the completion time
    • Update the system's static energy consumption
    • Calculate the system's total energy consumption
  end
  • Assign the task to the resource with the overall minimum total energy consumption
  • Remove the assigned task from the task set
until all tasks are assigned
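The loop above can be sketched in Python. All names (`etc`, `e_task`, `ready`) are mine, and details of the real reMinMin (tie-breaking, complexity optimizations) may differ; this is only a sketch of the greedy structure under the poster's energy model:

```python
def reminmin(tasks, resources, etc, e_task, p_idle):
    """Greedy list-scheduling sketch.
    etc[t][r]    - expected time for completion of task t on resource r
    e_task[t][r] - task-induced (dynamic) energy of task t on resource r
    p_idle       - Pidle(R), the static power of the whole node
    Returns a list of (task, resource) assignments."""
    ready = {r: 0.0 for r in resources}   # next free time per resource
    dyn_energy = 0.0                      # accumulated Etask of assignments
    schedule = []
    unassigned = set(tasks)
    while unassigned:
        best = None
        for t in unassigned:
            for r in resources:
                finish = ready[r] + etc[t][r]            # completion time
                makespan = max(finish, *ready.values())  # schedule length
                static = makespan * p_idle               # static energy
                total = dyn_energy + e_task[t][r] + static
                if best is None or total < best[0]:
                    best = (total, t, r, finish)
        _, t, r, finish = best                # overall-minimum pair
        ready[r] = finish
        dyn_energy += e_task[t][r]
        schedule.append((t, r))
        unassigned.remove(t)
    return schedule
```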
Example
[Figure: two power-over-time plots (P in W vs. t in s, 0–24 s) of the schedule under construction, with areas ①–③ marking the quantities from the energy model]
Example
[Figure: scheduling steps 1)–3) shown as power-over-time plots (P in W vs. t in s, 0–24 s)]
Example
[Figure: resulting schedules compared as power-over-time plots (P in W vs. t in s, 0–24 s)]
Considering idle energy is key to optimizing total energy consumption.
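A small numeric illustration of that takeaway, with hypothetical runtimes and task energies (only Pidle(R) ≈ 67.7 W is taken from the measurements slide):

```python
# Hypothetical candidates: resource -> (runtime T in s, Etask in J).
P_IDLE_NODE = 67.7  # Pidle(R) in watts, from the measurements slide
candidates = {
    "rCPU": (10.0, 80.0),
    "rGPU": (4.0, 120.0),
}

def total_energy(t, e_task):
    # Etotal = Etask + T * Pidle(R)
    return e_task + t * P_IDLE_NODE

# A dynamic-energy-only scheduler would pick rCPU (80 J < 120 J), but
# once static energy over the runtime is included, rGPU is cheaper
# (390.8 J vs. 757 J).
best = min(candidates, key=lambda r: total_energy(*candidates[r]))
print(best)  # → rGPU
```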
Paper/Poster Outline
• Present reMinMin in more detail
• Experiments:
  – Comparison to two scheduling approaches: the Minimum of approach and an Optimum scheduler (exhaustive search)
  – Three task sets: a homogeneous, a heterogeneous, and a mixed task set
  – Each task set consists of 16 tasks, instances of 4 applications
• Results from the experiments:
  – reMinMin outperforms the Minimum of approach
  – reMinMin is even close to the Optimum scheduler
Thank you for your attention!
References:
[1] Y. Li, Y. Liu, and D. Qian, "A heuristic energy-aware scheduling algorithm for heterogeneous clusters," in 2009 15th International Conference on Parallel and Distributed Systems, Dec. 2009
[2] J. K. Kim, H. J. Siegel, A. A. Maciejewski, and R. Eigenmann, "Dynamic resource management in energy constrained heterogeneous computing systems using voltage scaling," IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 11, Nov. 2008
[3] S. Nesmachnow, B. Dorronsoro, J. E. Pecero, and P. Bouvry, "Energy-aware scheduling on multi-core heterogeneous grid computing systems," Journal of Grid Computing, vol. 11, no. 4, 2013