
Predicting Parallel Performance


Presentation Transcript


  1. Predicting Parallel Performance Introduction to Parallel Programming – Part 10

  2. Review & Objectives • Previously: • Design and implementation of a task decomposition solution • At the end of this part you should be able to: • Define speedup and efficiency • Use Amdahl’s Law to predict maximum speedup

  3. Speedup • Speedup is the ratio between sequential execution time and parallel execution time • For example, if the sequential program executes in 6 seconds and the parallel program executes in 2 seconds, the speedup is 3X • [Figure: a speedup curve rising with the number of cores; axes: Cores (x) vs. Speedup (y)]

  4. Efficiency • Efficiency is a measure of core utilization: speedup divided by the number of cores • Example: a program achieves a speedup of 3 on 4 cores, so its efficiency is 3 / 4 = 75% • [Figure: an efficiency curve declining as cores are added; axes: Cores (x) vs. Efficiency (y)]
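
A minimal sketch of both definitions in Python, using the numbers from slides 3 and 4 (the function names are illustrative, not from any library):

```python
def speedup(t_serial, t_parallel):
    """Ratio of sequential execution time to parallel execution time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, cores):
    """Core utilization: speedup divided by the number of cores."""
    return speedup(t_serial, t_parallel) / cores

print(speedup(6, 2))        # 3.0  -> the 6 s vs. 2 s example is a 3X speedup
print(efficiency(6, 2, 4))  # 0.75 -> speedup of 3 on 4 cores is 75% efficiency
```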

  5. Speedup and Efficiency: Speedup Example • Painting a picket fence • 30 minutes of preparation (serial) • One minute to paint a single picket • 30 minutes of cleanup (serial) • Thus, a 300-picket fence takes 30 + 300 + 30 = 360 minutes of serial time

  6. Speedup and Efficiency: Computing Speedup • With p painters, the pickets take 300 / p minutes, but preparation and cleanup stay serial, so the parallel time is 60 + 300 / p minutes • Speedup is therefore 360 / (60 + 300 / p); for example, 2 painters finish in 210 minutes, a speedup of about 1.71X (see the sketch after slide 7)

  7. Speedup and Efficiency: Efficiency Example • Dividing each speedup by the number of painters gives the efficiency; for example, 2 painters yield 1.71 / 2 ≈ 86%, and efficiency keeps falling as painters are added because the 60 serial minutes are never shared
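
A short Python sketch that recomputes the fence numbers from slides 5–7 (the painter counts chosen for the table are illustrative):

```python
# Picket-fence example: 30 min prep + 300 pickets at 1 min each + 30 min cleanup.
SERIAL_TIME = 30 + 300 + 30  # 360 minutes with one painter

print(f"{'painters':>8} {'minutes':>8} {'speedup':>8} {'efficiency':>10}")
for p in (1, 2, 3, 4, 5, 10):
    minutes = 30 + 300 / p + 30   # prep and cleanup remain serial
    s = SERIAL_TIME / minutes     # speedup
    e = s / p                     # efficiency = speedup / painters
    print(f"{p:>8} {minutes:>8.0f} {s:>8.2f} {e:>10.0%}")
```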

  8. Idea Behind Amdahl’s Law • Let s be the portion of the computation that must be performed sequentially, and 1 - s the portion that can be executed in parallel • [Figure: execution-time bars for 1 to 5 cores; the serial part s is constant in every bar, while the parallel part shrinks from 1 - s to (1 - s)/2, (1 - s)/3, (1 - s)/4, and (1 - s)/5; axes: Cores (x) vs. Execution Time (y)]

  9. Derivation of Amdahl’s Law • Speedup is the ratio of execution time on 1 core to execution time on p cores • Execution time on 1 core is s + (1 - s) = 1 • Execution time on p cores is at least s + (1 - s)/p • Dividing the first expression by the second gives the bound below
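
Writing out that ratio (ψ for speedup, following the notation of the Quinn text cited on slide 17):

```latex
\psi \;\le\; \frac{s + (1 - s)}{s + \frac{1 - s}{p}} \;=\; \frac{1}{s + \frac{1 - s}{p}}
```

With s = 0 this gives the ideal speedup of p; any serial fraction s > 0 caps the speedup at 1/s no matter how many cores are added.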

  10. Amdahl’s Law Is Too Optimistic • Amdahl’s Law ignores parallel processing overhead • Examples of this overhead include time spent creating and terminating threads • Parallel processing overhead is usually an increasing function of the number of cores (threads)

  11. Graph with Parallel Overhead Added • [Figure: execution time vs. cores as on slide 8, with an extra band showing parallel overhead that increases with the number of cores; axes: Cores (x) vs. Execution Time (y)]

  12. Other Optimistic Assumptions • Amdahl’s Law assumes that the computation divides evenly among the cores • In reality, the amount of work does not divide evenly among the cores • Core waiting time is another form of overhead • [Figure: per-core timelines from task start to task completion; each core shows working time followed by waiting time until the slowest core finishes]

  13. Graph with Workload Imbalance Added • [Figure: execution time vs. cores, now with a further band showing time lost due to workload imbalance; axes: Cores (x) vs. Execution Time (y)]

  14. Illustration of the Amdahl Effect • [Figure: speedup vs. cores for problem sizes n = 1,000, n = 10,000, and n = 100,000; larger problems track the linear-speedup line more closely; axes: Cores (x) vs. Speedup (y)]

  15. Using Amdahl’s Law • Program executes in 5 seconds • Profile reveals 80% of the time is spent in function alpha, which we can execute in parallel • What would be the maximum speedup on 2 cores? • With serial fraction s = 0.2, speedup ≤ 1 / (0.2 + 0.8/2) = 1 / 0.6 ≈ 1.67 • New execution time ≥ 5 sec / 1.67 = 3 seconds
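
The same arithmetic as a Python sketch (the function name amdahl_bound is illustrative):

```python
def amdahl_bound(serial_fraction, cores):
    """Upper bound on speedup under Amdahl's Law (overhead ignored)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

bound = amdahl_bound(0.2, 2)                    # 80% of the run parallelizes
print(f"max speedup on 2 cores: {bound:.2f}")   # ~1.67X
print(f"min new run time: {5 / bound:.1f} s")   # 5 s / 1.67 = 3.0 s
```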

  16. Superlinear Speedup • According to our general speedup formula, the maximum speedup a program can achieve on p cores is p • Superlinear speedup is the situation where the speedup is greater than the number of cores used • It means the computational rate of the cores is faster when the parallel program is executing • Superlinear speedup usually occurs because the parallel program has a higher cache hit rate: each core works on a smaller share of the data, more of which fits in its cache

  17. References • Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill (2004).

  18. More General Speedup Formula
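
In the notation of the Quinn text cited on slide 17 (a reconstruction: σ(n) is the inherently sequential time, φ(n) the parallelizable time, and κ(n, p) the parallel overhead), the more general bound is:

```latex
\psi(n, p) \;\le\; \frac{\sigma(n) + \varphi(n)}{\sigma(n) + \frac{\varphi(n)}{p} + \kappa(n, p)}
```

Unlike slide 9’s formula, this version keeps the overhead term κ(n, p) and lets every term depend on the problem size n, which is what slides 19 and 20 annotate.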

  19. Amdahl’s Law: Maximum Speedup • Amdahl’s Law follows from the general formula by assuming the parallel work divides perfectly among the available cores • The overhead term κ(n, p) is set to 0
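
Under those assumptions the general bound reduces to (again a reconstruction in Quinn’s notation):

```latex
\psi(n, p) \;\le\; \frac{\sigma(n) + \varphi(n)}{\sigma(n) + \frac{\varphi(n)}{p}}
```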

  20. The Amdahl Effect • As n → ∞, the φ(n) terms dominate the σ(n) and κ(n, p) terms in the general formula • Speedup is therefore an increasing function of problem size
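
A small Python sketch of the effect; the cost functions σ(n) = n and φ(n) = n²/100 with κ = 0 are assumptions chosen purely for illustration, so only the trend matters:

```python
def speedup_bound(sigma, phi, p, kappa=0.0):
    """General speedup bound: (sigma + phi) / (sigma + phi/p + kappa)."""
    return (sigma + phi) / (sigma + phi / p + kappa)

CORES = 8
for n in (1_000, 10_000, 100_000):
    sigma = n              # assumed serial cost, linear in n
    phi = n * n / 100      # assumed parallel cost, quadratic in n
    print(f"n = {n:>7,}: speedup on {CORES} cores <= "
          f"{speedup_bound(sigma, phi, CORES):.2f}")
```

As n grows, the bound climbs from about 4.9 toward the 8-core ceiling, matching the curves on slide 14.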
