
Quantitative Basis for Design: Considerations and Metrics in Parallel Programming Optimization

Explore the optimization problem in parallel programming, considering factors like execution time, scalability, efficiency, and cost. Mathematical performance models are used to assess costs and predict performance. Learn about Amdahl's Law, metrics for performance, and the challenges of measuring and reporting performance.


Presentation Transcript


  1. CS 584 Lecture 11 • Assignment? • Paper Schedule • 10 Students • 5 Days • Look at the schedule and email me your preference. Quickly.

  2. A Quantitative Basis for Design • Parallel programming is an optimization problem. • Must take into account several factors: • execution time • scalability • efficiency

  3. A Quantitative Basis for Design (cont.) • Must also take into account the costs: • memory requirements • implementation costs • maintenance costs, etc.

  4. A Quantitative Basis for Design (cont.) • Mathematical performance models are used to assess these costs and predict performance.

  5. Defining Performance • How do you define parallel performance? • What do you define it in terms of? • Consider • Distributed databases • Image processing pipeline • Nuclear weapons testbed

  6. Amdahl's Law • Every algorithm has a sequential component. • The sequential component limits speedup: if the sequential fraction is s, then Maximum Speedup = 1/s.
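The bound above can be sketched in a few lines. This is a minimal illustration of Amdahl's Law, using the standard formula S(p) = 1 / (s + (1 − s)/p); the sequential fraction 0.05 is an example value, not from the slides.

```python
def amdahl_speedup(s, p):
    """Predicted speedup for sequential fraction s on p processors:
    S(p) = 1 / (s + (1 - s) / p)."""
    return 1.0 / (s + (1.0 - s) / p)

def max_speedup(s):
    """Limit of S(p) as p grows without bound: 1/s."""
    return 1.0 / s

# Even a 5% sequential component caps speedup at 20x,
# no matter how many processors are added.
for p in (4, 16, 64, 1024):
    print(p, amdahl_speedup(0.05, p))
print("limit:", max_speedup(0.05))
```

Note how the predicted speedup flattens well below the processor count: the sequential component dominates long before p gets large.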

  7. Amdahl's Law [Figure: speedup plotted as a function of the sequential fraction s]

  8. What's wrong? • Works fine for a given algorithm. • But what if we change the algorithm? • We may change algorithms to increase parallelism and thus eventually increase performance. • May introduce inefficiency

  9. Metrics for Performance • Speedup • Efficiency • Scalability • Others …

  10. Speedup • Speedup(P) = Time(1 processor) / Time(P processors) • But what is "speed"? • Which algorithm supplies the one-processor time? • What is the work performed? How much work?

  11. Two kinds of Speedup • Relative • Uses parallel algorithm on 1 processor • Most common • Absolute • Uses best known serial algorithm • Eliminates overheads in calculation.
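The distinction between the two kinds of speedup can be made concrete. A minimal sketch, with made-up timings (the 12 s, 10 s, and 2 s figures are assumptions for illustration, not measurements from the lecture):

```python
def relative_speedup(t_parallel_on_1, t_parallel_on_p):
    """Relative speedup: parallel algorithm's own 1-processor time
    divided by its p-processor time."""
    return t_parallel_on_1 / t_parallel_on_p

def absolute_speedup(t_best_serial, t_parallel_on_p):
    """Absolute speedup: best known serial algorithm's time
    divided by the parallel algorithm's p-processor time."""
    return t_best_serial / t_parallel_on_p

t_par_1 = 12.0   # parallel algorithm on 1 processor (carries overhead)
t_serial = 10.0  # best known serial algorithm
t_par_8 = 2.0    # parallel algorithm on 8 processors

print(relative_speedup(t_par_1, t_par_8))   # 6.0
print(absolute_speedup(t_serial, t_par_8))  # 5.0
```

The relative number is larger precisely because the parallel algorithm's one-processor run includes parallelization overhead, which is why absolute speedup is the more honest baseline.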

  12. Speedup • Algorithm A • Serial execution time is 10 sec. • Parallel execution time is 2 sec. • Algorithm B • Serial execution time is 2 sec. • Parallel execution time is 1 sec. • What if I told you A = B?
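Working through the slide's numbers shows why the comparison is misleading. The sketch below assumes A and B solve the same problem, so B's 2-second serial time is the best known serial baseline:

```python
a_serial, a_parallel = 10.0, 2.0   # Algorithm A times from the slide
b_serial, b_parallel = 2.0, 1.0    # Algorithm B times from the slide

rel_a = a_serial / a_parallel      # 5.0 -- looks impressive
rel_b = b_serial / b_parallel      # 2.0 -- looks modest

# If both solve the same problem, the best serial time is the baseline.
best_serial = min(a_serial, b_serial)
abs_a = best_serial / a_parallel   # 1.0 -- no real gain over serial B
abs_b = best_serial / b_parallel   # 2.0 -- a genuine gain

print(rel_a, rel_b, abs_a, abs_b)
```

By relative speedup A wins 5x to 2x, yet B's parallel version is twice as fast in wall-clock time and A never beats serial B at all. Relative speedup rewards starting from a slow algorithm.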

  13. Performance Measurement • Algorithm X achieved speedup of 10.8 on 12 processors. • What is wrong? • A single point of reference is not enough! • What about asymptotic analysis?
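One way to see why a single point is not enough: efficiency (speedup divided by processor count) at one processor count says nothing about the trend. The extra data points below are hypothetical, invented to illustrate a plausible scaling curve around the slide's single 10.8-on-12 figure:

```python
def efficiency(speedup, p):
    """Parallel efficiency: fraction of ideal linear speedup achieved."""
    return speedup / p

# The slide's single data point: speedup 10.8 on 12 processors.
print(efficiency(10.8, 12))  # 0.9 -- but at one point only

# Hypothetical measurements at other processor counts (invented numbers):
# the same 90% efficiency at p=12 could belong to very different curves.
for p, s in [(4, 3.9), (12, 10.8), (48, 22.0)]:
    print(p, efficiency(s, p))
```

Here efficiency collapses from ~0.97 at 4 processors to ~0.46 at 48, something the lone 12-processor figure cannot reveal.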

  14. Performance Measurement • There is not a perfect way to measure and report performance. • Wall clock time seems to be the best. • But how much work do you do? • Best Bet: • Develop a model that fits experimental results.
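The "develop a model that fits experimental results" advice can be sketched concretely. One common (assumed, not from the slides) model is T(p) = t_s + t_p/p: a fixed serial part plus a perfectly divisible parallel part, which is an ordinary least-squares fit in the variable x = 1/p. The timings below are fabricated to lie exactly on such a curve:

```python
def fit_runtime_model(procs, times):
    """Least-squares fit of T(p) = t_s + t_p / p.
    Linear regression on x = 1/p; returns (t_s, t_p)."""
    xs = [1.0 / p for p in procs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_t = sum(times) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxt = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, times))
    t_p = sxt / sxx                  # slope: parallelizable work
    t_s = mean_t - t_p * mean_x      # intercept: serial component
    return t_s, t_p

# Illustrative wall-clock times, constructed as exactly 1 + 10/p seconds.
procs = [1, 2, 4, 8]
times = [11.0, 6.0, 3.5, 2.25]
t_s, t_p = fit_runtime_model(procs, times)
print(t_s, t_p)  # ~1.0 and ~10.0
```

With real measurements the fit will not be exact; the residuals then tell you whether the model's assumptions (e.g., no communication cost growing with p) actually hold.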
