High Performance Computing

Presentation Transcript


  1. High Performance Computing Lecture 5 Scalability of Parallel Systems 9/21/2014

  2. Scalability • Scalability is a term widely used in parallel computing; however, it is complicated to give scalability a precise mathematical definition. • Performance depends on both the communication patterns of the algorithm and the infrastructure provided by the machine, so a good measure of scalability should adequately reflect the interaction between these two aspects. • Intuitively, scalability is the ability of a parallel system to effectively utilize an increasing number of processors.

  3. Parallel Systems • Whether or not a parallel system is scalable depends on the definition of scalability and the metric used. • Need for a realistic metric to measure scalability. • Impact on the design of architectures and applications. • Multiple parameters involved. • Metrics for grid computing? [Diagram: Parallel System = Parallel Algorithm + Parallel Machine]

  4. Scalability Models • Fixed-Problem-Size Model • Speedup • Amdahl's Law • Overhead & degree of concurrency • Memory-Constrained Model • Scaled speedup (Gustafson) • May lead to unacceptable execution times • Scaled speedup is less than linear (Flatt & Kennedy) • Isoefficiency (Kumar & Gupta) • Fixed-Time Scaling Model • Isospeed (Sun & Rover) • Seeks to determine how much the problem size should be scaled with the system under the constraint that the problem must be solved in the same absolute time. Memory-constrained model – storage complexity; fixed-time model – computational complexity.
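
To make the contrast between the fixed-problem-size and memory-constrained models concrete, here is a minimal Python sketch comparing Amdahl's speedup with Gustafson's scaled speedup. The serial fraction f and the processor counts are illustrative values, not figures from the lecture.

    # Amdahl's law (fixed problem size) vs. Gustafson's scaled speedup.
    # f = serial fraction of the work, p = number of processors (illustrative values).

    def amdahl_speedup(f: float, p: int) -> float:
        """Fixed-problem-size model: speedup is bounded above by 1/f."""
        return 1.0 / (f + (1.0 - f) / p)

    def gustafson_speedup(f: float, p: int) -> float:
        """Scaled-speedup model: the parallel portion of the work grows with p."""
        return f + (1.0 - f) * p

    f = 0.05  # assume 5% of the work is inherently serial
    for p in (1, 16, 256, 1024):
        print(f"p={p:5d}  Amdahl={amdahl_speedup(f, p):8.2f}  "
              f"Gustafson={gustafson_speedup(f, p):8.2f}")

With these assumed numbers Amdahl's speedup saturates near 1/f = 20, while the scaled speedup keeps growing almost linearly, which is why the memory-constrained model can report high speedups at processor counts where the fixed-size model has stalled.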

  5. Scalability Metrics • Speedup [Plot: speedup vs. number of processors, showing the ideal (linear) and actual curves]
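
Since the speedup plot itself did not survive the transcript, the short sketch below shows how the "actual" curve on slide 5 is typically obtained: compute S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p from measured run times and compare against the ideal S(p) = p. The timings are made-up illustrative values.

    # Speedup and efficiency from measured run times (illustrative numbers only).
    measured = {1: 100.0, 2: 52.0, 4: 28.0, 8: 16.5, 16: 11.0}  # processors -> seconds

    t1 = measured[1]
    for p, tp in sorted(measured.items()):
        speedup = t1 / tp          # actual speedup S(p)
        efficiency = speedup / p   # efficiency E(p)
        print(f"p={p:2d}  ideal={p:2d}  S(p)={speedup:6.2f}  E(p)={efficiency:4.2f}")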

  6. Scalability Metrics • Isoefficiency • Generalization of Flatt & Kennedy's results • Relation of problem size to the maximum number of processors that can be used in a cost-optimal fashion • A parallel system is cost-optimal iff pT_p = O(W). • A parallel system is scalable iff its isoefficiency function exists. If W needs to grow exponentially with respect to p, the parallel system is poorly scalable. Conversely, if W grows nearly linearly with p, the parallel system is highly scalable.
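As a worked illustration of the isoefficiency relation on slide 6, the sketch below uses the textbook example of adding n numbers on p processors, where the total overhead is roughly T_o(W, p) = 2 p log2(p). Holding efficiency E = W / (W + T_o) constant gives W = K * T_o with K = E / (1 - E); the overhead model and the target efficiency are assumptions for illustration, not values from the lecture.

    import math

    def required_problem_size(p: int, target_efficiency: float) -> float:
        """Smallest W that keeps E = W / (W + T_o) at the target: W = K * T_o."""
        k = target_efficiency / (1.0 - target_efficiency)
        overhead = 2.0 * p * math.log2(p) if p > 1 else 0.0  # assumed T_o for adding n numbers
        return k * overhead

    for p in (2, 8, 64, 1024):
        print(f"p={p:5d}  W needed for E=0.8: {required_problem_size(p, 0.8):12.0f}")

Here the required problem size grows as Theta(p log p), only slightly faster than linearly in p, so by the criterion on this slide the system is highly scalable.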

  7. Scalability Metrics • Isoefficiency (Gupta & Kumar) [Plot: problem size vs. number of processors, with regions labeled memory limit, communication overhead, convergence rate degradation, and speedup]

  8. Scalability Metrics • Effectiveness (MSSU-EIRS-ERC-97-6)
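The effectiveness formula from the cited report is not reproduced in this transcript, so the following sketch only shows one commonly used effectiveness definition, F(p) = S(p) / (p * T(p)) = E(p) / T(p), i.e., speedup per unit cost; it is an assumption for illustration and not necessarily the "optimal effectiveness" formulation of MSSU-EIRS-ERC-97-6.

    def effectiveness(t1: float, tp: float, p: int) -> float:
        """Common effectiveness metric: efficiency per unit run time (assumed definition)."""
        speedup = t1 / tp
        efficiency = speedup / p
        return efficiency / tp  # equivalently speedup / (p * tp)

    print(effectiveness(t1=100.0, tp=16.5, p=8))  # illustrative timings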

  9. Scalability … • A scalability metric for a parallel system should reflect the interaction between the communication patterns of the application and the architecture of the parallel machine. • Scaling based only on problem size represents just a slice of the scalability surface, so a scalability metric should take into account other parameters involving the architecture, the algorithm, and the application. • A scalability metric should give us information not only about whether a parallel system is scalable or not, but also about the specific values and conditions for scalability.

  10. References • J. Gustafson, “Reevaluating Amdahl’s Law,” Communications of the ACM, 31(5):532-533, 1988. • H. Flatt & K. Kennedy, “Performance of parallel processors,” Parallel Computing, 12:1-20, 1989. • A. Gupta & V. Kumar, “Performance properties of large scale parallel systems,” Journal of Parallel and Distributed Computing, 19:234-244, 1993. • X. Sun & D. Rover, “Scalability of parallel algorithm-machine combinations,” IEEE Transactions on Parallel and Distributed Systems, 5(6):599-613, 1994. • E.A. Luke, I. Banicescu & J. Li, “The optimal effectiveness metric for parallel application analysis,” Technical Report MSSU-EIRS-ERC-97-6, Mississippi State University, 1997.
