  1. CS4402 – Parallel Computing Lecture 1: • Classification of Parallel Computers • Classification of Parallel Computation • Important Laws of Parallel Computation

  2. How I used to make breakfast…

  3. How to set the family to work...

  4. How I finally got to the office in time….

  5. What is Parallel Computing? In the simplest sense, parallel computing is the simultaneous use of multiple computing resources to solve a problem. Parallel computing is the solution for "Grand Challenge Problems": • weather and climate • biology, the human genome • chemical and nuclear reactions Parallel computing is a necessity for some commercial applications: • parallel databases, data mining • computer-aided diagnosis in medicine Ultimately, parallel computing is an attempt to minimize time.

  6. Grand Challenge Problems

  7. List of Supercomputers • Find this information at http://www.top500.org/

  8. Reason 1: Speedup

  9. Reason 2: Economy • Resources are already available: take advantage of non-local resources. • Cost savings: use multiple "cheap" computing resources instead of paying for time on a supercomputer; a parallel system is cheaper than a better processor. • Single processors face limits: transmission speeds, limits to miniaturization, economic limitations.

  10. Reason 3: Scalability

  11. Types of || Computers Parallel computers are classified • by hardware: shared memory, distributed memory, hybrid memory • by software model: SIMD, MIMD

  12. The Banking Analogy • Tellers: parallel processors • Customers: tasks • Transactions: operations • Accounts: data

  13. Vector/Array • Each teller/processor gets a very fine-grained task • Use pipeline parallelism • Good for handling batches when operations can be broken down into fine-grained stages
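
As a back-of-the-envelope illustration of pipeline parallelism (not from the slides): a k-stage pipeline finishes n items in roughly k + n - 1 stage-times instead of k·n, so speedup approaches k for large batches. A minimal sketch in C, where the stage count k = 5 is an illustrative assumption:

```c
#include <stdio.h>

/* Idealized pipeline timing model: k stages, one stage-time per step.
 * Sequential time: k*n. Pipelined time: k + n - 1 (fill the pipe,
 * then one result per step). k and the batch sizes are assumptions. */
int main(void) {
    int k = 5; /* assumed number of pipeline stages */
    for (int n = 1; n <= 1000; n *= 10) {
        double seq  = (double)k * n;
        double pipe = k + n - 1.0;
        printf("n = %4d  speedup = %.2f (limit %d)\n", n, seq / pipe, k);
    }
    return 0;
}
```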

  14. SIMD (Single-Instruction-Multiple-Data) • All processors do the same thing or idle • Phase 1: data partitioning and distribution • Phase 2: data-parallel processing • Efficient for big, regular data sets
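
A hedged sketch of the two phases in C with MPI (an SPMD approximation of the SIMD style, not code from the lecture); the chunk size and the doubling operation are illustrative assumptions:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Phase 1: the root partitions an array and distributes the pieces.
 * Phase 2: every rank applies the same operation to its own piece. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_rank = 4;              /* illustrative chunk size */
    int n = per_rank * size;
    double *data = NULL;
    if (rank == 0) {                     /* root owns the full array */
        data = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) data[i] = i;
    }

    double local[per_rank];              /* C99 VLA, one chunk per rank */
    MPI_Scatter(data, per_rank, MPI_DOUBLE,
                local, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < per_rank; i++)   /* same instruction everywhere */
        local[i] *= 2.0;

    MPI_Gather(local, per_rank, MPI_DOUBLE,
               data, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("data[n-1] = %.1f\n", data[n - 1]);
        free(data);
    }
    MPI_Finalize();
    return 0;
}
```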

  15. Systolic Array • Combination of SIMD and Pipeline parallelism • 2-d array of processors with memory at the boundary • Tighter coordination between processors • Achieve very high speeds by circulating data among processors before returning to memory

  16. MIMD (Multi-Instruction-Multiple-Data) • Each processor (teller) operates independently • Needs a synchronization mechanism: • by message passing • or mutual exclusion (locks) • Best suited for large-grained problems, coarser than data-flow parallelism (see the lock sketch below)
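
A minimal, hedged illustration of the mutual-exclusion option in C with POSIX threads (the shared counter, thread count, and iteration count are invented for the example; compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs its own instruction stream (MIMD) and synchronizes
 * only when touching shared state, here through a mutex. */
#define NTHREADS 4
#define NITERS   100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);   /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock); /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, work, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
    return 0;
}
```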

  17. Important Laws of || Computing.

  18. Important Consequences Amdahl's Law gives the speedup on n processors when a fraction f of the computation is serial: S(n) = 1 / (f + (1 - f)/n). • f = 0 (no serial part) ⇒ S(n) = n, perfect speedup. • f = 1 (everything serial) ⇒ S(n) = 1, no parallel gain.

  19. Important Consequences • S(n) increases as n increases. • S(n) decreases as f increases.

  20. Important Consequences No matter how many processors are used, the speedup cannot increase above 1/f. Examples: • f = 5% ⇒ S(n) < 20 • f = 10% ⇒ S(n) < 10 • f = 20% ⇒ S(n) < 5.
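
A quick numeric check of these bounds (a sketch, not course material; the processor counts 100 and 10000 are arbitrary):

```c
#include <stdio.h>

/* Amdahl's Law: S(n) = 1 / (f + (1 - f)/n), bounded above by 1/f. */
static double amdahl(double f, double n) {
    return 1.0 / (f + (1.0 - f) / n);
}

int main(void) {
    double fs[] = {0.05, 0.10, 0.20};
    for (int i = 0; i < 3; i++) {
        double f = fs[i];
        printf("f = %2.0f%%: S(100) = %6.2f, S(10000) = %6.2f, bound 1/f = %2.0f\n",
               100 * f, amdahl(f, 100), amdahl(f, 10000), 1.0 / f);
    }
    return 0;
}
```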

  21. Gustafson’s Law - More

  22. Gustafson’s Speed-up When s + p = 1, the scaled speedup is S(n) = s + p·n = n + (1 - n)·s. Important Consequences: • S(n) increases as n increases. • S(n) decreases as s increases. • There is no upper bound for the speedup.
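
A hedged sketch evaluating Gustafson's scaled speedup (the serial fraction s = 0.05 is an assumption for illustration); unlike Amdahl's fixed-size bound of 1/f, S(n) keeps growing with n:

```c
#include <stdio.h>

/* Gustafson's Law: S(n) = n + (1 - n) * s, where s is the serial
 * fraction of the scaled parallel run. No upper bound as n grows. */
int main(void) {
    double s = 0.05; /* assumed serial fraction */
    for (int n = 1; n <= 1024; n *= 4)
        printf("n = %4d  S(n) = %.2f\n", n, n + (1.0 - n) * s);
    return 0;
}
```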

  23. To read: • John L. Gustafson, Re-evaluating Amdahl's Law, http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html • Yuan Shi, Re-evaluating Amdahl's and Gustafson’s Laws, http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html • Wilkinson’s book: • the sections on the laws of parallel computing • the sections on types of parallel machines and computation