

  1. CS 470/570: Introduction to Parallel and Distributed Computing

  2. Lecture 1 Outline • Course Details • General Issues in Parallel Software Development • Parallel Architectures • Parallel Programming Models

  3. General Information • Instructor: Thomas Gendreau • Office: 211 Wing • Office Phone: 785-6813 • Office Hours: • Monday, Wednesday, Friday 2:15-3:15 • Thursday 1:00-3:00 • Feel free to see me at times other than my office hours • email: gendreau@cs.uwlax.edu • web: cs.uwlax.edu/~gendreau • Textbook: An Introduction to Parallel Programming by Peter S. Pacheco

  4. Grading • 240 points: 12 quizzes (best 12 out of 14) • 100 points: Assignments • 100 points: Cumulative final exam • 4:45-6:45 Wednesday December 17 • 440 total points • A quiz will be given in class every Friday and on Wednesday November 21st. No makeup quizzes will be given. Unusual situations such as multi-week illness will be handled on an individual basis.

  5. General Issues/Terminology • Identify possible parallelism • Match amount of parallelism to the architecture • Scalability • Synchronization • Mutual exclusion • Choose a parallel programming model • Performance • Speedup • Amdahl’s law • Data distribution
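For reference, Amdahl's law mentioned above can be stated as follows (the notation is mine, not the slide's): if a fraction f of a program's work can be parallelized and the program runs on p processors, the speedup is bounded by

```latex
S(p) = \frac{1}{(1 - f) + f/p}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{1 - f}
```

For example, with f = 0.9 the speedup can never exceed 10, no matter how many processors are used.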

  6. General Issues/Terminology • Why study parallel programming now? • Evolution of computer architecture • Computer networks as potential parallel computers • Manage large data sets • Solve computationally difficult problems

  7. General Issues/Terminology • Parallelism in computer systems • Multiprogrammed OS • Instruction level parallelism (pipelines) • Vector processing • Multiprocessor machines • Multicores • Networks of processors

  8. Parallel Architectures • Classic Taxonomy (Flynn) • Single Instruction Single Data Stream: SISD • Single Instruction Multiple Data Streams: SIMD • Multiple Instructions Multiple Data Streams: MIMD

  9. SIMD • Single control unit • Multiple ALUs • All ALUs execute the same instruction on local data

  10. MIMD • Multiple control units and ALUs • Separate stream of control on each processor • Possible organizations • Shared Physical Address Space • Uniform or Non-uniform Memory Access • Cache Coherence issues • Distributed Physical Address Space • Network of Computers

  11. Parallel Programming Models • How is parallelism expressed? • Process • Task • Thread • Implicitly • How is information shared among parallel actions? • How are parallel actions synchronized?

  12. Parallel Programming Models • Shared Address Space Programming • Shared variables • Message Based Programming • Send/receive values among cooperating processes • Programming models are independent of the underlying architecture • SPMD • Single Program Multiple Data

  13. Shared Address Space Programming • Processes • Threads • Directive Model

  14. Unix Processes • Heavyweight processes • Requires the creation of a copy of the parent’s data space • Process creation has high overhead • Example functions • fork • waitpid
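A minimal sketch of the two functions named above (illustrative only, not from the lecture):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* copies the parent's data space */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                  /* child executes this branch */
        printf("child: pid %d\n", (int)getpid());
        exit(EXIT_SUCCESS);
    }
    int status;
    waitpid(pid, &status, 0);        /* parent blocks until the child exits */
    printf("parent: child %d done\n", (int)pid);
    return 0;
}
```

The copy is what makes process creation heavyweight: the parent's data space must (at least logically) be duplicated for the child.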

  15. Threads • Lightweight process • Does not require the creation of a copy of the parent’s data space • Example pthread functions • pthread_create • pthread_join
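A minimal sketch of pthread_create and pthread_join (illustrative; compile with a flag such as gcc's -pthread):

```c
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    int id = *(int *)arg;            /* the thread shares the parent's data space */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 0;
    /* create one thread running worker(&id) */
    if (pthread_create(&tid, NULL, worker, &id) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);         /* wait for the thread to terminate */
    return 0;
}
```

Because no data space is copied, thread creation is far cheaper than fork, but shared variables now require synchronization.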

  16. Directive Based Parallel Programming • Directives in source code tell compiler where parallelism should be used • Threads are implicitly created • OpenMP • Compiler directives • #pragma omp construct clause • #pragma omp parallel for
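A minimal sketch of the parallel for directive named above (compile with an OpenMP flag such as gcc's -fopenmp):

```c
#include <stdio.h>

int main(void) {
    int a[8];
    #pragma omp parallel for         /* iterations split among implicitly created threads */
    for (int i = 0; i < 8; i++)
        a[i] = i * i;
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```

If the directive is ignored (e.g., the program is compiled without OpenMP support), the code still runs correctly as sequential code, which is a key appeal of the directive model.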

  17. Message Based Parallel Programming • Send/Receive messages to share values • Synchronous communication • Asynchronous communication • Blocking • Non-blocking
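The slide names no specific library, but MPI is the standard realization of this model (and the one used in Pacheco's textbook). A minimal blocking send/receive sketch, assuming at least two processes (run with, e.g., mpiexec -n 2):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        /* blocking send: returns once the buffer can be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        /* blocking receive: returns once the value has arrived */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Note that this is also an example of the SPMD style from slide 12: every process runs the same program and branches on its rank.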
