Chapter 1 Parallel Machines and Computations (Fundamentals of Parallel Processing)



Presentation Transcript


  1. Chapter 1 Parallel Machines and Computations (Fundamentals of Parallel Processing) Dr. Ranette Halverson

  2. Overview • Goal: Faster Computers • Parallel Computers are one solution • Came – Gone – Coming again • Includes • Algorithms • Hardware • Programming Languages • The text integrates all three

  3. INTRODUCTION: Evolution of Parallel Architectures

  4. (Figure) Key elements of a computing system and their relationships

  5. Parallelism in Sequential Computers (sources of speedup) • Interrupts • I/O Processors • Multiprocessing • High-speed block transfers • Virtual Memory • Pipelining • Multiple ALUs • Optimizing Compilers

  6. Problems: Parallelism in SISD Pipelining • Jumps (conditional branches) break the instruction flow • Solutions • Lookahead • Multiple fetches • Good compilers (see the sketch below)
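A minimal C sketch (my addition, not from the slides) of why conditional jumps hurt a pipelined SISD machine, and the kind of branchless rewrite a good compiler aims for; the function names and data are illustrative only:

```c
#include <stddef.h>

/* Branchy version: the conditional jump inside the loop must be
 * predicted on every iteration; a misprediction flushes the pipeline. */
long sum_positive_branchy(const int *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (a[i] > 0)            /* conditional branch per element */
            sum += a[i];
    }
    return sum;
}

/* Branchless rewrite: the comparison becomes a 0/1 value, so the
 * compiler can emit straight-line arithmetic or a conditional move
 * and keep the pipeline full. */
long sum_positive_branchless(const int *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * (a[i] > 0);   /* no jump in the loop body */
    return sum;
}
```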

  7. Multiple ALUs • Resource conflict: two concurrent instructions need the same ALU or store a result to the same register • Data dependencies: one instruction needs the result of another • Race conditions (a sketch of dependent vs. independent operations follows below)
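A small C illustration (mine, not the book's) of the difference between a dependent chain, which extra ALUs cannot speed up, and independent operations, which they can:

```c
/* Dependent chain: each addition needs the previous result
 * (read-after-write), so a second ALU sits idle. */
int chain(int a, int b, int c, int d) {
    int t1 = a + b;
    int t2 = t1 + c;   /* must wait for t1 */
    int t3 = t2 + d;   /* must wait for t2 */
    return t3;         /* 3 sequential ALU steps */
}

/* Independent operations: the first two additions share no data,
 * so two ALUs can execute them in the same step. */
int tree(int a, int b, int c, int d) {
    int u1 = a + b;
    int u2 = c + d;    /* independent of u1 */
    return u1 + u2;    /* 2 steps instead of 3 */
}
```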

  8. Compiler Problems • The compiler tries to re-order instructions to achieve concurrency (parallelism) • Not easy to program • What about a compiler that takes a sequential program and creates code for a parallel computer?

  9. Flynn’s Categories: Based on Instruction & Data Streams • SI – Single Instruction stream • MI – Multiple Instruction streams • SD – Single Data stream • MD – Multiple Data streams

  10. Flynn’s 4 Categories SISD: • Traditional sequential computer • Von Neumann model SIMD: • One instruction operates on multiple data items • Vector computers • Each PC (processor) executes the same instruction but has its own data set • True vector computers must work this way • Others can “simulate” SIMD • Synchronous

  11. MIMD: • Multiple “independent” PCs • Each PC has its own instruction stream and its own data • They work “asynchronously,” but synchronization is usually needed periodically (see the sketch below) MISD: • Not really a useful model • MIMD can simulate MISD
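A modern shared-memory illustration of this pattern (mine, using OpenMP, which is not part of the text): each thread runs its own instruction stream on its own data, then all threads meet at a barrier before one of them combines the results. Compile with a flag such as -fopenmp.

```c
#include <stdio.h>
#include <omp.h>

#define N 8

int main(void) {
    int partial[N] = {0};

    #pragma omp parallel num_threads(N)
    {
        int id = omp_get_thread_num();
        partial[id] = id * id;   /* independent, asynchronous work */

        #pragma omp barrier      /* periodic synchronization point */

        #pragma omp single       /* one thread combines the results */
        {
            int sum = 0;
            for (int i = 0; i < N; i++) sum += partial[i];
            printf("sum of squares 0..%d = %d\n", N - 1, sum);
        }
    }
    return 0;
}
```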

  12. Evaluation of Expressions Exp = A+B+C+(D*E*F)+G+H Using an in-order traversal, a compiler generates code to evaluate Exp (the generated code is not reproduced in this transcript; see the sketch below).
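The following C sketch shows the strictly sequential evaluation that an in-order traversal of the parse tree would naturally produce; the temporary names are mine:

```c
/* Exp = A+B+C+(D*E*F)+G+H evaluated in simple left-to-right program
 * order, as a compiler might emit from an in-order traversal of the
 * parse tree. The additions form one long dependent chain, so the
 * computation offers little overlap for parallel hardware. */
double eval_sequential(double A, double B, double C, double D,
                       double E, double F, double G, double H) {
    double t1 = A + B;
    double t2 = t1 + C;
    double t3 = D * E;      /* product subexpression */
    double t4 = t3 * F;
    double t5 = t2 + t4;
    double t6 = t5 + G;
    double t7 = t6 + H;
    return t7;
}
```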

  13. Evaluation of Expressions Using the associativity and commutativity laws, the expression can be reordered by a compiler algorithm to generate code corresponding to a balanced tree (figure not reproduced). What is the significance of tree height? Height = 4: with enough ALUs, the tree height is the number of time steps needed, so this is the most parallel computation for the given expression (see the schedule sketched below).
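One possible height-4 grouping in C (the slide's tree may pair the operands differently): the same seven operations, arranged so that independent ones share a time step.

```c
/* Reassociated schedule for Exp = A+B+C+(D*E*F)+G+H. With enough
 * ALUs the whole expression finishes in 4 steps, matching the tree
 * height of 4. */
double eval_parallel(double A, double B, double C, double D,
                     double E, double F, double G, double H) {
    /* step 1: three independent operations */
    double s1 = A + B, s2 = C + G, m1 = D * E;
    /* step 2: two independent operations */
    double s3 = s1 + s2, m2 = m1 * F;
    /* step 3 */
    double s4 = s3 + H;
    /* step 4 */
    return s4 + m2;
}
```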

  14. SIMD (Vector) Computers Basis for vector computing • Loops!! • Iterations must be “independent” (see the loop sketch below) True SIMD: one CPU (control unit) + multiple ALUs, each with its own memory (can be shared memory) Pipelined SIMD: ALUs work in a pipelined manner, not independently
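A small C illustration (mine) of what "independent iterations" means for vector hardware:

```c
#include <stddef.h>

/* Vectorizable: each iteration is independent, so a SIMD machine
 * can apply the same add to many elements at once. */
void vadd(double *c, const double *a, const double *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* Not vectorizable as written: iteration i reads the value that
 * iteration i-1 just wrote (a loop-carried dependence). */
void prefix(double *a, size_t n) {
    for (size_t i = 1; i < n; i++)
        a[i] = a[i] + a[i - 1];
}
```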

  15. Evolution of Computer Architectures, continued: “True” Vector Processors • Single-Instruction Stream, Multiple-Data Stream (SIMD) • Multiple arithmetic units with a single control unit (Figure: a typical true SIMD computer architecture)

  16. Pipelined Vector Processors • Pipelined SIMD • Pipelined arithmetic units with shared memory (Figure: a typical pipelined SIMD computer architecture)

  17. MIMD (Multiprocessor) Computers 2 variants • Shared memory • Distributed memory (fixed connection) (see Figure 1-10a) • There is also a pipelined version (Figure 1-10b), called multithreaded computers, but we won’t study these in detail
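The distributed-memory variant communicates only by explicit messages. A minimal modern sketch of that style (mine, using MPI, which postdates the machines in Figure 1-10a); compile with mpicc and run with something like mpirun -np 2:

```c
#include <stdio.h>
#include <mpi.h>

/* Distributed-memory MIMD: each process owns its memory, and data
 * moves between processes only through explicit messages. */
int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                  /* local data */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```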

  18. Multiprocessors: Multiple-Instruction Stream, Multiple-Data Stream (MIMD) A. Multiple processors / multi-bank memory (Figure 1-10a)

  19. Multiprocessors: Multiple-Instruction Stream, Multiple-Data Stream (MIMD) B. Multiple (processor/memory) pairs and communication (Figure 1-10a)

  20. Pipelined Multiprocessors: Pipelined MIMD • Many instruction streams issue instructions into the pipe alternately (Figure 1-10b)

  21. Interconnection Networks • Physical connections among PCs or memories • Facilitate routing of data & synchronization • SIMD: sharing of data & results among ALUs • MIMD: defines the model of computing! • The network acts as a switch for transfers

  22. Application to Architecture • A different approach and/or solution is necessary for different architectures • As we have seen, some problems have obvious parallelism, others don’t

  23. Interconnection Network – MIMD • Topology/structure => performance • Performance is determined by the level of concurrency: how much communication can happen at once • More concurrency => more complexity => more cost • E.g., a bus (one shared link, transfers serialized) vs. a fully connected network (n(n-1)/2 links, all pairs concurrent); covered in Chapter 6 (link counts sketched below)
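A tiny C program (mine) working out the link counts at the two extremes; the quadratic growth of the fully connected case is why real machines use intermediate topologies:

```c
#include <stdio.h>

/* Link counts: a single shared bus vs. a dedicated link between
 * every pair of n processors, which needs n*(n-1)/2 links. */
int main(void) {
    for (int n = 2; n <= 64; n *= 2) {
        int bus  = 1;                 /* one shared link, no concurrency */
        int full = n * (n - 1) / 2;   /* all pairs, maximal concurrency  */
        printf("n = %2d  bus links = %d  fully connected links = %d\n",
               n, bus, full);
    }
    return 0;
}
```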

  24. Pseudocode Conventions • Sequential: similar to Pascal or C • SIMD • MIMD • Conventions are necessary for indicating parallelism, and for compilers (an illustrative mapping follows below)
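The book's own notation is not reproduced in this transcript. As a hedged illustration only, a "forall"-style data-parallel loop from such pseudocode (the hypothetical line forall i in 0..N-1 do c[i] := a[i]+b[i]; the book's actual convention may differ) maps naturally onto an OpenMP parallel-for in C:

```c
#include <stdio.h>
#include <omp.h>

#define N 16

int main(void) {
    double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    #pragma omp parallel for        /* iterations run concurrently,
                                       like a pseudocode forall */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[%d] = %g\n", N - 1, c[N - 1]);
    return 0;
}
```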
